

Cybersecurity Research

Help Me Benchmark Web Crypto: A Community Research Project

Cover image: a futuristic dashboard showing performance graphs for different browser logos and cryptographic algorithm names.

Abstract

This article launches a community-powered research project to analyze the real-world performance of cryptographic hashing (SHA-256, SHA-384, SHA-512) in modern web browsers. I've built an intelligent, one-click benchmarking tool that collects anonymous data on browser, OS, and hardware performance. By contributing your results, you can help quantify the impact of different JavaScript engines (V8, SpiderMonkey, JavaScriptCore), hardware (desktop vs. mobile), and power states on these critical web security primitives. The goal is to create the most comprehensive, realistic dataset on web crypto speed to date, with all findings and raw data to be published in a follow-up analysis.

Backstory

While implementing Content Security Policy hash verification for this website, I noticed significant performance variance across different browsers and devices. A simple SHA-256 hash that took 2ms on my desktop was taking 15ms on some mobile devices—enough to cause noticeable delays during page load. When I searched for real-world benchmarking data to understand these performance characteristics, I found only synthetic lab tests that didn't account for thermal throttling, battery states, or the diversity of actual user devices. I realized the web security community needed a comprehensive, crowdsourced dataset that reflects how cryptographic primitives actually perform in production environments. This project is my attempt to fill that gap while demonstrating best practices for high-precision browser benchmarking.

I’m launching a new research project to answer a question that’s fundamental to web performance and security, and I need your help. How fast are modern browsers at cryptographic hashing? Not in a synthetic lab, but on the countless real-world devices your users—and you—use every day.

This research is about moving beyond simple tests and building a “living benchmark” powered by the global tech community. The data we gather will shed light on practical questions that affect developers, security engineers, and performance enthusiasts.

Why Does This Matter?

Hashing is a silent workhorse of the web. It’s a key part of Content Security Policy (CSP) for verifying inline scripts, the foundation of Subresource Integrity (SRI), and a building block for countless other security mechanisms. But this security has a performance cost, and my goal is to quantify that cost across the ecosystem with the highest possible precision.

This research aims to answer questions like:

  1. Browser Engine Wars: How do Chrome’s V8, Firefox’s SpiderMonkey, and Safari’s JavaScriptCore stack up in pure cryptographic performance?
  2. Hardware & Platform Impact: How much faster is a new M-series MacBook than a mid-range Android phone when accounting for real-world constraints like thermal throttling?
  3. The Power State Question: Does running on battery power significantly throttle performance for these intensive tasks?

By collecting data from a wide variety of devices, we can build a public, data-driven picture of the web’s real-world crypto performance.

How You Can Contribute

I’ve created a simple, user-friendly benchmarking tool to make participation frictionless. All it takes is a single click.

Here’s how it works:

  1. Close other CPU-intensive apps and tabs for the most accurate results.
  2. Click “Start Benchmark” below. The test is thorough, and its duration adapts to your device (approx. 30–40 seconds on mobile, 90 seconds on desktop).
  3. Review the results, then click “Submit” to send them anonymously to our collector.

Browser Hash Speed Benchmark

This widget runs a short on-device test (SHA-256/384/512). No personal data or cookies are collected, and you can review all results before submitting anonymously.

Benchmark Controls

Benchmark results table columns: Algorithm, Size, MoM (ms), 95% CI (ms), Median (ms), IQR (ms), Stable, Remediation Attempts.

That’s it. Your contribution is now part of a high-quality, public dataset.

A Lab-Grade Methodology

For those interested in the technical details, this is not a simple loop. The benchmark is engineered to produce exceptionally low-noise, repeatable results.

  • Environment-Aware Engine: The benchmark intelligently adapts to your device. It detects mobile environments and applies a more conservative testing profile to prevent thermal throttling. On mobile, it even inserts cooldown pauses between heavy tasks to ensure the results reflect sustained performance, not just an initial burst.
  • Mandatory Isolated Environment: The benchmark requires a modern browser and a “cross-origin isolated” environment. This unlocks high-resolution timers and stronger process isolation, which are critical for accurate sub-millisecond measurements.
  • Zero-Copy Data Channel: All high-frequency timing data is streamed from a Web Worker to the main thread via a SharedArrayBuffer. This “silent” communication channel eliminates the measurement interference caused by standard postMessage calls.
  • Data Integrity Safeguards: The test automatically aborts if you switch tabs, preventing the collection of invalid data from a throttled process.
  • Robust Statistics: The final numbers aren’t simple averages. We use robust statistical estimators like Median-of-Means and Bootstrap Confidence Intervals to provide a more accurate picture of performance, resistant to system noise. The engine also intelligently re-runs unstable tests to improve data quality.
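For readers unfamiliar with these estimators, here is a simplified illustration of Median-of-Means and a percentile bootstrap CI; the project’s actual group counts and resampling scheme may differ:

```javascript
// Simplified versions of the robust estimators described above.
function medianOf(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

// Median-of-Means: split samples into k groups, average each group,
// take the median of the group means. Resistant to a few extreme
// outliers (e.g. a GC pause landing in one group).
function medianOfMeans(samples, k = 5) {
  const groupSize = Math.ceil(samples.length / k);
  const means = [];
  for (let i = 0; i < samples.length; i += groupSize) {
    const group = samples.slice(i, i + groupSize);
    means.push(group.reduce((a, b) => a + b, 0) / group.length);
  }
  return medianOf(means);
}

// Percentile bootstrap: resample with replacement to estimate a 95%
// confidence interval for the median without assuming a distribution.
function bootstrapCI(samples, resamples = 1000) {
  const medians = [];
  for (let r = 0; r < resamples; r++) {
    const draw = Array.from(samples, () =>
      samples[Math.floor(Math.random() * samples.length)]);
    medians.push(medianOf(draw));
  }
  medians.sort((a, b) => a - b);
  return [medians[Math.floor(0.025 * resamples)],
          medians[Math.floor(0.975 * resamples)]];
}
```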

This approach minimizes common benchmarking pitfalls and aims to capture a true picture of the browser’s hashing throughput.
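You can check from the browser console whether your own session meets the isolation requirement described above. The helper name is mine; the header values are the standard pair that enables cross-origin isolation:

```javascript
// Quick environment check for the prerequisites the benchmark needs.
// crossOriginIsolated is true only when the page is served with:
//   Cross-Origin-Opener-Policy: same-origin
//   Cross-Origin-Embedder-Policy: require-corp  (or credentialless)
// Isolation unlocks SharedArrayBuffer and finer performance.now() resolution.
function checkBenchmarkEnvironment() {
  return {
    isolated: globalThis.crossOriginIsolated === true,
    sharedArrayBuffer: typeof SharedArrayBuffer !== 'undefined',
    highResTimer: typeof performance !== 'undefined' &&
                  typeof performance.now === 'function',
  };
}
```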

What Data is Collected?

The process is designed to be transparent and anonymous. The script collects only non-identifiable performance and hardware data:

  • Performance Metrics: Operations per second, Median-of-Means, Bootstrap 95% Confidence Intervals, and other quality metrics for each algorithm and data size.
  • Browser & OS: User agent string (e.g., Chrome 127, Windows 11).
  • Hardware Specs: CPU core count (navigator.hardwareConcurrency), device memory, and high-entropy client hints where available (like device model, e.g., “iPhone 15 Pro”).
  • Power State: Battery level and whether the device is currently charging.
  • Anonymized Run Metadata: A unique, randomly generated ID for your browser (anonId) and for each specific benchmark run (runId) are collected to prevent duplicate submissions. The version of the benchmark script is also included.

The data collector is designed with privacy as its highest priority. We do not store IP addresses, and no cookies are used. High-entropy debug data is stripped from all public submissions. The goal is strictly to analyze hardware and software trends, not to track individuals.

The complete source code, methodology, and security policy are available for public review in the official GitHub Repository.

See the Results So Far

This is a living project, and you can see our progress toward our research goals.

Total Submissions Received:

A full analysis and downloadable raw dataset will be published once we reach our initial goal of approximately 200 submissions for each major browser (Chrome, Firefox, Safari) on both desktop and mobile platforms. Thank you for helping us get there!

The End Goal: A Public Analysis

After we’ve gathered a substantial dataset, I will perform a full analysis and publish the findings right here on this blog. The article will feature clear visualizations and conclusions that answer the questions posed earlier.

Furthermore, I will make the complete, anonymized raw dataset available for download, so that others can perform their own analysis and explorations.

Thank you for being a part of this community-driven research. Let’s go find some answers!

Support My Work

If this material saved you time or sparked new ideas, consider supporting my work.

Tip me on Ko-fi

The quickest way to show your appreciation

For privacy-focused donations, you can support me using cryptocurrency. Click any address to copy.

Bitcoin (BTC)

Network: Bitcoin

bc1q99evmn80nmfk8vyxs2emc9t4a4k2pdmkjlwah4
Ethereum (ETH)

Network: Ethereum

0x81bF6f880D0010F47830cbF01c0F3C8a6E825Cc3
BNB (Binance Coin)

Network: BNB Smart Chain (BSC)

0x81bF6f880D0010F47830cbF01c0F3C8a6E825Cc3
USDT (Tether)

Network: TRON

TWnLLRVX9NgZAwD5LhrzvnEfg69jxEAGgA
Monero (XMR)

Network: Monero

88MbtU2R1ufG2BcnCrkqmn3oYxvvKKR1u2yb6YeZZrQV8akocEnrHrrhzoGowkijRpLsAzTjGczfEPdL9wyzrotLTSXbEg6
Solana (SOL)

Network: Solana

BBTdnfdojXzifJaV4CG8LcsNiZpfvva5o9cCpbS3Esmg
Bitcoin Cash (BCH)

Network: Bitcoin Cash

bitcoincash:qqhv3qtsldf0w8cjzzqyame35urs0cf2xg92s2chf0
Litecoin (LTC)

Network: Litecoin

ltc1qy6wkwtlmzt0kngx8wrgt6x5v5nn2s7wqyvh23m
TRX (TRON)

Network: TRON

TWnLLRVX9NgZAwD5LhrzvnEfg69jxEAGgA

David Osipov

Innovative Product Leader in B2B SaaS, specializing in AI-driven solutions, data-powered enterprise IT products, and secure SaaS platforms. Alumnus of GSOM SPbU. Active OpenStreetMap cartographer in Georgia and citizen scientist. Wikidata: Q130604188, ISNI: 0000 0005 1802 960X.
