I’m launching a new research project to answer a question that’s fundamental to web performance and security, and I need your help. How fast are modern browsers at cryptographic hashing? Not in a synthetic lab, but on the countless real-world devices your users—and you—use every day.
This research is about moving beyond simple tests and building a “living benchmark” powered by the global tech community. The data we gather will shed light on practical questions that affect developers, security engineers, and performance enthusiasts.
Why Does This Matter?
Hashing is a silent workhorse of the web. It’s how Content Security Policy (CSP) verifies inline scripts, the foundation of Subresource Integrity (SRI), and a building block of countless other security mechanisms. But this security has a performance cost. My goal is to quantify that cost across the ecosystem with the highest possible precision.
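To make that cost concrete: in-browser hashing goes through the Web Crypto API, and timing a single digest looks roughly like this. This is a minimal sketch; the payload size and fill pattern are illustrative, not the benchmark’s actual workload:

```js
// Minimal sketch: timing one SHA-256 digest via the Web Crypto API.
(async () => {
  // 1 MiB of deterministic filler (crypto.getRandomValues caps out at 64 KiB per call).
  const data = new Uint8Array(1 << 20).fill(0xab);
  const t0 = performance.now();
  await crypto.subtle.digest('SHA-256', data);
  const t1 = performance.now();
  console.log(`SHA-256 over ${data.byteLength} bytes: ${(t1 - t0).toFixed(3)} ms`);
})();
```

You can paste this into any browser console to get a feel for the numbers; the real benchmark, of course, does far more to control for noise.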
This research aims to answer questions like:
- Browser Engine Wars: How do Chrome’s V8, Firefox’s SpiderMonkey, and Safari’s JavaScriptCore stack up in pure cryptographic performance?
- Hardware & Platform Impact: How much faster is a new M-series MacBook than a mid-range Android phone when accounting for real-world constraints like thermal throttling?
- The Power State Question: Does running on battery power significantly throttle performance for these intensive tasks?
By collecting data from a wide variety of devices, we can build a public, data-driven picture of the web’s real-world crypto performance.
How You Can Contribute
I’ve created a simple, user-friendly benchmarking tool to make participation frictionless. All it takes is a single click.
Here’s how it works:
- Close other CPU-intensive apps and tabs for the most accurate results.
- Click “Start Benchmark” below. The test is very thorough and its duration is adapted to your device (approx. 30-40 seconds on mobile, 90 seconds on desktop).
- Review the results, then click “Submit” to send them anonymously to our collector.
Browser Hash Speed Benchmark
This runs a short on-device test (SHA-256/384/512). No personal data or cookies are collected. You can review the results before submitting anonymously.
| Algorithm | Size | MoM (ms) | 95% CI (ms) | Median (ms) | IQR (ms) | Stable | Remediation Attempts |
|---|---|---|---|---|---|---|---|
Hint: scroll the table horizontally to see all columns.
That’s it. Your contribution is now part of a high-quality, public dataset.
A Lab-Grade Methodology
For those interested in the technical details, this is not a simple loop. The benchmark is engineered to produce exceptionally low-noise, repeatable results.
- Environment-Aware Engine: The benchmark intelligently adapts to your device. It detects mobile environments and applies a more conservative testing profile to prevent thermal throttling. On mobile, it even inserts cooldown pauses between heavy tasks to ensure the results reflect sustained performance, not just an initial burst.
- Mandatory Isolated Environment: The benchmark requires a modern browser and a “cross-origin isolated” environment. This unlocks high-resolution timers and stronger process isolation, which are critical for accurate sub-millisecond measurements (the first sketch after this list shows such a check).
- Zero-Copy Data Channel: All high-frequency timing data is streamed from a Web Worker to the main thread via a SharedArrayBuffer. This “silent” communication channel eliminates the measurement interference caused by standard `postMessage` calls (see the second sketch below).
- Data Integrity Safeguards: The test automatically aborts if you switch tabs, preventing the collection of invalid data from a throttled process; this guard appears in the first sketch below.
- Robust Statistics: The final numbers aren’t simple averages. We use robust statistical estimators, Median-of-Means and bootstrap confidence intervals, to provide a more accurate picture of performance that resists system noise. The engine also intelligently re-runs unstable tests to improve data quality (see the final sketch below).
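To make the isolation and integrity guards concrete, here is a minimal sketch. The `crossOriginIsolated` property and the `visibilitychange` event are standard Web APIs; the `abortRun` helper is hypothetical:

```js
// Sketch of the environment and integrity guards (not the project's actual code).
// crossOriginIsolated is true only when the server sends
// Cross-Origin-Opener-Policy: same-origin and Cross-Origin-Embedder-Policy: require-corp,
// which also unlocks SharedArrayBuffer and finer-grained performance.now() resolution.
if (!crossOriginIsolated) {
  throw new Error('This benchmark needs a cross-origin isolated context.');
}

// Abort if the tab is backgrounded: browsers throttle hidden tabs,
// which would silently corrupt the timing data.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    abortRun(); // hypothetical helper: cancel the worker, discard partial results
  }
});
```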
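The zero-copy channel can be approximated as follows; the worker file name `bench-worker.js` and the slot layout are invented for illustration:

```js
// main.js (sketch): allocate shared memory and hand it to the worker.
const SLOTS = 1024;
const sab = new SharedArrayBuffer(SLOTS * Float64Array.BYTES_PER_ELEMENT);
const worker = new Worker('bench-worker.js'); // hypothetical file name
worker.postMessage(sab); // the buffer itself is shared, not copied

// bench-worker.js (sketch): write each timing sample straight into shared memory,
// so no postMessage traffic (and no structured-clone work) happens mid-run.
onmessage = ({ data }) => {
  const samples = new Float64Array(data);
  for (let i = 0; i < samples.length; i++) {
    const t0 = performance.now();
    // ... run one hashing iteration here ...
    samples[i] = performance.now() - t0;
  }
  postMessage('done'); // a single message after the timed region ends
};
```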
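Finally, a compact sketch of the two estimators named above. The block count and resample count are illustrative defaults, not the project’s tuned values:

```js
const median = (xs) => {
  const s = [...xs].sort((a, b) => a - b);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
};

// Median-of-Means: split the samples into k blocks, average each block,
// then take the median of the block means. Far more robust to outlier
// spikes (GC pauses, OS interrupts) than a plain mean.
function medianOfMeans(samples, k = 8) {
  const size = Math.ceil(samples.length / k);
  const means = [];
  for (let i = 0; i < samples.length; i += size) {
    const block = samples.slice(i, i + size);
    means.push(block.reduce((a, b) => a + b, 0) / block.length);
  }
  return median(means);
}

// Percentile bootstrap 95% CI: resample with replacement many times,
// recompute the estimator, and read off the 2.5th and 97.5th percentiles.
function bootstrapCI(samples, iterations = 1000) {
  const estimates = [];
  for (let i = 0; i < iterations; i++) {
    const resample = Array.from(samples, () => samples[(Math.random() * samples.length) | 0]);
    estimates.push(medianOfMeans(resample));
  }
  estimates.sort((a, b) => a - b);
  return [estimates[Math.floor(0.025 * iterations)], estimates[Math.floor(0.975 * iterations)]];
}
```

At around 1,000 resamples this is cheap enough to run in the browser at the end of a benchmark pass.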
This approach minimizes common benchmarking pitfalls and aims to capture a true picture of the browser’s hashing throughput.
What Data is Collected?
The process is designed to be transparent and anonymous. The script collects only non-identifiable performance and hardware data:
- Performance Metrics: Operations per second, Median-of-Means, Bootstrap 95% Confidence Intervals, and other quality metrics for each algorithm and data size.
- Browser & OS: User agent string (e.g., Chrome 127, Windows 11).
- Hardware Specs: CPU core count (`navigator.hardwareConcurrency`), device memory, and high-entropy client hints where available (such as the device model, e.g., “iPhone 15 Pro”); a collection sketch follows this list.
- Power State: Battery level and whether the device is currently charging.
- Anonymized Run Metadata: A unique, randomly generated ID for your browser (`anonId`) and one for each specific benchmark run (`runId`), collected to prevent duplicate submissions. The version of the benchmark script is also included.
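For the curious, metadata like this can be gathered with standard Web APIs. Note that `navigator.deviceMemory`, `navigator.userAgentData`, and `navigator.getBattery()` are not available in every browser, so the sketch feature-detects each one; the field names in the returned object are invented for illustration:

```js
// Rough sketch of metadata collection (not the project's actual code).
async function collectMetadata() {
  const meta = {
    cores: navigator.hardwareConcurrency ?? null,
    deviceMemory: navigator.deviceMemory ?? null, // GiB buckets; Chromium-based browsers only
    userAgent: navigator.userAgent,
  };

  // High-entropy client hints (Chromium only); must be requested asynchronously.
  if (navigator.userAgentData) {
    const hints = await navigator.userAgentData.getHighEntropyValues(
      ['model', 'platformVersion', 'architecture']
    );
    meta.model = hints.model || null;
  }

  // Battery Status API: level and charging state, where supported.
  if (navigator.getBattery) {
    const battery = await navigator.getBattery();
    meta.batteryLevel = battery.level;
    meta.charging = battery.charging;
  }
  return meta;
}
```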
The data collector is designed with privacy as its highest priority. We do not store IP addresses, and no cookies are used. High-entropy debug data is stripped from all public submissions. The goal is strictly to analyze hardware and software trends, not to track individuals.
The complete source code, methodology, and security policy are available for public review in the official GitHub Repository.
See the Results So Far
This is a living project, and you can see our progress toward our research goals.
Total Submissions Received: …
A full analysis and downloadable raw dataset will be published once we reach our initial goal of approximately 200 submissions for each major browser (Chrome, Firefox, Safari) on both desktop and mobile platforms. Thank you for helping us get there!
The End Goal: A Public Analysis
After we’ve gathered a substantial dataset, I will perform a full analysis and publish the findings right here on this blog. The article will feature clear visualizations and conclusions that answer the questions posed earlier.
Furthermore, I will make the complete, anonymized raw dataset available for download, so that others can perform their own analysis and explorations.
Thank you for being a part of this community-driven research. Let’s go find some answers!

