StereumLabs Introduction
This site documents what StereumLabs measures, how we run measurements, how to read the results, and how to reproduce them.
Pages in this documentation are tagged for different Plans. The tags are displayed at the bottom of each page. Not all features are available in every plan; see the Plans page for detailed information on features and on access to metrics & logs.
1. Purpose & Scope
- Provide neutral, hardware-backed measurements of Ethereum client behavior across execution (EC) and consensus (CC) clients.
- Publish definitions, procedures, and limitations so others can reproduce or critique results.
- Cover bare-metal and cloud environments where feasible.
Out of scope: node management tooling, validator operations guidance, price/performance benchmarking of cloud providers beyond what’s necessary to interpret metrics.
2. Audience
- Client teams (EC/CC)
- Institutional node operators
- Researchers and ecosystem contributors
3. Status & Versioning
- Methodology and dashboards may change. All material changes are recorded in the Changelog.
- Active work and planned additions are tracked in the Roadmap.
- When a change affects metric meaning or comparability, the related pages include “Breaking change” notes.
4. Client Coverage
- Execution: Besu, Erigon, Ethrex, Geth, Nethermind, Reth
- Consensus: Grandine, Lighthouse, Lodestar, Nimbus, Prysm, Teku
5. Metrics Index (Abbreviated)
| Category | Examples (see glossary for precise definitions) |
|---|---|
| Resource | CPU %, RSS/bytes, disk IO ops/bytes, network throughput/packets |
| Protocol | Slot/epoch participation, missed proposals/attestations, reorg indicators |
| Connectivity | Peer count, peer churn, sync status |
| Performance | Block import time, execution latency, queue depths |
| Reliability | Error rates, restarts, uptime |
A detailed list of collected client-specific metrics is available at Client Metrics Lists. Unit conventions and aggregation rules are specified per metric.
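To make unit conventions and aggregation rules concrete, the sketch below averages raw CPU % samples into fixed windows. The sample layout, column meanings, and the 60 s window are assumptions chosen for illustration, not the StereumLabs schema; the per-metric definitions remain authoritative.

```python
# Illustrative only: averaging instantaneous CPU % samples into fixed windows.
# The (timestamp, cpu_percent) layout and the 60 s window are assumptions
# for this sketch, not the StereumLabs export schema.
from datetime import datetime, timezone
from statistics import mean

samples = [
    # (unix timestamp in seconds, instantaneous CPU %)
    (1_700_000_000, 12.5),
    (1_700_000_015, 14.0),
    (1_700_000_030, 13.2),
    (1_700_000_075, 40.1),  # falls into the next 60 s window
]

WINDOW_S = 60  # hypothetical aggregation window

def aggregate(samples, window_s=WINDOW_S):
    """Group samples into window_s buckets and average each bucket."""
    buckets = {}
    for ts, value in samples:
        bucket_start = ts - (ts % window_s)
        buckets.setdefault(bucket_start, []).append(value)
    return {
        datetime.fromtimestamp(start, tz=timezone.utc): mean(values)
        for start, values in sorted(buckets.items())
    }

for window_start, avg in aggregate(samples).items():
    print(f"{window_start.isoformat()}  avg CPU % = {avg:.1f}")
```

Counter-style metrics (e.g. disk IO ops) would typically be converted to rates before any such averaging; the per-metric pages state which rule applies.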
6. Accessing Results
- Dashboards: Public dashboards with links to definitions on each panel.
- Data Exports (Enterprise Plan only): CSV/Prometheus snapshots, with notes on sampling intervals, available on request.
Each published artifact indicates the dataset time window, EC/CC versions, environment profile, and scenario.
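As a starting point for working with exports, the following sketch loads a hypothetical CSV snapshot and keeps only the rows inside the published dataset time window. The file name, column names, and window dates are illustrative assumptions; the actual schema, sampling intervals, and metadata are described alongside each artifact.

```python
# Illustrative only: restricting a metrics export to the published time window.
# File and column names ("export.csv", "timestamp", "client", "value") are
# hypothetical. Timestamps are assumed to be ISO-8601 with a UTC offset
# (e.g. "2024-01-02T00:00:00+00:00").
import csv
from datetime import datetime, timezone

WINDOW_START = datetime(2024, 1, 1, tzinfo=timezone.utc)  # placeholder window
WINDOW_END = datetime(2024, 1, 8, tzinfo=timezone.utc)    # from the artifact notes

rows = []
with open("export.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        if WINDOW_START <= ts < WINDOW_END:
            rows.append({"ts": ts, "client": row["client"], "value": float(row["value"])})

print(f"{len(rows)} samples inside the published window")
```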
7. Limitations & Sources of Variance
- Client-specific flags and tunables required for stability/compliance.
- Network conditions (peer set differences, public internet variability for cloud).
- Kernel/IO scheduler differences and firmware updates.
- Time sync drift and exporter sampling bias (see the sketch at the end of this section).
Methodology-related notes: see Purpose & Scope.
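The effect of exporter sampling bias can be shown with a small, self-contained example: a synthetic bursty CPU signal sampled at two different intervals yields noticeably different averages. The signal shape and the intervals are assumptions chosen to make the effect visible, not measured data.

```python
# Illustrative only: how the exporter sampling interval can bias an average.
# The synthetic "bursty" CPU signal and the 1 s / 15 s intervals are
# assumptions for this sketch, not measurements.
from statistics import mean

def cpu_percent(t: int) -> float:
    """Synthetic load: 5 s bursts of 90 % every 60 s, otherwise 10 %."""
    return 90.0 if t % 60 < 5 else 10.0

duration = 3600  # one hour of synthetic signal

for interval in (1, 15):
    samples = [cpu_percent(t) for t in range(0, duration, interval)]
    print(f"sampling every {interval:>2} s -> mean CPU % = {mean(samples):.1f}")
```

Because the 15 s interval happens to align with the burst period, its samples overrepresent the bursts and the reported mean is inflated relative to the 1 s sampling.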
8. Getting Help / Contributing
- Questions or issues: contact@stereumlabs.com
- Contribute runs or environments: contact@stereumlabs.com
Change History
See the Changelog for document changes and the Roadmap for planned work.