Dashboards
This page documents the dashboards maintained by StereumLabs, how access works across subscription plans, the lifecycle status (active, legacy, experimental), and how to request or track new dashboards.
We actively design additional dashboards for specific audiences (client teams, operators, researchers) and continue to maintain legacy dashboards for backward compatibility and reproducibility.
Please review your subscribed plan's features on the Plans page for detailed information.
Purpose
- Provide time-aligned, comparable views of client behavior and system resources (see the sketch after this list).
- Link every panel to metric definitions and procedures used to generate it.
- Offer audience-focused dashboards with scoped metrics and interpretation notes.
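As a minimal sketch of what “time-aligned” means in practice: panels resample each client’s series onto a shared timestamp grid before comparing them. The `align` helper below is hypothetical and only illustrates the idea; it is not part of any StereumLabs tooling:

```python
def align(series_a, series_b, step_s=15):
    """Resample two {timestamp_s: value} series onto a shared grid so
    metrics from different clients can be compared point-for-point."""
    start = max(min(series_a), min(series_b))
    end = min(max(series_a), max(series_b))
    grid = range(start - start % step_s + step_s, end + 1, step_s)

    def at(series, t):
        # Last observation at or before t (assumes reasonably dense scrapes).
        return series[max(k for k in series if k <= t)]

    return [(t, at(series_a, t), at(series_b, t)) for t in grid]

# Two clients scraped at slightly different offsets:
a = {0: 1.0, 14: 1.2, 29: 1.1, 44: 1.3}
b = {2: 2.0, 17: 2.1, 32: 2.2, 47: 2.4}
print(align(a, b))  # [(15, 1.2, 2.0), (30, 1.1, 2.1)]
```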
Data Cadence & Retention
- Ingest cadence: typically 5–15s scrape intervals (panel-specific).
- Aggregation windows: documented per panel (e.g., 30s mean, 5m rate); the sketch after this list shows how such a window is computed.
- Retention: varies by plan and dataset size; noted in each dashboard’s header.
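For illustration, a minimal sketch of how a “5m rate” aggregation is derived from raw counter samples, assuming Prometheus-style monotonic counters. `rate_over_window` is a hypothetical helper; production scrapers handle counter resets and extrapolation more carefully:

```python
from bisect import bisect_left

def rate_over_window(samples, now, window_s=300):
    """Per-second rate of a monotonic counter over the trailing window,
    i.e. the aggregation behind a '5m rate' panel.

    samples: list of (timestamp_s, counter_value), sorted by timestamp.
    """
    window = samples[bisect_left(samples, (now - window_s,)):]
    if len(window) < 2:
        return None  # not enough points inside the window
    (t0, v0), (t1, v1) = window[0], window[-1]
    if v1 < v0:
        v0 = 0  # crude counter-reset handling; real scrapers do better
    return (v1 - v0) / (t1 - t0)

# Counter scraped every 15s, growing at 2 units/s:
samples = [(t, 2.0 * t) for t in range(0, 601, 15)]
print(rate_over_window(samples, now=600))  # 2.0
```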
Dashboard Lifecycle
- Active — Supported and updated for current methodology/clients.
- Experimental — Under evaluation; may change without notice.
- Legacy — Preserved for continuity; may receive security/compatibility fixes and version pin notes.
We tag each dashboard with a lifecycle badge at the top. Legacy dashboards remain online to avoid breaking citations and to allow comparison with historical results.
Changelog & Versioning
Material changes (panel semantics, metric definitions, aggregation rules) are logged in the Changelog.
When a change affects comparability, the dashboard header includes a Breaking change note and a link to prior versions (when available).
Requesting or Suggesting Dashboards
- Use TBA to propose audience, scope, and key questions.
- For Enterprise, we can scope custom dashboards (subject to feasibility and data availability).
- We track accepted requests on the Roadmap.
Catalog (growing list)
The list below is not exhaustive and will grow over time. Status and access reflect the current release.
| Dashboard Name | Audience | Lifecycle | Data Sources | Notes / Link |
|---|---|---|---|---|
| Execution Client – Resource & Throughput | Operators, EC teams | Active | Node exporters, EC logs | Link TBD |
| Beacon Node – P2P & Gossip (non-staking) | CC teams, researchers | Active | CC metrics, libp2p | Link TBD |
| Sync & State – Initial & Snap/Beam | Operators, EC teams | Experimental | EC stages, disk I/O | Link TBD |
| Mempool & Propagation (EC) | Researchers, EC teams | Active | Tx pool, P2P | Link TBD |
| Storage & DB Behavior | Client teams, SRE | Active | RocksDB/MDBX, FS | Link TBD |
Tip: prefer “audience + question” style names, e.g., “Operators – Resource Saturation & Headroom”.
Panel & Naming Conventions
- Panel titles: `Category – Metric (Scope)`, e.g., `CPU – Utilization (EC process)`
- Units: SI where applicable; per-core normalization is labeled `per core`
- Rate labels: use the `/s` suffix for rates derived from counters (see the sketch after this list)
- Links: each panel’s info icon (ⓘ) links to the exact definition entry
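A hypothetical helper showing how these conventions compose into a title; the function name and the exact placement of the `/s` suffix are illustrative assumptions, not part of any StereumLabs tooling:

```python
def panel_title(category, metric, scope, rate=False, per_core=False):
    """Compose a title following the Category – Metric (Scope) convention."""
    metric = f"{metric}/s" if rate else metric  # rate derived from a counter
    suffix = " per core" if per_core else ""    # labeled normalization
    return f"{category} – {metric} ({scope}){suffix}"

print(panel_title("CPU", "Utilization", "EC process", per_core=True))
# CPU – Utilization (EC process) per core
print(panel_title("Network", "Bytes In", "beacon node", rate=True))
# Network – Bytes In/s (beacon node)
```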
Deprecated Panels
- We deprecate panels when definitions change or when upstream metrics are removed.
- Deprecated panels gain a (Deprecated) suffix and a link to the replacement.
- Removal happens only in major dashboard revisions and is noted in the Changelog.
Known Limitations
- Cloud dashboards may show variance due to noisy neighbors and hardware heterogeneity.
- Short aggregation windows can exaggerate jitter; compare with 5–15m views for trends (see the sketch after this list).
- Some definitions are client-specific; cross-client comparisons are labeled when caveats apply.
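To see why short windows exaggerate jitter, a small sketch on synthetic data (all values here are made up for illustration):

```python
import random

random.seed(42)
# 15 minutes of 1s samples: steady 50% load plus scrape-level noise.
samples = [50 + random.gauss(0, 10) for _ in range(900)]

def rolling_mean(xs, window):
    """Mean over each trailing window of `window` samples (1 sample = 1s)."""
    return [sum(xs[i - window:i]) / window for i in range(window, len(xs) + 1)]

def spread(xs):
    return max(xs) - min(xs)

print(f"30s window spread: {spread(rolling_mean(samples, 30)):.2f}")
print(f"5m window spread:  {spread(rolling_mean(samples, 300)):.2f}")
# The 30s view swings noticeably wider than the 5m view,
# even though the underlying load is constant.
```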
Change control for this page: material edits will be logged in the global Changelog with a short rationale and effective date.