<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>StereumLabs Docs Blog</title>
        <link>https://your-docusaurus-site.example.com/blog</link>
        <description>StereumLabs Docs Blog</description>
        <lastBuildDate>Wed, 15 Apr 2026 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <item>
            <title><![CDATA[Nimbus v26.3.1: Validator monitoring and block building across 5 execution clients]]></title>
            <link>https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building</link>
            <guid>https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building</guid>
            <pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A deep dive into how Nimbus observes 1,000 validators and how Besu, Erigon, Ethrex, Geth, and Nethermind behave when building blocks, processing attestations, and handling the Engine API, measured on our NDC2 bare-metal fleet over 48 hours.]]></description>
            <content:encoded><![CDATA[<p>How does each execution client behave when Nimbus asks it to build a block? We monitored 1,000 validator pubkeys across 5 EC pairings for 48 hours and found that block building performance varies dramatically, with one client producing near-empty blocks while the other four packed in millions of gas.</p>
<p><img decoding="async" loading="lazy" alt="Nimbus v26.3.1: Validator monitoring and block building across 5 execution clients" src="https://your-docusaurus-site.example.com/assets/images/thumbnail-2723b2fb05afb7059583d0961cf8ae67.png" width="1785" height="930" class="img_ev3q"></p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="overview">Overview<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#overview" class="hash-link" aria-label="Direct link to Overview" title="Direct link to Overview" translate="no">​</a></h2>
<p>We run 5 Nimbus <code>multiarch-v26.3.1</code> beacon nodes on our NDC2 bare-metal fleet in Vienna, each paired with a different execution client:</p>
<table><thead><tr><th>Consensus Client</th><th>Execution Client</th><th>Location</th></tr></thead><tbody><tr><td>Nimbus v26.3.1</td><td>Besu 26.2.0</td><td>NDC2, Vienna</td></tr><tr><td>Nimbus v26.3.1</td><td>Erigon v3.3.10</td><td>NDC2, Vienna</td></tr><tr><td>Nimbus v26.3.1</td><td>Ethrex 9.0.0</td><td>NDC2, Vienna</td></tr><tr><td>Nimbus v26.3.1</td><td>Geth v1.17.2</td><td>NDC2, Vienna</td></tr><tr><td>Nimbus v26.3.1</td><td>Nethermind 1.36.2</td><td>NDC2, Vienna</td></tr></tbody></table>
<p>A sixth node (Nimbus + Reth v1.11.3) is also deployed, but Reth has not yet completed its initial sync and is therefore excluded from this analysis.</p>
<p>All 5 nodes use Nimbus' built-in <code>validator_monitor</code> feature to passively observe the same set of <strong>1,000 validator pubkeys</strong> on-chain. The validator monitor tracks attestation inclusion, vote correctness, block proposals, and timing data for these validators without requiring the signing keys to be locally attached.</p>
<div class="theme-admonition theme-admonition-note admonition_xJq3 alert alert--secondary"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 14 16"><path fill-rule="evenodd" d="M6.3 5.69a.942.942 0 0 1-.28-.7c0-.28.09-.52.28-.7.19-.18.42-.28.7-.28.28 0 .52.09.7.28.18.19.28.42.28.7 0 .28-.09.52-.28.7a1 1 0 0 1-.7.3c-.28 0-.52-.11-.7-.3zM8 7.99c-.02-.25-.11-.48-.31-.69-.2-.19-.42-.3-.69-.31H6c-.27.02-.48.13-.69.31-.2.2-.3.44-.31.69h1v3c.02.27.11.5.31.69.2.2.42.31.69.31h1c.27 0 .48-.11.69-.31.2-.19.3-.42.31-.69H8V7.98v.01zM7 2.3c-3.14 0-5.7 2.54-5.7 5.68 0 3.14 2.56 5.7 5.7 5.7s5.7-2.55 5.7-5.7c0-3.15-2.56-5.69-5.7-5.69v.01zM7 .98c3.86 0 7 3.14 7 7s-3.14 7-7 7-7-3.12-7-7 3.14-7 7-7z"></path></svg></span>Shadow setup</div><div class="admonitionContent_BuS1"><p>This is a shadow configuration: a reverse proxy mirrors the validator client's requests to these beacon nodes, but the beacon nodes' responses do not reach the validator client. This means the data reflects how each CC+EC pairing <em>observes and reacts to</em> validator duties, but is not fully representative of a production validator setup where the EC's block building output would actually be submitted to the network.</p></div></div>
<p>The full analysis covers a 48-hour window ending April 13, 2026. Data sources: Prometheus (<code>prometheus-cold</code>) for metrics and Elasticsearch (Filebeat) for both EC container logs and Nimbus CC container logs.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="block-building-the-headline-finding">Block building: the headline finding<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#block-building-the-headline-finding" class="hash-link" aria-label="Direct link to Block building: the headline finding" title="Direct link to Block building: the headline finding" translate="no">​</a></h2>
<p>When one of the 1,000 monitored validators is selected as block proposer, Nimbus triggers <code>engine_forkchoiceUpdatedV3</code> with payload attributes on its paired EC, asking it to build a block. The EC then constructs the execution payload iteratively, improving it over several seconds until Nimbus calls <code>engine_getPayloadV4</code> to retrieve the result.</p>
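<p>The handshake can be sketched as two JSON-RPC requests. This is a minimal illustration of the request shapes defined in the Engine API specification; the hash and attribute values below are placeholders, and the JWT authentication a real EC endpoint requires is omitted:</p>

```python
# Minimal sketch of the two Engine API requests Nimbus issues when one of
# its monitored validators is due to propose. Field names follow the
# Engine API specification; all values here are placeholders.

def forkchoice_updated_v3(head, safe, finalized, attrs):
    """engine_forkchoiceUpdatedV3: advance fork choice and, when payload
    attributes are attached, ask the EC to start building a block."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "engine_forkchoiceUpdatedV3",
        "params": [
            {"headBlockHash": head,
             "safeBlockHash": safe,
             "finalizedBlockHash": finalized},
            attrs,  # non-null attributes trigger block building
        ],
    }

def get_payload_v4(payload_id):
    """engine_getPayloadV4: retrieve the best payload built so far."""
    return {"jsonrpc": "2.0", "id": 2,
            "method": "engine_getPayloadV4", "params": [payload_id]}

# Payload attributes for a post-Cancun build request (V3 shape).
attrs = {
    "timestamp": "0x68000000",
    "prevRandao": "0x" + "00" * 32,
    "suggestedFeeRecipient": "0x" + "00" * 20,
    "withdrawals": [],
    "parentBeaconBlockRoot": "0x" + "00" * 32,
}
req = forkchoice_updated_v3("0x" + "aa" * 32, "0x" + "bb" * 32,
                            "0x" + "cc" * 32, attrs)
```

<p>Passing <code>null</code> in place of the attributes object turns the same call into a plain fork choice update with no build request, which is what happens on every other slot.</p>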
<p>Over 48 hours, block building was triggered for approximately 13 unique blocks. <strong>The results differ dramatically between execution clients.</strong></p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="build-latency-vs-payload-output">Build latency vs. payload output<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#build-latency-vs-payload-output" class="hash-link" aria-label="Direct link to Build latency vs. payload output" title="Direct link to Build latency vs. payload output" translate="no">​</a></h3>
<p>Nimbus logs the exact timestamps for <code>Requesting engine payload</code> and <code>Received engine payload</code>, giving us the end-to-end build latency and the resulting <code>gas_used</code> for every payload. This CC-side perspective is consistent across all ECs and fills in data even when the EC's own logs lack detail.</p>
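<p>As an illustration, pairing the two log events per slot and subtracting timestamps yields the per-payload latency. The line format below is a simplified stand-in for Nimbus' actual structured logs:</p>

```python
# Pair each "Requesting engine payload" line with the matching
# "Received engine payload" line for the same slot and compute the
# build latency in milliseconds. Log shapes are illustrative.
from datetime import datetime
import re

LINE = re.compile(
    r"(?P<ts>\S+) (?P<event>Requesting|Received) engine payload\s+slot=(?P<slot>\d+)")

def build_latencies(lines):
    """Return {slot: latency_ms} from interleaved request/receive lines."""
    requested, latencies = {}, {}
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.fromisoformat(m["ts"])
        slot = int(m["slot"])
        if m["event"] == "Requesting":
            requested[slot] = ts
        elif slot in requested:
            latencies[slot] = round((ts - requested[slot]).total_seconds() * 1000)
    return latencies

logs = [
    "2026-04-12T10:00:00.100 Requesting engine payload slot=14103195",
    "2026-04-12T10:00:00.646 Received engine payload slot=14103195",
]
print(build_latencies(logs))  # {14103195: 546}
```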
<p><img decoding="async" loading="lazy" alt="Build latency vs. payload gas output" src="https://your-docusaurus-site.example.com/assets/images/latency_vs_gas-69fff28ab0a4030ad6fed61d39477262.png" width="1420" height="880" class="img_ev3q"></p>
<table><thead><tr><th>EC</th><th>Avg latency</th><th>Min</th><th>Max</th><th>Avg gas (Mgas)</th><th>Blocks</th></tr></thead><tbody><tr><td><strong>Besu</strong></td><td>546ms</td><td>75ms</td><td>848ms</td><td><strong>23.5</strong></td><td>11</td></tr><tr><td><strong>Ethrex</strong></td><td>535ms</td><td>27ms</td><td>728ms</td><td><strong>21.8</strong></td><td>13</td></tr><tr><td><strong>Nethermind</strong></td><td>519ms</td><td>28ms</td><td>696ms</td><td><strong>21.2</strong></td><td>13</td></tr><tr><td><strong>Geth</strong></td><td>524ms</td><td>24ms</td><td>756ms</td><td><strong>15.0</strong></td><td>13</td></tr><tr><td><strong>Erigon</strong></td><td><strong>479ms</strong> (fastest)</td><td>18ms</td><td>523ms</td><td><strong>0.0</strong></td><td>12</td></tr></tbody></table>
<p>The most counterintuitive finding: <strong>Erigon is the fastest responder (479ms avg) yet delivers 0 gas.</strong> Its transaction pool is apparently unable to supply transactions within the build window, so it returns an empty block quickly rather than spending time filling it. Besu takes 67ms longer on average but uses that time to pack 23.5 Mgas into the payload.</p>
<p>For high-gas blocks (slots with blob transactions), latency increases across all ECs: Besu peaks at 848ms for a 60M gas block, Ethrex at 728ms for the same block. This confirms that build latency scales with payload complexity.</p>
<div class="theme-admonition theme-admonition-note admonition_xJq3 alert alert--secondary"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 14 16"><path fill-rule="evenodd" d="M6.3 5.69a.942.942 0 0 1-.28-.7c0-.28.09-.52.28-.7.19-.18.42-.28.7-.28.28 0 .52.09.7.28.18.19.28.42.28.7 0 .28-.09.52-.28.7a1 1 0 0 1-.7.3c-.28 0-.52-.11-.7-.3zM8 7.99c-.02-.25-.11-.48-.31-.69-.2-.19-.42-.3-.69-.31H6c-.27.02-.48.13-.69.31-.2.2-.3.44-.31.69h1v3c.02.27.11.5.31.69.2.2.42.31.69.31h1c.27 0 .48-.11.69-.31.2-.19.3-.42.31-.69H8V7.98v.01zM7 2.3c-3.14 0-5.7 2.54-5.7 5.68 0 3.14 2.56 5.7 5.7 5.7s5.7-2.55 5.7-5.7c0-3.15-2.56-5.69-5.7-5.69v.01zM7 .98c3.86 0 7 3.14 7 7s-3.14 7-7 7-7-3.12-7-7 3.14-7 7-7z"></path></svg></span>note</div><div class="admonitionContent_BuS1"><p>One slot shows anomalously low latency for all ECs (Erigon 18ms, Geth 24ms, Ethrex 27ms, Nethermind 28ms, Besu 75ms). This is likely a cached or pre-built payload that Nimbus retrieved immediately.</p></div></div>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="per-block-distribution">Per-block distribution<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#per-block-distribution" class="hash-link" aria-label="Direct link to Per-block distribution" title="Direct link to Per-block distribution" translate="no">​</a></h3>
<p>The averages above compress a wide range of behavior into single numbers. Plotting every individual block build reveals the full distribution:</p>
<p><img decoding="async" loading="lazy" alt="All block builds: latency vs. gas output per EC (48h)" src="https://your-docusaurus-site.example.com/assets/images/all_blocks_scatter-9c77f4b92a19b6b2b2539a2041d9a758.png" width="2138" height="1237" class="img_ev3q"></p>
<p>Three distinct patterns emerge:</p>
<p><strong>Standard blocks (500-650ms, 7-18 Mgas):</strong> The main cluster where most builds land. Besu, Ethrex, and Nethermind consistently occupy the upper band (14-18 Mgas), while Geth sits lower (7-15 Mgas). All four ECs overlap in latency, confirming that response time differences between them are marginal for regular blocks.</p>
<p><strong>Blob-heavy blocks (700-850ms, 38-60 Mgas):</strong> A few outlier slots where blob transactions push gas usage to 38-60M. These blocks take noticeably longer to build across all ECs, with Besu and Ethrex reaching the gas limit (60M) while Geth tops out around 40-48M. The latency increase is proportional to payload complexity.</p>
<p><strong>Erigon (480-520ms, 0 Mgas):</strong> Every single Erigon build sits flat on the x-axis at or near 0 gas, forming a tight horizontal cluster. Regardless of whether the same slot produced a 17M gas block on Besu or a 60M gas block on Ethrex, Erigon delivered an effectively empty payload. This pattern holds across all 12 blocks without exception.</p>
<p>The scatter also reveals a handful of <strong>cached payloads</strong> (sub-100ms) where all ECs returned near-instantly, likely from a previously prepared payload that Nimbus retrieved before the build window elapsed.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="cross-ec-payload-comparison">Cross-EC payload comparison<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#cross-ec-payload-comparison" class="hash-link" aria-label="Direct link to Cross-EC payload comparison" title="Direct link to Cross-EC payload comparison" translate="no">​</a></h3>
<p>By examining the <code>Received engine payload</code> log lines from all 5 Nimbus CC instances, we can compare the actual <code>gas_used</code> delivered by each EC for the same blocks:</p>
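<p>A sketch of the aggregation behind this comparison: group the <code>gas_used</code> each CC instance reported by block number, keeping the best payload seen per EC. The records are illustrative, reusing the gas values visible in the Besu and Nethermind logs quoted later in this post:</p>

```python
# Pivot per-node "Received engine payload" records into a
# block_number x EC table of gas_used, keeping the highest-gas
# (i.e. most-improved) payload the CC saw for each block.
from collections import defaultdict

def pivot_gas(records):
    """records: iterable of (ec, block_number, gas_used)
    -> {block_number: {ec: best_gas_used}}"""
    table = defaultdict(dict)
    for ec, block, gas in records:
        table[block][ec] = max(gas, table[block].get(ec, 0))
    return dict(table)

records = [
    ("besu", 24869114, 17_624_773),
    ("nethermind", 24869114, 17_257_515),
    ("erigon", 24869114, 0),
]
print(pivot_gas(records)[24869114]["besu"])  # 17624773
```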
<p><img decoding="async" loading="lazy" alt="Engine payload gas per block (from Nimbus CC logs, 48h)" src="https://your-docusaurus-site.example.com/assets/images/payload_gas_comparison-ec8d1578b8c8c71b985d0a9322110f8a.png" width="2141" height="971" class="img_ev3q"></p>
<table><thead><tr><th>EC</th><th>Block #869114</th><th>Block #869617</th><th>Block #867424</th><th>Block #866897</th><th>Avg gas</th></tr></thead><tbody><tr><td><strong>Besu</strong></td><td><strong>17.6M</strong></td><td><strong>15.1M</strong></td><td><strong>13.0M</strong></td><td><strong>17.5M</strong></td><td><strong>~23.5M</strong></td></tr><tr><td><strong>Ethrex</strong></td><td><strong>16.9M</strong></td><td><strong>17.8M</strong></td><td><strong>18.0M</strong></td><td><strong>18.0M</strong></td><td><strong>~21.8M</strong></td></tr><tr><td><strong>Nethermind</strong></td><td><strong>17.3M</strong></td><td><strong>10.1M</strong></td><td><strong>15.4M</strong></td><td><strong>16.6M</strong></td><td><strong>~21.2M</strong></td></tr><tr><td><strong>Geth</strong></td><td>14.0M</td><td>7.1M</td><td>11.0M</td><td>8.3M</td><td>~15.0M</td></tr><tr><td><strong>Erigon</strong></td><td><strong>0</strong></td><td><strong>41K</strong></td><td>—</td><td><strong>0</strong></td><td><strong>~0</strong></td></tr></tbody></table>
<p>Besu, Ethrex, and Nethermind are the strongest block builders, routinely filling blocks to 15-30% gas utilization. Ethrex is particularly consistent, delivering 14-18M gas for standard blocks. Geth produces lighter payloads. Erigon's payloads are effectively empty.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="besu-the-most-aggressive-builder">Besu: the most aggressive builder<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#besu-the-most-aggressive-builder" class="hash-link" aria-label="Direct link to Besu: the most aggressive builder" title="Direct link to Besu: the most aggressive builder" translate="no">​</a></h3>
<p>Besu produced <strong>661 block improvement iterations</strong> across its built blocks. Its logs show the full iterative building process with reward tracking:</p>
<div class="language-text codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-text codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">New proposal for payloadId 0x36f36d block 24869114</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  gas used 17,624,773  transactions 366  reward 2.22 finney</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  is better than the previous one by 103.24 szabo</span><br></span></code></pre></div></div>
<p>For block #24,869,114, Besu's best build reached <strong>366 transactions, 17.6M gas, and a 2.22 finney block reward</strong>. It logged every improvement step, including the marginal reward increase between iterations.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="geth-clean-lifecycle-strong-output">Geth: clean lifecycle, strong output<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#geth-clean-lifecycle-strong-output" class="hash-link" aria-label="Direct link to Geth: clean lifecycle, strong output" title="Direct link to Geth: clean lifecycle, strong output" translate="no">​</a></h3>
<p>Geth logged 214 payload updates and provides the clearest view of the block building lifecycle:</p>
<p><img decoding="async" loading="lazy" alt="Geth block #24,869,114 iterative build timeline" src="https://your-docusaurus-site.example.com/assets/images/geth_build_timeline-12b69b415ebfa7471f8c852a05fae94d.png" width="1600" height="789" class="img_ev3q"></p>
<p>The build started with 6 transactions and grew to <strong>349 transactions over ~10.5 seconds</strong>, with each update taking 25-67ms. Geth logs <code>Starting work on payload</code>, then multiple <code>Updated payload</code> entries (with txs, gas, fees, elapsed time), and finally <code>Stopping work on payload reason=delivery</code> when Nimbus retrieves the result.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="ethrex-strong-builder-detailed-execution-breakdown">Ethrex: strong builder, detailed execution breakdown<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#ethrex-strong-builder-detailed-execution-breakdown" class="hash-link" aria-label="Direct link to Ethrex: strong builder, detailed execution breakdown" title="Direct link to Ethrex: strong builder, detailed execution breakdown" translate="no">​</a></h3>
<p>Ethrex does not log block building progress in its own container logs at the default log level. However, Nimbus' CC logs reveal it is one of the strongest builders in the fleet, delivering <strong>21.8 Mgas on average</strong> with one block hitting <strong>100% gas utilization (60M gas)</strong>. Each payload carries <code>extra_data: ethrex 9.0.0</code>.</p>
<p>What Ethrex does log is an excellent per-block execution breakdown when processing incoming blocks:</p>
<div class="language-text codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-text codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">BLOCK EXECUTION THROUGHPUT (24869233): 0.366 Ggas/s</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  TIME SPENT: 55 ms. Gas Used: 0.020 (33%), #Txs: 134</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  block validation: 1% | exec(w/merkle): 91% | merkle-only: 2% | store: 7%</span><br></span></code></pre></div></div>
<p>This level of transparency into where execution time is spent (91% in execution+merkle, 7% in storage, 1% in validation) is unique among the tested ECs and valuable for performance profiling.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="nethermind-strong-builder-quiet-logs">Nethermind: strong builder, quiet logs<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#nethermind-strong-builder-quiet-logs" class="hash-link" aria-label="Direct link to Nethermind: strong builder, quiet logs" title="Direct link to Nethermind: strong builder, quiet logs" translate="no">​</a></h3>
<p>Nethermind's own logs only show production requests (<code>Production Request 24869114 PayloadId: 0x2321052e5846b8a2</code>) without per-iteration detail at the default log level. However, <strong>Nimbus' CC logs reveal the full picture:</strong></p>
<div class="language-text codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-text codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">INF Received engine payload  slot=14103195 payload="(block_number: 24869114,</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  gas_used: 17257515, gas_limit: 60000000, extra_data: Nethermind v1.36.2, ...)"</span><br></span></code></pre></div></div>
<p>For block #24,869,114, Nethermind delivered <strong>17.3M gas</strong>, competitive with Besu's 17.6M and Ethrex's 16.9M. Across all 13 payloads in 48h, Nethermind produced standard blocks in the 9-17M gas range, with blob-heavy slots pushing its overall average up to 21.2 Mgas.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="erigon-severely-impaired-block-building">Erigon: severely impaired block building<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#erigon-severely-impaired-block-building" class="hash-link" aria-label="Direct link to Erigon: severely impaired block building" title="Direct link to Erigon: severely impaired block building" translate="no">​</a></h3>
<p>Erigon's own logs show it actively attempts to build blocks but consistently fails to include transactions:</p>
<p><img decoding="async" loading="lazy" alt="Erigon block build outcomes (54 builds, 48h)" src="https://your-docusaurus-site.example.com/assets/images/erigon_build_outcomes-838549fad876cbf1739ea702cadba87f.png" width="862" height="845" class="img_ev3q"></p>
<p><strong>76% of Erigon's 54 block build iterations produced completely empty blocks</strong> with 0 transactions and 0 gas. When it did include transactions, the count was dramatically lower than other clients: a maximum of 36 transactions versus 300+ for Besu and Geth.</p>
<div class="language-text codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-text codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">Built block  height=24869114  txs=0  executionRequests=0</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  gas used %=0.000  time=782.686ms</span><br></span></code></pre></div></div>
<p>The Nimbus CC logs confirm this: across 12 blocks, every single payload from Erigon contained 0 or near-0 gas. This is not a latency issue. At 479ms average, Erigon is the <em>fastest</em> EC to return a payload. The problem is upstream: Erigon's block execution latency (423-567ms per imported block) delays transaction pool maintenance, so when Nimbus asks for a payload, Erigon has nothing to put in it.</p>
<div class="theme-admonition theme-admonition-warning admonition_xJq3 alert alert--warning"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 16 16"><path fill-rule="evenodd" d="M8.893 1.5c-.183-.31-.52-.5-.887-.5s-.703.19-.886.5L.138 13.499a.98.98 0 0 0 0 1.001c.193.31.53.501.886.501h13.964c.367 0 .704-.19.877-.5a1.03 1.03 0 0 0 .01-1.002L8.893 1.5zm.133 11.497H6.987v-2.003h2.039v2.003zm0-3.004H6.987V5.987h2.039v4.006z"></path></svg></span>Operator impact</div><div class="admonitionContent_BuS1"><p>In a production setup, if Erigon were the EC responsible for building the block that gets submitted to the network, the validator would propose a near-empty block, forfeiting transaction fees and MEV revenue. For the ~13 blocks built in our 48h window, the difference between Besu's output (2.22 finney reward) and Erigon's (0 reward) is a direct loss per proposal.</p></div></div>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="attestation-pipeline-per-ec-timing-differences">Attestation pipeline: per-EC timing differences<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#attestation-pipeline-per-ec-timing-differences" class="hash-link" aria-label="Direct link to Attestation pipeline: per-EC timing differences" title="Direct link to Attestation pipeline: per-EC timing differences" translate="no">​</a></h2>
<p>While all 5 nodes observe the same 1,000 validators and see identical on-chain results (99.996% attestation inclusion rate), <em>how quickly</em> each node observes attestation events varies by EC pairing. Three pipeline stages were measured:</p>
<p><img decoding="async" loading="lazy" alt="Attestation pipeline delay by EC pairing (p50, 48h)" src="https://your-docusaurus-site.example.com/assets/images/attestation_delay-b252db05f19f50beb3c94573ea74ed22.png" width="1780" height="881" class="img_ev3q"></p>
<table><thead><tr><th>Pipeline stage</th><th>Besu</th><th>Erigon</th><th>Ethrex</th><th>Geth</th><th>Nethermind</th><th>Spread</th></tr></thead><tbody><tr><td><strong>Unaggregated attestation</strong> (p50)</td><td><strong>49ms</strong></td><td>50ms</td><td>53ms</td><td>52ms</td><td>52ms</td><td>4ms</td></tr><tr><td><strong>Aggregated attestation</strong> (p50)</td><td><strong>35ms</strong></td><td>35ms</td><td>38ms</td><td>38ms</td><td>38ms</td><td>3ms</td></tr><tr><td><strong>Attestation in aggregate</strong> (p50)</td><td>142ms</td><td><strong>153ms</strong></td><td>141ms</td><td><strong>135ms</strong></td><td>145ms</td><td>18ms</td></tr></tbody></table>
<p>Besu is the fastest for raw attestation observation. However, Geth leads in the third stage (attestation appearing inside aggregates), which is the most relevant for actual inclusion. Erigon is consistently the slowest in this final stage, adding 18ms of latency versus Geth. Ethrex performs well across all three stages.</p>
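<p>These p50 values come from Prometheus histogram metrics exposed by the validator monitor (e.g. a delay histogram along the lines of <code>validator_monitor_unaggregated_attestation_delay_seconds</code>; the exact metric name is an assumption here, borrowed from the Lighthouse-originated validator monitor design). A quantile is estimated by interpolating within the bucket containing the target rank, which can be sketched as:</p>

```python
def histogram_quantile(q, buckets):
    """Estimate quantile q from cumulative histogram buckets, mirroring
    how Prometheus' histogram_quantile() interpolates linearly within
    the bucket that contains the target rank.

    buckets: sorted list of (upper_bound_seconds, cumulative_count)."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if count == prev_count:
                return bound
            # linear interpolation inside the target bucket
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Synthetic buckets: 50 of 100 observations fall at or below 50ms.
buckets = [(0.025, 12), (0.05, 50), (0.1, 100)]
p50_ms = round(histogram_quantile(0.5, buckets) * 1000)
print(p50_ms)  # 50
```

<p>Because the estimate interpolates within bucket boundaries, the small 3-4ms spreads in the table above are only as precise as the histogram's bucket layout allows.</p>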
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="attestation-volume-erigons-observation-gap">Attestation volume: Erigon's observation gap<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#attestation-volume-erigons-observation-gap" class="hash-link" aria-label="Direct link to Attestation volume: Erigon's observation gap" title="Direct link to Attestation volume: Erigon's observation gap" translate="no">​</a></h2>
<p>All 5 nodes should see roughly the same number of attestations for the monitored validators. Over 48 hours, they do not:</p>
<p><img decoding="async" loading="lazy" alt="Unaggregated attestations received via API (48h)" src="https://your-docusaurus-site.example.com/assets/images/attestation_volume-65c17df9eb75b2ff90bb44f489d823b2.png" width="1422" height="789" class="img_ev3q"></p>
<table><thead><tr><th>EC pairing</th><th>Attestations received (API)</th><th>Delta vs. best</th></tr></thead><tbody><tr><td>Geth</td><td>404,876</td><td>baseline</td></tr><tr><td>Nethermind</td><td>403,726</td><td>-0.3%</td></tr><tr><td>Besu</td><td>402,563</td><td>-0.6%</td></tr><tr><td>Ethrex</td><td>400,752</td><td>-1.0%</td></tr><tr><td><strong>Erigon</strong></td><td><strong>359,104</strong></td><td><strong>-11.3%</strong></td></tr></tbody></table>
<p>Erigon misses <strong>~45,000 attestations</strong> that the other 4 nodes see within the same 48-hour window. This is not a network issue; all nodes are on the same bare-metal fleet in Vienna. Erigon's slower block processing causes attestations to expire before the node can process them.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="vote-accuracy-source-target-and-head">Vote accuracy: source, target, and head<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#vote-accuracy-source-target-and-head" class="hash-link" aria-label="Direct link to Vote accuracy: source, target, and head" title="Direct link to Vote accuracy: source, target, and head" translate="no">​</a></h2>
<p>The three vote types in each attestation from the 1,000 monitored validators (the Casper FFG source and target votes, plus the LMD GHOST head vote) show dramatically different difficulty levels:</p>
<p><img decoding="async" loading="lazy" alt="Attestation vote accuracy (48h, 1000 validators)" src="https://your-docusaurus-site.example.com/assets/images/vote_type_hierarchy-f30e1137467b13ca7e57685aa8b5c144.png" width="1422" height="789" class="img_ev3q"></p>
<table><thead><tr><th>Vote type</th><th>Hits (48h)</th><th>Misses (48h)</th><th>Hit rate</th><th>What it measures</th></tr></thead><tbody><tr><td><strong>Source</strong></td><td>450,137</td><td>19</td><td>99.996%</td><td>Correct justified checkpoint</td></tr><tr><td><strong>Target</strong></td><td>449,857</td><td>299</td><td>99.934%</td><td>Correct epoch boundary checkpoint</td></tr><tr><td><strong>Head</strong></td><td>~446,200</td><td>~3,987</td><td>99.12%</td><td>Correct chain head at slot boundary</td></tr></tbody></table>
<p>The head vote is the hardest duty, with ~210x more misses than the source vote. It requires the node to see the latest block before the slot boundary, making it the most sensitive to processing speed.</p>
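<p>For reference, the hit rates above are simple ratios of the monitor's hit and miss counters. Reproducing the arithmetic (the head counts in the table are approximate, so only source and target are checked exactly):</p>

```python
# Hit rate = hits / (hits + misses), using the 48h counter values
# from the table above.
def hit_rate(hits, misses):
    return hits / (hits + misses)

source = hit_rate(450_137, 19)
target = hit_rate(449_857, 299)
print(f"{source:.3%} {target:.3%}")  # 99.996% 99.934%

# The head vote misses ~210x more often than the source vote.
assert round(3_987 / 19) == 210
```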
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="head-vote-diurnal-pattern">Head vote: diurnal pattern<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#head-vote-diurnal-pattern" class="hash-link" aria-label="Direct link to Head vote: diurnal pattern" title="Direct link to Head vote: diurnal pattern" translate="no">​</a></h3>
<p>Over 48 hours, head vote accuracy shows a clear cyclic pattern:</p>
<p><img decoding="async" loading="lazy" alt="Head vote accuracy over 48 hours" src="https://your-docusaurus-site.example.com/assets/images/head_vote_48h-d6de3647341b7f10476221c7db2be58d.png" width="1780" height="699" class="img_ev3q"></p>
<p>The rate fluctuates between <strong>98.64% and 99.54%</strong>, likely driven by network congestion patterns. All 5 nodes report identical values at each time point, confirming this reflects on-chain truth, not individual node performance.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="block-proposals-via-gossip">Block proposals via gossip<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#block-proposals-via-gossip" class="hash-link" aria-label="Direct link to Block proposals via gossip" title="Direct link to Block proposals via gossip" translate="no">​</a></h2>
<p>Over the monitoring period, the nodes observed <strong>27 block proposals</strong> from the 1,000 monitored validators via gossip. In the last 48 hours, <strong>12 block proposals</strong> were detected via the <code>validator_monitor_block_hit_total</code> counter. All proposals arrived via the gossip network, confirming the validators are actively proposing on-chain.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="nimbus-cc-logs-as-an-observability-source">Nimbus CC logs as an observability source<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#nimbus-cc-logs-as-an-observability-source" class="hash-link" aria-label="Direct link to Nimbus CC logs as an observability source" title="Direct link to Nimbus CC logs as an observability source" translate="no">​</a></h2>
<p>One of the practical takeaways from this analysis: <strong>Nimbus' consensus client logs are a valuable observability layer for block building</strong>, regardless of how detailed the execution client's own logs are.</p>
<p>Nimbus logs three key events per block building cycle:</p>
<ol>
<li class=""><code>Requesting engine payload</code> — timestamp, slot, beacon head, fee recipient</li>
<li class=""><code>Received engine payload</code> — full payload contents: gas_used, gas_limit, block_number, extra_data</li>
<li class=""><code>Block proposal included</code> — slot, validator ID</li>
</ol>
<p>For Ethrex and Nethermind, whose default log levels don't expose per-iteration block building details, the Nimbus CC logs were the only way to determine that both clients are strong builders (21-22 Mgas avg). This CC-side perspective also revealed the build latency data that showed Erigon's problem is not response time but empty transaction pools.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="ec-log-verbosity-at-default-levels">EC log verbosity at default levels<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#ec-log-verbosity-at-default-levels" class="hash-link" aria-label="Direct link to EC log verbosity at default levels" title="Direct link to EC log verbosity at default levels" translate="no">​</a></h2>
<p>An interesting infrastructure finding: the default log levels produce vastly different volumes across execution clients.</p>
<p><img decoding="async" loading="lazy" alt="EC container log volume at default log levels (per hour)" src="https://your-docusaurus-site.example.com/assets/images/log_verbosity-70585912782b01de617fcc4171b9f731.png" width="1420" height="699" class="img_ev3q"></p>
<table><thead><tr><th>Execution Client</th><th>Log entries per hour</th><th>Factor vs. least</th></tr></thead><tbody><tr><td>Nethermind 1.36.2</td><td>5,661</td><td>9.4x</td></tr><tr><td>Ethrex 9.0.0</td><td>1,569</td><td>2.6x</td></tr><tr><td>Geth v1.17.2</td><td>1,205</td><td>2.0x</td></tr><tr><td>Reth v1.11.3</td><td>838</td><td>1.4x</td></tr><tr><td>Erigon v3.3.10</td><td>781</td><td>1.3x</td></tr><tr><td>Besu 26.2.0</td><td>604</td><td>1.0x</td></tr></tbody></table>
<p>Nethermind produces <strong>~9x more log output</strong> than Besu at default log levels. All 6 ECs in our fleet (including Reth, which is syncing) are included here since this is a general infrastructure observation.</p>
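<p>The "Factor vs. least" column is simply each client's hourly count divided by the smallest (Besu's 604/hr); a quick check in Python:</p>

```python
# Hourly log-entry counts from the table above.
entries_per_hour = {
    "Nethermind": 5661, "Ethrex": 1569, "Geth": 1205,
    "Reth": 838, "Erigon": 781, "Besu": 604,
}

least = min(entries_per_hour.values())  # Besu: 604 entries/hr
factors = {ec: round(n / least, 1) for ec, n in entries_per_hour.items()}
# Nethermind works out to 9.4x the least verbose client, matching the table.
```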
<div class="theme-admonition theme-admonition-tip admonition_xJq3 alert alert--success"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 12 16"><path fill-rule="evenodd" d="M6.5 0C3.48 0 1 2.19 1 5c0 .92.55 2.25 1 3 1.34 2.25 1.78 2.78 2 4v1h5v-1c.22-1.22.66-1.75 2-4 .45-.75 1-2.08 1-3 0-2.81-2.48-5-5.5-5zm3.64 7.48c-.25.44-.47.8-.67 1.11-.86 1.41-1.25 2.06-1.45 3.23-.02.05-.02.11-.02.17H5c0-.06 0-.13-.02-.17-.2-1.17-.59-1.83-1.45-3.23-.2-.31-.42-.67-.67-1.11C2.44 6.78 2 5.65 2 5c0-2.2 2.02-4 4.5-4 1.22 0 2.36.42 3.22 1.19C10.55 2.94 11 3.94 11 5c0 .66-.44 1.78-.86 2.48zM4 14h5c-.23 1.14-1.3 2-2.5 2s-2.27-.86-2.5-2z"></path></svg></span>Operator consideration</div><div class="admonitionContent_BuS1"><p>If you're running Nethermind with centralized logging (Filebeat/Elasticsearch or similar), account for its higher log volume when sizing your log storage and ingestion capacity. Consider adjusting Nethermind's log level if storage is constrained.</p></div></div>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="summary">Summary<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#summary" class="hash-link" aria-label="Direct link to Summary" title="Direct link to Summary" translate="no">​</a></h2>
<table><thead><tr><th>Dimension</th><th>Besu</th><th>Erigon</th><th>Ethrex</th><th>Geth</th><th>Nethermind</th></tr></thead><tbody><tr><td><strong>Block building (avg gas)</strong></td><td>🟢 <strong>23.5M</strong> (best)</td><td>🔴 <strong>0M</strong> (empty)</td><td>🟢 <strong>21.8M</strong></td><td>⚪ 15.0M</td><td>🟢 <strong>21.2M</strong></td></tr><tr><td><strong>Build latency</strong></td><td>546ms</td><td><strong>479ms</strong> (fastest)</td><td>535ms</td><td>524ms</td><td>519ms</td></tr><tr><td><strong>Attestation delay</strong> (agg.)</td><td>⚪ 142ms</td><td>🔴 153ms</td><td>⚪ 141ms</td><td>🟢 <strong>135ms</strong></td><td>⚪ 145ms</td></tr><tr><td><strong>Attestation volume</strong></td><td>⚪ -0.6%</td><td>🔴 <strong>-11.3%</strong></td><td>⚪ -1.0%</td><td>🟢 Baseline</td><td>⚪ -0.3%</td></tr><tr><td><strong>EC build log detail</strong></td><td>🟢 Full</td><td>⚪ txs + gas</td><td>🟠 None (CC fills gap)</td><td>🟢 Full</td><td>🟠 Minimal (CC fills gap)</td></tr><tr><td><strong>Log verbosity</strong> (default)</td><td>🟢 604/hr</td><td>⚪ 781/hr</td><td>⚪ 1,569/hr</td><td>⚪ 1,205/hr</td><td>🟠 5,661/hr</td></tr></tbody></table>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="key-takeaways">Key takeaways<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#key-takeaways" class="hash-link" aria-label="Direct link to Key takeaways" title="Direct link to Key takeaways" translate="no">​</a></h3>
<ol>
<li class="">
<p><strong>Erigon's block building is severely impaired.</strong> 76% of built blocks were empty, and even when it did include transactions, the count was a fraction of what other clients achieved. Paradoxically, Erigon responds fastest (479ms) but returns a near-empty payload. The root cause is its slow block execution (423-567ms), which delays transaction pool readiness.</p>
</li>
<li class="">
<p><strong>Besu, Ethrex, and Nethermind are all strong block builders.</strong> Besu leads on raw gas output (23.5M avg) and iteration count (661 per block). Ethrex is the most consistent (14-18M gas for standard blocks, 60M for full blocks). Nethermind is competitive at 21.2M avg despite minimal logging.</p>
</li>
<li class="">
<p><strong>Nimbus' CC logs are a valuable observability source.</strong> The <code>Received engine payload</code> log provides payload gas_used, build latency, and block details for every EC, making it the most reliable cross-client comparison tool, especially for ECs like Ethrex and Nethermind whose own logs lack block building detail at default levels.</p>
</li>
<li class="">
<p><strong>Erigon misses ~11% of attestation observations</strong> due to slower block processing. This volume gap is unique to Erigon; the other 4 ECs are within 1% of each other.</p>
</li>
<li class="">
<p><strong>Head vote accuracy (~99.1%) shows a diurnal pattern</strong> that appears identically across all nodes, pointing to network congestion rather than client behavior as the cause.</p>
</li>
<li class="">
<p><strong>Log volume varies 9x</strong> between the most and least verbose EC at default log levels. Nethermind's high verbosity (5,661/hr) is an infrastructure sizing consideration.</p>
</li>
</ol>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="methodology-notes">Methodology notes<a href="https://your-docusaurus-site.example.com/blog/nimbus-v26-3-1-validator-monitoring-block-building#methodology-notes" class="hash-link" aria-label="Direct link to Methodology notes" title="Direct link to Methodology notes" translate="no">​</a></h2>
<p>All Prometheus data sourced from the <code>prometheus-cold</code> datasource in the StereumLabs Grafana instance. Validator monitor metrics use the <code>validator_monitor_*</code> family with <code>cc_client="nimbus"</code> and <code>validator="total"</code> label selectors. 48-hour data uses <code>increase(...[48h])</code> instant queries and <code>histogram_quantile</code> over <code>rate(...[48h])</code> for delay distributions.</p>
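<p>Concretely, the instant queries take the following shape. This is a hedged sketch: the metric name below is an illustrative member of the <code>validator_monitor_*</code> family, and datasource URL handling is omitted; <code>/api/v1/query</code> itself is the standard Prometheus HTTP API endpoint:</p>

```python
from urllib.parse import urlencode

SELECTOR = '{cc_client="nimbus",validator="total"}'

def instant_increase(metric, window="48h"):
    # Counter growth over the whole window, evaluated as one instant query.
    return f"increase({metric}{SELECTOR}[{window}])"

def delay_quantile(bucket_metric, q=0.5, window="48h"):
    # Quantile from a histogram's _bucket series; `le` must survive the sum.
    return (f"histogram_quantile({q}, "
            f"sum by (le) (rate({bucket_metric}{SELECTOR}[{window}])))")

# Illustrative metric name; any validator_monitor_* counter works the same way.
expr = instant_increase("validator_monitor_prev_epoch_attestations_total")
params = urlencode({"query": expr})  # append to GET <datasource>/api/v1/query?
```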
<p>EC container logs are stored in Elasticsearch (Filebeat 9.3.0 → Elasticsearch 9.3.0) and queried via the <code>container.image.name</code> field to isolate each execution client's output. Block building logs were identified by client-specific patterns: Besu's <code>New proposal for payloadId</code>, Geth's <code>Updated payload</code>/<code>Starting work on payload</code>/<code>Stopping work on payload</code>, and Erigon's <code>Built block</code>/<code>Building block</code>.</p>
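<p>The Elasticsearch side looks roughly as follows. The <code>container.image.name</code> field and the message patterns are the ones described above; the image tag and the <code>message</code> field name are assumptions about our Filebeat mapping:</p>

```python
# Build an Elasticsearch bool query isolating one EC's block building logs.
def block_building_query(image_name, patterns):
    return {
        "query": {
            "bool": {
                # Exact filter on the container image identifies the client...
                "filter": [{"term": {"container.image.name": image_name}}],
                # ...and at least one client-specific build pattern must match.
                "should": [{"match_phrase": {"message": p}} for p in patterns],
                "minimum_should_match": 1,
            }
        }
    }

# Example: Besu's single build pattern (image tag is a placeholder).
q = block_building_query("hyperledger/besu:26.2.0", ["New proposal for payloadId"])
```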
<p>Additionally, Nimbus CC container logs (<code>statusim/nimbus-eth2:multiarch-v26.3.1</code>) were analyzed for engine API interactions: <code>Requesting engine payload</code> (build request timestamps), <code>Received engine payload</code> (payload contents including gas_used), and <code>Block proposal included</code> (on-chain confirmation). This CC-side perspective provides build latency measurements and fills in payload details for ECs whose own logs lack that information at default log levels (notably Ethrex and Nethermind).</p>
<p>All nodes run on NDC2 bare-metal hardware (Vienna), eliminating cloud-induced variance.</p>
<p>For details on our label conventions and how to build your own dashboards against our data, see <a href="https://docs.stereumlabs.com/docs/dashboards/build-your-own" target="_blank" rel="noopener noreferrer" class="">Build your own dashboards</a>.</p>]]></content:encoded>
            <category>Nimbus</category>
            <category>consensus client</category>
            <category>block building</category>
            <category>validator monitoring</category>
            <category>Erigon</category>
            <category>Geth</category>
            <category>Besu</category>
            <category>Nethermind</category>
            <category>Ethrex</category>
            <category>execution client</category>
            <category>PeerDAS</category>
            <category>performance</category>
        </item>
        <item>
            <title><![CDATA[Teku 25.12.0 vs 26.2.0 vs 26.3.0: Cross-version resource & performance analysis]]></title>
            <link>https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources</link>
            <guid>https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources</guid>
            <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A comprehensive comparison of three Teku consensus client releases across CPU, memory, JVM garbage collection, disk I/O, block import latency, P2P networking, and PeerDAS metrics — measured on our NDC2 bare-metal fleet across all 6 execution client pairings.]]></description>
            <content:encoded><![CDATA[<p>A deep dive into how Teku evolved across three releases: the RocksDB migration in 26.2.0, jemalloc in 26.3.0, and what both changes mean for CPU, memory, GC overhead, disk I/O, and block import latency on real hardware.</p>
<p><img decoding="async" loading="lazy" alt="Teku 25.12.0 vs 26.2.0 vs 26.3.0: Cross-version resource &amp; performance analysis" src="https://your-docusaurus-site.example.com/assets/images/teku-version-comparison-thumbnail-bf60fe808551255aa87f575a8ee8d59e.png" width="1800" height="945" class="img_ev3q"></p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="overview">Overview<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#overview" class="hash-link" aria-label="Direct link to Overview" title="Direct link to Overview" translate="no">​</a></h2>
<p>We compared three Teku consensus client versions on our NDC2 bare-metal fleet in Vienna, each paired with all 6 execution clients (Besu, Erigon, Ethrex, Geth, Nethermind, Reth). Each version was measured over a 14-day window during its active deployment period using <code>avg_over_time(...[14d:1h])</code> instant queries against our Prometheus-cold datasource.</p>
<table><thead><tr><th>Version</th><th>Release Date</th><th>Measurement Window</th><th>Key Changes</th></tr></thead><tbody><tr><td><strong>25.12.0</strong></td><td>Dec 16, 2025</td><td>Jan 15 – Jan 29, 2026</td><td>Late block reorg, block building prep, sidecar recovery</td></tr><tr><td><strong>26.2.0</strong></td><td>Feb 11, 2026</td><td>Feb 15 – Mar 1, 2026</td><td>RocksDB as default DB, DAS backfiller, getBlobs API</td></tr><tr><td><strong>26.3.0</strong></td><td>Mar 5, 2026</td><td>Mar 15 – Mar 29, 2026</td><td>jemalloc allocator, SSZ serialization fix, partial sidecar import</td></tr></tbody></table>
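<p>For readers unfamiliar with PromQL subqueries, <code>avg_over_time(expr[14d:1h])</code> evaluates <code>expr</code> once per hour across the 14-day window and averages the results. A sketch of that semantics over synthetic hourly samples:</p>

```python
# 14 days at 1h resolution gives 336 evaluation points per version window.
HOURS = 14 * 24

def avg_over_time(samples):
    # The subquery's outer function: a plain mean over the evaluated points.
    return sum(samples) / len(samples)

# Synthetic CPU readings alternating between 0.4 and 0.5 CPU-seconds/second.
samples = [0.4 if h % 2 == 0 else 0.5 for h in range(HOURS)]
fleet_style_average = avg_over_time(samples)  # ~0.45
```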
<p>The full report is available as a PDF download at the <a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#download" class="">bottom of this post</a>.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="fleet-level-headline-numbers">Fleet-level headline numbers<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#fleet-level-headline-numbers" class="hash-link" aria-label="Direct link to Fleet-level headline numbers" title="Direct link to Fleet-level headline numbers" translate="no">​</a></h2>
<p>The two architectural changes — LevelDB → RocksDB in 26.2.0 and the jemalloc memory allocator in 26.3.0 — drove the majority of the performance shifts:</p>
<p><img decoding="async" loading="lazy" alt="Fleet-wide changes from Teku 25.12.0 to 26.3.0" src="https://your-docusaurus-site.example.com/assets/images/headline_kpis-9393ae7df83cfe2a3df56133aea45e52.png" width="1600" height="882" class="img_ev3q"></p>
<table><thead><tr><th>Metric</th><th>25.12.0 → 26.3.0</th><th>What happened</th></tr></thead><tbody><tr><td><strong>CPU usage</strong></td><td><strong>−44%</strong></td><td>RocksDB caching + jemalloc reducing GC-driven CPU spikes</td></tr><tr><td><strong>GC overhead</strong></td><td><strong>−36%</strong></td><td>jemalloc reduced heap fragmentation → fewer full GC cycles</td></tr><tr><td><strong>Disk reads (host)</strong></td><td><strong>−79%</strong></td><td>RocksDB block cache eliminates most disk reads</td></tr><tr><td><strong>Disk writes (host)</strong></td><td><strong>−61%</strong></td><td>LSM-tree architecture writes more efficiently than LevelDB</td></tr><tr><td><strong>Block import delay</strong></td><td><strong>−24%</strong></td><td>Fastest average: 297ms in 26.3.0</td></tr><tr><td><strong>RSS memory</strong></td><td><strong>+2.8%</strong></td><td>RocksDB uses more in-memory structures (peaked at +5.5% in 26.2.0, recovered)</td></tr><tr><td><strong>Open file descriptors</strong></td><td><strong>+101%</strong></td><td>Expected: RocksDB holds many SST files open</td></tr><tr><td><strong>Peer count (libp2p)</strong></td><td><strong>+9%</strong></td><td>Steady improvement across versions</td></tr></tbody></table>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="cpu-utilization">CPU utilization<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#cpu-utilization" class="hash-link" aria-label="Direct link to CPU utilization" title="Direct link to CPU utilization" translate="no">​</a></h2>
<p><strong>Metric:</strong> <code>rate(process_cpu_seconds_total{job="teku"}[5m])</code> — Teku JVM process CPU in CPU-seconds per second.</p>
<p><img decoding="async" loading="lazy" alt="Teku CPU usage by EC pairing across three versions" src="https://your-docusaurus-site.example.com/assets/images/cpu_usage-78a75dc4aff4ca7fc3172ca116acf05d.png" width="1780" height="886" class="img_ev3q"></p>
<table><thead><tr><th>EC Pairing</th><th>25.12.0</th><th>26.2.0</th><th>26.3.0</th><th>Δ 25.12→26.2</th><th>Δ 26.2→26.3</th></tr></thead><tbody><tr><td>Besu</td><td>0.761</td><td>0.441</td><td>0.431</td><td>🟢 −42%</td><td>🟢 −2%</td></tr><tr><td>Erigon</td><td>0.704</td><td>0.414</td><td>0.409</td><td>🟢 −41%</td><td>🟢 −1%</td></tr><tr><td>Ethrex</td><td>0.594</td><td>0.552</td><td>0.419</td><td>🟢 −7%</td><td>🟢 −24%</td></tr><tr><td>Geth</td><td>0.771</td><td>0.553</td><td>0.429</td><td>🟢 −28%</td><td>🟢 −22%</td></tr><tr><td>Nethermind</td><td>0.720</td><td>0.330</td><td>0.305</td><td>🟢 −54%</td><td>🟢 −8%</td></tr><tr><td>Reth</td><td>0.722</td><td>0.364</td><td>0.377</td><td>🟢 −50%</td><td>🟠 +4%</td></tr><tr><td><strong>Fleet Average</strong></td><td><strong>0.712</strong></td><td><strong>0.442</strong></td><td><strong>0.395</strong></td><td><strong>🟢 −38%</strong></td><td><strong>🟢 −11%</strong></td></tr></tbody></table>
<p>CPU dropped 38% from 25.12.0 to 26.2.0. RocksDB's block cache and bloom filters reduce per-lookup processing compared to LevelDB. Version 26.3.0 added another 11% reduction through jemalloc's more efficient allocation patterns reducing GC-driven CPU spikes. The Nethermind pairing consistently shows the lowest Teku CPU usage across all three versions.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="memory--jvm-analysis">Memory &amp; JVM analysis<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#memory--jvm-analysis" class="hash-link" aria-label="Direct link to Memory &amp; JVM analysis" title="Direct link to Memory &amp; JVM analysis" translate="no">​</a></h2>
<p><img decoding="async" loading="lazy" alt="RSS and JVM heap memory across three versions" src="https://your-docusaurus-site.example.com/assets/images/memory-1b180e20a83db4b2dca15b8c53d99d6b.png" width="2140" height="881" class="img_ev3q"></p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="process-rss-memory">Process RSS memory<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#process-rss-memory" class="hash-link" aria-label="Direct link to Process RSS memory" title="Direct link to Process RSS memory" translate="no">​</a></h3>
<p><strong>Metric:</strong> <code>process_resident_memory_bytes{job="teku"}</code> — total physical memory consumed by the Teku JVM process.</p>
<table><thead><tr><th>EC Pairing</th><th>25.12.0 (GB)</th><th>26.2.0 (GB)</th><th>26.3.0 (GB)</th><th>Δ 25.12→26.3</th></tr></thead><tbody><tr><td>Besu</td><td>8.67</td><td>9.43</td><td>9.20</td><td>🟠 +6.2%</td></tr><tr><td>Erigon</td><td>9.10</td><td>9.31</td><td>9.16</td><td>⚪ +0.6%</td></tr><tr><td>Ethrex</td><td>8.74</td><td>9.48</td><td>9.20</td><td>🟠 +5.3%</td></tr><tr><td>Geth</td><td>9.03</td><td>9.50</td><td>9.10</td><td>⚪ +0.7%</td></tr><tr><td>Nethermind</td><td>9.07</td><td>9.34</td><td>9.13</td><td>⚪ +0.6%</td></tr><tr><td>Reth</td><td>8.90</td><td>9.39</td><td>9.20</td><td>🟠 +3.4%</td></tr><tr><td><strong>Fleet Average</strong></td><td><strong>8.92</strong></td><td><strong>9.41</strong></td><td><strong>9.16</strong></td><td><strong>🟠 +2.8%</strong></td></tr></tbody></table>
<p>RSS peaked in 26.2.0 (+5.5% vs 25.12.0), consistent with RocksDB's larger in-memory structures (block cache, memtables, bloom filters). Version 26.3.0 clawed back roughly half through jemalloc's reduced memory fragmentation, leaving a net +2.8%.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="jvm-heap-memory">JVM heap memory<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#jvm-heap-memory" class="hash-link" aria-label="Direct link to JVM heap memory" title="Direct link to JVM heap memory" translate="no">​</a></h3>
<p><strong>Metric:</strong> <code>jvm_memory_used_bytes{area="heap"}</code> — Java heap utilization.</p>
<table><thead><tr><th>EC Pairing</th><th>25.12.0 (GB)</th><th>26.2.0 (GB)</th><th>26.3.0 (GB)</th><th>Δ 25.12→26.3</th></tr></thead><tbody><tr><td>Besu</td><td>5.33</td><td>5.73</td><td>5.99</td><td>🟠 +12.3%</td></tr><tr><td>Erigon</td><td>5.76</td><td>5.70</td><td>5.82</td><td>⚪ +1.0%</td></tr><tr><td>Ethrex</td><td>5.89</td><td>5.77</td><td>6.05</td><td>🟠 +2.7%</td></tr><tr><td>Geth</td><td>5.38</td><td>6.14</td><td>6.05</td><td>🟠 +12.3%</td></tr><tr><td>Nethermind</td><td>5.36</td><td>6.01</td><td>6.16</td><td>🟠 +15.0%</td></tr><tr><td>Reth</td><td>5.32</td><td>5.78</td><td>6.15</td><td>🟠 +15.5%</td></tr><tr><td><strong>Fleet Average</strong></td><td><strong>5.51</strong></td><td><strong>5.85</strong></td><td><strong>6.04</strong></td><td><strong>🟠 +9.6%</strong></td></tr></tbody></table>
<p>Heap grew steadily, reflecting increased in-heap state caching from the RocksDB JNI bridge and the natural growth of Ethereum's state tree. All values remain well within Teku's default 8 GB max heap.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="jvm-native-memory">JVM native memory<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#jvm-native-memory" class="hash-link" aria-label="Direct link to JVM native memory" title="Direct link to JVM native memory" translate="no">​</a></h3>
<table><thead><tr><th>Version</th><th>Fleet Avg (MB)</th><th>Delta</th></tr></thead><tbody><tr><td>25.12.0</td><td>705</td><td>—</td></tr><tr><td>26.2.0</td><td>739</td><td>🟠 +4.8%</td></tr><tr><td>26.3.0</td><td>736</td><td>⚪ −0.4%</td></tr></tbody></table>
<p>Native (off-heap) memory rose modestly with RocksDB and stabilized with jemalloc.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="jvm-garbage-collection">JVM garbage collection<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#jvm-garbage-collection" class="hash-link" aria-label="Direct link to JVM garbage collection" title="Direct link to JVM garbage collection" translate="no">​</a></h2>
<p><strong>Metric:</strong> <code>rate(jvm_gc_collection_seconds_sum[5m])</code> — fraction of time spent in GC per second. This is a critical Teku-specific metric because GC pauses directly impact block processing latency.</p>
<p><img decoding="async" loading="lazy" alt="GC overhead by EC pairing across three versions" src="https://your-docusaurus-site.example.com/assets/images/gc_overhead-bcbfc290a973ec34413d7880d0ed45be.png" width="1780" height="886" class="img_ev3q"></p>
<table><thead><tr><th>EC Pairing</th><th>25.12.0 (ms/s)</th><th>26.2.0 (ms/s)</th><th>26.3.0 (ms/s)</th><th>Δ 25.12→26.3</th></tr></thead><tbody><tr><td>Besu</td><td>2.77</td><td>2.82</td><td>1.90</td><td>🟢 −31%</td></tr><tr><td>Erigon</td><td>1.59</td><td>3.01</td><td>2.29</td><td>🔴 +44%</td></tr><tr><td>Ethrex</td><td>2.85</td><td>3.87</td><td>1.58</td><td>🟢 −45%</td></tr><tr><td>Geth</td><td>3.29</td><td>3.01</td><td>1.60</td><td>🟢 −51%</td></tr><tr><td>Nethermind</td><td>3.15</td><td>2.14</td><td>1.73</td><td>🟢 −45%</td></tr><tr><td>Reth</td><td>2.83</td><td>2.34</td><td>1.47</td><td>🟢 −48%</td></tr><tr><td><strong>Fleet Average</strong></td><td><strong>2.75</strong></td><td><strong>2.87</strong></td><td><strong>1.76</strong></td><td><strong>🟢 −36%</strong></td></tr></tbody></table>
<p>GC overhead was flat from 25.12.0 to 26.2.0. The introduction of <strong>jemalloc</strong> in 26.3.0 is the standout: fleet-wide GC time dropped 36%. jemalloc reduces heap fragmentation, meaning fewer large-object promotions to old gen and fewer full GC cycles. For a Java client like Teku, this matters directly — GC pauses are a primary contributor to block import latency.</p>
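<p>To put the ms/s unit in context, here is a quick conversion into share of wall-clock time and seconds of GC per day:</p>

```python
# Convert GC overhead from ms of GC per wall-clock second into intuitive units.
def gc_context(ms_per_s):
    fraction = ms_per_s / 1000               # share of wall-clock time in GC
    seconds_per_day = ms_per_s * 86_400 / 1000
    return fraction, seconds_per_day

# Fleet averages from the table: 2.75 ms/s (25.12.0) vs 1.76 ms/s (26.3.0).
before = gc_context(2.75)  # ~0.275% of wall time, ~238 s of GC per day
after = gc_context(1.76)   # ~0.176% of wall time, ~152 s of GC per day
```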
<div class="theme-admonition theme-admonition-note admonition_xJq3 alert alert--secondary"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 14 16"><path fill-rule="evenodd" d="M6.3 5.69a.942.942 0 0 1-.28-.7c0-.28.09-.52.28-.7.19-.18.42-.28.7-.28.28 0 .52.09.7.28.18.19.28.42.28.7 0 .28-.09.52-.28.7a1 1 0 0 1-.7.3c-.28 0-.52-.11-.7-.3zM8 7.99c-.02-.25-.11-.48-.31-.69-.2-.19-.42-.3-.69-.31H6c-.27.02-.48.13-.69.31-.2.2-.3.44-.31.69h1v3c.02.27.11.5.31.69.2.2.42.31.69.31h1c.27 0 .48-.11.69-.31.2-.19.3-.42.31-.69H8V7.98v.01zM7 2.3c-3.14 0-5.7 2.54-5.7 5.68 0 3.14 2.56 5.7 5.7 5.7s5.7-2.55 5.7-5.7c0-3.15-2.56-5.69-5.7-5.69v.01zM7 .98c3.86 0 7 3.14 7 7s-3.14 7-7 7-7-3.12-7-7 3.14-7 7-7z"></path></svg></span>note</div><div class="admonitionContent_BuS1"><p>The Erigon pairing shows an anomalous GC increase across all versions. This may reflect interaction effects with Erigon's execution response patterns rather than a Teku-internal issue.</p></div></div>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="storage-engine--disk-io">Storage engine &amp; disk I/O<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#storage-engine--disk-io" class="hash-link" aria-label="Direct link to Storage engine &amp; disk I/O" title="Direct link to Storage engine &amp; disk I/O" translate="no">​</a></h2>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="host-level-disk-io">Host-level disk I/O<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#host-level-disk-io" class="hash-link" aria-label="Direct link to Host-level disk I/O" title="Direct link to Host-level disk I/O" translate="no">​</a></h3>
<p><strong>Metrics:</strong> <code>rate(node_disk_read_bytes_total[5m])</code> and <code>rate(node_disk_written_bytes_total[5m])</code> — host-level metrics that include both Teku CC and its paired EC. Since EC versions didn't change across measurement windows, deltas are attributable to the Teku version change.</p>
<p><img decoding="async" loading="lazy" alt="Disk read and write rates across three versions" src="https://your-docusaurus-site.example.com/assets/images/disk_io-40fbd5ce837b8511fb1e0e53aee49c3f.png" width="2140" height="881" class="img_ev3q"></p>
<table><thead><tr><th>EC Pairing</th><th>Read 25.12</th><th>Read 26.2</th><th>Read 26.3</th><th>Write 25.12</th><th>Write 26.2</th><th>Write 26.3</th></tr></thead><tbody><tr><td>Besu</td><td>2,625</td><td>483</td><td>390</td><td>6,040</td><td>1,305</td><td>1,272</td></tr><tr><td>Erigon</td><td>424</td><td>470</td><td>350</td><td>1,141</td><td>1,282</td><td>1,294</td></tr><tr><td>Ethrex</td><td>1,127</td><td>603</td><td>392</td><td>2,143</td><td>1,888</td><td>1,303</td></tr><tr><td>Geth</td><td>1,528</td><td>710</td><td>259</td><td>3,172</td><td>2,003</td><td>1,227</td></tr><tr><td>Nethermind</td><td>1,489</td><td>220</td><td>186</td><td>2,583</td><td>1,145</td><td>1,304</td></tr><tr><td>Reth</td><td>1,916</td><td>326</td><td>283</td><td>4,468</td><td>1,243</td><td>1,220</td></tr><tr><td><strong>Fleet Avg</strong></td><td><strong>1,518</strong></td><td><strong>469</strong></td><td><strong>310</strong></td><td><strong>3,258</strong></td><td><strong>1,478</strong></td><td><strong>1,270</strong></td></tr></tbody></table>
<p><em>All values in KB/s.</em></p>
<p>Read throughput dropped <strong>79%</strong> (1,518 → 310 KB/s) and write throughput fell <strong>61%</strong> (3,258 → 1,270 KB/s). RocksDB's block cache and SST-based design are far more read-efficient than LevelDB's. The Besu pairing saw the most dramatic read improvement (−85%).</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="rocksdb-internal-metrics-2620-only">RocksDB internal metrics (26.2.0+ only)<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#rocksdb-internal-metrics-2620-only" class="hash-link" aria-label="Direct link to RocksDB internal metrics (26.2.0+ only)" title="Direct link to RocksDB internal metrics (26.2.0+ only)" translate="no">​</a></h3>
<p>These metrics are only available for versions using RocksDB. Version 25.12.0 ran LevelDB which does not expose equivalent counters.</p>
<table><thead><tr><th>Metric</th><th>26.2.0 Fleet Avg</th><th>26.3.0 Fleet Avg</th><th>Delta</th></tr></thead><tbody><tr><td><code>storage_bytes_read</code> rate (KB/s)</td><td>84.3</td><td>103.6</td><td>🟠 +23%</td></tr><tr><td><code>storage_bytes_written</code> rate (MB/s)</td><td>1.53</td><td>1.35</td><td>🟢 −12%</td></tr><tr><td><code>storage_compact_write_bytes</code> rate (KB/s)</td><td>543.5</td><td>484.0</td><td>🟢 −11%</td></tr></tbody></table>
<p>Write amplification improved in 26.3.0: the raw write rate fell 12% and compaction writes fell 11%. The slight read increase likely reflects the partial sidecar import feature doing more background reads.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="block-import-performance">Block import performance<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#block-import-performance" class="hash-link" aria-label="Direct link to Block import performance" title="Direct link to Block import performance" translate="no">​</a></h2>
<p><strong>Metric:</strong> <code>beacon_block_import_delay_latest</code> — most recent block import delay in milliseconds. This captures end-to-end latency from receiving a block to completing its import into the beacon state.</p>
<p><img decoding="async" loading="lazy" alt="Block import delay by EC pairing across three versions" src="https://your-docusaurus-site.example.com/assets/images/block_import_delay-69329f9ae50f3237c014138e14594f80.png" width="1780" height="886" class="img_ev3q"></p>
<table><thead><tr><th>EC Pairing</th><th>25.12.0 (ms)</th><th>26.2.0 (ms)</th><th>26.3.0 (ms)</th><th>Δ 25.12→26.3</th></tr></thead><tbody><tr><td>Besu</td><td>268</td><td>482</td><td>221</td><td>🟢 −18%</td></tr><tr><td>Erigon</td><td>519</td><td>490</td><td>347</td><td>🟢 −33%</td></tr><tr><td>Ethrex</td><td>772</td><td>303</td><td>209</td><td>🟢 −73%</td></tr><tr><td>Geth</td><td>251</td><td>319</td><td>209</td><td>🟢 −17%</td></tr><tr><td>Nethermind</td><td>239</td><td>513</td><td>444</td><td>🔴 +86%</td></tr><tr><td>Reth</td><td>284</td><td>426</td><td>352</td><td>🔴 +24%</td></tr><tr><td><strong>Fleet Average</strong></td><td><strong>389</strong></td><td><strong>422</strong></td><td><strong>297</strong></td><td><strong>🟢 −24%</strong></td></tr></tbody></table>
<p>Version 26.2.0 was slightly slower fleet-wide — likely early-stage RocksDB tuning and the concurrent DAS backfiller adding load. Version 26.3.0 brought a strong recovery, achieving the <strong>lowest fleet-wide latency at 297 ms</strong>. The Ethrex pairing improved most dramatically (−73%), while the Nethermind pairing shows persistently elevated import times that warrant EC-side investigation.</p>
<div class="theme-admonition theme-admonition-tip admonition_xJq3 alert alert--success"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 12 16"><path fill-rule="evenodd" d="M6.5 0C3.48 0 1 2.19 1 5c0 .92.55 2.25 1 3 1.34 2.25 1.78 2.78 2 4v1h5v-1c.22-1.22.66-1.75 2-4 .45-.75 1-2.08 1-3 0-2.81-2.48-5-5.5-5zm3.64 7.48c-.25.44-.47.8-.67 1.11-.86 1.41-1.25 2.06-1.45 3.23-.02.05-.02.11-.02.17H5c0-.06 0-.13-.02-.17-.2-1.17-.59-1.83-1.45-3.23-.2-.31-.42-.67-.67-1.11C2.44 6.78 2 5.65 2 5c0-2.2 2.02-4 4.5-4 1.22 0 2.36.42 3.22 1.19C10.55 2.94 11 3.94 11 5c0 .66-.44 1.78-.86 2.48zM4 14h5c-.23 1.14-1.3 2-2.5 2s-2.27-.86-2.5-2z"></path></svg></span>Operator impact</div><div class="admonitionContent_BuS1"><p>Lower block import delay means attestations can be created sooner after a block arrives, directly improving validator effectiveness and reducing inclusion delay.</p></div></div>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="p2p-networking--peer-connectivity">P2P networking &amp; peer connectivity<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#p2p-networking--peer-connectivity" class="hash-link" aria-label="Direct link to P2P networking &amp; peer connectivity" title="Direct link to P2P networking &amp; peer connectivity" translate="no">​</a></h2>
<p><img decoding="async" loading="lazy" alt="Peer connectivity metrics across three versions" src="https://your-docusaurus-site.example.com/assets/images/peers-1e4c26552d85845413d4355f1a142091.png" width="1420" height="794" class="img_ev3q"></p>
<table><thead><tr><th>Metric</th><th>25.12.0</th><th>26.2.0</th><th>26.3.0</th><th>Trend</th></tr></thead><tbody><tr><td><code>beacon_peer_count</code> (fleet avg)</td><td>44.9</td><td>45.9</td><td>49.5</td><td>🟢 +10%</td></tr><tr><td><code>libp2p_peers</code> (fleet avg)</td><td>90.8</td><td>91.9</td><td>98.9</td><td>🟢 +9%</td></tr><tr><td><code>discovery_live_nodes</code> (fleet avg)</td><td>160.7</td><td>168.4</td><td>176.4</td><td>🟢 +10%</td></tr><tr><td>Gossip rate (msg/s, fleet avg)</td><td>6.23</td><td>5.99</td><td>7.85</td><td>🟢 +26%</td></tr></tbody></table>
<p>All peer metrics improved steadily. Discovery live nodes grew from ~161 to ~176, suggesting the Discv5 layer is maintaining more active ENR records. Gossip throughput peaked in 26.3.0 — consistent with faster block processing enabling quicker re-gossip.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="data-availability-sampling-das--peerdas">Data availability sampling (DAS) &amp; PeerDAS<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#data-availability-sampling-das--peerdas" class="hash-link" aria-label="Direct link to Data availability sampling (DAS) &amp; PeerDAS" title="Direct link to Data availability sampling (DAS) &amp; PeerDAS" translate="no">​</a></h2>
<p><strong>Metric:</strong> <code>rate(beacon_data_column_sidecar_processing_requests_total[5m])</code> — DAS workload rate.</p>
<p><img decoding="async" loading="lazy" alt="DAS sidecar processing rate by EC pairing" src="https://your-docusaurus-site.example.com/assets/images/das_processing-2b6facd135b0599c4d44633538eeed03.png" width="1780" height="886" class="img_ev3q"></p>
<p>DAS processing rates show significant per-pairing variance. The zero values in 25.12.0 for Erigon/Ethrex reflect sync issues that resolved in later versions. Version 26.3.0 allows nodes with &gt;50% custody requirements to begin importing blocks after downloading only 50% of sidecars — an architectural improvement that doesn't compromise data availability guarantees.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="open-file-descriptors--the-trade-off">Open file descriptors — the trade-off<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#open-file-descriptors--the-trade-off" class="hash-link" aria-label="Direct link to Open file descriptors — the trade-off" title="Direct link to Open file descriptors — the trade-off" translate="no">​</a></h2>
<p><img decoding="async" loading="lazy" alt="Open file descriptors by EC pairing across three versions" src="https://your-docusaurus-site.example.com/assets/images/file_descriptors-9a01c845ae303b6bf2da759e464720f2.png" width="1780" height="886" class="img_ev3q"></p>
<table><thead><tr><th>EC Pairing</th><th>25.12.0</th><th>26.2.0</th><th>26.3.0</th><th>Δ 25.12→26.3</th></tr></thead><tbody><tr><td>Besu</td><td>480</td><td>957</td><td>1,329</td><td>🔴 +177%</td></tr><tr><td>Erigon</td><td>780</td><td>931</td><td>1,375</td><td>🔴 +76%</td></tr><tr><td>Ethrex</td><td>801</td><td>1,031</td><td>1,299</td><td>🔴 +62%</td></tr><tr><td>Geth</td><td>643</td><td>939</td><td>1,319</td><td>🔴 +105%</td></tr><tr><td>Nethermind</td><td>682</td><td>751</td><td>1,245</td><td>🔴 +83%</td></tr><tr><td>Reth</td><td>527</td><td>787</td><td>1,279</td><td>🔴 +143%</td></tr><tr><td><strong>Fleet Average</strong></td><td><strong>652</strong></td><td><strong>899</strong></td><td><strong>1,308</strong></td><td><strong>🔴 +101%</strong></td></tr></tbody></table>
<p>File descriptors doubled — the most visible trade-off of the RocksDB migration. RocksDB's LSM-tree architecture keeps many SST files open simultaneously for efficient reads. Growth continued from 26.2.0 to 26.3.0 as the databases accumulated more SST files through compaction.</p>
<div class="theme-admonition theme-admonition-warning admonition_xJq3 alert alert--warning"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 16 16"><path fill-rule="evenodd" d="M8.893 1.5c-.183-.31-.52-.5-.887-.5s-.703.19-.886.5L.138 13.499a.98.98 0 0 0 0 1.001c.193.31.53.501.886.501h13.964c.367 0 .704-.19.877-.5a1.03 1.03 0 0 0 .01-1.002L8.893 1.5zm.133 11.497H6.987v-2.003h2.039v2.003zm0-3.004H6.987V5.987h2.039v4.006z"></path></svg></span>Operator action required</div><div class="admonitionContent_BuS1"><p>Ensure <code>ulimit -n</code> is set to at least <strong>65536</strong> on hosts running Teku 26.x. The default 1024 on many Linux distributions will cause failures. Docker containers should pass <code>--ulimit nofile=65536:65536</code>.</p></div></div>
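<p>A persistent way to apply this on a systemd-managed host (a sketch; the unit name <code>teku.service</code> is an assumption, adjust to your deployment):</p>

```shell
# Raise the open-file limit for the Teku unit via a systemd drop-in.
# The unit name is illustrative; check `systemctl list-units` for yours.
sudo mkdir -p /etc/systemd/system/teku.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/teku.service.d/limits.conf
[Service]
LimitNOFILE=65536
EOF
sudo systemctl daemon-reload
sudo systemctl restart teku.service

# Verify the limit actually applied to the running process
# (assumes Teku is the only Java process on the host):
grep "open files" /proc/$(pidof java)/limits
```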
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="summary--recommendations">Summary &amp; recommendations<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#summary--recommendations" class="hash-link" aria-label="Direct link to Summary &amp; recommendations" title="Direct link to Summary &amp; recommendations" translate="no">​</a></h2>
<table><thead><tr><th>Dimension</th><th>25.12.0</th><th>26.2.0</th><th>26.3.0</th></tr></thead><tbody><tr><td>CPU Efficiency</td><td>Baseline</td><td>🟢 Major improvement</td><td>🟢 <strong>Best</strong></td></tr><tr><td>Memory Footprint</td><td>🟢 Lowest</td><td>🟠 +5.5%</td><td>🟠 +2.8% (recovering)</td></tr><tr><td>GC Overhead</td><td>Baseline</td><td>⚪ Similar</td><td>🟢 <strong>Best (−36%)</strong></td></tr><tr><td>Disk I/O</td><td>🔴 Highest</td><td>🟢 Major improvement</td><td>🟢 <strong>Best</strong></td></tr><tr><td>Block Import Speed</td><td>Moderate</td><td>🟠 Slight regression</td><td>🟢 <strong>Best (297ms)</strong></td></tr><tr><td>Peer Connectivity</td><td>Good</td><td>Good</td><td>🟢 <strong>Best</strong></td></tr><tr><td>File Descriptors</td><td>🟢 Lowest</td><td>🟠 +38%</td><td>🔴 +101% (monitor)</td></tr><tr><td>Storage Backend</td><td>LevelDB</td><td>RocksDB</td><td>RocksDB + jemalloc</td></tr><tr><td>Stability</td><td>🟢 Stable</td><td>🟢 Stable</td><td>🟢 Mandatory (SSZ fix)</td></tr></tbody></table>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="key-takeaways">Key takeaways<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#key-takeaways" class="hash-link" aria-label="Direct link to Key takeaways" title="Direct link to Key takeaways" translate="no">​</a></h3>
<ol>
<li class="">
<p><strong>Upgrade to 26.3.0 is mandatory.</strong> Beyond the SSZ serialization bug fix, 26.3.0 delivers the best performance profile across nearly every dimension.</p>
</li>
<li class="">
<p><strong>Verify file descriptor limits.</strong> With 26.3.0 averaging 1,308 open FDs (and likely higher under peak load), ensure systems allow at least 65,536 file descriptors.</p>
</li>
<li class="">
<p><strong>Monitor RocksDB compaction.</strong> Add <code>storage_compact_write_bytes</code> and <code>storage_bytes_written</code> to your dashboards. Abnormal compaction spikes can indicate database health issues.</p>
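<p>A starting point for dashboard panels or alert expressions on these counters (a sketch; the 5-minute rate window is an assumption to calibrate against your own baseline):</p>

```promql
# Per-node compaction write throughput; sustained values far above the
# fleet norm can indicate a database health issue.
rate(storage_compact_write_bytes[5m])

# Total storage write throughput on the same node, for comparison.
rate(storage_bytes_written[5m])
```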
</li>
<li class="">
<p><strong>Consider DAS backfiller tuning.</strong> If 26.x nodes show elevated CPU during backfill, <code>--Xp2p-reworked-sidecar-custody-sync-batch-size=1</code> can throttle the backfiller.</p>
</li>
<li class="">
<p><strong>Watch the Nethermind pairing.</strong> Block import delays are persistently higher with Nethermind across 26.x versions — this warrants investigation on the EL side.</p>
</li>
</ol>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="download">Download the full report<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#download" class="hash-link" aria-label="Direct link to Download the full report" title="Direct link to Download the full report" translate="no">​</a></h2>
<p>The complete analysis with additional detail on JVM thread pools, executor queue depths, and RocksDB internal counters is available as a styled PDF:</p>
<p>📄 <a href="https://your-docusaurus-site.example.com/downloads/teku_version_comparison_report.pdf" target="_blank">Download full report (PDF)</a></p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="methodology-notes">Methodology notes<a href="https://your-docusaurus-site.example.com/blog/teku-version-25-12-0-26-2-0-26-3-0-comparison-resources#methodology-notes" class="hash-link" aria-label="Direct link to Methodology notes" title="Direct link to Methodology notes" translate="no">​</a></h2>
<p>All data sourced from the <code>prometheus-cold</code> datasource (Org 6) in the StereumLabs Grafana instance. Query pattern: <code>avg by (ec_client) (avg_over_time(metric{cc_client="teku", cc_version="...", role="cc", job="teku"}[14d:1h]))</code> evaluated as instant queries at the end of each 14-day window. Rate metrics use <code>rate(...[5m])</code> inside the subquery. Host-level metrics filtered with <code>device!~"lo|veth.*|docker.*|br.*"</code> to exclude virtual interfaces. All nodes run on NDC2 bare-metal (Vienna), eliminating cloud noise.</p>
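<p>As a concrete instantiation of that pattern (the metric name here is a placeholder; substitute any counter or gauge from the dashboards):</p>

```promql
# 14-day average per execution-client pairing for Teku 26.3.0,
# evaluated as an instant query at the end of the window.
avg by (ec_client) (
  avg_over_time(
    metric_name{cc_client="teku", cc_version="26.3.0", role="cc", job="teku"}[14d:1h]
  )
)
```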
<p>For details on our label conventions and how to build your own dashboards against our data, see <a href="https://docs.stereumlabs.com/docs/dashboards/build-your-own" target="_blank" rel="noopener noreferrer" class="">Build your own dashboards</a>.</p>]]></content:encoded>
            <category>Teku</category>
            <category>consensus client</category>
            <category>version comparison</category>
            <category>RocksDB</category>
            <category>jemalloc</category>
            <category>PeerDAS</category>
            <category>Fulu</category>
            <category>performance</category>
        </item>
        <item>
            <title><![CDATA[EthCC[9] talk recap: AI-powered observability for Ethereum staking]]></title>
            <link>https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking</link>
            <guid>https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking</guid>
            <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A recap of our EthCC[9] presentation in Cannes covering StereumLabs, AI-powered infrastructure monitoring, Fusaka/PeerDAS runtime metrics, and what we're building next.]]></description>
            <content:encoded><![CDATA[<p>A recap of our EthCC[9] presentation in Cannes: what StereumLabs is, how we use AI on top of our monitoring data, and what Fusaka actually did to hardware across 36 client pairings.</p>
<p><img decoding="async" loading="lazy" alt="EthCC[9] Talk: AI-Powered Observability for Ethereum Staking" src="https://your-docusaurus-site.example.com/assets/images/ethcc9-talk-thumbnail-bc69eec266ddac6e9360f6f71e57071f.jpg" width="2000" height="1126" class="img_ev3q"></p>
<p>On April 2, 2026 we presented StereumLabs at <a href="https://ethcc.io/ethcc-9/agenda/meet-fusakapeerdas-runtime-metrics" target="_blank" rel="noopener noreferrer" class="">EthCC[9] in Cannes</a>. The talk covered what we've built, why AI on raw metrics alone produces useless output, and concrete Fusaka/PeerDAS runtime data from our bare-metal fleet.</p>
<p>This post is a written companion to that talk. If you prefer watching, the recording is embedded below. The <a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#slides" class="">slide deck is available as a PDF download</a>.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="video">Video<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#video" class="hash-link" aria-label="Direct link to Video" title="Direct link to Video" translate="no">​</a></h2>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/1Eoz8O-WZOY" title="StereumLabs EthCC[9] Talk" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-stereumlabs-platform">The StereumLabs platform<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#the-stereumlabs-platform" class="hash-link" aria-label="Direct link to The StereumLabs platform" title="Direct link to The StereumLabs platform" translate="no">​</a></h2>
<p>StereumLabs is our observability and analytics platform for Ethereum staking infrastructure. We run every relevant client combination on dedicated bare-metal hardware: 6 execution layer clients (Geth, Nethermind, Besu, Erigon, Reth, Ethrex), 6 consensus layer clients (Lighthouse, Prysm, Teku, Nimbus, Lodestar, Grandine), plus the standalone Erigon + Caplin pairing. That's 37 combinations, monitored 24/7 with 90-day rolling metrics.</p>
<p>All nodes run on isolated bare metal. No shared cloud instances, no noisy-neighbor effects. When we measure performance differences between clients, the data is reproducible and directly comparable.</p>
<p>The platform provides 20+ dashboards covering resource consumption (CPU, RAM, disk, network), client-specific metrics (attestation rates, block processing times, peer counts, GC behavior), and system logs. Client development teams already have free access to the dashboards. The project is supported by an Ethereum Foundation grant.</p>
<p>But dashboards have limits. And that's where AI comes in.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="ai-chatbot-natural-language-meets-live-data">AI Chatbot: natural language meets live data<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#ai-chatbot-natural-language-meets-live-data" class="hash-link" aria-label="Direct link to AI Chatbot: natural language meets live data" title="Direct link to AI Chatbot: natural language meets live data" translate="no">​</a></h2>
<p>We built an AI chatbot connected directly to our full monitoring dataset. Instead of navigating dashboards and writing queries, users ask questions in plain English:</p>
<ul>
<li class="">"Compare disk growth between Geth and Erigon over the last 30 days"</li>
<li class="">"How did the Prysm update from v7.1.1 to v7.1.2 affect resource usage?"</li>
<li class="">"Which consensus client uses the most bandwidth as a supernode?"</li>
</ul>
<p>The result is a structured analysis with actual numbers, across all EL pairings, in seconds.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-instruction-set">The Instruction Set<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#the-instruction-set" class="hash-link" aria-label="Direct link to The Instruction Set" title="Direct link to The Instruction Set" translate="no">​</a></h3>
<p>Here's what most people get wrong about AI in infrastructure monitoring: the model alone doesn't produce useful results. If you point a language model at raw Prometheus metrics, it doesn't know which queries to run, what normal ranges look like, or how to interpret differences between client architectures.</p>
<p>That's why we've built a continuously evolving Instruction Set. It encodes which metrics matter for which client combination, what normal ranges look like per pairing, how to interpret architectural differences (Go vs. Java GC behavior, Rust memory models), and which queries to run in which order to build a meaningful analysis.</p>
<p>Without the Instruction Set, the AI produces generic answers. With it, it produces the kind of analysis that would take an experienced engineer hours to assemble manually. We expand it continuously as we encounter new patterns, new client versions, and new edge cases. It's built from years of running these clients professionally.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="proof-prysm-v711-to-v712">Proof: Prysm v7.1.1 to v7.1.2<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#proof-prysm-v711-to-v712" class="hash-link" aria-label="Direct link to Proof: Prysm v7.1.1 to v7.1.2" title="Direct link to Proof: Prysm v7.1.1 to v7.1.2" translate="no">​</a></h3>
<p>When the Prysm team shipped v7.1.2, we asked the chatbot one question and got a full resource impact analysis across all 6 EL pairings:</p>
<table><thead><tr><th>Metric</th><th>Result</th></tr></thead><tbody><tr><td><strong>Memory (RSS)</strong></td><td>Dropped 5.1% on average. Biggest improvement with Geth pairing (-8.8%)</td></tr><tr><td><strong>Block processing</strong></td><td>Improved 25% overall. Erigon pairing went from 403ms to 90ms (-78%)</td></tr><tr><td><strong>Peer count</strong></td><td>Stable at ~71 across both versions. No regression</td></tr><tr><td><strong>CPU</strong></td><td>Mixed results, EL-dependent. Reth dropped 21%, Besu increased 28%</td></tr></tbody></table>
<p>This analysis would normally take hours of manual work. The chatbot produced it from a single question. The full report is published on our blog: <a class="" href="https://your-docusaurus-site.example.com/blog/prysm-version-7-1-1-and-7-1-2-comparison-resources">Prysm v7.1.1 &amp; 7.1.2 resources</a>.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="ai-alerting-from-something-is-wrong-to-heres-why">AI Alerting: from "something is wrong" to "here's why"<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#ai-alerting-from-something-is-wrong-to-heres-why" class="hash-link" aria-label="Direct link to AI Alerting: from &quot;something is wrong&quot; to &quot;here's why&quot;" title="Direct link to AI Alerting: from &quot;something is wrong&quot; to &quot;here's why&quot;" translate="no">​</a></h2>
<p>The chatbot is great for proactive analysis. But what about when things go wrong at 3am? That's where AI Alerting comes in.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="two-stage-architecture">Two-stage architecture<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#two-stage-architecture" class="hash-link" aria-label="Direct link to Two-stage architecture" title="Direct link to Two-stage architecture" translate="no">​</a></h3>
<p><strong>Stage 1: Near-real-time threshold alerts.</strong> Monitors hard thresholds like attestation rate, disk usage, peer count, and missed blocks. Fires within seconds. No AI inference delay, no additional cost. If the AI layer is slow or unavailable, the basic alert still arrives.</p>
<p><strong>Stage 2: AI root-cause analysis.</strong> When a threshold alert fires, a webhook triggers the AI. It pulls relevant metrics and logs, correlates the data against our neutral baseline from all 37 client combinations, and delivers a root-cause analysis with actionable next steps. Delivered in 5 to 15 seconds.</p>
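<p>The two stages can be sketched roughly as follows (all names are illustrative, and the Stage 2 analysis is a stub standing in for the LLM call; this is a sketch of the flow, not our implementation):</p>

```python
# Sketch of the two-stage alerting flow described above.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str       # e.g. "AttestationRateLow"
    labels: dict    # e.g. {"cc_client": "prysm"}
    value: float    # observed metric value

def stage1_threshold(value: float, threshold: float) -> bool:
    """Stage 1: hard threshold check. Fires in seconds, no AI involved."""
    return value < threshold

def stage2_enrich(alert: Alert, baseline: dict) -> dict:
    """Stage 2: triggered by the Stage 1 webhook. Compares the observation
    against the fleet baseline so the model can scope the root cause."""
    base = baseline.get(alert.name)
    deviation = None if base is None else round(alert.value - base, 2)
    return {
        "alert": alert.name,
        "observed": alert.value,
        "baseline": base,
        "deviation": deviation,
        # Crude scoping stub: a large deviation from the fleet baseline
        # points at a local problem rather than a network-wide one.
        "scope": "local" if deviation is not None and abs(deviation) > 1.0 else "network-wide",
    }

# Usage: attestation rate fell to 93% while the fleet baseline sits at 99%.
alert = Alert("AttestationRateLow", {"cc_client": "prysm"}, 93.0)
if stage1_threshold(alert.value, 95.0):   # Stage 1 fires immediately
    report = stage2_enrich(alert, {"AttestationRateLow": 99.0})
```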
<p>The result: operators don't just get "attestation rate dropped below 95%." They get: "Your attestation rate dropped because Geth's peer count fell to 3, likely due to a network partition. Your Prysm instance is healthy. Recommended action: check firewall rules and restart the EL client."</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="built-in-baseline-instant-context">Built-in baseline: instant context<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#built-in-baseline-instant-context" class="hash-link" aria-label="Direct link to Built-in baseline: instant context" title="Direct link to Built-in baseline: instant context" translate="no">​</a></h3>
<p>What makes our alerting especially useful is the neutral baseline dataset from our own fleet. When an alert fires, the AI automatically compares against data from all 37 client combinations.</p>
<p>Every alert answers three questions: Is this happening across the Ethereum network right now? Is it specific to this client version? Or is it unique to your local environment?</p>
<p>That distinction between "the whole network is seeing elevated block processing times after a fork" and "your Geth instance is the only one with this problem" is the difference between waiting it out and taking immediate action.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="security-monitoring">Security monitoring<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#security-monitoring" class="hash-link" aria-label="Direct link to Security monitoring" title="Direct link to Security monitoring" translate="no">​</a></h3>
<p>The same two-stage architecture applies to security events:</p>
<ul>
<li class=""><strong>SSH login checks</strong> — authorized key? Expected source IP? Expected time window?</li>
<li class=""><strong>Service restart analysis</strong> — when an execution client restarts, the AI verifies that fee recipient addresses haven't been changed. A compromised operator could redirect staking rewards without anyone noticing for days.</li>
<li class=""><strong>Configuration drift detection</strong> — unauthorized processes, unexpected port openings, validator key access patterns.</li>
</ul>
<p>Traditional monitoring tells you "Geth restarted." Our AI layer tells you "Geth restarted, fee recipient address changed from 0xABC to 0xDEF, this was not initiated through the operator's usual deployment pipeline, severity: critical."</p>
<p>For operators staking millions in ETH, the difference between detecting a compromised reward address in minutes versus days is the difference between a security incident and a financial disaster.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="fusaka--peerdas-what-the-hardfork-did-to-hardware">Fusaka + PeerDAS: what the hardfork did to hardware<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#fusaka--peerdas-what-the-hardfork-did-to-hardware" class="hash-link" aria-label="Direct link to Fusaka + PeerDAS: what the hardfork did to hardware" title="Direct link to Fusaka + PeerDAS: what the hardfork did to hardware" translate="no">​</a></h2>
<p>A significant portion of the talk covered our Fusaka hardfork measurements. We compared two 14-day windows (before and after the December 3, 2025 activation) across all 36 non-supernode client pairings.</p>
<p>The fleet-level headline numbers:</p>
<table><thead><tr><th>Metric</th><th>Change</th><th>What happened</th></tr></thead><tbody><tr><td><strong>Network RX</strong></td><td><strong>-60%</strong></td><td>PeerDAS in action: nodes sample slices instead of downloading full blobs</td></tr><tr><td><strong>CPU</strong></td><td><strong>+30%</strong></td><td>Expected trade-off: sampling routines cost compute</td></tr><tr><td><strong>Memory</strong></td><td><strong>-8%</strong></td><td>Less blob data held in RAM</td></tr><tr><td><strong>Disk reads</strong></td><td><strong>-53%</strong></td><td>Fewer full-blob fetches from disk</td></tr></tbody></table>
<p>Notable client outliers: Nimbus CPU jumped +257% (most compute-intensive PeerDAS implementation), Lighthouse was the only consensus client to reduce CPU (-13%), and Besu saw the largest memory drop (-35%).</p>
<p>The most expensive pairing post-fork was Nimbus + Reth at 16.78% average CPU utilization.</p>
<p>The full analysis with per-client breakdowns, heatmaps, daily trends, and PromQL queries is available as a dedicated blog post: <a class="" href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes">Fusaka hardfork: hardware impact on non-supernodes</a>.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="deployment-models">Deployment models<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#deployment-models" class="hash-link" aria-label="Direct link to Deployment models" title="Direct link to Deployment models" translate="no">​</a></h2>
<p>One thing we hear constantly from professional operators: "I'm interested, but I can't send my metrics to your cloud." That's why StereumLabs supports multiple deployment options:</p>
<ul>
<li class=""><strong>SaaS</strong> — hosted by us in our ISO 27001 certified environment. Best for smaller operators and researchers.</li>
<li class=""><strong>Alerting-as-a-Service (pull model)</strong> — you expose a Prometheus endpoint and we scrape it. Your data is never stored in our systems; it meets our baseline data only at the AI inference layer.</li>
<li class=""><strong>On-premise</strong> — the entire StereumLabs stack deployed on your infrastructure. Your own API keys. Data never leaves your network.</li>
</ul>
<p>The key message: your infrastructure data doesn't become someone else's competitive intelligence.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="current-status">Current status<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#current-status" class="hash-link" aria-label="Direct link to Current status" title="Direct link to Current status" translate="no">​</a></h2>
<table><thead><tr><th>Component</th><th>Status</th></tr></thead><tbody><tr><td>Dashboards (20+)</td><td>Live, all 37 client combinations</td></tr><tr><td>AI Chatbot</td><td>Working proof of concept against live data</td></tr><tr><td>AI Alerting</td><td>Q2 2026</td></tr><tr><td>Security Monitoring</td><td>Q2 2026</td></tr><tr><td>On-premise deployment</td><td>Ready</td></tr></tbody></table>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="get-in-touch">Get in touch<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#get-in-touch" class="hash-link" aria-label="Direct link to Get in touch" title="Direct link to Get in touch" translate="no">​</a></h2>
<p>We're looking for node operators who want to try the AI chatbot, client development teams interested in automated cross-EL impact analysis, and staking protocols looking for monitoring and security standards across their operator ecosystem.</p>
<p>Reach out at <a href="mailto:contact@stereumlabs.com" target="_blank" rel="noopener noreferrer" class="">contact@stereumlabs.com</a> or visit <a href="https://stereumlabs.com/" target="_blank" rel="noopener noreferrer" class="">stereumlabs.com</a>.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="slides">Slides<a href="https://your-docusaurus-site.example.com/blog/ethcc9-talk-recap-ai-observability-ethereum-staking#slides" class="hash-link" aria-label="Direct link to Slides" title="Direct link to Slides" translate="no">​</a></h2>
<p>The full slide deck from the talk is available for download:</p>
<p>📄 <a href="https://your-docusaurus-site.example.com/downloads/StereumLabs_EthCC9.pdf" target="_blank">Download slides (PDF)</a></p>]]></content:encoded>
            <category>AI</category>
            <category>EthCC</category>
            <category>talk</category>
            <category>observability</category>
            <category>Fusaka</category>
            <category>PeerDAS</category>
            <category>alerting</category>
            <category>security</category>
        </item>
        <item>
            <title><![CDATA[Fusaka hardfork: hardware impact on non-supernodes]]></title>
            <link>https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes</link>
            <guid>https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes</guid>
            <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[We measured CPU, memory, disk I/O, and network across all 36 CC×EC pairings on our non-supernode fleet — here's what the Fusaka hardfork actually did to hardware consumption.]]></description>
            <content:encoded><![CDATA[<p>We measured CPU, memory, disk I/O, and network across all 36 CC×EC pairings on our non-supernode fleet — here's what the Fusaka hardfork actually did to hardware consumption.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-is-fusaka">What is Fusaka?<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#what-is-fusaka" class="hash-link" aria-label="Direct link to What is Fusaka?" title="Direct link to What is Fusaka?" translate="no">​</a></h2>
<p>Ethereum's Fusaka hardfork activated on <strong>December 3, 2025 at 21:49 UTC</strong> (slot 13,164,544). It was the second hard fork of 2025 after Pectra (May 2025) and arguably the most consequential upgrade since the Merge. The name combines "Fulu" (consensus layer, named after a star) and "Osaka" (execution layer, named after the host city of Devcon 2025).</p>
<p>The headline feature is <strong>PeerDAS</strong> (Peer Data Availability Sampling, <a href="https://eips.ethereum.org/EIPS/eip-7594" target="_blank" rel="noopener noreferrer" class="">EIP-7594</a>) — a fundamental change in how Ethereum verifies blob data. Instead of every node downloading and verifying every blob, nodes now only need to sample small slices, verifying that the full data exists without actually possessing all of it. Vitalik Buterin called it "literally sharding" — Ethereum reaching consensus on blocks without requiring any single node to see more than a tiny fraction of the data.</p>
<p>Beyond PeerDAS, Fusaka shipped approximately 12 additional EIPs:</p>
<ul>
<li class=""><strong>EIP-7935</strong> — raises the default block gas limit, targeting ~60 million gas</li>
<li class=""><strong>EIP-7825</strong> — introduces a per-transaction gas cap of 16.78 million to prevent single-transaction DoS attacks</li>
<li class=""><strong>EIP-7892</strong> — the Blob Parameter Only (BPO) mechanism, allowing blob capacity increases between hard forks</li>
<li class=""><strong>EIP-7951</strong> — secp256r1 precompile for device-native signing and passkeys</li>
<li class=""><strong>EOF (EVM Object Format)</strong> — a cleaner, more efficient programming structure for smart contracts</li>
<li class=""><strong>EIP-7918</strong> — stabilizes blob fees</li>
<li class=""><strong>EIP-7939</strong> — the CLZ (count leading zeros) opcode for more efficient cryptographic operations</li>
</ul>
<p>Two scheduled BPO forks followed:</p>
<ul>
<li class=""><strong>BPO-1</strong> (~December 9–10): blob target/max raised from 6/9 to 10/15</li>
<li class=""><strong>BPO-2</strong> (~December 23 – January 7): blob target/max raised to 14/21</li>
</ul>
<p>We wanted to know: what did all this actually do to the hardware running our nodes?</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="methodology">Methodology<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#methodology" class="hash-link" aria-label="Direct link to Methodology" title="Direct link to Methodology" translate="no">​</a></h2>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="fleet-setup">Fleet setup<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#fleet-setup" class="hash-link" aria-label="Direct link to Fleet setup" title="Direct link to Fleet setup" translate="no">​</a></h3>
<p>StereumLabs runs approximately <strong>90 hosts</strong> split across GCP cloud instances and NDC2 bare-metal nodes in Vienna, Austria. Each non-supernode host runs one consensus client (CC) paired with one execution client (EC), covering all 36 possible combinations of:</p>
<ul>
<li class=""><strong>Consensus clients:</strong> Grandine, Lighthouse, Lodestar, Nimbus, Prysm, Teku</li>
<li class=""><strong>Execution clients:</strong> Besu, Erigon, Ethrex, Geth, Nethermind, Reth</li>
</ul>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="data-source">Data source<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#data-source" class="hash-link" aria-label="Direct link to Data source" title="Direct link to Data source" translate="no">​</a></h3>
<p>All metrics come from our <code>prometheus-cold</code> datasource (UID <code>aez9ck4wz05q8e</code>, Org 6), which covers all available metrics without retention or delay restrictions. Node-level system metrics are collected via <code>prometheus-node-exporter</code> at a 15-second scrape interval.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="comparison-windows">Comparison windows<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#comparison-windows" class="hash-link" aria-label="Direct link to Comparison windows" title="Direct link to Comparison windows" translate="no">​</a></h3>
<p>We compared two 14-day windows:</p>
<table><thead><tr><th>Period</th><th>Range</th><th>Label</th></tr></thead><tbody><tr><td>Before Fusaka</td><td>November 19 – December 3, 2025</td><td>Pre-fork baseline</td></tr><tr><td>After Fusaka</td><td>December 4 – December 18, 2025</td><td>Post-fork + BPO-1</td></tr></tbody></table>
<p>Both windows use <code>avg_over_time(...[14d:1h])</code> — a 14-day average at 1-hour subquery resolution, computed as an instant query at the boundary timestamp. This smooths out transient spikes while preserving meaningful shifts.</p>
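<p>Applied to CPU, for example, the pre-fork window reduces to a single instant query evaluated at the fork boundary timestamp (standard node-exporter metric; label filters as described in this post would be added to the selector):</p>

```promql
# Fleet-average CPU utilization over the 14-day pre-fork window.
avg(
  avg_over_time(
    (
      1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
    )[14d:1h]
  )
)
```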
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="filtering">Filtering<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#filtering" class="hash-link" aria-label="Direct link to Filtering" title="Direct link to Filtering" translate="no">​</a></h3>
<p>Supernodes are excluded via the label filter <code>cc_client!~".*-super"</code>. Only instances with <code>role=~"cc|ec"</code> are included. Network metrics exclude loopback and virtual interfaces (<code>device!~"lo|veth.*|docker.*|br.*"</code>).</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="fleet-level-results">Fleet-level results<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#fleet-level-results" class="hash-link" aria-label="Direct link to Fleet-level results" title="Direct link to Fleet-level results" translate="no">​</a></h2>
<p>Here's what happened across the entire non-supernode fleet:</p>
<table><thead><tr><th>Metric</th><th>Before</th><th>After</th><th>Change</th></tr></thead><tbody><tr><td>CPU utilization (avg)</td><td>4.66%</td><td>6.08%</td><td><strong>+30%</strong></td></tr><tr><td>Memory used (avg)</td><td>6.06 GiB</td><td>5.59 GiB</td><td><strong>−8%</strong></td></tr><tr><td>Network RX (avg)</td><td>2.20 MiB/s</td><td>0.87 MiB/s</td><td><strong>−60%</strong></td></tr><tr><td>Network TX (avg)</td><td>0.20 MiB/s</td><td>0.18 MiB/s</td><td><strong>−13%</strong></td></tr><tr><td>Disk read rate (avg)</td><td>2.58 MiB/s</td><td>1.21 MiB/s</td><td><strong>−53%</strong></td></tr><tr><td>Disk write rate (avg)</td><td>1.73 MiB/s</td><td>1.92 MiB/s</td><td><strong>+11%</strong></td></tr></tbody></table>
<p><img decoding="async" loading="lazy" alt="Fleet-level hardware changes across the Fusaka hardfork" src="https://your-docusaurus-site.example.com/assets/images/fleet_summary-ce362e657d05051e0087cfade23c76f3.png" width="1775" height="784" class="img_ev3q"></p>
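<p>The relative changes above follow directly from the before/after averages. A quick sanity check, with values copied from the table (TX comes out to −10% here rather than the −13% in the table because the published averages are rounded to two decimals):</p>

```python
before = {"cpu_pct": 4.66, "mem_gib": 6.06, "rx_mibs": 2.20,
          "tx_mibs": 0.20, "disk_read_mibs": 2.58, "disk_write_mibs": 1.73}
after = {"cpu_pct": 6.08, "mem_gib": 5.59, "rx_mibs": 0.87,
         "tx_mibs": 0.18, "disk_read_mibs": 1.21, "disk_write_mibs": 1.92}

def pct_change(b, a):
    """Relative change from b to a, rounded to the nearest percent."""
    return round((a - b) / b * 100)

changes = {k: pct_change(before[k], after[k]) for k in before}
# cpu_pct: +30, mem_gib: -8, rx_mibs: -60, disk_read_mibs: -53, disk_write_mibs: +11
```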
<div class="theme-admonition theme-admonition-tip admonition_xJq3 alert alert--success"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 12 16"><path fill-rule="evenodd" d="M6.5 0C3.48 0 1 2.19 1 5c0 .92.55 2.25 1 3 1.34 2.25 1.78 2.78 2 4v1h5v-1c.22-1.22.66-1.75 2-4 .45-.75 1-2.08 1-3 0-2.81-2.48-5-5.5-5zm3.64 7.48c-.25.44-.47.8-.67 1.11-.86 1.41-1.25 2.06-1.45 3.23-.02.05-.02.11-.02.17H5c0-.06 0-.13-.02-.17-.2-1.17-.59-1.83-1.45-3.23-.2-.31-.42-.67-.67-1.11C2.44 6.78 2 5.65 2 5c0-2.2 2.02-4 4.5-4 1.22 0 2.36.42 3.22 1.19C10.55 2.94 11 3.94 11 5c0 .66-.44 1.78-.86 2.48zM4 14h5c-.23 1.14-1.3 2-2.5 2s-2.27-.86-2.5-2z"></path></svg></span>The short version</div><div class="admonitionContent_BuS1"><p>PeerDAS delivered exactly what it promised: dramatically less network bandwidth and lower memory at the cost of modestly higher CPU. Disk reads halved. Operators paying per-GB egress on cloud providers should see meaningful cost savings.</p></div></div>
<p>Let's dig into each metric.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="network-bandwidth-the-biggest-win">Network bandwidth: the biggest win<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#network-bandwidth-the-biggest-win" class="hash-link" aria-label="Direct link to Network bandwidth: the biggest win" title="Direct link to Network bandwidth: the biggest win" translate="no">​</a></h2>
<p>The most striking result is the <strong>60% drop in network receive bandwidth</strong> — from 2.20 MiB/s to 0.87 MiB/s on average. This is PeerDAS in action: nodes now verify only sampled slices of blob data rather than downloading entire blobs.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="daily-trend">Daily trend<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#daily-trend" class="hash-link" aria-label="Direct link to Daily trend" title="Direct link to Daily trend" translate="no">​</a></h3>
<p>Looking at the daily time series, the decline actually began <strong>before the fork itself</strong> — around November 28–29 — suggesting some nodes started running fork-compatible client versions with PeerDAS-like optimizations ahead of the December 3 activation. On the fork day itself, the fleet-average RX was already down to 0.26 MiB/s.</p>
<p>The post-fork trend shows a brief recovery as nodes settled into the new protocol, stabilizing around 1.4 MiB/s by mid-December — still a <strong>36% reduction</strong> from the pre-decline baseline of ~2.2 MiB/s.</p>
<p><img decoding="async" loading="lazy" alt="Fleet average network receive bandwidth — daily trend" src="https://your-docusaurus-site.example.com/assets/images/network_rx_trend-24db51516409121ff6c6be59af4f38ff.png" width="1749" height="607" class="img_ev3q"></p>
<table><thead><tr><th>Date</th><th>RX (MiB/s)</th><th>Event</th></tr></thead><tbody><tr><td>Nov 19</td><td>3.04</td><td></td></tr><tr><td>Nov 28</td><td>2.38</td><td>Pre-fork decline begins</td></tr><tr><td>Dec 3</td><td>0.26</td><td><strong>Fusaka fork</strong></td></tr><tr><td>Dec 4</td><td>1.09</td><td>Post-fork stabilization</td></tr><tr><td>Dec 10</td><td>0.59</td><td>BPO-1</td></tr><tr><td>Dec 18</td><td>1.39</td><td>Settled baseline</td></tr></tbody></table>
<p>Network transmit bandwidth also decreased, though more modestly — from 0.20 to 0.18 MiB/s (−13%).</p>
<div class="theme-admonition theme-admonition-info admonition_xJq3 alert alert--info"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 14 16"><path fill-rule="evenodd" d="M7 2.3c3.14 0 5.7 2.56 5.7 5.7s-2.56 5.7-5.7 5.7A5.71 5.71 0 0 1 1.3 8c0-3.14 2.56-5.7 5.7-5.7zM7 1C3.14 1 0 4.14 0 8s3.14 7 7 7 7-3.14 7-7-3.14-7-7-7zm1 3H6v5h2V4zm0 6H6v2h2v-2z"></path></svg></span>Why this matters for operators</div><div class="admonitionContent_BuS1"><p>For cloud instances where traffic is metered (GCP egress, for example, runs $0.085–$0.12/GB depending on region and volume), a node running at 2.2 MiB/s averages ~5.7 TB/month of inbound traffic. At the new 0.87 MiB/s rate, that drops to ~2.2 TB/month — a potential saving of $300–$400/month per node, depending on provider and plan.</p></div></div>
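<p>The conversion behind that estimate can be sketched as follows (a 30-day month is assumed; a strict MiB-to-decimal-TB conversion lands slightly above the rounded figures in the note, which treat MiB and MB interchangeably):</p>

```python
SECONDS_PER_MONTH = 86_400 * 30  # assumed 30-day month
MIB = 1024 ** 2                  # MiB in bytes
TB = 1000 ** 4                   # decimal terabyte in bytes

def monthly_tb(rate_mib_s):
    """Convert an average transfer rate in MiB/s to TB per 30-day month."""
    return rate_mib_s * MIB * SECONDS_PER_MONTH / TB

before_tb = monthly_tb(2.20)   # pre-fork average RX
after_tb = monthly_tb(0.87)    # post-fork average RX
saved_tb = before_tb - after_tb

# At the $0.085-$0.12/GB range quoted above for metered traffic:
low, high = saved_tb * 1000 * 0.085, saved_tb * 1000 * 0.12
# roughly $300-$430/month saved per node under these assumptions
```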
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="cpu-utilization-moderate-increase-one-clear-outlier">CPU utilization: moderate increase, one clear outlier<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#cpu-utilization-moderate-increase-one-clear-outlier" class="hash-link" aria-label="Direct link to CPU utilization: moderate increase, one clear outlier" title="Direct link to CPU utilization: moderate increase, one clear outlier" translate="no">​</a></h2>
<p>Fleet-average CPU rose from <strong>4.66% to 6.08%</strong> — a 30% relative increase, but in absolute terms still very modest. This is the expected trade-off: PeerDAS introduces data availability sampling routines (compute) in exchange for reduced data transfer (bandwidth).</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-fork-day-spike">The fork-day spike<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#the-fork-day-spike" class="hash-link" aria-label="Direct link to The fork-day spike" title="Direct link to The fork-day spike" translate="no">​</a></h3>
<p>The daily time series reveals two notable spikes:</p>
<ul>
<li class=""><strong>December 4: 14.82%</strong> — the day after fork activation. All clients simultaneously processed new consensus rules, PeerDAS bootstrapping, and EOF-related state transitions. This spike was transient and resolved within 24 hours.</li>
<li class=""><strong>December 10: 10.09%</strong> — coincides with BPO-1 raising the blob target/max from 6/9 to 10/15. The higher blob capacity required additional sampling work.</li>
</ul>
<p>After settling, the new baseline sits around <strong>5.5–5.7%</strong> — roughly 1 percentage point above pre-fork levels.</p>
<p><img decoding="async" loading="lazy" alt="Fleet average CPU and memory — daily trend" src="https://your-docusaurus-site.example.com/assets/images/cpu_mem_trend-c292ba84c771acfd2e501aae1c3e59ff.png" width="1774" height="697" class="img_ev3q"></p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="per-consensus-client-breakdown">Per-consensus-client breakdown<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#per-consensus-client-breakdown" class="hash-link" aria-label="Direct link to Per-consensus-client breakdown" title="Direct link to Per-consensus-client breakdown" translate="no">​</a></h3>
<p>Not all consensus clients handled Fusaka equally:</p>
<table><thead><tr><th>CC client</th><th>Before</th><th>After</th><th>Change</th></tr></thead><tbody><tr><td>Grandine</td><td>5.84%</td><td>7.06%</td><td>+21%</td></tr><tr><td>Lighthouse</td><td>3.81%</td><td>3.30%</td><td><strong>−13%</strong></td></tr><tr><td>Lodestar</td><td>5.80%</td><td>5.85%</td><td>+1%</td></tr><tr><td>Nimbus</td><td>2.63%</td><td>9.40%</td><td><strong>+257%</strong></td></tr><tr><td>Prysm</td><td>4.38%</td><td>4.86%</td><td>+11%</td></tr><tr><td>Teku</td><td>5.66%</td><td>5.96%</td><td>+5%</td></tr></tbody></table>
<p><img decoding="async" loading="lazy" alt="CPU change by consensus client" src="https://your-docusaurus-site.example.com/assets/images/cc_cpu-368f9f6d056fb8f610c6aedcc6e3defc.png" width="1415" height="696" class="img_ev3q"></p>
<div class="theme-admonition theme-admonition-warning admonition_xJq3 alert alert--warning"><div class="admonitionHeading_Gvgb"><span class="admonitionIcon_Rf37"><svg viewBox="0 0 16 16"><path fill-rule="evenodd" d="M8.893 1.5c-.183-.31-.52-.5-.887-.5s-.703.19-.886.5L.138 13.499a.98.98 0 0 0 0 1.001c.193.31.53.501.886.501h13.964c.367 0 .704-.19.877-.5a1.03 1.03 0 0 0 .01-1.002L8.893 1.5zm.133 11.497H6.987v-2.003h2.039v2.003zm0-3.004H6.987V5.987h2.039v4.006z"></path></svg></span>Nimbus CPU anomaly</div><div class="admonitionContent_BuS1"><p><strong>Nimbus</strong> stands out dramatically with a <strong>+257% CPU increase</strong> (2.63% → 9.40%). This suggests its PeerDAS implementation is currently more compute-intensive than competitors. The <code>nimbus + reth</code> pairing hit <strong>16.78%</strong> post-fork — the highest of any combination in the fleet. Nimbus operators should monitor their CPU headroom closely, especially on resource-constrained setups.</p></div></div>
<p>On the positive side, <strong>Lighthouse</strong> was the only consensus client to <em>decrease</em> CPU usage (−13%), suggesting either an efficient PeerDAS implementation or concurrent optimizations shipped in its fork-compatible release.</p>
<p><strong>Lodestar</strong> remained essentially flat (+1%), and <strong>Prysm</strong> (+11%), <strong>Teku</strong> (+5%), and <strong>Grandine</strong> (+21%) showed moderate increases — all within comfortable bounds for typical node hardware.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="per-execution-client-breakdown">Per-execution-client breakdown<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#per-execution-client-breakdown" class="hash-link" aria-label="Direct link to Per-execution-client breakdown" title="Direct link to Per-execution-client breakdown" translate="no">​</a></h3>
<p>Execution clients saw a more uniform increase, consistent with the raised gas limit (EIP-7935) and new EVM opcodes (EOF):</p>
<table><thead><tr><th>EC client</th><th>Before</th><th>After</th><th>Change</th></tr></thead><tbody><tr><td>Besu</td><td>2.58%</td><td>5.19%</td><td><strong>+101%</strong></td></tr><tr><td>Erigon</td><td>3.62%</td><td>5.83%</td><td>+61%</td></tr><tr><td>Ethrex</td><td>4.07%</td><td>6.04%</td><td>+48%</td></tr><tr><td>Geth</td><td>3.99%</td><td>6.53%</td><td><strong>+64%</strong></td></tr><tr><td>Nethermind</td><td>6.83%</td><td>5.08%</td><td><strong>−26%</strong></td></tr><tr><td>Reth</td><td>5.56%</td><td>7.89%</td><td>+42%</td></tr></tbody></table>
<p><img decoding="async" loading="lazy" alt="CPU change by execution client" src="https://your-docusaurus-site.example.com/assets/images/ec_cpu-7d2dfdd6d657c0fe0d5ed88e1ab25b53.png" width="1415" height="696" class="img_ev3q"></p>
<p><strong>Besu</strong> (+101%) and <strong>Geth</strong> (+64%) saw the largest relative increases. For Besu, the doubling (from a very low 2.58% baseline) likely reflects the new gas limit activating code paths that were previously idle. Geth's increase is consistent with heavier block processing under the raised 60M gas ceiling.</p>
<p><strong>Nethermind</strong> bucked the trend entirely with a <strong>−26% decrease</strong> — a surprising result that may indicate a coincidental version update with CPU optimizations during the measurement window.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-full-picture-cpu-heatmap-across-all-36-pairings">The full picture: CPU heatmap across all 36 pairings<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#the-full-picture-cpu-heatmap-across-all-36-pairings" class="hash-link" aria-label="Direct link to The full picture: CPU heatmap across all 36 pairings" title="Direct link to The full picture: CPU heatmap across all 36 pairings" translate="no">​</a></h3>
<p>This heatmap shows the percentage change in CPU utilization for every CC×EC combination. Red means higher CPU after Fusaka, green means lower:</p>
<p><img decoding="async" loading="lazy" alt="CPU utilization change per CC×EC pairing" src="https://your-docusaurus-site.example.com/assets/images/cpu_heatmap-74964bdbd01959392a57e2849ded1f7c.png" width="1416" height="875" class="img_ev3q"></p>
<p>The Nimbus row is unmistakable — deep red across all EC pairings, with <code>nimbus + ethrex</code> showing an extreme +1060% (from 0.83% to 9.63%, though the low baseline suggests the pairing may have been partially offline pre-fork). The Nethermind column is notably green across most CC pairings, reinforcing that Nethermind itself saw a concurrent optimization.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="memory-a-modest-but-welcome-decrease">Memory: a modest but welcome decrease<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#memory-a-modest-but-welcome-decrease" class="hash-link" aria-label="Direct link to Memory: a modest but welcome decrease" title="Direct link to Memory: a modest but welcome decrease" translate="no">​</a></h2>
<p>Average memory usage dropped from <strong>6.06 GiB to 5.59 GiB</strong> (−8%) across the fleet. This is consistent with PeerDAS's design: nodes no longer hold entire blobs in memory, only the sampled slices they need for verification.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="daily-trend-1">Daily trend<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#daily-trend-1" class="hash-link" aria-label="Direct link to Daily trend" title="Direct link to Daily trend" translate="no">​</a></h3>
<p>The CPU + memory daily trend chart above also shows the memory trajectory (orange line). There's a sharp drop on <strong>December 4</strong> (from 6.22 to 5.26 GiB) — the first full day after the fork — followed by a gradual recovery over the next two weeks as caches and new protocol state accumulated. By December 18, memory was back to 6.27 GiB, suggesting the initial drop was partly transient (e.g., cleared blob caches from the pre-fork protocol).</p>
<p>The 14-day average still shows a net decrease because the early post-fork days pull the average down significantly.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="per-consensus-client-breakdown-1">Per-consensus-client breakdown<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#per-consensus-client-breakdown-1" class="hash-link" aria-label="Direct link to Per-consensus-client breakdown" title="Direct link to Per-consensus-client breakdown" translate="no">​</a></h3>
<table><thead><tr><th>CC client</th><th>Before (GiB)</th><th>After (GiB)</th><th>Change</th></tr></thead><tbody><tr><td>Grandine</td><td>5.79</td><td>4.80</td><td><strong>−17%</strong></td></tr><tr><td>Lighthouse</td><td>5.41</td><td>5.42</td><td>0%</td></tr><tr><td>Lodestar</td><td>7.97</td><td>6.82</td><td>−14%</td></tr><tr><td>Nimbus</td><td>4.89</td><td>3.91</td><td><strong>−20%</strong></td></tr><tr><td>Prysm</td><td>6.14</td><td>6.51</td><td>+6%</td></tr><tr><td>Teku</td><td>6.39</td><td>6.24</td><td>−2%</td></tr></tbody></table>
<p><img decoding="async" loading="lazy" alt="Memory change by consensus client" src="https://your-docusaurus-site.example.com/assets/images/cc_mem-7499fe4250f171c71386bfd77d51b1c9.png" width="1415" height="696" class="img_ev3q"></p>
<p><strong>Nimbus</strong> (−20%) and <strong>Grandine</strong> (−17%) saw the largest memory reductions. Interestingly, Nimbus traded memory savings for CPU — a classic compute-vs-memory trade-off in its PeerDAS implementation.</p>
<p><strong>Prysm</strong> was the only consensus client to <em>increase</em> memory (+6%), suggesting it keeps more sampling state in memory than peers. Given Prysm's CPU increase was moderate (+11%), this is a reasonable engineering choice.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="per-execution-client-breakdown-1">Per-execution-client breakdown<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#per-execution-client-breakdown-1" class="hash-link" aria-label="Direct link to Per-execution-client breakdown" title="Direct link to Per-execution-client breakdown" translate="no">​</a></h3>
<table><thead><tr><th>EC client</th><th>Before (GiB)</th><th>After (GiB)</th><th>Change</th></tr></thead><tbody><tr><td>Besu</td><td>6.59</td><td>4.31</td><td><strong>−35%</strong></td></tr><tr><td>Erigon</td><td>5.82</td><td>5.96</td><td>+2%</td></tr><tr><td>Ethrex</td><td>6.50</td><td>6.13</td><td>−6%</td></tr><tr><td>Geth</td><td>7.09</td><td>6.51</td><td>−8%</td></tr><tr><td>Nethermind</td><td>6.15</td><td>5.37</td><td>−13%</td></tr><tr><td>Reth</td><td>4.70</td><td>5.33</td><td>+13%</td></tr></tbody></table>
<p><img decoding="async" loading="lazy" alt="Memory change by execution client" src="https://your-docusaurus-site.example.com/assets/images/ec_mem-55a0874c0fb4d37f68a7f284b82e9d2b.png" width="1415" height="696" class="img_ev3q"></p>
<p><strong>Besu</strong> saw a dramatic <strong>−35% memory drop</strong> — from 6.59 to 4.31 GiB. This is the largest single-client change in the dataset and likely reflects aggressive cache invalidation during the fork transition. <strong>Reth</strong> moved in the opposite direction (+13%), possibly due to its database engine accumulating more state under the new protocol.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="disk-io-reads-halved-writes-slightly-up">Disk I/O: reads halved, writes slightly up<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#disk-io-reads-halved-writes-slightly-up" class="hash-link" aria-label="Direct link to Disk I/O: reads halved, writes slightly up" title="Direct link to Disk I/O: reads halved, writes slightly up" translate="no">​</a></h2>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="read-rate">Read rate<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#read-rate" class="hash-link" aria-label="Direct link to Read rate" title="Direct link to Read rate" translate="no">​</a></h3>
<p>Disk reads dropped <strong>53%</strong> — from 2.58 to 1.21 MiB/s. This is directly attributable to PeerDAS: nodes no longer need to retrieve full blobs from disk for verification. The reduction was visible across nearly all pairings.</p>
<p>Some notable per-pairing changes:</p>
<ul>
<li class=""><strong>teku + ethrex</strong> went from 26.18 MiB/s (an extreme outlier pre-fork) to 2.86 MiB/s — an <strong>89% drop</strong></li>
<li class=""><strong>lighthouse + reth</strong> dropped from 1.71 to 0.55 MiB/s (−68%)</li>
<li class=""><strong>grandine + reth</strong> moved in the opposite direction — from 0.08 to 3.23 MiB/s — suggesting a change in Grandine's blob retrieval strategy for PeerDAS</li>
</ul>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="write-rate">Write rate<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#write-rate" class="hash-link" aria-label="Direct link to Write rate" title="Direct link to Write rate" translate="no">​</a></h3>
<p>Disk writes increased <strong>11%</strong> — from 1.73 to 1.92 MiB/s. The increase is modest and likely comes from:</p>
<ol>
<li class=""><strong>PeerDAS sampling metadata</strong> — nodes now store sampling proofs and column indices</li>
<li class=""><strong>EOF state changes</strong> — the EVM Object Format introduces new bytecode validation and storage patterns</li>
<li class=""><strong>Higher gas limit</strong> — more transactions per block means more state writes</li>
</ol>
<p><img decoding="async" loading="lazy" alt="Disk I/O rate change" src="https://your-docusaurus-site.example.com/assets/images/disk_io-7ad5e3f1e9177cc2791083cb1a709622.png" width="1055" height="606" class="img_ev3q"></p>
<p>This is a small overhead relative to the substantial read savings.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="full-36-pairing-matrices">Full 36-pairing matrices<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#full-36-pairing-matrices" class="hash-link" aria-label="Direct link to Full 36-pairing matrices" title="Direct link to Full 36-pairing matrices" translate="no">​</a></h2>
<p>For the detail-oriented, here are the complete CPU and memory matrices across all CC×EC combinations.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="cpu-utilization--before-fusaka-">CPU utilization — before Fusaka (%)<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#cpu-utilization--before-fusaka-" class="hash-link" aria-label="Direct link to CPU utilization — before Fusaka (%)" title="Direct link to CPU utilization — before Fusaka (%)" translate="no">​</a></h3>
<table><thead><tr><th>CC \ EC</th><th>Besu</th><th>Erigon</th><th>Ethrex</th><th>Geth</th><th>Nethermind</th><th>Reth</th></tr></thead><tbody><tr><td>Grandine</td><td>2.02</td><td>4.04</td><td>4.92</td><td>5.32</td><td>11.25</td><td>7.48</td></tr><tr><td>Lighthouse</td><td>2.39</td><td>3.20</td><td>2.74</td><td>3.65</td><td>5.20</td><td>4.43</td></tr><tr><td>Lodestar</td><td>2.32</td><td>3.75</td><td>9.62</td><td>3.66</td><td>7.68</td><td>7.73</td></tr><tr><td>Nimbus</td><td>1.41</td><td>2.42</td><td>0.83</td><td>2.40</td><td>5.32</td><td>3.40</td></tr><tr><td>Prysm</td><td>2.58</td><td>3.56</td><td>3.42</td><td>3.35</td><td>4.95</td><td>7.02</td></tr><tr><td>Teku</td><td>4.74</td><td>4.63</td><td>4.23</td><td>5.55</td><td>8.81</td><td>4.11</td></tr></tbody></table>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="cpu-utilization--after-fusaka-">CPU utilization — after Fusaka (%)<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#cpu-utilization--after-fusaka-" class="hash-link" aria-label="Direct link to CPU utilization — after Fusaka (%)" title="Direct link to CPU utilization — after Fusaka (%)" translate="no">​</a></h3>
<table><thead><tr><th>CC \ EC</th><th>Besu</th><th>Erigon</th><th>Ethrex</th><th>Geth</th><th>Nethermind</th><th>Reth</th></tr></thead><tbody><tr><td>Grandine</td><td>5.54</td><td>7.28</td><td>6.58</td><td>7.71</td><td>6.62</td><td>9.05</td></tr><tr><td>Lighthouse</td><td>1.28</td><td>1.89</td><td>3.87</td><td>4.64</td><td>3.98</td><td>4.13</td></tr><tr><td>Lodestar</td><td>8.88</td><td>5.69</td><td>5.42</td><td>5.37</td><td>3.46</td><td>6.27</td></tr><tr><td>Nimbus</td><td>6.57</td><td>8.21</td><td>9.63</td><td>8.46</td><td>6.77</td><td>16.78</td></tr><tr><td>Prysm</td><td>4.00</td><td>5.17</td><td>4.42</td><td>5.05</td><td>4.11</td><td>6.40</td></tr><tr><td>Teku</td><td>5.37</td><td>5.97</td><td>7.36</td><td>7.08</td><td>4.98</td><td>5.03</td></tr></tbody></table>
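<p>The heatmap values shown earlier can be reproduced directly from these two matrices. A sketch using the Nimbus row, with values copied from the before/after CPU tables:</p>

```python
ecs = ["Besu", "Erigon", "Ethrex", "Geth", "Nethermind", "Reth"]
# Nimbus row, copied from the before/after CPU utilization tables (%).
nimbus_before = [1.41, 2.42, 0.83, 2.40, 5.32, 3.40]
nimbus_after = [6.57, 8.21, 9.63, 8.46, 6.77, 16.78]

def pct_change(b, a):
    """Relative change from b to a, rounded to the nearest percent."""
    return round((a - b) / b * 100)

nimbus_row = {ec: pct_change(b, a)
              for ec, b, a in zip(ecs, nimbus_before, nimbus_after)}
# Ethrex is the extreme outlier at +1060%, driven by its very low pre-fork baseline
```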
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="memory-usage--before-fusaka-gib">Memory usage — before Fusaka (GiB)<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#memory-usage--before-fusaka-gib" class="hash-link" aria-label="Direct link to Memory usage — before Fusaka (GiB)" title="Direct link to Memory usage — before Fusaka (GiB)" translate="no">​</a></h3>
<table><thead><tr><th>CC \ EC</th><th>Besu</th><th>Erigon</th><th>Ethrex</th><th>Geth</th><th>Nethermind</th><th>Reth</th></tr></thead><tbody><tr><td>Grandine</td><td>6.12</td><td>5.70</td><td>5.16</td><td>6.18</td><td>7.15</td><td>4.46</td></tr><tr><td>Lighthouse</td><td>6.15</td><td>5.01</td><td>6.52</td><td>6.73</td><td>4.92</td><td>4.40</td></tr><tr><td>Lodestar</td><td>7.27</td><td>6.64</td><td>11.33</td><td>7.76</td><td>7.35</td><td>7.46</td></tr><tr><td>Nimbus</td><td>5.66</td><td>5.21</td><td>2.94</td><td>6.42</td><td>5.94</td><td>3.17</td></tr><tr><td>Prysm</td><td>6.31</td><td>6.08</td><td>8.42</td><td>6.70</td><td>5.66</td><td>4.28</td></tr><tr><td>Teku</td><td>8.01</td><td>6.51</td><td>4.59</td><td>8.74</td><td>7.20</td><td>4.93</td></tr></tbody></table>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="memory-usage--after-fusaka-gib">Memory usage — after Fusaka (GiB)<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#memory-usage--after-fusaka-gib" class="hash-link" aria-label="Direct link to Memory usage — after Fusaka (GiB)" title="Direct link to Memory usage — after Fusaka (GiB)" translate="no">​</a></h3>
<table><thead><tr><th>CC \ EC</th><th>Besu</th><th>Erigon</th><th>Ethrex</th><th>Geth</th><th>Nethermind</th><th>Reth</th></tr></thead><tbody><tr><td>Grandine</td><td>3.68</td><td>5.59</td><td>5.48</td><td>5.07</td><td>4.43</td><td>4.96</td></tr><tr><td>Lighthouse</td><td>2.48</td><td>3.57</td><td>7.36</td><td>7.10</td><td>5.58</td><td>6.41</td></tr><tr><td>Lodestar</td><td>5.61</td><td>7.30</td><td>7.05</td><td>8.21</td><td>5.45</td><td>7.32</td></tr><tr><td>Nimbus</td><td>2.24</td><td>5.74</td><td>3.29</td><td>4.73</td><td>4.97</td><td>2.47</td></tr><tr><td>Prysm</td><td>6.35</td><td>6.95</td><td>7.05</td><td>7.33</td><td>5.22</td><td>6.18</td></tr><tr><td>Teku</td><td>4.74</td><td>5.96</td><td>6.26</td><td>8.67</td><td>7.69</td><td>4.14</td></tr></tbody></table>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="things-to-watch">Things to watch<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#things-to-watch" class="hash-link" aria-label="Direct link to Things to watch" title="Direct link to Things to watch" translate="no">​</a></h2>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="nimbus-cpu-trajectory">Nimbus CPU trajectory<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#nimbus-cpu-trajectory" class="hash-link" aria-label="Direct link to Nimbus CPU trajectory" title="Direct link to Nimbus CPU trajectory" translate="no">​</a></h3>
<p>The +257% CPU increase on Nimbus deserves monitoring over subsequent weeks and versions. If this is a temporary bootstrapping cost (PeerDAS column sync, initial sampling table construction), it should decline. If it persists, Nimbus operators on lower-spec hardware may need to re-evaluate their headroom.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="bpo-2-effects-not-captured-here">BPO-2 effects (not captured here)<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#bpo-2-effects-not-captured-here" class="hash-link" aria-label="Direct link to BPO-2 effects (not captured here)" title="Direct link to BPO-2 effects (not captured here)" translate="no">​</a></h3>
<p>BPO-2 was scheduled for late December 2025 to early January 2026, raising blob parameters from 10/15 to 14/21. Our post-fork window (Dec 4–18) captures BPO-1 but not BPO-2. A follow-up analysis with a wider window would reveal whether the second parameter increase added further CPU or bandwidth pressure.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="memory-recovery-trend">Memory recovery trend<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#memory-recovery-trend" class="hash-link" aria-label="Direct link to Memory recovery trend" title="Direct link to Memory recovery trend" translate="no">​</a></h3>
<p>The sharp initial memory drop post-fork appears to be partly transient — memory was trending back upward by December 18. We'll continue monitoring whether this recovery plateaus below the pre-fork baseline or converges back to original levels.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-nimbus--ethrex-anomaly">The nimbus + ethrex anomaly<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#the-nimbus--ethrex-anomaly" class="hash-link" aria-label="Direct link to The nimbus + ethrex anomaly" title="Direct link to The nimbus + ethrex anomaly" translate="no">​</a></h3>
<p>The <code>nimbus + ethrex</code> pairing showed anomalously low CPU pre-fork (0.83%), which may indicate the pairing was partially offline or in a degraded state during the pre-fork window. The post-fork value of 9.63% is more consistent with fleet norms, suggesting the pairing recovered during or after the fork transition.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="client-version-confounding">Client version confounding<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#client-version-confounding" class="hash-link" aria-label="Direct link to Client version confounding" title="Direct link to Client version confounding" translate="no">​</a></h3>
<p>Client versions were not pinned across the fork boundary — some pairings may have undergone version upgrades alongside Fusaka. This means we cannot cleanly attribute all changes to the fork itself. In particular, Nethermind's −26% CPU decrease may reflect a version update rather than a Fusaka effect.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="bottom-line">Bottom line<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#bottom-line" class="hash-link" aria-label="Direct link to Bottom line" title="Direct link to Bottom line" translate="no">​</a></h2>
<p>Fusaka delivered on its core promise for non-supernode operators: <strong>dramatically lower bandwidth requirements</strong> (−60% receive, −13% transmit) and <strong>modestly lower memory</strong> (−8%), at the cost of a <strong>moderate CPU increase</strong> (+30%) that remains well within typical hardware headroom. Disk reads dropped by half.</p>
<p>For operators managing hosting costs, the network savings alone are significant — especially on cloud providers where egress is metered. The CPU overhead is real but manageable: even the worst-case fleet average post-fork (6.08%) leaves substantial headroom on modern hardware.</p>
<p>The main action item is <strong>monitoring Nimbus deployments</strong> for elevated CPU, and <strong>tracking BPO-2 effects</strong> as blob capacity continues to expand.</p>
<hr>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="appendix-promql-queries-used">Appendix: PromQL queries used<a href="https://your-docusaurus-site.example.com/blog/fusaka-hardfork-hardware-impact-non-supernodes#appendix-promql-queries-used" class="hash-link" aria-label="Direct link to Appendix: PromQL queries used" title="Direct link to Appendix: PromQL queries used" translate="no">​</a></h2>
<p>For reproducibility, here are the exact queries run against <code>prometheus-cold</code>:</p>
<div class="language-promql codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockTitle_OeMC">CPU utilization (non-idle) — per pairing</div><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-promql codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">avg by (cc_client, ec_client) (</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  avg_over_time(</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    (1 - rate(node_cpu_seconds_total{mode="idle", role=~"cc|ec", cc_client!~".*-super"}[1h]))</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    [14d:1h]</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  )</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">)</span><br></span></code></pre></div></div>
<div class="language-promql codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockTitle_OeMC">Memory used (GiB) — per pairing</div><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-promql codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">avg by (cc_client, ec_client) (</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  avg_over_time(</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    (node_memory_MemTotal_bytes{role=~"cc|ec", cc_client!~".*-super"}</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">     - node_memory_MemAvailable_bytes{role=~"cc|ec", cc_client!~".*-super"})</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    [14d:1h]</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  )</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">) / 1024 / 1024 / 1024</span><br></span></code></pre></div></div>
<div class="language-promql codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockTitle_OeMC">Disk read rate (MiB/s) — per pairing</div><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-promql codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">avg by (cc_client, ec_client) (</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  avg_over_time(</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    rate(node_disk_read_bytes_total{role=~"cc|ec", cc_client!~".*-super"}[1h])</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    [14d:1h]</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  )</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">) / 1024 / 1024</span><br></span></code></pre></div></div>
<div class="language-promql codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#393A34;--prism-background-color:#f6f8fa"><div class="codeBlockTitle_OeMC">Network receive rate (MiB/s) — per pairing</div><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-promql codeBlock_bY9V thin-scrollbar" style="color:#393A34;background-color:#f6f8fa"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#393A34"><span class="token plain">avg by (cc_client, ec_client) (</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  avg_over_time(</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    rate(node_network_receive_bytes_total{role=~"cc|ec", cc_client!~".*-super",</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">         device!~"lo|veth.*|docker.*|br.*"}[1h])</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">    [14d:1h]</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">  )</span><br></span><span class="token-line" style="color:#393A34"><span class="token plain">) / 1024 / 1024</span><br></span></code></pre></div></div>
<p><strong>Datasource:</strong> <code>prometheus-cold</code> (UID: <code>aez9ck4wz05q8e</code>, Org 6)</p>
<p><strong>Before snapshot:</strong> instant query at <code>2025-12-03T00:00:00Z</code></p>
<p><strong>After snapshot:</strong> instant query at <code>2025-12-18T00:00:00Z</code></p>
<p><strong>Daily time series:</strong> range query from <code>2025-11-19T00:00:00Z</code> to <code>2025-12-18T00:00:00Z</code>, step = 86400s</p>]]></content:encoded>
            <category>AI</category>
            <category>analysis</category>
            <category>fusaka</category>
            <category>hardfork</category>
            <category>PeerDAS</category>
            <category>hardware</category>
            <category>resources</category>
        </item>
        <item>
            <title><![CDATA[Prysm v7.1.1 & 7.1.2 resources]]></title>
            <link>https://your-docusaurus-site.example.com/blog/prysm-version-7-1-1-and-7-1-2-comparison-resources</link>
            <guid>https://your-docusaurus-site.example.com/blog/prysm-version-7-1-1-and-7-1-2-comparison-resources</guid>
            <pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A look at Prysm v7.1.1 and v7.1.2 and a comparison of their resource consumption]]></description>
            <content:encoded><![CDATA[<p>Let's take a look at Prysm v7.1.1 and v7.1.2 and compare their resource consumption.</p>
<p>We at StereumLabs run Prysm continuously across all six supported execution-layer clients — Besu, Erigon, Ethrex, Geth, Nethermind, and Reth — on isolated bare-metal nodes. When v7.1.2 landed, we pulled 90-day averages from our Prometheus-cold datasource to see exactly what changed. Here's the short version.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-improved">What improved<a href="https://your-docusaurus-site.example.com/blog/prysm-version-7-1-1-and-7-1-2-comparison-resources#what-improved" class="hash-link" aria-label="Direct link to What improved" title="Direct link to What improved" translate="no">​</a></h2>
<p><strong>Memory is the clearest win.</strong> Process RSS dropped by 5.1% on average (3.36 → 3.19 GB), with the biggest gains when paired with Geth (−8.8%) and Besu (−7.8%). Heap in-use stayed flat, which points to the savings coming from outside the Go heap — likely reduced stack allocations or more efficient mmap regions.</p>
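<p>As a sanity check on our numbers, the 90-day RSS and heap averages can be reproduced with queries along these lines — a sketch that assumes the <code>ec_client</code> labelling we use elsewhere on this blog and a <code>job="prysm"</code> scrape label, and relies on the standard Go client metrics Prysm exports (<code>process_resident_memory_bytes</code>, <code>go_memstats_heap_inuse_bytes</code>):</p>
<pre><code class="language-promql"># Average resident memory (GB) per EL pairing over the 90-day window
avg by (ec_client) (
  avg_over_time(process_resident_memory_bytes{job="prysm"}[90d])
) / 1e9

# Heap in-use for comparison — a flat heap alongside a falling RSS
# points at savings outside the Go heap
avg by (ec_client) (
  avg_over_time(go_memstats_heap_inuse_bytes{job="prysm"}[90d])
) / 1e9</code></pre>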
<p><strong>Block processing time also improved significantly</strong> — down 25% on average (142 → 107 ms). The headline number is dominated by the Erigon pairing, which went from 403 ms to 90 ms (−78%). For the production-grade trio of Geth, Nethermind, and Reth, results were stable to slightly better.</p>
<p><strong>Network connectivity was unaffected.</strong> Average libp2p peer count held at ~71 across both versions.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-to-watch">What to watch<a href="https://your-docusaurus-site.example.com/blog/prysm-version-7-1-1-and-7-1-2-comparison-resources#what-to-watch" class="hash-link" aria-label="Direct link to What to watch" title="Direct link to What to watch" translate="no">​</a></h2>
<p>CPU utilization was mixed and varied by EL pairing: Reth dropped 21%, while Besu increased 28%. Because we measure system-wide CPU across the full node, EL-side changes and PeerDAS load from the Fusaka hard fork both add noise here, so we see no clear regression in Prysm itself.</p>
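<p>For reference, the system-wide CPU figure comes from the same node-level query style used in our other reports — roughly the following, where the <code>role</code> and <code>ec_client</code> labels are our fleet's conventions rather than anything Prysm emits:</p>
<pre><code class="language-promql"># Non-idle CPU fraction per EL pairing; this measures the whole node,
# so EL-side changes and PeerDAS load are included in the result
avg by (ec_client) (
  1 - rate(node_cpu_seconds_total{mode="idle", role=~"cc|ec"}[1h])
)</code></pre>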
<p>GC pause duration ticked up marginally (+5.5%, from 0.735 to 0.775 ms) — well within acceptable bounds and unlikely to affect attestation or block proposal timelines.</p>
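<p>The GC pause figure is the mean pause derived from the standard Go summary metric <code>go_gc_duration_seconds</code> — again a sketch, assuming a <code>job="prysm"</code> label on our scrape config:</p>
<pre><code class="language-promql"># Mean GC pause (ms): total pause time divided by pause count
1000 * rate(go_gc_duration_seconds_sum{job="prysm"}[1h])
     / rate(go_gc_duration_seconds_count{job="prysm"}[1h])</code></pre>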
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="bottom-line">Bottom line<a href="https://your-docusaurus-site.example.com/blog/prysm-version-7-1-1-and-7-1-2-comparison-resources#bottom-line" class="hash-link" aria-label="Direct link to Bottom line" title="Direct link to Bottom line" translate="no">​</a></h2>
<p>v7.1.2 is a straightforward upgrade for production operators: lower memory footprint, faster block processing on the pairings that matter most, and no regressions in network behavior. If you're running Ethrex, treat its +138% block processing regression as an outlier specific to that experimental client, not a Prysm issue.</p>
<hr>
<p>📄 <strong>Full report with per-client breakdowns and methodology:</strong> <a href="https://your-docusaurus-site.example.com/assets/files/prysm_resource_report-b1359b2c8ced083dc5d2dd56bc9abb86.pdf" target="_blank" class="">Download PDF</a></p>]]></content:encoded>
            <category>AI</category>
            <category>analysis</category>
            <category>prysm</category>
        </item>
    </channel>
</rss>