There has been growing discussion around the possibility of increasing Ethereum's gas throughput, either by raising the gas limit or reducing slot time. The key argument in favor of this is that the hardware requirements for running a validator have steadily decreased over the past four years.
Additionally, two approaches to increasing Ethereum's gas throughput have surfaced:
- EIP-7782: a reduction of the block (slot) time of the Ethereum protocol.
- EIP-7783: a "gradual increase" mechanism to slowly raise the gas limit over time.
In this post, I will analyze the potential worst-case and average-case scenarios across bandwidth, computational, and storage requirements if the gas limit were to be doubled.
Recap of Ethereum's history with the gas limit
When Ethereum launched in 2015, the gas limit was initially set to **5,000 gas per block**. Over time, this limit saw significant changes:
- 2016: The gas limit was first increased to around 3 million, and later that same year it was raised again to approximately 4.7 million.
- Following the Tangerine Whistle hard fork and more specifically the implementation of EIP-150, the gas limit was increased to 5.5 million. This adjustment was made as part of a repricing of certain I/O-heavy opcodes in response to denial-of-service (DoS) attacks.
- In July 2017, the gas limit was raised to 6.7 million, and it continued to increase:
- December 2017: ~8 million
- September 2019: ~10 million
- August 2020: 12.5 million
- April 2021: 15 million
Under EIP-1559, the gas limit acts as a maximum (or "hard cap") set to twice the gas *target* of 15 million. This means that a block can include transactions consuming up to 30 million gas, while the base fee adjusts to keep average usage at the target.
And for almost four years, there has been no increase in the gas limit at all.
Is it finally time to revisit the Gas Limit?
To answer this question, we need to analyze three aspects of hardware requirements (bandwidth, computation, and storage) assuming the gas limit were raised to 60 million today.
Storage
When considering an increase in the gas limit, storage stands out as the biggest bottleneck and concern for the Ethereum network. The reason for this lies in Ethereum's historical growth in state size and the ongoing strain that this places on validators.
There are two types of "growth" in Ethereum:
* State Growth
* History Growth
State growth
Ethereum's state—the collection of all account balances, smart contract code, and storage—continues to expand as more transactions are processed and smart contracts are deployed. Since its inception, the state size has grown significantly, with periods of accelerated growth driven by network congestion, increased transaction activity, and the rise of decentralized finance (DeFi) and NFTs. Currently, state growth is approximately 2.5 GB per month, or 30 GB per year.
This state growth can lead to the following issues:
- Slower access times to disk
- Increased hardware requirements
However, as of the time of writing, neither of these issues is particularly significant. In fact, the difference in access time between storage systems that differ by just a few tens of gigabytes is fairly negligible due to the algorithmic complexity of querying, which is typically logarithmic. Storage requirements are also insignificant, as the cost of new hardware is decreasing at a rate that far outpaces the relatively small growth in state size of 30 GB per year. Even if growth rose to 60 GB/year, the difference would probably not stand out and would still be outpaced by technological progress in hardware anyway.
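The logarithmic-access claim can be sketched numerically. Ethereum's state lives in a hexary Merkle Patricia Trie, so the average lookup path grows roughly with log base 16 of the number of entries; the entry counts below are hypothetical round numbers for illustration, not measured figures.

```python
import math

# Average lookup depth in a hexary (branching factor 16) trie grows with
# log16 of the entry count, so doubling the state adds only ~0.25 levels.
def approx_trie_depth(num_entries: int) -> float:
    return math.log(num_entries, 16)

# Hypothetical entry counts, purely for illustration:
for n in (250_000_000, 500_000_000, 1_000_000_000):
    print(f"{n:>13,} entries -> ~{approx_trie_depth(n):.2f} levels deep")
```

Doubling from 500 million to 1 billion entries deepens the average path by only log16(2) ≈ 0.25 levels, which is why access times barely move.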
History growth
This growth in chain size is still outpaced by technological progress by a significant margin. Even if the gas limit were to double, the cost of hardware continues to decrease exponentially, making the required hardware cheaper over time.
However, it’s worth noting that soon, solo stakers will need more than 2 TB of storage to run a validator on Ethereum. This will effectively raise the requirement to 4 TB of storage, as most hardware is sold in powers of two. Paradoxically, this means that Ethereum might as well make use of the additional storage, as validators would already need to invest in the higher-capacity hardware, regardless of whether the gas limit is increased or not.
NOTE: There is no average vs worst-case analysis on storage because consistently manipulating blocks for an extended period of time (weeks and months) is an insanely expensive endeavor.
Storage cost over time
To justify my claim that storage costs have been decreasing at an exponential rate, we can look at the price fluctuations in USD of 1 GB of SSD storage over the past four years:
It seems that every two years, the cost of a GB of SSD tends to halve.
Compared to this, Ethereum's storage and state growth is negligible: the chain grows linearly, while hardware costs decrease at an exponential rate.
I found a more telling chart about this trend with storage costs, but it is from a Reddit post and not from an actual scientific publication (although the results match).
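As a rough model of the linear-vs-exponential argument (the $0.08/GB starting price is a hypothetical assumption matching the trend above; the 2-year halving and the 30 vs. 60 GB/year growth rates come from this post):

```python
# Linear chain growth vs. exponentially falling SSD prices.
# $0.08/GB is an assumed starting price for illustration only;
# 30 vs. 60 GB/year are the growth figures discussed in this post.
def ssd_cost_per_gb(year: float, start_usd: float = 0.08, halving_years: float = 2.0) -> float:
    # Price halves every `halving_years` years.
    return start_usd * 0.5 ** (year / halving_years)

def accumulated_growth_cost(year: float, growth_gb_per_year: float) -> float:
    # Cost of buying, at year-`year` prices, all storage accumulated so far.
    return growth_gb_per_year * year * ssd_cost_per_gb(year)

for year in (2, 4, 8):
    print(f"year {year}: 30 GB/yr -> ${accumulated_growth_cost(year, 30):.2f}, "
          f"60 GB/yr -> ${accumulated_growth_cost(year, 60):.2f}")
```

Because the price halves faster than the accumulated gigabytes grow linearly, the dollar cost of the extra storage eventually shrinks over time even in the doubled-growth scenario.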
Bandwidth
The average case for bandwidth in Ethereum looks something like 2 MB/s; however, most of this number comes from the CL gossiping blobs and aggregates. When it comes to increasing the gas limit, the only thing to look at is the block size.
Average case with 2x the Gas Limit
Currently, the maximum block size ever recorded is 270 KB, and the average block size post-Deneb stands at 75 KB. Doubling these would be equivalent to adding roughly 0.5 to 2 blobs (compared to the current average and the historical maximum, respectively), which translates to a ≈2-5% increase in node bandwidth (inbound and outbound). So, for the average case, it is not a significant change. As a matter of fact, an additional three blobs would be far more taxing.
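The blob comparison works out as follows; the 128 KB blob size (per EIP-4844) is the only constant not taken from this post:

```python
# How many blobs' worth of bytes does doubling the execution payload add?
BLOB_KB = 128  # blob size per EIP-4844

def blobs_equivalent(extra_kb: float) -> float:
    return extra_kb / BLOB_KB

# Doubling a block adds roughly one extra copy of today's payload:
avg_extra_kb = 75    # post-Deneb average block size (from this post)
max_extra_kb = 270   # largest block ever recorded (from this post)
print(f"average case: +{blobs_equivalent(avg_extra_kb):.2f} blobs")
print(f"worst case:   +{blobs_equivalent(max_extra_kb):.2f} blobs")
```

That is ≈0.59 blobs for the average case and ≈2.11 for the historical maximum, matching the 0.5-2 blob range above.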
Worst case with 2x the Gas Limit
The worst case has been calculated to be 1.7 MB, which would become 3.4 MB, with the spike requiring +50% bandwidth relative to the current average. This is not that much, but it is still significant. The reason I think it is not a lot is that such a DoS would be quite expensive, and the spike amounts to +50% of the current average requirements, which is something already accounted for. As I said, filling blocks worth 15 million gas to the brim for many successive blocks is very expensive. So, even though an attacker could potentially launch a DoS for a few blocks, they would have to spend a significant amount of money doing so. Additionally, they would have to compete with other transactions to get into the block, which makes this even more expensive.
Regardless of opinions on these numbers, an increase in calldata cost would fix this issue completely, so I am not worried about it in any case. Additionally, if the gas limit is raised through EIP-7783, these risks are negligible and controllable.
Computation
Computation and block times were never a problem to begin with, but here we go.
Average case
The average case for block computation is usually <1 second, even for slow machines with bad disks. There isn’t much to argue here—on average, this was never the bottleneck.
Worst case
The worst case seems to be unclear and depends on the client. After talking to some client teams, it seems the consensus is that the only issue would be that some opcodes do not scale well (such as MODEXP).
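To illustrate why MODEXP is the usual suspect, here is a sketch of the EIP-2565 pricing formula (transcribed from the EIP's pseudocode; the sample operand at the end is arbitrary). Gas grows only quadratically with operand size, which has been argued to lag behind the true cost of large modular exponentiations.

```python
# EIP-2565 MODEXP gas pricing (transcribed sketch, lengths in bytes).
def multiplication_complexity(base_len: int, mod_len: int) -> int:
    words = (max(base_len, mod_len) + 7) // 8
    return words * words

def iteration_count(exp_len: int, exponent: int) -> int:
    if exp_len <= 32 and exponent == 0:
        count = 0
    elif exp_len <= 32:
        count = exponent.bit_length() - 1
    else:
        low256 = exponent & (2**256 - 1)
        count = 8 * (exp_len - 32) + max(low256.bit_length() - 1, 0)
    return max(count, 1)

def modexp_gas(base_len: int, exp_len: int, mod_len: int, exponent: int) -> int:
    return max(200, multiplication_complexity(base_len, mod_len)
               * iteration_count(exp_len, exponent) // 3)

# A 2048-bit RSA-style exponentiation: heavy to compute, relatively cheap in gas.
print(modexp_gas(256, 256, 256, 2**2048 - 1))
```

The quadratic `multiplication_complexity` term is exactly the part that a repricing would need to revisit if the gas limit were doubled.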
However, any DoS vectors here can be fixed with a repricing, and if the gas limit increase is done with EIP-7783, then these risks are negligible.
Conclusion
Overall, it seems that storage growth is not the bottleneck for increasing the gas limit, as hardware like storage is easy to upgrade. However, bandwidth poses a greater threat, as it is much harder to scale. Fortunately, with EIP-7783, the risks related to bandwidth and potential increases in computation are effectively mitigated. Nonetheless, it might be wise to reprice the calldata cost for additional safety (although, in my opinion, this is not likely to be necessary).
In my personal opinion, it is possible to increase the gas limit by 33% or even double it today, if done with the gradual increase introduced in EIP-7783.
I think it is still too early to do that through EIP-7782, because shorter slot times would be punitive towards DVT and SSF. However, once those are figured out, a decrease in slot times is definitely due.