Scaling Ethereum L1 and L2s in 2025 and beyond


From Vitalik

Special thanks to Tim Beiko, Justin Drake, and developers from various L2 teams for feedback and review

The goal of Ethereum is the same as what it has been from day 1: building a global, censorship-resistant permissionless blockchain. A free and open platform for decentralized applications, built upon the same principles (what we might call today the regen and cypherpunk ethos) as GNU + Linux, Mozilla, Tor, Wikipedia, and many other great free and open source software projects that came before it.

Over the past ten years, Ethereum has also evolved another property that I have come to greatly appreciate: in addition to the innovation in cryptography and economics, Ethereum is also an innovation in social technology. Ethereum as an ecosystem is a working, live demonstration of a new, more open and decentralized way of building things together. Political philosopher Ahmed Gatnash describes his experience at Devcon as follows:

... A glimpse of what an alternative world could look like - one mostly free of gatekeeping, and with no attachment to legacy systems. In its inversion of society's standard status systems, the people who are held in highest social status here are the nerds who spend all their time hyper focused on independently solving a problem that they really deeply care about, not playing a game to climb the hierarchies of legacy institutions and amass power. Almost all the power here was soft power. I found it beautiful and very inspiring - it makes you feel like anything would be possible in a world like this, and that a world like this is actually within reach.

The technical project and the social project are inherently intertwined. If you have a decentralized technical system at time T, but a centralized social process maintaining it, there is no guarantee that your technical system will still be decentralized at time T+1. Similarly, the social process is kept alive in many ways by the technology: the tech brings in users, the ecosystem made possible by the tech provides incentives for developers to come and stay, it keeps the community grounded and focused on building rather than just socializing, and so on.

Where you can use Ethereum to pay for things around the world, Oct 2024. Source.

As a result of ten years of hard work governed by this mix of technical and social properties, Ethereum has come to embody another important quality: Ethereum does useful things for people, at scale. Millions of people hold ETH or stablecoins as a form of savings, and many more use these assets for payment: I'm one of them. It has effective, working privacy tools that I use to pay for VPNs to protect my internet data. It has ENS, a robust decentralized alternative to DNS and more generally public key infrastructure. It has working and easy-to-use Twitter alternatives. It has defi tools that offer millions of people higher-yielding low-risk assets than what they can access in tradfi.

Five years ago, I was not comfortable talking about the latter use case for one primary reason: the infrastructure and the code were not mature, we were only a few years removed from the massive and highly traumatic smart contract hacks of 2016-17, and there is no point in having a 7% APY instead of a 5% APY if every year there is a 5% chance you will instead get a -100% APY. On top of this, transaction fees were too high to make these things usable at scale. Today, these tools have shown their resilience over time, the quality of auditing tools has increased, and we are increasingly confident in their security. We know what not to do. L2 scaling is working. Transaction fees have been very low for almost a year.

We need to continue building up the technical and social properties, and the utility, of Ethereum. If we have the former, but not the latter, then we devolve into a more-and-more-ineffective "decel" community that can howl into the wind about how various mainstream actors are immoral and bad, but is in no position to actually offer a better alternative. If we have the latter, but not the former, then we have exactly the Wall Street greed-is-good mentality that many of us came here precisely to escape.

There are many implications of the duality that I have just described. In this post, I want to focus on a specific one, which matters greatly to Ethereum's users in the short and medium term: Ethereum's scaling strategy.

The rise of layer 2s

Today, the path that we are taking to scale Ethereum is layer 2 protocols (L2s). The L2s of 2025 are a far cry from the early experiments they were in 2019: they have reached key decentralization milestones, they are securing billions of dollars of value, and they are currently scaling Ethereum's transaction capacity by a factor of 17x, dropping fees by a similar amount.

Left: stage 1 and stage 2 rollups. On Jan 22, Ink joined as the sixth stage 1+ rollup (and the third full-EVM stage 1+ rollup). Right: top rollups by TPS, with Base leading at roughly 40% of Ethereum's capacity.

This is all happening just in time for a wave of successful applications: various defi platforms, social networks, prediction markets, exotic contraptions like Worldchain (now with 10 million users) and more. The "enterprise blockchain" movement, widely viewed as a dead end after the failure of consortium blockchains in the 2010s, is coming back to life with L2s, with Soneium providing a leading example.

These successes are also a testament to the social side of Ethereum's decentralized and modular approach to scaling: instead of the Ethereum Foundation having to seek out all of these users itself, there are dozens of independent entities who are motivated to do so. These entities have also made crucial contributions to the technology, without which Ethereum would not be anywhere close to as far as it is today. And as a result, we are finally approaching escape velocity.

Challenges: scale and dealing with heterogeneity

There are two primary challenges facing L2s today:

  • Scale: our blob space is barely covering the L2s and use cases of today, and we have far from enough for the needs of tomorrow.
  • Heterogeneity: the early vision for how Ethereum could scale involved creating a blockchain that contains many shards, each shard being a copy of the EVM that gets processed by a small fraction of the nodes. L2s are, in theory, an implementation of exactly this approach. In practice, however, there is a key difference: each shard (or set of shards) is created by a different actor, is treated by infrastructure as being a different chain, and often follows different standards. Today, this translates into composability and user experience problems for developers and users.

The first problem is an easy-to-understand technical challenge, and has an easy-to-describe (but hard-to-implement) technical solution: give Ethereum more blobs. In addition to this, the L1 can also do a moderate amount of scaling in the short term, as well as improvements to proof of stake, stateless and light verification, storage, the EVM and cryptography.

The second problem, which has received the bulk of public attention, is a coordination problem. Ethereum is no stranger to performing complex technical tasks across multiple teams: after all, we did the merge. Here, the coordination problem is more challenging, because of the greater number and diversity of actors and goals, and the fact that the process is starting much later in the game. But even still, our ecosystem has solved difficult problems before, and we can do so again.

One possible shortcut for scaling is to give up on L2s, and do everything through L1 with a much higher gas limit (either across many shards, or on one shard). However, this approach sacrifices too many of the benefits of Ethereum's current social structure, which has been so effective at getting the benefits of different forms of research, development and ecosystem-building culture at the same time. Hence, instead we should stay the course: continue to scale primarily through L2s, but make sure that L2s actually fulfill the promise that they were meant to fulfill.

This means the following:

  • L1 needs to accelerate scaling blobs.
  • L1 also needs to do a moderate amount of scaling the EVM and increasing the gas limit, to be able to handle the activity that it will continue to have even in an L2-dominated world (eg. proofs, large-scale defi, deposits and withdrawals, exceptional mass exit scenarios, keystore wallets, asset issuance).
  • L2s need to continue improving security. The same security guarantees that one would expect from sharding (including eg. censorship resistance, light client verifiability, lack of enshrined trusted parties) should be available on L2s.
  • L2s and wallets need to accelerate improving and standardizing interoperability. This includes chain-specific addresses, message-passing and bridge standards, efficient cross-chain payments, on-chain configs and more. Using Ethereum should feel like using a single ecosystem, not 34 different blockchains.
  • L2 deposit and withdraw times need to become much faster.
  • As long as basic interoperability needs are met, L2 heterogeneity is good. Some L2s will be governance-minimized based rollups that run exact copies of the L1 EVM. Others will experiment with different VMs. Others will act more like servers that use Ethereum to give users extra security guarantees. We need L2s at each part of that spectrum.
  • We should think explicitly about the economics of ETH. We need to make sure that ETH continues to accrue value even in an L2-heavy world, ideally solving for a variety of models of how value accrual happens.

Let us now go through each of these topic areas in more detail.

Scaling: blobs, blobs, blobs

With EIP-4844, we now have 3 blobs per slot, or a data bandwidth of 384 kB per slot. Quick napkin math suggests that this is 32 kB per second, and each transaction takes about 150 bytes onchain, so we get ~210 tx/sec. L2beat data gives us almost exactly this number.
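
To make this arithmetic easy to re-run as parameters change, here is a minimal sketch assuming the same figures as above (128 KiB blobs, 12-second slots, ~150 bytes per rollup transaction); the outputs differ slightly from the rounded numbers in the text.

```python
# Napkin math for blob throughput, using the figures in the text:
# 128 KiB per blob (3 blobs ~= 384 kB per slot), 12-second slots, and
# roughly 150 bytes of onchain data per rollup transaction.

BLOB_SIZE_BYTES = 128 * 1024   # one EIP-4844 blob
SLOT_SECONDS = 12              # Ethereum slot time
BYTES_PER_TX = 150             # rough average rollup transaction footprint

def blob_tps(blobs_per_slot: int, bytes_per_tx: float = BYTES_PER_TX) -> float:
    """Transactions per second supported by a given blob count."""
    bytes_per_second = blobs_per_slot * BLOB_SIZE_BYTES / SLOT_SECONDS
    return bytes_per_second / bytes_per_tx

print(round(blob_tps(3)))     # ~218: today's 3 blobs per slot
print(round(blob_tps(6)))     # ~437: Pectra's doubled blob count
print(round(blob_tps(128)))   # ~9300: the 2D-sampling target, before compression
```

The same function illustrates why the 100,000 TPS figure further down depends on both more blobs and data compression: at 128 blobs per slot, the per-transaction footprint has to fall well below 150 bytes (or the blob count has to keep rising past 128) to get there.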

With Pectra, scheduled for release in March, we plan to double this to 6 blobs per slot.

The current goal of Fusaka is to focus primarily on PeerDAS, ideally having nothing other than PeerDAS and EOF. PeerDAS could increase the blob count by another 2-3x.

After that point, the goal is to keep increasing the blob count over time. When we get to 2D sampling, we can reach 128 blobs per slot, and then keep going further. With this, and improvements to data compression, we can reach 100,000 TPS onchain.

So far, the above is all a re-statement of the pre-2025 status quo roadmap. The key question is: what can we actually change to make this go faster? My answers are the following:

  • We should be more willing to explicitly deprioritize features that are not blobs.
  • We should be clearer that blobs are the goal, and make relevant p2p R&D a talent acquisition priority.
  • We can make the blob target adjustable directly by stakers, similar to the gas limit. This would allow the blob target to increase more quickly in response to technology improvements, without waiting for a hard fork (see the sketch after this list).
  • We can consider more radical approaches that get us more blobs faster with more trust assumptions for lower-resourced stakers, though we should be careful about this.
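
As a concrete illustration of the staker-adjustment idea in the third bullet, here is a minimal sketch loosely modeled on how the gas limit already moves: each block proposer nudges the value by a small bounded step, so it drifts toward what proposers are voting for. The 1/1024 bound, the BlobConfig structure and the fixed 2x target-to-max ratio are illustrative assumptions, not a spec.

```python
# Illustrative sketch: letting proposers adjust the blob target the way they
# already adjust the gas limit (bounded per-slot nudges toward their vote).
# The adjustment quotient and the 2x max-to-target ratio are assumptions.

from dataclasses import dataclass

ADJUSTMENT_QUOTIENT = 1024  # per-slot bound, by analogy with gas limit voting

@dataclass
class BlobConfig:
    target_blobs_per_slot: float
    max_blobs_per_slot: float

def apply_proposer_vote(parent: BlobConfig, desired_target: float) -> BlobConfig:
    """Move the blob target toward the proposer's vote, bounded per slot."""
    max_step = parent.target_blobs_per_slot / ADJUSTMENT_QUOTIENT
    delta = desired_target - parent.target_blobs_per_slot
    step = max(-max_step, min(max_step, delta))
    new_target = parent.target_blobs_per_slot + step
    return BlobConfig(new_target, new_target * 2)  # keep max at 2x target (as with 3/6 today)

# Example: starting from today's target of 3, repeated votes for 6 raise it gradually.
config = BlobConfig(3, 6)
for _ in range(1000):
    config = apply_proposer_vote(config, 6)
print(config.target_blobs_per_slot)  # 6.0 -- converges over roughly 700 slots
```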

Improving security: proof systems and native rollups

Today, there are three stage 1 rollups (Optimism, Arbitrum, Ink) and three stage 2 rollups (DeGate, zk.money, Fuel). The majority of activity still happens on stage 0 rollups (ie. multisigs). This needs to change. A big reason why this has not changed faster is that building a proof system, and getting enough confidence in it to be willing to give up training wheels and rely fully on it for security, is hard.

There are two paths toward getting there:

  • Stage 2 + multi-provers + formal verification: use multiple proving systems for redundancy, and use formal verification (see: the verified ZK-EVM initiative) to get confidence that they are secure.
  • Native rollups: make EVM state transition function verification part of the protocol itself, eg. through a precompile (see: [1] [2] [3] for research)

Today, we should work on both in parallel. For stage 2 + multi-provers + formal verification, the roadmap is relatively well-understood. The main practical place where we can accelerate is to cooperate more on software stacks, reducing the need for duplicate work while increasing interoperability as a by-product.

Native rollups are still an early-stage idea. There is a lot of active thinking to be done, particularly on the topic of how to make a native rollup precompile maximally flexible. An ideal goal would be for it to support not just exact clones of the EVM, but also EVMs with various arbitrary changes, in such a way that an L2 with a modified EVM could still use the native rollup precompile, and "bring its own prover" only for the modifications. This could be done for precompiles, opcodes, the state tree, and potentially other pieces.
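
To give a flavor of the design space, here is a rough sketch of the shape such a precompile interface might take, with a "bring your own prover" hook for an L2's modifications. Everything here (the names, the arguments, the CustomProver hook) is hypothetical; the actual designs are still open research, per the links above.

```python
# Hypothetical sketch of a native rollup verification precompile with a
# "bring your own prover" hook. None of these names or signatures come from
# an actual proposal; they just illustrate the split between the part the
# protocol verifies (the unmodified EVM state transition) and the part the
# L2 proves itself (its own modifications to the EVM).

from typing import Callable, Optional

# Verifier supplied by the L2 for its custom precompiles/opcodes/state-tree changes.
CustomProver = Callable[[bytes, bytes, bytes], bool]  # (pre_root, post_root, proof) -> ok

def verify_evm_transition(pre_root: bytes, post_root: bytes, tx_batch: bytes) -> bool:
    """Placeholder for the enshrined, in-protocol EVM verification."""
    return True  # stand-in; in reality this is the hard part

def native_rollup_verify(
    pre_root: bytes,
    post_root: bytes,
    tx_batch: bytes,
    custom_prover: Optional[CustomProver] = None,
    custom_proof: bytes = b"",
) -> bool:
    """Accept the transition only if the base EVM part and any L2-specific
    modifications both check out."""
    if not verify_evm_transition(pre_root, post_root, tx_batch):
        return False
    if custom_prover is not None:
        return custom_prover(pre_root, post_root, custom_proof)
    return True
```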

Interoperability and standards

The goal is to make moving assets between, and using applications on, different L2s feel the same as it would if they were different "shards" of the same blockchain. There has for a few months been a pretty well-understood roadmap for how to do this:

  • Chain-specific addresses: the address should include both the account on the chain and some kind of identifier for the chain itself. ERC-3770 is an early attempt at this; there are now more sophisticated ideas, which also move the registry for L2s to the Ethereum L1 itself (see the address sketch after this list).
  • Standardized cross-chain bridges and cross-chain message passing: there should be standard ways to verify proofs and pass messages between L2s, and these standards should not require trusting anything except for the proof systems of the L2s themselves. An ecosystem relying on multisig bridges is NOT acceptable. If it's a trust assumption that would not exist if we had done 2016-style sharding, it's not acceptable today, full stop.
  • Speeding up deposit and withdraw times, so that "native" messages can take minutes (and eventually one slot) rather than weeks. This involves faster ZK-EVM provers, and proof aggregation.
  • Synchronous read of L1 from L2. See: L1SLOAD, REMOTESTATICCALL. This makes cross-L2 interoperability significantly easier, and also helps keystore wallets.
  • Shared sequencing, and other longer-term work. Based rollups are valuable in part because they may be able to do this more effectively.
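
As one small example of what the first item could look like in practice, here is a sketch of parsing ERC-3770-style chain-specific addresses ("shortName:0x..."). The hardcoded registry is a stand-in for illustration; as noted above, newer proposals move the registry onto Ethereum L1 itself.

```python
# Sketch of chain-specific address handling in the ERC-3770 style, where a
# human-readable chain shortName prefixes the hex address ("eth:0x...").
# The registry below is a hardcoded stand-in for illustration only.

import re

CHAIN_REGISTRY = {"eth": 1, "oeth": 10, "arb1": 42161, "base": 8453}  # shortName -> chain id

ADDRESS_RE = re.compile(r"^(?P<chain>[a-z0-9-]+):(?P<addr>0x[0-9a-fA-F]{40})$")

def parse_chain_specific_address(value: str) -> tuple[int, str]:
    """Split 'shortName:0x...' into (chain_id, address), validating both parts."""
    match = ADDRESS_RE.match(value)
    if match is None:
        raise ValueError(f"not a chain-specific address: {value!r}")
    chain = match.group("chain")
    if chain not in CHAIN_REGISTRY:
        raise ValueError(f"unknown chain shortName: {chain!r}")
    return CHAIN_REGISTRY[chain], match.group("addr")

print(parse_chain_specific_address("base:0x" + "ab" * 20))
# -> (8453, '0xabab...abab')
```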

As long as standards like these are satisfied, there is still a lot of room for L2s to have very different properties from each other: experimenting with different virtual machines, different sequencing models, scale vs security tradeoffs, and other differences. However, it must be clear to users and application developers what level of security they are getting.

To make faster progress, a large share of the work can be done by entities that operate across the ecosystem: the Ethereum Foundation, client development teams, major application teams, etc. This will reduce coordination effort and make adopting standards more of a no-brainer, because the work that will be done by each individual L2 and wallet will be reduced. However, L2s and wallets, as extensions of Ethereum, both still need to step up work on the last mile of actually implementing these features and bringing them to users.

Economics of ETH

ETH as a triple-point asset

We should pursue a multi-pronged strategy, to cover all major possible sources of the value of ETH as a triple-point asset. Some key planks of that strategy could be the following:

  • Agree broadly to cement ETH as the primary asset of the greater (L1 + L2) Ethereum economy, support applications using ETH as the primary collateral, etc.
  • Encourage L2s to support ETH with some percentage of their fees. This could be done through burning a portion of the fees, permanently staking them and donating the proceeds to Ethereum ecosystem public goods, or a number of other formulas.
  • Support based rollups in part as a path for L1 to capture value through MEV, but do not attempt to force all rollups to be based (because it does not work for all applications), and do not assume that this alone will solve the problem.
  • Raise the blob count, consider a minimum blob price, and keep blobs in mind as another possible revenue generator. As an example possible future, if you take the average blob fee of the last 30 days, and suppose it stays the same (due to induced demand) while blob count increases to 128, then Ethereum would burn 713,000 ETH per year. However, such a favorable demand curve is not guaranteed, so also do not assume that this alone will solve the problem.
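
The arithmetic behind that last estimate is simple enough to write down. In the sketch below, the per-blob fee is an illustrative input back-solved from the 713,000 ETH figure, not a claim about the actual 30-day average.

```python
# Back-of-the-envelope annual blob burn: (fee per blob) x (blobs per slot)
# x (slots per year). The fee value used below is back-solved from the
# 713,000 ETH/year figure in the text, purely for illustration.

SLOTS_PER_YEAR = 365.25 * 24 * 3600 / 12   # ~2.63 million slots

def annual_blob_burn(avg_fee_per_blob_eth: float, blobs_per_slot: int) -> float:
    """ETH burned per year if the average per-blob fee holds at this level."""
    return avg_fee_per_blob_eth * blobs_per_slot * SLOTS_PER_YEAR

illustrative_fee = 713_000 / (128 * SLOTS_PER_YEAR)    # ~0.0021 ETH per blob
print(round(annual_blob_burn(illustrative_fee, 128)))  # 713000
```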

Conclusion: The Road Ahead

Ethereum has matured as a technology stack and a social ecosystem, bringing us closer to a more free and open future where hundreds of millions of people can benefit from crypto assets and decentralized applications. However, there is a lot of work to be done, and now is the time to double down.

If you're an L2 developer, contribute to the tooling to make blobs scale more safely, the code to scale the execution of your EVM, and the features and standards to make the L2 interoperable. If you are a wallet developer, be similarly engaged in contributing to and implementing standards to make the ecosystem more seamless for users, and at the same time as secure and decentralized as it was when Ethereum was just an L1. If you are an ETH holder or community member, actively participate in these discussions; there are many areas that still require active thought and brainstorming. The future of Ethereum depends on every one of us playing an active role.
