Ethereum's alleged centralization has long been cited as an argument against adopting ETH by well-informed Bitcoin investors such as Preston Pysh and Lyn Alden. The topic is complicated and opinions vary, but most sides agree on one point: if blockchains and smart contracts are to be adopted at scale, decentralization must remain a key pillar of any layer-1 protocol.
In this report, we break the debate down into quantifiable parts and analyze the likely impact of ETH 2.0. Much of the literature on this controversy was written before ETH 2.0 staking went live. We believe the new data may mark a turning point for investors considering ETH.
Summary of the centralization debate
Contrary to popular belief, (de)centralization can be defined along multiple axes. Vitalik Buterin measures it along architectural, political, and logical axes.
Although political and logical decentralization are valuable in their own right, the centralization debate focuses mainly on the architectural axis. After all, if a network is sufficiently centralized along this axis, a local compromise can bring down the entire network.
We recommend dividing the architectural axis into the following areas:
Let’s take a look at these in detail:
This argument applies to ETH 1.0: if miners choose to collude, the concentration of hash rate among a small number of miners endangers the entire network.
On this metric, ETH performs similarly to BTC: no single miner (or pair of miners) produces more than 50% of the hash rate. This matters because malicious collusion among three or more parties is difficult to execute, since any single defection imposes a high cost on the cartel.
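The 50% check above can be sketched as a small script. The pool names and hash-rate shares below are hypothetical placeholders, chosen only so that no single pool or pair crosses the majority threshold, mirroring the claim above:

```python
from itertools import combinations

# Hypothetical pool -> fraction of network hash rate; illustration only.
hash_shares = {
    "pool_a": 0.22, "pool_b": 0.18, "pool_c": 0.14, "pool_d": 0.12,
    "pool_e": 0.10, "pool_f": 0.09, "pool_g": 0.08, "pool_h": 0.07,
}

def majority_coalitions(shares, max_size, threshold=0.5):
    """Return every coalition of at most max_size pools whose combined
    share exceeds the threshold."""
    pools = list(shares)
    return [c
            for k in range(1, max_size + 1)
            for c in combinations(pools, k)
            if sum(shares[p] for p in c) > threshold]

# No single pool or pair exceeds 50% in this toy distribution.
print(majority_coalitions(hash_shares, max_size=2))  # → []
```

Raising `max_size` to 3 does surface majority coalitions in this toy distribution, which illustrates why collusion among three or more parties becomes the relevant, and much harder, attack.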
As ETH shifts to proof-of-stake under 2.0 and the barriers to becoming a validator fall, we expect consensus to become more decentralized, with more participants. We have already begun to see this: over the past 7 days there were 63 active pools/miners on ETH 1.0, while under ETH 2.0 roughly 27k unique wallets have staked ETH.
This is the crux of the debate. Under ETH 1.0, consensus and storage are separated. Consensus is reached by miners running dedicated hardware to solve the Ethash puzzle. Since this is feasible only for a few, another group must run and operate nodes, whose purpose is to store and relay the blockchain's transaction history and to verify the transactions miners add.
There are three types of nodes: archive nodes, full nodes, and light nodes, each storing progressively less blockchain data. What we really care about is full nodes, because they carry enough data to protect the network in a decentralized way, yet relatively few people run them. Under ETH 1.0, the expectation was that every Dapp developer would run a node, so that the system would eventually become sufficiently distributed.
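As a quick reference, the three node types and what each stores, ordered from most to least data. The descriptions paraphrase the discussion above; this is an illustrative summary, not client documentation:

```python
# The three Ethereum node types, ordered from most to least data stored.
NODE_TYPES = [
    ("archive", "every historical state plus the full chain history"),
    ("full",    "the full chain history, with older state pruned"),
    ("light",   "block headers only; fetches other data from full nodes"),
]

for name, stores in NODE_TYPES:
    print(f"{name:>7} node stores {stores}")
```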
However, running a node is a tedious task. Unlike miners, node operators are not compensated. As a result, many Dapp developers choose to run their nodes through infrastructure-as-a-service (IaaS) providers such as Infura, in exchange for a fee. This creates a threefold problem:
The fewer independent nodes there are, the lower the network's redundancy and security.
The concentration of nodes among a small number of large providers introduces key-person risk (partially realized when Infura went down for about five hours in November 2020).
Infura and other providers running on large centralized clouds such as AWS also introduce third-party risk.
There are currently only ~3.8k ETH nodes (compared to ~11k for BTC).
In addition, many of these nodes remain concentrated on large cloud providers.
When Lyn Alden raised this criticism earlier this year, members of the Ethereum community tried to respond to it.
As members of the Ethereum community ourselves, we admire Bankless, but we think this response leaves much room for improvement.
With Ethereum's sweeping transformation to ETH 2.0, much of this architecture is changing. Two factors are key:
Ease of running nodes
The Ethereum community rightly recognized that the hardware requirements for running a node under ETH 1.0 were a headache, and made lowering them a key principle of the ETH 2.0 architecture.
Incentives for running nodes
Under ETH 1.0, most nodes are run by Dapp developers, or by providers on their behalf, because hardware constraints left too few miners to meet the network's need for nodes.
Under ETH 2.0, anyone with 32 ETH can stake it to become a validator and run a node. Since validators also act as nodes, incentives become properly aligned. More importantly, there will be enough validators to make the node distribution broad and decentralized.
We can see this in the number of unique wallets registered: approximately 27k so far. That is about 9 times the number of ETH 1.0 nodes and about 3 times the current number of BTC nodes. (Note: each unique wallet can run multiple validators, each backed by 32 ETH.)
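The arithmetic behind these comparisons can be laid out explicitly, using the report's approximate figures (27k unique staking wallets, ~3k ETH 1.0 nodes, ~11k BTC nodes); the sample wallet balance is hypothetical:

```python
ETH_PER_VALIDATOR = 32  # deposit required per validator

def validators_for(balance_eth: int) -> int:
    """How many validators a wallet's balance can back (32 ETH each)."""
    return balance_eth // ETH_PER_VALIDATOR

# The report's approximate figures.
unique_staking_wallets = 27_000
eth1_nodes = 3_000
btc_nodes = 11_000

print(validators_for(96))                          # → 3
print(round(unique_staking_wallets / eth1_nodes))  # → 9
print(unique_staking_wallets / btc_nodes)          # roughly 2.5
```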
One important further check on decentralization, in particular independence from large cloud providers, is the ISP behind each node (distinguishing cloud-hosted from self-hosted), similar to what https://ethernodes.org tracks for ETH 1.0. We hope this data will be less skewed toward cloud providers than ETH 1.0's, because more nodes are run voluntarily (by stakers) rather than out of necessity (by Dapp developers). The sheer number of nodes is a positive sign to begin with.
In addition, the Ethereum community is working on other solutions (weak statelessness, state expiry) to keep nodes easy to run as the blockchain grows.
This argument holds that holders of large amounts of ETH could control consensus by staking under ETH 2.0. This is unlikely: the top 10 wallets currently control less than 20% of the supply, and the odds of successful collusion among that many independent actors are very small.
A related argument is that large staking pools could capture a dominant share of the market and monopolize consensus. Beyond the incentive structure discouraging this (pool operators must also stake their own ETH), the data shows that most nodes currently sit outside exchange staking pools.
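One way to quantify this is a Nakamoto-coefficient-style count: how many of the largest wallets would have to collude to control a strict majority of stake. The balances below are hypothetical, constructed so the top 10 hold just under 20% of supply, as the report notes:

```python
def wallets_for_majority(balances):
    """Minimum number of largest holders whose combined balance is a
    strict majority of the total (a Nakamoto-coefficient-style measure)."""
    need = sum(balances) / 2
    running = 0
    for i, b in enumerate(sorted(balances, reverse=True), start=1):
        running += b
        if running > need:
            return i

# Hypothetical integer balances: the top 10 wallets hold 190 of 1000
# units (19%); the remaining 81% is spread across 810 small holders.
top_10 = [30, 25, 20, 20, 20, 18, 17, 15, 13, 12]  # sums to 190
long_tail = [1] * 810

print(wallets_for_majority(top_10 + long_tail))  # → 321
```

Even with the top 10 colluding, hundreds of additional independent holders would have to join before the coalition reaches a majority of stake.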
The last issue is the dominance of certain client software among nodes. On ETH 1.0, Geth serves roughly 80% of nodes. If that persisted into ETH 2.0, a faulty client update or a piece of malware could bring down the entire ecosystem.
Ethereum appears to be pushing for a more even distribution across multiple clients. With roughly 27k unique stakers (many running multiple nodes) coming online, the picture may change quickly compared with the ~3k nodes online today. Data on client shares has not yet been published, but we will keep watching this metric.
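The client-diversity risk described here reduces to a simple threshold check. The ETH 1.0 split uses the ~80% Geth figure cited above; the ETH 2.0 client names and shares are purely hypothetical, chosen to illustrate a more even distribution:

```python
def risky_clients(client_shares, threshold=0.5):
    """Clients whose share of nodes exceeds the threshold: a bug in any
    one of them could take down a majority of the network."""
    return sorted(c for c, s in client_shares.items() if s > threshold)

# ETH 1.0 split per the report's ~80% Geth figure; ETH 2.0 split is
# hypothetical for illustration.
eth1 = {"geth": 0.80, "openethereum": 0.12, "others": 0.08}
eth2_hypothetical = {"prysm": 0.35, "lighthouse": 0.30,
                     "teku": 0.20, "nimbus": 0.15}

print(risky_clients(eth1))               # → ['geth']
print(risky_clients(eth2_hypothetical))  # → []
```

In the hypothetical even split, no single client failure can disrupt a majority of nodes, which is exactly the property a multi-client ecosystem aims for.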
Overall, Ethereum is an ambitious project that aims to realize the full potential of blockchain technology. It is by no means in its final state, and like any good technology, it keeps iterating. If blockchain technology is to achieve its full potential, it may well do so through ETH.
Author/ Translator: Jamie Kim
Bio: Jamie Kim is a technology journalist. Raised in Hong Kong and always vocal at heart. She aims to share her expertise with the readers at blockreview.net. Kim is a Bitcoin maximalist who believes with unwavering conviction that Bitcoin is the only cryptocurrency – in fact, currency – worth caring about.