In this post, we will compare TON with some of the more prominent blockchain projects.
We will use the framework for blockchain project classification outlined in Sections 2.8 and 2.9 of the TON whitepaper.
Blockchain projects are classified based on the following criteria, as detailed in Section 2.8 of the TON whitepaper:
The type and rules of member blockchains: homogeneous, heterogeneous, or hybrid
The presence of a main chain
Native support for sharding, both static and dynamic
Interoperability between blockchains: loose coupling / tight coupling
Additionally, a simplified classification of blockchain projects is presented in Section 2.8.15 of the TON whitepaper, with a table summarizing the basic properties of the most popular blockchain projects in Section 2.9.
Solana is an unusual project for the 2020s. It is a single-blockchain project optimized for very fast execution of specialized transactions. In this respect, it resembles EOS (developed from 2016 to 2018) and its predecessor, BitShares (developed in 2013-2014), but it uses a variant of PBFT called Tower Consensus instead of dPOS. Solana claims to generate a block every second or even faster; however, this comes at a cost: unlike PBFT, Tower Consensus generates the next block before the previous one is finalized, which can result in the creation of temporary forks. When validators are distributed across various locations globally, finalizing a block in practice requires multiple network round trips (optimistic PBFT is essentially a three-phase commit protocol), so the best-case scenario still takes a few seconds. The official documentation suggests that a block is typically finalized after 16 voting rounds, each expected to take about 400 milliseconds, implying a finalization time of 6.4 seconds.
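As a quick sanity check, here is the arithmetic behind that 6.4-second estimate (the round count and slot duration are simply the figures quoted above from the official documentation):

```python
# Back-of-the-envelope check of the finalization figure quoted above.
VOTING_ROUNDS = 16        # rounds before a block is typically considered final
SLOT_DURATION_MS = 400    # expected duration of one voting round (one slot)

finalization_s = VOTING_ROUNDS * SLOT_DURATION_MS / 1000
print(f"Expected finalization time: {finalization_s:.1f} s")  # -> 6.4 s
```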
We can say that while Tower Consensus is formally a variant of PBFT, in practice it performs no better than the dPOS consensus protocol, offering short block generation times but long block finalization times.
Another interesting feature of Solana is its optimization for executing very simple predefined transactions, which do not alter account data, even though account balances may change. This allows for massively parallel execution and validation of transactions. In this regard, Solana is similar to BitShares, the precursor to EOS, which used dPOS (with short block generation times and long block finalization times) and was optimized for executing simple predefined transactions at scale. Additionally, Solana is designed so that validating the correct order of these transactions on high-end GPUs can be up to 1,000 times faster than generating them.
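To illustrate why such transactions parallelize well, consider this minimal sketch (illustrative only, not Solana's actual runtime): transactions that declare the accounts they touch can be greedily grouped into batches with disjoint account sets, and each batch can then be executed in parallel without conflicts.

```python
# Illustrative sketch of conflict-free parallel execution: transactions that
# only move balances and declare the accounts they touch are grouped into
# batches whose account sets do not overlap.
from concurrent.futures import ThreadPoolExecutor

def schedule_batches(txs):
    """Greedily split txs into batches with pairwise-disjoint account sets."""
    batches = []
    for tx in txs:
        accounts = {tx["from"], tx["to"]}
        for batch, used in batches:
            if used.isdisjoint(accounts):
                batch.append(tx)
                used |= accounts
                break
        else:
            batches.append(([tx], set(accounts)))
    return [batch for batch, _ in batches]

def apply_transfer(balances, tx):
    balances[tx["from"]] -= tx["amount"]
    balances[tx["to"]] += tx["amount"]

balances = {"alice": 10, "bob": 5, "carol": 7, "dave": 0}
txs = [
    {"from": "alice", "to": "bob", "amount": 2},
    {"from": "carol", "to": "dave", "amount": 3},  # disjoint -> same batch
    {"from": "bob", "to": "carol", "amount": 1},   # conflicts -> next batch
]
with ThreadPoolExecutor() as pool:
    for batch in schedule_batches(txs):
        list(pool.map(lambda tx: apply_transfer(balances, tx), batch))
print(balances)  # {'alice': 8, 'bob': 6, 'carol': 5, 'dave': 3}
```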
Ultimately, Solana claims to be capable of executing up to 700,000 simple transactions per second, assuming these transactions do not change account state, do not require much data, and all data for all accounts fits in RAM. This is consistent with the promises made by BitShares a few years earlier. The main difference is that, unlike BitShares, Solana does support transaction types that were not predefined in the blockchain software: it uses a variant of the Berkeley Packet Filter virtual machine, and programs precompiled for this VM can be uploaded to the Solana blockchain and referenced in transactions. While Solana is formally Turing-complete, the performance figures usually cited apply only to very simple predefined transactions, and only when all account data fits in RAM, so the comparison with BitShares remains relevant.
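The following toy sketch illustrates this "uploaded program" model; the names and structure are hypothetical stand-ins, not Solana's actual BPF loader. A program is stored on chain under an identifier, and a transaction merely references that identifier together with its inputs:

```python
# Toy model of on-chain programs: the registry maps a program id to code
# (here a plain Python callable stands in for precompiled BPF bytecode).
programs = {}

def deploy(program_id, program):
    """Upload a (pre-verified) program to the chain's program registry."""
    programs[program_id] = program

def execute(tx, state):
    """Look up the program a transaction references and run it on the inputs."""
    return programs[tx["program_id"]](state, *tx["args"])

def transfer(state, src, dst, amount):
    state[src] -= amount
    state[dst] += amount

deploy("transfer_v1", transfer)

state = {"alice": 10, "bob": 0}
execute({"program_id": "transfer_v1", "args": ("alice", "bob", 4)}, state)
print(state)  # {'alice': 6, 'bob': 4}
```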
In conclusion, Solana is a "third-generation blockchain project alternative" in the terminology of Section 2.8.15 of the TON whitepaper, ultimately resembling EOS's predecessor BitShares, but with further optimizations. It is formally Turing-complete but is only able to execute either large numbers of very simple transactions of predefined types or far fewer general transactions. It claims to generate a block per second on average and, after future hardware upgrades, to execute 700,000 simple transactions per second (though actual numbers appear closer to 65,000). Solana is an inherently non-scalable, specialized single-blockchain project: without a complete redesign, it cannot support sharding or separate workchains (we refer to Section 2.8.16 of the TON whitepaper to explain why such a redesign is very difficult). In this respect, it differs from TON, which supports the instant deployment of arbitrarily complex smart contracts and offers a higher level of security, because its consensus mechanism provides shorter transaction and block finalization times, as well as dynamic sharding. As the load increases, TON automatically splits the blockchain into an increasing number of shard chains, providing scalability that is impossible for any single-blockchain architecture such as Solana's.
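For contrast, here is a hedged sketch of the dynamic-sharding idea in the spirit of TON (the split threshold and the prefix mechanics are illustrative, not TON's actual parameters): each shard chain covers a binary prefix of the account-address space, and an overloaded shard splits into two children.

```python
# Illustrative dynamic sharding: shards are binary address prefixes that
# split in two when their load exceeds a (hypothetical) threshold.
SPLIT_THRESHOLD = 1000  # hypothetical transactions/s per shard before a split

def route(shards, address_bits):
    """Find the shard whose prefix matches the account address."""
    return next(p for p in shards if address_bits.startswith(p))

def rebalance(shards, load):
    """Replace any overloaded shard prefix with its two children."""
    out = set()
    for prefix in shards:
        if load.get(prefix, 0) > SPLIT_THRESHOLD:
            out |= {prefix + "0", prefix + "1"}  # shard splits in two
        else:
            out.add(prefix)
    return out

shards = {"0", "1"}
shards = rebalance(shards, {"0": 2500, "1": 300})
print(sorted(shards))              # ['00', '01', '1']
print(route(shards, "01101011"))   # this account now lives in shard '01'
```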
Naturally, Solana's predecessors, other single-blockchain or loosely coupled multi-blockchain projects such as EOS, also lacked sharding support. In their early stages they appeared promising, but they proved short-lived, as these designs inevitably hit limitations that damaged their scalability and stability later on. An early warning sign was the crash of Solana's blockchain in September 2021: a sudden surge in transactions caused memory overflow, many validators crashed, the network slowed down and eventually halted, and the blockchain stagnated for about 17 hours. This raises questions about Solana's future performance on real-world transactions, as opposed to custom-designed, very simple transactions that involve only a few accounts and are executed in very specific test environments, with hundreds of powerful validator servers located in one or a few data centers. In this regard, TON appears more robust.
Solana presents an interesting case, embodying an older engineering approach pushed to the extreme of its inherent limitations. It is reminiscent of similar stories in the history of technology, which we will draw connections to in this discussion.
One such story is the British LNER A4 4468 Mallard steam locomotive, which set a world speed record of 203 km/h in 1938. During regular passenger services, the locomotive didn’t reach those average speeds, instead running at around 150 km/h. However, at that time, it held the world record for all types of locomotives—steam, diesel, or electric. Despite this achievement, it marked a technological dead end. Later, all high-speed trains, such as Japan’s Shinkansen, France’s TGV, and Germany’s ICE, transitioned to multi-unit electric trains. The crucial innovation here was the concept of distributing power across multiple units, with each train car containing one or more engines. This allowed electric engines to scale more easily, while steam technology couldn’t achieve such flexibility.
A second example involves Intel's Pentium 4 CPUs in the early 2000s. Intel promised to push the clock speeds of these processors to 10 GHz, claiming unprecedented performance. In practice, however, the Pentium 4 often ran slower than its predecessor, the Pentium III, despite having a higher clock frequency. After hitting the 4 GHz limit, Intel reevaluated its approach, shifting to a multi-core architecture with lower clock speeds but more cores per processor. This multi-core approach proved far more scalable and durable, and today we can purchase processors with up to 64 cores. This shift mirrors the evolution from single-unit to multi-unit trains: making one core ever faster proved less viable than spreading the load across multiple cores.
A third parallel can be drawn from the world of supercomputers, particularly the Cray machines that were popular in the 1970s and 1980s. These were eventually replaced by clusters of thousands of commercial CPUs (usually server versions of Intel and AMD chips). Today, the top 100 supercomputers are all based on these commercial CPU clusters, further confirming the triumph of multi-unit systems over single, highly optimized units.
When we compare Solana to the record-setting steam locomotive, we see a technology that optimizes an old paradigm to its limits but remains non-scalable and dead-ended. We can admire the ingenuity behind such technological marvels, but they are still dead ends in the grand scope of technological evolution.
The comparison between TON and Ethereum 2.0 is complex, especially since Ethereum 2.0's development and deployment were still incomplete as of 2022. Here’s what we know so far.
The transition to Ethereum 2.0 will happen in multiple stages. First, a new Beacon Chain will be deployed, which functions similarly to the main chain described in the original TON white paper. This Beacon Chain will use a PoS consensus algorithm called Casper. Its main job is to register the state of up to 64 shard chains (auxiliary blockchains) by recording the hash of each one's latest block. What is unusual about the proposed PoS algorithm is the involvement of a very large number of validators (at least 16,384), each staking a relatively small amount of ETH (32 ETH). These validators are essentially regular Ethereum nodes that are required to stake 32 ETH; apart from the usual Ethereum network block and mempool propagation, they do not need dedicated communication channels between each other. In this regard, Ethereum 2.0 appears relatively "democratic" (most other PoS blockchain projects are quite "oligarchic," with a handful of validators creating blocks at any given time). However, this approach comes at a cost: the block finalization time for both the Beacon Chain and the 64 shard chains appears to be around 10 to 15 minutes, meaning one would need to wait that long to be sure a transaction is complete.
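A quick calculation shows the scale of economic commitment implied by these figures:

```python
# Minimum total stake implied by the validator figures quoted above.
MIN_VALIDATORS = 16_384
STAKE_PER_VALIDATOR_ETH = 32

total_stake_eth = MIN_VALIDATORS * STAKE_PER_VALIDATOR_ETH
print(f"Minimum total stake: {total_stake_eth:,} ETH")  # -> 524,288 ETH
```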
The second phase of the transition involves merging the current Ethereum 1.0 (PoW) blockchain with one of the shard chains, effectively converting Ethereum into a PoS blockchain.
The final phase will add 63 more shard chains, resulting in a total of 64 shard chains (counting the converted Ethereum 1.0 chain) alongside the Beacon Chain.
At this stage, it is unclear what the exact purpose of the 63 new shard chains will be, or how they will interact with each other; without this information, any comparison of multi-blockchain systems remains incomplete. However, it appears that Ethereum 2.0 avoids shard chain interactions altogether: if messages were passed between shard chains, a block would take 10 to 15 minutes to be finalized in the originating shard chain before it could be processed in another. Furthermore, these additional shards will not run EVM smart contracts, though there are indications that this could change in the future. Instead, they will primarily serve as distributed data storage for Layer 2 solutions, which function similarly to Bitcoin's Lightning Network.
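To see why cross-shard messaging would be so costly under these figures, consider a back-of-the-envelope estimate (a sketch assuming each hop between shards must wait one full finalization period, per the 10-to-15-minute range above):

```python
# Estimated latency of a cross-shard interaction: a message can only be
# processed in the destination shard after the block carrying it has been
# finalized in the source shard.
def cross_shard_latency_min(hops: int, finality_min: float) -> float:
    """Each hop between shard chains waits one full finalization period."""
    return hops * finality_min

for finality in (10, 15):  # estimated finalization time, minutes
    # A request and its reply between two shards make two hops.
    total = cross_shard_latency_min(2, finality)
    print(f"finality {finality} min -> request + reply takes ~{total} min")
```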
Ethereum 2.0 thus avoids the complexity of shard chain interactions and confines smart contract capabilities to separate sidechains, storing only their final state on Ethereum's shards. In this sense, Ethereum 2.0 claims it can scale from its current 15 transactions per second to tens of thousands of transactions per second. However, this claim can be misleading, since these "transactions" are vastly different from typical blockchain transactions; they refer more to specialized operations with a limited number of participants, which only become visible once they are completed.
If we accept that Ethereum 2.0 can handle tens of thousands of “transactions” (sidechain and payment channel transactions), we might compare this to TON, which could theoretically handle billions of such “transactions” per second, given its architecture.
In summary, Ethereum 2.0 avoids the complex issues of shard chain interaction that are inherent in the original Ethereum design. It opts to expand Ethereum 1.0’s blockchain with 63 additional shards that primarily store sidechain and payment channel data. While Ethereum 2.0's approach is pragmatic, it feels somewhat underwhelming, especially for a project that was the first to introduce Turing-complete smart contracts. In its current form, Ethereum 2.0’s goal is not to reach the speed and versatility that TON has already achieved.
The TON blockchain was conceived and described in 2017, and its white paper outlined why its design choices were essential for creating a truly scalable blockchain capable of handling millions of transactions per second without fundamentally changing its smart contract logic and interactions. This is why TON was classified as the only "fifth-generation" blockchain project at the time.
Since then, new blockchain projects have emerged, with expectations that they would overcome the limitations of older blockchain projects discussed in the TON white paper and potentially introduce new methods for blockchain development. However, we have seen the reemergence of blockchain designs based on ideas that were already outdated in 2017, such as Solana. Launched in 2019, Solana is a "third-generation blockchain project" that presents an alternative to TON’s approach, but it faces similar scalability issues to older projects like BitShares and EOS. If history is any guide, Solana may find itself in a similar predicament in 2028, facing the same limitations. Furthermore, adding sharding to Solana to overcome its inherent scalability issues is unlikely to be feasible.
Another disappointing blockchain solution is Ethereum 2.0, which seems to walk back Ethereum's original achievement of Turing-complete smart contracts, implying they are not particularly useful after all. At the same time, Ethereum 2.0 illustrates the same general principle as Solana: unless the issues of shard chain interaction are addressed from the outset, sharding cannot simply be retrofitted onto a blockchain system designed without these considerations in mind.
As of 2022, the TON blockchain remains one of the few truly scalable blockchain projects, capable of handling millions of transactions per second and potentially tens of millions with minimal internal changes. Since its inception, TON has remained at the forefront of blockchain technology.
In the five years since its launch, high-performance testnets and mainnets based on TON technology have further validated the efficiency of the architectural approach outlined in the original white paper.