
Scaling Ethereum with Layer 2 Chains
Anthony Campolo explores Ethereum scaling solutions, focusing on sidechains, ZK rollups, and optimistic rollups, discussing their mechanisms, pros, and cons.
Episode Description
Anthony Campolo explains Ethereum's scalability challenges and explores sidechains, ZK rollups, and optimistic rollups as solutions.
Episode Summary
Anthony Campolo, a developer advocate at QuickNode, walks through the core scalability problem facing Ethereum: limited block space combined with fixed block times creates a bottleneck for transaction throughput. He traces the historical evolution of scaling ideas, beginning with Vitalik Buterin's early "shadow chain" concept and Bitcoin's Lightning Network, then moving into the Plasma framework developed by Buterin and Joseph Poon, which introduced key ideas like fraud proofs and child chains that would influence later solutions. From there, Campolo examines three main categories of current scaling approaches. Sidechains like the original Polygon operate as independent EVM-compatible blockchains running in parallel with Ethereum, offering familiar developer tooling but typically sacrificing some decentralization. ZK rollups use zero-knowledge proofs to mathematically verify transaction validity, offering massive data compression but facing significant computational complexity challenges that remain unsolved. Optimistic rollups, used by Arbitrum and Optimism, take the opposite approach by assuming transactions are valid and relying on a challenge period for fraud detection, making them more practical today despite lower throughput than ZK rollups. Campolo closes by connecting these scaling efforts to the upcoming Ethereum merge as a foundational step toward a rollup-centric future.
Chapters
00:00:00 - Introduction and the Block Space Problem
Anthony Campolo introduces himself as a developer advocate at QuickNode and frames the talk as a deep dive into Ethereum scalability. He sets the stage by explaining that the presentation covers sidechains, ZK proofs, and optimistic rollups, noting this is an updated version of a talk originally given at Ethereum Amsterdam.
The core problem is laid out clearly: Ethereum produces a new block every 12 to 14 seconds, and the real bottleneck isn't the number of transactions but rather the limited block space available. Campolo argues that block space is a more useful metric than transaction count, since transactions vary in size and complexity. Because the ledger space is finite and takes time to write to, scalability constraints are an inherent structural issue that the ecosystem must address.
00:03:14 - Historical Roots: Shadow Chains, Lightning, and Plasma
Campolo traces the intellectual history of Ethereum scaling, starting with Vitalik Buterin's early "shadow chain" concept, which proposed offloading transactions to a secondary ledger and writing compressed results back to the main chain. He connects this to Bitcoin's Lightning Network as evidence that scalability is a cross-chain problem, not unique to Ethereum.
The discussion then moves to Plasma, a framework co-developed by Buterin and Joseph Poon that introduced ideas like child chains communicating with a root chain, secured by fraud proofs. Campolo explains that while Plasma itself was never fully implemented — splintering into variants like Plasma MVP, Plasma Cash, and Plasma Debit — its foundational concepts around fraud verification and off-chain computation became essential building blocks for the scaling solutions in use today.
00:07:37 - Plasma Pros and Cons and the Transition to Sidechains
The pros and cons of the Plasma approach are examined, including its promise of lower fees and faster computation through reduced data processing, but also its lack of a concrete single implementation and its lengthy fund withdrawal periods. Campolo notes the trade-off between having human-in-the-loop auditing versus purely mathematical verification, framing it as a philosophical choice rather than a strictly technical one.
This leads into the discussion of sidechains, which Campolo treats as essentially synonymous with layer twos despite semantic debates in the community. He explains that a sidechain operates independently and in parallel to Ethereum's main chain, connected through bridging technology, with its own consensus algorithm and block parameters. The key advantage is that sidechains are full blockchains themselves, leveraging well-understood technology for establishing shared, tamper-resistant state.
00:11:42 - Sidechain Implementation: EVM Compatibility and Polygon
Campolo highlights that sidechains and layer twos are designed to be EVM compatible, meaning developers can write Solidity code that behaves exactly as expected on the main chain. This preservation of general-purpose computation is critical because losing smart contract functionality would undermine the entire value proposition of scaling Ethereum through additional chains.
The cons of sidechains include a tendency toward less decentralization due to the cold start problem of competing with Ethereum's established validator set, separate consensus mechanisms that may contain bugs, and reliance on a quorum of validators whose incentives may not always align with honest behavior. Polygon is presented as the most well-known sidechain implementation — originally a clone of Ethereum's layer one that supported asset transfers between chains with its own consensus mechanism, though Campolo notes Polygon has since expanded into ZK proof products and other technologies.
00:15:43 - Zero Knowledge Rollups Explained
ZK rollups are introduced as a fundamentally different verification approach based on zero-knowledge proofs. Campolo uses the classic thought experiment of a prover and verifier with colored objects to illustrate the core idea: verifying that someone holds a secret without requiring them to reveal it. He explains the powerful implications, such as authenticating on systems without exposing passwords.
In the ZK rollup model, funds are held by a smart contract on layer one while computation happens off-chain, with each rollup block producing a state transition proof verified by the main chain. The major advantage is dramatic fee reduction through transaction compression, and no full fraud proof game is needed. However, the computational complexity of generating zero-knowledge proofs remains a serious obstacle, with estimates ranging from one to ten years before the optimization challenges are fully resolved.
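The fee reduction from compression is essentially an amortization effect, which a quick sketch can illustrate. All the gas numbers below are assumptions for illustration, not real protocol constants:

```python
# Rough amortization arithmetic with illustrative (assumed) numbers.
# A rollup batch pays one fixed L1 verification cost, split across all
# transactions in the batch, plus a small per-transaction data cost.
L1_VERIFY_GAS = 500_000     # assumed fixed cost to verify one batch proof on L1
PER_TX_DATA_GAS = 300       # assumed calldata cost per compressed transaction
L1_TRANSFER_GAS = 21_000    # cost of the same transfer done directly on L1

def rollup_gas_per_tx(batch_size):
    return L1_VERIFY_GAS / batch_size + PER_TX_DATA_GAS

for n in (10, 100, 1000):
    print(f"batch of {n:4}: {rollup_gas_per_tx(n):8.0f} gas/tx vs {L1_TRANSFER_GAS} on L1")
# Tiny batches are actually worse than L1, but as batches grow the
# per-transaction cost falls toward the data floor, which is why
# compression drives such large fee reductions.
```

Under these assumed numbers, a batch of 10 costs more per transfer than layer one, while a batch of 1,000 is roughly 25x cheaper; the whole advantage comes from amortizing the fixed verification cost.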
00:19:54 - Optimistic Rollups: Arbitrum and Optimism
Optimistic rollups are presented as the most production-ready scaling solution, already heavily used through Arbitrum and Optimism. Unlike ZK rollups that prove validity mathematically, optimistic rollups assume transactions are valid and leave room for others to challenge and prove fraud — hence the name "optimistic." Campolo draws a parallel to optimistic UI patterns in web development where responses are displayed before server confirmation.
Arbitrum's implementation uses a bisection challenge algorithm where a layer one contract arbitrates disputes by bisecting them iteratively between two parties until fraud is confirmed or denied. Optimism takes a slightly different approach with a seven-day challenge window during which funds are locked and participants can inspect and contest proposed state commitments. Both systems are EVM and Solidity compatible and require only a single honest participant to call fraud, making them practical despite offering lower throughput than ZK rollups.
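The narrowing process behind a bisection challenge can be sketched as a toy model. This is an illustration of the idea only; real Arbitrum disputes operate over VM execution steps with staked bonds, and the names here are invented for the example:

```python
def bisect_dispute(honest_states, cheater_states):
    """Narrow a disagreement over a trace of states down to one step.

    honest_states / cheater_states: each party's claimed state after step i.
    Returns the index of the first step where their claims diverge; the
    referee (the layer one contract) then only re-executes that one step."""
    lo, hi = 0, len(honest_states) - 1
    # Invariant: both parties agree at lo and disagree at hi.
    assert honest_states[lo] == cheater_states[lo]
    assert honest_states[hi] != cheater_states[hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_states[mid] == cheater_states[mid]:
            lo = mid          # agreement up to mid: the fraud is in the later half
        else:
            hi = mid          # disagreement already at mid: the fraud is earlier
    return hi                 # the single disputed step to re-execute on-chain

# A 16-step computation where the cheater's trace goes wrong at step 11.
honest = list(range(17))                  # state after each of 16 steps
cheater = honest[:11] + [s + 999 for s in honest[11:]]
print(bisect_dispute(honest, cheater))    # prints 11
```

Each round only requires posting one midpoint claim on-chain, so a dispute over millions of execution steps resolves in a few dozen rounds before the referee checks a single step itself.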
00:25:49 - Resources, The Merge, and Closing Thoughts
Campolo shares resources including QuickNode's website, social channels, and the presentation slides. He highlights a website tracking the status of the Ethereum merge, describing it as a historic event that represents the first step toward a rollup-centric Ethereum roadmap, and notes changes to Ethereum's test networks as part of the transition.
In closing, Campolo emphasizes the importance of Ethereum scalability by pointing to historical periods when the network became overloaded and expensive to use. He encourages developers to explore the layer two options discussed — sidechains, ZK rollups, and optimistic rollups — along with emerging technologies like StarkNet, stressing that making Ethereum usable for a broad user base depends on continued innovation in this space.
Transcript
00:00:02 - Anthony Campolo
Hello everyone. My name is Anthony Campolo, and I'm here to talk about scaling Ethereum. This is for ETH Online. Really excited to speak with you all. Thank you to ETH Online for the opportunity to do this. I am a developer advocate at QuickNode, and QuickNode is a blockchain deployment platform.
00:00:25 - Anthony Campolo
If you want to get access to a node, you can go to QuickNode and find any chain of your choice. Obviously, we'll be talking about Ethereum, but we have a wide range of Ethereum layer 2 options. Today we're going to be talking about scaling Ethereum specifically, and we won't really be talking about QuickNode at all. We'll just be talking about what the problem is with Ethereum scalability and what some different options are to create more scalable applications on Ethereum. So I'm going to go ahead and share some slides with you all. Let me get this going here.
00:01:06 - Anthony Campolo
Now, this is a talk I had originally given at Ethereum Amsterdam, and there's a recording of that. But this will be a slightly more updated, more current version of that because there are some implications with the merge coming up that we're going to talk about a little bit. In this talk, we're going to be talking about sidechains, ZK proofs, and optimistic rollups. If you've never heard any of those terms before, that's okay. We're going to define all of them.
00:01:34 - Anthony Campolo
We're going to get some context on what they mean and why they're important for scaling Ethereum. And then my name is Anthony Campolo. As I already said, I'm a developer advocate at QuickNode. The first thing we should talk about is what the problem is here and why this is something that you need to know about and consider as an Ethereum developer. The problem is that the Ethereum network creates a new block every 12 to 14 seconds.
00:02:02 - Anthony Campolo
Now, when people talk about the issues with scalability, you'll frequently hear them talk about the number of transactions. But block space is, I think, a better metric for thinking about scalability, because a transaction is kind of a vague term, depending on what's happening within that transaction. And I'm using the term transaction in the sense of a blockchain transaction. There's a lot that can happen within the span of a transaction, and how large or small it is will affect how many transactions can fit on a block. So the real bottleneck here is not so much transactions as it is block space.
00:02:41 - Anthony Campolo
So when people talk about blockchain real estate, they're talking about block space. They're talking about how much information, data, and transactions, just user activity, you can fit into these blocks, because that's ultimately what the blockchain is. It's a ledger that we are all collectively using and writing to. So if the ledger space is finite and writing to it takes a certain amount of time, then we're going to have a scalability problem. So that is why there is a scalability problem with Ethereum at all.
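The block space constraint can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (a 30M gas block limit, a 13-second block time, 21,000 gas for a simple transfer), not exact protocol constants:

```python
# Rough throughput estimate from block space, using illustrative numbers.
BLOCK_GAS_LIMIT = 30_000_000   # gas available per block (assumed)
BLOCK_TIME_SECONDS = 13        # one new block every ~12-14 seconds
SIMPLE_TRANSFER_GAS = 21_000   # cheapest possible transaction

transfers_per_block = BLOCK_GAS_LIMIT // SIMPLE_TRANSFER_GAS
tps = transfers_per_block / BLOCK_TIME_SECONDS
print(f"~{transfers_per_block} simple transfers per block")
print(f"~{tps:.0f} transactions per second at best")
# Contract calls use far more gas than 21,000, so real throughput is
# lower still: the bottleneck is block space (gas), not a fixed
# transaction count.
```

Even this best case lands around a hundred transactions per second, and because contract interactions consume much more gas than a bare transfer, real throughput sits well below that ceiling.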
00:03:14 - Anthony Campolo
And that needs to be addressed. Now we're going to talk a little bit about the history here, and we're going to go through the whole timeline, starting with this first paper from Vitalik Buterin. It may have been a blog post, not a paper, but there'll be citations all at the end. What this was is a concept called a shadow chain. There'd be a mainline state, and then there'd be a shadow chain.
00:03:39 - Anthony Campolo
And this is a very prescient idea because all of the things we're going to be talking about in this talk are about how we create a second, separate ledger that the first main chain, which is Ethereum, can interact with and can use to offload transactions somewhere else. So here there's the mainline state, and then there's the shadow chain. The shadow chain will have transactions, and then those transactions will get combined together into a smaller representation and then put back on the main line. You also see that audit arrow in the middle box. The audit arrow addresses the question: what if someone tries to write a bunch of transactions that give themselves a million dollars and writes it back to the chain?
00:04:23 - Anthony Campolo
That's obviously an issue. And we'll be talking about how we can verify these things are true as we go on. The next important thing to know in terms of scalability is that this is not just an Ethereum-specific problem. This is something that is also affecting Bitcoin and other chains as well. So Bitcoin also wanted its own solution for this, which is the Bitcoin Lightning Network.
00:04:51 - Anthony Campolo
And so the Bitcoin Lightning Network had the same idea where there are kind of two separate things. There's the main Bitcoin chain, and then there's going to be everything else that's offloaded from the chain. Now, with the Lightning Network, it's a little bit different in that it's not a full-on blockchain. It's more of a protocol. But I'm including it in the narrative because it's an important part of this thought that we need to figure out a way to scale blockchains.
00:05:21 - Anthony Campolo
And Joseph Poon in particular, who worked on the Lightning Network, is going to be in the next slide here. So now we have Vitalik Buterin talking about shadow chains and then Joseph Poon, who's talking about the Lightning Network. They kind of came together like, okay, we need to figure this out. So they started talking about Plasma, and with Plasma it was a little complicated. So that's why I'm throwing you a quote here.
00:05:43 - Anthony Campolo
It was a proposed framework for incentivized and enforced execution of smart contracts, one that's scalable to a significant number of state updates per second. The smart contract part is important. The idea is that whatever the bottleneck is that's keeping Ethereum from being scalable, we want to make that number much, much larger. We want to make it in the billions so that we can fit as much activity as we could ever want onto this blockchain, and then it would enable the blockchain to represent a significant amount of decentralized financial applications worldwide. So it's first saying we want to be able to offload computation, and then we want to do that so that we can do a lot of stuff and people can send as much money as they want without having to pay huge fees along the way.
00:06:31 - Anthony Campolo
Now, if we look at Plasma a little more in depth, you can see this diagram here. This is from the original Plasma paper, Scalable Autonomous Smart Contracts. And I recommend, if people find this stuff interesting, there's a lot of citations at the end and a lot of dense academic papers written about this subject. You can go very deep on this if you want. The top line of how this works is you have a child chain and a root chain, and there's communication and arbitration between the two, secured by fraud proofs. So this is something that, as we get into the different current solutions available, some of them will have fraud proofs kind of baked into them.
00:07:11 - Anthony Campolo
So this is another reason why we're talking about this historical angle. While we don't use Plasma today, the ideas that were explored in Plasma are going to become important later down the road. Then the child chain has its own mechanism for validating blocks. We'll see a couple different ways that blocks are validated across different options that we have. And particular fraud proofs can be built on different consensus algorithms.
00:07:37 - Anthony Campolo
This is important because consensus algorithms can have downstream effects in terms of how energy efficient a chain is. And you want to make sure you have the ability to include your own consensus algorithm if the difference between proof of work and proof of stake is something that's important to you. Now let's look at the pros and cons of this approach. The first pro is that you have a layer two that enables lower fees and faster computation. That is really the core pro that we're going to be talking about throughout all these different options: the ability to enable lower fees and faster computation.
00:08:16 - Anthony Campolo
So you always want cheaper and faster, and then better is the third one, which may be the hardest to get. Then you reduce the amount of necessary data processing. The way you enable lower fees and faster computation is by reducing the amount of necessary data processing. You also want it to be compatible with layer one scaling solutions like sharding. Now, this is a bit of a historical oddity because sharding is not really as important today in the roadmap as it was previously.
00:08:47 - Anthony Campolo
This was a pro for Plasma in the past, but that's not really something we need to worry about too much right now. When it comes to the cons, though, we have a system and we have a theory, and we have the idea that we want computation offloaded. But if you read the actual Plasma paper itself, it's very long. It describes more of a system and less of a specific implementation. It actually recommends various different implementations.
00:09:16 - Anthony Campolo
This is why you ended up with multiple implementations down the road. You ended up with Plasma MVP. You had Plasma Cash and Plasma Debit. And again, this is more of a historical oddity than something you really need to know about.
00:09:31 - Anthony Campolo
But these are all different attempts at reifying Plasma, which we never actually got to, but we were able to take the ideas of Plasma and put them into practice later. Funds being withdrawable only after a lengthy waiting period is a problem that some solutions today have and some don't. And this is really a question of whether you want the ability to stop a chain and slow it down. It's almost a question of how important it is for the operators of a chain to be able to say, "Hey, wait, we need to actually look at something and audit it and figure out whether it's true or not," versus something where a math problem is just going to verify whether something is true or not.
00:10:16 - Anthony Campolo
Sometimes you actually want a human in the loop. This is a con for some and a pro for others. Now let's talk about sidechains. This is where we really get into the meat of what this topic is about when we talk about sidechains and layer two. To me, the two are essentially synonymous terms.
00:10:34 - Anthony Campolo
Some people argue semantics about what's a sidechain versus a layer two. I don't think that's a particularly useful argument. I think that you have Ethereum and you have another blockchain. You can call that a sidechain. You can call it a layer two chain.
00:10:48 - Anthony Campolo
It doesn't really make a difference to me. The point is you've got two chains. That's what's really important here. And with a sidechain, as you can see in this diagram, you have the main chain and then you have the sidechain, and then you have some sort of proof that allows you to arbitrate whether the transactions are true or not. So you have it operating independently and running in parallel to the layer one.
00:11:13 - Anthony Campolo
So both are happening at the same time and they're both running in concert with each other, and then they're speaking to each other through some kind of bridging technology. And then the sidechain also has its own consensus algorithm and its own block parameters, because it is literally its own blockchain. We'll talk about why that is desirable when we look at the pros and cons of this approach. If we look at the first pro, it's, as I said, a blockchain. We already know how blockchains work.
00:11:42 - Anthony Campolo
We know that blockchains are a way to establish a shared state of the world that is very hard to tamper with and that is very useful for auditing and tracking transactions. So that's a huge pro if we're going to use this technology to extend blockchains themselves. It also supports general computation. This is because when we look at these layer twos and sidechains, they are all set up to be, quote-unquote, EVM compatible. So when we use that term, EVM compatible, what we mean by that is you can write Solidity, the programming language of the Ethereum blockchain that you're already used to and expect to work in a certain way, and it will work that way.
00:12:24 - Anthony Campolo
And this is because smart contracts and Solidity and the EVM were all about providing the ability to write general-purpose programs on the blockchain. So we don't want to lose that when we move to another chain, because if we lose that, then we're losing all of the power of these blockchains themselves. But what are the cons here? The con is that if you have a sidechain, it will tend to be less decentralized. Now, this isn't necessarily because the sidechain is required by some theoretical limit to be less decentralized.
00:13:00 - Anthony Campolo
It's more just how these things play out in the real world. And it's more of an empirical fact that when you start up a new chain, you need to compete with Ethereum and with the fact that there's already this whole set of validators and nodes running all this computation on the Ethereum network. Getting a comparably decentralized network for a brand-new blockchain is a cold-start problem. So this will become less of an issue as time goes on and as we have layer twos that are very decentralized.
00:13:36 - Anthony Campolo
That's an inevitable consequence that we're going to get to. But in general, when you spin up a sidechain or a layer two, it tends to be less decentralized. It also will have a separate consensus mechanism that is not secured by the layer one. This is important if you're trialing a brand-new consensus mechanism that is not used by Ethereum. If that consensus mechanism happens to be faulty or has a bug in it, that could be a huge issue because then you can't really rely on that sidechain or layer two to do what you expect it to do.
00:14:09 - Anthony Campolo
On top of the consensus mechanism, you also have a quorum of validators that you may or may not be able to rely on. This goes back to the idea that it may be less decentralized and may have a less sound consensus mechanism. That means you need to be more conscious of who the validators of this network are. Are they incentivized to have correct transactions and validate in a correct way to keep the chain free of fraud, or are they maybe incentivized to do something else? That's another thing you need to keep in mind with these systems.
00:14:45 - Anthony Campolo
Now we're going to start looking at some implementations here, and Polygon is known as one of the more well-known sidechain or layer two solutions. It is worth giving an asterisk here, which is that Polygon has many different products. Polygon even has a ZK proof product now, which is what we'll talk about after this. So when I talk about the Polygon sidechain, this is kind of like the original Polygon. This is the first Polygon.
00:15:10 - Anthony Campolo
They have a whole bunch of other stuff now that is totally separate from this. But when we talk about this, we're talking about the first iteration of Polygon, and it was just a clone of Ethereum. It was a clone of layer one, and it supported transferring assets to and from layer one to layer two. And again, at the time they called it a sidechain. I'm calling it layer one/layer two because I think it's more comprehensible terminology, even if people argue semantics about that. But it's the same idea. Then you have the layer two. It's a brand-new blockchain, it has its own consensus mechanism, and it's able to create blocks based on everything we've talked about up to this point in the talk.
00:15:43 - Anthony Campolo
That should make a lot of sense. And that was Polygon. Now let's talk about ZK rollups. With ZK rollups, you have a brand-new way of verifying this, and it's based on something called zero-knowledge proofs.
00:15:59 - Anthony Campolo
Now, I'm going to try to give my best explanation of zero-knowledge proofs for people who have not heard of them. There is a thought experiment that can help get the idea across. It's a dense mathematical term, but it can actually be kind of simple to explain if you go through this thought experiment with me. So imagine you have two people: you have a prover and you have a verifier. And that's what we can see right now in this diagram.
00:16:23 - Anthony Campolo
In the classic telling, the verifier is colorblind, and the prover wants to prove that two objects really are different colors, without revealing anything else. The way they do this is the verifier will have two things in their hands.
00:16:43 - Anthony Campolo
You have a red bean and you have a blue bean, like the Matrix red pill and blue pill. The verifier shows them to the prover, then puts them behind their back, and they may switch them or they may not, and then they'll show them again. Then the prover says whether the beans were switched. By doing that exercise over and over again, the verifier can check whether the prover can really tell the colors apart, because a prover who couldn't see the difference would only guess right about half the time.
00:17:23 - Anthony Campolo
The prover doesn't necessarily have to reveal the secret to do this exercise. So it allows you to verify a secret without having to share it. That's really the core here: if you're able to verify whether or not someone has a secret without requiring them to reveal that secret, it's very powerful. For example, you could log in to a website without having to give up your password. That would basically mean you'd be able to access systems without giving those systems information they need to hold onto and guard, because if that information gets stolen, someone else could access the system as you. It would completely eliminate that problem.
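The repeated-rounds logic of the thought experiment can be simulated. This is a toy model of the interactive game rather than a real cryptographic protocol, and the function names are invented for the example: each round the verifier secretly swaps or doesn't, a prover who can see color always answers correctly, and a guessing prover is caught with probability 1/2 per round.

```python
import random

def run_rounds(prover_can_see_color, n_rounds, rng):
    """Simulate n rounds; return True if the prover answered every round correctly."""
    for _ in range(n_rounds):
        swapped = rng.random() < 0.5          # verifier swaps behind their back, or not
        if prover_can_see_color:
            answer = swapped                   # a seeing prover always knows
        else:
            answer = rng.random() < 0.5        # a colorblind prover must guess
        if answer != swapped:
            return False                       # caught lying
    return True

rng = random.Random(42)
# An honest prover who can distinguish colors passes 20 rounds every time.
assert run_rounds(True, 20, rng)
# A guessing prover passes 20 rounds with probability 2**-20, about 1 in a million.
trials = 10_000
cheats_passed = sum(run_rounds(False, 20, rng) for _ in range(trials))
print(f"guessing prover passed {cheats_passed} of {trials} attempts")
# The verifier learns the prover can see color, but never learns which bean is which.
```

The confidence compounds per round, which is why a handful of repetitions is enough to make cheating statistically implausible without the secret ever being shared.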
00:18:05 - Anthony Campolo
So this is really powerful. Now, the layer two scaling solution here is that all the funds are held by a smart contract on a layer one chain, and computation and storage are performed off-chain. For every rollup block, you have a state transition zero-knowledge proof, which is generated and verified by the layer one chain contract. This is where the issues lie, because this is very computationally expensive, but you have a massive amount of data you can transfer because you can roll up such a large amount of transactions into a single transaction. Now, if we look at the pros and cons here, the pro is that it reduces the fees per user transfer.
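The layer one side of that design can be sketched as a tiny state machine. This is a toy model with invented names; a real rollup contract verifies a succinct cryptographic proof, which is stubbed out here as a pluggable callback:

```python
class RollupContract:
    """Toy model of the layer one side of a ZK rollup: it holds funds and a
    single state root, and only advances the root when a validity proof checks."""

    def __init__(self, genesis_root, verify_proof):
        self.state_root = genesis_root
        self.verify_proof = verify_proof   # stand-in for on-chain proof verification

    def submit_batch(self, new_root, proof):
        # The proof must show that applying the batch of off-chain
        # transactions to the current root yields new_root.
        if not self.verify_proof(self.state_root, new_root, proof):
            raise ValueError("invalid state transition proof; batch rejected")
        self.state_root = new_root         # many transfers, one on-chain update

# Toy "proof system": a proof is just the (old, new) pair it claims to connect.
verify = lambda old, new, proof: proof == (old, new)
rollup = RollupContract("root0", verify)

rollup.submit_batch("root1", ("root0", "root1"))      # valid transition: accepted
assert rollup.state_root == "root1"
try:
    rollup.submit_batch("root2", ("rootX", "root2"))  # wrong prior root: rejected
except ValueError:
    pass
assert rollup.state_root == "root1"                   # state unchanged after rejection
```

The point of the sketch is the shape of the design: layer one never re-executes the batch, it only checks one proof and stores one root, which is where the compression and fee savings come from.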
00:18:49 - Anthony Campolo
This is because, like I was saying, you can take all these transactions and roll them all up into one, so each on-chain transaction carries many user transactions inside it. This approach also does not require a full fraud proof or fraud-game verification. That's something we haven't really defined yet; we'll define it a little more in the next section with Arbitrum. But the con is that computing zero-knowledge proofs requires data optimization.
00:19:21 - Anthony Campolo
And this data optimization is very heavy and has high computational complexity. So if you know anything about big-O notation, they still have to figure out a way to optimize these algorithms in a way that's actually going to allow this to scale. So the scaling solution itself is not yet scalable, unfortunately. But this is something that a lot of people are working on, and it's a question of research that needs to be done. Some people say this research may take a year, some people say five years, some people say ten years.
00:19:54 - Anthony Campolo
You know who's going to be right? Only history will really be able to tell. And then you have a security scheme which assumes a level of unverifiable trust. Now we're going to get into optimistic rollups. Optimistic rollups, unlike ZK rollups, are already being used heavily in production.
00:20:14 - Anthony Campolo
So this is something that, if you're going to be dealing with these types of systems, these types of layer twos, you're likely going to be dealing with optimistic rollups. And that is either through Arbitrum or Optimism. We'll get into both of those after we explain what an optimistic rollup is. An optimistic rollup involves having your main chain. So we see here we have Ethereum, we have the main chain, and then we have the rollup.
00:20:39 - Anthony Campolo
And the rollup is the layer two: call it a sidechain, call it whatever you want. It's a chain that is different from the first chain. Then you have these state transitions. At any point, you could run that proveFraud function, and that would be your fraud proof. The reason it's a function you can run is because it's optimistic.
00:20:59 - Anthony Campolo
You're assuming there is no fraud. You're assuming everything is going to work the way you would expect it to. So while ZK rollups prove to Ethereum that transactions are valid, optimistic rollups assume the transactions are valid and then leave room for others to prove fraud. That's why it's optimistic.
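That assume-valid-then-challenge flow can be sketched as a toy model. The class and method names are invented for illustration, and the seven-day window is reduced to abstract time ticks:

```python
class OptimisticRollup:
    """Toy model of an optimistic rollup: state commitments are assumed valid,
    and any single honest watcher can prove fraud during the challenge window."""
    CHALLENGE_WINDOW = 7           # e.g. seven days, measured here in abstract ticks

    def __init__(self):
        self.pending = []          # (commitment, submitted_at) awaiting finality
        self.finalized = []
        self.now = 0

    def submit(self, commitment):
        # Optimistic: accepted immediately, no validity proof required.
        self.pending.append((commitment, self.now))

    def prove_fraud(self, commitment):
        # One honest participant is enough to remove a bad pending commitment.
        self.pending = [(c, t) for (c, t) in self.pending if c != commitment]

    def tick(self, steps=1):
        self.now += steps
        still_pending = []
        for c, t in self.pending:
            if self.now - t >= self.CHALLENGE_WINDOW:
                self.finalized.append(c)   # unchallenged for the full window
            else:
                still_pending.append((c, t))
        self.pending = still_pending

rollup = OptimisticRollup()
rollup.submit("honest batch")
rollup.submit("fraudulent batch")
rollup.prove_fraud("fraudulent batch")   # a single watcher calls fraud
rollup.tick(7)                           # the challenge window elapses
print(rollup.finalized)                  # only the honest batch finalizes
```

Notice the trade the model makes explicit: submission is instant and cheap, but nothing is final until the window closes, which is exactly why withdrawals from optimistic rollups take days.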
00:21:18 - Anthony Campolo
If you've heard the term optimistic UI, in an optimistic UI you send a request to the server and then render the result as if the server is going to respond with what you expected. Of course, that does not always happen, but it will happen more often than not if your server is correct. So this is what an optimistic rollup is. The pros and cons of this are that we have EVM- and Solidity-compatible optimistic rollups. That's because, when we look at things like Arbitrum and Optimism, they are created to be EVM compatible from the start.
00:21:55 - Anthony Campolo
They're also more flexible than ZK rollups because they don't require the really heavy computation involved with zero-knowledge proofs. And then you also have the data available, and it's secured on-chain. So there's this term data availability, which is a really common thing people are talking about today. So you've got the data, and it's available.
00:22:19 - Anthony Campolo
Now, the cons are that you have limited throughput compared to ZK rollups. This is because of how many transactions you can fit onto a ZK rollup versus an optimistic rollup. So it can't fit as many transactions, but it's able to make those transactions in a faster way. And then it requires both an honest majority of Ethereum validators and at least one aggregator that does not censor transactions. So when it says one aggregator that does not censor transactions, they mean that you only need one person who is honest.
00:22:53 - Anthony Campolo
You only need one person who's available and ready to call fraud on it. And so that is the fraud proof, because it's optimistic. You need someone to actually verify and call fraud if fraud happens. But among the entire system, you only need one person to do that. So if you feel confident that within the entire span of blockchain people out there in the world, you have one person who's actually paying attention, then hopefully this would work.
00:23:19 - Anthony Campolo
Now let's look at some actual implementations here. The first one we're going to look at is Arbitrum. And with Arbitrum we see this kind of complicated diagram over here. There's a lot of different things going on. But the most important thing is that you have this bisect challenge, which basically means anytime someone believes there is fraud, there's the ability to challenge.
00:23:43 - Anthony Campolo
And then the challenge will run this bisect algorithm to prove whether or not fraud has actually happened. So let's imagine that we have Alice and we have Bob, and they're going back and forth. Then we have a layer one contract that is refereeing between the two of them. Now, to resolve that dispute, you need to have this layer one contract arbitrate between the two of them. It does that by bisecting the dispute.
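The bisection idea described above can be sketched as a toy: Alice and Bob each claim an execution trace, the referee bisects to the first step where their traces diverge, and then re-executes only that single step on-chain. All names here are hypothetical; the real Arbitrum protocol is considerably more involved.

```python
# Toy bisection dispute. Assumes both traces agree at index 0,
# disagree at the last index, and stay divergent once they diverge.

def first_disagreement(alice_trace, bob_trace):
    """Bisect to the earliest index where the two traces diverge."""
    lo, hi = 0, len(alice_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if alice_trace[mid] == bob_trace[mid]:
            lo = mid   # still agree here: dispute is in the upper half
        else:
            hi = mid   # already disagree: dispute is in the lower half
    return hi          # first step where the traces diverge

def referee(alice_trace, bob_trace, execute_step):
    """The L1 contract re-executes only the single disputed step."""
    i = first_disagreement(alice_trace, bob_trace)
    truth = execute_step(alice_trace[i - 1])  # both agree on state i-1
    return "alice" if truth == alice_trace[i] else "bob"

# Usage: each step adds 1; Bob's trace lies from step 5 onward.
step = lambda s: s + 1
alice = [0, 1, 2, 3, 4, 5, 6, 7]
bob = [0, 1, 2, 3, 4, 99, 100, 101]
print(referee(alice, bob, step))  # prints alice
```

The key property is that the expensive on-chain work is one step of re-execution, not the whole batch, which is what makes fraud proofs affordable.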
00:24:12 - Anthony Campolo
And that is what is happening in that image on the right over there. We are seeing that we have the challenge, and then the challenge leads to bisect, and then there'll be a waiting period, and then either assert or confirm. So it'll either be an assertion where they say we need to check this again,
00:24:35 - Anthony Campolo
or it'll be a confirmation of fraud or not. Then there'll be a pending state, and they'll basically run that on a loop. The final implementation we're going to look at is Optimism. With Optimism, we have a slightly different paradigm. Instead of an interactive bisection game, there's something called a challenge window, which basically means that every time there's a transaction, or every time there's a block, the funds are locked up for a certain amount of time. And while these funds are locked up, people can look and see whether there was fraud, and then call fraud on that.
00:25:18 - Anthony Campolo
So if a proposed state commitment goes unchallenged for the duration of the seven-day window, then it's considered good and there's no issue. But if someone does challenge it, then you'll basically have a fraud proof that happens. Once the commitment is considered final, the layer one smart contract will safely accept proofs based on the commitment. Now, these are all the citations for what we talked about throughout this talk, and I highly recommend people check them out.
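The challenge-window rule above can be sketched as a small state machine. This is a hypothetical structure loosely modeled on the seven-day window from the talk (the `Commitment` class and day-based timing are invented for illustration; real systems measure the window in L1 blocks):

```python
# Sketch of a challenge window: a proposed state commitment is only
# finalized if it sits unchallenged for WINDOW days. Hypothetical code.

WINDOW = 7  # days, per the talk; real systems count L1 blocks

class Commitment:
    def __init__(self, state_root, proposed_at):
        self.state_root = state_root
        self.proposed_at = proposed_at  # day the commitment was posted
        self.challenged = False

    def challenge(self):
        """Anyone may flag the commitment during the window."""
        self.challenged = True

    def is_final(self, today):
        """Final only after the window elapses with no challenge."""
        return not self.challenged and today - self.proposed_at >= WINDOW

c = Commitment("0xabc", proposed_at=0)
print(c.is_final(today=3))  # False: still inside the window
print(c.is_final(today=7))  # True: unchallenged for 7 days
```

This window is also why withdrawals from optimistic rollups to L1 take about a week: funds can't safely leave until the commitment covering them is final.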
00:25:49 - Anthony Campolo
I'll also show the link to this slide in a little bit. Here are our resources. If someone wants to check out QuickNode, they can check out quicknode.com. If they want to check out our Twitter account, we have twitter.com/quicknode. There is a Discord link for anyone who wants to check out our Discord.
00:26:06 - Anthony Campolo
And then we have something called Has ETH Merged? Here is the link for the slides if you want to check out these slides, but I also want to show a little bit of Has ETH Merged. Now, if we look at this right here, we have this website where we can let people know whether ETH has merged or not. And this is important because the merge is a very historic event about to happen on the Ethereum chain.
00:26:32 - Anthony Campolo
And this is actually setting us up for this kind of rollup-centric world. This is the first step in the Ethereum roadmap. And so as you can see here, it has not happened yet, but it is likely to happen on December 14th. If you also go to quicknode.com/the-merge, you can learn more about the merge and what it means. For example, you may know about Ethereum's testnets, things like Ropsten and the like.
00:27:02 - Anthony Campolo
A lot of those are being shut down. We're actually going to be in a state where Goerli is the test network you want to be using. And then we also have these that have been decommissioned, and you can learn more about the actual merge itself here as well. That is the rest of my talk, and thank you so much, everyone, for listening.
00:27:26 - Anthony Campolo
This is something I'm very interested in, in terms of the scalability of Ethereum. I think it's a very important topic because if you look at the history of Ethereum, there have been times the network has become really overloaded and it's been very expensive to actually use it. I think we want this to be a usable system for lots of users out there in the world. It's important we make this scalable.
00:27:49 - Anthony Campolo
So hopefully you can take a little bit of inspiration here. You can maybe look at some of the layer twos that are available and check them out, build some stuff with them. These were three options that I talked about here, but there are many more out there, and there's even some ZK stuff that is starting to get added to mainnet, things like StarkNet. There's a lot of innovation here, and there's a lot of work being done. And if it's something you find interesting, feel free to reach out.
00:28:13 - Anthony Campolo
We'd be happy to talk to you about this and help you get spun up with some of these different layer two options. That will conclude my talk. Thank you so much to ETH Online for having us. My name is Anthony Campolo; you can find me at ajcwebdev on the internet, and you can find QuickNode at quicknode.com. Thank you.