
Optimistic Rollups and Sidechains
Anthony Campolo discusses the pros and cons of different Layer 2 scaling solutions for Ethereum, including sidechains, ZK rollups, and optimistic rollups.
Episode Description
Anthony Campolo explains Ethereum's Layer 2 scaling solutions, covering sidechains, ZK rollups, and optimistic rollups at ETHAmsterdam.
Episode Summary
Anthony Campolo presents a comprehensive overview of Ethereum's scaling challenges and the Layer 2 solutions designed to address them. He begins by explaining why scaling is necessary—Ethereum produces a new block every 12 to 14 seconds, creating a hard ceiling on throughput—and distinguishes between Layer 1 improvements like sharding and Layer 2 approaches that offload computation elsewhere. He traces the intellectual history from Vitalik's early shadow chain concept and the Bitcoin Lightning Network through the Plasma paper co-authored by Joseph Poon and Vitalik, which proposed a theoretical framework but led to a confusing proliferation of implementations that were eventually deprecated. From there, he examines three categories of active solutions: sidechains like Polygon, which run a separate blockchain with its own consensus but face decentralization trade-offs; ZK rollups, which use zero knowledge proofs to mathematically verify transaction validity but remain largely on testnet; and optimistic rollups like Arbitrum and Optimism, which assume transactions are valid and provide a window for fraud challenges. He compares Arbitrum's bisection-based dispute resolution with Optimism's challenge window approach, and concludes with a candid assessment that Polygon, Arbitrum, and Optimism are already running on mainnet while ZK rollups remain more experimental.
Chapters
00:00:12 - Introduction and the Ethereum Scaling Problem
Anthony Campolo introduces himself as a developer at QuickNode who chose to tackle the dense topic of Ethereum scaling for his ETHAmsterdam talk. He shares his background coming from traditional web development and JavaScript into the web3 space over the past year.
He frames the core problem: Ethereum generates a new block only every 12 to 14 seconds, which places a hard limit on network throughput. He distinguishes between Layer 1 scaling, which aims to make Ethereum itself faster through approaches like sharding, and Layer 2 scaling, which offloads computation to a separate environment. The central question for Layer 2, he explains, is figuring out where that offloaded computation actually goes.
00:03:46 - From Shadow Chains to Plasma
Anthony traces the intellectual lineage of Layer 2 scaling, starting with Vitalik Buterin's 2014 shadow chain concept and the Bitcoin Lightning Network, both of which explored moving computation off the main chain. These threads converged in the Plasma paper, co-authored by Joseph Poon and Vitalik, which proposed a framework for scalable smart contract execution using child chains secured by fraud proofs.
He explains that Plasma was more of a theoretical system than a concrete implementation, which led to a confusing landscape of variants like Plasma MVP, Plasma Cash, and Plasma Debit. While these implementations were eventually deprecated, the research wasn't wasted—the same teams went on to build later solutions like Optimism. He also introduces the concept of waiting periods for fund withdrawals, framing them as a security mechanism that enables social coordination in the event of disputes.
00:09:48 - Sidechains and Polygon
The discussion moves to sidechains, where a Layer 2 is explicitly a new blockchain running parallel to Ethereum with its own consensus mechanism, connected via a two-way bridge. Anthony highlights the pros: sidechains are established technology, support general computation, and are Turing complete, making them compatible with Ethereum's workloads.
However, he notes a significant practical downside—sidechains tend to be less decentralized than Ethereum due to the cold start problem of bootstrapping a new validator network. This creates vulnerability to attacks like a 51% takeover of the smaller chain. He then introduces Polygon as the primary sidechain implementation, describing it as a separate blockchain that supports asset transfers to and from Ethereum but operates under its own consensus rules.
00:14:43 - Zero Knowledge Proofs and ZK Rollups
Anthony explains zero knowledge proofs using the colorblind validator thought experiment: a prover demonstrates they can see color by identifying whether beans were switched between hands, without ever revealing the actual colors. He walks through the probabilistic nature of this verification, where repeated successful rounds drive confidence toward certainty.
He then connects this to ZK rollups, where transactions are executed on Layer 2, compressed together, and accompanied by a cryptographic proof that Layer 1 can verify. The key advantage is that ZK rollups don't require fraud proof mechanisms since validity is mathematically guaranteed. However, the computing requirements demand highly specialized expertise, and the security model relies on trusting that the underlying algorithms are sound. He notes that at the time of the talk, no ZK rollup implementations were live on mainnet, with zkSync being the closest on testnet.
00:20:39 - Optimistic Rollups, Arbitrum, and Optimism
Anthony contrasts optimistic rollups with ZK rollups: rather than proving validity upfront, optimistic rollups assume transactions are correct and provide a window for anyone to challenge them with a fraud proof. He draws a parallel to optimistic UI patterns in web development. The approach is EVM-compatible, meaning existing Ethereum code can run on these Layer 2s with minimal changes.
He then examines the two leading implementations. Arbitrum uses an interactive bisection protocol where disputes are narrowed down through repeated halving, similar to a binary search, with a Layer 1 contract acting as referee. Optimism instead uses a challenge window—typically seven days—during which anyone can dispute a transaction by rerunning the computation. If no challenge is raised, the commitment is considered final. Both approaches require at least one honest participant willing to flag fraud.
00:24:30 - Wrap-Up, Q&A, and Predictions
Anthony closes his presentation with resource links and a brief plug for QuickNode, noting that Polygon, Arbitrum, and Optimism nodes are all available on the platform. He reflects on the real-world urgency of scaling, sharing a personal anecdote about paying $200 in gas fees for a $15 ENS domain purchase.
During the Q&A, he shares his perspective on which solutions will succeed. He expresses confidence in Polygon, Arbitrum, and Optimism since all three are already running on mainnet and demonstrably working, while ZK rollup technology remains more experimental and confined to testnets. He emphasizes the importance of distinguishing between solutions you can interact with today versus those still in development, suggesting the existing implementations will continue to operate and grow while ZK rollups have yet to prove themselves in production.
Transcript
00:00:12 - Anthony Campolo
Okay, you want to get started? Cool. We're live. All right. Hello everyone, my name is Anthony Campolo. I'm here from QuickNode, and we're going to be talking about scaling Ethereum and various Layer 2 scaling solutions, including sidechains, ZK rollups, and optimistic rollups. A little background about myself: I'm fairly new to professionally working in the web3 space. I've been very interested in it since around 2017, which is when I first became hip to Ethereum and what was going on. But I didn't really learn to code until around 2018 or 2019, when I came through a traditional web dev bootcamp experience and learned JavaScript and React and stuff like that. I've been getting into actually coding web3 stuff over this last year and started working at QuickNode, which is a node provider. This talk isn't really about QuickNode at all, though many of the chains we'll be talking about are available if you want to host a node on QuickNode. I actually got hired, and then the week before I started they said, hey, do you want to speak at ETHAmsterdam? So my first two or three weeks at the company have just been ramping up and creating this talk.
00:01:38 - Anthony Campolo
And when I was talking to my coworker about what the topic should be, it was like, well, you do the NFT APIs, everyone likes NFTs, NFTs are fun. But I said, nah, I want something a little meatier. So I picked a very dense technical topic just to challenge myself. If you've been following the scaling story over the last four or five years, you're probably going to be familiar with a lot of these terms and concepts. If not, this is going to be a really great overview of all the work that's gone into scaling Ethereum. The first thing we should talk about is: what is the problem here? Why do we need to scale Ethereum at all? It's because the Ethereum network creates a new block every 12 to 14 seconds, so there's a bounded limit to just how much throughput the main chain can have. The ideas for how to scale it split into two big buckets: Layer 1 scaling and Layer 2 scaling.
00:02:38 - Anthony Campolo
So layer one is: how do we make Ethereum faster? How do we get more transactions, or larger transactions, to go through Ethereum? This is why I framed the problem in terms of blocks rather than transactions. People usually quote transactions per second, but that doesn't really tell you everything you need, because a transaction can be a lot of different things depending on its size and what's actually happening within it. So you have solutions that try to scale layer one, things like sharding, or earlier efforts like Casper. And then you have layer two solutions, which are what we'll really be talking about today. The idea is: what if we offload computation, take it off of Ethereum, and do it somewhere else? That question of what the somewhere else is, is the big question here. We're going to look at lots of different somewhere elses, but most of them are going to be blockchains. We'll get into that as we go. The first thing that's kind of interesting and worth mentioning is that a lot of these ideas were circulating around even before they were actually implemented.
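The block-time ceiling described here can be made concrete with some back-of-the-envelope arithmetic. The gas figures below are illustrative assumptions, not numbers from the talk:

```python
# Back-of-the-envelope throughput ceiling for a chain that produces
# one block every ~13 seconds. Gas numbers are illustrative.
BLOCK_TIME_SECONDS = 13        # the talk cites 12 to 14 seconds per block
BLOCK_GAS_LIMIT = 30_000_000   # assumed gas budget per block
SIMPLE_TRANSFER_GAS = 21_000   # gas cost of a plain ETH transfer

txs_per_block = BLOCK_GAS_LIMIT // SIMPLE_TRANSFER_GAS
tps_ceiling = txs_per_block / BLOCK_TIME_SECONDS

print(f"max simple transfers per block: {txs_per_block}")
print(f"rough throughput ceiling: {tps_ceiling:.0f} tx/s")
```

This also shows why a single transactions-per-second figure is misleading: a heavier transaction consumes more gas, so fewer of them fit in each block and the ceiling drops.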
00:03:46 - Anthony Campolo
So there's a really interesting interview with Karl Floersch on the Bankless podcast where he tells the story of Optimism. And this is kind of a spoiler alert, but once the Optimism team cracked the code and figured it out, they went and told Vitalik, and he said, oh, that sounds a lot like that shadow chain thing I came up with back in 2014. As that story illustrates, this isn't work that Optimism was actually based on; they weren't even aware of it. But after the fact, they realized it was prior work leading to the same place. And so we see here we have the main line state, which is the main chain, and then we have the shadow chain. In the terms we've been using, the shadow chain is the layer two. Along with the shadow chain idea, there's also the Bitcoin Lightning Network. This is the same idea of how do you scale Bitcoin, because Bitcoin has many of the same limitations in terms of throughput and scalability that Ethereum has. The Bitcoin Lightning Network is the same idea: you want to offload computation off of the main chain.
00:04:54 - Anthony Campolo
Now, what's different is that the Lightning Network was not a separate blockchain. It was another network with all sorts of mechanisms involved, which isn't super important here, but the idea is that it's a separate thing offloading computation, and this work came together and merged in Plasma. With Plasma you now have Joseph Poon, one of the authors of the Lightning paper, and Vitalik Buterin, who already had the shadow chains idea. This is the abstract of the Plasma paper: it's a proposed framework for incentivized and enforced execution of smart contracts, scalable to a significant amount of state updates per second, potentially billions, enabling the blockchain to represent a significant amount of decentralized financial applications worldwide. So that was the vision of Plasma. But if you actually read the Plasma paper, it's not exactly an implementation; it's more of a theory and a system. It's important to figure out what that theory and system is, and then we can talk about how it became reified. So again, we have a child chain and a root chain. Lots of different terminology, but mostly there's always a layer one and a layer two, and the terms used to represent them tend to shift depending on who's writing the paper or implementing the thing.
00:06:20 - Anthony Campolo
Here, the communication between the chains is secured by fraud proofs, and this is really the key idea we're going to build on throughout this talk. What happens if someone tries to propose a block that's different or fraudulent, or gives themselves a million dollars when they didn't have a million dollars before? This is a key problem we need to solve. Each child chain has its own mechanism for validating blocks, and the different solutions we'll look at today have different mechanisms for doing that, as well as different consensus algorithms. The fraud proofs can be built on different consensus algorithms, and depending on which one you use, you get the trade-offs that go along with it. Now let's look at the pros and cons of this idea. The first pro is that the layer two gives you lower fees and faster computation. This is the core idea, why we want a layer two in the first place, and why everything we're going to look at is a different layer two. It also reduces the amount of data processing that happens on layer one.
00:07:32 - Anthony Campolo
Or rather, the first pro is a consequence of the second. And then you can create compatible layer one scaling solutions like sharding, which I talked about at the beginning. You have things to scale layer one, and hopefully those can be compatible with the way you're scaling layer two as well. What are the cons, though? The first is that, as I said, the paper is more of a system and a theory than an actual implementation. This is why Plasma was really confusing for a lot of people. For a while, when you talked about Plasma, people would not be entirely clear what you were talking about, because Plasma was not a single thing. It was a group of many different implementations based on that first core paper. So you had Plasma MVP, Plasma Cash, Plasma Debit; there's even a separate Plasma MVP called More Viable Plasma. Very confusing. And then you had toolchains built around it that have pretty much been deprecated. Now, you could think of it as, oh no, Plasma was a failure and they scrapped all that work.
00:08:42 - Anthony Campolo
But that's not really what happened. The people who were working on Plasma are the same people who worked on the later solutions we're going to look at here, like Optimism. Another con: funds could only be withdrawn after a lengthy waiting period. This is an idea that's going to reemerge throughout this talk, and it's almost a philosophical point: do you want your system set up in a way where you can stop and have a waiting period? Think of something like the DAO hack. We needed time to actually coordinate and fork the chain to make it correct. So sometimes having a period of time where something can be challenged is itself a key part of the security mechanism, and you can't really get away from the waiting period. Some of the solutions we'll look at have, say, a seven-day waiting period where your funds are locked up and you can't withdraw until that seven days is over. Whether you're okay with that comes down to whether you believe that's a built-in, almost social mechanism to allow coordination.
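The waiting-period mechanism can be sketched as a timelocked withdrawal that only finalizes if nobody challenges it during the window. The class and its names are a hypothetical illustration of the idea, not any particular protocol's code; only the seven-day figure comes from the talk:

```python
CHALLENGE_WINDOW = 7 * 24 * 60 * 60  # seven days, in seconds (figure from the talk)

class TimelockedWithdrawal:
    """A withdrawal that only finalizes after an unchallenged waiting period."""

    def __init__(self, amount: int, requested_at: int):
        self.amount = amount
        self.requested_at = requested_at
        self.challenged = False

    def challenge(self) -> None:
        # Anyone watching the chain can flag the withdrawal during the window.
        self.challenged = True

    def can_finalize(self, now: int) -> bool:
        window_over = now - self.requested_at >= CHALLENGE_WINDOW
        return window_over and not self.challenged

w = TimelockedWithdrawal(amount=100, requested_at=0)
print(w.can_finalize(now=3 * 24 * 60 * 60))  # False: window still open
print(w.can_finalize(now=8 * 24 * 60 * 60))  # True: unchallenged for seven days
```

The point of the delay is exactly the one made above: the window buys time for social coordination before anything becomes irreversible.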
00:09:48 - Anthony Campolo
In the worst case, that means coordinating a new chain. The next thing we're going to look at are sidechains. With sidechains, it's the same idea, where you have a layer one and a layer two, but you're being very explicit that the layer two is a brand new blockchain that we're creating. This blockchain will have all the properties of blockchains that we're used to: a consensus mechanism, linked lists that are append-only and can't be tampered with, all the things we love about blockchains. And you're going to have the separate layer two running parallel to layer one. For all of the solutions we're going to talk about, Ethereum is specifically the layer one, but these ideas could really be transferred to almost any chain with the same sort of scalability problem as Ethereum. And then you have a two-way bridge that connects the two. The layer two will also have its own consensus algorithm and the block parameters that go along with it. As for pros and cons, the first pro is that it's an established technology.
00:11:01 - Anthony Campolo
We already buy into the fact that blockchains work the way we think they do. So if we know a blockchain works, it makes sense to use one as a solution to the same problem. Another pro, a general property of a blockchain like Ethereum, is general computation. If you're going to offload computation from the main chain, you need to offload it to something that can actually do that: it needs to be Turing complete, it needs to have a programming language, and it needs to be compatible with whatever computation happens on layer one. Now, the con is that it's less decentralized. This is an interesting point, because it's not really a theoretical limit of sidechains; there's no inherent reason a sidechain needs to be less decentralized. It's more of an empirical fact. Ethereum itself has been around for years and already has a very good decentralized network of nodes. So you have a bootstrapping problem if you're going to run a whole separate blockchain, with a whole separate stack, that requires a whole separate set of validators and nodes.
00:12:09 - Anthony Campolo
Then there's kind of a cold start problem there in that if you want to have it be as decentralized as Ethereum is anyway, you need to start up this whole blockchain with all these people. So in practice, sidechain implementations tend to be less decentralized, at least in the beginning, and need to grow to become as decentralized as something like Ethereum is. You then end up with a separate consensus mechanism that is not secured by the layer one. And this will introduce additional complexity because if you already know you have a sound consensus mechanism in your layer one, then you may be wary of, well, what is this other chain doing? And how do I know this other chain isn't vulnerable to like a 51% attack or all these other sorts of attacks that blockchains can potentially be vulnerable to? And then you also have a quorum of validators which can commit fraud. You can think of this kind of like an off-chain 51% attack. So if you're relying on this second chain and you have a network that's not very decentralized, then it can be easy for enough nodes in that chain to get 51% and then break that consensus algorithm.
00:13:27 - Anthony Campolo
All right, now we're going to start looking at some of the implementations. So far we had Plasma, and Plasma led to a lot of implementations that didn't really pick up. But the ones we're looking at now are actually in practice; people are using them, and any of them can be accessed through an RPC provider like QuickNode. So Polygon is a sidechain: it's a clone of the layer one chain that supports transferring assets to and from layer one and layer two. That should make sense given everything we've been talking about throughout this talk. You have one chain, you have another chain, and the two chains talk to each other. The layer two is a new blockchain with its own consensus mechanism for creating blocks; Polygon is not an exact copy of Ethereum, it's a new chain itself. This leads to other ideas that are not necessarily a separate blockchain with its own consensus algorithm. We're now going to see things called ZK rollups, which use zero knowledge proofs. Zero knowledge proofs are a very mathematically dense idea, but the simplest way to describe them is that they're about verifying a secret without sharing it.
00:14:43 - Anthony Campolo
And there's an interesting thought experiment that actually helped me understand this. Imagine you have a prover and a validator. The prover has to prove that they are not colorblind, and the validator has to validate whether that is true, even though the validator themselves is colorblind. The way this works: imagine the validator has two beans, one in each hand, one red and one blue. You can think of it kind of like The Matrix: you got the red pill, you got the blue pill. The validator holds the beans behind their back, either switches the two between hands or keeps them in the same hands, and then shows them to the prover, and the prover says whether they were switched or not. The prover can see whether the beans were switched and can verify that without ever needing to tell the validator what the colors are. And because the validator knows whether they themselves switched, they can check the prover's answer.
00:15:51 - Anthony Campolo
And that doesn't require knowing the colors themselves, which is really useful, because it allows the prover to be validated without actually sharing the secret. But if you think about it, it's probabilistic: what if you just guessed the first time, said it was switched, and happened to be right? There's a 50/50 chance you were guessing. So you do it again, and if you prove it a second time, it's slightly more likely you can really see the colors: 75%. As you do it over and over and keep proving it, the probability that you're actually proving it goes up and up. It's an interesting question to ask yourself what percentage you'd be comfortable with. Is 99% enough? What about 99.9? 99.99? Eventually it gets to the point where you can say, okay, yes, you have actually proved this. That is what a zero knowledge proof is. Now, if we look at how this factors into everything else we've been talking about: you have your layer two scaling solution and your layer one, and the computation needs to be performed on layer two.
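The probabilities in this walkthrough (50%, then 75%, and so on) follow 1 - (1/2)^n after n successful rounds, which a few lines of code can confirm:

```python
# Confidence after n successful rounds of the bean-switching game.
# A pure guesser succeeds in a single round with probability 1/2, so the
# chance of passing n rounds by luck alone is (1/2)**n.
def confidence(rounds: int) -> float:
    return 1 - 0.5 ** rounds

for n in (1, 2, 7, 20):
    print(f"{n:2d} rounds -> confidence {confidence(n):.6f}")
```

After one round the verifier is only 50% convinced, after two rounds 75%, and by twenty rounds a lucky-guess explanation is less likely than one in a million, which is the "up and up and up" intuition above made exact.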
00:17:00 - Anthony Campolo
And then for every rollup block on the layer two, you have a state transition zero knowledge proof, which is generated on layer two and verified by layer one. They create this proof, which proves that they have actually done all the transactions correctly. This allows having a lot of transactions rolled up together, and that combination of transactions can then be put back onto layer one. You can think of it like compression: there's a lot of information, a lot of computation happening off-chain, but then they can roll it all up and put it back on the layer one chain. Now, with ZK rollups, we don't actually have real implementations running on mainnet right now. zkSync is pretty close: they're on testnet, they have a lot of funding, and they are the closest, at least that I'm aware of, to being on mainnet. But there's no ZK rollup implementation right now. That's why we're looking at sidechain implementations and then optimistic rollup implementations.
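The rollup-as-compression idea can be sketched as batching many layer two transactions into one small commitment posted to layer one. The hash below is a hypothetical stand-in for the real compressed state data and validity proof, purely to illustrate the shape of the scheme:

```python
import hashlib
import json

def commit_batch(transactions: list[dict]) -> str:
    """Compress a batch of L2 transactions into a single L1 commitment.

    A real ZK rollup posts compressed call data plus a validity proof;
    here one SHA-256 digest stands in for both, as an illustration only.
    """
    payload = json.dumps(transactions, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

batch = [
    {"from": "alice", "to": "bob", "value": 5},
    {"from": "bob", "to": "carol", "value": 2},
]
commitment = commit_batch(batch)
print(commitment)  # one fixed-size digest posted to L1 instead of every tx
```

However many transactions go into the batch, what lands on layer one stays constant-size, which is where the fee savings come from.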
00:18:12 - Anthony Campolo
But the ZK rollups are still in process. I feel they're worth mentioning because they're very important to the development of the next thing we're going to look at, which are the optimistic rollups. But first, the pros and cons of ZK rollups. You have reduced fees per user transaction, which is true of many of the solutions we're looking at here. You have less data contained in each transaction, because transactions are rolled up together and then put onto layer one. And ZK rollups don't require fraud proof verification; this is what makes them different from optimistic rollups, which we'll look at next and which do require fraud proofs. The cons are that computing zero knowledge proofs requires data optimization for maximum throughput. Basically, it requires very specialized algorithms written by people with very specialized PhDs. In one sense that's a pro, because you know it's built on really solid research that's been going on for a long time. But there are also maybe 100 people in the world who have really, fully wrapped their minds around this stuff.
00:19:28 - Anthony Campolo
So if you find this interesting, it's a good thing to get into, because they need more people and help. But the security scheme assumes a level of unverifiable trust: you're trusting that the algorithms are sound and that the proof system is correct. You really need to audit that and know that it works, because you don't have a way to call fraud on it. You're saying, well, you can't defraud this in the first place, because it's fundamentally built in a way that can't be defrauded. With optimistic rollups, though, we're going to have fraud proofs, and we'll get into that as we go. With ZK rollups, you start by proving to Ethereum that the transactions are valid, whereas with optimistic rollups, you assume the transactions are valid and leave room for others to prove fraud. That's why it's optimistic. If you've ever heard the term optimistic UI: optimistic UI is when you fire a request off to the server and give a response back to the client right away, assuming that it worked.
00:20:39 - Anthony Campolo
And it's the same thing with optimistic rollups. You assume the transactions are valid and correct, and you leave a window open for anyone to say, this is not valid, I want to call fraud on this. There's a mechanism built in to do that, and there are two different mechanisms we're going to look at. As for pros and cons, the first pro is that it's compatible with the EVM. This is really important for the two implementations we're going to look at, Arbitrum and Optimism. They both aim to be compatible with the EVM, which means computation that would run on the EVM can work on Optimism or Arbitrum without necessarily changing the code; hopefully the stack is basically identical. That makes it more flexible than ZK rollups, because it's almost like running a clone of Ethereum in that sense. And all the data is available and secured on-chain. Now, the con is that it has more limited throughput compared to ZK rollups, because with ZK rollups you can really take a ton of transactions and smush them all together.
00:21:58 - Anthony Campolo
And it requires an honest majority of Ethereum validators and at least one aggregator that does not censor the transactions, because you actually have to have someone to call fraud and activate that fraud proof. And it only requires one person, though. So as long as you feel that there is one trusted node within this quorum, then you can feel confident that they're going to call fraud on it. The first implementation we'll look at is Arbitrum and this little thing here. So this is from the white paper. And the first time I looked at this I was like, what the heck is going on here? But it actually makes a lot of sense. If you just kind of look at the middle pieces, you have the challenge, and then when you have the challenge, it goes to bisected, bisected goes up to waiting, and then we'll check whether it is confirmed or not. And then it goes to pending and you either rechallenge and then you do the loop again, or you say, okay, this is good. And then you exit and this will then kind of slowly chunk it down little by little. So if you think of like a binary search, it's a little bit like that.
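The bisection idea can be modeled as a binary search over an execution trace for the first step where the two parties disagree. This toy version (hypothetical function names, not Arbitrum's actual code) shows why the layer one referee only ever has to re-execute a single step:

```python
def find_disputed_step(honest_trace: list, fraudulent_trace: list) -> tuple[int, int]:
    """Bisect two execution traces to the first step where they diverge.

    Each round halves the disputed range, like a binary search, so the
    L1 contract acting as referee only re-executes one step at the end.
    """
    lo, hi = 0, len(honest_trace) - 1
    rounds = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if honest_trace[mid] == fraudulent_trace[mid]:
            lo = mid + 1   # agreement up to mid: the dispute is later
        else:
            hi = mid       # disagreement at mid: the dispute is here or earlier
        rounds += 1
    return lo, rounds

# An honest trace vs. one where everything from step 700 onward is forged.
honest = list(range(1024))
forged = honest[:700] + [x + 1_000_000 for x in honest[700:]]
step, rounds = find_disputed_step(honest, forged)
print(step, rounds)  # step 700 found in log2(1024) = 10 rounds
```

Narrowing a 1024-step dispute takes only about ten challenge rounds, which is why the interactive protocol stays cheap even for long computations.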
00:23:10 - Anthony Campolo
You can think of Alice and Bob engaging in a back and forth, refereed by a layer one contract that resolves the dispute, and it's resolved through bisection: bisecting the dispute down, which is what the image shows. The next one is Optimism. With Optimism, we don't have the same bisection idea; instead, we have a challenge window. You have a period of time during which anyone is allowed to verify whether a commitment is actually valid, and if not, they can challenge it. The computation is then rerun and verified. If it goes unchallenged for the duration of the challenge window, it is considered final. In Optimism, you have a seven-day window, and once the commitment is considered final, the layer one smart contract can accept it and you're all good to go. Okay, and then these are the citations for all of the stuff I talked about, and these are links for QuickNode, if you're curious.
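The "rerun the computation" check behind a challenge can be sketched with a toy state-transition function. The names and the balance model here are hypothetical illustrations, not Optimism's actual implementation:

```python
def apply_tx(state: dict, tx: dict) -> dict:
    """Toy state transition over a mapping of account balances."""
    new_state = dict(state)
    new_state[tx["from"]] = new_state.get(tx["from"], 0) - tx["value"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["value"]
    return new_state

def challenge(pre_state: dict, txs: list, claimed_state: dict) -> bool:
    """Rerun the batch and compare against the sequencer's claimed result.

    Returns True if the claim is fraudulent and should be rejected.
    """
    state = pre_state
    for tx in txs:
        state = apply_tx(state, tx)
    return state != claimed_state

pre = {"alice": 10, "bob": 0}
txs = [{"from": "alice", "to": "bob", "value": 4}]
honest_claim = {"alice": 6, "bob": 4}
forged_claim = {"alice": 6, "bob": 4_000_000}
print(challenge(pre, txs, honest_claim))  # False: claim matches re-execution
print(challenge(pre, txs, forged_claim))  # True: the fraud proof succeeds
```

This is also why one honest participant is enough: anyone who re-executes the batch can detect and flag a forged commitment before the window closes.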
00:24:30 - Anthony Campolo
So the three different implementations we looked at are all available if you want to spin up a node on QuickNode and connect to it. Check out our Twitter; we have an event in two days at 7:30, so feel free to come hang out at our Hacker House. We're also hiring, in lots of different areas and parts of the company. And we have a Discord, so feel free to check that out. Yeah, that is the whole talk. Thank you. Does anyone have any questions? We've got a couple minutes here. If not, that's totally fine. I appreciate you all being here, and I hope that all made sense. This is a very interesting, dense, technical topic, and that was kind of a whirlwind overview, but this is work that is very consequential and important to Ethereum. Around 2017, when CryptoKitties happened, everyone was like, oh wow, this is really slowing down the network. Then for a while people thought it wasn't that big of a deal, because you had the whole crash.
00:25:40 - Anthony Campolo
And now, over these last two years, it's really a problem. I can say personally, for myself, I have an ENS domain. I don't know if any of you have an ETH domain, but when I bought mine, it was $15 for the domain and $200 for the transaction. So this is definitely a serious problem, and it's great that there are lots of projects out there trying to address it. Hopefully this gave you a bit of an idea of what's out there and available for you to check out. Yeah, yeah, please do.
00:26:10 - Audience questioner
I guess it's more asking about your opinion or perspective, and asking you to speculate a bit. Obviously there are these different methods that people are proposing for scaling. Do you think, when these systems get proposed and then when they
00:26:29 - Anthony Campolo
get tested out in the real world,
00:26:30 - Audience questioner
you find out what's wrong with them, right?
00:26:31 - Anthony Campolo
Sure, yeah.
00:26:33 - Audience questioner
Do you have any predictions about which ones are going to be more successful or more adopted or like if there's going to be tests or kind of potential issues with any particular method that you talked about?
00:26:48 - Anthony Campolo
Yeah, sure. So the three implementations, Polygon, Arbitrum, and Optimism, are all running on mainnet right now. You can put money into them, they are running, and you can treat them like a real blockchain you can invest in. The proof is in the pudding, in the sense that they are already operating. You want to separate the scaling solutions you can look at and interact with today from the ones that are still in the works. That's where the ZK rollup stuff is: still in the works, running on testnet, and it's still unclear what form it's really going to take. You can look at something like zkSync and say, okay, this is on testnet, it's probably going to be on mainnet in a similar form, but you can't really say whether it works yet, versus things like Arbitrum and Optimism and Polygon, which are running and which you can point to and say are working.
00:27:46 - Anthony Campolo
So I think that's the main thing: I feel fairly confident that we're going to continue to see Arbitrum and Optimism and Polygon run and operate; they seem to be stable and working. The ZK rollup stuff is still a bit more experimental and theoretical, and we're not entirely sure how that's going to pan out. Yeah. No. All right, cool. Well, thank you so much, everybody.