
Layer0 with Ishan Anand and Mark Brocato
Episode Description
Layer0's CTO and VP of Engineering discuss scaling Jamstack beyond static sites, edge computing, and performance monitoring for large e-commerce websites.
Episode Summary
Ishan Anand and Mark Brocato from Layer0 join the show to explain how their platform tackles the limitations of traditional static Jamstack for large-scale websites. They begin by positioning Layer0 as a Jamstack platform focused on dynamic, high-stakes websites with tens of thousands to millions of pages, distinguishing it from platforms like Netlify and Vercel that tend toward smaller, static sites. The conversation traces Layer0's origins from Moovweb, a company that helped enterprise e-commerce clients go mobile, into its current form as a developer-focused deployment platform. A significant portion of the discussion centers on the evolving definition of "Jamstack" itself, with Ishan arguing it should be reduced to two core principles: serving from the edge and developer empowerment. Mark explains Layer0's technical differentiator — EdgeJS, which compiles JavaScript routing logic down to Varnish Configuration Language for near-zero latency at the edge, contrasting it with slower alternatives like Cloudflare Workers. The hosts explore build friction, incremental static generation, and framework-agnostic approaches before turning to performance measurement, where Ishan makes a strong case for Real User Monitoring over Lighthouse scores, particularly in the context of Google's Core Web Vitals and their implications for search ranking and revenue.
Chapters
00:00:00 - Introductions and Layer0 Overview
The episode opens with introductions of Ishan Anand, CTO of Layer0, and Mark Brocato, VP of Engineering. They mention their own podcast, JavaScript Jam, which covers JavaScript and front-end web development topics. Ishan provides an overview of Layer0 as a Jamstack platform focused on dynamic, high-stakes websites rather than small static sites.
The key distinction Ishan draws is around scale: while most classic Jamstack users work with sites under 10,000 pages, Layer0 targets websites with hundreds of thousands or millions of pages. This complexity demands features like observability and serverless functions that go beyond what static-first platforms typically offer. They also discuss the company's rebrand from Moovweb, drawing a parallel to Meteor's evolution into Apollo, and how Layer0 emerged from enterprise e-commerce work with clients like Walmart and Macy's.
00:05:21 - The Jamstack Identity Crisis
The conversation shifts to the broader philosophical question of what the Jamstack actually means and where it's heading. Ishan references his talk "The Evolution of the Jamstack" and argues that the traditional static-only definition doesn't scale to large websites — even a physical grocery store's 60,000 products would break that model. He traces how the definition has been repeatedly revised, from the original capitalized JAMstack to a more inclusive view that accommodates dynamic content.
Ishan proposes reducing Jamstack to two core principles: serving data from the edge and developer empowerment. Anthony connects this to the idea of "full-stack Jamstack," emphasizing Git-based workflows with atomic deploys and rollbacks. Mark adds that developer empowerment means enabling experimentation like A/B testing through simple Git branching, with the ability to quickly ramp traffic up or down and roll back deployments without complex orchestration, keeping the process low-stress and high-control.
00:10:54 - Build Friction and Scaling Beyond Static
Christopher poses a practical scenario about building an e-commerce site with Gatsby and Shopify that quickly hits build-time limitations as products scale up. Ishan reframes the problem as "build friction," which involves not just page count but also update frequency — merchandisers on large sites make constant changes that would require rebuilding hundreds of thousands of pages unnecessarily, creating both slowdowns and excess cost.
Mark explains how Layer0 addresses this with techniques like parallel static rendering, which uses serverless compute to crawl and cache pages at the edge during deployment for any framework, not just Next.js. He argues that the real distinction between static and dynamic is simply whether content is cacheable and served from the edge, not how it got into the cache. Layer0 leverages AWS Lambda underneath to provide scalable, highly available serverless compute, blurring the line between static and dynamic while maintaining Jamstack-level performance.
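The parallel static rendering technique can be sketched as a bounded-concurrency crawl that warms an edge cache at deploy time (a simplified illustration, not Layer0's implementation; `renderPage` is a stand-in for the framework's server-side render):

```javascript
// Sketch of "parallel static rendering": at deploy time, fan out over a list
// of URLs with a fixed number of workers, render each page, and store the
// result in an edge-cache-like map so the first real visitor gets a cache hit.

async function renderPage(url) {
  // Placeholder: a real system would invoke the framework's SSR here.
  return `<html><body>Rendered ${url}</body></html>`;
}

async function prerenderToCache(urls, concurrency = 4) {
  const cache = new Map();
  const queue = [...urls];
  // Run `concurrency` workers, each pulling URLs off a shared queue.
  const workers = Array.from({ length: concurrency }, async () => {
    while (queue.length > 0) {
      const url = queue.shift();
      cache.set(url, await renderPage(url));
    }
  });
  await Promise.all(workers);
  return cache;
}

prerenderToCache(["/", "/p/1", "/p/2"]).then((cache) => {
  console.log(cache.size); // 3: every page is cached before traffic arrives
});
```

The point of the sketch is that this works for any framework that can render a URL on the server; nothing about it depends on Next.js.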
00:17:00 - Caching, ISG, and Framework-Agnostic Design
Ishan highlights that only about 11% of major retailers actually cache their HTML, even though that first HTML response is what users wait on before anything else can load. Layer0 customers achieve better than 90% cache hit rates on both API and page data through their EdgeJS technology. The discussion then unpacks incremental static generation, where pages are built on demand as traffic arrives rather than all at build time, with subsequent visitors receiving the pre-built result instantly.
The conversation turns to framework support, with Ishan and Mark explaining Layer0's connector architecture. The core platform provides framework-agnostic primitives, while connector packages translate each framework's conventions into those primitives. This means advanced features like ISG, originally tied to Next.js, can work across Nuxt, Angular, Svelte, and specialized e-commerce frameworks like SAP Spartacus. They emphasize that roughly 95% of the platform's functionality is framework-independent.
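The connector architecture they describe can be sketched as an adapter pattern (conceptual only, not Layer0's code; both connectors below are hypothetical):

```javascript
// The core platform understands one primitive: a route pattern plus its
// rendering behavior. Each connector translates a framework's own conventions
// into that primitive, so advanced features stay framework-agnostic.

// Core primitive: what every framework's routes get normalized into.
function makeRoute(pattern, render) {
  return { pattern, render };
}

// Hypothetical Next.js-style connector: file-system pages become routes,
// e.g. "pages/products/[id].js" becomes "/products/:id".
const nextConnector = {
  toRoutes(pages) {
    return pages.map((file) =>
      makeRoute(
        file
          .replace(/^pages/, "")
          .replace(/\.js$/, "")
          .replace(/\[(\w+)\]/g, ":$1"),
        "ssr"
      )
    );
  },
};

// Hypothetical connector for a framework with an explicit route config.
const configConnector = {
  toRoutes(config) {
    return config.map(({ path }) => makeRoute(path, "ssr"));
  },
};

const a = nextConnector.toRoutes(["pages/products/[id].js"]);
const b = configConnector.toRoutes([{ path: "/products/:id" }]);
console.log(a[0].pattern === b[0].pattern); // true: same core primitive
```

Once every framework's routes reduce to the same primitive, features like ISG only need to be built once against that primitive.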
00:22:31 - EdgeJS and CDN-Level Performance
Anthony asks about Layer0's EdgeJS, comparing it to Cloudflare Workers. Mark explains that while both let developers write JavaScript at the edge, Layer0's approach compiles that JavaScript down to VCL, which runs natively in Varnish as machine code. This means even thousands of route-matching operations add only one to three milliseconds of latency, compared to potentially hundreds of milliseconds with Cloudflare Workers.
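The compile-ahead idea can be sketched as translating JavaScript route rules into a static edge config at deploy time (the generated text below is loosely VCL-flavored for illustration, not real Varnish configuration, and the whole compiler is hypothetical):

```javascript
// Sketch of the EdgeJS compile step described above: route rules authored in
// JavaScript are translated once at deploy time into static config, so the
// edge evaluates precompiled rules per request instead of running a JS VM.

const routes = [
  { pattern: "^/products/", ttl: 86400 }, // cache product pages for a day
  { pattern: "^/cart", ttl: 0 },          // never cache the cart
];

function compileToEdgeConfig(routes) {
  const rules = routes
    .map((r) => `  if (req.url ~ "${r.pattern}") { set beresp.ttl = ${r.ttl}s; }`)
    .join("\n");
  return `sub vcl_backend_response {\n${rules}\n}`;
}

console.log(compileToEdgeConfig(routes));
```

Because the per-request work is just matching precompiled rules rather than booting a JavaScript runtime, large route tables add only a small constant cost at the edge.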
Ishan adds context about why this matters for their market segment: large legacy sites with complex routing built over 10 or 20 years often have URL patterns that don't match modern framework defaults, and migrating URLs risks significant SEO and revenue damage. Because Layer0 is framework-aware, it can automatically translate a framework's routing conventions to the edge, display observability data in terms developers understand — like category pages and product pages — rather than raw URLs, and provide performance statistics at the application level without requiring additional configuration.
00:28:53 - Real User Monitoring and Core Web Vitals
The discussion pivots to performance measurement, starting with Real User Monitoring. Ishan explains that RUM aggregates actual visitor experiences rather than relying on lab simulations like Lighthouse, which can vary by machine and don't reflect real-world conditions. This matters because Google's Core Web Vitals ranking uses field data, making speed directly tied to search traffic and revenue — a zero-sum competition where faster sites steal rankings from slower ones.
The group examines shortcomings of both Lighthouse and Core Web Vitals for single-page applications: Lighthouse only measures initial page load, missing the fast subsequent navigations that define SPA experiences, while Cumulative Layout Shift previously penalized long-lived SPA sessions unfairly. Ishan notes that Layer0 built real-time RUM integrated with A/B testing so developers can measure the impact of changes within minutes rather than waiting 28 days for Google's data. Mark recommends their JavaScript Jam episode with Chrome team members Annie and Katie for deeper insight into how Google weighs these metrics.
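The gap between a single lab run and field data comes down to percentiles: Core Web Vitals are assessed at the 75th percentile of real-visitor samples. A minimal sketch (the sample values are invented for illustration):

```javascript
// One Lighthouse run yields one number from one machine; RUM aggregates many
// real sessions and reports a high percentile, which is what Google assesses.

function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank percentile: smallest value covering p percent of samples.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Hypothetical Largest Contentful Paint samples (milliseconds) from visitors.
const lcpSamples = [900, 1100, 1200, 1300, 2400, 2600, 4800, 5000];
console.log(percentile(lcpSamples, 75)); // 2600
```

Here half the visitors saw LCP well under 1.5 seconds, yet the p75 value of 2600 ms is what counts toward the site's Core Web Vitals assessment.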
00:36:11 - Comparing Platforms and Getting Started
Christopher asks directly how Layer0 compares to Vercel for hosting Next.js. Ishan offers a nuanced answer: for static sites, both work well, but for high-stakes sites at scale, Layer0 adds edge-level routing, split testing integrated with RUM, and techniques like parallel static rendering. He acknowledges that Layer0 doesn't currently execute static builds directly, since their target customers already have CI/CD pipelines and aren't building purely static sites.
The conversation wraps with Ishan comparing the Jamstack platform market to e-commerce tiers — WooCommerce and Shopify at one end, SAP and Demandware at the other — positioning Layer0 for the mid-to-high end serving sites with over 10,000 pages and significant revenue. He emphasizes that Layer0 brings framework innovations like ISG to platforms beyond Next.js. The episode closes with getting-started instructions, social media handles, and a brief postscript about Gatsby 4's release with deferred static generation, with Mark quipping that everyone keeps inventing new acronyms for server-side rendering with a cache.
Transcript
00:00:00 - Ishan Anand
Okay. Sounds good.
00:00:12 - Anthony Campolo
Mark and Ishan, welcome to the show.
00:00:14 - Mark Brocato
Thank you very much for having us.
00:00:15 - Ishan Anand
Thank you. Longtime listener. Really excited to be here.
00:00:18 - Anthony Campolo
Great to have you too. You are also a host of your own podcast with some sweet intro sounds when people are on, so I might have to snip a little bit of that to get in the show. Why don't you let our listeners know who you are, what your show is, and a little bit about your company, Layer0?
00:00:35 - Ishan Anand
I'm Ishan Anand, CTO of Layer0.
00:00:38 - Mark Brocato
And I'm the VP of Engineering at Layer0.
00:00:40 - Ishan Anand
And the show you're referring to is our humble little podcast called JavaScript Jam, covering whatever's interesting in JavaScript and front-end web development or full-stack web development as well. I don't know how they add that sound in. That's something the marketing team does, so we'll have to ping them more on it.
Layer0, the simplest way to think about it is we're a Jamstack platform. It's part of a category we've internally been calling AppOps, application orchestration. But I think most folks casually know it as something similar to Netlify or Vercel. What differentiates us is that we focus on dynamic websites. If you went to, say, the Headless Commerce Summit last year, most folks who are using, shall we say, classic static Jamstack are doing it on sites that have 10,000 pages or less, often 1,000 pages or less. We tend to focus on the high end of the market. We call them high-stakes websites: they start at 10,000 pages and go up to hundreds of thousands, even millions of pages.
[00:01:45] So when you've got that level of complexity, how do you bring the Jamstack benefits if you can't go static? That's really where we spend a lot of focus. So you'll see a lot of stuff in our platform about observability and serverless functions. We tend to lean on that stuff a little more heavily than the other guys.
00:02:00 - Anthony Campolo
I'd be curious if you guys actually rebranded, I think, recently. So I'm curious how long the company's been around in its current form and its previous form.
00:02:10 - Ishan Anand
Yeah, that's a great question. The best analogy is, you know how the Meteor team was working on Meteor, and then they started doing GraphQL and Apollo and rebranded and said, "Oh, this is the thing that has interesting traction. Let's try this." Layer0 is kind of spun out of a previous company called Moovweb. Moovweb was really in the business of helping customers go mobile, again, in the same segment of the market: high-end e-commerce websites. Folks like Walmart and Macy's were historical customers of Moovweb.
If you're curious, we can get into it. I really feel like Moovweb was actually pre-Jamstack before Jamstack. It had a lot of the benefits of Jamstack, but from kind of an alternate dimension that didn't include static techniques. Layer0 has really, for the last two years, been our Jamstack platform in the flesh, and it's gotten a lot of interest and traction. So we kind of rebranded around that after we saw it take off.
00:03:07 - Mark Brocato
There's also such an advantage to having a company whose name is the product. It's an order of magnitude worse when you're a company that has multiple products. Everybody just refers to you as the company name, and branding gets all confused. When you've got one main thing going on, it's so much better to just be called that one main thing.
00:03:22 - Anthony Campolo
Yeah, I played around a little bit with Layer0, just tried out one of the starters and deployed it. It has a really cool CLI for doing builds and deploys. There's such a large group of different companies that do something like this, and I would guess that the name Layer0 is because we talk about layers of clouds. Like you have layer one clouds people talk about, you have layer two things that are built on top of layer one. So is the idea that you wouldn't need AWS or anything like that because you guys are like the base layer of what you would use?
00:03:55 - Ishan Anand
That's pretty much it. It's the idea that it's the foundation of what you're going to build your app on. You can get started spending your time on your product instead of spending your time on your tools.
Think about the last decade and a half. Cloud has been a huge driver of innovation. Everything is underpinned by the cloud. Whether it's mobile, where you need synchronization between devices, or AI and big data that need a cloud to store and process stuff. We've got a lot of tools, but go log in to AWS and look at that console. There are, if you count them, over 250 different services. And then you have to piece all those services together. We actually mapped this out. We said, like, let's suppose you are trying to deploy a Next.js application and you want to make it production quality. You want to make sure you've got scalability, high availability, reliability, observability. You're going to want things like logs and security and all these other things.
[00:04:48] And you piece it together and you actually have to stitch it across roughly 20 different services. And then, of course, AWS has different availability zones. So you're going to have to spin up all this stack across multiple regions just so that you get high availability. That's a lot of work to piece together.
With Layer0, you just go into your Next.js app and you type npm install layer0 in the command line, and then layer0 deploy, and you're off and running. You can focus on your product and not have to worry about piecing together your infrastructure. So no DevOps, but great performance.
00:05:21 - Anthony Campolo
One thing that I've appreciated, listening to some of your content and seeing you speak at other meetups, is that you are someone who is very interested in the larger philosophy of the Jamstack and like, what is the Jamstack? Where has it been? Where is it going to go? And whether it's a term that's useful or not. This is a conversation that happened at Jamstack Philly, the Summer of Jamstack. Brian Rinaldi, who's also been on the show, was in that discussion.
You guys host a Jamstack podcast, so you are firmly in the keep-the-Jamstack-term camp, I would guess, which is kind of where we are still. You have ideas of the whole history of the Jamstack and where it came from and why it sort of reemerged. So I'd be curious to get into that a little bit.
00:06:05 - Ishan Anand
I also spoke. There's a talk where I kind of gave my manifesto on this. Brian Rinaldi, who you mentioned, runs a meetup called CFE.dev. There's a talk I gave there called The Evolution of the Jamstack. I was going to call it the Jamstack Identity Crisis, because there's kind of this moment now that the Jamstack is facing.
Why this debate really matters is that Jamstack for the longest time meant static. It meant build times. At a certain point, that doesn't scale. It doesn't apply to the largest websites on the internet. It wouldn't even work for, say, a physical grocery store with 60,000 items, right? If you can't support something with 60,000 pages and take it online, what good is this for the broader web ecosystem? As Jamstack has struggled with scaling to large sites, there's been this question: is it still just purely static?
You saw things like incremental static generation from the Next.js team, and then the folks at Netlify introduced distributed persistent rendering as a standard.
It's kind of a "we've always been at war with Eastasia" moment, if you remember that from 1984. Jamstack keeps getting redefined. Originally it was even written with capitalization, as JAMstack. They got rid of the capitalization, and now Jamstack isn't about purely static anymore. It can be this mix of static and dynamic, which to me is the right way to go.
But you still see people, and to his credit Brian is one of them, I've had this debate with him multiple times, who hold on to the idea that Jamstack is mostly static, or static first. I think that definition may persist for a year or two, but I don't think it's durable if you look ahead. You've got things like personalization and A/B testing that are going to require some type of compute happening at the edge. And it's not clear why doing that compute at the edge should count as Jamstack while doing it at the server shouldn't.
[00:08:03] These questions lead to this interesting quandary of like, what is Jamstack now and what is it going to be? My kind of simplistic reduction of it is to say that Jamstack is really only two things: serving data from the edge or from the CDN as much as possible, and developer empowerment. And that's it. If you look at all the other benefits of Jamstack, security, scalability, all those things actually rest on those two principles. They're either leveraging the CDN in some form or they're empowering the developer with serverless functions. That's it in a nutshell.
I don't want to just keep talking forever. I could talk for literally hours on this.
00:08:38 - Anthony Campolo
Yeah, we'll link to that talk in some other material here. And I think this is really great because for us, full-stack Jamstack was the same idea of expanding beyond what people think the Jamstack is. If you're full stack, that means you have a database, and as you're saying, it can't only be static, because a database can't be static. That'd be absurd. So it's about having dynamic content and dynamic capabilities, which is what most of the things we want to do on the web these days require.
The CDN is also something that we always hook into, and that's really important, getting edge distributed. But for us, I hone in a little more on the dev, the developer part. What does that really mean? For me it usually means version control, Git-type deploys, and having atomic commits and rollbacks and things like that. And that seems to be something Layer0 supports. You're in this whole Git workflow, right?
00:09:31 - Ishan Anand
Absolutely.
00:09:32 - Mark Brocato
We believe in empowering the developers to experiment, like doing an A/B test, which is critical, especially in e-commerce where you want to know, is my bright idea actually going to have impact on the bottom line? An A/B test is just Git branch, push, decide how much traffic goes to one branch versus the other, and you can ramp up traffic or ramp down traffic very quickly. Roll forward, roll back.
The broader Jamstack principle is about empowering the developer to have more direct control over how they release software and not have to worry about the infrastructure in order to do that, and to empower them to do so in as low-stress an environment as possible by being able to roll forward and roll back quickly without having a whole deployment orchestration that you have to do manually. So on Layer0, if you push something out and you realize something's messed up, you just click on the deployment you want to restore and then boom, it's right back up and you have a minimal loss of revenue or traffic or whatever your metric is.
00:10:21 - Ishan Anand
And that Git-based workflow, we take it so much for granted. It's really a hallmark of Jamstack platforms. When Matt coined the term, I think it was what, now like five years ago, that wasn't an accepted standard. But I think going forward, there's no way that's not going to be part of everybody's development flow. And I think that's broadly true for a lot of the Jamstack. It's going to be kind of like that term, HTML5. It's interesting for a few years, but it's just kind of the way of doing things that everyone agrees on, and it just fades into the standard and it becomes kind of the accepted way it gets done.
00:10:54 - Christopher Burns
One of my big questions I'm dying to ask. I have a bit of experience in this, so it's going to be interesting. I'm building an e-commerce store for a business, right? I'm a freelancer. Someone came to me saying they need an e-commerce website. I go, great, I know how to build this. I'm going to use Gatsby. I'm going to use Shopify. I'm going to hook them together. And then every product I build, the page, bish bash bosh, website complete.
That's great. You test it. You build it with your five products on Netlify. It builds in like 30 seconds, for example. You're like, this is awesome. Everything's working great. And then they dump another hundred products on, or 200 products, or 50 products with 20 images each. And the size and speed of this balloons to the point where build times are no longer favorable, as you guys probably know. So what's the alternative path that you guys are working on? Because as you said, it's not about the smallest websites you guys are focusing on.
[00:11:56] It's about the bigger websites, the e-commerce of the Jamstack. How should the perfect flow be in your eyes about what tools to build with, opinionated or not? What's the best way? For example, does it all come down to the tool that's building it, i.e. Netlify, Vercel, Layer0? Or do you need to also take things into account when you're developing? One of the biggest questions I have is should we chuck Gatsby out the window for Next.js? ISG. These are all questions that are really hard to know in the moment when you're deciding these factors, because you don't know which frameworks are going to work better with a thousand pages of products, do you, or do you?
00:12:34 - Ishan Anand
There's a lot to unpack there. First, I want to make sure folks understand the motivation. This problem, which I tend to call build friction, isn't just about the number of pages. It's also about frequency of updates. Number of pages is a huge thing. A grocery store is my common touchstone: a physical grocery store carries around 60,000 products. E-commerce is supposed to offer vastly more selection than the physical store, so if your digital store is being held back at 10,000 pages, there's an issue.
The other thing is frequency of updates. You've got folks whose job it is in a large e-commerce site who are merchandisers. They're moving things in and out of categories. They're preparing collections for distribution. They're changing copy all the time. Imagine you had 100,000 pages, right? And there are like four or five of these merchandisers making changes, 100,000 pages. And maybe they're changing the site once an hour.
[00:13:26] Are you going to have to rebuild all 100,000 pages every hour? And in fact, did anybody visit 90% of those pages in that last hour? You're not only creating slowdowns, you're actually creating excess cost. You're going to have to pay for that build time in CPU one way or the other. It can be a huge waste. So it can sap your team productivity. It can also create excess cost.
So there's kind of two dimensions of the problem: updates and the number of pages. The solution is a property of both your framework and your architecture. You clearly kind of have to throw out purely static techniques after a certain point, but there's a variety of techniques that have emerged that can bridge the divide. One of those is ISG that you mentioned, incremental static generation, which is a property of Next.js. But there are other techniques. We've added one called parallel static, which I'll let Mark describe. All these techniques usually rely on some form of serverless functions to build pages on demand in response to traffic.
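The arithmetic behind Ishan's build friction argument can be made concrete (the numbers below are illustrative, taken from his 100,000-page, hourly-update scenario):

```javascript
// Full rebuilds scale with total page count times rebuild frequency, while
// on-demand techniques scale with the pages traffic actually touches.

function pagesBuiltPerDay(totalPages, rebuildsPerDay, visitedFraction, onDemand) {
  return onDemand
    ? Math.round(totalPages * visitedFraction) // build only what traffic touches
    : totalPages * rebuildsPerDay;             // rebuild everything, every time
}

// 100,000 pages, rebuilt hourly, but only 10% of pages get any traffic:
console.log(pagesBuiltPerDay(100000, 24, 0.1, false)); // 2400000 page builds per day
console.log(pagesBuiltPerDay(100000, 24, 0.1, true));  // 10000 page builds per day
```

Under these assumptions the full-rebuild approach performs over two hundred times more build work, all of it paid for in CPU time, to produce mostly pages nobody visits.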
00:14:21 - Mark Brocato
I guess I'll start by saying we do a lot of work with Next.js, and many of our customers do as well. Probably half the sites, modern sites at least, that are deployed on Layer0 are Next.js. Next.js has a really fantastic API that's like a gradient from fully static to fully dynamic. So if you're considering something in the React world, that's definitely one to look at.
Sadly, some of those benefits that exist in Next.js haven't fully been ported over to the other big libraries like the Vue world, the Svelte world, or the Angular world. They're probably coming because it's a pretty good design. But one of the things that we put in Layer0 is the ability to do static rendering at deployment time for any framework. The way that we do it is we actually leverage our serverless compute power to crawl the site in parallel and load all of the statically cacheable pages into the edge. It even skips S3 and just goes right to the edge. So that first hit on any long-tail page is on the order of tens of milliseconds.
[00:15:17] So that's totally applicable for any framework, even a fully dynamic framework that just sends back cache-control headers. You can achieve all the same speed benefits of a Jamstack or static application framework. And when you think about it, between rendering at build time and rendering at runtime with traditional caching techniques, there are really only two differences.
One is the very first page hit. Somebody would have to wait for it. But if you've got millions of users, that's a statistically insignificant difference between caching and static prerendering. The other is, if you're doing everything at deployment time or build time, then if the build went out successfully, all the content's there and nothing can fail at runtime. And so if you could somehow remove that risk from the runtime by having a really reliable runtime backend, then that difference kind of vanishes as well.
That's one of the comforts of using the other half of Layer0, which is the serverless cloud in which your code runs. It's a much lower-risk area to run your code than standing up your own Node.js cluster and trying to make it highly available and scalable. We leverage AWS Lambda underneath the hood, which basically scales as big as your pocketbook can handle. So there's really no chance you'll run out of resources. It's very stable and highly available, multi-region, etc. So we really specialize in blurring the line between fully static and fully dynamic.
But when you think about it, what makes a site static is the ability to be cached. How that content gets into the cache is far less important than that it's cacheable in the first place, and that it can be delivered from the edge, because 99% of your requests are going to be delivered from a static file or a piece of memory from a CDN's edge cache.
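Mark's point that "static" really means "cacheable" can be sketched with standard HTTP caching directives (the directive names are standard Cache-Control; the helper and its values are illustrative):

```javascript
// A fully dynamic handler becomes edge-cacheable just by sending the right
// Cache-Control directives: s-maxage applies to shared caches like the CDN
// edge, while max-age applies to the visitor's browser.

function cacheHeaders({ edgeSeconds, browserSeconds }) {
  return {
    "Cache-Control": `max-age=${browserSeconds}, s-maxage=${edgeSeconds}`,
  };
}

// Cache a product page at the edge for a day, but not in the browser, so a
// CDN purge takes effect immediately for every user.
console.log(cacheHeaders({ edgeSeconds: 86400, browserSeconds: 0 }));
// { 'Cache-Control': 'max-age=0, s-maxage=86400' }
```

With headers like these, the CDN serves the page from memory for the vast majority of requests regardless of whether the HTML was prerendered at build time or rendered once at runtime.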
00:17:00 - Ishan Anand
That last point Mark made, I just want to emphasize. It's what makes Jamstack Jamstack: it's served from the edge. I don't know if you guys saw the Jamstack conference a year or so ago, where Matt from WordPress and Matt from Netlify had a debate. There's this moment in the debate where both Matts are kind of right.
WordPress Matt is like, well, just stick a CDN in front of your WordPress site. Netlify Matt is, we've looked at it, nobody caches their HTML. They're both right. We actually did that same analysis. We've crawled the IR 500, which is like the big retailers on the web. About 11% of them are caching their HTML. And that's the first request. That's what your users are actually waiting for. And so if you can essentially cache with really good, high cache hit rates, then you can get pretty close to Jamstack on a dynamic website.
[00:17:51] And so customers on our sites are basically caching their API data and their page data, so that raw HTML gets a better than 90% cache hit rate. But you need a lot of coordination to make that work, so that it's very easy for the application developer to do. That's one of the things we've done and invested in with the technology we call EdgeJS. That makes it so as you're writing the application, you can control what the edge is going to do in very powerful ways.
We did mention ISG. I think it might be helpful to back up and explain to folks what ISG is. It's called incremental static generation, which is basically you deploy your site and maybe some of those pages are statically generated at build time. But after the deploy, when a request comes in, ISG will throw up a little skeleton placeholder while the user waits for it to load. Behind the scenes it's actually running the static build process, but just for that page, and then it saves that result and serves it up to the user.
[00:18:50] Anybody who comes after that will get it lickety-split, already pre-built. So it only builds the pages as the traffic comes in. And I think more important than just ISG itself is, and this was one of the points I made in the talk on CFE.dev, I think there's a whole spectrum of Jamstack-like techniques for solving this problem of how do you take a dynamic website and get Jamstack-like performance out of it. ISG is just one of those.
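The ISG flow Ishan describes can be sketched as a tiny on-demand build cache (a conceptual illustration, not Next.js or Layer0 internals; `buildPage` is a stand-in for statically building a single page):

```javascript
// First request for a page kicks off a build and gets a placeholder; once the
// build lands in the cache, every later visitor gets the prebuilt page.

function makeIsgServer(buildPage) {
  const cache = new Map();
  const building = new Set();
  return async function handle(path) {
    if (cache.has(path)) return cache.get(path); // prebuilt: served instantly
    if (!building.has(path)) {
      building.add(path);
      // Build just this one page in the background, then cache the result.
      buildPage(path).then((html) => {
        cache.set(path, html);
        building.delete(path);
      });
    }
    return "<placeholder skeleton>"; // shown only to the very first visitors
  };
}

const handle = makeIsgServer(async (path) => `<html>built ${path}</html>`);
handle("/p/1").then(console.log); // first hit: the placeholder skeleton
setTimeout(() => handle("/p/1").then(console.log), 10); // later: the built page
```

Only pages that actually receive traffic ever get built, which is exactly the escape hatch from build friction described earlier.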
00:19:16 - Christopher Burns
How much of your features as a platform would you say is agnostic to frameworks? Would you say a lot of your logic that you bring and your features and benefits are agnostic to any framework you use? Or are some of the features necessarily tied to certain frameworks?
00:19:33 - Ishan Anand
That's a really topical question. One of the things that was really great about the way Mark and team built out our ISG support, which originated as a Next.js feature, is they tried to do it in a framework-agnostic way. Just yesterday we were speaking at Nuxt Nation, and we showed how you could take the same primitives behind our ISG implementation for Next and apply them in Nuxt. Part of the problem is that Nuxt doesn't have the same out-of-the-box integrations inside the framework, but you can run the app code yourself and get something that's effectively ISG on other frameworks. So it's a mix, but wherever it requires our integration, we always try to be as framework-agnostic as possible. I don't know, Mark, if you have anything you want to add to that.
00:20:20 - Mark Brocato
The core product provides all the primitives that enable the most sophisticated features of Next.js to work. But we have a layer that we call the connector layer, where you'll see we have a different package to support each framework. And really what it's doing is just converting the way that that framework thinks about the world into our primitives. So all the advanced features of Next.js like ISG and ISR and SWR and every other three-letter acronym they could come up with, you could do it on Svelte, or you could do it on Nuxt. So there's actually very little other than the connector code itself that is tied to any given framework. It's probably 95% agnostic.
00:20:54 - Ishan Anand
Yeah. And if you go to our docs page, you'll see the first thing we show is what all the connectors are. We've got Next, Nuxt, Angular. We've also got a lot of e-commerce frameworks, like Vue Storefront and React Storefront. Ones that are maybe not as well known but really important at the high end of e-commerce. So SAP Commerce Cloud has a headless framework called SAP Spartacus. It's a form of Angular that's open source, and it will only run really on Commerce Cloud. But you can run it headless on Layer0 using Commerce Cloud as a backend. We have connectors for that as well.
00:21:27 - Anthony Campolo
Yeah. When I tried it out, I just used the regular Create React App template. I didn't use Next at all, because whenever I'm trying something out, I usually want to try the simplest thing available and then move on to more complicated versions. It worked fine using just regular old Create React App. It's interesting that you say almost half of the sites are using Next. That's a trend anyone paying attention to web.dev will see: Next is running away with the game right now, and for good reason, and other frameworks are now trying to copy its features. So it's cool that you're trying to abstract out those benefits you get from Next and bring them to other sites and frameworks.
And I'd like to go back to the EdgeJS stuff on your website. You say it's the world's first JavaScript-based CDN. When I hear that, I think you have JavaScript at the edge, meaning you can write your logic at the edge.
[00:22:31] And so I think a lot of people would hear something like this and they would think of something like Cloudflare Workers, and how you can write JavaScript that is going to execute all around the edge. So is that a similar idea or is that something else that's going on here?
00:22:45 - Mark Brocato
I would say it's a similar interface or API. In both cases, the developer is writing JavaScript. Whether you're using Cloudflare Workers or Layer0, what makes them different is that a certain portion of EdgeJS with Layer0 compiles down to something that is far more performant, an order of magnitude more performant, than what you can do with Cloudflare Workers, or even something like Wasm and Fastly's Compute@Edge JavaScript solution.
Layer0 is not only an AppOps platform or a Jamstack platform; it's also a CDN. We have quite a lot of experience with low-level, high-performance CDN technologies, namely Varnish and VCL. For the uninitiated, Varnish is one of very few pieces of software that runs CDNs; Fastly uses Varnish, and others do as well. It has a configuration language, VCL, that compiles down, via C, to machine code and is super performant. So you can match a thousand routes in potentially under a millisecond with VCL.
A lot of the optimization that Layer0 provides is taking that EdgeJS and compiling it down to VCL, which ultimately compiles to machine code and runs natively in Varnish. You can actually be incredibly lazy and unoptimized as a developer and have a huge router that defines all this edge logic where you do different things for different requests. In some cases, you may remove parts of the URL to normalize the cache key in one route. In another route, you might be cleaning up cookies or response headers, or doing redirects, or even doing serverless transforms of the response to add and remove content.
That router can pile up thousands and thousands of routes and you never really notice any slowdown, even for super-large, complicated sites, because it's all being compiled down to VCL. It's basically free performance. Whereas if you ever tried to match even 100 routes before the cache in a Cloudflare Worker, you're adding very noticeable latency to the site. It might be 50 milliseconds, it might be 150 milliseconds, it might be a quarter of a second or more. And at that point, what's the point of having a cache if you're adding that much latency in front of it?
[00:24:49] So with Layer0, you can basically be as complicated as you want, and you're only really adding one to three milliseconds of latency by running all that logic before the cache. So I think we've really got the best of both worlds: a very fluent API, but also incredible performance. There's a JavaScript API where you configure your CDN, and it follows all the normal development practices. It's checked in, it's branched, it's merged, it's reviewed.
You can spin up as many staging environments as you want. When you run the site locally in development, it's running the CDN locally in front of your app. So it's always testing the same consistent critical path. You've got a great API, it's very developer-friendly, but in the end it's compiled down to something that works at internet scale and doesn't add any latency to your application, and preserves the value of the CDN, which is improving the speed and reliability of your application. You wouldn't want to start adding more JavaScript code in front of the cache. Then you basically just have a highly distributed set of Lambdas, and you don't really have a CDN anymore. I'm hopeful that other developers would agree.
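The routing style Mark describes can be sketched with a tiny self-contained router: patterns are compiled once, up front, so matching a request is a cheap table scan against precompiled regexes rather than interpreted logic on every hit. The names and API here are hypothetical, invented to illustrate the idea; they are not Layer0's actual EdgeJS API, and real EdgeJS goes further by compiling to VCL.

```javascript
// Illustrative sketch of a declarative edge router. Route patterns are
// compiled once at "deploy time"; per-request matching touches only
// precompiled regexes. Hypothetical API, not Layer0's real one.

// Compile a pattern like "/p/:productId" into a regex with named groups.
function compileRoute(pattern, handler) {
  const regex = new RegExp(
    '^' + pattern.replace(/:[^/]+/g, (m) => `(?<${m.slice(1)}>[^/]+)`) + '$'
  );
  return { regex, handler };
}

class EdgeRouter {
  constructor() {
    this.routes = [];
  }
  get(pattern, handler) {
    this.routes.push(compileRoute(pattern, handler));
    return this; // chainable, like the fluent API described above
  }
  // Match a request path against the precompiled route table.
  match(path) {
    for (const { regex, handler } of this.routes) {
      const m = regex.exec(path);
      if (m) return handler({ params: m.groups || {} });
    }
    return { action: 'pass-through' };
  }
}

const router = new EdgeRouter()
  // Normalize the cache key on product pages by keeping only the id.
  .get('/p/:productId', ({ params }) => ({
    action: 'cache',
    key: `/p/${params.productId}`,
  }))
  // Redirect a legacy URL before the cache is ever consulted.
  .get('/old-category', () => ({ action: 'redirect', to: '/c/shoes' }));

console.log(router.match('/p/123')); // { action: 'cache', key: '/p/123' }
```

The design point is when the work happens: compilation is paid once per deploy, so piling up thousands of routes doesn't add per-request cost the way interpreting the same logic in a worker would.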
00:25:48 - Ishan Anand
I want to emphasize something there, and it gets back to the question about how we're different from other Jamstack platforms. It's hard to appreciate the problem we're trying to solve if you're used to sites that are under a thousand pages. Because we're working at the high end of the market, these are really complex sites with really large sitemaps, often built before the Jamstack existed. They may be five, ten, 15, 20 years old, and they've got very large, complex routing that has to be evaluated very fast. Otherwise you're defeating the purpose of the cache, as Mark said.
And very often, part of that problem is that the SEO and URL paradigm doesn't match, say, the default you might get out of Next. When you start with Next out of the box, it gives you a convention for how you map URLs to pages, but that may not match how you've historically done it. Migrating your URL structure is not something to be taken lightly. SEO is hugely important, and if you reset it, you could jeopardize a ton of revenue.
So those are the types of problems. Being in this segment of the market forces us to solve them in ways that smaller sites typically never have to encounter.
00:26:57 - Christopher Burns
I've just been looking at the documentation about some of the deploy targets, and I noticed the Next one is really interesting. We've obviously been speaking a lot about Next, but you ship a lot of your own modules inside of Next; it looks like you replace the routing as well.
00:27:13 - Mark Brocato
We don't replace it, really. What we do is we take the routing that Next.js has by convention, and we move those rules out to the edge so that they can actually execute potentially even pre-cache, and it just speeds up the whole application. By having the edge understand the framework behind it, it can do things very intelligently.
So, for example, there's the ability to do rewrites and redirects in Next.js through their own APIs. We don't ask you to rewrite those for Layer0. We just understand Next.js and we move those redirects pre-cache all the way to the edge, so that they happen almost instantly, rather than having to go all the way to Node.js or some serverless layer and add a lot of latency.
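Mechanically, "moving Next.js redirects to the edge" could look something like the sketch below: take rules in the same shape a `next.config.js` `redirects()` function returns (`source`, `destination`, `permanent` are real Next.js fields; Next.js uses 308 for permanent and 307 for temporary redirects) and compile them into a lookup table evaluated before the cache. How Layer0 actually implements this internally is not public in this transcript, so treat the code as an illustration of the idea only.

```javascript
// Sketch: turn Next.js-style redirect rules into a pre-cache lookup table.
// The rule shape matches next.config.js redirects(); the "edge table"
// machinery itself is hypothetical.
const nextRedirects = [
  { source: '/old-blog', destination: '/blog', permanent: true },
  { source: '/promo', destination: '/sale', permanent: false },
];

// Build the map once at deploy time; at request time a redirect is an
// O(1) lookup that never touches the Node.js/serverless layer.
function buildEdgeRedirects(rules) {
  const table = new Map();
  for (const { source, destination, permanent } of rules) {
    // Next.js convention: 308 Permanent Redirect vs 307 Temporary Redirect.
    table.set(source, { location: destination, status: permanent ? 308 : 307 });
  }
  return (path) => table.get(path) || null;
}

const matchRedirect = buildEdgeRedirects(nextRedirects);
console.log(matchRedirect('/old-blog')); // { location: '/blog', status: 308 }
```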
00:27:52 - Christopher Burns
So you're, in essence, taking that work from the child process, doing it on the platform, the parent, and then passing the result down to Next, the child.
00:28:01 - Ishan Anand
We also use it to do a lot of intelligent analysis of the application. If you look at a regular CDN, it shows you data on cache hit rate in terms of just raw URLs. But that's not how you, as the developer, think of your site. You think of it in terms of your routes: this is my category page, this is my product page, this is my home page.
Because we are framework-aware, we can display information about memory usage or runtime of your serverless code in the same terms you think in. When you add a new route, like a new type of category page, we automatically recognize it. You don't have to do anything special or any additional tagging; it just happens as part of your normal development workflow, which again is really important when you're working on a large site.
Then we can basically send you statistics, all in terms of the way you think about your application, which is at the framework level, not at the layers below.
00:28:53 - Christopher Burns
Talking about statistics, what is RUM?
00:28:56 - Ishan Anand
RUM stands for Real User Monitoring. It's a way to measure the speed of your website by aggregating the measurements from every single user who visits your site: how fast is the site for everyone, combined into one statistic? It's how your users are actually experiencing your site.
In contrast, something like Lighthouse tries to create a simulation on your computer. It's what they call a lab measurement: for this particular lab setup, here's how fast your site is. The challenge is your users may not match the lab scenario that was set up, so Lighthouse can diverge from RUM measurements.
Where this really matters is Core Web Vitals, which are Google's speed-based ranking metrics. What's new here is that we've always known speed meant better conversion rates. Now speed also means more traffic and better ranking on search engines, and because search engines are zero-sum, if you move up, somebody else moves down and vice versa. You're actually stealing traffic from your competitors. So speed now means growth as well.
The problem is that Lighthouse is not how Google does this ranking. They do it through RUM, Real User Monitoring. So you need to know how fast your site really is for real users.
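One detail worth making precise: the conversation describes RUM as averaging, but Google's CrUX field data, the dataset behind Core Web Vitals assessment, is actually evaluated at the 75th percentile of real-user measurements. The sketch below computes a p75 over made-up LCP samples; the sample values are invented for illustration.

```javascript
// RUM aggregation sketch. Core Web Vitals field data is assessed at the
// 75th percentile rather than a plain average, so one slow outlier
// doesn't dominate and one fast lab run doesn't flatter you.
// Sample values in milliseconds are made up.
const lcpSamplesMs = [1200, 1900, 2400, 900, 3100, 2600, 1500, 4200];

function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank method: smallest value with at least p% of samples at or below it.
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[idx];
}

const p75 = percentile(lcpSamplesMs, 75);
console.log(p75); // 2600 — above the 2500ms "good" LCP threshold, below the 4000ms "poor" one
```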
00:30:11 - Christopher Burns
Just a side note, I heard the best way to get a Lighthouse score is by doing it on Web.dev instead of doing it on your computer, because your computer actually affects the score.
00:30:23 - Ishan Anand
Supposedly it does. I don't know if you get a better score, but it can vary by your computer. I typically go to PageSpeed Insights.
A big problem you'll have in development teams is somebody says, "I got a 99," and somebody else says, "Oh, I got an 80." You have no consistent measurement. You need to make sure whatever machine is doing it is doing it in the same consistent way. This really matters if you're trying to use Lighthouse CI and build it into your CI/CD process.
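Pinning down a consistent measurement environment is exactly what Lighthouse CI's configuration is for. A minimal `lighthouserc.js` might look like the following; the URL and thresholds are illustrative, not recommendations.

```javascript
// Minimal Lighthouse CI configuration (lighthouserc.js). Running a fixed
// number of runs against fixed URLs from one CI machine gives the
// consistent baseline discussed above, instead of everyone's laptop
// producing a different score. URL and thresholds are illustrative.
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com/'], // pages to audit
      numberOfRuns: 5,               // multiple runs smooth out run-to-run variance
    },
    assert: {
      assertions: {
        // Fail the build if the performance category drops below 0.9.
        'categories:performance': ['error', { minScore: 0.9 }],
      },
    },
  },
};
```

Even then, as noted below, Lighthouse itself varies run to run, which is why the multiple-runs setting exists at all.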
The way we usually settle this is: if you're not going to share the result with anyone else and you're just trying to see whether your one optimization made things better or worse, local Lighthouse is fine. But when you're reporting results to the team at large or some larger audience, I would go to something like PageSpeed Insights and run Lighthouse there just to see what your score is.
Again, I don't think you should really pay attention to your Lighthouse score. I think you should look at your field data and your Core Web Vitals because that's what your users are actually experiencing. But if you have to compare Lighthouse scores, you need to be aware of that. And actually, Lighthouse itself isn't consistent. We have tracked Lighthouse scores day after day and there is even variation there.
PageSpeed Insights will actually route you to a different data center depending on your location. They don't tell you about this. It's really hard to get a very fixed baseline for your performance, but in the end, that's the beauty of RUM. It's how your users are actually experiencing the site.
The challenge, though, with Google's Core Web Vitals field data is that it's a rolling 28-day window, so it only tells you how your performance was about a month later. That makes it really hard to test and optimize your speed. So one of the things we built into Layer0 is RUM that's real-time. We'll tell you within a matter of minutes. And it integrates with our A/B testing.
[00:32:03] So you can try a change. You can roll it out the next day. You can say, "Well, did this improve my Core Web Vitals or not?" It automatically knows which version did better than the other.
00:32:12 - Anthony Campolo
Yeah. This is something that we talked about with Sebastian from Docusaurus. The problem with Lighthouse, and something that he was saying, is that with Lighthouse you're measuring the initial page. You're not really measuring the interactions you get when you're switching between pages for a single-page application, whereas what you're talking about is measuring the entire experience someone gets while they're navigating through a site.
00:32:36 - Ishan Anand
It's kind of the worst of both worlds, actually, in some ways, when it comes to Core Web Vitals. It doesn't measure single-page-app page transitions. One of the metrics is Largest Contentful Paint, but once you're on a category page, going back and forth between products in a single-page app can be lightning fast, and Core Web Vitals won't reward you for that.
On the other hand, there's another metric called Cumulative Layout Shift, which until recently would track for the entire lifetime of the page. So you'd see single-page applications have really terrible Cumulative Layout Shift scores, because those pages stay open for a very long time as far as Core Web Vitals is concerned. They recently fixed that. We'd been hounding the Google team about it for a while, and what they did is they added session windows capped at five seconds each, and your Cumulative Layout Shift score is now the worst of those windows rather than a lifetime total. But yeah, this is actually a big problem with a lot of performance metrics.
[00:33:30] They don't measure the time after first load. Wolfgang Digital had some data showing the average e-commerce session is about five to six pages, but tools like Lighthouse only measure the first one. If you optimize all those subsequent page loads, you might speed up the entire browsing session by a lot more. That's what single-page apps do. We've clocked page transitions at around 300 to 400 milliseconds, literally the blink of an eye. They feel more like a native app than a website, but Core Web Vitals doesn't yet recognize that.
The Chrome team knows this. They're working on it. But for now, just be aware that there's a little extra challenge there when you're doing a single-page app.
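The CLS fix discussed above is documented by the Chrome team as "session windows": layout shifts are grouped into windows that end after a one-second gap with no shifts or a five-second maximum duration, and the final score is the worst window rather than a lifetime sum. A simplified sketch of that logic, with invented shift data:

```javascript
// Simplified session-window CLS, per the Chrome team's published change:
// a window closes after a 1s gap without shifts or 5s total duration,
// and CLS is the largest window sum — so a long-lived single-page app
// isn't penalized for its whole lifetime. Shift data is made up.
const GAP_MS = 1000;
const MAX_WINDOW_MS = 5000;

// Each shift: { t: timestamp in ms, value: layout shift score }
function clsScore(shifts) {
  let best = 0;
  let windowStart = -Infinity;
  let lastShift = -Infinity;
  let windowSum = 0;
  for (const { t, value } of shifts) {
    const gapExceeded = t - lastShift >= GAP_MS;
    const maxExceeded = t - windowStart >= MAX_WINDOW_MS;
    if (gapExceeded || maxExceeded) {
      // Start a new session window.
      windowStart = t;
      windowSum = 0;
    }
    windowSum += value;
    lastShift = t;
    best = Math.max(best, windowSum);
  }
  return best;
}

// A burst of shifts at load, then one small shift a minute into the session:
const shifts = [
  { t: 100, value: 0.25 },
  { t: 400, value: 0.25 },
  { t: 60000, value: 0.125 }, // isolated late shift gets its own window
];
console.log(clsScore(shifts)); // 0.5 — the load-time window, not a lifetime sum
```

Under the old lifetime definition this session would score 0.625; with windows it scores 0.5, which is the difference Ishan describes for single-page apps.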
00:34:08 - Mark Brocato
We did a really great podcast episode with two developers from the Chrome team, Annie and Katie, where they took us through the importance of Core Web Vitals, what it means, how it affects Google's search ranking to the extent that they can say, and the evolution of their understanding and recognition of single-page apps. The amortization of the initial load across multiple navigations is fascinating to hear about.
The stakes are so high for Google there. Any decision they make has ripple effects throughout web development and throughout the internet world. Definitely recommend checking that one out, our podcast on JavaScript Jam with the Chrome team.
00:34:43 - Ishan Anand
PageSpeed Insights is where a lot of people go to evaluate the performance of their website. I really feel its user experience is wrong, because the thing at the top is the Lighthouse score, and that score is essentially arbitrary, but everyone gets fixated on it. I think folks should just ignore that score at the top. The weighting is arbitrary: how much TTI contributes to the score is essentially meaningless, and it may not correlate to your business.
Again, these are simulations. Time to Interactive, if you look at the way it's measured, is built on a bunch of heuristics. One example is that it looks for what it calls a network quiet state, when all your GET requests have ended. So if you switch your app from, say, REST GETs to GraphQL, where everything's a POST, network quiet happens earlier, but you haven't necessarily made the site faster. Yet TTI might reward that. So the metrics in Lighthouse are heuristics; you should not assume they reflect how fast your site really is.
00:35:38 - Christopher Burns
I think Lighthouse scores are one of those things. Has your marketing team got around to putting their tools on the website yet? If the answer is no, enjoy your score while you can, because as soon as they put their tools on, never look at it again; it'll just depress you. You think, oh, I got 99 or whatever, and then your marketing team comes along and says, we're going to install these 20 scripts. You ask why, and they say, because conversion. So you install the 20 scripts, and then you look at your Lighthouse score and it's now 20, because of all these external dependencies you can't do anything about.
00:36:11 - Mark Brocato
Definitely true. I mean, the worst way to wreck a Lighthouse score, the easiest way, is to add more JavaScript. And even if your marketing folks say, well, I'm just adding one library, if that one library is a tag manager, oh boy, here comes the flood of JavaScript. So a lot of truth there.
00:36:26 - Christopher Burns
That's why Core Web Vitals are so much better: they're actually tracking the users. It's not tracking how fast the page loads in a lab; it's tracking how it loads for real users over time. And I find it really interesting. My very final question is, I see a lot of parallels to Vercel. Which one do you think hosts a Next.js website better?
00:36:47 - Mark Brocato
Well, do you want to tackle that one first?
00:36:50 - Ishan Anand
I think it depends on the site and what techniques you want to use. If you have a static website, both solutions will host your site just fine. If you want to do something like ISG, we both support it out of the box.
If you're a high-stakes website at scale, there are features we add on top of Next that make the site more performant. We have a large number of points of presence on our edge, and thanks to a new partnership, we now own our infrastructure, so we can do a lot of deep integration. We can do the edge routing Mark talked about. We can do split testing and A/B testing that's integrated with our RUM. And we've got techniques beyond ISG, like parallel static rendering, which in some cases can be simpler to reason about.
So if you've got a lot of pages in your site, or a lot of frequent updates happening on your site, all I can say is that's where we've built our solution.
[00:37:42] But I do want to say we have a lot of respect for the teams at the other Jamstack platforms, and there's a lot of innovation there that the whole community should be thankful for. Our hats are off to them; they're really pushing the boundaries on developer experience. There are things they do that we don't. I'll give you an example: right now, we do not actually execute static builds. We were not built for static websites. So if you want to do a purely static website today, the way the integration happens is you run the build through GitHub Actions on your own account.
We might revisit that down the line, but that's because, again, in the segment of the market we play in, folks already had their own CI/CD and they weren't really trying to build a static website. It's less about which is better; it's really about identifying what type of company or use case you are, and whether we're the best fit for that.
[00:38:36] I'll make an analogy to e-commerce. You've got a whole set of e-commerce solutions. You've got WooCommerce and Shopify at one end of the market, then Magento in the middle, and then SAP, Demandware, and WebSphere at the higher end. Each of those caters to a different set of personas and a different set of problems, and in some cases those things are at odds: simplicity versus control.
We're really designed for the mid to high end of the market: you've got more than 10,000 pages, or you're doing more than $5 million in merchandise revenue a year. Then yeah, we're probably a better fit. If you're a smaller site, our dev experience may not be what you're looking for. If you're hosting a small, static blog, we can totally handle it and the traffic, but that's not what we've built the product for at the moment.
00:39:21 - Christopher Burns
I think that's really interesting to say because every time, obviously we speak to people from these companies, I always imagine Netlify is the jack of all trades, does everything, can do everything to a good enough standard. And then Vercel is very good at your standard Next.js. And it very much seems like Layer0 is this high-performance version for those very specific use cases.
So it's really interesting to see how these compare, because I think it's such a hard decision for your everyday dev to pick one of these platforms, especially if you're coming in from nothing, as in you've just built your first Next.js website. Which one do you pick? Does it even matter anymore? That's another really good question. Probably for another day, though.
00:40:07 - Ishan Anand
One other thing I would emphasize: suppose you're on another framework like Angular or Nuxt. You can't get ISG on those frameworks out of the box. That's something else we've tried to do: take innovations from all the different frameworks, put them together, and re-implement them in a way that's framework-agnostic, so any framework can be scalable and still get the Jamstack benefits.
00:40:28 - Anthony Campolo
I really appreciate that answer you gave about respecting Vercel for the innovation that they've given, because that's very in line with the spirit of this show. It's not tearing down competitors who are doing similar things to us because we're all learning from each other. We're all kind of seeing what other people are doing in this space and it helps us all grow and make better technology and have better experience for our users.
So I think that's really awesome. It sounds like you all have done a good job of really honing your own value prop and what you bring to the table, and that's independent from what anyone else is doing. So it's like, you know, this is our thing. This is what we're owning.
The last thing I'd be curious to get, just before you guys give your socials and how to contact you, is how would you tell someone to get started with Layer0? What's the best way to get a foot in?
00:41:13 - Ishan Anand
One way is you can just take your app, install the Layer0 CLI with npm install layer0, and then run layer0 init and layer0 deploy.
You can go to our documentation, pick your framework, and there'll be instructions there, but really the short answer is go to our docs page. There are one-click deploy buttons: pick the framework you're used to, click the deploy button, and sign in with your GitHub account. It'll spin up a sample app in your framework of choice that you can play with immediately. That's really the best way to get started.
If you run into any questions, feel free to reach out to myself, Mark, or our support team. We also have forums where you can ask questions. But the best way is to go to the Layer0 docs, pick your framework, click the deploy button, and sign up with GitHub. In one click you'll get a project you can play with.
00:42:01 - Anthony Campolo
And then what are your guys' socials and how can we get in touch with either of you two?
00:42:05 - Mark Brocato
My Twitter handle ends in dev. Some people may know me from Mockaroo, the application for creating fake data for software testing and demos. That was actually my baby, and my main Twitter profile is named after it, but I do everything on that one.
00:42:21 - Ishan Anand
You can reach me on Twitter at my first initial followed by my last name. You can also contact Layer0; our Twitter handle is Layer0 Deploy. Or if you go to our website, you can file a support ticket and get hold of us that way as well.
00:42:37 - Christopher Burns
I can't wait to see how the company keeps on growing.
00:42:41 - Anthony Campolo
Well, thank you both for being here. Really appreciate it. Hope people will check out Layer0 and also your own podcast, JavaScript Jam, so we'll have links to all of that in the show notes.
00:42:52 - Mark Brocato
Thank you so much for having us.
00:42:53 - Ishan Anand
Thanks.
00:43:24 - Christopher Burns
There's breaking news. Gatsby's just released their fourth version with SSG, SSR, and DSG. Who knows what all these mean?
00:43:34 - Ishan Anand
I'm going to have to read up on that. I was going to mention that they've started adding serverless functions. Everybody in the ecosystem eventually has to add some form of serverless if they want to grow beyond static.
00:43:44 - Christopher Burns
DSG, I think, stands for deferred server something.
00:43:49 - Mark Brocato
Deferred static generation, maybe.
00:43:52 - Ishan Anand
Deferred static generation.
00:43:54 - Christopher Burns
There you go.
00:43:54 - Ishan Anand
So I'll have to add that to my next talk. I'm looking forward to that. I'll have to drill into it.
00:43:58 - Mark Brocato
Everybody's coming up with a new acronym for server-side rendering with a cache in front of it.