
Million v3 and LLRT
An in-depth discussion about AWS’s new LLRT, Next.js’s evolving App Router, and Million.js v3.
Episode Description
Anthony and Ishan discuss Amazon's new LLRT runtime, the Next.js App Router migration debate, and Million.js V3's performance improvements.
Episode Summary
This episode of JavaScript Jam covers three main topics shaping the JavaScript ecosystem. First, hosts Anthony Campolo and Ishan Anand break down Amazon's experimental LLRT runtime, which uses QuickJS instead of Node's V8 engine to minimize cold starts in serverless environments. Ishan explains the trade-offs between JIT-compiled runtimes suited for long-running cloud functions and lightweight runtimes optimized for short-lived edge functions, connecting this to broader efforts like WinterCG to standardize server-side JavaScript across runtimes. The conversation then shifts to the heated debate around Next.js App Router migrations, anchored by Brandon Bayer's article detailing Flight Control's difficult transition. Guests Andrew Lisowski and Dev Agrawal join to discuss whether the App Router should have been a separate product entirely, the challenges of maintaining backward compatibility, and how the ecosystem is grappling with React's increasing complexity. The episode wraps with a look at Million.js V3, which optimizes React reconciliation through its block DOM approach, and how its new Million Wrapped feature helps developers quantify performance gains—drawing comparisons to what React Forget and Solid.js aim to achieve from different angles.
Chapters
00:00:00 - Introduction and JavaScript Jam Overview
The episode opens with hosts Anthony Campolo and Ishan Anand chatting casually about Twitter Spaces finally supporting desktop speakers, a feature that had previously caused issues with past guests. They transition into welcoming listeners to JavaScript Jam, their biweekly Twitter Space covering JavaScript and web development news.
Ishan encourages audience participation, noting that some of their best episodes have been entirely audience-driven. He promotes the JavaScript Jam newsletter at javascriptjam.com, which Anthony curates with a roundup of the latest happenings in the JavaScript ecosystem. The hosts set the stage for the two main topics of the week: Amazon's new LLRT runtime and Next.js App Router migration stories.
00:03:33 - Amazon's LLRT Runtime and Serverless Cold Starts
Anthony introduces Amazon's experimental LLRT runtime and asks Ishan for his perspective. Ishan draws on his experience building serverless JavaScript environments in the earliest days of AWS Lambda, explaining how cold starts have always been a core challenge in serverless architectures. He describes how LLRT, itself written in Rust, embeds the lightweight QuickJS engine instead of Node's V8 to dramatically reduce startup times.
The discussion covers the fundamental trade-off between V8's JIT compiler, which optimizes long-running processes by dynamically rewriting code, and QuickJS's lightweight approach that sacrifices JIT optimization for faster cold starts. Ishan explains why LLRT is better suited for edge functions handling brief tasks like JWT validation or redirects rather than cloud functions running server-side rendering or database operations that benefit from V8's runtime optimizations.
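To make the edge-function use case concrete, the kind of short-lived, low-memory work Ishan describes might look like the sketch below. This is purely illustrative: the `handleRequest` signature and plain-object request/response shapes are hypothetical stand-ins, not any specific platform's API, and a real edge function would verify the JWT signature rather than just decoding the payload.

```javascript
// Illustrative edge-function logic: inspect a token on every request and
// decide whether to redirect, without doing any heavy, long-running work.

// Decode a JWT payload WITHOUT verifying the signature (sketch only; real
// code would verify against a shared secret or public key).
function decodeJwtPayload(token) {
  const parts = token.split('.');
  if (parts.length !== 3) return null;
  try {
    return JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8'));
  } catch {
    return null;
  }
}

function handleRequest(req) {
  const token = (req.headers['authorization'] || '').replace('Bearer ', '');
  const payload = token ? decodeJwtPayload(token) : null;
  // Missing or expired token: redirect to login instead of hitting origin.
  if (!payload || payload.exp * 1000 < Date.now()) {
    return { status: 302, headers: { location: '/login' } };
  }
  return { status: 200, headers: {} }; // pass the request through to origin
}
```

Because the per-request work is a few string operations and a comparison, cold-start time dominates total latency here, which is exactly the profile a non-JIT runtime like LLRT targets.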
00:08:55 - Edge vs. Cloud Runtimes and Security Trade-Offs
The conversation goes deeper into the technical distinctions between edge and cloud function use cases. Ishan explains how the JIT compiler's ability to optimize code is limited in short-lived edge functions anyway, making lightweight runtimes a natural fit. He also touches on security considerations, noting that some platforms use V8 isolates for fast cold starts but face concerns about isolation guarantees in multi-tenant environments.
Ishan references Amazon's various runtime strategies, including Lambda@Edge and CloudFront Functions, each targeting different performance and security profiles. He frames LLRT as another tool in the infrastructure toolkit rather than a competitor to existing runtimes, while emphasizing that its experimental status means the community should view it as a promising sign of continued innovation rather than a production-ready solution.
00:13:17 - Runtime Standardization and the WinterCG Effort
Anthony asks how LLRT might impact projects like Deno and Bun. Ishan argues that those projects offer a much broader value proposition than what LLRT addresses, so competition would only arise indirectly. The discussion shifts to the broader challenge of server-side JavaScript standardization, with Ishan highlighting the work of frameworks like Nitro from the Nuxt team that target runtime-agnostic deployment.
The hosts draw parallels between today's server-side runtime fragmentation and the browser interoperability battles of the past, referencing the W3C and WHATWG split and reunification. They discuss how WinterCG aims to standardize lightweight runtimes but hasn't yet achieved the momentum of annual ECMAScript releases. Ishan emphasizes that a virtuous cycle of frameworks supporting non-Node environments and new runtimes emerging would accelerate adoption, pointing to Nitro and SolidStart as examples of this direction.
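The runtime-agnostic style Ishan points to is often expressed as a handler written against web-standard `Request` and `Response` objects rather than Node's `http` module. A minimal sketch (the route and payload are invented for illustration; the fetch primitives themselves are global in Node 18+, Deno, Bun, and most edge runtimes):

```javascript
// A handler that assumes only WHATWG fetch primitives (Request, Response,
// URL), not Node's http.IncomingMessage/ServerResponse. Because it depends
// on so little of its environment, the same function can be wired into
// many different runtimes with a thin adapter.
async function handler(request) {
  const url = new URL(request.url);
  if (url.pathname === '/hello') {
    return new Response(JSON.stringify({ greeting: 'hello from anywhere' }), {
      status: 200,
      headers: { 'content-type': 'application/json' },
    });
  }
  return new Response('Not found', { status: 404 });
}
```

Keeping the assumed surface area this narrow is the same bet frameworks like Nitro make: the less a framework presumes about its host runtime, the more runtimes can host it.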
00:24:38 - Next.js App Router Migration Debate
The conversation pivots to the newsletter's coverage of Next.js App Router migrations, with Brandon Bayer's article from Flight Control as the centerpiece. Ishan summarizes Brandon's conclusion that he would have chosen Remix if given the chance to start over, citing concerns about React Server Components stability and Next.js owning too much of the application entry points.
Anthony contextualizes this within the broader ecosystem tension, noting that criticism of React sometimes conflates issues with Next.js specifically. He observes that communities around Svelte and Remix are seizing the moment to promote alternatives, while loyal React developers push back against the App Router without wanting to leave the ecosystem entirely. Both hosts emphasize that Brandon's article stood out for being fair and constructive rather than personally attacking the React team.
00:32:08 - Practical Takeaways and Audience Discussion on App Router
Anthony offers his pragmatic take: bleeding-edge technology comes with expected pain, and developers building serious production apps should weigh whether they're prepared for edge cases. He notes that Next.js hasn't deprecated the Pages Router and that two other migration articles in the newsletter had more positive experiences with more conventional use cases like e-commerce.
Guest Andrew Lisowski joins and argues provocatively that the App Router should have been an entirely separate Vercel product, since the paradigm shift is so significant that sharing documentation with the Pages Router creates confusion for newcomers. Dev Agrawal counters that the Next.js brand recognition was strategically valuable, leading to a nuanced discussion about whether framework surface area has simply grown too large, with Ishan drawing comparisons to the AngularJS-to-Angular transition.
00:46:24 - Static Sites, Jamstack's Evolution, and Framework Complexity
Andrew shares his personal frustration with how the App Router made static site generation more difficult, describing unexpected complexity when trying to include markdown files in a deployed app. Ishan connects this to Next.js accumulating features across paradigm shifts, from its non-static origins through Jamstack-era additions like ISR to the current server-component focus.
The group briefly discusses whether Jamstack is still a relevant term, with Dev referencing Matt Biilmann's talk on the subject. Ishan reflects on how enterprise customers treat framework migrations with much more caution than the broader developer community, noting that web developers have been historically spoiled by seamless upgrades and are now encountering the kind of migration complexity common in other areas of software development.
00:50:09 - Million.js V3 and Block DOM Performance
Anthony introduces Million.js V3, highlighting its new compiler with TypeScript support, hydration improvements, internationalized documentation, and the Million Wrapped feature for showcasing per-component performance gains. Dev explains the block DOM concept, describing how Million identifies static portions of the UI that don't need reconciliation and optimizes around dynamic holes where values actually change.
The discussion draws connections to Solid.js's automatic fine-grained reactivity and how Million brings similar optimizations to React without requiring a framework switch. Andrew notes that React Forget is easier to understand conceptually, while Million's value proposition requires more investigation to appreciate. Dev explains that used together, React Forget and Million could approach Solid-level efficiency from complementary directions—one optimizing rendering and the other optimizing reconciliation.
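The block DOM idea Dev describes can be sketched in plain JavaScript. This is a toy illustration of the concept, not Million's actual implementation or API: the static shape of the output is captured once, and updates only compare the dynamic "holes" where values can actually change.

```javascript
// Toy block DOM sketch: static strings are interleaved with named "holes".
// Mounting renders everything once; patching compares only hole values and
// reports which holes changed, never re-examining the static parts.
function createBlock(staticParts, holeNames) {
  let values = {};
  return {
    // Initial render: interleave static strings with hole values.
    mount(initial) {
      values = { ...initial };
      return staticParts
        .map((s, i) => s + (holeNames[i] !== undefined ? values[holeNames[i]] : ''))
        .join('');
    },
    // Update: diff only the holes. In a real block DOM, each dirty hole
    // would map to a single targeted DOM write.
    patch(next) {
      const dirty = holeNames.filter((name) => next[name] !== values[name]);
      values = { ...values, ...next };
      return dirty;
    },
  };
}
```

For example, a block built from `['<h1>Count: ', '</h1>']` with one `count` hole renders the full string once, and a later `patch({ count: 1 })` flags only the `count` hole as dirty, which is the sense in which reconciliation work scales with the number of dynamic values rather than the size of the tree.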
00:58:14 - Million Wrapped, Developer Tooling, and Closing Remarks
Ishan praises Million Wrapped as a savvy product decision, comparing it to enterprise software that bakes ROI measurement directly into the tool. Anthony highlights that it provides granular, per-component render speed improvements, making Million's value immediately visible to developers. Dev adds that the Million team is focused on improving static and runtime analysis to surface optimization opportunities automatically.
The episode wraps with plugs from the guests: Andrew promotes his DevTools FM podcast and shares the story of Million creator Aiden, who built the project while still in high school. Dev mentions he's recently entered the job market. The hosts thank everyone for participating and remind listeners to subscribe to the newsletter and join again in two weeks for the next JavaScript Jam.
Transcript
00:00:00 - Anthony Campolo
Hello. Hello.
00:01:14 - Ishan Anand
Hello.
00:01:17 - Anthony Campolo
By the way, I don't know if you have heard yet, but you can speak on desktop now.
00:01:22 - Ishan Anand
Oh, you can. I totally missed that. When did they fix that?
00:01:29 - Anthony Campolo
I don't know, but I found out at our last space with Nick and Becca and they made it sound like it had been that way for a while. They were very surprised. I didn't know.
00:01:41 - Ishan Anand
We've been burned so badly with all the past guests who tried to join via desktop. They were like, just don't go there. And then they found out they couldn't speak, and we couldn't figure out why they weren't able to come up. So we just, you know, you never get a second chance to make a first impression. So maybe I should give it another whirl. That's useful news. Well, let's maybe start welcoming folks to JavaScript Jam. You want to just kick us off?
00:02:15 - Anthony Campolo
Yeah. This is JavaScript Jam, our biweekly Twitter Space where we talk about the JavaScript news of the week. My name is Anthony Campolo. I am a developer advocate at Edgio and
00:02:31 - Ishan Anand
I am Ishan Anand. I'm the VP of the Applications Platform at Edgio. JavaScript Jam is something we like to think of as an open mic for everything JavaScript- or web-development-related. That's why we host it on Twitter Spaces. Anthony does send out a newsletter, which I encourage you to go subscribe to. You can go to javascriptjam.com and click the link to subscribe, and he gives a really great rundown of the happenings in JavaScript land and web development for the past two weeks. We often talk about what's in the newsletter, but I love it when it's audience-driven. Some of our best conversations, in fact, have been when the audience decides to bring up a topic completely not on the agenda. And even better, somebody else in the audience had the answer. Those are some of my favorite moments, and that's when I get the most and learn the most from this. And hopefully the audience does too.
00:03:33 - Anthony Campolo
Yeah, for sure. I got the newsletter pinned up to the Jumbotron right now. Got a couple of topics this week. One that I was really curious to get your thoughts on is if you've heard anything about this new Amazon runtime, LLRT.
00:03:51 - Ishan Anand
Yeah, so I saw that. I saw the GitHub repo, looked a little bit at it. We've actually discussed it a little bit here as well because we obviously have a variety of environments we run JavaScript in. In fact, going back at our prior company, which was Moovweb Layer0, we actually built out our own serverless cloud before Lambda even existed. Well, actually, it was being built at the same time Lambda was being built, so they basically came out around the same time. And at the time, Lambda wasn't performant enough for serving web requests because of cold starts. Now, since Lambda came out, it's gotten a lot better. But we used to go through a lot of hoops to essentially create a serverless JavaScript environment that had minimal cold starts. And we would do all sorts of things to optimize its performance, including spinning stuff up, keeping it in kind of a quiescent state, and then waking it up really strategically in order to maximize performance and minimize the cold starts. And so there's this trade-off that you've got in serverless environments: they're really super convenient.
00:05:13 - Ishan Anand
You don't have servers to manage. But the trade-off is that if you get an increase in demand or requests above a certain baseline, then eventually there are servers that need to be spun up, and the system has to go find new computers and new processes to spin up in order to handle the additional capacity. And so those are what we call cold starts, and that causes delay. And so instead of, say, spinning up a full Node instance to handle JavaScript requests, this new runtime, LLRT, which is written in Rust, uses the QuickJS engine, which we've actually used in our edge functions ourselves.
00:06:05 - Anthony Campolo
Yeah, that was one of the things I was curious to ask about: how does this relate to edge? Is this now an edge function or is this a Lambda function, or is this something in the middle?
00:06:21 - Ishan Anand
In theory, you could do it for both, but the practical implication is this is probably a better runtime for edge functions than it is for what we call, in our platform, cloud functions. An edge function typically runs very, very briefly and needs a very short cold start. Typically they're a lot smaller in size and memory requirements. They're doing things like, say, validating a JWT to make sure you have access to the upstream request. Or maybe you've built some security feature on top of it, or the types of things you might use in what, in Next land, they call middleware. Right, very short, quick processing of requests before they get processed fully by something else. You could even do things like redirects inside them. But essentially there's not a lot of memory usage, and you want them to run as fast as possible. And that's because every single request potentially could get this edge function or this middleware run on top of it. And if you think about the average web page, you can have 10 to 100 different requests making up the webpage, and if each one gets a little bit of latency, it's just going to accumulate.
00:07:31 - Ishan Anand
And so the cloud function use case is like your classic AWS Lambda. You're running this for maybe a couple hundred milliseconds, maybe even a couple minutes, and it's maybe aggregating a bunch of APIs together, making a database request, maybe multiple of them, maybe it's doing things like server-side rendering, so it's taking your Next page and turning it into HTML that then gets hydrated back on the client. So those tend to be more long-running, more memory-intensive processes, and classic Lambda is probably the effective choice there. And where it gets really important, the difference as it relates to these runtimes, is Node, or regular Lambda, uses the V8 JavaScript engine, which has a JIT, just-in-time compiler. So as it's running your code, it's figuring out where the slowdowns are and it's dynamically making those parts faster. It's actually rewriting the assembly code. QuickJS, because of the way it's written, doesn't do that. And if you actually look at the benchmarks, you can see this right in the benchmarks. And so I don't know if you'd want to be running a long-running process in a QuickJS LLRT runtime.
00:08:55 - Ishan Anand
It kind of isn't what it's used for.
00:08:57 - Anthony Campolo
It's kind of like, why not? What's the issue there?
00:09:01 - Ishan Anand
Yeah, so you can run it. It'll give you some nice, let's call it, safety isolation. But the performance characteristics aren't the ones you're essentially optimizing for. It's like trying to use a hammer when you should be using a screwdriver. So in a long-running process, you pay the cold start once and then the process continues to run for most of its lifetime; the cold start is a small percentage of the runtime of that execution. And so you don't care as much about the cold start if this thing's going to be running on the order of minutes, for example. Let's say it's doing some log processing or something like that. And in that case, especially if it's doing SSR and that's being cached out, you don't care about the cold start as much. Now, if it's directly serving web requests, then you might. But in the context of an edge function, it's kind of the flip side: you run it so briefly that even if you had a JIT-enabled environment, the JIT wouldn't get much chance to pay off.
00:10:07 - Ishan Anand
So, in theory, the JIT isn't going to have as many opportunities to observe the code's behavior, find loops, and optimize them in a very short-running edge function. And there you want a really short runtime. So the question is, where do you want to pick your poison? Where do you want to take your trade-offs? Is it at the beginning, where you take a longer cold start but run in a JIT-enabled environment? Or is it in an environment like an edge function, where you want very short cold starts and really strong isolation even though you're using these lightweight runtimes? That's something WASM and runtimes like this can provide. So each is kind of fit for a different purpose. I don't find them necessarily as competing with each other. That being said, we've run our own experiments, and I will say that the JIT can be useful in some edge functions. We've certainly seen use cases where JIT would have helped, but on the whole I think LLRT looks like a more useful fit for that context than for a cloud function.
00:11:20 - Ishan Anand
Like, I don't know if you'd want to use this for a long-running server process. Amazon actually has another engine that they use. So for Lambda they use Node. They have something called Lambda@Edge, which is Node-based, but the cold starts were a bit high, and so they have another JavaScript runtime which I think they called CloudFront Functions, which actually ran on yet another runtime in order to maximize its safety and minimize its cold starts. Some of these edge platforms will use what they call V8 isolates to do this, and it's a way to have a fast cold start in a Node environment. Firecracker is different. Firecracker was yet another thing they came out with. But there is some concern in the ecosystem about the level of security guarantees that V8 isolates can give you where you've got code running from a variety of different customers in a very heterogeneous cloud environment. And so you might want something with stronger security guarantees like WASM gives you. But essentially this is just, I'd call it another arrow in the quiver for folks like us building infrastructure that people can use, where they can kind of play off a different set of trade-offs.
00:12:49 - Ishan Anand
So it's really interesting. But it's also worth remembering this just came out. It's super experimental. I would say the great thing here is just to see that there continues to be innovation here and experimentation. And it's really great because they were using QuickJS, which a lot of people have started using, especially in WASM environments, for executing JavaScript because it's a small, embeddable engine that compiles really nicely to WASM. So that's another reason this is really useful.
00:13:17 - Anthony Campolo
Yeah. Do you...
00:13:18 - Ishan Anand
That's... I've been talking for a while.
00:13:19 - Anthony Campolo
I have some questions. Yeah, I was curious, how do you feel like this is going to impact projects like Deno or Bun?
00:13:31 - Ishan Anand
It's a good question. My view on this is the value prop of Deno and Bun is a larger surface area than just what LLRT is designed to solve. So if it creates competition for them, it'll probably be by inclusion into something else, if that makes sense. There's a lot more to those than there is here. There is a kind of indirect competition in that it's another runtime, but I don't really view it as direct competition. Again, it's purely experimental. So maybe that answers the question. I'll leave it there.
00:14:16 - Anthony Campolo
Yeah, no, that makes sense. I agree with you that it's just cool to see more stuff being put out. Especially because Lambda is something that, when it first came out, was pioneering. It was this new thing, and everyone started using it. Then I feel like there was kind of a long hangover where people couldn't quite get it to do exactly what they wanted, or they tried to get it to do something it probably wasn't really well suited for. And then we've been building all of these new things to try and address those issues, and it's still not really clear. I think it does seem clear to me that there will not be one thing that will just replace it and be like, this is the new thing, everyone's going to use this now. It seems like there's going to be a range of things that people are going to either stick with Lambda because it does what they want to do, or they're going to have an issue. And that issue could be startup, it could be streaming, it could be length of execution time. There's all sorts of different kinds of problems people tend to hit.
00:15:18 - Anthony Campolo
So I think all these kind of Lambda extensions, some of them address different pain points. So it's going to kind of depend on like what your pain point is. Is that kind of what you think?
00:15:30 - Ishan Anand
Yeah, I actually think the more fascinating thing is maybe taking a different direction. To me, the interesting direction here is the fact that it used to be there was only Node, and that was really the only JavaScript runtime environment in town on the server. Well, I guess in the browser, if you're thinking client-side. But on the server, Node was the only JavaScript runtime environment in town. That became an unofficial standard. The Node API layers, people just expect them. What we've seen, especially for platforms like ourselves that combine the cloud and edge, but you see this on basically all edge-function platforms that don't have a pure Node environment, is that there's been a lot of work to support this assumption of even just the Node APIs. And I had thought, and we continue to see, folks like Nitro coming from the Nuxt.js guys, or UnJS as they also call themselves, building frameworks for applications that target a runtime surface area that is much narrower and more universal than Node's. It doesn't assume it's a Node environment. It assumes as little as possible of the environment it's running in, basically JavaScript and little else, through a very well-defined contract.
00:16:50 - Ishan Anand
And we saw that growing, I think, over the last three or four years, kind of the frameworks in parallel to all these other runtimes coming out, people realizing, oh, we need to design our frameworks to be essentially runtime-agnostic. And I think that adoption curve, or that migration curve to all these frameworks being more compatible to non-Node environments, has hit some hiccups over the last 12 months. I've seen other folks in the ecosystem say some people are actually even just giving up. And so for me the really interesting question is: can we move our entire JavaScript ecosystem, at least server-side, to this, I'll call this, flexible, more-compatible runtime environment that doesn't really depend on Node and could be any type of light thing? And this is what groups like the WinterCG are trying to do: come up with this kind of standardization among these lightweight runtimes so that things are a lot more portable. But we're not there yet. It's kind of like what, if you squint, interoperability meant for browsers. We're trying to figure that out on the server now, a few decades later.
00:18:06 - Anthony Campolo
Yeah, it's interesting.
00:18:07 - Ishan Anand
I don't know if that answers.
00:18:08 - Anthony Campolo
Yeah, no, I think this is an interesting kind of extension of that because trying to get interop with the browsers, I feel like a lot of that had to do with just standardizing JavaScript, the language itself, or CSS, and in general all the kind of whatever is going to run in the browser. They needed a spec and a standard. And then once we kind of aligned on a lot of the JavaScript conventions and CSS conventions, it's become easier to have standardization in the browser. It's been a little more challenging on the back end because there doesn't really seem to be as much coordination on the back end prior to WinterCG. And now WinterCG is supposedly going to get us there, but I just don't really hear about it very often. It doesn't seem to have as much kind of gravity around it as every year you'd have your ECMA stuff come out and people would be like, oh, what are the new JavaScript things coming out? But as far as I can tell, there's no yearly cadence with WinterCG. Maybe once they ship v1, there will be some sort of cadence, but I'm still kind of not really sure whether that's going to really be the thing we can rally around or not.
00:19:30 - Anthony Campolo
If it's not, then we're kind of screwed because there's not going to be anything else that's going to present itself as the solution. So I hope that it eventually bears some fruit.
00:19:42 - Ishan Anand
Yeah, I think you don't hear about it partially because it's a smaller audience. It doesn't affect users directly. If your app doesn't work because your backend runtime is incompatible, you don't launch. Your dev team finds out about it. But if your end users pull up a browser that doesn't work, they feel it after you've already launched, when they go try it out. It's a very visible failure mode. We've just started getting more active in what these bodies are doing, so I can't speak from firsthand experience on their cadence, but looking at the ecosystem, at the folks who are building infrastructure, I have seen people say that there's been a little bit of stalling in this stream. I do also feel like the work of standardization isn't just, oh, we get a spec and we agree to what those things are. I think the browsers are actually really interesting. I don't feel like people know the full story of how the W3C and the WHATWG split and then came back together, and the way they approached standard-setting afterwards was much more like, okay, one of the browser vendors has to go implement this and try it out first and demonstrate it working.
00:21:19 - Ishan Anand
And you can in theory write a spec, but you really need some way to validate that it works. And you might write a spec that has so much ambiguity that you don't really have a way to validate it. So you might in theory have two browsers that claim to be following the spec, but they just interpreted the language slightly differently. So I think a lot of those lessons got learned, and hopefully we'll take those as well in these new runtimes.
00:21:46 - Anthony Campolo
Yeah, it reminds me of SQL. There's technically a SQL spec, but it's an incomplete spec. So every database vendor, or really every database, will implement the spec and then make some of its own decisions.
00:22:05 - Ishan Anand
Yeah, you can port some of the simple queries, but they won't necessarily behave identically. Yeah, that's a good example. But I was just happy to see continued innovation in this area. What I would really like to see, though, is more things like what the Nuxt folks are doing with Nitro, which is more frameworks saying we are targeting a non-Node environment, or we're supporting that as a first-class citizen, as a runtime that you can deploy our applications to. Because I think what's really going to drive innovation here is demand. It's going to be a feedback cycle where the more frameworks you get like that, the more runtimes will exist, and then the more frameworks will adopt because they have more runtimes for it and you give your users more options. So that I think would be really cool.
00:23:01 - Anthony Campolo
Yeah, I've only ever used Nitro in a Node kind of way. I've never used it otherwise. So what else can you run with Nitro?
00:23:13 - Ishan Anand
I believe it'll run the same types of jobs and applications. It's just that you can now deploy to a non-Node environment, and it's being handled in a very systematic way.
00:23:28 - Anthony Campolo
Interesting.
00:23:30 - Ishan Anand
Yeah.
00:23:31 - Anthony Campolo
I wonder if this is probably one of the reasons why Ryan used it for SolidStart.
00:23:36 - Ishan Anand
Yeah. In fact, if you go to JavaScript Jam, I think we still have the video on here. If you go to videos... Oh, we don't have the Composability Summit videos up still.
00:23:52 - Anthony Campolo
I can find it on YouTube.
00:23:54 - Ishan Anand
Yeah. Daniel from Nuxt spoke about Nitro in the early days and it should still be up on YouTube. He spoke at our summit a while back. I think that was like a year and a half ago when Nitro was just getting started. And I saw it, I was like, oh, we definitely need to have you here to speak.
00:24:15 - Anthony Campolo
Yeah, Nuxt and the composable web.
00:24:17 - Ishan Anand
Here it is. Yes, exactly. That's the one. So yeah, I definitely recommend folks check that out. So that's the demand side of this equation, the supply side being the LLRT runtimes and the like.
00:24:38 - Anthony Campolo
Yeah.
00:24:39 - Ishan Anand
So on the newsletter, the one that, you know, jumped out at me. Oh, well, we should. We're halfway through. We should do our station break.
00:24:50 - Anthony Campolo
Yeah, go for it.
00:24:52 - Ishan Anand
Okay, yeah, so we're about halfway through. Reminder: you are listening to JavaScript Jam. JavaScript Jam is a Twitter Space and podcast that is run, well, the Space is every two weeks, and we put out the podcast sometimes with additional interviews throughout the year. It is designed to be both a podcast and, we like to say, an interactive open mic for anything JavaScript- and web-development-related. If you go to our website, javascriptjam.com, you can sign up for a newsletter, which Anthony curates and sends out with the latest interesting stories from the JavaScript and web development ecosystem, and often is the fodder for what we talk about here. But we like this to be as audience-driven and participatory as possible. So feel free to raise your hand, and we're happy to bring you up to the stage to talk about whatever topic we brought up or if there's some topic you're curious about or interested in. That's some of the best conversations we've ever had, when the audience said, hey, I want to talk about X, which I just saw in my Twitter feed today. And anything JavaScript-related or web-development-related is on topic.
00:26:03 - Ishan Anand
So go to javascriptjam.com, click on newsletter, and subscribe. And I think you've posted, yeah, you did in the Jumbotron a little bit earlier, the link to this week's newsletter. The item that jumped out at me was the Next.js App Router migration from Brandon of Flight Control.
00:26:33 - Anthony Campolo
Yeah, I'm glad that jumped out at you. I was actually thinking about sending him a message and seeing if we can get him on as a guest to discuss that. Because you'll notice there's actually three separate articles in this week's edition that are all about migrating to the App Router. It's on everyone's mind right now.
00:26:50 - Ishan Anand
It totally is. I had the exact same idea. I saw it, I was like, oh, we should reach out to him to see if he can join today, but that would have been last minute. I mean, not only that, it feels like there's a moment not just about App Router, but a moment about: is either React or Next getting too unwieldy? I'm curious if that's your feeling. I feel that topic is in the wind. At least it's coming through in my feed every once in a while. There's a Hacker News post, it feels like every few days, that's basically that sentiment. I don't know if you're getting that as well.
00:27:26 - Anthony Campolo
Yeah, I think you're seeing a lot of people coming out and kind of complaining about React or talking about migrating off of React. And sometimes it's clear that they're really talking about Next and the App Router, and sometimes they're still just kind of talking about React. I feel like some people have always wanted to get off of React. I've talked to lots of people, I've always been close to the Svelte community, and I feel like the Svelte community kind of uses React as a rallying cry to use Svelte in general. But now, because of server components, because of the changes in React, I feel like they're seizing the moment and seeing, oh, this is a good time to try and push other frameworks. But at the same time, people who are all in on React, they want to be using React, but they're not really quite on board with Next and the App Router yet. So they're also kind of pushing back on that. And so it's not always entirely obvious where people are coming from and whether their critique is with React proper or with the specific implementation of React.
00:28:36 - Anthony Campolo
And then also you see the Remix crowd coming in and kind of trying to seize the moment too, and be like, hey, this is the time to migrate to Remix. Some people are like, hey, you should migrate to another React framework, or you should migrate off React entirely. So I think everyone knows that there's pain in the ecosystem right now and are trying to kind of capitalize on that, which is one, kind of skeevy, but also this is how open source works. What I don't like is when people attack the React team and make it real personal. That always kind of bothers me. And Brandon definitely does not do that. So Brandon, I think, wrote a very fair article that just kind of lays it all out, and I thought he did a really fantastic job.
00:29:27 - Ishan Anand
Yeah. For those who haven't read the article, I guess maybe we can just briefly summarize it. Basically, the punchline is at the end where he said if he had to do it over again... It's right here. It's the headline in the last section: "If we could go back, we'd choose Remix." And he said, aside from much better dev performance, I think it has better architecture and abstraction. For example, with Remix, the user owns the client and the server entry points. But Next.js owns everything, preventing you from doing anything they don't explicitly allow unless you use npm patches, which he says they've consistently had to do.
00:30:11 - Ishan Anand
I mean, his conclusion at the end is: here are the things that were good, here are the things that didn't work. In particular, he felt like RSC wasn't as stable as maybe they were led to believe. Now, granted, they started this a while ago. I think it was in April. Let's see if we can... I remember reading it. Yes. So they started in April 2023 and they looked at Next App Router, Remix, TanStack Router, and React Server Components, and App Router seemed like it was the future. And from the vibes I read, it's a lot better now than it was back then.
00:30:51 - Anthony Campolo
Vibe driven development.
00:30:54 - Ishan Anand
Yeah, exactly. Vibe-driven is also how you evaluate large language models too nowadays, because everybody just puts the test data in the training set. But yeah, so basically the end of the article is: here are the things that worked well, here's what didn't. But overall they would not have done it this way again if they had to. So I was kind of surprised by that, especially given that Brandon is no slouch when it came to Next, right?
00:31:25 - Anthony Campolo
Yeah. One of the biggest Next experts in the world. Yeah. So it says a lot that even he would find it too complicated. Like, if someone who built a meta framework on top of Next still thinks it's too complicated, that's pretty damning.
00:31:40 - Ishan Anand
Yeah, I was really surprised to see that. So it sounds like you were just as shocked too. That, to me, I thought was surprising. I do get the feeling that he started basically about a year ago. It certainly sounds like it's come a long way and somebody starting now would not have as many pain points. But I don't know, what was your interpretation or takeaway from this?
00:32:08 - Anthony Campolo
I think it still depends on, like, he's building a serious production app that's being used for deployments and has to really work very specifically, whereas... I think it really just depends on your use case. As these new paradigms come out, they're never going to be able to handle every edge case, and it's just going to take time. As the community builds out more things with it, they're going to find more edge cases. The core team will patch those. They'll figure out ways to explain how to use these things in the docs. So I think: don't use bleeding-edge technology if you don't want to bleed. That's really what I think it comes down to with stuff like this. I think some people can quibble with whether it was accurately depicted in terms of how close it was to being ready, or whether it was beta, or whether it should be Canary. There was that whole argument about whether it should be in Canary or not. So I don't know. I feel like if people are really worried about this stuff, they shouldn't use the new thing. I don't think you should use the new thing if you don't...
00:33:16 - Anthony Campolo
If you're not expecting to hit bugs and edge cases and have to work through this kind of pain. So that's kind of my take. And I haven't migrated a large production app to App Router, so I can't really say from a personal standpoint whether it's fully baked or not. But some people think it's fine. Some people hit a lot of issues. I think that if you are really worried about this kind of stuff, then just don't use it. Keep using the Pages Router way and I think it'll probably work just fine. So no one's forcing anyone to migrate. Next hasn't deprecated non-App-Router usage as far as I'm aware. I think that people should be more wary of migrating to the new stuff and they shouldn't just expect everything to work perfectly.
00:34:08 - Ishan Anand
Yeah, it's really interesting, especially when we talk to a lot of large enterprise customers and they treat upgrades and migrations to the latest version with care. It may seem like it works, but you never know when there's an edge case that broke, and especially on a large website that could have serious financial implications. So you do an upgrade and you put it through a full QA cycle even if it seems like it's working. And I think web developers have been historically spoiled that on the long arc of web development, most of the time you could take upgrades, or at least in the Next ecosystem. He says this in the blog post. He said in the past major version upgrades have been seamless, but not this one. And it now seems interesting that web development has gotten to the point that feels like other software development. Once the framework gets to a certain size, there's a certain surface area, and the larger that surface area is, the more care you have to take when you're migrating between versions.
00:35:25 - Anthony Campolo
Word. Did you check out the other articles about the migration?
00:35:33 - Ishan Anand
No. Yeah, I was about to say, the other articles I did not check out, but they seemed... Well, I briefly skimmed them. That was the only one of the App Router ones that I read in depth. What was your takeaway from those? They seemed a lot more positive. Yeah, I mean, one was a case study.
00:35:49 - Anthony Campolo
Yeah, yeah, they were, I think... And so it's kind of like I was saying, it has to do with what your use case is. So I think for them, what they were building, it was more like kind of classic Next apps. One was an e-commerce kind of thing. I'm sure you're familiar with Medusa.
00:36:08 - Ishan Anand
Yep.
00:36:08 - Anthony Campolo
Yeah. And then the other one was kind of like a PWA type thing. So I think with something like that, those were probably well suited and more in the sweet spot of what the App Router is meant to do. And I think that probably helps a lot. So yeah, I definitely recommend people check out these articles. I'll pin both of them. One is like the OpenTask one, is really, really long. It's basically a how-to and goes into all the code that is in the project. So if someone really wants to do this, this is going to be the article to check out.
00:36:51 - Ishan Anand
I mean, he's really clear that this is exactly that. This is a how-to-build case study and it's, I think, entirely net new, whereas the other one is explicitly about a migration. It's like what we learned actually transitioning. And he has the before and after code. I didn't get a chance to go through the before and after examples that much. I don't know if you did. But what I was planning to and didn't get to was to look at them and say, well, do I feel like this looks easier to maintain and easier to understand and build than the before? I don't know if you got a chance to take a look at that.
00:37:29 - Anthony Campolo
Well, this is an interesting question because the easier-to-build thing is, one, subjective, and two, it depends on where you're coming from. Because if someone already knows a ton about Next, they've learned how to use Next, they know all the Next conventions, they know how to build a project with it, that's going to be very easy to do. And the new conventions with the App Router and React Server Components will all seem very weird and foreign and strange. But if you knew nothing about Next and then came to it today and learned how to do it with the App Router, then that would just be the way you know how to do it. So I think it's a similar thing with, go all the way back to class components versus hooks. People thought hooks were weird and strange at the time, and then people who learned through hooks look at class components and they're like, whoa, this is way harder. So I think it has to do slightly with just where are you going to, what are you going to learn, and what is going to seem like the normal way to do it?
00:38:33 - Anthony Campolo
And usually whatever you learn first is going to be the easiest and then learning something different is going to be harder. Like, I think that's almost like a universal rule. Even if the new thing is easier, it will seem harder because it's different from what you already know.
00:38:49 - Ishan Anand
Yeah, it's anything different from what you're used to. It's something you didn't have to think about, so to speak. Now you have to think about it. Someone in the audience, Andrew, just replied and he said, you know, grass is always greener. Andrew, if you want to raise your hand, we're happy to bring you up to the stage and hear your take if you've been following these articles.
00:39:09 - Anthony Campolo
He's also a host of one of my favorite podcasts, DevTools FM.
00:39:16 - Ishan Anand
Oh wow. Okay. Yeah, great. So feel free to raise your hand. We'd love to.
00:39:20 - Anthony Campolo
Yeah. I threw out an invite to him at the beginning, so no worries if you're not available. We threw it out to Brad as well, long-time JSJam speaker.
00:39:27 - Ishan Anand
Oh yeah.
00:39:28 - Anthony Campolo
Nice to have you up here if you can. Oh, looks like
00:39:35 - Andrew Lisowski
Might just crap out at some point.
00:39:38 - Anthony Campolo
No, you're good, man. Thanks so much for joining.
00:39:40 - Andrew Lisowski
Yeah, no problem. That was mostly in response to Brandon's article, like, oh, if we would have done it with Remix, it would have been better. But yeah, I think anytime you say that, it's a grass-is-greener-on-the-other-side situation. It's like, oh yeah, everything would have been great and there would have been no problems. But there are trade-offs either way.
00:40:03 - Ishan Anand
It's definitely a fair callout. It always looks like you only see the warts of the project you're working on and all the benefits of the thing on the other side of the fence. So it's a totally fair callout. He did seem to indicate, though, he felt like the less magical aspect of Remix to him felt like an advantage. I wonder if that's more, if you're an expert, you want to know how the thing works, you don't want the magic. And that might also be a persona-fit type thing. If you had a thought on that...
00:40:49 - Andrew Lisowski
I personally don't think it's that much magic. I think it's just like with App Router, there's been a lot of FUD on Twitter. There are just lots of people not understanding it and lots of bickering going back and forth. Honestly, my take is that Next.js App Router should have been another Vercel product. It's utterly confusing to me when I go to the Next.js docs and there's the Pages Router and the App Router. It's essentially a new paradigm. It makes no sense to me that they're both Next.js.
00:41:27 - Ishan Anand
Oh wow. So in your world, would it be a separate framework?
00:41:35 - Andrew Lisowski
Yeah. After Next.js, I don't know.
00:41:38 - Ishan Anand
Yeah, yeah. Do you feel like maybe the criticism... So then what I was going to ask is, do you feel like this is a bit of a pile-on unfairly, like a misunderstanding of the direction they're trying to take it? Or it sounds like they should have just left Next the way it was set up and just said, hey, this is a new way for a different type of use case of app. This is a different framework, but still React.
00:42:08 - Andrew Lisowski
Yeah, in my opinion, just from the documentation standpoint, it is so hard to navigate. If I were a new person coming into App Router, if App Router were the only docs and I had no way of knowing about the other thing, it might be easy. But as a new person I think I'd be utterly confused. Just like going, oh, I'm looking at next/link, but oh, this is the wrong next/link. It's like, oh, I want to use the router. Oops, that's the wrong next/router. So it's like, why couldn't it have just been another thing? Honestly, App Router to me feels like a reaction to Remix, and it should have been a reaction in a separate repo. I'm all for the App Router way to do things. It's a fun way to code, it's a fun way to encapsulate things, but just to mix the frameworks... In my opinion, if you have a big enough breaking change, that's a new library, it's not a breaking change on a past library. And Next.js 13 is pretty deep into the majors, so to me it just makes sense for it to be a separate project.
00:43:19 - Anthony Campolo
Yeah, it's a really interesting point. I feel like that would be nice, but how many people would really use it? By forcing all the Next people to migrate, it ensures your entire Next ecosystem ends up on it. Whereas if they had done that, maybe everyone would have just ignored it.
00:43:40 - Andrew Lisowski
Maybe. But I think they still plan to support the Pages Router going forward. So it's not like the Pages Router is the old dead way of doing things. It's still just as much a priority as the App Router, if you're to believe the tweets.
00:44:01 - Anthony Campolo
Or so you say, Ishan.
00:44:03 - Ishan Anand
Yeah, I wonder if part of the problem here is the surface area of problems that the framework is trying to solve is just getting too expansive. I agree with Anthony. Why would you throw away the momentum and awareness and traction you get from being part of the existing framework? But I can understand how, if the paradigm is so different... I think, for example, of Angular: the switch from Angular 1, or AngularJS, to Angular 2 was so different it arguably should have been considered a separate framework. And that was more fundamental, I guess, because the underlying architecture changed, so maybe it's more dramatic. But I wonder if maybe that's part of the thing here, that it's trying to do too much and satisfy too many use cases and too many stakeholders.
00:45:13 - Andrew Lisowski
Yeah, that's fair. One interesting thing I found while rebuilding my website with the new App Router is that it kind of feels like static is a lot less easy. For my use case, I just wanted to have some markdown files that were deployed along with my app. I spent far too much time actually getting the text files or the modules included with my deployed app. And it's just like, isn't Next.js supposed to be this easy thing where I have a very simple website, I can generate things statically, and then I ended up having to go through all this stuff to get webpack to know about the modules so they would be included, so I could do things like a search in a React Server Component that would just read the markdown files and do some stuff to them. So just that was hard. So it's like Next.js to me was originally very much static website stuff, but there's kind of some hard edges to the new App Router that don't make static as easy anymore.
00:46:24 - Ishan Anand
I believe it's actually that: the too-many-use-cases problem. Next.js, I think, first started non-static, then it incorporated static as Jamstack got important, doing things like ISR and other forms of rendering. Now Jamstack has been retired as a term a little bit and isn't as important. I can see how it can be de-emphasized, but you've got this legacy of all those decision points in there.
00:46:54 - Dev Agrawal
Are you sure it's not a thing anymore?
00:46:56 - Ishan Anand
Yeah, I'm not sure. I'm being a little dramatic on that, but I've just been listening to, for example, folks at CloudCannon have a series they're doing on, you know, what the Jam... and it's basically about what is Jamstack now and how does it get defined? And you know, two or three years ago at JavaScript Jam, we had about two or three different episodes on what is the new definition of Jamstack, and we even had a panel on it. But what is your opinion?
00:47:29 - Anthony Campolo
Yeah, we got resident RSC advocate Dev with us now.
00:47:33 - Dev Agrawal
Yeah, I was referring to the talk Matt Biilmann did called Jamstack 2024. It was hilarious and a fun watch.
00:47:43 - Ishan Anand
Oh, I haven't seen that. How did I miss that?
00:47:46 - Anthony Campolo
Yeah, isn't Jamstacked the newsletter?
00:47:50 - Ishan Anand
Oh, in Brian's newsletter. I must have missed that. Okay, let me see.
00:47:56 - Anthony Campolo
Do you have thoughts on the topic at hand, Dev?
00:48:02 - Dev Agrawal
Can I get a quick refresher on what it is, what the context is?
00:48:07 - Anthony Campolo
We were talking about App Router, whether Andrew was suggesting that what if App Router had been a new Vercel product, or just a new framework, or was kind of decoupled from Next...
00:48:20 - Dev Agrawal
Entirely, then it wouldn't have the name recognition that Next.js has and wouldn't have as much traction.
00:48:27 - Andrew Lisowski
Vercel is a pretty big company. Vercel can put out more products and they'll be successful. I don't think Next.js is the last product they make.
00:48:38 - Anthony Campolo
That's a good point.
00:48:39 - Dev Agrawal
That's fair. They're building new products for sure. But I think the name recognition that Next.js already had as the best way to set up React and SSR and everything, it made sense that with server components it's like the next evolution of Next.js rather than a completely new framework.
00:49:03 - Ishan Anand
I will say, from talking to folks in the enterprise, Next, well, more so React, is a recognized name, and it can become like, you know, no one ever got fired for choosing Microsoft. Regardless of where you decide to host it, if you say, oh, you're using that? Okay, yes, I know people who use this framework. It just becomes a little bit of a sometimes undeserved passport that says, oh, okay, yeah, that's a sensible choice, even if it's the wrong choice for the particular use case you're focused on.
00:49:42 - Dev Agrawal
Yeah, there's definitely some of that. I've been recently listening to the Primeagen on similar topics for quite a bit, where he talks about like, if you have used a certain tool, and if you have used it over and over again, it doesn't matter how bad the tool is, you just get better at using that tool to build something valuable. So there's definitely some of that that plays in here.
00:50:09 - Ishan Anand
Yeah, that makes sense. We're almost at the top of the hour. I did want to get to the Million.js...
00:50:15 - Anthony Campolo
Well, perfect. Toby just showed up also.
00:50:18 - Ishan Anand
I see that as well. Yeah. So why don't you just quickly summarize, and then I have a question or two about that that I thought we could tackle. Go ahead.
00:50:27 - Anthony Campolo
Well, first, are there any last thoughts from either Dev or Andrew on the Next or App Router topic?
00:50:36 - Andrew Lisowski
Go build.
00:50:37 - Anthony Campolo
Cool.
00:50:37 - Andrew Lisowski
Shit. That's all I got.
00:50:41 - Dev Agrawal
Yeah, what he said.
00:50:44 - Anthony Campolo
Awesome. Yeah. So Million is a project that we've already been talking about for a while on JavaScript Jam and is, I think, a very cool project. They just recently released their v3. Hopefully we can get Toby up here to talk about it a little bit. But the thing they're really focused on is performance, so you've got improvements on hydration and a new compiler, which is going to help with supporting TypeScript, and this will handle a lot of interesting compiler-y type stuff. It seems like a lot of compiler work is now being done in React world, either in React core or in React frameworks. We've also got some internationalization additions. The docs and the website support multiple languages now. And then one thing that I don't know a whole lot about but sounds pretty interesting is something called Million Wrapped, which is providing a tool that can kind of showcase the performance improvements in your app. I know they're always big on, when you migrate from React to Million, really seeing what the performance increases are that you're getting from it. And yeah, it also mentions that the team is looking for new people.
00:52:12 - Anthony Campolo
I know that Aiden had gotten some funding from Tyler Cowen's Emergent Ventures project, which is pretty sweet. So it's great to see that the project is really getting some uptick and some support, some financial support. So yeah, if people are interested in checking out Million, it seems like now is a really good time to get into it. I'd be curious if either Dev or Andrew have tried it out.
00:52:45 - Andrew Lisowski
I haven't tried it out myself. From what I understand about it, it's mostly for large virtual lists, basically. And the app I work on, Descript, doesn't have much of that. I would be interested to see if it helps. But the Descript front-end app is quite complex.
00:53:05 - Anthony Campolo
Yeah, I think if you have a lot of diffing, a lot of virtual DOM stuff, so it doesn't necessarily have to be a list, but it's aimed at heavy DOM manipulation.
00:53:22 - Ishan Anand
So I think maybe the first question somebody might ask is: how does this relate to something like React Forget, which is not live yet but is coming? And I know Aiden's, I think, addressed this on Twitter in the past, but do you have a succinct way to kind of describe it? My understanding is React Forget is more about the rendering and this is more about the reconciliation. That's still the case with Million, I believe.
00:53:53 - Anthony Campolo
I'm going to pin this tweet here that he has about it. That's the sense I've gotten from it as well. It's one of those things, I've heard him explain it multiple times and I feel like I don't really fully understand it well enough to explain it myself. So I'm gonna point people to this post here. We got some other people that might
00:54:16 - Ishan Anand
be the one sharing stuff.
00:54:17 - Anthony Campolo
Also, we have put Dev's Jamstack 2024 up here. Oh, it looks like Toby's not able to hop up right now. He said he's in a noisy environment, but no worries. We appreciate you being in the audience.
00:54:33 - Dev Agrawal
I can try to expand more on the rendering versus reconciliation.
00:54:37 - Anthony Campolo
Yeah, go for it.
00:54:39 - Dev Agrawal
My understanding of Million.js and the block DOM technology behind it, I think of it like this. If you think of the UI that you're creating as a sea of a bunch of different UI components, or VDOM components, then block DOM basically looks at parts of that sea that don't really change. Honestly, it's kind of like an island within the sea, but I don't want to invoke islands in the Astro sense. But basically, parts of your UI that don't really change that often. For example, if you have three divs in a row and then you have a span with some value inside of it, let's say that only the value changes, then really the only thing that needs to change is the text inside the span. The other divs, they don't really need to be rendered much. It tries to figure out what are those blocks of UI that go together where they don't really have structural changes within them. And those blocks are what it optimizes for, because React will treat everything as the exact same thing. But Million can recognize the difference between blocks that don't change and dynamic holes inside them where we put values.
00:56:00 - Dev Agrawal
And that's my understanding. I could be somewhat wrong here, but that's at least how I think of where Million best works in my application.
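[Editor's note: Dev's mental model can be sketched in plain JavaScript. This is an illustration of the block-DOM concept only, using plain objects in place of real DOM nodes; it is not Million.js's actual API or implementation.]

```javascript
// Sketch of the block-DOM idea: build the static structure once, record a
// reference to the dynamic "hole", and on update write straight into that
// hole instead of diffing the whole tree. Conceptual only, not Million.js.
function createBlock() {
  // Static structure from Dev's example: three divs plus a span whose
  // text content is the single dynamic hole.
  const span = { tag: "span", text: "" };
  const root = {
    tag: "div",
    children: [{ tag: "div" }, { tag: "div" }, { tag: "div" }, span],
  };
  let patches = 0;
  return {
    root,
    // No reconciliation pass: only the hole is compared and updated.
    patch(value) {
      if (span.text !== value) {
        span.text = value;
        patches += 1;
      }
    },
    patchCount: () => patches,
  };
}
```

The point of the sketch is that an update never visits the three static divs at all, which is the work a full virtual-DOM diff would otherwise repeat on every render.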
00:56:12 - Ishan Anand
And is it fair to say that's kind of like what, say, Solid tries to do automatically, although not in React?
00:56:19 - Dev Agrawal
Yeah, exactly right.
00:56:20 - Ishan Anand
Yeah, yeah, yeah. Basically understand the dynamic versus, shall we call them, the static nodes, and then optimize the reconciliation performance based on that.
00:56:30 - Dev Agrawal
Yeah, both React Forget and Million, they try to solve the same inefficiency problems of the VDOM from different directions. And both together, I think, if they both can be used together, you kind of approach what something like Solid.js or Svelte Runes is trying to do, which is one thing can optimize from the reconciliation part, take the blocks that don't have to rerender often, and then React Forget on the other side can manage the state inside your components where when one value changes, it only rerenders the UI that needs to change.
00:57:10 - Ishan Anand
It's like automatic memoization.
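[Editor's note: the "automatic memoization" framing can be illustrated with a tiny runtime sketch. React Forget actually does this at compile time and its output looks nothing like this; the wrapper below just demonstrates the effect of skipping re-renders when props are unchanged.]

```javascript
// Conceptual sketch of memoized rendering: wrap a pure render function so
// it only re-executes when its props actually change. This is what hand-
// written useMemo/React.memo achieves today, and what a compiler like
// React Forget aims to derive automatically.
function memoizeRender(renderFn) {
  let lastProps;
  let lastResult;
  let runs = 0;
  const shallowEqual = (a, b) =>
    a &&
    b &&
    Object.keys(a).length === Object.keys(b).length &&
    Object.keys(a).every((k) => a[k] === b[k]);
  const wrapped = (props) => {
    if (!shallowEqual(props, lastProps)) {
      lastProps = props;
      lastResult = renderFn(props);
      runs += 1; // track how often the expensive render actually ran
    }
    return lastResult;
  };
  wrapped.runCount = () => runs;
  return wrapped;
}
```

Combined with the block-DOM idea, this covers both halves Dev mentions: one side skips recomputing output for unchanged inputs, the other skips diffing structure that cannot change.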
00:57:12 - Andrew Lisowski
Oh, sorry, yeah, Forget is a lot easier for me to understand. It's like, oh, I don't have to write most of the callbacks and all that type of code anymore. Million's promise is a lot harder for you to wrap your head around because it's like, oh, it just makes things fast. And it's like I wish there was some way to measure it or detect areas. I know they have a new million-lint project, but just being able to see the value I'm getting from it, I just don't feel like it's there without me doing a lot of investigation first.
00:57:46 - Dev Agrawal
I think that's another kind of a good value prop of Million, is that it never really de-opts anything. So the worst-case scenario is that your app is exactly as performant as it would be without Million, and obviously that's the worst-case scenario. And anytime it finds blocks that it can optimize, it basically provides an incremental performance gain.
00:58:14 - Ishan Anand
I think maybe that was the motivation for the other thing that jumped out at me in the release, which was Million Wrapped, which, you know, you mentioned earlier, Anthony, but it's basically: let me advertise the impact, or make it easier to advertise the effect.
00:58:28 - Anthony Campolo
Yeah. And literally it goes down the list of all your components and says this component's render is now 22% faster. So it seems like that will really give you very detailed, fine-grained information about how this is affecting your performance, which is pretty cool.
00:58:44 - Andrew Lisowski
That's pretty sweet.
00:58:48 - Ishan Anand
Yeah. I'd say that's really savvy, kind of from a product-management angle, an element of delight that makes a lot of sense to include in the package. In an enterprise product, very often you want to show the lower TCO or the ROI. The more you can bake it into the product, the better. This was, I don't know, just a hat tip there for putting that kind of thing right into the framework. That was clever.
00:59:16 - Dev Agrawal
Yeah. And I love that Andrew brought up that you have to investigate quite a bit to figure out where the performance gains are. I think, from what I've heard, and I've also talked to Aiden a bit, that's the exact part that they're looking to improve a lot more, where their compiler not only can optimize your components, it can also look at and statically analyze your components, or at runtime analyze your components together. It gives you much better insight into which components, like I think Anthony brought up earlier, re-rendered this many times or have become this much faster now, and it points out ways where you can optimize the component on your own. It's almost like a better dev tool for React performance.
01:00:10 - Ishan Anand
Very cool. It's the top of the hour and I have to drop.
01:00:14 - Anthony Campolo
Yep. Yeah, we can wrap it up. Also, I just pinned DevTools has an episode on Million. So if you want to check that out, for the people up here, do you want to give a little pitch of who you are and what people should check you out for?
01:00:32 - Andrew Lisowski
Sure. I'm the co-host of DevTools FM. We're a podcast about developer tools and the people who make them. We like to interview people who are making the developer tools you use and may use in the future and get into why they made them and the story behind it and the future of their field. So the talk with Aiden was pretty fun. If you don't know, he created Million while he was in high school as a TA in another class. So it's a pretty inspiring story to see that. And we have a bunch of those across all the episodes we have.
01:01:05 - Anthony Campolo
Yeah, he's one of those kids like [unclear], who created [unclear] in high school. They're going places, let me tell you. Dev, you're looking for a job, right?
01:01:18 - Dev Agrawal
Yeah, I recently entered the job market. I'm not super excited about it, but here I am. So if you or someone you know could use my skills, hit me up.
01:01:32 - Anthony Campolo
Yeah, definitely recommend hitting up Dev. If you want a software engineer, developer advocate, he could do either, I think. So yeah, definitely has the skills to pay the bills. Awesome. So thank you so much for joining, everyone. This is a very fun conversation. Thank you, Andrew and Dev, for coming up and joining the conversation. And we will be back in two weeks to talk about more JavaScript Jam news of the week. Anything else you want to say, Ishan?
01:02:05 - Ishan Anand
Nope. Thank you, everyone, and I look forward to seeing you in two weeks.
01:02:09 - Anthony Campolo
All right, bye everyone.