
Deno npm Compatibility
Episode Description
Luca from Deno discusses the 1.28 release bringing stable NPM compatibility, its security model, performance benefits, and the WinterCG standards effort.
Episode Summary
In this episode of JavaScript Jam Live, Luca from the Deno team returns to discuss Deno 1.28, which introduces stable NPM module compatibility — arguably the runtime's biggest release since 1.0. The conversation begins with why NPM support matters: it lets developers adopt Deno progressively without abandoning the massive existing Node ecosystem, and the growth numbers since the release have been extraordinary. Luca addresses community concerns that embracing NPM compromises Deno's original vision, arguing that nothing was removed and that a larger user base benefits everyone, including purists who prefer URL imports. The discussion then moves into the technical differences developers should expect — no package.json, no node_modules folder, and a global caching system with Docker-friendly hooks for deployment. A significant portion of the episode covers Deno's permissions model, which now extends to imported NPM packages, offering meaningful protection against supply chain attacks that Node simply cannot match. On performance, Luca explains that Deno actually runs Express faster than Node does, even through the compatibility layer, thanks to aggressively optimized native APIs and a focus on real-world application profiling rather than micro-benchmarks. The episode closes with a look at the WinterCG standards group and the slow but important work of aligning server-side JavaScript runtimes around shared APIs.
Chapters
00:00:00 - Introduction and Deno Overview
Ishan opens the show with the usual JavaScript Jam Live housekeeping, inviting audience participation and plugging the newsletter at javascriptjam.com. He introduces Luca as a repeat guest from the Deno team and sets the stage for the main topic: a major new release.
Luca introduces himself, mentioning his work on TC39, WhatWG, and W3C web standards alongside his role at Deno. He gives a concise overview of Deno as a modern server-side JavaScript and TypeScript runtime created by the same people behind Node — Ryan Dahl and Bert Belder — designed to correct Node's historical shortcomings by embracing ES modules, web APIs, and TypeScript natively.
00:05:09 - The NPM Compatibility Announcement
Luca explains what Deno 1.28 actually shipped: developers can now import NPM packages directly using an npm: specifier in their import statements, with semver ranges specified inline rather than in a package.json file. There is no local node_modules folder; instead, Deno uses a global cache similar to pnpm but fully virtual.
The conversation covers how this approach simplifies the developer experience — no install step, no dependency folder clutter — while still offering hooks like deno cache and environment variables for DevOps teams that need deterministic deployments in Docker or across horizontally scaled infrastructure. Luca also notes Deno's permissions model constrains NPM packages, providing a layer of supply chain attack protection absent in Node.
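The import style described above can be sketched as follows (this requires the Deno runtime; the package choice and port are illustrative):

```typescript
// main.ts — the version range lives in the specifier itself;
// no package.json, no install step, no local node_modules.
import express from "npm:express@4";

const app = express();
app.get("/", (_req, res) => res.send("Hello from Deno"));
app.listen(3000);
```

On first run (for example, `deno run --allow-net main.ts`), Deno resolves the semver range, downloads the package into its global cache, and reuses it from there on subsequent runs.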
00:07:24 - Community Reaction and the "Compromising Principles" Debate
Ishan reads critical Hacker News comments from developers who feel Deno is abandoning its original vision by embracing NPM. Luca responds with data: growth since 1.28 has been unlike anything since the original 1.0 launch, with the monthly active user base — previously estimated at 200,000 — poised to jump dramatically.
Luca also makes the philosophical case that nothing was removed; Deno remains ESM-only for application code, and URL imports are still fully supported and even preferable for certain use cases like private registries. He argues that a larger user base produces more native Deno modules, more bug reports, and faster improvements — benefits that flow back even to developers who never touch an NPM import.
00:15:17 - Was This Always the Plan?
Ishan asks whether NPM compatibility was always on the roadmap or a reactive concession. Luca explains that Node polyfill work in the Deno standard library began roughly two and a half to three years ago, but the team deliberately prioritized building a technically excellent, modern foundation first before layering on backward compatibility.
This sequencing meant Deno's core design decisions were never compromised to accommodate Node patterns. NPM support was added as a purely additive change on top of an already mature runtime, which is why CommonJS code can be consumed but not authored — the developer-facing surface remains modern JavaScript and TypeScript throughout.
00:24:35 - Deno's Security Model and Supply Chain Protection
Luca walks through Deno's permissions system using a concrete example: attempting to read /etc/passwd prompts the user for permission, much like a browser asking for location access. Permissions are denied by default and can be granted with fine-grained flags like --allow-read=./ or --allow-net.
The key revelation is that this model now extends to NPM packages. Any imported module that attempts file reads, network requests, or subprocess execution must pass through the same permission gates. Ishan frames this as putting your entire dependency tree in a sandbox where you control exactly what it can do — a direct answer to the growing threat of supply chain attacks in the NPM ecosystem, which Node has no native mechanism to prevent.
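The permission flags summarized above look roughly like this on the command line (the file name is illustrative):

```shell
# Denied by default: the first file read prompts or errors.
deno run main.ts

# Grant read access to the current working directory only:
deno run --allow-read=./ main.ts

# Grant outbound network access, and nothing else:
deno run --allow-net main.ts
```

The same gates apply to code imported from npm: a dependency deep in the tree cannot read a file or open a socket unless the corresponding flag was granted.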
00:33:54 - Migration Path and Node Compatibility Details
Luca outlines two target audiences: developers building greenfield Deno apps who want access to existing NPM libraries, and teams porting Node applications. He shares an example of someone converting a three-year-old Node project by simply updating specifiers to npm: prefixes with minimal other changes.
An audience member named Bro Nifty asks how the compatibility layer works internally. Luca explains that Deno does not embed Node; instead, nearly everything is a JavaScript wrapper around native Deno APIs that translates options, converts between promises and callbacks, and maps Node's event-based patterns to Deno's simpler interfaces. Deno even runs Node's own test suite against its compatibility layer to verify correctness, and supports Node native add-ons through N-API (Node-API).
00:44:22 - TypeScript Support and Version Tracking
The discussion turns to whether developers could type-check with standalone TSC and then run in Deno. Luca clarifies that deno check uses the canonical TypeScript compiler internally, typically trailing official releases by only two to three weeks — and same-day on the Canary channel.
He distinguishes this from Deno's fast type-stripping path used during deno run, which is powered by SWC for speed and occasionally lags a few weeks on new syntax. The practical takeaway is that developers are unlikely to encounter meaningful gaps between Deno and the latest TypeScript features, making a split type-check-then-execute workflow unnecessary.
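The two paths Luca distinguishes correspond to separate subcommands (a sketch; the file name is illustrative):

```shell
# Full type-check using the bundled TypeScript compiler:
deno check main.ts

# Fast path: types are stripped (via SWC) but not checked:
deno run main.ts

# If you want both, chain them:
deno check main.ts && deno run main.ts
```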
00:47:04 - Performance: Why Deno Runs Express Faster Than Node
Luca makes a bold claim: Deno executes Express faster than Node does, even through the NPM compatibility layer. He attributes this to Deno's native HTTP API being roughly two and a half times faster than Node's, which means even with compatibility overhead the result is still faster.
The team's optimization philosophy centers on profiling real applications — including Deno's own websites — rather than chasing micro-benchmarks. Luca highlights "Fast Streams" as a class-level optimization that bypasses JavaScript-to-Rust data copies when piping between native streams, delivering outsized gains for the static file serving, uploads, and downloads that dominate real-world workloads.
00:56:41 - WinterCG and the Future of Server-Side JavaScript Standards
Anthony Campolo asks about WinterCG, the W3C community group working to standardize APIs across server-side JavaScript runtimes. Luca describes the challenge: browser specs like Fetch include behaviors — CORS, privacy checks — that are irrelevant on servers, but there are multiple valid ways to diverge, and every vendor must agree on one path.
The core difficulty is not disagreement but coordination: weighing trade-offs across implementations, ensuring zero user breakage, securing engineering commitment from all vendors, and writing shared tests. Luca emphasizes that a specification only has value if every major runtime implements it, so consensus must be unanimous rather than majoritarian — which explains why progress is slow but ultimately durable.
01:05:10 - Closing Remarks
Luca thanks the audience and directs listeners to deno.land and deno.com for more information, highlighting that Deno is actively hiring across SRE, systems engineering, Rust and C developers, DevRel, and engineering management roles — bucking the industry trend of hiring freezes.
Ishan wraps up the Thanksgiving-week episode by thanking the community for its ongoing participation since the show's early Clubhouse days. He encourages listeners to follow the speakers, subscribe to the JavaScript Jam newsletter for the upcoming schedule, and continue raising hands and joining conversations in future episodes.
Transcript
00:00:01 - Ishan Anand
Hello, hello. Hi, everyone. We're just getting started. Let me bring Anthony up here. There we go, because he helped get this on our radar. Matt, thanks for kicking us off, and Luca, thanks for being here. I'll just get us started a little bit, and then we'll get into today's topics. Welcome, everyone, to JavaScript Jam, or as we like to call it, JavaScript Jam Live. JavaScript Jam Live is an open mic night for anything JavaScript- and web-development-related. We definitely like it to be as audience-driven as possible, where if there's a topic or a question you want to ask, you're welcome to come up and ask it to our speaker and our panel. What we've been doing more recently, because we've had a lot of enthusiasm and engagement around it, is inviting a speaker from the ecosystem to come and give us their view on something that's current and new in the ecosystem for everyone to learn more about. And we generally try to still bring in that audience participation element to it. So at any time, feel free to raise your hand if you want to comment, ask, or add a question to it.
00:01:27 - Ishan Anand
Before we jump into that, I'll just remind folks: you can subscribe to our newsletter if you go to javascriptjam.com, and there's a button right at the top to subscribe to the newsletter. What we highlight in the newsletter is our upcoming list of speakers that are coming. In fact, I believe we've got the calendar nailed down for the rest of the year. You can take a look at that and see some exciting guests coming up. And then we also highlight a couple of discussion topics to get conversation going. But of course, as I said, people are always welcome to ask a question and raise a hand. For today, we have Luca from Deno. He's actually a repeat guest. He was on JavaScript Jam a few weeks ago talking about the Fresh framework and Deno as well. He's back here today to tell us more about a really big announcement they had the other week in terms of compatibility with npm. But first off, before we jump into that, Luca, welcome.
00:02:34 - Luca
Yeah, hey, thanks for having me.
00:02:36 - Ishan Anand
Yeah, absolutely. Before we jump into the announcement, and the big part of the new release, can you just tell folks a little bit about yourself and then remind folks who didn't listen to the previous episode that had Deno what Deno is and why they should care about it?
00:03:07 - Luca
Yeah, totally. So I'm Luca. I work at the Deno company on Deno. I also do web standards work at TC39, that's the standards committee that standardizes JavaScript, and also at the WHATWG and W3C, working on various web specs. So Deno is a new server-side JavaScript runtime similar to Node or Cloudflare Workers, though closer to Node, where you can essentially execute server-side JavaScript code. And it's not just a JavaScript runtime, it's also a TypeScript runtime. It's really built to be very modern and embrace web APIs, embrace TypeScript, embrace all these great things that JavaScript has gotten over the last 10 years that Node never had the ability to use right away. It's really interesting because Deno was built by the same people that originally built Node, Ryan Dahl and Bert Belder. Ryan conceived Node originally, and Bert Belder essentially wrote Node for Windows. So they're back building a new JavaScript runtime with all the lessons they learned from Node. And that's Deno.
00:04:22 - Ishan Anand
Yeah, yeah, that was a really good summary. There's a talk where Ryan announced Deno, and I think the title is something like Learning from the Mistakes of Node, that I encourage people to check out if they want to learn more about it.
00:04:39 - Luca
Things I Regret about Node, I think.
00:04:41 - Ishan Anand
Yeah, thank you. That was the name of it. We should put that link up or throw it up in the next newsletter. One of the challenges, though, was that Node had a large third-party ecosystem, npm, of plugins, because Node is the 800-pound gorilla in the server-side JavaScript ecosystem. Lots of people had written tons of modules for npm, but that didn't work with Deno. So that takes us to the big announcement. You guys just launched version, I think, 1.28. Do you want to just start with the headline feature of what 1.28 released?
00:05:29 - Luca
Yeah, so a little bit of backstory on this. Deno previously supported ES modules, URL-based imports for ES modules, so exactly what the browser supports. If you were to write plain browser code, you could import ES modules using URLs. But Deno did not support importing from npm like you're used to with Node. In Node, you can specify some packages in a package.json file and then import those using either ESM or CommonJS. Deno 1.28 added support for importing npm modules. So essentially you can import all the code that is already written for Node, that people have already written, that's already out there, and import that into your Deno applications. And it's done in a way that I think is easier to use, it's arguably safer, and it's much nicer overall than what you have in Node. You don't need a package.json, you don't need to have a local node_modules folder, and you have it constrained by Deno's permissions model, which we haven't talked about, but you can check out on our website. It essentially allows Deno, or allows you, to specify what system operations your Deno process is allowed to perform.
00:06:46 - Luca
So you can block it from doing writes or reads to your crypto wallet, or making outbound network requests, or block all these kinds of things, which is great with all these supply chain attacks that we're seeing in the npm ecosystem, where npm packages get compromised and they start doing things that they're not meant to. With the permissions model, that problem goes away a little bit because it allows you to constrain what packages can do.
00:07:24 - Ishan Anand
Yeah, I want to actually get into all those things you just mentioned, because there are some key differences. But before we do that, I want to just take a step back and focus on the why and the reaction from the community. I saw some comments on Twitter and Hacker News. I got this one from Hacker News: there's a concern that Deno is... People embraced Deno's rejection of the mistakes of the past, which included npm, and there's a sense that Deno is trying to appeal to the masses. One comment in response to this was, "I'm slightly disappointed that the vision of leaving Node.js and npm behind ended up failing. The last year ended up being a bunch of concessions to make Deno more appealing to the masses, like disabling type checking on run by default." That's in one of the Hacker News comments that came up, and you can find a few more like this. What, A, do you feel was the overall reaction you saw from the community to this announcement? And then B, how would you address people with comments like the one I just brought up?
00:08:46 - Luca
Yeah. So I think, to answer your first question, what was our perception of the reaction? The overwhelming majority of people, like 90, 95, 98 percent of people I've talked to, have really been very, very happy with this, and they're very excited about it because it allows them to use Deno for more things that they couldn't previously. It allows them to adopt Deno progressively rather than having to do this whole rewrite before they can start using Deno, because they can use packages that they already use in the Node ecosystem. And just to put some numbers behind this, we've seen incredible growth since the last release. Our releases usually increase the user base by quite a bit every time. This one's insane. We have never seen growth like this ever before. It's maybe comparable to the original 1.0 release, but the amount of people that are starting to use Deno, even just... it's been a week. Not even, or actually, no, slightly more. Slightly over a week since we released this. The user base has grown very, very significantly.
00:10:04 - Luca
And we see this all over the place. There's more people in our Discord. There's more people active on Twitter. There's more people asking questions on Stack Overflow. More people are viewing our website and our manual. It's really one way, and that way is up. The amount of...
00:10:24 - Ishan Anand
Any numbers?
00:10:29 - Luca
I don't know.
00:10:33 - Ishan Anand
Okay.
00:10:34 - Luca
I don't think we have public numbers, but I think the latest public number that we published is... Let me just check real quick. On our website, prior to this, we were at 200,000 monthly active users minimum. This is an estimate. We have some data that we base this on, but Deno does not phone home when you use it, so we can't have perfectly accurate numbers. This is derived from things like visitors to our website and people downloading the standard library, things like that. But there's been very significant growth here. I wouldn't be surprised if, the next time we update this number, it'll be 50, 100, 200 percent more.
00:11:27 - Ishan Anand
Oh, wow, that's really great. Okay. And that's kind of what I would expect because of how big and massive the npm ecosystem is, and compatibility with the existing solution whenever you're entering a market is really important. If you go to the classic Crossing the Chasm or Diffusion of Innovations, one of the key aspects to measure product success, or to predict it, is compatibility with the existing workflow of your target customer. So that makes a ton of sense. From first principles, what would you say to people who are, I don't know, I hate to use the term old curmudgeons, and Deno isn't even that old, but are just like, hey kids, my lawn was great before you added these npm modules to it?
00:12:24 - Luca
Yeah, I have two answers to that. One of them is, there's nothing forcing you to use them, right? That's the obvious answer. We did not remove anything. This was a minor release. Deno still works great with URL imports. I think there are still very many use cases where URL imports are much better than npm imports, especially working with private registries. I think URL imports can be much better if you're working with internal company systems. If you're working with code that's generated on demand, I think URL imports are significantly better. But I think this is great for you even if you don't want to use this, or if you don't plan to use this, because it makes the Deno user base larger, which means there's going to be more native Deno modules for you to consume.
00:13:12 - Ishan Anand
There's...
00:13:12 - Luca
There's going to be more. The more users there are, the more bugs those users find, the more reasons we have to improve features. Even if you don't use this feature, everyone else who uses this feature is also going to use Deno, and it's going to make the Deno ecosystem, the Deno-specific ecosystem, grow, which is great for you if you don't use npm specifiers. So I think even if you don't use npm specifiers, this is still a great week for you.
00:13:44 - Ishan Anand
Yeah, that's great.
00:13:45 - Luca
And I think on the matter of...
00:13:47 - Ishan Anand
Oh, go ahead.
00:13:47 - Luca
Yeah, I think on the matter of what people often phrase as us compromising our principles, I understand where they're coming from, but I don't think that's really how you should look at this. We're not compromising any principles here. Deno still is ESM-first. It's actually still ESM-only. There's no way for you to write CommonJS code specifically for Deno. Sorry, I keep trying to mute myself and then it doesn't work. But yeah, there's no way to write CommonJS code that's specifically for Deno. You import modules which may be written in CommonJS and we'll figure that out, but in the end, the application code you write is still modern JavaScript or TypeScript. It still uses web APIs, it uses ES modules, it uses everything that we've built. You can use all native modules. Nothing's going away here. And I think some people are really worried that this is going to cause everyone to switch over to npm, to using npm for everything. I really don't think that's the case. I think this is going to be a great addition.
00:15:00 - Luca
It's like a back door if you need to use a module that there's no Deno-native alternative for. But I think people will always prefer Deno-native alternatives. And yeah, I don't think those are going away, so I wouldn't be too worried.
00:15:17 - Ishan Anand
Can I ask if this might have been a problem of messaging and communication with the community? Has this always been planned and maybe you guys hadn't told people? I know this was previously behind an unstable flag, but has this always been part of the plan, or was this more organic, that you needed to add npm compatibility and maybe that messaging didn't get through?
00:15:44 - Luca
Yeah, so I think there were always some plans for doing some Node compatibility. The system that this is built on, the Deno Node polyfill that's built into the Deno standard library, started being worked on maybe two and a half, three years ago. So it's been going on for a while. It's been a long-running project. But what we really tried to focus on first, before backward compatibility, was getting our foundation really, really strong, building a technically excellent system that's very modern and is not weighed down by old baggage. And then only after we thought we'd arrived at something pretty great, pretty excellent already, only then did we figure out how to add backward compatibility here with existing systems. That way we don't have to compromise the design of Deno. We didn't compromise any of Deno's design decisions for this. This was something we added as an additive change after we already had this pretty excellent runtime.
00:17:01 - Ishan Anand
Yeah, that makes sense. Okay, so the timing is right, with the foundation in order. Well, let's go back to the differences that developers should be aware of, and you hit on a lot of these earlier. I just want to go maybe the next level deeper on them. First of all, you're not using package.json anymore. If you're a developer used to the Node ecosystem, you specify your modules in package.json. Now you're importing them in your file through these import statements. Then there's no npm install step, and there's no node_modules directory either, correct, by default? That's my understanding. I have to assume what you're basically doing is storing those in some more global cache. Then the logical question is what happens if you're deploying something across multiple instances. And I saw another comment to this effect that was like, if I'm horizontally deploying, an important part of the step is that I can just replicate that thing out. And then if every single one of my instances has to redo the work of npm install and populating modules, rather than me just pushing it, is that going to be less efficient?
00:18:22 - Ishan Anand
So let me pause there and see if there are any other key differences that you'd want to highlight. Then we can get to the security ones in a second, that's probably the other one. And then, B, your answer to how you might overcome the issue we're talking about in terms of deployment when you've got no more node_modules folder and how that works.
00:18:39 - Luca
Yeah, I think there are really two parts to this. One is the importing, and the other is the installing. For the importing part, we don't have package.json. Instead, we let you import or specify the npm package name directly in the module specifier. So you import Express from npm:express, and you might ask, okay, how do I specify versions? Well, you do npm:express@4, for example. That's as if you specified Express version 4 in your package.json, and you can have any sort of semver range after that just like you could in package.json. So that's the importing part. For the install part, it sort of works similar to pnpm, where there's a global module directory. The difference is that pnpm creates this local node_modules directory that is essentially symlinked to your global directory. It does some crazy stuff, which is very cool, but it causes you to have this local node_modules directory, which is not great. So what we do is we have a virtual node_modules directory that doesn't actually exist, but that the runtime is aware of.
00:19:58 - Luca
And when it tries to resolve a module through require, for example, it'll use this virtual modules directory. This is great because you don't have to create the folder. That's faster than if you had to create the folder. There's less file system operations going on, there's less junk lying around on your disk, which is great. And on the question of how you make sure you don't have to install every time you start your program, it's the same solution that we have with URL imports. With URL imports, we cache modules on first use and store them in this directory. You can specify where this directory is. If you want to have this directory inside of your repository, you can specify an env var to point to your local directory and then check it into your repository. Or you call deno cache on your file, and that will do the pre-caching without actually executing it. That's another option. If you're using Docker, you can do the cache step inside of the Docker build, and then it doesn't have to do it at runtime.
00:21:17 - Luca
Essentially, you can still force it to do this npm install step, the installation part of this, beforehand if you want to, but you don't have to. If you're locally developing, you don't ever have to run any install steps. It'll all happen automatically at runtime when the package is imported.
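The Docker pre-caching Luca describes could look like this as a Dockerfile sketch (the image tag, entrypoint file, and permission flags are illustrative):

```dockerfile
FROM denoland/deno:latest
WORKDIR /app
COPY . .
# Resolve and download all dependencies at build time,
# so nothing is fetched when the container starts.
RUN deno cache main.ts
CMD ["deno", "run", "--allow-net", "main.ts"]
```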
00:21:36 - Ishan Anand
So if I were to repeat back what you've said and translate it differently for the developer programming, they don't have to worry about this situation. For them, there's no node_modules directory and their life is simpler and easier and everything's cached. Then for your DevOps team, or managing ops, or if you're on a platform that takes care of this for you, there are sufficient hooks to get back the same node_modules-like directory behavior if you need to, whether that's sticking it in your Docker container or whatever. Is that fair to say? Okay. Similarly, suppose as an individual developer, rather than somebody on the DevOps side, another scenario that sometimes comes up, and it's sometimes embarrassing to say, sometimes you're in the middle of it, you're like, I need to just go into the module and patch the damn thing, or just change how it works because I'm trying to debug something. Presumably you can override the cache as well to make that happen. And you mentioned it was a symlink to the global cache. So do you just create the appropriate file? How do you actually mechanically make that work? Do you have to go to the global cache, get the file, modify it there?
00:22:49 - Ishan Anand
Or can you do it just for my existing project and it will treat the local one without having to follow the symlink?
00:22:56 - Luca
Yeah, I'm actually not completely sure on this right now. With URL imports, we have deno vendor, which is a subcommand that you can run with a URL, and then it'll put it into a local directory, generate an import map for you to remap the remote specifier to that local specifier, and then you can modify the local specifier in the vendor folder. I don't know if that's implemented for npm specifiers yet. I'd have to go check with David on our team. If not, then that's probably how this is meant to work, and then that will be coming soon. I don't know if that is there yet. I've never tried this, so I don't know.
00:23:36 - Ishan Anand
Okay, but it sounds like definitely you've got facilities. I wasn't aware you had a facility for that for non-npm, so that makes sense that there would be a workflow for handling that. Then one of the other crucial differences, and I guess I want to treat it separately because it's not just about how you use it, it's why it might actually be better to take your existing npm-dependent thing and run it in Deno rather than run it in Node, is the security story. I think what we should do is first back up and explain to folks that the way any code works in Deno, before it tries to access the file system or any critical resource, is it's got a very strong permissions model, whereas in Node it doesn't. And then you extend that to npm, so maybe give folks just that background on how that security model of Deno is better, and then we can dovetail into why this is now better for npm.
00:24:35 - Luca
Yeah, so the best thing, I think you could compare this to... Let me actually give an example of how this works. What you can do in Deno is create a file called, I don't know, mod.js or main.js, and you can call Deno.readFile("/etc/passwd"). If you executed the equivalent in Node, it would... well, it wouldn't work because there's no Deno namespace. But in Node, if you did fs.readFile, it would just read the file and there would be no checks there. It would just work. In Deno, if you run this, it would error out or give you a prompt saying Deno is trying to access /etc/passwd. Do you want to allow this? This is sort of similar to what you do in browsers, right? Websites can't just send you notifications. They have to give you this little notification prompt, and you have to click accept before they're actually allowed to send you notifications. Or if they're trying to write a file to disk using the File System Access API, there'll be a little prompt that asks if you want to give this website access to write to your disk.
00:25:37 - Luca
And then you can press yes or no.
00:25:38 - Ishan Anand
Or get your location is probably the most common.
00:25:41 - Luca
Or get your location. Exactly. Yeah, permissions like that. Deno has the same sort of permissions model. It's denied by default, and you opt into certain permissions. This can either be done with prompts at runtime. This is great for CLIs where maybe you're not too sure what it's going to do, so you just execute it and then it'll prompt you every time it tries to do something, and you can decide if you want to allow it or not every time. For larger applications, or for applications that you wrote yourself, that usually doesn't make sense. So what you can also do is specify flags like --allow-read, which would give access to the entire disk. Or you can do --allow-read=./, which would give access only to your local working directory. --allow-write, --allow-net, --allow-env, --allow-sys. There's a bunch of different allow flags to grant access to different things with various levels of granularity. And that essentially means that a plain Deno runtime with no permissions granted has essentially no access to your system to do anything. It can read the time and maybe your language, and that's about it.
00:27:00 - Luca
It can't read environment variables, can't read things from disk, can't make outbound network requests, can't start arbitrary subprocesses, nothing like that. It's relatively secure. So the way this extends into npm is that you don't have any of this in Node, right? But in Deno, when you import an npm package and that npm package tries to read something from disk, it has to abide by the same permissions model. So you will get a prompt, or you have to specify the flag to grant this module access to do that certain thing. This really allows you to lock down exactly what your program is allowed to do. And then if a program tries to do something that you didn't expect it to, or that you didn't allow it to, it'll just not work. It can't steal your crypto passwords or whatever else people store on their disks nowadays, because Deno will just not allow it. There's no risk of leaking information out here because it just prevents access right at the operations layer.
00:28:09 - Ishan Anand
Yeah. One thing I want to highlight: you picked a really compelling example, but I'm not sure folks without a Unix background caught it. You had that example of reading /etc/passwd, which is the password file on a Unix system. So you write a Node program, or you run anything from npm in Node, and it could read that file, if the current user has rights to it. /etc/passwd is probably locked off depending on the current user's environment, but in theory a Node program can read everything that user can.
00:28:40 - Luca
/etc/passwd doesn't even store the password anymore nowadays, I think. But yeah, people have files with secrets, right? There could be an .env file somewhere, or your bash config file, which has AWS secret keys in it. You don't want to leak that.
00:28:58 - Ishan Anand
Yeah, there's this concept of a supply chain attack: if you think about that massive node_modules graph, all the libraries you import. One library imports another library, which imports another library, and that chain just keeps spiraling out. That tree gets larger and larger. How do you know that none of those have potentially malicious code, or code that's going to do something you don't expect and access an inappropriate file? With Node, you didn't really have a way to take your entire script, put it in a box, and say, okay, it can do only these things. It's kind of like "mother, may I" before it ever does anything. And basically what we're saying now is, with Deno, you can take your same npm code and put it in a secure box where you decide exactly what it does, and it can't do anything else. Is that a decent simplification of it?
00:29:59 - Luca
Yeah, that's exactly right.
00:30:01 - Ishan Anand
Yeah. So when I think about that, I think of a lot of, let's call it the DevOps persona and ecosystem. Node is obviously very popular in the web dev space, but I think of a more DevOps-type persona or use cases. That seems extremely compelling. Do they already have a better solution for it, or is this direct competition to that, or is it too early to see if that's the persona that we're most excited by? That to me seems more compelling than even just serving websites and web requests, in a sense.
00:30:46 - Luca
Yeah, I think it's always a combination of things. I think most people don't use Deno for any one specific feature it has, but rather because of the combination of features it has: the security, the built-in tooling, the general niceness of use due to the modern APIs and everything. I think there are definitely people that bet on Deno specifically because of the security model. One of them, for example, is Slack. Their new app platform is built on Deno, and they're really building on the security model and using that to make sure that your Slack apps don't make network requests to systems they're not allowed to. So I think there are definitely people there that come specifically for the security, but I think for most people it's a combination of things. It's the security, the ease of use, and all the built-in tooling.
00:31:42 - Ishan Anand
Yeah, that makes sense. I can vouch for that. At our company, Edgio, basically three other CDN-like infrastructure companies came together: Edgecast, Limelight Networks, and Layer0. Each of them had their own edge solutions, and one of them was Deno-based. One of the things right now is basically figuring out how to harmonize it all. But the Deno camp very much was like, the security pieces are a lot easier because it's easier to lock it down than it is for Node. Let me pause because we're halfway past the hour, and so at the halfway point we do a little station break. Remind people: you are tuned into JavaScript Jam Live. This is an open mic. We try to make it audience-driven, and if you're a beginner or an expert, we want to hear from you. Feel free to raise your hand and ask questions of our speaker or anybody up here. In addition, I encourage you to go to javascriptjam.com, and you can hit subscribe and get our newsletter, which will tell you what we're going to be talking about next week as well as our schedule for the rest of the year.
00:32:51 - Ishan Anand
If you're getting any kind of value from anyone up here, please feel free to click on their profile and give them a like. So as I said earlier, this is kind of the audience-driven portion. Again, feel free to raise your hand. It's at the bottom of the screen, and ask a question of our speaker, Luca from the Deno team, here today talking about the new 1.28 release, which brings npm compatibility to the Deno ecosystem. Stable, I should say. It was in unstable form for a while. So while we're waiting for folks to raise their hand or add questions, the next question I want to ask is: in practice, how much work do you think it is for somebody to migrate an existing Node-based application to Deno with this? Are all their problems solved? Do you have examples of people who've gone through this already and said it was super easy, or they still need other things? How would you describe that process?
00:33:54 - Luca
Yeah, I think there are really two types of people we're targeting with this. One of them is people that are building greenfield applications, but they don't want to lose out on the existing Node ecosystem. Their applications are either already written in Deno, or they're writing new applications and importing Node modules. For those people, I think 99 percent of them are very happy with this and it does everything they need. And then there's the group of people that have existing applications in Node that they would like to port over to Deno. For those, there is a significant chunk of applications that will run in Deno with very minimal modifications today. There was one example from Wes Bos, who did a little YouTube video where he took one of his Node projects from three years ago, updated all the dependencies, changed the specifiers to use npm: rather than what they were previously with Node, and it ran first try. So wow, it's definitely possible for this to just work out of the box. And I'm not saying it's going to work like this for everyone.
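The specifier change Luca mentions can be sketched like this. The package name and version range here are illustrative, not taken from Wes Bos's actual project.

```javascript
// Illustrative only; the package and version range are examples.
// In Node, a bare specifier resolved through package.json and node_modules:
//   import express from "express";
// In Deno 1.28+, the same dependency through the npm: scheme,
// with no package.json and no node_modules folder:
//   import express from "npm:express@4";
```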
00:35:09 - Luca
There are going to be applications where that's not the case. There are going to be applications where you're going to need to do a little bit of manual tweaking here and there. But I think overall, for most people, this is going to work pretty well. And if it doesn't, please let us know. We're very eager to fix all the bugs that come up with this. We're very bullish that by the first half of next year, we're going to be able to run essentially all applications that you throw at it with no problem. Even things that you wouldn't expect to work in Deno, like Node native add-ons using N-API. These are very serious Node-specific pieces of code, right? They will run using Deno's compatibility mode. Deno has support for Node native add-ons even though we're not Node. There's very serious compatibility going on here.
00:36:11 - Ishan Anand
Oh, wow. So you're going to have a native emulation layer.
00:36:20 - Luca
It's not quite an emulation layer, but yeah, we can run native add-ons.
00:36:25 - Ishan Anand
Oh, wow. Okay, that's interesting. But before we get to that, Bro Nifty raised his hand. I want to give him a chance to either comment or ask a question.
00:36:38 - Bro Nifty
Yes, Ishan. Thank you, Luca. Very, very smart man. You and Ryan over there are absolute geniuses. I just want to ask a quick question. It's actually kind of a compound question, or two parts. One, as far as my workflow, if I wanted to. Let me actually ask the last question and then get to the first one. The last question would be: could I, hypothetically, in a workflow, type-check everything in Node and then just run it in Deno? And then I guess the other part of the question would be: does Deno basically just run Node with its own runtime, sort of virtualizing it, or how does that work exactly?
00:37:20 - Luca
Yeah. So on the first question, I'm going to see if I understood this correctly. I think you're asking if you can do type checking with TSC in Node but execute the code in Deno. Is that right?
00:37:33 - Bro Nifty
Yes. Yes.
00:37:34 - Luca
Okay. Yeah. So the answer is yes and no. You could technically probably type-check this with TSC, but you probably shouldn't. And you'd probably need at least TypeScript 5.0, which isn't actually released yet, because TypeScript, in its current version, does not support importing other TypeScript files with .ts extensions. It's crazy frustrating. But Deno enforces that you do that. And TypeScript also does not support npm specifiers out of the box yet. But depending on what your code looks like, and depending on whether you use an import map, this may be possible. You don't really need to do this, though, because Deno has deno check built in, which can do the type checking for you. It uses TSC under the hood, but modified slightly to work with the web-standard things that we do, like enforcing file extensions or the npm: specifiers. So that's question number one. Then question number two: how do we actually do this npm compat? It's a great question. No, we don't actually embed Node. There's one little piece of Node inside of Deno, which is the implementation of Node streams; it's the same between Deno and Node.
00:38:58 - Luca
Everything else is a novel implementation. The Node compatibility layer is actually written in JavaScript, and it essentially is a layer that exists on top of the Deno APIs. So if you call fs.readFile from the Node compatibility layer, under the hood this will actually call Deno.readFile and translate everything in the right way: translate the options bags, translate promises to callbacks, callbacks to promises, whatever, all that kind of thing. There are some cases where there's a little more going on. For example, with native add-ons, those can't be implemented in JavaScript. Those are implemented in native land, and there's a little more going on there. But pretty much everything in Node compatibility is just a wrapper around an existing Deno API.
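A minimal sketch of the wrapper pattern Luca describes: translating a promise-based native API into Node's callback-style fs.readFile. The `fakeRuntime` object and `readFileCompat` name are invented stand-ins, not Deno's actual internals.

```javascript
// Stand-in for a runtime's promise-based native API (think Deno.readTextFile).
// Purely illustrative; not Deno's real code.
const fakeRuntime = {
  async readTextFile(path) {
    const files = { "hello.txt": "hello" };
    if (!(path in files)) throw new Error(`NotFound: ${path}`);
    return files[path];
  },
};

// The compat shim: accept Node's (path, callback) shape, call the
// promise-based API underneath, and route the result back through
// the error-first callback convention.
function readFileCompat(path, callback) {
  fakeRuntime.readTextFile(path).then(
    (data) => callback(null, data),
    (err) => callback(err, null),
  );
}

// Demo call through the shim, captured as a promise so it can be awaited.
const demo = new Promise((resolve, reject) => {
  readFileCompat("hello.txt", (err, data) => (err ? reject(err) : resolve(data)));
});
```

Real glue code also has to translate options bags, error codes, and event-emitter APIs, which is why, as Luca says next, a generic util.promisify-style transform is not enough.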
00:39:50 - Bro Nifty
That's cool. It's kind of like a function that calls another function as far as converting, vis-a-vis promises and callbacks. Are you guys using something like util.promisify under the hood for that, or is there something else?
00:40:02 - Luca
No. We'd love to do that, but unfortunately the APIs are not similar enough to be able to do something like this automatically. It's a lot of handwritten glue code that transforms one into the other. For example, the Node TCP API is significantly different from the one that Deno has. Deno's is much simpler and uses promises everywhere, whereas Node uses event listeners and all this crazy stuff. Same with the HTTP API. So there's a lot of manual glue code going on. The problem there is always, how do you ensure that these actually behave the same way? How do you make sure that Deno's wrapper code behaves the same way as Node's implementation? And the answer is that we actually run Node's tests. Other than the streams implementation, we copy Node's tests into Deno's repository and run Node's actual tests against Deno, so whether we're compatible with Node becomes a matter of whether we pass Node's actual test suite.
00:41:17 - Ishan Anand
That's brilliant. That's really nice. Bro Nifty asked some of the questions I was going to ask, so that's great. I have a question for you to follow up, though, which is: you asked about running TSC and then actually evaluating the code or running the code in Deno. What was the use case that led you to wanting to do that?
00:41:37 - Bro Nifty
It is a far-out use case that I will never touch, but just out of pure academic curiosity, someone was just saying that. That the...
00:41:53 - Luca
Excuse me.
00:41:53 - Bro Nifty
Oh my God. Ryan.
00:41:55 - Luca
Luca.
00:41:56 - Ishan Anand
Deno.
00:41:57 - Bro Nifty
That Deno does not support type checking at the most explicit, deep, generic level, like the most cutting-edge. They're supporting up to 80 percent of the type checking, but not 100 percent. The Microsoft team working on TSC is developing that, and they can't keep up with the engineering hours to convert it all. So they're just kind of happy doing about 80 percent of the type checking. So that was, yeah, just academic.
00:42:28 - Ishan Anand
You wanted the canonical, official type checking. Okay. I actually wasn't aware of that. I don't know, Luca, if you have any thoughts on that 80 percent.
00:42:36 - Luca
Yeah, let me actually clarify that, because I think that's maybe also slightly a miscommunication. Deno actually does use the canonical TSC type checker internally. When you call deno check, this uses TSC internally. It's TSC, but not the exact TSC that you would use as an end user. What we actually use is the TypeScript. I forget what it's called. I think Host Something. I don't know, Host System? I forget what it's called. It's essentially the library version of TypeScript that allows you to plug in a custom host. But the actual type checking itself occurs exactly the same as it would in TSC. And we usually trail TSC by maybe two weeks or three weeks at most. We trail them in versions, not feature-wise. So maybe where the confusion comes from is that Deno also has type-stripping support that doesn't use TSC, where we can very quickly strip out your types and convert the code to JavaScript for execution. That's why, when you call deno run, it doesn't actually do type checking; instead it just strips out the types as quickly as possible and executes the code. That part is done using SWC.
00:43:54 - Luca
And that can sometimes be delayed on syntax support from what TSC supports by maybe four or five weeks, but usually we get it pretty quickly. Usually, when we ship a new TSC version with deno check, we'll also support all the same syntax features in our transpiler at the same time.
00:44:22 - Ishan Anand
Interesting. So if I understand that correctly, then even if you did what the original premise of the question asked, which is you ran it in the standalone TSC and then evaluated it in Deno, it's still being run through SWC and the type stripping is still happening. Am I correct? So it wouldn't actually solve that problem.
00:44:48 - Luca
Okay, yeah, that's right. But I don't necessarily agree that this is a problem that needs to be solved, because I don't actually think this is a problem. Essentially, by the time you can pull the new TypeScript version from npm, the next release of Deno will contain that TSC version. So I don't think there are scenarios where you're really unable to use new TypeScript features in Deno in any reasonable time. The other thing is, when TypeScript does a beta release, for example, we'll have PRs to have it all integrated for the beta releases already. Then the day the actual release goes out, we have that merged and we have it in Canary that same day. So if you run Deno Canary, which is our nightly build, you'll have the new TSC version the same day that the standalone version ships. So yeah, I don't think people are really going to run into this.
00:46:06 - Ishan Anand
Basically you guys are very fast followers: two weeks for the official release, and Canary the same day. So it's unlikely. It's not like a security issue comes out, like Log4j, and you have to patch it that morning. This is just about not being able to use a feature for two weeks, which, at the pace of the JavaScript ecosystem, maybe feels slow, but not that slow. That makes sense. Let me pause, Bro Nifty. Do you have any other follow-up questions? Because those were really great.
00:46:40 - Bro Nifty
No, thank you.
00:46:41 - Luca
Okay, yeah, thanks for your questions.
00:46:44 - Ishan Anand
Yeah. And again, as I said earlier, feel free to raise your hand. We've still got another 12 minutes, and we'd love to bring anybody else up to the stage to ask a question. The question this raised in my head, which we hadn't gotten to yet, is about this compatibility layer. You actually asked how it works, which is one of the things I wanted to get to, but the other element of compatibility layers is performance. Do you have any stats or sense of whether there's a performance hit from the compatibility layer?
00:47:18 - Luca
There's not. Deno can execute Express faster than Node can execute Express. Yeah, it's faster, not slower.
00:47:28 - Ishan Anand
Even for npm. Yeah, go ahead.
00:47:30 - Luca
Yes. You can import npm:express and run wrk or ApacheBench or whatever HTTP benchmark you want against it. The exact same code in Deno executes faster than Node every time.
00:47:43 - Ishan Anand
And I think actually we covered this when you were a guest previously. Remind me how that works and why that's possible.
00:47:52 - Luca
So there are a couple of things going on. First of all, Deno's native HTTP API is just significantly faster than Node's. Our native API is, I think, about two and a half times faster than Node's. Even if our compatibility layer is a 10 or 20 percent hit on that, it's still going to be faster. The first thing is just that we have much faster native APIs because we really optimized these APIs very aggressively. The other thing is that we put a very significant amount of effort into making the Node compatibility layer very low overhead. The most naive implementation of a compatibility layer on top of Deno's APIs that you would write in userland would probably not be the most excellent implementation and probably not the fastest. We spent a significant amount of time optimizing this implementation and optimizing our internal APIs to be good at the use cases that often arise from using this compatibility layer.
00:49:23 - Luca
For example, if Node on all requests always adds a Connection header with a specific value, we can specifically optimize that code path in our native APIs because we know this is a code path that's going to be hit a lot. So there are those kinds of things we can do to improve performance there.
00:49:49 - Ishan Anand
How did you isolate or determine those critical use cases? Was it done empirically? Did you leverage the tests you mentioned earlier from Node? Did you run the performance benchmarks of those tests against Deno? How did you go about doing that?
00:50:06 - Luca
We don't really approach optimization that way. There are two pieces to this. One is trying to optimize a specific API and micro-optimize it. That's one way of doing performance, and a lot of people do performance that way. But I think most of our team agrees that that is not the best way to do performance, because, yes, you may have the fastest TextEncoder implementation or the fastest btoa implementation or whatever. And Deno has the fastest of both of those. But that's not really what people are going to be spending their time on when they're building applications. What we do instead is look at actual real applications. We run, for example, deno.com and deno.land, which are real applications written in Deno that use a lot of Deno APIs, use the Deno HTTP server, and we try to make those 20 percent faster. We performance-profile those, see what's slow, what things can be optimized, and then we optimize those things in the scenarios that we see in real applications. That's also why we're always very happy when people open issues that say, I have this specific use case in my real application and it's not as fast as I expect it to be.
00:51:15 - Luca
These are great because these are things that people are actually running into. These are actual performance things. One of the big things when we're doing performance is we don't want to optimize code that people aren't actually going to care about being fast. We really want to optimize things that a lot of people are going to use, where it will have an outsized impact if we do optimize it. Part of that is optimizing these real-world code paths. You'll see with a lot of optimizations that they're not small little optimizations that make a single API faster, but classes of optimizations. One of the ones I always love to mention is something we have called Fast Streams, which is easiest to explain with an example. When you're fetching a file from the network and writing that file to disk, the obvious way to do that is: you call fetch. It returns a response object that has a body, and that body is a readable stream.
00:52:19 - Luca
When you open a file in Deno, that gives you back a Deno file that has a writable stream on it. The obvious way to pipe those together is to call readableStream.pipeTo(writableStream), right? That's how most web devs would pipe data nowadays. So one thing we do is, when you call pipeTo on a stream where both sides are native code, we will perform the streaming operation inside of Rust rather than performing it inside of JavaScript. That prevents a bunch of copies of data going between JavaScript and Rust. These are things that from the outside might not seem like a huge deal, but if you think about it, most applications do a lot of streaming data around. If you have a static file server, that's essentially all streaming data. If you're downloading files or uploading files, or receiving files and piping them to S3, or downloading files and ungzipping them and then piping them up to S3, or whatever combination of those, there's a lot of streaming data going on.
00:53:29 - Luca
So if you can optimize this whole class of streams and cut out a huge chunk of performance cost there, it optimizes a whole bunch of real-world applications and has a much higher impact than if you make the btoa function half a percent faster, or even 50 percent faster, because how often do you actually call btoa?
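The pipeTo pattern Luca describes looks like this in user code. Both ends here are in-memory stand-ins so the sketch is self-contained; in Deno, when both ends are native (a fetch body and a file), this same call can be serviced inside Rust instead of JavaScript.

```javascript
// A readable stream standing in for e.g. a fetch response body.
const source = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("chunk-1"));
    controller.enqueue(new TextEncoder().encode("chunk-2"));
    controller.close();
  },
});

// A writable sink standing in for e.g. an open file's writable stream.
const received = [];
const sink = new WritableStream({
  write(chunk) {
    received.push(new TextDecoder().decode(chunk));
  },
});

// Pipe one into the other; the promise resolves once every chunk has
// been written. Deno's Fast Streams makes this hop avoid copying data
// back and forth between JavaScript and Rust when both ends are native.
const piped = source.pipeTo(sink);
```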
00:53:52 - Ishan Anand
Yeah, I get what you're saying, which is you're starting from the use cases first. The classic evil is premature optimization, as people like to say. I guess what I'm trying to understand is the trade-off with that. I think that's the right approach to go, but it's fundamentally an empirical process, so you can only tune it to the application use cases you get exposed to. So I'm wondering how you made sure that was complete enough. Was it just sitting as an unstable flag and people would send reports? Did you proactively say this is how big the market is, we know it's being used in this percentage of use cases and this other percentage of use cases? How did you tackle that? Or was it more organic and people just came to you with problems?
00:54:37 - Luca
Yeah, it's obviously a bit of everything here. Part of performance work is also preemptively optimizing things that you expect are going to see a lot of usage.
00:54:51 - Ishan Anand
Sure.
00:54:51 - Luca
Or that you empirically know are going to see a lot of usage even if you don't have a concrete example of it. You know, for example, that far more people make GET requests to an HTTP server than make POST requests. So you need to make sure that GET requests are very fast, even if that comes at the cost of making PUT requests 1 percent slower, for example. Those kinds of optimizations are one thing. Another thing is we strive for technical excellence. I think everyone does, right? But with a lot of our internal systems, we try to re-engineer them, and we really try to avoid accumulating technical debt. We try to clean up technical debt very quickly to make sure that we're able to iterate very quickly on all parts of the system. Sometimes that comes at the cost of shipping features a little later, but I think in the end it's worth it, because the code is clean, it's readable, it's easy to change, and it makes performance optimizations much easier. And also, less code and more readable code is usually the fastest code, because it's the code that compiler engineers, for example, expect you to write.
00:56:09 - Luca
So that's the code that they can optimize.
00:56:14 - Ishan Anand
Yeah. Some of that probably also has to do with the fact that you guys are a newer project and you've been able to maintain that commitment compared to something as legacy as Node. We are coming up on the top of the hour, so I'll open it up to Anthony, who's on the speaker list, or anybody else who's in the audience. Feel free to raise your hand and ask a question of Luca. Yeah, go ahead.
00:56:41 - Anthony Campolo
Yeah, I'd be curious just to hear a little bit more about the WinterCG group. I feel like that's a very consequential thing that's going on in the background. At least I don't hear a lot about it. I'm always curious to hear what the process is like and when you expect to see anything out of it. Obviously it's a long-term thing.
00:57:06 - Luca
For those who don't know what WinterCG is, it's a community group at the W3C where we're working on standardizing APIs for server-side JavaScript runtimes. It's going pretty well. I think we're making progress. Standards are always slow and tedious. From the outside it looks like it's boring, and sometimes it is boring, and sometimes you wish things could be done faster. But in the end, we're really trying to build APIs here and standardize on things that are going to last forever. If you think about browsers, if a browser introduces a new feature, that feature can never go away. Fetch is never going away. JavaScript is never going away. These projects are too big to fail. And when you're standardizing things that are going to be used by millions of people every day, you really have to pay a lot of attention to getting this right the first try. You don't have a second attempt. Often you get it right the first time or it's wrong forever. So there's a lot of pressure to get it right.
00:58:18 - Luca
And that means it sometimes takes a while to get things done. But we're making progress. One of the things we're specifically working on is figuring out the exact details of how the Fetch API is meant to work on servers, making sure that Deno and Node and Cloudflare Workers and all other server-side Fetch implementations are aligned. That's one of the big things we're working on right now.
00:58:47 - Ishan Anand
Great question. Any sense of a timeline when we could expect an announcement or a first draft, or are there a couple of public milestones you can share?
00:59:03 - Luca
I could say things, but I would just be making them up on the spot. Exactly whenever they're ready, once everyone's happy with it, is the answer. And yeah, I don't know when that is.
00:59:19 - Anthony Campolo
When you say when everyone's happy, I'm just kind of curious: who are the people who you feel have to give the sign-off for, like, this is good to go and everyone has been made happy?
00:59:31 - Luca
So the thing here is that nobody is required to implement anything that WinterCG says. If there are two implementers, for example, Deno and Cloudflare Workers, and we're like, okay, Fetch should work this way, but Node is unable to implement it that way for whatever technical reason, then we can say, yes, we're two-thirds here, we have the upper hand, this is a democracy, it's going this way. But this doesn't actually help anyone because in the end we're going to be stuck with Deno and Cloudflare Workers having this very pristine, awesome API that you can use if you're using Deno and Cloudflare Workers, but if you try to use it in Node it doesn't work because it's not technically compatible with Node, and even if they wanted to implement it they can't. So this is not something where you have to get 50 percent of people to agree. You have to get everyone to agree that this is something they can implement and something they're happy with.
01:00:30 - Luca
Because without buy-in from all the vendors, the specification is pointless. You can have the best specification, but if no one implements it, then you've just wasted a bunch of time. Maybe it's a great specification, but if you're the only one that implements it, then it's not portable and people can't use it. It's like the classic situation: there are 15 existing specs, but none of them are perfect, so let's come up with a 16th one. Our engine implements the 16th one. It's awesome. Nobody else implements it, so now we have 16 diverging specifications.
01:01:07 - Anthony Campolo
Well, what's funny is, in theory this should already be a spec, because you're implementing things that in theory are already spec'd in the browser. Obviously it's not actually that simple, so you can't just port that spec exactly. But it's kind of funny, you're writing a spec to make it spec-compliant with another spec.
01:01:27 - Luca
Yeah, the thing is that yes, there's technically a Fetch spec, but the Fetch spec in the browser does all kinds of things that you don't want in a server-side runtime. It does CORS, it does all kinds of security and privacy checks that are completely irrelevant on a server-side runtime. So you can just say we don't do any of those. But then there are six different ways to not do those. Now we have to figure out which of the six different ways of not doing those do we actually choose. So we're intentionally diverging from the upstream spec here, but how are we diverging? That's a big question with this.
01:02:03 - Anthony Campolo
Really interesting problem space.
01:02:07 - Luca
Yeah.
01:02:09 - Ishan Anand
Yeah, I'm curious because, as Anthony described it, very often you already have a spec for something. Each of the different vendors has a way of doing something, so it's not like it's unknown. It's kind of like getting a bunch of people to agree on dinner, which, in my experience, sometimes can be hard depending on the number of people. It's not like nobody has ever made pasta, Mexican, or Indian food. You know what those are. It's just what people's tastes are. When you guys have a disagreement, is it more often about taste? Is it, no, this is going to have really hard impacts? Or is it, no, there's a lot of legacy code that works this way? If you were to characterize the top three root causes or categories of disagreements in that work, what is it typically?
01:03:08 - Luca
Yeah, I think it's really difficult to say. I think it's a little bit of everything. Part of it is that some people just have differing opinions about how specs should be or how APIs should look. That's usually not a problem for APIs that already exist because there's a lot less design work there. A lot of it is really not even so much disagreement as figuring out what to agree on, if that makes sense. Going through all the existing implementations, weighing the trade-offs that each implementation makes, asking what combination of these trade-offs gives us the best result with the least amount of user breakage, ideally with no user breakage. On the web platform, you cannot have user breakage in web APIs. So there are these trade-offs that have to be made, and then it's a lot of talking with all the different vendors to see if this is something they're able to implement, and getting buy-in from them to get prioritization, to get engineers to actually implement it. Again, if you have a spec, it can be the best spec, but if you don't actually have anyone implementing it, it's worthless.
01:04:27 - Luca
So before you merge something, you need to make sure there's buy-in from all the vendors that they're going to spend time actually implementing this, that there are tests written that people can all use. Even the tiniest little changes require tests to be written and require you to figure out who's going to implement this, and all the specs require you to get buy-in from all the different vendors. There's a lot of talking involved and a lot of coordination. More often than not, it's not necessarily disagreement as much as it is weighing trade-offs and figuring out what the best combination of trade-offs is.
01:05:10 - Ishan Anand
Makes sense. Okay, I know we are over time and I want to be cognizant of that. So, Luca, thanks for coming on. Is there anything you want to say just to close before I close this out in terms of comments or thoughts?
01:05:28 - Luca
Yeah, thanks so much for having me. If you want to learn more about Deno, you can go to deno.land or deno.com if you want to learn more about our edge platform, Deno Deploy. Yeah, we're hiring as well. If you're an SRE or a software engineer who does systems engineering, if you're familiar with Rust or C, if you have a performance background, if you would like to head our DevRel team, if you're an engineering manager, if any of those apply to you, go to deno.com/jobs. We're kind of the opposite of all the other companies that have hiring freezes. We have a hiring bonanza. So please come work with me. I promise I'm not terrible to work with. Well, at least I think so. I don't know. Ask my colleagues if you want more accurate opinions.
01:06:17 - Ishan Anand
Okay. Well, thank you again for being a repeat guest. And for those of us here in the US, it is Thanksgiving, so I want to say I'm thankful to all of you who have joined us on JavaScript Jam Live over the, I guess, past year or so. Actually longer than that, if you came with us when we migrated over from Clubhouse. Thank you for all your participation and all the commentary, and we'll continue to do more of this. So thankful for all that. And that being said, a reminder: if you got any value from anyone here who came up to speak, feel free to click on their face and follow them. And go to javascriptjam.com, where you can subscribe to our newsletter to find out about future episodes and who's coming up on the rest of the calendar for the year. With that, thank you everyone.
01:07:23 - Luca
Thanks everyone. Bye-bye.