
Did AI Write This Summary? w/ Anthony Campolo



Episode Description

Ryan Carniato hosts Anthony Campolo to explore AI-powered transcription pipelines for content creators, then dives into virtual DOM diffing theory.

Episode Summary

Ryan Carniato welcomes Anthony Campolo back to the stream to discuss how content creators can use AI tooling to generate timestamps, summaries, and chapter descriptions for long-form video content. Anthony walks through the five-step pipeline he built into his open source project AutoShow: downloading video via yt-dlp, extracting audio with FFmpeg, running transcription through Whisper CPP or paid services like AssemblyAI, crafting prompts that guide an LLM to produce structured markdown output, and assembling the final file. The conversation covers practical considerations like context window limits forcing long streams to be split in half, the trade-offs between open source and paid transcription, and how different LLM models produce varying quality of chapter timestamps and descriptions.

Anthony then demonstrates Ryan GPT, a retrieval-augmented generation app built with Solid Start that uses embeddings from every transcribed stream to answer technical questions in Ryan's voice. Ryan tests it with adversarial questions about signals, fine-grained rendering, and framework comparisons, finding the answers impressively accurate on well-covered topics but weaker on subjects discussed only briefly.

After the guest segment, Ryan pivots to a deep technical discussion on the trade-offs between virtual DOM diffing and fine-grained reactivity, walking through a new HackMD piece arguing that the virtual DOM's ability to diff only what is currently used gives it an underappreciated advantage when handling fresh server data, and explaining how proxy-based stores in Solid 2.0 aim to capture that advantage while preserving fine-grained updates. The stream closes with brief notes on Svelte's new server functions, single flight mutations, and Dominic Gannaway's experiments with statement-based JSX templating.

Chapters

00:00:00 - Introduction and Stream Setup

Ryan opens the stream by explaining that creating timestamps and post-production content for his lengthy streams has always been a time-consuming challenge. He recounts how Jesse Pence previously handled timestamps manually and how the process of rewatching a five-hour stream to annotate it could take three or more hours. He introduces the idea that AI-generated timestamps could dramatically reduce that burden.

Anthony Campolo joins and the two catch up briefly on personal news, including Ryan's move to Portland and Anthony's glamping trip in Michigan. Ryan explains that Anthony recently started dropping AI-generated timestamps on his videos, which piqued his curiosity about the tooling and process behind them, setting the stage for the main discussion.

00:07:29 - The Problem of Timestamps and Content Indexing

Ryan describes the pain of manually creating timestamps, explaining how the non-linear nature of stream conversations makes it difficult to identify clean topic boundaries. Anthony relates, noting that the same question about generating timestamps for his own FSJam podcast launched an eighteen-month development journey that eventually produced multiple open source projects and a paid application.

Anthony explains his early intuition that if video could be converted to timestamped text, an LLM could chunk and summarize it into chapter headings. He describes his initial workflow of transcribing with Whisper CPP, handwriting a prompt, and pasting everything into ChatGPT's web interface, then iterating through dozens of models and prompt variations to improve output quality.

00:16:13 - Understanding Transcription: Whisper, APIs, and Trade-offs

Ryan asks basic questions about transcript file sizes and sources, and Anthony explains the options: pulling auto-generated transcripts from YouTube, running Whisper CPP locally for better quality, or using paid APIs like AssemblyAI or Deepgram for speaker-labeled transcription. He clarifies that Whisper is an open source model from OpenAI that was reimplemented in C/C++ for speed, and that the choice between free and paid services hinges on whether you need the transcript itself or just use it as an intermediate step for LLM processing.

The discussion touches on how LLMs can often infer multiple speakers even without labels, how technical vocabulary errors like "View" versus "Vue" require either upfront dictionaries or post-hoc editing, and the general principle that these tools deliver roughly ninety percent accuracy, with the last mile requiring human review.
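
The post-hoc editing route can be sketched in a few lines. The substitution table below is purely illustrative (a real pipeline would need context-aware rules, since words like "View" are often legitimate English):

```typescript
// Toy post-hoc correction pass for domain vocabulary a transcriber
// reliably mishears ("View" for "Vue", etc.). The dictionary here is
// illustrative; a real one would be built per show and applied with
// more context awareness than a blind global replace.
const corrections: Record<string, string> = {
  "View": "Vue",
  "solid js": "SolidJS",
  "hack MD": "HackMD",
};

function fixVocabulary(transcript: string): string {
  let out = transcript;
  for (const [wrong, right] of Object.entries(corrections)) {
    // \b word boundaries keep us from rewriting substrings of other words.
    out = out.replace(new RegExp(`\\b${wrong}\\b`, "g"), right);
  }
  return out;
}
```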

00:23:09 - The Five-Step Pipeline: Building AutoShow

Anthony and Ryan collaborate on an Excalidraw diagram to map out the five-step content processing pipeline. Step one is downloading the YouTube video with yt-dlp and extracting audio via FFmpeg into a specific WAV format for Whisper CPP. Step two runs the transcription. Step three inserts a carefully crafted prompt between the video's front matter metadata and the transcript text in a single markdown file.

Steps four and five involve feeding the assembled markdown to an LLM like ChatGPT, Claude, or Gemini and then outputting a complete markdown file containing front matter, the LLM-generated summaries and chapters, and the original transcript. Anthony explains the prompt engineering details, including instructions to avoid the word "delve," skip advertisements, and not reproduce the transcript, all lessons learned from repeated errors during development.
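
Steps one and two can be sketched roughly as follows. The function names are illustrative, not AutoShow's actual API, and the sketch assumes yt-dlp and FFmpeg are installed and on the PATH:

```typescript
// Sketch of the first two pipeline steps (illustrative, not AutoShow's API).
// Step 1: download the video and extract audio; whisper.cpp expects
// 16-bit PCM WAV at 16 kHz, mono, so the ffmpeg flags request exactly that.
import { spawnSync } from "node:child_process";

function downloadArgs(url: string, out: string): string[] {
  // yt-dlp fetches the video; -o sets the output filename template.
  return ["yt-dlp", "-o", `${out}.%(ext)s`, url];
}

function extractAudioArgs(input: string, wav: string): string[] {
  // -ar 16000: 16 kHz sample rate, -ac 1: mono, pcm_s16le: 16-bit WAV.
  return ["ffmpeg", "-i", input, "-ar", "16000", "-ac", "1", "-c:a", "pcm_s16le", wav];
}

function runStep(argv: string[]): number {
  // Runs one pipeline step synchronously, streaming its output to the console.
  const [cmd, ...rest] = argv;
  const res = spawnSync(cmd, rest, { stdio: "inherit" });
  return res.status ?? 1;
}
```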

00:34:20 - Prompt Engineering and LLM Model Comparisons

Anthony shares his screen to walk through the actual prompt structure, showing how it requests a one-sentence description under 180 characters, a full paragraph summary, and chapter titles with two-paragraph descriptions in a specific timestamp format compatible with YouTube's chapter feature. He explains how context window limitations forced him to split Ryan's five-hour streams in half, finding that most models handle one to two and a half hours of content well.
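
The assembly step can be sketched as follows. The front matter fields and prompt wording are paraphrased from the discussion, not AutoShow's exact template:

```typescript
// Sketch of step 3: sandwich the prompt between front matter metadata and
// the transcript in a single markdown file. Structure inferred from the
// conversation; AutoShow's real prompt text is not reproduced here.
interface FrontMatter {
  title: string;
  url: string;
  date: string;
}

function buildPromptFile(fm: FrontMatter, transcript: string): string {
  const frontMatter = `---\ntitle: "${fm.title}"\nurl: ${fm.url}\ndate: ${fm.date}\n---`;
  const prompt = [
    "Write a one-sentence description under 180 characters.",
    "Write a one-paragraph summary.",
    "Write chapter titles with two-paragraph descriptions,",
    "using HH:MM:SS timestamps compatible with YouTube chapters.",
    'Do not use the word "delve". Do not reproduce the transcript.',
  ].join("\n");
  // Front matter, then the prompt, then the transcript, in one document.
  return [frontMatter, prompt, "## Transcript", transcript].join("\n\n");
}
```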

The comparison between models reveals that ChatGPT 4.1 can ingest a full transcript but produces too few chapters, while O3 Pro generates more granular chapters but spaces them at rigid six-minute intervals. Claude Sonnet produces more naturally varied chapter lengths that better reflect actual topic boundaries, making it Anthony's current recommendation for this use case.

00:44:46 - AutoShow CLI Demo and App Walkthrough

Anthony demonstrates the AutoShow CLI by running it on a short video, showing the terminal output as it downloads, extracts audio, transcribes, inserts prompts, and generates the final markdown. He explains that the tool supports over forty different prompt configurations, from short summaries and key takeaways to comprehension quizzes and even rap songs, and can process individual videos, playlists, or entire YouTube channels.

He then shows the paid AutoShow web application built with Solid and Astro, walking through its UI for selecting transcription services, choosing from prompt options, and picking an LLM model with a credit-based pricing system. The app wraps third-party APIs so users avoid managing their own keys, and Anthony notes it is still in early beta, inviting testers to reach out for free credits in exchange for bug reports.

00:58:23 - Exploring Different LLM Outputs and Accuracy

Ryan and Anthony examine the actual outputs generated for Ryan's mutable signals stream, comparing descriptions and chapter titles across ChatGPT 4.1, O3 Pro, and Claude. The one-sentence descriptions and paragraph summaries prove broadly accurate and useful as quick references, though they read in a detached third-person voice rather than Ryan's natural tone. Anthony notes he has begun experimenting with feeding his own tweets to LLMs to generate output in his personal voice.

The chapter-level outputs vary significantly: some models produce evenly spaced but rigid chapters while others capture natural topic transitions. Ryan observes that the ChatGPT model's chapter titles feel more creative and humorous, while Claude's are more technically precise, highlighting how different models have distinct personalities that affect the usefulness of the output for different purposes.

01:05:41 - AutoShow Prompt Options and Customization

Anthony dives deeper into the prompt system, showing how the CLI flags map to specific prompt templates that can be combined freely. He demonstrates generating chapter titles, social media posts, and a rap song from the same source video, illustrating the flexibility of the pipeline once the transcription and front matter are in place.

Ryan asks clarifying questions about how the CLI handles video downloading, playlist processing, and local files, gaining a clearer picture of the tool's versatility. Anthony explains that yt-dlp powers much of this flexibility and that the same pipeline could be used to summarize newsletters and podcasts from across the web development ecosystem, a use case he plans to turn into a daily newsletter.

01:15:22 - Ryan GPT: Retrieval-Augmented Generation Demo

Anthony introduces Ryan GPT, a Solid Start application that uses OpenAI embeddings generated from the summaries and chapter descriptions of every stream Ryan has ever done. He explains the difference between embeddings and fine-tuning, describing how the text is converted into numerical representations stored in a large JSON file that an LLM can query like a database to assemble answers from across all the transcribed content.
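
The retrieval side of this setup can be sketched with a plain cosine-similarity search over a flat list of vectors. The toy vectors below stand in for real embeddings produced by a model like text-embedding-3-large:

```typescript
// Sketch of flat-file retrieval: episode embeddings live in a JSON array,
// and a question's embedding is compared against them to pick the most
// relevant context to hand the chat model. Toy vectors for illustration.
interface EmbeddedDoc {
  id: string;
  vector: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(query: number[], docs: EmbeddedDoc[], k: number): EmbeddedDoc[] {
  // Sort a copy by similarity to the query, highest first, and keep k.
  return [...docs]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```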

They test it with progressively harder questions. The tool provides notably better answers than vanilla ChatGPT on topics Ryan has covered extensively, like mutable signals and why fine-grained rendering outperforms virtual DOM diffing. It produces a solid comparison of signals in Solid versus Preact and generates a useful bio of Ryan. However, it struggles with topics discussed only briefly, like the argument that Svelte 3 resembles a virtual DOM, revealing the limitations of embeddings based solely on summaries rather than full transcripts.

01:31:42 - Testing Ryan GPT with Adversarial Questions

Ryan feeds increasingly adversarial and nuanced questions to Ryan GPT, including why the React compiler can never match signals, what the biggest myths about signals are, and why signals in Solid will always be better than in Preact. The answers hit the right high-level points and reference appropriate framework-specific details, though they sometimes overemphasize the compiler's role or miss subtle distinctions Ryan would make.

The single flight mutations versus React Server Components question reveals a weakness: because Ryan discussed RSCs in scattered comments across many streams rather than in dedicated deep dives, the embeddings lack sufficient concentrated context. Anthony notes that incorporating Ryan's HackMD notes and blog posts would significantly improve answers on these topics, and they discuss potential improvements like breaking embeddings into chapter-level chunks and using vector databases like Cloudflare Vectorize instead of a flat JSON file.

01:38:32 - Embeddings, Vector Databases, and RAG Architecture

Anthony explains the technical architecture behind Ryan GPT in more detail, clarifying that the embeddings are generated once using OpenAI's text-embedding-3-large model and stored statically, while the chat model querying them can be swapped freely between O3 Mini, O3, GPT-4o, or other models. He describes the difference between his simple JSON-based approach and production-ready vector database solutions.

Chat participants ask about graph RAG and vector databases, prompting Anthony to explain retrieval-augmented generation as a concept and compare different vector storage options including PG Vector, SQLite Vec, and Cloudflare Vectorize. He recommends Cloudflare's solution for its automatic chunking capabilities and notes that experimenting with finer-grained embeddings at the chapter level rather than the episode level could improve answer quality.

01:47:29 - Mutable Reactivity Revisited: Svelte's Reference Model

After Anthony departs, Ryan revisits his conclusions from the previous stream about mutable reactivity and store behavior. He demonstrates in the Svelte playground that direct assignment of state preserves referential identity, a behavior he had incorrectly dismissed the week before. He walks through how Svelte's path-based proxy identity model differs from Solid's global weak map approach.

Ryan explains that this discovery has significant implications for how stores handle shared references during reconciliation and diffing. He examines Svelte's source code to understand how their proxy traps manage signal creation on access rather than on write, and concludes that Svelte's model is more sensible than he previously gave it credit for, representing the right conceptual foundation for source guarding in reactive stores.

01:54:30 - The Genius of the Virtual DOM

Ryan shares a new HackMD article arguing that the virtual DOM possesses an underappreciated advantage: it only diffs what is currently being used. He walks through how a component accessing only item zero of a ten-thousand-item data blob incurs zero cost for the remaining items, whereas a naive reactive store would need to create signals for all items or diff the entire dataset when fresh server data arrives.
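
A toy proxy can illustrate the alternative to that naive approach: allocate per-property tracking only on first read, so a component touching item zero of ten thousand creates exactly one tracking entry. This is a sketch of the principle, not Solid's implementation, and it is deliberately shallow (one level deep):

```typescript
// Lazy-signal sketch: wrap the data in a Proxy and only allocate
// per-property tracking on first read. Unread items cost nothing,
// mirroring the virtual DOM's "only pay for what you use" property.
function lazyStore<T extends object>(target: T) {
  const signals = new Map<PropertyKey, { reads: number }>();
  const proxy = new Proxy(target, {
    get(t, key, receiver) {
      // Create the tracking entry lazily, on first access only.
      if (!signals.has(key)) signals.set(key, { reads: 0 });
      signals.get(key)!.reads++;
      return Reflect.get(t, key, receiver);
    },
  });
  return { proxy, signalCount: () => signals.size };
}
```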

He traces the history from React's early design philosophy of not depending on immutable data or stable references through hooks pushing developers toward immutability, explaining how this combination allows React to leverage referential equality for shortcut optimization. The discussion contextualizes why stores with lazy proxy-based signal creation were essential for Solid to compete with this property.

02:08:50 - Push, Pull, and the Reactivity Trade-off

Ryan explains the push-pull model of signals in detail, contrasting it with pure push systems like RxJS and React's pull-based virtual DOM diffing. He demonstrates how signals avoid running user code when state changes don't affect any active effects, unlike both push-based systems that eagerly propagate and React's model that reruns component functions to discover nothing changed.

The discussion connects to Ryan Florence's tweets about using plain let variables for state management, which Ryan argues is conceptually equivalent to React's earliest model minus the scheduling mechanism. He walks through how the shape of React's setState API subtly guides developers toward immutable patterns that enable memoization, and how removing those guardrails leads back to the same diffing-everything baseline that motivated signals in the first place.

02:23:22 - Diffing, Shared References, and Store Design Challenges

Ryan presents a concrete code example showing a subtle bug in Solid 2.0's optimized diffing: when a store object assigned at path B shares a reference with an object at path A, diffing path B shortcuts early because no signals exist along that path, silently orphaning path A's reactive updates. He explains why React never encounters this problem, since its state model prevents truly shared references across components.
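
The shape of the hazard can be shown with plain objects. This is a toy illustration of the shared-reference problem, not Solid 2.0's store code:

```typescript
// Two store paths hold the same object reference. An optimized diff along
// path b, finding no signals on that path, may just write into the
// underlying target in place, and nothing along path a is ever notified,
// even though a's readers now see different data.
const shared = { count: 1 };
const store = {
  a: { item: shared },
  b: { item: shared },
};

// Shortcut update through path b (no signals there, so no notifications):
store.b.item.count = 2;
// Path a's value changed too, but no a-side signal fired: its reactive
// subscribers are silently orphaned.
```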

He outlines potential solutions including restricting diffing rules to normalized data, creating specialized diffing primitives, or constraining store behavior to prevent shared references. The discussion reveals that achieving virtual-DOM-level diffing optimality in a fine-grained system requires either accepting immutable-style constraints or inventing new proxy mechanics that reconcile shared identity with lazy signal creation.

02:35:20 - Proxy Internals and Swappable Targets

Ryan digs into the implementation details of how proxy-based stores can achieve optimal diffing by making their underlying target objects swappable. He explains that when no signals exist along an unobserved path, the proxy can simply replace its target without walking or diffing that data, achieving the same efficiency as React's approach of ignoring unused state.

He examines Svelte's proxy trap source code to understand their signal-on-access pattern, comparing it with Solid's approach of using a global weak map for proxy identity. The technical analysis reveals that path-based identity is key to making swappable targets work correctly, and that the signal creation strategy during set operations differs meaningfully between frameworks in ways that affect both performance and semantic correctness.

02:48:53 - The Virtual DOM's Advantage with Server Data

Ryan continues his HackMD walkthrough, arguing that fine-grained reactivity's creation overhead and diffing complexity become most apparent when dealing with large datasets where most data arrives fresh from the server. He explains the historical double-mapping problem from the Knockout era and why React's approach of simply setting state with raw data was so appealing by comparison.

He describes how Solid 2.0's stores use lazy proxy-based signal creation to eliminate the upfront mapping cost, and how the new swappable target mechanism enables skipping diff operations on unobserved data paths. The section concludes with Ryan acknowledging that while fine-grained rendering wins decisively in the ninety-plus percent case of client-side interactions, the edge cases around serialization boundaries and fresh server data represent genuine advantages of the virtual DOM model that informed his store design decisions.

03:53:46 - This Week in JavaScript: Svelte Server Functions and JSX Innovation

Ryan transitions to a brief roundup covering Dominic Gannaway's proposal for statement-based JSX that enables JavaScript control flow around template expressions rather than the reverse, noting its similarity to Marko's template language. He highlights Rich Harris's Learn with Jason stream demonstrating Svelte's new server functions including query, command, and form primitives with single flight mutations.

The segment includes a humorous clip of Rich Harris acknowledging that fear of Ryan's criticism motivated Svelte's implementation of single flight mutations. Ryan explains the technical merits of single flight mutations for reducing server-side over-fetching by leveraging client cache knowledge, and celebrates that this architectural pattern he championed is now available across Solid Start, TanStack Start, and SvelteKit, representing a meaningful convergence in the framework ecosystem.
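
The single-flight idea can be sketched generically: the client sends its cached query keys along with the mutation, and the server piggybacks refreshed data for those queries on the response, avoiding a second round trip. The wire shape below is invented for illustration; Solid Start, TanStack Start, and SvelteKit each define their own:

```typescript
// Illustrative single-flight mutation handler: one hard-coded "add a todo"
// mutation, plus refreshed data for whichever queries the client says it
// has cached. All names and shapes here are hypothetical.
interface MutationRequest {
  action: string;
  payload: unknown;
  cachedQueries: string[]; // what the client already holds, so the server knows what to refresh
}

interface MutationResponse {
  result: unknown;
  refreshed: Record<string, unknown>; // piggybacked fresh query data
}

function handleMutation(req: MutationRequest, db: Map<string, unknown>): MutationResponse {
  // Apply the mutation (a single hard-coded mutation for illustration).
  db.set("todos", [...(db.get("todos") as unknown[]), req.payload]);
  // Piggyback fresh data for exactly the queries the client has cached.
  const refreshed: Record<string, unknown> = {};
  for (const q of req.cachedQueries) {
    if (db.has(q)) refreshed[q] = db.get(q);
  }
  return { result: "ok", refreshed };
}
```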

04:22:34 - Closing Thoughts and Framework Ecosystem Reflections

Ryan reflects on the influence his stream has had on framework development decisions, noting the tension of being both a competing framework author and one of the most vocal critics of architectural choices across the ecosystem. He acknowledges that this position demands exceptional scrutiny of his own decisions in Solid and shares his excitement about the convergence of server-side patterns across multiple frameworks.

The stream wraps with brief mentions of Solid's 3D game demos, a humorous Grok-generated list of signal critics, and Ryan's interest in future collaborations with Dax Raad on AI topics and Dominic Gannaway on new rendering approaches. He commits to continuing work on the diffing and shared reference challenges discussed earlier, promising updates in the following week's stream.

Transcript

00:00:02 - Ryan Carniato

All right everyone, welcome to my stream today. How you all doing? Come say hi in the chat. Today we're doing something we haven't done in a while. We're going to have a guest on our stream. Anthony Campolo is joining us today. He's been on the stream before to talk about Redwood, the React Does Everything framework. But today he's going to talk to me about some stuff that is closer to home. Basically, as you guys know, doing content takes a lot of time and it takes time away from doing development. Streams have been relatively easy because I can just kind of press record and go with it, and then I kind of talk about things, and through teaching people I end up learning stuff myself and sometimes, you know, get great feedback from the chat, and it all lends to actually doing my research. But then at the end of the day we want to get more value out of these streams, and over the years I've had amazing people like Jesse Pence, literally week after week, doing the timestamps. I used to do them myself once I realized we had that feature. While it's not quite super indexable, at least people could find key moments from the stream again and reference it.

00:01:20 - Anthony Campolo

Right.

00:01:21 - Ryan Carniato

Recently I've been doing shorter videos which are really much more to the point, but sometimes the content on the stream doesn't really edit down very well, yet there's still a lot of really important insights that we gain from it. So anyways, I've been stuck on this for a little while because I just really don't have time to do the timestamps. And Anthony showed up and started dropping these timestamps on me that were a little different than Jesse's. They were actually AI generated, and I realized I know absolutely nothing about this topic, like almost zero. So I was like, okay, maybe there's something interesting for me. Again, this is me using the stream to learn about stuff so that I can save myself time. I could have just got on a call with Anthony and he would have shown me everything. But maybe, maybe this will benefit some of you. See, I mentioned a couple weeks ago the Marko guys were like, hey, we should, you know, do something soon. But they didn't tell me about the release date.

00:02:34 - Ryan Carniato

I guess they're on the beta track now, and they have a demo site for it. Marko 6 is the GTA of JavaScript. Yeah, GTA 6.

00:02:44 - Anthony Campolo

Sorry.

00:02:45 - Ryan Carniato

Yeah. But hey, I mean, they're on version 6, right? At this point, you know, it's worth calling it out. Solid's still on version one. We haven't even got to two. So I mean, they have six versions in, what, 11 years, maybe 12 years. I have one version in like seven years. So I think they're actually winning on that. I guess two versions if you count version zero. But yeah. Hey, Johnny. All right, well, it looks like people are starting to show up, so. Yeah, just catching all the hi's in here while we get this thing going. We should get started here in a moment, shouldn't we? Just a second. Let me get Twitch going in the background so I'm good to go. And maybe I should post on Bluesky that the stream is starting, because they don't get the live stream directly in the UI there. Give me a second and then I will invite our guest here. Yeah, we're good on Twitch. Good. I usually do this long pre-roll to get more people in on Twitch, but it looks like all the viewers are on YouTube or.

00:04:16 - Ryan Carniato

Or on X right now. So. All good. Let's. Let's see here. Just a second. I'm just going to quickly.

00:04:41 - Anthony Campolo

Yeah, here we are.

00:04:49 - Ryan Carniato

Sorry, just getting going. Twitch TV slash Ryan. Solid. All right, all right, all right, all right, let's get going. All right, one last thing. All right, Anthony, I hope you're ready

00:05:19 - Anthony Campolo

because

00:05:23 - Ryan Carniato

let's welcome Anthony back to the stream.

00:05:27 - Anthony Campolo

What's up, man?

00:05:29 - Ryan Carniato

How you doing?

00:05:31 - Anthony Campolo

Super great. I just got off a week-long glamping trip. I don't know if you've heard of this. It's when you camp, but they just give you like a giant tent with a bathroom in it and they deliver things to you. It's very nice. But yeah, I was up in Michigan. I went to the Great Lakes for the first time. Ever been there?

00:05:51 - Ryan Carniato

Not on the Michigan side, but on the Ontario side. I have been. Yeah, I'm Canadian here and my. My parents are from Ontario, so. Okay, cool. Yeah.

00:06:03 - Anthony Campolo

How do you like Portland so far?

00:06:06 - Ryan Carniato

It's been warmer than I expected. I was actually looking at the temperature and it was like warmer here than in San Jose like the last month, which is crazy. But it's very green here, which is nice. I missed the green. I missed the trees and the. And yeah, it just. It feels more like home. Like Vancouver.

00:06:26 - Anthony Campolo

Remind me, how long were you living in San Jose for?

00:06:28 - Ryan Carniato

Five years.

00:06:29 - Anthony Campolo

Five years. Okay, cool. Yeah, because I was born in San Jose actually, and then lived in the Bay Area until I was 30 and visited a couple places in Oregon, Eugene also. I spent some time there. So. Yeah, so familiar with all those places and yeah, it's a good spot for sure.

00:06:47 - Ryan Carniato

Yeah, yeah, no, it's interesting. I guess it's slightly more inland than I've ever lived. Like Vancouver, even though it's behind Vancouver Island, is kind of right on the water. San Jose is like on the bay, it's like a little farther in. Portland, we're actually like an hour and a bit from the coast.

00:07:08 - Anthony Campolo

So yeah, that's the same. Similar for me except now I'm in the Midwest. I'm in St. Louis. You know, this is the first time I've ever been more than a half hour away from the ocean.

00:07:16 - Ryan Carniato

Yeah. So yeah, but I mean generally it seems really nice, really happy with the house. Got everything set up the way I like it, as you can tell by the background here.

00:07:26 - Anthony Campolo

Yeah, I love the guitars you've got up.

00:07:29 - Ryan Carniato

So. Yeah, no, I'm having a good time. But what I have been missing the last few weeks is timestamps. Someone was on actually asking me on Bluesky and they're like, Ryan, for the life of me I can't find this clip where you were talking about something. And I started looking through the videos and I realized that I've been lazy. I haven't been doing timestamps for a while.

00:07:56 - Anthony Campolo

When did Jesse fall off on that? Do you know?

00:07:58 - Ryan Carniato

Around Christmas break, around maybe like January. And the truth of the matter is when I used to do it, it took me forever because I would, I would watch the streams again to

00:08:11 - Anthony Campolo

do it and yeah, multiple times usually to actually do it.

00:08:15 - Ryan Carniato

Yeah, to get the actual amount of detail, it meant like I streamed for five hours, and even if I tried to speed things up, I'd kind of miss things and have to jump back, or try and like skip over stuff, it still took me about three hours to do the timestamps. Like it was over half the time again to do it. I guess it's a little bit because the problem is when you watch it you kind of go, okay, I think this is a topic. And then you realize that you get interrupted by someone, and then it's not the topic, and then something else is the topic, and then you have to readjust it and go back. Like it's not just a linear process always to get like the most as

00:08:47 - Anthony Campolo

you're describing this, you're being a perfectionist about this. Where some people listen to it, they'll be like, oh, what do I talk about for this half hour? Like write a broad kind of outline, but you're actually trying to get highly specific demarcations at the exact right time with the exact right title for the timestamp.

00:09:01 - Ryan Carniato

Yeah. Which, yeah, I mean, cause then I can reference it, right? Cause if I'm gonna do the work, you know, it seemed like it was such a small addition initially to just add timestamps, cause I'd have something that was like permanently indexable. But in a sense it's not. Now that I have so many videos, it's actually not as indexable. But at least it's still something. I mean, I think it's helpful, at least when like I go, oh yeah, I talked about this sort of topic a while back. Like sometimes there'll be like a linked playground or something, like I'll know. When someone asks that question, what I usually actually do is go, I know what example they're talking about. I look it up in Discord, see when I originally posted it in Discord. Because for those who don't know, yes, the stream is great, I talk about stuff, but the next channel in the SolidJS Discord is like the stream in real time, like all the time. It's not a video, but like every day I'm posting what I discover. I post new HackMDs, I've started publishing even more of them.

00:09:59 - Ryan Carniato

I think I've got like 60 or something. I write like two or three HackMDs a week as I do my research. So like usually the time on Discord's fine. And then I go to the nearest stream after that and then search through

00:10:14 - Anthony Campolo

somewhere in that headspace you were in. Yeah.

00:10:17 - Ryan Carniato

And then search through that stream to find the topic that I'm looking for. Obviously this is not a process that most people can do, but for me it's at least enough because if I was to make videos out of all the content that I did or even trying, like first of all, editing down the stream doesn't really work because like I mean I did that for one video. I did that for like the tier list and it took me hours to edit it to get it down.

00:10:37 - Anthony Campolo

Longer than if you just tried to make the video in the first place.

00:10:40 - Ryan Carniato

Yeah, yeah. Cause I'm, I'm kind of terrible. So.

00:10:43 - Anthony Campolo

Yeah, there's a fluency in a thing, people who are, you know, lifelong video editors obviously could do that. But that's not you. You're a musician, you're a programmer, you're not a video editor, you know. Yeah, but so I'm super glad you brought this topic to me. This conversation is a very long time in the making. I've been seeding this and trying to get you interested in this and the work I've been doing on it. So this specific question, how do you generate timestamps, has led me on the development mission I've been on for the last year and a half. I've built at this point like three or four open source projects and an entire, now paid, app that I'm going to try and build a company around. It all first started on this question, how do we get the timestamps? It spun out to all these things from there.

00:11:27 - Ryan Carniato

Yeah. How did, how did you get there? I guess you were. Yeah, yeah, exactly. For like the.

00:11:32 - Anthony Campolo

How do I get timestamps for FSJam? It was the exact same question. And that's an hour-long podcast, so that's actually a lot simpler, something I could have done myself if I really wanted to just suck it up. But like a five-hour stream is a whole different thing. So as I've been going on this journey, your stream is what I've actually used as a test case to try and battle-test this tool. And in the process I have transcribed and created broad chapter outlines for every stream you've ever done. Everything you've ever done is already generated in a GitHub repo right now. And yeah, so we'll get into all that in a bit. But I want to keep kind of talking about this history here, because as I said, this is a problem that I've now gone so deep into and I have so many things to say. Like I'm just very glad to have someone to talk about this with, first of all. And for people who don't know me, I'm Anthony Campolo. You can watch my Redwood stream if you want to know kind of my background and stuff, but I'm mostly a DevRel person.

00:12:23 - Anthony Campolo

I create a lot of content, do a lot of tutorial, live stream type videos, you know, somewhat similar to Ryan's, but obviously not even close to the level of complexity and length and depth. But it's a stream that I really, really admire, so I'm always super honored to be here, first of all. And yeah, I have a podcast, or had a podcast, kind of defunct now, called FSJam. Maybe, Dev, we can talk a little bit about RedwoodSDK; I'm not really the person to talk about it. But I had this podcast where I'd bring on people like Ryan, he was on the show, and we would have an hour to hour-and-a-half-long conversation, usually, about just web dev topics. Every single framework was featured on the show at one point or another. And I'm also in a similar boat to Ryan where I, you know, do this just for the love of the game, for the passion. It's not for money, it doesn't monetize at all. And it's hard to put the time in to really make them as high quality as you.

00:13:24 - Anthony Campolo

You would like to make them, in terms of assets, post-production, pre-production, everything. And so at a certain point, as, you know, things like ChatGPT came out, I started playing around more with AI stuff. I know you have not been doing that so much, Ryan, and this is definitely not going to be a stream where we're trying to convert everyone to AI. We're not even talking about AI coding; this is really a completely separate topic, because this is about how content creators can use AI. ChatGPT is good at working with text, along with code, you know. So the first insight I had was you just have to first turn the video into text. You have to run it through transcription, and then if you have the transcription, and the transcription is attached to timestamps, then in theory, I was like, the LLM could be able to read it, summarize it, chunk it, and come up with some sort of chapters that may be broadly accurate, even if they're not necessarily written quite the way you want them to be, and even if they don't necessarily align perfectly with the right timestamps.

00:14:24 - Anthony Campolo

I just had this intuition that this is a problem it would be able to solve and had the capacity to do. It does, but it's very, very challenging actually, and there are so many things that go into it. And it's funny, Dev is saying I don't have a podcast, I had many podcasts. Technically, FSJam was my only podcast. I also have a stream, I would say, called AJC and the Web Devs. But yeah, he is kind of right. So yeah, I'll pause here in a second, but just to finish the thought. I eventually got to the point where I was transcribing my content with whisper.cpp, handwriting a prompt that would explain what I wanted it to generate for me, and then pasting that straight into the chat box on ChatGPT. Like, you just go to the website, you plop it in, you have the prompt, you have the transcript, and then that would write me the chapters. And from there I've, you know, tried dozens, if not hundreds, of different models to do this with. Different ways of writing the prompt, different ways of phrasing it, giving it an example, writing the example in different ways.

00:15:26 - Anthony Campolo

There are so many different ways to tweak this, so many different variables that go into it. And the outputs are incredibly all over the place in so many respects, both in how accurate they are in terms of the time groupings and in how well they actually describe the content. But I found that if you do it right, you actually can get super useful stuff. It won't be something that you want to just copy-paste into your thing. Again, that's what I was doing when I was putting them in the comments for your last couple of videos, so those are not going to be quite up to snuff. They're going to be like 90% of the way there. So the idea is it gives you a framework. Like, okay, I have all these ideas; it chunks the main sections; I can now watch the video, scroll to the beginnings and endings of these chapter headings, see if it's really lining up with the content, and kind of go from there. So that's the huge blurb.

00:16:13 - Ryan Carniato

Right, yeah, I mean, yeah, I mean, I don't know almost anything about this. I just, I imagine that I do a lot of talking over the course of five hours. Like the, like. Are the transcript files like huge? Are they like multiple books?

00:16:27 - Anthony Campolo

Yeah, there are multiple books' worth of text within the repo at this point, like more than thousands of pages. It's wild.

00:16:38 - Ryan Carniato

And I guess what's interesting with the transcripts is, I haven't even looked at them. Are these the ones that just come straight off YouTube? Or do you do something auto-generated from there, or...

00:16:48 - Anthony Campolo

So that's where... So you can grab the transcriptions right off YouTube if you want to do that really quickly. If you want higher quality transcriptions, I think whisper.cpp's transcriptions are slightly better; they're probably fairly comparable, honestly. Though if you really want a super duper good transcript, and especially if you want speaker labels, like if you have a guest on, like me, and you want to separate between the two, then you can use something like AssemblyAI or Deepgram, which have an API where you pay, you know, a certain amount of cents per minute or whatever. And so I've built tools that do all of these now. I have an open source version you can use with Whisper, and I have a paid version you can use with Deepgram. So it's a whole spread of different things. There's this question of: do you actually need the transcript afterwards? Is that important to you, or are you using it just as a step to leverage the final AI output, which would give you the chapter timestamps? Because then you can throw the transcript away at the end.

00:17:42 - Anthony Campolo

And if it's like 99% correct, the LLM isn't going to be as worried about small word errors or extra like filler words and stuff like that. It doesn't care about that stuff. It just kind of takes the whole text in and finds large chunks. So you can go with lower quality transcription if you don't really need the transcript in the end.

00:18:01 - Ryan Carniato

Yeah, I was going to ask you how it identifies different speakers, like on guest streams and stuff. That's interesting. Okay, so it is something that it does do. Because, and I will get to that later, you made a Ryan GPT, and I was like, is this the opinions of everyone who's ever been on my stream, or is it my opinion? So, like,

00:18:23 - Anthony Campolo

yeah, and that's such a good question. This is a really interesting facet of it. I found that even if you do not include speaker labels, if you just give it a raw line of text, it is able to figure out most of the time, and the better models obviously are able to figure out most of the time, that there are multiple speakers talking to each other, because they're able to tell from the context of the text, like, that we're using each other's names. Just within these first 18 minutes, I introduced myself, I said my name is Anthony Campolo, you introduced me, you talked about me. Anyone could read the text, even if it was all on a straight line, and figure out what's happening; they can figure out there are two people there. So an LLM can do that as well. Giving it the speaker labels just makes it that much easier, and lower, cheaper models can work better with that. So there's this question of how much context you need to give a certain LLM to have it figure out the thing it needs to figure out to generate that output at the end.

00:19:22 - Anthony Campolo

And there's a wide range of quality of models and price of models. So if you always go with kind of the best one, I find that giving it just the raw Whisper CPP output, which is the single line of text, is basically good enough for it to figure out what's going on within the conversation.

00:19:37 - Ryan Carniato

Okay, yeah, I admit I never looked at these services at all, so I have no clue. Like when you said Whisper cpp, I'm like, is that a C file?

00:19:47 - Anthony Campolo

Like, let me... Yeah, let me explain that then. Okay, so this is great. Whisper is an open source transcription model that OpenAI released, which is funny, because OpenAI is now known for having no open source work whatsoever. They've been teasing an open source model for like six months now that's nowhere to be seen at this point. They will eventually put it out, I'm assuming. But like three years ago they created this transcription model, which is incredible. For an open source transcription model, it decimated the entire open source transcription field. It was the only thing that mattered. So that opened up a huge space. It was a Python tool, though, that was incredibly slow. So someone then rewrote it in C++, and that's whisper.cpp. It's the C++ version of the Python thing. And then also now OpenAI hosts Whisper and they have a Whisper API endpoint, so you can send audio directly to their endpoint; you can pay them to use their Whisper model. And then other transcription services host Whisper as well, since it's an open source model.

00:20:46 - Ryan Carniato

Okay, okay, yeah, so it is a C++ reference. It was just funny because I was thinking of the file extension and I'm like, am I downloading a C++ file? Like, just dot cpp?

00:20:56 - Anthony Campolo

Like anyway, if you use the AutoShow CLI and you want to use it with whisper.cpp, then you will actually have to build a C++ kind of thing in your project. But there's a setup script to kind of do all that for you.

00:21:11 - Ryan Carniato

Okay, yeah, so people still use C. That's. That's good to know.

00:21:16 - Anthony Campolo

Faster than Python. Yeah, as far as I know, it's interesting.

00:21:23 - Ryan Carniato

No, I know. I've heard that they've actually improved a ton over the years. Last time I looked was actually when, I think, version 14 came out. And it's like crazy, the type of stuff they incorporate, a lot of stuff that was in C and stuff. Anyways, I'm not here to talk about backend languages. It was just something that was kind of, I was like, what? Okay, so. All right.

00:21:43 - Anthony Campolo

Yeah. And part of that is because the transcription process is very slow, and running your transcripts through whisper.cpp on the largest model, even the C++ version, takes like an hour and a half. So that's another thing to keep in mind: the trade-off and time factor when you're going with an open source one versus a paid one. For you, if you did a stream, you could just kick it off right after the stream's done, come back like an hour and a half later, and then you'll have it generated. Think of it like a background thing that runs for a while and then eventually delivers it to you. If you want to use one of the paid services with their APIs, then you can get something like that within a couple minutes, like a reasonable amount of time.

00:22:20 - Ryan Carniato

Okay, okay. Okay. So just understanding the pieces. The first piece of the puzzle here is getting the transcript and.

00:22:26 - Anthony Campolo

Exactly. Yeah. And that's a pretty big part. Although I should try to test out just pulling your YouTube transcriptions and comparing them, because that would simplify the thing quite a lot. But this is all stuff that is not really super duper important to the final output. And what Dev is saying here makes sense: you have either expressive transcription, so a high quality, really well written transcript with speaker labels, or an expressive LLM that is able to infer the meaning through the text lines itself.

00:22:55 - Ryan Carniato

I think he actually said expensive. I think he was probably.

00:22:59 - Anthony Campolo

Sorry, yeah, I read that wrong. But expressive versus expensive, it's basically the same thing. The more expensive it is, the more expressive it is.

00:23:05 - Ryan Carniato

Yeah,

00:23:08 - Anthony Campolo

yeah, okay.

00:23:09 - Ryan Carniato

Yeah, I mean, that's an interesting thing. You get hit on either side, I guess. But you've been building these. Are these wrappers, or is it like, like, the pieces...

00:23:21 - Anthony Campolo

I describe it as a pipeline, because it starts by running the... well, actually, okay, I didn't even talk about this part yet, and this might be kind of interesting too. Maybe not. Downloading the video from YouTube is a crazy hard problem, because there's one specific library everyone uses called yt-dlp. Have you heard of this?

00:23:45 - Ryan Carniato

No.

00:23:46 - Anthony Campolo

Yeah. Okay, so it has like 50,000 stars. It's an extremely large, very highly used open source thing. It's basically a tool that allows you to give it a YouTube URL, and it will just download the video as a video file onto your computer. And it's something you'd think should be so simple, and that there should be ten different libraries that do this; it's essentially the only one that does it. And there are a lot of libraries that kind of wrap it and do a million different things with it. And there are certain APIs you can pay for, like unwrap it API, that will download for you. There are also desktop apps, like Downie, which I use, that can download YouTube videos. But it's a thing that's very challenging, because YouTube just makes it very hard to do. They go out of their way to make it extremely challenging. And if you want to use this tool in production, you have to do all this crazy stuff like IP routing and stuff. So anyway, we don't need to go too far into this, but the first step is downloading the video from YouTube.
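The download step can be sketched roughly like this. It's a minimal sketch assuming yt-dlp is installed and on your PATH; `buildDownloadArgs` is a hypothetical helper, not AutoShow's actual code, though `--no-playlist` and `--output` are real yt-dlp flags:

```typescript
// Build the argument list for a yt-dlp invocation.
// The helper name and output template are illustrative.
function buildDownloadArgs(url: string, outDir: string): string[] {
  return [
    "--no-playlist",                        // just this video, not a whole playlist
    "--output", `${outDir}/%(id)s.%(ext)s`, // name the file after the video ID
    url,
  ];
}

// Usage with Node's child_process (not run here):
// import { execFile } from "node:child_process";
// execFile("yt-dlp", buildDownloadArgs("https://www.youtube.com/watch?v=abc123", "./downloads"));
```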

00:24:38 - Anthony Campolo

Then the second step is once you have the video.

00:24:41 - Ryan Carniato

Maybe I should be writing this down.

00:24:43 - Anthony Campolo

Yeah, we should do an Excalidraw for this, actually. It's a five step process; it would be perfect for five boxes. Do you want to do that?

00:24:50 - Ryan Carniato

Okay, let's. I'm just gonna. Yeah, let me share my screen for a second. Yeah. Because yeah, we're not talking about stores or error catching. Let's just find some space here. Okay.

00:25:00 - Anthony Campolo

So yeah, so it's going to be a five step process.

00:25:04 - Ryan Carniato

Video trans.

00:25:06 - Anthony Campolo

And someone asked in the chat, is youtube-dl still a thing? yt-dlp is the successor to youtube-dl. youtube-dl is now a defunct project, and yt-dlp forked that project and continued building on it. So all the old flags and options you would have used with youtube-dl are available, in a kind of somewhat deprecated form, in yt-dlp. But at this point, if you've never even heard of youtube-dl, none of that matters. You just use yt-dlp and do things their way in terms of their options and flags and stuff.

00:25:40 - Ryan Carniato

Okay, so sorry, first step, download YouTube video.

00:25:44 - Anthony Campolo

Download YouTube video. Yep.

00:25:45 - Ryan Carniato

And for notes here, sorry.

00:25:48 - Anthony Campolo

Yeah. So yt, like YouTube, then dlp: y-t-d-l-p.

00:25:55 - Ryan Carniato

Okay.

00:25:56 - Anthony Campolo

I'm not sure why it's P and not DL but yeah. And then extract audio from video.

00:26:06 - Ryan Carniato

Is that step two?

00:26:07 - Anthony Campolo

So still within the first step, I would say so kind of like. Yeah. As another. Yeah.

00:26:12 - Ryan Carniato

Extract audio from video from video.

00:26:16 - Anthony Campolo

Yeah, yeah.

00:26:16 - Ryan Carniato

What do you used to do that,

00:26:18 - Anthony Campolo

So yt-dlp includes FFmpeg, or you just also include FFmpeg and do that. So you get the video, and you just do an FFmpeg command to extract the audio. And this is important, because you have to extract a WAV file in a certain specific format to get it to work with whisper.cpp. If you're using the transcription APIs, you can hand them almost any audio or video file you want. You can give a video file to Deepgram or AssemblyAI and it will just give you the transcript for it. But for Whisper you have to give it a specific type of WAV file. So that's why this extract-audio-from-video step is somewhat important.
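The extraction step can be sketched as an FFmpeg argument list. whisper.cpp expects 16 kHz, mono, 16-bit PCM WAV input, which is what these flags produce; `buildFfmpegArgs` is a hypothetical helper, not AutoShow's code:

```typescript
// Build FFmpeg arguments that convert a video into the WAV format
// whisper.cpp expects: 16 kHz sample rate, one channel, 16-bit PCM.
function buildFfmpegArgs(videoPath: string, wavPath: string): string[] {
  return [
    "-i", videoPath,     // input video
    "-vn",               // drop the video stream, keep only audio
    "-ar", "16000",      // resample to 16 kHz
    "-ac", "1",          // downmix to mono
    "-c:a", "pcm_s16le", // 16-bit signed little-endian PCM
    wavPath,
  ];
}

// Usage (not run here):
// execFile("ffmpeg", buildFfmpegArgs("stream.mp4", "stream.wav"));
```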

00:26:53 - Ryan Carniato

Okay. Okay. Yeah, yeah. It's been years. Back when I used to, you know, download and pirate everything, because I still do that, I did a lot of encoding and stuff back then, because I actually shared stuff. Whereas these days if I get stuff it's for myself, whereas back in the day not everyone was capable of doing this kind of stuff. But okay: FFmpeg, take the audio out, specific WAV format, use it for Whisper. Whisper.

00:27:25 - Anthony Campolo

Whispers? Yeah, you wrote Whispers with an S. No, single Whisper. Then whisper.cpp, so it's clear that's for the CPP one in particular.

00:27:36 - Ryan Carniato

Sorry, am I spelling it wrong?

00:27:38 - Anthony Campolo

So whisper and then dot and then the letters cpp. Okay, yeah, that's it.

00:27:44 - Ryan Carniato

Yep.

00:27:44 - Anthony Campolo

Yeah, yeah.

00:27:46 - Ryan Carniato

And then wave format. Okay, sure.

00:27:53 - Anthony Campolo

Yeah, you got it. Cool. And then step two is run transcription.

00:27:59 - Ryan Carniato

Transcription which is maybe whisper. Maybe something.

00:28:04 - Anthony Campolo

Either Whisper or AssemblyAI. I would just say AssemblyAI; we'll keep it simple and only

00:28:11 - Ryan Carniato

talk about is it like, like this or is it like asm,

00:28:18 - Anthony Campolo

it's capital AI without the period.

00:28:22 - Ryan Carniato

All right, like SolidJS. Got you.

00:28:25 - Anthony Campolo

Yeah, exactly. And so that would be either open source and free, or third-party hosted and paid. That's kind of the important distinction there. Ah, yeah, see, there's this huge lineage of different forks of these different projects, and yt-dlp is where it has ended up. That's some good knowledge; I did not know that. Thank you.

00:28:48 - Ryan Carniato

Yeah, I love how a number of people in chat just know this stuff. This is like complete news to me. The only thing I've heard of so far is FFmpeg.

00:28:59 - Anthony Campolo

And I learned all of this over the course of the last year and a half, as I said, because I had this question of how do I create the timestamps. So that's why I'm saying, the journey this has taken me on, the tech I've learned, it's been absolutely fascinating. So, yeah. Theo talks about yt-dlp all the time, that's what Dev is saying. That doesn't surprise me.

00:29:21 - Ryan Carniato

Yeah. I have to admit, maybe this is bad and I should check into it, just where I've been at, but I've ignored everything Theo's ever done on AI and most of the stuff that he's done on his content setup, because it's usually beyond where I'm at.

00:29:37 - Anthony Campolo

So Theo has basically gone all in on AI. The LLM stuff especially, which we're going to get to, is very similar to the journey I went on in terms of trying out all the different models and using all the different AI tools. So you're going to get a huge chunk of that through my opinions, obviously, instead of Theo's, but there's going to be a ton of overlap.

00:29:56 - Ryan Carniato

I was just going to comment that I watched him for his JavaScript framework stuff, and recently I haven't had the opportunity to watch as much of his stuff, because he doesn't talk about it anymore.

00:30:06 - Anthony Campolo

And so Fuzzy has a comment here, and Fuzzy, by the way, knows more about this project than anyone else except me; he's watched almost every stream I've ever done about AutoShow, so he's like a total pro. He says he's pretty certain Anthony made this so he can get transcripts for Ryan's back hacks and streams. And that is one of the first things I did: after I used it on my own content, I used it on Ryan's content. So he's totally right about that.

00:30:33 - Ryan Carniato

I use this thing to clean up audio and add subtitles for a video I want to watch. Download YouTube, split audio, track back voice and add subtitles. Okay.

00:30:40 - Anthony Campolo

Yeah, yeah. And that's another thing whisper.cpp does. It can give you either a transcript in a written format in a file, or it can give it to you in specific subtitle formats; SRT and VTT are two of the main ones. So you could run it on a video and then you have subtitles for the video. If that's important to you, say you need to create a quick one and you don't have a way to auto-generate it outside YouTube, that's actually a really cool use case for whisper.cpp.
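For reference, the SRT format whisper.cpp can emit uses `HH:MM:SS,mmm` timestamps. A minimal sketch of formatting seconds into that shape (`toSrtTime` is a hypothetical helper, not whisper.cpp or AutoShow code):

```typescript
// Format a time in seconds as an SRT timestamp, e.g. 3661.25 -> "01:01:01,250".
function toSrtTime(seconds: number): string {
  const ms = Math.round(seconds * 1000);
  const h = Math.floor(ms / 3_600_000);
  const m = Math.floor((ms % 3_600_000) / 60_000);
  const s = Math.floor((ms % 60_000) / 1000);
  const rem = ms % 1000;
  const pad = (n: number, w = 2) => String(n).padStart(w, "0");
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(rem, 3)}`;
}

// A single SRT cue then looks like:
// 1
// 00:00:01,000 --> 00:00:04,500
// Hey everyone, welcome back to the stream.
```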

00:31:11 - Ryan Carniato

Very cool.

00:31:13 - Anthony Campolo

Now we should go step three, Step three is insert prompt.

00:31:25 - Ryan Carniato

Okay.

00:31:27 - Anthony Campolo

Yeah, and you should do bullets after this instead of parentheses; we're gonna do a list here. So the first thing I have it do is give a short and a long summary.

00:31:39 - Ryan Carniato

Sorry, short and long summary.

00:31:44 - Anthony Campolo

So what that means is, I want it to describe the entire video in under 180 characters, because that could be like a meta description if you think about it. And then I also want it to describe the video in a full paragraph, so three to five sentences. That gives you a very quick one-liner description of the whole video, and then a slightly longer, more in-depth description of the video.

00:32:06 - Ryan Carniato

Okay.

00:32:08 - Anthony Campolo

Yeah. So what I'm writing right now is the default prompt. AutoShow comes with like 40 prompts at this point that you can generate with, but this is the default prompt. So yeah, short and long summary, and then you get chapter titles and descriptions.
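As a rough illustration, a prompt asking for those pieces might read like the following. The wording here is hypothetical, an approximation of what Anthony describes, not AutoShow's actual default prompt:

```typescript
// Hypothetical approximation of the default prompt described above:
// a short summary, a longer summary, then chapters with descriptions.
const defaultPrompt: string = [
  "Write a one-sentence summary of the episode in under 180 characters,",
  "suitable for use as a meta description.",
  "Then write a longer summary of three to five sentences.",
  "Then write chapter titles with timestamps, each followed by a",
  "one-paragraph description of that chapter, formatted as markdown.",
].join("\n");
```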

00:32:28 - Ryan Carniato

So when you're talking about insert prompt, what are we inserting the prompt into?

00:32:33 - Anthony Campolo

Yeah, so what I'm saying. So yeah, so I know this is

00:32:35 - Ryan Carniato

general, but you're saying basically you could take.

00:32:38 - Anthony Campolo

This actually works in a very specific way, the way I have my pipeline set up. After you have the transcript, you have a chunk of text. You also will have a front matter chunk that is extracted from the YouTube video: it will grab your YouTube URL, your YouTube channel's title, the title of the video, the date the video was released, and the cover image of the video. It'll grab all that and create a front matter block for you. Then it will have the transcription, and in between the two it will insert the prompt. The prompt has two pieces. It has the description of what I want, so plain English words saying I want a summary written in this many words, and then a written example in markdown, with each section written out, with one sentence, with one paragraph. It's just a generic example of what I think the output should look like. That is then inserted in between the front matter and the transcript, and that entire three-part piece is what is given to the LLM.

00:33:35 - Anthony Campolo

And then the LLM reads the full thing and gives the output. I then take the output, put it where the prompt was, and remove the prompt, so that what you're left with is a page with front matter, the LLM output, and the transcription under it.
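The assembly just described (front matter, then prompt, then transcript goes to the LLM, and the prompt is later swapped for the LLM's output) can be sketched like this; the function and field names are hypothetical, not AutoShow's actual code:

```typescript
// Minimal sketch of the three-part markdown assembly described above.
interface FrontMatter {
  title: string;
  url: string;
  date: string;
}

function renderFrontMatter(fm: FrontMatter): string {
  return `---\ntitle: "${fm.title}"\nurl: "${fm.url}"\ndate: "${fm.date}"\n---`;
}

// What gets sent to the LLM: front matter, then the prompt, then the transcript.
function buildLlmInput(fm: FrontMatter, prompt: string, transcript: string): string {
  return [renderFrontMatter(fm), prompt, transcript].join("\n\n");
}

// The final file: same layout, but the prompt is replaced by the LLM's output.
function buildFinalFile(fm: FrontMatter, llmOutput: string, transcript: string): string {
  return [renderFrontMatter(fm), llmOutput, transcript].join("\n\n");
}
```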

00:33:49 - Ryan Carniato

Okay, okay, yeah.

00:33:56 - Anthony Campolo

So after this part, step four is, then give all of this to the LLM.

00:34:05 - Ryan Carniato

Okay. So when you say insert prompt, you actually meant like into the.

00:34:09 - Anthony Campolo

Yeah, it's just a big markdown file. So you're working with a markdown file with front matter and the transcript, and then the prompt is inserted under the front matter but above the transcript.

00:34:18 - Ryan Carniato

Okay.

00:34:20 - Anthony Campolo

And after we write all this out, I can do a quick code demo and it'll make more sense once we kind of see this visually happening.

00:34:25 - Ryan Carniato

Yeah, and then. Sorry, what did you say? Give it to the LLM.

00:34:30 - Anthony Campolo

Yeah, I would say "run LLM processing" is kind of the phraseology I usually use for this. And this can also be... you can use open source models; I wouldn't really recommend it, they're not that great. Unless you're actually running a Linux server in your house, you're probably not going to have a very powerful model. So usually what you're going to use is something like ChatGPT, Claude, or Gemini.

00:34:58 - Ryan Carniato

Gemini. Gemini is good. Sorry, I don't know anything about this. Gemini's Google's one that I saw that show up recently.

00:35:07 - Anthony Campolo

Yeah, Gemini is pretty legit.

00:35:10 - Ryan Carniato

Okay. The other two I've seen people like talk about for ages. Obviously Chat GPT, it's been years. Claude has been at least. Yeah, I don't know.

00:35:18 - Anthony Campolo

Yeah, so ChatGPT came out first, in November of 2022. Then Claude was kind of right on its heels, around the same time. So they've been around now for, you know, what is that, like two and a half years? And Gemini is probably about a year old. Google originally put out an LLM called Bard, which has been totally memory-holed, because it was an absolute embarrassment; it was really extremely bad. And then they got their act together and shipped a really powerful model that is essentially up to snuff with ChatGPT and Claude. Which one you're going to use is almost a culture thing, really. I find for someone who's already all the way into Google tech, you'll naturally be led towards Gemini. If you're not, and you were already into ChatGPT, you can use ChatGPT. And then a lot of people get led to Claude, because Claude is just extremely, extremely good. I use Claude exclusively for coding, and it was what I used for a long time to generate the outputs for AutoShow.

00:36:24 - Anthony Campolo

I still use Claude and ChatGPT kind of interchangeably in a lot of ways. What I'm going to show you, once we actually finish this part, is the outputs for your chapter titles from two different ChatGPT models and a Claude model. But with all this stuff, you kind of have to just try them all. You'll get a feel for different things and see which one you think works best for you and your specific use case. They're all within spitting distance of each other, and they're all constantly putting out new versions of their models, like every three months or so.

00:36:54 - Ryan Carniato

So there is a question here, and I actually realized this afterwards: I have to go back and edit some of the timestamps you gave me. I definitely have places where instead of Vue, V-U-E, it has V-I-E-W, like, literally in my timestamps right now.

00:37:09 - Anthony Campolo

Yeah, this is a good question. So it will be fairly accurate off the bat. There are ways to feed it vocabularies of words. So you could create a dictionary of all the main JavaScript kind of words you're going to be saying throughout the show, and then feed that to Whisper, or to the paid transcription services as well. So it's a question of how much upfront work you want to do versus how much editing after the fact. You can front-load that work by creating that library of words it's going to need to reference and giving it to the model before you generate the transcriptions, or you can just run it with the default settings and then see what it gets right and wrong. This is a big challenge, and this is where choosing different transcription things may come more into play. I find whisper.cpp usually does get a lot of the technical terminology mostly right, but it's never going to be 100% correct. So that's why I say almost everything this tool is doing for you is, like he's saying, the 80% solution, the 90% solution, trying to get you most of the way there.

00:38:22 - Anthony Campolo

So then there's minimal human editing effort afterwards, but that last mile is the most extremely challenging part. And at a certain point it just doesn't make sense anymore, because you could just pay for a human to transcribe it for you, and then it will be 100% correct, but you're gonna pay like a hundred bucks, you know, for however long it takes at that person's hourly rate.
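The vocabulary idea can be sketched as building an initial-prompt string from your domain terms. whisper.cpp accepts an initial prompt (its `--prompt` option) that biases transcription, and the paid APIs have comparable custom-vocabulary features, though exact parameter names vary; `buildVocabPrompt` is a hypothetical helper:

```typescript
// Join domain terms into a biasing prompt so the transcriber is nudged
// toward the right spellings (e.g. "Vue" rather than "view").
function buildVocabPrompt(terms: string[]): string {
  return `Technical terms that may appear in this talk: ${terms.join(", ")}.`;
}

// Usage (flag assumed from whisper.cpp's CLI, not run here):
// whisper-cli --prompt "<result of buildVocabPrompt([...])" -f stream.wav
```

For example, `buildVocabPrompt(["SolidJS", "Vue", "JSX", "Svelte"])` yields one sentence listing the terms, which you front-load once instead of fixing every misspelling by hand.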

00:38:47 - Ryan Carniato

Yeah, okay. Yeah, no, all this seems challenging But I mean, I, if the output's valuable, then it's worth it. So I mean let's.

00:39:00 - Anthony Campolo

Exactly. Yeah.

00:39:01 - Ryan Carniato

Cool.

00:39:02 - Anthony Campolo

So that's basically it. You could put step five, which is essentially "print results." So I guess step five would be "output complete markdown file," is what I would say, with front matter, the LLM output, and the transcription.

00:39:21 - Ryan Carniato

matter. LLM, sorry, yeah.

00:39:27 - Anthony Campolo

LLM output and then transcription. Yep. And Alex is saying here in the chat, "I was thinking of a vocabulary dictionary too, which you could share with the AI." Exactly. So if you have a couple of transcriptions already that have the correct verbiage, because you've gone through and corrected those, you can then give that to it and say, hey, read through this transcript and give me this dictionary of words. That's a super good point.

00:39:55 - Ryan Carniato

As someone who just hasn't used these very much... like, my extent of AI is literally Copilot. I've used ChatGPT a couple times and just been minorly frustrated and then given up, because I literally have no idea how this stuff works.

00:40:12 - Anthony Campolo

What I found is you have to figure out how to work with it, for sure.

00:40:15 - Ryan Carniato

Right. Like I'm saying, does it get smarter prompt after prompt, or do you have to figure out how to combine everything into one go to make sure? Because I felt like I would ask for something, and then I was like, okay, this feels conversational, so I'll try and ask for the next thing. And then it's like it forgot the first thing that I asked it.

00:40:33 - Anthony Campolo

You know, this is such an important key question. And I have a take on this; everyone's going to have a take on this, and they may be wildly diverse, so this is just one man's opinion. I think the thing that really makes them useful, really supercharges them, and fixes almost all the issues you can potentially have with them, is figuring out what you have to give it in your first message so it has the context it needs to actually solve your problem. And the way that I do that, in terms of code, is I use this tool called Repomix, where you can have it compress a lot of your code base into a single markdown file that I then include in my message about what I'm trying to ask it to do. So I don't use Cursor; I use Copilot to fix type errors. I don't really work with things that edit my code. I do everything through just a single chat window, which is, I think, kind of the idea with T3 Chat, honestly: being able to have that workflow be really, really nice.

00:41:35 - Anthony Campolo

I think there's something here in terms of what me and Theo are doing.

00:41:38 - Ryan Carniato

But yeah, I need to bug him to get the free open source license or the cheaper open source license. Deal.

00:41:45 - Anthony Campolo

And you need to get him on. Just do a stream with him about it, because trust me, that will be worth your time. We'll just say that. So I have it basically generate a large message that includes a whole bunch of context, including docs and code files about my code, before I ask it to do anything. And then I also have a written set of rules, like a Cursor rules file, that explains how I want the code to be written: what version of Node I want to use, whether I want async/await or .then, whether I want import or require. I just define the code style in basic plain English. So I give it that and my code context, and then I ask for what I want to refactor, the feature I want to write, or the bug I want to fix. I feed all of that to the LLM, and it gives me an output that, for me, 90% of the time works on the first try. The rest of the time it mostly works, but there's a certain error, so I feed it back the error and have a quick back and forth with it to work that out.

00:42:40 - Anthony Campolo

But I never have a conversation that lasts more than two or three back and forths, because the longer you talk to it, the stupider it gets. And this is one of the most important things about LLMs that no one really explicitly tells you, but we all figure out, and everyone knows: the longer you talk to an LLM, the dumber it gets. So you always want to have the shortest possible conversation you can with it, because its intelligence will degrade with every output.

00:43:03 - Ryan Carniato

Yeah, okay. It conceptually makes sense to me too, because you're broadening the context of what it thinks the solution could be, which makes it fuzzier what the options are. You're actually giving it more choices, more places to go wrong, which makes it more likely to miss than to hit. I mean, just conceptually. Yeah, I don't know what this question means, but it's a question in chat, so I'll throw it up there.

00:43:29 - Anthony Campolo

Yeah. So I can answer this. So what this is: for some reason, LLMs seem to speak XML. It's not great, but it's just the way it is. So people will put basically tags like this because, you know, you probably know way more about XML than I do, because it was a bit before my time, but it's a way to chunk large chunks of text and give them kind of conceptual.

00:43:56 - Ryan Carniato

That's what I. Yeah, exactly. It's a very clear delineation of where something starts or something ends.

00:44:01 - Anthony Campolo

Yeah, yeah, yeah. So the tool I use, Repo Mix, you can have it output either a markdown file or an XML file, which will do this. I stick with markdown just because, for me, it's the language I already speak and know really well. But I've been told and heard frequently that XML actually works better, and it will have an easier time distinguishing your code context from your instructions from your actual specific feature ask. It sounds funny. That's exactly correct. That's a funny way to think about it.
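The XML delineation being discussed can be sketched as a tiny prompt builder. The tag names here are illustrative; there is no required schema, the point is just that each section has an unambiguous start and end:

```python
def build_message(code_context: str, rules: str, request: str) -> str:
    """Wrap each section of the prompt in XML-style tags so the model
    can tell the code context apart from the instructions and the ask."""
    return (
        f"<code_context>\n{code_context}\n</code_context>\n\n"
        f"<rules>\n{rules}\n</rules>\n\n"
        f"<request>\n{request}\n</request>"
    )
```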

00:44:37 - Ryan Carniato

All right, so, yeah, where do we go next? Thanks for answering these beginner questions for me because as I said, I just don't know this stuff at all.

00:44:46 - Anthony Campolo

No, this is great. Yeah. At this point, I should just show you kind of what I now have generated for some of your streams. So. So I'm going to share my screen and I'm going to show you first what I'm actually feeding it. So you want to bring my screen up on stream, so switch my app for yours. Yep. Okay.

00:45:09 - Ryan Carniato

Wanted to make sure you're ready. Yeah.

00:45:11 - Anthony Campolo

Yep. All good. Yeah. So. Whoops. So as I was kind of describing, this is what you kind of get for the output after you run the cli. So you have first.

00:45:23 - Ryan Carniato

Thanks for blowing it up. Yeah, I. I couldn't even read it myself there.

00:45:26 - Anthony Campolo

Yeah, so first we have the front matter. So as I said, this includes the link to the video. I grabbed the last big one you did, the mutable signals part two. And then you have your channel name, your channel URL, and then the description, which I usually fill in after I get it from the LLMs; that's why this is empty right now. The published date, and then the cover image. So, you know, if you open this, we'll see this is your video right here. So that's the first thing you get. This you get from yt-dlp after it downloads the video. You can also have it grab metadata for you, and then I just write it into a text string, essentially, and stick it into a markdown file. And that's what kicks off the process. Then you have the transcription down here, which was generated with Whisper CPP. The way I do it, you'll see it looks kind of weird here. Some of the words are cut in half. This is because it's very hard to get it to give consistent line lengths in the transcript.

00:46:30 - Anthony Campolo

So you see here it's fairly consistent in that you're getting a lot for each line. I found that when I just ran Whisper CPP by itself, it's super, super granular. It would mess up and start giving me a single word or two per line, so I ended up with something that was like 10,000 lines long. It was just completely unmanageable. So what I do is I have it give me word level transcription. I had this open right here, actually. So I have it give me word level transcription, and then I have a regex that makes sure each line is roughly equivalent to a certain number of words. The downside of that is you end up with this weird thing where my regex is not quite good enough to figure out where words end. But if you're throwing the transcript out at the end, it totally doesn't matter. You've got the transcript there, and then, as I said, the prompt is injected in between the front matter and the transcription. Step one downloads the audio and gets you this, step two creates this transcription, and then step three inserts the transcript. Or, sorry, the prompt.
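The regrouping step described above (word-level output folded into fixed-size lines) can be sketched like this. The tuple shape is a simplified stand-in for Whisper's word-level output, not its actual JSON schema:

```python
def group_words(words, per_line=12):
    """Group word-level transcript entries into lines of roughly
    `per_line` words, prefixing each line with the timestamp of its
    first word. `words` is a list of (seconds, word) tuples."""
    lines = []
    for i in range(0, len(words), per_line):
        chunk = words[i:i + per_line]
        start = int(chunk[0][0])
        stamp = f"[{start // 60:02d}:{start % 60:02d}]"  # MM:SS of first word
        lines.append(stamp + " " + " ".join(w for _, w in chunk))
    return lines
```

Grouping by word count (rather than by Whisper's own segments) is what keeps a multi-hour transcript to a manageable number of lines.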

00:47:32 - Anthony Campolo

So the prompt, then, I'll just kind of read through at a high level. It starts by saying this is a transcript with timestamps. I find the best way with LLMs is to just speak to them like they're a normal person that you'd want to deliver information to. So that's the first thing you want to tell it. It would frequently ask. Claude especially would sometimes say, I'm sorry, I can't do this because it contains copyrighted material. I would then just tell it, no, it doesn't, and it would say, okay, cool, and do it. So I started building it into the prompt. These are things that I've added in over time just to fix certain stupid stuff it does. The next one most people here will understand: do not ever use the word delve. It still does, which frustrates the hell out of me, but this will make it use delve less. There's a whole joke about how lots of LLMs use the word delve. It's like a running joke, so anytime people see the word delve now, they think you wrote it with an LLM. You may or may not be aware of this meme, but yeah, I'm not.

00:48:28 - Ryan Carniato

But to be fair, I actually. That's not a word that I actually use as often as I should. I use the word explore all the time instead of delve.

00:48:36 - Anthony Campolo

But you should still use delve. People are not going to think you're a robot if you do. All right. It's a word I use a lot too. I love the word. I'm sad that we've had this word taken away from us. But anyway. And then I tell it, do not include advertisements in the summaries or descriptions. Because, as I've been building this, I've started using it on long form content like a Lex Fridman podcast or something like that. He doesn't have advertisements in the middle, but a lot of shows do. So obviously when it's writing your chapters, you don't want to describe the ad that's in your video, because that would just be kind of stupid. And then I say, do not actually write the transcript, because sometimes when I would give it the whole thing, it would write the LLM output and then it would just start writing the transcript also. And it would just keep going and writing and writing and writing the transcript. I'm like, no, don't do that. I have the transcript; you don't need to rewrite it. So that's what all these are for.

00:49:32 - Anthony Campolo

So these are all at the top level, because these are all the errors that I kept getting that I just had to specifically tell it, in plain language, don't do this. Beyond that, though, this next part I only need to do for your videos.

00:49:43 - Ryan Carniato

So here's my XML comment. Wow.

00:49:47 - Anthony Campolo

So actually I do it as a comment. That was just for me; I just wanted to distinguish that this is not part of the default prompt. This is something I then wrote in myself, because your videos now are too long for some of the LLMs. This is frustrating as hell for me, because ChatGPT o1 handled them like a champ. And for some reason, as they put out newer models, some of them have smaller context windows, because for whatever reason they have all these different models with different context windows. So there's no model on ChatGPT, except now 4.1, that I can give your entire transcript to and get an output. But the output 4.1 gives is not very good. 4.1 is a smaller, different kind of model; it's not a reasoning model. So o1 was the only model that ChatGPT had that could handle these whole things and give you good output, and they deprecated it. It's not even there anymore. So what's going to work best for your stuff now is to just chunk it in two halves. So you give it the first two and a half hours and the second two and a half hours.

00:50:50 - Anthony Campolo

You can then stitch those two sets of chapter titles together fairly easily. So in a year or two, as more models have bigger contexts. Actually, Gemini is one that might be better at this, because Gemini has a very long context. But this is why I used your streams in particular as I was building this tool: I wanted to be able to just give it this large, huge chunk of content and have it give me like 10,000 words back, but it's just really hard for some of them to do that.
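The halving step above can be sketched directly: split the timestamped transcript lines at the time midpoint so each chunk fits a model's context window. This assumes lines begin with an HH:MM:SS timestamp, like the transcripts being shown:

```python
def split_transcript(lines, total_seconds):
    """Split timestamped transcript lines into two halves at the time
    midpoint, so each half can be summarized in a separate LLM call
    and the chapter lists stitched back together afterwards."""
    def seconds(line):
        h, m, s = line.split(" ", 1)[0].split(":")
        return int(h) * 3600 + int(m) * 60 + int(s)
    midpoint = total_seconds / 2
    first = [l for l in lines if seconds(l) < midpoint]
    second = [l for l in lines if seconds(l) >= midpoint]
    return first, second
```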

00:51:19 - Ryan Carniato

How did you arrive at. Sorry, is there something special about arriving at approximately half the stream, or is this approximating? Because it just kind of gets

00:51:28 - Anthony Campolo

worse over time? So I would say it tends to do very well with one to three hours of content. And then once you get up into three or four, some of them will just straight up not allow you to have an output, because there is a certain amount of text you can give each model, and it will say this is too long, give me a smaller input. And some of them, you'll reach that easily if you hit five hours of content. But all of them can handle at least two and a half hours of content, so that's why I felt that that was right. And then I just tested. A lot of this I just test empirically. So once I realized that it could no longer take five hour videos, because they deprecated o1 and they have these now weaker models in terms of context length, I was just like, okay, what's the simplest thing I can do? I can cut it in half, get two outputs, and stitch them back together. So I tried that, and it gave me a pretty good output.
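The empirical rule being described (every model handles about two and a half hours comfortably) reduces to a one-liner for deciding how many chunks a stream needs. The 2.5-hour threshold is Anthony's empirical guess, not a documented model limit:

```python
import math

def plan_chunks(duration_hours: float, max_hours: float = 2.5) -> int:
    """Decide how many equal chunks a stream needs so each chunk stays
    under roughly 2.5 hours, the length discussed above as safe for
    every model tried."""
    return max(1, math.ceil(duration_hours / max_hours))
```

A five hour stream becomes two chunks, matching the simple halving described.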

00:52:18 - Anthony Campolo

And I've been working with one to two hour videos mostly; that's the sweet spot, I think, with the tool. So I figured that going to two and a half wouldn't get into any difficult territory for any of them. Once you get up to three hours it can be more challenging, and it wouldn't really make sense to try and cut a five hour video into a three hour and a two hour chunk. So for me, it was just a simple halving of it, so that it's in two pieces instead of one big piece. That's the thought process.

00:52:43 - Ryan Carniato

Okay, that makes a lot of sense. I can't remember if this one's the super long one, or if it was the one before that I did a six hour stream.

00:52:49 - Anthony Campolo

Actually, that's your first one. That's your first one to top six hours, so that's a new milestone. This one only went five hours and 11 minutes. Right, right.

00:52:58 - Ryan Carniato

Which is more typical? Yeah, just one sec, one sec, before we move on. This was shared in chat and I thought it was actually kind of funny. The whole delve count in articles,

00:53:11 - Anthony Campolo

it's skyrocketed in like academic papers and stuff like that, I bet.

00:53:15 - Ryan Carniato

Yeah. Yeah. This is. Or in the title or abstract, somewhere in 2023. It looks like it's slightly gone down in 2024. I guess people are aware of this,

00:53:25 - Anthony Campolo

but because they're starting to put into their prompts not to say delve, because this is a wider problem, people realized.

00:53:33 - Ryan Carniato

Yeah, yeah. This is just crazy. Anyway, yeah, I hadn't heard of this. For those of you who haven't, this is pretty funny. Anyway, back to the regularly scheduled content.

00:53:46 - Anthony Campolo

Cool. So this is the really important part. This is included with every prompt, no matter what. This is just a general instruction I give every single one of them. Then, the way I set up the tool, you can inject as many prompts as you want, and you do that either through the CLI or through the front end input. And it will insert two things. It'll give you the instructions. So there's two prompt instructions here: the short and long summary prompt. So just like we wrote in your doc, this is the short and long summary prompt: write a one sentence description that doesn't exceed 180 characters, which is roughly 30 words. So I tell it in two different ways what the length is, because that gives it even more idea of what the hell I'm actually asking it to do. Because sometimes they can count characters, sometimes they can't, you know. And then I have it give a one paragraph summary, approximately 600 to 1,200 characters, roughly 100 to 200 words.
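Since models don't reliably count characters, it's worth checking their output against the budget the prompt asked for. A minimal sketch; the specific limits (180 characters for the description, 600 to 1,200 for the summary) come from the prompt described above:

```python
def within_spec(text: str, max_chars: int, min_chars: int = 0) -> bool:
    """Check generated copy against the character budget the prompt
    requested, e.g. a one-sentence description under 180 characters."""
    return min_chars <= len(text) <= max_chars
```

Output that fails the check can simply be regenerated or trimmed.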

00:54:40 - Ryan Carniato

So does this mean that the summary is generally based on the first half of the stream, like the first two and a half hours?

00:54:48 - Anthony Campolo

Yeah, so that's one thing. There are models that can take the five hour chunk. The reason I have to break it up is for the chapters, not for the summary. If you wanted just the summary and not the chapters, you could feed this whole thing to ChatGPT 4.1 and it would work. It could give you a summary of the whole thing.

00:55:08 - Ryan Carniato

Usually my stream focus is in the first two and a half hours, and then afterwards it's just random crap and This Week in JavaScript. So this works perfectly fine. It's just, I was curious.

00:55:17 - Anthony Campolo

Okay, yeah, no, yeah, that's a good point. And because yours are just longer or more complicated, probably what I would do is two runs. I would do one that gives you the summary for the whole thing, and then I would do another for the chapters, which would break it in half. And then I would combine all of that into the final output, so you have the summary and then you have the chapter titles and descriptions. So this includes a whole lot of stuff I don't necessarily need to read out, where we're just explaining to it how the structure should look. Because I wanted it to have a very specific format in terms of the time, like actual timestamps. It needs to be hours, minutes and seconds, because if it doesn't do that, then YouTube's. Okay, so this is one of the cool things about the tool. I've designed it in a way where you can just copy the chapter titles you get out of it and paste them directly into YouTube, and YouTube makes chapters out of that. So you probably know this, that if you write the

00:56:10 - Anthony Campolo

The timestamps in your YouTube description, it then creates links automatically that you can click on. Right?

00:56:17 - Ryan Carniato

Yeah, exactly. That's the key.

00:56:20 - Anthony Campolo

Yeah, but that doesn't work if it's 70 minutes and 10 seconds instead of one hour, 10 minutes and 10 seconds. Okay, yeah, so that's what this prompt is for. But anyway. And then I tell it the same thing, that it needs to be this long. You can do shorter chapter descriptions if you just want like a one-liner, and you don't actually need to do chapter descriptions at all. You could just get the chapter titles. I like getting the chapter descriptions because what I like doing is basically turning each video into a piece of content that I put on my blog. So if you go to ajcwebdev and you go to my videos, these are all my AutoShow outputs, where I have the episode description, which is one sentence, I have the episode summary, which is about a paragraph, and then I have the chapter titles and descriptions. So you could read through this and it would be like a description of the entire episode, which is not quite the same as just getting the chapter titles, because then you're just getting a broad idea of what they're talking about in this section of the video.
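The timestamp constraint just described (one hour, 10 minutes and 10 seconds, never 70 minutes and 10 seconds) is a small formatting rule worth pinning down, since YouTube only turns properly formatted timestamps in a description into chapter links:

```python
def to_youtube_timestamp(total_seconds: int) -> str:
    """Format a duration in seconds as H:MM:SS, the shape YouTube
    needs in a description line to create a clickable chapter link.
    4210 seconds must come out as 1:10:10, never 70:10."""
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{seconds:02d}"
```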

00:57:21 - Anthony Campolo

And then usually when you click it, you'll actually watch that section, whereas I'm kind of creating something where this could substitute entirely for even needing to watch the video. You could read through this and basically get the gist of what is discussed throughout. Like, this is on Goose, which is, you know, an AI agent kind of tool. This was actually a super fun stream. But yeah, so then there's the instructions, and then there's the example. So this is where I actually give it the output. So this is what I was just showing you on my blog, but written in markdown format. You have the episode description, you have the episode summary, and you have the chapters. Those are all H2s, and then each individual chapter title is an H3. So there's a markdown structure to the output as well. So that's the whole thing. That's the whole document. Just to zoom out so people can see the whole thing: front matter, prompt, and then transcript. That is what gets fed to the LLM. So before we go to feeding it to the LLM, do you have any questions or comments?
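The H2/H3 output structure just described can be sketched as a small assembler. The field names are illustrative; the heading levels match the structure Anthony describes (description, summary, and chapters as H2 sections, each chapter title as an H3):

```python
def assemble_markdown(description: str, summary: str, chapters) -> str:
    """Assemble the final markdown output. `chapters` is a list of
    (timestamp, title, chapter_summary) tuples."""
    parts = [
        "## Episode Description", "", description, "",
        "## Episode Summary", "", summary, "",
        "## Chapters", "",
    ]
    for timestamp, title, chapter_summary in chapters:
        parts += [f"### {timestamp} - {title}", "", chapter_summary, ""]
    return "\n".join(parts)
```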

00:58:23 - Ryan Carniato

No, I think this makes sense. I'm following so far.

00:58:29 - Anthony Campolo

Great. Now let's look at what we're getting here. This is where I kind of want to talk about the different models in terms of what you can get. So you see here.

00:58:39 - Ryan Carniato

Yeah. Alexander was asking how accurate they were, and he was trying to do stuff for his podcast. But I think maybe we should just see that and then answer that after we go a little bit further along.

00:58:49 - Anthony Campolo

Totally, yeah. So you're going to see here a wide range of outputs, from not useful at all to hopefully more useful. So I was talking about how ChatGPT o1 was like the ultimate model. I'm showing here ChatGPT 4.1, which is a model that can take the whole thing. So you see here, I was able to get it to take the entire transcript in one go, but the problem is it's not going to give very good chapter titles. So the description. Let's check this out first, because these should be fairly good. The way they're written is not going to be in your voice, obviously. It's a detached voice of God, where it's just describing it from a neutral third person kind of way. So you may not like the exact voice it gives you, and you can modify it; there's ways to prompt it in a way that can give it your voice. So the thing it says here is: Ryan Carniato explores the latest advancements in mutable signals and fine grained reactivity, diving into store design, projections, and the practical and theoretical challenges of state management in modern JS frameworks.

00:59:54 - Ryan Carniato

So that's a pretty good description. That sounds pretty good.

00:59:57 - Anthony Campolo

It's basically accurate. It sounds kind of generic, and most people would read it and kind of sense it's written by an AI, but it's not wrong, and it delivers the information to you in a way where you at least get the point of what it's about, you know. Then you have a longer summary. So here it says: In this episode, Ryan Carniato continues his exploration of mutable signals, focusing on the nuanced design decisions behind stores, projections, and derivations within the context of SolidJS and broader reactivity frameworks. He discusses the evolution of state management patterns, examining the trade-offs between immutable and mutable data structures and the implications for performance and API ergonomics. Ryan emphasizes the importance of reconcilability, stable references, and source guarding in store implementations, drawing comparisons to practices in React, Vue, Svelte, and Preact. He also highlights the challenges of efficiently mapping and diffing nested data, the pitfalls of over-relying on compilers and language abstractions, and the quest for universal primitives that balance power and clarity. Throughout, Ryan shares practical benchmarks, code patterns, and deep insights into architectural decisions, encouraging a thoughtful approach to both theoretical design and real world applications.

01:01:05 - Anthony Campolo

So again, it sounds like an LLM, but there's a ton of information in there.

01:01:10 - Ryan Carniato

Actually that's a decent back of the book, you know like when you, when you flip the book over the back cover.

01:01:19 - Anthony Campolo

Yeah, yeah. So it seems like you would find this useful. It is giving you an output that is not just total trash, you know, so that's good. Now here's the issue, though: the chapters. There's not a lot of chapters. So it has a chapter at the 0 mark, the 7 minute mark, the 24 minute mark, the 54 minute mark, the hour and a half mark, the 2 hour 51 minute mark, the 4 and a half hour mark, and that's about it. You want like 30 of these; you don't want like seven chapters. So that's the problem with ChatGPT 4.1. If I had given this to o1, it would have given me 30 chapters. But unfortunately you cannot do that with any of the current models on ChatGPT. So there's been some sort of degradation at some point in time. And swyx actually had a tweet about this saying o1 was the ultimate summarizer. But if you see now, if we break it in half, we're going to have much more fine grained chapters. So now what I'm doing is I broke them in half and I gave them to o3 Pro.

01:02:19 - Anthony Campolo

So this is a larger model that is better at breaking down the text and doing a more fine grained summary. So here we see we have chapters every six minutes, essentially. So we have the introduction and stream warmup context, Old Topics, New Urgency, Benchmark Culture, Framework Discourse, Signal Stores and the Physics Metaphor, Mutable Reactivity: Why Bother?, Incremental Computation, and Signia.

01:02:46 - Ryan Carniato

Yeah, SIGNIA is. That's right.

01:02:48 - Anthony Campolo

Yeah, yeah. I didn't watch this one because I was on my vacation. Signal of Signals, The Knockout Flashback, Joby's Model Centric Example, Stores and Motivating Redux. And for each of these chapters there's a two paragraph description of what you're doing. So this is pretty detailed, and this gives you a lot of stuff. The issue with this is that it's not really looking for when the sections begin and end; every single one is exactly six minutes. That's a huge, huge issue. This is something that the models didn't always do, and it's something that I still have to actually sit down and work out within the prompts. But if we look at some other models now, if we look at, say, Claude: Claude will break them up in a slightly more natural way. We see here we have the introduction, and it knows that's just like a minute; it's a really quick kind of thing. And then you immediately dive into a topic, and it has it go at 5:53 and then 9:32, 17:27. So you have this chapter that is only four minutes long, whereas this chapter is like eight minutes long.

01:03:53 - Anthony Campolo

So it knows that these chapters should not all be the exact same length. They should be the length of whatever the actual content is. This is why I'm saying Claude is able to do that. So I would recommend using Claude Sonnet 4 to generate your summaries. I think that will probably give you the best output at this point. But even still, you have to actually watch the video and go to these timestamps to see if it's actually correct or not.

01:04:18 - Ryan Carniato

I mean, it looks roughly like the titles are what I expect. But yeah, I guess the timing is hard to say. The only funny thing is that I thought ChatGPT o1 had a sense of humor. I don't. Maybe it wasn't trying, but the titles just seemed funnier.

01:04:33 - Anthony Campolo

They're like right.

01:04:35 - Ryan Carniato

Signal of Signals, The Knockout Flashback. Like, it was like.

01:04:39 - Anthony Campolo

That's actually a really good point. Yeah, they have different voices, and something I've started to play around with is I've created a text document that has extracted every tweet I've ever written, and I've started trying to take chunks of that and feed it to an LLM and say, write this in my voice. So that's something that I'm just beginning to delve into, in terms of how you can tweak how it speaks, what its tone of voice is, whether it's comedic or kind of just flat. That's a really good point. So yeah, that's kind of my whole presentation on this half. So what you would then do, if you didn't care about the descriptions at all: you could do a prompt that just gives you the chapter titles, and it would just output those. So let me just real quick, I'm going to run an example on a minute long video, because yours, as I said, takes like an hour and a half, just so people can see this guy in action. And this is all open source.

01:05:41 - Anthony Campolo

Anyone can just download this repo. So people have the link: GitHub.com/autoshow, AutoShow CLI. So if people want to use this guy, you check it out here, and we have a bunch of examples in the docs here. And I want to do a prompt example. So let me go to prompt options. Let me talk just briefly about how I'm thinking about the prompt. You have a pretty narrow use case: you're looking for the chapter timestamps, and as I said, that is the thing that spun me off into this whole project. But I found that once I built this pipeline, I could tell it to generate almost anything, and it could do all sorts of crazy crap. I started writing all these different prompts. So I just showed you two of the prompts: the default prompt, which is just summary and long chapters. But you could also get titles for the video. So if you want, if you first, you know, create a video, like you didn't live stream but you just filmed something and you want to have a title for it, you could do that.

01:06:53 - Anthony Campolo

You ask, and it could give you some title ideas. You could do your summary in bullet points instead of a paragraph. You can have a short summary or a long summary, and then you can have short, medium, or long chapter descriptions. Then, though, you could do things like key takeaways: what are the most important bits of information you would get from this. You could have it write like a check for comprehension test, where it'll have questions based on the material in the video. If you want to turn your videos into a paid curriculum course, you could do that. You can pull out quotes: if you want actual specific language from the transcript, like a quote that you have actually said in your own words and own voice, you can do that. And you can do like a frequently asked questions.

01:07:36 - Ryan Carniato

Just so I understand what I'm looking at here, because this is. Sorry. I see AutoShow CLI. This is what you're.

01:07:44 - Anthony Campolo

Yeah, let me, let me run one, Let me run one of these.

01:07:47 - Ryan Carniato

This is what you built and it looks like you're inputting from your RSS feed. The. Whatever.

01:07:52 - Anthony Campolo

The. Let me. Sorry, I forget which prompt I was going to do. Chapter titles. That's right. Let me do this real quick. Actually, let me do chapter quotes; that'll be interesting. So first, if you just run it without the prompt option, you'll get kind of what I showed you. This does the five step process I was describing. It starts by downloading the video and generating the markdown, and it gives you all the steps in the terminal output. It then downloads the audio and strips it. So this is kind of just logging the CLI options, so you can see what's happening in each step. It then runs the transcription, which is giving it to Whisper and creating the transcription. And then you see here, there's a progress bar as it goes. Then it selects the prompts and inserts the prompt. So this is why I was saying there's a default prompt. So the whole prompt that I've explained to you so far, that's just the default prompt. And this is the thing that is highly configurable and really gives the tool all of its power: the ability to select prompts and then run language models.

01:09:03 - Anthony Campolo

Then you have the whole thing and you feed it to the LLM, and the LLM gives you this huge output with everything. And then with that, you have this, which is kind of similar to what I had shown you: you have your front matter, you have your prompt, you have your transcription. So this is what the tool does by default. And then what the prompt options do is change this section in particular. So everything around this you always get: the front matter, the transcript. But this part right here is what you can configure. This is what changes the output. You can write all sorts of different things in different prompts. You could ask it to write you a rap song, you could ask it to write you a blog, you could ask it to write you a five day schedule of social media posts. You can ask it to write you essentially anything. So this is what the prompt configuration lets you do. So does that make sense now?
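The five-step flow walked through above can be sketched end to end. This is not the real AutoShow implementation; the `transcribe` and `llm` callables are stand-ins for the Whisper step and the model API, so the shape of the document (front matter, then prompt, then transcript, then LLM output) is the point:

```python
def run_pipeline(url, transcribe, prompt, llm):
    """Sketch of the flow: (1) fetch metadata into front matter,
    (2-3) extract audio and transcribe (folded into `transcribe`),
    (4) insert the prompt between front matter and transcript,
    (5) send the document to the LLM and append its output."""
    front_matter = f"---\nurl: {url}\n---\n"
    transcript = transcribe(url)
    document = f"{front_matter}\n{prompt}\n\n{transcript}"
    return document + "\n\n" + llm(document)
```

Swapping the `llm` callable is what corresponds to choosing ChatGPT versus Claude on the command line.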

01:09:59 - Ryan Carniato

Okay, okay, yeah. Okay, I'm stepping way back. You have a CLI that has all these preconfigurations. You run it based on these arguments, and then it does all five steps of the process.

01:10:16 - Anthony Campolo

Yeah, for you.

01:10:16 - Ryan Carniato

Yeah, exactly. And this is an open source project? Or is this something you.

01:10:21 - Anthony Campolo

So what you're looking at right now, anyone could do this. They could clone this down, they can run this, and no money is required. This is totally free. You don't even need an API key. Then you can just go to ChatGPT, even the free version, and just plop it in and get the output. If you want to use some of the services, then you need API keys. You have things like OpenAI, Anthropic, Gemini, stuff like that. Let me show you now. Instead of just running the same thing that I ran, I'm going to add chatgpt at the end. Now it's actually going to do the full process of not just giving you the prompt and transcription for you to give to an LLM; it's actually going to give it to the LLM and get the response back. So this is the fully automated flow, and as I said, you can do this for free if you use an open source LLM. There's a way to do this with Ollama. It's kind of extremely complicated, and I have implemented and de-implemented it lots of different times because it's just not worth the effort.

01:11:25 - Anthony Campolo

But just doing this with a one minute video with ChatGPT costs fractions of a cent. It's really not that expensive. What it's doing now is, instead of giving you just the prompt and everything, now we have the full thing. We have the front matter, we have the episode description, the episode summary, and the chapters. If we scroll through, you see the whole thing. The example I use is an interview I did about Lambda School. So: exploration of Lambda School, personal experiences, programming frameworks, and the value of choosing unconventional education paths. I made the case for why I dropped out of Lambda to do RedwoodJS instead. I always say I'm a proud Lambda School dropout. So this is the entire automated process. You didn't have to do anything manually; you wrote a single command, which was this guy, and it goes through each step for you, and then also gives it the prompt, gets the output, and inserts the output into the larger markdown file. Okay.

01:12:30 - Ryan Carniato

Yeah. Okay, that makes sense. So, yeah, it's almost been a necessity to build this kind of stuff, just given how many pieces there are. There are a lot of shared parts, but it's a constant process that you have to go through, which I guess with a podcast or something you do all the time.

01:12:52 - Anthony Campolo

I use it for all sorts of stuff. I use it for content I create, both my podcasts and my videos. I use it for content I listen to, like really long podcasts that I find interesting, but I always just have way too much content to listen to. I'm going to eventually turn this into a daily newsletter where I summarize all of the web and AI podcasts and give you these short summaries of each of them, so you can get a sense of what's happening in the whole ecosystem in these short little blurbs. So I'm using it for all sorts of crap. But if you see here, I gave it just the prompt for the chapters. So here you just got your chapter titles and your chapter headings. This is an example of what you would want. You would want to run exactly this command, except maybe claude instead of chatgpt. And you wouldn't do dash dash RSS, you would just do dash dash video, and then you put the YouTube video right in here. It would be, you know, youtube.com whatever your.

01:13:51 - Ryan Carniato

Oh, so this thing handles the downloading as well. Interesting.

01:13:55 - Anthony Campolo

Yeah, that's the yt-dlp part. That's step one. So it can work with videos. You can give it playlists, you could have it generate each video in a playlist, or you can even do a channel. You could do just channel and then youtube.com forward slash Ryan Carniato. That would probably take about a week to complete, so you don't actually want to do that, but you could literally run it on every single video on your YouTube channel. Then you can do it on an RSS feed. Oh, and then you could do it on a local file. If you have a file on your computer, you can use it; you don't need to grab something from the internet at all. So almost any content available anywhere, you can process through this. And a lot of that power comes from yt-dlp.
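The input handling described here — one path for anything yt-dlp can fetch, a feed-parsing path for RSS, and a bypass for local files — might look something like this sketch (the types and step names are assumptions, not AutoShow's real code):

```typescript
// Hypothetical dispatch for step one of the pipeline.
type Source =
  | { kind: "video" | "playlist" | "channel"; url: string }
  | { kind: "rss"; feedUrl: string }
  | { kind: "file"; path: string };

type Step =
  | { step: "download"; argv: string[] }   // hand off to yt-dlp
  | { step: "parse-feed"; url: string }    // resolve feed items first
  | { step: "extract-audio"; path: string }; // already local: skip download

function firstStep(source: Source): Step {
  switch (source.kind) {
    case "video":
    case "playlist":
    case "channel":
      // yt-dlp expands playlists and channels into every contained video
      return { step: "download", argv: ["yt-dlp", "--extract-audio", source.url] };
    case "rss":
      return { step: "parse-feed", url: source.feedUrl };
    case "file":
      return { step: "extract-audio", path: source.path };
  }
}
```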

01:14:39 - Ryan Carniato

Very cool. Yeah, sorry, Deb was just saying I could use this as a way of summarizing various things to figure out what to talk about in This Week in JavaScript. Before we started, you talked about looking at the newsletter, but you could actually have this look at all the newsletters that you care about.

01:14:56 - Anthony Campolo

Exactly. Yeah. And that's the thing: I have all the infrastructure to do that right now. I just have to create the Substack and start actually dropping the inputs in and hitting send. I'm really close. That's going to launch within the next month or two, hopefully. But yeah, this is the first half of the presentation. I'm going to show you Ryan GPT after this. But let me go off of my screen sharing for a second. I haven't been seeing any of the chat.

01:15:22 - Ryan Carniato

So you should really vibe code a UI for this app.

01:15:27 - Anthony Campolo

There is a UI. There is a UI, that's AutoShow.app. I can show that too. That's the paid thing. I didn't want to come here and shill an app.

01:15:35 - Ryan Carniato

No, it's okay. I understand that. I was actually going to ask this question, because at the beginning you were like, the company paid thing. And I was like, well, what you showed me is free except for the services you use. So what's the paid thing? You can show it.

01:15:52 - Anthony Campolo

It's okay. Okay, cool. Yeah. So this is available on AutoShow.app if anyone actually goes in. Sorry, the browser blocked the screen I was just sharing. What are you doing? Okay, there we go. Can you put me back up on the screen? Wait, that's the wrong one. Okay, so this is the app. I'm going to run it just locally on my machine, but this does exist on AutoShow.app if people want to go and create an account. I will say this is a beta version. This literally just launched in the last week, so you probably will hit bugs. And using the video thing versus the file thing is slightly flaky, just because I'm still working through the issues of downloading the video. So I just want to throw that out there. But if people want to use this today, they can. Let me go to just the front page real quick. This is AutoShow: transform your content into anything. This does not even work yet; I still need to create a video. As I said, super duper beta. Still lots of things to do.

01:17:00 - Anthony Campolo

But just to show the functionality of it, I'm going to upload a file, and you can now select your different transcriptions. You see different versions here. And there's a credit system. You start by buying credits, and the credits are what allow you to select whether you want higher or lower quality transcriptions. There will eventually be more options.

01:17:25 - Ryan Carniato

So from the app version, you're actually wrapping the other services. Like, I don't know what LLM you're using underneath.

01:17:39 - Anthony Campolo

But see that part in step four.

01:17:41 - Ryan Carniato

Yeah, yeah, but you. But basically you. Yeah, okay.

01:17:46 - Anthony Campolo

Yeah. So the CLI and the front end are basically doing the same thing. The CLI also gives you access to these different services, but you input your own API key to make that work, and you pay directly to the service. Whereas with this, you buy credits to use the app, and once you have credits, you have access to all the third party services. You don't have to worry about API keys or connecting to any of them.

01:18:09 - Ryan Carniato

Makes sense.

01:18:09 - Anthony Campolo

Okay, yeah. Then you have your prompt options. So as I was showing you all the prompt flags, now you just see these written out. Now, what I don't have yet but am going to have very soon is, you can hover over these and a thing will pop up showing you what each one is actually going to look like, because right now it's kind of vague what these do. So that's a thing I'm going to work on very soon. But these are all the flags that were available. So if you select short summary, long summary, and long chapters, this will give you the same thing we were getting before. I'm going to just add a couple more in here. We have social post. And then I always like doing a rap song; I think it's always fun. So there are like 40 different prompts here to pick from. And then if you want, you can actually just write your own custom prompt. If for some reason you have a weird, specific thing you want to do, you can just type that in right there. And then the final point: you pick your LLM.

01:19:02 - Anthony Campolo

So we have the same options I was showing before: ChatGPT, Claude, and Gemini. And you see there's a big discrepancy in credits. Claude 4 Opus will cost you up to 800 credits, whereas GPT-4.1 nano is only 5 credits. And this is directly reflective of the different pricing of the models. I'm not doing any kind of fakery in terms of how I'm pricing it.

01:19:24 - Ryan Carniato

Are you scaling it based on the inputs? I don't know enough about it, but does the stuff cost more based on how much it has to process, or based on.

01:19:32 - Anthony Campolo

Yeah. So the cost works out, for transcription, based on per minute of audio you give it. And for the LLMs it's based on input tokens and output tokens, which are two different costs. The inputs are very cheap; the outputs are more expensive. So the length of the transcription you give it will affect the amount of credits. If you do a one minute video versus an hour long video, that's going to be completely different in terms of both the transcription credits and the LLM credits, because the input is scaling based on how long the video is. The outputs are kind of fixed, because while they can give longer or shorter outputs, the way the prompt is written, it's usually going to give you a fairly similar amount of text. So really, the longer the content you put in, the more credits it's going to be. That's essentially the math it works out to. So after you pick your LLM, it's going to generate the whole thing. It's going to give this weird, ugly output that I still need to fix.
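The credit math Anthony describes — per-minute transcription billing plus separate input/output token rates, with input scaling on content length — works out to something like this (the rates are made-up placeholders, not AutoShow's actual pricing):

```typescript
// Illustrative credit estimator: transcription bills per minute of audio,
// LLM usage bills input and output tokens at different rates (outputs are
// pricier per token, inputs dominate because transcripts are long).
function estimateCredits(
  audioMinutes: number,
  inputTokens: number,
  outputTokens: number,
  rates = { perMinute: 1, perMillionInput: 10, perMillionOutput: 40 } // assumed
): number {
  const transcription = audioMinutes * rates.perMinute;
  const llm =
    (inputTokens / 1_000_000) * rates.perMillionInput +
    (outputTokens / 1_000_000) * rates.perMillionOutput;
  return Math.ceil(transcription + llm);
}
```

With these placeholder rates, an hour of audio with a 100k-token transcript and a 2k-token summary costs far more than a one-minute clip, almost entirely because the input side scales with length.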

01:20:35 - Anthony Campolo

But this will now be able to show you your show notes along with the prompt as well. And this should usually go faster, but it's getting there because it runs the transcription in the background. There we go. Now this is your output. This renders the markdown for you. So we see the same thing. We have our episode description, our episode summary, and we were able to get this just through clicking through this ui. So this requires no technical experience whatsoever. I just onboarded the most boomer dude you've ever met in your life, and even he was able to use it. So that is kind of the goal that I had with this. So let's look at some of the more interesting ones. So here we have the social post. Unlock real coding skills. Discover how frameworks like Redwood, React, and GraphQL can transform your learning journey. Dive in now. Hashtag web development, hashtag coding journey, right? And then here we got our rap song. Yo, I started out with React Fresh stack no slack learning modern JS Push a limit, never hold back boot camp Ignition Ignition is a small piece need the real deal the skills to unleash increase Hooked on red red hot pathways no fuss GraphQL queries direct hits got no mistrust Frameworks like a maze I traced every turn from lines of code and mastery Watch the bottle burn so, yeah, so there's your.

01:22:02 - Anthony Campolo

There's a rap song about my journey learning React and Redwood in a boot camp.

01:22:07 - Ryan Carniato

There we go. I already hear the 8 Mile song. Anyways. Okay, I got you.

01:22:16 - Anthony Campolo

All right, so yeah, this is the app, AutoShow.app. If people want to check this out, please send me bugs if you hit bugs. You will hit bugs. This is very, very new, very beta. If you want to try it out and kind of QA it for me, I can send you a promo code to give you credits for free. Totally down to do that. If you want to try it out but don't want to pay, just understand it's a beta thing. Just DM me. Totally down for that. So yeah, thanks for giving me the opportunity to show that.

01:22:47 - Ryan Carniato

Very cool. Yeah.

01:22:48 - Anthony Campolo

And it's built with Solid. Solid and Astro. That's a Solid front end you just looked at.

01:22:55 - Ryan Carniato

Nice. Nice. Yeah. It's funny, when we were talking about the vibe coding, Jonathan was like, that's all it might be okay for. But you actually built it yourself, right?

01:23:07 - Anthony Campolo

That's a great question. I would say I was vibe coding at least a year before the term existed. This entire app I have built with LLMs, kind of helping me get there. So I wouldn't necessarily say I vibe coded it, because when people say vibe code, they're like, oh, so you, like, plugged it into a friggin Bolt and then just one shot and then you had an app like. No. I spent a year like building this and working through all sorts of different tech. I originally had a Fastify back end and a React front end and I turned that into an Astro Full Stack kind of thing with a solid front end. And I've pulled in, pulled out all sorts of stuff, but I was using an LLM kind of as I was coding the whole time because I have found it very, very useful because I, unlike Ryan, am not a God of coding who can do anything he wants anytime. So it's very useful for me to have an LLM that can help me work through all sorts of weird bugs and issues as I've been building this. Yeah.

01:24:02 - Ryan Carniato

No, okay. No, I mean, it's cool. And the Astro Solid combination obviously is an awesome combination.

01:24:08 - Anthony Campolo

So I tried Start at 1.2, but then I heard that there's not going to be a Solid Start anymore. I don't know if that's true. I heard it was merging with TanStack or something.

01:24:16 - Ryan Carniato

That's not exactly true. The thing is, the core, like the Nitro piece, or what I want to call the Nitro-plus-plus piece, was originally developed building Solid Start. Then TanStack took it and refactored it to make it better, and that will get back. Both projects actually have different goals, which I'm hoping integrate with each other eventually. But TanStack wants to be a generic framework for all the JavaScript frameworks, obviously.

01:24:44 - Anthony Campolo

I think Dev told me this, so that's the source of the misinformation. Just so we're clear.

01:24:49 - Ryan Carniato

Okay. Because basically React gets special treatment in TanStack land. Solid gets about as good treatment as you can get.

01:24:58 - Anthony Campolo

So Solid Start can continue to be an independent project that's going to develop.

01:25:02 - Ryan Carniato

It's our starter I want to create. It's our Create React App or Vite.

01:25:06 - Anthony Campolo

Ryan GPT is built with Solid Start front to back. So we'll get to that. Is it cool if I take a quick, like, two minute break to use the bathroom, and then we'll do the Ryan GPT part?

01:25:16 - Ryan Carniato

Yeah, sounds good. Good idea. Pop off for a minute, that's cool. I sometimes forget; I get so into my stream that I rarely actually end up leaving here. Anyway, just to clarify further in that gap.

01:25:35 - Anthony Campolo

Yeah.

01:25:35 - Ryan Carniato

Solid Start is not done. Solid Start's built in such a way that it's to support all Solid libraries, right? I've talked about this before on stream a bit. Its whole goal was to prove that you could be router agnostic, where TanStack Start is very tied to TanStack Router. It was to show you could be, like, render framework agnostic. So there's similarities there, but the goals end up being different, right? Like, I want people to be able to use Solid Start and not even have a router. You know what I mean? It's lower level. If Vite continues to raise the bar on what they offer on their server rendering, Solid Start will start to shrink, shrink off the map. You know, things like Nitro is a Vite plugin, as you mentioned. If we can just shrink the baseline of what is needed on the bottom, Solid Start gets smaller. But the truth is, today there's a gap between a basic Vite setup and something that has full SSR in its consideration. Right? Because every framework's SSR is a little bit different, right?

01:26:44 - Ryan Carniato

So we need a new type of Create React App or Create Solid App or whatever. One that has SSR in its DNA but isn't so opinionated as a meta framework. I know Solid Start has a lot of opinionated pieces, like the file system routing, but even Solid Start was actually built in such a way that the file system routing could feed into any router. A lot of inspiration there actually came from an old Vite plugin that was basically generating file system routes that could be plugged into a router. We kind of realized we have the same goals, and at that point it's just a toggle: whether you like Remix style flat routing, or more classic looking folder based routing like we use in Solid Start, or, I guess, the old Nuxt structure or Sapper, or the new style, like naming all my files the same thing or putting plus signs in front of them. That's kind of a you choice. But mechanically you still need file system routing. You can't just not include it, because of the convenience that comes from automatic code splitting and bundling considerations.

01:27:53 - Ryan Carniato

Anyways, it's a whole other thing. But yeah, I got to plug that, so that's good. And Anthony is back. And yeah, Astro plus anything is a solid choice regardless. That is true. Although, I mean, if you put something exceedingly large on it, like React, you might be sort of wasting it a little bit. But yeah, Astro with anything small is a pretty solid choice.

01:28:18 - Anthony Campolo

I've really, really enjoyed the Astro Solid thing, because I've been big on Astro for a while. My blog's been built with Astro for, like, three years or so. This is the first, like, dashboard app type thing I've built with Astro. And the API endpoints have been working great. You know, it took me a little bit to figure out how to use things like yt-dlp in it, but for me, what unlocked everything is just running the whole thing in a Docker container. I don't try and deploy these things on, like, Netlify or anything. I'm running my entire app in a Docker container on Railway, and that works pretty well for me.

01:28:51 - Ryan Carniato

Yeah, no, for sure. It's interesting, too, because with your app there's a decent amount of. Sorry, the part we saw where you went through all the UI and went through all the selection, that part's mostly interactive, which I imagine is where the Solid part comes in. The Astro is, I guess, all the marketing pages and stuff around it.

01:29:14 - Anthony Campolo

Mainly, Astro is for the API endpoints, because we need to call out to.

01:29:20 - Ryan Carniato

Right.

01:29:21 - Anthony Campolo

You know, AssemblyAI's endpoints or OpenAI's endpoints, or, like the first thing, yt-dlp needs to download the video. So there's a server running in the back, and that server is a Docker container running Astro API endpoints.
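One of those API endpoints might validate its input along these lines, reduced here to a plain function so it stands alone (the request shape and status codes are illustrative assumptions, not AutoShow's real API):

```typescript
// Hypothetical handler body for an endpoint like /api/transcribe: the Solid
// front end sends either a remote URL (for yt-dlp) or a local file path,
// and the server queues the pipeline.
interface TranscribeRequest {
  url?: string;      // remote content: handed to yt-dlp
  filePath?: string; // local content: skips the download step
}

function handleTranscribe(req: TranscribeRequest): { status: number; message: string } {
  if (!req.url && !req.filePath) {
    return { status: 400, message: "provide a url or a local file path" };
  }
  // a real endpoint would enqueue the download/transcription pipeline here
  return { status: 202, message: "queued" };
}
```

In Astro this logic would live in an exported `POST` handler under `src/pages/`, returning a `Response` with the same status codes.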

01:29:34 - Ryan Carniato

So mostly. Okay. Yeah. Yeah. Okay. Interesting. Yeah.

01:29:37 - Anthony Campolo

I mean, like I said, it started as a Fastify backend, so it's like. It was like a node server that I had to port to Astro API endpoints, and it eventually got there. It took some work, but it got there.

01:29:47 - Ryan Carniato

Yeah, I don't think people use Astro enough like that. They don't realize, like, Astro, funnily enough, does a lot more.

01:29:55 - Anthony Campolo

It became the opposite of its marketing. Like, it used to say Astro's for content, not for apps. And then it made a thing for apps. It's so funny, right?

01:30:03 - Ryan Carniato

Yeah, but it's just like, once you use it from, as I said, the API endpoint standpoint, it's one of the easiest ways just to spin up a server and go, okay, here's a file, do the thing. And it's like, oh, I need a page: do the thing. I think what makes Astro attractive is how quickly it can go from just being, you know, here's an endpoint with some API, to here's an endpoint that serves HTML, to here's an endpoint that has an app, like a Solid app, on it. That transition through those three steps is very rapid and almost effortless. Whereas in most solutions, the first one's easy, right? You pull in Express or Hono or even H3 or whatever, and you can get an API endpoint up. Then you go, okay, I need to serve some HTML, and you're like, okay, that's not too hard. But then when you're like, okay, now I need to bring in a small Vue thing or something, pick a framework, now you're like, okay, well, I have to bundle this, or maybe I'll just use the standalone mode.

01:31:02 - Ryan Carniato

But you're suddenly like, the jump isn't smooth. Astro makes it so that you can go from basically basic HTTP all the way to, you know. It's funny, in some of those cases you might not need Astro anymore; you just made a single page app. But the fact that Astro can go from "I'm fully back end with some dumb stuff" to "now I have a single page app running out of Astro" in just one go, without you having to worry about stuff, is incredible.

01:31:27 - Anthony Campolo

That's exactly my setup. I have like 10 API endpoints and I just have the single page app, which is the Solid app. And there is a front page as well. But it's been super easy to do. Yeah. So, all right: Ryan GPT.

01:31:42 - Ryan Carniato

Yeah, yeah. You showed this off in the Discord a while back and I was just like, what is this?

01:31:49 - Anthony Campolo

Yeah, so this is something that I have wanted, and I again used your streams as a test case. So there are two ways to do this: there's what's called embeddings, and then there's fine-tuning models. I have not fine-tuned a Ryan GPT model yet, but that's coming next, that's coming soon. So right now I'm using something called embeddings, where basically you can take a ton of text that would be too much text for an LLM's context window. You already saw how a single stream of yours is already pushing the bounds of what these LLMs can handle. So if you wanted to work with 100 streams, you need a whole different way of doing that. That gets into things like embeddings, vector databases, all this kind of crap. How this all works under the hood is not super duper important. Basically, there are ways to turn text into something LLMs can understand that you can then have it search through like it's a database. It can kind of read through all of your streams; it can pull out bits and pieces of information from different ones.
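The retrieval step described here can be sketched minimally: transcript chunks are stored alongside embedding vectors, and a query embedding pulls back the nearest chunks by cosine similarity. (A real setup would use a vector database and a model-generated embedding; the vectors below are toy stand-ins.)

```typescript
// Minimal cosine-similarity retrieval over embedded transcript chunks.
interface Chunk {
  text: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding; these become
// the context handed to the LLM along with the user's question.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```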

01:32:52 - Anthony Campolo

It can then assemble them into an answer to a query. So what I did is I built an interface where you could ask it a question, and it would use the sum total of all of your transcriptions and show notes to give you an answer. The idea being that it would be a better answer than if you just ask ChatGPT directly. That's actually what we should do: I should ask ChatGPT with and without the Ryan GPT bot. Plain ChatGPT would just give you a kind of very generic answer, versus if you ask it with what you've got, it should give you a more specific answer. So prompt me. That's a question for you.

01:33:36 - Ryan Carniato

Yeah, I've answered this before. It's a breaking change only in the configuration on the surface. It's mainly a configuration change, but it will also affect maybe how we approach adapters. So it's on the deployment side. It won't break the code in your app per se; we're not changing the API there. It's more how it gets deployed, maybe, and how you configure the config file.

01:34:08 - Anthony Campolo

Okay, sorry, you can flip over to my screen now.

01:34:11 - Ryan Carniato

Yeah, sure, sure.

01:34:13 - Anthony Campolo

All right, so this is Ryan GPT. This is not yet deployed; I still have to figure out a couple things in terms of how the embeddings are going to work in the deployed version. But if people want to clone this down, it's just a Solid Start app, so it should be very simple for you to do. This is actually one of my most starred repos, by the way, just because you shared it, so thank you for that. "Every Ryan Carniato stream ever, summarized and turned into embeddings for chatting with." Stupidly written description, but that's basically correct. You can do this with a CLI also, because I kind of built the AutoShow CLI into it. The question I asked it in the readme is why Marko is the most underrated framework ever, and it gives this extremely long and detailed answer. But I don't want to go into that right now. I want to just do this first question. I'm going to ask it: what is a mutable signal? And we're going to compare the answer we get from Ryan GPT with ChatGPT.

01:35:08 - Anthony Campolo

So while that's running: ChatGPT says a mutable signal generally refers to a signal or piece of data that can change over time and be updated by some process or system. The specific meaning of mutable signal can depend on the context. And then it kind of goes in. It gives you different interpretations, it gives you some code examples. It also gives you some examples that have nothing to do with, you know, Solid, like engineering and physics. So it gives you a bunch of stuff that has nothing to do with what you really want, while ours is processing.

01:35:36 - Ryan Carniato

What's the first answer? Just out of curiosity?

01:35:38 - Anthony Campolo

So: in programming, a mutable signal is a reactive variable that holds a value and can be updated. See, that's just a signal. Other parts of the program can subscribe to it and automatically react when its value changes. And then it gives you examples. You do get listed first: Solid, MobX, React via useState. Key traits: you can both read and write its value.

01:36:01 - Ryan Carniato

I see.

01:36:01 - Anthony Campolo

So do you think that's a good answer, or just kind of an answer?

01:36:05 - Ryan Carniato

It's eh. Yeah, it's, it's. It's.

01:36:08 - Anthony Campolo

Now let's see what Ryan GPT gave us. So Ryan GPT said: a mutable signal is a reactive state container that lets you update its inner value in place, rather than forcing you to replace it with a completely new value every time. In other words, while a regular or immutable signal might require you to create a new value on each change, ensuring that every update is a new reference, a mutable signal is designed so you can modify its contents directly. Despite these in-place mutations, it still remembers its dependencies and notifies any observers or computed values that rely on it when a change occurs. So is that correct?
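The distinction read out above can be made concrete with a toy proxy-based mutable container: writes mutate the object in place, yet subscribers are still notified. (This is a sketch of the idea only, not Solid's actual store implementation.)

```typescript
// Toy mutable "signal": a proxy whose set trap performs the in-place
// mutation and then notifies an observer, mirroring the description above.
function createMutable<T extends object>(target: T, onChange: () => void): T {
  return new Proxy(target, {
    set(obj, key, value) {
      Reflect.set(obj, key, value); // mutate in place, same reference...
      onChange();                   // ...but observers still hear about it
      return true;
    },
  });
}
```

Contrast with an immutable signal, where each write replaces the whole value with a new reference and equality checks drive the updates.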

01:36:42 - Ryan Carniato

That's a better answer. Yeah, because, I mean, a mutable signal isn't really a thing per se, but yes, because it's actually a container of signals. But for the question as asked, this is the right answer.

01:37:00 - Anthony Campolo

This is the more correct answer if it's in the context of Ryan Carniato's work, you know.

01:37:06 - Ryan Carniato

Yeah.

01:37:07 - Anthony Campolo

So, yeah, there you go. That's Ryan GPT. I gave it all the transcripts ever, and we can see that in here. This is still pretty rough; I'm going to update it right now. I had an "on deck" thing because there were some that I hadn't quite run yet, but that's no longer needed. I've run all of them now. I separated them into years, and then you have a markdown file for each. I have to update this; the last one is "So Long, Silicon Valley." And then I just drop in the raw AutoShow output. So at this point, this should look familiar to people: you've got your front matter, episode description, summary, chapters. I had to break these up into multiple parts, even though I was generating them with o1, so I was getting these in one shot. The embeddings can't take the full thing at once, so you have to break these down into smaller files. A lot of these are in multiple parts.

01:38:03 - Ryan Carniato

I was going to ask: how do you keep this up to date? You just periodically, every few weeks, run the ones that you haven't run yet?

01:38:09 - Anthony Campolo

Exactly, exactly. Yeah. And a higher level feature I'm going to build into AutoShow is a subscription thing, where you subscribe to a YouTube channel, and every time you post a new one, it will just kick off that process for you with whatever prompts you select. So there will be a way to automate this. If you are a content creator, you could just always get new ones delivered to you as you push out new content.

01:38:32 - Ryan Carniato

Makes sense. Yeah. No, this is interesting. The Marko answer is very.

01:38:37 - Anthony Campolo

Yeah, let's look at the Marko answer. So: Marko is considered the most underrated front-end framework ever because it pioneered many advanced features and techniques. Sorry.

01:38:44 - Ryan Carniato

Actually, that's interesting. Can we ask a specific question? Can we go back to the other one?

01:38:50 - Anthony Campolo

Yeah, I wanted to just have you give me questions. I want you to feed me some questions. So let's do that.

01:38:58 - Ryan Carniato

Why is fine-grained rendering better than a virtual DOM?

01:39:04 - Anthony Campolo

That's a great question. This will take roughly 10 to 20 seconds to run, just because the LLM processing part itself takes a certain amount of time; you can only make it so snappy. "Fine-grained rendering focuses on updating just the very specific pieces of the UI that actually changed, rather than re-rendering whole sections or even full component trees." I didn't do that. That's a pretty good answer. "In a virtual DOM system, every update triggers a diff between the old and new virtual tree and then patches the real DOM based on those differences. While the diffing process can be efficient in many scenarios, it still involves some overhead, especially in complex or high frequency update situations." That seems pretty good to me.

01:39:45 - Ryan Carniato

"With fine-grained rendering, the framework tracks exactly which values." That's fine, let's keep on going. I want to see if it gets any of the harder nuance. "Lower CPU." Okay, let's keep on going. "Reduced memory, faster overall."

01:39:57 - Anthony Campolo

Okay, so let's add to the question: "go into extreme depth on specific use cases." Let's try that.

01:40:11 - Ryan Carniato

Because I want to see if it actually gets things like: stores can basically cause the fine-grained updates without rerunning everything in between. I don't know if it'll actually get there.

01:40:29 - Anthony Campolo

Let's see. So this is going to be mostly the same stuff. Let's take a look from here. "It tracks which pieces of state are used by each UI fragment. Instead of performing a diff between two virtual trees, they automatically know that updating one state variable only affects specific DOM nodes. Use case: counter or form input updates. Consider an application with multiple independent form fields with fine-grained reactivity. If one input's value changes, only the DOM node directly bound to that." This is kind of just describing the same thing.

01:41:00 - Ryan Carniato

Okay, I got a better question.

01:41:03 - Anthony Campolo

Yeah, okay.

01:41:05 - Ryan Carniato

Why will the react compiler never be as good as signals?

01:41:13 - Anthony Campolo

This is good. I like asking LLMs for a given opinion on things, because this is where they have to actually make an argument for something. It might equivocate and be like, well, maybe it will be. React's approach is fundamentally built around a component rerender model and an immutable data philosophy. The React compiler, no matter how many optimizations it incorporates, must ultimately work within this paradigm. In contrast, signals offer fine-grained reactivity by tracking individual data dependencies, updating only the parts of the UI that truly depend on a changed value.

01:41:53 - Ryan Carniato

Inherent overhead: signals notify only the nodes that depend on a changed value, so consumers avoid unnecessary recomputation. Is there anything? Is that the end? The React compiler cannot fully remove all of React's render logic without changing its core mentality. I think you're doing full reruns. That's pretty acceptable. Yeah. It's just funny because I. Do you

01:42:18 - Anthony Campolo

wanted to go into further depth really. You wanted to give a longer more complete answer which is Something that I can build into the prompt.

01:42:25 - Ryan Carniato

It's not about the length. It's about it highlighting the real crux of the things that it can't do. That's why I asked the question like that. But it's fine. That answer is pretty good. Okay, cool.

01:42:39 - Anthony Campolo

Yeah.

01:42:39 - Ryan Carniato

Basically, what I was thinking of: I recently did a video where I did the myths about signals, which is not my stream content exactly. But. That's a good one.

01:42:49 - Anthony Campolo

What are the biggest myths about signals? Keep going the same way you're saying.

01:42:55 - Ryan Carniato

But it was like those questions are really good questions to test because they're very tricky. Like the specific ones.

01:43:02 - Anthony Campolo

Like.

01:43:03 - Ryan Carniato

Like, one of the biggest myths in my opinion is that just adding signals to a solution makes it slower. Okay.

01:43:12 - Anthony Campolo

So the biggest myth says that signals are a brand new concept, whereas they actually go back to older libraries like Knockout. That's important context. That's a good answer. Next myth: signals automatically eliminate all unnecessary renders. What do you think about that one?

01:43:31 - Ryan Carniato

Okay, that's fine.

01:43:33 - Anthony Campolo

Okay. Signals are too low level or complex for day to day use.

01:43:37 - Ryan Carniato

That's. That's definitely a myth. Yeah.

01:43:40 - Anthony Campolo

Signals require a total rewrite of existing applications. I think myth five is what you were referring to.

01:43:48 - Ryan Carniato

Yeah. There's a concern that. Okay. Yeah, sorry. I want to read four for a second. In reality, with an incremental adoption strategy, signals can be introduced into part of an application. Okay. That's true. Like Vue Vapor can work inside of a VDOM library. But signals also come with their own trade-offs. For example, the bookkeeping to track small independent pieces of state in an app that doesn't benefit from fine-grained updates. Yeah. Okay. I got one more question. This might piss off some people.

01:44:27 - Anthony Campolo

You do way more than what we just made.

01:44:30 - Ryan Carniato

Why will signals in Solid always be better than they are in Preact?

01:44:39 - Anthony Campolo

That's hilarious. Yeah. And the idea here is that this should be based on the arguments you. You have made.

01:44:49 - Ryan Carniato

Exactly.

01:44:49 - Anthony Campolo

So that's. That's why these. I knew that I had to actually get you and get some face time to get you to ask these questions. I cannot possibly come up with these.

01:44:59 - Ryan Carniato

Yes, it was designed.

01:45:00 - Anthony Campolo

You should kind of read it out loud for the audience.

01:45:02 - Ryan Carniato

Yeah. Signals in Solid come out ahead because Solid was designed from the ground up with fine-grained reactivity. In Solid, signals aren't just an add-on. They're baked into the system and even into the compiler. This allows Solid to eliminate extra layers of abstraction like virtual DOM diffing that libraries like Preact must contend with. Here are a few reasons. Core design: Solid's entire rendering model is built around signals and fine-grained rendering. Every piece of state is tracked at a very low level, and when signals change, only the minimum updates. Preact, in contrast, started as a virtual DOM renderer and is integrating signals on top of an existing framework. The extra virtual DOM layer means that Preact signals can never fully escape the overhead of a system originally designed to rerun components wholesale. Compiler optimizations: Solid's compiler transforms signals into direct DOM updates hardwired to the changes that occurred. In Preact this means extra bookkeeping on top of a runtime that's built to support a diff-based cycle. Even if signals reduce the number of updates, you cannot completely remove that constant. I mean, Preact has a few places it can optimize there. Okay, granular ownership

01:45:53 - Anthony Campolo

so it's mostly right but like you

01:45:55 - Ryan Carniato

could quote. Yeah, yeah. This enables precise, isolated updates, and updates can be cheap because the work is kept minimal. Design-time philosophy: Solid was designed around signals from the start, while in Preact signals are a newer addition, so while they may bring some granularity, they live in an ecosystem originally built around rerenders. In summary, Solid, built with a compiler around a fine-grained core, ends up leaner, more predictable, and more efficient, while Preact's signals, however promising, work within the constraints of a design that wasn't originally fine-grained at its core. Yeah, I like it.

01:46:25 - Anthony Campolo

Cool. Yeah, no, that is a really good question and I just think it's, it's definitely giving you a high level answer for sure.

01:46:34 - Ryan Carniato

I mean this is good enough.

01:46:35 - Anthony Campolo

It hit the right main topics.

01:46:38 - Ryan Carniato

I literally have people just spout the absolute garbage on Reddit like whenever like and I have to clear

01:46:46 - Anthony Campolo

this is better than at least garbage Reddit human output.

01:46:50 - Ryan Carniato

No, no, but what I mean is like if I could just literally take any of their questions it would be like. It would. It'd be great. It would like save me time because you're wrong.

01:47:02 - Anthony Campolo

Go spend some time with Ryan GPT to learn why.

01:47:05 - Ryan Carniato

Yeah, I mean, pretty much because it's just like, you know, like. Yeah.

01:47:09 - Anthony Campolo

I mean just Dev says Twitter reply bot. Yeah.

01:47:13 - Ryan Carniato

Twitter reply bot. Yeah.

01:47:14 - Anthony Campolo

Yeah, I love that idea. I'm going to take that Dev.

01:47:19 - Ryan Carniato

Yeah. On a Reddit reply bot. Yeah. There's always just stuff where, I mean, obviously I have my opinion, but it's just like, I don't even know what it's grounded on. See, when you read something like that, you know it includes a lot of my expertise. So it sounds smart. Like, it sounds like it knows what it's talking about. It has references to examples. It.

01:47:41 - Anthony Campolo

So, like, so it succeeds. That's awesome to hear. That's. That's because this is the thing that I was, like, I was looking at some of the outputs, I was asking some questions I thought would be interesting, and, you know, I was like, okay, this is actually pretty good. I'm trying to get some good stuff, but I'm like, I really need Ryan to actually look at this and, like, you know, fact check whether this is.

01:47:59 - Ryan Carniato

There's still, like, some little nuances, but, and it makes sense, when most people read what I say, they miss those nuances anyways. So I think the GPT thing almost does better because it probably speaks in a. Like, it doesn't waste time trying to

01:48:18 - Anthony Campolo

explain, doesn't worry about. Yeah, the hyper nuance kind of like. Yeah, it gets. It has more of a beginner's mindset, I think.

01:48:25 - Ryan Carniato

Right. Which means that they can pull away what I'm trying to say anyways without getting locked or mired on some weird thing, like, why did he say that? Now, those details might actually be important, because someone will argue it, you know? Okay, I got one. Sorry, can I do one more?

01:48:43 - Anthony Campolo

Yeah.

01:48:45 - Ryan Carniato

I don't know why I'm so adversarial, but these are the kind of things.

01:48:47 - Anthony Campolo

No, this is. I wanted you to be adversarial. This is the whole

01:48:50 - Ryan Carniato

Because these are the point of the

01:48:51 - Anthony Campolo

stream, bro, for me to get you to ask adversarial questions.

01:48:55 - Ryan Carniato

Right? Because. Because, like. Because, like, the thing is people will say stuff and then.

01:49:04 - Anthony Campolo

All right, hit me with it.

01:49:10 - Ryan Carniato

I'm trying to think of the best way to phrase this question, but, like, isn't SolidJS just Vue? Just Vue?

01:49:19 - Anthony Campolo

What is Vue's signal thing called?

01:49:24 - Ryan Carniato

Composition API.

01:49:28 - Anthony Campolo

I want to be specific about that, like, the signal part of Vue. That will give it a better answer if we give it that kind of context.

01:49:36 - Ryan Carniato

It's because, like, I'm wondering if it can summarize, because sometimes I'm, I'm, I do leave it a lot more. All right, go for it.

01:49:48 - Anthony Campolo

You should read a little slower for the audience.

01:49:50 - Ryan Carniato

Well, SolidJS and Vue's Composition API share the idea of using reactive signals, but SolidJS and reactivity in Vue are not the same thing. They differ significantly in approach and architecture. Solid is built from the ground up with fine-grained reactivity in mind. It's designed to compile your JSX code into highly optimized direct DOM updates without a virtual DOM. This means each signal tracks exactly what part of the DOM to update, leading to minimal runtime overhead. In contrast, Vue's Composition API is a way to structure and organize component logic within Vue. It leverages Vue's reactivity system, which is based on proxies and watchers, to track dependencies. But Vue still uses a virtual DOM and diffing process to update the UI. Essentially, the Composition API gives you a nice functional way to encapsulate reactive logic, but it operates on top of Vue's broader framework architecture. SolidJS takes a compile-time approach where many optimizations are baked directly into the generated code. This lets it bypass much of the dynamic overhead incurred by frameworks that rely on a virtual DOM. The result is an extremely efficient update cycle without the need for a reconciliation process. So while both adopt reactive programming patterns and aim to perform, Solid isn't simply Vue's Composition API done differently.

01:50:45 - Ryan Carniato

It represents a distinct design philosophy that centers on compiler-driven fine-grained reactivity and minimal runtime overhead. Yeah, I don't like how it emphasized the compiler as much. I agree, but. I mean, Vue Vapor changes this, but this is historically the correct answer. Yeah.
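
The "direct DOM updates without a virtual DOM" point above can be illustrated with a toy sketch. This is a conceptual stand-in, not Solid's actual compiled output: JSX like `<span>{count()}</span>` becomes code that binds the signal straight to one node, so an update touches only that node, with no tree to diff. The node and signal here are plain-object stand-ins so the sketch is self-contained.

```javascript
// Bind a signal directly to one node: when the signal changes, rerun just
// this one text update -- the "compiled output" style Solid aims for.
function bindText(node, readSignal, subscribe) {
  const update = () => { node.textContent = String(readSignal()); };
  subscribe(update); // rerun only this binding on change
  update();          // initial render
}

// Minimal stand-ins for a DOM node and a signal with one subscriber list.
const span = { textContent: "" };
let count = 0;
const subs = [];
const readCount = () => count;
const setCount = (v) => { count = v; subs.forEach((fn) => fn()); };

bindText(span, readCount, (fn) => subs.push(fn));
setCount(42);
// span.textContent is now "42" -- one node touched, nothing reconciled
```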

01:51:05 - Anthony Campolo

Why is SolidJS not just dependent on a compiler for its advantages? This is me kind of just trying to see what we get with this one. And if people in the chat have questions, throw them out. We can ask anything.

01:51:28 - Ryan Carniato

Solid's strengths aren't just the result of compile-time optimization. The real magic comes from how it's built around a fine-grained reactivity model at runtime. In more detail: runtime reactivity model. SolidJS is designed with each piece of state as a signal, an independent reactive unit. When a value changes, only the parts of the UI that depend on that specific signal update. That means updates are highly localized and efficient, without a diffing algorithm or a full rerender of component trees. Even without the compiler's transformations, the core minimizes unnecessary processing. Minimal overhead by design: unlike frameworks that rely on virtual DOM diffing or ones that expect re-renders, Solid's runtime is structured to directly bind to DOM nodes. Its underlying system ensures component functions run only once and establish reactive bindings. The resulting performance gains don't solely depend on compile-time code rewriting. Compiler as an enhancer, not the foundation: while the compiler and associated tooling play an important role in stripping away redundant code and transforming JSX into optimized DOM operations, it's more of an enhancer. The compiler optimizations, such as static detection of non-reactive parts and wrapping only those expressions that truly change, make the final code leaner and faster.

01:52:24 - Ryan Carniato

But even if one were to use Solid without heavy compile magic, or in environments with minimal build steps, the runtime's inherent reactivity and update management would still deliver many of the advantages. Integrated ownership, disposal, and batching: Solid's runtime manages reactive scopes using an ownership model that automatically cleans up once a DOM element is removed. The careful runtime bookkeeping, including techniques like microtask batching, is intrinsic to the design and not something a compile step can do alone. In summary, while the compile-time transformations in Solid are highly beneficial, reducing overhead and enabling optimizations, the framework's real advantage lies in its runtime reactivity system. That system, built on signals, precise dependency tracking, and local updates, ensures that SolidJS remains fast, efficient, and predictable regardless of build tool configurations. Yeah, it's pretty good. Microtask batching is not a technique that we do in Solid 1.0, so it's interesting that that got in there, but. Ah.

01:53:13 - Anthony Campolo

Because. So that's. Yeah, that's interesting. So this is trained on the sum total of all of your transcripts. So it means that it's pulling in all the old stuff as well.

01:53:21 - Ryan Carniato

Yeah, yeah. Well, actually it's 2.0 stuff, so it's actually pulling in the new stuff. It's the opposite.

01:53:27 - Anthony Campolo

Yeah, yeah, yeah.

01:53:30 - Ryan Carniato

Oh geez. Dev's got a good one. Before we do Dev's, sorry, I got one more I want to ask.

01:53:35 - Anthony Campolo

Sure, yeah, yeah.

01:53:38 - Ryan Carniato

Why can we consider Svelte 3 a virtual DOM? I mean, I would go as far as saying everything before Svelte 5. Yes, Svelte 3 and 4 are basically virtual DOM. At a glance, Svelte 3 isn't a virtual DOM framework in the traditional sense. It doesn't build a tree of virtual nodes at runtime, then diff it on every update. Instead, the magic happens at compile time. When you write a Svelte component, the compiler converts it into low-level imperative code that directly updates the DOM when the state changes. Because the generated code knows precisely which DOM parts have changed, Svelte 3 achieves the same fine-grained update benefits that many virtual DOM libraries aim for, but without maintaining a diffing runtime data structure. Okay. In effect, Svelte's compiler takes the job of tracking dependencies and figuring out what minimal DOM updates are needed. Yeah. No, this answer is completely wrong.

01:54:35 - Anthony Campolo

It's completely wrong. In what way?

01:54:38 - Ryan Carniato

I mean, the answer should be that components in Svelte 3 still run top to bottom. It uses like shortcut memoization. Like, there is no independent reactivity. And so it does like shallow diffing. It's kind of like dirty checking.

01:54:55 - Anthony Campolo

Interesting. Yeah, I would say it's probably struggling with this, because how much have you talked about this on streams? Only

01:55:00 - Ryan Carniato

in the Svelte 5 stream with Dominic Gannaway. And Dominic Gannaway himself said that Svelte before Svelte 5 was basically a virtual DOM.

01:55:14 - Anthony Campolo

Yeah. Yeah. So this is an example where, because there's only a single stream where you talked about this, it's hard for it to pull in specifically that information. This would actually probably be one of those questions where, because it's not heavily represented within your transcripts, it would be better to just ask ChatGPT directly. It might actually have a better response, because it could then search the Internet to pull in information about Svelte that is not present within your transcriptions.

01:55:37 - Ryan Carniato

Yeah, to be fair, I don't think anyone will get this one. This is an opinion that I hold and that Dominic Gannaway holds. But very few people view it like this. Okay, let's try that.

01:55:52 - Anthony Campolo

There's a way you wish you could ask the question to get it to probably pull that out, but let's go with Dev's question next.

01:55:59 - Ryan Carniato

How are single flight mutations better than React server components?

01:56:02 - Anthony Campolo

Sorry, give me that slower.

01:56:03 - Ryan Carniato

How are single flight mutations better than React server components?

01:56:15 - Anthony Campolo

All right, is this one that you've talked about multiple times on the stream?

01:56:20 - Ryan Carniato

Yeah, although I don't know if I straight out called them better, but I'm interested in what this says.

01:56:25 - Anthony Campolo

If these are both things you've. I'm sure you've talked about React Server Components a bunch, so this should be one that it's probably better at answering.

01:56:33 - Ryan Carniato

Both address updating a UI with fresh data, but they tackle the problem from very different angles. Single flight mutations and React Server Components aren't directly alternatives for every use case; they solve complementary pieces of the puzzle. Here's how a single flight mutation can be better or offer advantages compared to common patterns around React server calls. One round trip: single flight mutations combine the update and the subsequent data read or refresh in one request. Instead of sending the mutation and then making a separate query to grab new state, the server returns updated data as part of the same call. Yes. This reduces round trips, meaning the UI can update immediately with fresh state, whereas React Server Components might require a more layered approach where data fetching and mutations are handled in separate steps. That's not necessarily. Let's see. Reduced client complexity. I mean, that's the one thing React Server Components do. They do do a single flight mutation in a sense. But okay, the single flight mutation avoids having stale state or double-fetch logic in the client. That's true. Since the server response already includes the updated data, there's no need for an extra cache invalidation step. Yes. Otherwise you might implement.

01:57:30 - Ryan Carniato

Okay, yeah, I think maybe they don't. I wonder if they don't know what React server components are and they think I wonder if they like recognize RSC more than React server components. Like do you think it's possible that they.

01:57:41 - Anthony Campolo

Let me try that. Let me pull that in and see if that gives it additional context. Yeah, that's a good one. If you're saying the words "RSC," that might be kind of getting lost in the sauce. And also, what we're doing here is pointing out the limitations of this, and there are already some steps in terms of improving it. Like, this doesn't include your blog posts; this doesn't include content from other people talking about these subjects. So there are ways to pull in more context to improve these answers even more. This is one of the reasons why I wanted to do this with you: to see where those issues are and see how we could actually give this more info to make these answers even better.

01:58:16 - Ryan Carniato

This answer already looks slightly better. Single flight mutations bundle the write mutation and the subsequent read of the UI into one atomic operation. This means you send the mutation and immediately get back the updated view, avoiding a double round trip.

01:58:27 - Anthony Campolo

Good.

01:58:27 - Ryan Carniato

In the RSC scenario, though, you're relying on the server to re-render dynamic parts of your page. You may still face extra fetches or more complex coordination, for example, if your UI's data requirements are spread among several components. Yes, this is the answer. Single flight mutations keep things simple by limiting that.

01:58:39 - Anthony Campolo

That's the key. You got to figure out the right context, which is "RSC," not "React Server Components."

01:58:44 - Ryan Carniato

Fewer moving parts on the client: with a single flight mutation, the server sort of takes full responsibility. Once a mutation is submitted, the server re-renders just the parts that really changed; the client doesn't have to manage multiple caches or track separate update cycles. Okay. RSCs also coordinate between static server-rendered output and client-side interactivity. This can lead to subtle state or caching challenges when your data is scattered across components. Single flight mutations are often implemented as plain server functions that return exactly the updated data; this cuts down application complexity. I might have made that argument. Optimistic updates without extra fetch logic: many single flight implementations can be designed for immediate optimistic updates, blah blah blah, while the server catches up with the complete updated response. This merged mutation-and-fetch minimizes user-perceived delays and cost. RSCs rely on server diffing and stream the updated tree; while powerful, that demands extra care around caching boundaries. Okay. Better developer ergonomics for targeted data needs: when your use case is simply to update something and show the new value, a single flight mutation is straightforward; there's no need to design your component structure around it. Okay. React Server Components push you toward thinking about server-render versus client-render boundaries, which can scatter data across files.

01:59:52 - Ryan Carniato

Single flight mutations are more all-in-one. It's interesting. It's kind of hit and miss. Yeah, yeah. In short, both approaches move work to the server and strive to minimize bundle size and client work, with single flight mutations providing the mutation and update in one stream, reducing round trips and easing state management. They're particularly attractive when you want that pattern without extra coordination across the entire React system. Maybe. Yeah. I mean, the summary is fine. It's mostly that they're atomic, that they're granular, so you can basically do multiple of these things independently without having to rely on the server components to send the whole page back on every change. And when you have, they did mention this, when you have this whole page back on every change, you have to fetch more data than you need on the server, because unless you use a cache, you're going to hit a code path of something you didn't update, but the server doesn't know that. So the server's like, hey, I need all the data required to render this part of the page, even the stuff you know didn't update.
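
The one-round-trip idea being discussed can be sketched concretely. This is a hypothetical illustration, not Solid Start's actual API: the server handler applies the mutation and returns the fresh view data in the same response, contrasted with the two-request pattern where the client mutates and then separately re-fetches. The `db` and todo shapes are made up for the example.

```javascript
// In-memory stand-in for server state.
const db = { todos: [{ id: 1, text: "ship it", done: false }] };

// Single flight: write, then read, inside the same server call (one round trip).
function toggleTodoSingleFlight(id) {
  const todo = db.todos.find((t) => t.id === id);
  todo.done = !todo.done;       // the mutation
  return { todos: db.todos };   // the refreshed data, in the same response
}

// The two-round-trip pattern it's contrasted with:
function toggleTodo(id) {       // request 1: mutate only
  const todo = db.todos.find((t) => t.id === id);
  todo.done = !todo.done;
}
function fetchTodos() {         // request 2: separately re-fetch state
  return { todos: db.todos };
}

const updated = toggleTodoSingleFlight(1);
// updated.todos[0].done === true, with no second fetch needed
```

Because each mutation returns only the data it affects, several of these can run independently, which is the granularity point Ryan makes above.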

02:00:51 - Anthony Campolo

I now reran that same prompt saying emphasize how it's atomic. So we don't need to read the whole thing, but let's just read the first paragraph and see if that sounds better.

02:00:58 - Ryan Carniato

Single flight mutations offer an atomic, all-in-one approach compared to the sometimes more fragmented lifecycle of React Server Components. With a single flight mutation, the mutation request, say updating data on the server, is sent, processed, and its resulting state returned in one continuous operation. The single flight ensures the update and the subsequent data fetch happen together, reducing the chance of intermediate inconsistent states or extra round trips. Sure. The atomic nature is the key advantage. Sorry, one transaction. Just keep on going, because, yeah, I think it's not getting that. Yeah, this is not quite right. It's tricky.

02:01:35 - Anthony Campolo

Yeah. Yeah. So this is where it gets hard, I think, as we drill deeper and deeper into specifics. Not only concepts, but specific technology. Because it only has so much context about React Server Components. The only context it has about React Server Components is what you have provided within your own streams. So that's why it's probably struggling a bit, because it would need to bring in more stuff from the Internet about React Server Components to give a really complete answer. It only knows what you have told it about React Server Components.

02:02:07 - Ryan Carniato

Yeah. And more so, I think I've talked about certain subjects more consistently. Like, when I talk about RSCs, I'll drop a comment here or there, or just be like, oh, in contrast to this. And it'll only be these little snippets all over the place. When I talk about signals, I basically taught signals, and I've done this multiple times. So, like, the main argument, even the stuff with Preact or whatever, I did a whole stream on that topic specifically. Like, I used Preact signals to make a Preact fine-grained renderer and then compared it to Preact itself to show the differences. Right. That's something that it can grasp onto, whereas RSCs are more, like. I did HackMDs for freaking months working on single flight mutations and stuff.

02:02:52 - Anthony Campolo

That would actually be the thing to put in. Not even just your blog posts; it sounds like we need to pull in your HackMDs. And that would give it far more context. Because that's the thing: I know that I can pull in your dev.to posts, but it sounds like you actually have a much larger base of written content that is not that.

02:03:10 - Ryan Carniato

Yeah. What model. What model is this using?

02:03:14 - Anthony Campolo

That's a great question. Let's go take a look. I forget, I think it's using O1 preview. I could be wrong though. Yeah. So this is using a very old model. Actually, it's using O1 preview. So if I use a slightly newer model, it might actually get better answers. Sorry, that's not correct at all. Let's look at this application first. This is a Solid Start application. We have one API route, which is the query. Let's see, it's running the query embeddings utility, which is. Okay, this is where the magic happens. So, querying the embeddings. This is back when I still wrote comments in my code. I don't do it anymore. So this is the embeddings API with OpenAI. And then this is the chat completion. So where's the chat body? Okay. So this is using O3 mini. So this should use O3 regular or O3 Pro. Let's try to change this to O3 real quick and see if that still works.

02:04:31 - Ryan Carniato

So yeah, that's interesting, the way embeddings work. Sorry, I'm trying to understand this a bit, because you just changed the model and you're instantly good to go. Like, how does that work?

02:04:46 - Anthony Campolo

There's two things. There's the embeddings, and then there's the chat model you use to interact with the embeddings. So the embeddings are already created. They use something called the text-embedding-3-large model. That's OpenAI's most recent and up-to-date embeddings model. So those are already generated. They are not being regenerated at any point; they're just sitting there. And that is from the sum total of these transcription files and summaries that I've written. And that's also something: the embeddings do not have access to the original transcripts, they only have access to the summaries. That is a decision that makes it easier and cheaper to do, but you do lose some of the fidelity of what you are actually expressing through your own words. So if we actually had the transcript for every single episode, it might give slightly better answers, but it'll be a lot more expensive, which is something you need to keep in mind. But once you've generated the embeddings, those just sit there. They exist in this massive Azure JSON file. It's 436,000 lines. That's your embeddings. It's just sitting there. Then you query it using any model you want.
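
The query side of that setup is essentially a similarity search over the stored vectors. Here's a minimal sketch of the retrieval step, with tiny made-up vectors standing in for real text-embedding-3-large output (which has thousands of dimensions): embed the question, rank the stored summaries by cosine similarity, and hand the top matches to the chat model as context.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank the stored embeddings against a query embedding, keep the best k.
function topK(queryEmbedding, corpus, k) {
  return corpus
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy corpus: each entry is an episode summary plus its stored embedding.
const corpus = [
  { id: "ep1", summary: "signals and fine-grained reactivity", embedding: [1, 0, 0] },
  { id: "ep2", summary: "server components and streaming",     embedding: [0, 1, 0] },
  { id: "ep3", summary: "signals in Solid vs Preact",          embedding: [0.9, 0.1, 0] },
];

const results = topK([1, 0, 0], corpus, 2); // query embedding for a signals question
// results ranks ep1 first, then ep3; the matched summaries become the prompt context
```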

02:05:59 - Anthony Campolo

The different models can now interact with these embeddings that you've created. And the higher quality the chat model is, the better outputs it can give. This did work. So this was using O3 regular instead of O3 mini. And I can probably now change this to O3 Pro; let's see if that still works as well. Okay, so that's wrong. That's not a correct model name. So yeah, there's a bunch of different models. Okay, this is just hitting the endpoint directly. So this is using the completions endpoint, for people who are into this kind of stuff. That's how I'm used to doing it. I'm not using the OpenAI SDK here, but I do use the OpenAI SDK in AutoShow. You were able to change the model just by feeding it the name. If you go to the ChatGPT models docs, it will tell you the different models that are available. If we wanted to do GPT011, we can do that, and we can just hit that copy button and it gives you a name that corresponds directly to it. So all the models are available on ChatGPT's docs.
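
The reason swapping models "just works" is that, when you hit the chat completions endpoint directly, the model is a single string field in the request body. A hedged sketch of building that request (the endpoint URL and `model`/`messages` shape follow OpenAI's chat completions API; the retrieved-context wiring is a made-up illustration of how the RAG app might assemble its prompt):

```javascript
// Build a chat completions request; changing models means changing one string.
function buildChatRequest(model, question, retrievedContext) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    body: {
      model, // e.g. swap "o3-mini" for "o3": copy the name from the models docs
      messages: [
        { role: "system", content: `Answer using this context:\n${retrievedContext}` },
        { role: "user", content: question },
      ],
    },
  };
}

const req = buildChatRequest("o3", "Why are signals fast?", "…episode summaries…");
// req.body would be JSON-stringified and POSTed with an Authorization header
```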

02:07:16 - Ryan Carniato

Yeah. Okay. Yeah, no, I assumed there would have to be some kind of indexing, but it can just scan the giant embeddings thing.

02:07:25 - Anthony Campolo

The embeddings are what is like an indexing. It goes into a database, and it gives the model the ability to read through the embeddings. But it's not an index in the same sense that a database index is. It functions in a similar manner in terms of what it does for you, if that makes sense.

02:07:46 - Ryan Carniato

This description is a little longer than the last one. I don't know.

02:07:51 - Anthony Campolo

Yeah, so this is 4o instead of O3. So let's see. Ryan Carniato is a prominent developer, educator, and open source maintainer, best known as creator and lead maintainer of Solid, a high-performance reactive JavaScript framework. He's recognized for his deep expertise in front-end web dev, especially cutting-edge topics like reactivity, SSR, hydration, and advanced optimization techniques. He frequently hosts lengthy technical live streams where he discusses the design evolution of SolidJS as well as broader trends in the ecosystem. He often dives into advanced topics such as partial hydration, signals-based reactivity, server components, and comparisons between major frameworks like React, Next, Qwik, and Marko, except they spelled Marko wrong with a C. His conversations feature a mix of technical deep dives, live coding sessions, and interviews with other major framework authors. In addition to building Solid, he's known for his open and collaborative approach, inviting community contributions and fostering cross-ecosystem discussions about web dev best practices and future directions of frameworks. He plays a key role in shaping new patterns for fast, scalable, and developer-friendly web applications. In summary, he's a software engineer, educator, and open source leader, creator of Solid, who produces educational content about advanced front-end technologies and actively participates in their evolution.

02:09:12 - Ryan Carniato

There we go. That's my conference talk description. Now I just need to take.

02:09:16 - Anthony Campolo

It's a pretty good bio. That's a pretty good bio. I think that's pretty solid. Like I would, I would use that, you know.

02:09:23 - Ryan Carniato

Yeah, yeah. It's funny because they always ask you for a summary and I'm always like, what do I say? I'm like reactivity performance enthusiast and reactivity super fan. But

02:09:37 - Anthony Campolo

let me respond to a couple things here in the chat.

02:09:40 - Ryan Carniato

Yeah.

02:09:40 - Anthony Campolo

So someone asked what model is it using, if it needs o3 minimum and consumes more data. Yeah, so, you know, I kind of went into that a little bit. I think if you want the best outputs, you want to use the best, most current model, really. It's always going to be usually the most expensive, but it's pretty, pretty legit. And then: would broken-up embeddings help, do you think? So that's a big question I do not quite have the answer to yet. It's a very, very important question. So I started with breaking them up just enough so it did not break the embeddings generation. So basically, I started with every episode, I ran them all, it would tell me which episodes were too long, and then I cut those episodes in half. Similar thought process to when you're asking why is it two and a half hours: this is too big, so I cut it in half. This is a similar thing. Now it may be the case that I could create embeddings where every chapter is an embedding, instead of just two embeddings per episode. That might give a better response.
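The halving strategy described here can be sketched in a few lines. This is purely illustrative: the character limit and the function name are made up for the example and are not AutoShow's actual code.

```javascript
// Illustrative sketch of the split-in-half strategy: if a transcript exceeds
// the embedding model's input limit, cut it in half, recursing so very long
// episodes keep halving until every piece fits. The 4000-char limit is made up.
function chunkForEmbedding(text, maxChars) {
  if (text.length <= maxChars) return [text];
  const mid = Math.floor(text.length / 2);
  return [
    ...chunkForEmbedding(text.slice(0, mid), maxChars),
    ...chunkForEmbedding(text.slice(mid), maxChars),
  ];
}

// 10000 chars with a 4000-char limit halves twice: four chunks of 2500.
const chunks = chunkForEmbedding("x".repeat(10000), 4000);
```

Per-chapter chunking, the experiment mentioned above, would replace the midpoint split with splits at chapter boundaries.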

02:10:41 - Anthony Campolo

I don't know the answer to that yet. So that's experimentation I still need to do. And then somebody here has a funny comment: if devs in general don't understand RSC, then the LLM won't either. This is such a key point, because all LLMs are trained on human data. If there's a bunch of humans out there saying stupid crap about RSCs, the LLMs are going to say stupid crap about RSCs. They do it about all sorts of topics. You ask it who killed JFK, it's going to say Lee Harvey Oswald, you know. So it's trained on the data it has. Would graph RAG help? You have to tell me what you mean by graph RAG, Dev.

02:11:21 - Ryan Carniato

You guys keep the questions going. I actually have to step away for two seconds. Just ask Anthony a couple questions. I'll be right back.

02:11:25 - Anthony Campolo

It looks like graph RAG... I see two different things here. I see a Microsoft thing, which is a modular graph-based retrieval augmented generation system. Yeah, so I'll talk about RAG real quick. So what Dev is saying here is that instead of just embeddings, it builds a graph of knowledge that links together concepts. So it's a knowledge graph, which is a thing that has existed for many decades. I don't know, I haven't tried that yet. For people who are confused by the terms flying around right now: RAG is retrieval augmented generation. Ryan GPT is RAG. So what I've showed you is essentially a RAG implementation. So if you've ever heard that term and you don't know what the hell it is, you just saw it in real time. It's taking all of those files that have the summaries and chapter headings for each episode, and it's creating a numerical representation of all of that text, and then it's allowing you to retrieve things from it to augment the generation of your text output.

02:12:41 - Anthony Campolo

So that's what the term retrieval augmented generation means: you're augmenting your answers by giving it a base of knowledge it can retrieve things from. And so someone's asking, does the vertex database come into that? So it's a vector database, not vertex. That may just be a typo. But a vector database is what allows you to take those embeddings and stick them in a database. So what I showed was a JSON file that has all the embeddings. So there's no vector database implementation with this. But for AutoShow, the app, there's a vector database implementation that uses Cloudflare's Vectorize. I actually just did a stream a week or two ago with Dev about this. We talked about RAG and vector databases in extreme detail. There's sqlite-vec for SQLite, there's pgvector for Postgres, and then there's Vectorize, which is Cloudflare's hosted vector database. I really, really, really recommend Cloudflare's Vectorize. It's awesome. It does the chunking for you. It figures out how to break down your different files. So there's a whole lot of stuff that you get from that. What we saw here is like the most dumbass way you can create embeddings, with literally just writing them to a JSON file.
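As a rough sketch of what the JSON-file version of this looks like under the hood (the data shapes and function names are hypothetical, not AutoShow's actual code): each chunk of text is stored next to its embedding vector, and retrieval is just a cosine-similarity scan over the whole file.

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// `store` is what the JSON file holds: one embedding per chunk of text.
// Retrieval scores every entry against the query and keeps the top k.
function retrieveTopK(store, queryEmbedding, k) {
  return store
    .map((entry) => ({ ...entry, score: cosineSimilarity(entry.embedding, queryEmbedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy 3-dimensional embeddings; real ones have hundreds of dimensions.
const store = [
  { text: "Episode on signals and reactivity", embedding: [1, 0, 0] },
  { text: "Episode on SSR and hydration", embedding: [0, 1, 0] },
  { text: "Episode on fine-grained rendering", embedding: [0.9, 0.1, 0] },
];

// A query embedding near the "signals" direction retrieves those chunks first.
const results = retrieveTopK(store, [1, 0.05, 0], 2);
```

A vector database replaces this linear scan with an index, which is why the JSON-file approach stops scaling.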

02:13:59 - Anthony Campolo

That's not really scalable. What you want is a vector database, as is being asked here. If you're trying to scale this up, you want to look into things like pgvector or Cloudflare's Vectorize. Cool. And that's pretty much my whole presentation. So very, very, very cool.

02:14:19 - Ryan Carniato

Is there anything else you want to, you want to plug?

02:14:22 - Anthony Campolo

I mean, Dev wanted me to talk about Redwood SDK just a little bit.

02:14:27 - Ryan Carniato

Sure.

02:14:27 - Anthony Campolo

I am not super involved with it. I had Peter on my stream to talk about it. If you want to talk to a framework creator working on a cool framework, you should get Peter on to talk about Redwood. It's about leveraging just React, RSCs and Cloudflare. It's essentially about combining those to create a framework that does all the stuff Redwood did for you before, but without GraphQL and without serverless functions, which were like the whole basis of the framework. The entire framework has been rewritten into a very, very basic, simple type of React-Cloudflare setup. So no, no GraphQL. GraphQL is gone. GraphQL is dead. Shot in the head. They took it out back. They said, sorry, it's over. You're sick. You gotta go.

02:15:17 - Ryan Carniato

I was not, I was not expecting that. I don't, I mean, I don't know what people build with Redwood.

02:15:24 - Anthony Campolo

I did a terrible job the last time I was on, then, because that was kind of the whole point of our stream: that they were moving

02:15:28 - Ryan Carniato

Yeah, yeah.

02:15:29 - Anthony Campolo

Beyond GraphQL to use just React server components.

02:15:33 - Ryan Carniato

Yeah. I mean, I think at the point of that stream, they had just introduced it, but they

02:15:40 - Anthony Campolo

weren't able to fully say we're giving up on GraphQL forever. Because every Redwood app previously was built on GraphQL. So it's kind of a hard pill to swallow, I think.

02:15:50 - Ryan Carniato

Yeah.

02:15:50 - Anthony Campolo

People who already had all these apps built on it... but that was the direction they were going. It's been clear for like two years, but it's been explicit within the last six months. So this new framework is called Redwood SDK. It's a new site, it's totally new branding, and there's no GraphQL whatsoever.

02:16:08 - Ryan Carniato

Yeah.

02:16:08 - Anthony Campolo

Okay. You don't even need to use Prisma. It comes with Prisma just kind of as like a legacy thing at this point, but it's really just a React framework that integrates with Cloudflare. It's pretty cool. You actually might be really into it.

02:16:19 - Ryan Carniato

Yeah, I mean, it makes sense from the fact that Redwood was always about coming up with the combination of pieces to make you productive. So they're just like, okay, Redwood version two, it's a different thing, you know. I talked to Theo about this way back. I'm like, I love your piecewise approach, but even your piecewise approach, being that opinionated, will get outdated at a certain point. It's just science, right? And I think that it's just more opinionated. So they just did a refresh and

02:16:52 - Anthony Campolo

they dialed the opinions way back. Yeah.

02:16:54 - Ryan Carniato

Yeah.

02:16:55 - Anthony Campolo

The only opinion now is Cloudflare. It's. It doesn't work with other deployment platforms, so it's highly locked into Cloudflare. But I think it does it in a way that actually makes a lot of sense.

02:17:05 - Ryan Carniato

Right. It's just going all in. And RSCs is... is a thing. I mean, that's a whole other topic. It's fine. I can talk about that later.

02:17:11 - Anthony Campolo

That's why you should have Peter on to talk about it. I think you two would have an extremely productive conversation about it, like, one that I am not capable of having.

02:17:19 - Ryan Carniato

Right. Very, very cool. Yeah. Okay. So that's that. And I think we went through all the questions for you.

02:17:27 - Anthony Campolo

Yeah.

02:17:28 - Ryan Carniato

So I guess the other question is, is there anything else you want to plug specifically? Not.

02:17:34 - Anthony Campolo

Oh, no. I mean, all the stuff I've shown is all the stuff I want to plug. People can check out AutoShow, the app, for the official thing if they want to sign up for that. I'm ajcwebdev everywhere on the Internet. And yeah, if people thought this was cool... Annexia said this has been really cool, learned a lot, so super happy about that. I definitely had a blast. I'm happy to stay on for a little bit of This Week in JavaScript if you want to transition into that.

02:18:03 - Ryan Carniato

Let me think here, because I actually don't have a ton of This Week in JavaScript, so I might actually just be deep diving into my theoretical stuff now, which probably is less cool.

02:18:12 - Anthony Campolo

Yeah, I can hop off then, but I'll still be in the chat and watching.

02:18:15 - Ryan Carniato

Sounds good. Sounds good. Thanks for coming on again. I've learned a lot. I feel like I need to get my hands on these tools to actually do the thing, because I've been the guinea pig. Like, if I wanted, I could probably find pieces already partially done that I could throw up. Because the situation I'm in right now is I'm at a deficit. I would like to use this stuff moving forward. I just have like three or four streams that don't have subtitles or timestamps right now that I need to do.

02:18:43 - Anthony Campolo

I'll hop into the Discord and I'll just write a couple of the commands for you, just so you can kind of see if you want to run this yourself. I'll give you just a quick startup script. Then you just let me know if you hit errors, kind of building it on your computer. But yeah, it should be pretty simple for you to use the stuff I've shown.

02:19:00 - Ryan Carniato

Awesome. I'm looking forward to it. I can save so much time, even if they're not super accurate. I know that was a concern earlier in the stream, but for, for me...

02:19:10 - Anthony Campolo

Yeah, you'll need to read through them and edit them, but they should cut your time down by at least a couple hours. That's the goal.

02:19:16 - Ryan Carniato

Yeah, yeah. No, that's awesome. This, this has been great.

02:19:19 - Anthony Campolo

Cool, man. Well, thank you so much for having me. Thank you everyone who's in the chat. Lots of friends out there watching, so really appreciate it. Always have a ton of fun on here. So, yeah, catch you guys next time.

02:19:29 - Ryan Carniato

All right. See ya, Anthony. That was, that was good. I wasn't sure how it was going to be, but doing kind of like a GPT thing has been fantastic. Doing something a little bit on the AI side, because, I mean, I think it showed: I literally know nothing about AI. I use it, as I said, as Copilot and to occasionally generate stream covers. Like, the stream cover that I have for this one was generated, but I actually ran out of my, like, was it OpenArt or whatever? I ran out of free credits on there and I didn't bother buying them. I just went to the next free service. I think I actually made this stream cover on Canva or something. AI coding with Dax... I mean, I would do that, definitely. I just don't... I don't even know if it's valuable teaching me this stuff at this point. Yeah, he's a tribute. Great to see him demo and showing it. Yeah, yeah. And it sounds like he's got an approach. I mean, it's always tricky.

02:20:42 - Ryan Carniato

I mean, because as I said, I do watch enough of Theo's stuff to know that when you create stuff that targets content creators, it's a tricky market because of the niche. Like, he always talks about the mistakes he made with Ping. But it sounds like there's maybe a more general purpose thing here as well. So I think that's interesting, right? Dax would be a great guest for AI topics. Yeah, for sure. Dax's use case, at least when he was working a lot with the Zero and Replicache stuff, has actually been something that's been on my mind a lot. So I'd like to get to a good place where I could both learn AI from him and then show him also how I've been solving his problems with Solid, because he's a big early adopter of Solid. And honestly he's been very helpful to me personally. Just people you meet. I met Dax originally through, I guess, like, Theo and that group of people, maybe, I'm trying to remember. And he also, you know, very quickly got on the Solid bandwagon, which was huge. And when I was trying to figure out my next steps here, he's...

02:21:53 - Ryan Carniato

He's actually the guy who connected me with David, how I ended up at Sentry. So, yeah, definitely, definitely would like to do some stuff. For a while there, he was doing some content that I was actually gonna go on, but I don't think I'm that funny. He was doing that between two trees thing he did with a bunch of people. I was at one point gonna be a guest on that, and then I kind of realized that maybe I wouldn't be the best person for that. I don't even know what this means, Buzzy: if Ryan took to AI SDKs, we will hit AGI a lot sooner. I don't... yeah. What's AGI? Artificial general intelligence. Okay, interesting. Yeah, you guys speak a different language than me. No, Dax is awesome. Yeah. No. Okay, so let's... I'm glad everyone's still here, still live. I do have a little bit of This Week in JavaScript that I'll get to in a bit, but I wanted to talk about kind of an afterword, a little cap on the topic from last week.

02:23:22 - Ryan Carniato

First of all, I've got to say it out here: we were kind of doing a mental experiment on mutable reactivity and what behaviors and characteristics it should have.


And if you remember, we talked about this stuff.


I realized that I was slightly wrong about Svelte. And actually I think Svelte's model might be the right balance, because I showed off a really cool model last week that was super, super powerful and did everything I wanted in terms of the behavior and mutability. But let me see if I can go back to when I first talked about it. Where... favorite feature. Yeah, yeah. In the responses here. Yeah, the egg gives a nice example, and I used this example to show how Svelte doesn't retain... I mean, we don't really care about anything below here, so I'm just going to get it out of here. Actually, I've got to keep the script tag. I think Svelte doesn't really care about references. So there's this thing where I said, you know, a equals ref, b equals ref. And then we realized that a doesn't equal b. a equals a, but a doesn't equal b. And actually, I guess in our example it should be true. But you see how... where is it? You see how at the end, when we've updated everything, a doesn't equal b, and b is three when we write to it. What I did learn, though, is that in Svelte, if we assign s.a directly, they do end up being the same and they do end up equaling. So, ignoring this warning that they're talking about here, whatever: as long as you don't keep the same external ref, as long as you directly assign them, Svelte actually does keep references. It's only the external ref it doesn't keep, which is reasonably sane behavior, right? You know, s.a isn't going to equal ref. So why should this magic happen? You know what I mean? Why should these be the same? There are theoretical benefits to this, but I actually think that their model is more sane than I gave it credit for. Because what I ended up doing this week was working on some problems with proxies.

02:26:40 - Ryan Carniato

And then I finally got all the different functionality working, and I was like, okay, this is pretty cool. And what ended up happening was... where is it? I came up with this example where I used a reference to create a store, I updated that store value, and then sometime later used the ref to create a new item on the store. I also tried a direct assignment and then a structured clone. And what you'll see in this example is that even though they're separated in time, the first three are all the exact same object, whether they were assigned directly or assigned by ref. And while you could argue that this makes sense in one sense, because they still have the same underlying object, you could also argue that you might not expect b to be Jon Snow, like, not to hold the update, because it's coming from the original ref. The reason I hit this was because I was doing diffing, like reconciling, and I was like, is it weird that if I diff the original object back in, it holds the updates that have previously been done to that object?

02:28:09 - Ryan Carniato

It's a little bit odd, right? Like, you might not expect it. Because you might view this, in the case of diffing, as fresh data, even though technically you've made a proxy before. And there's this interesting tension between, like, what's the identity of the proxy? Is it something that you get from the path? Because the problem is, when you're inside a proxy, there's an internal value, and that internal value is going to link to its own properties. So when you go to fetch that internal value, it links to its own properties to return yet another proxy. I mean, I can show this in Solid Signals. I'll just go in here for a second and go into the store implementation. It doesn't really matter what version or what I'm using. You can kind of see this: the get trap does a bunch of stuff, but ultimately what ends up happening is it gets a value, either read from a signal that it internally has, or it reads the child value.

02:29:44 - Ryan Carniato

That value is derived from the underlying target. And then if it's wrappable, it wraps the value; otherwise it sends the primitive value back. "Is it wrappable" means: is it an object itself, should it be tracked? What I mean is, internally, values aren't necessarily wrapped. If they were wrapped, then you might, you know, wrap twice. You need some way of saying, hey, I've already got this object. You don't want to, just because you're accessing it, wrap a proxy in a proxy in a proxy and make all these new proxies. If you want to keep some kind of stable identity, you want to be able to do some kind of lookup. Now, that lookup could be global, in a WeakMap or on a symbol, like we do generally with Solid. Or it could be based on the path. You could say, I hold the proxy inside here, you know, and we always give the same proxy when you go to the same path. I didn't actually test if Svelte does that, but I'm gathering if...
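A minimal sketch of the global-lookup approach described here, using a WeakMap so the same underlying object always maps to the same proxy. This is a simplified, hypothetical illustration: real stores like Solid's also create signals and track reads in the get trap, which is omitted entirely.

```javascript
// Global cache: underlying object -> its one proxy, so repeated access
// never produces proxy-in-proxy-in-proxy chains.
const proxyCache = new WeakMap();

function wrap(target) {
  // Not wrappable: primitives pass straight through.
  if (typeof target !== "object" || target === null) return target;
  // Stable identity: same underlying object, same proxy, regardless of path.
  if (proxyCache.has(target)) return proxyCache.get(target);
  const proxy = new Proxy(target, {
    get(t, prop, receiver) {
      // Wrap child objects on access so nested reads stay store-shaped.
      return wrap(Reflect.get(t, prop, receiver));
    },
  });
  proxyCache.set(target, proxy);
  return proxy;
}

const ref = { name: "John Doe" };
const s = wrap({ a: ref, b: ref });
// With a global lookup, s.a === s.b, and both equal wrap(ref):
// identity follows the reference, not the path it was reached by.
```

This is exactly why the "fresh" ref in the earlier example stays connected: wrapping it again hands back the already-linked proxy.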


Yeah, if I can do s.a === s.a, that means every time you go and access a here, it's the same proxy, for this to be true, right? So there is a lookup. So it's either based on reference, which it's not, because otherwise s.a would be equal to s.b here, which it's not. Or it's based on path. Now, it's possible that when you set it, they can see the existing proxy and assign that proxy to a different path as well, which is what I think Svelte is doing. When I do s.b = s.a, they see the proxy coming in and go, okay, well, s.b will be that proxy, right? But my point is that you have to be aware not to overwrap, and there's only a couple of ways to do this. The challenge, of course, with being path based is arrays, right? If you move items around... now, you can say that once you're in a store, you should be internally referencing the store path. So if instead of our state being an object, our state was an array. Can we do this?
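For contrast with a global WeakMap lookup, here is a minimal sketch of path-based identity, roughly the behavior being inferred for Svelte here. It is a hypothetical simplification: a real store would also invalidate this cache in its set trap and would do reactive tracking.

```javascript
// Path-based identity: each wrapped parent keeps its own per-property cache,
// so identity is (this parent, this property) rather than the raw reference.
function wrapByPath(target) {
  if (typeof target !== "object" || target === null) return target;
  const perProp = new Map(); // property name -> child proxy
  return new Proxy(target, {
    get(t, prop, receiver) {
      const value = Reflect.get(t, prop, receiver);
      if (typeof value !== "object" || value === null) return value;
      // NOTE: a real implementation must clear this entry when the
      // property is reassigned; the set trap is omitted in this sketch.
      if (!perProp.has(prop)) perProp.set(prop, wrapByPath(value));
      return perProp.get(prop);
    },
  });
}

const ref = { name: "John Doe" };
const s = wrapByPath({ a: ref, b: ref });
// Stable per path: s.a === s.a. But the same ref under two different
// properties yields two different proxies: s.a !== s.b.
```

The trade-off discussed above falls out directly: path identity gives sane external-ref behavior, but moving items around (arrays especially) means the path no longer pins down one stable object.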

02:32:05 - Ryan Carniato

Sorry, I'm missing... yeah, screw this. Sorry, I shouldn't have... I deleted too much. If instead our state's an array, right, and if we said s[0] = ref and s[1] = ref and s[1].a = 3, and then we just replace all the a's with zeros and ones, I think this should work out the same way. But... that's great. Okay. It's basically the exact same behavior we showed before, only this one updates. And if I change this to be s[0], it will work properly as well. What I wanted to point out is that if you're moving objects around in an array or something, you have to make sure that you aren't talking about fresh objects coming from the outside, that you're self-referencing. Which is typically the case, right? Like, if you take an array and then you filter it or sort it, you're going to get the proxy objects back, and then you can assign the new array to the... you could, like, reassign the array, right? I don't know how Svelte does that top level, because it's a const. I can't actually reassign the array this way.

02:34:03 - Ryan Carniato

But I'm sure if I put an object, then put a list array on it and then reassigned it, it would work. But what I'm getting at here is it's kind of like something you have to keep in mind, right? And I think it makes a lot of sense. So I just wanted to kind of step back from last week, because I've come around to this being probably the more sensible approach in terms of what mutable behavior looks like. That being said, that doesn't actually help us a ton, because I started doing some more work and I want to talk about an interesting topic that you might not expect from me on my stream, generally speaking. Let me see first, I want to quickly see... yeah, we've got people still going. I was seeing if you guys are all still live on YouTube and on Twitch, and yeah, we still got people watching. So that's good. I'm stoked. I wanted to talk about... I wrote a HackMD that I want to talk about here for a minute, because I think...

02:35:20 - Ryan Carniato

I don't know if I've talked about it enough. And I guess I should share this HackMD. It's not a completely finished thought, but, you know, I'll post it in here for you all to see. Yeah, yeah, for sure. And I actually came up with a way to do this, but also keep the original references still being references. That's the weird part about this example that I was showing here: the original object here, this ref, isn't actually mutated at all. I don't know.


Didn't I console log it? Yeah, no, I console logged it afterwards. Yeah, it actually isn't mutated at all. But it depends on what you use as your identity, right? What I'm saying is, Svelte must be using part of the path as an identity, where I was doing a global lookup. Like, this ref is always the same proxy no matter where you put it. Which is why, even though technically it's not mutated, you know, it's still Jon Snow, not John Doe. It still kind of acts like it, in a weird way, because it's, like, connected. I've done some really weird stuff with proxies, like creating swappable underlying objects, so the proxy can change what its target is, so that I could do immutable diffing under the hood while providing a kind of mutable interface. So when people mutate it, everything looks stable: you don't swap everything out. When you change an item, you don't clone its parent and its parent all the way down like an immutable structure. But yet internally it does do that, so that you could basically use referential diffing. It was a cool idea, and...

02:37:21 - Ryan Carniato

But I kind of realized a few more things working along this thing.


As I said, I think a partial path-based approach actually makes a lot of sense. It's tricky, because at creation time you have to be sensitive to when things get created, but I think on access it's generally fine. It's one of those things... because if you're storing the proxy on the item, then, like, every time you traverse... I'm gonna have to look at the internals to understand how you do it. But conceptually, I think I get how I would create a solution that looked like that. I'd just always done it with a global reference, because it was mutable, I didn't care, right? There was no reason not to use a global WeakMap, because if someone was going to, like in the past in Solid, because it was mutable, assign this ref here, you mutated it. So it's the same proxy, you're good. Once I switched to a system that was swappable and not mutating, it got a lot weirder that these items were still intrinsically linked. Yeah, we are talking about the benefits of the VDOM for the first time today, because I don't know if people understand this whole...

02:38:45 - Ryan Carniato

Yeah. Now I've just gotta decide if it's something that I can look at right now or not. Maybe I'll look at it and then comment more on it after. Proxy traps can't be that hard, right?


Prototype. Okay. Check proxy is array version stack.


Let me see if I can find the new proxy call. Okay, there we go. Right. Delete property. Get. Proxy path symbol. Yeah. Okay. So, sources, get prop... sources is scoped to the specific thing. Yeah, because ours used a global trap for all of them. So this is scoped to the specific creation. So sources is a map. Yeah. Okay. So, map: it's undefined, with parent. Sources, set sources, descriptor, value, stack... with parent. I'd have to look at doing that, but again, with parent. Interesting. I'm gathering this path-based thing is the key to this, right? Because that's why you care about the...


Oh, okay. Okay. So this is acting more like the signals map underneath the proxy. Yeah. Special handling for length, makes sense. Sorry, I stepped out of the function for a second. Okay, okay. Scoped inside a closure. Yeah, yeah, yeah. Okay, yeah. I'm gonna have to look at this probably in more detail, but what I'm getting is there's definitely indirection, right? Like, the target itself... let me see. Where's the new proxy call again? Value.

02:42:05 - Anthony Campolo

Not used.

02:42:10 - Ryan Carniato

Prop and target. Okay, so it does look at the underlying thing in terms of, like, overriding get: target, target

02:42:23 - Anthony Campolo

source target.

02:42:33 - Ryan Carniato

Reflect.get, target, prop, receiver. Okay, okay. So yeah, sources is like this signal map, almost, I think, and then we are still using our target. I was trying to see where the actual wrap happens, but it's harder for me to see right off. And not undefined, uninitialized, undefined, Reflect target prop receiver. So we don't actually... unless it's pre-processing on the new value coming in that I'm missing. Get prototype of value... yeah, this is it here, isn't it? Oh no, this is just saying if it's not wrappable, return value. Proxy, value, array, version. Yeah, I'm still just missing something that I'm looking for. It's fine. It's interesting to see different implementations. Yeah, maybe look where... well, the thing is, if it was non-proxyable or already a proxy, this check would have happened earlier, which is why I think the thing I'm looking for is after this. Actually, maybe I can look at the set trap, that might help me see what I'm looking for. Oh, I see. Yeah, I don't know exactly what it's doing, but you...

02:44:44 - Ryan Carniato

It's because you also create the signals on write, right? Yeah, yeah, okay, yeah, that works. Okay, that makes sense. And then that's actually an override; otherwise you pass through. Okay. Yeah, set simple property. You can return the existing property, but... so we don't mutate the object you pass in. Yeah, yeah. Yeah, this is closer to what I've been doing recently in mine. Because, yeah, I wanted to, like, with the overrides... I've gone to a place right now where the stuff I'm working on is really weirdly complicated, because it both does overrides and also lets the main target be swappable, which is complicated. But yeah. You talk about not writing in the signal. Yeah, yeah. And the other problem is, you need the signals if you want to do any kind of concurrency, because otherwise that's the only way you're going to get versioning. Anything outside of the signals is not going to be versioned properly. This is something I had to change from 1.0 to 2.0. I mean, there's a bit of thought in, like, what you don't observe is in a different dimension.

02:46:15 - Ryan Carniato

It's not observable. Yes, yes. And this was the thing that was really attractive to me: getting rid of the frozen and sealed object use cases, something I used to, like, clone, but then it wouldn't diff properly. And yeah, I think this is the right general way to approach what I called source guarding in the last stream. Last time, as I said, I was a little bit harsher on the one behavior in Svelte, but actually I think that behavior makes a lot of sense. So yeah, I step back from that. In terms of my expectations of how shared references should work, I think this is the right model. I mostly just want to say I'm stepping back slightly from last stream. I wasn't sure, I said it kind of inconclusively, but I missed that this worked, that the direct assignment kept the references. That changes everything. I think this is more than reasonable. So yeah, I think this is the right sort of conceptual model for the source guards. But now that we've gone down that tangent, kind of a slight addendum on what I was showing last stream.

02:47:29 - Ryan Carniato

Most of the stuff I was talking about is still very important. So that's source guarding: Svelte's model for source guarding is what we want, and stable references, reconcilable, concurrent-safe is what you get with the signals. Reconcilable is what I kind of want to talk about today, because I actually went through and started working on my optimized version of deep tracking and got to an okay place, and then I got to diffing and I started noticing some stuff, and I think we should probably talk about that today. Because this kind of thing is probably fairly obvious to, say, Dominic, but I don't know if it's obvious to everyone else. Did I actually paste the link to this article yet? I don't think I did. I think I got nicely sidetracked by Dom, which is good. This is a quick way to get answers to questions: having people literally there on screen to answer them for me. So that's good. Yeah, I like what Svelte is doing there. Yeah. So I'm going to share it here so we can talk through it while I go.

02:48:53 - Ryan Carniato

So the article today is called The Genius of the Virtual DOM. Probably the last thing you'd expect me to write, but it's important to understand the powerful abilities of alternative solutions if you want to create the best solutions. And there is some real genius here, whether initially intentional or not. To start, the virtual DOM approach is not a particularly optimal way of diffing as a baseline. I'm not saying there aren't optimal approaches. Go check out Inferno or ivi, however you say it, there are definitely very optimal virtual DOM approaches, some of the block DOM stuff. But let's just get into perspective of what a baseline virtual DOM is. It creates artifacts just to compare, which carry overhead, versus just diffing the inputs at its core. When fresh data comes in, it has no idea what has changed. It needs to diff everything from that point of change. It doesn't even necessarily have a way to shortcut that diffing at the baseline. Okay, wait, what about memoization? Well, notice how almost no one was talking about memoization before React hooks. I'm just talking about the React community; the people who actually built virtual DOM libraries were already thinking in terms of memoization before, or immutable virtual DOMs and all that stuff.

02:50:05 - Ryan Carniato

There's a lot of really great research on the virtual DOM between 2014 and 2016, and you saw tons of innovation in that space during that time period. But the developer coming in to use React never talked about memoization when it was class components. Pretty much never; very rarely would it come up. Don't get me wrong, there are a couple reasons for that. One, a lot of people are less performance-aware without having a smaller unit than a component. It wasn't nearly as common to have to deal with the way data linked together. Adding dependency arrays immediately made it very evident to people that, oh, when this changes, this changes, this changes, this changes. With the classes, a lot of people would just be like, oh yeah, it's a render function, it always reruns. No one really thought too much about it. And because their logic wasn't embedded in it, they weren't really feeling the consequences, the side effects, of that re-render.

02:51:03 - Anthony Campolo

Right.

02:51:04 - Ryan Carniato

Don't get me wrong, there's shouldComponentUpdate. I've definitely written quite a few of those in my day, especially when I got started on React Native. Performance was really, really a consideration with early React Native, because some stuff you'd do, especially around pieces with animations, meant the performance of the JS thread could actually completely slow down your whole application. So it was a thing. But React chose to take a broad stance early, because there's always the worst case. There's the fresh data from the server, which will never be referentially equal, and users simply doing stuff like this. Hopefully you can see this: list equals this.state.list, and then somewhere else you'll list.push(newItem), then you setState with the new list. See, this is not an immutable change.
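The mutation being described can be sketched in plain JavaScript (no React here; `state` stands in for a component's `this.state`, and the names are illustrative):

```javascript
// Sketch of the non-immutable update being described; `state` stands in
// for a component's this.state (hypothetical names, not React itself).
const state = { list: [1, 2, 3] };

const list = state.list; // grab the existing array
list.push(4);            // mutate it in place: NOT an immutable change

// The reference never changed, so a reference-equality check sees "no change".
const sameRef = list === state.list; // true

// React's baseline re-renders and diffs everything anyway, so in React this
// "works"; a framework relying on reference equality would miss the update.
```

This is the worst-case input React chose to tolerate by default.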

02:51:57 - Anthony Campolo

Right.

02:51:57 - Ryan Carniato

People can do this in React and it still works. It's like how Angular devs don't think about the OnPush strategy. Yes, right. It's there for more performance-minded people. I mean, people did like function components because, leading up to hooks, we had HOCs and we had these patterns, and it was a lot easier to just think in terms of input and output. It was this functional component thing. Yeah, I mean, people did overuse it, but it was a different time. I'm not going to be too critical of that, because we didn't have the same level of awareness in the React ecosystem, which is kind of the selling point of React. If people want to argue hooks made things go downhill, it was because people became aware of this stuff for the first time. Right. You never knew, because you didn't put much app logic in it. And the same thing could be true of render functions. I know it's a bit odd by current standards, but people still do this today.

02:53:19 - Ryan Carniato

And the problem is that the array is the exact same reference. So if we auto-memoized props, this would never show updates. React could have always auto-memoized props. They could have always said, if the prop doesn't change, if the reference to the array doesn't change, we won't rerun that child component. They could have done that. But they wanted to make sure this worked, because it would be kind of weird for people otherwise. Basically, React had no opinion about immutable versus mutable in the early days, because their baseline is basically diff everything, throw it away, we don't care, right? It works as readily when you have the ability to keep references, working in a single environment without serialization, as when you're working with data from the server. It's when you leave the environment, and the web is by its very nature a server and a client, so there is going to be a serialization boundary that we must always face. So all I'm saying is, in this example, the setState is more like "render this component."
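Why auto-memoizing props would break that pattern can be shown with a shallow prop comparison, a sketch in the spirit of what a default `React.memo`-style check does (this is illustrative, not React's actual implementation):

```javascript
// Illustrative shallow prop comparison, similar in spirit to React.memo's
// default check (a sketch, not React's actual code).
function shallowEqual(prev, next) {
  const keys = Object.keys(next);
  return (
    keys.length === Object.keys(prev).length &&
    keys.every((k) => prev[k] === next[k])
  );
}

const items = [1, 2];
const prevProps = { items };
items.push(3);               // mutate in place
const nextProps = { items }; // same array reference passed down again

// Shallow comparison reports "unchanged", so an auto-memoized child would
// skip re-rendering even though the data visibly changed.
const skipped = shallowEqual(prevProps, nextProps); // true
```

That false "unchanged" result is exactly why React could not make memoization the default.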

02:54:23 - Ryan Carniato

But this is just the baseline. Because what I'm saying is, you actually mutated the state here with this push, but no one knows to re-render, right? This setState is actually more of just "render the thing." Like, if this didn't do anything, if you just put an empty object or whatever... this isn't the right syntax actually, looking at it, because I need to actually do... it's this, sorry, fixed. But what I'm getting at is, if you just put an empty object here, like setState with an empty object, I think it would do basically the same thing in React, right? I mean, some people were just used to having to deal with this kind of crap all the time. They were just used to dealing with all the edge cases. There were fairly complex apps back then, maybe not super complex by modern standards because they were limited a bit by tooling, but if you're coming from dealing with dirty-checking loops in Angular, or butterfly effects in Knockout, or wherever you're coming from, this is something you can live with, right?

02:55:54 - Ryan Carniato

Perhaps. But yeah. And in a sense, do you know what this reflection made me think of? Sorry, this is a This Week in JavaScript topic, but I gotta... where is it? Bookmarks. I gotta throw this one out here. Using let in module scope as state management, right? If you think about React's earliest days, this literally could be just some random state, and then you could just have been like, re-render this component. And if that let variable up here wasn't in the component... how should I put this? Maybe I should play around with this for two seconds just to show what I'm talking about. I mean, I can't even do it in Solid, Cassidy, it doesn't re-render, but I can at least show the code I'm talking about. If I just made this let count, and I know that I'm making global state here, but if I did count, and then, now we're going to pretend that instead of useState, the framework has re-render, okay.

02:57:30 - Ryan Carniato

Or render, renderComp. Okay. This is going to be our new framework primitive, right? renderComp. Then all we have to do is put count in here, and then turn off Solid's warnings. And then in here we could just do count++. Actually, let's do parentheses. Render, renderComp. There we go. And now, in something like React, this would work. Sorry, I missed count = 0, right? You could call this setState, you could do this thing, but generally speaking, you could just have something outside of the render function, update the state, call renderComp, and it would update. Right? And if this was a class component, you could even picture that this is now class Counter, right? Not a function. And you could do something along the lines of this.count++, this.render(). I'm oversimplifying, because you should schedule the render, you shouldn't call render yourself. But my whole point is, conceptually... oh, I missed the count = 0, right?
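The on-stream demo amounts to something like this, where `renderComp` is the hypothetical framework primitive and the "framework" just produces a string:

```javascript
// Minimal sketch of the "let as state" demo: module-scope state plus a
// hypothetical renderComp primitive that just re-runs the component.
let count = 0;

function Counter() {
  return `<button>${count}</button>`; // pretend this produces real DOM
}

let output;
function renderComp() {
  output = Counter(); // naive: re-run the whole component on demand
}

renderComp(); // initial render -> "<button>0</button>"
count++;      // update module-scope state directly
renderComp(); // manually tell the "framework" to render again
// output is now "<button>1</button>"
```

Nothing here is reactive; the only thing driving an update is the manual `renderComp()` call, which is the point being made.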

03:00:00 - Ryan Carniato

Conceptually, you could basically have a let variable, so to speak, and the framework could basically operate itself, right? This is kind of the initial promise of the virtual DOM. It was just like, we don't care about your state, really. React made a built-in state mechanism so you don't have to do this and this, and can manage it a bit better. It's kind of weird, it doesn't feel automatic. People want a little magic when you call render, even though setState is the same thing. But essentially this is what you're dealing with. So when I see this, that's what I think about, right? "Are you making it so we all end up using a framework that only works if we all just use let? Is useState or useRef any different than let?" Yeah, actually useRef isn't, right? But useState is very different than let, right? That was the argument, you know. You could say Svelte made that argument in the early days, but they used a compiler to do it. Or you could say reactive libraries do it when they use proxies.

03:01:25 - Ryan Carniato

But, you know, trying to hide it behind a variable, you could argue. But useState is not the same as let. Because essentially, as I showed here, sure, this is let, and updating the state is let, but useState is both. It is both "set the state" and "tell myself to re-render." Yeah, or a getter/setter combination, because you define the state initially. So there's no reason why, if properties are fixed... But the nice thing about the React model is that this count is just a plain object, or a plain value, or whatever, right? In the old React model, obviously it was this.state, right? You didn't just reference it directly. But you're right, because you don't need collisions. It should actually be, I guess, state = { count: 0 }, and then we'd go this.state.count. But we also weren't that worried about collisions on this.increment either. So this could have been optional, but setState gives it a name, you know.
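The distinction being drawn, that useState is both the value and the re-render trigger, can be sketched with a toy single-slot version (illustrative only, not real React, which handles many hooks and components):

```javascript
// Toy illustration: useState is "let" PLUS "schedule a re-render".
// Single state slot only; hook ordering and components are ignored.
let slot;                   // the "let" half: where the value lives
let renderScheduled = false;

function scheduleRender() {
  renderScheduled = true;   // a real framework would queue actual work here
}

function useState(initial) {
  if (slot === undefined) slot = initial;
  const setState = (next) => {
    slot = next;            // set the state...
    scheduleRender();       // ...AND tell the framework to re-render
  };
  return [slot, setState];
}

const [, setCount] = useState(0);
setCount(1); // unlike a bare `let count = 1`, this also schedules a render
```

A bare `let` only gives you the first half; the scheduling half is what makes it a framework primitive.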

03:03:10 - Ryan Carniato

Anyways, I'm just playing around here. But the thing is, there are only so many ways to solve this problem, right? "Yes, we get the same mistake as Svelte." Again, Svelte did not use let as plain JavaScript; they used it as a syntax, custom JavaScript, for their runtime. Yeah, exactly. "So the opposite: you always know exactly where it is, and only things in the same scope can change it." Yes, this is actually the power of runes, side note: the scope locking. So this is the thing that's missing here. As magical as it sounds, there are only so many ways change management works. Something has to tell it to change, right? The most naive thing you can do is poll. You can just go, "I don't know what changed," every 60 milliseconds or whatever, pick a time, maybe 16 milliseconds so you get 60 frames per second, "I'm going to just check all the state in my whole app and see if anything is different." Right? But if you're not doing immutable data or don't have history, you basically have to keep a copy and then check against it.
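The "keep a copy and check against it" poll can be sketched like this, running one tick manually instead of on a 16 ms timer (names are illustrative):

```javascript
// Naive poll-based change detection: snapshot the state, then on each
// "tick" compare every property against the snapshot.
const appState = { count: 0, name: "a" };
let snapshot = { ...appState };

function dirtyCheck(onChange) {
  for (const key of Object.keys(appState)) {
    if (appState[key] !== snapshot[key]) onChange(key);
  }
  snapshot = { ...appState }; // keep a fresh copy for the next tick
}

const changed = [];
appState.count = 1;                     // someone mutates state somewhere
dirtyCheck((key) => changed.push(key)); // the poll notices: ["count"]
// A real poller would call dirtyCheck every ~16ms for 60fps.
```

The cost is obvious: every tick touches every property whether or not anything happened, which is why nobody ships pure polling.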

03:04:37 - Ryan Carniato

I mean, this is what dirty checking in Angular is. Except they very quickly got past "let's poll" and went, how about when someone does something, an event happens, or some async thing, something happens in the system. If nothing happens, we won't do anything, but if something happens, we should check everything. And that's what Zone.js is in that world. But short of polling, or just casting onto the universe, something has to go, "okay, I should run the cycle again." It has to trigger or schedule the thing, right? So "let as state management," purely, is not a real thing. Which makes tweets like this kind of interesting, because it seems like it's more for engagement. Because he'll come back, there are only so many places these solutions can come back to, but he'll kind of come back, and there'll be a this.render, and then we'll have that kind of groan moment. Do you know what I mean? Where it's like, okay, great, you have let. But is this better than... I mean, maybe it's less to type, right?

03:06:08 - Ryan Carniato

Because then you... I guess we have our answer. Actually, this is less to type. Maybe. No, maybe we do it with a functional form. Maybe we do something like setState(s => s.count++). I kind of like this. I don't know, maybe this setState is just the same thing. Oh yeah, you're right, you still need to return the... well, it depends on what you're doing. I'm actually showing an immutable API here. You're right, technically it should be immutable, probably. Does it need to be? It doesn't, actually. A mutable API at the top level would actually still work, because you'd be watching for those properties, so you'd always be diffing top level. And then if you had a complex object, you'd have to do something like s.user = { ...prev.user, name: "example" } or something. I don't know, we could toy with the API a bunch. What I'm saying is, a top-level assignment. But yeah, you're right, they would never use a mutable API, because people would think they could just do this, and that is harder to track.
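The functional-update shape being sketched out loud might look like this (an illustrative store, not React's actual setState):

```javascript
// Illustrative store with a functional setState: the updater receives the
// previous state and returns the next one, nudging you toward immutability.
function createStore(initial) {
  let state = initial;
  return {
    get: () => state,
    setState: (updater) => {
      state = updater(state); // replace the state, don't mutate it
    },
  };
}

const store = createStore({ count: 0, user: { name: "Ryan" } });
store.setState((s) => ({ ...s, count: s.count + 1 }));             // top-level
store.setState((s) => ({ ...s, user: { ...s.user, name: "A" } })); // nested
// store.get() -> { count: 1, user: { name: "A" } }
```

The shape of the API, "return the next value," is itself the hint toward immutable updates that the discussion keeps circling.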

03:08:17 - Ryan Carniato

Yeah, you're right. They would never do this, because of nesting. So yes. But to be fair, all things being equal, I'm going to go back to my silly version here. If this was the API and you're just using lets...

03:08:50 - Anthony Campolo

Well,

03:08:54 - Ryan Carniato

someone could do this pretty easily too, right? So this suffers the same problem. It actually, arguably, because it doesn't expect you to return the next value, lends itself to people doing the wrong things, or to you having to basically diff everything and not being able to shortcut with memoization. This is actually kind of the problem with just running on lets. You're taking this original React mentality, which, generally speaking, is not performant, right? The shape of the API with setState is actually... I mean, sure, just replacing the new object is not a big thing, but the fact that each hook is a separate thing, the fact that the functional form returns the next value, these are all subtle hints to make you do the right thing. The fact that the value doesn't update right away basically punishes you for doing mutable stuff, which is all good in React's design, right? So, as I said, it's so funny, because when people look at these tweets from Ryan Florence, they're like, oh, he must be talking about signals or something.

03:10:12 - Ryan Carniato

It's like, no, he's not. He's just talking about dumb, unperformant React, right? Like a "you're not going to need it" moment, I imagine. So I mean, it's fine, let's continue on here. I don't want to dwell on this. There's a whole bunch of people in the chat speculating on this stuff and I'm getting caught off topic, but I think this is probably the time to do this, before I continue. "No, it's going to be direct mutation without any kind of setter or getter. Lord help the spaghetti, right?" "Let is bound in lexical scope, which is basically the opposite of spaghetti. You always know exactly where it is; only things in the same scope can change it." Well, no. Don't worry about the lexical scope, it's as wide as it gets. Multiple devs working on the same lexical scope can mutate it, and you'll find that in any large apps. Yes, yes. Okay, he's not going to see this hot take on lexical scope being large enough yet. But it's not about the lexical scope. The problem is the size is infinite if you don't have a bound context.

03:11:18 - Ryan Carniato

If you don't say "re-render this component," right? Oh, here, David Corset unironically thinks it's a good idea: "please use a function to update, not a direct careless mutation." Fair, right? But you see how you walk along a set of steps here. Once you're using a function, you're like, well, why don't you just have the function move the state? You kind of naturally end up at React's API. As I said, with some of the responses you can't even tell if they're a joke or not. "I wouldn't be surprised if there's a manual render call right after." How can you tell? This guy knows how frameworks work. Yeah, that is a very insightful insight. "Wait until you hear they said 'I love let' back then." Yeah. But the point of this whole discussion, getting back to where I'm going, is that at its base, this is kind of what makes the virtual DOM powerful, even though it does not make it performant. Just something to throw out there: you can do pretty dumb stuff from an optimization standpoint and still kind of get by.

03:13:16 - Ryan Carniato

Don't get me wrong, by modern standards, the reason we have the React compiler and all this stuff is that we would reject this, generally speaking. Which is why it's very interesting to see what they're doing in Remix 3, because I think we're in an interesting cycle where there's a return to simplicity, like htmx and stuff. And it's popular enough, there are enough buyers of this, that it's a thing. But on the other hand, it's a very "you aren't gonna need it" attitude, a very YAGNI kind of deal. And it's like, sure, but what percentage of the time could you have taken the solution just a step further and alleviated tons of pain, just to make your point? Right. Anyway, let's continue. The same with auto-memoization, I know it's a bit odd, blah blah blah, it's more like "render component." But that is the baseline. For the same reason, it can never depend on knowing which references are the same. So for things that need to keep identity through moves, like items in arrays, you have to assign a key property, right?

03:14:37 - Ryan Carniato

If you remember the Solid example that I showed here with moving items: because we're dealing with referential equality, we know these things are the same thing, right? So we can move them around; we can have a map function, or each in Svelte, or whatever, and say, okay, I have these DOM nodes, I'm just going to move them, they're the same thing. But if you're getting completely fresh data in, you don't have anything to work with. As I said, this is not immutable, this is not mutable change tracking, this is literally the blank-slate baseline I'm talking about here. So to keep identity, you have to basically assign it, right? That's why there's a key prop in React. So a VDOM, as far as diffing goes, is sort of dumb. It doesn't necessarily leverage the power of immutable diffing, or the fine-grained ability to not re-execute that immutable systems of stable references give you. When fresh data comes from the server, even today, you are basically re-rendering the whole page. You aren't necessarily updating the DOM everywhere, but the whole process runs from top to bottom, right?
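Key-based identity assignment can be sketched like this (a hypothetical helper; real libraries also handle removal, ordering, and DOM moves):

```javascript
// Sketch of key-based identity: deserialized server data has all-new object
// references, so items are matched by an assigned key instead.
function keyedDiff(prevItems, nextItems) {
  const prevKeys = new Set(prevItems.map((item) => item.key));
  return nextItems.map((item) =>
    prevKeys.has(item.key)
      ? { op: "reuse", key: item.key }  // same key: keep/move the DOM node
      : { op: "create", key: item.key } // new key: create a fresh node
  );
}

const prev = [{ key: "a" }, { key: "b" }];
const next = [{ key: "b" }, { key: "c" }]; // fresh objects, overlapping keys
const ops = keyedDiff(prev, next);
// ops -> [{ op: "reuse", key: "b" }, { op: "create", key: "c" }]
```

Note that reference equality never enters into it: identity survives the serialization boundary only because it was explicitly assigned.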

03:15:41 - Ryan Carniato

Now, don't get me wrong, every solution has this problem, because almost all optimizations on the web are rooted in the fact that we don't have to cross serialization boundaries. But I want to point out, as an introduction, that the VDOM actually isn't that smart from a DOM-diffing standpoint. But it sets a baseline. This is the worst-case scenario; it's basically built to handle the worst-case scenario, right? So yeah, sounds stupid. How is this genius? Well, we focus a lot on emphasizing the benefits of both push and pull when introducing people to signals.

03:16:21 - Anthony Campolo

Right.

03:16:21 - Ryan Carniato

Push systems only notify things that are related to the change, and pull systems only do the work that is relevant. So when you have push-pull, like signals, you combine the benefits of both systems, getting the best update characteristics of both. They only notify the things that can change, and only do the work that's relevant to the smaller scope that you know is actually being used. The problem with React is that you can have a whole bunch of state in the components, even in these silly components, the class ones or whatever, it doesn't really matter the shape of it, and, sorry, this is such a great example of what React's doing: as long as this render is getting

03:16:57 - Anthony Campolo

called, well,

03:17:00 - Ryan Carniato

called, it's showtime. Everything, even unrelated things, is going to get pulled into the mix. And that's just how it works. In the end, it does only do the work that it needs to do, but on the other hand it kind of has to check with everyone along that scope. Whereas a push-based system can go, oh, only this one piece of data updated, we know we don't need to run the rest of this stuff. But that push-based system, purely on its own, doesn't know if that piece of data is even being used anywhere. That piece of data could just be sitting off to the side of the screen. So you can see it in its extremes: with React, when you set some state that's never going to be used because of some other conditional state, it's going to rerun the component, immediately pull the values and stuff that it does use, and then maybe see that those haven't changed and shortcut out. But with a signal system, it's going to notify something, and that notifies the things that depend on it, and that propagates through the graph, and then it finds out that there's no effect at the end, no rendering, no nothing, and doesn't do anything.

03:18:29 - Ryan Carniato

Right? Signals aren't push-based; sorry, I just mean a naive push-based system, like RxJS or something. Signals actually do push-pull, where they notify through the system but don't do the work along the way to calculate; they find the effects at the end and then they just pull. So basically, in a signals-based system, when you do that change, instead of going through and doing all the work on each dependency, it'll just notify them all, see that there are no effects at the end, and call it a day. So it's the only system in this stupid scenario that doesn't end up rerunning user code. The push-based system will rerun user code, because it needs to literally propagate through. It goes: okay, you updated, you updated, you updated, you updated. And then it's like, there's no sink, there's no effect. So you did all the work, but ahead of time, because no one cares about it. React will go: okay, component, run. And it goes: you didn't change, you didn't change, you didn't change.
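The difference can be sketched with a toy derived value: a write only flags it stale (push), and the user code runs only if something actually reads it (pull). These names are illustrative, not Solid's API:

```javascript
// Toy push-pull derivation: writes mark the derived value stale without
// running user code; the computation runs only when pulled by a read.
function createDerived(compute) {
  let source = 0;
  let stale = true;
  let cached;
  let userCodeRuns = 0;
  return {
    write(v) { source = v; stale = true; }, // push: notify only
    read() {                                // pull: compute on demand
      if (stale) {
        cached = compute(source);
        stale = false;
        userCodeRuns++;
      }
      return cached;
    },
    runs: () => userCodeRuns,
  };
}

const doubled = createDerived((n) => n * 2);
doubled.write(1);
doubled.write(2);
doubled.write(3);             // three writes, zero user-code runs so far
const value = doubled.read(); // 6, and the user code ran exactly once
```

A naive push system would have run `compute` on all three writes; if nothing ever read the value, this version never runs it at all.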

03:19:36 - Ryan Carniato

You didn't change, you didn't change, while running your user code. Only the signals implementation doesn't need to run your user code to make that decision. "Sometimes I wonder, though: is React going to be, retroactively, setState removed and forceUpdate renamed, Remix?" Yeah, I mean, maybe, I don't know. I guess we'll find out. It's funny, a lot of early virtual DOMs, I think Mithril and a few others, actually used basically this forceUpdate thing. And thank you for reminding me about forceUpdate, I forgot about that. So yeah, let's go back. Full benefit: it only does the work that's relevant. If you have a large amount of data, but your VDOM is only looking at one item in it, that item is the only thing it's mapping over and diffing; it couldn't care less about any of the other existing data in your universe. So what I mean is, picture this component where you have some giant blob of data, but your item is literally just using item 0. There could be 10,000 items in this blob of data, but you only use zero.

03:20:50 - Ryan Carniato

And in, yeah, a VDOM library, it literally couldn't care less whether the other rows, 1 through 9,999, even exist. Right?
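The "diff only what you use" point can be sketched like this: the render reads only item 0, so swapping in a completely fresh 10,000-row blob costs a single comparison (illustrative one-node VDOM, not a real library):

```javascript
// Sketch of "diff only what you use": render reads item 0 only, so a
// completely fresh 10,000-row blob costs one comparison.
function render(data) {
  return { tag: "span", text: data[0].label }; // only item 0 is used
}

function diff(prev, next) {
  return prev.text === next.text ? [] : [{ op: "setText", value: next.text }];
}

const oldData = Array.from({ length: 10000 }, (_, i) => ({ label: `row ${i}` }));
// Fresh-from-the-server data: every reference is new, rows 1..9999 changed.
const newData = oldData.map((r, i) => ({ label: i === 0 ? r.label : r.label + "!" }));

const patches = diff(render(oldData), render(newData));
// patches -> []: nothing the view uses changed; the rest was never looked at
```

All the unused rows are simply thrown away with the old reference, no diffing, no cleanup, no bookkeeping.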

03:21:06 - Anthony Campolo

Yes.

03:21:06 - Ryan Carniato

This is the... I told you Dax has been on my mind in terms of this, right? If fresh data comes from the server and replaces old data, it happens at no consequence. It doesn't diff anything it doesn't use; it just throws it away. It's consistent because it has the fresh data available, but it doesn't have to deal with any existing references that aren't used. Now, if a different component uses a different part of the state, it would also need to be set as well. So there are concerns around synchronization. But it's understood that each location in the VDOM is independent, and things need to enter state in order for it to update. If you don't use state here, and this is, again, good API design, then even if two things reference the giant blob, there's no expectation of anything updating unless someone calls setState somewhere. It has to enter the state, right? But for that reason, state isn't really ever truly shared. Not in the granular sense we're talking about here, because state is owned by the component. Sorry, where was I?

03:22:14 - Ryan Carniato

Anything that could share a state is actually a descendant of the source of that state and would get re-rendered anyway on that state change. It doesn't truly exist in multiple locations unless it's a copy. Conceptually, what I'm trying to say is, even though this example doesn't show it, and I'm using Svelte here, but it could be Solid, could be whatever: you can basically clone the state and have two different locations in your UI each independently listen to that state, independently update, and manage that stuff. And in fact, they could be the same object, and both update on a change from either location of that state. But because the React model is basically this, right, this.render doesn't even really care about the data of the state, and VDOMs in general kind of work like this, it's not really possible to share the state. Think about it: how does something else get user here? If user is in here now, right, and this is going to be our user component.
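The "same object, two independently updating locations" idea is what proxy-based stores enable. A minimal sketch (this is illustrative, not Solid's actual store implementation):

```javascript
// Minimal proxy store sketch: one shared object, multiple locations
// subscribe, and each reacts to a write from anywhere. Not Solid's real API.
function createStore(obj) {
  const listeners = new Set();
  const proxy = new Proxy(obj, {
    set(target, prop, value) {
      target[prop] = value;
      listeners.forEach((fn) => fn(prop, value)); // fine-grained notify
      return true;
    },
  });
  return [proxy, (fn) => listeners.add(fn)];
}

const [user, subscribe] = createStore({ name: "Ryan" });
const locationA = [];
const locationB = [];
subscribe((prop, v) => locationA.push(v)); // "component" A listens
subscribe((prop, v) => locationB.push(v)); // "component" B listens

user.name = "Anthony"; // one write, both locations update independently
```

The same object instance is self-synchronizing wherever it's referenced, which is exactly what the component-owned state model can't express.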

03:23:16 - Ryan Carniato

I know there's increment and other stuff in it, but what I'm getting at is, if user is in here, the only other thing that's going to get user is something you pass it back down to, or something you derive from somewhere else in your component logic here. So at minimum, anything that actually sets user, and it's more obvious with the React API, is going to call the setter from here and trigger it from here. So there's no way to actually unlink the state and have it be the same thing. You have to synchronize it yourself, right? You'll get the props down and then you'll assign them to your local state or whatever. Synchronization is your responsibility. You can't just pass a self-synchronizing thing, the same instance, that will update wherever it is, right? It's not a signal. And this is part of, again, why React doesn't, you know, like signals. Yeah, I don't think Preact's worried about... or, not Preact.

03:24:48 - Ryan Carniato

I don't think Remix is worried about concurrency. In that sense, they don't care from a purist state standpoint. They probably care about it from a "can I show loading states" perspective, which you can get very far with, especially if you control the router, and they probably won't even go that far. "Now, if a different component uses a different part of state, it also needed to be set as well." Okay. "But for that reason, state isn't really ever truly shared." We talked about that. Okay. "Which works nicely in the VDOM's favor, because while it isn't an immutable system, if you make your state immutable, which you can, because the path is essentially its identity in this tree structure," right? Because it's whatever component the state belongs to. So, I just want to check something, I think my mic keeps on getting quieter. Sorry, I don't know why it just keeps automatically getting quieter; something must be doing that. No, I just generally write my thoughts; it probably won't ever get published as an actual article. I will publish this on HackMD, but this is mostly just for my usage.

03:25:57 - Ryan Carniato

Maybe this is something worth talking about. These days I'm actually looking at creating videos for topics that are important enough. Where was I going? Okay: if you make your state immutable, which you can, because paths essentially are identity in this tree structure, you can leverage immutable optimizations when it comes to diffing. The React compiler is a tool for creating stable references and auto-memoization, but you can also optimize by hand. What I'm saying is, because React can't really share state, it's just inherent to the tree, even though at a baseline it's dumb, they can use immutable diffing. As I said, they're not going to say that's the prescribed way, because you could have fresh data from the server, which doesn't care, it's not a big deal, and because you have people like this, the push-state, let people of the world. But if you want to, and subscribe to immutability, which is what hooks kind of push people into, then you actually can get a fairly optimal version of this diff. Is it as good as signals? No. We already saw Ryan GPT answer that. But it's not bad.

03:27:11 - Ryan Carniato

So the result is, you have a system that, when diffing, only diffs what's being used. Right? We already saw that it only needs to diff item 0; it doesn't care about the rest of it. And if the giant blob changes again, it's only going to diff item 0 again. It doesn't care about 1 through 9,999, even if all the other data changed, because you just replaced the reference and there's nothing special about it. It just keeps on diffing zero, with plenty of options for shortcut optimization. So while nowhere near as optimal as fine-grained rendering when you live in a world where you can apply these sorts of referential optimizations, it isn't bad, considering how bad it could be. And in a world where you have fresh data from the server, it looks a lot more appealing, only dealing with what is used. This is what I want to talk about, because I've known this aspect to a certain degree for ages, but I didn't fully appreciate it until just this past week. This is why I created stores. Not because I wanted a convenient syntax for people to do mutation.

03:28:12 - Ryan Carniato

I couldn't care less about that, honestly. I created stores because I recognized there was a gap, right? It's funny because my motivation was completely different than, say, Vue 2's options API. Vue created, basically, they were getter/setters back then, but they're basically stores, deep reactive proxies of sorts. I know they weren't actual proxies, but you get the idea. They did it because, oh look, it looks like plain data, you can mutate it, this makes development pleasurable, you know, whatever. For me, I was like, this is really freaking powerful if you think about it. React's diffing is really dumb, but the fact that it only cares about what you use is actually really powerful. So let's move forward to the pitfall of granular reactivity, right? Maybe I should talk about this more because, while it's fairly obvious to me, it may not be to everyone. It's always been about create versus update, right? I do say that the arguments that Pete Hunt and Jordan Walke and the whole crew came out with in the original React days don't really hold anymore, because they don't. But the only reason today that signals are the best of both worlds is pretty much because all benchmarks.

03:29:39 - Ryan Carniato

Sorry. The only reason that signals are the best of both worlds in pretty much all benchmarks is that you don't create what you don't need in these scenarios. Like, why would someone benchmark having a bunch of extra garbage in there, right? It's the same reason that Solid's never been punished for having eager versus lazy computation. Solid actually is eager computation, even though we do push-pull. That thing earlier where I was talking about doing all the calculations down the tree when something's not being used? Solid still does that. It's push-pull in the fact that it has algorithmic guarantees that ensure that stuff happens in order. Most push-based systems can't handle diamonds, they can't handle advanced shapes in the graph, and they don't ensure each node runs once. But Solid 1.0's eager execution means that it doesn't only do the work that's actually being used. And in benchmarks, and in most cases when you're building stuff, especially when you have hierarchical rendering and ownership and stuff, it does not matter at all that it's eager or lazy. But were there a scenario that is less optimal, it could.

03:30:55 - Ryan Carniato

Right. Because these are models based off stable references, creation is different from updates, so we've been able to leverage that. Especially because the DOM is also retained mode, things like cloning elements can reduce overhead. So basically, because creation and update are different, we've actually positioned ourselves in a way that we can leverage that information more readily than something where update and creation are the same thing, like in most virtual DOM setups. Don't get me wrong, block-DOM approaches do actually do that same kind of optimization and separation, but it's not something that React has really cared to do. It's theoretically an optimization that just hasn't been on their books, but could be. We actually benefit from identity, from knowing what is dynamic versus static. And to be fair, fast vdoms do this too; this is how Inferno got so much faster than React. And we avoid doing unnecessary work that the vdom is required to do. Diffing begets more diffing, generally speaking. I think I've shown that enough on stream: the more you diff, the more you end up diffing. It's an endless chain, and fine-grained solutions just sidestep it.
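The create/update split can be sketched like this; the names are invented and no real DOM is involved, this is only a stand-in for the cloned-template-plus-dynamic-hole pattern fine-grained and block-DOM renderers use:

```javascript
// Toy sketch: the static template is built once, and updates only
// touch the dynamic "hole" instead of re-running creation.
function compileRow() {
  const staticTemplate = { tag: 'tr', text: '' }; // built once up front
  let creations = 0;
  return {
    create(name) {
      creations += 1;
      // stand-in for template.cloneNode(true) plus filling the hole
      return { ...staticTemplate, text: name };
    },
    update(node, name) {
      node.text = name; // only the dynamic part changes; nothing rebuilt
    },
    get creations() {
      return creations;
    },
  };
}

const row = compileRow();
const node = row.create('alice');
row.update(node, 'bob');
console.log(node.text);     // 'bob'
console.log(row.creations); // 1, updates never re-create the node
```

The point is that update is a separate, cheaper code path from create, which a vdom that re-runs the same render function for both cannot assume.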

03:32:12 - Ryan Carniato

When you are working in a single environment without crossing serialization boundaries, so many optimizations are available to you, and fine-grained solutions tend to leverage them all. So yeah, basically we've got ourselves in this weird place where conceptually signals have more overhead to create than just using plain data. I mean, I think that's pretty obvious. But the reality is, because of the optimizations we make in terms of static versus dynamic, because the shape of our solution means literally not rerunning and all this kind of stuff, we basically become faster at creation and update. I said pitfalls, right? Sorry. Well, I've just established why in the happy case fine-grained rendering absolutely trumps the virtual DOM in pretty much every way. Okay. There's a reason why other frameworks besides React have moved off the virtual DOM or are moving in that direction. It's because if you're in that state where you're just in the client, doing SPA-type stuff, doing client-side work within your single environment, it is kind of better in every imaginable way. Right? But what about when the data exceeds what's being used, or the majority of the data is coming fresh from the server in less-than-granular chunks?

03:33:27 - Ryan Carniato

Well, it's easy to point at those inefficiencies and say, fix them, right? Because it's like, why the hell are you doing that? Like, Ryan Florence's stock ticker demo, if you've ever seen that benchmark where he originally showed React performance, is an atrocious way to build apps. I've said that over and over and over again: it's like the worst possible way someone could build an app. Why would you use the bandwidth of sending your whole database over the wire many times a second? It was just a demo that proved the power of React, which is incredibly good at that thing. To be fair, I've created solutions with Solid that are faster than React at that, and in that specific demo it's actually very much the same as something like Inferno or whatever; I think there's other overheads to consider. But my point is, while in most cases, and I'm saying the 90-percent-plus cases, you don't need to worry about this, which is why virtual DOM users kind of don't need it, what if you do want to take this responsibility on?

03:34:33 - Ryan Carniato

Yes, it is what makes Solid a great choice for local-first, completely. But React has one property that is kind of good for local-first. Right. That's why I'm saying Dax has been on my mind. Right, let's keep on talking. Let's look back at our large data blob, this time with a big reactive data store. Now, I just called this createStore and it's using basically the same API; I just changed this from useState to createStore. We don't know what the store is; this is a hypothetical React store. Let's forget about Solid stores for a minute and approach this generally. It could be Vue's reactive, it could be the Svelte thing we were just looking at, it could be whatever. Let's just assume that this is some deep nested reactive data, right? Yes, yes: with signals, no need to diff at all because of the graph. Yeah, in most cases, except the cases where you do need to diff, which is where I'm getting to. Right. A lot of times, historically, people kind of got away with not diffing because the places where you don't need to diff are generally large changes.

03:35:33 - Ryan Carniato

Like, you change to a new page. Okay, well if I'm going to change to a new page and I'm waiting 200 milliseconds for my API to respond, I can just blow everything out anyways, you know what I mean? I think it's important to talk about those edge cases because it also makes it very clear where signals are. If you could ignore those edge cases or marginalize them, I think it's a no-brainer to say that fine-grained rendering is better than the vdom model. I don't think it's even up for debate, relatively. And I know I always generally come off fairly fair on this stuff. The thing is, there are those edge cases and I can't ignore them. The impact of how edge they are actually matters. Let's look at this. Let's look back at our large data blob, this time with a big reactive store. Okay, got it. So most naive implementations would involve mapping over all the data up front to make it reactive. Something like Jovi's article; we covered this last stream. I love this article because it's exactly what we used to do.

03:36:41 - Ryan Carniato

This is a really natural progression. You start with the React developer who's just doing everything in useState in the component, and it's really inefficient. Then they pull the signals out into these models that kind of co-locate inside the tree but aren't part of the tree. It's just such a natural evolution; this takes me back. But for me this wasn't enough. In fact, this approach got incredibly bloated for me in the Knockout days, and I was like, React kind of has a point. I came up with solutions to kind of solve this or smooth it out, but I call this the double-mapping problem, because you have to map the data to make it reactive, and then map the data again while you go over it to render it, right? Sure, you only have to map the data initially, maybe once. But if the updates coming from the server aren't granular, then you have a mapping-plus-diffing problem. I just want to use this as an example, right? That's definitely more expensive than just throwing data into a single signal or into a single state variable, right?
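The double-mapping cost can be sketched in a few lines; this is a deliberately naive illustration with made-up names, not any framework's store:

```javascript
// Hypothetical illustration of double mapping in a pre-proxy
// fine-grained store: wrap every row up front, then walk them again.
const rows = Array.from({ length: 4 }, (_, i) => ({ id: i }));

let wrapped = 0;
// First pass: map the data to make it reactive (signal-like cells).
const cells = rows.map((row) => {
  wrapped += 1;
  return { read: () => row };
});

// Second pass: map the wrapped data again to produce the view.
const view = cells.map((cell) => `row ${cell.read().id}`);

console.log(wrapped);     // 4, every row paid the wrapping cost up front
console.log(view.length); // 4, and then every row was walked again
```

With 10,000 rows instead of 4, both passes scale with the data, not with what the UI actually uses, which is the problem proxies solve later in the discussion.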

03:37:49 - Ryan Carniato

Like above in React, we didn't map over it. Just picture mapping over 10,000 rows here; that would be kind of expensive, building all those signals. Whereas in the React case you literally just threw the data in and were done. I only need to diff .items[0] and whatever properties I use on item 0, right? We use a store here, though, because we want fine-grained updates. The cost of the few more signals down the path we use is no worse than the overhead of the vdom, and on update we don't need to recreate everything. Yeah, so the benefit here of a store, if you forget about the fact that you have to basically map over 10,000 items, is that the actual path that you go down is only a couple signals, right? It's like .items[0] and a couple user fields: first name, last name, whatever.

03:38:39 - Anthony Campolo

Right?

03:38:39 - Ryan Carniato

So we're talking maybe four signals, and then you never have to re-render anything. Do you know what I mean? It's just the things that update. So this trade-off for a couple signals is definitely worth it. But for 10,000 items in some signal store you create, it might not be right. That's a huge creation cost for really fast updates. But yeah, what about the other data? Creating all the signals for all the data we may never use is definitely expensive, and more so when fresh data comes from the server. If we want to keep our nice stable references, you don't want to lose those signals, so you're going to need to diff all the data to update those signals we may never use. You could blow them away, but at that point it might as well be a vdom, as now our UI layer needs to do the diffing, right? So you have a couple options: you can basically say, I'm just going to use a single signal and then have the UI do the update, or recreate all the signals every time the new data comes in. I mean, that's expensive too, right?

03:39:34 - Ryan Carniato

Now you're not taking the cost just on create, you're taking the cost on update, right? The diffing might be a bit more expensive, diffing all 10,000 rows, but maybe the memory allocation of the creation is actually even worse, right? If you're just getting a new 10,000 rows and a new 10,000 rows. And again, I'm not subscribing to the DBMon, you know, Ryan Florence stock ticker thing of the world, but I'm saying even if it's not happening 60 frames per second, the fact that it happens at all is actually considerable, right? Most fine-grained demos people made when React came out were like this, because writing and diffing yourself was pretty tedious. Between that and mapping, it was easy to make mistakes on something that in React is just setting a variable, right? Can you picture the code you need to create this custom store, to diff it when it updates, to map it, and then have it do the fine-grained updates, versus literally the code that we used in React that just did it? There was a reason people were not into signals at the time. They were just like, this use case isn't the common use case.

03:40:38 - Ryan Carniato

But you just had to show them this, and the React solution was there, done. And the signal solution has a whole bunch of hidden stuff here, right? As I said, you might be creating, maybe not specific wrappers, but custom wrappers or generalized wrappers that iterate through data. This is what's hidden behind that. And yeah, okay, so where was I? In a sense it's the worst-of-both-worlds scenario for push-pull, because something needs to be there to do the pushing, right? That's why you create all the signals up front. There's an upfront cost in addition to doing the necessary work, which is diffing in this case. While diffing should be avoidable in 90-plus percent of cases, it isn't always avoidable. Fine-grained solutions got away with this fact because most big changes were client page navigations anyway, so blowing out more than they needed to in those cases was rarely felt, since they're already dealing with large affordances and usually waiting on server data. So, like, I was very aware of this when I created Solid. I wasn't going to create Solid if I couldn't basically be better than React in all things, even the worst-case scenario, or at least have a reasonable solution compared to other virtual DOM libraries.

03:42:14 - Ryan Carniato

So yeah, changing the physics: proxies became the game changer here, because we can use them to lazily create signals. This removes the initial overhead. You don't have to map up front; you map as you render. When you have proxies, you call createStore, and then when you go state.items[0], that creates the signal for items, that creates the signal for 0. You literally only create the signals that you use, which is good. So far so good, right? This removes the initial overhead, and things that haven't been created or are no longer needed aren't going to notify downstream work. We have been using this to effect in Solid for years, obviously. However, I've realized, working on 2.0, that our diffing wasn't as optimal as it could be. We still diffed everything, since we needed to update our stable references. What I mean is, when I created reconcile in Solid, it actually diffed the whole blob, because basically we mutated the data, right? We talked about this before. So, like, if you have a proxy and you have the target underneath it, and you get some new fresh data, you need to kind of diff that data.
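A toy sketch of lazy signal creation via Proxy; the bookkeeping here is made up purely for illustration and is not Solid's actual implementation:

```javascript
// Hypothetical sketch: a "signal node" is only allocated the first
// time a property is read, so untouched data costs nothing to create.
function createLazyStore(target, created) {
  return new Proxy(target, {
    get(obj, key) {
      if (typeof key !== 'string') return obj[key];
      created.push(key); // a real store would allocate a signal node here
      const value = obj[key];
      return typeof value === 'object' && value !== null
        ? createLazyStore(value, created)
        : value;
    },
  });
}

const created = [];
const data = { items: Array.from({ length: 10000 }, (_, i) => ({ id: i })) };
const state = createLazyStore(data, created);

// Touching one path allocates nodes only along that path:
const id = state.items[0].id;
console.log(id);      // 0
console.log(created); // ['items', '0', 'id'], 3 nodes instead of 10000
```

Ten thousand rows went in, but only the three properties on the path actually read paid any reactive cost, which is the "map as you render" behavior described above.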

03:43:42 - Ryan Carniato

You want to make sure that the new data holds, and my thinking was the proxy has to be a stable reference, so it means that we couldn't just forget about it. We still diffed everything. Maybe not everything had signals that notified, but we still diffed everything to make sure that we got the updated data. We went through and updated those existing proxies, if they existed; we had to walk through the proxies and update them. So I had an idea: if we somehow made the proxy target malleable, swappable, we could just replace the things we didn't care about without diffing them. Right? Because if you basically use the linkage, and we saw this when we were looking at Svelte's and Solid's internals, when you go to get the next thing, if there isn't a signal there, or some kind of reference wrapper there or whatever, we just do a lookup based on the underlying target property. Right? Because reactivity works via a chain. Where was I? If we could just do it kind of like an immutable system: basically just go, okay, replace the underlying object and then update the signals.

03:44:53 - Ryan Carniato

If there's no signals because no one's listening, then any future lookup is going to walk through that underlying object, which means you've instantly, without diffing everything, replaced all the data that's there. Right. Because if someone's listening to it, then a signal or a proxy would exist, so there'd be something to trace across and diff. But if no one's listening to it, none of these things exist; we can swap them without any cost. So we basically get that same behavior that React gets in terms of diffing. Since reactivity works via a chain, we actually know what is being used. If you break the chain of reactivity, things won't update. So generally speaking, you can always trace signals that are created through the store, and those are the only values you need to get. Because any chain, like here state.items[0], anything you listen to through the reactivity is going to start from the root and go through a specific path, a specific chain. The paths you don't take, or ones that are no longer being listened to, don't matter. I built into stores now in 2.0 that they release signals that aren't being listened to anymore.
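Here is a conceptual sketch of the target-swapping idea; the API names are hypothetical, and Solid 2.0's real mechanism differs in detail:

```javascript
// Sketch: subscribed keys are diffed and notified; everything else is
// replaced wholesale by swapping the underlying proxy target.
function makeSwappable(initial) {
  let target = initial;
  const listeners = new Map(); // key -> callback, only for subscribed paths
  const proxy = new Proxy({}, {
    get(_, key) {
      return target[key]; // reads always resolve against the current target
    },
  });
  return {
    proxy,
    subscribe(key, fn) { listeners.set(key, fn); },
    replace(next) {
      // Only keys someone is listening to need to be diffed and notified.
      for (const [key, fn] of listeners) {
        if (target[key] !== next[key]) fn(next[key]);
      }
      target = next; // unread data is swapped with zero diffing cost
    },
  };
}

const store = makeSwappable({ used: 1, unused: { big: 'blob' } });
const seen = [];
store.subscribe('used', (v) => seen.push(v));

store.replace({ used: 2, unused: { big: 'fresh blob' } });
console.log(store.proxy.used);       // 2
console.log(store.proxy.unused.big); // 'fresh blob', never diffed
console.log(seen);                   // [2]
```

The `unused` blob was replaced without ever being walked, because no listener existed along its path; any future read simply resolves against the new target.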

03:46:01 - Ryan Carniato

Any path you go through that doesn't exist, you don't care: you can just swap the value and you don't have to diff any further. So yeah, this makes us kind of get the benefit of both systems. Right. But there's a reason I'm talking about this today. Yeah, the results are drastically better. We basically hit the best of both worlds again, using laziness and reactivity plus knowledge of the graph to only do exactly what is needed. Except I noticed this. So let's look at this example for a minute. We have a store which has an object a with prop 'hi'. And then I create an effect that console logs a.prop, and I console log b. And then I set store.b. One thing, now: we just saw that Svelte actually creates the signals on write. That might honestly be the solution here, kind of working through it. But at least in what I've been working on with Solid, by setting s.a down this chain under another object, no one's reading from it reactively. So we have signals all along here: a has a signal, prop has a signal, you know, in the store.

03:47:20 - Ryan Carniato

But b, we just set an object here; no one's walked into it. And this object happens to be store.a, it happens to be that same reference. And then I said, okay, let's diff it. Let's diff s.b with this new version of b, with this new value. And the problem, if you can figure it out, is that because this is disjoint, it's not connected. There's no signal here, and definitely no signal here on b1. So it just stops diffing, and the object swaps and replaces. If after this operation I look at store.b.b1.prop, I'll get 'yo'. But it won't update a, because it shortcutted on reactivity. This unfortunately only logs the effect once. Let me show you in a new tab, right? See, it only logs 'hi'. And what I was going to say is, if we go flushSync.

03:48:26 - Anthony Campolo

Sorry.

03:48:28 - Ryan Carniato

And this is all 2.0 stuff, if you're wondering about the weird difference in API you might be seeing. If you go flushSync, and then I do something like, I mean I probably don't need this flushSync here, but if I go console.log(store.b.b1.prop), you see that it's 'yo'. It did update, right? Because this method of swapping works. It's just that because there's no signals, it doesn't notify a of the change. It loses the identity of it being the same object, because it can't tell; unfortunately, no one ever walked this path that's getting diffed. Which is kind of an interesting problem, because when you look at this, you go: why did React not hit this problem? And it's because there's no way to do any diffing in React that doesn't involve walking the path. But this diff is arbitrary. Do you know what I mean? It's not based on what's being used; it's reconciling everything within the range that we target, because it doesn't know what's being used. It only knows what's being used based on what's actually used on the.

03:49:57 - Ryan Carniato

On the path. And as I said, because state can't really exist in two locations in React, everything will have to re-render if that thing changes. Because this doesn't work, they never hit this problem,

03:50:20 - Anthony Campolo

does it?

03:50:21 - Ryan Carniato

Does that make sense? Hopefully that makes sense. Because no one is listening along the path of b, when we diff, it shortcuts early and replaces what isn't used. b is updated, but it has orphaned a. Now, this makes a lot of sense for a vdom and an immutable system, since you can't expect shared references to continue to react. If you've ever used Immer, for example, I think I did this example on stream: put something in two locations and then update one of the locations, and it actually forks from the other location. Right? Because with immutable data, if you have something in two locations, you have to update it in both places; otherwise you do not end up with the same object. It would be weird if you assigned an object, then updated it, and it didn't show that update in a mutable system. This happens in Immer but pretty much nothing else, because Immer is weird: it literally looks like you're mutating it. Does Immer have a playground? Whatever, this is more than I wanted to deal with. I guess it doesn't really matter. It's fine. I think my point is clear.

03:51:43 - Ryan Carniato

In Immer, when you have an example that looks like this, even though you do this, these won't be the same. Updating s[1].a will mean that s[0] is not updated. We'll see s[0] as 1 and 2 and then s[1] as 3 and 2, even though you've directly assigned it. It kind of defies the logic of assignment. But on the positive side, with that constraint, diffing works perfectly optimally. So something has to give. Either we restrict the rules around diffing, we opt for 1.0-style less optimal diffing where we diff everything, or we restrict rules around stores or make a new type of store. I think options one and three are interesting to explore. The fact that today, thanks to proxies, this isn't an impossibility gets me excited. And my gut is that I'm not done learning things from React. Basically, I think this is a very important problem to solve. I think diffing is the reason stores exist, which is kind of funny; I guess a lot of people don't realize that. So probably the most important feature is to find a solution where this works.
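The forking behavior can be reproduced with plain objects, no Immer required; this is a sketch of the same effect using ordinary immutable updates:

```javascript
// The same reference sits in two locations, then one slot is
// updated immutably, forking it from its alias.
const shared = { a: 1, b: 2 };
const s = [shared, shared];

// An immutable update of s[1].a splits slot 1 from slot 0:
const next = [s[0], { ...s[1], a: 3 }];

console.log(next[0].a);           // 1, the untouched alias keeps the old value
console.log(next[1].a);           // 3, the updated copy
console.log(next[0] === next[1]); // false, no longer the same object
```

That is exactly the constraint under which reference-equality diffing is optimal: aliases are not allowed to stay linked, so identity checks are always trustworthy.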

03:53:11 - Ryan Carniato

There's lots of ways to get around this problem, right? If you had to reconcile the whole store, then this can't happen. But then what if it's two different stores? What if you assign from store A to store B? You could still reproduce this problem simply by doing a diff-to-update along a path which is shared somewhere. As long as you have shared references, diffing won't work. Ideally you could say that's the constraint: if you're going to diff, your data should be normalized. You could create a primitive, called something like createDiff or whatever, that only specializes in doing diffing and following the rules, and it would restrict you from having different expectations on shared references. These are all possibilities. But it's important to understand that the goal is to have basically optimal diffing, something that matches the approach of React. And a lot of times when you match the pros of React, you end up being faster than React, because React is usually not the fastest at things.

03:54:30 - Ryan Carniato

We're basically left with restricting the rules around diffing or restricting the rules around stores. That's as far as I got before stream; I literally wrote this up at 9 AM this morning. But I wanted to share this with people, just the progress since last time, because I think it's important to understand that there are characteristics of the vdom that are enviable. Perhaps not in the general case, but in some very specific cases. All right, how's everyone doing? I'm sorry, that was a bit of a deep dive. That's our technical design consideration for this week. I think it's time for This Week in JavaScript, which doesn't have as much stuff. We already dug a bit into the Ryan Florence stuff, but we can still see what we find. Give me a moment and we can get prepped for that. But yeah, someone say hi in the chat here so I don't feel like I'm completely alone while I get this stuff sorted. Hi, chat. "Holding cat." There you go. Someone asks: this stuff makes me wonder if signals could enable an actually performant SwiftUI.

03:56:25 - Ryan Carniato

I think there's still room here. It's so funny when people think this stuff's fully explored; I think we can still do better. Someone's having SSL issues. Thanks, Buzzy. Ever since I heard that iOS is retained mode.

03:56:46 - Anthony Campolo

Yeah.

03:56:47 - Ryan Carniato

If iOS is retained mode, it suggests that we can do better than React Native. I mean, realistically, I don't know if there's enough investment here to do that, but if these systems are using retained mode, and it makes sense that they would for performance, then we can do better than React. Right? I mean, yeah, I think it's interesting. All right, thanks for saying hi. Let me see if I can get a few more tabs open here when I switch to This Week in JavaScript. Okay, that one's good. Kit, Solid, duplicate bookmarks. Why not throw Bluesky in just for fun? I don't know if there's actually anything relevant on Bluesky; there rarely is, but there might be. The other problem is I can't bookmark in Bluesky, I don't think, which has made it basically useless for me from the stream perspective. There's no bookmark, really. Yeah. Bluesky pretty much has nothing.

03:58:32 - Anthony Campolo

Okay.

03:58:33 - Ryan Carniato

But I needed to check.

03:58:44 - Anthony Campolo

Okay, cool.

03:58:45 - Ryan Carniato

I think that's good. All right. This is gonna be the shortest This Week in JavaScript ever. I know those are famous last words, but I am serious this time. Okay. So, yeah, let's talk about this week in JavaScript.

03:59:15 - Anthony Campolo

Okay.

03:59:16 - Ryan Carniato

And I'm gonna start with just doing some Solid updates, because there probably hasn't been a ton of things that I reposted. Yeah. What do.

03:59:30 - Anthony Campolo

You.

03:59:32 - Ryan Carniato

I love seeing 3D demos, I'm not going to lie. When I see people combine Solid and 3D together, it just always makes me really excited. It's just such a cool application of it. I think this is using Solid just for the menu system on the game, kind of like the way people use React, but it's still cool, especially when you know it's actually a game that gets sold. What do we got here? "Learned that 9 out of 10 dentists prefer signals too." That's a little fun. Yeah. See, there wasn't much new stuff on the Solid side this week that I retweeted, and I actually didn't check if there's anything in the last day or so, but see, super short. Let's keep on going. Let's look at bookmarks. I think I only had one bookmark that we haven't talked about this week, which is, iDev, if you're still around, you've got to tell me what inspired this: "Hey Grok, make a list of 25 of my mutuals that talk shit about signals." I thought this was funny. And the reply: signals are beloved in our community, so finding 25 mutuals talking shit is tough.

04:00:51 - Ryan Carniato

Most praise them based on analysis. Here's your top 25 mutuals, ranked by engagement, with any critical notes marked. Ryan Carniato: admits overhead limits for effects, API panic, side effects, memory management issues. No lies detected. But there's a few other replies. Where's the rest of this? More people responded to this; I saw them earlier, I swear. Maybe it's on here. Yeah: "unfair, I need a citation for memory management issues." So I think I said something like, if you have a performance problem with React re-renders that you can't diagnose with React Scan, then use Solid or something more like signals. Someone also said hitting on signals is like hitting on Lego bricks: they're great for building stuff, you should use them, but they hurt if you step on them, so use them the right way. Fair point. "My note was a loose reference to your warnings on state management pitfalls with signals in React in your March post, on attempts to bring signals into React. Not outright hate, just caution. Appreciate the Lego analogy; signals are indeed powerful when used right, no trade-offs denied." Okay.

04:02:27 - Ryan Carniato

Tanner: fair point. "To any critic: you weren't marked, as no criticism was found. Your posts show strong support for signals, praising their performance and reactivity. No receipts; apologies for the implication. You're in good company, a stellar line of thinkers." Yeah. That's interesting.

04:03:01 - Anthony Campolo

Okay

04:03:05 - Ryan Carniato

let's go to my feed and see what we haven't talked about yet. We talked about metaprogramming with proxies. Oh yeah, here we go. Just the other day Dominic was talking about this: "I think there's a missed opportunity to create a front-end framework that really works well with AI tooling. The amount of times I've had to correct AI output because it doesn't quite understand control flow around JSX is astonishing. I still feel like JSX in its current state isn't ideal for what we should be using in 2025. Svelte 5 fares better, but AI gets confused around template modules vs non-modules when it comes to runes. This might get better if Svelte opens up declaring state and other bits of logic in control flow, but it's still not great." And someone's like, this is exactly what Remix v3 is. That's funny. Does anyone know what Remix v3 is? Maybe, maybe everything. I think it's because Remix says they're making an AI-first framework, but I'm also not sure they actually know what that means at this point, or that anyone knows what it means. But this is an interesting observation specifically.

04:04:23 - Ryan Carniato

See, this suggests at least that Dominic has actually looked into it. He was working on V0 a little bit earlier, I think, playing with it at least. He works at Vercel on the Svelte team, right? So he's definitely had a chance to play with this stuff. "I think a more radical approach to JSX: I was hacking on Inferno a while back and got this working. JSX as statements rather than as expressions. This enables JS control flow around JSX rather than the other way around." The funniest thing is this looks like Marko, right? I think I actually responded to that somewhere here; maybe I didn't, but I swear I thought I did. One of the best things about Marko's template language, which came up on stream a while back when we compiled their syntax to JSX, is how WYSIWYG it is: since fine-grained renderers can already handle state declared in the middle of templates, everything can be cut and pasted. Yeah. See, the thing is, while this looks weird to someone with React-like thinking, that these state variables could be inside control flow or inside the template, this templating could compile to Solid and work exactly how you'd expect, with only running parts of the template once.

04:05:38 - Ryan Carniato

Like, it's perfectly doable. We just don't have a templating language that can actually do this. And the Marko language actually can basically do this, because they have the let tags and stuff. Dylan actually showed it on stream, taking Marko and compiling it to Solid, which is actually pretty cool. "I wish there were a world where the experiment could live in the ecosystem today. It ticks so many boxes — Marko innovates in so many ways, with syntax approaches ahead of its time." Exactly. "It's interesting — the base parser we use in Marko is quite fast and flexible. The opinionated stuff is in the compiler nowadays. Yeah, simpler thanks to Volar, with a chunk of the work done by a simple JSX transform, though." So, yeah, I mean, I don't know —
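The fine-grained idea being described — state declared in the middle of a template, with only the downstream parts re-running when it changes — can be sketched with a toy signal system. This is an illustrative reduction only, not Solid's or Dominic's actual code; `createSignal` and `createEffect` here are minimal stand-ins for the real APIs, and the "template" is simulated with a log.

```typescript
// Minimal fine-grained reactivity core (illustrative only; real
// implementations like Solid's handle cleanup, batching, etc.).
type Effect = () => void;
let currentEffect: Effect | null = null;

function createSignal<T>(value: T): [() => T, (v: T) => void] {
  const subscribers = new Set<Effect>();
  const read = () => {
    // Reading inside an effect subscribes that effect to this signal.
    if (currentEffect) subscribers.add(currentEffect);
    return value;
  };
  const write = (v: T) => {
    value = v;
    // Only the effects that read this signal re-run.
    for (const fn of [...subscribers]) fn();
  };
  return [read, write];
}

function createEffect(fn: Effect): void {
  currentEffect = fn;
  fn(); // initial run, collecting subscriptions
  currentEffect = null;
}

// A "template" with state declared in the middle: the header part runs
// once, while only the effect that reads `count` re-runs on updates.
const log: string[] = [];
log.push("render header"); // static part: runs exactly once
const [count, setCount] = createSignal(0);
createEffect(() => log.push(`render count: ${count()}`));
setCount(1);
setCount(2);
// log: ["render header", "render count: 0", "render count: 1", "render count: 2"]
```

The point of the sketch is that nothing re-runs the whole "component": updating `count` re-executes only the dependent slice of the template, which is what makes mid-template state declarations viable for a compiler target.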

04:06:18 - Ryan Carniato

At this point, though, I would only trust someone like Dylan to make this templating language for me, and I think he's a little bit busy with Marko 6 on the way. But I did want to give this a shout-out, because it's sad how hard it would be to get an updated templating language like this. Like, just forget about the runes — he's got runes in here, but that's obviously what he was going for with his implementation. This could be createSignal for all I care. Just the fact that you could somehow have these kinds of nested statements. I don't know if this exact syntax makes a lot of sense to people — like, how could it decipher what is an expression or not? I don't know if this exact syntax works; there are a lot of rules you have to follow. But I think there is some kind of desire for something like this. I'm just probably not going to be the one that builds it. Probably not, because — yeah, I guess Volar can help with TypeScript.

04:07:27 - Ryan Carniato

But then you're taking on — I kind of like, even though it's not perfect with signals, I kind of like how TypeScript just works. Our tooling with Solid has always been relatively lightweight; we let React, and everyone working on React, carry the work. Even if I think this is amazingly better, it's a huge investment. Dylan went through a whole stream — he said it's easier now, but he literally showed all the things you have to do to make your own language. It's a great stream if people ever want to catch back up on that. Let me just copy the link here. And this is a great link too, because this is him actually showing Marko and outputting Solid. Yeah. What's Dominic saying in the chat here? "One of the ben—" sorry, he's saying one of the benefits is that people are going to get a walkthrough of all the steps that lead to Solid. Yeah, probably. "I actually have a fully working local Inferno extension that supports TS and Prettier." Very cool. "And it doesn't use signals; it uses a new hybrid diffing model."

04:08:37 - Ryan Carniato

Fair enough. "It has partial top-down rendering that doesn't need two-way connections." Gotcha. "There are also no proxies on objects, because it uses static analysis to connect the object reactivity under the hood." Yes, okay, makes sense. As I said, I just want the templating language, because I think JSX could have more application than just what's going on there. Even if you have a different, non-signals rendering model, I think that templating language should also work — I think the applications could work in multiple ways. Basically, a component-centric model is the only place where it doesn't make as much sense. Because I'm gathering your approach is still — well, actually, is it completely? I'm gathering it's still using hybrid diffing, some kind of partial mechanism, right? Partial top-down rendering. Okay, so yes — but you're not going to rerun the whole component when state in the middle changes; you're going to run partway through the template when the state updates. Yes, yeah, exactly. So, yeah, I don't know. I mean, maybe, I don't know.

04:09:46 - Ryan Carniato

But we should talk more. I think if the actual templating is extractable — just the core part of the templating, the ASTs and the TypeScript and everything — it might be fun to play with. "Dylan just streamed the other week talking about Marko 6. I thought he was brilliant, but the interviewer was a bit reductionist, and you could see Dylan was like, I want out of here." Oh, I mean, I told them — maybe I should check my Discord — they can come back on when they're ready. I told them we weren't going to cover Marko until Marko 6 was a thing, but now that it's like RC or beta or whatever — like it's on the way out — I'm perfectly fine to start hyping Marko again. So if you want someone who will listen to him talk for several hours and ask pertinent questions, I'm all for it. All right. But yes, this is cool, Dom. It's very, very cool, and it feels more JSX than the Marko language, so I'm all for this. Yeah, maybe we should do a stream together. "This might blow your mind a bit."

04:10:59 - Ryan Carniato

Yeah — "there's no virtual DOM or holes. It's just that we don't associate the state with the effect, so we always top-down update and work out connections that way." Okay. "The downside is that state can't be created outside of a module. It has to run." Okay, so it's a bit like a hook. Yeah — let's talk about that too. I'm interested both in the templating, and I'm definitely interested in InfernoX, or this approach. I know it's a long road from where you're at — where you have something working and can show people — to getting stuff through, but let's start that ball rolling, you know? Let's do it. Okay. So — the stream, ah, okay. Yeah, this was fun. Sorry — I mean, you guys should know by now; I'm sure everyone here follows Dominic, right? I'm actually just going to stick his Twitter link right in here in case people don't. Just throw it there.

04:12:09 - Anthony Campolo

Just.

04:12:14 - Ryan Carniato

"I built it in seven days in February." I mean — you shared it. Yeah, you built it in February and then you shared it at the beginning of March. And I was just out on paternity leave when you were on the show. We were going to do a call, and I'm like, man, now is not the time. But I'm more than ready to check this stuff out now anyway. Oh, okay — so this is official, then? I wasn't sure if this was public knowledge yet. "Software engineer at Vercel, author of Lexical and Inferno, former core maintainer on React and Svelte." Yes — if I'm reading this correctly. I didn't see it — I guess it's not something people advertise. Anyway. Interesting. Explains all the AI stuff and the V0 stuff. Anyway, we're all good — I mean, Vercel caught them all a while ago. Okay, where am I going? "I'm not working on the feature flags team at Vercel." No? Oh, you now are — okay, you work on the feature flags team at Vercel. And that's like build process, in terms of — okay. Interesting. Yeah, which makes — as I said, it makes sense. You need to get that outlet, building some new framework stuff.

04:14:01 - Ryan Carniato

I want to see it. I think a lot of people here would want to see it too. Okay. Yes — someone asked earlier if I watched Async Svelte on Learn with Jason. Yes, I definitely watched it. You know what, I'm going to pop this open and then we'll — and then we'll —

04:14:27 - Anthony Campolo

We'll.

04:14:27 - Ryan Carniato

We'll go from here. Just a second — I need to change the way I'm sharing my screen so you can actually hear this. What do we want, a Chrome tab? Yeah, let's do this one. There you go. Down here, so I'm out of the way. Just an aside: this stream is great. Svelte has basically added server functions. They have query, command, and form, which are basically the equivalent of the wrappers we have in Solid Router — query and action. They also have management of granular cache invalidation in these patterns, and they actually did single-flight mutations, as we found out last stream. He demoed it, and I'd been watching quietly the whole stream because I was super curious to see what was new. Some of the async syntax stuff is kind of interesting, and it's cool to see these kinds of patterns emerging. But the part I wasn't as familiar with was basically all the server function stuff — the stuff that's very similar to what we put in Solid Start and that you've started seeing now in TanStack Start.

04:16:13 - Ryan Carniato

So I was kind of curious how that worked and whether it was very similar, and it looks like they've actually implemented it all the way to the same kind of point that Solid Start did. It was kind of funny watching parts of the stream — Rich is better at explaining stuff sometimes than me. Parts of the stream felt like a stream I did with Jason about a year ago on the exact same topics, where I showed server functions, showed cache — or, we call it query now — and action, and showed the single-flight mutations with almost the exact same demo, in Solid instead of Svelte. And Rich's explanation actually made way more sense to me, and I was like, oh man, people should just watch this and then go back to the Solid one. But the single-flight mutations did lend themselves to this funny moment. Hopefully you guys can hear this. "First version of the design—" You can hear that, right? If not, it's only a 30-second clip, so I don't care. "First version of the design didn't have single-flight mutations, and we were like, is that a problem?"

04:17:17 - Ryan Carniato

I mean it's probably not because like a lot of the time you actually don't know what data is going to

04:17:20 - Anthony Campolo

be needed for the next thing. So there's a risk of over fetching

04:17:22 - Ryan Carniato

if you do a single-flight mutation. But you know what — if we don't have single-flight mutations, Ryan's going to kill us. He's just going to murder us on his stream.

04:17:34 - Anthony Campolo

So we iterated until we found a

04:17:35 - Ryan Carniato

design that we like.

04:17:36 - Anthony Campolo

Carniato-driven development.

04:17:37 - Ryan Carniato

I get it. Yeah, exactly.

04:17:40 - Anthony Campolo

I mean, that's been the story

04:17:42 - Ryan Carniato

of front end for the last couple of years, right?

04:17:44 - Anthony Campolo

Indeed, indeed.

04:17:45 - Ryan Carniato

"First version of the—" but yeah, funny moment. I hadn't thought about it, but Dominic's been in the stream in the past when he was working on Svelte and stuff, and sometimes I guess I am very critical of decisions on frameworks. I didn't really see myself as a bully, but if it's for a good cause, you know — yeah, I'll take it. To be fair, as I said, I slightly disagree. The problem of overfetching comes with route-level hoisting, right? It's present because you've hoisted stuff out to the route. If you do it lower, you can't necessarily remove the waterfalls on code splitting, but you can fetch slightly less. But the thing is, you made that decision already when you hoisted the stuff out to the route level, right? So the whole trick to single-flight mutations — at least, I haven't looked in too much detail at the Svelte version — was that we used the router to do the invalidation. It looks like they do too. And then, because you can choose to invalidate only certain query keys, and we know which query keys are already on the page and which ones are new, we can actually do a differential on the server too.

04:19:04 - Ryan Carniato

So when we run the loaders on the server and actually run all the separate queries, we hit a query and go, oh, we don't need this one — it's already on the page. So we only fetch the differential of the new data. Say you have some stuff you fetched at the top of the page — it's a nested router — and in the section you're updating, you're adding an item and then moving back to the list page, so you're navigating away. When you post the item on that page, the loaders are going to run for the whole page, in theory. But it does a redirect and goes, okay, I'm going to the next page. So on the server it's like, okay, I need the top of the new page — and it's going to recognize that you should already have the data for the sections above, so it's not going to refetch that. If some of the data is shared or something, it's only going to fetch the stuff that you don't already have on the client, or that's impacted by the invalidation caused by the mutation.

04:20:06 - Ryan Carniato

So in that sense it kind of solves the RSC problem of having to fetch everything with a server-side cache. Ironically, it's using the client-side cache knowledge to basically not have to use caching on the server, which is kind of crazy. What I'm saying is, I actually think it is a general thing, because you've already authored your app in such a way that, if you considered that overfetching, you wouldn't hoist stuff up to loaders. So I actually think this is a really relevant improvement — both because it turns it all into a single flight, and because you keep all those aspects of the client where you don't have to fetch everything. If you were purely doing this on the client side, you would do two fetches, but the client would be like, yeah, I already have this data, I don't need it, and only fetch the new stuff. We just transcend that onto the server without needing all the cache APIs on the server. So it's super powerful in both ways. So yes, I'm slightly disagreeing with the importance of it — but Svelte implemented it anyway, so I have nothing to complain about.
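The differential being described can be reduced to a small piece of logic: the client ships the query keys it already has cached, and the server runs only the loaders whose keys are either new to the next page or invalidated by the mutation. This is a rough sketch with assumed names (`queriesToRun` and its parameters are made up for illustration), not the actual Solid Start or SvelteKit internals.

```typescript
// Sketch: given what the client already caches, which queries on the
// next page does the server actually need to run?
function queriesToRun(
  clientCachedKeys: string[], // query keys the client already holds
  nextPageKeys: string[],     // keys the redirect target's loaders need
  invalidatedKeys: string[],  // keys invalidated by the mutation
): string[] {
  const cached = new Set(clientCachedKeys);
  const invalidated = new Set(invalidatedKeys);
  // Run a query only if the client doesn't have it, or the mutation
  // made the cached copy stale.
  return nextPageKeys.filter(
    (key) => !cached.has(key) || invalidated.has(key),
  );
}

// Example: posting a new item, then redirecting back to the list page.
// The layout data ("top") is already on the client and untouched by
// the mutation, so only the invalidated "items" query is refetched.
const result = queriesToRun(
  ["top", "items"], // client already has these
  ["top", "items"], // next page needs these
  ["items"],        // mutation invalidated this key
);
// result: ["items"]
```

The design point is that the server never needs its own cache: the client's cache knowledge travels with the request, and the response carries only the differential.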

04:21:22 - Ryan Carniato

Sorry, I know it's a tangent. You guys are all celebrating the reference, right? But I'm still focused on the technical part. Yeah, no — it's cool to hear the influence put out there, even if it's through bullying people on my stream, apparently. I know he's not being completely serious there, but sometimes I wonder a little bit. It's weird being in the position of being the author of a competing framework and also possibly one of the most critical people around decisions that frameworks make, because it puts incredible scrutiny on my own decisions, obviously. Which means, when Fabio comes around and tells me that I'm doing things wrong, I have to be very cognizant of the fact that he is probably right on some things, at least in some elements. Where do we meet, you know? But on the other hand — yeah, I don't know. It's probably a position I can't hold forever, because if Solid is successful, then eventually maybe it'll be a position that looks like punching down. But I'm super stoked.

04:22:34 - Ryan Carniato

This is one more thing that puts SvelteKit alongside Solid Start and TanStack Start, in the direction I thought things would go. It's exciting. I did that talk at Vue Amsterdam in February 2024, and I did the stream back in December 2023, I think, where I said this is the shape of frameworks to come. I was very, very sure this was the right pattern, at least with the technology we have today. And it's cool to see that SvelteKit is now also what I consider a framework in that scope. They have their own details, which, they'll say, make better sense for Svelte — I'm cool with that, generally speaking. Solution-wise, this kind of hybrid server model is a very powerful model compared to RSCs. Can we augment RSCs into this? I don't know. Does it make sense? Maybe RSCs will eventually get to a place where they could be arguably a better model, but right now, today, given what we have, this is really cool and really good, and I'm so stoked to see SvelteKit joining us on this thing.

04:23:41 - Ryan Carniato

As I said, like TanStack — now you can get these patterns in React, Solid, and Svelte. It's actually very interesting to see that kind of influence from what we do on the stream. We haven't had the same impact on the Vue community, I admit — except for, I guess, Vue Vapor. On the meta-framework side it's kind of hard, because Nuxt is so good at what it does in terms of DX. Architecturally, my gut is that Nuxt does a lot of the same stuff but just presents it in a different way. I never quite know how Nuxt works — Daniel wrote it; come and try and show me. But there's something about Vue that's always been a little bit alien to me. And I don't mean alien signals — I mean, different from how I look at or approach problems. I think that's part of what makes Vue very unique. Every framework has its elements, obviously, but I'm just speaking of Vue specifically. Okay — but yeah, people should check out the stream; two very exciting things around async for Svelte.

04:25:15 - Ryan Carniato

And actually, you know what — let's just link the whole video here anyway. Why not? People should check it out. I guess it's called CodeTV now, but I was going to say, I love Learn with Jason. He's the best at taking the "I'm a beginner, where do I go from here" angle. I was trying to play that role a bit with the AI stuff today. Jason knows exactly the right questions to ask, and as I said, in the past, for Solid, he's the one — it was on his stream that I figured out how to actually sell Solid to React developers. It was literally one of the first times we did the hoist-state example, you know — local state, global state. So, great stream. I'm glad he's still doing it. And yeah — codetv.dev. All right, that's it. I told you this one would be short. I don't really have any other topics to talk about; it's only like 20-some-odd minutes.

04:26:29 - Ryan Carniato

I think I went through all our streams. There was nothing on Bluesky, right? Bluesky... nah. No, I think I'm good for this week. So yeah, let's call it a stream, and everyone have a great weekend. I've got some more work to do to figure out how to merge the mutable and immutable worlds of diffing, so hopefully next week I'll have an answer for everyone. Yeah, definitely. Oh yeah — Jason and Brad, two OGs in the onboarding space. All right, let's see — is there anyone worth raiding right now? I don't know. I never know; I don't follow enough people. It's all good. I'm just going to call it then. All right, have a good one.
