
Building an AI Frontend App with Ragged featuring Monarch Wadia
Anthony Campolo interviews Monarch Wadia about Ragged, a TypeScript library simplifying LLM integration for web developers
Episode Description
Monarch Wadia demos Ragged, a TypeScript library for integrating LLMs into web apps, covering tool use, streaming, and why JS developers need AI-native tooling.
Episode Summary
Anthony Campolo hosts Monarch Wadia to introduce Ragged, an early-stage TypeScript library designed to make working with large language models accessible to front-end and full-stack JavaScript developers. The conversation opens with both developers reflecting on their personal journeys into AI, from early encounters with GPT-2 and DALL-E to daily reliance on ChatGPT and Claude, establishing why they believe this technology is genuinely transformational rather than another hype cycle. Monarch explains that existing frameworks like LangChain and LlamaIndex were built primarily for Python developers, leaving JavaScript developers to wrestle with incomplete ports and sparse documentation. Ragged aims to fill that gap with a lightweight, event-driven API built on TypeScript from the ground up. The centerpiece of the stream is a live demo of Ragged's tool use capability, where Monarch builds a "Smart Reader" interface that lets an LLM search Wikipedia and intelligently filter results, illustrating how structured tool definitions allow AI agents to interact with application code. Anthony then shares his own AI-powered content workflow using Whisper transcription and LLM-generated show notes, demonstrating the kind of practical integration both developers champion. The episode wraps with an open call for contributors, positioning Ragged as an ideal entry point for web developers who want to break into open source and AI without needing data science expertise.
Chapters
00:00:00 - Introductions and Developer Backgrounds
Anthony welcomes Monarch Wadia to the stream and Monarch gives a quick rundown of his career as a full-stack developer with deep experience in React, Svelte, Angular, Vue, and back-end technologies like Node, Java, and Rails. He explains that his recent focus has shifted to working with large language models from a practical integration standpoint rather than a data science perspective.
Monarch introduces Ragged, the TypeScript library he's been building over the past few weeks, explaining that it grew out of his frustration with repeatedly rebuilding chatbot and text-streaming features across different projects. He positions it as a lightweight alternative to heavier frameworks, designed specifically for the needs of full-stack Node developers who want to integrate AI into user-facing applications.
00:03:49 - The AI Journey and Why This Time Is Different
Anthony and Monarch discuss the spectrum of web developer attitudes toward AI, from deep skepticism to full enthusiasm. Anthony traces his own journey back to GPT-2 in 2019 and describes how watching computers write poetry convinced him something fundamentally new was happening, long before ChatGPT made that obvious to the broader public.
The conversation turns to how both developers use LLMs in their daily lives. Monarch describes AI as a second brain that fulfills the original promise of the internet as an information superhighway, while Anthony recounts having extended conversational debates with ChatGPT. They also share their first encounters with AI image generation through DALL-E and Deep Dream, connecting those experiences to their conviction that the current wave of AI is not just another hype cycle but a genuine turning point.
00:10:03 - Developers, Job Fears, and Ragged's Mission
The discussion shifts to the common developer fear of AI replacing jobs, with Monarch noting that developers have always been in the business of automating themselves through abstractions. Anthony adds important context about tools like Devin, cautioning that curated demos can be misleading and that hands-on experience is the only reliable way to assess an AI tool's real capabilities.
Monarch steers the conversation toward Ragged's npm package and GitHub repository, walking through the project's early community-building efforts. He describes his plan to use the existing Mint Bean Discord as a launchpad and extends an invitation to web developers who want to learn about LLMs through open source contribution. Anthony shares the personal connection of having used Mint Bean early in his own coding journey, framing the collaboration as a full-circle moment.
00:17:27 - Live Demo: Smart Reader and Tool Use
Monarch begins a live walkthrough of the Smart Reader example application, a Svelte-based command-and-control interface where users issue natural language commands and the AI executes actions through defined tools. He demonstrates searching Wikipedia for "aliens" and then asking the LLM to filter the results by relevance to ufology rather than fiction, showcasing how tool use produces useful emergent behavior.
Anthony pauses to clarify the concept of tool use for viewers, and Monarch explains its origins in research where LLMs were found capable of using bash terminals and outputting structured JSON. He walks through the Svelte code, showing how Ragged instantiates with an OpenAI provider, how tools are defined with titles, descriptions, and input schemas, and how the event-driven architecture connects LLM outputs to application logic through listeners.
00:28:08 - Client-Side Security and Local LLM Futures
A practical discussion unfolds around the security implications of running LLM API calls on the client side, with Monarch explaining the "dangerously allow browser" flag from OpenAI's library and the legitimate use cases for exposing an API key in internal or local-only tools. Anthony draws on his experience at StepZen to explain how frameworks like SvelteKit offer server-side solutions for API key management.
Monarch looks ahead to a future where consumer devices have powerful enough GPUs to run LLMs locally, which would eliminate the API key concern entirely. He envisions Ragged adding a local provider alongside its current OpenAI driver, and both developers highlight the exciting momentum in open-source models like Mistral and Llama 3. Monarch also announces that a Cohere integration is already being built by a community contributor, taking advantage of Cohere's free developer tier.
00:37:30 - The Adder Example and Building Custom Tools
Monarch switches to a simpler Node-based example that sends a math prompt to GPT-4 and receives a result through a custom adder tool, stripping away the UI to focus on the core mechanics of Ragged's predict function and tool definition API. Anthony appreciates the clarity of this example compared to the earlier demo, noting how it makes the prompt and response flow explicit.
The walkthrough covers how Ragged's t utility lets developers define tools with typed inputs including booleans, enums, numbers, and strings, with plans to support nested objects and arrays. Monarch live-codes a multiplier tool with help from GitHub Copilot, and then sketches out a hypothetical document selection tool to illustrate how tool use could power filtering and retrieval workflows, connecting the concept back to real-world use cases like an admin asking AI to select relevant documents from a large collection.
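As a hedged illustration of the tool-definition pattern described in this chapter, a typed tool with a handler might be sketched like this. The names and shapes below are invented for illustration; they are not Ragged's actual API.

```typescript
// Hypothetical sketch of a typed tool-definition helper in the spirit of the
// t utility described above. Names and shapes are illustrative, not Ragged's API.
type FieldType = "string" | "number" | "boolean" | "enum";

interface Field {
  type: FieldType;
  description: string;
  values?: string[]; // only meaningful for "enum" fields
}

interface ToolDefinition {
  title: string;
  description: string;
  inputs: Record<string, Field>;
  handler: (input: Record<string, unknown>) => unknown;
}

// A multiplier tool comparable to the one Monarch live-codes in the episode.
const multiplier: ToolDefinition = {
  title: "multiplier",
  description: "Multiplies two numbers and returns the product.",
  inputs: {
    a: { type: "number", description: "First factor" },
    b: { type: "number", description: "Second factor" },
  },
  handler: (input) => (input.a as number) * (input.b as number),
};

// The LLM emits structured JSON naming the tool and its arguments;
// the library then invokes the matching handler.
const product = multiplier.handler({ a: 6, b: 7 }); // 42
```

The key idea is that each typed input field doubles as documentation the LLM reads when deciding how to call the tool.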
00:48:32 - Anthony's AI Content Workflow
Anthony shares his screen to walk through a blog post documenting his AI-powered content pipeline, which uses yt-dlp to download YouTube audio, OpenAI's Whisper for transcription, and an LLM prompt to generate titles, summaries, chapter headings, and key takeaways from the transcript. He shows the bash script that ties everything together and demonstrates how it can process entire YouTube playlists in a loop.
Monarch is visibly impressed, calling the workflow a fundable product and encouraging Anthony to pitch it to investors. The conversation touches on how tasks that were previously impossible or extremely time-consuming are now achievable with relatively simple tooling, reinforcing the broader theme that practical AI integration is within reach of any web developer willing to experiment.
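The pipeline described in this chapter chains three pieces: yt-dlp for audio download, Whisper for transcription, and an LLM prompt for show notes. As an illustrative sketch (the flags below are common yt-dlp and Whisper options, not necessarily the exact script shown on stream), the commands for one video could be composed like this:

```typescript
// Illustrative sketch of the content pipeline: download audio with yt-dlp,
// transcribe with Whisper, then prompt an LLM over the transcript.
// Flags are common options for these CLIs, not Anthony's exact script.

// Build the yt-dlp command: extract audio only, as mp3.
function downloadCommand(videoUrl: string, outFile: string): string {
  return `yt-dlp -x --audio-format mp3 -o "${outFile}" "${videoUrl}"`;
}

// Build the Whisper command: transcribe the audio file to plain text.
function transcribeCommand(audioFile: string): string {
  return `whisper "${audioFile}" --model base --output_format txt`;
}

// Build the show-notes prompt that would be sent to an LLM.
function showNotesPrompt(transcript: string): string {
  return [
    "Generate a title, summary, chapter headings, and key takeaways",
    "for the following transcript:",
    transcript,
  ].join("\n");
}

// Each command could then be run with child_process.execSync in a loop
// over a playlist, mirroring the bash script shown in the episode.
const cmd = downloadCommand("https://youtube.com/watch?v=abc123", "episode.mp3");
```

Looping the same three steps over every entry in a playlist is what turns this from a one-off into the batch workflow Anthony demonstrates.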
00:56:56 - LLM Detection, Claude vs ChatGPT, and Wrapping Up
The conversation lightens as both developers discuss the South Park ChatGPT episode and the challenge of detecting AI-generated content in the wild. Anthony shares his experience spotting LLM-generated YouTube comments and notes that Claude produces noticeably more human-sounding text than ChatGPT, a distinction both developers find significant.
Monarch closes with a call for contributors, offering hands-on mentorship in exchange for contributions to Ragged, whether in documentation, examples, or driver development. Anthony reinforces this by advising aspiring open source contributors to seek out small, early-stage projects with accessible maintainers rather than trying to break into massive established codebases. Both developers express enthusiasm for continuing to collaborate and encourage viewers to get involved with Ragged and the broader AI ecosystem.
Transcript
00:00:01 - Anthony Campolo
All right. Hello, everyone. Welcome back to AJC and the Web Dev. We got Monarch. What's up? Introduce yourself, Monarch.
00:00:20 - Monarch Wadia
Hey, thanks for having me on, Anthony. It's been a while since I streamed, and I'm really glad that I'm doing it for the first time in a long time with you. As an intro, I'm a developer, I program, and I build stuff. I've been a full-stack developer for a while now, and I've mostly worked in front-end stuff: React, Svelte, SvelteKit. I've done my fair share of Angular 1. I've done Vue. And I've also done a bunch of back-end stuff like Node, Java, and Rails. I think just dropping those technologies probably dates me a little. So that's sort of my background.
I have been working with large language models as a full-stack developer. I'm not a data scientist or anything like that. So working with large language models on the prompting side and the tool use side has taken up a lot of my AI learning time for the last couple of years.
[00:01:24] And I am building this thing called Ragged. Ragged is a TypeScript library that simplifies and streamlines all of the stuff that I've been doing on a day-to-day basis. So I've built and rebuilt chatbots. I've built and rebuilt the text streaming feature over and over again for different applications. And I sort of got tired of reinventing the wheel over and over again.
00:01:56 - Anthony Campolo
And I used to build an open source library. Once you've done things a million times, you build a way to do it faster.
00:02:02 - Monarch Wadia
Exactly. And I really love LangChain. LangChain is awesome. There's also Haystack and all these awesome frameworks. I found them a little too heavy for my personal tastes, and I just wanted something a little more lightweight that had a more fluent API, or at least one that was more pleasing for me personally.
What came out of all of those drives and impulses and needs was this project that I started about 3 or 4 weeks ago. It's super early, but it's growing really fast. We already have our first three contributors other than me, and it's starting to really take on a life of its own very slowly.
What's come out of it is this TypeScript library that I'm going to demo today that is focused not on the data scientist, not on the back-end developer, but on the front-end or full-stack Node developer. It's built for the full-stack Node developer's needs, rather than most libraries which are built for Python devs or back-end devs, because retrieval augmented generation has taken over the world recently.
[00:03:14] So there's a lot of back-end focus on AI and large language models. I always found that I was more interested in using AI and large language models to empower the user, to give the user more power over their user interface, and in that sense really empower the user in their daily life. Because what I've always found is the more power you give to the user in the user interface, the better their life becomes. That's sort of the goal of the user interface, to make users' lives easier. And I see a lot of potential in large language models to do that. That's sort of an introduction.
00:03:49 - Anthony Campolo
Yeah, that's awesome. You use a lot of terms there, like LangChain and RAG and stuff like that. For people who are already, like me, very deep into this whole LLM world, I totally got what you're building immediately. And that's one of the reasons I jumped on it and wanted to check it out.
But I want to take a step back for people who haven't gone down this route yet. I've seen, and I'm sure you've seen this as well, there's a spectrum for web developers in terms of how into LLMs and AI in general they are, whether they're really skeptical of it and kind of hesitant to try it, or if they're all in. And I think you and I are very far on that spectrum. We went all in and we really saw the potential of this technology. I've had a lot of conversations with a lot of people about this.
[00:04:39] I'm excited that you're as excited about this stuff as I am, because I think it's transformational technology in a lot of ways. And I can actually say that I was kind of ahead of the curve. I remember when GPT-2 came out in 2019 and I first saw an LLM write poetry. It wasn't even good poetry, but the fact that it could write poetry at all, I was like, this is cool. This is something qualitatively different from what I have seen computers do in the past.
And then most people had that moment when they tried ChatGPT. They had a conversation with ChatGPT. And that's what I say: having a conversation, not just searching for information. It's good at that too. It's good at lots of things.
[00:05:30] But to me, I would sit down and have these long discursive conversations about the JFK assassination, and it would try and convince me that Lee Harvey Oswald killed JFK. And I was like, no, he really didn't. And we'd have this huge argument about it. So what do you use LLMs for in your daily life?
00:05:51 - Monarch Wadia
Everything. It's sort of like a second brain at this point for me. And it really lets me tap into the collective consciousness of humanity. The internet started as a promise. The promise of the internet was the information superhighway. That was the big buzzword back then.
00:06:13 - Anthony Campolo
I was a kid, but I remember.
00:06:15 - Monarch Wadia
Yep. The sum total of human knowledge at your fingertips, and the internet sort of provided that. Yahoo back in the day was interesting because you could look up all these lists and find things. AltaVista was another one. Webrings were a thing. It was sort of janky. And then Google comes along and Google becomes this amazing way to access everything just through search. And then Google sort of starts slowing down because it needs to make money. They start pushing ads. And the user experience of Google has been not that great for the last...
00:06:48 - Anthony Campolo
People learn that Google's where people go, so then they start hacking it to get the results at the top, whether they're actually the results you want or not. That could be products, news agencies, everything starts being gamed.
00:06:59 - Monarch Wadia
Exactly. And that was already painful. I was already kind of itching for an alternative to Google. And then I didn't actually start my AI journey with ChatGPT. I started it with DALL-E 2. A friend of mine sent me this little AI-generated picture of a dinosaur, and I was like, what is this? And he said, I prompted DALL-E 2 and it created this image. And I said, what? So I started prompting DALL-E 2. I signed up for an account and started prompting it with dinosaur images, like a dinosaur eating the CN Tower in Toronto. And I just started making these really quickly.
00:07:48 - Anthony Campolo
I want to share this because the first time I used DALL-E 2, or maybe this was DALL-E 3, I'm not sure, but I basically asked it to create an image of cats wearing berets in the style of Henri Matisse.
00:08:04 - Monarch Wadia
Nice. Those are really good.
00:08:08 - Anthony Campolo
I have this. This is DALL-E 3 because I redid this. I originally did this with DALL-E 2. I'm not sure where that screenshot is, but that's basically the idea.
00:08:16 - Monarch Wadia
Yeah. It was a mind-blowing moment for me because the last time I saw AI generate art was ten years ago with Deep Dream. Yeah.
00:08:30 - Anthony Campolo
Deep Dream is the thing that looked like you gave a computer acid. It was like this psychedelic rainbow transformation. And you'd look at the sky and there'd be animal faces kind of in it.
00:08:42 - Monarch Wadia
Dogs everywhere.
00:08:43 - Anthony Campolo
I remember when that came out and I remember looking at that and I'm like, this is like watching the AI super brain come into form in real time. It's so fascinating to me. This is before I even knew how to code, and all this AI stuff like AlphaGo and things like that, that is what actually made me want to learn to code in the first place.
I tried to learn Python, I tried to learn NumPy and Pandas, and I had no idea what the hell was going on. I spent years trying to do that, and then eventually it was like, this isn't working. And I went to boot camp and learned web dev instead. I spent years as a web developer. And then ChatGPT blows up. So I have such a long history with all this AI stuff.
That's also why it's kind of frustrating to me when people think, okay, people have been talking about AI forever.
[00:09:30] There's always been AI hype. This is not different from the other times. This is just a thing that's going to fizzle out. And I'm like, no, trust me, this is different this time. It really is different. And this is the time to get in. Because I've been into AI for ten years, but it's only been in the last year and a half that I've actually used AI. Like you said, you use it for everything now.
00:09:51 - Monarch Wadia
Yes.
00:09:53 - Anthony Campolo
So.
00:09:53 - Monarch Wadia
Cool.
00:09:53 - Anthony Campolo
If computers could smoke weed and produce cars. That's hilarious.
00:10:00 - Monarch Wadia
That's awesome.
00:10:01 - Anthony Campolo
That is hilarious.
00:10:03 - Monarch Wadia
It's basically what computers are doing now. They're creative. They're brainstorming. They can come up with all sorts of weird concepts. And honestly, some of these things are smarter than me.
I should probably divert the conversation to Ragged, because I think everybody kind of knows what ChatGPT is now. Everybody sort of knows that there's AI you can talk to, or at least in our circles. At least the people watching this will know what ChatGPT and GPT-4 are, more or less.
So one thing that a lot of developers are scared of, and it's a valid concern as well, is this: is AI going to steal our jobs? In the last ten years, well, more than ten years now that I've been a developer, I've seen that developers are always programming themselves out of a job. And that's not a thought that I came up with.
[00:11:00] That's something that developers before me came up with. This has been a common thought since probably the 70s or 80s.
00:11:08 - Anthony Campolo
It's also been that joke where someone has like three jobs that they just show up to two hours a week because they wrote a bunch of scripts to do what they're supposed to.
00:11:17 - Monarch Wadia
That's basically what developers keep doing, right? They keep building abstractions and automation. And so we're constantly at the leading edge of just programming ourselves out of a job. We've always been doing that. And people say, like you just said, well, this is different. There's this thing called Devin. And I saw a video for something called Devin. And Devin is like...
00:11:38 - Anthony Campolo
People are now saying it's a fraud. And this is something you've got to be wary of, is that you can watch a demo of one of these tools and get an idea of what it could do, but those demos are highly curated to make it look as impressive as possible. You really have to get your hands on these tools to get a sense of what they can and can't do.
And this is what made ChatGPT such a revolution. It's just a text interface on the internet anyone could go use. And so people were able to see for themselves what it could do and what it can't do. And that's why when ChatGPT first came out, there was a lot of pushback saying, hey, look at all these things it sucks at. And it's gotten progressively better and better.
A lot of those things have improved. It can do math now: if you give it math equations, it will actually write code to do the math. It used to just take in numbers and spit numbers back out with no concept of whether they actually made sense or not.
[00:12:25] But that's what I like about talking to you about this, is that you actually use this stuff day to day, and that's the only way you can really get a sense of its capabilities and its limitations.
00:12:38 - Monarch Wadia
You know what? That's a great segue to hop into the code and start showing.
00:12:43 - Anthony Campolo
Let's show the GitHub or the npm package actually, so people can check this out themselves.
00:12:49 - Monarch Wadia
Oh yeah. Great idea. So if you go to npm and you go to Ragged, super simple name.
00:13:00 - Anthony Campolo
And it's Ragged, like playing on RAG.
00:13:04 - Monarch Wadia
It kind of is. I like the name, but the funny thing is, it has nothing to do with retrieval augmented generation right now. That's kind of what the generation part...
00:13:14 - Anthony Campolo
You'll build that in later and the story will be different in the future.
00:13:18 - Monarch Wadia
Exactly. We'll see where this goes. Maybe people will forget about RAG altogether and Ragged will just stand on its own. So super quick, I have 14 versions of this thing out, so I've been actively developing on it, and now we have contributors.
00:13:37 - Anthony Campolo
Share your YouTube actually. I'll share that in the chat. You have a YouTube video you just put out.
00:13:42 - Monarch Wadia
I do. So if you want to share that, that'd be amazing. As you can see by "sidebar position one," which is an artifact from markdown front matter, this is very raw. It's version 0.1.3. But what I'm excited about is this is an opportunity for me to get other people excited about this project and join us in the community to start working on this. Thank you for giving me this opportunity on your platform.
So what I'm doing is I have this old Discord with about 10,000 people on it called Mint Bean, and I'm just co-opting the Discord. I'm going to be using the Mint Bean Discord as a seed to grow the community. It's been sleeping for a while, and I'm really hoping that the Mint Bean Discord can be used to get a lot of attention on Ragged as it grows. But right now, there's only four people in the community.
[00:14:41] Really, I'm counting you and me, Anthony. So there's Abhimanyu too, who's here. And there's one or two more people who are actively involved now. But this is a great opportunity for anybody who's a web developer who doesn't know machine learning, or maybe knows a little bit of machine learning or data science, but they want to get into large language models and AI.
00:15:04 - Anthony Campolo
I'm going to say real quick, when I was first getting my start, before I had a job, I was doing a lot of my first meetup talks and Mint Bean was where I did many of those. So you gave me a platform originally when I was trying to get into coding. So this feels like a full circle moment where I...
00:15:21 - Monarch Wadia
Yeah.
00:15:21 - Anthony Campolo
...can bring you on my platform. And you were the second guest on JAM as well. So you were there. You started from the bottom. Now you're here with me. Very happy to continue doing this kind of stuff with you.
00:15:35 - Monarch Wadia
I get by with a little help from my friends. Friendships are amazing. So anybody who wants to get into AI, as I was saying, but they don't have AI skills in a traditional data science way, I'm hoping that Ragged becomes a platform for developers who want to get into AI to get there without knowing all the hardcore stuff. I don't know the hardcore stuff either.
00:15:57 - Anthony Campolo
Yeah, I want to touch on a thing you said earlier about this being for JavaScript developers and not Python developers, because that's a similar thing I noticed getting into a lot of these libraries. LangChain especially and LlamaIndex, those are kind of the two really big open source libraries that aim to do similar stuff to what you're doing. And there are some differences, but with both of those, they were Python libraries that they then kind of ported to JavaScript.
They would have this huge docs with all these examples in Python, and then there'd be like, here's two JavaScript examples. And I'd be like, okay, I want to do all this other stuff that is in Python, but you only give me these two examples in JavaScript. So then you're kind of trying to port stuff over and it ends up being very challenging. So having something that is from the start JavaScript, TypeScript, Node native, that is one thing that is really exciting to me about this project.
00:16:53 - Monarch Wadia
It's definitely been a journey. I had to pick up so much Python because I wanted to get into large language models. And then after picking up Python, I realized that I don't really need the hardcore stuff. I'm more interested in the integration into a full-stack architecture, and I'm a full-stack dev who wants to take advantage of AI, and I don't really need Python. Everything is happening over APIs anyway, so let's build something that's for full-stack web devs.
Ragged works on both the front end and the back end, and the examples that I'm going to go through today are going to be front end examples. I'm going to go through a little intro in the back end just to show that it works.
00:17:27 - Anthony Campolo
And you should bump up just one font. That would help.
00:17:30 - Monarch Wadia
Oh, yeah. Totally. Maybe a couple. There you go. Is that better?
00:17:35 - Anthony Campolo
Yeah.
00:17:36 - Monarch Wadia
Awesome. So I'm going to go through a couple of examples. We'll walk through the very basic first API call example, and once that's done, I'll go into a simple example that's also on here for the streaming API. But the really exciting stuff is the tool integration. What tool integration does is let the LLM call your code in a structured way: it outputs structured data that you can then feed into your architecture and emit as an event.
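The event-driven flow Monarch describes here can be sketched with Node's built-in EventEmitter. This illustrates the pattern only; the event names and payload shape are invented, not Ragged's actual internals.

```typescript
import { EventEmitter } from "node:events";

// Sketch of the event-driven pattern described above: the LLM produces a
// structured "tool use" payload, and application code listens for it.
// Event names and payload shape are illustrative, not Ragged's internals.

interface ToolUseEvent {
  tool: string; // which tool the LLM chose
  input: Record<string, unknown>; // the structured arguments it produced
}

const session = new EventEmitter();
const results: string[] = [];

// Application code registers a listener for tool-use events.
session.on("tool.use", (event: ToolUseEvent) => {
  if (event.tool === "searchWikipedia") {
    results.push(`searching for: ${String(event.input.query)}`);
  }
});

// When the LLM responds with structured output, the library would parse it
// and emit the event; here we emit one by hand to show the flow.
session.emit("tool.use", {
  tool: "searchWikipedia",
  input: { query: "aliens" },
});
```

The listener is where the LLM's output crosses back into ordinary application logic, which is the "emit as an event" step Monarch mentions.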
00:18:13 - Anthony Campolo
Essentially. You also have this home page as well that people can check out.
00:18:18 - Monarch Wadia
Yes. That's the quick start. We need people to write documentation. I've tried to write documentation. It's not my forte. I've tried to build examples. They're not my forte.
00:18:28 - Anthony Campolo
That was the first thing I did when you sent me this. I created a little quick start and shot it over to you.
00:18:34 - Monarch Wadia
Yeah. If anybody wants to join and just build examples, that'd be awesome. And thank you for building that. You were actually the first person to join the...
00:18:43 - Anthony Campolo
I was the first contributor.
00:18:45 - Monarch Wadia
You were the first contributor.
00:18:47 - Anthony Campolo
I like being on the ground floor of this kind of stuff. It's fun. And I really learn a lot just by building stuff like this with people.
00:18:54 - Monarch Wadia
You did it with a few different projects now, like zero to one, baby.
00:18:58 - Anthony Campolo
Yeah, that's right. The other one that I was at the ground floor for, I remember this is the second time where I remember when they're publishing the npm package itself. It was Slinkity. I don't know if you know Ben Holmes, but he was talking about this project, Slinkity, which is basically like, how do you combine Eleventy with React components and Vite and stuff?
He was talking about it and I was really excited about it, and I literally went to my computer and did npm install Slinkity and I'm like, Ben, this thing doesn't exist yet. He's like, yeah, I haven't published it yet. And I'm like, you're hyping it up though. Where is it?
00:19:33 - Monarch Wadia
Oh, really? So he hyped it up before he even published it.
00:19:36 - Anthony Campolo
Yeah.
00:19:37 - Monarch Wadia
Wow. I remember, I think, you tried to install it and then you came back and you're like, this thing is broken. And I was like, crap. Remember that? Yeah. I fixed that up, but it's all working now.
And the really exciting thing is the tool integration example. You know what? Let's work in reverse. Let me actually show you what this thing is capable of, and let's get people excited. And once people are excited, let's go back and do the tutorial.
So just to get people excited, here's a little example that I built. This is Smart Reader, and it's committed to the Ragged GitHub repo. It's available for anybody who wants to check out the code. So if you go to the Ragged GitHub and you go into the examples, you'll see this. Smart Reader is a command-and-control user interface where you talk to the AI and the AI does things for you.
[00:20:43] The AI right now can do two things. It can search Wikipedia and display results, and it can analyze those results. The interesting thing about this is that even with those two things, you get a lot of emergent properties that you can then build on top of. So I'll just do a quick example. Search for aliens. Let's just do aliens.
00:21:14 - Anthony Campolo
Can you actually explain what tools are? Because this is a specific term that's being used in a specific way. I'm not sure who first started using this terminology, but tools is a specific term.
00:21:29 - Monarch Wadia
It is. The first time I came across tools as a term was in a research paper where somebody had taken large language models and gotten them to use a bash terminal, a Linux terminal. They were checking whether this LLM, out of the box, without any special training, could actually use tools. And they found that it could.
The way an LLM uses tools is it gives you a command and you run the command on its behalf. You could execute it in a terminal, just pipe it to the terminal and execute it, or you could put it in an eval in JavaScript. Sort of dangerous. But people figured out that these LLMs, because they've been trained on GitHub and Stack Overflow and the rest of the internet, know how to code. And because they know how to code, they know how to output data in JSON. And once you have output in JSON, you can build an API on top of that and sell it to your customers.
[00:22:34] And that's what OpenAI and Claude and Cohere, all of these large language models, are coming out with: what they call function calling or tool use. What tool use is, you tell the LLM, here are all the tools I have. You can imagine it like an RPG, like you have a backpack full of tools or weapons. So you fill the metaphorical backpack full of your tools and you tell it, this is how you use them.
It's like a JSON blob, maybe 200 characters or 300 or 500 characters long, and you send that to the LLM, and then you can do stuff like search for aliens. And what it'll do is it'll trigger the tool. In this case, it's a Wikipedia search and it'll come back with the results.
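Concretely, the "JSON blob" of tool definitions follows a schema like OpenAI's function-calling format; other providers use similar shapes. A Wikipedia search tool along the lines of the demo might be declared roughly like this (the tool name and descriptions here are illustrative):

```typescript
// Roughly what one tool definition in the metaphorical "backpack" looks like
// in OpenAI's function-calling format. Other providers use similar schemas.
// The tool name and descriptions here are illustrative.
const searchWikipediaTool = {
  type: "function",
  function: {
    name: "search_wikipedia",
    description: "Search Wikipedia and return the top matching articles.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "The search term, e.g. 'aliens'.",
        },
      },
      required: ["query"],
    },
  },
};

// This object is serialized and sent alongside the prompt; when the model
// decides to use the tool, it replies with JSON naming the function and its
// arguments, which the application then executes.
const serialized = JSON.stringify(searchWikipediaTool);
```

The serialized definition is on the order of a few hundred characters, which matches the "200 to 500 characters" Monarch describes sending with each request.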
00:23:20 - Anthony Campolo
And this is actually when I was first using ChatGPT. This was a huge sticking point for me, that it couldn't use the internet. It was trained up to, I think I remember this very well, September 2021. It would always say "my knowledge only goes up to September 2021." It would say this over and over again. And I was like, man, it'd be really nice if you could just search the internet and get new data. And so web browser is one of the tools now on ChatGPT.
00:23:48 - Monarch Wadia
You got it. And they also do Python execution now. You can generate images, have them output into ChatGPT. It's crazy what you can do with it now. But yeah, that's a great point. All of that stuff is enabled by this feature, which is tool use.
You can see that it's outputting all of this stuff. These are all Wikipedia articles, and these are just simple extracts of the first however many characters from the article.
00:24:19 - Anthony Campolo
So you can tell it's Wikipedia because it's coming up with movies and TV shows and the type of things that you see lots of, or just like a concept here. Nordic, like this is the first thing that actually has to do with aliens and not just a TV show or a movie. I'm a little bit into UFO stuff recently, so I recognize that one.
00:24:39 - Monarch Wadia
You know what? That's actually a great kickoff point, because now I can actually go in there and tell the AI agent, hey, most of these are fictional examples. Could you tell me which of these are actually related to ufology and not just fiction? And I can hit command. So I gave it that prompt, and here's the assistant output.
00:25:13 - Anthony Campolo
That's kind of the more LLM type thing where you're having a conversation with it.
00:25:18 - Monarch Wadia
Exactly. So here it comes. From the following results, Nordic aliens, that's the first one that you pointed out, Anthony. That one is not fiction. Ancient Aliens is, funny enough...
00:25:31 - Anthony Campolo
An attempt to be real, yeah.
00:25:35 - Monarch Wadia
But arguably it's still ufology, and it picked that out from ten results. We can look at the titles of the results: Alien, Aliens, Alien Franchise, Cowboys & Aliens, Alien: Romulus, Ancient Aliens, Xenomorph. It rejected all of those. And it clued in on the fact that Ancient Aliens and Nordic aliens look like they have content that's related to ufology.
00:26:01 - Anthony Campolo
Yeah, and I would say that's correct. I would say that's definitely true out of all of those.
00:26:07 - Monarch Wadia
Isn't that super useful? Imagine if you have 100 results and you as a human have to go in and manually select each result. Can you imagine how painful that would be?
00:26:19 - Anthony Campolo
Yeah.
00:26:20 - Monarch Wadia
Over here, what we can do is actually tell the AI, okay, pick out the ones that are interesting, which is in essence a filter. And after that point you can actually tell it to select them.
That's a feature that we can build. And maybe we'll get to that towards the end of the video. But that's a feature that we could super easily build using Ragged. This whole thing is built in Ragged. There's no LangChain. There's just Ragged. And what Ragged is doing is it's defining a few tools and it's calling the OpenAI API.
00:27:00 - Anthony Campolo
So right now you have an API key that is calling a model that is being hosted by OpenAI.
00:27:08 - Monarch Wadia
Yes, exactly. I love the way you explain things. It's super clear. I tend to gloss over things.
00:27:16 - Anthony Campolo
Yeah, because this is the type of thing that when someone sits down to use these kind of things, they're like, okay, how do I set this up? How do I run it? That's usually where my mind goes. And the first thing you do is go get an API key. Otherwise you're going to send that message just to get nothing back.
00:27:34 - Monarch Wadia
Exactly. So let's just go into how this thing is instantiated, how this thing is built. This is a Svelte view. Everything starts inside onMount: when the component gets mounted, the first thing we do is instantiate Ragged. And we're telling it to use the OpenAI provider. And here's the config. The config object is literally the OpenAI config object, so you have all of the OpenAI configs in here.
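The setup pattern described here can be sketched as a config object: pick a provider, then pass the provider's own configuration straight through. Note these type names and the constructor shape are illustrative, not Ragged's actual exports; the `dangerouslyAllowBrowser` flag, discussed below, is a real OpenAI client option.

```typescript
// Hypothetical shape of the setup described in the demo: choose a
// provider, and hand over the provider's own config object untouched.
// Type names here are illustrative, not Ragged's actual API.
type RaggedSetup = {
  provider: "openai";
  config: {
    apiKey: string;
    // OpenAI's client refuses to run in the browser unless you opt in:
    dangerouslyAllowBrowser?: boolean;
  };
};

const setup: RaggedSetup = {
  provider: "openai",
  config: {
    // Visible to anyone with dev tools open; acceptable for internal tools.
    apiKey: "sk-placeholder",
    dangerouslyAllowBrowser: true,
  },
};
```

For anything public-facing, the key would instead live in a server-side or serverless function, as Anthony notes later in the conversation.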
00:28:08 - Anthony Campolo
Server side, because you're using onMount?
00:28:11 - Monarch Wadia
Nope. This is all client side.
00:28:14 - Anthony Campolo
Okay. Isn't that kind of an issue though, if you have your API key in there?
00:28:17 - Monarch Wadia
That's a security decision. So if you're using this as, let's say, an internal tool, then that's a decision that is totally fine. You can use this as an internal tool.
00:28:27 - Anthony Campolo
You just have this on localhost. You haven't actually deployed this to the world.
00:28:31 - Monarch Wadia
Exactly. And if you look at the structure of the application, you can see that if somebody was really serious about research, then maybe this could be bundled as an Electron app or a desktop app and provided to the user. And the user could provide their own login into OpenAI and they could access it on their own.
So that's one use case. There are definitely use cases, even though it says "dangerously allow browser." It's a totally legit use case.
00:28:59 - Anthony Campolo
"Dangerously allow browser" is different from what I'm talking about. What I'm talking about is having your API key in the JavaScript client and being able to find it in your dev tools. Or is "dangerously allow browser" the tool part that's letting the LLM use the browser?
00:29:13 - Monarch Wadia
No, no.
00:29:15 - Anthony Campolo
It's the other thing.
00:29:16 - Monarch Wadia
This is an OpenAI thing. OpenAI says by default, client use of the OpenAI library is not allowed, as it risks exposing your secret API credentials to attackers.
00:29:25 - Anthony Campolo
Oh, so they actually build that in themselves, so they're already aware of that.
00:29:29 - Monarch Wadia
Exactly.
00:29:30 - Anthony Campolo
Okay, interesting.
00:29:32 - Monarch Wadia
So one of the things I'm trying to balance is transparency versus ease of use. If you're going to use Ragged on the client side and you're consciously making the decision that, okay, I'm going to use this inside the browser and I'm going to expose my API key, which is a valid decision in many cases, then just put this in there.
I know it says "dangerously," but it's there for a reason. It's to scare off people from making the bad choice inadvertently. But if you know what you're doing, then "dangerously allow browser" is fine.
00:30:01 - Anthony Campolo
Well, the reason why I bring that up is my first job was for StepZen, which was a GraphQL API company, and all the first examples I built, that was what they kept harping on. They're like, you need to actually make sure people don't expose their API keys, because for them they were old school API devs. They're like, never expose your API key. Absolutely cannot do that.
And so I figured out all these ways to do server-side functions or serverless functions to hide that kind of stuff, and things like SvelteKit and most of these frameworks now build in ways to do that. It's pretty easy. So there are ways to manage it that aren't too challenging. But like you said, this is even simpler.
00:30:37 - Monarch Wadia
Yeah. What's going to happen in the next three to four years is a lot of devices are going to come on the market that have powerful enough GPUs that they can actually run large language models locally. So I just read an article...
00:30:54 - Anthony Campolo
Yeah.
00:30:55 - Monarch Wadia
Totally. So what I'm hoping for with Ragged is when that happens, we can very quickly modify Ragged to have a local provider. So instead of OpenAI, maybe this would say "local" and maybe you don't even have the config anymore because you're just using the local machine to do it. There are ways to do this inside the browser already using Hugging Face. I'm not going to go into that, but that's a driver that we could very easily build.
00:31:21 - Anthony Campolo
For people who are interested in this, I would just point people to Mistral and the new Llama 3 release. There are really exciting things happening in open source LLM work right now.
00:31:31 - Monarch Wadia
Totally. It's so cool right now. The other thing is Cohere, which we're going to be building an integration for here.
00:31:41 - Anthony Campolo
Yeah, I built a server example with them for Edgio right before I left. It was literally the last project I built for them, creating a Cohere integration. Their API is really nice. Their LLM is not quite as good as Claude and ChatGPT, but it's really close.
00:32:01 - Monarch Wadia
I think they have a free tier now. So if you're a developer, you don't have to pay to develop stuff as long as you're not pushing to production.
00:32:09 - Anthony Campolo
Yeah, their APIs are really nice. I definitely recommend people check that out.
00:32:13 - Monarch Wadia
They've been growing on npm, so yeah, 100%. Cohere is the next integration, and somebody in the community is already building the driver for it. Since Cohere is free, it's easy for developers to get into. Even if you don't have a credit card, you can get function calling from an API.
Ragged works on drivers. Right now we only have OpenAI, but Cohere is coming up soon. So coming back to the code here, how this works is I'm just instantiating Ragged and after it's instantiated, I have a few tools that I make available.
There's the doSearch. This is the Wikipedia search that I showed earlier. Simple function. All it's doing is it takes a text input and then it does searchWiki, which is an API util that I built.
[00:33:15] Nothing fancy. It just searches Wikipedia. It's like an AJAX call inside it. Kind of boring. Gets the results and then saves them in state. Really simple function.
But we need to expose that function to the LLM. So how do we do that? We are listening for a tool use finished event. What I've done is I've created a tool called Search Wikipedia, and Search Wikipedia is a Ragged tool. So this is all Ragged stuff.
The tool title is "Search Wikipedia." The description is "Search Wikipedia for a term. This will return a list of results. It will also display the results to the user. You must only call this function when you're explicitly asked to run a search." So that's the prompt for the tool.
And you describe exactly how to use the tool. After giving that context, you describe what the inputs are. This is a schema definition. The search term is a string, and here's a simple description.
[00:34:19] You note that it's required and that's it. That's the tool. And the way that this tool is being handled is it's event-driven. Ragged is entirely event-driven. When the tool use finished event happens, if the name of the tool is Search Wikipedia, then we do doSearch. And that's how you hook the LLM into your code.
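The event-driven hookup described above can be sketched with a plain EventEmitter. This is the general pattern, not Ragged's actual event names or signatures: an event fires when the LLM finishes a tool call, and application code routes it to the matching function.

```typescript
import { EventEmitter } from "node:events";

// General sketch of the pattern: the LLM layer emits a "tool use finished"
// event carrying the tool's name and arguments, and application code routes
// it to the right handler. Event and tool names here are illustrative.
const events = new EventEmitter();

const searchLog: string[] = [];
function doSearch(term: string) {
  // In the demo this fires an AJAX call to Wikipedia and saves the
  // results in component state; here we just record the term.
  searchLog.push(term);
}

events.on(
  "tool.use.finished",
  (toolName: string, args: { searchTerm: string }) => {
    if (toolName === "search-wikipedia") {
      doSearch(args.searchTerm);
    }
  }
);

// Simulate the LLM deciding to call the tool:
events.emit("tool.use.finished", "search-wikipedia", { searchTerm: "aliens" });
```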
00:34:50 - Anthony Campolo
This is probably slightly outside of my general knowledge, but is this why you need RxJS? Because of the event stuff?
00:34:58 - Monarch Wadia
Yeah, we don't need it. I'm using it right now as part of the API. I might take it out because it adds a little bit of bloat. Maybe it's because I'm not doing tree shaking, but it's adding like 100 kB or maybe 150 kB right now, which is a lot for a front-end library.
If somebody wants to solve that problem, come on board and help us solve that. That's one of the things that I'll probably solve in the next couple of months, bringing down the size of the package.
But yeah, that's how doSearch works. And doSearch is the function that we saw earlier. So if we wanted to add something like a selectArticle function, it doesn't have to be async. I could give the article ID which is a number, and then maybe we can add that selection.
[00:35:55] We could add the selection into state and the history object over here. That's the console history. So maybe I should just read the console history screen, and we could append the history so that it shows what the actions of the LLM are. It's mostly really boring JavaScript code.
00:36:18 - Anthony Campolo
As I'm looking at this, this all looks pretty comprehensible to me.
00:36:22 - Monarch Wadia
Exactly. Nothing magical here. Everything is simple. And the nice thing about Ragged is that it's all TypeScript code. So you get type hints. And because of the type hints, if you're using something like GitHub Copilot, the type hints will actually help you build Ragged code already, just out of the box.
So yeah, that was a little bit of a demo for Ragged. Work is progressing right now. Ragged doesn't even have any history, so I sort of faked it over here. The assistant output is just one-off commands that I'm sending. If I asked it, "Hey, what did you just say?" it'll just say something like, "Oh, I don't remember. I didn't say anything."
So this is very early code. But the goal here is to continue building these tool use examples out and to provide documentation and tool use examples and turn this into a thing that normal web developers with no experience whatsoever with LLMs can very easily use.
[00:37:30] And that's the goal. Super low barrier to entry.
00:37:33 - Anthony Campolo
Yeah. What I would like to do next is could you go to your example where you just run a Node script that throws text at an LLM and gets a result back? Because for me, I found that really helped me compare it to things like LangChain, where LangChain has this abstraction called chains and it adds a lot of overhead in terms of understanding.
Whereas what I liked about the example you first showed me is it was a single function that you give text to, and then you get a response back from the LLM. I thought that was pretty cool.
00:38:09 - Monarch Wadia
Yeah, 100%. So let's take a look. This is a very simple piece of code that adds a couple of numbers together. Very simple. And there's nothing interactive. This is the prompt over here, "add 66 and 66." And it should give you the right answer. So let's work with that example.
The project is so new that I haven't even changed my working title for the folder. I called it Language Game before I called it Ragged, and I haven't even changed that.
00:38:40 - Anthony Campolo
I like that.
00:38:42 - Monarch Wadia
Yeah. Wittgenstein, I think, was the one who talks about language games.
00:38:50 - Anthony Campolo
Yeah, that's my thought too. I had a conversation with the CEO of Trychroma about Wittgenstein and how he was 100 years ahead of the game with all this stuff.
00:39:04 - Monarch Wadia
Oh, yeah. Just breaking apart language and realizing it's just noise. It was brilliant. Here's how this example will go. I'll run npm dev. It should tell us what the prompt is, and then it should tell us what the answer is. And the answer will come from the adder tool that we've defined already. I'll demo the functionality and then we can walk into the code. Prompt: "add 66 and 66." Answer: 132.
00:39:38 - Anthony Campolo
See, it's basically just doing tsx main.mts. So you're just running this function right here, and then it's giving you this output.
And then, about the last example, it was not entirely clear where the prompts were. So I like this where it says "this is the prompt," because with LLM terminology, calling something a description doesn't always make it clear whether you're actually feeding the model a prompt or just giving yourself explanatory text.
00:40:11 - Monarch Wadia
Great point. I'm learning from you too, Anthony, because explaining things, I think it's just the curse of being in the trenches. You're so close to the problem that it's hard for you to translate what's in your head, with all of the details, into something that is actually useful for other people. So thanks for slowing me down there.
00:40:30 - Anthony Campolo
Yeah, okay.
00:40:32 - Monarch Wadia
So we saw just now that "add 66 and 66" is the prompt that comes from here. And then the answer somehow magically appeared. The answer was 132. So where did that answer come from?
Well, if we look up here inside the tool, there are two ways to use these tools. One is you can listen to the event. Two is you can provide a handler to the tool. The handler just makes it really easy to return the results. It's almost like a mapper.
You could either console.log inside the handler. So I could very well just console.log the result over here, and that's one way to just log it. That's totally legit. Another way to log it would be to actually listen. Let me see if I'm actually listening over here because I don't remember anymore.
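The handler style can be sketched with the adder from this example. The `Tool` type and `dispatch` function below are a tiny stand-in for the library, not Ragged's real API; they just show the idea that a handler attached to the tool definition maps the LLM's arguments to a result.

```typescript
// Toy sketch of the "handler attached to the tool" style described above.
// Type and function names are illustrative, not Ragged's actual API.
type Tool = {
  name: string;
  // The handler maps the LLM-provided arguments to a result, like a mapper.
  handler: (args: { a: number; b: number }) => number;
};

const adder: Tool = {
  name: "adder",
  handler: ({ a, b }) => a + b, // could also console.log here, as in the demo
};

// A tiny dispatcher standing in for the library: find the tool the LLM
// asked for and return its handler's result.
function dispatch(tools: Tool[], name: string, args: { a: number; b: number }) {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}

// "add 66 and 66" would lead the LLM to call the adder with these args:
const answer = dispatch([adder], "adder", { a: 66, b: 66 });
console.log(answer); // 132
```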
00:41:37 - Anthony Campolo
R.type, tool use result. Right.
00:41:41 - Monarch Wadia
Right. So the result over here, I'm not using event streaming. I'm just using a promise interface to simplify the example. So how I'm actually doing the console.log is all of the actions and all of the history comes back as a result. And I'm just looking for the exact event called "tool use result" and I'm logging that. It's all promise-based. So I'm just awaiting predict.
00:42:13 - Anthony Campolo
It's the predict function. That's when I was doing this with you, that's specifically your API abstraction term for where you're giving it the actual prompt itself.
00:42:25 - Monarch Wadia
Exactly.
00:42:27 - Anthony Campolo
"Ragged makes it so easy to integrate tools. I tried building an adder using GPT OpenAI API. The response object is so complex, but Ragged abstracts all that." There you go. There's your one-liner pitch.
00:42:40 - Monarch Wadia
There we go. Abhimanyu is awesome at explaining stuff like that. That's exactly it. If you look under the hood and you look at the actual query that's going up, labyrinthine is the word that comes to mind. It's a really complex nested object.
The nice thing about Ragged is, here's the prompt. Look at this. Result equals ragged.predict. Here's the prompt: "add 66 and 66." We're going to use the GPT-4 model, which is smarter. GPT-4 is better for tool use.
And we're telling it you have a calculator here called adder, and you can use adder. What is adder? Well, let's take a look. Adder is a tool. It's called "adder." That's how the LLM sees it with that string. So if I change that, it would change the name of what the LLM calls it. The description is "it adds two numbers together." What are those numbers? A and B.
[00:43:33] A is the first number.
00:43:34 - Anthony Campolo
This is not a built-in OpenAI tool. This is a tool you are creating and defining and then handing to the LLM and describing.
00:43:44 - Monarch Wadia
Yes, exactly. And then you take that and you drop it inside the prompt. So if I remove this, if I just give it an empty object, then the LLM would try and do it using its own brain, and it might give you the right answer. But the tool use is really where it shines.
If I wanted to create a multiplier, so multiplier equals t. t comes from Ragged, that's being imported at the very top. So you import t from Ragged.
00:44:21 - Anthony Campolo
And t for tool.
00:44:23 - Monarch Wadia
t for tool, yeah. Now you start off with tool, and then it has a few other things which I'll get into when we get to the inputs. But let's start with tool.title multiplier. And GitHub Copilot is just going to do most of the hard work for me. Description, multiply two numbers together, and then the inputs.
This is where t shows off a little bit more of its power. t will give you different types of inputs. So right now we're supporting boolean, enum, number, and string. And the short-term goal is to add object and array. So you can pass in nested objects and you can pass in arrays. And you can even pass in arrays of nested objects. So that becomes more powerful.
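A toy version of a `t`-style schema helper might look like the sketch below. This is a from-scratch illustration of the idea (covering the four input types just mentioned), not Ragged's actual `t` implementation.

```typescript
// Toy illustration of a schema builder in the spirit of the `t` helper
// described above. Hypothetical implementation, not Ragged's actual code;
// it supports the four input types mentioned: boolean, enum, number, string.
type InputSchema =
  | { type: "boolean"; description: string }
  | { type: "enum"; description: string; values: string[] }
  | { type: "number"; description: string }
  | { type: "string"; description: string };

const t = {
  boolean: (description: string): InputSchema => ({ type: "boolean", description }),
  enum: (description: string, values: string[]): InputSchema => ({
    type: "enum",
    description,
    values,
  }),
  number: (description: string): InputSchema => ({ type: "number", description }),
  string: (description: string): InputSchema => ({ type: "string", description }),
};

// The multiplier's inputs from the demo, expressed with the toy helper:
const multiplierInputs = {
  a: t.number("The first number"),
  b: t.number("The second number"),
};
```

Adding object and array types, as mentioned, would mean extending the union with variants that nest `InputSchema` recursively.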
00:45:11 - Anthony Campolo
That's where you start getting into, like, if you wanted to build RAG into this, you could do like an array of text files or something and then have that be an input that it can answer questions off of.
00:45:24 - Monarch Wadia
Yes. Now here's the thing: the input, this is where traditional web dev changes and it becomes more AI- and ML-driven. In traditional web dev, the input is only provided by the user or by the developer. But with machine learning and AI and large language models, the input can also be provided by the agent.
So the input over here is actually the agent input. This is the input that the AI will put in. Maybe I should rename this. Maybe this should be called AI inputs, just to clarify, but this is stuff that the AI will put in. And the AI will enter the number description, first number. So what you were saying was right, but instead of the actual documents, this would actually be the operations on the documents. So you would provide the documents in context and then the AI.
00:46:18 - Anthony Campolo
Like the embedding layer, right.
00:46:21 - Monarch Wadia
Yeah, exactly. So if you wanted to generate embeddings, then maybe you could ask the AI to generate an embedding over here. Maybe an example would be, let's say you had documents. This fictional example: selectDocument equals tool. And then you would have title, select document. Description, select document from a list, select it by ID, each ID is unique. And then you would have inputs, and you wouldn't have documents, you would have document ID.
And how you could call this is you would do r.predict. Here are the documents, and then you pass in the documents inside an array. And then you would prompt it. So we haven't built that out yet, but you can add that. Super cool.
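The selectDocument idea can be simulated end-to-end without an LLM: the agent's only job is to hand back IDs, and plain code does the rest. A sketch, with document data and the selected IDs invented for illustration:

```typescript
// Simulating the selectDocument flow sketched above: the AI's role is
// reduced to returning document IDs; everything else is ordinary filtering.
type Doc = { id: number; title: string };

const documents: Doc[] = [
  { id: 1, title: "Alien (film)" },
  { id: 2, title: "Nordic aliens" },
  { id: 3, title: "Cowboys & Aliens" },
  { id: 4, title: "Ancient Aliens" },
];

// Stand-in for the LLM's tool calls: in the real flow, these IDs would
// arrive one selectDocument call at a time from the model.
const idsSelectedByAgent = [2, 4];

const selected = documents.filter((d) => idsSelectedByAgent.includes(d.id));
console.log(selected.map((d) => d.title)); // ["Nordic aliens", "Ancient Aliens"]
```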
00:47:25 - Anthony Campolo
Yeah, that makes sense.
00:47:27 - Monarch Wadia
Yeah. And then you could select document and you could say here are the documents, select the ones that have to do with ufology. So you could have like 100 documents and then the AI will go in and just select the ones that are related to ufology and give you the exact IDs.
And then you can do with those what you will. You could delete them, you can modify them, you could show that you've selected them, you could move them around, whatever you want. But you can see how instead of going in and asking the user to select, who might be a very busy admin person, to select all of the documents that are relevant, now the admin person can just go in and say, "All right, my boss asked for just the ones that have to do with ufology. I'm just going to ask the AI to do the hard work for me and select all of the ufology documents." It'll give me that, and then I can generate a report and send it to my boss.
[00:48:22] So instead of spending an hour or 5 minutes or 10 minutes or however long, now the AI is doing all the boring work and that empowers the user to do more with their time.
00:48:32 - Anthony Campolo
Totally, yeah. I've been using LLMs so much just for my own content workflow. Right now I have a whole setup where I run Whisper to transcribe this video itself. After we're done, I run it through Whisper, transcribe it, I take that transcript and I feed it to an LLM with a prompt that says create a one sentence description, a one paragraph description, and then chapter headings with timestamps.
So it'll give me like a meta description if I want to just turn it into a quick markdown thing. I even have it generate the markdown if it's a YouTube video, because it can read the YouTube metadata and then turn that straight into a markdown file. So it's almost like a website generator in some sense from YouTube videos.
So that's one of the things that I've been doing and I've found it to be super useful. And I'll probably build out more integrations with other LLMs because right now, the way I have it set up, it literally just spits out a prompt with a transcript, just concatenated into one document, and I copy-paste it and give it right to the input. I use Claude now because Claude has a longer context window.
[00:49:41] So it's better for two hour long things. And then I just copy and paste it back in, so I don't have it set up with an API call at all. But this seems like the type of thing that eventually, maybe even right now, could do something like that.
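The copy-paste step in this workflow is just string concatenation: instructions plus transcript, pasted into one document. A minimal sketch, with the instruction wording paraphrased from what's described above:

```typescript
// Minimal sketch of "transcript + instructions -> one prompt document".
// The instruction text paraphrases the workflow described above.
function buildShowNotesPrompt(transcript: string): string {
  const instructions = [
    "Create a one sentence description.",
    "Create a one paragraph description.",
    "Create chapter headings with timestamps.",
  ].join("\n");
  return `${instructions}\n\n---\n\n${transcript}`;
}

const prompt = buildShowNotesPrompt("00:00:00 Welcome to the show...");
// Paste `prompt` into Claude (longer context window), or later wire it
// up to an API call instead of copy-pasting.
```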
00:49:55 - Monarch Wadia
Yeah, I think the multimedia stuff is sort of coming. As soon as we have a... there are problems with the API over here. Like, given the inputs here, it's not clear that it would be the AI putting them in. I'm not really super happy with the overall API yet. It's a very early-on project.
So once the API is settled and we know what the API is going to look like, on my roadmap, the roadmap sort of extends into a bunch of features. We need to start adding multimedia stuff to this, because multimedia stuff is where a lot of hard work goes into working with videos. Right now, a lot of people transcribe them manually. A lot of people subtitle them manually.
So ideally what this thing can do is, imagine a Chrome extension where you can just go to YouTube and Ragged is listening in on the YouTube video or somehow feeding into the YouTube stream, and it's generating the markdown document for you, the one that you described.
[00:51:08] So building Chrome extensions is a feature that Ragged is probably going to excel at. And the problem really is that all of these AI, LLM things, they haven't been built for developers.
00:51:25 - Anthony Campolo
Yeah, this is my blog post where I explain the whole workflow that does that. And yeah, you might be able to take some inspiration from that once you start doing more multimedia. The way I have it is you literally just clone Whisper and do it all on your computer. There's no API calls at all.
00:51:42 - Monarch Wadia
Can we read that? Do you want to show it off a bit? Because I'd love to learn more.
00:51:46 - Anthony Campolo
Yeah, let me share my screen and I can show what's going on here. So I'm using this tool called yt-dlp, which is a really powerful CLI that interfaces with YouTube and has FFmpeg built in under the hood. And what I do is you basically start with just a YouTube video. So this is, I think, the first episode of FSJam, and it runs through it and creates a WAV file. And I'll go through this kind of quickly just so you can get a sense of what it is.
And then it runs Whisper, which is OpenAI's open source transcription, like the last open source thing they ever created, because now everything they do is closed source, which is great. And then it creates a transcript which kind of looks something like this. I run a Node script to clean up the transcript a little bit. That's not really too exciting. And then I have this big prompt that I add to it.
[00:52:50] And this is actually for the sake of the blog post. I wanted to really show what you could do. It does some other things that I don't really use for my own workflow, but can be interesting. It will create five potential titles, a one sentence summary, a one paragraph summary, and a two paragraph summary. And then it will create chapter headings and then three key takeaways, and then three potential future topics if you want to expand upon it. And then you feed it an example of what you want it to look like.
00:53:26 - Monarch Wadia
Wow, that's incredible.
00:53:28 - Anthony Campolo
Yeah. And then this is the bash script that does it all at the same time, while also using it to grab things like the video URL, the upload date, the uploader (the YouTube channel), the YouTube channel URL, and then the title, upload date, and thumbnail. And then I'm just echoing out to a markdown file. It's a little hacky, but it works for sure.
Then you run yt-dlp, you run Whisper, do the transformation, and then I added a step where you could give it a playlist and it can do this on multiple videos. So then it has this loop step where it loops over a playlist and does it on every single video. So yeah, it's pretty built out. And here's the example of what it ends up giving you. You have this markdown header and then you have the potential titles, summaries, chapters.
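The pipeline just described can be sketched as command construction. The flags shown are standard yt-dlp and Whisper CLI options, but the output paths, the Whisper model choice (`base`), and the placeholder URL are assumptions for illustration, not Anthony's actual script.

```typescript
// Sketch of the yt-dlp -> Whisper pipeline as command strings.
// Flags are standard yt-dlp/Whisper CLI options; the paths and the
// model name ("base") are placeholder choices for illustration.
function buildPipeline(videoUrl: string): string[] {
  const audioFile = "content/audio.wav";
  return [
    // Extract audio as WAV (yt-dlp drives FFmpeg under the hood):
    `yt-dlp -x --audio-format wav -o "${audioFile}" "${videoUrl}"`,
    // Transcribe locally with OpenAI's open-source Whisper, no API calls:
    `whisper "${audioFile}" --model base --output_format txt`,
  ];
}

const commands = buildPipeline("https://www.youtube.com/watch?v=PLACEHOLDER");
console.log(commands.join("\n"));
```

Looping this function over a playlist's video URLs reproduces the batch step described above; the transcript that Whisper emits then gets concatenated with the big prompt and fed to the LLM.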
00:54:31 - Monarch Wadia
Dude, this is like...
00:54:33 - Anthony Campolo
Pretty cool, right?
00:54:34 - Monarch Wadia
This is like a fundable product right here. If you take this to a VC and show it to them, you might be able to raise money. That's incredible.
00:54:41 - Anthony Campolo
You're like the third person who's told me that. The first time I showed this to Jen, my wife, she's like, "Why aren't you charging for this?"
00:54:49 - Monarch Wadia
You really should, man. Honestly, if I were you, I would just take that and go to a VC and say, "Hey, look, I built this prototype, give me some money," and then you can hire me.
00:55:00 - Anthony Campolo
Cool. Well, I'm glad it's impressive. I've done a bunch of streams recently showing this to other content creators. I showed this to Ben Holmes, and I showed it to Nick Taylor and Nicky T, trying to figure out who else would want to use this. What would they want to be different about it?
This is my open source AI thing that I'm working on. You have yours, and this is what I've been hacking on for months and months and months.
00:55:26 - Monarch Wadia
Nice, dude. That's incredible. It's insane, right? Stuff that previously was completely impossible to do now is just trivial. Not to downplay that you built it out, because your stuff is super impressive. But I'm pretty sure if somebody wanted to build a little prototype that just got a summary, they could build it out in half a day, a day, two days, a week.
00:55:50 - Anthony Campolo
Yeah. You just need to run the Whisper transcription and concatenate a single sentence prompt saying, "Create a summary of this transcript." That's it. That's all you need to do.
00:55:59 - Monarch Wadia
That's super inspiring, man. That blog article, I want to read it now. I think you probably have it in the YouTube video somewhere, but I want to take a look at that because I built something similar-ish, but not really.
I built an automatic essay grader, and it was showing some really promising results. The professor or the teacher has to just provide the assignment and the rubric for the assignment to the AI. And then when the student submits the essay, the AI automatically grades that. And the first question I got from someone was, "Well, what about presentations? Can I do presentations?" And looking at what you just showed me, it looks like audio presentations could be pretty easy for this thing to do.
00:56:56 - Anthony Campolo
And I gotta ask, have you seen the South Park ChatGPT episode?
00:57:01 - Monarch Wadia
Yeah.
00:57:03 - Anthony Campolo
That was my favorite. My favorite part is when the teacher finds out that all the kids are using ChatGPT to write essays. His response was, "You mean I can use AI to grade these essays?"
00:57:15 - Monarch Wadia
I forgot about that. Let the AI write the essays, let the AI grade the essays, and then let the AI train itself on that and eventually take over the world.
00:57:25 - Anthony Campolo
Dude, that episode was so funny. Oh my God.
00:57:29 - Monarch Wadia
I like the shaman. The shaman...
00:57:32 - Anthony Campolo
Can...
00:57:32 - Monarch Wadia
Detect ChatGPT.
00:57:34 - Anthony Campolo
Yeah. And we're kind of like the shaman. I'm sure you get this too. There was a recent video I did where I got like four comments on it that were all a whole paragraph, just super positive, saying how great the video was. And I'm like, all four of these are LLMs. These aren't real people.
00:57:53 - Monarch Wadia
You could tell.
00:57:54 - Anthony Campolo
Yeah, it was so obvious. They all read like LinkedIn messages. Knowing how people follow crypto projects, no one's going to leave a whole paragraph-long, very written-out comment about how great something is where each sentence is the same number of words. There's a uniformity to the text that comes out, especially with ChatGPT.
Claude actually sounds more human in a really uncanny way. I wrote a blog post about this, how having a conversation with Claude almost feels like talking to a real person.
00:58:31 - Monarch Wadia
It's very eloquent and it's super empathetic. It understands. It doesn't feel like a bot. And the fiction writing is just incredible with Claude.
00:58:41 - Anthony Campolo
Yeah. Where ChatGPT, you feel like you're talking to a bot.
00:58:46 - Monarch Wadia
Yeah, pretty much. This stuff is, what, less than three years old? ChatGPT came out last year, was it?
00:58:54 - Anthony Campolo
It's not even two years old. It came out the last day of November 2022.
00:59:00 - Monarch Wadia
2022, wow. A year and a half. It's stuff that I think every developer should be hopping on and learning, especially because there was no machine learning or data science in anything we just talked about. It was all just JavaScript code and plain English.
And hey, you can prompt it in Chinese, in Hindi, in Russian, in several different languages. So it's already internationalized, which is the crazy part.
00:59:37 - Anthony Campolo
Awesome, man. So we're at the hour right now. Are there other things you want to talk about or show off, or do you want to start wrapping it up? I got as much time as you need, so it's kind of up to you.
00:59:49 - Monarch Wadia
Yeah. Let's wrap it up, man. I think this was nice, and I got all my points across. We're all looking for contributors and people to hop on. I've been mentoring developers for ages now.
So if somebody wants to hop on and just work on this, I'm happy to get hands-on and work with them side by side and show them the ropes of how to build a library and whatever they need. If they want to learn how to work with LLMs, I can show them. I can teach that in return for contributions. If they want to learn how to work with TypeScript, build their own library, show off their skills, I can work with them. Don't ask me to write documentation because I suck at it.
01:00:32 - Anthony Campolo
I really encourage people listening right now to take Monarch up on this offer. One of the biggest questions when people are getting into coding and they're told they need to get into open source is, "How do I get into open source?"
My buddy Brian Douglas, BDougie, has a whole company built around open source, and we've had a lot of conversations about this. One of the themes we always hit on is: don't go try to contribute to React. You're not going to be able to. It's run by a giant company, it has ten years of legacy, and it's used by so many people.
You want to find something small and at the ground floor, with a maintainer who is available and willing to help you get your feet wet with it. So if you want to get into open source, this is the type of project you should be looking at.
01:01:25 - Anthony Campolo
I would really stress that.
01:01:27 - Monarch Wadia
Totally. That's what I liked about Redwood too, especially when it started off. I don't know about now. I should really hit them up and just go say hi.
01:01:38 - Anthony Campolo
About the same size as ever, I'd say. Tom's got a team of people and some companies building it. It never went full Next.js, which was kind of disappointing for me because I thought it was so awesome and I wanted it to blow up.
But it's solid tech now, and they're moving to React Server Components. There's still a lot of really exciting stuff happening there. And it still has that small-team feel where you can get involved. You can meet the people, you can meet Tom. You get to meet the founder of GitHub, no big deal.
01:02:15 - Monarch Wadia
No big deal. Yeah. He's a really nice guy.
01:02:19 - Anthony Campolo
Super, super nice.
01:02:21 - Monarch Wadia
Yeah. Cool.
01:02:22 - Anthony Campolo
Awesome, man. Well, this was super fun. We will ship this. I will run it through my automatic show note generators, create some nice chapters and timestamps for people as well, for anyone who's going to be catching this afterwards.
And yeah, check out Ragged! It's a super cool project. I've already created a quickstart and I have my own repo out there, webdev's Ragged on GitHub, if people want to check it out. I'll probably end up writing a blog post as well because I think this is cool.
I really like getting involved in open source projects at the ground floor. It's something that I've gotten a lot out of, both personally and professionally. And it seems like we've had a lot of people in the chat who are really excited about this as well. So that's really cool.
01:03:10 - Monarch Wadia
Anthony, thank you for having me on and having the Ragged community on your stream. Yeah, thank you very much.
01:03:19 - Anthony Campolo
All right. We'll catch you guys next time.