AI Frontends with Anthony Campolo and Nick Taylor

Nick Taylor and Anthony Campolo discuss AI frontends, exploring tools like LlamaIndex and Mistral, and demo how to build AI-powered apps for content creators


Episode Description

Nick Taylor and Anthony Campolo explore AI fundamentals, build a chat interface with LlamaIndex TS, and discuss integrating AI into front-end web applications.

Episode Summary

Nick Taylor hosts Anthony Campolo for a wide-ranging conversation about AI and front-end development. Anthony shares how his interest in AI began with AlphaGo in 2016 and eventually led him to learn coding, while Nick describes his experience as a power user of tools like ChatGPT and GitHub Copilot. The two clarify key AI concepts including context windows, embeddings, fine-tuning, and agents, with Anthony explaining how foundational models are improving so rapidly that specialized fine-tuning is becoming less necessary as context windows expand to handle massive amounts of input. They discuss practical workflows — Nick shares how he used ChatGPT to learn Tailwind CSS by translating his existing CSS knowledge, and both reflect on the value of conversational debugging over autocomplete-style coding assistance. The hands-on portion centers on spinning up a LlamaIndex TypeScript project that ingests a PDF of Nick's past stream transcripts and enables natural language querying against that content, demonstrating how developers can integrate AI chat capabilities into any website. They briefly attempt a Mistral example but hit API key issues, which leads to a discussion about open-source versus proprietary AI models and the long-term importance of transparency. The episode closes with thoughts on normalizing AI tool usage in professional settings and plans for a future stream building a more complete AI-powered project.

Chapters

00:00:00 - Introductions and Anthony's Path to AI

Nick welcomes Anthony Campolo back for his third appearance on the stream. Anthony introduces himself as a developer advocate with experience across GraphQL, blockchain, and deployment platforms, and explains his recent shift to freelance work doing developer relations for various companies.

Anthony traces his interest in AI back to 2016 when AlphaGo — a deep learning neural network — defeated human champions at the board game Go, a feat considered a major breakthrough due to the game's enormous complexity. He also mentions Google's Deep Dream image generation as an early fascination. Despite having no coding background at the time, these developments inspired him to slowly learn programming, eventually landing in web development. He notes that unlike previous hype cycles, the current wave of AI is genuinely useful for everyday tasks.

00:05:44 - AI Concepts: Context Windows, Embeddings, and Agents

Nick and Anthony discuss the practical landscape of AI tools, with Nick sharing that he uses ChatGPT and GitHub Copilot daily. Anthony argues that hands-on usage is a valid form of AI knowledge, and notes that even model builders struggle to fully explain why their systems behave the way they do. The conversation turns to what "AI front ends" means — specifically, integrating AI capabilities into existing websites rather than having AI build sites from scratch.

The two unpack several foundational concepts: agents as chatbots that can execute multi-step tasks, embeddings as vector representations of text used for search and similarity, and context windows as the working memory limit of a language model. Anthony explains that rapidly expanding context windows — from a few thousand tokens to potentially a million — are reducing the need for fine-tuning and embeddings, since developers can increasingly just feed raw data directly into models. He also touches on prompt injection risks that arise from overloading context windows.

00:19:26 - Hardware, Speed, and the AI Arms Race

Nick marvels at the computational scale required to serve AI to billions of users, and Anthony frames it through the lens of Moore's Law, noting that today's capabilities would have seemed impossible not long ago. They discuss Microsoft and OpenAI's planned Stargate supercomputer, which Anthony suggests will be used to train GPT-6, with GPT-5 expected imminently.

Anthony highlights that token generation speed is set to improve dramatically — potentially tenfold within a year — thanks to advances in AI-specific chips, which will unlock entirely new use cases. The conversation shifts to practical developer workflows: Nick describes how GitHub Copilot excels at boilerplate code and pattern recognition, especially when relevant files are open for context, while Anthony prefers conversational debugging with ChatGPT or Claude, where he explains problems in natural language and receives targeted fixes rather than relying on autocomplete inference.

00:26:54 - Learning with AI and Normalizing Its Use at Work

Nick shares a concrete example of using ChatGPT as a learning tool: when he joined Open Sauced and needed to write Tailwind CSS, he would describe his intended styles in regular CSS and ask ChatGPT to translate them into Tailwind classes. Within two weeks he had internalized the syntax. He also describes using AI to instantly generate TypeScript types from JSON payloads, saving significant time on repetitive tasks.

Both hosts discuss the cultural tension around admitting AI use at work. Nick recounts an interview at Netlify where he accidentally had Copilot enabled, and his interviewers simply said they used it too — a moment that helped normalize the tool. Anthony emphasizes that the real skill is maintaining understanding of what the code does, not whether AI assisted in writing it. They agree that developers who refuse to adopt these tools risk falling behind those who leverage them for productivity, while also cautioning that AI output still requires human verification and judgment.

00:36:16 - Live Coding with LlamaIndex TS

Anthony walks Nick through setting up a LlamaIndex TypeScript project using the create-llama CLI. They configure it as a Next.js chat application powered by GPT-4 Turbo, loading a PDF containing transcripts from Nick's recent guest episodes. Anthony explains each configuration option — template choice, model selection, observability, vector database, and file parsing — before they generate the project and run the embedding step that converts the PDF into vector representations.

After starting the dev server, they interact with the generated chat interface, asking questions about themes from Nick's past streams. The bot accurately identifies topics like compassion, resilience, and building genuine relationships from the transcript data. Anthony explains how this approach works well for documentation sites and FAQ bots, noting that feeding the model structured question-and-answer pairs makes it particularly effective. He mentions the wave of "talk to your docs" chatbots from a year prior and suggests the technology has matured significantly since then.

00:58:28 - LangChain, Mistral, and Open Source AI Models

Nick asks about LangChain's role in the ecosystem, and Anthony confirms it serves a similar purpose to LlamaIndex as a higher-level abstraction across multiple AI models, though he finds some of its abstractions unnecessarily complex. They briefly compare the two frameworks before pivoting to Mistral AI, a French company that has open-sourced its models — a key differentiator from proprietary offerings like OpenAI and Anthropic's Claude.

They attempt to run a Mistral example from Edgio's repository but encounter persistent API authentication errors. Despite the technical difficulties, the conversation yields a valuable discussion about open-source AI's importance for trust, transparency, and long-term progress. Anthony acknowledges that open-source models currently lack the polish and resources of proprietary competitors but argues the long-term bet favors openness, especially as legal challenges around training data and copyright intensify.

01:19:16 - Wrapping Up and Future Plans

Nick and Anthony reflect on the stream's highlights, with Nick expressing appreciation for how Anthony clarified often-muddled AI terminology around training, embeddings, context windows, and model selection. Anthony shares details about his current freelance work doing developer relations for Dash cryptocurrency and writing AI-focused articles for a nonprofit called Funds.

The conversation briefly touches on Web3 and decentralized technologies before Nick notices the solar eclipse beginning outside his window. They agree to reconvene for a future stream where they will build a more complete AI-powered tool — ideally something Nick could actually use in his content creation workflow. The episode ends at 01:25:49 with plans to continue exploring practical AI integrations for front-end developers.

Transcript

00:00:20 - Nick Taylor

Hey, everybody. Welcome back to Nicky T Live. I'm your host, Nick Taylor, and today I am hanging with my good friend Anthony Campolo. How are you doing today, Anthony?

00:00:33 - Anthony Campolo

Hey, doing good. Happy to be back. I think this is my third time on.

00:00:37 - Nick Taylor

Yeah, yeah. For folks who might not know you, why don't you give an intro and maybe kind of how you got into tech a bit? And I'm going to fix the title right now because it's showing the title I had earlier this morning for now streaming on my own. So I'll sort that out. But anyways, go ahead. Cool.

00:00:58 - Anthony Campolo

Yeah. So my name is Anthony Campolo. I am a developer advocate. I have done a lot of open source and worked at companies doing things like GraphQL, blockchain, and most recently I was working at Edgio, which is kind of like a Cloudflare, like a Netlify dev-tooling, deployment-platform type thing. And you worked at Netlify, so we actually have a lot of shared experience there in terms of working at those two companies. But I have recently kind of pivoted and gone more freelance, kind of, you know, like people such as Jason Lengstorf, James Q. Quick, and others who are doing devrel as freelancers. You know, I think Jason chose to kind of do it. James was kind of let go. And then I think I basically was just thinking, I think I want to do it. So I kind of decided to quit Edgio at a certain point because they were pivoting. They were kind of pivoting to more like a security company, essentially, which wasn't really a thing that I felt very well suited for or really interested in either. But I have been doing just like some freelance work for Dash, which is a cryptocurrency.

00:02:14 - Anthony Campolo

But okay, the thing that we're here to talk about is AI, because AI is cool. No, AI is the thing that so many devs have been talking about, and you gave me a little bit of background in terms of where your knowledge is with it right now. But for me, it's actually the reason why I first learned to code in the first place, because I wanted to do AI. And this was because back in like 2016, there was a first kind of big AI media-explosion, hype-cycle type thing. Not really because there are tools that people were using, but because you had AlphaGo. Do you remember AlphaGo?

00:02:59 - Nick Taylor

I remember hearing about it. I'm not too familiar.

00:03:03 - Anthony Campolo

You know what the game Go is?

00:03:05 - Nick Taylor

Yeah, I know what Go is. I didn't know about Go until we had Cassidy Williams on the Dev.to stream a few years ago. So she gave me the whole lowdown. Super old game. Excuse me. Very popular and very easy to start, but it takes years to master, and...

00:03:26 - Anthony Campolo

Yeah.

00:03:26 - Nick Taylor

And then there's this supercomputer, right? That's what AlphaGo is. Is that right?

00:03:30 - Anthony Campolo

So AlphaGo was a deep-learning neural net, which is a lot of the tech that has kind of led up to what we have now with ChatGPT. Basically, it's just a giant network of kind of random interconnected nodes that they had play itself. Well, first they trained it on a bunch of human games, and then it eventually got to better-than-superhuman performance by having it play itself. And with Go, you'll hear people say this a lot, there's more move space than there are atoms in the universe. So unlike chess, which you can kind of brute-force, you need to have a more holistic understanding of the board space and how to compete and things like that. So that was considered a huge breakthrough at the time. And then there's other stuff as well. Google had this image-generating thing that was called Deep Dream, I think, and it would generate these images. And it was like, you know how when you see AI images now, the fingers will be messed up? It's weird digital artifacts, you know. But this was when they were just getting them to recognize shapes and objects, so it would create images that were psychedelic almost because they're kind of morphing in and out of different shapes.

00:04:51 - Anthony Campolo

And so, okay, all this stuff I thought was just super, super fascinating. And I had no idea how to code, though, because 10 years ago I had a music degree and I was just broke and didn't have a job, didn't know what to do. So I started to very, very slowly learn how to code, and it took years and years and years. And I eventually just kind of got into web dev and learned how to make websites because it was a lot simpler. But anyway, now we're like 10 years later and AI blows up again. And everyone's talking about AI now, but the cool thing now is finally we have useful stuff. We can actually use AI now to do things for us. This has not been the case for the 10 years I've been into AI. Just in the last year, you can actually use AI to do literally useful things. So some people still think it's just another AI hype cycle. Like, it's not really a big deal. Isn't it still kind of useless?

00:05:40 - Anthony Campolo

And it's like, no, things are different. Like, this is actually different.

00:05:44 - Nick Taylor

Okay. Okay, gotcha. Okay, cool, cool. Yeah, I didn't realize all that history there. But okay, so that's... yeah, thanks for the recap there. Like we were saying before the stream, I don't consider myself very well versed in AI at the moment. I'm definitely a power user of AI tools. I use ChatGPT every day. I have GitHub Copilot.

00:06:16 - Anthony Campolo

I would argue that you do know a lot about AI then, because actually interacting with the tools to use it is how you learn about it, even more so than necessarily building it in the first place.

00:06:30 - Nick Taylor

Okay, true, fair point. I guess, or I guess I should be more precise in what I meant, is I don't understand the deep internals necessarily of things. Also, I fixed your...

00:06:43 - Anthony Campolo

Neither do the people who build them. That's the thing. It's just like a thing that you kind of run on a bunch of data and do different optimization tricks on. But it's really hard for even the people who are building the models to fully say why they behave the way they do. It's really kind of interesting.

00:07:01 - Nick Taylor

Yeah, no, I do find it super interesting because we're going to talk about the topic of AI and front end. So I guess, when you say AI front ends, do you mean something AI-powered that's going to build me my website front end? Or do you mean something else? Or is it both, or a bunch of things?

00:07:30 - Anthony Campolo

That's a super good question, actually. So it's not something that... well, it's like the snake-eats-its-tail thing. You could kind of think of it either way at a certain point. But what I'm going to be showing more is how do you have a website that you integrate AI capabilities into? So you have a normal dashboard and then you want to bring in a chatbot to do something or to analyze some of your data, that would be the kind of thing. Now, once that gets sophisticated enough, you could build that into a website that then builds websites using AI if you wanted to. So that's why I say it's like the snake eating itself. But the thing I think is cool is right now most people are interacting with these things through something like ChatGPT, and developers have their coding tools with autocomplete and things like that, but that's really just a developer thing. Your average normal person is just interacting with ChatGPT pretty much. That's basically it. And that is kind of limiting how people can conceptualize just how broadly you can apply this technology.

00:08:41 - Anthony Campolo

Because really all you need is just an endpoint that you send text to, and you can get text back, and that will recreate all of what you're actually getting through ChatGPT, because that's what you're doing. You're sending messages to OpenAI's server. OpenAI takes those messages, feeds them to their ChatGPT bot, and then gives you an answer back. So there's no reason why you couldn't have that in your own website in any form you want.
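The "send text, get text back" loop Anthony describes can be sketched in a few lines. This is a hypothetical sketch assuming an OpenAI-style chat-completions endpoint; the model name, URL, and payload shape follow OpenAI's public API, but your own site could point the same function at any compatible service.

```typescript
// Minimal sketch of "send messages, get a reply back" against an
// OpenAI-style chat endpoint. Model name, endpoint URL, and payload
// shape are assumptions based on OpenAI's public chat-completions API.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build the JSON body a chat-completions endpoint expects.
function buildChatRequest(messages: ChatMessage[], model = "gpt-4-turbo") {
  return { model, messages };
}

// POST the conversation and pull the reply text out of the first choice.
async function chat(apiKey: string, messages: ChatMessage[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildChatRequest(messages)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Wire `chat()` up to any form or button on your own page and you have recreated the core of the ChatGPT front end, with whatever UI you like.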

00:09:07 - Nick Taylor

Yeah, totally. Yeah, no, that makes total sense because OpenAI, which ChatGPT runs off of, you know, you can get API keys there. We talked about this briefly before, but essentially I could literally make my own ChatGPT. Like, I could make the front end better if I wanted to, or more to my liking, and it would still be the same capabilities, most likely.

00:09:31 - Anthony Campolo

Or yes. Well, it's identical in the sense that you are still interacting with their services unless you want to use your own models. So this is the next point, that once you have your front end, then you will be in a position as a developer to switch out different models if they get better. Because right now these things take like a year for them to create and like $10 to $100 million. It's an extremely expensive endeavor to create these foundational models. So GPT-4 is OpenAI's state of the art, but that one's already about a year old, and Claude 3 just came out a month ago and it blows it away. Like, it's significantly better across many different dimensions. So you could wait around for ChatGPT to get better and then upgrade that, or you can just say, hey, I'm just going to switch over to Claude. And if you are using one of the higher-level frameworks that we're going to be talking about today, you don't even have to worry about the idiosyncrasies of Claude's API versus OpenAI's API, because they're giving you an interface that kind of abstracts across all of those different models.

00:10:41 - Anthony Campolo

So you just give it an API key and then have a chat endpoint and then they figure out kind of all the stuff under the hood.

00:10:49 - Nick Taylor

Okay, cool, cool, cool. So some kind of super oracle that allows you to hit all kinds of different services. Okay, yeah, this brings up an interesting point or discussion. But something like ChatGPT, I've had people mention it's something like this all-knowing oracle, which it's not necessarily, but it's like a Google search. It's not meant for something specific. It's just like, you know, how many grams is a pound, or also, what's the size of a neutron, and then this and this and this and that. Whereas I know there's this movement to kind of have more specialized... I think they're referred to as agents, from what I've heard, because I've listened to Swyx's Latent Space podcast.

00:11:47 - Anthony Campolo

Yeah. So an agent's not necessarily specialized. What it means is an agent is like a chatbot that can run multiple tasks and check whether they've been completed or not. So imagine if ChatGPT... you could tell it, hey, I want you to actually code a website in the sense that it would need to go to a service, create an account, then deploy something, create a GitHub repo, do all those kinds of things. And that's what an agent would do.

00:12:15 - Nick Taylor

Okay. Okay, gotcha. Okay. Yeah, so that I get then. But yeah, I guess there was also more like models being very specific, which can be a beneficial thing, you know, because having the all, air quotes, all-knowing oracle model might be good for day to day.

00:12:34 - Anthony Campolo

Let me talk about that, actually, because I don't think it makes sense to ask ChatGPT random factoid questions. It's actually more useful for generating text. So if you want to, I mean, it can write poetry, it can write stories, it can do all sorts of things. It can also debug code. You can have a conversation with ChatGPT to fix your coding errors. So I think it's a tool that you dialogue with. And so it's not just like looking up random facts. And the specialization thing, there's this thing called fine-tuning where you can kind of take a model, give it more data, and kind of change the model to make it more specialized. That's not really, I think, where things are going. There's more and more research going into the foundational models. So the foundational models are going to keep getting better and better and better. And what is improving now is the ability to feed them your own data. So you don't need to have a different trained model. You just have a really good foundational model that has a long enough context window that you could give it everything it needs to know to understand that domain.

00:13:46 - Anthony Campolo

So instead of like fine tuning a model, you just like feed it a book and then it reads the whole book and then can respond as if it knows all the knowledge in that book, whether it was trained on it or not.

00:13:56 - Nick Taylor

Okay. And in terms of that example of feeding a book or whatever, can we talk just to clarify some concepts? They're kind of murky in my head a bit, so I just want to make sure I understand them fully. So when we talk about embeddings, from what I've understood at a high level, it's really like, let me try and run some kind of, not necessarily code path, but certain things to get to my answer faster, so I don't have to pass as many tokens long term because there is a cost to these things. So is that kind of correct, or...

00:14:37 - Anthony Campolo

So embeddings are like a different type of fine-tuning. So fine-tuning is when you change the model itself. Embeddings are when you basically take new text and turn it into a language that the model understands. So it's usually used for things like search and recommendation and stuff like that, because the embeddings can kind of find similar topic spaces. So if you imagine ideas clustered around each other, you can have king and queen clustered around each other, and then if you have king minus the concept of man, then you get queen. So it's stuff like that.
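Anthony's king/queen intuition can be made concrete with toy vectors: similarity is cosine similarity, and analogies are plain vector arithmetic. The three-dimensional numbers below are invented for illustration; real embeddings have hundreds or thousands of dimensions produced by a model.

```typescript
// Toy illustration of embedding arithmetic. The vectors are made up
// for the example; real embedding vectors come from a trained model.

type Vec = number[];

const sub = (a: Vec, b: Vec): Vec => a.map((x, i) => x - b[i]);
const add = (a: Vec, b: Vec): Vec => a.map((x, i) => x + b[i]);
const dot = (a: Vec, b: Vec): number => a.reduce((s, x, i) => s + x * b[i], 0);

// Cosine similarity: 1 means pointing the same way, 0 means unrelated.
function cosine(a: Vec, b: Vec): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// In a well-trained embedding space, king - man + woman lands near queen.
const king = [0.9, 0.8, 0.1];
const man = [0.9, 0.1, 0.1];
const woman = [0.1, 0.1, 0.9];
const queen = [0.1, 0.8, 0.9];

const analogy = add(sub(king, man), woman); // very close to queen
```

Search and recommendation work the same way: embed every document once, embed the query, and rank documents by cosine similarity to the query vector.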

00:15:25 - Nick Taylor

Yeah, okay. Yeah.

00:15:30 - Anthony Campolo

I would say embeddings and fine-tuning, though, for the most part, aren't really things you need to worry about too much because most things are going toward vector databases or just literally being able to shove all the context into the LLM. Because the thing that's changed a lot in the last six months is that ChatGPT, when it first came out, you could give it a couple thousand words at most and it would just break down. Whereas now you can give it, I don't know, 50,000 or something, a really ridiculously large amount. And Claude is up to like a million. So at a certain point it kind of becomes limitless, and then it's just like the limiting factor is going to be how big is your codebase?

00:16:11 - Nick Taylor

Okay, that brings up another question. And I'm just throwing... I know you might not necessarily have the answer to all these, but the fact that Claude or ChatGPT has such a bigger amount that it can take in terms of... you mean words, which are basically tokens, right? That's what you're referring to.

00:16:33 - Anthony Campolo

So, yeah, so a word is usually, on average a word is four tokens.

00:16:37 - Nick Taylor

Okay, so I guess the question is, there's obviously a cost to tokens, but is there also the possibility of throwing too many things at it, so that your original message is just completely lost because you tried to add so much? Like, if you said, how far is it from Montreal to Toronto? That might give a very clear answer. But then after that you're like, okay, but on Sundays, and at this time, and if you're in this specific car. If you get so laser focused, is that good or bad? Because in general, and forget computers for a second, the more context I give to somebody about something, the clearer it'll make things. But I read something briefly the other week, honestly, I have the article somewhere, and they're saying sometimes just flooding it with too much stuff is not a good thing as well.

00:17:45 - Nick Taylor

Is, have you heard that?

00:17:47 - Anthony Campolo

So no, actually it's worse than that. Anthropic, the company behind Claude, just put out research that basically showed that by extending beyond the context window, giving it too much stuff, that's how you can eventually get it to behave outside of its safety guarantees and do things that they programmed it specifically not to do. And that's what's called prompt hacking, or jailbreaking, or prompt injection, I guess is the term. So okay, that's bad, and we should just define the context window first. If anyone doesn't know what we're talking about right now: when you have a conversation with ChatGPT or one of these things, there's a certain amount of memory it can keep. As you talk to it, you can reference previous parts of the conversation and it can handle that. But eventually it'll get to a point where it just gets confused, because there's a finite number of tokens that it can actually keep in its working memory, the transformer's context. And once it runs out, once it gets full, it just has to start bumping stuff off the end as it's adding in new stuff.

00:19:02 - Anthony Campolo

But the context windows have been extended to such insane degrees over the last couple months and will probably continue to extend out. And so that may become less and less of a problem. And it may be that you will just always have kind of a running memory of every conversation you've ever had with all of your chatbots, because this just would never run out of space.
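The "bumping stuff off the end" behavior can be sketched as a token budget applied to the message history, dropping the oldest messages first. This is a simplification: the four-characters-per-token estimate is only a rough rule of thumb, not a real tokenizer, and real chat stacks often pin the system prompt rather than letting it fall off.

```typescript
// Sketch of a context window: keep the most recent messages whose
// combined estimated token count fits the budget; older messages
// fall off the end first. The ~4 characters per token heuristic is
// an assumption, not how real tokenizers count.

type Msg = { role: string; content: string };

const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function fitToWindow(history: Msg[], maxTokens: number): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  // Walk backwards from the newest message toward the oldest.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > maxTokens) break; // everything older is dropped
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```

As context windows grow from thousands of tokens toward millions, this truncation step fires less and less often, which is exactly why Anthony expects fine-tuning and manual chunking to matter less over time.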

00:19:26 - Nick Taylor

And this is something that still kind of blows my mind. I know we can keep throwing hardware at stuff, but the amount, you know what I mean? Because I'm just thinking about the number of people on the planet and then the amount... I don't know. Like, obviously there's hardware to handle this, but it's like...

00:19:44 - Anthony Campolo

Well, it's also just like Moore's Law, you know. We've lived through how things have gotten faster and smaller. A whole lot of this stuff would have seemed insane, you know, not too long ago. But one last thing that's also going to be crazy is that right now these things generate answers not as fast as you want. Usually you can kind of sit there and watch them write out the things, like, you know, 30 seconds to a minute. That's going to improve by like 10x over the next year because that's one of the specific things that the AI chip companies have been working on, the actual speed of token output. So that's going to make a big difference as well in terms of the use cases you can enable.

00:20:27 - Nick Taylor

Okay, cool, cool, cool. Yeah, yeah, I think I was reading, is Microsoft and OpenAI talking about some million-GPU computer, I think, or...

00:20:39 - Anthony Campolo

Yeah, Stargate I think is what it's called.

00:20:40 - Nick Taylor

Something like that. Yeah, yeah, that's right. Stargate. Yeah, of course you gotta name it something sci-fi related, obviously.

00:20:47 - Anthony Campolo

So I think that is going to be used to train GPT-6 because GPT-5 should be coming out any day now. So if they're building that now, that's going to be for probably their next big-iteration model.

00:21:01 - Nick Taylor

Okay, cool, cool, cool. Yeah, no, and it's just like, I don't know, it is pretty wild because I basically had Copilot free for quite a long time because I worked in open source, and now my company pays for it. But there's a lot of these tools. ChatGPT isn't perfect, Copilot isn't perfect, but I still find them very useful because, you know, depending on what it is, Copilot's really good at a lot of boilerplate stuff that I can obviously type out myself, and it just knows how to do it. Or if I copy something... speaking of context, I remember Rizel Scarlett, who used to work at GitHub, was telling me this: if you want GitHub Copilot to have more context, at least in the context of Visual Studio Code, have relevant files open. And then when you copy-paste stuff, it knows, and when it does the first thing that you wanted it to do, then it knows to replicate it if you have...

00:22:07 - Nick Taylor

It can figure out your patterns and stuff. And I find that super useful. Sometimes it's completely wrong in terms of trying to autocomplete me. Yeah.

00:22:17 - Anthony Campolo

I think Copilot enables a lot more of the things that people worry about when it comes to this stuff because they're worried about people just generating a whole bunch of code that they don't really know what it's doing. What I like to do, and this takes longer and it's a slower process, but I enjoy actually talking to ChatGPT or Claude and kind of explaining what I'm trying to build, what's broken, and then giving it the specific code that I think it needs to know to have the context for that.

00:22:47 - Nick Taylor

Yeah.

00:22:47 - Anthony Campolo

And then it will, like, anytime I do that, if I do it clearly, it will almost always know exactly what is wrong and how to fix it, or it'll generate me the exact code that I want. Whereas when you're just autocompleting based on the context of what you've written before, how can it? It can't read your mind. So it can't really know what you're trying to do. It just tries to infer it from all this other stuff and general patterns of coding. But if you want to just say, hey, I want you to do this, instead of having to figure out how to get it to generate the thing you want so then you can hit tab...

00:23:23 - Nick Taylor

Yeah, no, exactly. Yeah, yeah. I found adding comments to say what I want to do, which is kind of like if you were chatting with...

00:23:31 - Anthony Campolo

So that's exactly it. So that's the hack of doing what I do within Copilot itself. You talk to it through writing code or through writing comments. Yeah, yeah.
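The comment-as-prompt pattern looks something like this in practice: the comment is the "message", and the body underneath is the kind of completion Copilot proposes for you to accept and verify. The function itself is invented here purely for illustration.

```typescript
// Prompting an autocomplete assistant through a comment: you write
// the description, it proposes the body. This example function is
// hypothetical; the point is the comment-driven workflow.

// Convert a phrase like "Hello World Example" to kebab-case: "hello-world-example"
function toKebabCase(input: string): string {
  return input.trim().toLowerCase().split(/\s+/).join("-");
}
```

The more precise the comment, the less the assistant has to infer from surrounding code, which is the same principle as writing a clear message in a chat interface.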

00:23:40 - Nick Taylor

But typically, at least my workflow right now is I let Copilot do a lot of the boilerplate stuff. And sometimes, like I said, when I copy-pasted something, it can figure out what I'm about to do. I think it's great for that stuff. But a lot of the other stuff, I just tend to have a conversation in ChatGPT, like you were saying. And it's not always right, but I still find it helpful because I think one interesting thing about prompt engineering is it forces you to explain yourself better, which I think is a nice side effect, actually. So it's like, okay, no, that's not what I meant. This is what I mean. And so on and so on. And again, even if it doesn't give me the right solution or a complete solution, I still find it very helpful because I'm kind of like rubber ducking in a sense a little bit. You know, like, hey, this time the

00:24:44 - Anthony Campolo

rubber duck can code back. That's the first time. Yeah, yeah, yeah, exactly. This time.

00:24:50 - Nick Taylor

No, totally. That's totally it. And so I still think it's super helpful because I'm like, okay, that's not exactly what I wanted. But I know enough about the language I'm coding in, so I'm like, this is a good starting point for me, and I know how to shape it after that. So that's still very useful to me. But yeah, like you were saying, I don't think, at least at this point, we should take things as like, this is 100% correct and just copy-paste it. Because one thing, which I'm sure you've noticed too, I can't speak for Claude because I haven't really used it yet, but ChatGPT is very confident in itself, you know. Yes, I can help you with that. Here is the exact answer. Then it's like, here you go. And then I'll write, like, sorry, I have a bit of a cold still. I'll write, like, no, that's not right at all. And then ChatGPT will be like, oh, I'm so sorry, my apologies. This will work. And it happens like three or four times, and I'm dying. I'm like, you're so polite, but you're so wrong.

00:26:03 - Anthony Campolo

Yeah, the things I found that it struggles with are things like regular expressions, things with weird edge cases that you didn't specify ahead of time. So usually, though, as you use it more, you start to get a sense of broadly what it can do and then where it will break. And so, as you say, you just need to have the presence of mind to actually check what it's doing. But for the most part, I find when you're working iteratively with it, you're going to know whether the code works or not because you're trying to fix something. So you did a thing and gave it some code, put some code in, you see, did it work. And as long as it got you to a new error message or got you closer to the finish, usually then you could just feed that back to it and say, hey, that thing you gave me was broken. Here's the error I got. And then it'll say, oh, I left this thing out, sorry, here's actually what I should do.

00:26:54 - Nick Taylor

Yeah, no, totally. The other thing I'll say is, for me, it was a great learning tool. So I hadn't used Tailwind professionally until working at Open Sauced. I was at Netlify before that, like you mentioned, and so I wasn't doing front end at that point. I was just working on the frameworks team. And then prior to that I was at Dev.to and we had custom CSS and a lot of bespoke stuff. And so, I mean, I wasn't stressed out about learning Tailwind, because I

00:27:29 - Anthony Campolo

think it is pretty intuitive, and I'm sure you'd figure it out.

00:27:33 - Nick Taylor

Yeah, but the thing was, there were certain cases where I was like, I know exactly how to write this in CSS, but I don't know, because there is a DSL to Tailwind, right? You know, some things are pretty obvious. m- plus some number, it's a margin. p-, it's a padding. A lot of those are obvious. But then I was trying to do stuff that was a little more complex, like using the ampersand selector, or I forget, there were some things that were a little more advanced that I wanted to write in CSS, but I was trying to leverage Tailwind as much as possible. Side note, Tailwind is great, but sometimes you do have to do some custom CSS. But my main point is I was like, I know how to do this in CSS. This is what I have in CSS. How do I write this as Tailwind CSS classes? And I did that for about two weeks when I first started at Open Sauced. I mean, I didn't tell anybody. Not that I was ashamed or anything.
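The translation Nick describes, a known CSS rule mapped onto Tailwind's utilities, looks roughly like this. The class names come from Tailwind's default spacing scale, where one unit is 0.25rem; the `.card` rule itself is a made-up example:

```html
<!-- Plain CSS:
     .card { margin: 1rem; padding: 0.5rem; display: flex; align-items: center; } -->
<!-- The same styling as Tailwind utilities (m-4 = 1rem, p-2 = 0.5rem): -->
<div class="m-4 p-2 flex items-center">Card content</div>
```

Asking a model "here's my CSS, what are the Tailwind classes?" is exactly this mapping, done for you.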

00:28:38 - Nick Taylor

I was just like, it's just another

00:28:39 - Anthony Campolo

tool for me. This is people using AI for work but not knowing whether to talk about it or not. Like, you know, I'm definitely pro normalizing it, and more people should share what actually works and what doesn't. That's a really good one where, oh yeah, finally, it actually works and it consistently works. That's a crazy superpower you just got.

00:29:02 - Nick Taylor

Yeah, yeah, yeah. No, I've mentioned this story a few times, and again, I just didn't mention it at work initially because I was super busy. I had to get shit done. But yeah, I just thought it was a really great way to learn something. And I think another thing that, and we're definitely going to dig into some code soon because I know we're already at 1:30 here, but I think another thing that's interesting is a few things I definitely use it for, like say I get a snippet of JSON. I can write a TypeScript type for that piece of JSON if it's a small thing. But if I was working on a prototype of something recently and I was given a code snippet of JSON for an example payload, and I was like, okay, I'm not typing out all the types for this, I just pasted that into ChatGPT. I just said, can you generate a TypeScript type for this piece of JSON? Boom. I got it like two seconds later, pasted that in.
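The JSON-to-types trick is easy to picture: paste an example payload into the chat, ask for a type, get back an interface. The payload and field names below are hypothetical, just to show the shape of the exchange:

```typescript
// Example payload pasted into the chat (hypothetical):
// { "id": 42, "name": "nickyt", "tags": ["ai", "frontend"], "active": true }

// The kind of TypeScript type a model hands back. You might then tweak it
// by hand, e.g. narrowing a string field to a union of literal values.
interface Payload {
  id: number;
  name: string;
  tags: string[];
  active: boolean;
}

// The original payload now type-checks against the generated interface:
const example: Payload = {
  id: 42,
  name: "nickyt",
  tags: ["ai", "frontend"],
  active: true,
};
```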

00:30:10 - Nick Taylor

I might have tweaked a few things to narrow some types, maybe, but those are the kinds of things where it's like, bing, bang, boom. I've saved myself like 10 minutes. Oh, Tony just joined over on Twitch, says, hey, Nicky, coming in on the AI discussion. Feels the same. Needs discussion. Too much [unclear]. Okay, cool, cool, cool. Also, thanks for joining, Tony. I don't think I've ever met Tony, but I met Tony, I think, through Jason Lengstorf's Discord.

00:30:43 - Anthony Campolo

Yeah.

00:30:45 - Nick Taylor

Yeah, yeah, I just basically see him on somebody's Twitch stream at some point. But yeah, stuff like that, these kinds of things. This kind of gets into the whole other topic of, you know, I'm really not scared of losing my job to AI. Maybe in 10 years that might be different. But right now I think, and I'm pretty sure I'm not alone in saying this, but I think it's the people that resist saying, like, I'm not using these tools. Those are the ones that are gonna actually... it's gonna be...

00:31:20 - Anthony Campolo

Sorry, it's hard because you gotta compete with people who are gonna be leveraging tools to be more productive. And so, you know, I definitely think it's going to be the case that we're going to continue to use these things more and more in our workflows. But there's never a point where you can just tell it to generate the whole thing you want because people don't know what they want. Even a really good PM can't create a whole app and a whole thing at the same time. They still have to go give it to users, and then if they have things they want changed, they need to go modify it, and at that point they have become a developer, even if they're just using the thing to constantly build their thing. At a certain point, you know, it's just like no-code to the nth degree.

00:32:07 - Nick Taylor

Yeah. And side note, in terms of normalizing stuff, I definitely want to jump into live coding, but I'm curious what it's like for people that interview now. You know, because I've done stuff where you get that scratch pad, or what the heck's it called? I didn't get hired at Facebook, but I made it all the way there. But basically the initial tests, it's like there's no IntelliSense or anything. I'm doing good, Bryn, by the way.

00:32:41 - Anthony Campolo

Thank you. They just give you like TextEdit, basically.

00:32:44 - Nick Taylor

Yeah, yeah, yeah, basically. So anyways, it went well, but I can understand why that's a little weird to a lot of people because it's not what you normally work in.

00:32:56 - Anthony Campolo

Right.

00:32:56 - Nick Taylor

Like, it just seems like a weird gatekeeping trick. Yeah, yeah, but I think it's...

00:33:02 - Anthony Campolo

They want people to be able to do the things that they can do because they feel like if they can't do what they know how to do, then they must be incompetent.

00:33:10 - Nick Taylor

Yeah. And the reality is, you're going to start a job somewhere, and it's like, yeah, okay, yeah, we use WebStorm too. Or we use VS Code. And the funny thing was when I interviewed at Netlify, when I got to the technical part, eventually it was with two of my old coworkers, Matt Kane and Eduardo Bouças.

00:33:34 - Anthony Campolo

And yes, I've had them on the pod.

00:33:36 - Nick Taylor

Yeah, yeah, no, they're good people. But it wasn't so much about getting something completely working. We talked through a lot of stuff. There was a bit of coding and stuff. And then at one point I was like, oh, sorry, I forgot I had Copilot on. And they're like, don't worry about it, we use it too. And I thought that was a really great answer because, you know, it was normalizing, like, okay, it's okay to use this tool. Like, oh hey, how's it going, Shariah? Thanks for joining. You know, I don't know, somebody was talking about this the other day. I don't know if it was Jason Lengstorf or... I forget where I was. I think it might have been Jason. But say somebody is new to tech and they're using these tools, and they build something, that's fine, but I think they still want you to be able to explain what you did, you know,

00:34:33 - Anthony Campolo

not just actually have the presence of mind and understand what's happening and that you are actually following what the problem is and how to solve it and, yeah, all that stuff.

00:34:44 - Nick Taylor

Yeah, yeah. Oh, yeah. Oh, you killed the tech project at Netlify. Cool stuff. Sorry, Tony's in the chat chatting. Yeah, it just wasn't a fit there at the time. Yeah, that's the thing too. Side note about interviewing, sometimes you can be awesome and it's just not a fit. But eight rounds, that's pretty, pretty long. I think I had the technical interview, and then I met with a couple managers and the recruiter. But anyways, I know you're pretty skilled, Tony, so too bad it didn't work out, I guess. But two departments. Oh wow. Geez. Maybe... I don't know. I interviewed back in fall 2021. That's when I started. And then there was a bit of delay because it was the first time

00:35:43 - Anthony Campolo

to start DevRel at Netlify.

00:35:47 - Nick Taylor

Yeah, yeah, yeah. Cool, cool. All right, we're getting in the weeds here. We could talk about the whole interviewing stuff too, but I do kind of want to talk about what we're going to dig into here. So like you were saying, we want to dig into some AI for front ends, and it's not necessarily... oh, same time frame. Oh, sorry to hear, man. Yeah, cool, cool, cool. Yeah, so pull up...

00:36:16 - Anthony Campolo

Yeah, go to LlamaIndex TS.

00:36:20 - Nick Taylor

Cool. And I'll just.

00:36:21 - Anthony Campolo

Or just like Google that. That's not the website.

00:36:26 - Nick Taylor

Yep, cool. That's the one.

00:36:28 - Anthony Campolo

And then, so we want the TS one. So this is going to take you to all their Python stuff.

00:36:34 - Nick Taylor

Okay. I'm assuming LlamaIndex TS means TypeScript.

00:36:38 - Anthony Campolo

TypeScript. There you go, buddy. So yeah, this is a pretty cool open source project, actually. If you go to the GitHub... GitHub first.

00:36:50 - Nick Taylor

So top right. Side note, but my old coworker Laurie Voss works at LlamaIndex now. I think they're running DevRel there.

00:37:01 - Anthony Campolo

Yep, that's right. Yeah.

00:37:04 - Nick Taylor

Sorry, Bryn in the chat has a question. Why did you choose to do all that, like the YouTube things and interviewing? Also, what interests you in the AI world?

00:37:14 - Anthony Campolo

...

00:37:15 - Nick Taylor

All the other things, talking about interviewing and stuff, that just typically happens on a live stream. I call them tangents. It happens. And also, what interests you in the AI world? I'm just interested in general tooling and how it can make me more productive, I think. I don't know about you, Anthony.

00:37:35 - Anthony Campolo

A couple of things. Definitely just pure productivity, that's cool. I really enjoy using it to learn about different topics because you can ask it and get answers back to all sorts of questions. You can do stuff like finding sources to try and read. You know, I got really into the 60th anniversary of the JFK assassination recently. And that's a huge, massive historical topic with millions of documents and thousands of books written about it. And so I just have these long conversations learning all these random facts about it and different theories and stuff like that. So I think it could be used as a really interesting tool for education and learning. And you can then put things in where you can have it track people's progress over time, and that's kind of a whole different type of thing. But I think that's pretty cool. And I really enjoy it for condensing large amounts of text and information into summaries and stuff.

00:38:42 - Anthony Campolo

So one thing is I've always had podcasts and like streams and stuff, but it's hard to always find the time to do things like create chapters and timestamps and like a really good transcript. And so all of those things now you can get like 99% of the way by using these tools. And so that's just really cool because it lets me just make my content more legit.

00:39:07 - Nick Taylor

Yeah, yeah, no, for sure. Awesome. Okay, so we got LlamaIndex TS here, which is, I'm assuming, a Node.js version of what we can work with for their stuff.

00:39:19 - Anthony Campolo

Yeah, so this is doing a couple things. It's kind of like things like LangChain, which are basically giving you a higher-level abstraction on top of different models so that you could use OpenAI's ChatGPT or switch over to Claude, like we were saying before. There's also these vector databases that allow you to load your own data and then query and interact with it that way. LlamaIndex is a middle ground between the two. It both handles multiple models and also handles a lot of the underlying vector-implementation stuff. This is, as far as I can tell, the simplest way to just spin up a really simple chat-interface front end and then talk to a PDF or anything like that. So I'm gonna give you a gist and I'll put it in the chat too. Okay,
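In code, the "talk to a PDF" flow that LlamaIndex TS handles ends up being just a few lines. This is a rough sketch based on the library's basic usage; the exact API surface varies by version, and it needs an OPENAI_API_KEY in the environment, so treat it as the shape of the thing rather than a drop-in script:

```typescript
import { Document, VectorStoreIndex } from "llamaindex";
import { readFileSync } from "node:fs";

async function main() {
  // Load raw text. For an actual PDF you'd use one of LlamaIndex's
  // readers (e.g. SimpleDirectoryReader) to extract text first.
  const text = readFileSync("transcripts.txt", "utf8");

  // Build an in-memory vector index. This is the embeddings step,
  // the same thing `npm run generate` does in the create-llama app.
  const index = await VectorStoreIndex.fromDocuments([
    new Document({ text }),
  ]);

  // Query the ingested content in natural language.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What were the major themes of the last few episodes?",
  });
  console.log(response.toString());
}

main();
```

That's the whole middle ground Anthony describes: the model abstraction and the vector plumbing are both behind `VectorStoreIndex`.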

00:40:12 - Nick Taylor

Let's put that there. You just sent that over Discord, or...

00:40:19 - Anthony Campolo

Okay, sorry, the one I sent to Discord I think may or may not have been the correct one.

00:40:24 - Nick Taylor

Okay, I'll just read it from the chat.

00:40:27 - Anthony Campolo

Yeah.

00:40:27 - Nick Taylor

Okay, cool. Let me open that up. Cool, let's zoom this in. Okay.

00:40:36 - Anthony Campolo

The first thing is you're going to download this PDF. I ran your last five streams, or six streams, through my show notes generator and then put those on a document. We're going to use this to ask questions about your previous shows. Okay, Anthony...

00:41:04 - Nick Taylor

I'm calling you Anthony AI.

00:41:06 - Anthony Campolo

Sweet. Anthony GPT.

00:41:09 - Nick Taylor

Yeah. Whoops. Anthony AI. Okay, cool. All right, let's curl this bad boy. Where's the...

00:41:17 - Anthony Campolo

Yeah, it just doesn't have those, I don't think.

00:41:20 - Nick Taylor

Okay. Yeah. You saw what I was trying to do.

00:41:23 - Anthony Campolo

Yeah. He clicked the copy. Yeah. Okay, cool. For the API key, I think you should put this in your zshrc, because the next couple of commands are going to work just as written if this is in your environment, or you can just export it for your current session. If you don't think you're going to be out of this session, I'm going...

00:41:49 - Nick Taylor

to just add it to my zshrc because I'm going to do this off screen.

00:41:55 - Anthony Campolo

Great.

00:41:56 - Nick Taylor

And can you send me a token? Not a token. API key. Real quick.

00:42:01 - Anthony Campolo

Did you hear it right before the gist?

00:42:05 - Nick Taylor

All right, cool. I'm just going to pop it in. Give me two seconds here. Logistics. Okay.

00:42:15 - Anthony Campolo

All right.

00:42:16 - Nick Taylor

Yeah, I see it. Cool. Okay, let's close that. All right, let's just run Zsh again. Okay, cool. All right, so there's this thing called...

00:42:38 - Anthony Campolo

Create-llama, which is like any of your create-a-thing CLIs, and if you do it without all these options, you can kind of walk through these one by one. But this is going to ensure we do exactly what I want to do. The only thing we're going to have to change, though, is files. I'll talk through what each of these is, though. The first thing is there's different templates. You could either do a Next.js app or you can do a Python-based backend app or you can do a regular Express server. We're going to do the Next.js one. It comes with a front end.

00:43:13 - Nick Taylor

Okay.

00:43:14 - Anthony Campolo

Is that your laundry?

00:43:15 - Nick Taylor

Cool. Oh, you can hear it.

00:43:20 - Anthony Campolo

Yeah, it is. Every so many of those have the same song.

00:43:24 - Nick Taylor

I'm laughing because usually I use Krisp, and usually it keeps out stuff like that. Maybe it's just...

00:43:33 - Anthony Campolo

It was very faint. Yeah, I have a good ear. I went to school for it.

00:43:38 - Nick Taylor

Yeah, it's true. Yeah. I'd be disappointed if not. You're... you're a musician, so. No, it's just kind of funny because, like, if fire trucks go by and stuff, you don't hear it

00:43:48 - Anthony Campolo

because if it's loud enough, it knows to hit the compressor. Yeah. Anyway, so observability is just if you want to send telemetry and stuff back, which you don't want to do, and then you give it your OpenAI key, so that'll be in your environment variables. And then you pick a model. You can do GPT-3.5 or 4 or 4 Turbo. We don't have to go into this too much, but basically 3.5 is a super-duper-cheap old model that honestly doesn't even really work that well. And it's only if you just want to make sure you have something working. You don't really want to use it. GPT-4 is newer and better and more expensive. But GPT-4 Turbo is allegedly still better but cheaper, which is interesting how that could work. But that's the one that we're going to use, especially because if we want to just ask it about you, Nicky T., it will have more data about you. So when I tried asking GPT-3.5, like, who is Nicky T., it made up some person. And then I asked GPT-4, and it was like, oh, they're a developer from Dev.to and Netlify.

00:45:04 - Anthony Campolo

And I'm like, wow. So yeah, we want to use the more recent one if possible.

00:45:09 - Nick Taylor

Okay. Yeah, yeah. Side note, but if you try to Google Nick Taylor, typically if you find somebody in Canada, it's a pro golfer. So I basically gave up on SEO, and my name's pretty, pretty common, I think.

00:45:27 - Anthony Campolo

Yeah.

00:45:27 - Nick Taylor

All right, cool. The only thing, I changed it.

00:45:30 - Anthony Campolo

Yeah.

00:45:31 - Nick Taylor

Oh, yeah. So good.

00:45:31 - Anthony Campolo

Yeah. So the files is where you give it either an individual file or a folder with files. And so I wrote it as if you're just in the home root of your computer running all these commands. So yeah, you already modified that. And then, yeah, use LlamaParse. That is if you want to use Llama's newer updated implementation of their parser. They have a cloud service now, so you need an API key. That makes things a little more complicated. But also, this thing is turning into a company, I'm guessing, so there's a cloud now. So that's what you do. And then vector DB is if you want a whole vector database. This is if we weren't just loading a single PDF, we wanted to load like a thousand or something like that. Or if you wanted to put in other types of media and things like that. And then tools is a LlamaIndex-specific thing if you want to use plugins and stuff. And then post-install action will just install the dependencies after we generate it. So I think if you run that guy, everything should work okay.
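For reference, the one-shot invocation Anthony prefers bundles all of those prompt answers into flags. Something in this shape; the flag names below are assumptions reconstructed from the walkthrough, so verify them against `npx create-llama@latest --help` before relying on them:

```shell
# One-shot scaffold: every interactive prompt answered up front.
# Flag names are assumptions; check --help for the real ones.
# Streaming chat template, Next.js front end, no telemetry, GPT-4 Turbo,
# a local PDF as the data source, no external vector DB, deps installed after.
npx create-llama@latest nickyt \
  --template streaming \
  --framework nextjs \
  --observability none \
  --model gpt-4-turbo-preview \
  --files ./transcripts.pdf \
  --no-llama-parse \
  --vector-db none \
  --post-install-action install
```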

00:46:37 - Nick Taylor

And while that's running, say yes. Let me just... there we go. Okay, so it's installing dependencies at this point. Quick question I had is, so we have vector DB none right now. But if I did specify it, that means I could omit files potentially, right?

00:46:58 - Anthony Campolo

You could... well, or you'd need to include...

00:47:01 - Nick Taylor

So right now we're passing a PDF and there's no vector DB. But if I wanted to have a vector DB, I don't necessarily need files. Or is the vector DB really to complement searching the files you pass in?

00:47:17 - Anthony Campolo

Yeah, so the vector DB is so you're not just loading in data directly from your own computer. So this is literally where if you want to give it a PDF, it's just off of our own machine. And so you could have as much stuff on your machine as you want. You could feed it stuff and it can generate all these vectors for you. Or you could use a vector database where you give it all the data and then it does that. There are different points where it makes sense to use these tools depending on how much data you have. So if you just want to query a small amount of documents, then it doesn't really make sense to use a vector DB. But if you have a whole bunch of stuff...

00:47:57 - Nick Taylor

Okay, so it couldn't find that. Hold on a sec, I'll go fix. I don't know how to do this quicker, but you know what, let me just copy.

00:48:06 - Anthony Campolo

This is why you need Warp. Yeah, let's just step through it. Just do the original command. It's not a big deal. This is the thing that tripped me up too the first time, getting the file in the right place.

00:48:19 - Nick Taylor

Oh, and just put it in my home directory then.

00:48:22 - Anthony Campolo

No, I'm saying just run npx create-llama@latest nickyt.

00:48:28 - Nick Taylor

Okay.

00:48:29 - Anthony Campolo

Yeah, we'll do the old way now that I explained that. People have this god command if they want. Let's actually walk through it as well. That would probably be good. So just do it just like... yeah, exactly.

00:48:42 - Nick Taylor

Still a bit of brain fog from the cold. Okay. Oh, it's because it exists already.

00:48:48 - Anthony Campolo

Yeah. Nuke everything first.

00:48:50 - Nick Taylor

Yeah. Okay. Which.

00:48:56 - Anthony Campolo

Yeah, so this is going to be chat. The first one.

00:48:59 - Nick Taylor

Okay. That's what streaming is. Okay.

00:49:02 - Anthony Campolo

Yeah.

00:49:02 - Nick Taylor

Next.js, no observability. OpenAI, leave blank to skip.

00:49:09 - Anthony Campolo

Right.

00:49:09 - Nick Taylor

Because it'll.

00:49:10 - Anthony Campolo

Well, you're going to want to put that in. You should pull this off screen and do that.

00:49:14 - Nick Taylor

All right, all right, here we go, here we go. Oh yeah, that's the... yeah, that's my background. Side note, that's a cool app from Sindre Sorhus, who's written I don't know how many packages for npm at this point, or for the JavaScript ecosystem, but it's kind of cool. You can make it interactive. Okay. And here... okay, I got to continue this off screen for a sec because it still shows the API key. So I'm going to choose GPT-4 Turbo Preview. Use local files, right?

00:49:59 - Anthony Campolo

It'll probably let you actually select it. It will come up with a file picker. At least it did when I was doing this.

00:50:04 - Nick Taylor

Oh yeah, it did. Yeah. Yeah. Okay, cool.

00:50:07 - Anthony Campolo

So that kind of ensures you can't mess this step up if you do it this way.

00:50:12 - Nick Taylor

Yeah. Okay, cool.

00:50:13 - Anthony Campolo

I just hate doing these walkthroughs. I like being able to run a single command and say generate the thing exactly like this.

00:50:20 - Nick Taylor

Yeah. So we're not using LlamaParse. No vector database. Okay. If so, otherwise just press Enter. Start in VS Code. Yes. Okay, okay. Failed to start VS Code, but it did generate the thing. Okay, so let's clear. Let me bring back over... this could be because I'm in VS Code Insiders. This is something I guess...

00:50:51 - Anthony Campolo

Yeah.

00:50:52 - Nick Taylor

I ran into this with Million Lint the other day. Okay, so it did create Nicky T. So let's go to code Nicky T. Whoops. T-R. Don't save. Cool. All right, now we got the project here. It's asking me if I want to open in a container.

00:51:14 - Anthony Campolo

No, we don't need to do any of the Docker stuff. The only thing you got to do is run npm run generate. Now what this does is the step that is like your vector DB where it's going to read your...

00:51:35 - Nick Taylor

It didn't. I think I forgot to install the dependencies. Two seconds.

00:51:38 - Anthony Campolo

Oh, right. Yeah. Because that's what the post-install action was for.

00:51:42 - Nick Taylor

I'm guessing it's using ts-node for the local development, or maybe... okay. Generated storage context. No valid...

00:51:54 - Anthony Campolo

So what's happening here is it's reading the PDF and then generating the... so this is like the embeddings, essentially. So if you can hear, this is all of your stuff that actually turns... so actually go to, instead of this one, one of the other files. Open your file picker back up again.

00:52:16 - Nick Taylor

Actually, there's three vector indexes. Okay.

00:52:21 - Anthony Campolo

Yeah, so those are the embeddings right there.

00:52:24 - Nick Taylor

Okay. Okay, so. And this is what should make it more efficient to find things.

00:52:31 - Anthony Campolo

Yeah, this is essentially what turns your PDF into a language that the computer can understand and can kind of now use. And it's like, you know, in The Matrix they plug you in, load you up with data, like that.
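The "language the computer can understand" is literally arrays of numbers, and retrieval works by comparing them. A toy sketch of the idea; the three-dimensional vectors here are made up for illustration, whereas real embeddings come from a model and have hundreds or thousands of dimensions:

```typescript
// Cosine similarity: how closely two embedding vectors point the same way.
// Retrieval ranks stored chunks by their similarity to the query's embedding.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Made-up "embeddings" for a query and two stored chunks:
const query = [0.9, 0.1, 0.0];
const chunkAboutAI = [0.8, 0.2, 0.1];
const chunkAboutCooking = [0.0, 0.1, 0.9];

// The AI chunk scores far higher, so it's what gets pulled into the
// model's context to answer the question.
console.log(cosineSimilarity(query, chunkAboutAI) >
            cosineSimilarity(query, chunkAboutCooking)); // true
```

That comparison step is what the vector indexes generated by `npm run generate` make fast.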

00:52:44 - Nick Taylor

Yeah, yeah. Okay, cool, okay, cool. All right, so we've got the project. So it's Next.js. Should I start the dev server, or what should we do next?

00:52:52 - Anthony Campolo

Yes, just npm run dev should be all you need to do.

00:52:57 - Nick Taylor

Okay, let's move this over. Boom. Hello, localhost:3000. It's built a chat.

00:53:11 - Anthony Campolo

Think about what question you want to ask. What insight would you like to glean from your previous episodes?

00:53:17 - Nick Taylor

Okay, how do I... why can't I type in there?

00:53:24 - Anthony Campolo

Oh, sorry. Sometimes... yep. Yeah, hit Tab. Yeah.

00:53:31 - Nick Taylor

Okay, so what is Nick working on? So for context, since I work in open source, I've been streaming my work. I'm working on Polar Quiz. Okay. Which episode?

00:53:46 - Anthony Campolo

I just did the last six guest episodes.

00:53:49 - Nick Taylor

Oh, guest episodes. Okay, cool.

00:53:50 - Anthony Campolo

Yeah, we could do that actually next time. Next time we'll feed you all your coworkers. That's actually... that'll probably be more interesting for you.

00:53:58 - Nick Taylor

Okay. No, I know. It's cool. I just. Just glad you mentioned that. What is polypaint?

00:54:09 - Anthony Campolo

Was that one of your last six episodes?

00:54:11 - Nick Taylor

Should be. Yeah, so I did a live stream. Okay. Now just for context, again, no pun intended, is this only basing this off of the PDF? It's not going out?

00:54:27 - Anthony Campolo

No, this is just knowledge. It just knows this answer. So do this. Ask, what were some major themes of the last few episodes?

00:54:35 - Nick Taylor

Okay, okay. Yeah, that was the one. Okay. The second one was with Brian Morrison, who's at Clerk now. Compassion and resilience, that was with April Wenzel. Building genuine relationships, that might have been part of that as well. Valuable content. Okay, yeah.

00:55:10 - Anthony Campolo

So you see, some of these are direct episodes that people did. Some of them are topics that were discussed throughout the episodes. Some can kind of cross over between episodes. But this is like the idea space that you're kind of existing in. And to me, I read this like, yeah, that sounds like Nicky T. stream to me.

00:55:28 - Nick Taylor

Okay. Excuse me. Okay, so this is pretty neat out of the box because, obviously, it's a template. For my branded site, this looks terrible because it doesn't match. But again, it's a template, right? But it's a good starting point. And I guess this is a great... would this be a great way to generate a docs-site Q&A kind of thing?

00:56:02 - Anthony Campolo

Yeah, exactly. So you could feed it all of your docs and all... one thing that people find really makes these things good is if you have a frequently asked questions list, like the questions and the answers, and you feed it that, then it can kind of figure stuff out really well as people are asking things. So yeah, this is the type of thing that you would use to basically have people ask questions about a custom, specific set of terms or code.

00:56:29 - Nick Taylor

Okay. Yeah. Okay. Cool, cool, cool. No, that's pretty cool. Let me see.

00:56:37 - Anthony Campolo

This is one of the reasons why a year ago everyone did that. Everyone put out a chat box, like talk to your docs, like Astro did, and all these other projects did it, and some worked better than others. I think everyone got really excited to build it and then probably built it maybe six months too soon. Right now, these things work really, really well. At the time, it was a little bit harder.

00:57:02 - Nick Taylor

Okay. I asked this question because I don't think it knows really much about Million Lint yet. Maybe.

00:57:10 - Anthony Campolo

Yeah, totally.

00:57:12 - Nick Taylor

Okay, so this is basically I did the stream with one of their community engineers, Toby.

00:57:17 - Anthony Campolo

Toby. Yeah.

00:57:21 - Nick Taylor

And I saw it took a second to answer, so I am pretty sure it wasn't getting an answer from the Internet. And it even says appears to be, so I'm guessing it's taking that out of the context from the streams. Okay, that's cool. So this is pretty neat. And then, obviously, you could tie in a vector database. So I don't think we're gonna have time to pop in a vector database today. But this is one particular example. I know I did a stream with Lizzie Siegle. She works at Twilio. And we did one with LangChain. And from what I understood about LangChain, again, because I haven't really worked with it that much aside from that stream, it's really kind of like a buffet: pick what you want to use. It's kind of like an intermediary, whether you want to use OpenAI or Mistral or whatever. Is that kind of what LangChain is?

00:58:28 - Anthony Campolo

Yeah, exactly. Yeah. And they also have other... they have an agent implementation. We talked about agents way back at the beginning. And so they also build other higher-level things like that, and they have like a million examples. And LangChain has a massive, massive community.

00:58:46 - Nick Taylor

Okay.

00:58:46 - Anthony Campolo

I find their abstractions can be a little bit weirder. I have a buddy, Monarch, who built a version of LangChain that's way simpler. It just gets rid of all the LangChain higher-level abstractions. It's just like, here's a function. Give it text.

00:59:02 - Nick Taylor

Yeah, yeah, yeah, gotcha.

00:59:04 - Anthony Campolo

Yeah, but if you've already done LangChain, you already have a good idea of kind of some of this stuff. What was that episode? Do you have a link to that?

00:59:11 - Nick Taylor

Yeah, let me find it. I know we had some technical difficulties. We ended up getting something done at the end. But let me find it. I think it's Lizzie Siegle and Nicky T Live. Oh yeah, here it is. AI Thanksgiving. Let me go to YouTube though, because that...

00:59:40 - Anthony Campolo

Is it on your videos, not live? Yeah, there it is.

00:59:43 - Nick Taylor

Yeah, it should be there if you... yeah, you can check it out. We ran into a few things and it wasn't...

00:59:49 - Anthony Campolo

It was more... I watched some of the stream.

00:59:52 - Nick Taylor

Now I remember it was a struggle. I wasn't stressed out. But it was just kind of funny because dealing with Python and like dependencies weren't installing in certain cases or, or the wrong ones. It wasn't finding the right things. So. But anyways.

01:00:06 - Anthony Campolo

Yes. So LangChain and LlamaIndex both have Python versions and JavaScript versions. So I know nothing about the Python version, because I'm not a Python developer. But all I know is that, yeah, I feel like using Python is hard, and every time I see people trying to actually spin up Python, it's just a total nightmare. It's all virtual-environment stuff and no one ever knows what to do. So I think actually having really nice, simple JavaScript tutorials and ways to work with these things and spin them up is huge. And I think we'll get a lot of Python developers to start doing more JavaScript if it stays that way.

01:00:40 - Nick Taylor

Okay, cool, cool. Sorry. Still got that cough. Sorry again. This was definitely a cool demo here that we spun up pretty quickly, aside from the minor issue with the file path, which was on me. I've heard of Mistral, but I've never used it. I don't know if you want to touch on that, or if you had any demos planned for that, or if you just want to talk about it.

01:01:16 - Anthony Campolo

So we could spin that one up because that one actually is... I basically got the example. Actually, let's not. So I did this for...

01:01:31 - Nick Taylor

Oh, you broke up for a sec.

01:01:35 - Anthony Campolo

Sorry. I just realized I left... I was typing.

01:01:39 - Nick Taylor

Okay.

01:01:41 - Anthony Campolo

The tab I was streaming from Twitch. Hold on one second. Yeah, so just go to Mistral's homepage.

01:01:53 - Nick Taylor

Okay. It's just Mistral AI, I'm assuming. Yeah, actually I already have a link to it. That can't be right. Mistral AI. Okay, there we go.

01:02:12 - Anthony Campolo

Yeah. So on their homepage, they have actually open-sourced their models. So this is one of the big differences between them and things like OpenAI or Claude: the actual model behind ChatGPT, no one can run that on their own computer, basically. It's all proprietary. No one even knows what it is, necessarily. And there have been people who've leaked information on podcasts and such about what the actual implementation is. But I don't know. I think this is a huge problem, and it's going to make it hard for people to trust these models. It's going to hold back progress, and it's going to lead to probably more regulatory concerns. So I just think the more we can encourage open-source AI at every level of the stack, the better. So that is why Mistral is kind of so important.

01:03:10 - Nick Taylor

I guess a couple things before we look into it a bit. But what's the business play then? Still dying here.

01:03:21 - Anthony Campolo

So they will still give you the ability to use... it's the same thing as it always is. We'll still run it for you. Even though it's open source, you're still most likely going to want to have a bunch of servers in-house to serve all these requests to a chatbot. So just because we want people to be able to experiment on it and have the ability to run it themselves, the average dev is still going to want to outsource the heavy lifting. So whether that business play works or not, I think that is kind of what they're planning on right now. I'm sure there's other things in the pipeline. But I think if there are going to be just a couple foundational models, then it'll be very hard for them to break through. If we end up having dozens of these and they all find different niches, then we'll see what happens. But right now they're definitely considered one of the main competitors.

01:04:18 - Nick Taylor

Yeah. Because the other thing I wonder about is, I've read about this and it makes sense. The heavy hitters are the ones with the money that can pay for all this computing power, right? Microsoft, like I said before, they're gonna buy a million-GPU computer. No one can compete with that unless you're like Meta or Amazon or Google. Who else can compete with that? They can't. I mean, anyways, that's a whole other debate, I guess. But okay, so let's talk about Mistral. So open source, already it's got a plus one in my books. I like how they've mixed the Franglais here. Talk to Le Chat.

01:05:10 - Anthony Campolo

So I just popped a link in here. This is the Mistral example we had for Edgio. You could probably spin this up. You need to clone down the entire repo. I think these instructions are actually wrong now. Whoops.

01:05:27 - Nick Taylor

Let's go to Whoops. Okay. Clone the whole Edgio examples.

01:05:34 - Anthony Campolo

Yeah. And then this is like examples of things, examples of the examples. So once you've done that, I'll kind of show you where the project is and then...

01:05:44 - Nick Taylor

Cool, cool, cool.

01:05:45 - Anthony Campolo

Work from inside there.

01:05:47 - Nick Taylor

Yeah. Okay. All right, let's go. Thank you. One gigabit Internet. All right.

01:06:01 - Anthony Campolo

Massive repo. This happens to me too.

01:06:05 - Nick Taylor

Okay, so let's open it. Actually, Edgio examples.

01:06:10 - Anthony Campolo

Make sure you're going to go to examples/v7-ai/mistral.

01:06:19 - Nick Taylor

V7 dash AI Mistral. Okay. Should I open that as its own thing?

01:06:25 - Anthony Campolo

Yeah, you just want to be in the Mistral one. Basically get rid of all that other stuff. Yeah.

01:06:31 - Nick Taylor

Okay, so copy. Let's just do this.

01:06:42 - Anthony Campolo

I need to give you.

01:06:43 - Nick Taylor

Okay, I'll just install the dependencies. Cool, cool, cool. We're like in the matrix here, hacking.

01:06:58 - Anthony Campolo

And then create a .env with the API key I just gave you.

01:07:04 - Nick Taylor

Okay. The good thing is I have a cloaking thing that will hide it. So let me just take it off screen for a sec, though. Okay. Toggle. Okay. Just drop that in Discord, right? Cool. Mistral, is it a French company, or are they just playing... yes?

01:07:36 - Anthony Campolo

Yeah, I'm pretty sure.

01:07:37 - Nick Taylor

Yeah. Yeah, because it's the Franglais like that. That's not a French Canadian thing, that's a very French-from-France kind of thing. Le parking and... well, we say that stuff in Quebec too. But anyways, let me just double-check again before...

01:08:01 - Anthony Campolo

env.

01:08:02 - Nick Taylor

Okay, cool. Toggle. Yeah. Okay, cool. Wicked. I'll give you a panic here. Oh no. The environment file's open, but it's cloaked. All right, cool. All right, so what should we get up to next? Install the dependencies?

01:08:22 - Anthony Campolo

Yeah, so then let's just run it. I think it was npm run dev, or no, sorry, npm start.

01:08:32 - Nick Taylor

Okay, cool. Let's just bring this over. Okay. Web stream example. Do I...

01:08:46 - Anthony Campolo

So I think it's the way I have already set this up, is that you still use it with OpenAI. So you give your OpenAI key, and then this way you wouldn't have to have your own OpenAI key hard-coded in.

01:09:03 - Nick Taylor

Okay, good.

01:09:06 - Anthony Campolo

Let's try... we can change that. First of all, if you want to just reposition your screen.

01:09:15 - Nick Taylor

Give me a sec. I only have to enter it once I guess for the session or anytime I ask a question.

01:09:24 - Anthony Campolo

I think it just stays there any time. This one's not really a super great example. The only reason why I really put this one is because it doesn't use Next and it's a little more agnostic to what service to use, I think. So I'm not sure if you can just give it other API keys or not, or if you'd have to actually change the code. I'm poking around in it right now.

01:09:47 - Nick Taylor

Okay. Actually it's telling me the Mistral API key isn't good, or no, it's saying Mistral API error. Not necessarily the key's bad, but... well, it says 401.

01:10:00 - Anthony Campolo

So it's not... you wouldn't put your Mistral API key there. You'd put your OpenAI API key there.

01:10:05 - Nick Taylor

Yeah, no, I did try again, but... two seconds. OpenAI key.

01:10:19 - Anthony Campolo

Yeah,

01:10:23 - Nick Taylor

Okay. Yeah, it still gives the Mistral error. Let's see here. Let me just look at the network panel. Let me do it again. Let me see if I can just get it to submit. Okay, cool. I'm just going to delete the key again and then let's just see what the network... okay. Mistral Tiny.

01:11:06 - Anthony Campolo

Let's do this instead. Yeah, I'm going to send you a link. Let's just do one of these simple Node scripts instead.

01:11:15 - Nick Taylor

Okay, cool.

01:11:21 - Anthony Campolo

This is another thing that I like about most of this stuff, that you don't even need a front end. All these things you can just do with Node scripts if you want. Just run commands and then kind of feed text from one place to the command, and you can do all sorts of stuff that way. This will be good.

01:11:37 - Nick Taylor

Cool. Did you send it? I didn't see it in Slack.

01:11:40 - Anthony Campolo

Oh, sorry.

01:11:45 - Nick Taylor

Okay, cool.

01:11:45 - Anthony Campolo

Yeah, yeah. So let's just do the first chat completion. Just create basically like a Node.js... just create a JS file and run it with a Node command.
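The "first chat completion" script being set up here is just a plain `.mjs` file run with a Node command. A minimal sketch of it might look like the following; the endpoint and `mistral-tiny` model name match what's used in the demo, but treat the exact request shape as an assumption rather than a definitive client. The network call only fires if `MISTRAL_API_KEY` is actually set in the environment.

```javascript
// Hedged sketch of a first chat completion as a bare Node script
// (run with: node chat.mjs). Uses Node 18+'s built-in fetch.
async function chatCompletion(apiKey, prompt) {
  const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "mistral-tiny",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  // A 401 here is exactly the "unauthorized" failure hit on stream.
  if (!res.ok) throw new Error(`Mistral API error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Only hit the network if a key is present in the environment.
if (process.env.MISTRAL_API_KEY) {
  chatCompletion(process.env.MISTRAL_API_KEY, "What is the best French cheese?")
    .then((answer) => console.log(answer))
    .catch((err) => console.error(err.message));
}
```

This is the whole appeal of the no-frontend approach mentioned later: one function, one command, and you can pipe text into it from anywhere.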

01:11:59 - Nick Taylor

Yep. Cool. Why is this not done? Boom. Trash. Okay, let's go down one. Oh yeah, I meant to dip in this.

01:12:14 - Anthony Campolo

Yeah, yeah, yeah. That was some of the stuff I was going to show you when we first streamed this, when we first scheduled this, but it's been a while since I looked at these and I'm already kind of over them, so.

01:12:28 - Nick Taylor

Yeah, cool. See? Okay, I was confused for a second. I was like, that's not JavaScript. Okay, cool, let's do this. MJS. Okay, cool. Let's copy this. Cool. And I'm using... okay, it's not going to pick that up because I don't have...

01:12:59 - Anthony Campolo

Do you know how to use environment variables with Node 20?

01:13:02 - Nick Taylor

Yeah, I know it can load environment variables natively now. I can't remember the syntax, but I'm pretty sure I'm on Node 20.

01:13:10 - Anthony Campolo

Yeah, give me one second. I got a little snippet because I always forget it.

01:13:14 - Nick Taylor

Also, I always used to have a thing that said if dev, you know, use dot env, but...

01:13:21 - Anthony Campolo

Yeah, okay, so dash dash env-file equals dot env: --env-file=.env.

01:13:30 - Nick Taylor

Okay. The node and the...

01:13:34 - Anthony Campolo

You want to put it after node, though?

01:13:36 - Nick Taylor

Oh, yeah, it's fine. Yeah, it's a flag to Node. Oh, it's because I'm not in the root of Mistral. Yeah. Okay, okay, hold on. Give me one second.
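The flag they land on here, `node --env-file=.env script.mjs`, was added in Node 20.6 and loads KEY=VALUE lines from the file into `process.env` before the script runs, with no dotenv package needed. A simplified line parser equivalent to what the flag does (ignoring quoting rules, which real .env loaders also handle) might look like this; `parseEnvLine` is a hypothetical helper for illustration.

```javascript
// Simplified version of what `node --env-file=.env` does per line:
// skip blanks and comments, split on the FIRST "=" only.
function parseEnvLine(line) {
  const trimmed = line.trim();
  if (!trimmed || trimmed.startsWith("#")) return null; // blank or comment
  const eq = trimmed.indexOf("=");
  if (eq === -1) return null; // not a KEY=VALUE line
  return [trimmed.slice(0, eq), trimmed.slice(eq + 1)];
}

// In the script itself, the key is then just read from process.env:
const apiKey = process.env.MISTRAL_API_KEY ?? "";
if (!apiKey) {
  console.error("MISTRAL_API_KEY is not set; did you pass --env-file=.env?");
}
```

Note the flag goes after `node` and before the script name, which is the ordering question that comes up in the exchange above.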

01:13:58 - Anthony Campolo

Nuke your whole thing and start over.

01:14:00 - Nick Taylor

Yeah, yeah, no, I'm just gonna move it, though. Give me two seconds here. Let's go to dev streams. Anthony AI. There you are. And where's my Mistral? Oh, I didn't even save it yet. Save. It's under Anthony AI. Now go up. Boom. All right, let's just drag this in. Whoops. Should be able to show in Finder. Cool. And let's get you out of there, buddy, and let's get you here. All right, cool. Now we're good. Okay, let's try this again. Okay. Better.

01:15:05 - Anthony Campolo

This is going to be some ESM thing.

01:15:07 - Nick Taylor

Okay. We don't have a package.

01:15:09 - Anthony Campolo

Yeah.

01:15:10 - Nick Taylor

Okay, so... okay, cool. All right, let's try this again. Yeah. No, there we go. Okay, so still getting the API error. I'm going to double-check the API key then.

01:15:47 - Anthony Campolo

It might be because my API key is old.

01:15:50 - Nick Taylor

Oh, yeah, yeah, it says unauthorized, so.
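The 401 they're chasing here means the request did reach the API but the key itself was rejected, which turned out to be exactly right: the key was old. A small guard that translates status codes into actionable messages, instead of a generic "Mistral API error," makes this kind of failure obvious immediately. `checkResponse` is a hypothetical helper, not part of any SDK.

```javascript
// Map common HTTP statuses from a chat API to actionable messages.
// Returns null when the status is a success.
function checkResponse(status) {
  if (status === 401) {
    return "Unauthorized: the API key is missing, expired, or revoked.";
  }
  if (status === 429) {
    return "Rate limited: slow down or check your plan's quota.";
  }
  if (status >= 500) {
    return `Server error (${status}): the provider is having trouble; retry later.`;
  }
  return status >= 200 && status < 300 ? null : `Unexpected status ${status}.`;
}
```

Wired into a fetch call as `const problem = checkResponse(res.status); if (problem) throw new Error(problem);`, the stream's "Mistral API error" would instead have read "Unauthorized: the API key is missing, expired, or revoked."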

01:15:53 - Anthony Campolo

Okay, let me see. Let me go back in and log in. Let's see. That makes sense. I think I can create another one real quick. Yeah, here we go.

01:16:05 - Nick Taylor

Okay.

01:16:07 - Anthony Campolo

I can. I think I can create another one real quick. Yeah, here we go.

01:16:15 - Nick Taylor

I'll take this commercial break to blow my nose. Okay.

01:16:23 - Anthony Campolo

This will expire in two weeks.

01:16:25 - Nick Taylor

Glad I got this cold before I go to Miami. That's all I can say.

01:16:31 - Anthony Campolo

Okay, here we go. Just sent it to you.

01:16:34 - Nick Taylor

Cool. All right, let's try this again. Okay, let me pop this out real quick.

01:16:53 - Anthony Campolo

Oh, it seems to be saying I was using tokens, so maybe it was responding, but it was getting swallowed by an error or something.

01:17:02 - Nick Taylor

Weird. All right, new one's there. Save. Cool. Let's pop that back, see what we got. Huh. Same thing. Okay. Clearly maybe an API key thing, but weird.

01:17:34 - Anthony Campolo

Yeah, I'm not sure. This is not okay.

01:17:37 - Nick Taylor

Yeah, well, we won't stress out about that. Yeah, it's okay. It's live streaming. Things don't always go as planned. But essentially, aside from Mistral being open source, I guess what differentiates it maybe from, say, OpenAI or Claude?

01:18:03 - Anthony Campolo

That is the big question, I would say. Not much, unfortunately. Okay, so for people who want to make this pitch for open source and why it's so valuable, it's going to be an uphill battle, let me tell you. They don't have anywhere near the resources that OpenAI does. So they don't have all these other fancy extra bells and whistles, like custom GPTs and sharing links to chats and stuff like that. But the long-term play is kind of, if you believe in open source, that eventually the open-source models are going to be the best ones. So I think that's kind of the long-term play, especially if you think about copyright and stuff like that, because the legal challenges of creating these things are going to get more and more complicated. So yeah, I don't know, we'll kind of see. Right now I think they're still a fairly young, early company, so they probably have a lot of plans and features they plan on building out. But right now it's basically just the chat stuff, embeddings, things like that.

01:19:16 - Nick Taylor

Okay, cool, cool, cool. Yeah, no, it's been fun hanging here, man. I'm definitely down. We talked about it briefly before the stream, but I'd definitely be down to dig more into some of this stuff. Like you were saying, kind of combining a few things to build out something a little more full-fledged. But I thought that was... sorry, man, this cold's got to go. Yeah, no, honestly, it was pretty cool how LlamaIndex pulled that in, and maybe the other ones do this stuff too, but it just seemed...

01:19:58 - Anthony Campolo

Yeah, LangChain is a similar thing where you'll get a chat interface, kind of more built-out functionality, and they had a multi-tab one and you can get JSON output and stuff like that. But LlamaIndex and LangChain are both kind of... those two are really at the top of the class when it comes to this stuff.

01:20:17 - Nick Taylor

Okay, okay. Yeah, cool, cool. I'm a little partial to LlamaIndex because I have an alpaca on my stream, which is not a llama, but people always think it's a llama. So I don't know, maybe I can say that's where I bet my money because of that. Just use some ridiculous reason why. Okay, cool. I'm going to cough again. I know it. I'm dying here. Cool, cool, cool. No, it's been super fun, Anthony, man. I appreciate you clarifying a lot of the terms at the beginning of the stream because I feel like there's a lot of stuff in AI where people just say AI and they never kind of specify things, you know? So it's just nice to hear about the training, the embeddings. I mean, some of these things I was familiar with, but I liked how you clarified some of these things too. And I dropped a link to where people can give you a follow. But so, you're doing mainly consulting now? Is that what you said you're doing?

01:21:32 - Anthony Campolo

Well, I'm doing kind of similar devrel stuff that I was doing. I'm doing it for Dash, so I'm building out front-end examples and streaming. And they want me to basically be the connector to the Web2 world, so do some of that kind of streaming too, if you're interested. We're actually looking for streamers that we could pay to do an episode or something. So you would be able to also make some money streaming about Dash if you wanted to. But it's a cool project. I know one of the dudes on the team, Ryan, really well. And they've been around for like 10 years. So they preceded all the really crazy Web3 craziness and have stuck around through that. So, you know, I think that they've actually built a lot of trust in terms of what they're doing and their legitimacy. But aside from that, I'm also writing articles for Everfund now. My cohost...

01:22:26 - Nick Taylor

Oh, cool.

01:22:27 - Anthony Campolo

has his company Everfund, which is for nonprofits. And so he wanted... he's like, we want to write articles about AI. You know way more about this than I do. I was like, sure.

01:22:39 - Nick Taylor

Oh, cool, man. That's awesome. Well, I wish all the best and all that. And yeah, we can chat about that. I don't really want to dive into Web3 right at the end of this, but I definitely dabbled in it and found it interesting because I'm always interested in looking at new technologies on the high level. I feel like there's a lot of scamming still going on. I know there are a lot of people who are trying to do legitimate work there too, though. So I have a hard time finding a balance. And then there's other stuff like TBD, like Angie Jones's company, which is not cryptocurrency at all, but it's based on web standards and it's decentralized too. But it's not necessarily Web3. I know they call it Web5,

01:23:41 - Anthony Campolo

but yeah, it's the authentication layer, essentially. It's problems that Web3 has been aiming to solve, just in a slightly different way. So I think a lot of people felt the need to rebrand away from Web3. And there was a time when, before Web3, they just called it blockchains and the decentralized web or something like that. So to me, those are pretty much the same thing, which is kind of a rebrand, you know? But I think it's great. I think it's whatever it takes to get people into stuff and excited about it because it's just about, like, how do we give the power back to the people, ideally, and away from the giant corporations.

01:24:22 - Nick Taylor

Yeah, I hear you. I just realized I forgot... well, I have a window above me and my house is facing...

01:24:31 - Anthony Campolo

Is it being eclipsed?

01:24:33 - Nick Taylor

Yeah.

01:24:33 - Anthony Campolo

Well, I'm...

01:24:34 - Nick Taylor

I'm gonna go watch the eclipse shortly. My backyard is where the sun shines, because the backyard's on the west side, so I have a feeling I might start getting eclipse rays coming through the window in a second. So I'm going to go get my eclipse sunglasses on shortly. But it's been awesome, Anthony. And yeah, like I said, let's do another one and we can chat some more about it, but build out something more full-fledged like you were talking about. And I think it'd be fun to do, but also I think it would probably be a useful tool, whatever we build out there too.

01:25:11 - Anthony Campolo

That's the thing. I want to actually create something that would be useful for you, that you would actually use in your content creation, because that's what I'm doing now.

01:25:20 - Nick Taylor

Yeah, no, cool, cool, cool, awesome. Well, like I said, check out Anthony's website. He's doing contract work if you're looking for anything in developer advocacy, and he also does a lot of writing. Check all that out. And yeah, always good hanging out, my man, and looking forward to when we meet in person again. And everybody else, we'll see you next week. Anthony, if you just don't mind staying on one sec. Yep.

01:25:47 - Anthony Campolo

Later, buddy.

01:25:49 - Nick Taylor

Later.
