
Vibe Coding: Building Faster with AI-Powered Development
Episode Description
Anthony Campolo joins JavaScript Jabber to discuss vibe coding, building his AutoShow podcast tool with LLMs, and how AI is reshaping developer workflows.
Episode Summary
Anthony Campolo returns to JavaScript Jabber to explore vibe coding — a term coined by Andrej Karpathy in early 2025 describing a development approach where programmers rely heavily on LLMs to generate code rather than writing it themselves. Anthony shares his experience building AutoShow, a tool that automates podcast show notes by combining transcription services with LLM-generated summaries, chapter titles, and other content. The conversation covers practical workflow details, including how Anthony uses Repo Mix to manage project context, why he prefers starting fresh conversations with the LLM rather than extending long ones (since models degrade as context windows fill up), and how rule files function like prompt engineering to keep generated code consistent. The panel debates whether vibe coding threatens developer jobs, drawing analogies to autopilot systems and historical technological disruption, ultimately agreeing that AI accelerates developers rather than replacing them — though it may shift which skill levels are most in demand. Dan Shappir adds perspective from his work at Sisense, where AI-assisted development has been mandated across teams, noting that Java developers face more friction with these tools than JavaScript developers. The episode wraps with Anthony demonstrating AutoShow's interface and discussing security considerations for vibe-coded applications.
Chapters
00:01:05 - Introductions and Defining Vibe Coding
The JavaScript Jabber panel introduces themselves and welcomes returning guest Anthony Campolo, who previously appeared to discuss Redwood and Slinkity. Anthony explains that he's built an AI-powered app using what's now called vibe coding, setting the stage for the episode's central topic.
Dan Shappir steers the conversation toward defining vibe coding, and Anthony traces the term back to Andrej Karpathy's February 2025 description of a development style where you let the LLM write most of the code. Anthony notes he began coding this way around mid-2023, when skepticism around LLMs was still high among senior developers, and explains the spectrum from pure vibe coding — where non-programmers just feed errors back to the model — to a blended approach where experienced developers fix issues themselves.
00:06:10 - The Blurred Line Between Vibe Coding and Traditional Development
Steve Edwards admits the concept makes him uneasy as someone who values understanding code deeply, while Charles Max Wood describes his own experience sitting somewhere between pure vibe coding and traditional development. The panel explores whether having a competent programmer shortcut the feedback loop still qualifies as vibe coding.
Anthony argues the key distinction is whether the LLM generates the initial code, regardless of how errors get fixed afterward. Dan offers an interesting reframe by comparing vibe coding to having a product manager sit alongside a developer and direct their work. Anthony emphasizes the critical importance of testing — whether through formal test files or treating CLI commands as end-to-end tests — since trusting LLM output without verification is where things go wrong.
00:14:00 - Tech Stacks, Tools, and the Repo Mix Workflow
Dan asks what technologies Anthony generates with LLMs, and Anthony describes his evolving stack — currently Astro with Solid and Tailwind — noting that popular, well-documented frameworks perform best with AI code generation. The discussion moves to tooling, where Anthony explains his somewhat unconventional workflow using Repo Mix to compress his codebase into a context-friendly format.
Rather than using Cursor's agent mode or similar integrated tools, Anthony pastes Repo Mix output into ChatGPT, Claude, or Gemini directly, then manually applies changes while reviewing diffs. He explains this deliberate friction gives him a chance to review code before it hits his project, which matters because he's building a production app with payments and authentication. The panel discusses how this approach compares to more automated agent-based coding workflows.
00:19:22 - Rule Files and Prompt Engineering for Code Generation
Dan raises the topic of rule files — configuration written in natural language that guides how LLMs generate code. He shares his experience introducing rule files into legacy projects at work, describing the strange sensation of writing configuration in English markdown rather than JSON or YAML, and how small changes to rules could produce unexpectedly large differences in output.
Anthony explains that his rules evolved iteratively from observing repeated unwanted patterns in LLM-generated code. Both developers agree that examples in the codebase reinforce written rules, since LLMs sometimes fail to follow English instructions alone. The conversation touches on specific challenges like getting LLMs to respect ESLint configurations, illustrating how adding layers of tooling indirection increases the chance of errors because the model needs that tool's knowledge in its training data.
00:26:38 - Context Windows, Model Selection, and Costs
Anthony explains a counterintuitive aspect of LLMs: they get less effective as conversations grow longer because accumulated context — including dead ends and wrong paths — fills the context window and pushes out important early instructions. This is why he prefers one-shot interactions through Repo Mix rather than extended conversations.
The panel fields an audience question about which models work best. Anthony shares his current preference for Claude Sonnet over Opus due to its longer context window and better reliability, with Gemini 2.5 as his second choice. The discussion shifts to costs, where Anthony reveals he pays $200 monthly for both Claude and ChatGPT subscriptions to support building AutoShow full-time, while recommending beginners start with a $20 plan and avoid free tiers that restrict access to the best models.
00:36:18 - Building AutoShow: From Personal Tool to Product
Anthony describes AutoShow's origin as a personal tool for automating podcast show notes. He explains the progression from manually running Whisper transcription and pasting results into ChatGPT, to building a CLI that automated the entire pipeline — downloading video, extracting audio, running transcription, applying prompts, and generating formatted output.
After repeatedly hearing from friends and fellow content creators that he should charge for the tool, Anthony began building a proper front end and back end. The app now supports multiple transcription services, dozens of prompt templates spanning content summaries, social media posts, creative writing, and business materials, plus multiple LLM options. Charles Max Wood notes potential synergy with his own podcast platform plans, and the two discuss possible integration through AutoShow's backend API.
00:42:30 - Will AI Replace Developers?
The panel tackles the question of whether vibe coding threatens programming jobs. Anthony argues that someone still needs to conceive the app, prompt the LLM, and test the results. Charles frames it as an economics question, noting that most companies have backlogs far longer than they can address, so acceleration through AI tools leads to doing more rather than cutting staff.
Dan draws an analogy to airplane autopilot — automation handles most of the routine work, but you still want a skilled pilot for situations the system can't handle. The conversation evolves into discussing how AI changes what it means to be a developer, with Dan questioning whether generated code even needs to be human-readable if humans aren't the ones maintaining it. Anthony jokes that at least everyone agrees React is forever, before Dan raises the possibility of agentic web experiences where LLMs generate entire webpages on the fly.
00:51:44 - AI-Assisted Development in Enterprise and Legacy Code
After Charles departs, Dan shares his experience at Sisense, where management has mandated AI-assisted development across all teams. JavaScript and TypeScript developers have adopted Cursor successfully, but Java developers face significant friction because LLMs were primarily trained on Python and JavaScript, and Java tooling is optimized for IntelliJ rather than VS Code-based editors.
Anthony reflects that vibe coding has been a step change for him as a developer with about five years of experience, noting academic research showing junior-to-intermediate developers gain the most from LLMs. Dan highlights another valuable use case: using AI to tackle tedious refactoring work like converting React class components to functional components — labor-intensive tasks that might never get done otherwise but are well-suited to LLM assistance.
01:00:26 - Junior vs. Senior Developers in the AI Era
Dan and Anthony explore how AI tools affect different experience levels. They consider scenarios where a company might replace two junior developers with an LLM-assisted senior, or alternatively empower two juniors to perform closer to senior-level work. Dan argues that the core senior skill — decomposing complex problems into manageable pieces — remains essential regardless of AI tooling.
Both agree that seniors are likely safe but that reduced demand for junior roles could create a pipeline problem: if fewer juniors get hired, where will future seniors come from? The conversation circles back to the fundamental economic question of whether companies will use AI-driven efficiency to reduce headcount or to tackle more ambitious projects, with both leaning toward the latter based on historical precedent.
01:09:07 - AutoShow Demo, Security, and Final Thoughts
Anthony walks through a live demo of AutoShow's interface, showing the flow from inputting a video URL through selecting transcription services, prompts, and LLM models to receiving formatted show notes. Dan suggests building a plugin architecture to make the system more extensible and allow third-party integrations.
The conversation addresses security concerns common to vibe-coded applications, with Anthony explaining his approach of using established services like Clerk and Stripe rather than rolling his own authentication and payments. He closes by encouraging developers and non-developers alike to experiment with LLMs, shares his preference for Claude as a coding model, and invites listeners to try AutoShow and provide feedback. Dan wraps up the episode at 01:18:26, thanking Anthony for sharing practical insights into the vibe coding revolution.
Transcript
00:00:00 - Ad Read 1
The Global Gaming League is presented by Atlas Earth, the fun cashback app. Hey, it's Howie Mandel and I am inviting you to witness history as me and my How We Do It gaming team take on Gilly the King and Wallow $267 million gaming in an epic global Gaming League video game showdown. Plus a halftime performance by multi platinum artist Travy McCoy. Watch all the action and see who wins and advances to the championship match right now at globalgamingleague.com. That's globalgamingleague.com in partnership with Level Up Expo.
00:00:30 - Ad Read 2
The Bleacher Report is your destination for sports right now. The NBA is heating up, March Madness is here, and MLB is almost back. Every day there's a new headline, a new highlight, a new moment you've got to see for yourself. That's why I stay locked in with the Bleacher Report app. For me, it's about staying connected to my sports. I can follow the teams I care about, get real-time scores, breaking news and highlights all in one place. Download the Bleacher Report app today so you never miss a moment.
00:01:05 - Charles Max Wood
Hey, folks, welcome back to another episode of JavaScript Jabber. This week on our panel, we have Steve Edwards.
00:01:13 - Steve Edwards
Yo. Coming at you from a beautiful sunny Portland area.
00:01:17 - Charles Max Wood
We also have Dan Shappir coming to
00:01:19 - Dan Shappir
you from the always sunny Tel Aviv.
00:01:22 - Steve Edwards
So nice.
00:01:23 - Dan Shappir
Sometimes it is sunny, sometimes it is very much on the warm side of things.
00:01:28 - Steve Edwards
Yeah. Next week I get to visit my boss down in Southern Arizona where it's going to be like, you know, 100 Fahrenheit average.
00:01:35 - Charles Max Wood
Yeah, it's been getting up to 95, 98 here. This is Charles Max Wood from Top End Devs. I mean, thankfully it's not that humid. Kind of like in Arizona. It's not that humid.
00:01:45 - Steve Edwards
The dry heat.
00:01:47 - Charles Max Wood
Yeah. But it's still not fun to go sit outside in. We have a special guest this week, and that is Anthony Campolo. Anthony, you want to let people know why you're famous? And I think we've had you on before, so maybe refresh our memory there too.
00:02:01 - Anthony Campolo
Sup, everyone? It's my third time here. I originally came on to talk about Redwood and then talked about Slinkity, a now completely unknown framework that basically made Eleventy like Astro. And then everyone just used Astro and stopped using Eleventy, so no one cares about that project anymore. But it was fun. And now I am building an AI app that I kind of vibe coded, maybe depending on how we define that term. So, yeah, we're going to be talking about that. And yeah, super excited to be back and to chat with you all.
00:02:38 - Steve Edwards
So I got to tell you before we get started, when I first saw your name, I thought of somebody else because back in the, when I was growing up, there was a famous Christian speaker named Tony Campolo.
00:02:49 - Anthony Campolo
Yes.
00:02:49 - Steve Edwards
And I was going to ask how many times you got confused for him.
00:02:53 - Anthony Campolo
I think this might be the first time ever. Usually it's just me that shows up when you search, you know, my name. That's why my handle is ajcwebdev, and that's the same everywhere. So that's an easy way to find me online if you just search my name. Yeah, you get Tony Campolo, but no one I've ever talked to has ever really known who that was before. So this might be the first.
00:03:13 - Steve Edwards
He was, yeah. When I was growing up, he was, he was quite well known, at least in the church circles that I traveled in. So.
00:03:18 - Charles Max Wood
Well, pulling us down around his office before you showed up. And then he was like, I didn't know he coded.
00:03:25 - Dan Shappir
I'm trying to pull us back to tech. So, Anthony, you're on our show because a little while ago I posted on X and also on Bluesky, kind of a shout out for people to come and speak about vibe coding, and you were kind enough to answer my call. So let's start with that. You know, what is vibe coding?
00:03:55 - Anthony Campolo
Yeah, I love this term because it's one of those examples where the term came around and I felt like it defined something I was already doing. It gave me the ability to explain it to people more easily, because now there was an existing term I could point to. The term itself was invented by Andrej Karpathy, who's an extremely well-known researcher in AI. He's worked with a lot of the big companies, and this was in February of 2025, so it's a fairly new term. The way he defined it, and this is a direct quote: "There's a new kind of coding I call vibe coding, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." He then goes on and gives an example: I ask for the dumbest thing, like decrease the padding on the sidebar by half, because I'm too lazy to look up how to do it and apply it to the project myself. So for me, the most important thing is that you do the majority of your workflow through the interface of an LLM, by telling it what you want your project to do or how you want your code to change, and then having it write the code.
00:05:07 - Anthony Campolo
There are a couple more levels. Some people will say it's only vibe coding if you're speaking and not typing, but I don't think those distinctions really matter that much. I think it's more that you are not directly writing much code. The few times you would be writing code are if it gives you some code with an error and you have to fix it. You can either do it yourself, if you know how, or you can feed the error back to the LLM and have it try to fix it with new code, which would be a fully iterative vibe coding cycle. So that's probably how I would define it. I first started coding like this around the middle of 2023, and I've been doing it for a while. Back then you still had a lot of skepticism around LLMs, especially from a lot of senior devs. They would say it hallucinates a lot, it writes buggy code, it writes insecure code, no one should ever depend on an LLM for their code. That's obviously changed a lot. People are more open to this idea now.
00:06:07 - Anthony Campolo
So yeah, I'd be curious how you guys see the term.
00:06:10 - Steve Edwards
Well, as somebody who likes to know the details and the underlying code and know how everything works behind the scenes, that makes me cringe just thinking about it.
00:06:20 - Charles Max Wood
So I have to jump in here, because I've done kind of both, and I'm a little curious where you draw the line, because
00:07:30 - Charles Max Wood
I have some friends that are not programmers at all, right? And so to me, they're kind of having the pure vibe code experience, because it spits out a whole bunch of stuff with funny syntax that they can't make heads or tails of. They run it, and then they literally just hand it the error back and say, now it's doing this. And then it says, oh, well, change this. And they'll eventually get to something that in a lot of cases works. Now, you know, there are varying degrees of "works."
00:07:59 - Dan Shappir
Yeah. Or seems to work well.
00:08:00 - Charles Max Wood
In some cases, it does what they want. Now, does it do it efficiently?
00:08:04 - Anthony Campolo
No.
00:08:04 - Charles Max Wood
Is it pretty code? Hell, no. Right. But it does the thing, at least to their satisfaction. But then when I'm using the AI, a lot of times, you know, I've used Cursor, I've used Copilot, I've used Grok, I've used ChatGPT, just on their own, to get it to write stuff. And it becomes more of a blend, right? Because yeah, if it throws an error, I'm like, oh, I know how to fix that. So then I'll go in, and sometimes I'll even tell the LLM, I fixed this error with this. Okay, thanks for telling me. Here's some more stuff. And so is that vibe coding, where I'm half the solution on my own? That's kind of what I'm wondering. Because if you have a competent programmer that can shortcut a lot of the feedback and just make it run or quickly clean stuff up, is that vibe coding or is that not vibe coding?
00:09:05 - Anthony Campolo
I would think it's more vibe coding than not vibe coding. To me, the more important thing is: did you start by prompting an LLM to give you the initial code? That, to me, is really the important thing. What you do after that to try and fix the error, whether you're doing it yourself or having the LLM do it, I think has more to do with experience levels and what's going to be more efficient and faster for you. But for me, the bigger thing is that the majority of the code is being written by the LLM.
00:09:37 - Dan Shappir
By the way, taking it from the other direction: if, let's say, it was 10 years ago and you had a product manager or UX specialist sitting alongside the developer and telling them what to do, would that have been considered vibe coding
00:09:53 - Anthony Campolo
from the perspective of, you know, the PM or whoever? It's a similar thing. And this is one of the things that makes LLMs such a huge deal, not just for developers, but really for non-developers also. I want to throw in one more thing real quick. To make this actually work, you have to do a couple of things. You have to have some sort of iterative way of actually testing the code, preferably with written tests, or, you know, just some sort of flow where you go click through a couple of boxes and make sure it works. Because if you don't have that, or if you're trying to change a bunch of code that changes too many things that you can't test efficiently, then you're just kind of trusting the LLM, and that's the thing that you don't want to do. Going back to what Steve said, you do still want to have some sort of guarantee that your code is doing what you want it to do. So you could have it start by vibe coding you some tests that you will then run after it gives you the new code. That's a really, really important thing that doesn't always get brought up.
00:10:48 - Charles Max Wood
I want to jump in on this too, because people get worried: is it going to hallucinate in the same way on the tests? It's a different prompt, and so it probably won't. But yeah, anyway, it will
00:11:00 - Dan Shappir
hallucinate, but it would be different hallucinations. And if you get incompatibilities between the two, you know that you have at least one problem, potentially two. Right?
00:11:09 - Anthony Campolo
Yeah. So you have to make sure the tests correspond to what you actually want the application to do. You have to actually do that, then run the tests and make sure they agree with each other and it's working the way you think it should.
00:11:20 - Dan Shappir
So how do you test these sorts of things? Let's say I'm writing a test script, some code for Playwright. What am I creating in this test script?
00:11:33 - Anthony Campolo
It depends what you're writing. The way I first started doing this, and what I found was a really good, efficient way to do it, was by building CLIs. For me, I started with Commander.js, and I was trying to create a flow that would download a video, extract the audio, run transcription, add a prompt to the transcript, feed that to an LLM, and then get a response back to do things like generating show notes or chapter titles for podcasts. So I would have a command that I would want it to be able to execute to do a thing. I would have it write the code, I would then run the command, and the command would do what I want, or it wouldn't. So I was kind of treating the command itself as the test. That would always give me logs based on how the command went, which would include the errors. And I wouldn't necessarily have to write a standard test file to test it, because the command itself would be the test.
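The pipeline shape Anthony describes can be sketched in plain Node.js. Every function here is a stubbed, hypothetical placeholder (the real AutoShow CLI uses Commander.js plus external download, transcription, and LLM services); the point is that running the one top-level command exercises the whole flow end to end:

```javascript
// Sketch of the CLI pipeline described above, with every external stage
// stubbed out as a hypothetical placeholder.

async function downloadAudio(url) {
  // Placeholder: would download the video and extract the audio track.
  return `/tmp/audio-${encodeURIComponent(url)}.wav`;
}

async function transcribe(audioPath) {
  // Placeholder: would call a transcription service such as Whisper.
  return `transcript of ${audioPath}`;
}

function buildPrompt(transcript) {
  // Prepend the show-notes prompt to the transcript.
  return `Generate show notes and chapter titles for:\n${transcript}`;
}

async function callLLM(prompt) {
  // Placeholder: would send the prompt to an LLM API.
  return `SHOW NOTES (generated from a ${prompt.length}-character prompt)`;
}

// The command itself doubles as an end-to-end test: if any stage throws,
// the logs point at the step whose error you feed back to the LLM.
async function generateShowNotes(url) {
  const audio = await downloadAudio(url);
  const transcript = await transcribe(audio);
  return callLLM(buildPrompt(transcript));
}

generateShowNotes('https://example.com/episode-1').then(console.log);
```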
00:12:28 - Dan Shappir
But that means, if I understand your description correctly, that you're effectively manually testing it. I mean, you're running a certain script, but then you're the one who's actually checking to see that you get the expected results.
00:12:46 - Anthony Campolo
Well, I mean, it's automated because you wrote a test; the test itself is going through steps that you would otherwise do manually. But like I said, it's like an end-to-end test.
00:12:57 - Dan Shappir
Yeah, an end-to-end test. That's what I was aiming for. So with vibe coding, I guess, because the system is so dynamic and everything is changing so rapidly, you're effectively also kind of acting, to an extent, as a manual QA for what you're developing.
00:13:20 - Anthony Campolo
Yes. Yeah, I think that's a good way to look at it. And as much as you can build in ways where the system can QA itself, you should. There are always going to be at least some points in time where you have to step outside and actually just try it and see if it works or not. So I think there are a lot of the same kinds of issues that we get with testing in general, like what's an efficient way to test and what's not.
00:13:41 - Charles Max Wood
Yeah, in a lot of cases when I've done it, I just load it in the browser and click around, and then I'll vibe code it and prompt it for unit tests. But it really does depend on what I'm building, because sometimes that's not the right way to do it if it's, you know, completely command-line based or something.
00:14:00 - Dan Shappir
By the way, what do you usually generate code in, Anthony?
00:14:04 - Anthony Campolo
In terms of like, what LLM am I using or.
00:14:06 - Dan Shappir
No, what are you using the LLM to generate? Is it React code, straight DOM code, jQuery code, HTMX code?
00:14:15 - Anthony Campolo
Yeah.
00:14:15 - Dan Shappir
What are you generating?
00:14:18 - Anthony Campolo
A lot of stuff. Like I said, I first started by building a CLI, and then I extended that functionality to a back end and front end, which have gone through many different tech stacks. I was a big framework guy, so I'm always kind of trying out different tools. My current stack, in terms of what I'm deploying the AutoShow app with, which is the app I vibe coded, is Astro, with Solid as the templating language and Tailwind for the styling. So that's basically the back end and the front end. I've done stuff with a Node.js Fastify server, I've done stuff with React. Really, any kind of popular technology that has a lot of usage and a lot of docs will probably do pretty well. Some people say you should always use Next.js and Supabase, but I think as long as you're not trying to use something really obscure and really new, it tends to work pretty well. If it is very obscure and very new, you should create just a single file that has all the docs in it and drop that into the context of the LLM.
00:15:23 - Anthony Campolo
That will kind of help solve that issue.
00:15:25 - Dan Shappir
I think these days you also have various MCPs that make it very easy to get content off of GitHub, for example.
00:15:34 - Anthony Campolo
Yes, yeah, yeah. MCPs add a whole other level to things, for sure.
00:15:38 - Charles Max Wood
Yeah. But when you're using the LLM... no, I'm just curious, because you were asking about the tools.
00:15:45 - Anthony Campolo
Mm.
00:15:45 - Charles Max Wood
Are you doing this with, like, Copilot connected to Gemini or GPT or Claude, or using something like Claude Code? How are you doing that? And I'll also point out that Copilot, if you put it into agent mode instead of, I can't remember the other mode, it'll actually put the code into your IDE for you. So I'm a little curious what level you're working at there.
00:16:10 - Anthony Campolo
Yeah, so I've tried Copilot and Cursor, I've tried Claude Code. I've tried things like Bolt and v0, I think is what Vercel's thing is called. And when you have the agent loop, like you said, where it's actually changing your code for you, that is the full vibe code experience. That is Steve's worst nightmare, for sure. So I tend to.
00:16:30 - Charles Max Wood
It's kind of nice.
00:16:32 - Anthony Campolo
It can be very, very nice. Yeah, totally. I think for personal projects, that's definitely what I would do. I have a flow that some people consider kind of strange and inefficient. I use this tool called Repo Mix, which is a way to intelligently take your codebase and smush it into something small enough to fit in the context window of these LLMs. It also includes instructions on how I want my code to be styled and written. Then I write what I want to do as a line at the bottom of that whole thing, and I plop that into either ChatGPT, Claude, or Gemini; I use all three, depending on which one is going to work best. Then it gives me the output and I put it into my files. As I'm doing that, I'll quickly look over the code and the diffs. So I'm not doing a pure vibe coding experience; I am paying attention to my code. I'm building a production app that has, you know, payments and login and stuff.
00:17:26 - Anthony Campolo
So that gives me a step to slow down, look at the code that's being written, look at what is changing in my project, get a sense of what's happening, and then I can test it and see if it works.
00:17:38 - Dan Shappir
So when the LLM is suggesting stuff, do you usually approve it one by one, or do you just approve it all and then look at, I don't know, the git diffs or whatever?
00:17:52 - Ad Read 1
The Global Gaming league is presented by Atlas Earth, the fun cashback app. Hey, it's Howie Mandel and I am inviting you to witness history as me and my how we do it gaming take on Gilly the king and wallow. 2, 6 $7 million gaming in an epic global Gaming league video game showdown. Plus a halftime performance by multi platinum artist Travy McCoy. Watch all the action and see who wins and advances to the championship match right now at globalgamingleague.com. That's globalgamingleague.com in partnership with Level Up Expo.
00:18:22 - Ad Read 2
The Bleacher Report app is your destination for sports right now. The NBA is heating up, March Madness is here and MLB is almost back. Every day there's a new headline, a new highlight, a new moment you've got to see for yourself. That's why I stay locked in with the Bleacher Report app. For me, it's about staying connected to my sports. I can follow the teams I care about, get real-time scores, breaking news and highlights all in one place. Download the Bleacher Report app today so you never miss a moment.
00:18:52 - Anthony Campolo
So I plop in all the changes that it gives me, and I tell it to write code files in full, so I just copy-paste each of them. The way my projects are structured, it usually only has to change a handful of files to make the change, if it's atomic enough. After I put all the files in, I'll look at the diffs on each of those files. So if there are three files, I'll just click over to the diff viewer in VS Code, look at all of those, and then I'll test the application with either a test script or just clicking through.
00:19:22 - Dan Shappir
Now you mentioned that you use the project itself as the context or put it in the context in order to obviously create consistency within the code base.
00:19:32 - Anthony Campolo
Yeah.
00:19:34 - Dan Shappir
First of all, consistency is only worthwhile once you have some code. And also assuming the code is structured the way that you like it. So do you also manually create or have created some sort of rule files or something along these lines as kind of a starting point or a way to keep aligning your code with a certain desired outcome?
00:19:57 - Anthony Campolo
Yeah, so Repo Mix includes an instructions file that gets appended to each one. In there I have things like how I want the logging to be done and the JavaScript styling, so using imports, not require. Kind of like Cursor rules? Yeah, they're exactly like Cursor rules, totally. So I think everyone should have those, and they should make sure to get down in writing how they actually want their code to look and behave, and the coding styles they want, for sure.
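To make that concrete, here's a minimal sketch of what such a rules file might contain. The file name and the specific rules are illustrative assumptions, not Anthony's actual setup; Repomix's configuration does document an `output.instructionFilePath` option that appends a Markdown file like this to the packed context.

```markdown
<!-- repomix-instruction.md (hypothetical) -->
# Project instructions

- Use ES module `import`/`export`; never use `require`.
- Route all logging through the shared logger; no bare `console.log`.
- Write changed code files out in full, not as fragments or diffs.
- Keep changes atomic: touch as few files as possible per change.
```

A file like this plays the same role as Cursor's rule files: it rides along with every prompt, so the style constraints never have to be restated by hand.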
00:20:28 - Dan Shappir
So for me it's interesting. I've recently been working on several legacy projects at work. They are definitely legacy, but they're still very much undergoing development by several developers. I introduced rule files into all of them, and it was a very interesting experience for me in a lot of ways. It was my first real encounter with prompt engineering, you might say. First of all, it was really odd for me, being an old-timer in tech, to effectively write configuration as English in a Markdown file rather than in JSON or YAML or anything you'd normally expect configuration to be in. Instead of saying, you know, eslint: on, you say, I want you to use ESLint. It's a very strange experience the first time you do it. But the other thing that was very significant for me was seeing how seemingly very small changes I would make in the rule files could have a very significant impact on the generated code. And it wasn't always trivial to understand why. Is that your experience as well?
00:21:57 - Anthony Campolo
Yeah, for me, the rules came around very iteratively. I was working a lot with the LLMs, so they were generating a lot of code, and there would be a lot of times it would be doing something consistently that I didn't want. Every time that happened, I would write a rule, and then I would make sure that that rule actually changed the thing I wanted to change and didn't change anything else. So you kind of need to do this. It takes some time; there's really no shortcut. You have to get familiar with the LLMs and how they generate code. And changing your rules is another iterative process: you look at the outputs, see whether they're what you want, and tweak the prompt if they're not. And once you get it to where you want, try not to mess too much with your rules file.
00:22:40 - Dan Shappir
Yeah, so just to finish that point, Chuck: what I literally did was I started without any rule files. I would pick a file, let's say, that was in a style I didn't like, and literally gave just the one command, refactor, and saw what got generated. Then I put in some rules, did it again, saw the difference, tweaked them, did it again. And like you said, it was an iterative process. What I also discovered was that sometimes, even though I gave certain instructions, and even though the LLM very explicitly stated it was following those instructions, it still didn't, or didn't fully.
00:23:24 - Anthony Campolo
And this is where having examples helps. If the rules match what your code already does, then it'll be better at doing that, because it has both the English language explaining it and an example of what it actually looks like. So you're right, sometimes when you're just writing in English, it won't always get it. You actually need a file written in that style that you can show it, to be like, no, do it like this.
00:23:46 - Dan Shappir
So, for example, for me, I wanted it to not introduce any ESLint errors or warnings.
00:23:53 - Anthony Campolo
Well, isn't that what your ESLint configuration is for?
00:23:55 - Dan Shappir
Yeah, and I obviously gave it the path to the ESLint configuration. And it would literally say, hey, I found some ESLint issues, I'm fixing them. And then, I found some more, I'm fixing them. And even when it was all done, it still had some ESLint issues.
00:24:12 - Anthony Campolo
Yeah, and that's because you're putting a tool in the middle that it has to understand well enough to be able to do that, instead of it just knowing enough about code in general, like what import versus require is. Anytime you're adding in these extra tools and extra layers of indirection, there are more chances for the LLM to get confused and do stuff wrong.
00:24:36 - Charles Max Wood
Yeah, because it has to know about ESLint in its latent space because you're not teaching it.
00:24:42 - Anthony Campolo
So if you had a doc, or an ESLint file that actually has ESLint context and explains what the ESLint settings mean, how your configuration maps to ESLint, and how it works, that would be an additional piece of context you could give it that would help it figure that out.
00:24:56 - Charles Max Wood
Right. Then you are teaching it how to do that.
00:25:00 - Dan Shappir
Yeah.
00:25:01 - Charles Max Wood
So I have a question, because you guys are talking about rule files, and this is something that I haven't really used. Typically I'm getting in and I'm saying, look, I want you to do this, I want you to use these tools. And yeah, I have to keep reminding it while I'm doing the code. So where do you put those into your tools?
00:25:20 - Anthony Campolo
For me, everything goes through Repo Mix, so this is a huge part of my toolset. If you're using Cursor, you won't have this workflow at all. But the way Repo Mix works, you just have a separate Markdown file with your rules written in it, and it grabs it and appends it. I have a custom script that does a whole bunch of extra stuff, but really it's just a hunk of Markdown, is all it is. And that gets added to the project context that Repo Mix creates for me.
00:25:49 - Dan Shappir
And that's essentially the same way it works in Cursor; it just takes it from a different location. And the whole point is exactly what you said, Chuck: to avoid needing to manually put the same stuff in the prompt again and again and again, each and every time.
00:26:07 - Charles Max Wood
Right, yeah. And it's usually the ongoing stuff, right? It's, oh, just a reminder, I'm using Tailwind 4, not Tailwind 3. Right.
00:26:15 - Dan Shappir
Or first of all, it's.
00:26:16 - Charles Max Wood
I'm using Tailwind.
00:26:19 - Dan Shappir
Yeah, first of all it's, I'm using Tailwind rather than, let's say, plain CSS. It might figure that out from your existing content, or it might not, right? If you're starting a new project, how would it know? So you would need to remember to tell it: use Tailwind. But if you put it in your rule file, you don't need to remember.
00:26:37 - Charles Max Wood
Always remember. Yeah.
00:26:38 - Anthony Campolo
This also gets into something about conversation length. The longer you talk to an LLM in a single conversation, the dumber it will get. This is highly unintuitive, and a lot of people don't understand it about LLMs. That's why I like Repo Mix: every time I'm making a change, I'm kind of one-shotting it and getting code back. If it doesn't work, I'll have a very quick back-and-forth to fix the bug, and then I immediately go to a new context, a new conversation, every time.
00:27:04 - Charles Max Wood
So that was the thing that I wanted to bring up with Repo Mix. And I was going to ask earlier, but you kind of already answered it by saying Repo Mix. And that is like when I'm starting out, it can kind of keep most of the app in the context, but as I build more things in, it obviously gets to the point where it can't.
00:27:24 - Anthony Campolo
Yeah. So Repo Mix lets you include more specific sets of files and things like that. You can configure it to say, I just want my front end files to be in this configuration and my back end files to be in this. So if you have an app that gets very, very large, you can start to scope it to different sections that will have the context it needs to make the change you want to make.
00:27:44 - Charles Max Wood
So do you have to tell it, I'm working in this section of code, so only care about that? Because that's what I'm wondering: what are the limitations on it?
00:27:53 - Anthony Campolo
I have a script where I create a bunch of different configurations that allow Repo Mix to generate different types of context, and then I'll just run a command like repo back for my back end or something. There are a lot of ways you can configure it. How many different configurations you need will kind of depend on how large your project is, how many different sections there are, and how easily the LLM will get confused if you give it the whole project versus just the parts of the project it needs. But since I'm vibe coding it, every time I want a new Repo Mix command, I just give it my code and say, hey, I want a Repo Mix command that covers this part of my project, and it gives me the new command.
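A rough sketch of what such a script might look like, in plain Node. The slice names, glob patterns, and output paths are made up for illustration; Repomix's CLI does document an `--include` flag that takes comma-separated glob patterns.

```javascript
// Map named project slices to include globs, then build the Repomix
// command that packs just that slice into its own context file.
const slices = {
  back: ["server/**", "db/**"],
  front: ["src/components/**", "src/pages/**"],
};

function repomixCommand(slice) {
  const globs = slices[slice].join(",");
  return `repomix --include "${globs}" --output context-${slice}.xml`;
}

// e.g. wire these up as `repo back` / `repo front` shell aliases
console.log(repomixCommand("back"));
```

The point of scoping like this is that each generated context contains only the files relevant to the change, so a large project doesn't blow past the window the model can actually use.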
00:28:29 - Dan Shappir
Okay, you mentioned before that the LLM gets dumber the longer the conversation continues. Why is that?
00:28:39 - Anthony Campolo
So that's just because of what context length itself is. Context length basically means there's a certain number of tokens that the LLM is able to keep in its working memory. And at a certain point, as the conversation gets longer and longer, that will get full, so it'll have to start bumping off
00:28:56 - Ad Read 1
Whether you're looking for more space, a fresh start, or a home built around the way you live today, you'll find it at Fisher and Fritchell. We're building carefree villas, townhomes, family homes, and luxurious estates in amenity-packed Harvest and family-friendly Post Farms Manors in O'Fallon; Rye Hill Manor and Birdie Hills Crossing in St. Peters; Sulfur Spring in Manchester; and Cedars Valley in exclusive St. Albans. All with the quality craftsmanship you'd expect from Fisher and Fritchell. Visit fnfhomes.com for a map of locations. That's fnfhomes.com.
00:29:54 - Anthony Campolo
[unclear] from the beginning. And that means it will lose important context: if it made some changes at the very beginning and you get to a very long conversation, it won't be aware of those anymore. So you have to keep it within the LLM's context window, and not just so the first message doesn't say, this is too long, I can't respond. Claude actually does something smart: at a certain point, it will tell you your conversation is done and will not let you keep using it; it tells you you have to create a new chat. I'm not sure if any of the other LLMs do that right now. Most of them just let you keep going forever, and that's where you get the hallucinations. That's where a lot of the errors people think about when they think of LLMs come from: once you exceed the context window in your conversation.
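What "bumping off from the beginning" looks like mechanically can be sketched in a few lines. The four-characters-per-token estimate and the message list are toy assumptions; real tokenizers and serving stacks differ, but the failure mode is the same: the oldest instructions are the first to go.

```javascript
// Toy model of a context window: estimate tokens (~4 chars each), and
// when the running total exceeds the budget, drop the oldest messages.
const estimateTokens = (text) => Math.ceil(text.length / 4);

function trimToContextWindow(messages, maxTokens) {
  const kept = [...messages];
  let total = kept.reduce((sum, m) => sum + estimateTokens(m), 0);
  while (total > maxTokens && kept.length > 1) {
    total -= estimateTokens(kept.shift()); // oldest message is bumped off first
  }
  return kept;
}

const history = [
  "Use Tailwind, not plain CSS.", // the early instruction you care about
  "Fix the login bug.",
  "Now add tests.",
];
console.log(trimToContextWindow(history, 10)); // the Tailwind rule is gone
```

Which is why starting a fresh conversation with a freshly packed context tends to beat extending a long one: the rules ride along at full strength every time.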
00:30:43 - Dan Shappir
So basically, at a certain point you're kind of getting lost in the weeds, as it were, with all the noise that accumulated throughout the conversation. All the dead ends and wrong paths remain in the context and actually bump out the more important stuff that you probably gave at the very beginning, as you said, like the goal you're working towards, et cetera. By the way, we have a question from the audience: does he always try all three models, Claude, Gemini, ChatGPT? Which one tends to work best for which types of code?
00:31:18 - Anthony Campolo
Sure. So I will switch to a different model if the first one I tried isn't working, like if it gives me broken code, and then I tell it to fix it, and it gives me code that's still broken in the same way or broken in a different way. I treat most of these things pretty pragmatically. Because if you run all three, you then have to check all three, and if they all work, you've just wasted a ton of time. The bigger question is which one you start with: which is the one most likely to give you the correct answer the first time, so you don't have to use another model. And that has changed a lot for me over time. I'm constantly switching back and forth as new models come out, because the space is so competitive right now that almost no one is able to hold on to the best model for more than a couple of months. So my go-to right now is Claude Sonnet, not Claude Opus. This is a rare case where the, quote unquote, best, biggest, newest model is not really the best.
00:32:13 - Anthony Campolo
Sonnet is better than Opus because, one, it has a longer context window; I don't know why they gave Opus a smaller context window. It's also slightly faster, and it doesn't seem to have as much downtime. There was one time where Opus was broken, and then I realized I could just switch to Sonnet, and Sonnet wasn't. So sometimes a Claude model is down, not Claude, the whole thing. So anyway, Claude Sonnet is my go-to right now. Then if that doesn't work, I'll try Gemini 2.5, and if that doesn't work, I'll try ChatGPT.
00:32:45 - Dan Shappir
Yeah, same for me. We're starting with Claude Sonnet.
00:32:48 - Charles Max Wood
Yeah, I just kind of start with whatever I'm sitting on, and then I'll change it when I need to.
00:32:54 - Anthony Campolo
Oh, that's easy.
00:32:55 - Charles Max Wood
I don't start with any one of them, I just rotate. It's like, this isn't doing as well. And I also don't follow along so much with the, there's a new model out, I want to try it. I'll wind up switching to it when, for whatever reason, this one isn't doing what I want. My question, though, is how much are you paying for this? Because as you use the different models, you typically have to pay for some of them. It's per usage, right? So it's like, I put so many tokens in, or, you know, whatever, they're
00:33:24 - Anthony Campolo
usually only usage-based if you're using the API. Otherwise it's usage-based in the sense that you need to buy more expensive monthly subscriptions to get higher usage caps. So I'm not literally paying by the token. A lot of this will depend on how much you use it, how much code you write, and whether the code you're writing is work-related or not. I think most people should start by getting the $20 subscription to whatever LLM they're using, whether it's ChatGPT or Claude or whatever. Then, if you find it's useful but you're hitting the usage limit, look at some of the more expensive ones. Claude has a $100 tier and then a $200 tier where you can get 5x or 20x more usage. I think ChatGPT just has the $200 plan, where you get essentially unlimited usage; I've never hit the usage cap on ChatGPT. So I pay $200 for both of those, because I'm building an app that will hopefully be my full-time income one day, so I can kind of justify that cost.
00:34:26 - Anthony Campolo
If you're someone who's just learning these things or just starting out, I would recommend starting with a $20-a-month one. I definitely recommend not using the free plan, because the free plan does not give you the best model, and you want to have the best model, even if it's $20 a month.
00:34:39 - Dan Shappir
And there's a reason that Nvidia is now worth 4 trillion.
00:34:43 - Anthony Campolo
Seriously.
00:34:45 - Charles Max Wood
So for new folks, there are a couple of things we've thrown in here that I want to explain. One of them is tokens. Tokens are essentially words or parts of words that give meaning to the context, and the context is what the LLM remembers about your conversation. So it breaks the text up into tokens and then figures out what it means. The other one is Dan's reference to Nvidia going up. A lot of these models are trained using GPUs, and Nvidia is the largest maker of the best GPUs for that training. And so as you get bigger and bigger models with more and more information crammed into them, they need more machines with more GPUs to feed the data in so they can build these models. You can run them on your own machine, and a lot of times they'll take advantage of the GPU in your computer, which is probably also made by Nvidia. But at the end of the day, that's why: they're buying GPUs like candy, because in order to get a bigger, stronger model, they need more hardware.
00:35:54 - Dan Shappir
Or put another way, all that money that's coming from both us as users and from the VCs is all flowing downstream into Nvidia's pockets, right?
00:36:06 - Charles Max Wood
Yeah. Because they're basically the biggest, or only, game in town for the hardware you need.
00:36:12 - Anthony Campolo
Yeah, well, they always say in a gold rush, you want to be selling shovels. This is probably the best example of
00:36:18 - Charles Max Wood
that. So, what are you building, and what's kind of your workflow as you build it out? And then, related to that, what do you find is working or not working in that workflow?
00:36:30 - Anthony Campolo
Yeah, so I'm building something called AutoShow. It's at autoshow.app, and if you want the latest, go to dev.autoshow.app if you're watching right now and just can't wait; that will be merged upstream by the end of today, hopefully. But it's something I first built for myself, just as a personal tool I thought would be useful. I explained it very briefly at the beginning: I wanted to take my podcast, so I'm a podcaster like you guys, I do lots of live streams as well, and I wanted to get an LLM to write the chapter titles for me. You know, most people, when they listen to a podcast like your Lex Fridmans or your Joe Rogans, actually, Joe Rogan doesn't do this, but Lex does, you get chapter titles and timestamps for each one. So you can click to a certain point, and it'll jump to that point in the conversation, and you can read the titles over to get a sense of what the guest is going to talk about on the show. But that takes a lot of time when you have a three-hour podcast.
00:37:26 - Anthony Campolo
You gotta listen to the whole thing and find those times. So I found that if I used Whisper, OpenAI's open source transcription model, which gives you the transcription and a timestamp for each line, I could give that to ChatGPT and say, hey, I want chapter titles: read this, chunk it up into topics, and give me where each topic starts. And that was the first thing I did, and I was like, wow, just that alone saves me a lot of time and is very useful. So then I created a scripting workflow to do all of that. Instead of running Whisper, copy-pasting the transcription into ChatGPT, writing the prompt, or saving the prompt somewhere on my computer and copy-pasting it, giving it to the LLM, and getting the output back, I created a command, a CLI, to do all those steps for me. So you would write npm run autoshow, give it the URL, and you would have the full show notes without doing any other effort. So I was like, wow, that's pretty cool. And I started expanding it out.
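The core of that first step can be sketched in a few lines of Node. The transcript shape and prompt wording below are illustrative assumptions, not AutoShow's actual code.

```javascript
// Timestamped transcript lines, the shape Whisper-style tools emit.
const transcript = [
  { start: "00:00:05", text: "Welcome back to the show." },
  { start: "00:01:10", text: "Today we are talking about vibe coding." },
];

// Fold the lines into one prompt asking the LLM for chapter titles
// anchored to the timestamps where each topic starts.
function buildChapterPrompt(lines) {
  const body = lines.map((l) => `[${l.start}] ${l.text}`).join("\n");
  return (
    "Read this transcript, chunk it up into topics, and give me a " +
    "chapter title plus the timestamp where each topic starts:\n\n" + body
  );
}

console.log(buildChapterPrompt(transcript));
```

From there, the CLI is mostly plumbing: run the transcriber, build prompts like this one, send them to whichever model is selected, and write the responses out as show notes.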
00:38:24 - Anthony Campolo
I started adding more prompts, things like summarize the whole episode, give me the key takeaways, write frequently asked questions based on this, write a rap song based on this, write a blog post based on this. I just kept adding more and more prompts and more and more things it could do. And then I started showing it to people. You know, I'd go on streams with my friends and be like, hey, look at this thing I built. And after showing it to people, I just kept getting told, over and over again, dude, you should charge for this. And I'm like, okay, well, if people really think it's useful, I should try and productize this. So I've been working on building the front end for the back end, so there'd be a nice user interface people could use. So now you have a point-and-click interface: you can just drop in a link to the YouTube video you want, or upload a local file from your computer.
00:39:17 - Anthony Campolo
You select the transcription service you want, you select the prompts you want, and then you select the LLM you want, and it gives you the show notes back. So yeah, that's the app, before we go on to the whole product.
00:39:30 - Steve Edwards
So before going too much farther, I want to say comparing us to Joe Rogan was a very appropriate comparison, I think.
00:39:36 - Anthony Campolo
Of course.
00:39:37 - Charles Max Wood
Right.
00:39:37 - Anthony Campolo
I mean, you guys are the JavaScript podcast.
00:39:39 - Charles Max Wood
And in my mind he's almost as pretty as we are. So with these tools, I mean, I've used some tools that do some of the things you talked about, not all of them.
00:39:53 - Anthony Campolo
That's good to hear.
00:39:53 - Charles Max Wood
But yeah. And then the other thing I was going to point out is that I'm actually working on kind of the other end of things, where I want kind of a podcast assistant.
00:40:37 - Ad Read 2
It's Britsky, baby. And guess what? You're in Missouri, so you can play on spinquest.com, America's greatest place to play slots and table games, absolutely free from the comfort of your own phone, with the ability to win real cash prizes straight to your bank account. Why not head over to SpinQuest and buy a thirty-dollar coin package for only ten dollars? I love you. SpinQuest is a free-to-play social casino. Void where prohibited. Visit spinquest.com for more details.
00:41:04 - Charles Max Wood
I plan to monetize the platform that we host the shows on, but I want to build an LLM-based system in there where it's like, hey, this is how we do the scheduling. Can you schedule an episode with so-and-so? Can you invite them to the podcast? They said they want to talk about vibe coding; can you give me three or four resources that I can go check out before the episode?
00:41:27 - Anthony Campolo
Cool. I can help you build that if you want.
00:41:28 - Charles Max Wood
Right.
00:41:29 - Anthony Campolo
Let's talk afterwards.
00:41:30 - Charles Max Wood
Yeah, definitely. And so it's funny because the. The two kind of.
00:41:34 - Anthony Campolo
Yeah.
00:41:35 - Charles Max Wood
You know, they kind of blend, I think. And the other thing that'd be interesting is, you know, can I license AutoShow? Right? So it's like, hey, for all of the processing stuff on the other end, instead of building it myself, say, hey, we're going to send your episode over to AutoShow, and we're going to get all the metadata back that we need in order to publish it. Anyway, it'd be really interesting to compare notes and see where this is going. I'm doing most of mine in Ruby, but yeah, anyway, it would just
00:42:07 - Anthony Campolo
be a backend endpoint that you could hit and access if you want to just use the AutoShow part. But yeah, that's one of the reasons why I like having so many friends who are content creators, and why I like going on shows and explaining it to them, because most people are like, hey, I could use that.
00:42:19 - Charles Max Wood
Yeah, I've thought seriously about doing that stuff, but I want to focus on the other stuff, because that's where I spend most of my time. So yeah, if you'll do that stuff, yeah.
00:42:30 - Dan Shappir
And so, a question about that. You're building this whole thing, as you said, using vibe coding. If you had done it three years ago, you would probably have written it all by hand.
00:42:42 - Anthony Campolo
Yeah.
00:42:42 - Dan Shappir
How much of a difference has the vibe coding aspect of it made?
00:42:48 - Anthony Campolo
It just completely accelerates everything. It makes such a huge difference in terms of the speed at which I can build new features and new functionality and fix bugs. It's just a huge accelerator, because you think about all the things you had to do when you didn't have this. You would first have to figure out what you want to do and what tech you're going to use; you have to go to the docs; you have to try it out. I mean, if you get to the point where you know all your tools and you're already very experienced with them, then you're just writing the code. But even then, you have to figure out what the feature you want is, how you're going to implement it, write all the code to implement it, test it, and write new code if it's broken. So for me, it's just a huge accelerator. And as I've done it more and more, and learned how to do it more efficiently, how to anticipate the weaknesses it has and how to mitigate those, I've continued to accelerate my development speed even more.
00:43:45 - Anthony Campolo
So, yeah, I just think it doesn't necessarily make you a better dev, but it makes you a much, much faster dev.
00:43:52 - Charles Max Wood
So the thing that I can see people thinking is, are people going to lose their jobs because of this?
00:43:58 - Anthony Campolo
Well, no, because there still had to be a me to think of the app, to prompt it to build the app, and then to test and use the app. Yeah, I mean, but I don't
00:44:10 - Charles Max Wood
know, my employer could hire me to use tools like this and not have to hire two or three other people maybe when they get to that point.
00:44:20 - Anthony Campolo
Yeah, or it's a question of, you know, if they then have success and they're making money and they want to do more things, then they'll have to hire more devs, even if those devs are AI-enabled. I think it's kind of an economics question. Will there be a point where people say, well, I'm making enough money, so instead of trying to expand and make more money, I'm just going to keep making the money I make and cut workers? I just don't think that's really how it happens in practice. Individual companies may make that decision, but on the scale of the whole economy, that just isn't how it's happened throughout history. This could be different, and I'm definitely not saying it's not possible. But if you look at technological advances throughout history, that has never happened before.
00:45:06 - Dan Shappir
There's another aspect here, and that goes to the whole testing aspect of it. Think about it; I'll give a different analogy. Now, like you said, this might change. Think about pilots and the autopilot. If you think about pilots flying a commercial jet, the autopilot does something like ninety-odd percent of the actual flying; people don't realize that even the takeoffs and landings these days are pretty much automated. But that does not mean you don't want a pilot in the cockpit, because occasionally you have situations that the autopilot can't properly handle, and you want a person in the loop.
00:45:50 - Anthony Campolo
I've never heard that example before, but that's right. I'm going to start using that.
00:45:55 - Dan Shappir
Well, I mean, I'm going to charge you for that.
00:45:57 - Steve Edwards
This past weekend, I got a chance to drive a Tesla; the guy who owned it was a friend of mine. And he's like, dude, check this out. And he put it on Autopilot.
00:46:04 - Anthony Campolo
But.
00:46:05 - Steve Edwards
And it was driving and steering and stuff, but it says right there on the screen: be ready in case something happens.
00:46:11 - Dan Shappir
Right?
00:46:11 - Steve Edwards
You don't just sit back and take a nap and, okay, wake me up when we get to where we're going. Because something could obviously happen. So, you know, as you were talking, that's how I'm thinking. It's doing a lot for you, but you're still there sort of keeping your eye on it.
00:46:23 - Dan Shappir
Be there, happy? Be there, ready? Sorry. I wouldn't want it, because I'd be much more stressed out having my hands hovering over the wheel rather than simply holding the wheel. I don't know.
00:46:39 - Charles Max Wood
Well, my thinking on this is much more in line with what Anthony explained. And, you know, I asked the question because I want to hear what he thinks; I don't want to tell him what I think and then go, do you agree?
00:46:50 - Anthony Campolo
Wow, you think? No. You and your ideology.
00:46:54 - Charles Max Wood
Right? But I look at it like a lot of the other technological advances we've seen, where people were like, you're going to put people out of work because you automated a factory, right, with an assembly line or with robots or tools. In some of those cases with physical products, yeah, you're only going to sell so many widgets, so you kind of see that there. But with a lot of the other technological advances, and especially in software, my experience has been that our backlog is longer than we can do in 50 years. We've got a zillion things that we want to put in there, try out, run with, whatever. And so now, if you've got developers that cost you more or less what they cost you anyway, and you can give them these tools to accelerate, you just wind up doing more things more than you wind up laying people off. That may not happen in every case; there may be people that go, you know what, we've cornered this market, we're pretty comfortable where we're at, and so they wind up doing the other thing.
00:47:59 - Charles Max Wood
But my experience is that the cost of making software that does what people need is going to wind up going down, and that's going to reflect into all the other areas of the economy. The companies that fail to innovate with this stuff are going to wind up getting left behind. So you're not going to let people go; you're going to accelerate all the people you have and make them way more efficient so that you can stay competitive.
00:48:28 - Dan Shappir
Also, look, do I think that we might eventually get to a point, I don't know when it will be, you know, people talk about AGI and stuff like that, where you don't need a person in the loop? Maybe. But when that happens, it won't stop at software. So if you're saying your job is at risk, well, I could argue that eventually every job is.
00:48:52 - Anthony Campolo
But then even still, who made the LLM? These things don't come out of nothing.
00:48:58 - Dan Shappir
No, the previous LLM, at a certain
00:49:00 - Charles Max Wood
point in time. If you're talking about AGI, that's actually a different animal than what we're dealing with here, where it's more capable of actually making decisions. Right now the LLMs kind of do need a human running them. The idea behind an AGI or superhuman intelligence is that you don't need the human behind them, and we're just not there yet. It's a different thing.
00:49:27 - Dan Shappir
I do think that it is changing what it means to be a developer and the skill set associated with it.
00:49:38 - Charles Max Wood
Oh, absolutely.
00:49:40 - Dan Shappir
Although, again, I'm not exactly a hundred percent sure in exactly how, because, like you said, at the end of the day, for example, Anthony, you said you're still going into the code and fixing various things by hand. Now maybe with better models you would do it less frequently, but you still kind of need to be able to do it. It's also interesting to me that at the end of the day we are generating code in React or generating code in Solid, and you have to kind of stop and ask: at a certain point in time, if it's not a person interacting with the code, why does it even need to be human-readable code?
00:50:24 - Anthony Campolo
Well, it needs to be something the browser can interpret. That's going to be the thing; you'll never be able to go beyond that if you're building for the web. The platform you're building on will kind of define that, unless the platforms change and start accepting English language directly and then spin up, you know, code on the spot.
00:50:39 - Dan Shappir
Yeah, probably.
00:50:40 - Anthony Campolo
That's a lot of issues.
00:50:42 - Dan Shappir
That's probably not energy efficient. Once you have the task worked out, you probably want something more automated.
00:50:51 - Anthony Campolo
I'm just glad we all agree now that React is the only thing we'll ever write for the rest of our lives.
00:50:55 - Dan Shappir
No, because they're still paying me to write Rails. We did have an interesting conversation recently about the fact that eventually you might get to a point where, if you're talking about an agentic web, kind of like, you know, think about Google now, where you put in a query and instead of necessarily the search results, you're looking at what Gemini generates for you. Currently what Gemini generates is mostly text, but you could theoretically think about a future where Gemini effectively generates a webpage for you based on what you requested.
00:51:35 - Charles Max Wood
Yeah, hang on, I've got to take off so I'll let Anthony answer but I've got to go. So yeah, real interesting to see how this wraps up.
00:51:44 - Anthony Campolo
Awesome.
00:51:44 - Dan Shappir
Bye, guys. So Anthony, now it's going to be just the two of us, I guess.
00:51:51 - Anthony Campolo
Yeah, it's all good. I could spend hours talking to you, Dan.
00:51:54 - Dan Shappir
Same here.
00:51:54 - Anthony Campolo
Where are you working these days?
00:51:57 - Dan Shappir
We're taking a detour; we still have an audience, so just so you know, this is not part of the planned conversation. So, I left the previous company that I worked at, Next Insurance, about a year ago. By the way, they just recently got sold, which is nice because I kept my stock.
00:52:15 - Anthony Campolo
Hell yeah.
00:52:16 - Dan Shappir
Yeah. And now I'm working at a company called Sisense, which does analytics, and we are very much impacted by AI, including in the development process. We're kind of doing the AI revolution inside: management basically gave a mandate that all development, I won't say it's vibe coded, we're not there yet, but all development is now AI assisted and even AI driven.
00:52:49 - Anthony Campolo
That's super interesting. So this is relevant to the conversation then?
00:52:52 - Dan Shappir
Yeah, for sure. So for example, all the TypeScript/JavaScript development, because we still have some legacy stuff written in JavaScript, is now being done with Cursor. Interestingly, a lot of our backend is implemented in Java and the Java people are having a hard time with it. They've tried to use Cursor and it's not working well. I don't know exactly why, I haven't looked into it personally, but they're having issues.
00:53:20 - Anthony Campolo
I can tell you exactly why because the models weren't trained on Java, they were trained on Python and JavaScript code.
00:53:28 - Dan Shappir
But I think it's even beyond that. You know, Cursor is essentially VS Code with stuff added, and they're facing all sorts of challenges working with the Java tooling, stuff like
00:53:43 - Anthony Campolo
Java tooling is all designed for IntelliJ.
00:53:46 - Dan Shappir
Yeah, stuff like that. So some of the developers literally tried to work by having both IntelliJ and Cursor open at the same time, working in Cursor but doing all the builds from IntelliJ, and, you know, keeping them in sync. They're not having fun.
00:54:03 - Anthony Campolo
Yeah, well, that's how I, that's how I always felt when I had to write Java. So now you know how the rest of us felt.
00:54:08 - Dan Shappir
Well, look, just so you know, I've gone through all the programming languages since I started. And Java's the worst, right? No comment.
00:54:18 - Anthony Campolo
I don't hear you saying Java's the best.
00:54:21 - Dan Shappir
Well, let's put it this way. I had a certain stint using Visual Basic, and moving from Visual Basic to Java was actually a pleasurable move, because I prefer the curly bracket syntax over If and End If and stuff like that. But going back to our original topic. Let me put it bluntly: would you have been able to do this project without this whole AI-assisted development?
00:54:58 - Anthony Campolo
Probably not, I would say. Because, you know, it's kind of a question of: I could, but it would have taken so long that it almost wouldn't have been worth it for me to do it. So I do think it has been a step change in enabling me to build stuff that would have been very challenging for me to build in the past in a reasonable amount of time. And I'll fully admit this is partly to do with my level of experience. I didn't start coding until my late 20s; I've been a professional dev for about five years now. For someone who has been a professional dev for 15 years, it'll be very different in terms of what an LLM enables them to do before and after. I think it will speed them up if they use it correctly, but it will be less of a step change in enabling them to build brand new stuff. And there's actually academic research to support this, showing that people who are junior to intermediate gain the most from LLMs. That's not just in software development.
00:56:00 - Anthony Campolo
That's across fields, like legal work and stuff like that.
00:56:04 - Dan Shappir
So I'm having an interesting experience right now. As I mentioned, we're using it with a lot of existing legacy projects. For example, we have a legacy project which is implemented in JavaScript with React and class components, and manually transitioning from class components to functional components and hooks is, well, very labor intensive but fairly boring. Not a lot of creativity involved, let's put it this way. And it's definitely something that you can tell an LLM to do. You'll probably need to fix the results.
00:56:45 - Ad Read 1
The Global Gaming League is presented by Atlas Earth, the fun cashback app. Hey, it's Howie Mandel and I am inviting you to witness history as me and my how we do it gaming team take on Gilly the king and wallow. 2, 6, $7 million gaming in an epic Global Gaming League video game showdown. Plus a halftime performance by multi platinum artist Travy McCoy. Watch all the action and see who wins and advances to the championship match right now at globalgamingleague.com. That's globalgamingleague.com in partnership with Level Up Expo.
00:57:16 - Anthony Campolo
What's going on Missouri? It's bluff here. And you know what I hate about Missouri? They make it really hard to play my favorite casino style game. But you know who doesn't? Spinquest.com they have over a thousand available for you to play from the comfort of your own home anywhere in Missouri. And you don't have to wait for your prizes because they have instant cash prize redemptions. And new users that sign up right now get a 30 coin package for 10 bucks. So go sign up.
00:57:39 - Ad Read 1
Spinquest is a free to play social casino void where prohibited. Visit spinquest.com for more details.
00:57:46 - Dan Shappir
But you'd want really robust tests.
00:57:49 - Anthony Campolo
Going back to the testing.
00:57:50 - Dan Shappir
Well, obviously, whenever you're doing refactoring, let's put it this way: when you're doing refactoring, it doesn't really matter whether you're doing it manually or with an LLM or whatever, you need to have robust tests as a baseline. I would never start a significant refactor without good test coverage. But it's really labor intensive to do this kind of thing, and it's not very interesting or rewarding work. So if you can jumpstart this process with an LLM, that's a significant upside. So it's not just for "let's vibe code a new project." From my perspective, it's also very useful for "let's refactor an existing project and get it to where we would like it to be," something that otherwise might be so labor intensive that we might never actually do it.
00:58:48 - Anthony Campolo
No, I totally agree. I've told people the same thing, and I think this may finally be the point in time where we can rewrite all those COBOL apps, you know, or get off of Java so that you could then have a better dev experience with LLMs. So I agree: you can do a lot larger refactors a lot faster with LLMs, for sure.
00:59:08 - Dan Shappir
That'll be interesting: telling an LLM, here's a Java application, rewrite it in Node or something like that, and seeing what happens.
00:59:19 - Anthony Campolo
Yeah, that'll be hard. The React class components to functional components would be a little easier, because you can do bits and pieces at a time. But if you have to switch the entire language, you almost have to refactor the entire thing at once.
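As a side note on that class-to-hooks migration, a toy heuristic for flagging class components that are candidates for an LLM-assisted rewrite might look like the following. This is an illustrative sketch, not Sisense's actual tooling, and a real codemod would parse the AST (for example with jscodeshift) rather than use a regex:

```javascript
// Naive heuristic for flagging React class components that are
// candidates for an LLM-assisted rewrite to function components.
// Toy sketch only: a production tool would parse the AST instead
// of pattern-matching source text.
function findClassComponents(source) {
  const pattern = /class\s+(\w+)\s+extends\s+(?:React\.)?(?:Pure)?Component/g;
  const names = [];
  let match;
  while ((match = pattern.exec(source)) !== null) {
    names.push(match[1]); // the component's class name
  }
  return names;
}
```

You would hand each flagged component to the LLM with instructions to rewrite it using hooks, then lean on the robust test suite Dan mentions to verify the behavior didn't change.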
00:59:32 - Dan Shappir
Yeah, that is true. So, an interesting point that you raised about being an experienced rather than a relatively less experienced developer: I can see it working both in favor of and against both types of developers. Like, let's consider the junior developer. Let's say in the past you might have a team with one senior developer and two juniors. Well, now you might just give the senior an LLM and tell them you don't need the juniors anymore; you can effectively get the LLM to do the junior stuff for you. And then, going back to Chuck's question from before, you're potentially undercutting the work of junior developers. Do you think there's a risk of that?
01:00:26 - Anthony Campolo
Potentially. I think it'll be an economic question for the company: do they get more value out of having their senior dev use an LLM to do the junior dev work, or would it be better to have the two junior devs use an LLM and get closer to senior work? Because then you would, you know, think of it as having a senior and two half-seniors, but you
01:00:47 - Dan Shappir
would still need the senior. Or could you? So it works both ways. Maybe I just keep the senior and have the LLM instead of the two juniors. Or maybe I could have the two juniors working as semi-seniors and do without the senior, or with fewer seniors.
01:01:04 - Anthony Campolo
That gets back to what Chuck said, though: do you have a finite amount of work for them to do, or is it that once they've done their work, you have more work for them, in which case having more people means you could do more work?
01:01:16 - Dan Shappir
My opinion, by the way, is that you would still need a senior, but for a slightly different thing. Like from my perspective, at the core of software development is the ability to take complex problems and break them down into several simpler problems that you can then take the results and mush them together and get the solution for the original complex problem. And you do this recursively or iteratively until you get to such simple problems that are relatively straightforward to implement. And even if you're using an LLM, you'd still need to have a person kind of driving this process. I mean, think about you developing your app. You couldn't tell the LLM, here's my idea now go develop an app for it and do all the development process and just send me a text when you're done. Again, maybe we'll get there one day, but we're totally not there yet. So that's what I think you need the seniors for. And in a lot of ways that's what you need the seniors for with juniors anyway.
01:02:32 - Anthony Campolo
Yeah, no, I agree. So I think the argument you're making, and what I agree with, is that seniors are probably going to be sticking around. If people are being replaced because of LLMs, it's probably going to be harder to break in if you're a junior, which will
01:02:47 - Dan Shappir
raise an issue for the industry, because if you don't have juniors, well, where will the seniors come from? Yeah. So, you know, when you're vibe coding, how do you know when you're done?
01:03:01 - Anthony Campolo
Well, for me, it's just: have I built the feature that I wanted to build? You know, like you were talking about, the point of the senior is to kind of have this vision of what is the problem you're trying to solve and how do you get there. So for me, I first had this initial vision of a workflow where I could generate show notes without any manual steps along the way. And from there I've just been adding on more features. Like, oh, I want it to work with not just audio files but also video files, so it can extract the audio. Or, I want additional prompts so it can do more. Or, oh, I want to be able to run it twice on different LLMs. Or, once it gives me the output, I then want to run text-to-speech so I can listen to the show notes. Or, I want to now generate a cover image, so I need to connect it to an image...
01:03:48 - Dan Shappir
Maybe you should just release it.
01:03:50 - Anthony Campolo
Well, it is released. That's what I'm saying.
01:03:52 - Dan Shappir
Ah, it is released.
01:03:55 - Anthony Campolo
Yeah, because people can use it.
01:03:58 - Dan Shappir
Cool. So, if I recall correctly, you gave the link both to the production version and to the development version that people can try out.
01:04:07 - Anthony Campolo
Yeah, people should go to... actually, I could screen share and show you, if you want.
01:04:11 - Dan Shappir
Yeah, well, maybe we'll do it quickly. Although, again, for people listening on the podcast...
01:04:16 - Anthony Campolo
Yeah, I can. I'll speak through what's happening. It'll be quick; it won't take me very long to go through it. So you have an interface where you just
01:04:28 - Dan Shappir
Wait, before you continue: to our listeners, again, Anthony will be describing what he's showing, but if you really want to see it, you'll be able to find the video on YouTube, because we always release the episodes on YouTube as actual videos as well.
01:04:45 - Anthony Campolo
Yep. And then how do I get this off screen? Are you able to see the Riverside window right now?
01:04:54 - Dan Shappir
Yeah, yes, I am. Okay, I'll just drag it a little bit. Yeah, like that. Okay, great.
01:05:00 - Anthony Campolo
So you start by selecting either a file from your computer or a video URL. It doesn't have to be YouTube; it could be something like Twitch or Vimeo. Any kind of service works. And then, after you select the thing you want to process, it gives you the available transcription services and you select which one you want. There's a credit system, so depending on whether you use a more expensive transcription model or a cheaper one, you'll have different credit costs, and it's just per usage, so there's no subscription. It's pretty simple. After you select your transcription service, you select the prompt you want to use. There are a ton of prompts right now: things based around content, so different-length summaries and chapters; you can pull out quotes; you can create social media posts and blog posts; you can do business-related stuff like email campaign series and press releases; you can do creative stuff like songs and short stories; you can create educational material; and then stuff like personal and professional development. I'm going to add a feature where, when you hover over them, it shows you a quick preview of each so you can tell what it's actually going to give you.
01:06:22 - Anthony Campolo
That's just something I haven't done quite yet. Then you'll select your LLM. Do you have a question?
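The per-usage credit model Anthony describes could be sketched roughly like this. The tier names and rates below are invented for illustration and are not AutoShow's actual pricing:

```javascript
// Hypothetical per-usage credit estimate: cost scales with audio
// length and the transcription tier chosen. Rates and tier names
// are made up for illustration only.
const CREDITS_PER_MINUTE = {
  cheap: 1,    // e.g. a fast, lower-accuracy model
  premium: 3,  // e.g. a slower, higher-accuracy model
};

function estimateCredits(minutes, tier) {
  const rate = CREDITS_PER_MINUTE[tier];
  if (rate === undefined) throw new Error(`unknown tier: ${tier}`);
  return Math.ceil(minutes * rate); // round up to whole credits
}
```

The appeal of per-usage pricing here is that an occasional podcaster pays only for the episodes they actually process, with no subscription to manage.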
01:06:29 - Dan Shappir
No, just a thought or suggestion that eventually you may want to implement some sort of a plugin mechanism which will make it both easier for you to add new features without having to release new versions, but also make it possible for third parties to add their own plugins into your system.
01:06:46 - Anthony Campolo
So what would be a plugin there?
01:06:47 - Dan Shappir
Well, all those services that you showed before would be plugins.
01:06:54 - Anthony Campolo
Okay, but something would still have to deploy that, and then...
01:07:00 - Dan Shappir
Yeah, they would need to be somehow deployed securely into your system. I'm not saying it's trivial, but it would make the system a lot more extensible. Generally speaking, whenever something is a service that you can invoke from your system, you should always think about whether it can or should be a plugin, because, A, it decouples the infrastructure itself from that particular type of processing, and, B, it makes it possible for somebody else to effectively extend your system. Then you get into other interesting questions, like you said: how do I do something like that securely? How do I prevent data leaks? How do I monetize it? Stuff like that. But still, it opens up a lot of possibilities.
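In its simplest form, the plugin idea Dan describes is just a registry that decouples the pipeline from any concrete transcription or LLM service. A minimal sketch, with made-up names and none of the deployment or security machinery he mentions:

```javascript
// Minimal plugin registry: the core pipeline only knows the
// interface (a name plus a run() function), not the concrete
// services. Third parties could register their own entries.
function createRegistry() {
  const plugins = new Map();
  return {
    register(name, run) {
      plugins.set(name, run);
    },
    run(name, input) {
      const plugin = plugins.get(name);
      if (!plugin) throw new Error(`no plugin registered as: ${name}`);
      return plugin(input);
    },
    list() {
      return [...plugins.keys()];
    },
  };
}

// Usage: a stand-in transcription plugin (hypothetical name).
const registry = createRegistry();
registry.register('echo-transcriber', (audio) => `transcript of ${audio}`);
```

The pipeline then calls `registry.run('echo-transcriber', file)` without caring which vendor sits behind the name, which is exactly the decoupling that makes third-party extensions possible.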
01:08:54 - Anthony Campolo
Yeah, and I am planning on having the backend API of this exposed to other people. Someone like Chuck, if he wants to use this, wouldn't have to go through the front-end flow.
01:09:07 - Dan Shappir
Yeah, but that's using you as a service. I'm talking about you using other services to perform particular operations.
01:09:17 - Anthony Campolo
Yeah, it's possible for sure now.
01:09:24 - Dan Shappir
So, one of the issues that I recall people bringing up: when people that had no experience in software development at all vibe coded services that they actually sold or provided, they ran into all sorts of security issues. Silly things, like putting the AWS keys in the front-end code, stuff like that, because obviously the LLM doesn't care. Given that you effectively are not writing a lot of the code that you're generating, how are you preventing these sorts of things from happening?
01:10:10 - Anthony Campolo
Yeah, so can I actually finish the flow first and then I'll answer?
01:10:12 - Dan Shappir
Oh, yeah, sorry, sorry.
01:10:14 - Anthony Campolo
Yeah, no, it's all good. So then you select the model you want. We have ChatGPT, Claude, and Gemini. And then it gives you the show notes. This styling needs to be fixed; it'll be a little nicer. I actually ran this on JavaScript Jabber's first episode, so we can see what the output is. We have an episode summary here; I'll just read the first sentence: "This inaugural episode of JavaScript Jabber introduces the podcast and panel members, featuring AJ O'Neal, Peter Cooper, Jamison Dance, and Charles Max Wood."
01:10:46 - Dan Shappir
The old school guys. Yeah, and then it gives you the
01:10:49 - Anthony Campolo
chapters, and then if you keep scrolling, you'll see the prompt that was used. When I first shared this with my friend Scott, he thought that was a mistake. He's like, you know you're showing people the prompt, right? And I'm like, well, yeah, I want them to see the prompt that was used to generate it. He's like, but that's the secret sauce; why would they use your service when they could just use the prompt? And I kind of get where he's coming from, but there's all this other stuff beyond the prompt that the app does. The value is that it processes the video, connects to a transcription service, and connects to an LLM service. It does the whole thing. So I think there's still value in the app even if you do include the prompt. But that might be a different...
01:11:27 - Dan Shappir
I agree, because people also use these sorts of things in order to save time and effort. You know, they could copy the prompt and get some of the functionality, but at a significant expense of time and effort; the process would certainly not be as streamlined. So, for sure.
01:11:48 - Anthony Campolo
And then, this is a bug: the transcript should be here. That's something for me to fix. The transcript will be included along with the show notes. And then you can see some configuration, like what model you used and how many credits it cost you. The metadata includes the title, the YouTube link it came from, and the YouTube channel it's connected to. You can see the cover image from the show as well. A lot of this you could take and turn straight into a markdown file with front matter that becomes a web page for each of your episodes. That's kind of what I do on my website for all of my videos.
01:12:24 - Dan Shappir
It seems that LLMs have really been revolutionary for content creators. And that's yet another great example. Like at the end of the day, it's a person creating the content, but it's the LLM doing a lot of the chores and all these sorts of things around the actual content. Stuff that otherwise would have taken away time that could have been spent creating even more content.
01:12:51 - Anthony Campolo
Yep. So, going back to your security question: I do have enough experience to know basic security things, like don't put API keys in your front end, stuff like that. So that is where just having some dev experience obviously comes in handy. If you're not a dev at all and you're trying to launch a whole app, that is very risky, and I would recommend trying to find someone you could pay some small amount of money to do at least a very baseline security check for you. If you can't do that, spend a lot of time prompting your LLM to have it give you instructions for security testing your app. For higher-level stuff, like cross-site scripting and things like that, I am in the process right now of hardening the application. I didn't roll my own auth and I didn't roll my own payments, so I'm also leaning on those services to do some security for me, banking on Clerk and Stripe handling their end of the security, and then I have to just make sure that the app itself can't get hacked.
01:13:57 - Anthony Campolo
So that no one could get access to, like, your credits and the credit card that's attached to Stripe and things like that.
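The "no API keys in the front end" rule Anthony mentions boils down to keeping secrets in server-side configuration and sending the browser only a whitelisted subset. A minimal sketch, with hypothetical key names rather than AutoShow's actual config:

```javascript
// Sketch: secrets live only in server-side configuration (typically
// environment variables); anything sent to the browser passes
// through an explicit whitelist first. Key names are illustrative.
const PUBLIC_KEYS = ['appName', 'theme'];

function publicConfig(serverConfig) {
  const safe = {};
  for (const key of PUBLIC_KEYS) {
    if (key in serverConfig) safe[key] = serverConfig[key];
  }
  return safe; // never contains e.g. serverConfig.openaiApiKey
}
```

An explicit allow-list is safer than a deny-list here: a newly added secret is excluded by default instead of leaking until someone remembers to block it.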
01:14:04 - Dan Shappir
And the main advantage is obviously, aside from reducing that effort, the fact that there are a lot of people out there integrating with Clerk and with Stripe. So the LLM is not short on proper usage examples.
01:14:21 - Anthony Campolo
Yeah, totally. And this raises the value of these third-party services and leans even further into the don't-roll-your-own-auth kind of argument, which I see both sides of for sure. I'm not saying no one should ever roll their own auth, but for me and what I'm doing and what I'm building, it definitely makes more sense for me not to roll my own.
01:14:37 - Dan Shappir
Okay, we are running towards the end of the show. So before we wrap up, is there anything else you would like to say, either about the service you created or about vibe coding in general, or anything that we might have missed?
01:14:53 - Anthony Campolo
Yeah, I mean, I just encourage devs, or even non-devs, to try to work with LLMs as much as they can, and to really understand how they work, what you can do with them to get the most out of them, and what their weaknesses are. A lot of this you're only really going to get through experience. I personally find it really fun; I'm having more fun developing now than I ever have. One of my friends was like, it's so boring to code with an LLM, you just sit there and wait for it to write the code. And I'm like, well, you know, do some dishes in the middle. That's what I do: I literally do chores while I code. I'll run a prompt, and as it's writing code, I'll do a minute or two of chores and then come back to my computer. That's kind of how it works.
01:15:38 - Dan Shappir
Kind of reminds me of the old days when we would be running compilers. You know, I did a lot of coding in C and stuff like that, and on the old computers way back when, before we had the M1s, builds would take a while to run, and we would sit there and surf the web or whatever while the compiler was running.
01:16:03 - Anthony Campolo
There's that XKCD comic where two dudes are sword fighting in the office, and someone's like, hey, what are you doing? And they're like, we're waiting for our code to compile. And he's like, okay, yeah, I guess.
01:16:11 - Dan Shappir
So now we're waiting for the LLM to finish.
01:16:15 - Anthony Campolo
Yeah, I've actually made that same comparison, for sure. So, yeah, I would also say: try out different models, see which ones you feel work best for you, and don't feel like you have to dive headfirst into the entire AI space. Really, the most important thing is just finding a model that works for you and can be useful for you, and trying to work it into your day-to-day stuff, even if it's not coding. If you have some other task that you feel could be faster or automated, just throw it into an LLM and see what happens. They're getting better all the time, and they're connecting to more external services all the time. I would say don't worry about jumping into MCP yet. I think MCP is cool and it's going to be revolutionary, but it's super new, and it's probably not something that people should really jump into unless they like working with new tech while it's still in a changing, breaking state, you know.
01:17:03 - Dan Shappir
Yeah, cool. So we usually have picks, but to be honest, I don't have any special pick today. Do you have anything that you would like to shout out as a pick before we finish?
01:17:15 - Anthony Campolo
I mean, the things I mentioned throughout the episode: Repo Mix is cool; definitely check out Claude if you're looking to code, I think that's probably the best one right now in terms of price and features and speed and all that. And then check out AutoShow and let me know what you think. I just launched it about a week ago, so it's still pretty new and there may be some bugs and things like that; if any of that happens, just hit me up online. I can give you some free credits to test it out if you want. I'm ajcwebdev everywhere on the Internet: X, YouTube, LinkedIn, GitHub. So, yeah, check it out. Let me know if it's useful for you, and let me know if there are features you want me to build or prompts you want me to add. You'll have the ability to write your own custom prompts pretty soon if you don't want to use any of the built-in prompts. And yeah, that's pretty much it.
01:18:07 - Dan Shappir
Excellent. So, Anthony, thank you very much for coming on the show. I think you shared a lot of super useful information. I think we're literally watching a revolution, and you're a case in point. So thank you again, and to all our listeners, thank you for listening in, and see you next time.
01:18:26 - Charles Max Wood
Bye.