# SolidJS Agent Doc Audit with Dev Agrawal

> Anthony Campolo and Dev Agrawal explore generative engine optimization, applying Joel Hooks' agent discovery skill to the SolidJS docs site.

- **Collection:** Video
- **Published:** 2026-04-06
- **Author:** Anthony Campolo
- **Canonical URL:** https://ajcwebdev.com/videos/solidjs-doc-audit-dev-agrawal/
- **Markdown URL:** https://ajcwebdev.com/videos/solidjs-doc-audit-dev-agrawal/index.md
- **JSON URL:** https://ajcwebdev.com/videos/solidjs-doc-audit-dev-agrawal/index.json
- **Channel:** [Anthony Campolo](https://www.youtube.com/channel/UCpdzti0GURPfMjKzYK5FVSA)
- **Original URL:** https://www.youtube.com/watch?v=QFXyVBxy5D8
- **Original Label:** Watch original

---

## Episode Description

Anthony Campolo and Dev Agrawal explore generative engine optimization, walking through Joel Hooks' agent discovery skill and applying it to the SolidJS documentation site.

## Episode Summary

In the second episode of Web Devs at Night, Anthony Campolo and Dev Agrawal dig into the emerging practice of optimizing websites for AI agents rather than just search engines. Anthony introduces Joel Hooks' agent discovery skill, which audits a site for how easily AI harnesses can crawl, parse, and act on its content. They walk through the layered approach: a solid SEO baseline, markdown twins of HTML pages, JSON projections, sitemaps that include both formats, robots.txt entries for various AI bots, JSON-LD structured data, and accessibility roles that double as machine-readable hints. Anthony shares results from running the skill against his own blog and against the Solid Docs repo, surfacing gaps like incomplete sitemaps and missing per-page markdown projections. Mid-stream, Dev discovers a draft pull request from the Solid Base author Jerome that already implements much of what they were planning to contribute. They also touch on Claude skills versus AGENTS.md files, the limits of agent-driven browser automation, and how ad-supported sites may struggle as agent traffic grows.

## Chapters

### 00:00:00 - Kicking Off and Framing Generative Engine Optimization

Anthony and Dev open the second episode of Web Devs at Night with some banter about their odd sleep schedules before Anthony introduces the day's topic: generative engine optimization, or GEO. He credits Joel Hooks and his agent discovery skill as the inspiration, explaining that coding agents often need to fetch current framework documentation from the web and strongly prefer markdown over HTML soup.

The conversation establishes the core insight that making documentation AI-friendly starts with simple moves like exposing a markdown version of every page, then expands outward to sitemaps that list both HTML and markdown URLs. Dev frames the skill as essentially a checklist of heuristics an agent can run against any site, similar in spirit to a Lighthouse audit but aimed at agent readiness rather than performance.
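
The sitemap idea described above — every HTML page paired with a markdown twin, and both URLs listed — can be sketched framework-agnostically. This is a hypothetical illustration, not code from the stream; the `SITE` constant and slug are placeholders:

```typescript
// Sketch: emit sitemap entries for both the HTML page and its markdown twin.
const SITE = "https://example.com"; // placeholder domain

interface SitemapEntry {
  loc: string;
  lastmod: string;
}

// One logical resource, two projections: the canonical HTML page and index.md.
function entriesFor(slug: string, lastmod: string): SitemapEntry[] {
  return [
    { loc: `${SITE}/${slug}/`, lastmod },         // canonical HTML page
    { loc: `${SITE}/${slug}/index.md`, lastmod }, // markdown projection
  ];
}

function toXml(entries: SitemapEntry[]): string {
  const urls = entries
    .map((e) => `  <url><loc>${e.loc}</loc><lastmod>${e.lastmod}</lastmod></url>`)
    .join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n${urls}\n</urlset>`
  );
}
```

A site generator would call `entriesFor` once per page and concatenate the results, so the markdown twins never fall out of sync with the HTML listing.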

### 00:06:10 - Solid Base, Markdown Routes, and Touring Joel's Blog Post

Dev mentions that PowerSync docs already support appending .md to any page for a vanilla markdown view, and notes that Solid Docs are built on Solid Base, a static generator on top of Solid Start created by community members. The hope is that markdown projection primitives will land in Solid Base directly so individual doc sites get them for free.
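
Appending `.md` for a vanilla markdown view amounts to resolving the request path to the markdown source before the HTML renderer runs. A framework-agnostic sketch — the content-root convention here is an assumption, not Solid Base's actual implementation:

```typescript
// Sketch: map a URL path ending in ".md" to its markdown source file.
// The "content" directory convention is hypothetical.
function resolveMarkdownTwin(
  pathname: string,
  contentRoot = "content",
): string | null {
  if (!pathname.endsWith(".md")) return null; // not asking for the markdown projection
  const slug = pathname.replace(/\.md$/, "").replace(/^\/+|\/+$/g, "");
  return `${contentRoot}/${slug || "index"}.md`;
}
```

A route handler would serve the resolved file with a `text/markdown` content type and fall through to the normal HTML route when the function returns `null`.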

Anthony pulls up Joel Hooks' new joelclaw.com blog and walks through the agentic AI optimization checklist post. They look at how his own ajcwebdev.com now exposes markdown and JSON variants of every post, plus a robots.txt loaded with user agents for Perplexity, Cohere, OpenAI, and Meta crawlers. Bradley and other community members drop in via chat with relevant context about Solid Start 2.0 work.
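
The robots.txt pattern they look at pairs per-crawler user agents with sitemap pointers. A minimal sketch — the bot names below are commonly published AI crawler user agents, but verify each against the vendor's current documentation, and note that a markdown sitemap pointer is a convention from Joel's checklist rather than part of the sitemap standard:

```txt
# Allow major AI crawlers explicitly (verify names against vendor docs)
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: CCBot
Allow: /

# Everything else
User-agent: *
Allow: /

# Advertise both projections of the site map
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap.md
```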

### 00:13:14 - Crawl Policies, Extractable Fragments, and JSON-LD

Dev compares the approach to Lighthouse audits, observing that Joel skipped writing a tool and just wrote a blog post that any agent can execute. They discuss how much GEO actually overlaps with classic SEO, agreeing that writing genuinely useful content remains the foundation regardless of who or what is reading.

The pair work through Joel's seven-point guidance on rewriting pages into extractable fragments: aligned titles and H1s, descriptive subheadings, front-loaded answers, self-contained sections, Q&A blocks, measurable claims, and a single canonical source. Dev notes this reads like advice for chunking and embedding pipelines. They then cover JSON-LD as a standard way to label what a page is, and ARIA roles as accessibility signals that browser agents like OpenAI's Atlas now use to interpret pages.
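
Dev's observation that the seven points read like chunking advice can be made concrete: splitting a page on its H2 headings yields the self-contained fragments an embedding pipeline would index. A rough sketch, ignoring edge cases like headings inside fenced code blocks:

```typescript
// Sketch: split a markdown document into self-contained sections, one per
// H2 heading, keeping each heading with its body so every chunk stands alone.
function chunkByH2(markdown: string): string[] {
  const lines = markdown.split("\n");
  const chunks: string[] = [];
  let current: string[] = [];
  for (const line of lines) {
    if (line.startsWith("## ") && current.length > 0) {
      chunks.push(current.join("\n").trim()); // close the previous section
      current = [];
    }
    current.push(line);
  }
  if (current.length > 0) chunks.push(current.join("\n").trim());
  return chunks.filter((c) => c.length > 0);
}
```

Front-loading the answer under each heading means the first sentence of every chunk carries the claim, which is exactly what retrieval over these fragments rewards.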

### 00:25:00 - Three Projections, LLMs.txt, and Code Base Maps

Anthony reads through Joel's argument that HTML, markdown, and JSON should be three projections of a single canonical resource, not three separately maintained documents. Robots.txt advertises both XML and markdown sitemaps, LLMs.txt points to feeds and access patterns, and an API discovery route can expose the structure programmatically. Anthony shows his own JSON index covering hundreds of blog posts, podcasts, and video transcripts.
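
The discovery layer described above can be sketched as an llms.txt pointing agents at the machine-readable projections. The format follows the llms.txt proposal (an H1, a blockquote summary, then link sections); all URLs and descriptions here are placeholders:

```md
# example.com

> Placeholder one-line summary of the site for agents.

## Projections

- [Markdown sitemap](https://example.com/sitemap.md): the .md twin of every page
- [JSON index](https://example.com/index.json): metadata for all collections (posts, videos, podcasts)

## Feeds

- [RSS feed](https://example.com/rss.xml): recent posts
```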

Dev riffs on wanting a similar tool for code bases: a single command that dumps every file, its imports, exports, and symbol relationships into agent context, replacing the slow grep-and-read loop most agents currently rely on. They compare this to Cursor's embedding-based code search and agree both approaches probably have their place.
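
Dev's imagined code-base map — every file with its imports and exports, no bodies — could start as small as a regex pass over source files. A naive sketch (real tooling would use a proper parser; these regexes miss dynamic imports, re-exports, and default exports):

```typescript
// Sketch: extract import sources and exported symbol names from one source
// file's text. Regex-based, so deliberately incomplete.
interface FileMap {
  imports: string[]; // module specifiers this file imports from
  exports: string[]; // symbols this file exports
}

function mapSource(source: string): FileMap {
  const imports = [...source.matchAll(/import\s+[^'"]*['"]([^'"]+)['"]/g)].map(
    (m) => m[1],
  );
  const exports = [
    ...source.matchAll(/export\s+(?:const|function|class)\s+(\w+)/g),
  ].map((m) => m[1]);
  return { imports, exports };
}
```

Running this over every file and dumping the combined map into context up front is the structural alternative to the grep-and-read loop, complementing rather than replacing embedding-based search like Cursor's.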

### 00:30:56 - The Agent Discovery Skill Itself

Anthony pulls up the actual agent discovery skill markdown and walks through its four-layer model: search baseline, initial response truth, machine-readable projections, and operator ergonomics. Dev appreciates that the framing covers not just reading content but also operating sites on a user's behalf, which goes further than he expected.

They discuss the boring fundamentals the skill emphasizes: clean robots policies, proper sitemaps, structured data, and the principle that LLMs.txt is a hint service rather than a ranking hack. The skill also touches on Vercel's evals showing that skills alone can underperform compared to explicit AGENTS.md instructions for always-on guidance, which sparks a tangent.

### 00:38:34 - Skills Versus AGENTS.md and Code as Documentation

Dev unpacks the Vercel finding, arguing that if information needs to always be in context it belongs in AGENTS.md or the skill description rather than the skill body. He acknowledges the ecosystem is still figuring out where each kind of instruction belongs and that the current situation is messy.

Anthony shares that he prefers manually invoking skills rather than letting the agent auto-select them, and that he has deleted his AGENTS.md files entirely, betting that a consistent code base teaches the agent the right patterns through example. Dev agrees, pointing out that markdown instructions inevitably drift out of sync with code and end up confusing agents more than helping them. They both express skepticism about agents operating browsers reliably today.

### 00:44:12 - Auditing Solid Docs and Discovering an Existing PR

Anthony walks through the Codex and Claude reports he generated by running the agent discovery skill against the Solid Docs repo. The audits flag an XML sitemap that only covers the core doc set while missing three other doc projects, broken locale detection in shared helpers, no per-resource markdown twins, and missing canonical URLs and JSON-LD in the head.

Mid-discussion Dev gets pinged by Jerome, the author of Solid Base and Cobalt, pointing to an open pull request that already adds LLMs.txt, sitemap, and markdown routes to Solid Docs. Anthony hands off screen sharing while taking a quick break, and Dev explores the PR, confirming that appending .md to doc pages on the preview deployment now works.

### 00:54:00 - Frameworks Absorbing Complexity and the Ad-Driven Web

Returning, Anthony celebrates that the Solid Base team is already ahead of the curve and plans to rerun the audit against the PR before the next stream. Dev makes the broader point that just as frameworks have absorbed SEO complexity over the years, they should eventually absorb agent optimization too, so authors only have to write components and markdown.

The conversation drifts to how agent traffic threatens ad-supported sites. Dev mentions Wikipedia's growing donation banners and fandom wikis getting worse with every visit, while Anthony brings up recipe sites with stacks of mobile pop-ups and impossible-to-click X buttons. They agree that for documentation, ecommerce, and content sites there are concrete paths forward, but the standards will keep shifting rapidly.

### 01:00:00 - Wrap-Up, Power Chat Demo, and Next Steps

Anthony floats topics for the next stream including evals against the Solid Base PR, a Solid 2.0 migration skill seeded by Bradley's blog post, and inviting Jerome and Bradley on as guests. He praises the Solid community for catching gaps before he could even file them and reaffirms his commitment to building on Solid.

Dev shows off Power Chat, a multiplayer AI chat app he is building on Solid Start, Neon, and PowerSync where agents are first-class participants and streaming responses sync across multiple tabs and devices in real time. Anthony offers to feature it in an upcoming blog post about sync engines for agents, and they wrap up the roughly hour-and-seven-minute episode, planning to meet again in two weeks. The stream ends at 01:06:49.

## Transcript

[00:00:00] - Anthony Campolo
Hello, everyone. Welcome back to the second episode of Web Devs at Night. I think that's what I'm sticking with for the title. What do you think of that one?

[00:00:16] - Dev Agrawal
Honestly, it's way too early in the day for me to consider it night, but that's just me and my odd schedule.

[00:00:26] - Anthony Campolo
It's funny. You and I are on a similar schedule, it sounds like, because I'm working the Indian time zone right now and you're just living that dev life.

[00:00:36] - Dev Agrawal
Yeah, something like that.

[00:00:40] - Anthony Campolo
We were messaging each other last night, like 4 a.m. or something, right?

[00:00:47] - Dev Agrawal
Yeah, I'm pretty sure I've got a decent amount of sleep between then and now.

[00:00:54] - Anthony Campolo
Word. Cool. So is that Consent?

[00:00:57] - Dev Agrawal
Clerk.

[00:00:59] - Anthony Campolo
Clerk, that's right. I was just thinking because Chris is having launch week this week, so Clerk was on my brain. I gotta get him back on the stream to talk about that. But today we're gonna have a fun topic. This is going to be me doing my best to give back to the Solid community and try to really make their docs bulletproof, both for humans and for the AIs. So there's this term now, generative engine optimization, which is like, how do you make your thing... I know, stupid term. But I've been starting to follow some of these people, and what originally got me onto this train of thought was Joel Hooks, absolute legend. He's the guy who created Egghead. He has been really all in on AI. And he created this skill called Agent Discovery, which will basically audit your whole site to see if it is easily discoverable and searchable for agents, specifically, because you think about it, we're having our agents write all this code for us.

[00:02:26] - Anthony Campolo
Usually they won't have all of the current versions of whatever framework you're working in in their knowledge base. So they need to reach out to the internet to find content, and they're able to do that. They can parse HTML and stuff, but that's not really their natural language. They're much better if they can just find a markdown file or something. So the first level is what you and I talked about last time, which is you just need a copy-to-markdown button or a "view this page as markdown," because having that link there will be easier for your agents to discover that content. And then from there it expands out because then you think, okay, that needs to be on your sitemap. So your sitemap needs to have both your HTML pages and your markdown pages. And then there are a lot of other things, structurally, that can go into your site that will make it more optimized for agents. So what are your thoughts on all that?

[00:03:29] - Dev Agrawal
So far it sounds like it goes through a bunch of different heuristics or known things. It has a list of things that it would check. So it would check if it's markdown-accessible or not, check if there's a sitemap and a few other things, and then validate how many of these things your site is doing to be agent-friendly. Is that what this agent optimizer is doing?

[00:04:01] - Anthony Campolo
Yeah. Let's just hop right into it.

[00:04:04] - Dev Agrawal
Sure.

[00:04:04] - Anthony Campolo
So I'm going to share this link real quick. Forgive the infinite scroll, everybody. Here is what we're going to be looking at right now. This is actually a new site Joel created called joelclaw.com. I think these are blog posts that he's been writing with his OpenClaw or something.

[00:04:27] - Dev Agrawal
Yeah, his OpenClaw has been writing these, basically.

[00:04:30] - Anthony Campolo
Yeah, but they're really good. They're really useful. So I found this post because he tweeted, "Point your clanker at this. It works pretty well." And I was like, what's this? And he has this "copy for your claw" button, which I think will just turn it into... okay, so that's cool. It gives it the link to the markdown. So we see here this is already what we'll be looking at a little bit. We've got this Agentic AI Optimization Implementation Checklist page. And then if you just add `md` to it, you instantly get this. So I really like that convention. I'm jumping around a bit here, but I did this for my site. All of my blog posts now have an "open as markdown" page. They open as an `index.md`. I might actually change that to make it a little nicer. But I basically pointed my own blog at this. And then if you do copy, you get a similar thing. It gives you some metadata at the top. That's pretty cool. It also ships as JSON.

[00:05:53] - Dev Agrawal
Nice.

[00:05:55] - Anthony Campolo
Some of these things are just stuff that, like I said, I pointed my clanker at his blog post and had it do a bunch of stuff. My sitemap now points to all of these as well.

[00:06:10] - Dev Agrawal
Let me actually... we have this on the PowerSync docs, where if you put `.md` on any of the documentation pages it just gives you a vanilla markdown version. We don't have it on the Solid Docs yet. I think we talked about it at some point, and the idea was that there is a framework that the Solid Docs website is built on top of called Solid Base, which is a static generator on top of Solid Start. The idea was that this ability to do things in markdown would be added as primitives into Solid Base at some point so that we wouldn't have to do much work on the Solid Docs side. So Solid Base was one of my contenders at SolidHack 2024, where I made [unclear] and stuff like that. This is built by a couple of people in our community.

[00:07:18] - Anthony Campolo
Okay, this is great. I'm really glad that I know this exists now because I had been going back and forth on the best way to do static sites with Solid Start. You have your MDX template, but it didn't quite do it for me. I ended up building a custom static site generator with Bun for my own blog because I've always wanted to build my own static site generator, but it always seemed like something that wouldn't be worth the effort. Now, since we just have our AIs write all this crap, I finally started doing this. But now that I know this exists, I might check this out, because this is also why I wanted to have this stream with you.

[00:07:59] - Dev Agrawal
Makes sense.

[00:08:00] - Anthony Campolo
Because I wanted to know what I should actually be looking to upstream that's going to be useful. And hey, we got two people in the chat. That's awesome. "Yo, what we doing today?" That's what we're talking about right now. We're going to be making the Solid Docs AI-friendly. We also got friends of the show. Hey Bradley, I was going to shout out a post you made. This is really useful to me, so thank you for writing your things learned migrating to Solid 2.0, because I'm working on a Solid migration skill and I'm going to feed this into it as context. So if you ever want to get on the stream, Bradley, I would love to chat with you. I think you and I are very aligned in what we're trying to do here with Solid.

[00:08:54] - Dev Agrawal
Yeah, Bradley was just on Ryan's stream last week to talk about Solid Start 2.0.

[00:09:00] - Anthony Campolo
Oh, great. I need to watch that one. I was sleeping, so I can't usually watch them when they're live, but that sounds great.

[00:09:11] - Dev Agrawal
Yeah, honestly, I haven't been able to catch many of Ryan's live streams in the last few weeks.

[00:09:18] - Anthony Campolo
So is Bradley on the team or is he just a community contributor?

[00:09:25] - Dev Agrawal
He got a fellowship at some point to work on Solid Start, which came out of Anomaly, or Dax's sponsorship for Solid Start.

[00:09:38] - Anthony Campolo
Awesome.

[00:09:39] - Dev Agrawal
Multiple people got funded to help out with Solid Start around 2.0. Bradley was one of them. And at the same time Attila was brought onto the core team for Solid Start. So he's been helping out a ton with Solid Start work and TanStack Start as well, TanStack Router Solid integration and all that.

[00:10:05] - Anthony Campolo
Dope. I'll probably shoot him a DM after this is over, see if he wants to get on the stream sometime.

[00:10:14] - Dev Agrawal
Hell yeah. He might know more than me when it comes to Solid 2.0, migrating stuff over, what writing code looks like for it, Solid Start, and all that stuff.

[00:10:25] - Anthony Campolo
Yeah. Okay, so I wanted to first go through this blog post. We're not going to just read it, but I want to scroll through it a little bit. He has a couple different skills here. This one, Agent Discovery, is the one that we'll be drawing from. The first thing he goes into is crawl policies. This is super-detailed minutiae about how different bots are going to read your page and whether it's like a `robots.txt`. I'm sure you know about that. `robots.txt` is something that Google's search engine crawler will look at to decide whether to crawl or not, and then what pages. It will usually use that to find your sitemap. So if I went to `ajcwebdev.com/robots.txt`, you'll see I have a bunch of crap here because I pointed his skill at this. I've got my normal sitemap, and there's also a sitemap now in markdown. Really, a whole bunch of this is basically just making everything available in markdown files. That's a huge portion of this.

[00:11:40] - Anthony Campolo
And now, because of the beauty of standardization, every single one of these has their own bots. You have your Perplexity bot and your Cohere bot, your OpenAI bot, your Meta bot. All of these need their own user agents so they can learn how to crawl your site. But for the most part, they're going to be looking at your sitemap. Now I have all of these links. This is like `llms.txt` in the sense that it's just a searchable map of my whole site. Normally you would see this in XML variations. I think the bots should be able to read XML and markdown pretty much just as well, but for some reason they just seem to really like markdown. So that was the first thing it has you do, make sure your `robots.txt` has all of this stuff in it. That's what this first section is for, shipping an explicit crawl policy. I just allowed everything because I want all of the bots to have access to everything I've written.

[00:13:00] - Anthony Campolo
Obviously, some people want no bots to train on anything they've written, and that's fine. That's their decision to make. And this is where they could explicitly forbid all of the bots from reading their site.

[00:13:14] - Dev Agrawal
This is giving me an interesting feeling where it almost reminds me of a tool like Lighthouse. You point it to a URL and it does a bunch of audits. But instead of writing all the code to do that, he has just written a blog post that describes what to do, and you just point your agent to it and it goes and does it instead.

[00:13:39] - Anthony Campolo
Totally. And we'll look at the actual skill itself too, because the skill basically takes this blog post and puts it into a format where your agent can just run it. But what you're saying about Lighthouse is really interesting because right now Lighthouse looks for performance issues and also accessibility issues and SEO issues. But by the nature of the tool and how long it's been around, it doesn't have a way to just say, "Is this AI optimized?" Which is where this blog post and skill fill in those gaps.

[00:14:18] - Dev Agrawal
Another thing I'm curious about is whether, other than having a markdown variant of your actual documentation pages, is there that much of a difference between SEO and GEO? Because the way that agents retrieve information is by search. They have a search tool and they use it very much like humans do, giving it an input string. The only difference is that agents could give it a much larger search string, and they can process much larger result sets, whereas we would give it smaller search strings and look at maybe just the top five instead of the top 100 results. But fundamentally it's still search. So what exactly is the difference between SEO and GEO other than having a markdown variation of the site?

[00:15:24] - Anthony Campolo
I would definitely say that it is going to overlap more than diverge, because at the end of the day, and this is what every expert in this area will always say, the most important thing is that you actually write useful content. People actually want to land on it and have it give them the information that they were searching for. I think that is still going to be the case even for the agents. In terms of the actual differences, we'll probably get more into that as we keep going through this blog post. But I think you're hitting on something important, which is that there's not really that big of a difference. Okay, we got another person. Kizzy Gamer6895, thanks for joining. We got a bit of a crowd today. That's nice.

[00:16:15] - Dev Agrawal
One thing I just thought of is that you could maybe do some quick evals, not for "do you show up in the search results or not," but trying to verify that if I ask an agent this question and it goes out and searches a bunch of information, is it able to give me the right information and look at the right resource? If it's not, then we need to keep tweaking things until it's able to find that resource and extract the information from it correctly while also having all the other search results in its context. So maybe SEO is more about, can search engines point to your content in the vast web of internet content? And maybe optimizing for agents is more like, if you dump a bunch of search results into the context of an LLM and ask it the specific question for a single page, can it find that? I know this is a bit more trying to theoretically categorize things, which is something my brain just cannot resist doing sometimes. It's almost annoying.

[00:17:37] - Anthony Campolo
But you're hitting on something really important, which is the evals. How do you actually test to see whether your AIs are performing better or not? I just made a note that when we get back here... so for the viewers, we're planning on making this a biweekly stream, every two weeks, same time, Monday at 7:30 Central Time. Evals would be a really good thing to do because I would like to run some evals on the Solid Docs before I push up a PR so that we can see, is this actually improving things or not? I'm mostly going off trusting Joel that he has done the work to actually validate what he is telling us to do here. But it will be good for us to do some downstream verification ourselves.

[00:18:29] - Dev Agrawal
Yeah, and knowing Joel, he knows that stuff very well.

[00:18:34] - Anthony Campolo
He's one of the people I respect the most of anyone in the tech scene. So let's keep going into this blog post here. This is a good example of what I was saying in terms of GEO being essentially the same as SEO. Oh, thank you.

[00:18:58] - Dev Agrawal
Cool.

[00:18:59] - Anthony Campolo
We've got "rewrite pages into extractable fragments." This is: make sure that your title, meta description, H1 are all focusing on the same thing. The page is broken up into sections with descriptive H2 and H3 headings. All of this is the type of advice you would have gotten just for basic SEO.

[00:19:23] - Dev Agrawal
Right.

[00:19:24] - Anthony Campolo
And then it says, "front-load the answer in the first sentence under each heading." "Write sections so they still make sense when copied out of context. Use Q&A blocks, numbered steps, bullets, and tables for facts that should be cited. Replace vague adjectives with measurable claims. Put the canonical answer in one place. Duplication creates conflict." I love this. There's so much gold just in this section. All of these would apply to regular SEO, not just for agents. So there is definitely a lot of overlap there.

[00:20:01] - Dev Agrawal
Honestly, reading through these seven points, it sounds very much like he's trying to optimize for documentation pages to be chunked and embedded and retrieved. Write them into extractable fragments so that it's easier for people's chunking models to break them down.

[00:20:22] - Anthony Campolo
Yeah, and like you said, I think his OpenClaw is what was writing a lot of these blog posts. So it's almost like the bots themselves are making their own preferences known to us through these types of posts.

[00:20:36] - Dev Agrawal
Yeah, but this is a good example of something that inclines toward the concept of, I put a bunch of context, a bunch of search results into an LLM's context, how well would it do at answering one single question which is in one or two of those search results? I don't think rewriting into extractable fragments... okay, aligning title, meta, and H1, those things definitely help the crawlability with search engines and SEO cards. But break each page into headings, front-load the answer... a lot of these sound like they're going to help the LLM reading your page better understand what it's about.

[00:21:37] - Anthony Campolo
Yeah.

[00:21:38] - Dev Agrawal
Okay, that's really nice.

[00:21:40] - Anthony Campolo
Cool. Let's move on here. Do you know what JSON-LD is?

[00:21:49] - Dev Agrawal
Is that one of the many supersets of JSON?

[00:21:54] - Anthony Campolo
Yeah, I think so, because this is some JSON schema stuff. "Use a schema to label what the page is."

[00:22:05] - Dev Agrawal
Or linked data. Yes. I ran into this when I was looking into a bunch of graph RAG stuff. So it's like a small addition to JSON to make it possible to have a bunch of JSON files that have links to each other.

[00:22:23] - Anthony Campolo
Okay, so I think that might have been what I had here then. Let me go back to the robots...

[00:22:40] - Dev Agrawal
But this one is actually compatible with standard JSON. It's not like JSONC, where if you remove the C then it breaks the parsers.

[00:22:51] - Anthony Campolo
Okay, that's good. Here is... I'm not sure if this is JSON-LD or not, but this is one of the things that was created. This is a JSON representation of my page. You have your metadata: title, slug. I'm using collections. I have a blog collection, videos collection, podcast collection. Then you have your description. I think summary and description are the same. Author, which is me. Source URL, published, how fresh it is, and then more URLs: the HTML URL, the markdown URL, and the JSON URL, which we're looking at right now. Then discovery... okay, cool. This is funny. I didn't even realize I have an `llms.txt` now.

[00:23:45] - Dev Agrawal
Nice.

[00:23:46] - Anthony Campolo
Cool. Okay, great. So that's the JSON-LD. So that makes sense.

[00:23:55] - Dev Agrawal
There we go. Yeah, the `@context`, `@type`, the `@id`... that's what JSON-LD adds.
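
The keys Dev is pointing at come from the JSON-LD syntax; a minimal example labeling a blog post with schema.org vocabulary might look like the following (all values are placeholders). It would be embedded in the page head inside a `<script type="application/ld+json">` tag:

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "@id": "https://example.com/posts/example/",
  "headline": "Example post title",
  "author": { "@type": "Person", "name": "Example Author" },
  "datePublished": "2026-04-06",
  "url": "https://example.com/posts/example/"
}
```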

[00:24:06] - Anthony Campolo
Cool. Just learned something new there. We got some accessibility stuff. OpenAI says that Atlas uses ARIA roles, labels, and states to interpret pages. That's interesting. I didn't know that.

[00:24:25] - Dev Agrawal
Yeah, I think a lot of the agentic harnesses do this as well these days.

[00:24:29] - Anthony Campolo
That makes sense because that would be a predetermined way to extract content beyond the HTML and div soup that you're going to get from most sites.

[00:24:42] - Dev Agrawal
Yeah, it's a win for everyone who used to shout "semantic HTML, semantic web."
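
The ARIA signals they are describing are ordinary markup; the same attributes that serve screen readers give a browser agent labeled, stateful handles instead of div soup. A small hypothetical fragment:

```html
<!-- Landmarks and states a browser agent can read directly -->
<nav aria-label="Docs navigation">…</nav>
<main>
  <button aria-expanded="false" aria-controls="install-steps">
    Show install steps
  </button>
  <section id="install-steps" role="region" aria-labelledby="install-heading" hidden>
    <h2 id="install-heading">Installation</h2>
  </section>
</main>
```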

[00:24:48] - Anthony Campolo
Totally. I've had a lot of those people on my streams and I've always agreed with them. That's really great. I've started putting in some Axe and Lighthouse verification scripts into my projects to start catching some of this stuff on my site and also AutoShow. We won't go too deep into that because that's its own stream entirely. But accessibility sounds like it's useful for the AIs as well. Let's see what we got here. "Expose machine interfaces and operator-friendly paths." This is where traditional SEO stops being enough. A page can be indexable and still suck for agents if an operator using Pi, OpenCode, or Claude has to scrape HTML, guess MIME types, or improvise the next step. UX is still broken. "Fix the basics first. Don't start by bolting MCP onto a product that still forces agents to scrape HTML, guess MIME types, and improvise the next step. If your routes lie, your headers are wrong, and your JSON is just a blob, a new protocol won't save you." Amen, brother. Yes. Boring baseline is what matters. One canonical source of content, truthful projections, explicit discovery services, and machine interfaces that tell the harness what to do next.

[00:26:09] - Anthony Campolo
That's the pattern in Joel Claw. Okay, so this is what we were looking at before. You're going to have an HTML page, a markdown page, and a JSON page. Right here: "HTML, markdown, JSON should be projections of one resource, not three separately maintained documents," which is exactly what we were looking at. You have these three URLs: your main HTML URL, which is your actual blog post page; then your markdown URL, which is the same content but in markdown format; and then your JSON, which does not actually have the content itself. It's just a higher-level representation of the metadata, it looks like. Cheap discovery services beat guesswork. `robots.txt` advertises both the sitemap XML and the sitemap MD. We saw that on my `robots.txt`. We have both of those right here. Then `sitemap.md` lists human URLs, feeds, architecture decision records, ADRs. I learned that at my job that I'm doing right now. We have a whole section of ADRs and markdown twins. `llms.txt` points to the markdown sitemap feed and markdown access patterns. API access discovery route for the structure. I don't think my site has...

[00:27:48] - Anthony Campolo
Pretty sure I don't have an API. The API routes to `index.json`, which is...

[00:27:58] - Dev Agrawal
Basically like documentation for the API itself.

[00:28:03] - Anthony Campolo
It looks like it's basically just another sitemap, actually. It gives each page. Blogs, videos, podcasts, like I said, I have these three different collections. And then it gives the headers for each.

[00:28:24] - Dev Agrawal
Okay.

[00:28:25] - Anthony Campolo
So it gives you almost everything except the content itself.

[00:28:30] - Dev Agrawal
That makes sense.

[00:28:31] - Anthony Campolo
Yeah. And this is obviously going to be a massive page because I have like 500 pages now on my site. I finally got all of my podcasts and videos I've ever done with full transcripts on my site. My blog 50x'd over the course of the last couple of weeks.

[00:28:52] - Dev Agrawal
Lovely.

[00:28:54] - Anthony Campolo
All right, cool.

[00:28:55] - Dev Agrawal
I feel like the next thing I want to give my coding agents is the ability to call a single tool and get a very detailed map of my entire code base instead of having to grep around a bunch of things, because almost every time it's going to look at a bunch of different files anyway to figure out how things fit with each other. Maybe a single tool that can go through all my files, all my import statements, and just dump that entire context. Everything but the actual code: all the files, where they are located, what other files they import from, what symbols they import, and what symbols they export. That might be a useful thing to just dump into the agent context at the very beginning.
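A minimal sketch of the tool Dev describes, assuming a TS codebase and naive regex parsing; a real implementation would use the TypeScript compiler API or an AST parser rather than regexes:

```python
import re
from pathlib import Path

# Naive patterns: good enough to illustrate the idea, not production parsing.
IMPORT_RE = re.compile(r"import\s+(?:(.+?)\s+from\s+)?['\"]([^'\"]+)['\"]")
EXPORT_RE = re.compile(r"export\s+(?:const|function|class|let|var)\s+(\w+)")

def map_file(source: str) -> dict:
    """Extract import specifiers and exported symbols from one module."""
    imports = [{"symbols": (sym or "").strip(), "from": mod}
               for sym, mod in IMPORT_RE.findall(source)]
    exports = EXPORT_RE.findall(source)
    return {"imports": imports, "exports": exports}

def map_project(root: str) -> dict:
    """Walk the tree and build the whole-codebase map in one pass.

    Only scans .ts files here; extend the glob for .tsx/.js as needed.
    """
    return {str(p): map_file(p.read_text())
            for p in Path(root).rglob("*.ts")}

example = "import { createSignal } from 'solid-js'\nexport function Counter() {}"
print(map_file(example))
```

Dumping `map_project(".")` into the agent's context at session start would give it the file layout plus the import/export graph — everything but the code itself.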

[00:29:45] - Anthony Campolo
Yeah, like a metadata-enhanced `tree` command. That would be super useful. I imagine the harnesses probably have a way to do that in a bespoke way already. But having a standard way to do that would be super useful.

[00:30:08] - Dev Agrawal
Possibly. I think some harnesses just tend to rely on embeddings. I know Cursor does code base indexing through embeddings and then gives the agent the search ability. Just knowing how the different files link with each other, import and export from each other, might be more helpful than semantic search over the code base. I guess they'll both be differently useful.

[00:30:40] - Anthony Campolo
Yeah, totally. These last couple sections here are just how to test and measure and check for regressions. And then it also says, if you sell products, make your checkout agent-ready so your agents can buy stuff.

[00:30:56] - Dev Agrawal
That makes sense. And then "checkout pages, agent-ready," is that with agents controlling a browser to do the checkout, or...

[00:31:06] - Anthony Campolo
I would assume so. Or at least maybe get all the way up to the point where a user would just have to click checkout. It could fill in the products it wants to buy, and maybe even your credit card and stuff like that, but then the human has to click it at the end.

[00:31:21] - Dev Agrawal
I'm not entirely sure. Everything but the security code.

[00:31:25] - Anthony Campolo
Yeah, this obviously would not apply to my site or SolidJS.

[00:31:31] - Dev Agrawal
Maybe to the merch store.

[00:31:34] - Anthony Campolo
Oh, there's a Solid merch store.

[00:31:36] - Dev Agrawal
Yeah. They've got a "Got Signals?" shirt.

[00:31:39] - Anthony Campolo
That's right, the Solid store. We've talked about this before. I don't have any stickers on my laptop, so I could use "Got Signals." I should get one of those too. Twenty-six bucks for a T-shirt.

[00:31:57] - Dev Agrawal
It's a really nice T-shirt.

[00:31:59] - Anthony Campolo
The letters won't fade in two months?

[00:32:04] - Dev Agrawal
No, I've had it for a little over a year now.

[00:32:11] - Anthony Campolo
Okay, let's move on over to what I've generated here. Actually, hold on, let me pull up the skill itself.

[00:32:22] - Dev Agrawal
So you've already run these against the Solid Docs repo, right?

[00:32:26] - Anthony Campolo
Yeah, I want to first show what the skill itself is. So we have that context.

[00:32:32] - Dev Agrawal
Makes sense. Another thing I found interesting about this blog post is that it's almost using skills the way we might use a library. Here's a set of instructions for this one thing. I'm not going to dump all of this in this blog post. You can go and read that. So it's using skills as reusable instructions, not for agents, but for other pieces of content.

[00:33:01] - Anthony Campolo
Yeah, I've been really interested in skills. I think it's just the right abstraction level for what we're trying to do here. I've been glad to see people putting out stuff like this. This is a skill that I used and got a ton of value out of just from the very first time I ran it. It's all just in a markdown file. It's called Agent Discovery. As he said in the blog post, there are a couple of these that do different stuff. The Agent Discovery one pulls all that into a single skill. So if you want to run one skill that's going to give you essentially everything you need to know, that'll have it. It's saying here this is not just about reducing it to AI SEO, it's about making a page able to be crawled, cited, parsed, navigated, and actioned by an agent harness. It's going to go into a lot of the things that we already talked about. It's treating it as four stacked layers, the first being the search baseline.

[00:34:08] - Anthony Campolo
Having your pages be crawlable with a clean robots policy, sitemap, and structured data is the very first thing that's going to be important because, as you're saying, the agent has to reach out into the world to find what it needs. Having a map like this is like creating the map for the AIs so they know where to go when they walk out into the world. Then "initial response truth": the important facts must be present in the first HTML response, or markdown response, I would assume, if it's finding a markdown file. "Machine-readable projections": markdown text, JSON services that project the same canonical resource without drifting. That's important. You just have one source for the content, and then these different pages are being generated from that initial source of truth. And then "operator ergonomics": the site should be easy to drive from Pi, OpenCode, Claude Code, ChatGPT, or a browser agent without guesswork. If layer one is broken, you won't get found. If layer two is broken, agents won't extract the truth. If layer three is broken, harnesses waste tokens scraping HTML.

[00:35:24] - Anthony Campolo
If layer four is broken, operators and browser agents hit dead ends. Cool.
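For the search-baseline layer, structured data typically means a JSON-LD block in the page head. A minimal sketch for a blog post — all values are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Example Post",
  "author": { "@type": "Person", "name": "Jane Author" },
  "datePublished": "2026-01-01",
  "url": "https://example.com/blogs/example-post/"
}
</script>
```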

[00:35:29] - Dev Agrawal
So this is going through the progression of SEO to GEO that we were talking about earlier, but also goes a step ahead where it's not just about agents being able to read content, but also operate the website on behalf of the user. That's the part that I did not know we were going to get into here. But that's really good to know that that's also taken into account.

[00:35:53] - Anthony Campolo
Yeah, definitely. That's where stuff like checkout comes into play. It's saying here, "Use this skill with a task that mentions agent SEO, AEO, GEO, LLM SEO, and AAIO," which is the term he was using in his original URL, AIO Implementation Checklist. So agentic AI...

[00:36:15] - Dev Agrawal
Yeah, so it's not just the read path, it's also the write/operator path. Agentic AI makes more sense.

[00:36:25] - Anthony Campolo
Yeah, this is cool. This skill is like its own blog post in itself. I like how this is being organized. It says, "Do the boring SEO work first." Start with your `robots.txt`, your sitemap, titles, descriptions, headers, JSON-LD, author, date, source, pages being refreshed. Then "initial HTML is the truth surface." One resource, multiple truth projections. `llms.txt` is a hint service, not a ranking hack. Interesting. "`AGENTS.md` beats hoping skills trigger for coding-agent services. Persistent repo context matters more than wishful tool invocation." Okay, here it gets to evals. Vercel's evals are useful here. "Skills alone underperform for general framework guidance. Explicit instructions improve triggering, but wording was fragile. Compressed `AGENTS.md` repo instruction context won for broad always-on guidance." Does that match with your experience?

[00:37:42] - Dev Agrawal
Not necessarily my experience, but I understand the point here. `AGENTS.md` is the thing that will always be loaded. So if there are certain instructions that you always expect the agent to know, then `AGENTS.md` is the right place to put it. If you put something in a skill and you're expecting the agent to always reach out for that information, then maybe it never belonged in the skill in the first place. You should put it in the `AGENTS.md`. But then skills also have front matter, which is always loaded. So if you need some information that's always supposed to be in the agent's context, you can also just move it to the skill's description field, which we now...

[00:38:34] - Anthony Campolo
To be clear, for people who may not be up on skills, we see here at the top each skill will have a name, which is how it's invoked with your slash command, and then a description of what it is. This whole thing is actually loaded each time along with your `AGENTS.md`.

[00:38:57] - Dev Agrawal
I think actually the name and the description are the only two fields that are required to be loaded according to the spec. But the harness might decide to load the other ones as well.

[00:39:08] - Anthony Campolo
Right, you're right. I actually wrote a blog post about this, so I know for sure. The name and description are the things that apply across all... we see here actually, that's why these four are highlighted. "Attribute `display_name` is not supported in Skills file." These are what are actually supported.

[00:39:32] - Dev Agrawal
Right, which is fine. This is markdown front matter. It's not going to break anything if you have extra fields there. But my point was that I think I've read this, I don't know if it was a blog post or a conference talk, where Vercel talked about this, but to me it was weird that if you have some information that you expect the agent to always have, then it was a bad idea to put it in the skill anyway. That should be in `AGENTS.md`. And if you're writing a skill and your concern is that the agent might not reach out for the skill when it needs to, that just means you don't have a good enough skill description because that's how you actually optimize that. The standard itself has given you a place to do that, but you're choosing to ignore it and completely discard the idea of writing a skill in favor of putting things in `AGENTS.md`.

[00:40:41] - Dev Agrawal
It's not a very clean situation where, put this in `AGENTS.md`, put this in description, put this in the skill. We are still figuring these things out, and it's gonna be a while.
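For reference, skill files are markdown with YAML front matter; per the spec, only `name` and `description` are guaranteed to be loaded into context, which is why the description carries so much weight for triggering. A minimal sketch (values invented):

```yaml
---
name: agent-discovery
description: Audit a site for how easily AI harnesses can crawl, parse,
  and act on its content. Use when a task mentions agent SEO, GEO, AEO,
  or llms.txt.
---
```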

[00:40:55] - Anthony Campolo
Yeah, this is probably the thing I dislike most about skills, and I for the most part prefer to invoke a skill manually myself when I want to use it. I don't really like having the agents just randomly decide to use a skill when it thinks it's relevant to a certain task. Every now and then I'll see it pull up a skill and I'm like, "Oh no, that's actually not what you want." Because I have a ton of skills now that do all sorts of different crap, and they're not necessarily correct for every context. So this has been in the back of my mind as a thing I want to do, turn that off so that it doesn't pull in skills unless I specifically tell it to. I've also deleted all my `AGENTS.md` files. I saw the vibes turning against the `AGENTS.md` files.

[00:41:54] - Anthony Campolo
To me, what you really want is your code base to be consistent enough that the agent will just know the correct thing to do by looking at the code. I think that is actually where all this is going. If you have a really messy code base, you're relying on your `AGENTS.md` as a hack to make it do the correct thing when really it should be able to infer the right thing from the code itself.

[00:42:21] - Dev Agrawal
Correct. Also, the code itself will always be the most up-to-date and most accurate piece of information. A markdown file that you need to manually update after code changes is always going to lag behind. Even if you do your best to put a bunch of automations and Cursor hooks and OpenCode hooks in place to make sure they always get updated, it's the distributed systems problem where it's going to get out of sync. That's the only thing that we can guarantee. And at that point it's going to confuse the agent even more rather than helping it.

[00:43:04] - Anthony Campolo
Yep, totally. Just the last couple of things here. We already covered this in the blog post. Accessibility is agent UX, and then operator UX matters too. This is what you've been talking about. An operator using an agent harness should not need to reverse engineer your product. You should have stable, guessable URLs, obvious markdown twins, and so on.

[00:43:30] - Dev Agrawal
I don't have many opinions on the operator stuff because honestly, outside of locally testing the things that I've built, I have not reliably gotten my agents to browse things and find information or operate websites on my behalf. That doesn't sound like something that's very reliable right now, and in my experience it hasn't been reliable. So I haven't done it yet. But I think there's still a massive use case for it.

[00:44:02] - Anthony Campolo
Yeah, I'm the same way. I don't feel comfortable having my agent do anything outside of the bounds of code.

[00:44:12] - Dev Agrawal
Definitely. I would much rather have it just write a Playwright script for whatever it thinks it wants to do. Even if I'm not going to review the whole thing, it's still going to give me a bit more confidence. And it also means that the LLM isn't sitting there sending one command to the browser, waiting for the response, thinking, and then sending one more command, which is extremely inefficient.

[00:44:46] - Anthony Campolo
The rest of this stuff gives the Joel Claw implementation map as an example and then a verification checklist. I want to get into what we actually discovered for the Solid Docs. I ran that skill with both Codex and Claude. So let's look at the reports they generated. "Codex, source-only audit of the checked-in repository for agent discoverability." "The doc site has a solid baseline for agent discovery at the HTML layer. SSR is enabled. Pre-rendering crawling is configured. Content lives in MDX with structured front matter. Legacy redirects preserve old URLs. The UI generally uses accessibility controls and real links. The biggest gaps are..." Inference from source: "The XML sitemap generator appears to cover only the core doc set, while `llms.txt` covers all four doc sets." This is the same thing that the Claude report mentions first. So it sounds like the sitemap is actually missing a ton of content. Then it said locale detection is broken in shared helpers, which likely undermines translated navigation and locale-aware discovery. That's interesting. There are no cheap per-resource machine-readable projections, such as MD twins, and this is what I already called out and knew.

[00:46:14] - Anthony Campolo
It's just a docs page. It has none of this stuff yet, which is not that surprising. Most sites didn't have any of this until very recently. And head-level truth signals such as canonical URLs and JSON-LD are not evident in the checked-in app code. So search agent metadata quality is not fully verified in the source. That's saying it's not entirely sure because it needs to actually check the URLs themselves.

[00:46:42] - Dev Agrawal
Got it. I think all of these are very useful and topical things. I guess we can go through this first and then go through Claude later, and we can try to match which things both of them found. Those are definitely issues. It's always interesting to see what only one model finds and the other doesn't.

[00:47:09] - Anthony Campolo
Yeah, exactly. That's why this is the one thing I still use. I've mostly moved over to just using Codex, but I still pull in Claude to cross-reference Codex's work. So this is just going to dig deeper into each of those high-level findings. Let's switch over to the Claude one.

[00:47:36] - Dev Agrawal
I think the executive summary is good. I'd be more interested to see what fixes they come up with.

[00:47:44] - Anthony Campolo
Okay, let's do that then. I had it write a plan to fix this stuff. So it's a four-phase plan: locale integrity and agent discoverability. Fix the concrete locale bug first, then consolidate locale handling, then repair authoritative discovery services and add lean agent-facing text services. Success criteria: localized navigation works correctly, sitemap XML accurately covers all published docs across all four projects, index pages emit explicit head metadata, and agents can fetch a cheap per-page text projection without scraping full HTML. This is the first implementation fix, and it's stuff where I don't know enough about the site to say whether it actually makes sense. So what does this look like?

[00:48:43] - Dev Agrawal
Quick question. This report was generated by Codex or Claude looking at the code base itself, not just...

[00:48:50] - Anthony Campolo
What I first did is I created two reports, one from Claude and one from Codex with the agent skill. And then I pointed Codex at both those reports and told it to write a plan to fix everything.

[00:49:08] - Dev Agrawal
Got it.

[00:49:12] - Dev Agrawal
I'm gonna put a link in the comments. I was just pinged by the Solid Base author that there is a pull request on Solid Docs that adds `llms.txt`, sitemap, and MD routes.

[00:49:23] - Anthony Campolo
Could you actually switch to screen sharing and take a look at that? I actually want to just hop off and use the bathroom real quick.

[00:49:30] - Dev Agrawal
Okay.

[00:49:32] - Anthony Campolo
So you can pilot for the next two or three minutes. I'll be right back.

[00:49:35] - Dev Agrawal
Let me... can I even share my screen?

[00:49:44] - Anthony Campolo
You should be able to screen share. I'll put it up on the stage.

[00:49:46] - Dev Agrawal
Okay. There you go.

[00:49:49] - Anthony Campolo
Okay, you're good to go. Just bump your font up a bit.

[00:49:53] - Dev Agrawal
Thanks. All right. Osmium, Solid Base team. I want to look through this pull request real quick. Continuation of 1453, which is... okay, so it's a two-part PR where the second part has 242 file changes. This is a pretty big change. I really should have been more familiar with Solid Base. I think I need to build a portfolio website, which means I should be looking more at static site generators. But let's see the most important things. Fonts, images, `llms.txt`, and `robots.txt` have been removed from here. We have some collection scripts, which I guess have been moved somewhere. `package.json`... okay, so this...

[00:50:59] - Dev Agrawal
And this is to the Solid Docs repo. Can I search through these? Sitemap. That's a generate-sitemap script. `llms`... okay, so these were handwritten `llms.txt` files that have now been removed. Interesting. Okay, there we go.

[00:51:53] - Dev Agrawal
Color options. Nice. So I'm guessing there's going to be some scripts to automatically generate those.

[00:52:01] - Anthony Campolo
Yeah, I think on my site that we were looking at, those are all being generated. They're not handwritten. It looks like that's where this change is moving towards. It was cutting out like 15,000 lines and adding 5,000 in. So it's probably going to be generating a lot of this stuff. Who messaged you, by the way, with this?

[00:52:26] - Dev Agrawal
This is Jerome, the author of Solid Base and also Kobalte, which is like Solid's version of Radix UI. He's done a lot of ecosystem work.

[00:52:40] - Anthony Campolo
Cool. I'd love to get him on the stream too.

[00:52:44] - Dev Agrawal
Definitely. There is a preview deploy link, so let me quickly check that. Let's see if I can add an `md`. There we go. We have `md`. That works.

[00:53:06] - Anthony Campolo
Hell yeah. So they're ahead of the game. I'm glad we're seeing this. This is the type of stuff that I was going to be looking to merge in. It looks like they're already on top of it.

[00:53:15] - Dev Agrawal
This is our `llms.txt`: concepts, control flow, deployment, quick start.

[00:53:28] - Anthony Campolo
I'm going to want to pull this down and rerun that agent discovery on this.

[00:53:34] - Dev Agrawal
This is `feat/osmium`.

[00:53:36] - Anthony Campolo
Cool. I'm not going to do it right now because we're getting close to an hour here and I didn't want to go much longer.

[00:53:47] - Dev Agrawal
Very timely, though.

[00:53:48] - Anthony Campolo
Yes, perfect. Really glad we saw that. I'll be curious if it still flags issues with the sitemap on that. That'll probably be the first thing I'll look at.

[00:54:02] - Dev Agrawal
This was a helpful exercise for going through the specific issues that anyone building a site that wants to be more LLM-friendly should be looking at. It's also very helpful to see if static site generators like Solid Base can absorb more of this complexity of taking your raw markdown content and some front matter and doing as much of this work as possible automatically, so that you don't even have to think about it, which is how a lot of SEO and web complexity has been slowly absorbed by frameworks. I'm hoping that this is also something that eventually gets absorbed into frameworks, where they do the job of adding all of this and all we have to do is write our components and our markdown content.

[00:55:01] - Anthony Campolo
Definitely, because I've got a ton of code in my current Bun static site generator doing all sorts of stuff that I have very little understanding of. That's good to know that there's a framework out there looking at this stuff.

[00:55:18] - Dev Agrawal
Yeah, and I'm guessing startups like Mintlify and some of these platforms for documentation are all doing a lot of work in this area. Stuff like Astro Starlight... every major framework has one of these. Next has Nextra, Vue has VitePress, there's GitBook, and a bunch of these open-source platforms for documentation.

[00:55:46] - Anthony Campolo
I've used them all. And look at this: this is actually the most recent blog post on Mintlify. "Almost half your docs traffic is AI. Time to understand the agent experience." This is a whole post about what we were just talking about. It was written back in February. So Mintlify is thinking about this too.

[00:56:07] - Dev Agrawal
Yeah, honestly, the thing I spend more of my time thinking about is how agent traffic is going to affect the sites that rely on ads, actual human traffic to drive ad revenue.

[00:56:22] - Anthony Campolo
Sure. It's going to rewrite a major portion of the American economy.

[00:56:29] - Dev Agrawal
A lot of these sites have already been overloading their pages with ads over time. Wikipedia has famously added bigger and bigger banners every other year, "Hey, we are running out of money, please donate."

[00:56:44] - Anthony Campolo
We need more money to write our propaganda.

[00:56:50] - Dev Agrawal
Yeah, this is almost certainly going to break some of those. I go on Fandom, random fandom.com sites every now and then for game lore or comic lore, movie lore, and they just get worse with ads every time I visit them.

[00:57:13] - Anthony Campolo
There's the joke about recipe sites as well. There's like 90 million ads on them. The average web browsing experience is just absolute trash for so many sites, especially on mobile. You'll get three ads pop up all at once.

[00:57:30] - Dev Agrawal
Yeah.

[00:57:31] - Anthony Campolo
And you have to find the really difficult-to-hit X button for each of them that is specifically created so it's really hard to click.

[00:57:40] - Dev Agrawal
And they are invisible for the first five seconds.

[00:57:44] - Anthony Campolo
Yeah. It's the worst.

[00:57:48] - Dev Agrawal
Cool. But it's good to know that at least for things like documentation sites or ecommerce or content sites, there are ways to make it more agent-friendly. Excited to see the work in this area. This is also going to change very rapidly. AIO is going to look a little different a year from now.

[00:58:20] - Anthony Campolo
Yeah. I just hope the first thing they do is standardize on a single `agent` entry in `robots.txt` so I don't need 10 different entries, one for each individual agent.
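Today that does tend to mean a stanza per crawler. A hedged sketch — GPTBot, ClaudeBot, and PerplexityBot are real crawler names at the time of writing, but check each vendor's docs for the current list:

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```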

[00:58:34] - Dev Agrawal
So what we were hoping to accomplish on this live stream has already been done as a draft PR sitting there. What do we want to do now?

[00:58:46] - Anthony Campolo
No, this is great. I think we learned a lot and covered all of the topics. What I'll want to do before our next stream is run the same skill on that pull request so we can see what it looks like now that a lot of these have been fixed, and it will probably get more into the nitty-gritties. I'm also going to try and think about some evals I can create to see what is going on. And we should reach out to some of these people to see if we can get them next time we stream, because having someone from the Solid Base team would be really nice, to chat with them directly about what they were thinking when they created this PR and what they're optimizing for.

[00:59:32] - Dev Agrawal
Nice. Jerome, if you're still watching, hit us up if you'd like to be on stream.

[00:59:40] - Anthony Campolo
With Bradley, I would love to talk with him about his process of migrating to 2.0 because I want to try and distill a lot of that into a skill that people could use to migrate. I actually already have a first draft of that.

[00:59:56] - Dev Agrawal
Nice.

[00:59:57] - Anthony Campolo
That's what we could talk about when we get back on in two weeks. It's gonna be 4:20 when we stream, so don't get too high, Dev.

[01:00:09] - Dev Agrawal
I'll do my best.

[01:00:12] - Anthony Campolo
Awesome.

[01:00:13] - Dev Agrawal
Isn't it gonna be in the middle of React Miami? I'm not gonna be... okay, it's before React Miami.

[01:00:20] - Anthony Campolo
I wish I could go to React Miami. That was so fun. Had such a great time at React Miami.

[01:00:26] - Dev Agrawal
Definitely. I'm also not going to go this year, but maybe next one.

[01:00:32] - Anthony Campolo
Let's do it. Cool. We can wrap it up here. Thank you, everyone who was watching. Really appreciate it. Thank you, everyone who was sending links and stuff to Dev to make sure that we're on the latest and the newest. I'm still getting a lay of the land in terms of what's happening in the SolidJS world, but I really want to start contributing and start pushing some meaningful changes. That's a big goal I have with this stream and with bringing Dev on so much, so I can get more plugged into the Solid ecosystem. I'll try to start browsing the Discord more too. I know that's going to be one way I can get spun up quickly.

[01:01:15] - Dev Agrawal
Oh yeah. The Discord server is really active. The `next` channel is where I hang out the most because that's where the Solid 2.0 discussions happen. I think Jerome found out about this because I shared it in the Docs channel. And then he's like, "Hey, we have everything you're talking about. I already fixed it. It's sitting there. Go take a look." Which is amazing. Some amazing people in the community doing awesome work.

[01:01:47] - Anthony Campolo
I had a sense in the back of my mind that some of this stuff was being worked on somewhere. This seems like a huge gap that hasn't been covered yet. I was kind of surprised that a lot of this work hadn't been done already. So it's good to know that it had been done. It's just still in the process of being pushed up. Win for the Solid community again. This is why I'm sticking with it and continuing to build all of my stuff on Solid, because you all are legit, you know what you're doing, and I always learn a bunch just from interacting with the community.

[01:02:19] - Dev Agrawal
Yeah, we just need to convince more people that we have a good ecosystem. People keep thinking that the ecosystem is bad. It's not. It's just not overflowing with 20,000 different ways to do things.

[01:02:32] - Anthony Campolo
Totally. We gotta take a look at your Awesome Solid page and make sure that's up to date with everything in it. That's always a good way to find all your community stuff.

[01:02:43] - Dev Agrawal
Maybe one other brief topic that we could schedule for some time. Let me quickly find the tweet I made earlier, which was this thing called Power Chat. I don't know if you've seen my tweet about that.

[01:03:05] - Anthony Campolo
I did see your tweet, but I didn't click through. Oh, also, the site that I'm writing blog posts for now, I pitched them a blog post about how sync engines are the thing for agents and they were like, "Yes, write that blog post next." So I'm actually writing a blog post about this right now. I can't copy-paste this. Can you just pull it up and share your screen?

[01:03:34] - Dev Agrawal
Yeah, definitely. I would love to help out with that blog post if you need it.

[01:03:43] - Anthony Campolo
For sure. I'll send it to you once I have a first draft and you can give me some notes on it. That would be awesome.

[01:03:49] - Dev Agrawal
So I have been trying to build this: a multiplayer AI app, because I'm really sad that every single AI app is single-user, single-player.

[01:04:01] - Anthony Campolo
Oh, so this is so two people can be in a chat at once.

[01:04:04] - Dev Agrawal
Yes. This is basically like Slack or Discord. I have two tabs open here. They're both different users. In one of them I'm logged in as Dev, and in this one I'm logged in as another Dev. I have very basic chat and mention functionality. But if you let me collapse this... here, if you add one of the agents that are in the channel... basically agents are first-class citizens in the chat. You can tag them, you can talk to each other, and the agent only talks if you mention it in a message, and then it responds. As you'll see in both of the tabs, it'll stream the response back. I don't think there are very many AI apps that can do this, where you have the same conversation open in two tabs, two different devices, and you get streaming responses in both. But the crazy thing is that PowerSync makes it ridiculously easy to build something like this, whereas if you wanted to do this without a sync engine, you would be in for a world of pain.

[01:05:24] - Anthony Campolo
That's cool. I might actually pull that in as an example app in the blog post. It's all open source.

[01:05:33] - Dev Agrawal
The code itself, yeah. It's not super complicated. It's Solid Start, Neon, PowerSync, Mastra for the AI stuff, and I'm going to be adding more features to it at some point.

[01:05:48] - Anthony Campolo
That's very cool. Let's carve out some time for the next stream to dig into that code base.

[01:05:55] - Dev Agrawal
Our topic backlog is growing very quickly. We might have to switch to weekly.

[01:06:00] - Anthony Campolo
Yeah, we'll see. I need my beauty sleep, Dev.

[01:06:05] - Dev Agrawal
Of course. Did you say Pete started his own podcast?

[01:06:10] - Anthony Campolo
Yeah. This is the good format for us. We get to churn through a lot of stuff. Anything else before we wrap it up?

[01:06:20] - Dev Agrawal
No. Check out Power Chat. Check out AutoShow.

[01:06:26] - Anthony Campolo
Yeah, we should have some AutoShow stuff I can demo next time we're on as well. We'll do a little bit of demoing and then pick back up with some of the docs work. We'll make that the flow, show stuff we're working on and pick a bigger topic to get into next time. I'll probably want to do a longer stream too, maybe more like an hour and a half.

[01:06:47] - Dev Agrawal
Cool.

[01:06:49] - Anthony Campolo
All right, thank you, everyone for watching. Hope to see you in two weeks and we will catch you next time.
