
IPFS with Daniel Norman

Daniel Norman explains IPFS, content addressing, and Web3 ownership concepts, bridging the gap between traditional web development and decentralized technologies.



Episode Summary

In this episode, Daniel Norman joins Anthony Campolo to discuss his programming journey and provide an accessible introduction to IPFS (InterPlanetary File System) and the broader Web3 ecosystem. The conversation begins with Daniel's early exposure to web development in the 1990s, tracing his path through the LAMP stack, front-end frameworks, backend work in the D programming language, fintech, and eventually into crypto and developer advocacy at Prisma. From there, the discussion shifts to defining Web3 through the lens of ownership — extending the read (Web1) and read-write (Web2) paradigm to include verifiable ownership of assets and data via cryptography. Daniel then breaks down the two core pillars of IPFS: peer-to-peer networking, which allows any network participant to serve content, and content addressing, which identifies data by its cryptographic hash rather than its server location. Practical tools like pinning services, IPFS gateways, and platforms like Fleek are explained as bridges that make IPFS accessible to everyday developers. The episode closes with a candid discussion of content moderation challenges on a censorship-resistant network, including deny lists and distributed governance as emerging solutions.

Chapters

00:00:00 - Daniel Norman's Programming Origins

Daniel Norman shares how he first encountered web development in primary school during the early days of the internet, building websites on platforms like GeoCities and staying after hours to enjoy faster bandwidth. He describes learning HTML, Visual Basic, and eventually PHP as part of the LAMP stack era that powered early platforms like Wikipedia and Facebook.

His journey continued through a formative internship at Vodafone in Australia, where he learned SQL by querying billing data on Oracle databases. This early database experience proved foundational, as relational databases became a recurring thread throughout his career. He reflects on how certain technologies like SQL remain timeless even as the JavaScript ecosystem constantly reinvents tools like bundlers.

00:03:24 - From Front-End Fatigue to Backend and Distributed Systems

Daniel recounts his deep dive into front-end frameworks like Backbone and Ember during the Ajax revolution, eventually burning out on the complexity of managing memory leaks and event handlers. He pivoted to backend development, working with the D programming language on high-traffic systems where contributors to the language itself were colleagues at his company.

University studies pushed him further into lower-level programming and data structures, and he moved through the Go ecosystem, ad tech, and fintech industries. While working at a fintech startup using microservices, Kafka, and Docker, he became curious about Bitcoin and distributed protocols, eventually taking a sabbatical to explore the crypto space more deeply before landing at a DAO project called Aragon.

00:07:38 - Discovering Developer Advocacy and Prisma

Daniel describes his first Web3 job at Aragon, where he was paid in crypto and worked remotely before it was mainstream. After growing disillusioned with some of the hype in the crypto space, he took another break and discovered developer advocacy as a career path while exploring writing and side projects.

During this period, he rediscovered his passion for relational databases and stumbled upon Prisma, then still in its pre-beta phase. The type-safe developer experience was a revelation that made him feel genuinely productive, and he recognized the transformative power of well-designed developer tools. He applied for an advocacy role at Prisma and spent roughly two and a half years helping grow its community into what both hosts agree became the leading ORM for Node.js.

00:11:23 - Defining Web3 and the Meaning of Ownership

The conversation shifts to defining Web3 through a historical framework: Web1 enabled reading, Web2 added writing and social interaction, and Web3 introduces ownership. Daniel explains how ownership has historically depended on gatekeepers maintaining centralized records, whether for land registries or bank accounts, and how cryptographic key pairs now offer an alternative.

Anthony adds color by connecting this to PGP and the mathematical impossibility of faking cryptographic signatures. Daniel then traces the evolution of credit card authentication — from embossed numbers and handwritten signatures to chip-and-PIN and contactless payments — as a parallel example of cryptographic primitives gradually replacing older trust mechanisms in everyday life.

00:17:03 - Data Ownership, Global Context, and Bluesky

Daniel broadens the ownership discussion beyond tokens to include owning your own data, introducing IPFS's content addressing as a mechanism for portable data. Anthony connects this to anxieties around platform dependence, referencing the Twitter upheaval and users migrating to Mastodon as a real-time example of why decentralization matters.

Daniel emphasizes that Web3's value extends far beyond high-trust Western societies, citing countries like Venezuela, Lebanon, and Ethiopia where access to stable banking and protected property rights cannot be taken for granted. He then introduces the Bluesky project as an example of building decentralized social networking on open protocols, combining IPFS-related data models with decentralized identifiers to give creators independence from platforms.

00:25:28 - The Human Rights Case for Decentralization

Anthony underscores the importance of decentralized publishing by referencing the film "The Lives of Others," which depicts a writer in East Germany risking his life to publish dissent under a repressive regime. He argues that people in high-trust societies often dismiss crypto and decentralization use cases without recognizing that billions of people lack basic freedoms like publishing without government persecution.

Daniel amplifies this by recommending the work of Alex Gladstein at the Human Rights Foundation, who has documented Bitcoin's real-world impact in countries facing authoritarian control and severe currency inflation. Both hosts agree that broadening the conversation beyond Western tech culture is essential for understanding why these technologies matter on a global scale.

00:28:00 - IPFS Explained: Peer-to-Peer Networking and Content Addressing

Daniel introduces IPFS as the InterPlanetary File System, a new way of moving data on the internet built on two core concepts. The first is peer-to-peer networking, which breaks from the traditional client-server model so that any network participant can both consume and serve content, much like BitTorrent distributes file downloads across multiple holders rather than relying on a single server.

The second concept is content addressing, where data is identified by its cryptographic hash rather than a server location. Daniel draws parallels to Git commits and Docker image layers, both of which use content addressing. Anthony adds a practical walkthrough of initializing an IPFS repository and publishing a simple HTML file, showing how a web developer can save a website to a decentralized, verifiable system using familiar command-line patterns.
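The core property of content addressing can be demonstrated in a few lines of Python. This is only a sketch: real IPFS CIDs wrap the digest in a multihash/CID encoding and chunk files into a Merkle DAG, so a raw SHA-256 hex digest is a simplification, but the idea is the same.

```python
import hashlib

# Content addressing in miniature: the address is derived from the bytes
# themselves, not from where they are stored.
page = b"<html><body>Hello, IPFS</body></html>"

digest = hashlib.sha256(page).hexdigest()

# Anyone holding the same bytes derives the same address...
assert hashlib.sha256(page).hexdigest() == digest

# ...and any change to the content yields a completely different address,
# which is what makes the data verifiable no matter who serves it.
tampered = page.replace(b"Hello", b"Hacked")
assert hashlib.sha256(tampered).hexdigest() != digest
```

This is the same trick Git commits and Docker image layers rely on: the identifier doubles as an integrity check.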

00:34:47 - Pinning Services, Gateways, and Publishing to IPFS

The discussion moves into the practical infrastructure that makes IPFS usable for developers. Daniel explains pinning services as companies running IPFS nodes that host your content and keep it available, comparable to cloud storage with added network interoperability. Services like Fleek go further by connecting to GitHub repos, building sites, and publishing them to IPFS automatically.

IPFS gateways are introduced as bridges between HTTP and the IPFS network, allowing anyone with a browser to access IPFS content via a standard URL. Daniel also explains DNSLink, a technique that maps a traditional domain name to the latest IPFS content ID, solving the mutability problem inherent in content-addressed systems, where every change produces a new hash.
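The gateway and DNSLink mechanics described above can be sketched in a few lines. The CID and domain below are invented placeholders, though the `/ipfs/<cid>` gateway path and the `dnslink=/ipfs/<cid>` TXT-record convention reflect the real formats.

```python
# A placeholder CID (not real published content).
cid = "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"

# A gateway maps a CID into an ordinary HTTPS URL, so any browser can
# fetch IPFS content without running a node (path-style gateway shown).
gateway_url = f"https://ipfs.io/ipfs/{cid}/index.html"

# DNSLink stores a pointer in a DNS TXT record under _dnslink.<domain>,
# so a stable domain name can track the latest CID as content changes.
txt_record = f"dnslink=/ipfs/{cid}"

def parse_dnslink(record: str) -> str:
    """Extract the content path from a DNSLink TXT record value."""
    prefix = "dnslink="
    if not record.startswith(prefix):
        raise ValueError("not a DNSLink record")
    return record[len(prefix):]

assert parse_dnslink(txt_record) == f"/ipfs/{cid}"
```

Updating the site then means publishing new content (new CID) and rewriting one TXT record, while old CIDs keep resolving to the old content.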

00:44:10 - Incremental Adoption and Protocol Labs

Daniel explains that IPFS adoption doesn't have to be all-or-nothing — developers can integrate content addressing into traditional Web2 applications for specific features like Q&A sections or downloadable content without rearchitecting everything. He notes that IPFS handles any file type, including video, audio, and structured data like JSON.

Anthony then asks about Protocol Labs' relationship to IPFS. Daniel explains that while Protocol Labs initiated the project when Juan Benet published the original paper in 2014, IPFS has since become increasingly community-driven with open specs, working groups, and contributions from organizations like Cloudflare and Fission. The latter is building an encrypted file system on top of IPFS to address the fact that all IPFS content is public by default.

00:48:02 - Content Moderation on a Censorship-Resistant Network

The conversation closes with the challenging topic of content moderation on IPFS. Daniel explains that while the protocol itself is neutral and censorship-resistant, node operators are still subject to the legal jurisdictions they operate in. Protocol Labs maintains an abuse reporting system for their public gateway and blocks flagged content from being served through it.

Ongoing specification work aims to create subscribable deny lists, similar to email spam filtering, that allow node operators to opt into curated blocklists from trusted parties. Daniel frames this as distributed self-governance rather than centralized moderation, with the resilience of the system coming from pushing decision-making to the network's edges. Both hosts acknowledge the inherent tension but express confidence that thoughtful, community-driven approaches can keep the network healthy while preserving its open nature.
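The subscribable deny-list idea reduces to simple set membership on the operator's side. The list contents and CIDs below are invented placeholders, not a real IPFS API; this is just a sketch of the opt-in model described above.

```python
# A node operator opts into blocklists published by parties they trust,
# analogous to subscribing to email spam-filter lists.
subscribed_denylists = [
    {"cid-of-flagged-item-1", "cid-of-flagged-item-2"},  # e.g. a gateway operator's list
    {"cid-of-flagged-item-2", "cid-of-flagged-item-3"},  # e.g. a legal-compliance list
]

# The effective blocklist is the union of everything subscribed to.
blocked = set().union(*subscribed_denylists)

def can_serve(cid: str) -> bool:
    """Serve content only if no subscribed list has flagged it."""
    return cid not in blocked

assert can_serve("cid-of-an-ordinary-page")
assert not can_serve("cid-of-flagged-item-3")
```

The protocol stays neutral; each operator chooses which lists to trust, which is what makes this distributed self-governance rather than centralized moderation.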

00:54:59 - Getting Started with IPFS and Closing Thoughts

Daniel directs listeners to ipfs.tech for documentation and mentions upcoming JavaScript-focused improvements planned for 2023. He also recommends the web3.storage blog as a resource for modern development with IPFS and content addressing, including projects that incorporate decentralized identifiers.

Anthony reflects on how learning IPFS felt like a genuinely novel experience compared to typical web development, and encourages listeners to explore a prior tutorial episode for a hands-on walkthrough. Daniel shares where people can find him online and expresses enthusiasm for hearing from developers who are building with IPFS, closing out a wide-ranging conversation that spanned personal history, philosophical foundations, and practical technology.

Transcript

00:00:00 - Daniel Norman

Sounds great.

00:00:11 - Anthony Campolo

Daniel Norman, welcome to the show.

00:00:13 - Daniel Norman

Nice to be here. Thanks for having me, Anthony.

00:00:15 - Anthony Campolo

We're going to be delving a little bit more into the Web3 world. I think our listeners are probably still not super hip to it, but IPFS, I think, is a good bridge technology to get people into Web3 because there's no tokens. There's not as much weird financial shilling that comes along with some of the ecosystem, but we'll get into IPFS in a little bit.

But first, I'd love to hear a little bit about yourself and how you got into coding. I think some of our listeners may recognize you as a former Prisma OG, so I'd love to hear your background and how you got to where you are.

00:00:51 - Daniel Norman

Yeah, I got into programming at quite an early age. I was quite lucky. In my primary school, we had this program where they introduced computers back in the early days of the internet in the 90s. We got to do some web development there. It was really rudimentary. We were learning HTML. CSS wasn't really a thing at the time, but I remember staying at school after hours because we had a much faster internet connection. Instead of 56k, it was double that. And so when everyone left, you got the whole bandwidth to yourself. That was kind of my exposure to the internet.

I think at the time the internet was at this publishing revolution. Anyone could publish a website. I think GeoCities was one of the platforms, like imagine that's the Vercel or the Netlify of that period. I remember publishing this little website, and I was really excited about this idea that it was a gateway into the broader world.

[00:01:41] It was really like going beyond the immediate physical and geographical community that I was tied up in. I was always a nerd, and I was always engaging with these different technologies. I was programming in Visual Basic at the time, and I was learning a bit of C++, and I had no clue what was going on. Then later on, the LAMP stack was picking up. That was, again, this exciting set of technologies. Wikipedia is built on the LAMP stack. For those who aren't familiar, LAMP stands for Linux, Apache, MySQL, and PHP. This was also the early days of Facebook, the mid-2000s, and this stack was really picking up, spurring this new wave of innovation building stuff on top of it. So I was learning PHP at the time.

I had done this little internship at Vodafone in Australia, working on their billing system, and that's where I learned SQL, basically, on their Oracle database.

[00:02:33] My parents both had a Vodafone contract. It was kind of cool because during this internship I was able to really query their billing data and their calls. I had this two-week opportunity to learn SQL from some pretty advanced software engineers. So that was my foray into SQL and relational databases, which seems to come again and again. Interestingly, I also recently researched this. I was looking at things that are timeless in the software world, things that are still sort of true today, especially in the JavaScript world.

We just had a chat about this prior to this recording, about how we tend to reinvent things every year and a half in the JavaScript world, like bundlers, for example. I was giving the example of RequireJS and Browserify, then Webpack, and now it's Vite. That's sort of how I learned SQL. I was able to apply that with PHP and the LAMP stack.

[00:03:24] I was building some web apps, and I then started working with a friend of mine. He was an entrepreneur. We were building online shops and doing these early e-commerce deployments using this LAMP stack. Then I moved on. At the time, these front-end rich apps started becoming very popular. I think Gmail was one of the first ones that popularized this whole Ajax paradigm: sending asynchronous HTTP requests once the page loads. jQuery made it really easy to do these kinds of things. A couple of years later, you had all of these crazy messy apps that were using jQuery, and then suddenly people were like, oh, we need to think about a way to structure things on the front end as the logic started getting pushed, arguably, to the edge, the user's browser. You had this wave of reinvention: Backbone came around and Ember, and that's when I went deep into those technologies.

[00:04:15] At the time I was also studying computer science. I was doing this for a while, and I got really exhausted from doing front ends with Backbone. There were so many pitfalls that you had to constantly avoid, like memory leaks from when you forgot to unbind an event handler that you tied up to an element that isn't even in the DOM anymore. So that was really painful. There were new abstractions on top of Backbone at the time, but things just got really complex and hairy.

I was like, okay, I want to switch to backend development. So I switched into backend development. At the time, the shop I was working at was using this programming language called D. It was created by Walter White, I think. No, sorry. I might be confusing that with Breaking Bad...

00:04:54 - Anthony Campolo

The programming language. Like after C, D.

00:04:57 - Daniel Norman

Exactly. So D was supposed to be a successor to C and C++. It's still an active language. It has a small community.

00:05:04 - Anthony Campolo

It's Walter Bright.

00:05:06 - Daniel Norman

Walter Bright, that's the one. Yeah. I remember at some point we were the biggest shop at the time using this programming language. I remember some of the core contributors to the programming language were working at the company because we were the biggest user of this, and we were deploying these really high traffic systems. We had this quote in our systems: once in a million happens every three seconds, because we were just handling so many requests. At the time that was considered big scale.

So that was my foray into distributed hash tables and a bit more lower-level programming. My time at uni really pushed me to be more interested in lower-level data structures, micro-optimizations, and a focus on foundations. It was quite difficult work at the time, but I spent a couple of years doing that, and then I left that. Then Go started becoming popular, and I learned a bit of Go. I went through a bunch of these different waves in the software world through different industries.

[00:05:59] One was the ad tech industry and then financial technology, fintech. While I was working in fintech, I had gotten exposed to Bitcoin, and I became curious about these distributed systems and protocols. At this fintech startup, we were doing microservices with front ends and REST and a bit of GraphQL, I think, at the time. That was really exciting. We were doing event sourcing with Kafka and Docker containers everywhere, and suddenly everything is cool: distributed architecture and hyperscale and so on. There I was also doing infrastructure, and I got into Bitcoin, and then I was like, you know what? This thing is really interesting to me. I want to learn more about it.

After two and a half years at this fintech, I was like, okay, I'm leaving this startup. I want to take a break, and I want to learn some of this stuff.

[00:06:47] So I took a break and I was spending time learning Bitcoin. This is still prior to Prisma. I learned a bit of this stuff, and then I got into Ethereum and DAOs were a new thing. This is like, I think, 2017-2018. I went to the Chaos Communication Congress, which is this big hacker gathering in Germany. I met a lot of really interesting people there, and I was like, okay, I want to work in this space. How can I make it happen?

I had a friend who was working for this project called Aragon, which was supposed to be this DAO framework. We can get into DAOs maybe later on. I don't want to go on too many tangents here, but I heard about this Aragon thing. It was a DAO framework built on top of Ethereum, and it was using IPFS too. I was so excited that I applied. They gave me a task, and I built it. It was using, I think, Web3.js to interact with the Ethereum blockchain and pull some block and transaction data and render it in a Next.js app.

[00:07:38] They accepted me. I had to do another task, and I started working for this project. This was the first Web3 project that I was working for, and I'm suddenly getting paid my salary in crypto. It was a brave new world, and it was the first remote job for me. This is like 2018-2019, so prior to the whole post-COVID, everyone's remote era. This was really exciting. I spent about a year there, and then I decided, okay, I'm leaving this. It wasn't working out for me, and again I took a break.

During that break, that was my foray into developer advocacy. I was like, okay, I'm tired of the Web3 thing. It's kind of cool, but it seems a bit overhyped. As you mentioned, everyone's shilling for some token thing, and the mix of software and finances can get really weird sometimes, even though I really love the idea of these open, verifiable financial networks that are decentralized, global, and permissionless.

[00:08:33] You don't need permission to use them. I love the fundamental ideas behind this, but there's obviously the dark side of this whole world. So I was a bit fed up with that, and I took a break from it. While I was on this break, I was really getting into writing. While I was getting into writing, I realized developer advocacy is a role. I was working on a cool side project and exploring tools again. It keeps on coming back to relational databases, so I was like, okay, I'll use MySQL or Postgres for this project.

While I was using it, I was looking for some good tools to work with, like a database abstraction for Node.js. I'd worked with Knex, which is a query builder, and it kind of served my needs. But then I discovered Prisma, and I was really on board with TypeScript. I was like, oh, this is great, I love it. It wasn't even called Prisma. It was called Photon or something. This is really before even the beta release of Prisma 2, because initially Prisma was this GraphQL thing, and then they dropped the GraphQL and it just became this pure database abstraction.

That was my foray into it. I'm using this Prisma thing and it's like, wow. Suddenly I feel like a developer. Suddenly I'm super productive. I get the type safety, and it was a bit rough around the edges, but for the basic CRUD stuff, even being pre-beta, it was just fantastic. That was when the penny dropped for me, which was like, wow, developer tools, when they work well, they're amazing. They just make your work so much easier. I saw they had some openings, and they had an opening for an advocate, so I applied, and that's how I got into developer advocacy.

[00:10:02] I spent about two and a half years or so at Prisma. We saw the Prisma community grow from a couple of hundred, maybe a thousand, developers as Prisma became the best ORM for Node.js. Obviously that's debatable, but.

00:10:15 - Anthony Campolo

I don't think it's debatable. I think that's a pretty open and shut case.

00:10:19 - Daniel Norman

Great. I'm glad to hear that at this point. It's interesting that it's so obvious now because along the path there was so much resistance, and I understand a lot of things in programming come down to taste.

00:10:30 - Anthony Campolo

It's funny, your journey, it's much longer than mine. You've been in the industry, it sounds like, almost 10 or 15 years, but there are so many parallels between your journey and my journey in terms of going into Web3, stepping back a bit, getting into relational databases and finding a way there. You're talking about stacks and LAMP stacks, and I feel like the stack is really the idea that runs throughout all of this, because you can have a Web3 stack, you can have a stack based on relational databases, and no matter what, you have these different layers of software that add up to some sort of usable application.

I feel like our listeners were probably along that journey with you for most of it, but there's a couple terms there that I feel like we should really define here. Let's start with Web3. This is one of the most ubiquitous terms, but also, I think, hardest to define. How would you define what Web3 means?

00:11:23 - Daniel Norman

Yeah, that's a great question. I should preface this by saying that no one has a monopoly on what the word Web3 means. You're allowed to use it whichever way you like. You might get some consensus on that definition or another, but it's just like the Web2 revolution.

So let's start with Web1. What was Web1? Web1 was this initial wave of the internet. Anyone could publish, and the main idea was that users on the internet could read. It was really about reading content. You could explore the web. I remember this old magazine cover of teenagers exploring the web, reading content from every possible corner of the world. Web 1.0 was about reading from the web.

Web 2.0 was really when social media started becoming a thing, when blogs and comments arrived. It became a two-way, multi-directional highway where anyone could actually respond to whatever you published on the web.

[00:12:17] So it wasn't just about reading. It was about interacting. It was about leaving a comment, liking stuff. It was the social aspect, making it this kind of two-way street. So that was Web 2.0. Arguably, Blogger, Facebook, Myspace, and all of these platforms were responsible for really bringing this to the mainstream.

I like to think of Web3 as: if Web1 was reading, Web2 was reading and writing, having this two-way street, then Web3 is about adding ownership to the web. Ownership is this really weird, amorphous thing. How do you own stuff? If you think about ownership historically, almost all ownership essentially comes down to having gatekeepers who control the database or control a paper ledger, and they keep track of who owns stuff, whether that's land registry for land ownership or financial ownership of things, which is done through centralized databases that are operated by banks, whether that's a central bank or your local bank. So Web3 is usually referred to as all of these new waves of technology that involve ownership.

[00:13:19] A lot of that is associated with Bitcoin and cryptocurrencies where you can own these tokens. People think, oh, I have Bitcoin, but really you never own cryptocurrencies like Bitcoin; you hold a cryptographic key that gives you access to those coins that are stored on a blockchain.

00:13:37 - Anthony Campolo

That's really the important part there, that the cryptography is what leads to some sort of vague sense of ownership. If you think of PGP, Pretty Good Privacy, it's a way to send messages between each other, and the way you can verify that you are the one who sent that message is by having some sort of key that can link to the cryptography. So it's really just a lot of math that allows someone to say, I have this long, ridiculous string of characters that I can input, and it will say it's connected to this other long string of characters. And that is what quote-unquote ownership means, which is a very heady concept. But once you get your hands on it, you realize it actually does work because cryptography works. You can't fake that long string of text. You just can't. It's mathematically impossible for someone to conjure up that specific string. So it allows people to say, hey, I did this, I can prove I did this.
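Anthony's point can be made concrete with a small sketch. Python's standard library has no asymmetric signing, so this uses an HMAC (a shared-secret stand-in) rather than the public-key signatures PGP actually uses; the property it demonstrates is the same one he describes: without the key, you cannot produce a tag that verifies.

```python
import hashlib
import hmac

# The "long, ridiculous string of characters" only the owner can produce.
secret_key = b"only-the-owner-knows-this"
message = b"I published this post."

# Signing: key + message -> tag. (Real PGP would use a private key here.)
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, claimed_tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claimed_tag)

assert verify(secret_key, message, tag)                    # genuine
assert not verify(secret_key, b"I never said this.", tag)  # altered message
assert not verify(b"attacker-guess", message, tag)         # wrong key
```

Change one byte of the message or guess the wrong key, and verification fails; that is the mathematical basis of "I can prove I did this."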

00:14:35 - Daniel Norman

We can even look at an example. Most people are familiar with credit cards. I'm sure that most of our listeners have used a bank card or a credit card. If you compare a credit card today with one from, say, 15 or 20 years ago, a lot has changed, but it still has the same sort of form factor.

You had this plastic card initially. I don't know if you've ever had the chance to use these old credit card machines, but back in the day you had this blue copy paper and a mechanical machine that would take the embossed numbers, the credit card number, and copy them onto it. As the merchant, you had to run this machine over the card, and then the customer would sign. Essentially that was the signature. That was proof that, hey, this was you. You literally hand-wrote your signature.

[00:15:18] In fact, many credit cards still have on the back this space where you're supposed to put your signature when you go to a shop and use it. At least 15 years ago, it was still common. They would check that the signature matched. Obviously a lot has changed since then. Then we introduced the magnetic strip. Even with the magnetic strip, you still had to do the handwritten signature. Then there was a lot of fraud, and now we're in 2022 and we have the chip and that's associated with a PIN code.

These chips use the same cryptographic techniques. It's essentially a public-private key pair with some signatures. When you put the PIN in, there's a little chip that processes the information and generates a signature on the credit card. Of course, that's evolved more. So it's like initially we just had the numbers, right, embossed, or however you call it.

00:16:01 - Anthony Campolo

Now I can just tap it.

00:16:02 - Daniel Norman

Now you can just tap it. It's like the fourth evolution: just numbers, magnetic strip, chip, and now you can just tap it with PayPass or whatever it's called. It's pretty amazing how all of these changes kind of happen on the same platform, being that same credit card-sized thing, in a sense.

If we go back to this idea of Web3 and ownership, you introduce this idea of cryptographic primitives that are essentially replicating what a hand signature does in order to prove ownership or identity of things. I don't want to get into this conversation about fraud. Keys often get lost. Identity and key management is probably the most challenging problem in a world of ubiquitous encryption. Our identity is so reliant these days on these cryptographic primitives, whether it's a Web3 world or other services that we're using that require our devices, which also use a lot of cryptography.

There was one point that I wanted to add with regards to Web3 being about reading, writing, and owning, which is owning isn't necessarily just owning tokens.

[00:17:03] Owning can also, and this is very relevant for IPFS, mean owning your data. So today, for example, if you have a Facebook account or Twitter and you want to leave Twitter, you can ask for your archive and you can download that. They'll prepare it and you'll be able to download it. With IPFS, everything has a content ID, and we can get into the details of what IPFS is. But one of the great things that IPFS gives you as a building block, whether you're a developer or a user of something that is building on top of IPFS, is this technique that we will talk about called content addressing. You can more easily own your data and make a copy of it. There are already examples of this in action, so I'll pause there. I know that was a lot to take in.

00:17:44 - Anthony Campolo

This is the thing I find with most of these Web3-type discussions: before you can even get to what it is, you have to lay all this philosophical foundation of why you would even want this. I thought that was a really good pitch in terms of ownership.

The way I put it in the various interviews I've been giving to explain Web3 is that it's about giving power back to the users and taking it away from the platforms. Because of how the Web2 world shaped up, there were all of these large billion-dollar companies that essentially became silos for the data, whether that's Facebook or Instagram or Twitter or YouTube. You don't really own your YouTube videos. Google owns your YouTube videos. For the most part, that's not really a problem, I think, for most people, but there's a certain type of individual who really wants that type of security and that peace of mind to know that if all of Google shut down for whatever reason, then you would still have your videos and you would still be able to show those videos to people.

[00:18:50] You can decide how you want those videos to be advertised, or what type of advertising will be done on them. So there's a level of control that comes along with Web3 that's really important. We talked about how the cryptography is what kind of enables all that, but I think the idea of ownership, and especially the idea of owning your own content, is more for a user of a social network. But if you're a publisher, this is where I think people really start to get why this is important.

Right now there's this whole thing happening with Twitter where everyone's worrying about whether Twitter is going to be here a week from now. I know a lot of my close friends right now are furiously downloading their data from Twitter and being like, oh no, is my Twitter data going to be okay? People are starting to move to Mastodon, and I'm seeing these conversations now happening around decentralization that I did not see a month ago.

[00:19:47] I think people are starting to get hip to this. I think the connection of how that fits into Web3 is still kind of lost on some people, but the idea that these platforms are not just going to be here forever is an idea that people are finally starting to understand.

00:20:04 - Daniel Norman

Yeah, there's a lot there. It's worth saying that we're lucky to be living in high-trust societies. I'm happy to trust Google with a lot of things. Yet a lot of people don't live in a high-trust society. A lot of people are living in places where they don't even have access to modern banking, or their currencies are heavily inflated and they can't leave the country with their wealth. If you're living in Venezuela, Ethiopia, Sudan, Syria, Lebanon, or even Argentina, these are all places where people are facing a lot of constraints and difficulties that we're not so familiar with.

I think it's important to broaden the conversation when we speak about Web3 and what ownership means because we live in a high-trust society where property rights are really protected. The whole world is facing inflation and the macroeconomy is a complex thing to even have a conversation about, what is it even, the macroeconomy? It's two really heavy, loaded words. But to broaden the conversation, I think even these fundamental concepts of private property are not equally available in every country in the world.

Even though a lot of the Web3 culture, and a lot of the noisy people, are coming from high-trust societies, the US, Europe, and so on, it's really important to broaden the conversation and include the rest of the world because that's where a lot of the interesting use cases are coming from. A lot of new people are joining the internet; the internet penetration rate is, I don't know, I think now between 50 and 60 percent, last I checked, maybe two years ago. The internet is reaching new places. Look at Africa, for example. Africa is leapfrogging the whole PC revolution. We're used to using both mobile devices and a laptop or desktop computer.

In Africa, a lot of people have never seen a fully fledged computer. They're just using mobile devices. The reason I bring this up is because I think there are so many interesting use cases that go beyond our conception of social media and the whole debate around Twitter. I think what's happening with Twitter is really interesting and exciting as a social phenomenon. Is it the best sounding board to discuss Web3 technologies? Maybe. Maybe not.

What is definitely interesting, since you brought up Twitter, is the story of Bluesky. I don't know if you heard anything about this.

00:22:25 - Anthony Campolo

Just a little bit. This is Jack Dorsey's new project. It's based around DIDs, decentralized identity, I believe.

00:22:32 - Daniel Norman

Yeah, essentially, Bluesky is this project that Jack Dorsey started, and it's completely independent from Twitter, but it's an attempt to build something like Twitter using completely decentralized technologies in the same fashion and sort of same spirit of the old protocols of the internet.

This is a podcast. This podcast is probably published to all the different podcast apps using this technology called RSS, Really Simple Syndication, which is built on top of XML. The nice thing about this technology, as with email and XMPP, is that these are all protocols that were built for interoperability. These are protocols that were built to avoid situations where you have one company owning your data. What's so interesting is that a lot of these protocols, when they were built, didn't have all of these use cases that they're used for in mind. For example, when the RSS feed specification was created, it didn't have podcasts in mind. Yet podcasts have found their use case using RSS. That's why you're able to use Spotify, Apple Podcasts, Google Podcasts, Overcast, or whatever app you wish.

[00:23:37] You can subscribe to whatever stream you like, and it all interoperates. Coming back to Bluesky, it's this initiative to build a new foundation for social networking, which gives creators independence from the platforms and developers the freedom to build and gives users choice in their experience. You have multiple clients. Developers can build whatever they want, like podcasts on top of RSS. You have creators who have independence from the platforms. You own your data; you create the data.

So they built this AT Protocol. The AT Protocol actually combines something from the IPFS world and what we're doing at Protocol Labs. It's using IPLD in order to represent these feeds. IPLD is another project related to IPFS; it's like the data model for IPFS. They're combining that with another technology called DIDs, decentralized identifiers, which is part of this broader initiative for decentralized identity, another fascinating development in this Web3 space.

[00:24:34] I know I'm going here on a bit of a tangent, but hopefully I can close the circle you mentioned, right? Cryptography. I mentioned credit cards also being a technology. Identity is a challenging thing. Key management is a challenging thing. Decentralized identifiers are a way to introduce identity that isn't necessarily tied up with a specific service.

Today, for a lot of services, you can log in with Google, you can log in with Twitter or Facebook, and you're bound to the platform. With decentralized identifiers, theoretically, you can use your crypto wallet as your ID to log into things. The ideas are really interesting because they generalize this idea of identity. The Bluesky team is really starting to build this, and I think they've already got a beta. I'm on the waiting list. I haven't actually tried it, but they're doing some really interesting work there.

[00:25:27] And yeah, I'll pause there.

00:25:28 - Anthony Campolo

That was great. I want to hook into just one thing you said back there, and then we'll get into what IPFS is. You're talking about how we're in a high-trust society where we even have the rule of law. This is something that I think Americans, and people who live in other countries like those in Europe, or Canada, or Australia, are used to: this idea of living in a country that respects the rule of law to a certain extent. People can always complain that rich people can buy their way out of whatever, but I think for the most part, if I go out and run someone over, the cops are going to come after me. That is just how it works.

People will talk about it like there's no use case for crypto or whatever. Then I'll be like, okay, well, you may say there's no use case for you because you can go to a bank and get a bank set up, and that's something you're able to do.

[00:26:19] You take for granted the fact that not everyone in every country in the world can actually do that. This is not something that just exists as a natural state of the world. There's a really good movie that drives this home. It's called The Lives of Others, and this is a movie about living in East Germany when the communists were still in charge. The point of the movie is about a writer who wants to speak out against the current administration because, for people who don't know, when the communists were in charge, there were widespread purges and people who wanted to speak out against the government were either silenced or straight up murdered by the government.

He had to get a typewriter and hide it and go through this whole thing just to write something and publish it without being murdered by the government. It's really hard for us to understand that this is the state of the world for some people, and having the ability to publish your words to the world without getting murdered by your government is an actual real-life thing for some people.

[00:27:23] This is really hard, I think, for some people to understand when you say there's no use case for this stuff. You need to keep those kinds of things in mind.

00:27:30 - Daniel Norman

Yeah, absolutely. Just a shout out to this guy called Alex Gladstein. He's with the Human Rights Foundation, and he's done a lot of work documenting the impact of something like Bitcoin in those countries that I mentioned: Ethiopia, Sudan, Syria, Lebanon, and so on. Just wanted to give that shout out. And yeah, absolutely, as you said.

00:27:49 - Anthony Campolo

We've laid a lot of this philosophical underpinning, so let's get into what IPFS is and how the actual technology fits into this whole philosophical conversation.

00:28:00 - Daniel Norman

IPFS stands for Interplanetary File System. IPFS is a new way of moving data on the internet. There are two core concepts that are important to understanding IPFS and the kinds of problems that it can solve.

To start off, let's start with peer-to-peer networking. The web as we know it is built around this idea of clients and servers, right? You open your browser and you access a specific server using a domain name. The DNS record is looked up, and the server serves you. IPFS changes this by allowing any member of the network to serve content. It breaks away from this client-server model and shifts into a world in which, theoretically, anyone can be a server. So you take on both the role of a client and a server. This is the key concept, peer-to-peer networking, where any member of the network can be a productive member.

00:28:56 - Anthony Campolo

Like The Pirate Bay.

00:28:57 - Daniel Norman

Right. Many people are probably familiar with BitTorrent. There are a lot of similarities between IPFS and BitTorrent; IPFS is heavily inspired by BitTorrent. The big revolution of it was that you can exchange files with anyone, whether you're pirating content or you're downloading a Linux image. Instead of going to one server, overloading it, and risking the server going down, you can pull the file from anyone who's holding a copy of that Linux image. Theoretically, you get much faster bandwidth and a much more resilient system. This is already starting to speak to some of the resilience and efficiency gains that you get with something like IPFS and BitTorrent.

Obviously, the second thing is content addressing. Since this is a podcast focused on developers, I think there's a really good parallel or another example of content addressing. The very high-level idea of content addressing is that you address things based on what they are rather than where they are.

[00:29:58] How do you address things by what they are? Using hashes. You can take a file, run it through a hash function, and you get the hash of it. Many developers are familiar with Git. Git is actually a content-addressed system. Every commit in Git has a hash, and you can load that hash locally, but you can also look it up on GitHub, and GitHub will be able to display the exact same commit. If you change one character, or even just the commit message, the hash for that commit changes. Then you know you're going to either do a force push, or you're going to create a new commit or a new branch, and so on.

Everything in Git is content-addressed. Content addressing is actually pretty common in many different systems. Docker is another example: when you build a Docker image, every step in the build output is identified by a hash of the image at that step.
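
To make the Git analogy concrete, here is a minimal Python sketch of the core idea behind content addressing: the address is derived from the bytes themselves, so changing even one character yields a completely different address. The bare SHA-256 hex digest here is a simplification; real IPFS content IDs wrap the hash in multihash and multibase encoding.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Address content by WHAT it is: a hash of its bytes.
    # (Real IPFS CIDs use multihash encoding; bare SHA-256
    # hex is used here only for illustration.)
    return hashlib.sha256(data).hexdigest()

a = content_address(b"<h1>hello from IPFS</h1>")
b = content_address(b"<h1>hello from IPFS!</h1>")  # one character added

assert a == content_address(b"<h1>hello from IPFS</h1>")  # same bytes, same address
assert a != b  # any change, however small, produces a different address
```

The same property is what makes a Git commit hash or a Docker layer digest change whenever anything underneath it changes.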

00:30:47 - Anthony Campolo

Could a hash be thought of kind of like a pointer? You have a piece of content and you have a pointer, and then you can use that pointer to kind of get to the content.

00:30:56 - Daniel Norman

Exactly. So what is content addressing different from? And that's what you alluded to. It's different from location addressing, which is just how the web works today. You go to google.com or you go to twitter.com, and that is pointing to an IP, which is pointing to a location. With location addressing, that server could go down at any time. The magic of content addressing, and the magic of what IPFS does, is that it allows you to address things by their fingerprint, as you mentioned right there, a pointer.

But it's more than a pointer. It's a pointer that you can calculate yourself once you get the data, so you can verify that this is exactly what the pointer is supposed to be pointing to. Even if you're doing content addressing, you still have to fetch it from some location. There's a misconception about IPFS that it just magically retrieves it, but no. You have a pointer, and that pointer is pointing to a number of different locations. Those locations are the network participants who are holding that content ID, that hash. We call it a content ID in IPFS, which contains the hash and a little bit more metadata.

[00:32:01] So I'll pause there. We had peer-to-peer networking, and content addressing, meaning that you can address things by what they are using these hashes, which in IPFS we call content IDs. You can think of a content ID as being like a Git commit: it's just this long hash string. You can use it to look things up and fetch them from wherever they are. The magic of IPFS is doing that content routing, translating from that pointer into all of the different locations that have it. That means that if more people are hosting a bit of content, then you can theoretically fetch it from more people. Obviously there's more to IPFS, but I'll pause there.
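
A toy sketch of the routing-and-verification idea described above: a content ID maps to the set of peers holding the data, you fetch from any of them, and you re-hash what you received to check it matches the ID you asked for. The in-memory provider table and peer names are invented for illustration; a real IPFS node does this lookup over a distributed hash table and uses multihash content IDs.

```python
import hashlib

def cid_of(data: bytes) -> str:
    # Simplified stand-in for an IPFS content ID.
    return hashlib.sha256(data).hexdigest()

# Hypothetical provider table: which peers claim to hold which CID.
# In IPFS this lookup happens via a distributed hash table (DHT).
providers: dict[str, dict[str, bytes]] = {}

def publish(peer: str, data: bytes) -> str:
    cid = cid_of(data)
    providers.setdefault(cid, {})[peer] = data
    return cid

def fetch(cid: str) -> bytes:
    # Pull from ANY peer that claims to have the content...
    for peer, data in providers.get(cid, {}).items():
        # ...and verify by recomputing the hash: the pointer is
        # self-certifying, so a peer serving wrong bytes is detected.
        if cid_of(data) == cid:
            return data
    raise KeyError(f"no honest provider found for {cid}")

cid = publish("peer-a", b"<h1>hello from IPFS</h1>")
publish("peer-b", b"<h1>hello from IPFS</h1>")  # a second copy adds resilience
assert fetch(cid) == b"<h1>hello from IPFS</h1>"
```

More peers holding a copy simply means more entries to try, which is where the resilience and bandwidth gains come from.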

00:32:37 - Anthony Campolo

I thought that was really good. I am someone who has used IPFS quite a bit actually, so I'm going to try and restate that and add some of my own color to it. For listeners who might be really confused right now, when you first start using IPFS, you can use a desktop GUI, or you can use a CLI. When you use the CLI, you'll start by running ipfs init, which is a lot like initializing a Git repository with git init. Then you have this repository and you can start committing things to it. It's a similar thing with IPFS. You'll init a repository, and then say you create an index.html file whose entire content is an h1 that says hello from IPFS. Then you can take that and commit it to IPFS.

Now it's not a commit in the exact same way, but what it does is it takes this content and then it outputs what you're saying, the content hash.

[00:33:35] Then you have this thing where it's just a long string of text. But if you then request /ipfs/ followed by the content hash, you can get that h1 back. By doing that, you can save a website onto IPFS, and then you have this content hash and something that points to it, and that's pointing to a website. Then you have a website saved on this decentralized, distributed, cryptographically verified system.

I think that's where this stuff starts to make sense to the normal web developer like us. For a web developer, they're like, okay, I don't really understand what all this stuff is or why I would need it. Then if you can break it down to be like, okay, it's just a website. You can create a website, you can save this website on this system, and then you have a website that can't be taken down, that you can't stop someone from putting up, which then has its own set of problems associated with it.

[00:34:32] But let's take it back to just if you're someone who wants to put a website up. You want it to be resilient and distributed and not necessarily beholden to any sort of hosting provider. Then you can just save an HTML file onto IPFS.

00:34:47 - Daniel Norman

Yeah, I think that's a great way to contextualize the practical use case. This use case specifically is about publishing content. If you're publishing stuff onto the web, you're traditionally probably used to using, say, Vercel, Netlify, or Cloudflare, or whatever. You're essentially publishing it and making it available, or they make it available for you, over HTTP. But again, they're running the servers for you, and they're serving it, and they've got to run all of this infrastructure that is completely invisible to you.

In the traditional HTTP world, you can also run Nginx on your own computer and publish through it. The challenge is what happens when suddenly a thousand people request data from your Nginx server. Okay, a thousand requests per second. You might be able to handle that on your computer, but suddenly it grows to 10,000 or 100,000. Suddenly that's putting a lot of strain on the computer you're running at home.

[00:35:42] IPFS is a new way to think about publishing content in general, or making content available over the internet insofar as you can run it on your own computer and make it available. But then, as soon as people request it over the IPFS network, once they get a copy of it, they can also share that copy to other people in the network. Again, it's like you're distributing that load. You're able to do that because you have these content IDs that allow you to make sure that you're not pointing to a specific location. You're pointing to what the data is, and then you can grab that from everyone who's holding a copy.

In most cases, though, I don't think running your own node is the practical path, even if it is a legitimate use case. Typically, the common way to approach this is using IPFS pinning services, services that will host your content and make it available over IPFS. You have Fleek, for example. Fleek is a nice platform, similar to Vercel, that publishes to IPFS.

00:36:37 - Anthony Campolo

You have to use Fleek. Fleek is super sweet.

00:36:40 - Daniel Norman

Yeah. Fleek is an example of a way of abstracting away running an IPFS node yourself. You can still gain all the benefits of IPFS without necessarily running a node. I know the immediate use case to think about is, like, oh, you run a node on your computer and you make your website available. Sure, you can do that, and there are a lot of cool things you can do.

But I think the real power, if you're really building applications, is that you get this interoperability. It's like, okay, you can run it on your computer and then you can tell three pinning services, for example, pin this for me. Pinning is just saying, hey, I want you to try to replicate this content ID from the network. You can think of a pinning service a bit like Cloudflare. You have pinning services like web3.storage and Infura and Pinata, and you basically tell them, hey, copy this content ID and pin it, make sure it's available, and they will try to fetch it from the network.

[00:37:29] Once they do, then suddenly you have two copies of it, or three copies, or four copies of it available through this network. That is resilient. So it doesn't matter if one service goes down; it will still be available. This is also probably a good moment to introduce this idea of IPFS gateways.

00:37:46 - Anthony Campolo

Yeah, I was going to say, for listeners, if the whole pinning concept was a little bit confusing, pinning is something that we could do an entire episode about, so don't worry about that too much. I just think the gateway concept is more important because it connects back to what we were talking about: you have a repo you can initialize, and once you create some content, you get a content hash back. A gateway is what allows you to take /ipfs/ plus the content hash and append that to ipfs.io, so you request ipfs.io/ipfs/ followed by the content hash, and then you can immediately see this content on the internet because these gateways give you an interface into the actual internet.

So this is where IPFS and the old internet, the way we think about it, actually merge together. You can put something on an IPFS node or just pin it to an IPFS service, and then it's instantly available and can be viewed through one of these gateways. So let's explain what the gateway is.

00:38:46 - Daniel Norman

Yeah, that was a great introduction. An IPFS gateway is a public service. Any IPFS node can typically also be an IPFS gateway. An IPFS gateway is just a service that translates between Web2 and Web3, or specifically in this case, provides a bridge between HTTP and IPFS. It's like this gateway through which you can access the IPFS network, where you have multiple people hosting your file. That means that if you uploaded a file onto the IPFS network, you added it to some pinning service or whatever, then assuming that it is available because you or someone else published it, you can ask any gateway for that content ID.

You request /ipfs/ followed by the content ID using the HTTP protocol from any browser, or from any programming language that has HTTP in its standard library, which is almost every programming language. At this point, you can request data from the IPFS network.
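
The gateway request described here is ordinary HTTP, which a short sketch can show. The content ID below is a placeholder, and the URL-building helper is invented for illustration; the actual network fetch is left commented out so the example stays self-contained.

```python
def gateway_url(cid: str, gateway: str = "https://ipfs.io") -> str:
    # An IPFS gateway exposes content at /ipfs/<content ID>
    # over plain HTTP, so any HTTP client can reach it.
    return f"{gateway}/ipfs/{cid}"

cid = "bafyexamplecid"  # placeholder content ID
print(gateway_url(cid))  # https://ipfs.io/ipfs/bafyexamplecid

# To actually fetch the content (requires network access):
# import urllib.request
# body = urllib.request.urlopen(gateway_url(cid)).read()
```

Because any node can act as a gateway, the same path works against other public gateways by swapping the host.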

00:39:42 - Anthony Campolo

So we have these different services that simplify things and let us get stuff online. You mentioned Fleek. Fleek is a service that is, as you say, a bit like a Vercel for IPFS. Do they run their own pinning service underneath? How do they connect the content hash to the actual website? Where do they fit in the lifecycle here?

00:40:06 - Daniel Norman

A pinning service is just a company or someone who's running some IPFS nodes and providing an API through which you can upload files, or just send HTTP requests: hey, pin this content ID or pin this file for me. Really, the only thing pinning means is hosting that file. Think of it as AWS S3 with some extra stuff on top that makes it available over IPFS. That's what a pinning service is. It's just running IPFS nodes, allowing you to add your files to them, and then making them available through the network.

Fleek is unique in that, kind of like Vercel and Netlify, it can connect to your GitHub or GitLab repository, and they will build your site for you. Then once they finish building it, they will also publish it to the IPFS network. So they will pin all the static files that you generate as part of your website, add those to IPFS, pin them to their IPFS nodes, and make them available.

[00:41:05] Not only that, they can also use this technique called DNSLink in order to connect your DNS domain name to the latest content ID. So imagine every time you make a change to your website, kind of like every time you make a new commit, you get a new commit hash. Every time you make a change, the resulting content ID will be different because it's a hash. It's a hash of all of the data in your website, and that one hash represents a snapshot of the website at that point in time. What they will do with this DNSLink technique is essentially update the TXT record so it points to the latest version and make sure that when you access the URL, you're always getting the latest version of your website. That's a brief overview of what something like Fleek can do.
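
A DNSLink record is simple enough to parse by hand: a TXT record (conventionally published at the _dnslink subdomain) whose value has the form dnslink=/ipfs/<content ID>. A minimal parser under that assumption, with a hypothetical record value:

```python
def parse_dnslink(txt_value: str) -> str:
    # A DNSLink TXT record value looks like:
    #   dnslink=/ipfs/<content ID>
    # Updating this record repoints the stable domain name at
    # the latest immutable snapshot of the site.
    prefix = "dnslink=/ipfs/"
    if not txt_value.startswith(prefix):
        raise ValueError("not an /ipfs/ DNSLink record: " + txt_value)
    return txt_value[len(prefix):]

record = "dnslink=/ipfs/bafyexamplecid"  # hypothetical TXT value
print(parse_dnslink(record))  # bafyexamplecid
```

A deploy step like Fleek's can then be understood as: build the site, add it to IPFS, and rewrite this one TXT record to the new content ID.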

00:41:49 - Anthony Campolo

Yeah, it's funny, I'm having flashbacks to the first time I was learning all this stuff. I wrote a blog post that I'll link to in the show notes, a first look at IPFS, and it is the longest I've ever spent writing a blog post because halfway through I realized that I had no idea how DNS worked. All of a sudden they're like, yeah, just create a TXT record with a DNSLink entry, and then you have your DNS pointing to IPFS, and I'm just like, wait, what? I knew how to hook a Git repo up to Netlify and push my project, and they handled DNS for me. This was the first time I ever had to really dig into how DNS actually works.

Because IPFS isn't beholden to DNS. It has ways that DNS can hook in, but it's actually completely separate, right?

00:42:35 - Daniel Norman

Yeah, that's a great question. A good way to think about it is that content you add to IPFS is immutable. You get a unique content ID, and every time you change the content you get a new content ID, so a content ID is always immutable. You can't change the content behind a content ID because the ID is a hash of the content.

Because you have this concept of immutability, how do you still have a persistent address? A URL like www.google.com remains the same even though the content might be different. Or take your own website: the content changes, but the URL doesn't. That is introducing mutability in a sense, because you have a URL that keeps pointing to new versions of the website. So DNSLink is an approach to introducing mutability to IPFS.

Content on IPFS is naturally immutable. DNSLink, which is a set of standards, lets you work around this because you can change the DNSLink TXT record. Because you can change that TXT record, you're able to keep the same URL while making updates to your website. Otherwise, if you were using just immutable content IDs, every time you made a change to your website, you would have to share a new content ID with all of your readers, and that would be nonsensical.

00:43:47 - Anthony Campolo

Yeah. If you're writing a blog, then obviously you're going to have a blog post and you may edit it or change it or create new blog posts, and then you have these new content hashes every single time. Now that makes a lot of sense.

I want to get into a little bit of the meta in terms of Protocol Labs and things like that. Is there anything else you want to speak about in terms of the technical implementation of IPFS before we get into that?

00:44:10 - Daniel Norman

There's a whole lot there, but I think that's a nice, soft introduction to IPFS. IPFS has a lot of interesting use cases. You could add content addressing into a traditional Web 2.0 app. It's not black or white. The important thing is, for example, if you're building some kind of app that has interactivity or where users can leave comments, you can actually use IPFS for some stuff on your website. If you have questions and answers, you can add all the answers to questions on your page to IPFS and then make them available for people to download or replicate. You don't have to rethink your whole architecture. You can gain a lot of the benefits of content addressing and verifiability in many different kinds of apps.

00:44:57 - Anthony Campolo

And when we say content, it's not just websites and comments. It also includes video and audio, right?

00:45:03 - Daniel Norman

Yeah. Anything that is a file, and even more abstract stuff like JSON, you can do some cool stuff with JSON.

00:45:11 - Anthony Campolo

Very cool. So you work for Protocol Labs. What is the relationship between Protocol Labs and IPFS?

00:45:18 - Daniel Norman

Protocol Labs is historically the initiator of the IPFS project. It was started by Juan Benet, who's the founder and CEO of Protocol Labs. He released the original IPFS paper, I think, in 2015 or 2016, in which he described the system and its properties, and he started working on the initial implementation around that time.

Obviously, IPFS has transitioned from being driven by Protocol Labs into a more community-driven project. Today, six or seven years after IPFS was initially started, we have open specs, working groups that are improving different aspects of the protocol, and contributions from the broader community doing all sorts of things. So even though Protocol Labs started a lot of this work, at this point a lot of the work is spread across the broader community.

You even have Cloudflare participating because Cloudflare is running IPFS gateways, and they are doing a lot of really cool things with IPFS.

They're contributing to some of the protocols. You have this other group called Fission, and they're building all these great developer tools with a really nice developer experience. They're building this great thing called WNFS, an encrypted file system built on top of IPFS, because by default everything on IPFS is public, so you'd otherwise have to handle encryption yourself. So there are many different groups and contributors to the IPFS project, and Protocol Labs is one of them.

00:46:48 - Anthony Campolo

Yeah. I pulled up the original paper, and it was actually July 2014, so we're going on eight years now, which is really impressive. I find the ecosystem around it also really interesting and impressive. This is the kind of project that one company couldn't really do all by itself. They built this system and put down the base implementation of it. Then you had new startups, like a lot of the ones you've mentioned, Pinata and Fleek and web3.storage, and you also have legacy Web2 companies like Cloudflare getting in on it.

I think it really shows that this is a fundamental tech breakthrough in a lot of ways. This is not just some new hyped project. There's something more fundamental going on here, which I think makes it a really interesting technology. I really like the fact that you have both Web2 and Web3 groups getting interested in it because I think this is, like I was saying back towards the beginning of the episode, one of the better entry points for Web2 developers. It's like a file system, and a file system is not that hard to understand.

[00:48:02] The complicated stuff comes in when you ask, well, how do you make this file system distributed and able to handle the more nitty-gritty permissions stuff? So that'll be the last thing I'll be curious to get into before we start closing it out here.

What if there's content that you don't want to be on IPFS? We're talking about this immutable, permissionless system that allows anyone to put up anything they want. I think most people, if you put on your black hat, can start to imagine very bad things that people could put on IPFS. So are there ways to block content, to moderate content? What happens when people start putting things on IPFS that we don't want on IPFS? Just to name an example, child porn. Let's go straight to the things we can all agree we do not want on the internet.

00:48:50 - Daniel Norman

Yeah, absolutely. I think that's a great question and probably a good topic to close with. The IPFS network consists of folks that are running IPFS nodes. There's tens of thousands of these nodes, if not more, maybe even hundreds of thousands. I can't remember the latest metrics, but the idea is that you have people running these nodes, and these nodes are associated with an IP address. They're operating in a certain jurisdiction.

Now, if content that is, so to speak, bad, like child pornography, shows up on the network, what do you do about it? The first thing is that Protocol Labs operates one of the public gateways, ipfs.io. It's a public good. We have this email address, abuse@ipfs.io, where you can send abuse notifications, and we block that content from being available via the ipfs.io gateway. However, it should be pointed out that the network itself is neutral.

[00:49:44] The network itself is built to be censorship-resistant. Even though you have these nodes, every node has the capability of controlling which content IDs it chooses to block. There's ongoing work on a formal specification for the format of blocking specific content IDs. With this, you can imagine something very similar to how spam filtering works on the internet, where you can subscribe to these different deny lists. You trust a certain party, you subscribe to a list, and you're trusting them that they are giving you all the different records of the IPs or the servers of spammers, so that when mail comes in, you can check it against that list and decide to block it.

The idea here is to build a system where you can opt into these deny lists. That's the specification work, and it will be followed by implementation work so that when you run an IPFS node, you can be sure that you're not breaking the rules by hosting any bad content.
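The opt-in model described above — subscribe to deny lists published by parties you trust, then check each content ID before replicating it — can be sketched roughly as follows. The list format here is a made-up array of CIDs per publisher; the real specification work may settle on something different:

```javascript
// Hypothetical sketch of opt-in deny lists for an IPFS node,
// analogous to subscribing to spam deny lists for email.

// Two hypothetical deny lists the node operator has subscribed to.
const subscriptions = [
  { publisher: "trusted-org-a", cids: ["bafyabuse1", "bafyabuse2"] },
  { publisher: "trusted-org-b", cids: ["bafyabuse2", "bafyabuse3"] },
];

// Merge all subscribed lists into one fast lookup set.
function buildDenySet(subs) {
  const deny = new Set();
  for (const sub of subs) {
    for (const cid of sub.cids) deny.add(cid);
  }
  return deny;
}

// The node consults the merged set before pinning or serving a CID.
function shouldReplicate(cid, denySet) {
  return !denySet.has(cid);
}
```

Because each operator chooses which lists to subscribe to, moderation decisions stay at the edges of the network rather than in the protocol itself.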

[00:50:41] From a high level, the network and the protocol itself are neutral. But content isn't just magically stored somewhere; it's stored on some server run by someone. That server is likely operating in a certain jurisdiction that has rules about what is allowed and what is not allowed. So there are obviously normal takedown requests for copyrighted content and for things like child sexual abuse material. Just because it's censorship-resistant doesn't mean that it's private. You don't get privacy out of the box. With IPFS, you still have an IP address. The same way you might have trouble if you're hosting all sorts of bad content using a normal HTTP server, you could have the same problem with an IPFS node.

I hope that answers the question and describes this tension between wanting to have the network and the participants in the network moderate, but the protocol itself is neutral. There's no content moderation built into the network itself.

00:51:33 - Anthony Campolo

Yeah, I think that you'll see a similar thing in the Mastodon world, where there'll be certain Mastodon servers that people know have this type of content, so they can block those specific servers. There's always going to be this tension between wanting to have an open protocol and then wanting to keep bad actors out. It sounds like a lot of thought is being put into this in the IPFS world. I think that's really important because there's always going to be this Wild West aspect to the Web3 stuff, but that doesn't mean that the people in Web3 don't care about this stuff. Obviously we care about this stuff, and obviously we want this to be a safe, welcoming environment and be something that is good for everyone.

Like you say, there's tension there, and it comes down to the nature of human interactions, and you can't always have a system that works for everyone the way we want it to. There's always going to be some bad actors.

[00:52:27] It's a complicated problem, but there's a lot of thought and work being put into it, it sounds like.

00:52:32 - Daniel Norman

Yeah, I think distributing and decentralizing the moderation work is a big part of that. Letting folks take an active role in what they're choosing to replicate on the network is the mechanism by which it can self-govern, rather than having some centralized governance mechanism. That's where you get the resilience, by distributing the decision-making to the edges of the network. You're operating in legal jurisdictions, so you have some constraints there.

But the network itself, besides allowing you as an IPFS node operator to choose what you host, really distributes that decision-making to the network participants and the norms that arise. There are legal jurisdictions, there are specifications, and there is the mechanism that allows the network to self-govern. I'm confident that we can continue to evolve this network to be a healthy one and one that serves people. It is serving thousands of people. The requests that we get through some of the gateways are immense. I think much of the NFT space is using content addressing and IPFS in order to break away from being locked into specific vendors like AWS or specific cloud providers. You get a much more interoperable web that is hopefully going to serve a broader audience of internet participants everywhere in the world, not just in our corners of the world.

00:53:47 - Anthony Campolo

Yeah, it's funny, we didn't really talk about NFTs at all. But one thing people say is that an NFT is a picture saved on the blockchain. It's usually a picture saved on IPFS, not actually the blockchain itself, and that's what enables NFTs to work the way they do.

00:54:06 - Daniel Norman

Indeed. It's probably worth flagging: most NFTs are basically using an IPFS content ID because blockchains are very expensive. It's not really feasible to store a whole image on the blockchain when you have this really limited block space. There are always these jokes about how Ethereum and Bitcoin are the most expensive databases in the world. It is true to some degree.

How do you do this thing with NFTs? You put up a content ID, and that points to an image that is being served on IPFS. The interesting thing is that once you own an NFT, you can make a copy of the content behind that content ID yourself. So if the person hosting the original content shuts down their IPFS server and you've got a copy, you can continue to make it available, or you can use a pinning service to make sure that there are multiple copies of it available on the network.
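The pattern described here — an NFT's metadata pointing at an IPFS content ID rather than storing bytes on-chain — is commonly expressed as an `ipfs://` URI that any HTTP gateway can resolve. A minimal sketch, where the token metadata and CID are illustrative inventions:

```javascript
// Sketch of the common NFT pattern: the chain holds only a small
// ipfs:// URI; the image itself lives on IPFS, where anyone with
// a copy of the data can keep it available.

// Hypothetical off-chain token metadata (the CID is made up).
const tokenMetadata = {
  name: "Example Token #1",
  image: "ipfs://bafyexampleimagecid", // names the content, not a server
};

// Resolve an ipfs:// URI through an HTTP gateway. Because the CID
// identifies the content itself, any gateway serves the same bytes.
function toGatewayUrl(ipfsUri, gateway = "https://ipfs.io") {
  if (!ipfsUri.startsWith("ipfs://")) {
    throw new Error("not an ipfs:// URI");
  }
  return `${gateway}/ipfs/${ipfsUri.slice("ipfs://".length)}`;
}
```

If the original host disappears, an owner who has pinned the same CID, whether on their own node or through a pinning service, keeps the very same URI resolvable through any gateway.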

[00:54:52] Or you can use NFT storage, which is one of the public goods services that we've launched, which will also make it available for you.

00:54:59 - Anthony Campolo

Awesome. So if someone wants to get started with IPFS, where should they go? What's the best way to get a foot into this whole brave new world?

00:55:09 - Daniel Norman

IPFS.tech is the website. You can read more about it there. I'm working now on a project to provide more ready-to-run examples of how to use it. We have documentation. We just released a blog post updating our plans for using IPFS in JavaScript, and there's going to be a lot of new and exciting stuff coming in 2023. I would definitely check out that blog post. I can also share it with you so you can put it into the podcast notes.

I would also check out the Web3.Storage blog. They have some great content about doing more modern development using IPFS and content addressing. Web3.Storage is basically a hosting service. They also have a free tier. They do IPFS pinning and a bunch of other things, and they have some really fun, cool projects that you can experiment with, also using decentralized IDs, so DIDs, decentralized identifiers. I think the Web3.Storage blog is a great place to check out if you want to start building stuff.

[00:56:12] If you want to learn more of that conceptual stuff, the IPFS docs and the blog are good places to check out.

00:56:18 - Anthony Campolo

Yeah, we'll have a lot of links for people in the show notes. Where's the IPFS community at?

00:56:25 - Daniel Norman

We have Slack, Discord, and Matrix, and they're all kind of bridged with each other.

00:56:31 - Anthony Campolo

You have a Slack and Discord. That's a first.

00:56:33 - Daniel Norman

Yeah, Slack, Discord, and Matrix, and they're all kind of bridged with each other. I still prefer Slack. I'm still not a huge fan of Discord. That's where a lot of the communities are, and that's why I use it. I find Slack to be a bit of a nicer experience, but the nice thing is you can choose whatever works for you.

If you have some more involved questions, you can go to the IPFS forum, discuss.ipfs.tech. We're hanging out on all of them, and you can find all of the links on the help page of the IPFS website.

00:57:02 - Anthony Campolo

Well, thank you so much, Daniel. This has been a super interesting conversation. I hope for the uninitiated this was not too much of a firehose of information. This is something that I've been really pushing as a good entry point into this whole Web3 world. I think that for someone who's just a regular old web developer, bridging that gap to a way to get a website online in a Web3-native kind of way can be really interesting and really eye-opening for people.

I know that when I was first learning all this IPFS stuff, it felt like the first time I had legitimately learned something new since I first got into web development. It felt like something legitimately different and new from what I had been doing before, and it got me really fired up. I was just wanting to go out and explain to people what this thing was and why it was so cool. I did an episode with Ben Myers on Some Antics about this, where I walked him through doing a basic IPFS setup.

[00:57:58] So if someone wants to see a more step-by-step tutorial-like experience, you can check that out. This should give people a good idea of why you'd want IPFS, what IPFS is, and a little bit of how to get started. So yeah, why don't you let our listeners know where they can get in touch with you? What are some good places on the socials to check you out?

00:58:19 - Daniel Norman

Yeah. So first of all, thank you for hosting me, Anthony. I really enjoyed this conversation. I really appreciate all of the contributions you've done to the IPFS community by writing about it and communicating about it, and you do it so eloquently.

Folks can find me online. My handle on Twitter is daniel2color. That's just the normal non-British spelling. It's the American spelling. So it's Daniel, the number two, and then color. I also have a website where I ramble about this and that. The website for that is norman.life, Norman like my surname. Those are probably the best places to find me. Through there you can probably find links to Twitter and whatnot.

I'm happy to answer questions. If you have questions about IPFS, if you're building with it, I'd love to hear from you.

00:59:01 - Anthony Campolo

Thank you so much. I'll close it out for us.

00:59:03 - Daniel Norman

All right. Cheers.

00:59:34 - Anthony Campolo

And it's just Daniel Norman. Pretty easy to pronounce.

00:59:39 - Daniel Norman

Yeah, it's almost like a generic Dick Smith.

00:59:42 - Anthony Campolo

Dick Smith. That's funny. All right. Cool. Are you ready?

00:59:49 - Daniel Norman

Yeah.

00:59:50 - short split/interjection

All right.
