
Supabase with Paul Copplestone
Episode Description
Paul Copplestone explains how Supabase builds an open source Firebase alternative using Postgres, Elixir, and a suite of community-driven tools.
Episode Summary
Paul Copplestone, CEO of Supabase, joins Anthony Campolo and Christopher Burns to explain how Supabase is assembling an open source Firebase alternative by stitching together existing community tools rather than building everything from scratch. The conversation traces Supabase's core architecture — a managed Postgres instance paired with a Phoenix/Elixir real-time server that listens to the database's replication stream and pushes changes over WebSockets — before moving through its authentication layer (a fork of Netlify's GoTrue), upcoming storage and serverless function features, and the practical developer experience of spinning up a full backend in under five minutes. A recurring theme is portability: because everything lives inside a standard Postgres database, users can dump their data and move to any provider without code changes, a claim the hosts validate through Redwood's own integration experience. The discussion also touches on the tension between building in public and shipping quickly, why Supabase's hosting infrastructure remains closed source for now, the challenges of scaling databases globally, and potential GraphQL integrations via PostGraphile (formerly PostGraphQL) or GraphQL Mesh. The episode closes with a look at multi-tenancy, regional hosting considerations post-Brexit, and the memorable story of migrating 800 databases from DigitalOcean to AWS under a tight deadline.
Chapters
00:00:00 - Introducing Paul Copplestone and Supabase
Anthony opens the episode by recounting how he first crossed paths with Paul at an Open Source Contributor Summit organized by Brian Douglas, where he noticed Supabase's early engagement with the Redwood community. This sets the stage for a broader conversation about Supabase's commitment to open source and its place in the developer ecosystem.
Paul then delivers his elevator pitch: Supabase is an open source Firebase alternative that differentiates itself by finding existing, battle-tested open source tools and composing them into a cohesive backend experience rather than reinventing the wheel. The hosts briefly unpack what Firebase actually is — a suite including a real-time database, hosting, storage, auth, and cloud functions — and note that its underlying databases are proprietary Google products, which creates confusion around whether people mean Realtime DB or Firestore.
00:04:42 - Why Postgres and the Developer Experience
Anthony raises the seeming contradiction of building a Firebase alternative on a relational database, and Paul clarifies that Supabase is not attempting a one-for-one API migration. Instead, the goal is to deliver a fast, intuitive developer experience — "build in a weekend, scale to millions" — powered by the most trusted and scalable open source database available. The team chose Postgres specifically because developers told them they loved it but kept reaching for Firebase due to ease of use.
Paul walks through the onboarding flow: signing up launches a dedicated Postgres instance with an Airtable-like table editor, a SQL editor, built-in auth, and auto-generated API documentation that updates as schemas change. Christopher asks about the practical benefits over a manually configured DigitalOcean database, and Paul emphasizes that everything lives inside the database itself, meaning users retain full portability and can export their data at any time without vendor lock-in.
00:09:33 - Database Scaling, Elixir Real-Time Engine, and Auth
The conversation shifts to database administration and scaling challenges. Paul acknowledges that truly global distribution and sharding remain unsolved at scale, but argues Supabase's bet on Postgres pushes those concerns far down the road for most users. Anthony then asks about the real-time engine's internals, and Paul explains how a Phoenix server intercepts the Postgres replication stream and broadcasts changes over WebSockets, giving any Postgres database real-time capabilities.
Christopher asks whether Supabase's modules are optional, and Paul confirms that a self-hosted Docker Compose experience is in the works. The discussion moves to authentication: Supabase forked Netlify's GoTrue server to work with Postgres and added providers like Azure and magic links through community contributions. Paul notes that Supabase itself still uses Auth0 internally — a running joke on the team — and that row-level security in Postgres handles authorization seamlessly alongside GoTrue's authentication.
00:20:14 - Storage, Functions, and the Portability Promise
Paul previews the upcoming storage feature, targeted for end of Q1, which will handle large files like images, videos, and PDFs — assets too costly or impractical to keep inside a database. Christopher probes the design decisions around serving generated PDFs, and Paul explains why storing and serving from dedicated storage is almost always preferable to recomputing on each request.
The conversation then turns to serverless functions, planned for Q2. Paul discusses evaluating OpenFaaS and OpenWhisk as open source alternatives to Lambda, and expresses interest in a Docker-container-based approach where developers can bring any language or runtime. Anthony highlights Cloudflare's edge functions and the Redwood team's own work with Cloud Run, and Paul reaffirms that portability is the guiding principle: functions written for Supabase should be runnable elsewhere, just like the database itself.
00:30:07 - Open Source Philosophy and Practical Trade-Offs
Christopher asks whether Supabase might ever keep parts of its stack closed source to ship a better experience. Paul is candid: the hosting infrastructure is the one piece that hasn't been open sourced yet, largely for speed and security reasons, but the long-term intent is to open everything up. He draws a parallel to Netlify, whose build tool is open source even though the platform is not, and acknowledges that building in public introduces consensus overhead that slows delivery.
The hosts and Paul discuss whether open source code is inherently cleaner than closed source code. Paul admits the dashboard has fewer tests than it would if it were public-facing, but emphasizes that the overwhelming majority of users want a hosted, zero-configuration experience rather than self-hosting. The philosophy is to deliver time-to-value first and follow with open source tooling as a guaranteed next step, checking in regularly to make sure the balance stays right.
00:34:09 - GraphQL, Multi-Tenancy, and Regional Hosting
Paul asks the Redwood hosts what features their community would most want from Supabase. Anthony immediately raises GraphQL, and Paul explains that while PostgREST powers the current REST API, PostGraphile (formerly PostGraphQL) could serve a similar role for GraphQL. Christopher suggests using GraphQL Mesh to generate a GraphQL layer from Supabase's OpenAPI spec, which Paul finds intriguing. The group also discusses auto-generating TypeScript types from the database schema via Swagger.
Christopher then asks about multi-tenancy and regional hosting. Paul describes a planned schema marketplace where pre-built patterns for SaaS or multi-tenant apps could be one-click deployed. On regions, he reveals that Supabase originally hosted on DigitalOcean before a dramatic week-long migration of 800 databases to AWS when credits ran out. Christopher raises post-Brexit data residency concerns, and Paul commits to looking into adding a UK-specific AWS region. The episode wraps with a lighthearted moment revealing the three participants are recording from Europe, California, and Singapore.
Transcript
00:00:00 - Anthony Campolo
It's almost like opinionated frameworks are things that people like. Paul Copplestone, welcome to the show.
00:00:15 - Paul Copplestone
Hey, Anthony. Hey, Chris. Nice to be here.
00:00:17 - Anthony Campolo
You've been intersecting with the Redwood community in a lot of different ways that we're going to get into. But you and I actually have an interesting intersection where we both spoke at what was called the Open Source Contributor Summit, I believe, put together by Brian Douglas, popularly known as bdougie on the internet. He is someone I met through Jamstack Radio. I had reached out to him to talk about Redwood, and throughout our conversation he was like, hey, do you want to speak at this Contributor Summit and kind of tell your story about being a bootcamp grad getting into open source?
It was funny because at the time I wasn't on the core team yet, but I was kind of a contributor, so I was in this middle ground. But I saw that you were speaking at it and I'm like, oh, Supabase. I've heard Supabase. That's the guy who posted on the Redwood forums, like, we did that Supabase integration. I kind of hopped in the chat and it was like, hey, Redwood people love Supabase, and you're like, oh, cool.
[00:01:13] And that was it. That was the entire interaction. But I thought that was really cool. The reason I tell this story to set off the episode is because Supabase is really focused on open source and open source tech, and I find that you've put a very strong emphasis on that in a way that I don't see a lot of other companies.
Before we get into all that, even though some of our listeners will know Supabase, others will probably have never heard of it. So first, what is Supabase?
00:01:38 - Paul Copplestone
Supabase is an open source Firebase alternative. We are building the features of Firebase using open source tools. I think this is where we're actually quite different from most open source tools. Of course, Firebase is not one thing. It's a suite of tools that kind of give you a backend, everything you need. Our differentiator is that essentially we're trying to find existing communities, existing tools that already solved the problem, and then kind of stitch them together to give this amazing experience around open source tooling itself.
00:02:06 - Christopher Burns
Just so everybody knows, Firebase is, from what I understand, a wrapper around Google Cloud to make their services easier to understand.
00:02:17 - Paul Copplestone
James, the founder, probably would disagree. It started as basically a real-time database that they offered. They came out about seven years ago. I think it started just as a real-time database and then particularly focused on the mobile experience.
So what Firebase is in a nutshell is a suite of tools. It's a database, a real-time database. It's hosting for your static website. It's storage for your images, movies, blobs. It's cloud functions. It's analytics. It's basically all of this and auth, all stitched together in a seamless experience with client libraries and everything working together nicely. Now it was acquired by Google, I think, about five years ago, and they're slowly integrating it more with the Google suite of tools. But Firebase is a tool. It's quite amazing. You can literally build your frontend only or your product only, whatever it is, and then the backend is handled by Firebase itself. You just get everything out of the box. It has a few limitations. Typically people see some scaling limitations.
[00:03:17] And these are the things that we're trying to solve upfront from the start with some technical decisions. But otherwise, it's a really cool tool.
00:03:24 - Anthony Campolo
Everything that you said about Firebase and being a platform with all of these different pieces is very much how I've heard it described, and that totally gels with my mental model of it. But something that has always confused me is that I've never been able to quite tell what kind of database it actually is, because it's always kind of buried behind this really nice API and this whole hosting experience. And as you said, it used to be another thing and now it's a different thing. So it's not even the same database that it was when it started. So let's not worry about what it was. What is it now? Do you know what the underlying database actually is?
00:03:59 - Paul Copplestone
Yeah, it's a proprietary one by Google. I'm guessing it's wrapping one of their other databases, so I couldn't tell you.
00:04:06 - Anthony Campolo
I guess, which one? Like Spanner, maybe.
00:04:09 - Paul Copplestone
No, because it's more sort of a NoSQL database, for starters. They actually have two. This is what's super confusing and what will confuse most people. They've got one called Realtime DB and they've got one called Firestore. Firestore is their kind of new offering. When people talk about the Firebase database, it's unclear which one they're talking about because it could be either.
00:04:29 - Anthony Campolo
And we don't know what either of them are. They're both just proprietary Google databases.
00:04:34 - Paul Copplestone
It started off as Mongo originally. Then I'm sure it's morphed into some layer on top of Mongo, the Realtime Database. And then I don't know what the next one is.
00:04:42 - Anthony Campolo
Yeah. When people have talked about it to me, I've always gotten the sense that it was a NoSQL kind of database. It wasn't a relational database, which is why I found it very interesting that you were building a Firebase alternative with Postgres, and that doesn't quite gel in my mind with how that works. So are you trying to provide a Firebase experience with a relational database, or are you actually trying to map the model of their API to a relational backend?
00:05:06 - Paul Copplestone
Yeah, so that's an important distinction. We are not trying to build a one-for-one so you can migrate away directly. In fact, we thought about it at the start, but we just didn't like their interfaces. We thought we could do a much better experience. So all we want to do is provide the experience, or an even better experience, around the developer tooling. The sort of tagline we use internally is, you can build in a weekend and scale to millions. We just want to give this experience where you get to your wow factor immediately. Then you can scale sort of indefinitely using the most scalable tools that we can find on the market.
And actually, the reason why we didn't start with the tagline, the open source Firebase alternative, is that we just wanted to make tooling around Postgres because it's obviously a great database.
00:05:51 - Anthony Campolo
That was actually an idea I had for a tagline you could change to: Postgres for developers. The assertion being that Postgres is usually for DBAs.
00:06:00 - Paul Copplestone
Yes, exactly. At some stage we'll undoubtedly change the tagline, but for now it gets the message across until we sort of bring out the extra features. The idea is that we talked to many, many developers at the start of last year and they would say, you know, I love Postgres. And then we'd say, what are you using? A lot of them would say Firebase, just because it was so easy to use. So it became our first thing. We actually just wanted to build the tooling.
But we realized we had to offer the hosted experience so that when you sign up, you can literally get a database in less than a minute and you can start building directly from the dashboard to get this experience of Firebase. That's where the idea came from: let's just build a whole open source Firebase because that's what people love and solve some of the problems that they don't love about Firebase.
00:06:42 - Christopher Burns
As a joke, I would say that you took the Bender approach and said you'll build it yourself with open source technology.
00:06:49 - Paul Copplestone
Yeah, exactly.
00:06:51 - Christopher Burns
I've looked into Supabase, but I've never used it. Sacrilege, probably, I don't know. I've seen what's happened and heard rumblings around the Redwood community. I have a production-running database that's hosted at DigitalOcean, and to give context on where I sit on databases, I hate them. I don't like them. I know they store data, but I don't really care about the data. I just want it to come back in an API, if that makes sense.
I'm really interested to know more about what the benefits are, in my mind, of hosting on Supabase in its simplest form.
00:07:34 - Paul Copplestone
Maybe it would help if I step you through the experience you would get as a first-time developer coming to our platform. So first of all, you sign up, you start a project. When you start a project, we go and we launch literally a full Postgres instance for you, not even a schema within our database. It's literally a full server for you. This one is your database and we manage it for you. We give you full Postgres-level access. So if you are a DBA or you want to get into the low-level stuff, that's great, you can. If you don't, then what we do is, one minute after we've launched it, you are on this dashboard and it kind of looks a little bit like Airtable. You can start building your tables, your database tables, directly from this table editor.
Then also from the dashboard you can use a SQL editor. You can query your database directly from the dashboard. We provide all the auth for you so you can start signing up users straight away.
[00:08:24] You can invite them along to your database. And then we also provide this very nice thing where, let's say you've started building your tables, the API itself has this auto-generated documentation directly inside the dashboard. So you just copy and paste the snippets of code. As you add a table, they sort of self-document. You can add descriptions to your Postgres tables that will get exposed to the documentation itself. So it's kind of this very seamless experience.
We keep everything inside the database. We're not doing anything special outside of it. So literally you could even dump your database and take it to your favorite database provider if you didn't like Supabase. But really we're building the tooling directly into the database itself. Now, that means that if you are not a DBA, you get this sort of seamless experience where you just focus on building your product. You don't have to worry about auth. We provide you all the keys, we provide you the security model. But if you do want to do some funky things with your database, you can.
[00:09:20] You want to add some special schemas, do triggers, do whatever you want. You can do that all from the dashboard, or you can connect your own tools, like pgAdmin, to your database and start working as you would traditionally with, say, your DigitalOcean database.
00:09:33 - Christopher Burns
Just so we know, what does DBA mean?
00:09:37 - Paul Copplestone
Database administrator?
00:09:38 - Christopher Burns
There we go.
00:09:39 - Anthony Campolo
It's the person you pay a large amount of money to watch your database for you.
00:09:44 - Christopher Burns
I feel like we may be going into a new age of DBA because I've spoken to previous companies that have built big products. They're like, oh, we pay like ten people to watch our database. I'm like, what are they doing? Migrations. And I was like, have you heard of Prisma 2? It'll do them for you.
00:10:05 - Paul Copplestone
Yes, that's true, until you get to scale. There are definitely a lot of unsolved problems in databases. Scale is one of them that I would say probably only a few companies have really solved, truly global scale where you can, you know, sign up and get that experience. Some of the other ones are migrations and working with relational databases, and Prisma 2 is definitely the type of thing that we'll be offering.
We hope to solve this so you don't really have to think about it. We can bring a sort of next-gen experience to even schemas themselves. But then when you start scaling, there are some very interesting challenges: spreading your data out across the world, deciding where you host it, whether you shard your database once it's too big. It's getting a million hits per minute. What is the real size that a database can handle? These are the ones where you start needing to have people optimizing the database, optimizing queries.
[00:10:57] But the goal of Supabase is really to push that ten times further down the road. Without you having to rely on us to solve it for you, we will surface the things that are likely necessary: hey, you're missing an index on this table; this query here is running ten times slower than it could. There is a lot of tooling that already exists inside Postgres, and we expose it inside the dashboard so you can go in and fix the problems as you see them crop up.
00:11:22 - Christopher Burns
That's really interesting how you say push it down the line because, as Anthony always says, I love all my problems fixed by someone else. One of the things that does scare me about Postgres is that I know in the future, within this year, we will be hitting an international release of our product, and we'll probably want a database in the US as well. What do we do? Do we just merge the databases and stitch them somehow, or do we just run them completely separate through two different endpoints and treat them like two completely different systems? It's hard.
When we spoke to Fauna, I was like, yeah, we do all that for you, and you just talk to us and we'll do the rest. I was like, hmm, maybe I've picked the wrong option with Postgres, but this is from someone who doesn't care about databases, and that's sacrilege in ways. But I like to always think of myself as a generalist, and databases are just that subject that is too deep for me sometimes.
00:12:24 - Paul Copplestone
It's definitely the case. As I say, there are a lot of unsolved problems when it comes to data. Data is inherently hard to deal with conflicts and things like that. So it's definitely one of the reasons, for example, we didn't build a database. We just used one of the most trusted in the world.
Postgres has been around for 30 years, and Ingres before that. So it's very well trusted, very scalable. If you ask people, hey, what is the most scalable open source database, almost always they'll recommend Postgres. So that's the reason why we chose it above anything else. And really, if you're thinking about those sort of problems, it's probably premature. If you're thinking about scalability as a problem when you're first starting your business, especially with Supabase, you shouldn't have to. Ninety percent of the reason for that is because we just chose Postgres.
And if you're migrating off, say, Oracle, you'd usually go to something like Postgres or you start splitting up and spreading the data around the world. That's an interesting one.
[00:13:18] That's definitely a problem and it's going to be more and more of a problem across the next ten years.
00:13:23 - Anthony Campolo
Do you think you're going to have to implement some sort of consensus algorithm at some point, like Paxos?
00:13:28 - Paul Copplestone
Yeah, we'll probably do it with routing to begin with. Maybe, for example, you might say which sort of data is global, which sort of data can be split across two different databases. But it's been less of a problem so far. No one's been asking for it. More people are asking for read replicas, pushing data to the edge to reduce latency. So these are the things that will likely hit first.
But to be completely honest, for the next six months we still have a lot of Firebase features to focus on before we start doing some of the nitty-gritty on the databases.
00:13:58 - Anthony Campolo
Yeah, we're going to get into some of those features that you're still working on. But before that I would actually like to talk a little bit more about the internals here because, as you say, you are using Postgres. You have this really stable tech, but you've actually built something kind of unique on top of it. I think you've talked a little bit about it in some of your interviews, and we're going to have to define a couple of things here for our listeners. First, because I think many of them aren't going to know these.
The first one is Phoenix. Phoenix is a web framework for Elixir. Elixir is a programming language that is a nicer syntax on top of Erlang. Erlang is a pretty old language that is made for concurrency: lots and lots of little independent programs, called actors, that run at the same time, the actor model. This is really the core of what makes this so unique. That's what you're using. So you're using Phoenix, and Erlang is what makes it all concurrent.
[00:14:54] My first question is why do you use Phoenix instead of just Elixir? What are you getting from Phoenix that you couldn't get from raw Elixir?
00:15:02 - Paul Copplestone
Yeah, it's a good question. I'll just point out that this is only one part of Supabase. This is the real-time. Basically what we do is we take Postgres, which is sort of a static database. You can push data in, grab data out, and we transform it into a real-time database using the Phoenix server.
The way we do that is that we listen to the replication stream of the database. So basically with Postgres, if you had two Postgres databases, you could send data to one and it would replicate the data across to the other one. We make Postgres kind of think that we are a replication target, and then it sends the data through to our Phoenix server. It translates this binary stream, and it sends out that data as a blob over WebSockets. This means that you can connect to this Phoenix server with the WebSockets, and you can listen to the stream of data coming out of your database. So anything that happens to your database, you can listen to it over the WebSocket.
[00:15:57] So what does Phoenix get us over pure Elixir? It's just a much cleaner, easier wrapper for this implementation for the WebSockets. It's got the security built in and adds some of the pre-configured components for how we'd do the sockets. It's kind of like saying, why would you use Redwood over JavaScript?
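[Editor's note: the pipeline Paul describes — decode a change from the Postgres replication stream, then broadcast it to WebSocket subscribers — can be sketched as a tiny translation step. All type and field names below are illustrative inventions, not Supabase's actual wire format or Elixir code.]

```typescript
// Toy model of the real-time server's translation step: a decoded
// write-ahead-log (WAL) change becomes the event a subscriber receives.
type WalChange = {
  kind: "insert" | "update" | "delete"; // operation decoded from the WAL
  table: string;
  row: Record<string, unknown>;
};

type ClientEvent = {
  event: "INSERT" | "UPDATE" | "DELETE";
  table: string;
  record: Record<string, unknown>;
};

// In the real system a Phoenix server would push this event over a
// WebSocket; here we just compute the payload.
function toClientEvent(change: WalChange): ClientEvent {
  return {
    event: change.kind.toUpperCase() as ClientEvent["event"],
    table: change.table,
    record: change.row,
  };
}
```

Because the server only poses as a replication target and decodes the stream, any stock Postgres database gains real-time behavior without schema or application changes.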
00:16:16 - Christopher Burns
One of the questions that I have is, are these modules optional? For example, could I use the real-time and Postgres without using your auth provider?
00:16:28 - Paul Copplestone
Yeah. So at the moment the experience is that it's a hosted platform. You sign up and you just get everything, and free of charge right now because we're in beta, we're still building, so everything's free. What we're working on this quarter is actually the self-hosted experience, the local emulator where you can run things yourself, and then from there it'll be basically kind of a Docker Compose if you want to run it, and you can choose what you want to run.
We are currently an amalgamation of six different tools, and it's soon to be seven or maybe actually eight, so you'll be able to switch things on and off. The center of it all is the database. All the config lives inside the database, so you always have to have that. But you should be able to turn some parts of it off if you don't want it.
00:17:08 - Christopher Burns
Awesome. Here's one question that I've just seen. Your login provider is Auth0, but is your Supabase provider [unclear]?
00:17:18 - Paul Copplestone
Yeah, that's a good one. We sort of had this accidental launch in April last year where we were just building. We had only been building for two months, and someone put us on Hacker News and we went to the top and we stayed there for a while. It was quite a successful launch, but we were supposed to be in stealth. We were just building, trying to figure out how we would get everything together.
At that stage we didn't have auth. We just had the database, we had the APIs, we had the real-time. And in that launch everyone asked for auth. So we spent the next three months building the auth solution and we ended up using Netlify's GoTrue server, which is an authentication module. We use Postgres as row-level security for authorization. So these two fit together quite seamlessly the way we've structured it.
However, we haven't gotten around to implementing our own auth on our own site just because we've been too busy trying to push out all the features.
[00:18:13] But it's definitely something that we joke about internally every week about when it's going to be implemented because we dogfood literally everything. Supabase is built with Supabase. Auth is the last vestige of our early days.
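[Editor's note: the split Paul describes — GoTrue handles authentication, Postgres row-level security handles authorization — means the database itself decides which rows a verified user may see. A toy TypeScript model of the idea follows; in Supabase this logic lives inside Postgres as `CREATE POLICY` rules, not in application code, and all names here are invented.]

```typescript
// Illustrative only: mimics a common row-level security pattern,
// "a user may only read rows they own."
type Row = { owner_id: string; body: string };

// userId would come from the verified GoTrue JWT; the filter below is
// what an RLS policy effectively applies to every query automatically.
function visibleRows(rows: Row[], userId: string): Row[] {
  return rows.filter((r) => r.owner_id === userId);
}
```

The appeal of doing this in the database is that every client — REST, real-time, or a raw SQL connection — gets the same enforcement for free.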
00:18:27 - Anthony Campolo
Yeah, there's a lot of different auth options for Redwood, one of which is GoTrue, and then another one is Supabase. So I'm assuming that's Supabase using GoTrue. So yeah, that's quite interesting.
00:18:39 - Paul Copplestone
Exactly.
00:18:39 - Christopher Burns
I use Magic Link, the other [unclear] provider right now, to do authentication.
00:18:45 - Paul Copplestone
Okay. Yeah. So we actually provide magic links as well. I must admit, we haven't been very good at pushing some of our changes upstream to Netlify, but yeah, we've implemented Azure just this week. And when I say we, I should point out, the community have implemented an Azure provider on our fork of GoTrue. Currently we have four providers, which are the ones that Netlify built, which are GitLab, Google.
00:19:08 - Anthony Campolo
Gosh, GitHub, probably.
00:19:10 - Paul Copplestone
GitHub and Bitbucket. Yep.
00:19:13 - Anthony Campolo
Yeah. I always make jokes about Bitbucket when I'm giving talks. Be like, why do they have Bitbucket as an option? Who uses Bitbucket?
00:19:19 - Paul Copplestone
Yes, it's been a long time since I've touched Bitbucket, but Netlify were great about implementing this. We forked it because they were implementing it on top of MySQL, so we had to do the Postgres wrapper. But yeah, someone implemented an Azure provider, and we have implemented magic links as well.
So literally you can choose how you want to allow your users to sign up. When you sign up for Supabase, you can just add in your credentials in the database, your application credentials for any of those platforms, and then you can start receiving signups from your users directly.
00:19:52 - Anthony Campolo
Just to clarify, when you say magic links, are you talking about the company Magic Link or just the concept of magic links?
00:19:57 - Paul Copplestone
Yeah, just the concept of magic links. So if someone provides their email but no password, then we'll assume it's a magic link. We send them an email.
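[Editor's note: the rule Paul states — email without a password implies a magic link — can be written down directly. A minimal sketch; the function name and types are invented for illustration.]

```typescript
type SignInFlow = "password" | "magic_link";

// If the caller supplies an email but no password, fall back to
// emailing a one-time sign-in link instead of checking credentials.
function chooseFlow(email: string, password?: string): SignInFlow {
  if (!email) throw new Error("email is required");
  return password ? "password" : "magic_link";
}
```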
00:20:07 - Anthony Campolo
Gotcha. Yeah, because we have an integration with the company Magic Link. So that could be confusing for people in the Redwood world.
00:20:13 - Paul Copplestone
Okay.
00:20:14 - Anthony Campolo
Cool. Let's get into some of the new things that you're building out. So this is kind of a naive question. Some people would maybe see that you don't have storage and wonder, how do you have a database without storage?
00:20:27 - Paul Copplestone
That's a good one. Storage, inside Firebase or in general, is for anything that is very big. Things that you would put in S3, for example, might be very large images, videos. You typically wouldn't put them in a database because the database storage itself might cost more. Also, fetching is a little bit trickier. You would store it separately in something that can be very cheap and globally distributed.
So we'll be bringing out storage for large images, large files, and we're targeting that at the end of March. So Q1, this is our big focus this month, is to build storage and release it at the end of March.
00:21:06 - Christopher Burns
What is the use case? Obviously large images. But say you have a program, a function that generates a PDF. What's the better way: serve the PDF from the function right after it gets generated, or create the PDF, save it to a storage system, and then let the user pull it from the storage system?
00:21:31 - Paul Copplestone
Yeah, that's a tough question. Depends on the use case. This is like a quiz. I feel like I'm in a job interview.
00:21:36 - Anthony Campolo
We go deep here. We've been told we're not technical enough, so we had to kick it up a notch.
00:21:41 - Paul Copplestone
Okay, great. Well, in this case, almost definitely. You would store it and you'd serve it out of the storage because they'll probably access it multiple times. So in that case, compute is more expensive than storage and data transfer. You don't want to be computing a PDF every single time someone needs to generate it. And a PDF inherently means that it probably will not change very frequently. So in this case, yeah, you're probably going to store it. And anything that is like a PDF, a document like that, you'd typically put it in storage for retrieval later.
00:22:10 - Christopher Burns
And the point of storage is not only to provide the storage, but the means to put and pull from storage.
00:22:20 - Paul Copplestone
Yes, exactly. So we'll have to generate some client interfaces as well. For this, we'll wrap it. We've got sort of this library, supabase-js. We officially support JS right now, and the community are building all the other languages. Everything that you need, auth, fetching, CRUD APIs, searching, all sorts of things, is built into this library itself, and it's modular.
So for example, each one of the tools that we support, one of them is Postgres auto-generated APIs. We build a client library just for that tool, and then we wrap it into our Supabase library. Same with GoTrue, the auth server. We build a library just for that tool, and then we wrap it into our library. Likewise with storage, whatever storage provider we choose, we'll provide a client library just for that tool, and then we'll wrap it into our Supabase library. That means that you could go ahead and use that tool itself as a standalone tool, and you can use our libraries without any breakages.
[00:23:15] And it's sort of our way of supporting open source tooling as well. We'll provide everything that you need in terms of libraries to put, fetch, manipulate, list, grab all your files, and also inside our dashboard, we'll have a nice interface for you to browse all your files, change the permissions, maybe expose them to the web if you want to, these sort of things.
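The modular layering Paul describes, a standalone client per tool wrapped into one umbrella library, can be sketched like this. All class and method names here are hypothetical, not the real supabase-js internals:

```javascript
// Standalone client for one tool (e.g. the auth server).
// Usable entirely on its own, independent of the umbrella library.
class AuthClient {
  constructor(url) { this.url = url; }
  signUp(email) { return { email, confirmed: false }; }
}

// Standalone client for another tool (e.g. storage), also independent.
class StorageClient {
  constructor(url) { this.url = url; this.files = new Map(); }
  put(path, data) { this.files.set(path, data); return path; }
  get(path) { return this.files.get(path); }
}

// The umbrella client just wires the standalone clients together,
// so swapping or reusing any one of them causes no breakage.
class SupabaseLikeClient {
  constructor(baseUrl) {
    this.auth = new AuthClient(`${baseUrl}/auth`);
    this.storage = new StorageClient(`${baseUrl}/storage`);
  }
}
```

The design choice is that each sub-client has no knowledge of the wrapper, which is what lets users adopt the underlying open source tool standalone.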
00:23:36 - Christopher Burns
One of the other things that I always think of is the complexity problem, like how long it takes to get going with DigitalOcean: setting up a database, getting the right permissions, getting another user set up with the right permissions, getting Postgres set up.
00:23:55 - Anthony Campolo
PgBouncer.
00:23:56 - Christopher Burns
PgBouncer. It takes a lot of manual time to get that there. Do I understand Supabase correctly that, within five minutes of signing up, you already have a database that has all of these things done for you? But the bonus is you could take that Supabase stack and put it on DigitalOcean also.
00:24:20 - Paul Copplestone
Yeah, exactly. Literally, you sign up, I would hope, in less than five minutes. In fact, with some other providers, we've got a one-click deploy: you deploy a frontend and a backend, you get our database with the integration, and literally all you do is provide the name of the project. It will go ahead, and you've got a fully functional, real-time to-do list in less than two minutes. I would hope that it takes even less than five minutes, and we're always working on improving that.
And then, as you say, the bonus is that everything that we do, we try to make sure that it's living inside the schema, living inside the database itself. So it's as simple as dumping the database. Your data is always available. It belongs to you. You can take it to any other Postgres provider. There's nothing special about what we do.
[00:25:11] We're trying to make it very compatible with any plain vanilla Postgres database, so you can take it anywhere you want.
00:25:18 - Anthony Campolo
Yeah. To kind of emphasize how portable it actually is, our tutorial for how to integrate Supabase with Redwood is literally to just take the environment variable and switch it from the one you're using from Heroku Postgres. You don't change any of the code. You don't change any of your migrations. The code in the project doesn't actually change at all. So when I saw that, that actually made me believe that this is portable in a way that allows you to lift and shift in a way that's very, very rare, actually, with a lot of these tools because things get so tightly coupled. And that's where I think building with Postgres is the right tech to bet on in that respect.
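Because the only coupling is the Postgres connection string, the lift and shift Anthony describes is a one-value configuration change. A small sketch (both hostnames below are hypothetical):

```javascript
// The app reads one env var; moving providers means swapping this value.
// Nothing else -- schema, migrations, application code -- needs to change.
function parseDatabaseUrl(url) {
  const u = new URL(url);
  return {
    host: u.hostname,
    port: Number(u.port || 5432),
    database: u.pathname.slice(1),
    user: decodeURIComponent(u.username),
  };
}

// e.g. moving from Heroku Postgres to a Supabase-hosted database:
const heroku = parseDatabaseUrl(
  "postgres://user:pw@ec2-1-2-3-4.compute.amazonaws.com:5432/mydb");
const supabase = parseDatabaseUrl(
  "postgres://user:pw@db.project.supabase.co:5432/postgres");
```

Both parse to the same shape of credentials, which is the whole point: any driver or ORM that speaks this format works against either host.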
I'd like to get into next what you're doing with functions. Functions are really interesting because most providers seem to have gone all in on AWS Lambda. And obviously there's still Azure Functions and Google Cloud Functions, and you've decided to kind of go a different way, and you're looking to implement some sort of open source version of this. Is that correct?
00:26:10 - Paul Copplestone
Well, yeah. So functions will come in Q2. We're actually working on it now. But we probably won't implement it ourselves. We will once again try to support an existing tool, and we try to find a community that's open to us implementing it within our suite of tools as well. It's still unclear because Lambda seems to be table stakes these days, where most people have sort of written their functions in Lambda. So what we might try to do is find something that's compatible. But at the very least, we will find an open source tool so you can switch away from Lambda if you wanted and go to an open source alternative.
00:26:44 - Anthony Campolo
Have you heard of OpenFaaS?
00:26:46 - Paul Copplestone
We have. Yep. The main ones we're looking at now are OpenFaaS and OpenWhisk, both sort of function providers.
00:26:52 - Anthony Campolo
Yeah. OpenWhisk was one of the originals. That's IBM, right?
00:26:55 - Paul Copplestone
Yes. Yep. Exactly. And they both seem pretty good. I also, you know, when we talk about functions, I don't know if you're familiar with Cloud Run, and of course actually Lambda does this now, but you don't push just JavaScript. You can push a whole Docker image, which means that you could bring JavaScript, but you could bring any language you want. And it's sort of like native. You can use whatever tooling you want within this Docker image.
I do think that's kind of the future of functions where you could provide a whole environment, not just one language. So this is one of the things we're investigating, you know, what do functions look like over the next ten years so that you could bring your own Docker container if you wanted and that could run as a function. But it's still TBD. We're not clear on the technical implementation just yet.
00:27:38 - Anthony Campolo
Yeah, I think the other ones you would want to look at would be like Cloudflare Workers. Well, I guess you wouldn't look at that one because it's not open source. But in terms of people building to a certain kind of function, that seems to be the only one I see taking off that's different from Lambda, because it's all edge network. Whereas with Lambda, you're getting a server in us-east-1; usually you need Lambda@Edge to actually get your functions on the edge. So that's the big thing there. You want to think about: are you going to be writing the types of functions that will run on the edge? Because then you'll already be set up for all this kind of distributed, crazy, global replication stuff we've already been talking about.
00:28:16 - Paul Copplestone
Yeah, exactly. And as you can tell, portability is definitely something that we want from the start. So our ultimate goal is that you could take your functions that you've written for Supabase and you can run them somewhere else, or hopefully multiple different places, just like you could take your Postgres database. So this portability story is definitely something that we're trying to make sure that we deliver on from the very start.
00:28:37 - Anthony Campolo
Yeah. The last thing on Cloud Run is we actually have a Redwood integration with Cloud Run that's been in the making for a while and will hopefully be in an upcoming release very soon.
00:28:46 - Paul Copplestone
Oh no way. So you can run the whole app essentially on the edge.
00:28:50 - Anthony Campolo
So basically people have been getting closer and closer to containerizing Redwood with things like Fargate and now Cloud Run. This is something that Chris has a ton of experience with, and we're slowly building out better and nicer integrations to basically containerize Redwood. Cloud Run is a big step in this direction.
00:29:07 - Paul Copplestone
Wow, cool.
00:29:08 - Christopher Burns
We use PM2, so we have it hosted on PM2.
00:29:11 - Paul Copplestone
Okay.
00:29:12 - Christopher Burns
And it's not ideal. What I mean is, it's good because, you know, all the functions return instantly instead of waiting for something to boot up. But then you've also got a server that's only utilized maybe 50% of the time, with all this headroom just sitting there.
00:29:34 - Paul Copplestone
Yes. I mean, that's actually one of the big problems with Supabase at the moment. We actually provide full instances and we've got a lot of people coming and kicking the tires. So there's a lot of underutilized databases right now, but it's very hard to scale a database to zero. You just couldn't do it. And you can't really offer serverless databases right now. Well, not Postgres. The only one who's really doing it because they have massive scale is Amazon themselves.
Yeah, it's definitely one that we hope to solve in the long term. That's why we're working hard to try to find compatible tools with this sort of serverless approach.
00:30:07 - Christopher Burns
Wow. The mantra, as you would say, is open source tech first. Do you think, as you grow, there are areas where the ethical decision in your mind would be: is it better to take parts of this closed source to give a better experience to the developer?
00:30:29 - Paul Copplestone
Yeah, that's actually why we started with offering a hosted platform. At the start, we were sort of doing everything. As you guys probably know, developing in public is hard, but it's also slow. You've got to make sure there's consensus on what you're going to build, lots of opinions.
At the start, we were doing sort of a bring your own database and everything quite open. Then in the end, most people just wanted us to host the databases. The infrastructure for hosting the databases is the only thing that we haven't open sourced so far. And that's largely because, you know, we've got to move fast. We've only been building for less than a year now, but also because there's data, we've got a lot of security, things that we need to make sure we don't have any security breaches. We've got to have audits regularly to make sure that we're complying with different laws and things like that.
At some stage, I would say by default, we're actually probably less open than I would like. And over time we will open up more. That is the goal.
00:31:24 - Christopher Burns
It's really always an interesting question. Netlify is inherently closed source, but their build tool is open source, so you could technically use it yourself. So it's this question of what you have open source in a closed source company, and what you have closed source in an open source community. It's a really hard one to answer, because I always think, we've developed this really cool thing and we'd like to make it open source, but then we think, yeah, but it's also not ready yet, and we could still move really fast on it. And if we abstracted that out and made it public, you'd then need consensus and stable releases; you couldn't just push something instantly and not worry about it.
So it's always an interesting thought.
00:32:17 - Paul Copplestone
The purist in me wants to open source immediately. The practical me knows that you can just build much faster. We can deliver so much more value to our customers. It's very hard to build a suite of tools, especially where you're building libraries. We've got so much that we need to build, and it's undoubtedly the case that if we'd done everything open source, the platform itself open source, from the start, we just wouldn't be even halfway close to where we are.
So I think at this stage, yeah, we've made the right decision, but we check in very regularly on this to make sure that we're going to open source everything that we can as time goes on. I think there's no good answer to this one.
00:32:55 - Christopher Burns
Do you think that open source code always has to be pristine, never sloppy? It does get the job done, but it's often the most complex solution to get the job done. When it's closed source, you can cut a few corners. You just get it done, if you know what I mean.
00:33:13 - Paul Copplestone
Yeah, yeah. There's definitely that. I know, for example, some of our closed source, maybe our dashboard, doesn't have as many tests as it would if it was open source. Ultimately, it comes down to the tool as well. For example, the value that Supabase brings is that you just want this immediate experience. There aren't actually that many people coming and saying, hey, I really want to self-host right now. Ninety-nine percent of the people coming into Supabase are coming in because they want to get a database up and running in a minute. They don't want to worry about the configuration. They don't want to worry about env variables or anything like that.
So our focus actually is delivering this platform first, and then the open source tooling will definitely come as part of it. We just try to make sure that we can move as fast as we can on that promise. Time to value first, and then open source as a guaranteed next step.
00:34:09 - Anthony Campolo
Is there anything else that we haven't talked about in this coming year for Supabase?
00:34:13 - Paul Copplestone
Yeah, I mean, I'd love to work more closely with Redwood. That would be pretty cool. Are there any things that you're missing inside your framework that we could implement? Or what are the things that your own community are asking for that you think would be an amazing backend experience that we can deliver on?
00:34:29 - Anthony Campolo
That's a good question. I would guess that Chris has lots of opinions himself. For me, my first question would be, how does it work with GraphQL?
00:34:37 - Paul Copplestone
With Postgres you can literally connect any GraphQL server that you want to the database that we provide you, and it'll work exactly the same. So if you wanted to use, for example, Prisma 2, you would just point it towards our database and then run all the schema migrations into it.
00:34:53 - Anthony Campolo
By the way, you'll get yelled at for that. Prisma 2 has nothing to do with GraphQL because it's now just an ORM tool. So technically Redwood is actually doing the GraphQL part.
00:35:01 - Paul Copplestone
Oh, okay.
00:35:02 - Anthony Campolo
So this is why I would say you're going to get people asking you for GraphQL-specific tooling, because having GraphQL talk to a database is a pretty specific thing. It's the old bad joke: adding GraphQL to your project is not a weekend project. So yeah, I'd be very curious to see, because you have gateways and stuff that you're also using as well. You're using Kong, right?
00:35:24 - Paul Copplestone
Yes. So we use Kong. It basically routes the traffic between the real-time server, the auth server, and the other services. And then for our APIs we use a tool called PostgREST. In fact, we employ one of the maintainers of PostgREST full time to work on it.
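As a sketch of the gateway's job: Kong matches each incoming path against a route table and forwards to the matching upstream service. The prefixes and upstream addresses below are purely illustrative, not Supabase's actual configuration:

```javascript
// A gateway like Kong routes each request to the right upstream service
// by path prefix. Prefixes and upstream hosts here are hypothetical.
const routes = [
  { prefix: "/auth", upstream: "http://gotrue:9999" },
  { prefix: "/realtime", upstream: "ws://realtime:4000" },
  { prefix: "/rest", upstream: "http://postgrest:3000" },
];

function route(path) {
  const match = routes.find((r) => path.startsWith(r.prefix));
  return match ? match.upstream : null; // null -> no service handles this path
}
```

With this shape, one public hostname can front auth, realtime, and the auto-generated REST API, which is why clients only ever need a single base URL.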
00:35:41 - Anthony Campolo
Yeah. So I think PostgREST would be the tool. Then you'd want something that does the same kind of thing, but generates a GraphQL API instead of a RESTful one.
00:35:49 - Paul Copplestone
So actually there is a kind of equivalent, PostGraphQL, which is just called PostGraphile now.
00:35:54 - Anthony Campolo
Yes, I've heard of that. Yes.
00:35:55 - Paul Copplestone
Yep. The idea is that you point it at your database, it introspects the schema, and it builds a sort of GraphQL interface for you to use directly. So far we've had a few people ask for it, but I mean, technically we offer a lot of things that GraphQL would give you. You can do nested queries, following foreign keys through your tables. We also provide all the auth, so we provide row-level security through your REST API.
It's not something that we'd probably build anytime soon if we were to offer GraphQL because getting it working with auth so that you don't have to have a layer in between all these sorts of things would be just a huge loss of focus for us. But if you wanted to bring your own GraphQL server, anything that works with Postgres should work with the database that we provide.
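The nested queries Paul mentions come from PostgREST's resource embedding: foreign-key relationships are expressed inside the `select` query parameter, so no GraphQL layer is needed for nesting. A small sketch of how such a query URL is shaped (base URL, table, and column names are all hypothetical):

```javascript
// PostgREST embeds related rows via select=col,related_table(cols...),
// following foreign keys between the parent and child tables.
function embedQuery(baseUrl, table, columns, embeds) {
  const parts = [...columns];
  for (const [child, childCols] of Object.entries(embeds)) {
    parts.push(`${child}(${childCols.join(",")})`);
  }
  return `${baseUrl}/${table}?select=${parts.join(",")}`;
}

const url = embedQuery("https://example.com/rest/v1", "posts",
  ["id", "title"], { comments: ["id", "body"] });
// -> https://example.com/rest/v1/posts?select=id,title,comments(id,body)
```

One GET against that URL returns each post with its comments nested inline, which covers much of what people reach for GraphQL to do.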
00:36:39 - Anthony Campolo
Yeah, no, that's cool. It's definitely something that I'll be interested in maybe checking out and seeing what we can do.
00:36:43 - Christopher Burns
Do you use OpenAPI or Swagger to provide that REST API?
00:36:48 - Paul Copplestone
PostgREST itself actually will generate on your root URL an OpenAPI interface. One of the nice things you can do is you just build your schema. You literally build your database. Then you can use a sort of generator, an ORM generator, from your Swagger to generate all your types, your TypeScript. And we're working on our CLI now, and this is one of the things that we'll do. You could literally dump all your types. We are thinking about even building our clients, things like that, all from your database schema. So you focus on putting your data types inside your database, your auth rules. Everything comes in your database, then you just dump it out.
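The Swagger-to-types flow Paul describes can be sketched as a toy generator that turns an OpenAPI-style object schema into a TypeScript interface string. Real generators handle far more of the spec; the schema and names here are hypothetical:

```javascript
// Turn an OpenAPI-style object schema into a TypeScript interface string.
// Properties absent from `required` become optional (`?`) fields.
function schemaToInterface(name, schema) {
  const typeMap = { string: "string", integer: "number",
                    number: "number", boolean: "boolean" };
  const lines = Object.entries(schema.properties).map(([prop, def]) => {
    const optional = (schema.required || []).includes(prop) ? "" : "?";
    return `  ${prop}${optional}: ${typeMap[def.type] || "unknown"};`;
  });
  return `interface ${name} {\n${lines.join("\n")}\n}`;
}

// A schema like the one PostgREST would expose for a "todos" table:
const todoSchema = {
  required: ["id", "title"],
  properties: {
    id: { type: "integer" },
    title: { type: "string" },
    done: { type: "boolean" },
  },
};
const ts = schemaToInterface("Todo", todoSchema);
```

The appeal of the workflow is exactly this direction of flow: the database schema is the single source of truth, and client types are derived from it rather than maintained by hand.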
00:37:24 - Christopher Burns
Well, if you use OpenAPI then you could use something like GraphQL Mesh, and that would generate a GraphQL API for you from the OpenAPI docs.
00:37:39 - Paul Copplestone
That's pretty cool.
00:37:40 - Christopher Burns
I've been experimenting with GraphQL Mesh using Stripe's OpenAPI/Swagger. Instead of using the JavaScript client, you literally just post to Stripe using cURL through GraphQL Mesh. It's a big-brain problem to be like, how do we simplify our client? By using more clients, you know? But it would work. Get on that experiment, Anthony.
00:38:09 - Paul Copplestone
I mean, everyone has their favorite tool, right? So there's nothing wrong with dabbling on what you want. I would say to most people, if they're coming in, be open to trying out our libraries. I think they're pretty easy to use. We've spent a lot of time trying to make sure that they're super straightforward, super easy to use, and they get you everything out of the box.
But in saying that, we're not religious about whether you use our tooling. That's why we expose the database itself. So just bring your own tooling, use what you like, try out different things, different tools for different jobs. Right.
00:38:38 - Christopher Burns
I was going to say, what I would like to see is team management on a layer level. There are two ways to do team management. One would be to create it so every team has its own database inside of the database. I don't know the correct terms, but I know it's like a database inside the container or whatever it is. Or you just have every team on one database and then obviously filter out the other teams.
There has been progress in areas with things like Prisma multi-tenant that would do that for you through Prisma. But if this is like a super version of Postgres, then we would look at features like Fauna is building, saying, okay, can we handle things like automatically pushing it to multiple regions? Can we handle team support easier? So at the database level you're not going to get someone else's data. There are things that I would think could be really powerful to have.
00:39:58 - Paul Copplestone
By team management, you don't mean my team of developers we're building with. You mean your customers?
00:40:05 - Christopher Burns
Yeah. So multi-tenancy is.
00:40:08 - Paul Copplestone
Multi-tenancy. Yeah. Okay. Postgres can do multi-tenancy. Actually, Auth0 seems to have some undocumented multi-tenancy functionality. And we've had a couple of people ask for it as well. So likely we'll bring out some guides. Also, one thing I'll mention now, we're kind of thinking about bringing out a bit of a marketplace where people can provide schemas that are sort of pre-built, so you don't even have to think. If you don't know how to build a multi-tenant application or you don't know how to build a SaaS application or something like that, then you can sort of just choose one that's been pre-built by someone who's got all the security models and everything that you need. You just one-click, you choose that one and it's up and running pre-configured.
Yeah, multi-tenancy is definitely one thing that we're already thinking about, and we'll definitely provide it in the future, but there's nothing limiting you right now. If you wanted to build multi-tenancy, it can be done already with the existing tool.
[00:41:03] We'll just make the techniques easier, as you say.
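In Postgres, the single-database approach Christopher describes is usually enforced with row-level security, so every query is implicitly scoped to the caller's tenant. Conceptually a policy behaves like the filter below; in practice you would write it in SQL as a `CREATE POLICY ... USING (tenant_id = ...)` clause rather than in application code, and all names here are hypothetical:

```javascript
// Conceptual model of a row-level-security policy: every read is
// automatically restricted to rows belonging to the caller's tenant,
// so one tenant can never see another tenant's data.
function withTenantPolicy(rows, currentTenantId) {
  return rows.filter((row) => row.tenant_id === currentTenantId);
}

const invoices = [
  { id: 1, tenant_id: "acme", total: 100 },
  { id: 2, tenant_id: "globex", total: 250 },
  { id: 3, tenant_id: "acme", total: 75 },
];

const visible = withTenantPolicy(invoices, "acme");
// Only acme's rows come back; globex's row cannot leak.
```

Doing this at the database level, as Paul suggests, means the guarantee holds even for queries the application author forgot to filter.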
00:41:06 - Christopher Burns
My only other question is, I checked your regions and you host in the EU. Whereabouts in the EU?
00:41:15 - Paul Copplestone
I think that one's in Berlin. So we're hosting everything on AWS right now. I'd have to double check, so don't hold me to that. But I think it's in Berlin.
00:41:23 - Christopher Burns
Berlin or Frankfurt.
00:41:25 - Paul Copplestone
Probably Frankfurt. Yeah, okay. Yeah, I think it's Frankfurt.
00:41:29 - Christopher Burns
I'm a UK citizen that's had its head chopped off with Brexit. And our customers are very wary of storing data in Europe. And that's one of the reasons we use DigitalOcean is because we can host it in the UK and say, don't worry, your data is hosted in the UK.
00:41:47 - Paul Copplestone
But actually AWS has a region, right? A UK region.
00:41:51 - Christopher Burns
Yeah, London. It's eu-west-2.
00:41:55 - Paul Copplestone
Okay. Yeah. I'd have to check why we don't have it. But yeah, we actually started, funnily enough, on DigitalOcean. They were the first to kind of give us credits, and databases are not cheap to host. So we were very reliant on credits. We started off with DigitalOcean, but we quickly ran out of credits. And AWS then gave us some.
So we had to migrate at the time, 800 databases from DigitalOcean to AWS. And that's why all the regions, you know, they don't look like AWS regions. It's because they're still the names of the DigitalOcean regions, basically. But there's no reason we can't add another region. I'll make sure that we've got the UK and we'll put the country labels on them.
00:42:37 - Anthony Campolo
Yeah. That story of migrating 800 databases you told on your SE Daily interview, which we'll link to in the show notes. I highly, highly recommend people check that out because that's quite a challenge, I would imagine.
00:42:50 - Paul Copplestone
It was some long sleepless nights. I think we had a week or something like that to do it, before we started paying tens of thousands of dollars. So yeah, it was quite stressful.
00:43:02 - Christopher Burns
That's a wrap, I guess. Thank you for your time.
00:43:04 - Anthony Campolo
Thanks a lot, Paul, for being here. We appreciate all of your contributions to the open source world and being involved in talking to us in the Redwood community as we've been kind of building out stuff with it. We really appreciate it. And I know lots of people are really excited to continue building with it. So it'll be fun to see where Superwood goes in the future.
00:43:24 - Paul Copplestone
Yeah, guys, thank you very much for having me on your podcast. I hope everyone will check it out, and also everyone listening in the future. I hope they check out Redwood as well, and hopefully we have some nice integrations built and some nice tutorials on our website pointing towards Redwood soon.
00:43:40 - Christopher Burns
One for the listeners. Just so you know, all three of us are currently on different continents of the planet. Where I am in Europe, it's currently 9 a.m. Anthony, where are you?
00:43:53 - Anthony Campolo
I'm in California. It is 1 a.m.
00:43:55 - Christopher Burns
And Paul?
00:43:56 - Paul Copplestone
I'm in Singapore and it's 5 p.m., so I think I got the lucky side here. Thank you very much, guys.
00:44:01 - Christopher Burns
Thank you for your time.
00:44:02 - Paul Copplestone
Cheers.