
Cloud 66 with Khash Sajadi
Episode Description
Cloud 66 CEO Khash Sajadi discusses ten years of building a multi-cloud deployment platform, PaaS economics, and the future of developer infrastructure.
Episode Summary
Khash Sajadi, CEO of Cloud 66, joins the FSJam podcast to share the origin story and evolution of his multi-cloud deployment platform. What began as an app store for data centers in 2012 quickly pivoted after metrics revealed that a Ruby on Rails deployment tool was gaining the most traction. From there, Sajadi explains the three major bets the company made—on the economic limitations of traditional PaaS, on containerization as a paradigm shift, and on Kubernetes as an organizational and governance model—all of which proved correct over the company's ten-year lifespan. The conversation covers how Cloud 66 supports roughly ten cloud providers while maintaining quality standards, how its "Control Tower" abstraction layer enables zero-downtime migrations between providers, and why the company deliberately avoids certain markets and use cases that fall outside its strengths. Sajadi shares colorful anecdotes ranging from deploying radiation monitoring systems after the Fukushima disaster to battling cryptocurrency miners abusing build pipelines. The discussion also touches on on-premise deployment challenges, the company's dogfooding process involving a "Bootstrap" environment, and Sajadi's excitement about the next generation of PaaS providers like Render and Fly.io pushing innovation forward after years of stagnation following Salesforce's acquisition of Heroku.
Chapters
00:00:00 - Introduction and Khash's Coding Origins
Anthony Campolo welcomes Khash Sajadi, CEO of Cloud 66 and the show's first sponsor, to the FSJam podcast. The conversation opens with Khash recounting how he got into programming as a teenager through electronics and circuit building, eventually realizing a Commodore 64 offered unlimited creative potential without the ongoing cost of physical components.
Khash traces his programming journey from BASIC through Pascal, Delphi, Ruby, and Go, humorously noting that he's now effectively banned from touching the company's codebase. Christopher Burns relates to the co-founder experience of drifting away from hands-on coding, and the group discusses the challenge of keeping up with the rapid pace of change in JavaScript frameworks and developer tooling while running a business that sells to developers.
00:03:03 - Cloud 66's Origin Story and Strategic Bets
Christopher asks how Cloud 66 has evolved over its ten-year history to compete with platforms like Heroku and Netlify. Khash candidly explains that the company started as an app store for data centers before pivoting within two months after metrics showed that a Ruby on Rails deployment tool was their breakout product. He recounts competing with now-forgotten platforms like AppFog and even Docker's early PaaS ambitions.
Khash outlines three foundational bets the company made: that traditional PaaS had unsustainable economics, that containerization would transform operations as a paradigm rather than just a technology, and that Kubernetes would win not just technically but through its open-source governance model. He argues the company's decade-long survival validates these bets, and explains how supporting multiple cloud providers became a double-edged sword—offering customers economic flexibility and protection from vendor lock-in, while also exposing them to unreliable smaller providers until Cloud 66 learned to be more selective.
00:10:55 - Cloud Providers, Use Cases, and Customer Stories
Anthony asks about the range of supported cloud providers, and Khash explains that Cloud 66 supports around ten providers including the big three, developer-friendly options like DigitalOcean and Hetzner, and regional players like the Brazilian provider Latitude. He also notes the company steers clear of the Chinese market due to regulatory complexity. The conversation shifts to the types of applications customers run, from static sites built with Gatsby and Next.js to dynamic server-side applications.
Khash highlights two major use cases: creative agencies deploying sites for clients, and SaaS providers spinning up dedicated, ring-fenced instances for individual customers across different regions and cloud accounts. He mentions notable customers like Pixar, who use Cloud 66 to build and distribute RenderMan, their industry-standard ray-tracing software. The discussion touches on how the platform serves as a cloud abstraction layer, allowing teams to run production on AWS while using cheaper providers for staging environments.
00:16:50 - Multi-Cloud in Practice and Edge Compute
Khash shares a compelling story about a charity that monitored radiation levels near Fukushima using drones and needed multi-cloud redundancy, since the very disaster being monitored could take down a nearby data center. Christopher then asks about how Cloud 66 handles the differences between cloud providers' permission systems, and Khash explains their Control Tower abstraction layer, which enables zero-downtime migrations of static sites between providers with continuous logging.
The conversation moves to edge compute, where Khash distinguishes between legitimate edge use cases and scenarios where Cloud 66's traffic-shaping tools in Control Tower can achieve the same goals without spinning up containers at the edge. He describes how their custom expression language handles traffic redirects based on language, browser type, mobile detection, and referrer data, saving customers money while keeping the platform manageable. The group also discusses the real-world implications of major cloud outages and the ironic situation where surviving an AWS S3 outage doesn't matter if half the internet is down anyway.
00:24:32 - The Future of PaaS and Community
Khash shares his outlook on the next five years of cloud infrastructure, expressing excitement about new PaaS entrants like Render and Fly.io breaking the stagnation that followed Salesforce's acquisition of Heroku. He sees potential for what he calls "PaaS 2.0" to become the killer application for Kubernetes, much as cloud computing became the killer app for virtualization. The discussion also covers Cloud 66's community of roughly 3,000 active Slack members and their open-source contributions.
Christopher raises the challenge of customer support when issues could originate from Cloud 66, the customer's application, or the underlying cloud provider. Khash explains their approach using network effects from managing tens of thousands of servers, internal canary deployments, and a system called Mission Control that gives their small support team visibility across millions of daily deployments. He emphasizes the 80/20 rule in product design—being excellent for standard web applications while openly acknowledging that high-frequency trading workloads aren't a fit.
00:31:27 - On-Premise Deployment and Dogfooding
The final segment tackles on-premise deployment, where Khash describes scenarios ranging from finance and healthcare customers to being escorted by armed guards into underground server rooms in London. He explains how Cloud 66 helps SaaS providers manage version control across multiple on-premise installations, solving the painful problem of maintaining bug fixes across dozens of customer branches running different versions of the same software.
Khash then describes the company's dogfooding process, which involves a dedicated "Bootstrap" instance of Cloud 66 that deploys all other instances, including the production SaaS platform and customer on-premise installations. He acknowledges the recursive nature of the setup—Bootstrap itself can't be deployed by Cloud 66, so it runs on a manually maintained Kubernetes cluster. The episode closes with Khash mentioning a promotional discount for listeners and inviting them to try Cloud 66's free trial.
Transcript
00:00:11 - Anthony Campolo
Khash. Welcome to the show.
00:00:13 - Khash Sajadi
Hi. It's great to be here.
00:00:14 - Anthony Campolo
Very happy to have you here. You are the CEO of Cloud 66. It's worth mentioning up front, Cloud 66 is the first sponsor for FSJam, so disclaimer right there. I'm really happy to have you here to talk about it. It's very in line with the things we talk about here in FSJam, so I'm definitely curious to learn more. Before we get into it, we'd love to hear a little bit about your background, how you got into coding, and then how you got into creating this company.
00:00:44 - Khash Sajadi
Sure, absolutely. It's a long history, not because it's convoluted, but because I'm very old. There are a lot of years to cover, and I'm not going to bore you with all of it.
Basically, I got into coding because I got into electronics and building different circuits. My dad was a big advocate of building things that made profit. Because I was a student, I was 12 or 13, there wasn't much of a business going on. I realized I liked creating things, so instead of just buying components and putting them together, I might as well make one investment, which was a computer at the time. It was a Commodore 64, and I could just create programs.
It was really an economics question: creativity without ongoing cash-flow issues. So I got into computer programming, and that's been the past 30-something years of my life. That's how I got into computers.
[00:01:32] I got into the Commodore 64, as I said, and BASIC as a programming language. Then in high school I moved on. I wrote Pascal and Turbo Pascal and then Delphi, which was all Windows-based, and then I moved to Linux. I did Ruby, Go, and whatever else. Now I think I'm kind of banned from touching the codebase within the company. So I keep myself happy thinking that I can code, and I never test that theory by not doing it, so I'm kind of in ignorant bliss.
00:01:58 - Christopher Burns
That's actually a really interesting one. The further you get into being a co-founder and running a business, the less time you actually have to code. I find it's one of the most interesting things: could I actually still code? It's a big question, and how deep you can still go is another, especially when there are newfangled languages coming out every six months that everybody's trained on within a year. You're like, are we still using Ruby?
00:02:22 - Khash Sajadi
Exactly. Every time something new comes out, it's like, is it still alive? Is Ruby still alive? Is JavaScript still alive? But TypeScript is definitely a thing now. Whatever's next.
So when you see the pace of change, for example in JavaScript frameworks, you kind of go, should I wait long enough that I don't have to learn it and the next one is going to come along, or should I invest in it?
For me, it's more about, as you know, what we do is about selling to developers. For us, being close to our customers and as a person responsible for the product itself, I really want to be close to our customers. So I have this kind of duty to learn new things, try it on Cloud 66, and see how the developer experience is. That keeps me, I want to say, at least in the game, but it would be an exaggeration to say I'm actually competent.
00:03:03 - Christopher Burns
You've been running Cloud 66 for 12 years, by the looks of it.
00:03:07 - Khash Sajadi
Twelve years? Ten, actually. We just had our ten-year anniversary offsite right here in Oakland because there was a conference we wanted to go to as well. So we got all the team, well, most of the team, over and we had a fun week. It was good after two years of not doing anything company-wide. Yeah, it's been ten years that this has been going on.
00:03:25 - Christopher Burns
Within those ten years, we've seen the rise of what are called third-tier cloud providers.
00:03:31 - Anthony Campolo
Layer two, layer two.
00:03:33 - Christopher Burns
That's it. You know more than me. Would you say you saw this coming as a company that's been doing this for ten years? And how has Cloud 66 evolved from its original idea to now competing against Heroku and Netlify and the rest of the world?
00:03:47 - Khash Sajadi
That's a very good question.
I'm not going to claim any credit for foreseeing this. We started out doing something else and pivoted very early on. I can still claim we've been doing this for ten years, though I should shave off the first couple of months, before we realized the money was somewhere else.
We started in 2012, right? Six years into the iPhone, and the App Store was about four years old. We tried to build an app store for data centers. That was the idea. We thought packaged, ready-to-use things that worked nicely together on a well-defined, opinionated stack and platform were a good idea. So we just started doing that. The company was even called something else, and we ran it as an app store.
I think we rolled out about 10 or 12 different apps in the first iteration, and then, as all good startups are supposed to do, we started collecting a lot of metrics. We saw that out of the 12, one app had gained quite a lot of traction, and that was the one that would deploy Ruby on Rails onto any cloud provider.
[00:04:43] So we thought, you know what, we're going to drop everything and focus on this. That was the birth of the company. That was kind of how we started.
As you can imagine, when you start off with an app store for data centers, there's always a data center in the equation. As soon as we focused on doing ops for existing cloud infrastructure, we found ourselves competing with the likes of Heroku, where they own the entire stack. So we had this kind of competition with Heroku.
We've had a lot of different competitors that many people don't even remember. AppFog was one. We've competed with a lot of PaaS companies; even Docker started life inside a PaaS, dotCloud, with Docker itself as the side project that ended up being the more important one. So we've had the whole shebang of companies that we've competed with. Heroku is probably the only one from the old generation still around, but there are new ones coming up, and I'm very excited to see how those guys are doing.
00:05:33 - Christopher Burns
There seems to be this thing where JavaScript especially is a fashion industry, much more than it is about programming. A new website every six months, a new design, a new programming language every few years. We've seen this really weird cycle from static to serverless to server-full. It's a circle that keeps rotating every few years, and each turn has a new platform that hosts it better than the previous standard. What would you say is the biggest benefit of connecting to all the major providers and giving people the choice to pick one over another?
00:06:11 - Khash Sajadi
If I wanted to unpack that, I'd go with two main things. You're completely right in saying that every six months you see new technology, new ways of doing things, and we all come along and say, wow, this is amazing. Then we do a little research and it's like, well, this was happening six years ago. It's deja vu.
We started the company with a couple of big, bold goals. One was that we're going to make a few bets, very strong bets, not on technology but on paradigms. I think we made three major bets in the past ten years of the company. I think we got all three of them right. I'm proud to say that's the case. I'm not going to claim that much credit. A lot of it is luck and hindsight.
The first one was around the fact that PaaS, as it was, say Heroku, has fundamental economic issues.
[00:06:57] It is not a technology issue, and we can dive into what that is. The second one was around containers and containerization, not as a technology, but as a way of doing things. The third one was around Kubernetes, again not as Kubernetes technology. I'm talking about technology versus, say, Mesos and Mesosphere or other container orchestration platforms and frameworks, but also around the organization, management, and governance of open source. Those are the paradigms we bet the company on.
I think the fact that most startups die within their first three years, while we've been around for ten so far, is some sort of testament that we got the bets right. That's on the fundamentals.
Each one of those, the economics of PaaS, the governance of open source, and the way containers fit within an organization, separating ops from devs and the responsibilities of different departments and organizations from each other.
[00:07:46] So those are the paradigms that we think are the successful ones. Those are very deep subjects we can get into.
As for the choice you mentioned in your question: giving customers more choice of cloud providers is a double-edged sword. A lot of times when we surveyed our customers or potential leads, lock-in was mentioned a lot, and the lock-in worry was usually AWS: we don't want to run everything on AWS, we don't want lock-in to one vendor. But it doesn't seem to be that much of an issue right now, not because multi-cloud became the big deal it was promised to be back then, but because the market has relaxed a little. Nobody's going to say, I'm running Windows throughout, therefore I have lock-in to Microsoft.
But on the other side, by giving customers options, we allowed them to benefit economically from the downward pressure on cloud unit prices.
[00:08:40] So every year, at least before the inflation pressure of the past 18 months, cloud unit prices would drop. VMs would get cheaper, SSDs would get cheaper, even network, to a degree, would get cheaper, and that was the case for a long time. That margin was not passed on to customers when they were running on a PaaS.
This was especially the case when DigitalOcean came about, giving you VMs at $5 a month back then; passing that on to the customer is a great thing to do.
The other side of this story is that by allowing a lot of cloud providers and decoupling the customer's application from the cloud, while we allow the customer's workload to move around, you expose them to a risk of not really thinking about the fundamentals of that cloud. I think we found that issue early on and paid a big price for it.
What do I mean by that? A lot of times, as a cloud-friendly company, we get inbound requests from all sorts of cloud providers for deep integration of Cloud 66 with that cloud provider. In the early days of the company, we would say yes to all of them.
[00:09:43] You have a cloud provider with ten servers and you call yourself a cloud provider. We would go and integrate with you. The ultimate issue was that while we made use of that cloud provider easier, the reliability of that cloud provider was not the same as, say, AWS. So customers would ultimately suffer from downtime, unreliability, and all the issues that come with it.
What we learned was, while we have to broaden the spectrum of cloud providers that we support, we have to use cloud providers that are reputable and that we actually trust, not just their quality, but also their management systems, their practices around changing APIs, and all sorts of things around that.
Another thing our customers benefit from in moving between clouds: every cloud provider goes around giving credits to startups and new companies, all those free credits and usage allowances. Customers can bring those to Cloud 66 and essentially have a Heroku-like experience for free on the cloud provider of their choice.
[00:10:40] And that's good for the cloud providers, and that's good for the customers. So that's where we are right now. Those are the two criteria that we choose for cloud providers. When the customer comes to us, they know they get a good cloud provider experience of their own choice, and they also have access to the cloud providers that are out there.
00:10:55 - Anthony Campolo
How many cloud providers do you have?
00:10:57 - Khash Sajadi
We always allow customers to bring their own servers, whether from a cloud provider we don't support or bare metal somewhere in a data center, old-school style. That's always an option: they can bring any server they want and still get the benefits of the platform.
I think we have overall, I think ten, if I'm not mistaken. You have the usual suspects, the big three: Google, Amazon, Azure. Then you have the second tier, I would say the very developer-friendly ones, DigitalOcean, Linode, which is now, I think, part of Akamai, Hetzner. You have some regional ones that we do support. For example, Maxihost, I think, is called Latitude now, which is a Brazilian cloud provider and it's very popular with Brazilian software developers. We have other ones around the world.
00:11:36 - Anthony Campolo
What about Alibaba?
00:11:37 - Khash Sajadi
China is an economy, a region, that we've never entered deeply. First of all because we're not very familiar with that market, and the regulation seems to be of a specific type where we don't have the bandwidth to make sure we're always on the kosher side of things.
00:11:52 - Anthony Campolo
It's a nice way of saying you don't have time for the politics of the CCP.
00:11:56 - Khash Sajadi
I guess that's one thing. I think the world is big enough for us to be able to grow bigger without having to deal with a lot of meta issues.
00:12:02 - Anthony Campolo
So what kind of applications do you think most people run on Cloud 66? It sounds like you have a wide breadth of servers, containers, static sites. You can do a little bit of everything, but do you feel like there's a sweet spot that most people come to you for?
00:12:16 - Khash Sajadi
Yes, technically we have, under the hood, two different products. One is for static sites, one is for dynamic, old-school server-side sites, if you will. Static sites, as you can imagine, are mostly built with things like Gatsby, Next.js, or the latest flavor of any static site generator. The output runs on object storage like S3, or whatever your cloud provider's equivalent is, fronted by a WAF, a web application firewall, where traffic is redirected, filtered, and served, and you can apply rules to it.
You can think of it as a Netlify on your own cloud provider, essentially. That's one side of the business. When you have something like this, a lot of creative agencies use that to deploy static sites for their customers, whether it's marketing sites or blogs or things of that sort.
The second side is dynamic, where you have an application, whether it's Node.js or Rails or a Go API fronted by React or an app backend, where our customers use that.
[00:13:13] We have two major use cases for that. One is, again, creative agencies where they develop something for a customer.
We have major companies running critical infrastructure on that. Pixar, for example, is one of our customers. They use our services to compile, build, and sell their flagship product, RenderMan, the industry standard for ray tracing. For any animation that's worth watching, I suppose.
Then there are smaller agencies doing the same; that's one side. The second side, where we've seen a lot of traction in the past three or four years, is SaaS providers wanting on-prem or dedicated instances for their customers. Essentially they have a SaaS service, something delivered over the web, and they use Cloud 66 to deploy dedicated, ring-fenced instances for every single customer. We make that easy to do in a uniform way on different cloud providers, in different regions close to their customers, under the customer's own account if they want, and keep these deployed and synchronized across the world.
[00:14:09] So those are the two main use cases that we see. As for the application type, I think it's such a long tail that I wouldn't be able to give you a specific application type. I would say we've had a lot of travel industry, a lot of e-commerce that's been hosted on Cloud 66. As you can imagine, in the past two years, travel industry websites have not done great, but e-commerce has been on the up. So it's a wide range.
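The ring-fenced, per-customer model Khash describes, the same application stamped out uniformly across different clouds, regions, and customer accounts, can be pictured as generating one deployment spec per customer. Here is a minimal sketch in Python; all names and fields are hypothetical illustrations, not Cloud 66's actual API:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    cloud: str        # e.g. "aws", "digitalocean"
    region: str       # region closest to this customer
    account_id: str   # the customer's own cloud account

def deployment_spec(app: str, version: str, customer: Customer) -> dict:
    """Build one ring-fenced deployment spec for a single customer.

    Every customer gets the same application at the same version,
    isolated in their own cloud account and region.
    """
    return {
        "app": app,
        "version": version,
        "cloud": customer.cloud,
        "region": customer.region,
        "account": customer.account_id,
        "isolation": "dedicated",  # ring-fenced: no shared infrastructure
    }

customers = [
    Customer("acme", "aws", "eu-west-1", "111111"),
    Customer("globex", "digitalocean", "nyc3", "222222"),
]
specs = [deployment_spec("billing-api", "2.4.1", c) for c in customers]
```

Each spec differs only in where it lands; the application and version stay uniform across the fleet, which is what makes upgrading dozens of isolated installations tractable.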
00:14:32 - Christopher Burns
Yeah. This is something I've been thinking about as we've been speaking: it's kind of like the ultimate abstraction over any cloud provider. Every single one of them has an EC2 of some sort and an S3 of some sort, but you don't want to learn all their interfaces. I guess you can use Cloud 66 to use them through one abstraction.
00:14:52 - Khash Sajadi
That's correct.
That's one of the uses we see, and it's interesting in the sense that, beyond not having to invest in learning the interface and intricacies of each cloud provider, a lot of our customers have multiple environments, QA, staging, UAT, and production. Often they use Cloud 66 to deploy production to AWS, for example, and QA, test, and whatever else to another cloud provider that's potentially cheaper, or where they can pack more into a server, or where they have bare metal.
It is an abstraction that people benefit from in different ways. One is not having to invest in what's the latest of the 500 three-letter acronyms on AWS. Or it could be that I'm going to use DigitalOcean for non-production workloads and AWS for production, or the other way around. So there's a lot of that going on.
I think there was a time, maybe six or seven years ago, when not all cloud providers had data centers everywhere they do now.
[00:15:52] For example, DigitalOcean for a long time didn't have any data centers in India, which would have put them at a disadvantage. So some of our customers used us as a way of compensating, going with another cloud provider that had closer proximity to their customers. But in the end it comes down to the laws of physics, which basically say you need about 25 data centers around the globe to cover everyone within 200 milliseconds over fiber optics, and all major cloud providers have those 25 data centers.

One of the cool use cases we encountered early on, which has something to do with multi-cloud and regionality, came right after the nuclear disaster in Japan following the tsunami. We started the company in 2012, and that happened, I think, in 2011. A charity had put a Geiger counter on a mobile phone and mounted it on a drone. They would fly these radiation-measurement devices into potential disaster zones, zones you don't want to send humans into first.
[00:16:50] They would map the readings onto a website. So, in real time, you would know if, for example, the wind shifted and was dragging potentially hazardous radioactive material your way, and all sorts of things around that. As you can imagine, when you want to run something like this, you need to be in different regions, and you need to be close to where your workers are. If it's something like a nuclear power station, the data center powering your app might go down because of the very disaster you're trying to monitor.
So we had users deliberately using different cloud providers in different regions, and this charity benefited from that: within 20 minutes they could switch over from one cloud provider to another. So you see another use case of multi-cloud that Cloud 66 enables.
00:17:30 - Christopher Burns
Do you, as a service provider, act as the glue between providers? Let me give a very specific example, because I've run into this: S3-style storage technically has different rules for how permissions operate on DigitalOcean versus Amazon. Do you abstract that in a way that the same permissions are applied on both?
00:17:52 - Khash Sajadi
On the static side of our business, yes, we do. You bring in your static site generator, which will spit out a bunch of HTML and JavaScript files. Take the example of Gatsby, a React-based static site generator: you run something like npm run build, which generates a bunch of static files. Those static files then go onto your S3 equivalent, whatever object storage you have, say on DigitalOcean or AWS.
What we have is a layer in front of it, what we call Control Tower. That's where we control access to any of those sites. So you can, in theory, move your site from AWS to, say, DigitalOcean with zero downtime. As the migration is happening, your traffic gets redirected, your permissions get moved over, and your access for users and the logs are even continuous. So a user that is served the JavaScript from AWS and the CSS assets from DigitalOcean will have a continuous log.
[00:18:50] That's all while the migration is happening. Now, migration is not something you do every day, but it shows that you can have a unified, uniform front on top of all these different ways of managing objects.
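The Control Tower behavior described above, one front door over two providers' object stores, with fall-through reads and a single continuous log, can be sketched as a toy. This is an illustrative model under invented names (`ObjectStore` and `ControlTower` here are assumptions), not Cloud 66's implementation:

```python
class ObjectStore:
    """Toy object store standing in for S3, DigitalOcean Spaces, etc."""

    def __init__(self, provider: str):
        self.provider = provider
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects.get(key)


class ControlTower:
    """One front door that hides which provider serves each object.

    Reads try the new store first and fall back to the old one, so a
    site being migrated between providers is served without downtime,
    and every request lands in one continuous log no matter which
    backend answered.
    """

    def __init__(self, old, new):
        self.old, self.new = old, new
        self.log = []  # (key, provider) pairs: one continuous stream

    def get(self, key):
        for store in (self.new, self.old):
            data = store.get(key)
            if data is not None:
                self.log.append((key, store.provider))
                return data
        return None


# Mid-migration: the JS has moved to DigitalOcean, the CSS is still on AWS.
aws = ObjectStore("aws")
spaces = ObjectStore("digitalocean")
aws.put("site.css", b"body{}")
spaces.put("app.js", b"console.log(1)")

tower = ControlTower(old=aws, new=spaces)
```

A real implementation would also copy permissions and reshape traffic, but the fall-through read order is the core of the zero-downtime trick: the front answers from whichever store currently holds the object.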
00:19:02 - Christopher Burns
Yeah. This is actually really important when we see things like AWS just go down for hours and half the internet shuts down. I bet that's where you see a lot of providers say, we're just going to spin up DigitalOcean for a few hours to get back online.
00:19:18 - Khash Sajadi
Yes, this has happened. If AWS S3 is down, we'll probably all have some existential problems, but other services can be affected too; it's not limited to a single cloud provider, the biggest ones or the small ones. In those cases, we've seen this happen.
One anecdote, kind of like a strange twist of all this, is that we built our product in a way that we are very immune to one single cloud provider going down, bringing Cloud 66 itself down, as you would imagine from a service like us. So when Amazon S3 went down, we didn't have that issue. But one twist was that when S3 was down, pretty much half the internet was down, so nobody would even notice that you're up. So this is kind of a weird twist of dependency on a single cloud provider. And I think maybe that's played into this lack of fear of lock-in into a single provider.
[00:20:07] It's like everybody's playing in AWS's sandbox, so you might as well just relax and enjoy.
00:20:12 - Anthony Campolo
So you're very decentralized, it sounds like. Has anyone ever tried to run blockchain nodes on you?
00:20:17 - Khash Sajadi
Oh yeah. This is a constant, ongoing battle, a challenge, if you will. Yes, we see it on our container side. On Cloud 66, you can run your application in containers, and when you have container-based deployments, you always have a build stage. That means you can build anything, and you can basically use our CPU cycles on our build machines, what we call the build grid, for anything you want, including cryptocurrency mining, which happens every day.
So we have sophisticated systems for detecting, blocking, and banning those accounts and getting them out, which include both technical and social signals. We're in constant contact with CI/CD providers around the world, friends of ours who face the same issues, and we share ideas on how to keep service quality consistent for real usage. They have strange and wonderful ways of blocking miners. For example, a lot of CI/CD providers won't let you sign up with a GitHub account that is younger than six months.
[00:21:15] Now, it might be discrimination against somebody who just wants to get started on GitHub, but statistically, I think it shows that a lot of crypto miners just use brand-new GitHub accounts for their purposes. So there's a lot of that going on.
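The six-month account-age gate Sajadi describes could be sketched roughly like this. The 180-day threshold and the function names are illustrative, not any provider's actual policy; the only real assumption is that GitHub's `GET /users/{username}` endpoint returns a `created_at` ISO-8601 timestamp, which it does.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# "Younger than six months" heuristic; the exact cutoff is hypothetical.
MIN_ACCOUNT_AGE = timedelta(days=180)

def account_old_enough(created_at: str, now: Optional[datetime] = None) -> bool:
    """Return True if a GitHub account passes the signup age gate.

    `created_at` is the ISO-8601 timestamp from GitHub's user API,
    e.g. "2015-04-01T12:00:00Z".
    """
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return now - created >= MIN_ACCOUNT_AGE
```

In practice this would be one signal among many; as Sajadi notes, real systems combine technical checks like this with social signals about the account.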
00:21:26 - Anthony Campolo
So even if they're paying, you still consider that like an illegitimate use? Or is there a way to do that and still be above board?
00:21:33 - Khash Sajadi
Oh, right. Okay.
00:21:34 - Anthony Campolo
I'm from the Web3 world, so.
00:21:35 - Khash Sajadi
Yes, our terms and conditions forbid using Cloud 66 CPU for mining purposes. This is because the machines we use for the build stage, which are our machines, are not optimized for that. It would basically suck the oxygen out for all the other tenants running on the same machine. It's a pure fair-play measure that we put in.
However, the product itself runs most of the compute, storage, and network of the whole operation on our customers' own accounts. If you use Cloud 66 to build and run whatever you want on your own servers, be it Web3, blockchain, or even pure Bitcoin mining, and good luck mining profitably on the cloud, we don't have any issues with it, as long as it's legal within your jurisdiction and in compliance with the cloud provider you choose. That's good with us.
00:22:27 - Anthony Campolo
Gotcha. Yeah, that makes a lot of sense. Something that's coming up a lot with companies like Netlify and Vercel is this idea of edge compute, and they're bringing in things like Cloudflare or Deno Deploy. Since you support every cloud, someone could just use CloudFront from AWS. But I'm curious if you have any edge-native solutions you're thinking about, or if that's an area you think you're going to go into.
00:22:52 - Khash Sajadi
So the answer to that is a lot of it is about intent. What does the customer want to achieve, and what is the purpose of running something on edge? There are a lot of legitimate use cases for it, and there are a lot of cases where we think the same purpose can be served in a different way.
Sometimes pure edge compute is about being closer to the customer or closer to the web application. There are plenty of those use cases. Not everybody is like Spotify, wanting most of the region's top charts cached on a mobile mast just down the road, but there are still legitimate use cases for that. We don't support that part of it.
But in many cases, what we've seen in terms of edge compute is around traffic shaping and traffic redirects, based not on simple regex rules but on more complicated rules that send traffic to different places. Those are the ones we do support. So instead of spinning up containers just to put a traffic redirect engine in them, we use what we call Control Tower, where you can write a script in an expression language, Google's Common Expression Language.
[00:23:57] You write that, we run it for you, and we redirect to static storage, which is S3. So you don't need a JavaScript snippet just to send traffic to different places based on language, browser, mobile app, the technology that comes in, or the referrer. We take care of that for you, but it's not running on your own edge anywhere, so you can save money that way. And we benefit from having to build only one platform to serve all of that traffic. But if you really want some very complex logic running right next to your customer, that's not something we support.
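The idea behind Control Tower, routing on request attributes at the platform layer instead of shipping a JavaScript snippet to the edge, can be illustrated with a small stand-in. Cloud 66's real implementation uses Google's Common Expression Language; the rules, URLs, and attribute names below are purely hypothetical Python equivalents.

```python
# Each rule pairs a predicate over request attributes with a redirect target.
# All destinations here are made-up examples.
RULES = [
    (lambda req: req.get("lang", "").startswith("de"),
     "https://assets.example.com/de/"),
    (lambda req: "Mobile" in req.get("user_agent", ""),
     "https://assets.example.com/m/"),
]
DEFAULT = "https://assets.example.com/en/"

def route(req: dict) -> str:
    """Return the redirect target for a request, first matching rule wins."""
    for predicate, destination in RULES:
        if predicate(req):
            return destination
    return DEFAULT
```

The point of the design Sajadi describes is that these rules evaluate on shared routing infrastructure and redirect to static storage, so no per-customer edge compute has to be provisioned.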
00:24:32 - Christopher Burns
Very interesting. Having been in the industry for ten years now, how do you feel the next five years of cloud infrastructure and developer experience are going to play out?
00:24:43 - Khash Sajadi
That will require my crystal ball, which is in the lower drawer. Joking aside, one thing I'm excited about is that PaaS is coming out of the stagnation it was in until about two or three years ago, when Heroku had essentially stopped innovating. They did very minimal container support, and the developer experience did not change or improve. To be fair, they started with a great user and developer experience, so they had a good starting point, but the improvement pretty much stopped once Salesforce acquired them.
So there was a stagnation phase where PaaS felt at risk. Now we have players like Render and Fly.io, each essentially taking a different approach, especially companies like Fly.io, which really emphasize running close to the customers: Firecracker microVMs, containers, microkernels, whatever you can build and start as fast as possible, pushed right out onto the edge.
[00:25:39] Those are the things I'm excited about. A lot of companies starting in the PaaS space don't really address the fundamental economic issues of PaaS; PaaS gets expensive very quickly for very understandable economic reasons. But even while those go unaddressed, the technology around this is very exciting, and I think that's where we're going to see a lot of innovation and excitement.
Technologies like Kubernetes have turned into tools for managing data centers in this new world, and the same way cloud became the killer application for virtualization, I think PaaS, call it PaaS 2.0, can become the killer application for things like Kubernetes. So that's an area we're watching very closely and are really excited about.
00:26:26 - Anthony Campolo
Is there any Cloud 66 community? It looks like you have a Slack on your home page.
00:26:32 - Khash Sajadi
Yes.
00:26:33 - Khash Sajadi
So we do have thousands of users on Cloud 66, and I want to say about 3,000 or so are active on Slack, where they help each other. It's a fantastic community of wonderful people who are very helpful to one another, and they also help us understand what we can do better, how we can improve, and how we can shape the product. So there's that side of the community.
We are also very active in open source. If you go to cloud66.com/oss, for open source software, you'll see the list of open source projects that we support, contribute to, and sponsor, on the Ruby on Rails, Kubernetes, and Node.js sides. So we have a vibrant community there as well.
00:27:23 - Christopher Burns
What does support look like? Because I guess it gets really complicated when something goes wrong. How do you identify the problem? How do you say, that's a problem with Amazon, not a problem with us? This is very much a business question, because it's about liability and knowing that everything is working correctly. How does Cloud 66 manage that?
00:27:44 - Khash Sajadi
That's a very good question. You don't want to be in a position where the customer comes to you and says something is wrong, and the only thing you can tell them is that you're waiting for Amazon to fix the problem as fast as possible. That's not the place we want to be.
But what we've realized, and this comes with a kind of network effect, is that when you manage and deploy tens of thousands of servers for your customers and get vital information from them, and we deploy, I don't want to give a specific number, but somewhere around a million times a day collectively for our customers, you can identify issues pretty much before they happen to everyone. By the time something spreads, someone has already experienced it, and that someone is a customer we've helped. That limits the scope of support discovery for us by quite a lot.
[00:28:30] When we start seeing something, for example an APT repository having an issue on DigitalOcean, one or two deployments will go wrong, and it will take our support team a while to identify and isolate the problem and come up with a solution, in this case replacing the APT repository with an alternative or a mirror. But then we can stop other deployments from going wrong right after that. So it's a canary in the coal mine, if you will.
We also deploy a lot of our own canaries constantly, just to have coverage on every cloud, every data center, every availability zone, every different permutation of application and everything else. That helps us quite a lot as well.
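The failover pattern Sajadi describes, treating the first failing deployments as an early warning and swapping in a mirror once a repository looks unhealthy, could be sketched like this. The threshold, class name, and mirror mapping are all illustrative, not Cloud 66's actual internals.

```python
from collections import Counter

# Hypothetical: how many failures before we stop trusting a repository.
FAILURE_THRESHOLD = 2

# Hypothetical mapping from a primary APT repository to a known-good mirror.
MIRRORS = {"http://apt.example.com": "http://mirror.apt.example.com"}

class RepoHealth:
    """Track deployment failures per repository and fail over to mirrors."""

    def __init__(self) -> None:
        self.failures = Counter()

    def record_failure(self, repo_url: str) -> None:
        self.failures[repo_url] += 1

    def effective_repo(self, repo_url: str) -> str:
        """Return the mirror once a repo has failed often enough."""
        if self.failures[repo_url] >= FAILURE_THRESHOLD and repo_url in MIRRORS:
            return MIRRORS[repo_url]
        return repo_url
```

The key property is the network effect from the transcript: once the first one or two customers hit the bad repository, every subsequent deployment automatically gets the mirror instead.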
The other approach we take is on the product side, before anything hits support. We go very much with the 80/20 rule: our product is really good for 80% of the use cases we see, and we don't even claim that you can bring everything you could do yourself onto Cloud 66.
[00:29:22] So if you are running a web application, an API backend, or a static site, Cloud 66 is probably one of the best solutions for your business. You can focus on the business and let us worry about operating systems. But if you have a high-frequency trading application, we are not the right place for you. By doing that, we make sure the types of issues we see, whether on our side, our customers' application side, or the cloud provider side, fall into a few buckets that we can get really good at fixing and addressing.
The third aspect is what we call tools that scale humans. We didn't want to become a consultancy shop, where we'd have to hire more people as we got more customers to handle the load that comes with them. So we invested heavily in internal tooling, visibility metrics, monitoring, everything that comes with that.
We call those systems Mission Control. With a small support staff, they give us visibility into what's going on across thousands and thousands of servers and the millions of deployments happening every day, and that allows us to stay ahead of the game and solve issues before they hit critical mass.
00:30:31 - Christopher Burns
And this is really important for non-technical customers as well. I see that you work a lot with agencies, and normally it's the agency deploying the website and managing the account. There's nothing more confusing than trying to get that agency to set up a cloud hosting provider themselves. It's much easier for the agency to say, look, we'll charge you hosting fees and we'll manage it. Offering that in a way where the agency still has control, at a cost-effective price point, I think is super interesting.
One thing I noticed we've not spoken about is on-premise deployment. It's something I've never touched, and I know it's for the big players. But I remember speaking with Tom Preston-Werner, formerly of GitHub, and he was saying that when they built GitHub Enterprise, it was the biggest pain ever, because getting the latest version of your application deployed on someone else's server is a lot harder than it sounds.
00:31:27 - Khash Sajadi
That's absolutely correct. We approached this problem from two angles, and we're very familiar with the pain points.
One is that Cloud 66 itself can be deployed on-prem for customers who want Cloud 66 to themselves, and we have those customers in the health industry and the finance industry. They want a single-tenant Cloud 66 on their cloud provider of choice, or even on their own servers somewhere down in a basement. I'm not lying here: there have been cases where I had to be escorted by people with guns through 30 meters of concrete into a basement in a specific building in London, where the servers had not been restarted since 1990-something. There are those cases we have to deal with.
I'm not going to say it's easy; there's a lot of legacy stuff going on. By being able to deploy Cloud 66 itself inside containers and inside Kubernetes, we've abstracted away a lot of the complexity, but that doesn't mean you can run Kubernetes anywhere.
[00:32:20] If someone wants to run something like this on a cruise ship with no internet on the open seas, yeah, maybe that's not possible. So that's part of the issue.
But the other side of it is using the cloud to deploy on-prem. That's the part we're actually very good at. Other companies do this too, but we make it easy for our customers in two specific ways. One, we help them wrap the application inside very clear boundaries: inside is code and developer concerns, outside is infrastructure and operations concerns. So when you're dealing with deployments and an issue falls outside that boundary, you know you have to reach out to the other side, say the cloud provider's operators, the audit company, or the client's ops people, to solve things like a traffic routing table or a firewall issue.
The other part is making sure that you, as the provider, have a single, uniform control panel for all the instances that go out.
[00:33:17] Because one of the most complex things about on-prem deployment is actually version control. On a SaaS product, you fix a bug, push a button, and everyone is on the latest version. With on-prem, if customers aren't willing or ready to upgrade, you end up fixing that bug on 20 different branches for 20 different customers, because not everybody wants to move to the latest version and take on the migration, the downtime, and the risk that comes with it at the same time.
That's one of the things we make very easy. We build dashboards and ways to retrofit a bug fix or a feature onto different branches, depending on which branch each customer uses. That's one of the biggest pain points we take away: not just the operations side, but the task of maintaining many different versions of the same on-prem product across multiple clients.
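The "fix once, apply to 20 branches" workflow Sajadi describes could be made mechanical with something like the sketch below. The branch names, the customer mapping, and the use of `git cherry-pick` are all assumptions for illustration; Cloud 66's actual dashboards may work differently.

```python
from typing import Dict, List

def backport_commands(fix_commit: str,
                      customer_branches: Dict[str, str]) -> List[str]:
    """Build the git commands to cherry-pick a fix onto each customer's
    pinned branch, skipping customers already tracking main (where the
    fix has already landed)."""
    commands = []
    for customer, branch in sorted(customer_branches.items()):
        if branch == "main":
            continue  # this customer already gets the fix on upgrade
        commands.append(f"git checkout {branch} && git cherry-pick {fix_commit}")
    return commands
```

For example, with one customer pinned to a release branch and one on main, only the pinned customer gets a cherry-pick command; a real system would also have to handle cherry-pick conflicts, which is where the human-facing dashboard earns its keep.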
00:34:03 - Christopher Burns
Yeah. And obviously you deploy using Git.
00:34:06 - Khash Sajadi
Yes. We've always supported Git, whether it's GitLab, GitHub, or just pure good old standard Git.
00:34:12 - Christopher Burns
I bet proper on-prem support is when it's like: we host our own Git, we host our own authentication with Okta, we host our own everything, and we don't want to be on anyone else's cloud.
00:34:22 - Khash Sajadi
Exactly. That's one of the biggest challenges. Yes. And not only do we host our own Git, but it's not actually run on port 22. It's 23.
00:34:29 - Christopher Burns
Yeah, for sure. My last question is a bit of a meta one. Do you enjoy dogfooding your own product?
00:34:36 - Khash Sajadi
Yes, that happens a lot. As you can imagine, when you talk about deployments, the question is: how do you deploy the first deployment? So we have what we call a Bootstrap environment, which is effectively an on-prem instance running in the cloud, a dedicated instance of Cloud 66 that's ours. We use it to deploy other instances of Cloud 66, whether that's Cloud 66 production, the SaaS version, or the on-prem instances we run for other clients. So there's a lot of dogfooding going on there.
But I have to admit, Bootstrap itself is not deployed with Cloud 66, because at some point you have to stop: somebody has to go and run a script or push a button. It's hosted on a Kubernetes cluster, and we have our own processes to maintain the scripts that deploy that very first instance, the OG, Bootstrap itself.
[00:35:24] So we do a lot of that, and it helps us a lot: our static sites, our dynamic application itself, and it's not just the main product; we have other assets we deploy with Cloud 66 as well. So it's quite a fun, inception-inducing process.
00:35:38 - Christopher Burns
I bet for sure. And especially when you find bugs in that, you're like, is this a bug in the bootstrapping process or is this a bug in the actual product?
00:35:45 - Khash Sajadi
Absolutely.
00:35:46 - Christopher Burns
Have you got any other questions, Anthony?
00:35:48 - Anthony Campolo
No, I just want to open up the floor to you. If there is anything else you wanted to speak about or things you want to let our listeners know about before we close it out?
00:35:56 - Khash Sajadi
I think it was great to be able to tell you a little bit about our backstory and what we're up to. We have a promotion going on right now, and you have the code. Anybody who signs up with your coupon will get a discount on Cloud 66. Our pricing model is per server that we manage, and with the code they can get a great discount. I'm excited to see what your listeners and viewers build with Cloud 66.
00:36:22 - Anthony Campolo
And that code is 66.
00:36:25 - Christopher Burns
Thank you so much for your time today. I should probably give this a look, because I know that eventually I'll have to use AWS or Google. I know it's coming; I can't just hover around other companies forever. I spoke to someone who was at Slack from 20 people to 20,000, and he said: just always bet on AWS. AWS is the best, but you can spend weeks and weeks learning just how to use the dashboard. So I guess Cloud 66 helps with that too.
00:36:55 - Khash Sajadi
Absolutely. Looking forward to seeing what you get to do.
00:36:57 - Anthony Campolo
Awesome. Well, thank you so much. And I guess if people want to find out more, they can go to cloud66.com. Are there other social platforms that you would direct people to?
00:37:06 - Khash Sajadi
Yes. cloud66.com is the best place to start. You can just sign up there. It's a two-week free trial, and you can automatically extend your trial if you don't get a chance to play around with it. I know everybody's busy, so we automatically built that into the system. You can extend your trial as much as you want. Looking forward to seeing more use cases and wonderful applications being deployed with Cloud 66.
00:37:25 - Anthony Campolo
Well, thank you so much for being here. And thank you for being the first sponsor of FSJam. We really appreciate that and we're super excited to continue this partnership.
00:37:32 - Khash Sajadi
Likewise. Thank you very much. You have a great day.
00:37:34 - Christopher Burns
Thank you.
00:37:35 - Khash Sajadi
Bye bye.
00:38:06 - Christopher Burns
Bye.