
Day Two Cloud 105: How The Fly.io Cloud Brings Apps Closer To Users

Most conversations about cloud hosting in North America focus on the Big Three: AWS, Azure, and GCP. Today’s Day Two Cloud talks to Fly.io, a company with a different approach to putting workloads into the public cloud. Our guest is founder Kurt Mackey.

Fly.io is a platform to run full-stack and backend apps all over the world. Somewhat PaaS-oriented, the goal of Fly.io is to allow developers to self-service complicated infrastructure without an ops team. The company also tries to remove abstractions without creating a lot of infrastructure messiness.

We get into super-nerdy details about what Fly.io has built and how the company differentiates itself, including making multi-region a default setting to get applications as close to the user as possible. This is not a sponsored show.

We discuss:

  • How Fly.io works
  • Why the company uses tiny VMs
  • Challenges of private networking and tenant isolation
  • Fly.io’s infrastructure and multi-region model
  • How IPv6 supports Fly.io’s designs
  • Running workloads on Fly.io
  • More

Sponsor: CBT Nuggets

CBT Nuggets is IT training for IT professionals and anyone looking to build IT skills. If you want to make your networking, security, cloud, automation, or DevOps battle station fully operational, visit cbtnuggets.com/cloud.

Show Links:

Fly.io Blog

@mrkurt – Kurt Mackey on Twitter

Transcript:

 

[00:00:00.240] – Ethan
Sponsor CBT Nuggets is IT training for IT professionals and anyone looking to build IT skills. If you want to make your networking, cloud, security, automation, or DevOps battle station fully operational, visit cbtnuggets.com/cloud. That's cbtnuggets.com/cloud.

[00:00:25.120] – Ethan
Welcome to Day Two Cloud and Ned, we have a spectacular show today, and I feel like maybe we say that too often, but this one, this was good. We are going to interview Kurt Mackey, the founder of Fly.io, which, Ned, we talk a lot about AWS, Azure and GCP as a platform for hosting a variety of things, IaaS, PaaS related. Fly.io is a different approach. And we get into some super nerdy details with Kurt about what they built and how it works.

[00:00:57.970] – Ned
Yeah, the biggest thing for me is what they’ve built is something that is multi region by default. That is their approach. That is how they go with everything. So you think of all these organizations that typically are running in a single region, maybe two, no, they do multi region by default. They run the hardware, they run the software, they own the whole stack. And for me, that was the main thing and the big differentiator for them over other services.

[00:01:26.220] – Ethan
Now, if you're listening to this, you might be wondering, is this a sponsored show? It was not a sponsored show. We just discovered this product through, I think, Hacker News, started digging around, found that it was super interesting, and reached out to Kurt, and he was happy to come on and talk about Fly.io. I really think you're going to enjoy this interview with Kurt Mackey, founder at Fly. Kurt, welcome to Day Two Cloud.

[00:01:51.060] – Ethan
Hi, it's good to have you here. And we've got to start at the beginning, man, which is really straightforward. Just give the audience, in a few sentences: what is Fly.io?

[00:02:02.670] – Kurt
Fly.io is a platform for basically running full stack and back end apps all over the world. And so the idea has been to turn the most boring rails app on the planet into a distributed multiregional application.

[00:02:17.490] – Ethan
A stack to run applications. OK, so we're going to drill into that, because I'm trying to map that to what I'm familiar with, like the big three: AWS, Azure, and GCP. So would you say Fly.io is IaaS, PaaS, both? Can it cure cancer? Has it already cured cancer?

[00:02:33.160] – Kurt
I don't actually know if it's cured cancer yet. I feel like we'll find that out later. It's probably closer to a PaaS; I think there's some fuzziness there. What we've wanted to do is build an incredibly good developer experience where developers can self-service really complicated infrastructure without having an ops team. That's very PaaS-like in that respect. But we also try not to have many abstractions. One of the things you'll notice when you start using it is that you can do interesting things like listen on UDP; you can actually run DNS servers and things on it, because we've tried to take away a lot of the PaaS constraints that I think exist for providers and not for developers.

[00:03:11.710] – Kurt
There are things that make my life easier and there are things that make customers' lives harder. And sometimes those are the same. We've tried to avoid those sorts of things.
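Kurt's UDP point is worth pausing on, because most PaaS platforms only route HTTP to your app. As a rough illustration of the kind of service this enables (generic socket code, not anything Fly-specific), a minimal UDP echo server is just a bound socket:

```python
import socket
import threading

def echo_once(sock: socket.socket) -> None:
    """Receive one datagram and echo it back to the sender."""
    data, addr = sock.recvfrom(2048)
    sock.sendto(data, addr)

# Bind an ephemeral localhost port; on a platform that hands your app
# raw UDP, you would bind the port the platform routes to you instead.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

worker = threading.Thread(target=echo_once, args=(server,))
worker.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"ping", addr)
reply, _ = client.recvfrom(2048)
worker.join()
print(reply.decode())  # ping
```

The same shape works for a toy DNS responder: parse the datagram, write an answer back to `addr`.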

[00:03:19.480] – Ned
That's an interesting way to go about it. You said you're trying to remove abstractions, but also provide this platform that gets rid of the infrastructure messiness. So it's kind of two different ends of the same spectrum. Maybe an example would help. What's a really cool thing that's been built on Fly.io?

[00:03:37.450] – Kurt
Gosh, there have been some very cool ones. My favorites are game server workloads. Actually, I call them game servers, but you see them a lot in productivity and collaboration apps. One of the things that blows people's minds when they come from most PaaS services, for example, and use Fly, is that all of the little VMs that run their code are on the same private network, and they're all addressable.

[00:04:00.430] – Kurt
So when they realize they can actually make their app processes talk to each other in a way that’s not possible in Heroku, probably not possible in Lambda, it actually seems to open up a lot of really interesting things for people.

[00:04:13.930] – Ned
OK, you mentioned something interesting, and I'm sure we're going to dig into this more: that it's actually just little VMs running each bit of code, and they're all on the same private network. That is distinct from many of the other PaaS implementations on the other clouds. How are you doing that? Because that seems like a real headache, to spin up all these little mini VMs and maintain a private network for them.

[00:04:37.330] – Kurt
It is. The private network is interesting. We started with the VMs because we were working on containers, and Docker is just a really bad way to isolate people, both security-wise and resource-wise. So we looked at lower-level virtualization stuff. There are a lot of options. We ended up going with Firecracker. It's an Amazon open source project that does tiny little VMs that actually boot faster than Docker, which was important to us. We want to be able to turn things up very quickly.

[00:05:06.580] – Kurt
And so that part is relatively standard off-the-shelf stuff that we've hacked together to orchestrate the VMs in the way that we want for our particular customers. The private networking was an adventure, because one of the things you'll learn, or maybe already know, when you start looking at things like CNI, the container networking interface, in the Kubernetes world, is that the more abstractions you put on networks, the harder and more complicated it gets, especially when you want things to be pluggable.

[00:05:34.790] – Kurt
So we did not do any pluggable networking. We actually run on our own physical infrastructure, with about as simple as you can imagine connections between servers, and then did our private networking from scratch. It's literally just a WireGuard mesh between all our hosts, which we've been managing for like two years at this point, and then we use BPF rules to isolate customer workloads over that same WireGuard mesh. And so it's conceptually relatively simple.

[00:06:03.670] – Kurt
There’s very few moving pieces. It’s not tunnels within tunnels, within tunnels. And it’s about as simple as you could build this thing because it has to be.

[00:06:13.030] – Ethan
Yeah, WireGuard is an interesting choice. I haven't done this myself, but I understand it's fairly straightforward to automate, and it performs very well. It's not as heavy as IPsec, but it still has very strong encryption.

[00:06:30.760] – Kurt
Yep.

[00:06:32.010] – Ethan
And so you can make that private network in an automated way, have security there, and then you said BPF, so you're doing packet filtering to keep your tenants isolated.

[00:06:45.750] – Kurt
Yeah, kind of. I guess the other thing I didn't mention is that our private network is actually IPv6 only. There are no IPv4 blocks on these things. And one of the advantages of IPv6 is that you can simplify rules a lot. So what we do is we give each customer a /48, an IPv6 /48 prefix, which is some very large number of IPs.

[00:07:10.680] – Ethan
It’s unfathomably large. Yes.

[00:07:13.510] – Kurt
That one might be the one where it's more than there are atoms in the universe, I think, is the number I heard in reference to this. And the nice thing about doing something like that is that all we have to do to isolate tenants on the network side, with our BPF rules, is make sure they're in the right prefix when the packet comes in. So when the packet comes into our host hardware, we decide whether to dump it into the VM or not based on whether the prefix matches what's on the interface for that /48. And we have an unlimited number of IPs; the blocks will probably never conflict with anything. It was just another simplification choice that made our life a little more manageable.
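The prefix match Kurt describes is simple enough to sketch in a few lines. This is only an illustration of the idea, not Fly's actual BPF program, and the `fdaa:` addresses are made-up examples:

```python
import ipaddress

# Hypothetical tenant allocation: one /48 out of private IPv6 space.
TENANT_NET = ipaddress.ip_network("fdaa:0:1::/48")

def deliver_to_vm(dst_addr: str) -> bool:
    """Deliver a packet to this tenant's VM only if its destination
    falls inside the tenant's /48; otherwise drop it."""
    return ipaddress.ip_address(dst_addr) in TENANT_NET

print(deliver_to_vm("fdaa:0:1::10"))  # True: inside the tenant prefix
print(deliver_to_vm("fdaa:0:2::10"))  # False: another tenant's /48
```

Because the tenant boundary is a single prefix comparison, there is no per-VM firewall state to keep in sync.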

[00:07:51.060] – Ethan
So those /48s are coming out of some real routable IPv6 space, or not?

[00:07:57.760] – Kurt
No, we're using the FD prefix, so it's within, I can't remember the name of the private reserved space in IPv6, but there are /48s in private IPv6 space that we use. We have a blog post on the way this works. The implementation is actually kind of fun and interesting, because one of the cool things about IPv6 prefixes is you can actually swap octets and make interesting things happen. Hard to discuss out loud, but the blog post is kind of interesting when you see a diagram of what actually happens to a packet when it flows from one VM, through our mesh, and back to another VM.

[00:08:33.420] – Kurt
Oh, the really nice trick there is that when you're only rewriting IPs in that way, you basically create a hash for each packet that goes through, and when you just swap prefixes, that hash doesn't change. So it simplifies even the BPF a lot more.
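The octet-swapping Kurt mentions is hard to discuss out loud but easy to show with address arithmetic. This is a sketch of the idea, not Fly's implementation, and the addresses are illustrative: the low 80 bits of the address ride along unchanged while the /48 prefix is rewritten.

```python
import ipaddress

HOST_BITS = (1 << 80) - 1  # everything below the /48 prefix

def swap_prefix(addr: str, new_net: str) -> str:
    """Rewrite the top 48 bits of an IPv6 address, keeping the rest.
    A per-packet hash that ignores the prefix stays stable across
    this rewrite, which is the simplification Kurt describes."""
    low = int(ipaddress.IPv6Address(addr)) & HOST_BITS
    prefix = int(ipaddress.ip_network(new_net).network_address)
    return str(ipaddress.IPv6Address(prefix | low))

print(swap_prefix("fdaa:0:1::abcd", "fdaa:0:2::/48"))  # fdaa:0:2::abcd
```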

[00:08:48.210] – Ned
Hmm. Well, there you go. Boy, we really dove right into the

[00:08:53.820] – Kurt
Way into the BPF

[00:08:53.820] – Ned
the lower

[00:08:55.770] – Kurt
The faster we can get to BPF the better.

[00:08:59.760] – Ned
Kurt, let me just zoom back out a little bit and sort of set the value proposition a little bit more, because I don't think we got to that too much. Why would someone adopt Fly over one of the big three clouds? What's sort of the killer feature, or the thing that makes their life so much better, if they wanted to deploy on Fly?

[00:09:18.720] – Kurt
So the real value we give to people is the ability to run the apps they're already building in multiple regions. I heard a number that may be entirely made up, but I heard at one point that only 0.5 percent of AWS customers run in more than one region. And the reason for that is that when you add a second region, it's almost like an exponential growth curve in complexity. Once you start adding regions within a normal data center, and AWS is really like an automated normal data center, they don't really give you a lot beyond that, cross-region at least.

[00:09:52.080] – Kurt
So people use Fly because what they want to do is run their apps close to their users, because it's faster and they can build better features. And they can't, because it's complicated to do that on basically any other infrastructure.

[00:10:03.420] – Ned
Right. So I feel like the closest competitor, or the thing that would be closest to that, would be a CDN provider, whose sole goal is to get as close to the end user as possible. Does that line up with what you're doing?

[00:10:16.470] – Kurt
Yeah. When we pitched to investors, we'd talk a lot about the CDN market, and the difference between traditional CDNs and what we're doing is that they work well for static assets. They don't work for, like, a Rails or an Elixir or a Python process with a database behind it. So we're kind of tackling, if you're being all analyst about it, OLTP workloads, right? But, like I said, you'll see if you look at my comments on Hacker News,

[00:10:44.520] – Kurt
I've been saying for almost two years at this point that our goal is to make the most boring Rails app in the world run in every region without really any code or architectural changes.

[00:10:54.330] – Ethan
That was my question. What is incumbent on me as the app developer? Do I have to care about distributed computing and being able to run this thing in more than one place, database backend synchronization, all that kind of stuff? Or is this some magical thing where I check the multiple regions box and all of a sudden it's available closer to my customer?

[00:11:16.590] – Kurt
I would say it's magical, but in a really easy-to-understand way. It's not the type of magic you look at and don't know how it works. It's what I think is a clever take on a way of deploying apps that devs understand. So I'll tell you all we're doing. Our hypothesis here is that all apps should run close to users, and basically the reason they don't is that the infrastructure's wrong for this, right? So our goal was to build a thing that the max possible number of developers could use to ship apps close to people.

[00:11:46.940] – Kurt
And we did some things. We tried various things; databases are the hard part, right? I can tell you all the things we did wrong. But what we landed on is: we built a hosted PostgreSQL, and we built in multi-region read replicas. One of the interesting things about the apps people build is that despite everyone liking to talk about big data and high write volumes and things, the reality is almost everyone's building a read-heavy app with a relatively small database.

[00:12:13.700] – Kurt
Anything beyond that is almost an edge case in some ways. And so what we did, and I'm waving my hands around, we have diagrams on the website for this, is we built the read replicas into Postgres and made it so you can run your app processes next to the Postgres replicas. So if you have a Postgres database in Chicago, it's easy to launch a read replica in Sydney and have your application servers run alongside it.

[00:12:41.270] – Kurt
The real magic here is that when apps talk to databases, most frameworks have this idea of a read replica built in. So they can kind of decide if they're doing mostly reads or doing writes. What we did is we made it so when you're in Sydney, you're only talking to the read replica, and what you have to do to make your app work this way is catch the inevitable error when a write happens.

[00:13:04.550] – Kurt
So we will send all the requests in Sydney to your app. If they only do reads, it works just fine. If one does a write, what happens is Postgres says, hey, this is a read-only copy of this database, I can't accept this write. And then you actually tell us to replay that whole request back to Chicago, where the writes can happen. And so the idea is that you just do the naive thing, and when a write does need to happen, we again use network tricks, right,

[00:13:28.100] – Kurt
To get that kind of bundle of writes that happen in an HTTP request back to Chicago where it just magically works.
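The read-replica replay flow Kurt walks through can be sketched as a tiny request handler. This is a schematic of the pattern, not Fly's code; the `fly-replay` response header name follows Fly's docs, but the region names and helper functions here are made up:

```python
class ReadOnlyTransaction(Exception):
    """Stand-in for the Postgres error a read replica raises on a write
    ('cannot execute INSERT in a read-only transaction')."""

PRIMARY_REGION = "ord"  # hypothetical: where the writable Postgres lives
LOCAL_REGION = "syd"    # hypothetical: where this instance runs

def handle(run_queries) -> tuple[int, dict]:
    """Try the request against the local replica. If it turns out to
    need a write, ask the proxy to replay the whole HTTP request in the
    primary region rather than forwarding individual queries there."""
    try:
        run_queries()
        return 200, {}
    except ReadOnlyTransaction:
        if LOCAL_REGION == PRIMARY_REGION:
            raise  # writes should succeed here; this is a real failure
        return 409, {"fly-replay": f"region={PRIMARY_REGION}"}

def read_only():
    pass  # SELECTs succeed against the replica

def does_a_write():
    raise ReadOnlyTransaction

print(handle(read_only))     # (200, {})
print(handle(does_a_write))  # (409, {'fly-replay': 'region=ord'})
```

The app stays naive: it writes as if it were next to the primary, and the one extra piece of code is the exception handler that bounces the request.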

[00:13:35.330] – Ethan
You said network tricks and every network engineer listening to this show just went, oh, god.

[00:13:40.520] – Kurt
Yes, right. Exactly. "Stupid load balancer tricks" is what you could label it, if you're being...

[00:13:47.420] – Ethan
One more multiple region question for you, Kurt, is fault tolerance another reason that I would do this beyond, you know, geo awareness?

[00:13:55.790] – Kurt
Yes. Yes, it is. There are actually a bunch of what I'd call secondary reasons. For the most part, devs want the performance, and for the most part the devs who use Fly have been disappointed in what a CDN can offer their particular application. I have fun, pithy statistics for all of this: something like 60 percent of the top 100 biggest Y Combinator companies don't use a CDN at all, which I always thought was a fascinating thing. But there are a couple of other secondary benefits.

[00:14:24.470] – Kurt
Resiliency is a good one. We had a customer running an app that was backed by S3, and I feel like it was last year, at one point there was an AWS DNS issue that made S3 inaccessible from certain regions, but it worked just fine from others. And we actually saw their app fail in those regions and then migrate to the regions where it was working, and their users didn't know any difference. It was slightly slower, but the infrastructure moved them.

[00:14:50.480] – Kurt
I thought that was very cool. And then the other one is data, kind of like data locality for like regulatory reasons. A lot of people like being able to keep their data in Europe or Canada because they have to.

[00:15:02.940] – Ethan
And they like to do it because they have to do it.

[00:15:05.570] – Kurt
Because they can check that box for their boss. Actually, what they really like is doing it with the same tooling. It's a relatively simple kind of infrastructure problem for them on Fly, whereas doing it otherwise would have maybe been a headache if they had to kind of do it in flight.

[00:15:20.570] – Ethan
Kurt, so when I'm hosting an app on Fly, it sounds like typically this is public facing. But is there a use case where, if I wanted to keep it all private, I mean, we know the private network capability is there, I could do that?

[00:15:33.940] – Kurt
Yes, there is. There's a use case where you could run all your private apps; particularly in a distributed world, where you're building internal apps for people who happen to be in different cities and countries, the same infrastructure is still useful. I think people building internal line-of-business apps also like seeing that they're fast for people. It's somewhat like: it was fast here, so it should be fast when I deploy it.

[00:15:59.380] – Kurt
Realistically, people don't do that from scratch for internal apps yet. What we do see is internal apps deployed alongside public-facing apps. So what you'll have a lot of the time, on the private network, is supporting apps for something that's public facing. And then one of the cool things about our network is you can basically VPN to your private network, using WireGuard again: you can create a WireGuard peer, connect from your local laptop to the network, and then use internal tools. This is how I use Grafana, for example; it's just an internal Fly app.
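The laptop peer Kurt describes ends up as an ordinary WireGuard configuration file on the client side. Here is a sketch of what such a config looks like; every key, address, and endpoint below is a placeholder, not a real Fly value:

```ini
[Interface]
# Placeholder private key and the private IPv6 address assigned to this peer
PrivateKey = <laptop-private-key>
Address = fdaa:0:1:a7b::2/120

[Peer]
# Placeholder public key and endpoint for the provider-side gateway
PublicKey = <gateway-public-key>
# Route the organization's private /48 through the tunnel
AllowedIPs = fdaa:0:1::/48
Endpoint = gateway.example.net:51820
PersistentKeepalive = 15
```

With a file like this in place, `wg-quick up` brings the tunnel up, and internal addresses inside the /48 become reachable from the laptop.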

[00:16:30.010] – Ethan
Is there an idea, I guess I could do this with WireGuard, is there an idea that if I wanted to connect, like, some data center where I've still got stuff living on-prem, I could tunnel up to my private network in Fly? Could I do that?

[00:16:44.290] – Kurt
You can, yeah. So we have what we call WireGuard peers that you can create. I would create one for my laptop. We also have this token-based API for issuing peers, and it's designed specifically so you can actually create peers on something like Kubernetes, inside the pods. And we did this because connecting to people's existing databases is important. So we need to be able to connect their Fly app into, like, a VPC for larger customers.

[00:17:12.070] – Kurt
But you can use it to peer your VPC with your Fly private network just fine.

[00:17:20.770] – Ethan
Some more of these ten-thousand-foot overview questions, which we really seem to be bad at, Ned. We started at ten thousand feet, and within seconds we're plunging towards the ground.

[00:17:29.860] – Kurt
That’s partially my fault. I can turn any conversation into BPF in like five minutes. It’s just not a.

[00:17:34.770] – Ethan
It's fine, we're loving this. Does Fly, as a provider here, play a role in securing my data from the bad guys, whether that's DDoS protection or anything else like that?

[00:17:48.160] – Kurt
We do, I think. So we have network-level DDoS protection, which I tend to distinguish for customers: for generic DDoS attacks, network level's great; for targeted app-level attacks, they kind of have to build that stuff, and we don't really do anything for app-level DDoS. Things like encrypting all network connections between VMs were important to us for that reason; I just think that's the way it should be by default. To kind of flip back to the PaaS versus infrastructure-as-a-service, IaaS, question:

[00:18:17.910] – Kurt
Yeah. One way we look at what we're building, I guess, is that it's very opinionated IaaS. You don't really have options for how the network talks between VMs; it's just the way you should build your EC2 setup if you were doing it yourself, for example. So anyway, the encrypted network was important. And when you add persistent volumes to your apps, or when you build a Postgres, for example, you get an encrypted volume as well.

[00:18:50.200] – Kurt
So it's all encrypted at rest, which is good for at least checkboxes, even if it's not always the most meaningful protection you can put in place for an app. So we do some of that. Usually it's pretty opaque, though; we don't really know what's in people's databases, we just know there's a disk there. Same for any other kind of application.

[00:19:09.070] – Ethan
Cool. OK, so that's all kind of the basic stuff that one would expect. And, sadly, those of us building apps still have to be largely responsible for our own app security; there's no magic button there. But network-level DDoS is kind of a big one for me, as these sorts of attacks are just not going away. So annoying.

[00:19:29.060] – Kurt
It is.

[00:19:29.600] – Ethan
This is sort of a big deal.

[00:19:30.710] – Kurt
I think one point to make about it, though, is that historically PaaS opened up a lot more to the world than necessary. Like Heroku's Postgres, you can just connect to it from anywhere. And everyone shipped, like, a Mongo or a Redis with no password at some point, and then you could just hoover up the data. I think if you build the infrastructure right, you kind of protect yourself from a whole crazy class of problems, which is why the private networks were so important.

[00:19:54.980] – Kurt
Like you can kind of deploy a database on Fly and no one’s ever going to go connect to that from outside your network unless you decide to do the work to make that available. And so anyway. But yes, it’s still up to the app dev, but I think infrastructure providers have kind of a responsibility to give people the right infrastructure for these purposes as well.

[00:20:14.150] – Ned
Right, provide some sane defaults and prevent you from shooting yourself in the foot.

[00:20:19.180] – Kurt
Make it very difficult to shoot yourself in the foot.

[00:20:22.170] – Ned
Right. A lot of the cloud providers have started to move towards that sort of sane defaults. Like if you want to make an S3 bucket public now, you have to tell it and then it asks you if you’re sure and then it asks you if you’re sure again.

[00:20:35.120] – Kurt
Like really? Really.

[00:20:36.440] – Ned
And then every time you look at the bucket in, like, the console, it's got a big red thing next to it or something that says: this is public. It's hard to get away from. Of course, if you do it programmatically, you're on your own, but.

[00:20:48.980] Yeah, you can do anything with Terraform. It’s fine.

[00:20:53.420] – Ned
So you have all these sites, I’m assuming. Well, I am assuming. How many sites or regions do you have in Fly.io today?

[00:21:03.230] – Kurt
We have twenty-two regions, and I think 18 right now are available for you to deploy application code to. OK, so there are kind of 18 you can target. I can talk a lot about regions and CDNs and things, because we've learned a lot about that stuff. But how many regions you should have is kind of an interesting question. And we see really fascinating things from customers, where they run in, like, four, and that's all they ever want. And that's cool for them. So.

[00:21:31.210] – Ned
Right, right, I’m curious in terms of the actual rack space you’re using, you said you’re running your own physical machines and physical hardware, so are you just renting colo space from different providers across the world?

[00:21:46.690] – Kurt
We do. We largely do kind of managed colo, so we end up leasing the server and colo all as one bundle for the most part. We have a pretty consistent server build; ten years ago these would be astonishingly large servers, but they're kind of like normal big servers now. So we have a consistent server we ask these vendors to put in place for us. It's all Epyc CPUs, like eight terabytes of NVMe and five hundred and twelve gigs of RAM on these things that we run the micro VMs on.

[00:22:13.870] – Kurt
But it is, yes, managing those providers. I was actually pretty nervous about doing the physical hardware at the beginning. I think someone tweeted that it was the worst possible decision, if you ask anyone, but that actually is paying off in spades for us now. So that was a fun journey.

[00:22:29.230] – Ethan
What do you mean it's paying off for you? Just because of the control you have, or financially, or how do you mean?

[00:22:33.490] – Kurt
Both, really. So initially, when we picked physical hardware, it was because we needed to do this anycast layer in front of the apps we were running, and it was very difficult to do that on a public cloud. Over time, what's happened is it's given us a lot more control over what we ship. So the Epyc CPUs are a huge win, and it would be difficult to get those if we were getting VMs from, like, DigitalOcean or EC2 or GCP. It's also helped us:

[00:23:00.430] – Kurt
Margins are kind of a big deal. And this is where I tend to butt heads with investors, because they're like, at your stage you shouldn't worry about margins. And I'm like, well, at my stage we should worry a lot about margins, because I don't want to have to talk to investors.

[00:23:15.460] – Ethan
What if we slowed the burn rate down?

[00:23:18.490] – Kurt
Correct. Yeah. It’s like, what if we could make money when people gave it to us?

[00:23:23.200] – Ned
Heresy. Heresy in the church of VC.

[00:23:26.320] – Kurt
Yeah, yeah. I pretty firmly believe everyone should understand their unit margins, and understand, if they're not making those good right now, why, and make sure that it's an investment, right? If you're burning money on margins, you're investing in something for the future. But one of the cool things for us is it's let us actually ship things at prices that are comparable to if you were going to use something like Lambda, and in some cases we're much better for some things, because in particular our bandwidth pricing is sane and not what, like, Amazon and Google charge people. You can run, like, video workloads on Fly and it's not going to put your company out of business, basically, because we've done all the work to kind of get around this stuff.

[00:24:08.040] – Ethan
You said video workloads, OK? What does that mean? Does that mean I put an MP4 of a video out there and you distribute it like a CDN, or something else, like transcoding or something?

[00:24:22.380] – Kurt
Transcoding is a big one that we see requests for. One of the things that people like us for, compared to a CDN, is you can get kind of provisioned CPU capacity in these regions. So if you're doing transcoding or, like, image resizing, it's nice to actually be able to buy several hundred CPUs to do this stuff with. The more interesting video thing to me: I feel like video on a CDN is a relatively small problem, but stuff like the Zoom call we're doing right now is actually incredibly bandwidth intensive.

[00:24:50.490] – Kurt
There's no CDN on the planet that's going to make this good. It's more that kind of work. There are a lot of people building kind of video communications stuff into their apps now who won't pay AWS bandwidth prices, because they would be out of business in a hurry.

[00:25:05.810] – Ned
Right, how are your data centers or different regions interconnected today from like a layer one up through layer, whatever standpoint?

[00:25:16.360] – Kurt
We're too small to have a sophisticated answer to this, so we largely lean on the people that we buy colo and servers from to manage the networks. We do a lot with Equinix and what was previously Packet. Most of the interconnects are over transit, basically whatever transit agreements Equinix now has between the regions; we're kind of piggybacking on those. We've learned a lot about what's connected to what lately.

[00:25:46.900] – Kurt
I was actually surprised, for example: we have servers in Santiago, Chile, and we have servers in São Paulo, but people in Argentina actually connect to Washington, D.C., because it's quicker to get from Argentina to Washington, D.C. than it is to get from Argentina to São Paulo, because.

[00:26:03.490] – Ethan
Yes, it's where the fiber is, and how it's routed. I mean, I'm up in the northeast of the U.S. It's faster for me, latency-wise, to connect to a server in Chicago, which is where I run most of my VPSes, as opposed to New York City, which is geographically way closer, right? That's the fiber, baby. That's the way it goes.

[00:26:20.880] – Kurt
Yes, yeah. Yeah. It’s fun to watch, though, because it’s really counterintuitive sometimes. But also, I live in Chicago and I feel like I’m cheating because everything feels fast to me at all times now.

[00:26:31.220] – Ned
No doubt. Valid point.

[00:26:35.170] – Ethan
[AD] We pause the episode for a bit of training talk with CBT Nuggets. If you're a Day Two Cloud listener, and you are, you're listening to it right now, then you're probably the sort of person who likes to keep up your skills, as am I. Now, here's the thing about cloud, as I've dug into it over the last few years: it's the same as on-prem, but different. The networking is the same, but different, due to all these operational constraints you don't expect.

[00:26:59.290] – Ethan
And just when you have your favorite way to set up your cloud environment, the cloud provider changes things or offers a new service that makes you rethink what you've already built. So how do you keep up with this? Training. And this is an ad for a training company, so what do you think I'm going to say? Obviously, training, and not just because sponsor CBT Nuggets wants your business, but also because training is how I've kept up with emerging technology over the decades.

[00:27:20.800] – Ethan
I believe in the power of smart instructors telling me all about the new tech so that I can walk into a conference room as a consultant or project lead and confidently position a technology to business stakeholders and financial decision makers. So, you want to be smarter about cloud? CBT Nuggets has a lot of offerings for you, from absolute beginner material to courses covering AWS, Azure, and Google Cloud skills. Let’s say you want to go narrow on a specific topic. OK, well, there’s a two-hour course on Azure security.

[00:27:50.650] – Ethan
Maybe you want to go big and wide. All righty. There’s a forty-two-hour AWS Certified SysOps Administrator course, and lots more cloud training offerings in the CBT Nuggets catalog. I gave you just a couple of examples to whet your appetite. In fact, CBT Nuggets is adding forty hours of new content every week, and they help you master your studies with available virtual labs and accountability coaching. Interested? Of course you are! So satisfy your curious mind by visiting CBT nuggets dotcom cloud and figure out if CBT Nuggets will work for your training with their seven-day free trial.

[00:28:26.980] – Ethan
Just go do it. CBT nuggets dotcom cloud for seven days free. That’s CBT nuggets dotcom cloud. And now back to the podcast I so rudely interrupted. [/AD]

[00:28:39.670] – Ethan
So, OK, I’m in Chile, but I end up connecting to Washington, D.C., because there’s some decision made along the way. How do you determine how to route those users, and to which data center? You mentioned anycast along the way, but talk us through the algorithm.

[00:28:53.890] – Kurt
There’s actually kind of two stages to this. So we run what we call our edge — it’s our global load balancer. We run it in all the regions, and every app gets an anycast IP. And so basically, the decision of a user getting a packet to our edge is pretty much core Internet — you know, anycast decides. And then we work with — we actually kind of offload all our anycast management to another company, because we don’t want to do our own networking just yet.

[00:29:20.950] – Kurt
But in general, what happens in Santiago is hopefully you’ll end up with our Santiago pop and we do all the TLS there. I think for us, the interesting problem happens when you connect. We know you’ve connected. We know you want to get to this particular app. And this particular app is running in five other regions. Which one do we actually send you to?

[00:29:37.270] – Ethan
Right.

[00:29:38.260] – Kurt
It actually gets to be a big, hairy problem because what we started with is like just send them to the closest, but then you overload closest pretty quickly, depending on traffic bursts and things.

[00:29:48.280] – Kurt
And so what we actually do now is keep this concept of capacity per application VM. When we route a connection, we go to the closest with availability, basically — which could be either CPU, it could be some connection limit; it could be a lot of different things, it depends on the app. The really interesting problem there is we have this distributed, eventually consistent problem where we get a connection in Santiago, and we see that maybe Ashburn, Virginia has the closest VM that we think has capacity at the time the connection happens.

[00:30:18.670] – Kurt
And then by the time the connection gets to Ashburn, it’s like, no, this VM’s full, we can’t actually do anything with it here. So we actually have to implement all this retry logic to bounce between regions — basically to do what we call latency shedding. And so the idea is, if you’ve maxed out your capacity in Virginia — I have a fun story for why this had to be built, by the way — we’ll actually retry a request in maybe New Jersey, or maybe L.A., or maybe in Sydney; who really knows what the next best option is, effectively.
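The closest-with-availability routing plus retry that Kurt describes can be sketched in a few lines of Python. This is a toy model under assumptions of my own — a static latency table and a slot counter standing in for real capacity signals; the region codes echo Fly-style airport names, but every number here is invented:

```python
# Hypothetical region table: name -> (latency from this edge, capacity).
# The real proxy works from eventually consistent load data, so its view
# of "free_slots" can be stale by the time a connection arrives.
regions = {
    "iad": {"latency_ms": 8,   "free_slots": 0},   # closest, but full
    "ewr": {"latency_ms": 12,  "free_slots": 3},
    "lax": {"latency_ms": 60,  "free_slots": 5},
    "syd": {"latency_ms": 200, "free_slots": 10},
}

def try_handoff(region):
    """The destination re-checks its own (authoritative) capacity;
    this is where a stale routing decision gets caught."""
    if region["free_slots"] > 0:
        region["free_slots"] -= 1
        return True
    return False

def route(regions, max_attempts=3):
    """Try regions closest-first; if the handoff is refused because the
    region filled up in the meantime, bounce to the next-best one."""
    by_latency = sorted(regions, key=lambda name: regions[name]["latency_ms"])
    for name in by_latency[:max_attempts]:
        if try_handoff(regions[name]):
            return name
    raise RuntimeError("no region accepted the connection")

print(route(regions))  # Ashburn is full, so the connection lands in "ewr"
```

The race condition in the transcript is the gap between the sorted list (the edge’s belief) and `try_handoff` (the destination’s reality); the retry loop is what papers over it.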

[00:30:48.100] – Ethan
So you’ve got to have some kind of an ingest load balancer or something, because you’re using anycast, which means you’ve got no control over that component of it — I’m going to connect to wherever that closest IP is. But then as that inbound comes in, you’ve got all this metadata you’re applying to the decision of where to send them. So you’ve got to bring it in wherever it’s closest and then basically backhaul it across your network to whichever data center you want to service the request.

[00:31:13.870] – Kurt
Yeah. Yes. And then when that data center is full, we have to re-backhaul it to a different one — basically when there’s been a race condition that causes the request to get somewhere that can’t handle it by the time it actually receives it. So yes, there’s kind of our global proxy that handles TLS termination, handles TCP and UDP load balancing, and makes various levels of choices for this stuff.

[00:31:36.860] – Ethan
And is that your own magic load balancer? Is that some third party that had all the magic built in for ya?

[00:31:41.710] – Kurt
No, that’s us. One of the things I tell people is the special things we built are our global load balancer and then the private networking, and kind of everything in between is your typical cloud stack, where you take things that already exist and make them work the way you want. But arguably, our whole company exists because of the global load balancer.

[00:32:03.460] – Ethan
It’s just such an interesting problem because it’s a dynamic problem too, latency, load all changes in real time. And so I don’t know if you’re making a request by request decision or how you’re doing it. But jeez, dude, that’s an achievement.

[00:32:17.740] – Kurt
It’s a request by request. So the fun story here is, when we first got a big customer, they were doing a bunch of image resizing, and for a reason I don’t really remember exactly — well, A, we had small servers everywhere at the time. We were tiny and spending like 8K a month, and that was more money than we could fathom at the moment. And they got a huge burst of traffic in Tokyo, from basically doing a hundred million image resizes a day.

[00:32:46.450] – Kurt
And Tokyo just melted, as far as I can tell. I’m not actually sure what happened to the servers. I just know they vanished and we couldn’t do anything there anymore. And that’s when we kind of had this moment of, we can’t just naively send people’s traffic to the place that’s close. We actually have to be able to account for availability and load. And even now — latency will degrade between two regions for no reason that we can understand.

[00:33:09.610] – Kurt
And we just have to route around that effectively. And so it’s request by request, and sometimes we make that decision multiple times before we even establish the connection or request to the app.

[00:33:21.570] – Ned
Gotcha. OK, so I think I have a pretty good understanding of what’s happening from the networking side — we could probably spend another hour on that alone. But I do want to get onto the cloudier, application side of things a little bit. You mentioned you’re using Firecracker and you’re using micro-VMs. How do I get my code onto those VMs? Do I ship you an ISO? I’m guessing probably not. Or a VHD? There’s gotta be a way to get the code there, right?

[00:33:50.700] – Kurt
Yeah. So under the covers you ship us a Docker image, and so it’s kind of like, I don’t know, the newer, grosser version of an ISO, maybe, if you’ve used ISOs. Like, Docker at least has layers. We wrote a blog post about it called Docker Without Docker that kind of goes through this process of what we do with Docker images when you push them to us.

[00:34:11.940] – Ethan
I think I read that one. Is that the one where you say it’s just a bunch of tarballs?

[00:34:15.930] – Kurt
Yes, that’s exactly the one. And then it goes into why tar is a terrible format, but we use it anyway, so we just continue with it. Part of the reason — there’s things you’ll hear me make strong statements on, like, this is how things should be done. And then there’s things like Docker images, where this is just how the world works, so we’re going to adapt to that, because one of our goals is to make it so you can kind of launch an app within a few minutes with no —

[00:34:41.610] – Kurt
Like, mental overhead. Right. And so we have a CLI that abstracts a lot of this. It’ll just build your local Docker image, push it to our registry, and tell us to deploy that particular image SHA. But it’s all just Docker under the covers.

[00:34:57.420] – Ned
OK, so if I already have a Dockerfile, I’m already golden. All I’ll have to do is run your command line tool to get my app to launch.

[00:35:04.920] – Kurt
Fly launch basically. Yep.

[00:35:07.660] – Ned
OK, and then. Then that’s it. You’re running your own private docker image registry where all of these images are. Does each customer get their own registry or is it more of a shared pool?

[00:35:18.750] – Kurt
It’s a multitenant registry we built, so each customer gets their own registry that’s only for their organization. We’ll actually launch Docker daemons for you to do the build. We found that a lot of people — I don’t remember exactly, but more than half of the people that tried Fly — didn’t have Docker running locally.

[00:35:38.040] – Kurt
And so what we did is we built a lot of the Docker intelligence into our CLI to push the context to a daemon that we run as an app on Fly to do the build. We also give it a ton of CPU and RAM. It looks insanely fast to people when they do a remote build — it is insanely fast, it doesn’t just look it. But yes, the registry is kind of interesting, because one problem when you do everything global by default is that putting all your Docker images in Virginia is actually pretty slow for anyone in Asia Pacific, or Santiago sometimes.

[00:36:07.200] – Kurt
And so what we do is we actually run regional caches. When you push, you go through a regional cache and we keep a copy of it — so it’s like a write-through cache that’s in the region you’re in. I don’t know what cities you all live in, but it’s probably in the city you live in. One cool thing we discovered is that developers tend to launch apps in the city that they’re in, or as close to the city they’re in as possible.

[00:36:29.970] – Kurt
And so just by doing that, we made things substantially faster for people in most cases, just because the app comes up close to them and the Docker image is also cached close to them.
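The write-through behavior Kurt describes is easy to picture with a toy model: a push lands in the nearest region’s cache and the origin, so the next pull in that region stays local. A minimal sketch, assuming a shared origin store and one cache object per region — the class and method names are mine, not Fly’s:

```python
class WriteThroughRegistryCache:
    """Toy model of a regional write-through registry cache."""

    def __init__(self, origin):
        self.origin = origin      # shared dict: image ref -> blob
        self.local = {}           # this region's cache

    def push(self, ref, blob):
        self.local[ref] = blob    # keep a regional copy...
        self.origin[ref] = blob   # ...and write through to the origin

    def pull(self, ref):
        if ref in self.local:
            return self.local[ref]    # regional hit: no ocean crossing
        blob = self.origin[ref]       # miss: fetch from the origin
        self.local[ref] = blob        # cache it for the next pull
        return blob

origin = {}
syd = WriteThroughRegistryCache(origin)   # developer pushes from Sydney
syd.push("myapp:v1", b"...layers...")
iad = WriteThroughRegistryCache(origin)   # a Virginia host pulls later
print(iad.pull("myapp:v1") == syd.pull("myapp:v1"))  # True: both regions serve it
```

The Sydney pull hits its local cache; the Virginia pull faults through to the origin once and is local thereafter.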

[00:36:39.570] – Ned
Gotcha. Now, we’re saying docker images, but it’s actually running on a VM. So I assume there’s some things you have to change about that image and potentially there’s some enhancements or additional things you can do because it’s not just a container.

[00:36:55.530] – Kurt
Right. So the real technical, in-the-weeds version of this is: when we launch an image, what we actually do is use a combination of containerd and LVM. So when we pull that image down to the host that we then run Firecracker on, what happens is it’s basically LVM thin volumes it creates, per Docker image layer. If you’ve looked at Docker, it is just a stack of tarballs. And what happens is we let LVM create a thin snapshot, basically, per layer.

[00:37:28.260] – Kurt
So when you do incremental changes, it’s actually very quick, because it just reuses the layers that are already sitting there on the host. When you extract Docker — in our case we get LVM volumes, but it’s still just a file system — we kind of provide that to the VM. When we boot the thing, we have to inject an init binary, because it’s not always obvious what to run. So we have a Rust-based binary we inject into the file system, and then we launch Firecracker and say: here’s your device ID, here’s the binary to run from this device, and then kind of go do your thing.
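Fly’s actual pipeline uses containerd and LVM thin snapshots per layer, but the layer semantics themselves — later tarballs override earlier ones, and OCI “whiteout” entries delete files from lower layers — can be sketched with in-memory tarfiles. A toy sketch; the helper names are mine, and a dict stands in for the real block device:

```python
import io
import tarfile

def apply_layers(layer_tars):
    """Stack OCI-style image layers into a flat path -> bytes mapping.
    Later layers override earlier ones; ".wh." whiteout entries delete
    files shadowed in lower layers."""
    rootfs = {}
    for layer in layer_tars:
        with tarfile.open(fileobj=io.BytesIO(layer)) as tar:
            for member in tar.getmembers():
                name = member.name.lstrip("./")
                if name.rsplit("/", 1)[-1].startswith(".wh."):
                    # Whiteout: remove the shadowed file from lower layers.
                    rootfs.pop(name.replace(".wh.", "", 1), None)
                elif member.isfile():
                    rootfs[name] = tar.extractfile(member).read()
    return rootfs

def make_layer(files):
    """Build an in-memory tarball from a path -> bytes mapping."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

base = make_layer({"etc/os-release": b"Alpine", "app/main.py": b"v1"})
update = make_layer({"app/main.py": b"v2"})  # only the changed file ships
rootfs = apply_layers([base, update])
print(rootfs["app/main.py"])  # the upper layer wins: b'v2'
```

An incremental deploy only ships `update`, which is why reusing the already-present base layers (as thin snapshots, in Fly’s case) makes redeploys fast.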

[00:38:00.540] – Ned
OK, and in terms of workload types I can run, is it basically anything Linux-based? Do you support Windows containers? Yes, they are an actual thing that exists in the world — shocking, I know. Or are there some limitations on what runtimes are supported?

[00:38:20.610] – Kurt
Um there is. It’s anything Linux based right now. In theory, the way the VMs work, we could actually run even things like unikernel type apps so we could actually run a kernelless application, if you were able to build something like that and we gave you the tooling to launch that. But for the most part, it’s just Linux based Docker images.

[00:38:41.970] – Ned
OK, yeah, I was pretty sure you weren’t going to be supporting the Windows ones.

[00:38:46.640] – Kurt
No, we needed it yesterday though.

[00:38:48.180] – Ned
Because there’s nobody besides Microsoft that does.

[00:38:50.520] – Ethan
Do you need to take a seat, Ned? Are you going to be OK?

[00:38:53.160] – Ned
Oh, I. I don’t have a dog in this fight. In fact, I think Windows containers is a little bit ridiculous. But, you know, some people have feelings about it. I want to make sure that we’re inclusive about this sort of thing.

[00:39:05.970] – Kurt
You know, we needed a Windows VM yesterday to test some stuff, because only one of us has Windows running, on like a NUC or something. And we were actually irritated at how hard it was to get a VM from not-Fly. It’s kind of funny — I wish we’d had Windows VMs yesterday so we could run our own stuff, because we turn up VMs all the time, and as soon as we needed a Windows one, it was like, why is this so difficult?

[00:39:27.180] – Ned
Right. Another thing that I’ve encountered, especially with serverless-ish type applications, which is kind of what we’re talking about, is the need to warm up an application before the requests start coming in. How do you deal with that today? Do you have to keep some copies running, and am I as the client paying for those copies to run in all the different regions?

[00:39:51.090] – Kurt
The short answer is yes. I tend to think of serverless as functions as a service, versus something like what we’re doing, or Fargate is doing, or Google Cloud Run is doing, where when you deploy an app, we launch a tiny VM and let it run forever. When you scale to multiple regions, we turn them up in multiple regions and let them run forever. When you scale back down, we kind of go back to the one. You do have to pay.

[00:40:16.750] – Kurt
We actually built a free tier specifically so you can keep three tiny VMs running at all times without giving us any money, because the goal has been to get people to do side projects here, and nobody wants to pay for that. But in general, yeah, it’s almost just like provisioning your own VPSes, right? You kind of get VMs and they go off. We have some auto-scaling logic that’ll turn them off and on.

[00:40:38.550] – Kurt
But it’s not really all that sophisticated under the covers, which I actually think is good. But, you know, functions are coming someday.

[00:40:46.290] – Ethan
So the way you bill, then, sounds like what we’re pretty much used to. I’m going to reserve a CPU instance with some kind of RAM characteristics and maybe some network bandwidth or something. And I get billed what, some static amount per month?

[00:40:58.740] – Kurt
Per second, but yes, while it’s on. So in our pricing you’ll see a per-second price and then an estimated monthly cost, because months are inconveniently different lengths.

[00:41:07.470] – Ethan
Whether I use what I reserved or not. It’s not a usage based model.

[00:41:12.600] – Kurt
Correct. Right. So if it’s on, you get charged for it. If it’s off, you don’t get charged for it. Most apps are on full time; some of them actually scale up and down based on traffic. So it’s kind of usage-based, but in general it’s either on or off, and you’re either getting charged for it or you’re not.
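The relationship between a per-second price and an estimated monthly cost is simple arithmetic. A sketch with a made-up price — the real numbers live on the provider’s pricing page — using an average month of roughly 730 hours, a common convention since months vary in length:

```python
def estimate_monthly(per_second_usd, hours_per_month=730):
    """Estimate a month of always-on runtime from a per-second price.
    730 ≈ (365 days * 24 hours) / 12 months, the usual averaging trick."""
    return per_second_usd * 3600 * hours_per_month

# A hypothetical shared-CPU VM priced at $0.0000008/second:
print(f"${estimate_monthly(0.0000008):.2f}/month")  # prints $2.10/month
```

Since billing stops when the VM is off, a workload that auto-scales down at night would land below this always-on estimate.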

[00:41:29.100] – Ethan
OK, OK. But we’ve talked about some different use cases. Transcoding came up, something very CPU intensive as you’re spinning up different customer workloads and trying to figure out where in your infrastructure to put them. Is the noisy neighbor problem something you’ve thought about?

[00:41:44.700] – Kurt
It is, because we’ve also all suffered from CPU steal. It’s just a huge pain, and it’s usually a two a.m. emergency and you don’t know why, and then you go figure out that steal is the problem and then have to figure out what to do about it. It’s funny hearing people talk about how they provision cloud VMs — frequently they’ll actually check for steal and then kill the thing if it’s above some threshold and go get a new one, thinking it’ll put them on new host hardware. But what we do is we sell two types of VMs.

[00:42:12.510] – Kurt
We sell shared CPU VMs — they’re literally called shared CPU — and then we sell dedicated CPU VMs. What happens on shared CPU is you’re on a pool of CPUs with other people using shared CPU stuff. We basically just use cgroups to control this: we give you your proportion, so if everyone’s bursting, you get one-eighth or whatever of the CPU when it’s on full time. And then dedicated CPUs are for things like image workloads — you don’t end up on the same hardware thread as another customer at any time. You get that whole hardware thread at all times, though not quite a whole core.
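The proportional-share behavior maps to cgroup CPU weights: under full contention, each tenant gets its weight divided by the total; when neighbors are idle, a burster can use the slack. A minimal model of that arithmetic — tenant names and the weight value are invented, and this is the math, not an actual cgroup configuration:

```python
def burst_share(tenant_weights, total_threads):
    """cgroup cpu.weight semantics, simplified: when every tenant bursts
    at once, each gets weight / total_weight of the shared pool."""
    total_weight = sum(tenant_weights.values())
    return {
        name: total_threads * weight / total_weight
        for name, weight in tenant_weights.items()
    }

# Eight equal shared-CPU tenants contending for one hardware thread:
# each is guaranteed 1/8th of it, exactly the proportion described above.
tenants = {f"vm{i}": 100 for i in range(8)}
shares = burst_share(tenants, total_threads=1)
print(shares["vm0"])  # prints 0.125
```

A dedicated-CPU VM sidesteps this entirely by pinning the tenant to its own hardware thread, so there is no pool to share.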

[00:42:47.340] – Ethan
It’s almost like quality of service scheduling.

[00:42:49.560] – Kurt
Yes.

[00:42:50.130] – Ethan
Reminds me of that.

[00:42:50.790] – Kurt
Yeah. Yeah. And then one of the funny things about clouds is they all like to say vCPUs, but what they’re really talking about is probably two hardware threads per core. And so actually getting a reserved core on a cloud is relatively difficult, because you need to colocate vCPUs, which is an interesting challenge. For me — again, I don’t love abstractions; I’d rather people just know exactly the hardware they’re getting. So we’ve kind of opted in that direction a little more.

[00:43:18.540] – Ethan
Talk to us about Kubernetes, because you can’t have a show where we talk about cloud and not bring up Kubernetes. Is there any special relationship Fly has with Kubernetes, or comments you have on that? He starts out with laughter — this’ll be good.

[00:43:32.240] – Kurt
Well, A, we don’t use Kubernetes, because it won’t work for what we need. We ended up orchestrating VMs with basically HashiCorp Nomad, although at this point it looks a lot less like Nomad than it used to, because neither Kubernetes nor Nomad is built for what we need to do under the covers.

[00:43:51.700] – Kurt
We actually have this internal metric of how many customers have told us they shut down a Kubernetes cluster when they moved to Fly, which I kind of get a huge kick out of. I feel like Kubernetes is actually technically amazing. It always fascinates me when companies that basically built a CRUD app hire someone to manage their Kubernetes. It seems like in some ways we’ve regressed on infrastructure in the last 10 to 15 years — it’s become so much more complicated, despite having a tremendous amount of CPU and stuff.

[00:44:17.980] – Kurt
And so my feeling is, for the people who actually need Kubernetes, Kubernetes is amazing. I feel like most companies shouldn’t be messing with Kubernetes for the moment.

[00:44:27.250] – Ethan
But it’s very cool. Kurt. It’s very cool.

[00:44:29.020] – Kurt
It is cool. You know what I think Kubernetes is? It’s a way for people who like to do DevOps work to guarantee themselves — to basically build a full-time job out of kind of any level of complexity. As soon as you put Kubernetes in, you suddenly have a full-time job and maybe even a second person to hire. Right. And so if I were doing that work, I would love Kubernetes, because it’s fun.

[00:44:54.450] – Kurt
It’s interesting. It’s fun to build a whole PaaS for a company, whether that company needs a whole PaaS kind of internally or not. I have strong feelings on kubernetes, obviously, and we don’t we don’t use it.

[00:45:06.600] – Ned
But I think it’s very interesting that you chose to use Nomad for your orchestration and then customize the heck out of it. In my experience, Nomad is less opinionated, lets you do more, and doesn’t try to abstract as much stuff. It’s also more lightweight. Are those the reasons you selected it?

[00:45:26.100] – Kurt
That’s exactly right. Lightweight is an important one. We can keep Nomad in our heads — we can basically understand what Nomad’s doing at all times. And that’s not been true for me with Kubernetes for like five years at this point. It’s just big. Anyway — the other thing you mentioned, abstractions, is kind of interesting, and I talked about those a little bit with the networking. We don’t use CSI or CNI, because we don’t need pluggable storage or pluggable networking.

[00:45:52.290] – Kurt
And we’re much better off doing our own storage and doing our own networking in a much simpler way that doesn’t inherit all the baggage of the “standardized pluggable” things. That was air quotes, for people — this isn’t going on YouTube, but I’m making air quotes continuously. All the Kubernetes interfaces for doing things like storage and networking make complete sense if you want to be pluggable and support multiple cloud providers and move from EBS to Google drives or to LVM or something like that.

[00:46:20.700] – Kurt
But for us, we’re literally never changing — we’re never plugging storage, we’re never plugging networking. We built ours, and this is it. And it’s simpler to have our own non-abstracted interface directly to what we need. We’re only a company of seven, so keeping things simple matters — actually, I’m not sure seven people could run Kubernetes for us. And that’s not even a knock against Kubernetes this time, as much as just kind of a fact of how big that thing is.

[00:46:50.750] – Ned
Another thing that I noticed as I was reading through the documentation a little bit, and first I want to compliment you on your docs.

[00:46:56.930] – Kurt
Oh thank you.

[00:46:58.070] – Ned
They are clear and they are well written, and I can find things in them. And you’d think that would be a pretty low bar.

[00:47:03.590] – Kurt
That’s amazing to hear.

[00:47:06.020] – Ned
Yet so many people — or so many companies, I should say — don’t clear that bar, and Ethan knows I’ve gone on this rant before. But, oh, as someone who writes docs occasionally, I just appreciate the craft and I appreciate it when it’s well done.

[00:47:23.240] – Ned
One of the things I noticed is you have your own command line tool — I think we mentioned it before — flyctl. Is that the only way to interact with Fly, or can you also do it through APIs or some other toolset like Terraform or something like that?

[00:47:37.850] – Kurt
So, flyctl talks to a GraphQL API, so you can actually build a bunch of stuff on top of Fly without using flyctl. And we did — I mean, the CLI has to have an API. So we decided on GraphQL and made it pseudo-public, because we knew people would ask. In our community you’ll see people using the GraphQL API. A lot of times we have people that are running kind of a single-tenant SaaS — they want to launch a new stack per customer they get — and the API is a nice way to do that. I think someone was working on both a Terraform and a Pulumi provider, so those basically consume the API to make Terraform and Pulumi work. But those are the tools. I thought you were going to ask about a web UI when that question started, which I thought was kind of funny, because then it went to Terraform instead, and I was like, oh, that was not what I was expecting.
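A GraphQL API like the one Kurt mentions is just an HTTP POST with a JSON body, so scripting against it needs no special client. A hedged sketch — the endpoint URL and the query’s field names here are illustrative assumptions, not Fly’s documented schema:

```python
import json
import urllib.request

# Hypothetical endpoint and query; check the provider's docs for the
# real URL and schema before using anything like this.
ENDPOINT = "https://api.example.com/graphql"
QUERY = "query { apps { nodes { name status } } }"

def build_graphql_request(endpoint, query, token):
    """Package a GraphQL query as a standard authenticated HTTP POST."""
    body = json.dumps({"query": query}).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_graphql_request(ENDPOINT, QUERY, token="<your-api-token>")
print(req.get_method())  # prints POST
```

Sending it is one `urllib.request.urlopen(req)` call; the same request shape is what a Terraform or Pulumi provider would issue under the hood.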

[00:48:25.190] – Ethan
Well, we talk about automation a lot on this podcast, and so that was really the context, as opposed to the clicky-clicky.

[00:48:34.430] – Kurt
Yeah, right. Yes. So we started with CLI for automation basically because you can even script the CLI, it’s pretty easy to integrate into all kinds of stuff. So we have like a GitHub actions thing that just pulls down the CLI and does a lot of CLI stuff.

[00:48:48.740] – Ned
Yeah, I don’t know. Every time I work with developer tooling and somebody brings up the UI, everybody gets all offended like, oh no, you do it at the command line you fool.

[00:48:57.950] – Kurt
I like the UI for metrics and ultimately logs — going and kind of looking at stuff, the UI is great. But for actually doing things, it’s the CLI. Maybe we’re just old and grizzled and this isn’t the way the new kids are going to do it in the future, but CLIs for the win, I guess.

[00:49:16.610] – Ned
This is the way.

[00:49:18.650] – Ethan
Kurt we’re coming up on the end of the episode here, man. There’s we’ve covered a lot of ground, but is there anything that we didn’t get to talk about that you think is super cool or interesting that you want to point out about Fly.io?

[00:49:29.150] – Kurt
I think we sort of talked about it. The thing I need to keep hammering on, really, is that boring full-stack apps work all over the world on Fly. Everybody’s building something boring that would benefit from this. And when we hit that point — where boring Rails or boring Phoenix or boring Laravel just worked — it was kind of amazing. I first shipped a stupid Rails app and had it work multi-region.

[00:49:55.070] – Kurt
And so I think that’s the thing I keep coming back to, even when talking about video transcoding and CPUs and everything like that: we built this for boring full-stack apps, and I don’t mean boring badly. It’s what we’re all building, right? It should just work and it should be fast, right out of the box — that’s what we’re after.

[00:50:09.230] – Ethan
Well, Kurt.

[00:50:09.920] – Ned
Boring is code for it makes money.

[00:50:12.440] – Kurt
Yes, it makes money. And it doesn’t wake me up at two a.m. That’s kind of the.

[00:50:19.730] – Ned
Right

[00:50:20.610] – Ethan
Kurt tell people how they can follow you on the Internet. Are you out there in Twitter land or blogging or anything like that?

[00:50:28.160] – Kurt
I blog on the Fly.io blog occasionally. I’m mrkurt on Twitter. I’m not nearly as loud as I should be, but occasionally I’ll retweet and make hopefully interesting tweets. But that’s kind of where I’m at.

[00:50:41.390] – Ethan
Very cool. And then for I mean, Fly.io we’ve been talking about that. Is there anywhere else that people should go if they want to know more about the platform?

[00:50:49.940] – Kurt
No, just Fly.io — and apparently our better-than-mediocre docs, which is real high praise. Go look at our docs and then tell me if they’re slightly better than mediocre. That’s really a good place to be.

[00:51:01.700] – Ethan
Coming from Ned, you have no idea how high that praise is. Well, Kurt, thank you very much for appearing on Day Two Cloud. And hey, those of you out there listening: virtual high fives to you for tuning in. If you have suggestions for future shows, we would love to hear them — hit Ned or me up on Twitter at Day Two Cloud show, or fill out the form on Ned’s fancy website, Ned in the cloud dot com.

[00:51:26.300] – Ethan
If you have a way cool cloud product you want to share with our audience of IT professionals — because you’re a vendor, you’ve made something amazing and you just want to let these cloudy folks know — you should become a Day Two Cloud sponsor. You will reach several thousand listeners, all of whom have problems to solve, and maybe your product fixes their problem. We’re never going to know unless you tell them about your amazing solution. So find out more about that at Packet Pushers dot net slash sponsorship. Until then, just remember: cloud is what happens while IT is making other plans.

Episode 105