Follow me:
Listen on:

Day Two Cloud 147: Google Cloud Is Not Just For Devs

Episode 147

Play episode

Today on Day Two Cloud we peel back the curtains on Google Cloud Platform, or GCP. Our guest is Richard Seroter, Director of Outbound Product Management at Google Cloud. Prior to working for Google, Richard was an Azure and Microsoft MVP, a PluralSight instructor, developer, and sales engineer, so he brings a wide spectrum of experience to the conversation. This is not a sponsored episode.

We discuss:

  • Why Richard decided to move to Google
  • Concerns about Google’s habit of killing projects and how GCP isn’t Google
  • How GCP differentiates from other big public clouds
  • GCP’s embrace of multi-cloud as a competitive differentiator
  • More

Sponsor: StrongDM

StrongDM is secure infrastructure access for the modern stack. StrongDM proxies connections between your infrastructure and Sysadmins, giving your IT team auditable, policy-driven, IaC-configurable access to whatever they need, wherever they are. Find out more at

Sponsor: ITProTV

Start or grow your IT career with online training from ITProTV. From CompTIA to Cisco and Microsoft, ITProTV offers more than 5,800 hours of on-demand training. Courses are listed by category, certification, and job role. Day Two Cloud listeners can sign up and save 30% off all plans. Go to and use promo code CLOUD to save 30%.

Tech Bytes: Hashicorp

Our Tech Bytes sponsor is HashiCorp, where we dive into its Consul product to learn how it’s evolved from its humble beginnings to become a service networking platform with features including a service mesh, service discovery, network infrastructure automation, an API gateway, and more.

Show Links:

Google Cloud documentation – Richard’s blog

@rseroter – Richard Seroter on Twitter


Note: This output is provided as-is and has not been checked for errors.

[00:00:00.850] – Ethan
Sponsor. StrongDM is secure infrastructure access for the modern stack. StrongDM proxies connections between your infrastructure and sysadmins, giving your IT team auditable, policy-driven, IaC-configurable access to whatever they need, wherever they are. Find out more from StrongDM. This episode of Day Two Cloud is also brought to you in part by ITProTV. Start or grow your IT career with online IT training from ITProTV. And we have a special offer for all you amazing Day Two Cloud listeners: sign up and save 30% off all plans.

[00:00:52.290] – Ned
Welcome to Day Two Cloud. Today we are talking about Google Cloud, but we're not doing it alone. We brought someone to help us. That is Richard Seroter. He's the Director of Outbound Product Management at Google Cloud. But he's not just a Googler. He spent some time in the trenches with Azure and AWS, so he brings a unique perspective to things. What stood out to you, Ethan?

[00:01:14.640] – Ethan
That it’s not all Google all the time with Richard and his perspective. He brings the corporate perspective, of course, but then he makes a lot of points along the way about how the other clouds fit into things. And it just felt like a very authentic conversation. Authentic was the key word.

[00:01:33.030] – Ned
I think authentic is the word of the day when it comes to this conversation. And stay tuned. After the conversation, we have a special tech bite from HashiCorp talking about Consul. But before that, enjoy this conversation with Richard Seroter from Google Cloud. Well, Richard, welcome to Day Two Cloud. We’re super excited to talk to you. Before we get into the topic at hand, can you give us a quick background on who you are and what led you to your current position at GCP?

[00:02:03.990] – Richard
Yeah, thanks. I'm super pumped to be here. I appreciate the invite. I'm Richard, an outbound product manager at Google Cloud. I have never taken the same job twice in 20-plus years of working, so I don't know where to go from here. Maybe astronaut or plumber or something. I'm running out of tech jobs. But I've been a developer, architect, sales engineer, marketer, product manager, and now whatever an outbound product manager is. Google recruiters reached out about some new function they were building that seemed terrifying and awesome, so I decided to jump at that. Before that, I was a twelve-time Microsoft MVP, mostly for Azure, and I've been teaching Pluralsight courses for almost a decade now on stuff like AWS, Salesforce, and other things. So for better or worse, I've been doing this cloud thing for a while.

[00:02:49.290] – Ned
Got you. And it's interesting that you've hopped around in different types of positions, but all within the tech domain. So I guess we're going to have to invent something new for you.

[00:02:58.180] – Ned
That sounds like what Google did. This outbound product manager. What even is that?

[00:03:03.090] – Richard
I know, it's a good question. As you say, they're all related. I'm not going from beekeeper to .NET programmer; it's not completely tangential stuff. In essence, it's really the go-to-market folks of product management. So I spend a lot of time with customers, partners, and analysts, working on portfolio-level product strategy, things like that. I have a team of 15 or so people now at this point who bang around on this topic and make sure we launch products the right way, talk to customers more, and get better feedback into our product loop. So honestly, it's some of the best parts of PM and these other jobs. I'm trying not to tell my boss that because I'll never get a raise, but this is some of the most fun I've ever had.

[00:03:42.750] – Ethan
So, Richard, you’re a multi cloud human. We could phrase it that way because you got all this background with Azure, you’ve had some involvement with AWS. Why GCP? What attracted you to take up employment working for Google? Did they just back up the money truck? Was it something like that?

[00:04:00.210] – Richard
I live a very lavish lifestyle, so finances were super big. How do I support this sort of ridiculous lifestyle? No, I mean, look, we all get free Pixels and free chinchillas and stuff as part of working at Google, so that's a nice perk. But honestly, some of the appeal was that I didn't know it that well. The comfortable thing for me to do would have been to work at Microsoft or something like that, if they were so inclined to hire someone as ridiculous as me. But for Google, it was more like, I know there's amazing tech leadership that created some of the most important tech in our industry, and I know there's always been this great reputation around their engineering and product. Some of my unfamiliarity was actually a draw, as I kind of like being a little bit uncomfortable at work. I like watching The Office; I like a little discomfort. And if I know something too well, where's the challenge? Some of it was, I know this is amazing stuff; I'm going to feel incompetent for a long time if I work there. Wouldn't that be somewhat exciting? And honestly, a lot of this was that I knew they had great tech but kind of emerging, maybe not great yet.

[00:05:01.630] – Richard
Go to market, and I thought that could be fun to help fix.

[00:05:04.250] – Ned
Got you. I also embrace discomfort whenever I can and seek out new experiences, because that's kind of what's exciting about technology, right? It's the new and the shiny. I guess that's at least partly why we're here. I think a lot of our listeners might be a little bit familiar with GCP. Maybe they've spun up a project to take it for a test drive, or their company is investigating adopting the platform. And I think that's driven in large part because Google is seen as the up-and-comer, the third-place cloud. I don't want to say that negatively, but that's kind of the perception as it stands now. But there's also a concern I've heard from the community that Google, or the larger company Alphabet, has a tendency to kill products when they're not working so well. And that's scary when you move into the cloud realm, where your production deployment relies on a service to bring you revenue. So can you speak a little bit to that perception and how Google Cloud approaches services and features?

[00:06:15.030] – Richard
Yeah, looking at it from the outside in, it's a fair concern. I'm still mourning Google Reader; there's people like us. Now, if you're mourning Google Plus, you're probably an oddball. I haven't met anybody who's really flipping out because that went away. But whatever, there's somebody out there who was just killing it on Circles and they loved however that worked. But look, the culture here is definitely R&D and a lot of experimentation and learning and making bets. And the one thing I actually kind of like is that we don't subscribe to the sunk cost fallacy of, just because we've plowed money into this bet, let's keep it going forever. Resources are still finite. Yes, we're some trillion-dollar company, and there are other really successful companies out there, but we don't have unlimited resources. We still have a ton of people and a ton of money. So where am I going to place my bets? Should I keep plowing money into something that's just okay, or should we say we'd rather go bet on this next thing? So Google proper really thinks about constantly optimizing.

[00:07:13.590] – Richard
Now, Google Cloud inherits parts of the research-oriented, experimental part of Google, but we also sell a product, unlike a lot of the other parts of Google where it's Search and Gmail and, I don't know, have you ever paid for Maps? I don't even think you can. It's not even a thing. I don't know how you would pay for most Google stuff. A lot of it is meant to be free and easy, so it is experimentation- and UX-driven. But cloud, we charge for, right? So this has to be something you can trust, and we behave accordingly. You can count on one hand, even if you're a wood shop teacher who had four fingers cut off, how many products we've deprecated. And when we do, we give super long notification. But more importantly, last year we announced this thing called Enterprise APIs, which more or less reinforced long-term commitments on backwards compatibility and multi-year notifications if anything ever does go away. Because we're in this for the long haul, and I want to earn everyone's trust. We're not owed it. We have to earn it.

[00:08:14.650] – Richard
But we’re going to earn it on the cloud side by being not only great engineering but a reliable provider. So it’s totally fair to come into it if you’re looking at Google proper and how we’ve been iterating on certain products. But Google cloud behaves a little differently because we have to sell something you believe in.

[00:08:30.070] – Ned
Right, that makes sense. I thought that Enterprise API announcement was really interesting. I remember when that came out, I didn't quite understand what it was about at first, but then I read it and I'm like, oh, okay, this is a promise. Essentially, you're making a promise saying, if you sign a ten-year contract with us, we're going to support you and whatever you're developing on the cloud.

[00:08:48.870] – Richard
And we've seen that, right? In the last few years, we've sold multiple ten-year deals with large companies who are not going to be able to handle it if we just kill a database tomorrow. Of course, we wouldn't, any more than Amazon would or Microsoft would. If you look across the cloud providers, we're all pretty good about not taking major services and busting them up. Nobody's doing that. And that's good, because you have to trust this stuff. This is the foundation for the next generation of enterprise apps. It can't just be changing constantly. That would be insane.

[00:09:18.180] – Ned
The original Cloud Services for Azure is finally, I think, being deprecated this year after being around for ten years, and they're still going to support a different version of it that you can move to if you want to stay on, like, Server 2012. It's amazing.

[00:09:33.750] – Richard
Gosh, look, we're still running the original PaaS in Google App Engine. It's still alive and well and doing great, and a ton of people depend on it. It's been around, good Lord, half of my career at this point. I was using it in '08.

[00:09:44.400] Wow.

[00:09:44.760] – Richard
So it’s great. Again, there’s sometimes these reputational things and I understand it, I’ll be empathetic towards it. But I also want to help people relax and see gosh, these are long haul bets. You should be able to bet on this as much as you did some of the big software you bought in the 90s.

[00:09:59.250] – Ned
Right. In terms of new features and new services that are under development, I think this is probably a good question for you, since as the outbound product manager you're talking to customers. Is what's being developed internally driven by those customers, or is it mostly driven by what the larger Google organization needs from the cloud services?

[00:10:19.770] – Richard
Yeah, it's a good question. We're a little unique in that a lot of our initial services were just manifestations of things Google was doing to run YouTube and run Gmail and run services. So these are really, truly cloud-native services, not just things we came up with out of nowhere. They have been powering the service itself. If you're in Google Cloud, you're using the same load balancer YouTube uses. If you're using Spanner, you're using a database that powers much of Google proper. Same with storage. I love when I hear these stories. When you think about it, if you and I go to YouTube right now and look up the absolute most obscure cat video possible with one view, it still loads in about a second. That's amazing. And that is the same storage subsystem you're using in Google Cloud. When you use glacial storage in other clouds, you can have a multi-hour SLA for when that data comes back; it's basically offline storage. Ours still comes back in milliseconds, and it has to, because how else would it work in YouTube?

[00:11:18.050] – Richard
So we have this amazing storage subsystem that you use as a Google Cloud customer. Or things like Kubernetes, which was inspired by our Borg system, or Istio, which was inspired by our internal service mesh. A lot of foundational things did come from what we had already built to simply run a cloud-scale platform in Google. But at the same time, a lot of the stuff we do now is driven by customer need. Right? What we've done with a managed VMware service, I can confidently tell you, wasn't something we were running internally.

[00:11:48.750] Right.

[00:11:49.100] – Richard
So that's something where customers said, hey, we need a different landing zone for all this huge investment we've had in vSphere and NSX and stuff. Sounds good: a managed VMware service. A lot of the AI/ML stuff, yes, is stuff we do indeed run here, but it's been driven by what customers need. What do you need from a speech API? What do you need from video stuff? Identity and access management is totally driven by enterprise need; our initial stuff was pretty light, and now it's become much more robust, with a managed Active Directory service and things like that that we wouldn't have had otherwise. So with a lot of the stuff we do now, in the beginning you need an awesome foundation, and that does come from Google engineering, and frankly, we're super proud of that. But a lot of what we add on top of that comes from exactly what customers are trying to ask for and build. And I think that's the fun part of this now: you're mixing and matching both.

[00:12:39.700] – Ethan
IAM robustness. It just made me chuckle, Richard, because as someone who's had to dig through the APIs to find whatever the object is that I need to grant permission to for a given service, it is robust indeed, sir. There is a lot going on in there.

[00:12:56.260] – Richard
No certification.

[00:12:57.250] – Ethan
Yeah, that's fair. We're talking about customer needs, and one of the things customers are not looking for is higher prices. Yet not just in GCP but across the board, cloud service prices are going up. GCP did have a fairly recent announcement, as we're recording this, about some price increases and some changes to the pricing model. Give us the GCP story on pricing these days.

[00:13:21.090] – Richard
Yeah. So first of all, to back up, I ran product for a cloud company before this as well, so I've had at least some experience raising prices and decreasing prices. I think we all know, first off, if you go back ten years, there was a lot more volatility in cloud pricing. We were all kind of settling in on what the right price for this was. You saw tons of changes a decade ago. Now, for the most part, these things are fairly static. But why do the prices change? Prices go down much more than they go up across cloud providers; I think we see that a lot. But why do they go down? First, when you get new efficiencies. All of a sudden it's cheaper to run a service, so yes, we want to pass that cost savings on to customers. That's cool. Frankly, I've lowered prices at companies before when we were just trying to win in the market, even if we were going to lose money or have a lower margin, because you're trying to aggressively win market share. Or frankly, sometimes you lower your price because you're just too expensive compared to your peers.

[00:14:11.750] – Richard
At the same time, some of these things go the opposite way. Sometimes you do raise prices when it's more expensive to run the service: licensing costs went up, people costs went up, or you weren't able to get the efficiencies you thought. Or honestly, in a lot of cases, prices go up among cloud providers because you're out of whack with what the other providers are offering. Why am I offering this at 50% of what another provider charges? Clearly the market can tolerate that higher price; we should be there. So prices go down for a lot of reasons and up for a lot of reasons. We've adjusted a couple of things, but honestly, it's usually because there's a new service coming to market that we'd like to steer you towards, a better value than maybe a first-gen service. And we also have to remember, cloud prices never go up wholesale. It's never, hey, the cost of Oracle Cloud just went up 9%. That's never happened, right? Or Google or Amazon or Azure. It's, hey, bandwidth costs for South America just went up 3%, or storage costs for this storage class went up. There's never a wholesale, by the way, inflation just hit the cloud.

[00:15:09.890] – Richard
Everything's up 12%. That's never happened, and I don't see it ever happening. At the same time, I get it that we all have a lot of battle scars, right? I'm looking at us; we're not all super young. We've been around a little bit. So we've been there where, hey, the price of that software you bought just jacked up 30% the next year, and it burned you, and you were stuck because you couldn't get off it. So I totally understand there's that natural worry about getting stuck with something that goes up in price. We bet a ton on open source, so you have a way out if you want to switch to a different cloud. We love price transparency; you can see a lot of good metering information and billing and updates, so you know what's happening. That's the best we can do: try to make sure prices go down as much as possible, be super transparent, and be more open-source based. If you do want to eject, we're not going to make that hard for you.

[00:16:02.130] – Ned
All right. For folks who are completely new to GCP, we've kind of danced around the components a little bit, but maybe we should do a firmer grounding in what the components are in GCP, at least the ones that are common across the cloud providers. I'm thinking of what I consider the big four of cloud: network, compute, storage, and identity. So what's the story in GCP with those four big categories?

[00:16:28.590] – Richard
Yeah, everything's going to feel familiar. For the most part, the fundamentals, as you said, are the same. Sure, I can get VMs. Yeah, table stakes. I could be a pretend cloud provider nowadays and still ship you VMs. As you say, I need a storage subsystem, some sort of block storage, probably object storage, because what are you doing, it's 2022. You probably have some decent networking, a load balancer, maybe some DNS, and a CDN on top of those things. All pretty standard. Those are going to be your foundational things. As you said, identity management. Everyone's got to offer databases, right? Of course, if you want a relational database, we'll be here for you: SQL Server, MySQL, Postgres, sounds good. A lot of those things will feel familiar. I think sometimes you see differences in experience, and then there's a longer tail of different services each cloud offers. Cloud is not a commodity; I usually push back when I hear someone say they're all the same. They're a cloud, sure. I mean, In-N-Out serves burgers just like McDonald's does.

[00:17:30.800] – Richard
They're both great companies, but those aren't the same food, right? There are different experiences, different sorts of things on their menus. And yes, the foundation, burger, fries, and drink, totally. But I'm only getting Animal Style in one place, and the McRib, for whoever loves those, is only coming from McDonald's. There are different experiences at both. So different clouds have different things, and our experience feels different. I love that when I provision a VM in Google Cloud, it comes back fast. A Windows VM comes back in less than 30 seconds; with a Linux VM, I can usually get to a terminal in 15 to 20 seconds, which blows my mind. And it comes up every time, which was not my experience in some other clouds. Super reliable, super fast. I love that. Or that our VPC is global by default. I'm not setting up different regions and then trying to figure out subnet peering in the first place. I love the fact that our VPC is just flat and global by default; it feels different. Storage is super fast. Our portal doesn't make you want to light yourself on fire. It's actually a nice portal to use.
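As a rough sketch of that first-run experience, this is roughly what spinning up a VM with the gcloud CLI looks like. It's an illustration, not from the episode: the instance name, zone, machine type, and image are placeholder choices, and it assumes the Google Cloud SDK is installed and authenticated against a project.

```shell
# List networks: a new project comes with a "default" VPC that is global,
# with subnets per region already in place, so there is no inter-region
# network plumbing to set up before creating a VM.
gcloud compute networks list

# Provision a small Linux VM (names and sizes here are placeholders).
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-micro \
    --image-family=debian-12 \
    --image-project=debian-cloud

# SSH in once it is up; on GCP this whole loop typically takes well
# under a minute for a Linux instance.
gcloud compute ssh demo-vm --zone=us-central1-a

# Clean up so the free tier or budget is not consumed.
gcloud compute instances delete demo-vm --zone=us-central1-a --quiet
```

The delete at the end matters for exactly the free-tier and billing concerns discussed later in the episode.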

[00:18:34.430] – Richard
So there are a lot of things that start to feel different beyond the services. There are some things Amazon is amazing at, and some things Azure is awesome at. And you come to Google Cloud and say, hey, your data story is amazing. A serverless data warehouse with BigQuery? There's nothing like that. I don't provision instances and manage infrastructure. I literally say, here's a data set, chew on this, and when it's done, I don't pay anything anymore. That's bonkers. What is that? Or Spanner, which might be one of the most amazingly engineered cloud products ever in terms of a relational database that pushes on the CAP theorem that my buddy Eric Brewer came up with, here at Google. The idea that I can have a consistent, available, partition-tolerant database that spans regions and still performs with five nines and amazing perf. It's just a remarkable database. It's awesome. So our data story is amazing; people come here a lot for that and for AI. And our serverless story has gotten pretty awesome as well. So you're going to come to us for certain stuff. That's why multi-cloud has kind of taken off.
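To make the serverless data warehouse point concrete, here's a hedged example of what "here's a data set, chew on this" looks like with the bq CLI against one of BigQuery's public sample datasets. It assumes the Google Cloud SDK is installed and authenticated; there is no instance or cluster to provision first, and billing is per bytes scanned.

```shell
# Run a standard-SQL query directly against a public dataset.
# Nothing was provisioned beforehand, and nothing keeps running
# (or billing) after the query finishes.
bq query --use_legacy_sql=false \
  'SELECT name, SUM(number) AS total
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   GROUP BY name
   ORDER BY total DESC
   LIMIT 5'
```

Adding `--dry_run` to the same command reports how many bytes the query would scan without executing it, which is a cheap way to sanity-check cost before running something big.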

[00:19:31.850] – Richard
Certain people say, look, I love the app integration stuff in Azure. I played with it a ton; Logic Apps is awesome, and some of the Service Bus stuff, all that's amazing. Those are great services to use. Amazon's killing it in certain areas, and we're killing it in certain areas. And then you might be in a region where you can only find Oracle or IBM, and you're going to pick them. All the clouds are doing great stuff. There are things that are distinctly different in each one, and ours is usually around performance and experience. Often data is where people lean towards us.

[00:20:01.290] – Ethan
If you’re leaning into the differentiators as performance and experience, I have heard that GCP tends to be very Dev friendly. Is it performance and experience that makes GCP Dev friendly, or is that actually kind of an odd descriptor from your perspective?

[00:20:18.330] – Richard
It's definitely an area we are intentional about: can this feel like a developer cloud versus just something IT pros love? And IT pros, they're awesome. I think they were a big driver for Amazon as well, because it was a very script-friendly cloud and you did a lot of stuff that felt great for operators. That doesn't mean developers aren't great there too. But when you look at what we do with Firebase, and at Google creating things like Angular and Flutter and Dart, there's a lot of dev tech that has gravity towards Google Cloud. And then a good experience, good SDKs, and a good portal mean that I can just kind of swipe and go and use the cloud super easily. It doesn't feel like there's a ton of friction; there are no boat anchors. I'm just getting in there, shipping some stuff, and seeing some value. A really good free tier that's actually free, where you don't get stuck on day 31 with a giant bill. It's something like two or three million requests on Cloud Run, our serverless container platform, before you see any charge. I don't know.

[00:21:10.720] – Richard
What are you building that's running more than that as a hobby project? That's awesome. So with a really good free tier, devs can get started. You don't call up a salesperson to mess around with it. Just have fun. But a lot of dev tech. There are millions of Google developers who use things like Firebase, who use things like Dart and Flutter, who use Angular, who use Android. And so some of that does have a gravity towards our cloud.

[00:21:32.730] – Ethan
We pause the podcast for a couple of minutes to introduce sponsors. StrongDM's secure infrastructure access platform. And if those words are meaningless, StrongDM goes like this. You know how managing servers, network gear, cloud VPCs, databases, and so on is this horrifying mix of credentials that you saved in PuTTY, and then super-secure spreadsheets, and SSH keys on thumb drives, and that one doc in SharePoint you can never remember where it is? It sucks, right? StrongDM makes all that nasty mess go away. Install the client on your workstation, authenticate, policy syncs, and you get a list of infrastructure that you can hit. When you fire up a session, the client tunnels to the StrongDM gateway, and the gateway is the middleman. It's a proxy architecture: the client hits the gateway and the gateway hits the stuff you're trying to manage. But it's not just a simple proxy; it is a secure gateway. The StrongDM admin configures the gateway to control what resources users can access. The gateway also observes the connections and logs who is doing what, database queries and kubectl commands and so forth. And that should make all the security folks happy.

[00:22:37.920] – Ethan
Life with StrongDM means you can reduce the volume of credentials you're tracking if you're the human managing everyone's infrastructure access. You get better control over the infrastructure management plane, you can simplify firewall policy, and you can centrally revoke someone's access to everything they had access to with just a click. StrongDM invites you to 100% doubt this ad and go sign up for a no-BS demo. They suggested we say no BS, and if you review their website, that is kind of their whole attitude. They solve a problem you have, and they want you to demo their solution and prove to yourself it will work. Join other companies like Peloton, SoFi, Yext, and Chime. And now back to the podcast. Say I want to do some homework on infrastructure as code and work on some of that stuff, say with, I don't know, Pulumi. How easy is it for me to blow up the free tier and all of a sudden get a big bill? Or are there throttles in place that can help make sure I don't screw up?

[00:23:49.930] – Richard
It's a good question. I don't believe we even ask for a credit card when you first start, so there's a very clear opt-in when the time comes that, hey, you're about to start doing stuff, be careful. And again, every cloud has the horror stories that show up on Twitter, the "I woke up and I have a $12,000 bill" kind. Some of those things are tough to prevent, because you want to enable self-service. I want you to be able to do all kinds of stuff, and you don't want to have to send an email at three in the morning because you've hit your storage limit; you just want to go. So it's really interesting to see how we keep getting better, and we have to get better, at taking off the reins so you can do whatever you want while letting you know sooner that you're about to spike, so you don't see that cost. We do a good job of making sure you are opting in before you start to really party on in the account.

[00:24:39.140] – Richard
But then even once you’re really in and you’ve given us a credit card, how do we still prevent you from maybe doing something you didn’t want? That’s still tough. Like, I see every cloud still kind of struggle with that because there’s no perfect solution quite yet.

[00:24:52.280] – Ethan
There's price monitoring you can put in place and so on, but there are enough subtleties to it and enough dependencies that getting it right is difficult, and it's not real time.

[00:25:01.140] – Richard
I think that's the biggest knock against most clouds: because we're aggregating so much data constantly, you might not know the second you're consuming too much stuff. There might be a two- or three-hour lag, or whatever it is, as I combine data from 100 services you're using and then figure out that you've busted your budget. Every cloud is dealing with that. It's just a lot of data you're sifting through. We're going to keep getting better, and probably do better than anyone can ever do on-prem, but it's still a hard computer science problem.
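As a concrete sketch of the lagging budget signal Richard describes: Cloud Billing budgets can publish notifications to a Pub/Sub topic, and a small consumer can decide when to alert. Everything below is an illustration, not from the episode; the payload field names (`costAmount`, `budgetAmount`) are modeled on that notification format and should be checked against the current docs, and the thresholds are arbitrary.

```python
import json

def check_budget(message: str, warn_ratio: float = 0.8) -> str:
    """Classify a budget notification as OK, WARN, or OVER.

    `message` is assumed to be a JSON payload along the lines of what
    Cloud Billing budget alerts publish to Pub/Sub; the field names
    here are an assumption about that format.
    """
    data = json.loads(message)
    spent = data["costAmount"]      # cost accrued so far this period
    budget = data["budgetAmount"]   # the configured budget for the period
    ratio = spent / budget
    if ratio >= 1.0:
        return f"OVER: spent {spent:.2f} of {budget:.2f}"
    if ratio >= warn_ratio:
        return f"WARN: at {ratio:.0%} of budget"
    return "OK"

# Hypothetical payload with made-up numbers.
sample = json.dumps({"budgetDisplayName": "dev-sandbox",
                     "costAmount": 85.0, "budgetAmount": 100.0})
print(check_budget(sample))  # -> WARN: at 85% of budget
```

Because the upstream cost data itself lags by hours, as discussed above, even a handler like this only narrows the window; it cannot make the signal real time.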

[00:25:26.830] – Ned
Yeah, absolutely. And ideally you don't interrupt anybody who's in the middle of a big project trying to do something, where you're like, oh, our AI detected some cost anomaly, so we're going to shut down this instance to stop you from shooting yourself in the foot. And they're like, no, I'm running this large-scale simulation, and I need that to run.

[00:25:44.550] – Richard
Or what if it was Black Friday or Cyber Monday, and of course I expected to see an extra 10,000 compute instances? Hey, you just shut down my store, and my business is over.

[00:25:54.330] – Richard
Again, you can’t be too smart here. That’s the challenge. I need to give you the right tools and the right ability to set limits and caps and evolve those sorts of things. But I can’t get too smart here.

[00:26:04.150] – Ned
Yeah, I want to back up to the developer question again, because one thing I noticed is that in the olden times, when we could still go to user groups in person (I know, it's funny), I would attend the local Azure user group and the AWS one. And most of the people in those groups were either grumpy Windows admins or grumpy Linux admins, depending on whether it was Azure or AWS. Then I was invited to go speak at a GCP user group, and it was all developers; they even called it the Google Cloud Developers user group. That was a bit of a culture shock for me. So I'm curious: is there more that GCP should be doing to court IT Ops folks, or is it more that if you get the developers first, the IT Ops folks will be dragged along behind them?

[00:26:59.550] – Richard
Those are both good observations. I didn’t know where you were going to go with that one, so that was a good conclusion. So yeah, we do lean more developer, again, I think, because it’s just natural developer gravity. And frankly, we’re not going to be the most familiar thing for an Ops team. That’s super edgy to say — you’re not plugging this into all your System Center stuff. If you’re using Amazon, sure, you’ve already gotten yourself familiar with that because they were first to market. Microsoft plays really well on their familiarity, especially for Ops people, so that’s a natural place for them. I’m always going to be a bit of a foreign entity in an enterprise. You never ran anything Google in your data center for the most part, unless you had that weird search appliance from ten years ago — remember that? Right. There’s not a natural affinity. It’s usually neutral; it’s rare that I come across negative. It’s just like, yeah, I’ve heard of you all, but I don’t have anything that looks like your stuff. Now, I’m running Kubernetes — you’re already running our stuff. Hey, I’m running this. I’m doing TensorFlow jobs.

[00:27:57.330] – Richard
I’m doing all these — and you realize gRPC and TensorFlow and Kubernetes and all these things in your data center, that’s us, right? Oh, that’s right. So sometimes there are natural affinities that help us with Ops people. But to your point, most of this is still developers who are then creating gravity that becomes more of a standard at the company. And again, a lot of people are choosing Google in general because there’s also a business affinity. We’ve seen these deals where you make a giant YouTube bet along with Google Cloud, or look at Ford, which did a big thing with Android Auto and then also jumped on the cloud train. So it becomes a top-down business choice, versus cloud has often been an IT Ops-up choice. I think we’re now seeing, hey, developer preference, or a business marketing lead who goes, I’m spending hundreds of millions of dollars a year on ads — shouldn’t I be crunching my data in BigQuery? Yeah, I probably should be. So it’s interesting to see it’s not just the stranglehold that IT Ops has sometimes had on the tech choices, because developers are showing their preferences and business leaders are saying these are strategic choices — not just bits and bytes, but what am I betting on as a corporation?

[00:29:01.370] – Richard
I think that’s a cool, interesting change.

[00:29:05.190] – Ethan
That is interesting. The business side of it had not popped into my head, because you look at the big three clouds almost as interchangeable: if you’re a big Microsoft shop, yeah, Azure is a natural fit for a lot of that, and everybody’s using AWS, and Google’s been somewhat in third place, if you will. But now to position it like, hey, we use Google for these other things — should we also be using Google Cloud too? Does that make sense? And strategically, from a business perspective, yeah, it is going to make sense for certain organizations. And when you’re an engineer and tend to think of everything in terms of technology, that thought may not pop into your head, but it is an important and relevant thing. Another interesting observation here about how GCP positions multi-cloud, Richard, is that you acknowledge it’s a thing — it actually exists. Some of the other cloud providers don’t even acknowledge multi-cloud; there are no other clouds. But GCP does. Well, why would you acknowledge that? What’s the business benefit for GCP?

[00:30:08.310] – Richard
Yeah, I’ll say the cynical one first, because it’s the more obvious one. Whenever you’re not first, you encourage multi, right? I mean, first, let’s be cynical: if we were literally the first one, why would you say, hey, you should use other stuff too? Just being honest with you. In the scheme of things, it wouldn’t even enter your head that much. If you’re leading in any industry, why would you encourage people to look around? No — stay all in. But at the same time, more practically, when we look at this, a lot of the Google heritage has been multi-cloud infrastructure. Think of it like: we shipped Kubernetes before we monetized it — here’s an open source project that runs everywhere. Here are all these different open source projects that were by nature used all over the place. So we were always used to just shipping ideas that everyone was using. We weren’t always shipping proprietary stuff; we’ve often talked about that. So some of our heritage is that it’s natural to open source stuff and share stuff, and that made it more natural. But then more customers came to us — especially now; I just did a couple of briefings in the Valley this week before I came home.

[00:31:12.750] – Richard
Shops are saying, look, we’re not going to put everything in anybody’s cloud. It doesn’t matter — everyone made their first cloud choice five to eight years ago, totally cool. Now they’ve gotten somewhat competent in that thing and they get some confidence going, okay, I kind of know what I’m doing. I know Pulumi or Terraform. I know how to do containers and CI/CD. It’s not super terrifying now to use best-of-breed in other clouds. Or, look, I just did some mergers and acquisitions and I picked up a shop that’s using Google Cloud; it’s going to cost me so much more to move them or retrain all the people, so I’m just going to keep them there and it’s fine. So some of this is organic, and we’ve seen it where companies are just naturally doing this sort of thing because they’ve done acquisitions, they’re using YouTube so they also use BigQuery, or they’re using ads and they’re doing analytics. So they just naturally become multi-cloud. And now some are being more strategic and saying, hey, for risk purposes or whatever, we want to spread that around. So we noticed that years ago and kind of leaned into it with what we did with Anthos.

[00:32:09.940] – Richard
We’re now doing it with BigQuery, where I can run our query engine on Amazon and Azure as a managed service and just talk to my Azure Data Lake, talk to my data sitting in S3, all through the same BigQuery interface, without ever moving the data — it stays where it sits. So you’ve seen this cool trend where we’re actually trying to say: centralize your control plane, but keep your data plane wherever you are — on premises, in another cloud. And that’s weird and wild, and a lot of people are really clicking with that, because I do want to centralize something. I can’t simply duplicate everything in each place; it’s too much maintenance and operation. So I still might centralize a control plane for all my logs, maybe some identity stuff, maybe my management of Kubernetes clusters with Anthos, maybe my analytics with BigQuery. But I’m still not going to centralize literally all the data, because of cost and complexity. I think that’s pretty wild. So it’s not just multi-cloud like just jam stuff all over, but maybe also rethinking: hey, I still want Google Cloud to be your anchor even if all of your data isn’t sitting here.
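To make that cross-cloud query pattern a bit more concrete, here is a minimal sketch of what it can look like from the command line with BigQuery’s `bq` tool. The project, dataset, and table names below are hypothetical placeholders, not anything from the episode; the point is that standard SQL runs against an external table whose underlying data sits in S3 or Azure Data Lake, and only the results move.

```shell
# Hedged sketch: standard SQL over a hypothetical external table backed by
# data sitting in S3. None of these are real resource names.
bq query --use_legacy_sql=false '
  SELECT customer_id, SUM(amount) AS total
  FROM `myproject.s3_dataset.orders`
  GROUP BY customer_id
  ORDER BY total DESC
  LIMIT 10'
```

The query engine runs next to the data in the other cloud, which is what keeps egress cost and data movement out of the picture.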

[00:33:10.180] – Richard
So we’re not just literally saying run everything everywhere. I think some software vendors are doing that — they run their platform on each cloud. I get it; that’s their play. Our play isn’t to do that. Our play is to centralize at least the control plane with us.

[00:33:22.230] – Ethan
I’m going to interrupt the podcast for a minute here to talk about IT training. You remember the ransomware attack on the gas pipeline last year? It caught your attention; it probably caught mine. There’s a key thing here: cybersecurity professionals are in demand to prevent that kind of thing, but there are not enough humans out there to fill all the positions — there are over 500,000 open cybersecurity roles. You can become a cybersecurity professional if you get some training, some online training. It is never too late to start a new career in IT or move up the ladder. ITProTV has you covered for your training. They cover everything: CompTIA to Cisco, the EC Council, Microsoft — they’ve got all of it, including the cloudy stuff, more than 5,800 hours of on-demand training. And the way they present the information — some presenters are like they’re reading from the book and they’re super boring. That is not ITProTV’s format at all. They use engaging hosts who present the information in a talk show format and really keep it interesting. And they do it live — they’re live every day. And once they’ve recorded that live show, it goes studio to web in 24 hours.

[00:34:33.560] – Ethan
As you’re digging through their website looking for content, all the courses are conveniently listed by category, certification, and job role, so you can find what you’re looking for without a lot of trouble. And when you pick the thing and you’re ready to go, you can stream ITProTV courses, either the live stuff or the on-demand stuff, from anywhere in the world via whatever platform you like: Roku, Apple TV, PC, or apps on iOS and Android. Learn IT, pass your certs, and then get a great job, maybe in cybersecurity, with ITProTV. Visit itpro.tv/daytwocloud and use promo code CLOUD at checkout. One more time: itpro.tv/daytwocloud, promo code CLOUD at checkout, to save 30% off all plans. And now let’s get back to the podcast.

[00:35:37.350] – Ned
One term I’ve heard coined recently — and I debate its utility, but I’ve heard it — is “supercloud.” It’s kind of just this idea that each cloud is going to be so commoditized and have enough standard parts that I can build across all these clouds and treat them almost as one, as opposed to the differentiation angle that you’ve been talking about. Do you think that concept has legs, or did somebody just need a new term and throw it out there into the ether?

[00:36:11.070] – Richard
The term part feels like the latter — everyone just wants to coin their thing so they can be a LinkedIn influencer. So I get that. I don’t subscribe to the idea that the public clouds are going to keep commoditizing and then we’re just going to depend on this whole new class of vendors to actually provide the differentiation layer across clouds. I think that’s selling the clouds too short, because at some point I’m not going to want to be just a commodity — and we aren’t now. So we’re going to keep shipping great data, AI, and app builder experiences. Now, at the same time, I love the fact that I can use Confluent Cloud and Kafka across all the clouds — awesome. I can do MongoDB Atlas across clouds — amazing. I can do the same thing with Splunk or other things. So we are still going to have cross-cloud layers that you stripe across them; makes sense. Now, do I think most people should be building apps that span clouds? No, I don’t think that’s a great pattern. Do I want to have my front end in Amazon, my back end in Google, and then my messaging layer in Azure?

[00:37:03.580] – Richard
Unless I’m doing resume-driven development, why in the world would I do that? That’s a terrible idea, right? Latency will be awful, Ops will be awful, I’m going to have security leakages. Now — and I have a customer who’s an example of this — I still might say, hey, I’ve been entirely on Amazon SQS, okay? So I still may be talking to that from Google Cloud or Azure, because that’s part of the distributed system; that just might be the way I’ve architected it. But I wouldn’t go into it saying my CDN’s in this cloud, my database is in that one — I wouldn’t intentionally architect a new app that way. Most people are doing multi-cloud by asking, what’s the right cloud for this workload? I’m going to use Amazon for this workload, Azure for this workload, Google Cloud for this one. And to be clear, absolutely no one I’ve spoken to in two years here is saying a third, a third, a third. Nobody does that. You still pick a primary and then you have one or two secondary clouds. So we’re all still jostling to be the primary — nobody’s just splitting workloads evenly across clouds.

[00:38:00.330] – Richard
That’s nuts. But you are seeing most people do this — it’s a rare company that’s making a single bet on a single cloud. You’re in the minority at this point.

[00:38:08.260] – Ned
Yeah, that’s what I’ve observed from the different folks I’ve talked to when I’m doing training sessions or consulting work. I want to shift a little bit from the multi-cloud angle and bring things on-prem a little bit. Let’s bring it on home to the hybrid and edge story, because that seems like a huge area for cloud growth. I mean, I don’t know if we can still call it cloud when it’s running on-prem — call it fog or whatever — but I’m curious to know what investments GCP is making in that arena to embrace and support edge and hybrid deployments.

[00:38:47.610] – Richard
Yeah, I’m glad you said hybrid rather than on-prem, because I don’t really want to just ship software that lives as software on-prem. Why would I do that as a cloud provider? It still should be cloud connected. So it’s a hybrid story, right? It’s saying I’m trying to bring certain things into parity — I’m doing DevOps or site reliability engineering across public and private. I’m trying to think about that differently. So we think of hybrid more than on-prem. I don’t want to just ship disconnected software; there are vendors who do that, rock on. Our best value is going to be in cloud-connected stuff. So look at our Anthos product today: I can use it to run fleets of containerized apps in Google Cloud at scale. But I can also take GKE, our Kubernetes engine — which I will happily argue is the best Kubernetes in the public cloud; I’m not sure I’m supposed to say that at Google, we shouldn’t say “best” and stuff, but whatever, it’s awesome — I can take GKE and run it on Azure, as software, as a service.

[00:39:44.520] – Richard
I can run it on Amazon, and I can run it on bare metal or vSphere. So a lot of companies say: I like the Kubernetes model from GKE, I like the consistent interface, I like the idea of a connected service mesh across all these environments — but I want to run this in my data center, I want to run this in the back of a quick-service restaurant, I want to run this on an oil rig, or whatever. So you get these environments where you’re putting satellite clusters all over the place in these hybrid setups, which is pretty cool, right? And you start to reach into edge as well, where you’re thinking of retail edge, branch office, manufacturing floor — all really cool stuff. So we’re actually pushing hard. I think all the clouds are doing a good job of saying, how do we extend? Just when you thought you got comfortable consolidating in cloud, now we’re going to push back out to your edge — sorry about that. I guess that’s good, because it’s what our customers are asking us for, right? No one was ever going to say our entire world is going to be us-east-1 — that’s a terrible choice.

[00:40:38.920] – Richard
But you wouldn’t live in one data center region anyway. And now we’re saying, no, we still want to move things closer to the data. I want to make sure that if there’s an outage in a region, I can still have my service provided. If I’m a cable provider, I don’t want everything always having to go back to some mothership region or location — I want to federate that a bit. So we’re shipping things like Anthos at the edge. We have our Google Distributed Cloud, which is hardware, software, and services; Google Distributed Cloud Edge just shipped. So I can drop infrastructure fully managed by Google — everything from patching your Kubernetes clusters to updating the operating system — anywhere I want. So we are making some of those bets. There’s a whole ecosystem around good edge stuff, whether that’s data ingest and data management and remote management of fleets and all that — some of that’s us, some of that’s partners. But I think clearly that’s a big part of the future. I’m not all in yet, though — I’ve seen some predictions like, hey, the edge is going to be bigger than the public cloud in so many years.

[00:41:34.820] – Richard
I don’t know — I’m terrible at predictions; that seems like maybe a stretch. But I do think you’re going to see it become more prevalent. And if I can use the same Ops ideas and development ideas for edge too — you tell me, I don’t know too many devs today who know how to deploy to 15,000 locations. That seems hard. That’s not just a Jenkins job, right? I think it’s different. So how do I think about deploying at that scale, managing at that scale? I think we’re going to have to evolve some of our tooling and approaches, and I think that’s awesome. But I’m not just assuming that because you learned cloud, you now know how to deploy to some massive remote edge — that seems like an evolved skill set. So we’re all going to grow together here. I think it’s mostly transferable skills, but we’re going to learn new stuff. I think that makes it awesome, right?

[00:42:19.990] – Ned
It’s definitely going to be a slightly different operational model when it’s so distributed and you don’t have this centralized data center that you’re interacting with. And I don’t know about some of the analyst predictions on the size of edge, and they don’t know either, so I think we’re all just trying to figure it out as best we can. So, for folks who are listening — a lot of our listenership are IT Ops-type people, not as many developers, though I know you’re out there — if they’re curious about getting started with Google Cloud, what are some recommendations you would make about a good place to start, or a project they can kick the tires on when it comes to running in Google Cloud?

[00:43:03.270] – Richard
Yeah, it’s a good question. I would say there are a few things that would probably surprise your listeners about us when you think of things to run. So first off, we’re pretty good at Windows and .NET. Now, that might be mind-blowing — you’re thinking, clearly I have one cloud as my default. Totally fair, I get it. But we’re actually pretty good at that. Our Windows environments in GKE are pretty great — as I mentioned, how fast we can bring stuff up and manage it. We have managed Active Directory, we have managed SQL Server instances. Our migration tooling today is actually pretty good, too — I actually used it last weekend. I can take a .NET 3.5 app on an IIS instance in Windows Server 2012, and we have a tool where I can just click a button, run it through a process, and turn it into a container image and run it on GKE, which is really cool. So I can take my old apps, containerize them at a fraction of the size and cost, and run them in a GKE cluster. So we have some pretty nice tooling for containerizing Windows, running it as VMs, whatever.

[00:44:02.690] – Richard
So first off, just because it’s a Windows workload doesn’t mean we don’t run it; that might surprise people. Same with serverless. If I’m looking at a first workload — look, kudos to Amazon for kind of inventing serverless; we can argue whether PaaS was the first serverless, but let’s not be pedantic. What they did with scale-to-zero compute was awesome; Lambda is great. What we’ve continued to move forward is serverless containers, which I’m actually pumped about, because while traditional function-as-a-service is great, I would contend that nobody on-prem had a single workload that just ran as-is in Azure Functions or Lambda, right? It was a refactor: different method signatures, different code base — nothing just ran there. But when I have serverless containers like Google Cloud Run, I can run a WebSphere app, I can run all kinds of apps. I’m not just running methods, I’m running systems. Cloud Run still scales to zero. It still supports a ton of concurrent requests per container. We just launched it with 32 gigs of memory — that’s as much as my freaking laptop, and that’s even a huge laptop — so I can have huge instances.

[00:45:06.150] – Richard
Linux containers, scale to zero — our serverless story is pretty awesome, especially for full apps. People may not think of that, but it’s a great first workload for getting in here. I don’t even have to know containers. I can literally just do gcloud run deploy: it’ll take my source code, package it up for me, containerize it, and run it. So it’s super easy to get started, and it’s actually a really great way to get started. Don’t start with big clusters, don’t start with big complex stuff. Push an app, see how it feels. Attach it to a simple NoSQL database, see how that feels. We do a lot of nice integration stuff. So serverless is one of the best ways to start with Google Cloud. Don’t start with the most complex nine-tier architecture with background jobs and — oh gosh, don’t make it so difficult.
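As a rough sketch of the “push an app” flow Richard describes, deploying local source straight to Cloud Run looks roughly like this. The service name and region here are placeholder examples, and exact flags may vary by SDK version:

```shell
# Deploy local source to Cloud Run; Google builds the container image for
# you, so no Dockerfile or Kubernetes knowledge is required.
# "my-app" and "us-central1" are placeholder choices, not recommendations.
gcloud run deploy my-app \
  --source=. \
  --region=us-central1 \
  --allow-unauthenticated
```

Once deployed, the service scales to zero when idle and back up with traffic, which is part of why it makes a low-risk first workload.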

[00:45:49.090] – Ethan
Start with an app experiment to grease the skids and make it easy to get that one app off the ground and going on GCP. Tutorials, documentation — where do I need to read?

[00:46:00.250] – Richard
Yeah, so you’ll see pretty good documentation from us. One thing I love in our docs is when you look at a command, all the tokens — where it might be your cluster name or whatever — are editable in the docs. So I can literally click a button and go, hey, what’s the name of my service? It’s cool. And then copy that command, now customized for my thing. We even embed our Cloud Shell in our docs, so you could be looking at our docs going, I’d love to run that command — okay, click the button and it will literally open up the shell in the doc, and you’ll run it against your Google Cloud environment. So it’s super easy, in-context stuff. So go to the docs, run through the really simple tutorials, and get a good feel for it. Again, with the free tiers, there’s nothing going onto your bill, which is great. So get into the docs, download our SDK. We’ve got tons of local emulators — you want to emulate Spanner on your desktop? Awesome, we’ve got that. Firestore, Kubernetes with Minikube, Cloud

[00:46:54.720] – Richard
Run, other things — so lots of good emulators. Download the SDK, read the docs, experiment with some of this stuff. That’s the best thing. I’m not going to plug a book — just use it. Start using some stuff, right? Just run a basic command, get a feel for it. Our console is really nice to use. I was just doing stuff in it — I’m about to tweet a few things about it when we’re done with the podcast. I learned something new today. So we’re a good hands-on cloud. Get in there. Don’t overstudy. Just start trying some stuff out.
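For reference, a few of the local emulators and tools mentioned above can be started like this. This assumes the Google Cloud SDK and Minikube are installed locally, and the flags are illustrative rather than required:

```shell
# Local Firestore emulator, bound to a fixed port
gcloud emulators firestore start --host-port=localhost:8080

# Local Cloud Spanner emulator
gcloud emulators spanner start

# Local single-node Kubernetes cluster for GKE-style development
minikube start
```

Each emulator runs entirely on your machine, so nothing touches your bill while you experiment.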

[00:47:23.480] – Ned
That’s definitely the way that I learn best: trying hands-on, failing at something a bunch of times, and then finally figuring out the right command or the right series of incantations you need to spin up the application. Well, this has been a great conversation, Richard. If folks want to follow you and find out what you’re up to, are you active on Twitter? LinkedIn? What’s your platform of choice?

[00:47:47.910] – Richard
Yeah, if you want a full dose of this, you can hang out on Twitter at @rseroter and follow me there. I also blog at least once a month, just things I’m learning. And yeah, I would love to connect with folks — you can find me on LinkedIn as well. This is the best time in my twenty-something years to build and run software. I’m having a blast with this stuff. That doesn’t mean it’s not more complicated and there’s more stuff to learn, but let’s learn together. That’s what this should be about.

[00:48:12.970] – Ned
Awesome. We’ll include links in the show notes for all that information. Richard Seroter, thank you so much for being a guest today on Day Two Cloud.

[00:48:20.850] – Richard
Yes, it was a blast. Thank you.

[00:48:22.970] – Ned
And hey, listeners out there, virtual high fives to you for tuning in. If you have suggestions for future shows, we would love to hear them. You can hit either of us up on Twitter at @DayTwoCloudShow, or you can fill out the form on my fancy website. And hey, remember, we’ve got a Tech Bytes segment with HashiCorp coming up where we talk about Consul, so don’t hit skip on your podcast app just yet — there’s more good info to come. Welcome to this sponsored Tech Bytes with our fine friends over at HashiCorp. Today’s topic is a level set for their Consul product. Maybe you’ve heard the name in passing, perhaps you took it for a test drive a couple of years ago, or you could even be using it today as a storage backend for Vault. The point is, Consul has changed and evolved significantly from its humble beginnings. And with us today is Van Phan to bring us up to date on what Consul is doing today. Van, welcome to the show. Let’s start with the million-dollar question: for folks who aren’t familiar, what exactly is Consul?

[00:49:24.630] – Van
Hello, Ned. Hello, Ethan. Thank you for having me on this podcast — happy to be here, first of all, and looking forward to our conversation. So yes, to answer your great first question — the million-dollar question that comes up quite a bit when I talk to customers is, what is Consul? To really set the context, I wanted to provide some background. You and I and many people know that customers are moving and migrating to the cloud. We see a lot of modernization happening with applications, and adopting microservices and a service mesh is part of that journey. We also see the adoption of multi-cloud, including the private cloud on-prem, as part of this long-term strategy. And whether that’s by design or by accident through acquisitions, the end result is that multi-cloud is the future, right? Adopting the cloud is great for innovation and for lots of other reasons. But there’s a shift in the operating model: from managing things that are static on-prem in the data center, where you have physical hardware, monolithic applications, static IP addresses, and known perimeters, to the public cloud, where your resources — your compute, your services, your IPs — are all dynamic and ephemeral.

[00:50:34.010] – Van
This shift can be very challenging, and managing and straddling between the two models is very challenging. So at the end of the day, in a nutshell — and it’s kind of a long-winded way to answer your question — Consul is here to provide some consistency across that. Think of Consul as a service networking platform that provides a consistent set of workflows to help secure and connect services, using a consistent single control point across these multiple public and private clouds. When we’re talking about these workflows, they include discovering services; applying service identities, as part of that, to replace ephemeral IP addresses as the control unit used for enforcement; securing services, ensuring customers can get to zero trust, and enabling traffic shaping with our service mesh; automating services and security; and providing access to services through an API gateway. So the culmination of everything I just said — again, maybe long-winded, I hope it’s clear — is that the combination of these capabilities really makes Consul a unique control point for managing network services across clouds and runtimes.

[00:51:45.190] – Ethan
So then it feels like Consul is Kubernetes, but with all of the things you’d have to bolt on to Kubernetes to create what the Consul solution is — Consul has already got it all. Is that fair to say?

[00:51:57.550] – Van
I mean, Consul leverages Kubernetes — you’re right, it has a lot of this built-in functionality — but Consul expands further beyond that. You have Kubernetes there to bring up apps and services, but there are lots of aspects that are still missing in terms of security and connecting services between them in a consistent way. And we go beyond Kubernetes to VMs and into ECS and other runtimes. So it’s not just focused on Kubernetes, but Kubernetes is a big part of it, yes.

[00:52:29.850] – Ethan
And I have been around the tech industry for a while — I’m old, is what I’m saying. The best products are the ones that solve legitimate challenges that are felt by practitioners, right? If I’ve got a thing, and you have the thing that solves the pain I have dealing with it, that’s the thing I want. So what challenges does Consul address? Why do I buy it? What pain am I fixing?

[00:52:54.300] – Van
Great question. We talk to lots of customers, and we have these recurring questions, recurring themes that come up, and it breaks down into these four things that customers ask quite often. The first one: customers want to know, what are all the services in my organization? They have these services spread across different teams, different business units, different networks, different clouds, on-prem data centers, and different runtimes — VMs, Kubernetes, ECS, EKS — all sorts of ways to run their services. And these services are also potentially moving from on-prem to the cloud, so it’s a dynamic, moving target. The challenges are even more exacerbated in the public cloud, where services and IP addresses are ephemeral — they come and go all the time. So keeping track of everything is very challenging; you can’t do it the way you used to when you had a static data center. So customers want a way to track and consistently know about all their services: where they are, what their respective IPs are, are they healthy, are they online?

[00:54:05.910] – Van
And whether any actions are needed on them. So Consul provides this service discovery and service registry capability to keep track of the single source of truth for everything, right?
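A minimal sketch of that registry in action, assuming a local Consul agent is already running. The “web” service definition, port, and health-check endpoint below are made-up examples, not anything from the episode:

```shell
# Register a hypothetical "web" service, including an HTTP health check
# so Consul can report whether instances are healthy.
cat > web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
EOF
consul services register web.json

# Any consumer can then discover healthy instances over Consul's DNS
# interface (port 8600 by default) instead of hard-coding an IP address.
dig @127.0.0.1 -p 8600 web.service.consul SRV
```

The DNS lookup only returns instances whose health checks are passing, which is what makes the registry a live source of truth rather than a static inventory.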

[00:54:17.260] – Ethan
Well, it’s not like this is a new problem, right, Van? Even when it was a static data center, you didn’t know what all the applications running in that data center were. It just becomes even more challenging now that you can spin workloads up and down anywhere; it becomes a little harder to track things because they can move around like whack-a-mole.

[00:54:37.130] – Van
Yeah, you’re right. Even in the static world, there were teams that would reach out to me — hey, what apps and services are you running? — and I’m like, I don’t have time for this. I don’t have time to tell you all my IPs and all the things I’m running. And it’s that much worse now that it’s in the cloud. But to your original question: that’s the first challenge customers have. There are three other challenges that they bring up to us all the time. They also want to ensure that their services are secure — obviously, security is top of mind. They want to know, how do I ensure that the services on my network are secure when they communicate over the network? With microservices, services are just all over the place, so the network is a lot busier, a lot more chatter. They want to ensure that when services communicate, traffic is encrypted and consistently enforced across the whole organization. So Consul provides a service mesh to do that and to enable zero trust for customers.
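As an illustration of the zero-trust model Van describes, Consul expresses service-to-service permissions as “intentions.” With default intentions set to deny, a single allowed path might be sketched like this — the service names are examples, and newer Consul versions express the same rule as a service-intentions config entry rather than this CLI command:

```shell
# Allow only the "web" service to call the "db" service; with a
# deny-by-default posture, every other service-to-service path is blocked.
consul intention create -allow web db
```

The mesh enforces this at the Envoy sidecars using service identity, not IP addresses, which is what lets the policy survive instances coming and going.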

[00:55:44.820] – Ethan
Is that like you’re leveraging Kubernetes for some of the Consul stuff? Are you leveraging another open source project for the service mesh?

[00:55:52.030] – Van
So for the service mesh, we are using Envoy. That's another open source project that is fairly popular and fairly well known. We have our own control plane, but we are leveraging Envoy to perform a lot of these zero trust capabilities, right? Ensuring that certificates are exchanged, and authenticating and authorizing everything, is enforced through Envoy.

[00:56:16.890] – Ethan
Got it. So I’m going to assume that with all of these nuts and bolts, I can automate this thing. Yeah. Because I’m going to guess that’s another thing the customers want. It’s what I want.

[00:56:25.520] – Van
Yeah, definitely. Automation is something that everyone wants, and there are lots of benefits. Obviously, it makes workflows a lot more efficient, it removes potential manual errors, and it brings services online much more quickly, making them available much more quickly to be consumed by other teams and things like that. So we have this other capability, we call it network infrastructure automation, and it really helps drive automation with network devices in a customer's environment.

[00:56:56.710] – Ned
When you say network devices, you’re not talking about just the services running in Kubernetes. Are you talking about possibly physical devices or virtual appliances?

[00:57:05.850] – Van
That's precisely it, right? You have services that come online, and just because they are available doesn't mean other services or other teams can reach them. There are other network devices that need to be adjusted or updated to accommodate these new services. So that's exactly what I mean: these IP addresses have to be applied to these network devices to be reachable by other services.

[00:57:31.350] – Ned
Okay. That makes a lot of sense as well. And I think there’s a final, fourth challenge to get into around being able to control traffic consistently.

[00:57:42.090] – Van
Yeah. So we talked about the service mesh, and you have traffic shaping capabilities for all services within the service mesh, right? But there's also a desire to have external clients connect to those services within the service mesh as well. There are lots of ways customers can do that; we provide an API gateway. The nice thing is that it's consistent, since it's part of the whole Consul solution. So the way you manage and control traffic within the service network is going to be consistent with how you manage and control traffic for external clients that want to reach those services as well.

[00:58:16.980] – Ethan
So, Van, I have one qualifying question here to help me understand what Consul is not. I was conflating it with a container launch platform, if you will, like Kubernetes. That's not what Consul is, is it?

[00:58:29.820] – Van
No. It works with Kubernetes; it works with different runtimes and different orchestration tools that happen to be running services. But no, it's not that. It provides a lot of services around Kubernetes and around your services to enable them to connect in a consistent way and in a secure way, and it provides other things like service discovery. So it's this larger platform that provides a lot more capabilities than just what Kubernetes provides. Right?

[00:59:02.500] – Ned
That's much larger than my original understanding of Consul. When I tried it out, I don't know, three, four years ago, I really thought of it, first, as a key-value store, because I was using it with Vault. And then I kind of got the feeling that, oh, it can do some DNS stuff and maybe some service discovery. But you've described four big challenges that it's helping to solve. Now, my understanding is that there's a brand new version of Consul that was released very recently, 1.12. Can you tell us what's special in that release, and how it has enhanced what Consul can do?

[00:59:35.410] – Van
Yeah. So before we get into 1.12, to kind of go back to what you said earlier about how Consul has evolved: absolutely, it's definitely evolved from the early days of being a KV store and a service registry that we already talked about. It's gone beyond that to provide zero trust. And part of the zero trust capability we provide is the service identity that I mentioned earlier. The service identity is really important because it becomes a control point, right? To be able to authorize services and determine whether service A can talk to service B, rather than using IP addresses to determine that. You can use that service identity to authenticate services and then further encrypt with mTLS. So it becomes a really important point for getting to zero trust. Now, to your point about 1.12: we are enhancing this position even further to help customers get closer to zero trust by integrating with Vault. Vault provides zero trust for secrets management, so we want to naturally marry Consul with Vault and leverage Vault's capabilities. We're able to use Vault's PKI engine to generate TLS certificates for Consul's control plane and data plane, which is a pretty big deal.

[01:01:00.360] – Van
In addition to that, we can have auto-rotation of the certificates on the control plane and the data plane. So in the end, it really reduces the burden on the administrator of having to manually rotate these certificates on top of everything else they have to do, right? When auto-rotation happens online and automatically, it enables more frequent rotations, and that leads you to better zero trust practices. And the last thing with 1.12 is that it's not just TLS certificates that are stored in Vault: all of our other secrets that are pertinent to deploying, running, and operating Consul, like ACL tokens and other encryption keys and things like that, are all stored in Vault as well. So it's much more secure than just leveraging Kubernetes secrets as your secret store.
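To make the Vault integration concrete, here is a sketch of the Consul agent configuration (in its JSON form) that delegates the service mesh's certificate authority to Vault's PKI secrets engine. The `ca_provider`/`ca_config` field names follow Consul's documentation for the Vault CA provider, but the Vault address, token, and PKI mount paths are hypothetical placeholders, not values from the episode:

```python
import json

# Consul agent configuration (JSON form) pointing the Connect CA at Vault,
# so Vault's PKI engine issues and rotates the mesh's TLS certificates.
connect_ca = {
    "connect": {
        "enabled": True,
        "ca_provider": "vault",
        "ca_config": {
            "address": "https://vault.example.com:8200",   # placeholder
            "token": "<vault-token>",                      # placeholder
            "root_pki_path": "connect-root",               # placeholder mount
            "intermediate_pki_path": "connect-intermediate",
        },
    }
}
print(json.dumps(connect_ca, indent=2))
```

With a setup along these lines, certificate issuance and rotation are handled by Vault rather than by an operator manually distributing certificates.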

[01:01:56.430] – Ned
Okay, that's a lot of stuff you put into the release. It seems like Vault integration is one of the main themes. Is there anything else that was really important or notable in the 1.12 release that you want to bring up?

[01:02:10.880] – Van
So there are other notable features that we probably won't have time to really dig deep into. But going back to the automation portion that we discussed earlier, that really is a big differentiator for Consul, right? Consul is a single source of truth, through the fact that we do service discovery and we know about all the services across all the different clouds that we talked about earlier. It can now trigger events and integrate with Terraform, so that if something happens on the network, where you have more services, scaled services, or retired services, Consul-Terraform-Sync can react and automatically configure your network devices to reflect those changes to the services that Consul is tracking.
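The mechanism that lets a watcher like Consul-Terraform-Sync react to catalog changes is Consul's "blocking query": a request that passes the last seen `X-Consul-Index` and hangs until the service's entries change (or a timeout expires), at which point automation such as a Terraform run can be triggered. A small illustrative sketch of building such a query URL (endpoint and query parameters per Consul's HTTP API documentation; the service name and agent address are hypothetical):

```python
from urllib.parse import urlencode

def blocking_query_url(service, last_index=0, wait="5m",
                       base="http://127.0.0.1:8500"):
    # Passing the last observed index as `index` turns this health query
    # into a blocking query: Consul holds the request open until the
    # service's entries change or `wait` expires. A watcher loops on this,
    # feeding each response's X-Consul-Index back in as `index`.
    params = urlencode({"index": last_index, "wait": wait, "passing": ""})
    return f"{base}/v1/health/service/{service}?{params}"

url = blocking_query_url("web", last_index=42)
```

This is the long-polling pattern, rather than repeated tight polling, which is why reacting to scaled or retired services can happen nearly in real time.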

[01:02:59.070] – Ned
Okay, so you've increased the integration with Vault and with Terraform. Sounds like we're making a nice stew here. Now, my understanding of Consul is that it's free and it's open source, but there is an enterprise version for folks who have to manage things at scale. If people who are listening want to take Consul out for a test drive, or just understand more about the feature set, where would you suggest they go on the interwebs to check that out?

[01:03:29.200] – Van
Yeah, I mean, there's lots of content from us, and even YouTube videos, but you can start by going to our website. We have the consul.io page, where there's lots of use cases, case studies, and technical documentation. If you want to test drive it, we have our Learn tutorials. Learn is where you can go and play with all the different capabilities and test out the functions and features. And lastly, I'll mention that we have HCP Consul as well, which is a managed Consul service, and customers can try that for free.

[01:04:05.990] – Ned
Excellent. How can the audience follow you on the internet if they want to hear more from you? Do you have a handle or a blog that they can go to?

[01:04:14.620] – Van
Unfortunately, I don't. It's something I've been wanting to set up and haven't gotten around to. I guess my LinkedIn is an easy way for them to reach me, which I'm sure you can provide on your website.

[01:04:29.490] – Ned
Excellent. We will include that in the show notes, along with the Learn guides and everything else about Consul. Van, thank you so much for being a guest on today's Tech Byte, and thank you to HashiCorp for sponsoring this Tech Byte. This is how Ethan and I feed our families, after all. And hey, thanks to you, dear listener, for tuning in. You can find this and many more fine free technical podcasts, along with our community. Until next time, just remember: cloud is what happens while IT is making other plans.
