
Day Two Cloud 144: The State Of IPv6 In Public Cloud

Episode 144


Today’s Day Two Cloud explores the vastness of IPv6 and public cloud. IPv6 provides so much address space that you can do things such as use an address once for one connection and never use it again, and it isn’t wasteful. The abundance of IPv6 may influence how you approach cloud applications and networks. Let your creative juices flow!

Our guest is Scott Hogg, CTO at HexaBuild and co-host of the IPv6 Buzz podcast.

We discuss:

  • Current IPv6 deployment and adoption in the major public clouds
  • The technical advantages of offering an IPv6 public endpoint
  • Do I really need IPv6 internally?
  • Are there differences in IPv6 offerings for PaaS and IaaS?
  • Whether you can run a pure IPv6 virtual network in the cloud, or whether it’s still going to be dual-stack
  • More

Takeaways:

  1. There are more IPv6 features in AWS, Azure, and OCI than you probably realized.
  2. You should be testing IPv6 in your Dev/Test environments today.

Show Links:

Rapidly Deploying IPv6 on AWS – A Cloud Guru

Migrate existing VPCs from IPv4 to IPv6 – AWS

What is IPv6 for Azure Virtual Network? – Azure

IPv6 on Oracle Cloud Infrastructure – Oracle

Configuring IPv6 for instances and instance templates – Google Cloud

Why AWS Embraces IPv6 – IPv6 Buzz Podcast

IPv6 Center of Excellence (COE) – Infoblox Blogs

Scott’s articles on NetworkWorld

@scotthogg – Scott on Twitter

Scott on LinkedIn

Transcript:

[00:00:05.410] – Ned
Welcome to Day Two Cloud, and today we're talking IPv6. There's so much space out there. What is going on in the world of cloud with IPv6? We have a very special guest, Scott Hogg, the CTO of HexaBuild and also co-host of IPv6 Buzz, to bring us up to speed on what's going on. Ethan, what stuck out to you?

[00:00:25.160] – Ethan
Well, I've been listening to the IPv6 Buzz podcast; it's one of my don't-miss shows. And one of the things that came up in a recent episode that we're going to touch on here is just how much address space is out there. So if you're building an app, you can do things like use an address once for one connection and never use it again, and that isn't wasteful. Honestly, if you're coming from the IPv4 world, it's a bit mind-blowing. And Scott takes us through that concept and a lot of other things about how realistically we can adopt IPv6 when we're using public cloud and dependent on a lot of public cloud services. Because we all want to be there. We want the future to be now, but it isn't 100% now.

[00:01:04.770] – Ned
Ned, when I started adopting the cloud and I got a /16, I thought I was rich. Imagine what happens when you have a /64 to play with. That's the sort of space we're talking about. So enjoy this conversation with Scott Hogg, CTO of HexaBuild. Well, Scott, welcome to Day Two Cloud. We're very excited to have you here. The topic of IPv6 comes up occasionally when we're doing various episodes, and we thought, let's bring on an expert, somebody who really knows their stuff. And, well, that appears to be you. You have a whole podcast about it. So before I start with the first question, why don't you pitch your podcast in case anybody is really interested in the IPv6 landscape?

[00:01:45.930] – Ethan
Sure.

[00:01:46.340] – Scott
Yeah. We've recorded many episodes of our IPv6 Buzz podcast on packetpushers.net for a while now with my co-hosts Ed Horley and Tom Coffeen. And we often have guests, or sometimes we do shows with just us talking about IPv6 and things that are relevant to many enterprises as they start to consider it, plan for it, deploy it, and understand some of the caveats that may not have been in textbooks they read ten years ago.

[00:02:18.230] – Ethan
Or 20 years ago. There's one theme I can name about the IPv6 Buzz podcast as it's directed at the enterprise folks: it's unlearn. Whatever you think you know from IPv4, stop trying to carry that forward. Put that bag of bricks down and do it right with v6.

[00:02:32.690] – Scott
Yeah, I think that's what we like about IPv6. It's a way to shed the legacy thinking and think creatively about how we use addresses. There's a lot of potential in greenfield, starting from scratch, starting from a clean slate. And so that's what we like. And there's a current state of thinking, a current set of best practices for IPv6 deployment, that may not be captured in any text or clearly synthesized in a YouTube video that people can watch. So we try to give them practical information, but then also cover some theory.

[00:03:13.970] – Ned
Yeah. If I'm being honest, most of what I learned about IPv6 happened when I was pursuing my MCSE 2000, because Microsoft was really on the IPv6 bus back then, and they were riding that bus. A lot of people don't realize how ready Microsoft was for IPv6 at the time, and then it just completely failed to materialize for them. That's where I kind of learned about it, and then I kind of forgot it after I passed the exams. But listening to your podcast has brought me back up to date a little bit. So what I'm going to ask you to do is take all your episodes and just summarize them in the next five minutes.

[00:03:56.670] – Scott
That's a lot of episodes. We're coming up on almost 100 episodes here. So we've talked more about IPv6 than we thought we could talk about IPv6.

[00:04:08.550] – Ned
Yeah, it's a big space. Let's just start with the basics: where are we now in terms of deployment and adoption? Where are we with IPv6? Because it always seems like the IPv4 apocalypse is about to happen, but it never quite seems to actually happen. So where are we with things?

[00:04:28.980] – Scott
I mean, it's one of those things that's been slowly and steadily increasing over the last decade. More than a decade ago, depending on where you measured, there were probably just a few percentage points of IPv6 traffic utilization. Now, North America or Europe might be close to a tipping point; we might be close to 50% v4, 50% v6 on the backbone of the Internet. Other countries, like India, might have an even higher percentage, 60% to 70% IPv6 in some specific countries, where v4 may then be in the minority. And so this has happened slow and steady without many people noticing. Many enterprises have this feeling that, oh, we haven't deployed IPv6 internally, therefore it doesn't matter, or it's a topic we don't need to consider. But there's been slowly and steadily increasing usage on the Internet. And then what's happened with enterprise networking is we made the Internet the corporate backbone and we sent people home. And guess what? Now they are using IPv6 on their mobile devices with 4G and 5G services, or they likely have IPv6 at their homes. And so this has all happened outside of the IT department's purview. The enterprise network engineer who's just focused on the internal network or rolling out an SD-WAN deployment, or even cloud practitioners, haven't really thought about, wow, there's more IPv6 on the Internet than I previously thought.

[00:06:05.980] – Ethan
Well, there's more v6 on the Internet, Scott, and a lot of it's driven by mobile. As you said, if we look at our iPhones, my Verizon-driven handset's got a v6 stack in there. It's dual stack, though, most of the time, right?

[00:06:19.790] – Scott
Sometimes. Because mobile providers also struggle with the supply of IPv4 address space, they often give your phone a private v4 address that then gets NATed a couple of times, maybe, and you go through a carrier-grade NAT or large-scale NAT system. Or some mobile providers just don't have enough private v4 address space to fulfill all of their subscribers' needs, so they'll run v6-only on your mobile device and then tunnel the v4 traffic across their core network and NAT it out to the Internet at the edge. So it could be four and six, tunneled or translated.

[00:07:06.410] – Ethan
So if I oversimplify that, Scott, I can say that even though it's super ugly, I still have v4 connectivity in some way or another, even on a v6-only device. So if I'm trying to make the case to the executives of my company, to the C-suite, that it's time we really need to go v6, how do I make that case? Because they could just argue back, yeah, but v4 is still working everywhere.

[00:07:29.030] – Scott
Yeah, v4 works. But you have to realize that the v4 traffic will go through two or more NATs between the client and the server. The client will have at least one, either within the phone or in the service provider's network. And then on the server side, maybe there'll be a front-end load balancer, maybe there'll be another app-tier load balancer, maybe there'll be another NAT at the software container level. And so the v4 traffic will get NATed and backhauled through a less optimal path, and will be fighting for connection space, or connections per second, through these translators, because they're stateful. The v6 traffic won't be NATed; it will go more directly from the client to the server. And we see statistics where mobile connectivity over IPv6 has slightly lower latency because of not having to deal with NAT, which requires updates to TCP and UDP header checksums.

[00:08:48.680] – Ethan
So I'm hearing performance and complexity. As in, once we get to a v6-only world, we will have reduced the complexity in our networks. So let's get on the train and do the right thing; it's beyond time to be adding v6 to the mix.

[00:09:06.480] – Scott
Yeah. v6 provides globally unique address space that's super plentiful, that doesn't overlap with your on-premises network, your cloud environments, or any mergers and acquisitions you may make. And it's a simpler addressing model where you only have several types of prefix lengths. It's not like IPv4, where it's like, oh, can I get away with a /28 of private v4 address space in this one virtual network in my cloud environment, or can I splurge and use an entire /27? And then you have /29s and /28s and /27s and /24s; you have all these different prefixes. With v6 it's a lot simpler, and it has a simpler operational model as a result of that. And, you know, when you roll out IPv6, a network is never going to outgrow its bounds. You're not addressing for the number of hosts and then giving it room for scalability, or adding an address and holding one in reserve in case it grows beyond its bounds. No, you're more likely allocating IPv6 addresses sequentially based on a certain size or a certain number of networks, and you never have to readdress. That sounds pretty good to me.

[00:10:22.180] – Ned
Yeah. I feel like we've lived in the world of IPv4 scarcity for so long that the idea of wasting an address is just horrific; we can't even comprehend it. No, I have to be very deterministic about how everything gets assigned an address. I don't think some folks realize how plentiful addresses truly are in an IPv6 world. Can you give just a brief example of how plentiful the addresses would be if you have a smaller block allocated?

[00:10:56.210] – Scott
Yeah, okay, let's take a simple example here. IPv6 addresses are 128 bits in length, and we normally use the first 64 bits to represent the network. The last 64 bits represent the interface identifier. You just split it in the middle: 64 bits is the network number, 64 bits is the node number, if you remember back to IPX and AppleTalk days. So you just split it in half. Okay, so every /64 has 18 quintillion possible nodes on it. That's unfathomable. And if you took 32 bits, that's roughly 4.3 billion. You could give 4.3 billion organizations 4.3 billion /64s each, with 18 quintillion nodes on each one. So everyone on the Internet today could get their own Internet of 4.3 billion /64s, each with 18 quintillion possible nodes.
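
Scott's arithmetic checks out with Python's standard ipaddress module. A quick sketch (the 2001:db8::/32 prefix is just the IPv6 documentation range, used here for illustration):

```python
import ipaddress

# A /32 allocation, with the standard split at the /64 network boundary.
org = ipaddress.IPv6Network("2001:db8::/32")

# Number of /64 networks inside a /32: 2^(64-32) = 2^32, about 4.3 billion.
num_64s = 2 ** (64 - org.prefixlen)
print(num_64s)          # 4294967296

# Interface identifiers per /64: 2^64, about 18.4 quintillion nodes.
nodes_per_64 = 2 ** 64
print(nodes_per_64)     # 18446744073709551616
```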

[00:12:10.670] – Ned
Okay, so we're not worried about address space if you're making that move. That's a pretty compelling argument for anyone who's struggling with addressing. Now, I've certainly noticed when I'm using my cell phone, if I happen to go into the network settings and dig a little bit, I can see my IPv6 address. So I know if I'm on mobile, I'm probably using v6. When I'm at home, I'm still using v4 on the wireless, but that's a story for another time; Verizon FiOS doesn't support v6, and I don't want to get into that. But anyway. So with that in mind, is there a really big benefit, beyond just the performance you mentioned, to having a public endpoint that's v6 for all these devices?

[00:12:53.750] – Scott
Yeah. I mean, I think organizations think, oh, I don't need to worry about IPv6, I'm not using it internally. But they fail to realize how many v6-capable devices are out there, and that operating systems have algorithms in them, called Happy Eyeballs techniques, that make the eyeballs happy by racing v4 against v6. If v4 and v6 complete within a similar amount of time, the client uses the v6 connection and tears down the v4 connection. And so clients are checking the response times of v4 and v6. If you've got an endpoint or a service that's only using v4, then there's only one possible path those people can reach you by. But if you've got a service that offers connectivity over both protocols, clients can choose whichever one has the best performance. So by using only IPv4 for public-facing services, you're limiting yourself to one IP version, which is increasingly becoming the less performant one. So you would want to offer services or endpoints over both, and then clients can choose whichever one has the best performance.
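
A rough model of that Happy Eyeballs preference can be sketched in a few lines of Python. This is a deliberate simplification of the behavior Scott describes (real implementations race live sockets); the 250 ms figure is an assumed head-start delay, in the spirit of RFC 8305's connection-attempt delay:

```python
def choose_protocol(v4_ms, v6_ms, attempt_delay_ms=250.0):
    """Pick the address family a Happy Eyeballs-style client would use.

    IPv6 is preferred: IPv4 only wins if the v6 connection takes
    substantially longer than v4 plus the v6 head start.
    Simplified model, not the full RFC 8305 state machine.
    """
    if v6_ms <= v4_ms + attempt_delay_ms:
        return "ipv6"   # v6 completed close enough to v4, so prefer it
    return "ipv4"

print(choose_protocol(v4_ms=40, v6_ms=35))    # ipv6 (v6 outright faster)
print(choose_protocol(v4_ms=40, v6_ms=200))   # ipv6 (within the head start)
print(choose_protocol(v4_ms=40, v6_ms=900))   # ipv4 (v6 far too slow)
```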

[00:14:19.370] – Ned
Right. Because my mobile device, if it only has a v6 address, is going to have to hit that v4 endpoint going through some sort of NAT, some sort of tunneling, and that's probably going to give me lesser performance. Whereas if I can go directly, stay on v6 the whole time, better performance for me, happier eyeballs. My eyeballs are happier for some reason. And that's a compelling business case for the C-suite: our customers will be happier. I don't have to get into all the technical bits and bytes about why. Just understand, if we make this change, and it's not as big of a change as redoing our entire network, but make this change on our endpoints, end users will get better performance out of our apps, and that could be more revenue.

[00:15:09.600] – Scott
Yeah. In North America or Europe, the performance improvement with v6 could be five to ten milliseconds; in Central America, South America, or Africa, it could be many tens of milliseconds per round trip faster with v6. And so if you think about your web page and the loading of the objects on your page, maybe 100 different connections, now we're talking at least a second faster, or internationally it could be four or five seconds faster. Many online retailers have metrics showing that if our web page performs X amount faster, we have this much higher conversion rate of sales. Our website seems snappier, our end users seem happier to browse and shop online the quicker our web page loads, and we have a better purchase rate for being a second faster. There's a return on investment there.
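
Scott's back-of-envelope math works out like this (the per-round-trip saving and the connection count are illustrative assumptions from the conversation, not measurements):

```python
rtt_saving_ms = 10     # assumed per-round-trip saving over v6 (NA/EU figure)
connections = 100      # assumed connections to load one page

# Total page-load saving in seconds if each connection saves one round trip.
total_saving_s = connections * rtt_saving_ms / 1000
print(total_saving_s)  # 1.0 -- about a second faster per page load
```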

[00:16:16.670] – Ethan
Yeah. And a second doesn't sound like a lot, but it really is. I mean, I will abandon sites that are too slow. A second, again, doesn't seem like a lot, but there is such a tangible feel to it when you're moving through a site, if it's not loading or it's waiting for a particular object that hasn't shown up yet, and you're staring at that white screen while that last object appears. Abandonment, that's a real thing. So you're making a good case here, Scott. Well, Scott, let's drill into the cloud side of things a bit. If I have a VPC of some sort, is there a technical advantage to addressing that with v6?

[00:16:52.670] – Scott
There is, particularly at the Internet edge. At the web tier, you want to make that tier accessible over v4 and v6 so it can be reached by the broadest Internet population. That broad Internet reachability is one of the main tenets and characteristics of public cloud infrastructure. You want them to be accessible. You want everyone in the world to reach your page, your content, your shopping site.

[00:17:26.750] – Ethan
You're talking about the front door.

[00:17:29.900] – Ethan
So what sits behind it doesn't necessarily need to be v6. You're not arguing there are necessarily big advantages there, but certainly that front door has got to be v6.

[00:17:40.300] – Scott
Yeah. It's not necessarily differentiating to have your database tier using v4 or v6, but there could be a compelling reason to use it in the container infrastructure because of the number of containers you may be launching. And then the compelling reason to deploy IPv6 deeper into your multi-tier web architectures or container architectures is to use addressing that doesn't overlap with your on-premises environments. It avoids address overlap, facilitates collaboration and community clouds, and avoids readdressing. And so that's the compelling reason. There are operational benefits to using IPv6 in other parts of your infrastructure, your management tier, your out-of-band control tier, other tiers of your application. There could be benefits there, but they aren't necessarily performance, because all of those tiers are very close together; there shouldn't be a lot of latency there.

[00:18:51.970] – Ethan
Well, does there get to be an operational benefit where, if we're moving towards v6, retiring v4 is just part of the equation at some point?

[00:19:01.550] – Scott
Yeah, because at that web tier, the load balancer will have a v4 VIP and a v6 VIP, and they'll be registered in DNS. What goes on behind the scenes is invisible to the client. So it could be completely v6-only inside, past the front door there, and the client will never notice. And you can run IPv6-only on things that are very modern. And when we're talking about that, it should be very modern software, very modern operating systems, very modern connectivity, so very likely you aren't dealing with anything really legacy there, other than maybe dependencies that the cloud service provider puts on you, or that limit your ability to do v6-only, or that force you to run dual stack or to run IPv4 on VPCs and VNets and stuff like that. That gets into the cloud providers themselves.

[00:20:07.780] – Ned
And I want to shift the conversation into that area, because Day Two Cloud, that's ostensibly what we tend to look at, and I know it's changed a lot over the last few years: what's supported internally and externally on the cloud providers. Can you give us kind of an idea of where things are on the public endpoint side when it comes to the big three or four clouds that are out there?

[00:20:33.060] – Scott
Yeah. I think all of them have a load balancer, have DDoS capabilities for v6, have the ability to create security groups and route between virtual network segments, and even do dynamic routing on the hybrid connectivity, either over a VPN or directly through a data center provider's ten-gig link or something like that. Many of them, many of the big ones people tend to think about, have the ability to bring your own IPv6 addresses. You can then create an enterprise-wide IPv6 addressing plan and not use provider-assigned address space inside of your cloud virtual networks, but rather use your own and have some control over that, or have a consistent addressing model. But if you were limited and had to use only the provider-assigned IPv6 addresses, it's not for naught; at least you know you're using an address space that isn't going to overlap with anything. It does create a little bit of vendor lock-in when you're using their address space; it's not portable. But you're kind of locked into the cloud anyway, right? And now cloud infrastructure-as-a-service providers are offering the ability to go v6-only, providing NAT64/DNS64 kinds of translation services so that you can run a v6-only workload in their clouds.

[00:22:12.960] – Scott
So they're progressing on that front. And then also now the cloud providers are adding the ability to run IPv6 in their hosted container infrastructure.

[00:22:26.070] – Ned
I'd imagine that would be really beneficial, to be able to get directly down to the container without having to go through a bunch of intermediary hops. Because when I think about it, one of the things you can do in Azure and AKS is create a big enough IPv4 address space in your virtual network that every new container can get its own individual IP address for the duration of its lifetime, and then that'll get recycled through DHCP. But if you have enough churn, even DHCP with its leases can't necessarily keep up with the demand, and you're still restricted to that address space. If I could use IPv6 for it, I would never have to worry about reusing the same address for a new container. I'd just throw it away after the container dies.

[00:23:12.750] – Scott
Yeah. With DHCP, you might have to keep your lease time quite short because you could end up with a scope exhaustion situation, so you keep your lease time really short. With IPv6, you've got a /64. How would that change your thinking about the lease time on your scope? You're not going to have scope exhaustion, so why set your lease time to less than a day? How about a week? How about a month? Your lease time could increase because you're not at risk of a scope exhaustion situation.

[00:23:47.910] – Ned
Right. And now there's less translation happening between the service that's being offered and how it's being consumed by the client, because I don't have to maintain this separate internal address space inside my virtual Kubernetes network; now it's just using the network it's sitting on instead of having that additional layer.

[00:24:08.750] – Scott
Yeah. Also, if you're a PaaS or a SaaS provider and you're spinning up logical instances of your software for customers and you need to have them all isolated from each other, you're running out of private, non-overlapping ten-space to create VRFs, if you will. That's a networking nerd term; you're creating virtual routing domains. Now you could have them all be separated and each have their own unique address space. Also, when you're doing things like infrastructure as code, you're spinning up an environment, you're running it for a while, then you're deleting it, and then you go on to the next one. And you have so much IPv6 address space, you don't have to worry about going back and reclaiming old ones. Just pull a new one off the top of the stack, deploy it, burn it, move on. Next environment? Boom, pop one off the stack.
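
The "pop a /64 off the stack" idea maps directly onto Python's ipaddress module. A minimal sketch, again using the documentation prefix 2001:db8::/48 as a stand-in for an allocation from your network team:

```python
import ipaddress

# A /48 from the network team yields 2^(64-48) = 65,536 /64s to burn through.
allocation = ipaddress.IPv6Network("2001:db8::/48")
pool = allocation.subnets(new_prefix=64)   # lazy iterator: the "stack"

# Each ephemeral environment just takes the next /64; no reclamation needed.
env_a = next(pool)
env_b = next(pool)
print(env_a)   # 2001:db8::/64
print(env_b)   # 2001:db8:0:1::/64
```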

[00:25:08.310] – Ethan
That’s so counterintuitive.

[00:25:10.110] – Scott
Kill it, delete it, move on. Don't worry about going back and reclaiming; by the time you go all the way around the loop, then it's time to reclaim. Yeah, I wouldn't worry about keeping track. I mean, you need to avoid overlap of networks as you're spinning up virtual environments, but things are very ephemeral in the cloud. They don't have a long lifespan, so you don't need to hold on to that address. The plentifulness of the global IPv6 address space allows you to not care so much about whether you're going to run out; just build things, build them new. But with IPv4, we have these very sophisticated algorithms to avoid overlaps and to give out a very small amount of address space, and then if it expands, what do I do?

[00:26:07.110] – Ethan
Well, that's what I meant by counterintuitive. Coming from the v4 world, where you track everything very diligently and carefully, to think, I'm just going to take the next address off the top of the stack, use it, burn it, throw it away. Who cares? It doesn't matter, because you've got so many it would take you years, decades even, to actually burn through all of the address space that you have in there, even if you were using addresses aggressively.

[00:26:30.340] – Scott
Yeah. You have to change your thinking and think about addresses not as a scarce commodity that has to be conserved. I mean, we don't want to grossly waste them; you could definitely waste them and then run into problems again. But I think, using just normal IPv6 addressing conventions, you have plenty, and you don't have to worry about reclamation or saving.

[00:26:57.750] – Ethan
So, realistically, looking at my internal network, my internal cloud network, Scott, am I going to be able to go v6-only? Are there limitations, or just practical considerations, where the hosts sitting on that internal side probably need to be dual-stacked?

[00:27:17.490] – Scott
Yeah, I think you could start there and try it. The one thing about dual stack is that it hides IPv4 dependencies. When systems have the opportunity to choose either, you may realize there might be part of your CI/CD pipeline, or administrative functions, or management access to things, still relying on IPv4; a fetch of something from your registry or your storage might still be using IPv4. And it's only when you go to turn off IPv4 that you realize, man, I'm still using IPv4. Gosh. I think it behooves us to try to do things v6-only and understand, because that points out exactly where we have v4 dependencies. You want to know where those are as you strive for the future, and you want to know them sooner rather than later and keep track of them. You don't have to solve all of them today, but you need to know they're there. So yes, dual stack is probably what you can realistically achieve today. But running dual stack masks v4 dependencies that you may run into problems with later, so you might as well know where you stand today and then chart a course for the future.

[00:28:50.950] – Scott
But the reality is you may end up running dual stack because you do have v4 dependencies.

[00:28:56.080] – Ethan
And that's fair enough; those hosts are more or less under our control. What about PaaS? If I'm using some sort of a PaaS service, how are the cloud providers doing with that? Is that a scenario where I'll have pretty good luck with v6 or dual stack? Where am I going to be at?

[00:29:14.190] – Scott
Yeah, they may have scriptable infrastructure that may be brittle and that they don't want to touch. It works; don't touch it. It's going to mess up our clients. They could still be using a lot of v4, and even though they may be running on a cloud environment that does support IPv6, the way their scripts are written, they're still just using v4. So you may be limited: even if the underlying infrastructure has v6 capabilities, they're still thinking of doing things in a v4-only way.

[00:29:47.970] – Ethan
So PaaS actually could be a driver for dual stack. In order to connect everything I need to connect to, it could very well be that I've got to maintain v4 for some amount of time going forward, until the various providers begin converting those services over.

[00:30:02.210] – Scott
Yeah. And you know what will happen; you see their announcements. All the major cloud providers, every six months, are coming out with new IPv6 features and continuing to expand into different regions, into different zones, into different services. And services just kind of pick up IPv6 here and there, a little bit over time. But all of them are working on developing new features.

[00:30:31.300] – Ned
I was going to say that the main driver behind a lot of what gets developed in all these PaaS services is what customers are asking for. When they build out their roadmap for the next cycle, whatever it's going to be, the next sprint, they're going to focus on stuff that's in the backlog that customers are demanding, plus any bug fixes and security work. If no one's asking or demanding for IPv6 to be integrated into the solution, that's always going to take a backseat to, I've got ten customers screaming at me that they need this new button.

[00:31:05.820] – Scott
And if we build that button, we get a ton of workloads to our cloud.

[00:31:10.040] – Ned
Exactly. But I think you’re right. There is just an overall renewed emphasis. And that has something to do with the fact that if one cloud does it, then the other clouds have to follow suit just to keep up with the Joneses, as it were.

[00:31:24.400] – Scott
Yeah. And I think developers are starting to realize that they're so confined with v4. They want the flexibility, they want the ease of the operational model, they want the addressing model that lends itself well to scriptable infrastructure as code.

[00:31:46.050] – Ethan
To not having to think about it.

[00:31:47.500] – Scott
Yeah. They're sick of being yelled at. Every time they need to launch a new service, they go to the IPAM person in the enterprise and say, can I have another /16 of ten-space? And the IPAM person is like, you're killing me, dude. I don't have a /16 of ten-space I can just give you for your next cloudy project. And so they're tired of being told no. If they just get a big block of v6 from the network team, then they can just go ham in their cloud, and they don't have to come back and beg the IPAM overlords every time they want to launch a new cloud project. Every time you need address space, that's a stop, a hard stop, in your infrastructure as code, right? I've got to enter a ticket, I've got to make a request. And if you can just run a script right through that, it improves your workflow, improves your ability to launch faster.

[00:32:52.770] – Ned
Yeah. With all the things that have been automated, sometimes the IPAM is the last one, or there'll still be a gatekeeper who has to bless the request when it comes in.

[00:33:02.860] – Scott
Oh, I've got to ask the firewall admin for a favor. It gets to this point, then it stops. Now I have to enter a ticket to another team, which has a person that does something, and once I get that, then I can put that value into my script and run it.

[00:33:21.030] – Ned
Right. You mentioned firewalls a second ago, and that reminded me of something that popped into my brain when we were talking earlier about the idea that I'm running a SaaS and I have multiple tenants. If I'm using an IPv6 address space, now I can have a dedicated block per tenant, and that can assist me with all kinds of security things as well, because I can lock things down by a range of IPv6 addresses. I didn't have that luxury when it came to v4.

[00:33:52.480] – Scott
Yeah, because there were just so many overlaps. Now you have uniqueness, so you have probably greater situational awareness. You don't have to keep track of overlapping address space, where 10.1.1.1 is this customer and this device over here, and 10.1.1.1 is that customer and that device over there. No, they each have their own v6 address that's unique. So even in your databases, as a PaaS or a managed service provider running in the cloud, or if you've created a SaaS platform in the cloud, none of the customers overlap, and your database has a unique address field for every one of these customer workloads.
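
That per-tenant uniqueness is easy to sketch: give each tenant its own /64, and identifying a customer from an address becomes a simple containment test. The tenant names and the 2001:db8::/48 prefix here are hypothetical:

```python
import ipaddress

base = ipaddress.IPv6Network("2001:db8::/48")
pool = base.subnets(new_prefix=64)

# Hypothetical tenants, each with a unique, non-overlapping /64.
tenants = {name: next(pool) for name in ("acme", "globex", "initech")}

def tenant_for(addr):
    """Return which tenant an address belongs to -- no overlap ambiguity."""
    ip = ipaddress.IPv6Address(addr)
    for name, net in tenants.items():
        if ip in net:
            return name
    return None

print(tenant_for("2001:db8:0:1::42"))   # globex (the second /64 handed out)
```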

[00:34:42.450] – Ned
Yeah, that definitely has some potential. And if your service blows up overnight, you're not going to have to worry about running out of tenant space the way you would if you'd chosen a much smaller address space to work with. The sky's the limit, more or less, unless you become something like Google, in which case you have other problems, because now you're Google. To what degree does working in one of the government or sovereign clouds impact your IPv6 options? Because I know those are sometimes the slowest to get the newest features.

[00:35:14.690] – Scott
Yeah, it boggles my mind that these government-type clouds seem to think having fewer features is more secure: fewer security features, fewer v6 features. It's getting better, getting the v6 features into those government clouds. There's two things here. One is feature parity between what runs in a commercial cloud region and a government cloud region. The other is that much of the connectivity from the government entity to that government cloud infrastructure must pass through a trusted internet connection. And so their v6 connectivity to their own cloud may have added latency from being backhauled through a special trusted internet connection, a TICAP or MTIPS, these other kinds of controlled internet egress points for the government organization. So the connectivity between the government organization and their Gov cloud region is one aspect. The other is that operating inside of that Gov cloud region may not get you features as current as what may be available in a commercial region.

[00:36:35.300] – Ned
Okay, so if you are working in that space, just be aware that things might be lagging a little bit behind. I think we've done a decent job of covering the business and technical benefits of IPv6, but I want to check in with you and see if there's anything we didn't ask about. Is there anything we missed that's a big benefit or bonus that you see on the business or technical side?

[00:37:00.410] – Scott
Performance improvements with v6, or giving clients the option of choosing whichever protocol may be more performant from their perspective, is the biggest benefit. Then next would be the operational improvements: the ease of administration, ease of use, ease of management, because now we have an address space that's plentiful and doesn't overlap, and maybe we could run only a single protocol and reduce our operational costs that way. That's the second one. The third might be just an address space that facilitates scripting, or those concepts of making things ephemeral or short-lived. Those are probably the biggest benefits.

[00:37:52.010] – Ethan
So, Scott, let's say I'm sold, and I'm in an IPv4 environment entirely right now. That's what I've got. That's what's in the cloud. That's what's on my on-prem. Get me started here. I don't want to break anything. I don't want a ton of downtime. I'm not saying write the plan for me, but give me the tips on how I get that plan written, if you will, so that I can adopt v6 and have it be minimally disruptive. And let's keep cloud in mind here, Scott.

[00:38:19.450] – Scott
Yeah, I thought of a third set of potential improvements: once you get v6 rolled out, or once you get comfortable with deploying IPv6, then you start to think about addressing differently for the future. So let's say you want to create workloads that are zero trust, or you're doing things with a software-defined perimeter, where a client authenticates to that software-defined perimeter, which unlocks an SDP gateway, unlocks access for that particular user at that address to that particular application. And now you're tracking the client address, and that client address could be v4 or v6, where before you were only looking at the client's v4 address, and that was going through multiple NATs. So how do you really have assurance that the client was coming to you from a legitimate address over v4? You don't really see the client address unless you track based on cookies and things like that, so your ability to track based on the client address is less with v4. With v6, you have more assurance that the client is coming from the real address on its actual interface, because it didn't get translated.

[00:39:44.790] – Scott
Also, we can change the way we think about services. We tend to think of a service as a single v4 address, and we put it into DNS: one server or one service has one address, and it's well known in DNS. A more modern concept is to move away from that. What if your service could have a different, unique address for every client? So I've got 1,000 clients accessing a web server; the web server is listening on 1,000 different addresses, one address per client, and when a client disconnects, we throw that address away and add another one. So in the third phase, once you get your feet wet with IPv6, the future is now: I burn addresses per connection, and I do that based on zero trust, or I'm burning addresses in my service mesh, or I'm burning addresses in my container infrastructure. The addresses are ephemeral, things just come and go, and that reduces my attack surface. No one can run a DDoS attack against my service, because five seconds from now the address is going to be different.
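
A minimal sketch of this address-per-connection idea, assuming a hypothetical service /64 in documentation space: pick a random interface ID inside the prefix for each connection and discard it afterwards. With 2^64 interface IDs in a single /64, burning one per connection never meaningfully depletes the subnet.

```python
import ipaddress
import secrets

# Hypothetical /64 assigned to the service (documentation space).
service_net = ipaddress.ip_network("2001:db8:abcd:1::/64")

def ephemeral_address(net):
    """Pick a random interface ID inside the prefix for one connection.

    A /64 leaves 64 bits of interface ID, so the collision and
    exhaustion risk from one-address-per-connection is negligible.
    """
    iid = secrets.randbelow(2 ** (128 - net.prefixlen))
    return net[iid]

addr = ephemeral_address(service_net)
assert addr in service_net  # always inside the service prefix
print(addr)
```

Each call yields a fresh address; the previous one can simply be retired, which is the "no persistence, no persistent threat" property Scott describes.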

[00:41:04.090] – Ethan
Well, you're talking about thousands of addresses getting burned in the course of a 24-hour period, Scott, or millions, depending on whether it's a busy service or not. But there's so much address space. You're saying such an approach is realistic?

[00:41:16.850] – Scott
Yes. So once you get your feet wet with dual stack and v6-only, you start to think about what else you can do with IPv6. How can I use IPv6 addressing creatively to give me maybe a security benefit or reduce my attack surface? Like I say, there can be no advanced persistent threat without persistence. And if things don't exist very long, then how can they be attacked? So that's the future: what you might unlock is potential with IPv6 down the road that you couldn't realize today.

[00:41:59.840] – Ethan
So let me guide you back to this adoption question then. I've got v4. I want to get started with v6. I can't break anything, Scott. How can I gently turn things on and start moving with IPv6?

[00:42:11.770] – Scott
Yeah, I think you could look at your dev/test environments and say, let me look at the scripts that build up those environments, that build up those networks and VPCs and VNets and subnets and route tables. Let me look at that code and say, how difficult would it be just to add a few commands and then turn up the environment with dual stack, and then start to make building dual stack just part of our normal dev/test build-up, tear-down pipeline workflow? That would be the place to test it and get your scripts just right. And then once you feel confident with your scripts, you could make that a standard set of configurations that you roll out all your environments with, and then you never have to think about it again. It's just built into the script, it just runs all the time, and I have confidence that it's going to build the right environment, and I'm going to have dual stack in all these environments. Then you might think, oh, now I'm going to have a second dev environment, and maybe then I try to turn off v4, or see where I can run v6 only, and then compare, and then look for the potential to divest myself of v4 completely.
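
The "add a few commands to the build script" step might look like this in Python. The prefixes are illustrative documentation space, and the /56-per-VPC sizing is just an example of what a cloud provider might delegate: for each v4 subnet the script already creates, pair it with a /64 carved from the VPC's v6 block, so one loop emits both CIDRs.

```python
import ipaddress

# Illustrative allocations for one VPC/VNet: an RFC 1918 v4 block and a
# v6 /56 of the kind a cloud provider might delegate (documentation space).
vpc_v4 = ipaddress.ip_network("10.20.0.0/16")
vpc_v6 = ipaddress.ip_network("2001:db8:20::/56")

# Carve matching subnet lists: /24s from the v4 block, /64s from the v6 block.
v4_subnets = list(vpc_v4.subnets(new_prefix=24))
v6_subnets = list(vpc_v6.subnets(new_prefix=64))

# One loop now yields both CIDRs per subnet, ready to feed into IaC.
dual_stack = [
    {"name": f"subnet-{i}", "ipv4": str(v4), "ipv6": str(v6)}
    for i, (v4, v6) in enumerate(zip(v4_subnets[:3], v6_subnets[:3]))
]
for s in dual_stack:
    print(s)
```

The v4 plan stays exactly as it was; dual stack arrives as one extra field per subnet, which is why the dev/test pipeline is a low-risk place to try it.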

[00:43:41.730] – Ethan
Well, there was one thing implied in what you were saying there, Scott, and that is that the underlying network infrastructure has been set up with routing tables and so on to carry that v6 address space. Am I right there? There's a preliminary conversation, something that's happened with the network engineering side, to facilitate this. We're assuming that's good to go, and then we can start lighting it up on the hosts, right?

[00:44:04.020] – Scott
Yes. Also, just because a workload has a v4 address and a v6 address doesn't mean it has to use the v4 address. If an endpoint only has a AAAA record in DNS, then service-to-service connectivity to it can only happen over v6. So even though you may have a service provider that forces all workloads to have both protocols, you can start to remove v4 addresses from DNS as a way to force connectivity between software components onto v6, even if the underlying infrastructure is dual stack.
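
What removing the A record does on the client side can be sketched with Python's socket module: if name resolution only yields IPv6 results, every socket the application opens is a v6 socket, even on a dual-stacked host. Here a documentation address (2001:db8::1) stands in for a real AAAA-only service, and AI_NUMERICHOST keeps the lookup local rather than hitting DNS.

```python
import socket

# When resolution returns only IPv6 results (as it would for a name with
# only a AAAA record), getaddrinfo hands back AF_INET6 entries, and the
# app's connect() goes over v6. A documentation address stands in for a
# real service; AI_NUMERICHOST skips DNS for this offline sketch.
infos = socket.getaddrinfo(
    "2001:db8::1", 443,
    family=socket.AF_INET6,
    type=socket.SOCK_STREAM,
    flags=socket.AI_NUMERICHOST,
)
family, socktype, proto, _, sockaddr = infos[0]
print(family == socket.AF_INET6)  # True: the connection would be v6
print(sockaddr[0])                # 2001:db8::1
```

A dual-stacked host is still free to make other, outbound connections over v4; only the names stripped down to AAAA records get steered onto v6.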

[00:44:47.860] – Ethan
Right. We're assuming that the inbound connections will have been resolved by DNS: oh, there's a AAAA record, I'm going to make that inbound connection on v6. It doesn't mean the host isn't still dual stacked; he doesn't care if he's got an A or a AAAA, he can make his outbounds on v4. But yeah, we can start reducing those DNS records one at a time and see what breaks.

[00:45:13.350] – Scott
Yeah, but that's the place. You want to test it in a safe environment. And maybe even before dev, you just want a playground, a sandbox. Let me just learn: what are the API calls, what's the Python I'm going to run, what's the Ansible, what's the Terraform? What does the script look like to build this stuff up dual stack? That's maybe the starting point.

[00:45:40.270] – Ethan
In some of the shops I've supported, I don't know how safe the dev environments would be. If anything goes down: I'm losing time, come on, man, bring it back.

[00:45:49.770] – Scott
Yeah, exactly. Those environments end up being a bit prod-y.

[00:45:57.150] – Ned
Yeah. Wait, I thought this was Dev. Why are customers hitting this environment?

[00:46:04.110] – Scott
That was the green in my blue-green.

[00:46:07.350] – Ned
I think the nice thing is, if you're all in on the cloud right now, or using it heavily, you probably have a lot of this stuff scripted out or sitting in infrastructure as code. So spinning up a sandbox environment is not that big of a lift, and all you have to do is start updating, like you said, your infrastructure code to support IPv6 and see what happens in that sandbox. The cloud can be your playground, and you can figure out what is possible until it gets green-lit for your development and then production.

[00:46:39.390] – Scott
Yeah, get comfortable with spinning things up and then killing it, deleting it, starting over, spinning it up again, building confidence that when it spins up and gets built, that infrastructure is getting all built up with v4 and v6. And then later down the road, start to taper away v4 and see where your v4 dependencies lie.

[00:47:05.370] – Ned
Well, this has been a fascinating conversation. I really appreciate you taking the time and helping update my skills since Server 2000 because it seems like things have changed a lot. Can you summarize just a few key takeaways for the audience, Scott?

[00:47:22.230] – Scott
Yeah, I'd say there's more IPv6 on the Internet than maybe you thought five years ago, and it's continued to grow. So we need to recognize that even if we have a plentiful supply of v4 address space, the rest of the world doesn't. And our customers, our partners, our suppliers, our vendors, they're struggling with address space. So if we're the bigger person, we take the high road: we implement IPv6 and make our services accessible over IPv6. We give our customers, partners, suppliers, and vendors the choice of which protocol they want to use to connect to us, which makes their end-user experience faster. And also, the cloud service providers have been slowly working for the last, I guess, eight years to add IPv6 features. So you might not have even realized how much IPv6 capability your current cloud provider offers you, because it's just been slow and steady, adding more features. So educate yourself on the latest v6 features, and I bet you'd be surprised. You'd be like, wow, there's a lot here that I can do with IPv6. And then once you understand that IPv6 is used on the Internet by your customers, understand the performance improvements, make that business case to your executives, make it a legitimate IT project, and then start to build out that sandbox, that dev/test environment, with IPv6, and then see if you can turn off v4 anywhere.

[00:48:55.480] – Scott
Understand where your v4 dependencies are. There are still going to be dependencies in the near term, so you may still have to run dual stack in different parts of your infrastructure. Over time, you might get to a point where you could run v6 only and then enjoy some OpEx improvements or operational improvements by only having to run a single protocol. And then once you gain that confidence, the future unlocks the potential of using addresses in some really creative ways. Maybe that's the next step.

[00:49:28.650] – Ned
Awesome. You have given us a whole bunch of links that people can check to see what's supported on their public clouds of choice, so we'll definitely include that information in the show notes. If folks want to know more about you and your world, where should they look? Where can they follow you?

[00:49:45.620] – Scott
Scott Hogg. And I write for the Infoblox IPv6 Center of Excellence, and I've written a lot for NetworkWorld.com, and then listen to us on the IPv6 Buzz podcast.

[00:50:00.330] – Ned
Awesome. Scott Hogg, thank you so much for appearing as a guest today on Day Two Cloud. And hey, virtual high fives to you out there for tuning in. If you have suggestions for future shows, we would love to hear them. You can hit either of us up on Twitter at Day Two Cloud Show, or if you're not a Twitter person, I get it, you can fill out the form on my fancy website. It is nedinthecloud.com. Hey, so Packet Pushers has this newsletter thing. It gets published weekly. It's called Human Infrastructure Magazine, and it's loaded with the best stuff that we found on the internet, plus our own feature articles, stuff that we write. It's free and it does not suck. So if you want to get the next issue, check out packetpushers.net/newsletter. Until next time, just remember: cloud is what happens while IT is making other plans.
