
Day Two Cloud 096: Public Cloud Isn’t Wrong. You Are.

Today’s Day Two Cloud is a wide-ranging discussion about the value of public cloud, a response to the growing backlash toward public cloud, and techniques to better meld automation into application and infrastructure delivery.

Given cloud costs and complexity, would we be better off returning to on-prem? Guest Chris Wahl is here to say no. He argues for cloud and automation by using them like they were meant to, rather than trying to map old processes and architectures onto the new. We also explore the notion of pipelines for automation application and infrastructure delivery to take full advantage of public cloud services, and why applying cloud principles to on-prem applications yields diminishing returns.

Chris Wahl is Senior Principal at Slalom. You may have also heard Chris in his role as co-host of the Datanauts podcast. Datanauts has been retired, but you can hear every episode here.

We discuss:

  • Cloud backlash
  • How to think about automation
  • The persistence of legacy applications
  • Automation pipelines and tooling
  • Repositories and version control
  • Viewing cloud as a way to make apps more profitable rather than cheaper to operate
  • More

Sponsor: CBT Nuggets

CBT Nuggets is IT training for IT professionals and anyone looking to build IT skills. If you want to make fully operational your networking, security, cloud, automation, or DevOps battle station, visit cbtnuggets.com/cloud.

Show Links:

@ChrisWahl – Chris Wahl on Twitter

Chris Wahl on LinkedIn

Wahl Network – Chris’s blog

Chris Wahl on YouTube



[00:00:01.110] – Ethan
[AD]Sponsor CBT Nuggets is IT training for IT professionals and anyone looking to build IT skills. If you want to make fully operational your networking, cloud, security, automation, or DevOps battle station, visit cbtnuggets.com/cloud. That’s cbtnuggets.com/cloud.[/AD] [00:00:25.490] – Ethan
Welcome to the Day Two Cloud show. We are bringing back a friend of the Packet Pushers podcast network, Chris Wahl. If you listened to the Datanauts show, of course, Chris and I hosted that show together for, oh, three years or so.

[00:00:38.600] And we put out a lot of great episodes about, oh, automation and cloud and storage and security and lots and lots of topics that we covered. And Chris comes back today as a consultant, which is what he’s doing these days, living in a very cloudy world. Chris is all about modern application delivery. Ned, man, it’s like my brain has got to get rewired to think like Chris thinks these days about how to deliver an application. There is so much going on that those of us coming from that legacy background are struggling to overcome. I don’t know how you felt about it.

[00:01:15.160] – Ned
I still feel the struggle. I also came from a traditional infrastructure background. I still think about racking servers and cabling stuff up and installing an operating system. And like this is so far removed from that. You have to just pull yourself completely out of that mindset and put yourself in the new mindset where it’s all about the API and the pipeline. And that is the focus of our conversation.

[00:01:35.810] – Ethan
Enjoy this discussion with Packet Pushers friend Chris Wahl.

[00:01:40.560] – Ethan
Chris, welcome back to the Packet Pushers podcast network, man. That giant sucking sound all these months has been us missing your mellifluous voice, as you and me did Datanauts for, I don’t know, what was it, three years, something like that?

[00:01:55.260] – Chris
Felt like 30. And what’s with these big words? I don’t know what that means. What are you... are you making fun of me?

[00:02:02.390] – Ethan
It means you sound nice. I like to listen to your voice, my friend. That’s what I’m saying.

[00:02:07.170] – Chris
Oh that’s nice. Well, heart emoji.

[00:02:09.070] – Ethan
So you are no longer working at a vendor, Chris. You were working at Rubrik for a long time and now you’re not; now you’re doing the consulting thing. And as you and I have been chatting about some of the projects that you’ve been working on, one of the things that has come up is your very strong opinions on cloud native, how apps should be delivered, and so on. So I want to jump into that conversation with you.

[00:02:34.150] For example, you have this idea that if people were really doing cloud right, maybe they wouldn’t have their negative, whiny opinions about cloud and such. Let’s open up with that.

[00:02:48.580] – Chris
Man, I mean, you ask a question with such a strong vibe to it already. Yeah, I definitely see a lot of folks out there stating some things that I don’t necessarily agree with about cloud, related to cost and how to do it right and on-prem being easier and all that kind of stuff. It is a different way of thinking, and like I think we’ve all admitted over many years, it goes badly if you approach it the same way you do on-prem, if you approach it like it’s not this API-driven candy store in the sky where you can consume anything you want and pay a metered price for it.

[00:03:22.530] And yeah, there are things you have to learn, but it’s a lot of fun and you can stand up some pretty cool services and do things at scale that you just can’t do on-prem. There’s magic there. And I think there’s this sort of cynical black hole that exists predominantly in social media, where cloud is this horrible cost-sucking monster.

[00:03:41.250] And you have to distill it down to a PaaS, or just go back on-prem, or, you know, buy vendor’s Widget X and it will solve the pain. And really, it’s just: let’s approach this thing, cloud, in a different manner that is way better, that we should have been doing forever ago. And a lot of that stems from pipelines and automation and just putting in the time to do it right. Rant over.

[00:04:04.050] – Ned
Somehow I suspect that’s not quite the end of the rant, but it’s a good place to pause. I’ve seen that same black hole of negativity, and the opinion that I’ve seen expressed the most is: don’t bother with IaaS, because that’s just recreating your data center. If you’re doing cloud, you should be using SaaS whenever you can, and PaaS if that’s not an option; if you’re going to IaaS, you’re probably doing it wrong. Is that the wrong way to think about it? What are your strong opinions regarding that sort of idea?

[00:04:34.470] – Chris
I’ll be honest, I don’t focus too much on the IaaS, PaaS, SaaS layer cake stuff. I feel like that predominantly comes out of the sort of classical vendor world of perspectives on cloud.

[00:04:47.430] I’m more looking at abstractions and APIs, so the default rule that I believe works well is: as you’re working in cloud, start at the most abstracted layer and work your way down until you meet the requirements of the application. So in a lot of cases I’m going to start with, if it’s AWS, Fargate and Lambda, and if that doesn’t work, I’m going to go down to ECS or ECR or EC2 if I have to. But it’s really just going down that ladder to see what fits best and then using that layer.
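
Chris’s walk-down-the-ladder rule can be sketched roughly like this. The services, requirement flags, and limits below are simplified stand-ins for illustration, not a real decision matrix:

```python
# Hypothetical sketch of "start at the most abstracted layer and walk down."
# Each rung pairs a service with a check: can this layer host the app?
# The requirement flags and the runtime limit are illustrative assumptions.

LADDER = [
    ("Lambda",  lambda app: app["runtime_seconds"] <= 900
                            and not app["needs_custom_os"]),
    ("Fargate", lambda app: not app["needs_custom_os"]),
    ("EC2",     lambda app: True),  # least abstracted rung always fits
]

def pick_layer(app: dict) -> str:
    """Return the most abstracted service whose requirements the app meets."""
    for service, fits in LADDER:
        if fits(app):
            return service
    raise ValueError("no rung fits")

print(pick_layer({"runtime_seconds": 60,   "needs_custom_os": False}))  # Lambda
print(pick_layer({"runtime_seconds": 3600, "needs_custom_os": False}))  # Fargate
print(pick_layer({"runtime_seconds": 60,   "needs_custom_os": True}))   # EC2
```

The point of the ordering is that you only pay the operational cost of a lower rung when the application actually forces you down to it.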

[00:05:19.260] I don’t think it’s like you have to start everything at IaaS and then you graduate to PaaS. I think that’s a very classical way of looking at architecture. It’s more just, what does the application need to do, and where does it fit pretty well? And then design it for that and do the things. It’s not rocket science. It’s just really not.

[00:05:38.310] – Ethan
It’s funny you put it that way. So I’ve been in a situation where I have a reason to build, for one of my own businesses, potentially a bunch of little websites that do different things, or maybe they’re functions. And I’m starting to realize, wait a minute, I need to rethink what I grew up with, which is client-server: there’s a server, it’s got services running on it, web servers, and so on.

[00:06:01.370] And I throw that thinking out and look at what I can consume from the cloud by breaking things down to their smallest components. And I start with Lambda, for example: does that solve the problem for me? What does that look like? The problem I have is just getting my head around it, Chris.

[00:06:20.150] – Chris
Well, I’ll bring an analogy into it. I’m sure most people listening to this, especially in the technical world, love building with Lego. It’s fun. You get a really easy-to-follow sheet of instructions, the pieces make sense, all these different modules. You can build what’s on the instructions or not, but typically you’re trying to build what’s in there. And I see some people like, yeah, this is super cool, I love it.

[00:06:42.980] I get to build with my hands and put all these things together. Cloud is the same thing; it’s just there’s no instruction manual that says this is the end intent. You’re kind of defining what that is, and maybe if you think of it that way, it becomes a little bit easier. So using your Lambda example, sure, maybe you’re writing functions and putting them in the cloud and saying, this is the actual code. And then there are services to run batch, services to do cron, services to keep the logging or ingest event data or whatever.

[00:07:11.070] And you just put these things together just like you would with a really cool Death Star Lego kit or something like that. And out pops the solution for what you’re trying to build. And I think that’s the fun part. And I don’t think that part gets enough attention, really.

[00:07:26.360] – Ethan
There’s an aspect here of control, maybe, where if you build it yourself... I actually read a blog post that came off of Hacker News of someone who said, screw it, I’m going to build it all myself.

[00:07:35.750] I’m not jumping into AWS because it’s too complex; he’s used it and, you know, feels that it’s too complex, et cetera. But as I boiled down that blog post while I was reading it, it kind of came down to control: control over latency and predictability about certain things, and so on. And this is a theme that comes up a lot as different people write about these services: it’s too much, it’s too complex, there are too many things going on, too many moving pieces, too many Lego bricks sitting out there that I have to snap together to make the solution. And it’s not fun; it’s actually a headache, seems to be what some people come across. And Chris, you’re one of the few voices that I hear that are like, no, this is the right way to do it.

[00:08:17.930] – Chris
I mean, a lot of that, the pleasantness of it and the success of it, is going to depend on your attitude, your team’s attitude, your organizational structure, how you approach it. Are you a technical company, or are you a company that has technical people? There are so many different ways to slice and dice perspectives on this. I feel like the ones who genuinely enjoy putting stuff in the cloud, and, coming back to the main topic at some point, pipelining all of these things, are the ones who really get to enjoy what it’s like to operate at this level, because really 99 percent of the day-to-day is coming up with new cool things and implementing them, not running the ops.

[00:09:00.590] You know, in fact, it’s kind of fun. You get to a point, and I’ve got some folks I’m working with where the ops has kind of presented all the challenges that come day to day, to the point where they’ve automated a lot of the solutions, or just have Lambda functions fix things, or triggers or whatever. And now it’s like, oh, cool, a new problem. How do we automate this? I haven’t seen one of these in a while. You know, it becomes a lot more fun.

[00:09:21.380] – Ned
If I could extend the analogy you started with, with Lego, a little bit: the person who wants to make it themselves would be sort of like whittling, as opposed to using Lego. You start with just this block of wood and you’re like, I’m going to whittle it down to something. But that’s not reproducible, right? Lego is standardized. It’s building blocks, it’s reproducible, and you would use it on something like an assembly line, whereas whittling is one person doing the thing.

[00:09:48.110] They’re going to do it again, and it’s going to be a little bit different. That’s great if you’re making art; not so great if you’re making technology. I think the assembly line kind of lines up with the pipeline idea. And see, now I’ve built a segue.

[00:10:01.220] – Ethan
Subtle, subtle Ned.

[00:10:02.570] – Chris
You look at even modern video games today. If you’re not into gaming, look up things like Factorio and Satisfactory and Dyson Sphere Program. All of these video games are super popular, with millions of downloads and whatnot, and you’re really just taking things and stitching them together and automating them so that some greater product is the sum of the individual components. So obviously people enjoy automating. People enjoy building pipelines and workflows and things like that, because in some cases, for me, I just love standing back and watching it work.

[00:10:35.710] I like watching it like a little bird drinking out of the water; I think I could watch that for four hours. It’s doing the thing. So I think there’s a lot of fun there. But yes, pipelining, I think, is where perhaps these folks haven’t dabbled in that, or they have and it was just set up in a funky way. But I think that’s what really enables the fun. And that’s where, you know, if you’re getting into cloud and cloudy things and automation, you probably like building stuff by hand.

[00:11:00.520] You’re putting things together like, OK, this is sort of cool. Maybe you’re working as an individual and you’re like, this is all great, but how does this actually work in the real world, at scale, with different teams all pulling in different directions? And that’s where pipelining really comes in.

[00:11:14.950] – Ethan
I interviewed a guy recently who leverages a pipeline for network automation, and their situation was they’d have a line of code that would need to get put into several different devices on their network to meet some new standard. And it was pipeline driven in that they put the new line of code into the Git repository. There was a task that would fire periodically and detect that there’s new code there, and then a process would kick off where that new artifact would get broken down so that the correct bit of code would go into each of the different devices.

[00:11:48.520] And it was all automated; they were using Jenkins in this case to make all of that happen. All they did to start this process was put in the new thing, the new requirement, whatever it was, and the bunch of processes they’d built kicked off and drove it through from there. When you talk about pipeline, Chris, is that what you’re getting at, where it’s an event-driven architecture: something happens, and then the pipeline takes over from there and drives some end result?
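
That breakdown step, where one committed requirement fans out into per-device config, might look something like this in miniature. The device inventory, platform names, and config syntax here are all invented for illustration:

```python
# A toy version of the fan-out stage: one abstract requirement committed to
# the repo becomes a platform-specific config artifact per device.
# Device names, platforms, and CLI syntax are invented for illustration.

DEVICES = {
    "core-sw-1": "ios",
    "edge-rtr-1": "junos",
}

NEW_STANDARD = {"ntp_server": "10.0.0.123"}  # the "new line" from the repo

def render(platform: str, standard: dict) -> str:
    """Translate the abstract requirement into one platform's config syntax."""
    ntp = standard["ntp_server"]
    if platform == "ios":
        return f"ntp server {ntp}"
    if platform == "junos":
        return f"set system ntp server {ntp}"
    raise ValueError(f"no template for platform {platform}")

def build_artifacts(devices: dict, standard: dict) -> dict:
    """One config artifact per device, ready for the push stage to deploy."""
    return {name: render(platform, standard)
            for name, platform in devices.items()}

for device, config in build_artifacts(DEVICES, NEW_STANDARD).items():
    print(f"{device}: {config}")
```

In the real setup described above, a Jenkins job would run a stage like this after detecting the commit, then hand the artifacts to the push stage.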

[00:12:22.480] – Chris
Yeah, I mean, there are lots of ways you can do it. It really depends on: are we talking about deploying infrastructure, or are we deploying code? Are we actually building and deploying applications or infrastructure? Although nowadays that line is a little blurrier. But yeah, the idea is you have a series of tasks that you’ve done manually back in the past, and you’ve figured out this script will update this value, oh, I can call this API with this Python script, et cetera.

[00:12:51.070] And you just have sort of an engine saying, when a change happens (typically you’re changing code in a repository, or updating a value of something, or it sees a commit on a GitHub repo or whatever), run through the series of tasks: run the script, run the Python script, update the thing, return a status code. If it’s all great, let me know we’re good, and then, you know, copy the files over and deploy them, something like that.

[00:13:12.280] Or with the infrastructure world, it’s usually piles of YAML. When they get updated, the YAML’s different, and some sort of engine interprets that difference and makes it happen in terms of CRUD operations for cloud resources. But the end result is the same. There’s a series of tasks that you’ve defined, typically scripts in Bash or Python or something like that, and when some sort of change is detected, those scripts are enacted or those frameworks are invoked, responses come back saying good or bad, and as long as things are good, it kind of rolls through until the end is done: the application is deployed or the resource is built.
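
That task-chain-with-status-codes idea can be sketched in a few lines. The stage names and commands below are placeholders for real lint, test, and deploy steps, not a real CI config:

```python
# A minimal sketch of the "series of tasks" engine: run each stage's
# command, and only roll forward while status codes keep coming back good.
# Stage names and commands are placeholders for real lint/test/deploy steps.

import subprocess
import sys

STAGES = [
    ("lint",   [sys.executable, "-c", "print('lint ok')"]),
    ("test",   [sys.executable, "-c", "print('tests ok')"]),
    ("deploy", [sys.executable, "-c", "print('deployed')"]),
]

def run_pipeline(stages) -> bool:
    for name, cmd in stages:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:  # a bad status halts the whole pipeline
            print(f"stage {name} failed")
            return False
        print(f"stage {name}: {result.stdout.strip()}")
    return True

run_pipeline(STAGES)
```

A real engine like Jenkins adds triggers, logs, and parallelism on top, but the core loop is exactly this: run, check the status, stop or continue.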

[00:13:51.850] – Ethan
Responses coming back good or bad, that could include testing along the way.

[00:13:56.230] – Chris
It should, dang it. If you don’t test in there, bad mojo. Yes. Yeah. I mean, it’s the idea that you’re still running probably some unit tests locally, or doing the build process, pardon me. But after that it’s the integration testing: if I make this change, does it still talk to the things it’s supposed to talk to? And then functional testing: does the actual end result work from a user perspective? Does this thing work?

[00:14:22.990] Does it do what it’s supposed to do? And you’ve got performance testing and data validation and a whole bunch of other things you can add. I put it once on Twitter as: just make sure the change does the things I wanted it to do and nothing blows up. And testing is just a code way of expressing that, using scripts and things.

[00:14:41.470] – Ned
Now, you mentioned the line between infrastructure and application code is blurring a little bit. Like, you know, a developer might write the YAML that’s part of their Kubernetes deployment as well as write the code that goes into the container. Where would you draw the line, or does it make sense to draw a line between infrastructure and app code? And would you keep them in the same repository or separate them? What’s your strategy there?

[00:15:06.610] – Chris
I’d probably take a step back. You know, I think tactically it’s like, is it infra or is it app, what do I do with it? But more abstractly, I think it’s: what can we do to make this application holistically deployable as a module? How do I take everything about this application and reduce the dependencies on something else, such that it can be deployed, and hopefully deployed in a way that is scalable? You know, I’m managing a fleet of these things, whatever they are, and that’s kind of where we’re going in some cases, in more advanced cases.

[00:15:38.200] I think there are folks that have no pipelining, no automation, and they’re going to say, what? And that’s fine. If you’re new to this, it’s perfectly fine to start by completely separating your infra and your application pipelines, and I don’t think they should all be blurred. But the work I’ve been doing more recently is: how do we make the application itself a fully deployable unit, including dependencies for infrastructure, including any third-party linkages that must be connected, or tokens.

[00:16:03.760] And that’s the idea that I think DevOps is trying to bring, purely, you know, distilled down to the unicorn tier itself. That’s what we’re going for: you’ve got teams with domain knowledge on both sides building a package that is the deployable. That’s it.

[00:16:20.630] – Ethan
This is getting to something that’s really fresh on my mind, because, no lie, 10 minutes before we started recording, I was finishing off a blog post addressing a question that had come up on our Packet Pushers YouTube channel about stretching layer two between data centers, which is this archaic requirement for archaic applications where, oh, we’ve got to keep the IP address the same, but we’ve got to have that, you know, application availability. So we need to be able to move that IP between data centers, which is stupid.

[00:16:47.920] From a network design perspective, it’s something you should never do. But because that requirement pops up so often, and businesses know they’ve got to be able to do it, the networking industry has had to come up with all these different solutions where you can safely, for some definition of safely, stretch layer two between the two data centers. Which isn’t the point. The point is, why are we still, in 2021, having to support applications that have been deployed in this archaic, ancient way?

[00:17:20.170] Chris, you’re talking about building an application as an artifact that you can deploy anywhere, on any infrastructure, using a pipeline, and have that infrastructure be ephemeral, potentially. It could move around, it can scale elastically, and have all this flexibility. And yet there are so many businesses out there that are mired in this past of, I’m still married to a specific IP address, for crying out loud. It just makes me a little crazy to have that on the one side.

[00:17:48.340] And Chris, you’re a representative of kind of the other side, where we’ve done all of this modern automation. Why do we have this divide? Do you have a take on this?

[00:17:59.990] – Chris
Feels like you really got a lot off your chest there; cathartic. Layer two, forever the demon that will not go away. I will say I work with folks where they’re in a legacy situation for reasons, right? Legacy means financially viable, right? It’s not legacy if it’s not making money or doing something that helps make money. And that’s the problem: it’s persisted for so long because it’s successful. Success is probably the antithesis of technical progress, because you’re kind of like, oh, it works, don’t change it, or don’t change it materially.

[00:18:36.540] And that’s tough. Like, who wants to work on that? If you’re this DevOps, highly paid, sought-after full stack unicorn, are you going to say, oh, man, I really want to go play with iSeries now and figure out how to reduce some sort of layer two thing? So I think it’s partially that. And even back in the day, when I first started at Rubrik, everyone was like, backup? So boring. And yeah, the actual process of backup kind of is.

[00:19:02.520] But the technology could be interesting. So I think perhaps we need to make the technical challenges more interesting or visible or higher paid, because if you just cram people into an environment where they’re working with legacy and don’t give them time, resources, training, and other investments to get them to that new level, it’s not going to happen.

[00:19:25.050] Or you’ve just got that one person, or small group of people, that have got you held hostage. They know the system, and it’s a very archaic old one, and there’s no way to replace those folks, so they’re holding it hostage. I’ve seen that a few times.

[00:19:39.320] – Ned
It’s scary to think that a lot of those folks are going to start aging out, and they’re going to leave with that knowledge in their head. And then the people who still need to run that system will be like, does anybody know iSeries and COBOL? Because we kind of need that right now. So I feel like a lot of those applications are going to be refreshed not because the business wants to, but because the knowledge has left the building.

[00:20:02.930] – Chris
I don’t know. I see reports of COBOL and Fortran rising in popularity again. So maybe what we’ll see is a new master class in mastering these old technologies, and really understanding them to the point where we can bring them to something a little bit more modern, because I think you need someone with a foot firmly planted in both worlds. And that’s tough.

[00:20:25.450] – Ethan
Well, tough why? Because I could argue, on the one hand, it’s just new technology. That’s what we technologists do; we learn new stuff. But, you know, the flip side of that argument, and the one I actually face myself, is that there’s just so bloody much to keep up with. If you look at just networking, which is the domain I specialize in, there’s new stuff coming along constantly to keep up with.

[00:20:46.780] And most of the solutions are difficult to get your head around, with a lot of detail. So trying to also stretch to everything that’s required to get a proper handle on automation is a bit challenging. I guess I can see it both ways. Is that your point, Chris, about why it’s hard, or?

[00:21:03.190] – Chris
I would say it’s probably 20 percent technology challenge. But think about the systems that are in place to support those legacy systems. They’re typically not very agile. They’re not very exciting. So if you’re having to do a change request once every six months to put some code into production, you’re going to peace out. That’s not fun. It’s just not what you’re looking for.

[00:21:25.330] Probably, if you’ve got a foot planted in the let’s-move-things-forward world. So it just takes shake-ups at higher levels, and it takes investments that may cause disruption internally, and those will happen. But beyond that, I think folks see the advantage of the other side, and they’ll pull what they need from it and move forward.

[00:21:43.120] – Ned
What are we doing with these modern applications to ensure that they’re not going to calcify the same way these older applications did? I know we’ve got shiny DevOps and agile and all that, but my concern is that at a certain point it’ll start calcifying and just become, oh, this is the way we do it, and something new will come out and we won’t be able to make the move to that newer technology or that newer concept or idea.

[00:22:07.600] – Chris
I think, broadly speaking, it’s always hard to fight, as you say, calcification across any sort of tech, right? The moment you build it, it starts to get old. But the difference to me is that historically, experimentation and deployment were hard, long, and expensive, and now those things are all quick, easy, and cheap. So it’s just so much easier to try things, and so many people are trying all these different things all at the same time. It’s almost like we’re the million monkeys trying to write a novel, and some of us are going to randomly get it right.

[00:22:41.050] And actually, I think many people will. And so now you see, even in containers, there are so many different opinions on how to deploy things, so many different places you can deploy them. You can put it on old stuff, new stuff. So I think that helps. And just good hygiene: if you’ve got a good pipeline in place (wink, nod, finger gun), it’s way easier to stay modern, because you’re already having to track dependency mapping and where your artifacts are going and what versions you have for your framework, and it’s really easy to put pieces in and take other pieces out. So I think that’s a pretty big win.

[00:23:16.110] – Ethan
OK, develop that idea in more detail with the pipeline. You just made the case that the pipeline gives me this flexibility where I can substitute pieces and parts of my process in and out, as opposed to, what was I doing before if I didn’t have a pipeline? And moving to a pipeline means I’ve got all this flexibility? It almost sounds magical.

[00:23:38.350] – Chris
Yeah, a lot of what I’m talking about for that modularity exists, but it lends itself more to the application development world. Even in the infrastructure world, though, let’s say that you’re using Terraform to write some code to deploy stuff on Amazon, and you’re using some security linting tools or some checking tools. You can very easily remove one, add another one, make a rule so that it checks a third against this particular set of tests or environments, in this region or not. You can play around with those different tools and set them up so the pipeline doesn’t depend on a positive result from the tool to progress.

[00:24:15.760] And you can even say, you know what, this pipeline currently goes to us-east-1. What happens if I go to us-west-2? What’s Oregon looking like today? How would it deploy, or what would it look like, or give me a report. So there’s a lot you can do to divert not only how the pipeline builds itself, but also where it outputs its artifacts and builds its resources.
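
The what’s-Oregon-looking-like-today question is really just a parameter on the plan step. A hedged sketch, assuming the Terraform config exposes a region variable (the variable name and plan-file naming are invented):

```python
# Sketch of retargeting a Terraform-style plan step at another region.
# Assumes the config exposes a `region` input variable; the variable name
# and the plan-file naming convention are illustrative assumptions.

def plan_command(region: str, workspace: str = "default") -> list:
    """Build the plan invocation for a given target region."""
    return [
        "terraform", "plan",
        f"-var=region={region}",
        f"-out={workspace}-{region}.tfplan",
    ]

# The pipeline normally points at us-east-1...
print(" ".join(plan_command("us-east-1")))
# ...and asking about Oregon is one argument's difference.
print(" ".join(plan_command("us-west-2")))
```

The same trick works for the checking tools: treat each one as a stage the pipeline engine can include, skip, or mark as non-blocking.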

[00:24:36.390] – Ned
I want to take a step back here for folks that are not super familiar with pipelines; we’re throwing around terminology like crazy. So what are the primary components of a pipeline, and what’s some of the terminology that you’re using, like artifacts and CI?

[00:24:53.460] – Chris
Yeah, I mean, a pipeline is just a rules engine; it’s a loop. You basically tell the rules engine, you know, something like Jenkins: hey, I’ve got this pile of source information. The source code could just be a Bash script, or YAML, or in the case of Terraform it’s going to be HCL2.

[00:25:15.510] And anytime you see a change, run the Terraform code to do a plan and then email that to me. That could be a really basic pipeline, and it’s going to do all that work. And continuous integration, CI, that’s a type of pipeline. You can have a CI pipeline, a CD pipeline for deployment or delivery; there are all these different ways. But really, it’s just a rules engine that goes through a list of tasks to perform and then at the end says success or failure. It’s not fancy.

[00:25:47.130] – Ned
And this is mostly borne out of the application development world, not so much the infrastructure world. So what are some of the challenges in moving that same idea and paradigm over to the infrastructure world, where things aren’t quite as flexible, maybe, as they are in app development?

[00:26:07.000] – Chris
Oh, I don’t know. Let me give you an opinion here first. If you’re working with someone that can’t fit into your pipeline because they don’t have an API or whatever module: break up with them. They ain’t worth the relationship, right? Level that up. Find someone who will work with your pipeline. So prioritize your pipeline, because that’s honestly where the gold is. But as far as the challenges, let me start with this. The challenge we’re trying to solve with pipelining is really just: when you deploy something into any environment, we’ll say cloud as the example, you probably want to make sure that it’s adhering to security best practices, that you’re not doing anything goofy, that your team can see what you’re doing, and that you can repeat the process.

[00:26:53.340] You can very quickly experiment in case you want to make changes, and you really want to capture all of that in code. And that’s not anything new. Even if you were working with VMware back in the day, we had Onyx, which would capture the PowerCLI commands. Various cloud providers today have things to capture the commands you’re running and make that happen.

[00:27:10.680] So it’s really just: we want to capture all the steps that we’re taking, all the clicks and API calls and whatever that happen to deploy a resource in the cloud, and codify that and turn it into, here’s a YAML file, because YAML is easy on people’s eyeballs but horrible on their souls. And then that’s what it is.

[00:27:30.810] It’s the same thing we were kind of trying to do back in the config management days. Puppet and Chef were like, hey, when the server comes up, go ahead and install SQL, and here’s the password for it, and all that kind of jazz. It’s just that now we’re saying, hey, when you’re going to build the server that’s going to run the database or whatever, here’s where I want it, and this is the instance type.

[00:27:47.880] And all the configuration details are part of that, because it’s infrastructure as code being put into a pipeline. And then you can go from there, because if you like what you have, you can copy it. Now it’s in two regions, or it goes to all the regions, or you give it to somebody and say, hey, this is exactly how I want you to deploy your stuff, just run this. You can hand it out. So it gives you a lot of flexibility.
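As a hypothetical sketch of what Chris is describing, here is roughly what a codified instance looks like in Terraform. Every value here (region, AMI ID, instance type, tag) is invented for illustration; the point is that the deployment details live in a file you can copy to another region or hand to someone else.

```hcl
# Hypothetical example: the region, instance type, and configuration
# details captured as infrastructure as code.
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "db_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "m5.large"

  tags = {
    Role = "database"
  }
}
```

Re-deploying to a second region is then a matter of changing one value (or parameterizing it) and running the same pipeline again.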

[00:28:13.650] – Ned
Right, right. The other thing that I think about when it comes to pipelines is testing. There’s a lot of testing that you should probably be doing. You mentioned security best practices and some sort of linting, but it’s hard to see how something like unit testing in the world of applications maps to infrastructure. In applications, I’m probably unit testing my functions. So I have a function, and here’s a bunch of inputs I throw at it.

[00:28:37.470] Did it give me the right outputs? Awesome. I’m happy with that function. I’ve unit tested it. Infrastructure? I deployed a VNet and it’s there or it isn’t. How do you unit test infrastructure, I guess is my question.

[00:28:52.390] – Chris
Yeah. Testing is certainly the most Wild West part of infrastructure pipelining today. Unit testing, honestly, I think is sort of handled. You can use formatters and whatnot, and that’s kind of handled by the engine.

[00:29:09.050] Thankfully, we don’t have to deal with too much of that, and it’s going to tell you that you have an invalid config or something like that. I feel like that’s the unit test version of making sure the code compiles, that the config will work. Everything else to me is more about integration and functional testing, because integration testing is, hey, if I make this change, do all the other components still talk to my thing?
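Chris’s framing, that the infrastructure equivalent of a unit test is mostly config validation before anything gets deployed, can be sketched in plain Python. The schema and the rules here are invented for illustration; real tools (formatters, `terraform validate`, policy scanners) do a fancier version of the same check.

```python
import ipaddress

# Hypothetical guardrails standing in for what a validate/lint step checks.
ALLOWED_TYPES = {"t3.micro", "t3.small", "m5.large"}

def validate_config(config):
    """Return a list of problems; an empty list means the config 'compiles'."""
    problems = []
    if config.get("instance_type") not in ALLOWED_TYPES:
        problems.append(f"instance type {config.get('instance_type')!r} not allowed")
    try:
        # Rejects malformed CIDR blocks before any deploy is attempted.
        ipaddress.ip_network(config.get("cidr", ""))
    except ValueError:
        problems.append(f"invalid CIDR {config.get('cidr')!r}")
    return problems

good = {"instance_type": "t3.micro", "cidr": "10.0.0.0/24"}
bad = {"instance_type": "x1e.32xlarge", "cidr": "not-a-network"}

print(validate_config(good))  # []
print(validate_config(bad))   # reports both problems
```

A pipeline would run something like this on every commit and fail fast, before the slower integration tests that Chris describes next.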

[00:29:31.340] You know, if I make this networking change, does it break my Transit Gateway? Does it break my security groups? Does it break whatever else? And to me, that’s a powerful thing about pipelining, because you can literally say, like in my example, I have a whole region in Amazon where I just test stuff. There’s nothing permanent in there. But before I run a pipeline that would deploy to East or West or EU or something like that, it just tosses it over to us-west-1, because it’s kind of the worst region. It doesn’t really get used a lot.

[00:30:01.960] It’s like, us-west-2 or us-east-1, those are kind of my home base. But the point being, I know I can instantiate kind of anything from the ground up, and then I can run my tests there. So for me, I just have a test region. If you had a more complex environment, you’d have test accounts. So before you deploy in a particular region or whatever, it would go through a test account and do a full deploy.
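The test-region workflow Chris describes could look something like this as a GitLab CI sketch. The stage names, the Terraform variable, and the test script are all hypothetical; the idea is simply that a full deploy and its checks run in a throwaway region before any job touches a real one.

```yaml
# Hypothetical .gitlab-ci.yml: deploy to a sandbox region first,
# verify there, and only then roll out to the "home base" regions.
stages: [sandbox-deploy, verify, prod-deploy]

deploy-sandbox:
  stage: sandbox-deploy
  script:
    - terraform apply -auto-approve -var 'region=us-west-1'

integration-tests:
  stage: verify
  script:
    - ./run_integration_tests.sh us-west-1   # made-up test script

deploy-prod:
  stage: prod-deploy
  script:
    - terraform apply -auto-approve -var 'region=us-east-1'
```

In a multi-account setup, the sandbox stage would target a test account instead of a test region, as Chris notes.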

[00:30:24.050] And maybe there’s some permanent infrastructure or some on-demand infrastructure that got deployed so that you can do these tests. That’s not the end of the world; it just adds more complexity. All the other testing, I feel, like security testing, linting, best practices, that’s where the open source community has really jumped in to provide leadership. Checkov comes to mind, from Bridgecrew. They just got bought by Palo Alto recently. But it’s an open source tool that, as of version 2.0, goes through 700-plus rules, dependency maps, all the things you’re going to build.

[00:30:58.670] I mean, there are some really good, absolutely free tools, and you can put those directly in your pipeline, and it’s just like, poof, value added from eight lines of YAML. So why not?
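The "eight lines of YAML" is barely an exaggeration. As a hedged sketch (job name and stage are invented; the `bridgecrew/checkov` image and `checkov -d .` command are the tool’s real entry points), a Checkov scan as a GitLab CI job might look like:

```yaml
# Hypothetical CI job: scan the repo's infrastructure-as-code files
# with Checkov's built-in policies before anything deploys.
checkov-scan:
  stage: verify
  image: bridgecrew/checkov:latest
  script:
    - checkov -d .   # fails the job if any policy check fails
```

If the scan finds a violation, the job fails and the pipeline stops before the deploy stage runs.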

[00:31:08.630] – Ethan
[AD] We paused the episode for a bit of training talk with CBT Nuggets. If you’re a Day Two Cloud listener, and you’re listening to the podcast right now, then you’re probably the sort of person who likes to keep up your skills, as am I. Now, here’s the thing about cloud. As I’ve dug into it over the last few years, it is the same as on-prem, but it’s different. The networking is the same, but different, due to all these operational constraints you don’t expect.

[00:31:32.240] And just when you have your favorite way to set up your cloud environment, the cloud provider changes things or offers a new service that makes you rethink what you’ve already built.

[00:31:39.230] So how do you keep up? Training. This is an ad for a training company, what did you think I was going to say? Obviously training, and not just because sponsor CBT Nuggets wants your business, but also because training is how I’ve kept up with emerging technology over the decades. I believe in the power of smart instructors telling me all about the new tech, so that I can walk into a conference room as a consultant or project lead and confidently position a technology to business stakeholders and financial decision makers.

[00:32:06.980] If you want to be smarter about cloud, CBT Nuggets has a lot of offerings for you, from absolute beginner material to courses covering AWS, Azure, and Google Cloud skills. Let’s say you want to go narrow on a specific topic. OK, for example, there is a two-hour course on Azure security. Maybe you want to go big. All righty then. There is a forty-two-hour AWS Certified SysOps Administrator course, and there’s a lot more cloud training in the CBT Nuggets catalog.

[00:32:34.580] I just gave you a couple of examples to whet your appetite. In fact, CBT Nuggets is adding forty hours of new content every week, and they help you master your studies with available virtual labs and accountability coaching. And I’m going to shut up now and get to the part that you actually care about, which is the special offer of free stuff that you get from CBT Nuggets because you listened to this entire spot, you awesome human. First, visit CBTNuggets.com/cloud.

[00:33:00.740] There you will find that CBT Nuggets is running a free learner offer. They’ve made portions of their most popular courses free. Just sign up with your Google account and start training. This free learner program is a great way to give CBT Nuggets a try. Now, as a bonus, everyone who signs up as a free learner will be automatically entered into a drawing to win a six-month premium subscription to CBT Nuggets. So this is a no-brainer to me. Just go do it. CBTNuggets.com/cloud. That’s CBTNuggets.com/cloud. And now back to the podcast that I so rudely interrupted. [/AD] [00:33:38.360] – Ethan
What pipeline tools do you favor? We’ve mentioned Jenkins just because it is sort of a standard, a lot of people use it. What other ones do you use, Chris?

[00:33:47.250] – Chris
I feel like Jenkins gets a lot of undue hate. It’s Java, and that part deserves a little bit of an eyebrow raise, but it was a pioneer and is a really good tool. It’s trying to continue to be modern. It’s not something I use very frequently, though. I use a lot of GitLab. GitLab CI is a pretty cool pipeline tool; it’s probably my favorite if you’re looking to do infrastructure and you don’t want to use HashiCorp’s cloud platform, HCP. So GitLab CI is kind of nice. And then I use a lot of internal tools, various places where people are using internal pipelining tools, but a lot of them are pretty much the same. You define a flow, and it handles the flow for you based off some sort of source repository.

[00:34:37.490] If you’re looking for something else, I’d recommend GitHub Actions. It’s pretty dope. It’s more like a marketplace of things you can do, which is nice if you don’t know what you’re doing and you want a marketplace. But when you don’t want the marketplace and you want to write your own stuff, it’s kind of frustrating. And probably the last one that you’re going to see quite a bit would be CircleCI. It’s very heavily used for application pipelining.

[00:35:05.570] – Ned
Yeah, that is where I’ve seen CircleCI the most. I’ve been helping with some app development stuff, and there’s always that CircleCI YAML file in the root of the repository. Like, OK, there it is.

[00:35:18.390] – Ethan
You said a couple of things there talking through those tools. On the one hand, I think I heard you say it doesn’t matter, they all do about the same thing, pick one. On the other hand, there are certain limitations depending on which one you pick. You mentioned GitHub Actions. Well, if the Lego you want to snap into your pipeline isn’t in their ecosystem, it can be more difficult or maybe impossible to get that done.

[00:35:42.510] – Chris
You can write your own. Backing up for a moment: yes, every pipeline does have things it’s good at. GitLab CI is really good at both; I would say it’s better at infrastructure than apps, but good at both. Circle is great at apps, but not what I would use for infra. GitHub Actions, I feel like, is great as an integration layer with your public source code repository, or even a private one, but it’s not really what I want to use for deploying my infrastructure. I just feel like it doesn’t suit my use case.

[00:36:15.750] So they’re all created equal in that you put in a config, pretty much always YAML, and out comes some sort of task workflow, which is the pipeline. But they’re different based on what they integrate with, what they have templates and starter kits for, and what they natively support. Like, do they natively support Terraform, or do I have to invoke Terraform and kind of tell it what to do? Those are big differences.

[00:36:37.190] – Ned
Another big point of difference I’ve seen is if you need to deploy things on premises behind the firewall, you’re going to need some sort of runner or builder machine that’s inside your network deploying those resources. And some of the solutions don’t really have an internal runner option. I don’t think GitHub actions has a way to invoke an internal builder in your network. At least I haven’t seen that.

[00:37:02.750] – Chris
They do.

[00:37:02.750] – Ned
Maybe GitHub, like GitHub Enterprise or something.

[00:37:06.200] – Chris
No, just GitHub Actions. You can do a self-hosted runner, so you can just deploy an agent.

[00:37:11.000] – Ned
OK, that just reaches out and checks for jobs.

[00:37:13.910] – Chris
It just connects in, and then that’s one of the runners that runs when you have a job that needs that on-prem piece. The only reason I know is we were connecting back to an on-prem device to do a demo back in the day. I didn’t write the code, but that’s when one of my colleagues was like, there’s an on-prem runner. And GitLab CI has one too; most of them have some sort of way to connect in, and you run it on your bare metal or your instance. In fact, one of the cool things with GitLab CI that I like is you can deploy their runner into your cloud provider, like AWS, and then apply a role to the runner, and the runner then inherits the ability to do what you need it to do.

[00:37:52.040] So there are no credentials passed. It’s literally just calling commands into this runner, which is executing them in a containerized fashion using their application. And then it’s doing what it can based on the role assigned to the instance. So there are lots of ways you can get things, even in public cloud environments that are kind of, quote, behind the firewall, to do the things.
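On the GitHub Actions side of this, targeting a self-hosted runner is one line in the workflow. This is a hedged sketch: the workflow name and `deploy.sh` script are invented, but `runs-on: self-hosted` is the real mechanism that routes a job to an agent you have registered on an on-prem box or on a cloud instance with a role attached.

```yaml
# Hypothetical GitHub Actions workflow that runs on a self-hosted agent,
# so the job executes inside your network (or on an instance whose IAM
# role grants it access), with no credentials stored in the workflow.
name: deploy-internal
on: [push]

jobs:
  deploy:
    runs-on: self-hosted   # routed to a registered self-hosted runner
    steps:
      - uses: actions/checkout@v2
      - run: ./deploy.sh   # made-up script executed behind the firewall
```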

[00:38:15.260] – Ned
And another thing that I’ve been thinking about is how you manage something that’s a shared, long-lived service that a bunch of other applications are dependent on, something like DNS or Active Directory, heaven help us all. Is there a place for pipelining and automation with those types of services, or is that still sort of manual and living in the infrastructure old days?

[00:38:45.590] – Chris
No, no, absolutely automate those things. Those typically fall into what’s called a landing zone, which is, typically, hey, I have a deployment into a cloud provider where the master account or the main account is set up, the organizational units and however the organization needs to be set up are there, all the guardrails.

[00:39:06.470] SSO, identity, you know, that’s typically what you’d want to build out, because that adheres to the best practices of that cloud provider. And a lot of those are actual services that they provide, where you say, hey, no one should be able to SSH in this environment, no one should be able to deploy this particular type of instance. And then that’s held everywhere, which is what I mean by best practices.

[00:39:32.060] And then there you go, that’s an automated fashion. It’s being controlled typically by whatever does that; for Amazon, that would be Organizations and Control Tower. And then that’s like your root set of infrastructure that, hopefully, only a very small number of people have access to. They’re not logging on using that account; you assume the role instead. So something like that, for sure. And you bring up a point there, talking about managing services.

[00:40:01.880] Those services are actually kind of easy, because people do rely on them, but not a lot of people are trying to contribute to or change them. Right? DNS: they want DNS to work, but they don’t have an opinion on how you do it. As long as it resolves, they’re good. Imagine then deploying services in a pipelined, automated, kind of collaborative fashion that are downstream and upstream dependencies, as well as being hit by customers either directly or indirectly.

[00:40:26.550] Now we’re getting to the hard part, and that’s where these things like pipelines become essential, as well as publishing: what does my dependency do? What is my artifact? Whom do I depend on, and who is dependent upon me? That’s where it really starts to kind of bend your mind. So if you’re like, oh man, pipelining Active Directory, or the integration for it, or SCPs in AWS, is difficult: you got the easy end of the deal.

[00:40:54.940] Honestly, this is the easier, less troublesome part. Go make a friend in the development world who has to deploy a service, talk to them, and maybe you’ll learn a thing or two, or at least have some appreciation for the pain.

[00:41:09.870] – Ned
Right. Because it’s not just the internal integration of the app itself. It’s the larger world that app lives in and all the dependencies that other apps have on it to get work done. Is that part of the testing that happens in a pipeline, or does that testing happen somewhere else, where you’re doing full end-to-end testing for your application and all its dependencies?

[00:41:34.720] – Chris
It’s pretty rare to see an end-to-end dependency test, just because it requires so many teams to kind of put something together. Instead, what you do is you publish, you version your releases, and you strongly request that people use both semantic versioning and version-specific dependencies. And that gives you freedom. You can make changes and roll them forward, and if somebody downstream or upstream is saying this isn’t working, it doesn’t take them down; they can still be pinned to, or adhering to, a previous version.

[00:42:09.310] They can kind of work that out, so they can test based on what you release, and you can have them work with that. It’s a lot more contractual than all of us being on the same page, because you’re releasing at all sorts of different times. There’s no way you can orchestrate, OK, Friday’s the test day. Not gonna work.

[00:42:27.210] – Ned
It’s not going to work out. It kind of reminds me of when I’m using Terraform and modules, and I need to pin the module that I’m using to a certain version or a version range, because I know that someone’s going to put out a new version of that module and might break something in my Terraform deployment.

[00:42:44.340] And if I don’t pin the providers and the modules to the version that I know works, then I’m in trouble. And then when I do want to move to the newer version, hey, maybe that’s something that happens in a pipeline. I change the version, that kicks off a pipeline, and it tests that newer version in the context of my deployment.
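The pinning Ned describes is a few lines of Terraform. The provider, module, and version numbers below are examples only; the syntax (`required_providers` blocks and the module `version` argument) is how Terraform expresses exact pins and ranges.

```hcl
# Pin providers and modules so an upstream release can't silently
# change a working deployment; loosen the range deliberately when
# you're ready to test an upgrade in a pipeline.
terraform {
  required_version = ">= 0.14"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.50"   # allow patch/minor updates within 2.x only
    }
  }
}

module "network" {
  source  = "Azure/network/azurerm"
  version = "3.5.0"   # pinned exactly; bump this to trigger a test run
}
```

Bumping a pinned version in a commit is exactly the kind of change that can kick off a pipeline to validate the new release before it reaches production.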

[00:43:03.420] – Chris
Totally. And that’s bit me in the butt. The way I learned about that, Ned, was I was using GitHub Actions when it very first came out, and I said, yeah, just load the latest Terraform apply action.

[00:43:17.280] I figured, you know, I don’t care about the action from the Terraform side; I just said, I want the latest release. Well, there was a bug in one of the commits that were made where the latest release included betas. So all of a sudden, all of my Terraform code was running a beta release, which updated all my state information to the beta release, and none of my regular stuff would work anymore. So I was like, you know what?

[00:43:39.930] This is why people pin dependencies, or at least set ranges. Now I get it.

[00:43:48.240] – Ned
That bit me too, because all of my example exercise files from my Pluralsight course originally did not have versions pinned for everything. And I learned very quickly: no, you’ve got to do that, because people are going to be taking this course for a couple of years, and you don’t want your stuff to break every few months.

[00:44:04.100] – Ethan
Yeah, Chris, we’ve been talking a bit utopian. That is, you do things, you set up the pipeline, it’s all automated, and it’s great. But a lot of shops have a manual approval process. There’s some kind of human intervention before things are allowed into production. How do you deal with that when we’re trying to automate everything?

[00:44:28.180] – Chris
I’d start by figuring out why it’s there, and not in a bad way. Typically, the reason those exist is somebody got screwed. They pushed at the wrong time, or the system burned down and customers were impacted, and they’re like, you know what, we’re going to put some gates in place, because we want to have control over when the change occurs, because our butts got chewed out, not yours. Something like that. So there’s typically history there, and I think it’s worth digging into. And you can certainly emulate that in a pipeline. It’s actually fairly common, especially if you’re working with high-legacy or corporate-type work, where it’s like, this is financially impactful, so there will be a final approval before prod gets pushed to.

[00:45:09.990] And that’s fine. I think the goal then is to identify if you can marry the risk that they’re trying to mitigate with the manual approvals, as well as show improvement with pipelining, because a lot of the errors that typically get found through manual testing and manual pushes to production evaporate when you automate and pipeline, because you’re able to consistently solve those problems. And so you may reach a milestone of, hey, we haven’t had an issue in four months.

[00:45:41.890] Do we need these three manual approvals, or can we reduce it to one? And if we can reduce it to one, how long until we get rid of it entirely? I think the mistake that most people make is either, A, they assume pipelining means it’s full engines forward, you know, fire the laser beams, let’s hope we get to the Death Star and hit those womp rats or whatever. I’m mixing metaphors, you know, do the things.

[00:46:10.700] Or, B, there’s just this belief that once you automate, you can’t have any manual approvals. It’s totally fine to keep them, but have a plan to eventually retire them once trust has been rebuilt.
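A manual gate inside an otherwise automated pipeline is a built-in feature of most CI tools. As a hedged GitLab CI sketch (stage and job names invented), the prod deploy below simply waits until a human clicks "play":

```yaml
# Hypothetical: everything up to production is automated, but the final
# push pauses for a human approval, emulating the old change gate.
deploy-prod:
  stage: deploy
  script:
    - terraform apply -auto-approve
  when: manual          # pipeline stops here until someone approves
  environment: production
```

Retiring the gate later, once trust is rebuilt, is then a one-line change: remove `when: manual`.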

[00:46:22.450] – Ned
Right, and building up that trust is a big part of it as well, because I think once you move to a pipeline, you’re now going to be delivering and deploying on a faster cadence and people who are used to, you know, a change window once every two weeks or once a month. Now, the idea of making changes on a weekly or daily basis frankly scares them. And so you have to build up that trust with that group.

[00:46:45.850] – Chris
Yeah, I want to pick on a little piece you brought out there, Ned, because you said once you go to the pipeline, everything’s potentially much better. You can make a pipeline that really doesn’t do anything. You know, the pipeline says, type make, build whatever, type rake for me, and then just puke out an artifact, and then I’m going to still go manually test it.

[00:47:07.390] You can make the pipeline as skinny as you want, to where it’s doing just one task, or it’s fully doing all the testing and all the end results. It’s about the journey to get it to the point where it’s not just puking out the end result of a build process, but it’s going through the testing and the validation and everything you need to do to earn that trust. You can point to it and say, here are all the data points showing the tests that we ran.

[00:47:30.730] Here’s that issue you had last time that you filed a Jira ticket for. It’s resolved, committed right here, and we’re testing for it, and it won’t happen again. And typically, something that simple, if you show a business unit, a business owner, somebody that’s less technical, and you translate it for them, like, oh, that thing there means I’m not going to have that problem again. Awesome.

[00:47:51.270] Better make sure you’re right, but it’s just a way that you can kind of bring them in, because that’s what they want. They just don’t want things to break, because if they’re trying to run financial reports or some sort of marketing campaign or whatever, they want to know the technology is not going to be pulled out from under them like a rug.

[00:48:06.480] – Ned
Right. I think you bring up a good point for anybody who’s trying to get into the world of pipelines. You don’t have to build that full end-to-end pipeline with every testing suite known to man the first time. You can start real simple and then add complexity as you go.

[00:48:25.630] – Chris
In fact, that is the way to do it. Just start simple. I usually advise: take the tasks and do them manually first to make sure they work. You know, if you have a script you want to run, just run the script, do it step by step, and then add the least stressful, lowest-risk items to a pipeline. Automate the boring manual steps, the ones with no logic to them, like, hey, you have to put the files over here. Just have it automatically put the files over here.

[00:48:52.720] And then from there, you can add unit tests and whatever tests you need, and just take what you’re doing manually, put it in code, put a script against it, and then add that to the fun sandwich that is pipelining.
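A deliberately skinny first pipeline, in the spirit of Chris’s advice, can be a single job that does one boring file-copy step. Everything here (job name, paths) is hypothetical; the point is that even this buys you repeatability, and tests can be layered on later as additional stages.

```yaml
# Hypothetical first pipeline: one low-risk task, nothing else.
stages: [build]

copy-files:
  stage: build
  script:
    - mkdir -p output
    - cp src/*.conf output/        # the "put the files over here" step
  artifacts:
    paths: [output/]
```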

[00:49:06.150] – Ethan
Chris, I’ve got one kind of concluding question here that goes back to how you opened this up. You made the point that if it doesn’t have a good API that I can talk to and can’t be included in my pipeline, I don’t want to work with you. You also kind of implied, and at least that’s been the context of our conversation, that when an artifact is being deployed, you’re consuming public cloud. That’s what I’m hearing from you, anyway. I’m not deploying it on something that’s on premises, very likely. Why is your attitude only public cloud at this point? Is it because of the flexibility of consumption, or something else?

[00:49:44.980] – Chris
It’s less mystical than that. I don’t really do a lot of on-prem anymore, so most of the environments I’m working with are trying to get off prem. But prior to making that my new job, I certainly did a lot of on-prem pipelining, because with GitLab CI, for example, you can run that on a Linux box anywhere; you can just run it as a virtual machine. And even if there’s not an API I can specifically call, there’s often some way I can programmatically drive it with a script or a CLI or whatever. In some cases it’s, like, throw a file in a particular directory and it causes an app to trigger.

[00:50:23.610] It’s a weird, weird world out there. But certainly you could do it anywhere. I’ve just found that a lot of the magic that comes out of pipelining just isn’t a thing on-prem. Like, I’m not building magic servers out of instances and virtual machines. I’m using a hypervisor, potentially, which means I need a lot more guardrails on what I’m building and how I’m building it. Because on-prem, I’m worried about capacity. In the cloud, I’m worried about drip cost.

[00:50:48.670] It’s a totally different paradigm, so it means the pipelines have to be geared towards different things. Right? If I’m trying to deploy resources in the cloud, I’ve potentially already figured out the best way to build it from a price and cost and performance perspective. I’m just trying to put it in code and push it out so it’s repeatable and whatnot. On-prem, I can’t just go order me a new server like a pizza and have it appear.

[00:51:10.350] Right. I have to work within those confines. I feel like config management may be a little bit more prevalent as a tool used on-prem, because you have purpose-built hardware and virtualization pools for a thing. Those environments typically have their own methods for rolling out workloads, right, or some other vehicle that they use. I would like it if they would be more pipeline friendly.

[00:51:32.520] Please work on your APIs and tooling integrations and things like that, and stop making us use Java, *cough*, on-prem. But everything else, I think it’s apples to apples. I think you can pipeline pretty much anything you want. It’s just the use cases and the cost models change.

[00:51:48.670] – Ethan
The future, in your mind, is public cloud, though? In other words, if I get my head around how to build my app so that I can deploy it on public cloud, in a way that is cloud native, makes sense, is pipeline friendly, et cetera, is that the way I should go? Or do you see that there is still a use case for on-prem? And let’s leave data itself and data governance outside of it and just talk about infrastructure in that context.

[00:52:16.540] – Chris
Hmm. I mean, I don’t like the idea of throwing away one for the other. There’s always going to be something we need to be on-prem. The thing is, I think we’ve sort of solved that; we’ve cracked that nut. We kind of know how to squeeze a lot of efficiency out of on-prem. It’s just a world where I feel like we spent the last 15 years virtualizing the piss out of it and squeezing every dime we could out of every square inch of data center space and hardware.

[00:52:47.470] And now it’s like, how many thousands of virtual CPUs can I fit? Like, who cares? We’re getting to the edge of what we can really squeeze out of that. The reason I chose cloud as my one hundred percent, like, this is what I do right now, is because there are a lot of interesting ways that we can do things there. Maybe not to save money, but to make more.

[00:53:11.580] So that’s nice, if we can figure out how to make this application more profitable, more scalable, even cheaper to operate, and we get all these benefits. I think that’s really what gets me excited. I mean, who really wants a job where you’re just trying to make a number get smaller? That doesn’t get me excited in the morning.

[00:53:31.950] I’m like, how do we make more money out of this? Make it bigger, better, more adoptable. Meet use cases it couldn’t do before. That’s why I get excited about cloud, not because it’s a versus thing.

[00:53:43.840] – Ethan
Chris, this has been one of those mind-expanding conversations. I love getting the chance to talk to people who’ve been very deeply in it and have had time to form strong opinions. So this has been a really, really enjoyable conversation. And man, I know a lot of people know you, and you have a big social media following and all that. But for those folks who don’t, would you let them know how they can follow you on the Internet?

[00:54:06.910] – Chris
Sure. It’s W-A-H-L, @ChrisWahl on Twitter.

[00:54:12.010] – Ethan
Very good. Thank you again, Chris. We got to have you back, man, and talk about more of this nerdy stuff that you’re working on.

[00:54:19.660] – Chris
Keep me in small doses, I think. Otherwise I’m going to make you lose your audience once a year.

[00:54:26.800] – Ethan
Well, thanks again for showing up, man. And a virtual high five to you for tuning in. If you have suggestions for future shows, we would love to hear them. Hit either of us up on Twitter; we’re both paying attention to the Day Two Cloud show account. Or if you want to be a bit more personal, go to Ned’s fancy website, nedinthecloud.com. He’s got a form there you can fill out.

[00:54:45.050] And if you want to keep up with the very latest of what’s going on in IT, shows like this, blogs that we’ve been keeping up with, and so on, we’ve got a weekly newsletter. It’s free: Human Infrastructure Magazine. HIM is loaded with the very best stuff that we find on the Internet, and we have our own feature articles that we put in there, too. It’s free. It does not suck. And you can get the next issue via packetpushers.net/newsletter.

[00:55:04.090] All the back issues are there, too, by the way. If you’re just like, I don’t subscribe to things, cool. Just go to packetpushers.net/newsletter once in a while and look at the archives at the bottom. You can read every issue that way too, if you like.

[00:55:15.940] Until then, just remember: cloud is what happens while IT is making other plans.

Episode 96