
Day Two Cloud 092: What AWS Lambda Is Good For

Today's Day Two Cloud podcast is a thorough introduction to AWS Lambda, which is AWS's serverless compute service. We discuss how Lambda works, what it can do, use cases, and more.

In general, serverless combines a managed service with event-driven compute to allow customers to avoid or minimize infrastructure management and reduce idle capacity. That is, the customer runs functions on demand rather than maintaining a persistent server instance.

Our guide for today's conversation is Julian Wood, Senior Developer Advocate for the Serverless Product Group at AWS. This is not a sponsored show.

We discuss:

  • The differences between functions-as-a-service and serverless
  • A brief history of serverless at AWS
  • Core Lambda components
  • Common serverless use cases
  • Stitching functions together
  • How to package your code and supported languages
  • Addressing latency, state, and other issues
  • New features
  • More

Sponsor: CBT Nuggets

CBT Nuggets is IT training for IT professionals and anyone looking to build IT skills. If you want to make fully operational your networking, security, cloud, automation, or DevOps battle station, visit cbtnuggets.com/cloud.

Show Links:

Serverlessland.com Blog – Amazon

@julian_wood – Julian Wood on Twitter

AWS Tech Talks – Serverless

AWS Serverless Office Hours on Twitch

AWS Serverless Workshops: Innovator Island & Wild Rydes

WoodITWork – Julian’s blog

Transcript:

[00:00:00.990] – Ethan
[AD] CBT Nuggets is IT training for IT professionals and anyone looking to build IT skills. If you want to make fully operational your networking, cloud, security, automation, or DevOps battle station, visit cbtnuggets.com/cloud. That's cbtnuggets.com/cloud. [/AD] [00:00:24.720] Welcome to Day Two Cloud. Oh, boy, do we have a show for you today. We are going to go, I was going to say deep, on AWS Lambda. Ned, I don't know how deep we actually get into AWS Lambda specifically, but we have developer advocate Julian Wood joining us. And we do get a pretty thorough introduction to Lambda, what it can do, its use cases and so on. There was a lot here and Julian was very enthusiastic.

[00:00:51.090] – Ned
Yeah. So the thing that struck me is we talked with Julian for almost an hour, maybe a little bit more, and during that time I feel like we only scratched the surface. There is so much more to what you can do with Lambda, the use cases, how it functions. We even get to talk about security and monitoring, so maybe that's a whole other episode. It was a very engaging and interesting conversation about Lambda, and it filled in some gaps for me that I didn't even know I had.

[00:01:18.540] – Ethan
I felt the same way. It filled in gaps I didn't know I had, because as we kept talking about what you could do with Lambda and the appropriate use cases, it made me rethink how I think about computing and how computing work gets done. And I think you that are listening are going to feel the same. So enjoy this conversation with AWS' Julian Wood.

[00:01:37.050] Julian Wood from AWS. Hey, man, welcome to the show. And I don’t think you’ve been on Day Two Cloud before, in fact I know you haven’t. So let’s introduce you to the audience. Who are you and what do you do?

[00:01:48.660] – Julian
Well, thank you so much for inviting me. A long time listener. I think I've listened to all of your shows. So it's a privilege and an honor to be among such esteemed packet pushers. So, yeah, thank you. My name is Julian Wood. I work as a developer advocate within the Serverless team at AWS, and I've got an awesome job. I work with builders and developers to help them understand how best to build serverless applications, as well as being their voice internally. So any feedback, whinging, that kind of stuff, I bring that internally to make sure we do the best we can to build serverless products.

[00:02:21.660] – Ethan
OK, so when you say developer advocate, then you’re kind of a middleman, a proxy between people that are consuming the Lambda service as developers and the internal team at AWS that is producing the product.

[00:02:32.040] – Julian
Exactly. So yeah, we work within the product team, with all the product managers and the engineers who are writing the cool, funky stuff, and work exactly as their proxy on the outbound stuff of helping people understand it. And then on the inbound stuff, the first thing is, you know, the adoring praise as well as the gripes and moans for whatever's happening. But then also acting as developers and helping the product managers develop the products, going through all the iterations with our customer hat on, to be able to help them do their job better, and they help us do our jobs better.

[00:03:09.540] – Ethan
Cool, man. Let’s jump into the Serverless discussion then, and I’m going to skip the old joke that, uh, Serverless is made up of servers. Yeah, we get it. OK. Ha.

[00:03:16.560] – Julian
Oh, now you tell me.

[00:03:17.970] – Ethan
I know, right. But in a sentence or two, what is serverless? And let's define it in kind of a general sense, not the AWS sense, for the moment. We're going to go into the AWS version in a minute. But I want to hear from a broad sense how you would define serverless.

[00:03:33.000] – Julian
Yeah, absolutely. So I like to think of it as: serverless is the practice, or an idea, of using managed services combined with event driven compute functions, with the overall goal of avoiding or minimizing infrastructure management, configuration, operations and maybe idle capacity. That sounds very high level, and I know it's a bit of a marketing kind of spiel, but the idea is two different things. You want to avoid infrastructure management. So that's the kind of thing: not managing servers, not managing pods, not managing operating system updates, not managing patching, that kind of thing.

[00:04:12.080] But that's from your code's perspective, and it's also about connecting that code to services natively, maybe via an API or a messaging kind of thing. And so you don't have to then run all these separate systems yourself. You can just consume the awesome powers of the cloud by using these managed kind of services.

[00:04:30.560] – Ethan
Now, I think of it as a service that is instantiated on demand. If I want the function to run, I make a call. The function spins up very quickly, runs, and then it’s gone. Is that correct?

[00:04:42.230] – Julian
That is correct. So if we're talking about the big picture, let's say serverless is the big picture, which is the managed services, the function code, all those kinds of things, and linking these different things together. Now, Lambda is our compute service, which is functions as a service. And you can think of that as a little block within the big serverless ecosystem. So serverless has a whole bunch of different products; speaking from AWS, there's AWS Lambda, which we'll talk loads about.

[00:05:10.640] But there are other things, such as API Gateway for hosting APIs, and we've got, you know, many messaging systems that move data or events around: topics, queues, things like EventBridge, SQS, SNS, Kinesis and, I think, many others.

[00:05:25.820] But, you know, a number of AWS services do that. And then there's also workflow orchestration; we've got a product called Step Functions for that. And so that's connecting them all to the myriad world of AWS services. And then a portion of that is your actual function code, which uses the AWS Lambda service.

[00:05:45.370] – Ned
OK, so I think it's really important to disambiguate functions as a service from the larger serverless concept.

[00:05:53.260] – Julian
Correct, because functions as a service was one of the big starters of the serverless movement, if you want to call it that. But it is just a small portion of it. And I mean, let's be honest, it is a terrible name. I don't know why in IT we name things for what they aren't rather than what they are. Serverless.

[00:06:13.780] Well, what does that mean? Same with, I mean, NoSQL. OK, so it's not SQL, but then what is it? So yeah, we're in IT, we have naming issues, but that's just the way it is. And in fact, Lambda has a bit of a history story. If you don't mind me delving back into the Wayback Machine, it wasn't ever defined as serverless when it was first announced, and it actually grew out of the S3 organization.

[00:06:38.410] So if you're not sure, S3 is an object storage system, which literally just stores a ridiculous amount of data in the cloud. It's got, I think, 11 nines of durability. It is one of the sort of world wonders in terms of storage. And people within the org had a great idea. They were like, well, hang on, if somebody uploads a file, well, it's not a file, it's an object in S3 parlance, but think of it as a file.

[00:07:02.470] Wouldn't it be cool if you could just run an action on that? Somebody uploads a file. I don't need to then have a process that polls for that, or some kind of server that needs to run something. Can't we just do an immediate action? And so that's actually where the crux of Lambda was initially thought of and born. And then obviously the clever cogs in the universe of AWS looked at that and went, wow, let's not just do that for S3.

[00:07:25.600] Why can’t we do that for everything within AWS and even broader. So why can’t we create this event driven model where an event happens. Yes. Uploading a file or an object to S3, but any other kind of event, you know, hitting an HTTP endpoint, you know, consuming something off a queue, a cron job, all these kind of things, why can’t we just run a little bit of code in response to that event and tada Lambda was born.

[00:07:52.720] So that’s where it sort of comes from. And it was all started as event driven computing. That was this whole idea that you have an event, something that happens that kicks off and there’s a little bit of your custom code would run, your custom code would do some processing on that data. So maybe for an S3 thing, it’s going to resize it or it’s going to remove the color from a picture or it’s going to do some sentiment analysis on whatever is uploaded in that file and then dump its output or push its output somewhere else.

[00:08:23.950] – Ned
OK, OK. So I think that really does set the stage for us in terms of what Lambda is. If I was looking at Lambda and I wanted to consume it, what are the core components? It sounds like you’ve sort of outlined them a little bit, but let’s just revisit that for a second. So I got it straight in my head.

[00:08:40.600] – Julian
Certainly. Well, think of Lambda as a service that just automatically runs your code without requiring you to provision or manage infrastructure. You write your code, you upload it to Lambda, and your code is the thing that's actually important. That's the valuable thing, that's your sort of business value, if you want to talk about it that way. The important stuff isn't really the resources for your code. And so you don't have to bind your function to a collection of machines or a pod or anything that's addressable.

[00:09:08.350] You're just saying, I have a Lambda function and I want to allocate it a little bit of memory. And that memory is its power: you get a proportional amount of CPU, you get a proportional amount of network, and some configuration. And then you just say it runs every time with this amount of memory, and whether Lambda in the background spins up 18,000 cores or one core is not my problem. I don't need to worry about that. I don't need to worry where those cores exist or where all of that happens.

[00:09:37.210] And the whole idea is you can get this amazing power of the equivalent of a distributed computer, being the power of AWS Lambda, without needing to know anything about distributed computing. And that means you can build and create value for your customers far quicker.

[00:09:56.880] – Ethan
Now, you said upload my code. Do I have to package it in some way, deliver it as a container, send a binary? What does that really mean?

[00:10:05.040] – Julian
Absolutely. So you can start by creating a Lambda function in the AWS console, and there's a little window there where you literally paste your code in. Now, that's the easy way to do it via the console. And there's a number of different languages that you can write your code in. Where's my list here? Because it's big enough to not remember offhand.

[00:10:27.180] So the native languages Lambda supports are Java, Go, PowerShell, Node.js, C#, Python and Ruby. Those are the native languages. But then there's actually a thing called the runtime API where you can create functions for any other language. You've just got to do some more stuff to create some connectivity with the Lambda service. And people have done weird and wonderful things. They've created COBOL functions and, you know, Elixir and any kind of thing.

[00:10:58.780] So literally, the world is your oyster. You can go and create functions in any kind of language. But we’ve got a whole bunch of the languages that we support where obviously that makes life nice and easy and that’s native.

[00:11:10.080] – Ned
Right.

[00:11:10.710] – Julian
So your function code is just a little bit of code, let's say Java or Node.js or Python or whatever, something that people understand. And within that code it has a function that is called the handler. And what happens is you then have that piece of code. As I said, you could just literally copy and paste it into the console, or what you can do from your command line is zip that up, and it's literally just zipping up one file. That's one way of doing it, for what we call zip archive functions. There's another whole way you can actually package functions as a container image, and I will go into more detail on that.

[00:11:47.070] But let's not muddy the waters with too much confusion, as your brain is already spinning around wondering what the heck I'm talking about. So you take this little bit of code, which has got a function in it, and you upload it to Lambda. In the background it happens to be stored on S3; you don't see that, you don't care. What you then do is invoke that Lambda function. From the console, from your workstation, via the Lambda API, any number of ways you can invoke that function.
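
To make that concrete, here is a minimal sketch of what such a handler can look like in Python (the file, function and field names are illustrative, not from the show):

```python
# handler.py - a minimal Lambda handler sketch (illustrative names).
def handler(event, context):
    # 'event' carries the payload that triggered the invocation;
    # 'context' exposes runtime metadata such as the request ID.
    name = event.get("name", "world")
    return {
        "message": f"Hello, {name}",
        "request_id": context.aws_request_id,
    }
```

Zipped up and uploaded, that `handler` function is what the Lambda service calls for every event, with the function's handler setting pointing at something like `handler.handler`.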

[00:12:15.540] What Lambda is going to do in the background is spin up what's called an execution environment, which is basically a secure, isolated bunch of compute. Now, I'm avoiding using the word container because it's not strictly a container, and the container purists go, oh yes, but... and head off along a whole dangerous sidebar quest that doesn't get us anywhere. But if you think of it in terms of an isolation mechanism, it's actually a virtual machine, and it's called a Firecracker virtual machine.

[00:12:45.930] And what we've done is, instead of having isolation at a container slash Docker kind of level, we actually do it at a virtual machine level. Every single Lambda function that runs is in its own isolated virtual machine. It's tiny and spins up super quickly. It's literally got no other devices. It doesn't have a webcam or USB or any of these kinds of things connected. I think it's literally got a network device and a keyboard, and the keyboard is there somewhere.

[00:13:14.960] It's probably there so you can press F1 to continue if it ever needs a keyboard. And then obviously there's some network connectivity to get in, and a little bit of storage. But that's all that little micro VM has. So that micro VM is spun up, Lambda downloads your code into that micro VM, and it runs the code that is in your handler. In the rest of your function you've also got some initialization code that can actually happen before the invoke; we'll get onto that when we talk about cold and warm starts. But ultimately your function is going to run and pass its results back to Lambda, which sends them back to you.

[00:13:50.160] – Ethan
Now, you said that micro VM spins up very quickly. We’re talking milliseconds, I think, right?

[00:13:54.370] – Julian
We are talking milliseconds. So in the background, what's actually going to happen is Lambda maintains a fleet of a little bit more than one or two of these micro VMs all around the world. It's actually trillions of invocations a month that are happening for Lambda.

[00:14:09.660] So there are a lot of these little micro VMs around. The Lambda service in the background is going to maintain obviously a rather large fleet of these micro VMs, and they run on standard bare metal EC2 instances, which we call worker nodes. So those are the servers that actually run serverless, just like the wires that run wireless, you know, that sort of concept. When you run wireless there are loads of wires in the background; with serverless, there are loads of servers in the background.

[00:14:40.980] And so we manage a fleet of those workers. We then manage the runtime, so things like Java and Node and Python, the actual interpreter, the runtime, we manage that for you. We stick that within those little micro VMs that are running on the workers, and then that's sort of all ready to go. So we've got a fleet of Python 3.8 and a fleet of Java ones all ready to rumble. And as soon as your function code comes in, there's a mapping service which says, we've got a spare micro VM ready to go, and your Python 3.8 code, for example, is copied into that execution environment and it runs. So all of that is sitting in the background, ready to go.

[00:15:25.650] – Ethan
You don't even have to spin up a micro VM, you're saying, because there's an execution environment sitting there waiting for me, most likely, and it's just finding the right one that will execute my chunk of code.

[00:15:35.760] – Julian
Correct. And that's the first time that your function runs. The first time that your function runs, the Lambda service downloads your code into this micro VM and off it runs. So it's literally the amount of time it takes for the code to copy, and off it goes, because, you know, Java or Python, everything is there ready to go.

[00:15:57.690] Now, what then happens is there are two parts to the Lambda function. We talked about the handler; that's the sort of business logic code. But the rest of the function also runs, during an init phase. And this is where we talk about the cold starts that people sort of want to know about and understand. So when a Lambda function first runs, it does a cold start, and this is all the stuff that happens outside of the function handler.

[00:16:20.400] And you can stick code in there like, oh, I need to maintain a database connection, or download a secret, or I want to do some initialization code, or I want to connect to my MySQL database, which could be in the cloud, could be on premises, could be anywhere else. It's setting up the environment for you to use. And obviously that's going to take a bit of time, and that depends on your code.

[00:16:42.810] So once that has run, then the function invocation can happen. And so, you know, depending on your code, that could take tens of milliseconds up to minutes if you've got something that's going to be really slow. If you're running Java, the Java virtual machine needs to spin up; that's going to take some time. If you're using a compiled language like Go, well, because it's compiled, that's going to start very quickly.

[00:17:08.910] You've got some different kinds of things that you can play with, and lots of tips and tricks to be able to reduce that cold start time. But then once your function has been invoked and sent its results back, that is now seen as a warm, available function. And the next request that comes in just goes straight to that warm environment and doesn't need to run the init code, because the database connection is already there, the secret has been downloaded from some external system, and the function can just invoke again, so that subsequent warm start is super duper fast.
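
A hedged sketch of that init-versus-invoke split in Python: everything at module level runs once per cold start and is reused while the environment stays warm, while the handler body runs on every invocation (the parameter name and client usage are illustrative):

```python
# connection_reuse.py - sketch of init-phase vs. per-invocation work.
import os
import boto3

# Init phase: runs once per cold start, before the first invocation,
# and is reused by every warm invocation of this execution environment.
ssm = boto3.client("ssm")
DB_PASSWORD = ssm.get_parameter(
    Name=os.environ["DB_PASSWORD_PARAM"],  # hypothetical parameter name
    WithDecryption=True,
)["Parameter"]["Value"]

def handler(event, context):
    # Invoke phase: runs on every request, warm or cold.
    # ... open or reuse a database connection with DB_PASSWORD here ...
    return {"status": "ok"}
```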

[00:17:40.770] Now, people then think, well, am I going to have a lot of cold starts, and is it something I really need to panic about? And the funny thing is developers get caught up with the cold starts, because what you do is you test your Lambda function, then you redo it in your IDE, and the process goes: zip it, upload it, and kick off your Lambda function. And you think, oh, again? Now I'm having a three second cold start, and every time I iterate on my Lambda function I'm getting a three second start. Well, yeah, because you are updating the code of your Lambda function; every time, Lambda is saying your function needs to run again.

[00:18:12.990] What happens in a production environment is that more functions are running concurrently. Each concurrent invocation of a function is going to have a cold start, but the subsequent invocations are going to be warm. And so you're going to find, if you're running Lambda behind an API and you're getting a thousand requests hitting your API each second, those first thousand requests, yes, they're going to be cold starts. But then, for the next, could be hours, all the subsequent calls, all the subsequent invocations, are going to be warm starts.

[00:18:44.790] And so your actual percentage of invocations with cold starts is tiny, and it can be, you know, generally around five percent. So it is something to think about, but it's not something that's going to completely overtake all of your functions.

[00:19:00.180] And we can talk about synchronous and asynchronous, because let's say with a synchronous request behind an API, then you care about that cold start. If you're building asynchronous applications, you don't care, because if you've got a Lambda function that's sending an email message, or something that's pulling off a queue, and it takes an extra second, do you care? No. So this is part of the event driven thing: with asynchronous invocation of Lambda functions, cold start doesn't bother you.

[00:19:28.560] – Ethan
So, Julian, this has been an excellent engineering level conversation so far about how this is working. But part of the cloud is economics. So when I am invoking these functions, what is this costing me? What’s the Lambda pricing model?

[00:19:41.070] – Julian
Yeah, absolutely. So Lambda actually has quite a simple pricing model: you pay for execution duration rather than a server unit. So you're not paying for the servers or anything underneath.

[00:19:55.020] So there are two things you pay for. One is requests served; that's just the number of requests, one, two, three, four, five, up to whatever. The other is the compute time required to run your code. The number of requests plus compute time, and that is metered in increments of one millisecond. So if you've got a function that runs three milliseconds, you pay for three milliseconds. And this is actually a recent announcement from December at re:Invent; it used to be billed in one hundred millisecond increments and now it's down to one millisecond.

[00:20:25.110] So that's as granular as you could possibly get for paying for anything. And also, there's a super generous free tier. You get a million free invocations a month, and that is in perpetuity, so it's not just part of a time-limited free tier or something. So, yeah, a million invocations per month per region per account. So there are many businesses out there who are running, you know, a decent amount of Lambda invocations with multiple accounts, and they're not paying anything. They're just within the free tier.
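
As a rough back-of-the-envelope sketch of that model (the per-request and per-GB-second rates below are illustrative placeholders, and the compute free tier figure is an assumption; check the current Lambda pricing page for real numbers):

```python
# lambda_cost_sketch.py - rough monthly cost estimate with illustrative rates.
REQUEST_PRICE = 0.20 / 1_000_000    # illustrative price per request
GB_SECOND_PRICE = 0.0000166667      # illustrative price per GB-second

invocations = 5_000_000             # invocations per month
avg_duration_ms = 120               # billed in 1 ms increments
memory_gb = 0.5                     # 512 MB allocated to the function

FREE_REQUESTS = 1_000_000           # perpetual free tier mentioned in the show
FREE_GB_SECONDS = 400_000           # assumed compute portion of the free tier

gb_seconds = invocations * (avg_duration_ms / 1000) * memory_gb
cost = (max(invocations - FREE_REQUESTS, 0) * REQUEST_PRICE
        + max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_PRICE)
print(f"Estimated monthly cost: ~${cost:.2f}")
```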

[00:20:54.990] – Ned
Right. Wow. So, I mean, I would like to pay by the picosecond, so I’m hoping that comes out.

[00:21:00.540] – Julian
I will put that request in.

[00:21:03.960] – Ned
I got to imagine. So is the initialization time that you talked about earlier, is that included as part of that execution time that I’m being charged for, or is it just when the the init is done and the handler kicks off?

[00:21:17.470] – Julian
A bit of both, and I don't want to get into too much detail with it, but there are different ways that Lambda functions can run. I think it's best, actually, to assume that your initialization code is also going to be charged.

[00:21:32.110] There are some nuances to it. There are some scenarios where it’s not charged and you get some extra compute boost for that as well. But I think just to keep it simpler, yes. Let’s assume that you are charged for the whole lifecycle of your Lambda function.

[00:21:45.160] – Ned
So if I were developing some code to run in Lambda, it would behoove me to make that the most efficient code possible because I am being charged by how long it takes to execute.

[00:21:55.450] – Julian
Correct, because you have this one millisecond billing. If you're running a server that is running a process 24 hours a day, even if that process is handling one request a minute, that server is sitting there idle for a long portion of its day, which you are still paying for. So this is one of the value propositions for Lambda: you do not pay for idle. When your functions aren't doing anything, you aren't paying for anything. You're not paying for some server or container or some process to hang around waiting for something to come in.

[00:22:28.200] – Ned
Right, right, that, actually, I think that’s a good lead in to examining what some of the use cases are when it comes to Lambda, because we’ve been talking a lot of theory and architecture. But I need the reason to actually use this thing. And I have a couple that I’ve thought of on my own and I’ve actually used in the past. But I’m curious, what are the primary and best use cases for using Lambda over other technologies in AWS?

[00:22:53.820] – Julian
Yes, certainly. I mean, I could give you the flippant answer of: whenever you have some code you want to run in response to an event.

[00:23:00.840] And that is true, because that's the whole premise of Lambda. But, you know, there are a number of use cases. The one I was talking about is being behind an API. You hit an API with a PUT or GET request or something, it's going to hit API Gateway, our API service, and behind the scenes run a Lambda function. That Lambda function is going to maybe pull something from a database, post something to a database, connect to something and respond back to the client.

[00:23:24.810] So that's a synchronous request, a web front end, a very common use case for Lambda. The cool thing about this is you're generally not only running one Lambda function. You can have separate Lambda functions for PUT, for GET, a whole collection of Lambda functions that handle different parts of your application. So that's one kind of thing: behind the API, Lambda does something in response.
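
A minimal sketch of that shape, assuming the common API Gateway proxy event format (the fields checked and the fake data are illustrative):

```python
# api_handler.py - sketch of a Lambda behind API Gateway (proxy integration).
import json

def handler(event, context):
    method = event.get("httpMethod")       # e.g. "GET" or "PUT"
    if method == "GET":
        body = {"items": ["example"]}      # e.g. read from a database
    elif method == "PUT":
        payload = json.loads(event.get("body") or "{}")
        body = {"saved": payload}          # e.g. write to a database
    else:
        return {"statusCode": 405, "body": "method not allowed"}

    # API Gateway expects a statusCode and a string body in the response.
    return {"statusCode": 200, "body": json.dumps(body)}
```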

[00:23:49.320] Then data processing is a huge kind of thing. You upload a file, or you have some data that's streamed in via an IoT endpoint, or maybe via some WebSocket connection or something like that, streaming data. Lambda is excellent because it's just going to see that data come in and do some transformation on it. You know, it's going to calculate some averages, it's going to munge it up, it's going to stick it into some other format to plunk onto another system, whatever it's going to do.

[00:24:17.970] So it acts as part of a pipeline for data processing. And then, you know, scheduled events, cron jobs, that's a whole kind of thing. You know, how many people are running servers out there which are just running a cron job? Every night at midnight, let's prepare this PDF report; four times an hour, let's make sure that we are copying a file from here to there. These are great use cases for Lambda, because you don't need to, again, have these scheduling servers up and running all the time, waiting to do something.

[00:24:45.930] Anything that's event driven, so anything that can create an event, and those events are super broad. Even uploading a record to a database, something like DynamoDB, which is our key value database store, and a number of other database technologies as well. Just the act of adding a record to the database, deleting, or doing any CRUD operation on a database can automatically fire off a Lambda function.

[00:25:08.700] So, you know, that's got a whole bunch of use cases, and it does sort of twist the mind a bit, thinking about how you can do computing. Because what people normally do is they have some code, it uploads something to a database, and then in the same bit of code they've got to do some retry logic if that didn't happen; they've got to then do a whole bunch of separate processes within their code.

[00:25:29.220] How about if you just fired off an event to your database to say, add a record, a new customer record, for example? And then in the background, Lambda goes, oh, how about that, we've got a new record in the database. Why don't we send them an email notification? Why don't we add them to my X system? Why don't we add them to my Y system? Why don't we add them into our data analytics system? So all of these kinds of things can happen, all kicked off by a customer record being added to a database.

[00:25:55.950] You can imagine all the workflows in your business that could kick off, and that can kick off automatically, just because of these events, like a database record being updated.
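
As a sketch of that fan-out idea, here is roughly what a function subscribed to a DynamoDB stream might do when a new customer item lands (the attribute names and the notification helper are placeholders):

```python
# new_customer_handler.py - sketch of reacting to DynamoDB stream events.
def handler(event, context):
    for record in event.get("Records", []):
        # Only react to newly inserted items, not updates or deletes.
        if record.get("eventName") != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]
        email = new_image["email"]["S"]   # DynamoDB's typed attribute format
        send_welcome_email(email)         # placeholder for SES, SNS, etc.

def send_welcome_email(address):
    print(f"would send a welcome email to {address}")
```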

[00:26:04.470] – Ethan
It feels like the big idea here is I'm not having an EC2 instance or something that's going to be sitting there consuming CPU and costing me money constantly, whether I'm using it or not. I am using the bare minimum amount of CPU to execute my function, on and then off again as quickly as possible. And so in theory I get some economy, I was gonna say economies of scale, but not exactly. I'm getting a very frugal use of compute to accomplish a particular task, and those are the good use cases for Lambda.

[00:26:38.670] So if we flip this on its head, Julian, what’s a poor use case for Lambda where it just doesn’t make sense to use Lambda or if people misuse it?

[00:26:48.260] – Julian
Oh, absolutely. I mean, this is IT we're talking about. If you create anything, people are going to abuse the service in weird and wonderful ways; that's part of the fun. I will get to that. But I just want to pop back to what you were saying, because that is one of the incredible use cases of Lambda, this pay-per-use model. And so you're not having to think as strongly about scalability as well. If you've got one event that kicks off a second, or you've got a thousand events that kick off in a second, you don't have to even plan for that.

[00:27:19.770] It's just going to happen in the background. That's one of the amazing powers of it, the scale up and then also the scale down. So think of it: you've got a restaurant booking system. Friday night, it's going to be all kinds of super busy; on a Monday morning, people aren't going to be buying tacos or pizzas or whatever, that kind of thing. You don't have to manage the scaling down of that. Oh, I've got, you know, 100 EC2 instances.

[00:27:43.080] Yeah, on a Monday I think we're going to be able to deal with three, and then as the load comes in, well, for lunchtime let's scale that up to 100. You don't have to do that. Lambda is going to scale up and scale down automatically. And it's not something you need to think about or be aware of, or even understand the infrastructure that is just spinning up and down in the background. That was my sidebar. Let me go back to the bad use cases for Lambda.

[00:28:07.140] Well, I mean, I can put my marketing developer advocate hat on and say, well, we're always increasing the use cases day by day, so you can use AWS Lambda for even more things, which is true.

[00:28:19.760] But let's not be trite about it. So one of the constraints with Lambda is that Lambda functions can run for 15 minutes. That's the longest time that a Lambda function can take to run.

[00:28:31.220] – Ned
OK

[00:28:32.030] – Julian
So if you need something that runs for 30 minutes or an hour, that's sort of out of bounds for Lambda, and people are thinking, you know, why this arbitrary limit of 15 minutes? We like to think of it as one of the ways that we can solve some of the hard problems of security and maintenance, because those kinds of things are harder when things stick around for longer.

[00:28:57.670] So the longer it stays, the more complicated it becomes. It's harder to spread the workloads around, and you get things like affinity and state, and those get far, far more complicated. And obviously security: the longer something stays around, the more ripe it is for something to possibly go wrong. The idea of Lambda is to have this sort of temporal ephemerality. I'm sure that's a word from somewhere.

[00:29:21.730] – Ned
It is now.

[00:29:22.630] – Julian
I'm not sure, but I'm owning it. Ephemeral is something that's temporary; it's going to be ephemeral. And so the idea of Lambda is that behind the scenes, you know, you've got these isolated execution environments being cycled and cleared out all the time. And, you know, it's one request, one execution environment. So Lambda functions, whether within the same function, within the same account, or between customers, they're not sharing these execution environments at all.

[00:29:50.890] So that's one kind of thing. And that also gives you consistency of performance, because, you know, in a distributed system you're not sharing stuff in the same way.

[00:29:58.990] So that's the time limit kind of thing. The other sort of constraint is the processing power. Recently, as of December, you can allocate up to 10 gig of RAM to a Lambda function; that's up from three. And that proportionally gives you up to six virtual CPUs. You don't allocate those directly, you just allocate memory, that's the one dial, but up to 10 gig then proportionally adds up to six virtual CPUs. So if you've got less than, I think it's 1.8 gig of memory, then it's using a single CPU, and that sort of ramps up to the 10 gig.

[00:30:33.850] So people also think, well, hang on, before we had this, I had a job that took, sorry, just under 45 minutes to run. But now, with a tripling of the memory allocation and the proportional CPU allocation, that job could run in under 15 minutes. You know, you could really simplify your architecture by going Lambda. So that's the second constraint.

[00:30:58.930] So we've got a time constraint, we've got a resource constraint. You can't create a Lambda function with a terabyte of RAM; sorry, you'll need to run your Minecraft server somewhere else. And the other is if you are using a port and socket model. I've been talking about event driven computing, where an event happens and Lambda kicks off automatically. Some people aren't there. Some people like their ports and sockets. And I'm imagining there's a huge big grin from the networking Packet Pushers people at this, because we live in a port and socket world.

[00:31:29.740] And, you know, the whole container ecosystem is still very much in the ports and sockets world, with huge amounts of work going on there, and that's absolutely great. If you're not wanting to move from a port and socket implementation to an event driven architecture, well, you know, Lambda is not going to be a good use case for you. And the other one is that Lambda runs on a general compute substrate of EC2 instances.

[00:31:56.050] We haven't got GPUs available to Lambda. We haven't got, you know, some other funky hardware that you can maybe plug in. You can't plug in a USB key to run your dongle for Lambda, all these kinds of things.

[00:32:08.710] So these are some of the constraints of Lambda. But, you know, we actually like to think some of those constraints are superpowers for Lambda, because it focuses you on what you're doing. This is your function code that's going to run; you make it short, sharp, sweet and powerful. You've still got 10 gig of RAM, you've still got six virtual CPUs and 15 minutes. You can do a heck of a lot. And if you can't do that in 15 minutes, you know, some things are really suited to breaking that up, because you've literally got an unlimited parallel supercomputer over here.

[00:32:39.460] So if there's a way people can tweak their applications to be more parallel, to have more parallelism within the architecture, then you can go way higher and way broader than you would normally think.

[00:32:53.260] – Ethan
Constraints can be very liberating.

[00:32:56.140] [AD] We pause the episode for a bit of training talk, training with CBT Nuggets. If you're a Day Two Cloud listener, and you are, you're listening to the podcast right now, then you're probably the sort of person who likes to keep up your skills, as am I. Now, here's the thing about cloud. As I've dug into it over the last few years, it is the same as on prem, but it's different. The networking is the same, but different, due to all these operational constraints you don't expect.

[00:33:19.750] And just when you have your favorite way to set up your cloud environment, the cloud provider changes things or offers a new service that makes you rethink what you’ve already built.

[00:33:26.800] So how do you keep up? Training. Now, you knew a training company would be the answer. I was going to say obviously training, and not just because sponsor CBT Nuggets wants your business, but also because training is how I've kept up with emerging technology over the decades. I believe in the power of smart instructors telling me all about the new tech so that I can walk into a conference room as a consultant or project lead and confidently position a technology to business stakeholders and financial decision makers.

[00:33:54.530] If you want to be smarter about cloud, CBT Nuggets has a lot of offerings for you, from absolute beginner material to courses covering AWS, Azure and Google Cloud skills. Let's say you want to go narrow on a specific topic. OK, for example, there is a two hour course on Azure security. Maybe you want to go big. All righty then, there is a forty-two hour AWS Certified SysOps Administrator course, and there are a lot more cloud training offerings in the CBT Nuggets catalog.

[00:34:22.090] I just gave you a couple of examples to whet your appetite. In fact, CBT Nuggets is adding forty hours of new content every week, and they help you master your studies with available virtual labs and accountability coaching. And I'm going to shut up now and get to the part that you actually care about, which is the special offer of free stuff that you get from CBT Nuggets because you listened to this entire spot, you awesome human. First, visit cbtnuggets.com/cloud.

[00:34:48.280] There you will find that CBT Nuggets is running a free learner offer. They’ve made portions of their most popular courses free. Just sign up with your Google account and start training. This free learner program is a great way to give CBT nuggets a try. Now, as a bonus, everyone who signs up as a free learner will be automatically entered into a drawing to win a six month premium subscription to CBT nuggets. So this is a no brainer to me.

[00:35:12.700] Just go do it. cbtnuggets.com/cloud. That's cbtnuggets.com/cloud. And now back to the podcast that I so rudely interrupted. [/AD] [00:35:25.460] – Ned
So, I mean, when we’ve been talking about Lambda so far, it’s mostly we’re talking about the idea of a single action function. This is a function, it has an event, it does something and then it’s done. It’s finished its work. But I think most applications that I would work on today are built up of tens or hundreds of functions behind the scenes all doing things.

[00:35:47.690] And it sounds like if you haven’t made that move to event driven or sort of like a what would I call this a cloud native approach to an application, Lambda is probably not going to be a good fit for that use case. If you’ve got like a traditional three tier application that’s serving Web content, maybe that’s not a good fit. Right away, you’re going to have to do some work.

[00:36:09.890] – Julian
Potentially yes, but some of that work can really pay off. And I'm going to use an example. Let me think of something, something like PHP in a LAMP stack. So, you know, a very common architecture all over the web: Linux, Apache, MySQL and PHP. Lots of websites run on that. The thing is, if you're going to be running Apache, yeah, that's a server to manage.

[00:36:31.010] You've got to scale it up, you've got to scale it down. The whole bunch of work for managing Apache may not necessarily be hard, but, you know, this is something you've got to think of. So a serverless approach would be migrating that to API Gateway. And, you know, it's not a lift and shift, but it's not that complicated an architectural change. And the thing is, you'll end up removing a whole bunch of code and removing a whole bunch of patching; API Gateway is all sorted for you.

[00:36:54.110] Now, MySQL can stay the same, connections to MySQL and PHP can stay the same, and the L can sort of stand for Lambda. So what people have done is, they don't necessarily have to split their application up into a whole bunch of different Lambda functions. They can start with API Gateway, which, as it did with Apache, has a whole bunch of different routes. You can have ten different routes within your application and still send them all to a single Lambda function.

[00:37:21.470] Yeah, it's going to be a big Lambda function. It doesn't have to respond to a single event; you could have 10 different events going into that Lambda function. So that is a way that you can take advantage of some of the benefits: you don't have to manage your API, and you can sort of, in a way, lift and shift some of your code, your PHP running code, into a Lambda function and then take on the benefits of that.
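
A minimal sketch of that "one function, many routes" shape, assuming the API Gateway proxy event format (the routes and responses are illustrative):

```python
# monolith_handler.py - one Lambda function serving several API routes.
import json

ROUTES = {
    ("GET", "/orders"): lambda event: {"orders": []},
    ("POST", "/orders"): lambda event: {"created": json.loads(event.get("body") or "{}")},
    ("GET", "/health"): lambda event: {"ok": True},
}

def handler(event, context):
    key = (event.get("httpMethod"), event.get("path"))
    route = ROUTES.get(key)
    if route is None:
        return {"statusCode": 404, "body": "not found"}
    return {"statusCode": 200, "body": json.dumps(route(event))}
```

Splitting individual routes out into their own functions later is then mostly a matter of moving entries out of that table.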

[00:37:44.210] And that's going to scale up and down automatically. And then, I'm the first one to say, don't go and rearchitect your application just for funsies. I mean, we live in the real world: I'm bored, I don't have anything else to do at work, I'm going to rebuild my application.

[00:38:00.830] But, you know, an approach is: you have an issue, you have a scaling problem, you have an availability problem, you've got a risk problem, any kind of thing, or you want to add some more functionality, very often a common use case. And, you know, people tell us they love the approach of, well, what's the quickest way that I can get that functionality or make this change?

[00:38:22.880] And, you know, the old way of doing things, when you know how to do things, is you set up a bunch of little servers to manage and patch and do all that. Yeah, that's one way of doing it. But people are sort of starting to come around to the idea of, well, if I use these managed services and Lambda, actually adding that little bit of functionality is way quicker, because I don't have to manage all the infrastructure underneath. Now, if that bit of functionality is something new and you haven't already got a fleet of servers hanging around for it, well, that's great.

[00:38:49.220] You don't need to set up a new pod, a new cluster or a new whatever to do that. So we get a lot of people who just need to add some functionality, or they've got a scalability challenge, or they want to break things apart, because now it's taking them six weeks to get a little change into their application because they've got this huge, big, chunky application. And it's like, oh, man, this is just way too hard to iterate on.

[00:39:11.840] My CI/CD pipeline catches fire whenever I do it. I have ten people pulling their hair out and having to manually merge changes and all that kind of operational stuff. And so people say, my PHP app, I've got ten of those little functions already in the app. Well, why don't I just split those ten little routes out and make them 10 individual Lambda functions? Because, you know what, how many people are actually writing to my LAMP stack?

[00:39:36.630] Well, not that many. People aren't writing much into my database, but reads from my database, well, I'm getting flooded by these reads. And, you know, the scalability of my writes and the scalability of my reads are at the moment wedded together, and maybe that's not ideal, or I'm having some issues with that. So someone can literally just start and say, well, why don't I just pull out the reads for my application and move those over to another Lambda function that can scale independently?

[00:40:02.120] I mean, this does go into the concept of microservices and, you know, breaking up a monolith. And it's not always the best idea; there are great things about monoliths, they're easy to reason about and everything. But if you've got a problem, you've got an issue, you're hitting some sort of challenge, then heading to a microservices approach is a good way to do this. And Lambda is, in a way, a really good fit for these microservices components.

[00:40:28.960] – Ethan
Julian, I want to ask you about state, kind of a basic thing here. When I'm running my Lambda function, it runs, it does something or it computes something; there's a result, maybe. So there's some state there, some chunk of data that probably needs to go somewhere. Now, you had mentioned, when you're first standing up that Lambda function, that maybe it needs to connect to a database.

[00:40:53.110] So is that something that happens where it writes to a database that’s somewhere, or is it the responsibility of the code that called the Lambda function to receive the result back and then it writes to the database?

[00:41:09.540] – Julian
It can be all of the above, just to annoy you and not give you a single answer. You're in IT: it depends. Yes, state is obviously handled differently for serverless and functions as a service systems, but it doesn't have to be handled entirely differently, because Lambda functions can be doing writes to databases. So if your state is in a database, the Lambda function can just write data to a database and read data from a database. And even then, I mean, we're talking a lot about Lambda because that is the focus of what we're chatting about.

[00:41:40.920] But there are also a load of other serverless integrations that can read and write things to databases without going via Lambda. API Gateway, for example, can write something into a database directly. So if that database write doesn't need to be transformed or anything, why even have a Lambda function there in the middle? So, you know, you've got all these other kinds of services which can move things around.

[00:42:08.010] So there are a couple of different ways to read and write from a database; that is your state, no problem. Your database can be something traditional, a relational database, a key value store, whatever, any number of databases. Obviously, if Lambda is going to scale out hugely, you need to start thinking about, well, am I going to be overwhelming my database in the back end? That's the same kind of concern you'd have if you were running EC2 instances, containers or anything like that.

[00:42:35.160] Another one is file storage. So last year we came out with EFS for Lambda. EFS is our cloud NFS storage system, and basically you can attach an NFS mount to Lambda functions. So when a Lambda function spins up, it's got access to a file store, and it can read and write data from NFS. That's another way of doing state. The other way of doing state is passing state as the events in your application. When I was talking earlier about grokking this kind of event driven thing, you know, state doesn't necessarily need to be a common collection, because sometimes state is ephemeral.

[00:43:14.680] It's only temporary. For example, someone uploads a file to S3. I then want to read that file from S3 and do a transformation on it. Let's think, for example, image manipulation. OK, I'm doing image manipulation on this: it's a photo taken on a green screen, and I want to write a Lambda function that's going to read that image from S3, remove the green screen, and dump it to another S3 bucket.

[00:43:40.900] So I've got a source bucket and I've got a destination bucket. But I've got another Lambda function that then pulls from that second destination bucket, which is now a source bucket, does some transformation, maybe adds your company logo or a funky background or whatever, and pumps it into another S3 bucket. Another Lambda function can then pick that up from that S3 bucket and maybe dump it elsewhere, and it then gets sent to a third party to print on shipping labels or, you know, a big poster or that kind of thing.

[00:44:10.930] So all of those states are transition states, and it doesn't have to be the same object storage or the same file system. It's a pipeline that you create that moves the state, moves this data, through whatever processing you're doing.
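
A hedged sketch of one stage in that kind of pipeline: a function triggered by an S3 upload that reads the object, transforms it, and writes the result to the next bucket (the bucket variable and the transform are placeholders):

```python
# pipeline_stage.py - sketch of one stage in an S3-to-S3 processing pipeline.
import os
import boto3

s3 = boto3.client("s3")                  # created once per cold start
DEST_BUCKET = os.environ["DEST_BUCKET"]  # hypothetical configuration

def remove_green_screen(image_bytes):
    return image_bytes                   # placeholder for the real transform

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        processed = remove_green_screen(obj["Body"].read())
        # Writing to the next bucket can trigger the next function in the chain.
        s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=processed)
```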

[00:44:26.140] – Ned
You could even have like a cron Lambda that runs and cleans up all these buckets, removing images that have already been processed, so they don't just sit around taking up space.

[00:44:34.250] – Julian
Now you’re thinking, now you’re thinking.

[00:44:35.860] – Ned
Woah look at that.

[00:44:37.960] – Julian
Or even better, something like S3 has a lifecycle feature where, without having to write any Lambda functions or any code, you can just say, anything that hasn't been accessed for a week, move it out to archive storage. Once it hasn't been accessed for three months, or seven years, whatever the compliance kind of thing is, get rid of it. Yeah, that's the ultimate thing. I mean, Lambda is cool, but if you don't have to write Lambda functions, don't have to write code, that's even better.

[00:45:00.280] – Ned
Even better, yeah. I'm curious, is that using Lambda under the covers, the lifecycle?

[00:45:05.800] – Julian
You know, I actually don't know. And I like that I don't know, because it's cool. The magic happens behind the scenes.

[00:45:13.300] – Ned
So if I'm developing one of these Lambda functions, and I've actually used Lambda and some other functions as a service things to do what I would normally have like a cron box doing, because we used to have a utility box that just ran all these cron jobs all the time, and this is like the replacement for that. If I'm developing this code, can I do it locally? Can I do it inside a code editor and run it locally to test it out, or do I have to upload it, test against Lambda, change it, upload it again, wait for my environment to warm up? Like, can I run it locally, I guess, is what I'm asking.

[00:45:43.510] – Julian
You definitely can. And it's certainly faster than copying it onto a USB drive and then driving it over to an AWS data center facility and plugging it in. Yeah, let's not get too silly, this is getting ridiculous. Yes, you can develop Lambda functions locally, superbly well. There are a number of different ways you can do this. There's the AWS CLI, which means you can interact with the Lambda service remotely from your local workstation, so you can package a function, you can upload it, you can invoke it, you can see the results, you can pipe the logs.

[00:46:15.550] Anything you can do with Lambda you can also do with the AWS CLI. Now, there are other serverless frameworks that are really good for helping to develop serverless applications, because they just make that process a little bit easier. There's Terraform, there's the Serverless Framework, and our own homegrown one at AWS, which is called the Serverless Application Model, or AWS SAM. Now, they do a whole bunch of different things. Part of it is a CLI to create all the functions; part of it is managing, packaging and deploying the functions, that whole kind of thing.

[00:46:46.930] What they also do is allow you to test your functions locally. What they actually do, and I'll use SAM as an example, is download a Docker container which pretends to be Lambda behind the scenes and invokes the function locally. So people use this for testing, particularly for unit testing as well. You run your piece of code; the first time you invoke it locally, obviously, it takes a bit of time for that container to come down, but then each iteration of the Lambda function after that is super quick.
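
For that quick local iteration, one common pattern is to call the handler directly in a unit test with a canned event before reaching for a local Lambda emulator such as `sam local invoke`; a rough sketch, reusing the illustrative module from the earlier example:

```python
# test_monolith_handler.py - sketch of unit-testing a handler locally.
import json
from monolith_handler import handler   # hypothetical module from the earlier sketch

class FakeContext:
    aws_request_id = "local-test"       # minimal stand-in for the Lambda context

def test_get_orders_returns_200():
    event = {"httpMethod": "GET", "path": "/orders", "body": None}
    response = handler(event, FakeContext())
    assert response["statusCode"] == 200
    assert "orders" in json.loads(response["body"])
```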

[00:47:17.210] – Ethan
Ah, that's really interesting, because if I can do that with a Docker container, that implies that the execution environment that Amazon is running is separate from whatever the orchestration system is that Amazon uses to keep all the micro VMs alive and so on, in effect.

[00:47:29.830] – Julian
Correct. And I did say earlier that I was going to go into the container image format, and I will do that, because that adds another whole layer of mind-blowing coolness to it. But yeah, let's stick with local invocation, because this goes further as well. If you want to test API Gateway, for example, there is a local API Gateway that can pretend, can mock being API Gateway, so you can run an API, you can run a Lambda function.

[00:47:57.530] There are mocks for a whole bunch of other kinds of systems. But something that's also interesting to think about is that the more you try to force things to test and invoke locally, the more you're going to start bumping into difficulties at some point, because in a way you're trying to replicate the cloud locally to do your testing. Sure, for a Lambda function; sure, for an API; a whole bunch of that kind of stuff.

[00:48:21.400] With a Lambda function, you want to iterate really quickly: change a variable, rerun, aw no, the output is not correct, the input's not correct, let's iterate, iterate, iterate. But once you start connecting to a bigger system, and you're connecting to a shared database, you've got multiple Lambda functions, or you've got that whole S3 image processing pipeline, then mocking S3 locally and pretending to have that running is just not worth the hassle. And so people start to think about it as, well, why don't you bring your testing to the cloud rather than your cloud to your test environment?

[00:48:53.890] So use your local testing for the Lambda function, iterate quickly, quickly, quickly. But as soon as you start integrating with these other systems, it's not going to take that long to upload that code to the Lambda service again and then run it with the full, unbridled power of the cloud.

[00:49:11.380] – Ned
Right. I think you said it just right: you're doing your unit testing, that initial test just to make sure, does it work? Once that unit testing is complete, you move to integration testing. OK, now that's going up; you're deploying it up in the cloud to test it. And then when you do your end-to-end test, now you're working with all kinds of live systems. So if you're walking through that testing lifecycle, unit testing with SAM is happening locally, and then everything else you probably want to push up.

[00:49:37.060] – Julian
Yeah, definitely. And I mean, people talk about the offline model and all that. Yes, well, you know, when I'm on an airplane and I need to be able to iterate on my Lambda functions, like, yeah, we hear that, but you can only go so far. So yeah, you can test Lambda locally, you can test API Gateway locally, and there are a few other things you can do. But if you're going to be extending or testing further, you know, think of the bigger picture and test with the power of AWS behind you.

[00:50:02.230] – Ethan
Sometimes it’s OK just to sit on the airplane and relax.

[00:50:06.820] – Julian
Yeah, airplanes don’t have Internet access yet.

[00:50:10.620] But they do.

[00:50:13.120] – Julian
Do they have Internet access? No, no, no, no.

[00:50:16.570] – Ned
Now, you mentioned a few times, Julian, about container image support. And I know in the real world of Lambda it's not containers, right? It's the micro VMs. But people love containers, and you said there's some additional functionality there. So walk me through that. What's going on with container images?

[00:50:31.780] – Julian
Yeah, absolutely. Well, stepping back, containers is an interesting word because it means a whole number of different things. When people talk about containers, they're often talking about isolation technology. We talked about Firecracker and those micro VMs before; that's an isolation technology. So Lambda does isolation using Firecracker micro VMs instead of container isolation. Then there's containers as portability: containers are something that you can move between different environments.

[00:51:04.450] You can try them on your own laptop, you can upload them to the cloud. Lambda does part of that; it's not 100 percent across every kind of thing, but Lambda does have a way that you can use containers to test things locally and upload things to the cloud. Another part of containers is all the tooling around them: CI/CD pipelines, Docker and the Docker CLI, security scanning, and all these amazing tools people have made for containers that are super awesome, super useful.

[00:51:39.660] Enterprises and start-ups are all using this amazing tooling. Now, with Lambda and containers, it's the fourth bit where we've done a cool thing: containers also means a packaging format, and that basically means your Dockerfile and what you put in your Dockerfile to create this eventual thing that you're going to run in an isolated environment. And so what we've done with Lambda is we've said you can now create your functions packaged as container images, and you create them using a Dockerfile.

[00:52:13.860] So it's not that different from a Dockerfile you may know or love already. You say, I want my function to start with a Java or Python or Node.js 12 base image, and you pull that base image down. We have some provided images from Lambda, and those images help make the connectivity between the Lambda function code and the Lambda service. Or you could use Alpine Linux, for example, if you want, and then there's another container component you can pull down to make that connection to Lambda.

[00:52:48.150] And then into that you copy your code and install your dependencies. This could be pip installs, or npm modules for Node, and you can literally build up the package of your function that's going to run. Then you upload that to ECR, which is our Elastic Container Registry, so think of it like AWS's version of Docker Hub, for example. And that's just a container image that you then upload, and all it does is contain all the stuff in one big blob.

[00:53:19.440] I'm using really imprecise words here: an image of all the stuff that's going to make up your Lambda function. When we were talking before in detail about how the Lambda service runs your function, it does it in the same way, but what it actually does is pull down your container image and build the execution environment from that, so all the rest stays the same. There still needs to be an event trigger, there still needs to be a handler, it still needs to be authored to work with the Lambda service, but you can now package it as a container image and Lambda will run it.
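For a rough sense of what that flow looks like once the image is built and pushed to ECR, here is a Python sketch using boto3 that registers a function whose code is a container image rather than a zip. The image URI, role ARN, and function name are placeholders, and in practice you would more likely do this through SAM or CloudFormation than raw API calls.

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a Lambda function from a container image already pushed to ECR.
lambda_client.create_function(
    FunctionName="hello-from-a-container",  # hypothetical name
    PackageType="Image",                    # container image instead of a zip package
    Code={
        # Image previously built with `docker build` and pushed to ECR (placeholder URI)
        "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/hello:latest"
    },
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder execution role
    Timeout=30,
    MemorySize=512,
)
```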

[00:53:52.390] The first thing people say is, well, these can be up to 10 gig in size, that's going to be horrible performance, what on earth are you thinking? And we go, aha! We've got some clever tricks behind the scenes: what we do is actually cache those image layers close to where Lambda runs. So if you are using the Node.js 12 provided image from Lambda, we pre-cache that everywhere. There are three levels of caches, and so on, and when your function then runs for the first time, during that cold start, we cache that container image, and more versions add more layers of it to the cache.

[00:54:21.960] So, you know, a Dockerfile produces a whole bunch of different image layers, and we cache as many layers as we can. Ultimately, when your function runs, hopefully much of the stuff is already in the cache, and then you can just run your container image and function performance should be the same, obviously depending on what you're going to do.

[00:54:41.010] But the advantages of this are a few fold. First of all, you can use the Docker CLI to build your functions. You can use a Dockerfile, and you can use all the container tooling you love to create those images. You could put this in a pipeline, you could scan the images for vulnerabilities, you can ship them into your artifact repository, you can do a whole number of things with that. And then ultimately, when you spit out your container image at the end, Lambda is going to pick that up and run your function as it did before, just packaged in a different way.

[00:55:14.430] – Ethan
I am amused that you said container tooling you love.

[00:55:20.790] Well, Julian, as we're getting to the end of the show here, I want to ask you about the state of Lambda today. As we're recording this at the end of March 2021, are there brand new features you'd like to highlight, or maybe some roadmap things, if you're allowed to talk about them, that you could tease us with about what's coming up next for Lambda?

[00:55:36.390] – Julian
Yes, certainly. Let me just look at our roadmap tool, and Lambda is coming out with... oh, now, that was close. I nearly got into trouble giving up all of the roadmap things. We need to keep some of the toys.

[00:55:46.850] Yeah, I mean, we've come out with some cool stuff. Just going back: the millisecond billing, the larger Lambda function sizes, container image support. There's something called Lambda extensions, which allows you to plug in observability and security tools. One of the things we haven't talked about is that we're trying to connect with partners much more. Lambda doesn't need to be unique: if you've got container tooling you already use, or if you use an observability partner, it shouldn't be weird, it shouldn't be difficult, it shouldn't be different with Lambda. Lambda extensions helps with that.

[00:56:20.190] And, you know, one of the real premises of serverless architectures is that when you hand over the operational responsibility for a lot of these functions, they are always just going to get better, bigger, faster, cheaper as things go on. So the roadmap is quite simple: there's always going to be more functionality coming.

[00:56:39.660] It's always going to get cheaper over the long run. You're going to be able to do more things: bigger function sizes, cheaper with one-millisecond billing, all of that is going to happen. I know you're sitting on the edge of your seats, everyone waiting for my big drop of the big Lambda features that are coming. I'm probably going to disappoint you, but what I will say is that we are always listening to customers.

[00:57:02.540] And I'm not saying that in just a flippant way, that yes, we listen to customers. We do that literally: 90 percent of all the stuff we ever build is based on customer feedback and requests, and the other 10 percent are things we sort of invent on your behalf, where we think of the cool, crazy, awesome stuff that you hadn't even thought of that we can do. So what I would suggest is that serverless is really an awesome mindset, to be able to think of not managing any of this infrastructure yourself.

[00:57:30.710] The security is so much better, the scalability is so much better. Hand that over to AWS; we're going to do a great job with it. You get to focus on your business logic, your business code, and you can fly. And so we have companies taking a serverless-first approach, where they decide to go serverless for as many things as possible. Yes, they may bump up against the constraints and have to spin out and do other kinds of things.

[00:57:51.230] So that's what I'd suggest: really have a look at serverless as a mindset. Don't get too bunkered down in all the technical details. It's super easy to get going, there's a generous free tier, it's easy to play with and hopefully easy to learn. And reach out if you've got any questions; I'm more than happy to help you.

[00:58:10.790] – Ethan
Your title might be developer advocate, Julian, but I see evangelist, man, I think.

[00:58:16.430] – Julian
Well, that is part of the job, yeah. And we have a whole team of evangelists, but as advocates we work within the product org, so I'll say it's slightly different. But I come from an infrastructure background. I was literally racking and stacking Windows servers, Linux boxes, virtualization hypervisors. I did all of that, worked with infrastructure teams, firewall rules, load balancers, all that kind of stuff.

[00:58:42.680] And that's what sort of kickstarted the buzz for me with serverless. It was like, oh man, there's a lot of this stuff that I just don't want to have to look after. I've been doing it for twenty-five years and I love the tech, it's all cool, but there's got to be a better way. And so, yeah, that was my initiation story, when the light bulb went off for serverless. Now I get to play with all the cool toys.

[00:59:02.630] – Ethan
Well, Julian Wood, how do people follow you on the Internet? Have you got Twitter, a blog, maybe a book you wrote that you'd like to tell people about? Go for it.

[00:59:10.220] – Julian
Certainly. Well, the best way to find out all about serverless and AWS is a website called serverlessland.com, and that's got blogs, videos, learning path series, everything to do with serverless, on a daily basis. You can follow me on Twitter, I'm Julian underscore Wood. And I'm here, there and everywhere.

[00:59:31.460] We have a thing called Tech Talks with AWS where we do content. We've got serverless office hours every Tuesday, well, afternoon or evening in London, sort of morning if you're on the West Coast and somewhere in between if you're on the East Coast. But yeah, every week, serverless office hours, an hour of us streaming on Twitch. Bring all your questions, bring all your concerns. We'd really love to hear from you.

[00:59:56.660] And there's one other thing. If you want to learn about serverless and you know absolutely nothing, and I've partly confused you and partly inspired you, there are a couple of workshops to go and have a look at. One is called Innovator Island, and another is called Wild Rydes, the fictional unicorn startup, of course. Why wouldn't you? Yeah, those are a great place to play when you don't need to know anything; you just need an AWS account.

[01:00:20.150] And literally in a few hours you can link a whole bunch of these things together, see some code, and hopefully it'll sparkle some neurons in your head that connect all the stuff together.

[01:00:29.180] – Ethan
Sparkle some neurons, I like that. Julian, again, thank you very much for joining us on Day Two Cloud today. This was great; I got a lot from this conversation. Again, much appreciated.

[01:00:39.590] And thanks to you out there for listening. Virtual high fives. You really are awesome for making it through to the end of this show and bolstering your knowledge, about serverless in this case. If you have suggestions for future shows, things that you want Ned and me to cover, we want to hear from you. You can hit either of us up on Twitter at Day Two Cloud show, or fill out the form on Ned's fancy website, Ned in the Cloud dot com.

[01:01:02.120] Now, if you'd like to hear more from the Packet Pushers Podcast Network, we have a free resource for you: the weekly newsletter, Human Infrastructure Magazine. When you subscribe to that, we immediately sell your email address to anyone that'll pay us... we don't do that. We don't do that at all, genuinely. We send it out weekly, and Human Infrastructure Magazine is all about the best stuff that we've found on the Internet, plus opinion and analysis of what's going on in IT.

[01:01:24.590] And we send it to you for free. Thousands of people subscribe, and you can too. All you've got to do is go to packetpushers.net/newsletter, put in your info, and you'll get the next issue. Until then, just remember: cloud is what happens while IT is making other plans.

Episode 92