
Day Two Cloud 183: How Did We Get To WebAssembly And What Is It For?

Episode 183


WebAssembly (Wasm) is an up-and-coming technology that’s probably going to fall into the lap of operations folks. WebAssembly is essentially a specification for how to compile things to a bytecode format and how to execute that bytecode. On today’s Day Two Cloud we start to peel the onion on what WebAssembly is, what it’s used for, and why you might want to get your hands on it.

Our guest is Matt Butcher, CEO at Fermyon Technologies, a self-confessed Wasm fanatic. He compares WebAssembly to a Java Virtual Machine but with new features, particularly around security, that make it worth investigating.

We discuss:

  • WebAssembly and what it does
  • Server-side vs. client-side execution
  • Use cases for Wasm
  • Wasm performance challenges
  • More


  1. Wasm isn’t applicable everywhere
  2. How serverless could look if we could streamline the startup aspects of it
  3. Try it! Wasm is going to catch on quickly, so get oriented

Sponsor: CDN77

Why should you care about CDN77? To retain those 17 out of 20 people who click away due to buffering. CDN77 is a global Content Delivery Network (CDN) optimized for video and backed by skilled 24/7 support. Go to cdn77.com/packetpushers to get your free, unlimited trial.

Show Links:

@technosophos – Matt Butcher on Twitter

Matt Butcher on LinkedIn

Fermyon Blog

JavaScript: The First 20 Years – Zenodo

Runwasi – GitHub


Docker Desktop with WASM – Docker

Fermyon Quickstart – Fermyon

Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview) – Microsoft



[00:00:01.130] – Ethan
Why should you care about CDN77? To retain those 17 out of 20 people who click away due to buffering. CDN77 is a global content delivery network optimized for video and backed by skilled 24/7 support. Visit cdn77.com/packetpushers to get your free, unlimited trial.

[00:00:33.690] – Ethan
Welcome to Day Two Cloud, and today’s topic is Wasm, that is, WebAssembly. It’s a topic that Ned and I have hit on before on Day Two Cloud, and we’re going to go deep today. Well, we’re going to talk for a long time, and it’s going to seem deep. And at the same time, Ned, it feels like we barely scratched the surface.

[00:00:51.030] – Ned
It really does feel that way. I think we almost hit an hour with this one, and we could have gone for another two hours, because the topic is just broad-ranging. And Matt Butcher, who’s our guest, is just a very engaging speaker to begin with. So what I got out of it was: WebAssembly is an up-and-coming technology, and it’s probably going to fall into your lap as an operations person at some point. So you should definitely bone up on it now, so you’re ready for when it happens.

[00:01:19.210] – Ethan
As Ned said, our guest is Matt Butcher. Matt is the CEO of Fermyon Technologies; that’s a startup just getting ready with products. It’s not a sponsored show today, it’s just Matt. We’re leaning hard into his expertise in this area, and man, he does know a lot. You will enjoy this conversation very much with Matt Butcher.

[00:01:36.750] – Ethan
Hello Matt.

[00:01:38.090] – Ethan
In a sentence or two, would you tell the nice folks listening who you are and what you do?

[00:01:43.230] – Matt
Sentence or two? That’s short. Oh man, I just wasted both of them. No, I’m Matt Butcher. I’m the CEO of Fermyon. I might be the world’s biggest WebAssembly fanatic at this point. Fermyon is working on technologies based around WebAssembly, and we’ve been doing a lot of fun stuff for the last couple of years.

[00:02:02.580] – Ethan
WebAssembly fanatic. Apparently we have the right person on the show then, because we’re talking all about WebAssembly today. So, people that are listening here, you’re talking to infrastructure engineers, people that are hands-on with technology. For those folks who maybe aren’t familiar with WebAssembly, can you define it for them in a nutshell? Let’s try to keep it concise.

[00:02:21.620] – Matt
Yeah, I think there are a lot of very complex definitions of WebAssembly kind of floating around. But at the end of the day, it’s really just two things: a specification for how to compile things to a bytecode-like format, and a specification for how to execute that bytecode format. So probably the easiest comparison is that it’s sort of like the new generation of the Java virtual machine or the .NET CLR, but with a bunch of new, interesting features, particularly on the security side, that make it different enough to warrant having another one of these things. We’re not just reinventing the same technology again; this one is different in kind, and in particular the difference is security. With the JVM or the CLR, the default disposition of the virtual machine toward the user, toward the software developer, is: well, we trust that the code you’re writing is good code, and consequently we give you access to system resources like the file system and the network and things like that. And yes, you can shut them off, but the default disposition is to trust the code. With WebAssembly, the default disposition is: don’t trust the code.

[00:03:30.900] – Matt
Right. You don’t want untrusted code running in your browser, and that was what WebAssembly was originally built for. Or rather, I should say it this way: when you run untrusted code in your browser, because it’s floating around out there somewhere, you don’t want it to be able to do anything nefarious on your system. So the VM, a language VM, was built with that kind of default security model, and that turns out to be a good fit for a number of other cases besides just the web browser.
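That deny-by-default posture can be sketched in a few lines of code. The following is a toy illustration in the spirit of WASI’s directory preopens (where the host explicitly grants a module access to specific directories); the class and method names are mine, not any real runtime’s API:

```python
class Sandbox:
    """Toy deny-by-default capability model, loosely modeled on WASI preopens."""

    def __init__(self, preopened_dirs=()):
        # Only paths the host explicitly grants are reachable at all.
        self._allowed = set(preopened_dirs)

    def open_path(self, path: str) -> str:
        # Everything is denied unless a capability was granted up front;
        # this is the inverse of the JVM/CLR "trust the code" default.
        if not any(path == d or path.startswith(d + "/") for d in self._allowed):
            raise PermissionError(f"capability not granted for {path}")
        return f"opened {path}"


host = Sandbox(preopened_dirs=["/data"])
print(host.open_path("/data/config.toml"))  # opened /data/config.toml
# host.open_path("/etc/passwd") would raise PermissionError.
```

The point of the sketch is only the shape of the model: the guest code cannot reach anything the host did not hand it first.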

[00:03:58.680] – Ethan
Yeah. Is it fair to compare WASM WebAssembly.

[00:04:02.890] – Ethan
To a compiler or an interpreter?

[00:04:06.030] – Matt
Yeah, actually, I think you can talk about it in that family. Again, the JVM is probably the best thing to compare it to, which is a bytecode-level interpreter. It interprets a binary, and it might JIT-compile or ahead-of-time compile, both features that WebAssembly has as well. But that’s what it is at the core. Right, right.
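The “bytecode format” side of the spec is concrete enough to see in the bytes themselves: per the WebAssembly specification, every module begins with an 8-byte preamble, the magic bytes `\0asm` followed by a little-endian 32-bit version (currently 1). A minimal Python sketch of checking that preamble (an illustration, not from the episode):

```python
import struct

def read_wasm_header(data: bytes) -> dict:
    """Parse the 8-byte preamble every WebAssembly binary module starts with."""
    if len(data) < 8:
        raise ValueError("too short to be a Wasm module")
    magic = data[:4]
    if magic != b"\x00asm":
        raise ValueError("missing \\0asm magic; not a Wasm module")
    # The version is a little-endian uint32 right after the magic bytes.
    (version,) = struct.unpack("<I", data[4:8])
    return {"magic": magic, "version": version}


# The smallest valid module is just the preamble: an empty module.
empty_module = b"\x00asm" + struct.pack("<I", 1)
print(read_wasm_header(empty_module))  # {'magic': b'\x00asm', 'version': 1}
```

Everything after that preamble is the sectioned bytecode that an engine interprets, JIT-compiles, or ahead-of-time compiles, as Matt describes.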

[00:04:26.690] – Ned
And if I was making the distinction here, if I think of a JVM, I’m assuming that my base language here is Java. So I’m using Java, I’m writing it, and then I’m using it to translate things to bytecode. Is there a similar language distinction with WASM? Or do you have more options when it comes to the base language you’re using before you convert it?

[00:04:47.050] – Matt
I think the two biggest bytecode runtimes that we’ve seen have evolved in such a way that a language coevolves with them. So there’s C# and the entire .NET family of languages that all sort of coevolved with the CLR, and there’s Java, of course, that evolved with the JVM, and then all the languages that also run on the JVM, within reason, but all of them are really sort of specially built to run on that runtime. WebAssembly’s early promise, which has continued to play out, is that it should be possible to take any language, whether compiled or scripting, and eventually get it running on the WebAssembly virtual machine, the WebAssembly execution context. And so the earliest target language was C, because, I guess, if you look at the history of security, C is top of the problem-child list; but also, if you look at the history of old libraries that you might want to be able to support, C is pretty much the top of that one too. So C was a very early target. Now you’ve got most of the top 20 languages in there, including Python and Ruby, which both recently added support.

[00:05:58.580] – Matt
VMware has been working on PHP support. So you’ve got a bundle of scripting languages, and then on the systems-language side, you’ve got Go and Rust support, C++ support. And then the coolest part is at that intermediate-language stage: the compiled-binary support is actually really good for WebAssembly, and Java is kind of coming along too. So we’re really seeing a big chunk of languages come along.

[00:06:24.670] – Ned
It’s funny that you mentioned C upfront, because I know C has been getting beaten up lately by the Rust aficionados for being memory-unsafe, and Rust can solve that for them. But here’s another context in which you can solve that memory-safety issue: by putting it in a little box that it can’t get out of.

[00:06:44.210] – Matt
Yeah, there’s a funny rumor circulating. I have not verified this, but it makes sense, and it’s that when the Office 365 team was porting some of Excel over to run in the browser, they ran across a couple of really gnarly C libraries that have been around since dinosaurs still walked the Earth and Excel ran on a 386. Rather than attempt to rewrite that logic in JavaScript, they took that C library, compiled it to WebAssembly, and then hooked it up in the browser. And whether it’s apocryphal or not, the idea behind that story expresses some of the core value proposition of WebAssembly: we can take code that was written a long time ago and give it life in new and interesting contexts, without necessarily having to do massive rewrites.

[00:07:44.850] – Ned
Got you. Now, that code, like the Excel code, for instance, is that something that’s running server side on the Office 365 servers, or is that something that’s running client side within the context of my browser?

[00:07:58.090] – Matt
Again, assuming the story is not apocryphal, it runs in the browser, on their side of things. This is the interesting thing about WebAssembly. It was designed to run in a web browser. That was its initial use case, and it definitely found purchase there: in Office, in Figma, in the Adobe suite. But again, those characteristics of it, the kind of secure runtime, the ability to connect it with the outside environment, and the ability to compile lots of different languages to it, lend themselves to a lot of cases that aren’t browser-oriented. For me, there are four big areas where WebAssembly looks promising, so the browser is obviously the first one, right? But sort of adjacent to that, you could say, all right, another interesting feature of WebAssembly is that the binary format is fairly compact, you can run it in a fairly small runtime, you can run it in an interpreted mode, and you can deliver the binaries fairly quickly over the network. These were all virtues for the browser, but they line up pretty well with what you need for an IoT story as well, where you might be dealing with constrained devices that also need, again, I’m not going to pick on IoT as historically insecure, but IoT has been historically insecure.

[00:09:17.350] – Matt
And here’s an interesting way to layer in some security.

[00:09:21.080] – Ethan
Yeah, Matt, you can call IoT currently insecure. You don’t have to even mention historically.


[00:09:27.450] – Ned
We all know the S in IoT is for security.

[00:09:34.470] – Matt
I worked at an IoT startup years ago called Revolv, and we were working hard on the security model, but we would stumble across devices and go, no, this clearly couldn’t be. You can write directly to the memory register of the device over the network? No, this can’t be the way. Crap. We’ll go with “historically” because it sounds kinder, even though, really, I think a good swath of that industry is learning about security later than it should. And WebAssembly is a good way to introduce a new layer of security in there, where you can talk about running the code inside of a sandboxed environment. So another application of WebAssembly that has been kind of exciting: for years, decades, we have built applications and gotten to the point where we said, all right, now I want the user, or third-party engineers, to be able to make slight modifications to the way something runs, so I’m going to build a plug-in architecture for that. And typically our plug-in architectures kind of look like: well, I’m going to pick a programming language that I happen to like, I’m going to embed that in the system, and you’re going to write your plugins in JavaScript or Lua or whatever language I picked.

[00:10:49.780] – Matt
WebAssembly has some interesting potential there, for being able to embed a runtime that many languages can compile to, so that developers and individuals who want to extend an application can do it in their language of choice. The most interesting application of this has been, I think, SingleStore, which is a database company that went, oh, well, we could embed a WebAssembly runtime inside of the database, and then you could write stored procedures, well, an alternative to stored procedures, in, say, Python or Ruby or Rust, and have those execute inside the database. So you don’t even have to move the data out of the database to operate on it; you can do it inside the database. And I think that’s kind of an extension of that plug-in model. And then, of course, the fourth model is the one I’m most excited about, because as a longtime cloud engineer, to me it’s like, what is cloud? Well, it’s when one person supplies hardware and another person runs their stuff on it, right? And I, as an application developer, don’t have to manage the infrastructure that’s running my code.

[00:11:54.820] – Matt
And you, as the operations team, provide a generic service, but you want to protect all the different users from each other, all the different applications from each other, and of course all of your own infrastructure from bad actors. So that kind of sandboxy layer that virtual machines offered first, and then containers came along and offered second, we’re seeing that same kind of sandboxy model for WebAssembly, but for a different kind of workload. So that’s the fourth big area of interest to me. Well, the big area of interest to me, but a fourth big area of application for WebAssembly: the cloud world.

[00:12:30.830] – Ethan
So, Matt, listening to you describe some of what happens on the server side, I’m reminded of way back in the day, CGI, that was a thing we’d run on some of our web servers, and then Java with Tomcat servers. Are there any parallels we can draw from what those were, or are, to what Wasm is?

[00:12:47.030] – Matt
All things come back into fashion, right? So grab your bell bottoms and let’s talk about CGI. I guess those were really like two decades apart. But yeah, I think the WebAssembly ecosystem in some ways has some very clear parallels with the early web ecosystem. For us it certainly did. You start a technology like WebAssembly, or like Java, or even early containers, and there’s a sense in which you can tell the core of the story very quickly. WebAssembly as a technology has been around for, I think, about seven years now since it was started, and about five years since it hit 1.0. The core story was told very quickly. But then, when you start to connect it with the various ecosystems around it, in all cases that story gets kind of tricky. Containers needed a Kubernetes, and needed an etcd, and needed all kinds of things before we could build the kind of systems we build now. Similarly, WebAssembly needs application platform support for the cloud; if you’re going to embed it in IoT, it needs the ability to run on exotic devices and connect to exotic peripherals like sensors and things like that.

[00:14:05.980] – Matt
And a lot of work has been going on doing that kind of connection work. The brunt of it has been in a specification called WASI, which stands for the WebAssembly System Interface: basically an uber-project defining how a WebAssembly runtime should be connected to things like a file system or networking in a way that retains the sandbox security model, and ideally now, with the component model, which is a new part of the WASI specification, in a way that allows us to extend what we can do. We, as, say, platform engineers, exposing to our user base new features that they will be able to take advantage of, like a database connection or an HTTP outbound library or something like that. In between, we have to figure out how to make things work; we have to figure out how to make do. So early on we realized that, while WebAssembly didn’t have all the networking libraries set up so that we could build something that functions closer to, say, a servlet, where you just pass the network connection into the user code, we did actually build a CGI-like system that we called Wagi, the WebAssembly Gateway Interface.

[00:15:22.560] – Matt
It was CGI 1.1 compliant. So you were not far off the mark when you asked what we can learn from CGI. And it was because we knew the WASI support for the file system, environment variables, clock, and random were all really well done and stable already, while the pieces for networking features and database features and things like that were still in flight. And we said, hey, we can figure out how to make this work now; we just need to rewind to 1996. Thankfully those days are sort of passing by, and we’re moving on into the next big growth area for WebAssembly. But it was a fun time, kind of dusting off the old 1996 programming books that are still somewhere on the bookshelf behind me and going, what can I still learn from these books, and what problems can I solve today based on how we were trying to solve problems back then?
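The CGI/1.1 contract Matt is describing is simple enough to sketch: request metadata arrives as environment variables (`REQUEST_METHOD`, `PATH_INFO`, and so on), any request body comes in on stdin, and the module writes headers, a blank line, and a body to stdout. Here is a toy Python handler in that style; the function name and shape are mine, not Wagi’s actual code:

```python
import io

def cgi_style_handler(environ: dict, stdin: io.BytesIO) -> str:
    """Toy CGI/1.1-style handler: env vars and stdin in, response text out.

    This mirrors the contract described for Wagi: the module never opens a
    socket itself; the host passes the request in and reads the response
    out, so the networking stays on the host side of the sandbox.
    """
    method = environ.get("REQUEST_METHOD", "GET")
    path = environ.get("PATH_INFO", "/")
    body = stdin.read().decode() if method == "POST" else ""
    # CGI responses are headers, then a blank line, then the body.
    return (
        "Content-Type: text/plain\n\n"
        + f"You sent a {method} to {path}"
        + (f" with body {body!r}" if body else "")
    )


resp = cgi_style_handler({"REQUEST_METHOD": "GET", "PATH_INFO": "/hello"}, io.BytesIO())
print(resp)
```

The appeal of this shape, as the episode notes, is that it only needs file-system, environment-variable, and stdio support, all of which were already stable in WASI.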

[00:16:13.670] – Ethan
How long has Wasm been around, exactly? You said five to seven years, somewhere in there, different revs were released and such.

[00:16:21.050] – Matt
So numbers are flighty objects in my world, but I believe it was 2015 when Luke Wagner announced on the Mozilla blog that they were starting the WebAssembly initiative, and it came out of asm.js, which is a library he and others had worked on before. And then they wanted to push it right away into the W3C, and they, in this case Mozilla at that time, wanted to get Google and Microsoft and any other major browser vendors all in a room and all collaborating on this. Because if you’re going to do something in the browser and you want to make it work well for everybody, you’ve got to have pretty quick alignment among all of those. The days of feature races are hopefully gone from the browser world, and Luke and the early WebAssembly developers were really successful in doing that. And so the WebAssembly spec sort of evolved out of a cooperative environment between at least those three companies. I think it ends up being that the entire W3C working group for it is something like 14-plus companies.

[00:17:32.610] – Matt
And I think that’s kind of remarkable. So Brendan Eich and Allen Wirfs-Brock wrote a paper. I read it during the pandemic, sitting on my back porch, so it must have been around 2020, called JavaScript: The First 20 Years. And if you’re in for tech drama, really nerdy tech drama, this is probably the best paper you can read. It’s basically every single thing that went wrong in the attempt to standardize JavaScript during its first 20 years of life, from the early Netscape years to people throwing things at Ecma meetings and stuff like that. And I like to think that maybe, since the same audience, the same crowd that learned there, then started working on WebAssembly, part of the success story of WebAssembly was really learning from JavaScript how not to do things, and also how to do things right: how to get it into a standards body and how to get a process to control how you’re going to move the spec forward. So WebAssembly has comparatively been just about drama-free. No human endeavor is ever drama-free, but just about.

[00:18:43.430] – Ethan
If we look at the timing of all of this with Wasm and about when it came to life, we have these other execution environments, containers, serverless, and so on, coming on the scene at a similar time. It feels like WebAssembly could have solved some of the problems that we solve with containers and/or serverless, but Wasm didn’t get a lot of momentum compared to those other ones. So why didn’t we start with WebAssembly? Why is it catching on now, years later?

[00:19:14.750] – Matt
Well, yeah, and there are a couple of directions you could go there, because I don’t think that if WebAssembly had caught on earlier, we wouldn’t have containers. I think each of them solves a unique set of problems, but it does speak to the way our ecosystems work. Not long ago I had a conversation with a friend of mine who is not a software engineer, and at one point in the conversation he’s like, what do you do? Well, I do cloud, and I’m working on this WebAssembly stuff. And I had that moment where they’re like, oh, so can you take a look at my QuickBooks local setup? Because something’s not right. And I realized, from an outsider looking in, there’s no difference between an embedded software engineer writing C and a platform engineer running a million-node cluster. There’s no difference in their mind. And we as insiders know, well, there’s a huge difference. We make careers in niches within this big ecosystem, and I think a lot of times we accidentally build walls, right? I’m not a full-stack developer. I’m a back-end server engineer. I’m a systems programmer. I don’t know what the latest JavaScript framework, or whatever they’re called, is; that’s not where I am.

[00:20:29.530] – Matt
And I think WebAssembly sort of grew up in a world adjacent to cloud native, and early on we didn’t really see much cross-pollination. And this is just one of those things where, by happenstance, there were no connections, and then one day, by happenstance, there were three connections, and the next thing you know the cross-pollination really starts and you start to see momentum build. But for whatever reason, the two, containers and WebAssembly, sort of grew up in isolation from each other, and we didn’t see a lot of early interactivity; at least if there was any, I don’t know about it. And then at some point Solomon Hykes sort of discovered it and tweeted that if WebAssembly had been around in 2008, then we wouldn’t have built containers. And then the next tweet after that said, well, that’s not exactly what I meant, I might have been a little overenthusiastic, but it’s a really cool technology. And I think that’s just the way tech works sometimes. And also I think there’s a sense in which, once WebAssembly hit maturity... okay, so this is another thing, right?

[00:21:35.950] – Matt
I think we tend to want to rightly scope any of our projects and say, okay, I’m building a thing that does X, and that’s a really good idea. If you don’t identify what problem you’re solving, then you get sprawling code bases that don’t do any one job great and do a lot of jobs. But if you can really focus in and solve a problem, then you have a good chance of making some progress. And I think WebAssembly was originally intended to solve a very specific problem: how do I run C code, how do I run Python code, in my web browser, side by side with my JavaScript engine, and make it possible to pass data back and forth? And the spec is kind of laser-focused on building the kind of virtual machine you need to do that. And it really only comes out a little bit later, once that’s done and once people understand what its characteristics are, that you say, oh, actually it’s too bad I didn’t know about this two years ago, because this would have been a great serverless runtime. Which is essentially the way we approached it.

[00:22:40.700] – Matt
The first version of serverless was this promise that we were going to build low-management, ultra-fast execution environments for code, Lambda being an excellent example. But as we got building, we went: the way these things execute on my local machine and the way they execute in the cloud are going to be very different, and the sandboxing model in the cloud keeps introducing more overhead. We need to make sure that this piece of untrusted code running somewhere on my server can’t root its sandbox environment and then make its way in and infect the host environment or other customers’ environments. And so we have to start building security measures, and the virtual-machine and container layers were not fast enough for that kind of early promise of serverless. So for us, when we looked at WebAssembly, with its great performance profile, where it can start up in a couple of milliseconds, or under a millisecond as we’ve gotten it now, that’s the kind of engine that we needed two or three years ago to power serverless. So let’s take a chance and say, all right, can we build, excuse me for using the kind of phrasing that we all hate in our industry, a serverless v2, right?

[00:23:53.050] – Matt
Can we build a serverless, and I didn’t say v-next, but a serverless v2, where we take the ideas that were successful, this idea that you can run an isolated function, have it start up and shut down in seconds, and do something really useful, and replatform that on an environment that cuts starting up and shutting down from seconds to milliseconds, and still accomplish that same amount of utility? I think, as should be expected, we started projects in silos. We didn’t talk to each other; full-stack developers and web developers did not sit down with cloud-native developers and say, let me tell you about this technology that you might be able to borrow, until we were a good four or five years into the ecosystem. And then when it happened, it happened, right? It has caught on in a bigger way than I expected; even I, a self-proclaimed super-WebAssembly-optimist, am sort of surprised about how quickly it’s caught on and how positively WebAssembly has been received. It isn’t viewed as the big threat that has to be defeated by the incumbent container world.

[00:25:02.130] – Matt
Docker looks at it and says, hey, this is great, let’s drop it right into Docker Desktop alongside containers, because we can do interesting things this way. And I think that’s been a really cool thing. And, I don’t know, maybe I’m optimistic, but I hope it’s a reflection of the fact that we as an industry are learning lessons about perspectives that cause drama versus a perspective that says, okay, this is promising technology, what can we do with it? And maybe we’re starting to learn to opt toward the latter.

[00:25:32.650] – Ethan
Let’s pause the podcast for a bit. Research suggests that 17 out of 20 people will click away due to buffering or stalling, and I am definitely one of those 17. There’s lots of stuff to watch out there, and there’s no reason to wait around. If your company delivers online media, consider CDN77. They are a globally distributed content delivery network, and they’re optimized for video on demand as well as live video. CDN77 is not some newcomer to the scene. They are used today by many popular sites and apps, including Udemy, ESL Gaming, live sports, and various social media platforms. And that makes sense to me. CDN77 has scale. They have a massive network with distribution points all over the globe and plenty of redundancy. While that means you shouldn’t have problems, what happens when you do need tech support? CDN77 offers 24/7 support staffed by a team of engineers. No chatbots, no tickets getting routed around queues while no one actually does anything. Just no-nonsense dedication to your issue, to get your online media back to 100%. To prove that CDN77 will work for your content delivery, visit cdn77.com/packetpushers.

[00:26:44.120] – Ethan
That’s cdn77.com/packetpushers to get a free trial with no duration or traffic limits. You can push hard for serious proof-of-concept testing. cdn77.com/packetpushers. And now back to this week’s episode and the serverless idea.

[00:27:06.900] – Ned
Using WebAssembly for the serverless context changes the way we’ve talked about it so far, because before, we were talking about it in the context of running inside a browser. Everybody has a browser; it’s kind of client side. But now you’re talking about running a workload that’s server side, or serverless side. From an operational standpoint, do you just have a Linux virtual machine that’s running a whole bunch of these WebAssembly virtual machines on top of it? Is that the operational model we’re looking at here?

[00:27:42.690] – Matt
That was where we started, and it has gotten very interesting as we’ve learned more about WebAssembly, because it turns out we can do better than that. A little bit of this is Fermyon-specific, right? And you can build your own system to do various permutations of a pattern like this. But we started out going, okay, we can start one VM and then essentially execute N different WebAssembly runtimes on top of that VM. And it actually worked fairly well to just say, per customer, each one gets their VM and they upload their applications to that VM, or, per application, each one gets a VM. Here, sorry, I should be very clear in my terminology: one big VM, a virtual machine, we’ll call it like an AWS extra-large, just to say, okay, that’s the size we’re talking about. And then we have what we’ll just call WebAssembly runtimes. So how many WebAssembly runtimes can you run on one virtual machine? Well, the first pattern is: we’ll run one standalone process, with its WebAssembly virtual machine, per application.

[00:28:58.170] – Matt
That was how we started. But as we learned more about the WebAssembly security model and the dials and knobs that we could turn on a WebAssembly execution context, the thing that’s actually running the bytecodes: oh, well, we can limit everything. We can limit CPU, we can limit the number of instructions, we can limit the amount of memory, we can present it with a virtualized file system that doesn’t actually correspond to a real file system. And the more we talked about this, the more we went, well, there’s no reason we’d have to run one process per application. And this is where we really got interested in the potential of WebAssembly, where we said, okay, we can actually run N applications inside of a single process. And then from there we started saying, okay, we can take something like a scheduling and orchestration system, orchestrate out several different virtual machines, each of them running one of these WebAssembly runtimes, and then start scheduling out workloads that way and run hundreds and hundreds of WebAssembly applications per virtual-machine instance. And that’s kind of where we’ve been going.

[00:29:59.840] – Matt
And because of the WebAssembly security model, that route looks very promising to us.
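One of those “dials and knobs,” limiting the number of instructions a guest may execute, is what some runtimes call fuel metering (Wasmtime, for example, has a fuel feature): every instruction burns a unit of budget, and when the budget is gone the guest traps. A toy interpreter loop sketching the idea; this is not real Wasm execution, just an illustration of the metering pattern:

```python
class OutOfFuel(Exception):
    """Raised when a guest exhausts its instruction budget."""

def run_with_fuel(instructions, fuel: int) -> int:
    """Toy stack machine where each op costs one unit of 'fuel'.

    Mirrors the instruction-limiting knob described in the episode; real
    runtimes meter at the bytecode level, but the shape is the same:
    check the budget, charge it, then execute.
    """
    stack = []
    for op, *args in instructions:
        if fuel <= 0:
            raise OutOfFuel("instruction budget exhausted; guest trapped")
        fuel -= 1
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]


prog = [("push", 2), ("push", 3), ("add",)]
print(run_with_fuel(prog, fuel=10))  # 5
# run_with_fuel(prog, fuel=2) would raise OutOfFuel before the add runs.
```

Combined with memory caps and a virtualized file system, this kind of per-instance budget is what makes packing N untrusted applications into a single host process plausible.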

[00:30:06.050] – Ned
Okay, so you’ve got the virtual machine that would be running your operating system, I’m guessing probably Linux?

[00:30:15.010] – Matt
Yeah. I mean, it doesn’t matter too much, but we run Linux.

[00:30:18.080] – Ned
Right, and then you have your WebAssembly runtime, that’s a process running on that virtual machine and inside that process you’ve got a whole bunch of instances maybe like threads running inside that process.

[00:30:32.410] – Ned
Okay. And you need something to schedule all that and manage it. And I’m assuming that’s not built into WebAssembly, so you’re going to need something else for that. The orchestrator and scheduler everybody likes to talk about is Kubernetes. So is the solution Kubernetes?

[00:30:49.790] – Matt
So we started with Kubernetes and we are currently on Nomad, which is HashiCorp’s scheduler. And part of the reason we drifted from one to the other is that we built a project, it’s in CNCF, called Krustlet. So you can go take a look at how we did all of this and even run it inside of Kubernetes, and you’ll see that it ends up being more like that one-process-per-application model, because Kubernetes is just a little too opinionated about what the running artifact has to look like, right? What the shape of the workload has to be, really to the point where it assumes a container. We were writing a lot of shim code, shimmy code, that scheduled WebAssembly workloads but tried to make them look like containers, and it worked, but we felt like we weren’t getting as much out of it as we could. So we switched over to Nomad, which is a little more generic in the way it views the workload, and it provides all the scheduling primitives. At its core, you would maybe call it a process scheduler instead of a container scheduler.

[00:32:02.240] – Matt
But even so, we wrote a custom task driver, which is a couple of hundred lines of code that basically receives a WebAssembly workload and schedules it onto the WebAssembly executor, instead of scheduling it as a container and letting it start up its container runtime, or scheduling it as a process and having it start up a process environment. That has worked really well for us, so at Fermyon we will probably stay the course there. We are not the only ones in the ecosystem, though. Microsoft has continued working on the WebAssembly-and-Kubernetes story and recently donated Runwasi (that’s R-U-N-W-A-S-I) to the containerd project. It’s basically a containerd shim where you can execute, again, a WebAssembly workload that’s sort of shaped to look like a container workload for Kubernetes. It hurts me a little because of the ego thing, right? But they’ve done a really good job, much better than we were doing when we tried to build Krustlet. They just figured out a better way of doing it. And I think that project is showing a lot of promise as a way of running WebAssembly inside of Kubernetes.
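The custom-task-driver idea Matt describes, receiving a workload and routing it to a Wasm executor rather than a container runtime, can be caricatured in a few lines. This is purely illustrative: every name below is invented, and a real Nomad task driver is a Go plugin against HashiCorp's plugin SDK, not Python:

```python
# Toy dispatch sketch of the task-driver idea: the scheduler hands a
# workload to a driver chosen by workload kind, and the "wasm" driver
# runs the module in a shared runtime instead of starting a container.

def run_container(workload):
    return f"started container runtime for {workload['image']}"

def run_wasm(workload):
    return f"executed {workload['module']} in shared Wasm runtime"

DRIVERS = {"docker": run_container, "wasm": run_wasm}

def schedule(workload):
    """Pick the driver by workload kind and hand the workload over."""
    driver = DRIVERS[workload["kind"]]
    return driver(workload)

if __name__ == "__main__":
    print(schedule({"kind": "wasm", "module": "app.wasm"}))
    print(schedule({"kind": "docker", "image": "nginx:latest"}))
```

The point of the sketch is the shape of the seam: the scheduler stays generic, and the few hundred lines of driver code are just the adapter between "here is a workload" and "here is how this kind of workload starts".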

[00:33:16.730] – Matt
So in the future, I don’t know which one will win. I placed my bet, they placed their bet, and maybe there’s room enough, there probably is room enough for both in this ecosystem, but that’s kind of the way the orchestration stuff has played out.

[00:33:30.880] – Ethan
Matt, you’re describing a process to get a WASM workload running that sounds like it takes a long time, at least in computer speak: many milliseconds, seconds potentially, to get something scheduled, moved, queued up, running, doing something, and then shutting down. Are there performance challenges here? Do I have to consider what sort of workloads are appropriate because of this architecture?

[00:33:55.850] – Matt
Yes, and in surprising ways, honestly. So let’s walk through just the process of deploying and then scheduling and then executing an application, and we can talk in parallel about containers versus WebAssembly, and we’ll start to see how this goes. Right, so developer A is building a container thing, developer B is building a WebAssembly thing. Both of them are starting by building something local, writing some code and then running commands that essentially package these up and prepare them for orchestration. In the Kubernetes case, somewhere along the line they’re writing a Helm chart or something equivalent. In the WebAssembly space, currently, you’re really just deploying the raw application and letting the scheduler do the rest, so it’s the application plus its supporting files. At some point, developer A pushes it to Kubernetes, the resources get allocated and things get scheduled. Right now we look at a few dozen seconds, down to maybe 6 seconds on the lower end, to get everything deployed out there. And then once it’s deployed, the container starts up and, depending on its replica count, is always running at whatever that replica count is. So if I set my replica count to three, I’ve got three containers, ideally distributed across three different worker nodes on my Kubernetes cluster.

[00:35:14.680] – Matt
So if we pivot over to the WebAssembly story, it’s a little different in shape once we get to the deployment side. I’m going to use the Fermyon case, but if I knew more about Runwasi, I could substitute in the same kind of workflow there too. Nomad deploys out the WebAssembly binary and files to the workers, right? The workers then have it distributed, and we could say, again, I want it deployed on three different replicas. But WebAssembly doesn’t then start things up right away, right? The binary gets loaded on there and nothing has started at all. Because frankly, the way we built things, if it’s a stateless microservice and the startup time is sub one millisecond, there’s no point in starting it up.

[00:36:05.350] – Ethan
It’s more like a function. You’ll call upon it when it needs to be executed, it doesn’t need to sit there and be listening.

[00:36:13.760] – Matt
Yeah, and this is why I call it kind of a serverless v2 model, right? The way that functions-as-a-service typically work is you actually have a bunch of pre-warmed infrastructure sitting around waiting till the last second, or the last 200 milliseconds, and then the platform literally drops the workload on, executes it, and in many cases just tears down the infra. You get this whole queue of infra where it’s standing up and warming instances, shunting them off to execute a function, tearing them down, and starting up new ones to pop on the end of the queue. And that’s probably a better model to compare this to as well. So WebAssembly deploys these binary files that just go out and live on an execution context somewhere, on runtimes. And then every time a request comes in, the WebAssembly application is started up, executes to completion and shuts down. Because really the only bit of infrastructure you need is the memory and CPU power to execute that function in that very moment, we don’t have to have a bunch of pre-warmed resources up there, and we can hit very, very high densities because we’re running all of these WebAssembly things.
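The instantiate-per-request shape Matt contrasts with pre-warmed pools can be sketched without any real Wasm at all. This toy stands in a plain function for a Wasm instance; the point is the lifecycle, not the runtime:

```python
# Toy sketch of the "serverless v2" lifecycle: because instantiation is
# nearly free in the Wasm model, nothing stays resident between requests.
# Each request gets a fresh instance that executes to completion and is
# torn down. The handler here is a stand-in for a Wasm module.

import time

def handle(request):
    return f"echo:{request}"

def serverless_v2(requests):
    """Instantiate, execute to completion, tear down, per request."""
    responses = []
    for req in requests:
        instance = {"handler": handle}   # stand-in for Wasm instantiation
        responses.append(instance["handler"](req))
        del instance                     # nothing stays warm or resident
    return responses

if __name__ == "__main__":
    start = time.perf_counter()
    out = serverless_v2(["a", "b", "c"])
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(out, f"{elapsed_ms:.3f} ms")
```

In a container-based FaaS, the `instance = ...` line is where hundreds of milliseconds of pre-warming would hide; the claim in the conversation is that sub-millisecond Wasm startup makes that pre-warming unnecessary.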

[00:37:19.390] – Matt
And again, these sort of big macro WebAssembly execution environments can run hundreds of applications’ worth in a single AWS virtual machine. Okay? So yes, we’re starting to outperform the kind of typical serverless workload, which is where WebAssembly’s strong point is. It’s not going to replace any of the container workloads, because these are short-lived things that start up, execute, and shut down. Containers are really about things that start up and run for a long period of time. And yeah, they get torn down and stood up again periodically, but periodically is really days, months or quarters rather than seconds or milliseconds.

[00:37:59.810] – Ethan
So server side, at least, there is no notion of a long-standing stateful workload that would be assigned to WASM.

[00:38:08.490] – Matt
We currently do not even attempt it. Yeah, and part of the reason why is because the problem space we set out to solve is the serverless one, right? And we have never had a good reason to try to build a technology that competed with containers, particularly when WebAssembly is still maturing. The threading specification is not done for WebAssembly, and without threading it makes it awfully difficult to build something like, say, a database or a message queue. If the technology is not ready for the market and there’s really no reason to try to take on the incumbent Docker technology, then why waste effort there? On the serverless side, on the other hand, we saw two years of massive growth, everybody getting interested, and the developers kept telling us, when I was at Microsoft we heard this all the time: this model is great, I love it. They just dive right into the business logic, right? I’ve got a function and I just fill out the function. I don’t have to stand up an HTTP server, manage the processes, handle kill signals, anything like that. I just write an HTTP handler, and they really like that.

[00:39:14.770] – Matt
But the technology just wasn’t quite fast enough to do a lot of the things that people want to do, to run, say, a really high performance, high volume website. You can’t score a 99 to 100 on a Google PageSpeed rank if you have 200 milliseconds of startup time for your function on the server. So we’re going, okay, well, here’s a big opportunity. That, and the fact that a lot of the serverless offerings were, I guess I would call them, sort of Frankensteiny in nature, in that they didn’t really quite mesh with the way anybody wanted to work. They were kind of glommed onto the side, and platform engineers are like, great, now I’ve got something over here that I have to take care of, and I don’t know what it does and why, and it’s opaque. And developers are going, great, I’m back to the days where I have to ship a tarball with my code up to a server, cross my fingers and hope it works. And we went, okay. So there’s a lot of room where the developer story of WebAssembly is being sort of built right into your programming language.

[00:40:09.500] – Matt
Your compiler just compiles out a WebAssembly binary. There’s a sense in which that’s going to appeal to the developer a lot more than the old SSH-and-SCP-a-tarball kind of thing. And then I think we can work back to a better infra story too, where we do a better job of observability and traceability and gathering metrics and things like that, and make it less of a thing glommed onto the side and more of a piece that feels like a manageable segment of a bigger picture, one that includes not just WebAssembly but also containers and virtual machines and the litany of other cloud services that we have out there.

[00:40:49.470] – Ned
Yeah, I’ve seen a lot of think-piece kind of things around “is serverless dead?” And, to a certain degree, have we overcorrected on this microservices architecture and should we all just go back to monoliths? That’s probably a conversation for a whole other episode. I’d like to focus on some of the security aspects of WASM: where it might shine from a security perspective, and what are some challenges around implementing WASM properly and securing it, especially in a multitenant environment where you might have more than one customer wanting to run a WASM process.

[00:41:27.710] – Matt
Yeah, so the security model for WebAssembly is basically built on this idea that you have an isolated sandbox that executes the bytecode, and that isolated sandbox limits effectively any consumption of an external resource. That includes CPU cycles and memory. But say you grant your WebAssembly module access to the file system, access to a quote unquote file system in this case, right? And really what you want to do there, and Kubernetes actually, I think, nailed this story and WebAssembly is replaying a similar one: when a developer says, I want a file system, as an operations team we don’t just give them access to the file system. We give them access to a nice siloed-off piece of data that looks and feels to the code like a file system. And what’s behind it? Who knows. This is what I loved about the way Kubernetes storage works: the developer has no idea whether they’re getting a piece of a local file system, or some piece of network-attached storage, or a simulated file system. As long as the code-level part of it works, the implementation is beyond what they need to know.

[00:42:42.740] – Matt
And WebAssembly really does the same thing. WebAssembly’s default is that the host runtime determines what the implementation of the file system is; the WebAssembly host runtime will just make it look like a file system to the piece of code that’s running. So, again, straight out of the Kubernetes playbook, and I love the fact that they did it this way, because then you get that same security layer, and you get the same thing with environment variables. And now, as networking is starting to pick up, our HTTP implementation says we’ll proxy everywhere: the host will control the socket layer and then pass the payload layer on to the WebAssembly module so it can construct an HTTP response, and the host runtime will turn that response object into an actual HTTP-on-the-wire response. That way you have this security layer where you can say, is this doing anything dastardly? Yes, it is, we’re not going to let it do that. Or, no, it’s not, all right, let it through. And I think that’s the way the WebAssembly security model has really started. It’s kind of a combination of what in academic computing is usually called the capabilities model, plus the key learnings of how to do security in a distributed world, ideas you saw picking up in early Mesos and really gaining traction as Kubernetes has matured.
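The capabilities model Matt describes, where the guest sees only what the host explicitly grants and has no idea what backs it, can be mocked up in a few lines. This is a toy for illustration only; real WASI does this with preopened directories, not an API like this one:

```python
# Toy capability-style virtual filesystem: the guest can only touch
# paths the host explicitly granted, and the backing store could be
# anything (a dict here; network storage or a real directory in practice).

class VirtualFS:
    def __init__(self, grants):
        # grants maps an allowed path prefix -> a backing store (a dict)
        self._grants = grants

    def _backing(self, path):
        for prefix, store in self._grants.items():
            if path.startswith(prefix):
                return store
        raise PermissionError(f"no capability for {path}")

    def read(self, path):
        return self._backing(path)[path]

    def write(self, path, data):
        self._backing(path)[path] = data

if __name__ == "__main__":
    tenant_store = {}
    fs = VirtualFS({"/data/": tenant_store})  # host grants only /data/
    fs.write("/data/hello.txt", "hi")
    print(fs.read("/data/hello.txt"))         # works: capability granted
    try:
        fs.read("/etc/passwd")                # denied: no capability
    except PermissionError as e:
        print("denied:", e)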

[00:44:06.430] – Matt
So the runtime story is fairly nice and cozy, we think. So far, we all feel like it’s fairly cozy. Then there’s the old software supply chain problem, and that really has to do with, well, how do I make sure that the WebAssembly module that made its way to the runtime is in fact the one that should be there? Right, that the developer who wrote it is trusted, and that the binary made its way intact through each point along the way. And we got off to what I, in hindsight, think was a false start on this one, in part because we were building this in parallel with Sigstore and some of the other things in the Kubernetes world, and they weren’t done yet, so we couldn’t necessarily just draw on them. We built a service called Bindle that was designed to be sort of end-to-end secure: it required a lot of signing, everything was encrypted, every object had its hash code, we built up Merkle trees. All these words I’m saying out loud should sound like, yeah, I think I’ve heard all those words before, but in a different context, because while we were doing this, the Docker container ecosystem, the OCI ecosystem, and the software supply chain system were sort of coevolving.

[00:45:23.920] – Matt
And for whatever reason, and again, this is probably that same silo thing we talked about before, we initially sort of resisted getting involved in that community, and in hindsight I think that was a mistake. So recently Fermyon, Microsoft, Docker, most of the large organizations who are working on WebAssembly have all said, okay, enough experimenting with that stuff. Software supply chain is coming along very well in the container ecosystem. Once OCI pushed through the artifact spec, there was no reason we couldn’t support storing WebAssembly applications inside of OCI images, and then suddenly we get an end-to-end software supply chain story. So really, over the last four months, we’ve pivoted from trying to build a WebAssembly-unique software supply chain story to just saying we’re going to go with OCI on this one. And you have those sleepless nights where you’re like, what if this happened, what if that happened? And then a change happens and you go, oh, I can sleep better at night. That was what happened for us with the OCI story: once we realized we were really swapping out an untested system that we thought had the right trust model, we suddenly found that we could pivot to an almost identical trust model based on a system that now had huge industry momentum behind it, and in record time.

[00:46:51.400] – Matt
Right. We started Bindle two years ago, when Sigstore was just kind of finding its feet. And now, only two years later, we are starting to see all of these technologies treated as sort of the de facto default, and we don’t have to do anything, right? Other people who are smarter and better at security than I am can do all of the work, and I can just kind of reap the rewards. So that’s the way I think WebAssembly is going to go.

[00:47:13.690] – Ethan
Matt, as you’ve described WebAssembly and how the processes run, it feels like it sits on top of, and is somewhat abstracted away from, the infrastructure underneath. So maybe this is a dumb question, but the question is observability. Is there anything unusual with observability that I need to care about regarding WASM?

[00:47:31.950] – Matt
In fact, not a dumb question. It is one of my favorite questions, because I think this is where WebAssembly actually, over the long term, will be something of a game changer. One of the things we have struggled with in the container ecosystem, and for a second of background, right, I started working in the container ecosystem right around the earliest times. I wasn’t, of course, a core contributor to Docker or anything, but I was a very early user, right around Docker 1.0, and had actually played around with the Solaris container solution back when. And it was hard to observe then, right? Observability wasn’t really on the table. Even now there’s a sense in which a container kind of feels opaque, and you’re relying on the person who puts the container together to instrument the innards of the container, the application and its supporting pieces, so that you can connect it to your outer observability platform. WebAssembly is a little more raw than containers in a sense, right? The deliverable in a WebAssembly application is the binary file that you’re going to execute, and it’s a bytecode format, which means you can actually inspect it during execution by just instrumenting the runtime.

[00:48:43.050] – Matt
So for example, you can pretty easily, in a host runtime, say, hey, every time that the function foo gets called, pop an event on my event queue saying that that function got called. Or tell me any time that one function gets called more than 3,000 times in a second in a single app, because that’s probably a bad looping pattern or something like that. I’m making up examples here, but the idea is that you could instrument how much memory it’s using, how much CPU, what the function stack looks like. You can instrument really top to bottom in WebAssembly. And I think that over time this is going to turn out to be pretty promising. It will make WebAssembly a desirable thing for high performance applications, but it will also illustrate to a broader community, hey, this is the level of instrumentation you could have. And the broader community, I think, will start working more toward that level instead of the opposite, where I feel like over the last ten or 15 years, as a developer, I was pressured to say, okay, you need to instrument by calling into this logging library and making sure every time this function gets called you log an entry on enter and everything.

[00:49:50.110] – Matt
And it became burdensome to me as the developer. The idea that the operations team can get deep visibility into an application without ever having to say to the developer, hey, can you go drop a couple of log lines in this function? That’s like the biggest win-win in the observability world that we could hit. So I’m excited about the prospects of observability in the future, and I think that we’ll see some big advances on the WebAssembly side there in a year and a half to two years.
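The host-side hooks Matt is imagining, count calls and flag suspiciously hot functions without the guest adding any logging, can be mocked up like this. It is a toy in plain Python, not a real Wasm host API, and the hook names and threshold are made up (the per-second window is simplified to a plain count):

```python
# Toy host-side instrumentation in the spirit of the examples above:
# the runtime counts guest function calls itself, so the guest code
# needs no logging library at all.

from collections import defaultdict

class InstrumentedRuntime:
    def __init__(self, hot_threshold=3000):
        self.calls = defaultdict(int)   # function name -> call count
        self.events = []                # the host's event queue
        self.hot_threshold = hot_threshold

    def call(self, fn, *args):
        """Invoke a guest function, recording the call on the way through."""
        name = fn.__name__
        self.calls[name] += 1
        self.events.append(("call", name))
        if self.calls[name] > self.hot_threshold:
            self.events.append(("hot-loop?", name))  # probable bad loop
        return fn(*args)

def foo(x):
    """Stand-in for a guest (Wasm) function; it contains no logging."""
    return x * 2

if __name__ == "__main__":
    rt = InstrumentedRuntime(hot_threshold=2)
    for i in range(4):
        rt.call(foo, i)
    hot = [e for e in rt.events if e[0] == "hot-loop?"]
    print(rt.calls["foo"], hot)
```

Note that `foo` itself carries zero instrumentation; everything is observed from the hosting side, which is the separation-of-concerns win Matt is pointing at.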

[00:50:16.950] – Ned
I love the idea of not having to go back to the developer, and don’t we all? Yeah, I mean, not to disparage developers, but they’re busy, they’ve got stuff to do.

[00:50:27.190] – Matt
I mean, you think about us as processes, right? And I’ve spent part of my career on the operations side and part of my career on the developer side. And it’s always friction on either side, because as operations, your job is to make sure everything is running optimally, and as a developer, your job is to chunk your way through this backlog of features and bugs that will get shipped out later. And anytime you have to have cross-team collaboration on that, one party or the other is getting interrupted out of their core job in order to help somebody else with theirs. Anytime you can remove some of that friction, both parties end up much happier. I don’t have to bug a developer to do X, and the developer is thinking, I don’t have to break out of flow while I’m working on next month’s feature in order to go drop a log line in there or update the OpenSSL library. Nothing gets developers angry like, hey, there’s an OpenSSL vulnerability, I need you to rebuild 1,400 container images on the latest. So yeah, I think we’re making some good steps toward achieving separation of concerns between the operational environment and the development workload.

[00:51:44.850] – Ned
Matt, there’s so much more I want to ask: how the network components are implemented, where I store the modules that I’m going to install on these things, and what the pipelines look like for building and deploying. This could be a three-hour show, but I think we’re coming toward the end. So what I’d like to do is just find out: if I’m an infrastructure kind of person, which I am, and so my development chops are somewhat limited, but I want to dip a toe in the WASM world and get a feel for what these applications look like, what the deployment flow looks like, what the tooling looks like that I might be responsible for at some point, is there a good place where I can go and try this out? Sort of a sandbox environment, or something I can deploy locally?

[00:52:34.690] – Matt
And I’ll give you three, one of which will be a Fermyon thing, but I want to be fair and say there are lots. Docker Desktop’s version with WebAssembly support in it is a great way to just kick the tires locally. You run Docker Desktop, you walk through their tutorial, they have prebuilt applications that you can try, and then you see the shape of the workload. It’s the easiest way to sort of dip a toe into the operational side of things. For Fermyon, we have a pretty easy quickstart guide that will get you from zero to deployed app without ever having to write a line of code. We’ve got a couple of sample apps that you can get out there and deploy, the basic ones like Hello World, but you can also deploy a CMS and get a feel for what it’s like to build, package and deploy an application into something like Fermyon Cloud. Or if you really want to dive in, Fermyon Platform has the Terraform scripts that you can use to stand up, in the cloud of your choice or locally, a full environment with Nomad and Consul and Spin and all of these technologies, and play around with it.

[00:53:36.120] – Matt
And the third one is the Runwasi project that Microsoft has been contributing to, particularly if you want to play in the Kubernetes space. I believe AKS even supports turning on Runwasi, and you can very quickly see how that fits into the Kubernetes system and what the strengths and weaknesses there are. So those are, I think, three good ways: the Docker route, the Spin route and the AKS route.

[00:54:02.650] – Ethan
Thank you, Matt. Tons of information, man. Like Ned said, this could have been a three-hour conversation for sure, and maybe we’ve got to have you back to talk some more about some of the minutiae we just didn’t have the time to get into today. But if you could pick out a highlight or two from our conversation today that you want people to really get a hold of, what would those ideas be?

[00:54:23.170] – Matt
Yeah, I think we started off talking about different domains where WebAssembly is applicable, and I think that’s a good one to keep in mind. I listed four; I wouldn’t be surprised if you can think of a fifth or a sixth one, and as a new generic technology, that’s fine. So that’s one highlight. I think the second is starting to think about how serverless could look if we could just radically streamline the operational aspect of it, so that we’re starting up in under a millisecond, executing to completion in blazing fast times, and we don’t have to pre-provision virtual machines and things like that. I think there’s a lot of potential there. That’s the part about Fermyon that I’m really most excited about. And the third, and I think we actually ended there really well with those three ways to go try this out: I think this is going to catch on, and catch on quickly. So it’s a perfect time to just dip a toe in, understand the strengths and weaknesses of the technology, and then scan toward the horizon and say, what is coming next?

[00:55:28.260] – Matt
So I think those would be three ways to kind of prime yourself for the potential of the WebAssembly ecosystem in a broad way, and then maybe in more specific ways relating to the cloud.

[00:55:39.310] – Ethan
Matt, if folks want to ask you some more questions about our conversation today, how can they find what you’ve written, follow up with you on Twitter or anything like that?

[00:55:46.980] – Matt
Yes, I am technosophos pretty much all around the internet, including Twitter. I actually use LinkedIn too; after the Twitter things recently, I figured I’d try LinkedIn, so that’s another good place. But we blog pretty regularly on Fermyon.com’s blog. I blog a lot about the kind of stuff we’ve been talking about today, but you can also get more technical views from the operations and development sides of the house. And lastly, Fermyon has a Discord where we’re all kind of hanging out and chatting all the time about WebAssembly. So those are great places to find me. For the link to the Discord, scroll to the bottom of the site and click on the Discord link.

[00:56:28.760] – Ethan
Great stuff. Thank you very much, Matt Butcher, for joining us today. Great conversation. Matt, again, we do need to have you back at some point for a follow-up conversation on WASM. Thanks a lot for appearing today. And if you’re still listening out there, to the very end, virtual high fives to you for tuning in. You are awesome. If you have suggestions for future shows, Ned and I would love to hear them. Seriously, we will do our very best to take your request, find a subject matter expert to address your questions, and get them on the show. Hit either of us up on Twitter, we’re at Day Two Cloud Show, or if you’re not a Twitter person, go to the request form at daytwocloud.io. We will get those and see if we can get a show together around your idea. By the way, did you know that you don’t have to scream into the technology void alone? The Packet Pushers podcast network has a free Slack group that is open to everyone. Visit packetpushers.net/slack and join us, a marketing-free zone for engineers to chat, compare notes, tell war stories and solve problems together.

[00:57:26.640] – Ethan
And hey, maybe chat about WebAssembly. That is all at packetpushers.net/slack. And until then, just remember: cloud is what happens while IT is making other plans.
