Functions vs Containers: The Serverless Landscape

Updated on May 7, 2024
Julian Wood (expert)

Serverless Developer Advocate at AWS

Marcia Villalba (interviewer)

Serverless Developer Advocate at AWS


Explore the seamless integration of container images with AWS Lambda! Marcia Villalba and Julian Wood unravel the intricacies of development, deployment, and optimization. Discover how containers revolutionize serverless computing, offering speed, efficiency, and scalability in the cloud. From demystifying cold starts to harnessing caching technologies, this is your chance to learn from the experts.

Can Functions and Containers Work Together?

Marcia Villalba: Hello, and welcome to another GOTO Unscripted video. It's weird to say hello on a show that I've never hosted before. But here I am. My name is Marcia Villalba, a developer advocate on the serverless team at AWS. And I'm joined by...

Julian Wood: Hello, I'm Julian Wood. Marcia and I are actually on the same team, so we sort of do the same things, although we have different specialties. Marcia, it is always lovely to talk to you, and I love that your intro, even for your own podcast and your own YouTube channel, carries over here to GOTO Unscripted. So it's super cool.

Marcia Villalba: I can't help it, introducing things is my...

Julian Wood: Absolutely.

Marcia Villalba: I have to say hello to people. So today we have a really cool topic, because maybe people reading the description of this video are wondering how many buzzwords you can put in one sentence: functions, containers, containers and functions. And it's like, how is that possible? Are they not kind of the same thing? Well, I have heard about these open-source functions that you can run yourself, are we talking about that? What is it that we are going to talk about today?

Julian Wood: Well, we need to put more buzzwords in just for the title. So we'll say gen AI a few times, and then we'll do gen AI functions and containers, and then we're covered. So, we have all the buzzwords.

Marcia Villalba: Cryptocurrency.

Julian Wood: Oh, nice one. I like it. So this is an interesting topic to talk about because, you know, people always think there's some sort of philosophical, technical, either/or debate between whether you can use functions or whether you can use containers.

And obviously, this is complicated because the word containers, let me get my air quotes on the screen, is a broad term which encompasses a whole bunch of different things. A container can be a packaging format: it's a way that you can put a whole bunch of files together, basically. Containers can also be a distribution mechanism, because it's a way that you can have something that runs on your machine and also runs on another set of machines in the cloud, on-premises, anywhere, that kind of thing.

Then containerization can also be the sort of concept of, oh, well, we're containerizing things. We are making them a little bit smaller, a little bit more agile maybe. Maybe it's a similar sort of concept to microservices. So there are a lot of different things that containers are, and sometimes people come at it from different angles, and so, you know, they may get confused.

But the big picture is that when serverless came on the scene, about 10-ish years ago, it was mainly about functions as a service, where you just upload your code somewhere and it would just run at whatever scale, and somebody would take care of running that for you, scaling it, and, you know, updating it, and securing some of it, and all that kind of thing.

Now, obviously, the term serverless has evolved since then, because it's not just functions as a service but a whole suite of databases, and messaging services, and orchestration systems that take away that undifferentiated heavy lifting, as we talk about it at AWS, where you just get to run your service over the internet and we'll take care of a lot of the grunt work underneath so you can just focus on your application.

And so we had functions, and then we had containers that you could always run. And a lot of companies were doing either lift-and-shifts or they were doing microservices, and they were, you know, containerizing their applications and scheduling…

Recommended talk: Expert Talk: Are We Post-Serverless? • Julian Wood & James Beswick • GOTO 2024

Marcia Villalba: When we talk about functions, at least in the early days, we always showed this graph that was, like, physical machines, virtual machines, containers, functions. And now we're like, how do we put these things together?

Julian Wood: I think the idea of that was possibly good to show how the evolution of things could be, but it's not necessarily that the evolution has to be that you migrate from a container to a function. It's just a different way of running code. And, you know, running actual containers within AWS is a way of running a full container that can run for hours, forever, in fact. And, yeah, it's a really great way to also run applications and functions...

Marcia Villalba: I think we can go back to the definition of what a function is and what a container is. What you said at the beginning is very important, the different layers of what a container can be. Because in the case of functions, what we are talking about is the packaging, the container image, and then the way of running the container. And what we wanted to show in that typical graph of physical machines, virtual machines, containers, and functions is where the responsibility sits and what the abstraction layer is. Now what we are saying is, "Well, now you can take your container image, the same one that you put in ECS or whatever you're using, and put it into Lambda."

Julian Wood: Basically, the premise is you can build a function from a container image, and the world's like, "Hang on, hang on, hang on, what are you talking about?" That's the confusing thing we're trying to tease apart today, to help people understand why it can be useful and some of the trade-offs that you need to make.

Containerizing Serverless: Exploring Lambda's OCI Image Support

Marcia Villalba: Now we've clarified a little bit what we are going to focus on: running container images in functions, in serverless functions. So that's the TL;DR of this, but why?

Julian Wood: Well, the why is good in a way, because why not? And that may sound a little bit silly, but the whole container ecosystem has gone wild over the past 10 years, and for really good reasons. Because containers, when we are talking about the packaging format, the way you can put things together, is a standard that so many different kinds of things use. And so the benefits of using a container image are that there are so many tools, and so many packaging ways, and so many scanning utilities, and just so many developers know how to package applications in a container. And so, let's...

Marcia Villalba: And you can run it on your local machine.

Julian Wood: That's what we were talking about with portability earlier. It is really cool that you can, you know, test and run something on your Mac or your Windows machine, even if it's gonna be deployed on a, you know, Linux OS with another chipset in another cloud provider. You can have this sort of confidence that your packaging format will carry over.

And so what Lambda introduced, I think it's about three years ago now... previously with Lambda, you had to take all the files that were your code, zip them up, and upload them to the cloud, to the Lambda service. It would store them in S3, some object storage that's a bit behind the scenes. And basically, when the function ran, it would just copy that code down and run your function, pretty simple.

But people were saying, "Well, hang on, why do I have to have different tooling for Lambda?" Or, "I'm using containers to build a whole bunch of other services, can't I just use containers to build Lambda functions?" So Lambda came out with what's called OCI image support, and an OCI image is the industry-standard way to build a container image.

If you've worked with containers at all, I'm sure you've written Dockerfiles. A Dockerfile is a way of building a container image, and that's Docker's implementation, but it's using the OCI spec under the hood, and there are other container runtimes which also use the OCI spec. So anyway, what Lambda came out with was: instead of having to zip your package up, you could just use a Dockerfile to specify what is going to be in your Lambda function, and you could build a Lambda function with that.
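To make that concrete, here is a minimal sketch of what such a Dockerfile can look like, assuming a Python handler in a file called app.py (the file and function names are illustrative):

```dockerfile
# Minimal sketch: build a Lambda function from the AWS Python base image
FROM public.ecr.aws/lambda/python:3.12

# Copy the function code into the task root the base image expects
COPY app.py ${LAMBDA_TASK_ROOT}

# Tell the runtime which file.function to invoke on each event
CMD ["app.handler"]
```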

We're talking about some of the benefits: using your own tooling, using the Docker CLI on your laptop or wherever. Another one is portability. As you mentioned before, if you've got some code that is running somewhere as a function, you can now port that to Lambda, or use some of the functionality within Lambda, much more easily. And then another useful use case is larger artifacts. One of the constraints with Lambda is that zip packages can only be 250 meg; with container images, that goes up to 10 gig. And there's a whole bunch of cool technology we're gonna get into that makes that happen.

Marcia Villalba: We'll get into that because that's very magical.

Julian Wood: Most definitely. Oh, I love it. Love it. That is so cool. So you can build Lambda functions up to 10 gig. So that means, you know, huge binaries that you need to put in, or even machine learning models or, you know, the...

Marcia Villalba: That was a huge constraint for a lot of our customers. They were like, "Well, I have so many dependencies, I have so many things." And even if you use Lambda layers, when you are running your functions the traditional way, you're still limited to that deployment package size. Well, there are many reasons why the package size is small, but with containers, it seems that we have done some magic and broken that limit in some way, we'll talk about that later so don't tune out, and we can put in 10 gigabytes. So that's, I think, one of the biggest things: besides the fact that we already know how to use containers, now we can put in bigger things.

And I think another big one that at least I heard from customers is the immutability and the control that they have over their images. Because, well, if you use Lambda the vanilla way, the traditional way, let's call it, you are basically using whatever we provide, the runtime we provide. But maybe sometimes you need your own images, your own runtimes, your own whatever, because you have so many constraints in your organization. And if you use the container image support, then you can bring all that into play, and that's also very important for many organizations.

Julian Wood: So that's when you talk about the packaging format, and, you know, the power of a container as a packaging format. Because, by default, one of the awesome things about Lambda is it just automatically patches itself: the operating system and, if you're using Python or Node or Java, the minor version of Java, Node, or Python just gets automatically upgraded. Literally every time you run your function, it's making sure it's got the latest and greatest. And that is fantastic, because it means you don't have to do that work yourself.

But some customers are like, "Hang on, this means if I do have a library which suddenly has an incompatibility, now I'm stuck because something just broke and I didn't make any code changes." Or, you know, "Lambda functions run on Amazon Linux, previously Amazon Linux 2, and now Amazon Linux 2023, and my company uses Ubuntu or Alpine or Debian, or Red Hat, or some other Linux distro. I don't really wanna have to use something different for Lambda, because all my processes and everything are set up to use this other Linux distro." And so, with container image support, it means you can actually bring your own runtime and bring your own operating system, and that is super powerful.
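As a sketch of what "bring your own base image" can look like, this hypothetical Dockerfile starts from a plain Debian-based Python image and adds the AWS Lambda Runtime Interface Client (awslambdaric), which bridges your code to the Lambda Runtime API; the file names are illustrative:

```dockerfile
# A sketch: run Lambda on a non-AWS base image via the runtime
# interface client
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt app.py ./
RUN pip install -r requirements.txt awslambdaric

# The runtime interface client polls the Lambda Runtime API for events
ENTRYPOINT ["python", "-m", "awslambdaric"]
CMD ["app.handler"]
```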

Recommended talk: Best Practices to Design & Build Event-driven Applications • Marcia Villalba • GOTO 2022

Maximizing Flexibility: Leveraging Lambda's Custom Runtimes and Base Images

Marcia Villalba: I have heard stories of people running COBOL Lambda functions in these types of scenarios. So when we say bring your own runtime, we really mean bring your own runtime.

Julian Wood: That's one of the powerful features of Lambda. There's a way that you can use custom runtimes. So, yes, we have Node, Java, Python, Go, all these managed runtimes. But there are also the custom runtimes, and custom runtime is a bit of a funny term, because custom sounds like, oh, we've customized the runtime. It's basically an OS-only runtime. So it's an operating system, and then you have the flexibility to build whatever you want on top of it. And so, yeah, that gives a lot of flexibility; you can do anything.

Marcia Villalba: But on the other hand, it's not just that you can bring your own things; you can also use all the base images that Lambda provides. So if you don't need any custom runtime or anything weird, and you want to use what Lambda provides, just go grab the base image for the runtime. You'll have like...

Julian Wood: All the goodies there.

Marcia Villalba: ...all the goodies. No need to stress, because that's also what I love about Lambda: it is simple, and I don't need to reconfigure the world in order to get started. So there is a lot of customization possible, but also ease for the ones that don't need that much customization.

Julian Wood: Literally the first line in your Dockerfile, when I'm starting my container image for Lambda, says: please, Lambda service, can I have the base image which contains Node or Python or Java, whatever. Lambda creates these base images, which are publicly available on Docker Hub and Elastic Container Registry. So you're not starting from scratch; you just say, "I want the Lambda image," and that's got all the Lambda-specific code in it as well. So your code can run exactly the same as it did before. Yeah, it makes it really easy to get started with building your functions. You're not just starting from an operating system and having to craft everything yourself.

Marcia Villalba: So when should we not use containers? Because these are lovely, but...

Julian Wood: Containers are lovely, but I think some of the challenge people run into, when they think about containers with Lambda, is that original point I was making about what a container is. Because people think, "Oh, I'm just running a container in Lambda. Any container in Lambda." That's not entirely true. It needs to be a container that works with Lambda, because Lambda is an event-driven application construct and architecture, and your container needs to be able to support that. Event-driven basically means there's an event that comes in, your code does some processing, and then returns the result.

Recommended talk: Building Low-Code Applications with Serverless Workflows • Ben Smith • GOTO 2023

Configuring Lambda Container Images: Event Handling and Runtime Constraints

Marcia Villalba: So maybe here we can stop for a second and talk a little bit about how we define this connection. Because in the vanilla Lambda scenario, we have our infrastructure and we say, "Hey, the input, the function that we are going to start, is in this file, in this method, go from there." And then we have our handler file that has the method that will start the whole function, and that's the event-driven part that we love about Lambda. How do we do that with containers? Do we need to change the code? How will it work?

Julian Wood: You don't actually need to change your code. If you are using the managed runtimes and you're just pulling that image layer down, your code doesn't need to change at all. You're just setting a configuration option for your Lambda function, which can either be within the Dockerfile, or it's just gonna use the defaults for Lambda.

So say you're using a Python function: if you're using the Python managed container image, you just write your code and you don't have to do anything. But what you can actually also do is specify, make sure that my handler is this file and this function within that file, and then off you go.

And lots of people use that not just for a single function; sometimes they're gonna have multiple Python handler functions within the same artifact, just to confuse things. And so you can have the same container artifact and deploy it to different Lambda functions depending on how the handler is set. So, yeah, there are different ways to configure it: you can either set it within the configuration of your Lambda function or set it within your Dockerfile.
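As a sketch, both options might look like this for a hypothetical second handler called app.process_orders; the function name is illustrative, and the per-function route uses the ImageConfig setting in the Lambda configuration:

```bash
# Option 1: bake the handler into the Dockerfile
#   CMD ["app.process_orders"]

# Option 2: override it per function in the Lambda configuration,
# reusing the same image with a different handler
aws lambda update-function-configuration \
  --function-name process-orders \
  --image-config '{"Command": ["app.process_orders"]}'
```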

Marcia Villalba: But it's the same idea. So when you have your application in a container image, you still need to tell Lambda what the input, the starting point, of that function is. Well, we do that for any kind of application; we need to tell the server where to start. So it's similar for Lambda. But that's an important thing, because sometimes we think, "Oh, let's put an Express application here. We can run containers on Lambda." No. It's not that straightforward.

Julian Wood: Going on to the specifics of how Lambda works: when your function code runs, it's gonna take an input, and that's gonna be the trigger that launched the Lambda function. If it's behind an API, it's gonna be that API event; if it's pulling a message off a queue, it's gonna be the format of that message from the queue. If you've uploaded something to S3 and that's gonna invoke the Lambda function, well, it's gonna be the metadata object that comes from S3.

So a Lambda function has an event, which is that event we've been talking about, and then some context information, which is just some metadata about the invoke. Your code needs to be able to handle that as a container, and, you know, so there's no point running server-full things inside your Lambda function, such as a full Express app or that kind of thing.

So that's one of the differences with running Lambda. The other is one of the constraints with Lambda: Lambda functions can only run for 15 minutes. That idea is born from the fact that Lambda functions are there to do a piece of work and then return their result. And, you know, hopefully it's gonna be within 15, well, it needs to be within 15 minutes.

But, obviously, if you're running a more server-full workload within a container, such as a Flask or an Express app, or some kind of really long-running process that's gonna run over 15 minutes, that's not gonna be suitable for Lambda. So, you know, you maybe want to be thinking about some other solution, or maybe reducing the size or splitting up that job so it can be done within 15 minutes and can work on Lambda.

A lot of customers are taking applications and, you know, lift-and-shifting into the cloud, and thinking they can just take the container that does a long-running process, or uses, you know, maybe specific hardware features, or runs an Express app, and port that to Lambda as is. Yeah, that's not gonna quite work straight out of the box, but it's not gonna be that difficult to change.

Marcia Villalba: So the same considerations that apply whenever we choose between Lambda and Fargate apply here: the long-running processes, whether it's event-driven or a full lift-and-shift, the fact that it's a stateless service. All those considerations that we have for traditional functions we need to have in this case, because, at the end of the day, it's the same way of running the application.

Julian Wood: With the addition of that one little thing you mentioned earlier about the immutability. One of the powers of container images is that you have full control over them, but that means Lambda isn't going to automatically patch the function for you. So you do need to build something into your delivery life cycle, or your CI/CD process, so that when a new version of Node comes out or a new operating system patch comes out, you can just regenerate that container image. Hopefully, you've got some good testing, so you can do some of that automatically. That is one of the differences between a container image on Lambda and the previous zip way.
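A minimal sketch of such a rebuild step, with hypothetical image, account, and function names; the --pull flag re-fetches the latest patched base image:

```bash
# Rebuild against the latest base image and redeploy (illustrative names)
docker build --pull -t my-function .
docker tag my-function:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest

# Point the function at the freshly pushed image
aws lambda update-function-code \
  --function-name my-function \
  --image-uri 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest
```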

Recommended talk: Secure Container Images with Chainguard's Tooling: Melange, Apko & Wolfi • Matt Turner • GOTO 2023

Lambda Container Images: Streamlined Development and Local Testing

Marcia Villalba: Now that we've covered a little bit of the pros, the cons, and when to use it, let's dive into how we develop these functions. We said that we need a specification, that's the thing that changes. So when we are developing this function, we will create a specification file, and there you mentioned the base image. If we want, we can put in the input for our handler, the starting point of our application. Is there something else that we need to put in this specification?

Julian Wood: If you are keeping things simple, not really. You're gonna create an image, and it's just gonna be basically a Dockerfile. So the first line of the Dockerfile is going to be FROM some kind of base image, say we're talking Node, so FROM the Node base image. Your second line in the Dockerfile may be to copy all your local files into that base image.

That's literally the simplest Dockerfile you're gonna have, and you may then want to specify what the handler method is and which file it's in. So that's gonna be really, really simple. But people who package with Dockerfiles know that there's a build process you can also do in that. That may be running, you know, npm install, or pip install if you're in the Python world. And, you know, when you're building applications, there's obviously a lot that you can do, and that's fully supported in the Dockerfile.

So what you can do in your Dockerfile is, you know, pull the base layer, then say, "Oh, well, I'm gonna do a pip install for Python, then I'm gonna copy some files, then I'm gonna, you know, change something, grab something from another API, pull another image down, which is maybe some machine learning model. I'm gonna then pull from somewhere else and get, you know, some source data that I wanna store in my Docker image."

As you would in a normal Dockerfile, you can just go step by step by step and pull all the information in to create that artifact. And at the end, obviously, you would add your Lambda function code and the handler command you want to run. Lambda also supports the builds where you can do the...oh, my brain has gone fried...the multi-stage builds. So what you can also do is separate the build part of your Dockerfile from the actual image creation part of your Dockerfile.

So some people need to install a whole bunch of tools to be able to build their image. They can do that in one stage, and then they sort of start from scratch again with a minimal image, you know, some people use Alpine or minimal images like that, and go, "Okay, I've got all of that stuff I brought in, that's all the parts I need. Let me start building my actual final image from, say, the Node.js base image, and then just copy in those pre-compiled or brought-in files and build up my image." So multi-stage builds work really well with Dockerfiles for Lambda.
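Here is a hedged sketch of what that multi-stage pattern can look like for Python; the stage layout and file names are illustrative:

```dockerfile
# Build stage: install dependencies with full build tooling available
FROM python:3.12 AS builder
WORKDIR /build
COPY requirements.txt .
RUN pip install --target /build/deps -r requirements.txt

# Final stage: start from the Lambda base image and copy in only
# the results of the build stage plus the function code
FROM public.ecr.aws/lambda/python:3.12
COPY --from=builder /build/deps ${LAMBDA_TASK_ROOT}
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.handler"]
```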

Marcia Villalba: When developing, another important thing is testing. Is there some difference in how we test these types of applications using Docker images, compared to how we test traditional serverless functions?

Julian Wood: Yes, in a good way. Actually, I think it probably makes it a little bit easier to test your functions locally, because you can just use docker run. There are lots of other solutions for running zip-based functions locally: you can use SAM, or the Serverless Framework, or some CDK functionality to do it. But those are maybe not the tools you normally use. With container images, you can develop Lambda functions literally fully in Docker, and you can run that Docker image. There's a little emulator you add in, which pretends to be the Lambda API, and that's also just another line you put in your Dockerfile.

It means you can do a docker run locally. There are two ways I actually like to run it. One is when I'm really developing from scratch and I don't quite know what's going on in that function. What I can do is run that Docker image live, connect into it, and then literally type line by line, installing things, copying some code, and just iterating while I'm inside the container. And because it's local, I'm not limited by any 15-minute timeouts or anything; I'm just running something in a container, and that's really useful for testing out your whole build process...

Marcia Villalba: And it's fast...

Julian Wood: Yes, really fast.

Marcia Villalba: ...because no need to go to the cloud...

Julian Wood: Use everything local.

Marcia Villalba: ...you can do it in a tunnel, on the subway, or on an airplane.

Julian Wood: Then the second step is, once you've got your build process done in your container, you can run it locally in a bit more of an automated fashion, where you create a mock input to the Lambda function and, you know, send that to the container; it does its processing and then returns the result. So the container spins up, does its work, and then spins down afterwards. That emulates the way Lambda is gonna work in the cloud.
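As a sketch of that automated flow: the AWS base images bundle the Lambda Runtime Interface Emulator, so you can post a mock event to its local endpoint. The image name and event body here are illustrative:

```bash
# Build and run the image locally; the emulator listens inside on 8080
docker build -t my-function .
docker run -p 9000:8080 my-function

# In another terminal, send a mock event and read the function's result
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d '{"orderId": "123"}'
```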

Marcia Villalba: Here the same caveats apply as with any Lambda function, or any application that connects to AWS services: if your function is connecting to DynamoDB or S3, or SNS, whatever, we are local. So either you make that connection to the cloud or you mock it. But that happens with any application we run locally; we need to be aware of it. It's not like this Dockerfile is magical and will emulate the whole AWS cloud on your computer. It just runs the function, and if the function needs to communicate with the outside world, well, you need to handle that.

Julian Wood: And because of the way Docker works, there are also cool ways that you can inject credentials into your Docker container. So when it runs, it can, you know, assume another role, or use some sort of session credentials. That's actually really useful as well. One of the cool packaging things about Docker is that it is all isolated and separate from your local machine. You just inject your credentials in, do your database connection or whatever, and you can prove that it works.
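One common way to do that injection is to pass your shell's AWS credentials through as environment variables; a minimal sketch, assuming temporary session credentials are already set locally (region and image name are illustrative):

```bash
# Forward local AWS credentials into the container; the values come
# from your shell environment
docker run -p 9000:8080 \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN \
  -e AWS_REGION=eu-west-1 \
  my-function
```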

Recommended talk: Serverlesspresso: Building a Scalable, Event-Driven Application • Julian Wood • GOTO 2022

Deploying Container Images to Lambda: A Seamless Transition

Marcia Villalba: So now that we've developed this amazing application, how do we put it in the cloud?

Julian Wood: The way that your local Docker image gets connected to the Lambda service is by uploading the image to Amazon Elastic Container Registry. That's an AWS-managed container registry; it's sort of like Docker Hub, but it's AWS's. At the moment, Lambda only supports images from Elastic Container Registry. You use your normal command-line utilities: you do a docker tag to tag your image, a docker push up to the repository, and then when you configure your Lambda function, you just point it at that Docker image you created, and you're sort of done. So that's the cool part of it.

So when Lambda then deploys that function, the Lambda service is gonna pull that image from ECR and is then gonna run your Lambda function based on that image. It's a two-step process, but for people who are building any kind of container image, that's the normal way you would develop it. You don't need any other AWS tooling; you can just use the Docker CLI to create the image, and then the AWS CLI, the Lambda part of it, to actually build the function from that image.
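Put together, that flow might look like this; the account ID, region, repository, and role names below are all illustrative placeholders:

```bash
# Create a repository and authenticate Docker against ECR
aws ecr create-repository --repository-name my-function
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Tag and push the locally built image
docker tag my-function:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest

# Create the Lambda function from the pushed image
aws lambda create-function \
  --function-name my-function \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-function:latest \
  --role arn:aws:iam::123456789012:role/my-lambda-execution-role
```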

Marcia Villalba: And many of the frameworks support that. So if you're using SAM or CDK or something like that, it's pretty straightforward to do this deployment.

Julian Wood: And often it's behind the scenes, you don't even notice that's happening.

Marcia Villalba: You don't even need to worry. But I think now the most interesting part is running this thing. We talked about developing, we put it in the cloud, now it's a function, and nobody's using it. So now Marcia comes and does an API Gateway call to that function and wakes it up for the first time. What happens?

Julian Wood: Well, this is when we get back to the 10 gig, when people are going, "How?" Obviously, the thing a lot of people worry about is cold starts with Lambda, and cold starts are important...

Marcia Villalba: And 10 gig scares you.

Julian Wood: Yes, like, "You must be absolutely crazy. And I have a huge machine learning model, or I have, you know, the Python-based image or Java or .net. Like, are you people crazy? Because that's gonna be ridiculously scary for cold starts." No. So before I go into the why not, just to explain about cold starts, a cold start is as your code starts up in the Lambda, what we call, execution environment and that is just the isolated little micro VM that runs your Lambda code.

Obviously, Node has got a startup, or Python's got a startup, maybe you've gotta make a database connection, maybe you've gotta pull a secret from somewhere, and then your code is gonna run for each invoke. Those first initial steps are gonna happen every single time a new execution environment for your Lambda function starts up.

That's gonna take some time. But, as with all normal coding practices, that's just the way it happens. It can be a little bit more exacerbated in Lambda because, obviously, you're running more of these execution environments. But the cool thing is, the more of them you run, the fewer cold starts you actually get. Because once you do a cold start, Lambda is gonna run your invoke, and the next time an invoke comes in, it doesn't need to run that cold start process; it's just gonna go and run your function handler code. So that's really quick.

Marcia Villalba: If the execution environment is idle, let's say.

Julian Wood: It's up and running. So this is an issue for some developers, because some developers are testing their functions: they deploy a new version of the function to the cloud, then run the function, and they get a cold start. Okay. Then they do some tweaking, deploy a new version of the same function, get a cold start. They're like, "This is gonna be bad when I'm running in production." But actually, because Lambda is reusing these execution environments for subsequent invokes, the busier your application is, the fewer cold starts you're gonna get. That is a bit counterintuitive. We talk to many customers, and customers running Lambda functions at reasonable scale, you know, whether it's high scale or low scale, see somewhere between 0.5% and 1% of invokes being cold starts.

So, you know, less than 1% of your function invokes are going to be cold starts. And really, that only matters if you're running synchronous workloads, because then your client is waiting for a response. If you're doing some batch processing, or stream processing, or some asynchronous process where you eventually need to update a database, you don't really care about the cold starts. So it's not as big a deal as people think. So that's the history of why cold starts matter and what they are.
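This reuse is also why Lambda code is usually structured so that expensive setup sits outside the handler. A minimal Python sketch, with a hypothetical DynamoDB table name and event shape, of code that pays the setup cost only on a cold start:

```python
import boto3

# Init code: runs once per cold start, when the execution environment
# is created, and is then reused by every subsequent invoke
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # illustrative table name

def handler(event, context):
    # Handler code: runs on every invoke, reusing the warm client above
    table.put_item(Item={"id": event["orderId"]})
    return {"status": "stored"}
```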

So the fear is, like, okay, when I run a zip function, it copies down up to that 250 meg, and that's gonna take some time; 10 gig, you must be absolutely insane, there's no way I'm gonna wait around for that. So as part of this release, Lambda came out with what I think is some of the coolest technology that I've seen in Lambda. And this is one of the cool things with Lambda: there's a whole bunch of optimizations behind the scenes that you don't know about, or don't need to know about, and we just make things faster and more efficient for you.

So, two things. One is that container images actually aren't that full. Even though you can have up to a 10 gig function, most container images are actually really small, and what's actually used by the function is even tinier. Even if you are, you know, using Java or .NET or that kind of thing, and you pull down that image layer and then your function code, the actual number of bytes that are gonna be read from that container image is really tiny. If you're using .NET, Java, whatever runtime it is, even Node, you're not using all of Node, or, like, everything that Python could possibly do.

So the first thing Lambda did is understand that, well, actually, we don't need to download the whole container image. We only need to pull the things as you use them. So if you're going to use some library with Node, what it's gonna do is pull that part of the image as you use it. It's like lazy loading: instead of having to download the whole image before even running your function, Lambda can just say, "Well, when your function code runs, I'm gonna pull the things that you need," and that can be really efficient. It means the amount of data you're pulling down is probably 80% to 90% less. And that's, you know, super useful compared to pulling the whole 250 meg for a zip-packaged function; it's literally only the data that you're gonna access.

Marcia Villalba: I don't know if this is something that we have benchmarked internally, but I have seen tweets from people running container images faster than traditional zip functions. And it's like, okay, this is the magic of how it works: because in your traditional vanilla Lambda function, you pull everything down, and here you pull exactly what you need. So you might be pulling just a few megs and, boom, you're ready to go. So that's...

Julian Wood: It even gets better. If you think of all the different functions out there, if you are using the Node.js managed runtime or the Java managed runtime, how many different customers and how many different functions are actually using that same managed runtime? So why, when your function runs, do you need to copy all that information down? The second cool stage is where Lambda actually caches a lot of that information.

So, for example, if you were to build a Node.js 20 function today using a container image, I'm pretty confident that there are one or two other customers who are already using the Node.js 20 image. And so that is probably cached all through the Lambda fleet. And so when your function first starts and it says, "Oh, I need to pull something from Node.js..."

Marcia Villalba: I have this one.

Julian Wood: It's, "I have this one," and there are multiple levels of the cache and one of the levels of the cache is literally on the host, on the actual server that runs your code. So, you know, there's even nothing to download, even if Lambda has said, "Oh, you're gonna need to use that functionality that's in the cache," and so that's gonna be super-fast.

Marcia Villalba: So this is, in a nutshell, a little bit of the magic of how we can run container images on functions. But before finishing this episode, I want to let the audience know that we leave a lot of resources to deep dive and go hands-on with these things in the description of this episode. Because, well, this is just a short introduction, but you should try it, you should explore it. It's not hard. And we are still in this pay-per-use mode, so you can have your containers running in the free tier of Lambda with no problem.

Julian Wood: It's so easy to play with. The speed thing is just so cool. We will put the link in the show notes, but there are some public papers describing how this all works. So even though it's under the hood in Lambda, you know, we've explained it really well, and there are scientific papers showing how it all fits together, how we cache stuff. And in fact, we've come up with a whole cool way that we can cache things across multiple different functions entirely securely, without having to share any information between different functions or different customers.

So, for example, we talked about the Node 20 managed runtime coming down really quickly. If you've got a Python package or a Node package that someone else has used, even though we don't have visibility into different people's functions, the way we manage it on the system is with some clever caching technology that lets us de-dupe across functions without functions knowing about each other. That sounds almost impossible.

Part of it is a technique called convergent encryption, which always sounds like a cool name. And so, yeah, there are just so many different ways we can cache this and make it faster. And as Marcia said, you know, for many Lambda functions, it's actually faster with the container image than the zip archive, because we take advantage of this caching.

Marcia Villalba: I think with that, we can close this episode for today. It was lovely chatting with you, Julian. I hope the audience enjoyed this and learned something new. This is a very interesting topic that many organizations are taking advantage of, because we all know containers and we have tooling for them. So why not embrace that with Lambda? Thank you very much.

Julian Wood: No, thanks, Marcia. Always happy to chat. And, yes, try it out. It's really easy. If you're a container person and you were thinking Lambda was gonna be a bit weird because it's all a bit different, well, now the two worlds have merged and you can just use your container images to build your Lambda functions. Hopefully, it'll be a really great experience for you. And, yeah, there are lots of resources in the notes where you can delve deep into how it all works.

Marcia Villalba: Yes. Thank you. Bye.

Related

44:41 Serverless and Event-driven Patterns for GenAI • GOTO EDA Day Nashville 2023
41:58 Building Evolutionary Infrastructure • GOTO Amsterdam 2019
1:01:43 Democratizing Distributed Systems: Kubernetes, Brigade, Metaparticle and Beyond • GOTO Amsterdam 2018
51:06 Designing for the Serverless Age • GOTO Copenhagen 2017