
Deploy a Docker Container on AWS Lambda

AWS, Lambda, Docker, Serverless

This example uses the Serverless Framework to deploy and query a Node.js server running in a Docker container on AWS Lambda.


All of this project’s code can be found in the First Look monorepo on my GitHub.

Introduction

AWS Lambda is a powerful computing model because it gives developers a known execution environment with a specific runtime that accepts and runs arbitrary code. But this also causes problems if you have a use case outside the environments predetermined by AWS.

To address this issue, AWS introduced Lambda Layers, which let you package libraries and other dependencies as .zip files for your Lambda functions to use. But Layers still have limitations around testing, static analysis, and versioning. In December 2020, AWS Lambda added support for running functions from Docker container images.

Shortly after the announcement, the Serverless Framework team published an example demonstrating how to use the new feature. This blog post breaks down that example by building the project from scratch.

Create Project

Start by creating a blank directory and initializing a package.json file.

Terminal window
mkdir ajcwebdev-docker-lambda
cd ajcwebdev-docker-lambda
yarn init -y

Instead of globally installing the Serverless CLI, we will install the serverless package as a local dependency in our project.

Terminal window
yarn add -D serverless

As a consequence, sls commands must be prefixed with yarn or npx. You can refer to the official Serverless documentation if you prefer to install the CLI globally. After installing the serverless dependency, create the following files:

  • app.js for our Lambda function code that will return a simple message when invoked.
  • Dockerfile for defining the dependencies, files, and commands needed to build and run our container image.
  • serverless.yml for defining our AWS resources in code, which the Framework translates into a single CloudFormation template used to generate a CloudFormation stack.
  • .gitignore so we do not commit our node_modules or the .serverless directory that contains our build artifacts and is generated when we deploy our project to AWS.
Terminal window
touch app.js Dockerfile serverless.yml
printf 'node_modules\n.DS_Store\n.serverless\n' > .gitignore
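
At this point the project directory should look roughly like this (node_modules and yarn.lock come from installing the serverless dependency):

ajcwebdev-docker-lambda
├── .gitignore
├── Dockerfile
├── app.js
├── node_modules
├── package.json
├── serverless.yml
└── yarn.lock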

Serverless YAML Configuration File

The Serverless Framework lets you define a Dockerfile and point to it in the serverless.yml configuration file. The Framework makes sure the container image is available in ECR and set up with the configuration Lambda needs.

service: ajcwebdev-docker-lambda
frameworkVersion: '3'

AWS Provider

We select the AWS provider and include an ecr section for defining images that will be built locally and uploaded to ECR. The entire packaging process occurs in the context of the container. AWS uses your Docker configuration to build, optimize and prepare your container for use in Lambda.

Note: If you are using an Apple Silicon (M1) machine, you will need to uncomment the line that specifies arm64 as the architecture in the provider property.

provider:
  name: aws
  # architecture: arm64
  ecr:
    images:
      appimage:
        path: ./

Functions Property

The functions property tells the Framework which image to use by referencing the name (appimage) defined in the ecr section. The location of the Docker image's contents is set with that section's path property, and the specified folder should contain a Dockerfile along with the executable code for our function.

functions:
  hello:
    image:
      name: appimage

We use the same value for image.name as we do for the image we defined, appimage. It can be named anything as long as the same value is referenced. Here is our complete serverless.yml file:

serverless.yml
service: ajcwebdev-docker-lambda
frameworkVersion: '3'

provider:
  name: aws
  # architecture: arm64
  ecr:
    images:
      appimage:
        path: ./

functions:
  hello:
    image:
      name: appimage
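
Before deploying anything, you can optionally sanity-check the configuration with the print command, which resolves and prints the final serverless.yml:

Terminal window
yarn sls print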

Dockerfile

The easiest way to set up a Lambda-ready Docker image is to use one of the base images provided by AWS. The Amazon ECR Public Gallery contains a list of all available images. We are using the Node.js 14 image.

# Dockerfile
FROM public.ecr.aws/lambda/nodejs:14
COPY app.js ./
CMD ["app.handler"]

The CMD instruction points Lambda at a file called app.js that exports a function called handler. In the next section we will define the handler function and add the necessary code to our project. The CMD value would need to be modified if we renamed the app.js file or exported our function under a name other than handler.
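
For example, a hypothetical project that kept its handler in a file named index.js and exported it as main (both names are made up for illustration) would copy that file and point CMD at it instead:

# Dockerfile
FROM public.ecr.aws/lambda/nodejs:14
COPY index.js ./
CMD ["index.main"]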

Function Handler

We will now include the code that will be executed by our handler when the function is invoked. Add the following code to the app.js file in the root of the project:

app.js
'use strict'

module.exports.handler = async (event) => {
  const message = `Cause I don't want a server, but I do still want a container`
  return {
    statusCode: 200,
    body: JSON.stringify(
      { message }, null, 2
    ),
  }
}

app.js returns a JSON object containing a message clarifying exactly why anyone would ever want to do this in the first place.
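
Because the handler is just an exported async function, you can sanity-check it locally with Node before building the container. The following is a minimal sketch; test.js is a throwaway file name used only for this check and is not part of the deployment:

test.js
// Quick local check: call the exported handler with an empty event
// and print the response it returns.
const { handler } = require('./app')

handler({})
  .then((response) => console.log(response))
  .catch((error) => console.error(error))

Running node test.js should print the same statusCode and JSON body shown at the end of this post.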

Deploy to AWS

We can now build our container image, push it to ECR, and invoke our function.

Configure AWS Credentials

First, you will need to set your AWS credentials with the sls config credentials command. This step can be skipped if your machine already has AWS credentials configured, for example from a previously configured global install of the CLI.

Terminal window
yarn sls config credentials \
  --provider aws \
  --key YOUR_ACCESS_KEY_ID \
  --secret YOUR_SECRET_ACCESS_KEY
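
Alternatively, the Framework resolves credentials the same way the AWS SDK does, so exporting the standard environment variables for the current shell session also works:

Terminal window
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY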

Run Serverless Deploy Command

In order to build images locally and push them to ECR, you need to have Docker installed and running on your local machine.

Terminal window
yarn sls deploy

The sls deploy command deploys your entire service via CloudFormation. After the deployment finishes, the CLI prints information about your service.

Deploying ajcwebdev-docker-lambda to stage dev (us-east-1)

✔ Service deployed to stack ajcwebdev-docker-lambda-dev (183s)

functions:
  hello: ajcwebdev-docker-lambda-dev-hello
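
If you need this summary again later, the info command prints the details of the currently deployed service without redeploying:

Terminal window
yarn sls info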

Invoke the Deployed Function

The sls invoke command invokes a deployed function. It allows sending event data to the function, reading logs, and displaying other important information about the function invocation.

Terminal window
yarn sls invoke --function hello
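
The invoke command also accepts an event payload through the --data flag. Our handler ignores the incoming event, so the payload below is purely illustrative and the response is the same either way:

Terminal window
yarn sls invoke --function hello --data '{"name": "ajcwebdev"}'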

If your function and container were configured and deployed correctly, you will receive the following response:

{
  "statusCode": 200,
  "body": "{\n  \"message\": \"Cause I don't want a server, but I do still want a container\"\n}"
}
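
When you are finished with the example, the remove command tears down the deployed service by deleting its CloudFormation stack. Depending on your Framework version, the ECR repository created for the container image may need to be deleted separately in the AWS console:

Terminal window
yarn sls remove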