Building Scalable Serverless Applications with Terraform, Docker and AWS Lambda (Part 1/2)

Ragunath Rajasekaran
7 min read · Aug 9, 2023

How Terraform, Docker and AWS Lambda Revolutionise Application Development and Deployment.

Build scalable, efficient, and cost-effective serverless apps

Docker and Terraform offer significant benefits when working with AWS Lambda. Docker’s containerization capabilities provide portability, simplified dependency management, and consistent execution environments for your Lambda functions. Terraform’s Infrastructure as Code approach enables better infrastructure management, scalability, version control, and infrastructure consistency. By leveraging these technologies together, you can enhance the efficiency, maintainability, and scalability of your serverless applications, allowing you to focus on delivering value and innovation.

Whether you’re a seasoned developer or new to serverless architecture, this guide equips you with the knowledge and skills to build scalable, efficient, and cost-effective serverless applications using Terraform, Docker, and AWS Lambda.

Let’s dive in and unlock the full potential of these technologies in building your next-generation serverless applications.

What will we cover in this tutorial?

Part 1:

1. Benefits of Using Terraform and Docker with AWS Lambda

2. Project Setup

2.1. Prerequisites and Setup

2.2. Folder Structure

3. Python & Docker Image

3.1. Python Code

3.2. Dockerfile

3.3. AWS Lambda Docker Validate

4.0. Terraform Basics

5.0. Build Docker Image Using Terraform

Part 2:

6.0. Terraform Config to Push Docker Image to AWS ECR

7.0. Terraform Config for Lambda

8.0. Deploying the Lambda Function with Terraform

9.0. Cleanup and Resource Management

1. Benefits of Using Terraform and Docker with AWS Lambda:

  • Simplified deployment: Terraform enables infrastructure as code, making it easier to manage AWS resources and their dependencies.
  • Reproducible builds: Docker provides a standardized environment, ensuring consistent builds across development, testing, and production environments.
  • Portability: With Docker, the Lambda function can be packaged into a self-contained image, allowing for seamless deployment across different platforms.
  • Version control: Both Terraform and Docker enable version control, facilitating collaboration and maintaining a history of changes.
  • Scalability and reliability: AWS Lambda’s auto-scaling capabilities combined with Docker’s containerization allow for efficient resource utilization and robust application performance.

2. Project Setup

2.1. Prerequisites and Setup:

Before diving into the implementation, ensure that Docker and Terraform are installed on your local machine. You will also need an AWS account with appropriate permissions to create Lambda functions, IAM roles, and ECR repositories.
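A quick way to confirm the prerequisites are in place is to run the commands below. These are standard CLI checks rather than anything specific to this project; the last one assumes the AWS CLI is also installed and configured.

docker --version              # Docker is installed
terraform version             # Terraform is installed
aws sts get-caller-identity   # AWS credentials are configured (requires the AWS CLI)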

2.2. Folder Structure:

Throughout this tutorial, we will write Python, Terraform, and Docker code. AWS Lambda supports various runtime environments; here, we will use the Python runtime. The Python code will be containerized using a Dockerfile, and the AWS resources will be configured with Terraform. It is important to choose a folder structure that accommodates each part of this stack.

  • Dockerfile: The Dockerfile will live in the root directory.
  • app: This directory will contain the Python code for the application. Any Python file used by the Lambda function is placed here so that it is bundled when the Docker image is built. This directory will also contain the requirements.txt file used to track external dependencies.
  • *.tf: Multiple *.tf files will exist, and they can all be placed in the root directory.

Let’s title this project aws-lambda-terraform-docker, and we will create a new directory with the same name.

Look at the folder structure we’ll be using for this project.

➜  aws-lambda-terraform-docker tree .
.
├── Dockerfile
├── app
│   ├── main.py
│   └── requirements.txt
└── *.tf

3. Python & Docker Image

3.1 Python code:

We have chosen the Python runtime environment for AWS Lambda. While configuring the Lambda function in AWS, we will specify the name of the Python file (main.py) and the handler function (lambda_handler) that is invoked when the Lambda function is called. I’ll configure the following details:

  • Name of the Python file: main.py
  • Function name: The method will be named lambda_handler, and it will take event and context as input parameters.

Now that we understand the structure of a Python Lambda handler, what will this Lambda actually do? We’ll keep the work as basic as possible: receive the request, extract the name, and return a formatted greeting along with the current time as a JSON response.

Let’s create the app folder (we’ve decided to keep all Python-related work in this folder) and create the main.py file. I suggest using requirements.txt to record external dependencies. We are not going to use any external dependencies for this simple use case, but we will learn how to bundle Python code with external dependencies as part of this tutorial.

import json
from datetime import datetime

def lambda_handler(event, context):
    # Extract the name from the incoming event
    name = event['name']

    # Lambda serializes the returned dict to JSON, so convert the timestamp to a string
    response = {
        'message': f'Hello, {name}!',
        'time': datetime.now().isoformat()
    }

    return response

Create a requirements.txt file and leave it empty for now.
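Before containerizing, you can sanity-check the handler with a small throwaway script run from the project root (an illustration only, not part of the project files):

# test_local.py — hypothetical throwaway script, run from the project root
from app.main import lambda_handler

# Simulate the event payload Lambda will receive; the context argument is unused here
event = {'name': 'AWSLambdaTerraformDocker'}
print(lambda_handler(event, None))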

3.2 Dockerfile:

To incorporate the Python code into a Docker image, we need to configure a Dockerfile.

  1. Create a file named Dockerfile in the project directory.
  2. In the Dockerfile, add the following instructions to build your Docker image. Here's an example:
FROM amazon/aws-lambda-python:latest

COPY app/requirements.txt .

RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

COPY app/main.py ${LAMBDA_TASK_ROOT}

CMD [ "main.lambda_handler" ]

Ref: https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-instructions
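Note that the latest tag is convenient but not pinned. If you prefer reproducible builds, one commonly used alternative is the official base image from the AWS public ECR gallery with an explicit Python version, for example:

FROM public.ecr.aws/lambda/python:3.11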

Let’s create the Docker image using the following command.

docker build -t aws-lambda-terraform-docker .
Python-based Docker image creation

3.3. AWS Lambda Docker Validate:

Let’s verify that the Lambda (Docker image) is functioning as intended by running the Docker image on the local machine and examining the output.

Let’s run the Docker image using the command below. Here, we expose the Lambda on port 9999.

docker run -p 9999:8080 aws-lambda-terraform-docker

Validate the AWS Lambda Docker image with a curl invocation:

curl -XPOST "http://localhost:9999/2015-03-31/functions/function/invocations" -d '{"name":"AWSLambdaTerraformDocker"}' | jq
Invocation of the Lambda from a local machine using curl. The left side displays the docker run output, while the right side displays the curl command.
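Based on the handler above, the response should look roughly like this (the timestamp will differ on every invocation):

{
  "message": "Hello, AWSLambdaTerraformDocker!",
  "time": "2023-08-09T10:15:30.123456"
}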

4.0. Terraform Basics

Definition from the official website: “HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files.”

Using Terraform configuration files, we can create both on-premises and cloud (AWS) resources, and infrastructure changes are deployed with the terraform command-line tool. To work with Terraform, we must be familiar with the following fundamental commands.

terraform init: Retrieves the provider dependencies and generates a lock file to ensure the integrity of the dependency versions. It also establishes a backend for managing infrastructure state.

terraform plan: Outlines the resources that will be added, changed, or destroyed.

terraform apply: Applies the infrastructure changes.
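Taken together, the typical local workflow runs these commands in order:

terraform init    # download providers, create the lock file and state backend
terraform plan    # preview the changes Terraform intends to make
terraform apply   # apply those changes (prompts for confirmation)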

Let’s explore these three fundamental Terraform commands by building a Docker image from our local machine.

5.0. Build Docker Image Using Terraform

  1. Create a file named docker.tf in the project directory.
  2. In the docker.tf file, define the Docker resources using Terraform. Here's an example:
resource "null_resource" "docker_build" {
triggers = {
files = "${filebase64sha256("Dockerfile")}"
}

provisioner "local-exec" {
command = <<EOT
docker build -t aws-lambda-terraform-docker:latest .
EOT
}
}
  • The null_resource resource named docker_build triggers the Docker image build whenever the Dockerfile changes, because its triggers map hashes the Dockerfile (an optional refinement that also watches the application files is sketched after this list).
  • In the provisioner block of the docker_build resource, we execute a local command using the local-exec provisioner. This command builds the Docker image with the specified tag.
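As a possible refinement (a sketch, not part of the configuration above), the triggers map can also hash the application files, so the image is rebuilt when main.py or requirements.txt change and not only when the Dockerfile does:

resource "null_resource" "docker_build" {
  triggers = {
    # Rebuild whenever any of these files change (paths assume the folder structure above)
    dockerfile   = filebase64sha256("Dockerfile")
    main_py      = filebase64sha256("app/main.py")
    requirements = filebase64sha256("app/requirements.txt")
  }

  provisioner "local-exec" {
    command = "docker build -t aws-lambda-terraform-docker:latest ."
  }
}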

5.1.1. Terraform Init:

Terraform init log

terraform init performed the three actions listed below, which you can verify in the directory tree that follows.

  1. Retrieved the provider dependencies at the correct versions.
  2. Created a lock file (.terraform.lock.hcl).
  3. Created the state file (terraform.tfstate).
➜  aws-lambda-terraform-docker tree . -a
.
├── .terraform
│   └── providers
│       └── registry.terraform.io
│           └── hashicorp
│               └── null
│                   └── 3.2.1
│                       └── darwin_amd64
│                           └── terraform-provider-null_v3.2.1_x5
├── .terraform.lock.hcl
├── Dockerfile
├── app
│   ├── main.py
│   └── requirements.txt
├── docker.tf
└── terraform.tfstate

8 directories, 7 files

5.1.2. Terraform Plan:

This command outlines changes to the resource lifecycle: it determines whether the Docker image resource will be added, changed, or destroyed in this step. For this example, the plan will report “1 to add”, since we are executing it for the very first time.

Terraform plan log

5.1.3. Terraform Apply:

While executing terraform apply, we can observe that Terraform requests confirmation before applying. If you type yes, Terraform will build the Docker image.

Terraform Apply Log
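If you prefer to skip the interactive confirmation, for example in a CI pipeline, Terraform’s -auto-approve flag applies the plan without prompting:

terraform apply -auto-approve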

Now we can see the power of Terraform configuration together with its three core commands.

What have we done so far?

  1. Since we intended to use the AWS Lambda Python runtime, we wrote the main.py Python file.
  2. The request was processed by the lambda_handler function, which returned the response in JSON format.
  3. This Python code was containerized with Docker, and its output was validated by locally executing the Docker image.
  4. A Docker image was created using Terraform.

Stay tuned for Part 2 of this blog, in which we will discuss the subsequent items:

  1. Publish the Docker image to the AWS Elastic Container Registry (ECR).
  2. Configure AWS Lambda using Terraform and the AWS ECR image.
  3. Create the resources using the Terraform configuration.
  4. Destroy the created resources using terraform destroy.

Ragunath Rajasekaran is a Senior Software Lead Engineer at Optum, with extensive experience as a polyglot developer. He is proficient in SpringBoot, Big Data technologies (Spark and Hive), React, and iOS development. He specializes in the AWS cloud platform, employing infrastructure as code (IaC) with Terraform. Ragunath actively engages on platforms such as LinkedIn, GitHub, Dev.to, and Medium, and can be reached through email.

If you haven’t already, you may want to check out his previous articles on SpringBoot, the VSCode development container, Python and React.

