The problem (as always)
In my career, I’ve often seen AWS Lambdas deployed directly without the source code being committed to GitHub. Even worse, many projects lack a defined architecture or tooling that allows testing Lambdas locally before deployment.
Here, I’ll walk through how to introduce some basic tools to fix this.
Set up AWS locally
To run and debug Lambdas locally, we need an execution environment that mimics AWS services. For this, I recommend using LocalStack with Docker:
```yaml
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4510-4559:4510-4559"
    environment:
      - SERVICES=lambda,logs
      - DEBUG=1
      - LAMBDA_EXECUTOR=docker
      - DATA_DIR=/tmp/localstack/data
      - AWS_DEFAULT_REGION=eu-north-1
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}
      - USE_SSL=false
    volumes:
      - "./localstack/volume:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
```
There are a few things to pay attention to:

- In the `SERVICES` environment variable we define two services:
  - `lambda`, which indicates we want LocalStack to handle Lambda execution.
  - `logs`, so that any log output of the Lambda ends up on stdout. This is the equivalent of the CloudWatch service.
- SSL is disabled through `USE_SSL`.
- The spawned container needs to communicate with Docker in order to run the Lambda. For this we need to:
  - set `LAMBDA_EXECUTOR` to `docker`
  - map the Docker socket file inside the container via a bind mount, by adding `/var/run/docker.sock:/var/run/docker.sock` to the volumes section.

LocalStack is available through http://localhost:4566, as defined in:

```yaml
ports:
  - "4566:4566"
```
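Once the container is up, a quick sanity check that LocalStack is reachable can help before going further. A small Python sketch is shown below; note that the health endpoint path is an assumption for recent LocalStack versions (older releases expose it under `/health` instead):

```python
# Quick sanity check that LocalStack is reachable on port 4566.
# The /_localstack/health path is an assumption for recent LocalStack versions.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:4566/_localstack/health") as resp:
    status = json.load(resp)

# Each enabled service reports a state such as "available" or "running"
print(status.get("services", {}))
```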
Download Lambda code
Lambda code is usually stored as a .zip file, which contains both the application code and its dependencies.
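If clicking through the console is inconvenient, the deployment package can also be fetched programmatically. A minimal sketch using `boto3` (the function name, region, and output file are placeholders; it assumes your AWS credentials are configured):

```python
# A minimal sketch using boto3 to fetch the deployed package.
# "my-function" and the region are placeholders - adjust to your setup.
import urllib.request

import boto3

client = boto3.client("lambda", region_name="eu-north-1")

# get_function returns a pre-signed S3 URL pointing at the deployment .zip
response = client.get_function(FunctionName="my-function")
code_url = response["Code"]["Location"]

urllib.request.urlretrieve(code_url, "my-lambda-orig.zip")
print("Saved deployment package to my-lambda-orig.zip")
```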
Detect lambda entrypoint
Lambdas are Node.js or Python scripts; the entrypoint is usually a function inside a specific file.
For example, in a Python Lambda the entrypoint is often defined as `file_name.handler_function`, e.g. `lambda_function.lambda_handler`. In this case, AWS starts execution from the `lambda_handler` function inside `lambda_function.py`.
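For reference, such a handler typically looks like the sketch below (the body is purely illustrative, not the actual Lambda's code):

```python
# lambda_function.py - an illustrative sketch of a handler, not the real code
def lambda_handler(event, context):
    # 'event' holds the invocation payload, 'context' the runtime information
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```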
Identifying the entrypoint is crucial because it helps determine dependencies and setup.
Installing the Required Python or Node.js Version
AWS Lambdas run with specific Python (or Node.js) runtimes (e.g., Python 3.9, 3.10).
You can check which runtime is being used under the Code tab in the AWS Lambda console.
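If console access is inconvenient, the runtime can also be read with `boto3`; a small sketch (the function name and region are placeholders):

```python
# A sketch for reading the configured runtime with boto3 instead of the console.
# The function name and region are placeholders.
import boto3

client = boto3.client("lambda", region_name="eu-north-1")
config = client.get_function_configuration(FunctionName="my-function")
print(config["Runtime"])  # e.g. "python3.9"
```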
Organizing Code and Dependencies
Once you’ve downloaded the Lambda .zip, unzip it into a working directory. I recommend creating two folders:
- Original Lambda → the raw contents of the .zip (for example `my-lambda-orig`).
- Clean Repo → a structured project where you separate code from dependencies (for example `my-lambda`). The following structure is recommended:
  - `src` → containing the Lambda source code
  - `tools` → for useful helper tools
Why separate them?
AWS often packages Lambdas with both source code and dependencies. Instead, you’ll want to manage dependencies with pip (and later, requirements.txt/poetry/pipenv).
Here’s the process:
- Copy the main Lambda file (e.g., `lambda_function.py`) into `./src`.
- Inspect the imports at the start of the Lambda file (e.g., `lambda_function.py`). There you can distinguish which files are part of the project and which are 3rd-party libraries. Copy any non-3rd-party modules into the `src` folder of the clean repo.
- From the import names you can detect which dependencies are 3rd-party; it is a good idea to keep a list of those as well.
- Copy only the application code (your .py files) into the clean repo.

A common library is `boto3`, which is used for interfacing with AWS itself.
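As an illustration, the top of a downloaded `lambda_function.py` might look something like the sketch below (apart from `boto3`, the module names are hypothetical):

```python
# Illustrative sketch of the first lines of a downloaded lambda_function.py.
# Apart from boto3, the module names are hypothetical.
import json   # standard library - nothing to install
import os     # standard library

import boto3     # 3rd-party - note it down for requirements.txt
import requests  # 3rd-party (hypothetical) - note it down for requirements.txt

import db_helpers                # local module - copy db_helpers.py into ./src
from utils import parse_event    # local module - copy utils.py into ./src
```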
Once the code is separated, initialise git and do a first commit:

```bash
git init
git add .
git commit -m "Cleaned Code"
```
Create a virtualenv
We need a list of the Lambda's dependencies; for that we first initialise a virtual environment:

```bash
python -m venv ./.venv
echo ".venv" > .gitignore
git add .gitignore
git commit -m "Ignoring virtualenv"
source .venv/bin/activate
```
Then, using `pip install`, we install the necessary dependencies. For example, for `boto3` we run:

```bash
pip install boto3
```
Once the dependencies are installed we need to record them in a list; for that we run:

```bash
pip freeze > ./requirements.txt
git add requirements.txt
git commit -m "Added Dependencies"
```
Now every time we need to install them we can do:

```bash
pip install -r requirements.txt
```
Detect environment variables
Lambdas also retrieve settings externally from environment variables. For Python Lambdas, scan for strings such as `os.environ.get(`; the parameter passed contains the environment variable that is required.
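For example, a Lambda reading its configuration this way might look like the sketch below (the variable names are hypothetical):

```python
# Illustrative sketch - TABLE_NAME and API_TIMEOUT are hypothetical variables
import os

TABLE_NAME = os.environ.get("TABLE_NAME", "my-default-table")
API_TIMEOUT = int(os.environ.get("API_TIMEOUT", "30"))

def lambda_handler(event, context):
    print(f"Using table {TABLE_NAME} with a timeout of {API_TIMEOUT}s")
```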
Deploy on LocalStack
Now that we have the code sorted out we can create a Lambda in LocalStack. For this we need to package the Lambda as a new zip:

```bash
## Package lambda
mkdir package
echo "package" >> .gitignore
git add .gitignore
git commit -m "Ignoring support folder for packaging"

# Copy code
cp -r ./src/* package/

# Install dependencies into the package folder
pip install -r requirements.txt -t package

# Zip package
cd package
zip -r9 ../function.zip .
cd ../

# Package folder is temporary
rm -rf ./package
```
The commands above do 3 things:

- Create an intermediate folder named `package` where the code is staged. This folder should be ignored by git.
- Copy all the Lambda source into the `package` folder. Keep in mind we need to copy the contents of `src` and not the `src` folder itself.
- Install all dependencies into the `package` folder. This is achieved by providing the folder name with the `-t` parameter.

The method described above can also be used to package the Lambda for production. The resulting zip file can be uploaded, replacing the Lambda's code.
Create the function on LocalStack
For this we can run:
```bash
aws --endpoint-url=http://localhost:4566 lambda create-function \
    --function-name "my-function" \
    --runtime "python3.9" \
    --role "arn:aws:iam::000000000000:role/lambda-role" \
    --handler "lambda_function.lambda_handler" \
    --zip-file "fileb://function.zip" \
    --environment Variables="{VAR1=\"Hello\"}"
```
The command above creates the function `my-function`, which is the Lambda we want to run locally. Pay attention to the following arguments:

- `--endpoint-url`: in our case we point it at the LocalStack setup. If omitted, the CLI would try to create the Lambda on actual AWS, which is not what we want here.
- `--function-name`: a distinctive function name. Each Lambda should have its own.
- `--role`: keeping the value `arn:aws:iam::000000000000:role/lambda-role` is fine every time you create a new Lambda. LocalStack does not check roles, but we need to provide one to avoid unwanted behaviour.
- `--handler`: use the same handler that the actual Lambda on AWS uses. Here we define the entrypoint used for Lambda execution.
- `--zip-file`: the file we created containing code + dependencies.
- `--environment`: if no environment variables are used, omit it. The value is a string enclosed in `{}` with this format (do not confuse it with JSON): `{VARIABLE_NAME=VALUE,VARIABLE_NAME2=VALUE2}`
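To verify that the function was created and actually executes, it can be invoked against the LocalStack endpoint. A minimal `boto3` sketch is shown below (the payload is just an example; `aws lambda invoke` with `--endpoint-url` works equally well):

```python
# Minimal sketch: invoke the freshly created function through LocalStack.
# Dummy credentials are enough, LocalStack does not validate them.
import json

import boto3

client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",
    region_name="eu-north-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

response = client.invoke(
    FunctionName="my-function",
    Payload=json.dumps({"name": "LocalStack"}),  # example payload only
)
print(response["Payload"].read().decode())
```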
Develop and redeploy
During development, if a change is needed you have to re-package the Lambda. Re-creating the function is not needed unless the LocalStack container is terminated.
Updating the code is done like this:
```bash
aws --endpoint-url=http://localhost:4566 lambda update-function-code \
    --function-name "my-function" \
    --zip-file "fileb://function.zip"
```
Whereas if environment variables need to change, you can do it like this:

```bash
aws --endpoint-url=http://localhost:4566 lambda update-function-configuration \
    --function-name "my-function" \
    --environment Variables="{VARIABLE_NAME=VALUE,VARIABLE_NAME2=VALUE2}"
```
Misc tips
If an extra dependency is needed, it is a good idea to activate the generated virtual environment, install it via pip, and refresh requirements.txt with `pip freeze`. Then re-package and update the code on LocalStack as described above.