IT 2.0

Next generation IT infrastructures


Running the stock NGINX container image with AWS Lambda

Part of my job at AWS is to explore the art of possible. A few weeks ago I came across an open source project called re:Web. What intrigued me about re:Web is that it allows a traditional container image (wrapping a traditional “web service” application) to be repurposed and deployed to AWS Lambda. The idea for this blog was sparked by an issue that Aidan Steele opened on the re:Web project. The technique that re:Web implements was originally pioneered by Aidan himself with his Serverlessish prototype. This blog will focus on re:Web, but the outcome could be implemented in other ways, including Serverlessish. I’d also like to thank Aidan for his help with the prototype discussed in this blog (without his support I’d still be here trying to figure out how to map cache files to temp folders - who knew about /etc/nginx/conf.d/cachepaths.conf?!?).

Now that we are done praising Aidan (no, we are never done), let’s switch gears and talk about… how to run the stock NGINX container image in Lambda.

The way re:Web works is that it injects itself between Lambda and the actual web service application. Long story short: after a lot of trial and error, the following Dockerfile is what you need to re-package the stock NGINX image so that it runs in Lambda:

# syntax=docker/dockerfile:1.3-labs

FROM public.ecr.aws/apparentorder/reweb as reweb

FROM public.ecr.aws/nginx/nginx:latest
COPY --from=reweb /reweb /reweb

# setup the local lambda runtime (to run the image locally)
RUN curl -L -o /usr/bin/lambda_rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/download/v1.2/aws-lambda-rie-x86_64
RUN chmod +x /usr/bin/lambda_rie

###############################################################
########## start of custom tweaks - NGINX specific ############ 
###############################################################

# make nginx listen on 8090
RUN sed -i "s/listen       80/listen       8090/g" /etc/nginx/conf.d/default.conf

# move the nginx pid file to a writable directory
RUN sed -i "s,pid        /var/run/nginx.pid;,pid        /tmp/nginx.pid;,g" /etc/nginx/nginx.conf

# send the nginx logs to stdout and stderr (which also avoids writing to non-writable folders)
RUN ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log

# redirect all cache files to /tmp (writable)
COPY <<EOF /etc/nginx/conf.d/cachepaths.conf
client_body_temp_path /tmp/client_temp;
proxy_temp_path /tmp/proxy_temp_path;
fastcgi_temp_path /tmp/fastcgi_temp;
uwsgi_temp_path /tmp/uwsgi_temp;
scgi_temp_path /tmp/scgi_temp;
EOF

###############################################################
########### end of custom tweaks - NGINX specific ############# 
###############################################################

# reweb environment variables
ENV REWEB_APPLICATION_EXEC nginx
ENV REWEB_APPLICATION_PORT 8090
ENV REWEB_WAIT_CODE 200

ENTRYPOINT ["/reweb"]

This is what this (multi-stage) Dockerfile does:

  • it gets (FROM) the re:Web image to source the reweb binary
  • it gets (FROM) the stock NGINX image
  • it copies the re:Web binary into the NGINX image
  • it pulls the Lambda RIE (for local execution - only required if testing Lambda locally - highly recommended)
  • it tweaks the NGINX image to bypass (current) Lambda limitations:
    • /tmp is the only writable directory
    • can’t bind processes to ports <1024
  • it sets ENV variables to configure re:Web (e.g. note that 8090 is the port NGINX responds to)
  • it runs /reweb as the entrypoint
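Conceptually, what re:Web (and Serverlessish) do is run a small shim as the Lambda handler: it polls the Lambda Runtime API for events, translates each event into an HTTP request against the web server listening locally, and posts the response back. This is not re:Web’s actual code, just a hypothetical, heavily simplified Python sketch of the technique (the port and the fallback endpoint are assumptions):

```python
import base64
import json
import os
import urllib.request

# Lambda sets AWS_LAMBDA_RUNTIME_API; the fallback value here is made up
RUNTIME_API = os.environ.get("AWS_LAMBDA_RUNTIME_API", "127.0.0.1:9001")
APP_PORT = 8090  # the port the wrapped web server listens on

def event_to_request(event: dict) -> urllib.request.Request:
    """Translate an API Gateway HTTP API (v2) event into a local HTTP request."""
    ctx = event["requestContext"]["http"]
    url = f"http://127.0.0.1:{APP_PORT}{ctx['path']}"
    body = event.get("body")
    if body is not None:
        body = base64.b64decode(body) if event.get("isBase64Encoded") else body.encode()
    return urllib.request.Request(url, data=body, method=ctx["method"],
                                  headers=event.get("headers", {}))

def run():
    """Poll the Runtime API for events and proxy them to the local web server."""
    while True:
        # long-poll for the next invocation event
        with urllib.request.urlopen(
                f"http://{RUNTIME_API}/2018-06-01/runtime/invocation/next") as nxt:
            request_id = nxt.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.load(nxt)
        # forward the event to the local web server
        with urllib.request.urlopen(event_to_request(event)) as resp:
            payload = {
                "statusCode": resp.status,
                "isBase64Encoded": True,
                "body": base64.b64encode(resp.read()).decode(),
            }
        # post the HTTP response back as the function result
        urllib.request.urlopen(urllib.request.Request(
            f"http://{RUNTIME_API}/2018-06-01/runtime/invocation/{request_id}/response",
            data=json.dumps(payload).encode(), method="POST"))
```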

Please note that while this example talks about NGINX, you can extract an almost-common pattern from the above. All of these steps are required (and mostly identical) to make potentially any stock container image exposing a web service work in Lambda. The notable exception is the “custom tweaks” section, which is very image-specific.
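Extracted into a template, the pattern looks roughly like the sketch below (the angle-bracket placeholders are mine, to be replaced with values for your image; the tweaks required will vary):

```dockerfile
# syntax=docker/dockerfile:1.3-labs
FROM public.ecr.aws/apparentorder/reweb as reweb

# <your-web-service-image> is a placeholder for any stock image exposing a web service
FROM <your-web-service-image>
COPY --from=reweb /reweb /reweb

# image-specific tweaks go here:
# - redirect all writes (pid files, caches, logs) to /tmp, stdout or stderr
# - make the service listen on an unprivileged port (>= 1024)

ENV REWEB_APPLICATION_EXEC <your-startup-command>
ENV REWEB_APPLICATION_PORT <your-port>
ENV REWEB_WAIT_CODE 200

ENTRYPOINT ["/reweb"]
```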

Now onto the action. To complete the following steps you need an AWS account, Docker Desktop or Docker Engine installed locally (or anything that can build, push, run a container image really) as well as the AWS CLI installed and configured.

Local testing

The image can be built as follows:

$ docker build -t lambdanginx:latest .

You can now run the image locally using the Lambda Runtime Interface Emulator (RIE). To do so, override the entrypoint to call the rie binary and pass the reweb binary as the command:

$ docker run -it -p 9000:8080 --entrypoint /usr/bin/lambda_rie lambdanginx /reweb

This image runs fine locally (this log includes the launch + 3 invocations from another terminal):

$ docker run -it -p 9000:8080 --entrypoint /usr/bin/lambda_rie lambdanginx /reweb  
INFO[0000] exec '/reweb' (cwd=/, handler=)              
INFO[0012] extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory 
WARN[0012] Cannot list external agents                   error="open /opt/extensions: no such file or directory"
START RequestId: 76ccd182-f70d-4fc6-93ed-a6dfb3aea8c8 Version: $LATEST
re:Web -- SERVICE NOT UP: Get "http://localhost:80/": dial tcp 127.0.0.1:80: connect: connection refused
2021/10/30 16:29:52 [notice] 21#21: using the "epoll" event method
2021/10/30 16:29:52 [notice] 21#21: nginx/1.21.3
2021/10/30 16:29:52 [notice] 21#21: built by gcc 8.3.0 (Debian 8.3.0-6) 
2021/10/30 16:29:52 [notice] 21#21: OS: Linux 5.10.47-linuxkit
2021/10/30 16:29:52 [notice] 21#21: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/10/30 16:29:52 [notice] 22#22: start worker processes
2021/10/30 16:29:52 [notice] 22#22: start worker process 23
2021/10/30 16:29:52 [notice] 22#22: start worker process 24
2021/10/30 16:29:52 [notice] 22#22: start worker process 25
2021/10/30 16:29:52 [notice] 22#22: start worker process 26
2021/10/30 16:29:52 [notice] 22#22: start worker process 27
2021/10/30 16:29:52 [notice] 22#22: start worker process 28
127.0.0.1 - - [30/Oct/2021:16:29:52 +0000] "GET / HTTP/1.1" 200 615 "-" "Go-http-client/1.1" "-"
re:Web -- SERVICE UP: 200 OK
127.0.0.1 - - [30/Oct/2021:16:29:52 +0000] "GET / HTTP/1.1" 200 615 "-" "Go-http-client/1.1" "-"
END RequestId: 76ccd182-f70d-4fc6-93ed-a6dfb3aea8c8
REPORT RequestId: 76ccd182-f70d-4fc6-93ed-a6dfb3aea8c8    Init Duration: 0.46 ms    Duration: 66.09 ms    Billed Duration: 67 ms    Memory Size: 3008 MB    Max Memory Used: 3008 MB    
START RequestId: b65d72bf-519a-4583-9514-44b9a87278dd Version: $LATEST
127.0.0.1 - - [30/Oct/2021:16:37:56 +0000] "GET / HTTP/1.1" 200 615 "-" "Go-http-client/1.1" "-"
END RequestId: b65d72bf-519a-4583-9514-44b9a87278dd
REPORT RequestId: b65d72bf-519a-4583-9514-44b9a87278dd    Duration: 2.92 ms    Billed Duration: 3 ms    Memory Size: 3008 MB    Max Memory Used: 3008 MB    
START RequestId: 3fb34af9-7b4e-47b2-9c35-a5a127e1e835 Version: $LATEST
127.0.0.1 - - [30/Oct/2021:16:37:57 +0000] "GET / HTTP/1.1" 200 615 "-" "Go-http-client/1.1" "-"
END RequestId: 3fb34af9-7b4e-47b2-9c35-a5a127e1e835
REPORT RequestId: 3fb34af9-7b4e-47b2-9c35-a5a127e1e835    Duration: 1.92 ms    Billed Duration: 2 ms    Memory Size: 3008 MB    Max Memory Used: 3008 MB
    

The locally running Lambda function can be invoked by POSTing to the emulator’s invocation endpoint. This is how the function responds (with the NGINX default home page):

$ curl -X POST -d '{}' http://localhost:9000/2015-03-31/functions/function/invocations | jq -r '.body' | base64 -D
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1186  100  1184  100     2   231k    400 --:--:-- --:--:-- --:--:--  231k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
$ 
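One portability note: base64 -D is the macOS flag; on Linux (GNU coreutils) it is base64 -d. If you want something that works everywhere, here is a minimal Python sketch that unpacks the Lambda proxy response (the sample payload below is made up to mimic what re:Web returns):

```python
import base64
import json

def decode_lambda_response(raw: str) -> str:
    """Extract the HTTP body from a Lambda proxy-integration response."""
    resp = json.loads(raw)
    body = resp["body"]
    if resp.get("isBase64Encoded"):
        body = base64.b64decode(body).decode("utf-8")
    return body

# hypothetical sample response, shaped like the API Gateway proxy format
sample = json.dumps({
    "statusCode": 200,
    "isBase64Encoded": True,
    "body": base64.b64encode(b"<h1>Welcome to nginx!</h1>").decode(),
})
print(decode_lambda_response(sample))
```

You can pipe the curl output straight into it, e.g. curl -s -X POST -d '{}' http://localhost:9000/2015-03-31/functions/function/invocations | python3 decode.py (reading sys.stdin instead of the sample above).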

Cloud testing

Now onto the real thing.

Note I am using us-west-2 as the region in the example below. Change it as you see fit. Also remember to change the AWS account placeholder (123456789) to your real account.

Before we can create the Lambda function we need to upload the new container image to ECR. These 4 commands will:

  • create the ECR repository
  • login to ECR
  • tag the image
  • push the image to the repository
$ aws ecr create-repository --repository-name lambdanginx --region us-west-2
$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-west-2.amazonaws.com
$ docker tag lambdanginx:latest 123456789.dkr.ecr.us-west-2.amazonaws.com/lambdanginx:latest
$ docker push 123456789.dkr.ecr.us-west-2.amazonaws.com/lambdanginx:latest 

This is a CloudFormation stack that deploys the Lambda (courtesy of Aidan, again!). Save this file as cfn-nginx-lambda.yaml.

"Description" : "Running the NGINX image as a Lambda function"

Transform: AWS::Serverless-2016-10-31

Parameters:
  ImageUri:
    Type: String
    Description: "ECR image uri"

Resources:
  Function:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageUri: !Ref ImageUri
      Timeout: 10
      AutoPublishAlias: live
      Events:
        Http:
          Type: HttpApi

Outputs:
  Function:
    Value: !Ref Function.Version
  Url:
    Value: !GetAtt ServerlessHttpApi.ApiEndpoint

The template can be deployed with the following command (again, remember to check region and account ID):

$ aws cloudformation create-stack \
    --template-body file://./cfn-nginx-lambda.yaml \
    --parameters ParameterKey=ImageUri,ParameterValue="123456789.dkr.ecr.us-west-2.amazonaws.com/lambdanginx:latest" \
    --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \
    --stack-name nginx-lambda \
    --region us-west-2

This stack creates a Lambda function from the container image you just pushed and puts an API Gateway (HTTP API) endpoint in front of it. Note that create-stack returns immediately; wait for the stack to reach CREATE_COMPLETE (for example with aws cloudformation wait stack-create-complete --stack-name nginx-lambda --region us-west-2) before querying its outputs.

You can find the API Gateway endpoint by querying the stack:

$ aws cloudformation describe-stacks --stack-name nginx-lambda --region us-west-2 \
    --query "Stacks[0].Outputs[?OutputKey=='Url'].OutputValue" --output text
https://zi0m7aklv9.execute-api.us-west-2.amazonaws.com

And then hit the endpoint with curl and enjoy the NGINX default page coming off of a Lambda function (through said API Gateway):

$ curl https://zi0m7aklv9.execute-api.us-west-2.amazonaws.com 
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Alternatively, you can hit the same endpoint via a browser.

From here, you could do a quick scaling test and observe how Lambda responds to load. Below I have used ApacheBench (ab) to hit the API Gateway endpoint with a couple of different profiles:

while true; do ab -n 10 -c 5 https://zi0m7aklv9.execute-api.us-west-2.amazonaws.com/; sleep 2; done

while true; do ab -n 100 -c 50 https://zi0m7aklv9.execute-api.us-west-2.amazonaws.com/; sleep 2; done

I ran the first test profile (10 requests with concurrency 5) every 2 seconds in a loop for roughly 30 minutes, and the second test profile (100 requests with concurrency 50) for another 30-ish minutes, and watched how our Lambda reacted.

Conclusions

In this post I have tried to demonstrate how easy it is to wrap the stock NGINX image and run it with AWS Lambda. I did not have a particular use case in mind for this; I was just exploring the art of possible through some hacking. If this is of interest to you and you want to chat about it, please reach out! I want to hear how you are thinking about it.