Updating the Yelb Ruby Lambda functions and the S3 static website template

Yelb is the demo application I use to experiment with and learn new technologies. A few years ago I refactored the yelb-appserver component to run on Lambda and the yelb-ui component to be hosted on S3. At the time of this writing, this folder in the Yelb repository describes the architecture for this deployment model.

Note that I am far from happy with the deployment mechanism I have right now. It's basically a shell script that deploys a CloudFormation template (which contains the DynamoDB tables as well as the Lambda functions) and that clones a source S3 bucket holding the JavaScript for the user interface into a target bucket. In the fullness of time I want to turn this script into a proper IaC artifact (likely CDK).
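I am not reproducing that script here, but a minimal sketch of what it does would look like this (template, stack, and bucket names below are placeholders, not the actual ones):

aws cloudformation deploy \
    --template-file yelb-lambda-cloudformation.yaml \
    --stack-name yelb-serverless \
    --capabilities CAPABILITY_IAM

aws s3 sync s3://<source-ui-template-bucket> s3://<target-website-bucket>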

I am writing this blog as a self-note to describe the steps required to update the zip and S3 template artifacts. I want to use these notes to build better automation in the future (AND to help anyone who may have a similar need).

Yelb application server as a set of Lambda functions

Some background for context first. When I introduced Lambda support for Yelb, I picked the Ruby runtime that Lambda supported at the time (2.5). This worked fine in combination with the Ruby version (2.4) I was using, at the same time, in my Dockerfile.

To build the Lambda code artifact (a zip file hosted in a bucket in us-west-2) I used the following two commands (they were expected to be launched from the yelb-appserver folder in the repo):

docker run -v "$PWD":/var/task lambci/lambda:build-ruby2.5 /bin/bash -c "yum -y install postgresql-devel postgresql-libs ; bundle config --delete frozen ; bundle install ; bundle install --deployment; mkdir lib; cp /usr/lib64/libpq.so.5 ./lib/libpq.so.5"

zip -r yelb-appserver-lambda.zip getvotes_adapter.rb getstats_adapter.rb pageviews_adapter.rb hostname_adapter.rb restaurant_adapter.rb vendor modules lib

Note how I had to install the required gems (e.g. for Postgres) as well as extract the libpq.so.5 library and add it to the zip artifact.
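As a sanity check (not part of the original script) you can confirm that the shared library actually landed in the archive:

unzip -l yelb-appserver-lambda.zip | grep libpq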

The zip artifact originally generated had been working for a number of years, until something happened.

Early this year I had to update my Lambda Ruby runtime because the version I was using originally (Ruby 2.5) went out of support (the table in this Lambda documentation page outlines the Lambda runtime life cycle). This is the commit for that update in my CloudFormation template.
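The change itself boils down to bumping the Runtime property of the Lambda functions in the template. If you want to check (or patch) an already deployed function out of band, the AWS CLI can do it; the function name below is hypothetical:

aws lambda get-function-configuration --function-name yelb-getvotes --query Runtime
aws lambda update-function-configuration --function-name yelb-getvotes --runtime ruby2.7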

This has (presumably) caused my old yelb-appserver artifact and the new runtime to get out of sync: my functions would no longer work and started spitting errors. To be clear, my source code kept evolving over time and I had indeed moved to a new runtime in the Dockerfile (you can see the commit here), but I had never recreated the Lambda artifact.

I revamped my old script to generate a brand-new zip artifact for my Lambda functions (using the new Ruby 2.7 Lambda image) but I had a hard time stabilizing the commands. First, the bundling complained that the Postgres version I was using was too old (I fixed that by force-installing Postgres 10 - this article was useful to me). Second, I had problems with the Lambda functions requiring additional shared libraries. Again, luckily, we stand on the shoulders of giants and I noticed other developers having had similar issues moving from Ruby 2.5 to Ruby 2.7. Inspired by their discovery (thanks!) I was able to tweak my scripts and make them work with the new runtime.

At the time of this writing (December 2022) this is the script I am using to generate the yelb-appserver zip file to be used with the Lambda Ruby 2.7 runtime:

docker run --rm -v "$PWD":/var/task lambci/lambda:build-ruby2.7 /bin/bash -c \
    "amazon-linux-extras install postgresql10 epel ; \
    yum -y install postgresql-devel ; \
    bundle config --delete frozen ; \
    bundle install --path vendor/bundle --clean ; \
    mkdir -p lib ; \
    cp -a /usr/lib64/libpq.so.5.10 /var/task/lib/libpq.so.5 ; \
    cp -a /usr/lib64/libldap_r-2.4.so.2.10.7 /var/task/lib/libldap_r-2.4.so.2 ; \
    cp -a /usr/lib64/liblber-2.4.so.2.10.7 /var/task/lib/liblber-2.4.so.2 ; \
    cp -a /usr/lib64/libsasl2.so.3.0.0 /var/task/lib/libsasl2.so.3 ; \
    cp -a /usr/lib64/libssl3.so /var/task/lib/ ; \
    cp -a /usr/lib64/libsmime3.so /var/task/lib/ ; \
    cp -a /usr/lib64/libnss3.so /var/task/lib/ ; \
    cp -a /usr/lib64/libnssutil3.so /var/task/lib/"

zip -r yelb-appserver-lambda.zip getvotes_adapter.rb getstats_adapter.rb pageviews_adapter.rb hostname_adapter.rb restaurant_adapter.rb vendor modules lib

The resulting zip artifact is made available for the CloudFormation template to pull and use to configure the Lambda functions.
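For completeness, making the artifact available is just a copy of the zip file to that bucket in us-west-2 (the bucket name below is a placeholder):

aws s3 cp yelb-appserver-lambda.zip s3://<yelb-artifacts-bucket>/yelb-appserver-lambda.zip --region us-west-2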

Yelb user interface as an S3 website

While the user interface did not break (I think), the JavaScript artifacts that I have had in the source S3 bucket for a while have diverged from the latest updates to the user interface code. In other words, just like for the application server, I never re-created the site template (while I have indeed rebuilt the yelb-ui container image at every update).

The commands to generate the JavaScript for the user interface hosted in the second source bucket have never been publicly documented; they were roughly based on the sequence in the yelb-ui Dockerfile.

At the time of this writing I can run the following docker run command to generate the site template files to be hosted in the source S3 bucket (note that this command needs to be launched from the yelb-ui folder in the repo):

docker run --rm -v "$PWD":/yelb-ui node:12.22 /bin/bash -c \
    "cp -r /yelb-ui/clarity-seed-newfiles /clarity-seed-newfiles ; \
    npm install -g @angular/cli@6.0.0 ; \
    npm install node-sass@4.13.1 ; \
    git clone https://github.com/vmware/clarity-seed.git ; \
    cd /clarity-seed ; \
    git checkout f3250ee26ceb847f61bb167a90dc957edf6e7f43 ; \
    cp /clarity-seed-newfiles/src/index.html /clarity-seed/src/index.html ; \
    cp /clarity-seed-newfiles/src/styles.css /clarity-seed/src/styles.css ; \
    cp /clarity-seed-newfiles/src/env.js /clarity-seed/src/env.js ; \
    cp /clarity-seed-newfiles/src/app/app* /clarity-seed/src/app ; \
    cp /clarity-seed-newfiles/src/app/env* /clarity-seed/src/app ; \
    cp /clarity-seed-newfiles/src/environments/env* /clarity-seed/src/environments ; \
    cp /clarity-seed-newfiles/package.json /clarity-seed/package.json ; \
    cp /clarity-seed-newfiles/angular-cli.json /clarity-seed/.angular-cli.json ; \
    rm -r /clarity-seed/src/app/home ; \
    rm -r /clarity-seed/src/app/about ; \
    # the following sed modifies the source files to read the endpoint from the env.js file
    sed -i -- 's#public appserver = environment.appserver_env#public appserver = this.env.apiUrl#g' /clarity-seed/src/app/app.component.ts ; \
    cd /clarity-seed/src ; \
    npm install ; \
    ng build --environment=prod --output-path=/yelb-ui/static-site-template -aot -vc -cc -dop --buildOptimizer"

The content of the ./static-site-template folder now needs to be uploaded to the root of the source S3 bucket that hosts the template of the static site (the Yelb user interface).
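Something along these lines does it (again, the bucket name is a placeholder; --delete removes stale files that are no longer part of the template):

aws s3 sync ./static-site-template s3://<yelb-ui-source-bucket>/ --delete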

For the record, I am very much dissatisfied with the distributed nature of all these scripts. As of today, there are three places where I maintain a similar set of build commands: the script above for the S3 static website template, the Dockerfile that creates the yelb-ui container image, and the Linux script that I use, for example, to deploy the user interface natively on EC2.

Some of this complexity is due to the fact that the JavaScript files need to be built specifically for the type of deployment being targeted. For example, when NGINX is used to serve the site content, NGINX is also responsible for proxying the application server APIs; in that case the browser connects back to the web server (NGINX) for those calls. When S3 is used to host the content, the client needs to connect directly to the Yelb application server endpoint (these comments in the user interface source code should clarify this). Regardless, in the fullness of time, I would like to streamline and centralize one build process that can produce multiple artifacts at once.
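To make the second case a bit more concrete: the S3 flavor of the user interface reads the application server endpoint at runtime from the env.js file the script above copies into the build. A sketch of what pointing that file at a specific deployment's endpoint could look like follows (the exact structure of env.js lives in the repo; the window.__env pattern and the endpoint below are assumptions for illustration):

cat > env.js <<'EOF'
// Runtime configuration read by the browser: apiUrl must point to the
// Yelb application server endpoint (e.g. the API Gateway URL)
(function (window) {
  window.__env = window.__env || {};
  window.__env.apiUrl = 'https://<yelb-appserver-endpoint>';
}(this));
EOF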

Conclusions

Again, these are notes-to-self for the automation I would need to build, but I am publishing them here in the hope that someone gets inspired to solve similar issues (or, better, that someone tells me I am doing it wrong and there are easier ways to achieve the outcomes I need!).

Massimo.