And how we figured out it was not the right choice for us
TL;DR: If you want multiple apps running separately on a single EC2 instance, with some managed properties (load balancer, health monitoring, etc.), Elastic Beanstalk is NOT the way to go.
At Remote Flags, I was in charge of setting up the infrastructure for a small service to facilitate demoing the main application to interested people.
As we're a small team and budget is tight, I figured I would use a single "non-critical infrastructure" server; it would contain small tools that, if they went down, wouldn't impact the overall availability of our service.
Sounds good! So the first question is: why not just a single EC2 instance?
Well, there are some problems with that, particularly around SSL certificates.
We already use AWS Certificate Manager for other parts of our infrastructure. A limitation of this service, as stated in the AWS docs, is that you can only use its certificates on an EC2 instance behind a supported service like ELB (the load balancer service) or CloudFront.
Also, one of the services may have to stay up for long stretches, so serverless can become quite expensive in that situation, even though it works great for some of these smaller applications.
Given these restrictions, and since we don't want to use multiple services for the same task (the burden of knowledge grows with each additional one), we decided to pick Elastic Beanstalk to build the infrastructure for a single instance in a more managed fashion. This works for us, since we already have a few services running on Elastic Beanstalk.
So it can't be that hard, right? Right?
Setup
Create two apps with Node.js, named app_a and app_b:
App A setup
mkdir app_a app_b cdk
cd app_a && npm init
Create an app.js file with the following code:
const http = require('http');

const hostname = '127.0.0.1';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World App A');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
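If you want to sanity-check it locally (optional, assuming Node.js is installed), start the server and hit it with curl from another terminal:
node app.js
curl http://127.0.0.1:3000/
# -> Hello World App A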
App B setup
cd ../app_b && npm init
Create an app.js file with the following code (it is identical to app_a, just with a different response and port):
const http = require('http');

const hostname = '127.0.0.1';
const port = 3001;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World App B');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
CDK setup
Init the CDK package
cd ../cdk && cdk init app --language typescript
Finally, go to the generated code for the stack under bin/my-app-cdk.ts, uncomment the env: line, and fill in your AWS account info:
env: { account: '123456789012', region: 'us-east-1' },
Let's add the necessary pieces for building our infrastructure stack in lib/cdk-stack.ts:
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as elasticbeanstalk from 'aws-cdk-lib/aws-elasticbeanstalk';
import * as iam from 'aws-cdk-lib/aws-iam';

export class CdkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const appName = 'cdk-app';

    // The Elastic Beanstalk application that will hold our environment
    const ebApplication = new elasticbeanstalk.CfnApplication(this, 'Application', {
      applicationName: appName,
      description: 'Elastic Beanstalk application for auxiliary-infra',
    });

    // Role assumed by the EC2 instance running the environment
    const ec2Role = new iam.Role(this, 'Ec2Role', {
      roleName: `${appName}-ec2-role`,
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com'),
    });
    ec2Role.addManagedPolicy(iam.ManagedPolicy.fromAwsManagedPolicyName('AWSElasticBeanstalkWebTier'));

    const myProfileName = 'InstanceProfile';
    new iam.CfnInstanceProfile(this, myProfileName, {
      instanceProfileName: myProfileName,
      roles: [ec2Role.roleName],
    });

    // Single-instance environment: cap the ASG at one small instance
    const elbEnv = new elasticbeanstalk.CfnEnvironment(this, 'Environment', {
      applicationName: appName,
      solutionStackName: '64bit Amazon Linux 2 v5.5.0 running Node.js 16',
      optionSettings: [
        {
          namespace: 'aws:autoscaling:launchconfiguration',
          optionName: 'IamInstanceProfile',
          value: myProfileName,
        },
        {
          namespace: 'aws:autoscaling:asg',
          optionName: 'MaxSize',
          value: '1',
        },
        {
          namespace: 'aws:ec2:instances',
          optionName: 'InstanceTypes',
          value: 't3.micro',
        },
      ],
    });
    elbEnv.addDependsOn(ebApplication);
  }
}
Note that the platform selected is Node.js 16; check the AWS docs for the list of supported platforms.
After all this, run cdk deploy, and all the resources will be created.
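If you want to confirm the environment came up, one way (assuming the AWS CLI is installed and configured) is to list the environments for the application:
aws elasticbeanstalk describe-environments --application-name cdk-app
# look for "Status": "Ready" and "Health": "Green" in the output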
Deploy Elastic Beanstalk App Version
To deploy your service to EB, the easiest method is to use the Elastic Beanstalk CLI; follow the instructions in the AWS EB CLI GitHub repository to install it.
Once that's done, run eb init inside the app_a folder and link it to the application you created (if you didn't change anything, it will be called cdk-app).
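If you prefer to skip the interactive prompts, eb init also accepts the application name and platform directly; something like this (the region here is just an example):
eb init cdk-app --platform node.js --region us-east-1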
Run eb deploy to deploy an initial version containing app_a.
So now, how do we add app_b to all this?
Where the ugliness starts
With this approach, Elastic Beanstalk expects to have a "main" application: uploading a new application version means uploading a zip of that application's source. In this case, with two apps, we need to elect one as the main app and have the other one uploaded as part of it. The way I approached this was to have a command that grabs the "secondary app" code, zips it, and puts the zip into the source code of the "main app", so it gets uploaded to Elastic Beanstalk along with it. Very messy, and extremely coupled.
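As a rough illustration, the packaging step could look something like this (a sketch only, assuming it runs from the repository root and that app_a is the elected "main" app; the script name is made up):
#!/usr/bin/env bash
# package_app_b.sh - bundle app_b into app_a's deployment source (illustrative)
set -euo pipefail
# Zip app_b's source (excluding node_modules) straight into app_a's folder
(cd app_b && zip -r ../app_a/app_b.zip . -x "node_modules/*")
# A subsequent eb deploy from app_a will then include app_b.zip in the version bundle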
Adding to this, we need to assign different ports to the apps manually, both in Nginx and in each app. There's an environment variable on Elastic Beanstalk called PORT which overrides the app's port, but that only works for a single app.
So, let's do it!
To have both apps in different processes, create a file called Procfile with this content:
web: node app.js
app_b: cd /var/app/app_b && node app.js
This Procfile tells the instance it needs two processes, each started with the stated command (Elastic Beanstalk expects the main process to be named web).
Next, create a file called 01_setup_app_b.config in a new .ebextensions/ folder:
container_commands:
  00_create_app_b_folder:
    command: unzip -o app_b.zip -d /var/app
  01_update_app_b_permissions:
    command: chown -R webapp app_b && chgrp -R webapp app_b
    cwd: /var/app/
Container commands let you run commands that operate on the application source code during deployment. In this snippet, we extract app_b's source from the zip and change its ownership to the webapp user, since the commands themselves run as root.
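If the deployment misbehaves, you can inspect the result of these commands directly on the instance (assuming you've configured an SSH key for the environment so eb ssh works):
eb ssh
ls /var/app            # app_b should sit next to the current deployment
ps aux | grep node     # both Procfile processes should be running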
Next, we need to override the Nginx configuration so we can access both apps. Let's define that $URL/app_a routes to app_a, while $URL/app_b routes to app_b. Create nginx.conf inside the .platform/nginx/ folder structure:
user                    nginx;
error_log               /var/log/nginx/error.log warn;
pid                     /var/run/nginx.pid;
worker_processes        auto;
worker_rlimit_nofile    65906;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    include       conf.d/*.conf;

    map $http_upgrade $connection_upgrade {
        default "upgrade";
    }

    server {
        listen      80;
        access_log  /var/log/nginx/access.log main;

        location /app_a {
            proxy_pass          http://localhost:3000;
            proxy_http_version  1.1;

            proxy_set_header    Connection       $connection_upgrade;
            proxy_set_header    Upgrade          $http_upgrade;
            proxy_set_header    Host             $host;
            proxy_set_header    X-Real-IP        $remote_addr;
            proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
        }

        location /app_b {
            proxy_pass          http://localhost:3001;
            proxy_http_version  1.1;

            proxy_set_header    Connection       $connection_upgrade;
            proxy_set_header    Upgrade          $http_upgrade;
            proxy_set_header    Host             $host;
            proxy_set_header    X-Real-IP        $remote_addr;
            proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
        }
    }
}
And that's it! You can run eb open and append app_a or app_b to the URL to get responses from the different apps.
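For a quick check from the terminal (the hostname below is a placeholder for your environment's URL):
curl http://<your-env>.elasticbeanstalk.com/app_a   # -> Hello World App A
curl http://<your-env>.elasticbeanstalk.com/app_b   # -> Hello World App B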
In the next blog post, I will show you how we moved from using EB to ECS for this particular use case.