A simple approach using ECS, improving on our previous AWS Elastic Beanstalk solution
If you're following our blog posts chronologically, you'll know I previously made the blunder of trying to use AWS Elastic Beanstalk to run two Node.js web apps on the same instance (see here). This blog post provides a simpler alternative using another AWS compute service, Elastic Container Service (ECS).
Let's get into it!
Setup
I'll use the same setup as in the previous blog post, which I'll also detail here. Note: all the source code used in this blog post is stored here.
Create two Node.js apps, named `app_a` and `app_b`:
App A setup
mkdir app_a app_b cdk
cd app_a && npm init
Create an `app.js` file with the following code:
const http = require('http');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World App A');
});

server.listen(port, () => {
  console.log(`Server running on ${port}`);
});
Create a file named `Dockerfile` in the root of the app with the following content:
FROM node:16-alpine
COPY . .
CMD [ "node", "app.js" ]
App B setup
cd ../app_b && npm init
Create an `app.js` file with the following code (this is identical to `app_a` except for the response and port):
const http = require('http');
const port = 3001;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World App B');
});

server.listen(port, () => {
  console.log(`Server running on ${port}`);
});
Create a file named `Dockerfile` in the root of the app with the following content:
FROM node:16-alpine
COPY . .
CMD [ "node", "app.js" ]
CDK setup
Initialize the CDK package:
cd ../cdk && cdk init app --language typescript
Go to the generated code for the stack under `bin/my-app-cdk.ts`, uncomment the `env:` line, and replace it with your AWS account info:
// env: { account: '123456789012', region: 'us-east-1' },
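For reference, here's roughly what that entry file ends up looking like after the change; the exact file and stack names depend on what `cdk init` generated for your project, so adjust accordingly. Setting `env` explicitly matters here because the stack below uses `ec2.Vpc.fromLookup`, which needs a concrete account and region at synth time:

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { CdkStack } from '../lib/cdk-stack';

const app = new cdk.App();
new CdkStack(app, 'CdkStack', {
  // Replace with your own AWS account ID and region
  env: { account: '123456789012', region: 'us-east-1' },
});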
Let's add the necessary pieces for building our infrastructure stack in `lib/cdk-stack.ts`. I'm keeping everything in one file for simplicity, but it could be split into smaller files (see the sketch after the full listing):
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import { ApplicationTargetGroup } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

export class CdkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = ec2.Vpc.fromLookup(this, "vpc", {
      isDefault: true
    });

    const cluster = new ecs.Cluster(this, 'cluster', { vpc });
    cluster.addCapacity('clusterCapacity', {
      instanceType: new ec2.InstanceType('t3.micro'),
    });

    // App A
    const taskDefinition = new ecs.Ec2TaskDefinition(this, 'appATaskDef');
    taskDefinition.addContainer('appAContainer', {
      containerName: 'app_a',
      // Local path to the app_a directory (the one containing its Dockerfile); adjust to your checkout
      image: ecs.ContainerImage.fromAsset("/home/luis/lmtv_workplace/blog_eb_2_apps_ecs/app_a"),
      portMappings: [{ containerPort: 3000 }],
      memoryReservationMiB: 256,
    });

    const appAService = new ecs.Ec2Service(this, 'appAService', {
      cluster,
      taskDefinition,
    });

    // App B
    const appBTaskDefinition = new ecs.Ec2TaskDefinition(this, 'appBTaskDef');
    appBTaskDefinition.addContainer('appBContainer', {
      containerName: 'app_b',
      // Local path to the app_b directory; adjust to your checkout
      image: ecs.ContainerImage.fromAsset("/home/luis/lmtv_workplace/blog_eb_2_apps_ecs/app_b"),
      portMappings: [{ containerPort: 3001 }],
      memoryReservationMiB: 256,
    });

    const appBService = new ecs.Ec2Service(this, 'appBService', {
      cluster,
      taskDefinition: appBTaskDefinition,
    });

    // Target Groups
    const appATargetGroup = new ApplicationTargetGroup(this, 'appATargetGroup', {
      vpc,
      port: 80,
      targets: [appAService.loadBalancerTarget({
        containerName: 'app_a',
      })],
    });

    const appBTargetGroup = new ApplicationTargetGroup(this, 'appBTargetGroup', {
      vpc,
      port: 80,
      targets: [appBService.loadBalancerTarget({
        containerName: 'app_b',
      })],
    });

    // Load balancer and listener setup
    const lb = new elbv2.ApplicationLoadBalancer(this, 'lb', {
      vpc,
      internetFacing: true,
      loadBalancerName: 'Ecs2AppTest'
    });

    const listener = lb.addListener('listener', {
      port: 80,
      defaultTargetGroups: [
        appATargetGroup,
      ]
    });

    listener.addTargetGroups('targetGroupAdd', {
      targetGroups: [appBTargetGroup],
      conditions: [
        elbv2.ListenerCondition.pathPatterns(['/app_b']),
      ],
      priority: 10
    });
  }
}
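As an illustration of how the stack could be split into smaller files, here's a hedged sketch of a small reusable construct that bundles the per-app task definition, service, and target group; the `NodeAppService` name and its props are made up for this example:

import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { ApplicationTargetGroup } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

export interface NodeAppServiceProps {
  cluster: ecs.ICluster;
  vpc: ec2.IVpc;
  containerName: string;
  assetPath: string;
  containerPort: number;
}

// Bundles the task definition, service, and target group for one app
export class NodeAppService extends Construct {
  public readonly targetGroup: ApplicationTargetGroup;

  constructor(scope: Construct, id: string, props: NodeAppServiceProps) {
    super(scope, id);

    const taskDefinition = new ecs.Ec2TaskDefinition(this, 'taskDef');
    taskDefinition.addContainer('container', {
      containerName: props.containerName,
      image: ecs.ContainerImage.fromAsset(props.assetPath),
      portMappings: [{ containerPort: props.containerPort }],
      memoryReservationMiB: 256,
    });

    const service = new ecs.Ec2Service(this, 'service', {
      cluster: props.cluster,
      taskDefinition,
    });

    this.targetGroup = new ApplicationTargetGroup(this, 'targetGroup', {
      vpc: props.vpc,
      port: 80,
      targets: [service.loadBalancerTarget({ containerName: props.containerName })],
    });
  }
}

The main stack would then instantiate this construct twice, once per app, and keep only the cluster, load balancer, and listener wiring.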
The code is ready. Run `cdk deploy`, and all the resources will be created in a CloudFormation stack. (If this account and region have never been bootstrapped for CDK, you may need to run `cdk bootstrap` first, since the container image assets rely on a bootstrapped environment.)
To test, go to EC2 -> Load Balancers and search for the load balancer named `Ecs2AppTest`. Select the entry, copy the DNS name from the details, and paste it into a new browser window. If all goes well, you should see the message Hello World App A. For App B, add `/app_b` to the end of the URL, and you should see the message Hello World App B.
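If you'd rather not hunt for the DNS name in the console, one optional tweak (not part of the stack above, so treat it as a suggestion) is to add a CloudFormation output at the end of the constructor, importing `CfnOutput` from `aws-cdk-lib` alongside `Stack` and `StackProps`:

// Optional: print the load balancer DNS name alongside the deploy outputs
new CfnOutput(this, 'loadBalancerDns', {
  value: lb.loadBalancerDnsName,
});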
For cleanup, run `cdk destroy`, and it will remove the resources created for this example.
How it works
The Docker containers for apps A and B are self-explanatory if you work with Docker. The interesting details are in the CDK setup. Let's go step by step:
- For this example, just use the default VPC, since it always exists on any account.
- Create a cluster.
  - An Amazon ECS cluster is a logical grouping of tasks or services. See the official developer guide for more info.
- Add capacity to it (in this case, just one instance).
  - In more production-level apps, this capacity would be replaced with a capacity provider backed by an Auto Scaling group (for scaling up and down based on need); see the sketch after this list.
- Create a task definition for each app, consisting of the path to the corresponding application directory and a port mapping matching the port the app listens on (in this case 3000 and 3001, respectively).
  - Note: with this setup (using `ecs.ContainerImage.fromAsset`), the apps are deployed as part of the CDK config. An ECR (Elastic Container Registry) repository is created to store the Docker images.
- Create a service for each app inside the cluster, based on each task definition.
  - A service runs the tasks defined in the task definition and also manages their lifecycle (if a task dies, the service starts a new one to take its place).
- Create target groups for each app to be used with the load balancer.
  - Target groups act as destinations for requests received by the load balancer.
- Create an internet-facing load balancer.
- Add a listener to it that forwards requests to `app_a` by default.
  - Having the `app_a` target group as the default means requests coming in on port `80` will be forwarded to `app_a`. So how are requests forwarded to `app_b`? Read on!
- Add an additional target group for `app_b` to the listener, with a condition to forward traffic to it if the path includes `/app_b`.
  - This uses an Application Load Balancer feature called path-based routing. See this blog post for more info. Technically, the same could be achieved with a reverse proxy on the instance, but this is just much faster to set up.
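To make the capacity provider point concrete, here's a rough sketch of what replacing `cluster.addCapacity` with an Auto Scaling group backed capacity provider could look like in this stack (instance type and min/max sizes are illustrative, not a recommendation):

import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';

// An Auto Scaling group of ECS-optimized instances that can grow and shrink with demand
const asg = new autoscaling.AutoScalingGroup(this, 'asg', {
  vpc,
  instanceType: new ec2.InstanceType('t3.micro'),
  machineImage: ecs.EcsOptimizedImage.amazonLinux2(),
  minCapacity: 1,
  maxCapacity: 3,
});

// Register the Auto Scaling group with the cluster as a capacity provider
const capacityProvider = new ecs.AsgCapacityProvider(this, 'asgCapacityProvider', {
  autoScalingGroup: asg,
});
cluster.addAsgCapacityProvider(capacityProvider);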
Conclusion
Looking back at the Elastic Beanstalk blog post and now this one, I can say working with ECS is a better experience. As long as you have some knowledge of the AWS services at play, the service definition is more explicit, and, in the end, less custom code was needed to make everything run as intended.
This shows the value of learning by trial and error. It's hard to do it right the first time, so keep improving!