Commit 8f60890 ("final commit")

1 parent 8ebee68

40 files changed: +1983 −22 lines changed

.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 6 additions & 0 deletions
*Issue #, if available:*

*Description of changes:*

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

.gitignore

Lines changed: 2 additions & 0 deletions
.DS_Store
node_modules

1-no-container/README.md

Lines changed: 13 additions & 0 deletions
## Basic Node.js Server

This is an example of a basic monolithic Node.js service designed to run directly on a server, without a container.

### Architecture

Because a Node.js program runs a single-threaded event loop, the built-in `cluster` module is needed to get maximum usage out of a multi-core server.

In this example `cluster` is used to spawn one worker process per core, and the processes share a single port using the round-robin load balancing built into Node.js.

We can use an Application Load Balancer to round-robin requests across multiple servers, providing horizontal scaling.

![Reference diagram of the basic node application deployment](../images/monolithic-no-container.png)
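The round-robin behavior described above can be illustrated with a small sketch (the worker names and request loop are hypothetical illustrations; the real distribution is handled inside Node.js's `cluster` module):

```javascript
// Hypothetical illustration of round-robin distribution across
// one worker per core (here, a 4-core machine).
const workers = ['worker-0', 'worker-1', 'worker-2', 'worker-3'];

function assignWorker(requestIndex) {
  // Each incoming request goes to the next worker in the cycle.
  return workers[requestIndex % workers.length];
}

// 8 incoming requests cycle evenly through the 4 workers.
for (let i = 0; i < 8; i++) {
  console.log(`request ${i} -> ${assignWorker(i)}`);
}
```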

1-no-container/db.json

Lines changed: 79 additions & 0 deletions
```json
{
  "users": [
    {
      "id": 1,
      "username": "marceline",
      "name": "Marceline Abadeer",
      "bio": "1000 year old vampire queen, musician"
    },
    {
      "id": 2,
      "username": "finn",
      "name": "Finn 'the Human' Mertens",
      "bio": "Adventurer and hero, last human, defender of good"
    },
    {
      "id": 3,
      "username": "pb",
      "name": "Bonnibel Bubblegum",
      "bio": "Scientist, bearer of candy power, ruler of the candy kingdom"
    },
    {
      "id": 4,
      "username": "jake",
      "name": "Jake the Dog",
      "bio": "Former criminal, now magical dog adventurer, and father"
    }
  ],

  "threads": [
    {
      "id": 1,
      "title": "What's up with the Lich?",
      "createdBy": 4
    },
    {
      "id": 2,
      "title": "Party at the candy kingdom tomorrow",
      "createdBy": 3
    },
    {
      "id": 3,
      "title": "In search of a new guitar",
      "createdBy": 1
    }
  ],

  "posts": [
    {
      "thread": 1,
      "text": "Has anyone checked on the lich recently?",
      "user": 4
    },
    {
      "thread": 1,
      "text": "I'll stop by and see how he's doing tomorrow!",
      "user": 2
    },
    {
      "thread": 2,
      "text": "Come party with the candy people tomorrow!",
      "user": 3
    },
    {
      "thread": 2,
      "text": "Mathematical!",
      "user": 2
    },
    {
      "thread": 2,
      "text": "I'll bring my guitar",
      "user": 1
    },
    {
      "thread": 3,
      "text": "I need a new guitar to play the most savory licks in Ooo",
      "user": 1
    }
  ]
}
```

1-no-container/index.js

Lines changed: 20 additions & 0 deletions
```javascript
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Leader ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  require('./server.js');

  console.log(`Worker ${process.pid} started`);
}
```

1-no-container/package.json

Lines changed: 9 additions & 0 deletions
```json
{
  "dependencies": {
    "koa": "^1.2.5",
    "koa-router": "^5.4.0"
  },
  "scripts": {
    "start": "node index.js"
  }
}
```

1-no-container/server.js

Lines changed: 52 additions & 0 deletions
```javascript
const app = require('koa')();
const router = require('koa-router')();
const db = require('./db.json');

// Log requests
app.use(function *(next){
  const start = new Date;
  yield next;
  const ms = new Date - start;
  console.log('%s %s - %s', this.method, this.url, ms);
});

// Each route is a simple lookup against the in-memory db.json.
router.get('/api/users', function *(next) {
  this.body = db.users;
});

router.get('/api/users/:userId', function *(next) {
  const id = parseInt(this.params.userId);
  this.body = db.users.find((user) => user.id == id);
});

router.get('/api/threads', function *() {
  this.body = db.threads;
});

router.get('/api/threads/:threadId', function *() {
  const id = parseInt(this.params.threadId);
  this.body = db.threads.find((thread) => thread.id == id);
});

router.get('/api/posts/in-thread/:threadId', function *() {
  const id = parseInt(this.params.threadId);
  this.body = db.posts.filter((post) => post.thread == id);
});

router.get('/api/posts/by-user/:userId', function *() {
  const id = parseInt(this.params.userId);
  this.body = db.posts.filter((post) => post.user == id);
});

// Health-check style endpoints.
router.get('/api/', function *() {
  this.body = "API ready to receive requests";
});

router.get('/', function *() {
  this.body = "Ready to receive requests";
});

app.use(router.routes());
app.use(router.allowedMethods());

app.listen(3000);
```
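Each route handler is a thin filter or find over the in-memory `db.json`. The same lookup logic can be exercised in isolation; the sketch below inlines a small excerpt of `db.json` so it is self-contained:

```javascript
// Excerpt of db.json, inlined so the sketch stands alone.
const db = {
  posts: [
    { thread: 1, text: "Has anyone checked on the lich recently?", user: 4 },
    { thread: 2, text: "Come party with the candy people tomorrow!", user: 3 },
    { thread: 2, text: "Mathematical!", user: 2 }
  ]
};

// Same logic as the /api/posts/in-thread/:threadId handler:
// coerce the string route parameter to a number, then filter.
function postsInThread(threadId) {
  const id = parseInt(threadId);
  return db.posts.filter((post) => post.thread == id);
}

console.log(postsInThread('2').length); // 2 posts in thread 2
```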

2-containerized/README.md

Lines changed: 54 additions & 0 deletions
## Deploying in containers

In this example we take our Node.js application and put it into a container for deployment on EC2 Container Service.

![Reference architecture of the containerized monolith](../images/monolithic-containers.png)

### Why containers?

__Dependency Control__: Containers wrap application code in a unit of deployment that captures a snapshot of the code as well as its dependencies, which solves a few problems:

- The version of `node` on a local developer's machine may not match the version on the production servers or the CI server, allowing developers to ship code that runs locally but fails in production. A container, by contrast, ships with a specific version of Node.js included.
- If `package.json` dependencies aren't rigorously shrinkwrapped, `npm install` may install different package versions locally, on a CI server, and on the production servers. Containers solve this by including all the npm dependencies alongside the application code.
- Even if dependencies are locked down using a shrinkwrap file, a particular package you depend on [may be unavailable, or removed](http://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm). If this happens it doesn't stop a container from working, because the container still has a copy of the package from the moment the container was built.

__Improved Pipeline__: Containers also allow an engineering organization to create a standard pipeline for the application lifecycle. For example:

1. Developers build and run the container locally.
2. The CI server runs the same container and executes integration tests against it to make sure it passes expectations.
3. The same container is shipped to a staging environment where its runtime behavior can be checked using load tests or manual QA.
4. The same container is finally shipped to production.

Being able to ship the exact same container through all four stages of the process makes delivering a high-quality, reliable application considerably easier.

__No mutations to machines:__ When applications are deployed directly onto instances, you run the risk of a bad deploy corrupting an instance's configuration in a way that is hard to recover from. For example, imagine a deployed application that requires custom configuration in `/etc`. That kind of deploy is fragile and hard to roll back if needed. With a containerized application, however, the container carries its own filesystem with its own `/etc`, and any custom configuration changes that are part of the container are sandboxed to that application's environment only. The underlying instance's configuration stays the same. In fact, a container can't even make persistent filesystem changes without an explicitly mounted volume that grants the container access to a limited area on the host instance.

## Why EC2 Container Service?

EC2 Container Service provides orchestration for your containers. It automates launching containers across your fleet of instances according to rules you specify, then keeps track of where those containers are running so that you can use a load balancer to get traffic to them. It also has built-in features to roll out deploys with zero downtime, gather metrics and logs from your containers, and auto-scale the number of containers you are running based on those metrics.
## Application Changes for Docker

1. __Single process instead of `cluster`.__ The first and biggest change involved in containerizing this application is getting rid of `cluster`. With Docker containers the goal is to run a single process per container, rather than a cluster of processes.

   The reason for this change is that a lightweight container with a single process allows for greater granularity and flexibility in container placement onto infrastructure. A large container with four processes that requires four cores of CPU can only run on an instance of a particular size. By breaking it up into four containers with a single process each, we can instead make use of two smaller instances that each run two containers, or even four tiny instances that each run a single container. Or we could go the opposite direction and easily run 64 of these small containers on a single massive instance.
2. __Create `Dockerfile`:__ This file is essentially a build script that produces the container image. The base image the Dockerfile starts from contains a specific version of Node.js. The rest of the commands add the application code and the `node_modules` folder into the container. The result is a container image that is a reliable unit of deployment: it can be run locally or on a remote server, and it runs the same in both places.
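The placement arithmetic behind change 1 can be sketched with a few lines of code (the instance sizes in vCPUs are illustrative assumptions, not measurements of this app):

```javascript
// How many instances are needed to host the app, assuming each
// single-process container needs 1 vCPU. Illustrative numbers only.
function instancesNeeded(containerCount, vcpusPerInstance) {
  return Math.ceil(containerCount / vcpusPerInstance);
}

console.log(instancesNeeded(4, 4));   // 1: one large 4-vCPU instance
console.log(instancesNeeded(4, 2));   // 2: two smaller instances, two containers each
console.log(instancesNeeded(4, 1));   // 4: four tiny instances, one container each
console.log(instancesNeeded(64, 64)); // 1: 64 small containers on one massive instance
```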
## Deployment

1. Launch an ECS cluster using the CloudFormation template:

   ```
   $ aws cloudformation deploy \
     --template-file infrastructure/ecs.yml \
     --region <region> \
     --stack-name <stack name> \
     --capabilities CAPABILITY_NAMED_IAM
   ```

2. Deploy the services onto your cluster:

   ```
   $ ./deploy.sh <region> <stack name>
   ```
