Infrastructure

The ecs-cli spins up infrastructure in your account, on your behalf, to help you get your containers running on ECS and AWS. In this section, we'll go over the different pieces of infrastructure that are set up for you, why we set them up, and what they look like.

Environments

When you create your applications, you deploy them to a particular environment. Each environment has its own networking stack, load balancer, and ECS Cluster.

When you create a new environment through ecs-preview env init (or through the first ecs-preview init experience), we'll set it up for you. The basic infrastructure of an environment looks like this:

[Environment Diagram]
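
For example, here's how you'd kick off environment setup from the terminal. This is a minimal sketch: the --name and --profile flags are assumptions that may vary by version, and running the command with no flags walks you through the same choices interactively.

```
$ ecs-preview env init --name test --profile default
```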

VPC and Networking

Each environment gets its own multi-AZ VPC. Your VPC is the network boundary of your environment, allowing the traffic you expect in and out, and blocking the rest. The VPCs we create are spread across two availability zones to help balance availability and cost - with each AZ getting a public and private subnet.
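
Once an environment is up, you can see this layout for yourself with the AWS CLI. A minimal sketch, where vpc-0123456789abcdef0 is a placeholder for your environment's VPC ID, and MapPublicIpOnLaunch serves as a rough indicator of which subnets are public:

```
$ aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
    --query 'Subnets[].{AZ:AvailabilityZone,CIDR:CidrBlock,Public:MapPublicIpOnLaunch}'
```

You should see four subnets: one public and one private in each of the two AZs.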

Currently, your applications are launched in the public subnets but can only be reached through your load balancer. In the future, as we add new application types, you can expect applications that are launched in the private subnets.

Load Balancers and DNS

If you set up any application using one of the Load Balanced application types, we'll create a load balancer - in the case of a Load Balanced Web App, specifically an Application Load Balancer. All applications within a particular environment share that load balancer by adding app-specific listener rules to it. Your load balancer is allowed to communicate with services in your VPC.

Optionally, when you set up a project, you can provide a domain name that you own and that is registered in Route 53. If you provide us a domain name, each time you spin up an environment we'll create a subdomain environment-name.your-domain.com, provision an ACM certificate, and bind it to your Application Load Balancer so it can serve HTTPS. You don't need to provide a domain name, but if you don't, you'll have to use HTTP connections to your Application Load Balancer.
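
If you did provide a domain, one way to confirm the certificates that were provisioned is with the AWS CLI. A minimal sketch, where the region and domain are placeholders for your own:

```
$ aws acm list-certificates --region us-east-1 \
    --query "CertificateSummaryList[?contains(DomainName, 'your-domain.com')]"
```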

Applications

After you've created an environment, you can deploy an application into it. The application stack contains the actual ECS service for your app as well as all the configuration to get requests from your existing load balancer (set up by the environment) to your service.

Load Balanced Web App

Load Balancer Config

For Load Balanced Web Apps, we create a Listener Rule on the Application Load Balancer. This rule matches certain URL paths and forwards matching requests to your service.

For example, if you have a load balancer for the ecs-cli project which fronts ecs-cli.aws, and a front-end application, we'll create a listener rule on the ecs-cli load balancer that matches ecs-cli.aws/front-end and forwards requests for that path to the front-end service. Another application, back-end, might have a rule on the same load balancer that forwards requests for ecs-cli.aws/back-end to its service.
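
You can inspect these rules directly with the AWS CLI; in this sketch, the listener ARN is a placeholder for your environment's HTTP or HTTPS listener:

```
$ aws elbv2 describe-rules \
    --listener-arn <listener-arn> \
    --query 'Rules[].{Priority:Priority,Conditions:Conditions,Actions:Actions}'
```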

What these listener rules actually forward traffic to is our Target Group. The Target Group knows all the running tasks (containers) in our application's service; for each request, it picks one of these tasks and forwards the request to it.
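
To see which tasks a Target Group is currently routing to, and whether they're passing health checks, you can ask the Target Group itself (the ARN below is a placeholder):

```
$ aws elbv2 describe-target-health --target-group-arn <target-group-arn>
```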

Fargate Service

The final piece of the puzzle is our actual ECS Service. We provision an ECS service using Fargate as the launch type - that way you don't have to manage any EC2 instances. This is where your application code runs.
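
You can confirm the launch type and the task counts for a deployed service with the AWS CLI. The cluster and service names below are placeholders; the CLI generates the actual names for you:

```
$ aws ecs describe-services \
    --cluster <environment-cluster> \
    --services <app-service> \
    --query 'services[].{LaunchType:launchType,Desired:desiredCount,Running:runningCount}'
```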

Deploying your application also generates a new Task Definition - but we'll talk more about that in the manifest section.
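
Each deployment registers a new revision in the app's task definition family. To peek at the latest revision (the family name is a placeholder):

```
$ aws ecs describe-task-definition --task-definition <family-name>
```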

Project

While the bulk of the infrastructure we provision is specific to an environment and application, there are some project-wide resources, as well.

[Project Infrastructure Diagram]

ECR Repositories

ECR Repositories are regional resources which store your application images. Each application has its own ECR Repository per region in your project.

In the above diagram, the project has several environments spread across three regions. Each of those regions is provisioned with an ECR Repository for every app in your project - in this case, three apps.

Every time you add an app, we create an ECR Repository for it in every region - that way, when you deploy your app, a local copy of its image is available in that region. We do this to maintain region isolation (if one region goes down, environments in other regions won't be affected) and to reduce cross-region data transfer costs.
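
A quick way to see the per-region repositories is to loop over your project's regions (the regions here are just examples):

```
$ for region in us-east-1 us-west-2 eu-west-1; do
    aws ecr describe-repositories --region "$region" \
      --query 'repositories[].repositoryName'
  done
```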

These ECR Repositories all live within your project's account (not the environment accounts) and have policies that allow your environment accounts to pull from them.
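
You can read the cross-account pull policy attached to any of these repositories (the repository name is a placeholder):

```
$ aws ecr get-repository-policy --repository-name <app-repo-name>
```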

Release Infrastructure

For every region represented in your project, we create a KMS key and an S3 bucket. These resources are used by CodePipeline to facilitate cross-region and cross-account deployments. All pipelines in your project share these same resources.

Similar to the ECR Repositories, the S3 buckets and KMS keys have policies that allow all of your environments, even those in other accounts, to read encrypted deployment artifacts. This is what makes your cross-account, cross-region CodePipelines possible.
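
As with the repositories, you can inspect these policies yourself. In this sketch, the bucket name and key ID are placeholders for the resources the CLI created:

```
$ aws s3api get-bucket-policy --bucket <artifact-bucket-name>
$ aws kms get-key-policy --key-id <key-id> --policy-name default
```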