Running Docker in production has quickly become the norm. Cloud hosting providers like AWS, Google Cloud and Azure realized that this is what organizations need. Services like EKS and ECS from Amazon offer a completely managed environment in which your Docker containers can run. In this article we’ll take a closer look at one of them: Amazon ECS, the Amazon Elastic Container Service.
If you don’t know what any of this means, the rest of the article will help you with that. Suffice it to say, ‘fully managed’ implies you don’t have to pay a third-party software vendor to run your containerized application. ‘Scalable’ means you don’t have to worry, ahead of time, about resource utilization. The AWS cloud will make resources like CPU, memory and storage available to you, on demand.
But why should you care? The reasons are two-fold.
First, there’s the flexibility and scalability of the microservice-based architecture that most applications are now adopting. As it turns out, Docker containers are really good for deploying microservices, so it stands to reason that your application may already be shipped as a Docker image. If that’s not the case, you may not want ECS, since this service is exclusively for those who intend to run Docker containers.
The second reason is cost-effectiveness, especially if you use AWS Fargate. Its pay-as-you-go model bills you by the second (with a one-minute minimum) and can result in significant price reductions. Alternatively, you can launch your ECS tasks on top of your own EC2 instances.
What is Amazon ECS?
Of the many services that AWS offers, such as S3 for storage and VPC for networking, ECS falls into the category of compute. That puts it in the same category as Lambda functions and EC2 instances. Containers, in case you don’t know, are like lightweight VMs that offer a secure environment in which users can run their application, isolated from all the other applications running on the same infrastructure.
Amazon ECS runs and manages your containerized apps in the cloud, helping you save valuable time. Typically, running containers in the cloud involves spinning up compute resources (like EC2 virtual machines), installing Docker inside them, securely connecting them to your container image registry, and then launching your containers on top of it all.
The application itself is made up of multiple containers, each with its own specific nuances and attributes, so operators use tools like Docker Compose to launch multiple containers together. These tools are themselves under constant development, and one update or another might render the entire stack unusable.
ECS is designed to be a complete container solution from top to bottom. Docker images can be hosted in Amazon ECR (Elastic Container Registry), a companion service where you can host your private image repositories, create a complete CI/CD workflow and apply fine-grained access control using IAM, ACLs, etc.
Is it like Kubernetes or Docker Swarm?
One may ask a valid question: why not use Amazon’s Elastic Kubernetes Service, or another container orchestration system like DC/OS, OpenShift, Kubernetes or Docker Swarm? These technologies are open source and can run on any cloud service. For example, if your Ops team is familiar with Kubernetes, they can set up Kubernetes and run applications on any cloud, not just AWS.
Why bother with a closed-source, externally managed service like ECS when we have such alternatives?
The first and foremost reason is complexity. Kubernetes is a complex body of software with many moving parts. To get the most out of it, you either need to buy expensive support from your hosting provider, or your team has to climb the steep learning curve themselves.
Secondly, the return on investment, especially for start-ups, is very small. Running your own container orchestration incurs an additional charge, which can turn out to be more expensive than what your application itself costs you!
Even when you use Amazon EKS as your Kubernetes provider, you pay 20 cents per hour for the EKS control plane alone in US regions. This doesn’t even include the EC2 instances that you will have to allocate as worker nodes; keep in mind that there will be quite a few of these, since the application is supposed to be scalable.
Features and Benefits of using AWS ECS
Amazon ECS comes packed with the features you may already know and love from AWS and Docker. For example, developers running Docker on their personal machines are familiar with Docker networking modes like NAT and bridge networking. The same simple technology running on your laptop is what you see when you launch ECS containers across an EC2 cluster.
The same services that we use for logging, monitoring and troubleshooting EC2 instances can be used to monitor running containers as well. Not only that, you can automate what action should be taken when a certain event shows up in your monitoring system. For example, AWS CloudWatch alarms can be used for auto scaling: when the load increases, more containers are spawned to pick up the slack, and once things are back to normal, the extra containers are killed. This reduces human intervention and optimizes your AWS bill quite a bit as well.
Just as desktop programs are stored on disk as executable files, containers are stored as container images. A single application is made up of multiple containers, and each one of these has a corresponding image.
As your app evolves, new versions of these images are introduced, and various branches for development, testing and production are created. To manage all of this, ECS comes with another service, ECR, which acts as a private registry where you can manage all your images and securely deploy them when needed.
You will need Docker
Docker is one of the key technologies underlying all container orchestration systems, including Kubernetes, DC/OS and Amazon ECS. Docker is what enables containers to run on a single operating system, whether that’s a desktop or an EC2 instance. Docker runs and manages the various containers on the OS and ensures that they are all secure and isolated from one another as well as from the rest of the system.
Technologies like ECS take this model of running multiple containers on a single OS and scale it up so that containers can run across entire data centers. Given this importance, keep in mind that Docker and its concepts are an essential prerequisite before you adopt ECS. Applications are broken down into microservices, and each of these microservices is packaged into a Docker container.
If Docker is not a core part of your development workflow, you should probably try to incorporate it. The free and open source Community Edition is available for most desktop operating systems, including Windows, macOS and most Linux distros.
You can start by choosing the base image upon which to build your application. If a microservice is written in Python, there are Python base images available to get started with. If you need MongoDB as a datastore, there’s an image for that as well. Start from these building blocks and gradually grow your application as new features are designed and added.
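As a rough sketch, a Dockerfile for a small Python microservice built on such a base image might look like the following. The base image tag, file layout, port and entry point are illustrative assumptions, not prescriptions:

```dockerfile
# Illustrative Dockerfile for a hypothetical Python microservice.
# Start from an official Python base image.
FROM python:3.11-slim

# Install dependencies first so Docker can cache this layer.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, document the listening port, and set
# the command that runs when the container starts.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

You would build and run it locally with `docker build -t my-service .` followed by `docker run -p 8000:8000 my-service`, then push the image to a registry such as ECR when you are ready to deploy.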
Containers are the fundamental unit of deployment for ECS. You will have a hard time migrating to ECS if your app is not already packaged as a Docker image. Conversely, if you have a “Dockerized” application, you do not have to overcomplicate the task with Docker Compose, Docker Swarm, etc. Everything else, including how the containers talk to one another, networking, load balancing and so on, can be managed on the ECS platform itself.
In terms of Docker skills, you only need to be familiar with the basics of networking, volumes and Dockerfiles.
Tooling you won’t need
If you are already accustomed to using Docker, there is a plethora of services that can help you deploy Docker containers. Services like Docker Compose help you deploy applications that are made up of multiple containers. You can define storage volumes, networking parameters and expose ports using Docker Compose.
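To make that concrete, here is an illustrative docker-compose.yml of the kind an operator might write: a web container plus a Redis cache, with a published port and a named volume. The service names, images and ports are examples only:

```yaml
# Example docker-compose.yml: two services, one published port,
# one named volume. All names here are hypothetical.
version: "3"
services:
  web:
    image: my-web-app:latest   # your application image
    ports:
      - "80:8000"              # host port 80 -> container port 8000
    depends_on:
      - cache
  cache:
    image: redis:alpine
    volumes:
      - cache-data:/data       # persist Redis data across restarts
volumes:
  cache-data:
```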
However, most of these tools are limited to a single VM or to Docker Swarm exclusively, and services like Docker Swarm are incompatible with Amazon ECS. Creating a Docker Swarm typically involves launching a cluster of EC2 instances, installing Docker on all of them, running docker swarm init to turn the EC2 instances into a swarm, installing Docker Compose, writing a docker-compose.yml file for each application, and then deploying it.
Furthermore, you will need to maintain and update all the underlying software like Docker, Docker Compose, Docker Swarm and then ensure that the docker-compose files are compatible with the new versions.
Amazon ECS, on the other hand, takes all of that away from you. You don’t need to allocate EC2 instances for Docker Swarm manager nodes, and you won’t have to worry about updating any of the container management software; Amazon does that for you. You can deploy multi-container applications using a single task definition. Task definitions replace your docker-compose.yml files and can be supplied either through the web console or as a JSON payload.
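As a minimal sketch, a Fargate task definition supplied as JSON might look like this. The family name, image URI and resource values are placeholders; 256 CPU units correspond to 0.25 vCPU:

```json
{
  "family": "my-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "portMappings": [{ "containerPort": 8000 }],
      "essential": true
    }
  ]
}
```

Compare this with the docker-compose.yml above: images, ports and resources are all declared in one document, but ECS, not your own tooling, is responsible for scheduling it.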
When used with AWS Fargate, ECS can take even the EC2 instances away. You still pay for compute and memory, but only for what you request for your containers, not for idle VM capacity around them, which results in cost savings.
Pricing for Amazon ECS
The pricing model for ECS depends on one important factor: where are your containers running? They can run either on AWS Fargate or on your own EC2 cluster.
If you choose the traditional way of running containers on EC2 instances, then you simply pay the usual EC2 prices. Every EC2 pricing policy works: you can use Spot Instances for non-critical workloads, On-Demand Instances or Reserved Instances, whichever makes economic sense for your application.
If you are using AWS Fargate to run your containers on, then the pricing consists of two independent factors:
CPU requested: here, you typically pay up to $0.06 per hour per vCPU
Memory requested: This is priced at $0.015 per hour per GB of memory
When you define your services, you set the vCPU and memory values for each kind of container you will be launching. At the end of the month, your Amazon Fargate bill is the memory charges plus the CPU charges.
You are billed by the second, with a minimum of one minute of usage, any time you run an ECS task on Fargate. Overall, you are billed from the instant you start your task to the moment the task terminates. Prices differ from one region to another, and you can visit the AWS Fargate pricing page for details. The CPU values start from 0.25 vCPU and go all the way up to 4 vCPUs, and each CPU value has a minimum and maximum memory that can be paired with it.
For example, 1 vCPU needs at least 2GB of memory and can’t have more than 8GB.
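The arithmetic above is easy to sketch. The following back-of-the-envelope estimator uses the per-hour figures quoted above (which are upper-bound examples; actual rates vary by region) and assumes a 730-hour month:

```python
# Back-of-the-envelope Fargate cost estimator. The rates below are
# the illustrative figures from the text, NOT authoritative prices.
VCPU_PER_HOUR = 0.06   # USD per vCPU-hour
GB_PER_HOUR = 0.015    # USD per GB-hour of memory

def fargate_monthly_cost(vcpu, memory_gb, hours=730.0):
    """Estimate the monthly bill for one always-on Fargate task."""
    cpu_cost = vcpu * VCPU_PER_HOUR * hours
    mem_cost = memory_gb * GB_PER_HOUR * hours
    return round(cpu_cost + mem_cost, 2)

# 1 vCPU with 2 GB of memory (the minimum pairing mentioned above),
# running continuously for a 730-hour month:
print(fargate_monthly_cost(1, 2))  # → 65.7
```

Doubling the requested resources, or doubling the number of task instances, doubles the estimate, which is exactly the scaling behavior described below.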
The bill you incur also depends on the way your application scales. Suppose you are running a single container and all of a sudden the workload spikes. The application will autoscale and spawn, say, n more containers, resulting in n times the normal resource utilization. Consequently, in times of peak load you will be charged more.
AWS Fargate can save you a lot of money if you run containers for batch processes like data processing, analytics, etc. For services like web servers, which are supposed to be active all the time, your billing would not differ all that much from EC2 prices. However, you may still want to leverage ECS for running containers over plain EC2, because containers come with a whole different set of advantages.
Amazon ECS Architecture
Tasks and Task Definition
An application consists of many microservices, and each of these services can be shipped as a Docker image (a.k.a. a container image). You define an ECS task definition, within which you select the Docker image and the CPU and memory allocated per container. IAM roles can be associated with the task definition for granular privilege control, and various other Docker-specific parameters, like networking mode and volumes, can be specified here as well. A task is then a running instance of a task definition.
You can have multiple containers inside a single task definition, but you should rarely, if ever, run your entire application in one. For example, if you are running a web app, one task definition can hold the front-end web server image, and a different task can be associated with your backend database.
Later you may realize that your app could perform better if the front-end had a caching mechanism, so you update the task definition to include a Redis container alongside your front-end container.
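Sketching that update, the containerDefinitions section of the revised front-end task definition might look roughly like this (the container and image names are illustrative):

```json
"containerDefinitions": [
  {
    "name": "frontend",
    "image": "my-frontend:latest",
    "essential": true,
    "portMappings": [{ "containerPort": 80 }]
  },
  {
    "name": "redis",
    "image": "redis:alpine",
    "essential": false
  }
]
```

In awsvpc networking mode, containers in the same task share a network namespace, so the front-end can reach Redis on localhost without any extra wiring.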
To summarize: you can have multiple closely related containers in a task; a task runs as per its task definition; and a task definition can be updated to update one part of your application. Notice that you don’t touch the backend software when you update the front-end task definition.
If you are familiar with Kubernetes, then tasks are similar to pods in a Kubernetes cluster.
Services
Remember that we still have to ensure that our application is scalable. Services are what allow us to do that. A service is the next level of abstraction on top of tasks: it can run multiple task instances created from the same task definition across your entire cluster (across multiple EC2 instances, for example).
Services help you autoscale your application based on CloudWatch alarms, they can have load balancers to distribute incoming traffic to individual containers, and they are the interface through which one part of your application talks to another. Going back to our previous example, a web server doesn’t talk to the database containers directly; instead it talks to the database service, which in turn talks to the underlying containers running your database server. This is the “service” in microservice-based architecture. Kubernetes has a similar concept with the same name, i.e., the Service.
Cluster, VPC and Networking
Lastly, you may want to logically separate one set of services from another. Say you have multiple applications: you can create a separate ECS cluster for each of them. Inside each cluster reside the services that make up the application, and inside those services the tasks run.
Moreover, from a security standpoint it is better to run each ECS cluster in its own VPC (Virtual Private Cloud). This provides you with a range of private IP addresses, which you can further split into subnets if you so desire. Sensitive workloads can reside in a separate subnet with only one gateway; that way, if a service has a vulnerability and gets compromised, the attacker may not be able to reach the sensitive stuff.
The ECS console will create a VPC for you if you don’t have one.
AWS Fargate
We have talked a little about Fargate already: how it differs from regular EC2 clusters in terms of pricing, and how management is simpler with it. Let’s take a closer look.
A given ECS cluster can pool compute resources from both EC2 and AWS Fargate and schedule containers across them as and when needed. However, when you write a task definition you need to specify whether the task will run on AWS Fargate or is designed for EC2.
Besides the ease of management and the highly scalable model that AWS Fargate offers, it also provides the right environment for practicing how containers should be run in production. You don’t get the option of tweaking the underlying VM or restarting your container from the Docker host. This matters if we are ever going to run containers on bare-metal servers.
The ultimate goal for cloud providers is to run containers from multiple users on the same server, instead of virtualizing the hardware and then running containers on top of it. We, as application developers, should no longer expect to “restart our containers” from the VM. Worse still is the implicit assumption that your container will run in an isolated VM instead of a multi-tenant environment.
AWS Fargate doesn’t let you get away with those assumptions. Instead, it encourages cloud-native logging and monitoring solutions and fine-grained access policies, and allows you to build apps that scale without you having to spin up more VMs or EC2 instances. Some features are still region-specific, but it is certainly a step in the right direction.
Conclusion
From a business perspective, Amazon ECS is easy to learn, and easy to manage and deploy apps on. It lets you run Dockerized apps across multiple EC2 instances or on AWS Fargate without paying for control nodes or setting up Kubernetes or another distributed system on your own.
Yes, there is always the fear of vendor lock-in, but Docker containers are fairly portable to begin with, so if you ever wish to migrate away from AWS you won’t have to rewrite your code. You can also save a significant amount on your AWS bill if you use AWS Fargate and/or set up auto scaling to leverage AWS’s pay-as-you-go model.
Finally, running Docker containers in production is the way forward. Adopting technologies like ECS will leave your application, and your team, well prepared for multi-tenant cloud computing environments.