From docker-compose to AWS ECS

This module is responsible for understanding the docker-compose file as a whole, and more specifically for putting together the settings of the services defined.


The services are defined in YAML under the services section. Each service then has its own set of properties that can be defined.


To enable further configuration and customization in an easily consumable format, ignored by docker-compose natively, you can define x-configs in the services definitions.
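For illustration only, here is a minimal sketch of where such a block sits in the compose file; the keys under x-configs below are placeholders, not a definitive list of supported settings:

```yaml
services:
  app01:
    image: myrepo/app01:latest
    ports:
      - 8080:80
    x-configs:
      # placeholder keys for illustration; refer to each module's
      # documentation for the settings it actually supports
      network:
        is_public: false
```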

Features that ECS ComposeX takes care of for you, should you need them:

  • Creates AWS Load Balancers (NLB or ALB) that route traffic to your applications

  • Registers services into Service Discovery using AWS Cloud Map

  • Adds the X-Ray sidecar when you need distributed tracing

  • Calculates the compute requirements based on the docker-compose v3 declaration

  • Supports adding an IAM permission boundary for extended security precautions

  • Supports AWS Secrets Manager secrets usage

  • Supports scaling definitions
    • SQS-based step scaling

    • Target Tracking scaling for CPU/RAM

ECS Cluster configuration

x-cluster allows you to configure the ECS Cluster as you wish to, instead of my own defaults. If you do not specify your own settings, the default settings will be applied.

Default settings:

As you know, I am going for Fargate first and only as the default deployment mechanism.

  • FARGATE_SPOT, Weight=4, Base=1

  • FARGATE, Weight=1

Setting the Properties according to the AWS CloudFormation Reference for ECS Cluster will allow you to override the default settings with your own.
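For example, the default strategy above could be expressed explicitly with the AWS::ECS::Cluster properties (the cluster name here is illustrative):

```yaml
x-cluster:
  Properties:
    ClusterName: my-cluster    # illustrative name
    CapacityProviders:
      - FARGATE
      - FARGATE_SPOT
    DefaultCapacityProviderStrategy:
      - CapacityProvider: FARGATE_SPOT
        Weight: 4
        Base: 1
      - CapacityProvider: FARGATE
        Weight: 1
```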


Head to cluster_syntax_reference for more details on how to use x-cluster.

AWS AppMesh & AWS Cloud Map for services mesh & discovery

AWS AppMesh is a service mesh which takes care of routing your services' packets logically among the different nodes. What this allows you to do is to explicitly declare which services have access to others, over HTTP, TCP or gRPC.


For HTTP, it supports both http2 and http.

There are a lot more features to know about, so I would recommend heading to the AWS AppMesh official documentation.


At the time of working on this feature, mutual TLS is not available, for lack of $$ to use AWS ACM CA and do the dev work.


By default in ECS ComposeX, the EGRESS policy for nodes is DROP_ALL, so that only explicitly allowed traffic can go across the mesh, in and out of the services.


The nodes are a logical construct to indicate an endpoint. With ECS ComposeX, a node will be one of:

  • a service defined and deployed in ECS

  • a database

  • any DNS discoverable target.

When you enable AWS AppMesh in ECS ComposeX, it will automatically add all the necessary resources for your ECS task to work correctly:

  • the envoy sidecar container

  • an updated task definition with the proxy configuration

  • IAM permissions for envoy to discover services and the mesh settings.


Routers are logical endpoints that apply the logic you define into routes. For TCP routers, it mostly is about defining TCP settings, such as timeouts.

For HTTP and gRPC however, it is far more advanced. You can define routes based on path, method, etc. It can also perform health checks for you, to evaluate the nodes' health. It effectively is a virtual ALB listener with a long set of rules.


From experimenting and testing however, you cannot mix routes protocols within the same router.


The virtual services are, once again, a logical pointer to a resource. That resource will be either a Node or a Router. But again, it is aimed to be a virtual pointer; therefore, you do not need to give your virtual service the same name as one of the services defined in the compose services.

What does that mean?

In essence, when you define a VirtualService as the backend of a virtual node, this means this node and its services will be granted access to the nodes of the VirtualService itself. But, you might have called your services clock and watch, and yet the virtual service will be called time.

Problem: when trying to connect to the endpoint time, your application won't be able to resolve time.

Solution: ECS ComposeX will create a virtual service in the same AWS Cloud Map namespace as the one where the ECS services are registered, and create a fake instance of it, for which the IPv4 address will be a link-local address (see RFC 3927 for more details).

How does it work?: your microservice in ECS will try to resolve time. The DNS response will be that link-local IP address, which obviously does not exist in a VPC, but it will allow your application to establish the connection. The connection is intercepted by the envoy proxy container, which internally figures out where to connect and how. It will then take your packet and send it across to the destination, to the right IP address. This is why resolving the name in DNS is important, but the value of the record is not.

The other things ECS ComposeX takes care of for you

In addition to configuring the ECS Task definition appropriately etc., ECS ComposeX will also take care of opening the security groups between the Virtual Nodes, and to other backends.

Yes, a mesh with DROP_ALL will ensure that communication between nodes only happens if explicitly allowed, but this does not mean we should not also keep the underlying network in check.

The security group inbound rule defined is from the source node to the target node(s), allowing all traffic for now between the nodes.


For troubleshooting, you can use the ClusterWide Security Group, which is attached to all containers deployed with ECS ComposeX, and allow all traffic within that security group to let your ECS Services communicate.


This module aims to create the SQS queues and expose their properties via AWS CFN exports so that they can be used and referenced by services stacks to create IAM policies accordingly.

Queue properties

In order to make things very simple, the definition of properties follows the exact same pattern as the CFN SQS definition.

Special properties

Redrive policy

The redrive policy works exactly as you would expect and is defined in the exact same way as within the SQS properties. Here, however, you only need to put the queue name of the DLQ. The generated ARN etc. will be fetched via exports (which also implicitly adds a lock on it).

Example with DLQ:

    x-sqs:
      DLQ:
        Properties: {}
        Settings: {}
        Services: []
      APPQUEUE01:
        Properties:
          RedrivePolicy:
            deadLetterTargetArn: DLQ
            maxReceiveCount: 10


See x-sqs


This package is here to create all the CFN templates necessary to create RDS instances and allow microservices to access the databases.


RDS is far more complex to configure and allow access to from microservices than pure IAM (at least at this time, using IAM-based auth might have performance impacts on your applications, so we are going to consider that usual DB credentials are in use).

The engine

The engine & engine version are going to be used to determine whether you are trying to create an Aurora cluster or a traditional RDS instance. You have nothing more to do.
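As a sketch, assuming a database and service named app-db and app01 (hypothetical names), declaring the engine is as simple as:

```yaml
x-rds:
  app-db:
    Properties:
      Engine: aurora-mysql       # an Aurora engine results in an Aurora cluster
      EngineVersion: "5.7.12"
    Services:
      - name: app01
        access: RW
```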

Security groups configuration

Per database, one Security Group is created for the DB itself and another that will be assigned to all microservices which have been registered to have access to the database. However, keep in mind the account limitations on Security Groups, which by default allow a maximum of 5 Security Groups per ENI. Given we are in awsvpc networking mode, each running microservice (container) has its own ENI.


AWS Secrets Manager integrates very nicely with AWS RDS. This has no intention to implement the rotation system at this point in time; however, it will generate the password for the database and expose it securely to the microservices, which can fetch, via environment variables:

  • DB Endpoint

  • DB username

  • DB Password

  • DB Port

Standalone usage

You can use ECS ComposeX to create a standalone version of your RDS database.


See x-rds


This python subpackage is responsible for creating the DynamoDB tables or finding existing tables based on tags.


As for all resources in ECS ComposeX, this section is here to represent the AWS CloudFormation properties you would normally use to define all the settings.


All current DynamoDB properties are supported. This feature was tested by copy-pasting the AWS examples. Find examples in use-cases/dynamodb of this repository.


Lookup allows you to search for existing DynamoDB tables using tags to identify your existing resources.
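A sketch of a Lookup definition, with a hypothetical table name and tag keys/values:

```yaml
x-dynamodb:
  table-A:
    Lookup:
      Tags:
        - name: table-a          # hypothetical tag key/value
        - costcentre: lambda     # hypothetical tag key/value
```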

IAM Access types

Three access types have been created for the table:

  • RW

  • RO

  • PowerUser

RW - Read/Write

This allows the microservice read and write access to the table items.

Read/Write policy statement snippet
    {
        "Action": [
            ...
        ],
        "Effect": "Allow"
    }

RO - Read Only

This only allows querying information out of the table items.

Read Only policy statement snippet
    {
        "Action": [
            ...
        ],
        "Effect": "Allow"
    }


PowerUser

This allows all API calls apart from creating and deleting the table.

PowerUser IAM statement snippet
    {
        "NotAction": [
            ...
        ],
        "Effect": "Allow"
    }

AWS VPC, needs no introduction

I am not here to tell you what a VPC should look like. So in that spirit, this is really here to be one less thing developers who wish to use this tool have to think about.

Outputs and exports

By default, all outputs are also exported, and for the VPC these are particularly useful ones. If you want to create resources which are not yet supported by ECS ComposeX but wish to use CFN or something else like Terraform, you can identify and get subnet IDs, CIDR ranges and the like from the CFN exports.

Using an existing VPC

You might already have network configuration and VPC setup all done, and want to simply plug-and-play to that existing network configuration you have.

To help with that, we have added support for the x-vpc key in the docker-compose file, which allows you to find your VPC and subnets with many options.
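A sketch of what that could look like, assuming you tag your subnets by usage (the tag keys and values below are examples, not requirements):

```yaml
x-vpc:
  Lookup:
    VpcId:
      Tags:
        - Name: vpc-prod
    PublicSubnets:
      Tags:
        - vpc::usage: public
    AppSubnets:
      Tags:
        - vpc::usage: application
    StorageSubnets:
      Tags:
        - vpc::usage: storage
```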

See also

Head to x-vpc to see how to use that feature.

Default VPC Network design

The design of the generated VPC is a very simple three-tier layout:

  • Public subnets, 1/4 of the available IPs of the VPC CIDR Range

  • Storage subnets, 1/4 of the available IPs of the VPC CIDR Range

  • Application subnets, 1/2 of the available IPs of the VPC CIDR Range
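For example, with a hypothetical VPC CIDR of, that split works out to:

```text
Public subnets:       (64 addresses, 1/4 of the range)
Storage subnets:      (64 addresses, 1/4 of the range)
Application subnets:  (128 addresses, 1/2 of the range)
```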

I used to have a calculator for the CIDR range that would do things in percentages, so it would be far more granular, but I found that it wasn't worth going so in-depth into it.

Network architects out there will have created the VPCs by other means already or already know exactly what and how they want these configured.

If that is not the case and you just want a VPC which will work with ingress and egress done in a sensible way, use the --create-vpc argument of the CLI.

Default range

The VPC is created with a default CIDR range, which can be overridden with --vpc-cidr.

This leaves a little less than 120 IP addresses for the EC2 hosts and/or Docker containers.


  • Add option to enable VPC Flow logs

  • Add option to enable VPC Endpoints


This python subpackage is responsible for creating the KMS Keys.


As for all resources in ECS ComposeX, this section is here to represent the AWS CloudFormation properties you would normally use to define all the settings.


All current KMS Key properties are supported. This feature was tested by copy-pasting the AWS examples.

IAM Access types

Four access types have been created for the key:

  • EncryptDecrypt

  • EncryptOnly

  • DecryptOnly

  • SQS


EncryptDecrypt

This allows the microservice to encrypt and decrypt data with the KMS key.

EncryptDecrypt policy statement snippet
    {
        "Action": [
            ...
        ],
        "Effect": "Allow"
    }


EncryptOnly

This only allows encrypting data with the KMS key.

Encrypt Only policy statement snippet
    {"Action": ["kms:Encrypt", "kms:GenerateDataKey*", "kms:ReEncrypt*"], "Effect": "Allow"}


DecryptOnly

This allows using the KMS key to decrypt data.

Decrypt Only snippet
    {"Action": ["kms:Decrypt"], "Effect": "Allow"}


SQS

This grants the permissions needed to use the KMS key with encrypted SQS queues.

SQS Decrypt messages snippet
    {"Action": ["kms:GenerateDataKey", "kms:Decrypt"], "Effect": "Allow"}

EC2 resources for ECS Cluster

This module is here to create the compute resources, if so chosen, instead of using Fargate. Given that the default is to use AWS Fargate (soon it will use Fargate Spot as well), the EC2 resources which were provisioned by default are now optional.

I would only recommend using EC2 resources over Fargate if, for performance reasons, you need to create pre-baked AMIs which contain a lot of the docker layers that your images and volumes need.


  • Creates the ECS Cluster for the deployment of the services to it.

  • Creates an IAM Role and Instance profile for potential EC2 hosts

  • Creates a Launch Template using the IAM Role/Instance Profile and Security Group, so if you want to run instances to troubleshoot inside the VPC, it's easy!

Optionally it will also allow you to:

  • Create a SpotFleet to run services on top of ECS instances.

The EC2 instances running on Spot/OnDemand will have a configuration that forces the nodes to bootstrap properly in order to work. If bootstrap fails, as might happen, the instances will "self-destroy", given they could not bootstrap properly.

You can override the AMI ID if you'd like (it has to be in SSM at the moment, though), but I can't recommend enough to just use a vanilla AWS Amazon Linux ECS-Optimized image. They just work.

CLI Usage

The CLI is here primarily to have an example of the various settings you would need if you wanted to go and create the Compute resources yourself (EC2, ASG, SpotFleet).

At the moment, the option --iam-only is not implemented, but soon it will allow you to get the CFN templates for just the IAM parts, if you so wish.

The default EC2 configuration

As I mentioned above, this is not going to provision any compute resources (instances) by default. The configuration is very simple and uses cfn-init, which must be one of the most underestimated features of CloudFormation.

The IAM Instance Profile allows the node to register against the ECS Cluster and only against that one. As you will soon realize in this project, everything with IAM is done to be least-privilege only.