ECS-ComposeX

x-Resources common syntax

ECS ComposeX expands on the original Docker compose file definition in order to map the docker compose properties to their equivalent settings on AWS ECS, and likewise for the other "Extra" resources.

In general, for each x- section of the docker compose document, each resource has three attributes:

  • Settings

  • Services

  • Properties

Settings

The settings section is where we can take shortcuts or wrap around settings which would otherwise be complex to define. Sometimes, it is simply an easy way to use configurable helpers. For example, in the next iteration for the x-rds resources, we will allow you to use the latest RDS engine and version that supports Serverless for Aurora.

There is a set of settings which are generic to all modules.

EnvNames

Multiple teams wanting to adopt ECS ComposeX might already have their own environment variable keys (or names) for a common resource. For example, team A and team B can use the same SQS queue, but they did not define a common name for it, so team A calls it QueueA and team B calls it QUEUE_A.

With EnvNames, you can define a list of environment variables that will all share the same value, each simply with a different name.

Hint

No need to add the name of the resource as defined in the docker compose file; it is always added by default.
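
For example, a minimal sketch reusing the team A / team B queue scenario above (the x-sqs syntax is detailed later in this document):

x-sqs:
  QueueA:            # "QueueA" itself is always exposed as an environment variable
    Settings:
      EnvNames:
        - QUEUE_A    # additional variable name, same value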

Services

This is a list of objects with two keys: name and access. The name points to the service as defined in the docker compose file.

Warning

This is case sensitive, so the name of the service in the list must match the name of the service as defined.

Note

At this point in time, each x- section has its own pre-defined IAM permissions for services that support IAM access to the resources. In a future version, I might add a configuration file to override that behaviour.

Refer to each x- resource syntax to see which access types are available.
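
For example, extending the sketch above with a Services list (the available access values differ per module):

x-sqs:
  QueueA:
    Services:
      - name: serviceA   # case sensitive: must match the compose service name exactly
        access: RWMessages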

Properties

Unless indicated otherwise, these are the properties for the resource as you would define them using the AWS properties in the AWS CloudFormation resource definition.

Warning

In order to update some resources, AWS sometimes needs to create new ones to replace the ones already in place, depending on the type of property you are changing. To do so, AWS needs the name of the resource to be generated, not set explicitly. It is a limitation, but for most resources it also allows for continued availability of the service.

Therefore, some resources will not use the Name value that you give them, if you set one.

x-configs & services reference

This is where we try to re-use the docker compose (v3) reference as much as possible. For the definition of the services, you can simply use the already existing Docker compose definition for your services. However, only a limited number of settings are supported today.

ECS ComposeX configurations

This is where developers can leverage the automation implemented in ECS ComposeX to simplify access to their services, between services themselves and from external sources too.

To define configuration specific to the service and override ECS ComposeX default settings for network configuration, you can use the native configs key of Docker compose.

Note

To define configuration for your service, simply create a new element/dict in the configs element of the YAML file.

x-configs

Configs is a section natively supported by docker-compose. This section allows you to define generic settings for all services, and apply them to services.

The way the definition of settings has been implemented is to go from the generic to the specific:

    1. x-configs -> composex

    2. x-configs -> service name

    3. services -> service -> x-configs

Hint

If a setting is set in both step 1 and step 3 for example, the value that will be kept is the value from step 3.
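
A hypothetical sketch of this precedence: serviceA ends up with an application load balancer, because the service-level value (step 3) overrides the generic one (step 1).

x-configs:
  composex:
    network:
      lb_type: network        # step 1: generic, applies to all services

services:
  serviceA:
    image: nginx
    x-configs:
      network:
        lb_type: application  # step 3: most specific, wins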

network

This is a top section describing the network configuration in play for the service.

Subkeys of the section:

services:
  serviceA:
    image: image
    links: []
    ports:
    - 80:80
    x-configs:
      network:
        lb_type: application
        ingress: {}
        healthcheck: {}

ingress

This allows you to define specific ingress control from external sources to your environment. For example, if you have to whitelist IP addresses that are allowed to communicate with the services, you can list these and indicate a name for each, which will be shown in the EC2 security group description of the ingress rule.

x-configs:
  app01:
    network:
      ingress:
        ext_sources:
          - ipv4: 0.0.0.0/0
            protocol: tcp
            source_name: all
          - ipv4: 1.1.1.1/32
            protocol: icmp
            source_name: CloudFlareDNS
        aws_sources:
          - type: SecurityGroup
            id: sg-abcd
          - type: PrefixList
            id: pl-abcd
        myself: True/False

Note

A future feature will allow you to input a security group ID and the remote account ID, to allow ingress traffic from a security group owned by another of your accounts (or a 3rd party).

is_public

Boolean to indicate whether the service should be publicly accessible. If set to true, the load balancer associated with the service will be made public.

lb_type

When using a load-balancer to reach the service, specify the load balancer type. Accepted values:

  • network

  • application

use_cloudmap

This indicates whether you want the service to be added to your VPC CloudMap instance. If set to true, it will automatically register the service to the discovery instance.
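
Putting these three settings together, a hedged sketch of a publicly reachable service fronted by an application load balancer could look like:

services:
  serviceA:
    image: nginx
    ports:
      - 80:80
    x-configs:
      network:
        is_public: true
        lb_type: application
        use_cloudmap: true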

healthcheck

At this time, this does not replace the docker compose native healthcheck functionality. It is a simplified expression of it, used by CloudMap or the load-balancer to register the targets.

Note

This is used for network healthchecks, not the service healthcheck.

scaling

This section allows you to define scaling for the ECS service. For SQS-based step scaling, refer to the SQS documentation.

services:
  serviceA:
    x-configs:
      scaling:
        range: "1-10"
        target_scaling:
            cpu_target: 80

range

range defines the minimum and maximum number of containers you will have running in the cluster.

#Syntax
# range: "<min>-<max>"
# Example
range: "1-21"

allow_zero

Boolean allowing the scaling to go all the way down to 0 containers running. Perfect for cost savings and for moving to a purely event-driven architecture.

Hint

If you set the range minimum above 0 and then set allow_zero to True, it will override the minimum value.
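
A minimal sketch combining both settings:

x-configs:
  scaling:
    range: "1-10"
    allow_zero: true   # overrides the minimum of 1, so the service can scale to 0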

target_scaling

Allows you to define target scaling for the service based on CPU/RAM.

x-configs:
  scaling:
    range: "1-10"
    target_scaling:
      cpu_target: 75
      memory_target: 80

Available options:

x-configs:
  scaling:
      range: "1-10"
      target_scaling:
        cpu_target: int (will be cast to float)
        memory_target: int (will be cast to float)
        scale_in_cooldown: int (ie. 60)
        scale_out_cooldown: int (ie. 60)
        disable_scale_in: boolean (True/False)

iam

This section is the entry point to further extending the IAM definition of the IAM roles created throughout.

boundary

This key represents an IAM policy (name or ARN) that will be added to the IAM roles as the IAM permissions boundary.

Note

You can either provide a full policy arn, or just the name of your policy. The validation regexp is:

r"((^([a-zA-Z0-9-_.\/]+)$)|(^(arn:aws:iam::(aws|[0-9]{12}):policy\/)[a-zA-Z0-9-_.\/]+$))"

Examples:

services:
  serviceA:
    image: nginx
    x-configs:
      iam:
        boundary: containers
  serviceB:
    image: redis
    x-configs:
      iam:
        boundary: arn:aws:iam::aws:policy/PowerUserAccess

Note

If you specify only the name, i.e. containers, this will resolve into arn:${partition}:iam::${accountId}:policy/containers

policies

Allows you to define additional IAM policies. This follows the same pattern as CFN IAM policies.

x-configs:
  iam:
    policies:
      - name: somenewpolicy
        document:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - ec2:Describe*
              Resource:
                - "*"
              Sid: "AllowDescribeAll"

managed_policies

Allows you to add additional managed policies. You can specify the full ARN or just a string for the name / path of the policy. It will be resolved with the same regexp as for boundary.
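
A hedged sketch, assuming the same list form as for policies (the policy ARN and name below are placeholders):

x-configs:
  iam:
    managed_policies:
      - arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess   # full ARN, used as-is
      - containers                                        # name only, resolved like boundary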

xray

This section allows you to enable X-Ray to run right next to your container. It uses the official AWS X-Ray daemon image and exposes the ports to the task.

Example:

x-configs:
  composex:
    xray:
      enabled: true

services:
  serviceA:
    x-configs:
      xray:
        enabled: True

See also

ecs_composex.ecs.ecs_service#set_xray

logging

Section to allow passing in arguments for logging.

logs_retention_period

Value to indicate how long the logs should be retained for the service.

Note

If the value you enter is not among the allowed values, it will be set to the closest accepted value.
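
For example, a minimal sketch (the value is a number of days, per the CloudWatch Logs retention options):

services:
  serviceA:
    x-configs:
      logging:
        logs_retention_period: 30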

deploy

The deploy section allows you to set various settings around how the container should be deployed, and what compute resources are required to run the service.

For more details on deploy, see the docker documentation here

At the moment, not all keys are supported, mostly due to the way Fargate expects settings to be defined.

resources

The resources section is probably what interests most people: defining how much CPU and RAM should be set for the service. I have tried to capture the various exceptions for the RAM settings, as you can find in ecs_composex.ecs.docker_tools.set_memory_to_mb

Once the container definitions are put together, the CPU and RAM requirements are added up. From there, it will automatically select the closest valid Fargate CPU/RAM combination and set the parameter for the Task definition.
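
A minimal sketch of the syntax, mirroring the examples later in this document:

services:
  app01:
    image: nginx
    deploy:
      resources:
        reservations:
          cpus: "0.25"
          memory: "64M"
        limits:
          cpus: "0.5"
          memory: "128M"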

Important

CPU must be set between 0.25 and 4 to be valid for Fargate, otherwise you will get an error.

Warning

At the moment, I decided to hardcode these values in the CFN template. It is ugly, but pending bigger work to allow services merging, after which these will be put into a CFN parameter to allow you to change them on the fly.

replicas

This setting allows you to define how many tasks should be running for a given service. To make this work, I simply update the default value of the MicroserviceCount parameter, to keep things configurable.

Note

update_config will be used very soon to support replacement of services using a LB, possibly using CodeDeploy Blue/Green deployment.

labels

These labels aren't used for much in native Docker compose, as per the documentation: they are only used for the service, but not for the containers themselves. That is great for us, as we can leverage that structure to implement a merge of services.

In AWS ECS, a Task definition is a group of one or more containers which are going to run as one task. The most common use-case for this is web applications which need a reverse proxy (i.e. nginx) in front of the actual application. Also, if you used the use_xray option, you will have noticed that ECS ComposeX automatically adds the x-ray-daemon sidecar. Equally, when we implement AppMesh, we will also have another sidecar container for it.

So, here is the label that will allow you to merge your reverse proxy or WAF (if you used a WAF in a container) fronting your web application:

ecs.task.family

For example, you would have:

---
# Blog applications

version: '3.8'
services:
  rproxy:
    image: ${IMAGE:-nginx}
    ports:
      - 80:80
    deploy:
      replicas: 2
      resources:
        reservations:
          cpus: "0.1"
          memory: "32M"
        limits:
          cpus: "0.25"
          memory: "64M"
    depends_on:
      - app01

  app01:
    image: ${IMAGE:-nginx}
    ports:
      - 5000
    deploy:
      resources:
        reservations:
          cpus: "0.25"
          memory: "64M"
    environment:
      LOGLEVEL: DEBUG
      SHELLY: ${SHELL}
      TERMY: "$TERM"
    links:
      - app03:dateteller

  app02:
    image: ${IMAGE:-nginx}
    ports:
      - 5000
    deploy:
      resources:
        reservations:
          cpus: "0.25"
          memory: "64M"
    environment:
      LOGLEVEL: DEBUG

  app03:
    image: ${IMAGE:-nginx}
    ports:
      - 5000
    deploy:
      resources:
        reservations:
          cpus: "0.25"
          memory: "64M"
    environment:
      LOGLEVEL: DEBUG

Warning

The example above illustrates that, for deploy labels, you can use either:

  • a list of strings

  • a dictionary

ecs.depends.condition

This label allows you to define the condition under which ECS should monitor this container. This is useful when a container is set as a dependency of another.

Hint

Allowed values are: START, SUCCESS, COMPLETE, HEALTHY. By default, this is set to START; if you defined a healthcheck, it defaults to HEALTHY. See the Dependency reference for more information.
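
A hedged sketch combining this with the ecs.task.family label described above:

services:
  rproxy:
    deploy:
      labels:
        ecs.task.family: app01
        ecs.depends.condition: HEALTHY   # requires a healthcheck to be defined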

secrets

As you might have already used these, docker-compose allows you to define secrets to use with the application. To maintain docker-compose syntax compatibility, you can now declare your secret in docker-compose and add an extension field which maps directly to the secret name you have in AWS Secrets Manager.

secrets:
  topsecret_info:
    external: True
    x-secrets:
      Name: /path/to/my/secret

services:
  serviceA:
    secrets:
      - topsecret_info

This will automatically add IAM permissions to the execution role of your Task definition and will export the secret to your container, using the same name as in the compose file.

Note

At this time, AWS Fargate does not support specifying a JSON key within the secret, so this is not implemented here.

Hint

If you believe that your service application should have access to the secret via Task Role, simply add to the secret definition as follows:

secret-name:
  x-secrets:
    Name: String
    LinksTo:
      - EcsExecutionRole
      - EcsTaskRole

Warning

If you do not specify EcsExecutionRole when specifying LinksTo, then the secret will not be exposed to your container via the AWS ECS Secrets property of your Container Definition.

Hint

For security purposes, the envoy and xray-daemon containers are not assigned the secrets.

x-cluster

This section allows you to define how you would like the ECS Cluster to be configured. It also allows you to define Lookup to use an existing ECS Cluster.

Properties

Refer to the AWS CFN reference for ECS Cluster

Override default settings
x-cluster:
  Properties:
    CapacityProviders:
      - FARGATE
      - FARGATE_SPOT
    ClusterName: spotalltheway
    DefaultCapacityProviderStrategy:
      - CapacityProvider: FARGATE_SPOT
        Weight: 4
        Base: 2
      - CapacityProvider: FARGATE
        Weight: 1

Lookup

Allows you to enter the name of an existing ECS Cluster that you want to deploy your services to.

Lookup existing cluster example.
x-cluster:
  Lookup: mycluster

Warning

If the cluster name is not found, a new cluster will be created with the default settings.

Use

This key allows you to set a cluster to use without looking it up; you simply know the name you want to use. This is useful for multi-account setups where you can't do a cross-account lookup.
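
A minimal sketch, assuming a hypothetical cluster name:

x-cluster:
  Use: shared-cluster-01   # used as-is, no lookup is performed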

x-appmesh

The properties for the mesh are very straightforward. Even though the wish with ECS ComposeX is to keep the Properties as close as possible to the ones defined in CFN, for AWS AppMesh, given the simplicity of the properties, we are going with somewhat custom properties, mostly to allow for more feature integration down the line.

Warning

Only one mesh will be either created or used to deploy the services into.

x-appmesh:
  Properties: {}
  Settings: {}

Properties

MeshName

This is the name of the mesh. However, if you do not specify the MeshOwner, then the name is ignored and the root stack name is used.

The MeshName is going to be used if you specify the MeshOwner, in case you are deploying into a Shared Mesh.

AllowedPattern: ^[a-zA-Z0-9+]+$

MeshOwner

The MeshOwner, as described above, needs to be specified if you are creating your Nodes, Routers and (virtual) Services in a Mesh shared with you from another account.

AllowedPattern: [0-9]{12}

EgressPolicy

The mesh aims to allow services and nodes to communicate with each other only through the mesh. By default, ECS ComposeX therefore sets the policy to DROP_ALL, meaning no traffic out of the nodes will be allowed unless it goes to a defined VirtualService in the mesh.

For troubleshooting, or otherwise for your use-case, you might want to allow any traffic out of the node anyway. If so, simply change the policy to ALLOW_ALL.

AllowedValues: DROP_ALL, ALLOW_ALL
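
A hedged sketch of the mesh properties together (the account ID is a placeholder):

x-appmesh:
  Properties:
    MeshName: sharedmesh
    MeshOwner: "012345678912"   # only when deploying into a mesh shared by another account
    EgressPolicy: ALLOW_ALL     # default is DROP_ALL
  Settings: {}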

Settings

The settings section is where we are going to define how our services defined in Docker compose are going to integrate into the mesh.

nodes

This section represents the nodes. The nodes listed here must be either a service as listed in docker-compose or a family name.

nodes:
  - name: app01
    protocol: http
  - name: app02
    protocol: tcp
    backends:
      - service-abcd

routers

Routers, as mentioned in the module description, allow developers to define how packets should be routed from one place to another.

For TCP routers, one can only really set timeout settings, in addition to TLS etc. For http, http2 and gRPC however, you can define many more rules. The example below shows a router which, for requests on path /, sends requests with the POST method to app02, and requests with the GET method to app01.

routers:
  - name: httprouter
    listener:
      protocol: http
      port: 8080
    routes:
      http:
        - match:
            prefix: /
          method: GET
          scheme: http
          nodes:
            - app01
        - match:
            prefix: /
          method: POST
          nodes:
            - app02

services

The VirtualServices act as backends to nodes, and as receivers for nodes and routers. A Virtual Service can use either a Node or a Router as the target to route traffic to.

services:
  - name: service-xyz
    router: httprouter
  - name: service-abc
    node: app03

Examples

---
# Blog applications

version: '3.8'

services:
  rproxy:
    ports:
      - 80:80
    deploy:
      replicas: 1
      labels:
        ecs.task.family: app01
    x-configs:
      use_xray: True
  app01:
    ports:
      - 5000
      - 5001:5000
    deploy:
      labels:
        ecs.task.family: app01

  app03:
    x-configs:
      logging:
        logs_retention_period: 60
    ports:
      - 5000
    deploy:
      resources:
        reservations:
          cpus: "0.25"
      labels:
        ecs.task.family: app03

  app04:
    ports: []
    image: nginx

x-tags:
  owner: johnpreston
  contact: john@lambda-my-aws.io
  another: one

x-appmesh:
  Properties:
    MeshName: root
  Settings:
    nodes:
      - name: app03
        protocol: http
      - name: app02
        protocol: http
      - name: app01
        protocol: http
        backends:
          - dateteller # Points to the dateteller service, not router!
    routers:
      - name: dateteller
        listener:
          port: 5000
          protocol: http
        routes:
          http:
            - match:
                prefix: /date
                method: GET
                scheme: http
              nodes:
                - name: app02
                  weight: 1
            - match:
                prefix: /date/utc
              nodes:
                - name: app03
                  weight: 1
    services:
      - name: api
        node: app01
      - name: dateteller
        router: dateteller

x-dns:
  PrivateNamespace:
    Name: mycluster.lan
  PublicNamespace:
    Name: lambda-my-aws.io

x-vpc:
  Create:
    VpcCidr: 10.0.0.0/24

x-cluster: dev

x-vpc

The VPC module is here to allow you to define settings for the VPC directly from the docker-compose file instead of the CLI, using the same arguments. Equally, for ease of use, you can also define lookup settings to use an existing VPC.

Creating a new VPC

x-vpc:
  Create:
    SingleNat: true
    VpcCidr: 172.6.7.42/24
    Endpoints:
      AwsServices:
        - service: s3
        - service: ecr.api
        - service: ecr.dkr

Use an existing VPC and subnets

x-vpc:
  Use:
    VpcId: vpc-id
    AppSubnets:
      - subnet-id
      - subnet-id
    StorageSubnets:
      - subnet-id
      - subnet-id
    PublicSubnets:
      - subnet-id
      - subnet-id

Hint

The difference with Lookup is that Use won't try to find the VPC and Subnets; it allows you to "hardcode" static values.

Looking up an existing VPC

x-vpc:
  Lookup:
    VpcId:
      Tags:
        - key: value
    StorageSubnets:
        - subnet-abcd
    PublicSubnets:
      Tags:
        - vpc::usage: public
    AppSubnets: subnet-abcd,subnet-1234

Supported filters

VpcId

Lookup VPC ID
x-vpc:
  Lookup:
    VpcId: vpc-123456
Lookup VPC ARN
x-vpc:
  Lookup:
    VpcId: arn:aws:ec2:eu-west-1:012345678912:vpc/vpc-123456
Lookup via Tags
x-vpc:
  Lookup:
    VpcId:
      Tags:
        - Name: vpc-shared

StorageSubnets, AppSubnets, PublicSubnets

If defined as a string, it expects a comma-delimited list of valid subnet IDs. If defined as a list, it expects a list of strings of valid subnet IDs. If defined as an object, it expects a Tags list, in the same syntax as for the VPC.

Comma-delimited string
x-vpc:
  Lookup:
    AppSubnets: subnet-abcd,subnet-123465,subnet-xyz
List of subnet IDs
x-vpc:
  Lookup:
    StorageSubnets:
      - subnet-abcd
      - subnet-12345
      - subnet-xyz
EC2 Tags
x-vpc:
  Lookup:
    PublicSubnets:
      Tags:
        - Name: vpc-shared

Note

The AppSubnets are the subnets in which the containers will be deployed, which means they require access to services such as ECR, Secrets Manager etc. You can use any subnet in your existing VPC so long as network connectivity is achieved.

Tip

When you look up the VPC and Subnets, these parameters are added to ComposeX. At the time of rendering the template to files, it will also create a params.json file for the stack, and put your VPC ID and Subnet IDs into that file.

[
    {
        "ParameterKey": "VpcId",
        "ParameterValue": "vpc-01185d1aad942441c"
    },
    {
        "ParameterKey": "AppSubnets",
        "ParameterValue": "subnet-00ad888b1434a7187,subnet-04d5d90d04874f8e2,subnet-04103167a162e3f8e"
    },
    {
        "ParameterKey": "StorageSubnets",
        "ParameterValue": "subnet-0dc9044f0b566c878,subnet-0fe6f4beb6ce2403d,subnet-0aa49c83e98120a5d"
    },
    {
        "ParameterKey": "PublicSubnets",
        "ParameterValue": "subnet-005eb795e33b68464,subnet-0fb1855c9316aab3c,subnet-0f4f3d27a17b1c3da"
    },
    {
        "ParameterKey": "VpcDiscoveryMapDnsName",
        "ParameterValue": "cluster.local"
    }
]

Warning

If you are doing a lookup, you must configure the VpcId so that all subnets will be queried against that VPC for higher accuracy.

Warning

If you specify both Create and Lookup in x-vpc, then the default behaviour is applied.

x-sqs

Services

Similar to all other modules, we have a list of dictionaries, with the following keys of interest:

  • name: the name of the service as defined in services

  • access: the type of access to the resource.

  • scaling: allows you to define the scaling behaviour of the service based on the SQS ApproximateNumberOfMessagesVisible metric.

Access types

  • RO - read only

  • RWMessages - read/write messages on the queue

  • RWPermissions - read/write messages and grants access to modify some queue attributes

Tip

IAM policies are defined in sqs/sqs_perms.py

Settings

No specific settings for SQS at this point.

Properties

Mandatory Properties

SQS does not require any properties to be set in order to create the queue. No settings are mandatory.

Special properties

It is possible to define Dead Letter Queues (DLQ) for SQS messages. You can easily define this in ECS ComposeX simply by referring to the name of another queue deployed in this same deployment.

Warning

At this time, it is not possible in ECS ComposeX to import the ARN of a queue that exists outside of the stack.

To do so, simply use the following syntax:

Examples

x-sqs:
  Queue02:
    Services:
      - name: app02
        access: RWPermissions
      - name: app03
        access: RO
    Properties:
      RedrivePolicy:
        deadLetterTargetArn: Queue01
        maxReceiveCount: 10
    Settings:
      EnvNames:
        - APP_QUEUE
        - AppQueue

  Queue01:
    Services:
      - name: app03
        access: RWMessages
    Properties: {}
    Settings:
      EnvNames:
        - DLQ
        - dlq

Example with step scaling:

x-sqs:
  QueueA:
    Services:
      - name: abcd
        access: RWMessages
        scaling:
          steps:
            - lower_bound: 0
              upper_bound: 10
              count: 1 # Gives you 1 container if there is between 0 and 10 messages in the queue.
            - lower_bound: 10
              upper_bound: 100
              count: 10 # Gives you 10 containers if you have between 10 and 100 messages in the queue.
            - lower_bound: 100
              count: 20 # Gives you 20 containers if there is 100+ messages in the queue

Note

The last step cannot have an upper_bound defined. If you set one, it will automatically be removed.

Note

You need to have defined x-configs/scaling/range to enable step scaling on the ECS Service.
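
For example, the abcd service from the sketch above would also need something like:

services:
  abcd:
    x-configs:
      scaling:
        range: "1-20"   # enables step scaling on the ECS service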

x-rds

Services

At this point in time, there is no plan to deploy, as part of ECS ComposeX, a Lambda function that would connect to the DB and create a DB/schema specifically for the microservice.

The syntax for listing the services remains the same as for the other x- resources, but the access type won't be respected.

Access types

Warning

The access key value won’t be respected at this stage.

Settings

Some use-cases require special adjustments. This is what this section is for.

copy_default_parameters

Type: boolean. Default: True when using Aurora.

Creates a DBClusterParameterGroup automatically so you can later customize the DB settings in your CFN template. This avoids the bug where only default.aurora-mysql5.6 settings are found if the property is not set.
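
A hedged sketch of the setting in place:

x-rds:
  dbname:
    Properties:
      Engine: aurora-mysql
      EngineVersion: 5.7.12
    Settings:
      copy_default_parameters: true   # already the default for Aurora engines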

Tip

The function performing the import of settings is in ecs_composex.rds.rds_parameter_groups_helper.py

Properties

RDS clusters and instances need a lot of parameters. At this stage, you do not need to copy all the settings as defined in the AWS CFN documentation, simply because a lot of it is done automatically for you. The plan is to use the settings in the future to drive changes, and some more properties (i.e. snapshots) will be added to allow for far more use-cases.

Hint

Technically, the use of snapshots is already implemented, but not fully tested. Stay tuned for next update!

Mandatory properties

The properties follow the Aurora Cluster properties, as I have more use-cases for Aurora than for traditional RDS. Cluster and DB Instance share a lot of common properties, so the difference in syntax will be very minor.

Special Properties

No special properties available for RDS yet.

Examples

x-rds:
  dbname:
    Properties:
      Engine: aurora-mysql
      EngineVersion: 5.7.12
    Services:
      - name: app01
        access: RW

Hint

The DB Family group will be found automatically and the setting copy_default_parameters will allow creation of a new RDS Parameter group for the Cluster / DB Instance.

x-dynamodb

Services

List of key/value pairs, as for other ECS ComposeX x-resources.

Three access types have been created for the table:

  • RW

  • RO

  • PowerUser

Services example
x-dynamodb:
  tableA:
    Properties: {}
    Services:
      - name: serviceA
        access: RW
      - name: serviceB
        access: RO

Settings

The only setting available at this time is EnvNames, as for all other x-resources. Stay tuned for updates.

Lookup

Allows you to discover existing resources in your account. Everything works the same for Settings etc.; only this time, you are expected to provide a series of Tags.

If tables are found in your account with the provided Tags, their ARN will be used in the service policy and exposed as the value of environment variables to the microservice task role and definition.

Warning

If you want only one specific table to be found by Lookup and the current tags return multiple tables, ensure that you make the tag combination unique.

Tags example
x-dynamodb:
  tableC:
    Lookup:
      Tags:
        - name: tableC
        - key: value

Tip

Tag keys and values are case sensitive. At this stage, this does not support regexps.

Hint

The reason why it is done by tags rather than by name is that you might want to use multiple tables at once. Of course, you can do a 1:1 mapping between your table in ComposeX and AWS.

secrets

You might have secrets in AWS Secrets Manager that you created outside of this application stack and your services need access to it.

By defining secrets in docker-compose, you can do all of that work rather easily. To help make it as easy in AWS, simply set external=True and a few other settings to indicate how to get the secret.

version: "3.8"

services:
  servicename:
    image: abcd
    secrets:
      - abcd

secrets:
  mysecret:
    external: true
    x-secrets:
      Name: /name/in/aws
      LinksTo:
        - EcsExecutionRole
        - EcsTaskRole

x-secrets

Name

The name (also known as path) to the secret in AWS Secrets Manager.

LinksTo

List to determine whether the TaskRole or the ExecutionRole (or both) should have access to the secret. If set to TaskRole only, the secret value will not be exposed in env vars; only the secret name will be set.

x-sns

---
# Syntax reference for SNS

x-sns:
  Topics:
    topic1:
      Properties: {}
      Settings: {}
      Services: []
    topicN:
      Properties: {}
      Settings: {}
      Services: []
  Subscriptions:
    subscription01:
      Properties: {}
      Settings: {}

x-kms

Services

List of key/value pairs, as for other ECS ComposeX x-resources.

Four access types have been created for the key:

  • EncryptDecrypt

  • EncryptOnly

  • DecryptOnly

  • SQS

KMS and Services
x-kms:
  keyA:
    Properties: {}
    Services:
      - name: serviceA
        access: EncryptDecrypt
      - name: serviceB
        access: DecryptOnly

Settings

In addition to EnvNames, for KMS we also have Alias, which will create an Alias along with the KMS key. The alias name must be a string not starting with alias/aws or aws. If you specify an alias starting with alias/, the string will be used as-is; if you only specify a short name, the alias will be prefixed with the root stack name and region.

x-kms:
  keyA:
    Properties:
      PendingWindowInDays: 14
    Services:
      - name: serviceA
        access: EncryptDecrypt
      - name: serviceB
        access: EncryptDecrypt
    Settings:
      Alias: keyA
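
Conversely, a minimal sketch of the as-is form:

x-kms:
  keyB:
    Properties: {}
    Services:
      - name: serviceA
        access: EncryptDecrypt
    Settings:
      Alias: alias/keyB   # starts with alias/, so used as-is without prefixing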

x-acm

This module is here to allow people to create ACM certificates, auto-validate these with their DNS registration, and front their applications with HTTPS.

CloudFormation recently added native support for adding the CNAME entry to your Route53 DNS record as the certificate is created, removing the manual validation process.

Warning

At the time of working on this feature, Troposphere had not released support for it, but it is available in their master branch.

x-acm:
  blogdemo:
    Properties:
      DomainName: blog-demo.lambda-my-aws.io
      DomainValidationOptions:
        - DomainName: lambda-my-aws.io
          HostedZoneId: Zredacted
    Settings: {}
    Services:
      - name: app01
        ports: [443]

Properties

The properties are supported exactly as in the native AWS CloudFormation definition. At the time of writing this module though, only one DomainValidationOptions entry is supported.

Hint

Remember as well that you can only auto-validate by providing the HostedZoneId, and you would probably only do that once.

Settings

No settings yet implemented. By default, the Name tag key will use the same value as the DomainName.

Services

List the services which will have a Listener using a port listed in their ports. Just like the other modules, we list the services with a set of properties.

name

The name of the service or ecs.task.family you want to add the listener to

ports

The list of ports for which you would have a listener and want to use the ACM certificate. If the protocol was set to HTTP, which is the default for an ALB, it will automatically be set to HTTPS.

Compute Reference Syntax

This module is not strictly a module with the same settings as the other AWS resources. It is a module which allows users to create the EC2 compute resources necessary to run the ECS containers on top of EC2 workloads.

Note

At this point in time, there is no support for creating Capacity providers in CloudFormation, therefore we cannot implement that functionality.

Note

By default, everything is built to use an EC2 spot fleet, simply to save money on deployments for testing. A future version will allow running pure OnDemand or a hybrid mode.

Define settings in the configs section

At the moment, the settings you can change for the compute definition of your EC2 resources are defined in

x-configs -> spot_config

Example:

x-configs:
  spot_config:
    bid_price: 0.42
    use_spot: true
    spot_instance_types:
      m5a.xlarge:
        weight: 4
      m5a.2xlarge:
        weight: 8
      m5a.4xlarge:
        weight: 16

With the given AZs of your region, it will automatically create all the overrides to use the spot instances.

Note

This spot fleet comes with a set of predefined scaling policies, in order to further reduce cost or to allow scaling out based on EC2 metrics.