In this blog post, we describe how to set up a cross-account continuous integration and continuous delivery (CI/CD) pipeline on AWS. A CI/CD pipeline helps you automate steps in your software delivery process, such as initiating builds, storing artifacts, running integration tests, and deploying to an Amazon ECS service, a Lambda function, or other AWS services.
We use AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change, alongside AWS CodeBuild for integration testing and for deploying to AWS services that CodePipeline does not yet support directly.
We use CodePipeline to orchestrate each step in the release process, such as getting the source code from GitHub or CodeCommit, running integration tests, deploying to the DEV environment, and requiring manual approval before deploying to higher environments such as QA and PROD.
One thing to mention is that most organizations create multiple AWS accounts because they provide the highest level of resource and security isolation. This introduces the challenge of setting up cross-account access between different AWS services. To manage multiple AWS accounts we use the following accounts: a shared services (automation) account that hosts the pipeline and its tooling, and dev, qa, and prod accounts that host the workloads.
One note to mention: we avoid hard-coded secret values and use AWS Secrets Manager as much as possible to retrieve passwords, API keys, and other sensitive data.
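As a small illustration, here is how a Terraform configuration in the pipeline account could look up an existing secret instead of hard coding it. The secret name cicd/github-token is only a placeholder for this example, not something created elsewhere in this post.

data "aws_secretsmanager_secret" "github_token" {
  name = "cicd/github-token" # placeholder secret name
}

data "aws_secretsmanager_secret_version" "github_token" {
  secret_id = data.aws_secretsmanager_secret.github_token.id
}

# The decrypted value can then be referenced as
# data.aws_secretsmanager_secret_version.github_token.secret_string
# wherever a token or password is needed, instead of a literal value.

Keep in mind that values read this way still end up in the Terraform state, so the state backend itself needs to be protected accordingly.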
Before you can use CodePipeline to update your ECS service in the target accounts, a set of permissions, including resource-based policies and cross-account roles, needs to be set up first. After that, CodePipeline supports updating ECS services in the local or target accounts. Note that you can use CodeDeploy for more advanced deployment types, such as blue/green deployments, but for now we are using the simple CodePipeline ECS deployment.
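To make the pipeline layout concrete, the following is a minimal sketch of the CodePipeline resource in the shared services account. It assumes a service role, an artifact bucket, and a CodeCommit repository that are not shown in this post; all names and the account ID in the deploy action are placeholders.

resource "aws_codepipeline" "this" {
  name     = "cross-account-pipeline"
  role_arn = aws_iam_role.codepipeline.arn # hypothetical CodePipeline service role

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket # hypothetical artifact bucket
    type     = "S3"

    encryption_key {
      id   = aws_kms_key.this.arn # the CMK described later in this post
      type = "KMS"
    }
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeCommit"
      version          = "1"
      output_artifacts = ["source"]

      configuration = {
        RepositoryName = "my-app" # placeholder repository
        BranchName     = "main"
      }
    }
  }

  stage {
    name = "DeployDev"

    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "ECS"
      version         = "1"
      input_artifacts = ["source"]
      # Cross-account role in the dev account that CodePipeline assumes to
      # update the ECS service (placeholder account id and role name).
      role_arn        = "arn:aws:iam::111111111111:role/codepipeline-ecs-deploy"

      configuration = {
        ClusterName = "app-cluster"
        ServiceName = "app-service"
        FileName    = "imagedefinitions.json"
      }
    }
  }
}

In a real pipeline a build stage (CodeBuild) would normally sit between Source and DeployDev to produce the imagedefinitions.json file the ECS deploy action consumes, and the QA and PROD stages would add a Manual approval action following the same pattern.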
When using CodePipeline, or any other AWS service, you need to make sure that access to the target services and accounts is permitted, and vice versa: the target accounts need to access AWS resources in the automation (shared services) account. We achieve this access using cross-account roles and resource-based policies.
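For example, the role that CodePipeline assumes in each target account (dev, qa, prod) could look like the following sketch. The account ID 222222222222 stands for the shared services account, and the action list is deliberately illustrative; scope it down to what your deploy action actually needs.

data "aws_iam_policy_document" "assume_from_shared_services" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::222222222222:root"] # shared services account
    }
  }
}

resource "aws_iam_role" "codepipeline_cross_account" {
  name               = "codepipeline-ecs-deploy"
  assume_role_policy = data.aws_iam_policy_document.assume_from_shared_services.json
}

data "aws_iam_policy_document" "deploy" {
  # Update the ECS service and read the artifacts, images and key that live
  # in the shared services account.
  statement {
    effect = "Allow"
    actions = [
      "ecs:*",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
      "ecr:BatchCheckLayerAvailability",
      "s3:GetObject",
      "s3:GetBucketLocation",
      "kms:Decrypt",
      "iam:PassRole",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "deploy" {
  role   = aws_iam_role.codepipeline_cross_account.id
  policy = data.aws_iam_policy_document.deploy.json
}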
CodePipeline uses an S3 bucket as an artifact store and a KMS key to encrypt these artifacts. To let other accounts and their services access this bucket, we first need to create a customer managed key (CMK), because the default AWS managed KMS key can't be used from other accounts. The CMK's key policy is a resource-based policy, so we can grant external principals (roles or the account root of the target accounts), in our case the dev, qa, and prod accounts, permission to use the key. Besides that, we need an S3 bucket policy (also a resource-based policy) to allow access from those external principals. This way the dev, qa, and prod accounts can read objects from the artifact bucket and use the CMK to decrypt them.
We follow the same approach for ECR: we update its resource-based policy to allow the ECS services in the dev, qa, and prod accounts to pull the image and use it in their container definitions.
In the shared services account, update the ECR repository policy to allow access from the dev, qa, and prod accounts.
variable "ecr_policy_identifiers" {
description = "The policy identifiers for ECR policy."
type = list(string)
default = []
}
data "aws_iam_policy_document" "ecr" {
statement {
sid = "ecr"
effect = "Allow"
actions = [
"ecr:*"
]
principals {
type = "AWS"
identifiers = var.ecr_policy_identifiers
}
}
}
resource "aws_ecr_repository_policy" "this" {
repository = aws_ecr_repository.this.name
policy = data.aws_iam_policy_document.ecr.json
}
In the shared services account, update the KMS key policy to allow access from the dev, qa, and prod accounts.
variable "kms_policy_identifiers" {
description = "The policy identifiers for kms policy."
type = list(string)
default = []
}
data "aws_iam_policy_document" "kms" {
statement {
sid = "kms"
effect = "Allow"
actions = [
"kms:*"
]
principals {
type = "AWS"
identifiers = var.kms_policy_identifiers
}
}
}
resource "aws_kms_key" "this" {
...
policy = data.aws_iam_policy_document.kms.json
}
In the shared services account, update the S3 bucket policy to allow access from the dev, qa, and prod accounts, and make sure the bucket uses the CMK you created as its default encryption key.
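A sketch of that bucket policy, following the same variable-driven pattern as above, could look like this. The aws_s3_bucket.artifacts resource is assumed to exist elsewhere in your configuration.

variable "s3_policy_identifiers" {
  description = "Principals (account or role ARNs) allowed to read pipeline artifacts."
  type        = list(string)
  default     = []
}

data "aws_iam_policy_document" "artifacts" {
  # Let the dev, qa and prod accounts read artifacts from the bucket.
  statement {
    sid    = "CrossAccountRead"
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:GetBucketLocation",
      "s3:ListBucket",
    ]
    resources = [
      aws_s3_bucket.artifacts.arn,
      "${aws_s3_bucket.artifacts.arn}/*",
    ]

    principals {
      type        = "AWS"
      identifiers = var.s3_policy_identifiers
    }
  }
}

resource "aws_s3_bucket_policy" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id
  policy = data.aws_iam_policy_document.artifacts.json
}

# Default server-side encryption with the CMK, so artifacts written by the
# pipeline are encrypted with the cross-account key by default.
resource "aws_s3_bucket_server_side_encryption_configuration" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.this.arn
    }
  }
}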