Terraform Pipeline Part 2 - Now with more er
Wait, it’s barely been three weeks and I’m already posting again? What can I say, this has been a very fun project to work on. This time I will be expanding on (and correcting) the Terraform pipeline that I started in part one. The goal here is to add a lot more polish to this pipeline and actually make it more useful. I will also actually show and explain the Terraform template this time, since in the first post I apparently omitted that entire part… Really professional there, buddy.
I am sure you have noticed (or not, if you are new to the site. Welcome!) the new look of my blog. I found a friendlier tool to build it with: Hugo. I think the readability is a lot better, and it is less of a hassle to test and update what I am working on.
So, what’s the “er”? That, my friends, would be HashiCorp’s Packer and Docker Inc’s Docker. Packer, like every other HashiCorp product, is built with IaC at its core. It is used to build machine images using a template design similar to Terraform’s. Docker is… containers. What are containers, you ask? Read here. The quick and dirty version: the Docker container will serve as the app that is being deployed with our Terraform pipeline. It will give us something more substantial than some random EC2 instance.
Refined Setup
To work with this revision of the pipeline, the original requirements still apply, plus the following:
- I didn’t specify it in the first post, but make sure you are working out of us-east-1 in AWS, as I have not made this multi-regional yet.
- EC2 Keypair: Take note of the name of the keypair, as you will need it later.
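(If you would rather handle that step as code too, Terraform can register a keypair from an existing public key. This is entirely optional and not part of the pipeline; it is just a sketch, and the key name and file path are placeholders of mine.)

resource "aws_key_pair" "pipeline" {
  key_name   = "terraform-pipeline-key"    # placeholder name; this is what you would give SSHKeyName later
  public_key = file("~/.ssh/id_rsa.pub")   # path to an existing public key on your machine
}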
If you’ve been following along from part one, your GitHub access token should still be configured in AWS Secrets Manager and you’ll have my GitHub repo forked with our infrastructure (and now application) code. If you purged those things, head back over to that post to see what is required.
Unleash the Kraken!
Deploy CloudFormation Template - rev2
- In AWS, go to CloudFormation, under Management & Governance, and create a new stack. Use this S3 URL to deploy the new revision of the CF template: https://vmadbro-cf.s3.amazonaws.com/main-rev2.yaml
You will notice a few more parameters added to this template:
- GitHubRepo: The name of the GitHub repo with the Application and Infrastructure Code. Defaults to terraRepo.
- SSHKeyName: The name of the Keypair you created earlier.
- TemplateRevision: The revision of the CloudFormation template. It defaults to rev2.
- After the parameters have been filled in, click Next.
- Click Next on the Configure stack options page. On the last page, scroll to the bottom and check the acknowledgment box, since we will be building IAM roles with this stack. Click Create stack.
Much success!
Once again we have a successful deployment! Doesn’t work…? What did you break? OK, well, it should work. If it doesn’t, let me know, as I would love to see what I missed in my configuration and fix it. But for now, I am going to continue with the assumption that it works. So what is different about our pipeline this time around? For starters, there is a very tangible way to see whether the deployment really works or not. This time, the pipeline is not just deploying any ole EC2 instance. Now it is deploying a custom-built AMI with a Docker-based website inside of it. Let’s dive in.
Check out this engine
The CloudFormation template did not change very much from rev1, other than the additional parameters, so I will not dive into those changes here. Instead, most of the changes are in the CodeBuild buildspec and the Terraform configuration template. I’ll also go over the new code in the Packer template and the Docker app.
Starting chronologically:
Buildspec
env:
  variables:
    STATE_BUCKET: "pipeline"
    LOCKDB_TABLE: "table"
    REV: "revision"
    IMAGE_NAME: "name"
    SSH_KEY: "key_name"
Notes:
- Additional variables have been added to pass in the revision, the name of the stack (which will set the image and instance names), and the name of the ssh_key (which will set the keypair for the new instance).
phases:
  install:
    commands:
      - echo Install Terraform
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip
      - unzip terraform_0.12.28_linux_amd64.zip
      - mv terraform /usr/local/bin/
      - echo Install Packer
      - wget https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_linux_amd64.zip
      - unzip packer_1.6.0_linux_amd64.zip
      - mv packer /usr/local/bin/
Notes:
- I’ve added the Packer installation to the install phase.
  build:
    commands:
      - echo Change directory and run Packer to build AMI image
      - cd $REV/infrastructure/packer
      - packer build -var "image_name=$IMAGE_NAME" build.pkr.hcl
      - echo Change directory and Run terraform init and apply to build Infrastructure
      - cd ../terraform
      - terraform init -backend-config="bucket=$STATE_BUCKET" -backend-config="dynamodb_table=$LOCKDB_TABLE"
      - terraform apply -var="image_name=$IMAGE_NAME" -var="ssh_key=$SSH_KEY" -auto-approve
Notes:
- During the build stage, I added the commands to execute the Packer build and to pass the image name as a variable to both Packer and Terraform. Terraform also gets the name of the SSH key passed in; the matching variable declarations are sketched below.
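For those -var flags to land anywhere, the Terraform configuration needs matching variable declarations. The real blocks live in the repo; a minimal sketch would look something like this (the descriptions are mine):

variable "image_name" {
  description = "Name of the Packer-built AMI, also used to name the instance"
  type        = string
}

variable "ssh_key" {
  description = "Name of the existing EC2 keypair to attach to the instance"
  type        = string
}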
Packer
Next up is the Packer build template. Packer builds machine images from a base source image and uses provisioners to build a customized image.
source "amazon-ebs" "build1" {
ami_name = "${var.image_name}"
region = "us-east-1"
instance_type = "t2.micro"
source_ami = "ami-08f3d892de259504d"
force_deregister = true
force_delete_snapshot = true
ssh_username = "ec2-user"
}
Notes:
- In the source block I provided the essentials required to build the custom AMI. Change the source_ami if you want a different OS base, or the instance_type if you want a larger EC2 instance.
- force_deregister and force_delete_snapshot keep future pipeline executions from failing because the custom AMI already exists from a previous run.
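Since the source references var.image_name (passed in from the buildspec), build.pkr.hcl also needs a variable declaration for it. A minimal sketch; the default value here is just a placeholder of mine:

variable "image_name" {
  type    = string
  default = "pipeline-custom-ami"   # placeholder; the buildspec overrides this with -var
}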
build {
  sources = [
    "source.amazon-ebs.build1"
  ]

  provisioner "file" {
    source      = "../../app/"
    destination = "/tmp"
  }

  provisioner "shell" {
    inline = [
      "sudo yum install -y docker",
      "sudo systemctl start docker",
      "sudo docker build /tmp -t vmadbro/apache:1.0",
      "sudo systemctl enable docker",
      "sudo docker run -d --name Apache --restart unless-stopped -p 80:80 vmadbro/apache:1.0"
    ]
  }
}
Notes:
- The build source is the one that was specified earlier in the template.
- Two provisioners are used here. One copies the files from the app source into /tmp. The next installs Docker, builds the Docker image (from the files in /tmp), enables Docker at startup, and starts the container. Since I am creating an AMI, I added the --restart unless-stopped parameter to ensure the container starts when the new instance is deployed.
Docker
The Docker “app” is ridiculously simple. It is just an Apache web server that we inject our index.html file into. I am still really early in my Docker learning, so I wasn’t going to get too fancy yet. It mainly serves the purpose of being something that is deployed with our pipeline.
FROM httpd:2.4
COPY index.html /usr/local/apache2/htdocs/
Terraform
Now I get to talk in great detail about the one template that I totally forgot to show in the first part…
data "aws_ami" "packer" {
owners = ["self"]
filter {
name = "name"
values = [var.image_name]
}
}
data "aws_vpc" "default" {
default = true
}
Notes:
- Here I set a data source for the Packer AMI that was created earlier, and one for the default VPC, which will be used with the security group resource later. New in rev2
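With force_deregister in the Packer template there should only ever be one AMI with that name, but if you ever turn that off, this lookup can return more than one match and fail. A defensive tweak (my suggestion, not in the repo) is to add most_recent:

data "aws_ami" "packer" {
  owners      = ["self"]
  most_recent = true   # pick the newest image if more than one matches the name

  filter {
    name   = "name"
    values = [var.image_name]
  }
}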
terraform {
  backend "s3" {
    bucket         = "handled-by-pipline"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "handled-by-pipline"
  }
}
Notes:
- I specified an S3 backend in the template, but provide the bucket and lock table values during the terraform init call in the buildspec file.
resource "aws_security_group" "allow_http_ssh" {
name = "${var.image_name}-allow_http_ssh"
description = " Allow HTTP and SSH traffic to this instance from ANYWHERE"
vpc_id = data.aws_vpc.default.id
ingress {
description = "Custom HTTP from WORLD"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "Custom SSH from WORLD"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "Allow HTTP/SHH"
}
}
Notes:
- I created this custom security group in the default VPC to allow SSH and HTTP access from anywhere. (Yes, it is not a best practice to have SSH open to the world, so you can replace the cidr_blocks with your personal CIDR to limit its exposure; see the sketch below.) New in rev2
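If you do want to lock SSH down, one way to do it (a sketch of mine, not in the repo) is to feed your own CIDR in through a variable and swap it into the SSH ingress block:

variable "ssh_allowed_cidr" {
  description = "CIDR allowed to SSH to the instance, e.g. 203.0.113.7/32"
  type        = string
  default     = "0.0.0.0/0"   # placeholder; override with your own IP/32
}

# ...then replace the SSH ingress block above with:
ingress {
  description = "Custom SSH from a trusted CIDR only"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = [var.ssh_allowed_cidr]
}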
resource "aws_instance" "remote"{
ami = data.aws_ami.packer.id
instance_type = "t2.micro"
key_name = var.ssh_key
tags = {
Name = var.image_name
}
vpc_security_group_ids = [aws_security_group.allow_http_ssh.id]
}
Notes:
- This is where Terraform provisions the EC2 instance from the custom, Packer-built AMI.
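One small addition I would suggest (my sketch, not required for anything to work) is an output block, so the pipeline logs hand you the website address instead of making you dig it out of the EC2 console:

output "public_dns" {
  description = "Public DNS of the instance; browse to http://<this value> to see the site"
  value       = aws_instance.remote.public_dns
}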
This thing will do 0-60 in 2.5 seconds flat!
I think I covered everything this time around. Who knows… eventually all of these words become a blur and coherent sentences are a challenge. All of this awesome code will yield a nice little Docker website that you can visit as verification of success. The website address will be http://<EC2 Public DNS> (Ex: http://ec2-01-234-567-89.compute-1.amazonaws.com). It is by no means meant to be production ready, but it is an example of what one can do with a bit of “code”. IaC does take time to learn and perfect, but the beauty is in its consistency and eventual ease of change. Instead of rebuilding something through some crazy orchestrator or GUI, you have a few lines of text that can change an entire infrastructure.
Tinker around and put away your toys
To really see the continuous deployment side of this pipeline, make changes to the code in your GitHub repo. When changes are committed, they trigger a webhook to CodePipeline and rebuild your entire app. Experiment a bit with it to see how easy it is to make changes to the environment. It can be easy to break, but that is the fun of learning.
As before, make sure you take care to clean up anything you don’t want to pay for. Most of this will be relatively cheap (or free, depending on what you use), but I don’t want anyone to have any unexpected bills. You will have to empty the two S3 buckets this creates before you can let CloudFormation delete the stack. After that, you will have your EC2 instance, your Packer AMI and its associated snapshot, and the GitHub access token secret to take care of.
The end… is a ways off
I hope this post expands well on what I started in the first one. I swear I think it is complete this time, and I hope that you found it informative and fun to mess around with. The quick turnaround for this post was due to the fun I had experimenting and building this thing out. I am far from done with it and will keep adding functionality. My next steps will be to add some CI to our CD, and maybe some testing and validation. I make no promises, as my attention span is… oh, a piece of candy… where was I? If you have made it this far, you are starting to get used to my chaotic nature.
As always, I appreciate you taking the time to read, and thank you for any feedback you can give. Feel free to reach out to me on LinkedIn or Twitter @GregMadro.