(λ (x) (create x) '(knowledge))

Building AMIs with Packer

Just enough to get started · January 13th, 2023

Sharing the love on this one: I had to figure this out in a pinch the other day and felt like it'd be neat to share with everyone. The issue at hand is that I needed a custom AWS AMI with the SSM agent built in, and to pre-seed the SSH keys before deployment. It took a little bit of setup, but it was ultimately a really simple process if you're familiar with what's going on under the hood. At the end of it, it's all just HCL and hand waving away the silly AWS web console for the sake of automation.

Terraform

This could easily be a single .tf file with how simple it is, but I like to break it up into separate files, just to give myself some sense of organization about the whole thing.

First up, vpc.tf. Inside of my AWS accounts I always delete the default VPC; I just don't find it necessary to keep. And I don't want Packer to accidentally affect any of my existing VPCs, so we're launching a dedicated one just for the sake of building AMIs. It doesn't need a lot: basic network egress, IPv6 enabled, defaults pretty much! Like I said, this is just a landing point.

The VPC is a simple one: it's got an egress gateway set up in it, and all the EC2 instances that Packer launches get a WAN IP assigned to them. So all this really does is set up the /16 CIDR and then launch a /24 subnet inside of that.

#Define AMI Build VPC
resource "aws_vpc" "AMI" {
  cidr_block = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support = true
  assign_generated_ipv6_cidr_block = true

  tags = {
    Production = true
    Purpose = "Using packer to build AMIs"
    Name = "AMI Build"
  }
}

#Allow egress from VPC
resource "aws_internet_gateway" "Egress" {
  vpc_id = aws_vpc.AMI.id

  tags = {
    Production = true
	Purpose = "Using packer to build AMIs"
	Name = "AMI Build"
  }
}

resource "aws_egress_only_internet_gateway" "Egress-Only" {
  vpc_id = aws_vpc.AMI.id
}

#Define public subnet
resource "aws_subnet" "Public" {
  vpc_id = aws_vpc.AMI.id
  cidr_block = "10.0.10.0/24"
  map_public_ip_on_launch = true
  availability_zone = "us-east-1a"

  ipv6_cidr_block = "${cidrsubnet(aws_vpc.AMI.ipv6_cidr_block, 8, 1)}"
  assign_ipv6_address_on_creation = true

  tags = {
    Production = true
	Purpose = "Using Packer to build AMIs"
    Name = "AMI Build"
  }
}
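
If the cidrsubnet() call looks like line noise, terraform console is a handy way to poke at it. With a made-up /56 block (AWS hands the VPC a /56 when you ask for a generated IPv6 block), 8 new bits and network number 1 carve out the second /64:

~|>> terraform console
> cidrsubnet("2600:1f18:abcd:1200::/56", 8, 1)
"2600:1f18:abcd:1201::/64"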

Next up is our routes.tf file. This sets up the route table for the public subnet so everything routes out of the VPC; we're just letting everything egress out of our internet gateway.

#####################
# Public Subnetting #
#####################
#Create public route table for VPC
resource "aws_default_route_table" "Public_Route_Table" {
  default_route_table_id = aws_vpc.AMI.main_route_table_id

  tags = {
    Name = "Public Route Table"
    Production = true
    Purpose = "Networking"
  }
}

#Egress through public route, routes through Egress gateway
resource "aws_route" "Public_Egress" {
  route_table_id = aws_default_route_table.Public_Route_Table.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id = aws_internet_gateway.Egress.id
}

#IPV6
resource "aws_route" "Public_Egress_v6" {
  route_table_id = aws_default_route_table.Public_Route_Table.id
  destination_ipv6_cidr_block = "::/0"
  gateway_id = aws_internet_gateway.Egress.id
}

#Associate public route with Public subnet (allow egress from Public network)
resource "aws_route_table_association" "Public_Route_Association" {
  subnet_id = aws_subnet.Public.id
  route_table_id = aws_default_route_table.Public_Route_Table.id
}

Finally we need some security groups, so we can, you know, be secure and only allow the traffic we want to leave the VPC. Well, that's what you'd do normally; this one just lets everything enter & exit the VPC! I don't really know what kind of traffic my EC2 instances are going to need when I build them, and since they last all of a few minutes, it's not a huge deal. If you want to restrict this you absolutely can (there's a sketch of a tighter variant after the block below); Packer will make sure SSH traffic can get through on port 22 for its builds.

resource "aws_security_group" "allow_all" {
  name = "Allow All"
  description = "Allow All"
  vpc_id = aws_vpc.AMI.id

  ingress {
    to_port = 0
    from_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    #ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    to_port = 0
    from_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
    #ipv6_cidr_blocks = ["::/0"]
  }
}
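
For what it's worth, if you do want to lock this down, a tighter variant is only a few lines more. This is just a sketch (the name and resource label are mine, not something Packer expects): SSH in, everything out.

#Tighter alternative (sketch): only SSH ingress, open egress
resource "aws_security_group" "packer_ssh_only" {
  name = "Packer SSH Only"
  description = "Allow SSH in for Packer builds, everything out"
  vpc_id = aws_vpc.AMI.id

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}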

Great, once you've got all of that together it should look a little like this.

-rw-r--r-- 1 durrendal durrendal 1073 Jan 11 20:05 routes.tf
-rw-r--r-- 1 durrendal durrendal 410 Jan 11 20:05 security.tf
-rw-r--r-- 1 durrendal durrendal 1097 Jan 11 20:08 vpc.tf

And you can launch the staging area like so:

#Install modules required to run the terraformer
terraform init

#See the changes the terraformer is going to make
terraform plan

#Actually apply them
terraform apply
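
And since this VPC only exists to give Packer somewhere to land, you can tear the whole thing back down when you're done building for the day:

#Tear the staging VPC back down
terraform destroy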

Packer

Now for the real point of this post. With the landing point for Packer defined, we can focus on building our AMI. Usually I just use the ones provided on the marketplace that are maintained by the distributions themselves, like the official Debian images. Those tend to work just fine, especially if you deploy them with AWS and then let Ansible handle the configuration. But if you want to leverage AWS Systems Manager Session Manager and not expose SSH to the WAN, then you'll need an AMI with the agent built in. There's probably one of those for Debian somewhere I suppose, and there's Amazon Linux 2 which comes with it built in, but this is an opportunity to learn something!

And this little HCL setup is what I came up with to build that image. All of the magic really happens inside of the inline shell call, because under the hood all Packer is really doing is deploying an EC2 instance into the VPC we defined earlier, running a shell script on it, and then packaging the entire EBS file system into an AMI. Super simple; we could literally configure a full server by hand and do this ourselves in the AWS web console, but this automates it all away and lets me do it from my Droid, with just Emacs and Packer!

Here's my lc-debian.pkr.hcl file, let's dig into it a little bit.

packer {
  required_plugins {
    amazon = {
      source = "github.com/hashicorp/amazon"
      version = ">= 1.1.6"
    }
  }
}

variable "pub_key" {
  type = string
  default = "broken-because-you-forgot-to-set-me.."
}

source "amazon-ebs" "debian" {
  ami_name = "lc-debian-hvm-x86_64-ebs"
  instance_type = "t3.small"
  region = "us-east-1"
  subnet_filter {
    filters = {
      "tag:Name" = "AMI Build"
    }
  }
  vpc_filter {
    filters = {
      "tag:Name" = "AMI Build"
    }
  }
  source_ami_filter {
    filters = {
      name = "debian-11-amd64-*"
      root-device-type = "ebs"
      virtualization-type = "hvm"
    }

    most_recent = true
    owners = ["903794441882"]
  }

  ssh_username = "admin"
}

build {
  sources = ["source.amazon-ebs.debian"]

  provisioner "shell" {
	inline = [
	  "DEBIAN_FRONTEND=noninteractive sudo apt-get update -y",
	  "DEBIAN_FRONTEND=noninteractive sudo apt-get install python3 wget -y",
	  "wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb -O /tmp/amazon-ssm-agent.deb >/dev/null",
	  "DEBIAN_FRONTEND=noninteractive sudo dpkg -i /tmp/amazon-ssm-agent.deb",
	  "sudo systemctl enable amazon-ssm-agent",
	  "echo '${var.pub_key}' | sudo tee -a /home/admin/.ssh/authorized_keys",
	  "exit 0"
	]
  }
}

You can find some really solid tutorials for Packer on Hashicorp's website, like the one that I used. Hashicorp makes seriously great documentation. The only thing that I found a little bit muddled was finding the right AMI filter values so I could use the official Debian AMIs as my base.

Fortunately, you can get all of this information through the AWS CLI; it's really simple to pull the name and owner values with a one-liner. For example, here's the owner ID I used above. If you have a gist of what AMI you want to use, you can sort of fuzzy find your way around this; just searching for debian*, or something like it, returns an absolutely massive amount of AMIs with various tweaks and pre-installations. Any of them are valid really.

~|>> aws ec2 describe-images --region us-east-1 --filters Name=name,Values=debian* | jq '.Images[]| .Name, .ImageId, .OwnerId'
...
"debian-11-amd64-daily-20230113-1259"
"ami-0bfd89d636cb00a69"
"903794441882"

Armed with that you just need to format the source_ami_filter to glob the back half of the name; for example, debian-11-amd64-* paired with most_recent = true gives me the latest Debian AMI as the build base.
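
If you want to sanity-check what most_recent = true is actually going to grab, the CLI can do roughly the same thing by sorting on creation date; something along these lines should spit out the newest matching image:

~|>> aws ec2 describe-images --region us-east-1 --owners 903794441882 \
  --filters "Name=name,Values=debian-11-amd64-*" \
  --query 'sort_by(Images, &CreationDate)[-1].[Name,ImageId]'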

You probably also noticed the var.pub_key in the Packer template; that just lets you provide a separate whatever.pkrvars.hcl file for your variables. I use it to control which public key gets seeded into authorized_keys. The syntax looks like this:

pub_key = "ssh-rsa key-material-goes-here you@hostname"

Really simple stuff, right? Once you've got it all together you just need to validate, and then build the AMI like so:

packer validate lc-debian.pkr.hcl
packer build --var-file=nsm.pkrvars.hcl lc-debian.pkr.hcl
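
If you'd rather skip the vars file for a one-off build, Packer will also take the value directly on the command line with -var:

packer build -var 'pub_key=ssh-rsa key-material-goes-here you@hostname' lc-debian.pkr.hcl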

One more thing

The last thing you've got to do to tie these new AMI instances into SSM, since that's kind of the entire point of me doing this anyways, is to add an IAM role to your EC2 instances. This will let the ssm-agent that gets installed via Packer communicate with SSM, and allow you to initialize SSH sessions with them, or whatever else you may want to do.

Throw this into something like ssm-role.tf

resource "aws_iam_role" "ssm-managed" {
  name = "ssm-managed"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_instance_profile" "ssm-managed" {
  name = "ssm-managed"
  role = aws_iam_role.ssm-managed.name
}

resource "aws_iam_role_policy_attachment" "ssm-managed" {
  role = aws_iam_role.ssm-managed.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

And then add the instance profile to each of the EC2 instances that you want to use; make sure that you use the AMI created earlier with the SSM agent in it.

iam_instance_profile = aws_iam_instance_profile.ssm-managed.name
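
For context, that line just lives inside whatever aws_instance resources you're standing up. Here's a rough sketch of how it might all tie together; the data source and the resource label are purely illustrative, filtering on the AMI name we gave Packer:

#Look up the AMI baked by Packer above
data "aws_ami" "lc_debian" {
  most_recent = true
  owners = ["self"]

  filter {
    name = "name"
    values = ["lc-debian-hvm-x86_64-ebs"]
  }
}

#Attach the instance profile so the ssm-agent can phone home
resource "aws_instance" "swankynewserver" {
  ami = data.aws_ami.lc_debian.id
  instance_type = "t3.small"
  iam_instance_profile = aws_iam_instance_profile.ssm-managed.name
}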

And after that you can configure your SSH config like this to get a nice comfy SSH setup. The example below will tunnel your SSH session through SSM, and expose port 80 on the server as 8080 on your machine. You can even do multiple local forwards like this. On the surface it seems a little weird, but this setup has been rock solid for me.

Host swankynewserver
  HostName i-abcdefg1234678
  IdentityFile ~/.ssh/id_something
  User durrendal
  Port 22
  LocalForward 8080 127.0.0.1:80
  ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
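
With that in place, connecting is just plain old ssh, and the local forward means the server's port 80 shows up on your machine (assuming something is actually listening on 80 over there):

ssh swankynewserver
#then from another terminal, hit the forwarded port
curl http://127.0.0.1:8080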

Just know that you'll need to install the AWS CLI and the session-manager-plugin to get all of this to work. But I suspect if you're using Packer on AWS you kind of already knew that.

Bio

(defparameter *Will_Sinatra* '((Age . 31) (Occupation . DevOps Engineer) (FOSS-Dev . true) (Locale . Maine) (Languages . ("Lisp" "Fennel" "Lua" "Go" "Nim")) (Certs . ("LFCS"))))

"Very little indeed is needed to live a happy life." - Aurelius