Greymeister.net

Infrastructure 2021 Edition

For the last few years I have run a CentOS 6.5 machine at linode to host my site. It was previously set up to host my url shortener, which I described here and ultimately sunsetted last year. I still host the blog and the shell of my former url shortener. I’ll briefly cover how that was set up and then describe what I’m using now.

I used Ansible at a previous job and found it much less complicated and annoying than the previous options I had tried. Basically, we used it to set up a base machine image from an AMI that had some initial configuration done. We would then ssh into the host and perform tasks depending on the playbooks and roles specified in the inventory. The process made setting up a specific type of machine consistent, even though it still meant running over ssh against groups of machines at a time. I had begun investigating “baking images” the way Netflix described, and that sounded good, but I never got much further with it.

Time went by and I left that job, but for my personal site I stuck with essentially the same idea, of course not using AWS, because why would I contribute to that human catastrophe. I had initially used linode to host the server side of my ill-fated iPhone application, as described here. A co-worker had recommended linode as an alternative, and so far I have had no reason to complain. But you can’t exactly utilize AMI-centric techniques on their platform. Enter Packer.

Packer allows you to add a layer of abstraction on top of different hosting providers. They support the mollusk eating psycho, vagrant, VMWare, docker, linode, and many more. Normally, adding an extra layer like this is a dumb idea, because it’s just extra complexity for things you could do directly. What I like about most of the tools by HashiCorp is that they let you focus on common functionality without getting wrapped up in all the particular idiosyncrasies of the individual providers. There isn’t a well-defined standard that all these providers support, and if one did exist in the current ecosystem, it would just be dominated by whatever the largest players wanted, so HashiCorp’s tools give a close approximation of one. I think of it very similarly to scraping webpages for sites that don’t provide APIs, or that provide broken APIs withholding features the platforms feel are a competitive advantage of their walled gardens.

With packer, I can create a docker image, a linode image, and any other image I might need. Good, so now I’ve got an answer to the problem I had 7 years ago. Here is the first example of where the abstraction layer provides an advantage. If I wanted to run my server in docker, I could use any number of docker-specific options, but that doesn’t really get me what I want, unless what I want is to run a docker container on my web host. I do not. This is where packer’s concept of provisioners comes into play. I can still use Ansible as before, but now I can target a docker image, a linode image, or any of the other options I mentioned. For example, here is what building an nginx server using docker and Ansible as a provisioner might look like:

nginx_packer_template.json
{
    "builders": [
        {
            "changes": [
                "ENTRYPOINT [\"docker-entrypoint.sh\"]",
                "EXPOSE 80",
                "CMD [\"nginx\", \"-g\", \"daemon off;\"]"
            ],
            "commit": true,
            "image": "debian:10",
            "type": "docker"
        }
    ],
    "post-processors": [
        {
            "repository": "greymeister/debian10-test",
            "tags": [
                "latest"
            ],
            "type": "docker-tag"
        },
        {
            "type": "docker-save",
            "path": "test.tar"
        }
    ],
    "provisioners": [
        {
            "script": "../scripts/test.sh",
            "type": "shell"
        },
        {
            "playbook_file": "../../ansible/test.yml",
            "type": "ansible",
            "user": "root"
        }
    ]
}

There are a couple of things going on here. First, I’m using docker for my builder with my docker-specific options. Some of it will look familiar because it’s the same things you would put in a Dockerfile. In post-processors I have specified docker-tag and docker-save, to both tag my image in my local docker and generate a tarball for the halibut. The last section is provisioners, of which I’ve selected two: an arbitrary shell script and an Ansible playbook. You can see the relative paths; I have all of this in one git repository, which makes changing things much easier for me. The layout is something like this:
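Actually running the build is just a matter of pointing packer at the template. Something like the following should do it, assuming packer is installed and a local docker daemon is running (the names match the template above):

```shell
# Build with the docker builder; the post-processors then tag the
# image locally and write test.tar next to the template.
packer build nginx_packer_template.json

# Afterwards the tagged image can be run directly:
docker run --rm -p 8080:80 greymeister/debian10-test:latest
```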

directory_structure.txt
|-- Infrastructure
  |-- ansible
    |-- roles
      |-- debian10
      |-- nginx
  |-- packer
    |-- docker
    |-- linode
    |-- scripts
      |-- test.sh
  |-- terraform
    |-- linode
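The templates under packer/linode differ mainly in the builder block, while the provisioners stay the same. I won’t reproduce my real template, but a linode builder stanza looks roughly like this (the token would come from a user variable, and the region, type, and label values here are just illustrative):

```json
{
    "builders": [
        {
            "type": "linode",
            "linode_token": "{{user `linode_token`}}",
            "image": "linode/debian10",
            "region": "us-east",
            "instance_type": "g6-nanode-1",
            "image_label": "greymeister-nginx",
            "ssh_username": "root"
        }
    ]
}
```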

It’s pretty handy to be able to make all of these changes at once. My playbooks are pretty simple too:

test.yml
---
- name: Provision Python
  hosts: all
  gather_facts: no
  tasks:
    - name: Bootstrap python
      raw: test -e /usr/bin/python || (apt-get -y update && apt-get install -y python-minimal)

- name: Provision Debian Utils
  hosts: all
  tasks:
    - name: Add debian role
      import_role:
        name: debian10

- name: Provision nginx
  hosts: all

  tasks:
    - name: Ensure nginx configured with role
      import_role:
        name: nginx

- name: Container cleanup
  hosts: all
  gather_facts: no
  tasks:
    - name: Remove python
      raw: apt-get purge -y python-minimal && apt-get autoremove -y

    - name: Remove apt lists
      raw: rm -rf /var/lib/apt/lists/*
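The roles themselves don’t need to be anything fancy either. A minimal nginx role, in roles/nginx/tasks/main.yml, might be as little as the following (the site config file name is just a placeholder):

```yaml
---
- name: Install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes

- name: Copy site configuration
  copy:
    src: greymeister.conf
    dest: /etc/nginx/conf.d/greymeister.conf
```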

I was very much inspired by this post on how to use Ansible with packer. It’s been pretty straightforward. This site is now hosted on a machine built with this setup, and I plan on trying to move some of my older VMs into the same scheme. My next challenge will be targeting something other than docker, because I don’t want to have to deploy that way for all of my local services, but that’s still a TODO for now.