(λ (x) (create x) '(knowledge))

Overlay Deployments for SaltStack

or building exception systems with Ansible · February 18th, 2024

Let's take a look at a real world problem and build a simple solution for it. We have a complex configuration management system that has been built up over several years. The repository hosting it was devised around a monolithic, one-set-of-rules-fits-all design. That rigid view of the world worked initially, but as everyone expects, it could not scale. Suddenly there are exceptions to the monolith being hand written on the salt masters themselves, and while that was fine for the first couple of exceptions, the whole system falls apart rapidly. What happens when the monolithic configuration gets re-applied to the Salt servers? There go all of those nice hand written exceptions. I sure do hope you were tracking them somewhere, because they weren't in the git repo where they belonged.

This sort of thing happens all the time, and when you're working with a small team or first adopting a new technology, it is all too tempting to get stuck in this sort of situation and then have to dig your way out of it. With enough forethought you can, and should, build solutions around these problems before they ever occur! But sometimes that isn't what happens, and you need to dig yourself out of the technical debt. Fortunately, we're all a smart sort around here, and we have tools to work with that will make this issue not only easy to fix, but also easy to maintain going forward.

First we need a lab. Here's a couple of Verkos scripts to set up a Salt Master and Minions. We're going to quickly build a master and a couple of minions, all of which will be running Alpine Linux. I personally prefer to use LXD containers for this sort of prototyping, but VMs or bare metal systems would be fine as well.


      # Launch our testing containers
      lxc launch images:alpine/edge master
      lxc launch images:alpine/edge minion1
      lxc launch images:alpine/edge minion2

      # Push the Verkos script and configure our Master
      lxc file push setup-salt-master.sh master/root/
      lxc exec master -- sh -c "./setup-salt-master.sh"

      # Push the Verkos script and configure our Minions
      for c in minion1 minion2; do
         lxc file push setup-salt-minion.sh $c/root/
         lxc exec $c -- sh -c "sed -i 's/x.x.x.x/192.168.88.123/' setup-salt-minion.sh && ./setup-salt-minion.sh"
      done
  
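The Verkos scripts themselves aren't reproduced here, but a minimal sketch of what they'd need to do on Alpine looks something like the following, assuming the distro's salt-master and salt-minion packages and OpenRC (the real scripts may well do more):


      #!/bin/sh
      # setup-salt-master.sh (sketch): install the Salt master and start it at boot
      apk add salt-master
      rc-update add salt-master default
      rc-service salt-master start


      #!/bin/sh
      # setup-salt-minion.sh (sketch): install the minion and point it at the
      # master; x.x.x.x gets rewritten by the sed call in the loop above
      apk add salt-minion
      echo "master: x.x.x.x" > /etc/salt/minion.d/master.conf
      rc-update add salt-minion default
      rc-service salt-minion start
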

Excellent, with our lab set up we can start our tinkering. Normally we would use file directories to split environments for SaltStack, but this project is a little unique. Instead of having a single master, or even a cluster of masters, there are several different small masters that each perform a similar but distinct purpose. For that reason, it is better to have a core corpus of state files that act as our defaults and get applied everywhere, and then environment based exceptions layered over them.
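
For contrast, the conventional approach would be to declare separate environments in the master's file_roots. A sketch of the layout we're deliberately not using here, with illustrative environment names:


# /etc/salt/master
file_roots:
  base:
    - /srv/salt/base
  webservers:
    - /srv/salt/webservers
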

Let's throw together a few salt states to test with. We'll create a baseline directory and an exceptions directory; these will both get synced directly into /srv/salt. We can use nginx as our test case, since it's an easy service to work with.

Our simplified baseline will perform a single function: set up some tools we expect to always have on our systems. This is the starting point; we assume that every system we deploy with one of these masters will have this. I like the idea of an Alpine package build system being a default for me specifically, so that's what our default states are set up to do. All of our systems will be packaging capable!


baseline_packages:
  pkg.installed:
    - pkgs:
        - tmux
        - htop
        - mg
        - git
        - make
        - fennel5.3
        - abuild
  

And then a couple of simple states to check information about our systems. I always want my default to never have nginx installed. These are package build systems; why would they need nginx?


nginx:
  pkg.removed
  


Get Ip:
  cmd.run:
    - name: 'ip addr'
  
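Once these land in /srv/salt on the master (we'll sync them over shortly), either check can be run ad hoc against a minion, something along these lines:


salt:~# salt 'minion1' state.apply checks.network_info
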

Next we need to define exceptions for our web servers. Obviously some of these baseline states conflict with that purpose. Hell, why would I try to build packages on a web server? Let's define a completely different base package state.


baseline_packages:
  pkg.installed:
    - pkgs:
        - tmux
        - htop
        - mg
        - nginx
        - curl
  

And I need a state to manage the nginx service, which should be running; the default doesn't even touch services.


nginx:
  service.running:
    - enable: true
  
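If you'd rather spell the dependency out, Salt's requisites can tie the service to whichever state installs the package, referenced by its ID. A sketch, assuming the exception packages.sls above is what pulls in nginx:


nginx:
  service.running:
    - enable: true
    - require:
      - pkg: baseline_packages
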

Lastly, we need a way to validate that our exception states are actually applied. We can use this state to confirm that the default packages are never applied.


baseline_packages:
  pkg.removed:
    - pkgs:
        - git
        - make
        - fennel5.3
        - abuild
  

And it doesn't hurt to have an easy way to check that nginx service.


Check Http Response:
  cmd.run:
    - name: 'curl http://127.0.0.1'
  

Now we should have a file structure like this inside of our Salt repo. We have a defined baseline configuration. We also have a series of exceptions that either conflict in both path and name with the baseline, or add states that do not exist in it. Further, there are states in the baseline that do not exist in the exceptions environment.


Salt|>> tree
.
├── baseline
│   ├── checks
│   │   ├── network_info.sls
│   │   └── not_installed.sls
│   └── packages.sls
└── exceptions
    ├── checks
    │   ├── nginx.sls
    │   └── not_installed.sls
    ├── packages.sls
    └── services.sls
  
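The overlay itself is nothing more exotic than rsync's copy semantics: push both trees into the same destination, exceptions second, and any file sharing a path and name gets overwritten while everything else is merged. In plain rsync terms (hostname illustrative):


rsync -a baseline/ salt:/srv/salt/      # lay down the defaults
rsync -a exceptions/ salt:/srv/salt/    # colliding paths win, the rest merges in
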

Now for this system to work we need to apply the baseline, and then apply our exceptions over it, all on the remote server directly. Let's use Ansible for this. First we'll import some secrets and set our become password so we can escalate privileges, then verify that we have rsync installed on our Salt Master. Finally, using rsync we'll push configuration into /srv/salt: first the baseline, then the exceptions.


---
- hosts: "{{ host | default('salt') }}"
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:

  #Usage: ansible-playbook Overlay_Config.yaml --vault-password-file <(pass show personal/ansible_vault)
  - include_vars: ../../Vault/vault.yaml

  - name: Set Ansible Become Pass
    set_fact:
      ansible_become_pass: "{{ sudo_cred }}"

  - name: Ensure Rsync is Installed
    apk:
      name: "rsync"
    become: true
    become_method: sudo

  - name: Sync Configuration
    synchronize:
      src: "{{ item.src }}"
      dest: "{{ item.dest }}"
      mode: "push"
    with_items:
        - { src: "~/Development/Salt/baseline/", dest: "/srv/salt/" }
        - { src: "~/Development/Salt/exceptions/", dest: "/srv/salt/" }
    become: true
    become_method: sudo
  

Great, let's run that sucker!


Management|>> ansible-playbook -i ../../inventory Overlay_Config.yaml --vault-password-file <(pass show personal/ansible_vault)

PLAY [salt] ****************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************
ok: [salt]

TASK [include_vars] ********************************************************************************************************************************
ok: [salt]

TASK [Set Ansible Become Pass] *********************************************************************************************************************
ok: [salt]

TASK [Ensure Rsync is Installed] *******************************************************************************************************************
ok: [salt]

TASK [Sync Configuration] **************************************************************************************************************************
changed: [salt] => (item={'src': '~/Development/Salt/baseline/', 'dest': '/srv/salt/'})
changed: [salt] => (item={'src': '~/Development/Salt/exceptions/', 'dest': '/srv/salt/'})

PLAY RECAP *****************************************************************************************************************************************
salt                       : ok=5    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
  

Looks like everything went through okay. Now on our Salt Master we can see the /srv/salt directory is actually a combination of both our baseline and our exception states! The network info state definitely comes from our baseline, but it looks like we also have a services file, and the other states defined in the baseline are there too. Let's apply some to verify.


salt:/srv/salt# tree
.
├── checks
│   ├── network_info.sls
│   ├── nginx.sls
│   └── not_installed.sls
├── packages.sls
└── services.sls

1 directory, 5 files
  

[A gif showing various salt states being tested against an LXD salt minion.]
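
The spot checks in that gif are just ad hoc state.apply runs from the master, along these lines:


salt:~# salt 'minion1' state.apply packages
salt:~# salt 'minion1' state.apply services
salt:~# salt 'minion1' state.apply checks.not_installed
salt:~# salt 'minion1' state.apply checks.nginx
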

And there you have it, a simple way to overlay salt configurations using just an itty bitty bit of Ansible to smooth over the process.

Bio

(defparameter *Will_Sinatra* '((Age . 31) (Occupation . DevOps Engineer) (FOSS-Dev . true) (Locale . Maine) (Languages . ("Lisp" "Fennel" "Lua" "Go" "Nim")) (Certs . ("LFCS"))))

"Very little indeed is needed to live a happy life." - Aurelius
