(λ (x) (create x) '(knowledge))

Verkos

Templated Shell Scripts · April 2nd, 2023

Let's talk a bit about Verkos and the Kubernetes adventure I'm embarking on. If your name is Lucidiot and you're reading this, you probably already know that here lies yaml. I am happy to report that this post has 100% more yaml than it needs, just like any good DevOps thing. We're all just yaml engineers after all, right? So in that vein it makes a ton of sense to tell you I wrote a shell script generator that reads in a yaml file and gloms together a bunch of pre-written shell snippets. And that I then used this yaml shell script monstrosity to bootstrap a Kubernetes cluster! It's like I enjoy yaml or something. Anyways, hot takes on yaml aside, let's dive into what Verkos is.

Verkos is a little nim program that takes a yaml template and a directory of shell snippets, and builds single-run shell scripts out of them. This is a special sort of insanity that only a proficient Ansible user could come up with. See, I write a ton of Ansible playbooks for $work, like many of us do, and while that's wonderful for what it is, I don't really want to do the things I do at work for fun in my homelab. But I frankly enjoy how easy it is to import tasks, throw together quick playbooks, and follow my train of thought months after the fact; to me that's where Ansible shines. And that's definitely what I need in my homelab configurations. But to get that I'd need to bootstrap python onto everything in my homelab, and that's just not a pretty sight. And then there's the matter of setting up ssh to access the systems and run the playbooks against them. No thanks.

Instead of doing anything enterprise I've been kicking it old school and automating the various LXC containers and servers with plain old shell scripts. That's worked well for a couple of years now, but it's not perfect. I can source existing scripts into new ones, but then when I bootstrap systems I need to copy over all of the various bits and pieces. So those scripts started to become monolithic single-use deployment scripts. All very self-contained, all invoking various things in similar manners. And the boilerplate was strong with them. The result was lots of repetitious, messy scripts that were just clunky to maintain. What I really wanted was one thing to pull onto a server that could do all the configuration, and the only way to do that was to standardize the way I wrote scripts. Verkos fixes that! And it works well enough that I just finished deploying a 3 node Kubernetes cluster with scripts it generated and I still have time to write this blog!

"Verkos" verkos nian la areton

"Verkos" will write our cluster

So to study for a couple of certs I've got my eyes on I need a Kubernetes cluster. I happen to have a bunch of old Celeron N3040 NUCs from years back, and some spare Mikrotik networking gear, so my only real problem is configuring the cluster. I'm not sure how deep into k8s I'm going to get, probably decently deep, so I need to be able to rebuild the entire stack from scratch at a moment's notice. Just in case I need to upgrade away from junk hardware, or completely nuke the cluster by doing something stupid. We all know number two is the likely case here.

Verkos allows me to define two nearly identical templates to configure these nodes. They are, after all, more or less the same thing, right? Each of these templates defines a set of variables and tasks that are used to pull snippets from a directory called "Tasks" inside of the Verkos repo. Each snippet is just a shell function, like this one used to install Alpine packages.

#Usage: pkgs 'htop tmux emacs'
pkgs() {
	apk add $1
}

This is the Verkos template to configure our k8s control plane. I've named mine Viralko after Teddy Roosevelt, and the workers, Cervo and Alko, are named in the same vein. Gotta practice my Esperanto while dealing with all this yaml. I feel the template is pretty legible.

The template starts by defining a shell, an output path for the generated script, and whether to run set -ex debugging on the script. And then the real fun begins. Variables describes a list of globals and their values (I'm actually not a huge fan of the syntax here, but it works well enough). After that comes Tasks. These are the paths to the snippets, and how to invoke them in the shell script.

Shell: '#!/bin/ash'
Script: Generated/setup-k3s-ctrl.sh
Debug: false
Variables:
  - Name: lan
    Value: 192.168.90.0
  - Name: zabbix
    Value: 192.168.90.101
Tasks:
  - Path: Tasks/stable_apk_repos
    Invo:
      - 'repos edge'
  - Path: Tasks/apk_pkgs
    Invo:
      - 'pkgs "procps htop iftop net-tools tmux iptables mg syslog-ng haveged iproute2 coreutils logrotate shadow k3s cni-plugins helm"'
  - Path: Tasks/crontab_base
    Invo:
      - crontab_base
  - Path: Tasks/crontab_append
    Invo:
      - 'crontab_append "0 	2 	* 	* 	5 	/sbin/apk -U -a upgrade"'
  - Path: Tasks/apply_crontab
    Invo:
      - apply_crontab
  - Path: Tasks/change_services
    Invo:
      - 'change_services start "k3s"'
  - Path: Tasks/k3s_iptables
    Invo:
      - 'iptables_conf'
  - Path: Tasks/enable_services
    Invo:
      - 'enable_services boot "syslog-ng"'
      - 'enable_services default "crond iptables k3s"'
  - Path: Tasks/reboot_system
    Invo:
      - 'reboot_system'

Almost identical to the control plane setup, the template for our nodes is filled with more or less the same configuration, though I've commented this one to help explain the setup.

Shell: '#!/bin/ash'
Script: Generated/setup-k3s-node.sh
Debug: false
Variables:
  - Name: lan
    Value: 192.168.90.0
  - Name: zabbix
    Value: 192.168.90.101
  - Name: token
    Value: changeme
  - Name: ctrl
    Value: 192.168.90.101
Tasks:
  #Setup Edge main/community repos
  - Path: Tasks/stable_apk_repos
    Invo:
      - 'repos edge'
  #Install the following packages
  - Path: Tasks/apk_pkgs
    Invo:
      - 'pkgs "procps htop iftop net-tools tmux iptables mg syslog-ng haveged iproute2 coreutils logrotate shadow k3s cni-plugins"'
  #Setup Crontab, this is broken into three steps to allow it to be flexible
  - Path: Tasks/crontab_base
    Invo:
      - crontab_base
  #Though not shown here, you can have multiple invocations of the same task, such as multiple crontab appends with one task import
  - Path: Tasks/crontab_append
    Invo:
      - 'crontab_append "0 	2 	* 	* 	5 	/sbin/apk -U -a upgrade"'
  #Import the created crontab
  - Path: Tasks/apply_crontab
    Invo:
      - apply_crontab
  #Start k3s ahead of configuration
  - Path: Tasks/change_services
    Invo:
      - 'change_services start "k3s"'
  #Run k3s agent --server X --token Y + configure persistently
  - Path: Tasks/k3s_node
    Invo:
      - 'k3s_node $token $ctrl'
  #Setup iptables firewall
  - Path: Tasks/k3s_iptables
    Invo:
      - 'iptables_conf'
  #Enable the services listed under the runlevel provided
  - Path: Tasks/enable_services
    Invo:
      - 'enable_services boot "syslog-ng"'
      - 'enable_services default "crond iptables k3s"'
  #Yeah this does just call reboot, but it could do more
  - Path: Tasks/reboot_system
    Invo:
      - 'reboot_system'

These templates are only really meaningful once they've been composed into shell scripts. When Verkos composes a script it does an in-order append of each variable and then each task, so the resulting shell script is a series of functions with a set of invocations at the very bottom of the script. I tend to write shell scripts this way, so the design choice is more or less idiosyncratic, not pragmatic. I think it makes the script more legible and self-documenting.
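
Just to make the ordering concrete, the composition is effectively concatenation. Here's a rough shell sketch of what assembling the node template boils down to; this is purely illustrative of the ordering, not how the nim code actually does it:

#Illustration only: roughly what composing the node template amounts to
{
	echo '#!/bin/ash'
	echo 'set -ex'
	#Variables, in template order
	echo 'token=changeme'
	echo 'ctrl=192.168.90.101'
	#Each task's snippet file, appended in template order
	cat Tasks/stable_apk_repos Tasks/apk_pkgs Tasks/crontab_base
	#...and so on for the remaining tasks...
	#Invocations, at the very bottom
	echo 'repos edge'
	echo 'k3s_node $token $ctrl'
} > Generated/setup-k3s-node.sh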

Here's the composed shell script to set up a k3s node using the second template above.

#!/bin/ash
set -ex
lan=192.168.90.0
zabbix=192.168.90.101
token=changeme
ctrl=192.168.90.101

#Usage: repos edge
repos() {
	cat > /etc/apk/repositories <<EOF
http://dl-cdn.alpinelinux.org/alpine/$1/main
http://dl-cdn.alpinelinux.org/alpine/$1/community
##http://dl-cdn.alpinelinux.org/alpine/$1/testing
EOF

	apk -U upgrade
}

#Usage: pkgs 'htop tmux emacs'
pkgs() {
	apk add $1
}

#Usage: crontab_base
crontab_base() {
	cat > /tmp/new.cron <<EOF
# do daily/weekly/monthly maintenance
# min	hour	day	month	weekday	command
*/15	*	*	*	*	run-parts /etc/periodic/15min
0	*	*	*	*	run-parts /etc/periodic/hourly
0	2	*	*	*	run-parts /etc/periodic/daily
0	3	*	*	6	run-parts /etc/periodic/weekly
0	5	1	*	*	run-parts /etc/periodic/monthly
EOF
}

#Usage: crontab_append '*/15 * * * * /usr/local/bin/atentu -m > /etc/motd'
crontab_append() {
	printf "$1\n" | tee -a /tmp/new.cron
}

#Usage: apply_crontab
apply_crontab() {
	crontab /tmp/new.cron
}

#Usage: change_services start 'lighttpd rsyslog samba iptables'
#Variables:
change_services() {
	for service in $2; do
		rc-service $service $1
	done
}

#Usage: k3s_node token x.x.x.x
#Variables:
k3s_node() {
	#Append cni-plugins to ash path
	sed -i 's|append_path "/bin"|append_path "/bin"\nappend_path "/usr/libexec/cni/"|' /etc/profile

	#Export to path for duration of script
	export PATH="/usr/libexec/cni/:$PATH"
	k3s agent --server https://$2:6443 --token $1 &

	#Configure agent options
	cat > /etc/conf.d/k3s <<EOF
# k3s options
export PATH="/usr/libexec/cni/:$PATH"
K3S_EXEC="agent"
K3S_OPTS="--server https://$2:6443 --token $1"
EOF
}

#Usage: iptables_conf
#Variables: lan zabbix
iptables_conf() {
	if [ ! -f /etc/iptables/k3s.rules ]; then
		touch /etc/iptables/k3s.rules
	else
		rm /etc/iptables/k3s.rules
	fi
	
	cat > /etc/iptables/k3s.rules <<EOF
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -s 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
# Allow ICMP
-A INPUT -s $lan/24 -p icmp -j ACCEPT
# Allow SSH
-A INPUT -s $lan/24 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
# Allow etcd
-A INPUT -p tcp --match multiport -m state --state NEW -m tcp --dports 2379:2380 -j ACCEPT
# Allow k3s
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6444 -j ACCEPT
# Allow Flannel vxlan
-A INPUT -p udp -m state --state NEW -m udp --dport 8472 -j ACCEPT
# Allow Kubelet
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
# Allow Flannel wireguard
-A INPUT -p udp --match multiport -m state --state NEW -m udp --dports 51820:51821 -j ACCEPT
# Allow Zabbix
-A INPUT -s $zabbix -i eth0 -p tcp -m state --state NEW -m tcp --dport 10050 -j ACCEPT
-A INPUT -s $zabbix -i eth0 -p tcp -m state --state NEW -m tcp --dport 10051 -j ACCEPT
-A INPUT -j DROP
-A FORWARD -j DROP
-A OUTPUT -j ACCEPT
COMMIT
EOF

	iptables-restore /etc/iptables/k3s.rules
	/etc/init.d/iptables save
}

#Usage: enable_services default 'lighttpd rsyslog samba iptables'
enable_services() {
	for service in $2; do
		rc-update add $service $1
	done
}

#Usage: reboot_system
#Variables:
reboot_system() {
	reboot
}

repos edge
pkgs "procps htop iftop net-tools tmux iptables mg syslog-ng haveged iproute2 coreutils logrotate shadow k3s cni-plugins"
crontab_base
crontab_append "0 	2 	* 	* 	5 	/sbin/apk -U -a upgrade"
apply_crontab
change_services start "k3s"
k3s_node $token $ctrl
iptables_conf
enable_services boot "syslog-ng"
enable_services default "crond iptables k3s"
reboot_system

Neat, right? From a deployment perspective I typically make these available using my fServ tool, then just pull the script with wget. What this means is that when I set up a new piece of hardware or an LXC container, all I need to worry about installing on the host is something like wget if it isn't already there, and maybe a text editor like mg/vi so that I can tweak a variable before running the script.
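
On a fresh Alpine host that whole dance looks something like this; the IP and port here are placeholders for wherever fServ happens to be serving the generated script:

#Example bootstrap of a new node; the URL is a placeholder for the fServ host
apk add wget mg
wget http://192.168.90.50:8080/setup-k3s-node.sh
mg setup-k3s-node.sh	#tweak token/ctrl before running
ash setup-k3s-node.sh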

Sure it's a little more hands-on than running an Ansible playbook if you've got ssh and python on the system. It's definitely not a perfect system, but it's one that's unique and keeps me from burning out my brain trying to automate my homelab. Nothing is more stressful than going home from work and doing more work as your hobby; Verkos makes things just different enough for me to avoid that.

Kubernetes cluster built out of 3 old NUC mini computers, a Mikrotik RB260GSP, and a MAP 2ND.

Anyways that's a rough overview of my yaml monstrosity. It's here to stay, and I can happily say I have zero expectation of anyone else ever using this tool, and that's okay by me. It lets me enjoy my blinkenlights in peace and that's worth the effort.

If anyone is curious about the Mikrotik gear, it's literally nothing special. I'm using the MAP 2ND as a firewall, with the wlan1 interface connected to my wifi as the WAN uplink, and ether1-2 on a bridge as LAN, which connects to the RB260GSP switch. I wanted to set up a transparent link with the MAP originally, but I'm using CAPsMAN to configure my APs and really don't want to tank my wifi config just so my k8s lab isn't double NAT'd. Not worth the effort since they're not going to be exposed anyways.

Bio

(defparameter *Will_Sinatra* '((Age . 31) (Occupation . DevOps Engineer) (FOSS-Dev . true) (Locale . Maine) (Languages . ("Lisp" "Fennel" "Lua" "Go" "Nim")) (Certs . ("LFCS"))))

"Very little indeed is needed to live a happy life." - Aurelius