Packet Metadata for SaltStack

This article was published on August 03, 2020.

In my last article, we spun up some bare metal compute on Packet with Pulumi and installed SaltStack.

In order to use SaltStack to provision workloads onto our servers, we need a way to identify which machines should run which workload. SaltStack uses Grains to do this ... and there's a metadata grain that can read metadata from cloud providers; unfortunately, it doesn't support Packet.

Drats ☹️

Happy news though: SaltStack is rather extensible, as long as you don't mind getting your hands a little dirty with Python.

Writing a Custom Grain

Writing a SaltStack grain module is SUPER easy. Let's take a look at the simplest implementation I can put together.

def test():
    return dict(test={
        "name": "David",
        "age": "18",
    })

Yeah, yeah. I know I'm not 18 anymore. Shush.

Grain modules are Python functions that return key/value pairs. The code above returns a grain named "test" with the key/value pairs name = David and age = 18. This means we can run salt minion-1 grains.item test and we'll see:

minion-1:
    ----------
    test:
        ----------
        name:
            David
        age:
            18
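
Grains aren't just for display, either; they feed directly into targeting, which is the whole reason we care about them. Nested keys are matched with colon-delimited paths, so once this grain is synced out you could ping every minion whose test grain has name = David:

salt -G 'test:name:David' test.ping

This is exactly the mechanism we'll lean on later to schedule workloads.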

Of course, we don't want to return hard-coded key/value pairs! We want to return information about our servers from Packet's metadata API.

The code to handle this isn't particularly complicated. In fact, performing an HTTP request in Python is really simple 😀

Let's take a look.

import json
import logging
import salt.utils.http as http

# Set up logging
log = logging.getLogger(__name__)

# Packet metadata server endpoint
HOST = "https://metadata.packet.net/metadata"


def packet_metadata():
    response = http.query(HOST)
    metadata = json.loads(response["body"])

    # Log the full metadata payload; handy when debugging the grain
    log.error(metadata)

    grains = {}
    grains["id"] = metadata["id"]
    grains["iqn"] = metadata["iqn"]
    grains["plan"] = metadata["plan"]
    grains["class"] = metadata["class"]
    grains["facility"] = metadata["facility"]

    grains["tags"] = metadata["tags"]

    return dict(packet_metadata=grains)

The important lines here are these three:

HOST = "https://metadata.packet.net/metadata"

response = http.query(HOST)
metadata = json.loads(response["body"])

We first query the metadata API endpoint, defined by the variable HOST. We then decode the body of the response into a Python dict, using json.loads.
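
One caveat worth calling out: grain modules run every time grains are refreshed, so if the metadata API is down or returns junk, this module will raise and the grain will silently vanish. Here's a slightly more defensive sketch; the try/except and the key tuple are my own additions, not part of the code above:

def packet_metadata():
    try:
        response = http.query(HOST)
        metadata = json.loads(response["body"])
        keys = ("id", "iqn", "plan", "class", "facility", "tags")
        grains = {key: metadata[key] for key in keys}
    except (KeyError, ValueError) as exc:
        # No response body, invalid JSON, or a missing attribute;
        # return no grains rather than breaking grain loading entirely
        log.error("Failed to load Packet metadata: %s", exc)
        return {}

    return dict(packet_metadata=grains)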

This gives us access to every bit of metadata returned by the Packet metadata API. That looks like:

{
  "id": "c5ce85c5-1eef-4581-90b6-88a91e47e207",
  "hostname": "master-1",
  "iqn": "iqn.2020-08.net.packet:device.c5ce85c5",
  "operating_system": {
    "slug": "debian_9",
    "distro": "debian",
    "version": "9",
    "license_activation": {
      "state": "unlicensed"
    },
    "image_tag": "b32a1f31b127ef631d6ae31af9c6d8b69dcaa9e9"
  },
  "plan": "c2.medium.x86",
  "class": "c2.medium.x86",
  "facility": "ams1",
  "private_subnets": ["10.0.0.0/8"],
  "tags": ["role/salt-master"],
  "ssh_keys": [
    "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGf0w9b+lPcZhsNHU8Sw5hJPBhpNICTNkjlBz9jxtLbWNGvHTE1lBeXU5VA2/7cuYw48apHmMURHFtK5AZx3srg="
  ],
  "storage": {
    "disks": [
      {
        "device": "/dev/sdd",
        "wipeTable": true,
        "partitions": [
          {
            "label": "BIOS",
            "number": 1,
            "size": "512M"
          },
          {
            "label": "SWAP",
            "number": 2,
            "size": "3993600"
          },
          {
            "label": "ROOT",
            "number": 3,
            "size": 0
          }
        ]
      }
    ],
    "filesystems": [
      {
        "mount": {
          "device": "/dev/sdd1",
          "format": "vfat",
          "point": "/boot/efi",
          "create": {
            "options": ["32", "-n", "EFI"]
          }
        }
      },
      {
        "mount": {
          "device": "/dev/sdd3",
          "format": "ext4",
          "point": "/",
          "create": {
            "options": ["-L", "ROOT"]
          }
        }
      },
      {
        "mount": {
          "device": "/dev/sdd2",
          "format": "swap",
          "point": "none",
          "create": {
            "options": ["-L", "SWAP"]
          }
        }
      }
    ]
  },
  "network": {
    "bonding": {
      "mode": 4,
      "link_aggregation": "bonded",
      "mac": "50:6b:4b:b4:a9:3a"
    },
    "interfaces": [
      {
        "name": "eth0",
        "mac": "50:6b:4b:b4:a9:3a",
        "bond": "bond0"
      },
      {
        "name": "eth1",
        "mac": "50:6b:4b:b4:a9:3b",
        "bond": "bond0"
      }
    ],
    "addresses": [
      {
        "id": "5d28837b-29c5-4505-bb05-930fd3760bac",
        "address_family": 4,
        "netmask": "255.255.255.252",
        "created_at": "2020-08-03T14:07:50Z",
        "public": true,
        "cidr": 30,
        "management": true,
        "enabled": true,
        "network": "147.75.84.128",
        "address": "147.75.84.130",
        "gateway": "147.75.84.129",
        "parent_block": {
          "network": "147.75.84.128",
          "netmask": "255.255.255.252",
          "cidr": 30,
          "href": "/ips/7a30c2bf-f0e5-402c-b0c0-b8ab03359e63"
        }
      },
      {
        "id": "937552c6-cf1a-474d-9866-9fb1e0525503",
        "address_family": 4,
        "netmask": "255.255.255.254",
        "created_at": "2020-08-03T14:07:49Z",
        "public": false,
        "cidr": 31,
        "management": true,
        "enabled": true,
        "network": "10.80.76.4",
        "address": "10.80.76.5",
        "gateway": "10.80.76.4",
        "parent_block": {
          "network": "10.80.76.0",
          "netmask": "255.255.255.128",
          "cidr": 25,
          "href": "/ips/8f8cd919-165a-4e62-b461-af7c15a25ec4"
        }
      }
    ]
  },
  "customdata": {},
  "specs": {
    "cpus": [
      {
        "count": 1,
        "type": "AMD EPYC 7401P 24-Core Processor @ 2.0GHz"
      }
    ],
    "memory": {
      "total": "64GB"
    },
    "drives": [
      {
        "count": 2,
        "size": "120GB",
        "type": "SSD",
        "category": "boot"
      },
      {
        "count": 2,
        "size": "480GB",
        "type": "SSD",
        "category": "storage"
      }
    ],
    "nics": [
      {
        "count": 2,
        "type": "10Gbps"
      }
    ],
    "features": {}
  },
  "switch_short_id": "f8dd5e3f",
  "volumes": [],
  "api_url": "https://metadata.packet.net",
  "phone_home_url": "http://tinkerbell.ams1.packet.net/phone-home",
  "user_state_url": "http://tinkerbell.ams1.packet.net/events"
}

I decided not to make all of this available within the grains system, as only a few data points make sense for scheduling workloads. Hence, I cherry-pick the attributes I want for the next demo. You can pick and choose whatever you want too.

grains = {}
grains["id"] = metadata["id"]
grains["iqn"] = metadata["iqn"]
grains["plan"] = metadata["plan"]
grains["class"] = metadata["class"]
grains["facility"] = metadata["facility"]

grains["tags"] = metadata["tags"]

return dict(packet_metadata=grains)
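
Nested values work exactly the same way. If, say, you also wanted the hostname and the operating system slug from the payload above, something like this would do it; the grain names and the defensive .get defaults here are my own choices:

grains["hostname"] = metadata.get("hostname")
grains["os_slug"] = metadata.get("operating_system", {}).get("slug")
grains["private_subnets"] = metadata.get("private_subnets", [])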

Provisioning the Custom Grain

Now that we have a custom grain, we need to update our Pulumi code to install this on our Salt master.

NB: We only need to make this grain available on our Salt master, as the Salt master takes responsibility for syncing custom grains to the minions.

I've updated our user-data.sh to create the directory we need, and added the mustache template syntax that lets us inject the Python script. We use & before the variable name so that mustache doesn't escape our quotes into HTML entities ... I only learnt that today 😂

mkdir -p /srv/salt/_grains

cat <<EOF >/srv/salt/_grains/packet_metadata.py
{{ &PACKET_METADATA_PY }}
EOF
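
If you're curious what the & actually changes, here's a quick sketch using chevron, a Python implementation of mustache (my choice for illustration; we render with mustache.js in TypeScript below):

import chevron

script = 'grains["id"] = metadata["id"]'

# Default interpolation HTML-escapes the value: " becomes &quot;
print(chevron.render("{{ V }}", {"V": script}))

# The & sigil disables escaping, so the Python source survives intact
print(chevron.render("{{ &V }}", {"V": script}))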

Next up, we pass the Python script in at render time and add some tags to our servers when we create them.

const pythonPacketMetadataGrain = fs
  .readFileSync(path.join(__dirname, "..", "salt", "packet_metadata.py"))
  .toString();

const saltMaster = new Device(`master-${name}`, {
  // ... code omitted for brevity
  userData: mustache.render(bootstrapString, {
    PACKET_METADATA_PY: pythonPacketMetadataGrain,
  }),
  // Add tags to this server
  tags: ["role/salt-master"],
});

Syncing Custom Grains

Finally, we need to tell our Salt master to sync the grains to our minions.

salt "*" saltutil.sync_grains

You can now confirm the custom grain is working with:

root@master-1:~# salt "*" grains.item packet_metadata
minion-1:
    ----------
    packet_metadata:
        ----------
        class:
            c2.medium.x86
        facility:
            ams1
        id:
            ab0bc2ba-557b-4d99-a1eb-0beec02adff2
        iqn:
            iqn.2020-08.net.packet:device.ab0bc2ba
        plan:
            c2.medium.x86
        tags:
            - role/salt-minion
master-1:
    ----------
    packet_metadata:
        ----------
        class:
            c2.medium.x86
        facility:
            ams1
        id:
            97ce9196-077d-4ce9-82a5-d58bf59d0dbc
        iqn:
            iqn.2020-08.net.packet:device.97ce9196
        plan:
            c2.medium.x86
        tags:
            - role/salt-master

That's it! Next time we'll take a look at using our tags to provision and schedule our workloads.

See you then.