Warning: this assumes you already know about Rego/OPA and is more of a brain dump than a polished blog post.
First up, take a look at conftest: it's a great little CLI tool that lets you take rules you've written in Rego/OPA and run them easily.
In our case we have the following:
– ./rules: folder containing our Rego rules
– ./yaml: folder containing the YAML we want to validate
We're going to write a rule to flag duplicate resources, i.e. when you have two YAML files with the same kind and name.
The rule is written in Rego and executed by conftest, and when a failure occurs it's shown as an annotation on the Pull Request using GitHub Actions.
Firstly, for conftest we want to use the --combine option so that all the YAML files are passed into our rule as a single array. This allows us to compare the files against one another to determine whether there are any duplicates.
As well as validating the resources, we also use the path property to output metadata about which file generated the warning.
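A sketch of what such a rule can look like (assuming Kubernetes-style documents with kind and metadata.name; adjust the fields to your own resources):

```rego
package main

# With --combine, `input` is an array of {path, contents} objects,
# so a rule can compare documents across files.
warn_duplicate_resources[msg] {
	some i, j
	a := input[i].contents
	b := input[j].contents
	i < j
	a.kind == b.kind
	a.metadata.name == b.metadata.name
	msg := sprintf("duplicate %s/%s defined in %s and %s", [a.kind, a.metadata.name, input[i].path, input[j].path])
}
```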
We can then use jq to parse the JSON output from conftest and convert it into "Workflow Commands: Warning Messages". These are written to the console and read by GitHub Actions, and with the details in the message it generates an annotation on the file in the PR.
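If jq isn't your thing, here's roughly the same step sketched in Python: run conftest, parse its JSON output and print each warning as a workflow command. The filename/warnings/msg field names match the conftest JSON output I've seen, but check them against your version; the real setup also pulls the offending file out of the rule's output so the annotation lands on the right file, which I've left out here to keep the sketch short.

```python
import json
import subprocess

# Run conftest over the yaml folder with our rules and ask for JSON output.
result = subprocess.run(
    ["conftest", "test", "--combine", "--policy", "rules", "--output", "json", "yaml/"],
    capture_output=True,
    text=True,
)

for doc in json.loads(result.stdout):
    for warning in doc.get("warnings") or []:
        # GitHub Actions turns these console lines into annotations on the PR.
        print(f"::warning file={doc.get('filename', '')}::{warning['msg']}")
```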
Here is a gist of this wrapped together.
This is a brain dump rather than a fully fleshed-out blog post. Most of the code was written with an unwell small human sleeping on me, and Python isn't my best language, so it's very much a hack.
I have two kids, both of whom have asthma and chest issues. Unfortunately, these are things you manage rather than cure: they're more prone to normal colds escalating quickly and need more medical intervention in general.
My oldest hasn’t started school yet but has spent more time in hospital already than I have in my entire life.
"How does this relate to coding, Lawrence?" Glad you asked. We keep track of the medication, temperature, pulse ox and other key events in a Signal group.
We've found that between swapping parents, sleepless nights and different hospital wards/doctors, it's easy for things to get lost.
This has worked really well in the past: Signal keeps things tracked, it's quick and easy, and you can write down whatever you want. If you're offline it'll sync up later.
When you swap parents or see a new doctor you can do a quick rundown of what's happened in the last x hours, chase up missed doses etc. just by scrolling up the chat.
What was new this time round was that both of my kids were ill at the same time, both with chest infections. Both needed medication and observations on temperature, pulse ox etc., and the group got messy fast.
So I decided to write something to make things nicer. Partly because I thought it would help, partly because having something to focus on helped dissipate the nervous energy of seeing your kids ill and not being able to do much about it.
The aim is a bot that picks up the messages in the group, stores them, and builds out views/graphs.
First up, a massive shout out to Finn for the work on Signald and to Lazlo for the Semaphore bot library that builds on it. Both of these were awesome to work with and made this project easy.
The basic aim is for the bot to listen on the group, pick up updates, then pull out the relevant information and store it in an SQLite DB.
I used the ‘reaction’ feature in Signal to show that the bot has successfully picked up an item and stored it; you can see this as the 💾 added to the messages below.
Lastly, when someone sends the message ‘graphs’, the bot builds out graphs and shares them back to the group.
What does this code look like? See the Semaphore examples for a fully fledged starting point (seriously, they're awesome). In the meantime, I'll show my specific bits. It's surprisingly small: I added a handler to the bot that detects messages containing a temperature using a regex and inserts them into the temperature table in SQLite.
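A trimmed-down sketch of that handler is below. The regex, table name and exact Semaphore wiring are illustrative rather than copied from the real bot, so treat them as assumptions:

```python
import re
import sqlite3

import anyio
from semaphore import Bot, ChatContext

# Matches temperatures like "38.5" in a message (illustrative, not the real regex).
TEMP_RE = re.compile(r"\b(3[0-9]|4[0-3])(\.\d)?\b")

async def temperature(ctx: ChatContext) -> None:
    body = ctx.message.get_body()
    match = TEMP_RE.search(body)
    if match is None:
        return

    with sqlite3.connect("/data/obs.db") as db:
        db.execute(
            "CREATE TABLE IF NOT EXISTS temperature (timestamp INTEGER, message TEXT, temp REAL)"
        )
        db.execute(
            "INSERT INTO temperature (timestamp, message, temp) VALUES (?, ?, ?)",
            (ctx.message.timestamp, body, float(match.group(0))),
        )

    # Ack with the 💾 reaction so everyone can see it was stored
    # (reaction support as exposed by Semaphore; check your version).
    await ctx.message.reply(body="💾", reaction=True)

async def main() -> None:
    async with Bot("+440000000000") as bot:
        bot.register_handler(TEMP_RE, temperature)
        await bot.start()

anyio.run(main)
```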
Then for graphing I tried out something a bit different. I used a Jupyter notebook to author and play with the code, then ran jupyter nbconvert graphs.ipynb --to python to output the notebook's code as a Python file.
This was a nice mix for a side/hack project: I could iterate quickly in the notebook but still have that code easily callable from the bot.
The handler and graph rendering look like this. I was seriously impressed with pandas DataFrames; I've not used them much in the past, and being able to easily read in from SQLite was a big win.
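The shape of the graphing code was roughly as follows. This is a cut-down sketch rather than the real nbconvert output (the table/column names and the fever line are illustrative), but the pandas read-from-SQLite pattern is the bit that made it pleasant.

```python
# graphs.py - called from the bot after `jupyter nbconvert graphs.ipynb --to python`
import sqlite3

import matplotlib
matplotlib.use("Agg")  # render without a display inside the container
import matplotlib.pyplot as plt
import pandas as pd

def render_temps(db_path: str = "/data/obs.db", out_path: str = "/signald/temps.jpg") -> str:
    with sqlite3.connect(db_path) as db:
        temps = pd.read_sql_query(
            "SELECT timestamp, temp FROM temperature ORDER BY timestamp", db
        )

    # Signal timestamps are milliseconds since the epoch.
    temps["timestamp"] = pd.to_datetime(temps["timestamp"], unit="ms")

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.plot(temps["timestamp"], temps["temp"], marker="o")
    ax.axhline(38.0, linestyle="--", color="red", label="fever threshold")
    ax.set_ylabel("Temperature (°C)")
    ax.legend()
    fig.autofmt_xdate()
    fig.savefig(out_path)
    plt.close(fig)
    return out_path
```

The bot then sends the rendered images back to the group as attachments: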
# what matters is the path inside the signald container
attachmentTemps = {"filename": "/signald/temps.jpg",
                   "width": "250",
                   "height": "250"}
attachmentTimeline1 = {"filename": "/signald/timeline1.png",
                       "width": "250",
                       "height": "250"}
attachmentTimeline2 = {"filename": "/signald/timeline2.png",
                       "width": "250",
                       "height": "250"}
await ctx.message.reply(body="Temp graphs for the last 3 days, last 12 hours timeline",
                        attachments=[attachmentTemps, attachmentTimeline1, attachmentTimeline2])
Last was drawing the timelines. labella was awesome here; I had to hack a bit, but it does great stuff like letting you pick a colour for an item based on its content. With this I could label different types of medication with different colours on the timeline.
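Roughly, the timeline code looked like the sketch below. The per-type colour lookup is the interesting bit; the option names (textFn, dotColor, labelBgColor) follow labella.py's d3kit-timeline style options and are the part I'd double-check against the version you have installed.

```python
from datetime import datetime

from labella.timeline import TimelineSVG

# Map each type of medication/observation to a colour (illustrative values).
COLOURS = {
    "paracetamol": "#e74c3c",
    "ibuprofen": "#8e44ad",
    "inhaler": "#2980b9",
    "obs": "#27ae60",
}

def colour_for(item):
    return COLOURS.get(item["data"]["type"], "#7f8c8d")

items = [
    {"time": datetime(2021, 3, 1, 19, 30), "text": "paracetamol", "data": {"type": "paracetamol"}},
    {"time": datetime(2021, 3, 1, 21, 0), "text": "10 puffs inhaler", "data": {"type": "inhaler"}},
    {"time": datetime(2021, 3, 1, 23, 45), "text": "temp 38.4", "data": {"type": "obs"}},
]

tl = TimelineSVG(items, options={
    "initialWidth": 800,
    "initialHeight": 250,
    "direction": "up",
    "textFn": lambda d: d["text"],
    "dotColor": colour_for,
    "labelBgColor": colour_for,
})
tl.export("/signald/timeline1.svg")
```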
What does this look like when drawn? (Granted I’ve picked rubbish colors).
It gives a chronologically accurate timeline with each medicine or item type easily distinguishable. This is useful to take in how things are going over 24 hours and also spot issues with missed doses.
So that's it really. I haven't published the full set of code as it's got more stuff specific to the kids in there, but hopefully this is a useful overview; drop a comment if you'd find it interesting/useful, and if there's enough interest I can clean things up to make it shareable.
Hyper-V on Server 2019 supports Discrete Device Assignment (DDA), which allows PCI-E devices to be assigned directly to underlying VMs. The name threw me off, as my searches for "device pass-through" didn't return any results!
Typically this is used with graphics cards, and all the docs talk extensively about doing just that. What I wanted to do was pass through an LSI SAS controller to my TrueNAS VM.
Start by downloading and running the Machine Profile Script. This will tell you whether your machine is set up to support pass-through. If things are good you'll see something like this (but with your LSI adapter's name, not my adapter – my LSI is already set up so it doesn't show here). Make a note of the `PCIROOT` portion; we'll need that later.
Run the profile script in a tight loop while you tweak things to make sure you're all set up right. My BIOS settings weren't clear, so I did a couple of loops here trying different settings for the chipset, PCI-E and other bits.
Find and disable the LSI adapter in Device Manager. The easiest way I found to do this is to find a hard drive you know is attached to the device, then switch the Device Manager view to "by connection"; the hard drive you selected will now show under the LSI adapter. Right-click the adapter and click Disable (note that at this point you'll lose access to the drives). Reboot.
Run the following script, replacing $instancePath with the PCIROOT line from the Machine Profile script and truenas with your VM's name.
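Something along these lines (the cmdlets are the standard DDA ones from Microsoft's docs; the location path below is just an example value):

```powershell
# Replace with the PCIROOT location path from the Machine Profile script (example value below).
$instancePath = "PCIROOT(0)#PCI(0102)#PCI(0000)"
$vmName = "truenas"

# DDA devices require the VM to turn off rather than save state.
Set-VM -Name $vmName -AutomaticStopAction TurnOff

# Remove the device from the host, then assign it to the VM.
Dismount-VMHostAssignableDevice -LocationPath $instancePath -Force
Add-VMAssignableDevice -LocationPath $instancePath -VMName $vmName
```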
One thing to note: I tried to pass through the inbuilt AMD storage controller with -Force even though the Machine Profile script said it wouldn't work. It did kind of work, showing one of the disks, but it also made the machine very unstable, rebooting the host when the VM was shut down, so it's best to listen to the output of the script and only try to pass through devices that show up green!
I've now run for a couple of days with the LSI adapter passed through, loaded about 2TB onto a RAIDZ2 pool of 5x3TB disks, and so far everything is working well.