
TrueNAS storage controller pass-through with Windows Hyper-V (DDA)

Hyper-V on Server 2019 supports Discrete Device Assignment (DDA), which allows PCIe devices to be assigned directly to underlying VMs. This threw me off, as my searches for "device pass-through" didn't return any results!

Typically this is used with graphics cards, and all the docs talk extensively about doing just that. What I wanted to do was pass through an LSI SAS controller to my TrueNAS VM.

Here are my learnings:

  1. Enable SR-IOV and I/O MMU (guide here).
  2. Start by downloading and running the Machine Profile Script. This will tell you if your machine is set up to support pass-through. If things are good you'll see something like this (but with your LSI adapter name, not my adapter – my LSI is already set up so it doesn't show here). Make a note of the `PCIROOT` portion, we'll need that later.
  3. Use steps 1 and 2 in a tight loop to make sure you're all set up right. My BIOS settings weren't clear, so I did a couple of loops here trying different settings for the Chipset, PCI-E and other bits.
  4. Find and disable the LSI Adapter in Device Manager. The easiest way I found to do this is to find a hard drive you know is attached to the device, then switch the Device Manager view to "by connection"; the hard drive you have selected will now show under the LSI Adapter. Right-click the adapter and click disable (note at this point you'll lose access to the drives). Reboot. There's a PowerShell alternative for this step sketched after the script below.
  5. Run the following script, replacing $locationPath with the PCIROOT line from the Machine Profile Script output and truenascore with your VM's name.
# Look up the target VM (it must be powered off for DDA changes)
$vm = Get-VM -Name truenascore
# The location path reported by the Machine Profile Script for the LSI adapter
$locationPath = "PCIROOT(0)#PCI(0102)#PCI(0000)#PCI(0200)#PCI(0000)"
# Detach the device from the host, then assign it to the VM
Dismount-VmHostAssignableDevice -LocationPath $locationPath -Force -Verbose
Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
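
As an aside, the Device Manager disable step can also be done from PowerShell. A minimal sketch, assuming your adapter's friendly name contains "LSI" (adjust the filter for your hardware):

# Find the storage controller by friendly name and disable it before the
# dismount; the "*LSI*" filter is an assumption, adjust to suit
Get-PnpDevice -FriendlyName "*LSI*" | Where-Object Status -eq "OK" | Disable-PnpDevice -Confirm:$false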

Boot the VM and you're done.
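
If you ever need to hand the controller back to the host (say, for troubleshooting), the assignment can be reversed; a hedged sketch, with the VM powered off again and $locationPath as above:

# Remove the device from the VM, then remount it on the host
Remove-VMAssignableDevice -VMName truenascore -LocationPath $locationPath -Verbose
Mount-VMHostAssignableDevice -LocationPath $locationPath -Verbose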

Things to note: I tried to pass through the built-in AMD storage controller with -Force even though the Machine Profile Script said it wouldn't work. It did kind of work, showing one of the disks, but it also made the machine very unstable, rebooting the host when the VM was shut down. Best to listen to the output of the script and only try to pass through devices that show up green!

I've now run for a couple of days with the LSI adapter passed through and loaded about 2TB onto a RAIDZ2 pool of 5x3TB disks, and so far everything is working well.


TrueNAS OneDrive Cloud Sync corrupted on transfer

This is a quick one: if you get the following error or similar:

2021/06/07 00:15:35 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: sizes differ 189118 vs 130560
2021/06/07 00:15:35 Failed to copy: corrupted on transfer: sizes differ 189118 vs 130560

These track back to an issue with OneDrive's metadata generation altering the size of the file. You can see details on the issue here: https://github.com/rclone/rclone/issues/399

To resolve this you need to add --ignore-size to the rclone config that TrueNAS creates.

While the UI doesn't expose an extra-args field, it is present in the underlying database. This post guides you through adding additional args: https://www.truenas.com/community/threads/cloud-sync-task-add-extra-rclone-args-to-specify-azure-archive-access-tier.85526/

For this OneDrive error the following works (assuming you only have one Cloud Sync task, so its ID == 1):

$ sqlite3 /data/freenas-v1.db
sqlite> update tasks_cloudsync set args = '--ignore-size' where id = 1;
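
If you have more than one Cloud Sync task, confirm which id to target before running the update; a minimal sketch (the description column name is an assumption, run .schema tasks_cloudsync to confirm):

sqlite> select id, description from tasks_cloudsync;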

You can double-check the change with the following:

sqlite> .headers on
sqlite> select * from tasks_cloudsync;

Then it’s just a case of re-running the task in the UI and 🎉


Azure: Workbook without hard coded resources for automated deployment

Brain-dump post, excuse any typos and rough writing; I'm getting this out of my head while I still remember it.

In this post I'm going to go through how I used query-based parameters to set up an Application Insights workbook so it does not have hard-coded resource IDs in its definition.

This means it's much easier to use for automated deployments where these IDs aren't known, and as new resources are deployed the existing workbook picks them up automatically without requiring manual changes.

In this scenario we're deploying a Resource Group with ARM/Terraform. Each group has its own Application Insights instance deployed. In the group are some App Service Plans, Cosmos DBs and Service Bus Namespaces. We want the workbook to deploy into the Application Insights instance in the group and graph the metrics for the App Service Plans, Cosmos and Service Bus resources.

I’m new to workbooks so be aware there may be a simpler way to do this that I’ve not yet found!

First up, what does it look like if you don't use this approach and just use the GUI to add metrics to your workbook?

Along the top we use the "Resource Type" and other drop-downs to select the resource we want to graph, then we add our "CPU" metric.

If you click into the "Advanced editor" you'll see the following; notice that the "resourceIds" field now has a hard-coded reference to the resource we selected.

This means that if we exported this JSON and deployed it using ARM or Terraform the workbook wouldn't work. We'd want it to graph the metrics for the resource deployed alongside it, not the resource that is hard coded.

So how do we fix this?

Well we can use workbook parameters.

Parameters can be simple strings or more complex queries and resource selectors.

The first step is to find the resource group we're deployed into. This can be done by creating a parameter which finds the "Owned Resources".

"Owned Resources" for this Application Insights instance is the instance itself, and the query returns its full Azure ID, like: /subscriptions/YOURSUB/resourceGroups/rg-processing/providers/Microsoft.Insights/components/app-insights

We’re going to use this to extract the current resource group’s name.

Next we use an Azure Resource Graph query, where id == "{OwningAppInsights}" | project split(id, "/", 4)[0], to pull the Resource Group name out of this ID.

The query finds the Application Insights instance, then pulls out the Resource Group it's deployed into (this is what the split is doing on the Azure ID; segment 4 of the ID is the group name). We add this as a new parameter called "ResourceGroup". Notice this param can depend on the previous param, "OwningAppInsights", which we just created.
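
If you want to sanity-check the split logic outside the workbook editor, roughly the same query can be run with the Az.ResourceGraph PowerShell module; a minimal sketch, where the resource ID is a placeholder and rgName is just my own alias:

# Pull segment 4 (the resource group name) out of a resource ID;
# substitute a real Application Insights resource ID for the placeholder
Search-AzGraph -Query @"
resources
| where id == '/subscriptions/YOURSUB/resourceGroups/rg-processing/providers/Microsoft.Insights/components/app-insights'
| project rgName = tostring(split(id, '/', 4)[0])
"@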

Now we can create our last workbook parameter, one which selects the App Service Plans in the resource group. This uses the output of the "ResourceGroup" parameter above to query for all the Plans in the group by filtering on the "type" of the resources in the group.

To find out which type you should use in the above query, run the following Azure Resource Graph query and review the results (note: turn "formatted results" off to see the original values, not the cleaned-up ones): where resourceGroup == "processing-myrg" | project type, name

So the query where resourceGroup == "{ResourceGroup}" and type == "microsoft.web/serverfarms" is returning all the resources that are serverfarms… this is internal Azure speak for App Service Plans.
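
You can test this filter from PowerShell too before wiring it into the parameter; a minimal sketch, reusing the example group name from above:

# List the App Service Plans (serverfarms) in the example group
Search-AzGraph -Query "resources | where resourceGroup == 'processing-myrg' and type == 'microsoft.web/serverfarms' | project name, id"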

We’ve ticked the box to “Allow multiple selections” and we’ve ticked “Hide parameter in reading mode” as we don’t want users of the workbook to change this manually.

Then we can use this parameter when setting up our metric graph: we select the "ResourceApplicationPlans" parameter from the drop-down and the graph now uses our auto-populated set of App Service Plans.

Now we're there: the code/JSON of the workbook no longer contains any hard-coded references to IDs.

You can see the "ResourceIds" field is now set by our "ResourceApplicationPlans" parameter, which is dynamically generated and selects all the App Plans that are deployed in the resource group the workbook is deployed in.

We can now automate the deployment of the workbook without templating the JSON!

Bonus: if you add a new App Plan the workbook will pick it up and start graphing it. You can use the same approach to add parameters detecting other resource types, like Cosmos, and graph those too.
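
For example, a sketch of the Cosmos DB variant of the same parameter query; microsoft.documentdb/databaseaccounts is the type the discovery query returns for Cosmos accounts:

# Cosmos DB accounts in the same example group
Search-AzGraph -Query "resources | where resourceGroup == 'processing-myrg' and type == 'microsoft.documentdb/databaseaccounts' | project name, id"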
