
TrueNAS storage controller pass-through with Windows Hyper-V (DDA)

Hyper-V on Server 2019 supports Discrete Device Assignment (DDA), which allows PCIe devices to be assigned directly to underlying VMs. This threw me off, as my searches for "device pass-through" didn't return any results!

Typically this is used with graphics cards, and the docs talk extensively about doing just that. What I wanted to do was pass through an LSI SAS controller to my TrueNAS VM.

Here are my learnings:

  1. Enable SR-IOV and I/O MMU. Guide here.
  2. Download and run the Machine Profile Script. This will tell you whether your machine is set up to support pass-through. If things are good you'll see something like this (but with your LSI adapter's name, not mine – my LSI is already set up so it doesn't show here). Make a note of the `PCIROOT` portion; we'll need that later.
  3. Work through steps 1 and 2 in a tight loop to make sure you're all set up right. My BIOS settings weren't clearly labelled, so I did a couple of loops here trying different settings for the chipset, PCIe and other bits.
  4. Find and disable the LSI adapter in Device Manager. The easiest way I found to do this is to find a hard drive you know is attached to the adapter, then switch the Device Manager view to "by connection"; the drive you selected will now show under the LSI adapter. Right-click the adapter and click disable (note: at this point you'll lose access to those drives). Reboot.
  5. Run the following script, replacing $locationPath with the `PCIROOT` line from the Machine Profile script and truenascore with your VM's name.
# Get the VM and the location path noted from the Machine Profile script output
$vm = Get-VM -Name truenascore
$locationPath = "PCIROOT(0)#PCI(0102)#PCI(0000)#PCI(0200)#PCI(0000)"

# Detach the device from the host, then assign it to the VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force -Verbose
Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
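
If you want to check the device made it over, or undo the whole thing later, the counterpart Hyper-V cmdlets look like this (a sketch reusing the same $locationPath and VM name as above):

# List the devices currently assigned to the VM
Get-VMAssignableDevice -VMName truenascore

# Reverse the process: detach from the VM, then hand the device back to the host
Remove-VMAssignableDevice -VMName truenascore -LocationPath $locationPath -Verbose
Mount-VMHostAssignableDevice -LocationPath $locationPath -Verbose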

Boot the VM and you're done.

One thing to note: I tried to pass through the built-in AMD storage controller with -Force even though the Machine Profile script said it wouldn't work. It did kind of work, showing one of the disks, but it also made the machine very unstable, rebooting the host whenever the VM was shut down. Best to listen to the output of the script and only try to pass through devices that show up green!

I've now run for a couple of days with the LSI adapter passed through and loaded about 2TB onto a RAIDZ2 pool of 5x3TB disks, and so far everything is working well.


TrueNAS OneDrive Cloud Sync corrupted on transfer

This is a quick one. If you get the following error or similar:

2021/06/07 00:15:35 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: sizes differ 189118 vs 130560
2021/06/07 00:15:35 Failed to copy: corrupted on transfer: sizes differ 189118 vs 130560

These track back to an issue where OneDrive's metadata generation alters the size of the file. You can see details on the issue here: https://github.com/rclone/rclone/issues/399

To resolve this you need to add --ignore-size to the rclone config that TrueNAS creates.
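
If you want to confirm the flag fixes the transfer before touching the database, you can run rclone by hand with the same flag (a sketch – the remote name onedrive and the paths here are hypothetical):

# Retry the failing sync with file-size checks disabled
rclone copy onedrive:Documents /mnt/tank/onedrive-backup --ignore-size -v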

While the UI doesn't expose an extra-args field, it is present in the underlying database. This post guides you through how to add additional args: https://www.truenas.com/community/threads/cloud-sync-task-add-extra-rclone-args-to-specify-azure-archive-access-tier.85526/

For this OneDrive error the following works (assuming you only have one Cloud Sync task, so its ID == 1):

$ sqlite3 /data/freenas-v1.db
sqlite> update tasks_cloudsync set args = '--ignore-size' where id = 1;

You can double-check the change with the following:

sqlite> .headers on
sqlite> select * from tasks_cloudsync;

Then it’s just a case of re-running the task in the UI and 🎉


Using your VSCode dev container as a hosted Azure DevOps build agent

Devcontainers are awesome for keeping tooling consistent across the team, so what about when you need to run your build?

There is already some great work out there on using these as part of a normal pipeline (shout out to Eliise!), but what if you need your build agent to be inside a virtual network in Azure?

The standard approach would be to create a VM, set up the tools and join it to Azure DevOps as an agent.

As we’ve already got a definition of the tooling we need, our devcontainer, can we reuse that to simplify things?

It turns out we can: using an Azure Container Registry, an Azure Container Instance and a few tweaks to our devcontainer, we can spin up a DevOps agent based on the devcontainer and start using it.

To do this we need to:

  1. Add the Azure DevOps agent script to your devcontainer
  2. Build the image and push it up to your Azure Container Registry following this guide (see the sketch after this list)
  3. Use Terraform to deploy the built container into an Azure Container Instance
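
For step 2, the build and push looks roughly like this (a sketch – it assumes the Dockerfile below lives in ./.devcontainer and reuses the placeholder registry name):

# Log in to the registry, then build and push the devcontainer image
az acr login --name your_repo_name_here
docker build -f .devcontainer/Dockerfile -t your_repo_name_here.azurecr.io/devcontainer:buildagent .devcontainer
docker push your_repo_name_here.azurecr.io/devcontainer:buildagent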

The snippets below assume you already have your agent built and pushed up to your Azure Container Registry with the name your_repo_name_here.azurecr.io/devcontainer:buildagent.

They show the .Dockerfile for the devcontainer, the bash script that starts the agent (a slight edit of the one in the docs here) and the Terraform to deploy it into a VNet.

You'll have to do some tweaks, so it's best to treat this as a starting point. See this doc for more detail on how it all works.

# .Dockerfile – a very basic devcontainer; note the final COPY pulling in the build agent start script
# https://github.com/Azure/azure-functions-docker/blob/master/host/3.0/buster/amd64/dotnet/dotnet-core-tools.Dockerfile
FROM mcr.microsoft.com/azure-functions/dotnet:3.0-dotnet3-core-tools
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
# Install system tools
RUN apt-get update \
  && apt-get -y install --no-install-recommends apt-utils nano unzip curl icu-devtools bash-completion jq
# Add AzureDevops build agent script
COPY ./buildagentstart.sh .
#!/bin/bash
# buildagentstart.sh
# This script comes from the following documentation
# See https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
set -e

if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi
  mkdir -p /azp/
  AZP_TOKEN_FILE=/azp/.token
  echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

rm -rf /azp/agent
mkdir /azp/agent
cd /azp/agent

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."
    ./config.sh remove --unattended \
      --auth PAT \
      --token $(cat "$AZP_TOKEN_FILE")
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

print_header "1. Determining matching Azure Pipelines agent..."

AZP_AGENT_RESPONSE=$(curl -LsS \
  -u user:$(cat "$AZP_TOKEN_FILE") \
  -H 'Accept:application/json;api-version=3.0-preview' \
  "$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")

if echo "$AZP_AGENT_RESPONSE" | jq . >/dev/null 2>&1; then
  AZP_AGENTPACKAGE_URL=$(echo "$AZP_AGENT_RESPONSE" \
    | jq -r '.value | map([.version.major,.version.minor,.version.patch,.downloadUrl]) | sort | .[length-1] | .[3]')
fi

if [ -z "$AZP_AGENTPACKAGE_URL" -o "$AZP_AGENTPACKAGE_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent - check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi

print_header "2. Downloading and installing Azure Pipelines agent..."

curl -LsS $AZP_AGENTPACKAGE_URL | tar -xz & wait $!

source ./env.sh

print_header "3. Configuring Azure Pipelines agent..."

./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token $(cat "$AZP_TOKEN_FILE") \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!

print_header "4. Running Azure Pipelines agent..."

trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run.sh & wait $!
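
Before deploying, you can sanity-check the image locally by running the agent with the variables set (a sketch – the PAT and org URL are placeholders, and --entrypoint bypasses whatever entrypoint the base image defines):

docker run --rm \
  --entrypoint bash \
  -e AZP_URL="https://dev.azure.com/your_org_here" \
  -e AZP_TOKEN="<your-pat-here>" \
  -e AZP_AGENT_NAME="local-test" \
  your_repo_name_here.azurecr.io/devcontainer:buildagent \
  ./buildagentstart.sh

And finally, the Terraform (main.tf) to deploy it into the VNet: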
# main.tf – deploys the devcontainer image as a build agent into an ACI on the VNet
variable "azp_docker_image" {
  description = "The docker image to use when running the build agent. This defaults to a build of ./.devcontainer pushed to the ACR container"
  type        = string
  default     = "your_repo_name_here.azurecr.io/devcontainer:buildagent"
}

variable "azp_token" {
  description = "The token used for the azure pipelines build agent to connect to Azure DevOps"
  type        = string
  default     = ""
}

variable "azp_url" {
  description = "The url of the Azure DevOps instance for the agent to connect to eg: https://dev.azure.com/yourOrg"
  type        = string
  default     = "https://dev.azure.com/your_org_here"
}

variable "docker_registry_username" {
  description = "Docker registry to be used for containers"
  default     = "your_repo_name_here"
}

variable "docker_registry_password" {
  description = "Docker registry password"
}

variable "subnet_id" {
  description = "Azure subnet ID the build agent should be deployed onto"
}

variable "docker_registry_url" {
  description = "Docker registry url"
  default     = "your_repo_here.azurecr.io"
}

# NOTE: var.resource_group_location, var.resource_group_name, var.tags and
# local.shared_env are assumed to be defined elsewhere in your configuration.
resource "azurerm_resource_group" "env" {
  location = var.resource_group_location
  name     = var.resource_group_name
  tags     = var.tags
}

resource "azurerm_network_profile" "buildagent" {
  name                = "acg-profile"
  location            = azurerm_resource_group.env.location
  resource_group_name = azurerm_resource_group.env.name

  container_network_interface {
    name = "acg-nic"

    ip_configuration {
      name      = "aciipconfig"
      subnet_id = var.subnet_id
    }
  }
}

resource "azurerm_container_group" "build_agent" {
  name                = "buildagent"
  location            = azurerm_resource_group.env.location
  resource_group_name = azurerm_resource_group.env.name
  tags                = var.tags
  network_profile_id  = azurerm_network_profile.buildagent.id
  ip_address_type     = "Private"
  os_type             = "Linux"

  image_registry_credential {
    username = var.docker_registry_username
    password = var.docker_registry_password
    server   = var.docker_registry_url
  }

  container {
    name     = "buildagent"
    image    = var.azp_docker_image
    cpu      = "1"
    memory   = "2"
    commands = ["bash", "-f", "./buildagentstart.sh"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    environment_variables = {
      // The URL of the Azure DevOps or Azure DevOps Server instance.
      AZP_URL = var.azp_url
      // Personal Access Token (PAT) with Agent Pools (read, manage) scope, created by a user who has permission to configure agents, at AZP_URL.
      AZP_TOKEN = var.azp_token
      // Agent name (default value: the container hostname).
      AZP_AGENT_NAME = local.shared_env.rg.name
      // Agent pool name (default value: Default).
      AZP_POOL = local.shared_env.rg.name
      // Work directory (default value: _work).
      AZP_WORK = "_work"
    }
  }
}
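
Once the agent container is up and has registered, point a pipeline at the pool to use it (the pool name must match the AZP_POOL value configured above):

# azure-pipelines.yml
pool:
  name: Default # must match AZP_POOL

steps:
- script: echo "running inside the devcontainer build agent"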