How to copy your Proxmox backups with AzCopy to Azure Storage Containers

Recently I ran out of storage for my homelab, so I bought a used NAS (Synology DS214play) to have some more capacity for Proxmox backups and OpenStreetMap data. I still had a 1 TB HDD lying around at home, which I now use for the Proxmox backups.

To have some redundancy (and to learn something new) I decided to copy the Proxmox backups to the cloud, specifically to an Azure Storage Account with AzCopy. In the following I will describe in more detail how I set this up.

Overall this article will cover the following topics:

  • Creating an Azure Storage Account
  • Getting started with AzCopy
  • Creating a bash script to copy the Proxmox backups to an Azure Storage Container

Creating an Azure Storage Account

First of all you need an active Azure subscription and a storage account to be able to store your backups. In the Azure Portal you can search for the service "Storage Accounts", which you will need.

In the service "Storage Accounts" you can create a new storage account (or create it with the Azure CLI, as sketched after this list). For the storage account you will need

  • an active Azure subscription,
  • a resource group (create one if you don't have one yet, e.g. RG-HOMELAB),
  • a storage account name,
  • a region,
  • a redundancy option (e.g. LRS) and
  • the "Cold" access tier (see the "Advanced" tab).
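
If you prefer the command line over the portal, the resource group and the storage account can also be created with the Azure CLI. This is only a minimal sketch: the resource group name comes from the example above, while the storage account name and the region are placeholders you should adjust to your environment:

# Create a resource group for the homelab resources
az group create --name RG-HOMELAB --location westeurope

# Create the storage account with locally-redundant storage (LRS)
# "mystorageaccount" is a placeholder and must be globally unique
az storage account create \
  --name mystorageaccount \
  --resource-group RG-HOMELAB \
  --location westeurope \
  --sku Standard_LRS \
  --kind StorageV2

Depending on your Azure CLI version the access tier can also be set here with --access-tier; otherwise configure it in the portal as described above.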

You can keep all the other settings at their defaults. After your storage account has been deployed you can add a lifecycle rule under "Lifecycle Management" which will move files from the "Cold" access tier to archive storage.

For example, I created a rule which moves all new files to the archive storage tier one day after they were created.

By storing files in archive storage instead of the regular "Cold" access tier you can save about 82% on storage costs. Keep in mind, however, that accessing data in the archive tier is more expensive than in the Cold (or any other) access tier.

You could also create another rule which, for example, deletes all blobs that were created more than 365 days ago.
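
If you want to manage these lifecycle rules as code instead of clicking through the portal, a policy combining both rules could be applied with the Azure CLI. This is a rough sketch; the storage account and resource group names are the placeholders from above and the day thresholds match the rules described here:

# Lifecycle policy: move blobs to the archive tier 1 day after their last
# modification and delete them after 365 days
cat > lifecycle-policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-and-expire-backups",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 1 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account
az storage account management-policy create \
  --account-name mystorageaccount \
  --resource-group RG-HOMELAB \
  --policy @lifecycle-policy.json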

Please have a look at https://azure.microsoft.com/en-us/pricing/details/storage/blobs/ for up-to-date Azure Storage pricing.

After the storage account has been configured you will need to create a container where the actual files will be stored. Go to "Data storage" -> "Containers" and create a container. Again, name it however you want.

Because the current version of AzCopy V10 does not support Azure AD authorization in cron jobs, I used a SAS token to be able to upload files to the container. You can create a SAS token in the container under "Shared access tokens".

For the shared access token you will need to select the Add, Create and Write permissions and set an expiry date for security reasons. Then you can generate the SAS token and URL. Copy the Blob SAS URL, because you will need it for the upload script.
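
A SAS token with the same permissions can also be generated from the command line. A minimal sketch, assuming a container named "backups" and authorization via the storage account key (the account name, key and expiry date are placeholders):

# Generate a SAS token for the "backups" container with Add/Create/Write permissions
az storage container generate-sas \
  --account-name mystorageaccount \
  --name backups \
  --permissions acw \
  --expiry 2024-01-01T00:00:00Z \
  --account-key "<your-storage-account-key>" \
  --output tsv

# The Blob SAS URL used by the upload script then has the form:
# https://mystorageaccount.blob.core.windows.net/backups?<generated-sas-token>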

Getting started with AzCopy

"AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This article helps you download AzCopy, connect to your storage account, and then transfer data." (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10)

To get AzCopy for Linux you have to download a tar file and decompress it anywhere you like. AzCopy is a single executable file, so nothing has to be installed and you can use it right away.

# Download AzCopy
cd ~
wget https://aka.ms/downloadazcopy-v10-linux

# Expand the archive
tar -xvf downloadazcopy-v10-linux

# (Optional) Remove an existing AzCopy version
rm /usr/bin/azcopy

# Move AzCopy to the destination where you want to store it
cp ./azcopy_linux_amd64_*/azcopy /usr/bin/

# Remove the downloaded archive and the extracted directory
rm downloadazcopy-v10-linux
rm -r azcopy_linux_amd64_*/
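
You can check that the installation worked with:

azcopy --version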

If you copied azcopy to /usr/bin as shown above, that directory is most likely already on your PATH and you can run azcopy from any directory. If you stored the executable somewhere else, you can add its directory (not the file itself) to your PATH with:

nano ~/.profile

and then adding this line, replacing the path with the directory where you stored azcopy:

export PATH=/path/to/azcopy-directory:$PATH

Lastly, reload your profile so the change takes effect:

source ~/.profile

Creating a bash script to copy the Proxmox backup directory to an Azure Storage Container

The only piece missing now is the script which will upload the Proxmox backup files to the previously created Azure Storage Container after the backup task has finished. For copying the backups to Azure we will use azcopy copy instead of azcopy sync, because a copy operation does not need to index the source and destination before moving files, so it uses less memory and incurs lower billing costs. With the --overwrite=false flag AzCopy skips files that already exist in the container, so only new backup files are uploaded; this reduces bandwidth usage and works perfectly with the previously created lifecycle rule.
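
Before wiring the upload into Proxmox you can test it manually with the same command the script below uses. The source path and SAS URL here are just placeholders; recent AzCopy versions also support a --dry-run flag if you only want to preview which files would be uploaded:

# Preview which backup files would be uploaded (no data is transferred)
azcopy copy "/mnt/pve/xyz/dump/*" "<Blob SAS URL>" --overwrite=false --dry-run

# Run the actual upload
azcopy copy "/mnt/pve/xyz/dump/*" "<Blob SAS URL>" --overwrite=false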

For automatically starting the upload after the backup has finished we can use a hook script for vzdump. To do this, add the following line to the end of the "/etc/vzdump.conf" file:

script: /home/youruser/scripts/upload-backups-to-azure.sh

Afterwards you can create the script which will upload the files with:

cd ~
mkdir scripts
cd scripts
nano upload-backups-to-azure.sh

Then copy and paste the following content into the file and replace the value of src with the location of your dumps. Note the "/*" at the end of src, which makes sure that only the files inside the directory are copied. Also replace token with the Blob SAS URL.

#!/bin/bash
# Script to upload Proxmox backups to Azure Storage

src="/mnt/pve/xyz/dump/*"
token="Blob SAS URL"

dobackup(){
  echo "Uploading Proxmox backups from $src to Azure..."
  azcopy copy "$src" "$token" --overwrite=false
  echo "Finished Uploading!"
}

if [ "$1" == "job-end" ]; then
  dobackup
fi

exit 0

Close the file and make it executable for the user with:

chmod +x ~/scripts/upload-backups-to-azure.sh

The next time your backup task finishes, the files will automatically be uploaded to your Azure Storage Container. Because the upload runs as a vzdump hook script, you can check the status of the copy process in the Proxmox UI.

First published January 1, 2023
