How to copy your Proxmox backups with AzCopy to Azure Storage Containers

Recently I ran out of storage in my homelab, so I bought a used NAS (Synology DS214play) to have some more capacity for Proxmox backups and OpenStreetMap. I still had a 1 TB HDD lying around at home, which I now use for Proxmox backups.

To have some redundancy (and to learn something new) I decided to copy the Proxmox backups to the cloud, in particular to an Azure Storage Account with AzCopy. In the following I will describe in more detail how I set this up.

Overall this article will cover the following topics:

  • Creating an Azure Storage Account
  • Getting started with AzCopy
  • Creating a bash script to copy the Proxmox backups to an Azure Storage Container

Creating an Azure Storage Account

First of all, you need an active Azure subscription and a storage account to store your backups. In the Azure Portal, search for the "Storage Accounts" service.

In the "Storage Accounts" service you can create a new storage account. For the storage account you will need

  • an active Azure subscription,
  • a resource group (create one if you don't have one, e.g. RG-HOMELAB),
  • a storage account name,
  • a region,
  • a redundancy option (e.g. LRS) and
  • the access tier "Cold" (see the "Advanced" tab).
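
If you prefer the command line, the resource group and the storage account can also be created with the Azure CLI. This is just a minimal sketch; the account name and region are placeholders, and the "Cold" access tier requires a fairly recent version of az:

# Create the resource group (skip if it already exists)
az group create --name RG-HOMELAB --location westeurope

# Create the storage account with LRS redundancy and the Cold access tier
az storage account create \
  --name <globally-unique-account-name> \
  --resource-group RG-HOMELAB \
  --location westeurope \
  --sku Standard_LRS \
  --access-tier Cold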

You can keep all the other settings at their defaults. After your storage account has been deployed, you can add a lifecycle rule under "Lifecycle management" which will move files from the "Cold" access tier to archive storage.

For example, I created a rule which moves all new files to the archive storage tier after one day.

By storing files in archive storage instead of in the regular "Cold" access tier you can save about 82%. Keep in mind, though, that accessing data in the archive tier is more expensive than in the Cold (or any other) access tier.

You could also create another rule which will, for example, delete all blobs that were created more than 365 days ago.
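
Both rules can also be defined as a JSON lifecycle policy and applied through the Azure CLI. The following is a minimal sketch, assuming the placeholder account name and the resource group from above:

# Write the lifecycle policy: archive after 1 day, delete after 365 days
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-then-delete",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 1 },
            "delete": { "daysAfterCreationGreaterThan": 365 }
          }
        },
        "filters": { "blobTypes": [ "blockBlob" ] }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account
az storage account management-policy create \
  --account-name <globally-unique-account-name> \
  --resource-group RG-HOMELAB \
  --policy @policy.json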

Please have a look at https://azure.microsoft.com/en-us/pricing/details/storage/blobs/ for up-to-date Azure Storage pricing.

After the storage account has been configured you will need to create a container where the actual files will be stored. Go to "Data storage" -> "Containers" and create a container; name it however you want.

Because the current version of AzCopy V10 does not support Azure AD authorization in cron jobs, I used a SAS token to upload files to the container. You can create a SAS token on the container under "Shared access tokens".

For the shared access token you will need to select the Add, Create and Write permissions and, for security reasons, set an expiry date. Then you can generate the SAS token and URL. Copy the Blob SAS URL, because you will need it for the upload script.
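
Alternatively, the SAS token can be generated with the Azure CLI. A sketch, assuming key-based authorization and placeholder account, container and key values; the Blob SAS URL is simply the container URL with the token appended as a query string:

# Generate a container SAS with Add/Create/Write permissions
az storage container generate-sas \
  --account-name <account> \
  --name <container> \
  --permissions acw \
  --expiry 2026-01-01T00:00Z \
  --auth-mode key \
  --account-key "<account-key>" \
  --output tsv

# Resulting Blob SAS URL:
# https://<account>.blob.core.windows.net/<container>?<sas-token>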

Getting started with AzCopy

AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. This article helps you download AzCopy, connect to your storage account, and then transfer data. (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10)

To get AzCopy for Linux you download a tar file and decompress it anywhere you like. You can then use AzCopy directly, because it is a single executable file; nothing has to be installed.

# Download AzCopy
cd ~
wget https://aka.ms/downloadazcopy-v10-linux

# Expand the archive
tar -xvf downloadazcopy-v10-linux

# (Optional) Remove an existing AzCopy version
rm -f /usr/bin/azcopy

# Move AzCopy to the destination where you want to store it
cp ./azcopy_linux_amd64_*/azcopy /usr/bin/

# Remove the downloaded files from home
rm downloadazcopy-v10-linux
rm -r azcopy_linux_amd64_*/

Because /usr/bin is already on your system PATH, you can now type azcopy from any directory. If you copied the executable to a different location instead, add that directory to your PATH by editing your profile:

nano ~/.profile

and then adding this line (with the directory that contains the azcopy binary):

export PATH=/path/to/azcopy-directory:$PATH

Lastly, reload your profile so the change takes effect:

source ~/.profile
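
You can verify that everything works by printing the AzCopy version from any directory:

azcopy --version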

Creating a bash script to copy the Proxmox backup directory to an Azure Storage Container

The only piece missing now is the script which will upload the Proxmox backup files to the previously created Azure storage container after the backup task has finished. For copying the backups to Azure we will use azcopy copy instead of azcopy sync, because a copy operation does not need to index the source or destination before moving files, so it uses less memory and incurs lower billing costs. Combined with the --overwrite=false flag, AzCopy will skip files that already exist in the container and only upload new backups, which reduces bandwidth usage and works perfectly with the previously created lifecycle rule.
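
As a quick illustration, such a copy invocation looks like the following; the account, container and SAS token are placeholders for your own values:

azcopy copy "/mnt/pve/xyz/dump/*" "https://<account>.blob.core.windows.net/<container>?<sas-token>" --overwrite=false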

To start the upload automatically after the backup has finished, we can use a hook script for vzdump. To register it, add the following line to the end of the "/etc/vzdump.conf" file:

script: /etc/your-custom-script.sh

Afterwards you can create the script which will upload the files:

cd ~
mkdir scripts
cd scripts
nano upload-backups-to-azure.sh

Then copy and paste the following content into the file, replacing the value of src with the location of your dumps. Note the "/*" at the end of src, so that only the files inside the directory are copied. Also replace the value of token with the Blob SAS URL.

#!/bin/bash
# Hook script to upload Proxmox backups to Azure Storage.
# vzdump calls this script with the current phase as the first argument.

src="/mnt/pve/xyz/dump/*"
token="Blob SAS URL"

dobackup(){
  echo "Uploading Proxmox backups from $src to Azure..."
  # --overwrite=false skips files that already exist in the container
  azcopy copy "$src" "$token" --overwrite=false
  echo "Finished Uploading!"
}

# Only upload once the whole backup job has finished
if [ "$1" == "job-end" ]; then
  dobackup
fi

exit 0

Close the file and make it executable with:

chmod +x ~/scripts/upload-backups-to-azure.sh
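
You don't have to wait for the next backup run to test the script; you can trigger the upload manually by passing the job-end phase yourself:

~/scripts/upload-backups-to-azure.sh job-end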

Now, the next time your backup task finishes, the files will be automatically uploaded to your Azure storage container. Thanks to the hook script, you can check the status of the copy process in the Proxmox UI.

First published January 1, 2023
