

From time to time, Veeam users in the forums ask for help with unexpected performance in their backup operations. Especially when it comes to I/O-intensive operations like Reversed Incremental, or the transform operation performed during synthetic fulls or the new forever forward incremental, people see their storage arrays running at low speed and backup operations taking a long time. I just published a dedicated white paper where I explain the different backup modes in Veeam, how they have huge differences in their I/O profiles, and how the choice of one method over another can have a great impact on final performance.

Another huge factor, often overlooked, is the stripe size of the underlying storage. Many storage arrays use a default stripe size of around 32 or 64KB. This is because they have to be general-purpose solutions, able to manage different workloads at the same time: datastores for VMware volumes, NFS or SMB shares, archive volumes. Since in many situations the different volumes are still carved out of the same RAID pool, that pool has a single stripe size value.

As you can read in the aforementioned white paper, however, Veeam uses a different, specific block size. The default value, listed in the interface as "Local Storage" under Storage optimization, is 1024KB. The other values are 8MB for the 16TB+ option, 512KB for the LAN target, and 256KB for the WAN target. Again, as explained in the white paper, thanks to compression these values are roughly halved by the time the block lands in the repository. So we can assume the default block size written to the repository is going to be 512KB.

With this value in mind, we can run some simulations using an online calculator.

UPDATE 18-10-2017: I've been notified by several users that the website I used for the calculations is no longer working. I tried without success to find another solution; if and when I find another online calculator with stripe size options, I will update the post again.

UPDATE 15-03-2018: thanks to one of my readers who suggested a new calculator, derived from the previous one, so it's basically the same.

Let's assume we have a common NAS machine using 8 SATA disks, as is commonly found in small shops. We apply the I/O profile of a transform operation, the same one we find in a Backup Copy Job or in the new forward incremental-forever backup method. This profile uses 512KB blocks and a 50/50 read/write mix. Bandwidth is going to be 13.50 MB/s.

Since half the I/O is used for reads, the amount of data that needs to be moved is double the size of the backup file: say you have to transform 1TB of data, you need to move 2TB of blocks. At this rate, the calculator shows the job will last 20.6 hours.

If you change only the storage stripe size to, for example, 256KB, the new bandwidth becomes 40.51 MB/s, three times the previous value! This means the same backup job will last 6.86 hours. Compare that with the previous result of 20.6 hours, and you can immediately understand the importance of configuring the right stripe size on your Veeam repository.

Also, the RAID level is important. We often suggest that customers, if budget allows, use non-parity RAID like RAID10. You can see the reason again in the tool.
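To make the duration arithmetic concrete, here is a minimal sketch (mine, not from the original post) that reproduces the two job durations quoted above. It assumes the calculator's bandwidth figure is the effective rate at which the source data is processed, and uses decimal terabytes; the function name is my own.

```python
def transform_hours(data_tb: float, bandwidth_mb_s: float) -> float:
    """Estimated duration of a transform job, in hours.

    Assumes decimal units (1 TB = 1,000,000 MB) and that the quoted
    bandwidth is the effective rate over the source data.
    """
    data_mb = data_tb * 1_000_000
    return data_mb / bandwidth_mb_s / 3600

# ~64KB stripe, 13.50 MB/s effective bandwidth
print(round(transform_hours(1, 13.50), 1))   # -> 20.6
# 256KB stripe, 40.51 MB/s effective bandwidth
print(round(transform_hours(1, 40.51), 2))   # -> 6.86
```

Under these assumptions the numbers line up with the calculator's results: roughly tripling the bandwidth cuts the job from about 20.6 hours to about 6.9.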

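As a quick reference, the Storage optimization block sizes discussed in the post, and the roughly halved size that lands on disk after compression, can be sketched like this (the dictionary name and layout are mine):

```python
# Block size per Veeam "Storage optimization" setting, as listed in the post.
# Compression roughly halves each block before it reaches the repository.
BLOCK_KB = {
    "16TB+": 8192,                   # 8MB
    "Local Storage (default)": 1024,
    "LAN target": 512,
    "WAN target": 256,
}

for setting, kb in BLOCK_KB.items():
    print(f"{setting}: {kb}KB block, ~{kb // 2}KB written to repository")
```

This is how the default 1024KB "Local Storage" setting ends up as the 512KB repository block used in the simulations.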