What happens if you create a full backup and subsequent incrementals, and the oldest incremental is deleted because the configured number of rollback points is reached? Will the oldest incremental be committed to the last full backup or simply deleted?
If it's committed to the last full backup, you could theoretically run a full backup only once per VM and then run incrementals for years to come.
Veeam uses "reverse incrementals". The VBK file is always the most recent FULL backup, and the VBR file are "rollbacks". In other words, when you run an incremental, the changes are written to the FULL backup, and the old blocks are moved to the rollback files. To remove the oldest rollback it's pretty much a matter of just deleting it, now you can't roll back to that "reverse incremental" anymore.
So, considering how synthetic backup works, the largest file will be "touched" every backup cycle.
For those of us who are forced (by policy) to use tape as a secondary backup, what's the best strategy for backing up the .VBK / .VBR files?
Since the .VBK file changes every night, the tape backup will need to back up the .VBK plus the latest .VBR file every time.
Incremental-forever to tape (TSM) won't be very efficient this way, since it will always have to do a complete backup of the changed .VBK file.
Any tips on how to minimize or avoid this?
I guess this problem is the same for those who FTP/SCP the backup folders off-site.
Yes, this is a problem. We spool to tape daily and it's a major pain. I've been trying to work with our retention policy to allow us to spool to tape once a week, since our disk backups are "offsite", but we have a requirement for our backups to also be "offline". Even having to rsync the VBK files takes a very long time (hours). There is really no way to mitigate this issue that I'm aware of; you just have to accept that you've got to copy a lot more data to tape or offsite. If you have a powerful Veeam server, use "best" compression and the smallest number of jobs (to maximize de-dupe), and you can minimize the size of the data that has to be copied.
Veeam has stated that they will include a "true" incremental in a future version of the product. This will certainly help the situation for those of us that need to be able to get our backups to tape each night.
I'm hoping that I can get some changes to our retention policy that allow for "offline" disk backups. My thought is that, since we mostly use iSCSI, I could create two or three volumes, perhaps even on two or three different low-cost arrays, and write a script that mounts a different volume each night, rsyncs the data to one of the remote volumes, and then drops the iSCSI connection to that LUN when the backup is complete. That way, in the event of a virus that somehow wipes all disks (not very likely, I know), the data on the iSCSI volume would still be safe, since the virus wouldn't know to mount it. I suppose this doesn't protect against an administrator who intentionally wipes all the data.
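A rough sketch of that rotation idea in Python, using open-iscsi and rsync. The portal address, target IQNs, device path, and folder locations are placeholders and would need to be adapted to the actual environment; this is just one way the nightly "mount, copy, disconnect" cycle could look.

```python
#!/usr/bin/env python3
# Sketch: log in to a different iSCSI LUN each night, rsync the Veeam
# backup folder to it, then log out so the volume is "offline" the rest
# of the time. All names below are placeholders.

import datetime
import subprocess

PORTAL = "192.0.2.10"                        # placeholder iSCSI portal address
TARGETS = [                                   # one LUN per rotation slot
    "iqn.2009-01.example:backup-vol1",
    "iqn.2009-01.example:backup-vol2",
    "iqn.2009-01.example:backup-vol3",
]
DEVICE = "/dev/disk/by-path/..."              # resolve tonight's LUN block device here
MOUNT_POINT = "/mnt/offline-backup"
SOURCE = "/backups/veeam/"                    # folder holding the .VBK/.VBR files

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # pick tonight's LUN based on the day number
    target = TARGETS[datetime.date.today().toordinal() % len(TARGETS)]

    run(["iscsiadm", "-m", "node", "-T", target, "-p", PORTAL, "--login"])
    try:
        run(["mount", DEVICE, MOUNT_POINT])
        try:
            run(["rsync", "-a", "--delete", SOURCE, MOUNT_POINT + "/"])
        finally:
            run(["umount", MOUNT_POINT])
    finally:
        # drop the session so the volume is unreachable until its next turn
        run(["iscsiadm", "-m", "node", "-T", target, "-p", PORTAL, "--logout"])

if __name__ == "__main__":
    main()
```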
tsightler wrote: Veeam has stated that they will include a "true" incremental in a future version of the product. This will certainly help the situation for those of us that need to be able to get our backups to tape each night.
Correct, and not just some "future release far far away" - we are already developing this functionality, so this should be out in just a few months.
Gostev wrote:
Correct, and not just some "future release far far away" - we are already developing this functionality, so this should be out in just a few months.
Right, but since I'm not a Veeam representative or anything, I didn't want to comment on the "when". I think this feature has some great potential, especially if you guys go forward with another feature I've heard you mention in another thread, specifically the idea of a "consolidate helper" to combine rollbacks so they become less granular as they age. If you do create a "consolidate" type feature, it could also be used to roll a full backup with incrementals "forward". You could run a full, then a week of incrementals, and then, instead of running another full, simply "consolidate" all of the incrementals into the previous full to create the new "synthetic full". I don't know if you've thought about that already, and I'm not expecting something like this in the next release, but if it could be put on the drawing board, it would be absolutely perfect for us.
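A toy sketch of what "rolling forward" could mean at the block level, again in Python. This is purely illustrative and not a description of how Veeam builds synthetic fulls; the data and function names are invented.

```python
# Sketch of the "consolidate forward" idea: merge a week of forward
# increments into the previous full to produce a new synthetic full,
# instead of reading everything from production again.

def consolidate(full, increments):
    """Apply forward increments (oldest first) on top of a full backup."""
    synthetic_full = dict(full)
    for inc in increments:
        synthetic_full.update(inc)   # newer blocks win
    return synthetic_full

full = {0: "A0", 1: "B0", 2: "C0"}
week_of_increments = [{1: "B1"}, {2: "C1"}, {1: "B2"}]

new_full = consolidate(full, week_of_increments)
print(new_full)   # {0: 'A0', 1: 'B2', 2: 'C1'} -- the starting point for the next cycle
```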
I would also like to see a "Veeam Remote Storage" service that runs off-site and maintains a replica of the backup directories. It would certainly be faster than rsync, since the backup job could communicate with the "Veeam Remote Storage" service and apply the same algorithms it uses for updating the vbk/vbr files locally. For us it would take care of off-site backup storage (and we might even ditch tape backups).
@alexr I like that idea and second it.
I've been trying to discuss a method of doing both replication and backup of VMs without hitting the production environment twice.
Since Veeam already knows what has changed and injects these changes into the .VBK, it would be nice to have an option to inject those changes into an off-site storage unit as well, perhaps in a delayed fashion.
Could the changes being injected into the .VBKs be stored in a local, temporary space on disk, so that they could be shipped off and injected at a second site before being deleted?
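To show what such a "ship the change set" flow might look like conceptually, here is a small Python sketch. The change-file format (JSON), file names, and block data are all invented for illustration; this is not an existing Veeam feature or format.

```python
# Sketch: keep the changed blocks that were injected into the local .VBK
# in a small temporary change file, ship that file to the second site,
# and apply it to the off-site copy there before deleting it.

import json

def save_change_set(changed_blocks, path):
    with open(path, "w") as f:
        json.dump(changed_blocks, f)

def apply_change_set(remote_full, path):
    with open(path) as f:
        changes = json.load(f)
    remote_full.update({int(k): v for k, v in changes.items()})

# local side: blocks that tonight's job wrote into the local full
changed_tonight = {1: "B1", 7: "H3"}
save_change_set(changed_tonight, "changes-night1.json")
# ... ship the small change file off-site (FTP/SCP/etc.) instead of the whole .VBK ...

# remote side: bring the off-site full up to date, then discard the change file
offsite_full = {0: "A0", 1: "B0", 7: "H0"}
apply_change_set(offsite_full, "changes-night1.json")
print(offsite_full)   # {0: 'A0', 1: 'B1', 7: 'H3'}
```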
Changed block tracking will still work, and the required disk space will be the same.
The only big difference is that the latest backup will no longer always be a full (obviously, because there will be true increments).