I have a case open with support (02057156) about our cloud backup job failing to run the incremental.
The initial full backup works perfectly, and is around 8TB in size. Support are saying the following:
"the full backup is too big to be opened again. That's why Veeam writes it successfully, but fails to use it for incremental job runs (Veeam needs to append metadata to it about incrementals). This is also to do with encryption and how we interact with linux repositories and linux limitations, so there's no easy fix right now".
Support and my cloud provider have asked me to break my job up into 10 Veeam jobs to get the size down.
I'm just not buying it. Does anyone out there have large cloud backup jobs similar to ours that are working fine?
To me, the implications for deduplication would be pretty significant, considering our VMs share a lot of similar data: split across ten jobs, that common data would be stored once per job instead of once overall.
Can anyone chime in on this? What are the actual limitations on full backup file size referred to above? Knowing the real number, I could plan the backup jobs around it, rather than simply creating 10 jobs, which seems like an arbitrary number.
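For what it's worth, if the cap turns out to be a concrete number, I'd rather group VMs by estimated size than pick a job count out of thin air. A minimal sketch of that idea, assuming a hypothetical 4 TB per-job cap and made-up VM sizes (that happen to total roughly our 8 TB), using simple first-fit-decreasing bin packing:

```python
# Sketch: group VMs into backup jobs so each job's estimated full backup
# stays under a size cap, instead of splitting into an arbitrary number
# of jobs. The cap and the VM names/sizes below are hypothetical.

MAX_FULL_TB = 4.0  # assumed per-job full backup cap; replace with the real limit

# Hypothetical (vm_name, estimated_full_backup_size_tb) pairs
vms = [
    ("sql01", 2.1), ("fs01", 1.8), ("exch01", 1.2),
    ("app01", 0.9), ("app02", 0.7), ("web01", 0.4),
    ("dc01", 0.3), ("dc02", 0.3), ("util01", 0.3),
]

def plan_jobs(vms, cap_tb):
    """First-fit-decreasing bin packing: place the largest VMs first,
    each into the first job that still has room under the cap."""
    jobs = []  # each job is [used_tb, [vm_names]]
    for name, size in sorted(vms, key=lambda v: v[1], reverse=True):
        if size > cap_tb:
            raise ValueError(f"{name} ({size} TB) alone exceeds the cap")
        for job in jobs:
            if job[0] + size <= cap_tb:
                job[0] += size
                job[1].append(name)
                break
        else:
            jobs.append([size, [name]])
    return jobs

for i, (used, names) in enumerate(plan_jobs(vms, MAX_FULL_TB), start=1):
    print(f"Job {i}: {used:.1f} TB -> {', '.join(names)}")
```

First-fit-decreasing isn't optimal, but the point is that the job count falls out of the size limit rather than the other way around.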
Shouldn't there be some handling built in, rather than Veeam writing a massive file out to the cloud that it already knows it won't be able to use?