JasonJoel
Enthusiast
Posts: 32
Liked: 9 times
Joined: May 24, 2011 3:17 pm
Full Name: Jason Bottjen

v6 Upgrade Losing Repository Credentials for CIFS

Post by JasonJoel »

I saw in the release notes that I would lose CIFS credentials when upgrading to v6, so it wasn't a surprise. But man, what a pain in the rear end to dig out of.

We chose to have a separate job for each VM, and we stored the files in a subfolder on a network share (a separate subfolder for each VM). So now, after the upgrade to v6, we have a bazillion backup repositories, none of which work since they lost their credentials. We have to go edit all of them by hand and fix the credentials. Not very user friendly.

Maybe we'll finally just give up and do multiple VMs in the same job and be done with it.

Jason Bottjen
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by Gostev »

JasonJoel wrote: Maybe we'll finally just give up and do multiple VMs in the same job and be done with it.
That's right :D one job per VM is really poor design, unless you only have a handful of VMs and no plans to grow. Thanks!
JasonJoel
Enthusiast
Posts: 32
Liked: 9 times
Joined: May 24, 2011 3:17 pm
Full Name: Jason Bottjen

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by JasonJoel »

Understood. :)

We started that way more out of fear: we were uncomfortable having all VMs in a single huge VBK backup file (1 corrupt file = dozens of lost backups). Probably an overly conservative approach.

Jason
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by Gostev »

JasonJoel wrote: (1 corrupt file = dozens of lost backups)
This is not so. You will only be unable to restore the VMs sharing the "corrupted" block; other VMs will restore fine. And considering that a shared block is usually an OS file block (because that is what gets deduped), it is really not something to be afraid of losing. As for the actual data, you will still be able to restore it with file-level restore, for instance.
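
If it helps, here is a little toy model in Python of what I mean. This is a sketch of my own for illustration, not our actual backup file format:

import hashlib

# Toy dedupe store: one stored copy per unique block, plus a reference map.
def block_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical per-VM block streams; the OS block is identical across VMs.
vm_blocks = {
    "vm-web": [b"os-kernel-block", b"web-data-block"],
    "vm-db":  [b"os-kernel-block", b"db-data-block"],
    "vm-app": [b"app-data-block"],
}

store = {}  # block id -> the single stored copy of that block
refs = {}   # block id -> set of VMs whose restore needs this block
for vm, blocks in vm_blocks.items():
    for data in blocks:
        bid = block_id(data)
        store[bid] = data
        refs.setdefault(bid, set()).add(vm)

# Simulate media corruption of the shared (deduped) OS block.
corrupted = block_id(b"os-kernel-block")
affected = refs[corrupted]

print("restore fails for:", sorted(affected))                   # vm-db, vm-web
print("restore works for:", sorted(set(vm_blocks) - affected))  # vm-app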

More generally speaking, it does not matter whether you lose 1 VM or 10 VMs. What matters here is that you seem to be willing to accept data loss, albeit of only 1 VM. But data loss is completely unacceptable. That 1 VM in a job may very well be worth all of the remaining VMs together. So, the key is to design your backup to make sure you NEVER lose data, under ANY circumstances.

Your fear of corruption at the file level does have its grounds; disk read and write errors have happened to most of us at least once in a lifetime. No media available today is perfectly reliable. Which is exactly why they say "a backup is not a backup until you have 3 copies of it". Even if you get corruption on one medium, the other 2 copies are still good.

To share how we ourselves are doing backups at our main IT site:
1. Regular backups to our main backup repository on-site.
2. Full backups of all data taken off-site once a week to a different part of the city (on external hard drives).
3. The most critical VMs are additionally backed up across the Atlantic to the repository in our US datacenter (with separate jobs).

As you can see, this keeps us well covered, and not only against simple block-level corruption in our main backup repository - our data would even survive a flood, a plane crashing into the main datacenter, or even a revolution. :D
JoshF
Enthusiast
Posts: 27
Liked: 1 time
Joined: Sep 06, 2011 5:57 am
Full Name: Josh

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by JoshF »

I feel compelled to ask Gostev why breaking VMs out into a single job for each VM is such a bad design. Following the Veeam recommendations, we have found ourselves in a world of pain trying to get larger backup jobs replicated off-site outside of a Veeam replication infrastructure (failing Veeam services over to DR so that operational backups just keep going).
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by tsightler »

Mainly, I would say, because you don't get the full advantage of deduplication, and because it doesn't scale very well past a few dozen VMs from a scheduling standpoint.
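
To put a toy number on the dedupe point (the block counts below are made up for illustration, they are not real Veeam figures):

# Dedupe works within a job's backup file, so it cannot span separate jobs.
def stored_blocks(jobs):
    """Each job stores its own set of unique blocks."""
    return sum(len({block for vm in job for block in vm}) for job in jobs)

# Ten VMs: 80 identical OS blocks each, plus 20 unique data blocks each.
vms = [{f"os-{i}" for i in range(80)} | {f"vm{n}-data-{i}" for i in range(20)}
       for n in range(10)]

one_job_all_vms = [vms]                 # one dedupe domain for all ten VMs
one_job_per_vm = [[vm] for vm in vms]   # ten separate dedupe domains

print("one job, ten VMs:", stored_blocks(one_job_all_vms))  # 80 + 200 = 280
print("ten one-VM jobs: ", stored_blocks(one_job_per_vm))   # 10 * 100 = 1000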

I'm curious how having many smaller files, which overall add up to more data, is better for getting files off-site. Is it simply a matter of being able to sync multiple files simultaneously?
JoshF
Enthusiast
Posts: 27
Liked: 1 time
Joined: Sep 06, 2011 5:57 am
Full Name: Josh

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by JoshF »

As I'm trying to do 4-hour backups of the most critical VMs, I think there is no ideal solution from a scheduling standpoint, but multiple v6 worker proxies may help get the times down on larger jobs.

The issue I'm finding is that files in excess of 1 TB never seem to get to the remote site in an appropriate period of time when the entire file is copied. And with rsync performance being tied to the amount of time it takes to do MD5 hashes of the files, the time to do a differential transfer grows to exceed the time taken to copy the file whole. Split out the data, though, and there is a significant decrease in the amount of time a differential copy takes, and the link isn't slammed with more data either.
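
To show what I mean, here is a back-of-envelope model in Python. The disk, WAN, and change-rate numbers are illustrative assumptions, not measurements from our environment:

# rsync has to read and checksum a changed file on BOTH ends before it can
# send deltas, so the scan cost is paid in full however small the change is.
def rsync_hours(file_gb, changed_gb, disk_mb_s=100.0, wan_mb_s=5.0):
    scan = 2 * file_gb * 1024 / disk_mb_s  # read + hash on source and target
    xfer = changed_gb * 1024 / wan_mb_s    # only the deltas cross the link
    return (scan + xfer) / 3600

# One monolithic 1 TB VBK with ~10 GB of daily change inside it:
print(f"monolithic file: {rsync_hours(1024, 10):.1f} h")    # ~6.4 h

# The same data split into 20 per-VM files (~51 GB each); rsync's quick
# size/mtime check skips the 16 untouched files, so only 4 get a full scan:
print(f"split files:     {4 * rsync_hours(51, 2.5):.1f} h")  # ~1.7 h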

Not 'best practice', but I really want to know why this is 'bad practice'!
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: v6 Upgrade Losing Repository Credentials for CIFS

Post by Gostev »

Bad practice for the 2 reasons stated by Tom. But of course, it's fine to do this for a few selected VMs.