- Enthusiast
- Posts: 32
- Liked: 9 times
- Joined: May 24, 2011 3:17 pm
- Full Name: Jason Bottjen
- Contact:
v6 Upgrade Losing Repository Credentials for CIFS
I saw in the release notes that I would lose CIFS credentials when upgrading to v6, so it wasn't a surprise. But man, what a pain in the rear end to dig out of.
We chose to have a separate job for each VM. And then we stored the files in a subfolder on a network share (separate subfolder for each VM). So now after the upgrade to v6, we have a bazillion Backup repositories, none of which work since they lost their credentials. Have to go edit all of them by hand and fix the credentials. Not very user friendly.
Maybe we'll finally just give up and do multiple VMs in the same job and be done with it.
Jason Bottjen
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
JasonJoel wrote: Maybe we'll finally just give up and do multiple VMs in the same job and be done with it.
That's right, one job per VM is really poor design, unless you only have a handful of VMs and no plans to grow. Thanks!
- Enthusiast
- Posts: 32
- Liked: 9 times
- Joined: May 24, 2011 3:17 pm
- Full Name: Jason Bottjen
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
Understood.
We started that way more out of fear. We were uncomfortable having all VMs in a single/huge backup VBK file (1 corrupt file = dozens of lost backups). Probably an overly conservative approach.
Jason
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
JasonJoel wrote: (1 corrupt file = dozens of lost backups)
This is not so. You will only be unable to restore those VMs sharing the "corrupted" block; other VMs will restore fine. Considering that the shared block is usually an OS file block (because this is what gets deduped), it is really not something to be afraid of losing. As for the actual data, you will still be able to restore it with file-level restore, for instance.
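To picture why, here is a toy sketch of block-level dedupe (purely illustrative, not Veeam's actual VBK layout): a corrupt shared block only breaks full restores of the VMs that reference it, while their unique data blocks remain readable.
```python
# Toy illustration of block-level deduplication -- NOT Veeam's actual
# VBK layout, just the general "store each unique block once" idea.

# The backup file stores each unique block once...
backup_blocks = {
    "os_block": b"common Windows system files",  # deduped, shared by vm1 and vm2
    "vm1_data": b"vm1 application data",
    "vm2_data": b"vm2 application data",
    "vm3_data": b"vm3 application data",
}

# ...and each VM restore is just a list of block references.
vm_block_map = {
    "vm1": ["os_block", "vm1_data"],
    "vm2": ["os_block", "vm2_data"],
    "vm3": ["vm3_data"],  # happens not to reference the corrupted block
}

# Simulate media corruption hitting the shared OS block.
corrupted = {"os_block"}

for vm, blocks in vm_block_map.items():
    if any(b in corrupted for b in blocks):
        intact = [b for b in blocks if b not in corrupted]
        print(vm, "-> full restore fails; still recoverable via file-level restore:", intact)
    else:
        print(vm, "-> restores fine, untouched by the corruption")
```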
More generally speaking, it does not matter whether you lose 1 VM or 10 VMs. What matters here is that you seem to be willing to accept data loss, albeit only 1 VM's worth. But data loss is completely unacceptable. That 1 VM in a job may very well be worth all of the remaining VMs together. So, the key is to design your backups so that you NEVER lose data, under ANY circumstances.
Your fear of corruption at the file level does have its grounds; disk read and write errors have happened to most of us at least once in a lifetime. No media available today is perfectly reliable. Which is exactly why they say "a backup is not a backup until you have 3 copies of it". Even if corruption hits one medium, the other 2 copies are still good.
To share how we ourselves are doing backups at our main IT site:
1. Regular backups to our main backup repository on-site.
2. Taking full backups of all data off-site once a week to a different part of the city (on external hard drives).
3. The most critical VMs are additionally backed up across the Atlantic to the repository in our US datacenter (with separate jobs).
As you can see, this keeps us covered well beyond simple block-level corruption in our main backup repository; our data would survive a flood, a plane crashing into the main datacenter, or even a revolution.
- Enthusiast
- Posts: 27
- Liked: 1 time
- Joined: Sep 06, 2011 5:57 am
- Full Name: Josh
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
I feel compelled to ask Gostev why breaking VMs out into a single job per VM is such a bad design. Following the Veeam recommendations, we have found ourselves in a world of pain trying to get larger backup jobs replicated offsite outside of a Veeam replication infrastructure (failing Veeam services over to DR so operational backups can just keep going).
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
Mainly, I would say, because you don't get the full advantage of deduplication, and it doesn't scale very well past a few dozen VMs from a scheduling standpoint.
I'm curious how having many smaller files, which overall are actually more data, is better for getting files offsite. Is it simply a matter of being able to sync multiple files simultaneously?
- Enthusiast
- Posts: 27
- Liked: 1 time
- Joined: Sep 06, 2011 5:57 am
- Full Name: Josh
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
As I'm trying to do 4-hour backups of the most critical VMs, I think there is no ideal solution from a scheduling standpoint, but multiple v6 worker proxies may help get the times down on larger jobs.
The issue I'm finding is that files in excess of 1 TB never seem to reach the remote site in an acceptable amount of time when copying the entire file, and since rsync performance is tied to how long it takes to compute MD5 hashes of the files, a differential transfer ends up taking far longer than simply copying the file whole. Split the data out, though, and there is a significant decrease in the time for a differential copy, and the link isn't slammed with more data either.
Not 'best practice', but I really want to know why this is 'bad practice'!
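To put rough numbers on it (the throughput and change-rate figures below are purely illustrative assumptions, not measurements from our environment), here is a back-of-envelope model of why checksum time dominates when everything lives in one huge file:
```python
# Back-of-envelope model of rsync-style differential copies.
# All figures are illustrative assumptions, not benchmarks.

TOTAL_MB     = 1.0 * 1024 * 1024   # ~1 TB of backup data
CHANGED_FRAC = 0.05                # assume 5% of the data changed since the last sync
HASH_MBPS    = 200.0               # local checksum (MD5/rolling) throughput
WAN_MBPS     = 10.0                # effective WAN throughput

def diff_sync_hours(num_files, files_touched):
    """rsync re-checksums every file it touches, but skips files whose
    size/mtime are unchanged -- so splitting the data into many files
    means most of them are never hashed at all."""
    per_file_mb    = TOTAL_MB / num_files
    hashed_mb      = per_file_mb * files_touched
    transferred_mb = TOTAL_MB * CHANGED_FRAC     # delta payload is the same either way
    return (hashed_mb / HASH_MBPS + transferred_mb / WAN_MBPS) / 3600

# One monolithic ~1 TB file: the whole thing gets re-checksummed every run.
print("single 1 TB file:", round(diff_sync_hours(1, 1), 1), "hours")

# Fifty ~20 GB per-VM files, of which only a handful actually changed.
print("50 smaller files:", round(diff_sync_hours(50, 5), 1), "hours")
```
The delta payload crossing the WAN is the same either way; the difference is how much data has to be re-read and hashed before anything moves at all.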
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: v6 Upgrade Losing Repository Credentials for CIFS
Bad practice for the 2 reasons stated by Tom. But of course, it's fine to do this for a few selected VMs.