Comprehensive data protection for all workloads
jgremillion
Enthusiast
Posts: 87
Liked: never
Joined: Oct 20, 2009 2:49 pm
Full Name: Joe Gremillion
Contact:

VeeamBR save my bacon, I mean email, today.

Post by jgremillion »

At the risk of sounding like a Veeam homer or that I drank the Veeam kool-aid, I want to tell you guys why I love your Veeam 5.0 product.

#1. I came in to work this morning and found out that one of our 16 GroupWise post offices (Windows) was hard down (the disk controller was bad in the VM). I fired up an Instant Recovery session and the backup post office was up and running in no time! All it took was a few mouse clicks and about 5 minutes until I had a fully functional post office. What a relief!

It's nice to know that I can recover a mission-critical VM with ease and simplicity in a matter of minutes, not hours! I really am happy this feature worked as advertised. It's not just vaporware.

#2. I have just upgraded the OS code in our CLARiiON array to FLARE 30, and our VMware infrastructure is ESXi 4.1. Our ESX servers are now using the storage offload feature through VAAI (vStorage APIs for Array Integration), and now our Veeam backups are flying! According to our 24-hour summary, I am backing up 168 VMs, totaling 17.5 TB, with an average processing speed of 739 MB/s! That's domain controllers, GroupWise post offices, SQL servers, etc. Sweet!
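For the curious, here is a quick back-of-the-envelope check of what that implies for the backup window (a rough sketch in Python; it assumes the 739 MB/s figure is the aggregate average across all jobs, which the post doesn't state explicitly):

```python
# Back-of-the-envelope check on the reported backup window.
# Assumption: 739 MB/s is the aggregate average across all jobs,
# and 1 TB = 1024 * 1024 MB.

total_tb = 17.5
rate_mb_s = 739

total_mb = total_tb * 1024 * 1024       # ~18.35 million MB
hours = total_mb / rate_mb_s / 3600
print(f"{hours:.1f} hours")             # ~6.9 hours for all 17.5 TB
```

So the whole estate fits comfortably inside an overnight window at that rate.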

Thanks for making a great product Veeam Guys!
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Gostev »

Hi Joe, thanks for sharing this great story! I believe this is officially the first reported use of the Instant VM Recovery feature in anger, during a real outage. :D
How many backup servers do you use to back up 17.5 TB of data?
Thank you for your kind words!
jgremillion
Enthusiast
Posts: 87
Liked: never
Joined: Oct 20, 2009 2:49 pm
Full Name: Joe Gremillion
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by jgremillion »

Anton,

I currently use three servers: one Dell M600 blade server and two Dell 1950s.
I am debating adding another M600, although at the moment these three servers seem to be handling the load just fine.

-Joe
drbarker
Enthusiast
Posts: 45
Liked: never
Joined: Feb 17, 2009 11:50 pm
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by drbarker »

jgremillion wrote:Anton,

I currently use three servers: one Dell M600 blade server and two Dell 1950s.
I am debating adding another M600, although at the moment these three servers seem to be handling the load just fine.

-Joe
I'm feeling a little cheap... I've got a single Dell M600 churning against 14 TB. :D Since switching over to incremental backups, performance has been much better (avg 200 MB/s per job, 4 concurrent jobs).

The only bottleneck we now have is with the synthetic fulls. I know if I switch to using a Linux server as a target, the synthetic full creation gets offloaded. Does this offload work asynchronously? (aka: Can I kick off another backup while the synthetic full job is running?)
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Gostev »

Yep, absolutely. Incremental data will be collected by the Veeam Backup server and sent to the Linux target, while the actual synthetic full processing will be handled by the target, so at that point the Veeam Backup server is no longer spending any cycles on this specific job.
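For anyone new to the term, here is a toy model of what "synthetic full" means mechanically (plain Python, nothing like Veeam's actual on-disk format): the target merges the previous full backup with the incremental change sets, producing a new full without re-reading production data.

```python
# Toy model of synthetic full creation. Each "backup" here is just
# a dict of block_id -> data; real backup files are far more complex.

def build_synthetic_full(previous_full, increments):
    """Apply increments oldest -> newest; later writes win."""
    full = dict(previous_full)      # start from the last full backup
    for inc in increments:
        full.update(inc)            # overlay each incremental change set
    return full

full = {0: "A", 1: "B", 2: "C"}
incs = [{1: "B2"}, {2: "C3", 3: "D"}]
print(build_synthetic_full(full, incs))
# {0: 'A', 1: 'B2', 2: 'C3', 3: 'D'}
```

The merge only touches backup storage, which is why it can run entirely on the target host.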
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by tsightler »

No, the synthetic full must finish before the next job starts, even with Linux targets. I'm curious, what speeds are you seeing with your synthetic fulls? I find them to be quite fast, coming close to maxing out the back-end storage performance of our backup targets (~300 MB/sec), but we do use Linux targets exclusively. Are synthetics slower when running on Windows?
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Gostev »

Well, yes, technically the job will still be "running" while the synthetic full is being created on the target, but the actual Veeam Backup server will not be spending any resources on it. And since the question was about offloading synthetic full processing from the backup server, I figured this is what mostly matters...
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by tsightler »

Funny, Anton answered differently from me. I think we interpreted the question differently. When I read "Can I kick off another backup while the synthetic full job is running?" I thought he meant he wanted to kick off another instance of the same job, which you can't actually do even with a Linux target (the job won't restart until the synthetic full is finished, even though it's the Linux target that's doing the work).

If you meant, "Can I kick off a different job while the first job works on the synthetic full?", well sure, you can do that. We do that all the time.
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Gostev »

That's right :)
drbarker
Enthusiast
Posts: 45
Liked: never
Joined: Feb 17, 2009 11:50 pm
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by drbarker »

tsightler wrote:Funny, Anton answered differently from me. I think we interpreted the question differently. When I read "Can I kick off another backup while the synthetic full job is running?" I thought he meant he wanted to kick off another instance of the same job, which you can't actually do even with a Linux target (the job won't restart until the synthetic full is finished, even though it's the Linux target that's doing the work).

If you meant, "Can I kick off a different job while the first job works on the synthetic full?", well sure, you can do that. We do that all the time.
Yeah, I was after kicking off another instance of the same job. Of course that'll be tricky if there's no full backup to work against, but I was going for optimism :-)

To answer your other question: I'm only seeing synthetic full speeds of ~75 MB/s, but my backup storage is 3 miles from the backup server, so the latency is relatively high. I'm going to try moving the backup server closer to the storage[1]; I'll see if it goes any better.

[1] The backup media is a Dell MD3200i. It's only a quirk of local geography that meant I had to run iSCSI over such a long distance - I'll be able to fix that soon!
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by tsightler »

Well, our backup targets are 7 miles from each other, and are iSCSI, but they are fronted by local Linux servers, so the synthetic gets built locally. I could imagine the latency might work against it, but 75 MB/sec is still pretty respectable, since in your case the data has to be both read and written across the inter-building link (admittedly I have no idea what your link speed is; we have a 1 Gb link between our sites, so 75 MB/sec would be pretty high utilization).
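As a quick sanity check on that utilization estimate (a sketch assuming a full-duplex 1 Gb/s link, where reads consume the inbound direction and writes the outbound at roughly the same rate):

```python
# Utilization estimate for a synthetic full built across an iSCSI link.
# Assumption: full-duplex 1 Gb/s link, so each direction carries
# ~125 MB/s; the synthetic full reads one way and writes the other.

link_gbit = 1.0
link_mb_s = link_gbit * 1000 / 8       # ~125 MB/s per direction
synthetic_mb_s = 75                    # observed synthetic-full speed

utilization = synthetic_mb_s / link_mb_s
print(f"{utilization:.0%} of each direction")   # 60% of each direction
```

At ~60% of the link in each direction, the network is plausibly the bottleneck rather than the disks.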
Mindflux
Enthusiast
Posts: 32
Liked: never
Joined: Nov 10, 2010 7:52 pm
Full Name: RyanW
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Mindflux »

jgremillion wrote:At the risk of sounding like a Veeam homer or that I drank the Veeam kool-aid, I want to tell you guys why I love your Veeam 5.0 product.

#1. I came in to work this morning and found out that one of our 16 GroupWise post offices (Windows) was hard down (the disk controller was bad in the VM). I fired up an Instant Recovery session and the backup post office was up and running in no time! All it took was a few mouse clicks and about 5 minutes until I had a fully functional post office. What a relief!

It's nice to know that I can recover a mission-critical VM with ease and simplicity in a matter of minutes, not hours! I really am happy this feature worked as advertised. It's not just vaporware.
How do you deal with the redirected writes? Instant Recovery lets you bring up a machine read-only, with writes redirected elsewhere. How would you go about merging those changes once your production VM is back in action?
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Gostev »

You don't deal with the redirected writes at all :)

By default, writes are redirected internally (within the vPower engine), so VMware does not even know anything about the redirection. ESX(i) sees regular virtual disks on vPower NFS (without any snapshots present). So no matter which way you choose to move the VM backup to production (Storage VMotion, replication, hot VM copy, cold VM files copy), the actual, latest state will always be copied (completely transparently to you).

Even if you change the default Instant VM Recovery settings and redirect updates to some VMFS datastore - which effectively creates a regular VMware snapshot on the instantly recovered VM (and so disables Storage VMotion) - using replication or hot VM copy will still copy the actual, latest state (again, completely transparently to you). Veeam Backup always processes the latest (consolidated) state of VM disks if there are snapshots present.
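A toy copy-on-write model makes the "you don't deal with it" answer concrete (plain Python, purely illustrative of the redirection idea, not Veeam's actual implementation): the backup image stays read-only, writes land in a redirect cache, and any reader that goes through the cache automatically sees the latest state.

```python
# Toy copy-on-write model of write redirection during instant recovery:
# the backup image is never modified; writes go to a redirect cache,
# and reads prefer the cache. "Merging" to production is then just
# reading every block through the cache - the latest state wins.

class RedirectedDisk:
    def __init__(self, backup_image):
        self.backup = backup_image      # read-only backup (block -> data)
        self.cache = {}                 # redirected writes

    def write(self, block, data):
        self.cache[block] = data        # never touches the backup

    def read(self, block):
        return self.cache.get(block, self.backup.get(block))

disk = RedirectedDisk({0: "old0", 1: "old1"})
disk.write(1, "new1")
print(disk.read(0), disk.read(1))       # old0 new1
```

Note the backup image still holds the original block 1, so the restore point itself remains intact.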
drbarker
Enthusiast
Posts: 45
Liked: never
Joined: Feb 17, 2009 11:50 pm
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by drbarker »

tsightler wrote:Well, our backup targets are 7 miles from each other, and are iSCSI, but they are fronted by local Linux servers, so the synthetic gets built locally. I could imagine the latency might work against it, but 75 MB/sec is still pretty respectable, since in your case the data has to be both read and written across the inter-building link (admittedly I have no idea what your link speed is; we have a 1 Gb link between our sites, so 75 MB/sec would be pretty high utilization).
Yes, it's a 1 Gb link, so we're doing OK. I've moved it closer now, which has improved latency and throughput. I've also had it back up to a Linux box with an Atmos IFS filesystem. (PM me if anyone is interested in the second bit...)
jgremillion
Enthusiast
Posts: 87
Liked: never
Joined: Oct 20, 2009 2:49 pm
Full Name: Joe Gremillion
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by jgremillion »

We just use three servers so we can run more concurrent jobs at the same time and shorten our backup window. Since we started virtualizing (~90%) we have a lot of spare servers lying around. Might as well use them!
drbarker wrote: I'm feeling a little cheap... I've got a single Dell M600 churning against 14 TB. :D Since switching over to incremental backups, performance has been much better (avg 200 MB/s per job, 4 concurrent jobs).

The only bottleneck we now have is with the synthetic fulls. I know if I switch to using a Linux server as a target, the synthetic full creation gets offloaded. Does this offload work asynchronously? (aka: Can I kick off another backup while the synthetic full job is running?)
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: VeeamBR save my bacon, I mean email, today.

Post by Gostev »

Apparently, Veeam Backup provides one more reason to virtualize that we did not realize before: you end up with more spare hardware available for the physical backup servers backing up your VMs :mrgreen:

Actually, there is in fact one good reason to add an extra backup server even if it is not really needed, and that is to load-balance vPower NFS. It might come in handy during a large-scale disaster, when you need to run multiple VMs from backup. Hmm... isn't this exactly what started this thread? :wink: