Asahi
Expert
Posts: 135
Liked: 7 times
Joined: Jun 03, 2016 5:44 am
Full Name: Iio Asahi
Location: Japan

On replication of very large VM

Post by Asahi »

Hi,

Has Veeam's replication ever been used to replicate a VM with a capacity of 200 TB or more?
I want to replicate a monster VM running on vSphere 7.0 to another ESXi host.

The VM has multiple virtual disks attached, with a total capacity of almost 300 TB.
My understanding is that, as far as Veeam is concerned, there is no problem as long as VMware snapshots can be created, but are there any other caveats?
https://docs.vmware.com/en/VMware-vSphe ... 81466.html
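
For reference, here is a minimal sketch (assuming pyVmomi and the hypothetical vCenter address, credentials, and VM name below) of how the disk layout could be sanity-checked against those vSphere limits before planning snapshots; it just lists each virtual disk and the total provisioned capacity:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"     # hypothetical address
USER = "administrator@vsphere.local"  # hypothetical account
PASSWORD = "********"
VM_NAME = "monster-vm"                # hypothetical VM name

# Lab-style connection; verify certificates properly in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)

    total_bytes = 0
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            total_bytes += dev.capacityInBytes
            print(f"{dev.deviceInfo.label}: {dev.capacityInBytes / 1024**4:.2f} TiB")
    print(f"Total provisioned: {total_bytes / 1024**4:.2f} TiB")
finally:
    Disconnect(si)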

I would like to know whether there is any track record of backing up or replicating such monster VMs with Veeam.

Kind Regards,
Asahi,
Climb Inc.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: On replication of very large VM

Post by HannesK »

Hello,
Simple answer: yes, there are customers on this planet who have backed up machines of that size. I don't personally know anyone doing replication at that scale, but the concept is similar to backup.

Now to your point: did you see any issues? If yes, can you please provide the case number (and more details about the setup: proxies, backup / storage access mode, etc.)?

Yes, the snapshot time can be an issue. But I hope that your infrastructure is fast enough.

Best regards,
Hannes
Asahi
Expert
Posts: 135
Liked: 7 times
Joined: Jun 03, 2016 5:44 am
Full Name: Iio Asahi
Location: Japan

Re: On replication of very large VM

Post by Asahi »

Hi Hannes,

Thank you for your answer!

We were relieved to hear that there is a track record of backing up such monster VMs.

Since we are still at the pre-proposal configuration stage, we have not encountered any problems.
Note: for now, we plan to deploy the Veeam manager server as a virtual machine.

You are right that I need to be careful about how long the snapshot stays open.
We are planning an FC configuration, so I intend to use Direct SAN access for reading from the source and a different transport mode for writing to the target. A rough back-of-the-envelope estimate of the snapshot-open time is sketched below.
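
This is only a sketch, not a Veeam formula: it assumes the snapshot stays open for roughly (changed data / end-to-end throughput) and uses made-up numbers for the daily change rate and FC/proxy throughput that would have to be replaced with measured values from the actual environment:

def snapshot_open_hours(vm_size_tb, daily_change_rate, throughput_gb_per_s):
    # Assumes the snapshot is open for roughly (changed data / throughput);
    # CBT scan time and retention/failover bookkeeping are ignored.
    changed_gb = vm_size_tb * daily_change_rate * 1024
    return changed_gb / throughput_gb_per_s / 3600

# Example: 300 TB VM, 2% daily change rate, ~1 GB/s effective throughput
print(f"{snapshot_open_hours(300, 0.02, 1.0):.1f} h")   # roughly 1.7 h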

Kind Regards,
Asahi,
Climb Inc.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: On replication of very large VM

Post by HannesK »

The Veeam proxies are the key component; the manager server just does the managing.

Yes, on the target side you probably want to go with hot-add. See the limitations of Direct SAN access in https://helpcenter.veeam.com/docs/backu ... ml?ver=100
Asahi
Expert
Posts: 135
Liked: 7 times
Joined: Jun 03, 2016 5:44 am
Full Name: Iio Asahi
Location: Japan

Re: On replication of very large VM

Post by Asahi »

Hi Hannes,

Yes, I understand that the manager server only does the managing.
I will probably deploy the manager and a proxy server as VMs on the target ESXi host to handle hot-add.

For reading from the source, I plan to use a physical proxy server in a Direct SAN configuration.

Kind Regards,
Asahi,
Climb Inc.
Asahi
Expert
Posts: 135
Liked: 7 times
Joined: Jun 03, 2016 5:44 am
Full Name: Iio Asahi
Location: Japan

Re: On replication of very large VM

Post by Asahi »

Hi Hannes,

Sorry, I need some additional help from you.

Do you have any information about how long an incremental backup took on such a large machine?
It would be helpful to know how long an incremental run actually took in practice.
Note: I understand that every environment is different.

Kind Regards,
Asahi,
Climb Inc.
HannesK
Product Manager
Posts: 14322
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: On replication of very large VM

Post by HannesK »

Hello,
With 7 disks, I just saw about 1:40 h for an incremental run of a 50 TB VM in our lab. Scaled linearly, 300 TB would mean around 10 h on the hardware we have here (a quick sketch of that scaling is below).
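
Just a linear extrapolation, assuming the same change rate and the same hardware as in the 50 TB lab run:

observed_tb, observed_hours = 50, 1 + 40 / 60   # 50 TB incremental in 1:40 h
target_tb = 300

# Linear extrapolation: same change rate, same proxies and storage.
estimated_hours = observed_hours * target_tb / observed_tb
print(f"Estimated incremental run: {estimated_hours:.1f} h")   # ~10 h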

But probably your production hardware is faster than our lab hardware :-)

Best regards,
Hannes