Comprehensive data protection for all workloads
LynnJ57
Novice
Posts: 4
Liked: never
Joined: Oct 31, 2012 4:32 pm
Full Name: Lynn Johnson
Contact:

Another 2TB question

Post by LynnJ57 »

What is the best way to do this? I have a VMware guest that needs to host a volume greater than 2TB. My options for storage: local storage or a connection to an iSCSI device. We have Veeam 6.5. I would like to avoid MS dynamic disks at the guest level. Is using extents a better solution? I have the MS dynamic disks going right now, but when Veeam runs against this volume, the differentials are much larger than I would expect. Is there something about MS dynamic disks that changes many more blocks, causing large Veeam backups? Just looking for the best way to do this without having to resort to Backup Exec and backing things up at the file level. Thanks!
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Another 2TB question

Post by Vitaliy S. »

Hi Lynn,

If you do not need to back up this volume, then an iSCSI connection from the guest OS should be the way to go, but if you do want to perform backups, then you need to use VM virtual disks to host this volume.
LynnJ57 wrote: I have the MS dynamic disks going right now, but when Veeam runs against this volume, the differentials are much larger than I would expect.
The number of changed blocks is reported by VMware CBT, and this number depends heavily on the applications installed on the VM. Can you please tell me what type of VM it is?

Thanks!
LynnJ57
Novice
Posts: 4
Liked: never
Joined: Oct 31, 2012 4:32 pm
Full Name: Lynn Johnson
Contact:

Re: Another 2TB question

Post by LynnJ57 »

Hi Vitaliy - I do need to back up this volume - lots of critical data here. The guest is running Windows 2008 x64 with SP2. The C: volume resides on a different array, and the E: volume is on a dedicated array (using local storage). The data on the E: volume is all flat-file data - no databases or anything that needs application-aware software. I could move this to an iSCSI connection too (on a different device/array), but mostly I want to leverage Veeam to back up this data and set it up in the best way possible given my limited options here. Thanks!
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Another 2TB question

Post by Vitaliy S. »

Ok, thanks for the clarification. Using datastore extents will not help you, as the maximum size of a virtual disk will still be 2 TB, so I'm afraid the only option you have is to present multiple 2 TB disks to the VM and let the OS span a volume across all of them.
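To make the spanning math concrete, here is a minimal sketch (the `disks_needed` helper is hypothetical, and the 1.98 TB per-disk default is a conservative size just under the 2 TB VMDK ceiling, as suggested later in this thread):

```python
import math

# Sketch: how many sub-2TB VMDKs are needed so the guest OS can span
# a single large volume across them. The 2 TB VMDK ceiling is the
# vSphere limit discussed in this thread; 1.98 TB per disk leaves a
# little headroom below it.

def disks_needed(target_tb, per_disk_tb=1.98):
    """Number of VMDKs required to provide target_tb of spanned capacity."""
    return math.ceil(target_tb / per_disk_tb)

print(disks_needed(10))  # a 10 TB spanned volume needs 6 disks of 1.98 TB
```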
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm
Contact:

Re: Another 2TB question

Post by Yuki »

Our file server is Win2012 with 5x 2TB VMDKs in a single 10TB Windows volume. It works OK, but we are also seeing more changed blocks/data than I would expect. I can't say it has anything to do with the 2TB VMDK limit and dynamic disks, though - perhaps it is more to do with the block size Windows uses for such a large volume?
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Another 2TB question

Post by tsightler »

Also, be sure not to create full 2TB disks if you want to back them up with Veeam, as VMware requires a small amount of overhead on a disk just to be able to snapshot it. I suggest 1.98TB per VMDK.
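The arithmetic behind that 1.98 TB suggestion can be sketched as follows (a rough illustration - the 1% snapshot-overhead margin is my own assumption, not an official VMware figure):

```python
# Sketch: size each VMDK slightly below the 2 TB ceiling so VMware has
# room to take a snapshot of the disk (which Veeam backups require).
# The 1% margin is illustrative; the point is simply "not a full 2 TB".

MAX_VMDK_TB = 2.0
SNAPSHOT_MARGIN = 0.01  # 1% headroom, an assumed safety factor

def safe_vmdk_size_tb(ceiling_tb=MAX_VMDK_TB, margin=SNAPSHOT_MARGIN):
    """Return a per-disk size that stays safely under the VMDK limit."""
    return round(ceiling_tb * (1 - margin), 2)

print(safe_vmdk_size_tb())  # 1.98
```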
LynnJ57
Novice
Posts: 4
Liked: never
Joined: Oct 31, 2012 4:32 pm
Full Name: Lynn Johnson
Contact:

Re: Another 2TB question

Post by LynnJ57 »

Thanks for the suggestion. I tried Veeam originally as a test, but when I saw the backups being larger than expected, I went ahead and reconfigured the VMDKs as well, keeping them in that 1.98TB range you mention. I just can't imagine there isn't a better solution here. I love Veeam and VMware, but those stupid 2TB limits are getting O-L-D!!! I know about RDMs, but it seems like there should be another solution in this day and age.
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm
Contact:

Re: Another 2TB question

Post by Yuki »

I'm quite sure VMware is going to address this within two years. That's not soon enough, but they are not known for responding to customer demands quickly anyway (unless it's a bug that crashes systems across the board).
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Another 2TB question

Post by tsightler »

I'll be amazed if it takes them that long at this point, seeing that Hyper-V already has the 64TB VHDX format. They may have been historically slow to respond to customer demand (the 2TB limit should have been broken long ago), but I suspect they know they'll need to respond more quickly to competitive pressure.
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: Another 2TB question

Post by yizhar »

Hi.

Regarding the large changed-block data issue mentioned above, it might be related to scheduled disk defragmentation.

Yizhar
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm
Contact:

Re: Another 2TB question

Post by Yuki »

It is disabled on our server.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Another 2TB question

Post by tsightler »

I believe that Yuki has already mentioned on another thread that his server is using Windows 2012 dedupe. This can cause a significant increase in changed blocks compared to a baseline volume without dedupe.
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Another 2TB question

Post by dellock6 »

tsightler wrote: I'll be amazed if it takes them that long at this point, seeing that Hyper-V already has the 64TB VHDX format. They may have been historically slow to respond to customer demand (the 2TB limit should have been broken long ago), but I suspect they know they'll need to respond more quickly to competitive pressure.
Or maybe, given the news of new concepts like vVols coming in the (near) future, there will be no need to correct this limit, because it will simply go away (just speculating here - before you ask me for more information, I don't have any).

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1