StephenDolphin
Novice
Posts: 7
Liked: never
Joined: Feb 25, 2011 11:50 am
Full Name: Stephen Dolphin
Contact:

+2TB sizes (again)

Post by StephenDolphin » Apr 08, 2011 8:50 am

I'm using Veeam to back up a whole bunch of VMs on a site (about 40). It's running in appliance mode and I'm currently having issues with my storage: using Iomega StorCenters as backup targets, the backups fail mid-operation because they can supposedly no longer see the NAS (even though the NAS is still reachable).

I'm experimenting with different backup targets. Previously I was using a simple folder shared via CIFS and writing straight to that (which works about 40% of the time and errors out as above the other 60%), but the huge advantage there is that I can expose the whole NAS via one CIFS share, so 5.3TB.

If I want to go iSCSI, which I think would be better, the NAS box doesn't support LUNs larger than 2TB (so whether I use a VMDK or a raw device mapping I'm limited to under 2TB); if I want to use NFS, which lots of people do, I'm limited to a VMDK of under 2TB again. Normally this wouldn't be a massive issue, but 7 of the machines I'm backing up are 1TB file servers, which means the entire backup will not fit onto a single 2TB disk of any description (the current CIFS backup folder is about 3.5-4TB).

Has anyone else needed to back up machines this large with Veeam, and if so, is my only option to stick with CIFS?

Thanks.

Steve

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » Apr 08, 2011 10:05 am

Well, I don't think it's normal for your CIFS share to be so unreliable; it sounds like your NAS is having some issues. Have you checked whether the NAS vendor has newer firmware? But even if CIFS worked reliably, it still wouldn't be ideal, as transactional NTFS is not supported on shared folders. iSCSI via a guest initiator would be truly ideal...

Are you sure your NAS does not support LUNs larger than 2TB? That would be a very strange limitation. Or are you perhaps thinking of connecting this iSCSI storage to your ESX host, creating a max-size VMDK on that datastore, connecting that disk to the Veeam Backup VM, and backing up there (hence the 2TB limitation)? If so, backing up to VMFS is not a good idea to start with. Instead, just use the in-guest Windows software iSCSI initiator on your backup server to mount a raw LUN from the NAS, and format the volume with NTFS. This can give you up to a 16TB volume for your backups (an NTFS limitation). And if your backup server dies, you will always be able to simply mount this LUN on any other Windows box and get your backup files out easily (unlike when backing up to VMFS).
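For reference, the 16TB figure follows from NTFS's 32-bit cluster addressing; a quick back-of-envelope sketch (approximate — the real limit is 2^32 − 1 clusters, a hair under the round numbers below):

```python
# Back-of-envelope: NTFS maximum volume size vs. cluster size.
# NTFS addresses clusters with a 32-bit number, so the volume cap
# scales with the cluster size chosen at format time.

NTFS_MAX_CLUSTERS = 2**32
TB = 2**40

def ntfs_max_volume_bytes(cluster_bytes):
    """Approximate largest NTFS volume for a given cluster size."""
    return NTFS_MAX_CLUSTERS * cluster_bytes

print(ntfs_max_volume_bytes(4096) // TB)    # default 4KB clusters -> 16 (TB)
print(ntfs_max_volume_bytes(65536) // TB)   # 64KB clusters -> 256 (TB)
```

So the 16TB ceiling only applies to the default 4KB cluster size; formatting with larger clusters raises it.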

Another possible option would be to mount the NFS share on any Linux server as a regular NFS share; Veeam Backup supports Linux targets natively.

Hope this helps.

nboch
Novice
Posts: 3
Liked: never
Joined: Dec 08, 2010 8:11 am
Contact:

Re: +2TB sizes (again)

Post by nboch » Apr 08, 2011 1:17 pm

Hi,

if your NAS is a Windows server, the 2TB limit might be caused by the disk type.
MBR disks are limited to 2TB, whereas GPT disks are limited to something like 9ZB (effectively unlimited).

You would have to convert your disk to a GUID Partition Table (GPT), wouldn't you?
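The MBR/GPT gap comes straight from the address widths; a back-of-envelope sketch, assuming the common 512-byte sector:

```python
# Back-of-envelope: why MBR tops out around 2TB while GPT does not.
# MBR stores partition sizes as 32-bit sector counts; GPT uses
# 64-bit LBAs.

SECTOR = 512

mbr_limit = 2**32 * SECTOR   # 32-bit sector count -> 2 TiB exactly
gpt_limit = 2**64 * SECTOR   # 64-bit LBA -> ~9.4 ZB

print(mbr_limit // 2**40)            # 2 (TiB)
print(round(gpt_limit / 10**21, 1))  # 9.4 (ZB, decimal)
```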

Nicolas.

tsightler
VP, Product Management
Posts: 5328
Liked: 2175 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: +2TB sizes (again)

Post by tsightler » Apr 08, 2011 7:56 pm

Gostev wrote:Instead, just use in-guest Windows Software iSCSI Initiator on your backup server to mount raw LUN from NAS, and format this volume with NTFS. This can give you up to 16TB volume for your backups (NTFS limitation). And if your backup server dies, you will always be able to simply mount this LUN to any other Windows box, and get your backup files out easily (unlike when backing up to VMFS).
This is almost certainly the best option. We do this with tremendous success at some of our remote sites. While we generally back up to Linux targets, at some of the remote sites we also keep local backups on old SANs that would otherwise have been decommissioned. We configure these old SANs as a single big iSCSI volume and present it to the local Veeam server (itself a VM running the iSCSI initiator). We keep a cloned image of this local VM on the local disks of one of the ESX hosts, so in the event of a failure of the primary SAN we can simply power up the Veeam VM from the local disk, import the most recent backups, and instant-restore all the VMs, then get to work fixing the local SAN while business continues.

StephenDolphin
Novice
Posts: 7
Liked: never
Joined: Feb 25, 2011 11:50 am
Full Name: Stephen Dolphin
Contact:

Re: +2TB sizes (again)

Post by StephenDolphin » Apr 11, 2011 8:37 am

Thanks All,

The NAS box is an Iomega StorCenter Pro ix4-200r, already upgraded to the latest firmware.

I have logged a support call with Iomega because for some reason it's not allowing access to a >2TB iSCSI LUN. If I make a 1TB LUN it's fine, but if I go to 2TB it simply stops being visible to the iSCSI initiator. So that's either a bug or a deliberate limitation of the hardware, which would be silly, but is still possible!

jgremillion
Enthusiast
Posts: 87
Liked: never
Joined: Oct 20, 2009 2:49 pm
Full Name: Joe Gremillion
Contact:

Re: +2TB sizes (again)

Post by jgremillion » Apr 11, 2011 1:31 pm

I do the same. All of my target LUNs that hold Veeam Backups are GPT partitions so I can get bigger volumes.

Oletho
Enthusiast
Posts: 66
Liked: 1 time
Joined: Sep 17, 2010 4:37 am
Full Name: Ole Thomsen
Contact:

Re: +2TB sizes (again)

Post by Oletho » Apr 12, 2011 4:24 am

I am a little surprised that a guest initiator is recommended.

I always thought storage access was handled most efficiently through the host stack? The few tests I ran showed that too.

And using VMDK extents or Windows stripe sets is no big deal.

I am having performance trouble with Veeam and a QNAP NAS myself; I never thought of trying guest iSCSI in that scenario. I will test it when possible.

Ole Thomsen

StephenDolphin
Novice
Posts: 7
Liked: never
Joined: Feb 25, 2011 11:50 am
Full Name: Stephen Dolphin
Contact:

Re: +2TB sizes (again)

Post by StephenDolphin » Apr 27, 2011 2:03 pm

Iomega have confirmed that >2TB is not allowed; it's a deliberate limitation, so don't buy the ix4-200r if you've got large backup files! http://blog.stephendolphin.co.uk/projec ... m-backups/

digitlman
Enthusiast
Posts: 94
Liked: 3 times
Joined: Jun 10, 2010 6:32 pm
Contact:

Re: +2TB sizes (again)

Post by digitlman » Apr 27, 2011 2:07 pm

I believe iSCSI is limited to 2TB LUNs as well.

StephenDolphin
Novice
Posts: 7
Liked: never
Joined: Feb 25, 2011 11:50 am
Full Name: Stephen Dolphin
Contact:

Re: +2TB sizes (again)

Post by StephenDolphin » Apr 27, 2011 2:10 pm

digitlman wrote:I believe iSCSI is limited to 2TB LUNs as well.
No, it's not :D

StephenDolphin
Novice
Posts: 7
Liked: never
Joined: Feb 25, 2011 11:50 am
Full Name: Stephen Dolphin
Contact:

Re: +2TB sizes (again)

Post by StephenDolphin » Apr 27, 2011 2:15 pm

StephenDolphin wrote: No, it's not :D
Sorry, to be more helpful: the limit in Windows is 16TB. Obviously the format of the disk might limit it to 2TB if you use MBR, but the iSCSI initiator itself can certainly see up to 16TB, maybe more.

tsightler
VP, Product Management
Posts: 5328
Liked: 2175 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: +2TB sizes (again)

Post by tsightler » Apr 27, 2011 2:15 pm

While I agree that this is a pretty lame limitation, you could work around it by presenting multiple 2TB LUNs to your Windows host and spanning the volumes.

StephenDolphin
Novice
Posts: 7
Liked: never
Joined: Feb 25, 2011 11:50 am
Full Name: Stephen Dolphin
Contact:

Re: +2TB sizes (again)

Post by StephenDolphin » Apr 27, 2011 2:19 pm

Well, maybe multiple 1TB LUNs, because at the moment even the 2TB one isn't showing in the initiator. But maybe you're right... interesting...

tsightler
VP, Product Management
Posts: 5328
Liked: 2175 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: +2TB sizes (again)

Post by tsightler » Apr 27, 2011 2:30 pm

I think with Windows 2008 you can have up to 32 volumes in a single stripe set, and I would expect the same for spanned volumes, so even at 1TB per LUN you could use the entire drive. I know it's not pretty, but at least you could use the device, and it would probably work fine.
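Back-of-envelope on the spanning suggestion: the 32-member figure is the limit mentioned in the post (not independently verified here), and the 1TB LUN size is what the ix4 currently serves without trouble:

```python
# Capacity available by spanning the NAS's sub-2TB LUNs in Windows.

TB = 2**40
max_members = 32      # spanned/striped member limit cited in the post
lun_size = 1 * TB     # LUN size that currently works on the ix4

spanned_capacity = max_members * lun_size
backup_set = 4 * TB   # upper estimate of the CIFS backup folder size

print(spanned_capacity // TB)          # 32 (TB)
print(spanned_capacity >= backup_set)  # True
```

Even with conservative 1TB members, the spanned volume comfortably covers the ~4TB backup set described earlier in the thread.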

TaylorB
Enthusiast
Posts: 49
Liked: 4 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: +2TB sizes (again)

Post by TaylorB » Apr 28, 2011 7:13 pm

You might consider splitting your backup into two smaller jobs going to two smaller volumes.

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » Apr 29, 2011 5:15 pm

This is bad for dedupe and ultimately means excessive storage loss: not only are the unused chunks of the multiple smaller volumes wasted, but the total backup size ends up a few times larger than it could have been.

TaylorB
Enthusiast
Posts: 49
Liked: 4 times
Joined: Jan 28, 2011 4:40 pm
Full Name: Taylor B.
Contact:

Re: +2TB sizes (again)

Post by TaylorB » May 02, 2011 5:23 pm

Gostev wrote:This is bad for dedupe and ultimately means excessive storage loss: not only are the unused chunks of the multiple smaller volumes wasted, but the total backup size ends up a few times larger than it could have been.
It seems better than failing backup jobs to me. You've got to work with what you have.

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » May 02, 2011 5:40 pm

Spanned volume is just a better way to work with what you have.

yrrah2010
Influencer
Posts: 18
Liked: never
Joined: Jan 26, 2011 7:06 am
Contact:

[MERGED] Backup huge vServers

Post by yrrah2010 » Jul 13, 2011 7:59 am

Currently I have a virtual Veeam Backup & Replication server running, but I am struggling with backing up huge 4+ TB servers.
This is my current configuration:
  • Create 3x 1.99TB LUNs on the storage (VMware limit: 2TB minus 512 bytes).
  • Create one VMFS volume from the 3 LUNs.
  • Create 3 separate virtual disks (same 2TB minus 512 bytes limit).
  • Within the OS, create a dynamic disk to make one big backup volume (because within Veeam you can choose only one drive letter).
Is there a simpler solution for this?
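Roughly, the layout above works out as follows (back-of-envelope only, using the per-LUN/per-VMDK cap stated in the post):

```python
# Each LUN and each VMDK is capped at 2TB minus 512 bytes on
# vSphere 4.x, so three spanned together give just under 6TB
# for the 4+ TB servers.

TB = 2**40
vmware_max = 2 * TB - 512   # per-LUN / per-VMDK cap
luns = 3

usable = luns * vmware_max
print(usable < 6 * TB)   # True (just barely under)
print(usable >= 4 * TB)  # True: covers the 4+ TB requirement
```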

Vitaliy S.
Product Manager
Posts: 22527
Liked: 1475 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: Backup huge vServers

Post by Vitaliy S. » Jul 13, 2011 9:09 am

You may want to use LUN extents to store 4 TB of data.

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » Jul 13, 2011 10:42 am

However, please keep in mind that backing up into VMFS is considered bad practice for a number of reasons (you can find more details in the existing discussions). I would recommend backing up to an NTFS-formatted LUN on an iSCSI NAS instead (via the software iSCSI initiator). This will give you up to a 16TB target (the NTFS limit).

yrrah2010
Influencer
Posts: 18
Liked: never
Joined: Jan 26, 2011 7:06 am
Contact:

Re: +2TB sizes (again)

Post by yrrah2010 » Jul 13, 2011 12:16 pm

Gostev wrote:However, please keep in mind that backing up into VMFS is considered bad practice for a number of reasons (you can find more details in the existing discussions). I would recommend backing up to an NTFS-formatted LUN on an iSCSI NAS instead (via the software iSCSI initiator). This will give you up to a 16TB target (the NTFS limit).
If I use the iSCSI initiator from Windows 2008 R2 I can't use MPIO active/active, only active/failover.
The ESXi iSCSI initiator has much better throughput, and I don't have a VM port group configured on the iSCSI network because I only want the ESXi layer to handle storage I/O.

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » Jul 13, 2011 12:43 pm

Sure, performance is important, but what's the point of nice, fast backups if you cannot restore from them quickly during a disaster?

yrrah2010
Influencer
Posts: 18
Liked: never
Joined: Jan 26, 2011 7:06 am
Contact:

Re: +2TB sizes (again)

Post by yrrah2010 » Jul 13, 2011 1:14 pm

Gostev wrote:Sure, performance is important, but what's the point of nice, fast backups if you cannot restore from them quickly during a disaster?
Time to move over to vSphere 5 :mrgreen:
VMFS-5 supports volumes up to 64TB.
This includes pass-through RDMs!

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » Jul 13, 2011 1:19 pm

There might be one small issue with this suggestion, though, as vSphere 5 is not released yet :D

mpozar
Enthusiast
Posts: 30
Liked: 1 time
Joined: Jan 01, 2006 1:01 am
Contact:

Re: +2TB sizes (again)

Post by mpozar » Oct 26, 2011 5:46 am

A file, and thus a .vmdk, in vSphere 5 is STILL limited to 2TB, according to the VMware vSphere 5 Configuration Maximums document.

Have FUN!

Michael

Gostev
SVP, Product Management
Posts: 24174
Liked: 3301 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: +2TB sizes (again)

Post by Gostev » Oct 26, 2011 9:24 am

Yes, but we have been talking about LUN and pRDM sizes, and those can indeed be up to 64TB with vSphere 5.

lobo519
Expert
Posts: 297
Liked: 34 times
Joined: Sep 29, 2010 3:37 pm
Contact:

[MERGED] Backup Repository W/ MS iSCSI Initiator

Post by lobo519 » Jan 26, 2012 3:56 pm

Is anyone using the MS iSCSI initiator inside a guest OS to host a backup repository on a VM? I am trying to get around the 2TB limit in VMware.

Good? Bad? Performance?

Thanks!

rmiller
Novice
Posts: 9
Liked: 1 time
Joined: Dec 19, 2012 3:33 pm
Full Name: Ryan Miller
Contact:

[MERGED] backup repository - single striped volume or multip

Post by rmiller » Dec 19, 2012 3:40 pm

I am at a customer site and we are configuring the Veeam server (Windows 2008 R2) as a VM, using DAS storage attached to the VMware cluster via SAS. Total DAS storage is ~16 TB, which has been presented to VMware as a single LUN with a datastore created on it. We are using ESXi 5, so the file size limitation is 2 TB for virtual disks added to the Veeam server.

My question is whether there is a general preference for either:
1) Aggregating multiple 2 TB LUNs at the OS level as a striped volume (or spanned? I'd assume striped is better for I/O purposes) to have fewer Veeam repositories, or
2) Keeping each 2 TB LUN as a basic MBR disk and a separate Veeam repository

I'm sure there's some degree of style in choosing one or the other, but I'm trying to determine if there are major gotchas with either. The former's main downside is being less flexible; the latter's is more administrative overhead in balancing backup jobs among the various repositories. A hybrid option may be to aggregate 2 x 2 TB LUNs into 4 TB chunks, maintaining flexibility while not having quite as many small repositories (making backup job balancing easier).

Thanks!

rmiller
Novice
Posts: 9
Liked: 1 time
Joined: Dec 19, 2012 3:33 pm
Full Name: Ryan Miller
Contact:

Re: [MERGED] backup repository - single striped volume or mu

Post by rmiller » Dec 19, 2012 4:24 pm

rmiller wrote:I am at a customer site and we are configuring the Veeam server (Windows 2008 R2) as a VM, using DAS storage attached to the VMware cluster via SAS. Total DAS storage is ~16 TB, which has been presented to VMware as a single LUN with a datastore created on it. We are using ESXi 5, so the file size limitation is 2 TB for virtual disks added to the Veeam server.

Thanks!
Or (based on others' comments after my question was merged with this thread): is it strongly recommended not to have the Veeam storage go through VMware at all, and instead use an RDM?
