leiw
Expert
Posts: 121
Liked: never
Joined: Feb 25, 2010 2:46 am
Contact:

New on SAN mode

Post by leiw » Dec 06, 2010 1:55 am

Hello

I read the FAQ, but still do not understand.

Here's my concept:
1. Install the Microsoft iSCSI initiator on the Veeam backup server.
2. Connect to our iSCSI target server with the Microsoft iSCSI initiator.
3. How do I use SAN mode on the Veeam backup server?

Thanks !
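For steps 1-2, a rough sketch using the built-in iscsicli.exe command line that ships with the Microsoft iSCSI initiator (the portal address and target IQN below are placeholders, not values from this thread):

```shell
# Assumed example only: 192.168.1.50 and the IQN are placeholders.
# Register the iSCSI target portal (listens on port 3260 by default)
iscsicli QAddTargetPortal 192.168.1.50

# List the targets the portal advertises
iscsicli ListTargets

# Log in to the target so its LUNs show up in Disk Management
iscsicli QLoginTarget iqn.2010-12.com.example:target1
```

The same can be done through the iSCSI Initiator control panel applet; the CLI is just easier to document.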

stevenj

Re: New on SAN mode

Post by stevenj » Dec 06, 2010 3:21 am

Hi,

I have not tried this yet, but does this mean the Veeam backup server can see the ESXi VMFS partitions?

regards

Vitaliy S.
Product Manager
Posts: 23001
Liked: 1557 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: New on SAN mode

Post by Vitaliy S. » Dec 06, 2010 8:44 am

If you've configured everything properly (you can see the LUNs in Disk Management on your backup server), just choose the SAN job processing mode and you should be good to go.

leiw
Expert
Posts: 121
Liked: never
Joined: Feb 25, 2010 2:46 am
Contact:

Re: New on SAN mode

Post by leiw » Dec 07, 2010 3:47 am

Hello

I just used the Microsoft iSCSI initiator to connect to the iSCSI target server and activate one LUN. I can see the drive in Disk Management, but it asks to be converted to a dynamic disk. I'm afraid I'll lose all the data if it is converted.

Thanks !

Alexey D.

Re: New on SAN mode

Post by Alexey D. » Dec 07, 2010 9:11 am

Please don't convert or format these LUNs, otherwise you will lose your data.

If you need any assistance, please contact our support team directly. Thanks.

Titanmike
Enthusiast
Posts: 65
Liked: never
Joined: Apr 16, 2010 1:52 pm
Full Name: ZeGerman
Contact:

Re: New on SAN mode

Post by Titanmike » Dec 10, 2010 11:14 am

I just had a play myself.
I am honestly a bit worried that either user error or, even more importantly, a crashed Windows box could destroy the LUN, and potentially hundreds of VMs with it.

We had a case a while ago where an iSCSI drive became corrupt due to a Windows fault - it would be a disaster if this happened to a vSphere LUN.

Is there anything you can suggest to avoid these scenarios, rare as they are?

Vitaliy S.
Product Manager
Posts: 23001
Liked: 1557 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: New on SAN mode

Post by Vitaliy S. » Dec 10, 2010 12:04 pm

Just make those LUNs read-only for the Veeam backup server.

B4VAMTime
Lurker
Posts: 1
Liked: never
Joined: Dec 09, 2010 2:33 pm
Full Name: Brett Foland
Contact:

Re: New on SAN mode

Post by B4VAMTime » Dec 10, 2010 12:21 pm

Vitaliy S. wrote:Just make those LUNs Read-only for the Veeam Backup server.
I've looked everywhere - how can you give one machine read-only access?

Gostev
SVP, Product Management
Posts: 24804
Liked: 3566 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: New on SAN mode

Post by Gostev » Dec 10, 2010 12:54 pm

You should consult your SAN documentation (if your SAN provides this feature).
Titanmike wrote:I am honestly a bit worried that either due to User-Error, or even more importantly, a crashed Windows box destroys the LUN and potentially 100s of VMs with it. We had a while go where an iSCSI drive became corrupt due to a Windows fault - it would be a disaster if this obviously happens with a vSphere LUN.
This can only happen if you actually mount the VMFS LUN as a volume on the Windows server in Disk Management. By default, Windows automounts all detected volumes and can resignature them, which corrupts VMFS. This is a very old and well-known issue.

The Veeam Backup v5 setup automatically disables automount, which prevents this from happening: Windows simply cannot interact with volumes that are not mounted. To prevent users from manually mounting those volumes, simply remove everyone (except you) from the Local Administrators group.
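For anyone setting up a backup server by hand (or on an older version), the automount behavior Gostev describes can be checked and disabled with the standard Windows diskpart tool, before the VMFS LUNs are presented to the host:

```shell
diskpart
DISKPART> automount          # show the current automount setting
DISKPART> automount disable  # stop Windows from auto-mounting newly detected volumes
DISKPART> automount scrub    # clean up mount-point data for previously mounted volumes
DISKPART> exit
```

This only prevents automatic mounting; combined with restricted Local Administrators membership it covers the manual-mount case as well.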

joergr
Expert
Posts: 386
Liked: 39 times
Joined: Jun 08, 2010 2:01 pm
Full Name: Joerg Riether
Contact:

Re: New on SAN mode

Post by joergr » Dec 10, 2010 10:36 pm

...and if you are still very, very uncertain, there is a) vStorage API hot-add mode, where you run the backup server inside a VM, or b) vStorage API NBD mode, which uses the pure NBD protocol without any need to expose anything to the backup server.

best regards,
Joerg

Gostev
SVP, Product Management
Posts: 24804
Liked: 3566 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: New on SAN mode

Post by Gostev » Dec 10, 2010 11:16 pm

The backup speed will suck with NBD though, unless you have 10Gb Ethernet like Joerg ;)

joergr
Expert
Posts: 386
Liked: 39 times
Joined: Jun 08, 2010 2:01 pm
Full Name: Joerg Riether
Contact:

Re: New on SAN mode

Post by joergr » Dec 10, 2010 11:25 pm

Anton is right ;-) - but actually I am doing a lot of research and tweaking together with some VMware and EqualLogic guys these days to get a lot more out of NBD mode, even with 1 Gb. That's because NBD mode is the most secure mode with regard to your LUNs: no broken Windows host can touch them anymore. A nice thought, eh? Stay tuned for some nice reports... but it can take a few weeks. We are currently experimenting with the ESXi 4.1 iSCSI software initiator configured on a dedicated VMkernel vSwitch, with delayed ACK turned off and the latest async Intel e1000 drivers. No jumbo frames at all ;-) - we found that jumbo frames don't give nearly as much gain as the other tweaks. Fun - I can tell you ;-)

Titanmike
Enthusiast
Posts: 65
Liked: never
Joined: Apr 16, 2010 1:52 pm
Full Name: ZeGerman
Contact:

Re: New on SAN mode

Post by Titanmike » Dec 11, 2010 2:24 am

Funny enough, we are using Dell EqualLogics for some of our deployments as well, and I agree - jumbo frames AND flow control (as suggested by Dell when using Cisco switches) don't make much of a difference. Still, the performance is very poor, and the only decent performance I am getting is with SAN backups. That, however, doesn't stop people from accidentally deleting LUNs from inside Windows, as the Dell doesn't allow read-only CHAP accounts, only read-only LUNs, which doesn't really help.

Joerg, what's a "special vmkernel switch" though? Best practice, Dell or not, is still one uplink per vSwitch per VMkernel port, multipathed via the CLI, with jumbo frames and flow control enabled. We evaluated a LOT of different SAN vendors over the last few months, and the best-performing combo so far is indeed vSphere 4.1 and the Dell EqualLogic with firmware 5.0.2.

Anyway, back to your first response, Joerg - we tried running Veeam inside a VM and got some odd messages ("Hot add is not supported for this disk, failing over to network mode"), which I have opened a ticket about - let's see what happens there. But yes, I would prefer SAN backups; I simply cannot risk people accidentally deleting LUNs from inside Windows (sometimes we have tens of LUNs connected to those SANs)... or, even worse and more likely, Windows corrupting LUNs. We are an OEM partner, especially for WUDSS (Windows Unified Data Storage Server), which provides an iSCSI target based on Server 2003 / 2008. To make a long story short: we had, thankfully only once, the operating system itself corrupt an iSCSI LUN. So even if you take out the human error factor, you are still in "danger" of Windows catching a flu and corrupting your VMFS LUN - something you don't want, especially when running tens or hundreds of VMs on that LUN.

joergr
Expert
Posts: 386
Liked: 39 times
Joined: Jun 08, 2010 2:01 pm
Full Name: Joerg Riether
Contact:

Re: New on SAN mode

Post by joergr » Dec 11, 2010 8:25 am

Titanmike,

Until now we have been focused on the 10 Gb stuff and only started the 1 Gb tests a few days ago, but anyhow, I can tell you our results, which it seems will also work exactly the way I described here for 1 Gb.

We did not use MPIO at all ;-) - neither VMware native, nor the EQL MPIO profile/driver.

Just one 10 Gb NIC bound to just one vSwitch with just one VMkernel port with just one IP address ;-) - but all dedicated to pure iSCSI. Flow control turned on on every physical switch. Jumbo frames NOT turned on, neither on ESXi 4.1 nor on the physical switches.

OK, you asked for it, so here we go. These are our findings so far (but it will take weeks to complete, as I said before, so don't take THIS as my final suggestion):

1. Assuming you use Intel NICs, update the NICs in the ESXi 4.1 hosts with the latest Intel drivers from the VMware website.
Example: vihostupdate.pl --server [IP address] --username root --install --bundle [CD/DVD]:\offline-bundle\INT-intel-lad-ddk-igb-1.3.19.12.1-offline_bundle-185976.bla

2. Create a dedicated iSCSI vSwitch and VMkernel port and bind the software iSCSI initiator to it. Rescan. Reboot.
esxcfg-vswitch -a vSwitch-iSCSI
esxcfg-vswitch -A iSCSI-VMKernel1 vSwitch-iSCSI
esxcfg-vmknic -a -i 172.16.150.12 -n 255.255.0.0 iSCSI-VMKernel1
(add the NIC to the vSwitch via the GUI)
esxcli swiscsi nic add -n vmk1 -d vmhba38
...where vmhba38 is your software iSCSI initiator vmhba.

3. Disable delayed ACK in the iSCSI software initiator properties (under Advanced), either for the whole iSCSI system or for a single portal or LUN, as you prefer. Reboot.

4. You are good to go.

5. I agree - EqualLogic is in my opinion one of the very best SAN solutions for vSphere today, from both a design and a performance perspective.
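As a sanity check after step 2 (my own addition, reusing the vmk1/vmhba38 names from the example above), the binding and port layout can be verified on the ESXi 4.1 host with:

```shell
# List the VMkernel NICs bound to the software iSCSI initiator
esxcli swiscsi nic list -d vmhba38

# Confirm the vSwitch and VMkernel port layout
esxcfg-vswitch -l
esxcfg-vmknic -l
```

If vmk1 does not appear in the first listing, the `esxcli swiscsi nic add` step did not take effect and a rescan will not use the dedicated port.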

best regards,
Joerg

Titanmike
Enthusiast
Posts: 65
Liked: never
Joined: Apr 16, 2010 1:52 pm
Full Name: ZeGerman
Contact:

Re: New on SAN mode

Post by Titanmike » Dec 11, 2010 10:16 am

1/2 = we do that anyway, apart from the addition of multipathing (VMware native).
3 = never played with it, might give it a try...

Gostev
SVP, Product Management
Posts: 24804
Liked: 3566 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: New on SAN mode

Post by Gostev » Dec 11, 2010 8:47 pm

Titanmike wrote:we tried now running Veeam inside a VM and get some odd messages so which I have opened a ticket about - let's see what happens there ("Hot add is not supported for this disk, failing over to network mode")
Please check out the FAQ for Virtual Appliance mode; there is a list of limitations which result in hot add not being available.

I must say I definitely do not share your concerns about direct SAN access reliability and possible corruption. In fact, I put direct SAN access 10 points ahead of hot add as far as reliability is concerned. Direct SAN access has been around for so many years (since VCB times), and it is absolutely flawless - polished by time and by the largest enterprise environments. Hot add, on the other hand, has only been around for one year (Veeam was the first to support it, with v4 released a year ago).
