Best practices for DirectSAN access proxy?

by nokogerra » Tue Sep 19, 2017 5:53 am

Hello there.
I'm going to test a LAN-free (Direct SAN access) proxy and I'm now collecting best practices for this type of proxy configuration. However, I'm a bit confused about some of the recommendations, and I think people here can help me understand them.
1. The documentation says that the Windows SAN policy on the proxy is set to "OfflineShared" automatically. But what does that mean? According to the MS docs it is almost the same as "OnlineAll":
OnlineAll. Specifies that all newly discovered disks will be brought online and made read/write.
OfflineShared. Specifies that all newly discovered disks that do not reside on a shared bus (such as SCSI and iSCSI) are brought online and made read-write. Disks that are left offline will be read-only by default.

So in both cases the LUNs should be brought online, am I right? If so, then what is the point of setting the "OfflineShared" SAN policy?
2. The author of https://www.veeam.com/blog/vmware-backu ... ation.html says that a LUN can be set up as read-only to guarantee that it won't be resignatured. Does he mean the "attributes disk set readonly" option of diskpart? I guess in that case it would be possible to use this proxy only for backups and not for restores.
3.
Find the best RAID controller cache setting for your environment.
I'm going to use a proxy with DAS as the repository, with an LSI 9271-8i RAID controller. Any advice about that?
4.
Update MPIO software (disable MPIO may increase performance).
My production storage is a VNX5400 and I've never had experience presenting it to Windows hosts. I know this is not an EMC forum, but maybe I will get some tips. I know that the Unisphere host agent should be installed on the Windows host to connect a LUN to it, but what about multipathing? Is the Windows 2012 R2 native multipathing module OK for my purposes, or should I configure only a single path on my FC fabric?
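For what it's worth, here is roughly what I had in mind for native MPIO on the proxy (just a sketch, assuming the generic Microsoft DSM will claim the VNX FC LUNs; please correct me if the VNX really needs PowerPath or a vendor DSM instead):

Code:
# Install the native MPIO feature (Windows Server 2012 R2)
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM claim all currently attached MPIO-capable devices
# (-r reboots the host immediately, so run this in a maintenance window)
mpclaim -r -i -a ""

# After the reboot, list the MPIO disks and their path counts
mpclaim -s -d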

by PTide » Tue Sep 19, 2017 9:45 am

Hi,

So in both cases the LUNs should be brought online, am I right? If so, then what is the point of setting the "OfflineShared" SAN policy?
As you've mentioned - it's almost the same. It should be read as "Windows does not bring disks that reside on shareable buses (iSCSI, FC, SAS) online".
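If you want to double-check what policy the proxy ended up with, diskpart shows it directly (a quick sketch; run it in an elevated prompt on the proxy, and the "Offline Shared" value below is just the expected example output):

Code:
DISKPART> san
SAN Policy  : Offline Shared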

Does he mean the "attributes disk set readonly" option of diskpart?
No, he does not. The SAN policy and disk attributes are two different settings.

I guess in that case it would be possible to use this proxy only for backups and not for restores.
Veeam sets the particular disk as writable for the restore operation.
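For reference, the per-disk attribute (the separate, per-disk setting) looks like this in diskpart; disk 3 is just a made-up example, and during a restore Veeam takes care of the writable state itself:

Code:
DISKPART> select disk 3
DISKPART> attributes disk
Current Read-only State : No
Read-only  : No

DISKPART> attributes disk set readonly
DISKPART> attributes disk clear readonly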

LSI 9271-8i RAID controller
That depends on many factors, but in general I'd recommend going with RAID 10.

Thanks

by nokogerra » Tue Sep 19, 2017 10:04 am

Thank you for your reply. Really appreciate your help.

No, he does not. The SAN policy and disk attributes are two different settings.

Well, here is the quote from that article:
Worried about resignaturing? (Almost never happens and Veeam setup puts in preventions) Present VMFS LUNs to backup proxy as read-only.
I guess he is not talking about the SAN policy.

As you've mentioned - it's almost the same. It should be read as "Windows does not bring disks that reside on shareable buses (iSCSI, FC, SAS) online".
Ah, I understood it wrong initially. Fine, then the LUNs of the production storage will not be brought online automatically on the Veeam proxy, but isn't a LUN required to be online to back up VMs from it? Or is it enough for the proxy just to "see" it without bringing it online?

I'd recommend going with RAID 10. How many disks do you have?

My question is about the specific cache configuration, not about the RAID level. We are going to use DAS (an LSI 9271-8i controller) as the repo; it will consist of 8 SATA3 8 TB disks configured in RAID 6 with 1 hot spare disk, so it will be a ~40 TB NTFS volume. I guess the RAID write penalty doesn't really matter in the case of sequential access.

by PTide » Tue Sep 19, 2017 10:25 am

I guess he is not talking about the SAN policy.
I can assure you that we do not touch disk attributes :) He is talking about the SAN policy, which is global, whereas disk attributes are set individually per disk.

Or is it enough for the proxy just to "see" it without bringing it online?
This.

My question is about the specific cache configuration, not about the RAID level
If you're asking what cache mode to use, then you should probably stick with write-back, as it has advantages for synthetic operations (see the very bottom of the article).
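On an LSI 9271-8i that would look roughly like this with StorCLI (only a sketch, assuming controller 0 and a healthy BBU/CacheVault; MegaCLI has equivalent switches):

Code:
# Show the current cache settings of all virtual drives on controller 0
storcli64 /c0/vall show all

# Write-back (drops to write-through if the BBU/CacheVault fails) plus read-ahead
storcli64 /c0/vall set wrcache=wb
storcli64 /c0/vall set rdcache=ra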

I guess the RAID write penalty doesn't really matter in the case of sequential access.
It does matter, since all transform operations (merge, compact, synthetic full) are random I/O. Please check this BP to learn more about the impact of different backup methods.

Thanks

by nokogerra » Thu Sep 21, 2017 9:07 am

Thanks a lot, those links are very useful; however, we are probably not going to use any transform operations.
Actually, I've run into something strange. I'm testing Direct SAN access now with an old CLARiiON CX4 connected to the hosts via 4 Gb FC. The source is one LUN of the CLARiiON and the target is another LUN of the same storage (but not the same storage group; the source and destination LUNs are placed on different disks). The standby power supplies of this array are dead, so the write cache is disabled (I know there is a tweak to enable it even with dead SPSs, but I don't want to do that), and I'm not expecting great performance, but I think the activity during the backup job is strange:
[screenshot: backup job performance graph]
There were some peaks with a 330 MB/s transfer rate (which fits 4 Gb FC pretty well), but most of the time the proxy is idle. Now the job has been stuck at 99% for 20 minutes already and the proxy is doing nothing. Why is that? I'm not sure the disabled write cache can be the cause of this; Veeam says that the bottleneck is the source device, which is not affected by the write cache.

by nokogerra » Thu Sep 21, 2017 10:20 am

Well, the backup job is done now and it took 50 minutes to complete. The actual data transfer took only 8 minutes (110 GB). I guess I will try another VM, and if the result is the same, I will try to contact Veeam support.

by PTide » Thu Sep 21, 2017 1:28 pm

<...> however, we are probably not going to use any transform operations.
That is possible only if you use forward incremental chains with periodic active full backups and no synthetic fulls.

I'm testing Direct SAN access now with an old CLARiiON CX4 connected to the hosts via 4 Gb FC. The source is one LUN of the CLARiiON and the target is another LUN of the same storage (but not the same storage group; the source and destination LUNs are placed on different disks).
I'd say that the config you're using is not really feasible for production, first of all because the repository resides on the very same box as the VM storage. Therefore it won't make much sense to troubleshoot performance problems on a config that should never go live. Also, are your backup proxy and repo machines physical or VMs?

Thanks

by nokogerra » Thu Sep 21, 2017 3:48 pm

PTide wrote: That is possible only if you use forward incremental chains with periodic active full backups and no synthetic fulls.

I'm going to use exactly these types of jobs.
PTide wrote: I'd say that the config you're using is not really feasible for production, first of all because the repository resides on the very same box as the VM storage. Therefore it won't make much sense to troubleshoot performance problems on a config that should never go live. Also, are your backup proxy and repo machines physical or VMs?

The problem is not the performance at all; the problem is that the proxy does nothing for 90% of the backup window. There really is a 0.0 KB/s read and transfer rate for 42 minutes out of 50. Fine, I have now set up another scheme: the source is a VNX5400 RAID 5 LUN of 9 SAS 10K disks, and the destination is a CLARiiON CX4 RAID 5 LUN of 13 SATA 7.2K disks. Totally different boxes. (In the previous scheme the source and destination LUNs were owned by different SPs anyway, but never mind.)
Well, let's see:
[screenshot: backup job performance graph]
As you can see, there is a window of 13 minutes, and for most of this window the proxy did nothing. However, the total processing time is 46(!) minutes; the part after 21:47:55 was cut off because the read/transfer rate in that part of the graph is also 0 KB/s. And then the job just finished after 30 minutes of idle.
P.S. The proxy is physical; it is a test blade with 4x Xeon E7-4850 CPUs and 256 (lol) GB of RAM.

UPDATE:
Here is a completely different VM being processed, but the graph is the same:
[screenshot: backup job performance graph]

by dellock6 » Sun Sep 24, 2017 4:41 pm

If you select the specific VM in the left part of the job session window, you/we could see the details of what's happening in the different steps of the job, for example a snapshot commit taking a long time. That would explain the periods with no I/O, as Veeam would just be waiting for the snapshot commit to complete.
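If you prefer PowerShell to clicking through the UI, something along these lines should list the per-VM task details of the last run (a rough sketch using the Veeam snap-in; "DirectSAN test" is just a placeholder for your job name, and exact property names may vary between versions):

Code:
Add-PSSnapin VeeamPSSnapin

# Take the most recent session of the job and list its per-VM tasks
$session = Get-VBRBackupSession |
    Where-Object { $_.JobName -eq "DirectSAN test" } |
    Sort-Object CreationTime -Descending |
    Select-Object -First 1

Get-VBRTaskSession -Session $session | Select-Object Name, Status, Progress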

by ag_ag » Mon Sep 25, 2017 10:31 am

Anatoliy, it'd be better to open a support case for this, as it might be something connected with the resource scheduling in our logic. Please make sure that you're running the latest update bits for 9.5 (I hope it is 9.5).

by mcloud » Mon Sep 25, 2017 1:00 pm

nokogerra, have you looked at your backup logs? You can take a look at them to see what the job is doing when it goes idle. They can usually be found here: C:\ProgramData\Veeam\Backup.
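For example, something like this pulls the warnings and errors (with their timestamps) out of the job logs so you can see where the gaps are; the "DirectSAN test" folder name is just a placeholder, the real folder is named after your job:

Code:
Get-ChildItem "C:\ProgramData\Veeam\Backup\DirectSAN test" -Filter *.log |
    Select-String -Pattern "WARN|ERR" |
    Select-Object -First 50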

by YouGotServered » Mon Sep 25, 2017 5:12 pm

Hey guys!

I've seen a similar issue at one of my clients (large gaps with apparently nothing going on). I haven't opened a case because I haven't had the time and the backups are still finishing successfully. I'd love to know what you find out!

by foggy » Tue Sep 26, 2017 10:33 am

Hi Cory, if you do not see any tasks taking longer than expected when you select the particular VM in the list in the job session window, please ask support for assistance in reviewing the log files.

