Comprehensive data protection for all workloads
JailBreak
Veeam Vanguard
Posts: 35
Liked: 9 times
Joined: Jan 01, 2006 1:01 am
Full Name: Luciano Patrao
Contact:

Veeam B&R v9 direct from SAN not working

Post by JailBreak »

Hi All,

We have installed a new Veeam Backup Server, version 9. Since we were using an old G5, we decided to install this new version on a new server. After the install, we backed up the configuration from the old server and imported and upgraded it on the new one. We had some issues with the jobs and authentication that we fixed. Now all jobs are running properly, except they only work through the Network transport mode (particularly through our O&M network), and this is something that we do not want, of course. The network guys were already complaining that the Veeam backup server was putting too much traffic on this O&M network. We have an isolated 10Gb storage network just for storage, backups, etc.

I have double-checked my environment and cannot see where the issue is, or any difference between our old server and the new one, because this problem did not happen on the old server (the only difference is that the new one is in a different subnet).

Our environment:

Windows Server 2012 R2 - Veeam Backup Server v9, IP: 192.168.6.x (before it was a 2008 server on subnet 192.168.68.x)
Our backup repository is all iSCSI volumes (Dell EqualLogic storage).
iSCSI subnet: 192.0.27.x

vCenter 6.0, IP: 192.168.6.x
ESXi hosts are half in the 192.168.68.x subnet and half in 192.168.6.x.
Storage volumes in the VMware environment are NetApp NFS volumes.
NFS subnet: 192.0.28.x
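Since those subnets are at the heart of the problem, here is a minimal Python sketch (the networks are the ones listed above, assumed to be /24; the specific host octets are hypothetical) of the basic sanity check: a proxy can only attempt direct storage access if one of its addresses actually sits in a storage subnet, otherwise the job can only go over the network (NBD):

```python
import ipaddress

# Storage networks from the environment description (assumed /24 masks)
STORAGE_NETS = [
    ipaddress.ip_network("192.0.27.0/24"),  # iSCSI (backup repository)
    ipaddress.ip_network("192.0.28.0/24"),  # NFS (VMware datastores)
]

def has_storage_access(addresses):
    """Return True if any of the host's addresses sits in a storage subnet."""
    return any(
        ipaddress.ip_address(a) in net
        for a in addresses
        for net in STORAGE_NETS
    )

# New Veeam server with only a management address (host octet is hypothetical)
print(has_storage_access(["192.168.6.50"]))                 # False -> NBD only
# Same server with an extra NIC on the iSCSI storage network
print(has_storage_access(["192.168.6.50", "192.0.27.50"]))  # True
```

This is only an illustration of the addressing logic, not how Veeam itself selects a transport mode.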

If I disable the Network transport mode, none of the jobs start and I get: Unable to allocate processing resources. Error: No backup proxy is able to process this VM due to proxy processing mode restrictions.

I have created a Veeam proxy in the 192.168.68.x subnet to see if it fixes the issue, but still no success.

PS: Another question. Before, I think (not sure), when a job started we could check in the action log which transport mode the job was using, but now I cannot see it. Where can we see this when the job starts?

Thank You all

JL
PTide
Product Manager
Posts: 6431
Liked: 729 times
Joined: May 19, 2015 1:46 pm
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by PTide »

Hi,

Might be a dumb question, but are you sure that your backup proxies have access to the NFS storage?
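A quick way to check this from a proxy (a sketch only, not a Veeam-prescribed procedure) is to see whether the storage answers on its service port at all, e.g. TCP 2049 for NFS or TCP 3260 for iSCSI. The host addresses below are hypothetical examples on the storage subnets from the first post:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Attempt a plain TCP connection; True means the port answered."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical storage addresses; run this from the proxy in question
print(port_reachable("192.0.28.10", 2049))  # NFS (NetApp datastores)
print(port_reachable("192.0.27.10", 3260))  # iSCSI (EqualLogic repository)
```

A TCP connect only proves network reachability; export permissions and initiator access still have to be granted on the storage side.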

Thank you.
JailBreak
Veeam Vanguard
Posts: 35
Liked: 9 times
Joined: Jan 01, 2006 1:01 am
Full Name: Luciano Patrao
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by JailBreak »

Hi,

What do you mean by access to the NFS storage? Do you mean having the volumes/shares mounted on this Veeam server?

JL
nunciate
Expert
Posts: 248
Liked: 39 times
Joined: May 21, 2013 9:08 pm
Full Name: Alan Wells
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by nunciate »

We use a physical server with FC connections to the SAN, so I am posting just to keep an eye on this thread and see if this is a bigger issue. We have not upgraded from v8 to v9 yet, but plan to very soon.

Just a couple of things I am thinking about on your side.
I assume you were able to configure the iSCSI initiator on your new server and see all the target volumes on your SAN?
Was the SAN configured to allow the iSCSI initiator name of the new server access to all of the volumes?
Forgive me if I am not that familiar. I have a couple of NetApps I use for dev/backup. I create volumes and then LUNs in the volumes. If any system connects via iSCSI, you have to have that initiator in a group, and that group has to be given access to the LUNs.
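The access chain described above (initiator IQN -> initiator group -> LUN mapping) can be sketched abstractly; this is only a toy model of the LUN-masking concept, not NetApp's actual CLI or API, and all the names in it are made up:

```python
# Toy model of iSCSI LUN masking: an initiator sees a LUN only if its
# IQN is in an igroup that the LUN is mapped to.
igroups = {
    "veeam_servers": {"iqn.1991-05.com.microsoft:new-veeam-server"},
}
lun_maps = {
    "/vol/backup_repo/lun0": "veeam_servers",
}

def visible_luns(iqn):
    """Return the LUNs a given initiator IQN is allowed to see."""
    return [lun for lun, group in lun_maps.items() if iqn in igroups[group]]

print(visible_luns("iqn.1991-05.com.microsoft:new-veeam-server"))
# -> ['/vol/backup_repo/lun0']
print(visible_luns("iqn.1991-05.com.microsoft:old-veeam-server"))
# -> []
```

The point of the model: a freshly built server has a new IQN, so until that IQN is added to an igroup it sees no LUNs at all, even if the network path is fine.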

Can you see all of the NetApp volumes in Disk Management on the Veeam server? They all appear as Offline on my backup server.

Just some thoughts. Hope some of that helps.
JailBreak
Veeam Vanguard
Posts: 35
Liked: 9 times
Joined: Jan 01, 2006 1:01 am
Full Name: Luciano Patrao
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by JailBreak »

Hi nunciate,

Thank you for your reply.

From the NetApp it is mainly NFS, so there is no need to have an IQN on the NetApp side. For the volumes (2 or 3) where we still use iSCSI, yes, I added the new Veeam server's IQN to the igroup.

The backup repository is all iSCSI, so no issues there.

Thanks

JL
JailBreak
Veeam Vanguard
Posts: 35
Liked: 9 times
Joined: Jan 01, 2006 1:01 am
Full Name: Luciano Patrao
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by JailBreak »

Hi

Do I need to open a support ticket for this kind of issue?

Thank You

JL
nielsengelen
Product Manager
Posts: 5635
Liked: 1181 times
Joined: Jul 15, 2013 11:09 am
Full Name: Niels Engelen
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by nielsengelen »

Please open a support case to have this investigated, as the logs will need to be checked to gather the correct information on why it is failing/not working.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
foggy
Veeam Software
Posts: 21071
Liked: 2115 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by foggy »

Luciano, what transport mode was utilized before the migration to the new v9 instance? Btw, you can find the transport mode tag in the same place as previously: in the job statistics window, select the particular VM in the list and look for the tag next to the proxy server selected for processing.

Also, please note that starting with v9, Veeam B&R supports the Direct NFS transport mode.
JailBreak
Veeam Vanguard
Posts: 35
Liked: 9 times
Joined: Jan 01, 2006 1:01 am
Full Name: Luciano Patrao
Contact:

Re: Veeam B&R v9 direct from SAN not working

Post by JailBreak »

Hi All,

Thanks for the reply.

I have checked the issues and fixed the problem. And yes, I am now using the Direct NFS access feature.

I have also written an article about this to help others fix the issue when using iSCSI and NFS in the Veeam backup infrastructure.

Hope it will help others.

http://myitoverview.blogspot.de/2016/02 ... ccess.html

Thank You

JL
