-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Hi guys
Background:
A friend and I have just started a new hosting company. We are using the latest and greatest tech, namely 10Gb Arista networking, Dell R620 servers, EMC VNX storage (iSCSI) and vSphere 5.1 / vCloud Director 5.1. One of the last things to do was to implement a decent backup solution, and Veeam was No. 1 on the list since we used it at my last company.
So - onto the Veeam chat.
I am looking to hear from anyone who is running our kind of setup, or perhaps from the friendly Veeam staff who will be knowledgeable about these matters. My question is a general one, around the design of an optimised Veeam setup that is also scalable.
For our backup we have built a couple of 24TB storage boxes (Supermicro boards, 24 x 1TB 2.5" drives, 2 x 10Gb NICs, that kind of thing) and our original plan was that this mass storage would be presented to Veeam (and other hosts) via iSCSI or NFS. However, since evaluating Veeam I am wondering whether that is really the best approach.
Currently, I have Veeam B&R installed directly onto one of these big storage boxes. I'm running Server 2008 R2, using the Microsoft iSCSI initiator with the Microsoft MPIO feature for multipathing. All my tests show that the iSCSI multipathing is optimal, as traffic is balanced perfectly evenly over the two 10Gb NICs which go off to the EMC VNX. I can see my VMFS LUNs in Disk Management just fine. Veeam is configured to use local storage for the repository.
Veeam is communicating with my vCenter server and pulling the data off the SAN directly, as verified by looking at the iSCSI NIC traffic. Nowhere do my statistics/reports explicitly confirm that it is using direct SAN access, however it's definitely using the iSCSI NICs and I set the backup proxy to "Direct SAN only" with no errors.
Based on the very successful testing and massive throughput rates I've achieved, my conclusion is "go with this". However, I note that Veeam created the idea of backup proxies in order to provide a scalable solution for large environments. I'm worried that our solution won't scale, but it is surely the most efficient? I don't want to go creating VMs that can "see" our SAN at the iSCSI layer and pull that iSCSI traffic up through VMware as VM traffic. I'm also a little confused about the repository: can a single repository be seen by numerous proxies, or do they all need their own (whether local or mounted via iSCSI)?
Currently my thought is that this is about as fast/efficient as it could get, and if we ran into trouble with scale I'd simply add another entire physical server running B&R/repository/SQL. So, a sideways scale-out.
Anyway, I look forward to hearing any comments, and I'm perfectly happy to share any knowledge we've gained on vSphere 5.1 / EMC VNX / iSCSI multipathing / 10Gb etc.
cheers
Lee.
-
- Veteran
- Posts: 1531
- Liked: 226 times
- Joined: Jul 21, 2010 9:47 am
- Full Name: Chris Dearden
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Sounds like you have got some great kit to play with!
What sort of CPU power have you got in those Supermicro boxes? Remember that each concurrent job will need a couple of cores.
I personally like to run the backup server as a VM - this isolates the job control from the data moving, and it also means it can be protected with our own technology.
If you have sufficient cores on those storage units, then why not consider the following:
- Backup server as a VM
- Physical boxes with proxy and repository roles to act as a "Veeam Pod" - scale the jobs that each proxy will run so that they won't overwhelm it (2TB of VMs or so per job is a good starting place)
That way you can scale your backup pods out as you scale the underlying vCenter environment out.
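To put rough numbers on that 2TB-per-job rule of thumb, here's a quick Python sketch - the jobs-per-pod figure is just an illustrative assumption, not official Veeam sizing guidance:

```python
import math

# Rough "Veeam Pod" sizing sketch based on the ~2TB of source VMs per job
# rule of thumb above. All figures are illustrative assumptions, not official
# Veeam sizing guidance.

def pods_needed(total_vm_tb, tb_per_job=2.0, concurrent_jobs_per_pod=4):
    """Estimate how many jobs and proxy/repository pods an amount of VM data needs."""
    jobs = math.ceil(total_vm_tb / tb_per_job)
    pods = math.ceil(jobs / concurrent_jobs_per_pod)
    return jobs, pods

for tb in (10, 40, 120):
    jobs, pods = pods_needed(tb)
    print(f"{tb} TB of source VMs -> ~{jobs} jobs, ~{pods} pod(s)")
```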
There are a few things you might need to be aware of when it comes to vCloud Director - note that the VMs appear in vCenter as GUIDs, with no friendly names, though you can use the VCD-generated resource pools as backup containers.
When you restore, you may need to restore to a staging host before you re-import into VCD.
If you look into the detailed stats for the job, you should see "using proxy <PROXYNAME> [san]" if your proxies are operating in direct SAN mode.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Hi Lee,
First of all, cool environment!
Your assumptions are both correct: your setup is the most efficient, since it runs in SAN mode on a dedicated 10G network, but at some point it can hit the limits of a single "VeeamPod", and you could also have high-availability problems, since a lost pod makes you lose both the proxy and the repository.
Another design you could evaluate uses virtual proxies, running within the environment and using hotadd mode. The VMs will have vmxnet3 NICs running at 10G, and on the switches you can still create a dedicated VM portgroup pointing at the iSCSI network (via VLAN rather than physical separation, I suspect). You can then use the pods only as repositories for Veeam jobs.
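If you want to script that part, a minimal pyVmomi sketch for adding the VM-type portgroup could look like the following - the host name, vSwitch name and VLAN ID are placeholders, and it assumes a standard vSwitch rather than a distributed switch:

```python
# Minimal pyVmomi sketch: add a VM-type portgroup on the vSwitch that already
# carries the iSCSI VMkernel ports, so a virtual proxy can ship its data over
# that network. Host name, vSwitch name and VLAN ID are placeholder
# assumptions; a standard vSwitch (not a distributed switch) is assumed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")

spec = vim.host.PortGroup.Specification()
spec.name = "VM-iSCSI-Proxy"    # new VM portgroup for the virtual proxy
spec.vswitchName = "vSwitch1"   # vSwitch already uplinked to the iSCSI VLANs
spec.vlanId = 100               # iSCSI VLAN ID (placeholder)
spec.policy = vim.host.NetworkPolicy()

host.configManager.networkSystem.AddPortGroup(portgrp=spec)
Disconnect(si)
```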
Another limit I see with direct SAN mode is restore: this method can be used for backup, but not for restore, so you should consider at least one virtual proxy to be used for restore purposes.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Cheers guys
To answer the questions:
The Supermicro storage boxes have been fitted out with a single E5-2620 CPU (6 cores), as originally we intended to use them just for iSCSI/NAS/NFS, where horsepower isn't important. I can always chuck an extra CPU in there, or even a couple of 8-core CPUs. The Dell R620 servers have dual E5-2660s, so they're not short of power, but I want to reserve that for the VM hosting.
Originally, yes, we had planned to "drink our own medicine" and run everything as VMs - as you say, you gain inbuilt HA protection. I just don't like the idea of taking the iSCSI traffic (which I've gone to a lot of effort to keep segregated, efficient and away from the rest of our infrastructure) and dragging it up to a VM as a mapped VM network. I'm OK with the idea of hotadd mode, although I had assumed it isn't going to be as efficient as direct SAN.
(We're aware of the great joys of using vCD; it's got some good functionality but it's not without its own set of bloat and issues.)
Right, some hard data. Currently I have a single backup job which includes all of our AD and management VMs. It processes each VM in turn (no concurrency), and at the points where it is processing a VM the CPU usage can be reasonably high - 50% or more across all the cores. I suspect that this is a byproduct of being able to access the source data at high speed, plus the compression (Veeam reports over 200MB/s as its average, and I've seen transfers of up to 280MB/s from the SAN - we'll get more when I move onto a storage pool with a decent number of disks and a sensible RAID level).
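For anyone wanting to turn those rates into backup windows, this is the simple arithmetic I'm working from (the dataset sizes below are just examples):

```python
# Quick backup-window arithmetic from the rates above (200MB/s average,
# 280MB/s peak off the SAN). The dataset sizes are illustrative only.

def backup_window_hours(source_gb, avg_mb_per_s):
    """Hours needed to read a full backup at a given average source rate."""
    return source_gb * 1024 / avg_mb_per_s / 3600

for gb in (500, 2000, 10000):
    print(f"{gb:>6} GB at 200 MB/s -> {backup_window_hours(gb, 200):5.1f} h, "
          f"at 280 MB/s -> {backup_window_hours(gb, 280):5.1f} h")
```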
Ah yes I can see it says "Using source proxy VMware Backup Proxy [san]" in the report.
I'm not overly bothered about the lack of HA in this "VeeamPod" style setup. If these servers were not running the full Veeam suite, they would at the very least be the back-end datastores, so if we lost one we'd kind of lose the whole stack anyway. It was more about scaling up.
I'll spin up a Server 2008 R2 VM and put the proxy software on it to see how it fares. Won't it just go unused, though? The default priority is to go for direct SAN first?
The only other comment is that Veeam can be a bit sluggish when it comes to:
a) Browsing the vCenter for VMs, or browsing datastores
b) Setting about running a backup job, getting a list of VMs to process, etc.
Luca - you mentioned creating a portgroup pointing to the iSCSI network - surely this won't be required? After all, the proxy hot-adds the VMDK, which is done at the ESXi level. So the VM just needs a network to get to the main Veeam server and its storage repository?
You are right about the restore - clearly Veeam cannot write to the SAN, so this traffic went via our management VLAN, which I presume must be NBD, to restore the files and VMDKs?
Yes, it's all great kit to play with, but unfortunately I had to buy it all personally, so I've got a bit at stake here.
cheers
Lee.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Some replies:
- Depending on the number of vCenter objects, yes, browsing can sometimes be slow.
- Well, since you are already using iSCSI (so you're on Ethernet), the uplinks to the SAN network are already there, and you have VMkernel portgroups on them. Adding another portgroup, but of VM type, will let you use that network instead of the management one. You are right that hotadd reads data from the storage using the ESXi stack, but it then has to ship the data to the VeeamPod! So a dedicated VM portgroup over the iSCSI network is a good configuration for virtual proxies.
- Yes, a restore without a virtual proxy runs over the NBD connection.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 1531
- Liked: 226 times
- Joined: Jul 21, 2010 9:47 am
- Full Name: Chris Dearden
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
That's quite a lot at stake!
If you want to test hotadd, you'll have to explicitly select the VM proxy rather than relying on automatic proxy selection.
If you end up running a lot of jobs, it may be worth moving the Veeam DB to a full SQL Server - the Express edition never seems to be as efficient. Although we do cache a lot of vCenter data, that cache has to populate, which relies on the speed of the vCenter API - and that doesn't get any faster at scale!
280 MB/sec for the full backup is pretty respectable - the fastest I have seen was around 480, though again target storage speed was the limiting factor there.
If you kept a single virtual proxy for restores, it would preferentially be used over the physical box, as hotadd beats NBD in the proxy top-trumps stakes.
Hotadd isn't without its own little issues - generally a little more overhead per VM than direct SAN.
Allow around 2GB of RAM per concurrent job on the proxy servers too (+4GB for the OS).
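As a rough sizing sketch based on those per-job figures (a couple of cores and ~2GB RAM per concurrent job, plus the OS overhead - starting points only, not official numbers):

```python
# Rough proxy sizing from the rules of thumb in this thread: ~2 cores and
# ~2GB RAM per concurrent job, plus ~4GB for the OS. Starting points only.

def proxy_resources(concurrent_jobs, cores_per_job=2, ram_gb_per_job=2, os_ram_gb=4):
    cores = concurrent_jobs * cores_per_job
    ram_gb = concurrent_jobs * ram_gb_per_job + os_ram_gb
    return cores, ram_gb

for jobs in (1, 3, 6):
    cores, ram = proxy_resources(jobs)
    print(f"{jobs} concurrent job(s) -> ~{cores} cores, ~{ram} GB RAM")
```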
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Hi guys
Luca:
- Well, since you are already using iSCSI (so you're on Ethernet), the uplinks to the SAN network are already there, and you have VMkernel portgroups on them. Adding another portgroup, but of VM type, will let you use that network instead of the management one. You are right that hotadd reads data from the storage using the ESXi stack, but it then has to ship the data to the VeeamPod! So a dedicated VM portgroup over the iSCSI network is a good configuration for virtual proxies.
I'm not sure I agree with that, or perhaps I don't understand the logic. I have two VLANs (and two separate subnets) for iSCSI traffic, each mapped to an individual 10Gb adapter, so true multipathing without any networking trickery (which is not recommended these days - it's best to leave the iSCSI initiators to sort out failover rather than link bonding). The iSCSI VMkernels and switches are configured to iSCSI best practice, i.e. jumbo frames, flow control, delayed ACK off, etc. I have full control over the network utilisation using NIOC, so my VM traffic will always get through, regardless of vMotion or iSCSI. (By the way, you want to see a multiple-NIC vMotion at 20Gbps - it takes 4 seconds to move a VM from one host to another!)
So my ESXi servers talk to the SAN in a very efficient, optimal manner. The Veeam server is physical and external, so using direct SAN is a complete offload: the VMware stack never sees any of the CPU or I/O.
If I were to create VM networks for my iSCSI VLANs and use a backup proxy in direct SAN mode, I would be breaking some of my rules: I'd be mixing storage traffic with VM traffic and starting to lose control. Not to mention it's a bit of a pig to set up, as I would be running the iSCSI initiator inside the VM with MPIO and multipathing over the underlying VM networks. Yuck.
So my current belief is that a B&R proxy should really be left alone in hot-add mode; after all, that's the benefit of using a VM for a proxy, as it is the only way hot-add can work. I also like the idea of having a single proxy around to be used for restores and avoid the NBD route.
I have just spun up a Server 2008 R2 VM and installed the proxy software on it, then configured a backup. I configured the proxy to only use hot-add, and configured the backup to use this particular proxy. The first bit of good news is that it all worked as configured (hey, don't laugh - plenty of products don't do the basics of what they claim to, so this is a reassuring difference!). The job started, the snapshot was taken, and then the disks were released and hot-added to the proxy.
Speeds weren't great. The VM being backed up has two disks, an OS disk and a data disk:
Hard Disk 1 (20.0 GB) 11.7 GB read at 51 MB/s [CBT]
Hard Disk 2 (10.0 GB) 220.0 MB read at 22 MB/s [CBT]
01/11/2012 09:05:36 :: Busy: Source 98% > Proxy 97% > Network 3% > Target 3%
Right enough, I witnessed around 2.5% maximum on my network adapter, so around 250Mbit.
The restore was better - around 6 - 8% NIC utilisation.
I'm assuming I have an optimal setup here too - the 2008 R2 B&R proxy is running with a pvSCSI controller/disk and VMXNET3. As the source/proxy were at 98%/97%, I assume there's no more to be had from this and that this is about right for a proxy?
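In case the percentages are confusing, this is the conversion I'm doing between NIC utilisation on the 10GbE adapter and actual throughput:

```python
# Converting NIC utilisation on a 10GbE vmxnet3 adapter into throughput, to
# sanity-check the numbers above: 2.5% of 10Gbit ~= 250Mbit, i.e. ~31MB/s of
# compressed data leaving the proxy while it reads ~51MB/s from the source.

LINK_GBIT = 10  # 10GbE uplink

def nic_pct_to_mb_per_s(pct, link_gbit=LINK_GBIT):
    return pct / 100 * link_gbit * 1000 / 8  # Gbit -> Mbit -> MB/s

for pct in (2.5, 6, 8):
    print(f"{pct:>4}% of {LINK_GBIT}GbE ~= {nic_pct_to_mb_per_s(pct):6.1f} MB/s")
```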
cheers
Lee
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
As an update, I noticed the proxy was 100% CPU bound, so I added some more cores.
Going past 8 cores doesn't add anything, to be honest. Frankly, there's not much more to be had past 4.
However, with 8 cores the CPU is near as dammit 100% and I get a smidge over gigabit speeds on the backup - 141MB/s on a full backup.
01/11/2012 11:01:36 :: Busy: Source 98% > Proxy 87% > Network 19% > Target 13%
So definitely, unless there's a tweak for the proxy, it seems to be a hellish resource hog compared to direct SAN.
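Putting the two modes side by side with the rough numbers from this thread (single-job samples, nothing scientific):

```python
# Very rough side-by-side of the two transport modes using the single-job
# figures quoted in this thread: direct SAN on the physical box (6 cores at
# roughly 50% busy, ~200MB/s average) versus hotadd on the 8-vCPU virtual
# proxy (~100% busy, 141MB/s). Back-of-envelope only.

runs = {
    "direct SAN (physical, 6 cores ~50% busy)": {"mb_per_s": 200, "busy_cores": 6 * 0.5},
    "hotadd (virtual, 8 vCPU ~100% busy)":      {"mb_per_s": 141, "busy_cores": 8 * 1.0},
}

for mode, r in runs.items():
    per_core = r["mb_per_s"] / r["busy_cores"]
    print(f"{mode:42s} {r['mb_per_s']:4d} MB/s ~= {per_core:5.1f} MB/s per busy core")
```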
cheers
Lee.
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
Hi Lee, just a quick reply about the part of my previous post that was confusing you: I was saying that if you want to have a virtual proxy server, and it needs to talk to the repository over the iSCSI network, you need to create a VM-type portgroup on the vSwitch managing iSCSI. Hope it's clearer now; you are right about the other issues like VLANs and so on.
About speed, direct SAN is supposed to be faster than hotadd, and there is not much to be done about it. Sometimes hotadd is preferred for other advantages like proxy scale-out and restore, but NOT for speed. There are situations where hotadd can run at 80-90% of direct SAN, but I suspect that the faster the storage being backed up, the bigger the difference between direct SAN and hotadd.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veeam ProPartner
- Posts: 27
- Liked: 4 times
- Joined: Jan 31, 2012 2:00 pm
- Full Name: Giorgio Colucci
- Location: Italy
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
chrisdearden wrote: 280 MB/sec for the full backup is pretty respectable - the fastest I have seen was around 480, though again target storage speed was the limiting factor there.
What environment did you use to achieve this performance?
I'm evaluating a Dell environment (3 x R720 as VMware hosts, 1 x EqualLogic PS with 16 x 900GB SAS in RAID 50, 10Gbit iSCSI SAN) and I need to find the best backup-host hardware configuration.
thank you
Giorgio
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: High end vSphere 5.1 / EMC iSCSI 10Gb Deployment Advice
I had a bunch of EQLs at my last place. They are pretty good units, to be fair; performance is somewhere between the MD-series arrays and something a bit more enterprise like VNX/Compellent. The only gotcha is that if you end up with a load of EQLs in a few years, you could stand back and work out that it would have been cheaper to get the enterprise unit in the first place.
As an update to this thread, our Veeam jobs have been running each day and it is absolutely Ronseal - it does exactly what it says on the tin. No issues whatsoever.