-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
3PAR at full speed
Hi all,
A small post to give some feedback about speed, because Veeam is all about speed.
Since I won the 2012 speed contest last week at the French Veeam Experts club, Pierre-Francois@Veeam advised me to post that screenshot on the forum, so here I am.
This is a screenshot of the speed of a full backup from an HP 3PAR 7200 with 40*450GB 10K (one RAID5 5+1 CPG) to an HP P2000 with 11*2TB 7.2K RAID6, using one physical proxy with Direct SAN access over an 8Gb FC network.
It's just awesome: 810 MB/s, plus a good source-side data reduction job from Veeam.
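For anyone wanting to translate that processing rate into a backup window, here is a minimal sketch; the dataset sizes are hypothetical examples, not the figures from this job.

```python
# Quick backup-window estimate at the processing rate from the screenshot.
# The dataset sizes below are hypothetical examples, not the OP's real figures.

def full_backup_window_hours(dataset_tb: float, rate_mb_per_s: float = 810) -> float:
    """Hours needed to read a dataset of the given size at a steady rate."""
    dataset_mb = dataset_tb * 1024 * 1024
    return dataset_mb / rate_mb_per_s / 3600

for tb in (5, 10, 20):
    print(f"{tb} TB full backup at 810 MB/s -> ~{full_backup_window_hours(tb):.1f} h")
```

At this rate, a 10 TB full backup would complete in roughly 3.6 hours.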
Eric.
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: 3par full speed
Awesome, indeed!
-
- Veeam ProPartner
- Posts: 208
- Liked: 28 times
- Joined: Jun 09, 2009 2:48 pm
- Full Name: Lucio Mazzi
- Location: Reggio Emilia, Italy
- Contact:
Re: 3par full speed
Hi Eric,
Truly awesome. Could you by any chance post a similar screenshot of an incremental run of the same job? And what type of incremental are you using?
I don't want to hijack the thread, I'm just curious: in my setup I regularly see 200 MB/s for full backups, but only 10-20 MB/s for the reverse incremental runs of the same jobs, which could make sense considering that reverse incrementals generate much higher I/O on the target.
Thanks!
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: 3par full speed
I am using reverse incremental (45 retention points).
Of course, a reverse incremental run is far slower than a full one because of the I/O profile (one read + two writes, fully random). I am seeing between 12 MB/s and 16 MB/s, which is about the maximum theoretical throughput on random access for that RAID6 group.
The P2000 also has 10K SAS disks used as the target for replication jobs (RAID5 7+1); they can handle much more I/O than SATA disks. On the replication jobs, the bottleneck is always the source (3PAR), which varies between 70 MB/s and 95 MB/s on random reads (CBT).
The sequential reads/writes are very impressive, by the way.
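As a rough cross-check of those numbers, here is a small sketch of how the "one read + two writes" profile maps to throughput; the IOPS budget and the 1 MB block size are assumptions, not measured values from this setup.

```python
# Back-of-the-envelope reverse incremental throughput when the repository is
# the bottleneck, assuming ~3 random IOs (1 read + 2 writes) per changed block.
# The IOPS budget and block size below are illustrative assumptions.

def reverse_incremental_mb_per_s(random_iops_budget: float,
                                 block_size_kb: float = 1024,
                                 ios_per_block: int = 3) -> float:
    """MB/s the job can sustain given the target's effective random IOPS."""
    blocks_per_second = random_iops_budget / ios_per_block
    return blocks_per_second * block_size_kb / 1024

# Hypothetical 40-50 effective random IOPS for an 11-disk 7.2K RAID6 group
# once the RAID6 write penalty is taken into account.
for iops in (40, 50):
    print(f"{iops} IOPS -> ~{reverse_incremental_mb_per_s(iops):.0f} MB/s")
```

Which lands right in the 12-16 MB/s range reported above.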
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Veeam ProPartner
- Posts: 252
- Liked: 26 times
- Joined: Apr 05, 2011 11:44 pm
- Contact:
Re: 3par full speed
I wish we had this hardware for the 100TB we need to back up
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 3PAR at full speed
I heard it through the grapevine that HP is having a hard time coping with the current demand for 3PAR... and the above is a perfect demonstration why: this storage is a beast.
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: 3PAR at full speed
And the recent announcements are awesome too:
Looking at list prices, the new 980GB SSD is cheaper than the previous 400GB SSD. The 3PAR operating system 3.1.3 release, which will be available in January, has been tweaked to lower latency to around 500 microseconds on the 4 KB random read test. Thanks to the faster flash drives and the software improvements, the controllers in the StoreServ 7450 can now push around 900,000 IOPS. By the way, HP says the new flash drives are 50 percent less expensive than the ones used in earlier models.
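Just to put those vendor figures in perspective, here is a trivial conversion of the quoted numbers; nothing below is measured, it only restates the marketing specs.

```python
# Converting the quoted 4 KB random-read figures into bandwidth and queue depth.
# These are the vendor's numbers, not measurements.

iops = 900_000
block_kb = 4
latency_s = 500e-6                                # ~500 microseconds

bandwidth_gib_s = iops * block_kb / 1024 / 1024   # GiB/s of 4 KB reads
outstanding_io = iops * latency_s                 # Little's law: L = X * R

print(f"{iops} IOPS at {block_kb} KB -> ~{bandwidth_gib_s:.1f} GiB/s of reads")
print(f"At {latency_s * 1e6:.0f} us latency that implies ~{outstanding_io:.0f} IOs in flight")
```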
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Veeam Software
- Posts: 116
- Liked: 9 times
- Joined: Jan 17, 2011 4:04 pm
- Full Name: Jason Leiva
- Contact:
Re: 3PAR at full speed
Very impressive!!
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 3PAR at full speed
I don't want to bash HP, but that 900K IOPS figure is 100% read. Unless they pre-load all my data at the factory before shipping me the unit, and I never update any data on it, that number is not so useful. I'm NOT saying 3PAR is a slow machine, it would probably do around 500-600K IOPS on a mixed read/write workload, but please don't make it a speed race. It's about requirements, use cases and features. Speed, honestly, is taken for granted once you are running an all-flash array, whichever vendor you choose.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 5
- Liked: 3 times
- Joined: Nov 18, 2013 9:13 am
- Contact:
Re: 3PAR at full speed
dellock6 wrote: I don't want to bash HP, but that 900K IOPS figure is 100% read. Unless they pre-load all my data at the factory before shipping me the unit, and I never update any data on it, that number is not so useful. I'm NOT saying 3PAR is a slow machine, it would probably do around 500-600K IOPS on a mixed read/write workload, but please don't make it a speed race.
I think you missed the point. For production systems it is very important to keep the backup window as short as possible. With various techniques on 3PAR, VMware and of course Veeam Backup, it is possible to fetch data at 1 GB/s for a single job, as shown in the OP's screenshot. This reduces the time your VMs spend snapshotted and the time it takes to secure your servers. I am getting the same numbers for full backups and ~400 MB/s for incremental backups. Unfortunately, VMware API calls are what slow down our backup jobs at the moment.
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: 3PAR at full speed
Luca is quite correct that speed is taken for granted when using an all-flash array.
This is a reverse incremental run that processed a customer's workloads last night:
I cannot remember what we see on an active full. Over 1000 MB/s, I am sure.
Our setup
VMware 5.1 running on Dell R620 ESXi with dual 10Gb vDS uplinks
Arista 7050 cloud optimised switches
Pure Storage FA-420 All Flash Array
Physical Server running as Veeam proxy and storage repository all in one (Single E5-2620, 24 x 1TB 2.5 SAS in RAID-6)
10Gb iSCSI throughout
This looks good, but when I tell you that the Veeam server is some 40 miles away from the SAN and this is an offsite backup, it's a bit more impressive. We have a 20Gb/s fiber ring which runs around the south of the UK.
In terms of benchmarking, at a 32K block size I can max out 2 x 10Gb NICs reading data from the array. At a 4K block size I think we've seen around 200K IOPS.
Mechanical disk has well and truly had its day, I'm afraid.
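For what it's worth, here is a quick consistency check on those benchmark figures; the usable-bandwidth factor on the NICs is an assumption.

```python
# Cross-checking the figures above: at 4 KB the array is the limit, at 32 KB
# the two 10GbE links are. The 0.92 efficiency factor is an assumption.

nic_gb_s = 2 * 10 / 8 * 0.92                 # ~2.3 GB/s usable on 2 x 10GbE

def iops_to_gb_s(iops: float, block_kb: float) -> float:
    """Convert an IOPS figure at a given block size into GB/s."""
    return iops * block_kb / 1024 / 1024

print(f"200K IOPS at 4 KB  -> ~{iops_to_gb_s(200_000, 4):.2f} GB/s (well below the NIC limit)")
print(f"NIC limit at 32 KB -> ~{nic_gb_s / (32 / 1024 / 1024):,.0f} IOPS to saturate ~{nic_gb_s:.1f} GB/s")
```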
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 3PAR at full speed
m1m1n0 wrote: I think you missed the point.
I'm almost sure I did not.
Per se, the hyper speed of an all-flash array is useless. It becomes useful, and so justifies its price per gigabyte, if it fits the customer's use case. And the use case is not only about the raw speed of the storage, otherwise everyone would run their workloads on PCIe SSD cards like Fusion-io or the like. Why doesn't that happen? Because without the ability to share them among ESXi servers you lose vMotion and HA, and that's just one example.
Instead, you obviously look at the overall speed, but also at other features like snapshots, replication, thin provisioning, deduplication, VAAI support, or even Veeam storage snapshot support as with 3PAR. My point is, if you picked an all-flash array ONLY based on its speed, you did it wrong: a few months from now the latest and fastest model on earth will surely be surpassed by a new model, or by a competitor.
I'm sure cronosinternet did not select Pure Storage only because it's an all-flash array.
Furthermore, when talking about Veeam backups, overall performance comes from a balanced design between source, proxies, backup network and repositories. A single fast component is not going to give a great result without a proper overall design.
BTW: cronosinternet, your infrastructure is outstanding!
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 5
- Liked: 3 times
- Joined: Nov 18, 2013 9:13 am
- Contact:
Re: 3PAR at full speed
dellock6 wrote: I'm almost sure I did not. Per se, the hyper speed of an all-flash array is useless.
Oh, you surely did. The original graphs have nothing to do with all-flash, do they? I get the same performance on a 3PAR 7200 with two shelves of disks, ESXi 5.5 hosts connected via 8Gbit/s Fibre Channel, and a physical server dedicated to collecting backups directly from the SAN via the FC fabric. The results shown by the OP are legit, real-life figures that I can confirm in my own, quite busy environment.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: 3PAR at full speed
But the OP is just doing streaming reads, which has nothing to do with IOPS. I would have been able to show you a similar graph 10 years ago with the right storage (assuming Veeam had even existed back then). There is nothing really special about storage that can deliver 800 MB/s of streaming reads, whether spinning media or not. Don't get me wrong, it's a great performance number from Veeam, but it doesn't really say anything about the storage at all. I would fully expect any properly configured mid-range enterprise system to achieve the same results.
I believe Luca's comments were referring to the 900K IOPS number from 3PAR and not to the original post. I'm sure that number is just as non-real-world as every other vendor's quoted IOPS figure: reads of the same block over and over as fast as possible, so effectively the speed of the cache/processor.
-
- Expert
- Posts: 179
- Liked: 8 times
- Joined: Jul 02, 2013 7:48 pm
- Full Name: Koen Teugels
- Contact:
Re: 3PAR at full speed
What are you using as target storage to get 400 MB/s on reverse incrementals?
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 3PAR at full speed
Thanks Tom, that's exactly what I was trying to explain from the beginning. The comment was only about the 900K IOPS of the all-flash 3PAR. Since it's obtained with 100% reads it is unrealistic, also because we don't know, for example, the block size or the latency during those operations.
About speed, a single SATA mechanical disk can do up to 90 MB/s on sequential reads, so even 10 disks in RAID0 can give 900 MB/s. Again, I'm NOT saying the OP has bad numbers, quite the opposite, but as Tom stated, a full backup is 100% reads. Better to look at an incremental backup, where the storage needs to read random blocks across its disks. I'm far more impressed by the 400 MB/s of the reverse incremental.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veeam ProPartner
- Posts: 252
- Liked: 26 times
- Joined: Apr 05, 2011 11:44 pm
- Contact:
Re: 3PAR at full speed
Just wanted to drop a line here:
We are getting half the speed of that 3PAR transfer with a $15,000 SuperMicro system holding 12x4TB SATA (36TB after RAID and formatting) that can scale up to 100TB. Our bottleneck is the source server (a Dell R710 with 12 2TB SATA drives on a slow PERC controller). I think we could go much higher if we had a faster source system.
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: 3PAR at full speed
To answer a few questions......
Why a Pure Storage all-flash array?
We are an enterprise cloud provider, which means reliability/HA is a must. Every VM we host is on shared storage to facilitate this. The reason we bought Pure Storage was primarily the performance, not because we needed hundreds of thousands of IOPS, but because on a mechanical/hybrid array performance becomes your limiting factor quite quickly.
For example, consider this business case. We have a customer who asked about moving their Oracle/SAP system to us. Their SAP consultants/calculator believe they need around 7000 IOPS. This is a lot on a shared platform backed by mechanical disk. In my last company we would have talked to them about dedicated storage, which breaks the cloud model, becomes a tough sale to win, etc. With our flash array I can have a discussion revolving more around service and quality, without having to drag the (typically non-technical) attendees into boring discussions about performance.
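To illustrate why ~7000 IOPS is painful on spinning disk, here is a rough spindle-count sketch; the per-disk IOPS, read/write mix and RAID penalty are assumptions for illustration, not sizing advice.

```python
# Rough spindle count needed to serve a frontend IOPS target on mechanical disk.
# Per-disk IOPS, read/write mix and RAID write penalty are illustrative assumptions.
import math

def spindles_needed(frontend_iops: float, read_ratio: float,
                    per_disk_iops: float, raid_write_penalty: int) -> int:
    """Backend IOPS = reads + writes * RAID write penalty, spread across disks."""
    backend = frontend_iops * read_ratio + frontend_iops * (1 - read_ratio) * raid_write_penalty
    return math.ceil(backend / per_disk_iops)

# e.g. 7000 IOPS, 70/30 read/write, 10K SAS at ~140 IOPS each, RAID5 penalty of 4
print(spindles_needed(7000, read_ratio=0.7, per_disk_iops=140, raid_write_penalty=4))  # ~95 disks
```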
There are a great many other reasons why we chose Pure Storage, mainly that right now they are the only guys doing it right. The rest of the world seems to be treating SSDs as "disks" and using conventional hardware, RAID sets, etc. That just doesn't work with SSDs; they have to be treated very differently.
Veeam Target Storage
We have a ridiculously basic approach with Veeam. We use dedicated physical servers with local storage that take on the roles of Veeam management, backup proxy and backup repository. The only thing "outside" this box is a VM running the backup proxy role, which is required for restores.
Why did we do it this way? Well, in our environment we hate physical kit: for everything we have, we don't care if it breaks, because there's another server/controller/switch/router, etc. It's all N+1, N+2 or higher in some cases.
We have fast SANs which can be read directly by Veeam, so we chose to keep all the processing away from our VMware clusters, which means no VMs doing "hot add" backups. So why bother building a physical proxy server, a physical Veeam management server (admittedly this could be a VM) and a physical backup repository when we can conveniently combine the three roles?
And it is likely that the SAN read performance (on our EMC stuff, not the Pure) and the backup repository disk speeds will be the limiting factors.
They are boxes we designed ourselves using commodity hardware. It's a Supermicro 2U case which takes 24 x 2.5" disks and a decent-sized motherboard. We put in a Xeon E5-2620, 8GB RAM, an LSI 9266-4i RAID card and 25 x 1TB Seagate disks in RAID-6 to maximise storage capacity and provide a good level of protection. Quite possibly the slowest disk configuration we could have opted for. The board has a pair of 10Gb uplinks as well as a pair of 1Gb ports for management. We think we might get CPU-limited, so we might add a second CPU. Other than that, we're done.
This gives us 20TB of usable capacity per node (we currently have 4) and of course, if the hardware fails in some way:
a) Customers won't be impacted unless they need a restore urgently, or until backups run overnight
b) We could simply pull the disks and place into a spare chassis
c) Failures to date = zero, everything is dual PSU and decent quality
As I said, we have a VM running the proxy role purely because Veeam cannot write data back to your SANs for restores; we'd rather avoid the NBD approach, so the VM allows for hot-add restores.
HTH!
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 3PAR at full speed
Hi Lee,
Thanks so much for your input, highly appreciated. Many of your reasons for going with Pure are the same ones we have while evaluating SSD solutions these weeks, such as SolidFire, plus the possibility with that one to size IOPS and ultimately bill for them.
About the Veeam design, which is the main topic of this forum, a couple of drawbacks of dedicated physical servers (sometimes called Veeam pods) are usually:
- as you noted, they are not redundant. If you lose one, all the backups in it are lost. If a customer then asks for a restore, you are violating the SLA (if one was defined)
- with v7 you are not using parallel processing to its full potential. I suppose each job is tied to a specific pod, and once all 6 cores in the pod are in use, there is no further optimization.
My design was different: small and fast physical proxies with no local storage, only a couple of virtual proxies for restores (usually powered off so they are not selected for backups), a virtual Veeam server, and scale-out storage built with Ceph and cheap Supermicro servers. Ceph is an object store and exposes its storage via NFS. Veeam deployed its Linux components on it, and Ceph is registered directly as a repository. If a Ceph node fails, NFS fails over to another node and no data is lost. There are other possibilities with commercial storage as well, for example Gridstore.
Maybe you and others can take some ideas from this kind of design. My primary design goal was a complete scale-out architecture that can be expanded without having to redesign it at some point. Proxies can be added as needed, and Ceph can scale to hundreds of nodes.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Influencer
- Posts: 21
- Liked: 9 times
- Joined: Oct 31, 2012 1:05 pm
- Full Name: Lee Christie
- Contact:
Re: 3PAR at full speed
I looked at SolidFire - they work out more expensive until you buy tonnes of the stuff. At the time I looked, apart from Pure/SolidFire there was no one else around with a solution designed for SSDs.
Your arguments about the design are valid; however, I guess it comes down to personal preference.
In our world we use a very small footprint in terms of rackspace and power, and a physical server used only as a proxy would break this model. Also, we are 100% 10GbE based. Each server needs two 10Gb ports for redundancy / iSCSI pathing, and we value our 10Gb ports because the switches are quite expensive. I would not want to use a physical proxy approach for these reasons.
I wouldn't want to use virtual proxies either, because you are then introducing backup load into your cluster - we love the fact that we can offload everything away from VMware and the ESXi servers. In addition, if the virtual proxy needs to see the SAN over iSCSI, then you are taking your storage fabric and pulling it into your VMware networking fabric. That's also against our design.
Everyone's architecture will be different and so will have different bottlenecks. Even though our datacentres are miles apart, we have tonnes of (free) bandwidth, so we don't have to be concerned with (WAN) optimisations. In our "pod" configuration I think 20TB of storage matches well with the processing power of the server. Remember, we can always add a second CPU to get 12 cores. So this approach is scale-out in as much as when a "pod" is full, you simply deploy another one.
Also, we offer offsite backup as standard, so that means a Veeam pod in a different datacentre to the source VMs. Trying to achieve this simply and cost-effectively with a distributed architecture would be a different story, I think.
My personal opinion is that the bottleneck with Veeam has more to do with VMware and your SAN. If you kick off multiple backups concurrently, VMware will be creating a great many snapshots at the same time; this really needs offloading to the SAN. I/O is increased whilst a snapshot is in place and you are also reading from your SAN, so I think the SAN will have a pretty hard time of it. That's another reason why we chose Pure (SSD): we don't have to be overly concerned about the effect backup I/O might have on production I/O.
I've never looked at Ceph, however from 15 years of experience I do know this: self-built solutions that try to be scalable are fantastic, until the day they fail. Why do any of us buy into EMC/3PAR/SolidFire/NetApp/Pure, etc.? It's because the DIY approach just isn't good enough. I am happy with the pod approach because I know that, short of a localised disaster, I can simply place the disks into a replacement chassis and the server will boot as if nothing had happened. I would not want to place my backup data on a complex storage medium, as this adds risk.
Lastly, there is cost. Under the hosting partnership we pay per VM backed up by Veeam, so it doesn't matter to us whether we have a tonne of Veeam servers. We also use the Standard edition of Veeam to keep costs down (remember, I said we don't need WAN optimisation, etc.). It is possible that at a later date we might look at Enterprise to get the single-pane-of-glass management, but it's also more likely that by then we'll build our own.
cheers
Lee.
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 3PAR at full speed
Yuki wrote: We are getting half the speed of that 3PAR transfer with a $15,000 SuperMicro system holding 12x4TB SATA (36TB after RAID and formatting) that can scale up to 100TB.
Sounds like you are talking about the backup repository here? The OP uses an HP P2000 with 11*2TB 7.2K RAID6 as the backup repository; he has the 3PAR for the production workload.
-
- Expert
- Posts: 179
- Liked: 8 times
- Joined: Jul 02, 2013 7:48 pm
- Full Name: Koen Teugels
- Contact:
Re: 3PAR at full speed
Do you use forward or reverse incremental?
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: 3PAR at full speed
cronosinternet wrote: My personal opinion is that the bottleneck with Veeam has more to do with VMware and your SAN. If you kick off multiple backups concurrently, VMware will be creating a great many snapshots at the same time; this really needs offloading to the SAN. I/O is increased whilst a snapshot is in place and you are also reading from your SAN, so I think the SAN will have a pretty hard time of it. That's another reason why we chose Pure (SSD): we don't have to be overly concerned about the effect backup I/O might have on production I/O.
Thanks Lee for the additional thoughts. You touched on a topic that is often overlooked: production storage is not designed for heavy backup activity, and at times it can become the biggest bottleneck for backups. Sure, its main goal is to run VMs, but backup operations also need to be evaluated. Nice that you did.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veeam ProPartner
- Posts: 252
- Liked: 26 times
- Joined: Apr 05, 2011 11:44 pm
- Contact:
Re: 3PAR at full speed
Gostev wrote: Sounds like you are talking about the backup repository here? The OP uses an HP P2000 with 11*2TB 7.2K RAID6 as the backup repository; he has the 3PAR for the production workload.
Yes, but what I'm saying is that if we upgrade our production system to something better than the old Dell with 12 SATA drives and a PERC, we should see performance even closer to the 3PAR the OP has. We are getting a very high processing rate even on low-end systems that are built for this (compared to typical SANs).
We use a SAN from a big manufacturer at the data center and paid $60K for a starter package (two shelves with all the features such as replication, unlimited disks, etc., plus a couple of servers). A purpose-built hosting and storage platform can outperform this and be cheaper by half.
-
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
- Contact:
Re: 3PAR at full speed
cronosinternet wrote: My personal opinion is that the bottleneck with Veeam has more to do with VMware and your SAN. If you kick off multiple backups concurrently, VMware will be creating a great many snapshots at the same time; this really needs offloading to the SAN. I/O is increased whilst a snapshot is in place and you are also reading from your SAN, so I think the SAN will have a pretty hard time of it. That's another reason why we chose Pure (SSD): we don't have to be overly concerned about the effect backup I/O might have on production I/O.
You are right. That's why we love the integration between Veeam and HP products like 3PAR and LeftHand (and other manufacturers are coming soon!): the VMware snapshot impact is reduced to its minimum. In the above setup, the VMware snapshot lasts 37 seconds on average for each VM being processed, so the commit is very fast.
By the way, 3PAR pricing (after rebate) is very attractive at the moment, as is that of the VNX2 arrays. Nimble Storage is attractive too.
I agree, you can build monster storage with commodity hardware. I have built storage servers for content delivery networks and session sharing for big websites using ZFS: you put in enough RAM for the ARC, one RAID adapter driving a 25-slot shelf, and three internal SATA SSDs (one for the L2ARC and a RAID1 pair for the ZIL). It handled a constant heavy load (more than 30K IOPS at 60/40). But as was said before, it was very difficult to maintain over time and too risky.
Eric.
PS: Happy New Year!
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: 3PAR at full speed
emachabert wrote: very difficult to maintain over time and too risky
I just wanted to say that I agree with this 100%. This is the problem with ANY "do it yourself" approach. The vast majority of people tend to consider only CAPEX (capital expenses) when acquiring anything, and very few consider OPEX (operational expenses), which is where all the money goes in the long run. This is especially true in IT, because the most expensive IT resource is humans.
Sure, you might save 5-10K USD by building the storage yourself instead of buying it. But then you will end up spending the same 10K USD in man-hours yearly supporting it, and even more money (up to hundreds of thousands, depending on what business you are in) due to storage downtime that you will not be able to resolve as quickly and efficiently as you could with professional help from the storage vendor. Thus, the overall "savings" of this approach are questionable. It might still be a valid approach for a very small one-man IT shop with no money to spend whatsoever (though why this kind of shop is not using a cloud provider to host its infrastructure these days is beyond me), but IMHO it is a seriously wrong approach for any serious business making at least six figures in revenue quarterly.
The same goes for software: people tend to chase a few grand by buying a cheaper backup solution, not realizing they will spend a few extra hours weekly managing and troubleshooting this sub-par solution, doing restores slowly and inefficiently, etc. The result? The spending will exceed those savings in less than a month (in the form of IT staff salary and the cost of downtime).
If you really are looking to save money, look at OPEX first, and CAPEX second.
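A toy comparison along the lines of that argument, purely with made-up placeholder numbers:

```python
# CAPEX vs OPEX over a few years; every figure is a made-up placeholder.

def total_cost(capex: float, yearly_opex: float, years: int = 3) -> float:
    """Simple total cost of ownership: purchase price plus recurring cost."""
    return capex + yearly_opex * years

diy    = total_cost(capex=15_000, yearly_opex=10_000)   # self-built storage + admin man-hours
vendor = total_cost(capex=25_000, yearly_opex=3_000)    # turnkey array + support contract

print(f"DIY over 3 years:    ${diy:,.0f}")      # 45,000
print(f"Vendor over 3 years: ${vendor:,.0f}")   # 34,000
```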
-
- Enthusiast
- Posts: 29
- Liked: 9 times
- Joined: Jul 01, 2013 3:26 pm
- Full Name: Christopher
- Contact:
Re: 3PAR at full speed
Thought this might be of interest to some. A full backup done today on two VMs from a 3PAR 7300 array:
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: 3PAR at full speed
That's impressive, especially when looking at the bottleneck stats, which say: source
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Jun 05, 2014 3:08 pm
- Full Name: Jason
- Contact:
Re: 3PAR at full speed
I've done better on crappier storage (an HP MSA 2040).
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: 3PAR at full speed
Jason, now you should be able to use private messages. Thanks!