Enthusiast (original poster)
Nearline Sata drive performance
Is anyone here using Veeam Backup to back up to non-SAN-based nearline SATA drives? If so, what RAID level are you using, and how is the performance on the nearline storage?
Felix Buenemann (Enthusiast)
Re: Nearline Sata drive performance
I'm backing up to 6x 500GB SFF 5400 RPM drives in a RAID 5 setup. It was about the cheapest solution given the HP DL380 G5 and Smart Array P400 w/BBWC. Performance isn't great, maxing out at about 60 MByte/s, which can largely be attributed to the P400, about the crappiest controller I've come across so far. Of course those drives have a latency of around 15 ms, so they're not usable for random I/O.
However, given a fast controller like an Areca 1212 or 1680 and decent 7200 RPM SATA drives, you should get 300 MByte/s or more sequential throughput in RAID 5, or better yet RAID 10 if you can afford it. The Seagate Barracuda 1.5 TB drives are pretty fast and have good latency (get the latest firmware revision though, to avoid trouble).
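As a rough back-of-envelope sketch (the per-spindle figure below is an assumption, not a measurement; real numbers depend on the controller and drives):

```
# RAID 5 large sequential writes scale roughly with the (n - 1) data spindles
drives=6          # hypothetical 6-drive array
per_drive=70      # assumed MByte/s sustained sequential per 7200 RPM SATA disk
echo $(( (drives - 1) * per_drive ))   # ~350 MByte/s ceiling before controller overhead
```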
After all, even my sluggish RAID 5 is fast enough, because after the initial full backup Veeam does mostly reads with a large block size and only a small amount of writes. I usually see writes between 7-14 MByte/s during incrementals with changed block tracking.
Tom Sightler (VP, Product Management)
Re: Nearline Sata drive performance
I guess we're technically using a "SAN", but it's a super cheap Enhance Technology RS16i (iSCSI) with a bunch of Seagate Barracuda 1.5TB drives in the array. We can easily push 300-400MB/sec to this array for sequential operations, but it's not a barn-burner for random I/O, although not bad. We chose iSCSI because the array is sitting across a MetroE 1Gb link about 8 miles away from our datacenter. In the end we actually placed a Linux box in front of it and are doing direct-to-target backups, so we probably could have saved $1,500 or so and simply bought an RS16 SAS JBOD tray, but the iSCSI connectivity gives us some nice options.
Enthusiast (original poster)
Re: Nearline Sata drive performance
Is that 300-400MB/sec seq write performance or seq read performance?
Tom Sightler (VP, Product Management)
Re: Nearline Sata drive performance
Around 300-400MB/sec writes. The box can supposedly do 600MB/sec or more, but the iSCSI version we selected only has four 1Gb ports, so we couldn't get much higher than 400MB/sec (the theoretical max was about 480). We're also using the box in RAID 6 mode, which is not its fastest for writes.
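That ~480 figure is just the usable payload of the four links added together (assuming roughly 120 MByte/sec of iSCSI payload per 1Gb port after protocol overhead, which is an estimate, not a measurement):

```
# 4x 1GbE iSCSI aggregate ceiling; ~120 MByte/s usable per link is an assumption
echo $(( 4 * 120 ))   # ~480 MByte/s, the theoretical max mentioned above
```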
Reads topped out around 250-300MB/sec, although a single thread was around 160-200MB/sec. The top speeds were achieved using IOzone's throughput mode with 4 threads and Linux dm-multipath for load balancing across 4 QLogic iSCSI HBAs, with jumbo frames enabled. I could probably have improved the read performance by tweaking the dm-multipath settings to send a single block per link, but the speeds were good enough for me.
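In case it helps anyone reproduce the test, the run looked roughly like the sketch below; the record size, file sizes, and paths are illustrative rather than the exact values we used:

```
# IOzone throughput mode: 4 threads doing sequential write (-i 0) then read (-i 1)
# with a 1 MB record size; one scratch file per thread via -F
iozone -t 4 -i 0 -i 1 -r 1m -s 8g \
       -F /backup/tmp1 /backup/tmp2 /backup/tmp3 /backup/tmp4
```

And the "single block per link" tweak corresponds roughly to the round-robin I/O count in /etc/multipath.conf (RHEL 5 era syntax; a sketch, not our exact config):

```
defaults {
    path_grouping_policy  multibus   # group all paths and round-robin across them
    rr_min_io             1          # switch paths after every I/O (default is much higher)
}
```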
Influencer
Re: Nearline Sata drive performance
I am also using this box, but in a production environment. We have a second one where we dump all our replicas.
I am using 8x SATA Seagate ES.2 drives in RAID 5 and have 15 VMs running on it. Performance is good, but not as good as the previous poster's: I can get 3,600 IOPS at 115 MB/sec with a 50% read / 50% write Iometer workload when only one VM is powered on. When we do the first backup, the average speed is around 35 MB/sec, and that's with all 15 VMs powered on.
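For anyone who wants to approximate that mix from a Linux box instead of Iometer, something like the fio sketch below should be close; the block size, queue depth, and target file are assumptions, since I haven't posted the exact access spec:

```
# Approximate 50% read / 50% write random I/O mix (all parameters illustrative)
fio --name=mixed5050 --filename=/mnt/test/fio.dat --size=4g \
    --ioengine=libaio --direct=1 --rw=randrw --rwmixread=50 \
    --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting
```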
Tom Sightler (VP, Product Management)
Re: Nearline Sata drive performance
So you have the iSCSI version? Do you have load balancing enabled on your ESX host? If not, the 115MB/sec number is pretty good, as that's pretty much saturating a single 1Gb link. If so, I'm surprised that's all the performance you could get. Admittedly, we have twice that number of spindles, and we're not running VMs on it; it has a single Linux host that is our backup target and is dedicated 100% to disk-based backups (both Veeam and others, like RMAN for our Oracle databases). The performance numbers I posted were with no other load on the array, just this single host running RHEL 5.3, with the filesystem formatted as ext4.
Actually, I'm somewhat curious how this storage performs for you with running VMs. We currently use EqualLogic storage for all production VMs and use Veeam to replicate VMs to a failover site. The array at the failover site is also EqualLogic, but it is not nearly as big as our primary production array. So while we replicate all of our "critical" VMs, there are other VMs we simply don't have the space for that we would consider "important", i.e. we could live without them in an emergency and they aren't worth the cost of additional EqualLogic storage, but we'd sure like to have them available. We were thinking of buying another Enhance Technology array for that site (either an RS8i or RS16i), replicating these "important" VMs to it, and simply running from the RS array in a failover scenario. Any feedback on performance would be interesting.
Enthusiast (original poster)
Re: Nearline Sata drive performance
How is the support from Enhance Technology? Also, do they provide 24/7 support with 4-hour hardware replacement?
Tom Sightler (VP, Product Management)
Re: Nearline Sata drive performance
What little interaction I've had with support was OK (getting a firmware upgrade, clarifying an iSCSI load balancing comment in their manual). I don't believe they provide 24/7 coverage with 4-hour replacement (I asked about extended service contracts and was told they didn't sell them). I guess I don't consider that a requirement for "near-line" storage; that's why we pay the big bucks for our main storage. We took the BYOD (bring your own drives) approach and thus spare our own disks, which are by far the most likely thing to go bad. I guess if I wanted a 4-hour response I'd just buy another whole chassis to keep on hand, since an empty chassis is <$4,000. I'm sure they'd be willing to sell you a spare power supply and controller, though.
I'm not trying to sell you on Enhance or anything; they're not really anything special, just cheap and easily available (we purchased ours via PCMall). The web interface is reasonably simple and we've been pretty happy, but there are dozens of cheap storage shelves out there, depending on what you are looking for.
Enthusiast
Re: Nearline Sata drive performance
Felix wrote: Was about the cheapest solution given the HP DL380 G5 and SmartArray P400 w/BBWC. Performance isn't great maxing out at about 60 MByte/s, which can be largely attributed to the P400, about the crappiest controller I've come across so far.
The P400 isn't the problem. I use a DL380 G5 with the same controller and 8x 146GB 10K SAS drives as a Veeam Backup server. I consistently get over 280MB/s read and 260MB/s write on this RAID 5 array. Synthetic benchmark tools like ATTO diskbench get way past 400MB/s for anything larger than a 128K block size. Sure, there are much better HBAs around, but for this application it's fine.