-
- Influencer
- Posts: 18
- Liked: 2 times
- Joined: May 30, 2016 12:18 am
- Contact:
Nimble SFA array
We're a full-blown Nimble shop and just snagged a few of their SFAs to augment our CS/AFAs. This raises a few questions as we build out our environment. We expect to heavily use the Nimble snapshot replicas, but there is plenty that will still need good old repos (SQL and Exchange logs come to mind).
1. Would it make sense to just convert our current Veeam proxies to also be repos and directly mount the SFA iSCSI to the proxy/repo? Or, should we build separate VMs and/or physical boxes as dedicated repos?
2. Seeing that the SFA will do all the dedupe and compression for us, is there any advantage to ReFS fast clone any more?
EDIT: For those not aware of what I am referring to as they are super new: https://www.nimblestorage.com/technolog ... sh-arrays/
-
- Veeam Software
- Posts: 315
- Liked: 74 times
- Joined: Mar 23, 2015 11:55 am
- Full Name: Michael Cade
- Location: Cambridge, United Kingdom
- Contact:
Re: Nimble SFA array
Hi there,
I have not been able to run any comparisons between NTFS and ReFS, but I would still expect synthetic operations to be faster on a ReFS-formatted LUN.
As for your first question, it really depends on the system requirements and how heavily you are utilising them as to whether we would suggest further components. My initial step would be to try it out on the existing proxy by presenting iSCSI LUNs, then monitor the workload closely before possibly scaling the solution out.
Thanks,
Regards,
Michael Cade
Global Technologist
Veeam Software
Email: Michael.Cade@Veeam.com
Twitter: @MichaelCade1
-
- Influencer
- Posts: 18
- Liked: 2 times
- Joined: May 30, 2016 12:18 am
- Contact:
Re: Nimble SFA array
Welp, we shall find out and I'll report back.
-
- Veeam Software
- Posts: 315
- Liked: 74 times
- Joined: Mar 23, 2015 11:55 am
- Full Name: Michael Cade
- Location: Cambridge, United Kingdom
- Contact:
Re: Nimble SFA array
Thanks, that would be really useful.
Regards,
Michael Cade
Global Technologist
Veeam Software
Email: Michael.Cade@Veeam.com
Twitter: @MichaelCade1
-
- Veteran
- Posts: 323
- Liked: 25 times
- Joined: Jan 02, 2014 4:45 pm
- Contact:
Re: Nimble SFA array
bubbagump wrote: Welp, we shall find out and I'll report back.
Very curious about your experience with the SFAs as we are also looking at one. Would appreciate any and all feedback!
-
- Influencer
- Posts: 18
- Liked: 2 times
- Joined: May 30, 2016 12:18 am
- Contact:
Re: Nimble SFA array
We have our SFAs up and running. For the most part, they are just CS-series arrays with deduplication. To get dedupe on a hybrid array, they have capped IOPS a bit to give more CPU to the dedupe. Honestly, we have yet to hit the CPU ceiling on the SFA. We have a few workloads on the SFA. First, we are using them as secondary arrays and replicating from our primaries. This works exactly as expected, and we're using Veeam to manage the vast majority of the replication. The one slight niggle is that you have to manually set up your Volume Collections for Veeam to work. We ended up creating one Volume Collection per volume to maintain flexibility in our environment, as we don't have any LUNs that are spanned. (Hint: it would be FABULOUS if Veeam just managed the Volume Collections for you.)
The other use is we have created LUNs and mounted them to a few Veeam Proxies. These are used as repos for physical Windows agents or SQL log backups.
Dedupe and compression are as one would expect. Performance is good, and in some cases I have seen really good dedupe on PDFs. A 12 TB replicated volume of ours was getting 3.6x in dedupe alone, which was surprising. I don't think that is typical, though, as we generate a ton of faxes using the same underlying PDF template.
The only thing I am keeping an eye on is synthetic fulls. The blocks can get cold over a week, and when Veeam goes to create a synthetic full, it can drive a lot of sequential reads from disk rather than from SSD cache. Of course, we are using NTFS for the time being until ReFS gets itself sorted on Win2k16. Once we feel brave enough to use ReFS, fast clone may change all of this.
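For a rough feel of why fast clone matters here, a quick back-of-envelope sketch in Python. The change rate and incremental count below are made-up illustrative assumptions, not measurements from our environment:

```python
# Estimate the data a synthetic full must move, with and without
# ReFS fast clone (block cloning). All numbers are assumptions.

full_backup_tb = 12.0     # size of the full backup file (assumed)
daily_change_rate = 0.05  # 5% daily change rate (assumed)
incrementals = 6          # incrementals merged into the synthetic full

# Classic synthetic full on NTFS: read the old full plus every
# incremental, then write a brand-new full backup file.
read_tb = full_backup_tb + incrementals * full_backup_tb * daily_change_rate
write_tb = full_backup_tb
classic_io_tb = read_tb + write_tb

# ReFS fast clone: unchanged blocks are referenced via metadata, so
# only the changed blocks from the incrementals get physically moved.
changed_tb = incrementals * full_backup_tb * daily_change_rate
fastclone_io_tb = 2 * changed_tb  # read changed blocks + write them

print(f"classic synthetic full I/O: ~{classic_io_tb:.1f} TB")
print(f"fast clone synthetic full I/O: ~{fastclone_io_tb:.1f} TB")
```

With those assumed numbers the classic merge shuffles roughly 4x the data of a fast-clone merge, which is why cold blocks falling out of SSD cache hurt so much on NTFS.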
A quick mention of our methodologies....
We create regular snaps of VMs, say one an hour. Those are then all replicated to the SFA. We keep roughly 3 days (72 snaps) on the primary array and 7 days (168 snaps) on the SFA. Then, in a separate job, we run a daily copy job to different storage (Exagrid) for both on- and off-site. So far so good: 3 copies (primary Nimble, secondary Nimble, Exagrid), 2 media (Nimble and Exagrid), 1 off-site (Exagrid replication).
For certain SQL boxes we are even more aggressive, with replicated snaps every 15 minutes, and then we layer fulls and log backups into the mix. A ton of SQL snaps using COPY_ONLY have saved our butts a few times due to table corruption. Our DBAs were VERY skeptical of this new magic (they wanted old-school scripted SQL dumps), but with an RTO of 1 hour on a 12 TB box, they had no choice. One box completely corrupted itself (some analyst did a VERY bad SELECT * INTO disaster) and an Instant Recovery from a Nimble snap saved the day.
We'll be playing with tape and air gapping later this summer.
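As a sanity check on the retention maths above (hourly snaps, 3 days on the primary, 7 on the SFA, plus the 3-2-1 tally), here's a trivial sketch:

```python
# Snapshot retention counts for hourly snaps, plus the 3-2-1 tally
# described above.

snaps_per_day = 24  # one snapshot per hour

primary_days, sfa_days = 3, 7
primary_snaps = primary_days * snaps_per_day  # 3 days of hourly snaps
sfa_snaps = sfa_days * snaps_per_day          # 7 days of hourly snaps

# 3-2-1: three copies, two media types, one off-site
copies = ["primary Nimble", "secondary Nimble (SFA)", "Exagrid"]
media = {"Nimble", "Exagrid"}
offsite = ["Exagrid replication"]

print(f"{primary_snaps} snaps on primary, {sfa_snaps} on the SFA")
print(f"{len(copies)} copies, {len(media)} media, {len(offsite)} off-site")
```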
-
- Veteran
- Posts: 323
- Liked: 25 times
- Joined: Jan 02, 2014 4:45 pm
- Contact:
Re: Nimble SFA array
Very neat! Thanks a ton for the detailed response.
Questions:
1) Nimble SFA is advertising 8:1 dedup/compression ratios. Are you seeing those rates?
2) Does the dedup/compression occur as the data is being written to disk (i.e. inline)? Or do you have to wait for the Nimble to catch up on dedup/compression after the data has been written?
3) Is your backup data sitting on VMDKs served off the Nimble?
-
- Influencer
- Posts: 18
- Liked: 2 times
- Joined: May 30, 2016 12:18 am
- Contact:
Re: Nimble SFA array
1) Nimble SFA is advertising 8:1 dedup/compression ratios. Are you seeing those rates?
Having worked on deduplicated storage of one form or another for years now, I don't find their dedupe/compression any better or worse than others'. Understand, they're just running lz4 compression, which is pretty standard, with variable-block dedupe on top. It's all pretty standard stuff. All that to say, it really depends on what you are putting in the array. I have seen dedupe ratios (not on the SFA, but dedupe in general) of 50-60x on some workloads... especially mountains of full database backups that are mostly the same. On the one SFA I am seeing 3.6x combined compression and dedupe with about 2 weeks' worth of data. On one volume of entirely PDFs I saw about 5x just in the initial replica, which was pretty surprising. (We create a ton of PDFs for faxing from a limited number of templates.) I imagine at a month or more I'll see better dedupe for our more standard VM/database data.
2) Does the dedup/compression occur as the data is being written to disk (i.e. inline)?
100% inline. There is no offline or after-the-fact processing like Exagrid.
3) Is your backup data sitting on VMDKs served off the Nimble?
Veeam snaps the primary Nimble and replicates the data to the SFA. We keep a few days of snaps on the primary and multiple days on the SFA. Then, for second-media and off-site purposes, we have copy jobs copying from the SFA to Exagrids daily. So I think to answer your question, yes?
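To illustrate why the ratio is so workload-dependent, here's a toy sketch of dedupe plus compression. It uses fixed-size blocks and zlib (because it's in the Python standard library) as crude stand-ins for the array's variable-block dedupe and lz4, so the absolute numbers mean nothing; the point is only that template-heavy data (like our faxes) crushes down while random data doesn't:

```python
import hashlib
import os
import zlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Toy fixed-block dedupe + compression ratio.

    Crude stand-in for an array's variable-block dedupe and lz4
    compression; real arrays will behave differently.
    """
    seen = set()
    stored = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).digest()
        if digest not in seen:                   # only unique blocks...
            seen.add(digest)
            stored += len(zlib.compress(block))  # ...get stored, compressed
    return len(data) / stored if stored else 1.0

# Highly repetitive data (faxes from one PDF template) reduces well;
# random data barely dedupes or compresses at all.
repetitive = b"same template page" * 50_000
random_ish = os.urandom(len(repetitive))
print(f"repetitive: {dedupe_ratio(repetitive):.1f}x")
print(f"random:     {dedupe_ratio(random_ish):.1f}x")
```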
-
- Lurker
- Posts: 1
- Liked: never
- Joined: May 29, 2015 10:32 pm
- Full Name: Mike Reel
- Contact:
Re: Nimble SFA array
This is very good information for us. We are very close to closing a deal on a very similar setup (Nimble AFA, Nimble SFA, and an Exagrid). Now that you have had a few weeks running your setup, I wanted to check whether you have run into any problems, or anything you would do differently, with your Veeam/Nimble/Exagrid setup. I am new to Nimble and Exagrid, so any real-world gotchas or best practices you have experienced would be appreciated. Thanks.
-
- Veeam Software
- Posts: 315
- Liked: 74 times
- Joined: Mar 23, 2015 11:55 am
- Full Name: Michael Cade
- Location: Cambridge, United Kingdom
- Contact:
Re: Nimble SFA array
Interesting, Mike. Are you able to elaborate on how you are planning to use all three (Nimble AFA, Nimble SFA, and an Exagrid)?
Regards,
Michael Cade
Global Technologist
Veeam Software
Email: Michael.Cade@Veeam.com
Twitter: @MichaelCade1