AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Cisco Veeam Machine?! S3260

Post by AcrisureRW »

Hey all,
I have a situation where I'm attempting to move away from Data Domains, and am looking into the Cisco S3260 Storage Server.

I have a question specifically about what people have experienced with JBOD vs RAID, and VMware vs Microsoft.

Previously I've used Dell R7x0-XD series servers with just Windows Server 2016 running ReFS on DAS storage as a Veeam database server / primary repository, and this has worked VERY well.

In those cases I used either RAID-6 or RAID-60 arrays, but I do know that with Storage Spaces there have been substantial improvements in JBOD support, to the point that I'm curious whether I could safely run ReFS on a JBOD under Microsoft, versus using a RAID card.

The backend is an all-SSD SAN running off a UCS chassis, so our bottleneck will inevitably be the destination no matter what.

It'd likely be loaded out with either (14) 4TB or (14) 6TB NL-SAS drives.
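
For rough planning, here is a minimal capacity sketch in Python for those two loadouts. It assumes a RAID6 layout (two parity drives) and an illustrative ~7% formatting overhead; both assumptions are mine, not from the thread:

Code: Select all

# Rough usable-capacity estimate for the proposed loadouts.
# Assumptions (illustrative only): RAID6 with 2 parity drives,
# decimal TB as marketed, ~7% filesystem/formatting overhead.

def usable_tb(drives, size_tb, parity=2, overhead=0.07):
    """Usable capacity in TB after parity and formatting overhead."""
    return (drives - parity) * size_tb * (1 - overhead)

for size in (4, 6):
    print(f"(14) {size}TB NL-SAS in RAID6: ~{usable_tb(14, size):.0f}TB usable")
# (14) 4TB NL-SAS in RAID6: ~45TB usable
# (14) 6TB NL-SAS in RAID6: ~67TB usable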
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Cisco Veeam Machine?! S3260

Post by Gostev »

All I keep hearing about S3260 is that it is pure awesomeness... also, at least one of the top Veeam Cloud Connect providers has been using these boxes exclusively and for a while now, and these guys are hosting PBs of tenant backups - which makes me feel confident recommending one to any customer regardless of the size. Just imagine the load these guys get from hundreds of tenants! So, you can't go wrong with one for sure.

As far as ReFS specifically, actually as it stands right now Microsoft ONLY supports ReFS on JBOD - they want to talk to disks directly, not trusting RAID controller's write caching. I have an on-going discussion about this limitation with them and it is starting to sound like they might be willing to relax this down the road - but for now, it is JBOD only.
AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Re: Cisco Veeam Machine?! S3260

Post by AcrisureRW »

Interesting -- I ran ReFS on 2016 on an R720-XD with no issues, and those were on RAID -- even one array that was on a PowerVault DAS

Still, I'm glad to hear some feedback on these. Talk about expensive though! I priced out a Supermicro with 44 bays for only $12k, but that was without compute. I think an average single-node system with (14) 4TB retails around $55k.
nitramd
Veteran
Posts: 298
Liked: 85 times
Joined: Feb 16, 2017 8:05 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by nitramd »

Gostev wrote:
All I keep hearing about S3260 is that it is pure awesomeness...
I can attest to this :D

I went with RAID 6 and NL-SAS drives.

I would suggest getting a model that has as many physical cores per CPU as you can afford; the ingest rate will be pretty good.
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by nmdange » 1 person likes this post

AcrisureRW wrote: Interesting -- I ran ReFS on 2016 on an R720-XD with no issues, and those were on RAID -- even one array that was on a PowerVault DAS

Still, I'm glad to hear some feedback on these. Talk about expensive though! I priced out a Supermicro with 44 bays for only $12k, but that was without compute. I think an average single-node system with (14) 4TB retails around $55k.
I'll agree that for backup repositories, Supermicro is a lot more cost-effective because of how much Cisco/Dell/HP mark up NL-SAS drives. They even sell a 60-bay chassis similar to Cisco's: https://www.supermicro.com/products/sys ... 1CR45H.cfm
nitramd
Veteran
Posts: 298
Liked: 85 times
Joined: Feb 16, 2017 8:05 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by nitramd »

I agree about the cost of Cisco/Dell/HP; we seem to have a predilection for Cisco :roll:

The specs for the Supermicro look good; I don't have experience with this vendor. The price point seems hard to beat...
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: Cisco Veeam Machine?! S3260

Post by ferrus »

Gostev wrote:As far as ReFS specifically, actually as it stands right now Microsoft ONLY supports ReFS on JBOD - they want to talk to disks directly, not trusting RAID controller's write caching. I have an on-going discussion about this limitation with them and it is starting to sound like they might be willing to relax this down the road - but for now, it is JBOD only.
Wait, wha .. what?
I thought it was just a restriction on consumer grade HW, and most enterprise RAID was supported. Has that changed now?

I have 5x UCS managed Cisco C240 M4 servers (bought just before the S series started getting popular), each with 12x 6TB disks.
I was planning on a UCS managed 53TB RAID 6 ReFS volume for each server. Is there a support issue with this (UCS manages the onboard RAID card)?

And are you talking Standard Ed. Storage Spaces or Enterprise S2D?
ferrus
Veeam ProPartner
Posts: 300
Liked: 44 times
Joined: Dec 03, 2015 3:41 pm
Location: UK
Contact:

Re: Cisco Veeam Machine?! S3260

Post by ferrus »

I've done some reading, but it's no clearer.

The MS ReFS page seems unequivocal:

https://docs.microsoft.com/en-us/window ... eployments
'ReFS is not supported with hardware virtualized storage such as SANs or RAID controllers in non-passthrough mode.'
But, as mentioned in the Veeam thread here, there's a Veeam deployment guide that covers my hardware and storage EXACTLY, on HW RAID6 ReFS - https://www.veeam.com/wp-cisco-ucs-c240 ... guide.html

I thought this was resolved a couple of months ago - but now I'm unsure whether it just covered SAN-attached storage.

For a very imminent server rebuild - should I be using HW RAID6 ReFS, or JBOD dual-parity Storage Spaces (Std) ReFS?
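
As a side note, the ~53TB volume size quoted above checks out with simple unit arithmetic; here is a minimal sketch assuming 10 data drives (12 minus two parity) and the decimal-TB vs binary-TiB mismatch, with the small remaining gap attributable to formatting overhead:

Code: Select all

# Why 12x 6TB in RAID6 shows up as roughly "53TB" in Windows.
# Drive vendors sell decimal terabytes; Windows reports binary
# tebibytes but labels them "TB". Figures are illustrative.

TB = 10**12    # decimal terabyte (vendor marketing)
TiB = 2**40    # binary tebibyte (what Windows displays as "TB")

raw_bytes = (12 - 2) * 6 * TB    # RAID6: two drives' worth of parity
print(f"~{raw_bytes / TiB:.1f} TiB before formatting overhead")
# ~54.6 TiB; a bit of metadata/formatting overhead gets it near 53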
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Cisco Veeam Machine?! S3260

Post by Gostev »

You are right, these are quite recent changes in the support stance on Microsoft's side. So JBOD is the safer choice if you want to ensure support from Microsoft - although with parity, performance will not be very good on classic Storage Spaces. Or you can just ignore this and use RAID6, like everyone else is doing right now with zero issues, simply from not knowing about these changes! But let's move further discussion of this particular topic to the existing thread you have already linked, so as not to hijack this one. Thanks!
theta12
Influencer
Posts: 21
Liked: 1 time
Joined: May 24, 2017 1:37 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by theta12 »

If it's any consolation, I just did this same migration myself last year. I was running Avamar and backing up to Data Domain, and switched to Veeam/Data Domain. We then ran out of capacity on our Data Domains and purchased two S3260s (prod/DR), and they've been running great. I'm not sure about the Microsoft supportability standpoint, but we used the Veeam config guide for the S3260 with RAID 6, configured the repository with ReFS, and haven't had any issues.
AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Re: Cisco Veeam Machine?! S3260

Post by AcrisureRW » 1 person likes this post

My biggest competitor to the S3260 right now is the SuperStorage Server 6049P. It's a single compute node, but it can nearly replicate the HDD space for a fraction of the cost, and still offers 4-hour support, etc.

If you run two nodes, do you assign certain disks/virtual disks to a specific host via DAS? Or is it more common to just run a single compute node?

From a Veeam perspective, the only advantage I can see is - due to Veeam hating us, apparently - running one node for perpetual licensing and one node for per-VM licensing (as you can't manage both from the same server)
AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Re: Cisco Veeam Machine?! S3260

Post by AcrisureRW »

So here's a question where the answer might vary: best proc for a DB/repo server?

The S3260 only runs the older-series Xeons (the E5-2600 series) -- would higher GHz but fewer cores be better than more cores at a slower clock?

I'm thinking for the licensing side, a Xeon Gold 6134 (8c/16t @ 3.2GHz) would be better than something like an E5-2650 v4 (12c/24t @ 2.2GHz), but I can't find specifics on whether Veeam prefers more cores/threads or a higher clock speed.
theta12
Influencer
Posts: 21
Liked: 1 time
Joined: May 24, 2017 1:37 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by theta12 »

I think that depends on the number of concurrent tasks you want to run. The concurrent task limit is based on the # of cores you have on a proxy, so that might come into consideration as well.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Cisco Veeam Machine?! S3260

Post by tsightler » 2 people like this post

AcrisureRW wrote:The S3260 only runs the older-series Xeons (the E5-2600 series) -- would higher GHz but fewer cores be better than more cores at a slower clock?
As far as I know, the M5 server nodes should be available now, which support the newer Xeon Scalable Processors (Skylake). I believe they specifically support the 4110, 4114, 5118, 6132, 6138 and 6152 processors.
AcrisureRW wrote:I'm thinking for the licensing side, a Xeon Gold 6134 (8c/16t @ 3.2GHz) would be better than something like an E5-2650 v4 (12c/24t @ 2.2GHz), but I can't find specifics on whether Veeam prefers more cores/threads or a higher clock speed.
A repository really doesn't matter much with regards to CPU, so not a lot of concern there either way. However, if you're using these boxes as a proxy/repo then the goal is to balance the throughput the box is capable of with the CPU horsepower required to take full advantage of that throughput. This is not a perfect science, but there's some basic rules you can use to get a rough idea.

I sometimes say that for every 100MB/s of Veeam throughput you need 1 core, but perhaps a slightly more accurate statement would be that 100MB/s of throughput requires ~2GHz of CPU. This makes it a little easier to distinguish between a 2.0GHz processor and a 3.2GHz processor in regards to how much Veeam throughput a single core can push. It's still not a 100% perfect answer (different generations perform slightly differently), but I've found it to be a good rule when trying to determine balance.

So, if I were to look at the S3260, it typically has 2x 40GbE network ports, so that's the potential for up to 10GB/s of ingest data if we could saturate those (probably a high ask, admittedly). Looking at the storage system, assuming it's fully loaded and the SATA disks can do, on average, 100MB/s, you should be able to get ~5GB/s sequential write throughput (testing indicates this is fairly accurate as well, perhaps even slightly conservative). Since Veeam typically reduces data by at least 2x, and in many cases much more, that indicates that if we want to be able to use 100% of the potential throughput, we could need as much as 200GHz of total CPU.

Basically, this tells me that, no matter how much CPU I put in the box, the CPU is likely to be the bottleneck, so I most likely want to choose the processor option with the most total processor performance available. That means the processors with lots of cores. For the M4 server modules, that's the E5-2695 v4 (2.1GHz 18c, 75.6GHz total), and for the M5 server modules, that's the Xeon 6152 Gold (2.1GHz 22c, 92.4GHz total).

The Xeon 6134 Gold would only be 51.2GHz total. Admittedly, that's likely good enough to sustain 2.5GB/s of ingest, which is still great, and might be fine if you know that other things will be the bottleneck in your environment (for example 10GbE, or your storage, or you're only half-loading the disks, etc.), but it would likely be a limiting factor in some cases. If you want to make sure you can hit your maximum potential throughput, more total GHz is the way to go.

This seems to be proven by both testing and real-world results. I've seen S3260s with E5-2680 processors (28 total cores at 2.4GHz each, 67.2GHz total) hit around 3-3.5GB/s at roughly 90% CPU utilization, which is pretty close to the above numbers.
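
To make the rule of thumb above concrete, here is a minimal sketch of the same arithmetic in Python. The ~2GHz per 100MB/s figure and the processor specs are the ones quoted in this post; the dual-socket assumption and the helper function are mine:

Code: Select all

# Sizing sketch: rough ingest ceiling if CPU is the bottleneck,
# using the ~2GHz of CPU per 100MB/s of Veeam throughput rule.

def ingest_ceiling_gbps(sockets, cores, ghz, ghz_per_100mbps=2.0):
    """Rough max ingest in GB/s for a given CPU configuration."""
    total_ghz = sockets * cores * ghz
    return total_ghz / ghz_per_100mbps * 0.1   # 100MB/s = 0.1GB/s

for name, cores, ghz in [("E5-2695 v4 (18c 2.1GHz)", 18, 2.1),
                         ("Xeon 6152  (22c 2.1GHz)", 22, 2.1),
                         ("Xeon 6134  ( 8c 3.2GHz)",  8, 3.2)]:
    print(f"2x {name}: ~{ingest_ceiling_gbps(2, cores, ghz):.1f}GB/s")
# ~3.8GB/s, ~4.6GB/s, and ~2.6GB/s respectively -- all under the
# ~5GB/s the disks can take, so CPU remains the bottleneck.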
AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Re: Cisco Veeam Machine?! S3260

Post by AcrisureRW »

@tsightler that was a wonderful explanation. That needs to be a sticky on these forums!!

To balance cost vs. Microsoft licensing, I think you're dead right on going with something in the 2.1-2.4GHz range but with a higher core count. Great information!

How about JBOD with an HBA vs RAID-6 arrays? I'm not going to be running Datacenter on this, so I won't benefit from tiered storage w/SSD, but I've heard that Veeam /really/ likes having JBOD presented to Windows due to the self-healing of ReFS (and we will be using ReFS for certain).

I've heard some people say JBOD/Storage Spaces (Standard) has bad performance compared to something with a 2GB/4GB RAID cache in RAID-6.

They'd be running 12Gb/s NL-SAS enterprise storage no matter which hardware platform we go with.

Thanks in advance!
dellock6
VeeaMVP
Posts: 6165
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Cisco Veeam Machine?! S3260

Post by dellock6 » 1 person likes this post

I've seen several providers test out both hardware JBOD + Storage Spaces and hardware RAID with simple Windows volumes on top. I admit those tests were done before the latest stable ReFS drivers, but I've always thought it has more to do with Storage Spaces; in any case, the performance of parity-based RAID in Storage Spaces (like R6) was considerably worse than hardware R6. I would also exclude R1 (too much overhead, so a higher price per usable GB) and R5 (single parity is honestly gambling these days, with large disks and their rebuild times); also, Storage Spaces lacks more advanced RAID configurations like R60, which would combine the dual parity of R6 with the striping of R0 to increase performance.

Indeed, "the" biggest feature you lose going for simple volumes is self-healing, but so far when presented the options to my customers, they've always chosen the hardware raid one, accepting to lose self-healing (scrubbing and block corruption identification/notification is still there however).
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Cisco Veeam Machine?! S3260

Post by tsightler » 4 people like this post

What Luca said!

You can compensate for the poor performance of basic Storage Spaces if you have a set of latency-optimized enterprise SSDs in the pool and you allocate a decent-sized write cache during creation of the virtual disk. The problem is, you can only create a fairly small write cache per virtual disk (I think the max is 100GB, but the standing recommendation from Microsoft seems to be no more than 16GB per non-clustered Storage Spaces volume). You could potentially put in a small, fast NVMe SSD, add that to the pool, and use it, but even though that NVMe is fast, it's still likely to be the throughput bottleneck: a single NVMe SSD will do well to sustain 1GB/s of in/out bandwidth, while the backend disks can do much more. Admittedly, the M5 server nodes have dual NVMe (the M4 only had a single), so that might get it a little closer to making sense, but I haven't been able to get my hands on those yet.
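
To illustrate why the NVMe cache ends up as the ceiling, here is a minimal sketch using the rough figures from this thread (one NVMe sustaining ~1GB/s, an HDD backend good for ~5GB/s sequential; both numbers come from the posts above, the min() framing is mine):

Code: Select all

# When all writes funnel through a single cache device, effective
# ingest is capped by that device, not by the (faster) HDD backend.
# Figures are the rough ones discussed in this thread.

nvme_cache_gbps = 1.0    # one NVMe SSD, sustained writes
hdd_backend_gbps = 5.0   # ~50 NL-SAS drives at ~100MB/s sequential

effective = min(nvme_cache_gbps, hdd_backend_gbps)
print(f"Effective ingest with cache in the path: ~{effective:.1f}GB/s")
print(f"Backend headroom left unused: ~{hdd_backend_gbps - effective:.1f}GB/s")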

You could instead use a bank of SSDs, but then you're spending real money, losing a bank of SATA disk slots, and you'd only be using a very small portion of the SSDs for the write cache. The rest could, in theory, be used for a read cache, but the read cache in non-S2D Storage Spaces is scheduled tiering, which isn't that useful for the repository use case. That's a high price to pay.

On the other hand, the RAID controller option is very proven and rock solid. A lot of people get really excited about the self-repair capabilities of ReFS + Storage Spaces parity, but it's not like RAID controllers just leave your data up to chance. Every enterprise-class RAID controller, including the one in the S3260, includes both background consistency check capabilities, which check and repair the RAID data at a logical level, and patrol read functions, which perform hardware-level disk scans to detect and repair issues at the individual disk level. The fact of the matter is, software RAID solutions like Storage Spaces have to implement scan and repair operations because they don't have hardware doing this in the background; otherwise they would be significantly less safe than hardware RAID. Admittedly, they can add some other cool stuff, like extra checksums and spreading data chunks across nodes (for example with S2D), but the data is quite safe in all cases.

The biggest issue that remains is the Microsoft statement that ReFS is only supported on JBOD or Storage Spaces. Honestly, most of the customers I know have, to this point, chosen to ignore this, but I can't just tell them to ignore it; that has to be a decision made based on the end user's understanding of the situation and the potential risk that, if something does go wrong, Microsoft may not be willing to help. I'm hopeful that, now that the latest round of ReFS patches seems to have made significant strides in overall stability and performance, Microsoft will relax this position.
AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Re: Cisco Veeam Machine?! S3260

Post by AcrisureRW » 1 person likes this post

Thank you for everyone's input. We placed the order today for the S3260 with an M4 node: (2) 10c/20t 2.2GHz CPUs, 128GB RAM, (2) 480GB, (48) 6TB NL-SAS.

As such, I'm pretty sure the best option is hardware RAID. I might take the time to run some benchmarks on pass-through too, but as that's not an HBA, I don't think it'll be a completely fair comparison (typically, an HBA will handle JBOD a lot better than a RAID card).

I'm incredibly excited, but as with all things Cisco, it's hurry up and wait. A 30-day lead time -- on the C13-C14 connector power cords, because clearly those are hard to keep in stock? -- so we'll see when it actually arrives :)

We got a steal on this, though. It was only $15,000 more than an SSG-6048R-E1CR60N (which runs 60x 6TB for that price), so lower storage initially, but technically it will have more than the Supermicro once we fully populate the fourth row. And it will tie in directly with our current UCS fabric, which is awesome.

I'll make it a point to post some results for the community!
stagnant
Novice
Posts: 6
Liked: 2 times
Joined: Sep 27, 2015 10:34 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by stagnant »

How has the S3260 worked out for you? We are a Veeam & Cisco partner and deploy them all the time. What a perfect solution for a Veeam proxy & backup target in one. Are you doing direct storage access backups to that bad boy, too? Just curious...it's cool tech!
bpayne
Enthusiast
Posts: 55
Liked: 12 times
Joined: Jan 20, 2015 2:07 pm
Full Name: Brandon Payne
Contact:

Re: Cisco Veeam Machine?! S3260

Post by bpayne »

I too am coming up on a backup storage refresh and am interested in the Cisco S3260. I do hear a fair bit about this box every once in a while, but is anyone really running this thing with good luck? I did read all the posts in this thread, and it looks like a Veeam partner and another customer like it (and one purchased).

I'm currently on Data Domain and am looking mainly into ExaGrid and this Cisco S3260. The problem I have is super-long yearly retentions (healthcare), and that's where we really need deduplication and compression. I don't have the Cisco HyperFlex infrastructure, nor do I trust ReFS yet given all the problems. So I would not have that infrastructure integration (not sure if that's required?) and would have to rely on Veeam's deduplication and compression per job.

Honestly, I'm liking ExaGrid and their landing zone. Seems like I would get the best of both worlds, but again, I'm trying to learn more about this S3260 box, if anyone is willing to share their experiences so far. Thanks!
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Cisco Veeam Machine?! S3260

Post by Gostev »

tsightler wrote:I'm hopeful that, now that the latest round of ReFS patches seems to have made significant strides in overall stability and performance, Microsoft will relax this position.
And so they did 8)
Steve-nIP
Service Provider
Posts: 129
Liked: 59 times
Joined: Feb 06, 2018 10:08 am
Full Name: Steve
Contact:

Re: Cisco Veeam Machine?! S3260

Post by Steve-nIP » 1 person likes this post

We have 3 of these hooked up and running as VBR servers/proxies/storage.
Got to say, they offer a lot of capacity in a reasonably small box. (Although they are deeper than most servers).
I'm happy so far...
nitramd
Veteran
Posts: 298
Liked: 85 times
Joined: Feb 16, 2017 8:05 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by nitramd » 1 person likes this post

I have two 3160s and one 3260 - I love these and recommend them. If you have not already, take a look at the S3260 spec sheet: https://www.cisco.com/c/en/us/products/ ... 38059.html ; these are quite configurable. If I recall correctly, Cisco has various "pre-configured" models that seem to meet various price points/budgets.

Perhaps one thing to keep in mind about hardware like ExaGrid, especially after data has been moved off of the landing zone and deduplicated, is the amount of time it will take to rehydrate data if a restore is required. Various anecdotes I've read and heard are that data rehydration can be painfully slow; no criticism of ExaGrid intended.

A week or two ago I restored a 100GB VM (its disk had become corrupted) in ~5 minutes - my colleague was quite pleased with this restore time.

While Veeam does not do a lot of deduplication (by design I believe), we're quite happy with the combination of Cisco 3260s and Veeam.

Hope this helps.

Good luck.
sandsturm
Veteran
Posts: 290
Liked: 25 times
Joined: Mar 23, 2015 8:30 am
Contact:

Re: Cisco Veeam Machine?! S3260

Post by sandsturm »

Supermicro storage servers like the SSG-6048R-E1CR60N are much cheaper than an S3260. Does anybody have a good explanation for buying an S3260 anyway? I can't see why one should buy an S3260... is it just the brand? Or maybe the maintenance? Supermicro sells maintenance too, but I don't know the quality.
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Cisco Veeam Machine?! S3260

Post by Gostev »

Top reasons why customers choose S3260 seem to be:
1. Enterprise-grade RAID controller (not something to save on, as reliability is the key for backup).
2. Dual port 40Gb connectivity (don't underestimate the importance of network bandwidth).
3. It's proven time and again in multiple multi-PB-scale Veeam deployments.

I cannot comment on the Supermicro one due to lack of knowledge of their offering... however, I heard they just got delisted from NASDAQ, so that may be a potential concern for some with them as a supplier.
l0stb@ackup
Influencer
Posts: 14
Liked: 4 times
Joined: Jul 19, 2018 2:10 am
Contact:

Re: Cisco Veeam Machine?! S3260

Post by l0stb@ackup » 4 people like this post

Hello, just wanted to chime in and share a hugely important tip we learned today:

Cisco released a great document titled "Veeam Availability Suite on Cisco UCS S3260":
https://www.cisco.com/c/dam/en/us/solut ... 739852.pdf

Page 20 discusses the recommended virtual drive settings for the Veeam repository.

Our settings were:
Cache Policy = "Direct IO"
Read Policy = "No Read Ahead"
Write Policy = "Write Through"
Disk Cache Policy = "Unchanged"

We then applied the recommended settings:
Cache Policy = "Cached IO"
Read Policy = "Always Read Ahead"
Write Policy = "Write Back Good BBU"
I also set Disk Cache Policy to "Disabled"

Since we made the recommended changes, our backup performance saw a 600-1300% improvement, which is quite amazing.

Our specs are:
UCS S3260 M4
2x Intel Xeon E5-2620 v4
128GB RAM
Cisco UCS C3000 RAID Controller for M4 Server Blade with 4G RAID Cache
Veeam repository virtual drive: 23x 8TB SAS HDDs in RAID6
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Cisco Veeam Machine?! S3260

Post by Gostev »

Thank you so much for sharing! Can you share what absolute numbers you saw before and after applying these changes?
l0stb@ackup
Influencer
Posts: 14
Liked: 4 times
Joined: Jul 19, 2018 2:10 am
Contact:

Performance post RAID reconfiguration

Post by l0stb@ackup »

One example job has 80 VMs.

It went from:
  • Processing rate 25-60MBps, duration 5-13 hours
To:
  • Processing rate 420-510MBps, duration 1-4 hours
Can't say I'm not happy with these results! :)
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Cisco Veeam Machine?! S3260

Post by nmdange »

I would be curious whether changing the Cache Policy back to Direct IO would have any impact. Direct IO seems like it would provide better performance, or at least the same. I suspect enabling the write cache had the biggest impact on your performance improvement.
From the IO Policy drop-down list, choose one of the following:
– Direct —In direct I/O mode, reads are not buffered in the cache memory. Data is transferred to the cache and the host concurrently. If the same data block is read again, it comes from the cache memory. This is the default.
– Cached —In cached I/O mode, all reads are buffered in the cache memory.
BigJack
Lurker
Posts: 2
Liked: never
Joined: Apr 27, 2018 9:29 pm
Full Name: Jack Clark
Contact:

Re: Cisco Veeam Machine?! S3260

Post by BigJack »

I'm a bit confused by that Cisco document. When setting up the storage via the CIMC, Cisco recommends a strip size of 512KB (page 20). When doing the same thing with UCS Manager, Cisco recommends a different strip size of 64KB and also recommends setting Drive Cache to Disable (page 60). Cisco doesn't mention anything about the Drive Cache in the page-20 CIMC config, and the screenshot doesn't specify Disable.

So what is Cisco's recommendation: 64KB or 512KB? Unchanged or Disable? And since I'm picking Cisco's document apart: the screenshots all say "Strip Size", but the document text calls out both "Strip" and "Stripe".