-
- Novice
- Posts: 8
- Liked: 1 time
- Joined: Mar 21, 2019 5:10 pm
- Full Name: Ed Seibert
- Contact:
[MERGED] JBOD repositories
What is everyone out there using as repositories? We have a standard 30-day retention and are looking to get into JBOD repos. What is the best stuff available today? I found some older posts but nothing from the last few years. We're primarily a Dell shop but open to others.
-
- Influencer
- Posts: 22
- Liked: 1 time
- Joined: Mar 06, 2013 1:53 pm
- Full Name: David
- Contact:
Re: JBOD repositories
We use a mix of storage:
multiple HPE StoreOnce,
multiple QNAP NAS,
IBM V7000,
AWS.
Retention depends on the service; it ranges from 2 days all the way through to 30 days.
I'm not sure what hardware is recommended now, so I'm interested in other people's views.
-
- Veteran
- Posts: 298
- Liked: 85 times
- Joined: Feb 16, 2017 8:05 pm
- Contact:
Re: JBOD repositories
Is there a reason you want to use JBOD? This method does not provide any redundancy.
Given that you want a repo, I think you'd be better off with a RAID array behind a discrete hardware RAID controller.
You can use:
RAID-10 for a good balance of read/write speeds, but you sacrifice overall capacity, i.e. you'll only have 50% of the raw capacity.
RAID-6 for quite good read performance but not great write performance; this RAID level yields higher usable capacity than RAID-10. (Quick capacity/IOPS math below.)
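To put rough numbers on that tradeoff, here is a quick back-of-the-envelope sketch (the drive count, size, and per-drive IOPS are made-up example values; plug in your own):

```python
# Back-of-the-envelope RAID-10 vs RAID-6 comparison for a backup repo.
# Example values only; substitute your own drive count/size/IOPS.

def raid10(drives, size_tb, drive_iops):
    usable = drives * size_tb / 2          # mirrored pairs: 50% usable capacity
    write_iops = drives * drive_iops / 2   # write penalty of 2 (both mirrors)
    return usable, write_iops

def raid6(drives, size_tb, drive_iops):
    usable = (drives - 2) * size_tb        # two drives' worth of parity
    write_iops = drives * drive_iops / 6   # classic penalty of 6 (read-modify-write)
    return usable, write_iops

drives, size_tb, drive_iops = 12, 10, 200  # e.g. 12 x 10TB NL-SAS drives
for name, fn in (("RAID-10", raid10), ("RAID-6", raid6)):
    usable, w = fn(drives, size_tb, drive_iops)
    print(f"{name}: ~{usable:.0f}TB usable, ~{w:.0f} small random write IOPS")
```

For 12 x 10TB drives this prints ~60TB vs ~100TB usable and ~1200 vs ~400 random write IOPS, which is exactly the capacity-versus-write-speed tradeoff described above.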
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Recommendations for backup storage, backup target
Hi Ed, tons of recommendations are given in the thread above, please review.
-
- Novice
- Posts: 6
- Liked: never
- Joined: Nov 04, 2013 5:11 pm
- Full Name: Dave Jerome
- Contact:
[MERGED] Repo recommendations??
Hi All,
I'm currently managing an environment where our StoreOnce device is coming to the end of its life.
We back up around 30TB of data, including a couple of big physical servers (legacy Exchange and file services). I haven't had a look at the market for a while and I'd be interested to know what people are using and recommending these days.
I'm open to either appliances or software that can be installed on something; I'm looking for good dedupe, ingestion as fast as possible, and obviously as cheap as possible.
Thanks
-
- Veteran
- Posts: 636
- Liked: 100 times
- Joined: Mar 23, 2018 4:43 pm
- Full Name: EJ
- Location: London
- Contact:
Re: Repo recommendations??
We use a lot of HP DS3600 storage arrays connected to Windows 2016 servers. We've formatted them using ReFS (the latest 2016 version) and we get good space savings with ReFS.
I've configured one of my systems with three DS3600s daisy-chained and presented to Windows as a single volume with about 150TB of space. I've not had any trouble with that, and it's nice to have one large volume so you don't have to mess about moving jobs between volumes running low on space.
-
- Veteran
- Posts: 298
- Liked: 85 times
- Joined: Feb 16, 2017 8:05 pm
- Contact:
Re: Recommendations for backup storage, backup target
Hello Dave.
I've found that one of the keys to fast ingest is a high number of physical CPU cores on the repository server; think parallel ingest. In this case more is better.
For a disk array, see my post above for a couple of RAID types.
Other factors involved in ingesting quickly are network speed and storage type, e.g. flash has higher bandwidth than mechanical drives.
To state what you already know: your budget determines everything.
Hope this helps and good luck.
-
- Novice
- Posts: 8
- Liked: never
- Joined: Feb 07, 2020 9:21 am
- Contact:
[MERGED] Choose new backup repository
Hi,
I'm looking for a new backup repository for my Veeam backups. I'm currently looking at a Synology RS2418 with 10 x 10TB disks. At the moment I need 50TB of disk space plus room for expansion. I'm backing up 3 ESXi hosts with 16 VMs.
Is this a bad choice? It will be used as my backup copy (GFS) repository.
Anything I should pay attention to? Is the CPU right, is 4GB of RAM enough, do I need an SSD cache?
And when it comes to configuration of the NAS, I was thinking of using RAID6, but is there a better option? SHR?
What block size and file system should I use when formatting the volume?
Should I use iSCSI, CIFS, or NFS?
I know it's a lot of questions, and there are a lot of threads about some of them but no clear answers. I really hope you guys can help me.
Best Regards,
-
- Product Manager
- Posts: 2581
- Liked: 708 times
- Joined: Jun 14, 2013 9:30 am
- Full Name: Egor Yakovlev
- Location: Prague, Czech Republic
- Contact:
Re: Choose new backup repository
Hi!
You are right; there are too many variables here to recommend "the best" way to implement it.
I would start with RAID6, a 128K stripe size, iSCSI to a Windows Server 2016 box with all ReFS-related patches installed, and a 64K block size on the Windows volume.
From there, you can run your first backups and observe the bottlenecks to improve performance further.
/Cheers!
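If it helps to reason about how those layers line up, here is a tiny sanity check of the sizes involved (a sketch; the 1MB figure is my assumption of Veeam's default "Local target" job block size, while the 64K cluster and 128K stripe come from the suggestion above):

```python
# Sanity-check that each layer's I/O size is a clean multiple of the
# layer below it: RAID stripe unit -> ReFS cluster -> Veeam job block.

KB = 1024
layers = [
    ("ReFS cluster",       64 * KB),
    ("RAID stripe unit",  128 * KB),
    ("Veeam job block",  1024 * KB),   # assumed "Local target" optimization
]

# smallest-first: each size should evenly divide the next one up
for (low_name, low), (high_name, high) in zip(layers, layers[1:]):
    status = "aligned" if high % low == 0 else "MISALIGNED"
    print(f"{high_name} ({high // KB}K) vs {low_name} ({low // KB}K): {status}")
```

Keeping these as clean multiples avoids read-modify-write amplification on the array when merges rewrite backup blocks.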
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Recommendations for backup storage, backup target
This thread contains other hints as well, worth reviewing - at least the very first post. You can also search the forums for other blocksize-related discussions. Thanks!
-
- Influencer
- Posts: 22
- Liked: 5 times
- Joined: Apr 27, 2018 11:40 am
- Full Name: Andreas Svensson
- Contact:
Re: Recommendations for backup storage, backup target
I've been using Lenovo SR650s with triple 12Gb SAS HBAs and 24 SATA SSDs since that series was born.
It has been working really well; I have, however, disabled Windows Storage Spaces dedupe since it was having lots of issues.
When using instant restore of hard-working database servers there is only one way to go, and that is SSDs. They don't need to be enterprise-grade SSDs, but the latency gains make an astronomical difference in both backup and restore scenarios.
The servers are running NBDSSL and acting as proxies as well; hard-working babies, but a relatively cheap setup which scales well.
Windows Storage Spaces standard mirroring and ReFS 64K.
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Nov 19, 2020 12:46 pm
- Full Name: Nigel Bradley
- Location: UK
- Contact:
Re: Recommendations for backup storage, backup target
This post caught my eye as we're looking at a Synology RS1619xs box, and I wondered what you went with and how you configured it. It comes with 8GB RAM and has an option for an SSD cache; is that cache worth setting up to increase backup speeds?
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Recommendations for backup storage, backup target
We recommend using general-purpose servers for backup repositories. This will give a much bigger boost to long-term performance than an SSD cache, as in this case we can run our data mover directly on the box.
In general, a consumer-grade NAS is the worst choice for a backup repository due to the lack of enterprise-grade RAID controllers, which causes data loss in a number of scenarios.
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Recommendations for backup storage, backup target
We're currently struggling with our NetApp CIFS approach. We have 500TB of backup job data spread over two sides, A and B, which is then copied crosswise (~1.2PB), with parts of it sent to S3. We use 4 physical proxies, each with 20 cores. The backup NetApp is all-flash; the copy repository is nearline SAS. The backup performance is good (forever forward + synthetic fulls), but all synthetic fulls, merges, etc. take a very long time (sometimes more than 5 days for jobs with 30-40TB). In theory the all-flash filers should be able to do more throughput, but we don't get more than 150MB/s in diskspd benchmarks for merge operations, and we don't see high latency on the volumes. Looking at the pex data in the job logs, everything indicates that it's a storage problem, but we just can't nail it down.
This isn't the thread to solve my problem, though. What storage do others use for this kind of use case and size? Personally I don't like Windows ReFS; everything I read about XFS sounds better to me.
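For what it's worth, those numbers are internally consistent; a quick sanity check using only the figures from the post above:

```python
# How long should one synthetic/merge pass over a 30-40TB chain take at
# the measured ~150MB/s? (Figures taken from the post above.)

TB = 1e12
rate = 150e6                    # bytes/s, as measured with diskspd

for job_tb in (30, 40):
    days = job_tb * TB / rate / 86400
    print(f"{job_tb}TB at 150MB/s: ~{days:.1f} days for one pass")
```

That prints roughly 2.3 and 3.1 days for a single pass over the chain; since a merge both reads and writes blocks, doubling that lands right in the reported "more than 5 days" range, so the 150MB/s figure really is the thing to chase.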
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Recommendations for backup storage, backup target
ReFS is by far the top in terms of usage; nothing comes even close. Based on our support big data, we estimate ~10EB of total capacity across all ReFS backup repositories deployed out there. XFS, on the other hand, does not have much adoption yet, as this integration is fairly new, and it requires at least minimal Linux expertise to deploy, which is not something everyone has.
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Recommendations for backup storage, backup target
Yes, I've heard and read a lot about ReFS. I'd still like to hear what others are using as enterprise-grade storage and what their experience is, especially in a relatively large environment where a 400TB backup volume and a 2PB copy volume are needed. As we are not able to perform active fulls, we have a lot of synthetic operations, which is just too much for our current infrastructure/storage. This is what currently hurts. On top of that there are the offload sessions to S3, which add more read operations on the performance tier.
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Recommendations for backup storage, backup target
I'd like to push this a bit, as we are currently in an internal discussion about where to go next (see my post above from Nov 23, 2020 8:10 pm).
First of all, there is the discussion about OS and FS.
Windows and ReFS are used more often, and Linux support is still quite new. But there are some nice Linux features coming in v11, and XFS is a rock-solid file system in my opinion.
Are there any users around who already use XFS with reflink/fast clone for ~1PB of backup data and can share their experience, especially with environments and backup chains that are not completely new?
Second, there is the discussion about hardware.
I'd opt for compute + storage in a commodity server with as many disks as possible, so that compute is as close as possible to the disks: a building block that can easily be scaled. Others vote for server + classic FC SAN, again because of scaling, and because "nobody" else uses servers with this amount of storage. And indeed, the other companies I know that are using Veeam + ReFS use a SAN too. So it feels a bit like the storage-server approach is more for smaller companies. I know there is not just one way to go, but I'd also like to hear what others use as the base for their ReFS/XFS repos.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Recommendations for backup storage, backup target
Personally, I'm with you here.
I love the commodity-server approach as the cheapest, and because it works at scale! Even a long time ago, we already had Veeam Cloud Connect providers hosting PBs of data on multiple Cisco C3260s (a model later renamed to S3260). So I've been very confident recommending this approach to enterprise customers ever since, and we've also been investing heavily in this deployment scenario from the engine perspective. v11 is particularly game-changing after all of our joint work with HPE on their Apollo 4510, which allowed us to more than double throughput compared to v10. Eventually, we were able to reach 11.4 GB/s backup speed on a single all-in-one backup appliance! And yes, those are bytes, not bits; not a typo. Which reminds me, I need to blog about it this weekend with more details.
I will also concur that there's nothing wrong with the server + SAN approach; it will work just fine too. One of the biggest Cloud Connect providers I have in mind uses Nimble as their backup target (they bought a lot and got an awesome deal from HPE). They decided to invest in this because they liked the idea of storage snapshots as an extra layer of ransomware protection, since Veeam did not have built-in immutability technologies at the time.
So you cannot really go wrong with this either; it just means higher costs and a somewhat larger data center footprint. On the other hand, it may be something your existing IT staff are more comfortable managing, which will in the end translate into lower costs (as human hours are always the most expensive IT asset).
One other possible drawback of the SAN approach is less flexibility to repurpose: some modern technologies seem to prefer direct access to disks (like Storage Spaces Direct) and explicitly don't support anything that abstracts the storage from them. But I guess it goes both ways: being shared storage, you can repurpose a regular SAN in ways you cannot easily achieve with regular servers.
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: Recommendations for backup storage, backup target
We've been using Veeam (and DPM before that) on low-cost, high-density storage servers from Supermicro for over 10 years now. We've never had any issues with performance or reliability. ReFS too now, for a few years. We're in the 300TB range of backup disk usage (though with ReFS that's not much bigger than the source systems' total disk usage). If I were designing backup storage for a much larger environment, I would just take the same Supermicro storage server design and throw more servers/JBODs at it.
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Recommendations for backup storage, backup target
That sounds promising. One point in favor of FC SAN is that we use it anyway with the IBM storage snapshot integration, which means the repository/proxy/gateway servers need a connection to the FC SAN regardless. Regarding flexibility, I'm not so sure that one (or multiple) large storage server(s) can be reused for other purposes any better than an FC SAN device.
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Recommendations for backup storage, backup target
We will now test high-density servers as our copy repositories, probably with XFS.
It sounds great that we may solve our current problems (SMB + synthetic operations) with these cheap high-density servers. But they have only ~60 disks, which then have to be divided into multiple RAID sets, which reduces the available IOPS even more. So I'm skeptical that this is an option as a primary backup target for us. This storage is not only used as a backup target; it's also the source for copy jobs, offloads, and maybe SureBackup at some point.
Options for backup repositories:
1.) Play it safe and go with a SAN storage system with ~130 x 6TB NL-SAS disks. This will have the IOPS needed, and we can add disks/shelves if needed. We have a couple of 20-core servers that are currently used as proxy/gateway/mount servers; they already have FC, as we do backup from storage snapshots anyway, but they have to be replaced in the next 12-18 months. We could even use this without block cloning, as there will be enough capacity. I'm not sure I want to have backup and copy on the same, still pretty new, technology. I guess performance would be OK for synthetic operations without block cloning.
2.) Scale out with high-density servers and cheap NL-SAS disks. As I wrote above, these servers have only a few disks compared to a SAN system, so one server with 60 disks will not work for 600 VMs and 200-300TB; at least I can't imagine that it will work with copy/offload/SureBackup on top. We could use 4 or 6 of those servers with smaller disks, which would then be as expensive as the SAN solution, but we would also get new CPUs with more cores. Additional storage could be added easily, though not as cheaply as with #1. But I've no idea if the performance is sufficient, and testing with a demo server is not done in a couple of days.
3.) High density with SSDs! With this we would clearly have the performance we need, but CPU and capacity are tied together if we want to have the proxy and repository roles on the same server (if not, we can still add servers as proxies). If we need additional storage, this gets expensive; this solution is clearly the most expensive, about +60% compared to #1, even if I add the price of new servers to #1.
-
- Enthusiast
- Posts: 98
- Liked: 12 times
- Joined: Mar 06, 2013 4:12 pm
- Contact:
Re: Recommendations for backup storage, backup target
I need to replace 3 old QNAP NAS devices used as Veeam repositories at 3 small remote offices. The load is very light: under 100GB full backup size. I've had these running for years using CIFS and have never had a problem, but the word from Veeam is that this class of device is among the most prone to corruption.
I would like something that will hold 4 x 3.5" disks and support RAID. Most tower PCs these days don't have enough bays, so I got to thinking about eSATA. There are small external enclosures which would handle my needs and then some. They are even less expensive than QNAP, which raises the question of reliability. Can anyone recommend small eSATA options, or should I abandon that idea? I'm looking for a max of 8TB of storage space.
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Dec 22, 2011 7:12 pm
- Full Name: John Doe
- Contact:
[MERGED] What repository hardware are you using?
I am currently using a QNAP QTS NAS and it has been OK, but I am looking for a new repository for my backups, ideally something that can do immutability. I was wondering what everyone else is using.
-
- Product Manager
- Posts: 20413
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: What repository hardware are you using?
A physical server stuffed with a bunch of directly attached disks, with Ubuntu on top and XFS as the file system, makes a perfect immutable repository. Some general information regarding it can be found here; it might be worth checking. Thanks!
-
- Service Provider
- Posts: 192
- Liked: 21 times
- Joined: Feb 12, 2019 2:31 pm
- Full Name: Dave Hayes
- Contact:
[MERGED] Linux repo HW to replace NAS
Hello All. We have been rolling out Veeam hardened repositories for a few months now at some larger customers, with great success. Thank you, Veeam!
These have all been server-class Dell servers with a bunch of enterprise-grade drives installed. But we have several smaller customers where we might have installed a NAS (yeah... I know, bad). However, we would like to offer a lower-end Linux device as the primary target of their backups and then replicate to the cloud via a backup copy job. We are doing this at one site now and it works quite well. But I am concerned that some of the lower-end options don't necessarily come with "enterprise" grade drives, and I know non-enterprise drives are occasionally frowned upon. We were thinking of checking out some FreeNAS HW build specs to use for a Linux repo. It really seems like you could put together a decent Linux "appliance" type device for these lower-end customers. What are the key HW areas to be concerned about (obviously Ubuntu compatibility, etc.)?
We do have a bunch of Dell servers out there running Windows Server, and they work well for DR purposes, SureBackup, etc. However, there are some customers who do not need that functionality.
Any thoughts?
Dave
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Recommendations for backup storage, backup target
@johnDoe,
In addition to Vladimir's suggestion:
If you go with a bunch of disks, don't forget to compose them into a single block device (using software or hardware RAID, or LVM) before putting XFS on top.
Thanks!
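For anyone who wants a concrete starting point, here is a minimal sketch of that flow on Linux (the device names and disk list are placeholders, and the commands destroy existing data on those disks; this uses mdadm software RAID, but hardware RAID or LVM would serve the same purpose):

```python
# Minimal sketch: compose one block device from several disks (md RAID-6
# here) and format it with XFS, reflink enabled, so Veeam can use fast clone.
# PLACEHOLDER device names; running this destroys data on the listed disks.
import subprocess

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
md = "/dev/md0"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) compose a single block device out of the disks
run(["mdadm", "--create", md, "--level=6",
     f"--raid-devices={len(disks)}", *disks])

# 2) XFS with reflink and CRC enabled (required for fast clone)
run(["mkfs.xfs", "-b", "size=4096", "-m", "reflink=1,crc=1", md])

# 3) mount it (add an /etc/fstab entry separately for persistence)
run(["mkdir", "-p", "/mnt/veeam-repo"])
run(["mount", md, "/mnt/veeam-repo"])
```

The `mkfs.xfs -b size=4096 -m reflink=1,crc=1` options match what Veeam documents for fast-clone-capable XFS repositories.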
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: [MERGED] Linux repo HW to replace NAS
dhayes16 wrote: ↑Oct 29, 2021 6:47 pm "We were thinking of checking out some FreeNAS HW build specs to use for a Linux repo. It really seems like you could put together a decent Linux 'appliance' type device for these lower-end customers. What are the key HW areas to be concerned about (obviously Ubuntu compatibility, etc.)?"
Just do it, to be honest. The thing is, you're not going to really know the performance until you test it (use fio to simulate some random write I/O; a sketch follows below). With XFS and fast clone, your performance concerns will mostly be eliminated, and your primary concern is just meeting 3-2-1.
Fast clone removes most of your performance worries, since you don't have to deal with the load active fulls put on production, and you get all the benefits of synthetics without the I/O penalty.
Ubuntu + XFS is very stable, and even if your clients are maybe a bit nervous about going into Linux land, there isn't a ton of work to do with it. It's a good distribution to learn on, and you will have a lot of stability from the beginning.
I've been running this combination for clients for a while now, and the worst we had was when clients decided to be clever and started messing with permissions randomly. I'd even just use a hardened repository and lock it down harshly, to make sure that you're protecting "knows enough to be dangerous" clients from themselves.
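A minimal sketch of such an fio run (this assumes fio is installed on the box; the test path, size, and runtime are example values, and the 512K block size is just my rough stand-in for a Veeam block after compression):

```python
# Drive a quick fio random-write test against a candidate repo volume and
# print the headline numbers. Path/size/runtime are example values.
import json
import subprocess

TEST_FILE = "/mnt/veeam-repo/fio-test"   # placeholder path on the repo

result = subprocess.run(
    ["fio", "--name=repo-randwrite",
     f"--filename={TEST_FILE}",
     "--rw=randwrite",
     "--bs=512k",                        # rough stand-in for a Veeam block
     "--size=10G",
     "--ioengine=libaio", "--direct=1",  # bypass the page cache
     "--iodepth=16", "--numjobs=4",
     "--runtime=60", "--time_based",
     "--group_reporting", "--output-format=json"],
    capture_output=True, text=True, check=True)

write = json.loads(result.stdout)["jobs"][0]["write"]
print(f"~{write['bw'] / 1024:.0f} MB/s at {write['iops']:.0f} IOPS")
```

If the sustained number here looks sane, the box will almost certainly keep up with a small customer's backup window.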
-
- Novice
- Posts: 9
- Liked: 1 time
- Joined: Oct 19, 2022 12:45 pm
- Contact:
Re: Recommendations for backup storage, backup target
Hi
We are planning fast short-term performance storage for the primary backup copy. Without diving into details, does an immutable SOBR of 3-4 servers (e.g. HPE ProLiant DL380), each with 24 x NVMe in RAID6, sound reasonable?
I assume NVMe would basically negate the write-performance impact of RAID6 and that RAID60 would be overkill? Or would the RAID controller not be able to keep up, bottlenecking the drives?
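One way to frame it is the raw arithmetic (a sketch; the per-drive figures below are assumptions for illustration, and real numbers depend entirely on the drives and controller):

```python
# Rough ceiling for 24x NVMe in RAID-6: even with the classic parity write
# penalty, the array-level numbers dwarf what a single hardware RAID
# controller can typically push. Per-drive figures are ASSUMED examples.

drives = 24
drive_write_iops = 200_000   # assumed steady-state 4K random writes per drive
drive_write_gbs = 2.0        # assumed sequential write per drive, GB/s

raid6_penalty = 6            # 3 reads + 3 writes (data, P parity, Q parity)
array_iops = drives * drive_write_iops / raid6_penalty
array_gbs = drive_write_gbs * (drives - 2)   # data spindles after parity

print(f"random-write ceiling: ~{array_iops / 1e6:.1f}M IOPS")
print(f"sequential ceiling:   ~{array_gbs:.0f} GB/s")
```

With numbers like these (~0.8M IOPS, ~44 GB/s), the RAID controller and its PCIe/cache bandwidth, not the RAID-6 math, is the likely bottleneck, which is exactly the question to put to the vendor.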
-
- Veeam Software
- Posts: 1494
- Liked: 655 times
- Joined: Jul 17, 2015 6:54 pm
- Full Name: Jorge de la Cruz
- Contact:
Re: Recommendations for backup storage, backup target
Hello crun,
Well, NVMe always sounds good if the budget is right for you. Since you mention backup copy: are you already landing the backup job somewhere else, so this SOBR will be just for backup copies?
Any Capacity/Archive Tier at all?
I have not personally seen any real customer with NVMe RAIDs, but I am aware that some vendors have started doing GPU-based RAID.
If I were to buy and use HPE, I would very much go with HPE Apollo and leverage the great architecture references that are out there.
Have you asked HPE directly what they think about bottlenecks with NVMe + RAID controllers on their systems?
Thanks!
Jorge de la Cruz
Senior Product Manager | Veeam ONE @ Veeam Software
@jorgedlcruz
https://www.jorgedelacruz.es / https://jorgedelacruz.uk
vExpert 2014-2024 / InfluxAce / Grafana Champion