-
- Novice
- Posts: 7
- Liked: never
- Joined: Dec 23, 2015 3:47 pm
- Contact:
Tiering my Backups between Slow and Fast repositories.
Hi All
Here is a quick overview of our current config.
Currently our backups are stored on a QNAP TVS-871U-RP with 8x 6TB WD Red Pro (WD6001FFWX) drives in RAID10, giving us 21 TB of usable space; 7 days of reverse incrementals for our environment uses about 16.2 TB.
The repository and proxy are on the same server, and we are using Direct SAN access to the SAN on which the VMs reside, a Nimble CS440G-X2.
Finally, we store 14 days of backups with a Veeam Cloud provider.
All iSCSI connections are 10 Gb.
We are finding backup times and speeds to be slow, especially on busy SQL servers. Most backups run at 10-20 MB/s (Load: Source 61% > Proxy 11% > Network 59% > Target 52%).
Our assumption right now is that the QNAP cannot handle the IO generated by multiple concurrent backups, which slows us down.
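As a sanity check on those numbers, here is a back-of-envelope calculation of what 10-20 MB/s means against a 16.2 TB footprint. The 16.2 TB figure is from the post; the ~5% daily change rate and the 150 MB/s "healthy" comparison rate are assumptions for illustration only.

```python
# Back-of-envelope backup window at the observed job rates.
TB = 1024**4  # bytes

def hours_to_move(size_bytes, rate_mb_per_s):
    """Hours needed to move size_bytes at rate_mb_per_s (1 MB = 2**20 bytes)."""
    return size_bytes / (rate_mb_per_s * 1024**2) / 3600

full_size = 16.2 * TB              # stated on-disk footprint from the post
nightly_change = 0.05 * full_size  # assumption: ~5% daily change rate

for rate in (15, 150):  # observed ~10-20 MB/s vs. an assumed healthier rate
    print(f"{rate:>4} MB/s: full ≈ {hours_to_move(full_size, rate):6.1f} h, "
          f"nightly ≈ {hours_to_move(nightly_change, rate):5.1f} h")
```

At ~15 MB/s even the assumed nightly change would take around 16 hours to move, which is consistent with backup windows being blown on busy servers.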
We have access to a second Nimble CS220G-X2, but its capacity is not large enough to meet our space needs.
Could we back up to the Nimble CS220G-X2 for, say, 2-3 days, copy to the QNAP once backups are done and keep 7 days there, and still keep 14 days with our cloud provider?
The hope is that the speed of the Nimble array will let us back up quicker, and the QNAP should handle sequential IO better than the random IO it sees now.
Is this plan possible or feasible?
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Tiering my Backups between Slow and Fast repositories.
Hi,
I'd say that your case is perfect for the new scale-out backup repository that will be available really soon in Veeam Backup & Replication v9: https://www.veeam.com/blog/introducing- ... te-v9.html.
In your case, you could use the fast Nimble as the extent to receive the incrementals and the QNAP to hold the full backups. You could even reconfigure the QNAP to RAID5 or RAID6 instead of RAID10 to gain additional space for retention.
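The RAID suggestion is worth quantifying. A quick sketch of usable capacity for the 8 x 6 TB QNAP under each layout (drive counts and sizes are from the thread; the TB-to-TiB conversion is why 8 x 6 TB in RAID10 shows up as roughly 21 TB):

```python
# Rough usable-capacity comparison for the 8 x 6 TB QNAP array.
DRIVES, SIZE_TB = 8, 6.0
TIB_PER_TB = 1000**4 / 1024**4  # ~0.909: vendor TB vs. filesystem TiB

layouts = {
    "RAID10": DRIVES // 2 * SIZE_TB,   # half the spindles are mirrors
    "RAID6":  (DRIVES - 2) * SIZE_TB,  # two drives' worth of parity
    "RAID5":  (DRIVES - 1) * SIZE_TB,  # one drive's worth of parity
}

for name, raw_tb in layouts.items():
    print(f"{name:<7} {raw_tb:4.0f} TB raw ≈ {raw_tb * TIB_PER_TB:4.1f} TiB usable")
```

Moving from RAID10 to RAID6 frees two drives' worth of capacity (24 TB to 36 TB raw) at the cost of write performance, which matters less once the QNAP only receives sequential copy traffic.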
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 7
- Liked: never
- Joined: Dec 23, 2015 3:47 pm
- Contact:
Re: Tiering my Backups between Slow and Fast repositories.
Hi Luca
Thanks for the response. Veeam Backup & Replication v9 definitely sounds like the way to go once it's released.
However, we need to address the issue sooner rather than later. Does anyone know whether the staggered approach I outlined earlier would work in v8?
-
- VeeaMVP
- Posts: 6166
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Tiering my Backups between Slow and Fast repositories.
The v9 release is supposed to happen in a few days; if you can wait, you could use this approach from the start.
Otherwise, you can land the primary backups on the Nimble (1 full plus as many incrementals as it can hold), use backup copy jobs to build additional retention on the QNAP, and then keep the third copy in Veeam Cloud Connect.
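A quick sizing check for that interim layout: how many incrementals fit on the fast tier alongside one full? The 16.2 TB full is from the thread; the ~5% daily change rate and the 20 TB usable figure for the CS220G-X2 are placeholders to be replaced with real numbers.

```python
# How many days of incrementals fit on the fast repository next to one full.
def incrementals_that_fit(repo_tb, full_tb, incr_tb):
    """Whole days of incrementals that fit alongside one full backup."""
    free = repo_tb - full_tb
    return max(0, int(free // incr_tb))

full_tb = 16.2            # backup footprint stated in the thread
incr_tb = 0.05 * full_tb  # assumption: ~5% daily change
nimble_usable_tb = 20.0   # placeholder: check the CS220G-X2's actual usable space

days = incrementals_that_fit(nimble_usable_tb, full_tb, incr_tb)
print(f"Fast tier holds 1 full + {days} incrementals")
```

With these assumed numbers the fast tier covers the 2-3 day window proposed earlier; if the Nimble's usable space is below the full-backup size, this layout does not work at all, so checking the real capacity first is essential.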
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Novice
- Posts: 7
- Liked: never
- Joined: Dec 23, 2015 3:47 pm
- Contact:
Re: Tiering my Backups between Slow and Fast repositories.
I see, then we will do our best to wait.
Thanks again.