Stuart · Enthusiast · UK · joined Aug 09, 2018
Writing large 60TB disk backup to tape
Hi, what is the best practice for writing large full backups to tape?
Currently a 60TB (MS Files Cluster) backup takes around 100 hours to write to LTO7 tapes, and we typically get around 170MB/s. Is there any way to improve this, for example by writing to multiple tapes at the same time? (We have 4 drives, so 3 are often sitting idle.)
From what I can see, all the parallel options relate to writing parallel backups (per VM, or multiple backup jobs) to a single tape... not a single backup to multiple tapes.
Also, if this job fails at any point, then even though we have an unusable backup written to 12 tapes, the tapes are not freed up for reuse automatically; they are wasted until the recycle period has expired.
Thanks in advance
Stu
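The figures quoted above are internally consistent; here is a quick back-of-the-envelope check (a sketch assuming decimal units, 1 TB = 10^12 bytes and 1 MB = 10^6 bytes, as tape vendors usually quote them):

```python
# Rough throughput arithmetic for the figures quoted in the post.
# Assumes decimal units (1 TB = 10**12 bytes, 1 MB = 10**6 bytes).
def hours_to_write(total_tb: float, rate_mb_s: float) -> float:
    seconds = (total_tb * 10**12) / (rate_mb_s * 10**6)
    return seconds / 3600

single_drive = hours_to_write(60, 170)   # one LTO7 drive at the observed rate
print(round(single_drive))               # ~98 hours, matching the ~100h reported

# If the job could be split across all 4 drives at the same per-drive rate:
four_drives = hours_to_write(60, 170 * 4)
print(round(four_drives))                # ~25 hours
```

So the ~100-hour runtime is exactly what a single drive at 170MB/s predicts, and the only big wins are a faster per-drive rate or genuinely parallel drives.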
Veteran member · joined Dec 13, 2015
Re: Writing large 60TB disk backup to tape
As far as I understand, Veeam optimizes for space on tapes. If it split that file (assuming it knew where to split, which it doesn't), you would end up with 4 tapes with free space left that were basically now unusable.
As mentioned, it also doesn't know where it could split the backup file, because it doesn't actually know how much data will fit on the tape since it's compressing it.
I'd look more at why you're only getting 170MB/s to the tape drive, since LTO7 supports 300MB/s. What's showing as the bottleneck?
You may also want to accelerate your upgrade timeframe to v11, since it supposedly has significantly better speed.
Max · VeeaMVP · joined Jan 31, 2011
Re: Writing large 60TB disk backup to tape
How many disks does your file server cluster have? Can you split up those disks in separate jobs? If so, then you could write these jobs in parallel to multiple drives.
Stuart · Enthusiast · UK · joined Aug 09, 2018
Re: Writing large 60TB disk backup to tape
Yeah, I'm looking forward to v11, and we're fairly early adopters, so fingers crossed there are some good tape improvements. I think it was Veeam 7 where tape support almost felt like an afterthought; now it's one of the things they're heavily advocating and improving, thanks to customers' greater need to air-gap backups against encryption attacks.
Our cluster has 21 disks, I guess volume level backups is an option to consider.
Thanks
Max · VeeaMVP · joined Jan 31, 2011
Re: Writing large 60TB disk backup to tape
I would create as many jobs as you have tape drives and divide the disks equally among those jobs; then you should benefit from parallel tape processing.
The disadvantage of this approach is that restores get more complicated, as you need to restore from X different restore points/jobs.
Also, each job will capture a different point in time for its disks, depending on job runtime/duration. So, if you need to restore all your disks, they won't have the same state/timestamp.
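Max's suggestion of dividing the disks equally across one job per tape drive can be sketched as simple round-robin partitioning (a hypothetical illustration of the split, not Veeam job configuration; the disk names are made up):

```python
# Hypothetical sketch: round-robin assignment of 21 source disks to 4 jobs,
# one job per tape drive, so all four drives can write in parallel.
def partition(disks, n_jobs):
    """Distribute disks across n_jobs as evenly as possible (round-robin)."""
    return [disks[i::n_jobs] for i in range(n_jobs)]

disks = [f"Disk{i:02d}" for i in range(1, 22)]  # the 21 cluster disks
jobs = partition(disks, 4)                      # one group per tape drive

for n, job in enumerate(jobs, 1):
    print(f"Job {n}: {len(job)} disks -> {job}")
# Group sizes come out as 6, 5, 5, 5 — no job is more than
# one disk larger than any other.
```

In practice you'd likely balance by disk size rather than count, but the idea is the same: each group becomes one backup job, and each job feeds its own drive.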
Dmitry Popov · Product Manager · Prague · joined Feb 04, 2013
Re: Writing large 60TB disk backup to tape
Hi folks,
Thanks for the feedback!
> we typically get around 170MB/s.
Check the connections: [Repository <> Repository Gateway Server] <> Tape Server <> Tape Drive. If possible, you can install the tape server role on the repository and connect the tape device directly to the repo: with such a setup the data flow won't be affected by the network between repository and tape server (so data will go from the repo directly to the drive).
> Our cluster has 21 disks, I guess volume level backups is an option to consider.
Max is right: in order to use parallel processing you need to have multiple backup chains (thus splitting the backup jobs per source drive/volume might be a good idea). Cheers!
Stuart · Enthusiast · UK · joined Aug 09, 2018
Re: Writing large 60TB disk backup to tape
That's great, I'll look into that on Monday (I'm off work today) and post my findings. We already have our LTO7 library connected directly to the same Proxy/Repo (2016 ReFS) server, so I'm not sure what the problem is.
Any comment on the wasted tapes when a tape backup fails?
Thanks, Stu.
Dmitry Popov · Product Manager · Prague · joined Feb 04, 2013
Re: Writing large 60TB disk backup to tape
Stuart,
Unfortunately, there is no way to retry the tape write due to the nature of the LTO streaming architecture, so the only option is to manually erase the tapes and start the job again. Cheers!
mkretzer · Veeam Legend · joined Dec 17, 2015
Re: Writing large 60TB disk backup to tape
What is your backup source?
With v10 you should try to back up from a full + incrementals (yes, this sounds strange, but it is true), so that a virtual full is being created in the tape session. That way a different IO mode is used and reading is much faster. The same IO mode is used in v11 for all reads.
Stuart · Enthusiast · UK · joined Aug 09, 2018
Re: Writing large 60TB disk backup to tape
Thanks @mkretzer, I thought the easiest thing to do was to upgrade to V11 and give it a go to see if things are any faster.
The first full backup hasn't completed writing to tape yet, but it looks to be going considerably faster... it's on track to complete in around 65 hours, compared to around 100 previously, writing at an average of 280MB/s over the last 40-odd hours, so a gold star for Veeam 11!
The agents are still on version 4, as we've not got around to upgrading them yet, so maybe there's a bit of performance increase to come on the disk stage too.
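The v11 numbers hold up to the same sanity check as before (again assuming decimal units):

```python
# Back-of-the-envelope check for the v11 run (decimal units assumed).
total_bytes = 60 * 10**12          # 60 TB full backup
rate = 280 * 10**6                 # observed average write rate, bytes/s
hours = total_bytes / rate / 3600
print(round(hours))                # ~60 hours; the ~65h projection allows
                                   # for slower stretches outside the average
```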