I'm curious about some behavior I'm seeing with snapshotless file-level backups.
I have a relatively large Linux server I am attempting to back up using this method. The reason is that we are unable to take a traditional backup because the snapshot space fills up (this is a heavily used database server). Note that we are not backing up the active database itself with the file-level method, just the operating system files.
I was warned by support that the file-level backup would be slow, but I'm seeing behavior that is a little out of the ordinary. The file-level backup proceeds very quickly at first, nearly the same speed as a snapshot-based backup or even a little faster. However, once it hits a certain point, all read and transfer operations pretty much stop. I'm talking 1-5 KB/s read and transfer speed. As a result, the backup never progresses past 30-40 percent, despite running for days. This "stopping point" seems to correspond roughly with the space consumed on the server.
I know from backups with our previous software that roughly 1.7 TB is consumed on the server. The job is currently stuck with processed and read statistics at 1.7 TB, but the transferred statistic at 867 GB.
This just seems odd that the job runs for days and does not progress from a certain percentage. Right now, for example, it's been stuck on 32 percent for a day. A previous run sat at 42 percent for three days before I cancelled it.
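One way to check whether the stall lines up with a small-file-heavy part of the filesystem is to profile how much of the file count sits below some size threshold. This is a diagnostic sketch of my own (the root path and 64 KiB threshold are arbitrary assumptions, nothing product-specific):

```python
import os

def small_file_profile(root, threshold=64 * 1024):
    """Count files under `threshold` bytes vs. the whole tree."""
    small_count = small_bytes = total_count = total_bytes = 0
    for dirpath, _dirs, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            try:
                size = os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                continue  # file vanished or unreadable; skip it
            total_count += 1
            total_bytes += size
            if size < threshold:
                small_count += 1
                small_bytes += size
    return small_count, small_bytes, total_count, total_bytes

# Example: small_file_profile("/var") -- if most of the file *count* is
# tiny files, per-file overhead (not raw throughput) is the likely bottleneck.
```

If the tree turns out to be dominated by tiny files, that would explain fast progress through the large files followed by a crawl.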
- Cody Eding (Influencer, joined Jun 26, 2018)
- Product Manager (joined May 19, 2015)
Re: Snapshotless file-level backups and non-existent transfer speeds
Hi,
Speaking about files: if it's a large number of small files, then VAL needs to transfer and update their corresponding metadata on the target storage. That process involves a lot of data transfer between source and target. Honestly speaking, the implementation is not quite optimal and is undergoing a massive rework.
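A back-of-envelope model shows why per-file metadata work dominates once files get small. This is my own illustration with made-up numbers (5 million files, a 100 MB/s link, 2 ms of fixed per-file cost), not the product's actual protocol:

```python
# Total job time = payload transfer + a fixed per-file metadata cost.
def backup_time_s(n_files, avg_file_bytes, throughput_bps, per_file_cost_s):
    payload = n_files * avg_file_bytes / throughput_bps
    metadata = n_files * per_file_cost_s
    return payload + metadata

# Same total bytes (~100 GB) split two ways:
many_small = backup_time_s(5_000_000, 20 * 1024, 100e6, 0.002)
few_large  = backup_time_s(5_000, 20 * 1024 * 1000, 100e6, 0.002)
# many_small: ~1024 s of payload + 10,000 s of per-file overhead
# few_large:  ~1024 s of payload + 10 s of per-file overhead
```

With identical data volume, the small-file job spends roughly ten times longer overall, almost all of it on per-file overhead, which matches the "fast at first, then a crawl" pattern described above.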
That said, I would suggest reaching out to the support team so they can tackle the snapshot overflow problem, which would let you switch back to volume-level backup; it is more effective in terms of both speed and data transfer. Should you decide to contact them, please let me know your case ID.
Thanks!
Snapshot overflow might be worked around with a larger CBT block size setting and a larger snapshot datastore space allocation. Please contact the support team so they can help you with the tweaks.
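To give a feel for the block-size trade-off, here is generic change-tracking arithmetic (my own illustration, not Veeam's internals): a larger tracking block size shrinks the change map the tracker must maintain, at the cost of coarser granularity, since a single changed byte dirties a whole block.

```python
def cbt_map_blocks(volume_bytes, block_bytes):
    """Number of blocks a change tracker must account for (ceiling division)."""
    return -(-volume_bytes // block_bytes)

TB = 1024 ** 4
# Hypothetical 2 TB volume at two tracking granularities:
blocks_256k = cbt_map_blocks(2 * TB, 256 * 1024)       # 8,388,608 entries
blocks_4m   = cbt_map_blocks(2 * TB, 4 * 1024 * 1024)  # 524,288 entries
```

Sixteen times fewer entries to track at 4 MB granularity, which is why support may suggest a larger block size when tracking overhead or snapshot churn is the problem.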