-
- Lurker
- Posts: 1
- Liked: never
- Joined: Feb 07, 2017 8:53 pm
- Full Name: Carl Strebel
- Contact:
Re: Snapshot overflow
I'm just trying the software and had the same issue. Can you also send me the "fix"?
Thanks
-
- VP, Product Management
- Posts: 27371
- Liked: 2799 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Snapshot overflow
Guys, in order to receive instructions on how to fix it, please contact our Veeam technical support directly. Thanks!
-
- Influencer
- Posts: 19
- Liked: never
- Joined: Aug 25, 2016 12:30 pm
- Contact:
Re: Snapshot overflow
Hi,
what is the solution?
-
- Product Manager
- Posts: 5796
- Liked: 1215 times
- Joined: Jul 15, 2013 11:09 am
- Full Name: Niels Engelen
- Contact:
Re: Snapshot overflow
Please contact support for the instructions as advised before.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
-
- Influencer
- Posts: 16
- Liked: 7 times
- Joined: Mar 07, 2012 12:11 pm
- Full Name: Tobias Gebler
- Contact:
Re: Snapshot overflow
Same problem here, backing up a Debian server to a 9.5 repo.
#02137729
I run a VMware VM at home with Debian Jessie and ownCloud. No LVM, just plain disks. The Veeam Agent is installed in its stable version. For testing, I back up the full system to our 9.5 U1 repository over a 5 Mbit site-to-site VPN link. The job should take about 5-7 days, but it always fails after 3 days. A separate issue: the job is scheduled for 23:05 every day, and I get errors that a different job is already running. That's expected, since the job runs for more than 24 h; it would be nice to suppress this error, because it's the same job.
root@geb-srv-cloud:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 47G 2,8G 42G 7% /
udev 10M 0 10M 0% /dev
tmpfs 1,2G 50M 1,2G 5% /run
tmpfs 3,0G 0 3,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 3,0G 0 3,0G 0% /sys/fs/cgroup
/dev/sda1 236M 33M 191M 15% /boot
/dev/sdb1 296G 248G 33G 89% /var/www/owncloud/data
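As a rough sanity check, the quoted 5-7 day estimate is plausible at line rate. A back-of-the-envelope sketch, using the ~248 GB of used data from the df output above and the 5 Mbit link, ignoring compression and protocol overhead:

```shell
#!/bin/sh
# Rough duration estimate: used data pushed over a slow WAN link, at line
# rate. Figures are taken from the post above; compression and protocol
# overhead are ignored.
DATA_GB=248     # used space on /dev/sdb1 per the df output
LINK_MBIT=5     # site-to-site VPN link speed

# GB -> gigabit -> megabit, then divide by link speed for seconds
secs=$((DATA_GB * 8 * 1000 / LINK_MBIT))
days=$((secs / 86400))
echo "~${days} days at line rate (${secs} seconds)"
# prints: ~4 days at line rate (396800 seconds)
```

About 4 days at pure line rate, so 5-7 days with real-world overhead is in the right ballpark.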
Tobias Gebler
ametras
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Snapshot overflow
Hi,
May I ask what kind of load you have on your server? I mean, what software do you run (any databases)? Also, at what percentage did the job fail, and what are the stats ("read", "transferred", etc.)?
Thanks
-
- Influencer
- Posts: 16
- Liked: 7 times
- Joined: Mar 07, 2012 12:11 pm
- Full Name: Tobias Gebler
- Contact:
Re: Snapshot overflow
It's a Debian Jessie LAMP stack running the latest ownCloud. About 20 minutes ago the job failed, and I sent new logs to support.
Thankfully I had SSH open and ran df -h before and after it failed. I noticed the data disk was at 90% while the job was running; after it failed, usage dropped to 81%.
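If anyone wants to watch this happen live instead of catching it by luck over SSH, here is a small sketch that samples disk usage while a job runs. The mount point, interval, and sample count are placeholders to adjust:

```shell
#!/bin/sh
# Sample disk usage a few times while a backup job runs, to see how full
# the data disk gets before an overflow. TARGET, INTERVAL and SAMPLES are
# placeholders -- point TARGET at the disk the snapshot data lives on.
TARGET=${TARGET:-/tmp}
INTERVAL=${INTERVAL:-1}
SAMPLES=${SAMPLES:-3}

i=0
while [ "$i" -lt "$SAMPLES" ]; do
    # -P gives POSIX output: field 5 is Use%, field 6 is the mount point
    line=$(df -P "$TARGET" | awk 'NR==2 {print $5, "used on", $6}')
    echo "$(date +%H:%M:%S) $line"
    i=$((i + 1))
    if [ "$i" -lt "$SAMPLES" ]; then sleep "$INTERVAL"; fi
done
```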
Tobias Gebler
ametras
-
- Lurker
- Posts: 1
- Liked: never
- Joined: May 13, 2017 6:10 pm
- Contact:
Re: Snapshot overflow
I have the same issue. Could you please send the instructions in a PM? Thank you!
-
- Influencer
- Posts: 16
- Liked: 7 times
- Joined: Mar 07, 2012 12:11 pm
- Full Name: Tobias Gebler
- Contact:
Re: Snapshot overflow
I'm still testing. Raising the snapshot limit to 96% helped, but the changes on my disk were probably too big. So either make your disk bigger, or add an additional disk just for the snapshots, which is what I'm testing now. You can change the snapshot path to the new disk in the Veeam ini config.
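Before repointing the snapshot path, it's worth checking that the new disk actually has enough headroom. A minimal sketch; the path and the size threshold below are made-up examples, not Veeam defaults:

```shell
#!/bin/sh
# Check that a candidate snapshot location has enough free space before
# pointing the agent's snapshot path at it. SNAP_DIR and NEED_KB are
# made-up examples -- use your dedicated snapshot disk and expected
# change rate instead.
SNAP_DIR=${SNAP_DIR:-/tmp}      # stand-in for the dedicated snapshot disk
NEED_KB=${NEED_KB:-1048576}     # require ~1 GiB free (example value)

# df -Pk: POSIX output in 1 KiB blocks; field 4 is "Available"
avail_kb=$(df -Pk "$SNAP_DIR" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$NEED_KB" ]; then
    echo "OK: $SNAP_DIR has ${avail_kb} KiB available"
else
    echo "WARNING: only ${avail_kb} KiB available on $SNAP_DIR"
fi
```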
Tobias Gebler
ametras
-
- Lurker
- Posts: 1
- Liked: never
- Joined: May 16, 2017 3:38 pm
- Full Name: Carmen DiCamillo
- Contact:
Re: Snapshot overflow
I am also having the same problem. Could you please send me the instructions? Thanks.
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Snapshot overflow
Please contact support to get them.
-
- Novice
- Posts: 6
- Liked: never
- Joined: Jun 25, 2017 10:37 am
- Full Name: Ronny Hößrich
- Contact:
Re: Snapshot overflow
Hello. I use the free version with Debian Jessie. I am testing the program with different servers. All servers are without any LVM but have different disk sizes. All machines have a hardware RAID controller. A partition size of 1.8 TB works fine with HDD, but a size of 875 GB with SSD leads to problems.
Could it be a problem that many directories point to the same partition /dev/sda3 via "mount --bind"?
Regards, Ronny
Code:
[20.07.2017 15:19:48] <139913808537344> lpbcore| Checking fail 'No space for snapshot' ok.
[20.07.2017 15:19:48] <139913808537344> lpbcore| ERR |Snapshot overflow
[20.07.2017 15:19:48] <139913808537344> lpbcore| >> |--tr:Snapshot overflow fail found
[20.07.2017 15:19:48] <139913808537344> lpbcore| >> |--tr:Failed to execute method [0] for class [N10lpbcorelib11interaction9proxystub25CSnapshotResourceLockStubE].
[20.07.2017 15:19:48] <139913808537344> lpbcore| >> |An exception was thrown from thread [139913808537344].
Code:
...
/dev/sda3 on /var/www/clients/client1/web20/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
/dev/sda3 on /var/www/clients/client1/web19/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
/dev/sda3 on /var/www/clients/client1/web18/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
/dev/sda3 on /var/www/clients/client1/web16/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
/dev/sda3 on /var/www/clients/client1/web15/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
/dev/sda3 on /var/www/clients/client1/web14/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
/dev/sda3 on /var/www/clients/client1/web13/log type ext4 (rw,relatime,errors=remount-ro,data=ordered,jqfmt=vfsv0,usrjquota=quota.user,grpjquota=quota.group)
...
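The bind mounts themselves shouldn't add snapshot load: every path bound to the same partition reports the same device ID, so one snapshot covers them all. A quick way to confirm that two paths live on one filesystem (the two paths below are placeholders; substitute two of the web directories):

```shell
#!/bin/sh
# Compare the device IDs of two paths: bind mounts of one partition all
# report the same device, so a single snapshot covers them. The paths
# here are placeholders for illustration.
a=/
b=/etc

dev_a=$(stat -c %d "$a")    # %d = device number the path lives on
dev_b=$(stat -c %d "$b")
if [ "$dev_a" = "$dev_b" ]; then
    echo "same filesystem (device id $dev_a) -- one snapshot covers both"
else
    echo "different filesystems ($dev_a vs $dev_b)"
fi
```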
-
- Product Manager
- Posts: 5796
- Liked: 1215 times
- Joined: Jul 15, 2013 11:09 am
- Full Name: Niels Engelen
- Contact:
Re: Snapshot overflow
It would be best to contact support. It seems the server runs out of space while creating the snapshot, and they can assist you with that.
Personal blog: https://foonet.be
GitHub: https://github.com/nielsengelen
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Aug 08, 2017 9:18 pm
- Full Name: Artem
- Contact:
Re: Snapshot overflow
The same problem happens here.
The size of the disk we need to backup is 1.8 TB
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Snapshot overflow
Please contact our support team on that issue so they can take a look and suggest how to tune your system.
Thank you
-
- Novice
- Posts: 3
- Liked: never
- Joined: Nov 13, 2017 4:52 pm
- Full Name: Petr Kallen
[MERGED] Snapshot overflow
Hi, can I ask for help with what may be wrong?
We have downloaded the trial of Veeam Agent for Linux (Server), and we need to back up 1.2 TB to an SMB/CIFS share.
Screenshot here...
https://nextcloud.kallen.cz/index.php/s/QIE3puRm1b3tM4B
Thanks for the help - this is really urgent for us.
We don't have a support case ID yet - for now, we are only trying this product for our environment.
Petr Kallen
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Snapshot overflow
Hi,
There are too many factors that need to be taken into account before tweaking anything. I suggest you contact our support team directly so they can take a closer look. Please also collect logs from the system (use the 'M' button on the main screen).
P.S. Yes, FREE version users are also eligible for support; however, there is no SLA, so support is provided on a best-effort basis.
Thank you
-
- Novice
- Posts: 3
- Liked: never
- Joined: Nov 13, 2017 4:52 pm
- Full Name: Petr Kallen
Re: Snapshot overflow
My case id is: 02386336
Thanks,
Petr
-
- Enthusiast
- Posts: 26
- Liked: 3 times
- Joined: Jan 21, 2018 7:55 pm
- Contact:
Re: Snapshot overflow
Got the same problem after New Year's: backing up several TBs, but getting snapshot overflow even though I can't see much writing to the disk.
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Snapshot overflow
Hi,
Normally, in case of snapshot overflow the Agent should attempt 3 retries, increasing the snapshot data file size before each retry. If that has happened but the session still ends up with an overflow, then please contact our support team directly so they can advise on how to tune your snapshot properly.
Thanks
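For illustration, the retry pattern described above looks roughly like this. The starting size, growth factor, and retry count here are made-up examples, not the agent's actual values:

```shell
#!/bin/sh
# Illustration of the retry pattern described above: grow the snapshot
# data file before each retry attempt. The starting size, doubling
# factor, and retry count are made-up examples, not the agent's values.
size_mb=4096    # initial snapshot data file size (example)
retries=3

i=0
while [ "$i" -lt "$retries" ]; do
    echo "attempt $((i + 1)): snapshot data file ${size_mb} MiB"
    size_mb=$((size_mb * 2))    # grow before the next attempt
    i=$((i + 1))
done
```

With these example numbers, the three attempts would use 4096, 8192, and 16384 MiB before giving up.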
-
- Influencer
- Posts: 12
- Liked: never
- Joined: Jan 09, 2018 11:05 pm
- Full Name: Glitch
- Contact:
Re: Snapshot overflow
Hello,
And is it possible to have the instructions by PM?
We also have the same error on one of our jobs.
Thank you
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Snapshot overflow
Hi,
If all three retries haven't managed to adjust the snapshot parameters, please contact the support team directly, as manual tuning might be required.
Thank you
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Feb 19, 2018 11:35 am
- Full Name: Ralf Görtzen
- Contact:
Re: Snapshot overflow
We encountered the same problem with an RHEL 6 Oracle application server - case #02626516 just created.
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Feb 23, 2018 10:32 am
- Full Name: Johan
- Contact:
Re: Snapshot overflow
We are experiencing the same problem with RHEL; open case #02631548.
I have uploaded all the log files with the case
Backing up vg_local
Snapshot overflow
Snapshot overflow
Failed to perform managed backup
-
- Influencer
- Posts: 10
- Liked: never
- Joined: Jan 06, 2017 10:50 am
- Full Name: K
- Contact:
Snapshot overflow error
I am trying to make a snapshot of the entire system to a remote server, but unfortunately it throws the following error:
[code]16:07:37 Job Server started at 2018-03-06 16:07:37 GMT
16:07:41 Preparing to backup
16:07:42 Waiting for backup infrastructure resources availability 00:00:02
16:07:44 Creating volume snapshot 00:02:13
16:09:57 Starting full backup to [server] Endpoint Repository
16:12:04 Backing up BIOS bootloader on /dev/sda 00:00:01
16:12:05 [error] Backing up sda 00:02:12
16:14:17 [error] Failed to perform backup
16:14:17 [error] Snapshot overflow
16:14:17 [error] Snapshot overflow
16:14:17 [error] Processing finished with errors at 2018-03-06 16:14:17 GMT
[/code]
I've seen a very similar topic, but the workaround instructions were only sent by PM.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Mar 02, 2018 11:57 pm
- Full Name: Daniele Paoni
- Contact:
Re: Snapshot overflow error
I am getting the same error, but with a single-disk backup.
The log messages on the server are the following:
Code:
[06.03.2018 21:10:18] <140025018693376> vsnap | Creating snapshot file (path: /bigdisk/veeamsnapdata_stretch_{8c73d7a2-f0d4-4f32-ba47-8a072ba3817f}_#2, length: 4294967296)
[06.03.2018 21:10:18] <140025018693376> vsnap | Initializing snapshot file with 'fiemap' (flags: 2)
[06.03.2018 21:10:18] <140025018693376> vsnap | Allocating new snapshot file (path: /bigdisk/veeamsnapdata_stretch_{8c73d7a2-f0d4-4f32-ba47-8a072ba3817f}_#2, size: 4294967296)
[06.03.2018 21:12:07] <140025284060928> | Thread started. Thread id: 140025284060928, parent id: 140025294550784, role: Client processor thread
[06.03.2018 21:12:07] <140025284060928> net | Client connected...
[06.03.2018 21:12:07] <140024379074304> | Thread started. Thread id: 140024379074304, parent id: 140025284060928, role: peer local sock peer
[06.03.2018 21:12:07] <140024379074304> lpbcore| Starting proxystub protocol dispatch loop.
[06.03.2018 21:12:07] <140025284060928> | Thread finished. Role: 'Client processor thread'.
[06.03.2018 21:12:07] <140025008203520> | Thread started. Thread id: 140025008203520, parent id: 140025294550784, role: Client processor thread
[06.03.2018 21:12:07] <140025008203520> net | Client connected...
[06.03.2018 21:12:07] <140025284060928> | Thread started. Thread id: 140025284060928, parent id: 140025008203520, role: peer local sock peer
[06.03.2018 21:12:07] <140025284060928> lpbcore| Starting proxystub protocol dispatch loop.
[06.03.2018 21:12:07] <140024379074304> lpbcore| Starting proxystub protocol dispatch loop. ok.
[06.03.2018 21:12:07] <140025008203520> | Thread finished. Role: 'Client processor thread'.
[06.03.2018 21:12:07] <140024379074304> | Closing socket device.
[06.03.2018 21:12:07] <140024379074304> | Thread finished. Role: 'peer local sock peer'.
[06.03.2018 21:12:07] <140025284060928> lpbcore| Starting proxystub protocol dispatch loop. ok.
[06.03.2018 21:12:07] <140025284060928> | Closing socket device.
[06.03.2018 21:12:07] <140025284060928> | Thread finished. Role: 'peer local sock peer'.
[06.03.2018 21:12:19] <140024389564160> vsnap | Stretch snapshot data. Overflow command received. ErrorCode=4294967274
[06.03.2018 21:12:19] <140024389564160> vsnap | Snapstore filled 8192 MiB
[06.03.2018 21:12:24] <140025060652800> vsnap | Reading snapshot errno for device [252:1].
[06.03.2018 21:12:24] <140025060652800> vsnap | errno=SUCCESS
[06.03.2018 21:12:24] <140025060652800> vsnap | Reading snapshot errno for device [252:17].
[06.03.2018 21:12:24] <140025060652800> vsnap | ERR |errno=-61
[06.03.2018 21:12:24] <140025060652800> lpbcore| Checking fail 'No space for snapshot'
[06.03.2018 21:12:24] <140025060652800> vsnap | Stretch snapshot data thread complete
[06.03.2018 21:12:24] <140024389564160> vsnap | Stretch snapshot data thread ok.
[06.03.2018 21:12:24] <140024389564160> | Thread finished. Role: 'IThread'.
[06.03.2018 21:12:24] <140025060652800> vsnap | Stretch snapshot data thread complete ok.
[06.03.2018 21:12:24] <140025060652800> lpbcore| Checking fail 'No space for snapshot' ok.
[06.03.2018 21:12:25] <140025060652800> lpbcore| ERR |Snapshot overflow
[06.03.2018 21:12:25] <140025060652800> lpbcore| >> |--tr:Snapshot overflow fail found
[06.03.2018 21:12:25] <140025060652800> lpbcore| >> |--tr:Failed to execute method [0] for class [lpbcorelib::interaction::proxystub::CSnapshotResourceLockStub].
[06.03.2018 21:12:25] <140025060652800> lpbcore| >> |An exception was thrown from thread [140025060652800].
[06.03.2018 21:12:25] <140025060652800> vsnap | Destroying snapshot, snapshot id: [0xffff880378685280].
[06.03.2018 21:12:27] <140025060652800> vsnap | Stretch snapshot data thread complete
[06.03.2018 21:12:27] <140025060652800> vsnap | Stretch snapshot data thread complete ok.
[06.03.2018 21:12:27] <140025060652800> vsnap | Snapstore cleanup
[06.03.2018 21:12:27] <140025060652800> vsnap | Snapstore filled 8192 MiB
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Jan 25, 2018 7:11 pm
- Full Name: Chris Cruger
- Contact:
Re: Snapshot overflow
I also have this problem, can I get the instructions?
-
- Novice
- Posts: 3
- Liked: never
- Joined: Dec 05, 2016 10:50 am
- Full Name: Boris Virc
- Contact:
Re: Snapshot overflow
I also have this problem. Can you send me instructions please ?
-
- Product Manager
- Posts: 6551
- Liked: 765 times
- Joined: May 19, 2015 1:46 pm
- Contact:
Re: Snapshot overflow
Hi Boris, Chris,
There is no "silver bullet"; every config requires different adjustments. Kindly let our support team review your setup and adjust it.
Thank you
-
- Novice
- Posts: 3
- Liked: never
- Joined: Mar 15, 2018 11:14 am
- Full Name: C James
- Contact:
Re: Snapshot overflow
I seem to be getting this same snapshot overflow issue on my newly configured job. The system is as detailed below:
Ubuntu 16.04 LTS
sda1 10 GB - no issues
sdb1 5 TB - mounted to /media/XYZ-XZY - ext4 fs holding Samba shares, mounted via UUID in fstab
sda backs up OK.
sdb hits snapshot overflow - it doesn't seem to try to capture anything at all.
Please can I have the instructions to resolve this?
Thanks
CJ