adamswann wrote:Earlier posts on this thread (which looks like it dates back to 2009) indicate that x86 is also a requirement. Is that still the case?
craig.anderson wrote:One issue I have though is that the capacity/free space always reports as 8388608.0 TB (quite obviously this is not correct)
craig.anderson wrote:Also, is there anywhere that I can get more detailed information on what *exactly* the difference is between a transport-agent-enabled repository and a CIFS or NFS share?
Obviously the transport agent takes care of some of the processing so that the proxy does not have to (which I presume means less CPU load on the proxy and less network traffic between the proxy and repository), but I'm unsure exactly what it does and how that might benefit or hinder certain jobs/scenarios (incrementals, synthetic fulls, reverse incrementals, low-bandwidth WAN connections). I've already pored over the online documentation and done some forum searching, but it's still not clear.
If I could understand this better I'd have a better case for using higher quality repositories (read: more expensive) and probably be able to design overall better backup solutions.
ChrisRomer wrote: I assume iSCSI would share the same concerns you stated for a CIFS share?
craig.anderson wrote:I've got this working on my RS812+ Synologies (which use x86 architecture) but it does not work on my DS213s (the repository can be set up but the backup jobs fail)
One issue I have though is that the capacity/free space always reports as 8388608.0 TB (quite obviously this is not correct)
It doesn't hinder my backups but it would be nice if Veeam were aware of the remaining backup capacity.
Anybody have any ideas on this?
[01.02.2014 04:32:33] <01> Error /bin/df -P -x vmfs returned non-zero code
 - /bin/df: invalid option -- x
BusyBox v1.16.1 (2013-11-06 05:22:56 CST) multi-call binary.

Usage: df [-Pkmh] [FILESYSTEM]...

Print filesystem usage statistics

Options:
 -P POSIX output format
 -k 1024-byte blocks (default)
 -m 1M-byte blocks
 -h Human readable (e.g. 1K 243M 2G)
 at Veeam.Backup.EsxManager.XmlCommandBuilder.ValidateFeedback(String parData)
[01.02.2014 04:32:33] <01> Error at Veeam.Backup.EsxManager.XmlCommandBuilder.ValidateFeedbackNoErrorXmlException(String parData)
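The log above shows BusyBox's df rejecting the -x flag it was given. As a minimal sketch (assuming a POSIX shell on the repository host), you can probe whether the local df implements -x before relying on it:

```shell
#!/bin/sh
# Probe whether this host's df implements the -x (exclude-fstype) flag.
# BusyBox df only accepts -P, -k, -m and -h, so on a BusyBox NAS the
# probe fails and you know to fall back to filtering df's output instead.
if df -P -x vmfs / >/dev/null 2>&1; then
    echo "df supports -x; safe to exclude vmfs directly"
else
    echo "df lacks -x (likely BusyBox); filter vmfs mounts manually"
fi
```

Running this over SSH on the NAS is a quick way to confirm you are hitting the same limitation before digging into Veeam's logs.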
v.Eremin wrote:Thank you for sharing such valuable information; much appreciated.
Also, I'm wondering what particular issues you have while trying to back up to the said NAS device added as a Linux server.
avit wrote:2. Run a full backup (reverse-incremental 'mode') - this works fine, resulting in one full set of backups for all 11 of my VMs
3. The next time (and every subsequent time) the incremental runs, it will fail with the error message:
="avit" Erase all restore points for my "Daily Backup" job, from the repository (Synology NAS acting as Linux server)
avit wrote:What is happening here is that the VolumesHostDiscover module of Veeam Backup is trying to issue the command "/bin/df -P -x vmfs", but the BusyBox implementation of Linux used on my Synology NAS (and probably all of them) does not support the -x option, which tells df to exclude all filesystems of type 'vmfs' from the listing. What Veeam gets back is df's "usage" message, which obviously doesn't make sense to it.
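Since BusyBox df lacks -x, the same exclusion can be emulated by filtering mount points by filesystem type first. A hedged sketch, assuming a Linux-style /proc/mounts as found on Synology DSM:

```shell
#!/bin/sh
# Emulate `df -P -x vmfs` on BusyBox: /proc/mounts lists one mount per
# line as "device mountpoint fstype options ...", so keep every mount
# point whose type is not vmfs and ask df about only those paths.
mounts=$(awk '$3 != "vmfs" { print $2 }' /proc/mounts)
df -P $mounts
```

The unquoted $mounts is intentional here: word-splitting turns each surviving mount point into a separate df argument (this assumes mount points contain no spaces).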
I tweaked the perl script that issues this df command with the -x option: I simply removed the text "-x vmfs", saved the perl script, tar'd up the veeam_soap.tar package, and put it back in the Veeam folder on my server. Now when I rescan that repository, I see the proper size reported. There was no risk in tweaking the script because my NAS doesn't have any VMFS filesystems anyway. I can only guess that when VolumesHostDiscover (the module that populates dialog boxes with your repository information: size, free space left, etc.) receives the wrong response back from BusyBox's df, it results in that spurious terabyte disk size.
We still haven't resolved the main issue I'm having with backing up to the NAS as a Linux server, and in actual fact this df issue had absolutely nothing to do with that problem. I'm glad I looked into it, though, because I was ready to accept Veeam Support's diagnosis that the df command was the cause; having put the workaround in place by removing "-x vmfs" from the script, I have proven that it isn't.
[i]21/03/2014 10:30:30 Starting synchronization of backup repositories for all backup jobs
21/03/2014 10:30:31 Found 1 backup repositories
21/03/2014 10:32:12 Error Processing backup repositories
21/03/2014 10:32:12 Error Failed to synchronize backup repository backup copy linux direct Error: Timed out waiting for operation "(cd /tmp && perl veeam_soap3e961f6f-3bf5-46bc-87e9-ced1c41d2384.pl -d -c -l lib3e961f6f-3bf5-46bc-87e9-ced1c41d2384 -e /tmp/veeam_error3e961f6f-3bf5-46bc-87e9-ced1c41d2384 2>> /tmp/veeam_error3e961f6f-3bf5-46bc-87e9-ced1c41d2384) || cat /tmp/veeam_error3e961f6f-3bf5-46bc-87e9-ced1c41d2384 2>&1", timeout: 100000 ms
21/03/2014 10:32:12 Error Failed to perform backup repositories synchronization[/i]