Comprehensive data protection for all workloads

READ THIS FIRST : [FAQ] FREQUENTLY ASKED QUESTIONS (updated to v12)

Post by Gostev » 4 people like this post

Last updated: June 2023

General FAQ
This topic (scroll down) covers general information about the product, and all core features which are not hypervisor-specific.

VMware vSphere FAQ
The following FAQ covers VMware vSphere specific questions:
VMware : [FAQ] FREQUENTLY ASKED QUESTIONS

Tape FAQ
The following FAQ covers tape specific questions.
Tape : [FAQ] FREQUENTLY ASKED QUESTIONS

Object Storage FAQ
The following FAQ covers object storage specific questions.
Object Storage: [FAQ] FREQUENTLY ASKED QUESTIONS

Hardened Repository FAQ
The following FAQ covers Hardened Repository specific questions.
Hardened Repository: [FAQ] FREQUENTLY ASKED QUESTIONS

Licensing (v12)

Post by Gostev »

Q: How is the product licensed?
A: There are two licensing options:
• Per Workload (default and recommended): for any supported workload, including on-prem VMs, cloud instances (VMs, databases, file shares), physical computers, enterprise applications, NAS backup and File to Tape backup (aka Veeam Universal License).
• Per CPU Socket: legacy licensing option for VMware and Hyper-V VMs only. Only the CPU sockets of the "source" hypervisor hosts (where protected virtual machines reside) are counted. Destination hosts (or VMs) for replication and migration jobs do not need to be licensed. Hosts running virtual machines which are not processed by Veeam do not need to be licensed either, even if they are part of the same cluster.

Q: What are the feature differences between Standard, Enterprise and Enterprise Plus editions?
A: Please refer to the Veeam Data Platform Feature Comparison PDF. Note that product editions apply only to installations with a Socket license. Veeam Universal License does not have editions and includes all features and capabilities (so it is effectively Enterprise Plus edition).

Q: What is the Veeam Data Platform?
A: Veeam Data Platform is available in three editions and provides access to the following products:
1) Foundation: Veeam Backup & Replication
2) Advanced: Veeam Backup & Replication + Veeam ONE
3) Premium: Veeam Backup & Replication + Veeam ONE + Veeam Recovery Orchestrator

Q: Are any of the product components licensed separately? Namely Enterprise Manager, backup servers, backup proxy servers, backup repositories, WAN accelerators, tape libraries.
A: No, they are not licensed separately. You can deploy as many of these components as makes sense for your environment.

Q: Can I install multiple backup servers in the same or multiple sites using the same license file?
A: Customers can use a single License Key to deploy multiple backup infrastructures with no design restrictions, provided that they use Veeam Backup Enterprise Manager for centralized license management across these infrastructures.

Q: Can I mix and match different product editions, or different license types in the same environment?
A: Customers can use multiple License Keys with different license terms, but only for completely separate backup infrastructures (which are defined as not sharing backups, servers or storage between each other, and are protecting different source infrastructures). Please refer to the Licensing Policy for more details.

Q: What counts as a “workload”?
A: “Workload” means a computer (physical, virtual or cloud), an application (on-prem or SaaS), unstructured data (files or objects) or any data source that VBR protects or manages.

Q: I’m using File to Tape Jobs to write data to a Tape. Do I need a license for such jobs?
A: Starting with V12, File to Tape jobs require a license. 500 GB of source files counts as one workload. Veeam backup files are excluded from this calculation and are not counted against workload consumption, as the protected data is already licensed within its source job.
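
For rough capacity planning, you can estimate File to Tape workload consumption from the amount of source data. A minimal Python sketch, assuming consumption is rounded up to the next 500 GB increment (the exact rounding rules are Veeam's, so treat this as an illustration only):

```
import math

def file_to_tape_workloads(source_gb: float) -> int:
    """Estimate licensed workloads for a File to Tape job.

    Assumption: every started 500 GB of source files counts as one
    workload; Veeam backup files are excluded from the calculation.
    """
    if source_gb <= 0:
        return 0
    return math.ceil(source_gb / 500)

# Example: 1.8 TB of file server data written to tape
print(file_to_tape_workloads(1800))  # -> 4 workloads
```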

Q: At what specific moment does a workload (or a source host's CPU sockets) get counted towards the license pool?
A: Upon the first backup, replication or copy of a workload. Note that it does not matter which Veeam solution is used to process the workload.

Q: Does CPU core count on my hosts matter for Socket-based licensing?
A: No, the number of CPU cores does not affect licensing or pricing in any way.

Q: I have removed some hosts or workloads from my environment, can I get the licenses back into the license pool?
A: Yes. Open the license management dialog in the main product UI, select the workload, then Manage > Revoke.

Upgrade: Licensing (v12)

Post by Gostev »

Q: Do I need a new license file to install the new major product version?
A: V12 uses the same license file format introduced with v10, so you can use your existing v10 or v11 license file to install v12 and future versions. Do check the support expiration date in the license file, though, as your support contract must be active as of the date when the version you are installing was built.

Q: Is there a free upgrade from previous versions?
A: With Veeam, all upgrades are "free" for customers on maintenance. For instance-based licensing (subscription), you are eligible to upgrade as long as you have a valid subscription. All perpetual license purchases include 1 year of maintenance. If you did not extend your maintenance after the 1st year, you need to address this with your Veeam sales representative first by purchasing maintenance, including coverage for the "blank" period.

Q: Do I need a special license if I want to upgrade the community edition?
A: Community Edition can be upgraded without providing a license.

Upgrade: Technical (v12)

Post by Gostev »

Q: Do I need to perform a clean install, or can I upgrade my existing B&R installation?
A: A clean install is not required; in-place upgrade is fully supported.

Q: How do I upgrade?
A: The upgrade process is simple and straightforward. If you use Enterprise Manager, upgrade it first. Make sure no jobs or restore processes are active and run the setup program on your backup server. After the setup finishes, launch the B&R console and follow the Upgrade wizard to automatically upgrade all the remaining components. For more details and the step-by-step process, please refer to the Upgrade section of the Release Notes document.

Q: Will upgrade preserve my jobs and other settings?
A: Yes, all your jobs and settings will be preserved. This also means that potential new features will not be enabled by default, and must be enabled manually if desired. This is to ensure the product behavior remains the same after upgrade.

Q: Which B&R versions can be upgraded to the latest version?
A: Please refer to the “Upgrading Veeam Backup & Replication” section in the release notes. Additionally, supported upgrade paths are documented in KB2053 - Veeam Backup & Replication Upgrade Paths.

Q: Can current and previous versions be installed on the same server?
A: No. You can, however, run different versions in parallel on different servers while processing the same VMs without any issues (as long as jobs do not overlap). This is the approach most of our customers choose for POC testing of new versions before upgrading their production deployment.

Q: Can Enterprise Manager collect data from older Veeam Backup & Replication versions?
A: In general, Enterprise Manager works with older VBR versions. You can find the supported versions in the release notes (in the Upgrading Veeam Backup & Replication section).

Q: Can I restore backups made with previous product versions?
A: Yes. The current version can restore backups made with any version of Veeam Backup & Replication starting from version 1.0.

Q: Will my configuration database be migrated to PostgreSQL during the upgrade?
A: No. Migration to PostgreSQL as the configuration database engine is a post-upgrade process.

Q: How can I migrate my configuration database to PostgreSQL server after the upgrade to Veeam Backup & Replication v12?
A: It is a multi-step process; please refer to the product documentation for the detailed steps.

Q: Can Veeam Backup & Replication and Enterprise Manager use different configuration database engines (MSSQL, PostgreSQL)?
A: No. Both products must use the same database engine for the configuration database.

System Requirements (v12)

Post by Gostev »

Q: What are the system requirements and supported configurations for the product components, and for protected VMs?
A: Refer to the System Requirements section of the Release Notes document, available in the download area, at https://helpcenter.veeam.com and on the product page under the Resources tab. Refer to the platform-specific FAQ topics for additional platform-specific information.

Q: Is Veeam Backup & Replication supported running in a VM?
A: Yes.

Q: Is Veeam Backup & Replication available as a Linux-based virtual appliance?
A: No. Veeam Backup & Replication itself must be installed on Microsoft Windows. However, the VMware backup proxy, VMware CDP proxy, backup repository and tape server roles can also run on Linux.

Job Types (v12)

Post by Gostev » 2 people like this post

Q: What are Backup jobs designed for?
A: Backup jobs produce highly compressed and deduplicated backup files with production VM data, which allows you to save a significant amount of the space required to host the backups. A full VM restore from backup normally takes significant time due to the need to extract and copy the full VM image from the backup to the production storage, but it can also be done instantly for a limited number of VMs (see Instant VM Recovery).

Q: What are Application Backup jobs designed for?
A: Application Backup jobs give you the option of centralized configuration of your enterprise plug-in backups (Oracle RMAN, SAP HANA and SAP on Oracle), so you do not have to open the plug-in configuration on each application server.

Q: What are Backup Copy jobs designed for?
A: Backup Copy jobs efficiently create copies of your backups both on-site (usually for archival purposes) and off-site (to meet the off-site backup storage requirement). Maintaining multiple copies of your backups, with some of them off-site, is dictated by the industry best practice known as the 3-2-1 backup rule: at least 3 copies of production data (1 production and 2 backups), with backups stored on 2 different media types, and 1 of them stored off-site.

Q: I have Legacy Backup Copy jobs in my backup console. What are they?
A: After an upgrade to Veeam Backup & Replication v12, pre-v12 backup copy jobs using the periodic copy mode are listed as Legacy Backup Copy jobs. You can still edit those jobs, but creating new legacy jobs or cloning existing ones is not possible.

Q: What are SureBackup jobs designed for?
A: SureBackup jobs perform actual recovery verification by powering on one or more VMs in an isolated environment, and verifying recovery by checking that the VM started, the OS booted, the VM responds to ping, and applications are running fine. vSphere, Hyper-V or Agent backups can be tested with a SureBackup job. The SureBackup job is also the key component of the U-AIR and On-Demand Sandbox functionality.

Q: What are Replication jobs designed for?
A: Replication jobs produce exact replicas of production VMs on standby hosts. These replicas can be powered on immediately when the production VM goes down, without any dependency on the Veeam Backup & Replication server, and at full I/O performance. However, replicas require a standby host and more disk space than backups, because they are stored in uncompressed, native format. Thus, replicas are typically used for tier 1 VMs with low recovery time objectives.

Q: What are CDP policies designed for?
A: CDP policies produce exact replicas of production VMs on standby hosts (like replication), but with an RPO of seconds instead of minutes / hours.

Q: What are File Share jobs designed for?
A: File share (NAS backup) jobs perform backup of SMB (CIFS) or NFS file shares. Usually they protect data on NAS filers, but you can also back up any other managed Windows or Linux machine at the file level.

Q: What are VM Copy jobs designed for?
A: VM Copy jobs produce exact copies of selected VMs on the selected storage, and can be used for scenarios such as datacenter migrations, creating test labs, and ad-hoc backups. VM Copy jobs support processing of running VMs. Unlike backup jobs, however, VM Copy does not support incremental runs. VM Copy jobs are only supported for VMware VMs.

Q: What are File Copy jobs designed for?
A: File Copy jobs copy regular files between any servers managed by Veeam (Windows or Linux servers, or hypervisor hosts), and can be used for various administrative tasks. File Copy jobs do not support processing of virtual disk files belonging to running VMs.

Q: What are Backup to Tape jobs designed for?
A: Backup to Tape jobs copy Veeam backup files to tape, with full tracking of the content of the copied backup files in the tape catalog. This allows for streamlined restores, with the restore process able to pick the required tapes automatically.

Q: What are File to Tape jobs designed for?
A: File to Tape jobs copy regular files from any servers managed by Veeam (Windows or Linux servers, or hypervisor hosts) to tape. Copied files are tracked in the tape catalog, however Veeam backup files are treated as regular files.

Q: What are Quick Migration jobs designed for?
A: Quick Migration jobs can move running VMs to the selected host and storage with the minimum possible downtime. Depending on your migration scenario and VMware licensing level, the migration job will automatically leverage one of the following: VMware vMotion, VMware Storage vMotion, Veeam Quick Migration with SmartSwitch, or Veeam Quick Migration with cold switch. This allows you to quickly evacuate VMs from hosts requiring urgent maintenance without affecting bandwidth or performance, or perform inter- and intra-datacenter migrations. Migration jobs are only supported for VMware VMs.

Backup (v12)

Post by Gostev »

Backup Architecture

Q: What is the data flow in case of backup?
A: Disk > Backup proxy > Network > Backup repository > Disk

Backup Proxies

Q: What is the backup proxy server?
A: The backup proxy fetches the VM data (configuration files and virtual disks) from the production storage, processes the data to reduce its size by applying deduplication and compression, and sends it off to the backup repository (in case of backup) or to another backup proxy server (in case of replication). The backup proxy is also used to write VM data (configuration files and virtual disks) back to the production storage during VM restores, and to create and update replica VMs for replication.

Q: Why is it best to install the proxy server on a physical machine?
A: Because on-the-fly processing (deduplication and compression) of heavy data streams (up to multiple gigabytes per second) requires significant CPU, memory and I/O resources, a physical proxy server is best suited for 24/7 virtual environments with a high consolidation ratio. Otherwise, you may find the backup process affecting your production workloads.

Q: What OS can I install the proxy server on?
A: Microsoft Windows 10 / Windows Server 2012 (64-bit) or later, and supported 64-bit Linux systems (VMware only; see the release notes for the list of supported distributions).

Q: Do I have to set up a proxy server to start using the product?
A: No, the default proxy server is deployed by the setup automatically. However, we recommend adding additional proxies for redundancy and load balancing. For recommendations on where it is best to deploy additional proxies, please refer to the hypervisor-specific FAQ.

Q: Can the proxy server back up itself?
A: Yes, the proxy server can back up itself and any other Veeam Backup & Replication component.


Backup Repositories

Q: What is the backup repository?
A: The backup repository is the place where your backups are stored. Each backup repository has a local agent that enables efficient processing of incremental data when the backup proxy and backup repository communicate over LAN or WAN.

Q: What format does Veeam use for storing backups on the backup repository?
A: Veeam Backup & Replication provides three backup chain formats:
  • Per-machine backup with separate metadata files
  • Per-machine backup with single metadata file
  • Single-file backup
The difference between these three formats is documented in our user guide.

Q: Which format is the default in V12?
A: The default for new repositories is per-machine backup with separate metadata files.

Q: What is the benefit of "per-machine backup with separate metadata files"?
A: Per-machine backup files have many advantages over the single-file backup format:
  • Easier tape restore
  • No 16 TB file size limit issues on incorrectly formatted NTFS volumes
  • Better performance through parallel processing
  • Easier job management (you can put more VMs in one job)
  • Better resource usage with scale-out backup repositories (SOBR)
  • Easy deletion of individual VMs from backups
  • Per-VM accounting
  • Ability to move individual workloads between jobs
  • "Active full" or "retry" for single machines
Per-machine backup with separate metadata files is mandatory for new backup copy jobs in V12.

Q: What do you support as a backup repository?
A: The following repositories are supported:
  • Any storage directly attached to a Microsoft Windows server. This can be local disks, directly attached disk-based storage (such as a USB hard drive), or an iSCSI/FC SAN LUN if the server is connected to the SAN fabric.
  • Any storage directly attached to, or mounted on, a Linux server (x86 and x64 of all major distributions are supported; bash shell, SSH and Perl must be installed). This can be local disks, directly attached disk-based storage (such as a USB hard drive), an NFS share, or an iSCSI/FC SAN LUN if the server is connected to the SAN fabric.
  • NFS shares. Data can be written to NFS shares directly from the backup proxy server, or through a gateway server (useful when the NFS share is located in a remote site).
  • SMB (CIFS) shares. Password authentication is supported. Data can be written to an SMB share directly from a Windows backup proxy server, or through a Windows gateway server (useful when the SMB share is located in a remote site or the backup proxy server runs on Linux).
  • Disk-based deduplication appliances with Veeam integrations: currently Dell EMC Data Domain, ExaGrid, HPE StoreOnce and Quantum DXi (and their OEM versions).
  • Object storage. Data can be written to AWS, Azure and S3-compatible object storage directly from the backup proxy server or through a dedicated gateway server. See the Object Storage FAQ for more information.
  • Object storage in a scale-out backup repository. Data can be written to AWS, Azure and S3-compatible object storage as part of a scale-out backup repository, where object storage can be added as a performance tier extent, capacity tier or archive tier.

Q: Can I use a virtual machine as my backup repository?
A: Yes, however be sure to think through your recovery plan for a disaster scenario carefully. While the actual VM does not need to be running in order for you to be able to restore (you can always import your backups directly from storage), remember that a disaster may affect your ability to retrieve the backup files if you store them on VM disks located on your production storage. Additional recommendations and considerations are provided in the hypervisor-specific FAQ.

Q: What RAID level do you recommend for the underlying backup storage?
A: We recommend RAID 6 (or any other similar dual-parity implementation) for optimal redundancy. RAID 10 offers higher performance, especially for I/O-intensive backup operations such as synthetic fulls.

Tape & Offsite

Q: Does Veeam support writing its backups to tape?
A: Yes, Veeam offers native tape support for LTO-3 and newer. See Tape Support FAQ for more information.

Q: What is the best way to copy my backups offsite?
A: Veeam Backup & Replication provides three options:
  • Backup Copy job to an off-site backup repository. Consider using built-in WAN acceleration for slow links.
  • Backup Copy job to an object storage provider.
  • Backup Copy job to a Veeam Cloud Connect repository provided by a Veeam service provider. Consider using built-in WAN acceleration for slow links.

Backup Modes

Q: Which of the available backup modes should I choose?
A: This depends on your requirements. As VBR features evolved over the years, today we recommend the following backup modes:
  • Forever forward incremental (recommended)
  • Forward incremental with synthetic or active fulls (recommended)
Reverse incremental is no longer recommended for performance reasons.

Forward incremental with active fulls is recommended for slow backup storage systems (NAS devices and other systems that don't support block cloning with ReFS / XFS).

Forward incremental with synthetic or active fulls has the advantage that the backup chain does not change during backups. This allows you to run virtual labs or tape jobs while a new backup is being created.

Forever forward incremental has an advantage over reverse incremental in that the merge happens after the backup has finished, which means VM snapshots are released sooner (see the sketch below). If you want to write a full backup to tape, a "virtual synthesized full backup" needs to be created for forever forward incremental chains.

Reverse incremental mode does not allow writing incremental backups to tape.
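
To make the chain mechanics more concrete, here is a toy Python simulation of how a forever forward incremental chain is maintained: each run adds an increment, and once the retention limit is exceeded the oldest increment is merged into the full backup. This is only a conceptual sketch, not Veeam's actual merge logic:

```
def simulate_forever_forward(days: int, retention_points: int) -> list[str]:
    """Return the backup chain after the given number of daily runs."""
    chain: list[str] = []
    for day in range(1, days + 1):
        if not chain:
            chain.append(f"full(day {day})")      # first run creates the full backup
        else:
            chain.append(f"incr(day {day})")      # every later run adds an increment
        if len(chain) > retention_points:
            # retention exceeded: merge the oldest increment into the full backup
            merged = chain.pop(1)
            chain[0] = f"full(merged up to {merged})"
    return chain

# 10 daily runs while keeping 7 restore points
print(simulate_forever_forward(10, 7))
```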

Q: I am using deduplicating storage device. Is synthetic full backup good for me?
A: Depending on how your storage implements deduplication (inline, post-process, or integrated via DD Boost or Catalyst), you may get better performance by using active (real) full backups instead of synthetic ones.

Q: We have a policy in place that requires me to do real full backups. Am I forced to use synthetic fulls with Veeam?
A: No, you can configure the job to perform active (real) full backups instead. Also, you can schedule active full backup, for example, once a month (or once a quarter), while doing synthetic full backup for the rest of the time. Veeam provides great flexibility around scheduling active full backups.

Q: How exactly does reversed incremental backup mode work?
A: Please see www.veeam.com/kb1933

Q: How exactly do the forward incremental modes work?
A: Please see www.veeam.com/kb1932

Q: How can the advanced ReFS / XFS integration help?
A: In general it's about performance and space efficiency for backup modes without active full backup. This blog post explains the benefits.

Backup Copy (v12)

Post by Gostev »

Q: What is the data flow in case of backup copy?
A: Data flow depends on the transport mode selected in the job.
Direct mode: Disk > Source backup repository > Network > Target backup repository > Disk
WAN accelerated: Disk > Source backup repository > Source WAN accelerator > Network > Target WAN accelerator > Target backup repository > Disk

Q: Does Backup Copy job literally copy backup files?
A: No. Backup Copy job creates new backup files containing workloads it is selected to process.

Q: What is the difference between selecting jobs, repositories or backups as an object to process?
A: The wizard lets you choose from three sources:
  • From jobs: copy restore points from a selected job
  • From repositories: copy restore points from all jobs pointed to the selected repository
  • From backups: copy restore points from selected backup jobs on external repositories

Q: Previously I could choose to copy single machines. Now I can only copy entire backup jobs?
A: A single machine cannot be added to a backup copy job directly, but you can work with exclusions: add a repository, job or backup to the backup copy job, then use the exclusion feature for the objects you don't want the copy job to process.

Q: How can I create a Backup Copy job from backup copies (source repository -> BCJ -> destination 1 repository -> BCJ -> destination 2 repository)?
A: You cannot create Backup Copy jobs from a backup copy in Veeam Backup & Replication V12. Legacy Backup Copy jobs from V10 or V11 which allowed such a configuration will keep working after the upgrade to V12.

Replication (v12)

Post by Gostev »

Q: What is the data flow in case of replication?
A: Disk > Source proxy > Network > Target proxy > Disk

Q: Can I use the same source and target proxy for replication?
A: Yes, but only when replicating locally (on-site replication). In this case, B&R scheduler will attempt to use the same backup proxy whenever possible.

SureBackup (v12)

Post by Gostev »

Q: Can you please go over the whole concept and what it is all about?
A: The following two demo videos explain the most common scenarios and questions.

Q: What is a SureBackup job?
A: SureBackup jobs perform actual recovery verification by powering on one or more VMs from a backup or replica (SureReplica job) in an isolated environment, and verifying recovery by checking that the VM started, the OS booted, the VM responds to ping, and applications are running fine. Additionally, to be able to detect storage corruption issues (aka bit rot), a SureBackup job can optionally verify the complete machine image by reading all disk blocks from the backup file and comparing the contents against the CRC included in each block.

Q: Which backups are supported?
A: SureBackup jobs can verify the following workloads:
  • VM backups (vSphere, Hyper-V)
  • Agent backups (Veeam Agent for Microsoft Windows, Veeam Agent for Linux)
A SureReplica job can verify:
  • VM replicas (vSphere)

Q: What exactly is verified by the SureBackup job? VM availability, OS boot-up, or also applications?
A: All of the above. There are 4 steps, or levels, of recovery verification:
1. Check for successful VM startup (API call results). For example, if a virtual disk is missing from the backup file, or if the disk descriptor file is corrupted, the VM will not start and vCenter will report the issue to SureBackup.
2. Check for successful OS boot-up by checking the VM heartbeat. If the OS does not boot, guest tools will never start and the heartbeat will never appear.
3. Check for network connectivity by pinging the VM. If the VM never appears on the network, this also indicates a recovery issue.
4. Check VM applications by running test scripts against them (see the sketch below for an idea of such a script). If an application does not respond to the test script with the expected results, this indicates a recovery issue.
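
For illustration, an application test script is typically just an executable that probes the application and signals the result via its exit code (the common convention of zero for success, non-zero for failure). A minimal Python sketch that checks whether a service port answers; the address and port are hypothetical placeholders, not values from this FAQ:

```
import socket
import sys

def port_is_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical address of the verified VM and an application port
    host = sys.argv[1] if len(sys.argv) > 1 else "10.77.1.15"
    port = int(sys.argv[2]) if len(sys.argv) > 2 else 443
    sys.exit(0 if port_is_open(host, port) else 1)  # 0 = verification passed
```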

Q: Are all of the above tests mandatory?
A: No. You can define which tests you want to use granularly for each VM, container, or linked backup job.

Q: Which machines will be verified by SureBackup job?
A: All machines from the selected application group (these run for the duration of the job providing required infrastructure services), and all machines from linked backup jobs (these are simply started, verified and stopped one by one).

Q: I can see that it is possible to set up a SureBackup job without specifying an application group to use. How so?
A: If you do not care to verify application recoverability, and just want to make sure the VM can boot up, then you do not need to worry about some applications possibly not starting because of missing dependencies, and thus you do not need an application group to start and run those dependencies first. Just fill your SureBackup job with linked backup jobs, and make sure application test scripts are disabled to avoid getting verification error reports.

Q: Can I verify Veeam Agent backups from my physical machines?
A: Yes. Veeam Agent backups can be verified within an application group. This allows you to power on critical components (such as domain controller or database server) from Agent backups before you start testing VMs from a linked job. You can configure the same verification tasks for Agent backups as for your VM backups.

Q: I received a report that verification failed for one of the VMs. How can I quickly see what is going on?
A: Open the corresponding SureBackup job session, locate the VM in question, right-click it and choose to restart it. Click the hyperlink under the VM name to open its console (or use the vSphere client), and troubleshoot the recovery issue manually.

Q: Verification procedure for some application we have is a manual process that cannot be scripted. What is our best option?
A: Create dedicated application groups for these applications. Create a SureBackup job with this application group, select the Keep VM running checkbox, and schedule the job with the required period (say, once a week after the full backup). The VMs will sit there and wait for the staff responsible for manual recovery verification to come in the morning and perform the required tests by connecting to the VMs manually.

Deduplication (v12)

Post by Gostev » 1 person likes this post

Q: What kind of deduplication do you perform?
A: We perform agentless, block-level, inline (on-the-fly) deduplication. Deduplication happens both at the source (before data is sent to the backup storage, which significantly improves incremental backup performance) and at the target (to achieve additional reduction for jobs with multiple VMs).

Q: What are the typical data reduction ratios?
A: Compression and deduplication ratios of up to 10x and more of the original size can be achieved, but this depends on many factors, such as the number of VMs in the job, the similarity of those VMs, the content of virtual disks, etc. Over the years, 2x has become the rule of thumb for a "normal" environment.

Q: 10x is nice for software, but other software and hardware dedupe vendors claim 100x and higher deduplication ratios?
A: Ask them to provide the formula they use to calculate the dedupe ratio. With Veeam, 10x is the pure deduplication ratio within a single full backup file (bytes in divided by bytes out). Other vendors often inflate ratios to achieve impressive numbers for marketing purposes, typically by assuming each backup is a full. If you apply this approach to Veeam, then with the most typical 30-day retention policy with daily backups you will get up to a 300x "marketing" dedupe ratio (roughly 10x real reduction multiplied by 30 restore points counted as fulls). This is because Veeam allows you to keep only one full backup on disk at any time (no matter how long your retention policy is).

Q: To what level the deduplication is done?
A: We do block-level dedupe with a constant block size (configurable: 8192 KB, 4096 KB, 1024 KB, 512 KB or 256 KB blocks), at the job level (not between jobs). If the repository is configured with per-machine backup files, deduplication is done per VM.
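
As a conceptual illustration of fixed-block deduplication (a toy sketch only, not Veeam's actual engine), splitting a data stream into constant-size blocks and keeping each unique block once looks roughly like this:

```
import hashlib

def dedupe_ratio(data: bytes, block_size: int = 1024 * 1024) -> float:
    """Toy fixed-block deduplication: bytes in divided by unique bytes kept."""
    seen: set[bytes] = set()
    unique_bytes = 0
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        digest = hashlib.sha256(block).digest()
        if digest not in seen:          # keep only blocks not seen before
            seen.add(digest)
            unique_bytes += len(block)
    return len(data) / unique_bytes if unique_bytes else 1.0

# Example: highly repetitive data dedupes well, unique data does not
print(dedupe_ratio(b"A" * (8 * 1024 * 1024)))   # repeated content -> 8.0x
```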

Q: I've been told that Veeam deduplication is inefficient because it uses large block sizes.
A: Veeam deduplication is designed to work alongside compression, so you should be looking at the overall data reduction ratio instead. A large block size allows for both higher processing performance and a much better compression ratio for individual blocks. This allows Veeam to achieve the same data reduction factors with much smaller processing overhead. As you can see from this research by EMC, when deduplication is coupled with compression, the overall data reduction ratio remains about the same as the block size increases. This is because compression algorithms benefit from having more data in the block to work with.

Q: What about ReFS / XFS block cloning versus deduplication appliances?
A: Customers want to save disk space. We suggest saving money and getting better performance instead.

When deduplication appliance vendors argue with 1:15 / 1:20 or even higher values, that is mostly because of the (required) full backups. Veeam stores backups uncompressed on dedupe appliances, which also inflates the reported deduplication values.

A customer shared the values of his 12-month GFS backup copy job in the forums. Doing the math:

15.4 TB used on disk
99 TB total backup file size
6.4x data reduction factor based on block cloning
12.8x data reduction factor when compared with dedupe boxes, where we store data uncompressed (2x on average)

So we can see roughly a 1:13 ratio for free with ReFS or XFS while maintaining fast backup & restore speed. Chances are low that a deduplication appliance pays off, except for cases where "active full" backups are a requirement.
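
The arithmetic behind those numbers, as a quick back-of-the-envelope check:

```
# Customer-reported values from the forum example above
used_on_disk_tb = 15.4      # physical space consumed thanks to block cloning
backup_size_tb = 99.0       # total size of the backup files in the GFS chain

block_clone_factor = round(backup_size_tb / used_on_disk_tb, 1)   # -> 6.4
# Dedupe appliances receive the data uncompressed (~2x larger on average),
# so the comparable reduction factor roughly doubles
dedupe_box_equivalent = round(block_clone_factor * 2, 1)          # -> 12.8

print(block_clone_factor, dedupe_box_equivalent)
```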


Q: My VMs are not made from the same template. Will dedupe work between them?
A: Yes. Because deduplication is done on block level, it does not matter if VMs were made from the same template, or provisioned manually. Any similar blocks between VMs will be deduped, even if these VMs have different operating systems.

Q: Does deduplication work for replication, or for backup only?
A: Because replicas are created in native format (uncompressed, directly on the hypervisor datastore), deduplication is not applicable to them.

Q: Since Veeam has its own deduplication, does it make any sense to write Veeam backup files to storage device with hardware deduplication?
A: Yes, this way you will get global deduplication (between backup files produced by different backup jobs). Generally speaking, deduplicating storage devices are the best choice for long-term archival of backup files produced by Veeam. Most deduplicating storage devices are not a good primary backup target, because unlike raw disks, these devices are not designed to provide good IOPS, and may become the primary bottleneck for your backup performance, thus affecting your backup window. Likewise, the poor random read I/O performance that certain deduplicating storage devices exhibit may affect restore performance.

Q: Have you done integration testing with other vendors' deduplicating appliances such as EMC Data Domain, HPE StoreOnce, ExaGrid, Quantum, etc.?
A: Yes, we have partnerships with most deduplicating storage vendors. Moreover, there have been noticeable performance improvements for these integrations over many versions.

Q: Should I disable built-in deduplication if I am backing up to a deduplicating appliance?
A: If you use a Veeam-integrated deduplication device (EMC Data Domain with DD Boost, ExaGrid, HPE StoreOnce with Catalyst, Quantum DXi), the settings are adjusted automatically according to best practices. For other deduplication devices or protocols, please refer to the job and repository settings in the best practice guide.

Built-in WAN Acceleration (v12)

Post by Gostev »

Q: What is the expected bandwidth saving?
A: Data reduction ratios of up to 50x and more of the original size can be achieved, but this depends on many factors, such as the similarity of the content. An approximate savings ratio of 10x can be assumed.

Q: Does it make sense to use the WAN accelerator with more than 100 Mbit/s bandwidth available?
A: Yes, if you use the "High bandwidth mode". In the standard mode you will find that you do not save transfer time with 50-100 Mbit/s or more bandwidth, but you can still achieve bandwidth savings by using the WAN accelerator.

Q: Which cache size should be configured at the target WAN accelerator?
A: It is recommended to configure 10 GB of cache for each operating system (Windows 2012, 2012 R2, 2016, ...) processed by the WAN accelerator. All Linux distributions count as one operating system.

Q: Which cache size should be configured at the source WAN accelerator?
A: The global cache on the source WAN accelerator is not used, but it must exist as a number (5 GB minimum). Keep in mind that the source WAN accelerator requires ~20 GB of disk space per 1 TB of source data to store digests of the data blocks of source VM disks. Disk space consumption is dynamic and changes as unique VMs are added to (or removed from) jobs with WAN acceleration enabled.
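
A small sizing sketch based on the two rules of thumb above (10 GB of target cache per protected operating system, ~20 GB of source digest space per 1 TB of source data); the example inputs are hypothetical:

```
def wan_accelerator_sizing(os_count: int, source_tb: float) -> dict[str, float]:
    """Rough disk sizing for built-in WAN acceleration.

    Rules of thumb from this FAQ:
      - target global cache: 10 GB per operating system processed
        (all Linux distributions together count as one OS)
      - source accelerator digests: ~20 GB per 1 TB of source data
        (the source global cache itself only needs the 5 GB minimum)
    """
    return {
        "target_cache_gb": 10 * os_count,
        "source_digests_gb": 20 * source_tb,
        "source_cache_gb_minimum": 5,
    }

# Example: Windows 2012 R2 + 2016 + Linux (3 OS types), 8 TB of source data
print(wan_accelerator_sizing(os_count=3, source_tb=8))
# -> {'target_cache_gb': 30, 'source_digests_gb': 160, 'source_cache_gb_minimum': 5}
```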

Application-Aware Image Processing (v12)

Post by Gostev »

Q: What exactly do you mean by application-aware image processing?
A: Application-aware image processing is a unique Veeam technology that allows creating image-level backups in an application-aware manner. It is a multi-step process that consists of: detecting applications running inside the processed VM; using Microsoft VSS to perform application-level quiescence and ensure that each application's state is transactionally consistent; applying application-specific settings to prepare each application for a VSS-aware restore on the next VM startup; and finally pruning transaction logs for certain applications if the backup was successful. The whole process is fully automated.

Q: Why is the application-aware image processing functionality in Veeam important? How is it better than the VMware Tools VSS integration?
A: Microsoft VSS was not designed with image-level backup and restore in mind, but rather for file-level backup and restore. For some applications, on top of basic VSS quiescence, additional steps need to be taken when backing up and restoring the VM image as a whole.

Q: Do I need to deploy a persistent agent in every VM that I am backing up in order to use application-aware image processing?
A: No. By default, Veeam is agentless: it automatically deploys a small runtime coordination process into each VM when the backup starts, and removes it immediately after the backup finishes. This frees you from agent micromanagement (deployment, configuration, updates, monitoring, troubleshooting), and the VM runs without any 3rd party component present most of the time. Veeam can optionally use a persistent guest agent inside VMs to reduce TCP/IP port requirements.

Q: Does Veeam install its own VSS provider on each guest?
A: No, we leverage default VSS provider from Microsoft that is already available on each Windows guest.

Q: What is Microsoft VSS and how can it provide transaction consistency with image-level backups?
A: Please read the following beginners guide to Microsoft VSS: What is Windows VSS & why you should care

Q: Which applications do you support for transaction-consistent backups?
A: Almost all VSS-aware applications running on Windows Vista / Server 2008 or later. All modern server applications from Microsoft are VSS-aware, and many 3rd party vendors ship their server applications with VSS writers as well. For the official list of applications with a supported Veeam Explorer integration, please consult the system requirements in our user guide.

Q: How do I know if my application is VSS-aware?
A: It should implement a VSS writer and have it installed and registered in the Microsoft VSS framework. Open a command prompt on the backed-up VM and run vssadmin list writers for the complete list of VSS-aware applications on that system.
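
If you want to collect that list programmatically (for example, across many guests), here is a minimal Python sketch that wraps the same vssadmin command; the parsing assumes the usual `Writer name: '...'` output lines, so treat it as an illustration only:

```
import re
import subprocess

def list_vss_writers() -> list[str]:
    """Return VSS writer names reported by 'vssadmin list writers'.

    Windows only; run from an elevated prompt.
    """
    output = subprocess.run(
        ["vssadmin", "list", "writers"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Assumes lines of the form: Writer name: 'Microsoft Exchange Writer'
    return re.findall(r"Writer name: '([^']+)'", output)

if __name__ == "__main__":
    for writer in list_vss_writers():
        print(writer)
```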

Q: Do you know if Oracle has VSS writer?
A: Yes, Oracle 11g has a component named "Oracle VSS Writer" that is installed when selecting Windows OCI Components in the Oracle 11g database install wizard. It is also supported on Oracle 10g starting from patchset 10.2.0.3, and later versions (12c, 18c, 19c, 21c) include a VSS writer as well.

File Level Recovery (v12)

Post by Gostev »

Windows

Q: Does file level recovery (FLR) require that backup file content is extracted and staged on disk or in RAM before recovery can take place?
A: No, file-level recovery happens directly from backup file, without prior extraction.

Q: Do you have to back up VMs with guest file system indexing enabled to be able to do file level recovery?
A: No. Unlike with other solutions, with Veeam indexing is completely optional and is not a requirement for file level recovery. However, indexing enables you to additionally perform 1-Click File Restore through web UI (see below).

Q: How does native Windows file level recovery work?
A: Native Windows file level recovery mounts the content of backup file directly to backup server as folders. You can even point any applications to files located in these folders, and use them normally (backup file remains read-only no matter what you do).

Q: Can you restore files with correct permissions?
A: Yes, restoring to the original location will preserve file and folder permissions. When copying files and folders to a new location, you can decide whether to preserve permissions or inherit them from the new location.

Q: Can you only restore file permissions?
A: Yes, our backup browser allows you to restore only the permissions of the selected objects.

Q: Can I only restore changed files and folders?
A: Yes. Our backup browser allows you to compare your selected restore point with the production server and only restore objects which were changed or deleted since the backup was run.

Q: What file systems are supported for instant file-level recovery?
A: FAT, NTFS and ReFS.

Q: Are GPT disks and dynamic disks supported?
A: Yes.

Other OS

Q: How does multi-OS file level recovery work?
A: Because Windows cannot read other file systems natively, we implemented two options:
  • A Linux machine that is registered under Managed Servers in the Backup Infrastructure view can be used as a helper host. This is the "normal" way to do it. The virtual disk files of the VM you are restoring from are mounted to that host directly from the backup file (without prior extraction).
  • A Veeam FLR helper appliance running a stripped-down Linux that can read data from as many as 17 file systems.

Q: Can you restore files with correct permissions?
A: Yes, this option is available in the multi-OS file-level restore wizard when restoring directly to a Linux host.

Q: Which operating systems are supported for instant file-level recovery?
A: We support the 17 most commonly used file systems from the Windows, Linux, Solaris, BSD, Unix, Novell and Mac operating systems.

Q: Do you support instant file level recovery from NSS volumes on Novell / Micro Focus OES?
A: Yes. See the full list of supported file systems in the release notes or user guide

1-Click File Restore (Enterprise Manager web UI)

Q: My 1-Click File Restore buttons (Restore and Download) are disabled, even though guest file system indexing is enabled on all jobs?
A: This is premium functionality that is only available in the Enterprise Edition of our product. If you have Standard Edition, use the file-level restore wizards in the B&R console instead.

Q: How does 1-Click File Restore work?
A: An Enterprise Manager web UI user picks one or more guest files to restore by browsing or searching the guest file system index of the backed-up VMs (indexing is explained below). Enterprise Manager then creates a task on the backup server, and the backup server restores the file using its native file-level restore capabilities (see above).

Q: Do I need to install any agents on the guest to be able to restore files to the original location?
A: No, 1-Click File restore is agentless.

Q: Do you have to backup VMs with guest file system indexing enabled in order to enable 1-Click File Restore?
A: Yes, 1-Click file restore requires that guest file system is indexed during the backup.

Q: Does the 1-Click File Restore process preserve the original file, or simply overwrite it?
A: The original file is preserved with the _original suffix.

Q: What if the original file is locked by some process, and cannot be renamed?
A: In this case, we restore the file with the _restored suffix, and log a warning to notify the restore operator.

Q: How do I make someone the File Restore Operator?
A: Using the Configuration page of Enterprise Manager, grant the user the corresponding role. The user will then be able to log on to the Enterprise Manager web UI. File Restore Operators can only see a subset of the web UI (specifically, the Files tab only).

Q: Does the File Restore Operator need to have permissions on restored file, guest, VM, or host to be able to perform the in-place restore?
A: No.

Q: Can I restrict File Restore Operators to restoring specific file types only? Disable the ability to download the restored files? Restrict them to certain virtual machines only?
A: Yes, these settings are available in the Enterprise Manager configuration.

Q: In case of in-place restore (back to original location), do you preserve files permissions and ownership?
A: Yes.

Q: What are the system requirements for 1-Click File Restore?
A: Same as for Windows file level restore (see above), since both are using the same engine.

Q: Is 1-Click File Restore supported for OS other than Windows?
A: Same as for Windows file level restore (see above), since both are using the same engine.

Guest File System Indexing and Search

Q: Do I need to deploy any agents inside of each VM to be able to index guest file system?
A: No agents are required. All you need to do is select the corresponding check box in the backup job wizard, and specify the administrator's credentials to your VMs.

Q: Will turning on indexing slow down my backups significantly?
A: Usually not. Instead of scanning through the whole file system, we capture index data directly from the NTFS MFT as part of the guest OS freeze process. For a typical VM, the required data is captured and parsed nearly instantly, which is why we call this Instant Indexing.

Q: Is the Instant Indexing feature only available with Veeam Enterprise Manager?
A: You do not need Enterprise Manager to create the local catalog, but you have to install Enterprise Manager server to be able to browse and search for guest files in VM backups, and maintain global catalog across multiple backup servers. Also, please see the Standard vs. Enterprise Edition comparison document on product page under Resources tab for more information about slight differences in Instant Indexing feature set depending on your Veeam Backup license level.

Q: I have more than one Backup server. Will the guest file search show results across all Backup servers?
A: Yes, as long as all your Backup servers are federated in the Enterprise Manager.

Q: Where does the index database reside?
A: Local catalog is stored directly on the Veeam Backup server, in the location specified during setup. Global catalog (across all backup servers) is located on the Enterprise Manager server. Additionally, index is also stored in the backup file itself (and so it is immediately available for all imported backups).

Q: What do I need to back up to protect the index database?
A: You need to back up the Enterprise Manager server, since this is what holds the global catalog (across all backup servers). Backing up the local catalog data on Veeam Backup servers is not required, since any new index data that appears there is automatically (and incrementally) replicated to the global catalog.

Q: It looks like guest file index is missing some files?
A: By default, we do not index Windows system and temp folders to reduce the index size.

Instant VM Recovery (v12)

Post by Gostev »

Q: What is Instant VM Recovery?
A: Instant VM Recovery allows you to instantly recover any VM backup into your production environment by running it directly from the backup file. The best analogy is that it gives you a "spare tire" so that you can get to a service shop: you cannot go at full speed, but you are still moving instead of being stuck in the woods. To complete the restore, you can use native hypervisor capabilities to migrate the recovered VM to production storage without any impact on users (this is like changing the spare tire to a real one as you go). Alternatively, you can move the VM to production storage during off-hours with a short downtime using Veeam Quick Migration (this is like pulling over at a service shop to change the tire). Note that Instant VM Recovery supports bulk operations (multiple VMs at once).

Q: Can I use Instant Recovery for my Agent backups?
A: Yes. You can use Instant Recovery to restore your Agent backups as vSphere, Hyper-V or Nutanix AHV virtual machines. You can use this feature if you want to virtualize your physical machines.

Q: As a percentage, what's the difference in performance when running a VM that has been replicated versus one that's running from a backup file?
A: It depends on many factors (backup file location, storage speed of the Veeam Backup server, number of concurrently running instantly recovered VMs). Generally, for low I/O servers (such as DC, DNS, DHCP, WWW, AV, PRINT) the performance difference will be hardly noticeable. The impact on high I/O servers will be much more noticeable.

Q: Our hypervisor license does not include migration capabilities, or migration does not work. What are my options to complete the restore?
A: Simply perform the failover during the next maintenance window using the Quick Migration functionality in Veeam. Unlike hypervisor-based migration, this approach requires a short downtime. However, it is still beneficial, as it allows you to convert unplanned downtime (which is what costs businesses money) into planned downtime during your regular maintenance windows.

Q: Will Storage VMotion or Quick Migration carry over the actual, latest VM disks state (including delta from backup state)?
A: Yes, this happens automatically.

Q: What happens if Veeam Backup server fails when you have instantly recovered VMs running?
A: Just what happens when your production storage fails - nothing pretty. Keep in mind that no data is lost as long as the backup server comes up again later with no data corruption. The chance of consecutive failures of 2 different storage devices is pretty low, to say the least - it is like getting a hole in your newly fitted spare tire...

Q: What happens if there is no space left in the vPower NFS write cache location?
A: The VM will probably crash. Do not change anything (especially do not stop the Instant VM Recovery session) and open a support case. Support might be able to help if you can free up space.

Q: With the instant VM recovery feature present, why would you replicate?
A: Two big reasons: no dependency on the vPower engine to run the VM, plus full disk I/O performance in case of disaster (important for large-scale disasters).

Q: Will Instant VM Recovery work with RDM that have been backed up?
A: Yes. RDM in virtual mode is backed up as a VMDK and is available directly in the backup file. RDM in physical mode is skipped during backup; however, there is nothing preventing instantly recovered VMs from connecting to and using it (if it is not impacted by the disaster, of course).

Manual Recovery Verification

Q: I have Veeam B&R Standard Edition. How can I do "manual" SureBackup-style recovery verification? What is the process?
A: To perform manual recovery verification, use the Instant VM Recovery feature. For example, for a simple VM boot-up test, just go through the Instant VM Recovery wizard and power on the VM, but do not select the checkbox to connect the VM to a network.

Q: I performed the boot-up test as described above, and while the VM booted fine, most applications could not start?
A: This is expected because there is no network connectivity, so applications cannot establish connections to the domain controller, DNS servers, and other services they depend on. If you want to test application recovery, create an isolated network and edit the instantly recovered VM's network configuration before powering it on. Do this for all required VMs which run dependent applications (such as DC and DNS), placing them all on the same isolated network, and start them in the correct order (for example, DNS > DC > Application).

Q: This sounds very similar to what we are already doing during our monthly/quarterly/annual DR tests. What is the catch?
A: The catch is that with Veeam, you are able to run VMs directly from backup files, without spending many hours extracting all those VMs from backup to production storage. Even finding free disk space alone (to extract all required backups) often becomes a challenge. So a restore test that would previously take a whole weekend can now be completed in less than 30 minutes.

Q: This manual process sounds too complex to perform for every backup, every VM, every time, as your marketing materials state?
A: That is right, which is why our VUL licensing (or Socket licensing at Enterprise edition or higher) provides fully automated recovery verification that performs all of the tasks described above automatically, including running the required test scripts against each VM. It even creates and manages the isolated test environment for you. This allows you to perform a DR test every single day, with reviewing the email report of recovery verification job results being the only manual activity in the whole process.

Virtual Lab (v12)

Post by Gostev »

Q: Do I need to create virtual lab for Instant VM Recovery?
A: No. Instant VM Recovery feature is available even in the Standard Edition, which does not provide ability to create virtual labs.

Q: What exactly is this "virtual lab"?
A: By virtual lab we mean an automatically managed, fully isolated environment where VMs can be run directly from backup files to facilitate features such as Universal Application-Item Recovery (U-AIR), SureBackup (recovery verification) and On-Demand Sandbox. The virtual lab uses isolated virtual networks that mirror production networks, and a proxy appliance for routing between production and isolated networks, and between isolated networks. Each virtual lab places all temporary VMs in a designated folder and resource pool. You can use the resource pool to control resource usage of the virtual lab VMs.

Q: How does the proxy appliance work?
A: The proxy appliance routes traffic between computers in the production network and temporary VMs running from backup in the isolated network. Think of the proxy appliance as your home router, which routes traffic between your home network and the internet.

Q: Do you change the temporary VMs' IP addresses to prevent IP conflicts with VMs which are already running in production?
A: No. In fact, all temporary VMs in the isolated network have exactly the same IP addresses as in the production network. IP address conflicts are simply not possible, as different VLANs are used for the production and isolated networks.

Q: How is it possible to access temporary VMs in the isolated network from production network, if VMs in both networks have the same IP addresses?
A: Each temporary VM is assigned a so-called "masquerade address" from the selected masquerade network (part of the virtual lab settings). The routing table on the Veeam Backup server is automatically updated, and the proxy appliance's IP address in the production network is set as the gateway for the masquerade network. Acting as a gateway, the proxy appliance performs address translation and substitutes the masquerade IP address with the real IP address in the isolated network. Although this sounds pretty complex, it all happens transparently for you as a user.

Q: What if I want all computers on the network to be able to access those temporary VMs running in the virtual lab?
A: You should assign the proxy appliance a static IP address in the virtual lab settings, and update your production router settings to forward all requests destined for the masquerade network (as configured in the virtual lab settings) to the proxy appliance's IP address. Alternatively, if you only need to access select VMs in the isolated network, you can use the virtual lab's Static Address Mapping feature and point specific IP addresses in the production network to selected IP addresses in the isolated network. The proxy appliance will grab the specified production IP addresses for its production network interface and will take care of the routing automatically.

Q: Is it possible to enable internet access from within the virtual lab?
A: Yes, you will see the corresponding settings on the Proxy step of the Virtual Lab wizard.

Application Groups

Q: What is an Application Group?
A: An Application Group is our way of handling application dependencies for VMs running in the isolated environment. The simplest example is a Microsoft Exchange server: if you power it on in an isolated environment which does not have a DNS server and a Domain Controller present, the mailbox store will not start.

Q: Can you give an example of what a typical Application Group looks like for a small Windows shop?
A: Any application group should contain at least a DNS server for name resolution and a directory server for authentication. In the Windows world and in smaller environments, both services are typically provided by the Domain Controller, so application groups may look like these (put the DNS server before the DC if your DNS server is separate):
Exchange: DC > Exchange
FTP Server: DC > File Server > IIS
SharePoint: DC > SQL > SharePoint

Q: Can you give an example of what a typical Application Group looks like for a small Linux shop with no directory services in use?
A: Any application group should contain at least a DNS server for name resolution. Application groups may look like these:
CMS: DNS > MySQL > Apache w/CMS code
CRM: DNS > Oracle > CRM Server

Q: I have a pretty static and small environment with just a few VMs. How should I configure application groups?
A: Simply put all of your VMs into a few application groups, keeping in mind the required boot order. You can create one group per application, have more than one application per group, or even put all of them in a single group.

Q: I have a large and dynamic environment with many VMs created and deleted daily. Micro-managing application groups is hardly possible.
A: We have thought of that. In this case, you should set up your application groups to contain essential infrastructure services only (for example, DNS and a DC are something almost every application in a Windows shop depends on). SureBackup jobs then provide the capability to "link" such an application group with one or more of your backup jobs. With this setup, SureBackup first starts the application group VMs and leaves them running for the duration of the job, then proceeds to power on the VMs from the linked backup jobs one by one for verification. As a result, as you add or remove VMs from the environment, they are automatically added to the backup jobs (provided your backup jobs are set up on a container basis), which in turn makes them processed as part of the SureBackup job without requiring you to edit its settings.

On-Demand Sandbox

Q: How do I set up a sandbox?
A: Create a new application group and fill it with the VMs you want to be available in the sandbox. Create a new SureBackup job, select the newly created application group, and select the "Keep VMs running" check box. Now, simply run the SureBackup job. As soon as all VMs in the job start, your sandbox is ready! You can open the VM console of any VM running in the sandbox and do whatever you need to do, or simply connect to the applications running in the sandbox with the same native management tools you use to manage those applications in your production environment.

Q: When I start a SureBackup job, it always runs using the latest backups - but I need to go back 1 week. How?
A: To start SureBackup to a restore point other than the latest backup, right-click the job, select "Start to" in the shortcut menu, and pick the desired date and time to start the job to.

Q: I can ping and access sandbox VMs running in the isolated environment by masquerade IP address from the backup server, but not from any other computer. Why?
A: This is because the routing table was updated automatically on the Veeam backup server as part of the SureBackup job. If you execute route print, you will see one of the routes pointing to the proxy appliance IP address in the production network. So, to make other computers able to access the sandbox VMs, you should either update their local routing tables (note that the Universal AIR wizard does this automatically), or configure the router in your environment accordingly to make this work for all computers at once (however, it might be easier to use the Static IP Mapping feature of the virtual lab instead).
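
To illustrate what to look for, here is roughly how that automatically added route appears on the backup server. All addresses are illustrative: 192.168.100.0/24 is the masquerade network, 10.0.0.50 is the proxy appliance IP address in the production network, and 10.0.0.21 is the backup server's own interface.

Code: Select all

C:\Users\Administrator>route print -4 | findstr 192.168.100
      192.168.100.0    255.255.255.0        10.0.0.50        10.0.0.21     26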

Application-Owner or Developer-Directed Recovery

Q: How do I enable developers or engineers to restore data from an application that cannot be restored with file-level restore or the Veeam Explorers?
A: Essentially, you need to create an always-running SureBackup job with the application in question, and provide a convenient DNS name for it.
1. Create a new application group with the VMs running the required application and its dependencies.
2. Create a new SureBackup job with the "Keep application group running" option selected, and schedule it to run after each backup.
3. Open the settings of the Virtual Lab used by the SureBackup job, and use the Static IP Mapping feature to map the isolated IP address of the required VM to some unused IP address in the production network.
4. Update your production DNS and assign a simple DNS name to the IP address chosen earlier, for example "exchange-yesterday" (see the sketch below).
With that in place, any user in the production environment will be able to access the server running from the latest backup in the isolated environment by this DNS name. Any VMs published in such a way will keep running until the next backup job starts. The next run of the backup job will stop any linked SureBackup jobs and perform the backup. Then, the SureBackup job will start again using the newly created latest backup file according to its schedule, and again run for the whole day until the next backup. Effectively, you will have a copy of the application running from last night's backup always available to users behind an easy-to-remember DNS name, enabling them to log on to a familiar user interface and recover any data they need. This system requires no maintenance whatsoever.
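
For step 4, assuming the production DNS is hosted on a Windows DNS server and the DNS management tools are available, the record could be registered with dnscmd as in the minimal sketch below. The server name (dns01), zone (corp.local) and the production IP address chosen for static mapping (10.0.0.80) are all purely illustrative.

Code: Select all

C:\Users\Administrator>dnscmd dns01 /RecordAdd corp.local exchange-yesterday A 10.0.0.80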

Q: What are some use cases for this?
A: Just a few ideas, I am sure you will find many more uses - please let me know!
1. After unintended changes on a production PostgreSQL server, you want to check out the differences between the latest backup and production.
2. Development shops. Publish yesterday's copy of a MySQL database, and let your developers test new code, look up the previous state of any values, compare database schemas, and so on. Every day, the server behind the selected DNS name will run the latest copy of the MySQL database from last night's backup, making it extremely simple and convenient to access, with the system requiring zero maintenance. Any changes made to the database will be discarded once the SureBackup job running the VM is stopped.
Gostev
Chief Product Officer
Posts: 31428
Liked: 6633 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Data Labs Self-Service and U-AIR Wizard (Universal Application Item Recovery) (v12)

Post by Gostev »

Q: How does it work from the developer or application owner perspective?
A: User perspective:
1. Install the AIR wizard on the user's workstation (it is located on the VBR ISO).
2. Right-click the Virtual Lab Manager tray icon to create a new virtual lab request.
3. Specify a description, the estimated time needed, the required VM and restore point, and submit the request.
4. Wait until the request is approved by the backup administrator and the lab is started.
5. Continue in the AIR wizard and do whatever you like (test an update, do a restore that is not possible with Veeam Explorers or file-level restore, etc.).
6. If you need more time, just extend the virtual lab lease.
7. When done, dismiss the lab (if you forget, it will expire automatically).


Q: How does it work from the backup administrator perspective?
A: Administrator perspective:
1. Receive an email notification about the new lab request.
2. Open Enterprise Manager and decide whether to approve or deny the lab request.
3. Go through the request approval wizard. If necessary, adjust the request settings (such as the lab lease time).
4. Manage active labs if needed. For example, stop a lab used by developers to let somebody else perform an emergency restore using the same virtual lab.
5. There is no need to babysit labs, as they expire automatically after the requested time passes.


Q: How does it work under the hood?
A: All operations described below are fully automated and hidden from users:
1. The AIR wizard generates a lab request (the Virtual Lab must already exist) and passes it to the Virtual Lab Manager (VLM), which sends it on to Enterprise Manager.
2. The request is approved by the admin, who selects the SureBackup job to use as part of the approval process.
3. Enterprise Manager automatically locates the Veeam backup server for the selected SureBackup job and has it run the SureBackup job for the required VM only.
4. Once all dependent VMs and the selected VM are running and ready (ready meaning recovery verification was successful), the Veeam backup server notifies Enterprise Manager.
5. Enterprise Manager notifies the requesting VLM installation and provides the network parameters for the lab (the proxy appliance IP address and the masquerade IP address of the requested VM).
6. The VLM updates the routing table on the local machine and notifies the user that everything is ready with a popup notification.
7. The user can now proceed through the AIR wizard to perform application item recovery.
8. The lab expires automatically as the requested time passes (unless the user extends the lease, or dismisses the lab earlier on request).

Q: How this "universal recovery" is supposed to work?
A: Once the lab is prepared, Universal AIR wizard will provide masquerade IP address of requested VM in the isolated environment, and update routing on current computer automatically to enable transparent access. You can then use native management tools to extract required items from the application and put them back to production server. For example, you can use free Oracle SQL Developer to perform item-level recovery from Oracle database, or Microsoft SQL Management Studio to perform item-level recovery from Microsoft SQL database, or MySQL Workbench to perform item-level recovery from MySQL database, etc. - any application!
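
As a simple illustration, if the published VM is a Microsoft SQL Server and the wizard reports a masquerade address of 192.168.100.12 (an illustrative value), any standard client can be pointed at it just like at a production server, for example the sqlcmd utility:

Code: Select all

C:\Users\Administrator>sqlcmd -S 192.168.100.12 -U sa -Q "SELECT name FROM sys.databases"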

Q: I have never dealt with databases before. Is there a demo that can help me better understand the concept, so I can explain this to my colleagues?
A: This video shows (beginning at 28:10) an example where the presenter connects SQL Server Management Studio to the production SQL database and, in parallel, to the Data Lab / Universal Restore SQL database. (The video is quite old, but the concept is still the same today.)

Q: Do I have to install the AIR wizard on the Veeam backup server?
A: No, the AIR wizard is usually only needed on the workstations of the people who make the requests. For example, you can install it on developers' workstations.
Gostev
Chief Product Officer
Posts: 31428
Liked: 6633 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Advanced (v12)

Post by Gostev »

Bottleneck Analysis

Q: Job statistics tell me I have a bottleneck, and I cannot seem to get rid of it no matter what I do. What am I doing wrong?
A: You are doing nothing wrong. Even the most powerful backup infrastructure will have a bottleneck - just like any bottle, no matter its size, has a bottleneck. We merely show you the "weakest" (thinnest) link of the chain - what to consider upgrading next to improve processing performance. However, if you are happy with your job performance and backup window, you do not need to do anything about it - consider it an FYI.

Q: What do the 4 load numbers I get in the per-VM statistics mean?
A: These numbers show the percentage of time the given data processing stage was busy versus waiting for other stages to provide or accept data to process. Do not expect the numbers across all processing stages to add up to 100%, as the busy time of each processing stage is measured separately and independently.

Q: What are the data processing stages?
A: No matter what job you are running and how you have the product deployed, there are 4 main data processing stages that data passes through in a specific order (think of a data processing conveyor). These are Source > Proxy > Network > Target, and each processing stage has a load monitoring counter associated with it.

"Source" is the source (production) storage disk reader component. The percent busy number for this component indicates percent of time that the source disk reader spent reading the data from the storage. For example, 99% busy means that the disk reader spent all of the time reading the data, because the following stages are always ready to accept more data for processing. This means that source data retrieval speed is the bottleneck for the whole data processing conveyor. As opposed to that, 1% busy means that source disk reader only spent 1% of time actually reading the data (because required data blocks were retrieved very fast), and did nothing the rest of the time, just waiting for the following stages to be able to accept more data for processing (which means that the bottleneck is elsewhere in the data processing conveyor).

"Proxy" is the backup proxy server (source backup proxy in case of replication). Proxy performs on-the-fly deduplication and compression of data received from the source component, which can be quite resource intensive operation on hundreds MB/s data streams. The percent busy number for proxy component shows the proxy CPU load. For example, if proxy shows 99% busy, it means that the proxy CPU is overloaded, and is likely presenting a bottleneck on the whole data processing conveyor.

"Network" is the network queue writer component. This component gets processed data from the proxy component, and sends it over a network to the target component (e.g. the repository). The percent busy number for the network component shows percent of time that network writer component was busy writing the data into the network stack queue. For example, 99% busy means that the network writer component spends most of the time pushing pending data into the network, because there is always some previous data still waiting to be sent over to the target. This means that your network throughput is insufficient and is presenting a bottleneck on the whole data processing conveyor.

"Target" is the target (backup/replica storage) disk writer component. Percent busy number for the target component shows percent of time that the target disk writer component spent writing the data to the storage. For example, if target shows 99% busy, it means that the target disk writer component spent most of its time performing I/O to backup files. This means your target storage speed is presenting a bottleneck for the whole data processing conveyor. All the pending I/O operations cannot complete fast enough (the storage fabric could also be the limit), and due to that there is always some data waiting in the incoming queue of the network component that is waiting to be written to disk.

Q: Can I see the load numbers in real time?
A: If you hover over the bottleneck value in the real-time statistics window, you will get a tooltip with the immediate value. However, because this data is real-time, it may be affected by intermittent issues or temporary conditions (such as file system cache population at the beginning of the job). On the other hand, the average load data logged at the end of each job session for each VM can be harder to interpret due to averaging across the entire job run.
HannesK
Product Manager
Posts: 14252
Liked: 2858 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Tape (v12)

Post by HannesK »

Tape

Q: Which tape autoloaders / libraries does Veeam support?
A: Veeam supports LTO-compatible devices (physical and virtual). A list of tested libraries and autoloaders can be found in our Veeam Ready database. You can also find a list of libraries and autoloaders which were reported to work by community members in the tape forum.

Q: Is path failover supported?
A: Veeam Backup & Replication supports path failover for tape devices with multiple drives that manage multiple paths over multiple SANs.

Q: Which device driver should I use?
A: Veeam Backup & Replication supports custom vendor drivers as well as generic Windows drivers. However, it is recommended to install vendor drivers in non-exclusive mode.

Q: Is WORM media supported?
A: Yes, since version 9.5 update 4.

Q: Can Veeam share one library with 3rd party tape solutions?
A: Yes, if the library supports partitioning. With partitioning, Veeam can only see its own part of the library, and the same applies to the 3rd party tape solution.

Q: Can I install another backup product that also writes to the same tape (library) on the Veeam server?
A: No, this is not possible.
HannesK
Product Manager
Posts: 14252
Liked: 2858 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Object Storage (v12)

Post by HannesK » 2 people like this post

Object Storage

General

Q: How can I use object storage in Veeam Backup & Replication?
A: V11a and earlier versions of Veeam Backup & Replication can use object storage as the Capacity and Archive Tier of a Scale-Out Backup Repository. In Veeam Backup & Replication v12, backup and backup copy jobs can also be targeted directly at object storage. Additionally, you can use object storage as performance tier extents.

Q: Which object storage is supported?
A: We support Amazon S3 (including S3 compatible storage), Azure Blob, Google Cloud and IBM Cloud object storage. More information can be found in our official Veeam Ready and Veeam Integrated databases, or in the unofficial compatibility list.

Q: How much object storage disk space will I need?
A: Disk space usage in object storage is similar to disk space consumption with ReFS / XFS, so you can use the Veeam Backup Capacity Calculator with the ReFS/XFS checkbox enabled for estimation. If you need to calculate disk space usage for the capacity tier, enable the "Capacity tier enabled" checkbox as well.

Q: How can I set an HTTPS proxy for communication with object storage?
A: Please use the netsh command as described on this Microsoft website.

Example:

Code: Select all

C:\Users\Administrator>netsh winhttp set proxy 172.16.27.57:8888

Current WinHTTP proxy settings:

  Proxy Server(s) : 172.16.27.57:8888
  Bypass List     : (none)


C:\Users\Administrator>netsh winhttp reset proxy

Current WinHTTP proxy settings:

  Direct access (no proxy server).

Q: How much will it cost?
A: Veeam does not charge for the amount of data that is copied or moved to object storage.

For the cloud provider costs, that question is hard to answer. Different cloud providers have different pricing models: some providers charge for API calls (PUT/GET operations) while others do not, and most (but not all) providers charge for egress network traffic (i.e. restores).

For API calls, you can expect around 1 call per 1 MB of source data. CAUTION: this applies to the default backup job setting "Local target" under Storage Optimization in the Advanced Settings.
Some cloud providers have lower-cost tiers that charge more for API calls, early access and outgoing traffic. Using this kind of cloud storage for daily backups can increase costs because of these extra fees.

Please check out your cloud provider pricing model for more details.

Q: My API costs are higher than expected. Why is that?
A: In most cases it is because the default backup job setting "Local target" under Storage Optimization in the Advanced Settings was changed to "WAN target", which results in 4x higher API costs.
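
As a rough, illustrative calculation based on the numbers above: 1 TB of changed source data translates into roughly 1,000,000 PUT requests with the default "Local target" setting, but roughly 4,000,000 PUT requests with "WAN target" - so on providers that bill per API call, the same incremental upload becomes about four times more expensive.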

Q: I want to use immutable object storage. Do I need to upload full backups again?
A: Yes, if you currently have non-immutable object storage in use. Converting an existing storage account/bucket to an immutable one is not supported; immutability must be set when you configure the storage account/bucket for the first time.
After the initial backup has been transferred to object storage, only incremental data will be transferred.

Q: How does immutability increase costs?
A: There are two things to keep in mind:
1) There is an overhead for storing the data needed to guarantee immutability for each restore point. As we store data in an "incremental forever" way, this is already cost-optimized.
2) If the cloud provider charges for API calls (e.g. Amazon), you can estimate additional API costs of roughly one full backup every 10 days. These 10 days are also already cost-optimized versus storage costs; adjusting them with a registry key will likely lead to additional costs.


Direct to Object Storage

Q: Are regular full backups required with object storage?
A: No. The default setting for backup and backup copy jobs to object storage based repositories is forever incremental.

Q: What happens if I run a full backup?
A: Please be aware that an active full backup will consume its full size in disk space on object storage. Synthetic full backups are not available for these jobs.

Q: How does GFS work if there is no regular full?
A: Restore Points will be marked as GFS (weekly, monthly, yearly). A regular full backup is not needed.

Q: What happens if I combine GFS retention with regular active fulls?
A: Restore points will be marked as GFS on their specified day, not on the day your active full is scheduled.

Q: Which workloads can be targeted directly at object storage?
A: All workloads are supported except the Enterprise Database Plug-ins (Oracle RMAN, SAP HANA, SAP on Oracle and Microsoft SQL Server) and Veeam Agent for AIX and Solaris.

Q: How much bandwidth do I need?
A: There are three scenarios:
1) The initial upload. Sizing your bandwidth for this one-time transfer would probably lead to oversizing.
2) Incremental uploads: only the incremental changes count for the calculation.
3) Regular active full backups (optional): the full size of your full backup will be transferred to object storage.

Once you know the amount of incremental backup data per day, define your own "backup window". With the "time" and the "amount of data" you can calculate the required bandwidth (see the example below). Don't forget to do the same for the optional regular active full backups.
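
A quick calculation with made-up numbers: if your jobs produce 500 GB of incremental backup data per day and you want the upload to finish within an 8-hour window, you need to move 500 GB in 8 hours, which is roughly 17.5 MB/s, or about 140 Mbit/s of sustained upload bandwidth (plus some headroom for protocol overhead and retries).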

Q: If I lose my whole backup environment, can I restore with a new installation from object storage in the cloud?
A: Yes, you can. Backups in object storage repositories are self-sufficient. Connect your object storage to a new backup server and run a rescan; the backups will be imported to the new backup server.


SOBR – Capacity Tier / Archive Tier

Q: In a scale-out backup repository, which tiers support object storage?
A: All tiers: Performance Tier, Capacity Tier and Archive Tier.

Q: Do I need to upload full backups regularly to the capacity tier?
A: No, the copy or move process is incremental forever. Even if you run an active full backup, only the changes will be uploaded to the capacity tier. There is no need to upload full backups after the initial offload / copy.

Q: How much bandwidth do I need?
A: There are two scenarios:
1) The initial upload. Sizing your bandwidth for this one-time transfer would probably lead to oversizing; after the initial upload, subsequent uploads include only changes.
2) Incremental uploads: only the incremental changes count for the calculation.

Once you know the amount of incremental backup data per day, define an "upload window". With the "time" and the "amount of data" you can calculate the required bandwidth, just like in the bandwidth example in the Direct to Object Storage section above.

Q: If the amount of data for upload is too large for one session, will Veeam continue the upload or start again from the beginning?
A: The software continues the upload; it does not start from the beginning. Of course, this cannot work if you consistently have more changes than available bandwidth over a long period of time.

Q: Does the backup mode have any influence on the use cases of the capacity tier?
A: There are no special requirements for the "copy" mode. For "move" mode with non-object storage based repositories in your performance tier, you need to make sure you use a backup mode that creates full backups, because only inactive chains can be moved to object storage.

Q: Can multiple SOBRs use one object storage account / bucket?
A: Yes, that's possible if you use a different folder for each SOBR. However, our recommendation is to use a dedicated Azure blob container or S3 bucket per SOBR.

Q: Can “archive tier” save money for me?
A: That depends on your backup retention. It only pays off for long term backups (many months / typically years) where you (almost) never read from. Depending on the cloud provider, something between 3- and 6-months backup retention is a value where it starts to pay off (for example AWS Deep Archive has a minimum storage duration period of 180 days).

Consider a long-term retention policy, where yearly backups are stored for 3 years. For yearly backup of 1TB in size, overly simplified the total storage cost across 3 years would be:

Amazon S3 > USD 848
Amazon S3 Glacier Instant Retrieval > USD 148 (6x cheaper)
Amazon S3 Glacier Deep Archive > USD 38 (22x cheaper)

Q: How do I get data into the archive tier?
A: There are three scenarios for data flow into the archive tier:
1) Object storage as performance tier: data flows from "performance tier" -> (move) -> "archive tier".
2) Object storage as performance tier and capacity tier: data flows from "performance tier" -> (copy and / or move) -> "capacity tier" -> (move) -> "archive tier".
3) Block-based (or network-attached) repository as performance tier, object storage as capacity tier: data flows from "performance tier" -> (copy and / or move) -> "capacity tier" -> (move) -> "archive tier".

Remember that only GFS restore points go to the archive tier. That is to save money (short retention is very expensive, as you pay for a minimum storage duration at Azure & AWS).
This table shows the impact of the copy or move policy for the performance and capacity tier.

Q: If I lose my whole backup environment, can I restore with a new installation from object storage in the cloud?
A: Yes, you can. Backups are self-sufficient in the capacity tier and in the archive tier. That also means that the archive tier never requires data from the capacity tier, and vice versa.


Q: When should I use "capacity tier" in "copy mode" vs. "backup copy jobs" directly to object storage?
A: It depends. The table below gives an overview about which scenario fits best for which functionality.

Image
HannesK
Product Manager
Posts: 14252
Liked: 2858 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Hardened Repository (v12)

Post by HannesK » 1 person likes this post

Hardened Repository

Q: Can the Hardened Repository store all kinds of Veeam backups?
A: Yes. Just keep in mind that some workloads only become immutable after 24 hours (Enterprise Database Plug-ins) or after a new image-level backup has been created (SQL, Oracle and PostgreSQL log file backups). Configuration backups can be pointed at a Hardened Repository, but they won't be stored immutably.

Q: How can I change my existing Linux repositories to Hardened Repository after upgrade to V12 or later?
A: The process is documented in our user guide (Migrating Linux Repository to Hardened Repository) and involves manual configuration steps on your Linux repository and Veeam backup console.

Q: Which edition of Backup & Replication is required for the Hardened Repository feature?
A: Hardened Repository is available in all editions including Community Edition.

Q: Does immutability have influence on disk space usage or XFS block cloning / fast clone deduplication?
A: No. The only influence on disk space usage is the minimum immutability time of 7 days.

Q: Can a Hardened Repository be shared between different backup servers?
A: Technically it works, but it is an unsupported scenario.
1) Officially, sharing servers / components between different VBR servers is unsupported. The reason is that the VBR servers don't know about each other, so the shared server could be overloaded with too many tasks. It also creates challenges during upgrades, as all components need to be updated together. Therefore, this scenario is not tested by Veeam QA.
2) As long as you keep an eye on the load and do not overload the server, it works fine.
3) Sharing a Hardened Repository between multiple VBR servers has worked since v11a; each VBR server gets its own certificate (so the scenario was considered during development).

Q: What if an attacker has access to root credentials and the ability to log on remotely to the Hardened Repository?
A: The same as with any other WORM software: you are lost once an attacker has admin/root access to the operating system. See the long explanation below (an excerpt from the whitepaper).

The best analogy would be the following:

Imagine you first let a person carrying a flaming torch and a portable gas tank into the bank, through the security in the employees-only area, and through the vault door. He's now standing on top of a pile of paper bills, and we're discussing "how can we make it impossible for him to burn them". But it's a waste of time at this point, as obviously you're already too late: the fire will consume any smarts you put in place, no matter how cool they look from the technology perspective.

General attack vectors against WORM software, and hardening
This section focuses on attack vectors against all software-only WORM solutions and how those attacks can be mitigated. Understanding the attack vectors helps you understand the Hardened Repository software design (see the "how Hardened Repository works in the background" section).

There are several security recommendations for the Linux system where the Veeam Hardened Repository runs.

1) Implement "separation of duties". It's recommended that the backup administrator has no access to the Hardened Repository.
2) Have a secure physical environment. If an attacker can walk to the Hardened Repository server and destroy the data physically, then software-based security is useless.
3) Secure the integrated remote management of the server. If an attacker has administrative access to IPMI/iLO/IMM/iDRAC/CIMC etc., then they can delete all data and software-based security is useless.
4) Secure the virtual infrastructure in case the Hardened Repository runs in a virtual machine. If an attacker can delete the VM running the Hardened Repository, then software-based security is useless.
5) Use a Linux system that has support for security updates and install security fixes in a timely manner.
6) Secure access to the operating system. Use state-of-the-art multi-factor authentication. If possible, disable the SSH server completely and allow server access only via the local physical console, to protect yourself from zero-day vulnerabilities in the SSH server itself.
7) Use a minimal installation of your preferred Linux distribution.
8) Make the Veeam transport service (running under a limited account) the only service available on the network (SSH can be an exception). In particular, no third-party network services should run with root permissions: if an attacker can gain root access via third-party software on the Hardened Repository, then they can delete all data.

WORM software has existed for many years, either as standalone software solutions or as pre-configured virtual appliances. In both cases, the underlying operating system or infrastructure must be secured. If an attacker gets access to the virtualization platform, they can delete the whole WORM VM without needing to attack the WORM software itself. The situation is similar for WORM software based on Windows or Linux: if an attacker becomes administrator or root, they can destroy the data protected by the WORM software. As administrator/root, there is always a way to get around the protection of software running on that system (including countermeasures like filter drivers or encryption).

While it may sound strange at first, a system is lost if an attacker can somehow get console access to an industry-standard machine. Even with the best operating system hardening (SELinux, encrypted disks, secure boot, trusted platform modules, etc.), an attacker can still boot another operating system or boot with special kernel parameters and delete all the data.

Q: Can I install any other Veeam roles (proxy, tape server, WAN accelerator) on a Hardened Repository server?
A: An NBD proxy can be installed on a Hardened Repository. All other roles require root permissions and therefore cannot run on a Hardened Repository, as that would weaken its security.

Q: Is “Hardened Repository” compatible with Veeam Cloud Connect?
A: Yes. Veeam cloud service providers can leverage “Hardened Repositories” in their data center.

Q: Can a normal Linux repository be configured on the same machine as a “Hardened Repository”?
A: No. Classic repositories would reduce the overall security of the system.

Q: What happens to existing backup files if I change the immutability period in the repository settings?
A: In general, nothing changes for existing backup files, but there is an exception for the active backup chain. The immutable time for existing files can only be extended: if you increase the immutability period, the immutable time of the active backup chain is extended, while backup files of inactive chains keep their current immutable time. If you reduce the immutability period, existing backup files keep their currently assigned immutable time.

Q: How can I extend the immutable time for existing backup files?
A: The Set-VBRImmutabilityLockExpirationDate PowerShell cmdlet can do that. Its main purpose is "legal hold" compliance.

Q: What happens if the backup retention is lower than the immutability retention?
A: The retention policy will be unable to delete immutable backups.

Q: What can be done if the Hardened Repository runs out of disk space?
A: Adding more disk space is the best option (e.g. expand the volume on the storage and grow the filesystem afterwards). Additional scale-out backup repository extents can also be a solution. Existing backups are immutable and thus can only stay where they are.

Q: Which Linux distribution should I use?
A: For the Hardened Repository, the Linux distribution is largely irrelevant as long as it supports extended attributes and chattr. However, you will probably want to choose a distribution with full XFS support to leverage block cloning. Also see here
HannesK
Product Manager
Posts: 14252
Liked: 2858 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

How to get started (v12)

Post by HannesK »

How to get started

Q: I’m new to Veeam. Where can I start?
A: A good starting point is the quick start guides on https://helpcenter.veeam.com

Q: It’s 2023… do you have how-to videos?
A: Yes, they are available on YouTube on the Veeam channel
Locked
