srdegeus
Enthusiast
Posts: 40
Liked: 5 times
Joined: Jul 03, 2018 6:27 pm

Enterprise Manager - Upgrade runs 'out of database space'

Post by srdegeus »

It is boggling my mind why my planned upgrade of Enterprise Manager to v10 failed, while a test upgrade (on an offline snapshot of the VM) a few months ago (when v10a had just been released) was successful.

It turns out that my 4 GB (3507.63 MB according to SQL) database balloons to over 10 GB during the upgrade, which is the database size limit in SQL Express.
I cannot understand why this would happen: even if all tables were upgraded by copying the ‘old’ tables to ‘new’ tables, the data would double, but that would still not exceed 10 GB (it would be 7–8 GB).

The most likely ways to fix this would be:
  • Upgrade to a ‘full’ version of SQL Server.
    While technically possible, the database size after the upgrade will most likely be around 4 GB again, so I would only need a ‘full’ version of SQL during the upgrade wizard. (Alternatively: installing a SQL trial version, moving the database, upgrading, moving it back and uninstalling the trial version seems overkill for ‘just’ an upgrade.)
  • Set the retention of the Event History to less than 53 weeks.
    We will lose some historical information, but the challenge is what number to set it to.
    Guesstimating from the time difference between the test run and now is not a real option: we have been running this for several years, and in the meantime we have added multiple backup jobs and servers, which most likely tipped it over the limit, but we still don’t know by what ratio.
    I just realized that the 53 weeks also conflicts with our ‘requirement’ to be able to show last year’s backup results for the IT audits. (Audits don’t happen in January but usually in Q2.)
    While we are willing to purge the event history in Enterprise Manager, changing this setting would also roll down to all backup servers, which we don’t want. Ideally Enterprise Manager would only keep the backup results from the last few weeks, while the VBR servers keep the ‘full’ history (for auditing).
  • Install v10 from scratch.
    This would mean that we need to add all the VBR servers again (so an upgrade is preferred).
    It also assumes that a fresh install will read the history from all the VBRs again; otherwise this is not an option.
So my main question is whether someone has a query (or calculation) that can be run to determine what size the database would balloon to during the upgrade.

Or is it possible to purge the history from the Enterprise Manager (tables) while keeping all the settings?
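
As a starting point for that calculation, a per-table size breakdown can be pulled from standard SQL Server catalog views with something along these lines (a sketch only, not a Veeam-provided query; run it against the Enterprise Manager database):

Code:

-- Reserved space and row count per table, largest first
SELECT
    s.name AS schema_name,
    t.name AS table_name,
    SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,   -- data plus indexes
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
INNER JOIN sys.dm_db_partition_stats AS ps ON ps.object_id = t.object_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;

Multiplying the largest tables by however many copies the upgrade keeps would give a rough worst-case figure.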
Egor Yakovlev
Veeam Software
Posts: 2537
Liked: 683 times
Joined: Jun 14, 2013 9:30 am
Full Name: Egor Yakovlev
Location: Prague, Czech Republic

Re: Enterprise Manager - Upgrade runs 'out of database space'

Post by Egor Yakovlev »

Greetings.

- We do keep some additional data in the database during and after the Enterprise Manager upgrade, to support cross-edition VBR servers afterwards. Feel free to open a support case and investigate your database contents with an engineer, as there might be stale records worth clearing.
- For audit purposes I would recommend checking Veeam ONE's reporting capabilities, as it has some neat reports made specifically for your case and uses its own database to keep records outside Veeam Backup \ Veeam Enterprise Manager. Infrastructure Changes Audit, Backup Infrastructure Audit, Data Protection Status, Job Status and Restore Activities reports, just to name a few!
- A freshly installed EM will read all available session history from connected VBR servers again!

/Cheers!
DonZoomik
Service Provider
Posts: 368
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere

Re: Enterprise Manager - Upgrade runs 'out of database space'

Post by DonZoomik »

If your SQL Server is 2016 SP1 or newer, you can enable page compression (from that release it is available in all editions, including Express). It tends to squeeze the Veeam database down to about a third of its size. I've been using it for a while, about 2 years I'd guess. I have no hard measurements, but older SQL Server blog posts said it is almost fully transparent to the application (SCCM has some problems with replication), with possible performance improvements (less disk I/O).
You should rerun it after the upgrade as well, because new tables and indexes don't get compressed by default.

Example code cobbled together from various online examples:

Code:

--Compress tables
declare commands cursor for
SELECT 
   'ALTER TABLE [' 
   + schemas.[name] 
   + '].[' 
   + tables.[name] 
   + '] REBUILD WITH (DATA_COMPRESSION=PAGE);'
from sys.tables AS tables with (nolock)
INNER JOIN sys.schemas AS schemas WITH (NOLOCK) ON tables.[schema_id] = schemas.[schema_id]
inner JOIN sys.partitions AS partitions with (nolock) on tables.object_id = partitions.object_id
INNER JOIN sys.dm_db_partition_stats AS partitionstats WITH (NOLOCK) ON tables.object_id = partitions.object_id and partitions.partition_id = partitionstats.partition_id
WHERE tables.[type] = 'U' AND partitions.[data_compression] < 2 and partitions.index_id in (0,1)
ORDER BY partitionstats.[reserved_page_count]

declare @cmd varchar(max)
open commands
fetch next from commands into @cmd
while @@FETCH_STATUS=0
begin
  print (@cmd)
  exec(@cmd)
  fetch next from commands into @cmd
end

close commands
deallocate commands
set @cmd = NULL

--Compress indexes
declare commands cursor for
SELECT 
   'ALTER INDEX "' 
   + indexes.[name] 
   + '" ON "' 
   + schemas.[name] 
   + '"."' 
   + tables.[name] 
   + '" REBUILD WITH (DATA_COMPRESSION=PAGE);'
FROM sys.tables AS tables WITH (NOLOCK)
INNER JOIN sys.indexes AS indexes WITH (NOLOCK) ON tables.[object_id] = indexes.[object_id]
INNER JOIN sys.partitions partitions with (nolock) ON tables.[object_id] = partitions.[object_id]
INNER JOIN sys.schemas schemas WITH (NOLOCK) ON tables.[schema_id] = schemas.[schema_id]
INNER JOIN sys.dm_db_partition_stats AS partitionstats WITH (NOLOCK) ON indexes.[object_id] = partitionstats.[object_id] AND partitionstats.[index_id] = indexes.[index_id] and partitions.partition_id = partitionstats.partition_id
WHERE tables.type = 'U' AND indexes.[index_id] >1 AND partitions.[data_compression_desc] <> 'PAGE'
ORDER BY partitionstats.[reserved_page_count]

open commands
fetch next from commands into @cmd
while @@FETCH_STATUS=0
begin
  print (@cmd)
  exec(@cmd)
  fetch next from commands into @cmd
end

close commands
deallocate commands
set @cmd = NULL

--Shrink database
declare commands cursor for
SELECT 'DBCC SHRINKFILE (N''' + [name] + ''',0);' FROM sys.database_files;

open commands
fetch next from commands into @cmd
while @@FETCH_STATUS=0
begin
  print (@cmd)
  exec(@cmd)
  fetch next from commands into @cmd
end

close commands
deallocate commands
set @cmd = NULL


--Rebuild indexes
declare @tableName nvarchar(500) 
declare @indexName nvarchar(500) 
declare @indexType nvarchar(55) 
declare @percentFragment decimal(11,2)
declare @SchemaName nvarchar(500) 
declare @pagelocks int
 
declare FragmentedTableList cursor for 
 SELECT
	OBJECT_NAME(ind.OBJECT_ID) AS TableName, 
	ind.name AS IndexName,
	indexstats.index_type_desc AS IndexType, 
	indexstats.avg_fragmentation_in_percent,
	schemas.name as schemaname,
	ind.allow_page_locks as pagelocks
 FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats 
   INNER JOIN sys.indexes ind ON ind.object_id = indexstats.object_id AND ind.index_id = indexstats.index_id
   inner join sys.tables tables ON tables.object_id = ind.object_id
   INNER JOIN sys.schemas schemas ON tables.schema_id = schemas.schema_id
  WHERE 
   indexstats.avg_fragmentation_in_percent > 5 
  AND ind.Name is not null 
  ORDER BY indexstats.avg_fragmentation_in_percent DESC 
 
    OPEN FragmentedTableList 
    FETCH NEXT FROM FragmentedTableList  
    INTO @tableName, @indexName, @indexType, @percentFragment, @schemaname, @pagelocks
 
    WHILE @@FETCH_STATUS = 0 
    BEGIN 
      print 'Processing ' + @indexName + ' on table ' + @tableName + ' which is ' + cast(@percentFragment as nvarchar(50)) + '% fragmented'

      -- REBUILD heavily fragmented indexes (and indexes that disallow page locks,
      -- since REORGANIZE requires page locking); otherwise REORGANIZE in place.
      if (@percentFragment > 30 or @pagelocks = 0)
      BEGIN
        EXEC ('ALTER INDEX "' + @indexName + '" ON "' + @schemaname + '"."' + @tableName + '" REBUILD;')
        print 'Finished rebuilding ' + @indexName + ' on table ' + @schemaname + '.' + @tableName
      END
      ELSE
      BEGIN
        EXEC ('ALTER INDEX "' + @indexName + '" ON "' + @schemaname + '"."' + @tableName + '" REORGANIZE;')
        print 'Finished reorganizing ' + @indexName + ' on table ' + @schemaname + '.' + @tableName
      END
      FETCH NEXT FROM FragmentedTableList  
        INTO @tableName, @indexName, @indexType, @percentFragment, @schemaname, @pagelocks
    END 
    CLOSE FragmentedTableList 
    DEALLOCATE FragmentedTableList
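
To gauge the likely benefit before rebuilding everything, SQL Server's built-in estimator can be run per table first. A sketch with placeholder schema/table names (it should be available on the same versions/editions that support compression):

Code:

-- Estimate PAGE compression savings for a single table (schema and table names are placeholders)
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'BackupTaskSessions',
     @index_id         = NULL,    -- all indexes
     @partition_number = NULL,    -- all partitions
     @data_compression = 'PAGE';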
srdegeus
Enthusiast
Posts: 40
Liked: 5 times
Joined: Jul 03, 2018 6:27 pm

Re: Enterprise Manager - Upgrade runs 'out of database space'

Post by srdegeus »

@Egor

I have opened a support case; hopefully the ‘stale data cleaning’ is also applicable in my case.

Veeam ONE is something we have looked at in the past but skipped. Pricing for it is not transparent; prices have been removed from the website, which makes it hard to request budget when I don’t know the MSRP. And frankly, the functionality it offers should already be in the base product (like competitors have). Also, I doubt the auditors will accept an ‘external’ reporting application as an audit log; usually they want the raw data/primary source as proof.

@DonZoomik
I didn’t know this was supported in SQL Express as well.
The two largest tables (60% of the total DB size) contain quite a lot of XML data that should indeed compress well. I tried your suggestion on a clone; the new size is 3.2 GB (instead of 3.5 GB). I had hoped for better compression, but maybe this is just enough.
Gostev
Chief Product Officer
Posts: 31561
Liked: 6724 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Enterprise Manager - Upgrade runs 'out of database space'

Post by Gostev »

srdegeus wrote (Oct 26, 2020 12:37 pm): "And frankly the functionality it offers should already be in the base product (like competitors have.) Also I doubt the auditors will accept an ‘external’ reporting application as audit log, usually they want the raw data/primary source as proof."
v10 does provide a built-in backup server audit report feature. You can export the corresponding report from Veeam Backup Enterprise Manager or via PowerShell.
DonZoomik
Service Provider
Posts: 368
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere

Re: Enterprise Manager - Upgrade runs 'out of database space'

Post by DonZoomik »

That's much less compression than I'd expect. Are you checking the data file size or the full database size? Index rebuilds tend to create quite large transaction logs, which might offset your compression savings. You could run the shrink again, at some (mostly irrelevant) performance cost due to fragmented indexes.
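
A quick way to separate the two is to look at size and used space per database file; a sketch against the standard catalog view (run it in the Veeam database):

Code:

-- Size and used space per file: ROWS = data file, LOG = transaction log
SELECT
    name AS logical_name,
    type_desc AS file_type,
    size * 8 / 1024 AS size_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;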
srdegeus
Enthusiast
Posts: 40
Liked: 5 times
Joined: Jul 03, 2018 6:27 pm

Re: Enterprise Manager - Upgrade runs 'out of database space'

Post by srdegeus »

I got the suggestion from support to delete (truncate) all data from the BackupTaskSessions table, which is the largest table (35% of all data). This table should be populated again when the data collection for all the backup servers runs. When I do this, the upgrade completes successfully, but the database still grows to 286% of its original size (2233 MB -> 6397 MB).
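
For reference, a minimal sketch of what that cleanup could look like, assuming the table lives in the dbo schema (take a database backup first; TRUNCATE will fail if any foreign key references the table):

Code:

-- Check how much space the session history table holds, then clear it
EXEC sp_spaceused 'dbo.BackupTaskSessions';
TRUNCATE TABLE dbo.BackupTaskSessions;   -- fails if the table is referenced by a foreign key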

I added a SQL trace to see what makes the database grow so much during the upgrade. What I found is that the upgrade performs all the intermediate version upgrades as well: it isn’t (just) an upgrade from 9.5.3a -> v10a, but from 9.5.3a -> 9.5.4 -> v10 -> v10a. For each of these steps it keeps a copy of the old data:

9.5.3 -> 9.5.4: Copy C.* tables to 95.* and upgrade the C.*-tables
9.5.4 -> v10: Copy C.* tables to 95u4.* and upgrade the C.*-tables
v10 -> v10a: Copy C.* tables to 100.* and upgrade the C.*-tables
So for each of those tables you will have 4 copies that contain -all- the records.

Testing a (separate) clean install of v10a shows that the v10a VBR data ‘lands’ in the C.* tables (the same version as Enterprise Manager), while the 9.5.3 data ‘lands’ in the 95.* tables and is then also copied to the 95u4.* and 100.* tables.
So even a clean install would not solve my issue, as the data from the 9.5.3a VBR servers goes into the 95.* -> 95u4.* -> 100.* -> C.* tables as well. Since all my current VBR servers (except one) are still on 9.5.3a, I would still end up with over 10 GB in the database, because that data is stored 4 times. Obviously, if all my VBR servers were on v10a I would not have this issue, but upgrading Enterprise Manager first is a prerequisite, so I am in a catch-22.

My only way to keep using SQL Server Express would be to create a 2nd Enterprise Manager for v10a and ‘move’ VBR servers to it after they have been upgraded, then decommission the v9.5 Enterprise Manager once all are upgraded. But that same procedure would need to be followed for every version upgrade. I’ve decided to purchase an additional SQL Server Standard license instead; it’s the ‘only’ logical option given the amount of data we generate.

I ran a new SQL trace after the upgrade and found that it indeed copies the data from table to table. I noticed that these queries run significantly longer than I would normally expect; it appears that the 95.*, 95u4.* and 100.* tables do not contain any indexes, while only the C.* tables have indexes.
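
To see how much of the post-upgrade size is taken up by those versioned copies, the reserved space can be grouped per schema (a sketch; the 95, 95u4 and 100 schema names are as observed in the trace above):

Code:

-- Total reserved space per schema, to measure how much the 95.*, 95u4.* and 100.* copies hold
SELECT
    s.name AS schema_name,
    SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
INNER JOIN sys.dm_db_partition_stats AS ps ON ps.object_id = t.object_id
GROUP BY s.name
ORDER BY reserved_mb DESC;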