-
- Veteran
- Posts: 928
- Liked: 52 times
- Joined: Nov 05, 2009 12:24 pm
- Location: Sydney, NSW
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Great, thanks Guido and tntteam for confirming this backup limitation.
I will not enable the Windows OS deduplication for my Veeam backup repository.
--
/* Veeam software enthusiast user & supporter ! */
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Are you sure you formatted with /L ?
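(For reference, the full command in question would look something like this; Z: is a placeholder drive letter, /A:64K sets 64 KB clusters, and /L enables the large file record segments that deduped repository volumes are commonly formatted with:)
# Z: is a placeholder; adjust to your repository volume
format Z: /FS:NTFS /A:64K /L /Q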
-
- Enthusiast
- Posts: 68
- Liked: 5 times
- Joined: Aug 28, 2015 12:40 pm
- Full Name: tntteam
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
I think dedupe on Windows is great. The worst luck you can have is that dedupe stops working; actual corruption is rare, I think.
BTW, Win2016 and multi-CPU dedupe just rock! Now it's the storage array that can't go fast enough!
I also found memory consumption to be lower on Win2016 than on 2012.
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Albert, we have been using dedupe from the start and have been happy with both 2012 and 2016. Some NTFS bugs in 2016 have recently been fixed, so as long as you stick to the basic rules discussed in these topics, I would say you're quite safe. Dedupe certainly saves much more space than ReFS, and ReFS has issues too (which will of course be solved in the future). Anyway, a landing zone (non-dedupe) plus a secondary repository with dedupe enabled for archiving is still the way to go for me!
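(For anyone following along, a minimal sketch of enabling dedupe on such a secondary/archive volume; E: is a placeholder drive letter, and the Backup usage type is available on 2012 R2 and later:)
# E: is a placeholder for the archive repository volume
Import-Module Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Backup
# Optionally leave very recent files alone, so chains still being written are skipped
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3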
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
Dedupe still rocks; I just checked one of our repositories: 1.2 PB saved on 270 TB physical. This is a 2012 R2 server. Even the volumes with incrementals only (R, T, V) dedupe quite well:
PS C:\Windows\system32> get-dedupstatus
FreeSpace SavedSpace OptimizedFiles InPolicyFiles Volume
--------- ---------- -------------- ------------- ------
 39.25 TB  188.58 TB            501           546 Q:
 53.06 TB   42.26 TB           6252          6252 V:
 54.55 TB      40 TB             72            72 T:
  4.99 TB  175.46 TB           1296          1580 U:
 52.99 TB   53.19 TB         125453        125453 Y:
 53.64 TB   78.41 TB             22            22 S:
 48.29 TB   49.36 TB           2430          2430 R:
 57.69 TB  132.47 TB            350           351 O:
 52.91 TB   62.83 TB            451           451 I:
 26.67 TB  272.08 TB            783           783 H:
 50.87 TB   64.28 TB            236           236 J:
  44.6 TB   95.33 TB            365           365 G:
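(A quick way to summarize the same output per volume; SavedSpace and FreeSpace come back as byte counts on the objects Get-DedupStatus returns:)
# Per-volume dedupe savings in TB, largest savings first
Get-DedupStatus | Sort-Object SavedSpace -Descending |
    Select-Object Volume,
        @{ n = 'SavedTB'; e = { [math]::Round($_.SavedSpace / 1TB, 2) } },
        @{ n = 'FreeTB';  e = { [math]::Round($_.FreeSpace / 1TB, 2) } }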
-
- Influencer
- Posts: 21
- Liked: never
- Joined: Feb 01, 2010 7:41 pm
- Full Name: Shawn Barnhart
- Contact:
[MERGED] Server 2016 deduplication -- considered reliable yet?
I've used 2012r2 with deduplication successfully as a Veeam repository, but when I tried it with a 2016 repository server I got problems that went away when I disabled deduplication.
I read a number of people also had this problem and that MS was working on tracking down bugs in 2016 dedupe.
Has this generally been considered fixed or not? I paged back through several pages' worth of messages in this forum and didn't see anything new mentioning deduplication at all, which of course could mean "fixed" or "broken for so long we gave up".
-
- Veteran
- Posts: 361
- Liked: 109 times
- Joined: Dec 28, 2012 5:20 pm
- Full Name: Guido Meijers
- Contact:
Re: [MERGED] Server 2016 deduplication -- considered reliable yet?
It was actually an NTFS issue, not a dedupe issue.
From Gostev's Digest:
Good news for Windows Server 2016 deduplication users, but bad news for all NTFS users. Microsoft has finally nailed that phantom data corruption bug that I talked about here before. We actually had quite an exceptional experience working with the Windows dedupe team on this bug: they treated it extremely seriously from the start, and it was one of those rare cases where they pushed us for updates, and not the other way around. So it made me smile when, in the end, it turned out that the bug was not even in the dedupe engine, but rather in the NTFS sparse files implementation. As such, Windows dedupe gets to keep its name clean. However, the update is strongly recommended for all NTFS users and will be included in the August "Patch Tuesday". Most importantly: anyone seeing the "Volume bitmap is incorrect" chkdsk error on their NTFS backup repositories should be sure to perform an Active Full backup after installing this patch, because your current backups may not be good. Apply the same logic to any other data that uses Windows dedupe or sparse files (create a new instance of the data).
This includes the patch: https://support.microsoft.com/en-us/hel ... -kb4025334
PS: Dedupe should be coming to ReFS sometime...
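(If you want to check a repository volume for that error without taking it offline, an online read-only scan along these lines should do; Q: is just a placeholder drive letter:)
# Read-only scan for NTFS metadata errors such as "Volume bitmap is incorrect"
Repair-Volume -DriveLetter Q -Scan
# or with the classic tool:
chkdsk Q: /scan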
-
- Enthusiast
- Posts: 49
- Liked: 3 times
- Joined: Jan 14, 2016 8:02 am
- Full Name: Peter
- Contact:
Re: Best Practice for MS Server 2012 DeDup Repo
I was wondering if anyone has experience with, or has run tests on, the read speed from a deduped repo volume, e.g. on a Synology.
Currently I want to migrate my deduped backup data from a Synology DS2413+ to a temporary Synology NAS, because we need to re-format the drive / optimize it for large files (format z: /A:64k /L /q).
Both are connected to our backup server via dual 1 Gbit iSCSI. Copying the deduped data is very, very slow, only 15-80 MB/s and really unstable, while working with non-deduped data is blazing fast at more than 170 MB/s.
I don't want to disable dedupe, because that would also take forever and I would lose a lot of my restore points, but copying the data seems to take days/weeks.
Maybe someone has hints or could recommend a useful tool, e.g. for block-level copying?