Is anybody else noticing that VBR11 is starting to be *really* intolerant of issues that v10 never had a problem with?
For instance, starting in October, random jobs started failing with "Error: Unstable connection: unable to transmit data." After a frustrating week and a half of missing RPOs while going back and forth with support (ID #02034922), Jason from support finally suggested that I reboot my proxy servers. I did, and immediately all my jobs started running again.
???
Since then, I've gotten this error many times, and a reboot of the proxy servers always fixes the issue.
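In case anyone wants to script that workaround, here's a rough, untested PowerShell sketch of what the reboot cycle could look like. It assumes the v11 Veeam.Backup.PowerShell module on the backup server, WinRM access to the proxies, and that the proxy objects expose the underlying server name via .Host.Name (adjust for your environment):
# Sketch only: pause jobs, bounce the VMware backup proxies, re-enable jobs.
Import-Module Veeam.Backup.PowerShell

# Disable scheduling first so nothing new starts while the proxies go down.
# (Jobs already mid-run could be stopped with Stop-VBRJob if needed.)
$jobs = Get-VBRJob | Where-Object { $_.IsScheduleEnabled }
$jobs | ForEach-Object { Disable-VBRJob -Job $_ }

# Reboot every VMware backup proxy and wait for PowerShell remoting to come back.
# (.Host.Name as the server name is an assumption; verify it in your setup.)
Get-VBRViProxy | ForEach-Object {
    Restart-Computer -ComputerName $_.Host.Name -Force -Wait -For PowerShell -Timeout 900
}

# Re-enable the jobs once the proxies are back up.
$jobs | ForEach-Object { Enable-VBRJob -Job $_ }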
Next, we have one backup repository that eats Seagate HDDs like candy; we lose a drive about once a month, and the RAID 6 array has to rebuild. (I replace the drives with WD Gold each time; eventually they'll all be replaced.) With v10, this didn't even affect backups to that repository (they might have been a little slower, but nothing very noticeable). With v11, I finally figured out that I literally have to stop all backups to that repository until the array is rebuilt; otherwise all those jobs start hanging and failing, and the RAID software on the repository says the rebuild will take 12 to 18 days! (Once I stop the jobs, the RAID rebuilds in about a day.)
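If anyone wants to do the same, below is a rough, untested sketch of how the jobs pointed at that repository could be disabled in bulk. "Repo-01" is a made-up repository name, and matching jobs via GetTargetRepository() is an assumption that fits backup jobs but may not cover every job type:
# Sketch only: disable every job targeting the rebuilding repository.
Import-Module Veeam.Backup.PowerShell

# Placeholder name; substitute the real repository name.
$repoName = "Repo-01"

# GetTargetRepository() is used here to match jobs to the repository;
# some job types may return $null, hence the extra check.
$affected = Get-VBRJob | Where-Object {
    $repo = $_.GetTargetRepository()
    $repo -and $repo.Name -eq $repoName
}

$affected | ForEach-Object {
    Disable-VBRJob -Job $_
    Write-Host "Disabled $($_.Name) until the RAID rebuild finishes."
}

# Once the rebuild completes:
# $affected | ForEach-Object { Enable-VBRJob -Job $_ }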
All this is super frustrating; I'm just wondering if I'm alone in this?
- Expert
- Posts: 183
- Liked: 29 times
- Joined: Feb 23, 2017 10:26 pm
- Product Manager
- Posts: 9848
- Liked: 2607 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Re: v11 too sensitive?
Hi bhagen
I cannot confirm that from my environments.
We have never had a disk failure since v11 on a physical backup server or NAS device.
Product Management Analyst @ Veeam Software
- Expert
- Posts: 183
- Liked: 29 times
- Joined: Feb 23, 2017 10:26 pm
Re: v11 too sensitive?
Wow @mildur; that's amazing. We have a single environment, and it has been riddled with these types of issues since v11.
- Chief Product Officer
- Posts: 31812
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: v11 too sensitive?
But it does make sense that hard drives produced in the same day or week at the same factory can all share the same design flaw that, after some time, makes them fail close to one another. I made the mistake of buying 4 drives from the same shipment once for my home NAS, and it turned out to be a bad shipment, so the 3rd drive failed during the RAID 5 rebuild triggered by the 2nd drive's failure. Since then, I try to buy each hard drive from a different store, and I also use two different hard drive vendors. But of course, such micro-management is only doable for a home NAS.
- Veeam Legend
- Posts: 251
- Liked: 136 times
- Joined: Mar 28, 2019 2:01 pm
- Full Name: SP
Re: v11 too sensitive?
I used to work for one of the large IT corporations that provide storage and maintenance.
I'd always get on customers to replace disks ASAP when they fail, because a rebuild is super hard on the rest of the disks.
There are two times when disk failures are most common:
- When you're rebuilding a failed disk. That extra parity and a spare can save you in this situation; so can not waiting until you have multiple failures.
- When you power off and on. I have seen disks that had been running for a significant number of years; power that storage unit off and on and you'll see a Christmas tree of red and green lights.
Bad batches are common too. I've had calls to go replace every drive in a SAN for a customer before. Maybe that's why I'm paranoid and 3-2-1 has never been enough for me. It's more like 6-3-3 haha