Comprehensive data protection for all workloads
KarmaKuma
Enthusiast

Dell Isilon / PowerScale as Veeam Repository

Post by KarmaKuma »

[EDIT moderator]: split from post440587.html#p440587

When you guys are talking about the dataset size, are you referring to "processed" data (the source dataset size as seen by Veeam before compression/dedupe)? Or to the dataset size transferred to the repo?

My result with two 4-node Dell Isilon/PowerScale H500 clusters (they will double to 8 nodes in about one month), used as two SMB (CA enforced, don't worry!) SOBRs (one extent per cluster node for parallelism) for backup jobs (BJs) and backup copy jobs (BCJs) during my current Veeam testing:

23 VMs, 2.3 TB "processed" data (approx. 10% of the complete environment, used for testing):

Consistently between 17 and 18 minutes for both the BJ and the BCJ health check!
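As a rough sketch of what those numbers imply (the 2:1 compression ratio and the linear node scaling below are assumptions on my part, not measurements):

```python
# Back-of-envelope for the health check window above. Assumptions
# (mine, not measured): the repository stores roughly half of the
# processed size (2:1 compression), and throughput scales linearly
# with node count.
TB = 10**12  # decimal terabytes

processed_bytes = 2.3 * TB
compression_ratio = 2.0                 # assumed
stored_bytes = processed_bytes / compression_ratio
duration_s = 17.5 * 60                  # ~17-18 minutes

read_rate = stored_bytes / duration_s   # what the check actually reads
print(f"effective repo read rate: {read_rate / 10**9:.2f} GB/s")

# Extrapolate to the full environment (~10x the test set) on 8 nodes:
est_window_s = (stored_bytes * 10) / (read_rate * 2.0)
print(f"estimated full health check window: {est_window_s / 3600:.1f} h")
```

Under those assumptions, the full environment would still come in at around an hour and a half.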

That might lead me to plan daily health checks, as this should easily fit into the time window, especially with 8 nodes, which should bring a significant performance boost... if we land on Veeam after my testing and comparison phase.
HannesK
Product Manager

Re: Health check on large backups

Post by HannesK »

Hello,
and welcome to the forums.

As long as you compare the same numbers, it's irrelevant which ones you take. Either way, it shows a significant improvement.
4 node Dell Isilon/PowerScale H500 Clusters [...] used as two SMB
Sorry for derailing the topic, but this kind of repo should only be used with the dedicated load balancer dsmISI. And as Isilon only works with active full backups, health checks actually sound like overkill to me.
significant performance boost
Well, no :-) Isilon is one of the hardest backup repositories to implement if it has to scale. With dsmISI it can work, but I would suggest looking for a different solution if you want to scale (servers with internal disks, or stupid block-SAN storage attached). If you want to continue discussing that topic, I will split the thread into a separate one.

Best regards,
Hannes

PS: a forum search shows my experience with Isilon / PowerScale as a Veeam repository.
KarmaKuma
Enthusiast

Re: Health check on large backups

Post by KarmaKuma »

Hi HannesK

Yes, please split this into a new topic... I'm interested in sharing my (brief but very intensive) experience with you and comparing it with your findings.

I did read everything I could find on Isilon + Veeam before even starting the testing, and I also had several high-level discussions with dsmISI/Concat engineers, including receiving a preliminary offer, as I was pointed towards them by my main Dell Storage/Isilon rep a few months ago.

Maybe things have changed for the better with the newer Isilon node generations, maybe it's the OneFS evolution. Maybe it's me and my level of expectations. But my experience so far is actually quite promising. Now I am drifting off-topic, though...

Looking forward to a ping/link to a new topic for this discussion.
HannesK
Product Manager

Re: Dell Isilon / PowerScale as Veeam Repository

Post by HannesK »

Hello,
Disclaimer: my experience is with 1,000+ VM environments and 10+ Isilon nodes. With active fulls, it works. But I have not seen any environment (20k VMs was the largest with Isilon) where synthetic full / forever forward incremental backups would work.

One thing I saw is that PowerScale now supports 16 TB files. Last time I looked, the limit was 4 TB, which is the first reason why SMB / NFS has limited scale. If your VMs are small enough that you never hit the 16 TB limit, then this might be irrelevant for you.
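A quick way to sanity-check that limit against an environment (the VM sizes and the 2:1 data reduction below are placeholder assumptions, not measurements) is to compare per-VM and per-job backup file sizes:

```python
# Sanity check against the 16 TB file-size cap. The VM sizes and the
# 2:1 data reduction are placeholders; plug in real numbers.
FILE_LIMIT_TB = 16
reduction = 2.0  # assumed effective compression/dedupe ratio

def full_file_tb(source_tb: float) -> float:
    """Approximate on-disk size of one full backup file (.vbk)."""
    return source_tb / reduction

vms_tb = [8.0, 12.0, 6.0, 15.0]  # hypothetical source VM sizes

# With per-VM backup chains only the largest single VM matters;
# with per-job chains, every VM in the job lands in one .vbk.
per_vm = full_file_tb(max(vms_tb))
per_job = full_file_tb(sum(vms_tb))

for label, size in (("largest per-VM .vbk", per_vm), ("per-job .vbk", per_job)):
    status = "OK" if size < FILE_LIMIT_TB else "over the limit"
    print(f"{label}: {size:.1f} TB -> {status}")
```

In this made-up example, per-VM chains stay well under the cap while a single per-job chain would already exceed it.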
brief but very intensive
Happy to hear what you tested :-) My guess is that there were no merges / synthetic full backups yet. The easiest way to test it is to use reverse incremental with, let's say, 50 VMs in one backup job. From day two onward, that leads to inter-node traffic on any scale-out storage. You would see the same behavior, with the same performance issues, on any other scale-out NAS.

If reverse incremental works, then I would say you are lucky. For me, it would mean the hardware is probably over-sized (compared to simple block storage) :-)
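To make the amplification concrete: reverse incremental performs one read and two writes on the repository for every changed block. A rough model (the 50-VM job comes from the suggestion above; the average VM size and change rate are assumed values):

```python
# Rough model of the repository I/O that reverse incremental causes.
# For every changed block, Veeam reads the old block from the .vbk,
# writes it into the .vrb rollback file, and writes the new block
# into the .vbk: 1 read + 2 writes, i.e. roughly 3x the changed data.
vms = 50                   # one job, as suggested above
avg_vm_gb = 100            # assumed average VM size
daily_change = 0.05        # assumed 5% daily change rate

changed_gb = vms * avg_vm_gb * daily_change
repo_io_gb = changed_gb * 3

print(f"data arriving from source : {changed_gb:.0f} GB")
print(f"I/O hitting the repository: {repo_io_gb:.0f} GB (random)")
# On a scale-out cluster the .vbk is striped across nodes, so much of
# this random I/O also crosses the back-end network between nodes.
```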

Best regards,
Hannes