-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jan 30, 2018 12:18 pm
- Full Name: Matias Almeida
- Location: Belo Horizonte, MG, Brazil
- Contact:
Backblaze B2 S3 Compatible API tested?
Hey there. Backblaze just announced that their B2 is now S3 compatible.
https://www.backblaze.com/blog/backblaz ... tible-api/
What caught my attention is that they list Veeam as a launch partner leveraging their API:
"We have a number of launch partners leveraging our S3 Compatible APIs so you can use B2 Cloud Storage. These partners include IBM Aspera, Quantum, and Veeam."
Were there any real tests performed? I don't see them listed at Veeam Ready yet.
Best regards,
Matias
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
They have not yet completed the Veeam Ready testing to obtain the Object Ready status.
But with that said, I have tested it, and functionally their S3 API works.
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Jan 30, 2018 12:18 pm
- Full Name: Matias Almeida
- Location: Belo Horizonte, MG, Brazil
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Thank you. I believe some of my clients will be glad to hear the news.
-
- Influencer
- Posts: 14
- Liked: 4 times
- Joined: Aug 26, 2016 4:30 pm
- Full Name: SPremeau
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Just as a note... if you are security conscious and restrict B2 Application Key access to a specific bucket, you will need to ensure that 'Allow List All Bucket Names' is checked before VBR is able to log in.
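You can reproduce that login check outside Veeam with a quick script. A minimal sketch, assuming boto3 is available and using placeholder credentials; the endpoint helper follows the region-specific hostname pattern B2 shows in its console:

```python
def b2_s3_endpoint(region: str) -> str:
    # Backblaze's S3-compatible endpoints follow s3.<region>.backblazeb2.com;
    # the region comes from the bucket's endpoint shown in the B2 console.
    return f"https://s3.{region}.backblazeb2.com"

# VBR performs a ListBuckets call when adding the repository, so a
# bucket-restricted application key must have 'Allow List All Bucket Names'
# checked or the equivalent call below fails (keys are placeholders):
#
#   import boto3
#   s3 = boto3.client("s3",
#                     endpoint_url=b2_s3_endpoint("us-west-002"),
#                     aws_access_key_id="keyID",
#                     aws_secret_access_key="applicationKey")
#   s3.list_buckets()
```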
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
@premeau This is correct. This is also something that you should put in a feature request with Backblaze. Currently the S3 API is limited to sending and retrieving data, but they could possibly add IAM APIs that would allow you to create policies for more secure bucket access.
Not saying that what they are doing is insecure or that their API is inferior in any way, just mentioning that there are areas for improvement.
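For comparison, on AWS-flavored S3 an IAM-style policy that scopes a key to a single bucket while still allowing the account-wide listing Veeam performs might look roughly like this (a sketch only; "veeam-backups" is a placeholder bucket name):

```python
import json

# Hypothetical IAM-style policy document: list-all is account-wide,
# object access is restricted to one bucket and its contents.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": "s3:ListAllMyBuckets",
         "Resource": "*"},
        {"Effect": "Allow",
         "Action": ["s3:ListBucket", "s3:GetObject",
                    "s3:PutObject", "s3:DeleteObject"],
         "Resource": ["arn:aws:s3:::veeam-backups",
                      "arn:aws:s3:::veeam-backups/*"]},
    ],
}

policy_json = json.dumps(policy, indent=2)
```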
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Influencer
- Posts: 14
- Liked: 4 times
- Joined: Aug 26, 2016 4:30 pm
- Full Name: SPremeau
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Given the newness of Backblaze's S3 support, should I open Veeam tickets (in addition to Backblaze) with any issues? Or is it "unsupported" by Veeam at this point?
Tangential question: Is there a way to have Veeam log its HTTP object storage calls? (Or do I need to work with something like mitmproxy?)
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
It is supported until it fails. What I mean by that is that if the Backblaze S3 API fails in any way, then the support case is on them. I have tested Backblaze, and they have done functional testing only, not performance testing. Meaning that it works with the S3 API calls that we leverage, but it has not been tested for scale, nor has it passed our Veeam Ready testing. Backblaze is currently working on Veeam Ready; that testing is done by the vendor, and the test results are then validated by Veeam.
Veeam already logs S3 actions, and support has methods for enhanced logging of S3 commands. For now, if you have a problem and you think it is Veeam related, open a case with support; if it seems more Backblaze related, open a case with them.
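If you end up reproducing a problem with your own script rather than through Veeam, one way to capture the raw S3 HTTP traffic is botocore's wire-level logging (a sketch, assuming boto3/botocore is installed; mitmproxy remains an option for intercepting TLS from any client):

```python
import logging

# botocore (the library underneath boto3) emits full request/response
# traces at DEBUG level on its named logger; raising it to DEBUG makes
# every S3 HTTP call visible in the log output.
logging.basicConfig(level=logging.INFO)
logging.getLogger("botocore").setLevel(logging.DEBUG)

# Any boto3 client created after this point will log its HTTP activity.
```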
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Service Provider
- Posts: 176
- Liked: 53 times
- Joined: Mar 11, 2016 7:41 pm
- Full Name: Cory Wallace
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
@dalbertson,
In your testing, have you found that Backblaze supports the s3:HeadBucket permission? You may recall you and I were discussing Wasabi's lack of support for it in my feature request asking Veeam to remove the s3:ListAllMyBuckets permission requirement (so that we could have the option to manually specify a bucket path instead of picking from a list if we wanted to). object-storage-f52/feature-request-remo ... 65483.html
I'm really looking for a solution cheaper than Amazon S3 that supports s3:HeadBucket (or for Veeam to remove the list-all requirement and let me specify the bucket manually).
Thanks!
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Hey there! I do remember.
Unfortunately Backblaze does not support it at this moment. They will accept the call, but they don't leverage IAM policies for S3 currently, as they have their own security method.
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Service Provider
- Posts: 176
- Liked: 53 times
- Joined: Mar 11, 2016 7:41 pm
- Full Name: Cory Wallace
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Interesting. Any feasible way you can see to accomplish with Backblaze what I was after with Wasabi where you could only see the buckets you had access to?
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Nope. If you take a look at the pic, when you select to limit access to one bucket, it shows a checkbox saying it has to show all buckets (i.e. HeadBucket).
https://ibb.co/ZSmcmw9
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Service Provider
- Posts: 176
- Liked: 53 times
- Joined: Mar 11, 2016 7:41 pm
- Full Name: Cory Wallace
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Thanks for that. Lame sauce. Must be a tricky policy to implement if these vendors keep omitting it.
-
- Enthusiast
- Posts: 73
- Liked: 7 times
- Joined: Apr 07, 2017 5:30 pm
- Full Name: Selva Nair
- Location: Canada
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
I was testing out Backblaze B2 and found that deleted backups do not disappear from the bucket. The restore points did get removed from Veeam, both from the performance and capacity tiers, and do not reappear on re-sync, so that's good. But when the bucket is browsed or otherwise listed, the folder corresponding to the deleted job and all objects within it still show up, now with two versions (one original and one zero-size). They also continue to consume storage.
Following Backblaze's advice I had left the lifecycle rules at their default, which is probably not the right thing to do? I changed it to "keep only the last version" but that hasn't helped.
Does anyone have experience with this?
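The zero-size second version you describe is consistent with an S3 versioning delete marker. A small helper over a boto3-style list_object_versions response makes the situation visible (a sketch; the response shape follows AWS's documented format, and the sample data is fabricated):

```python
def summarize_versions(listing: dict) -> dict:
    """Count object versions and delete markers in a list_object_versions
    response. On a versioned bucket, deleting an object only adds a
    zero-size delete marker; the original version remains and still
    consumes (and bills for) storage."""
    return {
        "versions": len(listing.get("Versions", [])),
        "delete_markers": len(listing.get("DeleteMarkers", [])),
    }

# Fabricated example response for a "deleted" backup file:
sample = {
    "Versions": [{"Key": "backup.vbk", "Size": 1024, "IsLatest": False}],
    "DeleteMarkers": [{"Key": "backup.vbk", "IsLatest": True}],
}
# summarize_versions(sample) -> {"versions": 1, "delete_markers": 1}
```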
-
- Veeam Software
- Posts: 3626
- Liked: 608 times
- Joined: Aug 28, 2013 8:23 am
- Full Name: Petr Makarov
- Location: Prague, Czech Republic
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Any chance that Immutability is configured at the level of the object storage repository settings?
Thanks!
-
- Enthusiast
- Posts: 73
- Liked: 7 times
- Joined: Apr 07, 2017 5:30 pm
- Full Name: Selva Nair
- Location: Canada
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
No immutability (object lock). In fact Backblaze doesn't support it, though they have plans to.
I checked again with Backblaze support and was told the lifecycle should be left at the default (keep all versions). I've reverted it and will wait and see, but from B&R's point of view the job is gone, the files are deleted, and a rescan doesn't find them, so this may not make a difference now.
I will wait and see what happens when the retention rule kicks in on an active job.
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
According to the Backblaze S3 docs, they have versioning enabled by default on their buckets, so I believe you are seeing expected behavior. I think you'll want to set the lifecycle rules to delete your old objects within a short period after Veeam marks them as deleted.
-
- Enthusiast
- Posts: 73
- Liked: 7 times
- Joined: Apr 07, 2017 5:30 pm
- Full Name: Selva Nair
- Location: Canada
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Right, that's what one would think based on their default settings (keep all versions). I tested using the S3 API to create and delete a file and see the same behaviour -- on deletion a new hidden file with zero size appears and the old one is retained as a previous version. But Backblaze support insists on leaving the lifecycle rule at its default, saying it doesn't do anything unless an explicit rule is added -- but their "do nothing" seems to mean keep all versions.
This makes little sense unless they expect Veeam to control lifecycle or delete files by specifying versions.
My deleted files are still there after 48 hours, so I'm now testing with explicit lifecycle rules -- it could take 24 hours for the rules to run.
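For reference, B2's native lifecycle rules are expressed as JSON. The console's "keep only the last version" preset corresponds to a rule along these lines (field names per Backblaze's lifecycle-rules documentation; treat the exact values as an assumption):

```python
import json

# Sketch of a B2 native lifecycle rule: hide nothing on upload, and
# delete hidden versions (superseded or deleted files) one day after
# they are hidden -- i.e. "keep only the last version".
keep_only_last_version = [{
    "fileNamePrefix": "",
    "daysFromUploadingToHiding": None,
    "daysFromHidingToDeleting": 1,
}]

payload = json.dumps(keep_only_last_version)
```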
-
- Expert
- Posts: 111
- Liked: 10 times
- Joined: Nov 21, 2017 7:18 am
- Full Name: Peter Helfer
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
Great news here, I thought... a chance to save some money.
So I just added a Backblaze bucket for S3 workloads from Veeam as the capacity tier in a scale-out repository.
Actually, I already have an Azure capacity tier in place.
As far as I can see, there is currently no possibility to have two different capacity tiers in a SOBR.
When I add the Backblaze bucket as the capacity tier to the SOBR, it gives a warning that the current Azure tier will be disabled.
I thought it would be possible to add a second capacity tier and then move the data between the two capacity tiers somehow.
But it seems this is not possible.
Am I correct that currently the only option would be to first put the Azure capacity tier into maintenance mode?
That will download all the contents to the performance tier?
Then I could change the capacity tier to the Backblaze bucket and Veeam would then offload everything again to that cloud.
Or is there another way to transfer the data between two different capacity tiers?
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
MCH_helferlein wrote: ↑Jul 13, 2020 9:29 am Am I correct that currently it would be only possible to first put the Azure Capacity tier into maintenance mode?
This will download all the contents to the performance tier?
Then I could change to the backblaze bucket as capacity tier and Veeam would the offload all again to that cloud.
That is correct. Object storage vendors don't implement APIs for direct copies to competing clouds, so all data has to go through some 3rd-party software, in this case Veeam. Also, keep in mind the egress fees associated with downloading all backups.
-
- Enthusiast
- Posts: 73
- Liked: 7 times
- Joined: Apr 07, 2017 5:30 pm
- Full Name: Selva Nair
- Location: Canada
- Contact:
Re: Backblaze B2 S3 Compatible API tested?
selva wrote: ↑Jun 30, 2020 5:34 pm Right, that's what one would think based on their default settings (keep all versions). I tested using S3 API to create and delete a files and see the same behaviour -- on deletion a new hidden file with zero size appears and old one is retained as previous version. But backblaze support insists on leaving the lifecycle rule at its default saying it doesn't do anything unless an explicit rule is added -- but their "do nothing" seems to mean keep all versions.
Just a follow-up comment: I finally got confirmation from Backblaze that their initial recommendation was not right, and setting the lifecycle policy to "keep only the last version" is required to recover storage after Veeam Backup deletes objects. Though it took a while for them to confirm this, I've been using B2 storage with Veeam like this for a while now with no issues. It takes about 24 hours to release storage, but that's much better than deleted objects being retained forever by default.