-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Feb 23, 2021 2:44 pm
- Full Name: Andrew Peplinski
- Contact:
Backup Directly To Object Storage
Hello - I'm in the middle of re-architecting our backup infrastructure, and was curious if backing up directly to object storage is on the roadmap, and if so what that timeline might be?
Our existing "performance" tier is very old storage, which is off warranty (and off lease terms), and speccing out 80tb of storage only for a backup target is a very hard sell to our budgeting committee; it would be ideal if I could just push our backups to S3 directly, no matter how long it takes, but as that's not currently available I am kind of stuck.
Any guidance would be welcome. Thanks!
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: Backup Directly To Object Storage
Currently you would need to land on a performance tier first. I understand your request.
Are backups of critical systems not important to the budgeting committee?
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
This is a good point for discussion. I want to specifically touch on the following three drawbacks of this approach, and ask how you consider them acceptable:
andrew.peplinski wrote: ↑ Feb 23, 2021 2:53 pm
it would be ideal if I could just push our backups to S3 directly, no matter how long it takes
1. Extremely long VM snapshot lifetimes when backups take "no matter how long".
a) This will cause a significant impact on production workloads during the long periods when those huge snapshots are committed after each backup.
b) High risk of primary datastores overfilling with snapshot data, causing hard production VM stops. You could increase your production storage capacity, but that again means buying more storage (only a few times more expensive than your typical backup storage), while avoiding a storage purchase is the whole point here.
2. Restore directly from S3 taking "no matter how long" means potentially a few days of downtime for your critical servers. How can this possibly be acceptable to any business?
3. Your only copy will be in the cloud, so you won't be in compliance with the 3-2-1 rule of backup, which is written with the blood of backup admins based on decades of experience.
In general, of course we will support backup direct to object storage down the road, simply because more and more on-prem object storage devices appear every day. But I am always curious to learn the thinking behind going direct to cloud object storage given all the drawbacks of this approach.
-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Feb 23, 2021 2:44 pm
- Full Name: Andrew Peplinski
- Contact:
Re: Backup Directly To Object Storage
Our backups are currently only serving as off-site copies, or for DR - any data we need recovered quickly, we recover from daily/weekly/monthly rolling storage snapshots, with varying degrees of retention (daily for 2 weeks, weekly for 4 weeks, monthly for 15 months). Veeam already integrates nicely with our Nimble array, so shipping VM backups directly from our existing snapshots to S3 seems like the best possible option.
The previous admin was using LTO7 tape for his backups, but with much of our business going remote (and the dwindling usefulness of magnetic tape as it approaches end-of-life [in my opinion]), I would like to avoid that. It's currently my "best" option, although it's a bitter pill to swallow. Virtual tapes are reliant on the storage tier as well, and with Glacier being the fastest tier we can use, we have had MANY issues with reliability.
As I said earlier, the existing on-premises (non-Nimble) storage is end-of-life, off warranty, and on a month-to-month lease; selling 80tb+ of NAS storage to my budget committee, when its only purpose is to be temporary storage before shipping backups to the scale-out repository I have on S3, is essentially a non-starter.
SO - TL;DR: I guess I'm not really looking for "guidance", just advice on how long I will likely have to wait for this feature.
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
This is rather me looking for guidance, because knowing the use cases of backing up direct to cloud object storage and the type of customers willing to use this approach helps us design a better solution. So thanks for explaining your situation.
Would this be the correct summary then:
1. This is not a concern because you are using our storage snapshot integration, which I agree does fully address the above-mentioned issues.
2. In case of loss of primary storage, 10 days of downtime to pull everything back from cloud (assuming 1 Gbps downlink) is acceptable for your business.
3. You consider the storage snapshot to be the first real "backup" for the purposes of the 3-2-1 rule. Since this is religious stuff, we will just leave it at that.
As for backup direct to object storage, this is something we're actively working on already. We should have a good idea about the release vehicle in the next few months, once the implementation is done and we know for sure there are no more roadblocks.
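The 10-day figure in point 2 can be sanity-checked with back-of-envelope arithmetic, using the 80 TB mentioned in the original post; the ~75% effective link utilization is an illustrative assumption, not a measured value:

```python
def restore_days(data_tb: float, link_gbps: float, efficiency: float = 0.75) -> float:
    """Days to pull a full dataset back over a given downlink."""
    bits = data_tb * 1e12 * 8                        # dataset size in bits (decimal TB)
    seconds = bits / (link_gbps * 1e9 * efficiency)  # sustained effective throughput
    return seconds / 86400

# 80 TB over a 1 Gbps downlink at 75% utilization
print(round(restore_days(80, 1.0), 1))  # → 9.9 (days), i.e. roughly 10 days
```

Any retrieval latency or throttling on the object storage side would only push this number higher.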
-
- Novice
- Posts: 5
- Liked: 2 times
- Joined: Feb 23, 2021 2:44 pm
- Full Name: Andrew Peplinski
- Contact:
Re: Backup Directly To Object Storage
Yes, I think that's an apt summation, with some minor tweaks:
1. We are actually not using Veeam at all on this layer - the snapshots are wholly on the Nimble array, and for "small" data recovery we will present snapshot clones to vCenter. Since it's not file data (yet), the process is relatively simple (and fast). Our Veeam backups use the temporary snapshot model (I'm still exploring using snapshots as the backup repository, as it may address many more issues, but without capturing the file data it's still just "testing").
2. We will likely ALSO do periodic tape backups to keep onsite in case of emergencies (we have it, so why not use it in some fashion). The cloud copies will end up being for true DR (say, some catastrophic infrastructure failure in our "datacenter"), in which case they will be happy to have ANYthing. And that model could easily change - I'm still testing that workflow for possible hurdles (mainly replication time to S3 on our 500Mb uplink).
3. I agree on the "religiosity" of the topic - for example, I never in my life would have thought a user community would pivot to using Dropbox as their primary source of data for active projects, but our VFX folks map a local drive to Dropbox and sync up to the cloud for their workflow. It all comes around to best-serving the user community, in the most efficient way possible.
That's great news, thanks - and will likely align with our (prospective) return to the office; I'm already pitching our upgrade to Enterprise Plus from Enterprise for the snapshot integration, as we are expanding the Nimble array to accommodate our file data (which currently sits on a bare-metal server of the same ilk as my "performance tier" backup target). In my estimation that is a better expenditure than co-locating a second Nimble array somewhere and doing block-level replication (even though that is my personal favorite [re: the religiosity again ]).
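The replication-time hurdle on the 500 Mb uplink mentioned above can be estimated the same way; the 1 TB daily delta and the ~75% effective utilization below are purely assumed figures for illustration:

```python
def upload_hours(delta_tb: float, uplink_mbps: float, efficiency: float = 0.75) -> float:
    """Hours to push a given backup delta to object storage over a given uplink."""
    bits = delta_tb * 1e12 * 8                       # delta size in bits (decimal TB)
    return bits / (uplink_mbps * 1e6 * efficiency) / 3600

# 1 TB of changed data over a 500 Mbps uplink at 75% utilization
print(round(upload_hours(1.0, 500), 1))  # → 5.9 (hours)
```

So a nightly delta in that range would comfortably fit in a backup window, while a full 80 TB seed over the same link would take weeks.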
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
[MERGED] Backup directly to object storage
Folks, I think this is probably one of the most asked questions: "why is it not possible to back up directly to an object storage?" Well, if I consider the latest performance statistics (of v11) by Anton Gostev 2 weeks ago (11 GB/sec), then this might be one reason - it might be hard to get an object storage with such a connection, at least it would be hard to get one in the cloud.
But what if it doesn't matter if it takes some time (hours) to write the data to the (cloud) object storage, or if you've got very small deltas? Writing the backups (vbk's, vib's) to a local cache and starting the sync from there should do the trick, like VBO already does today. Of course, restores wouldn't be that fast, but you've got the advantage that you don't need to hold your backups on prem, etc. There might be other reasons as well, like avoiding disasters by not shipping too many (object storage) functionalities at once - ReFS comes to my mind - but now I'm curious and would like to know if that is already on the list for v12, or if this functionality maybe will never be implemented (for good reasons).
Please give us some insights - thanks
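The "local cache, then sync" idea could be sketched roughly like this - a hypothetical helper (all names illustrative, not Veeam code) that decides which backup files in the cache still need offloading; the actual upload call (e.g. via an S3 client) is deliberately left out:

```python
import os

def pending_uploads(cache_dir: str, existing_keys: set) -> list:
    """Return backup files in the local cache not yet present in object storage."""
    pending = []
    for name in sorted(os.listdir(cache_dir)):
        # only full (.vbk) and incremental (.vib) backup files are sync candidates
        if name.endswith((".vbk", ".vib")) and name not in existing_keys:
            pending.append(name)
    return pending
```

A background task would call this after each job run and upload whatever comes back, so the cache drains asynchronously regardless of how slow the cloud link is.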
-
- Product Manager
- Posts: 14835
- Liked: 3082 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Backup Directly To Object Storage
Hello Michael,
I split your question away from the other topic.
I merged it with the existing discussion. For more answers, I can recommend a forum search with the keywords "direct object" and author "gostev".
Best regards,
Hannes
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
[MERGED] Cloud Connect still the only option to backup directly to cloud?
Hi,
we have some smaller locations with small clusters or just a single ESXi host. Usually these locations have no budget for dedicated backup storage; we are happy if they have budget for an ESXi host or a small cluster.
As far as I understand, the only option to back up these VMs directly to the cloud is Cloud Connect. Is this still true? Currently we are using our other backup tool for this, but I'd like to have those VMs in Veeam too. Now that we have a lot of offloading/cloud stuff in Veeam, I'd hope that this is at least on the roadmap. We already use S3 buckets for offloading, so using an S3 bucket for those locations would be a logical next step.
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
Hello, I merged your post into the existing discussion. Please see the implications of this approach above, and share why they do not apply to your environment?
-
- Veteran
- Posts: 599
- Liked: 87 times
- Joined: Dec 20, 2015 6:24 pm
- Contact:
Re: Backup Directly To Object Storage
In our case it's simple: no budget. For this use case we are talking about locations and foreign subsidiaries that act mostly independently. We cannot force them to use any of our standards; we can only work in a best-effort mode. If such a subsidiary has very low revenue, they simply cannot afford dedicated backup storage, which usually would be a small to medium QNAP in our case. We communicate the risks and try to convince them, but it's not that easy if they have very low profit. The only good thing is, we are not responsible that there are no backups.
With this in mind we try our best to provide them with _some_ backup. Currently we are using a mix of snapshot and agent backups with our other backup solution, agent backups are usually faster. Direct backup to S3 works surprisingly well. Those locations do not have a high amount of data. Most of the time they just have a SCCM/Printing VM which does not need to be backed up. But sometimes they start enjoying virtualization and suddenly there is a VM running that hosts an application.
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
Got it, thanks for your explanation!
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Backup Directly To Object Storage
I'd like to add my thoughts to this topic. Think about a scenario where you're running VMs on Azure - wouldn't it be cool to write those backups directly to Azure Blob storage? Since these two instances are 'close together', backups and even restores would be really fast (I guess).
A scenario that I had from time to time is the archive use case: you have a VM that you don't need anymore and you'd like to store it on the object storage as an archive (maybe also in combination with immutability). Now you have to run a backup job, maybe also a copy job, and then the offload starts (if your parameters are set correctly) - but then your backups are still on prem and waste space. So you have to delete them, remove them from the config, do a rescan of the object storage to import those from the object storage, and only then is the 'archive' how you need it. Being able to point the backup target directly to the object storage would be much easier. Of course, don't forget 3-2-1, but if your goal is to bring it to the object storage, you always have those local vbk's that you maybe don't need. I'm talking about archives that you maybe need once in 2 years, where it wouldn't matter if it takes 2 days to do a download. If you just need a single file, there wouldn't be a need at all to download the whole vbk. The longer I think about this scenario, the more I'd say that your copy jobs should support object storage as a target. Then you could pick your VM (or chain) and archive it on the object storage, maybe also with GFS retention. Currently you always have the local copy too - wasted space most of the time.
Anton, please share your thoughts. Thank you!
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
Hi, Michael.
For VMs running on Azure, I agree it makes perfect sense. However, we already support backup direct to object storage with our Veeam Backup for Microsoft Azure, which is the product that is purpose-built for Azure VMs protection. And as of V11, it is fully integrated into Veeam Backup & Replication and can be managed directly from your backup console.
VeeamZIP of an on-prem machine direct to object storage before decommissioning is a great use case for object storage indeed. However, this is possible today with V11 through the creation of dedicated SOBR with the Move policy on a very short time span (1 day) and using that as a target for VeeamZIP. This way, the backup leaves on-premises storage almost immediately, without wasting on-prem disk space.
Keep the use cases coming! As with almost everything, chances are you can already do it with Veeam today. And if something is missing, knowing the need will help us better design the direct-to-object capability.
-
- Enthusiast
- Posts: 42
- Liked: 5 times
- Joined: May 17, 2018 2:22 pm
- Full Name: grant albitz
- Contact:
Re: Backup Directly To Object Storage
I just had to set up my second instance of using Wasabi to support our backups. So most of my frustrations are still in my head =)
We have daily backups to a performance tier. There are 16 servers in the environment. Most are utility-based and HA. We really don't have a requirement to send these to the cloud. For example, for a pair of HAProxy Linux servers, the information really needed is haproxy.conf and a couple of certificates. We don't need the OS backed up to the cloud. As a matter of fact, there are probably only 2 I would send to the cloud in this case.
I really hate the way we have to do cloud backup: at the repository level, not at the VM level. In the past, for another client, I overspent on their local storage and created 1 repo that backs up every day, and we keep that for 90 days. Then we created another repo and another backup job that backs up the same systems and keeps them for 14 days. That 14-day repo has a scale-out to Wasabi, and we have the option enabled to copy immediately. If I could pick, I would want to create a backup copy job of our normal backups and, for just 1 VM, send it to the cloud and keep just 1 local repo. It becomes cumbersome and difficult to split the existing storage space into 2 repos for essentially the same VMs. There also has to be different retention for the cloud vs on prem. I can imagine that creates problems on the back end, but at least if we had the option to run a second local backup for a single VM and set that retention time for the cloud differently, I would hope to only need 1 local repo for both use cases, worst case. You support the agent direct to a cloud connect partner; I'm curious how different this solution is?
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
Cloud Connect partners run regular repositories, so supporting backup directly to them costs us nothing. Having said that, the vast majority of Cloud Connect tenants are doing Backup Copy and not primary backups, for the exact same reasons I have already mentioned.
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Backup Directly To Object Storage
Anton, I've got another use case: agents with a backup target repository. Especially in these COVID days, some folks are working from their home office and won't have a direct connection to our repository (although technically possible). It would be nice if their agent would back up to the local cache (already possible now) and then perform the offload to the object storage. Of course, it would need a sync on the server side with the repository, and the topics around security/permissions would have to be cleared, but maybe it's something to think about...
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
Yes, security (multi-tenancy) is the biggest challenge for this scenario, unless of course you're willing to create and manage a separate S3 storage account for each agent - which is too much work. Amazon actually does provide a nice solution to this, but it's not part of the S3 protocol, so it does not apply to Wasabi, Backblaze and other S3-compatible storage. The universal solution would still require Veeam Backup & Replication managing access to object storage in a multi-tenant environment.
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Backup Directly To Object Storage
Hmm... Anton, I was just thinking about a little proxy module that could be deployed wherever you like, just like a backup proxy. That proxy module would receive the backed-up data from the clients' caches and write it to the object storage. The proxy module holds (or fetches) the credentials for the repository and would manage all the structural stuff (which client, which folder, etc.), and to the client it would just look like a "repository". All it would take is internet access...
What do you think?
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
So basically how Veeam Cloud Connect works today, where the backup server manages multi-tenancy.
-
- Veeam Legend
- Posts: 945
- Liked: 221 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
- Location: Rheintal, Austria
- Contact:
Re: Backup Directly To Object Storage
Yep. I'm not familiar with Cloud Connect, so it looks like I had no new ideas.
Is such a feature on the roadmap?
-
- Chief Product Officer
- Posts: 31796
- Liked: 7297 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Backup Directly To Object Storage
Yes, this is something we're actively investigating.
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Aug 30, 2021 1:17 pm
- Full Name: Dominic Shoemaker
- Contact:
Re: Backup Directly To Object Storage
Commenting here to voice our organization's desire to back up directly to object storage without the need for a SOBR. Going straight to object storage would be perfect.
-
- Product Manager
- Posts: 20389
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Backup Directly To Object Storage
It will be included in version 12, so stay tuned. Thanks!