
Disaster Recovery in a Multi-Cloud Environment

When we think about disaster recovery, we normally take our cues from nature and consider hurricanes, tornadoes, floods, fires, etc. Sometimes we might consider an aircraft crashing into the facility and other extreme scenarios. To mitigate these potential risks we draw up business continuity plans and disaster recovery plans.

From a threat to data perspective, the foundation of recovering from one of these disasters has long been the offsite backup – a copy of the data stored at a physically separate and somewhat distant location that is judged unlikely to be subject to the same disaster at the same time.

The cloud allows you to automatically have redundant storage, either within the same data center or at another data center in the same region, often at no additional cost. If a natural disaster were to strike the cloud data center you are using, your data remains safe and available from another location.

While adopting a regional redundancy strategy protects you against data loss from natural disasters, it does nothing to protect your data from the more common threats posed by human error and malicious actors.

Data Security’s Most Damaging Threats

  • Negligent Insiders – employees, contractors, or business associates who have no bad intentions but make mistakes in configuring or using the system
  • Malicious Insiders – employees, contractors, or business associates making deliberate attempts to cause damage
  • Infiltrators – people external to the organization who have surreptitiously gained access with malicious intent

The news provides a constant stream of organizations beset by malware, ransomware, and data theft. Redundant storage does nothing to defend against these threats, because actions against a file are rapidly replicated across the redundant copies. Even a simple file deletion is quickly propagated, and the data is lost.
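The effect can be seen in a minimal sketch, with plain Python dictionaries standing in for a primary store and its redundant copy (no real cloud API is used; file names and contents are hypothetical): replication faithfully propagates a delete to every copy.

```python
# Illustrative sketch only: two dictionaries stand in for a primary store
# and its replicated copy.
primary = {"survey_001.segy": b"trace data"}
replica = dict(primary)  # redundancy: the replica mirrors the primary

def delete(name: str) -> None:
    """A delete against the primary is replicated to the redundant copy."""
    primary.pop(name, None)
    replica.pop(name, None)  # replication propagates the delete too

delete("survey_001.segy")
print("survey_001.segy" in replica)  # False: the redundant copy is gone as well
```

Redundancy protects against hardware failure, not against the action itself.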

Data Recovery Protection – Common Methods

To guard against this, cloud providers offer soft delete or versioning. These allow you to recover from a known delete event, but if you are not aware of the deletion within the configured retention period, the data becomes unrecoverable. Also, any object that has been soft deleted or archived incurs additional storage cost, as it is effectively a copy.
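The retention-window limitation can be sketched as follows. This is a hypothetical illustration, not any provider's API, and the 14-day window is an assumed setting:

```python
from datetime import datetime, timedelta

# Hypothetical retention setting for soft-deleted objects.
RETENTION = timedelta(days=14)

def is_recoverable(deleted_at: datetime, now: datetime) -> bool:
    """A soft-deleted object can be restored only while `now`
    still falls inside the retention window."""
    return now - deleted_at <= RETENTION

deleted = datetime(2024, 1, 1)
print(is_recoverable(deleted, datetime(2024, 1, 10)))  # True: within window
print(is_recoverable(deleted, datetime(2024, 2, 1)))   # False: window expired
```

If nobody notices the deletion before the window closes, soft delete offers no recourse.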

For a more resilient approach, you might choose an asynchronous geo-redundant copy stored in another part of the world. Again, the delete will eventually catch up with this copy, and you bear the cost of the additional storage plus the inter-regional transfer fees, both to send the data out and to bring it back as part of your recovery.

Another approach you might take is replicating your data into a separate management plane. This would prevent the insider or infiltrator with access to the primary plane from damaging data stored in the redundant plane, but you still have the costs of the duplicated storage and you still have to manage the propagation of deletes.

Indeed, data doesn’t have to be deleted to be effectively lost. A simple file rename, or a move from one storage account to another, is all it takes to lose track of the data. With these events replicated synchronously or asynchronously, the risk of data loss becomes all too real.

Best Practices for Secure Data Management

The most secure method, then, for isolating a Disaster Recovery (DR) data set from interference is to keep that data set disconnected from the network. At Katalyst Data Management, we store your data in two separate locations: a primary data set in an IBM Spectrum Protect™ (TSM) environment and an offline backup on tape. Where allowed by data sovereignty laws, we keep the DR offline tape in a different country from the primary, as this also provides protection from political risk.

Katalyst also manages data in client-managed public cloud storage environments. For these environments we also recommend that an offline copy of the data be maintained at Katalyst. Even if you have configured soft delete or versioning on your cloud data stores, many subsurface data files can sit in storage for years without being viewed. The background processes in the cloud will automatically repair them if they suffer bit rot in storage, but if they are mistakenly deleted or moved, they may be long gone by the time you decide to revisit them.

Whether you store your data at a Katalyst location or in a Katalyst-managed public cloud environment, we maintain a DR copy of your data offline.

As we’ve seen with recent data breaches, the responsibility for security in the cloud lies with the cloud account owner and not with the cloud service provider. A simple mistake in storage configuration or access provisioning could lead directly to an irretrievable loss of your data assets.

Accurate Inventory is Crucial

Without a DR backup strategy that physically or logically separates your backup from the primary data, your only recourse in this event is to reacquire the lost data. Your challenge then lies in identifying what has been lost, and for that you need an accurate inventory. Performing a data audit and utilizing a subsurface data management solution, such as Katalyst’s PPDM-certified iGlass database, are essential to creating a solid DR backup strategy.

When you use iGlass as your data asset management system, you have peace of mind that you have an accurate inventory of your data assets that is physically and logically separate from your cloud data store. A simple process reconciles your cloud storage account details to iGlass in order to audit and identify any change in file count. In addition, an MD5 checksum comparison can be used at any time to confirm that a file is identical to the version recorded when it was loaded into iGlass.
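As a rough illustration of how such a checksum comparison works, the sketch below uses Python's standard `hashlib`; the file contents and workflow are hypothetical and not the iGlass implementation:

```python
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 hex digest of a file's contents."""
    return hashlib.md5(data).hexdigest()

# Checksum recorded at load time (hypothetical file contents).
loaded_hash = md5_of(b"SEGY trace data v1")

# Later, re-read the object from storage and compare the digests.
current_hash = md5_of(b"SEGY trace data v1")
print(current_hash == loaded_hash)  # True: file unchanged since load
```

A mismatch between the two digests would indicate that the stored file no longer matches what was originally inventoried.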

To learn more about iGlass and how to implement a disaster recovery strategy for your subsurface data, please contact us.
