By Modeen Malick, Senior Systems Engineer at Commvault South Africa.

Data centre architecture has undergone a massive evolution in the last 20 years, from endless rows of single-purpose, rack-mounted servers to virtualisation and the cloud, and finally to hyperconverged infrastructure. However, the data management strategy of many businesses, including data protection and secondary storage, has failed to keep pace with this evolution. If your data management is stuck in the past, it runs the risk of increased cost and complexity, ineffective backups, slow recoveries and more. This could cripple your business should a significant data loss event occur.

The challenge many businesses have is that, while hyperconvergence offers many benefits, including reduced cost and complexity, it is predominantly focused on primary workloads, applications and storage. This means that data management is still tied to the legacy model of one server, one role, with a back end made up of expensive, special purpose hardware. Since secondary storage makes up as much as 70% of the storage capacity in a data centre, there is a significant investment tied to this storage layer.

The reality is that backup and data management are as critical as any other application today. However, legacy data protection practices dedicate only limited compute resources to backup and recovery, which means a limited number of parallel streams, limited or non-existent load balancing and multiple bottlenecks. The outcome is a single daily recovery point, and if multiple recovery points are required, then multiple different backup processes need to be implemented. The upshot is slow recovery due to a shortage of data management resources.

Modernising your data management strategy along with your infrastructure enables you to apply cloud-like economics and advantages to backup in the same way we do to primary applications and storage. Once both infrastructure and strategy have been addressed, you can leverage scalable, pooled compute resources with thousands of parallel streams, complete load balancing and no bottlenecks. This means you can take advantage of multiple recovery points throughout the day, with a single backup process for all data, and therefore much faster recovery times with multiple, parallel recoveries.
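As a rough illustration of the pooled-compute idea, the sketch below runs several backup streams in parallel across a worker pool instead of queuing them behind a single backup server. The `backup_dataset` function and dataset names are hypothetical stand-ins, not any vendor's API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-dataset backup step; a real backup client would
# stream the dataset's changed blocks to secondary storage here.
def backup_dataset(name: str) -> str:
    return f"{name}: backed up"

datasets = ["finance-db", "mail-store", "file-share", "crm-db"]

# The pool load-balances datasets across workers, so streams run in
# parallel rather than serially through one constrained server.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(backup_dataset, datasets))

print(results)
```

The same pattern scales in the other direction too: with pooled resources, multiple restores can run in parallel during a recovery event.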

Hyperscale data management should address four key areas. It should simplify operations, minimising the effort required to operate systems, including deployment and management. It should ensure greater availability, with always-on data protection that is tolerant to failures and malicious or accidental data corruption. It should provide on-demand availability to scale systems quickly and easily, and it should offer cloud-like economics with the ability to purchase and pay only for what you use.

Modern data infrastructure and management solutions allow you to leverage high performance to optimise your recovery point and recovery time objectives. Restore performance should be delivered to enable full recovery from critical events such as ransomware or a data centre loss in just a few hours. Mission critical workloads can be recovered in less than an hour, and multiple recovery points throughout the day enhance backup performance.
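The link between recovery-point frequency and worst-case data loss is simple arithmetic, sketched below. The function name is illustrative only.

```python
# Worst-case data loss (the recovery point objective) is bounded by the
# interval between recovery points: with one backup per day, up to 24
# hours of data can be lost; with a recovery point every four hours,
# the worst case drops to four hours.
def worst_case_rpo_hours(recovery_points_per_day: int) -> float:
    return 24 / recovery_points_per_day

legacy = worst_case_rpo_hours(1)   # single daily recovery point
modern = worst_case_rpo_hours(6)   # a recovery point every 4 hours
print(legacy, modern)
```

This is why multiple recovery points throughout the day matter: each additional point directly shrinks the window of data a critical event can destroy.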

Additionally, with a modernised data infrastructure and modernised data practices you can start to add on operational intelligence to your data management. Artificial intelligence and machine learning can be used to vastly improve outcomes. For example, scheduling and load balancing can be automated to meet even the strictest recovery point objectives. Beyond simply protecting data, you can also begin to detect and alert on events within the environment before they become major problems.
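One simple form such detection can take is flagging a backup whose daily change rate sits far outside its historical baseline, since a sudden spike can indicate trouble such as mass encryption by ransomware. The figures and threshold below are invented for illustration, not drawn from any real product.

```python
import statistics

# Hypothetical daily change rates (GB changed per backup job).
history = [12.1, 11.8, 12.4, 11.9, 12.2, 12.0, 48.7]

baseline = history[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
latest = history[-1]

# Alert if the newest observation is far outside the baseline.
z = (latest - mean) / stdev
if z > 3:
    print(f"ALERT: change rate {latest} GB is {z:.1f} std devs above normal")
```

A real system would learn these baselines per workload and feed alerts into existing monitoring, but the principle is the same: treat backup telemetry as a signal, not just a log.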

Data management is a critical component of any business in the information age. If your data management strategy and infrastructure are stuck in the past, the time to change that is now. To meet the needs of today's enterprise, you need a unified, modern backup and recovery solution that delivers cloud-like services on premises.