Three tips to get the right bandwidth for cloud services
Cloud services are bringing “no mess, no fuss” IT one step closer, making them one of the hottest topics in IT and business at present. But, says Shaheen Kalla, Manager of the Managed Services Department at ContinuitySA, many companies experience problems with the cloud because they fail to spec the right bandwidth for their requirements.
“Cloud computing in general is all about connectivity, so it makes sense to spend time upfront making sure that your bandwidth requirements have been properly scoped,” Kalla argues. “It’s even more important when you are using the cloud computing model for business-critical applications, such as mail and database environments.”
ContinuitySA, as Africa’s largest provider of business continuity solutions, offers its clients replication as a managed service to replace old-style, expensive and unreliable tape backups. Automated, online replication means that the production environment is copied at regular, predefined intervals and can be made available to clients very rapidly in the event of a disaster. Whereas it typically takes a week or more to rebuild an IT system from tape backups, Kalla says replication can reduce the recovery time to hours.
The benefits of the service mean that take-up in South Africa is good. However, to ensure that the scheduled replication takes place successfully, it’s vital to have the correct type and quality of connection between the production environment and the business continuity data centre, Kalla warns. He says that falling bandwidth costs mean that expense is no longer the factor it once was.
“In our experience, we find that companies often don’t really have any idea of the extent of change on their production systems. They also tend not to factor in daily or weekly spikes, such as when large data or operational ‘data dumps’ take place. Also, applications like databases can be imperfectly written, so that small incremental changes cause the whole database to be rewritten,” Kalla explains. “If issues like this aren’t identified, then the bandwidth requested from the provider will be inadequate, creating rather than mitigating risk.”
Kalla offers three tips for ensuring that the bandwidth required for successful replication is identified:
- Establish the production environment’s rate of change. This is the most critical aspect to get right, Kalla says. Companies are often shocked at how much their systems change during the work day, with high rates of change requiring higher levels of bandwidth for replication. “Of course, excessively high rates of change can also point to deficiencies in application design,” Kalla adds.
- Consider the location of both the production and disaster recovery sites. Some providers may not cover certain areas, or there might be limited technology options. If the production environment is housed in a third-party data centre, there may be restrictions on which network providers can terminate links there.
- Look beyond replication to disaster recovery. A common error is to spec the bandwidth needed based only on routine replication. But, Kalla advises, one also needs to bear in mind the bandwidth a disaster situation would demand, for example when multiple people are accessing the restored environment at the disaster recovery site (a simple sizing sketch follows below).
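To make the arithmetic behind these tips concrete, the following minimal Python sketch shows one way to size both requirements. The daily change volume, replication window, peak factor and per-user figures are hypothetical placeholders rather than ContinuitySA recommendations; the point is simply that the replication link and the disaster-recovery access load should be estimated separately, and the larger figure provisioned.

```python
# Back-of-envelope bandwidth sizing for replication and disaster recovery.
# All figures below are illustrative assumptions, not vendor guidance:
# measure your own daily change rate and spikes before committing to a link.

def replication_bandwidth_mbps(daily_change_gb: float,
                               replication_window_hours: float,
                               peak_factor: float = 2.0) -> float:
    """Link speed (Mbps) needed to ship the daily change set within the window,
    with headroom (peak_factor) for data dumps and other spikes."""
    megabits_to_move = daily_change_gb * 8 * 1000  # GB -> megabits (decimal units)
    seconds = replication_window_hours * 3600
    return peak_factor * megabits_to_move / seconds

def dr_access_bandwidth_mbps(concurrent_users: int,
                             mbps_per_user: float = 0.5) -> float:
    """Extra capacity needed when staff work against the recovered environment."""
    return concurrent_users * mbps_per_user

if __name__ == "__main__":
    # Example: 50 GB of change per day, replicated overnight in an 8-hour window,
    # plus 100 staff accessing the DR site after a failover.
    repl = replication_bandwidth_mbps(daily_change_gb=50, replication_window_hours=8)
    dr = dr_access_bandwidth_mbps(concurrent_users=100)
    print(f"Replication link: ~{repl:.0f} Mbps; DR access: ~{dr:.0f} Mbps")
    print(f"Provision for the larger of the two: ~{max(repl, dr):.0f} Mbps")
```

With these illustrative numbers, overnight replication needs roughly 28 Mbps while post-failover user access needs about 50 Mbps, which underlines Kalla’s point that sizing for routine replication alone can understate the real requirement.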
This type of thinking, Kalla concludes, is not just applicable to replication and disaster recovery, but can and should be adapted when commissioning any cloud service.