The buzz around software defined storage, networking and security is sparking new concerns in businesses. Grant Vine, Technical director at Cybervine IT Solutions, explains what the terms mean and where they could lead.
By Grant Vine, Technical director at Cybervine IT Solutions
The terms software defined storage (SDS), software defined networking (SDN) and even software defined security and firewalls are being bandied around a great deal lately, sparking concern among those who aren’t clear on what the terms mean.
We have heard from several customers lately that they are anxious to be ‘ready for SDN’ even though they are unsure why, or what the implications are.
This trend toward ‘software defined everything’ could be better described as adding an abstraction layer to existing infrastructure, one that allows for greater control over the allocation of resources, with resulting cost benefits.
It is only within large enterprises with multi-tiered datacentre environments that SDN will make a tangible difference. Its greatest impact will be felt in multi-tenant hosting facilities, simply because of the nature of the network infrastructure and the independent configuration required for hosted service delivery. For small to mid-sized enterprises with a flat, simple network structure, the impact is virtually zero. Software defined storage, on the other hand, has value for businesses of all sizes: it allows them to dictate different metrics for their storage requirements while offering flexibility in how they invest in storage and choose the right vendor.
Simplistically, SDN might be compared to changing the way a factory is run. Assume you have a widget packing factory, where the assembly line is the hardware and the people packing the widgets are the software. The people (or software) can be upgraded or changed, but the assembly line (or hardware) remains fairly static.
When your factory has orders of various sizes to fulfil, you may need to allocate your best packing resource to the regular bulk orders, while the smaller ad hoc orders are processed differently, at a different expected pace of packing. So you design the assembly lines to pack boxes as efficiently as possible while maintaining a standardized mechanism for packing widgets at a reasonable cost: you might split packing across different assembly lines and change the staff or the number of items per box accordingly. To apply a “software defined” abstraction, you would put a foreman in charge (the abstraction layer) who looks at the overall SLA requirements and allocates the correct resources to each assembly line to deliver on service level requirements and delivery time frames within the constraints of the SLA. In essence, this removes (abstracts) the control from the execution layer, separating the definition of compliance requirements from the application of compliance commitments.
SDN adds this abstraction layer on top of (around) existing network infrastructure to allow for a standardized definition of the SLA resourcing a company or business unit may request from the network. Configuration is typically handled through a self-service framework, with clients of the infrastructure paying for their actual network requirements, rather than being generalized among everyone else and paying a “flat rate” that may or may not include prioritization over other entities sharing the same infrastructure.
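As a rough illustration of this idea, the sketch below models a hypothetical controller that plays the foreman's role: tenants declare what they need, the controller decides whether the shared infrastructure can honour the request, and billing follows actual reservations rather than a flat rate. All class and field names here are invented for illustration, not taken from any real SDN product.

```python
from dataclasses import dataclass

@dataclass
class SlaRequest:
    tenant: str
    bandwidth_mbps: int   # guaranteed bandwidth the tenant asks for
    priority: int         # 1 = highest

class SdnController:
    """Toy abstraction layer: tenants declare requirements;
    the controller decides how the shared link is carved up."""

    def __init__(self, link_capacity_mbps: int):
        self.capacity = link_capacity_mbps
        self.allocations: dict[str, int] = {}

    def request(self, sla: SlaRequest) -> bool:
        used = sum(self.allocations.values())
        if used + sla.bandwidth_mbps > self.capacity:
            return False  # reject rather than over-commit the link
        self.allocations[sla.tenant] = sla.bandwidth_mbps
        return True

    def monthly_bill(self, tenant: str, rate_per_mbps: float = 2.0) -> float:
        # pay for what was actually reserved, not a generalized flat rate
        return self.allocations.get(tenant, 0) * rate_per_mbps

controller = SdnController(link_capacity_mbps=1000)
controller.request(SlaRequest("acme", 600, priority=1))   # accepted
controller.request(SlaRequest("beta", 500, priority=2))   # rejected: over capacity
```

The point of the sketch is the separation of concerns: the tenant expresses intent (an SLA), and the controller alone decides how that intent maps onto the underlying hardware.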
In future, there is the potential to add an abstraction layer on top of the “Software Defined Everything” layer, allowing for more efficient buying and provisioning of all IT services. The closer cloud service providers come to a standardized API for provisioning infrastructure capacity, the higher the likelihood that in future it will not matter which provider you run with: it will be about what is most cost effective while best meeting your requirements.
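If providers did converge on a standard provisioning interface, choosing a vendor would reduce to a cost comparison over interchangeable offers. The sketch below assumes such a standard exists; the provider names, fields and function are hypothetical placeholders.

```python
# Hypothetical catalogue of providers exposing one standard capacity API.
providers = [
    {"name": "cloud-a", "cpus": 16, "ram_gb": 64, "price_per_hour": 1.40},
    {"name": "cloud-b", "cpus": 8,  "ram_gb": 32, "price_per_hour": 0.55},
    {"name": "cloud-c", "cpus": 16, "ram_gb": 64, "price_per_hour": 1.10},
]

def cheapest_fit(providers: list[dict], min_cpus: int, min_ram_gb: int):
    """Return the lowest-cost provider meeting the requirements,
    or None if no offer fits."""
    fits = [p for p in providers
            if p["cpus"] >= min_cpus and p["ram_gb"] >= min_ram_gb]
    return min(fits, key=lambda p: p["price_per_hour"]) if fits else None
```

Once capacity is described in a common vocabulary, switching vendors is a data decision rather than a re-integration project, which is exactly the flexibility the paragraph above anticipates.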
There may even come a time when computing and network capacity are traded on a commodities-style exchange. The more granular this trading became, the better the value to the end user. Buyers might purchase CPUs, memory or storage in component form from vendors around the world, changing vendors routinely in line with the best available offers. They could even trade their own excess capacity, bought earlier at a cost-effective price but never fully utilized, and therefore freely available for trade.
Essentially, this controlled abstraction of resource allocation is about finding the simplest, most effective way to operate while removing the human element from configuration, instead freeing up human capital for future skills development and environment improvements. As with the Industrial Revolution, the “Software Defined Everything” revolution is now freeing IT resources from mundane tasks, enhancing control, management, efficiency and cost savings, and simplifying IT.