The Software-Defined Data Centre Explained: Part I of II

Steve Phillips, CIO, Avnet Inc.

The traditional data centre is under more pressure than ever to keep up with the pace of change in business and technology.

Three recent megatrends—the growth of data and data analytics, the rise of cloud computing and the increasing criticality of technology to the operations of many businesses—have shown that legacy infrastructures are too inflexible, too inefficient and too expensive for a growing number of businesses.

In this first of two posts on this topic, we’ll briefly recap the evolution of the data centre to this point, examining where and why current architectures are falling short of many businesses’ needs.

THE TRADITIONAL HARDWARE-CENTRIC MODEL

From the beginning, data centres have been built around a hardware-centric model of expansion. Years ago, if your company wanted to launch a new application, the IT team would purchase and provision dedicated servers to handle the computing duties, pairing them with a dedicated storage unit to manage the application database and backup needs.

In order to ensure that the platform could handle surges in demand, IT would provision enough computing power and storage to meet the ‘high water mark’ of each application’s forecast demand. As a result, many servers and storage units spent most of their time running at a small fraction of capacity: as little as 8%, according to a 2008 McKinsey study.

To make matters worse, these server and storage pairs each used a dedicated, high-capacity network backbone that kept every platform in its own distinct silo. As a result, data centres were overbuilt by design, overly expensive, and slow to adapt to the changing needs of the business.

THE VIRTUALIZED DATA CENTRE

The adoption of server and storage virtualisation technologies in the enterprise about 10 years ago addressed some of these issues by allowing one server or storage array to do the job of many.

Provisioning a portion of a server or storage unit for a new application was faster and less expensive than buying new hardware, and it went some way towards reducing underutilisation… but not far enough. According to a Gartner report, utilisation rates had grown to only 12% by 2012.

However, increased computing densities allowed the data centre to provide significantly more computing horsepower to the business, without the need to expand the centre’s square footage in the process.

THE DATA CENTRE’S PERFECT STORM

But over the last five years, data centres have been buffeted by three megatrends that have pushed current virtualisation technologies to their limits:

  1. The exponential growth of data and the rise of data analytics, which have outstripped the most aggressive capacity forecasts for many storage and networking infrastructures.
  2. The adoption of cloud computing and the “hybrid cloud” model, in which the company data centre shares computing and storage responsibilities with third-party cloud vendors in remote locations.
  3. The increasing reliance of many businesses on always-on technology, which requires the IT team to provision and scale IT resources rapidly to accommodate new initiatives and business opportunities.

Cloud computing as a whole has also increased the rate of innovation and the expectations of the business, leaving IT teams and their data centres working hard to keep up.

One answer to these challenges is the software-defined data centre, or SDDC: a new architecture that allows IT to deliver applications and network capacity with speed, agility and an eye on the bottom line.

THE SOFTWARE-DEFINED DATA CENTRE

In a software-defined data centre (SDDC), the hardware-centric silos described above disappear. All the essential elements of a computing platform (computing, storage, networking and even security) are pooled, virtualised and managed through a common set of application programming interfaces, or APIs.

With all of the data centre’s hardware resources pooled together, the computing, storage, networking and security needs of the business can be monitored and provisioned much more rapidly.
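To make that idea concrete, here is a minimal sketch of what defining a platform “in software” might look like. The module, class and function names below are hypothetical, invented purely for illustration; real SDDC stacks each expose their own APIs. The pattern, however, is the same: compute, storage, networking and security are described together and submitted through one interface.

```python
# Hypothetical illustration only: these names are invented for this sketch;
# real SDDC platforms expose their own (differing) control-plane APIs.
from dataclasses import dataclass


@dataclass
class PlatformSpec:
    """One software definition spanning all four pooled resource types."""
    name: str
    vcpus: int            # compute drawn from the shared server pool
    memory_gb: int
    storage_gb: int       # capacity drawn from the shared storage pool
    network_segment: str  # virtual network carved out of the shared fabric
    firewall_policy: str  # security applied as part of the same definition


def provision(spec: PlatformSpec) -> None:
    """Stand-in for submitting the whole definition through one API call."""
    # In the hardware-centric model, each of these resources would be a
    # separate purchase or ticket; here they arrive as a single request.
    print(f"Provisioning '{spec.name}': {spec.vcpus} vCPUs, "
          f"{spec.memory_gb} GB RAM, {spec.storage_gb} GB storage, "
          f"segment {spec.network_segment}, policy {spec.firewall_policy}")


if __name__ == "__main__":
    provision(PlatformSpec(
        name="order-processing",
        vcpus=8,
        memory_gb=32,
        storage_gb=500,
        network_segment="app-tier",
        firewall_policy="internal-only",
    ))
```

The point of the sketch is the shape of the request, not the implementation: one declarative definition replaces several separate hardware purchases and provisioning tickets.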

Applications experiencing a surge in demand can have more computing power allocated in an instant. New applications can be provisioned just as quickly, speeding deployment and time-to-value. Data-rich analytics reports and backup routines receive the bandwidth they need, only when they need it.
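As a rough illustration of that elasticity, the sketch below shows the kind of scaling rule an SDDC control plane might evaluate against the shared compute pool. The thresholds, the function name and the idea of reacting to a utilisation metric are assumptions made for the example, not a description of any particular product.

```python
# Hypothetical scaling rule: names, thresholds and the polling model are
# invented for illustration; real platforms implement this in their own way.

def desired_vcpus(current_vcpus: int, cpu_utilisation: float) -> int:
    """Return a new allocation drawn from the shared compute pool.

    Scale up when the application runs hot; release capacity back to the
    pool when it runs cold, so other workloads can use it.
    """
    if cpu_utilisation > 0.80:   # demand surge: allocate more, immediately
        return current_vcpus * 2
    if cpu_utilisation < 0.20:   # idle: hand capacity back to the pool
        return max(2, current_vcpus // 2)
    return current_vcpus         # steady state: no change


if __name__ == "__main__":
    # A reporting job spikes to 92% CPU: the rule doubles its allocation.
    print(desired_vcpus(current_vcpus=8, cpu_utilisation=0.92))   # -> 16
    # Overnight the same job drops to 10%: capacity is released.
    print(desired_vcpus(current_vcpus=16, cpu_utilisation=0.10))  # -> 8
```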

It’s a compelling vision, but is it real?

We’ll answer that question in part two, where we’ll predict how soon SDDCs may become a reality, examine the obstacles holding them back today, and identify a few of the vendors to watch as the SDDC gains traction in the marketplace.

Posted under Storage, Virtualisation