The Software-Defined Data Centre Explained: Part II of II

Steve Phillips, CIO, Avnet Inc.

In part one of this article, we examined how data centres have evolved, and why current solutions are leaving some businesses searching for what they believe is the next evolution in data centre architecture: the software-defined data centre (SDDC).

In this post we’ll examine how soon SDDCs may become a reality, what obstacles are holding them back, and identify a few of the vendors to watch as the SDDC takes shape.


According to Gartner’s Hype Cycle for 2014, the SDDC—part of what they refer to as “software-defined anything,” or SDx—is still firmly in the Cycle’s first stage, where the promise of technology has yet to be grounded in the context of real-world application.

That hasn’t stopped Gartner from calling software-defined anything “one of the top IT trends with the greatest possible impact on an enterprise’s infrastructure and operations,” according to Computer Weekly.


While the potential for SDDC may be great, embracing it is more of a revolution than an evolution. The migration to a virtualized environment could be embraced by traditional data centres as time, budget and business need allowed, with virtualized racks next to traditional hardware racks.

On the other hand, software-defined data centres require common APIs to operate: the hardware can be pooled and controlled by software or it can’t. As a result, companies with significant legacy infrastructures may find it difficult to adopt SDDC in their own environments.
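To make the “all or nothing” constraint concrete, here is a minimal, hypothetical sketch (the class and method names are illustrative, not any real SDDC product’s API): a software controller can only pool capacity from hardware that speaks its common API, and a legacy device without that API simply cannot join the pool.

```python
class Host:
    """A compute node that exposes the common management API."""
    def __init__(self, name, cpus):
        self.name = name
        self.cpus = cpus
        self.free = cpus

class ResourcePool:
    """Pools capacity across API-compatible hosts; software does the placement."""
    def __init__(self):
        self.hosts = []

    def add_host(self, host, has_common_api=True):
        # Hardware without the common API cannot be pooled at all.
        if not has_common_api:
            raise ValueError(f"{host.name}: no common API, cannot join pool")
        self.hosts.append(host)

    def total_free(self):
        return sum(h.free for h in self.hosts)

    def allocate(self, cpus_needed):
        """Carve a request out of whichever hosts have spare capacity."""
        if cpus_needed > self.total_free():
            return None
        placement = {}
        for h in self.hosts:
            take = min(h.free, cpus_needed)
            if take:
                h.free -= take
                placement[h.name] = take
                cpus_needed -= take
            if cpus_needed == 0:
                break
        return placement

pool = ResourcePool()
pool.add_host(Host("rack1-node1", cpus=32))
pool.add_host(Host("rack1-node2", cpus=32))
print(pool.allocate(40))   # → {'rack1-node1': 32, 'rack1-node2': 8}
print(pool.total_free())   # → 24
```

Note how a single request spans hosts transparently: that placement decision is exactly what moves into software in an SDDC, and exactly what a rack of legacy gear without the common API can’t participate in.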

One way for existing data centres to avoid the “all or nothing” approach of SDDC is by embracing what Gartner began referring to as “bimodal IT” in 2014. Bimodal IT identifies two types of IT needs:

  • Mode 1 is traditional IT, which places a premium on stability and efficiency for mission-critical infrastructure needs.
  • Mode 2 refers to a more agile environment focused on speed, scalability, time-to-market, close alignment with business needs, and rapid evolution.

A bimodal IT arrangement would allow large legacy IT operations to establish a separate SDDC-driven environment to meet business needs that call for fast, scalable and agile IT resources, while continuing to rely on traditional virtualized environments for applications and business needs that value uptime and consistency above all else.

Over time, more resources could be devoted to the new SDDC architecture as the needs of the business evolve, without requiring the entire data centre to convert to SDDC all at once.


Given how different software-defined data centre architectures are from traditional and virtualized environments, the shift presents a golden opportunity for new and emerging vendors to gain a first-mover advantage over some of the entrenched data centre giants.

APIs: The critical components of the SDDC are the APIs that control the pooled resources. OpenStack’s open source API is the market leader at this point, while many vendors still rely on their own proprietary APIs to control their hardware.

Computing & Storage: Emerging players like Nimble Storage and Nutanix are at the forefront of the SDDC movement, but data centre incumbents like IBM, HP, Dell, NetApp, Cisco and EMC are right there with them.

Networking: While Cisco, Juniper and HP are certainly the focus of the software-defined networking space, startups like Big Switch and Cumulus Networks are gaining significant market interest, funding and traction as the SDDC model gains momentum.

Converged Infrastructure: Two additional initiatives worth keeping an eye on are VCE and its Vblock solutions, as well as NetApp’s FlexPod integrated infrastructure solutions. These products are designed to meet the needs of both “clean sheet” and legacy IT environments interested in pursuing the bimodal IT approach.

So while the reality of the SDDC may be a few years away for many IT environments with considerable legacy investments, it’s certainly a new and compelling vision for the data centre.

More importantly, it appears to be the solution IT is looking for in the always-on, mission-critical, cloud-ready and data-rich environment we operate in today. Expect to hear more on this topic in future Behind the Firewall blog posts.

Posted under Storage, Virtualisation

The Software-Defined Data Centre Explained: Part I of II

Steve Phillips, CIO, Avnet Inc.

The traditional data centre is being challenged harder than ever to keep up with the pace of change in business and technology.

Three recent megatrends—the growth of data and data analytics, the rise of cloud computing and the increasing criticality of technology to the operations of many businesses—have shown that legacy infrastructures are too inflexible, too inefficient and too expensive for a growing number of businesses.

In this first of two posts on this topic, we’ll briefly recap the evolution of the data centre to this point, examining where and why current architectures are falling short of many businesses’ needs.


From the beginning, data centres have been built around a hardware-centric model of expansion. Years ago, if your company wanted to launch a new application, the IT team would purchase and provision dedicated servers to handle the computing duties, pairing them with a dedicated storage unit to manage the application database and backup needs.

In order to ensure that the platform could handle surges in demand, IT would provision enough computing power and storage to meet the ‘high water mark’ of each application’s forecast demand. As a result, many servers and storage arrays spent most of their time running at a fraction of capacity—as little as 8% according to a 2008 McKinsey study.
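The arithmetic behind that figure is worth spelling out. The numbers below are illustrative only (they are not taken from the McKinsey study), but they show how sizing dedicated hardware for the forecast peak leaves most capacity idle on a typical day:

```python
# Illustrative 'high water mark' provisioning, with made-up numbers:
peak_demand = 100          # compute units the app might need at its forecast peak
avg_demand = 8             # compute units it actually uses on a typical day
provisioned = peak_demand  # dedicated hardware is sized for the peak

utilisation = avg_demand / provisioned
print(f"{utilisation:.0%}")   # → 8%
```

Because every application silo was sized this way independently, the idle capacity in one silo could not be lent to a neighbour that was running hot.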

To make matters worse, these server and storage pairs used a dedicated high-capacity network backbone that kept each platform in its own distinct silo. As a result, data centres were overbuilt by design, overly expensive, and slow to adapt to the changing needs of the business.



The adoption of server and storage virtualization technologies into the enterprise about 10 years ago addressed some of these issues by allowing one server or storage array to do the job of many.

Provisioning a portion of a server or storage unit for a new application was faster and less expensive than buying new hardware, and it went some way toward reducing underutilisation…but not far enough. According to a Gartner report, utilisation rates had only grown to 12% by 2012.


However, increased computing densities allowed the data centre to provide significantly more computing horsepower to the business, without the need to expand the centre’s square footage in the process.


But over the last five years, data centres have been buffeted by three megatrends that have pushed current virtualization technologies to their limits:

  1. The exponential growth of data and the rise of data analytics have exceeded the most aggressive capacity scenarios for many storage and networking infrastructures.
  2. The adoption of cloud computing and the “hybrid cloud” model has spread computing and storage responsibilities across the company data centre and third-party cloud vendors in remote locations.
  3. The increasing reliance of many businesses on always-on technology requires the IT team to provision and scale IT resources rapidly to accommodate new initiatives and business opportunities.

Cloud computing as a whole has also increased the rate of innovation and the expectations of the business, leaving IT teams and their data centres working hard to keep up.

One solution to this paradigm is the software-defined data centre, or SDDC: A new architecture that allows IT to deliver applications and network capacity with speed, agility and an eye on the bottom line.


In a software-defined data centre (SDDC) the focus on hardware-related silos is removed. All the essential elements of a computing platform—computing, storage, networking and even security—are pooled, virtualized and implemented through a common set of application programming interfaces, or APIs.

With all of the data centre’s hardware resources pooled together, the computing, storage, networking and security needs of the business can be monitored and provisioned much more rapidly.

Applications experiencing a surge in demand can have more computing power allocated in an instant. New applications can be provisioned just as quickly, speeding deployment and time-to-value. Data-rich analytics reports and backup routines receive the bandwidth they need, only when they need it.
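The behaviour described above can be sketched in a few lines. This is a hypothetical toy controller, not any real SDDC product’s API (the `Controller`, `provision` and `scale_to` names are illustrative): provisioning and scaling become software calls against a shared pool, rather than hardware purchases.

```python
class Controller:
    """Toy SDDC controller: apps draw compute from one shared pool."""
    def __init__(self, pooled_cpus):
        self.free = pooled_cpus
        self.apps = {}

    def provision(self, app, cpus):
        """Bring a new application online from the shared pool in one call."""
        if cpus > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= cpus
        self.apps[app] = cpus

    def scale_to(self, app, cpus):
        """Grow or shrink an app's share instantly; no new hardware involved."""
        delta = cpus - self.apps[app]
        if delta > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= delta
        self.apps[app] = cpus

ctl = Controller(pooled_cpus=128)
ctl.provision("web-store", 16)   # new app deployed immediately
ctl.scale_to("web-store", 48)    # demand surge: 3x the capacity in one call
ctl.scale_to("web-store", 16)    # surge over: capacity returns to the pool
print(ctl.free)                  # → 112
```

The point of the sketch is the last line: capacity released by one workload is instantly available to any other, which is what lets analytics jobs and backup routines take bandwidth “only when they need it.”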

It’s a compelling vision, but is it real?

We’ll answer that question in part two, predicting how soon SDDCs may become a reality, examining what obstacles are holding them back today, and identifying a few of the vendors to watch as the SDDC gains traction in the marketplace.

Posted under Storage, Virtualisation

Is converged infrastructure the answer to today’s data centre challenges?

Steve Phillips, Senior Vice President and Chief Information Officer
Avnet Inc.

In recent blog posts, I’ve discussed a few of the challenges facing today’s data centres, including the growth of data and data analytics, the rise of cloud computing and the growing importance of IT across all aspects of the enterprise.

While software-defined architectures show a lot of potential to address the flexibility, agility and cost pressures on IT operations, widespread adoption is likely several years away, particularly for established data centres with significant legacy investments.

So what can IT teams do today to help make their current on-premises infrastructure more streamlined, agile and cost effective without fully committing to a next-generation software-defined environment? For many of us, the answer lies in converged infrastructure.


The traditional data centre is made up of silos, with each application running on its own server, storage and networking stack, often from different vendors. Provisioning, deploying and managing the hardware needs for each new application requires a significant amount of resources, reducing responsiveness and efficiency and increasing complexity and cost.

While virtualization has helped to increase efficiency and utilisation rates and reduce costs over the last decade or so, in recent years the gains simply haven’t been able to outpace growing IT-related cost and performance pressures.

In a converged infrastructure solution—sometimes referred to as an integrated system—the computing, storage and networking systems from different manufacturers are all brought together by a vendor into a single solution. All the individual components are engineered and pre-configured to work together and with your specific data centre environment, right out of the box.

By delivering a single, pre-configured solution—in some cases, complete with your specific applications already installed—IT teams are able to streamline the procurement, deployment and operation of new hardware, reducing time and cost in the process.

These converged resources arrive on site fully virtualised, pooled and able to be managed through a single interface. This approach increases utilisation rates beyond what virtualization alone can deliver, while also greatly reducing complexity and ongoing maintenance costs. 


The IT market has responded favourably to converged/integrated architectures, according to IDC analysts: “An expanding number of organisations around the world are turning to integrated systems as a way to address longstanding and difficult infrastructure challenges,” said Eric Sheppard, IDC Research Director, Storage, “[which] makes these solutions an increasingly rare source of high growth within the broader infrastructure marketplace.”

A good example of converged infrastructure at work can be found at VCE, which began as a joint venture bringing together EMC storage, Cisco networking, and VMware computing and virtualization software.

While Gartner rates VCE, Cisco-NetApp and Oracle as leaders in converged infrastructure, there is healthy competition in a market that IDC forecasts is growing at nearly 20% a year.[1]

Source: Gartner Magic Quadrant for Integrated Systems, 2014


With the converged infrastructure model gaining real traction, manufacturers are beginning to take the concept one step further with what are being called “hyperconverged architectures.”

Instead of bringing together hardware from multiple vendors, hyperconverged solutions rely on the latest hardware from a single vendor, combining super-dense computing cores, all-flash storage and software-based networking into a single pooled appliance.

These next-generation components are integrated so tightly in the hyperconverged appliance that it can deliver computing densities and performance that aren’t possible in traditional architectures, along with benefits including:

  • Reduced IT complexity
  • Increased utilisation rates
  • Increased agility and responsiveness
  • Lower power consumption and space needs
  • Lower total cost of ownership
  • Lower procurement, deployment and management costs
  • Increased application performance
  • Lower maintenance costs



While the convergence market is populated with many familiar IT brands, the more disruptive hyperconvergence model has been embraced by startups and emerging brands like Nutanix and SimpliVity.

This new wave of vendors is looking to disrupt the market, and their innovative model is riding a wave of interest from the venture capital community: SimpliVity has raised more than $275 million in funding to date, with Nutanix attracting an additional $312 million and a long-term partnership with Dell in 2014.


Converged and hyperconverged infrastructure is worth considering if you’re part of an established IT shop that needs to:

  • reduce or contain IT expense for on-premises environments
  • respond to the changing needs of your business
  • evolve with the changing trends in IT

In many respects, converged infrastructures are an evolution of virtualization, delivering utilisation and efficiency improvements without having to “rip and replace” legacy investments in the process.

Until the software-defined data centre becomes a reality, converged/hyperconverged infrastructure solutions represent an excellent opportunity to improve the flexibility and expense profile of a data centre without the need for wholesale abandonment of existing data centre investments.

[1] Source: IDC, Worldwide Integrated Systems 2014–2018 Forecast: State of the Market and Outlook

Posted under Converged infrastructure