The Software-Defined Data Centre Explained: Part II of II

Steve Phillips, CIO, Avnet Inc.

In part one of this article, we examined how data centres have evolved, and why current solutions are leaving some businesses searching for what they believe is the next evolution in data centre architecture: the software-defined data centre (SDDC).

In this post, we’ll examine how soon SDDCs may become a reality, look at the obstacles holding them back, and identify a few of the vendors to watch as the technology matures.

HOW PROMISING AND HOW SOON?

According to Gartner’s Hype Cycle for 2014, the SDDC—part of what the firm refers to as “software-defined anything,” or SDx—is still firmly in the Cycle’s first stage, where the promise of a technology has yet to be grounded in the context of real-world application.

That hasn’t stopped Gartner from calling software-defined anything “one of the top IT trends with the greatest possible impact on an enterprise’s infrastructure and operations,” according to Computer Weekly.

EARLY OBSTACLES TO ADOPTION

While the potential of the SDDC may be great, embracing it is more of a revolution than an evolution. The migration to a virtualized environment could be adopted incrementally by traditional data centres as time, budget and business need allowed, with virtualized racks sitting next to traditional hardware racks.

Software-defined data centres, on the other hand, require common APIs to operate: either the hardware can be pooled and controlled by software, or it can’t. As a result, companies with significant legacy infrastructure may find it difficult to adopt SDDC in their own environments.

One way for existing data centres to avoid the “all or nothing” approach of SDDC is to embrace what Gartner began calling “bimodal IT” in 2014. Bimodal IT identifies two types of IT needs:

  • Type 1 is traditional IT, which places a premium on stability and efficiency for mission-critical infrastructure needs.
  • Type 2 refers to a more agile environment focused on speed, scalability, time-to-market, close alignment with business needs, and rapid evolution.

A bimodal IT arrangement would allow large legacy IT operations to establish a separate SDDC-driven environment to meet business needs that call for fast, scalable and agile IT resources, while continuing to rely on traditional virtualized environments for applications and business needs that value uptime and consistency above all else.

Over time, more resources could be devoted to the new SDDC architecture as the needs of the business evolve, without requiring the entire data centre to convert to SDDC all at once.

WHAT VENDORS ARE LEADING THE SDDC CHARGE?

Given how different software-defined data centre architectures are from traditional and virtualized environments, it’s a golden opportunity for new and emerging vendors to gain a first-mover advantage on some of the entrenched data centre giants.

APIs: The critical components of SDDC are the APIs that control the pooled resources. OpenStack’s APIs are the open source market leader at this point, while many vendors still rely on their own proprietary APIs to control their hardware.
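
As a rough sketch of what provisioning against a common, open API looks like in practice, here’s a short example using openstacksdk, OpenStack’s Python client. The cloud name, image, flavor and network below are invented placeholders for illustration, not a reference configuration:

    # Provision pooled compute through OpenStack's open APIs via openstacksdk.
    # "sddc-lab" refers to a hypothetical entry in clouds.yaml; the image,
    # flavor and network names are placeholders for illustration.
    import openstack

    conn = openstack.connect(cloud="sddc-lab")

    # Look up pooled resources by name rather than by physical location
    image = conn.image.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("app-tier")

    # Ask the control plane for a new server; the scheduler decides which
    # physical host in the pool actually runs it
    server = conn.compute.create_server(
        name="web-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # "ACTIVE" once the instance is running

The point isn’t the specific calls; it’s that the same kind of request works regardless of whose hardware sits underneath, which is exactly what proprietary per-vendor APIs prevent.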

Computing & Storage: Emerging players like Nimble Storage and Nutanix are at the forefront of the SDDC movement, but data centre incumbents like IBM, HP, Dell, NetApp, Cisco and EMC are right there with them.

Networking: While Cisco, Juniper and HP are certainly the focus of the software-defined networking space, startups like Big Switch and Cumulus Networks are gaining significant market interest, funding and traction as the SDDC model gains momentum.

Converged Infrastructure: Two additional initiatives worth keeping an eye on are VCE, with its Vblock solutions, and NetApp’s FlexPod integrated infrastructure solutions. These products are designed to meet the needs of both “clean sheet” and legacy IT environments interested in pursuing the bimodal IT approach.

So while the reality of the SDDC may be a few years away for many IT environments with considerable legacy investments, it’s certainly a new and compelling vision for the data centre.

More importantly, it appears to be the solution IT is looking for in the always-on, mission-critical, cloud-ready and data-rich environment we operate in today. Expect to hear more on this topic in future Behind the Firewall blog posts.

Posted under Storage, Virtualisation

The Software-Defined Data Centre Explained: Part I of II

Steve Phillips, CIO, Avnet Inc.

The traditional data centre is being challenged harder than ever to keep up with the pace of change in business and technology.

Three recent megatrends—the growth of data and data analytics, the rise of cloud computing and the increasing criticality of technology to the operations of many businesses—have shown that legacy infrastructures are too inflexible, too inefficient and too expensive for a growing number of businesses.

In this first of two posts on this topic, we’ll briefly recap the evolution of the data centre to this point, examining where and why current architectures are falling short of many businesses’ needs.

THE TRADITIONAL HARDWARE-CENTRIC MODEL

From the beginning, data centres have been built around a hardware-centric model of expansion. Years ago, if your company wanted to launch a new application, the IT team would purchase and provision dedicated servers to handle the computing duties, pairing them with a dedicated storage unit to manage the application database and backup needs.

In order to ensure that the platform could handle surges in demand, IT would provision enough computing power and storage to meet the ‘high water mark’ of each application’s forecast demand. As a result, many servers and storage units spent most of their time running at a fraction of capacity—as little as 8% according to a 2008 McKinsey study.
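
A quick back-of-the-envelope calculation shows how high-water-mark sizing produces numbers like that. The demand figures below are invented for illustration:

    # Utilisation under "high water mark" provisioning.
    # Demand figures are invented for illustration only.
    peak_demand = 100          # capacity units needed at the forecast peak
    average_demand = 8         # capacity units used in a typical hour
    provisioned = peak_demand  # dedicated hardware is sized for the peak

    utilisation = average_demand / provisioned
    print(f"Average utilisation: {utilisation:.0%}")  # -> 8%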

To make matters worse, these server and storage pairs used a dedicated high-capacity network backbone that kept each platform in its own distinct silo. As a result, data centres were overbuilt by design, overly expensive, and slow to adapt to the changing needs of the business.


THE VIRTUALIZED DATA CENTRE

The arrival of server and storage virtualization technologies in the enterprise about 10 years ago addressed some of these issues by allowing one server or storage array to do the job of many.

Provisioning a portion of a server or storage unit for a new application was faster and less expensive than buying new hardware, and it went some way toward reducing underutilisation…but not far enough. According to a Gartner report, utilisation rates had only grown to 12% by 2012.


However, increased computing densities allowed the data centre to provide significantly more computing horsepower to the business, without the need to expand the centre’s square footage in the process.

THE DATA CENTRE’S PERFECT STORM

But over the last five years, data centres have been buffeted by three megatrends that have pushed current virtualization technologies to their limits:

  1. The exponential growth of data and the rise of data analytics have exceeded the most aggressive capacity scenarios for many storage and networking infrastructures.
  2. The adoption of cloud computing and the “hybrid cloud” model has split computing and storage responsibilities between the company data centre and third-party cloud vendors in remote locations.
  3. The increasing reliance of many businesses on always-on technology requires IT teams to provision and scale resources rapidly to accommodate new initiatives and business opportunities.

Cloud computing as a whole has also increased the rate of innovation and the expectations of the business, leaving IT teams and their data centres working hard to keep up.

One answer to these pressures is the software-defined data centre, or SDDC: a new architecture that allows IT to deliver applications and network capacity with speed, agility and an eye on the bottom line.

THE SOFTWARE-DEFINED DATA CENTRE

In a software-defined data centre (SDDC), the hardware-related silos are removed. All the essential elements of a computing platform—computing, storage, networking and even security—are pooled, virtualized and controlled through a common set of application programming interfaces, or APIs.

With all of the data centre’s hardware resources pooled together, the computing, storage, networking and security needs of the business can be monitored and provisioned much more rapidly.

Applications experiencing a surge in demand can have more computing power allocated in an instant. New applications can be provisioned just as quickly, speeding deployment and time-to-value. Data-rich analytics reports and backup routines receive the bandwidth they need, only when they need it.
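
As a conceptual illustration only, the sketch below captures that pooled-allocation idea. The class and method names are invented for this post; they don’t belong to any vendor’s API:

    # Conceptual sketch of pool-based allocation in an SDDC control plane.
    # All names here are invented for illustration.
    class ResourcePool:
        def __init__(self, total_vcpus: int):
            self.total_vcpus = total_vcpus
            self.allocations: dict[str, int] = {}

        def allocate(self, app: str, vcpus: int) -> None:
            used = sum(self.allocations.values())
            if used + vcpus > self.total_vcpus:
                raise RuntimeError("pool exhausted; add hardware to the pool")
            # Grow the app's share; no new dedicated hardware is provisioned
            self.allocations[app] = self.allocations.get(app, 0) + vcpus

        def release(self, app: str, vcpus: int) -> None:
            # Return capacity to the pool the moment the surge passes
            self.allocations[app] = max(0, self.allocations.get(app, 0) - vcpus)

    pool = ResourcePool(total_vcpus=512)
    pool.allocate("order-entry", 64)   # steady-state share
    pool.allocate("order-entry", 32)   # demand surge: more compute in an instant
    pool.allocate("analytics", 16)     # nightly report gets capacity...
    pool.release("analytics", 16)      # ...and hands it back when finished

The key design point is that applications ask the pool for capacity and hand it back when they’re done, instead of each application owning dedicated hardware sized for its peak.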

It’s a compelling vision, but is it real?

We’ll answer that question in part two, where we’ll predict how soon SDDCs may become a reality, examine the obstacles holding them back today, and identify a few of the vendors to watch as the SDDC gains traction in the marketplace.

Posted under Storage, Virtualisation

Is converged infrastructure the answer to today’s data centre challenges?

Steve Phillips, Senior Vice President and Chief Information Officer
Avnet, Inc.

In recent blog posts, I’ve discussed a few of the challenges facing today’s data centres, including the growth of data and data analytics, the rise of cloud computing and the growing importance of IT across all aspects of the enterprise.

While software-defined architectures show a lot of potential to address the flexibility, agility and cost pressures on IT operations, widespread adoption is likely several years away, particularly for established data centres with significant legacy investments.

So what can IT teams do today to make their current on-premises infrastructure more streamlined, agile and cost effective without fully committing to a next-generation software-defined environment? For many of us, the answer lies in converged infrastructure.

WHAT IS A CONVERGED INFRASTRUCTURE?

The traditional data centre is made up of silos, with each application running on its own server, storage and networking stack, often from different vendors. Provisioning, deploying and managing the hardware needs for each new application requires a significant amount of resources, reducing responsiveness and efficiency and increasing complexity and cost.

While virtualization has helped to increase efficiency and utilisation rates and reduce costs over the last decade or so, in recent years the gains simply haven’t been able to outpace growing IT-related cost and performance pressures.

In a converged infrastructure solution—sometimes referred to as an integrated system—the computing, storage and networking systems from different manufacturers are all brought together by a single vendor into one solution. All the individual components are engineered and pre-configured to work together and with your specific data centre environment, right out of the box.

Because the solution arrives as a single, pre-configured package—in some cases complete with your specific applications already installed—IT teams are able to streamline the procurement, deployment and operation of new hardware, reducing time and cost in the process.

These converged resources arrive on site fully virtualised, pooled and manageable through a single interface. This approach increases utilisation rates beyond what virtualisation alone can deliver, while also greatly reducing complexity and ongoing maintenance costs.

CONVERGED INFRASTRUCTURE MARKET LEADERS

The IT market has responded favourably to converged/integrated architectures, according to IDC analysts. “An expanding number of organisations around the world are turning to integrated systems as a way to address longstanding and difficult infrastructure challenges,” said Eric Sheppard, IDC’s research director for storage, “[which] makes these solutions an increasingly rare source of high growth within the broader infrastructure marketplace.”

A good example of converged infrastructure at work can be found at VCE, which began as a joint venture combining EMC storage, Cisco computing and networking, and VMware virtualization software.

While Gartner rates VCE, Cisco-NetApp and Oracle as leaders in converged infrastructure, there is healthy competition in a market that IDC forecasts is growing at nearly 20% a year.[1]

Source: Gartner Magic Quadrant for Integrated Systems, 2014

THE BIRTH OF HYPERCONVERGENCE

With the converged infrastructure model gaining real traction, manufacturers are beginning to take the concept one step further with what are being called “hyperconverged architectures.”

Instead of bringing together hardware from multiple vendors, hyperconverged solutions rely on the latest hardware from a single vendor, combining super-dense computing cores, all-flash storage and software-based networking into a single pooled appliance.

These next-generation appliances are so tightly integrated that they can deliver computing densities, performance and benefits that aren’t possible in traditional architectures, including:

  • Reduced IT complexity
  • Increased utilisation rates
  • Increased agility and responsiveness
  • Lower power consumption and space needs
  • Lower total cost of ownership
  • Lower procurement, deployment and management costs
  • Increased application performance
  • Lower maintenance costs

A NEW BREED OF DATA CENTRE BRANDS


While the convergence market is populated with many familiar IT brands, the more disruptive hyperconvergence model has been embraced by startups and emerging brands like Nutanix and SimpliVity.

This new breed of vendor is looking to disrupt the market, and their innovative model is riding a wave of interest from the venture capital community: SimpliVity has raised more than $275 million in funding to date, while Nutanix has attracted $312 million along with a long-term partnership with Dell in 2014.

DOES A CONVERGED/HYPERCONVERGED INFRASTRUCTURE MAKE SENSE FOR YOU?

Converged and hyperconverged infrastructure is worth considering if you’re part of an established IT shop that needs to:

  • reduce or contain IT expense for on-premises environments
  • respond to the changing needs of your business
  • evolve with the changing trends in IT

In many respects, converged infrastructures are an evolution of virtualization, delivering utilisation and efficiency improvements without having to “rip and replace” legacy investments in the process.

Until the software-defined data centre becomes a reality, converged/hyperconverged infrastructure solutions represent an excellent opportunity to improve the flexibility and expense profile of a data centre without the need for wholesale abandonment of existing data centre investments.



[1] Source: IDC, Worldwide Integrated Systems 2014–2018 Forecast: State of the Market and Outlook

Posted under Converged infrastructure

The channel needs to wake up to converged infrastructure and emerging technologies in the data centre

Dieter Lott, Vice President, Business Development, Avnet Technology Solutions EMEA

Many organisations’ data centres today are built on complicated legacy systems. This has led to a drastic increase in overall IT complexity, creating significant challenges around IT management, security, scalability and cost efficiency within data centres.

On top of this, today’s IT departments face a perplexing choice of technologies as they build and maintain data centres to meet the demands of the digital economy. To address the needs of an increasingly digitally savvy workforce, larger organisations have built teams around the technology disciplines of server, storage and networking, and around the best-of-breed solutions in each area. Not every organisation, however, can afford the luxury of dedicated teams.

One increasingly popular response to this challenge is to implement converged infrastructure alongside new and emerging data centre technologies such as software-defined networking (SDN), operational analytics and big data.

Converged infrastructure is now well and truly a growth market, and the channel needs to address it. The technology brings together all the fundamental hardware components in an intelligently engineered, purpose-built configuration. A key benefit of converged infrastructure is that these systems are pre-configured, integrated, tested and installed as a single, cohesive unit, rather than ‘bolted together’ with a digital version of duct tape.

In a nutshell, by deploying converged infrastructure, organisations can reduce complexity, ease deployment and integration, lower expenses and improve their ability to deploy technology for truly transformative needs, rather than simply to ‘keep systems operating’.

But what else can it do and what should the channel be addressing?

  1. Accommodating new and emerging technologies – complex and rigid legacy systems make it difficult to integrate newer IT such as mobility and cloud computing.
  2. Bridging skills gaps – close integration between the different technologies within the converged infrastructure stack and upper-level management/orchestration software means IT management is greatly simplified and training requirements are often reduced. However, for customers to realise these benefits, the channel needs to build skills in delivering solutions and services around management and orchestration tools.
  3. Businesses operate in silos – to realise the full potential of converged infrastructure, end customer businesses need to have a consolidated approach to managing their infrastructure, and channel organisations need the same joined-up approach to delivering it.
  4. Limited resources – converged infrastructure can alleviate this challenge of limited resource by providing technology that is built to work together and can be managed in a simplified cohesive manner.
  5. Legacy infrastructure – standardisation is the key to simplifying infrastructure management. Converged infrastructure should be viewed not simply as a typical IT cost, but as a means of reducing complexity and operating costs over time.
  6. Complex regulatory environment – converged infrastructure creates a standardised model that helps ensure compliance: you define the mould once and repeat it, which is much simpler than maintaining compliance in a non-standardised, ad-hoc infrastructure (see the sketch after this list).
  7. Ensuring continuity – the inability to respond to customer demands for even a moment can be massively detrimental to an organisation’s financial health – business continuity is key. This doesn’t mean simply saving data in the event of a disaster, but maintaining “business as usual” IT. Converged infrastructure simplifies disaster recovery planning, as businesses can work with a standard infrastructure model regardless of location.
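
To make point 6 above concrete, here’s a minimal sketch of “define the mould and repeat it”: compliance checking reduces to diffing each site against one standard template. All of the settings and values below are invented for illustration:

    # Sketch of compliance-as-a-diff against one standard build template.
    # Field names and values are invented for illustration only.
    STANDARD_BUILD = {
        "hypervisor_version": "6.5",
        "encryption_at_rest": True,
        "log_retention_days": 90,
    }

    def compliance_gaps(site_config: dict) -> dict:
        """Return every setting where a site deviates from the standard."""
        return {
            key: (site_config.get(key), expected)
            for key, expected in STANDARD_BUILD.items()
            if site_config.get(key) != expected
        }

    # Every site is checked against the same mould, regardless of location
    london = {"hypervisor_version": "6.5", "encryption_at_rest": True,
              "log_retention_days": 30}
    print(compliance_gaps(london))  # {'log_retention_days': (30, 90)}

Because every site is built from the same mould, the same check applies everywhere, which is the standardisation benefit in a nutshell.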

On top of these challenges, the pace of change within data centres is remarkable. That is why converged infrastructure, along with emerging technologies like mobility, cloud and SDN, is already a key discussion point in the IT industry, driven by the demand for more scalable IT architectures. The convergence of data centre and networking technology is behind this significant market shift.

Data centre infrastructure is becoming increasingly complex as end users embrace a combination of on- and off-premises cloud solutions, as well as platform and software “as-a-service” models. At the same time, businesses are under pressure to align IT costs more closely with performance, ensuring peak periods of IT demand are covered without over-investing. This need is compelling IT organisations to place more emphasis on capacity forecasting and analytics – yet finding the different skills required for these new data centre technologies is a real challenge. Gartner backed this up when it found that 80 percent of businesses “will find growth constrained from a lack of new data centre skills by 2016.” That raises the question of whether the channel has the IT skills to address market demands.

I believe the channel now has an opportunity: this is a chance for resellers to step in and fill the IT skills gap on behalf of their customers. The channel needs to help businesses in EMEA understand how and why these technologies overcome business challenges and meet requirements for today – and tomorrow. It also goes some way to answering where the value proposition in the future reseller landscape lies. The role distributors play will be affected too: we will be trusted to provide enablement, as well as the skills and training required.

Posted under IT infrastructure