The Software-Defined Data Centre Explained: Part II of II

Steve Phillips, CIO, Avnet Inc.

In part one of this article, we examined how data centres have evolved, and why current solutions are leaving some businesses searching for what they believe is the next evolution in data centre architecture: the software-defined data centre (SDDC).

In this post, we’ll examine how soon SDDCs may become a reality, what obstacles are holding them back, and identify a few of the vendors to watch as the SDDC takes shape.

HOW PROMISING AND HOW SOON?

According to Gartner’s Hype Cycle for 2014, the SDDC—part of what they refer to as “software-defined anything,” or SDA/SDX—is still firmly in the Cycle’s first stage, where the promise of technology has yet to be grounded in the context of real-world application.

That hasn’t stopped Gartner from calling software-defined anything “one of the top IT trends with the greatest possible impact on an enterprise’s infrastructure and operations,” according to Computer Weekly.

EARLY OBSTACLES TO ADOPTION

While the potential of the SDDC may be great, embracing it is more revolution than evolution. The migration to a virtualized environment could be adopted by traditional data centres as time, budget and business need allowed, with virtualized racks sitting alongside traditional hardware racks.

Software-defined data centres, on the other hand, require common APIs to operate: either the hardware can be pooled and controlled by software or it cannot. As a result, companies with significant legacy infrastructure may find it difficult to adopt SDDC in their own environments.

One way for existing data centres to avoid the “all or nothing” approach of SDDC is to embrace what Gartner began referring to as “bimodal IT” in 2014. Bimodal IT identifies two modes of IT delivery:

  • Mode 1 is traditional IT, which places a premium on stability and efficiency for mission-critical infrastructure.
  • Mode 2 is a more agile environment focused on speed, scalability, time-to-market, close alignment with business needs, and rapid evolution.

A bimodal IT arrangement would allow large legacy IT operations to establish a separate SDDC-driven environment to meet business needs that call for fast, scalable and agile IT resources, while continuing to rely on traditional virtualized environments for applications and business needs that value uptime and consistency above all else.

Over time, more resources could be devoted to the new SDDC architecture as the needs of the business evolve, without requiring the entire data centre to convert to SDDC all at once.

WHAT VENDORS ARE LEADING THE SDDC CHARGE?

Given how different software-defined data centre architectures are from traditional and virtualized environments, it’s a golden opportunity for new and emerging vendors to gain a first-mover advantage on some of the entrenched data centre giants.

APIs: The critical components of the SDDC are the APIs that control the pooled resources. OpenStack’s APIs are the open-source front-runner at this point, although many vendors still rely on their own proprietary APIs to control their hardware.
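To make the idea concrete, here is a minimal sketch of what “controlling pooled resources through a common API” looks like in practice, using the openstacksdk Python client against OpenStack’s compute and network APIs. The cloud name, image, flavour and network below are illustrative placeholders, not a reference configuration.

    # Minimal sketch: provisioning pooled compute through OpenStack's APIs.
    # Assumes the openstacksdk client and a clouds.yaml entry named "sddc-pool";
    # the image, flavour and network names are illustrative placeholders.
    import openstack

    conn = openstack.connect(cloud="sddc-pool")

    image = conn.compute.find_image("ubuntu-14.04")
    flavor = conn.compute.find_flavor("m1.medium")
    network = conn.network.find_network("test-net")

    server = conn.compute.create_server(
        name="sandbox-web-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(f"Provisioned {server.name} at {server.access_ipv4}")

The point is less the specific calls than the model: the same handful of API operations work regardless of whose hardware sits underneath the pool.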

Computing & Storage: Emerging players like Nimble Storage and Nutanix are at the forefront of the SDDC movement, but data centre incumbents like IBM, HP, Dell, NetApp, Cisco and EMC are right there with them.

Networking: While Cisco, Juniper and HP are certainly the focus of the software-defined networking space, startups like Big Switch and Cumulus Networks are gaining significant market interest, funding and traction as the SDDC model gains momentum.

Converged Infrastructure: Two additional initiatives worth keeping an eye on are VCE with its Vblock solutions and NetApp’s FlexPod integrated infrastructure solutions. These products are designed to meet the needs of both “clean sheet” and legacy IT environments interested in pursuing the bimodal IT approach.

So while the reality of the SDDC may be a few years away for many IT environments with considerable legacy investments, it’s certainly a new and compelling vision for the data centre.

More importantly, it appears to be the solution IT is looking for in the always-on, mission-critical, cloud-ready and data-rich environment we operate in today. Expect to hear more on this topic in future Behind the Firewall blog posts.

Posted under Storage, Virtualisation

Five steps to delivering agile development and testing in the Cloud


Andrew Stuart, HP Business Unit Manager, Avnet Technology Solutions UK

How ‘agile’ is your agile development?

Software development cycles have become compressed over the years. This is true both for pure software development and for the implementation of packaged software applications. There has been a significant shift from historic waterfall development methodologies, which consisted of long planning cycles with a limited number of releases per year, to agile development techniques with continuous, rapid delivery of incremental improvements. The larger number of release cycles demands more testing and more frequent provisioning of controlled test environments, which brings a different set of challenges.

Some would say the development bottleneck has simply moved to the complex task of configuring and administering the hardware and software stack required for testing new applications. Modern composite applications rely on a complex stack of interdependent software programs. Changes in any one of the applications that contribute to the overall solution can have unanticipated consequences.

Here are five steps to avoid those consequences, break the bottleneck and test in the cloud:

1) Automated deployment in a private cloud

Software development and testing is an ideal environment in which to exploit cloud automation software. It solves the problems of provisioning delays, inaccuracy and system administration cost without adding risk to production systems. With initiatives such as Avnet’s Cloud-in-a-Box, organisations can implement a private cloud and populate it with advanced test management software, allowing developers and testers to work together in a streamlined environment that fully supports agile methodologies without creating a testing bottleneck.

2) Beating the provisioning challenge

To keep up with agile development, testing teams need to stand up complex software and hardware environments quickly, accurately and reliably. Precise environment descriptions are required to ensure that the exact version of every contributing element is consistent across development, testing and production. On average, each fresh install of components in a ‘sandbox’ can take 12 man-hours of system administration per server, followed by six hours to configure and verify the complete environment. Assuming 80 ‘sandbox’ requests per year, with the average request requiring five servers to be built, this gives an annual cost in excess of £290K. With private cloud automation tools, this cost can be reduced by as much as 75 per cent and, better still, it can be paid out of operational expenditure (OPEX) rather than capital expenditure (CAPEX).
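As a sanity check on that figure, the arithmetic can be written out directly; the £55-per-hour fully loaded system administration rate below is an assumption introduced here for illustration, not a number from the original estimate.

    # Back-of-the-envelope version of the provisioning cost estimate above.
    # The hourly rate is an assumed fully loaded sysadmin cost, used only to
    # show how a figure "in excess of £290K" can arise from the stated inputs.
    requests_per_year = 80
    servers_per_request = 5
    build_hours_per_server = 12   # fresh install of components, per server
    config_hours_per_request = 6  # configure and verify the complete environment
    hourly_rate_gbp = 55          # assumption, not from the article

    total_hours = (requests_per_year * servers_per_request * build_hours_per_server
                   + requests_per_year * config_hours_per_request)   # 5,280 hours
    annual_cost = total_hours * hourly_rate_gbp                      # £290,400
    automated_cost = annual_cost * 0.25                              # 75% reduction

    print(f"Manual provisioning: {total_hours} hours, about £{annual_cost:,}")
    print(f"With cloud automation: about £{automated_cost:,.0f}")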

3) Saving time on test planning and execution

Applications go through a predictable lifecycle, and developers and testers need a systematic approach underpinned by tools that enforce the methodology in a productive manner. For example, there should be a Requirement Tree that displays the hierarchical relationships among requirements and ties them to tests and defects, and a Test Plan Tree with defect and requirement association, risk-based prioritisation and test execution. By managing the scheduling and running of tests in the cloud, and organising them into test sets designed to achieve specific goals and exercise specific business processes, time and expense can be saved.
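As an illustration of how a Requirement Tree and Test Plan Tree hang together, the sketch below models requirements linked to tests and defects and orders test execution by risk. The class and field names are hypothetical; they are not the data model of any particular test-management product.

    # Illustrative sketch: hierarchical requirements tied to tests and defects,
    # with risk-based ordering of test execution. Names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        name: str
        defects: list[str] = field(default_factory=list)

    @dataclass
    class Requirement:
        name: str
        risk: int                                  # 1 (low) to 5 (business critical)
        children: list["Requirement"] = field(default_factory=list)
        tests: list[TestCase] = field(default_factory=list)

    def execution_order(root: Requirement) -> list[TestCase]:
        """Flatten the Requirement Tree and schedule the riskiest areas first."""
        scored: list[tuple[int, TestCase]] = []
        stack = [root]
        while stack:
            req = stack.pop()
            scored.extend((req.risk, test) for test in req.tests)
            stack.extend(req.children)
        return [test for _, test in sorted(scored, key=lambda pair: -pair[0])]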

4) Speeding production deployment

Server automation tools form the basis of production cloud deployments. By gaining familiarity with server automation tools during development and testing, IT departments become well placed to evaluate their production environments for migration to the cloud. The precise software stack identified by the development team and verified by the quality testing team can then be deployed to a production environment running, for example, HP’s Server Automation tools.
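The idea of reusing the verified stack can be sketched as follows; deploy_stack() is a hypothetical stand-in for whatever server-automation tooling is actually in use, and the version numbers are illustrative rather than recommended.

    # Sketch: one pinned stack definition, verified in test, promoted to
    # production unchanged. deploy_stack() is a hypothetical placeholder for
    # the server-automation tool's own interface; versions are illustrative.
    PINNED_STACK = {
        "os": "rhel-6.5",
        "jdk": "1.7.0_55",
        "app_server": "tomcat-7.0.53",
        "application": "orders-service-2.3.1",
    }

    def deploy_stack(environment: str, stack: dict[str, str]) -> None:
        # Placeholder: hand the pinned versions to the automation tooling here.
        print(f"Deploying to {environment}: {stack}")

    deploy_stack("test", PINNED_STACK)        # verified by the quality team...
    deploy_stack("production", PINNED_STACK)  # ...then promoted without drift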

5) Risk-based quality management

Risk-based, automated quality testing controls IT costs by reducing the number and duration of business-critical application outages. This means less time and effort spent on problem identification, resolution and rework. Centralised, rigorous risk-based testing should include three-way traceability between requirements, tests and defects, reducing both the number of outages and the time spent resolving them.
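A minimal, standalone sketch of that three-way traceability check: flag business-critical requirements whose linked tests carry open defects. The identifiers and field names are made up for illustration.

    # Illustrative traceability check across requirements, tests and defects.
    requirements = {
        "REQ-101": {"risk": "critical", "tests": ["T-1", "T-2"]},
        "REQ-102": {"risk": "low",      "tests": ["T-3"]},
    }
    tests = {
        "T-1": {"defects": ["DEF-7"]},
        "T-2": {"defects": []},
        "T-3": {"defects": []},
    }

    for req_id, req in requirements.items():
        open_defects = [d for t in req["tests"] for d in tests[t]["defects"]]
        if req["risk"] == "critical" and open_defects:
            print(f"{req_id}: release risk, open defects {open_defects}")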

By following these five steps and taking advantage of a fully automated cloud environment spanning development, testing and production, organisations can benefit from faster time-to-market, the elimination of production outages arising from software deployment errors, and vast improvements in hardware and software licence utilisation.

So just how agile are your agile developments, and could testing in the cloud make all the difference? For more information, please click here.

Posted under Agile Development, Cloud Computing