The Software-Defined Data Centre Explained: Part I of II

Steve Phillips, CIO, Avnet Inc.

The traditional data centre is being challenged harder than ever to keep up with the pace of change in business and technology.

Three recent megatrends—the growth of data and data analytics, the rise of cloud computing and the increasing criticality of technology to the operations of many businesses—have shown that legacy infrastructures are too inflexible, too inefficient and too expensive for a growing number of businesses.

In this first of two posts on this topic, we’ll briefly recap the evolution of the data centre to this point, examining where and why current architectures are falling short of many businesses’ needs.


From the beginning, data centres have been built around a hardware-centric model of expansion. Years ago, if your company wanted to launch a new application, the IT team would purchase and provision dedicated servers to handle the computing duties, pairing them with a dedicated storage unit to manage the application database and backup needs.

In order to ensure that the platform could handle surges in demand, IT would provision enough computing power and storage to meet the ‘high water mark’ of each application’s forecast demand. As a result, many servers and storage units spent most of their time running at a fraction of capacity—as little as 8%, according to a 2008 McKinsey study.
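To see how ‘high water mark’ provisioning translates into single-digit utilisation, here is a minimal back-of-the-envelope sketch in Python. The hourly demand profile is invented for illustration; it is not drawn from the McKinsey study.

```python
# Back-of-the-envelope sketch of 'high water mark' provisioning.
# The hourly demand profile below is invented for illustration only.

# One application's demand over a day, in CPU cores.
hourly_demand = [2, 1, 1, 1, 1, 1, 2, 3, 4, 4, 3, 3,
                 3, 3, 4, 4, 5, 30, 5, 3, 2, 2, 2, 1]

peak = max(hourly_demand)                 # IT provisions for this spike
average = sum(hourly_demand) / len(hourly_demand)

print(f"Provisioned for peak: {peak} cores")
print(f"Average demand:       {average:.2f} cores")
print(f"Utilisation:          {average / peak:.0%}")   # ~12% here; add failover
                                                       # headroom and growth
                                                       # margin and it drops
                                                       # further
```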

To make matters worse, these server and storage pairs used a dedicated high-capacity network backbone that kept each platform in its own distinct silo. As a result, data centres were overbuilt by design, overly expensive, and slow to adapt to the changing needs of the business.



The enterprise adoption of server and storage virtualisation technologies about 10 years ago addressed some of these issues by allowing one server or storage array to do the job of many.

Provisioning a portion of a server or storage unit for a new application was faster and less expensive than buying new hardware, and it went some way toward reducing underutilisation…but not far enough. According to a Gartner report, utilisation rates had only grown to 12% by 2012.


However, increased computing densities allowed the data centre to provide significantly more computing horsepower to the business, without the need to expand the centre’s square footage in the process.


But over the last five years, data centres have been buffeted by three megatrends that have pushed current virtualisation technologies to their limits:

  1. The exponential growth of data and the rise of data analytics have exceeded the most aggressive capacity plans of many storage and networking infrastructures.
  2. The adoption of cloud computing and the “hybrid cloud” model means the company data centre now shares computing and storage responsibilities with third-party cloud vendors in remote locations.
  3. The increasing reliance of many businesses on always-on technology requires the IT team to provision and scale IT resources rapidly to accommodate new initiatives and business opportunities.

Cloud computing as a whole has also increased the rate of innovation and the expectations of the business, leaving IT teams and their data centres working hard to keep up.

One response to these pressures is the software-defined data centre, or SDDC: a new architecture that allows IT to deliver applications and network capacity with speed, agility and an eye on the bottom line.


In a software-defined data centre (SDDC), the hardware-centric silos are removed. All the essential elements of a computing platform—computing, storage, networking and even security—are pooled, virtualised and exposed through a common set of application programming interfaces, or APIs.

With all of the data centre’s hardware resources pooled together, the computing, storage, networking and security needs of the business can be monitored and provisioned much more rapidly.

Applications experiencing a surge in demand can have more computing power allocated in an instant. New applications can be provisioned just as quickly, speeding deployment and time-to-value. Data-rich analytics reports and backup routines receive the bandwidth they need, only when they need it.
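To make “implemented through a common set of APIs” concrete, here is a hedged sketch of what API-driven provisioning might look like from an operations script. The endpoint, payload shape and function name are hypothetical, not any particular vendor’s control plane:

```python
# Hypothetical sketch of API-driven provisioning in an SDDC. The endpoint,
# auth scheme and payload shape are invented for illustration; real SDDC
# platforms expose their own (differing) control-plane APIs.
import requests

SDDC_API = "https://sddc.example.internal/api/v1"   # hypothetical control plane
HEADERS = {"Authorization": "Bearer <token>"}        # placeholder credentials

def scale_application(app_id: str, cpu_cores: int, memory_gb: int,
                      storage_tb: float, bandwidth_gbps: float) -> None:
    """Request more pooled resources for one application.

    In a hardware-centric silo this would mean ordering and racking kit;
    here it is a single declarative request against the shared pools.
    """
    payload = {
        "compute": {"cpu_cores": cpu_cores, "memory_gb": memory_gb},
        "storage": {"capacity_tb": storage_tb},
        "network": {"bandwidth_gbps": bandwidth_gbps},
    }
    resp = requests.patch(f"{SDDC_API}/applications/{app_id}/resources",
                          json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()

# e.g. give an order-processing app more headroom ahead of a demand surge:
# scale_application("order-processing", cpu_cores=64, memory_gb=256,
#                   storage_tb=4.0, bandwidth_gbps=10)
```

The point of the sketch is the shift in workflow: what once meant purchasing and racking hardware becomes a single declarative request against shared resource pools.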

It’s a compelling vision, but is it real?

We’ll answer that question in part two, predicting how soon SDDCs may become a reality, examining the obstacles holding them back today, and identifying a few of the vendors to watch as SDDC gains traction in the marketplace.


Is converged infrastructure the answer to today’s data centre challenges?

Steve Phillips, Senior Vice President and Chief Information Officer, Avnet, Inc.

In recent blog posts, I’ve discussed a few of the challenges facing today’s data centres, including the growth of data and data analytics, the rise of cloud computing and the growing importance of IT across all aspects of the enterprise.

While software-defined architectures show a lot of potential to address the flexibility, agility and cost pressures on IT operations, widespread adoption is likely several years away, particularly for established data centres with significant legacy investments.

So what can IT teams do today to make their current on-premises infrastructure more streamlined, agile and cost-effective without fully committing to a next-generation software-defined environment? For many of us, the answer lies in converged infrastructure.


The traditional data centre is made up of silos, with each application running on its own server, storage and networking stack, often from different vendors. Provisioning, deploying and managing the hardware needs for each new application requires a significant amount of resources, reducing responsiveness and efficiency and increasing complexity and cost.

While virtualisation has helped to increase efficiency and utilisation rates and reduce costs over the last decade or so, in recent years the gains simply haven’t been able to outpace growing IT-related cost and performance pressures.

In a converged infrastructure solution—sometimes referred to as an integrated system—the computing, storage and networking systems from different manufacturers are all brought together by a vendor into a single solution. All the individual components are engineered and pre-configured to work together, and with your specific data centre environment, right out of the box.

Because the solution arrives as a single, pre-configured unit—in some cases, complete with your specific applications already installed—IT teams are able to streamline the procurement, deployment and operation of new hardware, reducing time and cost in the process.

These converged resources arrive on site fully virtualised, pooled and manageable through a single interface. This approach increases utilisation rates beyond what virtualisation alone can deliver, while also greatly reducing complexity and ongoing maintenance costs.


The IT market has responded favourably to converged/integrated architectures, according to IDC analysts: “An expanding number of organisations around the world are turning to integrated systems as a way to address longstanding and difficult infrastructure challenges,” said Eric Sheppard, IDC’s Research Director, Storage, “[which] makes these solutions an increasingly rare source of high growth within the broader infrastructure marketplace.”

A good example of converged infrastructure at work can be found at VCE, which began as a joint venture bringing together EMC storage, Cisco networking and compute, and VMware virtualisation software.

While Gartner rates VCE, Cisco-NetApp and Oracle as leaders in converged infrastructure, there is healthy competition in a market that IDC forecasts will grow at nearly 20% a year.[1]

Source: Gartner Magic Quadrant for Integrated Systems, 2014


With the converged infrastructure model gaining real traction, manufacturers are beginning to take the concept one step further with what are being called “hyperconverged architectures.”

Instead of bringing together hardware from multiple vendors, hyperconverged solutions rely on the latest hardware from a single vendor, combining super-dense computing cores, all-flash storage and software-based networking into a single pooled appliance.

These components are integrated so tightly within the hyperconverged appliance that they can deliver computing densities and performance that aren’t possible in traditional architectures, along with a range of benefits:

  • Reduced IT complexity
  • Increased utilisation rates
  • Increased agility and responsiveness
  • Lower power consumption and space needs
  • Lower total cost of ownership
  • Lower procurement, deployment and management costs
  • Increased application performance
  • Lower maintenance costs



While the convergence market is populated with many familiar IT brands, the more disruptive hyperconvergence model has been embraced by startups and emerging brands like Nutanix and SimpliVity.

This new wave of vendors is looking to disrupt the market, and their innovative model is riding a surge of interest from the venture capital community: SimpliVity has raised more than $275 million in funding to date, while Nutanix has attracted $312 million and secured a long-term partnership with Dell in 2014.


Converged and hyperconverged solutions are worth considering if you’re part of an established IT shop that needs to:

  • reduce or contain IT expense for on-premises environments
  • respond to the changing needs of your business
  • evolve with the changing trends in IT

In many respects, converged infrastructures are an evolution of virtualisation, delivering utilisation and efficiency improvements without having to “rip and replace” legacy investments in the process.

Until the software-defined data centre becomes a reality, converged/hyperconverged infrastructure solutions represent an excellent opportunity to improve the flexibility and expense profile of a data centre without wholesale abandonment of existing investments.

[1] Source: IDC, Worldwide Integrated Systems 2014–2018 Forecast: State of the Market and Outlook


5 Ways Global and Regional Outsourcing Is Changing

Peter Blythe, Chief Architect, Systems Integrators & Outsourcers at Avnet Technology Solutions EMEA

According to IDC, the industry’s dramatic and disruptive shift to its 3rd Platform for innovation and growth will accelerate in 2015. Spending on these technologies and solutions — growing at 13% — will account for one-third of all industry revenue and 100% of growth. The 2nd Platform world will tip into recession by mid-2015.[1]

Driven by some of the fastest changes the IT market has seen in years, the traditional approach to outsourcing is becoming harder to apply as client and consumer needs constantly change. Whether it is the need for mobile-enabled applications, for cloud-delivered platforms and services, for better business outcomes through the use of data and analytics, or the effect of instantaneous social feedback on brand reputation, the enterprise customer is becoming more demanding as technology moves to the 3rd Platform, and organisations need to be more agile in the marketplace to survive.

If we look back 5-10 years, the IT Outsourcing (ITO) business was focused on running core IT functions for large organisations and government departments under long-term contracts, typically 7-10 years in length. During the contract period, the outsourcer would deliver the core IT services, run the IT environment for the client, and help drive down cost and improve the efficiency of the environment. However, this outsourcing approach was typically inflexible, and it often led to contracts locked into an outdated way of delivering IT, with efficiencies measured against a budget set at the start of the contract.

So, with contract lengths shrinking to typically three years with possible two-year extensions, and with the introduction of cloud-based infrastructure, services and applications, Global Systems Integrators (GSIs) need to be much more agile in their approach to business. Being large organisations themselves, they present a good risk profile to the customer. On the downside, however, large organisations are not always as agile as smaller regional integrators or specialist companies when it comes to adapting to market change.

So how are outsourcing businesses changing to adapt? Here are five ways:

1. Adopting a more agile approach to outsourcing. By embracing Service Integration and Management (SIAM) models, outsourcers can deliver a consistent service level to their clients, no matter what services they are delivering.

By adopting the SIAM model, the systems integrator is able to deliver consistent service levels for the core outsourcing services to the client – whether those services are delivered onshore, offshore or by a third party with specific industry or application knowledge for the area being outsourced, e.g. HCM or retail. Additionally, with this model new services, such as end-user computing or SaaS, can be adopted and added to the service offering more quickly.
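One way to picture the SIAM idea is as a single service catalogue that maps each service to whichever party delivers it, while the integrator enforces one consistent set of service-level targets. A minimal sketch, with invented providers, services and SLA figures:

```python
# Minimal sketch of a SIAM-style service catalogue: one consistent set of
# service-level targets, regardless of who delivers each service.
# The providers, services and SLA figures below are invented for illustration.

service_catalogue = {
    "service_desk":       {"provider": "offshore_partner", "availability_pct": 99.5},
    "hcm_application":    {"provider": "hcm_specialist",   "availability_pct": 99.9},
    "end_user_computing": {"provider": "onshore_team",     "availability_pct": 99.5},
}

def report_breaches(measured: dict) -> list:
    """List the services whose measured availability missed the target."""
    return [name for name, svc in service_catalogue.items()
            if measured.get(name, 0.0) < svc["availability_pct"]]

# Adding a new service (e.g. a SaaS offering) is one catalogue entry,
# not a new management regime:
service_catalogue["saas_crm"] = {"provider": "saas_vendor", "availability_pct": 99.9}

print(report_breaches({
    "service_desk": 99.7, "hcm_application": 99.4,
    "end_user_computing": 99.6, "saas_crm": 99.95,
}))  # -> ['hcm_application']
```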

2. Understanding the customer’s market and its specific business needs. For an ITO, it is not enough to deliver an outsourcing service alone; it is important to also understand the challenges and restrictions of the market being entered. For example, defence outsourcing is quite different from retail outsourcing. Without the skills and the market and business knowledge of that sector, it is very easy to see an outsourcing project go wrong and lose money due to unforeseen industry challenges.

3. Adopting DevOps processes. In order to deliver applications and services in a faster, more agile way, businesses are adopting the DevOps pillars of people, process, culture and tools to achieve better Business Process Management outcomes. This is key to allowing the systems integrator or outsourcer to adapt to the changing, cloud- and consumer-driven society and to deliver front-, mid- and back-office solutions more rapidly to their clients.

4. Building digital practices. Outsourcers are building digital practices to help deliver the innovation that might otherwise be difficult to achieve under the traditional outsourcing model. These practices are designed to meet the needs of the 3rd Platform, with cloud, mobility, social and Big Data analytics at the core of the services they deliver. As with any IT project today, mobility is a key part of the deliverable outcome for the client, since it brings complete business and end-user inclusion, so digital practices must have a mobility strategy as part of their core offering.

Source: IDC MaturityScape Digital Transformation Stage Overview, March 2015. 

5. Focusing on the business transformation need – not just the ITO need. Traditionally, the systems integrator has focused on the IT side of outsourcing, with separate businesses for consultancy and BPO. However, as the shift to the 3rd Platform accelerates, the ITO is incorporating many elements of the business consultancy and BPO businesses into a wider business transformation offering.

In today’s channel, outsourcing businesses are being forced to adapt to constant market change. Is your organisation using any other methods to adapt and to prepare for the 3rd Platform?

[1] IDC Predictions 2015: Accelerating Innovation – and Growth – on the 3rd Platform, doc #252700, December 2014.


Organisations should adopt a new, proactive approach to security breaches

Dieter Lott, Vice President Business Development EMEA, Avnet Technology Solutions

IT infrastructure is in a constant state of change, and nowhere is this more evident than in the security and networking marketplace. New dynamics such as ‘Bring Your Own Device’ (BYOD) and hybrid computing are creating bigger security challenges for businesses as working cultures become more mobile. Traditional methods of data protection are no longer enough: large companies are increasingly subject to high-profile data breaches, and cyber attacks are becoming more calculated. Businesses of every size are slowly waking up to the principle that it’s no longer a case of ‘if’ you’re breached but ‘when’. It is therefore essential for any IT strategy to adopt a modern, proactive approach to security breaches, one grounded in security intelligence.

Traditionally, data breaches were seen as the work of external sources, and security efforts were designed to keep threats out by building walls around an organisation’s data. However, the requirements of an enterprise security solution are changing rapidly. The adoption of private, hybrid and public cloud solutions allows businesses to store their applications and information in a variety of places, all of which need to be addressed and secured. And as businesses’ infrastructures become more complex to manage, insider threats become increasingly common.

These transformations are creating security gaps, and companies are facing the challenge of how to secure applications and devices in a way that’s not overly disruptive to the user but also provides the right level of corporate security. As companies adopt cloud solutions and broaden their network scope, they also begin to struggle with how to meet security compliance demands without sacrificing network availability. Users are also putting pressure on enterprise networks as they increasingly embrace mobile working practices. This level of flexibility increases the pressure on perimeter security, as users drive a greater volume of traffic onto the network by accessing services from multiple locations.

To move forward, organisations need to accept that breaches will very likely still happen, and it’s essential to have the right systems and processes in place to manage an event once it has happened. The modern approach to network security is all about the intelligence you have on your environment and the speed with which you can respond to a threat. The key is using forensics and analytics to track a breach when it happens, allowing network managers to understand the damage that has been done and to find the person who committed the offence.

To close these security gaps, businesses need network security solutions that give complete visibility and do more than just alert you to breaches. I believe the next generation of security software will operate in constant learning mode, adapting to the strategies of potential threats in real time. A number of products and services already exist in the market to help organisations manage such an event, and businesses should look to adopt a range of security solutions that offer holistic support before, during and after a security breach.
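To make “constant learning mode” concrete, here is a toy sketch of the underlying idea: a monitor that keeps a continuously updated baseline of per-host traffic and flags statistically unusual behaviour as it arrives. The model and thresholds are deliberate simplifications; real security analytics platforms use far richer signals:

```python
# Toy sketch of 'constant learning' network monitoring: keep a running
# per-host baseline of traffic volume and flag statistically unusual spikes.
# Real security analytics platforms use far richer models and signals.
import math
from collections import defaultdict

class TrafficBaseline:
    """Running per-host mean/variance (Welford's online algorithm).

    The baseline 'learns' continuously as new traffic samples arrive.
    """
    def __init__(self, threshold_sigmas: float = 3.0, warmup: int = 20):
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])  # host -> [count, mean, M2]
        self.threshold = threshold_sigmas
        self.warmup = warmup          # samples to observe before alerting

    def observe(self, host: str, bytes_sent: float) -> bool:
        """Update the host's baseline; return True if this sample looks anomalous."""
        n, mean, m2 = self.stats[host]
        anomalous = False
        if n >= self.warmup:
            std = math.sqrt(m2 / n)
            if std > 0 and abs(bytes_sent - mean) > self.threshold * std:
                anomalous = True      # candidate for forensic follow-up
        # Welford update: fold the new sample into the running baseline.
        n += 1
        delta = bytes_sent - mean
        mean += delta / n
        m2 += delta * (bytes_sent - mean)
        self.stats[host] = [n, mean, m2]
        return anomalous

# e.g. feed samples from a (hypothetical) flow log:
# monitor = TrafficBaseline()
# for host, volume in flow_records:
#     if monitor.observe(host, volume):
#         print(f"Unusual traffic from {host}: {volume} bytes")
```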

There is much more focus now on the analytics of what’s happening in the network, how it’s happening, and the forensics once something has happened: what damage has been done? These technologies can help businesses plug the security intelligence gap, enabling them to move from a ‘defensive’ approach to a ‘proactive’ one that limits and prevents damage from security breaches – today and tomorrow.


Ditch the Digital Duct Tape – Consider All Options with Hyper-Converged

Tom Corrigan, sales director, Avnet Technology Solutions, UK, explains why the channel is perfectly positioned to take advantage of new modular hyper-converged systems and why there is no time to lose.

The channel is buzzing with terms such as the software-defined data centre, converged systems, hybrid cloud and, more recently, hyper-converged infrastructure. But what does it all mean, and what is the difference between converged and hyper-converged systems? Most importantly, how can the UK channel develop strategies that drive profitable growth from this technology sector?

Analysts are predicting more than 150% market growth through 2015 and beyond, and this dramatic growth has seen a large number of technology start-ups delivering very capable solutions focused on this area. Equally exciting is the breadth of emerging technologies being launched by the established vendors who have strategic relationships with Avnet, both in the UK and across the region.

Alongside this vendor push, we are seeing significant volumes of requests from business partners and their end-user communities for integrated platforms to manage new application deployments, rather than the provision of disparate compute, network and storage systems in the legacy fashion. In line with this change in demand, reference architectures and converged systems have emerged as pre-defined combinations of server, storage and network solutions that help simplify platform design for what are often fairly complex workload requirements.

Hyper-converged infrastructure takes this to the next level, offering a fully integrated platform across compute, network, storage and hypervisor that is designed, configured and delivered as a single appliance. This modular design means systems are quick to design, simpler to deploy and can be scaled out by adding more appliances as required.
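The scale-out arithmetic is worth making explicit: because each appliance adds compute and storage in fixed increments, capacity planning becomes additive. A small sketch, with invented per-node specifications and an assumed replication overhead:

```python
# Sketch of hyper-converged scale-out arithmetic: capacity grows in fixed
# increments as appliances are added. Per-node specs and the replication
# factor are hypothetical, for illustration only.

NODE = {"cpu_cores": 48, "ram_gb": 512, "raw_flash_tb": 20}
REPLICATION_FACTOR = 2    # assume each block is stored twice for resilience

def cluster_capacity(nodes: int) -> dict:
    """Aggregate capacity of a cluster of identical appliances."""
    return {
        "cpu_cores": nodes * NODE["cpu_cores"],
        "ram_gb": nodes * NODE["ram_gb"],
        # Replication for resilience reduces raw flash to usable capacity.
        "usable_storage_tb": nodes * NODE["raw_flash_tb"] / REPLICATION_FACTOR,
    }

for n in (3, 4, 8):       # start small, then scale out node by node
    print(f"{n} nodes -> {cluster_capacity(n)}")
```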

So what does all this mean to business partners looking to broaden their capabilities beyond selling compute systems and storage arrays, and perhaps introducing a hypervisor for virtualisation? With technology that is potentially simpler for end-users to design and deploy, now is the time for partners to expand their skills to include the application stack and the delivery of margin-rich services to support a hyper-converged infrastructure. By taking this approach, opportunities will open up around private cloud and application consulting, in addition to application deployment, which is where the best margin opportunities reside for the channel community.

However, the first step towards hyper-converged is to choose carefully which vendors to partner with. Which ecosystems offer the most benefits? While hyper-converged is an emerging technology, it has already been validated by many established vendors. The building blocks of most hyper-converged platforms may not yet be one-size-fits-all, but they are certainly one-size-fits-many.

Within our global markets, Avnet is seeing this change, and we feel that now is the time to look beyond the digital duct tape that holds disparate hardware and software stacks together and consider a more consolidated approach to delivering applications and related services.

How can Avnet help? Well, for starters, the strategic partnerships we hold globally, regionally and locally are with the leading technology providers, and this gives us a huge head start: we can access the technology stack that best fits the customer’s requirements. To supplement this, we have a dedicated in-house technical and sales team focused solely on converged platforms. We also have immense capability to build these systems to order, at scale, across EMEA from our Tongeren facility, which is certified to the highest levels required by our supplier partners.

Bringing new technologies to market and enabling the channel to capitalise on this clear market opportunity is hugely important as Avnet continues on its journey to transform technology into business solutions for customers around the world.
