Avnet on the ground at NetApp Insight, Berlin

Evan Unrue, EMEA Converged Infrastructure Technical Lead, Avnet Technology Solutions


Avnet ships pre-engineered Flexpod systems all over the world for NetApp; call it Trade Show Flexpod as a Service. Our primary purpose in this endeavour is to maintain, ship and deploy these Flexpod units to the various trade shows NetApp attends for demonstration and display purposes, and then to educate anyone who wants to understand the nuances, architecture, features and benefits of Flexpod. This year, however, was slightly different.

That’s because a solid proportion of people approaching the Flexpod were customers who had deployed Flexpod recently – or not so recently in some cases. This gave me the opportunity to ask a few questions:

  1. Why did they buy Flexpod?
  2. What were they expecting it to deliver?
  3. Did it deliver what they expected?
  4. How does Flexpod factor into their technology roadmap?
  5. What pain did it ease, if any?

One recurring theme was that many had been so inwardly focused on what they were looking for when they bought the Flexpod that this fixed reference point shaped how they leveraged the platform. As a result, they hadn’t fully explored what they could now do with Flexpod on the ground.


Many of these reference points were reinforced simply by “how they had always done it”. When I walked through the full extent of what they had invested in with Flexpod – the things it can do, the ways it can be deployed, managed and automated, and the strong complementary technologies from Cisco and NetApp that tie into it – some really interesting conversations developed around what customers could do next.

For context, the types of organisations I was speaking with were as follows:

  • A large media and publishing group with multiple divisions
  • A global mining company
  • A few service providers
  • A company that provides vertically aligned managed IT to pharma and a few other verticals
  • A number of typical commercial SME outfits (averaging a few hundred to a few thousand users)


Why did these organisations buy Flexpod in the first place? The reasons ranged from being fed up with managing failing infrastructure and easing management pain, to delivering on a more strategic IT roadmap.

Below are some of the reasons the customers I spoke with came to explore Flexpod as a solution:

Managing an “Accidental Architecture”

Dealing with increasingly unmanageable infrastructure born of sweating assets for too long, tactically replacing failing kit, plugging resource gaps and, in some cases, acquiring other businesses whose infrastructure then has to be stitched into the existing platform.

This cocktail of diversely branded old and new kit often results in a seemingly endless struggle to keep critical applications up in the face of failing hardware, or a constant flow of troubleshooting tasks as the thin veneer of interoperability grows ever thinner. For the customers I spoke to, the resulting “accidental architecture” consumed so much time to maintain that innovation seemed to be off the table.

Supporting/deploying platforms and applications in the field

One thing was clear: a lack of standardisation was causing real issues around time-to-resolution of support issues and time-to-deployment of applications and infrastructure. These customers had many platforms out in the field which weren’t necessarily poorly constructed, but the lack of standardisation in configuration, vendor technology and even the way the infrastructure was racked, patched and managed made it hard to apply a procedural approach to root cause analysis and to resolve issues in good time.

A few of the companies I spoke with likened deploying any significant application to playing Jenga: stacking new workloads on creaking, overly agnostic infrastructure was compounding the “accidental architecture” issue. They had to stitch resources together in increasingly creative ways and tactically deploy infrastructure on the fly. This is not a quick process; prepping everything for these new applications often takes weeks or months.

The IT vs Business Expanse

As NetApp Insight is primarily a technical conference, it was of course mainly attended by engineers, IT managers, IT directors and CTOs rather than customer CEOs and CFOs, so admittedly I only heard one side of the story here. However, that side made for an interesting story. A lot of these guys had become accustomed to feeling like “the help”: they were rarely invited to discuss the topics which influenced the demands being pushed onto IT; they weren’t asked what IT could do for the business; and the topic of making IT a profit centre rather than a cost centre was completely alien. The business attitude towards old and hard-to-maintain kit is often to let it sweat because “it works, it’s fine and nothing has broken yet” – this divide forces IT into reactive mode.

The Battle with Shadow IT

The group IT Director of one company was faced with a situation where there were three distinct parts of the business, and each had aligned itself to deploying applications with a different cloud provider for dev and non-critical/non-core applications. This was a struggle for IT, as they were losing visibility of the business’s application landscape, competing with external IT providers, and at real risk of breaching certain regulations if data was being handled on cloud platforms outside their sphere of control.



Flexpod isn’t a magic box with the answer to every company’s IT struggles, but it does give customers a platform they can leverage to address their issues. There has to be an appetite within the business to address the people and process aspects of IT, and most importantly the business attitude towards IT, before any technology is going to offer a long-lasting solution. To put it bluntly, you can buy a new car, but if you’re a bad driver a new car isn’t going to stop you crashing. The driver needs to know where he or she is going, have full control of the vehicle and listen to the engine to know when something is wrong. The same rules apply to the relationship between the business and IT.

Starting from the bottom and working up, one thing Flexpod gives many of these organisations is control. Standardisation of hardware and software makes lifecycle management of IT simpler and less painful. Less diversity in the infrastructure means they can manage firmware levels across platforms with decreased risk, and hardware interoperability is a non-issue because the components are all certified to work together, no questions asked. Adding new resources aligns with the standardisation set around Flexpod, meaning infrastructure deployments and application roll-outs are massively accelerated.
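As a small illustration of what that standardisation enables, here is a minimal sketch in Python of checking an estate against a single certified firmware baseline. The device roles, versions and inventory below are entirely hypothetical; in practice this data would come from your monitoring or CMDB tooling rather than being hard-coded:

```python
# Hypothetical example: checking a standardised estate against one certified
# firmware baseline. Device roles, versions and sites are illustrative only.

# The certified interoperability baseline for the standard build.
BASELINE = {
    "ucs_fabric_interconnect": "4.1(2a)",
    "nexus_switch": "9.3(5)",
    "ontap_controller": "9.7P6",
}

# A made-up inventory, as it might be exported from monitoring or a CMDB.
inventory = [
    {"site": "London", "role": "ucs_fabric_interconnect", "firmware": "4.1(2a)"},
    {"site": "London", "role": "ontap_controller", "firmware": "9.7P6"},
    {"site": "Berlin", "role": "nexus_switch", "firmware": "9.2(3)"},  # drifted
]

def find_drift(devices, baseline):
    """Return the devices whose firmware does not match the baseline."""
    return [d for d in devices
            if baseline.get(d["role"]) not in (None, d["firmware"])]

for device in find_drift(inventory, BASELINE):
    print(f"{device['site']}: {device['role']} is on {device['firmware']}, "
          f"baseline is {BASELINE[device['role']]}")
```

The point is simply that with one standard build, “are we at the certified levels?” becomes a single comparison rather than a per-vendor investigation.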


Another by-product of Flexpod’s standardisation is that, with everything being a known commodity within the datacentre or across sites, companies can apply more efficient root cause analysis procedures with less guesswork in how they troubleshoot issues within their infrastructure. This benefit is compounded further when you consider that Flexpod is supported as a single platform, meaning you’re not spending half the day trying to get one vendor to take ownership of an issue while they point fingers amongst themselves.

Ultimately, gaining control over your infrastructure means less downtime, less time troubleshooting alerts and less of your time wasted. That leaves more time to deploy people on tasks that actually improve things, rather than just “fixing” them.

When the IT organisation moves out of a purely reactive state and has time to be proactive, it can start to look at how to align more closely with the business. In reality this works both ways – IT has to be met in the middle – but without the need to be purely reactive, there is at least time and breathing space to have the important conversations and start to make changes.

Something one of the organisations I spoke with was looking to tackle was their shadow IT issue. Their roadmap involved leveraging Flexpod to regain control of their core IT, then over time implementing automation elements such as UCS Director, Cisco Prime Service Catalog and a few others to develop a service-oriented, policy-driven approach to delivering internal services. From there they could standardise on a set of cloud providers and apply the same policy-driven approach to manage how and where workloads go into the cloud. This would allow the business to consume from its own IT in much the same way it had from the cloud, while IT regains control of the application landscape and ensures compliance where needed.
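To make the policy-driven idea a little more concrete, below is a minimal sketch in Python of the kind of placement rule such a roadmap implies. The rules, workload attributes and provider names are hypothetical, and in reality this logic would be expressed inside tools like UCS Director or Cisco Prime Service Catalog rather than in hand-written code:

```python
# A minimal, hypothetical sketch of policy-driven workload placement.
# Rules, attributes and destination names are illustrative only.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    contains_regulated_data: bool  # e.g. data subject to data-protection rules
    environment: str               # "production" or "dev"

# Hypothetical approved destinations.
APPROVED_CLOUDS = ["approved-cloud-a", "approved-cloud-b"]
INTERNAL_PLATFORM = "internal-flexpod"

def place(workload: Workload) -> str:
    """Decide where a workload may run under the (hypothetical) policy."""
    # Regulated data must stay on infrastructure IT controls directly.
    if workload.contains_regulated_data:
        return INTERNAL_PLATFORM
    # Production stays internal; dev/test may go to an approved provider.
    if workload.environment == "production":
        return INTERNAL_PLATFORM
    return APPROVED_CLOUDS[0]

if __name__ == "__main__":
    for w in [Workload("billing", True, "production"),
              Workload("campaign-site", False, "dev")]:
        print(f"{w.name} -> {place(w)}")
```

The value is that the same rules apply whether the destination is the internal Flexpod or an approved cloud, which is how IT keeps visibility and compliance while still giving the business self-service.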

In summary, Flexpod offers a mechanism to help IT get control of a business’s infrastructure and free up time and money to do things better. Getting to the heart of it, doing things better means delivering services more quickly and seeing faster returns, or rationalising how you do things today and easing operating expenses in both time and man-hours. The business is certainly responsible for implementing the fundamental changes, but Flexpod is helping many customers execute faster, with less risk and less pain.


If you’re a partner looking for more information on our Flexpod solution, visit our website: http://avnet.me/fsa


The Software-Defined Data Centre Explained: Part II of II

Steve Phillips, CIO, Avnet Inc.

In part one of this article, we examined how data centres have evolved, and why current solutions are leaving some businesses searching for what they believe is the next evolution in data centre architecture: the software defined data centre (SDDC).

In this post we’ll examine how soon SDDCs may become a reality, what obstacles are holding them back, and identify a few of the vendors to watch as the SDDC takes shape.


According to Gartner’s Hype Cycle for 2014, the SDDC—part of what they refer to as “software-defined anything,” or SDA/SDX—is still firmly in the Cycle’s first stage, where the promise of technology has yet to be grounded in the context of real-world application.

That hasn’t stopped Gartner from calling software-defined anything “one of the top IT trends with the greatest possible impact on an enterprise’s infrastructure and operations,” according to Computer Weekly.


While the potential for SDDC may be great, embracing it is more of a revolution than an evolution. The migration to a virtualized environment could be embraced by traditional data centres as time, budget and business need allowed, with virtualized racks next to traditional hardware racks.

On the other hand, software-defined data centres require common APIs to operate: the hardware can be pooled and controlled by software or it can’t. As a result, companies with significant legacy infrastructures may find it difficult to adopt SDDC in their own environments.

One way for existing data centres to avoid the “all or nothing” approach of SDDC is by embracing what Gartner began referring to as “bimodal IT” in 2014. Bimodal IT identifies two types of IT needs:

  • Type 1 is traditional IT, which places a premium on stability and efficiency for mission-critical infrastructure needs.
  • Type 2 refers to a more agile environment focused on speed, scalability, time-to-market, close alignment with business needs, and rapid evolution.

A bimodal IT arrangement would allow large legacy IT operations to establish a separate SDDC-driven environment to meet business needs that call for fast, scalable and agile IT resources, while continuing to rely on traditional virtualized environments for applications and business needs that value uptime and consistency above all else.

Over time, more resources could be devoted to the new SDDC architecture as the needs of the business evolve, without requiring the entire data centre to convert to SDDC all at once.


Given how different software-defined data centre architectures are from traditional and virtualized environments, it’s a golden opportunity for new and emerging vendors to gain a first-mover advantage on some of the entrenched data centre giants.

APIs: The critical components of the SDDC are the APIs that control the pooled resources. OpenStack’s API is the open-source market leader at this point, while many vendors still rely on their own proprietary APIs to control their hardware. (A brief sketch of what provisioning through such a common API can look like follows this rundown.)

Computing & Storage: Emerging players like Nimble Storage and Nutanix are at the forefront of the SDDC movement, but data centre incumbents like IBM, HP, Dell, NetApp, Cisco and EMC are right there with them.

Networking: While Cisco, Juniper and HP are certainly the focus of the software defined networking space, startups like Big Switch and Cumulus Networks are gaining significant market interest, funding and traction as the SDDC model gains momentum.

Converged Infrastructure: Two additional initiatives worth keeping an eye on are VCE with its Vblock solutions, and the Cisco-NetApp Flexpod integrated infrastructure solution. These products are designed to meet the needs of both “clean sheet” and legacy IT environments interested in pursuing the bimodal IT approach.
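To give a feel for the API point above, here is a brief, illustrative sketch using the OpenStack SDK for Python. It assumes a cloud named my-sddc is already defined in clouds.yaml, and the image, flavour and network names are placeholders rather than recommendations; treat it as a sketch of the idea, not a deployment pattern:

```python
# Illustrative only: provisioning compute from a pooled environment through
# OpenStack's common API. Assumes a cloud called "my-sddc" is configured in
# clouds.yaml and that the named image, flavour and network already exist.
import openstack

conn = openstack.connect(cloud="my-sddc")

image = conn.image.find_image("ubuntu-22.04")        # placeholder image name
flavor = conn.compute.find_flavor("m1.small")        # placeholder flavour
network = conn.network.find_network("app-network")   # placeholder network

server = conn.compute.create_server(
    name="demo-app-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the scheduler has placed the instance on pooled hardware.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same few calls work regardless of which vendor’s hardware sits underneath, which is exactly what makes a common API the linchpin of the SDDC model.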

So while the reality of the SDDC may be a few years away for many IT environments with considerable legacy investments, it’s certainly a new and compelling vision for the data centre.

More importantly, it appears to be the solution IT is looking for in the always-on, mission-critical, cloud-ready and data-rich environment we operate in today. Expect to hear more on this topic in future Behind the Firewall blog posts.


The Software-Defined Data Centre Explained: Part I of II

Steve Phillips, CIO, Avnet Inc.

The traditional data centre is being challenged harder than ever to keep up with the pace of change in business and technology.

Three recent megatrends—the growth of data and data analytics, the rise of cloud computing and the increasing criticality of technology to the operations of many businesses—have shown that legacy infrastructures are too inflexible, too inefficient and too expensive for a growing number of businesses.

In this first of two posts on this topic, we’ll briefly recap the evolution of the data centre to this point, examining where and why current architectures are falling short of many businesses’ needs.


From the beginning, data centres have been built around a hardware-centric model of expansion. Years ago, if your company wanted to launch a new application, the IT team would purchase and provision dedicated servers to handle the computing duties, pairing them with a dedicated storage unit to manage the application database and backup needs.

In order to ensure that the platform could handle surges in demand, IT would provision enough computing power and storage to meet the ‘high water mark’ of each application’s forecast demand. As a result, many servers and storage units spent most of their time running at a fraction of capacity—as little as 8% according to a 2008 McKinsey study.

To make matters worse, these server and storage pairs used a dedicated high-capacity network backbone that kept each platform in its own distinct silo. As a result, data centres were overbuilt by design, overly expensive, and slow to adapt to the changing needs of the business.



The adoption of server and storage virtualization technologies into the enterprise about 10 years ago addressed some of these issues by allowing one server or storage array to do the job of many.

Provisioning a portion of a server or storage unit for a new application was faster and less expensive than buying new hardware, and it went some way towards reducing the issue of underutilisation…but not far enough. According to a Gartner report, utilisation rates had only grown to 12% by 2012.


However, increased computing densities allowed the data centre to provide significantly more computing horsepower to the business, without the need to expand the centre’s square footage in the process.


But over the last five years, data centers have been buffeted by three megatrends that have pushed current virtualization technologies to their limits:

  1. The exponential growth of data and the rise of data analytics, which have exceeded the most aggressive capacity scenarios for many storage and networking infrastructures.
  2. The adoption of cloud computing and the “hybrid cloud” model, in which the company data centre shares computing and storage responsibilities with third-party cloud vendors in remote locations.
  3. The increasing reliance of many businesses on always-on technology, which requires IT teams to provision and scale resources rapidly to accommodate new initiatives and business opportunities.

Cloud computing as a whole has also increased the rate of innovation and the expectations of the business, leaving IT teams and their data centres working hard to keep up.

One response to these pressures is the software-defined data centre, or SDDC: a new architecture that allows IT to deliver applications and network capacity with speed, agility and an eye on the bottom line.


In a software-defined data centre (SDDC) the focus on hardware-related silos is removed. All the essential elements of a computing platform—computing, storage, networking and even security—are pooled, virtualized and implemented through a common set of application programming interfaces, or APIs.

 With all of the data centre’s hardware resources pooled together, the computing, storage, networking and security needs of the business can be monitored and provisioned much more rapidly.

Applications experiencing a surge in demand can have more computing power allocated in an instant. New applications can be provisioned just as quickly, speeding deployment and time-to-value. Data-rich analytics reports and backup routines receive the bandwidth they need, only when they need it.
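As a toy illustration of the pooling idea, the sketch below (plain Python, with made-up figures) models a shared capacity pool from which applications borrow and return compute as demand rises and falls. In a real SDDC this arbitration happens in the platform’s control plane, not in application code:

```python
# Toy model of a pooled compute resource; figures are made up for illustration.

class ComputePool:
    def __init__(self, total_vcpus: int):
        self.total = total_vcpus
        self.allocated = {}  # app name -> vCPUs currently assigned

    @property
    def free(self) -> int:
        return self.total - sum(self.allocated.values())

    def scale(self, app: str, vcpus: int) -> bool:
        """Grow (or shrink, with negative vcpus) an app's share of the pool."""
        current = self.allocated.get(app, 0)
        if vcpus > self.free or current + vcpus < 0:
            return False  # not enough headroom, or would go negative
        self.allocated[app] = current + vcpus
        return True

pool = ComputePool(total_vcpus=128)
pool.scale("web-frontend", 16)        # steady-state allocation
pool.scale("analytics", 32)
pool.scale("web-frontend", 48)        # surge in demand: borrow from the pool
print(pool.allocated, "free:", pool.free)
pool.scale("web-frontend", -48)       # surge over: hand capacity back
print(pool.allocated, "free:", pool.free)
```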

It’s a compelling vision, but is it real?

We’ll answer that question in part two, predicting how soon SDDCs may become a reality, examining the obstacles holding them back today, and identifying a few of the vendors to watch as SDDC gains traction in the marketplace.


Is converged infrastructure the answer to today’s data centre challenges?

Steve Phillips, Senior Vice President and Chief Information Officer
Avnet, Inc

In recent blog posts, I’ve discussed a few of the challenges facing today’s data centres, including the growth of data and data analytics, the rise of cloud computing and the growing importance of IT across all aspects of the enterprise.

While software-defined architectures show a lot of potential to address the flexibility, agility and cost pressures on IT operations, widespread adoption is likely several years away, particularly for established data centres with significant legacy investments.

So what can IT teams do today to help make their current on-premise infrastructure more streamlined, agile and cost effective without fully committing to a next-generation software-defined environment? For many of us, the answer lies in converged infrastructures.


The traditional data centre is made up of silos, with each application running on its own server, storage and networking stack, often from different vendors. Provisioning, deploying and managing the hardware needs for each new application requires a significant amount of resources, reducing responsiveness and efficiency and increasing complexity and cost.

While virtualization has helped to increase efficiency and utilisation rates and reduce costs over the last decade or so, in recent years the gains simply haven’t been able to outpace growing IT-related cost and performance pressures.

In a converged infrastructure solution, sometimes referred to as an integrated system, the computing, storage and networking systems from different manufacturers are all brought together by a vendor into a single solution. All the individual components are engineered and pre-configured to work together, and with your specific data centre environment, right out of the box.

By delivering a single, pre-configured solution—in some cases, complete with your specific applications already installed—IT teams are able to streamline the procurement, deployment and operation of new hardware, reducing time and cost in the process.

These converged resources arrive on site fully virtualised, pooled and able to be managed through a single interface. This approach increases utilisation rates beyond what virtualization alone can deliver, while also greatly reducing complexity and ongoing maintenance costs. 


The IT market has responded favourably to converged/integrated architectures, according to IDC. “An expanding number of organisations around the world are turning to integrated systems as a way to address longstanding and difficult infrastructure challenges,” said Eric Sheppard, IDC’s Research Director for Storage, “[which] makes these solutions an increasingly rare source of high growth within the broader infrastructure marketplace.”

A good example of converged infrastructure at work can be found at VCE, which began as a joint venture bringing together EMC storage, Cisco computing and networking, and VMware virtualization software.

While Gartner rates VCE, Cisco-NetApp and Oracle as leaders in converged infrastructure, there is healthy competition in a market that IDC forecasts is growing at nearly 20% a year.[1]

Source: Gartner Magic Quadrant for Integrated Systems, 2014


With the converged infrastructure model gaining real traction, manufacturers are beginning to take the concept one step further with what are being called “hyperconverged architectures.”

Instead of bringing together hardware from multiple vendors, hyperconverged solutions rely on the latest hardware from a single vendor, combining super-dense computing cores, all-flash storage and software-based networking into a single pooled appliance.

These next-generation infrastructure approaches are integrated so tightly within the hyperconverged appliance that they can deliver computing densities, performance and benefits that aren’t possible in traditional architectures, including:

  • Reduced IT complexity
  • Increased utilisation rates
  • Increased agility and responsiveness
  • Lower power consumption and space needs
  • Lower total cost of ownership
  • Lower procurement, deployment and management costs
  • Increased application performance
  • Lower maintenance costs



While the convergence market is populated with many familiar IT brands, the more disruptive hyperconvergence model has been embraced by startups and emerging brands like Nutanix and SimpliVity.

This new wave of vendors is looking to disrupt the market, and their innovative model is riding a wave of interest from the venture capital community: SimpliVity has raised more than $275 million in funding to date, with Nutanix attracting an additional $312 million and a long-term partnership with Dell in 2014.


Converged and hyperconverged infrastructure is worth considering if you’re part of an established IT shop that needs to:

  • reduce or contain IT expense for on-premise environments
  • respond to the changing needs of your business
  • evolve with the changing trends in IT

In many respects, converged infrastructures are an evolution of virtualization, delivering utilisation and efficiency improvements without having to “rip and replace” legacy investments in the process.

Until the software-defined data centre becomes a reality, converged/hyperconverged infrastructure solutions represent an excellent opportunity to improve the flexibility and expense profile of a data centre without the need for wholesale abandonment of existing data centre investments.

[1] Source: IDC, Worldwide Integrated Systems 2014–2018 Forecast: State of the Market and Outlook


5 Ways Global and Regional Outsourcing is Changing

Peter Blythe, Chief Architect, Systems Integrators & Outsourcers at Avnet Technology Solutions EMEA

According to IDC, the industry’s dramatic and disruptive shift to its 3rd Platform for innovation and growth will accelerate in 2015. Spending on these technologies and solutions — growing at 13% — will account for one-third of all industry revenue and 100% of growth. The 2nd Platform world will tip into recession by mid-2015.¹

Driven by some of the fastest changes the IT market has seen for years, the traditional approach to outsourcing is becoming harder to apply as client and consumer needs constantly change. Whether it is the need for mobile-enabled applications, cloud-delivered platforms and services, better business outcomes through the use of data and analytics, or the effect of instantaneous social feedback on brand reputation, the enterprise customer is becoming more demanding as technology moves to the third platform, and organisations need to be more agile in the marketplace to survive.

If we look back 5-10 years, the IT outsourcing (ITO) business was focused on running core IT functions for large organisations and government departments under long-term contracts with a typical length of 7-10 years. During the contract period, the outsourcer would deliver the core IT services, run the IT environment for the client, and help drive down cost and improve the efficiency of the environment. However, this approach was typically inflexible, and often led to the contract being based on an outdated way of delivering IT, with efficiencies measured against a budget determined at the start of the contract.

So, with contract lengths shrinking to typically three years with possible two-year extensions, and with the introduction of cloud-based infrastructure, services and applications, Global Systems Integrators (GSIs) need to be much more agile in their approach to business. Being large organisations themselves, they present a good risk profile for the customer; on the downside, they are not always as agile as smaller regional integrators or specialist companies when it comes to adapting to market change.

So how are outsourcing businesses adapting to these market changes?

1. Adopting a more agile approach to outsourcing, including Service Integration and Management (SIAM) models, in order to deliver a consistent service level to clients no matter what services are being delivered.

By adopting the SIAM model, the systems integrator is able to deliver consistent service levels for core outsourcing services to the client, whether those services are delivered onshore, offshore or by a third party with specific industry or application knowledge for the area being outsourced, e.g. HCM or retail. Additionally, by using this model, new services such as end-user computing or SaaS can be adopted and added to the service offering more quickly.

2. Understanding the customer’s market and its specific business needs. As an ITO it is not enough to deliver an outsourcing service alone; it is also important to understand the challenges and restrictions of the market you are entering. For example, defence outsourcing is quite different from retail outsourcing. Without the skills, market knowledge and business knowledge of that sector, it is very easy for an outsourcing project to go wrong and lose money due to unforeseen industry challenges.

3. Adopting DevOps processes. In order to deliver applications and services in a faster and more agile way, businesses are adopting the DevOps pillars of people, process, culture and tools to deliver better business process management outcomes. This is key to allowing the systems integrator or outsourcer to adapt to a changing, cloud- and consumer-driven society and to deliver front-, mid- and back-office solutions more rapidly to their clients.

4. Building digital practices. Outsourcers are building digital practices to help deliver the innovation that might otherwise be difficult to achieve under the traditional outsourcing model. These practices are designed to meet the needs of the third platform, with cloud, mobility, social and big data analytics at the core of the services they deliver. As with any IT project today, mobility is a key part of the deliverable outcome for the client, as it brings complete business and end-user inclusion, so digital practices have to include a mobility strategy as part of the core offering they take to market.

Source: IDC MaturityScape Digital Transformation Stage Overview, March 2015. 

5. Focusing on the business transformation need – not just the ITO need. Traditionally the systems integrator has focused on the IT side of outsourcing, with separate businesses for consultancy and BPO. However, as the change to the third platform accelerates, the ITO is incorporating many elements of the business consultancy and BPO businesses as part of a wider business transformation offering.

In today’s channel, outsourcing businesses are being forced to adapt to constant market change. Is your organisation using any other methods to adapt and to prepare for the third platform?

¹ IDC Predictions 2015: Accelerating Innovation – and Growth – on the 3rd Platform, doc #252700, December 2014.
