IoT Lab Munich: the benefits of the partnership

By Victor Paradell, vice president, IoT Solutions & Analytics, EMEA at Tech Data

The Internet of Things (IoT) is connecting humanity to physical objects in ways we never imagined. With billions of objects exchanging an endless stream of data and becoming active participants in business processes enabled by the cloud, IoT is helping businesses transform the way they operate, creating significant efficiency gains and improved customer experience. Worldwide, IoT is a huge business: global spending on the technology reached more than $643.8 billion in 2015 and is expected to account for approximately $998 billion of IT budgets in 2018.

Analytics and IoT offer partners the opportunity to deliver new services and infrastructure by building out existing platforms to extract even more value from them. The channel is in a unique position: it has access to a constantly developing technology landscape, and a partner ecosystem that is tapped into numerous verticals on the cusp of next-generation technology adoption. As customers become smarter and organisations recognise the importance of cross-vendor solutions, IoT applications will fast become a reality.

So, what is Technology Solutions, formerly a division of Avnet and now part of Tech Data, doing within the IoT sector, and why is its recent partnership with the IBM Watson Lab so important? By working closely with IBM's leading technologists and IoT experts, Tech Data plans to enhance its existing IoT technical expertise through hands-on training and on-the-job learning. Tech Data's team of IoT and analytics experts will partner with IBM on joint business development opportunities across multiple industries, including smart manufacturing, logistics and transportation, retail and smart spaces. The future looks more connected than ever before: everything from your car to your home, to your kitchen, to the collar on your pet will be linked. By partnering with IBM, Tech Data is taking the guesswork out of IoT development, building a solid ecosystem of the 'bread-and-butter' technology companies it can work with to develop IoT solutions.

IoT is important for Tech Data as it has the potential to transform every industry. Tech Data’s vendors are actively involved in developing products and solutions that will accelerate the adoption of IoT and further advance the industry.

Tech Data is perfectly positioned as an IoT aggregator, right at the centre of the IoT ecosystem, to create and optimise IoT solutions thanks to the 360-degree view of the market it possesses, from the heart of the technology supply chain.

Tech Data and IBM will work closely together to accelerate solution development with IBM's Watson IoT and Bluemix platforms. This includes proof-of-concept models, IoT starter kits and an extensive catalogue of DevOps, mobile and analytics services delivered via IBM's and Tech Data's cloud platforms. These IoT starter kits are just a glimpse of how this collaboration can help companies across the globe begin their IoT journey. Another reason the IoT lab is vital to Tech Data's position in the field is that it aligns customers with capabilities and services to address the entire range of IoT complexities, such as sensor design and development, infrastructure and gateway solutions, connectivity options, cloud IoT platforms, and global inventory management; all of which will bring customers to the forefront of the IoT sector.

The new joint lab will be a key place for customers and partners to come together to collaborate and advance their solution design. It is a vital resource that will be an asset to the Tech Data portfolio over the long term. In a nutshell, Tech Data's mission for the sector is to aggregate IoT solutions and provide partners with a simplified route to the rapidly expanding IoT market, so they can bring innovative connected solutions to customers and make IoT visions a reality.


Posted under Internet of Things (IoT)

This post was written on May 17, 2017

The Future of Flash Storage

By Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA 

Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA, Technology Solutions, now part of Tech Data.

This is a big topic, but in essence there are a few areas where we will see flash technology advance in the immediate to mid-term: cost, capacity, connectivity, performance and acceptance.

With new innovations in the silicon (such as Triple Level Cell and 3D NAND flash), achievable capacities will increase and associated costs will be driven down further, to the point where it is widely believed that 2017 will be the year price parity on cost per gigabyte is achieved between SSD and traditional hard drives, depending on the type of SSD.
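
The dynamic behind that parity claim is simply two price curves declining at different rates. A minimal sketch, with entirely hypothetical starting prices and annual decline rates (not market data):

```python
# Project how long until SSD cost per gigabyte falls below HDD, given a
# starting price and an annual decline rate for each. All numbers here are
# illustrative assumptions, not vendor pricing.

def years_to_parity(ssd_price, hdd_price, ssd_decline=0.30, hdd_decline=0.10):
    """Count the years until SSD $/GB is no longer above HDD $/GB."""
    years = 0
    while ssd_price > hdd_price:
        ssd_price *= (1 - ssd_decline)
        hdd_price *= (1 - hdd_decline)
        years += 1
    return years

# e.g. SSD at a hypothetical $0.40/GB falling 30%/yr vs HDD at $0.10/GB
# falling 10%/yr crosses over after 6 years
years = years_to_parity(0.40, 0.10)
```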

When we consider the variety of SSD types in the market, from high-performance PCIe SSD cards to lower-performing, higher-capacity SSD drives, expect to see a tiered approach to flash storage adopted in the all-flash datacentre. Flash is not one size fits all, but we are not limited to one size.

Companies that adopt flash as a tactical solution for performance-hungry applications will find that the headroom in their systems lets them move beyond that focused approach and expand flash's footprint into general workloads over time, and not necessarily over a drawn-out period. When the gains of flash are staring a company in the face, it will move quickly.

It is also important to note that these gains are not limited to large enterprises. The small-to-medium business segment will no longer see cost as a barrier to entry; even today, that barrier is getting smaller and smaller.

We have also seen flash inhibited by ageing protocols and computer architectures, and there is a visible evolutionary shift underway.

The next phase is starting now: replacing traditional storage protocols with NVMe over the network. This allows an organisation to get the full force of SSD performance in a shared storage environment, over a fabric delivered on technology we already know, such as a Fibre Channel network. It is unlocking SSD performance in a way that will change how companies deal with storage at scale, especially in the world of big data and the mass of data that needs to be analysed in real time, driven by technologies such as IoT and financial trading systems.

As exciting as NVMe technology is, it is really a necessity to ensure we are not curtailing the innovation in flash technologies themselves.

The revolutionary piece is what comes next with the SSD drive itself. As SSD gets faster, the line between memory and disk gets thinner, to the point where persistence is the key remaining difference. Enter Intel and Micron, who between them have spent the last ten years developing 3D XPoint technology, now packaged as Intel Optane. It completely changes the way bits of data are stored on SSD; in essence, SSD becomes RAM with persistence of data, or non-volatile memory (exactly as the NVM in NVMe implies).
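
The "RAM with persistence" programming model can be mimicked today with memory-mapped files. The toy sketch below only imitates that model with an ordinary file; it uses no Optane-specific API, and the file name is arbitrary:

```python
import mmap
import os

# Map a file into the process address space, modify it with ordinary memory
# writes, then read it back. This imitates the programming model of
# non-volatile memory; it is not how Optane hardware is actually accessed.

path = "nvm_demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 16)              # reserve 16 bytes of backing store

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 16)
    mem[0:5] = b"hello"                # write through the memory interface
    mem.flush()                        # force the change down to the file
    mem.close()

with open(path, "rb") as f:
    data = f.read(5)                   # the write persisted: b"hello"

os.remove(path)
```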

Early quotes state that the technology will eventually be 1,000x faster and 1,000x more reliable than NAND flash, with Gen 1 quoted at 5x the speed and 3x the reliability (still very impressive), alongside vastly improved storage capacity. The future is moving towards a computing architecture in which there may be no requirement to separate storage and memory, which will advance our ability to compute by orders of magnitude beyond what is imaginable today.

The implications of this in finance, the life sciences, the datacentre and our homes represent a true milestone in what many refer to as the latest industrial revolution through digital transformation.

The bottom line is that flash is here. It is not a flash-in-the-pan technology; it is getting bigger, and it will become the standard for enterprise storage.

To read more, visit our IBM storage hub, and download our Flash and Beyond guide.


Posted under Uncategorized

This post was written on April 26, 2017

Why Flash is such a big deal in the enterprise

By Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA 


Why is Flash’s impact likely to continue to expand?

At the point of its entry into the market, the All-Flash Array gained its appeal through jaw-dropping performance, primarily among those with use cases that demanded something different to address high-performance storage requirements, such as high-performance OLTP database environments and large-scale virtual desktop deployments.

However, for many, All-Flash Arrays were still seen as cost-prohibitive when you looked at cost per gigabyte alone, and there was also a maturity concern to be addressed. Enterprise flash adoption therefore started early on in the form of hybrid or server-side flash deployments, leveraging features such as SSD caching or the tiering of data across SSD drives and traditional hard disk drives. The principle was simple: the hottest, most active data would sit on the fastest disk, and cold data on the larger, slower disk.
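
The hot/cold tiering principle can be sketched as a simple ranking policy. The block IDs, access counts and SSD fraction below are illustrative, not any vendor's algorithm:

```python
# Rank data blocks by access count and place the hottest fraction on the SSD
# tier, the rest on the HDD tier. A toy model of the tiering idea only.

def assign_tiers(access_counts, ssd_fraction=0.2):
    """Map block IDs to 'ssd' or 'hdd' based on access frequency."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    ssd_slots = max(1, int(len(ranked) * ssd_fraction))
    hot = set(ranked[:ssd_slots])
    return {block: ("ssd" if block in hot else "hdd") for block in ranked}

counts = {"a": 120, "b": 3, "c": 45, "d": 1, "e": 88}
tiers = assign_tiers(counts, ssd_fraction=0.4)
# 5 blocks * 0.4 = 2 SSD slots, so the two hottest blocks ("a", "e") land on SSD
```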

However, since flash technology has had access to the large R&D budgets of enterprise storage vendors, innovation around SSD has moved beyond advances at the silicon level and into the broader context of storage array technology.

Mature storage features such as deduplication and compression, which previously thrived in nearline technologies such as backup and archive arrays, have since found a home in the All-Flash Array market. Coupled with increases in CPU performance, storage services such as deduplication and compression can now run in real time on All-Flash Arrays with negligible overhead. With the cost of SSD falling at the manufacturing layer, the combined effect has been a reduction in the cost per gigabyte of All-Flash Arrays. In many cases this now delivers price parity with traditional storage arrays, as well as operational cost savings around power, cooling and management complexity.
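
Block-level deduplication of the kind described here boils down to hashing blocks and storing each unique block once. A minimal sketch, with an arbitrary block size and SHA-256 as the content hash (vendor implementations differ):

```python
import hashlib

# Split data into fixed-size blocks, detect duplicates by content hash, and
# store each unique block once. A recipe of hashes reconstructs the original.

def dedupe(data, block_size=4):
    store = {}    # hash -> unique block contents
    recipe = []   # ordered hashes to rebuild the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

data = b"AAAABBBBAAAACCCC"        # contains two identical "AAAA" blocks
store, recipe = dedupe(data)
# 4 logical blocks written, but only 3 unique blocks physically stored
```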

With all these things in mind, the likes of Gartner are tracking the compound annual growth rate of SSD at 20% between 2015 and 2019, with traditional HDD sales tracking at 4%, expecting SSD revenues to equal those of HDD in 2017.

With the maturity benefits coming through for SSD in terms of cost and enterprise-grade storage features, organisations tasked with more transformational reform in how they implement IT to support their business now have a platform on which to layer innovation, alongside all the other industry advances in compute, networking and security. Add the arterial vein of 'software-defined everything' and you have the next-generation datacentre.

To read more, visit our IBM storage hub, and download our Flash and Beyond guide.


Posted under Storage

This post was written on April 19, 2017

The Economic Realities of All-Flash Arrays Today

By Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA 

Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA, Technology Solutions, now part of Tech Data.

In the past, the economic viability of All-Flash Arrays was judged primarily on cost per IO (a metric denoting price/performance).

Companies running heavily transactional application estates found a friend in flash technology: they could achieve what previously needed racks and racks of spinning disk, all managed to within an inch of its life, in a much smaller footprint and at a much lower cost. These cost reductions applied to both Capex and Opex, due to the substantial reduction in space, power requirements, cooling requirements and management complexity.
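
The footprint argument is easy to see with rough numbers: hitting a given IOPS target takes far fewer SSDs than spinning disks. All prices and per-drive IOPS figures below are hypothetical illustrations, not vendor quotes:

```python
# Compare how many drives, and how much money, it takes to reach a target
# IOPS level with HDDs versus SSDs. Every figure here is a made-up example.

def units_needed(target_iops, iops_per_unit):
    return -(-target_iops // iops_per_unit)   # ceiling division

def total_cost(target_iops, iops_per_unit, unit_price):
    return units_needed(target_iops, iops_per_unit) * unit_price

target = 200_000  # IOPS a demanding OLTP estate might require (hypothetical)
hdd_cost = total_cost(target, iops_per_unit=150, unit_price=200)
ssd_cost = total_cost(target, iops_per_unit=100_000, unit_price=2_000)
# HDD: 1,334 drives costing $266,800 -- SSD: 2 drives costing $4,000
```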

Through the pace of innovation from both drive manufacturers and enterprise storage vendors, we have seen major advances in the economic value of Flash technology driving its adoption in the modern business, specifically:

  • Reducing cost.
  • Driving down power and cooling costs.
  • Reducing the physical footprint of flash storage arrays.
  • Allowing companies to choose the flash technology most suitable to their requirements.
  • Providing a surplus of storage performance to service more workloads with flash.

With all this in mind, there is no reason why companies can’t put flash everywhere, servicing mixed application workloads, rather than just the performance hungry monster applications.

This proliferation of flash throughout the datacentre has become a real catalyst for companies to explore what they could do next, rather than micromanaging the cost and complexity of the storage they have today.

Enterprise Strategy Group recently published a detailed economic analysis of the benefits of IBM’s all-flash solution against traditional Tier 1 performance HDD-based arrays. (This study was commissioned by IBM).

The report determined that in a typical enterprise use case:

  • The all-flash arrays delivered a 76% return on investment and an 11-month payback period, compared to traditional storage.
  • The IBM solution also delivered additional performance benefits of about $1.2 million over a three-year period.
  • Those arrays also enabled upfront Capex savings of nearly $600,000, as well as ongoing Opex savings of nearly $400,000.

The ESG report also pointed out several key takeaways about the economic benefits of IBM’s all-flash arrays, notably the substantial business benefit created even by flash deployments for very small amounts of persistent data.
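
As a sanity check on how figures like ROI and payback period fit together, the arithmetic is straightforward. The cost and gain inputs below are placeholders chosen to illustrate the formulas, not the ESG study's actual data:

```python
# Standard ROI and payback-period formulas, fed with hypothetical inputs.

def roi_percent(total_gain, total_cost):
    """Net gain over cost, expressed as a percentage."""
    return (total_gain - total_cost) / total_cost * 100

def payback_months(total_cost, monthly_benefit):
    """Months until cumulative benefit covers the upfront cost."""
    return total_cost / monthly_benefit

roi = roi_percent(total_gain=1_760_000, total_cost=1_000_000)     # 76.0
payback = payback_months(total_cost=1_000_000, monthly_benefit=91_000)
# roughly an 11-month payback with these assumed numbers
```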

To read more, visit our IBM storage hub, and download our Flash and Beyond guide.


Posted under Storage

This post was written on April 12, 2017

Choosing Flash Storage

Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA, Technology Solutions, now part of Tech Data.

By Evan Unrue, Chief Technologist, IoT, Analytics & Cognitive EMEA

Now that the All-Flash Array has developed some maturity in the market, it is important to look beyond just how blisteringly fast it is and to assess it against other key criteria such as availability, management, flexibility and how well it will grow with your business. Couple this with how it integrates with your IT environment and meets compliance requirements such as GDPR, and you can find what you are looking for. Below are a few key things to look out for:

  • Data Reduction
  • Storage Virtualisation features
  • Disaster Recovery tools
  • Scale Out Features
  • Performance Management
  • Flash Management
  • Orchestration & Automation Support
  • Compatibility/Interoperability
  • Easy deployment
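
One way to act on a checklist like this is a simple weighted scorecard. The criteria weights and per-array scores below are entirely illustrative, not recommendations:

```python
# Weight a subset of the checklist criteria and score two candidate arrays
# (1-5 per criterion). All weights and scores are hypothetical examples.

WEIGHTS = {
    "data_reduction": 3,
    "disaster_recovery": 3,
    "scale_out": 2,
    "performance_mgmt": 2,
    "ease_of_deployment": 1,
}

def score(array_scores):
    """Weighted sum of one array's criterion scores."""
    return sum(WEIGHTS[c] * array_scores[c] for c in WEIGHTS)

array_a = {"data_reduction": 5, "disaster_recovery": 3, "scale_out": 4,
           "performance_mgmt": 4, "ease_of_deployment": 2}
array_b = {"data_reduction": 3, "disaster_recovery": 5, "scale_out": 3,
           "performance_mgmt": 5, "ease_of_deployment": 5}
# score(array_a) -> 42, score(array_b) -> 45
```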

As the use cases for All-Flash Arrays broaden beyond targeted, performance-centric requirements, it is important to ensure systems stand up to general enterprise requirements. Don't fall into the trap of being blinded by shiny performance figures without checking that the system also has mature enterprise software features.

It is important to think about what matters most to your use case when looking at All-Flash Array technologies. Is it raw performance for your database, is it flexibility, or is it the ability to process large volumes of data quickly for a new analytics platform?

There will continue to be a raft of different form factors, from the monolithic scale-up storage array to scale-out, hyper-converged and software-defined approaches. These will all become more synonymous with next-generation, third-platform approaches to IT and as-a-service or cloud platforms. The bottom line is to ensure that the one you choose future-proofs your business.

To read more, visit our IBM storage hub, and download our Flash and Beyond guide.


Posted under Storage

This post was written on April 5, 2017