A Storage Technology Blog by a Technologist


VSPEX, VCE, vBlock, FlexPod, vStart, PureSystems, HP Smart Bundles…

February 1st, 2013 by Steven Schwartz

Converged networking was a very hot topic a few years ago.  Cisco UCS, HP Virtual Connect, NextIO, Xsigo, Mellanox, and Voltaire have all had a hand in the ideology of the converged network.  Cisco, however, helped turn the converged networking conversation into a converged infrastructure story.  The movement toward highly virtualized services has made hardware components more and more of a commodity, forcing vendors to create new value in the solutions they present.

Principle of the converging lens (Photo credit: Wikipedia)

What is Converged Infrastructure?

How do you define converged infrastructure?  Per Wikipedia – Converged infrastructure packages multiple information technology (IT) components into a single, optimized computing solution. Components of a converged infrastructure solution include servers, data storage devices, networking equipment and software for IT infrastructure management, automation and orchestration.


What does this really mean?  Not much.  All of the above solutions are really just tried-and-tested reference architectures with varying levels of testing and supporting management applications.  I need to be very clear about the "Single Vendor Support" column below.  Many of the solutions below technically give you a single number to call in the event of a support issue, with ownership of the multi-vendor communication held by a single vendor.


Product            Compute Vendor   Networking Vendor   Storage Vendor   Single Vendor Support
EMC VSPEX          IBM/Cisco        Oracle/Cisco        EMC              n/a
VCE                Cisco            Cisco               EMC              Yes*
vBlock             Cisco            Cisco               EMC              n/a
FlexPod            Cisco            Cisco               NetApp           n/a
Dell vStart        Dell             Dell                Dell             Yes
IBM PureSystems    IBM              IBM                 IBM/NetApp       Yes*
HP SmartBundles    HP               HP                  HP               Yes
Nutanix            Nutanix          Nutanix             Nutanix          Yes
Simplivity         Simplivity       Simplivity          Simplivity       Yes

*VCE & IBM PureSystems technically would take and manage support calls, however the solutions still contain more one vendor’s products.


Nutanix and Simplivity?

New players to the scene are Nutanix and Simplivity.  Both had an impressive showing at VMWorld 2012.  Each is a brick-based converged infrastructure that tightly couples compute and storage in an expandable cluster model.  These are unique in that they were designed from day one as a single solution for virtualization, not a combination of server, networking, and storage products available separately.  They offer features such as clustered shared storage, a single point of management, and deduplication, and they are tightly integrated.  They are focused on the mid-enterprise, reach down to the medium business, and are being pulled into the large enterprise.


Posted in Clustered File Systems, Converged Infrastructure, Enterprise, General, IO Virtualization, NetApp, SAN and NAS, virtualization, VMWare | 4 Comments »

Tortoise and the Hare? (NFS vs. iSCSI and why this is Apples to Broccoli)

January 24th, 2013 by Steven Schwartz

I regularly get asked about storage protocols, especially when it comes to the right protocol to use for virtualization.  There is not a single right answer here.  However, there are several less efficient ways to do it.


Boston – Copley Square: The Tortoise and the Hare (Photo credit: wallyg)

NFS vs. iSCSI vs. FCoE vs. CIFS vs. ATAoE

Ethernet-based storage protocols all get lumped together for comparison, and they shouldn't be!  Below you'll find a table of common storage protocols and some basics about them.  As it pertains to Ethernet-based protocols, there are two basic types: they can be divided into SAN (block) and NAS (file) protocols.  There is a very important distinction between the two, and it becomes even more apparent depending on the storage vendor and application.  Storage vendors really have two products (with very few exceptions to that rule): NAS devices and SAN devices.  There are SAN devices that can be fronted by a NAS gateway (i.e., a layer that puts a file system on a block device and presents it out as a NAS device), and NAS devices that can emulate a block device by manipulating a large file on their own file system.
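
To make that second case concrete, here is a minimal Python sketch (purely illustrative and hypothetical, not any vendor's implementation) of a device emulating a block target by manipulating one large file: the "LUN" is just a big file, and block reads and writes become seeks plus reads/writes at byte offsets within it.

```python
# Illustrative sketch only: a NAS-style device emulating a block device by
# treating one large file as the backing store for a "LUN".

BLOCK_SIZE = 512  # bytes per logical block


class FileBackedLUN:
    """Presents block-style read/write on top of one large backing file."""

    def __init__(self, path: str, size_blocks: int):
        self.path = path
        self.size_blocks = size_blocks
        # Create a sparse backing file of the requested size.
        with open(path, "wb") as f:
            f.truncate(size_blocks * BLOCK_SIZE)

    def write_block(self, lba: int, data: bytes) -> None:
        assert len(data) == BLOCK_SIZE and 0 <= lba < self.size_blocks
        with open(self.path, "r+b") as f:
            f.seek(lba * BLOCK_SIZE)  # logical block address -> file offset
            f.write(data)

    def read_block(self, lba: int) -> bytes:
        assert 0 <= lba < self.size_blocks
        with open(self.path, "rb") as f:
            f.seek(lba * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)


if __name__ == "__main__":
    lun = FileBackedLUN("/tmp/demo_lun.img", size_blocks=2048)  # ~1 MB "LUN"
    lun.write_block(10, b"A" * BLOCK_SIZE)
    print(lun.read_block(10)[:8])  # b'AAAAAAAA'
```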


Protocol   Type    Transport                Requires a File System   Standards Based
FCP        Block   FC (Optical or Copper)   YES                      YES
FCoE       Block   Ethernet*                YES                      YES*
iSCSI      Block   Ethernet                 YES                      YES
NFS        File    Ethernet                 NO                       YES
CIFS       File    Ethernet                 NO                       YES
SMB        File    Ethernet                 NO*                      YES*
ATAoE      Block   Ethernet*                YES                      NO

*Caveats Apply

Which is faster?

There is a difference between speed and throughput!  I've commonly used the garden hose vs. water main comparison with clients.  While I was at Equallogic, the hottest topics were iSCSI vs. FC and the speed difference between GigE and 10GbE.  The common misunderstanding was that because 10 > 1, 10GbE was of course faster!  So back to the water model: if you need to pass a small pebble (one with a diameter smaller than that of a garden hose), or even a steady line of small pebbles (a transactional data profile), a garden hose can move those pebbles at the same "speed" as a water main.  It is only when the data sets are boulders, or many concurrent data sets, that would have to be broken down into smaller chunks to fit in the garden hose, that the water main exceeds the "throughput" of the garden hose.  As a physicist once pointed out to me, light travels at the same speed whether the laser is tiny or giant.  So in many cases we proved that GigE could actually compete with 2Gb and 4Gb Fibre Channel (iSCSI vs. FCP).  With today's technologies we are comparing 10GbE to 8Gb FC, so the "pipe" is less of an issue.
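
To put rough numbers on the analogy, here is a quick back-of-the-envelope Python sketch (the latency and link figures are made-up assumptions for illustration, not benchmark results): for a small transactional I/O the fixed per-operation latency dominates, so the bigger pipe barely matters; for a bulk transfer the link throughput dominates, so the bigger pipe wins.

```python
# "Speed" vs. "throughput": per-operation latency vs. serialization time.
# Numbers below are illustrative assumptions, not measurements.

def transfer_time(payload_bytes, link_gbps, latency_us=100):
    """Rough time for one transfer: fixed per-op latency + time on the wire."""
    serialization_s = payload_bytes * 8 / (link_gbps * 1e9)
    return latency_us / 1e6 + serialization_s

small_io = 4 * 1024        # a 4 KB "pebble" (transactional I/O)
large_io = 1024 ** 3       # a 1 GB "boulder" (bulk data set)

for label, size in (("4 KB I/O", small_io), ("1 GB transfer", large_io)):
    t1 = transfer_time(size, link_gbps=1)     # GigE
    t10 = transfer_time(size, link_gbps=10)   # 10GbE
    print(f"{label}: GigE {t1 * 1000:.3f} ms vs 10GbE {t10 * 1000:.3f} ms")
```

With these assumed numbers, the 4 KB I/O finishes in roughly the same time on either link, while the 1 GB transfer is nearly ten times faster on 10GbE.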


VMWare – iSCSI, FCP, FCoE, or NFS?

The ongoing debate is which storage protocol to use with VMWare.  The history of feature support per storage protocol matters, because in the earlier days of VMWare, features were released for FCP-based storage first, then iSCSI, and last NFS.  This is no longer the case with VMWare's releases.  If you refer to the table above, FCoE, FCP, and iSCSI require a file system on top of the protocol.  In the case of VMWare, that is typically VMFS, VMWare's clustered/shared file system.  Keep in the back of your head that one of the most difficult things to develop at scale is a clustered file system (usually because of distributed lock management, which doesn't really apply to vmdks).  NFS, however, is already a POSIX-compliant shared file system offered as a storage protocol.  That means there is no mapping of LUNs to ESX hosts, no management of VMFS (or of multiple VMFS instances), and less overall management required.  NFS doesn't require specialized networks beyond tuned, traditional routing and switching.  It doesn't require any special commands or configuration to grow in size, and it is fully supported within ESX and VMWare!  So given the option of SAN vs. NAS, with the current state of support, I would choose NAS (NFS) for VMWare; however, make sure you choose an enterprise NAS storage solution!


Posted in Clustered File Systems, FCoE, General, iSCSI, NFS, SAN and NAS, virtualization, VMWare | 4 Comments »

Point, Shoot, Solution!

January 21st, 2013 by Steven Schwartz

So lately I've been working mostly with IT manufacturers and vendors.  When you represent a technology manufacturer, all you have is the products they sell... no matter what the business problem, technical problem, budget, or need, the answer is always product X.  I've been lucky to have worked with some of the most cutting-edge and disruptive products on the market, so customers have wanted them either because they were the right fit, because they were the fastest, most reliable, highest performing products, or because they were all we had to offer.  There were also cases of absolutely none of the above; they just wanted to play with the newest shiny bauble.

For several periods of my career, I've been in the unique position of talking about a client's business.  What are the issues keeping them from being agile or stopping them from doubling their revenue?  What could be changed to increase profitability?  These are the details that are usually important to every executive board across industries.  The amazing thing is that, when making a large technology purchase, the IT director and below usually ignore the business needs for a solution and instead get caught up in features, speeds and feeds, and price.  The shiny new storage widget that will never be implemented, but makes storage vendor X stand out this month, becomes a critical decision point.  Let's be honest: in most cases, IT at a company is a cost of doing business.


Small Pond Ring – 1 (Photo credit: the justified sinner)

I was recently in front of a local IT team that currently has NetApp deployed.  They had been sold on the idea of SSD as the cool technology, and they were in the middle of making a rather large (for them) storage purchase.  They run FCP LUNs to VMFS from a NetApp clustered pair, they were almost completely out of SATA storage, and they were experiencing performance issues.  The vendor's solution was to sell them more storage, disregard looking at what they have stored where, and sell them FlexPools and FlexCache as well; that would resolve any performance issues they were having.


Fast forward a few weeks... there were no performance issues other than misconfiguration and bad practices.  With a little more storage, implementation of NFS (of course NFS; why on earth would you want to continue the complexity of FCP on NetApp, especially when you've already put 10Gb Ethernet in place... a topic I will revisit another time), and some best practices... HAPPY CLIENT!  In the end, a relatively small IT staff was really looking for a way to free up time spent on operations and maintenance in order to spend time deploying new applications to make the company more agile.


This brings me back to the original thought: approaching the IT group with business logic and justifications.  I'm a geek at heart; I like the latest and greatest technologies.  For most of my career I've taken large personal risks working with and for early-stage start-ups, but that was my choice.  The typical CTO doesn't have the option of playing with a science project; they need technologies that are proven and reliable.  Technologies should be implemented to further the business, not to scratch a technology itch.  The company above got sold on the shiny bauble called SSD, when in reality they might never need SSD technology... ever.  For the record, that doesn't make them less of a company.


Posted in Benchmarks, Enterprise, FCoE, General, iSCSI, NetApp, SAN and NAS, WAFL | 1 Comment »

SSD technologies and where are they being deployed?

March 13th, 2012 by Steven Schwartz

Wiring and structure of NAND flash cells (Image via Wikipedia)

We all know that the likes of Fusion-io have been around for a few years now, selling server-based flash acceleration at high prices.  What is less known is that TMS (Texas Memory Systems) has been selling flash storage (SAN) quite successfully for several years.  TMS is also one of the few storage vendors with native support for IB (InfiniBand) storage connectivity.  DDN (Data Direct Networks) is another with native IB storage controllers.  However, I've recently come across a vendor, V3, that uses both IB and solid state storage in an area where commodity has always won the technology battle, not speed or features.


VDI appliances are on the rise, and V3 is known for having a rock-solid VDI appliance model.  I've recently seen it deployed in an ESX configuration for VMWare View.  This configuration utilized both IB and SSD to provide the fastest access to both data and storage infrastructure.  The V3 appliance proved quicker to deploy, scaled to this customer's needs, and helped reduce latency and boot times.


The use of SSD technology for application acceleration isn't a new concept; however, it has recently become very prevalent in Virtual Desktop Infrastructure designs.  On the acceleration side, NetApp has been using solid state storage in its PAM devices for several years as a caching engine for NAS reads.  NetApp has renamed PAM to Flash Cache in recent history, but the use model is basically the same.  HDS and LSI have implemented SSD drives in most of the array models available to end users.  These disk systems can utilize the SSDs as traditional volumes or, in some cases, for specific application acceleration (read: HNAS file system cache).  Of course, SSD drives still carry a 10x premium over more traditional spinning disk technologies.  Most storage vendors have some level of support for SSD drives these days; however, I don't think there has been a good enough marketing push to show where they add value.
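
For readers who want to picture what a flash read cache is doing, here is a minimal, hypothetical Python sketch of the general read-caching pattern (an LRU cache in front of a slow backend); it is not NetApp's Flash Cache implementation, just the idea of keeping hot blocks in a fast tier so repeat reads avoid the spinning disks.

```python
# Illustrative sketch of a read cache in a fast tier (e.g. flash) sitting in
# front of slower spinning disks. Not any vendor's actual implementation.

from collections import OrderedDict

class FlashReadCache:
    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read      # function: block_id -> bytes
        self.cache = OrderedDict()            # block_id -> data, LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)    # slow path: spinning disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

# Example: repeated reads of a hot working set mostly hit the cache.
cache = FlashReadCache(capacity_blocks=100, backend_read=lambda b: b"x" * 4096)
for _ in range(10):
    for block in range(50):                   # 50-block hot set fits in cache
        cache.read(block)
print(cache.hits, cache.misses)               # 450 hits, 50 misses
```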


I’m personally excited to see the movement toward shared SSD storage for use in virtualization, indexes for databases, and application acceleration.


Posted in Enterprise, IO Virtualization, NetApp, NFS, SAN and NAS, VMWare | No Comments »

And One Ring to rule them all!

March 5th, 2012 by Steven Schwartz

A 3D model of the One Ring (Image via Wikipedia)

I’ve come to realize that the following is true…

  • We’ve been using virtual architectures in the storage marketplace for some time.  In the open systems area, this is pretty much commonplace.
  • We’ve come full circle for application and operating system virtualization.  Many companies have the majority of application servers virtualized with some type of hypervisor.
  • We're stuck on the idea that Port A (host) must go to Port A (switch)!

HP has something called VirtualConnect, which leverages the HP Flex10 architecture to create virtual IO resources within a single HP C7000 blade chassis.  Cisco, with UCS, has done something very similar to HP.  Both architectures work, but both require you to standardize on a single server and/or server type in order to deploy.  HP requires blade servers in a blade chassis, and only HP products, which eliminates the ability to use very powerful (memory and socket) rack servers if you want virtual IO resources.  Cisco only works with Cisco's server technology, imagine that!


Dell has taken another route to market, saying that it isn't the hardware that is difficult but the management and deployment of IO resources, and it has tried to solve the problem with VIS (*thank you, M. Rotkiz(s)) and AIM, software products that help manage configurations of servers, networks, and storage.


Gartner, however, has taken a look and is defining a new "switch" market for the dynamic virtualized data center.  My current home, Xsigo Systems, fits into this new categorization.  The idea is to create a pipe between servers that is low enough in latency to handle any type of application requirement, large enough in throughput to handle any type of storage or IO need, and yet remains completely flexible and protocol agnostic.  This has been accomplished using several technologies, and I think it is where the next part of the virtualization market is moving.  It is also a technology that lets the hypervisors grab the last applications that your IT operations group is fighting to keep on dedicated servers.
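
As a thought experiment, here is a small, hypothetical Python sketch of the virtual IO idea (the names and numbers are mine, not Xsigo's product): each server gets one big converged link into the fabric, and virtual NICs and virtual HBAs are carved out of it in software instead of being tied to physical ports.

```python
# Conceptual sketch only: carving virtual IO devices out of one converged link,
# rather than wiring Port A (host) to Port A (switch) for every function.

class ServerPipe:
    """One converged link from a server into the fabric; virtual devices are carved from it."""

    def __init__(self, name, capacity_gbps):
        self.name = name
        self.capacity_gbps = capacity_gbps
        self.devices = []                                # (device_name, kind, gbps)

    def add_virtual_device(self, device_name, kind, gbps):
        used = sum(d[2] for d in self.devices)
        if used + gbps > self.capacity_gbps:
            raise RuntimeError(f"{self.name}: link oversubscribed")
        self.devices.append((device_name, kind, gbps))

esx01 = ServerPipe("esx01", capacity_gbps=40)            # one cable, many personalities
esx01.add_virtual_device("vnic0", "Ethernet vNIC", 10)   # LAN / NFS traffic
esx01.add_virtual_device("vhba0", "FC vHBA", 8)          # SAN traffic
esx01.add_virtual_device("vnic1", "vMotion vNIC", 10)
for name, kind, gbps in esx01.devices:
    print(f"{esx01.name}: {name} ({kind}) @ {gbps} Gb")
```

The point of the sketch is that the device mix per server becomes a software decision that can change on the fly, which is exactly what the dedicated-port model prevents.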


IO virtualization, and I don't just mean sharing 10Gb TCP/UDP with FCoE, is the next big thing in virtualization, big data, solid state storage, and IT flexibility.  VMware, Microsoft Hyper-V, Citrix Xen, RHEV, OVM: no matter what the hypervisor, IO should be flexible.



Posted in FCoE, General, HPC, IO Virtualization, iSCSI, NFS, Oracle, SAN and NAS, virtualization, VMWare | 1 Comment »
