A Storage Technology Blog by a Technologist


Tortoise and the Hare? (NFS vs. iSCSI and why this is Apples to Broccoli)

January 24th, 2013 by Steven Schwartz

I regularly get asked about storage protocols, especially when it comes to the right protocol to use for virtualization. There is not a single right answer here. However, there are several less efficient ways to do it.

 

Boston – Copley Square: The Tortoise and the Hare (Photo credit: wallyg)

NFS vs. iSCSI vs. FCoE vs. CIFS vs. ATAoE

Ethernet-based storage protocols all get lumped together for comparison, and they shouldn't be! Below is a table of common storage protocols and some basics about each. Ethernet-based protocols come in two basic types: SAN (block) and NAS (file). There is a very important distinction between the two, and it becomes even more apparent depending on the storage vendor and application. Storage vendors really sell two kinds of products (with very few exceptions to that rule): NAS devices and SAN devices. There are SAN devices that can be front-ended by a NAS gateway (i.e., a gateway puts a file system on the block device and presents it out as a NAS device), and NAS devices that can emulate a block device by manipulating a large file on their file system.

 

Protocol | Type  | Transport               | Requires A File System | Standards Based
FCP      | Block | FC (Optical or Copper)  | YES                    | YES
FCoE     | Block | Ethernet*               | YES                    | YES*
iSCSI    | Block | Ethernet                | YES                    | YES
NFS      | File  | Ethernet                | NO                     | YES
CIFS     | File  | Ethernet                | NO                     | YES
SMB      | File  | Ethernet                | NO*                    | YES*
ATAoE    | Block | Ethernet*               | YES                    | NO

*Caveats Apply

Which is faster?

There is a difference between speed and throughput! I've commonly used the garden hose vs. water main comparison with clients. While I was at Equallogic, the hottest topic was iSCSI vs. FC and the speed difference between GigE and 10GbE. The common misunderstanding was that because 10 > 1, 10GbE was of course faster! So back to the water model. If you need to pass a small pebble (one with a diameter smaller than that of a garden hose), or even a steady line of small pebbles (a transactional data profile), a garden hose can move those pebbles at the same "speed" as a water main. It is only when the data sets are boulders, or many concurrent data sets that would need to be broken down into smaller chunks to fit in a garden hose, that the water main exceeds the "throughput" of the garden hose. As a physicist once pointed out to me, light travels at the same speed whether the laser is tiny or giant. So in many cases we proved that GigE could actually compete with 2Gb and 4Gb Fibre Channel (iSCSI vs. FCP). With today's technologies we are comparing 10GbE to 8Gb FC, so the "pipe" is even less of an issue.
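To put rough numbers behind the analogy, here is a back-of-the-envelope sketch in Python. The 0.5 ms per-IO latency and the two IO sizes are assumptions chosen for illustration, not measurements from any particular array or network.

```python
# Back-of-the-envelope comparison of 1GbE vs. 10GbE for small and large IOs.
# The per-IO latency below is an assumed figure for illustration only.

LINKS_GBPS = {"1GbE": 1.0, "10GbE": 10.0}
PER_IO_LATENCY_S = 0.0005  # assume ~0.5 ms fixed round trip (network + array overhead)

def transfer_time_ms(size_bytes: int, link_gbps: float) -> float:
    """Time for one IO: fixed latency plus serialization time on the wire."""
    wire_seconds = (size_bytes * 8) / (link_gbps * 1e9)
    return (PER_IO_LATENCY_S + wire_seconds) * 1000.0

for label, size in [("8 KB transactional IO", 8 * 1024),
                    ("1 GB sequential read ", 1024**3)]:
    for link, gbps in LINKS_GBPS.items():
        print(f"{label} over {link:>5}: {transfer_time_ms(size, gbps):9.3f} ms")
```

For the pebble-sized IO the fixed latency dominates and both links finish in essentially the same time (roughly 0.57 ms vs. 0.51 ms); for the boulder, serialization on the wire dominates and the bigger pipe wins by roughly 10x (roughly 8,590 ms vs. 859 ms).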

 

VMWare – iSCSI, FCP, FCoE, or NFS?

The ongoing debate is which storage protocol to use with VMWare. The history of feature support by storage protocol matters, because in the earlier days of VMWare, features were released for FCP-based storage first, then iSCSI, and last NFS. This is no longer the case with VMWare's releases. If you refer to the table above, FCoE, FCP, and iSCSI require a file system on top of the protocol. In the case of VMWare, that is typically VMFS, VMWare's clustered/shared file system. Keep in the back of your head that the most difficult thing to develop at scale is a clustered file system (usually because of distributed lock management, which doesn't really apply to vmdks). NFS, however, is already a POSIX-compliant shared file system as well as a storage protocol. This means there is no mapping of LUNs to ESX hosts, no management of VMFS (or multiple VMFS instances), and less overall management required. NFS doesn't require specialized networks beyond a tuned, traditional routed/switched network. It doesn't require any special commands or configuration to grow in size, and it is fully supported within ESX and VMWare! So given the option of SAN vs. NAS, with the current state of support, I would choose NAS (NFS) for VMWare; however, make sure you choose an enterprise NAS storage solution!!!!
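To illustrate how little plumbing an NFS datastore needs compared with mapping LUNs and formatting VMFS, below is a minimal sketch against the vSphere API using pyVmomi. The vCenter address, credentials, NFS server, export path, and datastore name are placeholders, and this is a sketch of the general call flow under those assumptions, not production code.

```python
# Minimal sketch: attach an NFS export to an ESX host as a datastore via the
# vSphere API (pyVmomi). All hostnames, credentials, and paths are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    esx_host = view.view[0]  # first ESX host found; select yours explicitly

    # One call per host: no LUN masking, no VMFS format, no multipath claim rules.
    spec = vim.host.NasVolume.Specification(
        remoteHost="nas01.example.com",  # placeholder NFS server
        remotePath="/vol/vmware_ds1",    # placeholder export
        localPath="vmware_ds1",          # datastore name as seen by ESX
        accessMode="readWrite",
    )
    esx_host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)
```

This also echoes the point about growth: expand the export on the NAS side and ESX simply sees the larger datastore, with no VMFS-style extent or grow operation.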


Posted in Clustered File Systems, FCoE, General, iSCSI, NFS, SAN and NAS, virtualization, VMWare | 4 Comments »

SSD technologies and where are they being deployed?

March 13th, 2012 by Steven Schwartz
Wiring and structure of NAND flash cells (Image via Wikipedia)

We all know that the likes of Fusion IO have been around for a few years now, selling server-based flash acceleration at high prices. What is less known is that TMS (Texas Memory Systems) has been selling flash storage (SAN) quite successfully for several years. TMS is also one of the few storage vendors with native support for InfiniBand (IB) storage connectivity; DDN (Data Direct Networks) is another with native IB storage controllers. However, I've recently come across a vendor, V3, that uses both IB and solid-state storage in an area where commodity hardware has always won the battle, not speed or features.

 

VDI appliances are on the rise, and V3 is known for having a rock-solid VDI appliance model. I've seen it recently deployed in an ESX configuration for VMWare View. This configuration utilized both IB and SSD to provide the fastest access to both data and storage infrastructure. The V3 appliance proved quicker to deploy, scaled to this customer's needs, and helped reduce latency and boot times.

 

The appearance of SSD technology for application acceleration isn't a new concept; however, it has recently become very prevalent in Virtual Desktop Infrastructure designs. On the acceleration side, NetApp has been using solid-state storage in its PAM devices for several years as a caching engine for NAS reads. NetApp has since renamed PAM to Flash Cache, but the use model is basically the same. HDS and LSI have implemented SSD drive options for most of the array models available to end users. These disk systems can use the SSDs as traditional volumes or, in some cases, for specific application acceleration (read: HNAS file system cache). Of course, SSD drives still carry roughly a 10x premium over more traditional spinning-disk technologies. Heck, most storage vendors have some level of support for SSD drives these days; however, I don't think there has been a good enough marketing push to show where they add value.
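To make the "flash as a read cache" idea concrete, here is a deliberately simplified read-through cache sketch in Python. It is not how Flash Cache/PAM (or any vendor's cache) is implemented internally; it only illustrates the access pattern such a cache accelerates: a hot working set that fits in flash is served from cache instead of spinning disk.

```python
# Toy read-through cache with LRU eviction. Illustrative only; not NetApp's
# Flash Cache/PAM implementation.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block id -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id, read_from_disk):
        if block_id in self.blocks:           # hit: served from "flash"
            self.blocks.move_to_end(block_id)
            self.hits += 1
            return self.blocks[block_id]
        self.misses += 1                      # miss: fall back to spinning disk
        data = read_from_disk(block_id)
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:  # evict least recently used block
            self.blocks.popitem(last=False)
        return data

# A hot working set smaller than the cache is served almost entirely from cache.
cache = ReadCache(capacity_blocks=100)
for _ in range(10):
    for block in range(50):                   # re-read the same 50 hot blocks
        cache.read(block, read_from_disk=lambda b: f"data-{b}")
print(f"hits={cache.hits} misses={cache.misses}")  # hits=450 misses=50
```

Only the first pass touches disk; every subsequent read of the working set is a cache hit, which is exactly the behavior that makes read-heavy NAS workloads and VDI boot storms good candidates for flash caching.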

 

I’m personally excited to see the movement toward shared SSD storage for use in virtualization, indexes for databases, and application acceleration.



Posted in Enterprise, IO Virtualization, NetApp, NFS, SAN and NAS, VMWare | No Comments »

And One Ring to rule them all!

March 5th, 2012 by Steven Schwartz
A 3D model of the One Ring (Image via Wikipedia)

I’ve come to realize that the following is true…

  • We’ve been using virtual architectures in the storage marketplace for some time.  In the open systems area, this is pretty much commonplace.
  • We’ve come full circle for application and operating system virtualization.  Many companies have the majority of application servers virtualized with some type of hypervisor.
  • We're stuck on the idea that port A (host) must go to port A (switch)!

HP has something called VirtualConnect, which leverages the HP Flex10 architecture to create virtual IO resources within a single HP C7000 blade chassis. Cisco has done something very similar with UCS. Both architectures work, but both require a single server vendor and/or server type in order to deploy. HP requires blade servers in a blade chassis, and only HP products, which eliminates the ability to use very powerful (memory and socket) rack servers if you want virtual IO resources. Cisco only works with Cisco's server technology, imagine that!

 

Dell has taken another route to market, saying that it isn't the hardware that is difficult but the management and deployment of IO resources, and has tried to solve the problem with VIS (*thank you, M. Rotkiz(s)) and AIM, software products that help manage configurations of servers, networks, and storage.

 

Gartner, however, has taken a look and is defining a new "switch" market for the dynamic virtualized data center. My current home, Xsigo Systems, fits into this new categorization. The idea is to create a pipe between servers that is low enough in latency to handle any type of application requirement, large enough in throughput to handle any type of storage or IO need, and yet completely flexible and protocol agnostic. This has been accomplished using several technologies, and I think it is where the next part of the virtualization market is moving. It is also a technology that lets the hypervisors grab the last applications that your IT operations group is fighting to keep on dedicated servers.

 

IO Virtualization, and I don’t just mean sharing 10Gb TCP/UDP with FCoE, is the next big thing in virtualization, big data, solid state storage, and IT flexibility.  VMware, Microsoft Hyper-V, Citrix Xen, RHOV, OVM, no matter what the hypervisor, IO should be flexible.

 

 


Posted in FCoE, General, HPC, IO Virtualization, iSCSI, NFS, Oracle, SAN and NAS, virtualization, VMWare | 1 Comment »

NO IPO…HDS to Acquire BlueArc

September 7th, 2011 by Steven Schwartz
Hitachi Data Systems logo (Image via Wikipedia)

After a five-year OEM relationship covering BlueArc's Titan and Mercury product lines, HDS (Hitachi Data Systems) will be acquiring BlueArc. This will of course be a great combination of companies, and I imagine a very accelerated ramp-up, as HDS already has everything in place to absorb BlueArc well. HDS isn't one to buy companies; this will be among its few acquisitions, and one of its largest. Exciting times for BlueArc and HDS, and scary times for NetApp, I would imagine.

 

Press Releases here: http://www.hds.com/corporate/press-analyst-center/press-releases/2011/gl110907.html?_p=v

 

and here: http://www.bluearc.com/storage-news/press-releases/110907-Hitachi-Data-Systems-Announces-Acquisition-of-BlueArc.shtml



Posted in Enterprise, HDS, HPC, iSCSI, NetApp, NFS, Oracle, SAN and NAS, VMWare | 1 Comment »

NetApp to buy LSI’s external storage business…

March 10th, 2011 by Steven Schwartz
Image representing LSI (via CrunchBase)

 

Well, it looks like NetApp is making what I consider to be a very low offer for LSI's storage business. LSI's storage line is the OEM behind many vendors' product lines. LSI's disk systems power storage offered by IBM, DELL, BlueArc, Oracle/SUN/STK, CRAY, Panasas, SEPATON, SGI, and Terascala, to name a few.

 

For less than $500 million, NetApp hits these vendors' storage lines pretty hard, or rather has the ability to do so. I would have figured that LSI's storage business would have been worth more, but maybe the above vendors aren't selling as much as I had imagined.

 

So now I ask myself: what does NetApp want with an FPGA-based FC and SAS storage company? Where will this fit into the NetApp product line? Will NetApp be dropping its software-based RAID-DP for a hardware-based solution? Will this new storage backend allow for greater capacity and performance from NetApp?

 

Time will tell, but I have a feeling that we will be seeing some major changes to the NetApp backend storage configurations in the near future.


Posted in Backup and Recovery, Enterprise, General, HPC, NetApp, NFS, SAN and NAS, SUN, VMWare, ZFS | No Comments »
