A Storage Technology Blog by a Technologist


Disruptive–not a bad word when used correctly…

February 8th, 2013 by Steven Schwartz
Preventing disruptive technologies from disrupting education (Photo credit: opensourceway)

Typically, if you hear the word DISRUPTIVE used in an IT conversation, it means something very bad has happened.  It is a term that requires looking at the entire use case…

…a disruptive child in a classroom is usually a bad thing, but what if that child is being disruptive because they see or sense danger?  A disruption in power to your home is a serious inconvenience.  A disruptive technology, however, can lead to tremendous leaps forward in how companies and businesses make money and reach higher productivity.

 

Back in about 2000, iSCSI was a disruptive storage protocol.  It was preceded by other IP-based block protocols, but it has gained the most market share: it is built into the base of almost every mainstream enterprise operating system, and new companies release products leveraging it daily.
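
As a quick illustration of that ubiquity, here is a minimal sketch (Python wrapping the standard Linux open-iscsi tools) of discovering and logging in to a target.  The portal address and output parsing are illustrative assumptions, not any particular vendor's procedure.

```python
# Minimal sketch: discover and log in to an iSCSI target using the
# open-iscsi initiator that ships with most Linux distributions.
# The portal IP below is a hypothetical placeholder.
import subprocess

PORTAL = "192.168.1.50"  # hypothetical storage array portal

def discover_targets(portal: str) -> list[str]:
    """Run a SendTargets discovery against a portal and return target IQNs."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line looks like: "192.168.1.50:3260,1 iqn.2001-05.com.example:vol0"
    return [line.split()[-1] for line in out.splitlines() if line.strip()]

def login(target_iqn: str, portal: str) -> None:
    """Log in to a discovered target; the OS then surfaces it as a block device."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    for iqn in discover_targets(PORTAL):
        print("found target:", iqn)
```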

 

InfiniBand is becoming a “disruptive” technology as well.  It is found regularly in HPC, clustered computing, and banking, and it has moved down into traditional enterprise implementations such as Isilon, where it is used as the cluster interconnect.  Mellanox has leveraged it heavily and is driving it into enterprise deployments as well.  In fact, there isn’t a mainstream server manufacturer that doesn’t offer IB as an I/O option.  NetApp has used InfiniBand connectivity in its clustered filer heads as well, likely due to its very low latency and high bandwidth.  Oracle bought Xsigo, which leveraged InfiniBand as a data center fabric technology.  HP, DELL, IBM, and SuperMicro, which make up the majority of server installations, all support IB, some server models even on the motherboard.

 

With the release of FDR IB, the data bandwidth per port is 56 Gb/s, with latency so low it is difficult to measure, and one of the lowest power draws of any server/storage interconnect…and guess what?  IB is finding its way into the hypervisor.  Citrix, VMware, Microsoft, Oracle, and Red Hat all have native IB support on their hypervisor road maps.  So keep an eye out!
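
To see where that 56 Gb/s figure comes from, here is a small back-of-the-envelope sketch.  The per-lane signaling rates and encoding schemes are the commonly published ones for each IB generation; the arithmetic is simplified and ignores protocol overhead above the encoding layer.

```python
# Back-of-the-envelope: why FDR is a bigger jump than the headline numbers
# suggest. Earlier IB generations used 8b/10b encoding (20% overhead),
# while FDR moved to 64b/66b (roughly 3% overhead).
GENERATIONS = {
    # name: (signaling rate per lane in Gb/s, encoding efficiency)
    "SDR": (2.5,     8 / 10),
    "DDR": (5.0,     8 / 10),
    "QDR": (10.0,    8 / 10),
    "FDR": (14.0625, 64 / 66),
}
LANES = 4  # a standard 4x port

for name, (rate, eff) in GENERATIONS.items():
    raw = rate * LANES
    data = raw * eff
    print(f"{name}: {raw:5.2f} Gb/s raw -> {data:5.2f} Gb/s usable data")

# FDR 4x works out to roughly 54.5 Gb/s of usable bandwidth, which is
# where the marketed "56 Gb/s" port comes from.
```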

 


Posted in Enterprise, iSCSI, Start-up, VMWare | No Comments »

NASDAQ:DELL to be no more…

February 5th, 2013 by Steven Schwartz


In great news, commoditizer Dell Inc. has started the process of taking itself private again.  Dell has been struggling quarter over quarter in an aggressive IT products, peripherals, and services marketplace.  Going private will let it make some much-needed changes in management, portfolio, and brand in order to become the IT giant it once was.

 

Things that could/should change:

 

1. Punt on the PC market?

2. Punt on the generic server market?

3. Shed the professional services arm?

4. Double down on channel sales?

5. Aggressive leadership changes?

 

Things that are likely going to change:

 

1. Few new acquisitions

2. Selling off poorly performing, low-margin business lines

3. Layoffs/downsizing

4. Movement to a telesales-based sales organization similar to CDW


Posted in General | No Comments »

Office 364 released today!

February 1st, 2013 by Steven Schwartz
Testing bulletproof vest, Washington, DC. (Photo credit: Wikipedia)

Outlook.com and Office 365 had an outage today.  Just shows that no architecture is bulletproof!


Posted in General | No Comments »

VSPEX, VCE, vBlock, FlexPod, vStart, PureSystems, HP Smart Bundles…

February 1st, 2013 by Steven Schwartz

Converged networking was a very hot topic a few years ago.  Cisco UCS, HP Virtual Connect, NextIO, Xsigo, Mellanox, and Voltaire have all had a hand in the ideology of the converged network.  Cisco, however, helped turn the converged networking conversation into a converged infrastructure story.  The movement to highly virtualized services has made hardware components more and more of a commodity, forcing vendors to create new value in the solutions being presented.

Principle of the converging lens (Photo credit: Wikipedia)

What is Converged Infrastructure?

How do you define converged infrastructure?  Per Wikipedia: converged infrastructure packages multiple information technology (IT) components into a single, optimized computing solution.  Components of a converged infrastructure solution include servers, data storage devices, networking equipment, and software for IT infrastructure management, automation, and orchestration.

 

What does this really mean?  Not much.  All of the above solutions are really just tried-and-tested reference architectures with varying levels of testing and supporting management applications.  I need to be very clear about calling out “Single Vendor Support”: many of the solutions below technically give you a single number to call in the event of a support issue, with one vendor owning the multi-vendor communication.

 

| Product | Computer Vendor | Networking Vendor | Storage Vendor | Single Vendor Support |
|---|---|---|---|---|
| EMC VSPEX | IBM/Cisco | Oracle/Cisco | EMC | n/a |
| VCE | Cisco | Cisco | EMC | Yes* |
| vBlock | Cisco | Cisco | EMC | n/a |
| FlexPod | Cisco | Cisco | NetApp | n/a |
| DELL vStart | Dell | Dell | Dell | Yes |
| IBM PureSystems | IBM | IBM | IBM/NetApp | Yes* |
| HP SmartBundles | HP | HP | HP | Yes |
| Nutanix | Nutanix | Nutanix | Nutanix | Yes |
| Simplivity | Simplivity | Simplivity | Simplivity | Yes |

*VCE & IBM PureSystems technically take and manage support calls; however, the solutions still contain more than one vendor’s products.

 

Nutanix and Simplivity?

New players to the scene are Nutanix and Simplivity.  Both had an impressive showing at VMworld 2012.  They are brick-based converged infrastructure that tightly couples compute and storage in an expandable cluster model.  They are unique in that they were designed from day one as a single solution for virtualization, not a combination of server, networking, and storage products available separately.  They offer features such as clustered shared storage, single-pane management, and deduplication, and they are tightly integrated.  They are focused on the mid-enterprise, reach down to medium business, and are being pulled into the large enterprise.


Posted in Clustered File Systems, Converged Infrastructure, Enterprise, General, IO Virtualization, NetApp, SAN and NAS, virtualization, VMWare | 4 Comments »

Tortoise and the Hare? (NFS vs. iSCSI and why this is Apples to Broccoli)

January 24th, 2013 by Steven Schwartz

I regularly get asked about storage protocols, especially when it comes to the right protocol to use for virtualization.  There is no single right answer here.  However, there are several less efficient ways to do it.

 

Boston – Copley Square: The Tortoise and the Hare (Photo credit: wallyg)

NFS vs. iSCSI vs. FCoE vs. CIFS vs. ATAoE

Ethernet-based storage protocols all get lumped together for comparison, and they shouldn’t be!  Below you’ll find a table of common storage protocols and some basics about them.  As far as Ethernet-based protocols go, there are two basic types, and they can be divided into SAN (block) and NAS (file) protocols.  There is a very important distinction between the two, and this distinction becomes even more apparent based on the storage vendor and application.  Storage vendors really have two products (there are very few exceptions to that rule): NAS devices and SAN devices.  There are SAN devices that can be front-ended by a NAS gateway (i.e., one that puts a file system on a block device and presents it as a NAS device), and NAS devices that can emulate a block device by manipulating a large file on their file system.
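
To make that second case concrete, here is a minimal sketch of the “large file pretending to be a block device” trick using ordinary Linux tools.  A real NAS would export the file through an iSCSI or FC target rather than a local loop device, so treat this as an analogy, not vendor internals; paths and sizes are hypothetical.

```python
# Minimal sketch: carve a sparse file out of a file system, then attach it
# as a block device. Requires root; paths are hypothetical placeholders.
import subprocess

BACKING_FILE = "/export/luns/lun0.img"  # lives on the NAS file system
SIZE = "100G"

# 1. Allocate a sparse file on the file system.
subprocess.run(["truncate", "-s", SIZE, BACKING_FILE], check=True)

# 2. Present the file as a block device; a host can now partition and
#    format it as if it were a physical disk.
loop_dev = subprocess.run(
    ["losetup", "--find", "--show", BACKING_FILE],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("file-backed block device:", loop_dev)
```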

 

| Protocol | Type | Transport | Requires a File System | Standards Based |
|---|---|---|---|---|
| FCP | Block | FC (optical or copper) | YES | YES |
| FCoE | Block | Ethernet* | YES | YES* |
| iSCSI | Block | Ethernet | YES | YES |
| NFS | File | Ethernet | NO | YES |
| CIFS | File | Ethernet | NO | YES |
| SMB | File | Ethernet | NO* | YES* |
| ATAoE | Block | Ethernet* | YES | NO |

*Caveats Apply

Which is faster?

There is a difference between speed and throughput!  I’ve commonly used the garden hose vs. water main comparison with clients.  While I was at EqualLogic, the hottest topic was iSCSI vs. FC, and the speed difference between GigE and 10GbE.  The common misunderstanding was that because 10 > 1, 10GbE was of course faster!  So back to the water model.  If you need to pass a small pebble (one with a diameter smaller than that of a garden hose), or even a steady line of small pebbles (a transactional data profile), a garden hose can move those pebbles at the same “speed” as a water main.  It is only when the data sets are boulders, or many concurrent data sets that would need to be broken down into smaller chunks to fit through a garden hose, that the water main exceeds the “throughput” of the garden hose.  As a physicist once pointed out to me, light travels at the same speed whether the laser is tiny or giant.  So in many cases we proved that GigE could actually compete with 2Gb and 4Gb Fibre Channel (iSCSI vs. FCP).  With today’s technologies we are comparing 10GbE to 8Gb FC, so the “pipe” is even less of an issue.
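
Here is a toy model of that pebble-vs-boulder argument in Python.  The 100-microsecond round-trip latency is an assumed illustrative figure, not a benchmark; the point is how differently small and large transfers respond to a fatter pipe.

```python
# Toy model: for small transactional I/O, per-operation latency dominates
# and a bigger pipe barely helps; for large sequential transfers, bandwidth
# dominates. The latency figure is an illustrative assumption.
LATENCY_S = 100e-6  # assumed ~100 microseconds per round trip on either link

def transfer_time(size_bytes: float, link_gbps: float) -> float:
    """Naive model: one round-trip latency plus serialization time."""
    return LATENCY_S + (size_bytes * 8) / (link_gbps * 1e9)

for size, label in [(4 * 1024, "4 KB transactional I/O"),
                    (1024**3, "1 GB sequential read")]:
    t1 = transfer_time(size, 1.0)    # GigE
    t10 = transfer_time(size, 10.0)  # 10GbE
    print(f"{label}: GigE {t1*1e3:9.3f} ms vs 10GbE {t10*1e3:9.3f} ms "
          f"({t1 / t10:4.1f}x speedup)")

# 4 KB: ~0.133 ms vs ~0.103 ms -- only ~1.3x faster on 10GbE.
# 1 GB: ~8590 ms vs ~859 ms -- nearly the full 10x shows up.
```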

 

VMWare – iSCSI, FCP, FCoE, or NFS?

The ongoing debate is which storage protocol to use with VMware.  The history of feature support by storage protocol matters, because in the earlier days of VMware, features were released for FCP-based storage first, then iSCSI, and last NFS.  This is no longer the case with VMware’s releases.  If you refer to the table above, FCoE, FCP, and iSCSI all require a file system on top of the protocol; in the case of VMware, that is typically VMFS, VMware’s clustered/shared file system.  Keep in the back of your head that the most difficult thing to develop at scale is a clustered file system (usually because of distributed lock management, which doesn’t really apply to vmdks).  NFS, however, is already a POSIX-compliant shared file system as a storage protocol.  This means there is no mapping of LUNs to ESX hosts, no management of VMFS (or of multiple VMFS instances), and less overall management required.  NFS doesn’t require specialized networks beyond tuned, traditional routing and switching.  It doesn’t require any special commands or configuration to grow in size, and it is fully supported within ESX and VMware!  So given the option of SAN vs. NAS, with the current state of support, I would choose NAS (NFS) for VMware; however, make sure you choose an enterprise NAS storage solution!!!!
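
For a sense of how little ceremony an NFS datastore involves, here is a minimal sketch wrapping the stock esxcli command an admin would run on an ESXi 5.x host.  The filer name, export path, and datastore name are hypothetical placeholders.

```python
# Minimal sketch: a single esxcli call on the ESXi host mounts an NFS export
# as a datastore -- no LUN masking, no VMFS formatting, no adapter rescan.
# Names below are hypothetical placeholders.
import subprocess

def add_nfs_datastore(nfs_host: str, export: str, name: str) -> None:
    """Mount an NFS export as a datastore on the local ESXi host."""
    subprocess.run(
        ["esxcli", "storage", "nfs", "add",
         "--host", nfs_host, "--share", export, "--volume-name", name],
        check=True,
    )

add_nfs_datastore("filer01.example.com", "/vol/vmware_ds1", "nfs_ds1")
```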


Posted in Clustered File Systems, FCoE, General, iSCSI, NFS, SAN and NAS, virtualization, VMWare | 4 Comments »
