Too good to be true? Make me prove it…

April 19, 2010

  

Taking disruptive technology to early adoption is in my DNA; it's the very fabric of who I am.  I've had the privilege to engage with great technologists, many of whom remain colleagues today.  As many of you know, I recently came on board at Xiotech, part of which belonged to Seagate until a few years ago; Seagate remains an integral part of our ownership and our IP.  The messaging in the storage space can be confusing, so here are some ideas that may help break through the noise.  Here's your challenge:

 

Q:  What do IBM, HP, Dell, EMC, NetApp, 3Par, Compellent and Xiotech have in common?

A:  We all use disk drives manufactured by Seagate.

No matter which storage vendor you deploy, Seagate drives are at the heart of the technology.  Now let me share with you Xiotech’s Integrated Storage Element (ISE).  We take the very same Seagate technology to the next level — after all, we know these drives better than any other storage vendor.

Example:

PLAN #1:  Deploy 100 [INSERT YOUR BRAND HERE] storage subsystems at $20k each, with each costing 15% of its purchase price per year in maintenance during years 4 & 5 (assuming a 3-year warranty) = $600,000 in maintenance.

PLAN #2:  Acquire Xiotech ISE at 30% less initial acquisition cost/space/energy to produce the same performance (or better) and pay $0 in hardware maintenance over 5 years.

          or – use the comparable acquisition cost to get 30% more performance from the start

          or – use the NPV of the maintenance to get more performance/density (a quick back-of-the-envelope sketch follows below)
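To spell out the arithmetic, here is a minimal Python sketch of the two plans. All figures are the illustrative numbers from this post, not a formal quote; your list prices, maintenance rates, and discounts will vary by deal:

units = 100
unit_cost = 20_000                 # $ per subsystem in PLAN #1
maint_rate = 0.15                  # annual maintenance as a fraction of unit cost
maint_years = 2                    # years 4 and 5, after the 3-year warranty

plan1_acquisition = units * unit_cost                             # $2,000,000
plan1_maintenance = plan1_acquisition * maint_rate * maint_years  # $600,000

plan2_acquisition = plan1_acquisition * (1 - 0.30)  # 30% lower acquisition cost
plan2_maintenance = 0                               # $0 hardware maintenance over 5 years

print(f"PLAN #1, 5-year hardware cost: ${plan1_acquisition + plan1_maintenance:,.0f}")  # $2,600,000
print(f"PLAN #2, 5-year hardware cost: ${plan2_acquisition + plan2_maintenance:,.0f}")  # $1,400,000

Same workload, roughly $1.2M apart over five years. That is the claim on the table.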

If you’re a former client from FileTek, Agilis, Sensar, Verity, Immersion or Fusion-io, you’ve seen what I’ve seen:  disruptive technology making a significant difference — and Xiotech ISE is no different.

Don’t believe me?  MAKE ME PROVE IT!

 


LUG 2010 This Week in Stunning Monterey!

April 12, 2010

 

Lustre Advanced User Seminar 2010 Agenda

Wednesday, April 14 – Seascape Grand Ballroom, 2nd Floor, Seascape Conference Center

8:00 – 9:00 Breakfast – Riviera Room, 3rd Floor, Seascape Conference Center
9:00 – 10:30 Lustre Tips and Tricks

Andreas Dilger, Oracle
10:30 – 11:00 Coffee Break – Foyer, 2nd Floor, Seascape Conference Center
11:00 – 12:00 Administering Lustre at Scale, Lessons Learned at ORNL

Jason Hill, ORNL
12:00 – 1:00 Lunch – Riviera Room, 3rd Floor, Seascape Conference Center
1:00 – 2:30 A Look Inside HSM

Aurelien Degremont and Thomas Leibovici, CEA
2:30 – 3:00 Coffee Break – Foyer, 2nd Floor, Seascape Conference Center
3:00 – 5:00 A Deep Dive into Lustre Recovery Mechanisms

Johann Lombardi, Oracle

LUG 2010 Agenda

LUG 2010, a two-day event, will feature a workshop and numerous presentations on select Lustre features, upcoming enhancements, site-specific experiences using Lustre, and much more.
Thursday, April 15 – LUG Day 1 – Seascape Grand Ballroom, 2nd Floor, Seascape Conference Center

8:00 – 9:00 Breakfast – Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center
9:00 – 9:30 LUG Kickoff

Peter Bojanic, Oracle
9:30 – 10:00 Lustre Development

Eric Barton, Oracle
10:00 – 10:30 Coffee Break – Foyer, 2nd Floor, Seascape Conference Center
10:30 – 11:30 Lustre 1.8 Update

Peter Jones, Oracle
11:30 – 12:00 Lustre 2.0

Robert Read, Oracle
12:00 – 1:00 Lunch – Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center
1:00 – 1:30 Getting the Best from Lustre in a NUMIOA and Multi-rail IB Environment

Sebastien Buisson, Bull
1:30 – 2:15 What RedSky and Lustre Have Accomplished

Steve Monk, Sandia
Joe Mervini, Sandia
2:15 – 3:00 Lustre at the OLCF: Experiences and Path Forward

Galen Shipman, ORNL
3:00 – 3:30 Coffee Break – Foyer, 2nd Floor, Seascape Conference Center
3:30 – 4:00 Comprehensive Lustre I/O Tracking with Vampir

Michael Kluge, ZIH
4:00 – 4:30 Lustre Deployment and Early Experiences

Florent Parent, Clumeq
4:30 – 5:00 Indiana University’s Lustre WAN – Empowering Production Workflows on the TeraGrid

Stephen Simms, Indiana University
5:00 – 5:30 LCE: Lustre at CEA

Stephane Thiell, CEA
5:30 – 6:30 Break
6:30 Dinner Reception – Seascape Resort: To be announced at Thursday’s event

Friday, April 16 – LUG Day 2 – Seascape Grand Ballroom, 2nd Floor, Seascape Conference Center

8:00 – 9:00 Breakfast – Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center
9:00 – 9:30 Lustre SMP Scaling Improvements

Liang Zhen, Oracle
9:30 – 10:00 Lustre/HSM Binding

Aurelien Degremont, CEA
Hua Huang, Oracle
10:00 – 10:30 Lustre Enabled WAN in Government, NRL

Jeremy Filizetti, NRL/SMSi
10:30 – 11:00 Coffee Break – Foyer, 2nd Floor, Seascape Conference Center
11:00 – 11:30 Hedging Our Filesystem Bet

Kent Blancett, BP
11:30 – 12:00 Analysis and Recovery from Lustre Faults/Failures on Ranger

John Hammond, TACC
12:00 – 1:00 Lunch – Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center
1:00 – 1:45 Kerberized Lustre 2.0 over the WAN

Josephine Palencia, PSC
1:45 – 2:15 Reaping the Benefits of MetaData

Nic Cardo, NERSC
2:15 – 2:45 Coffee Break – Foyer, 2nd Floor, Seascape Conference Center
2:45 – 3:15 Porting Lustre to Operating Systems Other than Linux

Ken Hornstein, NRL
3:15 – 3:45 Lustre and Community Development

Daniel Ferber, Oracle
3:45 LUG 2010 Concludes

MySQL Conference Santa Clara This Week

April 12, 2010

 

I’ll be there Tuesday…ping me if you’re going.


IT’S OFFICIAL: ClusterStor Now Out of Stealth Mode

December 17, 2009

Yesterday I got a call from Kevin Canady, former VP of Cluster File Systems, letting me know that he is resurrecting Lustre support for the HPC community under a new entity called ClusterStor.  This looks to be a good move, ensuring there is viable open support for Lustre users.  They are also embarking on next-generation storage tech for exascale.  Keep an eye on these guys:  they have some of the best pedigree in the industry.  Here is the update from Kevin:

“ClusterStor is pleased to offer full Lustre support and development services, worldwide, 24×7!

The Lustre File System is a leader in I/O performance and scalability for the world’s most powerful and complex computing environments. With constantly increasing demands on IT organizations, the sophistication of these systems makes it challenging to identify and resolve product issues quickly.

At ClusterStor, we’re committed to maximizing the performance, security, and availability of your Lustre systems. The ClusterStor team combines the agility of a focused company with the in-depth architectural, development and implementation skills of over a hundred years of combined Lustre development and customer support experience. We will both enhance your support and save you money.

ClusterStor was founded in early 2009 by Peter Braam, who founded and led the Lustre project. It employs leading Lustre architects and developers. ClusterStor does not sell systems or storage hardware and will support Lustre for end users, systems vendors, and storage vendors alike. Its goal is to enhance Lustre as an open-source, community-accepted product.

If you would like to explore Lustre Linux and Windows support options and large-scale I/O solutions, give us a call and compare!”  You can find out more here:

P. Kevin Canady
Vice President, Strategic Relationship Sales
ClusterStor Inc. (Formerly Horizontal Scale)
415.505.7701
kevin.canady@clusterstor.com

I know a lot of folks who have been waiting for this announcement — good luck to Kevin and Peter!


Storage Trends From SuperComputing 2009

December 2, 2009

This article is a REPOST from Jeffrey B. Layton, writing in LINUX MAGAZINE.  I had to edit out some of the images/graphics to get this up quickly; apologies, and use the link for the complete story.

SC09 TREND:  High-Density Flash Storage

Article by Jeffrey B. Layton

There was a past article about really fast storage (Ramdisks – Now We are Talking Hyperspace!), but flash-based storage units are becoming more popular. For dense flash-based storage units, the performance density (IOPS/U or throughput/U, where “U” is the common rack-unit measure, 1.75″) can be better than that of DRAM-based storage units, especially when price is considered. This is particularly true for IOPS-driven applications.

Several vendors offer flash storage units: Texas Memory, Violin Memory, and now Sun all offer flash-based units with amazing performance.

The basic concept is that you take a standard rack unit, stuff it full of flash units (drives or flash-based DIMMs), possibly combine them with RAID controller(s), add FC or IB connections, or connect directly to a host node via a PCIe cable, and you have a storage unit with very high performance. It sounds simple, but it is actually more difficult than you think.

Texas Memory
Texas Memory has been making very high-speed storage units for a number of years. Originally they focused more on DRAM-based solutions, but in the last few years they have been offering flash-based storage devices as well. The latest unit is the RamSan-620.

The RamSan-620 is a 2U unit that has a capacity of 1-5 TB of SLC flash (the good but more expensive kind of flash) and uses only 230W of power. It has two 4 Gb FC connections in the back of the box, with InfiniBand and 10GigE connections coming soon. It uses supercapacitors so that, in case of power loss, the DRAM on the flash chips can be flushed to the flash storage. It has some really good management features, including the ability to use 512-byte blocks rather than the standard 4 KB blocks (if operating systems and file systems can make use of them). It also allows you to carve the storage into 1 to 1,024 LUNs with variable capacity in each LUN, and to assign LUNs to specific ports.

However, the really cool aspect of the unit is the performance. It is rated at up to 3 GB/s throughput and 250,000 random IOPS. Given that a hard drive can perform between 100 and 200 IOPS, this kind of performance is nothing short of remarkable: matching that IOPS capability would take somewhere in the range of 1,250-2,500 hard drives.
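To make the drive-equivalence arithmetic explicit, here is a minimal sketch using only the figures quoted above:

ramsan_iops = 250_000                    # rated random IOPS for the RamSan-620
hdd_iops_low, hdd_iops_high = 100, 200   # typical random IOPS per hard drive

print(f"{ramsan_iops // hdd_iops_high:,} drives if each does 200 IOPS")  # 1,250
print(f"{ramsan_iops // hdd_iops_low:,} drives if each does 100 IOPS")   # 2,500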

Violin Memory
Violin Memory is selling a high-performance storage unit, the Violin 1010, that can use either DRAM or flash-based storage modules. The basic 2U unit can accommodate FC and Ethernet connections via a “network head,” or it can be attached directly to a node via a PCIe interface (dual x4 and x8 interfaces). The flash-based version of the 1010 has a raw capacity of up to 4 TB of SLC flash. The 1010 uses 62 Violin Intelligent Memory Modules (VIMMs) that come in either 32GB or 64GB capacities.

Like the Texas Memory unit, the Violin Memory 1010 has fantastic performance. Fully loaded with 64GB VIMMs, the 1010 delivers 345,000 4K read IOPS and 219,000 4K write IOPS. These numbers assume an x8 PCIe connector; an x4 PCIe connection delivers 215,000 4K read IOPS and 145,000 4K write IOPS. Sustained read throughput is just a little above 1.4 GB/s, peak write is about 1 GB/s, and sustained random write is about 850 MB/s.

Sun F5100
Recently, Sun introduced a totally flash-based storage unit, the Sun F5100. It is a 1U box that holds 20, 40, or 80 SO-DIMM-based SLC Flash Modules (FMODs). Each FMOD currently has 24GB of usable capacity but is really a 36GB flash module (the extra space is used for over-provisioning). Figure 2 in the original article, courtesy of Robin Harris at StorageMojo, shows the SO-DIMM-based flash unit.

The FMODs have a small amount of DRAM on board, so the F5100 has up to four very large capacitors called Energy Storage Modules (they look like the drives in the front of the unit). These capacitors store enough energy to allow the DRAM to flush its data to the flash storage.

Each of the FMODs shows up as a block device to the OS. For ZFS, this isn’t such a big deal, since it can use the drives individually and manage them as part of the overall storage pool. For something like Linux, you would have to use md to aggregate all of the FMODs into a RAID group.

The price for the entry-level unit is also very attractive. According to several articles, the price point for 20 FMODs (480 GB of usable capacity) is $45,995, for 397,000 read IOPS and 304,000 write IOPS. Moreover, the entire unit uses only about 300W of power (less than your desktop system). For that level of IOPS performance and power consumption, the price is very, very attractive.
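As a rough sketch of why that price is attractive, here are the cost- and power-efficiency figures implied by the numbers quoted above (list price and rated IOPS only; real-world results will differ):

price = 45_995       # $ for the 20-FMOD configuration
read_iops = 397_000
write_iops = 304_000
power_watts = 300    # approximate draw for the whole unit

print(f"~${price / read_iops:.3f} per read IOPS")              # ~$0.116
print(f"~${price / write_iops:.3f} per write IOPS")            # ~$0.151
print(f"~{read_iops / power_watts:,.0f} read IOPS per watt")   # ~1,323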


HARDCORE FANS ONLY — SC09 Session: Flash Technology in HPC: Let the Revolution Begin

November 27, 2009

Recorded 11/20/09 at SC09, this panel discussion, entitled “Flash Technology in HPC: Let the Revolution Begin,” was moderated by Bob Murphy, Sun Microsystems. Download for iPod

Abstract: With an exponential growth spurt of peak GFLOPs available to HPC system designers and users imminent, the CPU performance I/O gap will reach increasingly gaping proportions. To bridge this gap, Flash is suddenly being deployed in HPC as a revolutionary technology that delivers faster time to solution for HPC applications at significantly lower costs and lower power consumption than traditional disk based approaches. This panel, consisting of experts representing all points of the Flash technology spectrum, will examine how Flash can be deployed and the effect it will have on HPC workloads.

* Bob Murphy slides: “Flash Technology in HPC: Let the Revolution Begin”

* Paresh Pattani slides: “Intel SSD Performance on HPC Applications”

* David Flynn slides: “Fusion-io Solid State in HPC”

* Larry Mcintosh and Dale Layfield slides: “Sun’s Flash Solutions for optimizing MSC.Software’s Simulation Products”

For more information, check out this Sun Blueprint: Sun Business Ready HPC for MD Nastran.

* Jan Silverman slides: “Spansion EcoRAM NAM Network Attached Memory”


Why PCIe-based SSDs Are Important

November 20, 2009

There’s an old expression I like: “Different isn’t better, it’s just different.”

When it comes to SSDs based around a SATA or SAS format, that’s pretty much the case in my view. Yes, there are exceptional products suited for the enterprise, like Pliant and STEC. And, yes, there are more conventional items for consumers, like Intel and OCZ (and about 20 others).  And yes, the standard-package 3.5″ form factor of these devices makes them suitable for shared storage as well as for integration into heterogeneous and homogeneous storage environments like you might find in a typical data center.  Embracing these SSDs, you will find the usual manufacturers: EMC, NetApp, Sun, and others.  Their use of SSD is evolutionary, easy to digest.

PCIe-based SSDs are very different.  For one thing, they sit on the server’s system bus right next to the CPU.  This is a direct-attached storage (DAS) model that has numerous advantages for certain types of processing.  We agree that not all PCIe-based SSDs are suitable for all applications, but for applications that can take advantage of bandwidth, throughput, and latency enhancements, these devices are indeed a superior architecture.

There are some challenges:

1)  Not all servers are created equal.  PCIe-based devices require strict adherence to the PCIe specifications at the server level.  Ping me if you want to learn more about why this is critical.

2)  Many servers do not have enough PCIe slots configured appropriately for PCIe devices.  This is especially true when creating HIGH AVAILABILITY (or HA) environments.

3)  Only a very few servers have enough of the right type of slots to be meaningful from a value perspective.  It makes no sense to refresh a server for a PCIe-based SSD if you have to spend 2x or 3x to get the right slots, power, etc.

4)  Applications may not be optimized for SSD DAS.  No kidding.  The OLTP or DBMS applications that could take the most advantage of SSD DAS are optimized for high-latency disk access over networks such as NAS.  These applications are totally comfortable sending out thousands or tens of thousands of transaction requests to build up queue depth for the CPUs.  The net result is that the CPUs appear very busy but in fact aren’t doing very much.  These limitations are known and well defined.  Over time, application vendors such as Sun, Oracle, and Microsoft will implement fixes to optimize for PCIe-based storage.  A small illustration of the queue-depth/latency relationship follows below.
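Here is a minimal sketch of that queue-depth point using Little’s Law (throughput ≈ outstanding requests / per-request latency). The latency figures are illustrative assumptions, not measurements of any particular product:

def iops(queue_depth, latency_s):
    # Little's Law: steady-state throughput for a given queue depth and latency.
    return queue_depth / latency_s

disk_latency = 0.005     # ~5 ms for a networked disk array (assumed)
flash_latency = 0.00005  # ~50 us for a PCIe flash device (assumed)

# An OLTP app tuned for disk keeps hundreds of requests in flight:
print(f"disk,  QD=500: {iops(500, disk_latency):,.0f} IOPS")  # 100,000
print(f"flash, QD=5:   {iops(5, flash_latency):,.0f} IOPS")   # 100,000

The same throughput that takes hundreds of outstanding requests on disk takes a handful on flash, which is exactly why software tuned for deep queues leaves a PCIe SSD looking busy while getting little benefit from its low latency.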

Aside from these items, there is a discussion regarding the suitability of NAND flash devices in the data center, as well as the MLC/SLC issue.  I’ll tackle those in another post.  In my view, MySpace and Wine.com are leading the way, and there are many others who have not come forward publicly, preferring to keep the ROI and GREEN advantages all to themselves.

The latest announcements from Fusion-io, Texas Memory Systems, Micron and others point out these differences.  FULL DISCLOSURE:  I am a former employee of Fusion-io.