Steve Sicola: SUPERSTAR!!

June 22, 2010

 

CRN unveiled its 2010 class of Storage Superstars, spotlighting 10 “individuals and groups that made the modern storage industry what it is today.” And among the visionaries honored was Xiotech CTO Steve Sicola.

The “driving force on several generations of storage arrays and architectures,” Steve was recognized for his nearly 40 patents, his tenure at Compaq and DEC, and his role as VP of Seagate’s Advanced Storage Architecture (ASA) Group. Informally known as the “Skunk Works,” an homage to Lockheed Martin’s legendary engineering team, the ASA Group was acquired by Xiotech in 2007 and architected ISE.

“Steve embodies the passion, commitment and innovative thinking – all hallmarks of Xiotech – that serve as the foundation of this company and have fueled the revolution that is ISE,” said Xiotech President and CEO Alan Atkinson. “With Steve overseeing our Storage Fellows Program, we look forward to his (and others’) continued contributions to the industry and Xiotech – including the next generation of ISE.”

Congratulations Steve!


Too good to be true? Make me prove it…

April 19, 2010

  

Taking disruptive technology to early adoption is in my DNA, the very fabric of who I am.  I’ve had the privilege to engage with great technologists, many of whom remain colleagues today.  As many of you know, I recently came on board at Xiotech, parts of which were owned by Seagate until a few years ago; Seagate is still an integral part of our ownership and our IP.  The messaging in the storage space can be confusing, so here are some ideas that may help break through the noise.  So here’s your challenge:

 

Q:  What do IBM, HP, Dell, EMC, NetApp, 3Par, Compellent and Xiotech have in common?

A:  We all use disk drives manufactured by Seagate.

No matter which storage vendor you deploy, Seagate drives are at the heart of the technology.  Now let me share with you Xiotech’s Integrated Storage Element (ISE).  We take the very same Seagate technology to the next level — after all, we know these drives better than any other storage vendor.

Example:

PLAN #1:  Deploy 100 storage subsystems of [INSERT YOUR BRAND HERE] at $20k each, with maintenance running 15% of purchase price per year in years 4 and 5 (assuming a 3-year warranty) = $600,000 in maintenance alone.  (A back-of-envelope sketch of this math follows the plans below.)

PLAN #2:  Acquire Xiotech ISE at 30% less initial acquisition cost/space/energy to produce the same performance (or better), and pay $0 in hardware maintenance over 5 years. 

          or – use the comparable acquisition cost to get 30% more performance from the start

          or – use the NPV of the maintenance to get more performance/density
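
To make that arithmetic concrete, here is a minimal sketch of the comparison. The purchase price, maintenance rate, and 30% figures come straight from the plans above; the variable names and the discount rate used for the NPV option are hypothetical placeholders.

```python
# Back-of-envelope comparison of the two plans above.
# Figures from the post: 100 subsystems at $20k, 15% annual maintenance in
# years 4-5, and ISE at roughly 30% lower acquisition cost with $0 hardware
# maintenance over 5 years. The 6% discount rate is a placeholder assumption.

UNITS = 100
UNIT_PRICE = 20_000            # $ per subsystem (Plan #1)
MAINT_RATE = 0.15              # annual maintenance rate, years 4 and 5
DISCOUNT_RATE = 0.06           # hypothetical rate for the NPV option

plan1_acquisition = UNITS * UNIT_PRICE                   # $2,000,000
plan1_maintenance = UNITS * UNIT_PRICE * MAINT_RATE * 2  # $600,000

plan2_acquisition = plan1_acquisition * (1 - 0.30)       # ~30% less up front
plan2_maintenance = 0                                    # $0 over 5 years

# NPV of the maintenance Plan #2 avoids (payments falling in years 4 and 5)
npv_avoided_maint = sum(
    UNITS * UNIT_PRICE * MAINT_RATE / (1 + DISCOUNT_RATE) ** year
    for year in (4, 5)
)

print(f"Plan #1 5-year hardware cost: ${plan1_acquisition + plan1_maintenance:,.0f}")
print(f"Plan #2 5-year hardware cost: ${plan2_acquisition + plan2_maintenance:,.0f}")
print(f"NPV of avoided maintenance:   ${npv_avoided_maint:,.0f}")
```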

If you’re a former client from FileTek, Agilis, Sensar, Verity, Immersion or Fusion-io, you’ve seen what I’ve seen:  disruptive technology making a significant difference — and Xiotech ISE is no different.

Don’t believe me?  MAKE ME PROVE IT!

 


DESKTOP HPC RECIPE: Start with 1ea OSS-PCIe-SYS-IS-2.4-9270-4

February 20, 2010

How to cook up a desktop HPC server

Ingredients List

Hardware

OSS-PCIe-SYS-IS-2.4-9270-4: 1 ea (with 4 AMD 9270 GPUs)
SAS drives: 4 ea (LSI RAID controller included)
DRAM: 96 GB
ioDrive Duo 640: 4 ea
Xiotech ISE: 2 ea
Xiotech ISE High Perf DataPacs: 4 ea

Software

Linux OS
Lustre File System

Someone please cost out this configuration and pass it along.
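
As a starting point for that costing, here is a minimal sketch only: every unit price below is a placeholder to be replaced with real quotes, and the part names simply mirror the ingredients list above.

```python
# Hypothetical costing sketch for the desktop HPC recipe above.
# All unit prices are placeholders (0); swap in real quotes before using.

bill_of_materials = [
    # (part,                               qty, unit_price_usd)
    ("OSS-PCIe-SYS-IS-2.4-9270-4",           1, 0),
    ("SAS drive (w/ LSI RAID controller)",   4, 0),
    ("DRAM (96 GB total)",                   1, 0),
    ("ioDrive Duo 640",                      4, 0),
    ("Xiotech ISE",                          2, 0),
    ("Xiotech ISE High Perf DataPac",        4, 0),
]

total = sum(qty * price for _, qty, price in bill_of_materials)
for part, qty, price in bill_of_materials:
    print(f"{part:<38} {qty:>2} x ${price:>9,.2f} = ${qty * price:>11,.2f}")
print(f"Estimated hardware total: ${total:,.2f}")
```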

Steve


IT’S OFFICIAL: ClusterStor Now Out of Stealth Mode

December 17, 2009

Yesterday I got a call from Kevin Canady, former VP of Cluster File Systems, to let me know that he is resurrecting Lustre support for the HPC community under a new entity called ClusterStor.  This looks to be a good move, ensuring there is viable open support for Lustre users.  They are also embarking on next-generation storage tech for Exascale. Keep an eye on these guys: they have some of the best pedigree in the industry.  Here is the update from Kevin:

“ClusterStor is pleased to offer full Lustre support and development services, worldwide, 24×7!

The Lustre File System is a leader in I/O performance and scalability for the world’s most powerful and complex computing environments. With constantly increasing demands on IT organizations, managing such sophisticated systems makes it challenging to identify and resolve product issues quickly.

At ClusterStor, we’re committed to maximizing the performance, security, and availability of your Lustre systems. The ClusterStor team combines the agility of a focused company with the in-depth architectural, development and implementation skills of over a hundred years of combined Lustre development and customer support experience. We will both enhance your support and save you money.

ClusterStor was founded in early 2009 by Peter Braam, who founded and led the Lustre project. It employs leading Lustre architects and developers. ClusterStor does not sell systems or storage hardware and will support Lustre for end-users, systems and storage vendors alike. Its goal is to enhance Lustre as an open source, community-accepted product.

If you would like to explore Lustre Linux and Windows support options and large-scale I/O solutions, give us a call and compare!”  You can find out more here:

P. Kevin Canady
Vice President, Strategic Relationship Sales
ClusterStor Inc. (Formerly Horizontal Scale)
415.505.7701
kevin.canady@clusterstor.com

I know a lot of folks who have been waiting for this announcement — good luck to Kevin and Peter!


Fusion-io ioDrive Octal (Repost from Vizworld)

December 4, 2009

Not a lot of new information but really well presented — kudos to Paul Adams at Vizworld!

The new 2.5 TB ioDrive Octal from Fusion-io was recently displayed at Supercomputing 2009 in Portland, Oregon. The ioDrive is a solid-state drive (SSD) that fits into an x16 PCI Express 2.0 slot. The beauty of using the PCI Express 2.0 slot is that Fusion-io can really obtain great performance: Fusion-io claims that the new Octal can saturate an x16 PCIe 2.0 slot, delivering a bandwidth of 6.4 GBytes/sec. It accomplishes this by using 1,600 flash dies. Samsung is the provider of the memory chips and is also an investor in Fusion-io. With that many chips, one might wonder how Fusion-io handles the inevitable chip failures. To see how they handle that, let’s take a look at a simpler ioDrive.

[Image: a regular 160 GB ioDrive, flat view]
If you take a look at a regular 160 GB ioDrive, as in the above image, you will see that there are 24 pads on both the front and rear of the card. Each of these pads holds 8 chips. You can think about it as 8 rows, where each row has 24 chips. There is also a 25th pad near the front of the ioDrive with another 8 chips on it. That 25th pad provides the redundancy in case one or more of the other chips die. In the worst case, one chip could fail in each of the 8 rows, and the ioDrive could still recover from it. The same idea is used in the ioDrive Duo, which is really just two ioDrives, with one inverted. Both the ioDrive and the Duo consume 25 watts, drawn from the PCIe connector. Fusion-io has written software to handle the rare cases where the Duo could exceed the available 25 watts.
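
To illustrate how a per-row redundancy scheme of this kind can work, here is a minimal sketch in the spirit of the description above. The XOR-parity approach and all names in it are assumptions for illustration only, not Fusion-io's actual on-card implementation.

```python
# Illustrative sketch only: per-row parity redundancy in the spirit of the
# layout described above (rows of 24 data chips plus one extra chip per row).
# This is NOT Fusion-io's actual scheme; it just shows how a single failed
# chip in a row can be reconstructed from the surviving chips.
from functools import reduce

CHIPS_PER_ROW = 24

def parity(chunks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

# Fake per-chip data for one row (in reality this would be flash contents).
row = [bytes([i] * 16) for i in range(CHIPS_PER_ROW)]
redundant_chip = parity(row)                 # stored on the extra 25th pad

# Simulate losing chip 5 in this row, then rebuild it from the survivors.
survivors = row[:5] + row[6:]
rebuilt = parity(survivors + [redundant_chip])
assert rebuilt == row[5], "single-chip failure in a row is recoverable"
print("Recovered failed chip from row parity.")
```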
[Image: ioDrive Octal]
The ioDrive Octal follows the same pattern and, as one might expect, can be thought of as 8 ioDrives placed together. It is not exactly 8 ioDrives put together, but you can think of it that way. At the end of the Octal is a connector, which is a key difference from a normal ioDrive. This connector is an I/O link capable of 6.4 GB/sec; Fusion-io has created a PCIe 2.0 link to connect their drives to other computers. They have also written their own software to handle data integrity during read and write operations. One of the benefits of this connector is high availability: if the server connected to the Octal dies, the file system can be failed over to a redundant server, and the data on the Octal is still available.

[Image: Fusion-io 1 TB/s data center mock-up]

The image above shows a mock-up of a data center that can achieve 1 terabyte per second of sustained bandwidth. The clear racks represent what it would take in conventional hardware to achieve 1 TB/s sustained: approximately 55,440 disk drives in 132 racks of equipment. The black racks represent what it would take with 220 ioDrive Octals to achieve the same bandwidth, which would occupy six racks. If you look closely, you will notice that there are twelve such racks; that is because the customer wants to double the size of the installation. Who is this customer? All Fusion-io is saying is “two presently undisclosed government organizations.” If you read their press release, you might get the idea that the two locations are part of the Advanced Simulation and Computing (ASC) Program and that one of those sites might be Lawrence Livermore. Apparently it depends on how the funding plays out.
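
A quick sanity check on those numbers: the aggregate Octal bandwidth and the implied per-disk throughput are derived from the figures quoted above, assuming the drives themselves are the only bottleneck.

```python
# Rough sanity check of the 1 TB/s figures quoted above.
TARGET_BW_GBPS = 1000            # 1 TB/s expressed in GB/s
OCTAL_BW_GBPS = 6.4              # per ioDrive Octal, from the post
OCTALS = 220                     # from the post
DISKS = 55_440                   # conventional-hardware figure from the post

print(f"{OCTALS} Octals deliver ~{OCTALS * OCTAL_BW_GBPS:,.0f} GB/s "
      f"(comfortably above the {TARGET_BW_GBPS} GB/s target)")
print(f"The conventional build implies ~{TARGET_BW_GBPS * 1000 / DISKS:.0f} MB/s "
      f"sustained per disk drive")
```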

When will the Octal be available? Fusion-io is not saying. However, rumors on the floor at SC09 have it being introduced in 1Q2010. Next year they are planning to take the Octal from 2.5 TBytes up to 5.0 TBytes. Fusion-io is partnered with companies like DDN, IBM, and HP, so it is likely that you will see the Octal end up in their products in the near future.

Fusion-io Media Wall

Fusion-io also showed off their media wall at SC09. This was the same setup that was shown at Siggraph 2009. Fusion-io had sixteen diskless servers which were booting off one ioDrive. To show that, all the disks were pulled out of each server. Once booted, each server then ran 256 instances of the VLC media player. Each VLC loaded a standard definition video file and displayed it on a screen. For those of you doing the math, that is 4,096 video streams coming from one ioDrive. Impressive.

Link to this article HERE.


HARDCORE FANS ONLY — SC09 Session: Flash Technology in HPC: Let the Revolution Begin

November 27, 2009

Recorded 11/20/09 at SC09, this panel discussion, entitled “Flash Technology in HPC: Let the Revolution Begin,” was moderated by Bob Murphy of Sun Microsystems. Download for iPod

Abstract: With an exponential growth spurt of peak GFLOPs available to HPC system designers and users imminent, the CPU performance I/O gap will reach increasingly gaping proportions. To bridge this gap, Flash is suddenly being deployed in HPC as a revolutionary technology that delivers faster time to solution for HPC applications at significantly lower costs and lower power consumption than traditional disk based approaches. This panel, consisting of experts representing all points of the Flash technology spectrum, will examine how Flash can be deployed and the effect it will have on HPC workloads.

* Bob Murphy slides: “Flash Technology in HPC: Let the Revolution Begin”

* Paresh Pattani slides: “Intel SSD Performance on HPC Applications”

* David Flynn slides: “Fusion-io Solid State in HPC”

* Larry Mcintosh and Dale Layfield slides: “Sun’s Flash Solutions for optimizing MSC.Software’s Simulation Products”

For more information, check out this Sun Blueprint: Sun Business Ready HPC for MD Nastran.

* Jan Silverman slides: “Spansion EcoRAM NAM Network Attached Memory”


Fusion-io Update: CTO Slides from SC09

November 25, 2009

David Flynn, Fusion-io CTO, was featured on a panel at the recently concluded SC09 — you can read about the subject matter and participants here:  SC09 Flash Technology Panel

In addition, David presented some updated slides about Fusion-io, ioMemory, and the new ioDrive Octal.  He also detailed the MySpace use case.  Here are the slides:  DAVID FLYNN SC09 SLIDES