FIO Financial Roadshows Update: Four Major Conferences in San Francisco

February 1, 2012

Credit Suisse Solid State Storage Conference
San Francisco, California
Wednesday, February 8, 2012, 2:15 p.m. (PT)
David A. Flynn, CEO, and Dennis P. Wolf, CFO

Barclays Big Data Conference
San Francisco, California
Monday, February 13, 2012, 4:30 p.m. (PT)
David A. Flynn, CEO, and Dennis P. Wolf, CFO

Goldman Sachs Technology & Internet 2012 Conference
San Francisco, California
Tuesday, February 14, 2012, 10:20 a.m. (PT)
David A. Flynn, CEO, and Dennis P. Wolf, CFO

Morgan Stanley 2012 Technology, Media & Telecom Conference
San Francisco, California
Tuesday, February 28, 2012, 4:05 p.m. (PT)
David A. Flynn, CEO, and Dennis P. Wolf, CFO

Random Post from Tom’s Hardware: ROI/TCO for PCIe Flash

January 11, 2012




CaedenV 01/11/2012 3:15 PM

If they manage to get 12TB on a single board, running at a fraction of the power of a salvo of 15K SAS drives then they could charge whatever they want to, that is just mind-blowingly amazing!
It would take 40 300GB 10K or 15K drives to reach 12TB of data. Assuming a throughput of 150MB/s/drive that would be 6GB/s of sequential read/write performance, which will beat this card (assuming it is a PCIe2 8x slot which caps out at 4GB/s throughput). For IOPS, this card (and even the R4 cards) would easily beat 40 SAS drives at 200IOPS each (for a total of 8,000IOPS vs the R4’s 410,000 IOPS, and I am sure the R5 is faster). If this is a PCIe3 card with 8GB/s of bandwidth available then it will be even faster still! Plus when you figure that this single magical card could replace 40 physical HDDs… that’s a lot of power, and a ton of space saved!
Now for price: 40 15K 300GB drives can be found on Newegg for ~$450 each (I assume this price is also inflated due to the floods, just like the consumer drives are), totaling $18,000. The R4 3.2TB drive starts at $20,000, which means a 12TB drive would be ~$50-60,000, which is 3x the price of the SAS solution and a rough equivalent in sequential throughput. For IOPS however (again using the R4 specs, as we do not know what the R5 is yet, except that it will be better), you get .4 IOPS/$ with the SAS setup and 6.8 IOPS/$ with the R5 for the same amount of storage space, but a small fraction of the power usage, and an even smaller fraction of the physical space. I think that says it all for the pro markets; if this is anywhere near $60,000 for 12TB (or even north of $100,000) it would be more than worth the cost compared to the performance gained. Simply amazing!
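The commenter's back-of-envelope math checks out. Here is a small sketch that reproduces it; every figure is the comment's own assumption (drive counts, per-drive throughput, the $60k price guess), not a vendor-published number.

```python
# Sketch of the commenter's ROI arithmetic. All inputs are the comment's
# own assumptions, not vendor specs.

SAS_DRIVES = 40            # 300GB 15K drives needed to reach 12TB
SAS_MBPS_PER_DRIVE = 150   # assumed sequential MB/s per drive
SAS_IOPS_PER_DRIVE = 200   # assumed random IOPS per drive
SAS_PRICE_PER_DRIVE = 450  # assumed (flood-inflated) street price

R4_IOPS = 410_000          # ioDrive R4 figure cited in the comment
EST_12TB_PRICE = 60_000    # commenter's extrapolation from the $20k 3.2TB card

sas_bandwidth_gbps = SAS_DRIVES * SAS_MBPS_PER_DRIVE / 1000  # GB/s
sas_iops = SAS_DRIVES * SAS_IOPS_PER_DRIVE
sas_cost = SAS_DRIVES * SAS_PRICE_PER_DRIVE

print(f"SAS array: {sas_bandwidth_gbps:.1f} GB/s, {sas_iops} IOPS, ${sas_cost}")
print(f"SAS IOPS per dollar:   {sas_iops / sas_cost:.2f}")
print(f"Flash IOPS per dollar: {R4_IOPS / EST_12TB_PRICE:.2f}")
```

Running it gives the 6 GB/s, 8,000 IOPS, $18,000, .44 IOPS/$ and 6.8 IOPS/$ figures the comment quotes.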

CaedenV 01/11/2012 3:20 PM

Just realized, I didn't even take into consideration the lessened noise and cooling factor for a setup like this! Data center cooling is insane, and 15K drives are not exactly quiet, especially when you have 40 of them lol. Imagine cooling a data center with a simple/normal AC instead of moving to the Arctic Circle like FB did to help cut their cooling bill.

SSDs choked by crummy disk interfaces: NVMe and SCSI Express Explained

December 13, 2011

This is the complete repost of Chris Mellor’s terrific article from last week:

Gotta be PCIe and not SAS or SATA

By Chris Mellor • Get more from this author

Posted in Storage, 7th December 2011 15:43 GMT

A flash device that can put out 100,000 IOPS shouldn’t be crippled by a disk interface geared to dealing with the 200 or so IOPS delivered by individual slow hard disk drives.

Disk drives suffer from the wait before the read head is positioned over the target track: 11ms for a random read and 13ms for a random write on Seagate’s 750GB Momentus. Solid state drives (SSDs) do not suffer from this lag, and PCIe flash cards from vendors such as Fusion-io have shown how fast NAND storage can be when directly connected to servers, meaning 350,000 and more IOPS from its ioDrive 2 products.

Generation 3 PCIe delivers 1GB/sec per lane, with a 4-lane (x4) gen 3 PCIe interface shipping 4GB/sec.
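The per-lane rates scale simply with generation and lane count. A minimal sketch (approximate usable per-direction rates; gen 1/2 use 8b/10b line coding, gen 3 moved to the leaner 128b/130b):

```python
# Approximate usable per-direction PCIe throughput by generation and
# lane count. Figures are the commonly quoted effective rates, not the
# raw gigatransfer numbers.

GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 1.0}  # GB/s per lane, per direction

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """One-direction bandwidth in GB/s for a gen-N xM link."""
    return GBPS_PER_LANE[gen] * lanes

print(pcie_bandwidth(3, 4))   # gen3 x4: the 4GB/sec figure above
print(pcie_bandwidth(2, 8))   # gen2 x8: the cap the forum comment cites
```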

You cannot hook an SSD directly to such a PCIe bus with any standard interface.

You can hook up virtually any disk drive to an external USB interface or an internal SAS or SATA one, and the host computer’s O/S will have standard drivers that can deal with it. Ditto for an SSD using these interfaces, but the SSD is sluggardly. To operate at full speed, and so deliver data fast and help keep a multi-core CPU busy, it needs an interface to a server’s PCIe bus that is direct and not mediated through a disk drive gateway.

If you could hook an SSD directly to the PCIe bus you could dispense with an intervening HBA that requires power, and slows down the SSD through a few microseconds added latency and a hard disk drive-connectivity based design.

There are two efforts to produce standards for this interface: the NVMe and the SCSI Express initiatives.


NVMe

NVMe, standing for Non-Volatile Memory express, is a standards-based initiative by some 80 companies to develop a common interface. An NVMHCI (Non-Volatile Memory Host Controller Interface) work group is directed by a multi-member Promoter Group of companies – formed in June 2011 – which includes Cisco, Dell, EMC, IDT, Intel, NetApp, and Oracle. Permanent seats in this group are held by these seven vendors, with six other seats held by elected representatives from amongst the other work group member companies.

It appears that HP is not an NVMe member, and most if not all NVMe supporters are not SCSI Express supporters.

The work group released a v1.0 specification in March this year, and details can be obtained at the NVM Express website.

A white paper on that site says:

The standard includes the register programming interface, command set, and feature set definition. This enables standard drivers to be written for each OS and enables interoperability between implementations that shortens OEM qualification cycles. … The interface provides an optimised command issue and completion path. It includes support for parallel operation by supporting up to 64K commands within an I/O Queue. Additionally, support has been added for many Enterprise capabilities like end-to-end data protection (compatible with T10 DIF and DIX standards), enhanced error reporting, and virtualisation.

The standard has recommendations for client and enterprise systems, which is useful as it means it will embrace the spectrum from notebook to enterprise server. The specification can support up to 64,000 I/O queues with up to 64,000 commands per queue. It is multi-core in scope: each processor core can implement its own queue. There will also be a means of supporting legacy interfaces, meaning SAS and SATA, somehow.
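The per-core queue idea is the heart of the design: each core submits I/O on its own queue, so cores never fight over a shared lock. A conceptual toy model (the class and names here are illustrative, not any real driver API):

```python
# Toy illustration of the NVMe queueing model: one submission queue per
# CPU core, within the spec's 64K-queue / 64K-commands-per-queue limits.
# Purely conceptual; not a real driver interface.

from collections import deque

class NvmeLikeDevice:
    MAX_QUEUES = 64_000       # spec ceiling quoted above
    MAX_QUEUE_DEPTH = 64_000  # commands per queue

    def __init__(self, num_cores: int):
        assert num_cores <= self.MAX_QUEUES
        # one submission queue per core, as the standard allows
        self.submission_queues = [deque() for _ in range(num_cores)]

    def submit(self, core: int, command: str):
        q = self.submission_queues[core]
        assert len(q) < self.MAX_QUEUE_DEPTH
        q.append(command)  # no cross-core locking needed

dev = NvmeLikeDevice(num_cores=8)
dev.submit(0, "READ lba=0 len=8")    # core 0 issues on its own queue
dev.submit(3, "WRITE lba=128 len=8") # core 3 likewise, no contention
print(len(dev.submission_queues))    # 8 independent queues
```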

A blog on the NVMe website discusses how the ideal is to have an SSD with a flash controller chip, a system-on-chip (SoC), that includes the NVMe functionality.

What looks likely to happen is that, with comparatively broad support across the industry, SoC suppliers will deliver NVMe SoCs, O/S suppliers will deliver drivers for NVMe-compliant SSD devices, and then server, desktop and notebook suppliers will deliver systems with NVMe-connected flash storage, possibly in 2013.

What could go wrong with this rosy outlook?

Plenty; this is IT. There is, of course, a competing standards initiative called SCSI Express.

SCSI Express

SCSI Express uses the SCSI protocol to have SCSI targets and initiators talk to each other across a PCIe connection; very roughly it’s NVMe with added SCSI. HP is a visible supporter of it, with a SCSI Express booth at its HP Discover event in Vienna, and support at the event from Fusion-io.

Fusion said its “preview demonstration showcases ioMemory connected with a 2U HP ProLiant DL380 G7 server via SCSI Express … [It] uses the same ioMemory and VSL technology as the recently announced Fusion ioDrive2 products, demonstrating the possibility of extending Fusion’s Virtual Storage Layer (VSL) software capabilities to a new form factor to enable accelerated application performance and enterprise-class reliability.”

The SCSI Express standard “includes a SCSI Command set optimised for solid-state technologies … [and] delivers enterprise attributes and reliability with a Universal Drive Connector that offers utmost flexibility and device interoperability, including SAS, SATA and SCSI Express. The Universal Drive Connector also preserves legacy investments and enables support for emerging storage memory devices.”

An SNIA document states:

Currently ongoing in the T10 committee is the development of SCSI over PCIe (SOP), an effort to standardise the SCSI protocol across a PCIe physical interface. SOP will support two queuing interfaces – NVMe and PQI (PCIe Queuing Interface).

PQI is said to be fast and lightweight. There are proprietary SCSI-over-PCIe products available from PMC, LSI, Marvell and HP but SCSI Express is said to be, like PQI, open.

The support of the NVMe queuing interface suggests that SCSI Express and NVMe might be able to come together, which would be a good thing and prevent the industry working on separate SSD PCIe-interfacing SoCs and operating system drivers.

There is no SCSI Express website, but HP Discover in Vienna last month revealed a fair amount about SCSI Express, which is described in a Nigel Poulton blog.

He says that a 2.5-inch SSD will slot into a 2.5-inch bay on the front of a server, for example, and that “[t]he [solid state] drive will mate with a specially designed, but industry standard, interface that will talk a specially designed, but again industry standard, protocol (the protocol enhances the SCSI command set for SSD) with standard drivers that will ship with future versions of major Operating Systems like Windows, Linux and ESXi”.

HP SCSI Express card from HP Discover in Vienna

Fusion-io 2.5-inch, SCSI Express-supporting SSDs plugged into the top two ports in the card pictured above. Poulton says these ports are SFF 8639 ones. The other six ports appear to be SAS ports.

A podcast on HP social media guy Calvin Zito’s blog has two HP staffers at Vienna talking about SCSI Express.

SCSI Express productisation

SCSI Express productisation, according to HP, should occur around the end of 2012. We are encouraged (listen to podcast above) to think of HP servers with flash DAS formed from SCSI Express-connected SSDs, but also storage arrays, such as HP’s P4000, being built from ProLiant servers with SCSI Express-connected SSDs inside them.

This seems odd, as the P4000 is an iSCSI shared SAN array: why would you want to get data at PCIe speeds from the SSDs inside to its x86 controller/server, and then ship it across a slow iSCSI link to other servers running the apps that need the data?

It only makes sense to me if the P4000 is running the apps needing the data as well, if the P4000 and app-running servers are collapsed or converged into a single (servers + P4000) system. Imagine HP’s P10000 (3PAR) and X9000 (Ibrix) arrays doing the same thing: HP’s Converged Infrastructure ideas seem quite exciting in terms of getting apps to run faster. Of course this imagining could be just us blowing smoke up our own ass.

El Reg’s takeaway from all this is that NVMe is almost a certainty because of the weight and breadth of its backing across the industry. We think it highly likely that HP will productise SCSI Express, with support from Fusion-io and that, unless there is a SCSI Express/NVMe convergence effort, we’re quite likely to face a brief period of interface wars before one or the other becomes dominant.

Concerning SCSI Express and NVMe differences, EMC engineer Amnon Izhar said: “On the physical layer both will be the same. NVMe and [SCSI Express] will be different transport/driver implementations,” implying that convergence could well happen, given sufficient will.

Our gut feeling is that PCIe interface convergence is unlikely, as HP is quite capable of going its own way; witness the FATA disks of recent years and also its individual and admirably obdurate flag-waving over Itanium. ®


November 2, 2011

FY2011 revenue = $197.2 million

The company now expects full-year revenue of $381.5 million, growth of about 55 percent. It had earlier forecast 40 percent growth.

Fusion-io ioDrive2: FIRST LOOK for THE REGISTER

October 4, 2011

Fusion-io deploys PCIe flash toaster

Self-healing powers claimed if you play the magic card

By Chris Mellor • Get more from this author

Posted in Storage, 4th October 2011 10:13 GMT

Fusion-io has refreshed the whole of its ioDrive product range with smaller flash chip dies and new controller firmware to produce high performance, longer lasting flash using less silicon.

CEO David Flynn said the current ioDrive technology was introduced four years ago and Fusion was now “introducing something that will toast it. This thing is a beast”.

The ioDrives are PCIe-connected cards in half-length format and use NAND chips that are either single-level cell (SLC) or 2-bit multi-level cell (MLC), built on a 29nm to 20nm process (2Xnm). We think this is Samsung NAND using a 27nm process. There’s a half-height ioDrive and a full-height ioDrive Duo to choose from.

The 2Xnm NAND is inherently slower than 3Xnm dies, but Fusion says its controller technology, including its firmware, more than compensates for this.

Flynn says Fusion can use the cheapest commodity 2Xnm dies and give them performance and endurance such that second-generation ioDrives out-perform first-generation products. There’s no need, he says, to use enterprise-grade MLC (eMLC).

ioDrive 2

There is a new card design with the flash mounted on up to three daughter cards attached to the base card.

The ioDrive 2 comes in SLC form with capacities of 400GB and 600GB. It can deliver 450,000 write IOPS working with 512-byte data blocks and 350,000 read IOPS. These are whopping great increases, 3.3 times faster for the write IOPS number, over the original ioDrive SLC model which did 135,000 write IOPS and 140,000 read IOPS. It delivered sequential data at 750-770MB/sec whereas the next-gen product does it at 1.5GB/sec, around two times faster.

There is an MLC version of the ioDrive 2 which comes in 365GB, 785GB and 1.2TB capacity points.

The SLC version of the larger format ioDrive 2 Duo has a 1.2TB capacity point and the MLC version 2.4TB.

ioDrive 2 Duo

How fast?

The SLC version of the ioDrive 2 Duo can deliver 900,000 write IOPS and 700,000 read IOPS, at 3GB/sec. The first-generation product did 262,000 and 261,000 respectively, at 1.5GB/sec reading and 1.1GB/sec writing speeds.

Flash product suppliers generally quote IOPS numbers using 4KB data blocks whereas Fusion typically uses 512B blocks. A 4KB block size aligns with flash’s 4KB page size, but 512B blocks require a read, modify, write process. This is not necessary with Fusion-io’s products, according to Flynn.
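The penalty of that read-modify-write cycle is easy to quantify. A minimal sketch, assuming a conventional controller that must rewrite a whole 4KB page for any sub-page write (the exact mechanism inside any given controller will differ):

```python
# Why sub-page writes hurt on an ordinary flash controller: a 512-byte
# write to a 4KB page forces the controller to read the old page, merge
# the new bytes, and write the whole page back. Fusion-io claims its
# design avoids this; the figures below show the conventional penalty.

PAGE = 4096  # flash page size in bytes

def bytes_moved(write_size: int, rmw: bool) -> int:
    """Total bytes the controller touches to service one host write."""
    if write_size >= PAGE or not rmw:
        return write_size
    return PAGE + PAGE  # read the old page, then write the merged page

print(bytes_moved(512, rmw=True))   # 8192 bytes moved for a 512B write
print(bytes_moved(4096, rmw=True))  # 4096: aligned writes pay no penalty
print(8192 / 512)                   # 16x amplification on small writes
```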

We have one set of 4KB block IOPS numbers from Fusion: the 1.2TB, SLC ioDrive 2 Duo does 503,000 read IOPS with 4KB blocks and 664,000 write IOPS.

Flynn emphasises that the performance comes at low queue depth. Typically, he says, suppliers quote performance at a high queue depth so that the parallelism in a product’s controllers can really get the data moving. But real-world workloads run at lower queue depths, because flash responds so quickly. Here Fusion’s products speed along while competing ones, typically composed, he says, of RAID controllers, SandForce controllers and flash dies, start limping.

He also asserts that Fusion’s products perform very well with mixed read/write workloads, whereas other products show a bathtub effect: high numbers for pure reads and writes but much lower ones for mixed workloads.

A Fusion spokesperson said: “The cards are only now going through performance tuning. In addition, we expect Intel’s new Sandy Bridge processors to have a major beneficial impact since our design is built to leverage system processor improvements.”

Better availability

Flynn said that MLC flash products make up about 80 per cent of Fusion’s business. It actually started shipping 2Xnm-class dies some months ago in the ioDrive Octal product, a custom product generally sold direct to large customers; where 3Xnm devices could hold 5.12TB of data, 2Xnm-class kit now stores up to 10TB.

The first-generation product had N+1 redundancy at the chip level. Flynn said that the next-gen product is self-healing. Its Adaptive FlashBack technology provides full chip level fault tolerance, which enables an ioDrive to repair itself after a single chip or a multi chip failure without interrupting business continuity. The repair process takes about an hour: “It self-heals to the point where it covers for subsequent failures ad infinitum. We don’t believe that customers should have to service anything.”

This idea that customers should not have to replace modules reminds us of XIO’s Hyper ISE, that sealed canister of drives with a 5-year warranty against customers ever needing an engineer to poke around inside it and replace failed components.

Flynn said Fusion-io has added endurance extending technologies to the ioDrive 2 products. There are no published endurance numbers, though we expect, given Fusion’s OEM customers, that endurance is good. Flynn said that the endurance has increased with the ioDrive 2 products.

Fusion’s competition

There are several competitors who have been waiting for this Fusion refresh: Micron, OCZ, STEC, TMS and Virident are the main ones. They use a 4KB block size for their IOPS numbers.

All we can compare are raw numbers, and hope the numbers are divergent enough to indicate meaningful relationships, even though it’s an apples and oranges comparison, apart from the 4KB numbers for the ioDrive 2 Duo SLC product.

Micron’s rocket-like P320h, a 3Xnm SLC product, does 750,000 read IOPS and 341,000 write IOPS, with 3GB/sec read and 2GB/sec write bandwidths. Its read IOPS figure is significantly higher than the 503,000 of Fusion’s ioDrive 2 Duo SLC, but its write IOPS figure is significantly lower than the Fusion product’s at 4KB. With 512B blocks, the Fusion product is almost twice as fast on write IOPS though, while being a mere 50,000 IOPS slower on reads and overall faster on bandwidth. The medal goes to Fusion overall then.

Fusion-io’s ioDrive Octal

OCZ’s VeloDrive PCIe numbers are simply incomprehensible: for example, read IOPS are expressed in MB/sec for hardware RAID, while software RAID speed is quoted for compressible and incompressible data. We give up. Give the dratted thing a test drive alongside the Fusion product to make a comparison in your own shop.

“This thing is a beast”

STEC’s Kronos BiTurbo PCIe SSA holds up to 3.9TB of MLC and, in SLC form, does 440,000 read IOPS, 400,000 write IOPS and boasts a 4GB/sec bandwidth. The Kronos Turbo on its own does 220,000 read IOPS, 200,000 write IOPS and shifts 2GB/sec. Fusion has that one whipped it seems, apart from the bandwidth number.

TMS’ RamSan-70, based on 3Xnm Toshiba SLC NAND, does 330,000 read IOPS, 600,000 in burst mode, and 400,000 write IOPS to heave 2GB/sec. On the raw number basis Fusion has it beat as well. We note that a CSCS analysis had this TMS product way-outperform a first-generation ioDrive product from Fusion though.

Virident’s SLC TachION delivers a claimed (see CSCS analysis above) 300,000 IOPS with a mixed read/write workload and a peak 1.4GB/sec bandwidth. Fusion appears to have it whipped too.
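The vendor claims scattered through the last few paragraphs are easier to eyeball side by side. A sketch that tabulates the figures exactly as quoted in the article (block sizes differ between vendors, so this remains an apples-to-oranges context table, not a benchmark):

```python
# Vendor-claimed figures as quoted above. Block sizes differ, so treat
# this as context, not a like-for-like benchmark.

cards = {
    # name: (read IOPS, write IOPS, bandwidth GB/s)
    "ioDrive 2 Duo SLC (512B)": (700_000, 900_000, 3.0),
    "ioDrive 2 Duo SLC (4KB)":  (503_000, 664_000, 3.0),
    "Micron P320h (4KB)":       (750_000, 341_000, 3.0),
    "STEC Kronos BiTurbo":      (440_000, 400_000, 4.0),
    "TMS RamSan-70":            (330_000, 400_000, 2.0),
}

# Rank by claimed write IOPS, highest first
for name, (r, w, bw) in sorted(cards.items(), key=lambda kv: -kv[1][1]):
    print(f"{name:28s} {r:>9,} rd  {w:>9,} wr  {bw:.1f} GB/s")
```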

The verdict

The verdict is pretty clear. On headline raw numbers Fusion-io’s ioDrive 2 products generally leave the competition in the dust; the exception is Micron’s P320h on reads, but that card is let down by poor write IOPS numbers.

Of course, all these PCIe cards won’t compete in a uniform PCIe market; it is split into various sectors, each with its own workload and price/performance characteristics.

Fusion is hoping that, with its wide spread of capacity points and performance levels, it can compete in as many of these sectors as possible, while focussing on mainstream pure enterprise business and not the flash web businesses. Its mainstream enterprise customers buy kit from Dell, HP and IBM and look for that level of reliability, performance, value and support.

Our first reaction is that, with this launch, Fusion-io has, in El Reg’s opinion, cemented its position as the PCIe flash card leader.

All the products will ship in November. Prices start from $5,950. ®


June 8, 2011

from SEEKING ALPHA (repost)

Backed by venture firms NEA and Lightspeed, Fusion-io (FIO) markets a next generation storage memory platform that boosts data access speeds. The company plans to raise $209 million in its IPO by offering 12.3 million shares at a proposed price range of $16 to $18; it had originally filed to offer shares at $13 to $15 before boosting the price range by 21% in a sign of strong deal demand.

At the midpoint of the upwardly revised range, Fusion-io would be valued at $1.7 billion. Fusion-io plans to price today (Wednesday) after the market close and list on the NYSE on Thursday under the ticker symbol FIO. Goldman, Sachs & Co., Credit Suisse, Morgan Stanley and J.P. Morgan are the lead underwriters on the deal, which is one of three deals scheduled to price on this week’s US IPO calendar.
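The deal arithmetic above is internally consistent, as a quick check shows (all inputs are the share count and price ranges stated in the article):

```python
# Checking the IPO figures quoted above.

shares_m = 12.3      # million shares offered
low, high = 16, 18   # revised price range
midpoint = (low + high) / 2

proceeds_m = shares_m * midpoint
print(f"Gross proceeds at midpoint: ${proceeds_m:.0f}M")  # the ~$209M figure

old_mid = (13 + 15) / 2  # midpoint of the original $13-$15 range
print(f"Range increase: {(midpoint - old_mid) / old_mid:.0%}")  # the 21% figure
```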


Fusion-io seeks to address what it refers to as the “data supply problem,” or low levels of server utilization caused by the widening gap between processing and storage performance. It markets a data decentralization platform that helps enterprises improve processing capabilities by relocating “active” data from centralized storage to servers, thereby improving processing capabilities by up to 10x and significantly reducing costs. Its platform, which bundles proprietary hardware and software, has been shipped to over 1,500 end users since inception, including companies such as Facebook and Apple (AAPL), as well as OEMs like Dell (DELL), HP (HPQ) and IBM (IBM).


Fusion-io booked $126 million in the nine months ended March 31, 2011, quadrupling the $25 million generated in the year-ago period. Facebook accounted for 47% of revenue, while its ten largest customers accounted for 91%. The company turned profitable with $7 million in EBITDA but remained cash flow negative (-$2 million) due to increasing levels of inventory. It expects to drive further growth by adding software capabilities, deepening customer relationships, growing its sales force and expanding internationally (18%).


Fusion-io has experienced rapid growth throughout its relatively short operating history, highlighting the value of its first-to-market data storage platform. That said, it carries execution risk as a small company with an accumulated deficit of $70 million. Furthermore, most of its business is derived from large-scale data storage installation projects rather than repeat purchases, resulting in highly volatile financial results, a volatility magnified by its high degree of customer concentration. For example, Fusion-io expects revenue to fall sequentially in FY4Q11 following large orders by Facebook in Q3. Lastly, it competes with traditional storage/software vendors, as well as various privately held companies that are developing similar technology.


With a unique product and massive addressable market opportunity, Fusion-io should spark investor interest, especially in the wake of acquisitions of fast-growing storage and networking companies such as Compellent, Isilon, Netezza and 3PAR. Fusion-io may also benefit from its connection with Facebook amidst ongoing buzz in the social media space following Groupon’s (GRPN) recent IPO filing.

Fusion-io raises IPO range: $16 – $18

June 7, 2011

Fusion-io, which offers a next generation storage memory platform that boosts data access speeds, raised the proposed price range for its upcoming IPO on Tuesday. The company now plans to price its 12.3 million share IPO between $16 and $18, up 21% from the previous $13-$15 range. Fusion-io, which was founded in 2005 and booked $136 million in sales for the 12 months ended March 31, 2011, plans to list on the NYSE under the symbol FIO. Goldman, Sachs & Co., Credit Suisse, and Morgan Stanley are the lead underwriters on the deal. It is expected to price during the week of June 6.