HGST to Acquire sTec: FULL PR TEXT HERE

June 26, 2013

HGST to Deepen SSD Capabilities and Expertise with sTec IP and Engineering Talent 

SAN JOSE and SANTA ANA, Calif., June 24, 2013 – Western Digital® Corporation (NASDAQ: WDC) and sTec, Inc. (NASDAQ: STEC) announced today that they have entered into a definitive merger agreement under which sTec, Inc., an early innovator in enterprise solid-state drives (SSDs), will be acquired by HGST, a wholly-owned subsidiary of Western Digital. sTec will be acquired for approximately $340 million in cash, which equates to $6.85 per share. This represents approximately $207 million in enterprise value, net of sTec’s cash as of March 31, 2013.

The pending acquisition augments HGST’s existing solid-state storage capabilities, accelerating its ability to expand its participation in the rapidly growing area of enterprise SSDs. HGST remains committed to its highly successful joint development program with Intel® Corp. and will continue to deliver current and future SAS-based SSD products with Intel.

sTec has strong engineering talent and intellectual property that will complement HGST's technical expertise and capabilities. HGST will continue to support existing sTec® products and collaborate with its customers to understand their future requirements.

“Solid state storage in the enterprise will play an increasingly strategic role in the future of Western Digital,” said Steve Milligan, president and chief executive officer, Western Digital Corporation. “This acquisition is one more building block in our strategy to capitalize on the dramatic changes within the storage industry by investing in SSDs and other high-growth storage products.”

“This acquisition demonstrates HGST’s ongoing commitment to the rapidly growing enterprise SSD segment, where we already have a successful product line,” said Mike Cordano, president, HGST. “We are excited to welcome such a talented team of professionals to HGST, where their inventive spirit will be embraced and encouraged.”

“At this key point in the evolution of the storage industry, sTec is excited to consummate this transaction. It will be an important next step in proliferating many of the innovative products and technologies that sTec has been known for throughout its 23-year history and provides immediate value for our shareholders and a strong future for our employees and customers,” said Mark Moshayedi, president and chief executive officer, sTec. “This merger will enable our world-class engineering team and IP to continue to make a significant contribution to the high-performance enterprise SSD space that has long been sTec’s focus.”

The board of directors of sTec, on the unanimous recommendation of a special committee of independent directors of the board, has unanimously approved the merger agreement and has resolved to recommend that sTec shareholders approve the transaction at a sTec shareholders meeting to be held to approve the merger agreement and the merger. The directors and executive officers of sTec have entered into separate voting agreements under which they have agreed, subject to certain exceptions, to vote their respective shares in favor of the proposed transaction.

Wells Fargo Securities, LLC has acted as the financial advisor to Western Digital and BofA Merrill Lynch has acted as the financial advisor to sTec in connection with this transaction.

Closing of the acquisition, which is subject to customary conditions, is expected to occur in the third or fourth calendar quarter of 2013.


Random Post from Tom’s Hardware: ROI/TCO for PCIe Flash

January 11, 2012


CaedenV 01/11/2012 3:15 PM

If they manage to get 12TB on a single board, running at a fraction of the power of a salvo of 15K SAS drives, then they could charge whatever they want to; that is just mind-blowingly amazing!
It would take 40 300GB 10K or 15K drives to reach 12TB of data. Assuming a throughput of 150MB/s per drive, that would be 6GB/s of sequential read/write performance, which will beat this card (assuming it is a PCIe2 x8 slot, which caps out at 4GB/s of throughput). For IOPS, this card (and even the R4 cards) would easily beat 40 SAS drives at 200 IOPS each (a total of 8,000 IOPS vs the R4’s 410,000 IOPS, and I am sure the R5 is faster). If this is a PCIe3 card with 8GB/s of bandwidth available, then it will be faster still! Plus, when you figure that this single magical card could replace 40 physical HDDs… that’s a lot of power, and a ton of space saved!
Now for price: 40 15K 300GB drives can be found on Newegg for ~$450 each (I am assuming this price is also inflated by the floods, just like the consumer drives are), totaling $18,000. The R4 3.2TB drive starts at $20,000, which means a 12TB drive would be ~$50-60,000, roughly 3x the price of the SAS solution and a rough equivalent in sequential throughput. For IOPS, however (again using the R4 specs, as we do not know the R5’s yet, except that it will be better), you get 0.4 IOPS/$ with the SAS setup and 6.8 IOPS/$ with the R5, for the same amount of storage space but a small fraction of the power usage, and an even smaller fraction of the physical space. I think that says it all for the pro markets: if this is anywhere near $60,000 for 12TB (or even north of $100,000), it would be more than worth the cost compared to the performance gained. Simply amazing!
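His arithmetic holds up. Here is a minimal sketch that reproduces the comparison, using only the figures quoted in the comment above; the ~$60k 12TB price is the commenter’s extrapolation, not a vendor quote.

    # Reproduce the SAS-vs-PCIe-flash comparison from the comment above.
    # All inputs are the commenter's figures, not vendor data.
    drives     = 40        # 40 x 300 GB = 12 TB
    drive_cost = 450       # USD each (flood-inflated street price)
    drive_mbps = 150       # sequential MB/s per drive
    drive_iops = 200       # random IOPS per drive

    sas_cost = drives * drive_cost           # 18,000 USD
    sas_gbps = drives * drive_mbps / 1000    # 6.0 GB/s aggregate sequential
    sas_iops = drives * drive_iops           # 8,000 IOPS

    card_cost = 60_000     # extrapolated 12 TB card price
    card_iops = 410_000    # known R4 figure, standing in for the R5

    print(f"SAS : {sas_cost:,} USD, {sas_gbps:.1f} GB/s, {sas_iops / sas_cost:.1f} IOPS/$")
    print(f"Card: {card_cost:,} USD, {card_iops / card_cost:.1f} IOPS/$")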

CaedenV 01/11/2012 3:20 PM

Just realized, I didn’t even take into consideration the lessened noise and cooling factor for a setup like this! Data center cooling is insane, and 15K drives are not exactly quiet, especially when you have 40 of them, lol. Imagine cooling a data center with a simple/normal AC instead of moving to the Arctic Circle like FB did to help cut their cooling bill.


FLASH UPDATE: Wikibon Repost

January 4, 2012

Wikibon has just published an updated Flash Report — comprehensive — the usual great work we get from Dave Vellante et al.

Read more here:  WIKIBON FLASH UPDATE


SSDs choked by crummy disk interfaces: NVMe and SCSI Express Explained

December 13, 2011

This is the complete repost of Chris Mellor’s terrific article from last week:

Gotta be PCIe and not SAS or SATA

By Chris Mellor

Posted in Storage, 7th December 2011 15:43 GMT


A flash device that can put out 100,000 IOPS shouldn’t be crippled by a disk interface geared to dealing with the 200 or so IOPS delivered by individual slow hard disk drives.

Disk drives suffer from the wait before the read head is positioned over the target track: 11msecs for a random read and 13msecs for a random write on Seagate’s 750GB Momentus. Solid state drives (SSDs) do not suffer from that lag, and PCIe flash cards from vendors such as Fusion-io have shown how fast NAND storage can be when directly connected to servers, meaning 350,000 and more IOPS from its ioDrive2 products.

Generation 3 PCIe delivers 1GB/sec per lane, with a 4-lane (x4) gen 3 PCIe interface shipping 4GB/sec.
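As a quick sanity check on the lane math, a minimal sketch; only the gen 3 rate comes from the article, while the gen 1 and gen 2 per-lane rates are the commonly cited approximate figures.

    # Per-slot PCIe bandwidth is lanes x per-lane rate. The gen 3 figure
    # (1 GB/s per lane) is from the article; gen 1 and gen 2 are the
    # commonly cited approximate post-encoding rates, not from the article.
    GBPS_PER_LANE = {1: 0.25, 2: 0.5, 3: 1.0}

    def slot_bandwidth(gen: int, lanes: int) -> float:
        """One-direction bandwidth of a PCIe slot, in GB/s."""
        return GBPS_PER_LANE[gen] * lanes

    print(slot_bandwidth(3, 4))   # 4.0 GB/s: the x4 gen 3 case above
    print(slot_bandwidth(2, 8))   # 4.0 GB/s: the gen 2 x8 cap cited earlier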

You cannot hook an SSD directly to such a PCIe bus with any standard interface.

You can hook up virtually any disk drive to an external USB interface or an internal SAS or SATA one, and the host computer’s O/S will have standard drivers that can deal with it. Ditto for an SSD using these interfaces, but the SSD is sluggardly. To operate at full speed, and so deliver data fast and help keep a multi-core CPU busy, it needs an interface to a server’s PCIe bus that is direct and not mediated through a disk drive gateway.


If you could hook an SSD directly to the PCIe bus, you could dispense with an intervening HBA that requires power and slows down the SSD with a few microseconds of added latency and a design based on hard disk drive connectivity.

There are two efforts to produce standards for this interface: the NVMe and the SCSI Express initiatives.

NVMe

NVMe, standing for Non-Volatile Memory express, is a standards-based initiative by some 80 companies to develop a common interface. An NVMHCI (Non-Volatile Memory Host Controller Interface) work group is directed by a multi-member Promoter Group of companies – formed in June 2011 – which includes Cisco, Dell, EMC, IDT, Intel, NetApp and Oracle. Permanent seats in this group are held by these seven vendors, with six other seats held by elected representatives from amongst the other work group member companies.

It appears that HP is not an NVMe member, and most if not all NVMe supporters are not SCSI Express supporters.

The work group released a v1.0 specification in March this year, and details can be obtained at the NVM Express website.

A white paper on that site says:

The standard includes the register programming interface, command set, and feature set definition. This enables standard drivers to be written for each OS and enables interoperability between implementations that shortens OEM qualification cycles. … The interface provides an optimised command issue and completion path. It includes support for parallel operation by supporting up to 64K commands within an I/O Queue. Additionally, support has been added for many Enterprise capabilities like end-to-end data protection (compatible with T10 DIF and DIX standards), enhanced error reporting, and virtualisation.

The standard has recommendations for client and enterprise systems, which is useful as it means it will embrace the spectrum from notebook to enterprise server. The specification can support up to 64,000 I/O queues with up to 64,000 commands per queue. It is multi-core CPU in scope: each processor core can implement its own queue. There will also be a means of supporting legacy interfaces, meaning SAS and SATA, somehow.
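To make that queueing model concrete, here is a toy sketch of the per-core submission/completion queue pairing; only the 64K queue and 64K commands-per-queue ceilings come from the spec text quoted above, and every name in the sketch is illustrative rather than from the spec.

    # Toy model of the per-core NVMe queueing described above. Only the
    # 64K queue / 64K commands-per-queue ceilings come from the spec text;
    # names and everything else here are illustrative.
    from collections import deque
    from dataclasses import dataclass, field

    MAX_IO_QUEUES   = 64_000    # spec ceiling on I/O queues
    MAX_QUEUE_DEPTH = 64_000    # spec ceiling on commands per queue

    @dataclass
    class QueuePair:
        """One submission/completion queue pair, owned by a single core."""
        core: int
        submission: deque = field(default_factory=deque)  # host -> device
        completion: deque = field(default_factory=deque)  # device -> host

        def submit(self, command: str) -> None:
            if len(self.submission) >= MAX_QUEUE_DEPTH:
                raise RuntimeError("submission queue full")
            self.submission.append(command)

    # One queue pair per core means no cross-core locking on the I/O path.
    num_cores = 8
    assert num_cores <= MAX_IO_QUEUES
    queues = [QueuePair(core=c) for c in range(num_cores)]
    queues[3].submit("READ lba=0x1000 len=8")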

A blog on the NVMe website discusses how the ideal is to have an SSD whose flash controller chip is a system-on-chip (SoC) that includes the NVMe functionality.

What looks likely to happen is that, with comparatively broad support across the industry, SoC suppliers will deliver NVMe SoCs, O/S suppliers will deliver drivers for NVMe-compliant SSD devices, and then server, desktop and notebook suppliers will deliver systems with NVMe-connected flash storage, possibly in 2013.

What could go wrong with this rosy outlook?

Plenty; this is IT. There is, of course, a competing standards initiative called SCSI Express.

SCSI Express

SCSI Express uses the SCSI protocol to have SCSI targets and initiators talk to each other across a PCIe connection; very roughly, it’s NVMe with added SCSI. HP is a visible supporter, with a SCSI Express booth at its HP Discover event in Vienna and support at the event from Fusion-io.

Fusion said its “preview demonstration showcases ioMemory connected with a 2U HP ProLiant DL380 G7 server via SCSI Express … [It] uses the same ioMemory and VSL technology as the recently announced Fusion ioDrive2 products, demonstrating the possibility of extending Fusion’s Virtual Storage Layer (VSL) software capabilities to a new form factor to enable accelerated application performance and enterprise-class reliability.”

The SCSI Express standard “includes a SCSI Command set optimised for solid-state technologies … [and] delivers enterprise attributes and reliability with a Universal Drive Connector that offers utmost flexibility and device interoperability, including SAS, SATA and SCSI Express. The Universal Drive Connector also preserves legacy investments and enables support for emerging storage memory devices.”

An SNIA document states:

Currently ongoing in the T10 (www.t10.org) committee is the development of SCSI over PCIe (SOP), an effort to standardise the SCSI protocol across a PCIe physical interface. SOP will support two queuing interfaces – NVMe and PQI (PCIe Queuing Interface).

PQI is said to be fast and lightweight. There are proprietary SCSI-over-PCIe products available from PMC, LSI, Marvell and HP but SCSI Express is said to be, like PQI, open.

The support of the NVMe queuing interface suggests that SCSI Express and NVMe might be able to come together, which would be a good thing and would prevent the industry working on separate SSD PCIe-interfacing SoCs and operating system drivers.


There is no SCSI Express website, but HP Discover in Vienna last month revealed a fair amount about SCSI Express, which is described in a Nigel Poulton blog.

He says that a 2.5-inch SSD will slot into a 2.5-inch bay on the front of a server, for example, and that “[t]he [solid state] drive will mate with a specially designed, but industry standard, interface that will talk a specially designed, but again industry standard, protocol (the protocol enhances the SCSI command set for SSD) with standard drivers that will ship with future versions of major Operating Systems like Windows, Linux and ESXi”.

HP SCSI Express card from HP Discover at Vienna

Fusion-io’s 2.5-inch, SCSI Express-supporting SSDs plug into the top two ports of the card pictured above. Poulton says these ports are SFF 8639 ones. The other six ports appear to be SAS ports.

A podcast on HP social media guy Calvin Zito’s blog has two HP staffers at Vienna talking about SCSI Express.

SCSI Express productisation

SCSI Express productisation, according to HP, should occur around the end of 2012. We are encouraged (listen to podcast above) to think of HP servers with flash DAS formed from SCSI Express-connected SSDs, but also storage arrays, such as HP’s P4000, being built from ProLiant servers with SCSI Express-connected SSDs inside them.

This seems odd, as the P4000 is an iSCSI shared SAN array: why would you want to get data at PCIe speeds from the SSDs inside to its x86 controller/server, and then ship it across a slow iSCSI link to other servers running the apps that need the data?

It only makes sense to me if the P4000 is running the apps needing the data as well; that is, if the P4000 and app-running servers are collapsed or converged into a single (servers + P4000) system. Imagine HP’s P10000 (3PAR) and X9000 (Ibrix) arrays doing the same thing: its Converged Infrastructure ideas seem quite exciting in terms of getting apps to run faster. Of course this imagining could be just us blowing smoke up our own ass.

El Reg’s takeaway from all this is that NVMe is almost a certainty because of the weight and breadth of its backing across the industry. We think it highly likely that HP will productise SCSI Express, with support from Fusion-io, and that, unless there is a SCSI Express/NVMe convergence effort, we’re quite likely to face a brief period of interface wars before one or the other becomes dominant.

Concerning SCSI Express and NVMe differences, EMC engineer Amnon Izhar said: “On the physical layer both will be the same. NVMe and [SCSI Express] will be different transport/driver implementations,” implying that convergence could well happen, given sufficient will.

Our gut feeling is that PCIe interface convergence is unlikely, as HP is quite capable of going its own way; witness the FATA disks of recent years and also its individual and admirably obdurate flag-waving over Itanium. ®


Do you know Nigel Poulton?

December 9, 2011

Yes, I know most of you do — for the rest of you — time to get introduced here:

I’ve Seen The Future of SSD Arrays!


Too good to be true? Make me prove it…

April 19, 2010

  

Taking disruptive technology to early adoption is in my DNA — the very fabric of who I am. I’ve had the privilege to engage with great technologists — many of whom remain colleagues today. As many of you know, I am recently on board at Xiotech, parts of which were part of Seagate until a few years ago; Seagate is still an integral part of our ownership and our IP. The messaging in the storage space can be confusing — here are some ideas that may help break through the noise. So here’s your challenge:

 

Q:  What do IBM, HP, Dell, EMC, NetApp, 3Par, Compellent and Xiotech have in common?

A:  We all use disk drives manufactured by Seagate.

No matter which storage vendor you deploy, Seagate drives are at the heart of the technology.  Now let me share with you Xiotech’s Integrated Storage Element (ISE).  We take the very same Seagate technology to the next level — after all, we know these drives better than any other storage vendor.

Example:

PLAN #1:  Deploy 100 $20k storage subsystems of [INSERT YOUR BRAND HERE], each costing 15% of its purchase price per year to keep under maintenance in years 4 & 5 (assuming a 3-year warranty) = $600,000 in maintenance (see the worked math after this list).

PLAN #2:  Acquire Xiotech ISE @ 30% less initial acquisition cost/space/energy to produce the same performance (or better), and pay $0 in hardware maintenance over 5 years.

          or – use the comparable acquisition cost to get 30% more performance from the start

          or – use the NPV of the maintenance to get more performance/density
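Here is a minimal sketch of that five-year math, using the example’s own figures; Plan #2’s 30% discount and $0 maintenance are the claims as stated above, not audited numbers.

    # Five-year hardware cost math for the two plans above, using the
    # example's own figures. Plan #2's 30% discount and $0 maintenance
    # are the claims as stated, not audited numbers.
    units       = 100
    unit_price  = 20_000     # USD per subsystem
    maint_rate  = 0.15       # of purchase price, per year
    maint_years = 2          # years 4 and 5, after the 3-year warranty

    plan1_acq   = units * unit_price                         # 2,000,000
    plan1_maint = int(plan1_acq * maint_rate * maint_years)  # 600,000

    plan2_acq   = int(plan1_acq * (1 - 0.30))                # 30% less up front
    plan2_maint = 0                                          # the ISE claim

    print(f"Plan #1: {plan1_acq + plan1_maint:,} USD over 5 years")  # 2,600,000
    print(f"Plan #2: {plan2_acq + plan2_maint:,} USD over 5 years")  # 1,400,000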

If you’re a former client from FileTek, Agilis, Sensar, Verity, Immersion or Fusion-io, you’ve seen what I’ve seen:  disruptive technology making a significant difference — and Xiotech ISE is no different.

Don’t believe me?  MAKE ME PROVE IT!

 


DESKTOP HPC RECIPE: Start with 1ea OSS-PCIe-SYS-IS-2.4-9270-4

February 20, 2010

How to cook up a desktop HPC server

Ingredients List

Hardware

OSS-PCIe-SYS-IS-2.4-9270-4        1 ea (w/4 AMD 9270 GPUs)
SAS Drives                        4 ea (LSI RAID controller included)
DRAM                              96 GB
ioDrive Duo 640                   4 ea
Xiotech ISE                       2 ea
Xiotech ISE High Perf DataPacs    4 ea

Software

Linux OS
Lustre File System

Someone please cost out this configuration (skeleton below) and pass it along.
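To get that started, here is a skeleton for the cost-out; the quantities come from the ingredients list above, and every unit price is deliberately left blank, since no pricing is given here.

    # Skeleton for costing out the recipe above. Quantities come from the
    # ingredients list; every unit price is a placeholder (None) to be
    # filled in from real quotes -- no pricing is assumed here.
    bom = {
        # part                            (qty, unit price in USD)
        "OSS-PCIe-SYS-IS-2.4-9270-4":     (1, None),
        "SAS drive":                      (4, None),
        "DRAM (96 GB total)":             (1, None),
        "ioDrive Duo 640":                (4, None),
        "Xiotech ISE":                    (2, None),
        "Xiotech ISE High Perf DataPac":  (4, None),
    }

    def total_cost(bill: dict) -> float:
        """Sum qty x price once every placeholder has a real quote."""
        missing = [part for part, (_, price) in bill.items() if price is None]
        if missing:
            raise ValueError("still need quotes for: " + ", ".join(missing))
        return sum(qty * price for qty, price in bill.values())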

Steve