Too good to be true? Make me prove it…

April 19, 2010

  

Taking disruptive technology to early adoption is in my DNA; it is the very fabric of who I am.  I’ve had the privilege to engage with great technologists, many of whom remain colleagues today.  As many of you know, I recently came on board at Xiotech, part of which belonged to Seagate until a few years ago; Seagate remains an integral part of our ownership and our IP.  The messaging in the storage space can be confusing, so here are some ideas that may help break through the noise.  Here’s your challenge:

 

Q:  What do IBM, HP, Dell, EMC, NetApp, 3Par, Compellent and Xiotech have in common?

A:  We all use disk drives manufactured by Seagate.

No matter which storage vendor you deploy, Seagate drives are at the heart of the technology.  Now let me share with you Xiotech’s Integrated Storage Element (ISE).  We take the very same Seagate technology to the next level — after all, we know these drives better than any other storage vendor.

Example:

PLAN #1:  Deploy 100 storage subsystems of [INSERT YOUR BRAND HERE] at $20k each, with maintenance in years 4 & 5 (assuming a 3-year warranty) running 15% of acquisition cost per year = $600,000 in maintenance. 

PLAN #2:  Acquire Xiotech ISE at 30% less initial acquisition cost/space/energy to produce the same performance (or better) and pay $0 in hardware maintenance over 5 years. 

          or – use the comparable acquisition cost to get 30% more performance from the start

          or – use the NPV of the maintenance to get more performance/density
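The arithmetic behind the two plans can be sketched in a few lines. The unit cost, maintenance rate, and 30% figure are taken from the example above; everything derived from them is noted in comments:

```python
# Sketch of the maintenance-cost math from the example above.
# Input figures come from the post; derived values are noted.

units = 100
unit_cost = 20_000            # $20k per subsystem
maintenance_rate = 0.15       # 15% of acquisition cost per year
post_warranty_years = 2       # years 4 & 5, after the 3-year warranty

plan1_acquisition = units * unit_cost
plan1_maintenance = plan1_acquisition * maintenance_rate * post_warranty_years

plan2_acquisition = plan1_acquisition * (1 - 0.30)  # 30% lower acquisition cost
plan2_maintenance = 0                               # $0 hardware maintenance over 5 years

print(plan1_maintenance)   # the $600,000 quoted in PLAN #1
print((plan1_acquisition + plan1_maintenance)
      - (plan2_acquisition + plan2_maintenance))  # total 5-year delta
```

The "or" options above trade that same delta for more performance or density instead of taking it as savings.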

If you’re a former client from FileTek, Agilis, Sensar, Verity, Immersion or Fusion-io, you’ve seen what I’ve seen:  disruptive technology making a significant difference — and Xiotech ISE is no different.

Don’t believe me?  MAKE ME PROVE IT!

 


Consider Element-based Storage to Support Application-centric Strategies

March 29, 2010

What is Element-based storage?

Element-based storage is a new concept in data storage that packages caching controllers, self-healing packs of disk drives, intelligent power/cooling, and non-volatile protection into a single unit to create a building-block foundation for scaling storage capacity and performance. By encapsulating key technology elements into a functional ‘storage blade’, storage capability – both performance and capacity – can scale linearly with application needs. This building-block approach removes the complexity of frame-based SAN management and works in concert with application-specific function that resides in host servers (OSes, hypervisors and applications themselves).
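The building-block idea above can be sketched as simple linear scaling: aggregate capacity and performance grow with the element count. The per-element figures below are purely illustrative placeholders, not published ISE specifications:

```python
# Illustrative linear scaling of element-based storage.
# Per-element capacity and IOPS are hypothetical placeholder numbers,
# not Xiotech ISE specifications.
from dataclasses import dataclass

@dataclass
class StorageElement:
    capacity_tb: float = 16.0   # hypothetical usable capacity per element
    iops: int = 20_000          # hypothetical sustained IOPS per element

def scale(n_elements: int, element: StorageElement = StorageElement()) -> dict:
    """Aggregate capability grows linearly with the number of 'storage blades'."""
    return {
        "elements": n_elements,
        "capacity_tb": n_elements * element.capacity_tb,
        "iops": n_elements * element.iops,
    }

print(scale(4))   # four elements => 4x capacity, 4x IOPS
```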

How are Storage Elements Managed?

Storage elements are managed by interfacing with applications running on host servers (on top of either OSes or hypervisors) and working in conjunction with application function, via either direct application control or Web Services/REST communication. For example, whether running a virtual desktop environment with VMware or Citrix, a highly available database environment with Oracle’s ASM, or database-level replication and recovery with Microsoft SQL Server 2008, the hosts’ OSes, hypervisors, and applications control their own storage through embedded volume management and data movement. The application can communicate directly with the storage element via REST, which is the open-standard technique called out in the SNIA Cloud Data Management Interface (CDMI) specification. CDMI forms the basis for cloud storage provisioning and cloud data movement/access going forward.
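As a sketch of what that REST communication looks like, the snippet below assembles (but does not send) a CDMI-style container-creation request. The element hostname and container name are hypothetical; the headers and media types are those defined in the SNIA CDMI specification:

```python
# Sketch of a CDMI container-creation request, per the SNIA CDMI spec.
# The host and container names here are hypothetical examples.
import json

def build_cdmi_create_container(host: str, container: str) -> dict:
    """Assemble an HTTP PUT that would create a CDMI container on a storage element."""
    return {
        "method": "PUT",
        "url": f"http://{host}/cdmi/{container}/",
        "headers": {
            "X-CDMI-Specification-Version": "1.0",
            "Content-Type": "application/cdmi-container",
            "Accept": "application/cdmi-container",
        },
        "body": json.dumps({"metadata": {}}),
    }

req = build_cdmi_create_container("ise.example.local", "oracle_asm_disks")
print(req["method"], req["url"])
```

An application (or hypervisor plug-in) would issue requests like this to provision and manage storage directly, rather than going through frame-based SAN management.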

The main benefits of the element-based approach are:

  • Significantly better performance – more transactions per unit time, faster database updates, more simultaneous virtual servers or desktops per physical server.
  • Significantly improved reliability – self-healing, intelligent elements.
  • Simplified infrastructure – use storage blades like DAS.
  • Lower costs – significantly reduced opex, especially maintenance and service.
  • Reduced business risk – avoiding storage vendor lock-in by using heterogeneous application/hypervisor/OS functions instead of array-specific functions.

Action Item: Organizations are looking to simplify infrastructure, and an application-centric strategy is one approach that has merit. Practitioners should consider introducing storage elements as a means to support application-oriented storage strategies and re-architecting infrastructure for the next decade.

Rob Peglar is VP of Technology at Xiotech and a Xiotech Senior Fellow.  A 32-year industry veteran and published author, he leads the shaping of strategic vision and emerging technologies, defining future offering portfolios including business and technology requirements, product planning and industry/customer liaison. He is the Treasurer of the SNIA, serves as Chair of the SNIA Tutorials, as a Board member of the Green Storage Initiative and the Solid State Storage Initiative, and as Secretary/Treasurer of the Blade Systems Alliance.  He has extensive experience in storage virtualization, the architecture of large heterogeneous SANs, replication and archiving strategy, disaster avoidance and compliance, information risk management, and distributed cluster storage architectures, and is a sought-after speaker and panelist at leading storage and networking-related seminars and conferences worldwide.  He was one of 30 senior executives worldwide selected for the Network Products 2008 MVP Award.

Prior to joining Xiotech in August 2000, Mr. Peglar held key technology specialist and engineering management positions over a ten-year period at StorageTek and at their networking subsidiary, Network Systems Corporation. Prior to StorageTek, he held engineering development and product management positions at Control Data Corporation and its supercomputer division, ETA Systems.

Mr. Peglar holds the B.S. degree in Computer Science from Washington University, St. Louis, Missouri, and performed graduate work at Washington University’s Sever Institute of Engineering.  His research background includes I/O performance analysis, queuing theory, parallel systems architecture and OS design, storage networking protocols, clustering algorithms and virtual systems optimization.

repost from WIKIBON: 


Why PCIe-based SSDs Are Important

November 20, 2009

There’s an old expression I like: “Different isn’t better, it’s just different.”

When it comes to SSDs based on a SATA or SAS format, that’s pretty much the case in my view. Yes, there are exceptional products suited for the enterprise, like Pliant and STEC. And yes, there are more conventional items for consumers, like Intel and OCZ (and about 20 others).  And yes, the standard-package 3.5″ form factor of these devices makes them suitable for shared storage as well as for integration into heterogeneous and homogeneous storage environments like you might find in a typical data center.  Embracing these SSDs you will find the usual manufacturers: EMC, NetApp, Sun, and others.  Their use of SSD is evolutionary, easy to digest.

PCIe-based SSDs are very different.  For one thing, they sit on the server system bus right next to the CPU.  This is a direct attached (DAS) model that has numerous advantages for certain types of processing.  We agree that not all PCIe-based SSDs are suitable for all applications — but in terms of applications that can take advantage of bandwidth, throughput, and latency enhancements, these devices are indeed a superior architecture.

There are some challenges:

1)  Not all servers are created equal.  PCIe-based devices require strict adherence to the PCIe specifications at the server level.  Ping me if you want to learn more about why this is critical.

2)  Many servers do not have enough PCIe slots configured appropriately for PCIe devices.  This is especially true when creating HIGH AVAILABILITY (or HA) environments.

3)  Only a very few servers have enough of the right type of slots to be meaningful from a value perspective.  It makes no sense to refresh a server for a PCIe-based SSD if you have to spend 2x or 3x more to get the right slots, power, etc.

4)  Applications may not be optimized for SSD DAS.  No kidding.  The OLTP and DBMS applications that could take the most advantage of SSD DAS are instead optimized for high-latency disk access over networks such as NAS.  These applications are perfectly comfortable sending out thousands or tens of thousands of transaction requests to build up queue depth; the net result is that the CPUs appear very busy but in fact aren’t doing very much.  These limitations are known and well defined.  Over time, application vendors such as Sun, Oracle, and Microsoft will implement fixes to optimize for PCIe-based storage.
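The queue-depth point above is just Little’s Law: sustained IOPS equal outstanding I/Os divided by per-I/O latency. The latency figures in this sketch are illustrative orders of magnitude, not measurements of any particular device:

```python
# Little's Law sketch: IOPS = outstanding I/Os / per-I/O latency.
# Latency values are illustrative orders of magnitude, not measurements.

def queue_depth_needed(target_iops: float, latency_s: float) -> float:
    """Outstanding I/Os required to sustain target_iops at a given latency."""
    return target_iops * latency_s

disk_latency = 0.005        # ~5 ms for a networked disk access
pcie_ssd_latency = 0.00005  # ~50 us for a PCIe flash access

qd_disk = queue_depth_needed(100_000, disk_latency)      # hundreds of I/Os in flight
qd_ssd = queue_depth_needed(100_000, pcie_ssd_latency)   # a handful in flight

print(qd_disk, qd_ssd)
```

This is why an application tuned to keep huge queues outstanding against slow disks doesn’t automatically exploit a low-latency PCIe device: the same throughput needs orders of magnitude less concurrency.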

Aside from these items, there is a broader discussion regarding the suitability of NAND flash devices in the data center, as well as the MLC/SLC issue.  I’ll tackle those in another post.  In my view, MySpace and Wine.com are leading the way, and there are many others who have not come forward publicly, preferring to keep the ROI and GREEN advantages all to themselves.

The latest announcements from Fusion-io, Texas Memory Systems, Micron and others point out these differences.  FULL DISCLOSURE:  I am a former employee of Fusion-io.