SSD Update: this is the one session you MUST ATTEND at Percona MySQL in April

March 20, 2012

MySQL and SSD: usage and tuning

In this talk, Vadim Tkachenko (Percona CTO) will cover solid-state drive internals and how they affect database performance.
He will present I/O-level benchmarks for SATA (Intel 320 SSD) and PCI-e (Fusion-io, Virident) cards
to show absolute performance and give an idea of performance per dollar.
Finally, he will cover how you can use MySQL and Percona Server with SSDs,
which tuning parameters are most important, and what performance you can expect in real
production usage.
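As a taste of the tuning side, here is a rough Python sketch (using the pymysql client; the connection details are placeholders, and the variable list is illustrative only, since which parameters matter most depends on your MySQL or Percona Server version) that inspects a few InnoDB settings commonly discussed for SSD-backed servers:

```python
# Sketch: inspect a few InnoDB settings often discussed for SSD-backed MySQL.
# Connection details are placeholders; which variables matter is version-dependent.
import pymysql

SSD_RELATED = [
    "innodb_io_capacity",             # background I/O budget; often raised for SSDs
    "innodb_flush_method",            # O_DIRECT is common on SSD-backed servers
    "innodb_flush_log_at_trx_commit", # durability vs. write latency trade-off
    "innodb_adaptive_flushing",       # smooths background flushing
]

conn = pymysql.connect(host="127.0.0.1", user="root", password="secret", database="mysql")
try:
    with conn.cursor() as cur:
        for name in SSD_RELATED:
            cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
            row = cur.fetchone()
            if row:
                print(f"{row[0]} = {row[1]}")
finally:
    conn.close()
```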

Track: Utilizing Hardware

Experience level: Beginner
REGISTER HERE:  http://www.percona.com/live/mysql-conference-2012/
Note from Steve:  This show is a MUST for anyone thinking about solid state memory extensions or SSDs.  Just take a look at the speakers, sponsors, and exhibitors!

First Look: Nimbus Data E-Class (Foskett Repost)

January 31, 2012

Stephen Foskett wrote up a terrific piece on Nimbus Data’s new E-Class — you can read the full text at this link:  http://blog.fosketts.net/2012/01/31/nimbus-eclass-big-redundant-allflash-enterprise-array/

(summary below)

Today’s announcement of the E-Class storage array is an important milestone for Nimbus Data and solid-state storage in the enterprise. Until now, most solid-state storage arrays have been fairly small-scale, focused on point performance rather than enterprise-wide capacity. But the E-Class, which scales to 500 TB and sports a redundant, multi-protocol interface, is the first all-flash array to go toe to toe at the top of the market.

Stephen’s Stance

Nimbus has always been an interesting company, with a longer history in the storage world than most startups. Their switch to an all-flash architecture was perfectly timed with the market shift, and the new E-Class comes at just the right moment. Boasting 500 TB of maximum capacity, a fully redundant “dual active” controller architecture, massive performance (even InfiniBand), and a complete feature set (once VAAI is released), Nimbus may have a hit on their hands.


Wikibon Repost: Is Virident Poised for Cloud?

December 15, 2011

Turning IT Costs into Profits: New Storage Requirements Emerge for NextGen Cloud Apps

Last Update: Dec 14, 2011 | 06:39
Originating Author: David Vellante

One of the more compelling trends occurring in the cloud is the emergence of new workloads. Specifically, many early cloud customers focused on moving data to the cloud (e.g. archive or backup) whereas in the next twelve months we’re increasingly going to see an emphasis on moving applications to the cloud; and many will be business/mission critical (e.g. SAP, Oracle).

These emergent workloads will naturally have different storage requirements and characteristics. For example, think about applications like Dropbox. The main storage characteristic is cheap and deep. Even Facebook, which has much more complex storage needs and heavily leverages flash (e.g. from Fusion-io), is a single application serving many tenants. In the case of Facebook (or say Salesforce), it has control over the app, end-to-end visibility and can tune the behavior of the application to a great degree.

In 2012, cloud service providers will begin to deploy multitenant, multi-app environments enabled by a new type of infrastructure. The platform will not only provide block-based storage but will also deliver guaranteed quality of service (QoS) that can be pinned to applications. Moreover, the concept of performance virtualization, where a pool of performance (not capacity) is shared across applications by multiple tenants, will begin to see uptake. The underpinning of this architecture will be all-flash arrays.
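To make the performance-virtualization idea concrete, here is a minimal Python sketch (the names and numbers are hypothetical, not any vendor's API) of a shared IOPS pool from which per-application reservations are carved, independent of capacity:

```python
# Illustrative sketch of "performance virtualization": a shared pool of IOPS
# (not capacity) with per-application reservations. Purely hypothetical;
# real arrays expose this through their own management interfaces.
class PerformancePool:
    def __init__(self, total_iops):
        self.total_iops = total_iops
        self.reservations = {}          # app name -> reserved IOPS

    def reserve(self, app, iops):
        committed = sum(self.reservations.values())
        if committed + iops > self.total_iops:
            raise ValueError(f"pool exhausted: {committed} of {self.total_iops} IOPS already reserved")
        self.reservations[app] = iops

    def headroom(self):
        return self.total_iops - sum(self.reservations.values())


pool = PerformancePool(total_iops=500_000)   # aggregate IOPS of the all-flash array
pool.reserve("tenant-a/oracle", 150_000)     # mission-critical OLTP gets a hard floor
pool.reserve("tenant-b/sap", 100_000)
print(pool.headroom())                       # 250000 IOPS left to sell to other tenants
```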

This means that users can expect a broader set of workload types to be run in the cloud. The trend is particularly interesting for smaller and mid-sized customers that want to run Oracle, SAP and other mission-critical apps in the cloud as a way to increase flexibility, reduce CAPEX and share risk, particularly security risk.

The question is who will deliver these platforms. Will it be traditional SAN suppliers such as EMC, IBM, HP, HDS and NetApp, or emerging specialists like SolidFire, Nimbus and Virident? The bet is that while the existing SAN whales will get their fair share of business, especially in the private cloud space, a new breed of infrastructure player will emerge in the growing cloud service provider market with the vision, management, QoS and performance ethos to capture market share and grow with the hottest CSP players.

For data center buyers, where IT is a cost center, the safe bet may be traditional SAN vendors. For cloud service providers, where IT is a profit center, the ability to monetize service levels by providing QoS and performance guarantees on an application-by-application basis will emerge as a differentiator.

Action Item: All-flash arrays will emerge as an enabler for new applications in the cloud. The key ingredient will be not just the flash itself but, more importantly, management functions that make it possible to control performance at the application level. Shops that view IT as a profit center (e.g. cloud service providers) should aggressively pursue this capability and begin to develop new value-based pricing models tied to mission-critical applications.


Fusion-io + Kaminario: WHY IS THIS BLOG ENTRY CRITICAL??

September 13, 2011

…because now FIO owns both sides of the equation:  server-based and shared.  Here is the full text from the FIO BLOG:

Kaminario and Fusion-io: A Perfect Storm for Enterprise-class all Solid-state SAN Storage

Posted: 09/13/2011

According to Wikipedia, a perfect storm is “an expression that describes an event where a rare combination of circumstances will aggravate a situation drastically.” In the case of Kaminario and Fusion-io, which remove the barriers enterprises face in obtaining high-performance, highly available SAN storage they can afford, this is a perfect storm that is long overdue and will revolutionize the way enterprise applications access critical data.

Commenting on the announcement, Jim Dawson, Fusion-io executive vice president worldwide sales, stated, “Kaminario and Fusion-io have combined Fusion’s solid-state technology with high availability and an all solid-state SAN storage architecture to deliver unsurpassed performance and enterprise-grade reliability, at a highly competitive price point.”

So what is so revolutionary about Kaminario’s approach? Aren’t there a number of SAN storage vendors that have incorporated flash technology into their solutions and made these same claims? Part of the answer lies in understanding the problems enterprises face in trying to meet the performance needs of a diverse set of applications with very different data access requirements for I/Os per second (IOPS), latency and throughput. For example, random write-intensive, latency-sensitive workloads such as high-end OLTP or RDBMS applications require a higher level of solid-state storage performance than predictable read-intensive workloads that are less sensitive to latency, such as some analytics and data warehousing applications.

For most enterprises, addressing the performance needs of diverse applications and their data workloads is daunting in its complexity and expense. They are often forced to put their data and databases on more expensive, and sometimes over-provisioned, SANs to meet their high performance needs. Or they use less expensive local storage to gain performance, but give up enterprise-class scalability and availability. Neither situation is manageable from the point of view of application business managers or an IT organization, because in the end neither side of the business gets an all-inclusive, high-performance, highly available and scalable storage solution that it can afford. In addition, both organizations are likely using many more system resources to marginally serve their existing clients, with no growth path.

With its next-generation K2 product family, Kaminario provides enterprises with a solution architected from the ground up as an all solid-state, high-performance, highly available shared SAN storage solution that offers both DRAM and flash solid-state media to match each customer’s application workloads and budget. By incorporating the Fusion ioDrive Duo into Kaminario’s innovative K2 Scale-out Performance Storage Architecture (SPEAR), the two companies have enabled their customers to significantly cut the cost and footprint of legacy SAN storage while delivering substantially greater application performance, all without compromising on enterprise-class high availability or manageability.

Kaminario, working with Fusion-io, has created a positive, market-changing perfect storm for enterprises looking to dramatically improve the performance of high-value applications, at a price that enables them to grow their businesses.

https://www.fusionio.com/blog/kaminario-and-fusion-io-a-perfect-storm-for-enterprise-class-all-solid-state-san-storage/


Nimbus Data = Sustainable Storage

September 1, 2010

Nimbus Data Systems, Inc. develops Sustainable Storage™ systems and software that transform storage efficiency, I/O performance, and IT operations in enterprises and datacenters. Nimbus’ flash storage systems combine the speed and efficiency of NAND flash with the comprehensive software of Nimbus’ HALO operating system to deliver up to 24x greater storage performance and 90% lower energy usage than traditional disk-based arrays. Target workloads:
• Virtualization / VDI
• Databases
• Scientific computing
• Service providers

Event Map:

Hilton New York 1335 Avenue of the Americas on 20/21 September — see you there!


Repost from Storage Channel Pipeline: SSD Arrays

August 26, 2010

Posted by: Eric Slack

The term “SSD” has usually been shorthand for “solid-state disk” drives, as in flash memory modules packaged in traditional disk-drive form factors. This format is perhaps the easiest to integrate into an existing storage environment, either as replacement server-based disk drives or for use in an external disk array. But this packaging adds a SAS or SATA interface to each SSD and, in legacy external arrays, means running solid-state storage devices through a controller designed to support spinning disk. New, dedicated SSD arrays are available that integrate the flash memory modules directly onto cards inside the array and skip the disk form-factor package altogether. Storage Switzerland was briefed by two of these companies at the recent Flash Memory Summit.

Violin Memory’s Memory Array supports up to 10 TB of single-level cell (SLC) flash on hot-swappable internal circuit cards the company calls “VIMMs,” in a 3U enclosure. Card-mounted flash provides better density than drive form-factor SSDs, lowering the cost per gigabyte, and better performance by eliminating the SAS or SATA protocol. Putting more flash modules on each card also improves write performance, since writes can be spread over more flash modules and background overhead can be handled more efficiently.

Nimbus Data Systems’ dedicated flash array also puts flash modules onto hot-swappable circuit cards, or “flash blades.” A 2U array holds 24 of these blades and provides 2.5 TB of capacity. Nimbus’ HALO operating system includes not only standard storage features like snapshots and replication, but also deduplication and thin provisioning. These capabilities give the system a much larger effective capacity and lower its cost per gigabyte.

Solid-state storage is moving into the dedicated array space with some compelling capabilities and price/performance numbers. These “pure flash” SSD arrays deserve a closer look by any VAR that’s serious about storage.

full post here:  http://itknowledgeexchange.techtarget.com/storage-channel-pipeline/ssd-arrays-skip-the-disk-form-factor-package/


Storage Switzerland Repost: Flash Memory Summit Briefing

August 26, 2010

George Crump, Senior Analyst

I sat down with Nimbus Data Systems’ CEO Thomas Isakovich at the Flash Memory Summit to get an update on their Sustainable Storage system. With their S-Class flash storage system, Nimbus Data is delivering potentially the first storage system that is completely solid state and has a full complement of data services. These are capabilities that enterprise customers have become accustomed to, like thin provisioning, snapshots, and replication. To those features they add inline deduplication and compression, yielding a 10:1 storage efficiency ratio that brings a flash-only system closer to cost parity with a mechanical-drive-based storage system.

It has always seemed to me that solid state is a perfect platform for deduplication and compression. While there may be some performance impact from implementing these capabilities, in many environments SSD has storage I/O capability to spare. To the user and the application, the impact of implementing deduplication and compression is often unnoticeable. The payoff is extra storage space on a tier of storage where capacity still comes at a premium. A 10X gain on storage that is already inexpensive, like SATA, is interesting; a 10X gain on storage like solid state is very compelling.
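Back-of-the-envelope arithmetic makes the point; the dollar figures in this Python sketch are invented for illustration only, not Nimbus (or anyone's) pricing:

```python
# Illustrative only: how a data-reduction ratio changes effective $/GB.
# The raw prices below are invented placeholders, not vendor pricing.
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Cost per logical (post-reduction) gigabyte."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 10.00   # hypothetical $/GB for raw flash capacity
sata_raw = 0.50     # hypothetical $/GB for SATA disk capacity

print(effective_cost_per_gb(flash_raw, 10))   # 1.0  -> flash at 10:1 reduction
print(effective_cost_per_gb(sata_raw, 1))     # 0.5  -> disk with no reduction
```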

As you would expect from a solid-state-only storage system, performance is very good. The system can deliver over 1 million IOPS, but does so at a cost point that is within reach of a very broad set of use cases. This could be an ideal platform for desktop and server virtualization projects as well as high-transaction database applications, and it even extends into the HPC market.

This performance also helps in maintaining system integrity. The Nimbus S-Class uses RAID to protect against flash module failure, similar to how a legacy storage system uses RAID to protect against a hard drive failure. The difference is the speed at which the S-Class can rebuild from a failure and the performance impact during that rebuild compared to traditional storage systems. With traditional storage systems you have to wait hours, and in some cases days, for the rebuild to complete, all the while trying to balance rebuild performance against user performance. With the S-Class, rebuilds take about 30 minutes with little to no impact on overall performance and minimal exposure time.

Where Nimbus is clearly focused, though, is on the overall power-savings aspect of their Sustainable Storage system: the amount of power required to deliver an IOP. In high-performance storage systems that need hundreds of disk drives to meet performance demands, the cost to power and cool those systems can be staggering. The S-Class delivers up to 6,000 IOPS per watt and 675,000 IOPS per floor tile.
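Taking those two quoted figures at face value, a quick calculation shows what they imply about power draw per floor tile:

```python
# Quick sanity check on the quoted efficiency figures (taken at face value).
iops_per_watt = 6_000
iops_per_tile = 675_000

watts_per_tile = iops_per_tile / iops_per_watt
print(watts_per_tile)   # 112.5 watts per floor tile at the quoted rates
```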

Storage Switzerland’s Take

This is the first storage system we have seen with a complete complement of data services. Its use of deduplication and compression to bring solid state closer to price parity with mechanical drives makes it a compelling tier 1 storage solution for data centers that are struggling with I/O-bound workloads.

full post here:  http://www.storage-switzerland.com/Blog/Entries/2010/8/19_Nimbus_Sustainable_Storage.html