At the start of each year, many companies – Xiotech included – have national meetings with their public-facing employees. These meetings are colloquially referred to as ‘kickoff’ events, which implies something new, a fresh start, the beginning of another game. While there is certainly a lot of truth in that, kickoff is also a good time to revisit topics which may have faded from memory but are in truth even more relevant today. One such topic is storage performance.
The last decade – football game, if you like – was dominated by one ‘team’, that being capacity. Growth of capacity, management of that growing capacity, backup of that capacity, replication and protection of that capacity, all things capacity. Enterprises couldn’t get enough storage, quickly enough, to meet their data growth needs. They were constantly running out of storage. So, they grew and grew and grew, and spent and spent and spent. Towards the end of the decade, predictably enough, some enterprises turned their focus towards data reduction after many years of data growth.
The other team – performance – took many hits over the years as capacity dominated. Performance made a few yards every now and then, but mostly had to punt, as capacity won the day-to-day battle. However, in this decade, performance has the ball and is driving, as capacity is reduced. I personally believe performance – once the dominant criterion for storage, when datasets were small – is making a big comeback.
Performance learned many lessons over the last decade. Today, performance is not merely about IOPS, throughput or response time – the traditional three aspects. It’s about ratios: the three aspects of performance per unit of input. Now, what’s “input”, you say? To an enterprise, in particular IT, inputs are simple: money, time, space, power, cooling, and humans (which require money), bringing us full circle.
So, it’s not about IOPS, it’s about IOPS/$, IOPS/watt, (IOPS × TB)/watt, and other ratios. Measuring storage performance using ratios turns out to be the most useful technique for enterprises to evaluate their storage; after all, CFOs measure business performance using ratios. I believe CIOs should measure IT performance in general, and storage performance in particular, using ratios. On the compute side, we often refer to servers not by how many virtual machines they can merely hold, but by how many they can run efficiently while meeting a given SLA. There is a direct business translation between that and what a cloud compute provider would measure, for example. The same should be true for storage.
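To make the point concrete, here is a minimal sketch of the comparison. The arrays and all of their figures are hypothetical, invented purely to illustrate how a ratio can reverse a raw-IOPS ranking:

```python
# Hypothetical figures for two storage arrays -- illustrative numbers only.
arrays = {
    "array_a": {"iops": 200_000, "price_usd": 500_000, "watts": 4_000},
    "array_b": {"iops": 120_000, "price_usd": 200_000, "watts": 1_500},
}

def iops_per_dollar(a):
    """'New performance': IOPS normalized by money spent."""
    return a["iops"] / a["price_usd"]

def iops_per_watt(a):
    """'New performance': IOPS normalized by power drawn."""
    return a["iops"] / a["watts"]

for name, a in arrays.items():
    print(f"{name}: {iops_per_dollar(a):.2f} IOPS/$, "
          f"{iops_per_watt(a):.1f} IOPS/watt")

# array_a wins on raw IOPS (200k vs 120k), yet array_b wins on both
# ratios: 0.60 vs 0.40 IOPS/$, and 80.0 vs 50.0 IOPS/watt.
```

The old measure crowns array_a; the ratios crown array_b. That flip is exactly why the denominator matters.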
Storage devices (including arrays) perform only three essential functions: they store data, move data, and protect data. Inherent in all these essential functions is efficiency. We now know how to store efficiently, as seen in the increasing use of data reduction techniques. Protecting data efficiently is also a well-understood realm, both in terms of data at rest (such as the new RAGS method) and data in flight (using a variety of new and efficient encryption methods). But what of moving data? The ball is now in the red zone – can we move the ball (data) into the end zone, or will we be distracted by the cute cheerleader on the sideline (features licensed by the TB) who looks nice but doesn’t play?
Moving data to applications is the entire game now. The more efficiently data is moved, the more efficiently enterprises can run their workloads. Yes, Virginia, there is a Santa Claus, and his name is performance. But not the old performance, i.e. ‘my array gets more IOPS than your array’ – the new performance, measured in ratios. It’s quite straightforward. Place the attributes where more is better, such as IOPS, GB/sec (yes, I said GB, not MB; MB is old-school), petabytes (yes, I said petabytes; the 2010 decade is the decade of the petabyte in the enterprise) and length of warranty (e.g. at least 5 years), in the numerator, and place attributes where less is better, such as $, rack U, floor tiles, watts, BTUs, and human costs, in the denominator. For example, (IOPS × TB)/$. This particular metric measures cost efficiency for a given capacity footprint under a given workload.
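The numerator/denominator rule above generalizes to any composite metric. A minimal sketch, with hypothetical figures chosen only to show the arithmetic:

```python
# "More is better" attributes multiply into the numerator;
# "less is better" attributes multiply into the denominator.
# All figures below are hypothetical, for illustration only.
from math import prod

def efficiency_score(more_is_better, less_is_better):
    """Composite ratio: product of numerator attrs over product of denominators."""
    return prod(more_is_better) / prod(less_is_better)

# (IOPS * TB) / $ for a hypothetical array:
# 150,000 IOPS, 100 TB of capacity, $300,000 purchase price.
score = efficiency_score([150_000, 100], [300_000])
print(score)  # 50.0 -- i.e., 50 IOPS*TB per dollar
```

The same function handles richer denominators – add watts, rack U, or human cost to the second list – so two arrays can be scored on identical terms before any purchase order is cut.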
Yes, it’s a new year, and we have a new kickoff. Performance has the ball, and is driving. Pay attention to it – because like most things in IT, it’s an old idea, an old notion, an old concept made new again. Performance still matters, because time is still money, and efficiency is the way to save time. After all, there are only 24 hours in a day, and that is the inexorable limit we all battle.