Hullo there Barry W.!
Thanks for the comment on my old post - woke me up out of a year-long hibernation. You succeeded where others have tried and failed... ;^)
Yeah... let's look at realistic benchmarks again - and I agree, it's going to take more than the two of us. And don't hold your breath on EMC joining the SPC anytime soon...
Here's one way of looking at it:
Let me grant that the intent behind the SPC was noble - to have a benchmark that customers could look to as a guideline for their storage performance needs. That said, there are two major aspects I find objectionable about it today, namely:
1. The benchmark itself is very narrow - cache-hostile, it basically counts spindles, and it is not representative of real-life workloads.
2. Governance - because the specific HW configurations are unconstrained, many of the tested systems are highly optimized, short-stroked, and not representative of what customers would buy, which makes apples-to-apples comparisons very difficult (the toy model after this list shows how much that can inflate the headline number). And even though the cost of the configuration is a (weak) measure of how efficiently assets were used, let's face it, no one pays attention to that. Every press release focuses on IOPS, not cost, and that is where customer attention is drawn.
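To make point 1 and the short-stroking part of point 2 concrete, here's a back-of-the-envelope toy model. The per-spindle IOPS figures and the linear seek-reduction assumption are mine, made up purely for illustration - nothing here comes from an actual SPC result:

```python
def estimated_iops(spindles, used_fraction, full_stroke_iops=180, short_stroke_iops=300):
    """Toy model of a cache-hostile random-read benchmark.

    With almost no cache hits, the score is roughly spindle count times
    per-spindle IOPS. Short-stroking (using only a small slice of each
    drive) shortens seeks, pushing per-spindle IOPS toward the upper
    bound. All figures are illustrative assumptions, not measurements.
    """
    per_spindle = short_stroke_iops - (short_stroke_iops - full_stroke_iops) * used_fraction
    return spindles * per_spindle

# Same 300-drive array, two ways to run the test
print(estimated_iops(spindles=300, used_fraction=1.0))  # 54000.0 - full capacity in use
print(estimated_iops(spindles=300, used_fraction=0.1))  # 86400.0 - short-stroked to 10%
```

Same hardware, a 60%-better press release - which is exactly why the raw IOPS number tells you so little on its own.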
So instead of designing the uber-benchmark from first principles, perhaps addressing these deficiencies in the SPC is one way of converging quickly. For example: include measurements for a broader range of workloads - ones that will let the underlying array architecture show its mettle. And demand high asset utilization (say 70%+) for ports, spindles, capacity etc. to discourage jury-rigging configurations for the test.
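Just to be concrete about what such a utilization gate could look like, here's a minimal sketch. The 70% floor, the asset names and the numbers are all illustrative assumptions on my part, not SPC rules:

```python
UTILIZATION_FLOOR = 0.70  # assumed threshold, per the 70%+ suggestion above

def passes_utilization_floor(config):
    """config maps each asset class to (used, installed);
    returns (pass/fail, per-asset utilization ratios)."""
    ratios = {asset: used / installed for asset, (used, installed) in config.items()}
    return all(r >= UTILIZATION_FLOOR for r in ratios.values()), ratios

# Example: a short-stroked, over-ported configuration fails on capacity and ports
ok, ratios = passes_utilization_floor({
    "capacity_tb": (35, 100),   # only 35 of 100 TB actually addressed by the test
    "spindles":    (300, 300),  # every drive participates
    "ports":       (8, 32),     # most front-end ports sit idle
})
print(ok, ratios)  # False {'capacity_tb': 0.35, 'spindles': 1.0, 'ports': 0.25}
```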
This way, the work already done by the SPC can be leveraged, and the meaningfulness of the results can be enhanced. A more complete benchmark with mixed concurrent workloads, backend processes like array replication etc. would be desirable, but will take a lot longer to craft.