Kudos to OSSG! Check out his Storage Benchmark Wiki on Wikibon - awesome work to get us started! Thanks! And to answer your question: absolutely, any of the material I post is fair game to cut and paste into it.
Folks, this is your golden chance to get a neutral vote in! End users especially - but any interested party is welcome to participate.
The couple of people who commented (inch, OSSG, ...) seemed to lean toward an open source effort for benchmark workload generators. I found a post by Jacob Gsoeld on SearchStorage.com that describes some generators, like IOMeter, IOZone and NetBench - should we be checking these out? Are more people interested in the open source approach?
You saw my Postulate #1 last time. Without any further ado....
Postulate #2: No over configuring!
Don't claim to benchmark 25 TB while there is 100 TB in the array. Less is more; minimalism rules! Brownie points for getting nearly as good results with less HW. Our customers are struggling to work through 60% growth per year on a flat budget - efficiency is key here.
A very nice side effect of this is that benchmark comparisons may actually make sense now. Apples to apples is the only way to go. Now constraints like cost, power consumption and floor space can be used as optimizers for picking the right platform. Tricks like short-stroking and artificially increasing spindle count go away.
So with Postulate #1 we get different views of the same HW configuration - cache-friendly, cache-hostile, random, sequential, small block, large block - and with #2 we level the playing field.
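To make the "views" idea a bit more concrete, here is a minimal sketch (all names and parameter values are hypothetical, not from any existing tool) of how those simple workload views could be written down as data, so any generator could consume the same definitions:

```python
# Hypothetical sketch: the simple workload "views" expressed as parameter sets.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    access: str          # "random" or "sequential"
    block_size_kb: int   # I/O transfer size
    read_pct: int        # share of reads, 0-100
    working_set_gb: int  # small set -> cache-friendly, large -> cache-hostile

# The views mentioned above, as concrete (illustrative) parameter sets:
VIEWS = [
    Workload("cache-friendly small-block random", "random", 4, 70, 10),
    Workload("cache-hostile small-block random",  "random", 4, 70, 2000),
    Workload("large-block sequential read",       "sequential", 256, 100, 500),
    Workload("large-block sequential write",      "sequential", 256, 0, 500),
]

for w in VIEWS:
    print(f"{w.name}: {w.access}, {w.block_size_kb} KB, {w.read_pct}% read")
```

The point is just that each view is a handful of numbers, which makes apples-to-apples comparison trivial to verify.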
We still have major hurdles to clear to get to multi-host, dynamic, composite workloads - but I believe that if we start with some well-defined simple workloads, a workbench approach where these can be combined could be possible. Inch, any thoughts?
Or anyone else?
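As a strawman for the workbench idea above, composing could be as simple as assigning each well-defined simple workload a share of the total offered I/O load (again, purely a hypothetical sketch, not any shipping tool):

```python
# Hypothetical sketch: build a composite workload from simple ones by weight.
def composite(parts):
    """parts: list of (workload_name, weight) pairs.
    Returns each workload with its weight normalized to a fraction
    of the total offered I/O load."""
    total = sum(weight for _, weight in parts)
    return [(name, weight / total) for name, weight in parts]

# e.g. an OLTP-plus-backup style mix built from two simple views:
mix = composite([("small-block random", 3), ("large-block sequential", 1)])
print(mix)  # [('small-block random', 0.75), ('large-block sequential', 0.25)]
```

A real workbench would also have to deal with timing, burstiness and multi-host coordination, but normalized weights seem like a reasonable starting point for combining the simple cases.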