By Vernon Felton
I am not a robot. On the whole, this works for me, since I’m partial to procreating and respirating and the like. Sometimes, however, there are drawbacks to being just so much flesh and blood. As a product tester, for instance, your very humanity invariably raises issues of credibility.
If I were a robot, you might trust me more.
I was reminded of this the other day when I posted a picture of a few shocks I’ve been testing for the upcoming Bible of Bike Tests issue, and one reader responded, “Please, God, use an actual dyno or just don’t write the article.”
His point, in essence, is that anything I write about the shocks based on my field testing will be close to meaningless if it’s not, at the very least, accompanied by some form of controlled, standardized lab testing. His comment raises a worthy question: How valid are reviews of bikes and parts if the tests aren’t based on actual figures derived in the lab?
Here are my two cents on the matter. For the record, I’m one of those ass hats who has made a living (for almost two decades now) as a tester of bikes. I am not, however, a robot. Since I’m clearly biased, I’ve also asked Mark Jordan of Fox Racing Shox, Duncan Riffle of RockShox and Noah Sears of MRP to weigh in.
A LOT OF WAYS TO CRAP THE BED
Let’s start by stating the obvious: there are a million ways to screw up a product review. And by “screw up,” I mean write a review that fails to capture just how good, decent or bad the product really is. First, the obvious turds in the punchbowl: gross confounding variables that testers fail to account for or eliminate, and which skew the actual test results.
In plain speak, you can bungle a test of rear shocks by:
(1) Testing the shocks on different frames;
(2) Treating shocks as equivalents when they possess differing damping tunes;
(3) Testing the shocks on different courses from one another;
(4) Testing shocks and, at some point in the testing process, changing something (anything) else on the bike.
At the risk of boring the shit out of you, let’s explore these potential mistakes a bit.
Using different frames is a pretty obvious blunder. Different frames have different leverage ratios/curves and will totally skew your results.
Mistake number two is just as problematic. If you are comparing Brand A’s shock on your test frame against Brand B’s shock, you should be comparing shocks with the same basic damping tunes. If the damping tunes aren’t equivalent, you are basically comparing apples to orangutans.
Mistake number three is obvious, but still easy to screw up. If you aren’t testing on the exact same trail loop, you can’t compare the results of your rides—even if we are only talking about subjective, seat-of-your-pants results.
Mistake number four is probably the easiest mistake to make. Let’s say you’re testing over a two-month period. You start out testing Shock A and you have 28 PSI in your tires. Three weeks later you are testing Shock B and the air pressure in your tubeless wheelset has snuck down to 25 PSI—since that change happens gradually, you don’t even realize it’s occurred, but after a while, every shock you’re testing seems to have better compliance than Shock A. Bottom line—seemingly insignificant changes to your test apparatus (i.e., your bike) will skew test results.
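For the data-minded, the tire-pressure drift can be made concrete with a toy simulation. Everything here is invented for illustration—the numbers, the shocks’ “true” scores, and the simplistic model where a tester’s perceived compliance is part shock, part tire pressure. The point is just the mechanism: an uncontrolled variable that drifts between test sessions gets credited to the product, and can flip the ranking outright.

```python
# Toy illustration (all numbers invented): how a drifting confound --
# tire pressure -- can skew a seat-of-the-pants shock comparison.

def perceived_compliance(shock_quality, tire_psi):
    """Hypothetical subjective score: part shock, part tire pressure.
    Lower pressure feels more compliant, and gets credited to the shock."""
    return shock_quality + (28 - tire_psi) * 0.5

SHOCK_A = 7.0  # hypothetical "true" compliance of Shock A
SHOCK_B = 6.5  # Shock B is actually slightly worse

# Controlled test: both shocks ridden at the same 28 PSI.
controlled_a = perceived_compliance(SHOCK_A, 28)
controlled_b = perceived_compliance(SHOCK_B, 28)

# Sloppy test: pressure drifted from 28 to 25 PSI by the time Shock B went on.
sloppy_a = perceived_compliance(SHOCK_A, 28)
sloppy_b = perceived_compliance(SHOCK_B, 25)

print(controlled_a > controlled_b)  # True -- Shock A correctly wins
print(sloppy_b > sloppy_a)          # True -- the drift flips the ranking
```

Three PSI of drift in this made-up model is worth more than the entire real difference between the two shocks—which is exactly why the better testers obsess over holding everything else constant.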
I’m just skimming the surface here. When your testing consists solely of field testing, you can louse up your review in a million different, less obvious ways. I don’t care who the “test pilot” is…even the most dedicated, vigilant reviewer is going to unwittingly introduce some level of variance into the mix during a drawn-out field-testing exercise.
Testers are, after all, human. What sets the better testers apart from the others is their awareness of this fact. You get results and you question them. You go out and verify that you can repeat those results. If a product suddenly shits the bed, you call the manufacturer and try to figure out why that happened. Was it a fluke or a consistent problem? You call in for a replacement, you give it a go and see if that replacement suffers the same fate.
In other words, you do your due diligence.
SO JUST TEST IT IN THE LAB, ALREADY!
By now you might be saying, “Well, damn, if there are so many ways to bungle product testing in the real world, why don’t you just do your testing in the lab?” In this case, for instance, we could strap those rear shocks to a shock absorber dynamometer (the “dyno”), which would cycle each shock—at the same speeds and temperatures—and measure the shocks’ rebound and compression damping forces. The data would graph nice and neat, and the tests would be consistent and repeatable—the holy grail of accuracy.
That sounds awesome. I don’t, however, ride a dyno. I ride a bike. What we’re trying to convey to readers is how bikes and components perform on the trail. Not in the lab.
I DON’T RIDE A DYNO
There are times when lab data truly trumps the subjective recall of trail testers. If we’re talking frame flex, for instance, I’m far more inclined to believe the data acquired from standardized deflection tests than from the reports of two riders—one of whom is a 145-pound tester who rides light and smooth, and the other of whom is a 220-pound Clydesdale who tacos wheels daily.
I’m not sure, however, that lab testing is the end-all-be-all when it comes to every component. Blind belief in lab results is ridiculous, because it assumes that the tests accurately represent the exact forces and stresses that a product will experience in the real world. Labs lack dust, mud, grit, rainstorms, shitty landings to flat, and all the other things your rear shock, fork or wheelset is going to experience out in the real world.
Data, whether it’s obtained by a dude in baggy shorts or a man in a lab coat, is only as good as the protocol it’s gathered under. Just because you’re doing a test in a lab doesn’t mean you’re doing it right.
Don’t get me wrong: I’m not arguing against lab testing in general or dynos in particular. I’d love to get my hands on a dyno and I think the state of product testing would be better if it included some degree of standardized lab testing. Someday, that may be the case, but our budget (and the budget of every North American magazine and website I know of) doesn’t include a line item for these machines or for outsourcing the testing. Frankly, I wish it did. In the meantime, however, I do have a set of lungs, a pair of legs and a commitment to sweating the details. Trail testing is what you’re getting from us and we take it seriously, as do an awful lot of the other editors at competing magazines and websites.
But, like I said, I’m biased. Here’s what a few people who work on the other side of the fence have to say on the subject….
NOAH SEARS, MRP
“With any product testing, the test ‘quality’ is only as good as the testers providing the feedback. So yeah, field testing with warm bodies can give you subjective impressions that may be incorrect or misinterpretations of what the product is doing. But lab testing isn’t perfect either – specifically in the case of a dynamometer.
Dynos are great to use in evaluating the damping performance of a shock and can help spot hiccups in performance or manufacturing. But testing competing shocks only on a dyno isn’t really gonna tell you which one is the hot ticket. For starters, there are different (and competing) theories on how suspension should work – evidenced by the one million and one variations on linkage found on today’s full-suspension frames.
There are also trail event situations that cannot really be duplicated on a dyno. It’s pretty hard, for instance, to simulate a top-to-bottom, non-stop run on Moab’s relentless Whole Enchilada. Finally, a dyno tells you nothing about the rear shock as a complete package – the usability of its layout, the intuitiveness of the adjustments, its balance of weight and adjustability, or its durability in the field. I suspect on a few recent shock designs the location of the rebound knob was an unfortunate afterthought. Cane Creek’s Climb Switch was revolutionary – previous to that, adjustments of that nature were mostly about adding compression damping. Cane Creek did a great job of rethinking the “climb” setting of a rear shock and made it way more useful for trail riders. A highly independently-adjustable shock is gonna look great on a dyno, but it’s gonna come with a weight penalty – that’s a compromise too. Field testing is crucial to evaluate durability – which is pretty critical on a product like a rear shock that’s asked to work pretty dang hard with minimal maintenance.
All in all, a Roehrig dyno plays a big role in the development, tuning, and production phases of our suspension. But I’d have to say I rely more on the feedback of my field testers. And thus, product reviews from qualified and professional journalists carry more weight with me personally than those of the data-heavy, or should I say Germanic, variety.”
MARK JORDAN, FOX RACING SHOX
“We really believe that you need both dyno and field testing – they go hand in hand. A lot of our testing involves going back and forth between dyno and field testing, but in the end, field testing dictates the final word on new design performance and damping tunes.
Really, there’s no substitute for field testing, but you can sure learn a lot from a dyno in a short period of time, and it can help quantify and explain things. And you can make a dyno cycle the heck out of a product using data acquisition information directly from a trail run rather than asking Greg Minnaar to do 200 runs in a row.”
DUNCAN RIFFLE, ROCKSHOX
“I wouldn’t put all my eggs into either the lab or the field-testing baskets. It all needs to come together.
Being in the field and doing ride testing is an invaluable thing. There are things that happen on a machine – in this case, a dyno – that can never be clearly understood or felt the way they would be in a field test. Now, that being said, not everybody can correctly interpret what they are feeling on the trail and relate that to what is actually, technically happening with their suspension.
The dyno testing and lab testing are also important, but understanding that data and directly relating it to how the product will perform on the trail is not simple at all. Even me, I’m an ex-professional World Cup downhiller and if I looked at a print-out from a dyno, all I could say is, ‘Sweet. What do these numbers mean?’ I can tell you how that shock performs on the trail, I can field test it thousands of times and tell you how it felt, whether it exploded or held up, but could I compare the numbers from dyno tests and tell you how one data set compared to the other? No. Not a lot of people can. There are people who are trained to interpret that data. They have that job title. Companies, like RockShox, hire people who can look at those numbers and do a good job of relating it to how the shock will perform in the field. I’m not saying the lab testing isn’t a valuable thing. I’m just saying that it’s not as clear-cut and simple as it’s sometimes made out to be. Not everyone can do it.
Look, I’m never going to say that testing should be all about field testing or just all about the lab testing—both have their merits—and if someone had the capability to do all of them, fantastic. We’re committed to doing both—that’s why RockShox has engineers in Colorado Springs who literally will pull something off a machine that has just been CNC’d and will put all the parts and oil in it and go out and field test that thing—straight out of the office and onto the trail.”