Joe rides an early-2000s Cannondale Jekyll and wears a helmet that might be from the same era. He's one of those riders who would tear your legs off if he didn't have a 75-degree head angle to contend with.
He's also been grappling with the idea of a new bike and some new kit to go along with it. He asked if he could run a few questions by me. Giving bike advice is one of my favorite pastimes--probably because it's the only type of advice I'm somewhat qualified to deliver. So I happily obliged.
"When it comes to helmets," he said, "are some better than others?"
Are All Bike Helmets Created Equal?
Joe’s question is one that we bike-industry employees and consumers should be discussing right now. We’ll debate everything from frame angles and suspension designs to rubber durometers and how many rides you can go before washing your chamois. Generally speaking, we're a savvy, if slightly fungus-y, bunch.
But when it comes to helmets, we are woefully ignorant. And it's not entirely our fault.
In the current state of things, there is supposedly no protective difference between a featherweight road helmet and a MIPS-loaded all-mountain lid. But of course some helmets are more protective than others. We just don’t have the information we need to start conversations about those differences.
In the U.S., the U.S. Consumer Product Safety Commission (CPSC) is the bureaucratic gatekeeper of helmet safety standards. It doesn’t actually test helmets, though. Testing is left to the helmet manufacturers themselves, some of whom outsource the job to labs. Manufacturers have to keep records of their test data on file for a few years and provide them to the CPSC within 48 hours of a request. It is, in some ways, an honor system.
Eight sample helmets in each size of the model being tested are required to complete the CPSC’s certification. Two are exposed to heat, two to cold, two are immersed in water and two are tested at room temperature. The straps are evaluated for retention: the helmet is attached to a headform, the straps are hooked to an 8.8-pound weight, and the weight is dropped to simulate a force yanking on the straps. The straps have to stay put, not break and not stretch more than 1.2 inches.
Impact attenuation tests measure how much force is transmitted through the helmet. The helmets are attached to a headform with an accelerometer, which is mounted to a monorail. The monorail drops the helmets on a flat anvil from 6.5 feet at 20 feet per second, and on hemispherical and curbstone anvils from 4 feet at 16 feet per second, testing the headgear against a variety of surface shapes. The model fails if any of the samples shows a peak acceleration of more than 300 g. The helmets are also dropped at an angle to ensure they won't roll off during a fall, and must allow at least 105 degrees of peripheral vision.
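For the curious, the numbers in the standard hang together: the stated impact speeds are simply free-fall physics, v = √(2gh). A quick back-of-the-envelope sketch in Python (the function names are mine, not the CPSC's, and this is only an illustration of the arithmetic, not the actual test procedure):

```python
import math

G_FT_S2 = 32.174  # acceleration due to gravity, in ft/s^2

def impact_velocity(drop_height_ft):
    """Theoretical speed (ft/s) at the bottom of a free-fall drop: v = sqrt(2gh)."""
    return math.sqrt(2 * G_FT_S2 * drop_height_ft)

def passes_cpsc(peak_accels_g):
    """Pass/fail per the 300 g ceiling: every sample must stay at or below it."""
    return all(g <= 300 for g in peak_accels_g)

# The standard's drop heights line up with its stated speeds:
print(round(impact_velocity(6.5), 1))  # flat anvil: ~20.5 ft/s
print(round(impact_velocity(4.0), 1))  # hemispherical/curbstone anvils: ~16.0 ft/s
```

Run the math and the 6.5-foot flat-anvil drop works out to roughly 20 feet per second, the 4-foot drop to roughly 16, which is why the standard can quote both a height and a speed for each anvil.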
Those Other Certifications
A CPSC sticker is one of several that you might see inside helmets. There are also certifications from Snell and ASTM, which require slightly higher drops and, in the case of Snell 95, a lower coverage line. The European Committee for Standardization (CEN) permits a lower drop height for its EN 1078 certification, but fails any helmet that allows peak acceleration beyond 250 g. All of these tests date back to the mid-to-late '90s. That's older than Joe's Jekyll.
In addition to age, these certifications share a lack of transparency. None require that manufacturers publish test data, let alone print it on helmet boxes. Without public results, the tests truly become pass/fail.
It’s not hard to see how such a system could distort brands’ priorities away from safety and innovation, incentivizing them to pursue the lightest, most breathable, comfortable, aerodynamic or cost-effective helmet that will pass the test.
A Need for Transparency
There are hopeful signs in the form of proposals, from MIPS and other groups, for Europe's CEN to adopt a rotational-impact test. Such a test would be an opportunity for the industry to standardize--and validate--its various slip-plane designs, which are currently unregulated.
A new testing standard would also be an opportune moment to require that brands publish test results. Granted, raw numbers have the potential to confuse shoppers. There’s also the argument that we don't know enough about the brain, or have enough crash data, to say for sure which results correspond to a more protective helmet. That’s true: When it comes to the brain and brain injuries, there are, in Rumsfeldian terms, probably more unknown unknowns than known unknowns.
Still, requiring that brands publish test data would be a step in the right direction. Access to those numbers would allow us to make comparisons between helmets, and start discussing, with the involvement of brands, what those differences mean. This would make consumers more sophisticated and maybe even push the industry toward more rigorous testing.
Someday, I might even be able to give Joe an honest answer that isn’t “we don’t know.”
Look for more about helmets on Bikemag.com in the near future.
If you want more details on the testing process, Helmets.org, a non-profit helmet-advocacy organization founded in 1989, is a comprehensive resource on standards and testing. Helmetfacts.com presents much of the same info in a more digestible way, and graciously gave us photos for this story. Note that Helmetfacts is operated by Giro's in-house testing lab, The Dome. You can read the CPSC’s description of its testing requirements here.
So far, Leatt is the only brand we know of that’s publishing test data. The South Africans deserve some props for that. You can have a look at the test results for Leatt’s DBX 3.0 half-shell helmet here.