BOSTON - AUGUST 28: Randy Couture reacts after defeating James Toney in the first round of their UFC heavyweight bout at the TD Garden on August 28, 2010 in Boston, Massachusetts. (Photo by Josh Hedges/Zuffa LLC via Getty Images)
Despite the rise of both FightMetric and CompuStrike over the last few years, statistical analysis of MMA is still in its infancy. MMA lacks the large, sortable databases readily available for sports like baseball or basketball, leaving amateurs like myself with crude data points such as result, time, and method.
So, I was excited to see David Williams' piece at Fight Opinion (and I recommend reading the entire piece before continuing here) about what he calls the "9-year rule." Williams' hypothesis suggests that a fighter's performance drops off nine years after his or her professional debut. He explained his method as follows:
To determine exactly when it is that fighters collapse, I need an objective method to measure how well fighters perform over time. Fortunately, I have a great tool to use to do this with SILVA, my statistical analysis system that estimates how good MMA fighters are. SILVA does this objectively by only looking at the wins and losses of a fighter and his opponents. It takes each of the opponents on a fighter's record, and assigns each fight a "Victory Score" based on how good the opponent is. This "Victory Score" is what I'll use to measure the performance of fighters over time.
For this study, I want to look at the collective performance of fighters over time against only the top tier of opponents, or what I define as a "UFC-quality fighter." The reason I do this is to filter out wins against inferior opponents: if a fighter is in the midst of a collapse, nobody is going to be convinced otherwise by a win against a 4-10 opponent on the regional circuit. With the parameters of the study set, I evaluated the careers of over 300 fighters, most of whom have competed in the UFC, to determine how well they perform according to how long they've been competing professionally.
The first thing that jumped out at me was "300 fighters." That's an incredibly small sample. PECOTA, the baseball forecasting system developed for Baseball Prospectus by Nate Silver (more famous for his political blog FiveThirtyEight), compares players against a database of 20,000 major league batter seasons plus 15,000 translated minor league seasons. Three hundred samples represent less than one percent of the PECOTA database.
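The arithmetic behind that comparison is simple enough to spell out (the PECOTA figures are as cited above; the 300-fighter figure is Williams'):

```python
# Rough sample-size comparison. The PECOTA numbers are the ones cited
# in the text; 300 is the size of Williams' fighter sample.
pecota_seasons = 20_000 + 15_000   # MLB batter seasons + translated minor league seasons
silva_fighters = 300

ratio = silva_fighters / pecota_seasons
print(f"{ratio:.2%}")  # 0.86% -- well under one percent
```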
Part of the problem is that MMA at the highest level simply doesn't have the volume necessary to put together a reasonable study based on wins and losses. The UFC roster, from memory, fluctuates between 200 and 250 contracted fighters (that number may be higher with the addition of the bantam- and featherweight divisions). Williams doesn't address the date range he used here, though I would imagine (and hope) that he started in 2000, when commissions adopted the Unified Rules. I would estimate that fewer than 1,000 fighters have fought for the UFC since then.
Williams also doesn't fully explain how his SILVA system works, though it sounds an awful lot like a generic strength-of-schedule calculator, nor does he explain how he arrived at his 300-fighter sample. Was it the top 300 according to "Victory Score"? Was it a random sample of 300 fighters? Were there minimum fight requirements (FightMetric, for example, requires five fights before eligibility for various UFC records)?
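SILVA's internals aren't disclosed, but a generic strength-of-schedule calculator of the kind the description suggests might look like the sketch below. The `victory_score` function and its weighting are entirely hypothetical, my own invention rather than Williams' formula:

```python
# A toy "Victory Score" in the generic strength-of-schedule mold.
# NOT Williams' actual SILVA formula, which he hasn't published.
# Here a win is simply worth the opponent's winning percentage.

def win_pct(record):
    """record is a (wins, losses) tuple."""
    wins, losses = record
    total = wins + losses
    return wins / total if total else 0.0

def victory_score(results):
    """results: list of (won_fight, opponent_record) pairs.
    Wins are credited in proportion to opponent quality;
    losses score zero under this toy scheme."""
    return sum(win_pct(opp) for won, opp in results if won)

# Beating a 10-2 opponent counts for far more than beating a
# 4-10 opponent on the regional circuit.
card = [(True, (10, 2)), (True, (4, 10)), (False, (15, 1))]
print(round(victory_score(card), 3))  # ~0.833 + ~0.286 = 1.119
```

Even a crude scheme like this shows why the sampling questions matter: which fighters (and which opponents) enter the pool drives the score far more than the formula itself.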
There's also no definition of what makes one a "UFC-quality fighter." Again, is there a minimum record required? Is it anyone who received more than X fights in the UFC? Is it the top X% in each weight class by "Victory Score"? And why thin the sample even further in a sport where a career rarely exceeds 40 fights?
So, before we've even looked at the data, there's already a handful of issues that I'd like to see explained or cleaned up on the procedural side.
As for the data, this is how it shakes out:
From this, Williams concludes:
The steepest drop takes place after the fighters measured had been competing professionally for 9 years. At that point, the ability of the fighters to compete against quality competition declines to the same level as when they were relative rookies in the sport. It doesn't mean that the fighters are incapable of winning against good opponents, but their ability to compete at the highest levels of the sport is greatly diminished. This can take root in various ways. Some fighters become much more prone to being knocked out. Some have a slower reaction time. Others start getting injured on a frequent basis. For some, the collapse is psychological: the fighter becomes mentally broken.
Further, the 9-year rule seems to apply regardless of the age of the fighter or how many times he's competed professionally. Ortiz and Arlovski only had fought 17 and 18 times, respectively, when they reached the 9-year mark of their careers, but they've both suffered recent collapses. Meanwhile, to go to the other extreme, Jeremy Horn had competed 91 times when he reached the 9-year mark of his career. Horn went 7-6 in his following 13 fights, including losses to Matt Lindland, Jorge Santiago, and Dean Lister.
The rule seems to defy age as well. The effects of the 9-year rule on Randy Couture are debatable, because he went 5-3 afterwards with the famous win over Tim Sylvia, but given that two of those wins were against James Toney and Mark Coleman, I would argue that the rule applies to him as well. Meanwhile, the rule appears to have affected the careers of two fighters currently in their 20s: Joe Stevenson and Karo Parisyan were each just 25 years old when the 9-year rule took effect. Stevenson is 3-5 since then, and Parisyan is 1-3, with the latter having become known for suffering from severe panic attacks before his fights.
And here we find another issue with the methodology or, at least, with the transparency and disclosure of the data: How many samples do we find in Years 12 and 13? I would struggle to name 50 fighters who fought "UFC-quality" opponents in those years of their careers.
In addition, Williams notes his use of "Victory Score," and I wonder why that wasn't used in place of (or in addition to) a straight winning percentage. I don't even need to mention the arbitrary (and, at times, seemingly random) nature of judging in MMA. But is defeat, even by stoppage, the best measure of one's performance? It's the same problem with judging pitchers by their win-loss record in baseball. You can pitch well, even extraordinarily so, and still come out with the "L." Baseball, however, has peripheral stats -- K/9, BB/9, etc. -- that currently elude MMA.
We may find out that MMA fighters have their greatest success within the first nine years of their career. Or we may find out that their prime falls between ages X and Y. For right now, however, without a larger sample size and without a better understanding of the methodology behind this study, we don't have a conclusive answer either way.