
Saturday, August 30, 2014

The 2014 Games Were Heavier, Higher-Skill and Shorter Than Recent Years

Welcome back for another relatively quick one.  Today I'm going to hit a few highlights of the analysis I've done on the programming at this year's Games.

As the title would suggest, the big point here is that this year's Games were heavy and high-skill.  By contrast with prior years, they were less about stamina, endurance and general fatigue management.

Let's consider the first point.  The chart below shows two key loading metrics for men for all eight CrossFit Games.  For those unfamiliar with these metrics, start here.


The load-based emphasis on lifting (LBEL) is always the first place I look when evaluating how "heavy" a competition was.  It gives us an indication of both the loads that were used as well as how often lifts were prescribed.  The LBEL at this year's Games was 0.89; the next-highest was 0.73 back in 2009.

The high LBEL in 2009 was based partly on the fact that there were two max-effort events out of eight total.  In metcons, the loadings were actually quite light at that time.  But in 2014, the metcons were heavy as well.  The average relative weight in metcons was 1.43; the next-highest was 1.36 in 2013.  For context, a 1.43 relative weight is equivalent to a 193-lb. clean, a 146-lb. snatch and a 343-lb. deadlift.  These are average weights used in metcons; the days of the bodyweight specialist competing at the CrossFit Games are over.
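Those pound equivalences imply a fixed baseline load per movement.  As a rough sketch, the conversion looks like the code below; note that the baseline values are my back-calculation from the numbers above (e.g., 193 / 1.43 ≈ 135 lb for the clean), not published constants from the metric's definition:

```python
# Back out absolute loads from a relative-weight multiplier.
# ASSUMPTION: these per-movement baselines are inferred from the
# equivalences quoted in the post, not the author's official constants.
BASELINES_LB = {"clean": 135, "snatch": 102, "deadlift": 240}

def absolute_load(relative_weight, movement):
    """Convert a relative-weight multiplier to pounds for a given movement."""
    return relative_weight * BASELINES_LB[movement]

print(round(absolute_load(1.43, "clean")))     # ~193 lb
print(round(absolute_load(1.43, "snatch")))    # ~146 lb
print(round(absolute_load(1.43, "deadlift")))  # ~343 lb
```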

The women's numbers tell a similar story.  The LBEL was 0.62, about 30% higher than the previous high of 0.48 in 2009.  The average relative weight in metcons was 1.01, significantly higher than the next-highest (0.88 in 2013).  The 1.01 is on par with the men's loads from 2007-2010.

Not only were the Games programmed heavy, but the athletes are just flat-out getting stronger.  The chart below shows the relative weights achieved during the max-effort lifts in the Games historically. These represent the average across the entire field (except in 2007, when I limited the field to the non-scaled participants only).


Not only was the men's average of 2.71 in the overhead squat well above the previous high of 2.36, but the women's average of 1.80 is higher than the men achieved in the CrossFit Total in 2007 and the Jerk in 2010 (granted, that lift occurred within 90 seconds of the Pyramid Helen workout).

Now, as far as the high-skill comment, consider the types of movements that were emphasized at the Games.  I generally categorize movements into seven broad groups: Olympic-Style Barbell Lifts, Basic Gymnastics, Pure Conditioning, High Skill Gymnastics, Powerlifting-Style Barbell Lifts, KB/DB Lifts and Uncommon CrossFit Movements (sled pulls, object carries, etc.).  The two that require the most technical ability are High Skill Gymnastics (such as muscle-ups and HSPU) and Olympic-Style Barbell Lifts.

This season, Olympic-Style Barbell Lifts accounted for 32% of the total points, which is second all-time (in 2008, they were worth 38%).  The High Skill Gymnastics movements accounted for 17%, which was only topped once (2010).  Combined, those two groups accounted for 49%, which is second all-time.  The only year with a greater emphasis was 2010, which actually included the incredibly challenging ring HSPU.  Still, the sheer volume of high-skill movements required of athletes was far higher this year.  The muscle-up biathlon included 45 muscle-ups; in 2010, "Amanda" was crushing about half the field with just 21 muscle-ups.  The ring HSPU were tough in 2010, but were they more challenging than the 10-inch strict deficit HSPU this year?  Remember, women were only required to do regular HSPU back in 2010, and only 28 of them.  These days, 28 regular HSPU is nothing for the elite women's athletes.

On the flip side, this Games had much less volume than recent years.  The chart below shows the longest event (based on winning time), the approximate average length of all events (including finals) and the approximate total time that athletes competed, dating back to 2011.  It's clear that this year was much less grueling than the past two seasons, and it was very similar to 2011 (including starting with a ~40-minute beach workout).


The theme of this year's Games was strength and skill, not stamina.  Why?  Well, I have to believe television had something to do with it.  This year's events were more spectator-friendly across the board, and they may set the stage for future seasons in which every event is broadcast on cable (ESPN won't be showing a 90-minute rowing workout, that's for sure).  People don't like to watch events that take forever, but they do like to watch people lift heavy stuff and generally perform feats of strength and skill that make you say "I could never do that."

My hope is that the Games can continue to be spectator-friendly without losing the events with that "suck factor" that we in the community know and love.

Thursday, August 21, 2014

Rich Froning's Comeback Could Have Been Even More Amazing (and more scoring system thoughts)

Today will be the first in a series of posts breaking down the 2014 CrossFit Games in more detail.  In the past, I have combined a lot of thoughts into one or two longer posts reviewing the Games (in particular, the programming).  However, this year, due to time constraints from my work and personal life, I'm planning to get the analysis out there in smaller doses; otherwise, it might be another month before my next post.  In fact, this might be the best way to handle things going forward, but we will have to see.  Anyway, let's get moving.

Unlike the past two seasons, Rich Froning did not enter the final day of competition with a commanding lead.  In fact, he didn't even enter the final event with a commanding lead.  All it would have taken was a fifth-place finish by Froning and a first-place finish by Mathew Fraser on Double Grace for Froning to finish runner-up this season.  But what you may not have realized is that it could have been even tighter.

In the Open and the Regionals, the scoring system is simple: add up your placements across all events, and the lowest cumulative total wins.  At the Games, however, the scoring system changes to use a scoring table that translates each placement into a point value.  The athlete with the highest point total wins.  I've written plenty about this in the past (start here if you're interested), but the key difference is this: in the Games scoring system, there is a greater reward for finishes at the very top, and less punishment for finishes near the bottom.  The reason is that the point differential between high places is much higher (5 points between 1st and 2nd) than between lower places (1 point between 30th and 31st).
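To make that structural difference concrete, here is a minimal sketch of both systems in code.  The only table values I take from the post are the gaps quoted above (5 points between 1st and 2nd, 1 point between 30th and 31st); the absolute point values are illustrative, not the official Games table:

```python
# Sketch of the two scoring schemes.  The table entries below are
# ILLUSTRATIVE: only the 1st-2nd gap (5 pts) and 30th-31st gap (1 pt)
# come from the post; the absolute values are placeholders.
games_table = {1: 100, 2: 95, 30: 7, 31: 6}

def regional_total(placements):
    """Regional scoring: sum of placements, lowest total wins."""
    return sum(placements)

def games_total(placements, table):
    """Games scoring: sum of table points, highest total wins."""
    return sum(table[p] for p in placements)

# Moving from 2nd to 1st gains 5 table points but only 1 placement
# point; moving from 31st to 30th gains 1 point under either system.
print(games_table[1] - games_table[2])    # 5
print(games_table[30] - games_table[31])  # 1
print(regional_total([1, 30]))            # 31
print(games_total([1, 30], games_table))  # 107
```

This asymmetry is why an athlete with event wins and a couple of blow-ups fares relatively better under the Games table than under the regional sum-of-places system.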

So you know that small lead Froning had going into the final event?  Well, under the regional scoring system*, he would actually have been trailing going into that event... BY 8 POINTS!  And he would have made that deficit up, because he won the event while Fraser took 11th.  I think it is safe to say that would have been the most dramatic finish to the Games we have seen (I guess Khalipa in 2008 was similar, but there were like 100 people watching, so...).

One reason the scoring would have been so close under this system is that Fraser's performance was remarkably consistent.  His lowest finish was 23rd.  All other athletes had at least one finish 26th or below, and Froning finished lower than 26th twice.  But Fraser also only won one event and had four top 5 finishes.  Froning, on the other hand, won four events and finished second one other time.

I also looked at how the scoring would have turned out under two other scoring systems:
  • Normal distribution scoring table - Similar to the Games scoring table, but the points are allocated 0-100 in a normal distribution.  See my article here for more information.
  • Standard deviation scoring** - This is based on the actual results in each event, rather than just the placement. Points are awarded based on how many standard deviations above or below average an athlete is on each event. More background on that in the article I referenced early on in this post.
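As a rough sketch of how the standard deviation system works, the code below converts one event's results to z-scores.  The event results are made up for illustration, and I assume higher raw scores are better (per the footnote, time-based events would first be converted to rates):

```python
# Standard-deviation scoring sketch: each result becomes a z-score,
# i.e., how many SDs above the field average the athlete finished.
# The results list is MADE UP for illustration.
from statistics import mean, pstdev

def z_scores(results):
    """Map raw event results (higher = better) to SDs above the field mean."""
    mu, sd = mean(results), pstdev(results)
    return [(r - mu) / sd for r in results]

event = [300, 250, 240, 230, 180]  # e.g., reps completed by five athletes
z = z_scores(event)
print(round(z[0], 2))  # winner's score, in SDs above average
```

An athlete's overall score is then just the sum of these z-scores across events.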
Here is how the top 5 would have shaken out for men and women using all four of these systems (including the current system):

As far as the winners go, we would not have seen any changes.  Clearly, Froning and Camille Leblanc-Bazinet were the fittest individuals this year.  Generally, what you can observe here is that the athletes doing well in the standard deviation and normal distribution system had some really outstanding performances, whereas the athletes doing well in the regional scoring system were the most consistent.

What is also nice about the standard deviation system is that it can tell us a little more about how each event played out.  For each event, I had to calculate both the mean result and the standard deviation in order to get these rankings.  That allowed me to see a few other things:

  • Which events had the most tightly bunched fields (and the most widely spread fields)?
  • Were there significant differences between men and women in how tightly scores were bunched on events?
  • Which individual event performances were most dominant?
To measure the spread of the field in each event, I looked at the coefficient of variation, which is the standard deviation divided by the mean.  For instance, the mean weight lifted by the women in event 2 was 213.6 pounds and the standard deviation was 22.1 pounds, so the coefficient of variation was 10%.  The higher this value, the wider the spread was in the results.  And remember, if the spread is wider, the better you have to be in order to generate a great score under the standard deviation system.
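That calculation, using the women's event 2 numbers quoted above:

```python
# Coefficient of variation: standard deviation divided by the mean.
def coefficient_of_variation(sd, mu):
    return sd / mu

# Women's event 2: mean 213.6 lb, standard deviation 22.1 lb.
cv = coefficient_of_variation(22.1, 213.6)
print(f"{cv:.0%}")  # 10%
```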

To see which individual event performances were most dominant, I looked at the winning score on each event.  Typically, this score was between 1.5 and 2.75 standard deviations above the mean; this is in the right ballpark if we assume a normal distribution, because there would be about a 7% chance of getting a result of 1.5 standard deviations above the mean and a 0.3% chance of getting a result of 2.75 standard deviations above the mean.
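Those tail probabilities are easy to check against the normal curve using Python's built-in NormalDist (the 7% and 0.3% figures above round from these):

```python
# Probability of a result at least z standard deviations above the mean,
# assuming the field's results are normally distributed.
from statistics import NormalDist

def tail_prob(z):
    """P(result >= z SDs above the mean) under a standard normal curve."""
    return 1 - NormalDist().cdf(z)

print(f"{tail_prob(1.5):.1%}")   # 6.7%
print(f"{tail_prob(2.75):.2%}")  # 0.30%
```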

The chart below shows both the winning score (bars) and the coefficient of variation (line) for each event.  Note that the Clean Speed Ladder is omitted because it was a tournament-style event and does not convert easily to the standard deviation system.  For my calculations of points on the Clean Speed Ladder, I used a normal distribution assumption and applied points based on the rankings in this event.


The largest win was Neal Maddox's 3.43 in the Sprint Sled 1; a normal distribution would say this should occur about 1-in-3,000 times.  For those who watched the Games, this performance was quite impressive.  Maddox looked like he was pushing a toy sled compared to everyone else.  Also, don't sleep on Nate Schrader's result in the Sprint Carry.  It may not have appeared quite as impressive because the field was so tightly bunched (only a 9% coefficient of variation, compared to 23% on Sprint Sled 1).

The most tightly bunched event was the Triple-3 for both men (7%) and women (5%).  The Sprint Carry was next (9% men, 7% women).  The event with the largest spread was Thick-n-Quick, at 53% for men and 41% for women.  Remember, Froning won this event in 1:40 (4.2 reps per minute), while some athletes only finished 2 reps (0.5 reps per minute).

The lesson, as always: Rich Froning is a machine.



*All of the alternate scoring scenarios here assume that the sprint sled events would each be worth half value.
**In order to do this, I had to convert all time-based events from a time score to a rate of speed score (reps per minute, for example).  There are lots of intricacies to this, so another individual calculating these may have used slightly different assumptions.  The main takeaways would be the same here, I think.

Friday, August 1, 2014

Initial Games and Pick 'Em Observations

Only a few days removed from the conclusion of the 2014 CrossFit Games, I haven't quite had the time and energy to completely digest what took place.  Trust me, there is more analysis to come dealing with the Games from a variety of angles, but for now, let's start with some quick observations.
  • I had the good fortune of being able to attend the Games in person (thanks to my wife for the birthday surprise!), so I can't comment on the quality of the TV product, but the intensity in-person for the prime-time events was top-notch.  In particular, the conclusion of the Saturday night event ("Push Pull") was probably the most exciting individual event I've witnessed.  The crowd's reaction when Froning took the lead, when it looked for the first time all weekend that the real Rich Froning had arrived, was powerful.  But for Josh Bridges to go unbroken on the last set of handstand push-ups and then hold off Froning on the final sled drag was really something special.
  • One of the underrated moments of the in-person experience came as we were leaving the venue Saturday night and a buzz went through the departing crowd as the JumboTron showed the updated men's overall standings with Froning out front for the first time since Friday morning.  You really got the sense that the spectators were fans, not just CrossFitters there to support the athlete from their local box.
  • Also super-cool was the "Fra-ser" chant in a small but vocal section of the crowd before the men's final.  Don't get me wrong, Froning was still the clear fan favorite, but this was a neat moment to hear the support for the underdog.
  • The Muscle-up Biathlon was also a pretty thrilling event, both for the men and women.  For the women in particular, the race between Camille and Julie Foucher (still in contention for the title at that point) was pretty nuts.  And I'm not sure if this is a good or bad thing, but when Foucher was no-repped at the end of her round of 12, I heard the first real booing at a CrossFit event.  People friggin' LOVE Julie Foucher, and they HATE no-reps.
  • Now let's get to some numbers.  Based on the CFG Analysis predictions, Cassidy Lance finishing in the top 10 was the third-longest shot to come through dating back to the 2012 Games.  I had her with a 2.3% chance of reaching the top 10.  The only two longer shots were Anne Tunnicliffe in 2013 (2.2% chance of top 10) and Kyle Kasperbauer in 2012 (1.2% chance of podium).  There were really no major long-shots on the men's side this year.  Jason Khalipa for podium had the lowest chances at 9.1%.
  • The winner of the pool was JesseM with 134.7 points.  He had some great picks, including 6 points on Lauren Fisher to finish top 10. However, he had far from the ideal set of picks.  That would have been the following:
    • 1 Rich Froning win
    • 1 Jason Khalipa podium
    • 1 Tommy Hackenbruck top 10
    • 1 Camille Leblanc-Bazinet win
    • 1 Annie Thorisdottir podium
    • 15 Cassidy Lance top 10
    • Total score of 688.7!
  • I wrote a piece a couple weeks ago in which I mentioned that prior Games experience was worth approximately 4-5 spots at the Games.  The results from this season were consistent with that.  It seems fair to say that the advantage of having experience at the Games is real.
  • Despite not including the impact of past experience in my model, the calibration of my predictions turned out to be pretty solid again this year.  Now with three years of data, below is a chart showing the calibration of my top 10 predictions.


Over the next few weeks, I'll be breaking the Games down in more depth, in particular digging more into the programming, how it compares to years past and what it might tell us about the future.  That's it for today, so good luck in your training, and I'll see you back soon.