Now that the latest version of USPSA Analyst is released, you can run career statistics for yourself, including every match you’ve shot, locals too. Here’s how.
To be sure you have the right page source, Ctrl+F to search for ‘matchid’ in the output.
Click ‘None’, then ‘Combined’.
This article is up to date as of September 2023. It may not be up to date if you’re reading it later on, and it may not be fully complete, either—this is not meant to be full documentation of Analyst’s guts (that’s the source code), but rather a high-level overview.
The Elo^{2} rating system was first developed to measure the skill of chess players. Chess is what we call a two-player, zero-sum game: in tournaments, each chess game is worth one point. If there’s a draw, the two players split it. Otherwise, the winner gets it. Elo operates by predicting how many points a player ought to win per game, and adjusting ratings based on how far the actual results deviate from its predictions.
Let’s consider two players, the first with a rating of 1000, and the second with a rating of 1200. The second player is favored by dint of the higher rating: Elo expects him to win about 0.75 points per game—say, by winning one and drawing the next, if there are two to be played. If player 2 wins a first game, his actual result (1 point) deviates from his expected result (0.75), so he takes a small number of rating points from player 1. Player 2’s rating rises, and player 1’s rating falls. Because the deviation is relatively small, the change in ratings is relatively small: we already expected player 2 to win, so the fact that he did is not strong evidence that his rating is too low. Both players’ ratings change by 10: player 2’s rating goes up to 1210, and player 1’s rating goes down to 990.
On the other hand, if player 1 wins, his actual result (1 point) deviates from his expected result (0.25) by quite a lot. Player 1 therefore gets to take a lot of points from player 2. We didn’t expect player 1 to win, so his win is evidence that the ratings need to move substantially to reflect reality. Both players’ ratings change by 30, in this case: player 2 drops to 1170, and player 1 rises to 1030.
The amount of rating change is proportional to the gap between the actual result and the expected result, which in turn depends on the difference in rating between the winner and loser. In Elo terms, we call the maximum possible rating change K, or the development coefficient^{3}. In the examples above, we used K = 40, multiplied by the difference between actual result and expected result^{4}: 40 × (1 − 0.75) makes 10, for the rating change when player 2 wins, and 40 × (1 − 0.25) makes 30, for when player 1 wins.
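The two-player example above can be sketched in a few lines of Python. The K value of 40 is an assumption chosen to reproduce the 10- and 30-point swings in the example; the standard logistic expected-score curve with a 400-point scale is used, as in classic chess Elo.

```python
# Classic two-player Elo, using the numbers from the example above.
# K = 40 is an assumed development coefficient, not Analyst's actual constant.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A against player B, between 0 and 1."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating: float, expected: float, actual: float, k: float = 40.0) -> float:
    """Move a rating toward reality by K * (actual - expected)."""
    return rating + k * (actual - expected)

p1, p2 = 1000.0, 1200.0

e2 = expected_score(p2, p1)        # about 0.76: player 2 is the favorite
print(round(update(p2, e2, 1.0)))  # favorite wins, small change: 1210

e1 = expected_score(p1, p2)        # about 0.24
print(round(update(p1, e1, 1.0)))  # upset win, large change: 1030
```

Note that the expected scores of the two players always sum to 1, which is exactly the zero-sum property the rest of the article works to relax.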
Expected score can be treated as an expected result, but in the chess case, it also expresses a win probability (again, ignoring draws): if comparing two ratings yields an expected result of 0.75, what it means is that Elo thinks that the better player has a 75% chance of winning.
Practical shooting differs from chess in almost every way^{5}. Beyond the facially obvious ones, there are two that relate to scoring, and that thereby bear on Elo. First, USPSA is not a two-player game. On a stage or at a match, you compete against everyone at once. Second, USPSA is not zero-sum. There is not a fixed number of match points available on a given stage: if you win a 100-point stage and another shooter comes in at 50%, there are 150 points on the stage between the two of you. If you win and the other shooter comes in at 90%, there are 190 points.
The first issue is simple to solve, conceptually: compare each shooter’s rating to every one of his competitors’ ratings to determine his expected score^{6}. The non-zero-sum problem is thornier, but boils down to score distribution: how should a shooter’s actual score be calculated? The article I followed to develop the initial version of the multiplayer Elo rating engine in Analyst offers several suggestions, but the method I settled on has a few components.
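A minimal sketch of the pairwise approach, assuming the pairwise expectations are normalized so the whole field’s expected scores sum to 1 (my illustration of the idea, not necessarily Analyst’s exact implementation):

```python
def multiplayer_expected(ratings: list[float], shooter_index: int) -> float:
    """Expected score for one shooter against the whole field.

    Sums the standard two-player Elo expectation against every other
    competitor, then divides by the total number of pairings so that
    all shooters' expected scores add up to 1.0.
    """
    n = len(ratings)
    me = ratings[shooter_index]
    pairwise = sum(
        1.0 / (1.0 + 10 ** ((other - me) / 400.0))
        for i, other in enumerate(ratings)
        if i != shooter_index
    )
    # There are n*(n-1)/2 pairwise "points" at stake across the field.
    return pairwise / (n * (n - 1) / 2)
```

With equal ratings, every shooter’s expected score is 1/n; a higher-rated shooter draws a proportionally larger share of the field’s total expectation.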
First is match blending. In stage-by-stage mode (Analyst’s default), the algorithm blends in a proportion of your match performance, to even out some stage-by-stage variation^{7}. If you finished first at a match and third on a stage, and match blend is set to 0.3, your calculated place on that stage is (0.7 × 3) + (0.3 × 1) = 2.4, rewarding you on the stage for the match win.
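As a one-function sketch, with the blend weight as a parameter (0.3 in the example above):

```python
def blended_place(stage_place: float, match_place: float,
                  match_blend: float = 0.3) -> float:
    """Blend a stage finish with the overall match finish.

    match_blend is the proportion of match performance mixed in;
    0.3 is the example value from the article, not a fixed constant.
    """
    return (1.0 - match_blend) * stage_place + match_blend * match_place

blended_place(3, 1)  # third on the stage, first at the match -> 2.4
```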
Second is percentages. For a given rating event^{8}, the Elo algorithm in Analyst calculates a portion of actual score based on your percentage finish. This is easy to justify: coming in 2nd place at 99.5% is essentially a tie, as far as how much it should change your rating, compared to coming in 2nd place at 80%. The percentage component of an actual score is determined by dividing the percent finish on the stage by the sum of all percent finishes on the stage. For instance, in the case of a stage with three shooters finishing 100%, 95%, and 60%, the 95% finisher’s percentage contribution to actual score is 95 / (100 + 95 + 60) = 95/255 ≈ 0.373.
The winning shooter gets some extra credit based on the gap to second: for his actual score, we treat his percentage as P + (P − P2), where P is the winner’s percentage and P2 is the second-place percentage. For example, 100 + (100 − 95) = 105, for an actual score of about 105/260 ≈ 0.404 against 100/255 ≈ 0.392 done the other way around^{9}.
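Putting the percentage component and the winner’s gap bonus together, under my reading that the bonus applies only when computing the winner’s own score (an assumption for illustration; Analyst’s internals may differ):

```python
def percent_component(percents: list[float], shooter_index: int) -> float:
    """Percent part of one shooter's actual score.

    The winner's percentage is inflated by the gap to second place
    (P + (P - P2)) before dividing by the sum of all percentages.
    """
    ps = list(percents)
    top = max(ps)
    if len(ps) > 1 and ps[shooter_index] == top:
        second = sorted(ps, reverse=True)[1]
        ps[shooter_index] = top + (top - second)
    return ps[shooter_index] / sum(ps)

# Stage from the article: finishes of 100%, 95%, and 60%.
percent_component([100.0, 95.0, 60.0], 0)  # winner: 105/260, about 0.404
percent_component([100.0, 95.0, 60.0], 1)  # second: 95/255, about 0.373
```

Because the winner’s score is computed against an inflated denominator that the other shooters never see, the components no longer sum to exactly 1, which is one of the ways the system drifts away from zero-sum.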
Analyst also calculates a place score according to a method in the article I linked above: the number of shooters minus the actual place finish, scaled to be in the range 0 to 1. Place and percent are both important. The math sometimes expects people who didn’t win to finish above 100%, which isn’t possible in our scoring but is a difficult constraint to encode in Elo’s expected score function. (Remember, the expected score is a simple probability, or at least the descendant of a simple probability.) Granting points for place finish allows shooters who aren’t necessarily contesting stage and match wins to gain Elo even in the cases where percentage finish breaks down. On the other hand, percentage finish serves as a brake on shooters who win most events they enter^{10}. If you always win, you eventually need to start winning by more and more (assuming your competition isn’t getting better too) to keep pushing your rating upward.
Percent and place are blended similarly to the match/stage blending above, with each part scaled by a weight parameter.
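A sketch of the place score and the percent/place blend; the 0.5 weight here is a placeholder for illustration, not Analyst’s actual parameter:

```python
def place_component(place: float, num_shooters: int) -> float:
    """Number of shooters minus place finish, scaled into the 0..1 range.

    First of N scores 1.0; last of N scores 0.0. Accepts fractional
    places, since match blending can produce values like 2.4.
    """
    return (num_shooters - place) / (num_shooters - 1)

def actual_score(percent_part: float, place_part: float,
                 percent_weight: float = 0.5) -> float:
    """Weighted blend of the percent and place components.

    percent_weight = 0.5 is an assumed even split, not Analyst's value.
    """
    return percent_weight * percent_part + (1.0 - percent_weight) * place_part

# Winner of the three-shooter stage above: percent part ~0.404, place part 1.0.
actual_score(0.404, place_component(1, 3))  # about 0.702
```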
That’s the system in a nutshell (admittedly a large nutshell). There are some additional tweaks I’ve added onto it, but first…
Although the system I’ve developed is Elo-based, it no longer follows the strict Elo pattern. Among a bevy of minor assumptions about standard Elo that no longer hold, there are two really big ones.
Number one: winning is no longer a guaranteed Elo gain, and losing is no longer a guaranteed Elo loss. If you’re a nationally competitive GM and you only put 5% on the A-class heat at a small local, you’re losing Elo because the comparison between your ratings says you should win by more than that. On the flip side, to manufacture a less likely situation, if you’re 10th of 10 shooters but 92% of someone who usually smokes you, you’ll probably still gain Elo by beating your percentage prediction.
Number two: it’s no longer strictly zero-sum. This arises from a few factors in score calculation, but the end result is that there isn’t a fixed pool of Elo in the system based on the number of competitors. (This is true to a small degree with the system described above, but true to a much larger degree once we get to the next section.)
The system I describe above works pretty well. To get it to where it is now, I added a few other factors. Everything listed below operates by adjusting K, the development coefficient, for given stages (and sometimes even individual shooters), increasing or decreasing the rate of rating change when factors that don’t otherwise affect the rating system suggest it’s too slow or too fast.
In 2024, bomb protection and direction awareness will be enabled for the main ratings I do. Notably, these make the ratings slightly less accurate at predicting match results. The effect is small enough that it may not be statistically significant, and the change substantially improves the leaderboards according to the experts who have seen the output. At the same time, the predictions will continue to use the 2023 settings, since (again) they are slightly but consistently more accurate.
In any case, that’s the quick (well, quick-ish) tour of Elo ratings for USPSA. Drop me a comment here, or a DM on Instagram (@jay_of_mars) if you have questions or comments.
The live blog provider puts the ads in. Block away.
It’s still Sunday, and I got this done.
Merry Christmas from your Soapbox hosts.
Here are some of my answers.
Coolness is, of course, subjective, so your mileage may vary, but I think revolvers are cool. For one, there’s the cowboy cachet. If you’ve seen my match videos, at least after I started leaning into the spaghetti western theme, I’m sure you can tell I find that compelling.
For another, a revolver is a mechanical marvel: delicate clockwork nevertheless tough enough to survive a cylinder hammering back and forth. Bunches of tiny metal parts, all interlocking perfectly, cooperating to bring the cylinder into line and the hammer down at just the right moment.
And that’s saying nothing of the internal ballistics, which rely on the notion that standard-pressure air is more or less solid to a jet of gas at 20,000 PSI, at least on the timescale needed for a bullet to cross the barrel-cylinder gap and make it out the business end of the gun.
It’s a mechanically and historically fascinating tool. If your ‘why’ was intended to mean ‘why did you get into Revolver over, say, Open or CO?’, that’s why.
Come, reader, I have nothing to hide from you: you know I like weird hipster things. The fact that revolver is an esoteric corner of the USPSA game is a draw to me. Getting to^{1} work out a lot of how the wheelgun interacts with modern USPSA is an engaging mental exercise for me.
I don’t want to spoil my revolver techniques series too much^{2}, but USPSA Revolver is a layer cake of similarity sponges and difference fillings. There are parts of the game that are the same as in semi-auto divisions, but then you look deeper and there are differences, and then you zoom in to the techniques that underpin those parts of the game and they’re the same, and then you get better at those parts of the game, and they turn different again.
It keeps the gears turning in my head, and the Revolver game has not yet fully revealed itself to me. If you want to know why I keep at it, that’s one reason why.
USPSA, fundamentally, ends up being more about speed than accuracy. There are only so many points available on a stage, but you can always go faster. Just about anyone can shoot 90% points at a USPSA match. The measure of a shooter is how fast he can do it. There are some wrinkles in that clean formulation. Maybe you shoot 85% points a little faster, or 95% points a little slower. Ultimately, though, your speed improves a whole lot more than your accuracy as you get better at the game.
Inevitably, and not incorrectly, that leads people to shoot as fast as they think they can go to get good hits. Shooters in almost every division except Revolver^{3} have enough ammunition in the gun to turn the dial a bit more toward speed. Even in Production, if you throw a shot off target and take a fast makeup shot, you’re still in decent shape.
The dial has to go a bit more toward accuracy in Revolver. Shooting a wheelgun, I don’t frequently find myself pulling into a position with extra bullets in the gun. Usually, my stage plan calls for eight shots, and all eight of those shots have to hit, or else there’s going to be a standing reload somewhere. Revolver is a deadeye’s game. If you want to win against solid competition, you have to be able to shoot any given USPSA position with perfect accuracy, and you have to know exactly how fast you can go while doing it.
What do I find compelling about the actual shooting in Revolver division? That.
Toward the end of 2021, after a year and a half of shooting almost exclusively revolvers^{4}, I took a few detours into semi-automatic divisions: one day in Carry Optics, one in Open. I shot those guns, and those matches, pretty well. I’m clearly a better shooter now than I was the last time I ran my Carry Optics gun, and that was plain to see in the results.
That said, at the end of the match in Open, I donned the Revolver belt for a run at the classifier. In the aftermath of a 100% run, I remarked to a friend, “I shoot that gun like it’s an extension of my arm.” Revolver clicks for me. For the amount of practice I’ve put in, I run the revolver better, relative to some hypothetical baseline skill, than I would the semi-autos. I’m shooting classifiers at a pace that should put me into Grandmaster soon, and in contention to win stages that don’t excessively handicap the wheelgun at the smaller of the two local matches I frequent.
Why revolver? Because I’m good at it, and still getting better.
October was a busy month for me: three USPSA matches, one of which yielded a trophy, and my birthday toward the end. Now that the summer’s winding down, I should be more able to do these regularly.