Introduction
The traditional Elo rating system reduces a player's ability to a single scalar value E, from which win probabilities are computed via a logistic function of the rating difference. While pragmatic, this one-dimensional approach may obscure the rich, multifaceted nature of chess skill. For instance, factors such as tactical creativity, psychological resilience, opening mastery, and endgame proficiency could interact in complex ways that a single number cannot capture.
I'm interested in exploring whether modeling a player's ability as a vector, with each component representing a distinct skill dimension, can yield more accurate predictions of match outcomes. I tried asking ChatGPT for a detailed answer on this idea, but its responses weren't that helpful, frankly.
The Limitations of a 1D Metric
The standard Elo system computes the win probability for two players A and B as a function of the scalar difference $E_A - E_B$, typically via:

$$P(A \text{ beats } B) = \frac{1}{1 + e^{-\alpha (E_A - E_B)}},$$

where $E_A$ and $E_B$ are the players' ratings and $\alpha$ is a scaling parameter. This model assumes that all relevant aspects of chess performance are captured by E. Yet, consider two players with equal Elo ratings: one might excel in tactical positions but falter in long, strategic endgames, while the other might exhibit a more balanced but less spectacular play style. Their match outcomes could differ significantly depending on the nuances of a particular game - nuances that a one-dimensional rating might not capture.
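For reference, here's the scalar model in code - a minimal sketch, using the conventional base-10/400-point scaling (i.e. $\alpha = \ln 10 / 400$; any other positive $\alpha$ just rescales the ratings):

```python
import math

def elo_win_prob(e_a: float, e_b: float, alpha: float = math.log(10) / 400) -> float:
    """P(A beats B) under scalar Elo; alpha = ln(10)/400 gives the
    conventional scale where a 400-point gap means 10:1 odds."""
    return 1.0 / (1.0 + math.exp(-alpha * (e_a - e_b)))

# A 200-point favorite wins about 76% of the time on this scale.
print(elo_win_prob(1800, 1600))  # ~0.7597
```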
A natural extension is to represent each player's skill by a vector $\mathbf{s} = (s_1, \ldots, s_n)$, where each $s_i$ corresponds to a distinct skill (e.g., tactics, endgame, openings). One might model the probability of player A beating player B as:

$$P(A \text{ beats } B) = \frac{1}{1 + e^{-\langle \mathbf{w},\, \mathbf{s}_A - \mathbf{s}_B \rangle}},$$

where $\langle \cdot, \cdot \rangle$ denotes the dot product and $\mathbf{w}$ is a weight vector representing the relative importance of each skill dimension.
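To make this concrete, here's a tiny sketch; the three dimensions, the skill values, and the weights are all made up purely for illustration:

```python
import numpy as np

def vector_win_prob(s_a: np.ndarray, s_b: np.ndarray, w: np.ndarray) -> float:
    """P(A beats B) = sigmoid(<w, s_A - s_B>) for skill vectors s_A, s_B."""
    z = w @ (s_a - s_b)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical dimensions: (tactics, endgame, openings), arbitrary units.
w   = np.array([1.2, 0.8, 0.5])    # assumed relative importance of each skill
s_a = np.array([1.5, -0.3, 0.4])   # tactically sharp, weaker in endgames
s_b = np.array([0.6, 0.9, 0.2])    # balanced, endgame-oriented
print(vector_win_prob(s_a, s_b, w))  # ~0.55: near-even despite different profiles
```

One caveat worth flagging: with a single fixed $\mathbf{w}$, this model is observationally equivalent to scalar Elo with rating $E = \langle \mathbf{w}, \mathbf{s} \rangle$, since outcomes depend only on that inner product. The extra dimensions only become identifiable from data if the effective weights vary - by game, by opponent profile, or by position type.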
I'm interested in opening the discussion: has anyone developed or encountered multidimensional models for competitive games that could be adapted for chess? How might techniques from psychometrics - e.g. Item Response Theory (IRT) - inform the construction of these models?
Considering the typical chess data (wins, draws, losses, and perhaps even in-game evaluations), is there a realistic pathway to disentangling multiple dimensions of ability? What metrics or validation strategies would best demonstrate that a multidimensional model provides superior predictive performance compared to the traditional Elo system?
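On validation, one standard recipe is held-out log-loss: fit each candidate model by maximum likelihood on earlier games and compare average negative log-likelihood on later games; any multidimensional model has to beat the scalar baseline out of sample. A minimal sketch of fitting and scoring that baseline on synthetic data (plain gradient ascent; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, n_games = 50, 5000

# Synthetic ground truth: latent scalar skills on the logistic scale.
true_skill = rng.normal(0.0, 1.0, n_players)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random pairings; player a's win is Bernoulli with the Elo probability.
a = rng.integers(0, n_players, n_games)
b = rng.integers(0, n_players, n_games)
ok = a != b
a, b = a[ok], b[ok]
wins = (rng.random(a.size) < sigmoid(true_skill[a] - true_skill[b])).astype(float)

# Chronological split: fit on earlier games, score on later ones.
split = int(0.8 * a.size)
ta, tb, ty = a[:split], b[:split], wins[:split]

# Maximum-likelihood fit of scalar ratings by gradient ascent.
est = np.zeros(n_players)
for _ in range(200):
    p = sigmoid(est[ta] - est[tb])
    grad = np.zeros(n_players)
    np.add.at(grad, ta, ty - p)  # gradient for the first player in each pairing
    np.add.at(grad, tb, p - ty)  # and for the second
    est += 0.01 * grad

# Held-out log-loss: the yardstick any richer model has to beat.
p_test = sigmoid(est[a[split:]] - est[b[split:]])
y_test = wins[split:]
ll = -np.mean(y_test * np.log(p_test) + (1 - y_test) * np.log(1 - p_test))
print(f"held-out log-loss: {ll:.4f}")
```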
Ultimately, my aim here is to build chess betting models ... lol, but I think the stats are really cool too. Any insights on probabilistic or computational techniques that might help in this endeavor would be highly appreciated.
Thank you for your time and input.
Circling back to this with something I was thinking about - suppose one wanted to figure out just one additional degree of freedom beyond a player's Elo rating (at a given point in time, if you also allow evolution over time) that would add as much predictive improvement as possible. Almost certainly you need more dimensions than that to properly fit real idiosyncratic nonlinearities/nontransitivities (i.e. if you had a playing population with specific pairs of players that were especially strong/weak only against specific other players, or cycles of players where A beats B beats C beats A, etc.), but if you just wanted to work out what the "second principal component" might be, what's a plausible guess?
First, you can essentially reproduce the Elo model with a reformulation: rather than each player having a rating and the winning chance being a function of the difference between their ratings, you posit that each player has a rating and, when they play a game, each independently samples a random value from a fixed probability distribution centered around their own rating; the player with the larger sample wins.
I think that you exactly reproduce the Elo model up to scaling if this distribution is a Gumbel distribution, because the difference of two Gumbels is apparently equivalent to a draw from a logistic distribution, and the CDF of the logistic distribution is precisely the sigmoid that the Elo model posits. But in practice, you should end up with almost the same thing if you choose any other reasonable distribution so long as it has the right heaviness of tail.
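This is easy to sanity-check by simulation - the difference of two i.i.d. Gumbels with common scale is exactly logistic, so the empirical win frequency should match the sigmoid of the rating difference. A quick sketch (unit scale, so ratings are on the natural logistic scale):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
r_a, r_b = 0.8, 0.0  # ratings on the scale-1 logistic scale

# Each player independently draws Gumbel noise around their own rating;
# the larger sample wins.
sample_a = r_a + rng.gumbel(0, 1, n)
sample_b = r_b + rng.gumbel(0, 1, n)
empirical = np.mean(sample_a > sample_b)

predicted = 1.0 / (1.0 + np.exp(-(r_a - r_b)))  # Elo/logistic prediction
print(empirical, predicted)  # both ~0.690
```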
In particular, I'd expect linearly-exponential tails to be better than quadratically-exponential tails like the normal distribution's, because linearly-exponential tails are much more outlier-resistant - desirable for real-world ratings models, since in the real world you have issues like forfeits, sandbaggers, internet disconnections/timeouts, etc. (With a quadratically-exponential tail, a ratings model can assign such low probability to an outlier that, conditional on seeing the outlier, it is forced to make a too-large update to accommodate it - this should be intuitive from a Bayesian perspective.) Anyway, I'd expect outliers, noise, and the realities of real-world ratings data to introduce far bigger variation in rating quality than any minor distribution-shape differences would.
So, for example, you could also say each player draws from a logistic distribution rather than a Gumbel. The difference of two logistics is not quite a logistic distribution, but up to rescaling it should be pretty close, so this is nearly the Elo model again.
Anyways, with any reformulation like this, there is a very natural candidate for a second dimension: the variance of the distribution that a player draws their sample from. Rather than each player drawing from a fixed distribution centered around their rating before seeing who has the higher value and wins, we add a second parameter that allows the variance of that distribution to vary by player. The ratings model then becomes able to express things like "this player is more variable in performance between games, or more prone to blunders uncharacteristic of their skill level, than this other player". This parameter might also improve the rating system's ability to "explain away" things like sandbagger players by assigning them a high variance, thereby reducing their distortionary impact on other players' ratings even before manual intervention.
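As a sketch of what this buys you (using per-player logistic draws as in the reformulation above; the numbers are made up): against an equally-steady stronger opponent an erratic player's win probability is pulled toward 50%, which is exactly the kind of matchup effect a single scalar rating can't express.

```python
import numpy as np

rng = np.random.default_rng(2)

def win_prob(r_a, s_a, r_b, s_b, n=1_000_000):
    """Monte Carlo P(A beats B) when each player draws from a logistic
    distribution centered at their own rating, with a per-player scale
    (the proposed second dimension)."""
    a = rng.logistic(r_a, s_a, n)
    b = rng.logistic(r_b, s_b, n)
    return np.mean(a > b)

# Both underdogs trail a steady (scale-1) opponent by 1 rating unit:
print(win_prob(0.0, 1.0, 1.0, 1.0))  # steady underdog:  ~0.34
print(win_prob(0.0, 3.0, 1.0, 1.0))  # erratic underdog: ~0.42, pulled toward 50%
```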