Opening theory and the motivation for change
Chess is one of humanity’s oldest and most intensively analysed strategic games, and for that reason, it has long attracted interest beyond the chess community itself. In fields such as artificial intelligence, decision theory and statistical physics, chess has served as a controlled testing ground for ideas about optimal choice, information processing and complexity.
From a physicist’s perspective, the game offers several decisive advantages: it is fully deterministic, it generates enormous empirical datasets spanning centuries, and it can be analysed with high precision using modern computer engines.
The modern rules of chess stabilised during the 15th century, when the classical starting position – RNBQKBNR – emerged as the universal standard. Importantly, this configuration was not derived from any formal optimisation principle. Rather, it appears to have arisen through a long process of cultural evolution and practical convention, gradually solidifying as the rules of the game themselves became fixed. Despite this historical contingency, the classical starting position has come to define what most players regard as “normal” chess.
At the highest levels of modern play, opening theory has become extraordinarily deep. In elite tournaments, the first 15 to 20 moves of a game often follow established theoretical paths. In some cases, preparation delays genuinely original decision-making until well into the middlegame. This situation has long raised concerns about the balance between memorisation and understanding, and about whether creative play is being crowded out by preparation.
Proposals to address this tension are not new. As early as 1792, the Dutch chess enthusiast Philip Julius van Zuylen suggested randomising the initial placement of pieces to reduce the role of rote learning. More than two centuries later, similar concerns were voiced by Bobby Fischer, who saw excessive opening preparation as a potential threat to the intellectual vitality of the game. In 1996, Fischer proposed a concrete alternative: Fischer Random Chess, developed with Susan Polgar.
Now standardised as Chess960 and more recently marketed as Freestyle Chess, the variant preserves the full rules of chess while randomising the back-rank pieces. Three constraints apply: the bishops must start on opposite-coloured squares, the king must be placed between the rooks to allow castling, and White and Black must have mirrored arrangements. These rules generate exactly 960 legal starting positions. The classical setup corresponds to position #518 in the standard numbering.
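The count of 960 follows from combinatorics (4 × 4 squares for the opposite-coloured bishops, 6 for the queen, 10 for the knights, with the king-between-rooks placement then forced), and it can be checked directly by brute force. The sketch below is illustrative, not taken from the paper: it enumerates every distinct arrangement of the eight back-rank pieces and keeps those satisfying the three constraints.

```python
from itertools import permutations

# Enumerate distinct arrangements of the eight back-rank pieces
# and keep those satisfying the Chess960 constraints.
pieces = "RNBQKBNR"
valid = set()
for perm in set(permutations(pieces)):
    bishops = [i for i, p in enumerate(perm) if p == "B"]
    rooks = [i for i, p in enumerate(perm) if p == "R"]
    king = perm.index("K")
    opposite_colours = bishops[0] % 2 != bishops[1] % 2
    king_between_rooks = rooks[0] < king < rooks[1]
    if opposite_colours and king_between_rooks:
        valid.add("".join(perm))

print(len(valid))           # 960
print("RNBQKBNR" in valid)  # True: the classical setup is one of them
```

Mirroring White's arrangement for Black adds no further freedom, which is why the back rank of one side determines the whole position.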
While Fischer’s motivation was primarily practical – to reduce the advantage of memorised opening theory – the introduction of Chess960 also opens the door to quantitative questions:
- Are all 960 starting positions equally complex?
- Is the classical position in any sense special?
- Can one define and measure the intrinsic difficulty of decision-making associated with different initial configurations?
These longstanding concerns are taken up directly by Marc Barthelemy, of the Institut de Physique Théorique at the Université Paris-Saclay, in his study “Not all Chess960 positions are equally complex”, which subjects the full space of Chess960 starting positions to a systematic quantitative analysis.
A quantitative framework for complexity
In his study, Barthelemy addresses these questions using tools drawn from information theory and complex systems. Building on recent advances in computer chess, particularly the strength of engines such as Stockfish, the study evaluates all 960 starting positions with systematic consistency.
A central concept introduced in the work is an information–cost measure, which captures the cumulative information required to identify optimal moves over the first n plies of a game. This measure reflects how difficult it is, at each stage, to distinguish the best move from its competitors. Summed over the opening phase, it provides a quantitative proxy for decision-making complexity. By applying this framework uniformly across all 960 positions, the study constructs an empirical “complexity landscape” of opening configurations.
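The paper’s exact formula is not reproduced here, but the underlying idea can be sketched in a few lines. As a modelling assumption, suppose the engine’s evaluations of the candidate moves at a given ply are converted into a probability that each move is best (here via a softmax with an arbitrary sharpness parameter `beta`, not a value from the paper); the information needed to single out the top move is then −log₂ of its probability, and summing over plies yields a cumulative cost.

```python
import math

def ply_information_cost(evals, beta=3.0):
    """Bits needed to single out the top move at one ply.

    `evals` are engine scores (in pawns) for the candidate moves;
    `beta` is an assumed sharpness parameter, not taken from the paper.
    """
    weights = [math.exp(beta * e) for e in evals]
    p_best = max(weights) / sum(weights)
    return -math.log2(p_best)

def opening_information_cost(per_ply_evals):
    """Cumulative information cost over the opening plies."""
    return sum(ply_information_cost(e) for e in per_ply_evals)

# A ply with one clearly best move costs almost nothing...
print(ply_information_cost([2.0, 0.0]))  # close to 0 bits
# ...while two equally good moves cost a full bit.
print(ply_information_cost([1.0, 1.0]))  # 1.0 bit
```

This captures the intuition in the text: positions where the best move stands out cheaply are “easy”, while positions with many near-equivalent candidates accumulate information cost quickly.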
White’s first-move advantage revisited
To establish a baseline, all 960 starting positions were first evaluated using Stockfish 17.1 at depth 30. The resulting distribution of initial evaluations reveals a striking regularity. The mean evaluation is +0.297 pawns in White’s favour, with a standard deviation of 0.14 pawns. In practical terms, 956 out of 960 positions – 99.6% of the total – show a positive evaluation for White.
The classical starting position (#518) scores +0.30 pawns, placing it squarely in the middle of the distribution. This indicates that standard chess is statistically typical within the Chess960 ensemble: it neither amplifies nor diminishes the structural advantage of moving first. The narrow spread of the distribution suggests that White’s initiative is a robust feature of the game, largely independent of the precise arrangement of pieces.
Some outliers are nevertheless informative. The most White-favourable position identified is #279 (NRBKNRQB), with an evaluation of +0.83 pawns – roughly 2.8 times the mean. In this configuration, the placement of the pieces creates early developmental imbalances that White can exploit, although the engine’s preferred continuations are far from intuitive.
At the opposite extreme, position #535 (RNBKQNRB) produces near-perfect balance, with an evaluation of −0.09 pawns, marginally in Black’s favour. The rarity of such cases underlines how exceptional it is for a starting position to neutralise White’s initiative.
From a game-theoretic perspective, these results support the view that the first-move advantage is intrinsic to the mechanics of chess itself, not merely an artefact of centuries of opening theory. Chess960 successfully removes preparation advantages, but it does not eliminate the underlying asymmetry between the players.
Complexity, symmetry and representative positions
When the information–cost framework is applied, a richer picture emerges. The 960 positions differ substantially in total complexity and in the balance of decision-making difficulty between White and Black.
Position #89 (NNRBBKRQ) stands out as an example of near-perfect symmetry, as it produces statistically indistinguishable decision complexity for both players. Even so, Stockfish still assigns White a small advantage, with principal variations beginning with moves such as b4, Nb3 or c4. This illustrates that complexity symmetry does not imply evaluative equality.
The position with the highest total complexity is #226 (BNRQKBNR), which reaches the 99.9th percentile in information cost. Remarkably, it differs from standard chess by a single transposition: the queenside rook and bishop (on the a- and c-files) swap places. Despite this minimal structural change, it generates the most demanding decision environment in the entire Chess960 ensemble.
At the opposite end of the spectrum lies position #316 (NBQRKRBN), which yields the lowest average complexity. However, the large standard deviation associated with this position indicates high variability: some games can still become quite complex.
Standard chess once again occupies a middle position. Its total complexity lies at the 47th percentile, confirming that it is neither especially simple nor especially complex within the Chess960 landscape. Its asymmetry measure, however, is relatively high, placing it at the 91st percentile and suggesting that Black tends to face moderately harder decisions in the opening. Given the size of the uncertainties, this imbalance is not decisive at the level of individual games, but it is nevertheless a systematic feature.
Broader implications and conclusions
Two main insights emerge from the study:
- The 960 starting positions form a heterogeneous landscape of decision-making complexity. Small structural changes can produce large effects on both the difficulty and the balance of the game.
- The classical starting position is not exceptional within this landscape. Its persistence may reflect aesthetic symmetry, ease of learning or historical accident rather than any optimisation for maximal strategic depth or perfect fairness.
In practice, random selection can yield configurations that impose significantly unequal cognitive demands on the players. The study therefore provides a quantitative basis for more principled approaches to position selection in competitive Freestyle Chess.
More broadly, Barthelemy’s work illustrates how concepts from information theory and statistical physics can be applied to deterministic decision-making systems. The proposed framework is not specific to chess and could be extended to other board games, historical variants such as Shatranj or Xiangqi, or even entirely different strategic domains.
Read the full paper on arXiv…