KCEC
(Kirr's Chess Engine Comparison)
A tournament of original free chess engines
June 16, 2013
Testing summary:
Total: 135,679 games
played by 202 programs
1398 CPU days (X2 4600+)

White wins: 55,227 (40.7%)
Black wins: 47,434 (35.0%)
Draws: 33,018 (24.3%)
White score: 52.9%
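The white score follows from the standard convention of counting a draw as half a point for each side. A quick check in Python, using the totals above:

```python
# Reproduce the summary percentages: a draw scores half a point for White.
white_wins, black_wins, draws = 55227, 47434, 33018
total = white_wins + black_wins + draws  # 135,679 games

white_score = (white_wins + 0.5 * draws) / total
print(f"Total games: {total}")
print(f"White wins:  {white_wins / total:.1%}")
print(f"Black wins:  {black_wins / total:.1%}")
print(f"Draws:       {draws / total:.1%}")
print(f"White score: {white_score:.1%}")  # 52.9%
```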

All engines

Comparing 202 engines!
The 202 selected engines played 135,679 games against each other.

Evaluation difference: Most similar pairs

  #  Pair                                           Eval diff   Moves counted
  1  Fritz 5.32 – Fritz 6 Light                        0.21          1032
  2  E.T. Chess 13.01.08 – E.T. Chess 18.11.05         0.22          1362
  3  Hamsters 0.6 – Hamsters 0.5                       0.27          1435
  4  Petir 4.39 – Petir 4.999999                       0.30          1563
  5  Aldebaran 0.7.0 – Needle 0.53.1                   0.36          1015
  6  Elf 1.3.0 – Adamant 1.7                           0.38          1699
  7  Fruit 2.3 32-bit – Delfi 5.4                      0.38          1338
  8  Movei 00.8.438 (10-10-10) – Petir 4.39            0.40          1301
  9  Pseudo 0.7c – Fritz 6 Light                       0.40          1345
 10  Pseudo 0.7c – Hamsters 0.6                        0.40          1677
 11  Fruit 2.3 32-bit – Movei 00.8.438 (10-10-10)      0.41          1320
 12  WildCat 8 – Booot 4.13.1                          0.41          1224
 13  Gullydeckel 2.16pl1 – Faile 1.4.4                 0.41          1449
 14  Hamsters 0.6 – Green Light Chess 3.00             0.41          1402
 15  The Crazy Bishop 0052 – Asterisk 0.6              0.42          1343
 16  The Crazy Bishop 0052 – Arion 1.7                 0.42          1316
 17  WildCat 7 – List 5.12                             0.42          1214
 18  Marvin 1.3.0 – Smash 1.0.3                        0.43          1503
 19  Pseudo 0.7c – Movei 00.8.403                      0.43          1263
 20  Slow Chess Blitz WV2.1 – Ruffian 1.0.5            0.43          1473
 21  Ranita 2.4 – Elf 1.3.0                            0.43          1766
 22  Alaric 707 – Pseudo 0.7c                          0.43          1165
 23  Fritz 6 Light – Amyan 1.597                       0.43          1003
 24  BugChess2 1.5.2 – Amyan 1.597                     0.44          1473
 25  Pseudo 0.7c – AnMon 5.60                          0.44          1015
 26  Alaric 704 – Pseudo 0.7c                          0.44          1400
 27  Movei 00.8.403 – Hamsters 0.5                     0.44          1225
 28  The Crazy Bishop 0052 – Averno 0.81               0.44          1172
 29  List 5.12 – SmarThink 0.17a                       0.44          1211
 30  Thinker 4.7a – Fritz 6 Light                      0.44          1137
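This page does not spell out how the evaluation difference is computed. A plausible reading, given the "Moves counted" column, is the mean absolute gap (in pawns) between the two engines' scores over positions both engines evaluated. The sketch below illustrates that interpretation; the function name and data are illustrative, not KCEC's actual procedure.

```python
# Hypothetical sketch of an "evaluation difference" statistic: the mean
# absolute gap (in pawns) between two engines' evaluations of the same
# positions. Not KCEC's documented method; names and data are made up.

def eval_difference(evals_a, evals_b):
    """Average |a - b| over positions both engines scored; None = no score."""
    pairs = [(a, b) for a, b in zip(evals_a, evals_b)
             if a is not None and b is not None]  # only moves counted by both
    if not pairs:
        return None, 0
    diff = sum(abs(a - b) for a, b in pairs) / len(pairs)
    return diff, len(pairs)  # (difference in pawns, moves counted)

# Toy data: evaluations in pawns, None where an engine reported no score.
a = [0.30, 0.15, None, -0.40, 1.10]
b = [0.10, 0.25, 0.00, -0.20, 0.90]
diff, n = eval_difference(a, b)
print(f"eval diff = {diff:.2f} over {n} moves")
```

Under this reading, a small value (like Fritz 5.32 vs. Fritz 6 Light at 0.21) indicates two engines that score positions very similarly, which is why closely related versions dominate the top of the list.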

Created 2005–2012 by Kirill Kryukov
Updated on June 16, 2013