7-man EGTB

Endgame analysis using tablebases, EGTB generation, exchange, sharing, discussions, etc.
Cato the Younger
Posts: 8
Joined: Thu Jan 05, 2006 3:43 pm
Sign-up code: 0
Location: McLean, VA, USA

7-man EGTB

Post by Cato the Younger »

Not likely to be available via P2P anytime soon!

First, there are slightly over 1000 7-man combinations (vs. 365 for 6-man). Second, the file size for each combination is on average much larger than the 6-man. To collect the whole set would probably take up considerably more than 10 terabytes. Third, the computing resources needed to generate these files could tie up a powerful mainframe for a considerable period of time. It does not seem likely complete 7-man Nalimov tables will be available in the foreseeable future.

There seem to be two alternatives. Win-Lose-Draw (WLD) tables could become the standard; they are much smaller and perhaps some hybrid system of WLD and Nalimov would work. That seems to be the approach most in favor now, I hear. Alternately, several years from now the gigantic Nalimov files for 7-man could be completely generated and put into an Internet digital library, to which chess engines might conceivably refer during the course of a game. Obviously current technology couldn't keep up, but with high-speed Internet2 coming somewhere down the line and some counter-latency measures who knows what might be possible!

If anyone out there is a tech or an EGTB expert I'd love to read your views.
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

7-man EGT production status

Post by guyhaw »

Yakov Konoval is developing a code and, with the participation of Marc Bourzutschky, generating 7-man EGTs at new levels of performance.

The EGTs are to the DTZ metric (i.e. to P-push, force-change and/or mate) rather than to DTM(ate) as this is more efficient in time and storage.

The maxDTZ record has already been raised from 243 (KRNKNN) to 290 (KRRNKRR), and only some 55 EGTs have been produced so far. Of course, some 'likely deep' ones were targeted first.

The code is still being developed, and I don't know of any desire to promulgate it to co-workers yet. It is written in Pentium Assembler.

The '10TB' estimate for the storage of 7-man EGTs is I think way too low. I think 6-man is 1.4TB and you should think of 180x that.

Stats for EGTs are updated as and when at http://www.icga.org - follow the yellow brick road to Game-specific info ... Chess ... Endgames.

My estimate for the completion of 7-man is 2015.
Kirill Kryukov
Site Admin
Posts: 7399
Joined: Sun Dec 18, 2005 9:58 am
Sign-up code: 0
Location: Mishima, Japan

Post by Kirill Kryukov »

Yes, 7-man is very interesting and I'm thrilled to imagine the complete 7-man set. Well, right now we are struggling to share all the 6-man files online, and I think it will still take some time, maybe one or two years. We are making good progress, but from now on the remaining bases will be harder to find. It will all depend on the community, and maybe Eugene will join and contribute directly, which would be the best way.
guyhaw wrote: My estimate for the completion of 7-man is 2015.
Seems a reasonable guess to me.

There are also more compact tablebase metrics, like WDL, so I guess some bitbase sort of 7-man set will actually be completed faster than DTM or DTZ.
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

6-man and 7-man EGTs

Post by guyhaw »

The last 16 Nalimov 3-3p EGTs are only with Nalimov. The current plan is for Eugene to send a disc to John Tamplin, who will put them on DVD for me and Eiko Bleicher. Eiko B has all available 6-man EGTs: he may or may not be able to be a sharer for your project.

The creation of WDL EGTs is a helpful step in creating DTx EGTs. Think of all those side-to-move-might-lose positions that would not have to be investigated (in fully retro mode) or stm-might-win positions that could be passed over in Nalimov-multiple-full-pass mode.

DTZ EGTs will be less than 50% the size of DTM ones: a pity the industry standard is currently DTM. Time for a change?

g
Wulff
Posts: 53
Joined: Thu Jan 05, 2006 4:08 pm
Sign-up code: 10159

Post by Wulff »

Hi!!

It would be very nice if you would try to talk to them about sharing at least the ones that are not yet available.

Also, if they have FTP available, that might be a possibility.

Could you do that ??

If they want to work something out, they could contact either Kirill or me.
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

Uploading - the last Nalimov DTM EGTs

Post by guyhaw »

It is clear that none of Eiko Bleicher, John Tamplin or myself has any upload capability worth talking about, so we will not be able to contribute directly to this file-sharing project.

However, if and when I get the last Nalimov DTM EGTs on DVD via John Tamplin, I will be happy to share them with Eiko, and with a participant in this project within postal range.
Wulff
Posts: 53
Joined: Thu Jan 05, 2006 4:08 pm
Sign-up code: 10159

Post by Wulff »

Well, the band width is not as important as just having them available in the first place. I'd be happy to be able to download them at just a few KB/s, as long as the files stay shared long enough for me to finish them.

If you do have a flat-rate connection, setting up eMule to limit the bandwidth usage is very easy.
I myself am using the scheduler to set the max bandwidth so that I have some more bandwidth for myself when I'm home, and go full speed when I'm working/sleeping.
User avatar
Kirill Kryukov
Site Admin
Posts: 7399
Joined: Sun Dec 18, 2005 9:58 am
Sign-up code: 0
Location: Mishima, Japan

Post by Kirill Kryukov »

Hi Guy, I second what Dan said. The speed does not matter; all we need is the availability of each file for long enough to download it. It will be downloaded just once, and it may be an FTP transfer, any p2p of your choice, or any other working method. As soon as any of the project members has the files, they will be distributed to the others eventually.

Also, thanks for the offer to share by post! I live in Japan, and I'm not sure if you consider that within your postal range or not. I guess it will be possible to find someone nearer to you, but if not I'll be happy to be the one. Knowing the approximate place where you live may help too, to find someone near you.
Wulff
Posts: 53
Joined: Thu Jan 05, 2006 4:08 pm
Sign-up code: 10159

Post by Wulff »

Hi!

As for postal exchange, I live in Denmark, but if you could send me the email addresses of the two other people mentioned, I could drop them a note and see what they say...

Thanks in advance.
Aarontay
Posts: 5
Joined: Mon Feb 13, 2006 9:53 pm
Sign-up code: 0

Re: 6-man and 7-man EGTs

Post by Aarontay »

guyhaw wrote:
DTZ EGTs will be less than 50% the size of DTM ones: a pity the industry standard is currently DTM. Time for a change?

g
50% seems to be a pretty large saving. But you would have to convince people to switch. Particularly when everybody is already tied to Nalimov's.

You probably need to convince some new top engine (not sure how) that has not yet implemented EGTs to support yours (too bad Rybka and Fruit are out of reach), plus a good FTP site with high bandwidth for downloading at least the basic 3-4-5 sets.

Even then I'm not so optimistic that people would download a few gigs' worth just to let one engine use them. Unless the name of the engine is Chessmaster, maybe. :)
mbourzut
Posts: 30
Joined: Fri Mar 03, 2006 7:48 pm
Sign-up code: 0

7-man EGTB

Post by mbourzut »

Cato the Younger wrote: There seem to be two alternatives. Win-Lose-Draw (WLD) tables could become the standard; they are much smaller and perhaps some hybrid system of WLD and Nalimov would work. That seems to be the approach most in favor now, I hear. Alternately, several years from now the gigantic Nalimov files for 7-man could be completely generated and put into an Internet digital library, to which chess engines might conceivably refer during the course of a game. Obviously current technology couldn't keep up, but with high-speed Internet2 coming somewhere down the line and some counter-latency measures who knows what might be possible!
I'm not a great fan of WLD tables. For those endings where they present a significant space saving, such as krbnkrb, which are general wins, WLD will not help much in actually making progress. For other endings with lots of drawn positions, conventional compression techniques are already pretty effective.

From a generation point of view, WLD is also not that much faster than computing the full table, and one can of course trivially construct WLD tables from the complete tables if desired.
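A minimal sketch of that derivation, assuming the full table is just an in-memory array of signed depths (positive = side to move mates in that many moves, negative = side to move is mated, zero = draw) - an encoding made up for illustration, not the actual on-disk format:

```python
# Sketch only: derive a win/draw/loss (WDL) table from a complete DTM table.
# The DTM encoding assumed here (one signed integer per position, 0 = draw,
# >0 = side to move mates in n, <0 = side to move is mated in n) is made up
# for illustration; it is not the Nalimov file format.

WIN, DRAW, LOSS = 1, 0, -1

def dtm_to_wdl(dtm_values):
    """Collapse depth-to-mate values into win/draw/loss values."""
    wdl = []
    for dtm in dtm_values:
        if dtm > 0:
            wdl.append(WIN)
        elif dtm < 0:
            wdl.append(LOSS)
        else:
            wdl.append(DRAW)
    return wdl

print(dtm_to_wdl([3, 0, -12, 0, 1]))   # [1, 0, -1, 0, 1]
```

Two bits per stored value are enough for the three outcomes, which is where the size saving of a WDL bitbase comes from.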
gambit3
Posts: 57
Joined: Mon Mar 06, 2006 8:06 am
Sign-up code: 0

tb method is cumbersome

Post by gambit3 »

if you wish for 50% space saving, change compression methods. there must be something else out there that can access compressed data directly, as in upx, pklite and the like, which doesn't add too much overhead in terms of time yet provides the data in a smaller package...

i do have a couple of questions though regarding the seeming inefficiency of tables for size vs. purpose:
why in hell are all the draws stored at all?
logically, you only need wins and losses... losses so you know what not to play, or at least what will last longest, and wins to know what is best to play. anything not fitting into the above two categories is a draw... seems pretty logical to me, why hasn't it been implemented yet?

i admit that nalimov cleaned the clock when he released his tbgen for 5 man bases, but i only use a black set when at playchess.com and performance is completely unaffected. there's already a 50% saving of space. i only keep others around for generation of further tables when i have a tertiary storage system i think of as capable at a later stage.

further, any engine can be told that a position is a draw by a generic algorithm that translates absence of position in a table as a draw. there's another space saving. chains between tables get captured by a different algorithm that detects table change and searches target table with first algo to see if position exists in tables.
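A rough sketch of the probing scheme described above, with each table modelled as a plain dictionary keyed by an opaque position string - a made-up representation for illustration only, not how Nalimov tables are actually indexed (see the indexing discussion further down the thread):

```python
# Sketch of the "absence means draw" probing scheme described above. Tables
# here are plain dicts keyed by an opaque position string; only decisive
# positions are stored, and a missing key is reported as a draw.

tables = {
    "kqk": {"pos-a": 10, "pos-b": 4},   # decisive positions only
    "krk": {"pos-c": 16},
}

def probe(material, position_key):
    """Return the stored depth if present; an absent position is a draw."""
    entry = tables.get(material, {}).get(position_key)
    return 0 if entry is None else entry

def probe_after_capture(new_material, position_key):
    """The 'table change' case: a capture drops into a smaller table,
    which is probed with the same absence-means-draw rule."""
    return probe(new_material, position_key)

print(probe("kqk", "pos-a"))      # 10: stored win
print(probe("kqk", "pos-zzz"))    # 0: not stored, treated as a draw
```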

with the speed with which hd capacities are improving, it is better to get all the six man bases out now, since projection along current trends puts them all on a single hd within 5 years. 7 man tbs will be completely generated, and be around 265 terabytes [3=63k 4=31M 5=7.6G 6=1.3T].
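For what it's worth, the extrapolation can be checked against the per-set figures quoted above; the growth factors and the implied 6-to-7-man factor below are just arithmetic on those figures, not measurements:

```python
# Sanity check of the size extrapolation, using the per-set figures quoted
# in the post above.
KB, MB, GB, TB = 1e3, 1e6, 1e9, 1e12
sizes = {3: 63 * KB, 4: 31 * MB, 5: 7.6 * GB, 6: 1.3 * TB}

for n in (4, 5, 6):
    print(f"{n - 1}- to {n}-man growth: {sizes[n] / sizes[n - 1]:.0f}x")
# 492x, 245x, 171x -- the factor shrinks with each extra man

# A 265 TB figure for 7-man corresponds to a factor of about 204 over 6-man,
# which sits within the range of the factors above.
print(f"implied 6- to 7-man factor: {265 * TB / sizes[6]:.0f}x")
```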

according to the 100th ape principle, all that needs to be done by any 1 person sharing any 1 table is to be able to upload it 1.6 times for it to become universal inside the same time frame. this means, in effect, because not all downloaders will be social enough for that, that if something is uploaded twice, there is a buffer of no less than 20%, ASSUMING that the tables are correct.

so just get those tables on emule, regardless of the speed
those who can, do
those who can't, teach
Kirill Kryukov
Site Admin
Posts: 7399
Joined: Sun Dec 18, 2005 9:58 am
Sign-up code: 0
Location: Mishima, Japan

Re: tb method is cumbersome

Post by Kirill Kryukov »

gambit3 wrote:if you wish for 50% space saving, change compression methods. there must be something else out there that can access compressed data directly, as in upx, pklite and the like, which doesn't add too much overhead in terms of time yet provides the data in a smaller package...
Actually, UPX and pklite don't access compressed data directly. They unpack the executable into memory first, where it is then executed in its uncompressed form.
gambit3 wrote:i do have a couple of questions though regarding the seeming inefficiency of tables for size vs. purpose:
why in hell are all the draws stored at all? logically, you only need wins and losses... losses so you know what not to play, or at least what will last longest, and wins to know what is best to play. anything not fitting into the above two categories is a draw... seems pretty logical to me, why hasn't it been implemented yet?
It is the indexing that creates this problem. I guess you imagine that an EGTB contains both the position *and* the DTM. In that case it would be possible to just remove all draws and save mates only. But in reality an EGTB does not keep positions, only DTM values. When you query an EGTB for some position, the probing code does some complex calculations to find out where exactly the DTM for that position is in the EGTB file. That's why the draws are there - they make the probing code much simpler. If there were no draws, you would have to do a lot more computation to find a position's index; actually you would have to repeat the generation of some part of the tablebase for that.
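A toy illustration of the point, using just two bare kings, no symmetry reduction and nothing like the real Nalimov index function - only to show why every position, draws included, has to own a slot when the index is pure arithmetic:

```python
# Toy "tablebase" for two bare kings. The index of a position is a
# closed-form function of the piece squares, so the value array must have a
# slot for *every* index, draws included. Real Nalimov indexing exploits
# symmetry and is far more involved; this only shows why dropping the draws
# would break the simple position -> index arithmetic.

NUM_SQUARES = 64

def index_of(wk_square, bk_square, white_to_move):
    """Closed-form index: no search, no stored list of positions."""
    side = 0 if white_to_move else 1
    return (side * NUM_SQUARES + wk_square) * NUM_SQUARES + bk_square

# One byte of "DTM" per slot; 0 stands for draw. Every slot exists, even for
# drawn (or even illegal) positions, because the probe below relies on pure
# arithmetic to find its entry.
values = bytearray(2 * NUM_SQUARES * NUM_SQUARES)

def probe(wk_square, bk_square, white_to_move):
    return values[index_of(wk_square, bk_square, white_to_move)]

print(probe(wk_square=0, bk_square=18, white_to_move=True))  # 0 -> draw
```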
gambit3 wrote:i admit that nalimov cleaned the clock when he released his tbgen for 5 man bases, but i only use a black set when at playchess.com and performance is completely unaffected. there's already a 50% saving of space. i only keep others around for generation of further tables when i have a tertiary storage system i think of as capable at a later stage.

further, any engine can be told that a position is a draw by a generic algorithm that translates absence of position in a table as a draw. there's another space saving. chains between tables get captured by a different algorithm that detects table change and searches target table with first algo to see if position exists in tables.

with the speed with which hd capacities are improving, it is better to get all the six man bases out now, since projection along current trends puts them all on a single hd within 5 years. 7 man tbs will be completely generated, and be around 265 terabytes [3=63k 4=31M 5=7.6G 6=1.3T].

according to the 100th ape principle, all that needs to be done by any 1 person sharing any 1 table is to be able to upload it 1.6 times for it to become universal inside the same time frame. this means, in effect, because not all downloaders will be social enough for that, that if something is uploaded twice, there is a buffer of no less than 20%, ASSUMING that the tables are correct.

so just get those tables on emule, regardless of the speed
Yes, it's good thinking to consider that not all downloaders are social enough. Still, I think many of us (those who can afford to *have* the 6-man EGTBs) have some sort of broadband connection and can share all we've got, theoretically. The speed may not be fast, but the availability is more important.
gambit3
Posts: 57
Joined: Mon Mar 06, 2006 8:06 am
Sign-up code: 0

Post by gambit3 »

Kirill Kryukov wrote: Actually, UPX and pklite don't access compressed data directly. They unpack the executable into memory first, where it is then executed in its uncompressed form.
ok, fair point, but there is still the only-needing-one-colour thing (unless you're generating bases), which halves tb storage requirements while incurring little (in my experience with junior) or no (in my experience with gambit tiger and rybka) performance hit.
those who can, do
those who can't, teach
Tom Dyer
Posts: 1
Joined: Mon Jan 23, 2006 5:47 pm
Sign-up code: 0

Another question on Nalimov egtb structure

Post by Tom Dyer »

Kirill Kryukov wrote:It is the indexing that makes this problem. I guess you imagine that EGTB contains both position *and* DTM. In such case it would be possible to just remove all draws and save mates only. But in reality EGTB does not keep positions, only DTM. When you query EGTB for some position, the probing code is doing some complex calculations to find out where exactly is the DTM for that position in the EGTB file. That's why there are draws there - they make probing code much simpler. If there were no draws, you would have to do a lot more computation to find a position index, actually you would have to repeat a generation of some part of the tablebase for that..
Hi Kirill, thank you - your explanation also helped clear up a misconception I had about EGTBs (only the DTM, not the position, is stored). Since an EGTB entry for a position only contains the DTM, I am interested to know specifically what data each EGTB entry actually stores. I am not so much interested in the advanced compression techniques used to create the EMD files, but rather in a layman's explanation of what information is stored for each EGTB position (maybe: win/loss in # moves (or draw), the outcome (w/l/d) of each legal move possible from that position, the best move to play? Or is it the probing code itself that somehow "determines" this knowledge, and it is not stored directly in the db?).
jshriver
Posts: 298
Joined: Tue Jan 31, 2006 5:59 am
Sign-up code: 0
Location: Toledo, OH, USA

Post by jshriver »

Kirill Kryukov wrote:Hi Guy, I second what Dan said. The speed does not matter, all we need is availability of each file for long enough to download it. It will be downloaded just once, it may be FTP transfer, any p2p of your choice, or any other working method. As soon as any of project members has the files, they will be distributed to others eventually.
Can I join the team? I am currently hosting all of the files I have on emule and on the web.
jshriver
Posts: 298
Joined: Tue Jan 31, 2006 5:59 am
Sign-up code: 0
Location: Toledo, OH, USA

Re: tb method is cumbersome

Post by jshriver »

gambit3 wrote:if you wish for 50% space saving, change compression methods. there must be something else out there that can access compressed data directly, as in upx, pklite and the like, which doesn't add too much overhead in terms of time yet provides the data in a smaller package...
What about bzip2? It's trivial to write a bzip reader/writer. The man page has a great explanation and I was able to write bzcat in about 5min using it.
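A sketch of the same exercise in Python (the original was written against the libbzip2 C API; this one just leans on the standard-library bz2 module):

```python
# A "bzcat in a few lines" sketch, using Python's standard-library bz2 module
# rather than the libbzip2 C API referred to above.
import bz2
import shutil
import sys

def bzcat(path):
    """Decompress a .bz2 file to stdout, streaming as it goes."""
    with bz2.open(path, "rb") as src:
        shutil.copyfileobj(src, sys.stdout.buffer)

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        bzcat(filename)
```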

It also has the best compression I know of.

-Josh
gambit3
Posts: 57
Joined: Mon Mar 06, 2006 8:06 am
Sign-up code: 0

Post by gambit3 »

ok REALLY off topic for the first part here, but josh, forget any sort of burrows-wheeler transform for tables. on the bigger ones that compress less, datacomp will do better. i can tell you this for sure without ever having to look further into it. best bet would be something along the lines of a modified (for random accessibility) lz77-type model with ~24M (speed/compression tradeoff optimum considering decompression time/size on a 3GHz machine) dictionary and 64MB blocks...
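A sketch of that random-access layout, with zlib (a DEFLATE/LZ77-family codec) standing in for the 'modified lz77-type model' and demo-sized blocks instead of the suggested 64MB ones:

```python
# Sketch of the random-access idea: compress the table in independent
# fixed-size blocks and keep an index of compressed block sizes, so a probe
# only decompresses the one block containing the entry it needs. zlib stands
# in for the suggested lz77-type codec; the 4 KiB blocks are demo-sized.
import zlib

BLOCK_SIZE = 4096  # entries (bytes) per block, demo-sized

def compress_table(raw: bytes):
    blocks, index = [], []
    for start in range(0, len(raw), BLOCK_SIZE):
        blocks.append(zlib.compress(raw[start:start + BLOCK_SIZE], 9))
        index.append(len(blocks[-1]))        # compressed size of this block
    return b"".join(blocks), index

def probe(compressed: bytes, index, entry_no: int) -> int:
    """Fetch one byte-sized entry without decompressing the whole table."""
    block_no, offset_in_block = divmod(entry_no, BLOCK_SIZE)
    start = sum(index[:block_no])            # offset of the wanted block
    block = zlib.decompress(compressed[start:start + index[block_no]])
    return block[offset_in_block]

raw = bytes(i % 7 for i in range(20_000))    # stand-in "tablebase" data
packed, idx = compress_table(raw)
assert probe(packed, idx, 12_345) == raw[12_345]
```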

anyway, for the rest, how about everybody just buy rybka and forget egtables?! seriously. a 3000 game match that vas says he's played with vs. without tbs resulted in a 2 point elo difference between the two! that's quite a statement, and i'm thinking about asking him for the db of the games that resulted. all i know so far is that the games were at blitz time controls. however, if this is a serious result, then tables with rybka are not such an advantage, and only an academic addition. consider also that gambit rybka found m87 in a 10+10 game on playchess vs. sausageman a few days ago, though with 5man backing. the nice thing was that the mate was detected with 10 men still on the board.

in any case, if the result mentioned above is accurate, then tables aren't necessary with rybka at all, even analysing, since rybka is quick enough in its seeming slowness to get the job done. 100% saving on table storage. THAT'S nice! :>

i can post the m87 find game if anyone is interested, but the result started a discussion on playchess in which it was revealed that a legitimate m273 has been discovered in 7 man bases.

wow, talk about crossing topics...

have fun all

rob
those who can, do
those who can't, teach
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

New Chess Endgame Record

Post by guyhaw »

Maybe old news to EGTers, but I see that Marc Bourzutschky has found a new, 'deepest' endgame position using Yakov Konoval's superfast code.

http://www.volny.cz/evcomp/tablebase.htm#330moves is the best source.

Caveat: see the '2006-03 Summary' textfile in the CHESSBASE database for the full line, as it is scrambled after move 301 in the actual line. The reason appears to be that CHESSBASE cannot handle a line longer than 600 plies: I wonder why!

Anyway, the btm KQBNKQB position has DTZ = DTC = a maximal 330m which rather tops the previous KRRNKRR maxDTZ of 290m.

These depths compare with the longest H-H game on record of 269m and the longest decisive H-H game on record of 193m.

With no sunlight penetrating to those depths, it's amazing that any piece can see where it's going.

g
jshriver
Posts: 298
Joined: Tue Jan 31, 2006 5:59 am
Sign-up code: 0
Location: Toledo, OH, USA

Re: New Chess Endgame Record

Post by jshriver »

guyhaw wrote:Maybe old news to EGTers, but I see that Marc Bourzutschky has found a new, 'deepest' endgame position using Yakov Konoval's superfast code.

http://www.volny.cz/evcomp/tablebase.htm#330moves is the best source.
I took a look at the link. Curious, but is the Yakov Konoval program available online? If so, where?

-Josh
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

Yakov's program

Post by guyhaw »

No, Yakov's program is not available online. It's very much in R&D and pawnless mode, with Marc Bourzutschky doing the 7-man production running. So, we have single-P-endgames and multiple-P-endgames (with e.p.) to come yet.

However, a new feature of EGT generation is that the EGT-verification is done by Marc's code which was written 'blind' before he saw Yakov's generation code. This gives new levels of Data Assurance which were not present before. There may also be opportunities to cross-check EGTs which were not available before: YK-Nalimov, YK-Koning (FEG).

YK's program is in Assembler-586 which gives it an extraordinary turn of speed. I have conjectured that in future, engines on multi-processors will be able to generate 6-man EGTs on demand with this code.

This rather underlines the fact that generating EGTs is just a special form of forward search, the only difference being that one starts with a 'search domain', e.g. KRNKNN, instead of with a particular, target KRNKNN position. As I see it, there are no logical grounds for denying Cs the use of 'unfair' n-man EGTs in a Human-Computer contest.

g
gambit3
Posts: 57
Joined: Mon Mar 06, 2006 8:06 am
Sign-up code: 0

Post by gambit3 »

now curious. i haven't looked yet, but does this m330 satisfy dtc50? or is it the academic best and too long for tourney? the latter seems the only possibility given two kings and only 5 (x50 = 250 moves) other pieces.

now i have looked, and while interesting for academia, it is clearly not a tournament-possible ending. it is DTC, but not DTC50 compatible. pity. maybe they'll find one such in the pawned endings.
those who can, do
those who can't, teach
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

DTZ = 330 record and the 50-move rule

Post by guyhaw »

The new record shows a line where KQBN requires 330 moves (660 plies) to effect the first 'conversion', i.e., force-change and/or mate.

In this case, the bB is captured.

Thus, Black would have been able to _claim_ a draw after 50 moves on each side.

Note however that the '50-move rule' is a 'rule of play' rather than an intrinsic 'rule of the game', i.e. a core-rule defining the board, men, moves and objectives of the game. Further, I believe that if the organisers of a tournament wish to suspend the 50-move rule or replace it with a k-move rule, they are free to do so.

Thus, it would not be 'definitive' to create EGTs that respected a particular k-move rule. The DTR metric, proposed by me but not yet demonstrated in any implemented EGT :( would allow any chess-engine to 'limbo' under any imposed k-move rule if that were possible, whatever 'k' was.

DTZ EGTs for endgames with maxDTZ > 50 are not just 'academic'. A fallible player may not defend optimally, and observations show that human players do not attack or defend optimally. Thus, wins which are theoretically going to be frustrated by the 50-move rule may in fact be won.

I've investigated this at some length, creating the Reference Fallible Endgame Player (RFEP, agent) to do so. Rafael Andrist extended WILHELM to effect the computations in brilliant style.

The conclusion is that, not only might one include EGT-exploitation in one's chess engine, but one might include Opponent Modelling using the RFEP concept to play better than just metric-based-strategy-optimally. Thus, one might choose between equi-optimal moves or even risk conceding depth in the hope of winning greater depth than by playing optimally. The 'OM' and 'risk' ideas were first seen by Jansen.

g
gambit3
Posts: 57
Joined: Mon Mar 06, 2006 8:06 am
Sign-up code: 0

Post by gambit3 »

guyhaw wrote: DTZ EGTs for endgames with maxDTZ > 50 are not just 'academic'. A fallible player may not defend optimally, and observations show that human players do not attack or defend optimally. Thus, wins which are theoretically going to be frustrated by the 50-move rule may in fact be won.
a highly interesting approach, yet it doesn't alter that it will likely be engines, and not humans, that are able to take advantage of these ''insights'', hence, it is academic. the greatest players will study these positions, naturally, and employ the knowledge gained to effect wins otb, however, we don't all have surnames ending in ''-ov'' or ''-ar'' or ''ski'' or ''wicz'' :>

while you are correct that the 50 move rule is at the director's discretion, the director can also disqualify players, which also affects tournament outcomes and results.
both the 50 move rule and the clock were introduced for the same reason: to prevent games that are not correspondence from going on literally for days at a time, thus keeping manageable the time requirement for any given tournament. ''open-ended'' tournaments are not widely watched, however much better in quality they may be. further, there is no guarantee, given your own point about the possibility of fallibility in players, that the quality of the game will, in fact, be better. thus my contention is that it IS worth the trouble for the ''limited'' tournament tables to be created in order to provide the engines with a practical (if not exactly optimal) solution to a given position.

it was the academic ideal of perfection, rather than some practical consideration, which drove the development of chess programs from the 70s to today, to answer the question ''can a computer beat man at chess consistently?''. in fact, if memory serves, the motive was even originally ''can a computer be taught to think like a man?'' it took until the mid 80s to realise that the answer was no if no reasonable chessplayers of the human kind got involved in the development of the programs.
it was the academic consideration of perfect endings, to answer the same question as before, which ultimately drove the development of the software that produces, or has produced, tables. only the storage and transmission media are failing, otherwise every engine would be sold with a complete 6 man set, or what there is of it.
it is again academic consideration of perfection over practical that leads to the belief that the 50 move rule should be disregarded in construction/generation of tables. in practice, there is no knnkp win for the knights with the pawn on the edge, yet according to the tables, there is at least one.

in this regard, the 50 move rule is, in my opinion (note: this is only opinion) a good one: it likely will have solved more disputes than it has caused. it may be time for it to change to a longer number (100? 200? how many known mates are there that are longer than 50 moves total, anyway? kbnk is already 34!), though i believe the rule itself should not be eradicated.

in the spirit of loving the idea of fallibility, a fallible player may not know there is a mate, let alone how long it is, from a given position where the mate is known to be hundreds of moves away. i know i didn't even know about the 87 move mate gambit played, nor could i have picked the moves exactly within the confines of that 10+10 game, and i am otb in longplay an 1800-odd rated player. not world class exactly, but no complete bunny either. rybka may have even picked it without tables, and without knowing how far away it was, but that is not certain, since the tables were there for it to use as an aid. in that regard, i consider that these are an extraneous source and an unfair advantage for the computer in late middlegame and endgame situations. the very fact that i have personally seen gambit hit the tables with 18 men still on the board only serves to strengthen my belief in this.
note again, this is not proof, it is opinion and logical reasoning.
all that said, i still am collecting tables. there is no better addition to the library than a perfect solution. i have improved my endings no end since tables. it is, among other things, a noble goal to seek perfection in something. that doesn't change that it is academic.

have fun
those who can, do
those who can't, teach
guyhaw
Posts: 489
Joined: Sat Jan 21, 2006 10:43 am
Sign-up code: 10159
Location: Reading, UK

Optimal Strategies and k-move rules

Post by guyhaw »

Kasparov is on record as saying that he likes EGTs because they teach him things. Better chess players than me can see pattern in the manoeuvres, even if these have not translated into rulesets for computers to use.

There is a theory that the 50-move rule was created for the benefit of coffee-house professional players who wanted to get on to the next wager over the board.

The issue today is that one would certainly not want to see an obvious win frustrated by the 50-move flag being waved by the defender. An example is in KBBKN, where there are now tractable wins deeper than 50 moves.

The 50-move rule, or any k-move rule, can be factored into EGT-generation, thanks to 'gtbgen', an amazing Bourzutschky generalisation of Nalimov's 'tbgen'. One simply sets the move count to zero given a capture or P-push, and declares the result to be a draw if k moves have been played 'prior' to that move-count-zeroing move.
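To make the zeroing-counter idea concrete, here is a sketch of the same logic applied to checking a single line of play against a k-move rule - a simplification (counting whole moves, outside any retrograde generation), not what gtbgen itself does:

```python
# Sketch of the zeroing-counter logic, applied to a single line of play.
# A move is represented only by a flag saying whether it zeroes the counter
# (capture or pawn push) -- a deliberate simplification for illustration.

def survives_k_move_rule(zeroing_flags, k=50):
    """True if no stretch of k consecutive moves passes without a capture
    or pawn push anywhere in the line."""
    moves_since_zeroing = 0
    for is_capture_or_pawn_push in zeroing_flags:
        if is_capture_or_pawn_push:
            moves_since_zeroing = 0
        else:
            moves_since_zeroing += 1
            if moves_since_zeroing >= k:
                return False  # the defender could claim a draw here
    return True

# The KQBNKQB record line: 330 moves to the first conversion, so far more
# than 50 moves pass with no zeroing move and the claim becomes available.
record_line = [False] * 329 + [True]
print(survives_k_move_rule(record_line))  # False
```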

Thus, Tamplin has generated all 6-man P-less DTZ50 EGTs - and papers on this are listed at:

http://www.is.reading.ac.uk/people/G.Ha ... blications

These EGTs demonstrate that the move that preserves the win in the context of a 50-move rule is sometimes neither DTC-, DTM- nor DTZ-optimal. Some so-called 'pathological' positions, in which the DTZ50- and DTZ-optimal moves differ, sometimes by a great deal in depth terms, are listed in the papers.


The work done on RFEP and Opponent Modelling in the endgame is also found via the website above. 'Model Endgame Analysis', Andrist/Haworth (2003) is the latest exposition of the ideas available from the website.

Jansen showed (~1992) that in a certain KQKR position, against an opponent playing random moves, a non-optimal move would be expected to achieve more than an optimal one.

RFEP positions the 'random player' at the zero-centre of a spectrum of RFEPs, positive skill on one side, negative skill on the other. With the RFEP concept, we can see that the opportunity for Jansen's risking of depth is largely limited to low-skill opponents. Nevertheless, it is a good example of non-optimal play being perhaps relevant.


You express the opinion that the use of EGTs is "unfair". They are an example of precomputation, as are Opening Books, and arguably 'evaluation functions'. But then, humans arrive with their more limited and fallible memory of evaluation functions, opening books and endgame theory as well.

I argue that their use is fair if the rules of H-C contests allow them: the question is whether the rules are fair - and I believe they are. There have been H-C contests where 6-man EGTs were not allowed: I'm just very glad that they never became relevant as this rule would have occasioned great audience disappointment.

Perhaps the rules should be that an engine can use any EGTs it can compute during a game, as EGT-generation is just a 'different' forward-search strategy and it seems unreasonable to constrain search techniques. YK's software generates 5-man EGTs _very_ quickly so one could take it as a given that 5-man EGTs would be available, even if they were not available at the start of the game.

g