EGTB sharing - new plan

Endgame analysis using tablebases, EGTB generation, exchange, sharing, discussions, etc.
Codeman
Posts: 85
Joined: Fri Oct 19, 2007 7:50 pm

Re: EGTB sharing - new plan

Post by Codeman »

I have just started experimenting with threads to do loading and calculating at the same time ... it looks quite promising.
h.g.muller wrote:Sub-end-games with a captured piece are much smaller, and you would need them to 'seed' won positions with wtm, and non-lost positions with btm. If you are building DTC or DTZ, all this seeding takes place at the first cycle (and you would only need a bitbase of the sub-end-game). If you are building DTM you would have to do it every cycle. For capture conversions the successor TBs are 64x smaller, so this is not a significant overhead. If the 'conversions' are Pawn moves to a successor P-slice, the successor slice is as big as the current one. But the P-slices are comparatively small, and you can hold the successor (as well as the slice itself) in RAM entirely. If the conversion is a promotion, you have the biggest problem, as the successor end-game is now far bigger than the P-slice (64 times, if there are still other Pawns on the board). But you only need the positions with Q on 8th rank from this successor. It would probably pay to make separate table-bases of such promotion positions (also listing where under-promotions are needed, and which ones). This is the hardest problem of the entire enterprise. With DTZ you fortunately only need the info as a bitbase.
You know I am building a bitbase and wanted to use 2 bits for the btm positions. But I found out I have to save btm black-wins as well, so that I only have to check the sub-games where black captures or promotes in the first cycle.
At one point you said "for btm you would have undecided, lost in the current cycle, lost in the last cycle and lost earlier, which you could indicate in two bits". So what would you suggest? One option would be to save the black wins instead of 'lost earlier'. Or I could use 3 bits per position, but how could I take advantage of the 3 additional states?
h.g.muller
Posts: 223
Joined: Mon Feb 19, 2007 8:24 am
Sign-up code: 0
Location: Amsterdam
Contact:

Re: EGTB sharing - new plan

Post by h.g.muller »

I would not recommend building the black wins and the white wins simultaneously. They need different access patterns, and their storage needs just add up. (The wtm positions would need many states in that case.)

Just build them separately. When the building is finished you no longer need to distinguish the lost positions by cycle in any way, and both btm and wtm positions need only a single bit. You could merge them after they are finished. Although I am not sure why you would want them merged.
Codeman
Posts: 85
Joined: Fri Oct 19, 2007 7:50 pm

Re: EGTB sharing - new plan

Post by Codeman »

h.g.muller wrote:I would not recommend building the black wins and the white wins simultaneously. They need different access patterns, and their storage needs just add up. (The wtm positions would need many states in that case.)

Just build them separately. When the building is finished you no longer need to distinguish the lost positions by cycle in any way, and both btm and wtm positions need only a single bit. You could merge them after they are finished. Although I am not sure why you would want them merged.
I am talking about the accessing of sub-end-games:
When checking whether, with black to move, black has any move that doesn't reach a wtm-white-won position, black can often capture a white piece. So the sub-game has to be checked to see whether black can avoid the loss there. The question is how the algorithm can handle this sub-end-game query. One option I saw was, in the first cycle, to go through all sub-end-games where black captures a piece and, if it is a winning capture, set a bit on the btm position.
h.g.muller
Posts: 223
Joined: Mon Feb 19, 2007 8:24 am
Sign-up code: 0
Location: Amsterdam
Contact:

Re: EGTB sharing - new plan

Post by h.g.muller »

Ah, OK, now I see what you mean. Sorry for the confusion. You are talking about btm positions in which black can escape to a non-won wtm position in a successor EGTB. Such a position would, when it is hit as a potentially lost position, always revert to 'undecided' during the 3rd (verification) ply.

I usually describe this condition as black having a 'permanent escape', as the non-won wtm position it can reach in the successor TB will never change to another state, as the successor TB is supposed to be finished before we are building the current one. It would indeed be nice if these btm positions with permanent escapes were somehow recognizable, so that it is not needed to waste time on verification.

But saving time on the verification step is actually a problem that transcends probing of successor tablebases. For in-RAM building I have eliminated this step by using the DTx field of undecided btm positions to store the count of the number of 'escapes', i.e. moves to non-won wtm positions. (E.g. using positive numbers for DTx, and negative numbers for counts.) Then, each time a wtm position is declared won, all btm predecessors found from it with the second ply in the tree simply get their escape count decremented. There is then no need for a 3rd tree ply: if the escape count remains non-zero, the position remains undecided. If it does hit zero, the position turns into a lost one, and can be assigned DTx=N+1. Escapes to a successor TB would always remain incorporated in the count, as the 'undecided' wtm positions to which they convert will never change state, as they are actually truly non-won wtm positions in a finished TB. The idea of distinguishing undecided positions with a permanent escape from other undecided positions is sort of a partial implementation of this idea.
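The escape-count bookkeeping described above can be sketched on a toy game graph. This is only an illustration of the counting idea, with made-up position names and a hand-built move graph, not code from any actual generator:

```python
# Toy illustration of the escape-count trick on an abstract game graph.
# Each undecided btm position stores the number of its moves that lead to
# non-won wtm positions ("escapes"). When a wtm position is declared won,
# every btm predecessor loses one escape; a position whose count hits
# zero is lost, and no verification ply is needed.

from collections import defaultdict

# btm position -> list of wtm successors (black's moves); hypothetical graph
btm_moves = {
    "b1": ["w1", "w2"],   # two escapes initially
    "b2": ["w2"],         # one escape initially
}

# reverse map: wtm position -> btm predecessors that can move to it
preds = defaultdict(list)
for b, ws in btm_moves.items():
    for w in ws:
        preds[w].append(b)

escapes = {b: len(ws) for b, ws in btm_moves.items()}
lost = {}          # btm position -> DTx at which it became lost

def declare_won(wtm_pos, cycle):
    """Called when wtm_pos is newly marked 'won' in cycle N."""
    for b in preds[wtm_pos]:
        if b in lost:
            continue
        escapes[b] -= 1
        if escapes[b] == 0:          # all black moves refuted
            lost[b] = cycle + 1      # DTx = N + 1

declare_won("w1", 0)   # b1 still has escape w2, stays undecided
declare_won("w2", 1)   # now both b1 and b2 run out of escapes
```

The point is that a btm position is declared lost the moment its last escape is refuted, with no third tree ply.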

For building in RAM, storing the escape counts makes sense, as RAM access is fast enough to outperform verification. For building on disk, the counts would have to be stored and read back from disk, and as this very much reduces compressibility compared to grouping them all as 'undecided' (which then likely gets a single-bit Huffman code), it is not competitive at all. The partial implementation might suffer from the same problem.

I wonder if indicating the positions with a permanent escape separately would buy very much speedup in the first place (even apart from the overhead it gives by the need of more bits in the btm state, and consequently worse compression and longer HD transfer times). The alternative would be to try captures first during verification, and hide the positions with one less white piece in the broken positions of the full EGTB. This would work quite naturally if you would (for the wtm positions) consider those positions where a black and white piece coincide as positions where that white piece is absent. In the first cycle you would then fill the wtm state (a single 'won' bit) of such positions from the successor EGTB. You could use the same access pattern for this as you normally use during the cycle: if you would be treating a slice where, say, white pieces 1 and 2 are static, and white piece 3 and 4 (and the black pieces) dynamic, you would probe the successor EGTBs with white piece 3 or 4 missing, and piece 1 and 2 in the same place as for the current slice. Such successor slices would be 64 times smaller (because they have one less piece). The state of each wtm position in those successor EGTBs you would copy to several broken positions of the current slice, adding the missing dynamic white piece to the position on the square of any of the black pieces.
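The 'hide sub-end-games in broken positions' mapping can be illustrated with a toy index function. The base-64 layout and the square numbers are assumptions for the sake of the example, not the actual indexing scheme:

```python
# Toy index over two dynamic pieces: one base-64 digit per piece
# (hypothetical layout, for illustration only).
def pair_index(white_sq, black_sq):
    return white_sq * 64 + black_sq

w3_sq, b_sq = 12, 30   # arbitrary example squares (0..63)

before = pair_index(w3_sq, b_sq)    # position before the capture
# If black captures white piece 3, the black piece lands on w3_sq, so the
# resulting position is probed at the 'broken' index where both pieces
# coincide -- the slot the first cycle pre-filled from the successor EGTB.
probe = pair_index(w3_sq, w3_sq)
print(before, probe)   # 798 780
```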

If a broken wtm position was thus marked as 'won', you would (just like with any wtm position that is marked won) do the second (retrograde) ply of the tree to get to the potentially lost btm positions. But you would not do all retrograde black moves, just the 'uncaptures' with the black piece that coincided with a white piece. The 3rd (verification) ply of the tree would be entirely normal.

After the first cycle, the info about wtm positions in subset EGTBs is then entirely incorporated in (some of) the broken positions of the full EGTB, and available for probing during verification. This means that the forward black moves tried during verification can include capture of white pieces, which map to the intuitively expected index (putting the black piece on top of the white one that it captures). It seems a good idea to try these captures first (as is known from any forward search), as they have a comparatively good chance to lead to non-won wtm positions. And if they do, the btm position remains undecided, and for the positions with permanent escapes the verification would fall through after a single probe. This would be almost as good as recognizing the btm position as undecided without any probes.
h.g.muller
Posts: 223
Joined: Mon Feb 19, 2007 8:24 am
Sign-up code: 0
Location: Amsterdam
Contact:

Re: EGTB sharing - new plan

Post by h.g.muller »

I did a kind of back-of-the-envelope calculation on what would be needed to build 6-men pawnless bitbases entirely in RAM. A pawnless 6-men has 8G positions. The idea would be to keep this in RAM as compressed data, and use my slice-by-slice algorithm for the generation, packing and unpacking the data from and to a fixed-length encoded buffer (allowing random access) capable of holding a single slice. This buffer could then use a 4-bit encoding, one bit for the wtm state (won or non-won), and 3 bits for btm state (undecided, non-lost (= permanent escape), lost in current cycle, lost in previous cycle, lost earlier, with 3 spare codes).

A simple Huffman coding scheme would distinguish the 10 states (combined wtm and btm) by codes 0, 10, 1100, 1101, 11100, 11101, 11110, 11111, in decreasing frequency. The code 0 would initially be assigned to undecided & non-won, which is about 60% of the TB in the early cycles, while 10 would be undecided & won (about 30%). As the number of won positions increases during the building, at some point it would be better to change to 0 = undecided & won, 10 = lost earlier & won. The fraction of positions lost in the current or previous cycle will be pretty small at any stage, and these will get the 5-bit codes. Typical frequencies occurring during TB building would then compress the data to about 1.7 bits per position, meaning the thus compressed TB would take up 1.7GB of RAM.
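A rough check of the ~1.7-bit figure, using the code lengths above and assumed (illustrative) state frequencies for an early cycle; the real frequencies depend on the end-game and the cycle:

```python
# Expected code length per position under the Huffman codes from the post,
# with assumed frequencies (not measured data).

code_len = {                       # state -> code length in bits
    "undecided/non-won": 1,        # code 0
    "undecided/won":     2,        # code 10
}

freq = {                           # assumed early-cycle frequencies
    "undecided/non-won": 0.60,
    "undecided/won":     0.30,
}
rest = 1.0 - sum(freq.values())    # all remaining states, ~10%
avg_rest_len = 4.5                 # mix of the 4- and 5-bit codes

expected_bits = sum(freq[s] * code_len[s] for s in freq) + rest * avg_rest_len
print(round(expected_bits, 2))     # 1.65, in line with the ~1.7 estimate
```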

If we are building 2+4, we would do it in 1+4 slices (alternately keeping the first or second white piece fixed). Such a slice would have 1G positions (a slice has in general no symmetry, as the fixed piece breaks it). Thus with 4 bits per position, it would require a 0.5G buffer. This brings the RAM requirement to above 2GB. I am not sure if better compression could get it down so much that it could be done in a machine with only 2GB of RAM. (These estimates are for end-games with all different pieces; as soon as there is exchange symmetry of identical pieces, or as soon as Bishops or other color-bound pieces are involved, the size would reduce so much that it would easily fit.)

For 3+3 life is much easier: we could do that in 1+3 slices, and the slice buffer would only need 16M positions = 8MB. With the main TB in RAM, the fact that we have to do two passes per cycle if we split the cycle in 3 passes (one for each white piece), instead of a single pass, is probably not that much of a performance hit. If one of the pieces is factorizable, like the Rook, we could do it in 1.5+3 slices, reducing the number of passes per cycle again to 1, but now requiring a slice buffer of 64MB, which is still small enough to make it fit next to the main TB in 2GB. If there is no Rook, we could make use of the fact that a King (which will always be amongst the white pieces) does not need a full slice buffer, but can be treated as a 'streaming slice' in a buffer of 21/64 of the normal size (so 0.17 GB). This might make it possible to do all 3+3 in 2GB.
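The buffer sizes quoted in the last two paragraphs follow from simple arithmetic at 4 bits per position; a quick check (binary units assumed):

```python
# Sanity check of the slice-buffer sizes, assuming a 4-bit fixed-length
# encoding per position and binary units (1 GB = 2**30 bytes).

BITS_PER_POS = 4
MB, GB = 2**20, 2**30

def buf_bytes(positions):
    return positions * BITS_PER_POS // 8

slice_1_4 = buf_bytes(64**5)        # 2+4: 1 white + 4 black dynamic pieces
slice_1_3 = buf_bytes(64**4)        # 3+3: 1 white + 3 black dynamic pieces
slice_rook = buf_bytes(8 * 64**4)   # "1.5+3": Rook factorized into 8 lines
king_stream = slice_1_4 * 21 // 64  # streaming King slice, 21/64 of normal

print(slice_1_4 // MB, slice_1_3 // MB, slice_rook // MB, king_stream // MB)
# 512 8 64 168  -> 0.5 GB, 8 MB, 64 MB, ~0.17 GB
```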

For 4+2 life is very easy, as we can do it in 2+2 slices, requiring only one pass per cycle, and an 8MB slice buffer. 5+1 bitbases are not really of interest, as they are all completely won. (Well, if you don't count things like KBBBBK with all B on the same color, which are completely drawn.) Similarly 1+5 will never have any won positions for the bare King. So it is really only the 2+4 which might not be doable in 2GB (but no problem in 3GB). At least, not with such a simple Huffman compression.
Codeman
Posts: 85
Joined: Fri Oct 19, 2007 7:50 pm

Re: EGTB sharing - new plan

Post by Codeman »

h.g.muller wrote:I did a kind of back-of-the-envelope calculation on what would be needed to build 6-men pawnless bitbases entirely in RAM. A pawnless 6-men has 8G positions. The idea would be to keep this in RAM as compressed data, and use my slice-by-slice algorithm for the generation, packing and unpacking the data from and to a fixed-length encoded buffer (allowing random access) capable of holding a single slice. This buffer could then use a 4-bit encoding, one bit for the wtm state (won or non-won), and 3 bits for btm state (undecided, non-lost (= permanent escape), lost in current cycle, lost in previous cycle, lost earlier, with 3 spare codes).

A simple Huffman coding scheme would distinguish the 10 states (combined wtm and btm) by codes 0, 10, 1100, 1101, 11100, 11101, 11110, 11111, in decreasing frequency. The code 0 would initially be assigned to undecided & non-won, which is about 60% of the TB in the early cycles, while 10 would be undecided & won (about 30%). As the number of won positions increases during the building, at some point it would be better to change to 0 = undecided & won, 10 = lost earlier & won. The fraction of positions lost in the current or previous cycle will be pretty small at any stage, and these will get the 5-bit codes. Typical frequencies occurring during TB building would then compress the data to about 1.7 bits per position, meaning the thus compressed TB would take up 1.7GB of RAM.

If we are building 2+4, we would do it in 1+4 slices (alternately keeping the first or second white piece fixed). Such a slice would have 1G positions (a slice has in general no symmetry, as the fixed piece breaks it). Thus with 4 bits per position, it would require a 0.5G buffer. This brings the RAM requirement to above 2GB. I am not sure if better compression could get it down so much that it could be done in a machine with only 2GB of RAM. (These estimates are for end-games with all different pieces; as soon as there is exchange symmetry of identical pieces, or as soon as Bishops or other color-bound pieces are involved, the size would reduce so much that it would easily fit.)

For 3+3 life is much easier: we could do that in 1+3 slices, and the slice buffer would only need 16M positions = 8MB. With the main TB in RAM, the fact that we have to do two passes per cycle if we split the cycle in 3 passes (one for each white piece), instead of a single pass, is probably not that much of a performance hit. If one of the pieces is factorizable, like the Rook, we could do it in 1.5+3 slices, reducing the number of passes per cycle again to 1, but now requiring a slice buffer of 64MB, which is still small enough to make it fit next to the main TB in 2GB. If there is no Rook, we could make use of the fact that a King (which will always be amongst the white pieces) does not need a full slice buffer, but can be treated as a 'streaming slice' in a buffer of 21/64 of the normal size (so 0.17 GB). This might make it possible to do all 3+3 in 2GB.

For 4+2 life is very easy, as we can do it in 2+2 slices, requiring only one pass per cycle, and an 8MB slice buffer. 5+1 bitbases are not really of interest, as they are all completely won. (Well, if you don't count things like KBBBBK with all B on the same color, which are completely drawn.) Similarly 1+5 will never have any won positions for the bare King. So it is really only the 2+4 which might not be doable in 2GB (but no problem in 3GB). At least, not with such a simple Huffman compression.
First of all: 6-men have 64G positions

But exactly in such a case, where we need to save as much space as possible, we should do at least the easiest index optimizations:
462*62*61*60*59 = 6.19G positions
6.19 * 1.7/8 = 1.31G of space required
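Spelling the index arithmetic out (the exact product comes to about 6.19G; decimal GB assumed for the compressed size):

```python
# Index size: 462 symmetry-reduced king placements, then 4 more distinct
# pieces on the remaining 62, 61, 60, 59 squares.

positions = 462 * 62 * 61 * 60 * 59
print(positions)                              # 6185385360, i.e. ~6.19G

# Compressed size at ~1.7 bits per position (decimal GB assumed).
compressed_gb = positions * 1.7 / 8 / 10**9
print(round(compressed_gb, 2))                # 1.31
```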

But I think that as soon as you can store everything in RAM, other algorithms could be used. I myself am only looking for a new way of generating bitbases, as my RAM can't hold 6-men at once. But in general, the algorithm I used only takes 2 bits per position without any compression (it could be reduced to 1 bit, but the second bit gives a speedup).
I think I introduced it in some other part of this board anyway:
First one retrograde white move (from a btm-white-wins position), then one retrograde black move, and finally verifying the position as won for white by checking all possible black moves and seeing if they lead to a wtm-won position. As soon as another btm-white-wins position is found, do the same again.
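The verification step of this scheme, in toy form (an illustration only, with a hand-built probe table; `w1`..`w3` are hypothetical wtm positions, not real chess positions):

```python
# Toy version of the verification ply: a btm position is lost only if
# every black move, captures included, reaches a wtm position already
# marked won for white.

# hypothetical probe table: wtm position -> won for white?
wtm_won = {"w1": True, "w2": True, "w3": False}

def verified_lost(black_moves):
    """black_moves: the wtm successors of the btm position under test."""
    return all(wtm_won[w] for w in black_moves)

print(verified_lost(["w1", "w2"]))   # True  -- no escape, position is lost
print(verified_lost(["w1", "w3"]))   # False -- w3 is an escape
```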

So it would take 6.19 * 2/8 = 1.55G of RAM

If you hold everything in RAM you don't have to distinguish between lost in the current cycle, lost in the previous cycle and lost earlier, as that structure falls away once you can use recursive programming.
h.g.muller
Posts: 223
Joined: Mon Feb 19, 2007 8:24 am
Sign-up code: 0
Location: Amsterdam
Contact:

Re: EGTB sharing - new plan

Post by h.g.muller »

Codeman wrote:First of all: 6-men have 64G positions
Because of the 8-fold symmetry this reduces to 8G positions. As for the (2+4)-men the memory requirement is just over the edge (1.7GB + 0.5GB > 2GB), trying to squeeze out a few broken positions might be worthwhile here, as even a few percent of savings counts. I would not expect too much from it, though, as the broken positions are don't-cares, and can thus always be represented by the smallest Huffman code available (i.e. a single bit). This means you save only 1 bit per broken position, not 1.7.

When you say 2 bits per position, I suppose you mean 2 bits for wtm and btm each? I am not sure distinguishing positions lost in the current or previous cycle from the other lost btm positions takes much space, as there are never many of those positions. The largest part of the 1.7 bits per position is for encoding the wtm state (won vs undecided, which typically grows during building from 40-60 to 90-10) and the btm state (lost vs undecided, growing from 5-95 to 40-60). Thus we see that both at the beginning and the end one of the states is distributed close to 50-50, and will thus require a full bit. A 90-10 distribution represents 0.47 bit. But this assumes ideal encoding (with fractional numbers of bits). So even with 1 bit of state info per wtm and btm position, the compressed TB will be very close to 1.7 bits per position.
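The 0.47-bit figure is just the binary entropy of a 90-10 distribution; a small check:

```python
# Binary entropy H(p): the ideal number of bits per symbol for a
# two-state distribution with probabilities p and 1-p.

from math import log2

def H(p):
    """Binary entropy in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(round(H(0.9), 2))   # 0.47 -- the 90-10 case quoted above
print(round(H(0.5), 2))   # 1.0  -- a 50-50 state really needs a full bit
```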

The main savings by reducing the state info can be made in the slice buffer. With two bits of state info, a 1+4 slice would only require 0.25 GB, and that would be small enough to make it fit. It would make the algorithm a lot slower, though. So the main question is: is it really worthwhile to squeeze the RAM usage into 2GB, which is at the edge of what is possible (for 2+4 men), or should we simply use 3GB, which makes it completely trivial...
Tuhma
Posts: 15
Joined: Sat Sep 29, 2007 10:48 am

Re: EGTB sharing - new plan

Post by Tuhma »

Hello,

I see that you are trying to make a TB generator that would enable 7-men tablebase generation?

If so, then I would like to know how long it would take to generate one 7-men tablebase using your generator.
Like KQQQKQQ or KRRPKRR?
I made some tests using feg, and with it KQQQKQQ would take about one month.
I feel that this is not much, because I could run four instances of feg on my computer at the same time.
One 6-men set would take about half a day -> 8 sets per day on a quad.

I am used to computational projects, and I think that certain TBs like krnpkrb, krrpkrr, kqppkrr and some others are worth spending a few CPU years on. I think that 8 GB is quite a usual amount of memory among heavy users, and of course there are some special boards like the Tyan Tempest i5400PW (S5397) with 16 memory slots -> 32 GB is possible relatively cheaply, because 2 GB DIMMs are around $100.

Anyway, if you need a beta tester to generate some 7-men tablebases, I am more than willing to help.

Regards,

Tuhma
Codeman
Posts: 85
Joined: Fri Oct 19, 2007 7:50 pm

Re: EGTB sharing - new plan

Post by Codeman »

Tuhma wrote:Hello,

I see that you are trying to make a TB generator that would enable 7-men tablebase generation?

If so, then I would like to know how long it would take to generate one 7-men tablebase using your generator.
Like KQQQKQQ or KRRPKRR?
I made some tests using feg, and with it KQQQKQQ would take about one month.
I feel that this is not much, because I could run four instances of feg on my computer at the same time.
One 6-men set would take about half a day -> 8 sets per day on a quad.

I am used to computational projects, and I think that certain TBs like krnpkrb, krrpkrr, kqppkrr and some others are worth spending a few CPU years on. I think that 8 GB is quite a usual amount of memory among heavy users, and of course there are some special boards like the Tyan Tempest i5400PW (S5397) with 16 memory slots -> 32 GB is possible relatively cheaply, because 2 GB DIMMs are around $100.

Anyway, if you need a beta tester to generate some 7-men tablebases, I am more than willing to help.

Regards,

Tuhma
Hello Tuhma,

First of all, there is a huge difference between KQQQKQQ and KRRPKRR:
KQQQKQQ should be easier to generate, as many pieces are the same. In total there would be about 30G positions (462*62*61*60*59*58 / (6*2)). That's only about 5 times bigger than a usual 6-men TB. KRRPKRR is harder, as you first have to generate some other tablebases to know the outcome of the games where white converts the pawn.

For 7-men endgames where all pieces are different there would be about 334G positions. That's far too much to be held in memory (even in 32GB :D ), so during generation some combination of hard drive and RAM is needed. We are still trying to find an efficient algorithm to handle this issue.

Anyway, thanks for your offer as a beta-tester.
Tuhma
Posts: 15
Joined: Sat Sep 29, 2007 10:48 am

Re: EGTB sharing - new plan

Post by Tuhma »

> is it really worthwile to squeeze the RAM usage into 2GB, which is at the edge of what is possible (for 2+4 men), or should
> we simply use 3GB, which makes it completely trivial...

4 GB is quite standard nowadays. And it is very likely that any program you are going to write will be around for several years, so 4 GB is going to be mainstream stuff very soon.

A curiosity question: how difficult is it to generate kQQQkQQQ? How much hard disk space would that require?

Tuhma
Tuhma
Posts: 15
Joined: Sat Sep 29, 2007 10:48 am

Re: EGTB sharing - new plan

Post by Tuhma »

>First of all, there is a huge difference between KQQQKQQ and KRRPKRR:
>KQQQKQQ should be easier to generate, as many pieces are the same. In total there would be about 30G positions
>(462*62*61*60*59*58 / (6*2)). That's only about 5 times bigger than a usual 6-men TB. KRRPKRR is harder, as you first have to >generate some other tablebases to know the outcome of the games where white converts the pawn.

Yes, I have to do KRRQKRR, KRRRKRR, KRRBKRR, KRRNKRR etc., but I don't think that this is a big problem, because I can do 4 sets simultaneously -> 2-3 months. Peanuts. The only big thing is that sometimes I have to reboot, or my computer crashes. It would be very important to have a resume option in the TB generator, so that after you have finished 30% of the work you can interrupt and continue later.

If I decide to generate a 7-men set using feg, is there any way to query that tablebase format? (To use it in endgame analysis.)

Regards,

Tuhma
Tuhma
Posts: 15
Joined: Sat Sep 29, 2007 10:48 am

Re: EGTB sharing - new plan

Post by Tuhma »

About kQQQkQQQ, I guess that (462*62*61*60*59*58*57)/(6*6) = 568*10^9 positions. I guess that quite many are illegal?

Tuhma
Codeman
Posts: 85
Joined: Fri Oct 19, 2007 7:50 pm

Re: EGTB sharing - new plan

Post by Codeman »

Tuhma wrote:About kQQQkQQQ, I guess that (462*62*61*60*59*58*57)/(6*6) = 568*10^9 positions. I guess that quite many are illegal?

Tuhma
The indexing I described here just avoids having the two kings next to each other and two pieces standing on the same square, and it takes advantage of the possibility of flipping and rotating the board and exchanging equal pieces. Those are the easier illegal positions to catch. There would still be many illegal positions where the side not to move is in check, but detecting these is very time-consuming.
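The 462 king placements that this kind of indexing relies on can be reproduced by a direct enumeration (a sketch of the standard convention: white king confined to the a1-d4 triangle, black king kept on or below the a1-h8 diagonal whenever the white king is on it, and coincident or adjacent kings excluded):

```python
# Enumerate symmetry-reduced king placements for pawnless tables.
# Squares are (file, rank) pairs, 0..7 each.

def adjacent_or_equal(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= 1

count = 0
for wf in range(4):                       # white king file a..d
    for wr in range(wf + 1):              # rank <= file: the a1-d4 triangle
        on_diag = wf == wr
        for bf in range(8):
            for br in range(8):
                if adjacent_or_equal((wf, wr), (bf, br)):
                    continue              # kings coincide or touch
                if on_diag and br > bf:
                    continue              # tie-break for diagonal symmetry
                count += 1

print(count)   # 462
```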

Secondly, it is possible to continue generation of unfinished endgames (but I don't know if it is implemented in the common generators).

I don't know much about feg, but I think it was designed for Chessmaster, so that program should be able to query it.

Well .. kQQQkQQQ .. I think it is still far away, but you are right about the 568*10^9 positions.
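The kQQQkQQQ combinatorics check out exactly:

```python
# 462 king placements, six queens on the remaining squares, divided by
# 3! per trio of identical queens (the 6*6 in the post).

from math import factorial

total = 462 * 62 * 61 * 60 * 59 * 58 * 57 // (factorial(3) * factorial(3))
print(total)   # 568024555560, i.e. ~568 * 10**9
```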
Codeman
Posts: 85
Joined: Fri Oct 19, 2007 7:50 pm

Re: EGTB sharing - new plan

Post by Codeman »

h.g.muller wrote:
Codeman wrote:First of all: 6-men have 64G positions
Because of the 8-fold symmetry this reduces to 8G positions. As for the (2+4)-men the memory requirement is just over the edge (1.7GB + 0.5GB > 2GB), trying to squeeze out a few broken positions might be worthwhile here, as even a few percent of savings counts. I would not expect too much from it, though, as the broken positions are don't-cares, and can thus always be represented by the smallest Huffman code available (i.e. a single bit). This means you save only 1 bit per broken position, not 1.7.

When you say 2 bits per position, I suppose you mean 2 bits for wtm and btm each? I am not sure distinguishing positions lost in the current or previous cycle from the other lost btm positions takes much space, as there are never many of those positions. The largest part of the 1.7 bits per position is for encoding the wtm state (won vs undecided, which typically grows during building from 40-60 to 90-10) and the btm state (lost vs undecided, growing from 5-95 to 40-60). Thus we see that both at the beginning and the end one of the states is distributed close to 50-50, and will thus require a full bit. A 90-10 distribution represents 0.47 bit. But this assumes ideal encoding (with fractional numbers of bits). So even with 1 bit of state info per wtm and btm position, the compressed TB will be very close to 1.7 bits per position.

The main savings by reducing the state info can be made in the slice buffer. With two bits of state info, a 1+4 slice would only require 0.25 GB, and that would be small enough to make it fit. It would make the algorithm a lot slower, though. So the main question is: is it really worthwhile to squeeze the RAM usage into 2GB, which is at the edge of what is possible (for 2+4 men), or should we simply use 3GB, which makes it completely trivial...
By saying 2 bits I mean 2 bits in total (wtm: won/lost for white, btm: won/lost for white)

True, I forgot about the fact that broken positions will be compressed to 1 bit/pos anyway. Well, still, by switching from 462*64^4 to 462*62!/58! we would save about 1.57G positions ≈ 0.2 GByte. Furthermore, you used 8*64 for both kings. If you reduced this to 462 you could save another 10%. I think in the end it should fit in the 2 GB RAM.

But maybe it would be better to use more RAM and on the other hand benefit from faster generation times.

It all depends on the way you want to implement the tablebases. What are you actually working on at the moment? I think at one point you said that you wanted to write a generator to do on-the-fly generation while the engine is calculating. Is this still one of your projects? And what metrics are you working on?
h.g.muller
Posts: 223
Joined: Mon Feb 19, 2007 8:24 am
Sign-up code: 0
Location: Amsterdam
Contact:

Re: EGTB sharing - new plan

Post by h.g.muller »

My main interest is in on-the-fly building of pawny DTZ50 EGTBs. In particular I want to be able to build P-slices with 4 non-Pawns (e.g. KRKR or KRKB), and then be able to use that to calculate the required P-slices with 3-5 Pawns added (depending on their advancement and blocking). So that my engine can play, say, KRPPPKRPP by generating the TB in 5-10 min of thinking, and from then on playing instantly to finish the game.

However, I am still in the planning stage for this one, thinking about how to optimize the algorithm by tricks just like the ones we are discussing here. And the problem of building on-disk (using RAM as fast storage) and building in RAM (using L2 as fast storage) is actually very similar. So when I am entirely happy with the algorithm, I might as well apply it to 7-men generation on disk (with 6-men generation as an easy testcase for that).

My main Chess PC is currently tied up in running the Battle of the Goths 10x8 Chess Championship, and I use my laptop for piece-value measurements (studying the pair-synergy between color-bound pieces) through playing material-imbalance games. And the most urgent programming project is currently to fix the remaining bugs in WinBoard_F, so I can finally release version 4.3.13. After that I can start programming work on the EGTBs, and on the Shogi engine I want to build. And by the end of the year I will probably buy an 8-core Nehalem PC, to convert my engine Joker to SMP.
jkominek
Posts: 150
Joined: Mon Dec 04, 2006 9:02 am
Sign-up code: 0
Location: Pittsburgh, PA

Re: EGTB sharing - new plan

Post by jkominek »

h.g.muller wrote:My main interest is in on-the-fly building of pawny DTZ50 EGTBs. ... I might as well apply it to 7-men generation on disk ... My main Chess PC is currently tied up in running the Battle of the Goths 10x8 Chess Championship, and I use my laptop for piece-value measurements (studying the pair-synergy between color-bound pieces) through playing material-imbalance games. And the most urgent programming project is currently to fix the remaining bugs in WinBoard_F, so I can finally release version 4.3.13. After that I can start programming work on the EGTBs, and on the Shogi engine I want to build. And by the end of the year I will probably buy an 8-core Nehalem PC, to convert my engine Joker to SMP.
Wow. Pardon the personal question in a public forum ... but, you are enjoying the luxuries of retirement, yes? I read your action-item list and think there must be eight h.g.mullers co-existing in superposition. When will the wave function collapse?

jk
Last edited by jkominek on Mon Apr 21, 2008 6:42 am, edited 1 time in total.
h.g.muller
Posts: 223
Joined: Mon Feb 19, 2007 8:24 am
Sign-up code: 0
Location: Amsterdam
Contact:

Re: EGTB sharing - new plan

Post by h.g.muller »

I decided to take an early retirement per April this year. 8) Otherwise the Shogi engine would not have been on the list, and the SMP stuff is a wish that only came up when I saw the announcement of the Nehalem CPU.

But you can see why, up to now, I did not make much progress on the EGTB building. :wink:
Tuhma
Posts: 15
Joined: Sat Sep 29, 2007 10:48 am

Re: EGTB sharing - new plan

Post by Tuhma »

BTW. You are talking about generating in the memory vs. generating in the hard disk.
How about solid state disks or USB memory?
Some USB memories are quite cheap, like the Corsair 32GB Flash Voyager at around 100-150 Eur, plus much better random access times compared to HD.

regards,

Tuhma
User avatar
jshriver
Posts: 298
Joined: Tue Jan 31, 2006 5:59 am
Sign-up code: 0
Location: Toledo, OH, USA
Contact:

Re: EGTB sharing - new plan

Post by jshriver »

That's pretty cheap for hosting :) The hosting I use for olympuschess.com is ixwebhosting. I think I pay about $125 a year, so ~$12 a month, but I have a 500-gig max transfer cap. While they do claim "unlimited storage", I've asked for verification, and since it is a vhosting service they get around that with the "as long as you don't cause harm or abuse system resources from other clients" clause, so they ask that people realistically keep it under 60 gigs, preferably 40 or less. So that's what I've been trying to host.

I will gladly continue to offer some space for files, if we were to coordinate different sites could host different portions and take the strain off an individual source.

I agree with Kirill that uploading to the site is slow, which is a big reason I don't update mine as often as I'd like. I get about 50-75 KB/s uplink at best, so it can take a week to do a real update.

Hope that helps,
Josh
jkominek
Posts: 150
Joined: Mon Dec 04, 2006 9:02 am
Sign-up code: 0
Location: Pittsburgh, PA

Re: EGTB sharing - new plan

Post by jkominek »

Tuhma wrote:Hello,
I am used to computational projects, and I think that certain TBs like krnpkrb, krrpkrr, kqppkrr and some others are worth spending a few CPU years on. I think that 8 GB is quite a usual amount of memory among heavy users, and of course there are some special boards like the Tyan Tempest i5400PW (S5397) with 16 memory slots -> 32 GB is possible relatively cheaply, because 2 GB DIMMs are around $100.
For a large-memory PC the motherboard I fancy is the SuperMicro H8QMi-2. It has 32 DIMM sockets and supports up to 128 GB. This is an Opteron board. Opterons are slower than Xeons, but for whatever reason motherboard manufacturers choose them when designing boards with large memory capacity. For TB generation I'd pass on 50% more speed for quadruple the memory.

Before anyone gets too excited and pulls out a credit card, I'd like to mention that "server grade" motherboards require ECC DIMMs, and this is true for the Tyan Tempest S5397 too. For memory-intensive computations - i.e. TB generation - you really do want that level of reliability. But it imposes a substantial cost premium: about 4x the price of non-ECC memory. You're lucky if you can find a single 4 GB DIMM on sale for $225. I'll let you multiply by 32.
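The arithmetic left to the reader above, as a quick sketch (the $225-per-4 GB-ECC-DIMM figure is the one quoted in the post; 2008 prices):

```python
# Rough cost sketch for filling all 32 DIMM sockets of the
# large-memory Opteron board described above with 4 GB ECC DIMMs.
dimms = 32
gb_per_dimm = 4
price_per_dimm = 225  # USD, the sale price quoted in the post

total_gb = dimms * gb_per_dimm      # 128 GB total capacity
total_cost = dimms * price_per_dimm # $7,200 in memory alone
print(total_gb, total_cost)
```

So the full 128 GB configuration costs on the order of $7,200 in ECC memory alone, which puts the "substantial cost premium" in perspective.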

Because there are a large number of pawnless 7-man tables that weigh in at just under 120 GB (such as krrrkqb), and a batch of pawnful ones in the 90-105 GB range (such as kpppkrr), my preference is to go full capacity right from the start.
Anyway, if you need a beta tester to generate some 7-men tablebases, I am more than willing to help.
Tuhma
My hope is that tbgen2 will be ready for beta testing by early summer. I've "signed up" Josh Shriver. A second tester would be beneficial if you can contribute time on some heavy hardware - not to mention your own time. (Note: the program is Linux-only.) I'm not interested in spot generation of a few select configurations. The goal is the entire 7-man set.

john
Last edited by jkominek on Mon Apr 21, 2008 6:50 am, edited 1 time in total.
jkominek
Posts: 150
Joined: Mon Dec 04, 2006 9:02 am
Sign-up code: 0
Location: Pittsburgh, PA

Re: EGTB sharing - new plan

Post by jkominek »

Tuhma wrote:>Yes, I have to do KRRQKRR, KRRRKRR, KRRBKRR, KRRNKRR etc., but I don't think that this is a big problem, because I can do 4 sets simultaneously -> 2-3 months. Peanuts. The only big thing is that sometimes I have to reboot or my computer crashes. It would be very important to have a resume capability in the TB generator, so that after you have finished 30% of the work you can interrupt and continue later.
You've probably noticed that FEG can resume from a halted computation. If a crash happens during the main iteration loop it is no big deal; all that is lost is the last ply. The situation is worse during the bulky initialization and finalization phases.
If I decide to generate the 7-men set using FEG, is there any way to query that tablebase format? (To use it in endgame analysis.)
Only within ChessMaster. I don't have the program, and don't know if it supports 7-man tables. After somebody (you!) generates the first one I hope it can be tested out.

john
Last edited by jkominek on Mon Apr 21, 2008 6:49 am, edited 1 time in total.
jkominek
Posts: 150
Joined: Mon Dec 04, 2006 9:02 am
Sign-up code: 0
Location: Pittsburgh, PA

Re: EGTB sharing - new plan

Post by jkominek »

Tuhma wrote:How about solid-state disks or USB memory?
Some USB sticks are quite cheap: a Corsair 32 GB Flash Voyager is around 100-150 EUR, and it has much better random-access times than a hard disk.
Tuhma
I'd say that the best use of a solid state disk is to increase the virtual memory space of a machine. But forget Corsair and pen drives. You'd want a well engineered SSD solution. For example, 8 MTRON disks in a RAID0 configuration: http://www.dvnation.com/benchmark-24.html.

jk
jkominek
Posts: 150
Joined: Mon Dec 04, 2006 9:02 am
Sign-up code: 0
Location: Pittsburgh, PA

Re: EGTB sharing - new plan

Post by jkominek »

Tuhma wrote:If so, then I would like to know how long it would take to generate one 7-men tablebase using your TB generator. Like KQQQKQQ or KRRPKRR? ... I am used to computational projects, and I think that certain TBs like krnpkrb, krrpkrr, kqppkrr and some others are worth spending a few CPU years on.
I'll run through the configurations you've mentioned.

1. kqqqkqq = 59.8 G positions, depends only on 6-man tables. (codeman forgot to double for wtm and btm)

2. kqqqkqqq = 1.1 T, depends only on 7-man tables. (Can divide by 2 since this is side-symmetric.)

3. krrpkrr = 543 G, 1.1 T including its 7-man dependencies.

4. krnpkrb = 2.2 T, 4.3 T including its 7-man dependencies.

5. kqppkrr = 418 G, 6.4 T including its 7-man dependencies.

Note that the last two will require 2 bytes per position, so you're looking at 8.6 TB and 12.8 TB for the generation of 4 and 5. Divide by a factor of 4-5 to get the approximate post-compression sizes.
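The pawnless counts above can be reproduced with a standard pawnless-tablebase indexing scheme: 462 legal king-pair placements (after the usual 8-fold board-symmetry reduction), combinations of the remaining squares for each group of identical pieces, times two for side to move. A sketch, assuming that indexing (it does not handle the pawn configurations like krrpkrr, whose symmetry reduction is different):

```python
from math import comb

def pawnless_positions(white_groups, black_groups):
    """Naive position count for a pawnless ending.

    white_groups/black_groups: list of counts of identical pieces
    besides the kings, e.g. kqqqkqq -> [3], [2].
    """
    total = 462   # king-pair placements after 8-fold symmetry reduction
    free = 62     # squares left once both kings are placed
    for group in white_groups + black_groups:
        total *= comb(free, group)  # identical pieces are interchangeable
        free -= group
    return total * 2  # white-to-move and black-to-move

print(pawnless_positions([3], [2]))  # kqqqkqq:  59,792,058,480 ~ 59.8 G
print(pawnless_positions([3], [3]))  # kqqqkqqq: ~1.1 T
```

This matches the figures in the post, including the factor of 2 for wtm/btm that the parenthetical remark about item 1 points out.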

I've attached the files from the calculation I made to generate these estimates. It provides the dependency tree all the way down to bare kings.

john
Attachments
krrpkrr.lst.gz
(741 Bytes) Downloaded 250 times
krnpkrb.lst.gz
(938 Bytes) Downloaded 259 times
kqqqkqqq.lst.gz
(429 Bytes) Downloaded 252 times
kqqqkqq.lst.gz
(390 Bytes) Downloaded 249 times
kqppkrr.lst.gz
(1.24 KiB) Downloaded 243 times
jkominek
Posts: 150
Joined: Mon Dec 04, 2006 9:02 am
Sign-up code: 0
Location: Pittsburgh, PA

Re: EGTB sharing - new plan

Post by jkominek »

Kirill Kryukov wrote:Here is the link to my new site: http://kirill-kryukov.com/chess/egtb/files.html
Hi Kirill,

So, an on-topic posting -- finally.

Your newly hosted site is returning 404 right now. A bad omen. Hope they haven't cut you off for excessive usage.

john
Kirill Kryukov
Site Admin
Posts: 7399
Joined: Sun Dec 18, 2005 9:58 am
Sign-up code: 0
Location: Mishima, Japan
Contact:

Re: EGTB sharing - new plan

Post by Kirill Kryukov »

OK, this plan did not work out as well as expected - unfortunately for all the EGTB lovers out there. The idea was good, though. As some predicted, the hosting did not take it. :-) I'm back to eMule sharing for the moment.