Mister X wrote:Yes, I am
In other groups of the alt.binaries.* hierarchy you can get complete movies - DVD images ready for burning - which is of course illegal. I think tablebases are public domain (is that correct?), so posting them should be legal.
Usenet has many benefits. Downloads are usually faster than with p2p, and the transmission can be protected by parity files.
Greetings, X
Well, we are talking about hundreds of gigabytes here. Do you think Usenet can handle that - that it will even work, let alone work fast? Who will host the files - the news server? What do we do if the server operators decide 1.2 TB is too much for them to host and delete it? Any server-based sharing is not dependable, because it can stop working at any moment.
p2p works, as the EGTB sharing project proves. I started with almost no tablebases about 4 months ago, and now I have about 500 GB, all shared online. p2p is also safe because it has no single point of failure.
Mister X wrote:Usenet has many benefits. Downloads are usually faster than with p2p, and the transmission can be protected by parity files.
Does Usenet have the capacity to host 1.2 terabytes? If not, then there is no comparison, because p2p can do this.
As for faster downloads: p2p network speed depends only on what the members contribute. It is already fast for files that are well spread, and it will gradually become fast for all files.
Also, our first target is availability of all 6-man EGTB files. BitTorrent is fast too, for example, but it is not a good long-term sharing solution: it is very convenient for quick distribution of a new file, but you will not find that file online a year later. The eD2K/KAD network we are using is better suited to long-term sharing.
Mister X wrote:Usenet has many benefits. Downloads are usually faster than with p2p, and the transmission can be protected by parity files.
I have to add that the eMule software we use for sharing protects all downloads with MD4 hashes. So p2p transmissions are protected (not "may be"), and not just by parity but by an MD4 digest. On top of that we are using MD5 signatures and datacomp testing. As you can imagine, ensuring the integrity of 1 TB of data is a big problem. We are aware of it, and we are using multiple checks to make sure the files are not corrupted.
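To give an idea of what checking a file against a published signature list looks like, here is a minimal sketch in Python using the standard hashlib module. It uses MD5 (as in our signature lists); the file names and the `expected` table are hypothetical, not the project's actual files.

```python
import hashlib

def md5_digest(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading it in 1 MB chunks
    so even a multi-gigabyte tablebase never has to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(expected):
    """Compare each downloaded file against its published checksum.
    `expected` maps file name -> MD5 hex digest from the signature list."""
    return {name: md5_digest(name) == digest
            for name, digest in expected.items()}

if __name__ == "__main__":
    # Hypothetical signature list entry, for illustration only.
    expected = {"kqkr.nbw.emd": "d41d8cd98f00b204e9800998ecf8427e"}
    for name, ok in verify(expected).items():
        print(name, "OK" if ok else "CORRUPT - redownload")
```

Any file whose digest does not match is redownloaded rather than shared further, so corruption cannot propagate through the network.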