fairy-stockfish/variant-nnue-pytorch

hello : train xiangqi NNUE

Newera2022 opened this issue · 3 comments

Having distributed training would of course be great, but unfortunately I do not have the time to develop something like that.

Just out of curiosity, what is the main challenge in setting up the trainer? Is it the installation of CUDA, Python, and the Python packages, or the compilation of the training data loader?
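For reference, a quick way to narrow down which of these pieces is missing is a small pre-flight check like the sketch below. It is only an illustration and assumes an nnue-pytorch-style layout where the compiled C++ data loader is a shared library (named something like `training_data_loader.so` / `.dll`) sitting next to the training scripts; the exact file name in this fork may differ.

```python
# Minimal pre-flight check (sketch): verifies Python, PyTorch/CUDA, and
# whether a compiled data loader library can be found and loaded.
# Assumption: the data loader is a ctypes-loadable shared library named
# roughly "training_data_loader.*" in the current directory.
import sys
import ctypes
from pathlib import Path

import torch


def check_environment():
    print(f"Python: {sys.version.split()[0]}")
    print(f"PyTorch: {torch.__version__}")
    print(f"CUDA available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        print(f"GPU: {torch.cuda.get_device_name(0)}")

    # If none of these load, the C++ data loader still needs to be compiled.
    candidates = [
        "training_data_loader.so",
        "training_data_loader.dll",
        "libtraining_data_loader.so",
    ]
    for name in candidates:
        path = Path(name)
        if path.exists():
            try:
                ctypes.CDLL(str(path.resolve()))
                print(f"Data loader loads OK: {name}")
            except OSError as exc:
                print(f"Data loader found but failed to load: {exc}")
            return
    print("Data loader library not found; compile it first.")


if __name__ == "__main__":
    check_environment()
```

If the CUDA check fails but PyTorch imports fine, the problem is usually the GPU driver/toolkit rather than the Python packages; if the library load fails, it is the data loader compilation step.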

If people can access Discord (via VPN), they can share links to their uploaded training data in the Fairy-Stockfish Discord server. Otherwise, feel free to create a discussion about Xiangqi NNUE training data in this repository, and everyone can share their links there. I currently don't train networks due to lack of hardware, but others do, so they might then pick up the training data from there. That is why it shouldn't happen via private communication like email, but somewhere public.

It would be best if people who get it to work (or at least part of it) extended the documentation at https://github.com/ianfab/variant-nnue-pytorch/wiki/%E4%B8%AD%E6%96%87 (also see the English version at https://github.com/ianfab/variant-nnue-pytorch/wiki/Step-by-Step-Guide, where I already added a bit more information). I tried to outline the steps there, but I can't fill in the detailed installation steps, since I have neither a Windows system nor a CUDA-capable GPU, so I cannot test and reproduce the exact steps for the common Windows setup most users have.

I created #13 as a common place for further discussion, to avoid scattering it across threads. If you are able to access it, also feel free to join our Discord server.