Can be played at https://onitama.app/
Things that could be cool to implement, that aren't done yet, and might one day get done:
- Show piece that last moved, and where it moved from
- Say when opponent has requested a rematch
- Add chat (maybe)
- Add "how to play"
- Add different difficulty AIs
The default Dockerfile disables server-side AI, and instead compiles the AI agent to WebAssembly and runs it in a JS WebWorker.
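For a feel of what that WebAssembly boundary looks like, here is a minimal sketch assuming the agent is exported with wasm-bindgen. The function name `choose_move` and the JSON-string interface are illustrative assumptions, not the repo's actual API:

```rust
use wasm_bindgen::prelude::*;

// Hypothetical entry point a JS WebWorker could call. The real agent's
// exported names and types will differ; this only shows the general shape
// of a Rust function compiled to WebAssembly and invoked from a worker.
#[wasm_bindgen]
pub fn choose_move(state_json: &str) -> String {
    // A real implementation would deserialize the game state, run the
    // search, and serialize the chosen move. Here we just echo the input
    // so the sketch stays self-contained.
    state_json.to_string()
}
```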
An alternate Dockerfile (`Dockerfile.remoteai`) uses server-side AI, which runs the agent code in the same process that serves the game and delivers messages for multiplayer games.
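As a rough illustration of that same-process arrangement, here is a sketch assuming an async runtime such as tokio; the channel-based layout is an assumption for illustration, not the repo's actual design:

```rust
use tokio::sync::mpsc;

// Hypothetical layout: the AI agent runs as a sibling task inside the same
// process as the game server, receiving game states over a channel.
#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(16);

    // Agent task: in the real server this would run the search and send a
    // move back; here it just logs what it receives.
    let agent = tokio::spawn(async move {
        while let Some(state) = rx.recv().await {
            println!("agent got state: {state}");
        }
    });

    // Stand-in for the HTTP/WebSocket handlers, which would forward
    // AI-game states to the agent through `tx`.
    tx.send("example state".into()).await.unwrap();

    drop(tx); // close the channel so the agent task finishes
    agent.await.unwrap();
}
```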
The server-side AI is roughly 2x faster at running Monte Carlo simulations, meaning the hard bot is somewhat harder when run server-side. The gap used to be roughly 30x, with the web AI significantly slower, before the RNG used in the Monte Carlo search was swapped out.
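The README doesn't name the generators involved, but a common way to get this kind of speedup is using a small non-cryptographic RNG for rollouts, e.g. rand's `SmallRng` (behind the `small_rng` feature). A hedged sketch of that pattern:

```rust
use rand::rngs::SmallRng;
use rand::{Rng, SeedableRng};

// Monte Carlo rollouts call the RNG once per simulated move, so RNG speed
// dominates; a small non-cryptographic generator keeps each call cheap.
fn random_rollout_move(rng: &mut SmallRng, legal_moves: usize) -> usize {
    rng.gen_range(0..legal_moves)
}

fn main() {
    // A fixed seed keeps the sketch deterministic; a real agent would
    // seed from entropy instead (SmallRng::from_entropy()).
    let mut rng = SmallRng::seed_from_u64(42);
    for _ in 0..5 {
        println!("picked move index {}", random_rollout_move(&mut rng, 10));
    }
}
```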
To use server-side AI, build the container with:

```sh
docker build -t onitama:remoteai -f Dockerfile.remoteai .
```
As of writing, https://onitama.app/ uses the local AI, as it is very light on server resource requirements.
Pull from the GitHub container registry:

```sh
docker pull ghcr.io/jackadamson/onitama:latest
```

Run the container:

```sh
docker run -dp 80:8080 --name onitama --rm ghcr.io/jackadamson/onitama:latest
```

Alternatively, build the container yourself:

```sh
docker build -t onitama .
```

Run the container:

```sh
docker run -dp 80:8080 --name onitama --rm onitama
```
Requires Rust (nightly) and Node (v14).

- Install dependencies with `yarn install`
- Start the backend with `cargo run onitamaserver`
- Start the frontend with `yarn start`
- Visit http://localhost:3000 to see the app

To develop single-player without the backend, start the frontend with `yarn start-local-ai`.