The simulation bot serves as a mediator between an end user and a simulation environment for military operations driven by reinforcement learning (RL).

The user can interact with it in two ways: converse with the LLM about the API documentation, either to refine a request or to learn more about the operation simulation; or prompt the LLM directly to call the API for a specific operation, for example to execute a simulation with default parameters.
- Install llama-cpp-python for M1 Mac:

      CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python

  or, to force a clean rebuild:

      CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
- Install Redis on Mac:

      brew install redis
- Run Redis locally:

      brew services start redis
      redis-cli
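Before starting the bot, it can help to confirm that Redis is actually listening. The sketch below is our suggestion, not part of the project; it uses only the standard library, and the host/port values match the `REDIS_HOST`/`REDIS_PORT` defaults documented later in this README.

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    try:
        # create_connection raises OSError if nothing is listening
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the local Redis instance started above
# port_open("127.0.0.1", 6379)
```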
- Download the NeuralHermes LLM from HuggingFace:
  https://huggingface.co/TheBloke/NeuralHermes-2.5-Mistral-7B-GGUF
- Put it into the `model` directory, and specify its file name in the `models.yml` file inside the `parameters` directory.
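After placing the model, a small sanity check like the sketch below can verify the file is where the bot expects it. The `model` directory name comes from the step above; the helper function and the example file name are our assumptions, and the real name must match what you wrote in `models.yml`.

```python
from pathlib import Path

def resolve_model_path(project_path: str, model_file: str) -> Path:
    """Return the full path to a GGUF model under <project>/model, failing early if absent."""
    path = Path(project_path) / "model" / model_file
    if not path.is_file():
        raise FileNotFoundError(f"GGUF model not found: {path}")
    return path

# llama-cpp-python would then load it roughly like this (not run here):
# from llama_cpp import Llama
# llm = Llama(model_path=str(resolve_model_path(".", "neuralhermes-2.5-mistral-7b.Q4_K_M.gguf")))
```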
- Specify the environment variables:

      PROJECT_PATH=/path/to/project/folder
      REDIS_HOST=127.0.0.1
      REDIS_PORT=6379
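A hedged sketch of how a Python entry point might read these variables. Only the three variable names come from the list above; the function name, the defaults, and the decision to fail fast on a missing `PROJECT_PATH` are our assumptions.

```python
import os

def load_settings() -> dict:
    """Read the required environment variables, falling back to documented defaults."""
    project_path = os.environ.get("PROJECT_PATH")
    if not project_path:
        # PROJECT_PATH has no sensible default, so fail early
        raise RuntimeError("PROJECT_PATH must be set")
    return {
        "project_path": project_path,
        "redis_host": os.environ.get("REDIS_HOST", "127.0.0.1"),
        "redis_port": int(os.environ.get("REDIS_PORT", "6379")),
    }
```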
- If needed, specify the API URL and API documentation in the `src/packages/constants` package.
- Download the RL operation simulator dependency:
  https://github.com/yvoievid/intro-to-data-science
- Install the Python dependencies for OpSim (this project):

      pip install -r dependencies.m1.txt

  or use `requirements.txt` if you are not on an Apple M1.
- Run the debug script `src/bin/debug.py`, with the working directory set to `PROJECT_PATH`.
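Since the debug script expects to be launched with the working directory equal to `PROJECT_PATH`, a small guard at the top of the script can catch the most common mistake. The check itself is our suggestion, not part of the project.

```python
import os
from pathlib import Path

def assert_cwd_is_project_path() -> None:
    """Fail fast if the process was not started from PROJECT_PATH."""
    expected = os.environ.get("PROJECT_PATH")
    if not expected:
        raise RuntimeError("PROJECT_PATH is not set")
    # resolve() normalizes symlinks so the comparison is robust (e.g. /var vs /private/var on macOS)
    if Path.cwd().resolve() != Path(expected).resolve():
        raise RuntimeError(f"Run from PROJECT_PATH ({expected}), not {Path.cwd()}")
```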