This container runs as a ROS2 node and contains an Ollama installation with Mistral as the LLM.
It also contains the website that is used to collect the user input.
Once the container is started, the entry point is directly the Ollama CLI.
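Besides the interactive CLI, Ollama also exposes a REST API on its default port 11434. A minimal sketch of querying the Mistral model from Python (endpoint and field names follow the public Ollama API; the helper names here are illustrative, not part of this repository):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default REST endpoint


def build_payload(prompt: str, model: str = "mistral") -> bytes:
    """Serialize a single-shot (non-streaming) request for Ollama's /api/generate."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")


def ask_mistral(prompt: str) -> str:
    """Send the prompt to the local Ollama instance and return the response text."""
    req = request.Request(OLLAMA_URL, data=build_payload(prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This is useful for quick checks from inside the container without going through the ROS2 services.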
Navigate to llm_scene_docker/llm_files/ and execute:
python3 MainLLM.py
ros2 run pkg_website_llm website_llm
- Connect to the Docker container
- Enter in the terminal:
ros2 service call /user_interaction llm_interfaces/srv/UserInteraction "{}"
ros2 service call /scene_interpretation llm_interfaces/srv/SceneInterpretation "{user_input: 'TEST'}"
Please note: the {}-brackets should contain the ObjectDetections so that the website can display them.
- The terminal shows the user input.
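The exact request and response fields are defined by the .srv files in llm_interfaces. As an illustration only (the field names below are assumptions; check the actual interface package), a SceneInterpretation service taking the user input could look like:

```
# SceneInterpretation.srv (hypothetical field names)
string user_input
---
string response
```

The part above the `---` separator is the request, the part below is the response.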
Client:
- Open a new terminal
- Connect to the LLM_Docker container
- colcon build && source install/setup.bash
- Navigate to the folder: cd src/pkg_website_llm/pkg_website_llm/
- python3 ActionClientToPreProcessing.py
Server:
- Open a new terminal
- Connect to the LLM_Docker container
- colcon build && source install/setup.bash
- Navigate to the folder: cd /src/pkg_llm_docker/pkg_llm_docker
- python3 LLM_Action_Server.py
- Connect to the Docker container
- Enter in the terminal:
ros2 run pkg_pack_item_server pack_item_server
- The result is hard-coded:
['Box_Gluehlampe', 'Box_Wischblatt', 'Keilriemen_gross', 'Box_Bremsbacke', 'Keilriemen_klein', 'Tuete']
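Since the server currently returns this fixed list, downstream code can treat it as the set of known item classes. A small sketch (the helper name is illustrative, not part of the repository) of filtering arbitrary detection labels down to these classes:

```python
# Item classes the pack_item_server currently returns (hard-coded).
KNOWN_ITEMS = ['Box_Gluehlampe', 'Box_Wischblatt', 'Keilriemen_gross',
               'Box_Bremsbacke', 'Keilriemen_klein', 'Tuete']


def filter_known(detections):
    """Keep only detection labels that are known item classes,
    preserving the original detection order."""
    return [d for d in detections if d in KNOWN_ITEMS]
```

Once the server computes results dynamically, this filter can stay as a sanity check against unexpected labels.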