kolbytn/mindcraft

Bot not following commands

Closed this issue · 5 comments

Minecraft version: Minecraft 1.20.4
AI model: llama3

It's in the title - not many other details I can provide. The console registers that I've said something in chat; however, the bot itself won't do anything. "Follow me" and other simple commands don't get processed at all. It joins the game and does its own thing.

I have the same issue; it seems that with llama3 the bot doesn't take any input from the user!

yeah, it would be nice if they had more documentation on using llama3. I have no idea how to use it with these bots

the only thing it does is freak out when it sees a zombie

edit: I think I found the issue, which is that the ollama server seems to be constantly restarting (I have no idea why). Every time a new message is sent, the server starts loading something (you can see it if you have the console window open), takes around 2 minutes to handle one request, and then IMMEDIATELY RELOADS
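Side note on the constant reloading: one possible cause (an assumption, not confirmed in this thread) is ollama unloading the model from memory after its idle keep-alive window expires, forcing a full reload on the next request. Ollama's documented `OLLAMA_KEEP_ALIVE` environment variable controls how long a model stays resident; something like this might be worth trying before restarting `ollama serve`:

```shell
# Assumption: the reloads are ollama evicting the model after idle time.
# OLLAMA_KEEP_ALIVE is a documented ollama environment variable that sets
# how long a loaded model stays in memory (e.g. "1h", or "-1" for forever).
export OLLAMA_KEEP_ALIVE=1h

# Confirm the variable is set before restarting the server (ollama serve).
echo "$OLLAMA_KEEP_ALIVE"
```

If the reloads persist with a long keep-alive, the cause is more likely memory pressure (the model not fitting in RAM/VRAM) or the agent-recreation behavior described below.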

edit2: Ok, I think I got it figured out: every time the user sends a message, it creates a new agent (instead of reusing the same one, for some reason) and sends the starting prompts again:

received message from BorbTheBird : hello there
selected examples:
zZZn98: come here
brug: Remember that your base is here.
Awaiting local response... (model: llama3)
Messages: (intents here)

If you are using the small Llama3 (8B) it will be quite difficult to get it to work as intended. It just does not have enough training to handle the complexity. Try with the 70B model. Even then, most of the issue lies in the prompting and not using function calling when needed.

Not quite sure if changing the model will indeed solve the issue; plus, I wouldn't know how to set another model, to be honest.
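For what it's worth, a sketch of switching models, assuming mindcraft picks the model from a `model` field in the bot's profile JSON (check your own profile file for the exact field name, as this is an assumption). Pull the larger model first with `ollama pull llama3:70b`, then point the profile at it (rest of the profile omitted here):

```json
{
  "name": "andy",
  "model": "llama3:70b"
}
```

Note the 70B model needs far more RAM/VRAM than the 8B one, so it may not be a practical option on the same hardware that already struggles with 8B.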

My experiments with llama3 8b have resulted in mostly poor behavior. It fails on any but the simplest commands. If the model is taking a long time, it may be because your computer hardware is not capable of running it. We included support for local models since it was a highly requested feature, but most local models will not perform as well as larger models.

I'm closing this issue, as it seems to be an issue with the choice of model / the hardware you're running it on.