neph1/LlamaTale

anything story, MUD mode, error after logging in as admin

sigmareaver opened this issue · 4 comments

Based on the wiki, I changed the stories/anything/story_config.json to show:

    "supported_modes": [
        "MUD"
    ],

And started the application with the command: python -m tale.main --game stories/anything/ --mode mud

After creating the admin account and logging in as admin, the web client printed the error below:

Welcome to the Land of Anything…

Message-of-the-day:


This is the Message-of-the-day. This game is still under heavy development. Please report any problems you may find. (the MOTD will only be shown when running in MUD mode, Interactive Fiction stories don’t have any use for a MOTD message).



* internal error (please report this):


-------------------------------------------------------
 CRASH OCCURRED! TIMESTAMP: 2024-06-25 17:01:52.677285
-------------------------------------------------------
 EXCEPTION: ConnectionError
 MESSAGE: HTTPConnectionPool(host='localhost', port=5001): Max retries exceeded with url: /api/v1/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0599e674d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
 Extended stacktrace follows (most recent call last):
   ----

File "/mnt/Storage/LlamaTale/tale/driver_mud.py", line 370, in MudDriver.main_loop
Source code:
    self._continue_dialog(conn, dialog, response)

   ----

File "/mnt/Storage/LlamaTale/tale/driver.py", line 393, in MudDriver._continue_dialog
Source code:
    why, what = dialog.send(message or None)

Local values:
    conn = <tale.player.PlayerConnection object at 0x7f0599e5dc90>
    dialog = <generator object MudDriver._login_dialog_mud_create_admin at 0x7f0599da96c0>
    message = 'a900760o'
    self = <tale.driver_mud.MudDriver object at 0x7f0599fa4150>
   ----

File "/mnt/Storage/LlamaTale/tale/driver_mud.py", line 169, in MudDriver._login_dialog_mud_create_admin
Source code:
    yield from self._login_dialog_mud(conn)  # continue with the normal login dialog

Local values:
    conn = <tale.player.PlayerConnection object at 0x7f0599e5dc90>
    email = 'traikanossi@yahoo.com'
    gender = 'm'
    name = 'trai'
    password = 'a900760o'
    password2 = 'a900760o'
    race = 'human'
    self = <tale.driver_mud.MudDriver object at 0x7f0599fa4150>
    stats = <Stats: {'gender': 'm', 'level': 1, 'xp': 0, 'hp': 5, 'max_hp': 5, 'maxhp_dice': '', 'ac': 0, 'wc': 0, 'attack_dice': '', 'alignment': 0, 'bodytype': <BodyType.HUMANOID: 'humanoid'>, 'language': 'English', 'weight': 72.0, 'size': <BodySize.HUMAN_SIZE   ...(truncated to 250)
   ----

File "/mnt/Storage/LlamaTale/tale/driver_mud.py", line 300, in MudDriver._login_dialog_mud
Source code:
    conn.player.look(short=False)  # force a 'look' command to get our bearings

Local values:
    account = <tale.accounts.Account object at 0x7f0599e64650>
    conn = <tale.player.PlayerConnection object at 0x7f0599e5dc90>
    existing_player = None
    name = 'trai'
    name_info = <tale.charbuilder.PlayerNaming object at 0x7f0599e5ecd0>
    prompt = ''
    self = <tale.driver_mud.MudDriver object at 0x7f0599fa4150>
    successful_login = True
   ----

File "/mnt/Storage/LlamaTale/tale/player.py", line 121, in Player.look
Source code:
    self.tell(look_text, end=True, evoke=evoke)

Local values:
    evoke = True
    look_paragraphs = ['[Transit]', 'A phone booth taking you to your story.']
    look_text = '[Transit]\nA phone booth taking you to your story.'
    self = <Player 'trai' #18 @ 0x7f0599e5e090, privs:wizard>
    short = False
   ----

File "/mnt/Storage/LlamaTale/tale/player.py", line 81, in Player.tell
Source code:
    msg, rolling_prompt = mud_context.driver.llm_util.evoke(message,

Local values:
    __class__ = <class 'tale.player.Player'>
    alt_prompt = ''
    end = True
    evoke = True
    extra_context = ''
    format = True
    message = '[Transit]\nA phone booth taking you to your story.'
    self = <Player 'trai' #18 @ 0x7f0599e5e090, privs:wizard>
    short_len = False
   ----

File "/mnt/Storage/LlamaTale/tale/llm/llm_utils.py", line 110, in LlmUtil.evoke
Source code:
    text = self.io_util.synchronous_request(request_body, prompt=prompt, context=story_context.to_prompt_string())

Local values:
    alt_prompt = ''
    cached_look = ''
    extra_context = ''
    message = '[Transit]\nA phone booth taking you to your story.'
    output_template = 'Original:[ {message}] Generated:{text}'
    prompt = 'You are a creative game keeper for a role playing game (RPG). You craft detailed worlds and interesting characters with unique and deep personalities for the player to interact with. Do not acknowledge the task, just perform it.<context>{context}</c   ...(truncated to 250)
    request_body = {'stop_sequence': '', 'max_length': 1500, 'max_context_length': 4096, 'temperature': 0.5, 'top_k': 120, 'top_a': 0.0, 'top_p': 0.85, 'typical_p': 1.0, 'tfs': 1.0, 'rep_pen': 1.2, 'rep_pen_range': 256, 'sampler_order': [6, 0, 1, 3, 4, 2, 5], 'seed': -   ...(truncated to 250)
    rolling_prompt = '[Transit]\nA phone booth taking you to your story.'
    self = <tale.llm.llm_utils.LlmUtil object at 0x7f0599fa4390>
    short_len = False
    skip_history = True
    story_context = <tale.llm.contexts.EvokeContext.EvokeContext object at 0x7f0599e5f310>
    text_hash_value = 283959820103946396630120370924901724239
    trimmed_message = '[Transit]\nA phone booth taking you to your story.'
   ----

File "/mnt/Storage/LlamaTale/tale/llm/llm_io.py", line 36, in IoUtil.synchronous_request
Source code:
    response = requests.post(self.url + self.endpoint, headers=self.headers, data=json.dumps(request_body))

Local values:
    context = 'Story context:; History:; '
    prompt = 'You are a creative game keeper for a role playing game (RPG). You craft detailed worlds and interesting characters with unique and deep personalities for the player to interact with. Do not acknowledge the task, just perform it.<context>{context}</c   ...(truncated to 250)
    request_body = {'stop_sequence': '', 'max_length': 1500, 'max_context_length': 4096, 'temperature': 0.5, 'top_k': 120, 'top_a': 0.0, 'top_p': 0.85, 'typical_p': 1.0, 'tfs': 1.0, 'rep_pen': 1.2, 'rep_pen_range': 256, 'sampler_order': [6, 0, 1, 3, 4, 2, 5], 'seed': -   ...(truncated to 250)
    self = <tale.llm.llm_io.IoUtil object at 0x7f059a0a9bd0>
   ----

File "/mnt/teraskasi/anaconda3/lib/python3.11/site-packages/requests/api.py", line 115, in post
Source code:
    return request("post", url, data=data, json=json, **kwargs)

Local values:
    data = '{"stop_sequence": "", "max_length": 1500, "max_context_length": 4096, "temperature": 0.5, "top_k": 120, "top_a": 0.0, "top_p": 0.85, "typical_p": 1.0, "tfs": 1.0, "rep_pen": 1.2, "rep_pen_range": 256, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "seed":    ...(truncated to 250)
    json = None
    kwargs = {'headers': {}}
    url = 'http://localhost:5001/api/v1/generate'
   ----

File "/mnt/teraskasi/anaconda3/lib/python3.11/site-packages/requests/api.py", line 59, in request
Source code:
    return session.request(method=method, url=url, **kwargs)

Local values:
    kwargs = {'data': '{"stop_sequence": "", "max_length": 1500, "max_context_length": 4096, "temperature": 0.5, "top_k": 120, "top_a": 0.0, "top_p": 0.85, "typical_p": 1.0, "tfs": 1.0, "rep_pen": 1.2, "rep_pen_range": 256, "sampler_order": [6, 0, 1, 3, 4, 2, 5],   ...(truncated to 250)
    method = 'post'
    session = <requests.sessions.Session object at 0x7f0599e67390>
    url = 'http://localhost:5001/api/v1/generate'
   ----

File "/mnt/teraskasi/anaconda3/lib/python3.11/site-packages/requests/sessions.py", line 589, in Session.request
Source code:
    resp = self.send(prep, **send_kwargs)

Local values:
    allow_redirects = True
    auth = None
    cert = None
    cookies = None
    data = '{"stop_sequence": "", "max_length": 1500, "max_context_length": 4096, "temperature": 0.5, "top_k": 120, "top_a": 0.0, "top_p": 0.85, "typical_p": 1.0, "tfs": 1.0, "rep_pen": 1.2, "rep_pen_range": 256, "sampler_order": [6, 0, 1, 3, 4, 2, 5], "seed":    ...(truncated to 250)
    files = None
    headers = {}
    hooks = None
    json = None
    method = 'post'
    params = None
    prep = <PreparedRequest [POST]>
    proxies = {}
    req = <Request [POST]>
    self = <requests.sessions.Session object at 0x7f0599e67390>
    send_kwargs = {'timeout': None, 'allow_redirects': True, 'proxies': OrderedDict(), 'stream': False, 'verify': True, 'cert': None}
    settings = {'proxies': OrderedDict(), 'stream': False, 'verify': True, 'cert': None}
    stream = None
    timeout = None
    url = 'http://localhost:5001/api/v1/generate'
    verify = None
   ----

File "/mnt/teraskasi/anaconda3/lib/python3.11/site-packages/requests/sessions.py", line 703, in Session.send
Source code:
    r = adapter.send(request, **kwargs)

Local values:
    adapter = <requests.adapters.HTTPAdapter object at 0x7f0599e64ad0>
    allow_redirects = True
    hooks = {'response': []}
    kwargs = {'timeout': None, 'proxies': OrderedDict(), 'stream': False, 'verify': True, 'cert': None}
    request = <PreparedRequest [POST]>
    self = <requests.sessions.Session object at 0x7f0599e67390>
    start = 1719356512.67547
    stream = False
   ----

File "/mnt/teraskasi/anaconda3/lib/python3.11/site-packages/requests/adapters.py", line 519, in HTTPAdapter.send
Source code:
    raise ConnectionError(e, request=request)

Local values:
    cert = None
    chunked = False
    conn = <urllib3.connectionpool.HTTPConnectionPool object at 0x7f0599e65ad0>
    proxies = OrderedDict()
    request = <PreparedRequest [POST]>
    self = <requests.adapters.HTTPAdapter object at 0x7f0599e64ad0>
    stream = False
    timeout = Timeout(connect=None, read=None, total=None)
    url = '/api/v1/generate'
    verify = True

 EXCEPTION HERE: ConnectionError: HTTPConnectionPool(host='localhost', port=5001): Max retries exceeded with url: /api/v1/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f0599e674d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
-------------------------------------------------------

Please report this problem. 

Thinking about it, I hadn't yet started the LLM API, so that's most likely the cause. My apologies for the needless issue.

It might be a good idea to have the MUD fail gracefully here, though, and print a message telling the user it can't connect to the LLM, as a reminder that the LLM API needs to be started first.
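A minimal sketch of that graceful fallback, assuming the `IoUtil.synchronous_request` shape seen in the traceback (the constructor defaults and the fallback message here are invented for illustration, not LlamaTale's actual code):

```python
import json

import requests


class IoUtil:
    """Sketch: return a notice instead of crashing when the LLM backend is down."""

    def __init__(self, url: str = "http://localhost:5001",
                 endpoint: str = "/api/v1/generate"):
        self.url = url
        self.endpoint = endpoint
        self.headers = {}

    def synchronous_request(self, request_body: dict, **kwargs) -> str:
        try:
            response = requests.post(self.url + self.endpoint,
                                     headers=self.headers,
                                     data=json.dumps(request_body))
            response.raise_for_status()
        except requests.exceptions.ConnectionError:
            # Backend not reachable: tell the player instead of tearing
            # down the whole MUD session with an internal-error crash.
            return ("<LLM backend unreachable at %s -- start the LLM API "
                    "and try again>" % (self.url + self.endpoint))
        return response.text
```

Catching only `ConnectionError` keeps genuine bugs (bad JSON, HTTP 500s, etc.) visible while turning the "forgot to start the backend" case into a readable in-game message.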

Thanks for the suggestion. I'd be curious to know how running the MUD went. I've tested that it runs and that logging in works, but that's about it.

Since I'm not very familiar with how LlamaTale works, I didn't do much except look at commands. However, based on the description, I was expecting the Anything story to ask me questions to build the world. I got a generated message about the phone booth, but that was all there seemed to be.

There were a couple of other issues as well.

  1. When running the MUD for LAN/WAN access, I had to enter my LAN adapter's IP into self.mud_host = "", otherwise the port wasn't accessible even from other LAN devices. It might be useful to add that step to the Running MUD wiki page as an optional step for people who want to allow others to connect to the server.
  2. My friend was able to connect, but he said he got an error during character creation after entering an e-mail address. Unfortunately, I didn't get the error message from him, but I plan to investigate and, if need be, will open another issue for it.
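For reference, the binding change in item 1 amounts to something like the sketch below. The `mud_host` attribute name comes from the comment above; the surrounding class and the port value are invented. Binding to "0.0.0.0" listens on all interfaces, which avoids hard-coding one LAN adapter's IP:

```python
class ServerConfig:
    """Sketch of the host/port binding choice; not LlamaTale's actual config class."""

    def __init__(self):
        # "" or "localhost" typically binds only the loopback interface,
        # so other devices can't reach the port. "0.0.0.0" listens on
        # every interface (firewall rules permitting), so LAN/WAN
        # clients can connect without hard-coding an adapter IP.
        self.mud_host = "0.0.0.0"
        self.mud_port = 8180  # invented example port
```
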

Hmm. The Anything story is designed for single-player/IF play. You could try generating it as an IF story, then save it and load the save in mud mode.