myshell-ai/AIlice

The code AIlice wrote always has invalid syntax.


ASSISTANT_api_requestor:  Apologies for the error in my previous response. Here's the corrected version of Python code:

```!PYTHON<!|
import requests

post_data = '{"token": "thisisatoken"}'
url = "http://****/get_cur_tasks"

response = requests.post(url, json=post_data)

if response.status_code == 200:
    result_json = response.json()
else:
    error_message = f"POST request failed with status code: {response.status_code}"
    result_json = {"error": error_message}

# Ensure result_json is stringified before assigning to a variable
result_json_str = str(result_json)

# Properly store the result as a string
!VAR<!|"post_response", result_json_str|!>;
```
SYSTEM_api_requestor:  PYTHON_RESULT=[  File "/tmp/tmpk8yktbgk", line 19
    !VAR<!|"post_response", result_json_str
    ^
SyntaxError: invalid syntax

Execution completed.

]

ASSISTANT_api_requestor:  I apologize for the mistake in my previous code. Here's the corrected Python snippet:

```!PYTHON<!|
import requests

post_data = '{"token": "thisisatoken"}'
url = "http://****/get_cur_tasks"

response = requests.post(url, json=post_data)

if response.status_code == 200:
    result_json = response.json()
else:
    error_message = f"POST request failed with status code: {response.status_code}"
    result_json = {"error": error_message}

# Stringify the JSON object and store it in a variable
result_json_str = str(result_json)

# Properly save the stringified JSON to a variable using the correct syntax
!VAR <!| "post_response", result_json_str |!>;
```
SYSTEM_api_requestor:  PYTHON_RESULT=[  File "/tmp/tmpzl7x9snh", line 19
    !VAR <!| "post_response", result_json_str 
    ^
SyntaxError: invalid syntax

Execution completed.

]

I think `!VAR <!| "post_response", result_json_str |!>` is a command meant to start an agent, but it looks like it was executed as Python code.

It's a bit odd; VAR is an experimental feature within the researcher role, designed to let the LLM create variables as a form of long-term memory. This snippet doesn't appear to have been written by a coder but rather by a researcher, and it mixes Python code with internal function-call statements. Is this the output from running on GPT-4? If it's from an open-source model, it's not very surprising, as they tend to produce all kinds of errors; with GPT-4, such situations are rare. Also, if convenient, please provide the specific task details.
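For clarity, what should have happened is two separate steps rather than one mixed block: the code block contains plain Python only, and the VAR statement is issued afterwards as a standalone function call on the returned text. A hand-written sketch of that pattern (not actual AIlice output; the exact flow may differ):

```!PYTHON<!|
import requests

# Plain Python only; no AIlice call syntax inside the code block.
response = requests.post("http://****/get_cur_tasks",
                         json={"token": "thisisatoken"})
if response.status_code == 200:
    result_json_str = str(response.json())
else:
    result_json_str = f"POST request failed with status code: {response.status_code}"

# Print the result so it appears in PYTHON_RESULT for the agent to read.
print(result_json_str)
```

Then, in a separate message outside any code block, the agent would store the returned text:

!VAR<!|"post_response", "<the printed result>"|!>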

Yes, I used an open-source model. I didn't save the logs, though, so the two snippets above are what was left in my terminal output. I'll try again with GPT-4.

Also, GPT-4 is really expensive. Would it be possible to use cheaper open-source models for most conversations and add an error-correction mechanism? When a command fails several times, seek help from a more advanced LLM to correct the result.
I think this could save a lot of money.
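Roughly what I have in mind, as a sketch (the callables are placeholders, nothing from AIlice's actual code):

```python
from typing import Callable, Tuple

def run_with_escalation(
    task: str,
    cheap_llm: Callable[[str], str],             # wraps an open-source model
    strong_llm: Callable[[str], str],            # wraps GPT-4
    execute: Callable[[str], Tuple[bool, str]],  # runs code, returns (ok, output)
    max_retries: int = 3,
) -> str:
    """Try the cheap model first; escalate after repeated failures."""
    for _ in range(max_retries):
        ok, output = execute(cheap_llm(task))
        if ok:
            return output
        # Feed the failure back so the cheap model can try to self-correct.
        task = f"{task}\nPrevious attempt failed with: {output}"
    # Still failing: hand the accumulated error context to the stronger model.
    ok, output = execute(strong_llm(task))
    return output if ok else f"Escalation also failed: {output}"
```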

Considering the rapid evolution of LLMs, we won't implement much error tolerance for current models (perhaps the interpreter will eventually hint at some typical syntax errors, but that wouldn't be sufficient for open-source LLMs with limited reasoning capabilities). However, AIlice provides a fine-tuning mechanism that uses exported execution history to fine-tune open-source models. The limited experiments I've conducted show that this approach is indeed effective. Yet, due to some unsatisfactory aspects of the current solution, it hasn't been publicly documented. If you're interested, you can explore the code in the "finetuning" folder.
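Conceptually, the exported history is just turned into supervised training pairs. A simplified illustration of the idea (not the actual code in the finetuning folder; the record schema here is made up):

```python
import json

def history_to_jsonl(history, out_path):
    """Turn a recorded conversation into prompt/completion training pairs.

    `history` is a list of {"role": ..., "msg": ...} dicts; this record
    format is illustrative, not AIlice's actual schema.
    """
    with open(out_path, "w") as f:
        context = []
        for turn in history:
            if turn["role"].startswith("ASSISTANT"):
                # Everything seen so far is the prompt; the reply is the target.
                sample = {"prompt": "\n".join(context), "completion": turn["msg"]}
                f.write(json.dumps(sample) + "\n")
            context.append(f"{turn['role']}: {turn['msg']}")
```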

Furthermore, we are considering supporting separate LLM configurations for different types of agents. That way, some text-processing tasks can be assigned to open-source models while the main tasks are handled by GPT-4, saving some token consumption. Overall, AIlice is a functional extension framework for future open-source LLMs. That "future" is not distant and may well materialize this year or the next. Until then, our primary development efforts are focused on unleashing the capabilities of LLMs confined within text.
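In configuration terms, it might look something like this (purely illustrative; the agent names, model names, and the option itself are hypothetical):

```python
# Hypothetical per-agent model table; no such option exists in AIlice yet.
AGENT_MODELS = {
    "main":       "gpt-4",            # core reasoning and task decomposition
    "coder":      "gpt-4",            # code generation needs strong reasoning
    "researcher": "local-13b-model",  # placeholder for an open-source model
    "summarizer": "local-7b-model",   # cheap text processing
}

def model_for(agent_type: str) -> str:
    # Fall back to the strongest model for unknown agent types.
    return AGENT_MODELS.get(agent_type, "gpt-4")
```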