StdErr being truncated, can we increase maximum size?
glmcdona opened this issue · 4 comments
glmcdona commented
I'm working on a non-standard submission for Lux, and whenever I submit an experiment I get an error. Downloading the logs from the submission lets me see the StdErr for debugging, but it's truncated, so much of the stack trace is lost.
Example truncated match output:
[
[
{
"duration": 0.005138,
"stdout": "",
"stderr": "Traceback (most recent call last):\n File \"/opt/conda/lib/python3.7/site-packages/kaggle_environments/agent.py\", line 43, in get_last_callable\n code_object = compile(raw, path, \"exec\")\n File \"/kaggle_simulations/agent/main.py\", line 1\n /kaggle_simulations/agent/main.py\n ^\nSyntaxError: invalid syntax\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/conda/lib/python3.7/site-packages/kaggle_environments/agent.py\", line 157, in act\n action = self.agent(*args)\n File \"/opt/conda/lib/python3.7/site-packages/kaggle_environments/agent.py\", line 123, in callable_agent\n agent = get_last_callable(raw_agent, path=raw) or raw_agent\n File \"/opt/conda/lib/python3.7/site-packages/kaggle_environments/agent.py\", line 64, in get_last_callable\n raise InvalidArgument(\"Invalid raw Python: \" + repr(e))\nkaggle_environments.errors.InvalidArgument: Invalid raw Python: SyntaxError('invalid syntax', ('/kaggle_simulations/agent/main.py', 1, 1, '/kagg"
}
]
]
Could the maximum length be increased significantly to make debugging easier?
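For context, the truncation presumably happens where the simulation captures each agent's stdout/stderr and slices it to a fixed length before writing the log. A minimal sketch of that kind of cap (the 1024-character limit and the function name are assumptions for illustration, not the actual kaggle_environments source):
import traceback

def format_agent_log(exc, max_log_length=1024):
    # Render the full traceback, then cut it to a fixed number of characters.
    stderr = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    # The slice below is what chops the stack trace off mid-line, as in the
    # example output above.
    return {"stdout": "", "stderr": stderr[:max_log_length]}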
SiggyF commented
I ran into the same issue. I solved it in my branch by making the length configurable in Agent.__init__:
self.max_log_length = self.configuration.get('max_log_length', 1024)
and in the Agent.act function:
max_log_length = self.max_log_length
I then instantiate an environment like this:
config = {"seed": 42, "loglevel": 2, "annotations": True, "max_log_length": 2048}
env = kaggle_environments.make('lux_ai_2021', configuration=config)
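Put together, a minimal sketch of the change (paraphrased; the surrounding class structure and the _run_agent helper are assumptions for illustration, not the actual kaggle_environments/agent.py source):
class Agent:
    def __init__(self, raw, configuration):
        self.raw = raw
        self.configuration = configuration
        # Read the cap from the environment configuration; fall back to the
        # previous hard-coded default of 1024 characters.
        self.max_log_length = self.configuration.get("max_log_length", 1024)

    def act(self, observation):
        max_log_length = self.max_log_length
        out, err = self._run_agent(observation)  # hypothetical helper
        # Truncate the captured streams to the configurable cap.
        return {"stdout": out[:max_log_length], "stderr": err[:max_log_length]}

    def _run_agent(self, observation):
        # Placeholder for invoking the agent and capturing its streams.
        return "", ""
With the configuration above, stderr entries can then grow to 2048 characters instead of the default cap.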
I'd be happy to submit it as a PR once #157 is accepted; my current implementation lives in a branch based on that one.
bovard commented