Issues
#21 analyzecmd: Final output from multi-stage summarisation should read as if the summary was computed without intermediate summarisation (opened by SeanHeelan)
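For context, multi-stage summarisation typically summarises each chunk of output and then summarises the partial summaries; #21 asks that the final result not betray that intermediate step. A minimal sketch, where `summarise` is a hypothetical stand-in for an LLM call (signature `str -> str`) and the final prompt wording is only illustrative:

```python
def multi_stage_summary(chunks, summarise):
    """Summarise each chunk, then merge the partial summaries.

    `summarise` stands in for an LLM call (hypothetical: str -> str).
    The final prompt asks for a summary that reads as if it were
    produced from the original output in a single pass.
    """
    # Stage 1: summarise each chunk independently.
    partials = [summarise(chunk) for chunk in chunks]
    merged = "\n".join(partials)
    # Stage 2: rewrite the partials into one seamless summary.
    return summarise(
        "Rewrite the following partial summaries as a single summary, "
        "with no reference to intermediate summarisation steps:\n" + merged
    )
```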
#19 findfaster: Add requirement that any suggestions must not use the target library as a dependency (opened by SeanHeelan)
#18 cmdanalysis: Experiment with detecting/including a header in data for chunk summaries (opened by SeanHeelan)
#17 cmdanalysis: Validate whether or not chunking of command output loses information (opened by SeanHeelan)
#16 debughost: Revisit how commands are generated, executed, and checked for correctness (opened by SeanHeelan)
#15 debughost: Allow user to specify a file containing a list of commands to run (opened by SeanHeelan)
#14 debughost: Does forcing the LLM to provide an explanation for why it is suggesting each command lead to more useful commands? (opened by SeanHeelan)
#13 Binary distribution for perf-copilot (opened by iogbole)
#12 cmdanalysis: Implement chunked handling of the output of commands which may exceed the token limit (opened by SeanHeelan)
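The chunking #12 describes can be sketched as splitting command output on line boundaries so that each chunk stays under a token budget. This is only a sketch: token counts are approximated here by whitespace-delimited words, whereas a real implementation would use the model's own tokenizer (e.g. tiktoken):

```python
def chunk_output(text: str, max_tokens: int = 1000) -> list[str]:
    """Split command output into chunks that each stay within max_tokens.

    Token counts are approximated by whitespace-delimited words; swap in
    the model tokenizer for accurate counts. Splits on line boundaries so
    no single line is broken mid-way (a line longer than max_tokens still
    gets its own chunk).
    """
    chunks: list[list[str]] = [[]]
    count = 0
    for line in text.splitlines(keepends=True):
        line_tokens = max(1, len(line.split()))
        # Start a new chunk if this line would push us over the budget.
        if count + line_tokens > max_tokens and chunks[-1]:
            chunks.append([])
            count = 0
        chunks[-1].append(line)
        count += line_tokens
    return ["".join(c) for c in chunks]
```

Because the split preserves line boundaries and keeps line endings, concatenating the chunks reconstructs the original output exactly, which makes it easy to test whether chunking itself loses information (the question raised in #17).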
#8 debughost: LLM does not differentiate between commands that should be run on the host being debugged versus on another host (opened by SeanHeelan)
#5 topn: LLM not identifying all possible problems given a list of Python functions (opened by SeanHeelan)
#3 Add functionality to count tokens in queries and responses and log them for debugging purposes (opened by SeanHeelan)
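The token accounting #3 asks for can be a thin wrapper around whatever tokenizer the project uses. A hedged sketch, with a crude whitespace approximation as the default counter (in practice you would pass the model tokenizer's encode-and-count function instead):

```python
import logging


def log_token_usage(query: str, response: str, count_tokens=None) -> dict:
    """Count tokens in a query/response pair and log them for debugging.

    count_tokens defaults to a whitespace-word approximation; pass the
    real tokenizer (e.g. a tiktoken-based counter) for accurate numbers.
    """
    if count_tokens is None:
        count_tokens = lambda s: len(s.split())
    usage = {
        "query_tokens": count_tokens(query),
        "response_tokens": count_tokens(response),
    }
    usage["total_tokens"] = usage["query_tokens"] + usage["response_tokens"]
    # Log at debug level so normal runs stay quiet.
    logging.debug("token usage: %s", usage)
    return usage
```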
#2 Investigate whether the multiple prompts in findfaster can be integrated into one (opened by SeanHeelan)