Bug
Baxtaal opened this issue · 1 comment
Baxtaal commented
Describe the bug
https://download.llamameta.net/*?Policy=eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjoiam04eGkyYmFrazg1ZXlidWgyNTkyc3hvIiwiUmVzb3VyY2UiOiJodHRwczpcL1wvZG93bmxvYWQubGxhbWFtZXRhLm5ldFwvKiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcyMDgzNDYyNH19fV19&Signature=lSX4SKpZcq7ETDRzMumlFn7U1ZowSty4G0jU7JEknUqXQzFPqNLnW7YIMshrzDgFj7578vmNfksTE9gLyUsemLeVTSPPD7HUdU-7JMnLGXk-7RuFLVw0nb8TfoiaSVzvRh%7EcEFbQFo%7EGTQJVUBjymjLgckDKzGSBEg7ucHfyXxHGM6owNR2kyrTgzzBdZuTH-HylctWIcPw0rawErXbezZ0WLxlJjgsgRM5D%7EhuLM6fakAMWEhPRI6v6ZlR1RdtTV2RhWuumcEhPH3rJSKxjvIN%7E%7EIW-W7OMEghgTIUsaEFhyUWwANu1MXP0nBg73IQuQQh-qs-x%7EOkiTZrmorJJlg__&Key-Pair-Id=K15QRJLYKIFSLZ&Download-Request-ID=463582516392063
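The link above is a CloudFront-style signed URL: its `Policy` query parameter is base64-encoded JSON that carries the link's expiry time. A minimal sketch (assuming the CloudFront convention of substituting `-`, `_`, `~` for `+`, `=`, `/` in the encoded policy) to decode it and check whether the link is still valid:

```python
import base64
import json
import time

# The Policy query parameter copied from the URL above.
policy_b64 = "eyJTdGF0ZW1lbnQiOlt7InVuaXF1ZV9oYXNoIjoiam04eGkyYmFrazg1ZXlidWgyNTkyc3hvIiwiUmVzb3VyY2UiOiJodHRwczpcL1wvZG93bmxvYWQubGxhbWFtZXRhLm5ldFwvKiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcyMDgzNDYyNH19fV19"

# Undo the URL-safe substitutions ('-' -> '+', '_' -> '=', '~' -> '/'),
# then pad to a multiple of 4 before decoding.
std = policy_b64.replace("-", "+").replace("_", "=").replace("~", "/")
policy = json.loads(base64.b64decode(std + "=" * (-len(std) % 4)))

statement = policy["Statement"][0]
expiry = statement["Condition"]["DateLessThan"]["AWS:EpochTime"]
print("Resource:", statement["Resource"])
print("Expires (epoch):", expiry)
print("Expired:", time.time() > expiry)
```

If the decoded `AWS:EpochTime` is in the past (this one, 1720834624, falls in July 2024), the signed link has expired and the download endpoint will refuse it; a freshly issued link would be needed.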
Minimal reproducible example
Output
Runtime Environment
- Model: [eg: llama-2-7b-chat]
- Using via huggingface?: [yes/no]
- OS: [eg. Linux/Ubuntu, Windows]
- GPU VRAM:
- Number of GPUs:
- GPU Make: [eg: Nvidia, AMD, Intel]
Additional context
Add any other context about the problem or environment here.
Baxtaal commented
Bug