Issues
[Segmentation fault] python3 torchchat.py export stories15M --dtype fp32 --quantize '{"embedding": {"bitwidth": 4, "groupsize":32}, "linear:a8w4dq": {"groupsize" : 256}}' --output-pte-path stories15M.pte
#3588 opened by mikekgfb - 2
Evaluation results of llama2 with ExecuTorch
#3568 opened by l2002924700 - 0
Does the llama2 example on Android utilize HTP?
#3586 opened by CHNtentes - 1
How can I use ExecuTorch to deploy a model to a microcontroller, such as Infineon TC3xxx?
#3585 opened by AlexLuya - 0
Is Qwen in the roadmap?
#3583 opened by DzAvril - 7
ExecuTorch Build Problem
#3561 opened by emreaniloguz - 4
XNNPack Fails For `nn.MaxPool2d`
#3567 opened by kinghchan - 2
[method.cpp:825] Error setting input 0: 0x10
#3572 opened by mikekgfb - 6
Exporting Llama3's tokenizer
#3555 opened by vifi2021 - 4
What's the meaning of "Groupwise 4-bit (128)"?
#3559 opened by l2002924700 - 0
Can I run ExecuTorch on ARM Cortex-A53 processor?
#3541 opened by neverparadise - 10
KV cache manipulation?
#3518 opened by l3utterfly - 4
Memory issue during export_llama?
#3480 opened by antmikinka - 5
Issue running XNNPACK unit tests
#3311 opened by freddan80 - 3
converting llama3 models with added tokens
#3519 opened by l3utterfly - 2
Can it run in a Python virtual environment?
#3200 opened by tayloryoung-o - 3
load_method with only method name
#3198 opened by victoriapoghosian - 5
ERROR: Overriding output data pointer allocated by memory plan is not allowed.
#3528 opened by sunqijie0350 - 2
To-edge IR from a transformers library model
#3540 opened by mhs4670go - 0
Quantize Llava encoder
#3557 opened by iseeyuan - 0
Support Phi 3 model
#3550 opened by iseeyuan - 16
How can I convert llama3 safetensors to the .pth file needed for use with ExecuTorch?
#3303 opened by l3utterfly - 0
Executorch exported model produces gibberish: stories15M --dtype fp32 --quantize '{"embedding": {"bitwidth": 4, "groupsize":32}, "linear:a8w4dq": {"groupsize" : 256}}'
#3542 opened by mikekgfb - 12
Buck 2 Error on running ./install_requirements.sh
#3502 opened by gochaudhari - 2
Why is `torch.min` not ATen canonical?
#3517 opened by kinghchan - 1
Executorch reports a bug for pages and pages: [method.cpp:939] Overriding output data pointer allocated by memory plan is not allowed.
#3515 opened by mikekgfb - 0
`torch.max(input)` fails at XNNPACK runtime
#3516 opened by kinghchan - 9
Error when running inference for nanoGPT LLM example
#3465 opened by bryangarza - 6
[v0.2.1] Release Tracker
#3409 opened by dbort - 5
checkpoint str has no attribute 'get'
#3444 opened by antmikinka - 0
Downstream users have dependencies on CMake variables and internals, making CMake a compatibility surface
#3501 opened by mikekgfb - 5
exir "missing out vars"
#3443 opened by antmikinka - 0
Add bf16 kernel support
#3488 opened by lucylq - 7
Duplicate registration of quantization operators, e.g. quantized_decomposed::embedding_byte.out
#3370 opened by robell - 1
Error while building the ExecuTorch Android demo app
#3463 opened by tggmbi - 2
UNSTABLE Android / test-llama-app / mobile-job (android)
#3344 opened by huydhn - 4
Tagging ConstantArgument in delegation
#3278 opened by mhs4670go - 6
missing packages & incorrect package versions
#3430 opened by antmikinka - 3
Memory planner errors?
#3425 opened by mikekgfb - 4
UserWarning: Attempted to insert a get_attr Node .. when `to_backend` is called
#3276 opened by mhs4670go - 2
UNSTABLE trunk / test-coreml-delegate / macos-job
#3264 opened by huydhn - 1
Remove dump of model IR
#3280 opened by mikekgfb - 0
Build of ExecuTorch triggers a warning about too many arguments provided for a format string
#3189 opened by mikekgfb - 0