Issues
[BUG] Windows QAnything2 file vectorization fails
#513 opened by WalkeR-ZG - 23
[BUG] 2.0 fails to start, loops forever waiting for the backend to start
#492 opened by gu-feng - 1
[BUG] Cannot start; stuck waiting for the backend service, on both Windows and Linux
#517 opened by jacktpy - 1
[BUG] v2 fails to start
#497 opened by qazxswaaa - 3
Please document how to call custom external embedding and reranker models in the 2.0 system
#504 opened by gjfmlj - 7
[BUG] Several issues encountered so far
#510 opened by gotothehill - 1
How to obtain the frontend source code?
#508 opened by truthsun22 - 0
[BUG] When asking a question, the time passed to the LLM by default is not in the local timezone
#518 opened by batumgl - 0
[BUG] Uploading a docx file times out during processing
#514 opened by liangpn - 0
Poor comprehension when querying a knowledge base built from PDF content
#463 opened by sushushimoonif - 1
[BUG] 2.0 Docker deployment cannot parse files
#503 opened by gjfmlj - 1
Connection error: the LLM API access URL is incorrect
#511 opened by wangwenfeng0 - 2
2.0 installed via Mac keeps failing to start
#496 opened by newsyh - 2
[BUG] SQL error on INSERT IGNORE INTO User
#506 opened by luckyxue - 0
How can the file-parsing modules in V2 (e.g. pdf, docx) be used directly as a standalone service?
#507 opened by minervazz - 0
The license has changed to AGPL; will it change to Apache or another license later?
#500 opened by ningpp - 1
[BUG] Backend startup takes too long; across multiple tests the total time is always 805 seconds
#474 opened by wj920291253 - 1
[BUG] 2.0 frontend page throws errors
#495 opened by lucifer714 - 2
[BUG] 2.0 fails to start
#491 opened by fjzuser - 2
v2 source-code deployment tutorial
#490 opened by LixiangHello - 2
[BUG] Consistency problem with the rerank service?
#493 opened by kbaicai - 0
When will 2.0 be released?
#485 opened by shushenghong - 2
[BUG] Latest Python version cannot parse PDFs, even though the PDF model files are already downloaded
#480 opened by changqingla - 0
Can the Python version used by the frontend be changed?
#484 opened by stellaludai - 2
When will the frontend code be open-sourced?
#479 opened by ForeseeBear - 0
Deployment questions for the embedding and rerank models
#481 opened by ifromeast - 0
[BUG] Error when starting the Qanything-api service
#478 opened by GOOD-N-LCM - 0
When will develop 1.5.0 go live? Eagerly awaiting the update
#446 opened by fi5ee - 0
GPU memory usage doubles with multiple workers; how can they share a single copy of the model parameters?
#469 opened by charliedream1 - 0
The upload file size limit is 30M; how to raise it to 100M?
#467 opened by shaoupipi - 0
Optimize the Docker image size by deleting /model_repos/QAEnsemble/base
#466 opened by kbaicai - 1
[BUG] Parsing image-only PDF files raises an array index out of bounds error
#458 opened by zhq734 - 1
[BUG] Runtime error with dual 3060 GPUs
#464 opened by bennywan - 0
[E:onnxruntime:, sequential_executor.cc:514 ExecuteKernel] Non-zero status code returned while running Gather node. Name:'Gather_112' Status Message: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:376 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 12582912
#462 opened by sushushimoonif - 0
[BUG] <title>
#461 opened by jjwwno - 0
[BUG] GPU configuration for the Python version
#459 opened by CodeLyokoscj - 0
[BUG] Cannot call ollama; model not found
#452 opened by jkongWPJK - 0
Bug in the returned response
#455 opened by lycfight - 0
In llm_server_entrypoint.py there is model-url=localhost:10001; which file starts that service?
#451 opened by zhangwanyu2020 - 0
Where are the Qwen model files stored?
#453 opened by zhangwanyu2020 - 4