2.0: unable to use AI models
Closed this issue · 5 comments
zhiyu1998 commented
jqjhl commented
xiaoxue508 commented
Same problem here.
zhiyu1998 commented
I tried to learn a bit about how Tampermonkey userscripts work, and found two issues on the author's side that may need fixing:

- The author seems to use Tampermonkey's `GM_xmlhttpRequest` for requests, but for some reason requests to OpenAI just fail; I only discovered this after I switched to axios. I don't know whether the author wants to keep the current request mechanism, so I haven't opened a PR.

The original request code is in openai.ts:
```ts
export function request<TContext, TResponseType extends ResponseType = "json">({
  method = "POST",
  url = "",
  data = "",
  headers = {},
  timeout = 5,
  responseType = "json" as TResponseType,
  onStream = () => {},
}: RequestArgs<TContext, TResponseType>) {
  return new Promise<TContext>((resolve, reject) => {
    const abort = GM_xmlhttpRequest<TContext, TResponseType>({
      method,
      url,
      data,
      headers,
      timeout: timeout * 1000,
      responseType,
      ontimeout() {
        if (axiosLoad) axiosLoad();
        reject(new RequestError(`超时 ${Math.round(timeout / 1000)}s`));
      },
      onabort() {
        if (axiosLoad) axiosLoad();
        reject(new RequestError("用户中止"));
      },
      onerror(e) {
        const msg = `${e.responseText} | ${e.error}`;
        if (axiosLoad) axiosLoad();
        reject(new RequestError(msg));
      },
      onloadend(e) {
        if (axiosLoad) axiosLoad();
        resolve(e.response);
      },
      onloadstart(e) {
        axiosLoad = loader({ ms: timeout, color: "#F79E63" });
        if (responseType === "stream") {
          const reader = (e.response as ReadableStream<Uint8Array>).getReader();
          onStream(reader);
        }
      },
    });
  });
}
```
Then I changed the `message` and `post` logic to use axios instead; what follows is just a demo:
```ts
class gpt extends llm<openaiLLMConf> {
  constructor(conf: openaiLLMConf, template: string | prompt) {
    super(conf, template);
  }
  async chat(message: string) {
    const res = await this.post({ prompt: this.buildPrompt(message) });
    logger.debug("--------" + res.choices);
    return res?.choices[0]?.message?.content || "";
  }
  async message({
    data = {},
    onPrompt = (s: string) => {},
    onStream = (s: string) => {},
    json = false,
  }): Promise<messageReps> {
    const prompts = this.buildPrompt(data);
    const prompt = prompts[prompts.length - 1].content;
    onPrompt(prompt);
    const decoder = new TextDecoder("utf-8");
    let stream = "";
    const ans: messageReps = { prompt };
    const res = await this.post({
      prompt: prompts,
      json,
      onStream: (reader) => {
        reader.read().then(function processText({ done, value }): any {
          // stop when the stream is closed, even if no "[DONE]" arrived
          if (done) return;
          const s = decoder.decode(value);
          const sl = s.split("\n");
          for (let i = 0; i < sl.length; i++) {
            const line = sl[i];
            if (line.startsWith("data: ")) {
              const data = line.slice(6);
              if (data === "[DONE]") {
                return;
              }
              const json = JSON.parse(data).choices[0];
              const content = json.delta.content;
              if (content) {
                onStream(content);
                stream += content;
              } else if (json.finish_reason === "stop") {
                ans.usage = {
                  input_tokens: json.usage?.prompt_tokens,
                  output_tokens: json.usage?.completion_tokens,
                  total_tokens: json.usage?.total_tokens,
                };
                return;
              }
            }
          }
          return reader.read().then(processText);
        });
      },
    });
    if (!this.conf.advanced.stream) {
      ans.content = res?.choices[0]?.message?.content;
      ans.usage = {
        input_tokens: res?.usage?.prompt_tokens,
        output_tokens: res?.usage?.completion_tokens,
        total_tokens: res?.usage?.total_tokens,
      };
    } else {
      ans.content = stream;
    }
    return ans;
  }
  private async post({
    prompt,
    onStream,
    json = false,
  }: {
    prompt: prompt;
    onStream?: OnStream;
    json?: boolean;
  }): Promise<any> {
    logger.debug("开始请求");
    const res = await axios.post(
      this.conf.url + "/v1/chat/completions",
      {
        messages: prompt,
        model: this.conf.model,
        temperature: this.conf.advanced.temperature,
      },
      {
        headers: {
          Authorization: `Bearer ${this.conf.api_key}`,
          "Content-Type": "application/json",
        },
        timeout: this.conf.other.timeout,
      }
    );
    logger.debug("请求结束");
    logger.debug(res);
    return res?.data;
  }
}
```
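As a side note, the `data:`-line parsing inside `onStream` can be pulled out into a pure helper, which makes the SSE handling easy to test in isolation. A rough sketch (the helper name `parseSSEChunk` and its result shape are mine, not from the project):

```typescript
// Minimal sketch of the "data: " line parsing used in the demo above.
// parseSSEChunk is a hypothetical name; the project does not define it.
interface SSEResult {
  content: string; // concatenated delta text found in this chunk
  done: boolean;   // true once "data: [DONE]" is seen
}

function parseSSEChunk(chunk: string): SSEResult {
  let content = "";
  let done = false;
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6);
    if (payload === "[DONE]") {
      done = true;
      break;
    }
    const choice = JSON.parse(payload).choices[0];
    if (choice.delta?.content) content += choice.delta.content;
  }
  return { content, done };
}
```

The reader loop then just decodes each chunk, appends `content` to the running stream, and stops when `done` is true.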
- The default timeout is far too short: 120ms. Come on, how fast is the author's network? Budget players are in tears here. I'd suggest the author raise it, to something like 10000ms.
To sum up: the problem seems to lie in the request and the timeout.
Ocyss commented
You can first try editing the script in the manager: in the editor there is a script-settings panel on the right, where you can add the API address to the XHR security user whitelist and see if that helps.
GM_xmlhttpRequest is an extension-level request, so it can bypass some cross-origin restrictions, but it is also stricter, which means some APIs get blocked by the extension's security checks (normally a popup asks you to approve them).
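Besides the whitelist in the settings UI, a Tampermonkey script can also declare the allowed hosts up front in its metadata block via `@connect`, so `GM_xmlhttpRequest` calls to those hosts are permitted without a per-request prompt. A sketch, with an example host (adjust to whatever API address you actually use):

```ts
// ==UserScript==
// @name     example
// @grant    GM_xmlhttpRequest
// @connect  api.openai.com
// ==/UserScript==
```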
Switching to axios is completely fine. We moved from axios to GM_xmlhttpRequest, but the axios dependency actually hasn't been fully cleaned out yet. Still, the best option is probably switching to native fetch, which might require even fewer changes.
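A native-fetch version could look roughly like the sketch below. Keeping the request assembly in a pure helper makes it testable without hitting the network; `buildChatRequest` and the config field names are my invention here, not the project's actual API:

```typescript
// Hypothetical helper: assembles the arguments for a fetch() call to a
// chat-completions endpoint. Config field names are illustrative only.
interface ChatConf {
  url: string;
  api_key: string;
  model: string;
  temperature: number;
}

interface BuiltRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildChatRequest(
  conf: ChatConf,
  messages: Array<{ role: string; content: string }>
): BuiltRequest {
  return {
    url: conf.url + "/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${conf.api_key}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        messages,
        model: conf.model,
        temperature: conf.temperature,
      }),
    },
  };
}

// Usage (actual network call, so commented out here):
// const { url, init } = buildChatRequest(conf, prompts);
// const res = await fetch(url, init);
// const reader = res.body!.getReader(); // streaming reads work like the demo above
```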
The timeout is 120s; two minutes is plenty fast for streaming, even a direct connection over a proxy is more than enough, and there are also plenty of mirror sites in China to use.