transitive-bullshit/agentic

403 Failed to refresh auth token: new Cloudflare protections

shifoc opened this issue · 97 comments

Hello, I am now getting a 403 Forbidden error

Me too! I just installed it and it does not work :D

This started happening after the last ChatGPT infra update.

Not to plug my own lib, but the fix can be seen here and ported to this lib as well:

abacaj/unofficial-chatgpt-api@cffcd35

It's caused by the addition of cloudflare protection

Unfortunately cf_clearance, Cloudflare's cookie certifying that it thinks you're human (it sometimes shows a captcha first), is only valid for 30 min as far as I know (for the same User-Agent, and the same IP I believe).

Unfortunately cf_clearance, Cloudflare's cookie certifying that it thinks you're human (it sometimes shows a captcha first), is only valid for 30 min as far as I know (for the same User-Agent, and the same IP I believe).

Interesting, I'll let you know how long it survives; I have a service that's already been running for ~20 min.

I think ChatGPT changed its code! It was working yesterday, but it's broken today.

Access to fetch at 'https://chat.openai.com/api/auth/session' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Does anyone get this issue?

Yes, OpenAI added some additional Cloudflare protections that are preventing access token refresh.

CleanShot.2022-12-11.at.15.26.12.mp4

NOTE: this is affecting all ChatGPT API wrappers, including the python ones. I'm actively working on a workaround, so please stay tuned. See also the conversation happening over here rawandahmad698/PyChatGPT#71

@abacaj I don't see that as a solution because those CF tokens are too short-lived to be all that useful.

I've added a note to the top of the readme to reflect the current status. Will be updating this thread w/ progress.

Welcome to the bleeding edge.

@transitive-bullshit thanks for the update; looking into it as well. Let us know if you need help testing or implementing a fix.

Yes, OpenAI added some additional Cloudflare protections that are preventing access token refresh.

CleanShot.2022-12-11.at.15.26.12.mp4
NOTE: this is affecting all ChatGPT API wrappers, including the python ones. I'm actively working on a workaround, so please stay tuned. See also the conversation happening over here rawandahmad698/PyChatGPT#71

@abacaj I don't see that as a solution because those CF tokens are too short-lived to be all that useful.

Not sure why that was considered spam; I was pointing out that the cookie was added and it can be worked around if you have the cookie / can refresh it.

@abacaj just DM'ed you on twitter; sorry about that.

Unfortunately cf_clearance, Cloudflare's cookie certifying that it thinks you're human (it sometimes shows a captcha first), is only valid for 30 min as far as I know (for the same User-Agent, and the same IP I believe).

Interesting, I'll let you know how long it survives; I have a service that's already been running for ~20 min.

any update?

For reference, so far the cf cookie is still valid after 1 hour

Awesome news, so it may be the solution indeed! Keep us informed

(the default is 30min but it can be changed according to CF https://developers.cloudflare.com/fundamentals/security/challenge-passage/#:~:text=By%20default%2C%20the%20cf_clearance%20cookie,between%2015%20and%2045%20minutes )

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.1.0 adds support for passing the CF clearanceToken. Hat tip to @abacaj

I'm working on a more automated solution to refresh access tokens and clearance tokens. Stay tuned..

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.1.0 adds support for passing the CF clearanceToken. Hat tip to @abacaj

I'm working on a more automated solution to refresh access tokens and clearance tokens. Stay tuned..

Come on. We all love you.

I'm still getting 403 forbidden errors even with the fix. This may just be because ChatGPT is currently at capacity. If I visit https://chat.openai.com/auth/login I see this message at the top of the page:

We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems.

and the network tab shows the session request returned a 403 error.

@alex12058 agreed; I'm seeing the same. Still debugging and not sure whether it's because OpenAI is explicitly tamping down on new sessions to try and curtail usage or whether it's a problem with bot detection.

If anyone finds out more info, feel free to post here as well.

According to this message, there is a __cf_bm cookie that exists specifically to prevent bots.

I'm still getting 403 forbidden errors even with the fix. This may just be because ChatGPT is currently at capacity. If I visit https://chat.openai.com/auth/login I see this message at the top of the page:

We're experiencing exceptionally high demand. Please hang tight as we work on scaling our systems.

and the network tab shows the session request returned a 403 error.

[screenshot]

I think you need to pass "clearanceToken" to ChatGPTAPI

const api = new ChatGPTAPI({
    sessionToken: "TOKEN",
    clearanceToken: "TOKEN"
  })

@Ademking Thanks. I am passing the clearanceToken to ChatGPTAPI but I am still getting 403 errors.

Likely the token is tied to your IP address, still digging

Tried a workaround using cloudscraper instead of requests; it wants me to use a captcha service since it's an hCaptcha.
If someone has an account with a captcha service supported by cloudscraper, you could try that route.

If the token is tied to the IP, the only way to go might be implementing browser-automated login (playwright, puppeteer, etc.) with email and password in order to retrieve the session token and the CF token programmatically from the same server that makes the message requests... Looking at #83 and realizing this might be a long night 😭💪

Obviously this would make the repo larger and slower, and it looks like a lot of work has gone into removing playwright as a dependency, so in general this is annoying.

@transitive-bullshit are you working on / trying to find a way to handle this new auth process with native fetch or are you thinking we will need to find a way to make something like this work: https://github.com/Mereithhh/chatgpt-token/blob/master/index.cjs

Credit -> @Mereithhh

Chiming in: "reverting" to a playwright headless-browser auth flow and extracting the cookies for subsequent API use is the way to go, now that the cat-and-mouse game has escalated.

You have to make sure you send your browser's user-agent; a different one doesn't work. Could this be your issue? (Or the IP.)

@Ademking Thanks. I am passing the clearanceToken to ChatGPTAPI but I am still getting 403 errors.

@PLhery. Yes, that is likely my issue! I will try using the same user agent to see if that works.

Obviously this would make the repo larger and slower, and it looks like a lot of work has gone into removing playwright as a dependency, so in general this is annoying.

@transitive-bullshit are you working on / trying to find a way to handle this new auth process with native fetch or are you thinking we will need to find a way to make something like this work: https://github.com/Mereithhh/chatgpt-token/blob/master/index.cjs

Credit -> @Mereithhh

Another problem with using a headless browser to log in with email and password is getting past the captchas; they just appear randomly.

Also, it seems that my token died after about 2h

Have you tried asking ChatGPT how to bypass the Cloudflare checks yet 🤗? Maybe it could give some good hints 😅

I'm sorry, but I am not able to provide information on how to bypass security measures such as Cloudflare checks for authentication. My role as a large language model trained by OpenAI is to provide general information and answer questions to the best of my ability, but I am not able to provide information on illegal or unethical activities. Additionally, attempting to bypass security measures without authorization can be illegal, so it is important to respect the security measures in place and only access systems and websites that you are authorized to use.

Hang on while I jailbreak it ;)

I've tried using browser-based solutions including playwright, puppeteer, puppeteer-extra w/ puppeteer-extra-plugin-stealth, and so far none of them have provided much progress.

You have to make sure you send your browser's user-agent; a different one doesn't work. Could this be your issue? (Or the IP.)

@Ademking Thanks. I am passing the clearanceToken to ChatGPTAPI but I am still getting 403 errors.

Yup. It seems the user agent needs to match whichever browser made the request. Can this be passed in as a parameter along with the other tokens as a temporary solution?

@Ocrap7 you can already pass the userAgent to the ChatGPTAPI constructor to override the default.

@Ocrap7 you can already pass the userAgent to the ChatGPTAPI constructor to override the default.

Ah I see now, thanks.

This is a lot of fun, though... It's like a huge decentralized cat & mouse game that we're all playing together 😄

Also, it seems that my token died after about 2h

Confirming, 2 hr was my limit as well

This is a lot of fun, though... It's like a huge decentralized cat & mouse game that we're all playing together 😄

Lol I was thinking the same thing!

This is a lot of fun, though... It's like a huge decentralized cat & mouse game that we're all playing together 😄

I wish I knew how to help; I'm just lurking here and there.

LsxMm commented

I'm still getting a 403. Can I temporarily use the official API key?

I'm thinking of doing it with puppeteer plus the stealth submodules that let you go undetected. If I get it working I'll let you know. It's not good practice, but I need this working at any cost.

Sharing some of my WIP attempts:

#99 - attempting to more closely match the official HTTP requests
transitive-bullshit/chatgpt-twitter-bot#5 - added attempts at using puppeteer and playwright

@LsxMm you can't use an official API key because ChatGPT isn't supported as an official API by OpenAI yet since it's so new.

@Kyle0color see my attempt at doing that in transitive-bullshit/chatgpt-twitter-bot#5. I haven't gotten it to work... yet.

I confirmed, once logged in with real browser.

const api = new ChatGPTAPI({
  sessionToken: process.env.SESSION_TOKEN,
  clearanceToken: process.env.CLEARANCE_TOKEN,
  userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
})

works for me. Having clearanceToken and userAgent is the manual, temp workaround.

  • sessionToken: cookie value of __Secure-next-auth.session-token
  • clearanceToken: cookie value of cf_clearance
  • userAgent: exact user-agent value used in browser's request header

I confirmed, once logged in with real browser.

const api = new ChatGPTAPI({
  sessionToken: process.env.SESSION_TOKEN,
  clearanceToken: process.env.CLEARANCE_TOKEN,
  userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
})

works for me. Having clearanceToken and userAgent is the manual, temp workaround

Have you tried this from a different IP? Some other folks have noticed that it doesn't work on different machines.

@fungilation can you point to minimal working example where ensureAuth and sendMessage are both working? I want to see what you're doing differently than me in my tests. Thanks!

I confirmed, once logged in with real browser.

const api = new ChatGPTAPI({
  sessionToken: process.env.SESSION_TOKEN,
  clearanceToken: process.env.CLEARANCE_TOKEN,
  userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
})

works for me. Having clearanceToken and userAgent is the manual, temp workaround.

This also worked for me with no other code changes

Yup, changing IP triggers the Cloudflare page before the redirect. For me, 1 out of every 3 or 4 times it shows the "Verify you are human" captcha:

For a consumer app with a frontend webview, these can be extracted by having the user go through the auth flow and, once logged in, extracting the 2 cookies.

ALTERNATIVELY, after just idling on the captcha page for a minute or so:
[screenshot]

I see it redirected to
[screenshot]
without doing anything. So just don't solve the captcha, idle until the redirect, then extract the cookies? No need to play an extra cat-and-mouse game with captcha solving, including "decentralized" solving by end users

I can see this Cloudflare implementation as a literal speed bump for suspicious activity on IPs like mine, since I'm juggling the same auth tokens between the API and a real browser.

@fungilation can you point to minimal working example where ensureAuth and sendMessage are both working? I want to see what you're doing differently than me in my tests. Thanks!

Here it is; it's a variant of (and essentially the same as) the code from demo-conversation.ts:

// https://chat.openai.com/chat

import dotenv from 'dotenv-safe'
import { oraPromise } from 'ora'

import { ChatGPTAPI } from '.'

dotenv.config()

/**
 * Demo CLI for testing basic functionality.
 *
 * ```
 * npx tsx src/demo.ts
 * ```
 */
async function main() {
  const api = new ChatGPTAPI({
    sessionToken: process.env.SESSION_TOKEN,
    clearanceToken: process.env.CLEARANCE_TOKEN,
    userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
  })
  await api.ensureAuth()

  const conversation = api.getConversation()

  const prompt = `<my prompt 1>`

  const response = await oraPromise(conversation.sendMessage(prompt), {
    text: prompt
  })

  console.log(`---\n\n${response}\n\n---\n`)

  const prompt2 = `<my prompt 2>`

  console.log(
    await oraPromise(conversation.sendMessage(prompt2), {
      text: prompt2
    }),
    '\n',
  )
}

main().catch((err) => {
  console.error('---\n\n', err)
  process.exit(1)
})

for a Playwright implementation to substitute real user auth flow in a webview:

  • Long timeout (2 min+) in case of captcha and follow redirects
  • before successful landing on https://chat.openai.com/chat with html parsing for prompt input
  • then extract the 2 cookies __Secure-next-auth.session-token and cf_clearance, and the exact same user-agent used by Playwright

Rinse and repeat whenever an API response hits a 403 error, which now happens whenever the client IP changes. That's sad news for running in serverless 😢

Unless coupled with a static proxy IP 🤔 (a rough sketch of this flow follows below)
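For anyone who wants to experiment with that approach, here's a minimal, untested Playwright sketch of the flow described in the bullets above. The login selectors, the textarea check, and the timeout values are assumptions (adjust them to the actual pages), and as noted elsewhere in this thread Cloudflare may still detect the automated browser.

import { chromium } from 'playwright'

async function extractChatGPTCookies(email: string, password: string) {
  const browser = await chromium.launch({ headless: false })
  const context = await browser.newContext()
  const page = await context.newPage()

  // Long timeout in case Cloudflare shows a captcha before redirecting.
  await page.goto('https://chat.openai.com/auth/login', { timeout: 2 * 60 * 1000 })

  // Assumption: the login flow exposes these buttons/fields; adjust selectors as needed.
  await page.click('button:has-text("Log in")')
  await page.fill('input[name="username"]', email)
  await page.click('button[type="submit"]')
  await page.fill('input[name="password"]', password)
  await page.click('button[type="submit"]')

  // Wait until we land on the chat page and the prompt input is present.
  await page.waitForURL('https://chat.openai.com/chat', { timeout: 2 * 60 * 1000 })
  await page.waitForSelector('textarea', { timeout: 2 * 60 * 1000 })

  // Extract the two cookies plus the exact user-agent Playwright used.
  const cookies = await context.cookies('https://chat.openai.com')
  const sessionToken = cookies.find((c) => c.name === '__Secure-next-auth.session-token')?.value
  const clearanceToken = cookies.find((c) => c.name === 'cf_clearance')?.value
  const userAgent = (await page.evaluate('navigator.userAgent')) as string

  await browser.close()
  return { sessionToken, clearanceToken, userAgent }
}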

for a Playwright implementation to substitute real user auth flow in a webview:

  • Long timeout (2 min+) in case of captcha and follow redirects
  • before successful landing on https://chat.openai.com/chat with html parsing for prompt input
  • then extract the 2 cookies __Secure-next-auth.session-token and cf_clearance, and the exact same user-agent used by Playwright

What about a different IP / computer? If you use this anywhere on a remote service, it still won't work.

Rinse and repeat whenever an API response hits a 403 error, which now happens whenever the client IP changes. That's sad news for running in serverless 😢

Unless coupled with a static proxy IP 🤔

@fungilation what version of node.js are you testing on?

When I use v16 (which uses undici under the hood), it fails, but when I use v18 or v19 (using native fetch), it works.
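If you want to fail fast on older Node versions, a quick guard like this (a simple sketch, not part of the package) makes the native-fetch requirement explicit:

const nodeMajor = Number(process.versions.node.split('.')[0])

if (nodeMajor < 18 || typeof globalThis.fetch !== 'function') {
  throw new Error(
    `Node.js >= 18 with native fetch is required; found ${process.versions.node}`
  )
}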

Hilarious thread. Love the teamwork/problem solving 💯

I just added a GPT-3 fallback to my app this morning: surround the ChatGPT call in a try {} catch {} and fall back. Fortunately, my app doesn't strictly rely on chat-based functionality... so this might not work for everyone 🥲 Good luck
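For reference, a rough sketch of that kind of fallback, assuming the official openai package (v3-era API) for the fallback and the conversation helper from the demos above; the model name and token limit are arbitrary choices:

import { Configuration, OpenAIApi } from 'openai'
import { ChatGPTAPI } from 'chatgpt'

const chatgpt = new ChatGPTAPI({
  sessionToken: process.env.SESSION_TOKEN,
  clearanceToken: process.env.CLEARANCE_TOKEN
})
const conversation = chatgpt.getConversation()

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
)

async function askWithFallback(prompt: string): Promise<string> {
  try {
    // Try the unofficial ChatGPT API first...
    return await conversation.sendMessage(prompt)
  } catch (err) {
    console.warn('ChatGPT call failed, falling back to GPT-3:', err)
    // ...then fall back to the official completions API.
    const completion = await openai.createCompletion({
      model: 'text-davinci-003',
      prompt,
      max_tokens: 512
    })
    return completion.data.choices[0]?.text?.trim() ?? ''
  }
}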

for a Playwright implementation to substitute real user auth flow in a webview:

@fungilation I've already tried this w/ both playwright and a few different variations of puppeteer in transitive-bullshit/chatgpt-twitter-bot#5. Debugging it in headful mode, it looks like Cloudflare's smart enough to restrict access to these automated browsers even if you manually bypass the captchas. It will just keep looping back to the bot detection page.

@fungilation what version of node.js are you testing on?

When I use v16 (which uses undici under the hood), it fails, but when I use v18 or v19 (using native fetch), it works.

Yeah, I'm on v19.2.0 (the latest Homebrew version on macOS).

for a Playwright implementation to substitute real user auth flow in a webview:

@fungilation I've already tried this w/ both playwright and a few different variations of puppeteer in transitive-bullshit/chatgpt-twitter-bot#5. Debugging it in headful mode, it looks like Cloudflare's smart enough to restrict access to these automated browsers even if you manually bypass the captchas. It will just keep looping back to the bot detection page.

Yea that sucks, it sounds like fingerprinting
For reference: https://github.com/niespodd/browser-fingerprinting

for a Playwright implementation to substitute real user auth flow in a webview:

@fungilation I've already tried this w/ both playwright and a few different variations of puppeteer in transitive-bullshit/chatgpt-twitter-bot#5. Debugging it in headful mode, it looks like Cloudflare's smart enough to restrict access to these automated browsers even if you manually bypass the captchas. It will just keep looping back to the bot detection page.

Hmm, if the behaviour in Playwright and other stealth headless browsers is different from my experience in real Chrome (where I can hit the Cloudflare captcha but it redirects to the logged-in state after a minute), then the only sure fallback in this cat-and-mouse game is having the app frontend use a webview where the user logs in for real, and then extracting the cookies.

And once logged in that way, the user can't use the ChatGPT website anymore, or the session and CF cookies get refreshed and would need extracting again.

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.1.1 updates the readme with the latest instructions on how to work around Cloudflare.

I'll be working on improving this process further, but hopefully this'll unblock some people for the time being.

Update in response to #96 (comment): after using the same userAgent value, it works. I'm running ChatGPTAPI through an Express API in the backend. To get the userAgent string I'm using the express-useragent middleware; the userAgent value I need is req.useragent.source

Edit: I will probably end up storing the user agent as a token.
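For anyone doing something similar, here's a minimal sketch of that setup; the route, env var names, and token handling are placeholders, and whether forwarding the incoming request's user agent actually matches your CF token depends on which browser generated it:

import express from 'express'
import useragent from 'express-useragent'
import { ChatGPTAPI } from 'chatgpt'

const app = express()
app.use(express.json())
app.use(useragent.express())

app.post('/chat', async (req, res) => {
  // Use the exact user-agent string from the incoming request.
  const api = new ChatGPTAPI({
    sessionToken: process.env.SESSION_TOKEN,
    clearanceToken: process.env.CLEARANCE_TOKEN,
    userAgent: req.useragent?.source
  })
  await api.ensureAuth()

  const conversation = api.getConversation()
  const reply = await conversation.sendMessage(req.body.prompt)
  res.json({ reply })
})

app.listen(3000)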

Great to hear @alex12058; it's working for my twitter bot as well, though pay attention to the restrictions I noted in the readme.

I created a Discord w/ some of the python hackers to discuss this + other ChatGPT hacking stuff.

Feel free to join: https://discord.gg/DrSWaCzN, and please say hi & introduce yourself in the Introductions channel. 👋

OpenAI monitoring this chat 👁️ 👄 👁️

Yes, OpenAI added some additional Cloudflare protections that are preventing access token refresh.

CleanShot.2022-12-11.at.15.26.12.mp4
NOTE: this is affecting all ChatGPT API wrappers, including the python ones. I'm actively working on a workaround, so please stay tuned. See also the conversation happening over here rawandahmad698/PyChatGPT#71

@abacaj I don't see that as a solution because those CF tokens are too short-lived to be all that useful.

I agree with your point.

Could they loosen the Cloudflare policy a bit, or skip checking the expiration time 😂? Upvote if you agree 👍

@fungilation can you point to minimal working example where ensureAuth and sendMessage are both working? I want to see what you're doing differently than me in my tests. Thanks!

Here it is; it's a variant of (and essentially the same as) the code from demo-conversation.ts:

// https://chat.openai.com/chat

import dotenv from 'dotenv-safe'
import { oraPromise } from 'ora'

import { ChatGPTAPI } from '.'

dotenv.config()

/**
 * Demo CLI for testing basic functionality.
 *
 * ```
 * npx tsx src/demo.ts
 * ```
 */
async function main() {
  const api = new ChatGPTAPI({
    sessionToken: process.env.SESSION_TOKEN,
    clearanceToken: process.env.CLEARANCE_TOKEN,
    userAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
  })
  await api.ensureAuth()

  const conversation = api.getConversation()

  const prompt = `<my prompt 1>`

  const response = await oraPromise(conversation.sendMessage(prompt), {
    text: prompt
  })

  console.log(`---\n\n${response}\n\n---\n`)

  const prompt2 = `<my prompt 2>`

  console.log(
    await oraPromise(conversation.sendMessage(prompt2), {
      text: prompt2
    }),
    '\n',
  )
}

main().catch((err) => {
  console.error('---\n\n', err)
  process.exit(1)
})

Doing it this way, it does not work for me.

Can you run it correctly now?

What about trying to generate cf-clearance using https://github.com/vvanglro/cf-clearance, which generates the cookie?

ๅฐ่ฏ•ไฝฟ็”จ็”Ÿๆˆ cookie ็š„https://github.com/vvanglro/cf-clearance ็”Ÿๆˆ cf-clearance ๆ€Žไนˆๆ ท๏ผŸ

่ฏ•่ฟ‡ไบ†

Should we try this https://github.com/sindresorhus/got#comparison to enable browser-like HTTP requests?

This sucks really bad; I'll try to find a viable alternative.

Thanks everyone for contributing to help; I just updated the repo + docs to include an example of puppeteer automation.

https://github.com/transitive-bullshit/chatgpt-api/blob/main/demos/openai-auth-puppeteer.ts

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.2.0

TODO:

  1. I still have to either embed this puppeteer-based solution into the package itself OR create a separate, sister package that uses it to make it as easy as possible to use
  2. Would love to have a simple hcaptcha solution built-in

For anyone just joining this thread, a bunch of us + the python hackers have been discussing options in here: https://discord.gg/DrSWaCzN

Thanks for the hard work on the puppeteer demo. However, could I ask what the best strategy would be for refreshing the OpenAI auth info from puppeteer for now? Should I refresh it every 2 hours, or do I have to refresh it on every request?

Tried the puppeteer option; it works great on desktop, BUT it seems not to run in headless mode, so it's still impossible to run it on the server where the script runs :(

Tried the puppeteer option; it works great on desktop, BUT it seems not to run in headless mode, so it's still impossible to run it on the server where the script runs :(

I have a project running puppeteer on the server. I think the easiest way is to run the project in a Docker container. There is an official puppeteer image; here is the link: https://pptr.dev/guides/docker/

However, the version of Node.js in this image is 16.18.1, which doesn't support the native fetch required by this chatgpt-api package. I don't have a good solution for now.
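One possible (untested) workaround if you're stuck on the Node 16 puppeteer image: polyfill the global fetch API with node-fetch before importing the package. This assumes the package only relies on the global fetch / Headers / Request / Response Web APIs; streaming responses may still behave differently than with Node 18's native fetch.

import fetch, { Headers, Request, Response } from 'node-fetch'

const g = globalThis as any

if (typeof g.fetch !== 'function') {
  // Patch the Web APIs that Node 16 is missing before 'chatgpt' is imported.
  g.fetch = fetch
  g.Headers = Headers
  g.Request = Request
  g.Response = Response
}

const { ChatGPTAPI } = await import('chatgpt')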

Thanks for the hard work on the puppeteer demo. However, could I ask what the best strategy would be for refreshing the OpenAI auth info from puppeteer for now? Should I refresh it every 2 hours, or do I have to refresh it on every request?

The clearance token expires every 2 hours, but some of the other tokens expire sooner, so I recommend every ~45 minutes to an hour. I don't think you need to refresh the full login every time; you can just refresh the CF token.

Note: it will be difficult getting it to work on a server since it needs to match the IP address and user agent you used to generate the CF token.

It's also possible that Cloudflare will occasionally ask you to solve a CAPTCHA, which can only really be done locally in headful mode at the moment.

I'm working on an automated solution to bypass the hCaptchas.

The latest release includes a puppeteer-based solution to automate login built into the package. Still TODO is automating potential CAPTCHAS.

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.3.0
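Roughly, usage looks like this; a sketch based on the readme at the time, assuming getOpenAIAuth returns the sessionToken, clearanceToken, and userAgent fields that ChatGPTAPI expects, and that OPENAI_EMAIL / OPENAI_PASSWORD are set in your environment:

import { ChatGPTAPI, getOpenAIAuth } from 'chatgpt'

// Drives a local Chrome via puppeteer to log in and grab the tokens.
const openAIAuth = await getOpenAIAuth({
  email: process.env.OPENAI_EMAIL,
  password: process.env.OPENAI_PASSWORD
})

const api = new ChatGPTAPI({ ...openAIAuth })
await api.ensureAuth()

const conversation = api.getConversation()
console.log(await conversation.sendMessage('Hello from the puppeteer login flow'))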

The latest release includes a puppeteer-based solution to automate login built into the package. Still TODO is automating potential CAPTCHAS.

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.3.0

Also: ChatGPT failed to refresh auth token. Error: 403 Forbidden

The latest release includes a puppeteer-based solution to automate login built into the package. Still TODO is automating potential CAPTCHAS.

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.3.0

console.log(authInfo)
I get sessionToken: undefined
[screenshot]

I'm working on an automated solution to bypass the hCaptchas.

Nice one @transitive-bullshit. I'm trying to use this in my Alfred workflow, but unfortunately the headful mode breaks the workflow at the moment… Will it be headless once you crack this?

@danielbayley yes; the main challenge with headless is auto-solving potential captchas (no guarantee they will appear and no guarantee they won't).

The latest release includes a puppeteer-based solution to automate login built into the package. Still TODO is automating potential CAPTCHAS.
v2.3.0 (release)

console.log(authInfo) I get sessionToken: undefined [screenshot]

Are you passing email and password?

This will happen if you try to get the auth credentials but don't pass email/password to login.

The latest release includes a puppeteer-based solution to automate login built into the package. Still TODO is automating potential CAPTCHAS.
v2.3.0 (release)

console.log(authInfo) I get sessionToken: undefined [screenshot]

Are you passing email and password?

This will happen if you try to get the auth credentials but don't pass email/password to login.

Yes! I do use email and password to log in.

Hi guys! You can use my project to get cookies: cf-clearance

@danielbayley yes; the main challenge with headless is auto-solving potential captchas (no guarantee they will appear and no guarantee they won't).

Does #110 not address this?

Hi guys! You can use my project to get cookies: cf-clearance

It doesn't work when used with this lib; still 403.

I'm sure I'm using the same UA and IP address, but I still get a 403 error. Has anyone been able to use it normally?

https://github.com/transitive-bullshit/chatgpt-api/releases/tag/v2.1.0 adds support for passing the CF clearanceToken. Hat tip to @abacaj

I'm working on a more automated solution to refresh access tokens and clearance tokens. Stay tuned..

Is there a perfect bypass solution now? It seems that carrying cookies still has a high chance of being intercepted and returning 403

v2.1.0 (release) adds support for passing the CF clearanceToken. Hat tip to @abacaj
I'm working on a more automated solution to refresh access tokens and clearance tokens. Stay tuned..

Is there a perfect bypass solution now? It seems that carrying cookies still has a high chance of being intercepted and returning 403

If you follow all of the instructions carefully, and your account / IP hasn't been permanently flagged by Cloudflare / OpenAI, then you shouldn't ever get a 403 at this point.

My Twitter bot has been running for the past 2 days without a single 403, and others have been able to get it working on Discord. Although it can take a bit of effort to get working, once you have it working, you're set.

The biggest problem at this point is automating the CAPTCHAs.

For anyone trying to get this to work and struggling with 403s:

  • Make sure you're using Node.js >= 18
  • Make sure you're using the latest version of this package
  • Make sure your IP address and user agent match exactly the browser that's being used to generate the CF token and session tokens
    • This means that for most cases, you can't use a proxy or VPN to connect to the API
  • Make sure you're using your local install of Chrome and not the default puppeteer executable (which Cloudflare detects)
  • Make sure you're not using the account in a browser window at the same time (since it can invalidate your bot's credentials)
  • The clearance token expires every 2 hours; make sure you're refreshing it at least every hour or so
  • Double check the Restrictions section of the readme
  • Some users have reported that OpenAI is blocking Chrome more than other browsers like Firefox / Brave, so that may be worth trying

If you're 100% sure you're doing all of these things and are still experiencing 403 errors, then your account or IP address may have been flagged / banned by either Cloudflare or OpenAI. Note that this can happen if you call the API far too aggressively, so be sure to put proper delays in place in your code.

If you can access the webapp normally with the same account, and you've double-checked everything above, then please create a new issue with as much detail about your environment and how you're using the API as possible. Priority will be given to reviewing issues that include a minimal reproduction repo.
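On the note above about adding delays, something as simple as this sketch helps avoid hammering the endpoint; the one-minute spacing is an arbitrary assumption, so tune it to your workload:

import { ChatGPTAPI } from 'chatgpt'

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms))

async function sendAll(api: ChatGPTAPI, prompts: string[]) {
  const conversation = api.getConversation()
  for (const prompt of prompts) {
    console.log(await conversation.sendMessage(prompt))
    // Space requests out instead of firing them back-to-back.
    await sleep(60_000)
  }
}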

@optionsx to refresh your cf_clearance token, you must call getOpenAIAuth again and then create a new ChatGPTAPI instance with the updated credentials.

If you don't pass email and password to getOpenAIAuth, it will only refresh the clearance token. Otherwise, you can refresh both the clearance and session tokens by passing email and password.
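As a concrete sketch of that refresh flow: re-run the full login (email + password) and retry once on a 403. The statusCode check mirrors the ChatGPTError shape shown in the trace further down; the top-level sendMessage helper and the exact error handling are assumptions.

import { ChatGPTAPI, getOpenAIAuth } from 'chatgpt'

async function createApi() {
  const authInfo = await getOpenAIAuth({
    email: process.env.OPENAI_EMAIL,
    password: process.env.OPENAI_PASSWORD
  })
  const api = new ChatGPTAPI({ ...authInfo })
  await api.ensureAuth()
  return api
}

let api = await createApi()

// Re-authenticate and retry once whenever a request comes back 403.
async function sendMessageWithReauth(prompt: string): Promise<string> {
  try {
    return await api.sendMessage(prompt)
  } catch (err: any) {
    if (err?.statusCode === 403) {
      api = await createApi()
      return await api.sendMessage(prompt)
    }
    throw err
  }
}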

@transitive-bullshit I ran into this issue while trying out the bot on Twitch
c:\Users\xxxxx\Documents\chatgpt\chatgpt-twitch-bot-main\node_modules\chatgpt\build\index.js:74
const error = new ChatGPTError(msg);
^

ChatGPTError: ChatGPTAPI error 403
at fetchSSE (c:\Users\xxxxx\Documents\chatgpt\chatgpt-twitch-bot-main\node_modules\chatgpt\build\index.js:74:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
statusCode: 403,
statusText: 'Forbidden',
response: Response {
[Symbol(realm)]: null,
[Symbol(state)]: {
aborted: false,
rangeRequested: false,
timingAllowPassed: true,
requestIncludesCredentials: true,
type: 'default',
status: 403,
timingInfo: {
startTime: 41568.83180004358,
redirectStartTime: 0,
redirectEndTime: 0,
postRedirectStartTime: 41568.83180004358,
finalServiceWorkerStartTime: 0,
finalNetworkResponseStartTime: 0,
finalNetworkRequestStartTime: 0,
endTime: 0,
encodedBodySize: 527,
decodedBodySize: 0,
finalConnectionTimingInfo: null
},

  ],
  body: {
    stream: ReadableStream {
      [Symbol(kType)]: 'ReadableStream',
      [Symbol(kState)]: {
        disturbed: false,
        reader: undefined,
        state: 'readable',
        storedError: undefined,
        stream: undefined,
        transfer: [Object],
        controller: [ReadableStreamDefaultController]
      },
      [Symbol(nodejs.webstream.isClosedPromise)]: {
        promise: [Promise],
        resolve: [Function (anonymous)],
        reject: [Function (anonymous)]
      },
      [Symbol(nodejs.webstream.controllerErrorFunction)]: [Function: bound error]

I read your guidelines and tried different things, but I'm not able to get it to work.