coderabbitai/ai-pr-reviewer

Handle large PRs more effectively

pbjorklund opened this issue · 4 comments

Been using this internally for a while now; it's actually very useful.

Wondering if there is a way to make it less noisy for large PRs.

When it reviewed a PR that changed 400 files from @import to @use, it took... a while to wade through the comments.

All in all, a great project!

I have recently asked the bot to comment only on substantive issues. Let's see how it behaves over the next few days.

Advice: the default model configured in the action is gpt-3.5-turbo. Try changing it to gpt-4 if you have access to it. That model is expensive, but the results are much better overall.
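For reference, the model is set through the action's inputs in your workflow file. A rough sketch is below; the exact input names can differ between versions, so double-check them against the action's action.yml/README.

```yaml
# .github/workflows/ai-pr-reviewer.yml (sketch; input names are assumptions)
name: AI PR Review
on:
  pull_request:
  pull_request_review_comment:
    types: [created]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: coderabbitai/ai-pr-reviewer@latest
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        with:
          # assumed input name; the default points at gpt-3.5-turbo
          openai_heavy_model: gpt-4
```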

I tried GPT-4; it cost me $80 on the first day 🤣

Switched back to GPT-4 earlier today and decided to use only-summary. Please let me know your findings if you can remember to.

For summaries, gpt-3.5-turbo is good enough. We use it for summaries and gpt-4 for review; it's costing us in the $50-$80 ballpark each day.
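In workflow terms, that split looks roughly like this (again, treat the input names as assumptions and verify them against the action's documented inputs):

```yaml
      - uses: coderabbitai/ai-pr-reviewer@latest
        with:
          # assumed input names: a cheap model for summaries, gpt-4 for review comments
          openai_light_model: gpt-3.5-turbo
          openai_heavy_model: gpt-4
```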

Btw, my latest changes have improved the noise issue and suggestion accuracy. I will test with a variety of PRs this week and tweak it further.

For now, I have added an @openai: ignore keyword that can be used to skip large PRs, which is especially useful when people are still working on them. It's also helpful if you would like to skip PRs that refactor the code. An example is shown below.
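To use it, you would put the keyword in the PR description, along these lines (hypothetical example; check the README for exactly where the keyword is picked up):

```text
Migrate stylesheets from @import to @use (400 files).

@openai: ignore
```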