Out of sort memory, consider increasing server sort buffer size (errno 1038) for Long Posts on PlanetScale Database
orware opened this issue · 9 comments
This particular issue has already been fixed in MySQL 8.0.28 as described here:
https://bugs.mysql.com/bug.php?id=105304
But it is something that can be experienced on versions of MySQL prior to that (which currently includes PlanetScale databases, since we are internally running MySQL 8.0.23 as of today's date).
There is a workaround available for adjusting the sort_buffer_size value for the current database session, which has worked for other developers using PlanetScale as their database:
https://stackoverflow.com/questions/29575835/error-1038-out-of-sort-memory-consider-increasing-sort-buffer-size
However, we may want to expose a configuration value for this to Beam users, so that they do not have to make any core customizations to the Beam source code.
This primarily came up via a report from a PlanetScale user who was experiencing trouble with their Beam installation after importing a long post; the issue appeared to arise from the length of the post itself.
Error encountered:
{
  "user_facing_error": "None",
  "kind": "QueryError(Server(ServerError {
    "code": 1038,
    "message": "target: beam.-.primary: vttablet: rpc error: code = ResourceExhausted desc = Out of sort memory, consider increasing server sort buffer size (errno 1038) (sqlstate HY001) (CallerID: planetscale-admin): Sql: \"select Post.id, Post.title, Post.contentHtml, Post.createdAt, Post.hidden, Post.authorId, aggr_selection_0_Comment._aggr_count_comments from Post left join (select `Comment`.postId, COUNT(*) as _aggr_count_comments from `Comment` group by `Comment`.postId) as aggr_selection_0_Comment on Post.id = aggr_selection_0_Comment.postId where Post.hidden = :v1 order by beam.Post.createdAt desc limit :v3, :v2\", BindVars: {#maxLimit: \"type:INT64 value:\\\"100001\\\"\", v1: \"type:INT64 value:\\\"0\\\"\", v2: \"type:INT64 value:\\\"7\\\"\", v3: \"type:INT64 value:\\\"0\\\"\"}",
    "state": "HY001"
  }))"
}
Further context:
After several tests, I realized it is not about how many posts there are; it is actually about whether there is a super long post.
For example, you could import https://danluu.com/nothing-works/ as a Beam post. Then your index page would throw this error.
The only exception is when you set POSTS_PER_PAGE to 1; then your index page (page 1) works, but you cannot get to page 2. So I think it would be super helpful if Beam could limit the length of a post (currently there is a limit, but it may be too loose to avoid the MySQL bug).
Limiting the length of the post, however, may not be necessary if the current workaround of increasing the sort_buffer_size value could be implemented within Beam itself.
Generally, this is the value users running into this problem have been asked to set on their end:
set sort_buffer_size=2097152
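Since the workaround is scoped to the current session, the SET has to run on the same connection as the query that performs the large sort. A minimal sketch of one way to guarantee that with Prisma is an interactive transaction; this is only an illustration based on the query in the error above, not Beam's actual code, it assumes PlanetScale accepts the SET inside a transaction, and older Prisma versions may require the interactiveTransactions preview feature:

```ts
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

// Sketch: run the session-level SET and the heavy query on the same
// connection by wrapping both in an interactive transaction.
async function fetchPostsWithLargerSortBuffer() {
  return prisma.$transaction(async (tx) => {
    // Session variables only affect the connection they are set on.
    await tx.$executeRaw`SET sort_buffer_size = 2097152`

    // Modeled loosely on the query from the error message above.
    return tx.post.findMany({
      where: { hidden: false },
      orderBy: { createdAt: 'desc' },
      take: 7,
      include: { _count: { select: { comments: true } } },
    })
  })
}
```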
I am facing this same issue
@orware Do you have any insight into when PlanetScale will move to 8.0.28 or above? I'm not sure what our upgrade processes are.
I reached out to support at PlanetScale for this issue and this is what they sent me:
Hello Ansh!
Thank you for reporting this problem that you ran into on your end!
This issue is something our users have been running into every so often. It is generally triggered by JSON columns in the schema and is related to a MySQL bug that one of the engineers on our team reported back in October; the fix should be included as part of MySQL 8.0.28 (https://bugs.mysql.com/bug.php?id=105304). (Another Stack Overflow thread that discusses the same sort of issue can be found here as well, if interested: https://stackoverflow.com/questions/29575835/error-1038-out-of-sort-memory-consider-increasing-sort-buffer-size)
I think the only factor still allowing it to occur is that the PlanetScale platform is currently running MySQL 8.0.23. (I don't have an exact ETA for when we will update to the current version of MySQL on our platform, but it likely won't be too long now, since we've discussed this issue a few times already and the problem was recently fixed in MySQL.)
If you have the option of running a SET query after you set up your connection on your end, you should be able to work around the problem in the meantime by increasing the sort_buffer_size value dynamically for your current connection/session using something like the following:
set sort_buffer_size=2097152
It's definitely an odd problem, but I hope the workaround above does the trick for now (and we'll solve it more permanently once we deploy MySQL 8.0.28+ on the PlanetScale platform in the not too distant future).
Please let me know if this helps to resolve the issue for you!
I was able to get it to work in TablePlus (a database GUI tool) by using the set sort_buffer_size=2097152 command.
I'm also experiencing this issue, so I guess the fix is just to go to the console and run the command above? I tried it via the PlanetScale console (yes, you can access a console for the database directly in the dashboard; there is a page for it). The command executed successfully but didn't resolve the issue. What did resolve it was simply not selecting certain big JSON fields in the table, which is not what we want.
The SET query currently would need to be integrated into the Beam code somewhere so that it executes right after connections are established, rather than run separately via the console. The console approach works only for that web console session, not for the application more generally, since the setting has to be adjusted for every connection/session.
One potential thought might be to add a line like the following:
await prisma.$executeRaw`set sort_buffer_size=2097152`;
right after the line where the prisma variable is created within the prisma.ts file.
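For reference, here is a rough sketch of what that could look like in prisma.ts. The SORT_BUFFER_SIZE environment variable is a hypothetical addition (Beam does not read it today), and the actual file contents differ between Beam versions:

```ts
// lib/prisma.ts (sketch only; the real Beam file looks different)
import { PrismaClient } from '@prisma/client'

export const prisma = new PrismaClient()

// Hypothetical opt-in knob for deployments on MySQL < 8.0.28 (e.g. PlanetScale).
const sortBufferSize = Number(process.env.SORT_BUFFER_SIZE ?? 0) // e.g. 2097152

if (sortBufferSize > 0) {
  // Fire-and-forget so module evaluation does not need top-level await.
  // Note: Prisma keeps a pool of connections, so this statement only
  // affects whichever pooled connection it happens to run on.
  prisma
    .$executeRawUnsafe(`SET sort_buffer_size=${sortBufferSize}`)
    .catch((err) => console.warn('Failed to set sort_buffer_size:', err))
}
```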
I tried it, but I'm not sure how to tell if it works, since I'm still getting the same error.
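One way to check whether the SET actually took effect is to read the session variable back on a live connection; a sketch is below. Keep in mind that Prisma pools connections, so a SET issued once at startup only applies to the connection it ran on, which may be why the error still shows up:

```ts
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function checkSortBufferSize() {
  // Reads the session value on whatever connection serves this query.
  // The MySQL default is typically 262144 (256 KB); if that comes back
  // instead of 2097152, the SET did not reach this connection.
  const rows = await prisma.$queryRaw<{ size: bigint | number }[]>`
    SELECT @@sort_buffer_size AS size
  `
  console.log('sort_buffer_size for this connection:', rows[0]?.size)
}

checkSortBufferSize().finally(() => prisma.$disconnect())
```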
If you are also a PlanetScale user, you can file a ticket with us over at https://support.planetscale.com/ and I can work with you more closely on the issue and perform some extra troubleshooting together with your specific dataset.