Pre-generated chunks causing issues when re-running chunk generation
Chunky is lagging out the server environment: it appears to loop and get stuck on the same task when processing chunks that have already been generated in that environment.
We are doing this because of the MC update to 1.21.1, and we have decided to expand the world borders. I have tested this on several servers, and as long as the server doesn't already have many region files, Chunky has done well and behaved as it always has for us. Wonderful work, and thank you for this amazing tool.
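For context, the pre-generation itself is just the standard Chunky console commands, roughly like the following (the world name and radius here are illustrative, not our exact values):

```
chunky world world
chunky center 0 0
chunky radius 10000
chunky start
```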
However, we are currently experiencing server TPS drops with no one online and nothing else going on. Here is a copy of a spark report image to support these findings.
Hi, this does not appear to be a bug, but rather a server performance issue. For performance issues while pre-generating, please read the FAQ first, and join the Discord server for support if you still have trouble and need help troubleshooting. I also recommend updating Paper and Chunky to the latest versions first, just in case. Otherwise, joining the Discord and sharing a spark profile link (not a screenshot) is ideal.
Just reporting: this is not typical of what I have seen from Chunky over the last 5 years of use on multiple servers. Paper is the latest as of a few days ago (they keep pushing updates, as Paper does, lol). Chunky is the latest version, and the server has 16 GB of RAM, though I can increase that if needed; what is recommended here? We aren't exactly running this off a home computer, to say the least... I am doubtful that server hardware is the issue; we can power through it, I just thought y'all should know. We don't pre-generate worlds while servers are live anyway. Again, I appreciate the development of this and the project's continued updates. Let me know if I can provide anything to help support this finding.
I am not sure if you are aware, but spark reports also contain information that people might not want out in public... So you might want to be careful about asking for that information in a public environment. It can be damaging to the users.