FabricMC/yarn

Too many open files

Blayung opened this issue · 0 comments

Enigma crashes at startup with the following message:

java.nio.file.FileSystemException: /home/wojtek/programming/java/fabric/yarn/mappings/net/minecraft/client/gui/widget/ElementListWidget.mapping: Too many open files
	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:261)
	at java.base/java.nio.file.Files.newByteChannel(Files.java:379)
	at java.base/java.nio.file.Files.newByteChannel(Files.java:431)
	at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
	at java.base/java.nio.file.Files.newInputStream(Files.java:159)
	at java.base/java.nio.file.Files.newBufferedReader(Files.java:2897)
	at java.base/java.nio.file.Files.newBufferedReader(Files.java:2930)
	at net.fabricmc.mappingio.format.enigma.EnigmaDirReader$2.visitFile(EnigmaDirReader.java:92)
	at net.fabricmc.mappingio.format.enigma.EnigmaDirReader$2.visitFile(EnigmaDirReader.java:88)
	at java.base/java.nio.file.Files.walkFileTree(Files.java:2786)
	at java.base/java.nio.file.Files.walkFileTree(Files.java:2857)
	at net.fabricmc.mappingio.format.enigma.EnigmaDirReader.read(EnigmaDirReader.java:88)
	at net.fabricmc.mappingio.format.enigma.EnigmaDirReader.read(EnigmaDirReader.java:47)
	at net.fabricmc.mappingio.MappingReader.read(MappingReader.java:212)
	at cuchaz.enigma.translation.mapping.serde.MappingFormat.read(MappingFormat.java:127)
	at cuchaz.enigma.gui.GuiController.lambda$openMappings$2(GuiController.java:159)
	at cuchaz.enigma.gui.dialog.ProgressDialog.lambda$runOffThread$1(ProgressDialog.java:97)
	at java.base/java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:718)
	at java.base/java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:483)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)

I'm on Artix Linux, on the 1.21 branch, with OpenJDK 21.

I've never changed any default resource limits; ulimit -n reports 1024.

Increasing the limit to 4096 only changes which mapping file appears in the error. I don't think this is intended behaviour.
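
To be clear, I haven't read the mapping-io code, so this is only an illustrative sketch of the general pattern that produces "Too many open files" during a file-tree walk, and of the try-with-resources variant that avoids it. The class and method names (MappingWalkSketch, walkLeaky, walkSafe) and the readLine() body are made up for the example and are not claims about what EnigmaDirReader actually does:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class MappingWalkSketch {
    // Leaky pattern: every visited .mapping file leaves a BufferedReader open,
    // so a large enough tree eventually exhausts the per-process fd limit,
    // no matter what ulimit -n is set to.
    static void walkLeaky(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                if (file.toString().endsWith(".mapping")) {
                    BufferedReader reader = Files.newBufferedReader(file); // never closed
                    reader.readLine();
                }
                return FileVisitResult.CONTINUE;
            }
        });
    }

    // Safe pattern: try-with-resources closes each reader before the next file is
    // visited, so the number of open descriptors stays constant regardless of how
    // many mapping files the tree contains.
    static void walkSafe(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                if (file.toString().endsWith(".mapping")) {
                    try (BufferedReader reader = Files.newBufferedReader(file)) {
                        reader.readLine();
                    }
                }
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        walkSafe(Path.of(args[0]));
    }
}
```

If something along the lines of the leaky pattern is what's happening, that would also explain why raising the limit just moves the error to a different mapping file.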

mapNamedJar and decompileCFR work normally.
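
If it helps narrow this down, here's a small Linux-only snippet I could run alongside the mapping load to watch the descriptor count grow; it just counts the entries under /proc/self/fd. The FdCount name is purely for illustration, nothing like it ships with Enigma:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class FdCount {
    // Counts the file descriptors currently open in this JVM process (Linux only)
    // by listing /proc/self/fd. If the count keeps climbing while the mappings
    // directory is being read, handles are not being released.
    static long openFileDescriptors() throws IOException {
        try (Stream<Path> fds = Files.list(Path.of("/proc/self/fd"))) {
            return fds.count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("open fds: " + openFileDescriptors());
    }
}
```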