Mechanism logic update for managing files
Considering that most transpiled files are essentially very small and very numerous, it's not uncommon for the esm.sh server to hit its inode limit.
I've researched this online and found that a better way to handle this scenario is to use a database to store many small files together in one place, which reduces both the per-file overhead and the inode pressure.
I'm currently on vacation and just saw the notification about the inode issue, so if @ije can't make an fs driver or something similar to fix this, I'll look into it myself when I'm back in town.
Setting up a database could introduce more RAM usage, I think, but it would fix the inode issue for real this time.
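To make the idea concrete, here's a minimal sketch of what I mean (this is not esm.sh's actual code; bbolt is just one example of a single-file embedded key/value store): every transpiled module is stored as a key/value pair inside one database file, so the host filesystem only sees one inode no matter how many virtual files we keep.

```go
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// One database file on disk -> one inode, regardless of how many
	// virtual files are stored inside it.
	db, err := bolt.Open("esm-store.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Write a "file": the virtual path is the key, the contents the value.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("files"))
		if err != nil {
			return err
		}
		return b.Put(
			[]byte("react@18.2.0/index.js"),
			[]byte("export * from './cjs/react.js'"),
		)
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read it back, as an fs driver would on a cache hit.
	err = db.View(func(tx *bolt.Tx) error {
		data := tx.Bucket([]byte("files")).Get([]byte("react@18.2.0/index.js"))
		fmt.Printf("%s\n", data)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```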
Currently the server uses pnpm to install packages, and that takes up most of the inodes on the fs. How would that work with a DB?
The inode problem, as far as I'm aware, occurs because of the sheer number of files on the server. We JavaScript devs have a tendency to ship large numbers of small JS files, and all of these end up on your esm server, hitting the inode limit.
With a database-based file management system, we can stop hitting the inode limit by creating a virtual filesystem that stores all those files together inside a single file - that's how database servers generally store their data. A sketch of what that could look like is below.
Yes, it would increase memory usage, but we can allocate a swapfile for the time being if issues occur, or use a database engine that favors lower RAM usage at the cost of more disk access.
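Here's a hypothetical sketch of the "virtual filesystem" part (the names like `dbFS` are made up for illustration): the store is exposed through Go's standard `io/fs.FS` interface, so server code can keep "opening files" while everything actually lives in one database file. I'm using an in-memory map as a stand-in for the database lookups here.

```go
package main

import (
	"bytes"
	"fmt"
	"io/fs"
	"log"
	"time"
)

// dbFS maps virtual paths to file contents. In a real driver the map
// would be replaced by lookups against the database (e.g. a bbolt bucket).
type dbFS struct {
	blobs map[string][]byte
}

func (d dbFS) Open(name string) (fs.File, error) {
	data, ok := d.blobs[name]
	if !ok {
		return nil, &fs.PathError{Op: "open", Path: name, Err: fs.ErrNotExist}
	}
	return &memFile{name: name, Reader: bytes.NewReader(data)}, nil
}

// memFile satisfies fs.File for an in-memory blob.
type memFile struct {
	name string
	*bytes.Reader
}

func (f *memFile) Stat() (fs.FileInfo, error) { return fileInfo{f.name, f.Size()}, nil }
func (f *memFile) Close() error               { return nil }

// fileInfo is the minimal fs.FileInfo needed to back Stat.
type fileInfo struct {
	name string
	size int64
}

func (i fileInfo) Name() string       { return i.name }
func (i fileInfo) Size() int64        { return i.size }
func (i fileInfo) Mode() fs.FileMode  { return 0o444 }
func (i fileInfo) ModTime() time.Time { return time.Time{} }
func (i fileInfo) IsDir() bool        { return false }
func (i fileInfo) Sys() any           { return nil }

func main() {
	vfs := dbFS{blobs: map[string][]byte{
		"lodash@4.17.21/lodash.js": []byte("module.exports = {}"),
	}}
	// Existing code that reads through fs.FS keeps working unchanged.
	data, err := fs.ReadFile(vfs, "lodash@4.17.21/lodash.js")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", data)
}
```

How the pnpm install step itself would map onto this is the open question; the sketch only covers serving already-built files out of the store.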