ClosureTree/closure_tree

What should I do when my data volume is very large, more than 100 million rows?

smockgithub opened this issue · 3 comments

When there are a lot of nodes, the hierarchies (closure) table grows to many times that size. How should this be handled to keep queries performant?

throw more money at your Postgres instance?

@smockgithub this question is very vague. You haven't defined what you mean by performance, nor which database you're using.

If you're experiencing slowness, you may need a bigger instance or some caching mechanism.
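
For example, a minimal caching sketch, assuming a hypothetical `Category` model that calls `has_closure_tree` and a configured Rails cache store (e.g. Redis) — not something closure_tree provides itself:

```ruby
# Cache the ids of a node's subtree so repeated tree reads don't hit the
# (very large) hierarchies table on every request.
def cached_subtree_ids(category)
  Rails.cache.fetch("categories/#{category.id}/subtree_ids", expires_in: 10.minutes) do
    # self_and_descendants is the closure_tree scope for a node plus its subtree
    category.self_and_descendants.pluck(:id)
  end
end
```

You'd need to pick an expiry (or explicit invalidation on tree changes) that matches how often your hierarchy mutates.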

Can this be closed?

Not sure there are any other answers besides "ensure your indexes meet your needs" and "ensure the Postgres instance is big enough."
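
For reference, a sketch of the indexes the hierarchies table generally wants, shown for a hypothetical `tag_hierarchies` table following closure_tree's column conventions (if the table was created by the gem's generator, these should already exist — adapt the names to your model):

```ruby
class AddTagHierarchyIndexes < ActiveRecord::Migration[7.0]
  def change
    # Composite index used for subtree (descendant) lookups; also enforces
    # uniqueness of each ancestor/descendant pair.
    add_index :tag_hierarchies, [:ancestor_id, :descendant_id, :generations],
              unique: true, name: "tag_anc_desc_idx"

    # Index used when walking up from a node to its ancestors.
    add_index :tag_hierarchies, [:descendant_id], name: "tag_desc_idx"
  end
end
```

Checking `EXPLAIN` output for your slow queries against these indexes is the first thing to do before reaching for a bigger instance.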