Marza/lucene-hazelcast

Does this work? Do you recommend this approach for production environments?

Opened this issue · 5 comments

Have you used this solution? I see HFileMapStore writing to the local server filesystem. Why?

thanks!
rafa

Marza commented

First of all, it is not production ready. The project I was working on was put on hold, so I did not continue; you are probably better off using an ElasticSearch cluster anyway.

However, I have run some tests with this solution: on a single node it had performance similar to native Lucene. I also ran limited read-only tests on multiple nodes (not updating the index at runtime), and with near-cache turned on performance was similar to the single-node case. The interesting part is that you can update the index on one node and all the other nodes pick up the change as well.

HFileMapStore is optional, as are all MapStores/MapLoaders in Hazelcast. I used it to fill the cache from the native Lucene index files, so I could avoid fetching all the data from the DB and re-indexing every time I tweaked something and wanted to run another test.
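For anyone curious, the shape of it is just a MapStore keyed by Lucene file name with the raw file bytes as values. This is a simplified sketch, not the actual HFileMapStore code; the directory path and the key/value types are assumptions:

```java
import com.hazelcast.core.MapStore;

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LocalIndexFileStore implements MapStore<String, byte[]> {

    // Assumption: the index files live in a fixed local directory.
    private final Path indexDir = Paths.get("/var/lucene/index");

    @Override
    public byte[] load(String fileName) {
        Path file = indexDir.resolve(fileName);
        try {
            return Files.exists(file) ? Files.readAllBytes(file) : null;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public Map<String, byte[]> loadAll(Collection<String> fileNames) {
        Map<String, byte[]> result = new HashMap<>();
        for (String fileName : fileNames) {
            byte[] contents = load(fileName);
            if (contents != null) {
                result.put(fileName, contents);
            }
        }
        return result;
    }

    @Override
    public Iterable<String> loadAllKeys() {
        // Every file in the directory becomes a key in the map.
        try (Stream<Path> files = Files.list(indexDir)) {
            return files.map(p -> p.getFileName().toString()).collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void store(String fileName, byte[] contents) {
        try {
            Files.write(indexDir.resolve(fileName), contents);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void storeAll(Map<String, byte[]> entries) {
        entries.forEach(this::store);
    }

    @Override
    public void delete(String fileName) {
        try {
            Files.deleteIfExists(indexDir.resolve(fileName));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void deleteAll(Collection<String> fileNames) {
        fileNames.forEach(this::delete);
    }
}
```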

rafa commented

I love ElasticSearch, but in the past we had to customize Lucene a lot, and running Lucene directly was a big win, especially to optimize it to support hundreds of thousands of synonym matches. I do think your solution of a custom Directory backed by Hazelcast is a step in the right direction.

I would probably make it so that only one server writes to the index. This is possible with a Hazelcast lock that is released if the instance holding it goes down; that way the next instance acquires the lock and starts accepting write requests. Does that make sense?
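Something along these lines is what I have in mind. The lock and class names are made up, it assumes the Hazelcast 3.x ILock API, and the indexing loop is just a placeholder:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class SingleWriterElection {

    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ILock writerLock = hz.getLock("lucene-index-writer"); // lock name is an assumption

        // Block until this node becomes the writer. If the current writer
        // crashes, Hazelcast releases its lock and one waiting node acquires it.
        writerLock.lock();
        try {
            // Only the lock holder opens an IndexWriter and accepts write requests.
            runIndexWriterLoop();
        } finally {
            writerLock.unlock();
        }
    }

    private static void runIndexWriterLoop() throws InterruptedException {
        // Placeholder for the actual indexing work.
        Thread.sleep(Long.MAX_VALUE);
    }
}
```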

I don't understand the need for a custom cache, since with more than three Hazelcast instances it would be almost impossible to lose the data as long as at least one instance is up. Is that the case you were trying to prevent (the need to reindex everything if all the Hazelcast instances are down)?

Do you have any other suggestions to make this production ready?

Marza commented

If you need lots of customization then Lucene is probably a good fit; this implementation works as a drop-in replacement for the native Lucene Directory.
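To illustrate what I mean by drop-in: only the Directory changes and the rest of the indexing code stays the same. The HazelcastDirectory name below is just my shorthand for the Directory implementation in this repo (I haven't checked the real constructor), and the Lucene 5/6-style API is an assumption:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

import java.nio.file.Paths;

public class DirectorySwap {
    public static void main(String[] args) throws Exception {
        // Native Lucene: index files live on the local filesystem.
        Directory dir = FSDirectory.open(Paths.get("/tmp/native-index"));

        // Hazelcast-backed: same Directory contract, files live in the cluster.
        // Directory dir = new HazelcastDirectory(hz, "index-files"); // hypothetical name

        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("body", "hello lucene", Field.Store.YES));
            writer.addDocument(doc);
        }
        // Readers and searchers are wired the same way, so none of the
        // indexing or query code changes when the Directory is swapped.
    }
}
```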

Yeah, that makes sense, and you probably need to do that, since resolving merge conflicts within a Lucene index file would be hard or close to impossible and not worth the effort.

The need for a near-cache (a Hazelcast feature) comes from the fact that normally each entry is owned by one node, and all other nodes have to fetch that entry over the network from its owner (there is also one backup in case the owner goes down). That means a lot of network calls within the Hazelcast cluster, and Lucene index files tend to be quite big (they lose performance if you make them too small). With Hazelcast's near-cache, every node keeps a local copy of the entries it reads, in this case the index files, so no network call within the cluster is needed except for updates, of course.
I ran into performance issues using Hazelcast without near-cache once we reached several thousand requests per second; enabling the near-cache feature solved them. Read up on near-cache in the Hazelcast documentation: http://docs.hazelcast.org/docs/3.7/manual/html-single/index.html#creating-near-cache-for-map
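Enabling it programmatically looks something like this. The map name, the in-memory format and the backup count are just example values; check the manual section linked above for all the options:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class NearCacheSetup {
    public static void main(String[] args) {
        NearCacheConfig nearCache = new NearCacheConfig()
                .setInMemoryFormat(InMemoryFormat.BINARY)  // keep entries as serialized blobs
                .setInvalidateOnChange(true);              // push invalidations to near-caches on updates

        MapConfig mapConfig = new MapConfig("index-files") // assumed map name
                .setBackupCount(1)                         // one synchronous backup per entry
                .setNearCacheConfig(nearCache);

        Config config = new Config();
        config.addMapConfig(mapConfig);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        // hz.getMap("index-files") now keeps a local near-cache copy of every
        // entry this node reads, so repeated reads avoid network hops.
    }
}
```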

HFileMapStore was only for testing, when I ran on a single node and wanted to restart without re-indexing; it shouldn't be needed at all in a production environment.

I did investigate using Hazelcast's Criteria API (http://docs.hazelcast.org/docs/3.7/manual/html-single/index.html#querying-with-criteria-api) instead, but it was several orders of magnitude slower and its memory usage was unacceptable.
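For reference, this is roughly what querying with the Criteria API looks like; the Person value type and the map name are made-up placeholders. It matches on attributes, which is nowhere near Lucene's analyzers, scoring or synonym handling:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.Predicates;

import java.io.Serializable;
import java.util.Collection;

public class CriteriaQueryExample {

    // Placeholder value type; public fields keep the sketch short.
    public static class Person implements Serializable {
        public String name;
        public int age;
        public Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Long, Person> people = hz.getMap("people");
        people.put(1L, new Person("Rafa", 30));

        // Attribute matching only: fine for exact/like/range predicates.
        Collection<Person> hits = people.values(
                Predicates.and(
                        Predicates.like("name", "Ra%"),
                        Predicates.greaterThan("age", 18)));

        System.out.println(hits.size());
        hz.shutdown();
    }
}
```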

rafa commented

Since our servers have 64 GB of RAM, I'm planning to use a ReplicatedMap instead, so all the data is replicated across all servers. That way I can probably skip the HFileMapStore.

Planned architecture:

  • Only one server at a time writes to the index.
  • That index is replicated to the other servers using a ReplicatedMap (see the sketch below the list).
  • We have 6 servers, so if one server goes down it will just re-download the ReplicatedMap when it rejoins.
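A rough sketch of what I mean, assuming the Hazelcast 3.x ReplicatedMap API; the map name and file names are placeholders:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ReplicatedMap;

public class ReplicatedIndexSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Unlike IMap, every entry is replicated to every member, so a node
        // that (re)joins the cluster pulls the whole map from its peers.
        ReplicatedMap<String, byte[]> indexFiles = hz.getReplicatedMap("index-files");

        // Only the node holding the writer lock would do puts like this.
        indexFiles.put("_0.cfs", new byte[]{1, 2, 3}); // stand-in for segment bytes

        // Any node can read from its local replica without a network call.
        byte[] segment = indexFiles.get("_0.cfs");
        System.out.println(segment == null ? 0 : segment.length);
    }
}
```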

Does that make sense to you?

Marza commented

Yeah, that sounds good to me.
I wish you good luck, and I would really like to hear how it works out for you.