[Brokernode] Data_maps from SQL > Badger issues
Opened this issue · 3 comments
-errors in kv_store from concurrent map reads and writes. These occur when we try to update the dbMap. We probably need a queue/worker in kv_store, similar to the PoWWorker, that serializes all reads and writes to the dbMap so that no two goroutines ever modify it at once. Alternatively, we should look into a different structure where concurrent access is not a problem.
-add separate dbMaps for production and test, so we can freely use RemoveAllKvStoreDataFromAllKvStores() in unit tests with no chance of affecting the prod dbMap.
-general slowness and weird behavior, plus multiple flaky or hanging tests. These failures are inconsistent.
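For the first bullet, one alternative to a queue/worker is guarding the map with a `sync.RWMutex` (or using `sync.Map`), which also makes concurrent access safe. A minimal sketch of the mutex approach — type and method names here are illustrative, not the actual brokernode code:

```go
package main

import (
	"fmt"
	"sync"
)

// KvStore guards a shared dbMap with an RWMutex so concurrent
// reads and writes never race. This avoids Go's fatal
// "concurrent map read and map write" runtime error without
// needing a dedicated worker goroutine.
type KvStore struct {
	mu    sync.RWMutex
	dbMap map[string]string
}

func NewKvStore() *KvStore {
	return &KvStore{dbMap: make(map[string]string)}
}

// Set takes the write lock, so only one writer runs at a time.
func (s *KvStore) Set(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.dbMap[k] = v
}

// Get takes the read lock, so many readers can proceed in parallel.
func (s *KvStore) Get(k string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.dbMap[k]
	return v, ok
}

func main() {
	s := NewKvStore()
	var wg sync.WaitGroup
	// 100 concurrent writers; without the mutex this would crash
	// under the race detector (go test -race).
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			s.Set(fmt.Sprintf("key%d", n), "value")
		}(i)
	}
	wg.Wait()
	v, ok := s.Get("key42")
	fmt.Println(v, ok) // prints "value true"
}
```

The trade-off versus a PoWWorker-style queue: the mutex is simpler and allows parallel reads, while a worker serializes everything through one goroutine, which can become a bottleneck but gives stronger ordering guarantees.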
I thought we were planning to use S3 as a temporary storage solution (to store uploaded binary data), because the brokernode has to scale to accept requests from different clients, while an S3 server does not have that problem.
We are. But these changes were already nearly done before we arrived at that conclusion, and the S3 solution may also take a while to implement.
I still have some tests I need to fix. But we don't have to push this in, if we don't think it's worthwhile.
Moving this back to Icebox. We may still need to implement the item from bullet 2 at some point, but for now I think we do not.
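If bullet 2 does come back out of the Icebox, the idea can be sketched as keeping one dbMap per environment so a test-only wipe can never touch prod data. The names below (`dbMaps`, the wipe function) are hypothetical stand-ins, not the actual brokernode API:

```go
package main

import "fmt"

// Illustrative sketch: separate dbMaps keyed by environment,
// so wiping test data structurally cannot reach the prod map.
var dbMaps = map[string]map[string][]byte{
	"prod": {},
	"test": {},
}

// RemoveAllKvStoreDataFromTestKvStore clears only the test map,
// leaving the prod map untouched.
func RemoveAllKvStoreDataFromTestKvStore() {
	dbMaps["test"] = make(map[string][]byte)
}

func main() {
	dbMaps["prod"]["k"] = []byte("keep")
	dbMaps["test"]["k"] = []byte("wipe")
	RemoveAllKvStoreDataFromTestKvStore()
	fmt.Println(len(dbMaps["prod"]), len(dbMaps["test"])) // prints "1 0"
}
```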