[Misc] Why was DynamoDB + Elasticsearch chosen over DocumentDB?
joehep opened this issue · 3 comments
As part of our evaluation process, it came up that DocumentDB seems like a natural solution for storing the resources, as opposed to DynamoDB + Elasticsearch, from the standpoint of infrastructure simplicity. Could someone explain the reasoning behind using DynamoDB + Elasticsearch?
For what it's worth, I'm not sold that DocumentDB is the right answer; I am trying to compare this to other solutions and need to be able to answer this question.
Hi, from my understanding (and correct me if this has since changed), we went with DynamoDB + Elasticsearch for the search capabilities that Elasticsearch provides over DocumentDB. While DocumentDB is a natural storage solution, Elasticsearch has better infrastructure for searching, which was useful for implementing the FHIR Search specification. As for the choice between DynamoDB and DocumentDB, DocumentDB would require more hands-on configuration of cluster size and availability, whereas DynamoDB handles scaling in a much simpler fashion. In addition, I believe DocumentDB has a service uptime SLA of 99.9% whereas DynamoDB commits to 99.999% (again, this may have changed since we last checked). Hopefully that helps clear things up a bit!
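For illustration only, here is a minimal sketch of the kind of query the FHIR Search spec pushes you toward, and why Elasticsearch is a natural fit. This is not the actual fhir-works-on-aws code; the index name, field paths, and the `@elastic/elasticsearch` (v7-style) client usage are assumptions.

```ts
// Hypothetical sketch: translating a FHIR search such as
//   GET /Patient?family=Smith&birthdate=ge1990-01-01
// into an Elasticsearch bool query. Index/field names are assumed,
// not the project's real mapping.
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'https://my-es-domain.example.com' }); // assumed endpoint

async function searchPatients(family: string, birthdateGte: string) {
  const result = await client.search({
    index: 'patient', // assumed index name
    body: {
      query: {
        bool: {
          must: [
            { match: { 'name.family': family } },            // full-text match on family name
            { range: { birthDate: { gte: birthdateGte } } },  // date-range prefix from the FHIR param
          ],
        },
      },
    },
  });
  return result.body.hits.hits.map((hit: any) => hit._source);
}
```

Expressing this kind of combined text, token, and range matching is straightforward in Elasticsearch's query DSL, which is the sort of capability the comment above is referring to.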
Thanks,
Sukeerth
Thanks for the response. Those were the reasons I was expecting, but I didn't want to make assumptions. I am curious about the 400 KB item size limit in DynamoDB and how this project intends to deal with resources larger than 400 KB. I am also concerned with how large resources (>10 MB in size) will be handled by API Gateway in a "standards-compliant" manner. For example, the binary resources we see now average much larger than 10 MB.
As I see it (and I would be very happy to be wrong), in order to provide a standards-compliant FHIR endpoint, I would need to expose to the outside world a façade supporting the FHIR standard that then translates requests into FHIR Works-compatible workflows.
We found that the average request size generally falls well under that 400 KB limit, and we haven't yet heard of any issues with that implementation. But we actually handle Binary files a bit differently from normal resources: on a request to the /Binary endpoint of the FHIR server, we return a presigned PUT URL for storing the Binary resource in S3. This isn't fully compliant with the FHIR spec for Binary resources, but it was a noted deviation to get the functionality to work with our infrastructure. We also have a façade implementation of the persistence package, but that would require some customization to get working.
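To make the presigned-URL approach concrete, here is a minimal sketch of generating a presigned S3 PUT URL, assuming AWS SDK for JavaScript v3. The bucket name, key layout, and expiry are illustrative assumptions, not the project's actual implementation.

```ts
// Hypothetical sketch of the approach described above: on a create request
// to /Binary, hand back a presigned S3 PUT URL so the client uploads the
// binary payload directly to S3, bypassing API Gateway/Lambda payload limits.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({ region: 'us-east-1' }); // assumed region

async function createBinaryUploadUrl(resourceId: string, contentType: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: 'my-fhir-binary-bucket', // assumed bucket name
    Key: `Binary/${resourceId}`,     // assumed key layout
    ContentType: contentType,
  });
  // URL is valid for 5 minutes; the client PUTs the raw bytes to it.
  return getSignedUrl(s3, command, { expiresIn: 300 });
}
```

The client then issues an HTTP PUT of the raw binary content to the returned URL, which is the deviation from the FHIR Binary spec (where the payload would normally be sent to the /Binary endpoint itself).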