[challenge 05] SQL schema incompatible with databricks workbook
Opened this issue · 3 comments
nmiodice commented
The Databricks notebook fails out of the box because the INSERT statement in Cmd 13 is not compatible with the SQL schema.
Either the notebook or the SQL schema needs a fix. The failing insert is the following:
```sql
INSERT INTO jdbcObservationTable
SELECT
  id AS observationid,
  SUBSTRING_INDEX(subject.reference, '/', -1) AS patientid,
  code.coding[0].code AS observationcode,
  code.coding[0].display AS observation,
  status,
  valueQuantity.unit,
  valueQuantity.value
FROM observationTable;
```
A simple workaround is to insert dummy data into the missing columns:
```sql
INSERT INTO jdbcObservationTable
SELECT
  id AS observationid,
  SUBSTRING_INDEX(subject.reference, '/', -1) AS patientid,
  code.coding[0].code AS observationcode,
  '' AS deviceid,
  code.coding[0].display AS observation,
  '',
  '',
  status,
  valueQuantity.unit,
  valueQuantity.value
FROM observationTable;
```
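A more robust variant of this workaround is to name the target columns explicitly, so the statement no longer depends on the table's column order or count. This is a sketch, not the notebook's actual code: the column names are taken from the aliases above, and it only works if the Spark runtime in use supports column lists in `INSERT INTO` (newer Spark versions do; otherwise the dummy-value version applies):

```sql
INSERT INTO jdbcObservationTable
  (observationid, patientid, observationcode, observation, status, unit, value)
SELECT
  id,
  SUBSTRING_INDEX(subject.reference, '/', -1),
  code.coding[0].code,
  code.coding[0].display,
  status,
  valueQuantity.unit,
  valueQuantity.value
FROM observationTable;
```

Columns of `jdbcObservationTable` that are not listed (such as `deviceid`) are then filled with their defaults instead of requiring placeholder values in the SELECT.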
A better fix is to correct the SQL schema, or to insert the correct data in the Databricks notebook.
cyberuna commented
Could you paste the error? There is a NOTE, which I just clarified, that changes might be needed to the DDL and the notebook.
FHIR JSON only has columns when there is data. Depending on the data used and the challenges completed, a given column might or might not be present.
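The schema drift described here (fields absent whenever there is no data for them) can also be handled generically when flattening FHIR JSON records. A minimal Python sketch, using hypothetical field names modeled on the insert above rather than the notebook's actual code:

```python
import json

def flatten_observation(raw: str) -> dict:
    """Flatten one FHIR-style Observation JSON record, tolerating absent fields."""
    obs = json.loads(raw)
    coding = obs.get("code", {}).get("coding", [{}])  # default keeps coding[0] safe
    qty = obs.get("valueQuantity", {})
    return {
        "observationid": obs.get("id"),
        "patientid": obs.get("subject", {}).get("reference", "").split("/")[-1],
        "observationcode": coding[0].get("code"),
        "observation": coding[0].get("display"),
        "status": obs.get("status"),
        "unit": qty.get("unit"),    # None when valueQuantity is absent
        "value": qty.get("value"),
    }

# A record missing code and valueQuantity still flattens cleanly:
record = '{"id": "obs-1", "status": "final", "subject": {"reference": "Patient/p-9"}}'
print(flatten_observation(record)["unit"])  # None
```

Every optional element is read with a default, so missing columns become NULL-like `None` values instead of breaking the load.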