bvanaken/clinical-outcome-prediction

How to use the trained model for prediction/inference

Closed this issue · 1 comment

Hi,

could you give an example of how to use the trained model for prediction/inference?

I ran into some problems when I tried with FARM.

First question: when I run inference with the trained PRO_3/DIA model using the FARM Inferencer, the results are very poor. No labels are predicted, and the probability of each label is very low.

Second question: when I run inference with the trained MP model using the FARM Inferencer, the customized ExtendedTextClassificationHead results in KeyError: 'ExtendedTextClassificationHead'.

Thanks!

Hello,

thanks for your interest in our work!
Here is an example of how to load FARM's Inferencer for the diagnosis or procedure task:

from farm.infer import Inferencer

model_dir = "PATH_TO_MODEL_DIRECTORY"
# num_processes=0 disables multiprocessing, which makes debugging easier
model = Inferencer.load(model_dir, num_processes=0)

samples = [{"text": "Patient presents with diabetes and hypertension."}]

results = model.inference_from_dicts(dicts=samples)[0]

# results['label'] => ['250', '272', '401', '428', 'V586']

Is the problem that you get bad results in general? Then I guess it could be due to the hyperparameter settings; we found BERT to be quite sensitive to hyperparameters for multi-label classification. Let me know and I can share the hyperparameters we used. Or are your test set results good and only the Inferencer does not produce good predictions? That would be strange, and I would first check your FARM version (I was using 0.4.3 and later 0.8.0).
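A quick way to check which FARM version is installed is via the standard library's importlib.metadata (this is a generic version check, not something specific to FARM):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(pkg):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Prints e.g. "0.8.0" if FARM is installed, otherwise None
print(installed_version("farm"))
```

If the version differs from the one the model was trained with, reinstalling the matching version (e.g. `pip install farm==0.8.0`) is worth trying first.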

Regarding the second question: I am not 100% sure, but could you try changing the "name" field in prediction_head_0_config.json for that model to "TextClassificationHead" and see whether this resolves the error?
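A minimal sketch of that edit, in case it helps (the path is a placeholder; point it at the config file in your model directory):

```python
import json

def rename_prediction_head(config_path, new_name="TextClassificationHead"):
    """Rewrite the 'name' field in a FARM prediction head config file."""
    with open(config_path) as f:
        config = json.load(f)
    # Replace e.g. "ExtendedTextClassificationHead" with a head FARM knows
    config["name"] = new_name
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# e.g. rename_prediction_head("PATH_TO_MODEL_DIRECTORY/prediction_head_0_config.json")
```

All other fields in the config stay untouched, so the label list and layer dimensions are preserved.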

Best regards
Betty