landiisotta/convae_architecture

Model

Opened this issue · 1 comment

Hey, I have a question about the model you used: why did you decide to go for an autoencoder rather than a Transformer? I couldn't find any information about that in the paper.

Best regards

We selected CNN+AE instead of transformers for several reasons. Among them: 1) we were dealing with structured EHR rather than free text, where it is harder to define the "context"; 2) we wanted to loosely model the temporality of EHR patient histories via CNNs; 3) the AE yields a representation trained to capture a patient's EHR history itself at a specific time point.
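To illustrate points 2 and 3, here is a minimal numpy sketch of the CNN+AE idea: embed a patient's sequence of medical codes, run a 1-D convolution over time to capture local temporal patterns, and compress the result through a linear bottleneck that serves as the patient representation. All sizes, weight shapes, and the linear decoder are illustrative assumptions, not the paper's actual ConvAE hyperparameters or architecture.

```python
import numpy as np

# Hypothetical sizes for illustration only (not the paper's hyperparameters)
rng = np.random.default_rng(0)
vocab_size, emb_dim = 100, 8        # medical-code vocabulary and embedding size
seq_len, kernel, n_filters = 20, 3, 4
latent_dim = 6

# Embedding lookup: each code index -> dense vector
embedding = rng.normal(size=(vocab_size, emb_dim))

def conv1d(x, w):
    """Valid 1-D convolution over time. x: (seq_len, emb_dim), w: (n_filters, kernel, emb_dim)."""
    out_len = x.shape[0] - w.shape[1] + 1
    out = np.empty((out_len, w.shape[0]))
    for t in range(out_len):
        window = x[t:t + w.shape[1]]                        # (kernel, emb_dim)
        out[t] = np.tensordot(w, window, axes=([1, 2], [0, 1]))
    return out

# Random weights stand in for trained parameters
w_conv = rng.normal(size=(n_filters, kernel, emb_dim))
conv_out_len = seq_len - kernel + 1
w_enc = rng.normal(size=(conv_out_len * n_filters, latent_dim))
w_dec = rng.normal(size=(latent_dim, conv_out_len * n_filters))

def encode(codes):
    """CNN over the embedded history, then a linear bottleneck -> patient vector."""
    x = embedding[codes]                                    # (seq_len, emb_dim)
    h = np.maximum(conv1d(x, w_conv), 0)                    # ReLU maps over local temporal windows
    return h.reshape(-1) @ w_enc                            # (latent_dim,)

def reconstruct(z):
    """Decoder maps the representation back toward the conv features (AE objective)."""
    return z @ w_dec

codes = rng.integers(0, vocab_size, size=seq_len)           # one patient's code history
z = encode(codes)                                           # time-point representation
recon = reconstruct(z)
```

Training would minimize the reconstruction error so that `z` summarizes the history up to that time point; the convolution's sliding window is what "loosely" encodes temporal order without the explicit attention context a Transformer requires.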