composite scores for indexed descriptions to target document(s)
Based on v2.0.0 of the ML Reproducibility Checklist (here)
- Dependencies
  - see `requirements.txt`
- Training scripts
  - data preprocessing in `data.py`
  - trainer in `train.py`
- Evaluation scripts
  - in `eval.py` or `utils.py`
- Pretrained models
  - everything stored as PyTorch state dicts, as `.pkl` files, with the format `{bst epoch}_{exp}.pkl`, where `exp` usually specifies the model and task (a minimal loading sketch follows this checklist)
- Results
  - stored in `notebooks/`, since everyone prefers these for visualization nowadays
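A minimal sketch of loading one of the pretrained checkpoints back into a model. The model class and filename below are placeholders, not names from this repo; real files follow the `{bst epoch}_{exp}.pkl` pattern above.

```python
import torch
import torch.nn as nn

# Placeholder architecture; the real model classes live in the training code.
class ExampleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(768, 2)

    def forward(self, x):
        return self.linear(x)

model = ExampleModel()

# Checkpoints are plain state dicts saved as .pkl files named
# {bst epoch}_{exp}.pkl; the filename here is invented for illustration.
state_dict = torch.load("12_example-exp.pkl", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```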
- one open question is how to construct good priors; the idea here is to evaluate the model against scientific knowledge priors, which can get complicated depending on the language model used. This would at least satisfy Hill's criterion of consistency w.r.t. causal knowledge, complementing the hidden structure coherently (rough sketch below)
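A hedged sketch of what such a consistency check could look like: score each (cause, effect) pair from a set of knowledge priors with the model and report the agreement rate. All names here are illustrative assumptions, not repo code.

```python
from typing import Callable, Iterable, Tuple

def prior_consistency(
    score_pair: Callable[[str, str], float],      # model's association score for a pair
    known_relations: Iterable[Tuple[str, str]],   # (cause, effect) priors from the literature
    threshold: float = 0.5,
) -> float:
    """Fraction of prior causal relations the model's scores agree with."""
    hits, total = 0, 0
    for cause, effect in known_relations:
        total += 1
        if score_pair(cause, effect) >= threshold:
            hits += 1
    return hits / max(total, 1)

# Toy usage with a stand-in scorer that always says "related":
toy_priors = [("smoking", "lung cancer"), ("exercise", "lower blood pressure")]
print(prior_consistency(lambda a, b: 0.9, toy_priors))  # -> 1.0
```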
- should this bleed into training (as a regularizer?), or is that a terrible idea... it could avoid mode-collapse issues if done carefully (one possible shape sketched below)
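If it did bleed into training, one cautious shape is a small weighted penalty on top of the task loss. Everything below is a placeholder sketch of that idea, not the actual trainer in `train.py`.

```python
import torch

def regularized_loss(
    task_loss: torch.Tensor,
    prior_scores: torch.Tensor,   # model scores on pairs the knowledge prior says should hold
    weight: float = 0.1,          # kept small so the prior nudges rather than dominates
) -> torch.Tensor:
    # Penalize low scores on prior-true relations; the hinge at 0.5 leaves
    # already-confident predictions alone, which is one way to avoid
    # collapsing everything onto the prior.
    prior_penalty = torch.relu(0.5 - prior_scores).mean()
    return task_loss + weight * prior_penalty
```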
- see whether this can be done with a kind of hyperfoods-style heuristic, i.e., get simpletopics up and running (a bare-bones version is sketched below)
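A bare-bones guess at what "simpletopics" could mean: bag-of-words counts plus LDA over a handful of documents, using scikit-learn; nothing here is tied to the repo.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus; real input would be description/review paragraphs.
docs = [
    "gene expression regulation in tumour cells",
    "dietary compounds and cancer prevention",
    "protein interaction networks and drug targets",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
print(lda.transform(counts))  # per-document topic mixtures
```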
- what should this actually look like? at a minimum, have modules for parsing `.xml` description text and the full literature, compare how sensible the topics are by trying review -> description paragraph and vice versa, and store these steps somewhere. Post the distillation here (rough shape below)
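A hedged sketch of those two pieces: pulling paragraph text out of an `.xml` description, and comparing topic mixtures in both directions (review -> description and back). The `p` tag, the divergence choice, and the function names are assumptions, not the repo's actual schema.

```python
import xml.etree.ElementTree as ET
from typing import List

import numpy as np

def parse_paragraphs(xml_path: str, tag: str = "p") -> List[str]:
    """Collect text from all <p>-like elements in an XML description file."""
    root = ET.parse(xml_path).getroot()
    return ["".join(el.itertext()).strip() for el in root.iter(tag)]

def directional_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q): how badly topic mixture q explains topic mixture p."""
    p = p.ravel() + eps
    q = q.ravel() + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Run it both ways per pair (review -> description and description -> review)
# and store both numbers, since the comparison is deliberately asymmetric.
```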