jfan1256
Interested in asset pricing, financial machine learning, and risk management. Proficient in data analysis, statistical learning, and deep learning.
Yale University
Iowa City, Iowa
Pinned Repositories
psychspt
PsychSPT: Psychiatric Supportive Pretrained Transformer
distill-blip
Distill BLIP (knowledge distillation for image-text deep learning tasks). Supports pretraining and caption/retrieval fine-tuning with multi-GPU or single-GPU training, on-prem or on a cloud VM. Handles dataset preprocessing for CC3M, COCO, Flickr30k, and VGO, with downloads via img2dataset. A minimal sketch of the distillation objective such a project typically optimizes is below.
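The sketch below shows a standard temperature-scaled KL distillation loss between teacher and student logits; the temperature value and dummy tensors are illustrative assumptions, not the repo's actual implementation.

```python
# Minimal knowledge-distillation loss sketch (illustrative, not the repo's code).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soften both distributions and match the student to the teacher via KL."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2

# Example: distill a 10-class teacher into a student on a dummy batch.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```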
awf-ray-dalio
Mimicking Ray Dalio's All Weather Fund via backtesting (see the sketch below).
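A minimal sketch of a fixed-weight backtest in the spirit of the All Weather allocation; the weights are the commonly cited public approximation and the returns are dummy data, not necessarily what this repo uses.

```python
# Fixed-weight, daily-rebalanced portfolio backtest sketch (illustrative).
import numpy as np
import pandas as pd

weights = {"stocks": 0.30, "long_bonds": 0.40,
           "mid_bonds": 0.15, "gold": 0.075, "commodities": 0.075}

# Dummy daily returns; a real backtest would load historical price data instead.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", periods=252)
returns = pd.DataFrame(rng.normal(0.0003, 0.01, (252, 5)),
                       index=dates, columns=list(weights))

# Portfolio return per day and cumulative growth of $1.
w = pd.Series(weights)
port_ret = returns.mul(w, axis=1).sum(axis=1)
equity = (1 + port_ret).cumprod()
print(f"Total return: {equity.iloc[-1] - 1:.2%}, "
      f"ann. vol: {port_ret.std() * np.sqrt(252):.2%}")
```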
symptom-extraction
Extracting symptom words from electronic health records (see our paper for more info) using different combinations of pre-trained and fine-tuned word embeddings with a BiLSTM classifier (inspired by Steinkamp et al.); a model sketch follows.
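A minimal sketch of the embeddings + BiLSTM token classifier the description refers to; the vocabulary size, dimensions, and binary label set are illustrative assumptions.

```python
# Embeddings + BiLSTM per-token classifier sketch (illustrative dimensions).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=128, num_labels=2):
        super().__init__()
        # In the paper's setting, these embeddings could be initialized from
        # pre-trained vectors and either frozen or fine-tuned.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)  # (batch, seq, embed_dim)
        out, _ = self.bilstm(emb)        # (batch, seq, 2 * hidden_dim)
        return self.classifier(out)      # per-token symptom/non-symptom logits

# Example: tag a dummy batch of 8 records, 32 tokens each.
model = BiLSTMTagger()
logits = model(torch.randint(0, 5000, (8, 32)))  # (8, 32, 2)
```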
intermediate-machine-learning
Yale S&DS 365: Intermediate Machine Learning
personal-web
Personal Website
tactile-pose-estimation
A model that utilizes tactile sensor data to estimate human poses.
torch-starter
PyTorch template starter for training DL models. Supports multi-GPU distributed training, metric logging, checkpoint saving, early stopping, and learning-curve plotting; an early-stopping sketch follows.
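A minimal sketch of the early-stopping logic a training template like this typically provides; the patience value and loss-minimization direction are illustrative assumptions.

```python
# Early stopping on validation loss (illustrative sketch).
class EarlyStopping:
    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.counter = 0

    def step(self, val_loss):
        """Return True when validation loss has stopped improving."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss  # improvement: reset the patience counter
            self.counter = 0
            return False
        self.counter += 1
        return self.counter >= self.patience

# Usage inside a training loop:
stopper = EarlyStopping(patience=3)
for val_loss in [0.9, 0.8, 0.81, 0.82, 0.83]:
    if stopper.step(val_loss):
        print("Stopping early")
        break
```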
biblink-frontend
Frontend development for Biblink