itsShnik/adaptively-finetuning-transformers
Adaptively fine-tuning transformer-based models for multiple domains and multiple tasks
Python
Issues
- To experiment with attention pooling (#33, opened by itsShnik)
- Implement SpotTune Res (#31, opened by itsShnik)
- Experiment with dropping layers (#27, opened by itsShnik)
- Global-K Variant (#14, opened by itsShnik)
- Add other strategies in LXMERT (#21, opened by itsShnik)
- Experiment with Standard + SpotTune on VLBERT (#25, opened by itsShnik)
- Experiment with policy network inputs (#26, opened by itsShnik)
- Implement SpotTune strategies on LXMERT (#15, opened by itsShnik)
- Parallel Net in LXMERT (#18, opened by itsShnik)
- Add Gumbel Softmax in LXMERT (#19, opened by itsShnik)
- LXMERT Policy Net (#16, opened by itsShnik)
- Add VQAcpV2 dataset (#12, opened by itsShnik)
- Wandb Integration (#3, opened by itsShnik)
- Visualization Material (#8, opened by itsShnik)
- Other finetuning policies (#4, opened by itsShnik)
- Create a small policy network (#1, opened by itsShnik)
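Several of the issues above (#19, #16, #1) concern Gumbel-Softmax sampling in a SpotTune-style policy network, which makes a discrete fine-tune/freeze decision per layer trainable. A minimal sketch of the sampling step, in plain Python; the function name and the two-way {freeze, fine-tune} choice are illustrative assumptions, not code from this repository:

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Sample a relaxed one-hot vector via the Gumbel-Softmax trick.

    Adding Gumbel(0, 1) noise to the logits and applying a
    temperature-scaled softmax approximates sampling from the
    categorical distribution defined by the logits, while staying
    differentiable with respect to them (the property policy
    networks rely on during training).
    """
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1),
    # clamped away from 0 and 1 to keep the logs finite.
    noise = [-math.log(-math.log(random.uniform(1e-9, 1.0 - 1e-9)))
             for _ in logits]
    z = [(l + g) / tau for l, g in zip(logits, noise)]
    m = max(z)                              # subtract max for stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-layer policy: two logits, one per action
# {0: freeze the block, 1: fine-tune the block}.
probs = gumbel_softmax([0.2, 1.5], tau=0.5)
decision = probs.index(max(probs))          # hard choice at inference time
```

Lower temperatures `tau` push the samples closer to one-hot decisions; during training the soft `probs` vector is typically used so gradients flow into the policy network.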