lxuechen/private-transformers
A codebase that makes differentially private training of transformers easy.
Python · Apache-2.0
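For context, differentially private training of the kind this codebase targets is built on DP-SGD: clip each per-example gradient to an L2 norm bound, add Gaussian noise calibrated to that bound, then average. The following is a minimal NumPy sketch of one such update step on a toy least-squares model; the function `dp_sgd_step` and all parameter names are illustrative assumptions, not this library's actual API.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD step for per-example least-squares loss 0.5 * (x @ w - y)**2."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-example gradients: g_i = (x_i @ w - y_i) * x_i, shape (n, d).
    residuals = X @ w - y
    per_example_grads = residuals[:, None] * X
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * factors
    # Add Gaussian noise with std clip_norm * noise_multiplier to the sum,
    # then average over the batch.
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=w.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / len(X)
    return w - lr * noisy_mean_grad

# Tiny usage example on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = np.zeros(3)
w = dp_sgd_step(w, X, y, rng=rng)
```

The clipping bounds each example's influence on the update, which is what makes the Gaussian noise scale sufficient for a differential privacy guarantee.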
Issues
- Paper results cannot be reproduced (#42, opened by MarkDeng1, 0 comments)
- Training on multiple GPUs (#41, opened by mohummedalee, 0 comments)
- Cannot train with a single GPU (#39, opened by CHAOS-Yang, 1 comment)
- Table2text support for facebook/opt models (#38, opened by zredlined, 3 comments)
- RuntimeError: CUDA error: device-side assert triggered. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. (#37, opened by JeffffffFu, 2 comments)
- No metrics: BLEU or ROUGE-L (#36, opened by adam-dziedzic, 2 comments)
- Text infilling task (#34, opened by adam-dziedzic, 2 comments)
- Table2text (#35, opened by adam-dziedzic, 2 comments)
- Support for multi-GPU private fine-tuning (#32, opened by Pier297, 0 comments)
- Compatibility issue with transformers (#33, opened by kshll6, 2 comments)
- What is the best way to handle large models? (#31, opened by Pier297, 0 comments)
- v0.4.0 fixes (#24, opened by lxuechen, 4 comments)
- No such file or directory (#27, opened by trestad, 0 comments)
- v0.3.0 fixes (#22, opened by lxuechen, 0 comments)
- v0.2.2 fixes (#25, opened by lxuechen, 0 comments)
- Support Poisson sampling + profile speed (#11, opened by lxuechen, 7 comments)
- Support BART model (#12, opened by SeolhwaLee, 0 comments)
- Refine spectrum eval (#17, opened by lxuechen, 3 comments)
- How to set max_compositions (#19, opened by hlzhang109, 3 comments)
- Issue with privacy_engine (#13, opened by acphile, 6 comments)
- Setting another seed won't change the result (#10, opened by JunyiZhu-AI, 3 comments)
- Using dataloader with fixed batch size (#7, opened by xplip, 1 comment)
- Function for "get_privacy_stats()" (#1, opened by XuandongZhao)