runway-transformers-PPLM: Runway port of "Plug and Play Language Models: A Simple Approach to Controlled Text Generation" (PPLM) by Uber AI Research, based on huggingface/transformers

Generation Script

The code was adapted from the excellent run_generation script, and the pretrained weights are provided by huggingface/transformers.
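Because the weights come straight from huggingface/transformers, the uncontrolled baseline (what --stepsize 0 recovers in the tuning notes below) can be reproduced in a few lines. This is a minimal sketch rather than this repository's code; the gpt2-medium checkpoint, the prompt, and the sampling settings are assumptions for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the same pretrained GPT-2 weights that PPLM perturbs at decode time.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

# Plain top-k sampling, i.e. generation without any attribute control.
input_ids = tokenizer.encode("The potato", return_tensors="pt")
with torch.no_grad():
    output = model.generate(input_ids, max_length=50, do_sample=True, top_k=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```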

Authors: Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu

Paper link: https://arxiv.org/abs/1912.02164

Blog link: https://eng.uber.com/pplm

Please check out the repo under uber-research for more information: https://github.com/uber-research/PPLM

Tuning hyperparameters for bag-of-words control

  1. Increase --stepsize to intensify topic control, and decrease its value to soften the control. --stepsize 0 recovers the original uncontrolled GPT-2 model.

  2. If the generated text is repetitive (e.g. "science science experiment experiment"), there are several options to consider:
    a) Reduce the --stepsize
    b) Increase --kl_scale (the KL-loss coefficient) or decrease --gm_scale (the gm-scaling term); see the sketch after this list for what these two terms control
    c) Add --grad-length xx, where xx is an integer <= length (e.g. --grad-length 30).
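To make the two scaling terms concrete, here is an illustrative sketch in the spirit of the PPLM paper (not a copy of this repository's code) of where --gm_scale and --kl_scale enter: the perturbed and unperturbed next-token distributions are fused by a geometric mean, and a KL penalty keeps the perturbed distribution from drifting too far from the original model.

```python
import torch

def fuse_distributions(pert_probs, unpert_probs, gm_scale=0.9):
    # Geometric-mean fusion controlled by --gm_scale: lower values keep the
    # sampling distribution closer to the unmodified GPT-2 distribution.
    fused = (pert_probs ** gm_scale) * (unpert_probs ** (1.0 - gm_scale))
    return fused / fused.sum()

def kl_penalty(pert_probs, unpert_probs, kl_scale=0.01):
    # KL term weighted by --kl_scale and added to the attribute loss before
    # the gradient step; raising it pulls the perturbed distribution back
    # toward the unperturbed one, which counteracts repetitive output.
    eps = 1e-10
    return kl_scale * (pert_probs * ((pert_probs + eps) / (unpert_probs + eps)).log()).sum()
```

Both remedies in item 2(b) work in the same direction: lowering --gm_scale or raising --kl_scale pushes generation back toward the fluent, unperturbed GPT-2 output at the cost of weaker topic control.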

Tuning hyperparameters for discriminator control

  1. Increase --stepsize to intensify topic control, and decrease its value to soften the control. --stepsize 0 recovers the original uncontrolled GPT-2 model.

  2. Use --class_label 3 for negative, and --class_label 2 for positive (a sketch of how the class label enters the attribute loss follows this list).
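For discriminator control, the attribute model is a small classification head trained on top of the frozen GPT-2 hidden states, and --class_label selects the class whose probability the perturbation should increase. The sketch below is illustrative only; the head, the mean pooling, and the class count are assumptions, not this repository's exact code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(hidden_states, classifier_head, class_label=2):
    # hidden_states: (batch, seq_len, hidden_size) activations from GPT-2.
    # classifier_head: e.g. torch.nn.Linear(hidden_size, num_classes).
    pooled = hidden_states.mean(dim=1)            # average over time steps
    logits = classifier_head(pooled)              # (batch, num_classes)
    target = torch.full((logits.size(0),), class_label, dtype=torch.long)
    # Cross-entropy toward the chosen class; its gradient (scaled by
    # --stepsize) is what nudges the activations during generation.
    return F.cross_entropy(logits, target)
```

With the bundled sentiment discriminator, class_label 2 steers toward positive text and class_label 3 toward negative, matching the flags above.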