/amadeus

Create RP training data from a VN, using GPT-4


I've come up with a step-by-step guide to using the notebook in this repo. You can find it here (feel free to dismiss the subscription prompt).

The repo description was ~99% generated by GPT-4 and hasn't been fully proofread; tell me if you spot errors or GPT-isms.

I've licensed this code under MIT. I have no clue what kind of license fits the dataset itself, and I'm not going to give any legal advice on that front either.

Augmental Dataset and Model Training Code

This repository contains the code used to generate the Augmental dataset, as well as the model training code to finetune models on it. The dataset stands out due to its innovative approach of utilizing AI to enhance human-written scripts from visual novels, bridging the gap between purely synthetic data and manual data curation.

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Data Generation
  4. Model Training
  5. Acknowledgements
  6. References and Useful Links

Introduction

The Augmental dataset is a novel multiturn dataset containing 7.86k replies spread across about 480 different conversations among 7 distinct characters. This dataset was crafted by refining and enhancing the script of the visual novel Steins;Gate using GPT-4. The dataset prioritizes quality, longer responses, and retaining the human-like essence of the conversation while benefitting from the capabilities of GPT-4.
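To make the shape of a multiturn RP dataset concrete, here is a purely illustrative sketch of how a single conversation entry could be structured. The field names and content below are invented for illustration and are not the actual Augmental schema; check the dataset's HF page for the real format.

```python
# Hypothetical example of one multiturn conversation entry
# (field names invented; not the actual Augmental schema).
example_conversation = {
    "characters": ["Okabe", "Kurisu"],
    "scenario": "A debate in the lab about time-travel theory.",
    "turns": [
        {"speaker": "Okabe", "reply": "The Organization is watching us."},
        {"speaker": "Kurisu", "reply": "There is no Organization, you dolt."},
    ],
}
```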

Prerequisites

Dataset Source

The dataset is generated from the .scx.txt files of Steins;Gate. It's essential to have a legal copy of Steins;Gate to extract the required files. Please ensure you have the rights to use the text from the visual novel for your purposes.

Tools

  • sc3tools for extracting .scx.txt files from Steins;Gate.

Data Generation

Extraction and Preprocessing

  1. Extract the .scx.txt files from Steins;Gate using sc3tools.
  2. Merge the extracted .scx.txt files into a single text file.
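The merge in step 2 can be done however you like; as one hedged sketch, a small Python helper (the function name and layout here are invented, not part of the repo) that concatenates the extracted files in sorted order so scene ordering stays deterministic:

```python
from pathlib import Path

# Hypothetical helper: concatenate every extracted .scx.txt file in a
# directory into one corpus file, sorted by filename for determinism.
def merge_scx_txt(source_dir: str, output_file: str) -> int:
    """Merge all .scx.txt files under source_dir into output_file.

    Returns the number of files merged."""
    paths = sorted(Path(source_dir).glob("*.scx.txt"))
    with open(output_file, "w", encoding="utf-8") as out:
        for path in paths:
            out.write(path.read_text(encoding="utf-8"))
            out.write("\n")  # keep a boundary between scripts
    return len(paths)
```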

Processing with Notebook

  1. Open the processing_refactor.ipynb notebook.
  2. Before running the notebook, ensure you've toggled the dataset_has_been_manually_edited flag at the top:
    • Set to True if working with the original, manually edited dataset. With this flag on, the notebook shouldn't make any OpenAI calls and will leave any gaps in the dataset alone.
    • Set to False if you're generating new data.
  3. Run the notebook to process the raw text file. The output will be the Augmental dataset, ready for model training.
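The flag's effect described above can be sketched as a simple gate in front of the API calls. Everything here other than the flag name is invented for illustration; the notebook's actual code will differ:

```python
# dataset_has_been_manually_edited is the real flag name from the
# notebook; the function below is a hypothetical illustration of the gate.
dataset_has_been_manually_edited = True  # set False when generating new data

def maybe_annotate(conversation, annotate_with_gpt4):
    """Return the conversation untouched when working with the original,
    manually edited dataset; otherwise run the GPT-4 annotation step."""
    if dataset_has_been_manually_edited:
        # Keep the original data as-is, gaps and all; no API calls.
        return conversation
    return annotate_with_gpt4(conversation)
```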

Model Training

The training code for finetuning models on the Augmental dataset is contained in train.py.

Usage

Use the processing_refactor notebook as you would normally use a notebook. Cells that call OpenAI will skip generations that have already been saved to files in the ./annotated_convs, ./scenarios, and ./anchors directories. !!DO NOT DELETE FILES IN THOSE DIRECTORIES UNLESS YOU WANT PAINFUL ERRORS!!
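The skip-if-already-saved behavior is a standard file-based caching pattern. As a hedged sketch (the function name and file layout are invented; the notebook's real cache files live in the directories named above):

```python
from pathlib import Path

# Hypothetical sketch of the skip-if-already-saved pattern: only call
# the (expensive) generation function when no cached file exists yet.
def generate_if_missing(cache_dir: str, key: str, generate):
    path = Path(cache_dir) / f"{key}.txt"
    if path.exists():
        return path.read_text(encoding="utf-8")  # reuse saved generation
    result = generate()  # e.g. an OpenAI API call
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(result, encoding="utf-8")
    return result
```

This is also why deleting files in those directories is painful: the cache is the only record of which generations have been paid for already.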

python train.py

Acknowledgements

This dataset is an evolution of the dataset that was used to train MythoMakise, a model that achieved notable recognition. The current model, trained on the Augmental dataset, promises even higher quality interactions and versatility. See the note on the Augmental Dataset HF page for legal considerations. TLDR: if the legal holders of the Steins;Gate IP tell me to take this down I will without a second thought.

References and Useful Links and Self-promotional Links