
Python Autocomplete

This project autocompletes Python source code using LSTM and Transformer models.

Training the model: Open In Colab

Evaluating a trained model: Open In Colab

It gives quite decent results, saving more than 30% of keystrokes in most files and close to 50% in some. We calculated keystrokes saved by making a single (best) prediction and selecting it with a single key press.
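To make the metric concrete, here is a minimal sketch of how keystroke savings can be computed under that scheme. It is not the repo's evaluate.py; `predict` is an assumed helper that returns the model's single best completion for a prefix.

```python
def keystrokes_saved(source: str, predict) -> float:
    """Fraction of key presses saved when accepting a correct
    prediction costs exactly one key (hypothetical helper)."""
    if not source:
        return 0.0
    typed, pos = 0, 0
    while pos < len(source):
        completion = predict(source[:pos])   # model's single best guess
        if completion and source.startswith(completion, pos):
            typed += 1                       # one key (e.g. TAB) accepts it
            pos += len(completion)
        else:
            typed += 1                       # user types the next character
            pos += 1
    return 1 - typed / len(source)
```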

The dataset is the Python code found in the repositories linked in Awesome-pytorch-list. We download all the repositories as zip files, extract them, remove non-Python files, and split the files randomly to build training and validation datasets.
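As a rough sketch of that pipeline (the path and the 90/10 ratio are illustrative assumptions; the actual steps live in python_autocomplete/create_dataset.py):

```python
import random
from pathlib import Path

def split_python_files(extracted_root: str, train_frac: float = 0.9):
    """Keep only .py files and split them randomly into train/validation."""
    files = list(Path(extracted_root).rglob('*.py'))  # drop non-Python files
    random.shuffle(files)                             # random split
    cut = int(len(files) * train_frac)
    return files[:cut], files[cut:]
```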

We train a character-level model without any tokenization of the source code, since it's the simplest approach.
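Character-level means every character is its own token, so no tokenizer is needed. A minimal sketch of the encoding (the repo builds its vocabulary from the training data, so the exact mapping will differ):

```python
source = "def add(a, b):\n    return a + b\n"
vocab = sorted(set(source))                   # one entry per distinct character
stoi = {ch: i for i, ch in enumerate(vocab)}  # char -> integer id
ids = [stoi[ch] for ch in source]             # the sequence the model sees
```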

Try it yourself

  1. Clone this repo
  2. Install the requirements from requirements.txt
  3. Run python_autocomplete/create_dataset.py. It:
    • collects the repositories mentioned in Awesome-pytorch-list
    • downloads the zip files of the repositories
    • extracts the zips
    • removes non-Python files
    • collects all the Python code into data/train.py and data/eval.py
  4. Run python_autocomplete/train.py to train the model. Try changing hyper-parameters like model dimensions and the number of layers (a model sketch follows this list).
  5. Run evaluate.py to evaluate the model.
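A hedged sketch of the kind of character-level LSTM that train.py could train; the class name, dimensions, and layer count are illustrative assumptions, not the repo's actual configuration.

```python
import torch.nn as nn

class CharLSTM(nn.Module):
    """Predicts the next character from the characters seen so far.

    Hypothetical example; d_model and n_layers are the kind of
    hyper-parameters step 4 suggests experimenting with.
    """
    def __init__(self, n_chars: int, d_model: int = 512, n_layers: int = 3):
        super().__init__()
        self.embed = nn.Embedding(n_chars, d_model)          # char id -> vector
        self.lstm = nn.LSTM(d_model, d_model, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_chars)              # next-char logits

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state
```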

You can also run the training notebook on Google Colab.

Open In Colab

Sample

Here's a sample evaluation of a trained Transformer model.

Colors:

  • yellow: the predicted token is wrong, so the user has to type that character.
  • blue: the predicted token is correct, and the user selects it with a special key press such as TAB or ENTER.
  • green: characters autocompleted based on the prediction.
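A simplified sketch of how each character could be assigned one of those colors during evaluation; `predict` is again an assumed best-completion helper, not the repo's API:

```python
def color_characters(source: str, predict):
    """Yield (character, color) pairs matching the scheme above."""
    pos = 0
    while pos < len(source):
        completion = predict(source[:pos])
        if completion and source.startswith(completion, pos):
            yield completion[0], 'blue'       # correct, selected with TAB/ENTER
            for ch in completion[1:]:
                yield ch, 'green'             # autocompleted characters
            pos += len(completion)
        else:
            yield source[pos], 'yellow'       # wrong prediction, user types it
            pos += 1
```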

We are working on a simple VSCode extension for demonstration.