This project autocompletes Python source code using LSTM or Transformer models.
It gives decent results, saving over 30% of keystrokes in most files and close to 50% in some. We calculate keystrokes saved by making a single (best) prediction at each position and selecting it with a single key press.
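That metric can be reproduced with a simple simulation: at each position the model makes one best prediction; if it matches the upcoming text, the user accepts it with one key press, otherwise they type one character. Here is a minimal sketch of that calculation; `predict_next` is a hypothetical stand-in for the model's best-guess continuation, not a function from this repo.

```python
def keystrokes_saved(text: str, predict_next) -> float:
    """Fraction of key presses saved when typing `text` with the model.

    `predict_next(prefix)` is a hypothetical hook that returns the model's
    single best-guess continuation (a string, possibly empty) for `prefix`.
    """
    if not text:
        return 0.0
    presses = 0
    i = 0
    while i < len(text):
        prediction = predict_next(text[:i])
        if prediction and text.startswith(prediction, i):
            presses += 1   # one special key press accepts the whole prediction
            i += len(prediction)
        else:
            presses += 1   # prediction wrong: the user types one character
            i += 1
    return 1.0 - presses / len(text)
```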
The dataset is the Python code found in the repositories linked in Awesome-pytorch-list. We download all the repositories as zip files, extract them, remove non-Python files, and split the rest randomly into training and validation datasets.
We train a character-level model without any tokenization of the source code, since that is the simplest approach.
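Working at the character level keeps preprocessing trivial: the vocabulary is just the set of characters in the training data, and encoding is a direct lookup. A minimal sketch (the variable names are illustrative, not from this repo):

```python
# Build the character vocabulary from the training data and encode it as ids.
with open('data/train.py') as f:
    text = f.read()

chars = sorted(set(text))                    # entire vocabulary: unique characters
stoi = {c: i for i, c in enumerate(chars)}   # character -> integer id
itos = {i: c for c, i in stoi.items()}       # integer id -> character

encoded = [stoi[c] for c in text]            # sequence the model trains on
decoded = ''.join(itos[i] for i in encoded)  # inverse mapping; decoded == text
```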
- Clone this repo.
- Install requirements from `requirements.txt`.
- Run `python_autocomplete/create_dataset.py` (a rough sketch of this step is shown after this list). It:
  - collects the repos mentioned in Awesome-pytorch-list
  - downloads the zip files of the repositories
  - extracts the zips
  - removes non-Python files
  - collects all Python code into `data/train.py` and `data/eval.py`
- Run `python_autocomplete/train.py` to train the model. Try changing hyper-parameters like model dimensions and number of layers.
- Run `evaluate.py` to evaluate the model.
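The dataset step boils down to something like the following sketch. The helper name and parameters are illustrative assumptions; `python_autocomplete/create_dataset.py` is the actual implementation.

```python
import io
import random
import zipfile
from pathlib import Path

import requests


def collect_python_code(repo_zip_urls, out_dir='data', eval_fraction=0.1):
    """Download repo zips, keep only .py files, and split into train/eval."""
    sources = []
    for url in repo_zip_urls:
        archive = zipfile.ZipFile(io.BytesIO(requests.get(url).content))
        for name in archive.namelist():
            if name.endswith('.py'):  # drop non-Python files
                sources.append(archive.read(name).decode('utf-8', errors='ignore'))

    random.shuffle(sources)  # random train/eval split
    split = int(len(sources) * (1 - eval_fraction))
    Path(out_dir).mkdir(exist_ok=True)
    Path(out_dir, 'train.py').write_text('\n'.join(sources[:split]))
    Path(out_dir, 'eval.py').write_text('\n'.join(sources[split:]))
```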
You can also run the training notebook on Google Colab.
Here's a sample evaluation of a trained Transformer model.
Colors:
- yellow: the predicted token is wrong and the user needs to type that character.
- blue: the predicted token is correct and the user selects it with a special key press, such as TAB or ENTER.
- green: characters autocompleted based on the prediction.
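These labels fall out of the same simulation as the keystroke metric above. A hedged sketch of how they might be assigned, with `predict_next` again a hypothetical model hook:

```python
def label_characters(text, predict_next):
    """Tag each character of `text` with the color used in the visualization."""
    labels = []
    i = 0
    while i < len(text):
        prediction = predict_next(text[:i])
        if prediction and text.startswith(prediction, i):
            labels.append('blue')                             # the accepting key press
            labels.extend(['green'] * (len(prediction) - 1))  # completed for free
            i += len(prediction)
        else:
            labels.append('yellow')                           # user types this character
            i += 1
    return labels
```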
We are working on a simple VS Code extension as a demonstration.