
mltype

Command line tool for improving typing speed and accuracy. The main goal is to help programmers practise programming languages.

Demo

Installation

Python environment

pip install --upgrade mltype

Docker

Make sure that Docker and Docker Compose are installed.

docker-compose run --rm mltype

You will get a shell in a running container and the mlt command should be available.

See the documentation for more information.

Main features

Text generation

  • Using neural networks to generate text. One can use pretrained networks (see below) or train new ones from scratch.
  • Alternatively, one can read text from a file or provide it manually.

Typing interface

  • Dead simple (implemented in curses)
  • Basic statistics - WPM and accuracy
  • Setting target speed
  • Playing against past performances
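The statistics above follow the common convention that one "word" is five characters. A minimal sketch of how WPM and accuracy can be computed under that convention (mltype's internal implementation may differ):

```python
def wpm(n_typed_chars: int, seconds: float) -> float:
    """Words per minute, using the convention that 1 word = 5 characters."""
    return (n_typed_chars / 5) / (seconds / 60)


def accuracy(target: str, typed: str) -> float:
    """Fraction of typed characters that match the target text position-wise."""
    correct = sum(t == u for t, u in zip(target, typed))
    return correct / max(len(typed), 1)


# Typing 300 characters in 60 seconds corresponds to 60 WPM
print(wpm(300, 60))  # 60.0

# 4 of 5 characters match the target
print(accuracy("hello", "hellp"))  # 0.8
```

Setting a target speed then amounts to comparing the live WPM value against a fixed threshold while you type.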

Documentation and usage

The entrypoint is mlt. To get information on how to use the subcommands, use the --help flag (e.g. mlt file --help).

$ mlt
Usage: mlt [OPTIONS] COMMAND [ARGS]...

  Tool for improving typing speed and accuracy

Options:
  --help  Show this message and exit.

Commands:
  file    Type text from a file
  ls      List all language models
  random  Sample characters randomly from a vocabulary
  raw     Provide text manually
  replay  Compete against a past performance
  sample  Sample text from a language
  train   Train a language

Pretrained models

See below for a list of pretrained models. They are stored on Google Drive, and one needs to download the entire archive.

Name                  Info                                                                Link
C++                   Trained on https://github.com/TheAlgorithms/C-Plus-Plus             link
C#                    Trained on https://github.com/TheAlgorithms/C-Sharp                 link
CPython               Trained on https://github.com/python/cpython/tree/master/Python     link
Crime and Punishment  Trained on http://www.gutenberg.org/ebooks/2554                     link
Dracula               Trained on http://www.gutenberg.org/ebooks/345                      link
Elixir                Trained on https://github.com/phoenixframework/phoenix              link
Go                    Trained on https://github.com/TheAlgorithms/Go                      link
Haskell               Trained on https://github.com/jgm/pandoc                            link
Java                  Trained on https://github.com/TheAlgorithms/Java                    link
JavaScript            Trained on https://github.com/trekhleb/javascript-algorithms        link
Kotlin                Trained on https://github.com/square/leakcanary                     link
Lua                   Trained on https://github.com/nmap/nmap                             link
Perl                  Trained on https://github.com/mojolicious/mojo                      link
PHP                   Trained on https://github.com/symfony/symfony                       link
Python                Trained on https://github.com/TheAlgorithms/Python                  link
R                     Trained on https://github.com/tidyverse/ggplot2                     link
Ruby                  Trained on https://github.com/jekyll/jekyll                         link
Rust                  Trained on https://github.com/rust-lang/rust/tree/master/compiler   link
Scala                 Trained on https://github.com/apache/spark/tree/master/mllib        link
Scikit-learn          Trained on https://github.com/scikit-learn/scikit-learn             link
Swift                 Trained on https://github.com/raywenderlich/swift-algorithm-club    link

Once you download the file, place it in ~/.mltype/languages (if the folder does not exist, you will have to create it). You can rename the file to whatever you like; this name is then used to refer to the model.
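For example, the setup steps described above look like this (my_new_model is just an example name; pick any name you like):

```shell
# mltype does not create this folder automatically
mkdir -p ~/.mltype/languages

# Place the downloaded file there under the name you want to use,
# e.g. ~/.mltype/languages/my_new_model -- the file name becomes the
# model name that subcommands such as "mlt sample" refer to.
```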

To verify that the model was downloaded successfully, try to sample from it. Note that this might take 20+ seconds the first time around.

mlt sample my_new_model

Feel free to create an issue if you want me to train a model for you. Note that you can also do it yourself easily by reading the documentation (mlt train) and getting a GPU on Google Colab (click the badge below for a ready-to-use notebook).

Open In Colab

Credits

This project is very much motivated by The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy.