allennlp-custom-subcommand-sample

AllenNLP sample code for custom Subcommand.

This is AllenNLP sample code for the Subcommand mechanism.

The registrable subcommand mechanism was introduced in PR allenai/allennlp#3671. The original allennlp CLI provides subcommands such as train, evaluate, and predict; this sample code shows how to define your own.

Execution example

$ allennlp --help
2022-07-13 22:12:44,209 - INFO - allennlp.common.plugins - Plugin my_project available << You can see that the my_project plugin is loaded automatically!
usage: allennlp [-h] [--version]  ...

Run AllenNLP

optional arguments:
  -h, --help        show this help message and exit
  --version         show program's version number and exit

Commands:

    build-vocab     Build a vocabulary from an experiment config file.
    cached-path     Cache remote files to the AllenNLP cache.
    checklist       Run a trained model through a checklist suite.
    count-instances
                    Count the number of training instances in an experiment config file.
    diff            Display a diff between two model checkpoints.
    evaluate        Evaluate the specified model + dataset(s).
    find-lr         Find a learning rate range.
    predict         Use a trained model to make predictions.
    print-results   Print results from allennlp serialization directories to the console.
    push-to-hf      Push a model to the Hugging Face Hub. Pushing your models to the Hugging Face Hub ([hf.co](https://hf.co/)) allows you to share your models with others. On top of that, you can try the models directly in the browser with the available widgets. Before running this command, login to Hugging Face with `huggingface-cli login`. You can specify
                    either a `serialization_dir` or an `archive_path`, but using the first option is recommended since the `serialization_dir` contains more useful information such as metrics and TensorBoard traces.
    test-install    Test AllenNLP installation.
    train           Train a model.
    hello-subcommand                                    << Here is the command that has been added!
                    This is the first custom subcommand << Here is the command that has been added!
$ allennlp hello-subcommand --message world!
2022-07-13 22:09:51,665 - INFO - allennlp.common.plugins - Plugin my_project available
Hello world!

Tips for adding custom subcommand

I recommend also referring to allenai/allennlp-template-config-files, a template for starting a new allennlp project using config files and allennlp train.

Create .allennlp_plugins

  • Create a .allennlp_plugins file in the root directory of the project (ref. 9d487b7).
  • Add the custom library name, e.g., my_project, to .allennlp_plugins in the project.
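Assuming the plugin package is named my_project (as in this repository), the .allennlp_plugins file contains just the importable package name, one plugin per line:

```
my_project
```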

Create files under my_project/commands

  • Create files for the hello-subcommand command under my_project/commands.
    • I recommend creating two files: sub_command.py for the registrable subcommand and function.py for the subcommand's main function.
    • Remember to create an __init__.py in each subdirectory.
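The split between the two files can be sketched as below. In the actual project, the class in sub_command.py would subclass allennlp.commands.subcommand.Subcommand and register itself with @Subcommand.register("hello-subcommand") (an assumption based on the AllenNLP plugin API, not shown in this README); the sketch uses plain argparse so it runs without AllenNLP installed, but the subparser wiring has the same shape.

```python
import argparse

# function.py -- the main function for the subcommand
def hello(args: argparse.Namespace) -> None:
    print(f"Hello {args.message}")

# sub_command.py -- in the real project this lives in a class that
# subclasses allennlp.commands.subcommand.Subcommand and is decorated
# with @Subcommand.register("hello-subcommand") (assumed API); plain
# argparse is used here so the sketch is self-contained.
def add_subparser(parser: argparse._SubParsersAction) -> argparse.ArgumentParser:
    description = "This is the first custom subcommand"
    subparser = parser.add_parser("hello-subcommand", description=description, help=description)
    subparser.add_argument("--message", type=str, default="subcommand")
    # AllenNLP dispatches to the function stored in `func`.
    subparser.set_defaults(func=hello)
    return subparser

# Mimic the top-level `allennlp` parser and run the subcommand.
parser = argparse.ArgumentParser(prog="allennlp")
subparsers = parser.add_subparsers(title="Commands")
add_subparser(subparsers)
args = parser.parse_args(["hello-subcommand", "--message", "world!"])
args.func(args)  # prints "Hello world!"
```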

Update files to load the subcommand

  • Update my_project/__init__.py to load my_project/commands.
  • Update my_project/commands/__init__.py to load my_project.commands.hello_subcommand.sub_command.HelloSubcommand.
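Putting the two update steps together, the wiring is just a pair of imports (a minimal sketch of the layout described above; the `# noqa` comments only silence unused-import warnings):

```python
# my_project/__init__.py
from my_project import commands  # noqa: F401

# my_project/commands/__init__.py
from my_project.commands.hello_subcommand.sub_command import HelloSubcommand  # noqa: F401
```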

That's all. Now you can realize all your ideas as subcommands 🎉

Mechanism of automatically registering/loading the subcommand

The following is a rough introduction to the mechanism:

  1. allennlp reads .allennlp_plugins in the repository root.
  2. For each library name listed in .allennlp_plugins (e.g., my_project in this repository), allennlp imports that library's root __init__.py (e.g., my_project/__init__.py).
  3. That __init__.py imports the custom subcommand, which registers it so that allennlp can offer it on the command line.
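The loading steps above can be sketched roughly as follows. This is a simplification: the real logic lives in allennlp.common.plugins, and the function name here is illustrative, not AllenNLP's actual API.

```python
import importlib
from pathlib import Path

def load_plugins(plugins_file: str = ".allennlp_plugins") -> None:
    """Import every package listed (one per line) in the plugins file."""
    path = Path(plugins_file)
    if not path.exists():
        return  # no plugins file, nothing to load
    for name in path.read_text().splitlines():
        name = name.strip()
        if name:
            # Importing the package runs my_project/__init__.py, whose
            # imports pull in the @Subcommand.register'd class, which
            # adds the subcommand to AllenNLP's registry as a side effect.
            importlib.import_module(name)
```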

Acknowledgments