
rules_contest

rules_contest is a collection of Bazel rules for maintaining programming contest problems. It helps you automate problem-preparation tasks such as:

  • Building and testing datasets
  • Building and testing reference solutions
  • Building problem statements
  • Building a progress tracker

Rules provided by rules_contest are designed to be simple and composable. If an existing rule does not fit your needs, you can replace it with a custom rule of your own while continuing to use the rest.
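To illustrate how such rules compose, a problem's BUILD file might wire a generator, a validator, and a reference solution together as separate targets. The sketch below is illustrative only: the load path, rule names, and attributes are placeholders, not necessarily the actual rules_contest API.

```python
# sum/judge/BUILD.bazel -- illustrative sketch; rule and attribute names
# are placeholders, not necessarily the real rules_contest API.
load("@rules_contest//contest:defs.bzl", "dataset", "dataset_test", "solution_test")

# Build the dataset from a generator script.
dataset(
    name = "dataset",
    generator = "generator.py",
)

# Check every generated input against a validator.
dataset_test(
    name = "dataset_test",
    dataset = ":dataset",
    validator = "validator.py",
)

# Verify a reference solution against the dataset.
solution_test(
    name = "solution_test",
    dataset = ":dataset",
    solution = "//sum/python:solution",
)
```

Because each rule is a plain Bazel target, swapping in a custom validator or generator is just a matter of pointing the corresponding attribute at your own target.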

Getting Started

Prerequisites

Install Bazel by following the official guide.

Clone the template repository

We provide a Git repository containing a template workspace on GitHub.

https://github.com/nya3jp/contest_template

Click the "Use this template" button to create a new repository from the template, then use Git to clone the repository to your local machine.

The template workspace contains a few example problems and their solutions.

Build all targets

In the workspace, run the following command to build all datasets and solutions.

bazel build //...

Build artifacts are saved under the bazel-bin directory in the workspace. For example, the dataset for the "Sum of two numbers" problem is at bazel-bin/sum/judge/dataset.zip.
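If you want to inspect a built dataset without unpacking it by hand, a short script can list the files inside the archive. This is a generic sketch using Python's standard zipfile module; the path matches the example above, and the layout of files inside the archive depends on your problem setup.

```python
import zipfile


def list_dataset(path):
    """Return the sorted file names stored in a dataset archive."""
    with zipfile.ZipFile(path) as archive:
        return sorted(archive.namelist())


if __name__ == "__main__":
    # Path from the example above; adjust for your own workspace.
    for name in list_dataset("bazel-bin/sum/judge/dataset.zip"):
        print(name)
```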

Test all targets

In the workspace, run the following command to test all datasets and solutions.

bazel test //...

At the end of the output, a summary of the test results is printed to the console.

//sqrt/judge:dataset_test                                                PASSED in 1.2s
//sqrt/judge:sample_test                                                 PASSED in 0.5s
//sqrt/python:python_test                                                PASSED in 2.8s
//sum/cpp:cpp_test                                                       PASSED in 0.7s
//sum/cpp_WA:cpp_WA_test                                                 PASSED in 0.7s
//sum/judge:dataset_test                                                 PASSED in 1.3s
//sum/judge:sample_test                                                  PASSED in 0.4s
//sum/python:python_test                                                 PASSED in 1.7s

Documentation

Full documentation is available online.
