
Gofer Grader


Simple library for interactive autograding.

Previous names include gradememaybe and okgrade.

See the Gofer Grader documentation for more information.

What?

This library can be used to autograde Jupyter Notebooks and Python files.

Instructors can write tests in a subset of the okpy test format (support for other formats is planned), and students can dynamically check whether their code is correct. These notebooks / .py files can later be collected and graded automatically.
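As a rough illustration, a test file in the okpy-style format is a Python module defining a `test` dictionary with doctest-style cases. The question name, point value, and the `add` function referenced below are made-up examples, not part of any real course:

```python
# Hypothetical okpy-format test file (e.g. tests/q1.py). The structure —
# a module-level `test` dict with `suites` containing doctest `cases` —
# follows the okpy convention; the specific values are illustrative.
test = {
    "name": "q1",
    "points": 1,
    "suites": [
        {
            "type": "doctest",
            "cases": [
                {
                    # Doctest-style code: the expression and its
                    # expected output, exactly as in a REPL session.
                    "code": ">>> add(2, 3)\n5",
                }
            ],
        }
    ],
}
```

A student would then typically check their work from inside the notebook with something like `from gofer.ok import check; check('tests/q1.py')` (see the Gofer Grader documentation for the exact API).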

Integrating Gofer into your course

As part of an effort to support autograding for Berkeley's online offering of Data 8, this repo also contains two components that could be useful to others for their own courses. The primary one is a tornado service that receives notebook submissions and runs them in Docker containers. The second piece is a Jupyter notebook extension that submits the current notebook to the service. Though they could be modified to work on your own setup, these are meant to play particularly nicely with JupyterHub.
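To make the shape of that service concrete, here is a minimal sketch of a tornado handler that accepts a notebook via POST. The endpoint path, field names, and response format are assumptions for illustration only; the real service (and its Docker-based grading step) lives in gofer_service:

```python
# Hypothetical sketch of a submission-receiving tornado service.
# Not the actual gofer_service implementation.
import json

import tornado.web


class SubmissionHandler(tornado.web.RequestHandler):
    async def post(self):
        # The notebook arrives as JSON in the request body.
        submission = json.loads(self.request.body)
        # The real service would now grade the notebook inside a Docker
        # container; this sketch only acknowledges receipt.
        self.write({
            "status": "received",
            "cells": len(submission.get("cells", [])),
        })


def make_app():
    # Route POSTs to /submit to the handler above.
    return tornado.web.Application([(r"/submit", SubmissionHandler)])
```

Running the grading in a separate container keeps arbitrary student code isolated from the service process itself.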

Additional documentation on how to get them working is in the respective gofer_service and submit_extension directories.

Why?

okpy is used at Berkeley for a number of large classes (CS61A, data8, etc). It has a lot of features that are very useful for large and diverse classes, such as:

  1. Office Hours management
  2. Student assignment statistics
  3. Plagiarism detection
  4. Personalized feedback
  5. Backups of student submissions
  6. Support for Python, Scheme and other languages
  7. Hiding / locking tests when students are running them locally
  8. Mass automatic grading

And many more.

However, this richness comes with a complexity cost for instructors who need only a subset of these features, and for sysadmins operating an okpy server installation.

This project is tightly scoped to only do automatic grading, and nothing else.

Caveats

Gofer executes arbitrary user code within the testing environment, rather than parsing standard output. While certain measures make it more difficult for users to maliciously modify the tests, it is not possible to fully secure against such attacks, since Python exposes all objects at runtime.
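A small demonstration of why this is hard to lock down: code executed in the same interpreter can reach and mutate the grader's own objects. All names below are illustrative, not Gofer's actual internals:

```python
# Why in-process grading can't be fully secured: "student" code running
# in the same interpreter can mutate shared grader state. Illustrative
# names only — this is not Gofer's actual internal structure.
TESTS = {"q1": {"points": 1}}


def run_student_code(source, tests):
    # The grader hands student code a namespace; nothing stops that
    # code from editing any object it can reach through it.
    exec(source, {"tests": tests})


malicious = "tests['q1']['points'] = 100"
run_student_code(malicious, TESTS)
# TESTS["q1"]["points"] is now 100, not 1.
```

Defenses like name mangling or restricted namespaces raise the bar, but a determined student can usually find a path to the underlying objects, which is why out-of-process grading (as in the Docker-based service above) is the more robust option.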

Credit

Lots of credit to the amazing teams that have worked on okpy over the years.

  1. Academic Publications
  2. GitHub Organization
  3. ok-client GitHub repository