This is LMAA (Language Model Assignment Analyser), an all-round software system for uploading, solving, and analysing educational assignments. With LLMs on the rise, educators face the challenge of considering the implications of generated content when creating exercises and exams. LMAA is a platform designed for collecting existing and novel assignments and comparing them in terms of how easily language models can solve them. The resulting information may give educators insight into the current state of LLMs in their field of work and support them in creating novel assignments that are not so easily solvable by language models.
- How to use
- Setup
- Run
- Advanced management tasks
- Logging system
- Database system
- Communication system
- More information
The LMAA application is split into four primary components: Assignments, Communication, Testing, and Visualisation.
To start, an educator may enter assignments and classification data. For each assignment, multiple language models may be called several times to generate solutions. To test the solutions, educators may add and execute testcases. To get an overview, the test results and additional factors can be visualised.
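The workflow above can be sketched as a minimal data model. Note that all class and field names below are illustrative assumptions, not the actual LMAA models:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the LMAA workflow: an assignment is solved by
# several LLM calls, and each solution is checked against testcases.
# All names here are assumptions, not the real LMAA classes.

@dataclass
class Testcase:
    stdin: str
    expected_stdout: str

@dataclass
class Solution:
    llm_name: str
    code: str
    passed: list = field(default_factory=list)  # one bool per testcase run

@dataclass
class Assignment:
    title: str
    description: str
    solutions: list = field(default_factory=list)
    testcases: list = field(default_factory=list)

    def pass_rate(self) -> float:
        # Fraction of (solution, testcase) runs that passed -- the kind of
        # metric the visualisation component could aggregate.
        runs = [ok for s in self.solutions for ok in s.passed]
        return sum(runs) / len(runs) if runs else 0.0

assignment = Assignment("FizzBuzz", "Print FizzBuzz up to n")
assignment.testcases.append(Testcase("3", "Fizz"))
assignment.solutions.append(Solution("gpt-x", "print('Fizz')", passed=[True]))
print(assignment.pass_rate())  # -> 1.0
```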
For initial setup, run `setup.py`. This will create the required file structures and the database, including language model detection.

    python setup.py
The application can be started by running `run.py`. Make sure to finish the setup before starting the application.

    python run.py
The logging system is structured as follows:
- Console logging: Django output only
- File logging: Django output and other system processes
The logfile is located according to `config/system_config.yaml`. The default location is `logs/lmaa-log.log`.
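A console-plus-file setup of this kind can be sketched with Python's standard `logging` module. The logger name and the hard-coded file path below are assumptions for illustration; LMAA reads the actual path from `config/system_config.yaml`:

```python
import logging

# Sketch of a console + file logging setup similar to the one described
# above. Logger name and file path are assumptions, not LMAA's real config.
logger = logging.getLogger("lmaa")
logger.setLevel(logging.INFO)

console = logging.StreamHandler()               # console output
logfile = logging.FileHandler("lmaa-log.log")   # persistent logfile

fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
console.setFormatter(fmt)
logfile.setFormatter(fmt)

logger.addHandler(console)
logger.addHandler(logfile)

logger.info("setup complete")  # appears on the console and in lmaa-log.log
```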
The database system is handled by Django Models. The database structure is accordingly defined in `<appname>/models.py`.
By default, SQLite is used as the database system. The database file is located according to `config/system_config.yaml`. The default location is `data/lmaa-local.db`.
The communicator system can be dynamically extended. Correctly configured and implemented communicators will automatically be available in the `CommunicationManager` and the Django frontend upon the next startup.
New communicators may be added at any point to `scripts.communication.impl`.
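Automatic discovery of this kind is typically done by collecting all subclasses of a common base class at startup. The sketch below shows the general mechanism; the real `CommunicationManager` may work differently, and the class names here are assumptions:

```python
# Sketch of dynamic communicator discovery: the manager collects every
# subclass of a shared base class. Class names are illustrative only.

class Communicator:
    name = "base"

class CommunicationManager:
    def get_implementations(self):
        # Every imported subclass of Communicator is picked up automatically,
        # so adding a new implementation file is enough to register it.
        return {cls.name: cls for cls in Communicator.__subclasses__()}

# Simply defining (i.e. importing) a new subclass makes it discoverable:
class OpenAICommunicatorImpl(Communicator):
    name = "OpenAI"

manager = CommunicationManager()
print(sorted(manager.get_implementations()))  # -> ['OpenAI']
```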
When implementing a new communicator, be sure to comply with the following instructions:
- Create a new Python file in `/scripts/communication/impl`
- Create a class with the mandatory name schema `___CommunicatorImpl`, inheriting from `Communicator`
- Define all request properties necessary for API calls in the class property `properties`
- The property containing the user input must always be named `prompt`
- Properties may have one of three types (`str`, `int`, `float`), as implemented by `PropertyType`. If additional types are required, the frontend must be adapted
- A property can be mandatory or optional. Optional properties must contain a default value
- A property may or may not be a configuration property. Configuration properties are properties displayed in the frontend form `/communication/new/configure`
- Define the display name of the communicator in the class property `name`
- Implement `__init__` containing a super-call (`super().__init__('<CommunicationName>')`)
- Implement all abstract methods from `Communicator` as described in the documentation
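Under the rules above, a new communicator might look like the following sketch. The `Communicator` base class, `Property`, and `PropertyType` shown here are simplified stand-ins written for illustration; consult the actual documentation for the real interfaces:

```python
from enum import Enum

# Simplified stand-ins for the real LMAA base classes -- assumptions only.
class PropertyType(Enum):
    STR = str
    INT = int
    FLOAT = float

class Property:
    def __init__(self, name, ptype, mandatory=True, default=None,
                 is_configuration=False):
        # Optional properties must carry a default value (see rules above).
        self.name = name
        self.ptype = ptype
        self.mandatory = mandatory
        self.default = default
        self.is_configuration = is_configuration

class Communicator:
    def __init__(self, name):
        self.name = name
    def send(self, **kwargs):
        raise NotImplementedError

# Class name follows the mandatory ___CommunicatorImpl schema.
class EchoCommunicatorImpl(Communicator):
    # Display name shown in the frontend (class-property `name` rule).
    name = "Echo"
    # Request properties; the user input must be named `prompt`.
    properties = [
        Property("prompt", PropertyType.STR),
        Property("temperature", PropertyType.FLOAT, mandatory=False,
                 default=1.0, is_configuration=True),
    ]

    def __init__(self):
        super().__init__("Echo")  # mandatory super-call

    def send(self, **kwargs):
        return kwargs["prompt"]  # a real implementation would call an API

print(EchoCommunicatorImpl().send(prompt="hello"))  # -> hello
```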
The provided implementation found in `communicator_openai_chat_completion.py` may be used as a guide.
If all steps have been completed correctly, the `CommunicationManager` will automatically detect the implementation and make it available via `get_implementations()`.
Outdated communicators do not necessarily have to be removed, but removal is possible. Solutions stored in the database only contain the name of the implementation.
When removing a communicator, be sure to comply with the following instructions:
- Delete/remove the implementation file in `/scripts/communication/impl`. If not removed, Django will automatically re-detect the communicator upon the next startup.
- Clean up the database tables `llm` and `llm_property`
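The clean-up step can be done with plain SQL against the SQLite file. The sketch below uses an in-memory database and a guessed schema for `llm` and `llm_property`; the real column names and foreign keys may differ, so inspect the actual schema first:

```python
import sqlite3

# Illustrative clean-up after removing a communicator. Column names and
# the foreign-key layout are assumptions, not the real LMAA schema.
conn = sqlite3.connect(":memory:")  # in practice: data/lmaa-local.db
conn.executescript("""
    CREATE TABLE llm (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE llm_property (id INTEGER PRIMARY KEY, llm_id INTEGER, key TEXT);
    INSERT INTO llm VALUES (1, 'Removed Communicator');
    INSERT INTO llm_property VALUES (1, 1, 'prompt');
""")

# Delete the dependent property rows first, then the communicator row.
conn.execute("DELETE FROM llm_property WHERE llm_id IN "
             "(SELECT id FROM llm WHERE name = ?)", ("Removed Communicator",))
conn.execute("DELETE FROM llm WHERE name = ?", ("Removed Communicator",))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM llm").fetchone()[0])  # -> 0
```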
LMAA was developed as part of the thesis "Application of generative AI in introductory programming courses" at TU Wien Informatics by Fabian Hagmann.