MiniDungeons is a simple turn-based roguelike puzzle game, implemented as a benchmark problem for modeling the decision-making styles of human players. In every MiniDungeons level, the hero (controlled by the player) starts at the level's entrance and must navigate to the level exit while collecting treasures, killing monsters and drinking healing potions. The MiniDungeons game was created for two purposes: (a) to investigate how human players enact decision-making styles in a simple game, and (b) to construct artificial agents able to represent such decision-making styles. These artificial agents are referred to as procedural personas and make decisions based on a utility (e.g. kill monsters, collect treasure, survive, or speed-run to the exit) specified by the human designer. The playtraces of such procedural personas can act as proxies for human playtraces during the design of game levels or new game rules. Moreover, procedural personas can act as critics of an automated or mixed-initiative generator of new content (levels, rules) for MiniDungeons.
The MiniDungeons 1 framework was implemented in 2014 and revised over the years, leading to a number of publications. More information on the research itself, especially procedural personas, can be found here.
The codebase provided in this repository offers a bare-bones version of the MiniDungeons 1 project, complete with the game simulation, the 11 levels used in the research, and a controller interface for creating custom artificial agents that play the game. For the sake of simplicity and readability, the framework uses maps drawn as ASCII characters rather than the tile graphics of the MiniDungeons research project (which are found here).
Three important aspects of the project are worth highlighting:
- Map resources: the original MiniDungeons used 10 maps (1-10) and one tutorial map (0) for testing artificial agents and collecting human playtest data. These maps are stored as txt files in the resources/dungeons/ folder and can thus be edited directly. Maps consist of empty tiles (.), impassable wall tiles (#), the entrance where the hero starts from (E), the exit the hero must reach to complete the level (r), treasure tiles (t), potions (p) and monsters (m); a hypothetical example map is shown after this list. The hero starts at the entrance with 40 hit points (HP); stepping on a monster tile for the first time kills the monster but deals 6-15 HP damage to the hero; stepping on a potion tile for the first time removes the potion and heals the hero by 10 HP.
- Experiment modes: the easiest way to run the project is through the three existing executables in the experiment package (see the launch sketch after this list). SimulationMode performs a number of simulations of a specific agent on one or more dungeons and reports the outcomes as metrics and heatmaps of each playthrough; these reports are also saved in the folder specified by the outputFolder variable. DebugMode steps through a single simulation on one map, showing the ASCII map and the hero's current HP so that users can see what the agent does at each action; additional debug information can be added to this view as needed. CompetitionMode tests how each agent specified in the controllerNames array fares against every other on a number of metrics, such as treasures collected and monsters killed. This mode is intended for conference or classroom competitions where agents created by different users compete on one or more dimensions monitored by the system.
- Controller interface: the controllers package contains the interface that all artificial agents must implement. Essentially, the agent must choose its next action among the four directions (0, 1, 2, 3) every turn via getNextAction. At the start of each simulation, every agent's reset function is also called; it can be overridden to initialize any necessary variables. The agent has full knowledge of, and access to, the map. A number of very simple controllers are included in the same package, the most 'elaborate' of which is the pathfinding agent; a minimal controller sketch is shown at the end of this section.
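As a quick reference for the map format, below is a hypothetical dungeon sketch using the tile characters listed above. It is not one of the bundled levels; consult the txt files in resources/dungeons/ for the actual maps and their exact dimensions.

```
##########
#E....m..#
#.####.#t#
#.#p...#.#
#.#.##m#.#
#t....#..#
#.####.#.#
#......#r#
##########
```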
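The experiment modes can be started directly from your IDE. The wrapper below is only a minimal sketch of launching one programmatically; it assumes the classes live in a package named experiment and expose the usual main(String[]) entry point, and that settings such as outputFolder and controllerNames are edited inside the mode classes themselves. Adjust the package path to match the actual source tree.

```java
// Hypothetical wrapper for launching one of the bundled experiment modes.
// The fully qualified class name experiment.SimulationMode is an assumption.
public class RunExperiment {
    public static void main(String[] args) throws Exception {
        // Settings such as the agent, the dungeons and the outputFolder for
        // reports are configured inside the mode class itself, so no
        // command-line arguments are forwarded here.
        experiment.SimulationMode.main(new String[0]);

        // Swap in experiment.DebugMode to step through a single run, or
        // experiment.CompetitionMode to pit the controllerNames agents
        // against each other.
    }
}
```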
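To illustrate the controller interface, here is a minimal random agent. It is sketched against an assumed interface with a parameterless reset() and a getNextAction method that receives the current game state and returns one of the four direction indices (0-3); the actual interface name, method signatures and game-state type may differ, so treat this purely as a shape to follow and match it to the interface found in the controllers package.

```java
import java.util.Random;

// Minimal sketch of a custom agent. The interface name (commented out below),
// the game-state parameter type (Object as a stand-in) and the exact method
// signatures are assumptions; align them with the actual controllers package.
public class RandomWalkController /* implements Controller */ {
    private Random rng;

    // Called at the start of each simulation; initialize any state here.
    public void reset() {
        rng = new Random();
    }

    // Called every turn; must return one of the four directions (0, 1, 2, 3).
    // A real agent would inspect the map (to which it has full access)
    // instead of moving at random.
    public int getNextAction(Object gameState) {
        return rng.nextInt(4);
    }
}
```

The bundled pathfinding agent presumably builds on the lib/ai_path.jar routines to plan routes toward the exit or other targets instead of moving at random.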
This repository uses the excellent pathfinding algorithm by robotacid, included in lib/ai_path.jar. The library is licensed under the GNU Lesser General Public License and is included here as a combined work. The current codebase is also released under the GNU Lesser General Public License. If you use this codebase in academic research, please cite the paper that introduced it: "Christoffer Holmgard, Antonios Liapis, Julian Togelius, Georgios N. Yannakakis: Generative Agents for Player Decision Modeling in Games, in Poster Proceedings of the 9th Conference on the Foundations of Digital Games, 2014".