projectmesa/mesa-examples

Add Contributing.md file & edit related files

jackiekazil opened this issue · 4 comments

We need to add a contributing.md file to this repo. This is an extension of this ticket: projectmesa/mesa#963 (comment), but it can be broken out separately.

AC:

  • A contributing.md file that helps people contribute
  • Update the main Mesa docs, as well as the contributing.md files in the main mesa & mesa-geo repos, to remove anything that might be confusing and add whatever is needed for clarity.
  • Include the outcome of the discussion on pinning models (projectmesa/mesa#1530).
  • Add some text about how a readme for an example should be formatted / what it should include.

Perhaps the contributing.md file could be different from that in Mesa & Mesa-Geo? As mentioned in #61, both AnyLogic and Stella have two types of example models: one from the official team, and the other from the community.

We could encourage the community to contribute their Mesa models to this repository (maybe from a class of students as part of their homework assignments) by lowering the standard for code quality (e.g., algorithmic complexity doesn't have to be optimized), so long as the models can run and produce the expected behaviors. We could require that the documentation contain certain mandatory sections (e.g., a short summary, what the agents are, etc.).
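For instance, such a requirement could be enforced by a small check script in CI. The following is only a rough sketch -- the section names and the examples/ directory layout are placeholders for whatever we end up agreeing on:

```python
# Hypothetical check: every example directory must have a README.md containing
# a set of mandatory sections. The section names below are placeholders.
from pathlib import Path

REQUIRED_SECTIONS = ["Summary", "Agents", "How to Run"]


def missing_sections(example_dir: Path) -> list[str]:
    """Return the required headings that are absent from the example's README."""
    readme = example_dir / "README.md"
    if not readme.exists():
        return list(REQUIRED_SECTIONS)
    text = readme.read_text(encoding="utf-8")
    return [s for s in REQUIRED_SECTIONS if f"## {s}" not in text]


if __name__ == "__main__":
    for example in sorted(Path("examples").iterdir()):
        if example.is_dir():
            missing = missing_sections(example)
            if missing:
                print(f"{example.name}: missing sections {missing}")
```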

@wang-boyu I like that idea a lot!
Do you have any thoughts on marking / flagging officially "endorsed" ones?

And on that matter, I will also throw in another thought here -- what are the purposes of our examples?
The initial purpose was to demonstrate features. Now I need help to easily tell you where to find a feature.

Thinking out loud - maybe we should list our user goals, then try to figure out the solution for those.
One goal your post brings up -- given a minimal set of requirements, a user's model is accepted, while examples that are "official" are held to a more rigorous level of acceptance.

Do you have any thoughts on marking / flagging officially "endorsed" ones?

We may simply put them into different folders? For instance, a community or 3rdparty folder in this repository, and another team_mesa or official folder for our examples. I am terrible at naming things.
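For illustration only (folder names taken from the suggestions above; the model subfolders are placeholders), the layout could look like:

```
mesa-examples/
├── official/       # models maintained by the Mesa team, held to stricter review
│   └── sugarscape_g1mt/
└── community/      # contributed models; must run and have a complete readme
    └── boltzmann_wealth_classroom/
```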

what are the purposes of our examples?

Demonstrating features is definitely a primary goal of having examples. Prospective users may be attracted by the large number of examples we have and be impressed by what Mesa can offer.

Another possible purpose is to illustrate how models can be developed for certain use cases. If one wants to develop a specific model, then he/she can download a relevant example and start directly from there.

Yet another purpose could be to promote community (and especially educational) engagement? In the discussion projectmesa/mesa#1577 I mentioned some university courses and project assignments. Maybe what students manage to achieve in these classes can also contribute to our project in the form of community examples, if, of course, we are open to this and provide explicit instructions on how to do so.

rht commented

A relevant read would be https://www.inet.ox.ac.uk/files/JEL-v2.0.pdf, in "Section IV I Challenge and Opportunity: How to Create ABM Community Models?".

I'd say, in addition to demonstrating features, the repo should harbor classic examples that have been peer reviewed. The Sugarscape {G1, M, T} model, which has had eyeballs from @tpike3, Rob Axtell, and students from Complexity Explorer, is a sound starting point for people, analogous to how people would use scipy.integrate.odeint knowing it has had numerous bugfixes (the most recent one that caused a crash was in 2018). Additionally, this would be training data for language models.

Regarding the peer review process, one model would be how Scholarpedia does it, e.g. a wiki article on the Game of Life co-written by John Conway himself. But I find this to be problematic, as it is an appeal to authority. I don't have the capacity to ascertain the soundness of projectmesa/mesa#1057, because I haven't implemented it myself, nor have I run the code under various conditions that could be written as tests confirming the findings in the paper. Tests should be the way to decentralize the audit/review process, in that, soon, machines could write them.
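For concreteness, a behavioral test of that kind might look like the sketch below. Everything here is hypothetical -- the module path, class name, constructor arguments, and the wealth-conservation claim are stand-ins for whichever example and published finding is under review:

```python
# Hypothetical pytest sketch: check that an example model runs and reproduces
# a claim from its source paper. All names below are placeholders.
import pytest

model_module = pytest.importorskip("examples.boltzmann_wealth.model")


def test_model_runs_and_conserves_wealth():
    model = model_module.BoltzmannWealth(n_agents=50, seed=42)
    total_before = sum(agent.wealth for agent in model.agents)
    for _ in range(100):
        model.step()
    total_after = sum(agent.wealth for agent in model.agents)
    # The "finding" reproduced here: pairwise exchange conserves total wealth.
    assert total_after == total_before
```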