
JOSS Review: Paper Comments


Hi! I’m reviewing your paper for JOSS and opening small issues as I come across them. This issue corresponds to my comments on the manuscript.

The paper hits all of the points required by JOSS, but I personally found it to be very terse and not very informative about the software. I think substantially more detail should be added so that, when the paper is cited by researchers, a reader can look at it briefly and understand the functionality and design of the software. I feel comfortable requesting this change because you have very extensive documentation, and using some of that content to formally present the software in the paper should not be too arduous a task.

Below are my comments that need to be addressed in the revised paper:

  1. While the summary does a good job at setting the context of the field, I find the summary to be lacking in describing what GrainLearning does. Could you please add another two sentences or so highlighting the broad strokes of what GrainLearning does, beyond “emulating granular behavior” and “learning the uncertainties?”
  2. The ‘State of the Field’ section reads more like what I would expect for the ‘Statement of Need’. I think you should merge these and shorten.
  3. I don’t believe lines 32 - 40 should be in the statement of need as they describe the functionality of the software.
  4. The “functionality” section of the paper is highly tailored to a narrow scientific community, and I think it obscures the very cool capabilities of the software. I would greatly appreciate it if, at a minimum, you could provide a software architecture diagram that highlights the GrainLearning data model. This would let a quick reader glance at the paper and understand what GrainLearning takes in and what it puts out.
  5. It is not clear to me what the authors mean by “calibration” in their functionality statement. This may be because I am not in-field, but a definition would be useful; my reading of this statement does not jibe with the type of model calibration I am used to in Bayesian model evaluation (see the conceptual sketch after this list).
  6. I also would appreciate a statement of contribution of the authors. I’m very happy that all authors are listed as co-first authors, but I think with this many, a statement of who-did-what would be useful. If all authors equally contributed to writing/coding/testing/deploying/etc, great! Having that written down would only make it more clear that this was a highly collaborative effort.
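For readers outside the field, the calibration referred to in point 5 is, broadly, Bayesian inference of model parameters from observed data: sample candidate parameter sets, run the simulator for each, weight the samples by the likelihood of the observations, and summarize the resulting posterior. The snippet below is a minimal, self-contained sketch of that idea with a toy forward model; it is illustrative only, does not use the GrainLearning API, and all names (`simulate`, `k`, `obs`, etc.) are hypothetical.

```python
# Conceptual sketch of Bayesian parameter calibration: weight sampled
# parameter sets by how well their simulated output matches observations.
# This is generic illustrative code, not the GrainLearning API.
import numpy as np

rng = np.random.default_rng(0)

def simulate(k):
    """Toy forward model: a 'simulation' whose output depends on parameter k."""
    t = np.linspace(0.0, 1.0, 20)
    return k * t**2

# Synthetic observation generated with a "true" parameter plus noise.
k_true, sigma = 2.5, 0.05
obs = simulate(k_true) + rng.normal(0.0, sigma, size=20)

# 1. Sample candidate parameters from a prior range.
k_samples = rng.uniform(0.0, 5.0, size=2000)

# 2. Run the forward model and compute a Gaussian log-likelihood per sample.
residuals = np.array([simulate(k) - obs for k in k_samples])
log_like = -0.5 * np.sum((residuals / sigma) ** 2, axis=1)

# 3. Convert log-likelihoods into normalized importance weights (the posterior).
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

# 4. Posterior summary: weighted mean and standard deviation of the parameter.
k_mean = np.sum(weights * k_samples)
k_std = np.sqrt(np.sum(weights * (k_samples - k_mean) ** 2))
print(f"posterior k = {k_mean:.3f} +/- {k_std:.3f} (true value {k_true})")
```

In practice, packages like GrainLearning iterate this kind of weighting and re-sampling over successive batches of simulations; the sketch above shows only a single pass.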

@gchure Thanks a lot for your comments. We will work on the text and come back to you.

@gchure Could you give us an example of how, in a JOSS paper, we could explicitly mention the different contributions of the different authors? This is not something we are used to doing in articles in our scientific domain ☺️

Of course. I recently requested this for Taweret, another package in review at JOSS. I have a paper (though not in JOSS) with 5 co-first authors, and I felt that the "Author Contribution" section accurately conveyed why we split authorship equally. It can be as simple as "X, Y, and Z developed the software. A, B, X, and Y developed statistical models. X and Z wrote test suites. X wrote the documentation." etc. My interest in this is mainly to make it really clear to the reader that everyone contributed substantially to the software, which I have found to be really well engineered.

Dear @gchure, we have worked on the text and added an Author Contributions section.
You can find the latest version, generated by the action, here.
Let us know what you think and thanks again for your comments.

Looks great. Just a few small comments:

i) Can you reference Figure 1 in the text? Something like "The core functionality of GrainLearning is diagrammed in Figure 1." would work at line 58.

ii) The figure caption lists steps 1, 2, 3... but those aren't shown in the figure, which makes it hard to see what each step refers to. Can you add them as sub-labels within the figure?

With those two small points addressed, I can finish off my review.

Thanks, those are indeed nice additions 🚀
Check the latest version in this artifact.

Great! Looks perfect. Thanks for the rapid iteration!

As a minor point, you'll probably want to switch out the * and ¶ in the author affiliations. I discovered empirically that this will break your paper upon acceptance.