LSSTScienceCollaborations/ObservingStrategy

Comments from David Kirkby (SAC) on Chapter 1, incl: Table of Metrics


My only high-level comment on this chapter is that it would be useful to have an executive summary of findings so far, accompanied by a summary table of the currently implemented metrics.

Minor editorial corrections / suggestions:
p.12: "...is not slaved to..." -> "is not limited to..."
p.13: "...for each projects, ..." -> "...for each project, ..."
p.13: "...many of them, we hope, ...": "hope" seems out of place for ideas already written down.
p.13: "...towards an..." -> "...towards a..."
p.14: "...expected performance of the..." -> "...expected performance using the..." (otherwise, I don't understand this sentence)
p.15: "Will the case science..." -> "Will the science case..." (this typo is repeated in subsequent sections).
p.18: Opsim 4 v1.x is confusing: why not just Opsim v4.x?
p.19: "a FoM will be a precision...": you argued earlier that it should be dimensional.

Thanks, David! I agree that a table of metrics would be a very good addition to the paper. We'd talked about this before but balked at the idea of adding a layer on top of the sims_maf_contrib repo that did not auto-update. Perhaps we'd better just bite the bullet and agree to curate a summary table.

@rhiannonlynne @yoachim What do you think? What's the cleanest way to link the available metrics into the observing strategy white paper?

The table could be auto-generated if the code for each metric had an accompanying simple yaml (or similar) file, or you could add a virtual function get_summary_table_row() to the metric API ;-)
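To make the second idea concrete, here is a minimal sketch of what such a hook might look like; get_summary_table_row() is hypothetical and not part of the real MAF metric API:

class BaseMetric:
    """Illustrative stand-in for the MAF base metric class."""

    def get_summary_table_row(self):
        # Subclasses override this to describe themselves for the summary table.
        raise NotImplementedError

class MeanMetric(BaseMetric):
    def get_summary_table_row(self):
        return {"name": "MeanMetric",
                "description": "Calculate the mean of a simData column slice."}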

No need to make extra yaml files, we already have the docstrings!

import lsst.sims.maf.metrics as metrics

# The docstring already carries the one-line summary we need:
print(metrics.MeanMetric.__doc__)

This outputs "Calculate the mean of a simData column slice." Now we get to see how well we've done writing docstrings as we went.

I'd say the MAF metrics should go in one table and any sims_contrib ones go in a second.

As I feared, we are a little under-documented. But it shouldn't be too painful to go through and add the missing docstrings. We can also use pandas.DataFrame.to_latex to generate the tables.
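For example, something along these lines turns (name, description) rows into a LaTeX table; the rows here are hard-coded for illustration, but in practice they would come from the docstrings:

import pandas as pd

# Illustrative rows only; the real table would be built from the metric docstrings.
table = pd.DataFrame([
    {"Metric": "MeanMetric",
     "Description": "Calculate the mean of a simData column slice."},
    {"Metric": "CountMetric",
     "Description": "Count the number of visits (illustrative description)."},
])
print(table.to_latex(index=False))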

My only comment is that this will be long, as the docstrings should be extensive and include parameter settings. The MAF core metrics make up most of the metrics available. Putting a static table of them into the paper, rather than linking to a webpage, means the paper will only capture the metrics available at the time of writing, so there should be a clear to-do for updating this table as the paper gets updated.
Are there still metrics in the community (and in the white paper) which have not been pushed to sims_maf_contrib? (this may be an interesting way to find out).

@rhiannonlynne If the docstrings are in numpy format, we should be able to just take the first line of each one... @yoachim would you like to have a go at making the two tables you refer to, sims_maf and sims_maf_contrib? To address Lynne's point about keeping these tables up to date, I guess we need a script that lives in the white paper folder and is called via make metric_tables ... Alternatively, you could have the sims continuous-integration doc system output two LaTeX tables as part of its overnight build, and have make metric_tables simply download them...
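A sketch of what that make metric_tables script might look like, assuming the metric classes can be discovered with inspect and that sims_maf_contrib imports as mafContrib (both assumptions on my part):

# make_metric_tables.py -- hypothetical script driven by `make metric_tables`
import inspect
import pandas as pd

def write_metric_table(module, outfile):
    """Write a LaTeX table of the metric classes found in a module."""
    rows = []
    for name, cls in inspect.getmembers(module, inspect.isclass):
        doc = (cls.__doc__ or "").strip()
        # In numpy docstring format, the first line is the one-line summary.
        rows.append({"Metric": name, "Description": doc.split("\n")[0]})
    with open(outfile, "w") as f:
        f.write(pd.DataFrame(rows).to_latex(index=False))

if __name__ == "__main__":
    import lsst.sims.maf.metrics as maf_metrics
    import mafContrib as contrib_metrics  # assumed package name
    write_metric_table(maf_metrics, "table_maf_metrics.tex")
    write_metric_table(contrib_metrics, "table_contrib_metrics.tex")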

It would be fantastic if these tables could hyperlink the metric name to the relevant python file in the GitHub repo. It would also be good if the latex references to metrics pointed to this hyperlinked table - but I can take care of that once we have the tables. What do you think?
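A rough sketch of how each name cell could carry the link, assuming the GitHub URL can be derived from the class's source file path (the GITHUB_ROOT and path-splitting below are guesses that would need adjusting to the real repo layout):

import inspect

GITHUB_ROOT = "https://github.com/lsst/sims_maf/blob/master/"  # assumed URL prefix

def linked_name(cls):
    """Return a LaTeX \\href linking a metric class name to its source on GitHub."""
    srcfile = inspect.getsourcefile(cls)  # assumes a pure-Python class with a source file
    relpath = srcfile.split("sims_maf/")[-1]  # guess at the repo-relative path
    return r"\href{%s%s}{%s}" % (GITHUB_ROOT, relpath, cls.__name__)

(The white paper would need \usepackage{hyperref} for \href to work.)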

OK, I've added a script on a new branch. I'm not sure it'll actually work when called from the Makefile, due to all the stack $PATH requirements.

I'll let you decide how best to deal with text/tables overrunning lines/pages :)

I gave up trying to make the hyperlinks when I ran into objects with no __file__ attribute; I didn't know that could happen.
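For what it's worth, that failure mode can be guarded against: built-in and C-extension modules may lack __file__, so a fallback like this sketch just leaves those names unlinked:

import sys

def source_file_for(cls):
    """Return the defining module's file path, or None if it has no __file__."""
    module = sys.modules.get(cls.__module__)
    # Built-in and C-extension modules may have no __file__ attribute.
    return getattr(module, "__file__", None)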