vilcans/screenplain

RML output & testing

charneykaye opened this issue · 6 comments

My favorite idea so far is mocking out some part of ReportLab such that it exports its RML to a spy.

Otherwise perhaps just mocking out ReportLab entirely.
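To make the "spy" idea concrete, here is a minimal sketch using `unittest.mock`. The `render` function and its writer interface are hypothetical stand-ins (screenplain's real PDF path goes through Platypus and looks different); the point is only that a `MagicMock` records every call, so a test can assert on the emitted markup without producing a real PDF.

```python
from unittest import mock

def render(document, writer):
    """Hypothetical render function: pushes markup chunks to a writer.
    A stand-in for the real ReportLab-facing code, for illustration only."""
    for paragraph in document:
        writer.write('<para>%s</para>' % paragraph)

# The spy records every call made to it.
spy = mock.MagicMock()
render(['INT. HOUSE - DAY', 'Hello.'], spy)

# Inspect what the code under test tried to emit, without any PDF backend.
emitted = [c.args[0] for c in spy.write.call_args_list]
assert emitted == ['<para>INT. HOUSE - DAY</para>', '<para>Hello.</para>']
```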

I was thinking about simply checking whether the resulting PDF is identical to a "known good" version, i.e., a functional test rather than a unit test. In my experience, this works well for regression tests. We could use the RML format instead of PDF for this, as the diffs should be more readable.
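A golden-file regression test along those lines could look roughly like this. `generate_rml` is a hypothetical stand-in for the real Fountain-to-RML conversion; the useful part is the comparison helper, which uses `difflib` to produce the readable diff that motivates preferring RML over raw PDF bytes.

```python
import difflib

def generate_rml(screenplay):
    """Hypothetical stand-in for the Fountain-to-RML conversion."""
    body = '\n'.join('<para>%s</para>' % line for line in screenplay)
    return '<document>\n%s\n</document>' % body

def diff_against_known_good(actual, known_good):
    """Return None if output matches the stored known-good version,
    otherwise a unified diff that is easy to read in a test failure."""
    if actual == known_good:
        return None
    return '\n'.join(difflib.unified_diff(
        known_good.splitlines(), actual.splitlines(),
        fromfile='known_good.rml', tofile='actual.rml', lineterm=''))

# In a real test the known-good file would be checked into the repo;
# here it is generated inline to keep the sketch self-contained.
known_good = generate_rml(['INT. HOUSE - DAY', 'Hello.'])
assert diff_against_known_good(
    generate_rml(['INT. HOUSE - DAY', 'Hello.']), known_good) is None
```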

Mocking can work for unit tests, but sometimes it's not worth it because of the complexity it adds to the code.

I think I have a way of doing this that preserves all of the intricate Platypus usage you originally coded, but outputs RML, which ReportLab then imports. IMO the added complexity of using RML is a net gain: it makes the Fountain-to-RML path easier to manipulate and test robustly, while relying on ReportLab to faithfully translate that RML to PDF.

Well @vilcans (though please feel free to check out the branch I cut featuring some RML exporting), unfortunately my project has ground to what appears to be a permanent block. Please confirm or challenge my assumptions:

  1. There is no path to export a Platypus document template as RML (even if for no other purpose than writing functional tests for the XML formatting path)
  2. Even with an RML exporting path, the rml2pdf utility is deprecated along with the rlextra pip repository.
  3. The trml2pdf utility is too brittle a dependency to introduce.

Hiya @vilcans, please advise me on this and #21 for when I soon have time to put into Screenplain. Otherwise I will look at #10. Cheers!

I want to avoid adding complexity to the project, so if converting from RML to PDF is brittle or requires new dependencies, I think it's better to keep the current behavior of outputting PDF directly. For testing PDF output, what do you think about simply adding PDF test cases? If such a test fails, the diff will probably be hard to read, but if you just open the failed document in a PDF reader, you can easily see what (if any) the regression is.

I agree. The RML approach is too brittle, and it's a totally separate dependency that adds a lot of complexity to the project. I will close this ticket, and focus on PDF tests.