okfn-brasil/serenata-toolbox

Deploy only after unit tests pass

Closed this issue · 11 comments

Right now Travis tries to deploy to PyPI twice, hence breaking the build the second time.
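For context, the double deploy presumably comes from the build matrix: the two TEST_SUITE values mentioned later in this thread create two Travis jobs, and each job runs its own deploy step. A minimal sketch of that shape (the deploy details are assumed, not copied from our config):

env:
  - TEST_SUITE=unit
  - TEST_SUITE=journey
script: coverage run -m unittest discover tests/$TEST_SUITE
# each of the two matrix jobs reaches this step, so the second
# upload of the same version is rejected by PyPI
deploy:
  provider: pypi
  on:
    branch: master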

Maybe adding a condition so the deploy is only done if unit tests pass would be a smart choice.

@cuducos what do you think?

I believe that we need to be more mature before being able to go through a Continuous Deployment approach.

I just ran our coverage against the unit tests on the master branch. Here is the result:
[coverage report screenshot]

We have 48% of coverage. I think this number can be improved before we follow this approach, but it only says that our tests go through these pieces of code when they run; it says nothing about the quality of our tests, as mutation tests do. This number also tells us that 52% of our code base has no unit test coverage. Journey tests take too much time, but they give us the trust to move on.

Question
How would it behave on Pull Requests? Will Travis try to deploy after success?

How would it behave on Pull Requests? Will Travis try to deploy after success?

No. The only time the deploy is executed is when a PR is accepted, because of on: master defined in .travis.yml, unless either the [skip ci] or [ci skip] tag is in the commit message.
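For reference, the relevant part of .travis.yml looks roughly like this (a sketch; credentials omitted):

deploy:
  provider: pypi
  # only deploy builds of the master branch, i.e. merged PRs
  on:
    branch: master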

@lipemorais raised a very important point: we need to be more mature before being able to go through a Continuous Deployment approach. (BTW can you spot why your and our CI reports say 48% but our badge on top of the README.md says 87%?)

Anyway… going for a Continuous Deployment approach with this low level of coverage is indeed craziness. Thank goddess we actually are a bit crazy.

However, I wouldn't be even crazier than that. In other words, my vote would be to try to chain both test suites (unit and journey) in one Travis process (not two) and only roll out a new release if both pass.

In my defense of being crazy: 48% coverage is ludicrous, but the hassle of doing it manually would be worse and would actually not bring any extra security (TBH we will not spend minutes or hours doing manual tests before manually rolling out the release; in real life we will roll out, break stuff, write a fix, repeat).

In addition: the best we can do about the terrible coverage is what @lipemorais is beautifully doing: writing more and better tests for the toolbox ; )

a new release if both pass

In a call earlier today, we agreed that this would be an "ideal" approach, but we haven't figured out how to do so using Travis the way it is today. Hopefully the beautiful Travis build stages will be out of beta pretty soon so we can improve our deployment situation.
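For reference, a sketch of what build stages could give us once they are out of beta (the stage layout here is an assumption, not our current config): jobs within a stage run in parallel, and the deploy stage only starts if every job in the test stage passed.

jobs:
  include:
    - stage: test
      env: TEST_SUITE=unit
      script: coverage run -m unittest discover tests/$TEST_SUITE
    - stage: test
      env: TEST_SUITE=journey
      script: coverage run -m unittest discover tests/$TEST_SUITE
    - stage: deploy
      # skip the test script in this job and only run the deploy step
      script: skip
      deploy:
        provider: pypi
        on:
          branch: master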

We also agreed that it is smarter to release a new version only after journey tests pass, since they also take "real life" situations into consideration, although I'm biased toward a "fail faster" approach and releasing after unit tests pass.

In addition: the best we can do about the terrible coverage is what @lipemorais is beautifully doing: writing more and better tests for the toolbox ; )

+1

But we haven't figured out how to do so using Travis the way it is today.

It's quite simple, I guess. Replace:

script: coverage run -m unittest discover tests/$TEST_SUITE

With:

script: coverage run -m unittest discover

And remove these lines:

env:
  - TEST_SUITE=unit
  - TEST_SUITE=journey

I guess…
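If I read it right, the resulting .travis.yml would boil down to a single job along these lines (a sketch, with the deploy details assumed):

language: python
# one job running both suites; Travis only reaches deploy if the script passes
script: coverage run -m unittest discover
deploy:
  provider: pypi
  on:
    branch: master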

can you spot why your and our CI reports say 48% but our badge on top of the README.md says 87%?

My guess is that this higher number comes from sending not just unit tests but also journey tests to the coverage reports.
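That would explain it: coverage accumulates across suites, so something like this hypothetical illustration (not our actual commands) reports a much higher combined number than the unit suite alone:

coverage run -m unittest discover tests/unit
coverage run --append -m unittest discover tests/journey
coverage report  # combined coverage of both suites, hence a number closer to the badge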

I am very biased toward the build stages approach, but since we are not able to use it right now, I agree with @cuducos that we should merge both executions, unit and journey tests. This will avoid this kind of situation by not making us choose which test suite should trigger the deploy.

The gain we get from running unit and journey tests in parallel is very small, because unit tests are very fast; almost all the time is spent running the journey suite alone.

Alert
The approach of merging unit and journey tests will keep the coverage make-up effect that we already have today.

Closed by #134