Failed to read proto message: proto: bad byte length -402238888
NivedhaSenthil opened this issue · 2 comments
Expected behavior
Report gets generated seamlessly for all test suites.
Actual behavior
Getting the below error for large test suites:
2019/03/25 11:55:14 Failed to read proto message: proto: bad byte length -402238888
2019/03/25 11:55:14 Failed to read proto message: proto: bad byte length -402238888
2019/03/25 11:55:14 Failed to read proto message: proto: bad byte length -402238888
2019/03/25 11:55:14 Failed to read proto message: proto: bad byte length -402238888
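For context, here is one plausible way the negative byte length can arise, shown as a minimal standalone sketch rather than the actual gauge/html-report code path: if the serialized report payload (for example, thousands of base64-encoded screenshots carried in one message) grows to roughly 3.9 GB and its length passes through a signed 32-bit conversion anywhere on the writer or reader side, it wraps around to exactly the value in the log above. The payload size below is an assumption for illustration.

package main

import "fmt"

func main() {
	// Illustrative payload size only: ~3.9 GB of report data, e.g. thousands
	// of base64-encoded screenshots carried in a single suite-result message.
	payload := uint64(3_892_728_408)

	// If that length is ever squeezed through a signed 32-bit value,
	// it wraps around and turns negative.
	asInt32 := int32(payload)

	fmt.Printf("actual payload size: %d bytes\n", payload)
	fmt.Printf("seen through int32:  %d\n", asInt32) // prints -402238888
}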
Steps to reproduce
Run the attached sample project with gauge run.
sample-java.zip
Gauge version
Gauge version: 1.0.4
Commit Hash: 3a9a647
Plugins
html-report (4.0.6)
java (0.7.1)
screenshot (0.0.1)
There are two problems noted while analysing the above test suite, which has a huge number of test failures.
- Time taken to execute the suite increases when screenshot_on_failure is set to true, as the overhead of capturing a screenshot is added to every failure (see the snippet after this list).
- When there are a large number of screenshots, the size of the message sent from gauge to html-report grows, causing the Failed to read proto message: error.
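For reference, the behaviour in the first bullet is driven by Gauge's screenshot_on_failure environment property; disabling it (at the cost of losing failure screenshots) avoids both the per-scenario capture overhead and the report-size growth. The file path below assumes a standard project layout; check your Gauge version's docs for the exact key and location.

# env/default/default.properties
screenshot_on_failure = false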
The first problem applies irrespective of inbuilt or custom screenshots; on average the capture adds about 1 second to each failing scenario, so for 1000 failing scenarios the overhead would be around 16.6 minutes, and for 100 failing scenarios around 1.6 minutes. Reports should get generated successfully irrespective of the number of failures with getgauge/gauge#1176.
The second problem, the message size growing with the number of screenshots, should be sorted out when html-report starts consuming streamed messages sent from gauge instead of waiting for the execution to complete, via #221.
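Roughly, the streaming approach means the report is built from per-scenario results as they arrive instead of from one aggregate message at the end of the run; the sketch below only illustrates that shape (the types and channel here are made up for illustration, not the plugin's actual API):

package main

import "fmt"

// Illustrative only: instead of the report plugin receiving one aggregate
// result proto at the end of the run, it consumes per-scenario results as
// they arrive, so no single message has to carry every screenshot.
type scenarioResult struct {
	name       string
	screenshot []byte
}

func consume(results <-chan scenarioResult) {
	for r := range results {
		// Render this scenario's part of the report immediately,
		// keeping memory use and message size bounded.
		fmt.Printf("rendered %s (%d screenshot bytes)\n", r.name, len(r.screenshot))
	}
}

func main() {
	ch := make(chan scenarioResult, 2)
	ch <- scenarioResult{name: "scenario 1", screenshot: make([]byte, 1024)}
	ch <- scenarioResult{name: "scenario 2", screenshot: make([]byte, 2048)}
	close(ch)
	consume(ch)
}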
Note: We considered the option of writing screenshots to files instead of adding them as data URIs with base64-encoded images, but since that too has considerable I/O overhead and issues with sharing reports, we won't be doing that now. Hoping that streaming the data will solve the issue.
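To make the size trade-off in the note concrete, here is a rough sketch of the data-URI embedding (the function name is made up for illustration): base64 encoding adds roughly a third to every screenshot, and all of it travels inside the single proto message and the final HTML.

package main

import (
	"encoding/base64"
	"fmt"
)

// buildDataURI mirrors, conceptually, how a screenshot ends up embedded in
// the report: the raw bytes become a base64 data URI inside the HTML, so each
// screenshot grows to roughly 4/3 of its original size.
func buildDataURI(screenshot []byte) string {
	return "data:image/png;base64," + base64.StdEncoding.EncodeToString(screenshot)
}

func main() {
	fake := make([]byte, 300_000) // a ~300 KB screenshot, for illustration
	uri := buildDataURI(fake)
	fmt.Printf("raw: %d bytes, as data URI: %d bytes\n", len(fake), len(uri))
}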
Closing this as an old issue. This may be fixed in newer releases.
Will reopen if it still needs fixing.