summary() throws away a lot of text on some websites
yevgenpapernyk opened this issue · 2 comments
Unfortunately, a lot of the page's text content is not present in the .summary() result.
This happens sometimes: the algorithm has very limited knowledge about blocks of text, image captions, comments, and ads, and it can confuse them, especially when the article text in the HTML is interleaved with other blocks. Unless you can point to a specific bug, or suggest a special rule for this website that wouldn't reduce quality on other websites, there is nothing I can do about it.
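For context, the extraction in question boils down to a call like the following. This is only a minimal sketch: requests is used here just to fetch the page (it is not part of readability-lxml), and the URL is the CNN article discussed below.

```python
import requests
from readability import Document

url = "https://edition.cnn.com/2020/07/24/politics/donald-trump-coronavirus-briefing-jacksonville/index.html"
html = requests.get(url, timeout=30).text

# readability-lxml scores candidate blocks and keeps the top-scoring ones;
# interleaved captions, comment sections and ads can drag a block's score
# down, which is why parts of the article may be missing from the summary.
doc = Document(html)
print(doc.title())
print(doc.summary())  # cleaned HTML of the extracted main content
```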
Same thing as in #150: you could try trafilatura, which builds upon readability-lxml. I just tried it and was able to extract the text you mentioned:
```bash
pip install trafilatura   # or pip3
trafilatura -u "https://edition.cnn.com/2020/07/24/politics/donald-trump-coronavirus-briefing-jacksonville/index.html"
```
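The same extraction can also be run from Python; here is a minimal sketch using trafilatura's fetch_url and extract functions with default settings:

```python
import trafilatura

url = "https://edition.cnn.com/2020/07/24/politics/donald-trump-coronavirus-briefing-jacksonville/index.html"

# Download the page and run trafilatura's extractor with default settings.
downloaded = trafilatura.fetch_url(url)
if downloaded is not None:
    text = trafilatura.extract(downloaded)
    print(text)
```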
If that doesn't work, please provide us with precise examples of the text portions that don't get extracted as you expect; that really helps!