news-please is an open-source, easy-to-use news crawler that extracts structured information from almost any news website. It can recursively follow internal hyperlinks and read RSS feeds to fetch both the most recent and old, archived articles. You only need to provide the root URL of a news website to crawl it completely. news-please combines the power of multiple state-of-the-art libraries and tools, such as scrapy, Newspaper, and readability. news-please also features a library mode, which allows Python developers to use the crawling and extraction functionality within their own programs. Moreover, news-please allows you to conveniently crawl and extract articles from commoncrawl.org.
If you like news-please and would like to contribute to it, please have a look at our list of issues that need help. Of course, we are always looking forward to pull requests containing bug fixes, improvements, or your own ideas.
06/01/2018: If you're interested in news analysis, you might also want to check out our new project, Giveme5W1H - a tool that extracts phrases answering the journalistic five W and one H questions to describe an article's main event, i.e., who did what, when, where, why, and how.
news-please extracts the following attributes from news articles. Also, have a look at an example JSON file extracted by news-please.
- headline
- lead paragraph
- main text
- main image
- name(s) of author(s)
- publication date
- language
- works out of the box: install with pip, add URLs of your pages, run :-)
- run news-please conveniently with the CLI
- use it as a library within your own software
- extract articles from the news archive of commoncrawl.org
- stores extracted results in JSON files or Elasticsearch (you can implement other storages easily)
- simple but extensive configuration (if you want to tweak the results)
- revisions: crawl articles multiple times and track changes

news-please supports three use cases, which are explained in more detail in the following.
- crawl and extract information given a list of article URLs
- use news-please within your own Python code
- commoncrawl.org provides an extensive, free-to-use archive of news articles from small and major publishers worldwide
- news-please enables users to conveniently download and extract articles from commoncrawl.org
- you can optionally define filter criteria, such as news publisher(s) or the period within which articles were published
- clone the news-please repository, install the awscli tool, adapt the config section in newsplease/examples/commoncrawl.py, and execute:

```bash
python3 -m newsplease.examples.commoncrawl
```
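The config section consists of module-level variables at the top of newsplease/examples/commoncrawl.py. The names and defaults below are an illustrative sketch and may differ between versions, so check the file itself:

```python
# Illustrative sketch of the config section in newsplease/examples/commoncrawl.py.
# Variable names and defaults are assumptions; verify them against your checkout.
import datetime

my_local_download_dir_warc = './cc_download_warc/'         # where WARC files are cached
my_local_download_dir_article = './cc_download_articles/'  # where extracted articles go
my_filter_valid_hosts = ['example.com']                    # publisher filter; [] disables it
my_filter_start_date = datetime.datetime(2016, 1, 1)       # earliest publication date
my_filter_end_date = datetime.datetime(2016, 12, 31)       # latest publication date
my_filter_strict_date = True                               # skip articles without a reliable date
my_delete_warc_after_extraction = True                     # free disk space after processing
```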
It's super easy, we promise!
news-please runs on Python 3.5+.
```console
$ pip3 install news-please
```
Some folks from the great conda-forge community are working on including news-please in conda-forge; we'll update here once news-please can be installed using conda.
You can access the core functionality of news-please, i.e., extraction of semi-structured information from one or more news articles, in your own code by using news-please in library mode. If you want to use news-please's full website extraction (given only the root URL) or continuous crawling mode (using RSS), you'll need to use the CLI mode.
```python
from newsplease import NewsPlease
article = NewsPlease.from_url('https://www.nytimes.com/2017/02/23/us/politics/cpac-stephen-bannon-reince-priebus.html?hp')
print(article.title)
```
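Besides the title, the returned article object exposes the other attributes listed above as plain fields. A minimal sketch, assuming the attribute names mirror the keys of the exported JSON file (e.g., authors, date_publish, maintext):

```python
# Attribute names are assumed to mirror the keys of the exported JSON file;
# verify them against the example JSON linked below.
print(article.authors)         # list of author name(s)
print(article.date_publish)    # publication date
print(article.maintext[:200])  # beginning of the main text (may be None if extraction failed)
```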
A sample of an extracted article can be found here (as a JSON file).
If you want to crawl multiple articles at a time, optionally with a timeout in seconds:

```python
NewsPlease.from_urls([url1, url2, ...], timeout=6)
```
or if you have a file containing all URLs (each line containing a single URL):

```python
NewsPlease.from_file(path)
```
or if you have raw HTML data (you can also provide the original URL to increase the accuracy of extracting the publishing date):

```python
NewsPlease.from_html(html, url=None)
```
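For example, a minimal sketch that fetches the HTML yourself and passes it to news-please (the URL is illustrative):

```python
import urllib.request

url = 'https://www.example.com/some-article.html'  # illustrative URL
html = urllib.request.urlopen(url).read().decode('utf-8')

# Passing the original URL is optional, but improves date extraction.
article = NewsPlease.from_html(html, url=url)
```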
or if you have a WARC file (also check out our commoncrawl workflow, which provides convenient methods to filter commoncrawl's archive for specific news outlets and dates):

```python
NewsPlease.from_warc(warc_record)
```
In library mode, news-please will attempt to download and extract information from each URL. The previously described functions are blocking, i.e., will return once news-please has attempted all URLs. The resulting list contains all successfully extracted articles.
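Note that from_warc takes a single WARC record rather than a file path. A minimal sketch using the warcio library (the file name is illustrative, and the use of warcio records here is an assumption worth verifying against your version of news-please):

```python
from warcio.archiveiterator import ArchiveIterator
from newsplease import NewsPlease

# 'news.warc.gz' is an illustrative file name.
with open('news.warc.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != 'response':
            continue  # only HTTP response records contain article HTML
        try:
            print(NewsPlease.from_warc(record).title)
        except Exception:
            pass  # skip records that are not extractable news articles
```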
To run news-please in CLI mode, execute:

```console
$ news-please
```

news-please will then start crawling a few example pages. To terminate the process, press `CTRL+C`; news-please will then shut down within 5-60 seconds. You can also press `CTRL+C` twice, which will immediately kill the process (not recommended, though).
The results are stored by default in JSON files in the `data` folder. In the default configuration, news-please also stores the original HTML files.
Most likely, you will not want to crawl the websites provided in our example configuration. Simply head over to the `sitelist.hjson` file and add the root URLs of the news outlets' web pages of your choice. news-please can also extract the most recent events from the GDELT project, see here.
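The exact schema of `sitelist.hjson` is documented in the example file shipped with news-please; as an illustrative sketch (the URL is a placeholder), an entry can be as simple as:

```hjson
{
  base_urls: [
    {
      # one entry per news outlet; the URL below is a placeholder
      url: "https://www.example.com/"
    }
  ]
}
```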
news-please also supports export to Elasticsearch. Using Elasticsearch will also enable the versioning feature. First, enable it in the `config.cfg` in the config directory, which by default is `~/news-please/config` but can be changed to a custom location with the `-c` parameter. In case the directory does not exist, a default directory will be created at the specified location.
```ini
[Scrapy]

ITEM_PIPELINES = {
    'newsplease.pipeline.pipelines.ArticleMasterExtractor':100,
    'newsplease.pipeline.pipelines.ElasticsearchStorage':350
}
```
That's it! Except if your Elasticsearch database is not located at `http://localhost:9200`, uses a different username/password, or requires CA-certificate authentication. In these cases, you will also need to change the following settings.
```ini
[Elasticsearch]

host = localhost
port = 9200
...

# Credentials used for authentication (supports CA-certificates):
use_ca_certificates = False    # True if authentication via CA-certificates is needed
ca_cert_path = '/path/to/cacert.pem'
client_cert_path = '/path/to/client_cert.pem'
client_key_path = '/path/to/client_key.pem'
username = 'root'
secret = 'password'
```
We have collected a bunch of useful information for both users and developers. As a user, you will most likely only deal with two files: `sitelist.hjson` (to define the sites to be crawled) and `config.cfg` (probably only rarely, in case you want to tweak the configuration).
You can find more information on usage and development in our wiki! Before contacting us, please check out the wiki. If you still have questions on how to use news-please, please create a new issue on GitHub. Please understand that we are not able to provide individual support via email. We think that help is more valuable if it is shared publicly so that more people can benefit from it.
For bug reports, we ask you to use the Bug report template. Make sure you're using the latest version of news-please, since we cannot give support for older versions. Unfortunately, we cannot give support for issues or questions sent by email.
If news-please is useful to you, please consider donating. Your donation will directly support the work on news-please (coffee!).
This project would not have been possible without the contributions of the following students (ordered alphabetically):
- Moritz Bock
- Michael Fried
- Jonathan Hassler
- Markus Klatt
- Kevin Kress
- Sören Lachnit
- Marvin Pafla
- Franziska Schlor
- Matt Sharinghousen
- Claudio Spener
- Moritz Steinmaier
We also thank all other contributors, whom you can find on the contributors page!
If you are using news-please, please cite our paper (ResearchGate, Mendeley):
```bibtex
@InProceedings{Hamborg2017,
  author    = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela},
  title     = {news-please: A Generic News Crawler and Extractor},
  year      = {2017},
  booktitle = {Proceedings of the 15th International Symposium of Information Science},
  location  = {Berlin},
  editor    = {Gaede, Maria and Trkulja, Violeta and Petras, Vivien},
  pages     = {218--223},
  month     = {March}
}
```
You can find more information on this and other news projects on our website.
Do you want to contribute? Great, we are always happy about any support for this project! We are particularly looking for pull requests that fix bugs (you can find open issues under the issues tab), but we also welcome pull requests that contribute your own ideas. If you plan to submit a pull request adding new functionality, we suggest opening an issue first to briefly discuss how it should be implemented to best fit news-please's architecture.
Please note that we usually do not have enough resources to implement features requested by users. Instead, we recommend implementing such features yourself and sending us a pull request.
By contributing to this project, you agree that your contributions will be licensed under the project's license (see below).
Licensed under the Apache License, Version 2.0 (the "License"); you may not use news-please except in compliance with the License. A copy of the License is included in the project, see the file LICENSE.txt.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The news-please logo is courtesy of Mario Hamborg.
Copyright 2016-2019 The news-please team