Stanford CoreNLP provides a set of natural language analysis tools written in Java. It can take raw human language text input and give the base forms of words, their parts of speech, and whether they are names of companies, people, etc.; normalize and interpret dates, times, and numeric quantities; mark up the structure of sentences in terms of phrases or word dependencies; and indicate which noun phrases refer to the same entities. It was originally developed for English, but now also provides varying levels of support for (Modern Standard) Arabic, (mainland) Chinese, French, German, and Spanish.

Stanford CoreNLP is an integrated framework, which makes it very easy to apply a suite of language analysis tools to a piece of text. Starting from plain text, you can run all the tools with just two lines of code. Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications. Stanford CoreNLP is a set of stable and well-tested natural language processing tools, widely used by various groups in academia, industry, and government. The tools variously use rule-based, probabilistic machine learning, and deep learning components.
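As a rough sketch of what those two lines look like in practice (the class name and example sentence here are illustrative, and the CoreNLP jar plus a models jar must be on the CLASSPATH):

```java
import java.util.Properties;

import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class PipelineDemo {
  public static void main(String[] args) {
    // Choose which annotators to run, in order.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, dcoref");
    // The two essential lines: build a pipeline, then annotate the raw text.
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation document = new Annotation("Stanford University is located in California.");
    pipeline.annotate(document);
    // The Annotation now holds tokens, tags, parses, and coreference chains.
  }
}
```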
The Stanford CoreNLP code is written in Java and licensed under the GNU General Public License (v3 or later). Note that this is the full GPL, which allows many free uses, but not its use in proprietary software that you distribute to others.
Several times a year we distribute a new version of the software, which corresponds to a stable commit.
During the time between releases, one can always use the latest, under-development version of our code.
Here are some helpful instructions to use the latest code:
- Make sure you have Ant installed, details here: http://ant.apache.org/
- Compile the code with this command:

```shell
cd CoreNLP ; ant
```

- Then run this command to build a jar with the latest version of the code:

```shell
cd CoreNLP/classes ; jar -cf ../stanford-corenlp.jar edu
```

- This will create a new jar called stanford-corenlp.jar in the CoreNLP folder, which contains the latest code.
- The dependencies that work with the latest code are in CoreNLP/lib and CoreNLP/liblocal, so make sure to include those in your CLASSPATH.
- When using the latest version of the code, make sure to download the latest versions of the corenlp-models and english-models jars, and include them in your CLASSPATH. If you are processing languages other than English, also download the latest version of the models jar for each language you are interested in.
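Putting the pieces above together, a CLASSPATH might be assembled like this (the checkout path is a placeholder to adjust for your machine; this is a sketch, not the only valid layout):

```shell
# Illustrative sketch: point CORENLP_HOME at your CoreNLP checkout, then
# include the freshly built jar, the bundled dependencies in lib/ and
# liblocal/, and (via the trailing wildcard) any models jars you have
# dropped into the checkout directory itself.
CORENLP_HOME=/path/to/CoreNLP
export CLASSPATH="$CORENLP_HOME/stanford-corenlp.jar:$CORENLP_HOME/lib/*:$CORENLP_HOME/liblocal/*:$CORENLP_HOME/*"
```

The `*` entries are left unexpanded for the `java` launcher, which expands classpath wildcards itself.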
- Make sure you have Maven installed, details here: https://maven.apache.org/
- To get the tests to pass, you will need to install the latest version of the Spanish models jar. You can download the jar from here. Then run this command (replace "/location/of" with the path on your machine):

```shell
mvn install:install-file -Dfile=/location/of/stanford-spanish-corenlp-models-current.jar -DgroupId=edu.stanford.nlp -DartifactId=stanford-corenlp -Dversion=3.7.0 -Dclassifier=models-spanish -Dpackaging=jar
```
- If you run this command in the CoreNLP directory:

```shell
mvn package
```

it should run the tests and build this jar file: CoreNLP/target/stanford-corenlp-3.7.0.jar
- You can adapt the command above for installing the Spanish models to install the latest versions of the other models jars into Maven, if you want to run Stanford CoreNLP as part of a Maven project: just change the location of the file and the classifier (e.g. use "/location/of/stanford-french-corenlp-models-current.jar" and "models-french" instead). To use the standard models jar, set classifier=models.
You can find releases of Stanford CoreNLP on Maven Central.
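For a Maven project, depending on a released version looks roughly like the following pom.xml fragment (the models artifact is pulled in via the "models" classifier; swap in the release version you want):

```xml
<!-- Sketch of a Maven dependency on a released CoreNLP version. -->
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.7.0</version>
</dependency>
<dependency>
  <groupId>edu.stanford.nlp</groupId>
  <artifactId>stanford-corenlp</artifactId>
  <version>3.7.0</version>
  <classifier>models</classifier>
</dependency>
```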
You can find more explanation and documentation on the Stanford CoreNLP homepage.
The most recent models associated with the code in the HEAD of this repository can be found here.
Some of the larger (English) models -- like the shift-reduce parser and WikiDict -- are not distributed with our default models jar. The most recent version of these models can be found here.
We distribute resources for other languages as well, including Arabic models, Chinese models, French models, German models, and Spanish models.
For information about making contributions to Stanford CoreNLP, see the file CONTRIBUTING.md.
Questions about CoreNLP can either be posted on StackOverflow with the tag stanford-nlp, or on the mailing lists.