askplatypus/syntaxnet-api

http://syntaxnet.askplatyp.us - not working

Opened this issue · 7 comments

ndvbd commented
  1. The README refers to "http://syntaxnet.askplatyp.us" - this link doesn't work.
  2. The .py code also refers to "http://syntaxnet.askplatyp.us" - are you calling an external server?
Tpt commented

Sorry, I have shut down "http://syntaxnet.askplatyp.us" because we do not use it anymore. spaCy and StanfordNLP have performance similar to SyntaxNet's. I am going to tag this repository as "obsolete" to make clear that I am not maintaining it anymore. Feel free to create a fork if you are interested in maintaining it; I could link to it from the README.

The Python file is not calling this server; the URL is only added to the Swagger documentation so that it can build absolute URIs.
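
For context, Swagger builds those absolute URIs from a host field in the spec, so the URL is pure metadata. A minimal sketch (the field names come from the Swagger 2.0 spec; the values here are only illustrative, not this repository's actual code):

```python
# Minimal Swagger 2.0 spec sketch: "host" is documentation metadata.
# Clients reading the spec combine host + basePath + a path to build
# absolute URIs; the server never sends a request to that host itself.
SWAGGER_SPEC = {
    "swagger": "2.0",
    "info": {"title": "syntaxnet-api", "version": "1.0"},
    "host": "syntaxnet.askplatyp.us",  # used only to build URIs in the docs
    "basePath": "/",
    "schemes": ["http"],
    "paths": {},  # endpoint definitions would go here
}
```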

ndvbd commented

I thought SyntaxNet had better results than CoreNLP and spaCy. I also did some manual checks and indeed got better results from SyntaxNet. Do you have a different impression?

Tpt commented

Indeed, to my knowledge SyntaxNet has better results than CoreNLP. But the new StanfordNLP seems to beat SyntaxNet and is much easier to run.
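
To give an idea of "easier to run", the whole StanfordNLP pipeline is a few lines. A rough sketch of its documented API (the download step and word attributes are as I remember them, so treat this as illustrative):

```python
import stanfordnlp  # pip install stanfordnlp

stanfordnlp.download('fr')             # fetch the French models once
nlp = stanfordnlp.Pipeline(lang='fr')  # tokenize, tag, lemmatize, parse
doc = nlp("Le chat dort sur le canapé.")
for sentence in doc.sentences:
    for word in sentence.words:
        # governor is the 1-based index of the head token (0 = root)
        print(word.text, word.governor, word.dependency_relation)
```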

The benchmarks for SyntaxNet are here: https://github.com/tensorflow/models/blob/master/research/syntaxnet/g3doc/conll2017/README.md
And the equivalent benchmarks for StanfordNLP (the "Stanford" participant of the CoNLL 2018 shared task): https://universaldependencies.org/conll18/results-uas.html#per-treebank-uas-f1

About spaCy, I have not compared everything. For French on the Sequoia treebank,
spaCy gives UAS 89.08 / LAS 86.41 vs. UAS 87.90 / LAS 85.74 for SyntaxNet.
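
For reference on what those numbers measure: UAS is the share of tokens whose predicted head is correct, and LAS additionally requires the dependency label to match. A minimal sketch:

```python
def uas_las(gold, pred):
    """Compute UAS/LAS in percent from per-token (head, label) pairs.

    gold and pred are equal-length lists of (head_index, deprel) tuples,
    one per token. UAS counts tokens whose predicted head matches the
    gold head; LAS additionally requires the dependency label to match.
    """
    assert len(gold) == len(pred) and gold
    uas_hits = sum(g[0] == p[0] for g, p in zip(gold, pred))
    las_hits = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return 100.0 * uas_hits / n, 100.0 * las_hits / n
```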

ndvbd commented

Oh, are you saying there is a difference between CoreNLP and StanfordNLP? I thought they were part of the same thing/team.

I see the first two links you've posted - can you point to something that is directly comparable? I mean a test of StanfordNLP and SyntaxNet that is run on the same test data?

Tpt commented

> Oh, are you saying there is a difference between CoreNLP and StanfordNLP?

Yes, they are from the same team, but StanfordNLP is a from-scratch reimplementation of part of CoreNLP using PyTorch.

> I see the first two links you've posted - can you point to something that is directly comparable?

Yes, I believe the CoNLL 2017 and 2018 shared tasks use the same datasets, i.e. the Universal Dependencies v2 treebanks. For example, StanfordNLP gets UAS 90.59 and LAS 88.78 on fr_sequoia, while SyntaxNet gets UAS 87.90 and LAS 85.74 on French-Sequoia.
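
The treebanks themselves are plain CoNLL-U text, so extracting the gold (head, deprel) pairs that those scores are computed over is straightforward. A minimal sketch following the CoNLL-U column layout (the path is whichever treebank file you download):

```python
def read_conllu_deps(path):
    """Yield one list of (head, deprel) pairs per sentence of a CoNLL-U file.

    CoNLL-U columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL,
    DEPS, MISC. Comment lines start with '#'; multiword-token ranges
    (e.g. '1-2') and empty nodes (e.g. '1.1') carry no tree head.
    """
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                  # blank line ends a sentence
                if sentence:
                    yield sentence
                    sentence = []
            elif line.startswith("#"):    # sentence-level metadata
                continue
            else:
                cols = line.split("\t")
                if "-" in cols[0] or "." in cols[0]:
                    continue              # skip MWT ranges and empty nodes
                sentence.append((int(cols[6]), cols[7]))
    if sentence:
        yield sentence
```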

ndvbd commented

Okay, so if I'm reading this right:
Stanford - UAS of 82.99 on en_lines
SyntaxNet - UAS of 82.43 on English-LinES

I now understand. Thank you.

Tpt commented

Yes!