wang-research-lab/deepex

ValueError When Running OIE on New Sentence

Opened this issue · 2 comments

When I tried to run OIE on a new sentence, I got ValueError: y_true takes value in {} and pos_label is not specified: either make y_true take value in {0, 1} or {-1, 1} or pass pos_label explicitly. The only modification I made was replacing the contents of supervised-oie/supervised-oie-benchmark/raw_sentences/test.txt with the single line

Julia owns two cats and one dog.

As in #17

Please help, @cgraywang @jesseLiu2000 @filip-cermak.
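
For reference, the message itself comes from scikit-learn, not from DeepEx: precision_recall_curve raises it whenever the y_true it receives is empty, since an empty label set gives it no way to infer pos_label. A minimal sketch outside DeepEx reproduces the same message (hypothetical snippet, not from the repo; only scikit-learn is assumed):

```python
# Standalone reproduction of the reported error (a sketch, independent of
# DeepEx). An empty y_true has an empty class set, so precision_recall_curve
# cannot infer pos_label and raises the same ValueError benchmark.py hits.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([])    # no gold labels at all -> class set is {}
y_scores = np.array([])
precision_recall_curve(y_true, y_scores)
# ValueError: y_true takes value in {} and pos_label is not specified: ...
```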

(deepex_new) root@c605ffdb427a:/home/zhanwen/deepex_new# bash tasks/OIE_2016.sh
Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, 16-bits training: False
Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: False
08/14/2023 22:24:07 - WARNING - __main__ -   Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, 16-bits training: False
08/14/2023 22:24:07 - WARNING - __main__ -   Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: False
Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
08/14/2023 22:24:07 - WARNING - __main__ -   Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, 16-bits training: False
08/14/2023 22:24:07 - WARNING - __main__ -   Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, 16-bits training: False
Some weights of the model checkpoint at bert-large-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[the same BertForMaskedLM warning is printed once per process and repeats identically for the remaining three ranks]
time spent on loading data augmentation: 0.8261523246765137s
08/14/2023 22:24:13 - INFO - __main__ -   time spent on loading data augmentation: 0.8261523246765137s
create batch examples...: 1it [00:00, 2337.96it/s]
time spent on loading data augmentation: 0.8587894439697266s
08/14/2023 22:24:14 - INFO - __main__ -   time spent on loading data augmentation: 0.8587894439697266s
Generate dataset and results:   0%|                                                                              | 0/4 [00:00<?, ?it/s]
08/14/2023 22:24:14 - INFO - deepex.model.kgm -   ***** Running Generate_triplets *****
08/14/2023 22:24:14 - INFO - deepex.model.kgm -     Num examples = 1
08/14/2023 22:24:14 - INFO - deepex.model.kgm -     Batch size = 16
time spent on loading data augmentation: 0.851825475692749s
08/14/2023 22:24:14 - INFO - __main__ -   time spent on loading data augmentation: 0.851825475692749s
create batch examples...: 1it [00:00, 1945.41it/s]
time spent on loading data augmentation: 0.84853196144104s
08/14/2023 22:24:14 - INFO - __main__ -   time spent on loading data augmentation: 0.84853196144104s
create batch examples...: 1it [00:00, 2023.30it/s]
create batch examples...: 1it [00:00, 2041.02it/s]
Generate batch dataset and results: 0it [00:00, ?it/s]
08/14/2023 22:24:14 - INFO - deepex.model.kgm -   forward time cost 0.2758486270904541s
08/14/2023 22:24:15 - INFO - deepex.model.kgm -   search time cost 0.8128805160522461s
Generate_triplets: 100%|█████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.31s/it]
total producing triplets time: 1.6015987396240234s
08/14/2023 22:24:15 - INFO - __main__ -   total producing triplets time: 1.6015987396240234s
total dump triplets time: 0.0033626556396484375s
08/14/2023 22:24:15 - INFO - __main__ -   total dump triplets time: 0.0033626556396484375s
convert batch examples to features...: 1it [00:01,  1.62s/it]
process feature files...: 1it [00:01,  1.62s/it]
Generate batch dataset and results: 1it [00:01,  1.62s/it]
Generate dataset and results: 100%|██████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.70s/it]
total time: 2.525900363922119s
08/14/2023 22:24:15 - INFO - __main__ -   total time: 2.525900363922119s
Generate_triplets: 100%|█████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.34s/it]
total producing triplets time: 1.468618631362915s
08/14/2023 22:24:15 - INFO - __main__ -   total producing triplets time: 1.468618631362915s
total dump triplets time: 0.0034074783325195312s
08/14/2023 22:24:15 - INFO - __main__ -   total dump triplets time: 0.0034074783325195312s
convert batch examples to features...: 1it [00:01,  1.48s/it]
process feature files...: 1it [00:01,  1.48s/it]
Generate batch dataset and results: 1it [00:01,  1.48s/it]
Generate dataset and results: 100%|██████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00,  2.57it/s]
total time: 2.4193410873413086s
08/14/2023 22:24:15 - INFO - __main__ -   total time: 2.4193410873413086s
Generate_triplets: 100%|█████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.34s/it]
total producing triplets time: 1.5090875625610352s
08/14/2023 22:24:15 - INFO - __main__ -   total producing triplets time: 1.5090875625610352s
total dump triplets time: 0.004492044448852539s
08/14/2023 22:24:15 - INFO - __main__ -   total dump triplets time: 0.004492044448852539s
convert batch examples to features...: 1it [00:01,  1.52s/it]
process feature files...: 1it [00:01,  1.52s/it]
Generate batch dataset and results: 1it [00:01,  1.52s/it]
Generate dataset and results: 100%|██████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00,  2.50it/s]
total time: 2.4521644115448s
08/14/2023 22:24:15 - INFO - __main__ -   total time: 2.4521644115448s
Generate_triplets: 100%|█████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.48s/it]
total producing triplets time: 1.6742017269134521s
08/14/2023 22:24:16 - INFO - __main__ -   total producing triplets time: 1.6742017269134521s
total dump triplets time: 0.0035593509674072266s
08/14/2023 22:24:16 - INFO - __main__ -   total dump triplets time: 0.0035593509674072266s
convert batch examples to features...: 1it [00:01,  1.68s/it]
process feature files...: 1it [00:01,  1.68s/it]
Generate batch dataset and results: 1it [00:01,  1.68s/it]
Generate dataset and results: 100%|██████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00,  2.27it/s]
total time: 2.614199638366699s
08/14/2023 22:24:16 - INFO - __main__ -   total time: 2.614199638366699s
deduplicating batch:   0%|                                                                                       | 0/4 [00:00<?, ?it/s]
output/classified/OIE_2016/P0/0_BertTokenizerFast_NPMentionGenerator_256_0_1/search_res.json
deduplicating doc: 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 201.03it/s]
output/classified/OIE_2016/P0/0_BertTokenizerFast_NPMentionGenerator_256_0_2/search_res.json
deduplicating doc: 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 207.29it/s]
output/classified/OIE_2016/P0/0_BertTokenizerFast_NPMentionGenerator_256_0_3/search_res.json
deduplicating doc: 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 207.98it/s]
output/classified/OIE_2016/P0/0_BertTokenizerFast_NPMentionGenerator_256_0_0/search_res.json
deduplicating doc: 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 206.81it/s]
deduplicating batch: 100%|██████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 177.91it/s]
sorting: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4064.25it/s]
merging doc: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 9822.73it/s]
total triplets: 2560
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.81s/it]
/root/miniconda3/envs/deepex_new/lib/python3.7/site-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.preprocessing.data module is  deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.preprocessing. Anything that cannot be imported from sklearn.preprocessing is now part of the private API.
  warnings.warn(message, FutureWarning)
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
INFO:root:Writing PR curve of DeepEx to eval_data/OIE_2016/deepex.oie_2016.3.dat
Traceback (most recent call last):
  File "benchmark.py", line 231, in <module>
    error_file = args["--error-file"])
  File "benchmark.py", line 101, in compare
    recallMultiplier = ((correctTotal - unmatchedCount)/float(correctTotal)))
  File "benchmark.py", line 125, in prCurve
    precision_ls, recall_ls, thresholds = precision_recall_curve(y_true, y_scores)
  File "/root/miniconda3/envs/deepex_new/lib/python3.7/site-packages/sklearn/metrics/_ranking.py", line 653, in precision_recall_curve
    sample_weight=sample_weight)
  File "/root/miniconda3/envs/deepex_new/lib/python3.7/site-packages/sklearn/metrics/_ranking.py", line 544, in _binary_clf_curve
    classes_repr=classes_repr))
ValueError: y_true takes value in {} and pos_label is not specified: either make y_true take value in {0, 1} or {-1, 1} or pass pos_label explicitly.
OIE_2016 (top 3)
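
My guess (not a confirmed diagnosis) is that benchmark.py is still scoring against the original OIE2016 gold annotations, so with test.txt replaced by an unrelated sentence there are zero matched extractions and the y_true handed to prCurve ends up empty. A defensive guard along these lines would at least avoid the crash (hypothetical helper, not the actual code in benchmark.py):

```python
# Hypothetical guard (sketch only; benchmark.py's real prCurve differs).
# Skip the PR curve when there are no gold labels to score, instead of
# letting precision_recall_curve raise on an empty y_true.
from sklearn.metrics import precision_recall_curve

def pr_curve_or_empty(y_true, y_scores):
    if len(y_true) == 0:
        # No matched gold extractions for the new sentence -> nothing to
        # evaluate; return empty curves rather than crashing.
        return [], [], []
    return precision_recall_curve(y_true, y_scores)
```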

This is a terribly overtrained system, save your time and run.

> @filip-cermak: This is a terribly overtrained system, save your time and run.

Quite! The results from the eval just do not stack up against the poor quality of the triples actually produced. If anyone's interested, we did a fairly comprehensive eval: