Important fix (untokenized data written to .bin files)
abisee opened this issue · 3 comments
This is a notification that the code to obtain the CNN / Daily Mail dataset unfortunately had a bug which caused the untokenized data to be written to the `.bin` files (not the tokenized data, as intended). The fix has been committed here.
If you've already created your `.bin` and `vocab` files, I advise you to recreate them. To do this:
- Pull the new version of the `cnn-dailymail` repo.
- Delete or rename the `finished_files` directory (but keep the `cnn_stories_tokenized` and `dm_stories_tokenized` directories).
- Comment out the lines `tokenize_stories(cnn_stories_dir, cnn_tokenized_stories_dir)` and `tokenize_stories(dm_stories_dir, dm_tokenized_stories_dir)` (lines 178 and 179) of `make_datafiles.py`. This is because you don't need to retokenize the data (see the sketch after this list).
- Run `make_datafiles.py`. This will create the new `.bin` and `vocab` files.
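For reference, a minimal sketch of that edit, assuming the line numbers still match the version of `make_datafiles.py` referenced in this issue:

```python
# make_datafiles.py -- skip re-tokenization, since cnn_stories_tokenized and
# dm_stories_tokenized already hold the tokenized .story files:

# tokenize_stories(cnn_stories_dir, cnn_tokenized_stories_dir)  # line 178, commented out
# tokenize_stories(dm_stories_dir, dm_tokenized_stories_dir)    # line 179, commented out
```

Then rerun the script the same way you did originally, e.g. `python make_datafiles.py /path/to/cnn/stories /path/to/dailymail/stories` (assuming the usual two-argument invocation with the original story directories).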
If you've already begun training with the Tensorflow code, I advise you to restart training with the new datafiles. Switching the `vocab` and `.bin` files mid-training will not work.
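If you want to confirm that the regenerated files contain tokenized text, here is a minimal sketch that reads the first example from a `.bin` file. It assumes the layout the repo writes (an 8-byte length prefix followed by a serialized `tf.Example` with `article` and `abstract` features); treat the path and feature names as assumptions if your version differs:

```python
import struct
from tensorflow.core.example import example_pb2

# Read the first serialized example from a regenerated .bin file.
with open("finished_files/train.bin", "rb") as f:
    len_bytes = f.read(8)                                     # 8-byte record length
    str_len = struct.unpack("q", len_bytes)[0]
    example_str = struct.unpack("%ds" % str_len, f.read(str_len))[0]
    ex = example_pb2.Example.FromString(example_str)
    article = ex.features.feature["article"].bytes_list.value[0]
    print(article[:300])  # tokenized text shows space-separated punctuation, e.g. "said ."
```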
Apologies for the inconvenience.
Tagging people to whom this may be relevant: @prokopevaleksey @tianjianjiang @StevenLOL @MrGLaDOS @hate5six @liuchen11 @bugtig @ayushoriginal @BenJamesbabala @BinbinBian @caomw @halolimat @ml-lab @ParseThis @qiang2100 @scylla @tonydeep @yiqingyang2012 @YuxuanHuang @Rahul-Iisc @pj-parag
When I run the code, it always prints "Tried to find tokenized story file 9bfbb6ede20df9611c2a8b42980629658dc5ec23.story in both directories cnn_stories_tokenized and dm_stories_tokenized. Couldn't find it."
How can I fix it?
@yangze01 That error message means that it's trying to find a `.story` file in `cnn_stories_tokenized` or `dm_stories_tokenized` but the file is in neither. Those directories should contain a tokenized version of every file in the original cnn / dailymail stories directories you passed into `make_datafiles.py`.
You probably had some error during tokenization that resulted in an incomplete set of tokenized files in `cnn_stories_tokenized` and `dm_stories_tokenized`.
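One quick way to see how incomplete the tokenized set is: compare filenames between the original and tokenized directories (the tokenizer keeps the original filenames). A minimal sketch, where the paths are assumptions to be replaced with the directories you actually used:

```python
import os

def missing_tokenized(stories_dir, tokenized_dir):
    """Return story filenames that have no tokenized counterpart."""
    tokenized = set(os.listdir(tokenized_dir))
    return [f for f in os.listdir(stories_dir) if f not in tokenized]

# Paths are illustrative -- substitute the directories you passed to make_datafiles.py.
print(missing_tokenized("cnn/stories", "cnn_stories_tokenized"))
print(missing_tokenized("dailymail/stories", "dm_stories_tokenized"))
```

If either list is non-empty, delete the tokenized directories and rerun the tokenization step before running `make_datafiles.py` again.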
The latest commit now has more informative checks and error messages.