Jekub/Wapiti

Training stops early with l-bfgs


I am using Wapiti with a small data set. Training with the other three optimization algorithms works fine, and the resulting models tag well when tested.
Training with l-bfgs optimization, however, does not complete: it stops after the 5th iteration. The model is saved, but it cannot find any entities in the test file. There is no error message, and I could not find a way to switch Wapiti to a more verbose mode.
How do I find the source of the problem? Is there any way to debug?

Here is what I get in the command line:

 % wapiti train -c -p patterns.txt train model
* Load patterns
* Load training data
   1000 sequences loaded
* Initialize the model
* Summary
    nb train:    1728
    nb labels:   5
    nb blocks:   111412
    nb features: 557060
* Train the model with l-bfgs
  [   1] obj=53109.96   act=240660   err= 3.71%/25.41% time=0.23s/0.23s
  [   2] obj=16052.89   act=201733   err= 3.71%/25.41% time=0.14s/0.37s
  [   3] obj=13922.99   act=125481   err= 3.71%/25.41% time=0.15s/0.52s
  [   4] obj=12309.84   act=88905    err= 3.71%/25.41% time=0.15s/0.67s
  [   5] obj=10402.72   act=64820    err= 3.71%/25.41% time=0.15s/0.82s
* Compacting the model
    - Scan the model
    - Compact it
       83385 observations removed
      416925 features removed
* Save the model
* Done
kmike commented

See the "Stopping criterion" section here: https://wapiti.limsi.fr/manual.html - the error rate was not changing for 5 iterations, which is the default stopping window. Maybe this is the cause? Check the --objwin, --stopwin and --stopeps arguments.
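The windowed criterion can be sketched roughly as follows (a Python illustration of the idea only, not Wapiti's actual C implementation; the parameter names mirror the --stopwin and --stopeps flags, and the default values are assumptions):

```python
def should_stop(err_history, stopwin=5, stopeps=0.02):
    """Return True when the error rate has varied by less than
    `stopeps` over the last `stopwin` iterations.

    This matches the log above: err stayed at 3.71% for 5
    iterations, so a 5-iteration window triggers a stop.
    """
    if len(err_history) < stopwin:
        # Not enough iterations yet to fill the window.
        return False
    window = err_history[-stopwin:]
    return max(window) - min(window) < stopeps


# Error rates from the training log above: unchanged for 5 iterations.
print(should_stop([3.71, 3.71, 3.71, 3.71, 3.71]))               # → True
# Widening the window (as with -w 10) keeps training going.
print(should_stop([3.71, 3.71, 3.71, 3.71, 3.71], stopwin=10))   # → False
```

Widening the window or tightening the threshold simply requires more iterations of near-constant error before training halts, which is why the fix below works.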

@kmike Thank you for the hint! I set -o 10 -w 10, and the training went through :)