Issues
Just use Jieba for all segmentation
#20 opened by jannes
update demo
#18 opened by jannes
Also support both 繁體 (traditional characters)
#12 opened by jannes
Add wordlist filtering
#16 opened by jannes
Use refinery for db migrations
#13 opened by jannes
Unique word counts in Analysis UI and JSON export as reported by zh-vocab-filter don't match
#15 opened by jannes
Remove all HTML entities from parsed epub
#2 opened by jannes
Remove clap dependency
#9 opened by jannes
Remove epub-rs dependency
#1 opened by jannes
add license
#5 opened by jannes
Do correct cleanup on panic in TUI loop
#10 opened by jannes
change extract command to interactive tui mode
#8 opened by jannes
add dictionary filter
#6 opened by jannes
bundle pkuseg segmenter as separate binary
#7 opened by jannes
add option to import 'garbage' words
#4 opened by jannes
use anyhow for all error handling
#3 opened by jannes
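
Issue #3 proposes moving all error handling to the anyhow crate. As a rough, hypothetical sketch (not code from this repository; the helper and file name are made up for illustration), anyhow-style error handling in Rust typically returns `anyhow::Result` and attaches context to failures before propagating them with `?`:

```rust
use anyhow::{Context, Result};
use std::fs;

// Hypothetical helper: read a word list file, attaching a readable
// message to any I/O error via anyhow's Context trait.
fn load_words(path: &str) -> Result<Vec<String>> {
    let contents = fs::read_to_string(path)
        .with_context(|| format!("failed to read word list at {path}"))?;
    Ok(contents
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty())
        .map(str::to_owned)
        .collect())
}

fn main() -> Result<()> {
    // "wordlist.txt" is only an example path.
    let words = load_words("wordlist.txt")?;
    println!("loaded {} words", words.len());
    Ok(())
}
```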