tokenizer
There are 1,313 repositories under the tokenizer topic.
theseer/tokenizer
A small library for converting tokenized PHP source code into XML (and potentially other formats)
Chevrotain/chevrotain
Parser Building Toolkit for JavaScript
dqbd/tiktokenizer
Online playground for OpenAI tokenizers
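The playground runs OpenAI's tiktoken encodings in the browser; for reference, the same tokenization can be reproduced offline with the tiktoken Python package (a minimal sketch, separate from this repo, with an arbitrary example string):

```python
import tiktoken

# Load the BPE encoding used by GPT-4 / GPT-3.5 (cl100k_base); newer models
# such as gpt-4o use o200k_base instead.
enc = tiktoken.get_encoding("cl100k_base")

tokens = enc.encode("Hello, tokenizer world!")
print(tokens)              # list of integer token ids
print(enc.decode(tokens))  # round-trips back to the original string
```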
roshan-research/hazm
Persian NLP Toolkit
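A minimal, hedged sketch of Persian word tokenization with hazm, using the normalizer and tokenizer entry points from the project's documented API (the example sentence is arbitrary):

```python
from hazm import Normalizer, word_tokenize

normalizer = Normalizer()
text = normalizer.normalize("ما هم برای وصل کردن آمدیم")  # normalize the Persian text first
print(word_tokenize(text))  # split the normalized text into word tokens
```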
natasha/natasha
Solves basic Russian NLP tasks; provides an API over the lower-level Natasha projects
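A hedged sketch of sentence and word segmentation with Natasha's high-level Doc/Segmenter interface, following the names shown in the project's README (the example text is arbitrary):

```python
from natasha import Segmenter, Doc

segmenter = Segmenter()
doc = Doc("Москва - столица России. Она стоит на реке Москве.")
doc.segment(segmenter)                       # sentence and token segmentation
print([token.text for token in doc.tokens])  # word-level tokens
print([sent.text for sent in doc.sents])     # sentence spans
```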
lovit/soynlp
A Python library for Korean natural language processing. It provides word extraction, tokenization, part-of-speech tagging, and preprocessing.
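A heavily hedged sketch of soynlp's unsupervised tokenization: cohesion scores are normally learned from a corpus with WordExtractor, but the tiny score table below is made up so the snippet stays self-contained (the scores and the sentence are illustrative assumptions only):

```python
from soynlp.tokenizer import LTokenizer

# In practice these cohesion scores come from WordExtractor trained on a corpus;
# the values here are invented purely for illustration.
scores = {"파이썬": 0.7, "자연어": 0.6, "처리": 0.5}
tokenizer = LTokenizer(scores=scores)

print(tokenizer.tokenize("파이썬으로 자연어처리를 합니다"))
```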
ikawaha/kagome
Self-contained Japanese Morphological Analyzer written in pure Go
no-context/moo
Optimised tokenizer/lexer generator! 🐄 Uses the sticky /y regex flag for performance. Moo.
BLKSerene/Wordless
An Integrated Corpus Tool With Multilingual Support for the Study of Language, Literature, and Translation
wangfenjin/simple
A SQLite3 fts5 full-text search tokenizer extension that supports Chinese and Pinyin
mathewsanders/Mustard
🌭 Mustard is a Swift library for tokenizing strings when splitting by whitespace doesn't cut it.
cbaziotis/ekphrasis
Ekphrasis is a text processing tool geared towards text from social networks such as Twitter or Facebook. It performs tokenization, word normalization, word segmentation (for splitting hashtags), and spell correction, using word statistics from two large corpora (English Wikipedia and Twitter: 330 million English tweets).
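A hedged configuration sketch based on the project's documented TextPreProcessor interface; the option set below is a trimmed-down assumption, not the full recommended pipeline:

```python
from ekphrasis.classes.preprocessor import TextPreProcessor
from ekphrasis.classes.tokenizer import SocialTokenizer

# Trimmed-down pipeline: segment hashtags and use the social-media-aware tokenizer.
text_processor = TextPreProcessor(
    segmenter="twitter",                                 # word statistics used for hashtag segmentation
    corrector="twitter",                                 # word statistics used for spell correction
    unpack_hashtags=True,                                # e.g. "#nlprocks" -> "nlp rocks"
    tokenizer=SocialTokenizer(lowercase=True).tokenize,  # handles emoticons, hashtags, URLs
)

print(text_processor.pre_process_doc("CANT WAIT for the new season :)"))
```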
risesoft-y9/Data-Labeling
Data Labeling is a tool dedicated to processing and annotating text data. Through a simplified, fast annotation workflow and dynamic algorithmic feedback, it lets users quickly label keywords while the algorithm keeps reducing the cost and time of manual annotation. The workflow starts with manual annotation to build a foundation, then automatic annotation feeds back into the manual work, and finally manual annotation corrects any drift, greatly improving annotation accuracy and efficiency. Data Labeling relies on an open-source digital base platform for personnel and role management.
open-korean-text/open-korean-text
Open Korean Text Processor - An Open-source Korean Text Processor
smoothnlp/SmoothNLP
An NLP toolset with a focus on explainable inference
niieani/gpt-tokenizer
The fastest JavaScript BPE tokenizer encoder/decoder for OpenAI's GPT models (o1, o3, o4, gpt-4o, gpt-4, etc.). A port of OpenAI's tiktoken with additional features.
jflex-de/jflex
The fast scanner generator for Java™ with full Unicode support
therealoliver/Deepdive-llama3-from-scratch
Implement Llama 3 inference step by step: grasp the core concepts, master the derivation of each step, and implement the code.
alasdairforsythe/tokenmonster
Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript
glayzzle/php-parser
:herb: Node.js PHP parser - extract an AST or tokens
lindera/lindera
A multilingual morphological analysis library.
lydell/js-tokens
Tiny JavaScript tokenizer.
lionsoul2014/friso
High-performance Chinese tokenizer based on the MMSEG algorithm, written in ANSI C, with support for both GBK and UTF-8 charsets. Its fully modular implementation can be easily embedded in other programs such as MySQL, PostgreSQL, and PHP.
hplt-project/sacremoses
Python port of Moses tokenizer, truecaser and normalizer
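A minimal sketch of the Moses-style tokenize/detokenize round trip with sacremoses, using the class names from the package's documented API (the example sentence is arbitrary):

```python
from sacremoses import MosesTokenizer, MosesDetokenizer

mt = MosesTokenizer(lang="en")
md = MosesDetokenizer(lang="en")

tokens = mt.tokenize("Hello World, this isn't so hard!")
print(tokens)                 # Moses-style word and punctuation tokens
print(md.detokenize(tokens))  # reassemble the sentence from the tokens
```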
leodevbro/vscode-blockman
VSCode extension to highlight nested code blocks
CogComp/cogcomp-nlp
CogComp's Natural Language Processing Libraries and Demos: Modules include lemmatizer, ner, pos, prep-srl, quantifier, question type, relation-extraction, similarity, temporal normalizer, tokenizer, transliteration, verb-sense, and more.
polm/fugashi
A Cython MeCab wrapper for fast, pythonic Japanese tokenization and morphological analysis.
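A hedged sketch of word-level tokenization with fugashi, assuming a MeCab dictionary such as unidic-lite is installed alongside it (the example sentence is arbitrary):

```python
from fugashi import Tagger

tagger = Tagger()  # uses the installed MeCab dictionary (e.g. unidic-lite)
for word in tagger("麩菓子は麩を主材料とした日本の菓子。"):
    # word.surface is the token text; word.feature holds the dictionary fields
    print(word.surface, word.feature.pos1)
```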
neurosnap/sentences
A multilingual command line sentence tokenizer in Golang
timtadh/lexmachine
Lex machinery for Go.
taishi-i/nagisa
A Japanese tokenizer based on recurrent neural networks
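A minimal sketch of nagisa's combined tokenization and POS tagging, following the functions shown in the project's README (the example text is arbitrary):

```python
import nagisa

text = "Pythonで簡単に使えるツールです"
words = nagisa.tagging(text)
print(words.words)    # word tokens
print(words.postags)  # corresponding part-of-speech tags
```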
ku-nlp/jumanpp
Juman++ (a Morphological Analyzer Toolkit)
daac-tools/vibrato
🎤 vibrato: Viterbi-based accelerated tokenizer
belladoreai/llama-tokenizer-js
JS tokenizer for LLaMA 1 and 2
zurawiki/tiktoken-rs
Ready-made tokenizer library for working with GPT and tiktoken
guillaume-be/rust-tokenizers
Rust-tokenizer offers high-performance tokenizers for modern language models, including WordPiece, Byte-Pair Encoding (BPE) and Unigram (SentencePiece) models
OpenNMT/Tokenizer
Fast and customizable text tokenization library with BPE and SentencePiece support