Issues
Support for batched inference?
#124 opened by ksrinivs64 - 4
Incorrect constrained generation for regex
#116 opened by teoremma - 3
Skip or speedup lexer preprocessing
#115 opened by teoremma - 1
Do you support an Earley parser?
#118 opened by benjamin12342 - 6
Beam search issue
#104 opened by HichemAK - 2
Support multi-batch, multithreaded parsing
#19 opened by shubhamugare - 0
Add vLLM integration
#75 opened by shubhamugare - 0
Add support to OpenAI models
#18 opened by shubhamugare - 1
Add Cypher support
#92 opened by zdnhub - 3
GenerationMixin._get_logits_warper() missing 1 required positional argument: 'device'
#84 opened by XZF0 - 3
SynCode for fill-in-the-middle models
#82 opened by AzizCode92 - 12
"Generate Indentation-Error-Free Python Code" example with "microsoft/phi-2" model generates error
#78 opened by border-b - 2
Issues with grammar parsing
#73 opened by RevanthRameshkumar - 3
Code example on
#70 opened by badrisnps - 1
How to pip install it?
#54 opened by jjjxusbx - 1
'common.lark' not found
#61 opened by shaurya-06 - 0
Parallel Syntactic Decoding
#11 opened by shubhamugare - 0
Allow easy frontend for adding a new grammar
#14 opened by shubhamugare - 0
Aligning to the SynCode paper
#15 opened by shubhamugare - 0
LR(1) parsing
#12 opened by shubhamugare - 0
Cache LR(1) parser
#20 opened by shubhamugare - 0
Refactoring Python parser
#17 opened by shubhamugare