What's new in 0.25.0
nicolay-r commented
Support Batching
for effective integration of LLMs into text processing pipelines
Previously, the whole text processing pipeline relied on a single sentence / text part.
Now we overcome that limitation and can handle multiple sentences formed into a list, i.e. a batch.
This step is particularly important for LLMs, LMs, and neural networks in general, for which batching accelerates inference.
As a result, overall pipeline execution is expected to be faster.
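The idea of batched pipeline input can be sketched as follows. This is a minimal illustration, not AREkit code: `iter_batches` and `annotate_batch` are hypothetical names, and the per-batch "model call" is a stand-in for any operation that benefits from receiving a list of sentences at once.

```python
from typing import Iterable, Iterator, List

def iter_batches(items: Iterable[str], batch_size: int) -> Iterator[List[str]]:
    """Group a stream of texts into fixed-size batches."""
    batch: List[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # trailing partial batch

def annotate_batch(batch: List[str]) -> List[str]:
    # Placeholder for a batched model call, e.g. a single forward
    # pass over all sentences in the batch at once.
    return [text.upper() for text in batch]

sentences = ["first sentence", "second sentence", "third sentence"]
results = [out
           for batch in iter_batches(sentences, batch_size=2)
           for out in annotate_batch(batch)]
```

Processing batches of sentences instead of one sentence at a time lets the underlying model amortize per-call overhead across the whole batch.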
Source collections are no longer going to be a part of AREkit ✨
This allows us to lighten 🪶 the overall framework and focus purely on data processing techniques
Flexibility and Performance Enhancements
Fixed bugs
- 🔧
RowCacheStorageProvider
fixed a bug with mismatching sizes of the type list and the column list in the case of force-collected
columns (ad4312c)
Minor Updates
nicolay-r commented
Published 🥳