nicolay-r/AREkit

What's new in 0.25.0

Closed this issue · 1 comments

Support batching for efficient integration of LLMs into text processing pipelines

Previously, the whole text processing pipeline operated on a single sentence / text part.
We have now overcome that limitation, so the pipeline can consider multiple sentences formed into a list, i.e. a batch.
This step is especially important for LLMs, LMs, and neural networks, for which batching accelerates inference.
As a result, overall pipeline execution is expected to be faster.
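The idea above can be illustrated with a minimal sketch of batched pipeline processing. Note that `batch_iter` and `process_batched` are hypothetical illustrations of the technique, not the AREkit API:

```python
# Sketch only: `batch_iter` and `process_batched` are illustrative
# names, not part of the AREkit framework.

def batch_iter(items, batch_size):
    """Group a flat sequence of sentences into lists (batches)."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch


def process_batched(sentences, model_fn, batch_size=8):
    """Pass whole batches to the model instead of one sentence at a time,
    so LLM / LM backends can exploit parallelism."""
    results = []
    for batch in batch_iter(sentences, batch_size):
        # One model call per batch rather than per sentence.
        results.extend(model_fn(batch))
    return results
```

For example, `process_batched(["a", "b", "c"], lambda b: [s.upper() for s in b], batch_size=2)` issues two model calls instead of three, which is where the speedup comes from with batch-capable backends.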

Sources collections are no longer going to be a part of AREkit ✨
This allows us to lightweight 🪶 the overall framework and thereby focus purely on data processing techniques

  • #537
  • Remove requests library dependency 🪶
  • Move all the tutorials 📚 to the AREkit-ss project. 🪶
  • #545

Flexibility and Performance Enhancements

Fixed bugs

  • 🔧 RowCacheStorageProvider: fixed a bug with mismatching sizes of the type list and the columns list in the case of other force-collected columns (ad4312c)

Minor Updates

Published 🥳