logan-markewich/llama_index_starter_pack

Can we use a GPU here to improve the speed or inference time?

VivekSinghDS opened this issue · 1 comment


Everything is done through the OpenAI APIs, so there is no room for GPU usage to speed things up.

You could use a local LLM instead, but that would take some code customization plus fairly powerful hardware.

Some examples of local LLMs with LlamaIndex are here (note that answer quality will vary depending on the model you use):

https://github.com/autratec?tab=repositories