
LLM-personalization

We enable LLMs with personalization capabilities.

This project builds upon https://github.com/langchain-ai/web-explorer.

Today, LLMs are not good at personalizing the recommendations they provide. Even when user information is available, they advise physicians and financial advisors to ask professionals in the respective fields for help. When answering a question from a software professional, an LLM should deliver an in-depth answer with code or algorithms, whereas a professional in another field would need definitions and main concepts. The intent of this project is to make LLM answers tailored to the needs of users, taking into account the available information about them. To do that, we generalize the documents available about a person, such as a LinkedIn profile, visited web pages, investment history extracted from tax documents, and health forms, while maintaining the person's privacy. We rely on meta-learning techniques to design an LLM prompt that produces a personalization prompt, which is then used to obtain suitable, relevant information. This "meta-prompt" is produced by a generalization operation applied to the documents available for the user. These documents need to be de-identified so that they are sufficient for personalization on one hand and preserve user privacy on the other.
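The generalization step can be sketched with LangChain roughly as follows. This is a minimal sketch, not the project's exact code: the prompt wording, model name, and `build_personalization_prompt` helper are assumptions.

```python
# Sketch: turn de-identified user documents into a reusable personalization
# prompt ("meta-prompt") via an LLM generalization step.
# Assumes langchain_openai and langchain_core are installed and an OpenAI key is set.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

GENERALIZE_TEMPLATE = """You are given de-identified documents describing a user
(profile pages, browsing history, financial and health summaries).
Generalize them into a short description of the user's background,
expertise level, and information needs. Do not repeat names, dates,
or any other identifying details.

Documents:
{documents}

Personalization prompt:"""

def build_personalization_prompt(documents: list[str], model: str = "gpt-4o-mini") -> str:
    """Apply the generalization step to de-identified documents and
    return a personalization prompt for later question answering."""
    llm = ChatOpenAI(model=model, temperature=0)
    chain = ChatPromptTemplate.from_template(GENERALIZE_TEMPLATE) | llm | StrOutputParser()
    return chain.invoke({"documents": "\n\n".join(documents)})
```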

The personalization profile is built from the link provided by the user.
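A minimal sketch of that step, assuming langchain_community's WebBaseLoader; the helper name and example URL are illustrative, and the result feeds the generalization sketch above.

```python
# Sketch: fetch the page behind the user's link so its text can be
# de-identified and generalized into a personalization profile.
from langchain_community.document_loaders import WebBaseLoader

def load_profile_documents(url: str) -> list[str]:
    """Load the user-provided page and return its raw text."""
    docs = WebBaseLoader(url).load()
    return [d.page_content for d in docs]

# Usage (hypothetical URL):
# texts = load_profile_documents("https://example.com/my-profile")
# profile_prompt = build_personalization_prompt(texts)
```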

Then, given a user question, the app will:

- Use an LLM to generate a set of queries.
- Query for each.
- The URLs from the search results are stored in self.urls.
- A check is performed for any new URLs that haven't been processed yet (not in self.url_database).
- Only these new URLs are loaded, transformed, and added to the vectorstore.
- The vectorstore is queried for relevant documents based on the questions generated by the LLM.
- Only unique documents are returned as the final result (see the retrieval sketch below).
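The loop above follows web-explorer's retriever logic. Below is a minimal, self-contained sketch of those steps; the class name, the injected `search` callable, and the query-generation prompt are assumptions rather than the project's exact implementation.

```python
# Sketch of the retrieval loop: generate queries, search, index only new URLs,
# then return de-duplicated documents from the vectorstore.
from typing import Callable
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.document_transformers import Html2TextTransformer

class PersonalizedWebRetriever:
    def __init__(self, llm, vectorstore, search: Callable[[str], list[str]]):
        self.llm = llm                  # chat model used to generate search queries
        self.vectorstore = vectorstore  # any LangChain vector store
        self.search = search            # query -> list of result URLs
        self.urls: list[str] = []
        self.url_database: set[str] = set()

    def get_relevant_documents(self, question: str):
        # 1. Use the LLM to generate a set of queries for the question.
        queries = self.llm.invoke(
            f"Write three web search queries for: {question}"
        ).content.splitlines()
        # 2. Query for each; store result URLs in self.urls.
        for q in queries:
            self.urls.extend(self.search(q))
        # 3. Load, transform, and index only URLs not processed before.
        new_urls = [u for u in self.urls if u not in self.url_database]
        if new_urls:
            docs = AsyncHtmlLoader(new_urls).load()
            docs = Html2TextTransformer().transform_documents(docs)
            self.vectorstore.add_documents(docs)
            self.url_database.update(new_urls)
        # 4. Query the vectorstore for each generated query; keep unique docs.
        unique, seen = [], set()
        for q in queries:
            for d in self.vectorstore.similarity_search(q):
                if d.page_content not in seen:
                    seen.add(d.page_content)
                    unique.append(d)
        return unique
```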