praetor29/personalgpt

[Suggestion] The clone bettering itself in real time

Opened this issue · 1 comment

Would it be possible to make the AI model fine-tune itself over time through interactions? Perhaps it could learn from chatting with its real-life counterpart by analyzing their responses? Or maybe the model could at least save these conversations into some sort of databank or backlog to be trained on later?

My idea here is to find a way to make the clone's training compound: the more you use it, the more accurate it becomes, gradually reducing the programmer's role in the process.

While it cannot improve upon its fine-tuned model in real time, it is an intriguing idea to collect a variant of the usual training data: what a conversation between yourself and the human you are cloning looks like.
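A minimal sketch of what that collection step could look like, assuming a simple JSONL log kept alongside the bot; the file name and the `log_exchange` helper are illustrative, not part of the repo:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location for the saved exchanges (not an existing file in personalgpt).
LOG_PATH = Path("clone_training_data.jsonl")

def log_exchange(human_message: str, clone_reply: str) -> None:
    """Append one human/clone exchange as a record for later fine-tuning."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "messages": [
            {"role": "user", "content": human_message},
            {"role": "assistant", "content": clone_reply},
        ],
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: call this after every reply the clone sends.
log_exchange("How was your day?", "Pretty good, spent most of it debugging.")
```

Appending one JSON object per line keeps the databank easy to stream and filter later, without blocking the bot while it chats.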

Definitely something to try implementing!