RoleCraft-GLM: Advancing Personalized Role-Playing in Large Language Models
[Project] [Paper]

Abstract

This study presents RoleCraft-GLM, an innovative framework for enhancing personalized role-playing with Large Language Models (LLMs). RoleCraft-GLM addresses the lack of personalized interaction in conversational AI by providing detailed, emotionally nuanced character portrayals. We contribute a unique conversational dataset that shifts from conventional celebrity-centric characters to diverse, non-celebrity personas, enhancing the realism and complexity of language modeling interactions. Our approach also includes meticulous character development, ensuring dialogues are both realistic and emotionally resonant. The effectiveness of RoleCraft-GLM is validated through case studies that highlight its versatility across scenarios. The framework excels at generating dialogues that accurately reflect characters' personality traits and emotions, thereby boosting user engagement. In conclusion, RoleCraft-GLM marks a significant step forward in personalized AI interaction, paving the way for more authentic and immersive AI-assisted role-playing through nuanced, emotionally rich dialogue.

Framework

Figure: Overview of the RoleCraft-GLM framework. (1) Emotionally annotated dialogue datasets play a key role in creating role profiles that reflect specific emotional traits. (2) Q&A pairs are generated from context and known character traits, ensuring dialogues stay consistent with the character profiles. (3) A hybrid of generic and character-specific instructions is used to train the GLM for varied dialogue scenarios.
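To make the pipeline concrete, here is a minimal Python sketch of the three stages. It is an illustration under stated assumptions: all class, function, and field names (EmotionTurn, build_profile, make_qa_pair, mix_training_data) are hypothetical and are not the actual RoleCraft-GLM code.

```python
# Minimal sketch of the three-stage pipeline. All names here are
# illustrative assumptions, not the RoleCraft-GLM implementation.
import random
from dataclasses import dataclass

@dataclass
class EmotionTurn:
    speaker: str
    text: str
    emotion: str  # label from the emotion-annotation step, e.g. "joy"

def build_profile(name: str, turns: list[EmotionTurn]) -> str:
    """Stage 1: distill an emotion-aware role profile from annotated dialogue."""
    emotions = sorted({t.emotion for t in turns if t.speaker == name})
    return f"{name}: a character whose speech often expresses {', '.join(emotions)}."

def make_qa_pair(profile: str, context: str, question: str) -> dict:
    """Stage 2: wrap profile + context into one instruction-tuning example."""
    return {
        "instruction": (f"You are the following character:\n{profile}\n"
                        f"Context: {context}\nStay in character and answer."),
        "input": question,
        "output": "",  # filled by a teacher model or human annotation
    }

def mix_training_data(generic: list[dict], role_specific: list[dict],
                      role_fraction: float = 0.5, seed: int = 0) -> list[dict]:
    """Stage 3: blend generic and character-specific instructions so the
    tuned GLM keeps general ability while learning the persona."""
    rng = random.Random(seed)
    n_generic = int(len(role_specific) * (1 - role_fraction) / role_fraction)
    mixed = role_specific + rng.sample(generic, min(n_generic, len(generic)))
    rng.shuffle(mixed)
    return mixed
```

With role_fraction = 0.5 the mix is half generic and half persona data; the actual mixing ratio used for training is not specified here.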

Prompt

Figure: An example of generating a detailed character description. A character description template, combined with the emotionally annotated dialogue dataset, drives prompt-based generation of detailed character descriptions. (The instruction and output have been translated into English.)
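As a rough illustration of this mechanism, the sketch below assembles a hypothetical description-generation prompt from emotion-annotated dialogue turns. The template wording and dict keys are assumptions for illustration, not the exact template from the figure.

```python
# Hypothetical prompt assembly for character-description generation. The
# template wording and dict keys are assumptions, not the paper's template.
PROFILE_TEMPLATE = """\
Based on the annotated dialogue below, write a detailed character description
covering name, personality traits, speaking style, and dominant emotions.

Dialogue (with emotion labels):
{dialogue}

Character description:"""

def render_profile_prompt(turns: list[dict]) -> str:
    """turns: [{"speaker": ..., "emotion": ..., "text": ...}, ...]"""
    dialogue = "\n".join(
        f"{t['speaker']} [{t['emotion']}]: {t['text']}" for t in turns
    )
    return PROFILE_TEMPLATE.format(dialogue=dialogue)

# Example: one annotated turn produces a complete generation prompt.
print(render_profile_prompt([
    {"speaker": "Lin", "emotion": "joy", "text": "I finally passed the exam!"},
]))
```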

Statistics

Figures: dataset statistics.

Experimental Results

Figures: experimental results.

Citation

If you find this work helpful, please cite:

@misc{rolecraft-glm,
    title={RoleCraft-GLM: Advancing Personalized Role-Playing in Large Language Models},
    author={Meiling Tao and Xuechen Liang and Tianyu Shi and others},
    year={2023},
    howpublished={GitHub repository},
    note={\url{https://github.com/tml2002/RoleCraft-GLM/}}
}