
Code for "Coded Prompts for Large Language Models".


Coded Prompts for Large Language Models

Ziqian Lin, Yicong Chen, Yuchen Zeng, Kangwook Lee

University of Wisconsin-Madison

Paper Link: TBD

While Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, and many prompting techniques have been proposed, there remains room for performance improvement. In this work, we introduce a novel dimension of prompt design: coded prompts for LLM inference. Drawing inspiration from coding theory, in which coded symbols communicate or store functions of multiple information symbols, we design coded prompts that process multiple inputs simultaneously. We validate this approach through experiments on two distinct tasks: identifying the maximum prime number within a range, and sentence toxicity prediction. Our results indicate that coded prompts can indeed improve task performance. We believe coded prompts will pave the way for innovative strategies that enhance the efficiency and effectiveness of LLMs.
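To make the idea concrete, here is a minimal sketch of what a coded prompt could look like for a binary task. This is an illustration only, not the paper's actual prompt format: the function names, prompt wording, and parity-style check below are all assumptions. The idea borrowed from coding theory is that, alongside the two direct queries, a redundant "coded" query about a function of both inputs (here, whether their answers differ) can expose an inconsistent answer.

```python
# Illustrative sketch of a coded prompt (NOT the paper's exact format).
# For two inputs we issue two direct queries plus one parity-style query
# about a function of both inputs; the redundancy can flag an error.

def coded_prompts(task_description, x1, x2):
    """Build three prompts: two direct queries and one coded (joint) query."""
    return [
        f"{task_description}\nInput: {x1}\nAnswer yes or no.",
        f"{task_description}\nInput: {x2}\nAnswer yes or no.",
        (f"{task_description}\nInput A: {x1}\nInput B: {x2}\n"
         "Do inputs A and B have different answers? Answer yes or no."),
    ]

def consistent(a1, a2, parity):
    """True if the coded (parity) answer agrees with the two direct answers."""
    return (a1 != a2) == parity

prompts = coded_prompts("Is the number prime?", 7, 8)
print(len(prompts))                   # two direct prompts plus one coded prompt
print(consistent(True, False, True))  # the parity answer matches -> True
```

With real LLM calls, a mismatch between the direct answers and the coded answer would signal that at least one of the three responses is wrong.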

Experiment

Task 1: Finding the Maximum Prime Number in a Range (Binary Classification)

step 1 - install the openai package:

pip install openai

step 2 - download the code for this repo and unzip it

step 3 - cd to the folder:

cd task\ 1

step 4 - run experiment:

python run_task1.py --integers 1 --samples 4 --apikey yourkey
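The script queries the LLM, but the ground truth for Task 1 can be computed locally. The helper below is an illustrative reference implementation for checking answers, not code from this repo:

```python
def is_prime(n):
    """Trial-division primality test; adequate for the small ranges in Task 1."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def max_prime_in_range(lo, hi):
    """Largest prime p with lo <= p <= hi, or None if the range has no prime."""
    for n in range(hi, lo - 1, -1):
        if is_prime(n):
            return n
    return None

print(max_prime_in_range(1, 20))  # → 19
```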

Task 2: Online Comment Toxicity Prediction (Regression)

step 1 - install packages:

pip install openai
pip install datasets
pip install guidance

step 2 - run the notebook ./task 2/task2.ipynb.