Patient-Ψ: Using Large Language Models to Simulate Patients for Training Mental Health Professionals
- Get started on local computer
- Quickly start the Patient-Ψ-Trainer
- Cognitive model generation - Patient-Ψ-CM dataset
- Prompts for Patient-Ψ
- Citation
We recommend setting up a virtual environment with miniconda (quick command-line installation):
conda create -n patient-psi python=3.11.0
Then install the requirements for Python 3 and for the Next.js-based Vercel app.
# Python packages
pip install -r requirements.txt
# Next.js dependencies
conda install -c conda-forge nodejs
npm install -g pnpm@8.7.4
npm install -g ts-node
npm install -g vercel
First, create a Vercel account via the Vercel website.
Then, fork this repo, and create a Vercel project and a KV database linked to it. Follow the video clip for instructions.
After this, go back to the terminal and run
vercel link
and fill in the credentials to connect to the Vercel project you just created.
Under your forked repo, run the following to create a .env.local file with the KV database's environment variables automatically imported.
vercel env pull .env.local
You need to manually add two additional environment variables to the .env.local file. We provide an example at .env.example.
- AUTH_SECRET: use either openssl rand -base64 32 or an automatic generator to generate a random secret.
- OPENAI_API_KEY: plug in your OpenAI API key.
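For reference, the completed .env.local may end up looking roughly like the sketch below. The KV_* names are the variables Vercel KV typically generates; your actual variable names and values come from vercel env pull and from .env.example, so treat this purely as an illustration.
# Illustrative .env.local sketch (placeholder values only)
KV_URL="redis://..."
KV_REST_API_URL="https://..."
KV_REST_API_TOKEN="..."
KV_REST_API_READ_ONLY_TOKEN="..."
AUTH_SECRET="paste the output of openssl rand -base64 32"
OPENAI_API_KEY="sk-..."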
Run the following commands under your forked repo to add the two additional environment variables to the Vercel project settings. The Vercel CLI will prompt you to fill in the corresponding values and ask you to select the environments to add them to. Please toggle all environments (production, development, preview).
vercel env add AUTH_SECRET
vercel env add OPENAI_API_KEY
To check if the above setup is successful, go to your Vercel project page and navigate to Settings -> Environment Variables; there should be at least 6 key-value pairs, as shown in .env.example.
Note: You should not commit your .env file anywhere.
After setting up the Vercel app, you can quickly upload sample cognitive models to the KV database and start the Patient-Ψ-Trainer powered by GPT-4.
We provide example patient cognitive models in the file python/data/profiles.json, which are publicly available on the Beck Institute website.
First, upload the profiles to your KV database by running the following command.
ts-node lib/utils/kvDatabaseFunctions.ts
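If you are curious what this step does under the hood, here is a minimal sketch of such an upload. It assumes the standard @vercel/kv client (which reads the KV_* variables from the environment) and a hypothetical key naming scheme; the actual logic lives in lib/utils/kvDatabaseFunctions.ts and may differ.
// Illustrative sketch only; not the actual contents of kvDatabaseFunctions.ts
import { kv } from '@vercel/kv';   // reads KV_REST_API_URL / KV_REST_API_TOKEN from the environment
import { readFileSync } from 'fs';

async function uploadProfiles(): Promise<void> {
  // Load the example cognitive models shipped with the repo
  const profiles = JSON.parse(readFileSync('python/data/profiles.json', 'utf8'));

  // Store each profile under its own key (hypothetical key scheme)
  for (const [index, profile] of Object.entries(profiles)) {
    await kv.set(`profile:${index}`, profile);
  }
  console.log(`Uploaded ${Object.keys(profiles).length} profiles to KV`);
}

uploadProfiles().catch((err) => {
  console.error(err);
  process.exit(1);
});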
Second, run the following to start the server on localhost:8001.
pnpm install
pnpm dev --port 8001
The app should now be running at http://localhost:8001/signup. Please sign up with any credentials of at least 6 characters.
To get access to the Patient-Ψ-CM dataset, please fill out this form: https://forms.gle/pQ3g6YVFrEWjBU2H7.
The folder python/ contains the code for producing the Patient-Ψ-CM dataset. We provide an example transcript excerpt from CBT therapy, which is publicly available on the Beck Institute website.
Run the following commands to construct the cognitive models. Make sure you update the OPENAI_API_KEY in python/.env.
cd python
python3 -m generation.generate --transcript-file "example_transcript.txt" --out-file "example_CCD_from_transcript.json"
Note: You should not commit your .env file anywhere. Make sure to update the variables in python/.env if you want to use a custom folder.
The prompts for different conversational styles can be found in this folder.
The prompts for simulating a patient with a cognitive model can be found in this function.
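To give a rough sense of the general pattern, the hypothetical sketch below interpolates a cognitive model into a system prompt and sends it to the chat API. The field names, prompt wording, and model choice are illustrative assumptions only, not the actual prompts used in the repo.
// Hypothetical sketch of prompting a patient simulation; not the repo's actual code
import OpenAI from 'openai';

// Simplified stand-in for a Patient-Ψ cognitive model; real models contain more fields
interface CognitiveModel {
  relevantHistory: string;
  coreBeliefs: string;
  situation: string;
  automaticThoughts: string;
  emotions: string;
  behaviors: string;
}

export function buildSystemPrompt(model: CognitiveModel, style: string): string {
  // Interpolate the cognitive model into a role-play instruction
  return [
    'You are role-playing a therapy patient described by the following cognitive model.',
    `Relevant history: ${model.relevantHistory}`,
    `Core beliefs: ${model.coreBeliefs}`,
    `Situation: ${model.situation}`,
    `Automatic thoughts: ${model.automaticThoughts}`,
    `Emotions: ${model.emotions}`,
    `Behaviors: ${model.behaviors}`,
    `Respond in a ${style} conversational style and stay in character.`,
  ].join('\n');
}

export async function replyAsPatient(model: CognitiveModel, userMessage: string): Promise<string> {
  const client = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const completion = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: buildSystemPrompt(model, 'plain') },
      { role: 'user', content: userMessage },
    ],
  });
  return completion.choices[0].message.content ?? '';
}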
@misc{wang2024patientpsi,
title={PATIENT-{\Psi}: Using Large Language Models to Simulate Patients for Training Mental Health Professionals},
author={Ruiyi Wang and Stephanie Milani and Jamie C. Chiu and Jiayin Zhi and Shaun M. Eack and Travis Labrum and Samuel M. Murphy and Nev Jones and Kate Hardy and Hong Shen and Fei Fang and Zhiyu Zoey Chen},
year={2024},
eprint={2405.19660},
archivePrefix={arXiv},
primaryClass={cs.CL}
}