
WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023)


WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences

📃 Paper (KDD 2023)

This is the official implementation of WebGLM.

(Demo video: demo.mp4)

Overview


WebGLM aspires to provide an efficient and cost-effective web-enhanced question-answering system using the 10-billion-parameter General Language Model (GLM). It aims to improve real-world application deployment by integrating web search and retrieval capabilities into the pre-trained language model.

Features

  • LLM-augmented Retriever: Enhances the retrieval of relevant web content to better aid in answering questions accurately.
  • Bootstrapped Generator: Generates human-like responses to questions, leveraging the power of the GLM to provide refined answers.
  • Human Preference-aware Scorer: Estimates the quality of generated responses by prioritizing human preferences, ensuring the system produces useful and engaging content.
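The three components above form a retrieve–generate–score pipeline. The sketch below is purely illustrative (the function names and toy scoring logic are ours, not the repo's actual API): a retriever selects references, a generator produces candidate answers from them, and a scorer picks the candidate it prefers.

```python
# Illustrative sketch of WebGLM's three-stage pipeline. All names and
# heuristics here are hypothetical stand-ins for the real neural modules.

def retrieve(question, corpus, top_k=2):
    # Toy retriever: rank passages by word overlap with the question.
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:top_k]

def generate(question, references):
    # Toy generator: in WebGLM this is the bootstrapped GLM generator.
    return [f"Based on: {ref}" for ref in references]

def score(question, candidate):
    # Toy human-preference scorer: here, simply prefer longer answers.
    return len(candidate)

def answer(question, corpus):
    refs = retrieve(question, corpus)
    candidates = generate(question, refs)
    return max(candidates, key=lambda c: score(question, c))

corpus = ["Paris is the capital of France.", "Bananas are yellow."]
print(answer("What is the capital of France?", corpus))
```

In the real system each stage is a trained model; this sketch only shows how their outputs feed into one another.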

Preparation

Prepare Code and Environments

Clone this repo, and install python requirements.

pip install -r requirements.txt

Install Nodejs.

apt install nodejs # If you use Ubuntu

Install playwright dependencies.

playwright install

If the browser binaries are not installed on your host, you will need to install them as well. Do not worry: Playwright prints installation instructions the first time you run it if they are missing.

Prepare SerpAPI Key

During the search process, we use SerpAPI to get search results. You need to get an API key from the SerpAPI website.

Then, set the environment variable SERPAPI_KEY to your key.

export SERPAPI_KEY="YOUR KEY"
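If you want to fail fast with a clear message instead of a mid-run error, a small check like the following can verify the variable is visible to Python (`check_serpapi_key` is a hypothetical helper, not part of this repo):

```python
import os

def check_serpapi_key(env=os.environ):
    """Return the SerpAPI key from the environment, or raise a helpful error."""
    key = env.get("SERPAPI_KEY")
    if not key:
        raise RuntimeError(
            "SERPAPI_KEY is not set; run `export SERPAPI_KEY=\"YOUR KEY\"` first."
        )
    return key

# Demonstration with an explicit dict instead of the real environment:
print(check_serpapi_key({"SERPAPI_KEY": "demo-key"}))
```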

Prepare Retriever Checkpoint

Download the checkpoint from Tsinghua Cloud by running the command line below.

You can specify the path to save the checkpoint with --save SAVE_PATH.

python download.py retriever-pretrained-checkpoint

Try WebGLM

Before you run the code, make sure your device has enough free disk space for the checkpoints.

Export Environment Variables

Export the environment variable WEBGLM_RETRIEVER_CKPT, pointing it to the retriever checkpoint. If you downloaded the checkpoint to the default path, you can simply run the command line below.

export WEBGLM_RETRIEVER_CKPT=./download/retriever-pretrained-checkpoint

Run as Command Line Interface

You can try the WebGLM-2B model by:

python cli_demo.py -w THUDM/WebGLM-2B

Or run the WebGLM-10B model directly:

python cli_demo.py

Run as Web Service

You can try the WebGLM-2B model by:

python web_demo.py -w THUDM/WebGLM-2B

Or run the WebGLM-10B model directly:

python web_demo.py

Train WebGLM

Train Generator

Prepare Data

Download the training data from Tsinghua Cloud by running the command line below.

python download.py generator-training-data

It will automatically download all the data and preprocess it into the seq2seq form, ready for immediate use, under ./download.
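As a rough illustration of what a seq2seq-style training pair looks like (the real layout is produced by the preprocessing in download.py; the field names and prompt text here are made up for clarity), the source packs numbered references together with the question, and the target is an answer that cites them:

```python
# Hypothetical seq2seq example builder; the actual format used by WebGLM's
# generator training is defined by the repo's preprocessing, not this code.

def make_seq2seq_example(references, question, answer):
    source = "\n".join(
        f"Reference [{i + 1}]: {ref}" for i, ref in enumerate(references)
    )
    source += f"\nQuestion: {question}"
    return {"source": source, "target": answer}

ex = make_seq2seq_example(
    ["Paris is the capital of France."],
    "What is the capital of France?",
    "The capital of France is Paris [1].",
)
print(ex["source"])
print(ex["target"])
```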

Training

Please refer to the GLM repo for seq2seq training.

Train Retriever

Prepare Data

Download the training data from Tsinghua Cloud by running the command line below.

python download.py retriever-training-data

Training

Run the following command line to train the retriever. If you downloaded the training data to the default path, you can simply run the command line below.

python train_retriever.py --train_data_dir ./download/retriever-training-data
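Dense retrievers of this kind are commonly trained with a contrastive (InfoNCE-style) objective that pulls a query embedding toward its positive passage and away from negatives. The sketch below illustrates that general technique only; the actual loss WebGLM uses lives in train_retriever.py.

```python
import math

# Generic InfoNCE-style loss over toy 2-d embeddings: the negative
# log-softmax of the positive passage's similarity against the negatives.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(query, positive, negatives, temperature=0.05):
    logits = [dot(query, positive) / temperature]
    logits += [dot(query, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

query = [1.0, 0.0]
positive = [0.9, 0.1]          # similar to the query -> low loss
negatives = [[0.1, 0.9], [0.0, 1.0]]
print(round(info_nce(query, positive, negatives), 6))
```

Training drives this loss down, so queries end up closest to the passages that actually support their answers.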

Evaluation

You can reproduce our results on TriviaQA, WebQuestions, and NQ Open. Taking TriviaQA as an example, simply run the command line below to start the experiment:

bash scripts/triviaqa.sh

Real Application Cases

Here are some examples of WebGLM in real application scenarios.

Citation

If you use this code for your research, please cite our paper.

@misc{liu2023webglm,
      title={WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences},
      author={Xiao Liu and Hanyu Lai and Hao Yu and Yifan Xu and Aohan Zeng and Zhengxiao Du and Peng Zhang and Yuxiao Dong and Jie Tang},
      year={2023},
      eprint={2306.07906},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

This repo is simplified for easier deployment.