Pinned Repositories
COMP90015-2019S2-Assignment2
Project files for a distributed whiteboard application
Perplexica
Perplexica is an AI-powered search engine and an open-source alternative to Perplexity AI
Graph-Masking-Pre-training
This is the official code for the EMNLP 2022 paper "Self-supervised Graph Masking Pre-training for Graph-to-Text Generation"
jiuzhouh.github.io
LLM-Agent-Paper-List
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
Multi-Score
A new automatic evaluation metric which jointly evaluates output diversity and quality in a multi-reference setting.
PiVe
This is the official code for the paper "PiVe: Prompting with Iterative Verification Improving Graph-based Generative Capability of LLMs"
Reward-Engineering-for-Generating-SEG
This is the code for "Reward Engineering for Generating Semi-structured Explanation".
Uncertainty-Aware-Language-Agent
This is the official repo for "Towards Uncertainty-Aware Language Agent".