14H034160212
Ph.D. Candidate, @witbrock, @Strong-AI-Lab, UoA #NLP #LLMs #Reasoning; AI Engineer, @xtracta-app, NZ; ex: AI Engineer, AIIT, Peking U; UoA (First-class Honours)
University of Auckland, Auckland, New Zealand
Pinned Repositories
14H034160212
14h034160212.github.io
This is Qiming Bao.
COVID-19-redesign-master
The latest updated version from Qiming
HHH-An-Online-Question-Answering-System-for-Medical-Questions
HBAM: Hierarchical Bi-directional Word Attention Model
IDOL
Repo for paper "IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning" accepted to the Findings of ACL 2023
A-Neural-Symbolic-Paradigm
From Symbolic Logic Reasoning to Soft Reasoning: A Neural-Symbolic Paradigm
Logical-and-abstract-reasoning
Evaluation on Logical Reasoning and Abstract Reasoning Challenges
Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
The source code for "Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning". Ranked #1 on the ReClor leaderboard; we are the first group worldwide to score above 90% on the hidden test set. The paper has been accepted to the Findings of ACL 2024.
Multi-Step-Deductive-Reasoning-Over-Natural-Language
Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation
PARARULE-Plus
PARARULE Plus: A Larger Deep Multi-Step Reasoning Dataset over Natural Language
14H034160212's Repositories
14H034160212/HHH-An-Online-Question-Answering-System-for-Medical-Questions
HBAM: Hierarchical Bi-directional Word Attention Model
14H034160212/IDOL
Repo for paper "IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning" accepted to the Findings of ACL 2023
14H034160212/14H034160212
14H034160212/14h034160212.github.io
This is Qiming Bao.
14H034160212/ChatDoctor
14H034160212/ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
14H034160212/al8n
14H034160212/alpaca-lora
Instruct-tune LLaMA on consumer hardware
14H034160212/datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
14H034160212/DB-GPT
Revolutionizing Database Interactions with Private LLM Technology
14H034160212/ERNIE-Layout-Pytorch
An unofficial Pytorch implementation of ERNIE-Layout which is originally released through PaddleNLP.
14H034160212/evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
14H034160212/FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
14H034160212/github-readme-stats
:zap: Dynamically generated stats for your github readmes
14H034160212/gpt4free
Decentralising the AI industry, just some language model APIs...
14H034160212/llama
Inference code for LLaMA models
14H034160212/LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
14H034160212/llama-recipes
Examples and recipes for Llama 2 model
14H034160212/LLM-Tuning
Tuning LLMs with no tears💦, sharing LLM-tools with love❤️.
14H034160212/LLMZoo
⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡
14H034160212/Logic-LLM
The project page for "LOGIC-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning"
14H034160212/LogiQA2.0
Logiqa2.0 dataset - logical reasoning in MRC and NLI tasks
14H034160212/LongLoRA
Efficient long-context fine-tuning, supervised fine-tuning, LongQA dataset.
14H034160212/MOSS-RLHF
MOSS-RLHF
14H034160212/prm800k
800,000 step-level correctness labels on LLM solutions to MATH problems
14H034160212/Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
14H034160212/Qiming_genie_challenge
14H034160212/ReasoningNLP
paper list on reasoning in NLP
14H034160212/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
14H034160212/trl
Train transformer language models with reinforcement learning.