Pinned Repositories
awesome-awesome-machine-learning
A curated list of awesome lists across all machine learning topics. | A compilation of resource lists for every topic in machine learning / deep learning / artificial intelligence (learning paradigms / tasks / applications / models / ethics / interdisciplinary fields / datasets / frameworks / tutorials).
awesome-multi-view-clustering
Collections of advanced, novel multi-view clustering methods (papers, code, and datasets)
awesome-omics
A collection of awesome things regarding all omics.
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
DAI-Net
Official implementation for the paper "DAI-Net: Dual Adaptive Interaction Network for Coordinated Medication Recommendation"
DPNET
DPNET: Dynamic Poly-attention Network for Trustworthy Multi-modal Classification
GMAE
Learning Disentangled Representations for Generalized Multi-view Clustering
IiAGL-MC
Inclusivity Induced Adaptive Graph Learning for Multi-view Clustering
Multi-view-learning-methods-with-code
needRemember
Essential fundamentals of algorithm theory
obananas's Repositories
obananas/GMAE
Learning Disentangled Representations for Generalized Multi-view Clustering
obananas/DAI-Net
Official implementation for the paper "DAI-Net: Dual Adaptive Interaction Network for Coordinated Medication Recommendation"
obananas/DPNET
DPNET: Dynamic Poly-attention Network for Trustworthy Multi-modal Classification
obananas/IiAGL-MC
Inclusivity Induced Adaptive Graph Learning for Multi-view Clustering
obananas/Multi-view-learning-methods-with-code
obananas/awesome-awesome-machine-learning
A curated list of awesome lists across all machine learning topics. | A compilation of resource lists for every topic in machine learning / deep learning / artificial intelligence (learning paradigms / tasks / applications / models / ethics / interdisciplinary fields / datasets / frameworks / tutorials).
obananas/awesome-multi-view-clustering
Collections of advanced, novel multi-view clustering methods (papers, code, and datasets)
obananas/awesome-omics
A collection of awesome things regarding all omics.
obananas/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
obananas/deep-learning-for-image-processing
Deep learning for image processing, including classification, object detection, etc.
obananas/draw_io
obananas/fucking-algorithm
Solving algorithm problems is all about patterns, and labuladong is all you need! English version supported! Crack LeetCode, not only how, but also why.
obananas/GitHubDaily
Consistently sharing high-quality, interesting, and practical open-source technical tutorials, developer tools, programming websites, and tech news from GitHub.
obananas/GNNPapers
Must-read papers on graph neural networks (GNN)
obananas/insightface
State-of-the-art 2D and 3D Face Analysis Project
obananas/machinelearning
My blogs and code for machine learning. http://cnblogs.com/pinard
obananas/MemVR
Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models'.
obananas/mmcv
OpenMMLab Computer Vision Foundation
obananas/mmediting
OpenMMLab Image and Video Processing, Editing and Synthesis Toolbox
obananas/NLP_ability
A summary of the knowledge a natural language processing (NLP) engineer needs to accumulate, including interview questions, fundamentals, and engineering skills, to strengthen core competitiveness.
obananas/obananas.github.io
obananas/skill-map
A skill map for programmers
obananas/SSMDrug
Official implementation for the paper "Drug Recommendation Method Based on Structured Sequence Modelling with Multi-Source Information"
obananas/xiaotun-project
Recommendation algorithms
obananas/awesome-Large-MultiModal-Hallucination
😎 An up-to-date, curated list of awesome LMM hallucination papers, methods, and resources.
obananas/Awesome-LVLM-Hallucination
An up-to-date, curated list of state-of-the-art research on hallucinations in large vision-language models: papers and resources
obananas/Awesome-MLLM-Hallucination
📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).
obananas/Awesome_Multimodel_LLM
Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLM). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.
obananas/llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
obananas/LVLM-Hallucinations-Survey
This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and continuously update our survey, we maintain this repository of relevant references.