LamnouarMohamed
Computer Vision Engineer | Multi-Camera Multiple Object Tracking
@ SiliconeSignal Technologies · Morocco
LamnouarMohamed's Stars
google-research/bert
TensorFlow code and pre-trained models for BERT
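A minimal sketch of what these pre-trained checkpoints are typically used for: extracting contextual embeddings. It uses the Hugging Face transformers port of BERT rather than the repository's own TensorFlow code; bert-base-uncased is the standard public checkpoint name.

```python
# Sketch: contextual token embeddings from a pre-trained BERT.
# Uses the Hugging Face transformers port, not this repo's TensorFlow code.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT produces contextual token embeddings.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size=768)
```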
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
chatchat-space/Langchain-Chatchat
Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-base RAG and Agent application built with LangChain and LLMs such as ChatGLM, Qwen, and Llama.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
s0md3v/roop
one-click face swap
Vision-CAIR/MiniGPT-4
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
PromtEngineer/localGPT
Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
THUDM/ChatGLM2-6B
ChatGLM2-6B: an open-source bilingual chat LLM.
openai/gpt-3
GPT-3: Language Models are Few-Shot Learners
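A hedged illustration of the paper's few-shot idea: a handful of labeled examples are placed directly in the prompt and the model is asked to continue the pattern. The sentiment task and the example strings are made up for illustration; no API call is made.

```python
# Few-shot prompting sketch: labeled examples in the prompt,
# followed by an unlabeled query for the model to complete.
examples = [
    ("The movie was fantastic.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
query = "The plot dragged, but the acting was superb."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # send this to a completion-style model such as GPT-3
```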
BradyFU/Awesome-Multimodal-Large-Language-Models
✨✨ Latest Advances on Multimodal Large Language Models
openai/spinningup
An educational resource to help anyone learn deep reinforcement learning.
hediet/vscode-debug-visualizer
An extension for VS Code that visualizes data during debugging.
OFA-Sys/Chinese-CLIP
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
Luodian/Otter
🦦 Otter, a multi-modal model based on OpenFlamingo (an open-source version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
OpenBMB/BMTools
Tool Learning for Big Models, with open-source solutions for ChatGPT plugins.
salesforce/CodeT5
Home of CodeT5: Open Code LLMs for Code Understanding and Generation
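A sketch of masked-span code infilling with a public CodeT5 checkpoint, assuming the Hugging Face classes and the Salesforce/codet5-base model name from the model card; this is not the repository's training code.

```python
# Sketch: CodeT5 masked-span infilling via Hugging Face transformers.
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

text = "def greet(user): print('hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```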
OpenGVLab/InternImage
[CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions
Forethought-Technologies/AutoChain
AutoChain: Build lightweight, extensible, and testable LLM Agents
OpenGVLab/InternVideo
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
pydantic/pydantic-core
Core validation logic for pydantic written in rust
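pydantic-core is normally consumed through the Python-level pydantic v2 API; a minimal sketch, assuming pydantic v2 is installed, of the validation and coercion that the Rust core executes under the hood. The Detection model and its fields are hypothetical.

```python
# Sketch: pydantic v2 validation; coercion and error reporting are
# performed by the Rust validators in pydantic-core.
from pydantic import BaseModel, ValidationError

class Detection(BaseModel):  # hypothetical model for illustration
    track_id: int
    score: float

print(Detection.model_validate({"track_id": "7", "score": "0.93"}))  # strings coerced

try:
    Detection.model_validate({"track_id": "seven", "score": 0.9})
except ValidationError as err:
    print(err)
```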
a16z-infra/llama2-chatbot
LLaMA v2 Chatbot
cuda-mode/resource-stream
CUDA-related news and material links
kuleshov/minillm
MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs
HenryHZY/Awesome-Multimodal-LLM
Research Trends in LLM-guided Multimodal Learning.
e-johnstonn/FableForge
Generate a picture book from a single prompt using OpenAI function calling, Replicate, and Deep Lake
e-johnstonn/SalesCopilot
Intelligent sales assistant built using Deep Lake, Whisper, LangChain, and GPT 3.5/4
hustvl/SparseTrack
Official PyTorch implementation of SparseTrack (a new version of the code is coming soon)
MediaBrain-SJTU/EqMotion
[CVPR2023] EqMotion: Equivariant Multi-agent Motion Prediction with Invariant Interaction Reasoning
mrdbourke/learn-transformers
Work in progress. Simple repository to learn Transformers (and transformers).
neoeno/aoc2023