Pinned Repositories
AgentGPT
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
Auto-GPT-1
awesome-chatgpt-code-interpreter-experiments
Awesome things you can do with ChatGPT + Code Interpreter combo 🔥
chatgpt-askup-search-plugin
AskUp Search ChatGPT Plugin
clip_video_app
Flask-based web application designed to compare text and image embeddings using the CLIP model.
llama
Simple llama usage example
kbpark102's Repositories
kbpark102/llama
Simple llama usage example
kbpark102/AgentGPT
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
kbpark102/Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
kbpark102/Auto-GPT-1
kbpark102/awesome-chatgpt-code-interpreter-experiments
Awesome things you can do with ChatGPT + Code Interpreter combo 🔥
kbpark102/chatgpt-askup-search-plugin
AskUp Search ChatGPT Plugin
kbpark102/clip_video_app
Flask-based web application designed to compare text and image embeddings using the CLIP model.
kbpark102/ControlNet
Let us control diffusion models
kbpark102/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
kbpark102/FFMetrics
Visualizes Video Quality Metrics (PSNR, SSIM & VMAF) calculated by ffmpeg.exe
kbpark102/Fooocus
Focus on prompting and generating
kbpark102/Gemini
The open source implementation of Gemini, the model that will "eclipse ChatGPT" by Google
kbpark102/google-research
Google Research
kbpark102/GPT4Video
Official Code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation
kbpark102/Grounded-Segment-Anything
Grounded-SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
kbpark102/h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs
kbpark102/HiREST
Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023)
kbpark102/how-do-vits-work
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
kbpark102/llama-dl
High-speed download of LLaMA, Facebook's 65B parameter GPT model
kbpark102/papers-with-data
A curated list of papers that released datasets along with their work
kbpark102/pegasus-1-eval
Repository for evaluating Pegasus-1 and video-language foundation models
kbpark102/pymatting
A Python library for alpha matting
kbpark102/sam-hq
Segment Anything in High Quality
kbpark102/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
kbpark102/SegmentAnyRGBD
Segment Any RGBD
kbpark102/stable-diffusion-webui
Stable Diffusion web UI
kbpark102/stable-diffusion-webui-docker
Easy Docker setup for Stable Diffusion with user-friendly UI
kbpark102/TVLT
PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022)
kbpark102/visionscript
A high-level programming language for using computer vision.
kbpark102/visualblocks
Visual Blocks for ML is a Google visual programming framework that lets you create ML pipelines in a no-code graph editor. You – and your users – can quickly prototype workflows by connecting drag-and-drop ML components, including models, user inputs, processors, and visualizations.