Awesome-List-of-LLM_VLMs

A GitHub awesome list of recent LLMs and VLMs

🌐✨Summarizing the latest LLMs and VLMs! Helping you quickly and easily choose and use large models! 😄
This is the repository's navigation page; the main Awesome Lists: LLMs🚀 | VLMs🚀
Supported languages: 中文🚀 | English

Welcome to our repository🥰, a comprehensive navigation page that connects you to the most relevant resources and summary platforms for the latest large models (including LLMs🚀 and VLMs🚀). Whether you're looking for benchmarks💯, comparisons⚖️, or surveys📖, we've got you covered. Explore the sections below to find the information you need:

  • Benchmarking Inference Speed of Large Language Models🚀

GPU-Benchmarks-on-LLM-Inference benchmarks models such as LLaMA 3 with the llama.cpp tool on a range of NVIDIA GPUs and Apple Silicon devices, measuring performance in tokens generated per second. Coverage includes the NVIDIA RTX 3000 and 4000 series and the A100, as well as Apple's M1, M2, and M3 chips.
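
For intuition, tokens per second can be measured locally in the same spirit. Below is a minimal sketch using the llama-cpp-python bindings; the GGUF filename is a placeholder, and the repository itself drives llama.cpp directly rather than through this binding:

```python
# Minimal tokens-per-second sketch with llama-cpp-python (Python bindings
# for llama.cpp). The model path is a placeholder; any local GGUF file works.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder local file
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    verbose=False,
)

start = time.perf_counter()
out = llm("Explain speculative decoding in one paragraph.", max_tokens=128)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]  # tokens actually generated
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tok/s")
```

Generation speed depends heavily on quantization, context length, and batch settings, so numbers from a quick script like this are only comparable when those are held fixed, which is exactly what the benchmark repository does.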

  • Comprehensive Analysis and Comparison of Large Language Models🔍

The website LifeArchitect.ai/models provides a comprehensive analysis and comparison of large language models (LLMs) such as GPT-3, GPT-4, and PaLM, detailing their sizes, capabilities, and training data.

  • Reliable Measurement of Large Language Model Response Times⏱️

TheFastest.ai offers reliable performance measurements for popular large language models (LLMs) based on response times. It compares models across multiple data centers (e.g., US West, US East, and Europe), focusing on metrics like Time to First Token (TTFT) and Tokens Per Second (TPS), with statistics updated daily.
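
Both metrics are straightforward to measure yourself against any OpenAI-compatible streaming endpoint. The sketch below assumes the official openai Python SDK and a placeholder model name, and it approximates the token count by the number of streamed content chunks:

```python
# Rough TTFT / TPS measurement against an OpenAI-compatible streaming
# endpoint. Model name and prompt are placeholders; counting streamed
# chunks approximates the token count closely but not exactly.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Count from 1 to 50."}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first visible token: TTFT
        chunks += 1
end = time.perf_counter()

ttft = first_token_at - start
tps = chunks / (end - first_token_at)
print(f"TTFT: {ttft * 1000:.0f} ms, ~{tps:.1f} tokens/s")
```

TTFT is dominated by network latency and prompt processing, while TPS reflects steady-state decoding speed, which is why TheFastest.ai reports them separately and per data center.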

  • Comprehensive Survey of Vision-Language Models📊

VLM_survey is a repository summarizing and surveying the latest vision-language models (VLMs), including links to relevant papers. It covers:

  1. Overview of Vision-Language Models: Reviews VLM research in image classification, object detection, and semantic segmentation.
  2. Pre-training Methods: Summarizes network architectures, pre-training objectives, and downstream tasks for VLMs.
  3. Transfer Learning Methods: Discusses transfer learning strategies for VLMs in different tasks.
  4. Knowledge Distillation Methods: Examines knowledge distillation techniques in tasks like object detection and semantic segmentation.