Pinned Repositories
Gaudi-tutorials
Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. These are the source files for the tutorials at https://developer.habana.ai/.
GenAIComps
GenAI components at the micro-service level, plus a GenAI service composer for building a mega-service.
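The composer idea above can be sketched in a few lines: chain independent micro-services so each stage's output feeds the next. This is a hypothetical illustration of the pattern only; the service names (`embed`, `retrieve`, `generate`) and the `compose` helper are invented for this sketch and are not the GenAIComps API.

```python
# Hypothetical sketch: composing micro-services into one mega-service.
# Illustrative only; GenAIComps defines its own orchestration API.
from typing import Callable

Service = Callable[[str], str]

def embed(text: str) -> str:          # stand-in for an embedding service
    return f"embedded({text})"

def retrieve(query: str) -> str:      # stand-in for a retrieval service
    return f"docs-for({query})"

def generate(context: str) -> str:    # stand-in for an LLM service
    return f"answer-from({context})"

def compose(*stages: Service) -> Service:
    """Chain services so each stage's output becomes the next stage's input."""
    def mega_service(request: str) -> str:
        for stage in stages:
            request = stage(request)
        return request
    return mega_service

chatqna = compose(embed, retrieve, generate)
print(chatqna("What is OPEA?"))
```

In a real deployment each stage would be a network call to a separately deployed service; the composition logic stays the same.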
GenAIExamples
Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
intel-itex
LLaVA
Visual Instruction Tuning: a Large Language-and-Vision Assistant built toward multimodal GPT-4-level capabilities.
neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques for TensorFlow, PyTorch, and ONNX Runtime.
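As a concept sketch of the low-bit schemes listed above, symmetric per-tensor INT8 quantization maps float weights onto the integer range [-127, 127] with a single scale. This is a generic illustration of the technique, not neural-compressor's API; the function names are invented for this example.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: one scale for the whole tensor."""
    scale = float(np.max(np.abs(x))) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the INT8 codes."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 1.19], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Per-element error is bounded by half a quantization step (scale / 2).
```

Production tools refine this basic recipe with per-channel scales, calibration data, and accuracy-aware tuning.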
text-data-generation
AI Starter Kit for unstructured synthetic data generation using Intel® Extension for PyTorch.
tvm-onednn
Enables oneDNN in TVM.
xFasterTransformer
LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built toward GPT-4V-level capabilities and beyond.
feng-intel's Repositories
feng-intel/intel-itex
feng-intel/Gaudi-tutorials
Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. These are the source files for the tutorials at https://developer.habana.ai/.
feng-intel/GenAIComps
GenAI components at the micro-service level, plus a GenAI service composer for building a mega-service.
feng-intel/GenAIExamples
Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
feng-intel/LLaVA
Visual Instruction Tuning: a Large Language-and-Vision Assistant built toward multimodal GPT-4-level capabilities.
feng-intel/neural-compressor
SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) and sparsity; leading model compression techniques for TensorFlow, PyTorch, and ONNX Runtime.
feng-intel/text-data-generation
AI Starter Kit for unstructured synthetic data generation using Intel® Extension for PyTorch.
feng-intel/tvm-onednn
Enables oneDNN in TVM.
feng-intel/xFasterTransformer