Pinned Repositories
3dai
axis2-c
Mirror of Apache Axis2/C
DevCloudContent-docker-compose
dlstreamer
This repository is home to the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework, a streaming media analytics framework based on the GStreamer* multimedia framework for creating complex media analytics pipelines.
docker-intel-gpu-telegraf
GenAIComps
GenAI components at the microservice level, plus a GenAI service composer for building mega-services
i2v-pytorch-models
Inference containers for the Weaviate `img2vec-pytorch` module
intel-devcloud-demos
Intel DevCloud Media and AI demos
intel-extension-for-pytorch
A Python package that extends the official PyTorch for improved performance on Intel platforms
yolov8_efficientnet_demos
Execute a GStreamer pipeline with media-accelerated decode and a YOLOv8 + EfficientNet model ensemble, using either OpenVINO Model Server or DL Streamer for inference.
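As a rough illustration of what the yolov8_efficientnet_demos repository describes, the sketch below launches a DL Streamer pipeline from Python that chains a detector and a classifier. It is not the repository's actual script: the model file names, input path, and device choice are placeholder assumptions, and it presumes Intel DL Streamer and PyGObject are installed.

```python
# Minimal sketch (assumptions: DL Streamer installed, OpenVINO IR models
# named yolov8n.xml / efficientnet-b0.xml, a local input.mp4, GPU available).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_str = (
    "filesrc location=input.mp4 ! decodebin ! "
    "gvadetect model=yolov8n.xml device=GPU ! "            # detection stage (placeholder model)
    "gvaclassify model=efficientnet-b0.xml device=GPU ! "  # per-object classification (placeholder model)
    "gvafpscounter ! fakesink sync=false"
)

# Build the pipeline from the launch string and run it to EOS or error.
pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

The OpenVINO Model Server variant mentioned in the description would instead send decoded frames to a serving endpoint rather than running inference in-process; that path is not shown here.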
gsilva2016's Repositories
gsilva2016/yolov8_efficientnet_demos
Execute a GStreamer pipeline with media-accelerated decode and a YOLOv8 + EfficientNet model ensemble, using either OpenVINO Model Server or DL Streamer for inference.
gsilva2016/3dai
gsilva2016/dlstreamer
This repository is home to the Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework, a streaming media analytics framework based on the GStreamer* multimedia framework for creating complex media analytics pipelines.
gsilva2016/docker-intel-gpu-telegraf
gsilva2016/GenAIComps
GenAI components at the microservice level, plus a GenAI service composer for building mega-services
gsilva2016/i2v-pytorch-models
Inference containers for the Weaviate `img2vec-pytorch` module
gsilva2016/intel-devcloud-demos
Intel DevCloud Media and AI demos
gsilva2016/intel-extension-for-pytorch
A Python package that extends the official PyTorch for improved performance on Intel platforms
gsilva2016/intel-extension-for-transformers
⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel platforms ⚡
gsilva2016/jetson-containers
Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
gsilva2016/kvm-containers
gsilva2016/langchain
🦜🔗 Build context-aware reasoning applications
gsilva2016/llmrts-intel
gsilva2016/lms_intel_architecture
gsilva2016/model_server
A scalable inference server for models optimized with OpenVINO™
gsilva2016/multi-camera-people-tracking
Multi-camera people tracking: monitor people with multiple cameras and associate their tracks across cameras
gsilva2016/Open3D-ML
An extension of Open3D to address 3D Machine Learning tasks
gsilva2016/opencv_intel_gpu_accel
gsilva2016/openvino
OpenVINO™ Toolkit repository
gsilva2016/openvinotoolkit_model_server_legacy_intel_celeron
gsilva2016/ovms_maskrcnn_bit
gsilva2016/ovms_microservices
gsilva2016/pp3d
gsilva2016/sample-videos
Sample videos for running inference
gsilva2016/ultralytics
NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
gsilva2016/Video-LLaVA
Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
gsilva2016/vision-self-checkout
gsilva2016/vision-selfcheckout-demo24
Vision Self-Checkout CV and LMM Demo
gsilva2016/weaviate-examples
Weaviate vector database – examples
gsilva2016/weaviate-img2vec-client
Weaviate img2vec client in a Docker container