mgiessing
Background in microelectronics and data science. Interested in accelerating AI, ML & DL with the right hardware, especially IBM Power Systems!
IBM · Frankfurt am Main, Germany
Pinned Repositories
BMW-TensorFlow-Training-GUI
This repository lets you get started with GUI-based training of a state-of-the-art deep learning model with little to no configuration needed! No-code training with TensorFlow has never been so easy.
pytorch-pretrained-bert-feedstock
A conda-smithy repository for pytorch-pretrained-bert.
bcn-lab-2084
CP4D
istio_build_scripts
kfctl
kfctl is a CLI for deploying and managing Kubeflow
ocp4-upi-kvm
OCP4 on KVM/Power
opence-container-build
watsonx-rag
WMLA
Watson Machine Learning Tutorials
mgiessing's Repositories
mgiessing/opence-container-build
mgiessing/watsonx-rag
mgiessing/bcn-lab-2084
mgiessing/qiskit-container-build
mgiessing/arrow
Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
mgiessing/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
mgiessing/brief-builder
mgiessing/cargo-chef
A cargo-subcommand to speed up Rust Docker builds using Docker layer caching.
mgiessing/core
The core library and APIs implementing the Triton Inference Server.
mgiessing/cphtestp
Environment for creating a Docker image that runs cph performance tests for persistent and non-persistent messaging against an MQ Queue Manager.
mgiessing/instructlab
InstructLab Command-Line Interface. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data.
mgiessing/knowhere
Knowhere is an open-source vector search engine, integrating FAISS, HNSW, etc.
mgiessing/kubeai
Private Open AI on Kubernetes
mgiessing/libc
Raw bindings to platform APIs for Rust
mgiessing/ocp4-upi-powervm-hmc
OpenShift on IBM PowerVM servers managed using HMC
mgiessing/onnx-mlir
Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure
mgiessing/onnxruntime-genai
Generative AI extensions for onnxruntime
mgiessing/onnxruntime_backend
The Triton backend for the ONNX Runtime.
mgiessing/openvscode-server
Run upstream VS Code on a remote machine with access through a modern web browser from any device, anywhere.
mgiessing/optimum
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
mgiessing/ppc64le-build-automation
mgiessing/pulsar-client-cpp
Apache Pulsar C++ client library
mgiessing/python_backend
Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python.
mgiessing/rstack
mgiessing/s5cmd
Parallel S3 and local filesystem execution tool.
mgiessing/safetensors
Simple, safe way to store and distribute tensors
mgiessing/torchserve-embedder-encoder-arm64
Embeddings Microservice for use in various projects
mgiessing/triton
Development repository for the Triton language and compiler
mgiessing/value-trait
A collection of traits for representing JSON-esque values
mgiessing/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs