Pinned Repositories
datafusion
Apache DataFusion SQL Query Engine
nifi
Apache NiFi
nifi-minifi-cpp
Apache NiFi - MiNiFi C++
dask-sql
Distributed SQL Engine in Python using Dask
docker-hwx
Combination of Dockerized Hortonworks projects and other Hadoop ecosystem components
docker-nifi
Apache NiFi Docker Environment
nifi-addons
Additional convenience processors not found in core Apache NiFi
nifi-opencv
cudf
cuDF - GPU DataFrame Library
jdye64's Repositories
jdye64/garcon
Device Registry for all components of Apache NiFi
jdye64/cudf
cuDF - GPU DataFrame Library
jdye64/dask-sql
Distributed SQL Engine in Python using Dask
jdye64/nifi
Apache NiFi
jdye64/arrow-datafusion
Apache Arrow DataFusion and Ballista query engines
jdye64/arrow-datafusion-python
Apache Arrow DataFusion Python Bindings
jdye64/arrow-testing
Auxiliary testing files for Apache Arrow
jdye64/core
🏡 Open source home automation that puts local control and privacy first.
jdye64/dask
Parallel computing with task scheduling
jdye64/GenerativeAIExamples
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
jdye64/haystack
🔍 LLM orchestration framework for building customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) into pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots.
jdye64/jupyter-ai
A generative AI extension for JupyterLab
jdye64/langchain
⚡ Building applications with LLMs through composability ⚡
jdye64/litellm
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, SageMaker, Hugging Face, Replicate (100+ LLMs)
jdye64/llama_index
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
jdye64/NeMo
NeMo: a toolkit for conversational AI
jdye64/NeMo-Curator
Scalable toolkit for data curation
jdye64/pandas-ai
PandasAI is the Python library that integrates Gen AI into pandas, making data analysis conversational
jdye64/polars
Fast multi-threaded, hybrid out-of-core query engine focusing on DataFrame front-ends
jdye64/qpml
Query Plan Markup Language
jdye64/remote-land-management
Scripts that assist with managing land when you are not physically nearby
jdye64/rust-experiments
Simple experimentation playground for random Rust research
jdye64/server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
jdye64/sqlparser-rs
Extensible SQL Lexer and Parser for Rust
jdye64/substrait-rs
jdye64/substrait-tests
jdye64/TensorRT
NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
jdye64/tpch
jdye64/typify
JSON Schema -> Rust type converter
jdye64/unifi-cam-proxy
Enable non-Ubiquiti cameras to work with Unifi NVR