Pinned Repositories
BDA-exercises
My solutions to exercises from avehtari's Bayesian Data Analysis course
BDA_course_Aalto
Bayesian Data Analysis course at Aalto
COVID-19_underreporting
Estimate the number of COVID-19 cases based on SARS hospitalization data and compare this estimate with confirmed cases (Brazil)
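The idea behind this estimate can be illustrated with a minimal sketch. This is not the repository's actual method; the function names, the assumed hospitalization rate, and all counts below are hypothetical, and the real analysis would use observed surveillance data:

```python
# Illustrative sketch: if a roughly fixed fraction of all infections leads to
# hospitalization, total cases can be back-calculated from hospitalization
# counts and compared with confirmed cases to gauge underreporting.
# All numbers and the hospitalization_rate value are hypothetical.

def estimate_total_cases(hospitalizations: int, hospitalization_rate: float) -> float:
    """Back-calculate total infections: cases = hospitalizations / rate."""
    return hospitalizations / hospitalization_rate

def underreporting_factor(estimated_cases: float, confirmed_cases: int) -> float:
    """Estimated actual cases per officially confirmed case."""
    return estimated_cases / confirmed_cases

hospitalizations = 5_000      # hypothetical SARS-surveillance hospitalization count
hospitalization_rate = 0.05   # hypothetical: 5% of infections are hospitalized
confirmed = 20_000            # hypothetical confirmed case count

estimated = estimate_total_cases(hospitalizations, hospitalization_rate)
print(estimated)                                    # 100000.0
print(underreporting_factor(estimated, confirmed))  # 5.0
```

A factor above 1 suggests confirmed counts understate true incidence; the estimate's quality hinges entirely on how well the assumed hospitalization rate is known.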
dbt-ml-preprocessing
A SQL port of Python's scikit-learn preprocessing module, provided as cross-database dbt macros.
h2ogpt
Private Q&A and summarization of documents+images or chat with local GPT, 100% private, Apache 2.0. Supports Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
Kubeflow_Pipelines
A step-by-step tutorial on building a Kubeflow Pipeline from scratch on your local machine.
lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
personal-website
pokemon_ml_project
An example project showing how to work with Kedro and MLflow for machine learning projects.
statrethinking_winter2019
Statistical Rethinking course at MPI-EVA from Dec 2018 through Feb 2019