YanSte
ML/AI Engineer | Senior Software Engineer | Open-source contributor 🚀 | Working in Zürich 🇨🇭
Zurich
YanSte's Stars
langchain-ai/streamlit-agent
Reference implementations of several LangChain agents as Streamlit apps
Binaryify/OneDark-Pro
Atom's iconic One Dark theme for Visual Studio Code
dair-ai/Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
DanielWarfield1/MLWritingAndResearch
Notebook Examples used in machine learning writing and research
ollama/ollama
Get up and running with Llama 3.3, Mistral, Gemma 2, and other large language models.
openlm-research/open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
ChandraLingam/DataLake
This course provides an in-depth understanding of the key elements of a data lake architecture, including strategies for managing changes and evolving schemas. You will also learn to use SQL to query files directly.
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods covering single- and multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Demo apps showcase Meta Llama for WhatsApp & Messenger.
YanSte/deep-learning-pytorch-huggingface
YanSte/NLP-LLM-Fine-tuning-QA-LoRA-T5
Natural Language Processing (NLP) and Large Language Models (LLMs): fine-tuning an LLM with LoRA and Flan-T5 Large to build a question-answering (QA) chatbot
YanSte/simple-rag
YanSte/NLP-LLM-Fine-tuning-Llame-2-QLoRA-2024
Natural Language Processing (NLP) and Large Language Models (LLMs): fine-tuning Llama 2 with QLoRA in 2024
lamini-ai/lamini-examples
lamini-ai/simple-rag
lamini-ai/prompt-engineering-open-llms
Dao-AILab/flash-attention
Fast and memory-efficient exact attention
ashishpatel26/LLM-Finetuning
LLM fine-tuning with PEFT
xding2/Hands-On-NLP-Model
Fine-tuning models for different NLP tasks
meta-llama/llama
Inference code for Llama models
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
Abonia1/LLM-finetuning
This repository provides code and resources for Parameter Efficient Fine-Tuning (PEFT), a technique for improving fine-tuning efficiency in natural language processing tasks.
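Several of the starred repositories above revolve around PEFT/LoRA. As a rough illustration of the underlying idea (a toy, framework-free sketch with hypothetical sizes, not code from any of these repos): instead of updating a full d×d weight matrix W, LoRA trains two small matrices B (d×r) and A (r×d) with r ≪ d, and adds their scaled product to the frozen weights.

```python
# Toy LoRA update in plain Python (illustrative sizes; no ML framework assumed).

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d, r, alpha = 64, 4, 8               # hidden size, LoRA rank, LoRA scaling factor

# Frozen base weights: identity matrix as a stand-in for a pretrained layer.
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

B = [[0.1] * r for _ in range(d)]    # trainable down-projection (d x r)
A = [[0.2] * d for _ in range(r)]    # trainable up-projection   (r x d)

# Effective weights at inference: W + (alpha / r) * B @ A
scale = alpha / r
delta = [[scale * v for v in row] for row in matmul(B, A)]
W_adapted = [[w + dw for w, dw in zip(w_row, d_row)] for w_row, d_row in zip(W, delta)]
```

With d = 64 and r = 4 this trains only 2·d·r = 512 adapter parameters instead of d² = 4096, which is the efficiency gain PEFT-style fine-tuning exploits.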
ovh/ai-training-examples
philschmid/deep-learning-pytorch-huggingface
imoneoi/openchat-ui
An open source UI for OpenChat models
imoneoi/openchat
OpenChat: Advancing Open-source Language Models with Imperfect Data
brevdev/notebooks
Collection of notebook guides created by the Brev.dev team!
Ki6an/fastT5
⚡ Boost inference speed of T5 models by 5× and reduce model size by 3×.
YanSte/NLP-PEFT-LoRA-DialogSum-Dialogue-Summarize
Exploration of Large Language Model (LLM) capabilities, specifically dialogue summarization. It highlights a fine-tuning approach called Parameter-Efficient Fine-Tuning (PEFT)
YanSte/YanSte
Me 🙂
YanSte/NLP-PPO-DialogSum-Less-Toxic-Summarize
NLP (Natural Language Processing) with PPO (Proximal Policy Optimization) reinforcement learning for less-toxic dialogue summarization
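The PPO-based detoxification in the last repo relies on PPO's clipped surrogate objective. A minimal sketch of that objective in plain Python (toy numbers, no RL framework assumed; not code from the repo itself):

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate objective for a single action.

    ratio = pi_new(a|s) / pi_old(a|s), computed from log-probabilities;
    clipping the ratio to [1 - eps, 1 + eps] limits how far one update
    can move the policy away from the old one.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# With a positive advantage, gains from pushing the ratio past 1 + eps are
# clipped: ratio = exp(0.5) ~ 1.65, but the objective uses the clipped 1.2.
obj = ppo_clip_objective(logp_new=-0.5, logp_old=-1.0, advantage=2.0)
```

In RLHF-style detoxification, the advantage comes from a reward model scoring generated summaries for toxicity, and this objective updates the summarizer policy.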