Pinned Repositories
fine-tuning-llms-on-aws
In this lab, you will learn how to use Amazon SageMaker to fine-tune a pretrained Hugging Face LLM on AWS Trainium accelerators, and then use the fine-tuned model for inference on AWS Inferentia.
operational-insights-using-amazon-devops-guru-innovate
Optimizing-GPU-Utilization-for-AI-ML-Workloads
reference-toolkit
Python-based tools for managing bibliographies using BibTeX
research
retail-agent-and-guardrails-for-bedrock