- The first thing to do on launch is to open a new shell and verify that the virtualenv is sourced.
Things included are:

- Makefile
- Pytest
- pandas
- Pylint
- Dockerfile
- GitHub Copilot
- jupyter and ipython
- Most common Python libraries for ML/DL and Hugging Face
- GitHub Actions
- Zero-shot classification: `./hugging-face/zero_shot_classification.py classify`
- YAKE for candidate-label creation: `./utils/kw_extract.py`
Run the BentoML quickstart container:

```bash
docker run -it --rm -p 8888:8888 -p 3000:3000 -p 3001:3001 bentoml/quickstart:latest
```
The following examples test the GPU:
- run PyTorch training test: `python utils/quickstart_pytorch.py`
- run PyTorch CUDA test: `python utils/verify_cuda_pytorch.py`
- run TensorFlow training test: `python utils/quickstart_tf2.py`
- run NVIDIA monitoring test: `nvidia-smi -l 1` (it should show a GPU)
- run Whisper transcribe test: `./utils/transcribe-whisper.sh` and verify the GPU is working with `nvidia-smi -l 1`
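If any of the GPU tests above fail, a quick way to narrow down whether PyTorch can see the device at all is a short check like the following. This is a minimal sketch; the repo's actual `utils/verify_cuda_pytorch.py` may differ.

```python
# Minimal CUDA visibility check (sketch; not the repo's verify script).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"CUDA available: {torch.cuda.get_device_name(0)}")
    # Run a tiny tensor op on the GPU to confirm it actually executes there.
    x = torch.rand(3, 3, device=device)
    print((x @ x).device)
else:
    print("CUDA not available; falling back to CPU.")
```

On a correctly configured machine the matmul result should report a `cuda` device, matching what `nvidia-smi` shows.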
Additionally, this workspace is set up to fine-tune Hugging Face models:

`python hf_fine_tune_hello_world.py`
This repository is used as the base and customized in the following Duke MLOps and Applied Data Engineering Coursera labs:
- MLOPs-C2-Lab1-CICD
- MLOps-C2-Lab2-PokerSimulator
- MLOps-C2-Final-HuggingFace
- Coursera-MLOps-C2-lab3-probability-simulations
- Coursera-MLOps-C2-lab4-greedy-optimization
- Watch the GitHub Universe talk: Teaching MLOps at scale with GitHub
- Building Cloud Computing Solutions at Scale Specialization
- Python, Bash and SQL Essentials for Data Engineering Specialization
- Implementing MLOps in the Enterprise
- Practical MLOps: Operationalizing Machine Learning Models
- Coursera-Dockerfile