Experiments with multimodal models
The Docker image is built from ./Dockerfile. From the root of the repository, run:
docker build --build-arg WB_API_KEY=<your_api_key> . -f Dockerfile --rm -t llm-finetuning:latest
Navigate to this repository on your local machine, then run the container:
docker run --gpus all --name multimodal-experimentation -it --rm -p 8888:8888 -p 8501:8501 -p 8000:8000 --entrypoint /bin/bash -w /multimodal-experimentation -v $(pwd):/multimodal-experimentation llm-finetuning:latest
Inside the container, start JupyterLab:
jupyter lab --ip 0.0.0.0 --no-browser --allow-root --NotebookApp.token=''
On the host machine, open this URL:
localhost:8888/<YOUR TREE HERE>
Inside the container, start the Streamlit app:
streamlit run app.py
On the host machine, open:
localhost:8501
- Upgrade the transformers version to one that includes Gemma 2 support. Save the model locally.
- Look into implementing these:
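The "save model locally" TODO above can be sketched with the transformers save_pretrained / from_pretrained round trip. Because Gemma 2 is large and gated, this sketch builds a tiny randomly initialized GPT-2 model as a stand-in; the model sizes and the ./local_model path are illustrative assumptions, not the repo's actual settings — swap in the real Gemma 2 model id once access and the upgraded transformers version are in place.

```python
# Hedged sketch: save_pretrained / from_pretrained round trip.
# A tiny random GPT-2 stands in for Gemma 2 (which is gated and large);
# all names, sizes, and paths below are illustrative assumptions.
from transformers import AutoConfig, AutoModelForCausalLM

# Build a minimal model locally (no download) just to show the save pattern.
config = AutoConfig.for_model("gpt2", n_layer=1, n_head=2, n_embd=8)
model = AutoModelForCausalLM.from_config(config)

# Persist weights + config to disk so later runs can load without the Hub.
model.save_pretrained("./local_model")

# Reload entirely from the local directory.
reloaded = AutoModelForCausalLM.from_pretrained("./local_model")
print(reloaded.config.n_layer)
```

For the real model, `AutoModelForCausalLM.from_pretrained("<gemma-2 model id>")` followed by the same `save_pretrained` call produces a local copy that subsequent runs can load offline.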