This is a collection of guides and examples for Google Gemma.
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. The Gemma model family includes:
- the base Gemma models
- Gemma variants, such as CodeGemma and PaliGemma
You can find the Gemma models on GitHub, Hugging Face models, Kaggle, Google Cloud Vertex AI Model Garden, and ai.nvidia.com.
Name | Description |
---|---|
Common_use_cases.ipynb | Illustrate some common use cases for Gemma, CodeGemma and PaliGemma. |
Gemma | |
Keras_Gemma_2_Quickstart.ipynb | Gemma 2 pre-trained 9B model quickstart tutorial with Keras. |
Keras_Gemma_2_Quickstart_Chat.ipynb | Gemma 2 instruction-tuned 9B model quickstart tutorial with Keras. Referenced in this blog. |
Chat_and_distributed_pirate_tuning.ipynb | Chat with Gemma 7B and finetune it so that it generates responses in a pirate's tone. |
gemma_inference_on_tpu.ipynb | Basic inference of Gemma with JAX/Flax on TPU. |
gemma_data_parallel_inference_in_jax_tpu.ipynb | Parallel inference of Gemma with JAX/Flax on TPU. |
Gemma_control_vectors.ipynb | Implement control vectors with Gemma, as shown in the I/O 2024 Keras talk. |
Self_extend_Gemma.ipynb | Self-extend the context window of Gemma, as shown in the I/O 2024 Keras talk. |
Gemma_Basics_with_HF.ipynb | Load, run, finetune and deploy Gemma using Hugging Face (a minimal loading sketch follows the table). |
Guess_the_word.ipynb | Play a word guessing game with Gemma using Keras. |
Game_Design_Brainstorming.ipynb | Use Gemma to brainstorm ideas during game design using Keras. |
Translator_of_Old_Korean_Literature.ipynb | Use Gemma to translate old Korean literature using Keras. |
Gemma2_on_Groq.ipynb | Leverage the free Gemma 2 9B IT model hosted on Groq for very fast inference. |
Prompt_chaining.ipynb | Illustrate prompt chaining and iterative generation with Gemma. |
Advanced_Prompting_Techniques.ipynb | Illustrate advanced prompting techniques with Gemma. |
Run_with_Ollama.ipynb | Run Gemma models using Ollama. |
Deploy_with_vLLM.ipynb | Deploy a Gemma model using vLLM (a minimal inference sketch follows the table). |
Deploy_Gemma_in_Vertex_AI.ipynb | Deploy a Gemma model using Vertex AI. |
RAG_with_ChromaDB.ipynb | Build a Retrieval Augmented Generation (RAG) system with Gemma using ChromaDB and Hugging Face (a minimal RAG sketch follows the table). |
Minimal_RAG.ipynb | Minimal example of building a RAG system with Gemma using Google UniSim and Hugging Face. |
RAG_PDF_Search_in_multiple_documents_on_Colab.ipynb | RAG PDF Search in multiple documents using Gemma 2 2B on Google Colab. |
Using_Gemma_with_LangChain.ipynb | Examples to demonstrate using Gemma with LangChain. |
Gemma_RAG_LlamaIndex.ipynb | RAG example with LlamaIndex using Gemma. |
Integrate_with_Mesop.ipynb | Integrate Gemma with Google Mesop. |
Integrate_with_OneTwo.ipynb | Integrate Gemma with Google OneTwo. |
Finetune_with_Axolotl.ipynb | Finetune Gemma using Axolotl. |
Finetune_with_XTuner.ipynb | Finetune Gemma using XTuner. |
Finetune_with_LLaMA_Factory.ipynb | Finetune Gemma using LLaMA-Factory. |
PaliGemma | |
Image_captioning_using_PaliGemma.ipynb | Use PaliGemma to generate image captions using Keras. |
Image_captioning_using_finetuned_PaliGemma.ipynb | Compare the image captioning results using different PaliGemma versions with Hugging Face. |
Finetune_PaliGemma_for_image_description.ipynb | Finetune PaliGemma for image description using JAX. |
Integrate_PaliGemma_with_Mesop.ipynb | Integrate PaliGemma with Google Mesop. |
Zero_shot_object_detection_in_images_using_PaliGemma.ipynb | Zero-shot Object Detection in images using PaliGemma. |
Zero_shot_object_detection_in_videos_using_PaliGemma.ipynb | Zero-shot Object Detection in videos using PaliGemma. |
Referring_expression_segmentation_in_images_using_PaliGemma.ipynb | Referring Expression Segmentation in images using PaliGemma. |
Referring_expression_segmentation_in_videos_using_PaliGemma.ipynb | Referring Expression Segmentation in videos using PaliGemma. |
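For the Hugging Face notebooks above (for example Gemma_Basics_with_HF.ipynb), loading and prompting Gemma follows the standard Transformers flow. The sketch below is a minimal illustration, not the notebooks' exact code; the `google/gemma-2-2b-it` checkpoint and the bfloat16 / `device_map="auto"` settings are assumptions you can swap for your own setup.

```python
# Minimal sketch: load a Gemma instruction-tuned checkpoint and generate a reply.
# Assumes `transformers`, `accelerate`, and `torch` are installed and you have
# accepted the Gemma license on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumption: any Gemma checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Instruction-tuned Gemma expects its chat template to be applied to the prompt.
messages = [{"role": "user", "content": "Give me one tip for writing good prompts."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```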
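Similarly, Deploy_with_vLLM.ipynb builds on vLLM's standard entry points. The sketch below shows offline batch inference with the in-process `LLM` class under the same assumed model id; a served deployment would typically use vLLM's OpenAI-compatible server instead.

```python
# Minimal sketch: offline batch inference with vLLM (assumed model id).
from vllm import LLM, SamplingParams

llm = LLM(model="google/gemma-2-2b-it")
sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

# generate() accepts a list of prompts and returns one RequestOutput per prompt.
outputs = llm.generate(["Write a haiku about open models."], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```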
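Finally, retrieval notebooks such as RAG_with_ChromaDB.ipynb follow the common retrieve-then-generate pattern: index documents, fetch the closest match for a question, and prepend it to the prompt. The sketch below uses ChromaDB's default embedding function with toy documents; `generate` is a hypothetical placeholder for whichever Gemma inference path you prefer (Keras, Transformers, or vLLM as above).

```python
# Minimal RAG sketch: index two toy documents, retrieve the best match for a
# question, and build an augmented prompt for Gemma.
import chromadb

client = chromadb.Client()  # in-memory client; a real app might persist to disk
collection = client.create_collection("docs")
collection.add(
    ids=["1", "2"],
    documents=[
        "Gemma is a family of lightweight open models from Google.",
        "PaliGemma is a vision-language variant of Gemma.",
    ],
)

question = "Which Gemma variant handles images?"
result = collection.query(query_texts=[question], n_results=1)
context = result["documents"][0][0]  # top document for the first (only) query

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# answer = generate(prompt)  # hypothetical helper: call Gemma with the augmented prompt
```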
Ask a Gemma cookbook-related question on the new Build with Google AI Forum, or open an issue on GitHub.
If you would like to see additional cookbooks for specific features or integrations, please send us a pull request adding your feature request(s) to the wish list.
If you want to contribute to the Gemma Cookbook project, you are welcome to pick any idea from the wish list and implement it.
Contributions are always welcome. Please read the contributing guide before you start.
Thank you for developing with Gemma! We’re excited to see what you create.