FlameModules 🔥

FlameModules is an exploratory collection of Python code built on top of PyTorch. Designed as a learning tool, it offers deep insights into neural network operations and PyTorch functionality. Though crafted primarily for personal enlightenment, all are welcome to use it as a learning resource.

Below is a visual representation of the structure and relationship between various modules:

[Module relationship diagram]

Modules Overview

1. accelerators

  • Purpose: Manages hardware accelerators such as GPUs and TPUs and seamlessly integrates device-specific operations (see the sketch below).
  • Key Features:
    • Acceleration support for multiple devices
    • Decoupling from core logic
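
The snippet below is a minimal sketch in plain PyTorch of how device selection and placement typically work; the helper name `select_device` is hypothetical and not part of the FlameModules API.

```python
# Illustrative only: plain-PyTorch device selection, not the FlameModules API.
import torch

def select_device(preferred: str = "auto") -> torch.device:
    # Hypothetical helper: honour an explicit request, else pick the best available device.
    if preferred != "auto":
        return torch.device(preferred)
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = select_device()
model = torch.nn.Linear(4, 2).to(device)   # parameters move to the chosen device
batch = torch.randn(8, 4, device=device)   # inputs are created on the same device
output = model(batch)
```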

2. callbacks

  • Purpose: The callbacks module offers interaction points within the training process, supporting model checkpointing, early stopping, and metrics logging (a sketch follows below).
  • Key Features:
    • Flexible training interaction points
    • Model saving and logging
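
As a rough illustration of the callback idea, here is a minimal early-stopping callback in plain Python; the hook name `on_epoch_end` and the class names are assumptions, not the actual FlameModules interface.

```python
# Hypothetical hook names; the real FlameModules callback interface may differ.
class Callback:
    def on_epoch_end(self, epoch: int, metrics: dict) -> None:
        pass

class EarlyStopping(Callback):
    """Stop training once the monitored metric stops improving."""

    def __init__(self, monitor: str = "val_loss", patience: int = 3) -> None:
        self.monitor = monitor
        self.patience = patience
        self.best = float("inf")
        self.wait = 0
        self.should_stop = False

    def on_epoch_end(self, epoch: int, metrics: dict) -> None:
        current = metrics[self.monitor]
        if current < self.best:
            self.best, self.wait = current, 0
        else:
            self.wait += 1
            self.should_stop = self.wait >= self.patience
```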

3. core

  • Purpose: Acting as the bedrock of FlameModules, it provides the framework's crucial components and logic (illustrated below).
  • Key Features:
    • Fundamental structures
    • Primary framework logic
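
A minimal sketch of what a core training-module abstraction built on `torch.nn.Module` might look like; the class and method names (`training_step`, `configure_optimizers`) are assumptions borrowed from common PyTorch wrappers, not confirmed FlameModules APIs.

```python
# Hypothetical class and method names, shown only to ground the idea of a
# self-contained training module.
import torch
from torch import nn

class SimpleFlameModule(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

    def training_step(self, batch) -> torch.Tensor:
        # Forward pass plus loss for a single optimization step.
        x, y = batch
        return self.loss_fn(self(x), y)

    def configure_optimizers(self) -> torch.optim.Optimizer:
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```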

4. loggers

  • Purpose: The loggers module captures and visualizes training metrics and integrates with several logging platforms (see the example below).
  • Key Features:
    • Metrics recording
    • Support for platforms such as TensorBoard and WandB
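
For example, metric logging to TensorBoard with plain PyTorch looks roughly like this (assuming the `tensorboard` package is installed); FlameModules' logger classes are presumably thin wrappers over APIs of this kind.

```python
# Assumes the `tensorboard` package is installed; `runs/experiment_1` is an
# arbitrary log directory used for illustration.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/experiment_1")
for step in range(100):
    loss = 1.0 / (step + 1)                        # placeholder value, not a real loss
    writer.add_scalar("train/loss", loss, global_step=step)
writer.close()
```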

5. loops

  • Purpose: This module wraps the loops that make up the training routine, categorizing them as epoch, batch, or validation loops (sketched below).
  • Key Features:
    • Structured training loops
    • Efficient loop management
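
The sketch below shows the epoch/batch/validation nesting such a module encapsulates, written as a plain-PyTorch function rather than FlameModules' actual loop classes.

```python
# Schematic nesting of the three loops; `train_loader` and `val_loader` are any
# iterables of (inputs, targets) pairs.
import torch

def fit(model, optimizer, loss_fn, train_loader, val_loader, epochs: int = 3) -> None:
    for epoch in range(epochs):                    # epoch loop
        model.train()
        for x, y in train_loader:                  # batch loop
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():                      # validation loop
            val_losses = [loss_fn(model(x), y).item() for x, y in val_loader]
        print(f"epoch {epoch}: val_loss={sum(val_losses) / max(len(val_losses), 1):.4f}")
```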

6. overrides

  • Purpose: Overrides grant the ability to tweak specific functionalities without altering the foundational logic of FlameModules (see the pattern below).
  • Key Features:
    • Customizable functionalities
    • Core logic preservation
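
The override pattern can be illustrated with a small subclassing example; the checkpointing classes here are hypothetical and only demonstrate replacing one hook while preserving the base behaviour.

```python
# Hypothetical classes: the subclass replaces a single hook (filename) while the
# rest of the base behaviour is preserved.
from datetime import datetime

class BaseCheckpointing:
    def filename(self, epoch: int) -> str:
        return f"model-epoch{epoch}.pt"

    def save_path(self, epoch: int) -> str:
        # A real implementation would also write the state dict to disk here.
        return self.filename(epoch)

class TimestampedCheckpointing(BaseCheckpointing):
    def filename(self, epoch: int) -> str:         # only this hook is overridden
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        return f"model-epoch{epoch}-{stamp}.pt"
```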

7. plugins

  • Purpose: Plugins extend the FlameModules framework with custom operations and compatibility with diverse platforms and tools (a sketch of the pattern follows).
  • Key Features:
    • Framework extensibility
    • Support for custom tools and operations
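
One common way to implement this kind of extensibility is a plugin registry; the sketch below is a generic Python pattern, not FlameModules' actual plugin mechanism.

```python
# Generic plugin-registry pattern; names are illustrative, not FlameModules' API.
from typing import Callable, Dict

PLUGINS: Dict[str, Callable] = {}

def register_plugin(name: str) -> Callable:
    """Decorator that records a plugin under a string key."""
    def decorator(fn: Callable) -> Callable:
        PLUGINS[name] = fn
        return fn
    return decorator

@register_plugin("half_precision")
def to_half(model):
    # Example custom operation: cast the model's parameters to float16.
    return model.half()

# A trainer could then look plugins up by name: PLUGINS["half_precision"](model)
```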

8. serve

  • Purpose: This module focuses on making trained models ready for inference and deployment (illustrated below).
  • Key Features:
    • Efficient model serving
    • Deployment readiness
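
A minimal sketch of preparing a model for inference with plain PyTorch (eval mode, disabled gradients, and a TorchScript export); the serve module's own API is not shown here.

```python
# Illustration with a stand-in model: switch to eval mode, run inference without
# gradients, and export a TorchScript artifact for deployment.
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 2))  # stand-in for a trained model
model.eval()

with torch.no_grad():
    prediction = model(torch.randn(1, 4))            # lightweight inference call

scripted = torch.jit.script(model)                   # portable, Python-independent artifact
scripted.save("model_scripted.pt")
```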

9. strategies

  • Purpose: Strategies manage distributed training approaches such as data and model parallelism (see the sketch below).
  • Key Features:
    • Distributed training support
    • Data and model parallelism
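
Below is a compressed sketch of data parallelism with PyTorch's `DistributedDataParallel`, assuming the script is launched with `torchrun`; FlameModules' strategy classes would wrap boilerplate of this kind.

```python
# Compressed sketch of data parallelism with DistributedDataParallel (DDP).
# Assumption: launched via `torchrun --nproc_per_node=2 this_script.py`, which
# sets the environment variables that init_process_group reads.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    dist.init_process_group(backend="gloo")        # use "nccl" on multi-GPU nodes
    model = torch.nn.Linear(10, 1)
    ddp_model = DDP(model)                         # gradients are synced across ranks

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss = ddp_model(torch.randn(20, 10)).sum()
    loss.backward()                                # gradient all-reduce happens here
    optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```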

10. trainer

  • Purpose: The trainer is the primary touchpoint for users to initiate the training routine; it oversees the entire cycle, from training and validation to testing (a toy version is sketched below).
  • Key Features:
    • User-friendly interface
    • Comprehensive training management
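
To make these responsibilities concrete, here is a toy orchestrator that runs training epochs, validation, and a test pass, and hands control to callbacks; it is an illustration only, not the FlameModules `Trainer` API.

```python
# Toy orchestrator with assumed names (MiniTrainer, on_epoch_end); it mirrors the
# fit -> validate -> test cycle described above, not the real Trainer.
import torch

class MiniTrainer:
    def __init__(self, max_epochs: int = 5, callbacks=None) -> None:
        self.max_epochs = max_epochs
        self.callbacks = list(callbacks or [])

    def fit(self, model, optimizer, loss_fn, train_loader, val_loader) -> None:
        for epoch in range(self.max_epochs):
            model.train()
            for x, y in train_loader:
                optimizer.zero_grad()
                loss_fn(model(x), y).backward()
                optimizer.step()
            metrics = {"val_loss": self.evaluate(model, loss_fn, val_loader)}
            for cb in self.callbacks:              # hand control to the callbacks
                cb.on_epoch_end(epoch, metrics)

    @torch.no_grad()
    def evaluate(self, model, loss_fn, loader) -> float:
        model.eval()
        losses = [loss_fn(model(x), y).item() for x, y in loader]
        return sum(losses) / max(len(losses), 1)

    def test(self, model, loss_fn, test_loader) -> float:
        return self.evaluate(model, loss_fn, test_loader)
```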

11. tuner

  • Purpose: The tuner module offers tools for optimizing hyperparameters to improve training outcomes (example below).
  • Key Features:
    • Hyperparameter optimization tools
    • Improved training results
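
As a simple illustration of hyperparameter search, the sketch below runs a random search over the learning rate; the log-scale range and the stand-in objective are assumptions made for the sake of a runnable example.

```python
# Bare-bones random search over the learning rate; names and ranges are illustrative.
import random

def random_search(train_fn, n_trials: int = 10):
    """`train_fn(lr)` should return a validation loss (lower is better)."""
    best_lr, best_loss = None, float("inf")
    for _ in range(n_trials):
        lr = 10 ** random.uniform(-5, -1)          # sample the learning rate on a log scale
        loss = train_fn(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

# Usage with a stand-in objective instead of a real training run:
best_lr, best_loss = random_search(lambda lr: (lr - 1e-3) ** 2, n_trials=20)
```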

Installation

  1. Clone the Repository:
    git clone https://github.com/rahul1-bot/FlameModules.git
    

Visual Diagram

core

[Diagram: core module]