Plug-and-play modules to optimize the performance of your AI systems
Documentation: docs.nebuly.com/
Nebullvm
is an ecosystem of plug-and-play modules that optimize the performance of your AI systems. The optimization modules are stack-agnostic and work with any library. They are designed to integrate easily into your system, delivering a quick and seamless performance boost. Simply plug them in to start realizing the benefits of optimized performance right away.
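To illustrate the plug-and-play idea in code: a stack-agnostic module takes any callable model as input and returns an optimized drop-in replacement with the same interface. The sketch below is purely illustrative; `optimize_model`, its signature, and the memoization strategy are assumptions for demonstration, not the library's actual API.

```python
from typing import Any, Callable

def optimize_model(model: Callable[[Any], Any]) -> Callable[[Any], Any]:
    """Hypothetical plug-and-play optimizer (illustrative only, not the
    real Nebullvm API): wraps any callable model and returns a drop-in
    replacement that produces the same outputs."""
    cache: dict = {}

    def optimized(x: Any) -> Any:
        # Toy optimization: memoize repeated inputs so the wrapped
        # model is only invoked once per distinct input.
        if x not in cache:
            cache[x] = model(x)
        return cache[x]

    return optimized

# Any model, from any stack, can be plugged in unchanged:
slow_model = lambda x: x * 2
fast_model = optimize_model(slow_model)
assert fast_model(3) == slow_model(3)  # same interface, same outputs
```

The key design point is that the optimized model is a drop-in replacement: the rest of the system calls it exactly as it called the original.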
If you like the idea, give us a star to show your support for the project.
What can this help with?
We currently provide several modules to boost the performance of your AI systems:
Next modules and roadmap
We are actively working on incorporating the following modules, as requested by members of our community, in upcoming releases:
- GPToptimizer: Effortlessly optimize large API-based generative models from OpenAI, Cohere, and Hugging Face.
- CloudSurfer: Automatically discover the optimal cloud configuration and hardware on AWS, GCP, and Azure for running your AI models.
- OptiMate: Interactive tool that guides savvy users toward the best inference performance for a given model and hardware setup.
- TrainingSim: Easily simulate the training of large AI models on a distributed infrastructure to predict training behavior without an actual implementation.
Contributing
As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features, improved infrastructure, and better documentation. If you're interested in contributing, please see the linked page for more information on how to get involved.