mudler/LocalAI

enhancement: use separate conda env for extra grpc backends

mudler opened this issue · 5 comments

mudler commented

Is your feature request related to a problem? Please describe.
Python requirements of the different backends may conflict with each other

Describe the solution you'd like
Create a separate conda environment for each backend and use a customized entrypoint.

Describe alternatives you've considered
Just rely on external, separately managed environments

Additional context
N/A

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but.... I can also be funny or helpful 😸 and I can generally give good tips, or point you to places in the documentation or the code worth looking at, based on what you wrote in the issue.

Don't engage in conversation with me,
I don't support replying (yet)!

⚠️⚠️⚠️⚠️⚠️

I can help you with the issue you described. It seems like you are having trouble with conflicting Python requirements when using LocalAI. To create a separate conda environment for each backend, you can use the conda command-line tool to create a new environment and install the required packages. Here's an example command:

conda create -n my_env python=3.6

This will create a new environment named "my_env" with Python version 3.6. Next, activate the environment so that the packages you install with pip or conda go into it:

conda activate my_env
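
With the environment active, you can install a backend's dependencies into it; for example (the requirements file path below is hypothetical, shown only as a sketch):

pip install -r extra/grpc/bark/requirements.txt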

The current shell session now uses the interpreter and packages from the "my_env" environment, and you can run LocalAI with the customized entrypoint as described in the documentation.
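
As a rough sketch of what such a wrapper entrypoint could look like (the script, the env name, and the backend module below are hypothetical, not LocalAI's actual entrypoint):

#!/bin/bash
# Hypothetical wrapper: activate the backend's conda env, then start its gRPC server.
set -e
eval "$(conda shell.bash hook)"   # makes `conda activate` usable in non-interactive shells
conda activate my_env
exec python backend.py "$@"       # hypothetical backend entrypoint; pass through LocalAI's args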
As an alternative, you could use a separate Python installation for each backend and point the localai command-line tool at the corresponding executable. This would let each backend run a different Python version without conflicts.

Aisuko commented

Hi @mudler. I have an idea here. As you mentioned, we'd use a separate conda environment for each backend. We could create a conda_env_requirement.yml in each backend's folder, listing all of that backend's dependencies, right? Shall we create a Makefile in the extra folder that creates the conda env for every external gRPC service, or a Makefile for each external gRPC backend?

The Makefile format should be like:

.PHONY: <backend-name>
<backend-name>:
	@echo "Creating virtual environment..."
	@conda env create --name <backend-name> --file xxx.yml
	@echo "Virtual environment created."
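
For reference, such a conda_env_requirement.yml could be a minimal sketch like the following (the name, channel, and pinned versions here are placeholders, not a tested spec):

name: bark
channels:
  - defaults
dependencies:
  - python=3.10
  - pip
  - pip:
      - grpcio
      - grpcio-tools
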
mudler commented

Hi @mudler. I have an idea here. As you mentioned, we'd use a separate conda environment for each backend. We could create a conda_env_requirement.yml in each backend's folder, listing all of that backend's dependencies, right? Shall we create a Makefile in the extra folder that creates the conda env for every external gRPC service, or a Makefile for each external gRPC backend?

The Makefile format should be like:

.PHONY: <backend-name>
<backend-name>:
	@echo "Creating virtual environment..."
	@conda env create --name <backend-name> --file xxx.yml
	@echo "Virtual environment created."

That sounds good. We also need a top-level target that prepares the envs for all the extra backends, plus a wrapper script that runs each backend inside its environment for LocalAI.
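
Something like this top-level Makefile sketch could tie them together (the backend names and directory layout here are placeholders, not the actual repo layout):

BACKENDS := bark autogptq

.PHONY: prepare-extra-conda-environments $(BACKENDS)

# Create the conda env for every extra backend
prepare-extra-conda-environments: $(BACKENDS)

# Delegate to each backend's own Makefile, which defines a target named after the backend
$(BACKENDS):
	$(MAKE) -C extra/grpc/$@ $@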

mudler commented

See fbbd2f6 for an example for bark/autogptq (not tested yet); created #1144 out of the feat/conda branch.