OMP: Error #15 on Windows 11
Closed this issue · 7 comments
I tried running the example code from the README:
import txtai
embeddings = txtai.Embeddings()
embeddings.index(["Correct", "Not what we hoped"])
embeddings.search("positive", 1)
Yet I encountered the following error:
OMP: Error #15: Initializing libomp140.x86_64.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://openmp.llvm.org/
After a few hours of troubleshooting, I've narrowed it down to faiss. All the issues I could find were related to macOS. As suggested in a recent related issue, switching the backend to hnsw works.
embeddings = txtai.Embeddings(backend="hnsw")
I'm unsure what is causing the issue. I tried downgrading from Python 3.12 to 3.10, since I saw that helped someone else, but it didn't change anything for me. It may be related to the fact that txtai is installing torch-cpu and faiss-cpu despite my system having a GPU.
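For reference, the unsafe workaround named in the error's own hint can be applied from Python; it only has an effect if it runs before faiss/torch load their OpenMP runtimes, i.e. before the first import. This is a sketch of the hint's suggestion, not a recommendation:

```python
import os

# Per the OMP hint, this is unsafe, unsupported, and may silently produce
# incorrect results. It must be set BEFORE the libraries that bundle an
# OpenMP runtime are imported (txtai pulls in faiss/torch on import).
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# import txtai  # only import after the variable is set
```

The environment variable can equally be set outside Python (e.g. in the PyCharm run configuration), which avoids ordering mistakes.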
My system specs are:
CPU: i7-13620H
GPU: 4060 Mobile
OS: Windows 11 Pro Build 26100
Python: 3.10.11 (in a virtual environment)
IDE: PyCharm 2024.3 (Professional Edition)
It's a relatively fresh Windows 11 install. Besides PyCharm and Python, I have the C++ Build Tools.
Here is the output of pip list:
Package Version
------------------ -----------
annoy 1.17.3
certifi 2024.8.30
charset-normalizer 3.4.0
colorama 0.4.6
faiss-cpu 1.9.0.post1
filelock 3.16.1
fsspec 2024.10.0
greenlet 3.1.1
hnswlib 0.8.0
huggingface-hub 0.26.3
idna 3.10
Jinja2 3.1.4
MarkupSafe 3.0.2
mpmath 1.3.0
msgpack 1.1.0
networkx 3.4.2
numpy 2.1.3
packaging 24.2
pgvector 0.3.6
pip 23.2.1
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
safetensors 0.4.5
setuptools 68.2.0
SQLAlchemy 2.0.36
sqlite-vec 0.1.6
sympy 1.13.1
tokenizers 0.20.3
torch 2.5.1
tqdm 4.67.1
transformers 4.46.3
txtai 8.0.0
typing_extensions 4.12.2
urllib3 2.2.3
wheel 0.41.2
Thank you for taking the time to read this. I appreciate any insights.
Hello,
Thank you for taking the time to write this up. Normally, I see this type of problem on macOS. The best options I've seen:
- Set the KMP_DUPLICATE_LIB_OK parameter
- Use Conda package manager. The Faiss team publishes a different package there that works better for some.
You could also pare down the issue (create a simple Faiss-only code snippet) and file something with the upstream project.
Thank you for the response. I tried the same code on my Windows 10 desktop, which has a Ryzen 7700X paired with an RTX 3090, and I got the same error. I ended up using WSL with a virtual environment on my laptop and that is working. I no longer get the error with faiss-cpu, and it installed PyTorch with CUDA.
Setting KMP_DUPLICATE_LIB_OK=true does allow the code to run, but I'd rather not rely on it. Installing txtai in a conda virtual environment also allows the code to run, but it still didn't install PyTorch with CUDA.
Instructions on how to install PyTorch with CUDA through Conda can be found here: https://pytorch.org/get-started/locally/
Ultimately if WSL works though, that isn't a bad way to do it either.
I was able to install PyTorch with CUDA manually when I was troubleshooting. My understanding is that txtai installs GPU-enabled dependencies by default. I did see how to specify a CPU-only install in txtai's documentation, but not the other way around. Is the version of torch the only difference between the default and CPU-only installs?
The default install uses the PyTorch GPU package. Are you referring to the docker images (txtai-cpu vs txtai-gpu)?
That was my understanding. I'm unsure why it doesn't install the PyTorch GPU package on Windows using pip or conda. With WSL it does install the PyTorch GPU package. For CPU-only, I was referring to this section on the Installation - txtai documentation page.
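One quick way to confirm which torch wheel pip actually resolved: CPU-only wheels carry a "+cpu" suffix in their version string and report no CUDA build. A small guarded check (only assumes the standard library; degrades gracefully if torch isn't present):

```python
import importlib.util

def torch_cuda_status():
    """Report whether the installed torch wheel was built with CUDA support."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    # CPU-only wheels (e.g. "2.5.1+cpu") have torch.version.cuda set to None
    if torch.version.cuda is None:
        return f"torch {torch.__version__}: CPU-only build"
    return f"torch {torch.__version__}: built for CUDA {torch.version.cuda}"

print(torch_cuda_status())
```

Note that torch.cuda.is_available() answers a different question (is a usable GPU visible at runtime), while torch.version.cuda tells you which wheel was installed.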
Ok, well it sounds like you figured out a solution that works. I'm going to go ahead and close this issue.