marcelotduarte/cx_Freeze

cx_freeze with tensorflow-directml-plugin

marcelotduarte opened this issue · 7 comments

Discussed in #2094

@sskim5128
@p7ayfu77 A simple example:

  1. From an empty venv, install only "tensorflow-directml-plugin":
    pip install tensorflow-directml-plugin

  2. Simple example script:
    tf-directml-test.py

import tensorflow as tf
from tensorflow.python.client import device_lib

def get_available_gpus():
    # Enumerating local devices forces TensorFlow to load registered plugins,
    # including the DirectML DLL shown in the log below.
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos if x.device_type == 'GPU']

get_available_gpus()

Execute in a VSCode terminal:

(.directml) C:\dev>python tf-directml-test.py
2024-03-13 20:26:08.210067: I tensorflow/c/logging.cc:34] Successfully opened dynamic library C:\dev\.directml\lib\site-packages\tensorflow-plugins/directml/directml.d6f03b303ac3c4f2eeb8ca631688c9757b361310.dll
2024-03-13 20:26:08.211241: I tensorflow/c/logging.cc:34] Successfully opened dynamic library dxgi.dll
2024-03-13 20:26:08.218779: I tensorflow/c/logging.cc:34] Successfully opened dynamic library d3d12.dll
2024-03-13 20:26:08.423244: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters.
2024-03-13 20:26:08.691537: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2

This shows that the DirectML DLL is loaded.
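As a cross-check (not part of the original report), the DirectML adapter should also show up through TensorFlow's public device API, since the pluggable device registers with device type GPU (see the /device:GPU:0 entry in the later log):

import tensorflow as tf

# Public-API alternative to device_lib: lists PhysicalDevice entries,
# including pluggable devices such as DirectML.
print(tf.config.list_physical_devices("GPU"))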

  3. Freeze with cx_Freeze.
    tf-directml-freeze.py
from cx_Freeze import setup, Executable

build_exe_options = {
    "packages": [
        "tensorflow",
        "tensorflow-plugins",
        "tensorflow_estimator",
        "tensorflow_io_gcs_filesystem",
    ],
    "include_files": [],
    "include_msvcr": True,
}

# base = "Win32GUI" if sys.platform == "win32" else None

setup(
    name="tf-directml-test",
    options={"build_exe": build_exe_options},
    executables=[
        Executable(
            "./tf-directml-test.py",
            # base=base,
            target_name="tf-directml-test",
        )
    ],
)

# python tf-directml-freeze.py build
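Before running the frozen executable, a quick sanity check (a hypothetical helper, not part of the original report) is to confirm whether the tensorflow-plugins directory was copied into the build; the path below matches the example build directory:

from pathlib import Path

# List any DLLs that ended up under tensorflow-plugins in the frozen build.
plugins_dir = Path("build/exe.win-amd64-3.9/lib/tensorflow-plugins")
print("tensorflow-plugins copied:", plugins_dir.is_dir())
if plugins_dir.is_dir():
    for dll in sorted(plugins_dir.rglob("*.dll")):
        print(dll)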
  4. Execute the result: build\exe.win-amd64-3.9>tf-directml-test.exe
    Example output:
c:\dev\build\exe.win-amd64-3.9>tf-directml-test.exe
2024-03-13 20:14:43.526070: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

Note that tensorflow-plugins/directml/directml.d6f03b303ac3c4f2eeb8ca631688c9757b361310.dll is NOT loaded.
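Until a proper fix is available, one possible stop-gap (an assumption on my part, not verified in this thread) would be to bundle the plugin directory explicitly through include_files, placing it next to the other frozen packages under lib/:

# Hypothetical addition to tf-directml-freeze.py; the rest of the setup()
# call stays unchanged.
import sysconfig
from pathlib import Path

site_packages = Path(sysconfig.get_paths()["purelib"])
plugins_dir = site_packages / "tensorflow-plugins"

build_exe_options = {
    "packages": ["tensorflow"],
    # Copy the plugin directory (which contains the directml DLL) into the build.
    "include_files": [(str(plugins_dir), "lib/tensorflow-plugins")],
    "include_msvcr": True,
}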

Hi @p7ayfu77
You can test the patch in the latest development build:
pip install --force --no-cache --pre --extra-index-url https://marcelotduarte.github.io/packages/ cx_Freeze
For conda-forge the command is:
conda install -y --no-channel-priority -S -c https://marcelotduarte.github.io/packages/conda cx_Freeze

Wow! Fantastic, I will test ASAP.
Very much appreciated. Please see the support I have provided.
I will report back soon.

I forgot to say that I tested the sample without needing to use 'packages', and even with just the simple command:
cxfreeze tf-directml-freeze.py

I also tested by moving the executable to another machine, which is where I detected an error in autograph and corrected it. It should work for other plugins as well.

Bingo! 🥳

c:\dev\astro-csbdeep\build\exe.win-amd64-3.9>tf-directml-test.exe
2024-03-15 21:24:29.760644: I tensorflow/c/logging.cc:34] Successfully opened dynamic library c:\dev\astro-csbdeep\build\exe.win-amd64-3.9\lib\tensorflow-plugins/directml/directml.d6f03b303ac3c4f2eeb8ca631688c9757b361310.dll
2024-03-15 21:24:29.762157: I tensorflow/c/logging.cc:34] Successfully opened dynamic library dxgi.dll
2024-03-15 21:24:29.766355: I tensorflow/c/logging.cc:34] Successfully opened dynamic library d3d12.dll
2024-03-15 21:24:30.003346: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters.
2024-03-15 21:24:30.204356: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-15 21:24:30.205376: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (Intel(R) Iris(R) Xe Graphics)
2024-03-15 21:24:30.309303: I tensorflow/c/logging.cc:34] Successfully opened dynamic library Kernel32.dll
2024-03-15 21:24:30.310400: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-03-15 21:24:30.310514: W tensorflow/core/common_runtime/pluggable_device/pluggable_device_bfc_allocator.cc:28] Overriding allow_growth setting because force_memory_growth was requested by the device.
2024-03-15 21:24:30.310632: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/device:GPU:0 with 6901 MB memory) -> physical PluggableDevice (device: 0, name: DML, pci bus id: <undefined>)

Seems to be working. I will now test on the main app. ...

Amazing!!! Thank you @marcelotduarte. Great work.
This opens the tool up for anyone without a CUDA/NVIDIA GPU.

[Screenshot 2024-03-15 213557]

Very good. You provided me with a very well-defined test path, which helps a lot.
And thank you very much for the best sponsorship so far.