
Environment for M1 Silicon

ferrari-leo opened this issue

Hi Alex,

Please find below an environment file that successfully runs the GPU-related code in notebooks 11.1 and 14 (attached as .txt since GitHub won't accept .yml uploads; you should just be able to change the extension back):
causal_book_py39_for_m1.txt

The changes from the yml provided in your repo are:

  • remove - nvidia from channels; remove - pytorch and - pytorch-cuda=11.7 from dependencies (a sketch of the resulting file is shown after this list)
  • add - notebook=6.5 to dependencies
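
For reference, here is a minimal sketch of what the edited file might look like after these changes; the commented lines are placeholders for the original file's contents, which I'm assuming stay unchanged, and the name is illustrative only:

name: causal_book_py39_for_m1   # illustrative name only
channels:
  # ...channels from the original yml, with "- nvidia" removed
dependencies:
  # ...dependencies from the original yml, with "- pytorch" and "- pytorch-cuda=11.7" removed
  - notebook=6.5                # added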

Then replace the set-device cell with:

# Set device: use Apple's MPS backend when available, otherwise fall back to CPU
device = "mps" if torch.backends.mps.is_available() else "cpu"

Once the env was activated, I still had to pip install CausalPy.

The full yml as exported by conda is
causal_book_py39_applem1.txt

Notes:

  • This has only been tested on notebooks 11.1 and 14, and I did not closely check whether the results matched the original ones. At this point I'm only assuming it will run fine on the other chapters.
  • In notebook 14, "Expert knowledge" section, in the cell after the one with the augmented Lagrangian loss objects (first line assert len(dataset_train.batch_size) == 1, "Only 1D batch size is supported"), an error occurs with the message "NotImplementedError: The operator 'aten::triu_indices' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS." A sketch of the suggested fallback workaround is shown after this list.
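
A minimal sketch of that temporary workaround; the variable must be set before torch is imported, so placing it in the very first cell (after a kernel restart) is an assumption on my part rather than something from the notebook:

import os

# Must run before the first "import torch" in the session (restart the kernel first)
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # ops missing on MPS, such as aten::triu_indices, now fall back to the CPU

Alternatively, the variable can be exported in the shell before launching Jupyter.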

Hi @ferrari-leo

Thank you for sharing the update and testing the env.

I'll test it out with the remaining notebooks, and we can take it from there.

Please note that it will take me some time as my plate is currently full.

Hi @ferrari-leo

I added your M1 environment file to the repo as experimental.

Thank you once again for your contribution.