conda-forge/tensorflow-feedstock

GPU enabled builds


Can we provide gpu-enabled packages?

That should be easy to do, but I'm not sure how installation would work: would it be selected with a flag, or should we use another package namespace?

Sorry, I didn't understand. The GPU-enabled tensorflow would work on CPU too.

Yes, we could ship only the GPU version. But I think it's nice to have the option of a CPU-only version.

It's kinda complicated to have the same package with different compile options. What Continuum does for numpy compiled with (and without) MKL might be a solution, but maybe it's not the best one.

You can use conda features (http://conda.pydata.org/docs/building/meta-yaml.html#features) to have two different builds.
But my main concern is CUDA: do we have a package for that?
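
For reference, the features approach would look something like this in legacy conda-build syntax (the feature name "gpu" is just an illustration, not an existing feature):

# meta.yaml of the hypothetical GPU variant
build:
  features:
    - gpu
---
# meta.yaml of a tiny metapackage that turns the feature on in an environment
package:
  name: gpu-feature
  version: "1.0"
build:
  track_features:
    - gpu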

I don't think there is one. I have been there before and it was not a nice experience; I always install it as an OS lib/pkg.

That's not gonna work. We need to build bazel first and compile tensorflow from source. Then we need to package the CUDA pieces and enable GPU builds here.

Yes, that would be ideal. Do we need the CI workers to have access to a GPU to compile the library?

Just FYI, I have used the GPU wheels that google provides on an EC2 instance with a GPU and they work fine.

Do we need the CI workers to have access to a GPU to compile the library?

Don't know, but maybe for now we can package the binary of the GPU version.
Also, cudatoolkit is available in the defaults channel, but it is missing libcuda.so, which tensorflow requires.
For cudnn there is not much we can do: we cannot distribute it because of its license, but we can ask people to download it themselves and give them instructions on how to install it.
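
Such instructions would look roughly like this, assuming the user has downloaded the cuDNN tarball from NVIDIA themselves (registration required; the exact filename varies by version):

# Hypothetical manual install of cuDNN into an existing CUDA toolkit
tar xzf cudnn-8.0-linux-x64-v5.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/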

I have not used a GPU-enabled tensorflow, but Google does provide .whl files which we could turn into conda packages in the same manner as we currently make the CPU-only conda package. These seem to require CUDA toolkit 7.5 and cuDNN, which we may have to depend on the system providing due to licensing.

Currently the tensorflow packages in the conda-forge channel are not built from source but rather repackaged wheels, see #6. I have been building tensorflow from source using Bazel in my own channel (jjhelmus), but this requires an Ubuntu 14.04 docker container and a locally installed Java/Bazel on OS X. If anyone wants details, see the tensorflow directory of jjhelmus/wip_conda_recipes.

I am trying to add bazel here: conda-forge/staged-recipes#1019 any help is appreciated.

If we have infinite compile time, can we simply grab a Docker image from NVIDIA and build tensorflow on top of that? I just succeeded in building one on our lab's CentOS 6 cluster, with GPU support.

I think a few questions need to be resolved before we can provide GPU enabled builds. The ones I can think of are:

  • Can conda-forge legally distribute the contents of such a package? I do not know what libraries get included in such a build (cuDNN, other CUDA libraries?) or what the redistribution licenses are for these components.
  • Do the current CI platforms (Travis CI and Circle CI) support building a GPU-enabled version of tensorflow? I'm not certain the hardware these services run on includes a GPU.

Either way, a first step towards this would be to build the CPU version of tensorflow from source (#6); work on that front would help any future GPU builds.

@jjhelmus check https://github.com/leelabcnbc/DevOps/blob/master/Docker/tensorflow/0.11.0rc0/centos6/py35/gpu/Dockerfile, I succeeded in building a GPU version from source, for a CentOS 6 base.

My only concern is that building TF takes so long that no public CI could tolerate it.

Derp.

My only concern is that building TF takes so long that no public CI could tolerate it.

Travis has unlimited time.
AppVeyor is 2 hours.
Building TensorFlow does not take 2 hours.

Travis has unlimited time.
AppVeyor is 2 hours.

Actually that is not accurate. Travis CI is right around 45 mins. AppVeyor is around 1 hr.

FWIW CircleCI is the longest at around 2 hrs.

Ah, my mistake -- there is no build timeout, but there is a per-job timeout.

https://docs.travis-ci.com/user/customizing-the-build/#Build-Timeouts

Can someone say when the GPU version will come out in the conda-forge channel?

wesm commented

I'm in need of this very soon and willing to help to make it happen. cc @cpcloud

To prior comments:

  • I don't think Travis CI hosts need to have a GPU to build CUDA libraries
  • There are no redistribution restrictions on binaries that have been generated by nvcc or that depend on the CUDA runtime libraries. Here is the EULA for you to scrutinize: http://docs.nvidia.com/cuda/eula/index.html#axzz4qLHiIRnd

I think one way to do this would be to provide a linux-anvil-with-cuda image and a flag in meta.yaml which, on rerender, changes the runtime anvil to the one with the additional libraries in it. What do you think?
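
A purely hypothetical sketch of what that could look like in conda-forge.yml, which conda-smithy reads on rerender (the image name is invented, and whether such a key would live here is an assumption):

docker:
  # hypothetical CUDA-enabled build image; does not exist (yet)
  image: condaforge/linux-anvil-cuda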

@wesm Is cudnn an issue? Continuum has special permission to redistribute it: conda-forge/caffe-feedstock#27 (comment), so we could just depend on it from defaults, but I don't know whether distributing binaries linked against it is okay without extra permission. (Well, tensorflow/pytorch/etc. already do it themselves, so probably, but maybe they got permission for it.)

In any case, I would be very interested in helping get this set up for conda-forge.

There are a few issues that we would need to address to make this work.

The first issue is that we are not building from source yet. There has been a sizable amount of work done by @jjhelmus to address this, but we would need to finish that first.

Second, someone will need to reach out to NVIDIA and get permission to use their compiler and header files in our Docker images and elsewhere, and possibly also to distribute them between builds somehow. As noted by @dougalsutherland, the libraries can already be redistributed if we use the ones from defaults. Based on looking at these Docker images, it looks relatively straightforward to download everything needed from NVIDIA directly, so that could be an option if NVIDIA approves. I should add that this only really works for Linux, which is probably sufficient.

Third, we need a way to select the CPU-only build or the GPU build as needed. This may need to use features near term, though there may be other options in conda/conda-build that would solve this problem more cleanly, like key-value features.

Fourth, we need to determine how we handle different versions of CUDA and cuDNN. Some cards really need the latest CUDA to work at all (read: crashes otherwise). We would need to figure out how to distinguish the different build combinations of CUDA and cuDNN versions.

Re: points 3 and 4, defaults is using the tensorflow package for CPU-only and tensorflow-gpu for the GPU one, with the latest versions having build strings like py36cuda8.0cudnn6.0_0 and dependencies on cudatoolkit 8.0* and cudnn 6.0*. Not that conda-forge necessarily has to do the same thing, but it seems vaguely reasonable. I suppose in the current conda-forge infrastructure this would require a different branch for each CUDA / cuDNN combination, right?
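
To make that scheme concrete, a minimal meta.yaml sketch (values hardcoded from the example above for brevity; a real recipe would template them):

build:
  string: py36cuda8.0cudnn6.0_0

requirements:
  run:
    - cudatoolkit 8.0*
    - cudnn 6.0*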

As a note, I'm working on recipes right now for pytorch and related things. Currently CPU-only, but it has the same issues (though building it from source is easier).

wesm commented

My understanding is that binaries built with CUDA 7.5 are forward compatible with the CUDA 8.0 driver API. When new cards come out with a newer compute capability, older versions of CUDA may not work. It's a bit of a quagmire. Having builds for at least CUDA 7.5 and/or CUDA 8.0 would suffice for most use cases.

Likely the easiest method for getting a GPU-accelerated version of tensorflow, tensorflow-gpu, onto conda-forge would be to repackage the whl files on PyPI. These only support Ubuntu 14.04+.
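
The core of such a repackaging recipe's build.sh would be little more than installing the official wheel without letting pip resolve dependencies (the pinned version here is just an example):

# Repackage the official GPU wheel; conda metadata supplies the dependencies
pip install --no-deps tensorflow-gpu==1.3.0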

As @jakirkham mentioned, a more complete solution would be to build tensorflow, including a version with GPU support, from source. I have done this in the past, but each new version of tensorflow requires modifications to the recipe, and I haven't had time to update my current build recipe to support the 1.3.0 release. The defaults channel has tensorflow and tensorflow-gpu packages that were built from source and support CentOS 6 and distributions with newer glibcs. Separate packages are provided for CUDA 7.5 and 8.0.

@jakirkham has also nicely provided details on the licensing. NVIDIA has Docker images which include the CUDA libraries and compilers. These can be run on a system with a GPU, and from my tests tensorflow-gpu can be built (but not tested) on systems without a GPU. It is unclear if Docker images which use these images as a base can be legally uploaded to Docker Hub.

wesm commented

I have a much smaller library that needs the CUDA compiler. I can maintain builds on anaconda.org separate from conda-forge, but it will make things more annoying for users. Is there a way to make a package in conda-forge depend on a package in a different channel?

Just to comment: it would be much better not to have a tensorflow-gpu package, but instead a tensorflow package with GPU support, differentiated either by hardcoding something like "gpu_cuda8_cudnn6" in the build string or by using a conda feature. I personally would recommend the build-string option.

BTW, I regularly build from source and try to keep instructions up-to-date here: http://github.com/yaroslavvb/tensorflow-community-wheels

I have a dream that tensorflow-community-wheels could be replaced with an automated solution that provides common configurations (GPU/no-GPU, Xeon-optimized, i7-optimized). Does conda-forge have a mechanism for selecting architecture? I.e., an Intel CPU will need a different TensorFlow binary than an AMD CPU.

@yaroslavvb On that "Intel Xeon-optimized" tensorflow, I believe the one in the Intel channel might be the one you are after, e.g.

conda install tensorflow -c intel

Check out the Intel Optimized TensorFlow. I can't comment on the conda-forge side of things though!

wesm commented

Does conda-forge have mechanism for selecting architecture?

You could do this with conda "features" (e.g. this is used to select MKL vs. non-MKL NumPy). It would require the user to explicitly install a metapackage to select their architecture, though.
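
For example, defaults ships a nomkl metapackage that tracks a feature switching NumPy (and friends) to the non-MKL variants; an architecture switch could look similar:

# Installing the tracking metapackage flips the environment to non-MKL builds
conda install nomkl numpy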

@yaroslavvb Two questions here:

  1. Does conda, as a package manager, help package maintainers support different variants of the same package, and users select the variant they prefer? => Yes.

  2. Is conda-forge a possible home for specialized tensorflow builds? => Not yet, but it is already useful.

Jump to the end of the TL;DR for a runnable example.

How well does conda support package variants?

I'm no expert but I think conda is better suited than pip / wheels to work with package variants.

Let's look at pip. To install a concrete variant of tensorflow, like those you catalog in your community wheels repo, one needs to provide the full URL to the wheel. Even worse, there are two tensorflow packages on PyPI which are actually just tensorflow variants: tensorflow and tensorflow-gpu. Having two packages for what are actually two variants of the same package has two bad consequences: it allows both packages to be installed simultaneously (clobbering!), and it makes it hard to depend on tensorflow (packages that depend on tensorflow are not satisfied by installing tensorflow-gpu, so clobbering again!). This has even bitten Google, and their solution was not pretty.

These situations can be avoided with conda. As far as I know, conda allows two ways to specify different variants of the same package version: conda features, which apply environment-wide, and matching against the build string.

Having used conda features heavily to differentiate variants of the same package (e.g. gpl vs no-gpl, with-gui vs without-gui, cuda vs no-cuda, optimized vs generic), I cannot recommend them. They are hard to compose, they make the number of packages that need to be installed grow linearly with the number of variants, they enforce artificial "side effect" package dependencies and, if not treated carefully, they tend to produce confusing situations for the package solver, leading to broken environments with undesirable effects like version constraint violations and dependency update/downgrade cascades (which I feel is exactly what the original design of features as an "environment-wide" mechanism tried to avoid).

Using conda features, the user would install a particular version of tensorflow like:

# This hypothetical command chorizo is not pretty
conda install -c conda-forge tensorflow-i7-feature tensorflow-mkl-feature tensorflow-xla-feature tensorflow-cudnn6-feature tensorflow=1.3.0

# I purposely prefix each feature tracker metapackage with "tensorflow".
# One could think we could just have a metapackage "mkl-feature".
# But that could be constraining with other packages coming to the same environment.
# For example, we could want to have in the same environment
#   - mkl-tensorflow 
#   - openblas-pytorch 

Instead, I think using different build strings is a simpler and neater way to go. Relevantly, this is what I'm currently playing with when generating an "optimized" tensorflow package.

Using the build string, the user could install a particular version of tensorflow like:

# This hypothetical command is, somehow, prettier
conda install -c conda-forge 'tensorflow=1.3.0=*i7*mkl*xla*cudnn6*'

# Note that build string constraints are based on string matching, so unfortunately
# the user has to remember the order of the substrings (e.g. i7 before xla).
# Build string matching is, of course, also prone to problems.
# When the number of variants grows, it will require some automation/templating
# to keep building and maintenance manageable.

Conda might evolve in the future to better support package variants (for example, by introducing new concepts for establishing relationships between packages, like the "provides" or "conflicts" verbs common in other package managers, or by allowing a variant of a package to be selected depending on the hardware available). It will for sure keep fixing problems involving constraint violations (the upcoming 4.4 version should already be better at that task).

Is conda-forge a possible home for specialized tensorflow builds?

For context, we are a young startup fighting to get a packaging solution that stays enough out of the way. We are still evaluating whether conda is part of that solution, and for it to be so, we need to ship performance-optimized packages. Now, if one jumps into conda packaging, my recommendation is to do it in the conda-forge pool.

For us, conda-forge is all about participating in a friendly open community all sweating together to endure the scientific software packaging torture. Automation, best practices, discussion and support, community-based package testing, improvement of recipes and building workflows, software stack consistency: it is all included in the welcome package.

conda-forge has yet to start providing mechanisms and best practices for building against CUDA. Also, MKL is right now out of conda-forge's scope. Another possible problem is the capped build time in the public CIs used to build these packages.

However, conda-forge can be a pretty neat channel to build upon. CUDA, cuDNN and MKL are provided in anaconda defaults. It is very easy to use a conda-forge-like nvidia-docker image to build packages, and one is free to run the builds on machines without time restrictions.
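
For illustration (the image tag is one of the CUDA tags of that era; treat the exact invocation as a sketch):

# Run a CUDA development image and build the recipe inside it;
# the nvidia-docker wrapper is only needed to also test on a GPU
nvidia-docker run --rm -it -v "$(pwd)":/recipe nvidia/cuda:8.0-cudnn6-devel-ubuntu14.04 bash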

We do all these things. We have a channel layered on top of conda-forge and build several performance-optimized packages internally, trying to suggest upstream changes when we deem them broadly useful. Then we have some simple scripts that let us automatically modify complex environments so that they contain the package variants relevant to each deployment's characteristics.

A user-facing example

This is a small proof-of-concept conda environment with a specialized tensorflow build.

# Install with:
#   conda env create -f ourapp-gpu.yaml

name: ourapp-gpu

channels:
  - loopbio
  - conda-forge
  - defaults

dependencies:
  - python=2
  - cudnn=6
  - tensorflow=1.3.0=*cudnn6*mkl*xla*

# Disclaimer: At the time of writing this works.
# But I will at some point delete these old test packages from anaconda.org.
njzjz commented

Is it possible to build tensorflow-gpu currently?

Ping everybody

It should now be possible to build tensorflow-gpu; we have the infrastructure set up for pytorch, should somebody want to attempt to build this out:

https://github.com/conda-forge/pytorch-cpu-feedstock

izahn commented

I haven't been able to get this recipe to build with GPU support, but I do have a working GPU build based on the Anaconda recipe at https://github.com/izahn/tensorflow-feedstock/tree/anaconda. It took two days to build on my laptop, and the resulting package is available at https://anaconda.org/izahn/tensorflow-base.

@izahn If you can share error logs, that might also bring us a bit forward here.

izahn commented

Thanks @xhochy, the issues I hit when trying to make a CUDA-enabled build of this recipe work were all path- and library-related. For example, sometimes I would get this:

<snip>
+ export 'TF_CUDA_COMPUTE_CAPABILITIES=3.5;5.0+PTX;6.0;6.1;7.0;7.5;8.0;8.6'
+ TF_CUDA_COMPUTE_CAPABILITIES='3.5;5.0+PTX;6.0;6.1;7.0;7.5;8.0;8.6'
+ export TF_NEED_CUDA=1
+ TF_NEED_CUDA=1
+ export TF_CUDA_VERSION=11.2
+ TF_CUDA_VERSION=11.2
+ export TF_CUDNN_VERSION=8
+ TF_CUDNN_VERSION=8
+ export TF_CUDA_CLANG=0
+ TF_CUDA_CLANG=0
+ export TF_DOWNLOAD_CLANG=0
+ TF_DOWNLOAD_CLANG=0
+ export TF_NEED_TENSORRT=0
+ TF_NEED_TENSORRT=0
+ export NCCL_ROOT_DIR=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620141478883/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac
+ NCCL_ROOT_DIR=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620141478883/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac
+ export USE_STATIC_NCCL=0
+ USE_STATIC_NCCL=0
+ export USE_STATIC_CUDNN=0
+ USE_STATIC_CUDNN=0
+ export CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
+ CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
+ export MAGMA_HOME=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620141478883/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac
+ MAGMA_HOME=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620141478883/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac
+ export TF_NCCL_VERSION=
+ TF_NCCL_VERSION=
+ export GCC_HOST_COMPILER_PATH=x86_64-conda-linux-gnu-cc
+ GCC_HOST_COMPILER_PATH=x86_64-conda-linux-gnu-cc
+ export TF_CUDA_PATHS=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620141478883/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac,/usr/local/cuda-11.2,/usr
+ TF_CUDA_PATHS=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620141478883/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac,/usr/local/cuda-11.2,/usr
+ sed -i -e '/PROTOBUF_INCLUDE_PATH/c\ ' .bazelrc
+ sed -i -e '/PREFIX/c\ ' .bazelrc
+ ./configure
You have bazel 3.1.0- (@non-git) installed.
Found CUDA 11.2 in:
    $PREFIX/lib
    /usr/local/cuda-11.2/targets/x86_64-linux/include
Found cuDNN 8 in:
    $PREFIX/lib
    $PREFIX/include


Invalid gcc path. x86_64-conda-linux-gnu-cc cannot be found.

After fiddling around for a while I progressed to:

[2,483 / 16,123] Compiling external/mkl_dnn/src/cpu/cpu_engine.cpp [for host]; 49s local ... (4 actions running)
ERROR: /home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/work/tensorflow/core/lib/db/BUILD:28:1: C++ compilation of rule '//tensorflow/core/lib/db:snapfn' failed (Exit 1): crosstool_wrapper_driver_is_not_gcc failed: error executing command 
  (cd /home/conda/.cache/bazel/_bazel_conda/7cbd9e1696cb737095e0b40a58edc7e3/execroot/org_tensorflow && \
  exec env - \
    CUDA_TOOLKIT_PATH=/usr/local/cuda-11.2 \
    GCC_HOST_COMPILER_PATH=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_build_env/bin/x86_64-conda-linux-gnu-gcc \
    PATH=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/work:/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_build_env/bin:/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac/bin:/opt/conda/condabin:/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_build_env:/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_build_env/bin:/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac:/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac/bin:/opt/conda/bin:/opt/conda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/conda/bin:/usr/local/cuda/bin \
    PWD=/proc/self/cwd \
    PYTHON_BIN_PATH=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac/bin/python \
    PYTHON_LIB_PATH=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac/lib/python3.9/site-packages \
    TF2_BEHAVIOR=1 \
    TF_CONFIGURE_IOS=0 \
    TF_CUDA_COMPUTE_CAPABILITIES=5.2,5.3,6.0,6.1,6.2,7.0,7.2,7.5,8.0,8.6 \
    TF_CUDA_PATHS=/home/conda/feedstock_root/build_artifacts/tensorflow-split_1620145774549/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac,/usr/local/cuda-11.2,/usr \
    TF_CUDA_VERSION=11.2 \
    TF_CUDNN_VERSION=8 \
    TF_NCCL_VERSION='' \
    TF_NEED_CUDA=1 \
    TF_SYSTEM_LIBS=absl_py,astor_archive,astunparse_archive,boringssl,com_github_googleapis_googleapis,com_github_googlecloudplatform_google_cloud_cpp,com_github_grpc_grpc,com_google_protobuf,curl,cython,dill_archive,flatbuffers,gast_archive,gif,icu,libjpeg_turbo,org_sqlite,png,pybind11,snappy,zlib \
  external/local_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF bazel-out/k8-opt/bin/tensorflow/core/lib/db/_objs/snapfn/snapfn.pic.d '-frandom-seed=bazel-out/k8-opt/bin/tensorflow/core/lib/db/_objs/snapfn/snapfn.pic.o' -iquote . -iquote bazel-out/k8-opt/bin -iquote external/org_sqlite -iquote bazel-out/k8-opt/bin/external/org_sqlite -iquote external/snappy -iquote bazel-out/k8-opt/bin/external/snappy -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -fPIC -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -fno-omit-frame-pointer -no-canonical-prefixes -fno-canonical-system-headers -DNDEBUG -g0 -O2 -ffunction-sections -fdata-sections -w -DAUTOLOAD_DYNAMIC_KERNELS '-march=nocona' '-mtune=haswell' '-std=c++14' -DEIGEN_AVOID_STL_ARRAY -Iexternal/gemmlowp -Wno-sign-compare '-ftemplate-depth=900' -fno-exceptions '-DGOOGLE_CUDA=1' '-DTENSORFLOW_USE_NVCC=1' '-DTENSORFLOW_USE_XLA=1' -msse3 -pthread -DSQLITE_OMIT_LOAD_EXTENSION -c tensorflow/core/lib/db/snapfn.cc -o bazel-out/k8-opt/bin/tensorflow/core/lib/db/_objs/snapfn/snapfn.pic.o)
Execution platform: @local_execution_config_platform//:platform
tensorflow/core/lib/db/snapfn.cc:79:10: fatal error: sqlite3ext.h: No such file or directory
 #include "sqlite3ext.h"
          ^~~~~~~~~~~~~~
compilation terminated.

I pushed this branch to https://github.com/izahn/tensorflow-feedstock/tree/cuda but it's a mess.
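
An aside on the first failure above: the second log shows GCC_HOST_COMPILER_PATH set to an absolute path, which suggests the "Invalid gcc path" error goes away once the build script exports the full compiler path instead of the bare binary name:

# Inferred from the second log above, not a documented fix;
# BUILD_PREFIX is conda-build's build-environment prefix
export GCC_HOST_COMPILER_PATH="${BUILD_PREFIX}/bin/x86_64-conda-linux-gnu-gcc"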

Hi, I am the creator of Cirun.io; "GPU enabled builds" caught my eye.

FWIW, I'll share my two cents. I created a service for problems like these; it basically runs custom machines (including GPUs) in GitHub Actions: https://cirun.io/

It is used in multiple open source projects needing GPU/custom-machine support.

It is fairly simple to set up: all you need is a cloud account (AWS or GCP) and a simple yaml file describing what kind of machines you need, and Cirun will spin up ephemeral machines on your cloud for GitHub Actions to run. It's native to the GitHub ecosystem, which means you can see logs/triggers in GitHub's interface itself, just like any GitHub Actions run.

Also, note that Cirun is free for open source projects (you only pay your cloud provider for machine usage).
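
For reference, a minimal .cirun.yml sketch (the key names follow my reading of Cirun's docs at the time; the instance type and machine image are placeholders to adapt):

runners:
  - name: gpu-runner
    cloud: aws
    instance_type: g4dn.xlarge   # placeholder GPU instance type
    machine_image: ami-xxxxxxxx  # placeholder AMI id
    labels:
      - cirun-gpu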

njzjz commented

Resolved by #157.