Add Python 3.12 to the CI
nzw0301 opened this issue · 12 comments
Motivation
Sub-task of optuna/optuna#5000.
Description
See the title of this issue.
Alternatives (optional)
Additional context (optional)
Here is a list of the target yaml files under .github/workflows:

- aim.yml
- allennlp.yml
- base.yml
- catboost.yml
- chainer.yml
- checks.yml
- dashboard.yml
- dask.yml
- dask_ml.yml
- fastai.yml
- haiku.yml
- hiplot.yml
- hydra.yml
- keras.yml
- lightgbm.yml
- mlflow.yml
- multi_objective.yml
- mxnet.yml
- pytorch.yml
- ray.yml
- rl.yml
- samplers.yml
- skimage.yml
- sklearn.yml
- stale.yml
- tensorboard.yml
- tensorflow.yml
- terminator.yml
- tfkeras.yml
- visualization.yml
- wandb.yml
- xgboost.yml
I've checked the listings against those addressed by #214 and have not removed the deprecated libraries' examples yet.
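For reference, the per-file change is typically adding "3.12" to the job's Python version matrix. A minimal sketch, assuming the usual actions/setup-python pattern; the exact matrix keys and version lists in each workflow file may differ:

```yaml
# Sketch only: the real files under .github/workflows may structure
# their jobs and matrices differently.
jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]  # "3.12" is the addition
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
```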
This issue has not seen any recent activity.
The following libraries have already been archived and will not support Python 3.12:
- allennlp
- chainer
- mxnet
The following libraries do not (officially) support Python 3.12 yet:
- aim aimhubio/aim#3111
- fastai, haiku, lightgbm, mlflow (their CI does not run on Python 3.11 or 3.12)
- keras, tensorboard, tensorflow, tfkeras tensorflow/tensorflow#62003
- ray ray-project/ray#40211
The following libraries have recently added support for Python 3.12:
- catboost catboost/catboost#2510
- dask dask/dask#10544
- pytorch pytorch/pytorch#110436
- wandb wandb/wandb#6468
I partially investigated the current status of the unaddressed items above in my local environment (macOS):
tfkeras
It works with Python 3.12, but the example shows a warning message on every trial:
python tfkeras/tfkeras_integration.py
...
2024-08-03 14:53:51.516660: W tensorflow/core/kernels/data/cache_dataset_ops.cc:913] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
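The warning's suggested fix is to cache after truncation (`dataset.take(k).cache().repeat()` rather than `dataset.cache().take(k).repeat()`). A toy sketch of why, using plain Python generators instead of TensorFlow (all function names here are illustrative, not tf.data APIs):

```python
def repeat_with_cache_then_take(n, k, passes):
    """Models dataset.cache().take(k).repeat(): the cache is only reusable
    once it has seen the WHOLE upstream dataset, so stopping after k < n
    elements leaves it partial; it is discarded and the source is re-read
    on every pass (the warning above)."""
    reads = 0
    cache, complete = [], False
    for _ in range(passes):
        if complete:
            stream = iter(cache)
        else:
            cache = []  # discard the partially cached contents

            def filling():
                nonlocal reads, complete
                for i in range(n):
                    reads += 1        # count reads from the (slow) source
                    cache.append(i)
                    yield i
                complete = True       # only reached if fully consumed

            stream = filling()
        for _ in zip(range(k), stream):  # take(k)
            pass
    return reads


def repeat_with_take_then_cache(n, k, passes):
    """Models dataset.take(k).cache().repeat(): the cache only ever needs
    the k truncated elements, so it completes on the first pass and later
    passes never touch the source."""
    reads = 0
    cache, complete = [], False
    for _ in range(passes):
        if not complete:
            for i in range(k):
                reads += 1
                cache.append(i)
            complete = True
        else:
            for _ in cache:
                pass
    return reads


print(repeat_with_cache_then_take(n=100, k=5, passes=3))  # 15: re-read each pass
print(repeat_with_take_then_cache(n=100, k=5, passes=3))  # 5: read once, then cached
```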
Similarly, tfkeras/tfkeras_simple.py shows a warning message at the first model construction:
python tfkeras/tfkeras_simple.py
[I 2024-08-03 14:59:38,046] A new study created in memory with name: no-name-b1748d29-9666-4ad1-96d8-2d95c2ea4b00
/opt/homebrew/Caskroom/miniconda/base/envs/optuna-312/lib/python3.12/site-packages/keras/src/layers/convolutional/base_conv.py:107: UserWarning: Do not pass an `input_shape`/`input_dim` argument to a layer. When using Sequential models, prefer using an `Input(shape)` object as the first layer in the model instead.
super().__init__(activity_regularizer=activity_regularizer, **kwargs)
ray
Still an ongoing task according to ray-project/ray#40211.
lightgbm
It supports Python 3.12, but similar to the tfkeras case, the example shows a warning message:
python lightgbm/lightgbm_tuner_cv.py
[I 2024-08-03 15:04:28,351] A new study created in memory with name: no-name-a3244a73-6d43-41e9-954e-1f3494e21ac4
feature_fraction, val_score: inf: 0%| | 0/7 [00:00<?, ?it/s]
/optuna-312/lib/python3.12/site-packages/sklearn/model_selection/_split.py:91: UserWarning: The groups parameter is ignored by KFold
warnings.warn(
Training until validation scores don't improve for 100 rounds
and the terminator example stopped at trial 19 locally.