bump seldon charm 1.15 -> 1.17.1 for CKF release 1.8
orfeas-k opened this issue · 3 comments
This issue tracks the process of bumping the seldon-core-operator version from 1.15.0 to 1.17.1. For the process, we're following our internal release handbook document, which has a section on manifest file upgrades and image upgrades.
The changes that this process introduces should match the upstream ones in kubeflow/manifests#2532; they are expected to be template updates and image version bumps.
mlserver-huggingface
During the update from docker.io/charmedkubeflow/mlserver-huggingface:1.2.4_22.04_1 to seldonio/mlserver:1.3.5-huggingface, the output from the seldondeployment in the test_seldon_servers.py integration tests changed as follows:
```diff
 [
  ('id', 'None'),
  ('model_name', 'classifier'),
  ('model_version', 'v1'),
  ('outputs',
   [{'data': 'None',
     'datatype': 'BYTES',
     'name': 'output',
-    'parameters': {'content_type': 'str'},
?                                       -
+    'parameters': {'content_type': 'hg_jsonlist'},
?                                     +++++++++
     'shape': [1, 1]}]),
  ('parameters', {}),
 ]
```
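For context, diffs like the one above are what Python's difflib.ndiff produces when an assertion compares the expected and actual responses. A minimal sketch reproducing this particular change (the dicts below are illustrative, not the actual test fixtures):

```python
import difflib
from pprint import pformat

# Illustrative `outputs` entries before and after the image bump; only the
# `content_type` parameter differs (str -> hg_jsonlist).
old = {'data': 'None', 'datatype': 'BYTES', 'name': 'output',
       'parameters': {'content_type': 'str'}, 'shape': [1, 1]}
new = {'data': 'None', 'datatype': 'BYTES', 'name': 'output',
       'parameters': {'content_type': 'hg_jsonlist'}, 'shape': [1, 1]}

# ndiff marks removed lines with '-', added lines with '+', and emits '?'
# guide lines pointing at the characters that changed within a line.
diff = '\n'.join(difflib.ndiff(pformat(old).splitlines(),
                               pformat(new).splitlines()))
print(diff)
```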
The change in the output can be observed in this example notebook run here (from 1.2.4 to 1.3.5). This change was introduced upstream here, whereas the value used to be str (coded here).
mlserver-mlflow
Similar to huggingface, during the update from docker.io/charmedkubeflow/mlserver-mlflow:1.2.0_22.04_1 to seldonio/mlserver:1.3.5-mlflow, the output from the mlflow seldondeployment in the test_seldon_servers.py integration tests also changed as follows:
```diff
 [
  ('id', 'None'),
  ('model_name', 'classifier'),
  ('model_version', 'v1'),
  ('outputs',
   [{'data': [6.016145744177844],
     'datatype': 'FP64',
-    'name': 'output-1',
-    'parameters': {'content_type': 'np'},
+    'name': 'predict',
+    'parameters': None,
-    'shape': [1, 1]}]),
?               ---
+    'shape': [1]}]),
-  ('parameters', {'content_type': 'np'}),
+  ('parameters', None),
 ]
```
Unfortunately, the upstream example doesn't include the expected output for this case, so there is nothing to verify against, and the upstream tests expect output-1 as the name, though they use different request data.
With that in mind, and since the Deployment starts successfully and returns the expected prediction 6.016145744177844, I suggest we update our expected test output to match the new outputs and keep this change in mind for the future.
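A minimal sketch of what the updated expectation could look like (the `response` dict and the helper names are illustrative, not the actual code in test_seldon_servers.py); comparing the float prediction with a tolerance rather than exact string equality would also make the check a bit more robust across future image bumps:

```python
import math

# Illustrative response matching the new mlserver-mlflow (1.3.5) output shape.
response = {
    "model_name": "classifier",
    "model_version": "v1",
    "id": "None",
    "parameters": None,
    "outputs": [{"name": "predict", "shape": [1], "datatype": "FP64",
                 "parameters": None, "data": [6.016145744177844]}],
}

# Pin down the structural fields exactly, but check the prediction
# with a tolerance.
output = response["outputs"][0]
assert output["name"] == "predict"        # was 'output-1' with 1.2.0
assert output["shape"] == [1]             # was [1, 1]
assert output["parameters"] is None       # was {'content_type': 'np'}
assert math.isclose(output["data"][0], 6.016145744177844, rel_tol=1e-9)
```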