Deployment-Report: Short-lived DaemonSets/Pods can cause an intermittent TypeError
Short-lived DaemonSets/Pods can cause an intermittent TypeError when sorting the data dictionary used to generate the deployment report. The failure yields the following traceback:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/Deployment/DailyAutomatedDeployments/Viya4Deploy/autodeploy-viya4/viya-arkcd/viya-ark.py", line 139, in <module>
main(sys.argv[1:])
File "/var/lib/jenkins/workspace/Deployment/DailyAutomatedDeployments/Viya4Deploy/autodeploy-viya4/viya-arkcd/viya-ark.py", line 77, in main
command.run(argv[1:])
File "/var/lib/jenkins/workspace/Deployment/DailyAutomatedDeployments/Viya4Deploy/autodeploy-viya4/viya-arkcd/deployment_report/deployment_report.py", line 48, in run
main(argv)
File "/var/lib/jenkins/workspace/Deployment/DailyAutomatedDeployments/Viya4Deploy/autodeploy-viya4/viya-arkcd/deployment_report/deployment_report.py", line 146, in main
data_file, html_file = sas_deployment_report.write_report(
File "/var/lib/jenkins/workspace/Deployment/DailyAutomatedDeployments/Viya4Deploy/autodeploy-viya4/viya-arkcd/deployment_report/model/viya_deployment_report.py", line 549, in write_report
data_json = json.dumps(self._report_data, cls=KubernetesObjectJSONEncoder, indent=4, sort_keys=True)
File "/opt/rh/rh-python38/root/usr/lib64/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/opt/rh/rh-python38/root/usr/lib64/python3.8/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/opt/rh/rh-python38/root/usr/lib64/python3.8/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/opt/rh/rh-python38/root/usr/lib64/python3.8/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/opt/rh/rh-python38/root/usr/lib64/python3.8/json/encoder.py", line 353, in _iterencode_dict
items = sorted(dct.items())
TypeError: '<' not supported between instances of 'NoneType' and 'str'
In diagnosing the problem, it appears the Pods are maintained for only about 15-30 seconds, and the DaemonSet that spawns them is alive for even less time. If the deployment-report is run at just the wrong moment, the Pods will be gathered but the DaemonSet will have already been removed, causing the error above.
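The failure mode is reproducible outside of viya-ark: with `sort_keys=True` and an `indent` value, `json.dumps` uses the pure-Python encoder, which sorts each dictionary's raw keys before encoding them, and a `None` key cannot be ordered against `str` keys. A minimal sketch of the failure (the `None` key stands in for where the removed DaemonSet's name would be; the exact shape of the viya-ark report data is an assumption here):

```python
import json

# A None key alongside str keys; an assumption modeled on this issue,
# not the exact structure of the viya-ark report dictionary.
report_data = {"sas-some-pod": {}, None: {}}

try:
    # indent forces the pure-Python encoder, matching the traceback above;
    # sorted(dct.items()) then compares the None key against a str key.
    json.dumps(report_data, indent=4, sort_keys=True)
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'NoneType' and 'str'
```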
Workaround
Waiting a few seconds and retrying the deployment-report allows for a successful execution. There are extended periods during which neither the problematic Pods nor the DaemonSet exists; if the report is run during such a window, no error is raised.
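A simple wrapper can automate this wait-and-retry workaround. The sketch below is hypothetical: the argv, retry count, and delay are illustrative values, not viya-ark defaults, and any report options (namespace, ingress, and so on) are elided.

```python
import subprocess
import time

def run_report_with_retry(retries: int = 3, delay_seconds: int = 30) -> None:
    """Retry the deployment-report until it exits cleanly or retries run out."""
    for attempt in range(1, retries + 1):
        # Report options are elided; pass whatever your deployment requires.
        result = subprocess.run(["python3", "viya-ark.py", "deployment-report"])
        if result.returncode == 0:
            return
        if attempt < retries:
            # Wait for the short-lived Pods/DaemonSet window to pass.
            time.sleep(delay_seconds)
    raise RuntimeError(f"deployment-report failed after {retries} attempts")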
This issue is resolved in release 1.3.0.
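For reference, one defensive approach to this class of bug is to prune `None` keys from the data before serializing, so that `sort_keys=True` only ever compares `str` keys. This is a hypothetical sketch of that idea, not the actual change shipped in release 1.3.0:

```python
def drop_none_keys(obj):
    """Recursively remove dict entries whose key is None so that
    json.dumps(..., sort_keys=True) can order the remaining str keys."""
    if isinstance(obj, dict):
        return {k: drop_none_keys(v) for k, v in obj.items() if k is not None}
    if isinstance(obj, list):
        return [drop_none_keys(v) for v in obj]
    return obj
```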