Increase Analysers' test coverage
arbreezy opened this issue · 4 comments
K8sGPT analysers are the first step of a K8sGPT analysis: they scan K8s clusters and identify issues with K8s resources.
To gain more confidence, every analyser should come with its own unit tests covering the various use cases we aim to identify.
A few analysers currently have limited or no unit tests. The goal of this work is to increase the code coverage of K8sGPT analysers by mocking a K8s environment, e.g. for the StatefulSet analyser.
Here is a non-exhaustive list, but a good starting point, of analysers where unit tests should be added or improved (see the sketch after this list for the general mocking approach):
- validating webhook
- pvc
- replicaset
- events
- logs analyser
- analyzers' wrapper functions
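
As a rough illustration of the mocking approach, here is a minimal sketch of a test that seeds a fake clientset from `k8s.io/client-go/kubernetes/fake` with a broken StatefulSet. It does not use the project's actual analyser API; a real test would invoke `StatefulSetAnalyzer` and assert on the results it returns, and the resource names used here are made up:

```go
package analyzer_test

import (
	"context"
	"testing"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

// Sketch only: seed a fake clientset with a StatefulSet whose headless
// Service is intentionally missing, one of the failure cases a
// StatefulSet analyser is expected to flag.
func TestStatefulSetWithMissingService(t *testing.T) {
	client := fake.NewSimpleClientset(&appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "web",
			Namespace: "default",
		},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "missing-service", // hypothetical name, never created below
		},
	})

	// In the real test this is where the project's StatefulSetAnalyzer would be
	// called with the fake client; here we only show that the fake clientset
	// behaves like a K8s API server for the seeded objects.
	sts, err := client.AppsV1().StatefulSets("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		t.Fatalf("listing statefulsets: %v", err)
	}
	if len(sts.Items) != 1 {
		t.Fatalf("expected 1 statefulset, got %d", len(sts.Items))
	}

	// The Service referenced by spec.serviceName was never added to the fake
	// clientset, so the lookup fails, which is the condition the analyser
	// should report.
	_, err = client.CoreV1().Services("default").Get(context.TODO(), sts.Items[0].Spec.ServiceName, metav1.GetOptions{})
	if err == nil {
		t.Fatal("expected the referenced service to be missing")
	}
}
```

The same pattern applies to the other analysers in the list: pre-populate the fake clientset with objects that exercise a specific failure path, run the analyser against it, and assert on what it reports.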
Hey @arbreezy
I was recently testing the `statefulset.go` file, and running `statefulset_test.go` gave me the errors below:
```
# command-line-arguments [command-line-arguments.test]
./statefulset_test.go:38:25: undefined: StatefulSetAnalyzer
./statefulset_test.go:65:25: undefined: StatefulSetAnalyzer
./statefulset_test.go:130:25: undefined: StatefulSetAnalyzer
./statefulset_test.go:176:25: undefined: StatefulSetAnalyzer
FAIL command-line-arguments [build failed]
FAIL
```
Could you please specify the procedure to run the tests correctly (I mean how to run the existing tests, if any)?
Hey @Ishani217, you can use the Makefile to run the tests: `make test`. The `undefined: StatefulSetAnalyzer` errors appear when only the test file is passed to `go test`; the file is then compiled on its own as `command-line-arguments`, so the rest of the analyser package, where `StatefulSetAnalyzer` is defined, never gets built.
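For reference, running the tests from the repository root works as well, since the Go toolchain then compiles the whole package rather than a single file (the `pkg/analyzer` path below is an assumption about the repository layout):

```sh
# Run the full test suite via the Makefile target mentioned above
make test

# Or run only the analyser package tests directly
go test ./pkg/analyzer/...
```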
Hello everyone 👋,
I have a basic understanding of Linux, Docker, Helm, Kubernetes, Go, version control, and GitHub Actions. I participated in GSoC 2023 with CCExtractor, working on the open-source flood-mobile project. Now, I plan to join the k8sgpt project through the LFX Mentorship Program.
Issue #889 caught my attention as it aligns with my skills and offers a great opportunity to gain hands-on experience with real-world open-source codebases. I've started exploring the provided resources and reviewing the issue. I'm excited about the chance to make meaningful contributions.
Looking forward to learning from the k8sgpt community and contributing to this valuable project.