Attribute `s` of `resource.Quantity` is empty
BruceDu521 opened this issue · 4 comments
API Version: k8s.io/api v0.29.1
Golang Version: 1.21
I wrote a function that builds an appsv1.Deployment manually, like this:
// imports used by this snippet:
import (
  appsv1 "k8s.io/api/apps/v1"
  corev1 "k8s.io/api/core/v1"
  "k8s.io/apimachinery/pkg/api/resource"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func buildDeployment(cpu, mem string) appsv1.Deployment {
  replicas := new(int32)
  *replicas = 1
  deployment := appsv1.Deployment{
    ObjectMeta: metav1.ObjectMeta{
      Name: "my-deployment",
    },
    Spec: appsv1.DeploymentSpec{
      Replicas: replicas,
      Selector: &metav1.LabelSelector{
        MatchLabels: map[string]string{
          "app": "my-app",
        },
      },
      Template: corev1.PodTemplateSpec{
        ObjectMeta: metav1.ObjectMeta{
          Labels: map[string]string{
            "app": "my-app",
          },
        },
        Spec: corev1.PodSpec{
          Containers: []corev1.Container{
            {
              Name:  "my-container",
              Image: "my-image",
              Resources: corev1.ResourceRequirements{
                Limits: corev1.ResourceList{
                  corev1.ResourceMemory: resource.MustParse(mem),
                  corev1.ResourceCPU:    resource.MustParse(cpu),
                },
              },
            },
          },
        },
      },
    },
  }
  return deployment
}
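(Aside, not part of my original code: since cpu and mem arrive as strings, resource.MustParse panics on a malformed value. A hypothetical helper built on resource.ParseQuantity, which returns an error instead, might be preferable when the inputs are not trusted; the sketch below also needs the "fmt" import.)

// parseLimits is a hypothetical sketch: it validates the inputs and
// returns an error instead of panicking like resource.MustParse does.
func parseLimits(cpu, mem string) (corev1.ResourceList, error) {
  cpuQty, err := resource.ParseQuantity(cpu)
  if err != nil {
    return nil, fmt.Errorf("invalid cpu %q: %w", cpu, err)
  }
  memQty, err := resource.ParseQuantity(mem)
  if err != nil {
    return nil, fmt.Errorf("invalid memory %q: %w", mem, err)
  }
  return corev1.ResourceList{
    corev1.ResourceCPU:    cpuQty,
    corev1.ResourceMemory: memQty,
  }, nil
}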
I then ran a simple test and noticed something odd:
func TestBuildDeployment(t *testing.T) {
  deployment := buildDeployment("4", "8Gi")
  mem := deployment.Spec.Template.Spec.Containers[0].Resources.Limits["memory"]
  t.Logf("%+v\n", mem)
  deployment2 := buildDeployment("4", "12Gi")
  mem2 := deployment2.Spec.Template.Spec.Containers[0].Resources.Limits["memory"]
  t.Logf("%+v\n", mem2)
}
// output:
=== RUN TestBuildDeployment
e:\..skip..\k8sapp_test.go:12: {i:{value:8589934592 scale:0} d:{Dec:<nil>} s: Format:BinarySI}
e:\..skip..\k8sapp_test.go:16: {i:{value:12884901888 scale:0} d:{Dec:<nil>} s:12Gi Format:BinarySI}
--- PASS: TestBuildDeployment (0.00s)
Notice the `s` field of the Quantity: for 8Gi it is empty, but for 12Gi it shows up.
When I changed 8Gi to 16Gi or 32Gi, `s` was an empty string as well.
I have no idea why this happens, please help me out, thank you!
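For reference, my guess (which is an assumption, not something I verified in the source) is that `s` is only a lazily computed cache of the canonical string, in which case a hypothetical sanity check like the one below should pass: String() recomputes the canonical form on demand, and Cmp compares the numeric values rather than the cached string.

func TestQuantityCanonicalString(t *testing.T) {
  // Even though the internal s field is empty right after parsing "8Gi",
  // String() should recompute (and cache) the canonical form.
  q := resource.MustParse("8Gi")
  if got := q.String(); got != "8Gi" {
    t.Errorf("expected canonical string 8Gi, got %q", got)
  }
  // Cmp compares the numeric values, so 8Gi and 8192Mi compare equal
  // regardless of what is (or is not) cached in s.
  other := resource.MustParse("8192Mi")
  if q.Cmp(other) != 0 {
    t.Errorf("expected 8Gi and 8192Mi to compare equal")
  }
}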
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.