kubernetes/apimachinery

Decoding blank first line followed by document separator


What Happened?

When the first line of a YAML document is blank, decoding does not work properly. By "not properly" I mean that it does not return an error, but it also does not decode into the desired object.

Example:

// extv1 is k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,
// yaml is k8s.io/apimachinery/pkg/util/yaml, and crd is an io.Reader over the manifest.
// For the CRD I used https://github.com/kubermatic/kubermatic/blob/36282c9f0bda74b2543be05a3317201cf0604592/charts/kubermatic-operator/crd/k8c.io/kubermatic.k8c.io_clusters.yaml
// but anything that starts with a blank line seems to cause the issue.
crdv1 := &extv1.CustomResourceDefinition{}
dec := yaml.NewYAMLOrJSONDecoder(crd, 4096)
err := dec.Decode(crdv1) // returns no error, but also does not decode the object into crdv1
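
For completeness, here is a self-contained version of the reproduction. This is my construction, not from the issue: the manifest is inlined and the target is simplified to a plain map, but it has the same shape (blank first line, then ---) that controller-gen emits.

package main

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/util/yaml"
)

func main() {
	// Blank first line followed by the document separator.
	manifest := "\n---\napiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n"
	obj := map[string]interface{}{}
	dec := yaml.NewYAMLOrJSONDecoder(strings.NewReader(manifest), 4096)
	err := dec.Decode(&obj)
	// Expected given the bug: err is nil and obj stays empty, because only
	// the leading "\n" is treated as the first document.
	fmt.Printf("err=%v obj=%v\n", err, obj)
}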

Normally I would not mind too much and would just remove the blank line. However, controller-gen generates all of its CRDs with a leading blank line, which makes this a bit troublesome (a possible workaround is sketched below).
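
Until this is fixed, one workaround I can think of (my sketch, not from the issue; crdFile stands in for whatever io.Reader the manifest comes from) is to strip the leading blank lines before handing the data to the decoder:

// Requires "bytes" and "io" from the standard library in addition to the
// packages above. crdFile is a hypothetical io.Reader over the manifest.
data, err := io.ReadAll(crdFile)
if err != nil {
	return err
}
data = bytes.TrimLeft(data, "\r\n") // drop the leading blank line(s)
dec := yaml.NewYAMLOrJSONDecoder(bytes.NewReader(data), 4096)
err = dec.Decode(crdv1)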

Underlying issue

The issue occurs if the first line is blank and is followed by the YAML document separator ---. In this case, the following happens in YAMLReader's Read():

  1. The blank line (or more specifically, the \n character) is written into the buffer.
  2. Afterwards, the document separator is recognized, and because the buffer is not empty (it already contains the "\n"), the Read func returns. Unfortunately, at this point only the blank line is returned (see the sketch after this list).
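
A simplified sketch of that control flow (my paraphrase, not the verbatim apimachinery source; it uses "bytes" and "strings" from the standard library):

// Lines are read one at a time; a "---" line flushes whatever has been
// buffered so far as one complete document.
func readDocument(lines []string) string {
	var buf bytes.Buffer
	for _, line := range lines {
		if strings.HasPrefix(line, "---") && buf.Len() != 0 {
			return buf.String() // step 2: returns with only the "\n" buffered
		}
		buf.WriteString(line) // step 1: the blank line is written here first
	}
	return buf.String()
}

With input []string{"\n", "---\n", "apiVersion: v1\n"} this returns just "\n", so the decoder unmarshals an effectively empty document and never reaches the content behind the separator in that Decode call.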

I will look into a fix now, but wanted to file the issue right away, as I had first opened it in the wrong repository.

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.