kubernetes/apiextensions-apiserver

use of vendored package not allowed

domcar opened this issue · 8 comments

Hi guys,

Maybe my problem is easy to solve, but I can't get it working.
My goal is just to list/get the CRDs already present in my cluster (not created by me).

I wrote the following code:

```go
package main

import (
	"fmt"

	restext "k8s.io/apiextensions-apiserver/vendor/k8s.io/client-go/rest"
	ext "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1beta1"
)

var k8sext, _ = newK8Sextclient() // k8s client

func main() {
	fmt.Println("test")
}

func newK8Sextclient() (*ext.ApiextensionsV1beta1Client, error) {
	configext, err := restext.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
	return ext.NewForConfig(configext)
}
```

but I get:

```
main.go:5:5: use of vendored package not allowed
```

What am I doing wrong?

Instead, if I use the package `k8s.io/client-go/rest`, I get this other error:

```
./main.go:18:28: cannot use configext (type *"k8s.io/client-go/rest".Config) as type *"k8s.io/apiextensions-apiserver/vendor/k8s.io/client-go/rest".Config in argument to "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1beta1".NewForConfig
```

Thanks for your help

@domcar try `restext "k8s.io/client-go/rest"` instead of `restext "k8s.io/apiextensions-apiserver/vendor/k8s.io/client-go/rest"`.
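For reference, a minimal corrected sketch of the original snippet might look like this. It assumes a GOPATH setup in which the nested vendor directory has been removed, so that both your code and apiextensions-apiserver resolve the same copy of `k8s.io/client-go`; it uses the pre-context `List(metav1.ListOptions{})` signature from the v1beta1-era clientset, and `rest.InClusterConfig()` only works when the program runs inside a pod:

```go
package main

import (
	"fmt"

	ext "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/typed/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest" // top-level import, not the vendored copy
)

func main() {
	// Build a config from the pod's service account; this only
	// succeeds when running inside the cluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}

	client, err := ext.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}

	// List the CRDs already present in the cluster and print their names.
	crds, err := client.CustomResourceDefinitions().List(metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
```

Because both imports now resolve to the single top-level `k8s.io/client-go`, the `*rest.Config` type passed to `NewForConfig` matches and the mismatch error goes away.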

Ah, saw the edit now.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Is there a solution to this problem? I'm trying something similar and getting a very similar error:

```
cannot use config (type *"project-provision/vendor/k8s.io/client-go/rest".Config) as type *"project-provision/vendor/github.com/openshift/client-go/vendor/k8s.io/client-go/rest".Config in argument to "project-provision/vendor/github.com/openshift/client-go/project/clientset/versioned/typed/project/v1".NewForConfig
```

codwu commented

same problem +1

same problem +1

I deleted the vendor directory in `src/k8s.io/apiextensions-apiserver` to work around this problem.
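In a GOPATH layout, that workaround is a one-liner (the path below assumes the package was fetched with `go get`; adjust it to your setup):

```shell
# Remove the nested vendor tree so both your code and apiextensions-apiserver
# compile against the single top-level copy of k8s.io/client-go in GOPATH.
rm -rf "$GOPATH/src/k8s.io/apiextensions-apiserver/vendor"
```

With Go modules, this class of nested-vendor type mismatch should no longer occur, since vendor directories inside dependencies are ignored by module-aware builds.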