v1.1.0 binary does not run on Debian Bullseye
Closed this issue · 15 comments
The published v1.1.0 binary for AMD64 requires GLIBC 2.32 and 2.34 symbols, but Debian Bullseye only has 2.31.
Debian Bookworm was only published ~2 weeks ago, so in my opinion it isn't reasonable to already expect everyone to be on the latest version.
Error message:
./kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./kubectl-hns)
./kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./kubectl-hns)
To reproduce with Docker, put the following in a Dockerfile and run docker build .:
FROM debian:bullseye
ADD https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/v1.1.0/kubectl-hns_linux_amd64 /kubectl-hns
RUN chmod +x ./kubectl-hns
RUN ./kubectl-hns help
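As a quick check, the GLIBC version symbols a downloaded binary requires can be listed with standard binutils tools and compared against the host's glibc. A rough sketch (the asset name follows the release URL above):
# List the GLIBC version symbols the plugin binary depends on
objdump -T ./kubectl-hns_linux_amd64 | grep -o 'GLIBC_[0-9.]*' | sort -Vu
# Compare against the glibc installed on the host
ldd --version | head -n1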
The simplest fix is probably to use an older OS to compile the binary.
I was previously running 1.1.0-rc2, which works. I just tested rc3 and found that it has the same issue.
So if you updated the build containers in February, just after publishing rc2, then that is likely the change that introduced this incompatibility.
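One way to implement the "older OS" suggestion above is to run the release build inside a container whose glibc matches the oldest distro you want to support, e.g. Bullseye. A minimal sketch, assuming the plugin builds with a plain go build; ./cmd/kubectl is an assumed package path, so check the repo's Makefile for the real target:
FROM golang:1.20-bullseye
WORKDIR /src
COPY . .
# Linking on Bullseye ties the binary to glibc 2.31, so it also runs on Bullseye and newer
# ./cmd/kubectl is an assumption; consult the Makefile for the actual plugin package
RUN go build -o /out/kubectl-hns ./cmd/kubectl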
Running into the same thing with GitHub self-hosted runners.
/home/runner/.krew/bin/kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/runner/.krew/bin/kubectl-hns)
/home/runner/.krew/bin/kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/runner/.krew/bin/kubectl-hns)
Is there a fix or workaround?
Hi, same problem here on Rocky Linux 8.7.
OK, I'll go see if I can downgrade somehow.
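If downgrading is the immediate workaround, an earlier release asset can be fetched the same way as the v1.1.0 one above. The rc2 tag name below is an assumption based on the earlier comment in this thread, so adjust it to whatever actually appears on the releases page:
# Hypothetical earlier tag that predates the glibc bump; verify it on the releases page
curl -fsSL -o kubectl-hns \
  https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/v1.1.0-rc2/kubectl-hns_linux_amd64
chmod +x kubectl-hns
./kubectl-hns help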
Sorry I haven't gotten to this yet :( Will have another look.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hi, same problem here.
[root@cs-xndb1 ~]# kubectl hns --help
/usr/local/sbin/kubectl-hns: /lib64/libc.so.6: version 'GLIBC_2.34' not found (required by /usr/local/sbin/kubectl-hns)
/usr/local/sbin/kubectl-hns: /lib64/libc.so.6: version 'GLIBC_2.32' not found (required by /usr/local/sbin/kubectl-hns)
[root@cs-xndb1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
@mist714 raised a PR for this, which was merged on 14 November 2023 (PR 236); it builds the Linux binaries with CGO_ENABLED=0 and so should avoid these glibc issues entirely.
However, this merge came after v1.1.0 was built and released, so the published binaries are still linked against glibc.
For the time being, people may want to try building from source themselves.
Maintainers, could somebody please cut an interim v1.1.1 (or similar) with at least this change included, to help folks out?
(I can confirm that a binary built with CGO_ENABLED=0 worked for my use case.)
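Until a new release is cut, a build-from-source sketch along those lines (the package path is an assumption; the repo's Makefile is the authoritative source for the real build command):
git clone https://github.com/kubernetes-sigs/hierarchical-namespaces.git
cd hierarchical-namespaces
# CGO_ENABLED=0 produces a statically linked binary with no glibc version requirements
# ./cmd/kubectl is an assumed path; check the Makefile for the actual plugin target
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o kubectl-hns ./cmd/kubectl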
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looks like little interest in this feature.
@iamasmith It's fixed in HEAD, just need a new release.
@joekohlsdorf yes, I think it was me that pointed this out