kubevpn leave failed: unmarshal json patch
killerddd3 opened this issue · 5 comments
killerddd3 commented
os: windows11 23H2
kubevpn: 2.2.3
k8s: 1.20.9
killerddd3 commented
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    artifact.spinnaker.io/location: yqn-sit
    artifact.spinnaker.io/name: yqn-system-sit
    artifact.spinnaker.io/type: kubernetes/deployment
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '2'
    kubevpn-probe-restore-patch: >-
      W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL3RlbXBsYXRlL3NwZWMvY29udGFpbmVycy8wL3JlYWRpbmVzc1Byb2JlIiwidmFsdWUiOiIifSx7Im9wIjoicmVwbGFjZSIsInBhdGgiOiIvc3BlYy90ZW1wbGF0ZS9zcGVjL2NvbnRhaW5lcnMvMC9saXZlbmVzc1Byb2JlIiwidmFsdWUiOiIifSx7Im9wIjoicmVwbGFjZSIsInBhdGgiOiIvc3BlYy90ZW1wbGF0ZS9zcGVjL2NvbnRhaW5lcnMvMC9zdGFydHVwUHJvYmUiLCJ2YWx1ZSI6IiJ9LHsib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL3RlbXBsYXRlL3NwZWMvY29udGFpbmVycy8xL3JlYWRpbmVzc1Byb2JlIiwidmFsdWUiOiIifSx7Im9wIjoicmVwbGFjZSIsInBhdGgiOiIvc3BlYy90ZW1wbGF0ZS9zcGVjL2NvbnRhaW5lcnMvMS9saXZlbmVzc1Byb2JlIiwidmFsdWUiOiIifSx7Im9wIjoicmVwbGFjZSIsInBhdGgiOiIvc3BlYy90ZW1wbGF0ZS9zcGVjL2NvbnRhaW5lcnMvMS9zdGFydHVwUHJvYmUiLCJ2YWx1ZSI6IiJ9XQ==
    meta.helm.sh/release-name: yqn-system-sit
    meta.helm.sh/release-namespace: yqn-sit
    moniker.spinnaker.io/application: yqn-apiteam8761
    moniker.spinnaker.io/cluster: deployment yqn-system-sit
  labels:
    app: yqn-system-sit
    app.kubernetes.io/managed-by: spinnaker
    app.kubernetes.io/name: yqn-apiteam8761
    appGroup: yqn
    appName: yqn-system
    pod-template-hash: 6cdfbcfc9f
    program: java
    prometheus: java
    workEnv: sit
  name: yqn-system-sit-6cdfbcfc9f
  namespace: yqn-sit
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: yqn-system-sit
      uid: 67e252bf-7eba-4c19-bc1e-714cf3ffaeb1
  resourceVersion: '225432145'
spec:
  replicas: 0
  selector:
    matchLabels:
      app: yqn-system-sit
      pod-template-hash: 6cdfbcfc9f
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: yqn-sit
        artifact.spinnaker.io/name: yqn-system-sit
        artifact.spinnaker.io/type: kubernetes/deployment
        kubectl.kubernetes.io/restartedAt: '2024-03-28T10:29:06+08:00'
        moniker.spinnaker.io/application: yqn-apiteam8761
        moniker.spinnaker.io/cluster: deployment yqn-system-sit
      creationTimestamp: null
      labels:
        app: yqn-system-sit
        app.kubernetes.io/managed-by: spinnaker
        app.kubernetes.io/name: yqn-apiteam8761
        appGroup: yqn
        appName: yqn-system
        pod-template-hash: 6cdfbcfc9f
        program: java
        prometheus: java
        workEnv: sit
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: test
                    operator: In
                    values:
                      - test1
              weight: 100
      containers:
        - env:
            - name: APPNAME
              value: yqn-system
            - name: APPGROUP
              value: yqn
            - name: workEnv
              value: sit
            - name: JAVA_OPTS
              value: '-Xms512m -Xmx1536m'
          image: >-
            yldc-docker.pkg.coding.yili.com/yl-pasture/docker/yqn-system:esit-bdevelop-dairy-milkhall-tag-472a6f6-434-2403281031
          imagePullPolicy: IfNotPresent
          name: yqn-system-sit
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: '4'
              memory: 2Gi
            requests:
              cpu: 20m
              memory: 768Mi
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        - args:
            - >-
              sysctl -w net.ipv4.ip_forward=1
              sysctl -w net.ipv6.conf.all.disable_ipv6=0
              sysctl -w net.ipv6.conf.all.forwarding=1
              sysctl -w net.ipv4.conf.all.route_localnet=1
              update-alternatives --set iptables /usr/sbin/iptables-legacy
              iptables -F
              ip6tables -F
              iptables -P INPUT ACCEPT
              ip6tables -P INPUT ACCEPT
              iptables -P FORWARD ACCEPT
              ip6tables -P FORWARD ACCEPT
              iptables -t nat -A PREROUTING ! -p icmp -j DNAT --to ${LocalTunIPv4}
              ip6tables -t nat -A PREROUTING ! -p icmp -j DNAT --to ${LocalTunIPv6}
              iptables -t nat -A POSTROUTING ! -p icmp -j MASQUERADE
              ip6tables -t nat -A POSTROUTING ! -p icmp -j MASQUERADE
              iptables -t nat -A OUTPUT -o lo ! -p icmp -j DNAT --to-destination ${LocalTunIPv4}
              ip6tables -t nat -A OUTPUT -o lo ! -p icmp -j DNAT --to-destination ${LocalTunIPv6}
              kubevpn serve -L "tun:/127.0.0.1:8422?net=${TunIPv4}&route=${CIDR4}" -F "tcp://${TrafficManagerService}:10800"
          command:
            - /bin/sh
            - '-c'
          env:
            - name: LocalTunIPv4
              value: 223.254.0.102
            - name: LocalTunIPv6
              value: 'efff:ffff:ffff:ffff:ffff:ffff:ffff:999b'
            - name: TunIPv4
            - name: TunIPv6
            - name: CIDR4
              value: 223.254.0.0/16
            - name: CIDR6
              value: 'efff:ffff:ffff:ffff::/64'
            - name: TrafficManagerService
              value: kubevpn-traffic-manager
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          envFrom:
            - secretRef:
                name: kubevpn-traffic-manager
          image: 'docker.io/naison/kubevpn:v2.2.3'
          imagePullPolicy: IfNotPresent
          name: vpn
          resources:
            limits:
              cpu: 256m
              memory: 256Mi
            requests:
              cpu: 128m
              memory: 128Mi
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
            privileged: true
            runAsUser: 0
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: codingregistrykey
      priorityClassName: system-cluster-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  observedGeneration: 2
  replicas: 0
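
For reference, the kubevpn-probe-restore-patch annotation above decodes to a JSON 6902 patch whose operations replace each container's readinessProbe, livenessProbe and startupProbe with an empty string. Below is a minimal, standalone sketch (Go standard library only, paste the annotation value on stdin) to inspect the stored patch offline; it is just a debugging aid and not part of kubevpn itself:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"
)

// Reads the kubevpn-probe-restore-patch annotation value from stdin,
// base64-decodes it and prints the JSON 6902 operations it contains.
func main() {
	in, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	raw, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(in)))
	if err != nil {
		panic(err)
	}
	var ops []struct {
		Op    string          `json:"op"`
		Path  string          `json:"path"`
		Value json.RawMessage `json:"value"`
	}
	if err := json.Unmarshal(raw, &ops); err != nil {
		panic(err)
	}
	for _, o := range ops {
		fmt.Printf("%s %s -> %s\n", o.Op, o.Path, string(o.Value))
	}
}

Run against the value above, this prints six replace operations whose value is the empty string "", which is presumably what the restore logic later trips over.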
wencaiwulue commented
Yes, it may be a bug, but kubevpn does not base64-encode the value of the kubevpn-probe-restore-patch annotation, so it is very confusing that the stored value is base64.
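
Since the manifest above clearly stores the patch base64-encoded while kubevpn is said to write plain JSON, one defensive way to read the annotation is to try plain JSON first and fall back to base64 decoding. The sketch below is purely illustrative (the restoreProbeOps helper is made up for this issue and is not necessarily what v2.2.4 does):

package sketch

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// ProbeOp is one operation of the JSON 6902 patch stored in the annotation.
type ProbeOp struct {
	Op    string          `json:"op"`
	Path  string          `json:"path"`
	Value json.RawMessage `json:"value"`
}

// restoreProbeOps accepts the annotation value either as plain JSON or as a
// base64-encoded JSON payload. Hypothetical helper, for illustration only.
func restoreProbeOps(annotation string) ([]ProbeOp, error) {
	var ops []ProbeOp
	// First try plain JSON, which is what kubevpn is said to write.
	if err := json.Unmarshal([]byte(annotation), &ops); err == nil {
		return ops, nil
	}
	// Fall back to base64, which is what the manifest above actually contains.
	raw, err := base64.StdEncoding.DecodeString(strings.TrimSpace(annotation))
	if err != nil {
		return nil, fmt.Errorf("annotation is neither JSON nor base64: %w", err)
	}
	if err := json.Unmarshal(raw, &ops); err != nil {
		return nil, fmt.Errorf("unmarshal json patch: %w", err)
	}
	return ops, nil
}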
killerddd3 commented
Service mesh mode works fine, but proxy mode fails with this error.
wencaiwulue commented
@killerddd3 A new version, v2.2.4, has been released that fixes this bug; you can give it a try.