piped panics when using `K8S_BASELINE_ROLLOUT` with `spec.planner.alwaysUsePipeline: true` on v0.47.3-rc0
ffjlabo opened this issue · 1 comment
What happened:
When adding a k8s app whose app.pipecd.yaml has spec.planner.alwaysUsePipeline: true and a K8S_BASELINE_ROLLOUT stage, piped fails with the panic below.
The piped keeps failing until the deployment is canceled on the UI.
gx6v1_zp09_2tx_x/T/workspace1130338840/35fa4ef2-dd6f-40c0-acf2-d9a944b32d50-scheduler-192449208", "stage-name": "K8S_CANARY_ROLLOUT", "app-dir": "/var/folders/th/pq_q9v6j6_n_0fgx6v1_zp09_2tx_x/T/workspace1130338840/35fa4ef2-dd6f-40c0-acf2-d9a944b32d50-scheduler-192449208/target-deploysource/deploysource3508451428/repo-1/kubernetes/analysis-with-baseline"}
there are 1 planned/running deployments for scheduling {"count": 1}
start executing kubernetes stage {"deployment-id": "35fa4ef2-dd6f-40c0-acf2-d9a944b32d50", "app-id": "c6c93b81-0afa-47f3-a79d-2bbff89d7723", "project-id": "pipecd", "app-kind": "KUBERNETES", "working-dir": "/var/folders/th/pq_q9v6j6_n_0fgx6v1_zp09_2tx_x/T/workspace1130338840/35fa4ef2-dd6f-40c0-acf2-d9a944b32d50-scheduler-192449208", "stage-name": "K8S_BASELINE_ROLLOUT", "app-dir": "/var/folders/th/pq_q9v6j6_n_0fgx6v1_zp09_2tx_x/T/workspace1130338840/35fa4ef2-dd6f-40c0-acf2-d9a944b32d50-scheduler-192449208/target-deploysource/deploysource3508451428/repo-2/kubernetes/analysis-with-baseline"}
successfully reported 6 events about application live state {"platform-provider": "kubernetes-dev"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x18 pc=0x1046345a4]
goroutine 17863 [running]:
github.com/pipe-cd/pipecd/pkg/app/piped/executor/kubernetes.(*deployExecutor).ensureBaselineRollout(0x14000842800, {0x10514cf98, 0x14000f3e320})
/Users/s14218/oss/pipe-cd/pipecd/pkg/app/piped/executor/kubernetes/baseline.go:46 +0x1b4
github.com/pipe-cd/pipecd/pkg/app/piped/executor/kubernetes.(*deployExecutor).Execute(0x14000842800, {0x10514d580, 0x140018eee70})
/Users/s14218/oss/pipe-cd/pipecd/pkg/app/piped/executor/kubernetes/kubernetes.go:156 +0xb00
github.com/pipe-cd/pipecd/pkg/app/piped/controller.(*scheduler).executeStage(0x14000acb408, {0x10514d580, 0x140018eee70}, {{{}, {}, {}, 0x0}, 0x0, {0x0, 0x0, ...}, ...}, ...)
/Users/s14218/oss/pipe-cd/pipecd/pkg/app/piped/controller/scheduler.go:541 +0xde4
github.com/pipe-cd/pipecd/pkg/app/piped/controller.(*scheduler).Run.func2()
/Users/s14218/oss/pipe-cd/pipecd/pkg/app/piped/controller/scheduler.go:300 +0xb8
created by github.com/pipe-cd/pipecd/pkg/app/piped/controller.(*scheduler).Run in goroutine 17606
/Users/s14218/oss/pipe-cd/pipecd/pkg/app/piped/controller/scheduler.go:299 +0xc50
exit status 2
K8S_BASELINE_ROLLOUT requires the running commit, i.e. the commit of the most recent successful deployment.
https://github.com/pipe-cd/pipecd/blob/master/pkg/app/piped/executor/kubernetes/baseline.go#L46
func (e *deployExecutor) ensureBaselineRollout(ctx context.Context) model.StageStatus {
	var (
		runningCommit   = e.Deployment.RunningCommitHash
		options         = e.StageConfig.K8sBaselineRolloutStageOptions
		variantLabel    = e.appCfg.VariantLabel.Key
		baselineVariant = e.appCfg.VariantLabel.BaselineValue
	)
	if options == nil {
		e.LogPersister.Errorf("Malformed configuration for stage %s", e.Stage.Name)
		return model.StageStatus_STAGE_FAILURE
	}

	// Load running manifests at the most successful deployed commit.
	e.LogPersister.Infof("Loading running manifests at commit %s for handling", runningCommit)
	ds, err := e.RunningDSP.Get(ctx, e.LogPersister)
	if err != nil {
		e.LogPersister.Errorf("Failed to prepare running deploy source (%v)", err)
		return model.StageStatus_STAGE_FAILURE
	}
But no such commit exists when the app is added for the first time, so e.RunningDSP is nil and dereferencing it causes the panic.
This bug was introduced by the fix in #4916.
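A straightforward mitigation would be to guard against a nil RunningDSP before calling Get, failing the stage instead of panicking. Below is a minimal, self-contained sketch of that guard; deployExecutor, deploySourceProvider, and StageStatus here are simplified stand-ins for illustration, not the real piped types.

```go
package main

import "fmt"

// StageStatus is a simplified stand-in for model.StageStatus.
type StageStatus string

const (
	StageFailure StageStatus = "STAGE_FAILURE"
	StageSuccess StageStatus = "STAGE_SUCCESS"
)

// deploySourceProvider is a hypothetical stand-in for the provider behind e.RunningDSP.
type deploySourceProvider struct{}

// deployExecutor is a hypothetical stand-in carrying only the field that matters here.
type deployExecutor struct {
	RunningDSP *deploySourceProvider
}

// ensureBaselineRollout sketches the guard: on the first deployment there is
// no running deploy source yet, so return a stage failure instead of
// dereferencing the nil RunningDSP.
func (e *deployExecutor) ensureBaselineRollout() StageStatus {
	if e.RunningDSP == nil {
		fmt.Println("Unable to roll out BASELINE variant: no successful deployment yet")
		return StageFailure
	}
	// ...load running manifests and roll out the baseline variant...
	return StageSuccess
}

func main() {
	e := &deployExecutor{} // first deployment: RunningDSP was never set
	fmt.Println(e.ensureBaselineRollout())
}
```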
What you expected to happen:
The stage should fail with an error, instead of panicking, when the app is first added and deployed as PipelineSync.
How to reproduce it:
Create and add a k8s app consisting of the three files below.
app.pipecd.yaml
deployment.yaml
service.yaml
app.pipecd.yaml
apiVersion: pipecd.dev/v1beta1
kind: KubernetesApp
spec:
  name: analysis-with-baseline
  labels:
    env: example
    team: product
  planner:
    alwaysUsePipeline: true
  pipeline:
    stages:
      - name: K8S_CANARY_ROLLOUT
        with:
          replicas: 10%
      - name: K8S_BASELINE_ROLLOUT
        with:
          replicas: 10%
      - name: ANALYSIS
        with:
          duration: 10m
          threshold: 2
      - name: K8S_PRIMARY_ROLLOUT
      - name: K8S_CANARY_CLEAN
      - name: K8S_BASELINE_CLEAN
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analysis-with-baseline
  labels:
    app: analysis-with-baseline
spec:
  replicas: 3
  selector:
    matchLabels:
      app: analysis-with-baseline
      pipecd.dev/variant: primary
  template:
    metadata:
      labels:
        app: analysis-with-baseline
        pipecd.dev/variant: primary
    spec:
      containers:
        - name: helloworld
          image: ghcr.io/pipe-cd/helloworld:v0.30.0
          args:
            - server
          ports:
            - containerPort: 9085
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: analysis-with-baseline
spec:
  selector:
    app: analysis-with-baseline
  ports:
    - protocol: TCP
      port: 9085
      targetPort: 9085
Environment:
piped version: v0.47.3-rc0
control-plane version:
Others: We will fix it later on pipedv1.