realtime kernel enablement relies on a delta from past machine state, which doesn't exist in hypershift
relyt0925 opened this issue · 9 comments
To enable the realtime kernel, the machine config operator currently looks at the delta between the previous machine config and the newly proposed machine config, and if the realtime kernel has been selected it proceeds to run this code:
machine-config-operator/pkg/daemon/update.go
Lines 1175 to 1211 in 6a59f30
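For context, the logic in that range boils down to a diff-driven switch. A minimal sketch of the idea, where the type, function names, and exact rpm-ostree invocation are illustrative assumptions rather than the MCO's actual code:

```go
// Sketch only: illustrates the diff-driven decision; names and the exact
// rpm-ostree invocation are assumptions, not the MCO's real code.
package sketch

import (
	"fmt"
	"os/exec"
)

// machineConfig stands in for the relevant slice of a MachineConfig.
type machineConfig struct {
	KernelType string // "", "default", or "realtime"
}

// switchKernelIfChanged only acts when the old-vs-new diff shows a change,
// which is exactly why an empty "previous" config causes trouble.
func switchKernelIfChanged(oldMC, newMC machineConfig) error {
	if oldMC.KernelType == newMC.KernelType {
		return nil // no diff, nothing to do
	}
	if newMC.KernelType == "realtime" {
		// Roughly the shape of what the MCO drives through rpm-ostree:
		// remove the default kernel packages and layer the realtime ones.
		// Exact package lists and flags are assumptions here.
		cmd := exec.Command("rpm-ostree", "override", "remove",
			"kernel", "kernel-core", "kernel-modules", "kernel-modules-extra",
			"--install", "kernel-rt-core", "--install", "kernel-rt-modules")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("switching to realtime kernel: %s: %w", string(out), err)
		}
	}
	return nil
}
```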
The problem for hypershift is the combination of two things:
- The previous machine config isn't persisted and therefore there is always an empty machine config for the "previous machine config"
- The enablement option cannot be run twice: if that code is run on a node with the realtime kernel already enabled, the rpm-ostree commands fail with "kernel not found" (the package removal fails because the packages have already been removed, and that failure is not handled gracefully, so the upgrade fails)
This means that for hypershift, in-place upgrades using the realtime kernel do not work. I am curious whether we can make the automation for realtime kernel enablement more graceful when running on a machine with the realtime kernel already enabled (meaning it can skip some of the removal and reset steps). If it could run gracefully in that mode, the realtime kernel could be utilized in hypershift environments.
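One possible shape for that graceful path, sketched here under the assumption that the booted kernel can be probed (for example via `uname -r`, with the "rt" substring check being a rough placeholder), is to guard the removal/reset steps with an "already realtime?" check:

```go
// Sketch only: an idempotency guard so the realtime switch can safely be
// re-run. The "rt" substring check on `uname -r` is an assumption; a real
// implementation would want a more robust probe.
package sketch

import (
	"os/exec"
	"strings"
)

func bootedKernelIsRealtime() (bool, error) {
	out, err := exec.Command("uname", "-r").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "rt"), nil
}

func enableRealtimeKernel() error {
	already, err := bootedKernelIsRealtime()
	if err != nil {
		return err
	}
	if already {
		// Skip the removal/reset steps that currently fail with
		// "kernel not found" when run a second time.
		return nil
	}
	// ... fall through to the existing default -> realtime conversion ...
	return nil
}
```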
The underlying problem is that upgrades for the realtime kernel are not truly "reconciled". The kernel type, and the automation that runs for it, is actually determined as a diff between what the "old machine config" had and what the new machine config has:
machine-config-operator/pkg/daemon/update.go
Line 813 in 48e94f5
lisowski
8 minutes ago
In the case of "runFirstbootCompleteMachineConfig", the previous machine config is always empty:
machine-config-operator/pkg/daemon/daemon.go
Line 926 in 48e94f5
And therefore, on every update with the realtime kernel enabled, it tries to go through the process of converting from the regular kernel to the realtime kernel, which is not currently set up to succeed when run on a machine that already has the realtime kernel enabled.
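To make that concrete, here is a minimal illustration of why a zero-value "previous" config always looks like a kernel change; the types and helper names are simplified stand-ins, not the real MCO identifiers:

```go
// Sketch only: why a zero-value "previous" config always looks like a
// kernel change. Types and names are simplified stand-ins.
package sketch

type machineConfigSpec struct {
	KernelType string // "", "default", or "realtime"
}

func canonicalKernelType(k string) string {
	if k == "" {
		return "default" // an empty/zero-value config reads as the default kernel
	}
	return k
}

// kernelChangeRequired is true on every update of a realtime node when the
// previous config is always empty, even if the node already runs kernel-rt.
func kernelChangeRequired(oldSpec, newSpec machineConfigSpec) bool {
	return canonicalKernelType(oldSpec.KernelType) != canonicalKernelType(newSpec.KernelType)
}
```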
lisowski
6 minutes ago
Currently that is done by this code:
machine-config-operator/pkg/daemon/update.go
Lines 1175 to 1211 in 6a59f30
I am wondering if we can make that process a little more graceful so it can work without relying on "past information".
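One way to drop the dependency on "past information", purely as an idea, is to derive the node's current kernel state from rpm-ostree itself and reconcile against that. A sketch, where the JSON field names are assumptions about `rpm-ostree status --json` output that would need verifying:

```go
// Sketch only: derive "is the realtime kernel already in place?" from the
// booted rpm-ostree deployment instead of a stored previous MachineConfig.
// The JSON field names are assumptions and would need verifying.
package sketch

import (
	"encoding/json"
	"os/exec"
)

type rpmOstreeStatus struct {
	Deployments []struct {
		Booted            bool     `json:"booted"`
		RequestedPackages []string `json:"requested-packages"`
	} `json:"deployments"`
}

func realtimeAlreadyLayered() (bool, error) {
	out, err := exec.Command("rpm-ostree", "status", "--json").Output()
	if err != nil {
		return false, err
	}
	var st rpmOstreeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return false, err
	}
	for _, d := range st.Deployments {
		if !d.Booted {
			continue
		}
		for _, pkg := range d.RequestedPackages {
			if pkg == "kernel-rt-core" {
				return true, nil
			}
		}
	}
	return false, nil
}
```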
cc @yuqi-zhang who I have been chatting with about this
> The previous machine config isn't persisted and therefore there is always an empty machine config for the "previous machine config"
This will cause serious problems down the line; specifically, older files will silently persist, creating hysteresis.
This, however, is another bug that will be fixed by layering (xref #3137): if the state of the node is always a container image, then bootc/rpm-ostree always handles transitioning between states and there's no MachineConfig or Ignition that needs diffing.
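For illustration only: in the image-based model a state transition is conceptually a single rebase/switch to the target image rather than replaying per-field diffs. A rough sketch, with the transport prefix and image reference as placeholders:

```go
// Sketch only: in the image-based model, moving the node between states is a
// single rebase to the target container image; bootc offers an equivalent
// `bootc switch`. Flags and the transport prefix are assumptions.
package sketch

import "os/exec"

func rebaseToImage(imageRef string) error {
	cmd := exec.Command("rpm-ostree", "rebase",
		"ostree-unverified-registry:"+imageRef)
	return cmd.Run()
}
```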
Hi @relyt0925, sorry for the delay; I'm just looking through some of the logic.
For
machine-config-operator/pkg/daemon/daemon.go
Line 926 in 48e94f5
In Hypershift we have two modes of nodepools: inplace and replace
- inplace basically uses the current state on the system, so it should not detect a diff; it actually runs the regular MCD, so it wouldn't update the kernel again (and doesn't use the firstboot-complete path at all)
- replace should be starting with a completely new node each time, so you shouldn't have the RT kernel yet (and thus no previous state)
Is there a third operation that IBM clusters are doing that I am missing the context for?
Hey!
Ok, that makes sense! The system used on the IBM side in Satellite predates that logic and actually uses complete-firstboot. Part of the reason for this is the necessity for full control of worker upgrades at the individual worker level (we cannot just automatically roll out across all nodes in a nodepool).
So it effectively re-ignites the node through the complete-firstboot path with the latest machine config every time an upgrade is triggered.
However, if the in-place machine config path that exists today works, that same strategy can be adopted (running the machine-config-daemon, which will look at the current state instead of diffing against an empty config).
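A rough sketch of what adopting that strategy could look like, assuming the current/desired config node annotations are what drive the in-place flow (the annotation keys below should be treated as illustrative here):

```go
// Sketch only: the regular MCD-style flow keys off the node's current vs
// desired config annotations, so the diff is against what the node actually
// runs rather than an empty config. Annotation keys are illustrative.
package sketch

import corev1 "k8s.io/api/core/v1"

const (
	currentConfigAnnotation = "machineconfiguration.openshift.io/currentConfig"
	desiredConfigAnnotation = "machineconfiguration.openshift.io/desiredConfig"
)

// needsUpdate reports whether the node has a pending config change.
func needsUpdate(node *corev1.Node) bool {
	cur := node.Annotations[currentConfigAnnotation]
	des := node.Annotations[desiredConfigAnnotation]
	return cur != "" && des != "" && cur != des
}
```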
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.