cpu limits should be detectable during s2i build
Closed this issue · 11 comments
During the s2i build, the cpu limit doesn't ripple down to the build process.
.NET, for example, uses the cpu limit to determine how many things it should build in parallel. When there is no cpu limit, .NET falls back to using the number of physical cores on the machine.
When the build machine has many cores (e.g. 64 or more), the degree of parallelism may be completely inappropriate. This causes the build to stall, consume massive amounts of memory, and finally crash.
.NET determines the cpu limit by dividing cfs_quota_us by cfs_period_us and rounding it up. In the s2i build container, cfs_quota_us is -1.
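The quota/period calculation can be sketched as follows. This is an illustrative reading of the cgroup v1 interface, not the actual .NET runtime code; the file paths are assumptions (cgroup v2 exposes the limit through cpu.max instead):

```python
import math

# Assumed cgroup v1 paths; not present on cgroup v2 hosts.
QUOTA_PATH = "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
PERIOD_PATH = "/sys/fs/cgroup/cpu/cpu.cfs_period_us"

def effective_cpu_limit(quota_us, period_us):
    """CPU limit as described above: quota / period, rounded up.

    A quota of -1 means "no limit" -- the value reported inside the
    s2i build container -- so the caller falls back to the host's
    physical core count.
    """
    if quota_us <= 0 or period_us <= 0:
        return None
    return math.ceil(quota_us / period_us)

def read_cgroup_cpu_limit():
    """Read the cgroup files and compute the limit, if any."""
    try:
        with open(QUOTA_PATH) as f:
            quota = int(f.read())
        with open(PERIOD_PATH) as f:
            period = int(f.read())
    except (OSError, ValueError):
        return None
    return effective_cpu_limit(quota, period)
```

For example, a quota of 150000us over the default 100000us period rounds up to a limit of 2 CPUs, while a quota of -1 yields no limit at all.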
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
That's a valid value to find there, if I'm reading the docs right.
Can you share the build config or build spec that triggers this behavior? Does it include resource requests and/or limits?
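For context, a minimal BuildConfig that sets a cpu limit on the build pod would look roughly like the sketch below. The name, git URI, and builder image are placeholders, not taken from the reporter's setup:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: dotnet-app          # hypothetical name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/repo.git   # placeholder repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: dotnet:latest  # placeholder builder image
  resources:
    limits:
      cpu: "2"               # the limit the build process should see
      memory: 2Gi
```

The question here is whether such a limit, when present, is reflected in the build container's cfs_quota_us.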
/remove-lifecycle stale
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.