vit1788 opened this issue · 4 comments
Description
On a multi-master install, if the first master goes down we can no
longer scale up the cluster with new nodes or masters.
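For context, a scaleup run looks roughly like the sketch below. The playbook path and inventory location are placeholders, not taken from this report; both vary by openshift-ansible release.

```shell
# Hypothetical scaleup invocation (playbook path and inventory file are
# placeholders; they vary by openshift-ansible release). This is the kind
# of run that fails once the first master in the inventory is down.
ansible-playbook -i /etc/ansible/hosts \
  playbooks/byo/openshift-master/scaleup.yml 2>/dev/null \
  || echo "scaleup failed (first master unreachable?)"
```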
Version
Please put the following version information in the code block below:
- Your ansible version, per ansible --version
If you're operating from a git clone:
- The output of git describe
If you're running from playbooks installed via RPM:
- The output of rpm -q openshift-ansible
Place the output in the code block below:
VERSION INFORMATION HERE PLEASE
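The three commands above can be collected in one pass; a minimal sketch, with each probe guarded so a missing tool or an inapplicable install method does not abort the run:

```shell
# Gather the requested version information; each probe falls back to a
# note if the tool or install method does not apply on this host.
ansible --version 2>/dev/null || echo "ansible: not installed"
git describe 2>/dev/null || echo "git describe: not a git clone"
rpm -q openshift-ansible 2>/dev/null || echo "openshift-ansible RPM: not installed"
```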
Steps To Reproduce
- [step 1]
- [step 2]
Expected Results
Describe what you expected to happen.
Example command and output or error messages
Observed Results
Describe what is actually happening.
Example command and output or error messages
For long output or logs, consider using a gist
Additional Information
Provide any additional information which may help us diagnose the
issue.
- Your operating system and version, e.g. RHEL 7.2, Fedora 23 ($ cat /etc/redhat-release)
- Your inventory file (especially any non-standard configuration parameters)
- Sample code, etc.
EXTRA INFORMATION GOES HERE
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.