pingcap/tidb-operator
TiDB operator creates and manages TiDB clusters running in Kubernetes.
Go · Apache-2.0
Issues
Support loadBalancerClass in the tidb spec
#5952 opened by xiaozhuang-a · 1 comment
Dependency Dashboard
#5839 opened by renovate · 0 comments
Does tidb-operator have any requirements on the k8s version, or is every version of k8s compatible with every version of tidb-operator?
#5953 opened by xiaoxiongxyy · 1 comment
TiDB-operator fails to start the tiproxy servers if spec.tiproxy.version is not provided
#5833 opened by kos-team · 5 comments
TiDB-operator is unable to delete TiProxy servers
#5835 opened by kos-team · 2 comments
If the store number is 0, the volume backup should be set to failed so that it does not block the whole clean schedule [restore for 5658]
#5821 opened by ti-chi-bot · 1 comment
A TiKV was restarted before it was stopped for scheduling, and BR did not exit in a timely manner and set the status to failed, leaving the cluster in the pause-schedule state [restore for 5583]
#5820 opened by ti-chi-bot · 1 comment
TiKV crashes during volumerestore [restore for 5577]
#5819 opened by ti-chi-bot · 1 comment
volumebackup will fail when more than 50 tags are specified in the snapshot [restore for 5540]
#5818 opened by ti-chi-bot · 1 comment
If there is a PV with the same name (in Released status) in the namespace, it will cause the restore to fail [restore for 5505]
#5815 opened by ti-chi-bot · 1 comment
volumebackup failed with the error message "init job deleted before all the volume snapshots are created" [restore for 5478]
#5814 opened by ti-chi-bot · 1 comment
Pausing Lightning will cause a loss of status when a TiKV restart is encountered, and the backup needs to be set to failed [restore for 5442]
#5813 opened by ti-chi-bot · 1 comment
TiKV failed for more than 5 minutes, and the system did not set the backup task to failed [restore for 5439]
#5812 opened by ti-chi-bot · 1 comment
backup should be set to failed when the error "failed to keep importing to store 17 being denied, the state might be inconsistency" occurs [restore for 5437]
#5811 opened by ti-chi-bot · 1 comment
When checking whether warmup has ended, warmup jobs in other namespaces interfere with the current results [restore for 5405]
#5810 opened by ti-chi-bot · 1 comment
volumebackup failed; the reason is "Pod backup-fed-ebs-tidb-operator-ebs-1-init-w5glk has failed, original reason " [restore for 5330]
#5809 opened by ti-chi-bot · 1 comment
Running volumebackup after scaling out the cluster fails [restore for 5306]
#5808 opened by ti-chi-bot · 1 comment
Deleting the restore data pod during restore causes the restore progress to get stuck [restore for 5297]
#5807 opened by ti-chi-bot · 1 comment
A TiKV failure lasting 1 minute can cause backup failure [restore for 5294]
#5805 opened by ti-chi-bot · 1 comment
When the restore data pod fails, the volumerestore task status is not set to failed [restore for 5293]
#5804 opened by ti-chi-bot · 1 comment
It is recommended to check the number of TiKV volumes when restoring [restore for 5287]
#5803 opened by ti-chi-bot · 1 comment
During volumerestore, the operator reports the error "sync failed, err: read backup meta from bucket qa-workload-datasets and prefix airbnb-ebs/rpod_loss_with_pdleader_10min-tidb-operator-ebs-2, err: blob (key "backupmeta") (code=Unknown)" [restore for 5284]
#5802 opened by ti-chi-bot · 1 comment
There are 2 volumebackups running at the same time during a volumebackupschedule [restore for 5271]
#5799 opened by ti-chi-bot · 1 comment
For a 100T cluster, after restore the QPS decreased and was unstable for the first two hours [restore for 5269]
#5798 opened by ti-chi-bot · 1 comment
Failure to access AWS during backup will cause the entire task to fail [restore for 5266]
#5796 opened by ti-chi-bot · 1 comment
If the warmup pod is killed during volume restore, it is not restarted, which causes the restore to fail [restore for 5235]
#5794 opened by ti-chi-bot · 1 comment
Backup status changed from failed to VolumeBackupComplete [restore for 5193]
#5793 opened by ti-chi-bot · 1 comment
There is an error "MissingRegion: could not find region configuration" during volumerestore [restore for 5185]
#5792 opened by ti-chi-bot · 1 comment
TiKV gets stuck while syncing raft logs during volumerestore [restore for 5165]
#5791 opened by ti-chi-bot · 1 comment
Need to add the restore name to the warmup job name, otherwise it will affect the next restore task [restore for 5160]
#5790 opened by ti-chi-bot · 1 comment
The previous backup failed, which affects the time to rerun the backup [restore for 5144]
#5789 opened by ti-chi-bot · 1 comment
When doing volumerestore to a different number of TiKV nodes, it is recommended to use a precheck to report an error directly [restore for 5129]
#5787 opened by ti-chi-bot · 1 comment
When the backup commit ts across multiple k8s clusters is inconsistent, the second phase of restore is very slow [restore for 5109]
#5786 opened by ti-chi-bot · 1 comment
Volume Restore failed [restore for 5090]
#5785 opened by ti-chi-bot · 1 comment
Volume restore takes a long time [restore for 5089]
#5784 opened by ti-chi-bot · 1 comment
Volume backup status should be displayed as Complete, with the first letter capitalized [restore for 5088]
#5783 opened by ti-chi-bot · 1 comment
Deleting a volume backup while it is running can cause snapshot leakage [restore for 5087]
#5782 opened by ti-chi-bot · 1 comment
One of the k8s backups failed, and the status of the entire task should be updated to failed [restore for 5086]
#5781 opened by ti-chi-bot · 1 comment
The number of regions is much greater than the concurrency of br, resulting in a particularly long volume restore time [restore for 5085]
#5780 opened by ti-chi-bot · 1 comment
The GC safepoint exceeds resolved ts, causing the backup task to fail [restore for 5073]
#5779 opened by ti-chi-bot · 1 comment