Manage kubernetes in the most light and convenient way


KubeClipper

English | 中文

Features

✨ Create Cluster
  • Supports online, proxy, and offline deployment
  • Frequently-used image registry management
  • Create clusters / install plugins from templates
  • Supports multi-version K8S and CRI deployments
  • NFS storage support
☸️ Cluster Management
  • Multi-region, multi-cluster management
  • Access to cluster kubectl web console
  • Real-time logs during cluster operations
  • Edit clusters (metadata, etc.)
  • Deleting clusters
  • Adding / removing cluster nodes
  • Retry from breakpoint after creation failure
  • Cluster backup and restore, scheduled backups
  • Cluster version upgrade
  • Save entire cluster / individual plugins as templates
  • Cluster backup storage management
🌐 Region & Node Management
  • Adding agent nodes and specifying regions (kcctl)
  • Node status management
  • Connect node terminal
  • Node enable/disable
  • View the list of nodes and clusters under a region
🚪 Access control
  • User and role management
  • Custom Role Management
  • OIDC integration

Quick Start

For users who are new to KubeClipper and want to get started quickly, the All-in-One installation mode is recommended; it lets you deploy KubeClipper with zero configuration.

Preparations

KubeClipper itself does not consume many resources, but in order to run Kubernetes well later, the hardware configuration should meet at least the minimum requirements.

You only need to prepare one host that meets the following hardware and operating system requirements.

Recommended hardware configuration

  • Make sure your machine meets the minimum hardware requirements: CPU >= 2 cores, RAM >= 2GB.
  • Operating System: CentOS 7.x / Ubuntu 18.04 / Ubuntu 20.04.

Node requirements

  • Nodes must be able to connect via SSH.

  • The sudo, curl, wget, and tar commands must be available on the node.

It is recommended that your operating system be in a clean state (no additional software installed); otherwise, conflicts may occur.
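As a quick sanity check before installing, the command requirements above can be verified with a short script. This is a minimal sketch; the command list simply mirrors the requirements stated in this section:

```shell
# Pre-flight check: verify the commands KubeClipper expects are available on this node.
for cmd in sudo curl wget tar; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```

Any line reporting MISSING should be fixed with your distribution's package manager before proceeding.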

Deploy KubeClipper

Download kcctl

KubeClipper provides the command-line tool 🔧 kcctl to simplify operations.

You can download the latest version of kcctl directly with the following command:

curl -sfL https://oss.kubeclipper.io/kcctl.sh | bash -
# In China, you can set the KC_REGION=cn environment variable to use registry.aliyuncs.com/google_containers instead of k8s.gcr.io
curl -sfL https://oss.kubeclipper.io/kcctl.sh | KC_REGION=cn bash -
# The latest version is downloaded by default; you can also download a specific version
curl -sfL https://oss.kubeclipper.io/kcctl.sh | VERSION=v1.2.0 bash -

You can also download a specific version from the GitHub Release Page.

Check whether the installation succeeded with the following command:

kcctl version

Get Started with Installation

In this quick start tutorial, you only need to run one command for installation, and the template looks like this:

kcctl deploy  [--user root] (--passwd SSH_PASSWD | --pk-file SSH_PRIVATE_KEY)

If you use the SSH password method, the command is as follows:

kcctl deploy --user root --passwd $SSH_PASSWD

If you use the private key method, the command is as follows:

kcctl deploy --user root --pk-file $SSH_PRIVATE_KEY

You only need to provide an SSH user along with an SSH password or private key to deploy KubeClipper.

After you run this command, kcctl will check your installation environment and, if the conditions are met, proceed with the installation.

When the KubeClipper banner below is printed, the installation is complete.

 _   __      _          _____ _ _
| | / /     | |        /  __ \ (_)
| |/ / _   _| |__   ___| /  \/ |_ _ __  _ __   ___ _ __
|    \| | | | '_ \ / _ \ |   | | | '_ \| '_ \ / _ \ '__|
| |\  \ |_| | |_) |  __/ \__/\ | | |_) | |_) |  __/ |
\_| \_/\__,_|_.__/ \___|\____/_|_| .__/| .__/ \___|_|
                                 | |   | |
                                 |_|   |_|

Login Console

When the deployment succeeds, you can open a browser and visit http://$IP to enter the KubeClipper console.

You can log in with the default account and password admin / Thinkbig1.

You may need to configure port forwarding rules and open ports in security groups for external users to access the console.

Create a k8s cluster

Once KubeClipper is deployed successfully, you can use the kcctl tool or the console to create a k8s cluster. In this quick start tutorial, we use kcctl.

First, log in with the default account and password to obtain a token, which kcctl uses for subsequent interaction with kc-server.

kcctl login -H http://localhost -u admin -p Thinkbig1

Then create a k8s cluster with the following command:

NODE=$(kcctl get node -o yaml|grep ipv4DefaultIP:|sed 's/ipv4DefaultIP: //')

kcctl create cluster --master $NODE --name demo --untaint-master
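The first command above lists the agent nodes as YAML, keeps only the ipv4DefaultIP: line, and strips the key so that only the IP address remains. You can see what the extraction does by running the same sed filter against a sample line (the IP below is a made-up example, not real output):

```shell
# Simulate one line of `kcctl get node -o yaml` output (the IP is a made-up example)
echo "ipv4DefaultIP: 192.168.10.11" | sed 's/ipv4DefaultIP: //'
# prints: 192.168.10.11
```

Note that if multiple agent nodes are registered, NODE will contain several addresses, one per line.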

Cluster creation takes about 3 minutes to complete. You can use the following command to check the cluster status:

kcctl get cluster -o yaml|grep status -A5
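The grep filter above simply prints each line containing "status" plus the five lines that follow it. Against a mocked-up fragment of cluster YAML it behaves like this (the field names in the sample are illustrative, not the exact kc-server output schema):

```shell
# Run the same filter against a mocked-up fragment of cluster YAML
# (field names are illustrative, not the exact kc-server output)
cat <<'EOF' | grep status -A5
metadata:
  name: demo
status:
  phase: Running
EOF
```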

You can also enter the console to view real-time logs.

Once the cluster enters the Running state, the creation is complete. You can then use the kubectl get cs command to view the cluster status.

Development and Debugging

  1. fork repo and clone
  2. run etcd locally, typically using docker / podman to run an etcd container
    export HostIP="Your-IP"
    docker run -d \
    --net host \
    k8s.gcr.io/etcd:3.5.0-0 etcd \
    --advertise-client-urls http://${HostIP}:2379 \
    --initial-advertise-peer-urls http://${HostIP}:2380 \
    --initial-cluster=infra0=http://${HostIP}:2380 \
    --listen-client-urls http://${HostIP}:2379,http://127.0.0.1:2379 \
    --listen-metrics-urls http://127.0.0.1:2381 \
    --listen-peer-urls http://${HostIP}:2380 \
    --name infra0 \
    --snapshot-count=10000 \
    --data-dir=/var/lib/etcd
  3. point etcd.serverList in kubeclipper-server.yaml to your local etcd cluster
  4. make build
  5. ./dist/kubeclipper-server serve
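For step 3, the change amounts to pointing the server config at the etcd container started in step 2. A minimal sketch of the relevant fragment of kubeclipper-server.yaml follows; only etcd.serverList is the key named above, and the surrounding structure is an assumption based on that option name, not the full schema:

```yaml
# kubeclipper-server.yaml (fragment; surrounding structure is an assumption,
# only etcd.serverList is the key referenced in step 3)
etcd:
  serverList:
    - http://127.0.0.1:2379   # the locally running etcd from step 2
```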

Architecture

kc-arch1

kc-arch2

Contributing

Please follow Community to join us.