| Component | Version |
|---|---|
| Kubernetes | v1.1.7 |
| docker | 1.8.2 |
| etcd | 2.1.1 |
| flannel | 0.5.3 |
Installation and deployment should not differ much for releases after v1.0; the main differences lie in the applications' cluster YAML configuration files.
Because of the GFW, all gcr.io/google_containers/ images must be downloaded from behind a proxy.
This walkthrough uses three hosts: one master and two minions.
- Execute on all hosts:

systemctl stop firewalld && systemctl disable firewalld
- Install Kubernetes and etcd via yum (or dnf):

yum install -y etcd kubernetes
- Edit the etcd configuration (/etc/etcd/etcd.conf) to listen on all IPs:

ETCD_NAME=kubernetes
ETCD_DATA_DIR="/var/lib/etcd/kubernetes.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

Tip: this is a simple single-node setup; etcd can also be configured as a cluster.
- Edit the Kubernetes API Server configuration (/etc/kubernetes/apiserver):

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--service-node-port-range=30000-40000 --service_account_key_file=/opt/kubernetes/key/serviceaccount.key"
- Edit the Kubernetes Controller Manager configuration (/etc/kubernetes/controller-manager):

KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/opt/kubernetes/key/serviceaccount.key"
- Generate the service account key:

mkdir -p /opt/kubernetes/key
openssl genrsa -out /opt/kubernetes/key/serviceaccount.key 2048
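As a quick sanity check, the generated key can be validated with openssl before the API server and controller manager are pointed at it. A minimal sketch, using a temporary directory in place of /opt/kubernetes/key for illustration:

```shell
# Use a scratch directory as a stand-in for /opt/kubernetes/key
KEYDIR=$(mktemp -d)

# Same key generation as above: 2048-bit RSA
openssl genrsa -out "$KEYDIR/serviceaccount.key" 2048 2>/dev/null

# Confirm the key is structurally valid RSA (prints "RSA key ok")
openssl rsa -in "$KEYDIR/serviceaccount.key" -check -noout

# Extract the public half; the API server and controller manager use this
# key pair to verify and sign service account tokens
openssl rsa -in "$KEYDIR/serviceaccount.key" -pubout -out "$KEYDIR/serviceaccount.pub" 2>/dev/null
```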
- Start the docker, etcd, kube-apiserver, kube-controller-manager and kube-scheduler services:

for SERVICES in docker etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
- Define the flannel network configuration in etcd:

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
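A typo in this JSON only surfaces later, when flanneld on the minions fails to start, so it can be worth validating the string locally before writing it into etcd. A sketch (the etcdctl calls require a live etcd and are shown commented out):

```shell
# The flannel network config is a plain JSON document stored at
# /atomic.io/network/config in etcd
FLANNEL_CONFIG='{"Network":"172.17.0.0/16"}'

# Check that the JSON parses and the Network field is present before pushing it
echo "$FLANNEL_CONFIG" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Network"])'

# With etcd running:
# etcdctl mk /atomic.io/network/config "$FLANNEL_CONFIG"
# etcdctl get /atomic.io/network/config
```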
- Query node information (no nodes yet at this point):

kubectl get nodes
- Install Kubernetes, docker, flannel and cadvisor via yum (or dnf):

yum install -y docker cadvisor flannel kubernetes
- Edit the flannel configuration (/etc/sysconfig/flanneld):

FLANNEL_ETCD="http://172.17.13.26:2379"
- Edit the Kubernetes default configuration (/etc/kubernetes/config) to connect to the master:

KUBE_MASTER="--master=http://172.17.13.26:8080"
- Edit the kubelet configuration (/etc/kubernetes/kubelet):

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# change the hostname to this host's IP address
KUBELET_HOSTNAME="--hostname_override=172.17.13.128"
KUBELET_API_SERVER="--api_servers=http://172.17.13.26:8080"
KUBELET_ARGS=""
- Start the kube-proxy, kubelet, docker and flanneld services:

for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
Configure the other nodes the same way.
- Back on the master node, query node information again (the two configured nodes now appear):

kubectl get --all-namespaces nodes

NAME            LABELS                                 STATUS    AGE
172.17.13.128   kubernetes.io/hostname=172.17.13.128   Ready     2h
172.17.13.129   kubernetes.io/hostname=172.17.13.129   Ready     2h
- Configure the cluster definitions on the master node. If a namespace is needed, create it first:

mkdir -p /opt/kubernetes/namespace
cd /opt/kubernetes/namespace
vi na.yaml

See master /opt/kubernetes/namespace/na.yaml for the configuration.
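The na.yaml referenced above is not reproduced here, but a minimal v1 Namespace manifest would look roughly like the following (the name "demo" is a placeholder, not the name used in the original file):

```shell
# Hypothetical minimal na.yaml; only the namespace name is project-specific
cat > na.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: demo
EOF

# Then, on the master:
# kubectl create -f na.yaml
```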
Create the cluster files:

mkdir -p /opt/kubernetes/cluster
cd /opt/kubernetes/cluster

Using a Redis cluster as the example:

vi redis-cluster.yaml

See master /opt/kubernetes/cluster/redis-cluster.yaml for the configuration.
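The redis-cluster.yaml is likewise not reproduced, but its shape can be inferred from the kubectl output later in this walkthrough: a redis-master Service on port 6379 and a ReplicationController with 3 replicas of a container named master running docker.io/redis, all selected by app=redis,role=master,tier=backend. A sketch consistent with that output (details such as resource limits or a nodePort may differ from the real file):

```shell
# Reconstructed redis-cluster.yaml; labels, image, port and replica count are
# taken from the kubectl get rc/svc output shown later, the rest is a guess
cat > redis-cluster.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 3
  selector:
    app: redis
    role: master
    tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: docker.io/redis
        ports:
        - containerPort: 6379
EOF
```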
- Execute on each minion node:

docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause:0.8.0
docker tag gcr.io/google_containers/pause:0.8.0 gcr.io/google_containers/pause

This is needed because the GFW blocks pulling images directly from gcr.io.
The pause image is required by Kubernetes; it manages the networking shared by the containers of each Pod.
- Start the cluster services on the master node:

kubectl create -f /opt/kubernetes/cluster/redis-cluster.yaml

Output:

services/redis-master
replicationcontrollers/redis-master
Check the replication controllers:

kubectl get --all-namespaces rc

Output:

CONTROLLER     CONTAINER(S)   IMAGE(S)          SELECTOR                             REPLICAS
redis-master   master         docker.io/redis   app=redis,role=master,tier=backend   3
Check the services:

kubectl get --all-namespaces svc

Output:

NAME           LABELS                                    SELECTOR                             IP(S)            PORT(S)
kubernetes     component=apiserver,provider=kubernetes   <none>                               10.254.0.1       443/TCP
redis-master   app=redis,role=master,tier=backend        app=redis,role=master,tier=backend   10.254.160.170   6379/TCP
Check the Pods:

kubectl get --all-namespaces pods

Output:

NAME                 READY     STATUS    RESTARTS   AGE
redis-master-80mnc   1/1       Running   0          1h
redis-master-fnkxg   1/1       Running   0          1h
redis-master-ig7i0   1/1       Running   0          1h
On the master, nginx (a ready-made image is available) can distribute traffic across the nodes and expose a single external service address. Since the cluster itself load-balances, a request nginx forwards to node 1 may ultimately be served by node 2.
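One possible shape of such an nginx configuration, assuming an HTTP service exposed via NodePort on both minions (the port 30001 and the file name are placeholders; the actual port falls in the 30000-40000 range configured on the API server above):

```shell
# Hypothetical nginx config: one upstream entry per minion, same NodePort on each
cat > k8s-lb.conf <<'EOF'
upstream k8s_nodes {
    server 172.17.13.128:30001;
    server 172.17.13.129:30001;
}
server {
    listen 80;
    location / {
        # kube-proxy on the receiving node may still forward the request
        # to a pod running on the other node
        proxy_pass http://k8s_nodes;
    }
}
EOF
```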
kubectl cluster-info
kubectl get nodes
kubectl get namespaces
kubectl get svc
kubectl get rc
kubectl get pods
kubectl logs <pod_name>
kubectl describe pods/redis-master-dz33o
kubectl create -f redis-cluster.yaml
kubectl delete -f redis-cluster.yaml
kubectl exec -ti <pod_name> -c <container_name> --namespace="kube-system" -- env
kubectl get pods --sort-by=.status.containerStatuses[0].restartCount

Abnormal termination information:

kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
kubectl get pods/pod-w-message -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.exitCode}}{{end}}"