Setting Up a Kubernetes Cluster Across Multiple Cloud Servers


Environment

Two or more Tencent Cloud servers (two were used for this setup), all running CentOS 7.6:

master node: 4C8G server, public IP 124.222.61.xxx

node1 node: 4C4G server, public IP 101.43.182.xxx

Modify the hosts file:

Add the node entries to /etc/hosts on both the master and node machines:

$ vim /etc/hosts
124.222.61.xxx master
101.43.182.xxx node1

Here master and node1 are hostnames. Avoid keeping the default hostnames; change them with hostnamectl set-hostname master (and likewise hostnamectl set-hostname node1 on the other machine).

Disable the firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld

Disable SELinux (setenforce takes effect immediately; the config edit makes it permanent):

$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
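If you prefer scripting the config edit over opening vim, sed can make the same change; the sketch below demonstrates it on a throwaway copy so it is safe to dry-run (point it at /etc/selinux/config on a real node):

```shell
# Demo of the SELINUX=disabled edit on a throwaway copy of the config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# The actual edit: flip whatever SELINUX= is set to into "disabled".
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
rm -f "$cfg"
```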

Load the br_netfilter module:

$ modprobe br_netfilter

Create the /etc/sysctl.d/k8s.conf file with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Apply the changes:

$ sysctl -p /etc/sysctl.d/k8s.conf

Load the IPVS kernel modules:

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install ipset on every node:

$ yum install ipset

Install the ipvsadm management tool:

$ yum install ipvsadm

Synchronize server time:

$ yum install chrony -y
$ systemctl enable chronyd
$ systemctl start chronyd

Disable the swap partition:

$ swapoff -a
$ vim /etc/sysctl.d/k8s.conf
vm.swappiness=0  # add this line
$ sysctl -p /etc/sysctl.d/k8s.conf
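Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, also comment out the swap entry in /etc/fstab. The sed below demonstrates the edit on a throwaway copy (run it against /etc/fstab on a real node; the device names are illustrative):

```shell
# Demo on a throwaway copy with made-up device names.
fstab=$(mktemp)
printf '/dev/vda1 / ext4 defaults 0 1\n/dev/vda2 swap swap defaults 0 0\n' > "$fstab"

# Comment out any line that mounts a swap device.
sed -i '/\sswap\s/ s/^/#/' "$fstab"

grep swap "$fstab"   # prints: #/dev/vda2 swap swap defaults 0 0
rm -f "$fstab"
```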

Install Docker:

$ yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
$ yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo  # Aliyun mirror
$ yum install docker-ce-18.09.9

Configure a Docker registry mirror (Aliyun):

$ mkdir -p /etc/docker
$ vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors" : [
    "https://uvtcantv.mirror.aliyuncs.com"
  ]
}

Start Docker and enable it at boot:

$ systemctl start docker
$ systemctl enable docker

Install kubeadm:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then install kubeadm, kubelet, and kubectl:

$ yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes

Enable kubelet to start at boot:

$ systemctl enable --now kubelet

All of the steps above must be performed on every node.

Cluster Initialization

On the master node, generate the kubeadm init configuration file:

$ kubeadm config print init-defaults > kubeadm.yaml

Edit kubeadm.yaml: change imageRepository, set the kube-proxy mode to ipvs, and set networking.podSubnet to 10.244.0.0/16:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 124.222.61.xxx  # the master node IP for the apiserver
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master  # defaults to the current master node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # changed to the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: v1.16.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod CIDR; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs  # kube-proxy mode

Then run the initialization with the config file above:

$ kubeadm init --config kubeadm.yaml

There is a pitfall here: after running the command above, the process hangs at the etcd stage with [kubelet-check] Initial timeout of 40s passed. The reason is that etcd tries to bind the public IP, but on cloud servers the public IP is not attached to a local NIC; it is an externally reachable address assigned by the gateway, so the bind keeps failing and the init retries indefinitely.

The fix: while the init is stuck, open a second terminal on the server and edit the generated etcd manifest:

$ vim /etc/kubernetes/manifests/etcd.yaml

In the manifest, change the --listen-client-urls and --listen-peer-urls flags so that etcd no longer binds the public IP (use the private NIC address, or 127.0.0.1 on a single-master setup, instead), then save the file.

Wait patiently for three to four minutes and the initialization will continue.
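For reference, the change lands in etcd's command flags; the excerpt below is illustrative (the addresses in your generated etcd.yaml will show your own IPs), not a copy of the real file:

```yaml
# /etc/kubernetes/manifests/etcd.yaml (excerpt, illustrative values)
# Before: etcd tries to bind the public IP, which is not on a local NIC:
#   - --listen-client-urls=https://127.0.0.1:2379,https://124.222.61.xxx:2379
#   - --listen-peer-urls=https://124.222.61.xxx:2380
# After: bind only locally reachable addresses instead:
    - --listen-client-urls=https://127.0.0.1:2379
    - --listen-peer-urls=https://127.0.0.1:2380
```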

After initialization succeeds, the terminal prints a kubeadm join command; this is the command the other nodes will run to join the cluster.

Copy the kubeconfig file:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Adding Nodes

Copy the $HOME/.kube/config file from the master node to the same path on each node machine.
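One way to do this is with scp (a sketch; it assumes root SSH access from the master to node1, so adjust the user and hostname to your setup):

```shell
# Run on the master node: create the target directory on node1,
# then copy the admin kubeconfig over SSH.
ssh root@node1 'mkdir -p $HOME/.kube'
scp $HOME/.kube/config root@node1:~/.kube/config
```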

Then run the join command that the master's initialization printed. If you have lost it, regenerate it with kubeadm token create --print-join-command.

kubeadm join 124.222.61.161:6443 --token 1l2un1.or6f04f1rewyf0xq     --discovery-token-ca-cert-hash sha256:1534171b93c693e6c0d7b2ed6c11bb4e2604be6d2af69a5f464ce74950ed4d9d

After it succeeds, run kubectl get nodes:

$ kubectl get nodes
You will see the node status is NotReady, because no network plugin has been installed yet.

Install the flannel network plugin:

$ wget  https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
$ vi kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0  # with multiple NICs, specify the intranet NIC name here
......
$ kubectl apply -f kube-flannel.yml

Wait a while, then check the Pod status:

$ kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-6nn74              1/1     Running   0          18h
coredns-58cc8c89f4-v96jb              1/1     Running   0          18h
etcd-ydzs-master                      1/1     Running   0          18h
kube-apiserver-ydzs-master            1/1     Running   2          18h
kube-controller-manager-ydzs-master   1/1     Running   0          18h
kube-flannel-ds-amd64-674zs           1/1     Running   0          18h
kube-flannel-ds-amd64-zbv7l           1/1     Running   0          18h
kube-proxy-b7c9c                      1/1     Running   0          18h
kube-proxy-bvsrr                      1/1     Running   0          18h
kube-scheduler-ydzs-master            1/1     Running   0          18h

Check the nodes again; both are now Ready:

$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   18h   v1.16.2
node1    Ready    <none>   18h   v1.16.2

Configure the Dashboard

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
$ vi recommended.yaml
# Change the Service to NodePort type
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort to expose the Service as a NodePort
......
$ kubectl apply -f recommended.yaml

By default the Dashboard is installed into the kubernetes-dashboard namespace:

$ kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-6b86b44f87-xsqft   1/1     Running   0          16h
$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.126.111   <none>        8000/TCP        17h
kubernetes-dashboard        NodePort    10.108.217.144   <none>        443:31317/TCP   17h

Then visit https://124.222.61.161:31317. The page fails to load because the certificate has expired, so we generate a new certificate below:

# Create a working directory:
mkdir key && cd key

# Generate a private key
openssl genrsa -out dashboard.key 2048

# I used my own node IP here because I access the Dashboard via NodePort;
# if you go through the apiserver, use the master node IP instead
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=124.222.61.161'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Delete the old certificate secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard

# Create a new certificate secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

# Look up the pod name
kubectl get pod -n kubernetes-dashboard

# Restart the pod by deleting it (the Deployment recreates it)
kubectl delete pod kubernetes-dashboard-7b5bf5d559-gn4ls -n kubernetes-dashboard
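It is worth verifying what the openssl steps actually produced before trusting the new secret. A self-contained sketch (scratch directory, same commands and CN as above):

```shell
# Regenerate the self-signed cert in a scratch directory and inspect it.
cd "$(mktemp -d)"
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=124.222.61.161'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# The subject should carry the CN passed to `openssl req`,
# and -dates shows the validity window (30 days by default).
openssl x509 -in dashboard.crt -noout -subject -dates
```

If the CN does not match the address you browse to, the browser warning will persist even after the secret is replaced.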

When you visit the page again after these steps, the browser will warn about an insecure connection; just proceed past the warning.

Use Firefox here; the page cannot be opened in Chrome.

Create a user to log in to the Dashboard:

# Create the admin.yaml file
$ vim admin.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard

# Apply it
$ kubectl apply -f admin.yaml
$ kubectl get secret -n kubernetes-dashboard|grep admin-token
admin-token-jv2dq                  kubernetes.io/service-account-token   3      16h
$ kubectl get secret admin-token-jv2dq -o jsonpath={.data.token} -n kubernetes-dashboard |base64 -d
# prints a long decoded token string

Then log in to the Dashboard using the decoded string above as the token.
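A side note on the pipeline above: Secret values are stored base64-encoded, which is why the base64 -d step recovers the raw bearer token. A quick local round-trip with a dummy value:

```shell
# Encode then decode a dummy token; -n keeps a trailing newline
# from sneaking into the encoded value.
echo -n 'sample-token' | base64 | base64 -d
# prints: sample-token
```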