
k8s Study Diary, Day 1

Setting up the Kubernetes environment

1 Environment Planning

1.1 Cluster Types

●Kubernetes clusters broadly fall into two types: single-master multi-node and multi-master multi-node.

●Single master, multiple nodes: one Master node plus several Node nodes. Simple to set up, but the master is a single point of failure; suited to test environments.

●Multiple masters, multiple nodes: several Master nodes plus several Node nodes. More involved to set up, but highly available; suited to production environments.

For ease of testing, this walkthrough builds a single-master, multi-node cluster.

1.2 Installation Methods

●There are several ways to deploy Kubernetes; the mainstream options today are kubeadm, minikube, and raw binary packages.

●① minikube: a tool for quickly standing up a single-node Kubernetes instance.
●② kubeadm: a tool for quickly bootstrapping a full Kubernetes cluster.
●③ Binary packages: download each component's binaries from the official site and install them one by one; this route teaches you the most about the individual Kubernetes components.

●We want a real cluster without too much hassle, so we go with kubeadm.

1.3 Host Planning

Role    IP Address       Operating System                     Spec
master  192.168.20.119   CentOS 7.5, infrastructure server    8-core CPU, 12 GB RAM, 100 GB disk
node1   192.168.20.115   CentOS 7.5, infrastructure server    4-core CPU, 8 GB RAM, 100 GB disk
node2   192.168.20.124   CentOS 7.5, infrastructure server    4-core CPU, 8 GB RAM, 100 GB disk

2 Environment Setup

2.1 Preface

This setup needs three CentOS servers (one master, two nodes). On each of them we install Docker (18.06.3) together with kubeadm, kubectl, and kubelet (all at 1.18.1, matching the packages installed in section 2.3.2).

Unless stated otherwise, run every command on all three machines.

2.2 Environment Initialization

2.2.1 Check the OS Version

●Check the operating system version (it must be CentOS 7.5 or later):

[root@master ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core)
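
If you are scripting the setup, a small guard can enforce this requirement up front. A minimal sketch, assuming /etc/redhat-release exists as shown above:

# abort early if the release is older than 7.5 (sort -V does the version compare)
ver=$(grep -oE '[0-9]+\.[0-9]+' /etc/redhat-release | head -1)
if [ "$(printf '%s\n' 7.5 "$ver" | sort -V | head -1)" != "7.5" ]; then
    echo "CentOS $ver is too old; 7.5 or later is required" >&2
    exit 1
fi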

2.2.2 Stop the Firewall and Disable It at Boot

[root@master ~]# systemctl stop firewalld
Failed to stop firewalld.service: Unit firewalld.service not loaded.
[root@master ~]# systemctl disable firewalld
Failed to execute operation: No such file or directory

[root@master ~]# systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.
[root@master ~]# systemctl disable iptables
Failed to execute operation: No such file or directory

The failures above are expected in my case: no firewall service is installed on these machines, so there is nothing to stop or disable.

2.2.3 Set the Hostnames

master:

[root@master ~]# hostnamectl set-hostname master
[root@master ~]# login

node1:

[root@node-1 ~]# hostnamectl set-hostname node1
[root@node-1 ~]# login

node2:

[root@node-2 ~]# hostnamectl set-hostname node2
[root@node-2 ~]# login

2.2.4 Hostname Resolution

To let the cluster nodes reach each other by name later on, configure hostname resolution now. In an enterprise setting, an internal DNS server is the recommended way to do this.

[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.20.119 master
192.168.20.115 node1
192.168.20.124 node2

All three nodes get the same /etc/hosts entries.
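
A quick way to confirm resolution works from every node (a sketch; assumes ICMP is allowed between the hosts):

for h in master node1 node2; do
    ping -c1 -W1 $h >/dev/null && echo "$h resolves and answers" || echo "$h FAILED"
done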

2.2.5 Repository Configuration

master node:

[root@master ~]# rm -rf /etc/yum.repos.d/*
[root@master ~]# cat /etc/yum.repos.d/local.repo
[k8s]
name=k8s
baseurl=file:///opt/kubernetes-repo
gpgcheck=0
enabled=1

[centos]
name=centos
baseurl=http://172.19.25.11/centos
gpgcheck=0
enabled=1

[root@master ~]# yum install -y vsftpd
[root@master ~]# echo anon_root=/opt/ >> /etc/vsftpd/vsftpd.conf
[root@master ~]# systemctl restart vsftpd
[root@master ~]# systemctl enable vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.

node nodes:

[root@node1 ~]# rm -rf /etc/yum.repos.d/*
[root@node1 ~]# cat /etc/yum.repos.d/local.repo
[k8s]
name=k8s
baseurl=ftp://master/kubernetes-repo
gpgcheck=0
enabled=1

[centos]
name=centos
baseurl=http://172.19.25.11/centos
gpgcheck=0
enabled=1

[root@node2 ~]# rm -rf /etc/yum.repos.d/*
[root@node2 ~]# cat /etc/yum.repos.d/local.repo
[k8s]
name=k8s
baseurl=ftp://master/kubernetes-repo
gpgcheck=0
enabled=1

[centos]
name=centos
baseurl=http://172.19.25.11/centos
gpgcheck=0
enabled=1
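
On whichever node you are configuring, it is worth confirming the repos are actually reachable before moving on. A minimal check, assuming the repo files above are in place:

yum clean all    # drop cached metadata from the old repos
yum repolist     # both the k8s and centos repos should list a non-zero package count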

2.2.6 Time Synchronization

master node:

[root@master ~]# yum install -y chrony
[root@master ~]# sed -i '3,6s/^/#/g' /etc/chrony.conf
[root@master ~]# sed -i "7s|^|server master iburst|g" /etc/chrony.conf
[root@master ~]# echo "allow all" >> /etc/chrony.conf
[root@master ~]# echo "local stratum 10" >> /etc/chrony.conf
[root@master ~]# systemctl restart chronyd
[root@master ~]# systemctl enable chronyd
[root@master ~]# timedatectl set-ntp true
[root@master ~]# systemctl restart chronyd
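
For reference, after those sed/echo edits /etc/chrony.conf should effectively boil down to something like this (abridged; the surrounding comments depend on the stock CentOS file):

server master iburst   # the master syncs against itself
allow all              # serve time to any client, i.e. the nodes
local stratum 10       # keep serving even with no reachable upstream source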

node nodes:

[root@node1 ~]# sed -i '3,6s/^/#/g' /etc/chrony.conf
[root@node1 ~]# sed -i "7s|^|server master iburst|g" /etc/chrony.conf
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# systemctl enable chronyd
[root@node1 ~]# timedatectl set-ntp true
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================

^* master 11 6 17 2 +20us[ +18us] +/- 16ms

2.2.7 Disable SELinux

[root@master]# sed -i 's/enforcing/disabled/' /etc/selinux/config
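
The config edit only takes effect after a reboot; to relax SELinux for the current session as well, you can additionally run (a common companion step, not shown in the transcript above):

setenforce 0    # switch to permissive mode immediately
getenforce      # should now report Permissive (Disabled after a reboot)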

2.2.8 Disable the Swap Partition

[root@master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
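
Likewise, the fstab edit only prevents swap from mounting at the next boot. kubeadm's preflight checks refuse to run while swap is enabled, so turn it off right away too:

swapoff -a      # disable all swap devices now
free -m         # the Swap line should read 0 across the board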

2.2.9 Pass Bridged IPv4 Traffic to iptables Chains

master node:

[root@master opt]# modprobe br_netfilter
[root@master opt]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@master opt]# echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
[root@master opt]# echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
[root@master opt]# sysctl -p

node nodes:

[root@node1 opt]# modprobe br_netfilter
[root@node1 opt]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@node1 opt]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

[root@node2 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

2.2.10 Enable IPVS

Run this on all three nodes:

[root@master opt]# yum -y install ipset ipvsadm
[root@master opt]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
[root@master opt]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 15053 0
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 141432 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 133053 2 ip_vs,nf_conntrack_ipv4
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
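
Loading the modules only makes IPVS available; kube-proxy still runs in iptables mode by default. Once the cluster is up (section 2.3.3), you can switch it over. A sketch of the usual procedure:

# set "mode: ipvs" in the kube-proxy ConfigMap, then recreate the kube-proxy pods
kubectl edit configmap kube-proxy -n kube-system
kubectl delete pod -n kube-system -l k8s-app=kube-proxy   # the DaemonSet recreates them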

2.3 Install Docker, kubeadm, kubelet, and kubectl on Every Node

2.3.1 Install docker-ce

All three nodes:

[root@master opt]# yum install -y yum-utils device-mapper-p* lvm2
[root@master opt]# yum install -y docker-ce
[root@master opt]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master opt]# tee /etc/docker/daemon.json <<-'EOF'
{
"insecure-registries" : ["0.0.0.0/0"],
"registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@master opt]# systemctl restart docker
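
The "native.cgroupdriver=systemd" entry matters: Kubernetes recommends that kubelet and the container runtime use the same cgroup driver, with systemd preferred on systemd-based distros, and a mismatch leads to node instability. Confirm the setting took effect:

docker info 2>/dev/null | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd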

2.3.2 Install kubeadm, kubelet, and kubectl

All three nodes:

[root@master opt]# yum install -y kubelet-1.18.1 kubeadm-1.18.1 kubectl-1.18.1
[root@master opt]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master opt]# systemctl start kubelet
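
Don't be alarmed if kubelet immediately enters a restart loop at this point: it has no cluster configuration yet and keeps retrying until kubeadm init (or kubeadm join) supplies one. You can watch this with:

systemctl status kubelet    # shows "activating (auto-restart)" until init/join has run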

2.3.3 Deploy the master Node
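
The transcripts below use a shell variable $IP. Judging by the init flags and the later docker login step, it holds the master's address, which here also fronts a local Harbor registry (library project) mirroring the required images; set it before running the commands:

export IP=192.168.20.119   # master's IP; also the address of the local image registry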

[root@master opt]# kubeadm init --kubernetes-version=1.18.1 --apiserver-advertise-address=$IP --image-repository 192.168.20.119/library --pod-network-cidr=10.244.0.0/16
W0518 05:17:21.372074 3287 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.20.119]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.20.119 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.20.119 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0518 05:17:30.213101 3287 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0518 05:17:30.216982 3287 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.516345 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qkrll1.aajjr3v4zcps0a94
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.20.119:6443 --token qkrll1.aajjr3v4zcps0a94 \
--discovery-token-ca-cert-hash sha256:eca13ad31879f9f8cca8c719b685f239a06d2e1450e49380f8f3eec5121db792
[root@master opt]# mkdir -p /root/.kube
[root@master opt]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master opt]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master opt]# kubectl get pod -n kube-system -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-6fcfc67db4-6tdxr 0/1 Pending 0 76s <none> <none> <none> <none>
coredns-6fcfc67db4-b6m9j 0/1 Pending 0 76s <none> <none> <none> <none>
etcd-master 1/1 Running 0 89s 192.168.20.119 master <none> <none>
kube-apiserver-master 1/1 Running 0 89s 192.168.20.119 master <none> <none>
kube-controller-manager-master 1/1 Running 0 89s 192.168.20.119 master <none> <none>
kube-proxy-d7vxn 1/1 Running 0 76s 192.168.20.119 master <none> <none>
kube-scheduler-master 1/1 Running 0 89s 192.168.20.119 master <none> <none>
[root@master opt]# sed -i "s/quay.io\/coreos/$IP\/library/g" /opt/yaml/flannel/kube-flannel.yaml
[root@master opt]# kubectl apply -f /opt/yaml/flannel/kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
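
Flannel takes a little while to roll out; the cluster is only usable once its pods are Running and the nodes report Ready. A quick check:

kubectl get pods -n kube-system -owide   # kube-flannel-ds-* and coredns-* should reach Running
kubectl get nodes                        # master should flip from NotReady to Ready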

2.3.4 Deploy the Dashboard

[root@master ~]# mkdir dashboard-certs
[root@master ~]# cd dashboard-certs/
[root@master dashboard-certs]# kubectl create namespace kubernetes-dashboard
namespace/kubernetes-dashboard created
[root@master dashboard-certs]# openssl genrsa -out dashboard.key 2048
Generating RSA private key, 2048 bit long modulus
..........+++
.......................+++
e is 65537 (0x10001)
[root@master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
[root@master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=dashboard-cert
Getting Private key
[root@master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created
[root@master dashboard-certs]# sed -i "s/kubernetesui/$IP\/library/g" /opt/yaml/dashboard/recommended.yaml
[root@master dashboard-certs]# kubectl apply -f /opt/yaml/dashboard/recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master dashboard-certs]# kubectl apply -f /opt/yaml/dashboard/dashboard-adminuser.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created
[root@master dashboard-certs]# token=`kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')`
[root@master dashboard-certs]# echo "Login token: $token"
Login token: Name: dashboard-admin-token-lppbw
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: f808d3d3-ffc2-4fbf-bb1d-d6da71c8b89e

Type: kubernetes.io/service-account-token

Data
====

ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjFqbHJxNFptcmZoZXBibEs1NHFwWHRRZGxDck8tWDM4UWRwV3M2ZkoyT3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbHBwYnciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjgwOGQzZDMtZmZjMi00ZmJmLWJiMWQtZDZkYTcxYzhiODllIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.O7GUtnsJxUBP0m-iDKpYx9Bn7XHd02lUiFXaVRv8LtM2M6pB5Snd9smY5Hj3voT--b8AEuizywnRYZtX6mIDxAiRSQhfea4uU5dlG0wuG0_JDnj1w5431RPedZVFwE3xO5YyecwzwMwmCE7XWx9uFFRvTj17ant3BkZN7TMPWOrab4VUU905RWYCzb33WpzCa8nYOiweNzfttopJVYmTpSlVSEQAZH3cx2vl7eW2dmny3Glqz0-OoK5eVk1gpWiAZhRFpMD0540wXBtmGcXnDVcijFxYlo-TzfiJLnh7Q8k9ydPbDok3wViVqqAdsbfGDpa_TjkTpNJJOqVmDPqY8Q
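
How you reach the dashboard depends on how the Service is exposed in your recommended.yaml: the upstream default is ClusterIP, but many offline bundles patch it to NodePort. Check the Service and then browse to it with the token printed above (a sketch, assuming NodePort):

kubectl get svc -n kubernetes-dashboard kubernetes-dashboard
# with a NodePort, open https://<any-node-ip>:<node-port> and sign in with the token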

2.3.5 Deploy the node Nodes

[root@node1 opt]# docker login -u admin -p Harbor12345 192.168.20.119
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@node1 opt]# ssh master "kubeadm token create --print-join-command" >token.sh
The authenticity of host 'master (192.168.20.119)' can't be established.
ECDSA key fingerprint is SHA256:FqTDtd28812m1IAFRjAbURuwoPQQRbq7gqGrEYh77C4.
ECDSA key fingerprint is MD5:1a:d0:c6:aa:89:3a:1c:ed:c6:21:1d:dc:4d:63:e8:33.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.20.119' (ECDSA) to the list of known hosts.
W0518 05:47:05.650872 30628 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@node1 opt]# chmod +x token.sh && source token.sh && rm -rf token.sh
W0518 05:47:14.165256 24770 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node1 opt]# ssh master "kubectl get nodes"
NAME STATUS ROLES AGE VERSION
master Ready master 29m v1.18.1
node1 Ready <none> 16s v1.18.1

[root@node2 opt]# docker login -u admin -p 123456 192.168.20.119
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@node2 opt]# ssh master "kubeadm token create --print-join-command" >token.sh
The authenticity of host 'master (192.168.20.119)' can't be established.
ECDSA key fingerprint is SHA256:FqTDtd28812m1IAFRjAbURuwoPQQRbq7gqGrEYh77C4.
ECDSA key fingerprint is MD5:1a:d0:c6:aa:89:3a:1c:ed:c6:21:1d:dc:4d:63:e8:33.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.20.119' (ECDSA) to the list of known hosts.
W0518 05:50:32.342935 5574 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@node2 opt]# chmod +x token.sh && source token.sh && rm -rf token.sh
W0518 05:50:47.130633 27919 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node2 opt]# ssh master "kubectl get nodes"
NAME STATUS ROLES AGE VERSION
master Ready master 33m v1.18.1
node1 Ready <none> 3m54s v1.18.1
node2 Ready <none> 20s v1.18.1
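
As a final sanity check, confirm that every system pod is Running and spread across the nodes:

kubectl get pods -A -owide   # all kube-system pods (including flannel on node1/node2) should be Running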