[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
2.2.2 Stop the Firewall and Disable It at Boot
[root@master ~]# systemctl stop firewalld
Failed to stop firewalld.service: Unit firewalld.service not loaded.
[root@master ~]# systemctl disable firewalld
Failed to execute operation: No such file or directory
[root@master ~]# systemctl stop iptables
Failed to stop iptables.service: Unit iptables.service not loaded.
[root@master ~]# systemctl disable iptables
Failed to execute operation: No such file or directory
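Both failures are expected here: neither firewalld nor iptables is installed as a systemd unit on this host, so there is nothing to stop or disable. A quick way to double-check (a sketch; adjust to your host):

# Expect "inactive"/"unknown" from is-active, and no firewall entries from grep
systemctl is-active firewalld iptables
systemctl list-unit-files | grep -Ei 'firewalld|iptables'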
[root@master opt]# kubeadm init --kubernetes-version=1.18.1 --apiserver-advertise-address=$IP --image-repository 192.168.20.119/library --pod-network-cidr=10.244.0.0/16
W0518 05:17:21.372074    3287 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.20.119]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.20.119 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.20.119 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0518 05:17:30.213101    3287 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0518 05:17:30.216982    3287 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.516345 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qkrll1.aajjr3v4zcps0a94
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
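The commands kubeadm prints at this point were trimmed from the capture; on v1.18 they are the standard kubeconfig setup, reproduced here for reference:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config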
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
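The init command used --pod-network-cidr=10.244.0.0/16, which is flannel's default CIDR, so flannel is the natural choice here. A sketch of applying it in this offline setup (the manifest URL and quay.io image path are assumptions based on the flannel project at the time; the sed mirrors the Harbor registry rewrite used for the dashboard below):

# Fetch the flannel manifest, repoint its images at the local Harbor registry, and apply it
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i "s#quay.io/coreos#192.168.20.119/library#g" kube-flannel.yml
kubectl apply -f kube-flannel.yml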
Then you can join any number of worker nodes by running the following on each as root:
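The join command itself (with its bootstrap token and CA cert hash) is trimmed from this capture, but it can be regenerated on the master at any time, which is exactly what both worker nodes do over SSH below:

kubeadm token create --print-join-command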
[root@master ~]# mkdir dashboard-certs
[root@master ~]# cd dashboard-certs/
[root@master dashboard-certs]# kubectl create namespace kubernetes-dashboard
namespace/kubernetes-dashboard created
[root@master dashboard-certs]# openssl genrsa -out dashboard.key 2048
Generating RSA private key, 2048 bit long modulus
..........+++
.......................+++
e is 65537 (0x10001)
[root@master dashboard-certs]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
[root@master dashboard-certs]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=dashboard-cert
Getting Private key
[root@master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created
[root@master dashboard-certs]# sed -i "s/kubernetesui/$IP\/library/g" /opt/yaml/dashboard/recommended.yaml
[root@master dashboard-certs]# kubectl apply -f /opt/yaml/dashboard/recommended.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master dashboard-certs]# kubectl apply -f /opt/yaml/dashboard/dashboard-adminuser.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created
[root@master dashboard-certs]# token=`kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')`
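The token variable above captures the entire describe output, token included. To extract only the bearer token for the dashboard login prompt, a jsonpath query is cleaner (a sketch using the dashboard-admin secret created above):

# Print just the decoded token field of the dashboard-admin secret
secret=$(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
kubectl -n kubernetes-dashboard get secret "$secret" -o jsonpath='{.data.token}' | base64 -d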
[root@node1 opt]# docker login -u admin -p Harbor12345 192.168.20.119
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@node1 opt]# ssh master "kubeadm token create --print-join-command" >token.sh
The authenticity of host 'master (192.168.20.119)' can't be established.
ECDSA key fingerprint is SHA256:FqTDtd28812m1IAFRjAbURuwoPQQRbq7gqGrEYh77C4.
ECDSA key fingerprint is MD5:1a:d0:c6:aa:89:3a:1c:ed:c6:21:1d:dc:4d:63:e8:33.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.20.119' (ECDSA) to the list of known hosts.
W0518 05:47:05.650872   30628 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@node1 opt]# chmod +x token.sh && source token.sh && rm -rf token.sh
W0518 05:47:14.165256   24770 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 opt]# ssh master "kubectl get nodes"
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   29m   v1.18.1
node1    Ready    <none>   16s   v1.18.1
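Note the preflight warning during the join: the kubelet service is not enabled, so it would not come back after a node reboot. The fix the warning suggests is one command, to be run on every node that shows it:

systemctl enable kubelet.service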
[root@node2 opt]# docker login -u admin -p 123456 192.168.20.119
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@node2 opt]# ssh master "kubeadm token create --print-join-command" >token.sh
The authenticity of host 'master (192.168.20.119)' can't be established.
ECDSA key fingerprint is SHA256:FqTDtd28812m1IAFRjAbURuwoPQQRbq7gqGrEYh77C4.
ECDSA key fingerprint is MD5:1a:d0:c6:aa:89:3a:1c:ed:c6:21:1d:dc:4d:63:e8:33.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.20.119' (ECDSA) to the list of known hosts.
W0518 05:50:32.342935    5574 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@node2 opt]# chmod +x token.sh && source token.sh && rm -rf token.sh
W0518 05:50:47.130633   27919 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node2 opt]# ssh master "kubectl get nodes"
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   33m     v1.18.1
node1    Ready    <none>   3m54s   v1.18.1
node2    Ready    <none>   20s     v1.18.1
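With all three nodes Ready, a final sanity check is to confirm that the system pods (etcd, the apiserver, CoreDNS, kube-proxy, and the dashboard) are all Running; pod names and ages will of course differ per cluster:

kubectl get pods --all-namespaces -o wide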