
BRICS Competition Sample Question II

2022 BRICS Skills Competition Sample Questions

Session B Topic: Container Orchestration and Operations

Task 1: Container Cloud Platform Environment Initialization (5 points)

1. Initializing the container cloud platform: According to the IP address plan in Table 2, create the cloud servers using the CentOS_7.5_x86_64_XD.qcow image and make sure the network communicates normally. Set the hostnames according to Table 2 and disable swap, permanently disable SELinux and the firewall, and modify the hosts mapping. Submit the contents of the master node's hosts file to the answer box. [1 point]

Set the hostnames and disable swap

[root@master ~]# swapoff -a
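The task also asks for the hostnames from Table 2 to be set and for swap to stay off after a reboot; a minimal sketch (run the matching hostnamectl command on node1, node2 and harbor) might be:

[root@master ~]# hostnamectl set-hostname master
[root@master ~]# sed -i '/swap/s/^/#/' /etc/fstab    # comment out the swap entry so swap stays disabled after reboot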

Permanently disable SELinux and the firewall

[root@master ~]# setenforce 0
setenforce: SELinux is disabled
[root@master ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
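The setenforce output above shows SELinux already disabled at runtime; to make the change permanent and to disable the firewall as the task requires, commands along these lines are typically used:

[root@master ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld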


Modify the hosts mapping

[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.20.115 master
192.168.20.122 node1
192.168.20.112 node2
192.168.20.114 harbor

2. Persistent mounting of the yum source data: Upload the provided CentOS-7-x86_64-DVD-1804.iso and chinaskills_cloud_paas.iso images to the /root directory of the master node, then create the /centos and /paas directories under /opt, mount CentOS-7-x86_64-DVD-1804.iso on the /centos directory, and mount chinaskills_cloud_paas.iso on the /paas directory.

[root@master ~]# curl -O http://172.19.25.11/middle/chinaskills_cloud_paas.iso
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8825M 100 8825M 0 0 111M 0 0:01:19 0:01:19 --:--:-- 111M
[root@master ~]# curl -O http://172.19.25.11/middle/CentOS-7-x86_64-DVD-1804.iso
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4263M 100 4263M 0 0 111M 0 0:00:38 0:00:38 --:--:-- 106M
[root@master ~]# mount chinaskills_cloud_paas.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
[root@master ~]# mkdir -p /opt/paas
[root@master ~]# mkdir -p /opt/centos
[root@master ~]# cp -rf /mnt/* /opt/paas/
[root@master ~]# umount /mnt/
[root@master ~]# mount CentOS-7-x86_64-DVD-1804.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
[root@master ~]# cp -rf /mnt/* /opt/centos/
[root@master ~]# umount /mnt/
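The commands above copy the ISO contents into /opt rather than leaving them mounted. If the persistent mount described in the task statement is wanted instead, one possible approach is to mount the images directly on the target directories and record them in /etc/fstab:

[root@master ~]# mount -o loop /root/CentOS-7-x86_64-DVD-1804.iso /opt/centos
[root@master ~]# mount -o loop /root/chinaskills_cloud_paas.iso /opt/paas
[root@master ~]# echo "/root/CentOS-7-x86_64-DVD-1804.iso /opt/centos iso9660 defaults,loop 0 0" >> /etc/fstab
[root@master ~]# echo "/root/chinaskills_cloud_paas.iso /opt/paas iso9660 defaults,loop 0 0" >> /etc/fstab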

3. Writing the yum repositories: Configure a local yum repository on the master node with the repo file named centos.repo, install the FTP service and set the FTP root to /opt/, then configure an FTP-based yum repository on the other nodes with the repo file named ftp.repo, where the FTP server address is the master node IP. Submit the yum repository file contents of the other nodes to the answer box. [1 point]

[root@master ~]# cat /etc/yum.repos.d/local.repo 
[centos]
name=centos
baseurl=file:///opt/centos/
gpgcheck=0
enabled=1

[k8s]
name=k8s
baseurl=file:///opt/paas/kubernetes-repo
gpgcheck=0
enabled=1
[root@master ~]# yum install -y vsftpd
[root@master ~]# echo "anon_root=/opt/" >> /etc/vsftpd/vsftpd.conf
[root@master ~]# systemctl restart vsftpd
[root@master ~]# systemctl enable vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.
[root@node1 ~]# cat /etc/yum.repos.d/ftp.repo 
[centos]
name=centos
baseurl=ftp://master/centos/
gpgcheck=0
enabled=1

[k8s]
name=k8s
baseurl=ftp://master/paas/kubernetes-repo
gpgcheck=0
enabled=1
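After distributing the repo files, the repositories can be checked on each node; a typical verification is:

[root@node1 ~]# yum clean all && yum repolist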

4. Setting up the time synchronization server: Deploy a chrony server on the master node that allows the other nodes to synchronize time from it; start the service and enable it at boot. On the other nodes, set the master node as the upstream NTP server, then restart the service and enable it at boot. On the master node, use the chronyc command to synchronize the system time of the control node.

master node

[root@master ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
allow 192.168.20.0/24
local stratum 10
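After editing the configuration, chronyd has to be restarted and enabled, and the chronyc command the task asks for can be issued on the master, for example:

[root@master ~]# systemctl restart chronyd
[root@master ~]# systemctl enable chronyd
[root@master ~]# chronyc -a makestep
[root@master ~]# chronyc sources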

node nodes

[root@node1 ~]# cat /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
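On each node the service likewise needs a restart and enable, and chronyc can confirm that the master is being used as the time source:

[root@node1 ~]# systemctl restart chronyd && systemctl enable chronyd
[root@node1 ~]# chronyc sources -v    # master should appear as the selected source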

5. Configuring passwordless login: Configure passwordless SSH login for the four servers so that they can log into each other without a password. Use the scp command to copy the master node's hosts file to /etc/hosts on all nodes. Submit all the commands above and their output to the answer box. [1 point]

[root@master ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)?
[root@master ~]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)

[root@master ~]# ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)

[root@master ~]# ssh-copy-id harbor
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
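The key pair already existed in this environment, so ssh-copy-id skipped the keys. The task also asks for the hosts file to be pushed out with scp; that step would look roughly like:

[root@master ~]# scp /etc/hosts node1:/etc/hosts
[root@master ~]# scp /etc/hosts node2:/etc/hosts
[root@master ~]# scp /etc/hosts harbor:/etc/hosts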

Task 2: Kubernetes Deployment Tasks (10 points)

1. Installing Docker: Install docker-ce on all nodes. Then install the Harbor registry on the harbor node and verify that you can log in to the Harbor registry normally, with the login password set to "test_<workstation number>". Submit a screenshot taken after logging in to the answer box. [1 point]

All nodes

[root@master ~]# yum install -y yum-utils device-mapper-p* lvm2
[root@master ~]# yum install -y docker-ce
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker

harbor node

cd /opt/harbor/
tar -zxvf harbor-offline-installer-v2.1.0.tgz
cd harbor
mv harbor.yml.tmpl harbor.yml
sed -i "5s/reg.mydomain.com/${IP}/g" harbor.yml
sed -i "13s/^/#/g" harbor.yml
sed -i "15,18s/^/#/g" harbor.yml
./prepare || exit
./install.sh --with-clair || exit
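In this script ${IP} is assumed to hold the harbor node address (192.168.20.114 in this plan). The admin password required by the task, "test_<workstation number>", is set through the harbor_admin_password line in harbor.yml before install.sh is run, and the login can then be verified from any Docker node; one possible sequence (test_01 stands in for the real workstation number) is:

sed -i "s/^harbor_admin_password:.*/harbor_admin_password: test_01/g" harbor.yml
docker login 192.168.20.114 -u admin -p test_01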

2. Setting up the Harbor registry: Set the default Docker registry to the Harbor address and change the Docker cgroup driver to systemd. After the installation is complete, run the docker version command and submit its output together with the contents of the daemon.json file. [2 points]

[root@master ~]# cat /etc/docker/daemon.json 
{
"insecure-registries":["192.168.20.114"],
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@master ~]# systemctl restart docker
[root@master ~]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.13
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:02:21 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.7
GitCommit: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

3. Installing docker-compose: On the master node, install docker-compose using the file /opt/paas/docker-compose/v1.25.5-docker-compose-Linux-x86_64. After the installation is complete, run the docker-compose version command and submit its output to the answer box. [0.5 points]

[root@master ~]# mv /opt/paas/docker-compose/v1.25.5-docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
[root@master ~]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
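If the copied binary had not been executable, the version command above would have failed; in that case execute permission needs to be added first:

[root@master ~]# chmod +x /usr/local/bin/docker-compose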

4. Uploading Docker images: On the master node, use /opt/paas/k8s_image_push.sh to upload all images to the Docker registry. When finished, take a screenshot of the image list in the Harbor library project and submit it to the answer box. [1 point]

[root@master ~]# for i in $(ls /opt/paas/images/); do docker load -i /opt/paas/images/$i; done
[root@master paas]# ./k8s_image_push.sh 
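The k8s_image_push.sh script is provided on the PaaS ISO and its contents are not reproduced here; conceptually it tags every loaded image for the Harbor registry and pushes it, roughly equivalent to the following sketch:

docker login 192.168.20.114 -u admin -p <password>
for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
    new=192.168.20.114/library/${img##*/}          # retag each image under the library project
    docker tag "$img" "$new" && docker push "$new"
done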

5. Installing the kubeadm tool: Install the kubeadm tool on the master and node nodes and enable it to start at boot. After installation, use the rpm command together with grep to check whether kubeadm was installed correctly. Submit the rpm/grep output to the answer box. [0.5 points]

[root@master paas]# yum install -y kubelet-1.18.1 kubeadm-1.18.1 kubectl-1.18.1
[root@master paas]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@master paas]# systemctl restart kubelet
[root@master paas]# rpm -qa | grep kubeadm
kubeadm-1.18.1-0.x86_64

6. Pulling the required images on the compute nodes: On all node nodes, use the docker command to pull the Kubernetes base images; after the pull completes, use the docker command to list the images. [1 point]
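No commands are recorded for this step in the original notes; on each node the base images would typically be pulled from the Harbor registry and then listed, along these lines (the image names and tags must match what was actually pushed for Kubernetes 1.18.1):

[root@node1 ~]# docker pull 192.168.20.114/library/kube-proxy:v1.18.1
[root@node1 ~]# docker pull 192.168.20.114/library/pause:3.2
[root@node1 ~]# docker pull 192.168.20.114/library/coredns:1.6.7
[root@node1 ~]# docker pull 192.168.20.114/library/flannel:v0.12.0-amd64
[root@node1 ~]# docker images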

7. Installing the master with kubeadm: Use the kubeadm command to initialize the master node, setting the Kubernetes internal pod network CIDR to 10.244.0.0/16, then use kube-flannel.yaml to complete the control node setup. When finished, use commands to view the cluster status and all pods. Submit the commands above and their output to the answer box. [2 points]

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@master ~]# echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
[root@master ~]# echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
[root@master ~]# sysctl -p
[root@master ~]# kubeadm init --kubernetes-version=1.18.1 --apiserver-advertise-address=192.168.20.115 --image-repository 192.168.20.114/library --pod-network-cidr=10.244.0.0/16
[root@master ~]# mkdir -p /root/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
[root@master ~]# chown $(id -u):$(id -g) /root/.kube/config
[root@master ~]# sed -i 's/quay.io\/coreos/192.168.20.114\/library/g' /opt/paas/yaml/flannel/kube-flannel.yaml 
[root@master ~]# kubectl apply -f /opt/paas/yaml/flannel/kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f8789844f-5v894 1/1 Running 0 7m54s
kube-system coredns-f8789844f-vfh6t 1/1 Running 0 7m54s
kube-system etcd-master 1/1 Running 0 8m7s
kube-system kube-apiserver-master 1/1 Running 0 8m7s
kube-system kube-controller-manager-master 1/1 Running 0 8m7s
kube-system kube-flannel-ds-xvn9p 1/1 Running 0 43s
kube-system kube-proxy-fjkd2 1/1 Running 0 7m54s
kube-system kube-scheduler-master 1/1 Running 0 8m7s

8. Installing the Kubernetes network plugin: Install the Kubernetes network plugin with kube-flannel.yaml, then use commands to view the node status and the cluster status. Submit the cluster status command and its output to the answer box. [0.5 points]

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 10m v1.18.1
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.20.115:6443
KubeDNS is running at https://192.168.20.115:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

9. Installing the Kubernetes graphical interface: Install the Kubernetes Dashboard, open its home page when done, and submit a screenshot of the Kubernetes Dashboard interface to the answer box. [1 point]

mkdir dashboard-certs
cd dashboard-certs/
kubectl create namespace kubernetes-dashboard
openssl genrsa -out dashboard.key 2048
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
sed -i "s/kubernetesui/$IP\/library/g" /opt/yaml/dashboard/recommended.yaml
kubectl apply -f /opt/yaml/dashboard/recommended.yaml
kubectl apply -f /opt/yaml/dashboard/dashboard-adminuser.yaml
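In this script $IP is assumed to be the Harbor registry address (192.168.20.114). Once the manifests are applied, the dashboard NodePort and a login token for the account created by dashboard-adminuser.yaml can be looked up before taking the screenshot, for example:

kubectl get pods,svc -n kubernetes-dashboard
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')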

10. Adding compute nodes: On the master node, use the kubeadm command to view the token; on all node nodes, use the kubeadm command to join them to the Kubernetes cluster. When finished, check the status of all nodes on the master node.

All node nodes

modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
[root@node1 ~]# cat /etc/docker/daemon.json 
{
"insecure-registries":["192.168.20.114"],
"exec-opts":["native.cgroupdriver=systemd"]
}
ssh master "kubeadm token create --print-join-command" >token.sh
chmod +x token.sh && source token.sh && rm -rf token.sh
sleep 20
ssh master "kubectl get nodes"

Task 3: Kubernetes Operations Tasks (15 points)

1. Building a Docker image with a Dockerfile: Using the mysql:5.7 image as the base image, build a MySQL image that initializes the provided SQL file into the MySQL database. Build the image from the Dockerfile you write, name it mysql:latest, and push it to the Harbor registry set up earlier. Write a YAML file and verify the database contents. When finished, submit the Dockerfile contents, the Harbor registry image list, and the database contents to the answer box. [1 point]
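No Dockerfile is recorded in the original notes. A minimal sketch, assuming the provided SQL file is named init.sql (a hypothetical name) and sits next to the Dockerfile, and assuming 123456 as the root password (not specified by the task), could be:

[root@master ~]# cat Dockerfile
FROM mysql:5.7
# assumed root password, adjust as required
ENV MYSQL_ROOT_PASSWORD=123456
# the official mysql image executes *.sql files from this directory on first start
COPY init.sql /docker-entrypoint-initdb.d/
[root@master ~]# docker build -t 192.168.20.114/library/mysql:latest .
[root@master ~]# docker push 192.168.20.114/library/mysql:latest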

2. Persistent storage: Set up NFS shared storage, configure the nfs-provisioner, and create a StorageClass. Use the StorageClass to dynamically provision a PVC of size 1Gi. Modify the standard nfs-deployment.yaml file and write storageclass.yaml and pvc.yaml. Submit a screenshot of the final PVC status and the YAML files to the answer box. [2 points]

[root@master ~]# yum install -y nfs-utils rpcbind
[root@master ~]# cat /etc/exports
/root/nfs *(rw,async,no_root_squash)
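The exports file alone is not enough: the exported directory must exist and the NFS services must be running and exporting it, roughly:

[root@master ~]# mkdir -p /root/nfs
[root@master ~]# systemctl restart rpcbind nfs-server && systemctl enable rpcbind nfs-server
[root@master ~]# exportfs -rv
[root@master ~]# showmount -e 192.168.20.115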
[root@master ~]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default        # set according to the actual environment; the same applies below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@master ~]# cat storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage   # must match the PROVISIONER_NAME environment variable in the provisioner deployment
parameters:
  archiveOnDelete: "false"
[root@master ~]# cat nfs-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default        # keep consistent with the namespace in the RBAC file
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 192.168.20.114/library/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage        # provisioner name; must match the provisioner in storageclass.yaml
            - name: NFS_SERVER
              value: 192.168.20.115     # NFS server IP address
            - name: NFS_PATH
              value: /root/nfs          # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.20.115      # NFS server IP address
            path: /root/nfs             # NFS export path
[root@master ~]# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage   # StorageClass
  resources:
    requests:
      storage: 1Gi
[root@master ~]# kubectl apply -f rbac.yaml 
[root@master ~]# kubectl apply -f storageclass.yaml
[root@master ~]# kubectl apply -f nfs-deployment.yaml
[root@master ~]# kubectl apply -f pvc.yaml
[root@master ~]# kubectl get pod,sc,pvc,pv
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-8d5b467d-cr7qw 1/1 Running 0 15m

NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/managed-nfs-storage nfs-storage Delete Immediate false 27m

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-pvc Bound pvc-ee59105c-cc95-48ac-af39-75ab6ca51ac0 1Gi RWX managed-nfs-storage 2m24s

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-ee59105c-cc95-48ac-af39-75ab6ca51ac0 1Gi RWX Delete Bound default/test-pvc managed-nfs-storage 2m24s

3. Writing the deployment file: Push the provided nginx:latest image to the Harbor registry and write a deployment file that uses this image. Mount the previously created PVC at the nginx html directory, set the replica count to 1, and apply resource constraints: request 300Mi memory and 300m CPU, limit 450Mi memory and 450m CPU. Submit a screenshot of the pod status and the YAML file to the answer box. [3 points]

[root@master ~]# cat nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
        - image: 192.168.20.114/library/nginx:latest
          name: nginx
          ports:
            - name: nginx-port
              containerPort: 80
          volumeMounts:
            - name: test-pvc
              mountPath: /usr/share/nginx/html
          resources:
            limits:
              cpu: 450m
              memory: 450Mi
            requests:
              cpu: 300m
              memory: 300Mi
      volumes:
        - name: test-pvc
          persistentVolumeClaim:
            claimName: test-pvc

4. Creating a Service to expose external access: Based on the nginx pod, write a Service named nginx-svc that proxies the nginx service port, with the type set to NodePort. Once created, nginx should be accessible through this Service.

[root@master ~]# cat nginx-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-svc
spec:
  ports:
    - port: 80
      name: nginx-port
  selector:
    app: nginx
  type: NodePort
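Once applied, the NodePort assigned by Kubernetes can be checked and nginx reached through it; a quick verification might be:

[root@master ~]# kubectl apply -f nginx-svc.yaml
[root@master ~]# kubectl get svc nginx-svc
[root@master ~]# curl http://192.168.20.115:<nodePort>    # <nodePort> is the port shown by the command above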

5. Configuring metrics-server for resource monitoring: Push the provided metrics-server image to Harbor, modify components.yaml, and create the metrics-server. When finished, submit a screenshot of the metrics-server status to the answer box. [2 points]

[root@master ~]# cat components.yaml 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
        # mount in tmp so we can safely use from-scratch images and/or read-only containers
        - name: tmp-dir
          emptyDir: {}
      containers:
        - name: metrics-server
          image: 192.168.20.114/library/metrics-server-amd64:v0.3.6
          imagePullPolicy: IfNotPresent
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          command:
            - /metrics-server
            - --kubelet-insecure-tls
            - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
      #nodeSelector:
      #  kubernetes.io/os: linux
      #  kubernetes.io/arch: "amd64"
      nodeName: master
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
[root@master ~]# kubectl apply -f components.yaml 
[root@master ~]# kubectl get apiservice
NAME SERVICE AVAILABLE AGE
v1. Local True 25h
v1.admissionregistration.k8s.io Local True 25h
v1.apiextensions.k8s.io Local True 25h
v1.apps Local True 25h
v1.authentication.k8s.io Local True 25h
v1.authorization.k8s.io Local True 25h
v1.autoscaling Local True 25h
v1.batch Local True 25h
v1.coordination.k8s.io Local True 25h
v1.networking.k8s.io Local True 25h
v1.rbac.authorization.k8s.io Local True 25h
v1.scheduling.k8s.io Local True 25h
v1.storage.k8s.io Local True 25h
v1beta1.admissionregistration.k8s.io Local True 25h
v1beta1.apiextensions.k8s.io Local True 25h
v1beta1.authentication.k8s.io Local True 25h
v1beta1.authorization.k8s.io Local True 25h
v1beta1.batch Local True 25h
v1beta1.certificates.k8s.io Local True 25h
v1beta1.coordination.k8s.io Local True 25h
v1beta1.discovery.k8s.io Local True 25h
v1beta1.events.k8s.io Local True 25h
v1beta1.extensions Local True 25h
v1beta1.metrics.k8s.io kube-system/metrics-server True 56m
v1beta1.networking.k8s.io Local True 25h
v1beta1.node.k8s.io Local True 25h
v1beta1.policy Local True 25h
v1beta1.rbac.authorization.k8s.io Local True 25h
v1beta1.scheduling.k8s.io Local True 25h
v1beta1.storage.k8s.io Local True 25h
v2beta1.autoscaling Local True 25h
v2beta2.autoscaling Local True 25h

6. Configuring autoscaling: Write a deployment-nginx-hpa.yaml file with a minimum of 1 replica and a maximum of 3 replicas, scaling out automatically when overall resource utilization exceeds 80%. Submit the YAML file to the answer box.

[root@master ~]# cat deployment-nginx-hpa.yaml 
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
[root@master ~]# kubectl apply -f deployment-nginx-hpa.yaml
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx-hpa 0%/80% 1 3 1 159m

7. Load testing: Install the httpd-tools package and run a load test against the external access endpoint provided by the Service to verify the HPA autoscaling behavior. Submit screenshots of the HPA status and pod status to the answer box. [2 points]

[root@master ~]# yum install -y httpd-tools
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-8d5b467d-5cl8s 1/1 Running 6 23h 10.244.1.3 node1 <none> <none>
nginx-665b4cdfdd-48hwn 1/1 Running 0 23h 10.244.2.2 node2 <none> <none>
nginx-hpa-78c58b9df7-qjdhh 1/1 Running 2 22h 10.244.0.14 master <none> <none>
[root@master ~]# ab -t 600 -n 1000000 -c 1000 http://10.244.0.14/path
[root@master ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-hpa Deployment/nginx-hpa 0%/80% 1 3 1 162m