OpenStack Private Cloud: OpenStack Deployment Tasks (10 tasks in this part)
1. [Hands-on] Base installation
controller
yum install -y iaas-xiandian
Edit the environment configuration script, then run:
iaas-pre-host.sh
compute
yum install -y iaas-xiandian
Edit the environment configuration script, then run:
iaas-pre-host.sh
2. [Hands-on] Database installation (3 points)
On the controller node, use the iaas-install-mysql.sh script to install the MariaDB, Memcached and RabbitMQ services. After the installation completes, finish the following tasks.
1. Log in to the database service and set the maximum number of connections to 5000.
2. Log in to the database service and set the maximum allowed packet size to 30M.
iaas-install-mysql.sh
[root@controller ~]# cat /etc/my.cnf
#
# This group is read both by the client and the server
# use it for options that affect everything
#
[client-server]

#
# This group is read by the server
#
[mysqld]

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
max_connections=5000            # maximum number of connections
max_allowed_packet=30M          # maximum allowed packet size (30M)

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
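The changes in /etc/my.cnf only take effect after MariaDB is restarted. A minimal check (the root password is whatever was set by the environment script, so the interactive -p prompt is used here):
systemctl restart mariadb
mysql -uroot -p -e "show variables like 'max_connections'; show variables like 'max_allowed_packet';"
Note that max_allowed_packet is reported in bytes (30M = 31457280).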
3. [Hands-on] Keystone installation
iaas-install-keystone.sh
4. [Hands-on] Glance installation (2 points)
On the controller node, use the iaas-install-glance.sh script to install the Glance service. Upload the provided cirros-0.3.4-x86_64-disk.img image to the platform with the CLI, name it cirros, and set the minimum disk required to boot to 10G. When finished, submit the controller node's username, password and IP address in the answer box.
curl -O http://172.19.25.11/cirros-0.3.4-x86_64-disk.img
source /etc/keystone/admin-openrc.sh
glance image-create --name cirros --disk-format qcow2 --container-format bare --min-disk 10 --progress < cirros-0.3.4-x86_64-disk.img
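To confirm the image was registered with the required minimum disk (a quick check with the unified client):
openstack image show cirros
The min_disk field should read 10.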
5. [Hands-on] Nova installation (2 points)
On the controller node and the compute node, use the iaas-install-nova-controller.sh and iaas-install-nova-compute.sh scripts respectively to install the Nova service. After installation, tune the OpenStack platform by modifying the relevant parameters. The tuning steps are:
(1) Set the CPU overcommit ratio to 4;
(2) Set the memory overcommit ratio to 1.5;
(3) Reserve 2048 MB of memory that cannot be used by virtual machines;
(4) Reserve 10240 MB of disk that cannot be used by virtual machines.
controller
iaas-install-nova-controller.sh
compute
iaas-install-nova-compute.sh
(1) Set the CPU overcommit ratio to 4:
sed -i 's/#cpu_allocation_ratio=0.0/cpu_allocation_ratio=4.0/g' /etc/nova/nova.conf
(2) Set the memory overcommit ratio to 1.5:
sed -i 's/#ram_allocation_ratio=0.0/ram_allocation_ratio=1.5/g' /etc/nova/nova.conf
(3) Reserve 2048 MB of memory that cannot be used by virtual machines:
sed -i 's/#reserved_host_memory_mb=512/reserved_host_memory_mb=2048/g' /etc/nova/nova.conf
(4) Reserve 10240 MB of disk that cannot be used by virtual machines:
sed -i 's/#reserved_host_disk_mb=0/reserved_host_disk_mb=10240/g' /etc/nova/nova.conf
Restart the Nova services so the new values take effect:
systemctl restart openstack-nova-*
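A quick way to confirm all four values landed in the config (plain grep, run on both nodes):
grep -E '^(cpu_allocation_ratio|ram_allocation_ratio|reserved_host_memory_mb|reserved_host_disk_mb)' /etc/nova/nova.conf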
6. [Hands-on] Neutron installation (2 points)
On the controller node and the compute node, use the iaas-install-neutron-controller.sh and iaas-install-neutron-compute.sh scripts respectively to install the Neutron service, then create a network with the CLI.
compute
iaas-install-neutron-compute.sh
controller
iaas-install-neutron-controller.sh
Run the following commands on the controller:
openstack network create --no-share --external --provider-physical-network provider --provider-network-type vlan extnet
openstack subnet create --network extnet --subnet-range 10.10.1.0/24 --gateway 10.10.1.1 subextnet
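To confirm the external network and its subnet were created (standard client commands):
openstack network show extnet
openstack subnet show subextnet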
7. [Hands-on] Dashboard installation (1 point)
controller
iaas-install-dashboard.sh
8. [Hands-on] Swift installation (2 points)
On the controller node and the compute node, use the iaas-install-swift-controller.sh and iaas-install-swift-compute.sh scripts respectively to install the Swift service. After installation, create a container named examcontainer with the CLI. When finished, submit the controller node's username, password and IP address in the answer box.
compute
iaas-install-swift-compute.sh
controller
iaas-install-swift-controller.sh
swift post examcontainer
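To confirm the container exists (the swift client reports its object and byte counts):
swift stat examcontainer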
9. [Hands-on] Cinder volume creation (2 points)
On the controller node and the compute node, use the iaas-install-cinder-controller.sh and iaas-install-cinder-compute.sh scripts respectively to install the Cinder service, then use the cinder command to create a 2 GB volume named blockvolume. When finished, submit the controller node's username, password and IP address in the answer box.
compute
iaas-install-cinder-compute.sh
controller
iaas-install-cinder-controller.sh
cinder create --name blockvolume 2
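To confirm the volume was created and reached the available state:
cinder show blockvolume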
10. [Hands-on] Heat installation (1 point)
On the controller node, use the iaas-install-heat.sh script to install the Heat service. When finished, submit the controller node's username, password and IP address in the answer box.
controller
iaas-install-heat.sh
Part III: OpenStack Operations Tasks (8 tasks in this part)
1. [Hands-on] Heat template management (2 points)
On the self-built OpenStack private cloud platform, write a Heat template heat-image.yaml under /root that creates an image named heat-image backed by external Swift storage, with the minimum disk limited to 20G and the minimum memory limited to 2G. When finished, submit the controller node's username, password and IP address in the answer box. (Have the environment needed to run the yaml template ready before submitting.)
Not attempted; see the untested sketch below.
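A possible starting point, left here as an untested sketch rather than a verified answer: Heat provides an OS::Glance::Image resource type whose location property points at the image data, which can be a Swift object URL (on newer releases this resource may be deprecated in favour of OS::Glance::WebImage). The URL below is a hypothetical placeholder, not a value from the exam environment.
heat_template_version: 2018-03-02
resources:
  heat_image:
    type: OS::Glance::Image
    properties:
      name: heat-image
      container_format: bare
      disk_format: qcow2
      min_disk: 20        # GB
      min_ram: 2048       # MB
      # hypothetical Swift object URL holding the image data
      location: http://controller:8080/v1/AUTH_xxx/examcontainer/cirros-0.3.4-x86_64-disk.img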
2. [Hands-on] Heat template management (1.5 points)
On the self-built OpenStack private cloud platform or the provided all-in-one platform, write a Heat template create_net.yaml under /root that creates a non-shared network named Heat-Network and a subnet named Heat-Subnet with the 10.20.2.0/24 CIDR, DHCP enabled and an allocation pool of 10.20.2.20-10.20.2.100. When finished, submit the controller node's username, password and IP address in the answer box. (Have the environment needed to run the yaml template ready before submitting.)
[root@controller ~]# cat create_net.yaml
heat_template_version: 2018-03-02
resources:
  network:
    type: OS::Neutron::Net
    properties:
      name: "Heat-Network"
      admin_state_up: true
      shared: false
  subnet:
    type: OS::Neutron::Subnet
    properties:
      name: "Heat-Subnet"
      cidr: 10.20.2.0/24
      gateway_ip: 10.20.2.1
      enable_dhcp: true
      allocation_pools:
        - start: 10.20.2.20
          end: 10.20.2.100
      network_id: { get_resource: network }
heat stack-create -f create_net.yaml heat-network
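To confirm the stack completed and created the resources (standard client commands):
openstack stack list
openstack network show Heat-Network
openstack subnet show Heat-Subnet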
3. [Hands-on] OpenStack parameter tuning (2 points)
The OpenStack services communicate internally over RPC, and every agent connects to RabbitMQ; as the number of agents grows, so does the number of MQ connections, which can eventually hit the limit and become a bottleneck. On the self-built OpenStack private cloud platform, set the maximum number of connections for the RabbitMQ service to 10240 at the user level, the system level and in the service configuration file. When the configuration is done, submit the modified node's username, password and IP address in the answer box.
User level:
vi /etc/security/limits.conf    # add the following two lines
* soft nofile 10240
* hard nofile 10240
System level:
vi /etc/sysctl.conf
fs.file-max=10240
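To load the new kernel parameter without a reboot:
sysctl -p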
Configuration file (the rabbitmq-server systemd unit):
[root@controller ~]# cat /usr/lib/systemd/system/rabbitmq-server.service
# systemd unit example
[Unit]
Description=RabbitMQ broker
After=network.target epmd@0.0.0.0.socket
Wants=network.target epmd@0.0.0.0.socket

[Service]
Type=notify
User=rabbitmq
Group=rabbitmq
NotifyAccess=all
TimeoutStartSec=3600
LimitNOFILE=10240                # add this line
# Note:
# You *may* wish to add the following to automatically restart RabbitMQ
# in the event of a failure. systemd service restarts are not a
# replacement for service monitoring. Please see
# http://www.rabbitmq.com/monitoring.html
#
# Restart=on-failure
# RestartSec=10
WorkingDirectory=/var/lib/rabbitmq
ExecStart=/usr/lib/rabbitmq/bin/rabbitmq-server
ExecStop=/usr/lib/rabbitmq/bin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl restart rabbitmq-server
Check the RabbitMQ status; under file_descriptors, total_limit should now show 10140 (RabbitMQ keeps a small number of descriptors back for its own use, so the reported limit sits slightly below 10240).
rabbitmqctl status
4. [Hands-on] KVM I/O optimization (1.5 points)
On the self-built OpenStack private cloud platform, optimize the KVM I/O scheduling algorithm by changing the default mode to none. When the configuration is done, submit the controller node's username, password and IP address in the answer box.
[root@controller ~]# echo none > /sys/block/vda/queue/scheduler
[root@controller ~]# cat /sys/block/vda/queue/scheduler
[none] mq-deadline kyber
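Writing to /sys only lasts until the next reboot. If a persistent setting is wanted (not required by the task statement), one common approach is a udev rule; the rule file name here is arbitrary:
cat > /etc/udev/rules.d/60-io-scheduler.rules <<'EOF'
# select the none scheduler for all virtio disks at boot
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"
EOF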
5. [Hands-on] Image sharing (2 points)
On the self-built OpenStack private cloud platform, use the provided cirros-0.3.4-x86_64-disk.img image file (available from the provided HTTP service) to create an image named glance-cirros in the admin project, then use the CLI to share the glance-cirros image with the demo project so it can be used there. When the configuration is done, submit the controller node's username, password and IP address in the answer box.
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# glance image-create --name glance-cirros --disk-format qcow2 --container-format bare < cirros-0.3.4-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2022-04-22T05:43:46Z                 |
| disk_format      | qcow2                                |
| id               | b18ba8b5-a016-4a0d-9115-9748d23084d4 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | glance-cirros                        |
| owner            | e0ab86e9a7e947d38ef4b86dfa1e3942     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2022-04-22T05:43:47Z                 |
| virtual_size     | None                                 |
| visibility       | shared                               |
+------------------+--------------------------------------+
[root@controller ~]# openstack project list
+----------------------------------+------------------------------------------------------------------+
| ID                               | Name                                                             |
+----------------------------------+------------------------------------------------------------------+
| 88585c35bd8d483abf671329fa9b1a7a | service                                                          |
| 89de021d5c934f3cb70e397d0d6eb3c7 | e0ab86e9a7e947d38ef4b86dfa1e3942-e2eb5f10-5316-4ba8-857d-23b75de |
| b4bc592766104dd3912e4532c53ebaa1 | demo                                                             |
| e0ab86e9a7e947d38ef4b86dfa1e3942 | admin                                                            |
+----------------------------------+------------------------------------------------------------------+
[root@controller ~]# glance member-create b18ba8b5-a016-4a0d-9115-9748d23084d4 b4bc592766104dd3912e4532c53ebaa1
+--------------------------------------+----------------------------------+---------+
| Image ID                             | Member ID                        | Status  |
+--------------------------------------+----------------------------------+---------+
| b18ba8b5-a016-4a0d-9115-9748d23084d4 | b4bc592766104dd3912e4532c53ebaa1 | pending |
+--------------------------------------+----------------------------------+---------+
[root@controller ~]# glance member-update b18ba8b5-a016-4a0d-9115-9748d23084d4 b4bc592766104dd3912e4532c53ebaa1 accepted
+--------------------------------------+----------------------------------+----------+
| Image ID                             | Member ID                        | Status   |
+--------------------------------------+----------------------------------+----------+
| b18ba8b5-a016-4a0d-9115-9748d23084d4 | b4bc592766104dd3912e4532c53ebaa1 | accepted |
+--------------------------------------+----------------------------------+----------+
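To double-check the share from the demo side, source the demo credentials and list images; the demo-openrc.sh path below is an assumption based on the admin credential file used above:
source /etc/keystone/demo-openrc.sh
openstack image list
glance-cirros should now appear in the demo project's image list.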
6. [Hands-on] Changing the Glance storage backend (2 points)
On the provided OpenStack private cloud platform, create a cloud instance (CentOS 7.5 image, a flavor with a 50G ephemeral disk) and configure it as an NFS server, exporting its /mnt/test directory (create the directory if it does not exist). Then configure the controller node as the NFS client and mount the /mnt/test export as the backing directory of the Glance image store. When the configuration is done, submit the controller node's username, password and IP address in the answer box.
NFS server:
[root@nfs ~]# umount /mnt
[root@nfs ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0  100G  0 disk
└─vda1 253:1    0  100G  0 part /
vdb    253:16   0   50G  0 disk
[root@nfs ~]# mkfs.ext4 /dev/vdb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2162163712
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@nfs ~]# mount /dev/vdb /mnt
[root@nfs ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0  100G  0 disk
└─vda1 253:1    0  100G  0 part /
vdb    253:16   0   50G  0 disk /mnt
yum install -y nfs-utils rpcbind
mkdir /mnt/test
echo "/mnt/test *(rw,async,no_root_squash)" > /etc/exports
[root@nfs ~]# systemctl restart rpcbind
[root@nfs ~]# systemctl restart nfs
[root@nfs ~]# systemctl enable rpcbind
[root@nfs ~]# systemctl enable nfs
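To confirm the export is active on the server before touching the client:
exportfs -v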
NFS client (controller):
echo "192.168.20.106:/mnt/test /var/lib/glance/images nfs defaults,_netdev 0 0" >> /etc/fstab
[root@controller ~]# systemctl restart rpcbind
[root@controller ~]# systemctl restart nfs
[root@controller ~]# systemctl enable rpcbind
[root@controller ~]# systemctl enable nfs
[root@controller ~]# showmount -e 192.168.20.106
Export list for 192.168.20.106:
/mnt/test *
[root@controller ~]# mount -t nfs 192.168.20.106:/mnt/test /var/lib/glance/images/
[root@controller ~]# chown -R glance:glance /var/lib/glance/images/
[root@controller ~]# ll /var/lib/glance/
total 4
drwxr-xr-x 2 glance glance 4096 Apr 25 06:18 images
[root@controller ~]# mount
...............................................................
192.168.20.106:/mnt/test on /var/lib/glance/images type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.20.105,local_lock=none,addr=192.168.20.106)
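A quick end-to-end check: upload any image on the controller and confirm the file shows up in the exported directory on the NFS server (the image name nfs-test is arbitrary):
# on the controller
openstack image create --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img nfs-test
# on the NFS server
ls /mnt/test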
7. [Hands-on] RAID management (1 point)
On the provided OpenStack private cloud platform, create a cloud instance with a flavor that has a 50G ephemeral disk, then work on that disk inside the instance. Split it into 4 partitions of 5G each, and use these 4 partitions to create a RAID 5 array named /dev/md5 with one hot spare (/dev/vdb4 as the hot spare). When finished, submit the instance's username, password and IP address in the answer box.
yum install -y mdadm
umount /mnt
[root@raid ~]# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-104857599, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p):
Using default response p
Partition number (2-4, default 2):
First sector (10487808-104857599, default 10487808):
Using default value 10487808
Last sector, +sectors or +size{K,M,G} (10487808-104857599, default 104857599): +5G
Partition 2 of type Linux and of size 5 GiB is set

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p):
Using default response p
Partition number (3,4, default 3):
First sector (20973568-104857599, default 20973568):
Using default value 20973568
Last sector, +sectors or +size{K,M,G} (20973568-104857599, default 104857599): +5G
Partition 3 of type Linux and of size 5 GiB is set

Command (m for help): n
Partition type:
   p   primary (3 primary, 0 extended, 1 free)
   e   extended
Select (default e): p
Selected partition 4
First sector (31459328-104857599, default 31459328):
Using default value 31459328
Last sector, +sectors or +size{K,M,G} (31459328-104857599, default 104857599): +5G
Partition 4 of type Linux and of size 5 GiB is set

Command (m for help): t
Partition number (1-4, default 4): 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1-4, default 4): 2
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1-4, default 4): 3
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): t
Partition number (1-4, default 4): 4
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/vdb1 /dev/vdb2 /dev/vdb3 /dev/vdb4
[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Fri Apr 22 07:34:32 2022
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Apr 22 07:40:00 2022
             State : clean
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : raid.novalocal:5  (local to host raid.novalocal)
              UUID : a7ee7f6c:33942c54:654cf6c9:880cc731
            Events : 20

    Number   Major   Minor   RaidDevice State
       0     253       17        0      active sync   /dev/vdb1
       1     253       18        1      active sync   /dev/vdb2
       4     253       19        2      active sync   /dev/vdb3

       3     253       20        -      spare   /dev/vdb4
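Not required by the task, but saving the array definition keeps the md5 name stable across reboots:
mdadm --detail --scan >> /etc/mdadm.conf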
8. [Hands-on] Redis master/slave (1 point)
On the provided OpenStack private cloud platform, launch two CentOS 7.5 cloud instances, install and start the redis service on both nodes using the provided HTTP repository, and configure redis to require a password for access, set to 123456. Then configure the two redis nodes as a master/slave pair. When the configuration is done, submit the redis master node's username, password and IP address in the answer box.
Master node:
yum install -y redis
vi /etc/redis.conf
bind 0.0.0.0
protected-mode no
requirepass 123456
systemctl restart redis
systemctl enable redis
Slave node:
yum install -y redis
vi /etc/redis.conf
bind 0.0.0.0
protected-mode no
slaveof 192.168.20.129 6379
masterauth 123456
systemctl restart redis
systemctl enable redis
[root@redis-2 ~]# redis-cli
127.0.0.1:6379> info
.................
# Replication
role:slave
master_host:192.168.20.129
master_port:6379
master_link_status:up
master_last_io_seconds_ago:7
master_sync_in_progress:0
slave_repl_offset:589
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
....................
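The same check can be run on the master; since requirepass is set there, the password has to be supplied with -a:
redis-cli -a 123456 info replication
connected_slaves:1 confirms the slave has registered with the master.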