Tacker Helm Deployment Automation
Overview
Installing Tacker via OpenStack-Helm involves multiple manual steps, including setting up system prerequisites, configuring the environment, and deploying the various components. This process is error-prone and time-consuming, especially for new users.
To simplify this, a Python-based automation script has been introduced that streamlines the OpenStack-Helm based installation of Tacker by handling all prerequisites and deployment steps.
The automation script is available in the Tacker repository:
tacker/tools/tacker_helm_deployment_automation_script
This document provides the steps and guidelines for a Helm based installation of Tacker using the automation script.
Prerequisites
All nodes must be able to access the OpenStack repositories. The repositories are downloaded automatically during script execution:
$ git clone https://opendev.org/openstack/openstack-helm.git
$ git clone https://opendev.org/zuul/zuul-jobs.git
All participating nodes must be reachable from each other over the network.
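Because the script drives Ansible over SSH, the primary node also needs passwordless SSH access to every participating node. A minimal sketch of one way to set this up (the key type, key path, and the <USERNAME>/<NODE_IP> placeholders are illustrative, not mandated by the script):

$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# Repeat for every node in the cluster:
$ ssh-copy-id -i ~/.ssh/id_ed25519.pub <USERNAME>@<NODE_IP>
# Verify: this should print the remote hostname without a password prompt.
$ ssh -i ~/.ssh/id_ed25519 <USERNAME>@<NODE_IP> hostname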
Configuration Changes
Edit k8s_env/inventory.yaml for the k8s deployment. Add the following to run Ansible:
# ansible user for running the playbook
ansible_user: <USERNAME>
ansible_ssh_private_key_file: <PATH_TO_SSH_KEY_FOR_USER>
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
Add the user and group that will be used to run kubectl and helm commands:
# The user and group that will be used to run Kubectl and Helm commands.
kubectl:
  user: <USERNAME>
  group: <USERGROUP>
# The user and group that will be used to run Docker commands.
docker_users:
  - <USERNAME>
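The playbooks run Docker commands as this user, so it should be a member of the docker group on the relevant nodes. If the automation does not already arrange this in your environment, a typical manual way to add and check it (assuming a standard Docker installation) is:

$ sudo usermod -aG docker <USERNAME>
# Log in again, then confirm that "docker" appears in the group list:
$ groups <USERNAME>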
Add the details of the user that will be used for communication between the primary and worker nodes. This user must be configured with passwordless SSH on all nodes.
# The user used to connect to the k8s master node via ssh without a password.
client_ssh_user: <USERNAME>
cluster_ssh_user: <USERNAME>
If bare metal servers are used for load balancing, enable MetalLB:
# MetalLB controller is used for bare-metal loadbalancer.
metallb_setup: true
If deploying a Ceph cluster, enable the loopback device configuration for Ceph to use:
# Loopback devices will be created on all cluster nodes which then can be used
# to deploy a Ceph cluster which requires block devices to be provided.
# Please use loopback devices only for testing purposes. They are not suitable
# for production due to performance reasons.
loopback_setup: true
loopback_device: /dev/loop100
loopback_image: /var/lib/openstack-helm/ceph-loop.img
loopback_image_size: 12G
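The script creates these loopback devices automatically; purely for illustration, a by-hand equivalent of the configuration above would look roughly like this (do not run it when using the script):

$ sudo mkdir -p /var/lib/openstack-helm
$ sudo truncate -s 12G /var/lib/openstack-helm/ceph-loop.img
$ sudo losetup /dev/loop100 /var/lib/openstack-helm/ceph-loop.img
# Confirm the device is attached:
$ losetup -l | grep loop100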
Add the primary node where Kubectl and Helm will be installed:
children:
  # The primary node where Kubectl and Helm will be installed. If it is
  # the only node then it must be a member of the groups k8s_cluster and
  # k8s_control_plane. If there are more nodes then the wireguard tunnel
  # will be established between the primary node and the k8s_control_plane node.
  primary:
    hosts:
      primary:
        ansible_host: <PRIMARY_NODE_IP>
Add the nodes where the Kubernetes cluster will be deployed. If there is only one node, mention it here:
# The nodes where the Kubernetes components will be installed.
k8s_cluster:
  hosts:
    primary:
      ansible_host: <IP_ADDRESS>
    node-2:
      ansible_host: <IP_ADDRESS>
    node-3:
      ansible_host: <IP_ADDRESS>

Add the control plane node in the k8s_control_plane section:
# The control plane node where the Kubernetes control plane components will be installed.
# It must be the only node in the group k8s_control_plane.
k8s_control_plane:
  hosts:
    primary:
      ansible_host: <IP_ADDRESS>

Add the worker nodes in the k8s_nodes section. For a single-node installation, leave this section empty:
# These are Kubernetes worker nodes. There could be zero such nodes.
# In this case the Openstack workloads will be deployed on the control plane node.
k8s_nodes:
  hosts:
    node-2:
      ansible_host: <IP_ADDRESS>
    node-3:
      ansible_host: <IP_ADDRESS>

You can find a complete example of inventory.yaml in [1].
Edit TACKER_NODE in config/config.yaml, using the hostname of the primary node to mark it as the control plane node:

NODES:
  TACKER_NODE: <CONTROL-PLANE_NODE_HOSTNAME>
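The value must match the hostname actually reported by the node's operating system. If in doubt, query it from the primary node before editing the file (the <USERNAME> and <CONTROL-PLANE_NODE_IP> placeholders are illustrative):

$ ssh <USERNAME>@<CONTROL-PLANE_NODE_IP> hostname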
Script Execution
Ensure the user has permission to execute the script:
$ ls -la Tacker_Install.py
-rwxr-xr-x 1 root root 21923 Jul 22 10:00 Tacker_Install.py
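If the execute bit is missing from the listing above, add it with:

$ chmod +x Tacker_Install.py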
Execute the following command to run the script:
$ sudo python3 Tacker_Install.py
Verify the deployment by checking that the Tacker pods are up and running:
$ kubectl get pods -n openstack | grep -i Tacker
tacker-conductor-d7595d756-6k8wp   1/1   Running     0   24h
tacker-db-init-mxwwf               0/1   Completed   0   24h
tacker-db-sync-4xnhx               0/1   Completed   0   24h
tacker-ks-endpoints-4nbqb          0/3   Completed   0   24h
tacker-ks-service-c8s2m            0/1   Completed   0   24h
tacker-ks-user-z2cq7               0/1   Completed   0   24h
tacker-rabbit-init-fxggv           0/1   Completed   0   24h
tacker-server-6f578bcf6c-z7z2c     1/1   Running     0   24h

For more details, refer to the documentation in [2].
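As an optional extra check, the tacker-server logs can be inspected for startup errors; a sketch using standard kubectl syntax against the deployment shown above:

$ kubectl logs -n openstack deployment/tacker-server --tail=20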