How to use Mgmt Driver for deploying Kubernetes Cluster¶
Overview¶
1. Mgmt Driver Introduction¶
Mgmt Driver enables users to configure their VNF before and/or after its VNF lifecycle management operation. Users can customize the logic of Mgmt Driver by implementing their own Mgmt Driver, and these customizations are specified by the "interface" definition in NFV-SOL001 v2.6.1. This user guide aims to deploy a Kubernetes cluster via a Mgmt Driver customized by the user.
If you want to deploy Pods on different physical compute servers, this user guide provides a supported method. Tacker can deploy the worker nodes of a Kubernetes cluster on different physical compute servers, and then deploy Pods on that cluster with anti-affinity rules. For details, refer to the chapter Hardware-aware Affinity for Pods on the Kubernetes cluster.
2. Use Cases¶
In this user guide, two cases are supported with the sample Mgmt Driver and VNF Package, which provide two deployment flavours in the VNFD:
simple: Deploy one master node and worker nodes. In this case, scaling worker nodes and healing worker nodes are supported.
complex: Deploy three (or more) master nodes and worker nodes. In this case, scaling worker nodes and healing both worker nodes and master nodes are supported.
In all the cases above, kubeadm is used to deploy Kubernetes in the sample scripts.
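The exact installation steps live in the sample script (install_k8s_cluster.sh, described later in this guide); as a rough orientation only, a minimal kubeadm flow looks like the sketch below. The CIDR and the token/hash placeholders are illustrative, not values taken from the sample script.
# on the master node: initialize the control plane (example CIDR only)
$ sudo kubeadm init --pod-network-cidr=192.168.3.0/16
# on the master node: print the join command for worker nodes
$ kubeadm token create --print-join-command
# on each worker node: join the cluster (token and hash are placeholders)
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>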
1. Simple: Single Master Node¶
A simple Kubernetes cluster contains one master node as the controller node. You can deploy it with the sample scripts we provide. The following figure shows the architecture of the simple Kubernetes cluster:
+-------------------------------+
| Kubernetes cluster |
| +---------------+ |
| | +---------+ | |
| | | k8s-api | | |
| | +---------+ | |
| | +---------+ | |
| | | etcd | | |
| | +---------+ | |
| | Master VM | |
| +---------------+ |
| |
| |
| +----------+ +----------+ |
| | +------+ | | +------+ | |
| | | Pod | | | | Pod | | |
| | +------+ | | +------+ | |
| | Worker VM| | Worker VM| |
| +----------+ +----------+ |
| |
+-------------------------------+
2. Complex: High Availability (HA) Configuration¶
Kubernetes is known for its resilience and reliability. This is achieved by ensuring that the cluster has no single point of failure. Because of this, to have a highly available (HA) cluster, you need to have multiple master nodes. We provide sample scripts that can be used to deploy an HA Kubernetes cluster. The following figure shows the architecture of the HA Kubernetes cluster:
+-----------------------------------------------------------+
| High availability (HA) Kubernetes cluster |
| +-------------------------------------+ |
| | | |
| | +---------------+ +---------+ | |
| | | VIP - Active | | HAProxy | | |
| | | |----->| (Active)|------+ |
| | |(keep - alived)| +---------+ | | +-----------+ |
| | | | +---------+ | | | | |
| | +---------------+ | k8s-api |<-----+ | | |
| | ^ +---------+ | | | | |
| | | +---------+ | | | | |
| | VRRP | +--->| etcd | | | | | |
| | | | +---------+ | | | | |
| | | | Master01 VM | | | | |
| +------------|------|-----------------+ | | | |
| | | | | | |
| +------------|------|-----------------+ | | | |
| | v | | | |Worker01 VM| |
| | +---------------+ | +---------+ | | | | |
| | | VIP - Standby | | | HAProxy | | | +-----------+ |
| | | | | |(Standby)| | | |
| | |(keep - alived)| | +---------+ | | |
| | | | | +---------+ | | |
| | +---------------+ | | k8s-api |<-----+ |
| | ^ | +---------+ | | |
| | | | +---------+ | | |
| | VRRP | +--->| etcd | | | +-----------+ |
| | | | +---------+ | | | | |
| | | | Master02 VM | | | | |
| +------------|------|-----------------+ | | | |
| | | | | | |
| +------------|------|-----------------+ | | | |
| | v | | | | | |
| | +---------------+ | +---------+ | | | | |
| | | VIP - Standby | | | HAProxy | | | | | |
| | | | | |(Standby)| | | | | |
| | |(keep - alived)| | +---------+ | | | | |
| | | | | +---------+ | | |Worker02 VM| |
| | +---------------+ | | k8s-api |<-----+ | | |
| | | +---------+ | +-----------+ |
| | | +---------+ | |
| | +--->| etcd | | |
| | +---------+ | |
| | Master03 VM | |
| +-------------------------------------+ |
+-----------------------------------------------------------+
The Mgmt Driver supports building the HA master nodes through the following instantiate_end process:
Identify the VMs created by OpenStackInfraDriver (which is used to create OpenStack resources).
Invoke the script to configure HAProxy (a reliable solution offering high availability, load balancing, and proxying for TCP- and HTTP-based applications) so that requests start being distributed to the master nodes.
Install all master nodes first, and then install the worker nodes, by invoking the script that sets up the new Kubernetes cluster (a kubeadm-based sketch of the control-plane join follows this list).
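For reference, joining additional control-plane nodes behind the keepalived/HAProxy virtual IP in a kubeadm-based HA cluster typically looks like the following sketch. The endpoint 10.10.0.80:16443 matches the VIP and HAProxy port that appear later in this guide; the token, hash, and certificate key are placeholders, and the sample script may differ in detail.
# on the first master: initialize with the VIP as the control-plane endpoint
$ sudo kubeadm init --control-plane-endpoint "10.10.0.80:16443" --upload-certs
# on each additional master: join as a control-plane node through the VIP
$ sudo kubeadm join 10.10.0.80:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>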
Preparations¶
If you use the sample scripts to deploy your Kubernetes cluster, you need to ensure that the virtual machines (VMs) created on OpenStack can access the external network. If you installed the tacker service via devstack, the following is one optional way to set up the network configuration.
Note
In the case of a devstack installation, please execute all the following commands under the stack user. You can use the sudo su stack command to change your user.
1. OpenStack Router¶
1. Create an OpenStack Router¶
To ensure that your VMs can access the external network, a router between a public network and an internal network may be required. It can be created via the OpenStack dashboard or CLI commands. The following steps create a router between the public network and the internal net0 network. The CLI commands are shown below:
$ openstack router create router-net0
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2021-02-17T04:49:09Z |
| description | |
| distributed | False |
| external_gateway_info | null |
| flavor_id | None |
| ha | False |
| id | 66fcada3-e101-4136-ad5a-ed4f0f2a7ac1 |
| name | router-net0 |
| project_id | 4e7c90a9c086427fbfc817ed6b372d97 |
| revision_number | 1 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2021-02-17T04:49:09Z |
+-------------------------+--------------------------------------+
$ openstack router set --external-gateway public router-net0
$ openstack router show router-net0
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2021-02-17T04:49:09Z |
| description | |
| distributed | False |
| external_gateway_info | {"network_id": "70459da3-e4ba-44a1-959c-ee1540bf532f", "external_fixed_ips": [{"subnet_id": "0fe68555-8d3a-4fcb-83e2-602744eab106", "ip_address": "192.168.10.4"}, {"subnet_id": "d1bebebe-dde4-486a-8bca-eb9939aec972", |
| | "ip_address": "2001:db8::2f0"}], "enable_snat": true} |
| flavor_id | None |
| ha | False |
| id | 66fcada3-e101-4136-ad5a-ed4f0f2a7ac1 |
| interfaces_info | [] |
| name | router-net0 |
| project_id | 4e7c90a9c086427fbfc817ed6b372d97 |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2021-02-17T04:51:59Z |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
$ openstack router add subnet router-net0 subnet0
$ openstack router show router-net0
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2021-02-17T04:49:09Z |
| description | |
| distributed | False |
| external_gateway_info | {"network_id": "70459da3-e4ba-44a1-959c-ee1540bf532f", "external_fixed_ips": [{"subnet_id": "0fe68555-8d3a-4fcb-83e2-602744eab106", "ip_address": "192.168.10.4"}, {"subnet_id": "d1bebebe-dde4-486a-8bca-eb9939aec972", |
| | "ip_address": "2001:db8::2f0"}], "enable_snat": true} |
| flavor_id | None |
| ha | False |
| id | 66fcada3-e101-4136-ad5a-ed4f0f2a7ac1 |
| interfaces_info | [{"port_id": "0d2abb5d-7b01-4227-b5b4-325d153dfe4a", "ip_address": "10.10.0.1", "subnet_id": "70e60dee-b654-49ee-9692-147de8f07844"}] |
| name | router-net0 |
| project_id | 4e7c90a9c086427fbfc817ed6b372d97 |
| revision_number | 4 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2021-02-17T04:54:35Z |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Through the above commands, you can get the gateway IP between the internal net0 network and the external network. Here it is 192.168.10.4 in external_gateway_info. The CIDR of the net0 network is 10.10.0.0/24.
2. Set a Route Rule on the Controller Node¶
Using the gateway IP obtained in step 1, you should add a route rule on the controller node of OpenStack. The command is shown below:
$ sudo route add -net 10.10.0.0/24 gw 192.168.10.4
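Note that a route added with the route command does not persist across reboots. You can verify that the rule is in place with the following check (the interface name will depend on your environment):
$ ip route | grep 10.10.0.0
10.10.0.0/24 via 192.168.10.4 dev <your-interface>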
3. Set Security Groups¶
In order to access the k8s cluster, you need to set security group rules. You can create a new security group or add the rules to the default security group. The minimum settings, using CLI commands, are shown below:
Get the default security group id of the nfv project:
$ auth='--os-username nfv_user --os-project-name nfv --os-password devstack --os-auth-url http://127.0.0.1/identity --os-project-domain-name Default --os-user-domain-name Default'
$ nfv_project_id=`openstack project list $auth | grep -w '| nfv' | awk '{print $2}'`
$ default_id=`openstack security group list $auth | grep -w 'default' | grep $nfv_project_id | awk '{print $2}'`
Add new security group rules to the default security group using the id above:
#ssh 22 port
$ openstack security group rule create --protocol tcp --dst-port 22 $default_id $auth
#all tcp
$ openstack security group rule create --protocol tcp $default_id $auth
#all icmp
$ openstack security group rule create --protocol icmp $default_id $auth
#all udp
$ openstack security group rule create --protocol udp $default_id $auth
#dns 53 port
$ openstack security group rule create --protocol tcp --dst-port 53 $default_id $auth
#k8s port
$ openstack security group rule create --protocol tcp --dst-port 6443 $default_id $auth
$ openstack security group rule create --protocol tcp --dst-port 16443 $default_id $auth
$ openstack security group rule create --protocol tcp --dst-port 2379:2380 $default_id $auth
$ openstack security group rule create --protocol tcp --dst-port 10250:10255 $default_id $auth
$ openstack security group rule create --protocol tcp --dst-port 30000:32767 $default_id $auth
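After creating the rules, you can list them to confirm the result (reusing the $auth and $default_id variables defined above):
$ openstack security group rule list $default_id $auth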
2. Ubuntu Image¶
In this user guide, an Ubuntu image is used for the master/worker nodes. Some configuration is required to ensure that the Mgmt Driver can access the VMs via SSH.
1. Download the Ubuntu Image¶
You can download the ubuntu image (version 20.04) from the official website. The command is shown below:
$ wget -P /opt/stack/tacker/samples/mgmt_driver/kubernetes https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img
2. Install libguestfs-tools¶
If you deploy the Kubernetes cluster with the sample scripts, you need to ensure that VMs created from your image allow SSH login with a username and password. However, VMs created from the ubuntu image downloaded from the official website do not allow this. Therefore, you need to modify the ubuntu image. The following is one way to modify the image using the guestfish tool; you can also use your own method. How to install the tool is shown below:
$ sudo apt-get install libguestfs-tools
$ guestfish --version
guestfish 1.36.13
3. Set the Configuration of the Image¶
The guestfish tool can modify the image's configuration with its own commands. The commands are shown below:
$ cd /opt/stack/tacker/samples/mgmt_driver/kubernetes
$ sudo guestfish -a ubuntu-20.04-server-cloudimg-amd64.img -i sh "sed -i 's/lock\_passwd\: True/lock\_passwd\: false/g' /etc/cloud/cloud.cfg"
$ sudo guestfish -a ubuntu-20.04-server-cloudimg-amd64.img -i sh "sed -i '/[ ][ ][ ][ ][ ]lock\_passwd\: false/a\ plain\_text\_passwd\: ubuntu' /etc/cloud/cloud.cfg"
$ sudo guestfish -a ubuntu-20.04-server-cloudimg-amd64.img -i sh "sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config"
$ sha512sum ubuntu-20.04-server-cloudimg-amd64.img
fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452
Note
The hash of the ubuntu image will be different after the modification, so you should calculate it by yourself. The value should be written into the sample_kubernetes_df_simple.yaml and sample_kubernetes_df_complex.yaml defined in Create and Upload VNF Package.
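Before recalculating the hash, you can confirm that the edits took effect by reading the modified files back with guestfish, for example:
$ sudo guestfish -a ubuntu-20.04-server-cloudimg-amd64.img -i sh "grep -A 1 'lock_passwd' /etc/cloud/cloud.cfg"
$ sudo guestfish -a ubuntu-20.04-server-cloudimg-amd64.img -i sh "grep 'PasswordAuthentication' /etc/ssh/sshd_config"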
3. Set Tacker Configuration¶
First, copy the sample script stored in tacker/samples/mgmt_driver/kubernetes/kubernetes_mgmt.py into the tacker/tacker/vnfm/mgmt_drivers directory.
$ cp /opt/stack/tacker/samples/mgmt_driver/kubernetes/kubernetes_mgmt.py /opt/stack/tacker/tacker/vnfm/mgmt_drivers/
1. Set setup.cfg¶
You have to register kubernetes_mgmt.py in the operation environment of tacker. The sample script (kubernetes_mgmt.py) is registered in Mgmt Driver under the mgmt-drivers-kubernetes field.
$ vi /opt/stack/tacker/setup.cfg
...
tacker.tacker.mgmt.drivers =
noop = tacker.vnfm.mgmt_drivers.noop:VnfMgmtNoop
vnflcm_noop = tacker.vnfm.mgmt_drivers.vnflcm_noop:VnflcmMgmtNoop
mgmt-drivers-kubernetes = tacker.vnfm.mgmt_drivers.kubernetes_mgmt:KubernetesMgmtDriver
...
2. Set tacker.conf¶
Then, find the vnflcm_mgmt_driver field in tacker.conf. Add the mgmt-drivers-kubernetes defined in step 1 to it, separated by commas.
$ vi /etc/tacker/tacker.conf
...
[tacker]
...
vnflcm_mgmt_driver = vnflcm_noop,mgmt-drivers-kubernetes
...
3. Update tacker.egg-info¶
After the above two steps, the configuration has not taken effect yet. You also need to execute the setup.py script to regenerate the contents of the tacker.egg-info directory.
$ cd /opt/stack/tacker/
$ python setup.py build
running build
running build_py
running egg_info
writing requirements to tacker.egg-info/requires.txt
writing tacker.egg-info/PKG-INFO
writing top-level names to tacker.egg-info/top_level.txt
writing dependency_links to tacker.egg-info/dependency_links.txt
writing entry points to tacker.egg-info/entry_points.txt
writing pbr to tacker.egg-info/pbr.json
[pbr] Processing SOURCES.txt
[pbr] In git context, generating filelist from git
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
writing manifest file 'tacker.egg-info/SOURCES.txt'
Then, after restarting the services of tacker and tacker-conductor, you can deploy the Kubernetes cluster with the Mgmt Driver.
$ sudo systemctl stop devstack@tacker
$ sudo systemctl restart devstack@tacker-conductor
$ sudo systemctl start devstack@tacker
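To confirm that the driver registration took effect, you can check the regenerated entry points file (a simple sanity check):
$ grep mgmt-drivers-kubernetes /opt/stack/tacker/tacker.egg-info/entry_points.txt
mgmt-drivers-kubernetes = tacker.vnfm.mgmt_drivers.kubernetes_mgmt:KubernetesMgmtDriver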
Create and Upload VNF Package¶
A VNF Package is a ZIP file including VNFD, software images for VMs, and other artifact resources such as scripts and config files. The directory structure and file contents are defined in NFV-SOL004 v2.6.1. According to NFV-SOL004 v2.6.1, a VNF Package shall be in ZIP file format and shall comply with the TOSCA-Simple-Profile-YAML-v1.2 specification. In this user guide, a CSAR with a TOSCA-Metadata directory is used to deploy the Kubernetes cluster.
Note
For more detailed definitions of VNF Packages, you can see VNF Package.
1. Directory Structure¶
The sample structure of the VNF Package for both the simple and complex cases is shown below.
Note
You can also find them in the samples/mgmt_driver/kubernetes/kubernetes_vnf_package/ directory of tacker.
The directory structure:
TOSCA-Metadata/TOSCA.meta
Definitions/
Files/images/
Scripts/
BaseHOT/
UserData/
!----TOSCA-Metadata
!---- TOSCA.meta
!----Definitions
!---- etsi_nfv_sol001_common_types.yaml
!---- etsi_nfv_sol001_vnfd_types.yaml
!---- sample_kubernetes_top.vnfd.yaml
!---- sample_kubernetes_types.yaml
!---- sample_kubernetes_df_simple.yaml
!---- sample_kubernetes_df_complex.yaml
!----Files
!---- images
!---- ubuntu-20.04-server-cloudimg-amd64.img
!----Scripts
!---- install_k8s_cluster.sh
!---- kubernetes_mgmt.py
!----BaseHOT
!---- simple
!---- nested
!---- simple_nested_master.yaml
!---- simple_nested_worker.yaml
!---- simple_hot_top.yaml
!---- complex
!---- nested
!---- complex_nested_master.yaml
!---- complex_nested_worker.yaml
!---- complex_hot_top.yaml
!----UserData
!---- __init__.py
!---- k8s_cluster_user_data.py
TOSCA-Metadata/TOSCA.meta¶
According to the TOSCA-Simple-Profile-YAML-v1.2 specification, the TOSCA.meta metadata file is described in the TOSCA-1.0-specification. The files under the Scripts directory are artifact files, so you should add their location and digest into the TOSCA.meta metadata file. A sample file is shown below:
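As an illustration only, a SOL004-style TOSCA.meta for this package would look roughly like the sketch below. The Entry-Definitions path and artifact hashes must match your actual files; the hashes here are the SHA-256 values reported later by openstack vnf package show.
TOSCA-Meta-File-Version: 1.0
CSAR-Version: 1.1
Created-by: dummy_user
Entry-Definitions: Definitions/sample_kubernetes_top.vnfd.yaml

Name: Scripts/install_k8s_cluster.sh
Content-Type: application/sh
Algorithm: SHA-256
Hash: 7f1f4518a3db7b386a473aebf0aa2561eaa94073ac4c95b9d3e7b3fb5bba3017

Name: Scripts/kubernetes_mgmt.py
Content-Type: text/x-python
Algorithm: SHA-256
Hash: 3d8fc578cca5eec0fb625fc3f5eeaa67c34c2a5f89329ed9307f343cfc25cdc4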
Definitions/¶
All VNFD YAML files are located here. In this guide, there are two types of definition files: ETSI NFV type definition files and user-defined type definition files.
ETSI NFV provides two types of definition files [1], which contain all type definitions defined in NFV-SOL001 v2.6.1. You can download them from the official website.
You can extend your own type definitions from NFV-SOL001 v2.6.1. In most cases, you need to extend tosca.nodes.nfv.VNF to define your VNF node types. In this guide, sample_kubernetes_df_simple.yaml is used for the simple case and sample_kubernetes_df_complex.yaml is used for the complex case. The two files can be distinguished by their deployment_flavour. Sample files are shown below:
Files/images/¶
VNF software images are located here. These files are also described in TOSCA.meta. The image used to deploy the Kubernetes cluster is the ubuntu-20.04-server-cloudimg-amd64.img downloaded in Download Ubuntu Image.
Scripts/¶
There are two script files for deploying the Kubernetes cluster. install_k8s_cluster.sh is used to install the k8s cluster on the VMs created by tacker. kubernetes_mgmt.py is a Mgmt Driver file to be executed before or after instantiate, terminate, scale, and heal. You can obtain these scripts in the directory at the same level as this guide.
BaseHOT/¶
A Base HOT file is a native cloud orchestration template, HOT in this context, which is commonly used for LCM operations in different VNFs. It is the responsibility of the user to prepare this file, and it is required to be consistent with the VNFD placed under the Definitions/ directory.
In this guide, you must use user data to deploy the Kubernetes cluster, so the BaseHOT directory must be included.
You must place a directory corresponding to each deployment_flavour stored under Definitions/ into the BaseHOT/ directory, and store the Base HOT files in it.
In this guide, there are two cases (simple and complex) in the VNF Package, so there are two directories under the BaseHOT/ directory. Sample files are shown below:
simple¶
complex¶
UserData/¶
LCM operation user data is a script that returns key/value data used as Heat input parameters for the Base HOT. A sample file is shown below:
2. Create a VNF Package¶
Execute the following CLI command to create a VNF Package.
$ openstack vnf package create
Result:
$ openstack vnf package create
+-------------------+-------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------------------------------------+
| ID | 03a8eb3e-a981-434e-a548-82d9b90161d7 |
| Links | { |
| | "self": { |
| | "href": "/vnfpkgm/v1/vnf_packages/03a8eb3e-a981-434e-a548-82d9b90161d7" |
| | }, |
| | "packageContent": { |
| | "href": "/vnfpkgm/v1/vnf_packages/03a8eb3e-a981-434e-a548-82d9b90161d7/package_content" |
| | } |
| | } |
| Onboarding State | CREATED |
| Operational State | DISABLED |
| Usage State | NOT_IN_USE |
| User Defined Data | {} |
+-------------------+-------------------------------------------------------------------------------------------------+
3. Upload the VNF Package¶
Before instantiating a VNF, a zip file of the VNF Package must be created and uploaded.
Execute the following command to create a zip file.
$ zip sample_kubernetes_csar.zip -r Definitions/ Files/ TOSCA-Metadata/ BaseHOT/ UserData/ Scripts/
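Optionally, you can verify the archive layout before uploading it:
$ unzip -l sample_kubernetes_csar.zip | head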
Execute the following CLI command to upload the VNF Package.
$ openstack vnf package upload --path ./sample_kubernetes_csar.zip 03a8eb3e-a981-434e-a548-82d9b90161d7
Result:
Upload request for VNF package 03a8eb3e-a981-434e-a548-82d9b90161d7 has been accepted.
After that, execute the following CLI command and confirm that the VNF Package was uploaded successfully.
Confirm that the 'Onboarding State' is 'ONBOARDED'.
Confirm that the 'Operational State' is 'ENABLED'.
Confirm that the 'Usage State' is 'NOT_IN_USE'.
Take a note of the 'VNFD ID' because you will need it in the next section, 'Deploy Kubernetes Cluster'.
$ openstack vnf package show 03a8eb3e-a981-434e-a548-82d9b90161d7
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
| Additional Artifacts | [ |
| | { |
| | "artifactPath": "Scripts/install_k8s_cluster.sh", |
| | "checksum": { |
| | "algorithm": "SHA-256", |
| | "hash": "7f1f4518a3db7b386a473aebf0aa2561eaa94073ac4c95b9d3e7b3fb5bba3017" |
| | }, |
| | "metadata": {} |
| | }, |
| | { |
| | "artifactPath": "Scripts/kubernetes_mgmt.py", |
| | "checksum": { |
| | "algorithm": "SHA-256", |
| | "hash": "3d8fc578cca5eec0fb625fc3f5eeaa67c34c2a5f89329ed9307f343cfc25cdc4" |
| | }, |
| | "metadata": {} |
| | } |
| | ] |
| Checksum | { |
| | "hash": "d853ca27df5ad5270516adc8ec3cef6ebf982f09f2291eb150c677691d2c793e454e0feb61f211a2b4b8b6df899ab2f2c808684ae1f9100081e5375f8bfcec3d", |
| | "algorithm": "sha512" |
| | } |
| ID | 03a8eb3e-a981-434e-a548-82d9b90161d7 |
| Links | { |
| | "self": { |
| | "href": "/vnfpkgm/v1/vnf_packages/03a8eb3e-a981-434e-a548-82d9b90161d7" |
| | }, |
| | "packageContent": { |
| | "href": "/vnfpkgm/v1/vnf_packages/03a8eb3e-a981-434e-a548-82d9b90161d7/package_content" |
| | } |
| | } |
| Onboarding State | ONBOARDED |
| Operational State | ENABLED |
| Software Images | [ |
| | { |
| | "size": 2000000000, |
| | "version": "20.04", |
| | "name": "Image for masterNode kubernetes", |
| | "createdAt": "2021-02-18 08:49:39+00:00", |
| | "id": "masterNode", |
| | "containerFormat": "bare", |
| | "minDisk": 0, |
| | "imagePath": "", |
| | "minRam": 0, |
| | "diskFormat": "qcow2", |
| | "provider": "", |
| | "checksum": { |
| | "algorithm": "sha-512", |
| | "hash": "fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452" |
| | }, |
| | "userMetadata": {} |
| | }, |
| | { |
| | "size": 2000000000, |
| | "version": "20.04", |
| | "name": "Image for workerNode kubernetes", |
| | "createdAt": "2021-02-18 08:49:40+00:00", |
| | "id": "workerNode", |
| | "containerFormat": "bare", |
| | "minDisk": 0, |
| | "imagePath": "", |
| | "minRam": 0, |
| | "diskFormat": "qcow2", |
| | "provider": "", |
| | "checksum": { |
| | "algorithm": "sha-512", |
| | "hash": "fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452" |
| | }, |
| | "userMetadata": {} |
| | }, |
| | { |
| | "size": 2000000000, |
| | "version": "20.04", |
| | "name": "Image for workerNode kubernetes", |
| | "createdAt": "2021-02-18 08:49:39+00:00", |
| | "id": "workerNode", |
| | "containerFormat": "bare", |
| | "minDisk": 0, |
| | "imagePath": "", |
| | "minRam": 0, |
| | "diskFormat": "qcow2", |
| | "provider": "", |
| | "checksum": { |
| | "algorithm": "sha-512", |
| | "hash": "fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452" |
| | }, |
| | "userMetadata": {} |
| | }, |
| | { |
| | "size": 2000000000, |
| | "version": "20.04", |
| | "name": "Image for masterNode kubernetes", |
| | "createdAt": "2021-02-18 08:49:39+00:00", |
| | "id": "masterNode", |
| | "containerFormat": "bare", |
| | "minDisk": 0, |
| | "imagePath": "", |
| | "minRam": 0, |
| | "diskFormat": "qcow2", |
| | "provider": "", |
| | "checksum": { |
| | "algorithm": "sha-512", |
| | "hash": "fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452" |
| | }, |
| | "userMetadata": {} |
| | } |
| | ] |
| Usage State | NOT_IN_USE |
| User Defined Data | {} |
| VNF Product Name | Sample VNF |
| VNF Provider | Company |
| VNF Software Version | 1.0 |
| VNFD ID | b1db0ce7-ebca-1fb7-95ed-4840d70a1163 |
| VNFD Version | 1.0 |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------+
Deploy Kubernetes Cluster¶
1. Single Master Node¶
A Kubernetes cluster with a single master node can be installed and set up in the 'instantiate_end' operation, which allows you to execute any script after its instantiation, with Mgmt Driver support. The instantiated Kubernetes cluster only supports one master node and multiple worker nodes. The instantiated Kubernetes cluster will be automatically registered as a VIM. You can then use that VIM to deploy CNF.
If you want to deploy a single-master-node Kubernetes cluster, you can use the VNF Package with the 'simple' flavour created in Create and Upload VNF Package. Above all, you must create the parameter file used for correct instantiation. The following shows how to create the parameter file and the OpenStack CLI commands.
1. Create the Parameter File¶
Create a simple_kubernetes_param_file.json file in the following format. It defines the parameters of the instantiate request; these parameters will be set in the body of the instantiate request.
Required parameters:
flavourId
additionalParams
Note
[This is a UserData-specific part] additionalParams is a parameter that can be described as KeyValuePairs. By setting the following two parameters in it, instantiation using LCM operation user data becomes possible. For file_name.py and class_name, set the file name and class name described in Prerequisites.
lcm-operation-user-data: ./UserData/file_name.py
lcm-operation-user-data-class: class_name
Optional parameters:
instantiationLevelId
extVirtualLinks
extManagedVirtualLinks
vimConnectionInfo
In this guide, the VMs need to have extCPs so that they can be accessed by Tacker via SSH. Therefore, the extVirtualLinks parameter is required. You can skip vimConnectionInfo only when you have a default VIM as described in cli-legacy-vim.
Explanation of the parameters for deploying a Kubernetes cluster:
To deploy a Kubernetes cluster, you must set the k8s_cluster_installation_param key in additionalParams. Its KeyValuePairs are shown in the tables below:
## List of additionalParams.k8s_cluster_installation_param (specified by user)
+------------------+-----------+---------------------------------------------+-------------------+
| parameter | data type | description | required/optional |
+------------------+-----------+---------------------------------------------+-------------------+
| script_path | String | The path where the Kubernetes installation | required |
| | | script stored in the VNF Package | |
+------------------+-----------+---------------------------------------------+-------------------+
| vim_name | String | The vim name of deployed Kubernetes cluster | optional |
| | | registered by tacker | |
+------------------+-----------+---------------------------------------------+-------------------+
| master_node | dict | Information for the VM of the master node | required |
| | | group | |
+------------------+-----------+---------------------------------------------+-------------------+
| worker_node | dict | Information for the VM of the worker node | required |
| | | group | |
+------------------+-----------+---------------------------------------------+-------------------+
| proxy | dict | Information for proxy setting on VM | optional |
+------------------+-----------+---------------------------------------------+-------------------+
## master_node dict
+------------------+-----------+---------------------------------------------+-------------------+
| parameter | data type | description | required/optional |
+------------------+-----------+---------------------------------------------+-------------------+
| aspect_id | String | The resource name of the master node group, | optional |
| | | and is same as the `aspect` in `vnfd`. If | |
| | | you use user data, it must be set | |
+------------------+-----------+---------------------------------------------+-------------------+
| ssh_cp_name | String | Resource name of port corresponding to the | required |
| | | master node's ssh ip | |
+------------------+-----------+---------------------------------------------+-------------------+
| nic_cp_name | String | Resource name of port corresponding to the | required |
| | | master node's nic ip (which used for | |
| | | deploying Kubernetes cluster) | |
+------------------+-----------+---------------------------------------------+-------------------+
| username | String | Username for VM access | required |
+------------------+-----------+---------------------------------------------+-------------------+
| password | String | Password for VM access | required |
+------------------+-----------+---------------------------------------------+-------------------+
| pod_cidr | String | CIDR for pod | optional |
+------------------+-----------+---------------------------------------------+-------------------+
| cluster_cidr | String | CIDR for service | optional |
+------------------+-----------+---------------------------------------------+-------------------+
| cluster_cp_name | String | Resource name of the Port corresponding to | required |
| | | cluster ip | |
+------------------+-----------+---------------------------------------------+-------------------+
| cluster_fip_name | String | Resource name of the Port corresponding to | optional |
| | | cluster ip used for registering vim. If you | |
| | | use floating ip as ssh ip, it must be set | |
+------------------+-----------+---------------------------------------------+-------------------+
## worker_node dict
+------------------+-----------+---------------------------------------------+-------------------+
| parameter | data type | description | required/optional |
+------------------+-----------+---------------------------------------------+-------------------+
| aspect_id | String | The resource name of the worker node group, | optional |
| | | and is same as the `aspect` in `vnfd`. If | |
| | | you use user data, it must be set | |
+------------------+-----------+---------------------------------------------+-------------------+
| ssh_cp_name | String | Resource name of port corresponding to the | required |
| | | worker node's ssh ip | |
+------------------+-----------+---------------------------------------------+-------------------+
| nic_cp_name | String | Resource name of port corresponding to the | required |
| | | worker node's nic ip (which used for | |
| | | deploying Kubernetes cluster) | |
+------------------+-----------+---------------------------------------------+-------------------+
| username | String | Username for VM access | required |
+------------------+-----------+---------------------------------------------+-------------------+
| password | String | Password for VM access | required |
+------------------+-----------+---------------------------------------------+-------------------+
## proxy dict
+------------------+-----------+---------------------------------------------+-------------------+
| parameter | data type | description | required/optional |
+------------------+-----------+---------------------------------------------+-------------------+
| http_proxy | string | Http proxy server address | optional |
+------------------+-----------+---------------------------------------------+-------------------+
| https_proxy | string | Https proxy server address | optional |
+------------------+-----------+---------------------------------------------+-------------------+
| no_proxy | string | User-customized, proxy server-free IP | optional |
| | | address or segment | |
+------------------+-----------+---------------------------------------------+-------------------+
| k8s_node_cidr | string | CIDR for Kubernetes node, all its ip will be| optional |
| | | set into no_proxy | |
+------------------+-----------+---------------------------------------------+-------------------+
simple_kubernetes_param_file.json
{
"flavourId": "simple",
"vimConnectionInfo": [{
"id": "3cc2c4ff-525c-48b4-94c9-29247223322f",
"vimId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", #Set the uuid of the VIM to use
"vimType": "openstack"
}],
"additionalParams": {
"k8s_cluster_installation_param": {
"script_path": "Scripts/install_k8s_cluster.sh",
"vim_name": "kubernetes_vim",
"master_node": {
"aspect_id": "master_instance",
"ssh_cp_name": "masterNode_CP1",
"nic_cp_name": "masterNode_CP1",
"username": "ubuntu",
"password": "ubuntu",
"pod_cidr": "192.168.3.0/16",
"cluster_cidr": "10.199.187.0/24",
"cluster_cp_name": "masterNode_CP1"
},
"worker_node": {
"aspect_id": "worker_instance",
"ssh_cp_name": "workerNode_CP2",
"nic_cp_name": "workerNode_CP2",
"username": "ubuntu",
"password": "ubuntu"
},
"proxy": {
"http_proxy": "http://user1:password1@host1:port1",
"https_proxy": "https://user2:password2@host2:port2",
"no_proxy": "192.168.246.0/24,10.0.0.1",
"k8s_node_cidr": "10.10.0.0/24"
}
},
"lcm-operation-user-data": "./UserData/k8s_cluster_user_data.py",
"lcm-operation-user-data-class": "KubernetesClusterUserData"
},
"extVirtualLinks": [{
"id": "net0_master",
"resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", #Set the uuid of the network to use
"extCps": [{
"cpdId": "masterNode_CP1",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}]
}, {
"id": "net0_worker",
"resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", #Set the uuid of the network to use
"extCps": [{
"cpdId": "workerNode_CP2",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}]
}]
}
2. Execute the Instantiation Operation¶
Execute the following CLI commands to instantiate the VNF instance.
$ openstack vnflcm create b1db0ce7-ebca-1fb7-95ed-4840d70a1163
+--------------------------+---------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------+---------------------------------------------------------------------------------------------+
| ID | 3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72 |
| Instantiation State | NOT_INSTANTIATED |
| Links | { |
| | "self": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72" |
| | }, |
| | "instantiate": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72/instantiate" |
| | } |
| | } |
| VNF Instance Description | None |
| VNF Instance Name | vnf-3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72 |
| VNF Package ID | 03a8eb3e-a981-434e-a548-82d9b90161d7 |
| VNF Product Name | Sample VNF |
| VNF Provider | Company |
| VNF Software Version | 1.0 |
| VNFD ID | b1db0ce7-ebca-1fb7-95ed-4840d70a1163 |
| VNFD Version | 1.0 |
+--------------------------+---------------------------------------------------------------------------------------------+
$ openstack vnflcm instantiate 3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72 ./simple_kubernetes_param_file.json
Instantiate request for VNF Instance 3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72 has been accepted.
$ openstack vnflcm show 3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72
+--------------------------+-------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------+-------------------------------------------------------------------------------------------------+
| ID | 3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72 |
| Instantiated Vnf Info | { |
| | "flavourId": "simple", |
| | "vnfState": "STARTED", |
| | "scaleStatus": [ |
| | { |
| | "aspectId": "master_instance", |
| | "scaleLevel": 0 |
| | }, |
| | { |
| | "aspectId": "worker_instance", |
| | "scaleLevel": 0 |
| | } |
| | ], |
| | "extCpInfo": [ |
| | { |
| | "id": "d6ed7fd0-c26e-4e1e-81ab-71dc8c6d8293", |
| | "cpdId": "masterNode_CP1", |
| | "extLinkPortId": null, |
| | "associatedVnfcCpId": "1f830544-57ef-4f93-bdb5-b59e465f58d8", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "ba0f7de5-32b3-48dd-944d-341990ede0cb", |
| | "cpdId": "workerNode_CP2", |
| | "extLinkPortId": null, |
| | "associatedVnfcCpId": "9244012d-ad53-4685-912b-f6413ae38493", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ], |
| | "extVirtualLinkInfo": [ |
| | { |
| | "id": "b396126a-6a95-4a24-94ae-67b58f5bd9c2", |
| | "resourceHandle": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": null |
| | } |
| | }, |
| | { |
| | "id": "10dfbb44-a8ff-435b-98f8-70539e71af8c", |
| | "resourceHandle": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": null |
| | } |
| | } |
| | ], |
| | "vnfcResourceInfo": [ |
| | { |
| | "id": "1f830544-57ef-4f93-bdb5-b59e465f58d8", |
| | "vduId": "masterNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "a0eccaee-ff7b-4c70-8c11-ba79c8d4deb6", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "9fe655ab-1d35-4d22-a6f3-9a07fa797884", |
| | "cpdId": "masterNode_CP1", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "e66a44a4-965f-49dd-b168-ff4cc2485c34", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "9244012d-ad53-4685-912b-f6413ae38493", |
| | "vduId": "workerNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "5b3ff765-7a9f-447a-a06d-444e963b74c9", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "59176610-fc1c-4abe-9648-87a9b8b79640", |
| | "cpdId": "workerNode_CP2", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "977b8775-350d-4ef0-95e5-552c4c4099f3", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "974a4b98-5d07-44d4-9e13-a8ed21805111", |
| | "vduId": "workerNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "63402e5a-67c9-4f5c-b03f-b21f4a88507f", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "523b1328-9704-4ac1-986f-99c9b46ee1c4", |
| | "cpdId": "workerNode_CP2", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "eba708c4-14de-4d96-bc82-ed0abd95780b", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | } |
| | ], |
| | "vnfVirtualLinkResourceInfo": [ |
| | { |
| | "id": "96d15ae5-a1d8-4867-aaee-a4372de8bc0e", |
| | "vnfVirtualLinkDescId": "b396126a-6a95-4a24-94ae-67b58f5bd9c2", |
| | "networkResource": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": "OS::Neutron::Net" |
| | }, |
| | "vnfLinkPorts": [ |
| | { |
| | "id": "e66a44a4-965f-49dd-b168-ff4cc2485c34", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "b5ed388b-de4e-4de8-a24a-f1b70c5cce94", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "9fe655ab-1d35-4d22-a6f3-9a07fa797884" |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "c67b6f41-fd7a-45b2-b69a-8de9623dc16b", |
| | "vnfVirtualLinkDescId": "10dfbb44-a8ff-435b-98f8-70539e71af8c", |
| | "networkResource": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": "OS::Neutron::Net" |
| | }, |
| | "vnfLinkPorts": [ |
| | { |
| | "id": "977b8775-350d-4ef0-95e5-552c4c4099f3", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "0002bba0-608b-4e2c-bd4d-23f1717f017c", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "59176610-fc1c-4abe-9648-87a9b8b79640" |
| | }, |
| | { |
| | "id": "eba708c4-14de-4d96-bc82-ed0abd95780b", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "facc9eae-6f2d-4cfb-89c2-27841eea771c", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "523b1328-9704-4ac1-986f-99c9b46ee1c4" |
| | } |
| | ] |
| | } |
| | ], |
| | "vnfcInfo": [ |
| | { |
| | "id": "1405984c-b174-4f33-8cfa-851d54ab95ce", |
| | "vduId": "masterNode", |
| | "vnfcState": "STARTED" |
| | }, |
| | { |
| | "id": "08b3f00e-a133-4262-8edb-03e2484ce870", |
| | "vduId": "workerNode", |
| | "vnfcState": "STARTED" |
| | }, |
| | { |
| | "id": "027502d6-d072-4819-a502-cb7cc688ec16", |
| | "vduId": "workerNode", |
| | "vnfcState": "STARTED" |
| | } |
| | ], |
| | "additionalParams": { |
| | "lcm-operation-user-data": "./UserData/k8s_cluster_user_data.py", |
| | "lcm-operation-user-data-class": "KubernetesClusterUserData", |
| | "k8sClusterInstallationParam": { |
| | "vimName": "kubernetes_vim", |
| | "proxy": { |
| | "noProxy": "192.168.246.0/24,10.0.0.1", |
| | "httpProxy": "http://user1:password1@host1:port1", |
| | "httpsProxy": "https://user2:password2@host2:port2", |
| | "k8sNodeCidr": "10.10.0.0/24" |
| | }, |
| | "masterNode": { |
| | "password": "ubuntu", |
| | "podCidr": "192.168.3.0/16", |
| | "username": "ubuntu", |
| | "aspectId": "master_instance", |
| | "nicCpName": "masterNode_CP1", |
| | "sshCpName": "masterNode_CP1", |
| | "clusterCidr": "10.199.187.0/24", |
| | "clusterCpName": "masterNode_CP1" |
| | }, |
| | "scriptPath": "Scripts/install_k8s_cluster.sh", |
| | "workerNode": { |
| | "password": "ubuntu", |
| | "username": "ubuntu", |
| | "aspectId": "worker_instance", |
| | "nicCpName": "workerNode_CP2", |
| | "sshCpName": "workerNode_CP2" |
| | } |
| | } |
| | } |
| | } |
| Instantiation State | INSTANTIATED |
| Links | { |
| | "self": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72" |
| | }, |
| | "terminate": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72/terminate" |
| | }, |
| | "scale": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72/scale" |
| | }, |
| | "heal": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72/heal" |
| | }, |
| | "changeExtConn": { |
| | "href": "/vnflcm/v1/vnf_instances/3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72/change_ext_conn" |
| | } |
| | } |
| VIM Connection Info | [ |
| | { |
| | "id": "9ab53adf-ca70-47b2-8877-1858cfb53618", |
| | "vimId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "vimType": "openstack", |
| | "interfaceInfo": {}, |
| | "accessInfo": {} |
| | }, |
| | { |
| | "id": "ef2c6b0c-c930-4d6c-9fe4-7c143e80ad94", |
| | "vimId": "2aeef9af-6a5b-4122-8510-21dbc71bc7cb", |
| | "vimType": "kubernetes", |
| | "interfaceInfo": null, |
| | "accessInfo": { |
| | "authUrl": "https://10.10.0.35:6443" |
| | } |
| | } |
| | ] |
| VNF Instance Description | None |
| VNF Instance Name | vnf-3f32428d-e8ce-4d6a-9be9-4c7f3a02ac72 |
| VNF Package ID | 03a8eb3e-a981-434e-a548-82d9b90161d7 |
| VNF Product Name | Sample VNF |
| VNF Provider | Company |
| VNF Software Version | 1.0 |
| VNFD ID | b1db0ce7-ebca-1fb7-95ed-4840d70a1163 |
| VNFD Version | 1.0 |
+--------------------------+-------------------------------------------------------------------------------------------------+
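As shown in the 'VIM Connection Info' field above, the deployed Kubernetes cluster has been registered automatically as a Kubernetes VIM (named kubernetes_vim, with authUrl https://10.10.0.35:6443). You can confirm the registration with the Tacker CLI:
$ openstack vim list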
2. Multiple Master Nodes¶
When you install a Kubernetes cluster in an HA configuration, at least three master nodes are configured in the Kubernetes cluster. On each master node, a load balancer (HAProxy) and etcd will be built. The above is performed by the 'instantiate_end' operation with the Mgmt Driver.
If you want to deploy a multi-master-node Kubernetes cluster, you can use the VNF Package with the complex flavour created in Create and Upload VNF Package. The following shows how to create the parameter file and the OpenStack CLI commands.
1. Create the Parameter File¶
The parameters in the parameter file are the same as those in 1. Single Master Node. It should be noted that since you need to create a group of (at least three) master nodes, the aspect_id must be set. At the same time, an HA cluster requires a representative IP for access, so cluster_cp_name must be set to the port name of the virtual IP created in BaseHOT. In this guide, the cluster_cp_name is vip_CP. The complex_kubernetes_param_file.json is shown below.
complex_kubernetes_param_file.json
{
"flavourId": "complex",
"vimConnectionInfo": [{
"id": "3cc2c4ff-525c-48b4-94c9-29247223322f",
"vimId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", #Set the uuid of the VIM to use
"vimType": "openstack"
}],
"additionalParams": {
"k8s_cluster_installation_param": {
"script_path": "Scripts/install_k8s_cluster.sh",
"vim_name": "kubernetes_vim_complex",
"master_node": {
"aspect_id": "master_instance",
"ssh_cp_name": "masterNode_CP1",
"nic_cp_name": "masterNode_CP1",
"username": "ubuntu",
"password": "ubuntu",
"pod_cidr": "192.168.3.0/16",
"cluster_cidr": "10.199.187.0/24",
"cluster_cp_name": "vip_CP"
},
"worker_node": {
"aspect_id": "worker_instance",
"ssh_cp_name": "workerNode_CP2",
"nic_cp_name": "workerNode_CP2",
"username": "ubuntu",
"password": "ubuntu"
},
"proxy": {
"http_proxy": "http://user1:password1@host1:port1",
"https_proxy": "https://user2:password2@host2:port2",
"no_proxy": "192.168.246.0/24,10.0.0.1",
"k8s_node_cidr": "10.10.0.0/24"
}
},
"lcm-operation-user-data": "./UserData/k8s_cluster_user_data.py",
"lcm-operation-user-data-class": "KubernetesClusterUserData"
},
"extVirtualLinks": [{
"id": "net0_master",
"resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", #Set the uuid of the network to use
"extCps": [{
"cpdId": "masterNode_CP1",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}]
}, {
"id": "net0_worker",
"resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", #Set the uuid of the network to use
"extCps": [{
"cpdId": "workerNode_CP2",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}]
}]
}
2. Execute the Instantiation Operation¶
The VNF Package has already been uploaded in Create and Upload VNF Package, so you just need to execute the following CLI commands on the OpenStack controller node.
$ openstack vnflcm create b1db0ce7-ebca-1fb7-95ed-4840d70a1163
+--------------------------+---------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------+---------------------------------------------------------------------------------------------+
| ID | c5215213-af4b-4080-95ab-377920474e1a |
| Instantiation State | NOT_INSTANTIATED |
| Links | { |
| | "self": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a" |
| | }, |
| | "instantiate": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a/instantiate" |
| | } |
| | } |
| VNF Instance Description | None |
| VNF Instance Name | vnf-c5215213-af4b-4080-95ab-377920474e1a |
| VNF Package ID | 03a8eb3e-a981-434e-a548-82d9b90161d7 |
| VNF Product Name | Sample VNF |
| VNF Provider | Company |
| VNF Software Version | 1.0 |
| VNFD ID | b1db0ce7-ebca-1fb7-95ed-4840d70a1163 |
| VNFD Version | 1.0 |
+--------------------------+---------------------------------------------------------------------------------------------+
$ openstack vnflcm instantiate c5215213-af4b-4080-95ab-377920474e1a ./complex_kubernetes_param_file.json
Instantiate request for VNF Instance c5215213-af4b-4080-95ab-377920474e1a has been accepted.
$ openstack vnflcm show c5215213-af4b-4080-95ab-377920474e1a
+--------------------------+-------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------+-------------------------------------------------------------------------------------------------+
| ID | c5215213-af4b-4080-95ab-377920474e1a |
| Instantiated Vnf Info | { |
| | "flavourId": "complex", |
| | "vnfState": "STARTED", |
| | "scaleStatus": [ |
| | { |
| | "aspectId": "master_instance", |
| | "scaleLevel": 0 |
| | }, |
| | { |
| | "aspectId": "worker_instance", |
| | "scaleLevel": 0 |
| | } |
| | ], |
| | "extCpInfo": [ |
| | { |
| | "id": "a36f667a-f0f8-4ac8-a120-b19569d7bd72", |
| | "cpdId": "masterNode_CP1", |
| | "extLinkPortId": null, |
| | "associatedVnfcCpId": "bbce9656-f051-434f-8c4a-660ac23e91f6", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "67f38bd4-ae0b-4257-82eb-09a3c2dfd470", |
| | "cpdId": "workerNode_CP2", |
| | "extLinkPortId": null, |
| | "associatedVnfcCpId": "b4af0652-74b8-47bd-bcf6-94769bdbf756", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ], |
| | "extVirtualLinkInfo": [ |
| | { |
| | "id": "24e3e9ae-0df4-49d6-9ee4-e21dfe359baf", |
| | "resourceHandle": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": null |
| | } |
| | }, |
| | { |
| | "id": "2283b96d-64f8-4403-9b21-643aa1058e86", |
| | "resourceHandle": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": null |
| | } |
| | } |
| | ], |
| | "vnfcResourceInfo": [ |
| | { |
| | "id": "bbce9656-f051-434f-8c4a-660ac23e91f6", |
| | "vduId": "masterNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "a0eccaee-ff7b-4c70-8c11-ba79c8d4deb6", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "9fe655ab-1d35-4d22-a6f3-9a07fa797884", |
| | "cpdId": "masterNode_CP1", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "e66a44a4-965f-49dd-b168-ff4cc2485c34", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "8bee8301-eb14-4c5c-bab8-a1b244d4d954", |
| | "vduId": "masterNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "4a40d65c-3440-4c44-858a-72a66324a11a", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "65c9f35a-08a2-4875-bd85-af419f26b19d", |
| | "cpdId": "masterNode_CP1", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "26fa4b33-ad07-4982-ad97-18b66abba541", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "28ac0cb9-3bc1-4bc2-8be2-cf60f51b7b7a", |
| | "vduId": "masterNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "12708197-9724-41b8-b48c-9eb6862331dc", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "d51f3b54-a9ed-46be-8ffe-64b5d07d1a7b", |
| | "cpdId": "masterNode_CP1", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "b71dc885-8e3e-4ccd-ac6f-feff332fd395", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "b4af0652-74b8-47bd-bcf6-94769bdbf756", |
| | "vduId": "workerNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "5b3ff765-7a9f-447a-a06d-444e963b74c9", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "59176610-fc1c-4abe-9648-87a9b8b79640", |
| | "cpdId": "workerNode_CP2", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "977b8775-350d-4ef0-95e5-552c4c4099f3", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "974a4b98-5d07-44d4-9e13-a8ed21805111", |
| | "vduId": "workerNode", |
| | "computeResource": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "63402e5a-67c9-4f5c-b03f-b21f4a88507f", |
| | "vimLevelResourceType": "OS::Nova::Server" |
| | }, |
| | "storageResourceIds": [], |
| | "vnfcCpInfo": [ |
| | { |
| | "id": "523b1328-9704-4ac1-986f-99c9b46ee1c4", |
| | "cpdId": "workerNode_CP2", |
| | "vnfExtCpId": null, |
| | "vnfLinkPortId": "eba708c4-14de-4d96-bc82-ed0abd95780b", |
| | "cpProtocolInfo": [ |
| | { |
| | "layerProtocol": "IP_OVER_ETHERNET" |
| | } |
| | ] |
| | } |
| | ] |
| | } |
| | ], |
| | "vnfVirtualLinkResourceInfo": [ |
| | { |
| | "id": "96d15ae5-a1d8-4867-aaee-a4372de8bc0e", |
| | "vnfVirtualLinkDescId": "24e3e9ae-0df4-49d6-9ee4-e21dfe359baf", |
| | "networkResource": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": "OS::Neutron::Net" |
| | }, |
| | "vnfLinkPorts": [ |
| | { |
| | "id": "e66a44a4-965f-49dd-b168-ff4cc2485c34", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "b5ed388b-de4e-4de8-a24a-f1b70c5cce94", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "9fe655ab-1d35-4d22-a6f3-9a07fa797884" |
| | }, |
| | { |
| | "id": "26fa4b33-ad07-4982-ad97-18b66abba541", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "dfab524f-dec9-4247-973c-a0e22475f950", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "65c9f35a-08a2-4875-bd85-af419f26b19d" |
| | }, |
| | { |
| | "id": "b71dc885-8e3e-4ccd-ac6f-feff332fd395", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "45733936-0a9e-4eaa-a71f-3a77cb034581", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "d51f3b54-a9ed-46be-8ffe-64b5d07d1a7b" |
| | } |
| | ] |
| | }, |
| | { |
| | "id": "c67b6f41-fd7a-45b2-b69a-8de9623dc16b", |
| | "vnfVirtualLinkDescId": "2283b96d-64f8-4403-9b21-643aa1058e86", |
| | "networkResource": { |
| | "vimConnectionId": null, |
| | "resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", |
| | "vimLevelResourceType": "OS::Neutron::Net" |
| | }, |
| | "vnfLinkPorts": [ |
| | { |
| | "id": "977b8775-350d-4ef0-95e5-552c4c4099f3", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "0002bba0-608b-4e2c-bd4d-23f1717f017c", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "59176610-fc1c-4abe-9648-87a9b8b79640" |
| | }, |
| | { |
| | "id": "eba708c4-14de-4d96-bc82-ed0abd95780b", |
| | "resourceHandle": { |
| | "vimConnectionId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "resourceId": "facc9eae-6f2d-4cfb-89c2-27841eea771c", |
| | "vimLevelResourceType": "OS::Neutron::Port" |
| | }, |
| | "cpInstanceId": "523b1328-9704-4ac1-986f-99c9b46ee1c4" |
| | } |
| | ] |
| | } |
| | ], |
| | "vnfcInfo": [ |
| | { |
| | "id": "3ca607b9-f270-4077-8af8-d5d244f8893b", |
| | "vduId": "masterNode", |
| | "vnfcState": "STARTED" |
| | }, |
| | { |
| | "id": "c2b19ef1-f748-4175-9f3a-6792a9ee7a62", |
| | "vduId": "masterNode", |
| | "vnfcState": "STARTED" |
| | }, |
| | { |
| | "id": "59f5fd29-d20f-426f-a1a6-526757205cb4", |
| | "vduId": "masterNode", |
| | "vnfcState": "STARTED" |
| | }, |
| | { |
| | "id": "08b3f00e-a133-4262-8edb-03e2484ce870", |
| | "vduId": "workerNode", |
| | "vnfcState": "STARTED" |
| | }, |
| | { |
| | "id": "027502d6-d072-4819-a502-cb7cc688ec16", |
| | "vduId": "workerNode", |
| | "vnfcState": "STARTED" |
| | } |
| | ], |
| | "additionalParams": { |
| | "lcm-operation-user-data": "./UserData/k8s_cluster_user_data.py", |
| | "lcm-operation-user-data-class": "KubernetesClusterUserData", |
| | "k8sClusterInstallationParam": { |
| | "vimName": "kubernetes_vim_complex", |
| | "proxy": { |
| | "noProxy": "192.168.246.0/24,10.0.0.1", |
| | "httpProxy": "http://user1:password1@host1:port1", |
| | "httpsProxy": "https://user2:password2@host2:port2", |
| | "k8sNodeCidr": "10.10.0.0/24" |
| | }, |
| | "masterNode": { |
| | "password": "ubuntu", |
| | "podCidr": "192.168.3.0/16", |
| | "username": "ubuntu", |
| | "aspectId": "master_instance", |
| | "nicCpName": "masterNode_CP1", |
| | "sshCpName": "masterNode_CP1", |
| | "clusterCidr": "10.199.187.0/24", |
| | "clusterCpName": "vip_CP" |
| | }, |
| | "scriptPath": "Scripts/install_k8s_cluster.sh", |
| | "workerNode": { |
| | "password": "ubuntu", |
| | "username": "ubuntu", |
| | "aspectId": "worker_instance", |
| | "nicCpName": "workerNode_CP2", |
| | "sshCpName": "workerNode_CP2" |
| | } |
| | } |
| | } |
| | } |
| Instantiation State | INSTANTIATED |
| Links | { |
| | "self": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a" |
| | }, |
| | "terminate": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a/terminate" |
| | }, |
| | "scale": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a/scale" |
| | }, |
| | "heal": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a/heal" |
| | }, |
| | "changeExtConn": { |
| | "href": "/vnflcm/v1/vnf_instances/c5215213-af4b-4080-95ab-377920474e1a/change_ext_conn" |
| | } |
| | } |
| VIM Connection Info | [ |
| | { |
| | "id": "9ab53adf-ca70-47b2-8877-1858cfb53618", |
| | "vimId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", |
| | "vimType": "openstack", |
| | "interfaceInfo": {}, |
| | "accessInfo": {} |
| | }, |
| | { |
| | "id": "2e56da35-f343-4f9e-8f04-7722f8edbe7a", |
| | "vimId": "3e04bb8e-2dbd-4c32-9575-d2937f3aa931", |
| | "vimType": "kubernetes", |
| | "interfaceInfo": null, |
| | "accessInfo": { |
| | "authUrl": "https://10.10.0.80:16443" |
| | } |
| | } |
| | ] |
| VNF Instance Description | None |
| VNF Instance Name | vnf-c5215213-af4b-4080-95ab-377920474e1a |
| VNF Package ID | 03a8eb3e-a981-434e-a548-82d9b90161d7 |
| VNF Product Name | Sample VNF |
| VNF Provider | Company |
| VNF Software Version | 1.0 |
| VNFD ID | b1db0ce7-ebca-1fb7-95ed-4840d70a1163 |
| VNFD Version | 1.0 |
+--------------------------+-------------------------------------------------------------------------------------------------+
Scale Kubernetes Worker Nodes¶
According to NFV-SOL001 v2.6.1, the scale_start and scale_end operations allow users to execute any script in the scale operation, and scale operations on the worker nodes of a Kubernetes cluster are supported with the Mgmt Driver.
After instantiating a Kubernetes cluster, you can execute a scale-in operation if you want to delete one or more worker nodes from the Kubernetes cluster, or a scale-out operation if you want to add new worker nodes to it. The following shows how to create the parameter file and the OpenStack CLI commands.
1. Create the Parameter File¶
The following are the scale parameters of "POST /vnf_instances/{id}/scale", as the ScaleVnfRequest data type in ETSI NFV-SOL003 v2.6.1:
+------------------+---------------------------------------------------------+
| Attribute name | Parameter description |
+------------------+---------------------------------------------------------+
| type | User specify scaling operation type: |
| | "SCALE_IN" or "SCALE_OUT" |
+------------------+---------------------------------------------------------+
| aspectId | User specify target aspectId, aspectId is defined in |
| | above VNFD and user can know by |
| | ``InstantiatedVnfInfo.ScaleStatus`` that contained in |
| | the response of "GET /vnf_instances/{id}" |
+------------------+---------------------------------------------------------+
| numberOfSteps | Number of scaling steps |
+------------------+---------------------------------------------------------+
| additionalParams | Not needed |
+------------------+---------------------------------------------------------+
The following are two samples of scaling request bodies:
{
"type": "SCALE_OUT",
"aspectId": "worker_instance",
"numberOfSteps": "1"
}
{
"type": "SCALE_IN",
"aspectId": "worker_instance",
"numberOfSteps": "1"
}
Note
Only worker nodes can be scaled in or out. Scaling master nodes is not supported by the current function.
2. Execute the Scale Operations¶
Before executing the scale command, you must ensure that your VNF instance has been instantiated: the VNF Package should have been uploaded in Create and Upload VNF Package, and the Kubernetes cluster should have been deployed through the process in Deploy Kubernetes Cluster.
When executing a scale operation on worker nodes, Tacker calls the following Heat API:
stack update
The steps to confirm whether scaling succeeded are shown below:
1. Execute the Heat CLI command and check the number of resources in the 'worker_instance' resource list of the stack before and after scaling.
2. Log in to a master node of the Kubernetes cluster and check the number of worker nodes before and after scaling.
To confirm the number of worker nodes after scaling, you can find the increase or decrease in the number of stack resources with the Heat CLI. Also, the number of worker nodes registered in the Kubernetes cluster should be updated accordingly. See the Heat CLI reference for details of the Heat CLI commands.
Stack information before scaling:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter \
type=complex_nested_worker.yaml -c resource_name -c physical_resource_id -c resource_type \
-c resource_status
+---------------+--------------------------------------+----------------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type              | resource_status |
+---------------+--------------------------------------+----------------------------+-----------------+
| lwljovool2wg  | 07b79bbe-d0b2-4df0-8775-6202142b6054 | complex_nested_worker.yaml | CREATE_COMPLETE |
| n6nnjta4f4rv  | 56c9ec6f-5e52-44db-9d0d-57e3484e763f | complex_nested_worker.yaml | CREATE_COMPLETE |
+---------------+--------------------------------------+----------------------------+-----------------+
Worker nodes in the Kubernetes cluster before scaling:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master59   Ready    control-plane,master   1h25m   v1.20.4
master78   Ready    control-plane,master   1h1m    v1.20.4
master31   Ready    control-plane,master   35m     v1.20.4
worker18   Ready    <none>                 10m     v1.20.4
worker20   Ready    <none>                 4m      v1.20.4
Scale-out execution of the vnf_instance:
$ openstack vnflcm scale --type "SCALE_OUT" --aspect-id worker_instance --number-of-steps 1 c5215213-af4b-4080-95ab-377920474e1a
Scale request for VNF Instance c5215213-af4b-4080-95ab-377920474e1a has been accepted.
Stack information after scaling out:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter \
type=complex_nested_worker.yaml -c resource_name -c physical_resource_id -c resource_type \
-c resource_status
+---------------+--------------------------------------+----------------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type              | resource_status |
+---------------+--------------------------------------+----------------------------+-----------------+
| lwljovool2wg  | 07b79bbe-d0b2-4df0-8775-6202142b6054 | complex_nested_worker.yaml | UPDATE_COMPLETE |
| n6nnjta4f4rv  | 56c9ec6f-5e52-44db-9d0d-57e3484e763f | complex_nested_worker.yaml | UPDATE_COMPLETE |
| z5nky6qcodlq  | f9ab73ff-3ad7-40d2-830a-87bd0c45af32 | complex_nested_worker.yaml | CREATE_COMPLETE |
+---------------+--------------------------------------+----------------------------+-----------------+
Worker nodes in the Kubernetes cluster after scaling out:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master59   Ready    control-plane,master   1h35m   v1.20.4
master78   Ready    control-plane,master   1h11m   v1.20.4
master31   Ready    control-plane,master   45m     v1.20.4
worker18   Ready    <none>                 20m     v1.20.4
worker20   Ready    <none>                 14m     v1.20.4
worker45   Ready    <none>                 4m      v1.20.4
Scale in execution of the vnf_instance:
$ openstack vnflcm scale --type "SCALE_IN" --aspect-id worker_instance --number-of-steps 1 c5215213-af4b-4080-95ab-377920474e1a
Scale request for VNF Instance c5215213-af4b-4080-95ab-377920474e1a has been accepted.
Note

This sample shows the output of "SCALE_IN" performed after the "SCALE_OUT" operation above.
Stack information after scaling in:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter type=complex_nested_worker.yaml -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+-----------------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type               | resource_status |
+---------------+--------------------------------------+-----------------------------+-----------------+
| n6nnjta4f4rv  | 56c9ec6f-5e52-44db-9d0d-57e3484e763f | complex_nested_worker.yaml  | UPDATE_COMPLETE |
| z5nky6qcodlq  | f9ab73ff-3ad7-40d2-830a-87bd0c45af32 | complex_nested_worker.yaml  | UPDATE_COMPLETE |
+---------------+--------------------------------------+-----------------------------+-----------------+
Worker nodes in the Kubernetes cluster after scaling in:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master59   Ready    control-plane,master   1h38m   v1.20.4
master78   Ready    control-plane,master   1h14m   v1.20.4
master31   Ready    control-plane,master   48m     v1.20.4
worker20   Ready    <none>                 17m     v1.20.4
worker45   Ready    <none>                 7m      v1.20.4
Heal Kubernetes Master/Worker Nodes¶
According to NFV-SOL001 v2.6.1, the heal_start and heal_end operations allow users to execute arbitrary scripts as part of a heal operation, and the Mgmt Driver uses this mechanism to support healing master and worker nodes in the Kubernetes cluster.
After the Kubernetes cluster has been instantiated, you can heal any node in it that is not running normally. Healing the entire Kubernetes cluster is also supported. The following describes how to create the parameter file and which OpenStack CLI commands to use.
1. Create the Parameter File¶
The following are the heal parameters sent to "POST /vnf_instances/{id}/heal" with the HealVnfRequest data type. Note that HealVnfRequest is defined differently in SOL002 and SOL003.
HealVnfRequest defined in SOL002:
+------------------+---------------------------------------------------------+
| Attribute name   | Parameter description                                   |
+------------------+---------------------------------------------------------+
| vnfcInstanceId   | User-specified heal target. The "vnfcInstanceId" can be |
|                  | found in ``InstantiatedVnfInfo.vnfcResourceInfo``       |
|                  | contained in the response of "GET /vnf_instances/{id}". |
+------------------+---------------------------------------------------------+
| cause            | Not needed                                              |
+------------------+---------------------------------------------------------+
| additionalParams | Not needed                                              |
+------------------+---------------------------------------------------------+
| healScript       | Not needed                                              |
+------------------+---------------------------------------------------------+
HealVnfRequest defined in SOL003:
+------------------+---------------------------------------------------------+
| Attribute name   | Parameter description                                   |
+------------------+---------------------------------------------------------+
| cause            | Not needed                                              |
+------------------+---------------------------------------------------------+
| additionalParams | Not needed                                              |
+------------------+---------------------------------------------------------+
Both cause and additionalParams are supported by SOL002 and SOL003.
If the vnfcInstanceId parameter is null, the heal operation targets the entire Kubernetes cluster, which is the case defined in SOL003.
The following is a sample heal request body for SOL002:
{
"vnfcInstanceId": "bbce9656-f051-434f-8c4a-660ac23e91f6"
}
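For comparison, a SOL003-style request that heals the entire Kubernetes cluster simply omits vnfcInstanceId. A minimal illustrative body (the cause text here is only an example, not a required value) could look like this:

{
    "cause": "Heal the entire Kubernetes cluster"
}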
Note

In the chapter "Deploy Kubernetes Cluster", the result of instantiating the VNF instance is shown by the CLI command openstack vnflcm show VNF INSTANCE ID.
You can obtain the vnfcInstanceId from the Instantiated Vnf Info in that result: vnfcResourceInfo.id is the vnfcInstanceId.
The physical_resource_id mentioned below is the same as vnfcResourceInfo.computeResource.resourceId.
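For example, a quick way to inspect only the relevant field is shown below (a sketch; --fit-width is the standard OpenStack client option for wrapping wide output):

$ openstack vnflcm show c5215213-af4b-4080-95ab-377920474e1a -c 'Instantiated Vnf Info' --fit-width

Look for the "vnfcResourceInfo" entries in the output; each entry's "id" is a vnfcInstanceId, and its computeResource.resourceId is the corresponding physical_resource_id.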
2. Execute the Heal Operations¶
1. Heal a Master Node¶
When healing a specified VNFC instance, Tacker calls the following Heat APIs:
Stack resource mark unhealthy
Stack update
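For reference, roughly equivalent operations expressed as Heat CLI commands are shown below (a sketch only; Tacker calls the Heat REST API directly, and the stack and resource names here are placeholders):

$ openstack stack resource mark unhealthy <stack name> <resource name>
$ openstack stack update --existing <stack name>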
The steps to confirm whether healing succeeded are as follows:
1. Execute the Heat CLI command and check the physical_resource_id and resource_status of the master node before and after healing.
2. Log in to a master node of the Kubernetes cluster and check the AGE of the master nodes before and after healing.
To confirm that healing a master node succeeded, use the Heat CLI to verify that the physical_resource_id of the corresponding resource in the 'master_instance' resource list has changed. In addition, the AGE of the healed master node in the Kubernetes cluster should be reset.
Note

Note that the "vnfc-instance-id" managed by Tacker and the "physical-resource-id" managed by Heat are different.
Master node information before healing:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 5b3ff765-7a9f-447a-a06d-444e963b74c9 | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | 63402e5a-67c9-4f5c-b03f-b21f4a88507f | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | a0eccaee-ff7b-4c70-8c11-ba79c8d4deb6 | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 4a40d65c-3440-4c44-858a-72a66324a11a | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 12708197-9724-41b8-b48c-9eb6862331dc | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
Master nodes in the Kubernetes cluster before healing:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master59   Ready    control-plane,master   1h38m   v1.20.4
master78   Ready    control-plane,master   1h14m   v1.20.4
master31   Ready    control-plane,master   48m     v1.20.4
worker20   Ready    <none>                 17m     v1.20.4
worker45   Ready    <none>                 7m      v1.20.4
We heal the master node with physical_resource_id a0eccaee-ff7b-4c70-8c11-ba79c8d4deb6, whose vnfc_instance_id is bbce9656-f051-434f-8c4a-660ac23e91f6.
Heal execution of a master node of the vnf_instance:
$ openstack vnflcm heal c5215213-af4b-4080-95ab-377920474e1a --vnfc-instance bbce9656-f051-434f-8c4a-660ac23e91f6
Heal request for VNF Instance c5215213-af4b-4080-95ab-377920474e1a has been accepted.
Master node information after healing:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 5b3ff765-7a9f-447a-a06d-444e963b74c9 | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | 63402e5a-67c9-4f5c-b03f-b21f4a88507f | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | aaecc9b4-8ce5-4f1c-a90b-3571fd4bfb5f | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 4a40d65c-3440-4c44-858a-72a66324a11a | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 12708197-9724-41b8-b48c-9eb6862331dc | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
Master nodes in the Kubernetes cluster after healing:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master78   Ready    control-plane,master   1h36m   v1.20.4
master31   Ready    control-plane,master   1h10m   v1.20.4
worker20   Ready    <none>                 39m     v1.20.4
worker45   Ready    <none>                 29m     v1.20.4
master59   Ready    control-plane,master   2m      v1.20.4
2. Heal a Worker Node¶
Healing a worker node works the same way as healing a master node. You only need to replace the vnfc_instance_id in the heal command.
Worker node information before healing:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 5b3ff765-7a9f-447a-a06d-444e963b74c9 | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | 63402e5a-67c9-4f5c-b03f-b21f4a88507f | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | aaecc9b4-8ce5-4f1c-a90b-3571fd4bfb5f | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 4a40d65c-3440-4c44-858a-72a66324a11a | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 12708197-9724-41b8-b48c-9eb6862331dc | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
Worker nodes in the Kubernetes cluster before healing:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master78   Ready    control-plane,master   1h36m   v1.20.4
master31   Ready    control-plane,master   1h10m   v1.20.4
worker20   Ready    <none>                 39m     v1.20.4
worker45   Ready    <none>                 29m     v1.20.4
master59   Ready    control-plane,master   2m      v1.20.4
We heal the worker node with physical_resource_id 5b3ff765-7a9f-447a-a06d-444e963b74c9, whose vnfc_instance_id is b4af0652-74b8-47bd-bcf6-94769bdbf756.
Heal execution of a worker node of the vnf_instance:
$ openstack vnflcm heal c5215213-af4b-4080-95ab-377920474e1a --vnfc-instance b4af0652-74b8-47bd-bcf6-94769bdbf756
Heal request for VNF Instance c5215213-af4b-4080-95ab-377920474e1a has been accepted.
Worker node information after healing:
$ openstack stack resource list vnf-c5215213-af4b-4080-95ab-377920474e1a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 63402e5a-67c9-4f5c-b03f-b21f4a88507f | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | c94f8952-bf2e-4a08-906e-67cee771112b | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | aaecc9b4-8ce5-4f1c-a90b-3571fd4bfb5f | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 4a40d65c-3440-4c44-858a-72a66324a11a | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 12708197-9724-41b8-b48c-9eb6862331dc | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
Worker nodes in the Kubernetes cluster after healing:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master78   Ready    control-plane,master   1h46m   v1.20.4
master31   Ready    control-plane,master   1h20m   v1.20.4
worker45   Ready    <none>                 39m     v1.20.4
master59   Ready    control-plane,master   10m     v1.20.4
worker20   Ready    <none>                 2m      v1.20.4
3. Heal the Entire Kubernetes Cluster¶
When healing the entire VNF, Tacker calls the following Heat APIs:
Stack delete
Stack create
The steps to confirm whether healing succeeded are as follows:
1. Execute the Heat CLI command and check the 'ID' and 'Stack Status' of the stack before and after healing.
2. All information about the Kubernetes cluster will change.
The point is to confirm that the stack 'ID' has changed after healing.
Stack information before healing:
$ openstack stack list -c 'ID' -c 'Stack Name' -c 'Stack Status'
+--------------------------------------+------------------------------------------+-----------------+
| ID                                   | Stack Name                               | Stack Status    |
+--------------------------------------+------------------------------------------+-----------------+
| f485f3f2-8181-4ed5-b927-e582b5aa9b14 | vnf-c5215213-af4b-4080-95ab-377920474e1a | CREATE_COMPLETE |
+--------------------------------------+------------------------------------------+-----------------+
Kubernetes cluster information before healing:
$ ssh ubuntu@10.10.0.80
$ kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
master59   Ready    control-plane,master   1h38m   v1.20.4
master78   Ready    control-plane,master   1h14m   v1.20.4
master31   Ready    control-plane,master   48m     v1.20.4
worker20   Ready    <none>                 17m     v1.20.4
worker45   Ready    <none>                 7m      v1.20.4
Heal execution of the entire VNF:
$ openstack vnflcm heal c5215213-af4b-4080-95ab-377920474e1a
Heal request for VNF Instance c5215213-af4b-4080-95ab-377920474e1a has been accepted.
Stack information after healing:
$ openstack stack list -c 'ID' -c 'Stack Name' -c 'Stack Status'
+--------------------------------------+------------------------------------------+-----------------+
| ID                                   | Stack Name                               | Stack Status    |
+--------------------------------------+------------------------------------------+-----------------+
| 03aaadbe-bf5a-44a0-84b0-8f2a18f8a844 | vnf-c5215213-af4b-4080-95ab-377920474e1a | CREATE_COMPLETE |
+--------------------------------------+------------------------------------------+-----------------+
Kubernetes cluster information after healing:
$ ssh ubuntu@10.10.0.93
$ kubectl get node
NAME        STATUS   ROLES                  AGE     VERSION
master46    Ready    control-plane,master   1h25m   v1.20.4
master37    Ready    control-plane,master   1h1m    v1.20.4
master14    Ready    control-plane,master   35m     v1.20.4
worker101   Ready    <none>                 10m     v1.20.4
worker214   Ready    <none>                 4m      v1.20.4
Hardware-aware Affinity on Kubernetes Cluster¶
In the two cases above (simple and complex), if you deploy a containerized network function on the Kubernetes cluster VNF, Pods may still be scheduled onto the same physical compute server even though they carry anti-affinity rules. An anti-affinity rule places Pods on different worker nodes, but those worker nodes may themselves be running on the same physical server. This chapter provides a way to support hardware-aware affinity for Pods.
This case creates a Kubernetes cluster with 3 master nodes and 2 worker nodes. When Tacker deploys the worker nodes, an 'anti-affinity' rule is added to their "scheduler_hints" property (a property that controls which compute server a VM is deployed on), so that the worker nodes are placed on different compute servers. After a worker node joins the Kubernetes cluster, a label is added to it whose key is 'CIS-node' (used as a 'topologyKey') and whose value is the name of the compute server the worker node is deployed on.
Then, when Pods with an anti-affinity rule based on the 'CIS-node' label are deployed in this Kubernetes cluster, they are scheduled onto worker nodes with different values of that label, and therefore onto different physical servers (see the example manifest below).
Meanwhile, if you deploy your VMs using Grant, you can specify the availability zone (AZ) of each VM. In that case, your worker nodes are labeled with the key 'kubernetes.io/zone' (also used as a 'topologyKey') and the AZ the worker node is deployed in as the value. When you specify the zone label in pod-affinity rules, your Pods are deployed across different AZs.
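As an illustration, a Deployment whose replicas must land on different physical servers could carry a pod anti-affinity rule keyed on 'CIS-node' like the following (a minimal sketch; the name, app label, and image are placeholders, not part of the sample package):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-cnf
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-cnf
  template:
    metadata:
      labels:
        app: sample-cnf
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: sample-cnf
            # Replicas of this Deployment must not share a node that has the
            # same 'CIS-node' value, i.e. the same physical compute server.
            topologyKey: CIS-node
      containers:
      - name: app
        image: nginx

With this rule, the scheduler places the two replicas on worker nodes whose 'CIS-node' labels differ, and those nodes are in turn guaranteed by the VNF package's anti-affinity server group to sit on different compute servers.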
1. VNF Package Introduction¶
The VNF package for hardware-aware affinity (hereafter called pod-affinity) is similar to the packages of the two cases above. You only need to add the pod-affinity definition files in the Definitions and BaseHOT directories.
Definitions¶
The deployment_flavour file must differ from the two cases above. A sketch of the relevant part is shown below.
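The complete sample file ships with the Tacker sample VNF package; the fragment below only illustrates the part that distinguishes this flavour (the node type name follows common SOL001 sample conventions and is an assumption here, not the exact sample content):

topology_template:
  substitution_mappings:
    node_type: company.provider.VNF
    properties:
      flavour_id: podaffinity
      flavour_description: A flavour that deploys worker nodes with anti-affinity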
BaseHOT¶
BaseHOT needs to define a srvgroup resource that contains the anti-affinity policy definition. The directory structure is as follows:
!----BaseHOT
!---- podaffinity
!---- nested
!---- podaffinity_nested_master.yaml
!---- podaffinity_nested_worker.yaml
!---- podaffinity_hot_top.yaml
Sketches of the relevant parts of the sample files are shown below.
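The complete sample files are included in the Tacker sample VNF package; the excerpts below are illustrative sketches of the anti-affinity pieces (resource and parameter names are assumptions, not the exact sample content):

# podaffinity_hot_top.yaml (illustrative excerpt)
resources:
  srvgroup:
    type: OS::Nova::ServerGroup
    properties:
      name: ServerGroup
      policies: [ 'anti-affinity' ]

# podaffinity_nested_worker.yaml (illustrative excerpt)
resources:
  workerNode:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: { get_param: image }
      # Placing the VM in the anti-affinity server group makes Nova
      # schedule the worker VMs onto different compute servers.
      scheduler_hints:
        group: { get_param: srvgroup_id }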
2. Instantiate a Kubernetes Cluster with Pod-affinity¶
The steps and method for instantiating with pod-affinity are the same as in "Deploy Kubernetes Cluster". The difference is that the flavourId in the parameter file used at instantiation must be changed to the pod-affinity one. In this use case, the flavourId is podaffinity.
The podaffinity_kubernetes_param_file.json is shown below.
podaffinity_kubernetes_param_file.json
{
"flavourId": "podaffinity",
"vimConnectionInfo": [{
"id": "3cc2c4ff-525c-48b4-94c9-29247223322f",
"vimId": "05ef7ca5-7e32-4a6b-a03d-52f811f04496", #Set the uuid of the VIM to use
"vimType": "openstack"
}],
"additionalParams": {
"k8s_cluster_installation_param": {
"script_path": "Scripts/install_k8s_cluster.sh",
"vim_name": "kubernetes_vim_podaffinity",
"master_node": {
"aspect_id": "master_instance",
"ssh_cp_name": "masterNode_CP1",
"nic_cp_name": "masterNode_CP1",
"username": "ubuntu",
"password": "ubuntu",
"pod_cidr": "192.168.0.0/16",
"cluster_cidr": "10.199.187.0/24",
"cluster_cp_name": "vip_CP"
},
"worker_node": {
"aspect_id": "worker_instance",
"ssh_cp_name": "workerNode_CP2",
"nic_cp_name": "workerNode_CP2",
"username": "ubuntu",
"password": "ubuntu"
},
"proxy": {
"http_proxy": "http://user1:password1@host1:port1",
"https_proxy": "https://user2:password2@host2:port2",
"no_proxy": "192.168.246.0/24,10.0.0.1",
"k8s_node_cidr": "10.10.0.0/24"
}
},
"lcm-operation-user-data": "./UserData/k8s_cluster_user_data.py",
"lcm-operation-user-data-class": "KubernetesClusterUserData"
},
"extVirtualLinks": [{
"id": "net0_master",
"resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", #Set the uuid of the network to use
"extCps": [{
"cpdId": "masterNode_CP1",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}]
}, {
"id": "net0_worker",
"resourceId": "71a3fbd1-f31e-4c2c-b0e2-26267d64a9ee", #Set the uuid of the network to use
"extCps": [{
"cpdId": "workerNode_CP2",
"cpConfig": [{
"cpProtocolData": [{
"layerProtocol": "IP_OVER_ETHERNET"
}]
}]
}]
}]
}
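With this parameter file in place, instantiation itself uses the same Tacker CLI as the other flavours, for example (the VNF instance ID below is a placeholder for your own instance):

$ openstack vnflcm instantiate <VNF INSTANCE ID> ./podaffinity_kubernetes_param_file.json
Instantiate request for VNF Instance <VNF INSTANCE ID> has been accepted.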
Confirm that the Instantiation Operation Succeeded on OpenStack¶
You can use the Heat CLI to confirm that the Kubernetes cluster was successfully instantiated with pod-affinity. The confirmation points are as follows:
1. Confirm that the policy attribute of the "OS::Nova::ServerGroup" resource created by Tacker is 'anti-affinity'.
2. Confirm that the members attribute of the "OS::Nova::ServerGroup" resource created by Tacker contains the physical_resource_id of the worker node VMs.
3. Confirm that the server_groups attribute value of the worker node VMs is the physical_resource_id of the "OS::Nova::ServerGroup" resource.
After instantiation, the following commands can check confirmation points 1 and 2.
"OS::Nova::ServerGroup" resource information for pod-affinity:
$ openstack stack resource show vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a srvgroup --fit
+------------------------+------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                    |
+------------------------+------------------------------------------------------------------------------------------+
| attributes             | {'id': '46186a58-5cac-4dd6-a516-d6deb1461f8a', 'name': 'ServerGroup', 'policy': 'anti-affinity', 'rules': {}, 'members': ['51826868-74d6-4ce1-9b0b-157efdfc9490', 'e4bef063-30f9-4f26-b5fc-75d99e46db1e'], 'project_id': 'ff61312f72f94c7da3d0c3c4578ad121', 'user_id': '4751fbe7b18a4469bedf89ba0cc09616'} |
| creation_time          | 2021-04-22T02:47:22Z |
| description            |  |
| links                  | [{'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a/4dfd5d41-77c9-4e72-b30f-d06f0ca79ebf/resources/srvgroup', 'rel': 'self'}, {'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a/4dfd5d41-77c9-4e72-b30f-d06f0ca79ebf', 'rel': 'stack'}] |
| logical_resource_id    | srvgroup |
| physical_resource_id   | 46186a58-5cac-4dd6-a516-d6deb1461f8a |
| required_by            | ['worker_instance'] |
| resource_name          | srvgroup |
| resource_status        | CREATE_COMPLETE |
| resource_status_reason | state changed |
| resource_type          | OS::Nova::ServerGroup |
| updated_time           | 2021-04-22T02:47:22Z |
+------------------------+------------------------------------------------------------------------------------------+

$ openstack stack resource list vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 51826868-74d6-4ce1-9b0b-157efdfc9490 | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | e4bef063-30f9-4f26-b5fc-75d99e46db1e | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | d4578afd-9eb6-2ca0-1932-ccd69d763b6b | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 42904925-7d05-e311-3953-dc92c88428b0 | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 282a9ba5-fcbc-3f4b-6ca3-71d383e26134 | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
The following command can check confirmation point 3.
"worker node VM" information for pod-affinity:
$ openstack stack resource list vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a -n 2 --filter type=podaffinity_nested_worker.yaml -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+--------------------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type                  | resource_status |
+---------------+--------------------------------------+--------------------------------+-----------------+
| kxogpuzgdcvi  | 3b11dba8-2dab-4ad4-8241-09a0501cab47 | podaffinity_nested_worker.yaml | CREATE_COMPLETE |
| n5s7ycewii5s  | 4b2ac686-e6ff-4397-88dd-cbba7d2e7a34 | podaffinity_nested_worker.yaml | CREATE_COMPLETE |
+---------------+--------------------------------------+--------------------------------+-----------------+

$ openstack stack resource show 3b11dba8-2dab-4ad4-8241-09a0501cab47 workerNode --fit
+------------------------+------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                    |
+------------------------+------------------------------------------------------------------------------------------+
| attributes             | {'id': '51826868-74d6-4ce1-9b0b-157efdfc9490', 'name': 'workerNode', 'status': 'ACTIVE', 'tenant_id': 'ff61312f72f94c7da3d0c3c4578ad121', 'user_id': '4751fbe7b18a4469bedf89ba0cc09616', 'metadata': {}, 'hostId': 'bdd83b04143e4048e93141cfb5600c39571a94e501564cf7a1380073', 'image': {'id': '959c1e45-e140-407d-aaaf-bb5eea93a828', 'links': [{'rel': 'bookmark', 'href': 'http://192.168.10.115/compute/images/959c1e45-e140-407d-aaaf-bb5eea93a828'}]}, 'flavor': {'vcpus': 2, 'ram': 4096, 'disk': 40, 'ephemeral': 0, 'swap': 0, 'original_name': 'm1.medium', 'extra_specs': {'hw_rng:allowed': 'True'}}, 'created': '2021-04-22T02:47:27Z', 'updated': '2021-04-22T02:47:36Z', 'addresses': {'net0': [{'version': 4, 'addr': '10.10.0.52', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:09:34:5f'}]}, 'accessIPv4': '', 'accessIPv6': '', 'links': [{'rel': 'self', 'href': 'http://192.168.10.115/compute/v2.1/servers/51826868-74d6-4ce1-9b0b-157efdfc9490'}, {'rel': 'bookmark', 'href': 'http://192.168.10.115/compute/servers/51826868-74d6-4ce1-9b0b-157efdfc9490'}], 'OS-DCF:diskConfig': 'MANUAL', 'progress': 0, 'OS-EXT-AZ:availability_zone': 'nova', 'config_drive': '', 'key_name': None, 'OS-SRV-USG:launched_at': '2021-04-22T02:47:30.000000', 'OS-SRV-USG:terminated_at': None, 'security_groups': [{'name': 'default'}], 'OS-EXT-SRV-ATTR:host': 'compute03', 'OS-EXT-SRV-ATTR:instance_name': 'instance-000003de', 'OS-EXT-SRV-ATTR:hypervisor_hostname': 'compute03', 'OS-EXT-SRV-ATTR:reservation_id': 'r-3wox5r91', 'OS-EXT-SRV-ATTR:launch_index': 0, 'OS-EXT-SRV-ATTR:hostname': 'workernode', 'OS-EXT-SRV-ATTR:kernel_id': '', 'OS-EXT-SRV-ATTR:ramdisk_id': '', 'OS-EXT-SRV-ATTR:root_device_name': '/dev/vda', 'OS-EXT-SRV-ATTR:user_data': 'Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMDA5NDg1OTI5MTU3NzU5MzA2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAwMDk0ODU5MjkxNTc3NTkzMDY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi==', 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': 'active', 'OS-EXT-STS:power_state': 1, 'os-extended-volumes:volumes_attached': [], 'host_status': 'UP', 'locked': False, 'locked_reason': None, 'description': None, 'tags': [], 'trusted_image_certificates': None, 'server_groups': ['46186a58-5cac-4dd6-a516-d6deb1461f8a'], 'os_collect_config': {}} |
| creation_time          | 2021-04-22T02:47:24Z |
| description            |  |
| links                  | [{'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a-worker_instance-6wyaj6mgtjlt-kxogpuzgdcvi-eutiueiy6e7n/3b11dba8-2dab-4ad4-8241-09a0501cab47/resources/workerNode', 'rel': 'self'}, {'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a-worker_instance-6wyaj6mgtjlt-kxogpuzgdcvi-eutiueiy6e7n/3b11dba8-2dab-4ad4-8241-09a0501cab47', 'rel': 'stack'}] |
| logical_resource_id    | workerNode |
| parent_resource        | kxogpuzgdcvi |
| physical_resource_id   | 51826868-74d6-4ce1-9b0b-157efdfc9490 |
| required_by            | [] |
| resource_name          | workerNode |
| resource_status        | CREATE_COMPLETE |
| resource_status_reason | state changed |
| resource_type          | OS::Nova::Server |
| updated_time           | 2021-04-22T02:47:24Z |
+------------------------+------------------------------------------------------------------------------------------+
Confirm that the Instantiation Operation Succeeded on Kubernetes¶
To confirm that the 'CIS-node' label was successfully added to the worker nodes, log in to one of the master nodes of the Kubernetes cluster via ssh and use the Kubernetes CLI. The confirmation points are as follows:
1. Confirm that the 'CIS-node' label is present in the worker nodes' labels.
2. Confirm that the value of the 'CIS-node' label is the name of the compute server the worker node is deployed on. The corresponding key in the "worker node VM" information is 'OS-EXT-SRV-ATTR:host'.
After instantiation, the following command can check these confirmation points.
Worker node information in the Kubernetes cluster:
$ kubectl get node --show-labels
NAME        STATUS   ROLES                  AGE     VERSION   LABELS
master110   Ready    control-plane,master   5h34m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master110,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master13    Ready    control-plane,master   5h21m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master13,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master159   Ready    control-plane,master   5h48m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master159,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
worker52    Ready    <none>                 5h15m   v1.21.0   CIS-node=compute03,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker52,kubernetes.io/os=linux
worker88    Ready    <none>                 5h10m   v1.21.0   CIS-node=compute01,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker88,kubernetes.io/os=linux
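As an additional cross-check (a sketch; the node name and server UUID below are taken from the example output above and will differ in your environment), you can compare the label value with the Nova host directly. Both commands should report the same compute server:

$ kubectl get node worker52 -o jsonpath='{.metadata.labels.CIS-node}'
$ openstack server show 51826868-74d6-4ce1-9b0b-157efdfc9490 -c 'OS-EXT-SRV-ATTR:host'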
3. Scale Worker Nodes with Pod-affinity¶
The steps and method for scaling worker nodes with pod-affinity are the same as in "Scale Kubernetes Worker Nodes".
Confirm that the Scaling Operation Succeeded on OpenStack¶
You can use the Heat CLI to confirm that scaling the worker nodes with pod-affinity completed successfully. The confirmation points are as follows:
1. Confirm that the physical_resource_id of the scaled-out worker node has been added to the members attribute of the "OS::Nova::ServerGroup" resource.
2. Confirm that the server_groups attribute value of the scaled-out worker node VM is the physical_resource_id of the "OS::Nova::ServerGroup" resource.
After scaling the worker nodes, the following command can check confirmation point 1.
"OS::Nova::ServerGroup" resource information for pod-affinity:
$ openstack stack resource show vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a srvgroup --fit
+------------------------+------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                    |
+------------------------+------------------------------------------------------------------------------------------+
| attributes             | {'id': '46186a58-5cac-4dd6-a516-d6deb1461f8a', 'name': 'ServerGroup', 'policy': 'anti-affinity', 'rules': {}, 'members': ['51826868-74d6-4ce1-9b0b-157efdfc9490', 'e4bef063-30f9-4f26-b5fc-75d99e46db1e', 'a576d70c-d299-cf83-745a-63a1f49da7d3'], 'project_id': 'ff61312f72f94c7da3d0c3c4578ad121', 'user_id': '4751fbe7b18a4469bedf89ba0cc09616'} |
| creation_time          | 2021-04-22T02:47:22Z |
| description            |  |
| links                  | [{'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a/4dfd5d41-77c9-4e72-b30f-d06f0ca79ebf/resources/srvgroup', 'rel': 'self'}, {'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a/4dfd5d41-77c9-4e72-b30f-d06f0ca79ebf', 'rel': 'stack'}] |
| logical_resource_id    | srvgroup |
| physical_resource_id   | 46186a58-5cac-4dd6-a516-d6deb1461f8a |
| required_by            | ['worker_instance'] |
| resource_name          | srvgroup |
| resource_status        | UPDATE_COMPLETE |
| resource_status_reason | state changed |
| resource_type          | OS::Nova::ServerGroup |
| updated_time           | 2021-04-22T03:47:22Z |
+------------------------+------------------------------------------------------------------------------------------+

$ openstack stack resource list vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 51826868-74d6-4ce1-9b0b-157efdfc9490 | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | e4bef063-30f9-4f26-b5fc-75d99e46db1e | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | a576d70c-d299-cf83-745a-63a1f49da7d3 | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | d4578afd-9eb6-2ca0-1932-ccd69d763b6b | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 42904925-7d05-e311-3953-dc92c88428b0 | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 282a9ba5-fcbc-3f4b-6ca3-71d383e26134 | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
The following command can check confirmation point 2. The resource named 'plkz6sfomuhx' is the scaled-out one.
"worker node VM" information for pod-affinity:
$ openstack stack resource list vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a -n 2 --filter type=podaffinity_nested_worker.yaml -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+--------------------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type                  | resource_status |
+---------------+--------------------------------------+--------------------------------+-----------------+
| kxogpuzgdcvi  | 3b11dba8-2dab-4ad4-8241-09a0501cab47 | podaffinity_nested_worker.yaml | UPDATE_COMPLETE |
| n5s7ycewii5s  | 4b2ac686-e6ff-4397-88dd-cbba7d2e7a34 | podaffinity_nested_worker.yaml | UPDATE_COMPLETE |
| plkz6sfomuhx  | 24d0076c-672a-e52d-1947-ec8495708b5d | podaffinity_nested_worker.yaml | CREATE_COMPLETE |
+---------------+--------------------------------------+--------------------------------+-----------------+

$ openstack stack resource show 24d0076c-672a-e52d-1947-ec8495708b5d workerNode --fit
+------------------------+------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                    |
+------------------------+------------------------------------------------------------------------------------------+
| attributes             | {'id': 'a576d70c-d299-cf83-745a-63a1f49da7d3', 'name': 'workerNode', 'status': 'ACTIVE', 'tenant_id': 'ff61312f72f94c7da3d0c3c4578ad121', 'user_id': '4751fbe7b18a4469bedf89ba0cc09616', 'metadata': {}, 'hostId': 'bdd83b04143e4048e93141cfb5600c39571a94e501564cf7a1380073', 'image': {'id': '959c1e45-e140-407d-aaaf-bb5eea93a828', 'links': [{'rel': 'bookmark', 'href': 'http://192.168.10.115/compute/images/959c1e45-e140-407d-aaaf-bb5eea93a828'}]}, 'flavor': {'vcpus': 2, 'ram': 4096, 'disk': 40, 'ephemeral': 0, 'swap': 0, 'original_name': 'm1.medium', 'extra_specs': {'hw_rng:allowed': 'True'}}, 'created': '2021-04-22T02:47:26Z', 'updated': '2021-04-22T02:47:34Z', 'addresses': {'net0': [{'version': 4, 'addr': '10.10.0.46', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:28:fc:7a'}]}, 'accessIPv4': '', 'accessIPv6': '', 'links': [{'rel': 'self', 'href': 'http://192.168.10.115/compute/v2.1/servers/a576d70c-d299-cf83-745a-63a1f49da7d3'}, {'rel': 'bookmark', 'href': 'http://192.168.10.115/compute/servers/a576d70c-d299-cf83-745a-63a1f49da7d3'}], 'OS-DCF:diskConfig': 'MANUAL', 'progress': 0, 'OS-EXT-AZ:availability_zone': 'nova', 'config_drive': '', 'key_name': None, 'OS-SRV-USG:launched_at': '2021-04-22T02:47:28.000000', 'OS-SRV-USG:terminated_at': None, 'security_groups': [{'name': 'default'}], 'OS-EXT-SRV-ATTR:host': 'compute02', 'OS-EXT-SRV-ATTR:instance_name': 'instance-000003dd', 'OS-EXT-SRV-ATTR:hypervisor_hostname': 'compute02', 'OS-EXT-SRV-ATTR:reservation_id': 'r-lvg9ate8', 'OS-EXT-SRV-ATTR:launch_index': 0, 'OS-EXT-SRV-ATTR:hostname': 'workernode', 'OS-EXT-SRV-ATTR:kernel_id': '', 'OS-EXT-SRV-ATTR:ramdisk_id': '', 'OS-EXT-SRV-ATTR:root_device_name': '/dev/vda', 'OS-EXT-SRV-ATTR:user_data': 'Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMDA5NDg1OTI5MTU3NzU5MzA2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAwMDk0ODU5MjkxNTc3NTkzMDY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi==', 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': 'active', 'OS-EXT-STS:power_state': 1, 'os-extended-volumes:volumes_attached': [], 'host_status': 'UP', 'locked': False, 'locked_reason': None, 'description': None, 'tags': [], 'trusted_image_certificates': None, 'server_groups': ['46186a58-5cac-4dd6-a516-d6deb1461f8a'], 'os_collect_config': {}} |
| creation_time          | 2021-04-22T02:47:23Z |
| description            |  |
| links                  | [{'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a-worker_instance-6wyaj6mgtjlt-plkz6sfomuhx-tvegcyfieq7m/24d0076c-672a-e52d-1947-ec8495708b5d/resources/workerNode', 'rel': 'self'}, {'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a-worker_instance-6wyaj6mgtjlt-plkz6sfomuhx-tvegcyfieq7m/24d0076c-672a-e52d-1947-ec8495708b5d', 'rel': 'stack'}] |
| logical_resource_id    | workerNode |
| parent_resource        | plkz6sfomuhx |
| physical_resource_id   | a576d70c-d299-cf83-745a-63a1f49da7d3 |
| required_by            | [] |
| resource_name          | workerNode |
| resource_status        | CREATE_COMPLETE |
| resource_status_reason | state changed |
| resource_type          | OS::Nova::Server |
| updated_time           | 2021-04-22T03:47:23Z |
+------------------------+------------------------------------------------------------------------------------------+
Confirm that the Scaling Operation Succeeded on Kubernetes¶
To confirm that the 'CIS-node' label was successfully added to the scaled-out worker node, log in to one of the master nodes of the Kubernetes cluster via ssh and use the Kubernetes CLI. The confirmation points are as follows:
1. Confirm that the 'CIS-node' label is present in the scaled-out worker node's labels.
2. Confirm that the value of the 'CIS-node' label is the name of the compute server the worker node is deployed on. The corresponding key in the "worker node VM" information is 'OS-EXT-SRV-ATTR:host'.
After scaling, the following command can check these confirmation points. worker46 is the scaled-out worker node.
Worker node information in the Kubernetes cluster:
$ kubectl get node --show-labels
NAME        STATUS   ROLES                  AGE     VERSION   LABELS
master110   Ready    control-plane,master   5h34m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master110,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master13    Ready    control-plane,master   5h21m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master13,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master159   Ready    control-plane,master   5h48m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master159,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
worker52    Ready    <none>                 5h15m   v1.21.0   CIS-node=compute01,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker52,kubernetes.io/os=linux
worker88    Ready    <none>                 5h10m   v1.21.0   CIS-node=compute03,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker88,kubernetes.io/os=linux
worker46    Ready    <none>                 2m17s   v1.21.0   CIS-node=compute02,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker46,kubernetes.io/os=linux
4. Heal Worker Nodes with Pod-affinity¶
The steps and method for healing worker nodes with pod-affinity are the same as healing a worker node in "Heal Kubernetes Master/Worker Nodes".
Confirm that the Heal Operation Succeeded on OpenStack¶
To confirm that healing the worker node with pod-affinity succeeded, you can use the Heat CLI. The confirmation points are as follows:
1. Confirm that the physical_resource_id of the healed worker node has changed in the members attribute of the "OS::Nova::ServerGroup" resource.
2. Confirm that the server_groups attribute value of the healed worker node VM is the physical_resource_id of the "OS::Nova::ServerGroup" resource.
After healing the worker node, the following command can check confirmation point 1. The member that changed is 'a576d70c-d299-cf83-745a-63a1f49da7d3', which has been replaced by '4cb1324f-356d-418a-7935-b0b34c3b17ed'.
"OS::Nova::ServerGroup" resource information for pod-affinity:
$ openstack stack resource show vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a srvgroup --fit
+------------------------+------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                    |
+------------------------+------------------------------------------------------------------------------------------+
| attributes             | {'id': '46186a58-5cac-4dd6-a516-d6deb1461f8a', 'name': 'ServerGroup', 'policy': 'anti-affinity', 'rules': {}, 'members': ['51826868-74d6-4ce1-9b0b-157efdfc9490', 'e4bef063-30f9-4f26-b5fc-75d99e46db1e', '4cb1324f-356d-418a-7935-b0b34c3b17ed'], 'project_id': 'ff61312f72f94c7da3d0c3c4578ad121', 'user_id': '4751fbe7b18a4469bedf89ba0cc09616'} |
| creation_time          | 2021-04-22T02:47:22Z |
| description            |  |
| links                  | [{'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a/4dfd5d41-77c9-4e72-b30f-d06f0ca79ebf/resources/srvgroup', 'rel': 'self'}, {'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a/4dfd5d41-77c9-4e72-b30f-d06f0ca79ebf', 'rel': 'stack'}] |
| logical_resource_id    | srvgroup |
| physical_resource_id   | 46186a58-5cac-4dd6-a516-d6deb1461f8a |
| required_by            | ['worker_instance'] |
| resource_name          | srvgroup |
| resource_status        | UPDATE_COMPLETE |
| resource_status_reason | state changed |
| resource_type          | OS::Nova::ServerGroup |
| updated_time           | 2021-04-22T04:15:22Z |
+------------------------+------------------------------------------------------------------------------------------+

$ openstack stack resource list vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a -n 2 --filter type=OS::Nova::Server -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status |
+---------------+--------------------------------------+------------------+-----------------+
| workerNode    | 51826868-74d6-4ce1-9b0b-157efdfc9490 | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | e4bef063-30f9-4f26-b5fc-75d99e46db1e | OS::Nova::Server | CREATE_COMPLETE |
| workerNode    | 4cb1324f-356d-418a-7935-b0b34c3b17ed | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | d4578afd-9eb6-2ca0-1932-ccd69d763b6b | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 42904925-7d05-e311-3953-dc92c88428b0 | OS::Nova::Server | CREATE_COMPLETE |
| masterNode    | 282a9ba5-fcbc-3f4b-6ca3-71d383e26134 | OS::Nova::Server | CREATE_COMPLETE |
+---------------+--------------------------------------+------------------+-----------------+
The following command can check confirmation point 2. The resource named 'workerNode' in the nested stack of resource 'plkz6sfomuhx' is the healed VM.
"worker node VM" information for pod-affinity:
$ openstack stack resource list vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a -n 2 --filter type=podaffinity_nested_worker.yaml -c resource_name -c physical_resource_id -c resource_type -c resource_status
+---------------+--------------------------------------+--------------------------------+-----------------+
| resource_name | physical_resource_id                 | resource_type                  | resource_status |
+---------------+--------------------------------------+--------------------------------+-----------------+
| kxogpuzgdcvi  | 3b11dba8-2dab-4ad4-8241-09a0501cab47 | podaffinity_nested_worker.yaml | UPDATE_COMPLETE |
| n5s7ycewii5s  | 4b2ac686-e6ff-4397-88dd-cbba7d2e7a34 | podaffinity_nested_worker.yaml | UPDATE_COMPLETE |
| plkz6sfomuhx  | 24d0076c-672a-e52d-1947-ec8495708b5d | podaffinity_nested_worker.yaml | CREATE_COMPLETE |
+---------------+--------------------------------------+--------------------------------+-----------------+

$ openstack stack resource show 24d0076c-672a-e52d-1947-ec8495708b5d workerNode --fit
+------------------------+------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                    |
+------------------------+------------------------------------------------------------------------------------------+
| attributes             | {'id': '4cb1324f-356d-418a-7935-b0b34c3b17ed', 'name': 'workerNode', 'status': 'ACTIVE', 'tenant_id': 'ff61312f72f94c7da3d0c3c4578ad121', 'user_id': '4751fbe7b18a4469bedf89ba0cc09616', 'metadata': {}, 'hostId': 'bdd83b04143e4048e93141cfb5600c39571a94e501564cf7a1380073', 'image': {'id': '959c1e45-e140-407d-aaaf-bb5eea93a828', 'links': [{'rel': 'bookmark', 'href': 'http://192.168.10.115/compute/images/959c1e45-e140-407d-aaaf-bb5eea93a828'}]}, 'flavor': {'vcpus': 2, 'ram': 4096, 'disk': 40, 'ephemeral': 0, 'swap': 0, 'original_name': 'm1.medium', 'extra_specs': {'hw_rng:allowed': 'True'}}, 'created': '2021-04-22T02:47:26Z', 'updated': '2021-04-22T02:47:34Z', 'addresses': {'net0': [{'version': 4, 'addr': '10.10.0.46', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:28:fc:7a'}]}, 'accessIPv4': '', 'accessIPv6': '', 'links': [{'rel': 'self', 'href': 'http://192.168.10.115/compute/v2.1/servers/4cb1324f-356d-418a-7935-b0b34c3b17ed'}, {'rel': 'bookmark', 'href': 'http://192.168.10.115/compute/servers/4cb1324f-356d-418a-7935-b0b34c3b17ed'}], 'OS-DCF:diskConfig': 'MANUAL', 'progress': 0, 'OS-EXT-AZ:availability_zone': 'nova', 'config_drive': '', 'key_name': None, 'OS-SRV-USG:launched_at': '2021-04-22T02:47:28.000000', 'OS-SRV-USG:terminated_at': None, 'security_groups': [{'name': 'default'}], 'OS-EXT-SRV-ATTR:host': 'compute02', 'OS-EXT-SRV-ATTR:instance_name': 'instance-000003dd', 'OS-EXT-SRV-ATTR:hypervisor_hostname': 'compute02', 'OS-EXT-SRV-ATTR:reservation_id': 'r-lvg9ate8', 'OS-EXT-SRV-ATTR:launch_index': 0, 'OS-EXT-SRV-ATTR:hostname': 'workernode', 'OS-EXT-SRV-ATTR:kernel_id': '', 'OS-EXT-SRV-ATTR:ramdisk_id': '', 'OS-EXT-SRV-ATTR:root_device_name': '/dev/vda', 'OS-EXT-SRV-ATTR:user_data': 'Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMDA5NDg1OTI5MTU3NzU5MzA2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAwMDk0ODU5MjkxNTc3NTkzMDY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi==', 'OS-EXT-STS:task_state': None, 'OS-EXT-STS:vm_state': 'active', 'OS-EXT-STS:power_state': 1, 'os-extended-volumes:volumes_attached': [], 'host_status': 'UP', 'locked': False, 'locked_reason': None, 'description': None, 'tags': [], 'trusted_image_certificates': None, 'server_groups': ['46186a58-5cac-4dd6-a516-d6deb1461f8a'], 'os_collect_config': {}} |
| creation_time          | 2021-04-22T04:15:23Z |
| description            |  |
| links                  | [{'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a-worker_instance-6wyaj6mgtjlt-plkz6sfomuhx-tvegcyfieq7m/24d0076c-672a-e52d-1947-ec8495708b5d/resources/workerNode', 'rel': 'self'}, {'href': 'http://192.168.10.115/heat-api/v1/ff61312f72f94c7da3d0c3c4578ad121/stacks/vnf-2456188c-d882-4de2-8cd0-54a7a2ea4e6a-worker_instance-6wyaj6mgtjlt-plkz6sfomuhx-tvegcyfieq7m/24d0076c-672a-e52d-1947-ec8495708b5d', 'rel': 'stack'}] |
| logical_resource_id    | workerNode |
| parent_resource        | plkz6sfomuhx |
| physical_resource_id   | 4cb1324f-356d-418a-7935-b0b34c3b17ed |
| required_by            | [] |
| resource_name          | workerNode |
| resource_status        | CREATE_COMPLETE |
| resource_status_reason | state changed |
| resource_type          | OS::Nova::Server |
| updated_time           | 2021-04-22T04:15:23Z |
+------------------------+------------------------------------------------------------------------------------------+
Confirm that the Heal Operation Succeeded on Kubernetes¶
To confirm that the 'CIS-node' label was successfully added to the healed worker node, log in to one of the master nodes of the Kubernetes cluster via ssh and use the Kubernetes CLI. The confirmation points are as follows:
1. Confirm that the 'CIS-node' label is present in the healed worker node's labels.
2. Confirm that the value of the 'CIS-node' label is the name of the compute server the worker node is deployed on. The corresponding key in the "worker node VM" information is 'OS-EXT-SRV-ATTR:host'.
After healing, the following command can check these confirmation points. worker46 is the healed worker node.
Worker node information in the Kubernetes cluster:
$ kubectl get node --show-labels
NAME        STATUS   ROLES                  AGE     VERSION   LABELS
master110   Ready    control-plane,master   5h34m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master110,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master13    Ready    control-plane,master   5h21m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master13,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
master159   Ready    control-plane,master   5h48m   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master159,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
worker52    Ready    <none>                 5h15m   v1.21.0   CIS-node=compute01,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker52,kubernetes.io/os=linux
worker88    Ready    <none>                 5h10m   v1.21.0   CIS-node=compute03,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker88,kubernetes.io/os=linux
worker46    Ready    <none>                 1m33s   v1.21.0   CIS-node=compute02,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker46,kubernetes.io/os=linux
Limitations¶
If you deploy a Kubernetes cluster with a single master node, the master node cannot be healed.
This user guide provides a VNF package in UserData format. You can also use a TOSCA-based VNF package in the manner of SOL001 v2.6.1, but it supports only the single-master case and does not support scaling operations.