Configure the Compute service to use the Bare Metal service

The Compute service needs to be configured to use the Bare Metal service's driver. The configuration file for the Compute service is typically located at /etc/nova/nova.conf.

Note

Starting with the Newton release, it is possible to run multiple nova-compute services using the ironic virt driver (in nova) to provide redundancy. Bare metal nodes are mapped to those services via a hash ring. If a service goes down, the available bare metal nodes are remapped to the remaining services.

Once active, a node stays mapped to the same nova-compute service, even if that service goes down. The node cannot be managed through the Compute API until the responsible service comes back online.
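
The hash-ring mapping itself is internal to nova, but you can see which nova-compute services are registered and whether they are up. A minimal check, assuming admin credentials are loaded in the shell:

  # List the nova-compute services; the "State" column shows which are up,
  # and bare metal nodes are distributed across those.
  openstack compute service list --service nova-compute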

The following configuration file must be modified on the Compute service's controller nodes and compute nodes.

  1. Change these configuration options in the Compute service configuration file (for example, /etc/nova/nova.conf):

    [DEFAULT]
    
    # Defines which driver to use for controlling virtualization.
    # Enable the ironic virt driver for this compute instance.
    compute_driver=ironic.IronicDriver
    
    # Amount of memory in MB to reserve for the host so that it is always
    # available to host processes.
    # It is impossible to reserve any memory on bare metal nodes, so set
    # this to zero.
    reserved_host_memory_mb=0
    
    [filter_scheduler]
    
    # Enables querying of individual hosts for instance information.
    # Not possible for bare metal nodes, so set it to False.
    track_instance_changes=False
    
    [scheduler]
    
    # This value controls how often (in seconds) the scheduler should
    # attempt to discover new hosts that have been added to cells.
    # If negative (the default), no automatic discovery will occur.
    # As each bare metal node is represented by a separate host, it has
    # to be discovered before the Compute service can deploy on it.
    # The value here has to be carefully chosen based on a compromise
    # between the enrollment speed and the load on the Compute scheduler.
    # The recommended value of 2 minutes matches how often the Compute
    # service polls the Bare Metal service for node information.
    discover_hosts_in_cells_interval=120
    

    Note

    An alternative to setting the discover_hosts_in_cells_interval option is to run the following command on any controller node of the Compute service each time a bare metal node is enrolled:

    nova-manage cell_v2 discover_hosts --by-service
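
    Whichever approach you use, you can confirm that the hosts have been mapped into cells with nova-manage on a controller node. A minimal check, assuming access to the nova configuration there:

    # Lists the cells and the hosts mapped into each of them.
    nova-manage cell_v2 list_hosts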
    
  2. Consider enabling the following option on the controller nodes:

    [filter_scheduler]
    
    # Enabling this option is beneficial as it reduces re-scheduling events
    # for ironic nodes when scheduling is based on resource classes,
    # especially for mixed hypervisor case with host_subset_size = 1.
    # However enabling it will also make packing of VMs on hypervisors
    # less dense even when scheduling weights are completely disabled.
    #shuffle_best_same_weighed_hosts = false
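
    If you do enable it, this is a minimal sketch of the resulting section; the option name is real, but whether to enable it depends on your deployment (see the trade-off described in the comment above):

    [filter_scheduler]
    shuffle_best_same_weighed_hosts = true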
    
  3. Carefully consider the following option:

    [compute]
    
    # This option will cause nova-compute to set itself to a disabled state
    # if a certain number of consecutive build failures occur. This will
    # prevent the scheduler from continuing to send builds to a compute
    # service that is consistently failing. In the case of bare metal
    # provisioning, however, a compute service is rarely the cause of build
    # failures. Furthermore, bare metal nodes, managed by a disabled
    # compute service, will be remapped to a different one. That may cause
    # the second compute service to also be disabled, and so on, until no
    # compute services are active.
    # If this is not the desired behavior, consider increasing this value or
    # setting it to 0 to disable this behavior completely.
    #consecutive_build_service_disable_threshold = 10
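
    If a nova-compute service does get disabled by this mechanism, it generally has to be re-enabled by an operator once the underlying failures are resolved. A minimal sketch, assuming admin credentials and a compute host named compute1 (a placeholder):

    # Show services along with the disabled reason, if any.
    openstack compute service list --service nova-compute --long

    # Re-enable the service on the affected host (compute1 is hypothetical).
    openstack compute service set --enable compute1 nova-compute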
    
  4. Change the following configuration options in the ironic section. Replace:

    • IRONIC_PASSWORD with the password you chose for the ironic user in the Identity service

    • IRONIC_NODE with the hostname or IP address of the ironic-api node

    • IDENTITY_IP with the IP address of the Identity server

    [ironic]
    
    # Ironic authentication type
    auth_type=password
    
    # Keystone API endpoint
    auth_url=http://IDENTITY_IP:5000/v3
    
    # Ironic keystone project name
    project_name=service
    
    # Ironic keystone admin name
    username=ironic
    
    # Ironic keystone admin password
    password=IRONIC_PASSWORD
    
    # Ironic keystone project domain
    # or set project_domain_id
    project_domain_name=Default
    
    # Ironic keystone user domain
    # or set user_domain_id
    user_domain_name=Default
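
    Before restarting the services, it can help to sanity-check these values against the Identity service. A minimal sketch, assuming admin credentials are loaded:

    # The ironic user and the service project referenced above should exist.
    openstack user show ironic
    openstack project show service

    # The Bare Metal API endpoints registered in the service catalog.
    openstack endpoint list --service baremetal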
    
  5. On the Compute service's controller nodes, restart the nova-scheduler process:

    Fedora/RHEL/CentOS:
      sudo systemctl restart openstack-nova-scheduler
    
    Ubuntu:
      sudo service nova-scheduler restart
    
  6. On the Compute service's compute nodes, restart the nova-compute process:

    Fedora/RHEL/CentOS:
      sudo systemctl restart openstack-nova-compute
    
    Ubuntu:
      sudo service nova-compute restart
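
After the restarts, you can confirm that the ironic driver is active by checking that enrolled bare metal nodes are reported as hypervisors of type "ironic". A minimal check, assuming admin credentials and at least one node already enrolled in the Bare Metal service (<hypervisor-id> is a placeholder for an ID or name taken from the list):

  # Each enrolled bare metal node should appear as a separate hypervisor.
  openstack hypervisor list

  # For any of them, hypervisor_type should read "ironic".
  openstack hypervisor show <hypervisor-id>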