Discussion:
[Openstack] unexpected distribution of compute instances in queens
Zufar Dhiyaulhaq
2018-11-26 10:45:33 UTC
Hi,

I am deploying OpenStack with 3 compute nodes, but I am seeing an abnormal
distribution of instances: instances are only deployed to one specific
compute node and are not distributed among the other compute nodes.

This is my nova.conf from the compute node (Jinja2-based template):

[DEFAULT]
osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=true
memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://{{ vip }}:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://{{ vip }}:5000/v3
memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_pw }}
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://{{ vip }}:9696
auth_url = http://{{ vip }}:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {{ neutron_pw }}
service_metadata_proxy = true
metadata_proxy_shared_secret = {{ metadata_secret }}
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://{{ vip }}:5000/v3
username = placement
password = {{ placement_pw }}
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
keymap=en-us
novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
[workarounds]
[wsgi]
[xenserver]
[xvp]
[placement_database]
connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement

What is the problem? I have looked at openstack-nova-scheduler on the
controller node, but it is running well, with only this warning:

nova-scheduler[19255]:
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332:
NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported

The result I want is for instances to be distributed across all compute nodes.
Thank you.
--
*Regards,*
*Zufar Dhiyaulhaq*
Sean Mooney
2018-11-26 16:13:33 UTC
Hi,
I am deploying OpenStack with 3 compute nodes, but I am seeing an abnormal distribution of instances: instances are
only deployed to one specific compute node and are not distributed among the other compute nodes.
This is my nova.conf from the compute node (Jinja2-based template).
Hi, the default behavior of Nova used to be to spread rather than pack, and I believe it still is.
The default behavior with Placement, however, is closer to packing, because
allocation candidates are returned in an undefined but deterministic order.

On a busy cloud this does not strictly pack instances, but on a quiet cloud it effectively does.

You can try enabling randomization of the allocation candidates by setting this config option
to true in the scheduler's nova.conf:
https://docs.openstack.org/nova/latest/configuration/config.html#placement.randomize_allocation_candidates

On that note, can you provide the nova.conf used by the scheduler rather than the compute node's nova.conf?
If you have not overridden any of the Nova defaults, the RAM and CPU weighers should spread instances within
the allocation candidates returned by Placement.
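For reference, a minimal sketch of what that would look like in the nova.conf read by the
scheduler and placement services on the controller (the weigher multiplier is shown only to
illustrate the spread behavior; 1.0 is its default, and these exact values are an assumption,
not a recommendation):

[placement]
# return allocation candidates in random order instead of a fixed order
randomize_allocation_candidates = true

[filter_scheduler]
# positive values favor hosts with more free RAM (spread); negative values pack
ram_weight_multiplier = 1.0

The services need to be restarted after changing these options for them to take effect.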
Zufar Dhiyaulhaq
2018-11-27 09:55:08 UTC
Hi Smooney,

Thank you for your help. I tried enabling randomization, but it is not
working; the instances I create still land on the same node. Below is my
nova configuration (with the randomization you suggested added) from the
master node (Jinja2-based template):

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=true
memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://{{ vip }}:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://{{ vip }}:5000/v3
memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_pw }}
[libvirt]
virt_type = kvm
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://{{ vip }}:9696
auth_url = http://{{ vip }}:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {{ neutron_pw }}
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://{{ vip }}:5000/v3
username = placement
password = {{ placement_pw }}
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
keymap=en-us
server_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
server_proxyclient_address = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]

Thank you,

Best Regards,
Zufar Dhiyaulhaq
Zufar Dhiyaulhaq
2018-11-27 10:01:19 UTC
Hi Smooney,
Sorry about the last reply; I attached the wrong configuration file. This is
my nova configuration (with the randomization you suggested added) from the
master node (Jinja2-based template):

[DEFAULT]
osapi_compute_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
metadata_listen = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:{{ rabbitmq_pw }}@{{ controller1_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller2_ip_man }}:5672,openstack:{{ rabbitmq_pw }}@{{ controller3_ip_man }}:5672
my_ip = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_api
[barbican]
[cache]
backend=oslo_cache.memcache_pool
enabled=true
memcache_servers={{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
[cells]
[cinder]
os_region_name = RegionOne
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection = mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://{{ vip }}:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://{{ vip }}:5000/v3
memcached_servers = {{ controller1_ip_man }}:11211,{{ controller2_ip_man }}:11211,{{ controller3_ip_man }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_pw }}
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://{{ vip }}:9696
auth_url = http://{{ vip }}:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = {{ neutron_pw }}
service_metadata_proxy = true
metadata_proxy_shared_secret = {{ metadata_secret }}
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://{{ vip }}:5000/v3
username = placement
password = {{ placement_pw }}
randomize_allocation_candidates = true
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
keymap=en-us
novncproxy_base_url = https://{{ vip }}:6080/vnc_auto.html
novncproxy_host = {{ hostvars[inventory_hostname]['ansible_ens3f1']['ipv4']['address'] }}
[workarounds]
[wsgi]
[xenserver]
[xvp]
[placement_database]
connection=mysql+pymysql://nova:{{ nova_dbpw }}@{{ vip }}/nova_placement
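
For reference, this is how I check which compute host each new instance lands on (admin
credentials assumed; the server name is only an example):

openstack server show test-instance -c OS-EXT-SRV-ATTR:host

Running openstack server list --long as admin should also show the Host column for all instances.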

Thank you

Best Regards,
Zufar Dhiyaulhaq
Zufar Dhiyaulhaq
2018-11-28 07:50:32 UTC
Hi,

Thank you. I was able to fix this issue by adding this configuration to the
Nova configuration file on the controller node:

driver=filter_scheduler
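
For context, that option goes in the [scheduler] section of nova.conf on the controller, so the
section now looks roughly like this (a sketch, not a verbatim copy of my file):

[scheduler]
driver = filter_scheduler
discover_hosts_in_cells_interval = 300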

Best Regards
Zufar Dhiyaulhaq
Jay Pipes
2018-11-28 14:56:26 UTC
Post by Zufar Dhiyaulhaq
Hi,
Thank you. I was able to fix this issue by adding this configuration to the
Nova configuration file on the controller node:
driver=filter_scheduler
That's the default:

https://docs.openstack.org/ocata/config-reference/compute/config-options.html

So that was definitely not the solution to your problem.

My guess is that Sean's suggestion to randomize the allocation
candidates fixed your issue.

Best,
-jay

Mike Carden
2018-11-30 07:53:12 UTC
I'm seeing a similar issue in Queens deployed via TripleO.

Two x86 compute nodes and one ppc64le node and host aggregates for virtual
instances and baremetal (x86) instances. Baremetal on x86 is working fine.

All VMs get deployed to compute-0. I can live migrate VMs to compute-1 and
all is well, but I tire of being the 'meatspace scheduler'.

I've looked at the nova.conf in the various nova-xxx containers on the
controllers, but I have failed to discern the root of this issue.

Anyone have a suggestion?

--
MC
Jay Pipes
2018-11-30 13:57:32 UTC
Post by Mike Carden
I'm seeing a similar issue in Queens deployed via tripleo.
Two x86 compute nodes and one ppc64le node and host aggregates for
virtual instances and baremetal (x86) instances. Baremetal on x86 is
working fine.
All VMs get deployed to compute-0. I can live migrate VMs to compute-1
and all is well, but I tire of being the 'meatspace scheduler'.
LOL, I love that term and will have to remember to use it in the future.
Post by Mike Carden
I've looked at the nova.conf in the various nova-xxx containers on the
controllers, but I have failed to discern the root of this issue.
Have you set the placement_randomize_allocation_candidates CONF option
and are still seeing the packing behaviour?
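
If not, the setting is a one-liner in the [placement] section of the nova.conf read by the
scheduler and placement services (on a TripleO deployment the exact file location for the
containerized services will vary, so treat this as a sketch):

[placement]
randomize_allocation_candidates = true

followed by a restart of those services.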

Best,
-jay
