Discussion:
[Openstack] Cinder - no availability zones
Turbo Fredriksson
2016-07-12 19:57:19 UTC
I'm back to trying to get Cinder volumes to work. I think
I've nailed it down to the fact that there are no availability
zones;

bladeA01:~# cinder availability-zone-list
+------+--------+
| Name | Status |
+------+--------+
+------+--------+

However, after two hours of googling, no one seems to know
how to create a zone in Cinder.

All "they" say is to add the zone in cinder.conf under
DEFAULT/storage_availability_zone and DEFAULT/default_availability_zone.

I did that weeks ago, but still nothing (although previously I
didn't explicitly look for it):

[DEFAULT]
storage_availability_zone = nova
default_availability_zone = nova
allow_availability_zone_fallback = true
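
Just to rule out the obvious: the setting only takes effect after a
service restart, so something like this is worth double-checking
(service names assume a Debian/Ubuntu-style packaging; adjust to taste):

# assumes Debian/Ubuntu-style service names
service cinder-scheduler restart
service cinder-volume restart
cinder availability-zone-list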

"They" also mention host aggregates:

bladeA01:~# openstack aggregate list
+----+-------+-------------------+
| ID | Name | Availability Zone |
+----+-------+-------------------+
| 6 | infra | nova |
| 7 | devel | nova |
| 8 | build | nova |
| 9 | tests | nova |
+----+-------+-------------------+

I'm not sure what kind of availability zones these are, but
I have _something_ (I'm guessing Nova zones):

bladeA01:~# openstack availability zone list
+-----------+-------------+
| Zone Name | Zone Status |
+-----------+-------------+
| internal | available |
| nova | available |
| nova | available |
| nova | available |
+-----------+-------------+
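
If I understand the client right, that list mixes the zones from all
the services, so these are probably just the Nova ones. Something like
this should narrow it down to volumes (assuming a python-openstackclient
recent enough to have the flag):

bladeA01:~# openstack availability zone list --volume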
--
Realizing your own significance is like getting a mite
to grasp that it is only visible under a microscope
- Arne Anka
Brent Troge
2016-07-12 20:32:11 UTC
Can you send this output?

cinder service-list

Also, when you create a volume, what happens?
Is there any error?
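
Something minimal is usually enough to reproduce it, e.g.:

cinder create --name probe 1
cinder show probe

('probe' is just a throwaway name; 'cinder show' will at least tell
you the status and, for an admin, which host it was scheduled to, if any.)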
Turbo Fredriksson
2016-07-12 20:44:04 UTC
Post by Brent Troge
cinder service-list
bladeA01:~# cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | bladeA01 | nova | enabled | up | 2016-07-12T20:32:44.000000 | - |
| cinder-scheduler | bladeA01 | nova | enabled | up | 2016-07-12T20:32:43.000000 | - |
| cinder-volume | ***@lvm | nova | enabled | up | 2016-07-12T20:32:39.000000 | - |
| cinder-volume | ***@nfs | nova | enabled | up | 2016-07-12T20:32:39.000000 | - |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
Post by Brent Troge
Also, when you create a volume, what happens?
Is there any error?
If I create a volume in Horizon, it just says "Error".

If I create one from the shell:

cinder create --name test1 --volume-type lvm \
--availability-zone nova 10
[..]
bladeA01:~# cinder list
+--------------------------------------+----------+-------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+-------+------+-------------+----------+-------------+
| 88087aa1-208c-44f1-bf33-0cda7b757274 | error | test1 | 10 | lvm | false | |
| e8f50273-f62e-4cad-8066-725067e062f8 | deleting | test5 | 0 | lvm | true | |
+--------------------------------------+----------+-------+------+-------------+----------+-------------+

The logs say:

==> /var/log/cinder/cinder-scheduler.log <==
2016-07-12 21:40:19.868 15552 DEBUG cinder.scheduler.base_filter [req-6aa3569a-f6d4-4131-a2ad-7ed9feb83791 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Starting with 0 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/base_filter.py:79
2016-07-12 21:40:19.869 15552 INFO cinder.scheduler.base_filter [req-6aa3569a-f6d4-4131-a2ad-7ed9feb83791 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Filter AvailabilityZoneFilter returned 0 host(s)

Same thing if I don't specify the availability zone..
--
Life sucks and then you die
Brent Troge
2016-07-12 21:17:42 UTC
This looks to be an issue with your LVM configuration.
On your volume host, do you see any errors? Look in the Cinder
logs as well as the system logs.

Can you also send your LVM backend configuration?
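
For example, something along these lines on the volume host
(log path assumes stock Debian/Ubuntu packaging):

grep -iE 'error|trace' /var/log/cinder/cinder-volume.log | tail -20
vgs
lvs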
Brent Troge
2016-07-12 21:27:00 UTC
Sometimes I like to stop the cinder-volume service and then run it manually:

service cinder-volume stop

Once the service is stopped, run it by hand:

cinder-volume

Then send your create command and see what errors are thrown
back in the cinder-volume terminal.
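
If the foreground run is too quiet, logging can usually be turned up,
e.g. (config path assumed; --debug is the usual oslo.log flag, so it
should be accepted):

cinder-volume --config-file /etc/cinder/cinder.conf --debug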
Turbo Fredriksson
2016-07-12 21:44:46 UTC
Post by Brent Troge
Then send your create command and see what errors are thrown
back in the cinder-volume terminal.
Didn't say a thing. A second or two after the create command
finished, it said this:

2016-07-12 22:36:39.151 16418 DEBUG oslo_service.periodic_task [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2016-07-12 22:36:39.151 16418 DEBUG cinder.manager [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/dist-packages/cinder/manager.py:168
2016-07-12 22:36:39.153 16418 DEBUG oslo_messaging._drivers.amqpdriver [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] CAST unique_id: 2cf38a9df5154068bde11d66d847af22 FANOUT topic 'cinder-scheduler' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:443
2016-07-12 22:36:39.156 16418 DEBUG oslo_service.periodic_task [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2016-07-12 22:36:39.156 16418 DEBUG cinder.volume.drivers.lvm [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Updating volume stats _update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py:189
2016-07-12 22:36:39.157 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running cmd (subprocess): env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix blade_center execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-12 22:36:39.178 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] CMD "env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix blade_center" returned: 0 in 0.021s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
2016-07-12 22:36:39.179 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] Running cmd (subprocess): env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:344
2016-07-12 22:36:39.199 16418 DEBUG oslo_concurrency.processutils [req-08907182-b887-4461-8d3e-61b43ba681f0 - - - - -] CMD "env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center" returned: 0 in 0.021s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:374
^C2016-07-12 22:36:52.631 16411 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting
2016-07-12 22:36:52.631 16423 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting
2016-07-12 22:36:52.631 16418 INFO oslo_service.service [-] Caught SIGINT signal, instantaneous exiting
--
There are no dumb questions,
unless a customer is asking them.
- Unknown
Turbo Fredriksson
2016-07-12 21:34:01 UTC
Post by Brent Troge
This looks to be an issue with your LVM configuration.
On your volume host, do you see any errors?
None! It looks like everything is perfectly fine..
Post by Brent Troge
Can you also send your LVM backend configuration?
https://github.com/FransUrbo/openstack_bladecenter/blob/master/configs-control/etc/cinder/cinder.conf
--
God gave man both a penis and a brain,
but unfortunately not enough blood supply
to run both at the same time.
- R. Williams
Turbo Fredriksson
2016-07-12 22:34:27 UTC
Post by Brent Troge
From your volume server, send the output of this:
vgscan
bladeA01:~# vgscan
Reading volume groups from cache.
Found volume group "blade_center" using metadata type lvm2
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.
Brent Troge
2016-07-12 22:54:08 UTC
Can you also run 'pvscan' on the volume server and send that output?

Does your scheduler even inventory the volume server?

Do you see any references to 'free_capacity' in your scheduler logs?
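
Something like this should answer the last one (log path taken from
your earlier output):

grep -i free_capacity /var/log/cinder/cinder-scheduler.log | tail -5

If nothing comes back, the scheduler probably isn't receiving capability
reports from the volume service at all.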
Turbo Fredriksson
2016-07-12 23:17:16 UTC
Post by Brent Troge
Can you also run 'pvscan' on the volume server and send that output?
Does your scheduler even inventory the volume server?
Do you see any references to 'free_capacity' in your scheduler logs?
With _a lot_ of trial and error (and really reading every
single character of the debug/log output from a restart and
a create), it might have been this:


2016-07-12 23:44:16.711 9199 DEBUG cinder.scheduler.filters.capabilities_filter [req-ae7c0c47-9d08-4d81-a94b-56a4feaf2922 4b0e25c70d2b4ad6ba4c50250f2f0b0b 04ee0e71babe4fd7aa16c3f64a8fca89 - - -] extra_spec requirement 'LVM_iSCSI' does not match 'LVM' _satisfies_extra_specs /usr/lib/python2.7/dist-packages/cinder/scheduler/filters/capabilities_filter.py:59


The "LVM_iSCSI" was part of the '[zol]' driver and there was no
"volume_backend_name" set for the '[lvm]' one..
https://github.com/FransUrbo/openstack_bladecenter/commit/63fe97399bdd0a49fcf30dcf670cf40e25b4306c
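
In other words, the backend name the driver reports has to match what
the volume type asks for. Roughly what it should look like (a sketch of
the relevant bits only, not my full config):

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
volume_backend_name = LVM

and the matching volume type:

bladeA01:~# cinder type-key lvm set volume_backend_name=LVM
bladeA01:~# cinder extra-specs-list

(Which side you change doesn't really matter, as long as the two
values agree.)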


After fixing the config file, I can now create a dummy volume and a
volume from an image, AND the expected "nova" availability zone
shows up in Horizon!


However, I now (again!) get "Block Device Mapping is Invalid."
when trying to create an instance with a volume from an image.
The volume is created, but then deleted, and the instance create fails.

I'm going to continue this tomorrow and read the logs for that
more closely.
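
The plan, roughly: since Cinder itself now seems happy, grep the Nova
side for the block device mapping failure, something like:

bladeA01:~# grep -i 'block device mapping' /var/log/nova/nova-api.log /var/log/nova/nova-compute.log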

Thanx for the help!
--
Realizing your own significance is like getting a mite
to grasp that it is only visible under a microscope
- Arne Anka
