Discussion:
[Openstack] Create instance fails on creating block device - Block Device Mapping is Invalid
Turbo Fredriksson
2016-06-16 23:18:06 UTC
I'm trying out my newly installed OpenStack system and I'm having
problems starting my first instance.

----- s n i p -----
Build of instance 5193c2d9-0aaf-4f84-b108-f6884d97b571 aborted: Block Device Mapping is Invalid.
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance filter_properties) File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2083, in _build_and_run_instance 'create.error', fault=e) File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ self.force_reraise() File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise six.reraise(self.type_, self.value, self.tb) File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2048, in _build_and_run_instance block_device_mapping) as resources: File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__ return self.gen.next() File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2206, in _build_resources reason=e.format_message())
----- s n i p -----

After cleaning all the irrelevant stuff out of the logs, I see:

----- s n i p -----
INFO cinder.api.v2.volumes Create volume of 5 GB
INFO cinder.volume.api Volume created successfully.
INFO cinder.volume.flows.manager.create_volume Volume 6b1dace4-78e1-452b-a455-c0fc882374f3: being created as image with specification: {'status': u'creating', 'image_location': (None, None), 'volume_size': 5, 'volume_name': 'volume-6b1dace4-78e1-452b-a455-c0fc882374f3', 'image_id': u'8c15b5e8-9a67-4784-ad7a-0b1cc7b0bdec', 'image_service': <cinder.image.glance.GlanceImageService object at 0x7fa4f31d8ad0>, 'image_meta': {'status': u'active', 'name': u'fedora23', 'deleted': False, 'container_format': u'docker', 'created_at': datetime.datetime(2016, 6, 15, 20, 38, 43, tzinfo=<iso8601.Utc>), 'disk_format': u'qcow2', 'updated_at': datetime.datetime(2016, 6, 15, 20, 38, 45, tzinfo=<iso8601.Utc>), 'id': u'8c15b5e8-9a67-4784-ad7a-0b1cc7b0bdec', 'owner': u'd524c8dfd9e9449798ebac9b025f8de6', 'min_ram': 0, 'checksum': u'38d62e2e1909c89f72ba4d5f5c0005d5', 'min_disk': 0, 'is_public': True, 'deleted_at': None, 'properties': {u'hypervisor_type': u'docker', u'architecture': u'x86_64'}, 'size': 234363392}}
INFO cinder.image.image_utils Image download 223.00 MB at 35.35 MB/s
WARN manila.context [-] Arguments dropped when creating context: {u'read_only': False, u'domain': None, u'show_deleted': False, u'user_identity': u'- - - - -', u'project_domain': None, u'resource_uuid': None, u'user_domain': None}.
WARN manila.context [-] Arguments dropped when creating context: {u'read_only': False, u'domain': None, u'show_deleted': False, u'user_identity': u'- - - - -', u'project_domain': None, u'resource_uuid': None, u'user_domain': None}.
INFO cinder.image.image_utils Converted 3072.00 MB image at 31.59 MB/s
INFO cinder.volume.flows.manager.create_volume Volume volume-6b1dace4-78e1-452b-a455-c0fc882374f3 (6b1dace4-78e1-452b-a455-c0fc882374f3): created successfully
INFO cinder.volume.manager Created volume successfully.
INFO cinder.api.v2.volumes Delete volume with id: 6b1dace4-78e1-452b-a455-c0fc882374f3
INFO cinder.volume.api Delete volume request issued successfully.
INFO eventlet.wsgi.server 10.0.4.5 "DELETE /v2/d524c8dfd9e9449798ebac9b025f8de6/volumes/6b1dace4-78e1-452b-a455-c0fc882374f3 HTTP/1.1" status: 202 len: 211 time: 0.1300900
INFO cinder.volume.targets.iscsi Skipping remove_export. No iscsi_target is presently exported for volume: 6b1dace4-78e1-452b-a455-c0fc882374f3
INFO cinder.volume.utils Performing secure delete on volume: /dev/mapper/blade_center-volume--6b1dace4--78e1--452b--a455--c0fc882374f3
----- s n i p -----

Full log at http://bayour.com/misc/openstack_instance_create-log.txt.


The web GUI says (this might be from another test, but I always
get the same):

----- s n i p -----
Error: Failed to perform requested operation on instance
"jessie-test", the instance has an error status: Please try again
later [Error: Build of instance a4e1deaa-cdf0-4fc7-8c54-579868c962c3
aborted: Block Device Mapping is Invalid.].
----- s n i p -----



I can see nothing relevant in this that would make it fail!
The only thing that caught my eye was that it isn't removing
the iSCSI target, because there isn't one..

This is (most of) my cinder.conf file:

----- s n i p -----
[DEFAULT]
my_ip = 10.0.4.1
storage_availability_zone = nova
default_availability_zone = nova
enabled_backends = lvm
iscsi_target_prefix = iqn.2010-10.org.openstack:
iscsi_ip_address = $my_ip
iscsi_port = 3260
iscsi_iotype = blockio
iscsi_write_cache = on
volume_group = blade_center
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = blade_center
iscsi_protocol = iscsi
iscsi_helper = tgtadm
----- s n i p -----


PS. Creating the instance from an already existing, empty
volume didn't work either. Same message, and even less
information in the log.
--
As soon as you find a product that you really like,
they will stop making it.
- Wilson's Law
Eugen Block
2016-06-17 12:12:47 UTC
I also had some trouble getting volume-backed instances to boot. I use
the Xen hypervisor and found out that the instance was assigned a device
name of "vda" (which is the default) instead of xvda; I filed a bug
report for this. Do you have nova-compute logs? I can't find them at your
link. They should give a hint about the device name or other possible
causes. Since the volume is created but immediately destroyed, I guess
nova has a problem with the block device.
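If it turns out to be the same kind of problem, a possible workaround is
to set the device name explicitly in the block device mapping when
booting. Untested, and the IDs are placeholders, but roughly:

----- s n i p -----
nova boot --flavor m1.tiny \
    --block-device source=image,id=<IMAGE-ID>,dest=volume,size=5,bootindex=0,device=xvda \
    --nic net-id=<NET-ID> <NAME>
----- s n i p -----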

Regards,
Eugen
--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : ***@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983
Turbo Fredriksson
2016-06-17 13:05:33 UTC
Post by Eugen Block
Do you have nova-compute logs?
They don't say a thing, so I'm guessing it never gets
that far.

If I'm quick, I can see the LVM volume being created
successfully (which the log also indicates).
Eugen Block
2016-06-17 13:38:58 UTC
Then I would turn on debug logs for cinder and see if there is more
information on why it's deleting the volumes before attaching them. I
don't even see the attempt to attach it. If it works, these steps
should be processed:

- Created volume successfully.
- Initialize volume connection completed successfully.
- Attach volume completed successfully.
- Deleted volume successfully.
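
Turning debug logging on should just be a matter of setting these two
options in the [DEFAULT] section of cinder.conf on the volume node and
restarting the cinder services:

----- s n i p -----
[DEFAULT]
debug = True
verbose = True
----- s n i p -----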

Regards,
Eugen
Turbo Fredriksson
2016-06-17 14:32:38 UTC
Neither can I! And running with debugging doesn't
show anything either :(

The log literally says (no changes, no additions or removals!):

----- s n i p -----
2016-06-17 15:12:39.335 8046 DEBUG cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeOnFinishTask;volume:create, create.end' (54ad7ada-38eb-4fe1-9efd-53e6f2d35f26) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:189
2016-06-17 15:12:39.583 8046 INFO cinder.volume.flows.manager.create_volume [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Volume 8892642a-6e12-48ae-ba94-8cc897a4acd5 (8892642a-6e12-48ae-ba94-8cc897a4acd5): created successfully
2016-06-17 15:12:39.585 8046 DEBUG cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Task 'cinder.volume.flows.manager.create_volume.CreateVolumeOnFinishTask;volume:create, create.end' (54ad7ada-38eb-4fe1-9efd-53e6f2d35f26) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:178
2016-06-17 15:12:39.589 8046 DEBUG cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Flow 'volume_create_manager' (45f829ab-d4bd-480a-a78f-c5e8eaa38598) transitioned into state 'SUCCESS' from state 'RUNNING' _flow_receiver /usr/lib/python2.7/dist-packages/taskflow/listeners/logging.py:140
2016-06-17 15:12:39.591 8046 INFO cinder.volume.manager [req-b9a6a699-c178-427b-b1f2-bf62dce2578e 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Created volume successfully.
[NOTE: Here everything was a-ok!!]
2016-06-17 15:12:43.378 8046 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: None reply to None __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
[NOTE: And here it starts deleting the volume!]
2016-06-17 15:12:43.383 8046 DEBUG oslo_concurrency.lockutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Lock "8892642a-6e12-48ae-ba94-8cc897a4acd5-delete_volume" acquired by "cinder.volume.manager.lvo_inner2" :: waited 0.001s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
2016-06-17 15:12:43.614 8046 INFO cinder.volume.targets.iscsi [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Skipping remove_export. No iscsi_target is presently exported for volume: 8892642a-6e12-48ae-ba94-8cc897a4acd5
2016-06-17 15:12:43.615 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5 execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:326
2016-06-17 15:12:43.780 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5" returned: 0 in 0.166s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:356
2016-06-17 15:12:43.783 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvdisplay --noheading -C -o Attr blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5 execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:326
2016-06-17 15:12:43.948 8046 DEBUG oslo_concurrency.processutils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvdisplay --noheading -C -o Attr blade_center/8892642a-6e12-48ae-ba94-8cc897a4acd5" returned: 0 in 0.165s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:356
2016-06-17 15:12:43.950 8046 INFO cinder.volume.utils [req-661beca9-723a-4565-99b2-70b5ed862eca 067a3363c5984f81a7cfa2eda5d1ebf3 d524c8dfd9e9449798ebac9b025f8de6 - - -] Performing secure delete on volume: /dev/mapper/blade_center-8892642a--6e12--48ae--ba94--8cc897a4acd5
----- s n i p -----
--
Imagine you're an idiot and then imagine you're in
the government. Oh, sorry. Now I'm repeating myself
- Mark Twain
Eugen Block
2016-06-20 10:15:35 UTC
Hi list,

I am seeing a strange behaviour of my cloud and could use some help on this.
I have a project containing 2 VMs: one is running in an external
network, the other is in a tenant network with a floating IP. The
security group allows ping and ssh.
Now there are several ways to break or restore the connectivity but I
can't find the cause.

1. Boot a new instance on the same compute node (but a different
project, no matter if same or different network). Connectivity to both
existing VMs is lost; however, from within the instance I can still
get out! Restarting neutron-linuxbridge-agent gets it right again.

2. While connectivity is broken, changing the security group rules
(adding or deleting a rule) for the default sec-group has the same
effect: although neutron-linuxbridge-agent is not restarted after
that, the VMs are reachable again.

3. Different project, different network, same compute node: deleting a
running instance also leads to a connectivity loss for the existing VMs.

4. In a way I was able to reproduce this issue: on a different compute
node and in a different project I launched an instance in the same
external network last Friday. The instance was reachable; I shut it
down. Today I booted it again and it was not reachable. Restarting the
linuxbridge-agent fixed it again.

I took a look at iptables and compared the output when the instances
are reachable and when they are not. Somehow the neutron rules aren't
there. Following the rule tree to the bottom, it leads to a DROP rule
for all packets.

---cut here---
compute1:~ # iptables -L FORWARD -nv | more
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                   prot opt in  out  source     destination
    0     0 nova-filter-top          all  --  *   *    0.0.0.0/0  0.0.0.0/0
    0     0 nova-compute-FORWARD     all  --  *   *    0.0.0.0/0  0.0.0.0/0

compute1:~ # systemctl restart openstack-neutron-linuxbridge-agent.service

compute1:~ # iptables -L FORWARD -nv | more
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                   prot opt in  out  source     destination
   14  1176 neutron-filter-top       all  --  *   *    0.0.0.0/0  0.0.0.0/0
   14  1176 neutron-linuxbri-FORWARD all  --  *   *    0.0.0.0/0  0.0.0.0/0
    0     0 nova-filter-top          all  --  *   *    0.0.0.0/0  0.0.0.0/0
    0     0 nova-compute-FORWARD     all  --  *   *    0.0.0.0/0  0.0.0.0/0
---cut here---
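
For comparing the two states it also helps to look at just the neutron
chains, for example (the neutron-linuxbri-FORWARD chain is only there
while the agent's rules are in place):

---cut here---
compute1:~ # iptables-save | grep -c neutron
compute1:~ # iptables -L neutron-linuxbri-FORWARD -nv
---cut here---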

What is going on with neutron? I have been seeing this for about two
weeks now; I updated all nodes last Friday but the problem still exists.

Any help is appreciated!

Regards,
Eugen
Eugen Block
2016-06-20 12:01:51 UTC
Is it possible to create an empty volume? Without nova or glance, just
a volume. If that works and the volume is not deleted immediately, you
could try to attach it to a running instance to see if nova can handle
it.
Do you see the iscsi session on your compute node?
Then you could try to create a volume from an image; that way you see
if glance and cinder are working properly together. If that also works
it could be an issue with nova, maybe some misconfiguration.
Turbo Fredriksson
2016-06-20 14:01:21 UTC
Post by Eugen Block
Is it possible to create an empty volume?
Yes.
Post by Eugen Block
try to attach it to a running instance
Can't start any instances because I can't create volumes.. :(
Post by Eugen Block
Do you see the iscsi session on your compute node?
No.
Post by Eugen Block
Then you could try to create a volume from an image, that way you see if glance and cinder are working properly together.
How do I do that from the shell?
--
Michael Jackson is not going to be buried or cremated
but recycled into shopping bags so he can remain white,
plastic and dangerous for kids to play with.
Eugen Block
2016-06-20 14:27:41 UTC
Post by Turbo Fredriksson
Can't start any instances because I can't create volumes
Can't you boot an instance without cinder? You could edit nova.conf to
use the local file system, just to have a running instance. If that
works you can switch to another backend.
Post by Turbo Fredriksson
How do I do that from the shell?
cinder create --image <IMAGE-ID> --name <NAME> <SIZE>
Post by Turbo Fredriksson
Post by Eugen Block
Do you see the iscsi session on your compute node?
No.
Try debugging your iscsi connection, maybe first without openstack. If
you aren't able to log in to a session then openstack will also fail, I
guess...
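
With open-iscsi that would be roughly the following; the portal is the
iscsi_ip_address from your cinder.conf, and the IQN is a placeholder for
whatever tgtd actually exports:

----- s n i p -----
iscsiadm -m discovery -t sendtargets -p 10.0.4.1:3260
iscsiadm -m node -T <TARGET-IQN> -p 10.0.4.1:3260 --login
iscsiadm -m session
----- s n i p -----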

In my environment, I first tried to get all services running and
working without external backends: cinder, glance and nova all ran on
local storage. Then I tried other backends for cinder (iscsi); now all
services use ceph.
Turbo Fredriksson
2016-06-20 15:31:08 UTC
Post by Eugen Block
Can't you boot an instance without cinder?
Don't know, can I??
Post by Eugen Block
You could edit nova.conf to use the local file system, just to have a running instance. If that works you can switch to another backend.
How?
Post by Eugen Block
cinder create --image <IMAGE-ID> --name <NAME> <SIZE>
I'll try that, thanx. How do you do that with the "openstack" command?
Post by Eugen Block
Try debugging your iscsi connection, maybe first without openstack.
From what I can see, it doesn't even start sharing via iSCSI..
Post by Eugen Block
In my environment, I first tried to get all services running and working without external backends, cinder, glance and nova all ran on local storage.
Didn't even know you could do that. Thought you HAD to use cinder/swift..

Please point me to a faq/howto/doc on how to do that, thanx!
Post by Eugen Block
Then I tried other backends for cinder (iscsi), now all services use ceph.
ceph?
--
Life sucks and then you die
Eugen Block
2016-06-21 07:26:59 UTC
Post by Turbo Fredriksson
Post by Eugen Block
Can't you boot an instance without cinder?
Don't know, can I??
Well, you should ;-) How do you try to boot your instance, from CLI or
Horizon? If it's Horizon, you would have to NOT click the button
"Create a new volume --> Yes" ;-) If it's CLI, it's sufficient to
execute "nova boot --flavor <FLAVOR> --image <IMAGE-ID> --nic
net-id=<NET-ID> <NAME>" (--nic is only needed if you have multiple
networks available).
This way you avoid creating a volume.
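So, concretely, something like this (the name is just an example):

----- s n i p -----
nova boot --flavor m1.tiny --image <IMAGE-ID> --nic net-id=<NET-ID> test-vm
----- s n i p -----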
Post by Turbo Fredriksson
Post by Eugen Block
You could edit nova.conf
How?
It's usually the default, although I'm really not an expert in
OpenStack. But if you simply set up nova on the control and compute
nodes following an install guide, it should bring you there.
I followed
http://docs.openstack.org/mitaka/install-guide-obs/nova-controller-install.html,
there aren't many options to configure and it defaults to local file
storage.
Post by Turbo Fredriksson
From what I can see, it doesn't even start sharing via iSCSI
You should try to fix that before you try to use it with openstack.
Post by Turbo Fredriksson
Didn't even know you could do that. Thought you HAD to use cinder/swift..
Please point me to a faq/howto/doc on how to do that, thanx!
I used this guide:
http://docs.openstack.org/mitaka/install-guide-obs/environment-networking-storage-cinder.html
In the section for block storage it says "Block storage node
(Optional)", so you wouldn't have to, but I guess it makes sense in
the long term. But as I already said, first you should try to get an
instance running at all before using another backend.


Regards,
Eugen
Abhishek Shrivastava
2016-06-21 10:40:39 UTC
Hi Turbo,

The first thing I want to know:

- Which VM are you creating (i.e., which OS image are you using)?
- What size are you using?

Secondly:

- Which flavor are you using for VM creation?
--
Thanks & Regards,
Abhishek
Cloudbyte Inc. <http://www.cloudbyte.com>
Turbo Fredriksson
2016-06-21 11:06:59 UTC
Post by Abhishek Shrivastava
The first thing I want to know:
- Which VM are you creating (i.e., which OS image are you using)?
I've tried both the CirrOS and Debian GNU/Linux Jessie images.

http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
http://cdimage.debian.org/cdimage/openstack/8.5.0/debian-8.5.0-openstack-amd64.qcow2
Post by Abhishek Shrivastava
- What size are you using?
Size? I've tried creating a volume from those images from 2GB to 20GB.
Post by Abhishek Shrivastava
- Which flavor are you using for VM creation?
My own take on the m1.tiny flavor:

openstack flavor create --ram 1024 --disk 10 --vcpus 1 --disk 5 m1.tiny
--
I love deadlines. I love the whooshing noise they
make as they go by.
- Douglas Adams
Abhishek Shrivastava
2016-06-21 11:19:42 UTC
Have you tried any other flavors?

For instance, if you are creating a 1GB volume then you can go for the
m1.tiny flavor.

So try creating a VM with a 1GB boot volume and the m1.tiny flavor,
and see if it works.
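
From the CLI that would be something like this (untested; IDs and names
are placeholders):

----- s n i p -----
openstack volume create --size 1 --image <IMAGE-ID> test1gb
nova boot --flavor m1.tiny --boot-volume <VOLUME-ID> --nic net-id=<NET-ID> test-vm
----- s n i p -----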
Turbo Fredriksson
2016-06-21 11:28:33 UTC
Post by Abhishek Shrivastava
Have you tried any other flavors?
No, I never saw the point. The resources I specified were well within
the flavor's limits. And the error was "Block Device Mapping is Invalid";
I cannot see how changing the flavor would change that.
--
System administrators motto:
You're either invisible or in trouble.
- Unknown


Eugen Block
2016-06-21 12:36:35 UTC
If it was the flavor, you would get different errors, something like
"flavor disk too small" or "out of memory". Again, I recommend
launching an instance on local disk to see if that works, then fixing
the iscsi issue to be able to create volumes at all: first empty
volumes, then from an image, and so on.
Cynthia Lopes
2016-06-21 14:17:51 UTC
Hi,

First of all, I think this question did not get answered:

- I'll try that, thanx. How do you do that with the "openstack" command?

If not, the command is: openstack volume create --size (size in GB)
--image (image name or id) volume_name

Just for info, the cinder command was not exact; it should be: cinder create
--image-id <IMAGE-ID> --display-name <NAME> <SIZE>


I agree with Eugen that you should make sure you can create a volume and
attach it to a VM, to help understand what your problem is.
This guide explains the ephemeral storage options:
https://platform9.com/support/openstack-tutorial-storage-options-and-use-cases/

By default you should be able to create VMs with ephemeral disks (not
cinder ones).
Usually you can specify the directory where VM instance disks will be
stored on the compute node with the nova.conf option 'instances_path' in
the [DEFAULT] section. By default it should point to
'/var/lib/nova/instances/'. It is the default option so, even if it is
not there, this should work; see the snippet below.
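
For example (this only spells out the defaults):

----- s n i p -----
# /etc/nova/nova.conf on the compute node
[DEFAULT]
instances_path = /var/lib/nova/instances
----- s n i p -----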
Nova compute config options:
http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html


The command to create the VM with an ephemeral disk (nova local storage and
not cinder) is:
openstack server create --image (image id or name) --flavor (flavor id or
name) vm_name


Concerning the flavor, I think the flavor you use should have the same
disk size as the volume. At least, for me, when I try to boot a VM from
a volume that is not the same size as the flavor's disk, I get a
BadRequest error.

Let us know if you manage to boot a VM so you can try to attach a volume to
it.

Good luck with all that.

Kind regards,
Cynthia
Turbo Fredriksson
2016-06-22 21:56:26 UTC
Now that my authentication problems seem to be fixed, it's back on track with
trying to boot my first instance..
Post by Cynthia Lopes
If not, the command is: openstack volume create --size (size in GB) --image
(image name or id) volume_name
Just for info, the cinder command was not exact; it should be: cinder create
--image-id <IMAGE-ID> --display-name <NAME> <SIZE>
Thanx.
Post by Cynthia Lopes
I agree with Eugen that you should make sure you can create a volume and
attach it to a VM, to help understand what your problem is.
Ok, so I created an empty, bootable volume. Worked just fine it seems.

I then used that when creating the instance (from Horizon).

Still the same error - Block Device Mapping is Invalid.

----- s n i p -----
bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| c16975ad-dd45-41d7-b0a9-cbd0849f80e4 | test | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
bladeA01b:~# openstack volume show test
+--------------------------------+--------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | true |
| consistencygroup_id | None |
| created_at | 2016-06-22T20:48:31.000000 |
| description | |
| encrypted | False |
| id | c16975ad-dd45-41d7-b0a9-cbd0849f80e4 |
| migration_status | None |
| multiattach | False |
| name | test |
| os-vol-host-attr:host | ***@lvm#LVM_iSCSI |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 2985b96e27f048cd92a18db0dd03aa23 |
| properties | |
| replication_status | disabled |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | None |
| updated_at | 2016-06-22T20:48:48.000000 |
| user_id | 0b7e5b0653084efdad5d67b66f2cf949 |
+--------------------------------+--------------------------------------+
----- s n i p -----

If I understand you correctly, this is a Cinder volume, right? Because of
the "@lvm.." part?

How can I create a local volume?

Looking under "System Information -> Block Storage Services" I see only
Cinder services..

----- s n i p -----
Name Host Zone Status State Last Updated
cinder-backup bladeA01b nova Enabled Up 0 minutes
cinder-scheduler bladeA01b nova Enabled Up 0 minutes
cinder-volume ***@lvm nova Enabled Up 0 minutes
cinder-volume ***@nfs nova Enabled Down 4 hours, 13 minutes
----- s n i p -----
Post by Cynthia Lopes
https://platform9.com/support/openstack-tutorial-storage-options-and-use-cases/
Thanx, I've read something similar so I'm aware of the differences and
what they do. This one I'm going to read in more detail, because it HAD
more detail! :)
Post by Cynthia Lopes
Usually you can specify the directory where VM instances disks will be
stored in the compute node on nova.conf option 'instances_path' in
[DEFAULT] session.
It was commented out, but just for the sake of it I un-commented it..
Post by Cynthia Lopes
http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html
Thanx. That was actually halfway to being "documentation". I'll bookmark
that.
Post by Cynthia Lopes
The command to create the VM with an ephemeral disk (nova local storage and
openstack server create --image (image id or name) --flavor (flavor id or
name) vm_name
----- s n i p -----
bladeA01b:/var/tmp# wget --quiet http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
bladeA01b:/var/tmp# openstack image create --public --protected --disk-format qcow2 \
     --container-format docker --property architecture=x86_64 \
     --property hypervisor_type=docker \
     --file cirros-0.3.4-x86_64-disk.img cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | docker |
| created_at | 2016-06-22T21:23:03Z |
| disk_format | qcow2 |
| file | /v2/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8/file |
| id | d4d913c3-21f3-4e7d-932c-2cb35c8131e8 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 2985b96e27f048cd92a18db0dd03aa23 |
| properties | architecture='x86_64', hypervisor_type='docker' |
| protected | True |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2016-06-22T21:23:04Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
bladeA01b:/var/tmp# openstack server create --image cirros --flavor m1.tiny test3
Multiple possible networks found, use a Network ID to be more specific. (HTTP 409) (Request-ID: req-381a6df8-cd8b-474a-89c4-8a5935b3d7f8)
bladeA01b:/var/tmp# openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+------------+--------------------------------------+
| fb1a3653-44d9-4f98-a357-c87406a8ea47 | physical | 5e3ea098-975d-460c-b313-61c11b2175d3 |
| 2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d | network-99 | 6ef5d993-2796-4adf-a724-eae5f5d1cc53 |
+--------------------------------------+------------+--------------------------------------+
bladeA01b:/var/tmp# openstack server create --image cirros --flavor m1.tiny --nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d test3
+--------------------------------------+------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | whateversecret |
| config_drive | |
| created | 2016-06-22T21:26:55Z |
| flavor | m1.tiny (5936ba55-7d76-4b80-8b3a-73b458b306f2) |
| hostId | |
| id | 860613fe-3834-4f72-909b-5fb4b7ff2932 |
| image | cirros (d4d913c3-21f3-4e7d-932c-2cb35c8131e8) |
| key_name | None |
| name | test3 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 2985b96e27f048cd92a18db0dd03aa23 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2016-06-22T21:26:55Z |
| user_id | 0b7e5b0653084efdad5d67b66f2cf949 |
+--------------------------------------+------------------------------------------------+
[waited a little while]
bladeA01b:/var/tmp# openstack server show test3 | grep fault
| fault | {u'message': u'Build of instance 860613fe-3834-4f72-909b-5fb4b7ff2932 aborted: Cannot load repository file: Connection to glance host http://10.0.4.3:9292 failed: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnecti', u'code': 500, u'details': u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance\n filter_properties)\n File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2083, in _build_and_run_instance\n \'create.error\', fault=e)\n File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__\n self.force_reraise()\n File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in force_reraise\n six.reraise(self.type_, self.value, self.tb)\n File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2067, in _build_and_run_instance\n instance=instance)\n File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__\n self.gen.throw(type, value, traceback)\n File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2244, in _build_resources\n reason=six.text_type(exc))\n', u'created': u'2016-06-22T21:27:28Z'} |
----- s n i p -----

Ok, that's different! I'm not running Glance on my Compute, only on my Control.

Which of these should I run on the Compute and which one on the Control?

The documentation (one of many I follow: http://docs.openstack.org/draft/install-guide-debconf/common/get_started_image_service.html) doesn't say, only which ones to install
on the Control.

----- s n i p -----
bladeA03b:/etc/nova# apt-cache search glance | grep ^glance
glance - OpenStack Image Registry and Delivery Service - Daemons
glance-api - OpenStack Image Registry and Delivery Service - API server
glance-common - OpenStack Image Registry and Delivery Service - common files
glance-glare - OpenStack Artifacts - API server
glance-registry - OpenStack Image Registry and Delivery Service - registry server
----- s n i p -----

Currently, I have all of them only on the Control..
Post by Cynthia Lopes
Concerning the flavor, I think the flavor you use should have the same disk
size as the disk.
Ok, I'll keep that in mind, thanx.


Now, this might be a stupid question, but it actually only occurred to me just now
when I was looking at that missing net error. I haven't really set up my network, just
"winged" it. I'm pretty sure it's not even close to working (I need to do more
studying in the matter - I still don't have a clue about how things are supposed
to work on the OpenStack side of things).

I've postponed it because I desperately need ANY success story - creating an
instance, even if it won't technically work, would help a lot with that. I figured
it should at least TRY to start.. And I _ASSUME_ (!!) that as long as the Control
can talk to the Compute and "tell" it what to do (such as "attach this volume/image"),
it should at least be possible to create the instance. I'm guessing the networking
(Neutron) in OpenStack is for the _instance_, not for administration etc. Or did I
misunderstand (the little I've read and actually understood about it :)?
--
Thinking before you speak is like wiping your arse
before you shit.
- Arne Anka
Eugen Block
2016-06-23 11:26:41 UTC
Post by Turbo Fredriksson
How can I create a local volume?
You have probably configured your cinder.conf to use lvm as backend:

control1:~ # grep -r enabled_backends /etc/cinder/
/etc/cinder/cinder.conf:#enabled_backends = lvm
/etc/cinder/cinder.conf:enabled_backends = rbd   --> that's what I use currently

I'm not sure if it would work, it's been a while since I used local
storage, but if you just comment the enabled_backends option out and
restart the cinder services, I believe it would create local volumes.
But still, I would postpone volumes for now if you want to bring an
instance up at all, and try to get nova to work with glance first.
Post by Turbo Fredriksson
Ok, that's different! I'm not running Glance on my Compute, only on my Control.
Glance is not supposed to run on a compute node, it runs on a control
node. Reading the error message it seems that you have configured your
glance host, since nova tries to connect to it, but have you also
configured the endpoints according to
http://docs.openstack.org/draft/install-guide-debconf/debconf/debconf-api-endpoints.html?
What's the output of "openstack endpoint list | grep glance"?
Post by Turbo Fredriksson
[waited a little while]
How long did you wait? Timeout problem? Make sure that nothing blocks
the requests (proxy?). What response do you get if you execute

control1:~ # curl http://<YOUR-CONTROLLER>:9292
Post by Turbo Fredriksson
Which of these should I run on the Compute and which one on the Control?
On top of every "install and configure" page there is a statement
about where to install the required services; for example, the glance
page says:

"This section describes how to install and configure the Image
service, code-named glance, on the controller node."

Or if you continue to the compute service, which has several
components, it differs between control and compute node:

"This section describes how to install and configure the Compute
service, code-named nova, on the controller node."

and

"This section describes how to install and configure the Compute
service on a compute node."
Post by Turbo Fredriksson
Now, this might be a stupid question, but it actually only occurred to me just now
when I was looking at that missing net error.
I don't think this should be a problem if you have at least a subnet
assigned to the network, which is true in your case. I just tested
that: the instance boots into a newly created network without any
further configuration. So in your case it's the missing connection to
glance; if you fix that, we'll see what's next ;-)
Post by Turbo Fredriksson
Now that my authentication problems seems to be fixed, it's back on track with
trying to boot my first instance..
Post by Cynthia Lopes
If not, the command is: openstack volume create --size (size in GB) --image
(image name or id) volume_name
Just for info the cinder command was not exact, it should be: cinder create
--image*-id *<IMAGE-ID> *--display-name* <NAME> <SIZE>
Thanx.
Post by Cynthia Lopes
I agree with Eugen that you should make sure you can create a volume and
attach to a VM to help understand what your problem is.
Ok, so I created an empty, bootable volume. Worked just fine it seems.
I then used that when creating the instance (from Horizon).
Still the same error - Block Device Mapping is Invalid.
----- s n i p -----
bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| c16975ad-dd45-41d7-b0a9-cbd0849f80e4 | test | available |
5 | |
+--------------------------------------+--------------+-----------+------+-------------+
bladeA01b:~# openstack volume show test
+--------------------------------+--------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | true |
| consistencygroup_id | None |
| created_at | 2016-06-22T20:48:31.000000 |
| description | |
| encrypted | False |
| id | c16975ad-dd45-41d7-b0a9-cbd0849f80e4 |
| migration_status | None |
| multiattach | False |
| name | test |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 2985b96e27f048cd92a18db0dd03aa23 |
| properties | |
| replication_status | disabled |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | None |
| updated_at | 2016-06-22T20:48:48.000000 |
| user_id | 0b7e5b0653084efdad5d67b66f2cf949 |
+--------------------------------+--------------------------------------+
----- s n i p -----
If I understand you correctly, this is a Cinder volume, right? Which leads me to:
how can I create a local volume?
Looking under "System Information -> Block Storage Services" I see only
Cinder services..
----- s n i p -----
Name Host Zone Status State Last Updated
cinder-backup bladeA01b nova Enabled Up 0 minutes
cinder-scheduler bladeA01b nova Enabled Up 0 minutes
----- s n i p -----
Post by Cynthia Lopes
https://platform9.com/support/openstack-tutorial-storage-options-and-use-cases/
Thanx, I've read something similar so I'm aware of the differences and
what they do. This one I'm going to read in more detail, because it HAD
more detail! :)
Post by Cynthia Lopes
Usually you can specify the directory where VM instance disks will be
stored on the compute node with the nova.conf option 'instances_path' in
the [DEFAULT] section.
It was commented out, but just for the sake of it I un-commented it..
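For reference, the stanza would look something like this (the path shown
is the package default, not necessarily the right one for you):

----- s n i p -----
# /etc/nova/nova.conf on the compute node
[DEFAULT]
instances_path = /var/lib/nova/instances
----- s n i p -----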
Post by Cynthia Lopes
http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html
Thanx. That was actually halfway to being "documentation". I'll bookmark
that.
Post by Cynthia Lopes
The command to create the VM with an ephemeral disk (nova local storage) is:
openstack server create --image (image id or name) --flavor (flavor id or
name) vm_name
----- s n i p -----
bladeA01b:/var/tmp# wget --quiet
http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
bladeA01b:/var/tmp# openstack image create --public --protected --disk-format qcow2 \
--container-format docker --property architecture=x86_64 \
--property hypervisor_type=docker \
--file cirros-0.3.4-x86_64-disk.img cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | docker |
| created_at | 2016-06-22T21:23:03Z |
| disk_format | qcow2 |
| file | /v2/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8/file |
| id | d4d913c3-21f3-4e7d-932c-2cb35c8131e8 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 2985b96e27f048cd92a18db0dd03aa23 |
| properties | architecture='x86_64', hypervisor_type='docker' |
| protected | True |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2016-06-22T21:23:04Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
bladeA01b:/var/tmp# openstack server create --image cirros --flavor m1.tiny test3
Multiple possible networks found, use a Network ID to be more specific.
(Request-ID: req-381a6df8-cd8b-474a-89c4-8a5935b3d7f8)
bladeA01b:/var/tmp# openstack network list
+--------------------------------------+------------+--------------------------------------+
| ID                                   | Name       | Subnets                              |
+--------------------------------------+------------+--------------------------------------+
| fb1a3653-44d9-4f98-a357-c87406a8ea47 | physical   | 5e3ea098-975d-460c-b313-61c11b2175d3 |
| 2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d | network-99 | 6ef5d993-2796-4adf-a724-eae5f5d1cc53 |
+--------------------------------------+------------+--------------------------------------+
bladeA01b:/var/tmp# openstack server create --image cirros --flavor
m1.tiny --nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d test3
+--------------------------------------+------------------------------------------------+
| Field                                | Value                                          |
+--------------------------------------+------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                         |
| OS-EXT-AZ:availability_zone          | nova                                           |
| OS-EXT-SRV-ATTR:host                 | None                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                           |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                              |
| OS-EXT-STS:power_state               | 0                                              |
| OS-EXT-STS:task_state                | scheduling                                     |
| OS-EXT-STS:vm_state                  | building                                       |
| OS-SRV-USG:launched_at               | None                                           |
| OS-SRV-USG:terminated_at             | None                                           |
| accessIPv4                           |                                                |
| accessIPv6                           |                                                |
| addresses                            |                                                |
| adminPass                            | whateversecret                                 |
| config_drive                         |                                                |
| created                              | 2016-06-22T21:26:55Z                           |
| flavor                               | m1.tiny (5936ba55-7d76-4b80-8b3a-73b458b306f2) |
| hostId                               |                                                |
| id                                   | 860613fe-3834-4f72-909b-5fb4b7ff2932           |
| image                                | cirros (d4d913c3-21f3-4e7d-932c-2cb35c8131e8)  |
| key_name                             | None                                           |
| name                                 | test3                                          |
| os-extended-volumes:volumes_attached | []                                             |
| progress                             | 0                                              |
| project_id                           | 2985b96e27f048cd92a18db0dd03aa23               |
| properties                           |                                                |
| security_groups                      | [{u'name': u'default'}]                        |
| status                               | BUILD                                          |
| updated                              | 2016-06-22T21:26:55Z                           |
| user_id                              | 0b7e5b0653084efdad5d67b66f2cf949               |
+--------------------------------------+------------------------------------------------+
[waited a little while]
bladeA01b:/var/tmp# openstack server show test3 | grep fault
| fault | {u'message': u'Build of
instance 860613fe-3834-4f72-909b-5fb4b7ff2932 aborted: Cannot load
repository file: Connection to glance host http://10.0.4.3:9292
failed: Error finding address for
HTTPConnecti', u'code': 500, u'details': u' File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
1926, in _do_build_and_run_instance\n filter_properties)\n File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
2083, in _build_and_run_instance\n \'create.error\', fault=e)\n
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line
221, in __exit__\n self.force_reraise()\n File
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197,
in force_reraise\n six.reraise(self.type_, self.value, self.tb)\n
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py",
line 2067, in _build_and_run_instance\n instance=instance)\n
File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__\n
self.gen.throw(type, value, traceback)\n File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
2244, in _build_resources\n reason=six.text_type(exc))\n',
u'created': u'2016-06-22T21:27:28Z'} |
----- s n i p -----
Ok, that's different! I'm not running Glance on my Compute, only on my Control.
Which of these should I run on the Compute and which one on the Control?
The documentation (http://docs.openstack.org/draft/install-guide-debconf/common/get_started_image_service.html)
doesn't say; only which ones to install on the Control.
----- s n i p -----
bladeA03b:/etc/nova# apt-cache search glance | grep ^glance
glance - OpenStack Image Registry and Delivery Service - Daemons
glance-api - OpenStack Image Registry and Delivery Service - API server
glance-common - OpenStack Image Registry and Delivery Service - common files
glance-glare - OpenStack Artifacts - API server
glance-registry - OpenStack Image Registry and Delivery Service - registry server
----- s n i p -----
Currently, I have all of them only on the Control..
Post by Cynthia Lopes
Concerning the flavor, I think the flavor you use should have the same disk
size as the volume's disk.
Ok, I'll keep that in mind, thanx.
Now, this might be a stupid question, but it actually only occurred to me just now
when I was looking at that missing net error. I haven't really set up my network, just
"winged" it. I'm pretty sure it's not even close to working (I need to do more
studying on the matter - I still don't have a clue about how things are supposed
to work on the OpenStack side of things).
I've postponed it because I desperately need ANY success story - creating an
instance, even if it won't technically work, would help a lot with that. I figured
it should at least TRY to start.. And I _ASSUME_ (!!) that as long as the Control
can talk to the Compute and "tell" it what to do (such as "attach this volume/image"),
the instance should at least be able to be created. I'm guessing the
networking (Neutron) in OS is for the _instances_, not for administration etc.
Or, did I misunderstand (the little I've read and actually understood about it :)?
--
Thinking before you speak is like wiping your arse
before you shit.
- Arne Anka
--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : ***@nde.ag

Chair of the Supervisory Board: Angelika Mozdzen
Registered office and court of registration: Hamburg, HRB 90934
Management Board: Jens-U. Mozdzen
VAT ID no. DE 814 013 983


Turbo Fredriksson
2016-06-23 13:10:05 UTC
Permalink
/etc/cinder/cinder.conf:enabled_backends = rbd --> that's what I use currently
"rbd"?
I'm not sure if it would work, it's been a while since I used local storage, but if you just comment the enabled_backend option out and restart cinder services, I believe it would create local volumes.
Shouldn't it be enough just to "disable" those services/backends?

I guess I have to, because just commenting that out didn't help, they still
show as enabled and running.

But even after disabling them, they still show as "status=disabled,state=up"
in "cinder service-list".. ?
Post by Turbo Fredriksson
Ok, that's different! I'm not running Glance on my Compute, only on my Control.
Glance is not supposed to run on a compute node, it runs on a control node.
Ok, good! I thought I missed something fundamental.
What's the output of "openstack endpoint list | grep glance"?
| 57b10556b7bf47eaa019c603a0f6b34f | europe-london | glance | image | True | public | http://10.0.4.1:9292
| 8672f6de1673470d93ab6ccee1c1a2bb | europe-london | glance | image | True | internal | http://10.0.4.1:9292
| e45c3e83fe744e7db949cdd89dfe5654 | europe-london | glance | image | True | admin | http://10.0.4.1:9292

That's the Control node..
Post by Turbo Fredriksson
[waited a little while]
How long did you wait?
10-15 seconds perhaps. At least less than (half?) a minute..
"This section describes how to install and configure the Image service, code-named glance, on the controller node."
It is not obvious from that that that (!! :) should only be done on the
Controller! It just says "do this on the controller". It does not make it
clear that you shouldn't do something on the compute as well.
"This section describes how to install and configure the Compute service, code-named nova, on the controller node."
"This section describes how to install and configure the Compute service on a compute node."
Neither of which distinguishes the different parts - what if I
have/want separate compute and control nodes? It does not
make things obvious!


And that's why I have a problem with HOWTOs! They _assume_ too much.
And a _BAD_ HOWTO (which all of them on Openstack are!) doesn't even
attempt to explain the different options you have, so if you deviate
even the very slightest, you're f**ked!

There's a _HUMONGOUS_ difference between a "HOWTO" and "Documentation"!
Timeout problem? Make sure that nothing blocks the requests (proxy?), what response do you get if you execute
control1:~ # curl http://<YOUR-CONTROLLER>:9292
I was doing that ON the Control. Worked just fine.

And the Control and Compute are on the same switch.
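(Though the same check from the Compute node is the one that actually
matters here, e.g.:)

----- s n i p -----
bladeA03b:~# curl http://10.0.4.1:9292
----- s n i p -----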
--
It's when you smell your own excrement
that you start to wonder who you really are.
- Arne Anka
Turbo Fredriksson
2016-06-23 13:35:51 UTC
Permalink
Post by Turbo Fredriksson
But even after disabling them, they're still show as "status=disabled,state=up"
with a "cinder service-list".. ?
I tried anyway, but creating a volume (empty or from an image) left
the host field empty, and the status was Error!

So apparently I need to figure out a way to configure "local storage".
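
If by "local storage" one means the LVM backend, a minimal sketch per the
install guide would be (the volume group name is the guide's default, not
necessarily mine):

----- s n i p -----
# /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
----- s n i p -----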


The LVM volume shows:

----- s n i p -----
bladeA01b:~# openstack volume show test | grep host
| os-vol-host-attr:host | ***@lvm#LVM_iSCSI |
----- s n i p -----
--
Build a man a fire, and he will be warm for the night.
Set a man on fire and he will be warm for the rest of his life.
Turbo Fredriksson
2016-06-23 15:06:50 UTC
Permalink
I'm starting to think that it might have something to do with the
networking after all:

----- s n i p -----
2016-06-23 15:52:13.775 25419 DEBUG nova.compute.manager [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Instance network_info: |[VIF({'profile': {}, 'ovs_interfaceid': u'9c23c0b8-1e96-4e73-b048-55c7380b2425', 'preserve_on_delete': False, 'network': Network({'bridge': 'br-provider', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 'fixed', 'floating_ips': [], 'address': u'10.99.0.4'})], 'version': 4, 'meta': {'dhcp_server': u'10.99.0.2'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': 'dns', 'address': u'10.0.0.254'})], 'routes': [], 'cidr': u'10.99.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 'gateway', 'address': u'10.99.0.1'})})], 'meta': {'injected': False, 'tenant_id': u'2985b96e27f048cd92a18db0dd03aa23', 'mtu': 1458}, 'id': u'2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d', 'label': u'network-99'}), 'devname': u'tap9c23c0b8-1e', 'vnic_type': u'normal', 'qbh_params': None, 'meta': {}, 'details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 'address': u'fa:16:3e:4c:04:17', 'active': False, 'type': u'ovs', 'id': u'9c23c0b8-1e96-4e73-b048-55c7380b2425', 'qbg_params': None})]| _allocate_network_async /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1572
2016-06-23 15:52:13.776 25419 DEBUG nova.compute.claims [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Aborting claim: [Claim: 1024 MB memory, 5 GB disk] abort /usr/lib/python2.7/dist-packages/nova/compute/claims.py:120
2016-06-23 15:52:13.777 25419 DEBUG oslo_concurrency.lockutils [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.abort_instance_claim" :: waited 0.000s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273
2016-06-23 15:52:14.017 25419 DEBUG oslo_concurrency.lockutils [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Lock "compute_resources" released by "nova.compute.resource_tracker.abort_instance_claim" :: held 0.240s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285
2016-06-23 15:52:14.018 25419 DEBUG nova.compute.manager [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Build of instance d75bd127-c554-4d79-bb9e-157c752628f4 aborted: Block Device Mapping is Invalid. _build_and_run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2081
2016-06-23 15:52:14.019 25419 DEBUG nova.compute.utils [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Build of instance d75bd127-c554-4d79-bb9e-157c752628f4 aborted: Block Device Mapping is Invalid. notify_about_instance_usage /usr/lib/python2.7/dist-packages/nova/compute/utils.py:284
2016-06-23 15:52:14.020 25419 ERROR nova.compute.manager [req-87a08e39-96ac-4c23-96dd-5227c972b865 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: d75bd127-c554-4d79-bb9e-157c752628f4] Build of instance d75bd127-c554-4d79-bb9e-157c752628f4 aborted: Block Device Mapping is Invalid.
----- s n i p -----
Yngvi Páll Þorfinnsson
2016-06-23 15:42:06 UTC
Permalink
I think it's possible we're far away from the correct path ;-)
It's not mentioned at all in the openstack lbaas V2 documentation,
but I think it's necessary to install Octavia on the controller machine first.
Then configure neutron on all compute nodes to support lbaas ...
Someone please correct me if I'm wrong on this....

Cheers
Yngvi

Yngvi Páll Þorfinnsson
2016-06-24 11:00:29 UTC
Permalink
Can anyone advise on, or provide documentation for, installing Octavia?

Rgds
Yngvi


Eugen Block
2016-06-23 15:30:39 UTC
Permalink
Post by Turbo Fredriksson
"rbd"?
It's a different storage backend (Ceph's RADOS block device), something
like a network RAID. But don't mind it right now ;-)
Post by Turbo Fredriksson
But even after disabling them, they're still show as
"status=disabled,state=up"
They are running because you didn't stop the services, you just
disabled them. You could stop them for now if you don't intend to use
cinder until you get an instance up and running, but I would take care
of cinder after that. It doesn't affect you while trying to
boot an instance on local storage, because cinder is not required for
that.
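For example (host and binary names taken from this thread):

----- s n i p -----
# disabling only marks the service, the process keeps running:
cinder service-disable bladeA01b cinder-volume
# to actually stop it (Debian/Ubuntu-style):
service cinder-volume stop
----- s n i p -----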

From your latest logs I assume that you are still trying to boot from
volume; I recommend ignoring cinder for now and focusing on launching an
instance at all. Have you fixed your glance issue? Because that is
required, otherwise it won't work at all.
Post by Turbo Fredriksson
/etc/cinder/cinder.conf:enabled_backends = rbd --> that's what I use currently
"rbd"?
I'm not sure if it would work, it's been a while since I used local
storage, but if you just comment the enabled_backend option out and
restart cinder services, I believe it would create local volumes.
Shouldn't it be enough just to "disable" those services/backends?
I guess I have to, because just commenting that out didn't help, they still
show as enabled and running.
But even after disabling them, they're still show as
"status=disabled,state=up"
with a "cinder service-list".. ?
Post by Turbo Fredriksson
Ok, that's different! I'm not running Glance on my Compute, only on my Control.
Glance is not supposed to run on a compute node, it runs on a control node.
Ok, good! I thought I missed something fundamental.
What's the output of "openstack endpoint list | grep glance"?
| 57b10556b7bf47eaa019c603a0f6b34f | europe-london | glance | image
| True | public | http://10.0.4.1:9292
| 8672f6de1673470d93ab6ccee1c1a2bb | europe-london | glance | image
| True | internal | http://10.0.4.1:9292
| e45c3e83fe744e7db949cdd89dfe5654 | europe-london | glance | image
| True | admin | http://10.0.4.1:9292
That's the Control node..
Post by Turbo Fredriksson
[waited a little while]
How long did you wait?
10-15 seconds perhaps. At least less than (half?) a minute..
"This section describes how to install and configure the Image
service, code-named glance, on the controller node."
It is not obvious from that that that (!! :) should only be done on the
Controller! It just say "do this on the controller". It does not make it
clear that you shouldn't do something on the compute as well.
"This section describes how to install and configure the Compute
service, code-named nova, on the controller node."
"This section describes how to install and configure the Compute
service on a compute node."
Neither of which distinguish the different parts - what if I
have/want a separate compute and control node? It does not
make things obvious!
And that's why I have a problem with HOWTOs! They _assume_ to much.
And a _BAD_ HOWTO (which all of them on Openstack are!) doesn't even
attempt to explain the different options you have, so if you deviate
even the very slightest, you're f**ked!
There's a _HUMONGOS_ difference between a "HOWTO" and "Documentation"!
Timeout problem? Make sure that nothing blocks the requests
(proxy?), what response do you get if you execute
control1:~ # curl http://<YOUR-CONTROLLER>:9292
I was doing that ON the Control. Worked just fine.
And The Control and Compute is on the same switch.
--
It's when you smell your own excrement
that you start to wonder who you really are.
- Arne Anka
Turbo Fredriksson
2016-06-23 16:27:22 UTC
Permalink
They are running because you didn't stop the services, you just disabled them.
I kind'a expected a disable to stop the service.. But what if I wanted to
stop only ONE service (of several)? For example the "nfs" backend, but leave
the "lvm" one online. I can't shut down cinder-volume, that would stop all of them..
You could stop them for now if you don't intend using cinder until you get an instance up and running, but I would take care of cinder after that. It doesn't affect you if you while trying to boot an instance on local storage because cinder is not required for that.
Well. I can create Cinder volumes without any problems it seems:

bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 0a0929e6-cf4d-40b3-9ba3-9575290993e6 | test2 | available | 5 | |
| c16975ad-dd45-41d7-b0a9-cbd0849f80e4 | test | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
bladeA01b:~# openstack volume show test | grep host
| os-vol-host-attr:host | ***@lvm#LVM_iSCSI
bladeA01b:~# openstack volume show test2 | grep host
From your latest logs I assume that you are still trying to boot from volume, I recommend to ignore cinder for now and focus on launching an instance at all.
That doesn't seem to be possible. I've looked over some of the code for
Cinder, and if you don't configure "enabled_backends", then no volume
can be created. Well, they can be created, but they end up in Error state
right away!
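The giveaway, for what it's worth, is that such volumes never get
scheduled to a host:

----- s n i p -----
bladeA01b:~# openstack volume show test2 | grep host
# (empty os-vol-host-attr:host = the scheduler never placed the volume)
----- s n i p -----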
Have you fixed your glance issue?
I don't know. I don't know what's wrong with it :(


But I touch one thing and two other things break :( :( :(. Dang, dang, dang,
I'm getting really tired of this s**t!

----- s n i p -----
2016-06-23 17:00:21.511 18347 INFO cinder.api.openstack.wsgi [req-35b7eb35-997c-4149-a975-aba921d86182 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] HTTP exception thrown: Volume test2 could not be found.
2016-06-23 17:00:21.513 18347 INFO cinder.api.openstack.wsgi [req-35b7eb35-997c-4149-a975-aba921d86182 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test2 returned with HTTP 404
2016-06-23 17:00:21.514 18347 INFO eventlet.wsgi.server [req-35b7eb35-997c-4149-a975-aba921d86182 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET /v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test2 HTTP/1.1" status: 404 len: 419 time: 0.6278081
2016-06-23 17:00:21.524 18347 INFO cinder.api.openstack.wsgi [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] GET http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/detail?all_tenants=1&name=test2
2016-06-23 17:00:21.600 18347 INFO cinder.volume.api [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Get all volumes completed successfully.
2016-06-23 17:00:21.609 18347 INFO cinder.api.openstack.wsgi [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/detail?all_tenants=1&name=test2 returned with HTTP 200
2016-06-23 17:00:21.611 18347 INFO eventlet.wsgi.server [req-a37ee53f-6910-4411-8260-dd615a722318 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET /v2/2985b96e27f048cd92a18db0dd03aa23/volumes/detail?all_tenants=1&name=test2 HTTP/1.1" status: 200 len: 1607 time: 0.0912130
----- s n i p -----

And yet it's right there! See top of email.
--
Realizing your own significance is like getting a mite
to grasp that it is only visible under a microscope
- Arne Anka
Turbo Fredriksson
2016-06-23 22:31:23 UTC
Permalink
----- s n i p -----
2016-06-23 23:08:25.277 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] GET http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test
2016-06-23 23:08:25.278 25887 DEBUG cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Empty body provided in request get_body /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:936
2016-06-23 23:08:25.278 25887 DEBUG cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Calling method '<bound method VolumeController.show of <cinder.api.v2.volumes.VolumeController object at 0x7f78ae8d4ad0>>' _process_stack /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:1092
2016-06-23 23:08:25.362 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] HTTP exception thrown: Volume test could not be found.
2016-06-23 23:08:25.363 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test returned with HTTP 404
2016-06-23 23:08:25.366 25887 INFO eventlet.wsgi.server [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET /v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test HTTP/1.1" status: 404 len: 418 time: 0.8508980
----- s n i p -----

and yet:

----- s n i p -----
bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 8dbd3b7c-e36b-433f-a3b0-d701f63f63c2 | test | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
----- s n i p -----

That's with the "admin" user+password etc though..
--
There are no dumb questions,
unless a customer is asking them.
- Unknown
Turbo Fredriksson
2016-06-24 12:59:05 UTC
Permalink
Post by Turbo Fredriksson
----- s n i p -----
2016-06-23 23:08:25.362 25887 INFO cinder.api.openstack.wsgi [req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] HTTP exception thrown: Volume test could not be found.
----- s n i p -----
Oops, that one was mine! I forgot to copy the docker image
to Glance!


As far as I can tell, I'm now "all good". It now "only" (!!! :)
complains about the network :( :(.

----- s n i p -----
2016-06-24 13:42:25.141 10978 WARNING novadocker.virt.docker.driver [req-c98bae39-dbb0-4b36-808d-004684ae3a2c 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 1988a860-13f6-44cf-b40c-5709b815f381] Cannot setup network: Cannot find any PID under container "708a45db07705d7f4393fc60886ae4e894dccc2ce02ef7eb77dd701abc750752"
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381] Traceback (most recent call last):
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381] File "/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 490, in _start_container
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381] self._attach_vifs(instance, network_info)
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381] File "/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 241, in _attach_vifs
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381] raise RuntimeError(msg.format(container_id))
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381] RuntimeError: Cannot find any PID under container "708a45db07705d7f4393fc60886ae4e894dccc2ce02ef7eb77dd701abc750752"
2016-06-24 13:42:25.141 10978 ERROR novadocker.virt.docker.driver [instance: 1988a860-13f6-44cf-b40c-5709b815f381]
----- s n i p -----

I'm not sure what that error means exactly, though.


Provided my network looks like this:

+ [external_ip - internet]
+- Contego (DHCP, DNS, NTP, Primary Gateway/Firewall)
+ [192.168.4/24] -+- Old VMs, Static IPs
+ [192.168.5/24] -+- Old VMs, Dynamic IPs
+ [192.168.63/24] -+- Guest network
+ [192.168.69/24] -+- Physical machines
+- Celia (Block Storage - 30TB+/ZoL, LDAP, Kerberos V, SMB, AFP, AFS, NFS, iSCSI)
+- Negotia (Current VM host)
+ [10.0.0/16] -+- Blade Center
+ [10.0.1/24] -+- Management/iLO network (Blade Center 1)
+ [10.0.2/24] -+- Management/iLO network (Blade Center 2)
+ [10.0.3/24] -+- Blade Center 1 - Blade hosts, eth0
+ [10.0.4/24] -+- Blade Center 1 - Blade hosts, eth1
+ [10.0.5/24] -+- Blade Center 2 - Blade hosts, eth0
+ [10.0.6/24] -+- Blade Center 2 - Blade hosts, eth1
+ [10.9x.0/24] -+- Virtual Machines

My primary (and at the moment only) Control node has the IPs
"10.0.4.1/10.99.0.1" and my first (and at the moment only)
Compute has the IP "10.0.4.3".

My plan (?) was to put the OS router on 10.99.0.254 and route it
"down" to the physical host's eth0 (10.99.0.1) and then on to
Contego, which is my firewall/gateway to the 'Net. I don't expect
any traffic IN that way though..

Then all VMs get a 10.99.0.x address (if I'm quick, I can actually
see that happening already - but because the instance fails to start,
the IP is removed from it again).


My question is, how do I get traffic between 10.99.0.1 and 10.99.0.254??

This is my current network setup, although I have no idea what I'm doing;
I just winged it to get SOMETHING..

https://github.com/FransUrbo/openstack_bladecenter/blob/master/local_setup_openstack-control.sh#L647-L681

What am I missing?
--
You know, boys, a nuclear reactor is a lot like a woman.
You just have to read the manual and press the right buttons
- Homer Simpson
Cynthia Lopes
2016-06-24 13:04:08 UTC
Permalink
Sorry, what command precisely did you run when you got those logs? Are you
trying to boot a VM from a volume?

I find it weird that you are seeing the volume name in the request instead
of the volume id: GET
http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test
This should be: GET
http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/8dbd3b7c-e36b-433f-a3b0-d701f63f63c2
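Incidentally, the 404-followed-by-list pattern in those same logs looks
like the client's normal name-to-id fallback, i.e. something like:

----- s n i p -----
GET .../volumes/test                             --> 404 (not an id)
GET .../volumes/detail?all_tenants=1&name=test   --> 200 (lookup by name)
----- s n i p -----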
Post by Turbo Fredriksson
----- s n i p -----
2016-06-23 23:08:25.277 25887 INFO cinder.api.openstack.wsgi
[req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949
2985b96e27f048cd92a18d
b0dd03aa23 - - -] GET
http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test
2016-06-23 23:08:25.278 25887 DEBUG cinder.api.openstack.wsgi
[req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949
2985b96e27f048cd92a18
db0dd03aa23 - - -] Empty body provided in request get_body
/usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:936
2016-06-23 23:08:25.278 25887 DEBUG cinder.api.openstack.wsgi
[req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949
2985b96e27f048cd92a18
db0dd03aa23 - - -] Calling method '<bound method VolumeController.show of
<cinder.api.v2.volumes.VolumeController object at 0x7f78ae8d4ad0>>'
_process_stack /
usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:1092
2016-06-23 23:08:25.362 25887 INFO cinder.api.openstack.wsgi
[req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949
2985b96e27f048cd92a18db0dd03aa23 - - -] HTTP exception thrown: Volume test
could not be found.
2016-06-23 23:08:25.363 25887 INFO cinder.api.openstack.wsgi
[req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949
2985b96e27f048cd92a18db0dd03aa23 - - -]
http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test
returned with HTTP 404
2016-06-23 23:08:25.366 25887 INFO eventlet.wsgi.server
[req-9d2bc683-7599-4539-92a6-b8e8503591c8 0b7e5b0653084efdad5d67b66f2cf949
2985b96e27f048cd92a18db0dd03aa23 - - -] 10.0.4.1 "GET
/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test HTTP/1.1" status: 404
len: 418 time: 0.8508980
----- s n i p -----
----- s n i p -----
bladeA01b:~# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 8dbd3b7c-e36b-433f-a3b0-d701f63f63c2 | test | available | 5 | |
+--------------------------------------+--------------+-----------+------+-------------+
----- s n i p -----
That's with the "admin" user+password etc though..
--
There are no dumb questions,
unless a customer is asking them.
- Unknown
Turbo Fredriksson
2016-06-24 13:13:26 UTC
Permalink
Post by Cynthia Lopes
Sorry, what command precisely did you run when you got those logs? Are you
trying to boot a vm from a volume is it?
openstack server create --image cirros --flavor m1.tiny --nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d --wait test


It turned out I had missed a part of the Nova-Docker setup - copying
the docker image to Glance.

When I fixed that part, I got other problems (see other mail).
Post by Cynthia Lopes
I find it weird that you are seeing the volume name on the request instead
of the volume id: GET
http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test
<http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/test2016-06-23>
This should be: GET
http://10.0.4.1:8776/v2/2985b96e27f048cd92a18db0dd03aa23/volumes/8dbd3b7c-e36b-433f-a3b0-d701f63f63c2
That is probably Nova-Docker working in its own special way..
--
Michael Jackson is not going to buried or cremated
but recycled into shopping bags so he can remain white,
plastic and dangerous for kids to play with.
Turbo Fredriksson
2016-06-23 23:32:41 UTC
Permalink
Sorry for this long mail - I think the original problem is now fixed.
I'm including the whole work/test log for posterity, and so someone
can comment in case I've missed something..



After six, seven hours of debugging and modifying the code to output
more information, I've found out this:

When running

openstack server create --volume test --flavor m1.tiny \
--nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d --wait test

I get this on the Compute:

----- s n i p -----
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [req-37cfbeac-324c-4077-8056-2efc62e80b3f 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: efe13dfa-79be-49a1-8113-04830463b545] Instance failed block device setup
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] Traceback (most recent call last):
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1754, in _prep_block_device
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] wait_func=self._await_block_device_map_created)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 518, in attach_block_devices
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] map(_log_and_attach, block_device_mapping)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 516, in _log_and_attach
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] bdm.attach(*attach_args, **attach_kwargs)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 54, in wrapped
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] ret_val = method(obj, context, *args, **kwargs)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 261, in attach
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] connector = virt_driver.get_volume_connector(instance)
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] File "/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 1375, in get_volume_connector
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] raise NotImplementedError()
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545] NotImplementedError
2016-06-23 23:27:34.708 10716 ERROR nova.compute.manager [instance: efe13dfa-79be-49a1-8113-04830463b545]
2016-06-23 23:27:34.742 10716 DEBUG keystoneauth.session [req-37cfbeac-324c-4077-8056-2efc62e80b3f 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] RESP: [200] Content-Type: application/json Content-Length: 639 X-Openstack-Request-Id: req-701bcf9e-e2bb-41d1-b49c-935da83f6653 Date: Thu, 23 Jun
----- s n i p -----

Following that backwards, I come to attach_block_devices():

----- s n i p -----
[..]
        else:
            LOG.info(_LI('Booting with blank volume at %(mountpoint)s'),
                     {'mountpoint': bdm['mount_device']},
                     context=context, instance=instance)

        bdm.attach(*attach_args, **attach_kwargs)    (L516)
[..]
        connector = virt_driver.get_volume_connector(instance)    (L261)
[..]
    def get_volume_connector(self, instance):
        """Get connector information for the instance for attaching to volumes.

        Connector information is a dictionary representing the ip of the
        machine that will be making the connection, the name of the iscsi
        initiator and the hostname of the machine as follows::

            {
                'ip': ip,
                'initiator': initiator,
                'host': hostname
            }
        """
        raise NotImplementedError()
----- s n i p -----

So I think I've found a bug. It seems you can't attach an empty volume the way
I've been trying to for days now - the NotImplementedError is raised by the base
virt driver's get_volume_connector(), so presumably the driver I'm using
(Nova-Docker) doesn't implement volume attachment. OR, I missed some
configuration somewhere..

However, when running:

openstack server create --image cirros --flavor m1.tiny \
--nic net-id=2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d --wait test

I on the other hand get:

----- s n i p -----
2016-06-23 23:39:09.326 10716 DEBUG nova.compute.manager [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2059
2016-06-23 23:39:09.330 10716 DEBUG novadocker.virt.docker.driver [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Image name "cirros" does not exist, fetching it... _pull_missing_image /usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py:384
2016-06-23 23:39:09.332 10716 DEBUG novadocker.virt.docker.driver [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Fetching image with id d4d913c3-21f3-4e7d-932c-2cb35c8131e8 from glance _pull_missing_image /usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py:415
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Error contacting glance server 'http://10.0.4.3:9292' for 'data', done trying.
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance Traceback (most recent call last):
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 250, in call
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance result = getattr(client.images, method)(*args, **kwargs)
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/glanceclient/v1/images.py", line 148, in data
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance % urlparse.quote(str(image_id)))
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 275, in get
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance return self._request('GET', url, **kwargs)
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance File "/usr/lib/python2.7/dist-packages/glanceclient/common/http.py", line 256, in _request
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance raise exc.CommunicationError(message=message)
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance CommunicationError: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnectionPool(host='10.0.4.3', port=9292): Max retries exceeded with url: /v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc32ac35110>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2016-06-23 23:39:09.392 10716 ERROR nova.image.glance
2016-06-23 23:39:09.395 10716 WARNING novadocker.virt.docker.driver [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Cannot load repository file: Connection to glance host http://10.0.4.3:9292 failed: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnectionPool(host='10.0.4.3', port=9292): Max retries exceeded with url: /v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc32ac35110>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Traceback (most recent call last):
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 417, in _pull_missing_image
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] instance['user_id'], instance['project_id'])
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 110, in fetch
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] IMAGE_API.download(context, image_href, dest_path=path)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 182, in download
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] dst_path=dest_path)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 383, in download
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] _reraise_translated_image_exception(image_id)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 682, in _reraise_translated_image_exception
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] six.reraise(new_exc, None, exc_trace)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 381, in download
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] image_chunks = self._client.call(context, 1, 'data', image_id)
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 269, in call
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] server=str(self.api_server), reason=six.text_type(e))
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] GlanceConnectionFailed: Connection to glance host http://10.0.4.3:9292 failed: Error finding address for http://10.0.4.3:9292/v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8: HTTPConnectionPool(host='10.0.4.3', port=9292): Max retries exceeded with url: /v1/images/d4d913c3-21f3-4e7d-932c-2cb35c8131e8 (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc32ac35110>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2016-06-23 23:39:09.395 10716 ERROR novadocker.virt.docker.driver [instance: 8d7241bf-71a0-466d-b0f1-26211615b777]
2016-06-23 23:39:09.398 10716 ERROR nova.compute.manager [req-bb159c37-3033-4a36-a21a-c57acb487de2 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 8d7241bf-71a0-466d-b0f1-26211615b777] Instance failed to spawn
----- s n i p -----

This was because of a missing "api_servers" in nova.conf. Setting that (using
trial and error) to:

api_servers = http://control:9292
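
In full, assuming the option lives in the usual [glance] section:

----- s n i p -----
# /etc/nova/nova.conf on the compute node
[glance]
api_servers = http://control:9292
----- s n i p -----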

The information in the config file says:

These should be fully qualified urls of the form "scheme://hostname:port[/path]"

However, with a path it won't work. Granted, 'endpoint list' DOES say
"http://10.0.4.1:9292", so I guess the path is optional and only for special configurations.



Now it seems to go further:

----- s n i p -----
2016-06-24 00:03:09.622 14217 DEBUG novadocker.virt.docker.driver [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Loading repository file into docker cirros _pull_missing_image /usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py:419
2016-06-24 00:03:10.100 14217 DEBUG keystoneauth.session [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] RESP: [201] Content-Type: application/json Content-Length: 871 X-Openstack-Request-Id: req-17ccf052-d52c-401f-9cc7-3a0819e4c3fa Date: Thu, 23 Jun 2016 23:03:08 GMT Connection: keep-alive
RESP BODY: {"port": {"status": "DOWN", "binding:host_id": "bladeA03b", "description": "", "allowed_address_pairs": [], "extra_dhcp_opts": [], "updated_at": "2016-06-23T23:03:08", "device_owner": "compute:nova", "port_security_enabled": true, "binding:profile": {}, "fixed_ips": [{"subnet_id": "6ef5d993-2796-4adf-a724-eae5f5d1cc53", "ip_address": "10.99.0.40"}], "id": "db8bec43-9ba0-4276-9623-1a17f5857a06", "security_groups": ["c39cbc1f-99cf-4c1a-98b2-ec4f56481ccf"], "device_id": "3dd1ff63-24ea-435d-a036-e99a42ebf1b5", "name": "", "admin_state_up": true, "network_id": "2bb7b8e2-188f-4e46-bf4d-ef5ec81ddb4d", "dns_name": null, "binding:vif_details": {"port_filter": true, "ovs_hybrid_plug": true}, "binding:vnic_type": "normal", "binding:vif_type": "ovs", "tenant_id": "2985b96e27f048cd92a18db0dd03aa23", "mac_address": "fa:16:3e:64:4e:18", "created_at": "2016-06-23T23:03:08"}}
_http_log_response /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:277
2016-06-24 00:03:10.101 14217 DEBUG nova.network.neutronv2.api [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 3dd1ff63-24ea-435d-a036-e99a42ebf1b5] Successfully created port: db8bec43-9ba0-4276-9623-1a17f5857a06 _create_port /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py:261
2016-06-24 00:03:10.102 14217 DEBUG oslo_concurrency.lockutils [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Acquired semaphore "refresh_cache-3dd1ff63-24ea-435d-a036-e99a42ebf1b5" lock /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:215
2016-06-24 00:03:10.103 14217 DEBUG nova.network.neutronv2.api [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 3dd1ff63-24ea-435d-a036-e99a42ebf1b5] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py:910
2016-06-24 00:03:10.121 14217 DEBUG keystoneauth.session [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] REQ: curl -g -i -X GET http://10.0.4.1:9696/v2.0/ports.json?tenant_id=2985b96e27f048cd92a18db0dd03aa23&device_id=3dd1ff63-24ea-435d-a036-e99a42ebf1b5 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}262b2c831c6ea94c09cd20bb956858e6c71671b2" _http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:248
2016-06-24 00:03:10.169 14217 WARNING novadocker.virt.docker.driver [req-867eddb0-5367-425a-8189-cdf3e5293855 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] [instance: 3dd1ff63-24ea-435d-a036-e99a42ebf1b5] Cannot load repository file: ('Connection aborted.', error(32, 'Broken pipe'))
----- s n i p -----


And now I'm stuck again. Looking at the information for the instance, it now says:

No valid host was found. There are not enough hosts available.

Although:

----- s n i p -----
bladeA01b:~# openstack endpoint list | grep nova
| a5e36f0b933c4e4da7a5737d00e7230b | europe-london | nova | compute | True | internal | http://10.0.4.1:8774/v2/%(tenant_id)s |
| b7a8e4623fbd456fb008527f9c51995f | europe-london | nova | compute | True | admin | http://10.0.4.1:8774/v2/%(tenant_id)s |
| c3b5eda8124b4e4186f919a7944d1290 | europe-london | nova | compute | True | public | http://10.0.4.1:8774/v2/%(tenant_id)s |
----- s n i p -----
Turbo Fredriksson
2016-06-23 23:36:11 UTC
Permalink
Post by Turbo Fredriksson
No valid host was found. There are not enough hosts available.
Looking closer at the Controller's logs, I see:

----- s n i p -----
2016-06-24 00:29:38.835 20278 INFO glance.registry.api.v1.images [req-55daa3fe-c192-41a4-a5c0-0b6e076a4bcf 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Image cirros not found
2016-06-24 00:29:38.837 20278 INFO eventlet.wsgi.server [req-55daa3fe-c192-41a4-a5c0-0b6e076a4bcf 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] 127.0.0.1 - - [24/Jun/2016 00:29:38] "GET /images/cirros HTTP/1.1" 404 242 0.251989
2016-06-24 00:29:38.852 20307 ERROR glance.registry.client.v1.client [req-55daa3fe-c192-41a4-a5c0-0b6e076a4bcf 0b7e5b0653084efdad5d67b66f2cf949 2985b96e27f048cd92a18db0dd03aa23 - - -] Registry client request GET /images/cirros raised NotFound
----- s n i p -----
--
System administrators motto:
You're either invisible or in trouble.
- Unknown
Turbo Fredriksson
2016-06-22 19:17:13 UTC
Permalink
Post by Turbo Fredriksson
Post by Eugen Block
Have you nova-compute.logs?
They don't say a thing, so I'm guessing it never gets
that far.
Running EVERYTHING with debugging, verbose logging etc. etc.,
I noticed that Nova could not authenticate "something" (I just got
the non-descriptive "Something, something needs authentication").
I spent a whole day checking, triple-checking etc. Everything WAS
ok! I'm almost sure of it! As sure as I can get without fully knowing
what I'm doing, at least :).

I decided that the easiest way to solve this (which I was going to
do anyway; I was just hoping to put it off until everything was working)
was to create individual service accounts for everything.


Now I can't see the Compute node any more :(.

Running "openstack --debug flavor list" (etc, etc) gives me
(with using my admin-openrc file which is supposed to give me
admin rights):

----- s n i p -----
[..]
Auth plugin password selected
auth_type: password
Using auth plugin: password
Using parameters {'username': 'admin', 'project_name': 'admin', 'auth_url': 'http://control:35357/v3', 'user_domain_name': 'default', 'password': '***', 'project_domain_name': 'default'}
Get auth_ref
REQ: curl -g -i -X GET http://control:35357/v3 -H "Accept: application/json" -H "User-Agent: python-openstackclient keystoneauth1/2.4.1 python-requests/2.10.0 CPython/2.7.12rc1"
Starting new HTTP connection (1): control
"GET /v3 HTTP/1.1" 200 260
RESP: [200] Vary: X-Auth-Token Content-Type: application/json Content-Length: 260 X-Openstack-Request-Id: req-168f79a9-53d5-482f-841c-d9a68dbb270e Date: Tue, 21 Jun 2016 15:49:27 GMT Connection: keep-alive
RESP BODY: {"version": {"status": "stable", "updated": "2016-04-04T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.6", "links": [{"href": "http://control:35357/v3/", "rel": "self"}]}}

Making authentication request to http://control:35357/v3/auth/tokens
"POST /v3/auth/tokens HTTP/1.1" 201 11701
run(Namespace(all=False, columns=[], formatter='table', limit=None, long=False, marker=None, max_width=0, noindent=False, public=True, quote_mode='nonnumeric'))
Instantiating compute client for VAPI Version Major: 2, Minor: 0
Making authentication request to http://control:35357/v3/auth/tokens
"POST /v3/auth/tokens HTTP/1.1" 201 11701
REQ: curl -g -i -X GET http://10.0.4.1:8774/v2/1857a7b08b8046038005b98e8b238843/flavors/detail -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}e3b5968af44686e0d3abfbf6e3934d6991235c46"
Starting new HTTP connection (1): 10.0.4.1
"GET /v2/1857a7b08b8046038005b98e8b238843/flavors/detail HTTP/1.1" 503 170
RESP: [503] Content-Length: 170 Content-Type: application/json; charset=UTF-8 X-Compute-Request-Id: req-c40a135f-2445-4d68-a6aa-0c37d05f363c Date: Tue, 21 Jun 2016 15:49:29 GMT Connection: keep-alive
RESP BODY: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\n\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"}
[..]
----- s n i p -----
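(For comparison, a typical Mitaka-style admin-openrc matching the parameters shown in that debug output would look something like this; the password is a placeholder:

----- s n i p -----
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=SECRET
export OS_AUTH_URL=http://control:35357/v3
export OS_IDENTITY_API_VERSION=3
----- s n i p -----

Note that the 503 carries an X-Compute-Request-Id header, i.e. nova-api itself is answering, so authenticating the admin user against keystone worked; the classic cause of that 503 is nova's own keystone_authtoken section failing to validate tokens.)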

And the web GUI gives me:

Error: Unable to get network agents info.
Error: Unable to get nova services list.
Error: Unable to get cinder services list.
Error: Unable to get Orchestration service list.

and the list of "Compute Services" is empty.



Here it's trying to connect (from what I've figured out) to the compute
node. That node IS up and running (on 10.0.4.3), but it seems it hasn't
(successfully) registered itself with the controller.


This is the Compute node:

----- s n i p -----
bladeA03b:~# rgrep -E '^admin_|^#_tenant_|^#.*_domain_' /etc/nova | egrep -v '\.orig|~:' | sed "s@\(admin_password = \).*@\1SECRET@" | less
/etc/nova/nova.conf:admin_username = ironic # The [ironic] section:
/etc/nova/nova.conf:admin_password = SECRET
/etc/nova/nova.conf:admin_tenant_name = service
/etc/nova/nova.conf:admin_user = nova # The [keystone_authtoken] section:
/etc/nova/nova.conf:admin_password = SECRET
/etc/nova/nova.conf:admin_tenant_name = service
/etc/nova/nova.conf:#default_domain_id = <None>
/etc/nova/nova.conf:#default_domain_name = <None>
/etc/nova/nova.conf:#project_domain_id = <None>
/etc/nova/nova.conf:#user_domain_id = <None>
/etc/nova/nova.conf:#user_domain_name = <None>
----- s n i p -----
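(For what it's worth, a Mitaka-era [keystone_authtoken] section normally uses the domain-aware options rather than only admin_user/admin_tenant_name. A minimal sketch, with host name and password as placeholders:

----- s n i p -----
[keystone_authtoken]
auth_uri = http://control:5000
auth_url = http://control:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = SECRET
----- s n i p -----

With all of the *_domain_* options commented out, as above, token validation can fail in exactly this way.)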

On the Control:

----- s n i p -----
bladeA01b:~# rgrep -E '^admin_|^#_tenant_|^#.*_domain_' /etc/{nova,keystone,ironic} | egrep -v '\.orig|~:' | sed "s@\(.*_\(password\|token\) = \).*@\1SECRET@"
/etc/nova/nova.conf:admin_user = nova
/etc/nova/nova.conf:admin_password = SECRET
/etc/nova/nova.conf:admin_tenant_name = service
/etc/nova/nova.conf:#default_domain_id = <None>
/etc/nova/nova.conf:#default_domain_name = <None>
/etc/nova/nova.conf:#project_domain_id = <None>
/etc/keystone/keystone.conf:admin_token = SECRET
/etc/keystone/keystone.conf:#federated_domain_name = Federated
/etc/keystone/keystone.conf:#default_domain_id = default
/etc/keystone/keystone.conf:#admin_project_domain_name = <None>
----- s n i p -----

Also, basically the only thing I can do is list users etc.:

----- s n i p -----
bladeA01b:~# openstack user list
+----------------------------------+------------+
| ID | Name |
+----------------------------------+------------+
| 010049f831d84b19827ae27b72c406f1 | magnum |
| 0b7e5b0653084efdad5d67b66f2cf949 | admin |
| 0bc0163659864511a1610ba784d9e4b3 | mistral |
| 25cc2c5cf61c46329489e68656676ee4 | aodh |
| 4cf009b2dc7c4622b7230ad27f8242fe | nova |
| 4d1f0fd8c7524b7797d823eeba85cb03 | glance |
| 55f3968618b540b2a070ef845eb0c947 | ironic |
| 56e8666f2b044577934f9707ad29da5f | heat |
| 5eda7ede1be44745abd7d7815a85d927 | manila |
| 6e69a71d41da453893769ebf597bf914 | zaqar |
| 8a6694f8dde2497bbe230fbf4382f37d | trove |
| 964a9e06be3e411f9bfa80e9ea07e986 | senlin |
| a5bb89f8bbeb43d496e54109d11b1be6 | cinder |
| c0853dac1d1c4c7294f3bdfa05731c37 | barbican |
| c1bafcd2a72c429dbbf0bde8b35abb38 | murano |
| c63ad4ff853b4b72a70d64dee7aa596b | ceilometer |
| de4b432c9c7b4f1785fd600fc22df6b4 | demo |
| e298427fe3734640bfd0c6e043e13763 | neutron |
| e8bbf36bae5b4d9bb1649395b5a49886 | designate |
+----------------------------------+------------+
bladeA01b:~# openstack user list --project service

bladeA01b:~# openstack user show magnum
+--------------------+----------------------------------+
| Field | Value |
+--------------------+----------------------------------+
| default_project_id | f491fbef5f1748cc8fefed046973974e |
| domain_id | default |
| enabled | True |
| id | 010049f831d84b19827ae27b72c406f1 |
| name | magnum |
+--------------------+----------------------------------+
bladeA01b:~# openstack project show f491fbef5f1748cc8fefed046973974e
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Debian service project |
| domain_id | default |
| enabled | True |
| id | f491fbef5f1748cc8fefed046973974e |
| is_domain | False |
| name | service |
| parent_id | default |
+-------------+----------------------------------+
----- s n i p -----

What "worries" me a little is that the "user list --project"
output is empty! I know that part worked once, on another
install, when I _didn't_ use individual accounts for each
service. But the "user show" seems to indicate that the user
IS in the correct project after all..
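(One possible explanation: "openstack user list --project" is driven by role assignments in that project, while "user show" only reports default_project_id. If a service user has no role in the service project, it won't appear in the list and, worse, can't get a project-scoped token. Something along these lines, per service user, would grant that:

----- s n i p -----
bladeA01b:~# openstack role add --project service --user nova admin
bladeA01b:~# openstack role assignment list --project service
----- s n i p -----
)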


So what is the correct way to have services authenticate themselves?
What variable/setting am I missing (or have used when I shouldn't)?

I can't see anything in the logs, even with debugging and verbose
enabled.
Turbo Fredriksson
2016-06-22 19:30:08 UTC
Permalink
Post by Turbo Fredriksson
I can't see anything in the logs, even with debugging and verbose
enabled.
What I do see is this:

----- s n i p -----
2016-06-22 20:25:58.102 2942 DEBUG nova.service [req-e5132bf8-d3a1-4214-8cba-94ea06dfc273 - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python2.7/dist-packages/nova/service.py:236
2016-06-22 20:25:58.102 2942 DEBUG nova.servicegroup.drivers.db [req-e5132bf8-d3a1-4214-8cba-94ea06dfc273 - - - - -] DB_Driver: join new ServiceGroup member bladeA03b to the compute group, service = <nova.service.Service object at 0x7f8aa3b8e1d0> join /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py:48
----- s n i p -----

I'm not sure what it means though.
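(Those lines are actually a good sign: the DB servicegroup driver is how nova-compute periodically reports itself alive into the controller's database. Whether the controller agrees can be checked with something like:

----- s n i p -----
bladeA01b:~# openstack compute service list --service nova-compute
----- s n i p -----
)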
Turbo Fredriksson
2016-06-22 20:58:57 UTC
Permalink
Yay! I managed to solve a problem on my own!! :D
Mostly by guessing and hoping, but the problem was solved
nonetheless :D


Tweaking the *_tenant_* values etc. apparently solved the problem.
I now see my Compute node, and everything seems to be back where
I left off two days ago.


I now return you back to the original problem..
--
Imagine you're an idiot and then imagine you're in
the government. Oh, sorry. Now I'm repeating myself
- Mark Twain
Eugen Block
2016-06-23 07:40:19 UTC
Permalink
What version are you using?
control1:~ # rpm -qi openstack-neutron-linuxbridge-agent-8.1.3~a0~dev5-1.1.noarch
Name : openstack-neutron-linuxbridge-agent
Version : 8.1.3~a0~dev5
Release : 1.1
Architecture: noarch
Install Date: Fr 17 Jun 2016 16:09:05 CEST
Group : Development/Languages/Python
Size : 14254
License : Apache-2.0
Signature : RSA/SHA256, Mi 15 Jun 2016 05:08:07 CEST, Key ID
893a90dad85f9316
Source RPM : openstack-neutron-8.1.3~a0~dev5-1.1.src.rpm
Build Date : Mi 15 Jun 2016 05:07:22 CEST
Build Host : wildcard2
Relocations : (not relocatable)
Vendor : obs://build.opensuse.org/Cloud:OpenStack
URL : https://launchpad.net/neutron
Summary : OpenStack Network - Linux Bridge Agent
Description :
This package provides the Linux Bridge Agent.
Distribution: Cloud:OpenStack:Mitaka / openSUSE_Leap_42.1
By external network, do you mean it has router:external=True?
Yes, that's what I mean.

It's really strange: yesterday I had to restart the agent multiple
times without any hint of what had happened. So it's not necessarily
connected to nova but to other services, too. I just don't have a clue
yet where to look. I set the linuxbridge-agent log to debug mode, but
I didn't find a temporal connection to the interruption.
Post by Eugen Block
I am seeing a strange behaviour of my cloud and could use some help
on this. I have a project containing 2 VMs, one is running in an
external network, the other is in a tenant-network with a floating
ip. Security
What version are you using? By external network, do you mean it has
router:external=True?
--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG fax : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg e-mail : ***@nde.ag

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983
Eugen Block
2016-06-23 08:21:08 UTC
Permalink
Hmm, these are really for Neutron routers. Not sure about connecting
VMs to them.
It's a flat network on the compute node; I created a vlan directly on
the compute node to avoid sending all the traffic to the
control/network node for instances that are supposed to be in a
production environment. But since a couple of days ago, maybe two
weeks, it doesn't work the way it used to. I have been testing all the
time while instances were running in that network, but without impact
on their connectivity. Now it seems that several services have some
kind of impact on neutron on the compute nodes. I just figured out
that restarting libvirtd also leads to an interruption.
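(If I understand the setup right, that would be a vlan subinterface on the compute node mapped into the linuxbridge agent. A rough sketch, with hypothetical interface and vlan names:

----- s n i p -----
# create the vlan interface on the compute node (names are examples)
ip link add link eth1 name eth1.100 type vlan id 100
ip link set eth1.100 up

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = physnet1:eth1.100
----- s n i p -----
)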
Post by Eugen Block
By external network, do you mean it has router:external=True?
Yes, that's what I mean.
Hmm, these are really for Neutron routers. Not sure about connecting
VMs to them.
Eugen Block
2016-06-24 08:15:34 UTC
Permalink
Make sure nova is using the noop driver.
I'm trying to use ceilometer, and in that case the docs say to use
the messagingv2 driver, so that's what I did. And until two weeks ago it
worked just fine; I had no networking issues.
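(In Mitaka that is typically set in nova.conf as below; older releases used notification_driver in [DEFAULT] instead:

----- s n i p -----
[oslo_messaging_notifications]
driver = messagingv2
----- s n i p -----
)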
double check your security groups config
The security groups also seem to be fine; my colleague works via ssh
on those instances. And the interruption can be caused by deleting an
instance in a different project with its own security groups; it just
has to run on the same compute node.
double check your security groups config. Make sure nova is using
the noop driver.
Eugen Block
2016-06-24 13:26:57 UTC
Permalink
If you are using the Neutron API for security groups, then I think
you need firewall_driver=nova.virt.firewall.NoopFirewallDriver in
nova.conf - that's what devstack does.
I think this was really the solution! I tried to provoke the
interruption in three different ways that had broken the connection
before, but I couldn't! I hope this is it; I'll report back if the
interruptions return, but so far thank you very much!!!
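(For anyone hitting this later, the setting goes into nova.conf on the compute nodes, followed by a nova-compute restart; in Mitaka it lives in [DEFAULT]:

----- s n i p -----
[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver
----- s n i p -----

With neutron handling security groups, this keeps nova from programming a second, conflicting set of iptables rules.)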
Post by Eugen Block
Make sure nova is using the noop driver.
I'm trying to use ceilometer, in that case the docs say to use
messagingv2 driver, so that's what I did. And until two weeks ago it
worked just fine, I had no networking issues.
Your iptables output is showing entries from both nova-compute and
neutron. If you are using the Neutron API for security groups, then
I think you need
firewall_driver=nova.virt.firewall.NoopFirewallDriver in nova.conf -
that's what devstack does.