Friday, January 27, 2012

openstack - update

Last time I was able to deploy an image. The next step would be to list it and then run it. But I have hit problems.


To list images I run the command:

euca-describe-images


which hangs for a very long time and finally exits with the message "connection reset by peer".


I have disabled iptables to eliminate firewall issues. No help. (This fits: "connection reset by peer" means the far end actively closed the connection, rather than a firewall silently dropping packets.)

All the manuals assume that euca-describe-images simply works and give no instructions on what to do when it does not.
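
For the record, one basic check (assuming the novarc environment from the earlier posts is loaded) is whether the EC2 endpoint answers at all:

echo $EC2_URL       # should print http://130.199.148.53:8773/services/Cloud
curl -v "$EC2_URL"  # any HTTP response at all means the API port is reachable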


Following Josh's advice I did:

strace -o edi_output -f -ff euca-describe-images

and then I looked into the output files. It seems that there might be two problems:

  1. Some euca2ools files are missing - in particular the .eucarc configuration file.
  2. There are messages about missing python files, for example "open("/usr/lib64/python2.6/site-packages/gtk-2.0/org.so", O_RDONLY) = -1 ENOENT (No such file or directory)". (There are many more like that.)
So it seems that the euca2ools installation described in previous posts may not be complete and may be missing some key files. Or python (which, as we already know, had to be patched) is not OK. Or both.
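
For anyone repeating this: with -ff, strace writes one edi_output.<pid> file per traced process, and the failed file lookups can be pulled out with a grep along these lines:

grep ENOENT edi_output.* | less   # every call that failed with "No such file or directory"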

That's all I know for now.

Tuesday, January 10, 2012

How to register an image in openstack

After having installed and configured the worker and controller nodes of the openstack testbed, we would like to upload images into it.

First I downloaded some images to /root/images on the controller node. One is from Xin and another is a minimal test image I got from the net. I have no idea what they are worth.


Then I tried to follow the instructions

http://docs.openstack.org/cactus/openstack-compute/admin/content/part-ii-getting-virtual-machines.html

which go like this:

image="ubuntu1010-UEC-localuser-image.tar.gz"
wget http://c0179148.cdn1.cloudfiles.rackspacecloud.com/ubuntu1010-UEC-localuser-image.tar.gz
uec-publish-tarball $image [bucket-name] [hardware-arch]


and I could not find where the
uec-publish-tarball

command comes from. Finally I realized that it comes from Ubuntu and that the manual becomes Ubuntu-specific at this point without saying so explicitly.


So I tried a different approach.

cd /root/images

glance add name="My Image" < sl61-kvm.tar.bz2 # the image I got from Xin

The command responded that the image got Id=1, which is a good sign.

Then I did:

glance show 1

and got:

URI: http://0.0.0.0/images/1
Id: 1
Public: No
Name: My Image
Size: 199737477
Location: file:///var/lib/glance/images/1
Disk format: raw
Container format: ovf

This suggests that the file is in the system. But when I tried:

glance index

it said:

no public images found

So I registered it again, this time as public:

glance add name="My Image" is_public=true < sl61-kvm.tar.bz2
Added new image with ID: 2

I tried to list:

glance index
Found 1 public images...
ID Name Disk Format Container Format Size
---------------- ------------------------------ -------------------- -------------------- --------------
2 My Image raw ovf 199737477

So it seems we have uploaded an image to the system.
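
A side note: image 1 is still sitting in glance as a private duplicate. If I read the glance client right, it can be deleted (or the public flag could have been flipped in place instead of uploading the file twice); I have not tried either yet:

glance delete 1                   # drop the duplicate private image
glance update 1 is_public=true    # or: make an existing image public in place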


Now I have to figure out how to run it.

Friday, January 6, 2012

How to configure worker node - part 2

Compute node configuration - continued

We execute the following commands.

This first command synchronizes the database:


/usr/bin/nova-manage db sync

Now we have to create an admin user and a project (we call both "nova") and define a private network:

/usr/bin/nova-manage user admin nova                       # create admin user "nova"
/usr/bin/nova-manage project create nova nova              # create project "nova" owned by user "nova"
/usr/bin/nova-manage network create 192.168.0.0/24 1 256   # one network with 256 addresses

We check that the user and project were created correctly:

/usr/bin/nova-manage project list
nova

/usr/bin/nova-manage user list
nova
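
The network we created can be listed the same way (assuming this subcommand exists in this nova release):

/usr/bin/nova-manage network list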

Create Credentials

On the controller node execute

mkdir -p /root/creds

/usr/bin/python /usr/bin/nova-manage project zipfile nova nova /root/creds/novacreds.zip


If you encounter a python error, apply the python patch described a few posts earlier.
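
The credentials now have to get to the compute node. A minimal way to copy them over (the hostname "compute-node" is a placeholder for your own):

ssh root@compute-node mkdir -p /root/creds     # run on the controller
scp /root/creds/novacreds.zip root@compute-node:/root/creds/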

With novacreds.zip in /root/creds on the compute node, unpack it:

unzip /root/creds/novacreds.zip -d /root/creds/

A few files will appear, among them /root/creds/novarc. This file needs to be appended to .bashrc, but there is a catch: the first line of the file does not work in that setting and has to be replaced. (Once the content sits in ~/.bashrc, $BASH_SOURCE points at .bashrc itself, so the key directory would resolve to the home directory instead of /root/creds.)


Original line:

NOVA_KEY_DIR=$(pushd $(dirname $BASH_SOURCE)>/dev/null; pwd; popd>/dev/null)

has to be replaced with

NOVA_KEY_DIR=~/creds

The content of the novarc file is now:

NOVA_KEY_DIR=~/creds

export EC2_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXXXXXX:nova"
export EC2_SECRET_KEY="XXXXXXXXXXXXXXXXXXXXXXXX"
export EC2_URL="http://130.199.148.53:8773/services/Cloud"
export S3_URL="http://130.199.148.53:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
export NOVA_API_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXX"
export NOVA_USERNAME="nova"
export NOVA_URL="http://130.199.148.53:8774/v1.0/"


Where "XXXX.." strings denote keys which I do not post here, for security.

The content of the novarc file should now be appended to .bashrc and loaded:

cat /root/creds/novarc >> ~/.bashrc
source ~/.bashrc

This should be done both on compute and controller nodes.

Enable access to worker node

First unset any proxy (see the note below), and then allow ICMP (ping) and SSH access to the default security group:

euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
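
A note on the proxy: euca2ools talk directly to the controller's API endpoint, so a proxy variable left over in the environment gets in the way; something like this clears it:

unset http_proxy https_proxy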

Thursday, January 5, 2012

How to configure worker node

In the following I will describe how to configure the worker node. I assume that the worker node has already been installed following the instructions posted on this blog.


First of all, before we start, we still need to add nova-network (it has not been installed so far).

Do:

yum install openstack-nova-network

Once this is done, we can go on and edit the /etc/nova/nova.conf file.

First, add the following option to the file:

--daemonize=1 


The relevant switches, which have to point at the controller node and match our deployment, are:

--sql_connection
--s3_host
--rabbit_host
--ec2_api
--ec2_url
--fixed_range
--network_size

In the end the configuration file should look like this:

--auth_driver=nova.auth.dbdriver.DbDriver
--buckets_path=/var/lib/nova/buckets
--ca_path=/var/lib/nova/CA
--cc_host=
--credentials_template=/usr/share/nova/novarc.template
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--ec2_api=130.199.148.53
--ec2_url=http://130.199.148.53:8773/services/Cloud
--fixed_range=192.168.0.0/16
--glance_host=
--glance_port=9292
--image_service=nova.image.glance.GlanceImageService
--images_path=/var/lib/nova/images
--injected_network_template=/usr/share/nova/interfaces.rhel.template
--instances_path=/var/lib/nova/instances
--keys_path=/var/lib/nova/keys
--libvirt_type=kvm
--libvirt_xml_template=/usr/share/nova/libvirt.xml.template
--lock_path=/var/lib/nova/tmp
--logdir=/var/log/nova
--logging_context_format_string=%(asctime)s %(name)s: %(levelname)s [%(request_id)s %(user)s %(project)s] %(message)s
--logging_debug_format_suffix=
--logging_default_format_string=%(asctime)s %(name)s: %(message)s
--network_manager=nova.network.manager.VlanManager
--networks_path=/var/lib/nova/networks
--network_size=8
--node_availability_zone=nova
--rabbit_host=130.199.148.53
--routing_source_ip=130.199.148.53
--s3_host=130.199.148.53
--scheduler_driver=nova.scheduler.zone.ZoneScheduler
--sql_connection=mysql://{USER}:{PWD}@130.199.148.53/{DATABASE}
--state_path=/var/lib/nova
--use_cow_images=true
--use_ipv6=false
--use_s3=true
--use_syslog=false
--verbose=false
--vpn_client_template=/usr/share/nova/client.ovpn.template

where {USER}, {PWD} and {DATABASE} denote the nova database user, password, and database name.
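
Before going on, it is worth a quick check that the worker can actually reach the controller's MySQL with these credentials (same placeholders as above):

mysql -h 130.199.148.53 -u {USER} -p{PWD} {DATABASE} -e 'show tables;'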

Now go to the controller node and open the following ports for incoming connections: 3333, 3306, 5672, 8773, 8000.
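
On a RHEL/CentOS-style setup that could look like the following sketch (adjust to however you manage iptables):

for p in 3333 3306 5672 8773 8000; do
    iptables -I INPUT -p tcp --dport $p -j ACCEPT   # 3306=MySQL, 5672=RabbitMQ, 8773=EC2 API, 3333=objectstore/S3
done
service iptables save   # persist the rules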

Go back to the worker node and prepare a /root/bin/openstack-init.sh script with the following content:

#!/bin/bash
# start/stop/restart all nova services on this worker node
for n in ajax-console-proxy compute vncproxy network; do
    service openstack-nova-$n "$@"
done

Then run

/root/bin/openstack-init.sh stop
Stopping OpenStack Nova Web-based serial console proxy: [ OK ]
Stopping OpenStack Nova Compute Worker: [ OK ]
Stopping OpenStack Nova VNC Proxy: [ OK ]
Stopping OpenStack Nova Network Controller: [ OK ]
[root@gridreserve30 compute]# /root/bin/openstack-init.sh start
Starting OpenStack Nova Web-based serial console proxy: [ OK ]
Starting OpenStack Nova Compute Worker: [ OK ]
Starting OpenStack Nova VNC Proxy: [ OK ]
Starting OpenStack Nova Network Controller: [ OK ]

to be continued...