OpenStack Metadata and User Data

Metadata

The metadata service concept was first introduced by AWS. It supplies virtual machines with metadata such as the instance ID, hostname, SSH keys, IP addresses, and user-supplied scripts.

In OpenStack, an instance can obtain its metadata in two ways: through a config drive, or through the metadata RESTful service.

config drive

A config drive is a special filesystem. If an instance cannot obtain metadata from the metadata service at creation time (no DHCP, or no nova-api-metadata service), OpenStack writes the metadata to a config drive and attaches it when the instance launches. If cloud-init is installed in the instance, the config drive is mounted automatically and the metadata is read from it, after which the remaining initialization proceeds as usual.

Enabling config drive

Config drive is disabled by default and can be enabled in any of the following three ways:

  • Pass --config-drive true when creating the instance, or tick the Configuration Drive option on the Configuration tab
  • Set force_config_drive = true in /etc/nova/nova.conf on the compute node
    Note that if instances were already created on that compute node without config drive enabled, they will fail to start after being shut off, reporting DiskNotFound
  • Add the img_config_drive=mandatory property to the image metadata (recommended)

Config drive supports two formats: iso9660 (the default) and vfat. However, iso9660 prevents live migration of the instance; you must set config_drive_format=vfat to allow live migration. An iso9660 config drive appears in the instance as /dev/sr0, a vfat one as /dev/vdb.
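What cloud-init does with a mounted config drive amounts to reading JSON files from a fixed directory layout under the mount point. A minimal Python sketch of that read (the mount point is simulated with a temporary directory here, and the sample metadata values are hypothetical):

```python
import json
import os
import tempfile

def read_config_drive_metadata(mountpoint, version="latest"):
    """Read meta_data.json from a config-drive-style directory tree."""
    path = os.path.join(mountpoint, "openstack", version, "meta_data.json")
    with open(path) as f:
        return json.load(f)

# Simulate a mounted config drive; on a real instance this tree lives
# under the mounted /dev/sr0 (iso9660) or /dev/vdb (vfat) device.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "openstack", "latest"))
with open(os.path.join(root, "openstack", "latest", "meta_data.json"), "w") as f:
    json.dump({"uuid": "d8e02d56-0000-0000-0000-000000000000",
               "hostname": "demo.novalocal"}, f)

md = read_config_drive_metadata(root)
print(md["hostname"])  # demo.novalocal
```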

# cat /var/log/cloud-init.log
2018-09-27 11:48:53,229 - util.py[DEBUG]: Running command ['mount', '-o', 'ro,sync', '-t', 'auto', u'/dev/vdb', '/tmp/tmpM5VBjl'] with allowed return codes [0] (shell=False, capture=True)
2018-09-27 11:48:53,286 - openstack.py[DEBUG]: Selected version '2015-10-15' from ['2012-08-10', '2013-04-04', '2013-10-17', '2015-10-15', '2016-06-30', '2016-10-06', '2017-02-22', 'content', 'latest']
2018-09-27 11:48:53,286 - util.py[DEBUG]: Reading from /tmp/tmpM5VBjl/openstack/2015-10-15/vendor_data.json (quiet=False)
2018-09-27 11:48:53,287 - util.py[DEBUG]: Read 2 bytes from /tmp/tmpM5VBjl/openstack/2015-10-15/vendor_data.json
2018-09-27 11:48:53,288 - util.py[DEBUG]: Reading from /tmp/tmpM5VBjl/openstack/2015-10-15/user_data (quiet=False)
2018-09-27 11:48:53,288 - openstack.py[DEBUG]: Failed reading optional path /tmp/tmpM5VBjl/openstack/2015-10-15/user_data due to: [Errno 2] No such file or directory: '/tmp/tmpM5VBjl/openstack/2015-10-15/user_data'
2018-09-27 11:48:53,288 - util.py[DEBUG]: Reading from /tmp/tmpM5VBjl/openstack/2015-10-15/network_data.json (quiet=False)
2018-09-27 11:48:53,289 - util.py[DEBUG]: Read 553 bytes from /tmp/tmpM5VBjl/openstack/2015-10-15/network_data.json
2018-09-27 11:48:53,289 - util.py[DEBUG]: Reading from /tmp/tmpM5VBjl/openstack/2015-10-15/meta_data.json (quiet=False)
2018-09-27 11:48:53,289 - util.py[DEBUG]: Read 1884 bytes from /tmp/tmpM5VBjl/openstack/2015-10-15/meta_data.json
2018-09-27 11:48:53,290 - util.py[DEBUG]: Reading from /tmp/tmpM5VBjl/openstack/content/0000 (quiet=False)
2018-09-27 11:48:53,291 - util.py[DEBUG]: Read 159 bytes from /tmp/tmpM5VBjl/openstack/content/0000
2018-09-27 11:48:53,292 - util.py[DEBUG]: Reading from /tmp/tmpM5VBjl/ec2/latest/meta-data.json (quiet=False)
2018-09-27 11:48:53,292 - util.py[DEBUG]: Read 988 bytes from /tmp/tmpM5VBjl/ec2/latest/meta-data.json
2018-09-27 11:48:53,292 - util.py[DEBUG]: Running command ['umount', '/tmp/tmpM5VBjl'] with allowed return codes [0] (shell=False, capture=True)
2018-09-27 11:48:53,318 - util.py[DEBUG]: Recursively deleting /tmp/tmpM5VBjl
2018-09-27 11:48:53,319 - util.py[DEBUG]: Reading from /var/lib/cloud/data/instance-id (quiet=False)
2018-09-27 11:48:53,320 - handlers.py[DEBUG]: finish: init-local/search-ConfigDrive: SUCCESS: found local data from DataSourceConfigDrive
2018-09-27 11:48:53,320 - stages.py[INFO]: Loaded datasource DataSourceConfigDrive - DataSourceConfigDrive [net,ver=2][source=/dev/vdb]
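The "Selected version '2015-10-15'" line in the log reflects a simple rule: cloud-init picks the newest metadata version it supports from the directories present on the drive. A sketch of that selection logic (the SUPPORTED set is a hypothetical stand-in for the versions a given cloud-init release can parse):

```python
# Versions this (hypothetical) consumer knows how to parse.
SUPPORTED = {"2012-08-10", "2013-04-04", "2013-10-17", "2015-10-15"}

def select_version(available):
    """Pick the newest supported metadata version.

    Dated version strings (YYYY-MM-DD) sort correctly as plain strings;
    entries like 'latest' or 'content' are not dated versions and are
    filtered out by the SUPPORTED check.
    """
    dated = [v for v in available if v in SUPPORTED]
    return max(dated) if dated else None

# Directory names as listed in the log above.
available = ['2012-08-10', '2013-04-04', '2013-10-17', '2015-10-15',
             '2016-06-30', '2016-10-06', '2017-02-22', 'content', 'latest']
print(select_version(available))  # 2015-10-15
```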

Configuring the network via config drive

When DHCP is enabled on the subnet, cloud-init obtains metadata through the metadata RESTful service regardless of whether config drive was enabled at instance creation. The instance's IP address is therefore still acquired via DHCP (BOOTPROTO=dhcp), and the routing table gains a route to 169.254.169.254 whose gateway is the DHCP address of the instance's subnet (both addresses actually sit on the same interface in the dhcp-agent namespace, where an HTTP service listens on port 80).

# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=dhcp
DEVICE=eth0
HWADDR=fa:16:3e:d5:52:4f
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
# ip route
default via 192.168.104.1 dev eth0
169.254.169.254 via 192.168.104.2 dev eth0 proto static
192.168.104.0/24 dev eth0 proto kernel scope link src 192.168.104.7


[root@control02 ~]# ip netns exec qdhcp-76859894-b485-44b7-9da4-efdb6380ca04 ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-2f22c403-15@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:68:f4:94 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.169.254/16 brd 169.254.255.255 scope global ns-2f22c403-15
       valid_lft forever preferred_lft forever
    inet 192.168.104.2/24 brd 192.168.104.255 scope global ns-2f22c403-15
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe68:f494/64 scope link
       valid_lft forever preferred_lft forever
[root@control02 ~]# ip netns exec qdhcp-76859894-b485-44b7-9da4-efdb6380ca04 netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 13030/haproxy
tcp 0 0 169.254.169.254:53 0.0.0.0:* LISTEN 12996/dnsmasq
tcp 0 0 192.168.104.2:53 0.0.0.0:* LISTEN 12996/dnsmasq
tcp6 0 0 fe80::f816:3eff:fe68:53 :::* LISTEN 12996/dnsmasq
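The metadata route can also be picked out programmatically. For instance, a small helper (hypothetical, not part of any OpenStack component) that extracts the 169.254.169.254 gateway from `ip route` output, using the sample output from the instance above:

```python
import re

def metadata_gateway(ip_route_output):
    """Return the gateway of the 169.254.169.254 metadata route, if any."""
    for line in ip_route_output.splitlines():
        m = re.match(r"169\.254\.169\.254 via (\S+)", line.strip())
        if m:
            return m.group(1)
    return None

# Sample taken from the instance's routing table shown earlier.
routes = """\
default via 192.168.104.1 dev eth0
169.254.169.254 via 192.168.104.2 dev eth0 proto static
192.168.104.0/24 dev eth0 proto kernel scope link src 192.168.104.7
"""
print(metadata_gateway(routes))  # 192.168.104.2
```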

When DHCP is disabled on the subnet and config drive is enabled at instance creation, cloud-init obtains the instance's network configuration from the config drive and writes it to the configuration file. In that case the IP address is configured statically (BOOTPROTO=none), and there is no route to 169.254.169.254.

# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
# Created by cloud-init on instance boot automatically, do not edit.
#
BOOTPROTO=none
DEFROUTE=yes
DEVICE=eth0
GATEWAY=192.168.101.1
HWADDR=fa:16:3e:8c:cd:80
IPADDR=192.168.101.12
MTU=1450
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
# ip route
default via 192.168.101.1 dev eth0
192.168.101.0/24 dev eth0 proto kernel scope link src 192.168.101.12
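The static ifcfg file above is rendered by cloud-init from the network_data.json on the config drive. A simplified sketch of that conversion (the sample network_data is hypothetical but follows the OpenStack network_data.json layout, and the rendering covers only a single IPv4 network with a default route):

```python
# Hypothetical sample in the shape of OpenStack's network_data.json.
SAMPLE = {
    "links": [{"id": "tap1", "ethernet_mac_address": "fa:16:3e:8c:cd:80",
               "mtu": 1450}],
    "networks": [{"type": "ipv4", "link": "tap1",
                  "ip_address": "192.168.101.12",
                  "netmask": "255.255.255.0",
                  "routes": [{"network": "0.0.0.0", "netmask": "0.0.0.0",
                              "gateway": "192.168.101.1"}]}],
}

def render_ifcfg(data, device="eth0"):
    """Render a static RHEL-style ifcfg file from network_data-like input."""
    link = data["links"][0]
    net = data["networks"][0]
    gateway = net["routes"][0]["gateway"]
    return "\n".join([
        "BOOTPROTO=none",
        "DEFROUTE=yes",
        f"DEVICE={device}",
        f"GATEWAY={gateway}",
        f"HWADDR={link['ethernet_mac_address']}",
        f"IPADDR={net['ip_address']}",
        f"MTU={link['mtu']}",
        f"NETMASK={net['netmask']}",
        "ONBOOT=yes",
    ])

print(render_ifcfg(SAMPLE))
```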

If DHCP is disabled on the subnet and config drive is not enabled at instance creation, cloud-init will be unable to reach the metadata service when the instance launches and cannot obtain any metadata.

Metadata RESTful service

OpenStack exposes a RESTful interface through which instances can fetch their metadata over a REST API. The service is provided by three components: neutron-ns-metadata-proxy, neutron-metadata-agent, and nova-api-metadata.

  • neutron-ns-metadata-proxy
    neutron-ns-metadata-proxy runs on the network node. Because tenant network ranges may overlap with the network node's own addressing, OpenStack uses network namespaces: Neutron's routers and DHCP servers each live in their own namespace. Since an instance's metadata requests exit through a router or DHCP server, neutron-ns-metadata-proxy bridges the namespaces by forwarding HTTP requests over a Unix domain socket. It also adds the X-Neutron-Router-ID and X-Neutron-Network-ID headers so that neutron-metadata-agent can identify which instance sent the request and resolve its ID.
  • neutron-metadata-agent
    neutron-metadata-agent runs on the network node and forwards incoming metadata requests to nova-api-metadata. It resolves the instance and tenant IDs and adds them to the request's HTTP headers.
  • nova-api-metadata
    nova-api-metadata runs the RESTful service that handles the REST API requests sent by instances. It extracts the instance ID from the HTTP headers, reads the instance's metadata from the database, and returns the result.
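Since the instance ID travels as a plain HTTP header through this chain, it is protected by a shared-secret signature: neutron-metadata-agent signs the instance ID with HMAC-SHA256 using the metadata_proxy_shared_secret option, and nova-api-metadata recomputes the signature before trusting the header. A conceptual sketch (the secret value is hypothetical; in a real deployment it must match on both the Neutron and Nova sides):

```python
import hashlib
import hmac

# Hypothetical value of the metadata_proxy_shared_secret config option.
SHARED_SECRET = b"example-shared-secret"

def sign_instance_id(instance_id):
    """What the agent side does: HMAC-SHA256 over the instance ID."""
    return hmac.new(SHARED_SECRET, instance_id.encode(),
                    hashlib.sha256).hexdigest()

def verify(instance_id, signature):
    """What the API side does: recompute and compare in constant time."""
    return hmac.compare_digest(sign_instance_id(instance_id), signature)

sig = sign_instance_id("d8e02d56-0000-0000-0000-000000000000")
print(verify("d8e02d56-0000-0000-0000-000000000000", sig))  # True
```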

User-data

The user-data script type is identified by its first line. Supported formats include:

Batch (rem cmd)

PowerShell (#ps1_sysnative)

Bash (#!/bin/bash)

cloud-init (#cloud-config)
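For example, a hypothetical user-data snippet in the #cloud-config format that sets the hostname and runs a command on first boot might look like this:

```yaml
#cloud-config
hostname: demo
runcmd:
  - echo 'booted' > /tmp/booted
```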
