Deploying a Kubernetes 1.5.2 Cluster


Environment Preparation

hostname     IP               role
k8s-master   172.16.100.200   master/etcd
k8s-node1    172.16.100.201   node
k8s-node2    172.16.100.202   node

Prerequisites

  • Set up local host resolution (note the extra etcd alias for the master; the flanneld configuration below relies on it)

    # cat /etc/hosts
    172.16.100.200 k8s-master
    172.16.100.200 etcd
    172.16.100.201 k8s-node1
    172.16.100.202 k8s-node2
  • Stop and disable the firewall service that ships with CentOS

    # systemctl stop firewalld
    # systemctl disable firewalld

Flannel Network Configuration

Flannel is an overlay network tool designed by the CoreOS team for Kubernetes. Its goal is to give every host in a Kubernetes cluster a complete subnet of its own. Put simply, it ensures that Docker containers created on different nodes receive virtual IP addresses that are unique across the whole cluster, and that those containers can communicate with each other directly.

flannel uses etcd to manage the pool of allocatable IP ranges, watches etcd for each Pod's actual address, and maintains a Pod-to-node routing table in memory. It then sits between docker0 and the physical network: consulting that in-memory table, it encapsulates the packets docker0 hands to it and sends them across the physical network to the flanneld on the target host, which is what makes direct Pod-to-Pod communication possible.
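
Once flanneld is running (set up below), this mechanism is visible in each host's routing table: the entire overlay /16 is routed through flannel0, while the node's own /24 stays on docker0. A quick way to confirm this on any node (illustrative; the exact subnets will match whatever flannel assigned):

# ip route show | grep 172.17

Expect a 172.17.0.0/16 route via flannel0 and a narrower /24 route via docker0.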

Install and configure etcd on the etcd node

# yum -y install etcd
# grep ^[0-Z] /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
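
Start etcd and enable it at boot, then confirm the (single-member) cluster is healthy before continuing:

# systemctl start etcd && systemctl enable etcd
# etcdctl cluster-health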

Define the flannel network in etcd

# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
# etcdctl ls atomic.io/network/config
/atomic.io/network/config
# etcdctl get atomic.io/network/config
{"Network":"172.17.0.0/16"}
# curl -L http://localhost:2379/v2/keys/atomic.io/network/config
{"action":"get","node":{"key":"/atomic.io/network/config","value":"{\"Network\":\"172.17.0.0/16\"}","modifiedIndex":1544,"createdIndex":1544}}

Install, configure, and start flanneld on all nodes

# yum -y install flanneld
# grep ^[0-Z] /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# systemctl start flanneld
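
Enable flanneld at boot as well. On any node where docker was already running before flanneld started, restart docker so it picks up the bridge options flannel writes out (shown in /run/flannel/docker later in this guide):

# systemctl enable flanneld
# systemctl restart docker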

Check the flannel network

# etcdctl ls /atomic.io/network
/atomic.io/network/config
/atomic.io/network/subnets
# etcdctl ls /atomic.io/network/subnets
/atomic.io/network/subnets/172.17.45.0-24
/atomic.io/network/subnets/172.17.10.0-24
/atomic.io/network/subnets/172.17.57.0-24
# etcdctl get /atomic.io/network/subnets/172.17.45.0-24
{"PublicIP":"172.16.100.200"}
# etcdctl get /atomic.io/network/subnets/172.17.10.0-24
{"PublicIP":"172.16.100.201"}
# etcdctl get /atomic.io/network/subnets/172.17.57.0-24
{"PublicIP":"172.16.100.202"}
  • master:

    # ip add show flannel0
    6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.45.0/16 scope global flannel0
    valid_lft forever preferred_lft forever
  • node1:

    # ip add show flannel0
    4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.10.0/16 scope global flannel0
    valid_lft forever preferred_lft forever
  • node2:

    # ip add show flannel0
    4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.57.0/16 scope global flannel0
    valid_lft forever preferred_lft forever
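
The flannel0 addresses above double as a quick connectivity test: every node should be able to ping the others' flannel0 address across the overlay. For example, from k8s-master:

# ping -c 2 172.17.10.0
# ping -c 2 172.17.57.0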

Master node installation and configuration

  • Install kubernetes-master

    # yum -y install kubernetes-master
    # yum list installed "kubernetes*"
    Installed Packages
    kubernetes-client.x86_64 1.5.2-0.7.git269f928.el7 @extras
    kubernetes-master.x86_64 1.5.2-0.7.git269f928.el7 @extras
  • Edit the /etc/kubernetes/apiserver file

    # grep ^[0-Z] /etc/kubernetes/apiserver
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    KUBE_API_PORT="--port=8080"
    KUBELET_PORT="--kubelet-port=10250"
    KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    KUBE_API_ARGS=""
  • Start the relevant services and enable them at boot

    # for SERVICES in kube-apiserver kube-controller-manager kube-scheduler;do systemctl start $SERVICES && systemctl enable $SERVICES;done
    # for SERVICES in kube-apiserver kube-controller-manager kube-scheduler;do systemctl status $SERVICES | head -n 7; done
    ● kube-apiserver.service - Kubernetes API Server
    Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2017-05-04 14:23:54 CST; 3min 18s ago
    Docs: https://github.com/GoogleCloudPlatform/kubernetes
    Main PID: 8800 (kube-apiserver)
    CGroup: /system.slice/kube-apiserver.service
    └─8800 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
    ● kube-controller-manager.service - Kubernetes Controller Manager
    Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2017-05-04 14:23:54 CST; 3min 18s ago
    Docs: https://github.com/GoogleCloudPlatform/kubernetes
    Main PID: 8830 (kube-controller)
    CGroup: /system.slice/kube-controller-manager.service
    └─8830 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080
    ● kube-scheduler.service - Kubernetes Scheduler Plugin
    Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
    Active: active (running) since Thu 2017-05-04 14:23:54 CST; 3min 17s ago
    Docs: https://github.com/GoogleCloudPlatform/kubernetes
    Main PID: 8859 (kube-scheduler)
    CGroup: /system.slice/kube-scheduler.service
    └─8859 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
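
With all three services up, the API server answers on the insecure port configured earlier; a quick sanity check from the master:

# curl http://localhost:8080/version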

Node installation and configuration

  • Install kubernetes-node

    # yum -y install kubernetes-node
  • Start the docker service and confirm that the docker0 interface's IP address falls within the flannel0 subnet (a docker-side check follows this block):

    # systemctl start docker
    # ip add show docker0
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:ff:db:ec:ba brd ff:ff:ff:ff:ff:ff
    inet 172.17.4.1/24 brd 172.17.4.255 scope global docker0
    valid_lft forever preferred_lft forever

    # cat /run/flannel/docker
    DOCKER_OPT_BIP="--bip=172.17.4.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=true"
    DOCKER_OPT_MTU="--mtu=1472"
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.4.1/24 --ip-masq=true --mtu=1472"
    # cat /run/flannel/subnet.env
    FLANNEL_NETWORK=172.17.0.0/16
    FLANNEL_SUBNET=172.17.4.1/24
    FLANNEL_MTU=1472
    FLANNEL_IPMASQ=false
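
The /run/flannel/docker file above is how flanneld hands its bridge settings to docker (on CentOS this is typically wired up through a systemd drop-in shipped with the flannel package). To confirm docker actually applied them, inspect the default bridge network; the subnet should match FLANNEL_SUBNET:

# docker network inspect bridge | grep -i subnet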
  • Edit the /etc/kubernetes/config file to point at the kubernetes-master address

    # grep ^[0-Z] /etc/kubernetes/config
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=0"
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    KUBE_MASTER="--master=http://k8s-master:8080"
  • Edit the /etc/kubernetes/kubelet file, adjusting it for each node
    node1

    # grep ^[0-Z] /etc/kubernetes/kubelet
    KUBELET_ADDRESS="--address=0.0.0.0"
    KUBELET_HOSTNAME="--hostname-override=k8s-node1"
    KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    KUBELET_ARGS=""

node2

# grep ^[0-Z] /etc/kubernetes/kubelet 
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-node2"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
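
The pod infra ("pause") image referenced above comes from registry.access.redhat.com. Pre-pulling it on each node is optional, but avoids the first pod sitting in ContainerCreating while the image downloads:

# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest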

  • Start the kube-proxy and kubelet services on all nodes and enable them at boot
    # for SERVICES in kube-proxy kubelet
    do
    systemctl restart $SERVICES && systemctl enable $SERVICES && systemctl status $SERVICES
    done

Verify that the cluster installed successfully

# kubectl get node
NAME        STATUS    AGE
k8s-node1   Ready     5m
k8s-node2   Ready     2m
# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
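
As a final smoke test, schedule something and watch it land on the nodes (nginx is just a convenient public image here):

# kubectl run nginx --image=nginx --replicas=2
# kubectl get pods -o wide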