Deploying a Kubernetes 1.11.3 HA Cluster with kubeadm

Assuming a working proxy for reaching blocked registries is already in place, we will use kubeadm to deploy Kubernetes through an HTTP proxy.

Deployment plan

hostname       IP              role    components
k8s-master01   10.200.112.111  master  keepalived, nginx, etcd, kubelet, kube-apiserver
k8s-master02   10.200.112.112  master  keepalived, nginx, etcd, kubelet, kube-apiserver
k8s-master03   10.200.112.113  master  keepalived, nginx, etcd, kubelet, kube-apiserver
k8s-node01     10.200.112.114  node    kubelet, kube-proxy
master-vip     10.200.112.110  VIP

Create the instances on OpenStack

Create four instances boot-from-volume (the volumes are cloned from a snapshot pre-installed with docker-ce-18.03.1), each with a fixed IP, and manually create a port for the k8s-master-vip address so that this IP cannot be assigned to another VM.

# cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------------------------------------+------+
| ID | Volume ID | Status | Name | Size |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------+------+
| 9d696ab5-8368-4720-9ab1-0fa83508eecc | e6c18478-34b1-4ab0-a91b-109f30085ac2 | available | snapshot for centos7.5-docker-ce-18.03.1 | 50 |
+--------------------------------------+--------------------------------------+-----------+------------------------------------------+------+
# cinder create --snapshot-id 9d696ab5-8368-4720-9ab1-0fa83508eecc --name opsbase-k8s-master01
# cinder create --snapshot-id 9d696ab5-8368-4720-9ab1-0fa83508eecc --name opsbase-k8s-master02
# cinder create --snapshot-id 9d696ab5-8368-4720-9ab1-0fa83508eecc --name opsbase-k8s-master03
# cinder create --snapshot-id 9d696ab5-8368-4720-9ab1-0fa83508eecc --name opsbase-k8s-node01
# cinder list --status available
+--------------------------------------+-----------+----------------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------------------+------+-------------+----------+-------------+
| 0379ed28-a124-43bf-98e2-e31400c79bad | available | opsbase-k8s-master02 | 50 | - | true | |
| 5647c318-1d2e-4c5d-9313-9110e9668a68 | available | opsbase-k8s-node01 | 50 | - | true | |
| a14d54d2-81d1-4aa4-87c9-9b776b4fb179 | available | opsbase-k8s-master01 | 50 | - | true | |
| a59340dd-7702-4a5c-89e9-2da1db8bf226 | available | opsbase-k8s-master03 | 50 | - | true | |
+--------------------------------------+-----------+----------------------+------+-------------+----------+-------------+
# openstack server create --volume opsbase-k8s-master01 --flavor m5.medium --security-group public --key-name devops --config-drive True --nic net-id=net_external,v4-fixed-ip=10.200.112.111 opsbase-k8s-master01
# openstack server create --volume opsbase-k8s-master02 --flavor m5.medium --security-group public --key-name devops --config-drive True --nic net-id=net_external,v4-fixed-ip=10.200.112.112 opsbase-k8s-master02
# openstack server create --volume opsbase-k8s-master03 --flavor m5.medium --security-group public --key-name devops --config-drive True --nic net-id=net_external,v4-fixed-ip=10.200.112.113 opsbase-k8s-master03
# openstack server create --volume opsbase-k8s-node01 --flavor m5.medium --security-group public --key-name devops --config-drive True --nic net-id=net_external,v4-fixed-ip=10.200.112.114 opsbase-k8s-node01
# openstack port create --network net_external --project Operation --fixed-ip subnet=2add8374-be0f-4834-ad4c-486b5317ab87,ip-address=10.200.112.110 k8s-master-vip

Disable port security on the master nodes in OpenStack

OpenStack Neutron security groups enable MAC/IP filtering (ARP anti-spoofing) on every port by default: packets whose source MAC/IP do not belong to the port are dropped by the hypervisor. Port security must therefore be disabled on the three master nodes running keepalived; otherwise the keepalived backup nodes never receive the VRRP advertisements and all three nodes promote themselves to MASTER.

# openstack port list|grep 10.200.112.11[1-3]
| 7511287f-2390-42de-becf-86c3e468bc87 | | fa:16:3e:a5:53:41 | ip_address='10.200.112.112', subnet_id='2add8374-be0f-4834-ad4c-486b5317ab87' | ACTIVE |
| b82137f2-19c0-41ef-b534-8ba224a80a57 | | fa:16:3e:eb:01:5c | ip_address='10.200.112.113', subnet_id='2add8374-be0f-4834-ad4c-486b5317ab87' | ACTIVE |
| b9a017e7-b4bd-4596-a063-f5906e1eec4e | | fa:16:3e:5f:11:06 | ip_address='10.200.112.111', subnet_id='2add8374-be0f-4834-ad4c-486b5317ab87' | ACTIVE |
# openstack port set --no-security-group --disable-port-security b9a017e7-b4bd-4596-a063-f5906e1eec4e
# openstack port set --no-security-group --disable-port-security 7511287f-2390-42de-becf-86c3e468bc87
# openstack port set --no-security-group --disable-port-security b82137f2-19c0-41ef-b534-8ba224a80a57
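
To confirm the change took effect, the port's port_security_enabled attribute should now read False (a quick verification, not part of the original log):

# openstack port show b9a017e7-b4bd-4596-a063-f5906e1eec4e -c port_security_enabled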

Environment preparation [all_node]

Version information

  • Linux release:

    # cat /etc/redhat-release
    CentOS Linux release 7.5.1804 (Core)

  • Kernel version:

    # uname -r
    4.18.11-1.el7.elrepo.x86_64

  • Docker version (docker-ce-18.06 has a known problem with HTTP proxies):

    # docker -v
    Docker version 18.03.1-ce, build 9ee9f40

Configure the Docker HTTP proxy

# vim /etc/hosts
10.200.112.111 opsbase-k8s-master01
10.200.112.112 opsbase-k8s-master02
10.200.112.113 opsbase-k8s-master03
10.200.112.114 opsbase-k8s-node01
# mkdir /etc/systemd/system/docker.service.d/
# cd /etc/systemd/system/docker.service.d/
# cat > http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://10.200.112.21:8118/" "NO_PROXY=localhost,127.0.0.1,10.200.112.111,10.200.112.112,10.200.112.113"
EOF
# cat > https-proxy.conf <<EOF
[Service]
Environment="HTTPS_PROXY=http://10.200.112.21:8118/" "NO_PROXY=localhost,127.0.0.1,10.200.112.111,10.200.112.112,10.200.112.113"
EOF
# systemctl daemon-reload
# systemctl restart docker
# docker info|grep -i proxy
HTTP Proxy: http://10.200.112.21:8118/
HTTPS Proxy: http://10.200.112.21:8118/
No Proxy: localhost,127.0.0.1,10.200.112.111,10.200.112.112,10.200.112.113

Install the required packages

The Alibaba Cloud Kubernetes yum repository can be used:

# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum -y install kubelet-1.11.3 kubeadm-1.11.3 kubectl-1.11.3
# yum list kubeadm kubelet kubectl
Installed Packages
kubeadm.x86_64 1.11.3-0 @kubernetes
kubectl.x86_64 1.11.3-0 @kubernetes
kubelet.x86_64 1.11.3-0 @kubernetes
# systemctl enable kubelet

Enable IPVS

kube-proxy support for IPVS was introduced in Kubernetes 1.8 and reached GA (General Availability) in 1.11.

# yum -y install ipvsadm
# modprobe ip_vs
# modprobe ip_vs_rr
# modprobe ip_vs_wrr
# modprobe ip_vs_sh
# modprobe nf_conntrack_ipv4
# lsmod |grep ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 151552 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 135168 2 nf_conntrack_ipv4,ip_vs
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs

# cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
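
With /etc/modules-load.d/ipvs.conf in place, systemd loads these modules at every boot. A quick confirmation (an added check, assuming the standard systemd-modules-load service):

# systemctl restart systemd-modules-load && lsmod | grep -e ip_vs -e nf_conntrack_ipv4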

Fetch the required Docker images

Get the list of base component images from kubeadm:

# kubeadm config images list --kubernetes-version=v1.11.3
k8s.gcr.io/kube-apiserver-amd64:v1.11.3
k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
k8s.gcr.io/kube-scheduler-amd64:v1.11.3
k8s.gcr.io/kube-proxy-amd64:v1.11.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3

Pull the base images with kubeadm:

# kubeadm config images pull --kubernetes-version=v1.11.3
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.11.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.11.3
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.18
[config/images] Pulled k8s.gcr.io/coredns:1.1.3

Configure nginx and keepalived [master]

nginx

nginx provides a layer-4 TCP reverse proxy in front of the master nodes' kube-apiservers.

# mkdir /root/k8s-cluster
# cat > /root/k8s-cluster/nginx.conf <<EOF
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
stream {
    server {
        listen 8443;
        proxy_pass k8s-master;
    }
    upstream k8s-master {
        server 10.200.112.111:6443;
        server 10.200.112.112:6443;
        server 10.200.112.113:6443;
    }
}
EOF

# docker run -d --name k8s-nginx -v /root/k8s-cluster/nginx.conf:/etc/nginx/nginx.conf -p 8443:8443 --restart always nginx
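
No kube-apiserver is listening yet, so the upstream cannot be tested end to end, but the proxy itself should already be accepting connections (an added check):

# netstat -lntp | grep 8443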

keepalived

keepalived provides failover of the VIP for the kube-apiserver service across the k8s-master nodes. (ip_vs must be enabled first.)

# docker run -d --name k8s-keepalived --restart=always --net=host --cap-add=NET_ADMIN \
-e KEEPALIVED_INTERFACE=eth0 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['10.200.112.110']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['10.200.112.111','10.200.112.112','10.200.112.113']" \
-e KEEPALIVED_PASSWORD=kubernetes \
osixia/keepalived
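
Once the container is running on all three masters, exactly one of them should hold the VIP (an added check):

# docker logs k8s-keepalived
# ip addr show eth0 | grep 10.200.112.110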

Configure the masters [master]

k8s-master01

Generate the configuration file

# cat > /root/k8s-cluster/kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
apiServerCertSANs:
- "opsbase-k8s-master01"
- "opsbase-k8s-master02"
- "opsbase-k8s-master03"
- "10.200.112.110"
- "10.200.112.111"
- "10.200.112.112"
- "10.200.112.113"
- "127.0.0.1"
api:
  advertiseAddress: 10.200.112.111
  controlPlaneEndpoint: 10.200.112.110:8443
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.200.112.111:2379"
      advertise-client-urls: "https://10.200.112.111:2379"
      listen-peer-urls: "https://10.200.112.111:2380"
      initial-advertise-peer-urls: "https://10.200.112.111:2380"
      initial-cluster: "opsbase-k8s-master01=https://10.200.112.111:2380"
    serverCertSANs:
    - opsbase-k8s-master01
    - 10.200.112.111
    peerCertSANs:
    - opsbase-k8s-master01
    - 10.200.112.111
networking:
  podSubnet: "10.244.0.0/16"
kubeProxy:
  config:
    #mode: ipvs
    mode: iptables
EOF

Initialize the Kubernetes cluster

If initialization fails, clean up the environment with kubeadm reset before re-initializing.
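
A minimal cleanup sketch (kubeadm reset removes /etc/kubernetes and the local etcd data; flushing iptables and the IPVS tables is an extra precaution, not from the original log):

# kubeadm reset
# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# ipvsadm --clear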

# kubeadm init --config /root/k8s-cluster/kubeadm-config.yaml
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I1018 17:37:54.958292 15912 kernel_validator.go:81] Validating kernel version
I1018 17:37:54.958578 15912 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [opsbase-k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local opsbase-k8s-master01 opsbase-k8s-master02 opsbase-k8s-master03] and IPs [10.96.0.1 10.200.112.111 10.200.112.110 10.200.112.110 10.200.112.111 10.200.112.112 10.200.112.113 127.0.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [opsbase-k8s-master01 localhost opsbase-k8s-master01] and IPs [127.0.0.1 ::1 10.200.112.111]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [opsbase-k8s-master01 localhost opsbase-k8s-master01] and IPs [10.200.112.111 127.0.0.1 ::1 10.200.112.111]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.505348 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node opsbase-k8s-master01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node opsbase-k8s-master01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "opsbase-k8s-master01" as an annotation
[bootstraptoken] using token: qis1en.sh1k760bz9bei877
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.200.112.110:8443 --token qis1en.sh1k760bz9bei877 --discovery-token-ca-cert-hash sha256:5e2687c96590e28392c193e355cb6000adf7ce35891df4fad5809ca33316b738
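
Save the join command; bootstrap tokens expire after 24 hours by default, and a fresh join command can be printed at any time (an added tip):

# kubeadm token create --print-join-command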

Configure kubectl

# su - kubernetes
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-f2448 0/1 Pending 0 6m <none> <none> <none>
coredns-78fcdf6894-hlsfn 0/1 Pending 0 6m <none> <none> <none>
etcd-opsbase-k8s-master01 1/1 Running 0 6m 10.200.112.111 opsbase-k8s-master01 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 6m 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 6m 10.200.112.111 opsbase-k8s-master01 <none>
kube-proxy-gd2p4 1/1 Running 0 6m 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 5m 10.200.112.111 opsbase-k8s-master01 <none>
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:33057 0.0.0.0:* LISTEN 3049/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 3049/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 3713/kube-proxy
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 3385/kube-scheduler
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3349/etcd
tcp 0 0 10.200.112.111:2379 0.0.0.0:* LISTEN 3349/etcd
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 3342/kube-controlle
tcp 0 0 10.200.112.111:2380 0.0.0.0:* LISTEN 3349/etcd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1068/sshd
tcp6 0 0 :::8443 :::* LISTEN 1301/docker-proxy
tcp6 0 0 :::10250 :::* LISTEN 3049/kubelet
tcp6 0 0 :::6443 :::* LISTEN 3394/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 3713/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 1068/sshd

Package the certificates and copy them to the other master nodes

# cd /etc/kubernetes
# tar zcf k8s-key.tar.gz admin.conf pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.*
# scp k8s-key.tar.gz opsbase-k8s-master02:/etc/kubernetes/
# scp k8s-key.tar.gz opsbase-k8s-master03:/etc/kubernetes/

k8s-master02

Generate the configuration file

# cat > /root/k8s-cluster/kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
apiServerCertSANs:
- "opsbase-k8s-master01"
- "opsbase-k8s-master02"
- "opsbase-k8s-master03"
- "10.200.112.110"
- "10.200.112.111"
- "10.200.112.112"
- "10.200.112.113"
- "127.0.0.1"
api:
  advertiseAddress: 10.200.112.112
  controlPlaneEndpoint: 10.200.112.110:8443
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.200.112.112:2379"
      advertise-client-urls: "https://10.200.112.112:2379"
      listen-peer-urls: "https://10.200.112.112:2380"
      initial-advertise-peer-urls: "https://10.200.112.112:2380"
      initial-cluster: "opsbase-k8s-master01=https://10.200.112.111:2380,opsbase-k8s-master02=https://10.200.112.112:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - opsbase-k8s-master02
    - 10.200.112.112
    peerCertSANs:
    - opsbase-k8s-master02
    - 10.200.112.112
networking:
  podSubnet: "10.244.0.0/16"
kubeProxy:
  config:
    #mode: ipvs
    mode: iptables
EOF

Configure the kubelet

# tar zxf /etc/kubernetes/k8s-key.tar.gz -C /etc/kubernetes/
# kubeadm alpha phase certs all --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase kubelet config write-to-disk --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase kubelet write-env-file --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase kubeconfig kubelet --config /root/k8s-cluster/kubeadm-config.yaml
# systemctl start kubelet
# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4q2fs 0/1 Pending 0 11m <none> <none> <none>
coredns-78fcdf6894-tsql9 0/1 Pending 0 11m <none> <none> <none>
etcd-opsbase-k8s-master01 1/1 Running 0 10m 10.200.112.111 opsbase-k8s-master01 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 11m 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 11m 10.200.112.111 opsbase-k8s-master01 <none>
kube-proxy-b5p88 1/1 Running 0 8m 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-dz9qq 1/1 Running 0 11m 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 11m 10.200.112.111 opsbase-k8s-master01 <none>

Add etcd to the cluster

# export CP0_IP="10.200.112.111"
# export CP0_HOSTNAME="opsbase-k8s-master01"
# export CP1_IP="10.200.112.112"
# export CP1_HOSTNAME="opsbase-k8s-master02"
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
# kubeadm alpha phase etcd local --config /root/k8s-cluster/kubeadm-config.yaml
# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4q2fs 0/1 Pending 0 1h <none> <none> <none>
coredns-78fcdf6894-tsql9 0/1 Pending 0 1h <none> <none> <none>
etcd-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
etcd-opsbase-k8s-master02 1/1 Running 0 15s 10.200.112.112 opsbase-k8s-master02 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-proxy-b5p88 1/1 Running 0 1h 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-dz9qq 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
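
The etcd cluster health can also be checked directly, reusing the same etcdctl flags as the member add command above (an added check):

# kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 cluster-health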

Start the master control-plane services

# kubeadm alpha phase kubeconfig all --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase controlplane all --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase mark-master --config /root/k8s-cluster/kubeadm-config.yaml

Configure kubectl (optional)

# su - kubernetes
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4q2fs 0/1 Pending 0 1h <none> <none> <none>
coredns-78fcdf6894-tsql9 0/1 Pending 0 1h <none> <none> <none>
etcd-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
etcd-opsbase-k8s-master02 1/1 Running 0 4m 10.200.112.112 opsbase-k8s-master02 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-apiserver-opsbase-k8s-master02 1/1 Running 0 1m 10.200.112.112 opsbase-k8s-master02 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master02 1/1 Running 0 1m 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-b5p88 1/1 Running 0 1h 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-dz9qq 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 1h 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master02 1/1 Running 0 1m 10.200.112.112 opsbase-k8s-master02 <none>

k8s-master03

Generate the configuration file

# cat > /root/k8s-cluster/kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.3
apiServerCertSANs:
- "opsbase-k8s-master01"
- "opsbase-k8s-master02"
- "opsbase-k8s-master03"
- "10.200.112.110"
- "10.200.112.111"
- "10.200.112.112"
- "10.200.112.113"
- "127.0.0.1"
api:
  advertiseAddress: 10.200.112.113
  controlPlaneEndpoint: 10.200.112.110:8443
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.200.112.113:2379"
      advertise-client-urls: "https://10.200.112.113:2379"
      listen-peer-urls: "https://10.200.112.113:2380"
      initial-advertise-peer-urls: "https://10.200.112.113:2380"
      initial-cluster: "opsbase-k8s-master01=https://10.200.112.111:2380,opsbase-k8s-master02=https://10.200.112.112:2380,opsbase-k8s-master03=https://10.200.112.113:2380"
      initial-cluster-state: existing
    serverCertSANs:
    - opsbase-k8s-master03
    - 10.200.112.113
    peerCertSANs:
    - opsbase-k8s-master03
    - 10.200.112.113
networking:
  podSubnet: "10.244.0.0/16"
kubeProxy:
  config:
    #mode: ipvs
    mode: iptables
EOF

Configure the kubelet

# tar zxf /etc/kubernetes/k8s-key.tar.gz -C /etc/kubernetes/
# kubeadm alpha phase certs all --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase kubelet config write-to-disk --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase kubelet write-env-file --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase kubeconfig kubelet --config /root/k8s-cluster/kubeadm-config.yaml
# systemctl start kubelet

Add etcd to the cluster

# export CP0_IP="10.200.112.111"
# export CP0_HOSTNAME="opsbase-k8s-master01"
# export CP2_IP="10.200.112.113"
# export CP2_HOSTNAME="opsbase-k8s-master03"
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
# kubeadm alpha phase etcd local --config /root/k8s-cluster/kubeadm-config.yaml

Start the master control-plane services

# kubeadm alpha phase kubeconfig all --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase controlplane all --config /root/k8s-cluster/kubeadm-config.yaml
# kubeadm alpha phase mark-master --config /root/k8s-cluster/kubeadm-config.yaml

Configure kubectl (optional)

# su - kubernetes
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4q2fs 0/1 Pending 0 14h <none> <none> <none>
coredns-78fcdf6894-tsql9 0/1 Pending 0 14h <none> <none> <none>
etcd-opsbase-k8s-master01 1/1 Running 0 14h 10.200.112.111 opsbase-k8s-master01 <none>
etcd-opsbase-k8s-master02 1/1 Running 0 13h 10.200.112.112 opsbase-k8s-master02 <none>
etcd-opsbase-k8s-master03 1/1 Running 0 1m 10.200.112.113 opsbase-k8s-master03 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 14h 10.200.112.111 opsbase-k8s-master01 <none>
kube-apiserver-opsbase-k8s-master02 1/1 Running 0 13h 10.200.112.112 opsbase-k8s-master02 <none>
kube-apiserver-opsbase-k8s-master03 1/1 Running 0 19s 10.200.112.113 opsbase-k8s-master03 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 14h 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master02 1/1 Running 0 13h 10.200.112.112 opsbase-k8s-master02 <none>
kube-controller-manager-opsbase-k8s-master03 1/1 Running 0 19s 10.200.112.113 opsbase-k8s-master03 <none>
kube-proxy-b5p88 1/1 Running 0 14h 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-dz9qq 1/1 Running 0 14h 10.200.112.111 opsbase-k8s-master01 <none>
kube-proxy-vppm4 1/1 Running 0 13h 10.200.112.113 opsbase-k8s-master03 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 14h 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master02 1/1 Running 0 13h 10.200.112.112 opsbase-k8s-master02 <none>
kube-scheduler-opsbase-k8s-master03 1/1 Running 0 19s 10.200.112.113 opsbase-k8s-master03 <none>

Deploy the Pod network (Flannel) [any master]

Make sure the Network field in the Flannel manifest matches the podSubnet in the kubeadm configuration.

$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ grep -w Network kube-flannel.yml
"Network": "10.244.0.0/16",
$ kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4q2fs 1/1 Running 0 17h 10.244.1.3 opsbase-k8s-master02 <none>
coredns-78fcdf6894-tsql9 1/1 Running 0 17h 10.244.1.2 opsbase-k8s-master02 <none>
etcd-opsbase-k8s-master01 1/1 Running 0 1m 10.200.112.111 opsbase-k8s-master01 <none>
etcd-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
etcd-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 1m 10.200.112.111 opsbase-k8s-master01 <none>
kube-apiserver-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
kube-apiserver-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 1m 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
kube-controller-manager-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
kube-flannel-ds-amd64-2sgds 1/1 Running 0 13m 10.200.112.111 opsbase-k8s-master01 <none>
kube-flannel-ds-amd64-htzbs 1/1 Running 0 13m 10.200.112.112 opsbase-k8s-master02 <none>
kube-flannel-ds-amd64-vh52z 1/1 Running 0 13m 10.200.112.113 opsbase-k8s-master03 <none>
kube-proxy-b5p88 1/1 Running 0 17h 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-dz9qq 1/1 Running 0 17h 10.200.112.111 opsbase-k8s-master01 <none>
kube-proxy-vppm4 1/1 Running 0 16h 10.200.112.113 opsbase-k8s-master03 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 1m 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
kube-scheduler-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
$ kubectl get node
NAME STATUS ROLES AGE VERSION
opsbase-k8s-master01 Ready master 17h v1.11.3
opsbase-k8s-master02 Ready master 17h v1.11.3
opsbase-k8s-master03 Ready master 16h v1.11.3
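
With CoreDNS now Running, cluster DNS can be smoke-tested from a throwaway pod (an added check; busybox:1.28 is used because nslookup is broken in later busybox images):

$ kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup kubernetes.default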

Join the worker node [node]

# kubeadm join 10.200.112.110:8443 --token qis1en.sh1k760bz9bei877 --discovery-token-ca-cert-hash sha256:5e2687c96590e28392c193e355cb6000adf7ce35891df4fad5809ca33316b738
[preflight] running pre-flight checks
I1019 11:19:23.210371 2182 kernel_validator.go:81] Validating kernel version
I1019 11:19:23.210598 2182 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "10.200.112.110:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.200.112.110:8443"
[discovery] Requesting info from "https://10.200.112.110:8443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.200.112.110:8443"
[discovery] Successfully established connection with API Server "10.200.112.110:8443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "opsbase-k8s-node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
$ kubectl get node
NAME STATUS ROLES AGE VERSION
opsbase-k8s-master01 Ready master 17h v1.11.3
opsbase-k8s-master02 Ready master 17h v1.11.3
opsbase-k8s-master03 Ready master 16h v1.11.3
opsbase-k8s-node01 Ready <none> 1m v1.11.3
$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
coredns-78fcdf6894-4q2fs 1/1 Running 0 17h 10.244.1.3 opsbase-k8s-master02 <none>
coredns-78fcdf6894-tsql9 1/1 Running 0 17h 10.244.1.2 opsbase-k8s-master02 <none>
etcd-opsbase-k8s-master01 1/1 Running 0 23m 10.200.112.111 opsbase-k8s-master01 <none>
etcd-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
etcd-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
kube-apiserver-opsbase-k8s-master01 1/1 Running 0 23m 10.200.112.111 opsbase-k8s-master01 <none>
kube-apiserver-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
kube-apiserver-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
kube-controller-manager-opsbase-k8s-master01 1/1 Running 0 23m 10.200.112.111 opsbase-k8s-master01 <none>
kube-controller-manager-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
kube-controller-manager-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
kube-flannel-ds-amd64-2sgds 1/1 Running 0 35m 10.200.112.111 opsbase-k8s-master01 <none>
kube-flannel-ds-amd64-htzbs 1/1 Running 0 35m 10.200.112.112 opsbase-k8s-master02 <none>
kube-flannel-ds-amd64-rn8kj 1/1 Running 4 5m 10.200.112.114 opsbase-k8s-node01 <none>
kube-flannel-ds-amd64-vh52z 1/1 Running 0 35m 10.200.112.113 opsbase-k8s-master03 <none>
kube-proxy-5st7w 1/1 Running 0 5m 10.200.112.114 opsbase-k8s-node01 <none>
kube-proxy-b5p88 1/1 Running 0 17h 10.200.112.112 opsbase-k8s-master02 <none>
kube-proxy-dz9qq 1/1 Running 0 17h 10.200.112.111 opsbase-k8s-master01 <none>
kube-proxy-vppm4 1/1 Running 0 16h 10.200.112.113 opsbase-k8s-master03 <none>
kube-scheduler-opsbase-k8s-master01 1/1 Running 0 23m 10.200.112.111 opsbase-k8s-master01 <none>
kube-scheduler-opsbase-k8s-master02 1/1 Running 0 16h 10.200.112.112 opsbase-k8s-master02 <none>
kube-scheduler-opsbase-k8s-master03 1/1 Running 0 2h 10.200.112.113 opsbase-k8s-master03 <none>
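
Finally, control-plane failover can be smoke-tested: stop the keepalived container on whichever master currently holds the VIP, confirm the VIP moves to another master, and verify that kubectl still works through 10.200.112.110:8443 (a hedged sketch, not part of the original log):

# docker stop k8s-keepalived                  # on the master currently holding the VIP
# ip addr show eth0 | grep 10.200.112.110     # on the other masters: the VIP should appear on one of them
$ kubectl get node                            # should still succeed through the VIP
# docker start k8s-keepalived                 # restore the stopped instance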