Deploying Kubernetes 1.8.5 with kubeadm

With a way around the Great Firewall already in place, we will use kubeadm to deploy Kubernetes through an HTTP proxy.

Environment preparation (run on all nodes)

hostname     IP              role
k8s-master   172.16.100.50   master/etcd
k8s-node1    172.16.100.51   node
k8s-node2    172.16.100.52   node

Disable swap

Kubernetes 1.8 and later require swap to be disabled; otherwise you will see the following error:
running with swap on is not supported. Please disable swap

# swapoff -a
# sed -i '/swap/d' /etc/fstab
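To see what the sed command does without touching the real /etc/fstab, you can run it against a throwaway copy first (the sample entries below are made up for the demonstration):

```shell
# Build a fake fstab with one regular mount and one swap entry.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /    ext4 defaults 0 1
/dev/sda2      none swap sw       0 0
EOF

# Same command as above, pointed at the copy: delete every line mentioning swap.
sed -i '/swap/d' /tmp/fstab.demo

cat /tmp/fstab.demo   # only the ext4 root entry remains
```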

Configure the HTTP proxy

kubeadm init needs to reach Google's sites; without a working proxy you will see errors such as:
unable to get URL "https://dl.k8s.io/release/stable-1.8.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.8.txt: dial tcp 172.217.160.112:443: i/o timeout

# vi ~/.profile
export http_proxy="http://k8s-master:8118"
export https_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,172.16.100.50"

Note: if 172.16.100.50 is not added to no_proxy, you will get the warning [preflight] WARNING: Connection to "https://172.16.100.50:6443" uses proxy "http://172.16.100.50:8118". If that is not intended, adjust your proxy settings.
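After editing ~/.profile, re-source it and confirm the variables are actually exported; the values below mirror the configuration above:

```shell
# Export the proxy settings (normally done by sourcing ~/.profile).
export http_proxy="http://k8s-master:8118"
export https_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,172.16.100.50"

# Confirm all three are visible to child processes such as kubeadm.
env | grep -i _proxy
```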

Configure an HTTP proxy for Docker

# mkdir /etc/systemd/system/docker.service.d/
# cd /etc/systemd/system/docker.service.d/
# vi http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://k8s-master:8118/"
Environment="HTTPS_PROXY=http://k8s-master:8118/"
# systemctl daemon-reload
# systemctl restart docker

Add the Kubernetes apt repository

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

Install the required packages

# apt-get update
# apt-get install -y docker.io kubelet=1.8.5-00 kubeadm=1.8.5-00 kubectl=1.8.5-00

Deploy the master node

Initialize with kubeadm

root@k8s-master:~# kubeadm init --apiserver-advertise-address 172.16.100.50 --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.100.50]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 615.502170 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master as master by adding a label and a taint
[markmaster] Master k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 3d52f3.9899527f02a75122
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 3d52f3.9899527f02a75122 172.16.100.50:6443 --discovery-token-ca-cert-hash sha256:c04f230146d11fd87932bb589b0a6ccc897bd15f99bda74f009a69919de5a205

The init process mainly does the following:

  1. [preflight]: kubeadm runs pre-flight checks;
  2. [certificates]: generates the token and certificates;
  3. [kubeconfig]~[etcd]: writes the relevant configuration files;
  4. [init]~[bootstraptoken]: installs the master components, pulling their Docker images from Google's registry; this step can take a while, depending mainly on network quality;
  5. [addons]: installs the kube-dns and kube-proxy add-ons;
  6. reports that the Kubernetes master initialized successfully;
  7. shows how to configure kubectl (as a regular user);
  8. shows how to install a pod network (see http://kubernetes.io/docs/admin/addons/ );
  9. shows how to join other nodes to the cluster (record this command).
  • Manage Kubernetes as a regular user:
    vnimos@k8s-master:~$ mkdir -p $HOME/.kube
    vnimos@k8s-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    vnimos@k8s-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
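If you are operating as root instead, a common alternative is to point kubectl straight at the admin kubeconfig rather than copying it into $HOME:

```shell
# Root-only alternative: use the admin kubeconfig in place.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```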

Check the Kubernetes status

Since no pod network has been deployed yet, the node is NotReady and kube-dns remains in the Pending state.

vnimos@k8s-master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2017-12-21 15:33:24 CST; 14min ago
Docs: http://kubernetes.io/docs/
Main PID: 7941 (kubelet)
Tasks: 16
Memory: 42.7M
CPU: 15.932s
CGroup: /system.slice/kubelet.service
└─7941 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests -
vnimos@k8s-master:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/kube-apiserver-amd64 v1.8.5 ff90510bd7a8 13 days ago 194 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.8.5 b3710be972a6 13 days ago 129 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.8.5 b7977f445d3b 13 days ago 55 MB
gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 10 months ago 169 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 19 months ago 747 kB
vnimos@k8s-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m v1.8.5
vnimos@k8s-master:~$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8s-master 1/1 Running 0 3s 172.16.100.50 k8s-master
kube-apiserver-k8s-master 1/1 Running 0 3s 172.16.100.50 k8s-master
kube-controller-manager-k8s-master 1/1 Running 0 3s 172.16.100.50 k8s-master
kube-dns-545bc4bfd4-d299p 0/3 Pending 0 19m <none> <none>
kube-proxy-9bnnx 1/1 Running 0 19m 172.16.100.50 k8s-master
kube-scheduler-k8s-master 1/1 Running 0 3s 172.16.100.50 k8s-master

Deploy the pod network (Flannel)

vnimos@k8s-master:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
vnimos@k8s-master:~$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8s-master 1/1 Running 0 1m 172.16.100.50 k8s-master
kube-apiserver-k8s-master 1/1 Running 0 1m 172.16.100.50 k8s-master
kube-controller-manager-k8s-master 1/1 Running 0 1m 172.16.100.50 k8s-master
kube-dns-545bc4bfd4-d299p 3/3 Running 0 31m 10.244.0.2 k8s-master
kube-flannel-ds-fw56r 1/1 Running 0 2m 172.16.100.50 k8s-master
kube-proxy-9bnnx 1/1 Running 0 31m 172.16.100.50 k8s-master
kube-scheduler-k8s-master 1/1 Running 0 1m 172.16.100.50 k8s-master
vnimos@k8s-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 31m v1.8.5
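The --pod-network-cidr=10.244.0.0/16 passed to kubeadm init was not arbitrary: the kube-flannel.yml manifest (at the time of writing) ships a net-conf.json ConfigMap whose default Network is 10.244.0.0/16, and the two values must agree. A sketch of that fragment, written to a temp file for illustration:

```shell
# net-conf.json as shipped in kube-flannel.yml (reproduced in a temp file).
cat > /tmp/net-conf.json <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
EOF

# The Network value must equal kubeadm init's --pod-network-cidr.
grep '"Network"' /tmp/net-conf.json
```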

Deploy the worker nodes

Join the Kubernetes cluster

If you forgot to record the token after setting up the master, you can retrieve it with kubeadm token list.
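The join command also needs the --discovery-token-ca-cert-hash value, which is just the SHA-256 digest of the cluster CA's public key in DER form. On the master it can be recomputed from /etc/kubernetes/pki/ca.crt; the sketch below runs the same pipeline against a throwaway self-signed certificate so the mechanics are visible without a cluster:

```shell
# Stand-in for the cluster CA (on the real master, use /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Extract the public key, DER-encode it, and take its SHA-256 digest.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //')
echo "sha256:${hash}"
```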

root@k8s-node1:~# kubeadm join --token 3d52f3.9899527f02a75122 172.16.100.50:6443 --discovery-token-ca-cert-hash sha256:c04f230146d11fd87932bb589b0a6ccc897bd15f99bda74f009a69919de5a205
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "172.16.100.50:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.16.100.50:6443"
[discovery] Requesting info from "https://172.16.100.50:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.100.50:6443"
[discovery] Successfully established connection with API Server "172.16.100.50:6443"
[bootstrap] Detected server version: v1.8.5
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Check the Kubernetes status

vnimos@k8s-master:~$ kubectl get pod -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE
etcd-k8s-master 1/1 Running 0 18m 172.16.100.50 k8s-master
kube-apiserver-k8s-master 1/1 Running 0 17m 172.16.100.50 k8s-master
kube-controller-manager-k8s-master 1/1 Running 0 18m 172.16.100.50 k8s-master
kube-dns-545bc4bfd4-frlb5 3/3 Running 0 17m 10.244.0.2 k8s-master
kube-flannel-ds-68xvq 1/1 Running 0 16m 172.16.100.50 k8s-master
kube-flannel-ds-hp5ck 1/1 Running 0 15m 172.16.100.51 k8s-node1
kube-flannel-ds-j67hh 1/1 Running 3 4m 172.16.100.52 k8s-node2
kube-proxy-lck5q 1/1 Running 0 4m 172.16.100.52 k8s-node2
kube-proxy-rtrxh 1/1 Running 0 17m 172.16.100.50 k8s-master
kube-proxy-trlt7 1/1 Running 0 15m 172.16.100.51 k8s-node1
kube-scheduler-k8s-master 1/1 Running 0 18m 172.16.100.50 k8s-master
vnimos@k8s-master:~$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18m v1.8.5
k8s-node1 Ready <none> 15m v1.8.5
k8s-node2 Ready <none> 4m v1.8.5