Kubeadm Series - 03 - Static Pod

Overview

The previous post mentioned that after running kubeadm init, the kubelet is started on the machine. On a controlplane node, however, kube-apiserver, kube-controller-manager, kube-scheduler, and even etcd are started as well. So how do these processes get launched? This post gives a brief analysis.

Static Pod

To understand how they are started, it helps to review how Static Pods work. In short, by default every file placed in the /etc/kubernetes/manifests/ directory is treated as a Static Pod and started by the kubelet. kubeadm puts the manifests of all the components mentioned above into that directory, so as soon as the kubelet starts, these Static Pods are started along with it.
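
The kubelet discovers this directory through the staticPodPath field of its configuration file, which kubeadm writes to /var/lib/kubelet/config.yaml. A simplified excerpt, showing only the field relevant to Static Pods:

# Excerpt from /var/lib/kubelet/config.yaml as generated by kubeadm
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests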

Below are the default location and contents of the manifests directory.

# pwd
/etc/kubernetes/manifests
# ll
total 16
-rw------- 1 root root 2292 Jun 19 19:31 etcd.yaml
-rw------- 1 root root 3357 Jun 19 19:31 kube-apiserver.yaml
-rw------- 1 root root 2764 Jun 19 19:31 kube-controller-manager.yaml
-rw------- 1 root root 1464 Jun 19 19:31 kube-scheduler.yaml

On the controlplane, these are the four kinds of Pods that get created. How exactly the kubelet creates them is out of scope here (the kubelet code is the place to look; in the API they surface as read-only mirror Pods). Since these Static Pods are a critical part of the controlplane, kubeadm init registers a dedicated phase just to wait for their creation.

[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
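
The real waiter lives inside kubeadm, but conceptually the wait-control-plane phase boils down to polling the API server's /healthz endpoint until it answers or a timeout expires. A minimal sketch in Go (illustrative only, not kubeadm's actual code; endpoint and intervals are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls <endpoint>/healthz once a second until it returns
// 200 OK or the overall timeout expires.
func waitForAPIServer(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The API server serves TLS signed by the cluster CA; this sketch skips
		// verification instead of loading /etc/kubernetes/pki/ca.crt.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("control plane did not become healthy within %v", timeout)
}

func main() {
	// 4m0s mirrors the timeout mentioned in the kubeadm log line above.
	if err := waitForAPIServer("https://127.0.0.1:6443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}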

In practice these manifests have to be rendered first: GetStaticPodSpecs builds the Pod specs for the current ClusterConfiguration, then, as part of the kubeadm init logic, PatchStaticPod applies any user-supplied patches to them, and finally WriteStaticPodToDisk writes them into the manifests directory.

// CreateStaticPodFiles creates all the requested static pod files.
func CreateStaticPodFiles(manifestDir, patchesDir string, cfg *kubeadmapi.ClusterConfiguration, endpoint *kubeadmapi.APIEndpoint, componentNames ...string) error {
	// gets the StaticPodSpecs, actualized for the current ClusterConfiguration
	klog.V(1).Infoln("[control-plane] getting StaticPodSpecs")
	specs := GetStaticPodSpecs(cfg, endpoint)

	// creates required static pod specs
	for _, componentName := range componentNames {
		// retrieves the StaticPodSpec for given component
		spec, exists := specs[componentName]
		if !exists {
			return errors.Errorf("couldn't retrieve StaticPodSpec for %q", componentName)
		}

		// print all volumes that are mounted
		for _, v := range spec.Spec.Volumes {
			klog.V(2).Infof("[control-plane] adding volume %q for component %q", v.Name, componentName)
		}

		// if patchesDir is defined, patch the static Pod manifest
		if patchesDir != "" {
			patchedSpec, err := staticpodutil.PatchStaticPod(&spec, patchesDir, os.Stdout)
			if err != nil {
				return errors.Wrapf(err, "failed to patch static Pod manifest file for %q", componentName)
			}
			spec = *patchedSpec
		}

		// writes the StaticPodSpec to disk
		if err := staticpodutil.WriteStaticPodToDisk(componentName, manifestDir, spec); err != nil {
			return errors.Wrapf(err, "failed to create static pod manifest file for %q", componentName)
		}

		klog.V(1).Infof("[control-plane] wrote static Pod manifest for component %q to %q\n", componentName, kubeadmconstants.GetStaticPodFilepath(componentName, manifestDir))
	}

	return nil
}
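
WriteStaticPodToDisk itself is short: it serializes the Pod to YAML and writes it to <manifestDir>/<componentName>.yaml, the path that kubeadmconstants.GetStaticPodFilepath computes. A simplified sketch of the same idea, using sigs.k8s.io/yaml instead of kubeadm's internal marshalling helper:

package staticpod

import (
	"os"
	"path/filepath"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// writeStaticPod is a simplified stand-in for staticpodutil.WriteStaticPodToDisk:
// ensure the manifest directory exists, marshal the Pod to YAML, and write it
// out as <manifestDir>/<componentName>.yaml with root-only permissions.
func writeStaticPod(componentName, manifestDir string, pod *v1.Pod) error {
	if err := os.MkdirAll(manifestDir, 0700); err != nil {
		return err
	}
	data, err := yaml.Marshal(pod)
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(manifestDir, componentName+".yaml"), data, 0600)
}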

Default Startup Flags

Below are the default flags of the components started under kubeadm init. However kubeadm.yaml is filled in, everything ultimately has to be translated into these component flags, and most of them are related to certificates and authentication.

kube-apiserver
--advertise-address=10.7.0.149
--allow-privileged=true
--authorization-mode=Node,RBAC
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-issuer=https://kubernetes.default.svc.cluster.local
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key

kube-controller-manager
--allocate-node-cidrs=true
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
--bind-address=127.0.0.1
--client-ca-file=/etc/kubernetes/pki/ca.crt
--cluster-cidr=10.244.0.0/16
--cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner
--kubeconfig=/etc/kubernetes/controller-manager.conf
--leader-elect=true
--port=0
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key
--service-cluster-ip-range=10.96.0.0/12
--use-service-account-credentials=true

kube-scheduler
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf
--bind-address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
--port=0

etcd
--advertise-client-urls=https://10.7.0.149:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--initial-advertise-peer-urls=https://10.7.0.149:2380
--initial-cluster=10.7.0.149=https://10.7.0.149:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://10.7.0.149:2379
--listen-metrics-urls=http://127.0.0.1:2381
--listen-peer-urls=https://10.7.0.149:2380
--name=10.7.0.149
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
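
These flags are taken straight from the command array of each rendered manifest, so you can always confirm what a component was started with by reading its file, for example (output abridged, indentation approximate):

# grep -A 7 'command:' /etc/kubernetes/manifests/kube-scheduler.yaml
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --port=0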

Running Containers

The running containers can be inspected with the docker ps command.

# docker ps --no-trunc --format "table {{.Image}}\t{{.Command}}"
IMAGE                                                                     COMMAND
sha256:7e58936d778d1754c3fdb98d4718582f6ee95feedec044ea290e88ebc1e3efcb   "/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=10.7.0.149"
registry.aliyuncs.com/google_containers/pause:3.4.1                       "/pause"
sha256:7a37590177f7c20d147d526ff3799eb110a233187a95df10ac04c2aee0fe79ec   "kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --port=0 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true"
sha256:0f5bfd20d26ede5d5db6cee6bfc2c2cd53bd4e1802217edf4f43de795dcc3151   "kube-apiserver --advertise-address=10.7.0.149 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key"
sha256:c67c2461177d871150bd5e96a2e326f0c78e6f9f24f34ad3911d564cc3eb5410   "kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --port=0"
sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   "etcd --advertise-client-urls=https://10.7.0.149:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://10.7.0.149:2380 --initial-cluster=10.7.0.149=https://10.7.0.149:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://10.7.0.149:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://10.7.0.149:2380 --name=10.7.0.149 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"
registry.aliyuncs.com/google_containers/pause:3.4.1                       "/pause"
registry.aliyuncs.com/google_containers/pause:3.4.1                       "/pause"
registry.aliyuncs.com/google_containers/pause:3.4.1                       "/pause"
registry.aliyuncs.com/google_containers/pause:3.4.1                       "/pause"

Customizing the Static Pod Configuration

In most cases the startup flags that kubeadm configures for the Kubernetes components are sufficient, but is there a way to customize them if needed? There is: the --patches flag applies patches to the YAML files of the components you want to adjust. In kubeadm 1.21.7 the corresponding flag is --experimental-patches.
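
As a hypothetical example, using kubeadm's documented patch-file naming convention target[suffix][+patchtype].yaml (patchtype defaults to strategic), you could raise the CPU request of kube-apiserver by dropping a file into a patches directory:

# patches/kube-apiserver+strategic.yaml
# (with the default strategic patch type, kube-apiserver.yaml would also work)
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        cpu: "500m"

Then run kubeadm init --experimental-patches ./patches on 1.21.7, or kubeadm init --patches ./patches on newer versions.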

Viewing the Images

Anyone who has created a cluster with kubeadm knows that components like kube-apiserver, kube-controller-manager, and kube-scheduler are started as static Pods, that is, Static Pods. A natural question is how their images get downloaded, so let's look at the kubeadm code and find out.

If, as you might first try, docker ps or docker images shows nothing, switch to crictl ps and crictl images, which will show them.

# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
docker.io/library/redis                                           alpine3.13          1690b63e207f6       10.9MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6              a4ca41631cc7a       13.6MB
registry.aliyuncs.com/google_containers/etcd                      3.5.3-0             aebe758cef4cd       102MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.24.1             e9f4b425f9192       33.8MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.24.1             b4ea7e648530d       31MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.24.1             beb86f5d8e6cd       39.5MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.24.1             18688a72645c5       15.5MB
registry.aliyuncs.com/google_containers/pause                     3.7                 221177c6082a8       311kB

Image Pull Logic

The code below is from Kubernetes 1.21.7, a version that still retains dockershim. If you look at the 1.24.0 code instead, you will find that only the crictl-based PullImage method remains.

// PullImage pulls the image
func (runtime *CRIRuntime) PullImage(image string) error {
	var err error
	var out []byte
	for i := 0; i < constants.PullImageRetry; i++ {
		out, err = runtime.exec.Command("crictl", "-r", runtime.criSocket, "pull", image).CombinedOutput()
		if err == nil {
			return nil
		}
	}
	return errors.Wrapf(err, "output: %s, error", out)
}

// PullImage pulls the image
func (runtime *DockerRuntime) PullImage(image string) error {
	var err error
	var out []byte
	for i := 0; i < constants.PullImageRetry; i++ {
		out, err = runtime.exec.Command("docker", "pull", image).CombinedOutput()
		if err == nil {
			return nil
		}
	}
	return errors.Wrapf(err, "output: %s, error", out)
}

As the code makes clear, kubeadm init performs a PullImage step to pull the required images; whether it uses docker pull or crictl pull underneath depends on the Kubernetes version.
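
If you want to see or pre-pull this image list yourself, kubeadm exposes it through the kubeadm config images subcommands:

# list the images the current kubeadm version would use
kubeadm config images list
# pre-pull them (honours --config / --image-repository if provided)
kubeadm config images pull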

Warning
This post was last updated on March 20, 2022. Its contents may be out of date; refer to it with caution.