Process status of the engine
root@instance-manager-e-01f96248:/tmp# pstree -a -n -u
tini -- engine-manager --debug daemon --listen 0.0.0.0:8500
`-longhorn-instan --debug daemon --listen 0.0.0.0:8500
|-tgtd -f
| `-108*[{tgtd}]
|-tee /var/log/tgtd.log
|-27*[{longhorn-instan}]
|-longhorn controller pvc-29a9e2af-d804-41e9-8eaf-566d272aa0cd --frontendtgt-blockde
| `-24*[{longhorn}]
|-longhorn controller pvc-394ae54e-e59b-41df-b19f-c5fdb87df707 --frontendtgt-blockde
| `-33*[{longhorn}]
|-longhorn controller pvc-fd29a9be-83d4-4f6e-b9a5-2e88879818cc --frontendtgt-blockde
| `-31*[{longhorn}]
|-longhorn controller pvc-e5abfb38-fb17-46a3-b4b7-c3f01006f7b2 --frontendtgt-blockde
| `-22*[{longhorn}]
|-longhorn controller pvc-8cc66947-711d-4164-9b47-1ddc1bf6193a --frontendtgt-blockde
| `-21*[{longhorn}]
`-longhorn controller pvc-5e1c1e39-baa4-4c6b-b68c-3ea940f8381a --frontendtgt-blockde
`-25*[{longhorn}]
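Each `longhorn controller pvc-…` child above serves one attached volume; the truncated `--frontendtgt-blockde` fragment is most likely `--frontend tgt-blockdev` cut off at the terminal width. As a quick sanity check on such a capture, a minimal sketch (the sample lines below are abbreviated copies of the output above, saved into a shell variable rather than exec'd from the pod):

```shell
# Count the per-volume controller processes in a saved pstree capture.
# (sample lines abbreviated from the engine instance-manager output above)
capture='|-longhorn controller pvc-29a9e2af-d804-41e9-8eaf-566d272aa0cd
|-longhorn controller pvc-394ae54e-e59b-41df-b19f-c5fdb87df707
`-longhorn controller pvc-5e1c1e39-baa4-4c6b-b68c-3ea940f8381a'
count=$(printf '%s\n' "$capture" | grep -c 'longhorn controller')
echo "controllers: $count"   # prints: controllers: 3
```

In the real output there are six controllers, matching the six attached volumes on that node.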
Process status of the replica
root@instance-manager-r-70967eaf:/tmp# pstree -a -n -u
tini -- longhorn-instance-manager --debug daemon --listen 0.0.0.0:8500
`-longhorn-instan --debug daemon --listen 0.0.0.0:8500
|-44*[{longhorn-instan}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-8cc66947
| |-29*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10077 --replica0.0.0.0:
| `-27*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-c3847a3b
| |-30*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10092 --replica0.0.0.0:
| `-21*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-520c28df
| |-23*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10017 --replica0.0.0.0:
| `-21*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-d3f121b6
| |-23*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10152 --replica0.0.0.0:
| `-21*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-becd9332
| |-30*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10047 --replica0.0.0.0:
| `-22*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-fd0c0b63
| |-30*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10032 --replica0.0.0.0:
| `-21*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-5c9c2243
| |-23*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10107 --replica0.0.0.0:
| `-18*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-dcdd7aaf
| |-20*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10002 --replica0.0.0.0:
| `-18*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-e5abfb38
| |-19*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10062 --replica0.0.0.0:
| `-19*[{longhorn}]
|-longhorn replica/host/var/lib/longhorn/replicas/pvc-0c482452
| |-19*[{longhorn}]
| `-longhorn sync-agent --listen 0.0.0.0:10122 --replica0.0.0.0:
| `-16*[{longhorn}]
`-longhorn replica/host/var/lib/longhorn/replicas/pvc-394ae54e
|-19*[{longhorn}]
`-longhorn sync-agent --listen 0.0.0.0:10137 --replica0.0.0.0:
`-18*[{longhorn}]
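Each replica process spawns its own `sync-agent` on a distinct port (used for replica rebuild and snapshot sync); the truncated `--replica0.0.0.0:` fragments are presumably `--replica 0.0.0.0:<port>` cut off at the terminal width. A sketch that pulls the sync-agent listen ports out of such a capture (sample lines abbreviated from the output above):

```shell
# Extract each sync-agent's listen port from a saved pstree capture.
# (sample lines abbreviated from the replica instance-manager output above)
capture='`-longhorn sync-agent --listen 0.0.0.0:10077
`-longhorn sync-agent --listen 0.0.0.0:10092
`-longhorn sync-agent --listen 0.0.0.0:10017'
ports=$(printf '%s\n' "$capture" | sed -n 's/.*--listen 0\.0\.0\.0:\([0-9][0-9]*\).*/\1/p')
printf '%s\n' "$ports"   # one port per line
```

Against the full output this yields eleven distinct ports, one per replica hosted by this instance manager.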
Process status of the manager Pod
# Entry command
longhorn-manager -d daemon --engine-image harbor.dev-prev.com/middleware/longhorn-engine:v1.2.2 --instance-manager-image harbor.dev-prev.com/middleware/longhorn-instance-manager:v1_20210731 --share-manager-image harbor.dev-prev.com/middleware/longhorn-share-manager:v1_20210914 --backing-image-manager-image harbor.dev-prev.com/middleware/backing-image-manager:v2_20210820 --manager-image harbor.dev-prev.com/middleware/longhorn-manager:v1.2.2 --service-account longhorn-service-account
root@longhorn-manager-7tf2r:/# pstree -a -n -u
longhorn-manage -d daemon --engine-image harbor.dev-prev.com/middleware/longhorn-engine:v1.2.2 --instance-manager-imageharbor.dev-prev.com/middleware/lon
`-30*[{longhorn-manage}]
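The entry command pins every component image via a `--*-image` flag; these values come straight from the longhorn-manager DaemonSet spec. A sketch that pairs each image flag with its value (the command string is abbreviated here from the full entry command shown above):

```shell
# List component image flags from the longhorn-manager entry command.
# (command abbreviated from the full entry command shown above)
cmd='longhorn-manager -d daemon --engine-image harbor.dev-prev.com/middleware/longhorn-engine:v1.2.2 --instance-manager-image harbor.dev-prev.com/middleware/longhorn-instance-manager:v1_20210731 --manager-image harbor.dev-prev.com/middleware/longhorn-manager:v1.2.2'
# Split into words, then pair each --*-image flag with the word after it.
printf '%s\n' "$cmd" | tr ' ' '\n' | awk '/-image$/ {flag = $0; getline; print flag " = " $0}'
```

This prints one `flag = image` line per pinned component, which is a quick way to audit which image tags a running manager was started with.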
The InstanceManager custom object
➜ ~ k get InstanceManager
NAME STATE TYPE NODE AGE
instance-manager-e-01f96248 running engine 10.9.204.73 117d
instance-manager-e-3d9336b0 running engine 10.9.204.79 117d
instance-manager-e-4f4ca0c7 running engine 10.9.16.121 41d
instance-manager-e-95394194 running engine 10.9.24.190 41d
instance-manager-e-bb002e14 running engine 10.9.204.74 117d
instance-manager-r-23876a60 running replica 10.9.24.190 41d
instance-manager-r-4a1068c9 running replica 10.9.204.74 117d
instance-manager-r-70967eaf running replica 10.9.204.73 117d
instance-manager-r-8c26cd54 running replica 10.9.204.79 117d
instance-manager-r-9b92e76b running replica 10.9.16.121 41d
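Longhorn runs one engine-type and one replica-type instance manager per node, which is why five nodes yield the ten objects above. A sketch that tallies such a listing by its TYPE column (rows abbreviated from the output above; columns are NAME STATE TYPE NODE AGE):

```shell
# Tally InstanceManagers by TYPE column from a saved `k get InstanceManager` table.
# (rows abbreviated from the listing above)
table='instance-manager-e-01f96248 running engine 10.9.204.73 117d
instance-manager-e-3d9336b0 running engine 10.9.204.79 117d
instance-manager-r-70967eaf running replica 10.9.204.73 117d
instance-manager-r-8c26cd54 running replica 10.9.204.79 117d'
printf '%s\n' "$table" | awk '{n[$3]++} END {for (t in n) print t, n[t]}' | sort
# prints:
# engine 2
# replica 2
```

Run against the full listing, both counts come out equal to the node count.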
Custom resource definitions
➜ tmp k get crd|grep -i longhorn
backingimagedatasources.longhorn.io 2021-12-17T04:10:24Z
backingimagemanagers.longhorn.io 2021-12-17T04:10:24Z
backingimages.longhorn.io 2021-12-17T04:10:24Z
backups.longhorn.io 2021-12-17T04:10:24Z
backuptargets.longhorn.io 2021-12-17T04:10:24Z
backupvolumes.longhorn.io 2021-12-17T04:10:24Z
engineimages.longhorn.io 2021-12-17T04:10:24Z
engines.longhorn.io 2021-12-17T04:10:24Z
instancemanagers.longhorn.io 2021-12-17T04:10:24Z
nodes.longhorn.io 2021-12-17T04:10:24Z
recurringjobs.longhorn.io 2021-12-17T04:10:24Z
replicas.longhorn.io 2021-12-17T04:10:24Z
settings.longhorn.io 2021-12-17T04:10:24Z
sharemanagers.longhorn.io 2021-12-17T04:10:24Z
volumes.longhorn.io 2021-12-17T04:10:24Z
After deploying via Helm, a DaemonSet is created that runs one longhorn-manager per node.
There is also a longhorn-driver-deployer, which I assume is deployment-related: it checks whether the deployment has been installed completely.
➜ tmp k logs longhorn-driver-deployer-69cb5896d5-j7sk8
2021/12/22 07:06:29 proto: duplicate proto type registered: VersionResponse
W1222 07:06:29.055813 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-12-22T07:06:29Z" level=debug msg="Deploying CSI driver"
time="2021-12-22T07:06:29Z" level=debug msg="proc cmdline detection pod discover-proc-kubelet-cmdline in phase: Pending"
time="2021-12-22T07:06:30Z" level=debug msg="proc cmdline detection pod discover-proc-kubelet-cmdline in phase: Pending"
time="2021-12-22T07:06:31Z" level=debug msg="proc cmdline detection pod discover-proc-kubelet-cmdline in phase: Running"
time="2021-12-22T07:06:32Z" level=info msg="Proc found: kubelet"
time="2021-12-22T07:06:32Z" level=info msg="Try to find arg [--root-dir] in cmdline: [kubelet --read-only-port=0 --cni-bin-dir=/opt/cni/bin --eviction-hard=memory.available<256Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5% --kube-reserved=cpu=0.5,memory=1024Mi --cgroup-driver=cgroupfs --runtime-request-timeout=2m0s --pod-infra-container-image=10.9.28.38/rancher/pause:3.1 --root-dir=/var/lib/kubelet --event-qps=0 --v=2 --authorization-mode=Webhook --system-reserved=cpu=0.5,memory=1024Mi --network-plugin-mtu=1500 --eviction-soft-grace-period=memory.available=1m30s,nodefs.available=1m30s,imagefs.available=1m30s,nodefs.inodesFree=1m30s --eviction-pressure-transition-period=30s --hostname-override=10.9.204.74 --network-plugin=cni --cgroups-per-qos=true --global-housekeeping-interval=1m0s --serialize-image-pulls=false --cluster-dns=10.255.0.10 --eviction-max-pod-grace-period=30 --max-pods=250 --kube-api-burst=30 --volume-stats-agg-period=1m0s --client-ca-file=/etc/kubernetes/ssl/kube-ca.pem --kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml --kube-api-qps=15 --registry-qps=0 --fail-swap-on=false --resolv-conf=/etc/resolv.conf --address=0.0.0.0 --cni-conf-dir=/etc/cni/net.d --authentication-token-webhook=true --registry-burst=10 --eviction-soft=memory.available<512Mi,nodefs.available<15%,imagefs.available<20%,nodefs.inodesFree<10% --max-open-files=2000000 --housekeeping-interval=10s --cloud-provider= --cluster-domain=cluster.local --make-iptables-util-chains=true --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 --volume-plugin-dir=/var/lib/kubelet/volumeplugins --anonymous-auth=false --enforce-node-allocatable=pods --streaming-connection-idle-timeout=30m --sync-frequency=3s --node-status-update-frequency=10s --cgroup-driver=cgroupfs ]"
time="2021-12-22T07:06:32Z" level=info msg="Detected root dir path: /var/lib/kubelet"
time="2021-12-22T07:06:32Z" level=info msg="Upgrading Longhorn related components for CSI v1.1.0"
time="2021-12-22T07:06:32Z" level=debug msg="Detected CSI Driver driver.longhorn.io CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected service csi-attacher CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected deployment csi-attacher CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected service csi-provisioner CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected deployment csi-provisioner CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected service csi-resizer CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected deployment csi-resizer CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected service csi-snapshotter CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected deployment csi-snapshotter CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="Detected daemon set longhorn-csi-plugin CSI version v1.2.2 Kubernetes version v1.18.15 has already been deployed"
time="2021-12-22T07:06:32Z" level=debug msg="CSI deployment done"
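The deployer needs the kubelet root directory to place CSI sockets and volume mounts correctly, so it launches a `discover-proc-kubelet-cmdline` pod and parses `--root-dir` out of the kubelet command line, as the log shows. A rough sketch of that extraction (the kubelet args are abbreviated from the log above; the real deployer is Go code, not shell):

```shell
# Pull --root-dir out of a kubelet command line, roughly what the deployer's
# discover-proc-kubelet-cmdline step does (kubelet args abbreviated).
cmdline='kubelet --read-only-port=0 --root-dir=/var/lib/kubelet --v=2'
rootdir=$(printf '%s\n' "$cmdline" | tr ' ' '\n' | sed -n 's/^--root-dir=//p')
echo "root dir: $rootdir"   # prints: root dir: /var/lib/kubelet
```

This matches the `Detected root dir path: /var/lib/kubelet` line in the log.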
1.3. CSI Driver
The Longhorn CSI driver takes the block device, formats it, and mounts it on the node. Then the kubelet bind-mounts the device inside a Kubernetes Pod. This allows the Pod to access the Longhorn volume.
The required Kubernetes CSI Driver images will be deployed automatically by the longhorn driver deployer. To install Longhorn in an air gapped environment, refer to this section.
1.4. CSI Plugin
Longhorn is managed in Kubernetes via a CSI plugin, which makes installing Longhorn straightforward.
The Kubernetes CSI plugin calls Longhorn to create volumes that provide persistent storage for Kubernetes workloads. The CSI plugin lets you create, delete, attach, detach, and mount volumes, and take volume snapshots. All other functionality provided by Longhorn is implemented through the Longhorn UI.
The Kubernetes cluster internally uses the CSI interface to communicate with the Longhorn CSI plugin, which in turn communicates with the Longhorn Manager using the Longhorn API.
Longhorn does leverage iSCSI, so extra configuration of the node may be required. This may include the installation of open-iscsi or iscsiadm depending on the distribution.
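In practice, workloads exercise the CSI plugin through a StorageClass whose provisioner is `driver.longhorn.io` (the driver name detected in the deployer log above). A minimal sketch; the class name and parameter values here are illustrative, not taken from this cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-example        # hypothetical name
provisioner: driver.longhorn.io # the Longhorn CSI driver
parameters:
  numberOfReplicas: "3"         # illustrative; 3 is the Longhorn default
  staleReplicaTimeout: "2880"   # minutes before a failed replica is cleaned up
```

A PVC referencing this class is what ultimately triggers the volume-creation path described above.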
Warning
This article was last updated on November 9, 2022. Its contents may be out of date; refer to it with caution.