Overview
Syncing the Images
docker pull zookeeper:3.6.3
docker pull pravega/zookeeper-operator:0.2.13
docker pull lachlanevenson/k8s-kubectl:v1.16.10
docker pull pravega/zookeeper:0.2.13
docker pull zookeeper:3.4.9
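The ZookeeperCluster manifest below pulls its image from the private registry harbor.dev-prev.com, so after syncing, the images typically need to be retagged and pushed there. A minimal sketch, assuming you are logged in to that Harbor instance and that the middleware project already exists (the operator repository path is likewise an assumption):

# Retag and push to the private registry (registry address and project names are assumptions)
docker tag pravega/zookeeper:0.2.13 harbor.dev-prev.com/middleware/zookeeper:0.2.13
docker push harbor.dev-prev.com/middleware/zookeeper:0.2.13
docker tag pravega/zookeeper-operator:0.2.13 harbor.dev-prev.com/middleware/zookeeper-operator:0.2.13
docker push harbor.dev-prev.com/middleware/zookeeper-operator:0.2.13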
Preparing the Helm Charts
Within the ZooKeeper Operator project, render the YAML manifests with helm template, then deploy the ZooKeeper Operator and the ZooKeeper cluster separately.
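A minimal sketch of this rendering step, assuming the charts live under charts/ in the zookeeper-operator repository (chart paths, release names, and output file names here are assumptions):

# Render the operator chart and the cluster chart to plain YAML files
helm template zookeeper-operator charts/zookeeper-operator > zookeeper-operator.yaml
helm template zookeeper charts/zookeeper -f values.yaml > zookeeper-cluster.yaml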
Deployment
Below is the YAML for the ZookeeperCluster custom resource needed for the deployment. As shown at the end, persistent storage is injected through helm template by specifying the rook-ceph-block StorageClass, so PVCs/PVs are created automatically and mounted into the Pods. The values.yaml file is as follows.
# values.yaml
persistence:
  storageClassName: rook-ceph-block
  ## specifying reclaim policy for PersistentVolumes
  ## accepted values - Delete / Retain
  reclaimPolicy: Delete
  annotations: {}
  volumeSize: 20Gi
---
apiVersion: "zookeeper.pravega.io/v1beta1"
kind: "ZookeeperCluster"
metadata:
  name: zookeeper
  namespace: default
  labels:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/version: "0.2.13"
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: "zookeeper-0.2.13"
spec:
  replicas: 3
  image:
    repository: harbor.dev-prev.com/middleware/zookeeper
    tag: 0.2.13
    pullPolicy: IfNotPresent
  kubernetesClusterDomain: cluster.local
  probes:
    readinessProbe:
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
      successThreshold: 1
      timeoutSeconds: 10
    livenessProbe:
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
      timeoutSeconds: 10
  pod:
    serviceAccountName: zookeeper
  storageType: persistence
  persistence:
    reclaimPolicy: Delete
    spec:
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 20Gi
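With the manifests rendered, the deployment itself is just a kubectl apply; a sketch that assumes the file names from the rendering step above:

# Deploy the operator first, then the ZookeeperCluster custom resource
kubectl apply -f zookeeper-operator.yaml
kubectl apply -f zookeeper-cluster.yaml
# The operator creates the StatefulSet; wait for all replicas to become ready
kubectl get zk zookeeper
kubectl get po -l app=zookeeper -w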
Configuration
[zk: localhost:2181(CONNECTED) 3] get /zookeeper/config
server.1=zookeeper-0.zookeeper-headless.default.svc.cluster.local:2888:3888:participant;0.0.0.0:2181
server.2=zookeeper-1.zookeeper-headless.default.svc.cluster.local:2888:3888:participant;0.0.0.0:2181
server.3=zookeeper-2.zookeeper-headless.default.svc.cluster.local:2888:3888:participant;0.0.0.0:2181
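The dynamic configuration above is read from inside one of the members with the ZooKeeper CLI; a sketch, assuming zkCli.sh is on the PATH inside the pravega/zookeeper image:

# Open a ZooKeeper shell in the first member and dump the ensemble config
kubectl exec -it zookeeper-0 -- zkCli.sh -server localhost:2181
# inside the shell:
#   get /zookeeper/config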
Pod Failover
Force one of the Pods to be recreated and observe the leader election.
[oper@szglbd_b1419_7_docker-master-28-68 drummer]$ k delete po zookeeper-0 --force
pod "zookeeper-0" force deleted
[oper@szglbd_b1419_7_docker-master-28-68 drummer]$ k get po -l app=zookeeper -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
zookeeper-0 0/1 ContainerCreating 0 4s <none> 10.9.28.51 <none> <none>
zookeeper-1 1/1 Running 0 2d15h 10.42.11.163 10.9.28.53 <none> <none>
zookeeper-2 1/1 Running 0 2d15h 10.42.10.25 10.9.28.54 <none> <none>
zookeeper-0 0/1 ContainerCreating 0 10s <none> 10.9.28.51 <none> <none>
zookeeper-0 0/1 ContainerCreating 0 10s <none> 10.9.28.51 <none> <none>
zookeeper-0 0/1 Running 0 11s 10.42.12.184 10.9.28.51 <none> <none>
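Once zookeeper-0 has rejoined, each member's role can be checked to see where leadership landed. A sketch; whether zkServer.sh resolves the generated zoo.cfg inside the pravega image is an assumption, and the srvr four-letter command on port 2181 reports the same Mode field if it does not:

# Print the role (Mode: leader / follower) reported by each member
for i in 0 1 2; do
  kubectl exec zookeeper-$i -- zkServer.sh status
done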
Scaling Out
Increase the number of Pods.
[oper@szglbd_b1419_7_docker-master-28-68 drummer]$ k edit zk zookeeper
Edit cancelled, no changes made.
[oper@szglbd_b1419_7_docker-master-28-68 drummer]$ k get po -l app=zookeeper -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
zookeeper-0 1/1 Running 0 5m33s 10.42.12.184 10.9.28.51 <none> <none>
zookeeper-1 1/1 Running 0 2d16h 10.42.11.163 10.9.28.53 <none> <none>
zookeeper-2 1/1 Running 0 2d16h 10.42.10.25 10.9.28.54 <none> <none>
zookeeper-3 0/1 ContainerCreating 0 25s <none> 10.9.204.75 <none> <none>
zookeeper-3 0/1 Running 0 28s 10.42.3.49 10.9.204.75 <none> <none>
zookeeper-3 1/1 Running 0 42s 10.42.3.49 10.9.204.75 <none> <none>
zookeeper-4 0/1 Pending 0 0s <none> <none> <none> <none>
zookeeper-4 0/1 Pending 0 0s <none> <none> <none> <none>
zookeeper-4 0/1 Pending 0 2s <none> 10.9.204.11 <none> <none>
zookeeper-4 0/1 ContainerCreating 0 2s <none> 10.9.204.11 <none> <none>
zookeeper-4 0/1 ContainerCreating 0 10s <none> 10.9.204.11 <none> <none>
zookeeper-4 0/1 ContainerCreating 0 10s <none> 10.9.204.11 <none> <none>
zookeeper-4 0/1 Running 0 11s 10.42.4.199 10.9.204.11 <none> <none>
zookeeper-4 1/1 Running 0 24s 10.42.4.199 10.9.204.11 <none> <none>
[oper@szglbd_b1419_7_docker-master-28-68 drummer]$ k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-zookeeper-0 Bound pvc-faaa172a-16d1-48a1-8556-a8d15f80861b 20Gi RWO rook-ceph-block 2d18h
data-zookeeper-1 Bound pvc-a45c0eaa-540b-4b0e-99e2-f40724e301dc 20Gi RWO rook-ceph-block 2d16h
data-zookeeper-2 Bound pvc-c6854ba7-8fa9-481e-9547-bc3016355d44 20Gi RWO rook-ceph-block 2d16h
data-zookeeper-3 Bound pvc-4c81587c-9165-4643-bf60-c45c7a7f0afe 20Gi RWO rook-ceph-block 4m4s
data-zookeeper-4 Bound pvc-22a5c11c-557c-4298-a375-addd702b338d 20Gi RWO rook-ceph-block 38s
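The same scale-out can also be done non-interactively by patching the custom resource instead of going through kubectl edit; a sketch (the replica count 5 is only an example):

# Bump spec.replicas on the ZookeeperCluster; the operator adds the new members one by one
kubectl patch zk zookeeper --type merge -p '{"spec":{"replicas":5}}'
kubectl get po -l app=zookeeper -w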
Other Notes
- Collect real-world operational cases from business scenarios and extend the Operator's operational capabilities
- Network environment
- Benchmark
- Improve monitoring