
A Kubernetes Upgrade Journey Across 10 Versions: Reworking the CNI

Overview

We are preparing to upgrade our in-house Kubernetes from v1.20.3 to v1.30.4. Kubernetes removed the dockershim code in v1.24, and our company runs a heavily customized, not widely known CNI (many people have probably never heard of it), Contiv Netplugin, which depends on Docker to create and clean up container IPs. To land the Kubernetes upgrade, Contiv Netplugin must be reworked: its Docker-specific code has to be modified so that Kubernetes can drive Containerd directly.

The following shows how the Kubelet, dockershim, and Contiv Netplugin components interact.


How the old cluster configures container networking

Below is the call path through which Contiv Netplugin configures a container's network under the current Kubernetes v1.20.3. Let's first analyze how container networking is set up in the old environment that depends on Docker.

RunPodSandbox // /kubernetes-1.20.3/pkg/kubelet/dockershim/docker_sandbox.go
|- ds.client.StartContainer(createResp.ID)
    |- d.client.ContainerStart(...) // /kubernetes-1.20.3/pkg/kubelet/dockershim/libdocker/kube_docker_client.go
        |- cli.post(ctx, "/containers/"+containerID+"/start", query, ...) // /kubernetes-1.20.3/vendor/github.com/docker/docker/client/container_start.go
|- ds.network.SetUpPod(...)
    |- pm.plugin.SetUpPod(...)
        |- SetUpPod() // /kubernetes-1.20.3/pkg/kubelet/dockershim/network/cni/cni.go
            |- netnsPath, err := plugin.host.GetNetNS(id.ID)
                |- getNetworkNamespace(c *dockertypes.ContainerJSON) // /kubernetes-1.20.3/pkg/kubelet/dockershim/helpers_linux.go
                |- fmt.Sprintf(dockerNetNSFmt, c.State.Pid) // dockerNetNSFmt = "/proc/%v/ns/net"
            |- plugin.addToNetwork(..., netnsPath, ...)
                |- plugin.buildCNIRuntimeConf(..., podNetnsPath, ...)
                    |- libcni.RuntimeConf{ContainerID: podSandboxID.ID, NetNS: podNetnsPath, ...}
                   
                 ||
                 || HTTP call
                 ||
                 \/ 
                    
t.HandleFunc(cniapi.EPAddURL, makeHTTPHandler(addPod)) // /contiv-netplugin/src/github.com/contiv/netplugin/mgmtfn/k8splugin/cniserver.go
|- addPod(r *http.Request) (interface{}, error)
    |- content, err := ioutil.ReadAll(r.Body)
    |- json.Unmarshal(content, &pInfo)
    |- ep, err := createEP(&epSpec{EndpointID: pInfo.InfraContainerID}, &pInfo)
    |- pid, err := nsToPID(pInfo.NwNameSpace)
        |- func nsToPID(ns string) (int, error)
            |- ok := strings.HasPrefix(ns, "/proc/") // /contiv-netplugin/src/github.com/contiv/netplugin/mgmtfn/k8splugin/driver.go
    |- setIfAttrs(pid, ep.PortName, ep.IPAddress, pInfo.IntfName)
        |- link, err := getLink(ifname)
        |- netlink.LinkSetNsPid(link, pid) // key step
        |- osexec.Command(nsenterPath, "-t", nsPid, "-n", "-F", "--", ipPath, "link", "set", "dev", ifname, "name", newname)
        |- osexec.Command(nsenterPath, "-t", nsPid, "-n", "-F", "--", ipPath, "link", "set", "dev", newname, "up")
    |- setDefGw(pid, gw, gwIntf)
        |- osexec.Command(nsenterPath, "-t", nsPid, "-n", "-F", "--", routePath, "add", "default", "gw", gw, intfName)

Looking at the code above, the crucial step is netlink.LinkSetNsPid(link, pid): it attaches the port created by OVS (an OVS concept) to the network namespace of the sandbox's PID. Put simply, one end of the veth pair is moved into that PID's network namespace. A series of nsenter operations follows: once inside that PID's namespace, the plugin renames the virtual NIC, assigns the IP address, and configures the default gateway. That is how Contiv Netplugin works.
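The PID-based steps above can be sketched as a small helper that assembles the nsenter command lines (the helper name and layout are illustrative, not the plugin's actual code; the namespace move itself is done by netlink.LinkSetNsPid and is omitted here):

```go
package main

import "fmt"

// buildOldFlowCmds assembles the nsenter command lines the old, PID-based
// flow runs after netlink.LinkSetNsPid has moved the veth end into the
// sandbox's network namespace. Helper name and layout are illustrative.
func buildOldFlowCmds(pid int, ifname, newname, gw string) []string {
	nsPid := fmt.Sprintf("%d", pid)
	return []string{
		// rename the veth inside the target namespace
		fmt.Sprintf("nsenter -t %s -n -F -- ip link set dev %s name %s", nsPid, ifname, newname),
		// bring the renamed interface up
		fmt.Sprintf("nsenter -t %s -n -F -- ip link set dev %s up", nsPid, newname),
		// install the default route
		fmt.Sprintf("nsenter -t %s -n -F -- route add default gw %s %s", nsPid, gw, newname),
	}
}

func main() {
	for _, c := range buildOldFlowCmds(4242, "vport1", "eth0", "10.189.72.1") {
		fmt.Println(c)
	}
}
```

Everything here hinges on having a valid PID for the sandbox, which is exactly what disappears along with dockershim.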

How the new Kubernetes configures container networking

From the old cluster's flow we can see that dockershim's responsibilities were mainly:

  1. Create the sandbox
  2. Start the sandbox and obtain the pause container's PID
  3. Pass the PID to the CNI, which uses it to locate the container's network namespace

Without dockershim in the new Kubernetes, how should container networking be configured? Below is the call path through which Containerd invokes the CNI.

RunPodSandbox // /containerd-1.7.14/pkg/cri/server/sandbox_run.go
    |- var netnsMountDir = "/var/run/netns"
    |- sandbox.NetNS, err = netns.NewNetNS(netnsMountDir)
        |- NewNetNSFromPID(baseDir, 0)
            |- path, err := newNS(baseDir, pid)
                |- nsName := fmt.Sprintf("cni-%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:]) // key difference
                |- nsPath = path.Join(baseDir, nsName)
                |- mountPointFd, err := os.OpenFile(nsPath, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0666)
                |- origNS, err = cnins.GetNS(getCurrentThreadNetNSPath())
                |- unix.Mount(getCurrentThreadNetNSPath(), nsPath, "none", unix.MS_BIND, "")
    |- sandbox.NetNSPath = sandbox.NetNS.GetPath()
    |- c.setupPodNetwork(ctx, &sandbox)
        |- netPlugin.Setup(ctx, id, path, opts...)
            |- path = sandbox.NetNSPath
            |- newNamespace(id, path, opts...) // /containerd-1.7.14/vendor/github.com/containerd/go-cni/cni.go
    |- task, err := container.NewTask(ctx, containerdio.NullIO, taskOpts...)
    |- task.Start(ctx) // key difference

The key differences in the code above show that, without dockershim, Containerd generates the network namespace name first and only then starts the sandbox container. Before the sandbox container starts, its PID cannot be obtained. In other words, if Contiv Netplugin is still invoked the old way, it cannot resolve the network namespace from a PID, which we can verify from the logs below.
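The naming scheme marked as the key difference above can be reproduced in a few lines (a sketch of the scheme shown in the trace; using crypto/rand as the random source here is an assumption):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"path"
)

// newNSName mirrors the cni-%x-... naming scheme seen in containerd's
// netns code: 16 random bytes rendered UUID-style.
func newNSName() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return fmt.Sprintf("cni-%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:]), nil
}

func main() {
	name, err := newNSName()
	if err != nil {
		panic(err)
	}
	// the eventual bind-mount target under netnsMountDir
	fmt.Println(path.Join("/var/run/netns", name))
}
```

The resulting mount point exists before any container process does, so there is no PID anywhere in this path.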

Upgrading Kubernetes directly to v1.30.4 and removing the Docker daemon without modifying Contiv Netplugin leaves it unable to configure or clean up networking; the logs follow. As they show, Netplugin's request to Netmaster successfully creates the Endpoint (a Contiv concept); Netmaster, according to the bound network, returns the concrete network configuration including the IP and gateway to Netplugin; OVS also creates the veth pair; but the next step fails with Invalid nw name space.
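The two namespace formats can be told apart exactly the way driver.go's nsToPID does, with a prefix check; a minimal reconstruction (simplified from the trace above, with the error text matched to the log):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nsToPID follows the same prefix check as the plugin's driver.go: only
// the dockershim-era /proc/<pid>/ns/net form carries a usable PID; the
// containerd form /var/run/netns/cni-... does not, so the lookup fails.
func nsToPID(ns string) (int, error) {
	if !strings.HasPrefix(ns, "/proc/") {
		return -1, fmt.Errorf("Invalid nw name space: %s", ns)
	}
	elements := strings.Split(ns, "/")
	return strconv.Atoi(elements[2])
}

func main() {
	if pid, err := nsToPID("/proc/4242/ns/net"); err == nil {
		fmt.Println("old format, pid:", pid)
	}
	if _, err := nsToPID("/var/run/netns/cni-a4be4aed-0ec5-d43a-2f44-7e918282a09c"); err != nil {
		fmt.Println(err) // the same "Invalid nw name space" failure seen in the logs
	}
}
```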

time="Aug 31 15:54:54.278044376" level=info msg="Handling \"add pod\" event"
time="Aug 31 15:54:54.280424039" level=info msg="Making REST request to url: http://10.199.133.226:9999/plugin/createEndpoint"
...
time="Aug 31 15:54:54.293904673" level=info msg="Results for (http://10.199.133.226:9999/plugin/createEndpoint): &{AssignedTenant:default AssignedNetwork:ns-default-net7 EndpointConfig:{CommonState:{StateDriver:<nil> ID:ns-default-net7.default-471f65e66fbf0259f1455a6eec5b40fd3d5c9e720ba0122e4a4ded0d8075fee2} PodName:nm-2wpjp NetID:ns-default-net7.default EndpointID:471f65e66fbf0259f1455a6eec5b40fd3d5c9e720ba0122e4a4ded0d8075fee2 ServiceName: EndpointGroupID:0 EndpointGroupKey: IPAddress:10.189.72.71 IPv6Address: MacAddress:02:02:0a:bd:48:47 HomingHost:ns-k8s-noah-staging001-node-s1500 IntfName: VtepIP: Labels:map[] ContainerID: ContainerName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}}\n"
time="Aug 31 15:54:54.293947635" level=debug msg="Got endpoint create resp from master: {AssignedTenant:default AssignedNetwork:ns-default-net7 EndpointConfig:{CommonState:{StateDriver:<nil> ID:ns-default-net7.default-471f65e66fbf0259f1455a6eec5b40fd3d5c9e720ba0122e4a4ded0d8075fee2} PodName:nm-2wpjp NetID:ns-default-net7.default EndpointID:471f65e66fbf0259f1455a6eec5b40fd3d5c9e720ba0122e4a4ded0d8075fee2 ServiceName: EndpointGroupID:0 EndpointGroupKey: IPAddress:10.189.72.71 IPv6Address: MacAddress:02:02:0a:bd:48:47 HomingHost:ns-k8s-noah-staging001-node-s1500 IntfName: VtepIP: Labels:map[] ContainerID: ContainerName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}}"
time="Aug 31 15:54:54.302155243" level=info msg="Creating Veth pairs with name: vport1, vvport1"
time="Aug 31 15:54:54.608503260" level=info msg="Current workmode for netplugin when [CreatePort] in [ovsSwitch] is []"
time="Aug 31 15:54:54.612413763" level=debug msg="==Unlock LocalEpInfoMutex=="
time="Aug 31 15:54:54.615316941" level=error msg="Error moving to netns. Err: Invalid nw name space: /var/run/netns/cni-a4be4aed-0ec5-d43a-2f44-7e918282a09c"

Adapting Contiv Netplugin to Kubernetes v1.30.4

Having walked through both the old cluster's flow and the new Kubernetes flow, it is now clear how, on Kubernetes v1.30.4 without dockershim, to obtain the sandbox's namespace directly, configure the container's network, and only then start the container.

First, under Kubernetes v1.30.4, pInfo.NwNameSpace carries the sandbox's network namespace directly, in a form like /var/run/netns/cni-a4be4aed-0ec5-d43a-2f44-7e918282a09c, as opposed to the /proc/<PID>/ns/net form of the dockershim era. So we can no longer rely on Netplugin's netlink.LinkSetNsPid(link, pid) to bind the container-side link of the veth pair into the sandbox container's network namespace. Instead, we use the ip command to move that link directly into a namespace like /var/run/netns/cni-a4be4aed-0ec5-d43a-2f44-7e918282a09c, and then configure the IP address, default gateway, and routes as before.

|- setIfAttrs(pid, ep.PortName, ep.IPAddress, pInfo.IntfName)
    |- runCommand(ipPath, "link", "set", epPort, "netns", nwName) // key step
    |- runCommand(ipPath, "netns", "exec", nwName, ipPath, "link", "set", epPort, "name", newname) // rename the NIC
    |- runCommand(ipPath, "netns", "exec", nwName, ipPath, "addr", "add", cidr, "dev", newname) // assign the IP address
    |- runCommand(ipPath, "netns", "exec", nwName, ipPath, "link", "set", "dev", newname, "up") // bring the NIC up
|- setDefGw(pid, gw)
    |- runCommand(ipPath, "netns", "exec", nwName, ipPath, "route", "add", "default", "via", gw) // set the default gateway
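The adapted sequence above can be sketched as a command builder (the helper name adaptedNetnsCmds is illustrative; the real plugin shells out step by step via runCommand):

```go
package main

import "fmt"

const ipPath = "/usr/sbin/ip"

// adaptedNetnsCmds assembles the ip invocations the adapted plugin runs
// against a named netns like cni-5b20035c-..., replacing the old
// PID/nsenter path. Helper name is illustrative.
func adaptedNetnsCmds(nsName, epPort, newname, cidr, gw string) []string {
	exec := fmt.Sprintf("%s netns exec %s %s", ipPath, nsName, ipPath)
	return []string{
		// move the container-side veth end into the named namespace
		fmt.Sprintf("%s link set %s netns %s", ipPath, epPort, nsName),
		// rename it to the CNI_IFNAME (usually eth0)
		fmt.Sprintf("%s link set %s name %s", exec, epPort, newname),
		// assign the endpoint IP that Netmaster allocated
		fmt.Sprintf("%s addr add %s dev %s", exec, cidr, newname),
		// bring the interface up
		fmt.Sprintf("%s link set dev %s up", exec, newname),
		// default route via the gateway Netmaster returned
		fmt.Sprintf("%s route add default via %s", exec, gw),
	}
}

func main() {
	for _, c := range adaptedNetnsCmds("cni-5b20035c-a815-d297-815a-a77e89cc59e9",
		"vport666", "eth0", "10.189.52.138/22", "10.189.52.1") {
		fmt.Println(c)
	}
}
```

The printed commands line up with the "Executing command:" entries in the Netplugin logs below.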

That completes the work of adapting Contiv Netplugin to Kubernetes v1.30.4. The logs below show Netplugin asking Netmaster for an Endpoint allocation, Netmaster returning the Endpoint's network configuration (IPAddress:10.189.52.138), and Netplugin then applying that configuration to the sandbox's network.

# Netmaster logs
time="Sep  3 11:27:34.348474091" level=info msg="Token authenticate successfully."
time="Sep  3 11:27:34.348623286" level=info msg="Received CreateEndpointRequest: {TenantName: NetworkName: NetworkGroupName: ServiceName: EndpointID:95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7 ConfigEP:{PodName:nm-9qzbt Container:95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7 Host:ns-k8s-noah-staging001-node-s1501 IPAddress: IPv6Address: ServiceName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}}"
time="Sep  3 11:27:34.348754891" level=info msg="No tenant name is specified, using default"
time="Sep  3 11:27:34.350891529" level=info msg="allocating IP from default-network-group for nm-9qzbt/95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7"
time="Sep  3 11:27:34.354240577" level=info msg="round 0: trying to allocate IP from ns-default-net2.default for nm-9qzbt/95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7"
time="Sep  3 11:27:34.363311779" level=info msg="CreateEndpoint successful as {CommonState:{StateDriver:0xc0001ed340 ID:ns-default-net2.default-95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7} PodName:nm-9qzbt NetID:ns-default-net2.default EndpointID:95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7 ServiceName: EndpointGroupID:0 EndpointGroupKey: IPAddress:10.189.52.138 IPv6Address: MacAddress:02:02:0a:bd:34:8a HomingHost:ns-k8s-noah-staging001-node-s1501 IntfName: VtepIP: Labels:map[] ContainerID: ContainerName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}"
time="Sep  3 11:27:34.371533563" level=debug msg="Got Watch Resp: &{Action:compareAndSwap Node:{Key: /contiv.io/lock/netmaster/leader, CreatedIndex: 17194653049, ModifiedIndex: 19739309421, TTL: 30} PrevNode:{Key: /contiv.io/lock/netmaster/leader, CreatedIndex: 17194653049, ModifiedIndex: 19739308058, TTL: 20} Index:19739308059}"

# Netplugin logs
time="Sep  3 11:27:34.344601795" level=info msg="Handling \"add pod\" event"
time="Sep  3 11:27:34.344649401" level=debug msg="Add Pod for content: {\"K8S_POD_NAME\":\"nm-9qzbt\",\"K8S_POD_NAMESPACE\":\"kube-system\",\"K8S_POD_INFRA_CONTAINER_ID\":\"95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7\",\"CNI_NETNS\":\"/var/run/netns/cni-5b20035c-a815-d297-815a-a77e89cc59e9\",\"CNI_IFNAME\":\"eth0\",\"NETWORK_GROUP\":\"\",\"RESRV_ID\":\"\"}"
time="Sep  3 11:27:34.344778332" level=debug msg="Add Pod for pInfo: {nm-9qzbt kube-system 95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7 /var/run/netns/cni-5b20035c-a815-d297-815a-a77e89cc59e9 eth0    }"
time="Sep  3 11:27:34.347353672" level=info msg="Making REST request to url: http://10.199.133.226:9999/plugin/createEndpoint"
time="Sep  3 11:27:34.347491055" level=debug msg="Initializing admin token \"8daf3fc71fffc82d2a1206a7e0e10127b8ddc3685c27843d3f\"."
time="Sep  3 11:27:34.347518437" level=debug msg="Current authentication method is [token] "
time="Sep  3 11:27:34.347535888" level=debug msg="Contiv cluster post use admin token \"8daf3fc71fffc82d2a1206a7e0e10127b8ddc3685c27843d3f\"."
time="Sep  3 11:27:34.363743248" level=info msg="Results for (http://10.199.133.226:9999/plugin/createEndpoint): &{AssignedTenant:default AssignedNetwork:ns-default-net2 EndpointConfig:{CommonState:{StateDriver:<nil> ID:ns-default-net2.default-95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7} PodName:nm-9qzbt NetID:ns-default-net2.default EndpointID:95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7 ServiceName: EndpointGroupID:0 EndpointGroupKey: IPAddress:10.189.52.138 IPv6Address: MacAddress:02:02:0a:bd:34:8a HomingHost:ns-k8s-noah-staging001-node-s1501 IntfName: VtepIP: Labels:map[] ContainerID: ContainerName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}}\n"
time="Sep  3 11:27:34.363812955" level=debug msg="Got endpoint create resp from master: {AssignedTenant:default AssignedNetwork:ns-default-net2 EndpointConfig:{CommonState:{StateDriver:<nil> ID:ns-default-net2.default-95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7} PodName:nm-9qzbt NetID:ns-default-net2.default EndpointID:95c3e141e753c065906a41cbb7ddc4c5f519e93046493cec2dd232ccbb492de7 ServiceName: EndpointGroupID:0 EndpointGroupKey: IPAddress:10.189.52.138 IPv6Address: MacAddress:02:02:0a:bd:34:8a HomingHost:ns-k8s-noah-staging001-node-s1501 IntfName: VtepIP: Labels:map[] ContainerID: ContainerName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}}"
time="Sep  3 11:27:34.377223267" level=info msg="Creating Veth pairs with name: vport666, vvport666"
time="Sep  3 11:27:34.679049172" level=info msg="Current workmode for netplugin when [CreatePort] in [ovsSwitch] is []"
time="Sep  3 11:27:34.683943965" level=debug msg="==Unlock LocalEpInfoMutex=="
time="Sep  3 11:27:34.689802183" level=debug msg="Executing command: /usr/sbin/ip link set vport666 netns cni-5b20035c-a815-d297-815a-a77e89cc59e9"
time="Sep  3 11:27:34.702184222" level=info msg="Set netns cni-5b20035c-a815-d297-815a-a77e89cc59e9 to link vport666 successfully."
time="Sep  3 11:27:34.702208404" level=debug msg="Executing command: /usr/sbin/ip netns exec cni-5b20035c-a815-d297-815a-a77e89cc59e9 /usr/sbin/ip link set vport666 name eth0"
time="Sep  3 11:27:34.720161351" level=info msg="Rename ifName from vport666 to eth0 successfully."
time="Sep  3 11:27:34.720185910" level=debug msg="Executing command: /usr/sbin/ip netns exec cni-5b20035c-a815-d297-815a-a77e89cc59e9 /usr/sbin/ip addr add 10.189.52.138/22 dev eth0"
time="Sep  3 11:27:34.722863795" level=info msg="Assigned IP 10.189.52.138/22 to eth0 successfully."
time="Sep  3 11:27:34.722884186" level=debug msg="Executing command: /usr/sbin/ip netns exec cni-5b20035c-a815-d297-815a-a77e89cc59e9 /usr/sbin/ip link set dev eth0 up"
time="Sep  3 11:27:34.725458713" level=info msg="Brought up interface eth0 successfully."
time="Sep  3 11:27:34.725506716" level=debug msg="Executing command: /usr/sbin/ip netns exec cni-5b20035c-a815-d297-815a-a77e89cc59e9 /usr/sbin/ip route add default via 10.189.52.1"
time="Sep  3 11:27:34.728267729" level=info msg="Set default gateway to 10.189.52.1 successfully."

The container deletion flow

The CNI DEL operation does not depend on Docker, so the normal container-deletion path is not covered here. However, Contiv Netplugin does rely on Docker when cleaning up container IPs leaked in abnormal cases, so how should that process be adapted to reclaim IPs without Docker? Below is Contiv Netplugin's IP cleanup flow.

ipAddressCleanUp
    |- netList, err := getClient(masterNode).NetworkList()
    |- occupiedIPs[endpoint.IpAddress[0]] = struct{}{}
    |- c, err = client.NewClient("unix:///var/run/docker.sock", "", nil, defaultHeaders) // depends on Docker
    |- containers, err := c.ContainerList(ctx, opts)
    |- containerInfo, err := c.ContainerInspect(ctx, container.ID)
    |- ipaddr, err := getIpFromContainer(containerInfo.State.Pid)
    |- activeIPs[ipaddr] = struct{}{}

The flow above shows that Contiv Netplugin's IP cleanup depends on Docker: roughly via docker inspect <containerID> it obtains the PIDs of still-running containers, enters each container's process via nsenter to read its IP address, and then compares those against the IPs Netmaster records as occupied on this node. Any IP Netmaster holds that the local Netplugin cannot find is judged stale and released on Netmaster, so that it can be allocated again later.

Since Netplugin's use of the Docker client is fairly light, all that is needed here is another way to list running container IDs, for example via crictl ps; invalid IPs can then be reclaimed as before.
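The reclamation logic itself is a set difference, independent of where the active list comes from (Docker before, crictl/CRI now); a minimal sketch with hypothetical inputs:

```go
package main

import (
	"fmt"
	"sort"
)

// staleIPs returns the IPs that Netmaster records as occupied on this
// node but that no running container actually holds; these are the
// leaked IPs that are safe to release back to Netmaster.
func staleIPs(occupied, active map[string]struct{}) []string {
	var stale []string
	for ip := range occupied {
		if _, ok := active[ip]; !ok {
			stale = append(stale, ip)
		}
	}
	sort.Strings(stale) // deterministic output
	return stale
}

func main() {
	occupied := map[string]struct{}{ // from Netmaster's endpoint list
		"10.189.52.138": {}, "10.189.62.45": {},
	}
	active := map[string]struct{}{ // from inspecting running containers
		"10.189.52.138": {},
	}
	fmt.Println(staleIPs(occupied, active)) // [10.189.62.45]
}
```

Only the construction of the active set changes in the adaptation; the comparison and release against Netmaster stay the same.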

time="Sep  3 14:55:12.434832392" level=info msg="Handling \"del pod\" event"
time="Sep  3 14:55:12.436905067" level=info msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] Pod network cleanup procedure started."
time="Sep  3 14:55:12.436964158" level=info msg="Deleting K8SOperEndpointState for &{Name:nm-szw9b K8sNameSpace:kube-system InfraContainerID:bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59 NwNameSpace:/var/run/netns/cni-64fc8271-e5ae-25b5-b1a9-71dc326b383b IntfName:eth0 Rx_Bandwidth: Tx_Bandwidth: NetworkGroup: ReservationID:}"
time="Sep  3 14:55:12.442502849" level=info msg="Current workmode for netplugin when [DeletePort] in [ovsSwitch] is []"
time="Sep  3 14:55:12.443341182" level=info msg="Deleting Veth pairs with name: vvport667, vport667"
time="Sep  3 14:55:12.468429988" level=debug msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] Probing removal of /var/run/netns/cni-64fc8271-e5ae-25b5-b1a9-71dc326b383b for 0/18 time"
time="Sep  3 14:55:12.468489456" level=debug msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] First seen [inode: 0, mtime: 0], last seen [inode: 4026532593, mtime: 1725346211]"
time="Sep  3 14:55:13.313412241" level=debug msg="Refreshing key: /contiv.io/service/netplugin/10.189.110.46:9002"
time="Sep  3 14:55:16.648641431" level=debug msg="Refreshing key: /contiv.io/service/netplugin/10.189.110.46:9002"
time="Sep  3 14:55:19.984385458" level=debug msg="Refreshing key: /contiv.io/service/netplugin/10.189.110.46:9002"
time="Sep  3 14:55:22.468621793" level=debug msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] Probing removal of /var/run/netns/cni-64fc8271-e5ae-25b5-b1a9-71dc326b383b for 1/18 time"
time="Sep  3 14:55:22.468686078" level=info msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] /var/run/netns/cni-64fc8271-e5ae-25b5-b1a9-71dc326b383b removed, start IP recycling for Pod: nm-szw9b"
time="Sep  3 14:55:22.469374740" level=info msg="Making REST request to url: http://10.199.133.226:9999/plugin/deleteEndpoint"
time="Sep  3 14:55:22.469422426" level=debug msg="Initializing admin token \"8daf3fc71fffc82d2a1206a7e0e10127b8ddc3685c27843d3f\"."
time="Sep  3 14:55:22.469437663" level=debug msg="Current authentication method is [token] "
time="Sep  3 14:55:22.469449840" level=debug msg="Contiv cluster post use admin token \"8daf3fc71fffc82d2a1206a7e0e10127b8ddc3685c27843d3f\"."
time="Sep  3 14:55:22.484065094" level=info msg="Results for (http://10.199.133.226:9999/plugin/deleteEndpoint): &{EndpointConfig:{CommonState:{StateDriver:<nil> ID:ns-default-net4.default-bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59} PodName:nm-szw9b NetID:ns-default-net4.default EndpointID:bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59 ServiceName: EndpointGroupID:0 EndpointGroupKey: IPAddress:10.189.62.45 IPv6Address: MacAddress:02:02:0a:bd:3e:2d HomingHost:ns-k8s-noah-staging001-node-s1501 IntfName: VtepIP: Labels:map[] ContainerID: ContainerName: Rx_Bandwidth: Tx_Bandwidth: IPReservationID:}}\n"
time="Sep  3 14:55:22.484106036" level=info msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] IP recycled for Pod: nm-szw9b"
time="Sep  3 14:55:22.484119539" level=info msg="[DelPod 14:55:12.434932 nm-szw9b/bc3eb3224a4db88d77a24e80c3171f970d30b5f079374ccb17a61a8337b41a59] Pod network cleanup procedure ended."

Testing

cd /opt/cni/
ip netns add ns1
echo '{"cniVersion": "0.1.0", "name": "contiv-poc", "type": "contivk8s.bin"}' | CNI_ARGS='K8S_POD_UID=;IgnoreUnknown=;K8S_POD_NAMESPACE=;K8S_POD_NAME=;K8S_POD_INFRA_CONTAINER_ID=abc;NwNameSpace=;CNI_IFNAME=' CNI_NETNS=/var/run/netns/ns1 CNI_IFNAME=eth0 CNI_COMMAND=ADD ./contivk8s.bin
echo '{"cniVersion": "0.1.0", "name": "contiv-poc", "type": "contivk8s.bin"}' | CNI_ARGS='K8S_POD_UID=;IgnoreUnknown=;K8S_POD_NAMESPACE=;K8S_POD_NAME=;K8S_POD_INFRA_CONTAINER_ID=abc;NwNameSpace=;CNI_IFNAME=' CNI_NETNS=/var/run/netns/ns1 CNI_IFNAME=eth0 CNI_COMMAND=DEL ./contivk8s.bin
ip netns del ns1

Summary

A Kubernetes cluster upgrade requires investigating many issues, and an upgrade spanning multiple versions demands an even more careful survey of the potential pitfalls. Among those pitfalls, the scariest is networking, and scarier still than networking is the legacy baggage carried by the network plugin.