kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1

[Copied from]: https://access.redhat.com/solutions/3659011

RHEL7 and kubernetes: kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1


Issue

  • We are trying to prototype kubernetes on top of RHEL and run into a situation where the network device seems to be stuck. There are repeated messages similar to:
[1331228.795391] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1331238.871337] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1331248.919329] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1331258.964322] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
  • This problem occurs when scaling down pods in kubernetes. A reboot of the node is required to recover.
  • This has been seen after customers upgraded to a kernel containing the fix for https://access.redhat.com/solutions/3105941, but after that the messages appear on ethX instead of lo.

Environment

  • Red Hat Enterprise Linux 7
    • kernel-3.10.0-862.el7.x86_64
    • kernel-3.10.0-957.el7.x86_64
  • upstream Kubernetes
  • upstream docker

Refer to:

https://github.com/kubernetes/kubernetes/issues/70427

 

[Kubernetes] Create Deployment and Service with the Python client

Install the Kubernetes Python client and PyYAML:

# pip install kubernetes pyyaml

1. Get Namespaces or Pods by CoreV1Api:

# -*- coding: utf-8 -*-
from kubernetes import client, config

# Load cluster credentials from a kubeconfig file
config.load_kube_config(config_file="../kubecfg.yaml")
coreV1Api = client.CoreV1Api()

print("\nListing all namespaces:")
for ns in coreV1Api.list_namespace().items:
    print(ns.metadata.name)

print("\nListing pods with their IP, namespace, name:")
for pod in coreV1Api.list_pod_for_all_namespaces(watch=False).items:
    print("%s\t\t%s\t%s" % (pod.status.pod_ip, pod.metadata.namespace, pod.metadata.name))

2. Create a Deployment and a Service with AppsV1Api and CoreV1Api:

# -*- coding: utf-8 -*-
from kubernetes import client, config
import yaml

config.load_kube_config(config_file="../kubecfg.yaml")

# Parse the manifests into dicts that the API accepts as request bodies
with open("deploy.yaml") as f:
    jsonDeploy = yaml.safe_load(f)

with open("service.yaml") as f:
    jsonService = yaml.safe_load(f)

appsV1Api = client.AppsV1Api()
coreV1Api = client.CoreV1Api()  # Services are created through the core API group

if jsonDeploy['kind'] == 'Deployment':
    appsV1Api.create_namespaced_deployment(
        namespace="default", body=jsonDeploy
    )

if jsonService['kind'] == 'Service':
    coreV1Api.create_namespaced_service(
        namespace="default", body=jsonService
    )

3. Create any type of object from a YAML file with utils.create_from_yaml; you can put multiple resources in one YAML file:

# -*- coding: utf-8 -*-
from kubernetes import client, config, utils

config.load_kube_config(config_file="../kubecfg.yaml")

# create_from_yaml creates every resource defined in the file
k8sClient = client.ApiClient()
utils.create_from_yaml(k8sClient, "deploy-service.yaml")

Reference:
https://github.com/kubernetes-client/python/blob/6709b753b4ad2e09aa472b6452bbad9f96e264e3/examples/create_deployment_from_yaml.py
https://stackoverflow.com/questions/56673919/kubernetes-python-api-client-execute-full-yaml-file

Customize hosts record on docker and kubernetes

Docker:

docker run -it --rm --add-host=host1:172.17.0.2 --add-host=host2:192.168.1.3 busybox

Use "--add-host" to add entries to /etc/hosts.
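
The same container can also be started from Python. Below is a minimal sketch using the third-party Docker SDK for Python (pip install docker), which is not used elsewhere in this post, so treat it as an assumption rather than the method above; extra_hosts is the SDK's counterpart of --add-host, and the image and addresses mirror the command line:

# -*- coding: utf-8 -*-
# Hedged sketch: Docker SDK for Python; extra_hosts maps hostnames to IPs,
# giving the same effect as the --add-host flags above.
import docker

dockerClient = docker.from_env()
output = dockerClient.containers.run(
    "busybox",
    command=["cat", "/etc/hosts"],       # print the generated hosts file
    extra_hosts={"host1": "172.17.0.2",  # same entries as --add-host
                 "host2": "192.168.1.3"},
    remove=True,                         # like --rm
)
print(output.decode())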

 

Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

Use "spec.hostAliases" to configure /etc/hosts entries for a pod or deployment.
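
If you are creating the pod with the Python client from the sections above instead of applying YAML, the same hostAliases can be expressed with the typed models. A minimal sketch (kubeconfig path reused from the earlier examples, namespace assumed to be default):

# -*- coding: utf-8 -*-
from kubernetes import client, config

config.load_kube_config(config_file="../kubecfg.yaml")

# Mirror of the manifest above: two host aliases plus the cat-hosts container
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hostaliases-pod"),
    spec=client.V1PodSpec(
        host_aliases=[
            client.V1HostAlias(ip="127.0.0.1", hostnames=["foo.local", "bar.local"]),
            client.V1HostAlias(ip="10.1.2.3", hostnames=["foo.remote", "bar.remote"]),
        ],
        containers=[
            client.V1Container(name="cat-hosts", image="busybox",
                               command=["cat"], args=["/etc/hosts"]),
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)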

 

https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/

Running into the legendary "container runtime is down PLEG is not healthy"

After an unexpected power outage, a small kubernetes cluster in our development environment unfortunately ran into the "PLEG is not healthy" problem: pods got stuck in Unknown or ContainerCreating status, and the nodes turned NotReady:

# kubectl get nodes -o wide
NAME             STATUS     ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-dev-master   Ready      master    1y        v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://17.3.0
k8s-dev-node1    NotReady   node      1y        v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://Unknown
k8s-dev-node2    NotReady   node      1y        v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://Unknown
k8s-dev-node3    NotReady   node      289d      v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://Unknown
k8s-dev-node4    Ready      node      289d      v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://17.3.0

The kubelet log keeps reporting: skipping pod synchronization - container runtime is down PLEG is not healthy:

Sep 25 11:05:06 k8s-dev-node1 kubelet[546]: I0925 11:05:06.003645     546 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 21m18.877402888s ago; threshold is 3m0s]
Sep 25 11:05:11 k8s-dev-node1 kubelet[546]: I0925 11:05:11.004116     546 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 21m23.877803484s ago; threshold is 3m0s]
Sep 25 11:05:16 k8s-dev-node1 kubelet[546]: I0925 11:05:16.004382     546 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 21m28.878169681s ago; threshold is 3m0s]

Restarting docker and kubelet on the node brought it back, but before long it failed again and turned NotReady. Some googling turned up related issues on stackoverflow and github/kubernetes:

#45419 was only fixed in v1.16, and upgrading from 1.10 to 1.16 is too tedious. A comment in #61117 said the issue could be resolved by clearing the /var/lib/kubelet/pods directory on the node. On the first attempt the directory could not be deleted because mounted volumes were still in use, and the problem persisted. In the end I went ahead and upgraded docker from 17.3.0 to 19.3.2, and after clearing all data under /var/lib/kubelet/pods/ and /var/lib/docker on every node, the problem was solved.

The rough procedure:

# First disable automatic start of docker and kubelet, then clear the files after a reboot:
systemctl disable docker && systemctl disable kubelet
reboot
rm -rf /var/lib/kubelet/pods/
rm -rf /var/lib/docker

# While at it, docker-ce was upgraded from 17.3.0 to 19.3.2

# After upgrading docker, edit docker.service and keep the storage-driver set to overlay
# (the default in 17.3.0). overlay2, devicemapper and vfs were also tried, but kubelet
# reported errors with all of them; unclear whether it was a kubernetes v1.10 support
# issue or leftover data that was not fully cleaned up.
vi /etc/systemd/system/docker.service

ExecStart=/usr/bin/dockerd ... --storage-driver=overlay

# Reload the unit file and start docker
systemctl daemon-reload
systemctl start docker && systemctl enable docker
systemctl status docker

# Since /var/lib/docker was wiped, if the node cannot pull from the k8s image registry
# directly, the base images it needs have to be loaded manually:
docker load -i kubernetes-v10.0-node.tar

# Start kubelet
systemctl start kubelet && systemctl enable kubelet
systemctl status kubelet

Problem solved:

# kubectl get nodes -o wide
NAME             STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-dev-master   Ready     master    1y        v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://17.3.0
k8s-dev-node1    Ready     node      1y        v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.2
k8s-dev-node2    Ready     node      1y        v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.2
k8s-dev-node3    Ready     node      289d      v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.2
k8s-dev-node4    Ready     node      289d      v1.10.0   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://19.3.2
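
As a side note, instead of eyeballing kubectl get nodes, the Ready condition can also be checked with the Python client from the earlier sections; a small sketch (the kubeconfig path is an assumption):

# -*- coding: utf-8 -*-
from kubernetes import client, config

config.load_kube_config(config_file="../kubecfg.yaml")  # assumed kubeconfig path

# Print each node's Ready condition so NotReady nodes stand out
for node in client.CoreV1Api().list_node().items:
    for cond in node.status.conditions:
        if cond.type == "Ready":
            print("%s\tReady=%s\t%s" % (node.metadata.name, cond.status, cond.reason))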

This power outage unfortunately also cost us 3 months of configuration data on the kong gateway :(. Back up! Back up! Back up!


Kubernetes CronJob failed to schedule: Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew

On Kubernetes v1.13.3 I scheduled a cronjob to run every 5 minutes, but noticed that no new pod had been created for 3 days:

# kubectl get cronjob/dingtalk-atndsyncer
NAME                  SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
dingtalk-atndsyncer   */5 * * * *   False     0        3d1h            4d21h

The cronjob's .spec.concurrencyPolicy is Forbid, which disallows concurrent runs. Describing the cronjob shows FailedNeedsStart events with the message "Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew."

# kubectl describe cronjob/dingtalk-atndsyncer
Name:                       dingtalk-atndsyncer
Namespace:                  default
Labels:                     app=dingtalk-atndsyncer
Annotations:                <none>
Schedule:                   */5 * * * *
Concurrency Policy:         Forbid
Suspend:                    False
Starting Deadline Seconds:  <unset>
Selector:                   <unset>
Parallelism:                <unset>
Completions:                <unset>
Pod Template:
  Labels:  <none>
  Containers:
   dingtalk-atndsyncer:
    Image:      dingtalk-atndsyncer:v1.0
    Port:       <none>
    Host Port:  <none>
    Environment:
      ASPNETCORE_ENVIRONMENT:               Production
    Mounts:                                 <none>
  Volumes:                                  <none>
Last Schedule Time:                         Fri, 06 Sep 2019 08:15:00 +0800
Active Jobs:                                <none>
Events:
  Type     Reason            Age                   From                Message
  ----     ------            ----                  ----                -------
  Warning  FailedNeedsStart  43m (x790 over 178m)  cronjob-controller  Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.
  Warning  FailedNeedsStart  25m (x89 over 40m)    cronjob-controller  Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.
  Warning  FailedNeedsStart  119s (x117 over 22m)  cronjob-controller  Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.

After some googling and a careful read of the official docs (https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#starting-deadline): if .spec.startingDeadlineSeconds is not set, the controller counts missed schedules from the last schedule time, and once more than 100 have been missed it stops scheduling the job.
I tried setting .spec.startingDeadlineSeconds to 300 seconds, which means the job is only skipped if more than 100 schedules are missed within 5 minutes (with a 5-minute schedule interval, a condition that is practically impossible to hit). After applying this setting, the job was scheduled normally again.
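
For reference, the same change can also be applied with the Python client used earlier by patching only .spec.startingDeadlineSeconds (on 1.13 CronJobs live in batch/v1beta1, hence BatchV1beta1Api; the kubeconfig path is an assumption):

# -*- coding: utf-8 -*-
from kubernetes import client, config

config.load_kube_config(config_file="../kubecfg.yaml")  # assumed kubeconfig path

# Strategic-merge patch that only sets .spec.startingDeadlineSeconds
client.BatchV1beta1Api().patch_namespaced_cron_job(
    name="dingtalk-atndsyncer",
    namespace="default",
    body={"spec": {"startingDeadlineSeconds": 300}},
)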

Kubernetes 1.13.3 external etcd clean up

If something goes wrong while setting up Kubernetes, you can run kubeadm reset to reset the cluster state. However, if an external etcd cluster is used, kubeadm reset does not clear the data in the external etcd cluster, which means that running kubeadm init again will bring back data from the previous kubernetes cluster.

The external etcd cluster can be queried and cleared manually as follows (using Kubernetes 1.13.3 as an example):

1. Query all data:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.2.24 etcdctl --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" --cacert="/etc/kubernetes/pki/etcd/ca.crt" --endpoints https://etcd1.cloud.k8s:2379 get "" --prefix

2. Delete all data:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.2.24 etcdctl --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" --cacert="/etc/kubernetes/pki/etcd/ca.crt" --endpoints https://etcd1.cloud.k8s:2379 del "" --prefix

A few key points about these commands (a Python sketch follows this list):

  1. The etcdctl command is run from the docker image k8s.gcr.io/etcd:3.2.24; an externally installed etcdctl works as well
  2. The docker -e flag sets the environment variable ETCDCTL_API=3 to select API version 3
  3. The external etcd CA and client certificates are mounted into the container to connect to the etcd cluster
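
An alternative to wrapping etcdctl in docker is to talk to etcd directly from Python. A rough sketch with the third-party python-etcd3 package (pip install etcd3) is shown below; the package is not used elsewhere here, so treat its API as an assumption. The certificate paths and endpoint mirror the etcdctl commands, and Kubernetes stores its objects under the /registry prefix:

# -*- coding: utf-8 -*-
# Hedged sketch: python-etcd3; same certificates and endpoint as the etcdctl
# commands above. Kubernetes data lives under the /registry prefix.
import etcd3

etcd = etcd3.client(
    host="etcd1.cloud.k8s",
    port=2379,
    ca_cert="/etc/kubernetes/pki/etcd/ca.crt",
    cert_cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt",
    cert_key="/etc/kubernetes/pki/etcd/healthcheck-client.key",
)

# List every Kubernetes key (similar to: etcdctl get "" --prefix)
for value, meta in etcd.get_prefix("/registry"):
    print(meta.key.decode())

# Delete everything under /registry (similar to: etcdctl del "" --prefix); irreversible!
# etcd.delete_prefix("/registry")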

Reference:

External etcd clean up: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/#external-etcd-clean-up