IT漫步

Notes on technology and life © Yaohui

kernel: unregister_netdevice: waiting for eth0 to become free. Usage count = 1

[Copied from]: https://access.redhat.com/solutions/3659011 "RHEL7 and kubernetes: kernel: unregister_netdevice: waiting for eth0 to become free. Usage count = 1" — SOLUTION VERIFIED, updated October 13 2019 at 9:17 PM, English.

Issue: We are trying to prototype Kubernetes on top of RHEL and encounter a situation where the device appears to be frozen. There are repeated messages similar to: …
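The symptom is easy to spot by grepping the kernel log. A minimal sketch, where the sample line below stands in for real dmesg output (on a live node you would pipe dmesg or /var/log/messages into the same grep):

```shell
# Hypothetical log line standing in for real `dmesg` output
msg='kernel: unregister_netdevice: waiting for eth0 to become free. Usage count = 1'

# Count occurrences of the symptom pattern; repeated hits usually mean
# the network namespace teardown is stuck and the node needs attention.
echo "$msg" | grep -c 'unregister_netdevice: waiting for .* to become free'
```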


[Kubernetes] Create deployment, service by Python client

Install the Kubernetes Python client and PyYAML:

# pip install kubernetes pyyaml

1. Get Namespaces or Pods by CoreV1Api:

# -*- coding: utf-8 -*-
from kubernetes import client, config, utils

config.kube_config.load_kube_config(config_file="../kubecfg.yaml")
coreV1Api = client.CoreV1Api()

print("\nListing all namespaces")
for ns in coreV1Api.list_namespace().items:
    print(ns.metadata.name)

print("\nListing pods with their IP, namespace, names:")
for pod in coreV1Api.list_pod_for_all_namespaces(watch=False).items:
    print("%s\t\t%s\t%s" % (pod.status.pod_ip, …
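The same client can also create the Deployment and Service from the post title. A minimal sketch of the two manifests as plain dicts (the my-nginx name and nginx image are illustrative placeholders); with a cluster configured you would pass them to client.AppsV1Api().create_namespaced_deployment(...) and client.CoreV1Api().create_namespaced_service(...), or to kubernetes.utils.create_from_dict:

```python
# Manifests as plain dicts; the kubernetes client accepts dicts as well as
# its typed model objects. Names and images below are placeholders.

def make_deployment(name, image, replicas=1):
    # A minimal apps/v1 Deployment manifest.
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

def make_service(name, port, target_port):
    # A minimal v1 Service manifest selecting the Deployment's pods by label.
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": {"app": name},
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

dep = make_deployment("my-nginx", "nginx:1.15", replicas=2)
svc = make_service("my-nginx", 80, 80)
print(dep["kind"], svc["kind"])  # -> Deployment Service
```

Against a live cluster, client.AppsV1Api().create_namespaced_deployment(namespace="default", body=dep) submits the dict directly.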


Customize hosts record on docker and kubernetes

Docker:

docker run -it --rm --add-host=host1:172.17.0.2 --add-host=host2:192.168.1.3 busybox

Use "--add-host" to add entries to /etc/hosts.

Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

use …
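What kubelet does with hostAliases amounts to appending lines to the pod's /etc/hosts. A small sketch of that mapping (the exact spacing and comments kubelet writes may differ):

```python
def hosts_lines(host_aliases):
    # Render each hostAliases entry as an /etc/hosts line: "<ip> <name> <name>..."
    return ["%s %s" % (a["ip"], " ".join(a["hostnames"])) for a in host_aliases]

aliases = [
    {"ip": "127.0.0.1", "hostnames": ["foo.local", "bar.local"]},
    {"ip": "10.1.2.3", "hostnames": ["foo.remote", "bar.remote"]},
]
for line in hosts_lines(aliases):
    print(line)
# -> 127.0.0.1 foo.local bar.local
# -> 10.1.2.3 foo.remote bar.remote
```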


Running into the infamous "container runtime is down, PLEG is not healthy"

After an unexpected power loss, a small Kubernetes cluster in our development environment hit the "PLEG is not healthy" problem: pods in k8s went to Unknown or ContainerCreating status, and the nodes went NotReady:

# kubectl get nodes
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-dev-master Ready master 1y v1.10.0 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://17.3.0
k8s-dev-node1 NotReady node 1y v1.10.0 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 docker://Unknown
k8s-dev-node2 NotReady node 1y v1.10.0 <none> CentOS Linux 7 (Core) 3.10.0-957.21.3.el7.x86_64 …
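A quick way to pick the unhealthy nodes out of `kubectl get nodes` output is to filter on the STATUS column. A sketch, where the sample text mirrors the output above:

```python
def not_ready_nodes(kubectl_output):
    # Parse `kubectl get nodes` text and return names whose STATUS is NotReady.
    lines = kubectl_output.strip().splitlines()[1:]  # skip the header row
    return [line.split()[0] for line in lines if line.split()[1] == "NotReady"]

sample = """NAME STATUS ROLES AGE VERSION
k8s-dev-master Ready master 1y v1.10.0
k8s-dev-node1 NotReady node 1y v1.10.0
k8s-dev-node2 NotReady node 1y v1.10.0"""

print(not_ready_nodes(sample))  # -> ['k8s-dev-node1', 'k8s-dev-node2']
```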


Kubernetes CronJob failed to schedule: Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew

On Kubernetes v1.13.3 we scheduled a CronJob to run every 5 minutes, but noticed that no new pod had been created for 3 days:

# kubectl get cronjob/dingtalk-atndsyncer
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
dingtalk-atndsyncer */5 * * * * False 0 3d1h 4d21h

The CronJob's .spec.concurrencyPolicy is Forbid, so concurrent runs are not allowed. Describing the CronJob shows a FailedNeedsStart event with the message "Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew."

# kubectl describe …
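The controller's complaint can be reproduced with simple arithmetic: it enumerates every scheduled time since the last run (or since startingDeadlineSeconds ago, if set) and gives up past 100 misses. A sketch of that check for a */5 schedule, with illustrative timestamps:

```python
from datetime import datetime, timedelta

def missed_starts(last_schedule, now, interval, starting_deadline=None):
    # Count schedule times between the last run and now, mimicking the
    # CronJob controller's check that errors out above 100 misses.
    window_start = last_schedule
    if starting_deadline is not None:
        window_start = max(window_start, now - starting_deadline)
    missed, t = 0, window_start + interval
    while t <= now:
        missed += 1
        t += interval
    return missed

now = datetime(2019, 3, 10, 12, 0)
last = now - timedelta(days=3, hours=1)  # no run for 3d1h, as in the output above
every5 = timedelta(minutes=5)

print(missed_starts(last, now, every5))
# -> 876, far over the 100-miss limit, so the controller refuses to schedule
print(missed_starts(last, now, every5, starting_deadline=timedelta(minutes=15)))
# -> 3, which is why setting .spec.startingDeadlineSeconds unblocks the CronJob
```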


Kubernetes 1.13.3 external etcd clean up | Clearing Kubernetes external etcd data

If something goes wrong while setting up Kubernetes, you can run kubeadm reset to reset the cluster state. But if you use an external etcd cluster, kubeadm reset does not clear the data in that external etcd cluster, which means that if you run kubeadm init again you will see data left over from the previous Kubernetes cluster. You can query and manually clear the external etcd cluster as follows (using Kubernetes 1.13.3 as an example):

1. Query all data:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.2.24 etcdctl --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" --cacert="/etc/kubernetes/pki/etcd/ca.crt" --endpoints https://etcd1.cloud.k8s:2379 get "" --prefix

2. Delete all data:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.2.24 etcdctl --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" --cacert="/etc/kubernetes/pki/etcd/ca.crt" --endpoints https://etcd1.cloud.k8s:2379 del "" --prefix …
