Symptoms include not being able to pull charts, and so on. This can happen for many reasons, but the main culprit is usually a CoreDNS issue.
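Before deploying test pods, it is worth confirming that CoreDNS itself looks healthy. A quick sketch, assuming the standard kube-system deployment with the k8s-app=kube-dns label:

```shell
# Are the CoreDNS pods running, and on which nodes?
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# Any errors or SERVFAILs in the CoreDNS logs?
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50

# Inspect the Corefile for misconfigured forwarders
kubectl -n kube-system get configmap coredns -o yaml
```

If CoreDNS looks fine from the control plane's perspective, the next step is to test resolution from inside pods on each node, which is what the dnsutils pods below are for.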
Deploy a dnsutils pod to each of your nodes like so:
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils-m1
  namespace: default
spec:
  containers:
  - name: dnsutils-m1
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
  nodeSelector:
    kubernetes.io/hostname: m1.example.com
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
---
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils-w1
  namespace: default
spec:
  containers:
  - name: dnsutils-w1
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
  nodeSelector:
    kubernetes.io/hostname: w1.example.com
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
---
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils-w2
  namespace: default
spec:
  containers:
  - name: dnsutils-w2
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
      - sleep
      - "infinity"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
  nodeSelector:
    kubernetes.io/hostname: w2.example.com
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
Then exec into one of the pods like so:
kubectl exec -ti dnsutils-m1 -- bash
Then you can run loops like this to see whether lookups are being dropped:
clear; for a in {1..50}; do echo ===============================; dig google.com; sleep .5; done
# or
clear; for a in {1..50}; do echo ===============================; nslookup google.com | grep SERVFAIL; sleep .5; done
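Rather than eyeballing fifty outputs, you can have the shell count failures for you. A minimal sketch, run from inside one of the dnsutils pods (the function name and defaults are my own; dig is available because the jessie-dnsutils image ships with it). Trying an in-cluster name such as kubernetes.default.svc.cluster.local alongside an external one helps separate CoreDNS problems from upstream-forwarder problems:

```shell
# count_failures NAME N: resolve NAME N times and print how many
# lookups did not come back with status NOERROR.
count_failures() {
  local name="$1" n="$2" fails=0
  for ((i = 0; i < n; i++)); do
    dig +time=2 +tries=1 "$name" | grep -q "status: NOERROR" || fails=$((fails + 1))
    sleep .5
  done
  echo "${fails}"
}

# Example usage:
#   count_failures google.com 50
#   count_failures kubernetes.default.svc.cluster.local 50
```

If the external name fails intermittently while the in-cluster name resolves every time, suspect CoreDNS's upstream forwarders rather than CoreDNS itself.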