By this I mean: I have a PowerDNS server running in my cluster, and I would like Kubernetes to add/update DNS entries in that server to reflect all services (or any domains used within the cluster). This is to fix a current issue I am having, and for general control and centralization purposes.
I’d be surprised if it’s still kube-dns… the Service is still named kube-dns, but there will probably be CoreDNS pods behind it. To debug this, first make sure you can resolve DNS by pointing directly at an external DNS server from a pod, and then from the node if that fails, e.g. dig @1.1.1.1 google.com, or host google.com 1.1.1.1. It might be a routing/firewall/NAT issue more than DNS, and this would help track that down.
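A minimal version of the in-pod test, assuming a throwaway busybox pod (the name dnstest is arbitrary):

$ kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never -- nslookup google.com 1.1.1.1

If that times out but dig @1.1.1.1 google.com works from the node itself, the problem sits between the pod network and the outside world.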
https://pastebin.com/RhU5xtma I can’t access any external address, including DNS servers. There is no firewall running on my Pi (the master node). I can set the nameserver to 1.1.1.1 in the pod’s config and IIRC that works, but inside the pod it still doesn’t resolve. So how do I fix this? You probably need more information, so let me know what I can share. I am running Calico as my CNI.
Sorry - I totally misread this. You cannot access internet addresses at all, so it’s most likely a routing or NAT issue.
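A few host-side checks worth running on the node (interface and CIDR details will differ on your setup):

$ ip route                                  # is there a default route at all?
$ ping -c 3 8.8.8.8                         # raw connectivity from the node itself
$ sudo iptables -t nat -L POSTROUTING -n    # is pod traffic being masqueraded on the way out?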
I assume you are using k3d for this, btw?
So… on the “server” (e.g. docker exec -ti k3d-k3s-default-server-0 -- /bin/sh), you should be able to run ping 8.8.8.8 successfully.
If not, the issue may lie with your host’s Docker setup.
Not k3d, just plain k3s
Your k3s/calico networking is likely screwed. Try creating a new cluster with flannel instead.
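Roughly, assuming the standard k3s install script and default paths:

$ /usr/local/bin/k3s-uninstall.sh        # tear down the existing cluster
$ curl -sfL https://get.k3s.io | sh -    # reinstall; k3s bundles flannel by default

(Just don’t pass --flannel-backend=none this time, which is what disables the built-in flannel.)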
Well, I switched to Cilium and have the same issue. The reason I started using a different CNI earlier than I intended was that flannel didn’t work either.
This issue might seem complex, but could you suggest some debugging steps and logs to try, so I can get to the source of the issue, or at least provide a way to reproduce it (so I could maybe file a bug report)?
It might be a simple issue like IP forwarding not being enabled, or host-level iptables configuration, or perhaps weird and wonderful routing (e.g. WireGuard or other VPNs).
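Quick ways to check the first two (standard Linux knobs):

$ sysctl net.ipv4.ip_forward            # should print 1 on a Kubernetes node
$ sudo iptables -L FORWARD -n | head    # a default DROP policy here can kill pod egress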
Do you have any NetworkPolicies configured that could block ingress (to kube-dns, in kube-system) or egress (in your namespace)? If any ingress or egress NetworkPolicy matches a pod, that pod flips from allow-by-default to deny-by-default for that direction.
You should also do kubectl get service and kubectl get endpoints in kube-system, as well as kubectl get pods | grep -i dns
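Spelled out, so the scope is explicit:

$ kubectl get networkpolicy -A
$ kubectl get service -n kube-system
$ kubectl get endpoints -n kube-system
$ kubectl get pods -A | grep -i dns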
spiderunderurbed@raspberrypi:~/k8s $ kubectl get networkpolicy -A
No resources found
spiderunderurbed@raspberrypi:~/k8s $
No networkpolicies.
spiderunderurbed@raspberrypi:~/k8s $ kubectl get pods -A | grep -i dns
default       pdns-admin-mysql-854c4f79d9-wsclq        1/1   Running   1 (2d22h ago)   4d9h
default       pdns-mysql-master-6cddc8cd54-cgbs9       1/1   Running   0               7h49m
kube-system   coredns-ff8999cc5-hchq6                  1/1   Running   1 (2d22h ago)   4d11h
kube-system   svclb-pdns-mysql-master-1993c118-8xqzh   3/3   Running   0               4d
kube-system   svclb-pdns-mysql-master-1993c118-whf5g   3/3   Running   0               124m
spiderunderurbed@raspberrypi:~/k8s $
Ignore PowerDNS, it’s just extra stuff, but yeah, CoreDNS is running.
spiderunderurbed@raspberrypi:~/k8s $ kubectl get endpoints -n kube-system
NAME             ENDPOINTS                                              AGE
kube-dns         172.16.246.61:53,172.16.246.61:53,172.16.246.61:9153   4d11h
metrics-server   172.16.246.45:10250                                    4d11h
traefik          <none>                                                 130m
spiderunderurbed@raspberrypi:~/k8s $
^ endpoints and services:
spiderunderurbed@raspberrypi:~/k8s $ kubectl get svc -n kube-system
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       4d11h
metrics-server   ClusterIP      10.43.67.112    <none>        443/TCP                      4d11h
traefik          LoadBalancer   10.43.116.221   <pending>     80:31123/TCP,443:30651/TCP   131m
spiderunderurbed@raspberrypi:~/k8s $