By this I mean: I have a PowerDNS server running in my cluster, and I would like Kubernetes to add/update DNS entries in that server to reflect all services, or any domains that would be used within the cluster. This is partly to fix a current issue I am having, and partly for general control and centralization.
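
To make that concrete, this is roughly the kind of record update I'd want automated whenever a service appears, assuming the PowerDNS HTTP API is enabled (the zone, API key, URL, and IP below are placeholders, not my real setup):

    # Placeholder example: upsert an A record for a service via the PowerDNS HTTP API.
    curl -s -X PATCH \
      -H 'X-API-Key: changeme' \
      -H 'Content-Type: application/json' \
      http://pdns.example.internal:8081/api/v1/servers/localhost/zones/cluster.example. \
      -d '{
        "rrsets": [{
          "name": "myservice.cluster.example.",
          "type": "A",
          "ttl": 60,
          "changetype": "REPLACE",
          "records": [{"content": "10.43.12.34", "disabled": false}]
        }]
      }'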

  • SpiderUnderUrBed@lemmy.zipOP · 6 days ago

    No, I want to replace kube-dns and CoreDNS. Some of my applications will resolve an IP at my DNS server and then try those IPs within the server, but mainly I want to replace the current DNS stack because of several issues.

    • Joe@discuss.tchncs.de · 6 days ago

      Ok… so your actual issue is with CoreDNS, and you are asking here for a more complicated, custom, untested alternative?

      What is your issue with CoreDNS?

      • SpiderUnderUrBed@lemmy.zipOP · 6 days ago

        Well, it's kube-dns, and it simply does not work. More specifically, it cannot resolve any external domains. I think it can resolve internal domains, though I doubt that's working either, but mainly it can't resolve external domains. I posted about it here: https://lemmy.zip/post/36964791

        Recently it was fixed because I found the correct endpoint, but now it has stopped working again: I updated the endpoint to the newer one, but it went back to the original issue detailed in that post.

        • Joe@discuss.tchncs.de · 5 days ago (edited)

          I’d be surprised if it’s still kube-dns… the service name is still kube-dns, but there will probably be CoreDNS pods behind it. To debug this, you should first make sure you can resolve DNS by pointing directly at an external DNS server from a pod, and then from the node if that fails, e.g. dig @1.1.1.1 google.com or host google.com 1.1.1.1. It might be a routing/firewall/NAT issue more than DNS, and this would help track that down.
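
          For example, a minimal sketch assuming you can launch a throwaway pod (the image and pod name are just placeholders):

              # Query an external resolver directly from a short-lived pod.
              kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
                nslookup google.com 1.1.1.1

              # If that fails, repeat the check from the node itself.
              dig @1.1.1.1 google.com
              host google.com 1.1.1.1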

          • SpiderUnderUrBed@lemmy.zipOP · 5 days ago

            https://pastebin.com/RhU5xtma I can't access any external address, including DNS servers, and there is no firewall running on my Pi (the master node). I can set the nameserver to 1.1.1.1 in the pod's config, and IIRC that works, but inside the pod it doesn't work. So how do I fix this? You probably need more information, which I can share. I am running Calico as my CNI.

            • Joe@discuss.tchncs.de · 4 days ago

              Sorry - I totally misread this. You cannot access internet addresses. So it’s a routing or NAT issue, most likely.

              I assume you are using k3d for this, btw?

              So… on the “server” (e.g. docker exec -ti k3d-k3s-default-server-0 /bin/sh), you should be able to “ping 8.8.8.8” successfully.

              If not, the issue may lie with your host’s docker setup.
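
              Something like this, assuming the default k3d cluster name (adjust the container and network names for yours):

                  # Raw connectivity test from inside the k3d server container.
                  docker exec -ti k3d-k3s-default-server-0 ping -c 3 8.8.8.8

                  # If that fails, look at the host side: the k3d bridge network
                  # and the NAT rules Docker relies on for outbound traffic.
                  docker network inspect k3d-k3s-default
                  sudo iptables -t nat -L POSTROUTING -n -v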

                  • SpiderUnderUrBed@lemmy.zipOP · 4 days ago

                    Well, I switched to Cilium, same issue, and the reason I started using a different CNI earlier than I intended was that Flannel didn't work.

                    This issue might seem complex, but could you tell me some debugging steps and logs to try, so I can maybe get to the source of the issue, or at least put together a way to reproduce it (so I could maybe file a bug report)?

            • Joe@discuss.tchncs.de · 4 days ago (edited)

              Do you have any NetworkPolicies configured that could block ingress (to kube-dns, in kube-system) or egress (in your namespace)? If any ingress or egress NetworkPolicy matches a pod, that pod flips from allow-by-default to deny-by-default for that direction.
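
              For reference, even a policy as simple as this sketch (the namespace and port rules here are made up, not from your cluster) would switch every matched pod to deny-by-default egress, which is why it has to allow DNS to kube-system explicitly:

                  # Hypothetical egress policy: merely matching a pod denies its egress
                  # by default; the rule below re-allows DNS to kube-system.
                  kubectl apply -f - <<'EOF'
                  apiVersion: networking.k8s.io/v1
                  kind: NetworkPolicy
                  metadata:
                    name: example-allow-dns-egress
                    namespace: default
                  spec:
                    podSelector: {}          # matches every pod in the namespace
                    policyTypes:
                      - Egress
                    egress:
                      - to:
                          - namespaceSelector:
                              matchLabels:
                                kubernetes.io/metadata.name: kube-system
                        ports:
                          - protocol: UDP
                            port: 53
                          - protocol: TCP
                            port: 53
                  EOF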

              You should also do kubectl get service and kubectl get endpoints in kube-system, as well as kubectl get pods | grep -i dns

              • SpiderUnderUrBed@lemmy.zipOP · 4 days ago

                spiderunderurbed@raspberrypi:~/k8s $ kubectl get networkpolicy -A
                No resources found
                spiderunderurbed@raspberrypi:~/k8s $ 
                

                 No NetworkPolicies.

                spiderunderurbed@raspberrypi:~/k8s $ kubectl get pods -A | grep -i dns
                default                      pdns-admin-mysql-854c4f79d9-wsclq                         1/1     Running            1 (2d22h ago)    4d9h
                default                      pdns-mysql-master-6cddc8cd54-cgbs9                        1/1     Running            0                7h49m
                kube-system                  coredns-ff8999cc5-hchq6                                   1/1     Running            1 (2d22h ago)    4d11h
                kube-system                  svclb-pdns-mysql-master-1993c118-8xqzh                    3/3     Running            0                4d
                kube-system                  svclb-pdns-mysql-master-1993c118-whf5g                    3/3     Running            0                124m
                spiderunderurbed@raspberrypi:~/k8s $ 
                

                 Ignore PowerDNS, it's just extra stuff, but yeah, CoreDNS is running.

                spiderunderurbed@raspberrypi:~/k8s $  kubectl get endpoints  -n kube-system
                NAME             ENDPOINTS                                              AGE
                kube-dns         172.16.246.61:53,172.16.246.61:53,172.16.246.61:9153   4d11h
                metrics-server   172.16.246.45:10250                                    4d11h
                traefik          <none>                                                 130m
                spiderunderurbed@raspberrypi:~/k8s $ 
                

                ^ endpoints and services:

                spiderunderurbed@raspberrypi:~/k8s $ kubectl get svc -n kube-system
                NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
                kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       4d11h
                metrics-server   ClusterIP      10.43.67.112    <none>        443/TCP                      4d11h
                traefik          LoadBalancer   10.43.116.221   <pending>     80:31123/TCP,443:30651/TCP   131m
                spiderunderurbed@raspberrypi:~/k8s $