Configure own DNS name servers
# umh-support
r
How can I configure my own DNS nameservers on my instance? I've updated /etc/resolv.conf:
nameserver 100.64.0.2
nameserver 1.1.1.1
nameserver 8.8.8.8
I also updated the instance nameservers in the UMH console. I'm pretty new to creating protocol converters, but I'm trying to connect to an on-premise Microsoft SQL Server (the UMH instance is connected in the cloud) through an overlay network, and it keeps trying to resolve the domain within the local cluster network:
level=error msg="Failed to connect to sql_raw: lookup myserverlocation.com on 10.43.0.10:53: no such host" @service=benthos label=sql_input path=root.input
I'm pretty new to creating a connection from the cluster to an external database, and I honestly can't find any guides on this topic.
j
@Ferdinand
FYI: this is likely also a good small blog article
f
In general, CoreDNS should load resolv.conf from the system. What you can do is restart it, forcing it to reload its config:
sudo $(which kubectl) --kubeconfig /etc/rancher/k3s/k3s.yaml -n kube-system delete pods -l k8s-app=kube-dns
If it still refuses to resolve your domain, you can edit the CoreDNS config to tell it directly to use your DNS server:
sudo $(which kubectl) --kubeconfig /etc/rancher/k3s/k3s.yaml -n kube-system edit configmap coredns
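As a sketch, the edited ConfigMap could look something like the following. Note that the exact plugin list in the default k3s Corefile varies between versions, so treat everything except the `forward` and `log` lines as an approximation of what you will see in your own cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log                          # added: log every DNS query for debugging
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        forward . 1.1.1.1 8.8.8.8    # changed: forward non-cluster names to these servers
        cache 30
        loop
        reload
        loadbalance
    }
```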
Modify the `forward` section to include 1-3 DNS servers (in my case I told it to use 1.1.1.1). You can also add `log` to make the CoreDNS logs more verbose. Here are some more samples:
- `forward . 1.1.1.1` (resolves all non-local domains using Cloudflare's DNS server)
- `forward . 1.1.1.1 /etc/resolv.conf` (uses Cloudflare's DNS plus the nameservers configured in /etc/resolv.conf)
- `forward . 1.1.1.1 8.8.8.8` (uses Cloudflare and Google DNS in round-robin mode with failover if one is unreachable)
There are also a lot more options (https://github.com/coredns/coredns/blob/master/plugin/forward/README.md) for advanced use cases. After changing these configs, restart the pod. You can now either go directly into your data flow component (it will need a restart/re-deploy) or use a test container:
sudo $(which kubectl) --kubeconfig /etc/rancher/k3s/k3s.yaml run -it --rm --restart=Never dns-test --image=busybox -- sh
(Inside this container you can run `nslookup myserverlocation.com` to validate that you can now reach it.)
https://cdn.discordapp.com/attachments/1308916155796164628/1309092553366179851/Screenshot_2024-11-21_at_10.41.25.png?ex=674052f2&is=673f0172&hm=e329b8b33cf2c9b4530f1cf9cf653feaf507c29fef699e5bc77ff103f2836ad7&
If you want to check the logs of CoreDNS you can use:
sudo $(which kubectl) --kubeconfig /etc/rancher/k3s/k3s.yaml -n kube-system logs -f -l k8s-app=kube-dns
r
Thank you both! Restarting the CoreDNS pod worked, and I definitely agree this would make a great article covering those various methods. I would point my team to it, since all of our instances are going to be using custom nameservers, so this would be a go-to article for us.