Howdy Everyone!
As I'm setting up my home infrastructure with Docker, I wanted to ask: is it better to run DNS (something like Pi-hole) on my main Docker swarm, or on a dedicated machine/Docker host separate from the rest of my infrastructure?
Thanks for the input!
Either is fine: the real question is what happens when something breaks, and how much that downtime matters to you.
If your docker host depends on the pihole it’s running, there can be some weirditry if it’s not available during boot and whatnot (or if it crashes, etc.).
…I ended up with a docker container of pihole and an actual pi as the secondary so that it’s nice and redundant.
This approach sounds good.
I think the correct approach is both, if you have the option.
Most devices accept two name servers. Redundancy is always good, especially for DNS.
Depending on the network’s setup, having Pihole fail or become unavailable could leave the network completely broken until it comes back. Configuring the network with at least one backup DNS server is therefore extremely important.
I also recommend having redundant and/or highly available Pihole instances running on different hardware if possible. It may also be a good idea to have an additional external DNS server (e.g. 1.1.1.1, 8.8.8.8, 9.9.9.9, etc.) configured as a last-resort backup in the event that all the Pihole instances are unavailable (or misconfigured).
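If you want a quick sanity check that each of those is actually answering, a minimal probe like the sketch below works; the addresses are placeholders for your own Pi-hole instances and it assumes dnspython is installed.

```python
# Minimal DNS liveness probe -- all IPs are placeholders for your own setup.
# Needs dnspython:  pip install dnspython
import dns.resolver

SERVERS = {
    "pihole-primary": "192.168.1.10",    # e.g. the Pi-hole running in docker
    "pihole-secondary": "192.168.1.11",  # e.g. the Pi-hole on separate hardware
    "external-fallback": "9.9.9.9",      # last-resort public resolver
}

for name, ip in SERVERS.items():
    r = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
    r.nameservers = [ip]                        # ask only this server
    r.timeout = r.lifetime = 2                  # give up after 2 seconds
    try:
        r.resolve("example.com", "A")
        print(f"{name} ({ip}): OK")
    except Exception as exc:
        print(f"{name} ({ip}): FAILED ({type(exc).__name__})")
```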
While I agree that an additional DNS server is without question a good thing, you need to understand that if you set up two nameservers on your laptop (or whatever), they don't have any preference between them. So if you have a Pi-hole as one nameserver and Google as the other, you'll occasionally see ads and your Pi-hole gets overridden every now and then.
There are multiple ways of solving this, but people often have the misinformed idea that the first item in your DNS server list is preferred, and that is very much not the case.
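If you want to see this on your own network, here's a rough sketch that asks a Pi-hole and a public resolver directly for the same domain; the addresses and the blocked domain are placeholders (pick one your blocklist actually covers), and it assumes dnspython is installed. Whichever server your client happens to pick determines whether the ad gets blocked.

```python
# Compare what a Pi-hole and a public resolver answer for the same name.
# Placeholder IPs and domain -- substitute your own.  Needs dnspython.
import dns.resolver

PIHOLE = "192.168.1.10"      # your Pi-hole (placeholder)
PUBLIC = "8.8.8.8"           # public resolver with no ad blocking
DOMAIN = "ads.example.com"   # a domain that is on your blocklist

def ask(server, domain):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    r.timeout = r.lifetime = 2
    try:
        return [rr.to_text() for rr in r.resolve(domain, "A")]
    except dns.resolver.NXDOMAIN:
        return ["NXDOMAIN"]
    except Exception as exc:
        return [f"error: {type(exc).__name__}"]

# Pi-hole typically answers 0.0.0.0 (or NXDOMAIN, depending on its blocking
# mode) for blocked names, while the public resolver returns the real record,
# so a client that happens to pick the public server sees the ad.
print("pi-hole :", ask(PIHOLE, DOMAIN))
print("public  :", ask(PUBLIC, DOMAIN))
```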
Personally, I'm running a Pi-hole for my network on a VM. If it's down for a longer time, I just switch the DNS servers handed out by DHCP and reboot my access points (family hardware is 99% on wifi), so the rest of the family has working internet while I bring the rest of the infrastructure back online. But that's just my scenario; yours will most likely be more or less different.
I did not know that. TIL that I am people!
Do you know if it's always this way? For example, you mentioned this is how it works for DNS on a laptop, but would it behave differently if DNS is configured at the network firewall/router? I tried searching for more info, but couldn't find anything confirming how accurate this is.
As far as I know, it's the default way of handling multiple DNS servers. I'd guess at least some of the firmware out there treats them as primary/secondary, but based on my (limited) understanding, the majority of Linux/BSD-based software uses one or the other more or less at random, without any preference. So it's not always like that, but I'd say it's less common to treat the DNS entries with any kind of preference than to just pick one at random.
But as there's a ton of different hardware/firmware around, this of course isn't conclusive; for your specific case you'll need to dig pretty deep to get the actual answer for your situation.
For Windows, however, it absolutely is in order of listing. The typical behaviour is that no reply from the primary DNS after about a second results in it moving down the list.
Redundancy aside, this matters more when you span multiple datacenters and always want lookups going to the local (or most local) DC available.
TIL about the Linux/BSD not having preference though. Good to know.
My preferred way of solving this is to run a PowerDNS cluster with DNSDist and keepalived. You get all the redundancy via a single (V)IP.
Technitium is probably more user friendly for greenhorns, though… and offers DHCP too. Beats pihole by a mile.
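As a rough illustration of the failover side: keepalived can run a health-check script and move the VIP when it fails. Something like the sketch below could be pointed at from a vrrp_script block; the address and test name are placeholders, it assumes dnspython is installed, and it's just one way to wire it up, not the only one.

```python
#!/usr/bin/env python3
# Sketch of a DNS health check keepalived could run via vrrp_script.
# It queries the resolver on this node and exits nonzero if it gets no
# answer, so keepalived can drop the VIP and let the other node take over.
# Assumes dnspython is installed; address and test name are placeholders.
import sys
import dns.resolver

LOCAL_DNS = "127.0.0.1"    # the dnsdist/PowerDNS instance on this node
TEST_NAME = "example.com"  # any name the resolver should always answer

r = dns.resolver.Resolver(configure=False)
r.nameservers = [LOCAL_DNS]
r.timeout = r.lifetime = 1
try:
    r.resolve(TEST_NAME, "A")
    sys.exit(0)   # healthy: keep advertising the VIP from here
except Exception:
    sys.exit(1)   # unhealthy: lower priority / release the VIP
```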
Weirditry. Holy shit my brain melted.