Self-Hosting Pi-hole with Docker and Traefik

The Pi-hole logo

Ever get tired of seeing ads? Feel like your internet is slowed down because of tracking scripts? Just bored on a weekend with nothing to do?

Well do I have the blog post for you!

What is Pi-hole?

No, it’s not an insult. Pi-hole primarily acts as a DNS sinkhole, which is a fun way of saying it “lies” to clients when they make certain DNS queries and tells them the IP address is not known. If you have a Pi-hole on your network, it will check each request against a list of domains to see if it should “lie” or send it to an actual DNS provider. If you want more of an explanation, there are a few different ones on this post.

Self Hosting

So is this just for ad blocking? No! Though that is the primary purpose, you can also do other useful things with DNS.

For instance, when you set up a service (like LibreSpeed to do internet speed tests) on your own network, by default you could only get to it by using its IP address, such as 192.168.0.222. To make it easier to remember, you could set up each of your computers individually to map something like librespeed.local to that IP address. But that is time-consuming and doesn’t work when you connect a new computer. With Pi-hole, you can set a DNS record network-wide, so any computer will automatically be able to connect to your service with your easy-to-remember URL.

You can also take it to the next level and turn your Pi-hole into a recursive DNS server. This means that instead of getting DNS entries from middle men (such as Cloudflare or Google’s popular DNS servers), you go directly to the source, also called an authoritative server. This can be a more private approach to DNS, though it does mean your initial DNS calls will likely be slower. But once you have a call cached on your Pi-hole, your DNS request will be much faster. Plus, you are less vulnerable to some DNS hijacking attacks, which is an awesome bonus. The Pi-hole docs talk more about how to do this with the Unbound project.

In my view, even if you don’t use Pi-hole to block ads, this DNS control makes it a fundamental part of any complex self-hosting setup. Eventually I’ll write about my plans and philosophy for self-hosting to go more in-depth here, but the short summary is I want more control of my own data without inconveniencing myself to a noticeable degree. How well I execute remains to be seen, but this is a solid first step on that journey.

Quick ethical quandary. I do have mixed feelings about blocking ads on the internet. They can be annoying, slow things down, and have really bad privacy implications. But they also are the thing that funds content from the creators I like. I’ve been trying to pay for content more as I’ve started blocking more things. It isn’t a perfect system, especially as lots of platforms for paying creators take a sizable cut. Plus there are lots of sites and folks I check out occasionally that I’m not going to donate to or pay, and they just lose out on all revenue. It’s something I’m still figuring out myself.

How to set up Pi-hole

Despite the name, you can run Pi-hole on basically anything, not just a Raspberry Pi (though a Raspberry Pi is an excellent option for self-hosting services like this). In addition to running Pi-hole on a “bare metal” install, you can also run Pi-hole in a virtual machine. This provides some good separation from other services (which is great if you want to avoid potential packaging issues) and will allow you to take “snapshots” of your system for backing up. The basic install for either of these options is very straightforward.

I, however, am using Docker containers to run my services. I prefer this for several reasons:

  1. Much simpler to create or change than an OS install
  2. When used with Docker Compose, the description of what’s running is human readable and trackable with Git
  3. All the separation and backup-ability of VMs

In addition to Docker, I also use Traefik to act as a reverse proxy. This allows me to have multiple services running on the same machine without running into issues with IP addresses and ports. So librespeed.local and pihole.local can exist in harmony, even though both “use” the same IP address and port 80. Where Traefik sets itself apart from other reverse proxies is how it leverages Docker Compose labels. It allows me to create Compose files that don’t know about or rely on other Compose files existing (aside from the Traefik one)! This means my services can be much more modular, which is awesome, especially while experimenting.

Gotchas

Before I go through my config, I wanted to touch on some issues I encountered.

The first is that my router doesn’t allow me to set the DNS server to a local IP address. Fortunately, Pi-hole does have a solution for this! You can enable DHCP on your Pi-hole and have it distribute the DNS server IP to all your local machines. Be warned though, this does mean if your Pi-hole goes offline, all your local machines will eventually be unable to get IP addresses and connect. I eventually plan to get a more configurable router and switch DHCP back to that, in order to reduce my reliance on Pi-hole.

Second, if you use a VPN on your machines (which I do), by default you aren’t going to use the Pi-hole for DNS lookups. You’ll end up using the VPN’s DNS server instead. This might be more private, assuming you trust your VPN service (which you should if you’re using them), but it does mean your Pi-hole does nothing for those devices. Some VPN clients allow you to use a different DNS server, but that workaround is annoying and not universal. Eventually, I’d like to run the VPN at the network level, which would resolve this issue when my devices are local.

Third, some devices, such as smart TVs, have hardcoded DNS servers, so even if you tell them to use your Pi-hole, they will ignore it. A solution I’ve read about online is to set a firewall rule blocking the common public DNS servers. That way those devices fail to connect to their hardcoded DNS and fall back to the Pi-hole one.
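I haven’t needed this myself yet, but on a router or gateway that exposes iptables, a rough sketch of such a rule could look like this (the Pi-hole and resolver IPs below are just examples from my network):

# Drop outbound DNS (port 53) to Google's public resolver from anything
# on the LAN that isn't the Pi-hole (192.168.0.204 here)
iptables -A FORWARD -p udp --dport 53 ! -s 192.168.0.204 -d 8.8.8.8 -j DROP
iptables -A FORWARD -p tcp --dport 53 ! -s 192.168.0.204 -d 8.8.8.8 -j DROP

You’d repeat the rule for whichever other public resolvers you want to block (8.8.4.4, 1.1.1.1, and so on).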

Fourth and finally, for some reason the VM I was running my Pi-hole in had both a static and a dynamic IP address when I first started. The dynamic IP address was coming from my router. This meant when I switched to DHCP on my Pi-hole, things would work fine for a while, until my VM’s dynamic IP lease expired. Then everything would stop working. I don’t know why it had two IP addresses, nor how exactly losing the lease on one broke DHCP and DNS everywhere, but the solution was to disable the DHCP client on the VM and set the static IP address in the /etc/netplan/01-netcfg.yaml file.
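For reference, the static IP config in netplan ends up looking roughly like this (the interface name and addresses are placeholders for my setup, so adjust them for yours):

network:
  version: 2
  renderer: networkd
  ethernets:
    ens18:                 # the VM's network interface
      dhcp4: no            # turn off the DHCP client
      addresses:
        - 192.168.0.204/24 # static IP, matching the Pi-hole config below
      gateway4: 192.168.0.1
      nameservers:
        addresses: [127.0.0.1, 9.9.9.9]

Running sudo netplan apply picks up the change.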

Using Docker Compose and Traefik

I should update all of these to the same Docker Compose version, as well as decide on an approach for image versioning, but that will be a future task. Also, I’m only going to comment on the configuration relevant to this setup, not explain how Docker Compose works in general.

First up, my Traefik docker-compose.yaml:

version: "3.3"

services:

  traefik:
    image: "traefik:v2.4"
    container_name: "traefik"
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    networks: 
      - lan
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.local`)"
      - "traefik.http.routers.api.service=api@internal"
      - "traefik.http.routers.api.entrypoints=web"

networks: 
  lan:
    external: true

The command section sets up Traefik’s web-based dashboard and support for non-HTTPS connections on port 80. It also makes sure other services must explicitly opt in to being exposed through Traefik. This helps prevent me from accidentally exposing something before I’m ready.

The labels section enables Traefik for this container, sets its URL, and assigns it to the web entrypoint (port 80). The api@internal bit is required for Traefik to expose its own API and dashboard as a service.

All the containers are going to use this external lan network, which allows me to separate out each service into its own Docker Compose file. That network needs to be created with the docker network create lan command.
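That only needs to be done once on the Docker host, and you can confirm it exists afterwards:

# Create the shared network that all the Compose files attach to
docker network create lan

# Check it shows up alongside the default Docker networks
docker network ls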

For fun, let’s also set up LibreSpeed! Its docker-compose.yaml:

---
version: "2.1"
services:
  librespeed:
    image: ghcr.io/linuxserver/librespeed
    container_name: librespeed
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - PASSWORD=PASSWORD
      - CUSTOM_RESULTS=false
      - DB_TYPE=sqlite
    volumes:
      - /services/librespeed/config:/config
    restart: unless-stopped
    networks: 
      - lan
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.librespeed.rule=Host(`librespeed.local`)"
      - "traefik.http.routers.librespeed.entrypoints=web"

networks: 
  lan:
    external: true

Like other LinuxServer.io containers, it allows you to map the internal container user to a user on the host machine (instead of running as root). More information here. Otherwise we just configure a password, disable sharing custom results, and use the default built-in SQLite database.
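If you’re not sure what PUID and PGID values to use, running id for your user on the host shows them; the output below is just an example (1000 is a common default for the first regular user):

$ id $USER
uid=1000(matthew) gid=1000(matthew) groups=1000(matthew),999(docker)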

The Traefik config in the labels is very similar to the one in the Traefik file, just without the internal API router.

Finally, here is the Pi-hole docker-compose.yaml:

version: "3"

services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    dns:
      - 127.0.0.1
      - 9.9.9.9
    environment:
      TZ: 'America/Los_Angeles'
      WEBPASSWORD: 'PASSWORD'
      PIHOLE_DNS_: 9.9.9.9;149.112.112.112;1.1.1.1
      DNSSEC: 'true'
      ServerIP: 192.168.0.204 # Actual server IP. Matches DHCP conf file IP
      VIRTUAL_HOST: pihole.local # Same as the Traefik Host rule
      DNSMASQ_LISTENING: all
      DHCP_ACTIVE: 'true'
      DHCP_START: 192.168.0.100
      DHCP_END: 192.168.0.199
      DHCP_ROUTER: 192.168.0.1
      DHCP_LEASETIME: 6
      WEBTHEME: default-dark
      PIHOLE_DOMAIN: lan
    volumes:
      - '/services/pihole/pihole:/etc/pihole/'
      - '/services/pihole/dnsmasq.d:/etc/dnsmasq.d/'
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
    networks: 
      lan: {}
      backend:
        ipv4_address: '172.31.0.100'
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.pihole.rule=Host(`pihole.local`)"
      - "traefik.http.services.pihole.loadbalancer.server.port=80"
      - "traefik.http.routers.pihole.entrypoints=web"
    depends_on:
      - dhcphelper

  dhcphelper:
    image: noamokman/dhcp-helper
    container_name: dhcp-helper
    restart: unless-stopped
    network_mode: "host"
    command: -s 172.31.0.100
    cap_add:
      - NET_ADMIN

networks: 
  backend:
    ipam:
      config:
        - subnet: 172.31.0.0/16
  lan:
    external: true

A little more is going on here. We are actually running two containers with this one: Pi-hole itself, and another called dhcp-helper, which, as it sounds, relays DHCP requests from your network to the Pi-hole. They use the backend network to work together. The Pi-hole is given the fixed IP 172.31.0.100 on that internal Docker network, and the helper forwards DHCP traffic to it. Because this address isn’t reachable from your network by default, you’ll need to add another file at /etc/dnsmasq.d/07-dhcp-options.conf, which looks like this:

dhcp-option=option:dns-server,192.168.0.204 # Host IP address

Going through the Pi-hole config: the container does need to expose the DNS port (53) over both TCP and UDP. We also tell Docker to use custom DNS for the container itself, in this case the local address plus 9.9.9.9 as a backup.

On to the environment variables: we have a password for the web interface and a list of DNS providers the Pi-hole will connect to. You can consult the PrivacyTools page on suggested DNS providers. I have DNSSEC enabled, which adds a little assurance that the data you get back from a DNS call is authentic. From there I specify the host’s IP address, the local domain I’d like to use, and that Pi-hole should listen for requests on all interfaces, including the Docker bridge network. DHCP is then set to active, the start and end addresses specify what range can be handed out to local devices, and the lease time sets how long a device’s DHCP lease is valid. Finally I set dark mode, because of course, and that I want Pi-hole to hand out the lan domain (which is not the same thing as the Docker network).

I use cap_add to give the container additional network privileges. In the labels I also have to tell Traefik which port the web interface listens on (the loadbalancer.server.port label), but everything else should look like the other services.

Thanks to DerFetzer’s post on this for a lot of help with this part.

I also use Pi-hole to manage the DNS of my other local services. To do so, first I specify a mapping of domain names to IP addresses at /etc/pihole/custom.list:

192.168.0.204 librespeed.local
192.168.0.204 traefik.local
192.168.0.204 pihole.local

I also add a wildcard record so local traffic under the lan domain routes properly, in the same /etc/dnsmasq.d/07-dhcp-options.conf file:

address=/.lan/192.168.0.204 

That’s it! You should now have a fully working system that is all lovingly containerized. You can find all my config in my Git repository.

Next Steps

There are a lot of possible things to do next. I’m going to list them out in order of importance for me (though that is very likely to change). There should be a post at some point about what I do next too!

  • Configure everything with Ansible and use it for secret management. This would make deployments much more programmatic and repeatable than me manually triggering things with CLI commands.
  • Use systemd to manage my container life cycle (aka starting and stopping). This seems to be a popular way to keep containers alive and happy.
  • Set up an OPNsense router. It’s open-source router software with some powerful options for VPNs, VLANs, firewalls, and of course, basics like DHCP.
  • Investigate other encrypted DNS protocols, such as DNS-over-HTTPS or DNS-over-TLS. I need to do more research before I decide on how to approach either for enhanced security.
  • Set up Home Assistant for managing my small (but potentially growing) set of home automation devices. Also might be fun to play around with building my own devices.
  • Build a NAS to store and back up all my files. The server I’m currently using for my services will use this to store its configuration and data as well. I like the idea of separating out a more reliable storage box from a more experimental service box. Though it does add a little complexity.
  • Store and version the Dockerfiles locally. It’s more management I’d have to do, but I can then guarantee what steps are being taken to create the containers I’m using instead of relying on someone else. Granted, most of them will probably be copies of other examples to begin with. The main thing I need to look into is how to best handle new versions.
  • Look into doing local SSL for my internal services. This would allow me to use HTTPS on all my services, which would make me feel warm and fuzzy.

Conclusion

It took quite a bit of time to get up-to-speed enough with Pi-hole, Traefik, and the networking concepts I mention here, but I’m very glad I invested the time. If you are interested in a similar approach, I encourage you to use my config as a reference as you play around yourself! Feel free to contact me if you have issues with this or were able to use the post to get up and running!

Till next time,
- Matthew Booe
