This document describes the integration procedure to provision a ThreatSTOP DNS Firewall using our Docker container. If you are unfamiliar with Docker containers, they are a simple way of packaging an application and its runtime environment into a standardized deployment mechanism. Our Docker container can be used standalone via docker run, or in tandem with docker-compose, Docker Swarm, or a Kubernetes cluster. This means you can run our DNS Firewall on anything that supports x64-based Docker containers. While this document will get you up and running, most customers will use our sample deployment files as a starting point to customize their containers and environments.

The basic procedure is:

  • Open a ThreatSTOP account if you have not already done so.
  • Using the Admin Portal, configure your DNS Firewall's settings and download the YAML configuration file(s) containing pre-configured examples for your use.
  • Install Docker if you don't have it installed and running already.
  • Run a docker run or docker-compose command to launch the Docker container.


OS Compatibility

Tested Host Operating Systems

Operating System | Notes
---------------- | -----
Ubuntu 18.04     |
CentOS 7         | Use named volumes if SELinux is enforcing
RHEL 7           | Use named volumes if SELinux is enforcing
MacOS 10.14      | Host networking mode not supported
Windows 10       | Host networking mode not supported

We don’t expect you to encounter issues on any other operating system running Docker containers on an x64 platform. If you do encounter issues, please contact support.

Container Host Orchestration Engines

Provider | Notes
-------- | -----
Docker Server |
docker-compose |
Docker Swarm |
Azure Kubernetes Service (AKS) | **
Amazon Elastic Container Service (ECS) | ** See † and the Deploying Docker Container doc
Amazon Elastic Container Service for Kubernetes (EKS) | **
Google Kubernetes Engine (GKE) | **
Kubernetes (including minikube) |

** untested but should work.
† - port 53 UDP reserved by Amazon ECS.

ThreatSTOP DNS Firewall & logging Docker image versions

Image | Base Operating System | Tag version
----- | --------------------- | -----------
DNS Firewall (threatstop/dnsfw) | Alpine Linux Edge | BIND 9.14.x
Logging (threatstop/ts-logger) | Alpine Linux Edge |

Server Compatibility

Resource | Minimum | Recommended
-------- | ------- | -----------
CPU | dual-core x64 | quad-core x64
Disk | 10 GB HDD | 40 GB+ SSD

Software Compatibility

Service/Application | Description
------------------- | -----------
Time synchronization (NTP) | Time synchronized regularly for authentication mechanisms and logging
Container host server | Docker, Kubernetes, or anything that implements Docker containers
  • If you intend to use this DNS Firewall to protect clients on the same network as the server, it must be a Linux-based server for client addresses to be logged properly. See host network mode for more details.
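On a systemd-based host, one quick way to confirm the clock is NTP-synchronized (command availability varies by distribution, so treat this as a sketch rather than a universal check):

```shell
# prints the synchronization state reported by systemd
# (works with systemd-timesyncd, chrony, or ntpd as the time source)
timedatectl status | grep -i 'synchronized'
```

If the output reports the clock is not synchronized, fix time sync before provisioning, since authentication to the policy servers depends on it.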

Network Compatibility

Service | Port | Direction | IP Address/CIDR Range | Protocol | Notes
------- | ---- | --------- | --------------------- | -------- | -----
DNS | 53/5353 | Outbound | | UDP & TCP |
DNS notifications (optional) | 53/5353 | Inbound | | UDP | Used for DNS notify for faster policy updates (see doc for more details)
DNS over TLS | 5353 | Outbound | | TCP | Required only for web-automation-enabled devices
HTTPS | 443 | Outbound | | TCP | Log upload service
NTP | 123 | Outbound | (or your preferred NTP server) | UDP | Time sync server

Network Considerations

Networking varies depending on the type of container server you are running. However, there are some general rules to keep in mind while planning the permanent networking settings for the DNS Firewall container.

  1. We highly recommend running in host networking mode if the container is intended to protect clients on the same network as the container host server. If you don’t enable host networking mode, your client log hits will be masked behind the Docker network NAT address.
  2. We recommend running multiple instances for redundancy. This also allows you to upgrade one instance while queries continue to flow to the other DNS Firewall, keeping your network protected.
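A host-networking launch drops the --publish flags and binds port 53 directly on the host. The sketch below reuses the placeholder values that appear elsewhere in this document (the TSIG key, key name, device ID, and policy are examples, not real values):

```shell
# host networking: the container shares the host's network stack, so
# client IPs appear unmodified in the DNS hit logs (Linux hosts only)
docker run --name dnsfw -d --restart=always \
  --network host \
  --volume=logs:/var/log/named --volume=data:/data \
  --env='TSIG_KEY=SecretKeyGoesHere==' --env='TSIG_KEY_NAME=threatstop-keyname' \
  --env='DEVICE_ID=tdid_abcd1234' \
  --env='DNS_POLICY=Basic-DNSFW.rpz.threatstop.local' \
  threatstop/dnsfw
```

Because the container owns host port 53 directly, make sure nothing else (e.g. systemd-resolved) is already bound to it; see the troubleshooting section on port 53 binding errors.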

Host Network Topology

The following diagram shows the network topology when running in host networking mode. Both Kubernetes and Docker implement this feature. The main reason we recommend this network mode over the default bridge network mode is that we expect clients will be on the same network as the Docker server. For example, if a compromised machine on the network were making requests while we ran in the default bridge network mode, we would not see the requesting client’s IP address; we would instead see the NAT IP address of the Docker server.

Internal Network Topology

There are valid reasons not to run host network mode. For instance, if you are protecting a fleet of containers all located inside the container network, you actually want to run without host mode so the DNS Firewall is on the same network as the protected clients.

Load Balancers

When creating a service definition or load balancer for Kubernetes, keep in mind they all do NAT load balancing, so you will likely lose the requesting client’s IP address in the process.


Device Settings

You will need the following settings for the Docker container. They are listed here for reference and are also provided in the download links after setting up the device.

ThreatSTOP Portal Device Settings we provide

Setting | Description
------- | -----------
Master DNS Server IP Address | Zone masters; retrieved from device settings
Device ID | Retrieved from device settings
Policy Zone Name | Retrieved from device settings
TSIG Key Name | Retrieved from device settings
TSIG Key Secret | Retrieved from device settings

ThreatSTOP Portal Device Settings you provide

Setting | Description
------- | -----------
Nickname | Name of the device, used simply to identify it in the portal and reporting
Policy | DNS RPZ policy you choose
IP Type | Static or Dynamic public IP address
IP Address | Public IP address, or hostname if IP Type is set to Dynamic
Bind Mode | Bind mode of operation (1 = recursion only, 2 = forwarder only)
Bind Forwarders | (If Bind Mode is set to forwarder only) Space-separated DNS server IP addresses used to forward upstream queries
Trusted ACL | Access control list restricting who can query the server: a space-separated list of special keywords, IPs, or CIDR addresses. The special keywords (all, localhost, localnets) are handled by Bind.

ThreatSTOP image builds are available on Docker Hub, which is the recommended method of installation.

$ docker pull threatstop/dnsfw:latest


The quickstart is intended for customers who are already familiar with the integration. If you have not done this before, please read the document in its entirety to ensure you don’t miss any important details.

docker run --name dnsfw -d --restart=always \
  --publish=5353:53/tcp --publish 5353:53/udp --publish=10000:10000/tcp \
  --volume=logs:/var/log/named --volume=data:/data \
  --env='TSIG_KEY=SecretKeyGoesHere==' --env='TSIG_KEY_NAME=threatstop-keyname' \
  --env='MASTER_DNS_IP=' --env='DEVICE_ID=tdid_abcd1234' \
  --env='DNS_POLICY=Basic-DNSFW.rpz.threatstop.local' --env='WEBMIN_ENABLED=true' \
  threatstop/dnsfw && docker run --name logupload -d --restart=always --volume=logs:/logs \
  --env='CRON_SCHEDULE="*/5 * * * *"' --env='LOGROTATE_SIZE=5K' threatstop/logupload

Alternatively, you can use the provided docker-compose.yml file to start the container using Docker Compose

When the container is started, the Webmin service is also started (if the environment variable WEBMIN_ENABLED=true is provided) and is accessible from a web browser at https://localhost:10000. Log in to Webmin with the username root and password threatstop. Specify --env ROOT_PASSWORD=secretpassword on the docker run command to set a password of your choosing for testing. For permanent installs you should use mounted files to load sensitive settings; see secrets handling for more details.

The launch of Webmin can be disabled by omitting --env WEBMIN_ENABLED=true from the docker run command. Note that the ROOT_PASSWORD parameter has no effect when Webmin is disabled.


The following steps will walk you through adding the ThreatSTOP DNS Firewall container.

Step 1. ThreatSTOP Portal setup

  • If you want to use a custom DNS Firewall policy, please read DNS Firewall Policies
  • Create a new Device Entry: Click on Devices and then on Add Device.
    • The Manufacturer is: ISC
    • The Model is: BIND 9 (Docker)
    • Docker DNS Firewall Container
  • Select the DNS Firewall policy - either a pre-defined policy or a custom policy

Step 2. Instantiate Docker Image

To create the DNS Firewall container now that you have set up the ThreatSTOP-side configuration:

  • Download the Docker or Kubernetes configuration yaml files to jump-start your configuration.
  • Ensure the secrets exist and have the correct values. An example file hierarchy is shown below.
    ├── TSIG_KEY
    └── docker-compose.yml
  • Run docker-compose up -d or the equivalent command to kick off the container creation process.
  • Ensure there were no failures by checking the output. After the container provisioning step, you should be able to run tsadmin health from the container to verify its health, e.g. docker exec dnsfw tsadmin health.

Step 3. Propagate DNS to clients

You will want to manually test that the DNS Firewall works as intended before deploying settings company-wide, in case you need to adjust the TRUSTED_ACL. Next, update your DHCP server or Active Directory to hand out the IP address of the DNS Firewall(s), along with backups in the event one becomes unavailable.

Step 4. Test installation

Now that we have a working DNS Firewall we want to test the installation to verify a few things.

  • Blocking works from the machines we expect to be protected
  • Hits to the test block site are logged, including the correct IP address of the requesting client
  • Log upload is working

Verify we are blocking

Log into a machine that should be protected. You can omit the @<IP Address> if you’ve set up DNS server propagation correctly, but it doesn’t hurt to test both ways to verify the new DNS settings propagated correctly. We are specifically looking for the NXDOMAIN response when attempting to resolve our testing domain (added to each policy).

# Example of testing a linux-type client after SSH'ing into the client
user@client>$ dig @[DNSFW IP or Hostname]
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 5611
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 2

Example of testing a Windows-type client after remotely connecting to PowerShell on the client

PS C:\Users> nslookup [Bind Server IP or Hostname]
Server:     [Bind Server Hostname]
Address:    [Bind Server IP]#53

** server can't find NXDOMAIN

Verify we are logging blocks

On the container server, launch an interactive shell into the dnsfw device. E.g. docker exec -it dnsfw bash

cd /var/log/named
# Your DNS Firewall should be logging to a specific <device id>.log file located in /var/log/named
cat tdid_*.log
# now verify you have blocks along with correct client IP Addresses
08-May-2020 20:24:57.127 rpz: info: client @0x559714b76700 ( rpz QNAME NXDOMAIN rewrite via
08-May-2020 20:34:22.138 rpz: info: client @0x55971506c540 ( rpz QNAME NXDOMAIN rewrite via

Verify we are uploading logs

To verify the installation is uploading logs correctly, you’ll have to exceed the log size threshold set in the logupload container. You can check your docker-compose YAML or equivalent for an environment setting like LOGROTATE_SIZE: 5K.

We can exceed the size manually by running a command like the one below, or simply wait until the log naturally exceeds the size.

# the following can be done via a protected client or directly in the container via an interactive shell
# If in the container you can query localhost @
while true; do dig @; done & sleep 5; kill $!

Now we wait until the cron interval has run. The default */5 * * * * runs every 5 minutes. After the next scheduled cron run, you should see that the log file has been rotated (moved to tdid_xxxxx.log.1) and a log of the transaction created, as shown below.

-rw-r--r--    1 root     root       14.7K May 12 18:04 logupload_tdid_xxxxxxxx.log
-rw-r--r--    1 root     root        1.5K May 12 18:04 logupload.log

At this point, running tsadmin health should show all [PASS] marks. Congratulations, you’ve successfully completed the installation.

Customizing your container

We offer a variety of ways to customize your container to suit your needs. If you need to add custom local Bind configuration, you can create configuration files and use NAMED_OPTIONS or NAMED_OPTIONS_FILE to mount a file into the container that is read at build time.

For example, let’s say you have some local Bind configuration in a file options.conf. To include it in the container, we simply mount the file and set the environment variable NAMED_OPTIONS_FILE to point to it. Below is an excerpt from a docker-compose file.

    environment:
      TSIG_KEY_FILE: /run/secrets/tsig_key
      NAMED_OPTIONS_FILE: /mnt/options.conf
    volumes:
      - /tmp/data:/data
      - /tmp/logs:/var/log/named
      # Local configuration can be done inside of these include files
      - ./options.conf:/mnt/options.conf

As you can see in this sample, we mount a file from the host machine’s filesystem located at ./options.conf onto the container’s filesystem at /mnt/options.conf. The contents of that file are read during the creation of the container and included in the configuration.

Advanced customization

If you need to customize Bind configuration files directly and want to avoid having the container’s startup processes overwrite them, simply add the environment variable CUSTOM_CONFIGS: "true".

Editing configuration files

  • If you’ve enabled Webmin, you can log in and edit the configuration using its web interface.
  • You can mount the /data volume and edit the files directly outside of the container. The Bind configuration files are located in /data/bind/etc.
  • You can include local configuration files using the NAMED_OPTIONS, NAMED_RPZ, and NAMED_LOGGING environment variables. Read the section above for more details.

  • Note: you’ll have to restart the named process for it to re-read any configuration changes.

Bind extra arguments

You can customize the launch command of the BIND server by specifying arguments to named on the docker run command. For example, the following command prints the help menu of the named command:

# everything to the right of the image name gets sent to named
docker run --name dnsfw -it --rm \
  --publish 53:53/tcp --publish 53:53/udp --publish 10000:10000/tcp \
  --volume /srv/docker/bind:/data \
  threatstop/dnsfw:latest -h

Environment Variables

We use environment variables to drive different functions in the creation of the container. Environment variables can be set either on the command line or in a docker-compose configuration file. On the CLI, specify them like docker run --env KEY=value or -e="KEY=value". Important note: if using docker run, all env parameters must precede the image name threatstop/dnsfw; parameters after it will be sent to the executed named process as positional parameters.

For docker-compose.yml files as shown below:

version: "3.1"

services:
  dnsfw:
    network_mode: host
    container_name: dnsfw
    restart: always
    image: threatstop/dnsfw
    environment:
      KEY: value
      DNS_POLICY: Basic-DNSFW.rpz.threatstop.local

Below is a list of available environment variables and their functions.

Env Variable | Value Examples | Description | Notes | Default
------------ | -------------- | ----------- | ----- | -------
ROOT_PASSWORD | <password> | Password to set for both the root user and the Webmin ‘root’ user. | Valid characters: lowercase letters, digits 0-9, punctuation marks. Not used unless WEBMIN_ENABLED=true. |
NAMED_LOGS | Positive integer 1-100 | Number of logs to keep if not rotated by logrotate or the logupload container. | If the logupload container is used, this mainly applies to the named.log files. | 20
NAMED_LOG_SIZE | 1K, 5M | Maximum log file size threshold before logs are rotated. | Positive integer plus disk size unit (K, M), between 1K and 20M. | 5M
BIND_MODE | 1 = recursion, 2 = forwarder | Sets Bind to recursor mode (the DNS server recursively looks up the domain authority) or forwarder mode (lookups are forwarded to another DNS server). | Forwarder mode requires FORWARDER_IPS to be set. The value should be the integer alone. | 1
FORWARDER_IPS | ;;; | DNS server(s) to forward non-authoritative requests to. | Only used when BIND_MODE=2; the format is <IP/CIDR> followed by a semi-colon. |
MASTER_DNS_IP | | IP address of the master ThreatSTOP policy server, supplied on the portal device information page. | The value should be an IP address. |
MASTER_DNS_PORT | 53, 5353 | Alternate port for upstream RPZ DNS transfers. | Integer port number; valid choices are 53 and 5353. | 53
TRUSTED_ACL | localhost; localnets;;; | Addresses allowed to query this DNS server. | Semi-colon separated special Bind keywords (all, localhost, localnets), IP address(es), or CIDR addresses. | localhost; localnets;
OVERWRITE_CONFIGS | true, false | Disregard any customizations made to the Bind configuration and overwrite them with newly generated configuration files. | | false
CUSTOM_CONFIGS | true, false | Skip generating any Bind configuration; implies a valid configuration from a previous container already exists. | Upgrading with this set is not recommended, as it bypasses the upgrade logic. | true
NAMED_OPTIONS | <valid Bind options block config> | Optional Bind configuration placed near the end of named.conf.options. | |
NAMED_RPZ | <valid Bind configuration> | Optional Bind configuration placed near the end of named.conf.local. | |
NAMED_LOGGING | <valid Bind logging block config> | Optional Bind configuration placed in the logging section of named.conf.local. | |

You can also read in each variable from a file; the way we handle secrets below is an example.

version: "3.1"

services:
  dnsfw:
    container_name: dnsfw
    image: threatstop/dnsfw-dev
    environment:
      TSIG_KEY_FILE: /run/secrets/tsig_key
      TSIG_KEY_NAME: threatstop-threa023
      TRUSTED_ACL: 'localhost; localnets;;;'
    ports:
      - 5353:53/udp
      - 5353:53/tcp
    secrets:
      - tsig_key
      - root_password
    volumes:
      - /tmp/data:/data
      - /tmp/logs:/var/log/named

secrets:
  tsig_key:
    file: ./TSIG_KEY
  root_password:
    file: ./ROOT_PASSWORD

This example shows setting TSIG_KEY from TSIG_KEY_FILE via a mounted file from the local host filesystem.
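Under the hood, the *_FILE convention is typically implemented by an entrypoint snippet along the lines below. This is a sketch of the general pattern under assumptions about the entrypoint, not an excerpt from the image:

```shell
# If TSIG_KEY_FILE points at a readable file, load its contents into
# TSIG_KEY so the secret never has to appear in `docker inspect` output.
load_secret_from_file() {
  if [ -n "${TSIG_KEY_FILE:-}" ] && [ -f "$TSIG_KEY_FILE" ]; then
    TSIG_KEY="$(cat "$TSIG_KEY_FILE")"
  fi
}

# demo with a throwaway file (placeholder secret, same as the quickstart)
printf 'SecretKeyGoesHere==' > /tmp/tsig_key_demo
TSIG_KEY_FILE=/tmp/tsig_key_demo
load_secret_from_file
echo "$TSIG_KEY"
```

The same pattern generalizes to ROOT_PASSWORD_FILE or any other secret you prefer to mount rather than pass inline.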


For BIND to preserve its state across container shutdown and startup, you should mount a volume at /data.

The Quickstart command already mounts a volume for persistence.

SELinux users should update the security context of the host mountpoint so that it plays nicely with Docker:

mkdir -p /srv/docker/bind
chcon -Rt svirt_sandbox_file_t /srv/docker/bind

Container Volumes

Container volumes vary depending on the container host software. We will briefly cover Docker-specific volumes below, most of which should apply to other systems.

Docker volumes are defined in the docker-compose.yaml file or via --volume arguments. There are at least two types you should be aware of: named volumes and mounted volumes.

Named Volumes

Named volumes are created if they don’t exist. For example --volume logs:/var/log/named will create a Docker Named Volume called logs and mount it in the container at /var/log/named.
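You can also pre-create the named volumes ahead of time and inspect where Docker stores them on the host:

```shell
# pre-create the named volumes used by the quickstart command
docker volume create logs
docker volume create data
# show where Docker keeps the volume's data on the host filesystem
docker volume inspect logs --format '{{ .Mountpoint }}'
```

Pre-creating volumes is optional (docker run creates them on demand), but inspecting the mountpoint is handy when you need to read logs directly from the host.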

Mounted Volumes

Conversely, if you want to mount a host directory and map it to a directory in the container, specify the absolute path on the left side of the :. For example, --volume /tmp/logs:/var/log/named will create /tmp/logs if it doesn’t exist and mount it in the container at /var/log/named. Keep in mind that since this is the same filesystem, there may be enforcement issues if you are running SELinux. You will have to explicitly create the directory and grant privileges accordingly to get this type of volume working correctly while SELinux is enforcing.


Safely Handling Secrets

Handling secrets like TSIG_KEY and ROOT_PASSWORD varies depending on the container hosting flavor you choose (Docker Swarm, Kubernetes, etc.). We highly recommend a proper secrets manager for your container secrets solution.

For compatibility reasons, we set the secrets as file mounts. While it is possible to pass these in as environment variables, we don’t recommend it beyond testing.

To verify environment variables don’t contain sensitive data, run docker inspect:

docker inspect dnsfw -f "{{json .Config.Env}}"


To upgrade to newer releases:

# 1. Download the updated Docker image:
docker pull threatstop/dnsfw:latest
# 2. Stop the currently running image:
docker stop dnsfw
# 3. Remove the stopped container
docker rm -v dnsfw
# 4. Start the updated image
docker run --name dnsfw -d \
    [OPTIONS] \
    threatstop/dnsfw:latest

Health Status

We’ve added a utility to quickly check the health status of your installed DNS Firewall container. This utility checks:

  • If it can reach our internal servers
  • If it successfully pulled down the policy
  • If it is currently blocking
  • If it is logging blocks
  • If logupload is working

For checking the health of a running container run:

docker exec dnsfw tsadmin health

Shell Access

For debugging and maintenance purposes you may want to access the container’s shell. If you are using Docker version 1.3.0 or higher, you can access a running container’s shell by starting bash using docker exec:

docker exec -it dnsfw bash


Binding port 53 error

If you are having trouble running the DNS Firewall due to an error binding to port 53 here are a few things to check:

  • Did you run the command as a superuser? Most operating systems consider ports below 1024 “privileged”, which requires additional access. Try running in non-host mode and assigning another high port, e.g. 5353, to see if the issue persists.
  • Recent Debian-based OSes such as Ubuntu (16.10+) ship with systemd-resolved.service, which provides local applications with an interface to DNS. This conflicts with ThreatSTOP’s DNS Firewall. We recommend disabling it completely to avoid the potential for applications to bypass the DNS Firewall’s protection.
  • Verify the port is free for use: netstat -an|grep ':53 '
#disable systemd-resolved
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved

Depending on the system and version, you may also want to ensure the line dns=default is in the NetworkManager configuration.

if [ -f /etc/NetworkManager/NetworkManager.conf ] && ! grep -q 'dns=' /etc/NetworkManager/NetworkManager.conf; then
  echo "dns=default" | sudo tee -a /etc/NetworkManager/NetworkManager.conf
  echo "removing resolv.conf symlink" && sudo rm /etc/resolv.conf
  echo "restarting networking stack" && sudo systemctl restart NetworkManager
else
  echo "  ** Not compatible with this system **"
fi


The built-in script tsadmin health gives you a wealth of information in one easy-to-digest command. However, some problems prevent you from getting to the stage where you can run it. You’ll find a few tips on troubleshooting those types of issues below.

Trouble launching a container

The following will help you troubleshoot failures during container creation / bootup.

  • Check the container server has network access to Docker hub.
  • Check that your system time is correct and in sync with a public NTP server.
  • Check the named.log Bind logs, located wherever you mounted your data volume.
  • Consider switching the named foreground flag from the default -f to -g, which outputs to STDERR/STDOUT.
  • Check container server logs, e.g. kubectl logs tsdnsfw-5c544c9755-dhgn2 -c dnsfw or docker logs dnsfw, for helpful errors. This type of logging requires the FOREGROUND_FLAG: -g env variable to function. Important: set the foreground flag back after troubleshooting is complete to avoid breaking DNS hit logging.
  • Check container server logs for possible local issues
  • If you are using Kubernetes, try launching in Docker using the same settings to try to reproduce the issue

Configuration merge conflict errors

If you receive output telling you the container found merge conflict issues, it means the existing configuration differs from the configuration the container generated. Typically this happens when a user customizes the Bind configuration and sets the OVERWRITE_CONFIGS: "false" environment variable. See the CUSTOM_CONFIGS and OVERWRITE_CONFIGS environment variable details above for more information.

This is easy to remedy. First, as the help message shows, note your customizations and decide whether you want to keep them. If you want to keep the current configuration, set the environment variable CUSTOM_CONFIGS: "true" to bypass generating new configuration files. If you don’t care about the changes, add the environment variable OVERWRITE_CONFIGS: "true" to overwrite all configuration without prompting. After setting one of the environment variables, recreate the container.
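With plain docker run, the destructive path looks like the sketch below ([OPTIONS] stands in for your usual flags, as in the upgrade steps; this discards any local Bind configuration edits):

```shell
# remove the old container, then recreate it and let the startup
# process regenerate all Bind configuration from scratch
docker rm -v dnsfw
docker run --name dnsfw -d --env OVERWRITE_CONFIGS=true \
    [OPTIONS] \
    threatstop/dnsfw:latest
```
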

SELinux file permission issues

If you’ve verified that SELinux enforcement is causing issues with the container, you can update the security context of the host mountpoint so that it plays nicely:

mkdir -p /srv/docker/bind
chcon -Rt svirt_sandbox_file_t /srv/docker/bind

Testing blocking domains

dig @[DNSFW IP or Hostname]
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 5611
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 2

Below is an example using nslookup, found natively on Windows machines.

C:\Users> nslookup [Bind Server IP or Hostname]
Server:     [Bind Server Hostname]
Address:    [Bind Server IP]#53

** server can't find NXDOMAIN

Looking up a non-restricted website should return its current IP. You can repeat this on any client using this device as a DNS server.

  • Check that a log entry was added to /var/log/named/[device id].log

  • You can test connectivity to ThreatSTOP by running:

Testing network access to ThreatSTOP

$ curl
Your IP address: <ip address>
Address is in the list of authorized hosts

Testing Bind configuration syntax is valid

You can verify the Bind configuration has no major syntax errors by running:
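BIND ships a standard syntax checker, named-checkconf. Assuming the /data layout described earlier (the Bind configuration lives under /data/bind/etc; adjust the path if yours differs), a check from the host looks like:

```shell
# exit status 0 means the configuration parsed cleanly;
# errors are printed with the offending file and line number
docker exec dnsfw named-checkconf /data/bind/etc/named.conf
```
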


Additional Information

This page was last updated at 2020-07-10 19:23.