Serving DNS using the CoreDNS file plugin

Savithru Lokanath
5 min read · Feb 7, 2023


CoreDNS is a lightweight, open-source DNS server written in Golang & is popular due to its deployment in the core Kubernetes project. In this blog, we’ll see how we can use CoreDNS in a non-Kubernetes environment as a standalone DNS server.

CoreDNS is a CNCF project

First things first, there are various ways of installing CoreDNS. For simplicity, we’ll use Docker as the runtime. To install Docker, visit this page. Once installed, verify that you can connect to the Docker daemon,

$ docker ps

Now, let’s start building the config files needed to run CoreDNS. Before we do that, let’s understand the plugin concept in CoreDNS.

A CoreDNS plugin is a piece of code that gets invoked when a client makes a request to the DNS server. CoreDNS chains these plugins together: when a request comes in from a client, the first plugin is invoked, & it either responds to the request or passes it on to the next plugin, until the request reaches the end of the chain. If none of the plugins handle the request, a default response is returned to the client.

The file plugin is one such CoreDNS plugin; it serves zone data from an RFC 1035-style master file located on disk.
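
As a quick reference, the plugin directive inside a server block looks like the sketch below (example.org & the file path are just placeholders here); the optional reload interval tells the plugin how often to re-check the zone file for changes,

example.org.:53 {
    file /etc/coredns/example.org.db {
        reload 30s
    }
}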

Now, let’s start building CoreDNS’s config file, a.k.a. the corefile

$ cat corefile
.:53 {
    forward . 8.8.8.8 1.1.1.1
    log
    errors
    health
}

labs.hello.db.:53 {
    log
    errors
    health
    file /etc/coredns/master.db
}

in-addr.arpa.:53 {
    log
    errors
    health
    file /etc/coredns/master.db
}

In the above file,

  • L1 indicates the start of the server block & the zones the server should be authoritative for. The above server block specifies a server that is responsible for all zones below the root zone (.); basically, this server should handle every possible query. Server blocks can optionally specify a port number to listen on; in this case we’ve specified the default DNS port, 53
  • L2 indicates a recursor to which requests can be forwarded in case the server can’t find an authoritative answer. Here, we are simply forwarding requests to public DNS servers on the internet
  • L3–4 add the log plugin to enable query logging & the errors plugin to enable error logging on the server
  • L5 enables the health endpoint on the server
  • The next section is pretty similar to the default section, except that it’s a zone for labs.hello.db forward lookups & contains a line that uses the file plugin to point to the master file holding the DNS data (records)
  • The last section is a second zone for reverse lookups & is similar to the forward lookup zone defined above it

Next, let’s populate the master file with the DNS data. In practice, this file is usually generated by an automation tool such as Ansible/Jinja2 or via a Python API; a quick sketch of the idea follows below.
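
For illustration, here’s a minimal shell sketch of that idea (hosts.txt & its format are made up for this example; a real setup would typically use a templating tool instead),

$ cat hosts.txt
dns 192.168.1.3
node 192.168.1.10

$ while read name ip; do
>   printf '%s.labs.hello.db. IN A %s\n' "$name" "$ip"
> done < hosts.txt >> master.db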

We’ll start with the creation of the start of authority (SOA) records. An SOA record indicates that this server is authoritative for the zone in question. There are two SOA records here: one for the forward-lookup zone (A, CNAME records), where we can use a custom domain name, & a second for the reverse-lookup zone (PTR records).

$ cat master.db

labs.hello.db. IN SOA dns.labs.hello.db. admin.labs.hello.db. 1675303881 7200 3600 1209600 3600
in-addr.arpa. IN SOA dns.labs.hello.db. admin.labs.hello.db. 1675303881 7200 3600 1209600 3600

From the file,

  • C1 labs.hello.db. & in-addr.arpa. are the zones for which this DNS server is authoritative
  • C2 is the record class, IN (Internet)
  • C3 is the type of DNS record
  • C4 is the name of the primary DNS server for the zone
  • C5 is the email of the DNS admin. The @ in the address is written as a . (admin.labs.hello.db. corresponds to admin@labs.hello.db)
  • C6 is the serial number used to keep track of DNS changes. Usually it’s the epoch of the create/update timestamp
  • C7 is the refresh interval, i.e. how often secondary servers should check the primary for zone updates
  • C8 is the retry interval. A failed refresh is retried after this interval
  • C9 is the expiry. If the primary can’t be reached for this long, secondaries should stop treating their copy of the zone as authoritative
  • C10 is the minimum TTL, nowadays used as the negative-caching TTL for the zone (the same record is shown with each field labeled right after this list)
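
For readability, RFC 1035 also allows writing the SOA record across multiple lines using parentheses; the forward zone’s record from above would look like this with each numeric field labeled,

labs.hello.db.  IN  SOA  dns.labs.hello.db. admin.labs.hello.db. (
                         1675303881  ; serial
                         7200        ; refresh
                         3600        ; retry
                         1209600     ; expire
                         3600 )      ; minimum / negative-caching TTL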

Now, let’s go ahead & create a couple of different DNS records

An “A” record maps a domain name to an IPv4 address

dns.labs.hello.db.   IN  A  192.168.1.3
node.labs.hello.db.  IN  A  192.168.1.10

NOTE: A device might have multiple interfaces & hence multiple IPv4 addresses. It’s good practice to create reverse DNS records (PTR) with the interface name in mind. This helps utilities like traceroute display the interface name while tracing the path towards a destination

A “PTR” record maps an IPv4 address to a domain name. It’s the opposite of a type “A” record


1.1.168.192.in-addr.arpa.   IN  PTR  eth0.dns.labs.hello.db.
10.1.168.192.in-addr.arpa.  IN  PTR  eth0.node.labs.hello.db.
10.2.168.192.in-addr.arpa.  IN  PTR  eth1.node.labs.hello.db.

A “CNAME” or “canonical name” record is an alias that points to the true domain name, typically one that has an “A” record

mgmt.dns.labs.hello.db.   IN  CNAME  dns.labs.hello.db.
mgmt.node.labs.hello.db.  IN  CNAME  node.labs.hello.db.

A “TXT” record is used to store custom notes about a domain name

dns.labs.hello.db.   IN  TXT  "DNS server in Cloud A"
node.labs.hello.db.  IN  TXT  "Node in Cloud A"

Putting it all together, the final master database file looks like this,

$ cat master.db

labs.hello.db. IN SOA dns.labs.hello.db. admin.labs.hello.db. 1675303881 7200 3600 1209600 3600
in-addr.arpa. IN SOA dns.labs.hello.db. admin.labs.hello.db. 1675303881 7200 3600 1209600 3600
dns.labs.hello.db. IN A 192.168.1.3
node.labs.hello.db. IN A 192.168.1.10
1.1.168.192.in-addr.arpa. IN PTR eth0.dns.labs.hello.db.
10.1.168.192.in-addr.arpa. IN PTR eth0.node.labs.hello.db.
10.2.168.192.in-addr.arpa. IN PTR eth1.node.labs.hello.db.
mgmt.dns.labs.hello.db. IN CNAME dns.labs.hello.db.
mgmt.node.labs.hello.db. IN CNAME node.labs.hello.db.
dns.labs.hello.db. IN TXT "DNS server in Cloud A"
node.labs.hello.db. IN TXT "Node in Cloud A"
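
One optional addition, not strictly required for this lab setup: authoritative zones conventionally also carry an NS record at the apex naming their name server, along these lines,

labs.hello.db. IN NS dns.labs.hello.db.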

The next step is to run the CoreDNS service. As mentioned earlier, we’ll use Docker as the runtime & Docker Compose to declare & run the application. Below is an example of the Docker Compose file,

$ pwd
/home/demo/coredns

$ ls
coredns-docker-compose.yml corefile master.db

$ cat coredns-docker-compose.yml
---
version: "3"
services:
  coredns:
    container_name: coredns
    image: "coredns/coredns:1.10.0"
    restart: always
    volumes:
      - "/home/demo/coredns:/etc/coredns"
    command: "-conf /etc/coredns/corefile"
    ports:
      - "53:53/udp"

The Docker Compose file above declares a service called coredns running version 1.10.0, listening on port 53, with a host path mounted into the container.
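
Note that the Compose file only publishes UDP port 53. DNS also uses TCP port 53 (for example, for answers too large to fit in a UDP packet), so you may optionally want to publish the TCP port as well,

    ports:
      - "53:53/udp"
      - "53:53/tcp"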

Bring up the service & verify that it’s up & running,

$ docker-compose up -d
Creating network "coredns_default" with the default driver
Pulling coredns (coredns/coredns:1.10.0)...
1.10.0: Pulling from coredns/coredns
9731739b2823: Pull complete
4dfb45b72a09: Pull complete
Digest: sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955
Status: Downloaded newer image for coredns/coredns:1.10.0
Creating coredns ... done

$ docker ps | grep core
401a2a1b5b91 coredns/coredns:1.10.0 "/coredns -conf /roo…" 9 seconds ago Up 8 seconds 53/tcp, 0.0.0.0:53->53/udp coredns

$ docker logs coredns
.:53
CoreDNS-1.10.0
linux/arm64, go1.19.1, 596a9f9
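
A quick operational note not covered in the walkthrough: CoreDNS reads the corefile only at startup (unless its reload plugin is enabled), so after editing it, restart the container; changes to master.db can instead be picked up via the file plugin’s reload option mentioned earlier,

$ docker-compose restart coredns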

Validate by making queries to the CoreDNS server running on the host

## A RECORD LOOKUP

$ nslookup node.labs.hello.db 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53

Name: node.labs.hello.db
Address: 192.168.1.10

---------------------------------------------------------------------------
## TXT RECORD LOOKUP

$ nslookup -q=TXT node.labs.hello.db 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53

node.labs.hello.db text = "Node in Cloud A"

---------------------------------------------------------------------------
## CNAME RECORD LOOKUP

$ nslookup mgmt.node.labs.hello.db 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53

mgmt.node.labs.hello.db canonical name = node.labs.hello.db.
Name: node.labs.hello.db
Address: 192.168.1.10

---------------------------------------------------------------------------
## PTR RECORD LOOKUP

$ nslookup 192.168.1.10 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53

10.1.168.192.in-addr.arpa name = eth0.node.labs.hello.db.

$ nslookup 192.168.2.10 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53

10.2.168.192.in-addr.arpa name = eth1.node.labs.hello.db.

---------------------------------------------------------------------------
## EXTERNAL LOOKUP

$ nslookup icann.org 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53

Non-authoritative answer:
Name: icann.org
Address: 192.0.43.7

To summarize, we created the DNS master db file & used the file plugin of CoreDNS to serve the DNS records. We didn’t touch advanced concepts such as caching, zone splits, forwarding, etc. (we’ll save those for another blog), but I hope this post helps folks get started with a lightweight DNS server like CoreDNS.
