May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix



The Domain Name System (DNS) is an Internet-wide naming system for resolving host names to IP addresses and IP addresses to host names. DNS supports name resolution for both local and remote hosts, and uses the concept of domains to allow multiple hosts with identical names to coexist on the Internet.

It originated from the need to address electronic mail. While the invention is attributed to Paul Mockapetris (the original specifications are described in RFC 882, November 1983), the ideas were in the air for a long time: RFC 805 reflected the decision to introduce DNS-type names for mail addressing; RFC 811 by K. Harrenstien, V. White and E. Feinler introduced the idea of the original centralized hostname lookup server (March 1982); and RFC 819 documented the tree-structure ideas of DNS (August 1982).

Updated DNS specifications were published in 1987 (RFC 1034 and RFC 1035). Altogether more than a dozen RFCs have been published that propose various extensions to the core protocol. Historically, DNS can be considered the first directory service implemented on the Internet.

The collection of networked systems that use DNS is referred to as the DNS namespace. The DNS namespace is divided into a hierarchy of domains. A DNS domain is a group of systems. Each domain is usually supported by two or more name servers: a master name server and one or more secondary (slave) DNS servers. On Unix, a server implements DNS by running a special daemon, the in.named daemon in the case of BIND. On the client's side, DNS is implemented through the resolver library, which resolves users' queries. The resolver queries several databases, including a name server, in a specified order. When DNS servers are queried, one of them returns either the requested information or a referral to another DNS server.

DNS's hierarchy has two dimensions. The first is the hierarchy of domain names, read right to left: in a name such as www.softpanorama.org, the rightmost component (org) is the highest level of the hierarchy, softpanorama is a subdomain of the org domain, and www is a subdomain of softpanorama. The rightmost components of DNS names are called top-level domains, or TLDs. Among them com, edu, and org are the most popular, but several dozen exist, including one for each country on the planet (based on ISO two-letter country codes). Together, the components which make up a name such as www.softpanorama.org are called a fully qualified domain name (FQDN).
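The right-to-left hierarchy can be illustrated with a short shell sketch; the domain name here is just an example:

```shell
# Split an FQDN into its labels and print them from the top of the
# hierarchy down (example name only).
fqdn="www.softpanorama.org"
IFS=. read -r -a labels <<< "$fqdn"
for ((i=${#labels[@]}-1; i>=0; i--)); do
    echo "${labels[i]}"
done
```

This prints org, then softpanorama, then www: each label names a subdomain of the label to its right.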

However, the DNS hierarchy has another dimension: how this information is distributed among DNS servers on the Internet. TLDs are maintained on so-called root servers. Root servers contain information about the second-level DNS servers for each and every second-level domain. This hierarchy of DNS servers on the Internet is completely distinct from the domain-name hierarchy.

DNS is a client-server architecture.

The top-level domains are administered by various organizations that maintain so-called root servers. All organizations with root servers report to the governing authority, the Internet Corporation for Assigned Names and Numbers (ICANN). Administration of lower-level domains is delegated to the organizations which bought the domain name from one of the DNS registrars. A typical yearly fee for a domain varies widely, fluctuating between $4 and $35.

The top-level domain that you choose can depend on which one best suits the needs of your organization. Large international companies typically use the organizational domains, while small organizations or individuals often choose to use a country code. But this is not true for all countries. Country domains can be prohibitively expensive and can easily cost 10-100 times more than generic domains like .com, .net, .edu or .org. That's why many web sites from some countries of the former USSR are registered under generic domains, typically .com.

As in any commercial enterprise, the domain business connected with DNS has its share of political issues. First of all, the ability to extend the naming scheme by introducing new top-level domains is a highly charged political power and is rife with different types of abuses. The second consideration is about a fair domain fee: how much should it cost to register a second-level domain?

There is also a phenomenon called domain hijacking, when an organization that for some reason forgot to pay its yearly domain fee (expired domain) discovers that somebody else has already registered the name. Or even a non-expired domain name suddenly migrates out of the organization's ownership because somebody transferred the domain from one registrar to another. In the old days, an organization that did not yet have an Internet presence and tried to get one often discovered that its domain name was already registered by some individual who wanted a neat sum of money for the return of the name. Also, because there are multiple top-level domains, the same name under different TLDs may be owned by different organizations. Important domains can be sold for over a million dollars.

Everything below the secondary domain falls into a zone of authority maintained by the domain owner. It is the responsibility of the zone owner to design and implement the redundancy and robustness you need from DNS. Most TLD authorities require at least two working nameservers for a zone before they will delegate authority over it to the zone owner.

DNS has been designed to be redundant. When one server breaks down, the other servers for that domain will still be capable of answering queries about that zone on their own. All of a subdomain's authoritative servers are listed in the domain above it. When your computer tries to find an address from a name, it has many alternative servers to query at each step in the process. If one server fails to answer within a set timeout, another will be queried. If they all fail, the query fails and returns no result. The redundancy also provides load distribution between the servers that are authoritative for a domain.

If the two required nameservers of a zone were located in the same building, then a power outage, or a single router or network switch failure, could make DNS name service unavailable. If the two domain servers are located in separate cities and use separate Internet providers, then the company is pretty safe from a single failure taking out DNS service. Therefore the secondary name server generally should be geographically far from the primary, and if the company has only one location, it should probably be implemented as a box in some hosting company or as a service.

The redundancy necessitates a zone duplication mechanism, a way to copy the zones from a master server to all the redundant slave servers of that zone (zone transfer). Until recently, these servers were usually called primary and secondary servers, not master and slave.

When the zone file is updated on the master server, the slave server will either act on a notification of the update, or, if the notification is lost, notice that a long time has elapsed since it last heard from the master server. It will then check whether there are any updates available. If the check fails, it will be repeated quite often until the master answers, making the duplication mechanism more robust against network failures. If the slave server has been incapable of contacting the master server for a long time, usually a week, it will give up: it will stop serving queries from the old data it has stored. Serving old data masquerading as correct data can very well be worse than serving no data at all.
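The timers that drive this refresh and expiry behaviour live in the zone's SOA record. A sketch with made-up server names (the numeric values match the zone-file example later on this page):

```
@ IN SOA ns1.example.com. hostmaster.example.com. (
        2018011615 ; Serial  - slaves transfer the zone when this increases
        604800     ; Refresh - how often slaves poll the master for changes
        86400      ; Retry   - poll interval after a failed refresh
        2419200    ; Expire  - slaves stop answering after this long without contact
        604800 )   ; Negative cache TTL
```

The Expire value is the "usually a week" (here four weeks) after which a slave discards its stale copy.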

The root servers are the most important servers on the entire Internet, which is why they are highly redundant. Right now, 13 root nameservers exist on different networks and on different continents. Recent attacks proved that they are pretty resilient, and it is pretty unlikely that DNS will fail due to a root server failure.

There are four different types of DNS servers:

  1. primary server (mandatory)

  2. secondary server (one is mandatory; optionally there can be more than one)

  3. caching server (optional)

  4. forwarding server (optional)

Two main types of DNS name servers are primary and secondary name servers. There are also name servers called "caching-only" name servers. These servers will resolve name queries, but do not own or maintain any DNS database files. Changes to primary domain name servers must be propagated to secondary name servers, as primary domain name servers own the database records. This is accomplished via a "zone transfer", which copies a complete zone file. Here is a fuller explanation of what each type means and how it functions:

  1. Primary server. Each domain must have one and only one primary server. All changes to information about the domain should be made on this server. In their turn primary servers update and synchronize secondary servers of the domain. They can also delegate authority for subdomains.
  2. Secondary server(s). Each domain should also have at least one secondary server (the NIC does not register domains unless there are two working DNS servers). There can be one or more secondary servers per domain. They have the following properties:
    • They obtain a copy of the domain information for all domains they serve from the appropriate primary server or another secondary server for the domain.

    • They are authoritative for all domains they serve.

    • They periodically receive updates from the primary servers of the domain.

    • They provide load sharing with the primary server and other servers of the domain.

    • They provide redundancy in case one or more other servers are temporarily unavailable.

    • They provide more local access to name resolution if placed appropriately.

  3. Caching servers. All DNS servers cache information for remote domains over which they are non-authoritative. Caching-only servers only cache information, they are not authoritative for any domain. They have lower administrative overhead and do not require any zone transfers. They allow DNS client access to local cached naming information without the expense of setting up a DNS primary or secondary server.
  4. Forwarding Servers. Forwarding servers are a variation on a primary or secondary server and act as focal points for all off-site DNS queries. Designating a server as a forwarding server causes all off-site requests to go through that server first. Forwarding servers centralize off-site requests. The server being used as a forwarder builds up a rich cache of information. This way they reduce the number of redundant off-site requests. No special setup on forwarders is required. If forwarders fail to respond to queries, the local server can still contact remote-site DNS servers itself.
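For BIND specifically, pointing a server at a forwarder is a small named.conf fragment; the address below is just an example public resolver, not a recommendation:

```
options {
    // Send all off-site queries to this server first.
    forwarders {; };
    // Fall back to normal iterative resolution if the forwarder fails,
    // matching the "can still contact remote-site DNS servers itself" behavior.
    forward first;
};
```

Using `forward only;` instead would make the server depend entirely on the forwarder.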

Answers returned from DNS servers can be described as authoritative or non-authoritative.

DNS Name Resolution Process

The following sequence of steps is typically used by a DNS client to resolve a name to an address (let's assume that the client wants to access a web server in a remote domain):

  1. The client system consults the /etc/nsswitch.conf file to determine name resolution order. In this example, the presumed order is local file first, DNS server second.

  2. The client system consults the local /etc/inet/hosts file and does not find an entry.

  3. The client system consults the /etc/resolv.conf file to determine the name resolution search list and the address of the local DNS server.

  4. The client system's resolver routines send a recursive DNS query for the address of the remote host to the local DNS server. (A recursive query says "I'll wait for the answer, you do all the work.") At this point, the client will wait until the local server has completed name resolution. The resolver does not maintain any cache. In the case of Solaris, if nscd is running, its cache will be consulted before sending the query to the local DNS server. Let's assume that the name was not cached.

  5. The local DNS server consults the contents of its cached information in case this query has been tried recently. If the answer is in local cache, it is returned to the client as a non-authoritative answer.

  6. The local DNS server contacts the appropriate DNS server for the domain, if known, or a root server, and sends an iterative query. (An iterative query means "Send me the best answer you have, I'll do the rest.") Let's assume that the answer is not cached on the local DNS server. In this case a root server needs to be contacted for the org domain.

  7. The root server returns the best information it has. In this case, the only information you can be guaranteed that the root server will have is the names and addresses of all the org servers. The root server returns these names and addresses along with a time-to-live value specifying how long the local name server can cache this information.

  8. The local DNS server contacts one of the org servers returned from the previous query, and transmits the same iterative query sent to the root server earlier.

  9. The org server contacted returns the best information it has, which is the names and addresses of the domain's DNS servers along with a time-to-live value.

  10. The local DNS server contacts one of the domain's DNS servers and makes the same query.

  11. The domain's DNS servers return the address(es) of the requested host along with a time-to-live value.

  12. The local DNS server returns the requested address to the client system and the WWW browser request to read a page can proceed.
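The client-side part of these steps (1 and 3) amounts to reading two configuration files. This sketch works on temporary copies with assumed contents rather than the live system files:

```shell
# Create sample copies of the client configuration files (assumed contents).
cat > /tmp/nsswitch.conf <<'EOF'
hosts: files dns
EOF
cat > /tmp/resolv.conf <<'EOF'
search example.com
EOF

# Step 1: determine the name resolution order.
order=$(awk '/^hosts:/ { $1=""; print }' /tmp/nsswitch.conf)
echo "resolution order:$order"

# Step 3: find the local DNS server the resolver will query.
server=$(awk '/^nameserver/ { print $2; exit }' /tmp/resolv.conf)
echo "local DNS server: $server"
```

Steps 6 through 11 (the iterative walk from the root down) can be watched live with `dig +trace`, which queries each level of the server hierarchy in turn.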

It is up to the organization how to implement its DNS server. Many organizations use Solaris for this purpose, as one of the more secure flavors of Unix. On Solaris, a DNS server is usually implemented using the open source DNS package called BIND. Sun actually ships Solaris with some version of BIND preinstalled. Generally it does not make sense to preserve the original version of BIND that comes with Solaris, as more recent versions are often available (a precompiled version can be found at the Solaris Freeware site, but it is better to compile BIND yourself using the Studio 11 compiler). See the Solaris DNS Server Installation and Administration page for more info.

Dr. Nikolai Bezroukov


Old News ;-)


[May 08, 2020] Configuring Unbound as a simple forwarding DNS server Enable Sysadmin


In part 1 of this article, I introduced you to Unbound , a great name resolution option for home labs and small network environments. We looked at what Unbound is, and we discussed how to install it. In this section, we'll work on the basic configuration of Unbound.

Basic configuration

First find and uncomment these two entries in unbound.conf :

interface: ::0

Here, the ::0 entry indicates that we'll be accepting DNS queries on all interfaces. If you have more than one interface in your server and need to manage where DNS is available, you would put the address of the interface here.
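A minimal sketch of how that pair of entries might sit inside the server: block (the IPv4 wildcard line is an assumption here):

```
server:
    # Listen on every IPv4 and IPv6 interface; put a specific
    # address here instead to restrict where DNS is offered.
    interface: ::0
```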

Next, we may want to control who is allowed to use our DNS server. We're going to limit access to the local subnets we're using. It's a good basic practice to be specific when we can:

access-control: allow  # (allow queries from the local host)
access-control: allow
access-control: allow

We also want to add an exception for local, unsecured domains that aren't using DNSSEC validation:

domain-insecure: "forest.local"

Now I'm going to add my local authoritative BIND server as a stub-zone:

stub-zone:
        name: "forest"
        stub-first: yes

If you want or need to use your Unbound server as an authoritative server, you can add a set of local-zone entries that look like this:

local-zone:  "forest.local." static

local-data: "jupiter.forest"         IN       A
local-data: "callisto.forest"        IN       A

These can be any type of record you need locally but note again that since these are all in the main configuration file, you might want to configure them as stub zones if you need authoritative records for more than a few hosts (see above).

If you were going to use this Unbound server as an authoritative DNS server, you would also want to make sure you have a root hints file, which is the zone file for the root DNS servers.

Get the file from InterNIC. It is easiest to download it directly to where you want it. My preference is usually to go ahead and put it where the other Unbound-related files are, in /etc/unbound:

wget -O /etc/unbound/root.hints

Then add an entry to your unbound.conf file to let Unbound know where the hints file goes:

# file to read root hints from.
        root-hints: "/etc/unbound/root.hints"

Finally, we want to add at least one entry that tells Unbound where to forward requests to for recursion. Note that we could forward specific domains to specific DNS servers. In this example, I'm just going to forward everything out to a couple of DNS servers on the Internet:

forward-zone:
        name: "."

Now, as a sanity check, we want to run the unbound-checkconf command, which checks the syntax of our configuration file. We then resolve any errors we find.

[root@callisto ~]# unbound-checkconf
/etc/unbound/unbound_server.key: No such file or directory
[1584658345] unbound-checkconf[7553:0] fatal error: server-key-file: "/etc/unbound/unbound_server.key" does not exist

This error indicates that a key file which is generated at startup does not exist yet, so let's start Unbound and see what happens:

[root@callisto ~]# systemctl start unbound

With no fatal errors found, we can go ahead and make it start by default at server startup:

[root@callisto ~]# systemctl enable unbound
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/unbound.service.

And you should be all set. Next, let's apply some of our DNS troubleshooting skills to see if it's working correctly.

First, we need to set our DNS resolver to use the new server:

[root@showme1 ~]# nmcli con mod ext ipv4.dns
[root@showme1 ~]# systemctl restart NetworkManager
[root@showme1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
[root@showme1 ~]#

Let's run dig and see who we can see:

[root@showme1 ~]# dig

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-9.P2.el7 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36486
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 4096
;.                              IN      NS

.                       508693  IN      NS

Excellent! We are getting a response from the new server, and it lists the root name servers. We don't see any errors so far. Now to check on a local host:

jupiter.forest.         5190    IN      A

Great! We are getting the A record from the authoritative server back, and the IP address is correct. What about external domains?

;; ANSWER SECTION:             3600    IN      A

Perfect! If we rerun it, will we get it from the cache?

;; ANSWER SECTION:             3531    IN      A

;; Query time: 0 msec

Note the query time of 0 msec: this indicates that the answer lives on the caching server, so it wasn't necessary to go ask elsewhere. This is the main benefit of a local caching server, as we discussed earlier.

Wrapping up

While we did not discuss some of the more advanced features that are available in Unbound, one thing that deserves mention is DNSSEC. DNSSEC is becoming a standard for DNS servers, as it provides an additional layer of protection for DNS transactions. DNSSEC establishes a trust relationship that helps prevent things like spoofing and injection attacks. It's worth looking into a bit if you are running a DNS server that faces the public, even though it's beyond the scope of this article.

[ Getting started with networking? Check out the Linux networking cheat sheet . ]

[Dec 01, 2019] How to Find DNS (Domain Name Server) Records On Linux Using the Dig Command

Dec 01, 2019 |

The common syntax for the dig command is as follows:

dig [Options] [TYPE] []
1) How to Lookup a Domain "A" Record (IP Address) on Linux Using the dig Command

Use the dig command followed by the domain name to find the given domain's "A" record (IP address).

$ dig

; <<>> DiG 9.14.7 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7777
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;                  IN      A

;; ANSWER SECTION:           299     IN      A           299     IN      A

;; Query time: 29 msec
;; WHEN: Thu Nov 07 16:10:56 IST 2019
;; MSG SIZE  rcvd: 73

It used the local DNS cache server to obtain the given domain information via port number 53.

2) How to Only Lookup a Domain "A" Record (IP Address) on Linux Using the dig Command

Use the dig command followed by the domain name with additional query options to filter only the required values of the domain name.

In this example, we are only going to filter the Domain A record (IP address).

$ dig +nocomments +noquestion +noauthority +noadditional +nostats

; <<>> DiG 9.14.7 <<>> +nocomments +noquestion +noauthority +noadditional +nostats
;; global options: +cmd           299     IN      A           299     IN      A
3) How to Only Lookup a Domain "A" Record (IP Address) on Linux Using the +answer Option

Alternatively, only the "A" record (IP address) can be obtained using the "+answer" option.

$ dig +noall +answer           299     IN      A           299     IN      A
4) How Can I Only View a Domain "A" Record (IP address) on Linux Using the "+short" Option?

This is similar to the output above, but it only shows the IP address.

$ dig +short
5) How to Lookup a Domain "MX" Record on Linux Using the dig Command

Add the MX query type in the dig command to get the MX record of the domain.

# dig MX +noall +answer
# dig -t MX +noall +answer           299     IN      MX      0

According to the above output, it only has one MX record, and the priority is 0.
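When a domain has several MX records, the lowest preference value wins, so a quick way to see delivery order is to sort the answer section numerically on the preference field. The records below are made up:

```shell
# Sort sample MX answers by preference; mail is delivered to the
# lowest-preference host first. Hostnames are hypothetical.
cat <<'EOF' | sort -k5,5n
example.com.  299  IN  MX  20  alt1.mail.example.com.
example.com.  299  IN  MX  0   mail.example.com.
example.com.  299  IN  MX  10  alt2.mail.example.com.
EOF
```

The output lists mail.example.com. (preference 0) first, then the two backup exchangers.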

6) How to Lookup a Domain "NS" Record on Linux Using the dig Command

Add the NS query type in the dig command to get the Name Server (NS) record of the domain.

# dig NS +noall +answer
# dig -t NS +noall +answer           21588   IN      NS           21588   IN      NS
7) How to Lookup a Domain "TXT (SPF)" Record on Linux Using the dig Command

Add the TXT query type in the dig command to get the TXT (SPF) record of the domain.

# dig TXT +noall +answer
# dig -t TXT +noall +answer           288     IN      TXT     "ca3-8edd8a413f634266ac71f4ca6ddffcea"
8) How to Lookup a Domain "SOA" Record on Linux Using the dig Command

Add the SOA query type in the dig command to get the SOA record of the domain.


# dig SOA +noall +answer
# dig -t SOA +noall +answer           3599    IN      SOA 2032249144 10000 2400 604800 3600
9) How to Lookup a Domain Reverse DNS "PTR" Record on Linux Using the dig Command

Enter the domain's IP address with the dig command's -x option to find the domain's reverse DNS (PTR) record.

# dig -x +noall +answer 21599 IN    PTR
10) How to Find All Possible Records for a Domain on Linux Using the dig Command

Use the ANY query type with the dig command to find all possible records for a domain (A, NS, PTR, MX, SPF, TXT).

# dig ANY +noall +answer

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.23.rc1.el6_5.1 <<>> ANY +noall +answer
;; global options: +cmd           12922   IN      TXT     "v=spf1 ip4: +a +mx +ip4: ?all"           12693   IN      MX      0           12670   IN      A           84670   IN      NS           84670   IN      NS
11) How to Lookup a Particular Name Server for a Domain Name

Also, you can query a particular name server for a domain name using the dig command.

# dig

; <<>> DiG 9.14.7 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10718
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;                IN      A

;; ANSWER SECTION: 21599   IN      A

;; Query time: 23 msec
;; WHEN: Tue Nov 12 11:22:50 IST 2019
;; MSG SIZE  rcvd: 67

;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45300
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;                  IN      A

;; ANSWER SECTION:           299     IN      A           299     IN      A

;; Query time: 23 msec
;; WHEN: Tue Nov 12 11:22:50 IST 2019
;; MSG SIZE  rcvd: 73
12) How To Query Multiple Domains DNS Information Using the dig Command

You can query DNS information for multiple domains at once using the dig command.

# dig NS +noall +answer TXT +noall +answer SOA +noall +answer           21578   IN      NS           21578   IN      NS      299     IN      TXT     "ca3-e9556bfcccf1456aa9008dbad23367e6"      299     IN      TXT     "google-site-verification=a34OylEd_vQ7A_hIYWQ4wJ2jGrMgT0pRdu_CcvgSp4w"           3599    IN      SOA 2032053532 10000 2400 604800 3600
13) How To Query DNS Information for Multiple Domains Using the dig Command from a text File

To do so, first create a file and add to it the list of domains you want to check for DNS records.

In my case, I've created a file named dig-demo.txt and added some domains to it.

# vi dig-demo.txt

Once you have done the above operation, run the dig command to view the DNS information.

# dig -f /home/daygeek/shell-script/dig-test.txt NS +noall +answer           21599   IN      NS           21599   IN      NS      21599   IN      NS      21599   IN      NS           21599   IN      NS           21599   IN      NS
14) How to use the .digrc File

You can control the behavior of the dig command by adding the ".digrc" file to the user's home directory.

If you want to run the dig command with only the answer section, create the .digrc file in the user's home directory and save the default options +noall and +answer.

# vi ~/.digrc

+noall +answer

Once you have done the above step, simply run the dig command and see the magic.

# dig NS           21478   IN      NS           21478   IN      NS

[Jan 26, 2019] How and why i run my own dns servers

Introduction

Despite my woeful knowledge of networking, I run my own DNS servers for my own websites, run from home. I achieved this through trial and error, and now it requires almost zero maintenance, even though I don't have a static IP at home.

Here I share how (and why) I persist in this endeavour.

Overview

This is an overview of the setup:

This is how I set up my DNS.

How?

Walking through step-by-step how I did it:

1) Set up two Virtual Private Servers (VPSes)

You will need two stable machines with static IP addresses. If you're not lucky enough to have these in your possession, then you can set them up in the cloud. I used this site, but there are plenty out there. NB I asked them, and their IPs are static per VPS. I use the cheapest cloud VPS ($1/month) and set up Debian on there.

NOTE: Replace any mention of DNSIP1 and DNSIP2 below with the first and second static IP addresses you are given.

Log on and set up root password

SSH to the servers and set up a strong root password.

2) Set up domains

You will need two domains: one for your DNS servers, and one for the application running on your host. I use a site that gives out free throwaway domains. In this case, I might set up a DNS domain and a site domain. Whatever you choose, replace your DNS domain when you see YOURDNSDOMAIN below. Similarly, replace your app domain when you see YOURSITEDOMAIN below.

3) Set up a 'glue' record

If you use a free domain provider as above, then to allow you to manage the YOURDNSDOMAIN domain you will need to set up a 'glue' record. What this does is tell the current domain authority to defer to your nameservers (the two servers you've set up) for this specific domain. Otherwise it keeps referring back to the .tk domain for the IP. See here for a fuller explanation. Another good explanation is here. To do this you need to check with the authority responsible how this is done, or become the authority yourself. The domain authority has a web interface for setting up a glue record, so I used that. There, you need to go to 'Manage Domains' => 'Manage Domain' => 'Management Tools' => 'Register Glue Records' and fill out the form. Your two hosts will be called ns1.YOURDNSDOMAIN and ns2.YOURDNSDOMAIN, and the glue records will point to either IP address. Note, you may need to wait a few hours (or longer) for this to take effect. If really unsure, give it a day.
If you like this post, you might be interested in my book Learn Bash the Hard Way, available here for just $5.
4) Install bind on the DNS Servers

On a Debian machine (for example), and as root, type:

apt install bind9

bind is the domain name server software you will be running.

5) Configure bind on the DNS Servers

Now, this is the hairy bit. There are two parts to this, with two files involved: named.conf.local, and the db.YOURDNSDOMAIN file. They are both in the /etc/bind folder. Navigate there and edit these files.

Part 1 - named.conf.local

This file lists the 'zone's (domains) served by your DNS servers. It also defines whether this bind instance is the 'master' or the 'slave'. I'll assume ns1.YOURDNSDOMAIN is the 'master' and ns2.YOURDNSDOMAIN is the 'slave'.
Part 1a - the master

On the master/ns1.YOURDNSDOMAIN, the named.conf.local should be changed to look like this:
zone "YOURDNSDOMAIN" {
 type master;
 file "/etc/bind/db.YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "YOURSITEDOMAIN" {
 type master;
 file "/etc/bind/db.YOURSITEDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "" {
 type master;
 notify no;
 file "/etc/bind/db.75";
 allow-transfer { DNSIP2; };
};

logging {
 channel query.log {
 file "/var/log/query.log";
 // Set the severity to dynamic to see all the debug messages.
 severity debug 3;
 };
 category queries { query.log; };
};
The logging at the bottom is optional (I think). I added it a while ago, and I leave it in here for interest. I don't know what the 14.127 zone stanza is about.
Part 1b - the slave

On the slave/ns2.YOURDNSDOMAIN, the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURDNSDOMAIN";
 masters { DNSIP1; };
};

zone "YOURSITEDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURSITEDOMAIN";
 masters { DNSIP1; };
};

zone "" {
 type slave;
 file "/var/cache/bind/db.75";
 masters { DNSIP1; };
};

Part 2 - the db.YOURDNSDOMAIN file

Now we get to the meat: your DNS database is stored in this file.

On the master/ns1.YOURDNSDOMAIN, the db.YOURDNSDOMAIN file looks like this:

$TTL 4800
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL

On the slave/ns2.YOURDNSDOMAIN it's very similar, but has ns1 in the SOA line, and the IN NS lines reversed. I can't remember if this reversal is needed or not:

  2018011615 ; Serial
 604800 ; Refresh
 86400 ; Retry
 2419200 ; Expire
 604800 ) ; Negative Cache TTL

A few notes on the above:

The next step is to dynamically update the DNS server with your dynamic IP address whenever it changes.

6) Copy ssh keys

Before setting up your dynamic DNS you need to set up your ssh keys so that your home server can access the DNS servers.

NOTE: This is not security advice. Use at your own risk.

First, check whether you already have an ssh key generated:

ls ~/.ssh/id_rsa

If that returns a file, you're all set up. Otherwise, type:

ssh-keygen

and accept the defaults.

Then, once you have a key set up, copy your ssh ID to the nameservers:

ssh-copy-id root@DNSIP1
ssh-copy-id root@DNSIP2

You will be prompted for the root password on each command.

7) Create an IP updater script

Now ssh to both servers and place this script in /root/ :

#!/bin/bash
set -o nounset
sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN
sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN
/etc/init.d/bind9 restart

Make it executable by running:

chmod +x /root/

Going through it line by line:

This line throws an error if the IP is not passed in as the argument to the script.

Replaces the IP address with the contents of the first argument to the script.

Bumps the serial number to the current date and hour.

Restarts the bind service on the host.
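Both sed edits can be tried safely on a scratch file before trusting them with the real zone data; here's a minimal sketch (the record contents and the new IP are invented for the demo):

```shell
# Make a scratch "zone file" with an A record and a Serial line
# (record contents are invented for the demo).
zone=$(mktemp)
printf ' IN A\n  2018011615 ; Serial\n' > "$zone"

# The same two substitutions the updater script performs,
# with standing in for the new dynamic IP ($1).
sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A" "$zone"
sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" "$zone"

result=$(cat "$zone")
rm -f "$zone"
echo "$result"
```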

8) Cron Your Dynamic DNS

At this point you've got access to update the IP when your dynamic IP changes, and the script to do the update.

Here's the raw cron entry:

* * * * * curl 2>/dev/null > /tmp/ip.tmp && (diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/ $(cat /tmp/ip)")); curl 2>/dev/null > /tmp/ip.tmp2 && (diff /tmp/ip.tmp2 /tmp/ip2 || (mv /tmp/ip.tmp2 /tmp/ip2 && ssh root@DNSIP2 "/root/ $(cat /tmp/ip2)"))

Breaking this command down step by step:

curl 2>/dev/null > /tmp/ip.tmp

This curls a 'what is my IP address' site, and deposits the output to /tmp/ip.tmp

diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/ $(cat /tmp/ip)")

This diffs the contents of /tmp/ip.tmp with /tmp/ip (which is yet to be created, and holds the last-updated IP address). If diff reports a difference (i.e., there is a new IP address to update on the DNS server), then the subshell is run. This overwrites the cached IP address and then ssh'es onto the DNS server to run the update script with the new address.

The same process is then repeated for DNSIP2 using separate files ( /tmp/ip.tmp2 and /tmp/ip2 ).


You may be wondering why I do this in the age of cloud services and outsourcing. There are a few reasons.

It's Cheap

The cost of running this stays at the cost of the two nameservers ($24/year) no matter how many domains I manage and whatever I want to do with them.

It's Educational

I've learned a lot by doing this, probably far more than any course would have taught me.

More Control

I can do what I like with these domains: set up any number of subdomains, try my hand at secure mail techniques, experiment with obscure DNS records and so on.

I could extend this into a service. If you're interested, my rates are very low :)

If you like this post, you might be interested in my book Learn Bash the Hard Way , available here for just $5.

[Jan 08, 2019] Bind DNS threw a (network unreachable) error CentOS



Bind 9 on my CentOS 7.6 machine threw this error:
error (network unreachable) resolving './DNSKEY/IN': 2001:7fe::53#53
error (network unreachable) resolving './NS/IN': 2001:7fe::53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:a8::e#53
error (network unreachable) resolving './NS/IN': 2001:500:a8::e#53
error (FORMERR) resolving './NS/IN':
error (network unreachable) resolving './DNSKEY/IN': 2001:dc3::35#53
error (network unreachable) resolving './NS/IN': 2001:dc3::35#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:2d::d#53
error (network unreachable) resolving './NS/IN': 2001:500:2d::d#53
managed-keys-zone: Unable to fetch DNSKEY set '.': failure

What does it mean? Can it be fixed?

And is it at all related with DNSSEC cause I cannot seem to get it working whatsoever.

cryan7755:
Looks like failure to reach ipv6 addressed NS servers. If you don't utilize ipv6 on your network then this should be expected.
knobbysideup:
Can be dealt with by adding
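The reply above is cut off; the usual remedy for these errors on a host without working IPv6 is to run named in IPv4-only mode. On CentOS this is commonly done as follows (an assumption based on the error messages, not recovered from the thread):

```
# /etc/sysconfig/named -- pass -4 so named only uses IPv4 transport
OPTIONS="-4"
```

followed by restarting named (systemctl restart named).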

[Dec 07, 2015] Localhost DNS Cache By Kyle Rankin

Feb 23, 2015 | Linux Journal

... ... ...

There are a number of different ways to implement DNS caching. In the past, I've used systems like nscd that intercept DNS queries before they would go to name servers in /etc/resolv.conf and see if they already are present in the cache. Although it works, I always found nscd more difficult to troubleshoot than DNS when something went wrong. What I really wanted was just a local DNS server that honored TTL but would forward all requests to my real name servers. That way, I would get the speed and load benefits of a local cache, while also being able to troubleshoot any errors with standard DNS tools.

The solution I found was dnsmasq. Normally I am not a big advocate for dnsmasq, because it's often touted as an easy-to-configure full DNS and DHCP server solution, and I prefer going with standalone services for that. Dnsmasq often will be configured to read /etc/resolv.conf for a list of upstream name servers to forward to and use /etc/hosts for zone configuration. I wanted something completely different. I had full-featured DNS servers already in place, and if I liked relying on /etc/hosts instead of DNS for hostname resolution, I'd hop in my DeLorean and go back to the early 1980s. Instead, the bulk of my dnsmasq configuration will be focused on disabling a lot of the default features.

The first step is to install dnsmasq. This software is widely available for most distributions, so just use your standard package manager to install the dnsmasq package. In my case, I'm installing this on Debian, so there are a few Debianisms to deal with that you might not have to consider if you use a different distribution. First is the fact that there are some rather important settings placed in /etc/default/dnsmasq. The file is fully commented, so I won't paste it here. Instead, I list two variables I made sure to set:

ENABLED=1
IGNORE_RESOLVCONF=yes

The first variable makes sure the service starts, and the second will tell dnsmasq to ignore any input from the resolvconf service (if it's installed) when determining what name servers to use. I will be specifying those manually anyway.

The next step is to configure dnsmasq itself. The default configuration file can be found at /etc/dnsmasq.conf, and you can edit it directly if you want, but in my case, Debian automatically sets up an /etc/dnsmasq.d directory and will load the configuration from any file you find in there. As a heavy user of configuration management systems, I prefer the servicename.d configuration model, as it makes it easy to push different configurations for different uses. If your distribution doesn't set up this directory for you, you can just edit /etc/dnsmasq.conf directly or look into adding an option like this to dnsmasq.conf:

conf-dir=/etc/dnsmasq.d

In my case, I created a new file called /etc/dnsmasq.d/dnscache.conf with the following settings:


Let's go over each setting. The first, no-hosts, tells dnsmasq to ignore /etc/hosts and not use it as a source of DNS records. You want dnsmasq to use your upstream name servers only. The no-resolv setting tells dnsmasq not to use /etc/resolv.conf for the list of name servers to use. This is important, as later on, you will add dnsmasq's own IP to the top of /etc/resolv.conf, and you don't want it to end up in some loop. The next two settings, listen-address and bind-interfaces, ensure that dnsmasq binds to and listens on only the localhost interface ( You don't want to risk outsiders using your service as an open DNS relay.

The server configuration lines are where you add the upstream name servers you want dnsmasq to use. In my case, I added three different upstream name servers in my preferred order. The syntax for this line is server=/domain_to_use/nameserver_ip. So in the above example, it would use those name servers for resolution. In my case, I also wanted dnsmasq to use those name servers for IP-to-name resolution (PTR records), so since all the internal IPs are in the 10.x.x.x network, I added as the domain.
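The dnscache.conf listing itself was lost in formatting; based on the settings described above, it likely looked something like this (the domain name and upstream server IPs are illustrative placeholders, not the article's actual values):

```
# Ignore /etc/hosts and /etc/resolv.conf; use only the servers below
no-hosts
no-resolv

# Listen only on localhost

# Upstream name servers, in preferred order
# (syntax: server=/domain_to_use/nameserver_ip)

# Use the same servers for 10.x.x.x reverse (PTR) lookups
```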

Once this configuration file is in place, restart dnsmasq so the settings take effect. Then you can use dig pointed to localhost to test whether dnsmasq works:

$ dig @localhost

; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4208
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;                   IN      A

;; ANSWER SECTION:    265     IN      A

;; Query time: 0 msec
;; WHEN: Thu Sep 18 00:59:18 2014
;; MSG SIZE  rcvd: 56

Here, I tested and saw that it correctly resolved to If you inspect the dig output, you can see near the bottom of the output that SERVER: confirms that I was indeed talking to to get my answer. If you run this command again shortly afterward, you should notice that the TTL setting in the output (in the above example it was set to 265) will decrement. Dnsmasq is caching the response, and once the TTL gets to 0, dnsmasq will query a remote name server again.

After you have validated that dnsmasq functions, the final step is to edit /etc/resolv.conf and make sure that you have "nameserver" listed above all other nameserver lines. Note that you can leave all of the existing name servers in place. In fact, that provides a means of safety in case dnsmasq ever were to crash. If you use DHCP to get an IP or otherwise have these values set from a different file (such as is the case when resolvconf is installed), you'll need to track down what files to modify instead; otherwise, the next time you get a DHCP lease, it will overwrite this with your new settings.

I deployed this simple change to around 100 servers in a particular environment, and it was amazing to see the dramatic drop in DNS traffic, load and log entries on my internal name servers. What's more, with this in place, the environment is even more tolerant in the case there ever were a real problem with downstream DNS servers - existing cached entries still would resolve for the host until TTL expired. So if you find your internal name servers under heavy load, a local cache like this can make a real difference.

[Aug 04, 2015] My 10 UNIX Command Line Mistakes by Vivek Gite

Accidentally Changed Hostname and Triggered False Alarm

Accidentally changed the current hostname (I wanted to see the current hostname settings) for one of our cluster nodes. Within minutes I received an alert message on both mobile and email.


Wrong CNAME DNS Entry

Created a wrong DNS CNAME entry in zone file. The end result - a few visitors went to /dev/null:
echo 'foo 86400 IN CNAME' >> && rndc reload

[Oct 26, 2013] Cryptolocker (Win32/Crilock.A)

We desperately need the ability to block newly created domains (say, less than a month old) in DNS.

This is a game-changing Trojan, which belongs to the class of malware known as ransomware. It seriously changes views on malware, antivirus programs and backup routines. It is one of the few Trojans that managed to make the front pages of major newspapers like the Guardian.

Unlike most Trojans this one does not need Admin access to inflict the most damage. It also targets backups of your data on USB and mapped network drives. If you offload your backups to cloud storage without versioning and this backup has an extension present in the list of extensions used by this Trojan, it will destroy (aka encrypt) your "cloud" backups too.

It really encrypts the data in a way that excludes the possibility of decryption without paying the ransom, so it is very effective at extorting money for the decryption key. You may or may not get that key, as the servers that transmit it from the Command and Control center might already be blocked; still, chances are reasonably high -- the server names the Trojan contacts to get the public key change (daily?), and so far at least one server the Trojan "pings" has usually been operational (so even on Oct 28 decryption was possible). At the same time, the three-day timer is real, and once it expires the possibility of decrypting the files is gone. Essentially you have only two options:

Beware snake oil salesmen who try to sell you a "disinfection" solution. First of all, removing the Trojan itself is trivial, as it is launched by a standard CurrentVersion\Run registry entry. The problem is that such a solution does not and cannot include restoration of your files.

It was discovered in early September 2013 (around September 3, when the domains used to reach the C&C center were registered, with the first description on September 10; see Trojan:Win32/Crilock.A). Major AV programs did not detect it until September 17, which resulted in significant damage inflicted by the Trojan.

Here is the screen displayed when the Trojan has finished encrypting the files (it operates silently before that, though the load on the computer is considerable -- encryption is a computationally heavy task):


[Jan 18, 2012] Domain Registry of America Review

INTERNET SERVICES in Buffalo, NY - BBB Reliability Report - BBB serving Upstate NY Those scamsters managed somehow to get rating D- instead of F from BBB :-)
Business Name:

Domain Registry of America

Domain Registry of Canada


Business Address: 2316 Delaware Ave
Ste 266
Buffalo, NY 14216

Original Business Start Date: 9/9/2002 Type of Entity: Corporation Principal:

Alvin Chen, Relations Manager

Phone Number:

(866) 434-0212

(905) 479-2533

Fax Number:

(866) 434-0211

BBB Accreditation: This business is not a BBB Accredited Business


Website Address:

[Jan 18, 2012] Domain Registry of America Scam

Today I got the second email from this scammer. As one commenter to "Domain Registry of America Hosting Reviews Exposed" noted: "I'm fairly sophisticated in terms of domain name registration and hosting, and these notices almost fooled me. If they almost fooled me, then I can only imagine the incredible amounts of money they have made scamming tens of thousands of unsuspecting dupes." In my case they use a return address of 2316 Delaware Ave. #266, Buffalo, NY. This time it was not about a transfer but about re-registration due to expiration. I searched Google and found multiple notes about this scam. The earliest is from 2001. They were prosecuted by the FTC in 2003. So they are still in business after more than a decade of scamming people. If you get such a letter, please send a written complaint with your scam letter to: Federal Trade Commission, Consumer Response Center, 600 Pennsylvania Ave., N.W., Washington, DC 20580. Although, as one commenter noted, "Those reporting such letters might have better luck through the US Post Office as mail fraud is a federal offense."
Domain Registry of America Scam

DROA serves as a reseller of domain name registration services for eNom, Inc. ("eNom"), an ICANN-accredited registrar of second level domain names. DROA's domain name registration services enable its customers to establish their identities on the web.

In the course of offering domain name services, DROA has engaged in a direct mail marketing campaign aimed at soliciting consumers in the United States to transfer their domain name registrations from their current registrar to eNom through DROA.

In many instances, consumers do not realize that by returning the invoices along with payment to "renew" their domain name registrations they are, in fact, transferring their domain name registrations from their then-current registrars to eNom. DROA's renewal notices/invoices do not clearly and conspicuously inform consumers of this material fact. 16. Defendant's renewal notices/invoices also fail to inform consumers that DROA charges a processing fee of $4.50 for any transfers of domain name registrations that are not completed, even if through no fault of the consumers. 17. In many instances, DROA promises credits to consumers who request them, but fails to transmit the credits to the consumers' credit card accounts in a timely manner.

The FTC ruling against DROA reads, in part: IT IS HEREBY ORDERED that, in connection with the advertising, marketing, promotion, offering for sale, selling, distribution, or provision of any domain name services, Defendant, its successors, assigns, officers, agents, servants, and employees, and those persons in active concert or participation with it who receive actual notice of this Order by personal service or otherwise are hereby permanently restrained and enjoined from making or from assisting in the making of, expressly or by implication, orally or in writing, any false or misleading statement or representation of material fact, including but not limited to any representation that the transfer of a domain name registration is a renewal. II. IT IS FURTHER ORDERED that, in any written or oral communication where Defendant makes any representation that a domain name service is expiring or requires renewal, Defendant, its successors, assigns, officers, agents, servants, and employees, and those persons in active concert or participation with it who receive actual notice of this Order by personal service or otherwise are hereby permanently restrained and enjoined from failing to disclose, in a clear and conspicuous manner, in advance of receipt of any payment for services: A. Any cancellation or processing fees imposed prior to the effective date of any transfer or renewal; and B. Any limitations or restrictions on cancelling a request for domain name services.

June 27, 2010 | Website Design, Content Management System And SEO Blog
A few days ago we received a statement in the mail from Domain Registry of America. The invoice gives us the impression that a couple of our domain names are up for renewal and are about to expire. The letter actually states that, "Your domain name registrations will expire November 19, 2010!" Even though the dates they have on file are correct, we're not falling for this type of direct mail scam and you shouldn't either! This type of marketing scam is aimed at consumers who do not realize that by returning the invoices along with a payment, their domain names are in fact transferring from their current domain registrar to DROA.

If you received one of these letters, please ignore it! Do NOT complete the payment slip at the bottom or make any payments to this company. To add insult to injury, the letter has their address listed as: 2316 Delaware Avenue #266 Buffalo, New York. With some quick help from Google maps, the address comes up the same as the UPS Store, so guaranteed it's just a mail box!

July 7, 2006 | Lucid Design

We have discovered that a company called "Domain Registry of America" or "DROA" has been emailing domain name owners with deceptive messages about domain transfers. The goal of the emails is to trick people into transferring their domain names away from their existing domain name provider. The emails falsely claim to be a response to a transfer request made by the current owner and should NOT be acted upon. This has been going on for over a year and several of our clients have been duped by this scam. Once DROA takes over ownership it can be somewhat difficult to regain control of the domain. This is in addition to the phenomenally high prices they charge (they make it sound like you get a good deal with them).

This scam seems to be targeting .com domains only and I haven't seen any cases yet for other domains.

If people wish to express their concern, they can contact The Federal Trade Commission (in the US) at or the Ministry of Consumer Affairs' scam watch (NZ) at

[Jan 10, 2012] adsuck � Freecode

adsuck is a small DNS server that spoofs blacklisted addresses and forwards all other queries. The idea is to be able to prevent connections to undesirable sites such as ad servers, crawlers, etc. It can be used locally, for the road warrior, or on the network perimeter in order to protect local machines from malicious sites.

[Jan 9, 2012] Cooking with DNS & BIND - O'Reilly Media

Creating zone data and configuring a name server is just the beginning. Managing a name server over time requires an understanding of how to control it and which commands it supports. It takes familiarity with other tools from the BIND distribution, including nsupdate, used to send dynamic updates to a name server.

This chapter includes lots of recipes that involve ndc and rndc, programs that send control messages to BIND 8 and 9 name servers, respectively. These programs let an administrator reload modified zones, refresh slave zones, flush the cache, and much more. The list of commands the name server supports seems to grow with each successive release of BIND, so I've provided a peek at a few new commands in BIND 9.3.0 for the curious.

In the brave new world of dynamic zones, an administrator may have to make most of the changes to zone data using dynamic update, rather than by manually editing zone data files.

Figuring Out How Much Memory a Name Server Will Need

Problem
You need to figure out how much memory a name server will require.

Solution
While this answer may seem like a cop-out, the only sure-fire way to determine how much memory a name server will need is to configure it, start it, and then monitor it using a tool like top. After a week or so, the size of the named process should stabilize, and you'll know how much memory it needs.

Discussion
The reason it's so difficult to calculate how much memory a name server requires is that there are so many variables involved. The size of the named executable varies on different operating systems and hardware architectures. Zones have a unique mix of records. Zone data files may use lots of shortcuts (e.g., leaving out the origin, or even using a $GENERATE control statement) or none at all. The resolvers that use the name server may send a huge volume of queries, causing the name server's cache to swell, or may send just sporadic queries.
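As a quick alternative to watching top interactively, the resident size of named can be sampled from a script. A small sketch; it falls back to the current shell's PID so it can be tried on any machine:

```shell
# rss_kb PID: print a process's resident set size in kilobytes.
rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

# Sample named if it is running; otherwise demonstrate on this shell.
pid=$(pidof named 2>/dev/null) || pid=$$
rss_kb "$pid"
```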

Testing a Name Server's Configuration

Problem
You want to test a name server's configuration before putting it into production.

Solution
Use the named-checkconf and named-checkzone programs to check the named.conf file and zone data files, respectively. named-checkconf reads /etc/named.conf by default, so if you haven't moved the configuration file into /etc yet, specify the pathname to the configuration file you want to test as the argument:

$ named-checkconf ~/test/named.conf

named-checkconf uses the routines in BIND (BIND 9.1.0 and later, to be exact) to make sure the named.conf file is syntactically correct. If there are any syntactic or semantic errors in named.conf, named-checkconf will print an error. For example:

$ named-checkconf /tmp/named.conf
/tmp/named.conf:3: missing ';' before '}'

named-checkzone uses BIND's own routines to check the syntax of a zone data file. To run it, specify the domain name of the zone and the name of the zone data file as arguments:

$ named-checkzone foo.example db.foo.example

If the zone contains any errors, named-checkzone prints an error. If the zone would load without errors, named-checkzone prints a message like this:

zone foo.example/IN: loaded serial 2002022400

Once you've checked the configuration file and zone data, configure the name server to listen on a nonstandard port with the listen-on options substatement, and not to use a control channel:

controls { };

options {
    directory "/var/named";
    listen-on port 1053 { any; };
};
That way, the test name server won't interfere with any production name server you might already have running. Check the name server's syslog output (which should be clean, if you ran named-checkconf and named-checkzone) and query the name server with dig or another query tool, specifying the alternate port:

$ dig -p 1053 soa foo.example.

Once you're satisfied with the name server's responses to a few queries, you can remove the listen-on substatement, add a real controls statement and put it into production.

Discussion
Even though named-checkconf and named-checkzone first shipped with BIND 9.1.0, BIND 8's configuration syntax is similar enough to BIND 9's that you can easily use named-checkconf with a BIND 8 named.conf file. The zone data file format is exactly the same between versions, so you can use named-checkzone, too.

[Jul 15, 2008] Technology: Paul Vixie Responds To DNS Hole Skeptics

Posted by kdawson on Tuesday July 15, @08:07AM
from the be-afraid-be-very-afraid-then-get-patching dept.

syncro writes "The recent massive, multi-vendor DNS patch advisory related to DNS cache poisoning vulnerability, discovered by Dan Kaminsky, has made headline news. However, the secretive preparation prior to the July 8th announcement and hype around a promised full disclosure of the flaw by Dan on August 7 at the Black Hat conference has generated a fair amount of backlash and skepticism among hackers and the security research community. In a post on CircleID, Paul Vixie offers his usual straightforward response to these allegations. The conclusion: 'Please do the following. First, take the advisory seriously - we're not just a bunch of n00b alarmists, if we tell you your DNS house is on fire, and we hand you a fire hose, take it. Second, take Secure DNS seriously, even though there are intractable problems in its business and governance model - deploy it locally and push on your vendors for the tools and services you need. Third, stop complaining, we've all got a lot of work to do by August 7 and it's a little silly to spend any time arguing when we need to be patching.'"

[May 15, 2008] DNS trouble knocks National Security Agency off Internet


A server problem at the U.S. National Security Agency has knocked the secretive intelligence agency off the Internet. The Web site was unresponsive at 7 a.m. Pacific time Thursday and continued to be unavailable throughout the morning for Internet users.

The Web site was unreachable because of a problem with the NSA's DNS (Domain Name System) servers, said Danny McPherson, chief research officer with Arbor Networks. DNS servers are used to translate things like Web addresses typed into a browser into the machine-readable Internet Protocol addresses that computers use to find each other on the Internet.

The agency's two authoritative DNS servers were unreachable Thursday morning, McPherson said.

Because this DNS information is sometimes cached by Internet service providers, the NSA would still be temporarily reachable by some users, but unless the problem is fixed, NSA servers will be knocked completely off-line. That means that e-mail sent to the agency will not be delivered, and in some cases, e-mail being sent by the NSA would not get through.

"We are aware of the situation and our techs are working on it," a NSA spokeswoman said at 9:45 a.m. PT. She declined to identify herself.

A similar DNS problem knocked off-line in early May.

There are three possible reasons the DNS server was knocked off-line, McPherson said. "It's either an internal routing problem of some sort on their side or they've messed up some firewall or ACL [access control list] policy," he said. "Or they've taken their servers off-line because something happened."

That "something else" could be a technical glitch or a hacking incident, McPherson said.

In fact, the NSA has made some basic security mistakes with its DNS servers, according to McPherson.

"Say there was some Apache or Windows vulnerability and hackers controlled that server, they would now own the DNS server for," he said. "That really surprised me. I wouldn't think that these guys would do something like that."

The NSA is responsible for analysis of foreign communications, but it is also charged with helping protect the U.S. government against cyber attacks, so the outage is an embarrassment for the agency.

"I am certain that someone's going to send an e-mail at some point that's not going to get through," McPherson said. "If it's related to national security and it's not getting through, then as a U.S. citizen, that concerns me."

(Anders Lotsson with Computer Sweden contributed to this report.)

[Feb 6, 2008] Industry milestone DNS turns 25

02/06/08 |Network World

The Domain Name System turned 25 last week.

Paul Mockapetris is credited with creating DNS 25 years ago; he successfully tested the technology in June 1983, according to several sources.

The anniversary of the technology that underpins the Internet -- and prevents Web surfers from having to type a string of numbers when looking for their favorite sites -- reminds us how network managers can't afford to overlook even the smallest of details. Now in all honesty, DNS has been on my mind lately because of a recent film that used DNS and network technology in its plot, but savvy network managers have DNS on the mind daily.

DNS is often referred to as the phone book for the Internet: it matches a name with an IP address and makes sure people and devices requesting an address actually arrive at the right place. And if the servers hosting DNS are configured wrong, networks can be susceptible to downtime and attacks, such as DNS poisoning.

And in terms of managing networks, DNS has become a critical part of many IT organizations' IP address management strategies. And with voice-over-IP and wireless technologies ramping up the number of IP addresses that need to be managed, IT staff are learning they need to also ramp up their IP address management efforts. Companies such as Whirlpool are on top of IP address management projects, but industry watchers say not all IT shops have that luxury.

"IP address management sometimes gets pushed to the back burner because a lot of times the business doesn't see the immediate benefit -- until something goes wrong," says Larry Burton, senior analyst with Enterprise Management Associates.

And the way people are doing IP address management today won't hold up under the proliferation of new devices, an update to the Internet Protocol (from IPv4 to IPv6) and the compliance requirements that demand detailed data on IP addresses.

"IP address management for a lot of IT shops today is manual and archaic. It is now how most would say to manage a critical network service," says Robert Whiteley, a senior analyst at Forrester Research. "Network teams need to fix how they approach IP address management to be considered up to date."

And those looking to overhaul their approach to IP address management might want to consider migrating how they do DNS and DHCP services as well. While the technology functions can be conducted with separate platforms -- albeit integration among them is a must -- some experts say while updating how they manage IP addresses, network managers should also take a look at their DNS and DHCP infrastructure.

"Some people think of IP address management as the straight up managing of IP addresses and others incorporate the DNS/DHCP infrastructure, says Lawrence Orans, research director at Gartner. "If you are updating how you manage IPs it's a good time to also see if how you are doing DNS and DHCP needs an update."

[Debian Sarge] Installing A Bind9 Master-Slave DNS System HowtoForge -

Linux Howtos and Tutorials

In this howto we will install 2 bind dns servers, one as the master and the other as a slave server. For security reasons we will chroot bind9 in its own jail.

Using two servers for a domain is a commonly used setup and in order to host your own domain you are required to have at least 2 domain servers. If one breaks, the other can continue to serve your domain.
Our setup will use Debian Sarge 3.1 (stable) for its base. A simple clean and up2date install will be enough since we will install the required packages with this howto.

In this howto I will use the fictional domain "linux.lan". The nameservers will use and as their IPs.

Some last words before we begin: I read Joe's howto (also on this site) and some more tutorials, but none of them worked without some tweaks. Therefore, I made my own howto. And it SHOULD work at once :)

[May 22, 2006] DNSSEC Deployment at the Root Posted by Thierry Moreau

DNSSEC is a security protocol providing cryptographic assurance (i.e., using public-key digital signature technology) for data retrieved from the DNS distributed database (RFC 4033). DNSSEC deployment at the root is said to be subject to politics, but there is seldom detailed discussion of this "DNS root signing" politics. Actually, DNSSEC deployment requires more than signing the DNS root zone data; it also involves secure delegations from the root to the TLDs, and DNSSEC deployment by TLD administrations (I omit other participants' involvement, as my focus is policy around the DNS root). There is a dose of naivety in the idea of detailing the political aspects of the DNS root, but I volunteer! My perspective is that of an interested observer.

Recent developments surrounding ICANN:

That is for policy-related signals. Turning to the technology and demand rationale for DNSSEC deployment, the picture is somewhat more definite. At the time of this writing, from a technology development perspective, the DNSSEC protocols are almost ready for wide-scale deployment, and at least one ccTLD supports it (the Swedish registry, .se). The specific protocol areas where developments are still under way include mainly:

1) solving a privacy issue (not to be confused with data confidentiality which is not part of DNSSEC) referred to as "zone walking" prevention, or "NSEC3" as a technical buzzword for the solution being finalized,

2) the trust anchor key rollover issue, a protocol development item that merges into the "root signing" activity (this is the area in which I am involved),

3) some further testing might be required to strengthen the confidence that DNSSEC is adequate for full-scale deployment.

Beyond the mere observation that the current DNS implementation lacks cryptographic assurance security, the demand for DNSSEC deployment comes from an overall concern with Internet e-commerce insecurity, and from specific needs for a distribution channel for public keys, the latter supporting spam-prevention schemes (a potential killer-application for DNSSEC?) and ubiquitous encryption key distribution (a nightmare for "national security" premises?). The materialization of such DNSSEC benefits requires more software development on the DNS resolver side and end-user applications, but since deployment must start somewhere, why not at the DNS root and TLDs!
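The "cryptographic assurance" at the heart of DNSSEC rests on a chain of digests and signatures from parent to child zone. As an illustrative sketch only (a real DS digest per RFC 4034 is computed over the wire-format owner name and the full DNSKEY RDATA, and validation involves public-key signatures, not a bare hash), the parent-side digest check can be modeled like this:

```python
import hashlib

def ds_digest(owner_name: bytes, dnskey_rdata: bytes) -> str:
    """Simplified DS computation: SHA-256 over the child's owner name
    plus its DNSKEY RDATA (RFC 4034 prescribes the wire-format name;
    this sketch skips the wire encoding for clarity)."""
    return hashlib.sha256(owner_name + dnskey_rdata).hexdigest()

# The parent zone publishes this digest as a DS record alongside the
# NS delegation; a validator recomputes it from the child's DNSKEY
# and checks that the two match -- the "secure delegation" above.
child_key = b"\x01\x01\x03\x08" + b"fake-public-key-material"
published_ds = ds_digest(b"example.se.", child_key)

def validates(owner: bytes, key: bytes, ds: str) -> bool:
    return ds_digest(owner, key) == ds

print(validates(b"example.se.", child_key, published_ds))        # True
print(validates(b"example.se.", b"attacker-key", published_ds))  # False
```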

Perhaps a characteristic of the DNSSEC technology deserves a special note to observers of the DNS governance: while a secure delegation (DS resource record) parallels the plain DNS delegation (NS resource records) along the name hierarchy, they are otherwise independent relationships. It means that almost nothing from the existing ICANN policy for namespace management can be taken for granted when defining policy for DNSSEC support. Here are a few of the questions that may arise:

I'm getting more and more convinced that DNSSEC deployment momentum can only occur through TLD administrations' involvement, with focus on their respective understandings of the DNS institutional and policy issues. If you think of TLDs as independent entities deploying cryptographic assurance to the DNS data they publish, with their own requirements and conditions as they see fit, you just wish there were a higher level of technical coordination, acting merely as an agent of each enrolled TLD. Gone is the view that ICANN empowers TLDs to do something, e.g. provide DNSSEC value-added name registrations. After all, the ICANN board justified the .xxx rejection by its inability to cope with the diversified societal and legal environments that make up the global Internet.

Hal's Low-Tech Homepage Hal Pomeranz DNS related links: Solaris 10 What's New

BIND 9 is new in the Solaris Express 8/04 release. In the Solaris 10 3/05 release, the BIND version was upgraded to BIND version 9.2.4.

BIND is an open source implementation of DNS. BIND is developed by the Internet Systems Consortium (ISC). BIND allows DNS clients and applications to query DNS servers for the IPv4 and IPv6 networks. BIND includes two main components: a stub resolver API, resolver(3resolv), and the DNS name server with various DNS tools.

BIND enables DNS clients to connect to IPv6 DNS servers by using IPv6 transport. BIND provides a complete DNS client-server solution for IPv6 networks.

BIND 9.2.4 is a redesign of the DNS name server and tools by the Internet Systems Consortium (ISC). The BIND version 9.2.4 nameserver and tools are available in the Solaris 10 OS.

BIND 8.x-to-BIND 9 migration information is available in the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP). Additional information and documentation about BIND 9 is also available on the ISC web site. For information about IPv6 support, see the System Administration Guide: IP Services.

dns Perl DNS communication module

dns - Tcl Domain Name Service Client

package require Tcl 8.2
package require dns ?1.2.1?
::dns::resolve query ?options?
::dns::configure ?options?
::dns::name token
::dns::address token
::dns::cname token
::dns::result token
::dns::status token
::dns::error token
::dns::reset token
::dns::wait token
::dns::cleanup token


The dns package provides a Tcl-only Domain Name Service client. You should refer to (1) and (2) for information about the DNS protocol, or read resolver(3) to find out how the C library resolves domain names. The intention of this package is to insulate Tcl scripts from problems with using the system library resolver for slow name servers. It may or may not be of practical use. Internet name resolution is a complex business and DNS is only one part of the resolver. You may find you are supposed to be using hosts files, NIS or WINS, to name a few other systems. This package is not a substitute for the C library resolver - it does however implement name resolution over DNS. The package also extends the uri package to support DNS URIs (4) of the form dns://my.nameserver/ The dns::resolve command can handle DNS URIs or simple domain names as a query.

Note: The package defaults to using DNS over TCP connections. If you wish to use UDP you will need to have the tcludp package installed, in a version that correctly handles binary data (> 1.0.4). If the udp package is present then UDP will be used by default.


::dns::resolve query ?options?
Resolve a domain name using the DNS protocol. query is the domain name to be looked up. This should be either a fully qualified domain name or a DNS URI.
-nameserver hostname or -server hostname
Specify an alternative name server for this request.
-protocol tcp|udp
Specify the network protocol to use for this request. Can be one of tcp or udp.
-port portnum
Specify an alternative port.
-search domainlist
-timeout milliseconds
Override the default timeout.
-type TYPE
Specify the type of DNS record you are interested in. Valid values are A, NS, MD, MF, CNAME, SOA, MB, MG, MR, NULL, WKS, PTR, HINFO, MINFO, MX, TXT, SPF, SRV, AAAA, AXFR, MAILB, MAILA and *. See RFC 1035 for details about the return values, (3) about AAAA records, and RFC 2782 for details of SRV records.
-class CLASS
Specify the class of domain name. This is usually IN; valid values are IN for internet domain names, CS, CH, HS, or * for any class.
-recurse boolean
Set to false if you do not want the name server to recursively act upon your request. Normally set to true.
-command procname
Set a procedure to be called upon request completion. The procedure will be passed the token as its only argument.

::dns::configure ?options?
The ::dns::configure command is used to set up the dns package. The server to query, the protocol and the domain search path are all set via this command. If no arguments are provided then a list of all the current settings is returned. If only one argument is provided then it must be the name of an option, and the value for that option is returned.
-nameserver hostname
Set the default name server to be used by all queries. The default is localhost.
-protocol tcp|udp
Set the default network protocol to be used. Default is tcp.
-port portnum
Set the default port to use on the name server. The default is 53.
-search domainlist
Set the domain search list. This is currently not used.
-timeout milliseconds
Set the default timeout value for DNS lookups. Default is 30 seconds.

::dns::name token
Returns a list of all domain names returned as an answer to your query.
::dns::address token
Returns a list of the address records that match your query.
::dns::cname token
Returns a list of canonical names (usually just one) matching your query.
::dns::result token
Returns a list of all the decoded answer records provided for your query. This permits you to extract the result for more unusual query types.
::dns::status token
Returns the status flag. For a successfully completed query this will be ok; it may also be error, timeout or eof. See also ::dns::error.
::dns::error token
Returns the error message provided for requests whose status is error. If there is no error message then an empty string is returned.
::dns::reset token
Reset or cancel a DNS query.
::dns::wait token
Wait for a DNS query to complete and return the status upon completion.
::dns::cleanup token
Remove all state variables associated with the request.


% set tok [dns::resolve]
% dns::status $tok
% dns::address $tok
% dns::name $tok
% dns::cleanup $tok

Using DNS URIs as queries:

% set tok [dns::resolve ";type=MX"]
% set tok [dns::resolve "dns://"]

Reverse address lookup:

% set tok [dns::resolve]
% dns::name $tok
% dns::cleanup $tok


  1. Mockapetris, P., "Domain Names - Concepts and Facilities", RFC 1034, November 1987.
  2. Mockapetris, P., "Domain Names - Implementation and Specification", RFC 1035, November 1987.
  3. Thomson, S. and Huitema, C., "DNS Extensions to support IP version 6", RFC 1886, December 1995.
  4. Josefsson, S., "Domain Name System Uniform Resource Identifiers", Internet-Draft, October 2003.
  5. Gulbrandsen, A., Vixie, P. and Esibov, L., "A DNS RR for specifying the location of services (DNS SRV)", RFC 2782, February 2000.


Pat Thoyts





[Jan 28, 2004] New Instance of DNS Root Server Makes Internet History by Paul Rendek

For the first time in Internet history the number of instances of DNS root servers outside the United States has overtaken the number within. The balance was tipped by the recent launch in Frankfurt of an anycast instance of the RIPE NCC operated K-root server.

The K-root server is one of the 13 DNS root servers that resolve lookups for domain names all over the world and form a critical part of the global Internet infrastructure. The K-root server has been operated by the RIPE NCC since 1997 when the first server was installed at the London Internet Exchange (LINX) in London, UK.

Deployment of anycast instances of the K-root server further improves the distribution of this crucial service in various Internet regions and its resilience against Distributed Denial of Service (DDoS) attacks. As K-root is one of the 13 root servers, this also means improvement for the whole Root Server System.

RIPE NCC technicians were among the pioneers of the anycast concept for root servers and have deployed instances of the K-root server, hosted at the LINX, at the AMS-IX in Amsterdam and at the DE-CIX, Frankfurt. They are planning to have up to 10 instances of the K-root server deployed by the end of 2004.

"We operate K-root as a service to the Internet at large on behalf of our 3,500 members, across more than 100 countries, to whom we provide Internet resources and co-ordination services," stated Axel Pawlik, Managing Director of the RIPE NCC. "As a membership association we are directly responsible for fulfilling the needs of our members. Our members are committed to providing reliable DNS service because their businesses depend on it."

Anycast allows exact copies of the server, including the name and IP address, to be deployed in different locations. These copies are deployed in collaboration with local partners but are under sole management and administrative control of the RIPE NCC. Using anycast makes the root server system more difficult to attack and improves the DNS response for local communities by providing shorter paths between clients and servers.

"Our strategy is to deploy servers at multiple locations where there is a lot of Internet connectivity. We do that in close co-operation with ISPs who are also our members," said Andrei Robachevsky, Chief Technology Officer at the RIPE NCC. "However, by taking full operational responsibility for the servers themselves, the RIPE NCC can build a very strong service that is resilient to disasters and attack."

By locating the servers at Internet exchange points, they have the advantage of being as hardened as the infrastructure at these points themselves. "This is very economical because we do not need to spend extra money to harden these sites or to develop their connectivity," noted Robachevsky. "Service quality and security is not always proportional to money spent."

"We do not need fancy, hardened Network Operations Centres," added Daniel Karrenberg, Chief Scientist of the RIPE NCC, who installed the first instance of at the London Internet Exchange (LINX) back in 1997. "Our engineering builds on diversity and distribution of functions. The servers will continue to run reliably for a very long time even if our Network Operations Centres should be down. We monitor the quality of the root name service from more than 50 locations worldwide, and we publish the results for everyone to see."

These results are available through the RIPE NCC DNS Monitoring site. The site uses Test Traffic Measurements (TTM) network to provide an up-to-date service overview of certain DNS root and Top-Level Domain (TLD) name servers. The DNS Monitoring service is available at:

"The strength of the Internet does not come from centralistic or hierarchical designs but from de-centralised and distributed design and engineering," noted Karrenberg. "Operationally, the root servers are equal peers and client software can choose any one of them based on an estimate of which provides the best service to the client's location at the time."

The strength of the root name server system lies in its diversity on all levels, a legacy of the late Jon Postel who oversaw its construction in the 1990s. "It is not a weakness but a strength of the system that servers are operated by a widely diverse group of organisations," said Pawlik. "Measurements show that the current system is performing well," he added. "It will be hard to introduce more central or hierarchical structures without substantially weakening the system as a whole."

The list of root servers:
A VeriSign Dulles, Virginia, USA
B ISI Marina Del Rey, California, USA
C Cogent distributed using anycast
D University of Maryland College Park, Maryland, USA
E NASA Mountain View, California, USA
F ISC distributed using anycast
G U.S. DoD NIC Vienna, Virginia, USA
H U.S. Army Research Lab Aberdeen Proving Ground, Maryland, USA
I Autonomica distributed using anycast
J VeriSign distributed using anycast
K RIPE NCC distributed using anycast
L ICANN Los Angeles, California, USA
M WIDE Project Tokyo, Japan

[Jan 3, 2003] Perspective Defending the DNS Perspectives CNET

The domain name system--the global directory that maps names to Internet Protocol addresses--was designed to distribute authority, making organizations literally "masters of their own domain." But with this mastery comes the responsibility of contributing to the defense of the DNS.

The distributed denial-of-service (DDoS) attacks against the DNS root servers on Oct. 21, 2002, should serve as a wake-up call. The attack was surprisingly successful--most of the root servers were disrupted by a well-known attack strategy that should have been easily defeated. Future attacks against all levels of the DNS--the root at the top; top-level domains like .com, .org and the country codes; and individual high-profile domains--are inevitable.

The October attack was a DDoS "ping" attack. The attackers broke into machines on the Internet (popularly called "zombies") and programmed them to send streams of forged packets at the 13 DNS root servers via intermediary legitimate machines. The goal was to clog the servers, and communication links on the way to the servers, so that useful traffic was gridlocked. The assault is not DNS-specific--the same attack has been used against several popular Web servers in the last few years.

The legitimate use of ping packets is to check whether a server is responding, so a flood of ping packets is clearly either an error or an attack. The typical defense is to program routers to throw away excessive ping packets, which is called rate limiting. While this protects the server, the attack streams can still create traffic jams up to the point where they are discarded.
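Rate limiting of this kind is usually implemented as a token bucket: the router refills tokens at the allowed rate and drops packets once the bucket is empty. A toy model, not router code; the rates and burst sizes are invented for illustration:

```python
class PingRateLimiter:
    """Token-bucket rate limiter of the kind a router might apply to
    ICMP echo ("ping") traffic: allow a sustained rate plus a small
    burst, and drop the excess."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = 0.0   # caller supplies timestamps, which eases testing

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False      # packet dropped: rate limit exceeded

# 10 pings/sec with a burst of 5: a flood of 100 packets within one
# second gets only the burst plus the refilled tokens through.
limiter = PingRateLimiter(rate_per_sec=10, burst=5)
passed = sum(limiter.allow(now=i / 100.0) for i in range(100))
print(passed)   # roughly the burst plus the sustained rate
```

As the article notes, this protects the server itself, but the flood still consumes link bandwidth upstream of wherever the drop happens.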

Excess capacity in the network can help against such attacks, as long as the additional bandwidth can't be used to carry additional attacks. By intent, root servers are deployed at places in the network where multiple Internet service providers intersect. In the October attacks, some networks filtered out the attack traffic while others did not, so a particular root server would seem to be "up" for a network that was filtering and "down" for one that was not.

Unlike most DDoS attacks, which fade away gradually, the October strike on the root servers stopped abruptly after about an hour, probably to make it harder for law enforcement to trace.

DNS caching kept most people from noticing this assault. In very rough terms, if the root servers are disrupted, only about 1 percent of the Internet should notice for every two hours the attack continues--so it would take about a week for an attack to have a full effect. In this cat-and-mouse game between the attackers and network operators, defenders count on having time to respond to an assault.
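The arithmetic behind that estimate is simple; taking the article's figure of about 1 percent per two hours at face value:

```python
# Rough model: if the root servers go dark, cached root data expires
# at a roughly uniform rate of about 1% of resolvers per 2 hours.
fraction_per_hour = 0.01 / 2

for hours in (2, 24, 7 * 24):
    affected = min(1.0, fraction_per_hour * hours)
    print(f"{hours:4d} h outage -> ~{affected:.0%} of the Internet affected")

# Full effect takes 1 / (0.01/2) = 200 hours, i.e. a bit over eight
# days -- consistent with the "about a week" figure above.
```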

Defending the root
The root servers are critical Internet resources, but occupy the "high ground" in terms of defensibility. The root server database is small and changes infrequently, and entries have a lifetime of about a week. Any organization can download an entire copy of the root database, check for updates once a day, and stay current with occasional reloads. A few organizations do this already.
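In BIND, keeping such a local copy can be sketched as a slave zone for the root. The masters address below is a placeholder; you would point it at a server that actually permits AXFR of the root zone:

```
zone "." {
    type slave;
    file "/var/cache/bind/root.zone";   // local copy of the root zone
    masters { 192.0.2.10; };            // placeholder transfer source
};
```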

Root server operators are also starting to deploy root servers using "anycast" addresses that allow multiple machines in different network locations to look like a single server.

In short, defending the DNS root is relatively easy, since it is possible to minimize the importance of any single root server by creating more copies of the root database--some private, some public.

Top-level domains, or TLDs, will be much harder to defend. The copying strategy that can defend the root server will not work for most TLDs. It is much harder to protect, say, .com or .fr than to defend the root. This is because the data in TLDs is more voluminous and more volatile, and the owner is less inclined to distribute copies for privacy or commercial reasons.

There is no alternative. TLD operators must defend their DNS servers with rate-limiting routers and anycast because consumers of the TLD data cannot insulate themselves from the attacks.

Defending your organization
If your organization has an intranet, you should provide separate views of DNS to your internal users and your external customers. This will isolate the internal DNS from external attacks. Copy the root zone to insulate your organization from future DDoS attacks on the root. Consider also copying DNS zones from business partners on extranets. When DNS updates go over the Internet, they can also be hijacked in transit--use TSIGs (transaction signatures) to sign them or send updates over VPNs (virtual private networks) or other channels.
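The core idea of TSIG is a keyed hash over each DNS message using a secret shared between the two servers. A simplified sketch using Python's hmac module (real TSIG, RFC 2845, also covers the key name, the algorithm identifier, and a signing time to prevent replay; the key and message here are invented):

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"   # distributed out of band

def tsig_sign(message: bytes, key: bytes) -> bytes:
    """HMAC-SHA256 over the message bytes; a stand-in for the MAC
    that TSIG appends to a signed DNS update."""
    return hmac.new(key, message, hashlib.sha256).digest()

def tsig_verify(message: bytes, mac: bytes, key: bytes) -> bool:
    # Constant-time comparison avoids leaking MAC bytes via timing.
    return hmac.compare_digest(tsig_sign(message, key), mac)

update = b"add www.example.com. 3600 IN A 192.0.2.7"
mac = tsig_sign(update, SHARED_KEY)

print(tsig_verify(update, mac, SHARED_KEY))       # True: untampered
print(tsig_verify(b"tampered", mac, SHARED_KEY))  # False: altered in transit
```

A receiver that fails verification simply discards the update, which is what defeats in-transit hijacking of zone transfers and dynamic updates.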

But understand that until tools for digital signatures in DNS are finished and deployed, you are going to be at risk from the DNS counterfeiting attacks that lie not too far in the future (and that have apparently already occurred in China). Unfortunately for those of us who depend on the Internet, the attackers seem likely to strengthen their tactics and distribute new attackware, while the Internet community struggles to mount a coordinated approach to DNS defense.

Paul Mockapetris, the inventor of the domain name system, is chief scientist and chairman of the board at Nominum.

Carnegie Mellon NetReg-NetMon

The CMU NetReg/NetMon package is a lightweight and flexible Web-based system for managing networks. It consolidates information about DNS zones, subnets, machine registrations, and DHCP configuration, and provides tools for easy management. The system exports ISC BIND configuration and zone files and ISC DHCP configurations. The NetMon component provides a central system for device tracking (including DHCP lease information and ARP/CAM table tracking) and reporting.

Securing an Internet Name Server [PDF slides] (2002)

scandns is a command line script which will take a (sub)network entered by the user and check all the reverse DNS records for that network. scandns then checks the forward records of the hosts returned by the initial scan and reports any inconsistencies. This is quite useful for cleaning up old inverse records, as well as for general network maintenance and security.
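The forward/reverse consistency check that scandns performs can be sketched in a few lines, assuming hypothetical lookup tables in place of live DNS queries:

```python
reverse = {            # PTR data: address -> hostname
    "192.0.2.1": "www.example.com",
    "192.0.2.2": "old-host.example.com",
}
forward = {            # A data: hostname -> address
    "www.example.com": "192.0.2.1",
    "old-host.example.com": "192.0.2.9",   # stale: no longer matches
}

def inconsistencies(reverse, forward):
    """Report addresses whose PTR record points at a hostname that
    does not resolve back to the same address."""
    bad = []
    for addr, host in reverse.items():
        if forward.get(host) != addr:
            bad.append((addr, host))
    return bad

print(inconsistencies(reverse, forward))
# [('192.0.2.2', 'old-host.example.com')]
```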

[Feb 5, 2001 ] Men & Mice


REYKJAVIK, Iceland, February 5, 2001 -- A new survey performed by DNS specialist Men & Mice shows that 40% of DNS servers for .com domains have serious security vulnerabilities. This news comes just days after Microsoft's web outage. To respond to these new threats, Men & Mice today announced its DNS Audit Service, which quickly checks a company's DNS system for numerous vulnerabilities. The DNS Audit Service offers a comprehensive DNS security audit, covering the latest DNS security vulnerabilities including:

"We are already receiving a number of queries from concerned corporations," said Sigurdur Ragnarsson, Men & Mice's Director of DNS Audit Services. Cricket Liu, DNS specialist and author of the O'Reilly & Associates' Nutshell Handbook "DNS and BIND", said, "The Men & Mice Audit Service is the most comprehensive I know of. They are people that live and breathe this stuff. This is an important service, because it allows companies to quickly get an independent snapshot of vulnerabilities in their DNS infrastructure." Registering for the service is quick and simple: log on to the Men & Mice Web site, then enter your domain name and email address. The Men & Mice experts use DNS diagnostic tools to examine the set-up and configuration of the domain name and potential security issues. Sigurdur Ragnarsson said, "What you receive is a personalized, printable report of DNS misconfigurations, delivered to your email address. Our experts summarize the main findings and suggest corrective action. We adapt our recommendations to the customer's DNS knowledge." The basic DNS Audit service is priced at $1,490 (USD).

Recommended Links

Google matched content

Softpanorama Recommended

Top articles



BIND 9 Administrator Reference Manual

BIND FAQ from Nominum. Paul Mockapetris, the inventor of the domain name system, is chief scientist and chairman of the board at Nominum.
Simple DNS Configuration Example (RIPE-192)
Probably the quickest way to get going, if you are using BIND 8 on Unix. Just cut and paste the examples and make a few changes for a working setup.
FAQ document for the newsgroup by Chris Peckham. Here is a local copy of the most recently posted text version. You could also try a DNS book, DNSRD tips or Ask Mr. DNS.
FAQ for microsoft.public.windowsnt.dns
FAQ document for the Microsoft newsgroup microsoft.public.windowsnt.dns.
How to set up DNS on Linux, by Nicolai Langfeldt. There are some small problems with this document, but it has some useful tips.


Nice collection can be found at Nominum-Resources-Standards Information-DNS RFCs

See also DNS related RFCs, DNS RFCs.

Search for RFCs is available at Internet Requests for Comments (RFC)

Major RFCs

Worth reading for zone administrators:

Reference documents about protocols and administrative rules:

Full list (from DNS related RFCs):

RFC 805
Computer Mail Meeting Notes by J. Postel
The decision to introduce DNS-type names for mail addressing. Feb-1982
RFC 810
DOD Internet Host Table Specification by E. Feinler, K. Harrenstien, Z. Su and V. White
Network Information Center, SRI International. Mar-1982
RFC 811
Hostnames Server by K. Harrenstien, V. White and E. Feinler
The original centralized hostname lookup server. Mar-1982

RFC 812
NICNAME/WHOIS by K. Harrenstien and V. White
Network Information Center, SRI International. Mar-1982

RFC 830
A Distributed System for Internet Name Service by Z. Su
Network Information Center, SRI International. Oct-1982

RFC 819
The Domain Naming Convention for Internet User Applications by Z. Su and J. Postel
Documents the original structural ideas of DNS. Aug-1982
RFC 881
Domain Names Plan and Schedule by J. Postel. Nov-1983
RFC 882 obsoleted by RFC 1034. Nov-1983
RFC 883 obsoleted by RFC 1034. Nov-1983
RFC 920
Domain Requirements by J. Postel and J. Reynolds
Administrative document about domains. Will become historical shortly. Oct-1984
RFC 973 obsoleted by RFC 1034. Jan-1986
RFC 974
Mail Routing and the Domain System by Craig Partridge
Describes MX record processing. Jan-1986
RFC 1032
Domain Administrator's Guide by M. Stahl
Explains role of domain administrator. Nov-1987
RFC 1033 updated by RFC 1912.
Domain Administrators Operations Guide by M. Lottor
How-to guide, now somewhat out of date. Nov-1987
RFC 1034 updated by RFC 1101.
Domain Names--Concepts and Facilities by P. Mockapetris
Reference guide, covers just about everything. Nov-1987
RFC 1035 updated by RFC 1706.
Domain Names--Implementation and Specification by P. Mockapetris
Mechanics of the DNS. An HTML version with graphic illustrations is available (thanks to Russ Nelson). A local copy is also available. Nov-1987
RFC 1101 updates RFC 1034.
DNS Encoding of Network Names and Other Types by P. Mockapetris
How to add network names and netmasks to the DNS. Apr-1989
RFC 1122
Requirements for Internet Hosts -- Communication Layers edited by R. Braden
Not directly related to DNS, but section 4 discusses UDP and TCP issues that have important low-level effects on DNS. Oct-1989
RFC 1123
Requirements for Internet Hosts -- Application and Support edited by R. Braden
Includes chapter 6, about DNS. Oct-1989
RFC 1178
Choosing a Name for Your Computer by D. Libes
Good advice to keep in mind when naming computers. Aug-1990
RFC 1183
New DNS RR Definitions by C. Everhart, L. Mamakos and R. Ullmann and edited by P. Mockapetris
New resource records, not widely used. Oct-1990
RFC 1348 updates RFC 1035, obsoleted by RFC 1706. Jul-1992
RFC 1464
Using the Domain Name System To Store Arbitrary String Attributes by R. Rosenbaum
Using TXT records to store arbitrary strings in the DNS. May-1993
RFC 1480
The US Domain by A. Cooper and J. Postel
Policies and procedures related to the .US top-level domain. Jun-1993
RFC 1535
A Security Problem and Proposed Correction With Widely Deployed DNS Software by E. Gavron
Highlights subversion possibilities with default resolver search lists. Oct-1993
RFC 1536
Common DNS Implementation Errors and Suggested Fixes by A. Kumar, J. Postel, C. Neuman, P. Danzig and S. Miller
What to fix and how to fix it, for developers. Oct-1993
RFC 1537 obsoleted by RFC 1912. Oct-1993
RFC 1591
Domain Name System Structure and Delegation by J. Postel
Administrative details about the DNS name space. Mar-1994
RFC 1611
DNS Server MIB Extensions by R. Austein and J. Saperia
Interfacing SNMP to the server side of DNS, waiting to be implemented. May-1994
RFC 1612
DNS Resolver MIB Extensions by R. Austein and J. Saperia
Interfacing SNMP to the client side of DNS, waiting to be implemented. May-1994
RFC 1637 obsoletes RFC 1348, obsoleted by RFC 1706. Jun-1994
RFC 1664 obsoleted by RFC 2163.
Using the Internet DNS to Distribute RFC1327 Mail Address Mapping Tables by C. Allocchio, A. Bonito, B. Cole, S. Giordano and R. Hagens
Mapping information for converting between X.400 and SMTP addressing into the DNS. Aug-1994
RFC 1706 obsoletes RFC 1348 and RFC 1637.
DNS NSAP Resource Records by B. Manning and R. Colella
How to add OSI-style NSAPs to the DNS using PTR records. Oct-1994
RFC 1712 obsoleted by RFC 1876.
DNS Encoding of Geographical Location by C. Farrell, M. Schulze, S. Pleitner and D. Baldoni
Paul Vixie wrote: `deprecated and retracted by its authors but the RFC editors accidentally published it anyway'. Nov-1994
RFC 1713
Tools for DNS debugging by A. Romao
Overview of some DNS tools. An HTML version is available. Nov-1994
RFC 1794
DNS Support for Load Balancing by T. Brisco
DNS support for balancing loads of many types. Apr-1995
RFC 1811 obsoleted by RFC 1816. Jun-1995
RFC 1816 obsoletes RFC 1811, obsoleted by RFC 2146. Aug-1995
RFC 1876 obsoletes RFC 1712.
A Means for Expressing Location Information in the Domain Name System by C. Davis, P. Vixie, T. Goodwin and I. Dickinson
Geographical location DNS records. Jan-1996
RFC 1884
IP Version 6 Addressing Architecture edited by R. Hinden and S. Deering
All about IPv6 addresses. Dec-1995
RFC 1886
DNS Extensions to support IP version 6 by S. Thomson and C. Huitema
Backward-compatible IPv6 DNS extensions, including new AAAA record type and new domain IP6.INT. Dec-1995
RFC 1912 obsoletes RFC 1537.
Common DNS Operational and Configuration Errors by D. Barr
Errors and common practice in operation of servers and format of data. An HTML version is available. Feb-1996
RFC 1956
Registration in the MIL Domain by D. Engebretson and R. Plzak
Describes the registration policy of the US Department of Defense domain. Jun-1996
RFC 1982
Serial Number Arithmetic by R. Elz and R. Bush
Defines how serial numbers are compared to determine if a zone has been updated. Aug-1996
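The comparison RFC 1982 defines can be sketched in a few lines of Python (illustrative; the RFC leaves a difference of exactly 2^31 undefined, which this sketch treats as "not newer"):

```python
# RFC 1982 serial number arithmetic for 32-bit zone serials: the
# comparison wraps around 2**32, so a slave can tell that serial 1 is
# "newer" than 4294967295 even though it is numerically smaller.
SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)
MOD = 2 ** SERIAL_BITS

def serial_gt(a: int, b: int) -> bool:
    """True if serial a is 'after' serial b in RFC 1982 terms."""
    return a != b and ((a - b) % MOD) < HALF

print(serial_gt(2, 1))             # True
print(serial_gt(1, 2 ** 32 - 1))   # True: wrapped past zero
print(serial_gt(2 ** 32 - 1, 1))   # False
```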
RFC 1995
Incremental Zone Transfer in DNS by M. Ohta
A mechanism for use with NOTIFY which allows transferring only that part of the zone that changed. Aug-1996
RFC 1996
Notify: a mechanism for prompt notification of authority zone changes by P. Vixie
Describes new NOTIFY opcode for advising slave servers that the master's data has been changed. Aug-1996
RFC 2010
Operational Criteria for Root Name Servers by B. Manning and P. Vixie
Requirements for root name servers. Oct-1996
RFC 2052
A DNS RR for specifying the location of services (DNS SRV) by A. Gulbrandsen and P. Vixie
Generalised MX records for services other than mail. Oct-1996
RFC 2053
The AM (Armenia) Domain by E. Der-Danieliantz
Procedures for registering in the AM TLD. Oct-1996
RFC 2065
Domain Name System Security Extensions by D. Eastlake, 3rd and C. Kaufman
Digital signatures for data integrity and authentication in the DNS. An HTML version is available. Jan-1997
RFC 2136
Dynamic Updates in the Domain Name System (DNS UPDATE) by P. Vixie (editor), S. Thomson, Y. Rekhter and J. Bound
Atomic record-level addition and deletion of DNS information: WINS done properly. Apr-1997
RFC 2137
Secure Domain Name System Dynamic Update by D. Eastlake 3rd
Security for dynamic updates. Apr-1997
RFC 2146 obsoletes RFC 1816.
U.S. Government Internet Domain Names by Federal Networking Council
Registration procedures in the .GOV top-level domain, and first steps in the migration to .FED.US. May-1997
RFC 2163 obsoletes RFC 1664.
Using the Internet DNS to Distribute MIXER Conformant Global Address Mapping (MCGAM) by C. Allocchio
Update to RFC 1664, on storing information in the DNS for mapping between X.400 and RFC 822 email addressing. Defines new PX record and .X42D.xx second-level domain names for each country-specific TLD xx. Jan-1998
RFC 2168
Resolution of Uniform Resource Identifiers using the Domain Name System by R. Daniel and M. Mealling
Defines NAPTR (Naming Authority Pointer) record type, which maps URI namespace identifiers to domain names. Jun-1997
RFC 2181 updates RFC 1034, RFC 1035 and RFC 1123.
Clarifications to the DNS Specification by R. Elz and R. Bush
Clarifications regarding multi-homed servers, TTLs, zone cuts, SOA records, the TC (truncated) flag, authoritative/canonical names, and valid labels. An HTML version is available.
RFC 2182 (also BCP 16)
Selection and Operation of Secondary DNS Servers by R. Elz, R. Bush, S. Bradner and M. Patton
How to select secondary servers. An HTML version is available.
RFC 2219 (also BCP 17)
Use of DNS Aliases for Network Services by M. Hamilton and R. Wright
The IANA name for a protocol should be used as the domain name for the machine that supports that protocol at a site. An HTML version is available.
RFC 2230
Key Exchange Delegation Record for the DNS by R. Atkinson
KX records for IP security, assuming Secure DNS. KX defines a host willing to act as a key exchanger for a given domain name. An HTML version is available. Nov-1997
RFC 2240
A Legal Basis for Domain Name Allocation by O. Vaughan
Proposes creation of uniform second-level domain names for commercial organisations, within the country-specific TLDs. Besides a bunch of typos, there appears to be very little of note in this document. Nov-1997
RFC 2247
Using Domains in LDAP/X.500 Distinguished Names by S. Kille, M. Wahl, A. Grimstad, R. Huber and S. Sataluri
Representing domain names as distinguished names (using a new X.500 attribute called DC) so that LDAP can contain DNS information. See also ids-dirnaming. An HTML version is available.
RFC 2308
Negative Caching of DNS Queries (DNS NCACHE) by M. Andrews
Recommends that negative caching (the caching of information about non-existence of resource records) becomes mandatory in resolvers.
RFC 2317
Classless IN-ADDR.ARPA delegation by H. Eidnes, G. de Groot and P. Vixie
How to do IN-ADDR.ARPA delegations on arbitrary boundaries, in a way compatible with existing software, by using CNAME records and new zones.
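The trick can be sketched with a hypothetical /26 delegation of 192.0.2.0-63 (all names and addresses here are illustrative): the parent creates a new zone whose name encodes the range, delegates it with an NS record, and aliases each individual PTR name into it with a CNAME.

```
; In the parent zone 2.0.192.in-addr.arpa.:
0/26          IN NS     ns1.customer.example.
1             IN CNAME  1.0/26.2.0.192.in-addr.arpa.
2             IN CNAME  2.0/26.2.0.192.in-addr.arpa.
; ... one CNAME per address in the delegated range ...

; In the customer's delegated zone 0/26.2.0.192.in-addr.arpa.:
1             IN PTR    host1.customer.example.
```

A resolver looking up 1.2.0.192.in-addr.arpa. follows the CNAME into the customer's zone, so the customer controls its own reverse mapping without the parent needing to re-delegate on an octet boundary.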



comp.protocols.dns.bind Last 50 Messages

comp.protocols.dns.ops Last 50 Messages



USM Engineering Department -- System Guides: DNS, The Domain Name Service, by Glenn Stevens. Very good slides.

Network Ice Corporation. "DNS Zone Transfer." 2000.
URL: (22 July, 2000).

Mr. DNS. "Restricting zone transfers in BIND 4.9.x with the xfernets directive."
URL: (22 July, 2000).

Split DNS

Network Ice Corporation. "Split-DNS"
URL: (22 July, 2000).

Load Balancing via DNS


DNS, DHCP, IP Address Management Solutions from Men & Mice

Yahoo! - Nominum, Inc. Company Profile

Microsoft DNS

Troubleshooting DNS servers: Domain Name System (DNS)

Random Findings

Slashdot U.S. Won't Let Go of DNS An Anonymous Reader wrote in with a story on the Eweek site, reporting that the Federal Government is going to keep control of the Domain Name System rather than handing it over to ICANN. From the article: "...the United States is committed to taking no action that would have the potential to adversely impact the effective and efficient operation of the DNS, and will therefore maintain its historic role in authorizing changes or modifications to the authoritative root zone file..."

Domain Name System: Proper use reduces intranet administration costs by Anton Holleman
Brief white paper from Origin, supporting the use of DNS instead of services like NIS or WINS. Originally published in Dutch in April 1999.
The InterNIC Lame Delegation Policy
The InterNIC's draft policy to deal with lame delegations for domains that they administer. They are supposed to deregister the domain if all delegated servers are lame for 90 days, but they don't seem to bother. The issues relating to trademarks are highly contentious. See also the DNSRD pages on domain registration and disputes.




The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.

Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Created May 16, 1996; Last modified: May 10, 2020