inkblot/puppet-bind

request for support for split DNS configurations

schirmacher opened this issue · 5 comments

Thanks a lot for providing this module, I really appreciate it.
Each system in our data center can be reached at its official IP address from the outside, but when a system is accessed internally, its private IP address must be used. The DNS server therefore has to return the internal address for internal queries and the external address for everything else. I believe this is called a split DNS (or split-horizon) configuration.

The setup below does exactly this, with one caveat: each node now needs two A records with the same name, one per view, and that does not work. The second dns_rr entry is rejected as a duplicate, since both resources would share the title 'IN/A/www.example.org'.

Please advise whether it is possible to create such a configuration. Maybe we can use an extra qualifier in the dns_rr name, something like 'internal-IN/A/www.example.org', to make it unique (sketched after the manifest below), but I don't have the Puppet or Ruby experience to try it myself.

node 's1006' {

        class { 'bind': }

        bind::zone { 'example.org-internal':
                zone_type       => 'master',
                domain          => 'example.org',
                allow_updates   => [ '!key external', 'key internal', ],
                dnssec          => false,
        }

        bind::zone { 'example.org-external':
                zone_type       => 'master',
                domain          => 'example.org',
                allow_updates   => [ '!key internal', 'key external', ],
                dnssec          => false,
        }

        bind::key { 'internal':
                algorithm => 'hmac-md5',
                secret    => 'x2ZKW4SxbeySMK7PmV1Nng==',
                owner     => 'root',
                group     => 'bind',
        }

        bind::key { 'external':
                algorithm => 'hmac-md5',
                secret    => 'kiHB9BR6IeSmUUnp1QMCcA==',
                owner     => 'root',
                group     => 'bind',
        }

        bind::view { 'internal':
                recursion     => true,
                match_clients => [ 'update-internal', 'internal', ],
                zones         => [ 'example.org-internal', ],
        }

        bind::view { 'external':
                recursion     => false,
                match_clients => [ 'update-external', 'external', ],
                zones         => [ 'example.org-external', ],
        }

        bind::acl { 'internal':
                addresses => [
                        '!update-internal',
                        '!update-external',
                        '10.1.1.6',
                ],
        }

        bind::acl { 'external':
                addresses => [
                        '!update-internal',
                        '!update-external',
                        '10.1.1.3',
                ],
        }

        bind::acl { 'update-internal':
                addresses => [
                        'key internal',
                        '10.1.1.6',
                ],
        }

        bind::acl { 'update-external':
                addresses => [
                        'key external',
                        '10.1.1.3',
                ],
        }

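#       Commented out: together with the external record below, this would
#       duplicate the resource title 'IN/A/www.example.org', so Puppet
#       rejects the second declaration.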
#       dns_rr { 'IN/A/www.example.org':
#               ensure  => present,
#               rrdata  => [ '10.1.1.20', ],
#               ttl     => 86400,
#               server  => '10.1.1.6',
#               keyname => 'internal',
#               hmac    => 'hmac-md5',
#               secret  => 'x2ZKW4SxbeySMK7PmV1Nng==',
#       }

        dns_rr { 'IN/A/www.example.org':
                ensure  => present,
                rrdata  => [ '93.184.216.119', ],
                ttl     => 86400,
                server  => '10.1.1.6',
                keyname => 'external',
                hmac    => 'hmac-md5',
                secret  => 'kiHB9BR6IeSmUUnp1QMCcA==',
        }

}

Note: for test purposes the views are set up so that all requests from 10.1.1.3 are considered "external" and all requests from 10.1.1.6 are considered "internal".
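To make the qualifier idea concrete, here is roughly what I imagine; the 'internal-' title prefix is hypothetical syntax that the module does not currently understand:

        # Hypothetical title syntax: a view qualifier would keep the two
        # otherwise-identical resource titles unique.
        dns_rr { 'internal-IN/A/www.example.org':
                ensure  => present,
                rrdata  => [ '10.1.1.20', ],
                ttl     => 86400,
                server  => '10.1.1.6',
                keyname => 'internal',
                hmac    => 'hmac-md5',
                secret  => 'x2ZKW4SxbeySMK7PmV1Nng==',
        }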

There is a workaround, but it's not very satisfying. You can place the dns_rr resources for one of the views in a node definition for a different machine (perhaps there is a secondary NS?).
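A minimal sketch of that workaround, assuming a second managed node (the node name 's1007' is made up for illustration): declaring the internal record from another node's catalog keeps the two identical dns_rr titles out of the same catalog.

node 's1007' {

        # Hypothetical second node: the update still targets the master at
        # 10.1.1.6, but this catalog never sees the external record's title.
        dns_rr { 'IN/A/www.example.org':
                ensure  => present,
                rrdata  => [ '10.1.1.20', ],
                ttl     => 86400,
                server  => '10.1.1.6',
                keyname => 'internal',
                hmac    => 'hmac-md5',
                secret  => 'x2ZKW4SxbeySMK7PmV1Nng==',
        }
}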

I've been considering how to implement this request since you posted it. The problem is ultimately that the precise identity of a record in a zone in a view on a server is a combination of several pieces of information, whereas a Puppet resource has a single-valued title used to identify it. Right now the dns_rr resource parses these pieces out of its title, and the title format is missing the piece you need. I am leaning toward deprecating dns_rr and writing a new resource type that entirely separates the resource title from the semantic information needed to create the record.

What are your thoughts on something like this:

resource_record { 'web server inside view': # Call it whatever you like
  ensure => present,

  # These bits of data are the ensurable properties of the resource
  name   => 'www',
  ttl    => 86400,
  class  => 'IN', # 'IN' should probably just be the default
  type   => 'A',  # 'A' could also be a reasonable default
  rrdata => [ '10.1.1.20' ],

  # These bits of data are really just there so that the agent knows where/how to put the record
  zone    => 'example.org',
  server  => '10.1.1.6',
  keyname => 'internal',
  hmac    => 'hmac-md5',
  secret  => "that's not your real secret in the issue description is it?",
}

resource_record { 'web server outside view':
  ensure  => present,
  name    => 'www',
  ttl     => 86400,
  class   => 'IN',
  type    => 'A',
  rrdata  => [ '93.184.216.119' ],
  zone    => 'example.org',
  server  => '10.1.1.6',
  keyname => 'external',
  hmac    => 'hmac-md5',
  secret  => '...', # likewise not the real secret
}

Actually, class and type probably belong under "identifying information". Blech. DNS is kind of icky.

I have created pull request #9 to address this issue. I have tested it locally and will merge it. Could you please replace the dns_rr resources in your sample with these resource_record resources and let me know whether this change works for your use case, too?

        resource_record { 'www.example.org internal':
                ensure  => present,
                record  => 'www.example.org',
                type    => 'A',
                data    => [ '10.1.1.20', ],
                ttl     => 86400,
                server  => '10.1.1.6',
                keyname => 'internal',
                hmac    => 'hmac-md5',
                secret  => 'x2ZKW4SxbeySMK7PmV1Nng==',
        }

        resource_record { 'www.example.org external':
                ensure  => present,
                record  => 'www.example.org',
                type    => 'A',
                data    => [ '93.184.216.119', ],
                ttl     => 86400,
                server  => '10.1.1.6',
                keyname => 'external',
                hmac    => 'hmac-md5',
                secret  => 'kiHB9BR6IeSmUUnp1QMCcA==',
        }

I have released version 3.0.0 of the module, which contains the new resource_record resource type and other recent changes.

I'm going to go ahead and close this issue.