Extract the TLD/domain/subdomain parts of a URL/hostname against Mozilla's official TLD listing.
var parser = require('tld-extract');
console.log( parser("http://www.google.com") );
console.log( parser("http://google.co.uk") );
/**
* >> { tld: 'com', domain: 'google.com', sub: 'www' }
* >> { tld: 'co.uk', domain: 'google.co.uk', sub: '' }
*/
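The domain field is the registrable domain, which makes it easy to group related URLs. A small sketch built on the output above (the groupByDomain helper is only an illustration, not part of the library):

var parser = require('tld-extract');

// Hypothetical helper: group URLs by their registrable domain
function groupByDomain(urls) {
  var groups = {};
  urls.forEach(function(url) {
    var domain = parser(url).domain;
    (groups[domain] = groups[domain] || []).push(url);
  });
  return groups;
}

console.log( groupByDomain(["http://www.google.com", "http://mail.google.com", "http://google.co.uk"]) );
/**
 * >> { 'google.com': [ 'http://www.google.com', 'http://mail.google.com' ],
 * >>   'google.co.uk': [ 'http://google.co.uk' ] }
 */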
Private TLDs are supported; see the Chromium source code for the specification.
console.log( parser("http://jeanlebon.cloudfront.net"));
/**
* >> { tld : 'net', domain : 'cloudfront.net', sub : 'jeanlebon' };
*/
console.log( parser("http://jeanlebon.cloudfront.net", {allowPrivateTLD : true}));
/**
* >> { tld : 'cloudfront.net', domain : 'jeanlebon.cloudfront.net', sub : '' };
*/
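In practice this matters when deciding whether two hosts belong to the same site: with the default listing every *.cloudfront.net host collapses to cloudfront.net, while allowPrivateTLD keeps them apart. A sketch (the sameSite helper is just an illustration):

var parser = require('tld-extract');

// Hypothetical helper: do two URLs share the same registrable domain?
function sameSite(urlA, urlB, opts) {
  return parser(urlA, opts).domain === parser(urlB, opts).domain;
}

console.log( sameSite("http://alice.cloudfront.net", "http://bob.cloudfront.net") );
console.log( sameSite("http://alice.cloudfront.net", "http://bob.cloudfront.net", {allowPrivateTLD : true}) );
/**
 * >> true   (both resolve to cloudfront.net)
 * >> false  (alice.cloudfront.net vs bob.cloudfront.net)
 */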
By default, an unknown TLD throws an exception; you can allow unknown TLDs and use tld-extract as a plain parser with the allowUnknownTLD option.
parse("http://nowhere.local")
>> throws /Invalid TLD/
parse("http://nowhere.local", {allowUnknownTLD : true}))
>> { tld : 'local', domain : 'nowhere.local', sub : '' }
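Since the default behaviour is to throw, tld-extract can also serve as a hostname validator. A minimal sketch (isValidDomain is not part of the library):

var parser = require('tld-extract');

// Hypothetical helper: true if the URL carries a known public TLD
function isValidDomain(url) {
  try {
    parser(url);
    return true;
  } catch(err) {
    return false; // /Invalid TLD/
  }
}

console.log( isValidDomain("http://www.google.com") );   // >> true
console.log( isValidDomain("http://nowhere.local") );    // >> false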
- no dependencies
- really fast
- full code coverage
- easy to read (10 lines)
- easy to update against the Mozilla TLD source list
You can update the local hash table from the remote TLD listing with npm run update.
tld-extract is a port of the yks/PHP library.

Compared to alternative packages:
- tldextract => bad API (no need for async, the "domain" property is wrong), needless dependencies
- tld => nothing bad, just a bit outdated
- tld.js => no sane way to prove/trust/update the TLD listing