XML Sitemap parser

An easy-to-use PHP library to parse XML Sitemaps compliant with the Sitemaps.org protocol.

The Sitemaps.org protocol is the leading standard and is supported by Google, Bing, Yahoo, Ask and many others.

Features

  • Basic parsing
  • Recursive parsing
  • Custom User-Agent string
  • Proxy support
  • Offline parsing

Formats supported

  • XML .xml
  • Compressed XML .xml.gz
  • Robots.txt rule sheet robots.txt
  • Line-separated text .txt [disabled by default]
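
For reference, a minimal XML sitemap (.xml) compliant with the Sitemaps.org protocol looks like this (the URL and tag values are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
        <loc>https://www.example.com/</loc>
        <lastmod>2024-01-01</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
    </url>
</urlset>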

Requirements

Installation

The library is available for installation via Composer. Add this to your composer.json file:

{
    "require": {
        "vipnytt/sitemapparser": "1.0.*"
    }
}

Then run composer update.
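
Alternatively, assuming Composer is available on your command line, you can require the package directly:

composer require vipnytt/sitemapparser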

Getting Started

Basic example

Returns a list of URLs only.

use vipnytt\SitemapParser;
use vipnytt\SitemapParser\Exceptions\SitemapParserException;

try {
    $parser = new SitemapParser();
    $parser->parse('https://www.google.com/sitemap.xml');
    foreach ($parser->getURLs() as $url => $tags) {
        echo $url . '<br>';
    }
} catch (SitemapParserException $e) {
    echo $e->getMessage();
}

Advanced

Returns all available tags for both sitemaps and URLs.

use vipnytt\SitemapParser;
use vipnytt\SitemapParser\Exceptions\SitemapParserException;

try {
    $parser = new SitemapParser('MyCustomUserAgent');
    $parser->parse('http://php.net/sitemap.xml');
    foreach ($parser->getSitemaps() as $url => $tags) {
        echo 'Sitemap<br>';
        echo 'URL: ' . $url . '<br>';
        echo 'LastMod: ' . $tags['lastmod'] . '<br>';
        echo '<hr>';
    }
    foreach ($parser->getURLs() as $url => $tags) {
        echo 'URL: ' . $url . '<br>';
        echo 'LastMod: ' . $tags['lastmod'] . '<br>';
        echo 'ChangeFreq: ' . $tags['changefreq'] . '<br>';
        echo 'Priority: ' . $tags['priority'] . '<br>';
        echo '<hr>';
    }
} catch (SitemapParserException $e) {
    echo $e->getMessage();
}
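
The lastmod, changefreq and priority tags are optional in the Sitemaps.org protocol, so a given entry may not carry all of them. A hedged sketch of a more defensive loop, assuming a missing optional tag is simply absent from the $tags array and using PHP 7's null coalescing operator:

use vipnytt\SitemapParser;
use vipnytt\SitemapParser\Exceptions\SitemapParserException;

try {
    $parser = new SitemapParser('MyCustomUserAgent');
    $parser->parse('http://php.net/sitemap.xml');
    foreach ($parser->getURLs() as $url => $tags) {
        // Optional tags fall back to 'n/a' (assumes absent keys, not pre-filled defaults)
        echo 'URL: ' . $url . '<br>';
        echo 'LastMod: ' . ($tags['lastmod'] ?? 'n/a') . '<br>';
        echo 'ChangeFreq: ' . ($tags['changefreq'] ?? 'n/a') . '<br>';
        echo 'Priority: ' . ($tags['priority'] ?? 'n/a') . '<br>';
        echo '<hr>';
    }
} catch (SitemapParserException $e) {
    echo $e->getMessage();
}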

Recursive

Parses any sitemap detected along the way to build a complete list of URLs.

use vipnytt\SitemapParser;
use vipnytt\SitemapParser\Exceptions\SitemapParserException;

try {
    $parser = new SitemapParser('MyCustomUserAgent');
    $parser->parseRecursive('http://www.google.com/robots.txt');
    echo '<h2>Sitemaps</h2>';
    foreach ($parser->getSitemaps() as $url => $tags) {
        echo 'URL: ' . $url . '<br>';
        echo 'LastMod: ' . $tags['lastmod'] . '<br>';
        echo '<hr>';
    }
    echo '<h2>URLs</h2>';
    foreach ($parser->getURLs() as $url => $tags) {
        echo 'URL: ' . $url . '<br>';
        echo 'LastMod: ' . $tags['lastmod'] . '<br>';
        echo 'ChangeFreq: ' . $tags['changefreq'] . '<br>';
        echo 'Priority: ' . $tags['priority'] . '<br>';
        echo '<hr>';
    }
} catch (SitemapParserException $e) {
    echo $e->getMessage();
}
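
For context: parseRecursive() can start from a robots.txt file because the robots.txt format allows Sitemap: directives. A minimal robots.txt exposing a sitemap might look like this (the URL is illustrative):

User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml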

Parsing of line-separated text strings

Note: This is disabled by default to avoid false positives when XML is expected but plain text is returned.

To disable strict standards, pass ['strict' => false] as the second constructor parameter.

use vipnytt\SitemapParser;
use vipnytt\SitemapParser\Exceptions\SitemapParserException;

try {
    $parser = new SitemapParser('MyCustomUserAgent', ['strict' => false]);
    $parser->parse('https://www.xml-sitemaps.com/urllist.txt');
    foreach ($parser->getSitemaps() as $url => $tags) {
        echo $url . '<br>';
    }
    foreach ($parser->getURLs() as $url => $tags) {
        echo $url . '<br>';
    }
} catch (SitemapParserException $e) {
    echo $e->getMessage();
}
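
For reference, the plain-text sitemap format defined by Sitemaps.org is simply one absolute URL per line in a UTF-8 encoded file (the URLs below are illustrative):

https://www.example.com/
https://www.example.com/about
https://www.example.com/contact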

Additional examples

More examples are available in the examples directory.