temoto/robotstxt

Parse rules for a given user agent

yields opened this issue · 1 comment

I'm wondering if it's worth optimizing memory usage a bit by parsing just a single ruleset for a given user agent, so the signature might be one of:

rules, err := robotstxt.ForAgent(buf, "mybot")
rules, err := robotstxt.ParseAgent(buf, "mybot")

The parser would skip all non-matching user agents (except for *). If a ruleset for mybot was found, it would return that ruleset; otherwise it would return the default * ruleset.
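
For reference, here is a minimal sketch of how that behavior can be emulated today on top of the library's existing FromBytes and FindGroup API (forAgent is a hypothetical wrapper name; the point of the proposal is that the parser itself would skip non-matching groups rather than building them all and discarding most of them afterwards):

package main

import (
	"fmt"

	"github.com/temoto/robotstxt"
)

// forAgent emulates the proposed helper using the current API.
// Note it still parses and stores every group; the proposal is
// to avoid that work inside the parser itself.
func forAgent(buf []byte, agent string) (*robotstxt.Group, error) {
	robots, err := robotstxt.FromBytes(buf)
	if err != nil {
		return nil, err
	}
	// FindGroup already falls back to the "*" group when no
	// group matches the given agent.
	return robots.FindGroup(agent), nil
}

func main() {
	buf := []byte("User-agent: mybot\nDisallow: /private\n\nUser-agent: *\nDisallow:\n")
	rules, err := forAgent(buf, "mybot")
	if err != nil {
		panic(err)
	}
	fmt.Println(rules.Test("/private")) // false: mybot's group disallows it
}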

The method could also accept multiple user agents, so for example a search engine crawler might do:

rules, err := robotstxt.Parse(buf, "Searchbot", "Googlebot")
// rules is the "Searchbot" ruleset if present,
// falling back to "Googlebot",
// then falling back to "*"
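
Continuing the sketch above, the multi-agent fallback can also be expressed against the existing API (findGroupForAgents is a hypothetical name; it leans on the fact that FindGroup already falls back to the "*" group, so comparing against that group reveals whether a more specific match occurred):

// findGroupForAgents tries each agent in priority order and returns
// the first group that matched more specifically than "*"; if none
// did, it returns the default "*" group (nil if that group is absent).
func findGroupForAgents(robots *robotstxt.RobotsData, agents ...string) *robotstxt.Group {
	star := robots.FindGroup("*")
	for _, agent := range agents {
		if g := robots.FindGroup(agent); g != star {
			return g
		}
	}
	return star
}

// Usage: rules := findGroupForAgents(robots, "Searchbot", "Googlebot")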

LMK if you would consider a PR that implements this feature.