
๐Ÿ SnakyScraper

SnakyScraper is a lightweight and Pythonic web scraping toolkit built on top of BeautifulSoup and Requests. It provides an elegant interface for extracting structured HTML and metadata from websites with clean, direct outputs.

Fast. Accurate. Snake-style scraping. ๐Ÿ๐ŸŽฏ


🚀 Features

  • ✅ Extract metadata: title, description, keywords, author, and more
  • ✅ Built-in support for Open Graph, Twitter Card, canonical, and CSRF tags
  • ✅ Extract HTML structures: h1–h6, p, ul, ol, img, and links
  • ✅ Powerful filter() method with class-, ID-, and tag-based selectors
  • ✅ return_html toggle to return clean text or raw HTML
  • ✅ Simple return values: string, list, or dictionary
  • ✅ Powered by BeautifulSoup4 and Requests

📦 Installation

pip install snakyscraper

Requires Python 3.7 or later.


๐Ÿ› ๏ธ Basic Usage

from snakyscraper import SnakyScraper

scraper = SnakyScraper("https://example.com")

# Get the page title
print(scraper.title())  # "Welcome to Example.com"

# Get meta description
print(scraper.description())  # "This is the example meta description."

# Get all <h1> elements
print(scraper.h1())  # ["Welcome", "Latest News"]

# Extract Open Graph metadata
print(scraper.open_graph())  # {"og:title": "...", "og:description": "...", ...}

# Custom filter: find all div.card elements and extract child tags
print(scraper.filter(
    element="div",
    attributes={"class": "card"},
    multiple=True,
    extract=["h1", "p", ".title", "#desc"]
))

🧪 Available Methods

🔹 Page Metadata

scraper.title()
scraper.description()
scraper.keywords()
scraper.keyword_string()
scraper.charset()
scraper.canonical()
scraper.content_type()
scraper.author()
scraper.csrf_token()
scraper.image()

🔹 Open Graph & Twitter Card

scraper.open_graph()
scraper.open_graph("og:title")

scraper.twitter_card()
scraper.twitter_card("twitter:title")
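SnakyScraper handles this lookup for you. For intuition only, here is a minimal stdlib-only sketch of what collecting Open Graph tags involves (an illustration, not the library's actual implementation):

```python
from html.parser import HTMLParser

class OGCollector(HTMLParser):
    """Collect <meta property="og:*" content="..."> pairs into a dict."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        prop = d.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = d.get("content", "")

parser = OGCollector()
parser.feed('<meta property="og:title" content="Example OG Title">'
            '<meta property="og:description" content="An example page.">')
print(parser.og["og:title"])  # Example OG Title
```

Twitter Card tags work the same way, keyed on the name attribute (e.g., twitter:title) instead of property.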

🔹 Headings & Text

scraper.h1()
scraper.h2()
scraper.h3()
scraper.h4()
scraper.h5()
scraper.h6()
scraper.p()

🔹 Lists

scraper.ul()
scraper.ol()

🔹 Images

scraper.images()
scraper.image_details()

🔹 Links

scraper.links()
scraper.link_details()
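Conceptually, links() boils down to walking the document's <a> tags and collecting their href values. A stdlib-only sketch of that idea (illustrative, not SnakyScraper's own code):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from every <a> tag in the document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

collector = LinkCollector()
collector.feed('<a href="https://example.com">Home</a><a href="/about">About</a>')
print(collector.links)  # ['https://example.com', '/about']
```

link_details() presumably returns richer records per link (such as anchor text) rather than bare URLs.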

๐Ÿ” Custom DOM Filtering

Use filter() to target specific DOM elements and extract nested content.

โ–ธ Single element

scraper.filter(
    element="div",
    attributes={"id": "main"},
    multiple=False,
    extract=[".title", "#description", "p"]
)

▸ Multiple elements

scraper.filter(
    element="div",
    attributes={"class": "card"},
    multiple=True,
    extract=["h1", ".subtitle", "#meta"]
)

The extract argument accepts tag names, class selectors (e.g., .title), or ID selectors (e.g., #meta).
Output keys are automatically normalized:
.title → class__title, #meta → id__meta
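The normalization rule can be expressed as a small helper. This is a hypothetical sketch of the convention described above, not SnakyScraper's internal code:

```python
def normalize_key(selector: str) -> str:
    """Map an extract selector to an output key: '.title' becomes
    'class__title', '#meta' becomes 'id__meta', and plain tag names
    pass through unchanged."""
    if selector.startswith("."):
        return "class__" + selector[1:]
    if selector.startswith("#"):
        return "id__" + selector[1:]
    return selector

print(normalize_key(".title"))  # class__title
print(normalize_key("#meta"))   # id__meta
print(normalize_key("h1"))      # h1
```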

▸ Clean Text Output

Set return_html=False to receive clean text instead of raw HTML:

scraper.filter(
    element="p",
    attributes={"class": "dark-text"},
    multiple=True,
    return_html=False
)

📦 Output Example

scraper.title()
# "Welcome to Example.com"

scraper.h1()
# ["Main Heading", "Another Title"]

scraper.open_graph("og:title")
# "Example OG Title"

๐Ÿค Contributing

Contributions are welcome!
Found a bug or want to request a feature? Please open an issue or submit a pull request.


📄 License

MIT License © 2025 SnakyScraper


💡 Why SnakyScraper?

Think of it as your Pythonic sniper: targeting HTML content with precision and elegance.