Mapollo 11 is a local-first exploration companion that helps you gather and review the best spots from Google Maps. Use the crawler CLI to fetch highly rated locations into a shared SQLite database and review them from a lightweight web UI.
- Crawl Google Maps for places that match your minimum rating and review count.
- Deduplicated storage with automatic updates when a place is discovered again.
- Inline review UI to triage places, update their status, and capture personal notes.
- Sortable table to quickly surface newly found gems or the top-rated heavy hitters.
- Node.js 18+
- A Google Maps API key with access to the Places API
```bash
npm install
```

This installs the required runtime dependencies and creates `package-lock.json`.
The crawler expects a Google Maps API key. Set it via the `GOOGLE_MAPS_API_KEY` environment variable.
To keep secrets out of your shell history, create a `.env` file alongside the code:

```
GOOGLE_MAPS_API_KEY=your_api_key_here
```

The `.env` file is automatically loaded by the crawler script.
All data is stored in data/mapollo11.db, a SQLite database that is safe to sync with Dropbox or any other file sync tool. The schema is created on demand the first time you run the crawler or start the server.
Table columns:
| Column | Description |
|---|---|
| `place_id` | Google Maps unique place identifier (primary key) |
| `name` | Place display name |
| `user_ratings_total` | Number of public reviews |
| `rating` | Average rating |
| `created_at` | Timestamp when the place was first saved |
| `updated_at` | Timestamp for the most recent update |
| `triage_status` | Your triage status (`to_review`, `ignore`, `to_visit`, `visited`) |
| `triage_comment` | Personal notes |
| `url` | Direct Google Maps link |
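Since the schema is created on demand, the setup likely amounts to one idempotent statement. A minimal sketch of what that could look like with `better-sqlite3`, reconstructed from the column list above (the table name `places` and the exact column types are assumptions; the real DDL lives in the project's database helpers):

```js
// Sketch only: schema reconstructed from the column table above.
// Table name and column types are assumptions, not the project's actual DDL.
const Database = require('better-sqlite3');
const db = new Database('data/mapollo11.db');

db.exec(`
  CREATE TABLE IF NOT EXISTS places (
    place_id           TEXT PRIMARY KEY,  -- Google Maps place identifier
    name               TEXT NOT NULL,
    user_ratings_total INTEGER,
    rating             REAL,
    created_at         TEXT DEFAULT CURRENT_TIMESTAMP,
    updated_at         TEXT DEFAULT CURRENT_TIMESTAMP,
    triage_status      TEXT DEFAULT 'to_review',
    triage_comment     TEXT,
    url                TEXT
  )
`);
```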
Run the crawler with your preferred geographic envelope and filters. Provide either `--area` or an explicit `--center` along with `--square-m` to describe the square to crawl. Results are filtered locally to honor your minimum rating and review thresholds. Omit `--tile-m` to use the 1,000 meter default tile size.
```bash
npm run crawl -- \
  --area "Lisbon, Portugal" \
  --minRating 4.7 \
  --minReviews 500 \
  --tile-m 1000
```

When you already know the exact coordinates you want to cover, specify the center point and the square size in meters:
```bash
npm run crawl -- \
  --center 38.7223,-9.1393 \
  --square-m 20000 \
  --minRating 4.7 \
  --minReviews 500 \
  --tile-m 1000
```

The crawler automatically loads place types from `types.json`. Adjust that file to control which categories are crawled.
Each saved place records the matched type, making it easy to understand which search category surfaced a result when reviewing
the database or API responses.
- Located in the project root, `types.json` contains the array of Google Places API supported types that the crawler cycles through during a run (a sample file is shown below).
- You can edit the list to focus on the categories that matter most to you. Removing entries skips those categories entirely; adding new types (one string per type) expands the crawl.
- The crawler reads the file once at startup, so restart the command whenever you change it.
- Use Google’s documentation to verify valid identifiers before adding them; invalid types will cause the API to return errors.
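For reference, a trimmed-down `types.json` might look like the following. These entries are illustrative rather than the project's defaults; each string must be a valid Places API type:

```json
["restaurant", "cafe", "museum", "art_gallery", "park"]
```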
| Flag | Description |
|---|---|
| `--area` | Human-friendly location to geocode. |
| `--center` | Latitude/longitude pair (`lat,lng`) describing the crawl center. |
| `--square-m` | Width of the crawl square in meters (required with `--center`, used as a fallback for point geocodes). |
| `--tile-m` | Tile size in meters used to subdivide the crawl area (defaults to 1000). |
| `--minRating` | Minimum average rating (defaults to 0). |
| `--minReviews` | Minimum number of public reviews (defaults to 0). |
| `--types` | Comma-separated list of place types to crawl. Entries must exist in `types.json`. |
The crawler logs each processed place and saves it immediately. Re-running the crawl updates existing entries rather than duplicating them.
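That update-instead-of-duplicate behavior maps naturally onto a SQLite upsert keyed on `place_id`. A rough sketch of what the save step could look like (the actual statement in `scripts/` may differ, and the values below are placeholders):

```js
// Illustrative upsert keyed on place_id; column names follow the schema
// table above, but the crawler's real statement may differ.
const save = db.prepare(`
  INSERT INTO places (place_id, name, rating, user_ratings_total, url)
  VALUES (@place_id, @name, @rating, @user_ratings_total, @url)
  ON CONFLICT(place_id) DO UPDATE SET
    name               = excluded.name,
    rating             = excluded.rating,
    user_ratings_total = excluded.user_ratings_total,
    updated_at         = CURRENT_TIMESTAMP
`);

save.run({
  place_id: 'example_place_id', // placeholder, not a real place
  name: 'Example Café',
  rating: 4.8,
  user_ratings_total: 1234,
  url: 'https://maps.google.com/?cid=123',
});
```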
Google's Places Text Search API bills each request, so it pays to narrow your crawl before you start:
- Limit the type list. Edit `types.json` or pass `--types` with a comma-separated list (for example, `--types museum,art_gallery`) so you only query categories that matter to you.
- Shrink the crawl envelope. Use a smaller `--square-m` or pick a more precise `--center` to avoid scanning areas you do not care about.
- Adjust tile sizes deliberately. Increasing `--tile-m` reduces the number of tiles (and therefore search calls), which is useful once you have already found the obvious hotspots; the sketch below shows how quickly tile counts grow.
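To get a feel for the numbers, here is a back-of-the-envelope estimate. It assumes the crawler covers the square with a simple grid and issues one Text Search call per tile per type, which is a simplification rather than a documented guarantee:

```js
// Back-of-the-envelope request estimate. Assumes one search call per
// tile per type; pagination and retries would change the exact count.
const squareM = 20_000; // --square-m
const tileM = 1_000;    // --tile-m (the default)
const typeCount = 5;    // entries in types.json (or --types)

const tilesPerSide = Math.ceil(squareM / tileM);
const calls = tilesPerSide ** 2 * typeCount;
console.log(`${tilesPerSide} x ${tilesPerSide} tiles ≈ ${calls} search calls`);
// 20 x 20 tiles ≈ 2000 search calls; doubling --tile-m to 2000 cuts it to 500.
```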
```bash
npm start
```

The server runs on http://localhost:3000. Open the page to view your collection.
- Change the sort order using the dropdown above the table.
- Update a place's status inline via the dropdown in each row.
- Click Add/Edit in the Comment column to open a dialog where you can write notes.
- Use the Open link to jump straight to Google Maps.
- The server is a simple Express app that serves both the API and static files.
- SQLite access uses `better-sqlite3` for fast, synchronous queries; a combined sketch of the server and database wiring follows this list.
- The crawler is intentionally single-threaded to respect Google API quotas.
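The notes above suggest a very small server. As a rough sketch only, assuming a `places` table and an `/api/places` route (neither the route name nor the table name is confirmed by this README):

```js
// Rough sketch of the server shape described above. The /api/places
// route and the places table name are assumptions, not documented API.
const express = require('express');
const Database = require('better-sqlite3');

const db = new Database('data/mapollo11.db');
const app = express();

app.use(express.static('public')); // web UI assets

app.get('/api/places', (req, res) => {
  const rows = db
    .prepare('SELECT * FROM places ORDER BY rating DESC')
    .all();
  res.json(rows);
});

app.listen(3000, () => console.log('http://localhost:3000'));
```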
```
mapollo-11/
├── public/    # Static assets for the web UI (HTML, CSS, JS)
├── scripts/   # Google Maps crawler entry point
├── src/       # Express server and database helpers
├── data/      # SQLite database is created here at runtime
└── README.md  # Usage instructions and crawler tips
```
Use `npm start` to run the Express server and `npm run crawl -- <flags>` to populate the database with your preferred search queries.
- Persist crawl history to track rating trends over time.
- Add filters to the UI (e.g., show only `to_visit` places).
- Support exporting selected places to trip itineraries.
Happy exploring! 🚀