MCP (Model Context Protocol) server for Scrapbox (Cosense) integration.
- scrapbox_list_pages: List pages from a Scrapbox project
- scrapbox_get_page: Get detailed information about a specific page
- scrapbox_get_page_text: Get the text content of a specific page
- scrapbox_search_pages: Search for pages by title or description
pnpm install
pnpm build

You can configure the server using multiple methods:
export SCRAPBOX_BASE_URL="https://my-scrapbox.example.com" # HTTP also supported
export SCRAPBOX_DEFAULT_PROJECT="my-project"
export SCRAPBOX_TIMEOUT="30000"
export SCRAPBOX_PROXY_URL="socks5://127.0.0.1:1080"
export SCRAPBOX_HOST_HEADER="scrapbox.io"
export SCRAPBOX_CONFIG_PATH="/path/to/config.json"

You can also use standard proxy environment variables:
export HTTPS_PROXY="socks5://127.0.0.1:1080"
export HTTP_PROXY="socks5://127.0.0.1:1080"

Create a scrapbox.config.json file in the project root:
{
"baseUrl": "https://scrapbox.io",
"defaultProjectName": "my-project",
"timeout": 30000,
"proxyUrl": "socks5://127.0.0.1:1080",
"hostHeader": "scrapbox.io"
}

Note: baseUrl supports both HTTP and HTTPS protocols. Examples:
- https://scrapbox.io (default, secure)
- http://localhost:3000 (local development)
- http://192.168.1.100:8080 (internal network)
You can also specify a custom config file path with SCRAPBOX_CONFIG_PATH.
Each tool accepts optional parameters that override the configuration:
// Override baseUrl, proxyUrl, and hostHeader for a specific call
scrapbox_list_pages({
projectName: "specific-project",
baseUrl: "https://custom-scrapbox.example.com", // HTTP also supported
proxyUrl: "socks5://192.168.1.100:1080",
hostHeader: "scrapbox.io"
})
// Use HTTP for local development
scrapbox_list_pages({
projectName: "test-project",
baseUrl: "http://localhost:3000"
})
// Use custom Host header for SNI bypass
scrapbox_search_pages({
projectName: "windymelt",
query: "test",
hostHeader: "scrapbox.io"
})
// Disable the proxy for a specific call (empty string)
scrapbox_get_page({
projectName: "windymelt",
pageTitle: "example",
proxyUrl: ""
})

Priority: Tool parameters > Environment variables > Configuration file > Defaults
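The same priority order can be sketched in TypeScript. This is only an illustration of the documented merge order; the field names follow scrapbox.config.json above, and the actual implementation may differ:

import { readFileSync } from "node:fs";

interface ScrapboxConfig {
  baseUrl: string;
  defaultProjectName?: string;
  timeout: number;
  proxyUrl?: string;
  hostHeader?: string;
}

// Drops undefined entries so they don't clobber lower-priority values when spread.
function definedOnly<T extends object>(obj: Partial<T>): Partial<T> {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== undefined)
  ) as Partial<T>;
}

// Sketch of the documented priority:
// tool parameters > environment variables > configuration file > defaults.
function resolveConfig(toolParams: Partial<ScrapboxConfig> = {}): ScrapboxConfig {
  const defaults: ScrapboxConfig = { baseUrl: "https://scrapbox.io", timeout: 30000 };

  // Configuration file: scrapbox.config.json or the path in SCRAPBOX_CONFIG_PATH.
  let fileConfig: Partial<ScrapboxConfig> = {};
  try {
    const path = process.env.SCRAPBOX_CONFIG_PATH ?? "scrapbox.config.json";
    fileConfig = JSON.parse(readFileSync(path, "utf8"));
  } catch {
    // No config file is fine; fall through to environment variables and defaults.
  }

  // Environment variables, including the standard proxy variables.
  const envConfig: Partial<ScrapboxConfig> = {
    baseUrl: process.env.SCRAPBOX_BASE_URL,
    defaultProjectName: process.env.SCRAPBOX_DEFAULT_PROJECT,
    timeout: process.env.SCRAPBOX_TIMEOUT ? Number(process.env.SCRAPBOX_TIMEOUT) : undefined,
    proxyUrl: process.env.SCRAPBOX_PROXY_URL ?? process.env.HTTPS_PROXY ?? process.env.HTTP_PROXY,
    hostHeader: process.env.SCRAPBOX_HOST_HEADER,
  };

  // Later spreads win, so tool parameters override everything else.
  return { ...defaults, ...definedOnly(fileConfig), ...definedOnly(envConfig), ...definedOnly(toolParams) };
}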
The server supports both HTTP and HTTPS protocols:
- HTTPS (default): Secure connections to https://scrapbox.io and other HTTPS endpoints
- HTTP: Unencrypted connections for local development, internal networks, or specific requirements
Common use cases for HTTP:
- Local development servers: http://localhost:3000
- Internal network services: http://192.168.1.100:8080
- Non-SSL corporate proxies
- Testing environments
Security Note: Use HTTPS in production environments to protect data in transit.
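As a rough illustration of how the URL scheme can drive transport selection (using Node's built-in http/https modules; not necessarily how this server is implemented):

import * as http from "node:http";
import * as https from "node:https";

// Pick the request implementation from the baseUrl scheme.
function requestModule(baseUrl: string): typeof http | typeof https {
  const { protocol } = new URL(baseUrl);
  if (protocol === "https:") return https;
  if (protocol === "http:") return http; // unencrypted: local development or internal networks only
  throw new Error(`Unsupported protocol: ${protocol}`);
}

// requestModule("http://localhost:3000") returns the plain http module;
// requestModule("https://scrapbox.io") returns the https module.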
You can specify a custom Host header for requests, which is useful for:
- SNI (Server Name Indication) bypass: When using a proxy, the Host header can be different from the actual hostname in the URL
- Virtual host routing: Access different virtual hosts on the same IP address
- CDN bypass: Direct access to origin servers while maintaining proper routing
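At the transport level this roughly corresponds to overriding the Host header (and, for HTTPS, the TLS servername) on the outgoing request. The sketch below uses Node's https module with placeholder values and is illustrative only:

import * as https from "node:https";

// Placeholder values: connect to an IP address while presenting a different Host header.
const req = https.request(
  "https://192.168.1.100/api/pages/my-project",
  {
    headers: { Host: "scrapbox.io" }, // overrides the Host derived from the URL
    servername: "scrapbox.io",        // TLS SNI value sent during the handshake
  },
  (res) => {
    console.log("status:", res.statusCode);
    res.resume(); // discard the body in this sketch
  }
);
req.end();

To configure the header for this server itself, use the environment variable or a tool parameter: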
# Environment variable
export SCRAPBOX_HOST_HEADER="scrapbox.io"
# Tool parameter (highest priority)
scrapbox_list_pages({
projectName: "windymelt",
baseUrl: "https://192.168.1.100",
hostHeader: "scrapbox.io"
})

The server supports SOCKS5 proxies for accessing Scrapbox through firewalls or for privacy:
# Using SCRAPBOX_PROXY_URL
export SCRAPBOX_PROXY_URL="socks5://127.0.0.1:1080"
# Or using standard proxy environment variables
export HTTPS_PROXY="socks5://127.0.0.1:1080"
export HTTP_PROXY="socks5://127.0.0.1:1080"

Supported proxy formats:
- socks5://host:port
- socks4://host:port
- socks://host:port (defaults to SOCKS5)
Priority: Tool parameters > SCRAPBOX_PROXY_URL > HTTPS_PROXY > HTTP_PROXY > config file
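SOCKS support of this kind is commonly built on an agent such as the third-party socks-proxy-agent package. The snippet below is a sketch of that general approach, not a description of this server's internals:

import * as https from "node:https";
import { SocksProxyAgent } from "socks-proxy-agent";

// Route a request through a SOCKS5 proxy (placeholder addresses).
const agent = new SocksProxyAgent("socks5://127.0.0.1:1080");

https.get("https://scrapbox.io/api/pages/my-project", { agent }, (res) => {
  console.log("status via proxy:", res.statusCode);
  res.resume(); // discard the body in this sketch
});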
pnpm dev
pnpm start

This server uses Scrapbox's internal API endpoints:
- /api/pages/:projectName - List pages
- /api/pages/:projectName/:pageTitle - Get page details
- /api/pages/:projectName/:pageTitle/text - Get page text
Note: These are internal APIs and may change without notice.
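For reference, the endpoints can be exercised directly with the built-in fetch in Node 18+. Project and page names below are placeholders, and the response shapes are not guaranteed since the APIs are internal:

// Node 18+ provides fetch globally; project and page names are placeholders.
async function demo(): Promise<void> {
  const base = "https://scrapbox.io";
  const project = "my-project";
  const title = encodeURIComponent("Some Page");

  const list = await fetch(`${base}/api/pages/${project}`);               // list pages
  const page = await fetch(`${base}/api/pages/${project}/${title}`);      // page details (JSON)
  const text = await fetch(`${base}/api/pages/${project}/${title}/text`); // raw page text

  console.log(list.status, page.status, text.status);
  console.log((await text.text()).slice(0, 120));
}

demo().catch(console.error);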
scrapbox_list_pages: Lists pages from a Scrapbox project.
Parameters:
- projectName (optional if set in config): Name of the Scrapbox project
- skip (optional): Number of pages to skip (default: 0)
- limit (optional): Maximum number of pages to return (default: 100)
- baseUrl (optional): Base URL of the Scrapbox instance (default: from config or https://scrapbox.io). Supports both HTTP and HTTPS
- proxyUrl (optional): SOCKS5 proxy URL (e.g. socks5://127.0.0.1:1080)
- hostHeader (optional): Custom Host header for the request (e.g. scrapbox.io)
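For example, a paginated call (the project name is a placeholder):

// Fetch the second batch of 50 pages
scrapbox_list_pages({
  projectName: "my-project",
  skip: 50,
  limit: 50
})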
scrapbox_get_page: Gets detailed information about a specific page.
Parameters:
- projectName (optional if set in config): Name of the Scrapbox project
- pageTitle (required): Title of the page to retrieve
- baseUrl (optional): Base URL of the Scrapbox instance (default: from config or https://scrapbox.io). Supports both HTTP and HTTPS
- proxyUrl (optional): SOCKS5 proxy URL (e.g. socks5://127.0.0.1:1080)
- hostHeader (optional): Custom Host header for the request (e.g. scrapbox.io)
scrapbox_get_page_text: Gets the text content of a specific page.
Parameters:
- projectName (optional if set in config): Name of the Scrapbox project
- pageTitle (required): Title of the page to retrieve text from
- baseUrl (optional): Base URL of the Scrapbox instance (default: from config or https://scrapbox.io). Supports both HTTP and HTTPS
- proxyUrl (optional): SOCKS5 proxy URL (e.g. socks5://127.0.0.1:1080)
- hostHeader (optional): Custom Host header for the request (e.g. scrapbox.io)
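Example call (values are placeholders):

// Fetch the raw text of a page
scrapbox_get_page_text({
  projectName: "my-project",
  pageTitle: "Meeting Notes"
})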
scrapbox_search_pages: Searches for pages using either the server-side API (Elasticsearch) or client-side search.
Parameters:
- projectName (optional if set in config): Name of the Scrapbox project
- query (required): Search query to match against page content
  - Supports multiple words: scrapbox gyazo
  - Exclusion with -: scrapbox -java
  - Exact phrase with quotes: "exact phrase"
- limit (optional): Maximum number of results to return (default: 1000)
- baseUrl (optional): Base URL of the Scrapbox instance (default: from config or https://scrapbox.io). Supports both HTTP and HTTPS
- proxyUrl (optional): SOCKS5 proxy URL (e.g. socks5://127.0.0.1:1080)
- hostHeader (optional): Custom Host header for the request (e.g. scrapbox.io)
- useAPI (optional): Use server-side search API (default: true)
Search Methods:
- Server-side API (default): Uses Elasticsearch backend for advanced search
  - Returns matched lines and context
  - Better performance for large projects
  - Supports complex queries
- Client-side search (useAPI: false): Downloads pages and searches locally
  - Works offline
  - May be slower for large projects
  - Simple substring matching
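Example calls (project names are placeholders) showing the query syntax and the client-side fallback:

// Server-side search with exclusion and phrase syntax
scrapbox_search_pages({
  projectName: "my-project",
  query: "scrapbox -java \"exact phrase\"",
  limit: 50
})

// Force the client-side substring search
scrapbox_search_pages({
  projectName: "my-project",
  query: "gyazo",
  useAPI: false
})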
Tests SOCKS5 proxy connection for debugging.
Parameters:
- proxyUrl (optional): SOCKS5 proxy URL to test (e.g. socks5://127.0.0.1:1080)
- baseUrl (optional): Base URL of the Scrapbox instance (default: https://scrapbox.io)
- hostHeader (optional): Custom Host header for the request (e.g. scrapbox.io)
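Assuming the tool is exposed under a name like scrapbox_test_proxy (the exact name may differ; check the server's tool listing), a call might look like:

// Tool name here is assumed; check the server's tool listing for the actual name
scrapbox_test_proxy({
  proxyUrl: "socks5://127.0.0.1:1080",
  baseUrl: "https://scrapbox.io"
})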
MIT