MCP server for the Spider web crawling and scraping API. Crawl, scrape, search, and extract web data for AI agents, RAG pipelines, and LLMs.
Sign up at spider.cloud and get your API key from the API Keys page.
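If you want to confirm the server starts before wiring it into a client, you can run it directly. This is a quick sketch assuming Node.js and `npx` are installed; the key value is a placeholder:

```shell
# Placeholder key — replace with the key from the spider.cloud API Keys page
export SPIDER_API_KEY=your-api-key

# Launch the MCP server over stdio; it idles waiting for an MCP client to connect
npx -y spider-cloud-mcp
```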
### Claude Desktop

Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "spider": {
      "command": "npx",
      "args": ["-y", "spider-cloud-mcp"],
      "env": {
        "SPIDER_API_KEY": "your-api-key"
      }
    }
  }
}
```

### Claude Code

```shell
claude mcp add spider -- npx -y spider-cloud-mcp
```

Then set your API key in the environment or `.env` file:

```shell
SPIDER_API_KEY=your-api-key
```
### Cursor

Add to your Cursor MCP settings:

```json
{
  "mcpServers": {
    "spider": {
      "command": "npx",
      "args": ["-y", "spider-cloud-mcp"],
      "env": {
        "SPIDER_API_KEY": "your-api-key"
      }
    }
  }
}
```

## Tools

| Tool | Description |
|---|---|
| `spider_crawl` | Crawl a website and extract content from multiple pages |
| `spider_scrape` | Scrape a single page (no crawling) |
| `spider_search` | Search the web with optional page content fetching |
| `spider_links` | Extract all links from a page |
| `spider_screenshot` | Capture page screenshots |
| `spider_unblocker` | Access bot-protected content with anti-bot bypass |
| `spider_transform` | Transform HTML to markdown, text, or other formats |
| `spider_get_credits` | Check your credit balance |
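These tools wrap Spider's HTTP API, so the same operations can be sketched as a direct request without MCP. This is an illustrative sketch, not the server's exact behavior: the endpoint and body fields are assumed from the spider.cloud API docs, while `limit` and `return_format` mirror the `spider_crawl` parameters used in the example prompts below:

```shell
# Assumed endpoint per spider.cloud/docs/api; requires SPIDER_API_KEY in the environment
curl -s https://api.spider.cloud/crawl \
  -H "Authorization: Bearer $SPIDER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "limit": 10, "return_format": "markdown"}'
```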
## AI Tools

These tools require an active AI subscription plan.

| Tool | Description |
|---|---|
| `spider_ai_crawl` | AI-guided crawling with natural language prompts |
| `spider_ai_scrape` | Extract structured data using plain-English prompts |
| `spider_ai_search` | AI-enhanced semantic web search |
| `spider_ai_browser` | Automate browser interactions with natural language |
| `spider_ai_links` | Intelligent link extraction and filtering |
## Example Prompts

- `Use spider_crawl to crawl https://example.com with limit 10 and return_format "markdown"`
- `Use spider_search to search for "latest AI research papers" with fetch_page_content true and num 5`
- `Use spider_ai_scrape on https://news.ycombinator.com with prompt "Extract all article titles, URLs, points, and comment counts as structured JSON"`
- `Use spider_get_credits to check my balance`
Full API documentation: [spider.cloud/docs/api](https://spider.cloud/docs/api)

## License

MIT