
Web Crawl Integration

An MCP server for connecting web crawler data and archives to AI language models

MCP Server Webcrawl

Website | GitHub | Docs

mcp-server-webcrawl

Bridge the gap between your web crawl and AI language models using the Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server provides a full-text search interface with boolean support, plus resource filtering by type, HTTP status, and more.

mcp-server-webcrawl provides the LLM with a complete menu for searching your web content, and works with a variety of web crawlers, including wget, WARC archives, InterroBot, Katana, and SiteOne; configuration for each is covered below.

mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>= 3.10). It is installed from the command line via pip:

pip install mcp-server-webcrawl

Features

  • Claude Desktop ready
  • Full-text search support
  • Filter by type, status, and more
  • Multi-crawler compatible
  • Quick MCP configuration
  • ChatGPT support coming soon

MCP Configuration

From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.

You can set up more mcp-server-webcrawl connections under mcpServers as needed.

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": [varies by crawler, see below]
    }
  }
}
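
For example, a complete configuration for a wget crawl might look like the following; the datasrc path is illustrative, so substitute your own:

{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}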

Important Note for macOS Users

On macOS, you must use the absolute path to the mcp-server-webcrawl executable in the command field, rather than just the command name. This differs from the Windows configuration, where the command name alone suffices.

For example:

"command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",

To find the absolute path of the mcp-server-webcrawl executable on your system:

  1. Open Terminal
  2. Run which mcp-server-webcrawl
  3. Copy the full path returned and use it in your config file
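
Putting it together, a macOS webcrawl entry might look like this; both paths are illustrative examples, not defaults:

"webcrawl": {
  "command": "/Users/yourusername/.local/bin/mcp-server-webcrawl",
  "args": ["--crawler", "wget", "--datasrc", "/Users/yourusername/archives/"]
}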

wget (using --mirror)

The datasrc argument should be set to the parent directory of the mirrors.

"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]

WARC

The datasrc argument should be set to the parent directory of the WARC files.

"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]

InterroBot

The datasrc argument should be set to the direct path to the database.

"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]

Katana

The datasrc argument should be set to the parent directory of the text cache files.

"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]

SiteOne (using archiving)

The datasrc argument should be set to the parent directory of the archives; archiving must be enabled when running the crawl.

"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]