A Model Context Protocol (MCP) server that provides seamless integration with the CrowdCent Challenge API, enabling AI assistants to interact with CrowdCent's prediction challenges directly.
This MCP server allows AI assistants like Claude Desktop and Cursor to:
- Access and manage CrowdCent challenges
- Download training and inference datasets
- Submit predictions
- Monitor submissions
- Access meta models
## Prerequisites

- Python 3.12+
- uv (Python package manager)
- CrowdCent API key (get one at crowdcent.com)
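Before installing, you can sanity-check the first two prerequisites from any Python shell. This helper is a convenience sketch, not part of the repository:

```python
import shutil
import sys

def check_prereqs() -> dict:
    """Report whether the listed prerequisites appear to be satisfied."""
    return {
        # Python 3.12 or newer is required
        "python_3_12_plus": sys.version_info >= (3, 12),
        # uv must be installed and discoverable on PATH
        "uv_on_path": shutil.which("uv") is not None,
    }

if __name__ == "__main__":
    for name, ok in check_prereqs().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```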
## Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/crowdcent/crowdcent-mcp.git
   cd crowdcent-mcp
   ```

2. (Optional) Install dependencies with uv:

   ```bash
   uv venv
   uv pip install -e .
   ```

3. Create a `.env` file in the project root:

   ```
   CROWDCENT_API_KEY=your_api_key_here
   ```

## Configuration

### Cursor

Add the following to your Cursor settings (`~/.cursor/mcp.json` or through the Cursor Settings UI):
```json
{
  "mcpServers": {
    "crowdcent-mcp": {
      "command": "/path/to/.cargo/bin/uv",
      "args": [
        "run",
        "--directory",
        "/path/to/crowdcent-mcp",
        "server.py"
      ]
    }
  }
}
```

Replace `/path/to/` with your actual paths. For example:
- `/home/username/.cargo/bin/uv` on Linux
- `/Users/username/.cargo/bin/uv` on macOS
- `C:\Users\username\.cargo\bin\uv` on Windows
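To confirm the server will actually see your API key before wiring it into an editor, you can parse the `.env` file yourself. This is a minimal stdlib sketch; `load_env` is a hypothetical helper, and the server itself may load the file differently (e.g. via python-dotenv):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> dict:
    """Minimal .env parser: KEY=value lines, '#' comments ignored."""
    env = {}
    p = Path(path)
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip()
    return env

if __name__ == "__main__":
    # Fall back to the process environment if .env has no key
    key = load_env().get("CROWDCENT_API_KEY") or os.getenv("CROWDCENT_API_KEY")
    print("API key found" if key else "CROWDCENT_API_KEY is not set")
```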
### Claude Desktop

For Claude Desktop, add the following to your configuration file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`
```json
{
  "mcpServers": {
    "crowdcent-mcp": {
      "command": "uv",
      "args": [
        "run",
        "--directory",
        "/path/to/crowdcent-mcp",
        "server.py"
      ]
    }
  }
}
```

## Usage

After configuring the MCP server in your AI assistant, you can use natural language to interact with CrowdCent:
"Download data, train a model, and submit predictions to the crowdcent challenge!"
"Download the crowdcent training data and do some EDA"
"Create time series folds for the crowdcent challenge and train/evaluate a model"
## Troubleshooting

**Server not starting:**

- Ensure uv is installed and on your PATH
- Check that the directory path in your config is correct
- Verify that `server.py` has execute permissions

**Authentication errors:**

- Make sure your API key is valid
- Check that it is properly set in `.env` or passed to `init_client`

**Submission errors:**

- Ensure your predictions file has the required columns: `id`, `pred_10d`, `pred_30d`
- Check that all asset IDs match the current inference period
- Verify the submission window is still open (within 4 hours of inference data release)
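The column requirements above can be checked locally before submitting. Below is a sketch using only the standard library; `validate_predictions` is a hypothetical helper, and the numeric-value check is an assumption beyond the documented column names:

```python
import csv

REQUIRED_COLUMNS = {"id", "pred_10d", "pred_30d"}

def validate_predictions(path: str) -> list[str]:
    """Return a list of problems found in a predictions CSV; empty means OK."""
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        # All required columns must be present in the header
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            problems.append(f"missing columns: {sorted(missing)}")
        # Prediction values should be numeric (assumption, not documented)
        for i, row in enumerate(reader, start=2):
            for col in ("pred_10d", "pred_30d"):
                if col in row and row[col]:
                    try:
                        float(row[col])
                    except ValueError:
                        problems.append(f"line {i}: {col} is not numeric")
    return problems
```

An empty list means the file passes these basic checks; the API may still enforce further constraints (asset coverage, submission window).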
## Resources

- CrowdCent Documentation
- Hyperliquid Ranking Challenge
- MCP Documentation
- CrowdCent Challenge Python Client
## Support

For issues with:

- This MCP server: open an issue in this repository
- The CrowdCent API: email [email protected] or join our Discord
