
Add Valuein connector for SEC EDGAR fundamentals + smart-money + forensic-audit #183

Open

rainer85ah wants to merge 1 commit into anthropics:main from valuein:feat/add-valuein-connector

Conversation

@rainer85ah

Summary

Adds Valuein as a partner-built plugin under plugins/partner-built/valuein/. Valuein hosts a free + low-cost MCP server at https://mcp.valuein.biz/mcp providing SEC EDGAR standardized fundamentals, ratios, valuation metrics, smart-money intelligence (insider transactions + 13F + 13D/G), and forensic-audit scores for the US public-company universe (~105M facts, 17,000+ active + delisted tickers).

The plugin packages 3 high-level commands and 3 supporting skills that demonstrate the connector's value out of the box. Sample-tier access is unauthenticated, so reviewers can evaluate the connector against S&P 500 names without provisioning a token first.

Why this PR

The existing partner connectors (lseg, sp-global) require institutional data subscriptions ($12K-$25K/seat minimum). The agent-plugins (earnings-reviewer, market-researcher, etc.) currently have no free or sub-$500 option for SEC fundamentals, so agents fall back to scraping EDGAR or reading filing text directly, which loses cross-company standardization and multi-period, cross-sectional analysis.

This connector fills that gap: the same shape as lseg/sp-global, the same SKILL.md conventions, the same lineage-grounded outputs, just on a different data layer aimed at the indie quant / fintwit / Pro-tier analyst persona that doesn't yet have FactSet / S&P CIQ access.

Structure

Mirrors plugins/partner-built/lseg/ and plugins/partner-built/spglobal/ directory-for-directory:

plugins/partner-built/valuein/
├── .claude-plugin/plugin.json     # metadata + mcpServers
├── .mcp.json                       # HTTP MCP endpoint
├── LICENSE                         # Apache-2.0 (same as repo)
├── README.md                       # commands + skills + integrations + install
├── CONNECTORS.md                   # 21 MCP tools by domain category
├── commands/
│   ├── research-equity.md          # equity research snapshot
│   ├── forensic-audit.md           # earnings-quality red-flag brief
│   └── screen-and-shortlist.md     # factor screen + forensic gate
└── skills/
    ├── equity-research/SKILL.md
    ├── forensic-audit/SKILL.md
    └── screen-and-shortlist/SKILL.md

Registered in .claude-plugin/marketplace.json alongside lseg and sp-global.
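
For orientation, the marketplace registration is a single additional entry in the existing plugins array. The exact field set follows whatever the lseg and sp-global entries already use, so the keys below are an assumption rather than a copy of the committed file:

{
  "plugins": [
    {
      "name": "valuein",
      "source": "./plugins/partner-built/valuein",
      "description": "SEC EDGAR fundamentals, smart-money intelligence, and forensic-audit scores"
    }
  ]
}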

Commands

| Command | What it does |
| --- | --- |
| /research-equity | Equity research snapshot — fundamentals trajectory, quality ratios, peer positioning, capital-allocation scorecard, valuation summary. |
| /forensic-audit | Earnings-quality brief — partial Beneish M-Score (SGI + TATA + LVGI), Sloan accruals, solvency snapshot, amendment history, capital-allocation cross-check. |
| /screen-and-shortlist | Cross-sectional factor screen → forensic gate → ranked shortlist with citations. |

Each command has a matching SKILL.md providing domain knowledge, tool-chaining workflow, and an explicit "what not to do" section. Every numerical output cites its originating SEC accession via the response's lineage envelope (source_filing + source_url → clickable EDGAR link).
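
To make the lineage contract concrete, the envelope the skills are written against looks roughly like the sketch below. Only lineage, source_filing, and source_url are named in this PR; the surrounding field names and values are illustrative assumptions, not the server's documented schema:

{
  "metric": "Revenues",
  "value": 383285000000,
  "period": "FY2023",
  "lineage": {
    "source_filing": "<SEC accession number>",
    "source_url": "https://www.sec.gov/Archives/edgar/data/<cik>/<accession>/<document>"
  }
}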

Connector design

  • Auth: Bearer token in the standard MCP Authorization header. No secrets in checked-in files. Sample tier is unauthenticated guest access (a minimal .mcp.json sketch follows this list).
  • Tier gating: Sample (free, S&P 500 / 5yr) → SP500 Free (signup, S&P 500 / full history) → Pro ($49/mo, full US universe, 30yr) → Institutional ($499/mo, smart-money + foreign issuers + redistribution license). Tier requirements are spelled out in README.md and CONNECTORS.md.
  • Lineage: Every numerical response carries a structured lineage envelope. Skills are written to surface source_url in the final output so a reader can one-click verify any number against the originating 10-K / 10-Q.
  • No internal references, no hardcoded secrets, no Anthropic-internal URLs.
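
For reference, a minimal sketch of the .mcp.json HTTP entry implied by this design. The server key and the VALUEIN_API_TOKEN variable name are placeholders of mine, not the committed file; the sample tier would simply omit the Authorization header, and the checked-in file carries no secret either way:

{
  "mcpServers": {
    "valuein": {
      "type": "http",
      "url": "https://mcp.valuein.biz/mcp",
      "headers": {
        "Authorization": "Bearer ${VALUEIN_API_TOKEN}"
      }
    }
  }
}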

Validation

$ python3 scripts/check.py
OK — 81 file(s) checked, 0 issues.

Secret-scan + internal-reference-scrub patterns from .github/workflows/secret-scan.yml run clean against plugins/partner-built/valuein/.

Live MCP preflight

$ curl -X POST https://mcp.valuein.biz/mcp \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json, text/event-stream' \
    -d '{"jsonrpc":"2.0","id":1,"method":"initialize",
          "params":{"protocolVersion":"2025-06-18",
                    "capabilities":{},
                    "clientInfo":{"name":"preflight","version":"1.0.0"}}}'

{
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": {
      "tools":{"listChanged":true},
      "prompts":{"listChanged":true},
      "resources":{"listChanged":true},
      "logging":{},
      "completions":{}
    },
    "serverInfo": {"name":"valuein-sec-edgar","version":"2.0.0"}
  },
  "jsonrpc":"2.0",
  "id":1
}
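
A natural follow-up a reviewer might run (not part of the checked-in preflight) is a tools/list call against the same endpoint, which should enumerate the 21-tool catalog on the unauthenticated sample tier. If the server returns an Mcp-Session-Id header on initialize it has to be echoed back, and a strictly spec-compliant server may also expect a notifications/initialized message before serving further requests:

$ curl -X POST https://mcp.valuein.biz/mcp \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json, text/event-stream' \
    -H 'Mcp-Session-Id: <value returned by the initialize response, if any>' \
    -d '{"jsonrpc":"2.0","id":2,"method":"tools/list","params":{}}'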

License

Valuein's connector code is Apache-2.0, matching the repo's root license. The plugin includes its own LICENSE file with the Apache-2.0 text and Valuein copyright notice — same pattern as plugins/partner-built/spglobal/LICENSE.

Test plan for reviewer

  1. Validation passes: python3 scripts/check.py returns OK — 81 file(s) checked, 0 issues.
  2. No secrets: gitleaks + internal-reference scrub patterns return clean.
  3. Structure matches partner-built convention: file-for-file mirror of the lseg/spglobal directory layout.
  4. MCP endpoint live: https://mcp.valuein.biz/mcp responds to MCP initialize with valid protocol version + capabilities + serverInfo.
  5. Sample tier works without auth: skills are usable on S&P 500 names without provisioning a Valuein token first (sample bucket access is guest).

Happy to address any review comments — naming, scope of the initial 3 skills, additional commands, or anything else that would make this a better fit for the repo.

Adds Valuein as a partner-built plugin providing SEC EDGAR fundamentals,
financial ratios, valuation metrics, smart-money intelligence (insider +
13F + 13D/G), and forensic-audit scores for US-listed equities via the
hosted MCP server at https://mcp.valuein.biz/mcp.

Structure mirrors plugins/partner-built/lseg and
plugins/partner-built/spglobal exactly:

  plugins/partner-built/valuein/
  ├── .claude-plugin/plugin.json     # metadata + mcpServers
  ├── .mcp.json                       # HTTP MCP endpoint
  ├── LICENSE                         # Apache-2.0
  ├── README.md                       # commands + skills + integrations + install
  ├── CONNECTORS.md                   # full MCP tool reference, 21 tools by category
  ├── commands/
  │   ├── research-equity.md          # equity research snapshot workflow
  │   ├── forensic-audit.md           # earnings-quality red-flag brief
  │   └── screen-and-shortlist.md     # factor screen + forensic gate
  └── skills/
      ├── equity-research/SKILL.md
      ├── forensic-audit/SKILL.md
      └── screen-and-shortlist/SKILL.md

Each skill includes YAML frontmatter (name + description), domain
principles, available MCP tools, tool-chaining workflow, output format,
and explicit "what not to do" guidance. Every numerical claim links back
to the originating SEC accession via the response's `lineage` envelope
(`source_filing` + `source_url`), so a reader can verify any number with
one click.

The plugin is also registered in .claude-plugin/marketplace.json so it
is discoverable alongside lseg and sp-global.

Validation:
  $ python3 scripts/check.py
  OK — 81 file(s) checked, 0 issues.

Live MCP preflight (https://mcp.valuein.biz/mcp):
  protocolVersion: 2025-06-18
  serverInfo: { name: "valuein-sec-edgar", version: "2.0.0" }
  capabilities: tools.listChanged, prompts.listChanged, resources.listChanged
rainer85ah added a commit to valuein/valuein that referenced this pull request May 12, 2026
Adds a public, reproducible benchmark scoreboard for Valuein's MCP
server, inspired by FinanceBench (Islam et al. 2023, arXiv:2311.11944)
but with a different shape — we test the **structured-data MCP** path
rather than the LLM-with-RAG-over-PDFs path that FinanceBench measured.

## What ships

  benchmarks/
  ├── README.md                       # marketing-friendly overview + invitation
  └── finance-bench/
      ├── README.md                   # not yet present — methodology covers it
      ├── methodology.md              # scoring rules, signals, weights
      ├── tasks.jsonl                 # 20 single-doc tasks across S&P 500
      ├── run-bench.sh                # curl + jq runner, no SDK deps
      └── results-latest.md           # pending first official run

## Methodology

Each task is scored on three signals:

  * Numerical accuracy (weight 0.5) — within `tolerance_pct`
  * Lineage citation (weight 0.3) — response cites the originating accession
  * PIT correctness (weight 0.2) — `as_of_date` enforced server-side

Score = weighted sum of the three signals. The aggregate publishes the
overall score plus single-doc-subset, numerical-only, and lineage-only
scores.
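
As a worked illustration (a hypothetical task, not a published result): an
answer that lands within `tolerance_pct` and cites the correct accession but
violates the `as_of_date` constraint scores 0.5 × 1 + 0.3 × 1 + 0.2 × 0 = 0.80,
while getting only the number right scores 0.50.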

Tasks v1 cover 20 single-doc questions across AAPL, MSFT, NVDA, GOOGL,
META, AMZN, TSLA, JPM, BRK.B (FY2023 + FY2024 10-Ks) — sourced from
public filings, with the originating SEC accession committed alongside
each task for reviewer verification.
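
For concreteness, a single tasks.jsonl record might look roughly like the
line below (one JSON object per line). Only `tolerance_pct`, `as_of_date`,
and the committed accession are specified above; every other field name and
value here is an assumption for illustration:

  {"id": "AAPL-FY2023-revenue", "ticker": "AAPL", "metric": "Revenues", "period": "FY2023", "expected_value": 383285000000, "tolerance_pct": 0.5, "as_of_date": "<FY2023 10-K filing date>", "accession": "<SEC accession of that 10-K>"}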

## Why no committed score yet

Pending first official run on a non-rate-limited tier. The free sample
tier hits its 60-requests-per-minute limit mid-run, so the published
score requires either a free S&P 500 token (no card needed) or a
Pro-tier token. We will NOT publish
a number we haven't reproduced end-to-end — that's how benchmarks lose
credibility. First run will land as a separate commit named
`bench: first official run — overall X.XX`.

## Reproducibility

The runner is bash + curl + jq — anyone can audit the wire format
without trusting a TypeScript or Python harness. The MCP request shape
is documented in `results-latest.md`. Same `tasks.jsonl` + same
warehouse snapshot = same score, byte-for-byte.
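
For readers who want the gist without opening the repo, the core loop of
such a runner amounts to the sketch below. This is not the checked-in
run-bench.sh: the tool name (`get_fundamentals`) and the argument shape are
assumptions, and the authoritative request shape is the one documented in
`results-latest.md`.

  #!/usr/bin/env bash
  # Sketch only: iterate tasks.jsonl and record one raw MCP response per task.
  # Scoring against expected_value / tolerance_pct is handled separately.
  set -euo pipefail

  ENDPOINT="https://mcp.valuein.biz/mcp"
  mkdir -p results

  while IFS= read -r task; do
    id=$(jq -r '.id' <<<"$task")
    args=$(jq -c '{ticker, metric, period}' <<<"$task")
    body=$(jq -n --argjson args "$args" \
      '{jsonrpc:"2.0", id:1, method:"tools/call", params:{name:"get_fundamentals", arguments:$args}}')

    curl -s -X POST "$ENDPOINT" \
      -H 'Content-Type: application/json' \
      -H 'Accept: application/json, text/event-stream' \
      -d "$body" > "results/${id}.json"
  done < tasks.jsonl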

## Inviting external audits

Competing data providers (FactSet, S&P CIQ, Bloomberg, etc.) are
explicitly invited to PR `<provider>/run-bench.sh` against this same
task set so the score is comparable. The bar is the same for everyone.

## README integration

The top-level README now lists `benchmarks/` alongside `docs/` so
visitors discover the scoreboard from the front page.

This is part of a series aligning Valuein with the
`anthropics/financial-services` ecosystem — see the new Valuein
connector at anthropics/financial-services#183