Fix CI review R1: boundary-horizon validation, placebo=False guard, docstrings
P0: _largest_consecutive_block now raises ValueError when boundary horizon
(-1 or +1) is missing after finite-SE filtering instead of silently
returning the full list (would produce wrong HonestDiD bounds).
P1: honest_did=True now rejects placebo=False early instead of silently
returning honest_did_results=None with no warning.
P2: Added 3 regression tests (boundary -1 missing, boundary +1 missing,
placebo=False + honest_did).
P3: Updated docstrings in honest_did.py (6 locations) and docs/llms.txt
to include ChaisemartinDHaultfoeuilleResults alongside MultiPeriodDiD/CS.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
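The P0 guard described above can be sketched as a standalone function. This is an illustrative re-implementation only, not the code in honest_did.py; the real `_largest_consecutive_block` operates on event-study horizons after finite-SE filtering and may differ in detail. The key behavior is the one the commit fixes: fail loudly when a boundary horizon was dropped, rather than silently returning the full list.

```python
def largest_consecutive_block(horizons):
    """Longest run of consecutive event-time horizons.

    Illustrative sketch of the P0 guard: raise ValueError when a
    boundary horizon (-1 or +1) is absent, since HonestDiD bounds
    built without both sides of the treatment boundary are wrong.
    """
    horizons = sorted(set(horizons))
    for boundary in (-1, 1):
        if boundary not in horizons:
            raise ValueError(
                f"boundary horizon {boundary:+d} missing after "
                "finite-SE filtering; HonestDiD bounds would be invalid"
            )
    # Scan for the longest run of consecutive integers.
    best_start = start = prev = horizons[0]
    best_len = cur_len = 1
    for h in horizons[1:]:
        if h == prev + 1:
            cur_len += 1
        else:
            start, cur_len = h, 1
        if cur_len > best_len:
            best_len, best_start = cur_len, start
        prev = h
    return list(range(best_start, best_start + best_len))
```

For example, `largest_consecutive_block([-5, -1, 0, 1])` returns `[-1, 0, 1]`, while `largest_consecutive_block([-2, 0, 1])` raises because horizon -1 was filtered out.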
3. **Test parallel trends** — simple 2x2: `check_parallel_trends()`, `equivalence_test_trends()`; staggered: inspect CS event-study pre-period coefficients (generic PT tests are invalid for staggered designs). Insignificant pre-trends do NOT prove PT holds.
4. **Choose estimator** — staggered adoption → CS/SA/BJS (NOT plain TWFE); few treated units → SDiD; factor confounding → TROP; simple 2x2 → DiD. Run `BaconDecomposition` to diagnose TWFE bias.
5. **Estimate** — `estimator.fit(data, ...)`. Always print the cluster count first and choose inference method based on the result (cluster-robust if >= 50 clusters, wild bootstrap if fewer).
- 6. **Sensitivity analysis** — `compute_honest_did(results)` for bounds under PT violations (MultiPeriodDiD/CS only), `run_all_placebo_tests()` for 2x2 falsification, specification comparisons for staggered designs.
+ 6. **Sensitivity analysis** — `compute_honest_did(results)` for bounds under PT violations (MultiPeriodDiD, CS, or dCDH), `run_all_placebo_tests()` for 2x2 falsification, specification comparisons for staggered designs.
8. **Robustness** — compare 2-3 estimators (CS vs SA vs BJS), MUST report with and without covariates (shows whether conditioning drives identification), present pre-trends and sensitivity bounds.
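For the simple 2x2 branch of step 4, the estimand reduces to a double difference of group-by-period means. A self-contained toy sketch (hand-rolled numbers, not the package's `estimator.fit()` API):

```python
import numpy as np

# Toy group-by-period means: rows = (control, treated), cols = (pre, post).
y = np.array([[10.0, 11.0],   # control changes by +1.0 (common trend)
              [12.0, 15.0]])  # treated changes by +3.0

# 2x2 DiD: (treated post - pre) minus (control post - pre).
did = (y[1, 1] - y[1, 0]) - (y[0, 1] - y[0, 0])
print(did)  # 2.0
```

The control group's +1.0 change is the counterfactual trend; subtracting it from the treated group's +3.0 change leaves an effect of 2.0. This identification is exactly what the pre-trend checks in step 3 and the sensitivity bounds in step 6 probe.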