|
| 1 | +# sensPy Technical Audit — Sensometrics 2026 Poster |
| 2 | + |
| 3 | +*Audit date: 2026-02-16 | sensPy v0.1.0 | Validated against sensR v1.5.3 (R 4.4.0)* |
| 4 | + |
| 5 | +--- |
| 6 | + |
| 7 | +## 1. Feature Parity Audit |
| 8 | + |
| 9 | +### 1.1 Discrimination Protocols Implemented |
| 10 | + |
| 11 | +All 8 standard sensR single protocols and 5 double-protocol variants are implemented with complete psychometric link functions (`psy_fun`, `psy_inv`, `psy_deriv`): |
| 12 | + |
| 13 | +| Protocol | Single | Double | Guessing Prob | Link Implementation | |
| 14 | +|----------|:------:|:------:|:-------------:|---------------------| |
| 15 | +| Triangle | ✅ | ✅ | 1/3 | Non-central F distribution (`scipy.stats.ncf`) | |
| 16 | +| Duo-Trio | ✅ | ✅ | 1/2 | Closed-form normal CDF (`scipy.stats.norm`) | |
| 17 | +| 2-AFC | ✅ | ✅ | 1/2 | Direct normal CDF — fully vectorized | |
| 18 | +| 3-AFC | ✅ | ✅ | 1/3 | Numerical integration (`scipy.integrate.quad`) | |
| 19 | +| Tetrad | ✅ | ✅ | 1/3 | Numerical integration (`scipy.integrate.quad`) | |
| 20 | +| Hexad | ✅ | — | 1/10 | Double numerical integration | |
| 21 | +| 2-out-of-5 | ✅ | — | 1/10 | Numerical integration | |
| 22 | +| 2-out-of-5 (specified) | ✅ | — | 2/5 | Numerical integration | |
| 23 | + |
| 24 | +**Additional analysis models (fully ported):** |
| 25 | + |
| 26 | +| Model | Function | Description | |
| 27 | +|-------|----------|-------------| |
| 28 | +| Same-Different | `samediff()` | Thurstonian model estimating delta and tau | |
| 29 | +| 2-AC | `twoac()` | 2-Alternative Certainty with preference/indifference | |
| 30 | +| Degree of Difference | `dod()`, `dod_fit()` | Ordinal-scale DOD with boundary parameters | |
| 31 | +| A-Not-A | `anota()` | Signal detection with probit regression and Fisher's Exact Test | |
| 32 | +| Beta-Binomial | `betabin()` | Overdispersion model for replicated panel data | |
| 33 | + |
| 34 | +### 1.2 D-prime ($d'$) Support |
| 35 | + |
| 36 | +**Fully supported.** The `discrim()` function (`senspy/discrim.py`) provides: |
| 37 | + |
| 38 | +- Point estimation of $d'$ from observed proportion correct via inverse psychometric functions |
| 39 | +- Standard errors via the **delta method** using `psy_deriv()` |
| 40 | +- Confidence intervals via four methods: **Exact** (Clopper-Pearson), **Likelihood** (profile likelihood), **Wald** (normal approximation), **Score** (Wilson) |
| 41 | +- P-values for both **difference** testing ($H_1: d' > d'_0$) and **similarity** testing ($H_1: d' < d'_0$) |
| 42 | +- Rescaling between $d'$, $P_c$ (proportion correct), and $P_d$ (proportion of discriminators) |
| 43 | + |
| 44 | +Hypothesis testing on $d'$ is handled by three additional functions: |
| 45 | + |
| 46 | +- `dprime_test()` — tests whether a common $d'$ equals a null value across groups |
| 47 | +- `dprime_compare()` — Chi-square any-difference test across multiple groups |
| 48 | +- `posthoc()` — pairwise comparisons with Holm/Bonferroni adjustment and compact letter display |
| 49 | + |
| 50 | +### 1.3 R-Index |
| 51 | + |
| 52 | +**Not implemented.** No R-index functions are present in the current codebase. |
| 53 | + |
| 54 | +### 1.4 Power Analysis & Sample Size Estimation |
| 55 | + |
| 56 | +**Fully implemented** across four core functions and two protocol-specific functions: |
| 57 | + |
| 58 | +| Function | Description | Method | |
| 59 | +|----------|-------------|--------| |
| 60 | +| `discrim_power()` | Power from $P_d$ values | Exact binomial, normal approx, continuity-corrected | |
| 61 | +| `dprime_power()` | Power from $d'$ values | Wrapper converting $d'$ → $P_d$ | |
| 62 | +| `discrim_sample_size()` | Sample size from $P_d$ | Binary search over power function | |
| 63 | +| `dprime_sample_size()` | Sample size from $d'$ | Wrapper converting $d'$ → $P_d$ | |
| 64 | +| `samediff_power()` | Same-Different power | **Monte Carlo simulation** | |
| 65 | +| `twoac_power()` | 2-AC exact power | **Exact enumeration** (up to $N = 5000$) | |
| 66 | +| `dod_power()` | DOD power | Simulation-based | |
| 67 | + |
| 68 | +Supports four test statistic methods: `exact`, `normal`, `cont.normal` (continuity-corrected), and `stable.exact`. |
| 69 | + |
| 70 | +### 1.5 sensR Features Not Yet Ported |
| 71 | + |
| 72 | +| Feature | Status | Notes | |
| 73 | +|---------|--------|-------| |
| 74 | +| R-index calculation | Not ported | Not present in codebase | |
| 75 | +| Double-protocol simulation | Not implemented | `discrim_sim()` raises error for double variants | |
| 76 | +| Hexad / 2-out-of-5 exact formulas | Approximated | Comments in `psychometric.py` note these are approximations; marked `xfail` in validation tests | |
| 77 | +| ANOVA / mixed models | Not ported | sensR's descriptive analysis functions not included | |
| 78 | +| `findcr()` advanced options | Partial | Basic `find_critical()` ported; some advanced options may be missing | |
| 79 | + |
| 80 | +--- |
| 81 | + |
| 82 | +## 2. Architecture & Performance |
| 83 | + |
| 84 | +### 2.1 Primary Dependencies |
| 85 | + |
| 86 | +| Dependency | Version | Role in sensPy | |
| 87 | +|------------|---------|----------------| |
| 88 | +| **NumPy** ≥ 1.23 | Core | Array operations, broadcasting, vectorized psychometric computations | |
| 89 | +| **SciPy** ≥ 1.9 | Core | MLE optimization, statistical distributions, numerical integration, special functions | |
| 90 | +| **Pandas** ≥ 1.5 | Listed | Available for user data handling; not directly imported in source modules | |
| 91 | +| **Plotly** ≥ 5.15 | Core | Interactive visualization (psychometric curves, ROC, power curves, SDT distributions) | |
| 92 | +| **Numba** ≥ 0.56 | Listed | Declared dependency but **not actively used** (no `@njit`/`@jit` decorators in codebase) | |
| 93 | +| **Matplotlib** ≥ 3.6 | Optional | Static plot export via `static-plots` extra | |
| 94 | + |
| 95 | +### 2.2 SciPy for Maximum Likelihood Estimation |
| 96 | + |
| 97 | +MLE is implemented across multiple modules using SciPy's optimization and statistical machinery: |
| 98 | + |
| 99 | +**`scipy.optimize.minimize` — Multi-parameter MLE:** |
| 100 | + |
| 101 | +- `dod.py:485, 659` — DOD model fitting (tau + $d'$ parameters) |
| 102 | +- `betabin.py:457` — Beta-binomial model fitting with log-space likelihood |
| 103 | +- `samediff.py:394, 409, 437, 452, 482` — Same-Different protocol (multiple optimization cases) |
| 104 | + |
| 105 | +**`scipy.optimize.minimize_scalar` — Single-parameter MLE:** |
| 106 | + |
| 107 | +- `dprime_tests.py:325–329` — Common $d'$ estimation (bounded 1D search) |
| 108 | +- `twoac.py:303, 313` — Tau estimation under null hypothesis |
| 109 | +- `protocol_power.py:143, 336, 343` — Tau parameter optimization for power calculations |
| 110 | + |
| 111 | +**`scipy.optimize.brentq` — Root-finding for inverse link functions:** |
| 112 | + |
| 113 | +- `links/psychometric.py:121, 194, 268, 349, 453, 535, 600` — Inverting each protocol's psychometric function |
| 114 | +- `links/double.py:64–67` — Double-protocol inverse links |
| 115 | + |
| 116 | +**Negative log-likelihood (NLL) functions are explicitly defined for:** |
| 117 | + |
| 118 | +- 2-AC protocol (`twoac.py:122–148`) |
| 119 | +- DOD model (`dod.py:225–252`) |
| 120 | +- Common $d'$ (`dprime_tests.py:281–302`) |
| 121 | +- Beta-binomial (`betabin.py:275–310`, using `scipy.special.betaln` for numerical stability) |
| 122 | + |
| 123 | +**Standard errors** are computed via: |
| 124 | + |
| 125 | +- **Delta method** — for $d'$ standard errors in `discrim()` and `dprime_tests` |
| 126 | +- **Numerical Hessian** (finite differences) — for 2-AC (`twoac.py:219–273`) and DOD (`dod.py:424`) |
| 127 | +- **Profile likelihood** — for confidence intervals in `discrim.py:21–57` and `twoac.py:344–400` |
| 128 | + |
| 129 | +### 2.3 Vectorized Operations & Performance |
| 130 | + |
| 131 | +**Fully vectorized (fast) paths:** |
| 132 | + |
| 133 | +- **2-AFC**: Direct `stats.norm.cdf(d_prime / sqrt(2))` — pure NumPy/SciPy broadcasting, O(n) (`psychometric.py:61–85`) |
| 134 | +- **Duo-Trio**: Closed-form with vectorized `norm.cdf` / `norm.pdf` operations (`psychometric.py:91–153`) |
| 135 | +- **Triangle**: Vectorized via `stats.ncf.sf()` with NumPy masking for boundary cases (`psychometric.py:159–177`) |
| 136 | +- **DOD boundary probabilities**: Vectorized `norm.cdf()` on tau arrays (`dod.py:204–222`) |
| 137 | +- **Delta method SE**: Vectorized derivative + division (`dprime_tests.py:234–239`) |
| 138 | + |
| 139 | +**Loop-based (slower) paths — inherent to `scipy.integrate.quad`:** |
| 140 | + |
| 141 | +- **3-AFC**: Per-element `quad()` integration (`psychometric.py:230–251`) |
| 142 | +- **Tetrad**: Per-element `quad()` + `brentq()` for inverse (`psychometric.py:307–378`) |
| 143 | +- **Hexad**: Double `quad()` per element — slowest protocol (`psychometric.py:385–481`) |
| 144 | + |
| 145 | +**Numerical stability techniques:** |
| 146 | + |
| 147 | +- Log-space computation via `scipy.special.betaln()` and `gammaln()` in beta-binomial fitting (`betabin.py:293–294`) |
| 148 | +- Safe log via `scipy.special.xlogy()` handling $0 \cdot \log(0) = 0$ (`dprime_tests.py:256–278`) |
| 149 | +- Macmillan & Kaplan $1/(2n)$ correction for extreme hit/false-alarm rates in A-Not-A (`anota.py`) |
| 150 | + |
| 151 | +**Note on Numba:** Despite being listed as a dependency, no `@njit` or `@jit` decorators are present in the codebase. The integration-heavy protocols (3-AFC, Tetrad, Hexad) could benefit from JIT compilation of their integrand functions. |
| 152 | + |
| 153 | +--- |
| 154 | + |
| 155 | +## 3. API Design |
| 156 | + |
| 157 | +### 3.1 Functional API (Not Class-Based) |
| 158 | + |
| 159 | +sensPy uses a **purely functional API** mirroring sensR's design. Users call top-level functions that return structured result objects: |
| 160 | + |
| 161 | +```python |
| 162 | +from senspy import discrim, discrim_power, dprime_compare |
| 163 | + |
| 164 | +# Analysis returns a DiscrimResult dataclass |
| 165 | +result = discrim(correct=80, total=100, method="triangle") |
| 166 | +result.d_prime # 2.486 |
| 167 | +result.p_value # 1.23e-12 |
| 168 | +result.confint() # (1.89, 3.21) |
| 169 | + |
| 170 | +# Power returns a float |
| 171 | +power = discrim_power(d_prime=1.5, sample_size=100, method="triangle") |
| 172 | +``` |
| 173 | + |
| 174 | +There are no `TriangleTest` or `DuoTrioAnalysis` class hierarchies. Every module exports functions: `discrim()`, `betabin()`, `samediff()`, `twoac()`, `dod()`, `anota()`. |
| 175 | + |
| 176 | +### 3.2 Result Dataclasses |
| 177 | + |
| 178 | +All functions return **rich dataclass objects** with properties and convenience methods: |
| 179 | + |
| 180 | +| Result Class | Key Fields | Methods | |
| 181 | +|-------------|------------|---------| |
| 182 | +| `DiscrimResult` | `d_prime`, `pc`, `pd`, `se_d_prime`, `p_value`, `statistic` | `confint()`, `summary()`, `__str__()` | |
| 183 | +| `BetaBinomialResult` | `coefficients`, `log_likelihood`, `n_obs` | `se()`, `lr_overdispersion()`, `lr_association()`, `summary()` | |
| 184 | +| `SameDiffResult` | `delta`, `tau`, `se_delta`, `se_tau` | `__str__()` | |
| 185 | +| `TwoACResult` | `d_prime`, `tau`, `se_d_prime`, `se_tau`, `p_value` | `__str__()` | |
| 186 | +| `DODResult` | `d_prime`, `tau`, `se_d_prime`, `conf_int`, `p_value` | Properties | |
| 187 | +| `ANotAResult` | `d_prime`, `se_d_prime`, `p_value`, `hit_rate`, `false_alarm_rate` | — | |
| 188 | + |
| 189 | +### 3.3 Protocol Handling |
| 190 | + |
| 191 | +The `Protocol` enum (`senspy/core/types.py`) encapsulates protocol metadata: |
| 192 | + |
| 193 | +```python |
| 194 | +Protocol.TRIANGLE.value # "triangle" |
| 195 | +Protocol.TRIANGLE.p_guess # 0.333... |
| 196 | +``` |
| 197 | + |
| 198 | +The `parse_protocol()` function provides **flexible, case-insensitive input** with aliases: |
| 199 | + |
| 200 | +- `"triangle"`, `"Triangle"`, `"TRIANGLE"` all resolve |
| 201 | +- `"2afc"`, `"2-AFC"`, `"2_afc"` → `Protocol.TWOAFC` |
| 202 | +- `"3afc"` → `Protocol.THREEAFC` |
| 203 | +- `"duo"`, `"tri"` → shorthand aliases |
| 204 | +- `"2outof5"`, `"2of5"`, `"2/5"` → `Protocol.TWOFIVE` |
| 205 | + |
| 206 | +### 3.4 Pandas Integration |
| 207 | + |
| 208 | +Pandas is listed as a core dependency but is **not directly imported** in any source module. It is available for users who wish to load or organize data with DataFrames before passing values to sensPy functions. The API accepts standard Python types and NumPy arrays. |
| 209 | + |
| 210 | +### 3.5 Pythonic Design Choices vs. R |
| 211 | + |
| 212 | +| Aspect | sensR (R) | sensPy (Python) | |
| 213 | +|--------|-----------|-----------------| |
| 214 | +| Naming | `discrimPwr()`, `d.primeSS()` | `discrim_power()`, `dprime_sample_size()` | |
| 215 | +| Results | S3 lists with `$` access | Typed dataclasses with properties and methods | |
| 216 | +| Protocol input | String only | String (flexible aliases) or `Protocol` enum | |
| 217 | +| Docstrings | R help pages | NumPy-style docstrings | |
| 218 | +| CI access | `confint(result)` | `result.confint(level=0.99, parameter="d_prime")` | |
| 219 | +| Display | `print()` / `summary()` | `__str__()` and `.summary()` methods | |
| 220 | + |
| 221 | +--- |
| 222 | + |
| 223 | +## 4. Validation |
| 224 | + |
| 225 | +### 4.1 Golden Data Testing |
| 226 | + |
| 227 | +The test suite uses a **golden data pattern**: reference values exported from sensR v1.5.3 (R 4.4.0), stored in `tests/fixtures/golden_sensr.json` (885 lines). This fixture covers 12 validation categories: |
| 228 | + |
| 229 | +- Link functions (`psy_fun`, `psy_inv`, `psy_deriv`) for all protocols |
| 230 | +- `discrim` estimates, standard errors, confidence intervals, p-values |
| 231 | +- `rescale` conversions |
| 232 | +- `power` and `sample_size` calculations |
| 233 | +- `betabin`, `twoac`, `samediff`, `dod` model outputs |
| 234 | +- `dprime_tests` hypothesis testing |
| 235 | + |
| 236 | +### 4.2 Test Suite Scale |
| 237 | + |
| 238 | +- **23 test files** with **740+ test functions** |
| 239 | +- Dedicated `test_sensr_validation.py` with 5 test classes and 10 methods targeting exact R parity |
| 240 | +- Separate `test_coverage_*.py` files for edge cases and branch coverage |
| 241 | +- Full protocol × statistic-type matrix testing (8 protocols × 4 statistics) |
| 242 | + |
| 243 | +### 4.3 Tolerance Levels |
| 244 | + |
| 245 | +Defined centrally in `tests/conftest.py`: |
| 246 | + |
| 247 | +| Metric | Tolerance | Decimal Places | |
| 248 | +|--------|-----------|----------------| |
| 249 | +| Coefficients ($d'$ estimates) | 1e-3 | ~3 significant figures | |
| 250 | +| Probabilities ($P_c$, $P_d$) | 1e-3 | ~3 significant figures | |
| 251 | +| P-values | 1e-4 | ~4 significant figures | |
| 252 | +| Derivatives / SEs | 1e-2 | ~2 significant figures | |
| 253 | +| Strict (2-AFC closed-form) | 1e-6 | ~6 significant figures | |
| 254 | + |
| 255 | +### 4.4 Numerical Consistency Summary |
| 256 | + |
| 257 | +> **For exact protocols (2-AFC, Duo-Trio, Triangle, 3-AFC, Tetrad), sensPy matches sensR's d-prime estimates and p-values to 3–6 significant figures, validated against golden reference data exported from sensR v1.5.3; approximated protocols (Hexad, 2-out-of-5) are functional but marked as expected failures in validation tests pending exact formula alignment.** |
| 258 | +
|
| 259 | +### 4.5 Known Validation Gaps |
| 260 | + |
| 261 | +- Hexad, 2-out-of-5, and 2-out-of-5 (specified) protocols use numerical approximations and are marked `xfail` in `test_sensr_validation.py` |
| 262 | +- Tetrad numerical derivatives require a wider tolerance (15%) due to inherent imprecision in finite-difference computation |
| 263 | +- Wald-type p-values use 10% relative tolerance (normal approximation is inherently less precise than exact or likelihood methods) |
| 264 | + |
| 265 | +--- |
| 266 | + |
| 267 | +## 5. Technical Highlights for the Abstract |
| 268 | + |
| 269 | +### Robustness |
| 270 | + |
| 271 | +- **740+ automated tests** validating numerical parity against sensR v1.5.3 golden data |
| 272 | +- **4 test statistic methods** (Exact, Likelihood, Wald, Score) implemented for `discrim()`, matching sensR's full repertoire |
| 273 | +- **Boundary-safe**: $d' = 0$ when $P_c \leq P_{guess}$; log-space computation (`betaln`, `gammaln`, `xlogy`) prevents overflow in beta-binomial fitting; Macmillan & Kaplan correction for extreme hit/FA rates |
| 274 | + |
| 275 | +### Feature Completeness |
| 276 | + |
| 277 | +- **13 protocol variants**: 8 single + 5 double discrimination protocols with complete psychometric link functions |
| 278 | +- **6 analysis models**: `discrim`, `betabin`, `twoac`, `samediff`, `dod`, `anota` |
| 279 | +- **Full inference pipeline**: estimation → hypothesis testing (`dprime_test`, `dprime_compare`) → post-hoc comparisons with multiplicity adjustment → power/sample-size planning |
| 280 | +- **Monte Carlo and exact power**: `samediff_power()` via simulation; `twoac_power()` via exact enumeration |
| 281 | + |
| 282 | +### Architecture |
| 283 | + |
| 284 | +- **Functional API** preserving sensR naming conventions while adopting Python idioms (snake_case, type hints, dataclass results) |
| 285 | +- **Protocol-agnostic engine**: `discrim()` delegates to protocol-specific psychometric link functions, making it trivial to add new protocols |
| 286 | +- **Structured results**: Typed dataclasses with lazy CI computation, formatted display, and property-based access — replacing R's untyped list returns |
| 287 | +- **SciPy-native MLE**: `minimize`, `minimize_scalar`, `brentq` for optimization; `integrate.quad` for Thurstonian integrals; profile likelihood for confidence intervals |
| 288 | + |
| 289 | +### Specific Functions to Highlight |
| 290 | + |
| 291 | +- `discrim()` — main entry point with 4 test statistics and similarity/difference testing |
| 292 | +- `dprime_compare()` + `posthoc()` — multi-group comparison with compact letter display |
| 293 | +- `betabin()` — chance-corrected beta-binomial with log-space kernel for numerical stability |
| 294 | +- `samediff_power()` — Monte Carlo power for Same-Different protocol |
| 295 | +- `parse_protocol()` — flexible protocol parsing with 7+ aliases for user convenience |
| 296 | +- `plot_psychometric()`, `plot_roc()`, `plot_power_curve()` — Plotly-based interactive visualization |
0 commit comments