88 changes: 84 additions & 4 deletions README.md
@@ -43,11 +43,79 @@ If `git pull` shows a conflict or error, reach out before trying to fix it.
3. **Set frame range** — use "Set Start/End" buttons to find the active region automatically
4. **Run analysis** — click "Analyze Brightness" (or press F5), choose an output folder

### Capture Metadata Sidecar (new)

When a video is loaded, the app now checks for an optional sidecar file named:

`<video_filename>.capture.json`

Example:
- `experiment_01.mov`
- `experiment_01.capture.json`

The current schema authority is a lightweight versioned contract with `schema_version: "1.0"`. The validator checks required acquisition fields (`device_model`, exposure/white-balance lock flags, exposure duration, ISO, FPS, resolution, HDR flag), warns on legacy or invalid metadata, and shows a metadata status line in the UI after load.

Reference:
- `docs/capture_metadata_schema.md`
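The pairing rule above can be sketched in a couple of lines; the helper name below is illustrative, not the repo's actual API:

```python
from pathlib import Path


def sidecar_path(video_path):
    """Expected capture sidecar location for a given video file.

    Example: experiment_01.mov -> experiment_01.capture.json
    """
    video = Path(video_path)
    return video.with_name(video.stem + ".capture.json")
```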

### Capture Inbox Workflow (new)

For end-to-end testing, you can point the ingest tool at a local inbox of incoming iPhone captures and optionally auto-run analysis from a fixed manifest:

```bash
python tools/ingest_capture_inbox.py tools/capture_inbox_manifest.example.json
```

That tool:
- scans `inbox_dir` for video + `*.capture.json` pairs
- validates capture metadata using the current schema contract
- creates deterministic per-capture output folders using `capture_id` when present
- writes `capture_ingest_summary.json` for each capture
- optionally runs the existing mask-review analysis path if `analysis_case` is configured
- optionally archives processed source files out of the inbox

To rerun ingest against the same capture and `output_dir`, pass `--force-reprocess` or set `"force_reprocess": true` in the manifest. Otherwise, captures with identical source signatures are skipped intentionally, and the run summary records the path of the existing summary that triggered the skip.

Use `--watch-seconds 5` to keep rescanning during manual device-to-desktop testing.
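The pairing scan that tool performs can be sketched as follows; the extension set and helper name are assumptions, not the tool's actual implementation:

```python
from pathlib import Path

# Assumed set of recognized video extensions; the real tool may differ.
VIDEO_EXTS = {".mov", ".mp4"}


def find_capture_pairs(inbox_dir):
    """Yield (video, sidecar) pairs where both files exist in the inbox."""
    inbox = Path(inbox_dir)
    for video in sorted(inbox.iterdir()):
        if video.suffix.lower() not in VIDEO_EXTS:
            continue  # skip sidecars and unrelated files
        sidecar = video.with_name(video.stem + ".capture.json")
        if sidecar.exists():
            yield video, sidecar
```

Unpaired videos are simply left in place, matching the tool's pairs-only behavior.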

### Output

Each analysis produces:
- **CSV files** — one per ROI with columns: `frame, brightness_mean, brightness_median, blue_mean, blue_median`
- **Plot images** — dual-panel PNG (brightness trends + difference plot) with statistical annotations
- **Metadata sidecar** — one `*_analysis_metadata.json` file capturing mask mode, thresholds, source frames, mask-quality warnings, and normalized capture provenance / validation results when a capture sidecar exists
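Loading one exported per-ROI CSV back for post-processing needs only the stdlib; the column names come from the list above, while the helper itself is illustrative:

```python
import csv

# Column names as documented for the per-ROI CSV export.
ROI_COLUMNS = ("frame", "brightness_mean", "brightness_median",
               "blue_mean", "blue_median")


def load_roi_trace(csv_path):
    """Read one per-ROI CSV into a dict of float lists, keyed by column."""
    cols = {name: [] for name in ROI_COLUMNS}
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            for name in ROI_COLUMNS:
                cols[name].append(float(row[name]))
    return cols
```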

### Dark-Enclosure Review Workflow

For lab-style review of electrode light inside a dark enclosure:

1. Lock exposure, ISO, white balance, and focus before recording.
2. Draw tight electrode ROIs and place the background ROI close to the electrodes, but outside visible glow.
3. Capture a fixed mask, then enable **Show Pixel Mask** to inspect agreement between the fixed mask and the current adaptive mask.
Fixed-only pixels render in red, adaptive-only pixels in blue, and agreement in magenta.
4. Check the mask-quality summary:
- `high` / `medium` confidence means the consensus mask is stable enough to review.
- `low` confidence, `low_consensus`, `unstable_mask`, or `small_mask` means the mask needs operator review before trusting the run.
5. Export the analysis and confirm the `*_analysis_metadata.json` sidecar was written next to the CSV/plot files.
6. Package one or more exported runs into a repeatable review bundle:

```bash
python tools/run_real_video_review.py \
tools/real_video_review_manifest.example.json \
--output-dir review_output
```

The manifest should point at already-exported analysis folders plus the original raw video paths. The review bundle copies metadata, CSVs, and plots into one folder and generates `review_report.md` with a per-run PASS/FAIL summary.
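The per-run PASS/FAIL summary can be sketched as a minimal renderer; the real `review_report.md` is richer, and the function name here is hypothetical:

```python
def render_review_report(runs):
    """Render a minimal review_report.md body from {run_name: passed} pairs."""
    lines = ["# Review Report", ""]
    for name, passed in sorted(runs.items()):
        lines.append(f"- {name}: {'PASS' if passed else 'FAIL'}")
    return "\n".join(lines)
```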

For a direct rerun from raw videos and ROI manifests, use:

```bash
python tools/run_mask_review.py \
tools/mask_review_manifest.example.json \
--output-dir mask_review_outputs
```

That runner performs auto-capture plus full analysis from the raw videos, writes overlay PNGs for source frames, exports fresh CSV/metadata artifacts, and generates a case-by-case review summary.

### Useful Shortcuts

@@ -70,10 +70,11 @@ Arrow keys nudge a selected ROI instead of navigating frames. Shift+Arrow for 10

1. User draws ROIs on the video frame (one can be designated as a background reference).
2. For each frame in the selected range, the tool converts BGR pixels to **CIE LAB** color space and extracts the **L\* channel** (perceptually uniform brightness, 0–100 scale).
3. Fixed-mask capture scores signal that rises above the background ROI and the absolute noise floor, then builds a deterministic consensus mask from the strongest source frames.
4. Pixels below the noise floor (default 5 L\*) are filtered out. Morphological opening plus connected-component filtering remove isolated bright artifacts.
5. If a background ROI is set, its brightness (configurable percentile, default 90th) is subtracted per-frame to compensate for lighting drift.
6. Both mean and median brightness are computed per ROI per frame.
7. Results are exported to CSV, plots, and metadata.
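Steps 4–6 can be sketched with NumPy on a precomputed L\* array; this is a simplified illustration of the noise-floor filter, percentile background subtraction, and mean/median stats, not the analyzer's actual code path:

```python
import numpy as np


def roi_brightness_stats(l_star, background=None,
                         noise_floor=5.0, bg_percentile=90):
    """Per-frame brightness stats for one ROI on an L* array (0-100 scale)."""
    # Step 4: drop pixels at or below the noise floor (default 5 L*).
    pixels = l_star[l_star > noise_floor]
    if pixels.size == 0:
        return {"brightness_mean": 0.0, "brightness_median": 0.0}
    # Step 5: subtract a percentile of the background ROI, if one is set.
    offset = 0.0
    if background is not None:
        offset = float(np.percentile(background, bg_percentile))
    pixels = pixels - offset
    # Step 6: report both mean and median brightness for the ROI.
    return {
        "brightness_mean": float(pixels.mean()),
        "brightness_median": float(np.median(pixels)),
    }
```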

### Architecture

@@ -123,10 +192,21 @@ Brightness Sorcerer reports **relative** L\* brightness values derived from smar
### Pipeline Notes

- **Background subtraction** uses a configurable percentile (default 90th) from the background ROI. This adapts to gradual lighting drift but assumes the background ROI contains no glow signal.
- **Fixed-mask provenance** records the source frames, consensus score, warning flags, and threshold settings used to create each reusable mask.
- **Morphological filtering** removes isolated bright pixels but may erode edges of very small glow regions. For ROIs smaller than ~50 px, use smaller kernel sizes (1–3).
- **No temporal smoothing.** Each frame is analyzed independently. Raw traces may appear noisier than time-averaged instruments; post-hoc filtering (moving average, Savitzky-Golay) can be applied to the exported CSV data.
- **Blue channel values** are on the raw 0–255 sensor scale without perceptual correction — useful for qualitative spectral trends, not calibrated spectral measurements.
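As one concrete example of the post-hoc filtering mentioned above, a centered moving average over an exported trace:

```python
import numpy as np


def moving_average(trace, window=5):
    """Centered moving average for post-hoc smoothing of an exported trace.

    Edge values are averaged over a partial window, so they bias toward zero.
    """
    kernel = np.ones(window) / window
    return np.convolve(trace, kernel, mode="same")
```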

### Mask-Quality Interpretation

- `high` confidence: the fixed mask stayed stable across the strongest source frames and showed no blocking warnings.
- `medium` confidence: acceptable for review, but verify the overlay and source frames before using the run as a reference.
- `low` confidence: do not trust the run without manual inspection and likely recapturing the mask.
- `single_frame_capture`: only one usable source frame contributed to the fixed mask; repeatability is weaker.
- `low_consensus`: candidate frames disagreed about which pixels belonged to the glow region.
- `unstable_mask`: the consensus region was much smaller than the total detected union, suggesting drifting or noisy detections.
- `small_mask`: the retained signal region was near the minimum component-size floor and may be dominated by artifacts.
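One way to read those flags programmatically is sketched below; the mapping is an illustrative reading of the rules above, not the analyzer's exact logic (in particular, treating `single_frame_capture` alone as `medium` is an assumption):

```python
# Warning flags that call for operator review, per the list above.
BLOCKING_WARNINGS = {"low_consensus", "unstable_mask", "small_mask"}


def mask_confidence(warnings):
    """Map mask-quality warning flags to a confidence label."""
    if any(w in BLOCKING_WARNINGS for w in warnings):
        return "low"
    if "single_frame_capture" in warnings:
        return "medium"  # assumption: weaker repeatability, still reviewable
    return "high"
```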

### Reporting Recommendations

When citing results in publications, note:
53 changes: 53 additions & 0 deletions docs/capture_metadata_schema.md
@@ -0,0 +1,53 @@
# Capture Metadata Sidecar Schema

The analyzer accepts an optional sidecar JSON file next to each video:

- Video: `experiment_01.mov`
- Sidecar: `experiment_01.capture.json`

The current lightweight schema authority is:

- `schema_version: "1.0"`
- Schema contract source of truth: `ecl_analysis/ingest/metadata.py`

This is intentionally versioned but non-blocking during the transition from legacy videos to the dedicated iPhone capture app. Missing or invalid metadata should warn, not block analysis.

## Required fields for schema `1.0`

```json
{
"schema_version": "1.0",
"device_model": "iPhone 15 Pro",
"capture_id": "8A0F0A5A-2A79-4D8C-9C2A-0CCF9F9368EA",
"recorded_at": "2026-04-01T10:15:30Z",
"app_version": "0.1.0",
"ios_version": "iOS 26.0",
"video_codec": "h264",
"color_space": "sdr",
"exposure_mode_locked": true,
"exposure_duration": 0.0333333333,
"iso": 80,
"white_balance_mode_locked": true,
"fps": 30,
"resolution": "1920x1080",
"hdr_disabled": true
}
```

## Validation behavior

- Missing sidecar: warning-only in the UI; analysis still proceeds.
- Missing `schema_version`: warning; validator assumes compatibility with schema `1.0` and marks `schema_version_assumed: true` in exported provenance.
- Unknown `schema_version`: warning; validator performs best-effort validation against current fields.
- Missing required acquisition fields: warning-only at load time, but surfaced as validation errors in exported provenance.
- Unknown fields are retained in provenance as `unrecognized_fields` so schema drift is visible without blocking ingest.
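A warning-first validator consistent with the rules above might look like this sketch; the exact field sets and return shape are assumptions, and the authoritative contract lives in `ecl_analysis/ingest/metadata.py`:

```python
# Field lists transcribed from this document; treat the exact sets as assumptions.
REQUIRED_FIELDS = {
    "device_model", "exposure_mode_locked", "exposure_duration", "iso",
    "white_balance_mode_locked", "fps", "resolution", "hdr_disabled",
}
KNOWN_OPTIONAL = {
    "schema_version", "capture_id", "recorded_at", "app_version",
    "ios_version", "video_codec", "color_space",
}


def validate_sidecar(meta):
    """Warning-first validation: reports problems but never blocks analysis."""
    warnings, errors = [], []
    version = meta.get("schema_version")
    if version is None:
        warnings.append("schema_version missing; assuming 1.0")
    elif version != "1.0":
        warnings.append(f"unknown schema_version {version!r}; best-effort validation")
    for field in sorted(REQUIRED_FIELDS - meta.keys()):
        errors.append("missing required field: " + field)
    # Unknown fields are retained so schema drift stays visible.
    unrecognized = sorted(meta.keys() - REQUIRED_FIELDS - KNOWN_OPTIONAL)
    return {"warnings": warnings, "errors": errors,
            "unrecognized_fields": unrecognized,
            "schema_version_assumed": version is None}
```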

## Export behavior

Analysis metadata exports now include:

- `capture_metadata_validation`: whether the sidecar passed validation plus any warnings/errors
- `capture_metadata`: normalized capture provenance when a sidecar is present
- `capture_provenance`: grouped export view that carries both the normalized metadata and the validation record used for the run

That contract is the boundary the iPhone capture app should target.
73 changes: 73 additions & 0 deletions docs/iphone_capture_pipeline_review.md
@@ -0,0 +1,73 @@
# iPhone Capture Pipeline Feasibility Review

## Context
The current app analyzes pre-recorded videos and assumes camera settings are stable enough for relative brightness trends.

## What the project already does well
- Computes brightness using CIE L* from each frame and supports background subtraction and noise/morphological filtering.
- Exports reproducible frame-level CSV files and plots.
- Explicitly documents that manual exposure/ISO/white balance lock is required for valid results.

## Current gap vs. requested workflow
Your proposed workflow is:
1. Record on iPhone with exposure lock and stable imaging pipeline.
2. Persist capture settings in metadata.
3. Automatically deliver video into ECL_Analysis for processing.

The repository currently starts analysis from a local file picker / drag-drop and does not include:
- iPhone capture controls.
- In-app metadata ingestion/validation for camera settings.
- An automated watch/import service for incoming files.

## Feasibility assessment
This is feasible and likely worth it if consistency is your top priority.

### Why it is worth doing
- This codebase already depends on consistency of acquisition conditions for scientific validity.
- Most of your measurement error risk is upstream (capture variability), not downstream (analysis code).
- A capture-controlled iPhone flow should reduce false trends caused by auto-exposure, tone mapping, HDR, or AWB drift.

### Practical constraints to account for
- iPhone camera APIs are iOS-native (AVFoundation). A robust capture app is best built as a separate iOS app, not inside this PyQt desktop app.
- iOS does not guarantee that arbitrary custom metadata can be written into the media container for every codec/profile, so a sidecar JSON record is the more reliable channel.
- HEVC/HDR/Dolby Vision defaults can distort analysis unless explicitly disabled.

## Recommended architecture (incremental)

### Phase 1 (highest ROI, low risk): metadata-aware import in this repo
Add import-time validation in ECL_Analysis:
- Parse container metadata via ffprobe/exiftool (codec, fps, dimensions, capture date, color transfer/profile when available).
- Use a lightweight versioned sidecar JSON contract (`schema_version: "1.0"`) from `ecl_analysis/ingest/metadata.py` with fields like:
- device_model
- exposure_mode_locked
- exposure_duration
- iso
- white_balance_mode_locked
- fps
- resolution
- hdr_disabled
- Warn, rather than block, when required fields are missing or invalid so legacy videos remain analyzable during the transition.
- Normalize recognized sidecar fields before export so downstream analysis artifacts stay reproducible even when inputs vary in representation.
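Container-level cross-checking could start from a thin `ffprobe` wrapper such as this sketch (function names are hypothetical; `ffprobe` must be on `PATH` to actually run the probe):

```python
import json
import subprocess


def ffprobe_command(video_path):
    """Build an ffprobe invocation that emits container/stream metadata as JSON."""
    return ["ffprobe", "-v", "quiet", "-print_format", "json",
            "-show_format", "-show_streams", str(video_path)]


def probe_container(video_path):
    """Run ffprobe and return its parsed JSON output."""
    result = subprocess.run(ffprobe_command(video_path),
                            capture_output=True, check=True, text=True)
    return json.loads(result.stdout)
```

Fields like codec, fps, and dimensions from `probe_container` can then be compared against the sidecar's claims, warning on mismatch.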

### Phase 2: automatic ingest
- Add a watched inbox folder (`incoming/`).
- New files with valid sidecar metadata are queued for analysis automatically.
- Save outputs to deterministic folder names tied to capture IDs.
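The deterministic folder naming could look like this sketch (the sanitization rule and fallback are assumptions):

```python
import re
from pathlib import Path


def capture_output_dir(base_dir, capture_id, source_stem):
    """Deterministic per-capture output folder.

    Prefer the sidecar's capture_id; fall back to the source filename stem.
    Either value is sanitized for filesystem safety.
    """
    name = capture_id or source_stem
    safe = re.sub(r"[^A-Za-z0-9._-]+", "_", name)
    return Path(base_dir) / safe
```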

### Phase 3: iPhone acquisition app
- Build a lightweight iOS capture app (Swift + AVFoundation):
- lock exposure/ISO/white balance/focus
- disable HDR/night mode/deep tone mapping where possible
- force fixed FPS and resolution
- export MOV + sidecar JSON
- upload directly to shared storage / API endpoint consumed by the desktop pipeline

## Suggested acceptance criteria
- Repeated static-scene captures produce <= X% frame-level brightness variance across runs.
- Pipeline surfaces capture-provenance warnings for any run lacking lock-confirmed metadata.
- Analysis output includes capture settings provenance in summary artifacts, including schema version, validation status, and normalized sidecar fields.

## Bottom line
Yes, this is feasible. It is also strategically aligned with the project’s own measurement assumptions.

Best path: keep this Python analyzer as the analysis engine, and add (1) metadata-gated ingest now, then (2) iPhone capture app integration. That gives you immediate quality gains without a risky full rewrite.
50 changes: 50 additions & 0 deletions docs/metadata_ingest_execution_plan.md
@@ -0,0 +1,50 @@
# Metadata Ingest Execution Plan

## Goal
Improve acquisition consistency and provenance in the desktop analyzer while keeping legacy videos analyzable during the transition to a dedicated iPhone capture app.

## Decisions
- Capture metadata validation is warning-first, not hard-blocking.
- The schema authority is a lightweight versioned sidecar contract with `schema_version: "1.0"`.
- The Python desktop app remains the analysis engine.
- The iPhone capture app should live in a separate repository and can start as a minimal AVFoundation MVP.

## Phase Status

### Phase 1: metadata-aware import in this repo
Status: in progress

Implemented:
- Sidecar schema contract and validator in `ecl_analysis/ingest/metadata.py`
- UI load-time metadata status in `ecl_analysis/video_analyzer.py`
- Provenance export fields in analysis metadata outputs
- Tests covering validation behavior and metadata export wiring

Remaining:
- Optional container-level metadata parsing (`ffprobe` / `exiftool`) to cross-check sidecar claims
- More explicit UI surfacing of validation warnings/details beyond the status line

### Phase 2: automatic ingest
Status: in progress

Implemented:
- Inbox ingest script in `tools/ingest_capture_inbox.py`
- Deterministic capture output folders using `capture_id` when present
- Per-capture ingest summaries and optional archive behavior
- Manifest-driven optional auto-analysis flow

Remaining:
- Decide where the watched inbox should live in real deployments
- Add any daemon/service wrapper if continuous unattended ingest is needed

### Phase 3: iPhone capture app
Status: not started in this repository

Planned:
- Minimal Swift / AVFoundation capture app in a separate repository
- Fixed capture settings, sidecar JSON export, and transfer into the desktop ingest path

## Near-Term Next Steps
1. Commit the Phase 1 and Phase 2 desktop-side ingest work.
2. Decide whether container metadata cross-checking is required before starting the iPhone app.
3. Create a separate repository for the iPhone capture MVP.
8 changes: 7 additions & 1 deletion ecl_analysis/analysis/__init__.py
@@ -3,14 +3,20 @@
from .background import compute_background_brightness
from .brightness import compute_brightness, compute_brightness_stats, compute_l_star_frame
from .duration import validate_run_duration
from .masking import MASK_TOP_CANDIDATES, build_consensus_mask, build_signal_mask, evaluate_mask_candidate
from .models import AnalysisRequest, AnalysisResult, MaskCaptureMetadata

__all__ = [
"AnalysisRequest",
"AnalysisResult",
"MaskCaptureMetadata",
"MASK_TOP_CANDIDATES",
"build_consensus_mask",
"build_signal_mask",
"compute_background_brightness",
"compute_brightness",
"compute_brightness_stats",
"compute_l_star_frame",
"evaluate_mask_candidate",
"validate_run_duration",
]