# iPhone Capture Pipeline Feasibility Review

## Context
The current app analyzes pre-recorded videos and assumes camera settings are stable enough for relative brightness trends.

## What the project already does well
- Computes brightness using CIE L\* from each frame and supports background subtraction and noise/morphological filtering.
- Exports reproducible frame-level CSV files and plots.
- Explicitly documents that manual exposure/ISO/white balance lock is required for valid results.

## Current gap vs. requested workflow
Your proposed workflow is:
1. Record on iPhone with exposure lock and stable imaging pipeline.
2. Persist capture settings in metadata.
3. Automatically deliver video into ECL_Analysis for processing.

The repository currently starts analysis from a local file picker / drag-drop and does not include:
- iPhone capture controls.
- In-app metadata ingestion/validation for camera settings.
- An automated watch/import service for incoming files.

## Feasibility assessment
This is feasible and likely worth it if consistency is your top priority.

### Why it is worth doing
- This codebase already depends on consistency of acquisition conditions for scientific validity.
- Most of your measurement error risk is upstream (capture variability), not downstream (analysis code).
- A capture-controlled iPhone flow should reduce false trends caused by auto-exposure, tone mapping, HDR, or AWB drift.

### Practical constraints to account for
- iPhone camera APIs are iOS-native (AVFoundation). A robust capture app is best built as a separate iOS app, not inside this PyQt desktop app.
- iOS may not allow writing arbitrary custom metadata into the video container for every codec/profile combination, so plan on also emitting a sidecar JSON record.
- HEVC/HDR/Dolby Vision defaults can distort analysis unless explicitly disabled.

## Recommended architecture (incremental)

### Phase 1 (highest ROI, low risk): metadata-aware import in this repo
Add import-time validation in ECL_Analysis:
- Parse container metadata via ffprobe/exiftool (codec, fps, dimensions, capture date, color transfer/profile when available).
- Require a sidecar JSON contract with fields like:
- device_model
- exposure_mode_locked
- exposure_duration
- iso
- white_balance_mode_locked
- fps
- resolution
- hdr_disabled
- Block or warn when required fields are missing.
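The sidecar contract above can be enforced with a small validator at import time. This is a minimal sketch, not ECL_Analysis's actual API: the function names (`validate_metadata`, `validate_sidecar`) and the exact field names are illustrative and should be aligned with whatever the capture app actually writes.

```python
import json
from pathlib import Path

# Required sidecar fields from the contract above (names are illustrative).
REQUIRED_FIELDS = {
    "device_model",
    "exposure_mode_locked",
    "exposure_duration",
    "iso",
    "white_balance_mode_locked",
    "fps",
    "resolution",
    "hdr_disabled",
}

def validate_metadata(data: dict) -> list[str]:
    """Return a list of problems; an empty list means the capture passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - data.keys())]
    # Lock flags must be affirmatively true, not merely present.
    for flag in ("exposure_mode_locked", "white_balance_mode_locked", "hdr_disabled"):
        if data.get(flag) is False:
            problems.append(f"capture not locked: {flag}")
    return problems

def validate_sidecar(sidecar_path: Path) -> list[str]:
    """Load a sidecar JSON file and validate it against the contract."""
    return validate_metadata(json.loads(sidecar_path.read_text()))
```

A returned non-empty list can then drive either a hard block or a warning dialog, depending on how strict the lab wants the gate to be.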

### Phase 2: automatic ingest
- Add a watched inbox folder (`incoming/`).
- New files with valid sidecar metadata are queued for analysis automatically.
- Save outputs to deterministic folder names tied to capture IDs.
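The ingest step above can be sketched with a plain polling loop over the inbox; no third-party file-watcher is strictly required. This is an assumption-level sketch: `find_ready_captures` and `watch` are hypothetical names, and the pairing rule (a `.mov` plus a same-stem `.json`) mirrors the sidecar contract from Phase 1.

```python
import time
from pathlib import Path

def find_ready_captures(inbox: Path, seen: set[Path]) -> list[tuple[Path, Path]]:
    """Return (video, sidecar) pairs in the inbox not yet queued.

    A capture counts as ready only when both the .mov and its matching
    .json sidecar exist, so half-copied uploads are never picked up alone.
    """
    ready = []
    for video in sorted(inbox.glob("*.mov")):
        sidecar = video.with_suffix(".json")
        if video not in seen and sidecar.exists():
            ready.append((video, sidecar))
            seen.add(video)
    return ready

def watch(inbox: Path, handle, poll_seconds: float = 5.0) -> None:
    """Poll the inbox forever, passing each new capture pair to `handle`."""
    seen: set[Path] = set()
    while True:
        for video, sidecar in find_ready_captures(inbox, seen):
            handle(video, sidecar)
        time.sleep(poll_seconds)
```

`handle` would validate the sidecar, run the analysis, and write outputs to the capture-ID folder; a polling interval of a few seconds is plenty for lab-scale throughput.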

### Phase 3: iPhone acquisition app
- Build a lightweight iOS capture app (Swift + AVFoundation):
- lock exposure/ISO/white balance/focus
- disable HDR/night mode/deep tone mapping where possible
- force fixed FPS and resolution
- export MOV + sidecar JSON
- upload directly to shared storage / API endpoint consumed by the desktop pipeline

## Suggested acceptance criteria
- Repeated static-scene captures produce <= 2% frame-level brightness variance across runs (relative to mean brightness).
- Pipeline rejects any run lacking lock-confirmed metadata.
- Analysis output includes capture settings provenance in summary artifacts.
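The repeatability criterion above is straightforward to compute from the per-run mean brightness values the pipeline already exports. A minimal sketch, with the threshold left as a parameter so the team can set it from baseline variance plus margin:

```python
import statistics

def brightness_variation_pct(run_means: list[float]) -> float:
    """Spread of per-run mean brightness as a percent of the grand mean
    (the coefficient of variation, times 100)."""
    grand_mean = statistics.mean(run_means)
    return 100.0 * statistics.stdev(run_means) / grand_mean

def passes_repeatability(run_means: list[float], threshold_pct: float) -> bool:
    """True when repeated static-scene captures vary less than the target."""
    return brightness_variation_pct(run_means) <= threshold_pct
```

Running this over, say, five back-to-back captures of the same static scene gives an immediate pass/fail signal for the capture setup before any real experiments are analyzed.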

## Bottom line
Yes, this is feasible. It is also strategically aligned with the project’s own measurement assumptions.

Best path: keep this Python analyzer as the analysis engine, and add (1) metadata-gated ingest now, then (2) iPhone capture app integration. That gives you immediate quality gains without a risky full rewrite.