DTUTMO (DTU Tone Mapping Operator) is a biologically inspired high dynamic range (HDR) tone mapping pipeline. It models optical, retinal, and neural stages of the human visual system to convert HDR imagery into perceptually faithful standard dynamic range (SDR) or HDR display encodings. Please refer to the wiki for the technical details.
- End-to-end tone mapping via the `CompleteDTUTMO` pipeline
- DTUCAM color appearance model with CIECAM16 and XLR-CAM options
- Separate rod and cone processing with mesopic combination
- NEW (v2.0.1): Post-processing contrast enhancement and saturation boost
- NEW (v2.0.1): Optional perceptual effects (OTF blur, glare, night vision) as post-processing
- Multiple display mapping strategies, including a production hybrid
- Neural contrast sensitivity (CastleCSF) filtering
- Melanopic EDI estimation utilities for HDR analysis
- Optional PyTorch implementation for GPU acceleration
DTUTMO targets Python 3.10+. Install runtime dependencies:
```bash
pip install numpy scipy
```

Optional GPU acceleration (recommended for large images) requires PyTorch 2.5.1+:

```bash
pip install "torch>=2.5.1" "torchvision>=0.20.1"
```

Install the project itself (editable mode shown) and include the optional GPU stack with extras:

```bash
pip install -e ".[torch]"
```

For immediate HDR processing with recommended settings:
```bash
# Process a single image
python scripts/process_hdr.py input.hdr -o output.png

# Batch process a directory
python scripts/process_hdr.py input_dir/ -o output_dir/

# Custom settings
python scripts/process_hdr.py input.hdr -o output.png --contrast 0.35 --saturation 1.5

# With perceptual effects
python scripts/process_hdr.py input.hdr -o output.png --perceptual realistic
```

See scripts/README.md for all options and examples.
For natural-looking images with good contrast and color:
```python
import numpy as np

from dtutmo import CompleteDTUTMO, DTUTMOConfig, CAMType, DisplayMapping

# Recommended configuration
config = DTUTMOConfig(
    use_cam=CAMType.NONE,
    display_mapping=DisplayMapping.PRODUCTION_HYBRID,
    # Disable pre-processing perceptual effects (prevents flat/dull images)
    use_otf=False,
    use_glare=False,
    use_bilateral=False,
    mesopic_rod_weight_scale=0.0,
    # Enable post-processing contrast enhancement
    postprocess_enabled=True,
    postprocess_contrast=0.25,
    postprocess_saturation=1.3,
)

hdr_image = np.random.rand(512, 512, 3) * 4000.0  # Example HDR radiance map
tmo = CompleteDTUTMO(config)
ldr_image = tmo.process(hdr_image)
```

DTUTMO v2.0.1 provides two approaches for perceptual effects:
For quick previews and artistic effects:
```python
from dtutmo import apply_perceptual_effects, PERCEPTUAL_PRESET_REALISTIC

# Apply emulated perceptual effects after tone mapping
ldr_with_effects = apply_perceptual_effects(ldr_image, PERCEPTUAL_PRESET_REALISTIC)
```

For vision research and accurate simulations:
```bash
# Extract accurate glare and mesopic maps from the pipeline
python examples/process_perceptual_maps.py
```

This extracts the scientifically accurate CIE disability glare and mesopic color shift from the pipeline and applies them as post-processing overlays. See docs/POST_PROCESSING.md for detailed documentation and comparison.
For publication-ready scientific visualizations with diagnostic maps:
```bash
python scripts/process_scientific_viz.py examples/Samples/christmas_photo_studio_04_2k.hdr -o examples/output_scientific_viz
```

Outputs include combined SDR, glare map, mesopic ratio map, luminance map with colorbar, melanopic EDI map with colorbar, and rod weight/pupil/scotopic maps.
Requires matplotlib for colorbar rendering.
DTUTMO v2.0.1 produces natural-looking images with proper contrast and saturation:
From left to right: HDR luminance map, base tone mapping, enhanced luminance map, final output with contrast enhancement
Contrast enhancement comparison (left to right): No enhancement, subtle (0.15), natural (0.30), strong (0.50)
Age-Dependent Glare (CIE 180:2010 disability glare model with spectral weighting):
*(Image panels, left to right: Age 24, Age 64, Age 82.)*
Glare contribution maps showing increased intraocular light scattering with age using the CIE 180:2010 disability glare model with spectral PSF
Night Vision Simulation - Accurate vs Emulated:
Comparison: Accurate model uses proper rod/cone contribution ratios from photoreceptor physiology, while emulated model uses fast heuristic approximations suitable for interactive applications
Accurate glare and mesopic diagnostics with luminance and melanopic EDI maps:
Generated by scripts/process_scientific_viz.py using the accurate glare + mesopic pathway with diagnostic maps and colorbars.
Accurate glare and mesopic processing with visualization maps:
*(Image panels: 2×2 grids showing base (clean tone mapping), mesopic color shift, CIE glare, and combined effects.)*
More Examples:
- `examples/output_perceptual_maps/` - Accurate perceptual maps with debug visualizations
- `examples/output_perceptual_highres/` - High-resolution demonstrations of fast emulated effects
- `examples/output_documentation/` - Labeled figures for documentation
| Approach | Performance | Accuracy | Use Case |
|---|---|---|---|
| Accurate Perceptual Maps | ~2× slower | Scientifically validated | Vision research, publications |
| Fast Emulated Effects | 10-200ms | Perceptually plausible | Interactive use, artistic effects |
See docs/POST_PROCESSING.md for detailed comparison and usage examples.
Configuration is handled through the DTUTMOConfig dataclass:
```python
from dtutmo import CAMType, CompleteDTUTMO, DisplayStandard, DTUTMOConfig

config = DTUTMOConfig(
    use_cam=CAMType.DTUCAM,
    target_display=DisplayStandard.REC_2100_PQ,
)

tmo = CompleteDTUTMO(config)
result = tmo.process(hdr_image, return_intermediate=True)
```

DTUTMO expects scene-linear HDR values in approximate cd/m^2. If your HDR source
is very dim (common with .hdr files that store relative radiance), enable the
auto-exposure boost or provide an explicit scale:
```python
config = DTUTMOConfig(auto_exposure=True)   # default; boosts only very dim inputs
# or
config = DTUTMOConfig(input_scale=500.0)    # manual exposure
```

Auto-exposure is applied by default for SDR targets without a CAM. For HDR targets (PQ/HLG) and CAM paths, the pipeline preserves the input scale unless you set `auto_exposure_target` or `input_scale` explicitly.
If your input color primaries or white point differ from sRGB/D65, specify them:
```python
config = DTUTMOConfig(
    input_color_space="rec2020",
    input_white_point=(95.24, 100.0, 100.89),  # D60 XYZ
    input_white_balance_rgb=(1.05, 1.0, 0.95),
)
```

By default, DTUTMO applies a mild warm balance (1.05, 1.0, 0.95) to counter the greenish cast seen in many HDR sources. Set `input_white_balance_rgb=None` to disable this adjustment.
PyTorch acceleration (if installed):
```python
import torch

from dtutmo import TorchDTUTMO

hdr = torch.rand(1, 3, 512, 512, device="cuda") * 4000.0  # BCHW layout
tmo = TorchDTUTMO()
ldr = tmo.process(hdr)
```

See examples/ for more detailed usage patterns.
DTUTMOConfig.display_mapping controls the final photoreceptor-to-display transform:
- `DisplayMapping.LEGACY` - original display adaptation module
- `DisplayMapping.WHITEBOARD` - fast inverse Naka-Rushton approximation
- `DisplayMapping.FULL_INVERSE` - analytical inverse of the dual-adaptation model
- `DisplayMapping.HYBRID` - automatic blend of whiteboard and full inverse
- `DisplayMapping.PRODUCTION_HYBRID` - gradient-aware hybrid mapper for production
```python
from dtutmo import DisplayMapping

config = DTUTMOConfig(display_mapping=DisplayMapping.PRODUCTION_HYBRID)
tmo = CompleteDTUTMO(config)
```

Production hybrid defaults are tuned for SDR realism: `hybrid_target_mean_luminance=35.0`, `hybrid_black_level=0.02`. Override them to trade highlight roll-off against midtone contrast.
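For example, to bias the mapper toward brighter midtones and deeper blacks, both defaults can be overridden together. The values below are illustrative, not tuned recommendations:

```python
config = DTUTMOConfig(
    display_mapping=DisplayMapping.PRODUCTION_HYBRID,
    hybrid_target_mean_luminance=45.0,  # brighter midtones, earlier highlight roll-off
    hybrid_black_level=0.01,            # deeper blacks at the cost of shadow detail
)
```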
DTUTMO implements the following stages (see dtutmo/core/pipeline.py):
- Optical transfer function (OTF) - ocular blur in frequency domain
- CIE disability glare - wide-angle veiling glare point spread function
- Color conversion - linear sRGB <-> XYZ <-> LMS, Von Kries adaptation
- Local adaptation - multi-scale luminance and TVI threshold (Vangorp et al.)
- Bilateral separation - base/detail split to preserve textures
- Photoreceptors - corrected Hood & Finkelstein model for L/M/S cones and rods
- Mesopic combination - luminance-dependent rod/cone blending
- Neural CSF - CastleCSF opponent-space filtering in frequency domain
- Color appearance (optional) - DTUCAM / CIECAM16 / XLR-CAM forward+inverse
- Display mapping - whiteboard, full inverse, hybrid or production hybrid
Intermediate products (e.g., adaptation maps, cone/rod responses, CSF outputs) can be retrieved with return_intermediate=True.
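A minimal sketch of pulling intermediates out of a run, assuming the intermediates come back as a dict keyed by stage; the key names used here are hypothetical and the actual names are defined in dtutmo/core/pipeline.py:

```python
# Hypothetical intermediate keys -- check dtutmo/core/pipeline.py for the real names
result = tmo.process(hdr_image, return_intermediate=True)

ldr = result["output"]                       # assumed: final display-encoded image
adaptation_map = result.get("adaptation")    # assumed: local adaptation luminance
cone_response = result.get("cone_response")  # assumed: forward L/M/S responses
```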
Below is a compact summary of the core equations implemented in DTUTMO. Symbols use cd/m^2 for luminance unless noted.
Optical Transfer Function (OTF):

```
age_factor = 1 + (age - 20)/100
f_c = 60 / age_factor              # cycles/degree
OTF(f) = exp(-(f / f_c)^2)
```
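As a sketch (not the library's own `otf.compute_otf` / `otf.apply_otf`), the Gaussian OTF above can be applied to a luminance channel with NumPy FFTs; `pixels_per_degree` is the observer parameter named in the config:

```python
import numpy as np

def apply_gaussian_otf(img, age=24.0, pixels_per_degree=45.0):
    """Blur a 2D luminance image with the age-dependent Gaussian OTF (sketch)."""
    h, w = img.shape
    # Per-axis spatial frequencies in cycles/degree for each FFT bin
    fy = np.fft.fftfreq(h) * pixels_per_degree
    fx = np.fft.fftfreq(w) * pixels_per_degree
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    f_c = 60.0 / (1 + (age - 20) / 100)                # age-scaled cutoff
    otf = np.exp(-((f / f_c) ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
```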
CIE Disability Glare PSF (piecewise; angles in degrees):

```
theta in [0.1, 1):  PSF(theta) proportional to 10*A / theta^3
theta in [1, 30):   PSF(theta) proportional to 10*A / theta^2
theta in [30, 100]: PSF(theta) proportional to  5*A / theta^1.5
A = age_factor; optional wavelength scaling proportional to (550 / lambda)^4
PSF normalized; optional Purkinje reflections near ~3-3.5 deg
```
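A minimal NumPy transcription of the piecewise kernel above, with normalization and the optional spectral/Purkinje terms omitted; this is an illustration, not the library's `GlareModel`:

```python
import numpy as np

def glare_psf(theta_deg, age=24.0):
    """Unnormalized piecewise CIE-style glare PSF over angles in degrees (sketch)."""
    A = 1 + (age - 20) / 100  # age factor, as in the OTF above
    theta = np.asarray(theta_deg, dtype=float)
    psf = np.zeros_like(theta)
    band1 = (theta >= 0.1) & (theta < 1)
    band2 = (theta >= 1) & (theta < 30)
    band3 = (theta >= 30) & (theta <= 100)
    psf[band1] = 10 * A / theta[band1] ** 3
    psf[band2] = 10 * A / theta[band2] ** 2
    psf[band3] = 5 * A / theta[band3] ** 1.5
    return psf
```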
Von Kries Chromatic Adaptation:

```
LMS_adapt = D * (LMS / LMS_white) + (1 - D) * LMS
D = F * (1 - (1/3.6) * exp(-(L_A + 42)/92)),  F set by surround
```
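A direct NumPy transcription of the two lines above; `F=1.0` is a placeholder for the surround-dependent factor:

```python
import numpy as np

def von_kries_adapt(lms, lms_white, L_A, F=1.0):
    """Partial Von Kries adaptation with a CIECAM-style degree of adaptation (sketch)."""
    # D approaches 1 under bright adapting luminance, stays below 1 in dim surrounds
    D = np.clip(F * (1 - (1 / 3.6) * np.exp(-(L_A + 42) / 92)), 0.0, 1.0)
    return D * (lms / lms_white) + (1 - D) * lms
```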
Photoreceptor Response (Corrected Hood-Finkelstein + bleaching):

```
I_td    = I * pupil_area * lens_factor                  # retinal trolands
p       = B / (B + ln(I_a_td + epsilon))                # bleaching factor
sigma_H = k1 * ((O1 + I_a_td)/O1)^m                     # semi-saturation
sigma   = sigma_H + sigma_neural / p                    # effective sigma
R_max   = k2 * ((O2 + p*I_a_td)/O2)^(-1/2)              # response ceiling
s(I_a)  = s_base + s_factor * log10(I_a_td + epsilon)   # offset
S       = p * (ln(I_td + epsilon) - s(I_a))             # modulated signal
R       = R_max * sign(S) * |S|^n / (sigma^n + |S|^n)   # Naka-Rushton form
```
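A direct NumPy transcription of the forward model above. All constants here are placeholders; the actual per-photoreceptor values are documented in dtutmo/photoreceptors/response.py:

```python
import numpy as np

def photoreceptor_response(I, I_a, pupil_area, lens_factor,
                           k1=1.0, O1=1.0, m=0.5, k2=1.0, O2=1.0,
                           sigma_neural=0.1, B=10.0,
                           s_base=0.0, s_factor=1.0, n=1.0, epsilon=1e-6):
    """Forward Hood-Finkelstein-style response; constants are placeholders."""
    I_td = I * pupil_area * lens_factor        # signal in retinal trolands
    I_a_td = I_a * pupil_area * lens_factor    # adaptation level in trolands
    p = B / (B + np.log(I_a_td + epsilon))     # bleaching factor
    sigma = k1 * ((O1 + I_a_td) / O1) ** m + sigma_neural / p
    R_max = k2 * ((O2 + p * I_a_td) / O2) ** -0.5
    s = s_base + s_factor * np.log10(I_a_td + epsilon)
    S = p * (np.log(I_td + epsilon) - s)
    return R_max * np.sign(S) * np.abs(S) ** n / (sigma ** n + np.abs(S) ** n)
```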
Inverse Photoreceptor (per channel):

```
r       = clip(R / R_max, 0, 0.99)
x       = [ r / (1 - r) ]^(1/n) * sigma
E       = x / p + s(I_a)
I_td    = exp(E) - epsilon
I_scene = I_td / (pupil_area * lens_factor)
```
Mesopic Combination (local):

```
w_rod = interp(log10(L_p), [log10(0.01), log10(10)], [1, 0])
LMS_mesopic = (1 - w_rod)*LMS_cone + w_rod*R_rod
```
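The rod weight is a straightforward `np.interp` over log photopic luminance. A sketch, with `L_p` in cd/m^2:

```python
import numpy as np

def mesopic_combine(lms_cone, rod_response, L_p):
    """Blend cone LMS with the rod signal via a log-luminance rod weight (sketch)."""
    # w_rod falls linearly in log10(L_p) from 1 at 0.01 cd/m^2 to 0 at 10 cd/m^2
    w_rod = np.interp(np.log10(L_p), [-2.0, 1.0], [1.0, 0.0])
    w_rod = np.asarray(w_rod)[..., np.newaxis]  # broadcast over the LMS channel axis
    return (1 - w_rod) * lms_cone + w_rod * rod_response
```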
Bilateral Base/Detail Split:

```
base     = Gaussian(img, sigma_spatial)
w        = exp(-|img - base|^2 / sigma_range^2)
filtered = w*base + (1 - w)*img
```
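A sketch with SciPy matching the formula above (not necessarily the library's exact implementation); `sigma_spatial` and `sigma_range` values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def base_detail_split(img, sigma_spatial=8.0, sigma_range=0.4):
    """Edge-aware base/detail split of a single-channel image (sketch)."""
    base = gaussian_filter(img, sigma_spatial)
    # w -> 0 near strong edges, so the original pixel is kept there
    w = np.exp(-((img - base) ** 2) / sigma_range ** 2)
    filtered = w * base + (1 - w) * img  # edge-preserving base layer
    detail = img - filtered              # detail residual
    return filtered, detail
```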
Neural CSF (CastleCSF; normalized):

```
Achromatic: log-Gaussian around f_p with bandwidth b, scaled by L_A
Chromatic:  exp(-f / f_p), with luminance-dependent gain
```
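One plausible reading of the achromatic term as a normalized log-Gaussian centered at a peak frequency `f_p`; the actual CastleCSF parameterization, including the `L_A` scaling, lives in dtutmo/neural/, so treat this as purely illustrative:

```python
import numpy as np

def achromatic_csf(f, f_p=4.0, b=1.5):
    """Normalized log-Gaussian CSF around peak frequency f_p (illustrative sketch)."""
    f = np.maximum(f, 1e-6)  # avoid log(0) at DC
    return np.exp(-(np.log2(f / f_p) ** 2) / (2 * b ** 2))
```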
Whiteboard Display Mapping (fast inverse tone curve):

```
R'  = normalize(R) in [0, 1)
L_d = (R' * L_mean) / (1 - R')^n    # optional blend with linear
```
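A minimal sketch of the curve above; the max-based normalization and the 0.99 ceiling are assumptions to keep `R'` strictly below 1:

```python
import numpy as np

def whiteboard_map(R, L_mean=35.0, n=1.0):
    """Fast inverse tone curve: normalized response -> display luminance (sketch)."""
    Rn = np.clip(R / (np.max(R) + 1e-6), 0.0, 0.99)  # keep R' in [0, 1)
    return (Rn * L_mean) / (1 - Rn) ** n
```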
PQ (ST 2084) and HLG encodings:

```
PQ:  E = [ (c1 + c2*L^m1) / (1 + c3*L^m1) ]^m2
HLG: E = sqrt(3*L) for L <= 1/12; else E = a*ln(12*L - b) + c
```
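Both encodings use the standard constants from SMPTE ST 2084 and ITU-R BT.2100; a sketch of the two transfer functions (not the library's `DisplayAdaptation`):

```python
import numpy as np

def pq_encode(L_cdm2):
    """SMPTE ST 2084 (PQ) inverse EOTF; L in cd/m^2, normalized to 10000."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    Y = np.clip(L_cdm2 / 10000.0, 0.0, 1.0)
    return ((c1 + c2 * Y ** m1) / (1 + c3 * Y ** m1)) ** m2

def hlg_encode(L):
    """ITU-R BT.2100 HLG OETF; L is normalized scene luminance in [0, 1]."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    L = np.asarray(L, dtype=float)
    log_branch = a * np.log(np.maximum(12 * L - b, 1e-6)) + c
    return np.where(L <= 1 / 12, np.sqrt(np.maximum(3 * L, 0.0)), log_branch)
```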
The photoreceptor parameters (k1, O1, m, k2, O2, sigma_neural, B, s_base, s_factor, n, epsilon) are per photoreceptor class and are documented in `dtutmo/photoreceptors/response.py` and `dtutmo/photoreceptors/inverse_complete.py`.
- Color Appearance (`dtutmo/appearance/`)
  - `DTUCAM` - physiologically grounded opponent-space model with photoreceptor drive
  - `CIECAM16` - simplified forward/inverse of the CIE 2016 model
  - `XLR-CAM` - extended luminance-range CAM
- Optics (`dtutmo/optics/`)
  - `otf.compute_otf` / `otf.apply_otf` - ocular blur in frequency domain
  - `GlareModel` - CIE 180 veiling glare with optional spectral dependence
- Adaptation (`dtutmo/adaptation/`)
  - `LocalAdaptation` - multi-scale adaptation luminance and TVI
  - `mesopic_global` / `mesopic_local` - rod/cone blending
  - `DisplayAdaptation` - XYZ->RGB and EOTF (gamma, PQ, HLG)
- Photoreceptors (`dtutmo/photoreceptors/`)
  - `CorrectedPhotoreceptorResponse` - forward L/M/S and rod responses
  - `InversePhotoreceptorComplete` - exact analytical inverse per channel
- Neural (`dtutmo/neural/`)
  - `CastleCSF` - opponent-space CSF filter
- Display Mapping (`dtutmo/display/`)
  - `DisplayOutputMapper` - `whiteboard` | `full_inverse` | `hybrid`
  - `HybridDisplayMapper` - production-grade, gradient-aware hybrid
Key enums exposed through DTUTMOConfig (see dtutmo/core/config.py):
- `ViewingCondition`: `DARK`, `DIM`, `AVERAGE`
- `CAMType`: `NONE`, `DTUCAM`, `XLRCAM`, `CIECAM16`
- `DisplayStandard`: `REC_709`, `REC_2020`, `DCI_P3`, `REC_2100_PQ`, `REC_2100_HLG`
- `DisplayMapping`: `LEGACY`, `WHITEBOARD`, `FULL_INVERSE`, `HYBRID`, `PRODUCTION_HYBRID`
Selected numeric parameters (defaults shown in code):
- Observer: `age`, `field_diameter`, `pixels_per_degree`
- Stage toggles: `use_otf`, `use_glare`, `use_bilateral`, `use_local_adapt`, `use_cam`
- Input scaling: `input_scale`, `auto_exposure`, `auto_exposure_key`
- Input color: `input_color_space`, `input_white_point`, `input_white_balance_rgb`
- Optics strength: `otf_strength`, `glare_strength`
- CAM output: `cam_scale_to_peak`
- Photoreceptor timing: `cone_integration`, `rod_integration`
- Display outputs: `target_display`, `display_mapping`
```bash
pytest
```

If your environment blocks network access, pre-install numpy and scipy wheels locally before running tests.
- `dtutmo/` - core optics, photoreceptors, adaptation, appearance, display
- `dtutmo/torch/` - PyTorch-accelerated implementations of key stages
- `examples/` - usage snippets illustrating the API
- `tests/` - automated regression and smoke tests for the public API
- `docs/` - supplemental documentation material
- Hood, D. C., and Finkelstein, M. A. (1986). Sensitivity to light. In K. Boff et al. (Eds.), Handbook of Perception and Human Performance.
- CIE 180:2010. Disability glare. Commission Internationale de l'Eclairage.
- SMPTE ST 2084 (PQ) and ITU-R BT.2100 (HLG) EOTFs.
- Ashraf et al. (2024). CASTLE: A comprehensive CSF for natural images.
HDR test images used in examples are sourced from Poly Haven under the CC0 license.
This project is licensed under the MIT License. See LICENSE for details.