The Dressmaker

Original #theDress chroma-axis remapping (CIELUV) – dual orientation hypotheses

Quick Summary:

This tool collapses your image’s object colours onto the same single blue↔yellow CIELUV hue axis that made #theDress ambiguous. Two variants differ only in axis orientation (Hypothesis A vs B), echoing alternative illumination assumptions. Use the intensity slider to interpolate between the original and fully mapped “Dress-like” version.

How It Works (Intuitive)

1. Convert all pixels to CIELUV (a perceptual colour space).
2. Find your image’s dominant chroma line (PCA on u*, v*) and collapse each pixel’s chroma onto that line (“one hue”).
3. Align the sign of the axis so the lightness-vs-chroma relationship matches the Dress (via the correlation of L* with v*).
4. Ignore v* amplitude and use only centred u* to drive position along the published Dress axis.
5. Impose the Dress’s per-channel mean and standard deviation (L*, u*, v*).
6. Generate a flipped orientation (Hypothesis B) by negating the axis to represent the rival interpretation.
7. Blend mapped vs original in Luv, then convert back to sRGB with clipping.
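Steps 1 and 7 hinge on the sRGB-to-CIELUV round trip. Below is a minimal numpy sketch of the forward direction (sRGB, then linear RGB, then XYZ, then CIELUV, under a D65 whitepoint); the function name is illustrative, not part of the tool:

```python
import numpy as np

# D65 whitepoint in XYZ, with Y normalized to 1
XN, YN, ZN = 0.95047, 1.0, 1.08883

def srgb_to_luv(rgb):
    """Convert sRGB values in [0,1], shape (..., 3), to CIELUV (L*, u*, v*)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma (IEC 61966-2-1 transfer function)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    # XYZ -> CIELUV via the (u', v') chromaticity coordinates
    denom = np.where(X + 15 * Y + 3 * Z == 0, 1e-12, X + 15 * Y + 3 * Z)
    up, vp = 4 * X / denom, 9 * Y / denom
    upn = 4 * XN / (XN + 15 * YN + 3 * ZN)
    vpn = 9 * YN / (XN + 15 * YN + 3 * ZN)
    yr = Y / YN
    L = np.where(yr > (6 / 29) ** 3, 116 * np.cbrt(yr) - 16, (29 / 3) ** 3 * yr)
    return np.stack([L, 13 * L * (up - upn), 13 * L * (vp - vpn)], axis=-1)
```

Because it accepts any (..., 3) array, a whole H×W×3 image can be passed in directly.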

Algorithm Steps (Technical)
Input: sRGB image (the entire image is treated as the object), plus the fixed Dress statistics below.
Constants (Dress):
  DressPC = [0.245, 0.970] (normalized)
  DressMean = [45.2, -8.7, 15.3]
  DressStd  = [12.8, 18.4, 22.1]
  DressCorr(L,v) ≈ -0.58
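For reference, these constants can be carried as numpy arrays (names are illustrative; the values are the ones listed above):

```python
import numpy as np

# Dress statistics used by the mapper (values as published in this document)
DRESS_PC   = np.array([0.245, 0.970])      # unit chroma axis in (u*, v*)
DRESS_MEAN = np.array([45.2, -8.7, 15.3])  # target means for (L*, u*, v*)
DRESS_STD  = np.array([12.8, 18.4, 22.1])  # target std devs for (L*, u*, v*)
DRESS_CORR_LV = -0.58                      # corr(L*, v*) used for sign alignment
```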

1. sRGB→linear→XYZ→CIELUV (D65 whitepoint).
2. Compute PCA on (u,v): eigenvector p_user (unit), means μ_u, μ_v.
3. One-hue collapse: (u,v) → μ + t·p_user with t = ((u,v) - μ)·p_user, where μ = (μ_u, μ_v).
4. Correlation alignment: r_user = corr(L, v_collapse);
   if r_user * DressCorr < 0 then (u,v) ← - (u,v).
5. Amplitude (deliberate quirk): a = u - mean(u), i.e. only the centred u* component is kept.
6. Map to Dress axis: (u_map, v_map) = a * DressPC.
7. Channel-wise z-score to Dress stats:
   For k in {L,u,v}: out_k = DressStd_k * ( (chan_k - mean(chan_k))/std(chan_k) ) + DressMean_k
8. Hypothesis A = mapped; Hypothesis B = mapped with axis flipped before standardization (equivalent to sign inversion).
9. Intensity blend in Luv: L_mix = (1−α)·L_orig + α·L_mapped (and likewise for u, v), where α ∈ [0,1] is the slider intensity (distinct from the projection t in step 3).
10. Luv→XYZ→linear RGB→gamma sRGB; clip to [0,1]; the out-of-gamut (OOG) percentage is estimated prior to clipping.
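Steps 2 through 9 can be sketched end to end in numpy. This is a hedged sketch, not the tool's actual implementation: it assumes pixels have already been converted to an (N, 3) Luv array, and a `flip` flag stands in for Hypothesis B:

```python
import numpy as np

def dress_map(luv, dress_pc, dress_mean, dress_std, dress_corr=-0.58,
              flip=False, intensity=1.0):
    """Collapse chroma, align the sign, map to the Dress axis,
    standardize to Dress stats, and blend. `luv` is an (N, 3) array."""
    L = luv[:, 0]
    uv = luv[:, 1:3]

    # Step 2: PCA on (u, v) -- leading eigenvector of the 2x2 chroma covariance
    mu = uv.mean(axis=0)
    w, V = np.linalg.eigh(np.cov((uv - mu).T))
    p_user = V[:, np.argmax(w)]

    # Step 3: one-hue collapse onto the dominant chroma line
    t = (uv - mu) @ p_user
    uv_c = mu + t[:, None] * p_user[None, :]

    # Step 4: flip the collapsed chroma if corr(L, v) disagrees with the Dress
    r_user = np.corrcoef(L, uv_c[:, 1])[0, 1]
    if r_user * dress_corr < 0:
        uv_c = -uv_c

    # Steps 5-6: centred u* amplitude drives position along the Dress axis
    a = uv_c[:, 0] - uv_c[:, 0].mean()
    if flip:                       # Hypothesis B: rival illumination reading
        a = -a
    uv_map = a[:, None] * dress_pc[None, :]

    # Step 7: channel-wise z-score to the Dress statistics
    mapped = np.column_stack([L, uv_map[:, 0], uv_map[:, 1]])
    std = mapped.std(axis=0)
    std[std == 0] = 1.0            # guard against flat channels
    mapped = dress_std * (mapped - mapped.mean(axis=0)) / std + dress_mean

    # Step 9: intensity blend in Luv
    return (1 - intensity) * luv + intensity * mapped
```

At `intensity=0.0` the input is returned unchanged; at `1.0` the output channels carry exactly the target Dress mean and standard deviation.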
      
FAQ & Tips
  • Two panels? Same mapping, opposite axis direction (illumination ambiguity).
  • Intensity? 0% = original; 100% = full Dress statistics & axis.
  • Why CIELUV? Linearizes blue↔yellow chroma geometry so the Dress axis stays perceptually straight.
  • Why only u* amplitude? Using u* deviation keeps the mapping aligned to the published dress axis while preserving lightness trends.
  • Different image types? Works best on objects with two dominant colour regions and moderate lightness contrast.
  • Gamut clipping? Hard clipping keeps values in range; mild shifts near extremes are expected.
  • Real Dress stats? Replace constants with empirically computed stats if you have the masked Dress image.
  • Masking? This demo treats the full image; add an object mask uploader for tighter control.
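The final leg (step 10, and the gamut-clipping behaviour noted in the FAQ) can also be sketched in numpy. Again a hedged sketch with illustrative names; it returns both the clipped sRGB image and the pre-clip out-of-gamut fraction:

```python
import numpy as np

# D65 whitepoint in XYZ, with Y normalized to 1
XN, YN, ZN = 0.95047, 1.0, 1.08883

def luv_to_srgb(luv):
    """CIELUV -> XYZ -> linear RGB -> gamma sRGB.
    Returns (clipped sRGB in [0,1], out-of-gamut fraction before clipping)."""
    L, u, v = luv[..., 0], luv[..., 1], luv[..., 2]
    upn = 4 * XN / (XN + 15 * YN + 3 * ZN)
    vpn = 9 * YN / (XN + 15 * YN + 3 * ZN)
    Lsafe = np.where(L == 0, 1e-12, L)
    up = u / (13 * Lsafe) + upn
    vp = v / (13 * Lsafe) + vpn
    # Invert the CIE lightness function (the cutoff L* = 8 matches (6/29)^3)
    Y = np.where(L > 8, ((L + 16) / 116) ** 3, L * (3 / 29) ** 3) * YN
    X = Y * 9 * up / (4 * vp)
    Z = Y * (12 - 3 * up - 20 * vp) / (4 * vp)
    xyz = np.stack([X, Y, Z], axis=-1)
    # XYZ -> linear RGB (inverse of the sRGB D65 matrix)
    Minv = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                     [-0.9692660,  1.8760108,  0.0415560],
                     [ 0.0556434, -0.2040259,  1.0572252]])
    lin = xyz @ Minv.T
    oog = np.mean((lin < 0) | (lin > 1))   # OOG fraction, estimated pre-clip
    lin = np.clip(lin, 0.0, 1.0)
    # Apply the sRGB gamma
    srgb = np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)
    return srgb, oog
```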
