Why does the chromaticity diagram look like that?

PaulHoule
11 replies
2d22h

That particular version of the chromaticity diagram makes it look like the colors missing from your display are various shades of laser-pointer green, as opposed to all the shades of red and blue that are also missing because really saturated red and blue primaries are too dim (per unit of energy) to use.

See https://nanosys.com/blog-archive/2012/08/14/color-space-conf...

I learned a lot more about color management than I wanted to know in the process of making red-cyan stereograms, because I found that when I asked for sRGB red I was getting something like (180,16,16) on my wide-gamut monitor, which resulted in serious crosstalk between the channels.

Right now I am working with a seamstress friend on custom printed fabrics and I have a flower print where yellow somehow turned to orange in the midst of processing the image and I want to get it debugged and thoroughly proofed before I send out the order... I am still learning more than I want to know about color management.

ThrowawayTestr
6 replies
2d21h

Right now I am working with a seamstress friend on custom printed fabrics and I have a flower print where yellow somehow turned to orange in the midst of processing the image

That's why Pantone makes so much money.

lolinder
5 replies
2d16h

And, importantly, that's why Pantone isn't just a freeloader making money off of nothing the way that some of the more clickbaity parts of the internet represent them. They're not solving an easy problem; if they were, they wouldn't get paid.

ttoinou
2 replies
2d15h

Aren’t they (present humans) simply profiting from work done decades ago (by past humans, not them) through patents or other kinds of IP / protections granted by governments? Surely it was original and useful back then, but by now it’s part of humanity’s knowledge base

itishappy
1 replies
2d3h

Somewhat, but that work has utility to this day. You can find competing products out there if you don't want to pay Pantone's rates, but there's a reason many people still work with Pantone. It's a universal language of sorts.

https://www.culturehustleusa.com/products/freetonebook

marcosdumay
0 replies
1d21h

What defines them as "leeches" or not is if they keep adding value to their work.

Passively profiting from some work done decades ago makes them something you want to destroy; increasingly improving their catalogue with valuable new tones and knowledge makes them something you want to protect.

Dylan16807
1 replies
2d14h

They solve a very real problem with most of their products. Universal physical references and supplies are great.

But charging for the libraries that list a basic sRGB or CMYK code for each Pantone color is pure leeching, and a pain in the ass.

PaulHoule
0 replies
1d2h

Pantone is not the immediate answer to my problem because my image is photographic. Pantone's strength is that you can mix a few colorants from a library to make a number of precise spot colors, many of which can't be rendered in CMYK. It has dayglo, metallic and all sorts of amazing things.

My vendor (Spoonflower) takes sRGB and will sell me a set of color swatches with hex code labels that I can use, like the Pantone book, to calibrate color by eye or colorimeter, etc.

Spoonflower will give me a 10% commission if somebody buys my product in their marketplace, but the product is expensive; a luxurious fabric that costs $20 a yard costs upwards of $50 if it is inkjet printed. For all this the only material investment is proofing, prototyping and showpiece production.

If I really wanted to make money though I could find another vendor who, I think, could set up a run of 10,000 yards on an offset lithography press. The litho process is more tolerant than inkjet so the shop can mix up Pantone colors as spots.

My photo is mostly monochromatic and might come across well in spot color so it might be a choice to pick a color I like out of the Pantone book if I was working with a larger run print shop. Problem is I'd have to put up quite a bit of money and then warehouse the stuff and be really certain people will buy it or I can make something out of it.

My photo is monochromatic and might do OK in spot color; if I thought a Pantone chip was close to the object color I could do it that way, but this time I'm going to do it with sRGB chips.

ahazred8ta
1 replies
1d23h

Right now I am working on custom printed fabrics

You almost certainly need to work in the vendor's CMYK/SWOPv2 color space profile to get the colors to come out right. Ask them. Saturated RGB doesn't convert well. Plan B: desaturate to a slightly grayish yellow.

One example: https://ctnbee.com/en/upload
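
For the sRGB-to-CMYK proofing step, here is a minimal Python sketch using Pillow's ImageCms, assuming you have the vendor's ICC profile on disk (the file names, including "USWebCoatedSWOP.icc", are placeholders; use whatever profile the vendor actually specifies):

    from PIL import Image, ImageCms

    # Soft-proof an sRGB image in a print CMYK space before sending it out.
    img = Image.open("flower_print.png").convert("RGB")    # placeholder file
    srgb = ImageCms.createProfile("sRGB")                  # built-in sRGB profile
    cmyk = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")  # placeholder vendor profile

    # Convert pixel data through the two profiles; out-of-gamut colors are
    # mapped by the default (perceptual) rendering intent.
    proof = ImageCms.profileToProfile(img, srgb, cmyk, outputMode="CMYK")
    proof.save("flower_print_proof.tif")

Comparing the proof against the original on a calibrated screen shows you where a saturated yellow is likely to shift before any fabric gets printed.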

PaulHoule
0 replies
1d2h

The vendor specifies sRGB, so sRGB it is. For that matter my Epson printer specifies a native color space which is basically RGB; I wish I could tell it to lay down specific amounts of C, M, Y and K pigments, but I can't.

I have the real object to compare with the screen and with paper and fabric prints under various viewing conditions. I am going to print small samples, order a set of RGB labeled color swatches, etc. I have to tighten up my whole chain because I have a wide gamut monitor and also produce Display P3 for social sharing and would like to prototype realistically on my inkjet.

derefr
0 replies
2d21h

as opposed to all the shades of red and blue that are missing because really saturated red and blue primaries are too dim (per unit of energy) to use.

Would another way to put that be, that the chromaticity diagram could keep going southeastward (i.e. the XYZ color-space could have the X and Z activation functions extended leftward and rightward), but due to the frequencies continuing on the spectral line, that area of the diagram would necessarily be made mostly of infrared and ultraviolet frequencies that we can't see?

ProllyInfamous
0 replies
2d1h

From your linked article:

The colored region of the plot, labeled "visible region", contains points in the space representing colors humans can see, while the non-colored region contains points representing physically impossible cone stimulations.

Visual perception goes way way way beyond just what the eyes' cones can physically see — you've got an entire brain back there trying to interpret optic nerve phenomena!

For a nice brain tickler, look up chimerical colors (e.g. stygian blue, self-luminous red, hyperbolic orange), which are physically impossible for the cones to detect, yet can be seen without actually having been seen.†

†: https://www.perplexity.ai/search/please-provide-chimerical-c...

W: https://en.wikipedia.org/wiki/Impossible_color

radicality
10 replies
2d21h

Kinda related, but does someone maybe have a good set of links to help understand what HDR actually is? Whenever I tried in the past, I always got lost and none of it was intuitive.

There’s so many concepts there like: color spaces, transfer functions, HDR vs Apple’s XDR HDR, HLG vs Dolby Vision, mastering displays, max brightness vs peak brightness, all the different hdr monitor certification levels, 8 bit vs 10bit, “full” vs “video” levels when recording video etc etc.

Example use case - I want to play iPhone-recorded videos using mpv on my MacBook. There are hundreds of knobs to set, and while I can muck around with them and get it looking close-ish to what I get playing the file in QuickTime/Finder, I still have no idea what any of these settings are doing.

wongarsu
7 replies
2d21h

HDR is whatever marketing wants it to be.

Originally it's just about being able to show both really dark and really bright colors. That's really easy if each pixel is an individual LED, but very hard in LCD monitors, where there is one big backlight and pixels are just dimmable filters in front of it. Or alternatively, on the sensor side, it's the ability to capture really bright and really dark spots in the same shot, something our sensors are much worse at than our eyes, but you can pull some tricks.

Once you have that ability you notice that 8 bits of brightness information isn't that much. So you go with 10 or 16 bits. Your gamma setting also plays a role (the transfer function that maps your linear light values to the non-linear encoded values you store).

And of course the people who care about HDR have a big overlap with people who care about colors, so that's where your color spaces, certifying and calibrating monitors to match those color spaces etc comes in. It's really adjacent but often just rolled in for convenience.

radicality
5 replies
2d21h

More bits to store more color/brightness etc makes sense.

I think my main confusion has usually been that it all feels like some kind of a… hack? Suppose I set my macbook screen to max brightness, and then open up a normal “white” png. Looks fine, and you would think “well, the display is at max brightness, and the png is filled with white”, so a fair conclusion would be that that's the whitest/brightest the screen goes. But then you open another png, of a more special “whiter white”, and suddenly you see your screen actually can go brighter! So you get thoughts like “why is this white brighter”, “how do I trigger it”, “what are the actual limits of my screen”, “is this all some separate hacky code path”, “how come I only see it in images/videos, and not UI elements”, “is it possible to make a native Mac ui with that brightness”.

In any case, thanks for the answer. I might be overthinking it and there’s probably lots of historical/legacy reasons for the way things are with hdr.

wongarsu
0 replies
2d20h

there’s probably lots of historical/legacy reasons for the way things are with hdr

That's pretty much it. If you use an HDR TV it will usually work like you describe. It would display the same white for a normal white PNG and an "even whiter" "HDR" PNG.

Apple's decision makes sense if you imagine SDR (so not-HDR) images as HDR images clipped to some SDR range in the middle of the HDR range (leading to lots of over- and underexposure in the SDR image). If you then show them side by side, of course the whitest white in the HDR range is whiter than the whitest white in the SDR image. Of course that's a crude simplification of how images work, but it makes for a great demo: HDR images really pop and look visually better. If you stretched everything to the same brightness range the HDR images wouldn't be nearly as impressive, just more detail and less color banding. The marketing people wouldn't like that.

suzumer
0 replies
2d17h

While one commenter had it somewhat right that HDR has to do with how bright/dark an image can be, the main thing HDR images specify is how far ABOVE reference white you can display. With sRGB, 100 percent on all channels is 100 percent white (the brightness of a perfect Lambertian reflector). Rec. 2100 together with Rec. 2408 specifies modern HDR encoding, where 203 nits is 100 percent white, and anything above that is brighter content (light sources, specular reflections, etc.). So if a white image encoded in SDR looks dimmer than HDR for non-specular detail, that is probably an encoding or decoding error.
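
To make that concrete, here is a minimal sketch of the PQ inverse EOTF from SMPTE ST 2084 (the constants come from the spec), mapping absolute luminance in nits to a [0, 1] signal. The 203-nit reference white of Rec. 2408 lands near 58% of signal, leaving the top of the range for highlights:

    # PQ inverse EOTF (SMPTE ST 2084): luminance in nits -> [0, 1] signal.
    M1, M2 = 2610 / 16384, 2523 / 4096 * 128
    C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_encode(nits):
        y = (nits / 10000) ** M1
        return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

    print(pq_encode(203))    # ~0.58: Rec. 2408 reference white
    print(pq_encode(1000))   # ~0.75: a bright highlight
    print(pq_encode(10000))  # 1.0: PQ's absolute ceiling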

duskwuff
0 replies
2d20h

So you get thoughts like [...] “what are the actual limits of my screen” [...]

Some of the limitations, at least in Apple's displays, are thermal! The backlight cannot run at full brightness continuously across the full display; it can only hit its peak brightness (1600 nits) in a small area, or for a short time.

dahart
0 replies
2d14h

One motivation for HDR is having absolute physical units, such as luminance in candelas per square meter. You can imagine that might be a floating point value and that 8 bits per channel might not be enough.

The problem you’re describing is that color brightness is relative, but if you want physical units and you have a calibrated display then adjusting your brightness is not allowed, because it would break the calibration.

Another reason for HDR is to allow you to change the “exposure” of an image. Imagine you take a photo of the sun with a camera. It clips to white. Most of the time even the whole sky clips to white, and clouds too. With a film camera, once the film is exposed, that’s it. You can’t see the sun or clouds because they got clamped to white. But what if you had a special camera that could see any color value, bright or dark, and you could decide later which parts are white and which are black. That’s what HDR gives you - lots of range, and it’s not necessarily all meant to be visible.

In computer graphics, this is useful for the same reason - if you render something with path tracing, you don’t want to expose it and throw away information that happens to get clamped to white. You want to save out the physical units and then simulate the exposure part later, so you don’t have to re-render.

So that’s all to say- the concept of HDR isn’t hacky at all, it’s closer to physics, but that can make it a bit harder to use and understand. Others have pointed out that productized HDR can be a confusing array of marketing mumbo jumbo, and that’s true, but not because HDR is messed up, that’s just a thing companies tend to do to consumers when dealing with science and technology.

I was introduced to HDR image formats in college while studying physically based rendering, and the first HDR image format I remember was Greg Ward's .hdr format, which is clever: an 8-bit mantissa per channel and an 8-bit shared exponent, because if, say, green is way brighter than the other channels, you can't see the dark detail in red & blue anyway.
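
For the curious, a rough Python sketch of that shared-exponent scheme (not Ward's exact production code, but the same idea):

    import math

    def rgbe_encode(r, g, b):
        """Pack linear RGB into 8-bit mantissas plus one shared 8-bit exponent."""
        m = max(r, g, b)
        if m < 1e-32:
            return (0, 0, 0, 0)
        frac, exp = math.frexp(m)     # m == frac * 2**exp, frac in [0.5, 1)
        scale = frac * 256.0 / m      # maps the max channel into [128, 256)
        return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

    def rgbe_decode(rm, gm, bm, e):
        if e == 0:
            return (0.0, 0.0, 0.0)
        f = math.ldexp(1.0, e - 136)  # 2**(e - 128) / 256
        return (rm * f, gm * f, bm * f)

    # Green dominates, so red/blue keep only coarse detail --
    # exactly the tradeoff described above:
    print(rgbe_decode(*rgbe_encode(0.8, 3000.0, 40.0)))  # (0.0, 2992.0, 32.0)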

crazygringo
0 replies
2d20h

It is all extremely hacky.

Because HDR allows us to encode brightnesses that virtually no consumer displays can display.

And so deciding how to display those on any given display, on a given OS, in a given app, is making whatever "hacky" and totally non-standardized tradeoffs the display+OS+app decide to make. And they're all different.

It's a complete mess. I'm strongly of the opinion that HDR made a fundamental mistake in trying to design for "ideal" hardware that nobody has, and then leaving "degraded" operation to be implementation-specific.

It's a complete design failure that playing HDR content on different apps/devices results in output that is often too dark and often has a telltale green tint. It's ironic that in practice, something meant to enable higher brightness and greater color accuracy has resulted in darker images and color that varies from slightly wrong to totally wrong.

bscphil
0 replies
1d23h

Originally it's just about being able to show both really dark and really bright colors.

Sort of. The primary significance of HDR is the ability to specify absolute luminance.

Good quality SDR displays were already very bright and already had high native dynamic range (a large ratio between the brightest white and darkest black). The issue was that the media specifications did not control the brightness in any way. So a 1.0 luminance pixel was just whatever the brightest value the display could show (usually tuned by a brightness setting). And a 0.0 luminance pixel was just the minimum brightness the display could show (unfortunately, usually also affected by the brightness setting thanks to backlighting).

What HDR fundamentally changes is not the brightness of displays or their dynamic range (some HDR displays are worse than some older SDR displays when it comes to dynamic range), but the fact that HDR media has absolute luminance. This means that creators can now make highlights (stars, explosions) close to the peak brightness of a bright display, while diffuse whites are now dimmer.

Prior to HDR, a good bright display was just a calibrated display with its brightness incorrectly turned up too high, making everything bright. With HDR a good bright display is a display calibrated with correct brightness and the ability to show highlights and saturated colors with much more power than a normal white.

You're right about higher bit depths, though. Because HDR media describes a much wider dynamic range on a properly calibrated display (though not necessarily on a typical over-bright SDR display), it has a different gamma curve to allocate bits more appropriately, and is typically without any banding in only 10 or 12 bits. https://en.wikipedia.org/wiki/Perceptual_quantizer

ttoinou
0 replies
2d15h

You can start by understanding the physics behind high dynamic range. Any real-world analog value can have a tremendous dynamic range, and it’s not just light: distances, sound, weight, time, frequencies, etc. We always need to reduce / compress / limit / saturate dynamic range when converting to digital values, and we always need to expand it back when reconverting to an analog signal

carlosjobim
9 replies
2d20h

I think the explanation is simple: Color is light and it is linear going from ultraviolet to blue to green to yellow to red to infrared. It's just a line.

In physical reality, there exists no purple light. Our minds make up all the shades of purple and magenta between blue and red when our eyes receive both red and blue light.

So in order to include the magentas, you need to draw another line between blue and red. Meaning you have to bend the real color line. And that's what we see in the chromaticity diagram.

tobinfricke
6 replies
2d19h

Wavelength (or frequency) is linear but light, in general, is made up of many wavelengths -- an entire spectrum.

carlosjobim
5 replies
2d18h

Each wavelength of visible light corresponds to a color on the gradient from blue-green-yellow-red. Purple or magenta colors do not exist as light and exist only in our minds. That's why rainbows do not contain any of these colors.

ianburrell
4 replies
2d16h

Purple totally exists, but isn’t a single wavelength of light. It is multiple wavelengths of light. Physical colors are all blends of wavelengths.

Displays are tricking the eye by showing three single colors that look like real color.

carlosjobim
3 replies
2d7h

As a hue, magenta and purple shades do not physically exist in the electromagnetic spectrum. All hues on the gradient blue-green-yellow-red exist and can be generated by a single wavelength of radiation.

You can test this in physical reality with a prism, which will never show purple shades, because it is an extraspectral color that is made up in our minds.

Color can thus exist in pure form in physical reality. However, our eyes may not perceive colors purely, since our receptors' responses overlap.

ianburrell
2 replies
2d1h

Colors are not single wavelengths. If you are redefining what color means, you should use a different word to reduce confusion. Maybe spectral color.

Secondary colors are colors. Notice that hue on the color wheel includes magenta and purple because it includes mixtures of the primaries. Magenta and purple exist in the electromagnetic spectrum, but not as single wavelengths.

There are imaginary colors that are represented in color space but not by any physical light spectrum. But purple and magenta are not imaginary, as you can tell from the Roman emperors' clothing.

carlosjobim
1 replies
1d20h

The discussion on this specific graph is fairly scientific and not about cultural color perception. I changed the wording to "hue" to be more clear. Magenta and purple absolutely do not exist in the electromagnetic spectrum as their own wavelengths. Every other hue we can perceive does.

"Primary" and "secondary" colors are not scientific terms, but cultural terms.

The evidence is right there in physical reality: a rainbow or a prism will not include magenta/purple, because the colors between red and blue are not part of the spectrum. It is an amazing thing that we can make up these shades in our minds.

But I was wrong in my original comment, because the red-blue connection can also be done by making a color wheel, i.e. making a complete curve.

carlosjobim
0 replies
1d4h

Edit: So let's make a comparison with sound, which is also waves, but not radiation. Every note of music corresponds to a pure sine wave. Just as every color hue corresponds to a pure electromagnetic wave. Except the purples and magentas, which do not occur like this.

Then you can say that most colors we see are mixed and reflected, which is true, just as most notes are not pure sine waves.

mncharity
0 replies
2d14h

Color is light

For an ELI5 on a "maybe teach color better by emphasizing spectra?" side project, I went for hard disjointness on "color" vs "light". Distinguishing world-physics-light from wetware-perception-color. Writing not "red light", but "\"red\" light". So physical spectra were grayscale, on nm and energy. Paired with perceptual spectra in color, on hue angle and luminosity. And both could be wrapped around a 3D perceptual color space (tweening the physical spectra from nm to hue). Or along a 2D non-primate mammalian dichromat space, to emphasize the wetware dependence. Misconceptions around color are so very pervasive, K-graduate, that extreme care for clarity seems helpful.

_wire_
0 replies
1d3h

I think the explanation is simple: Color is light and it is linear going from ultraviolet to blue to green to yellow to red to infrared. It's just a line.

The most common misunderstanding of color is that it has any property extrinsic to the seer. Color is in the mind.

In physical reality, there exists no purple light.

In what you imply is physical reality (optics) there's no color at all.

Our minds make up all the shades of purple and magenta between blue and red when our eyes receive both red and blue light.

To repeat, your mind makes up every color. It's a category error to imbue light with a trait of color independent of the seer. In color science, what you refer to as physical reality is more precisely termed spectral power distribution of electromagnetic radiation in the visible range.

To the extent that radiation stimulates a color response for the human visual system, there most certainly is purple light; it's just not stimulated by a single narrow band of radiation.

As you have attempted to note, the CIE horseshoe diagram (spectrum locus) illustrates this with the "purple boundary" at the bottom, which represents the perceptual edge of mixing long- and short-wavelength stimulus (red/blue primaries).

But to presume that there's a natural color of light is a distinctly human biased statement of reality. Different creatures have characteristic responses, with very different traits from humans. Imagine having sensitivity to polarization.

The artist's color wheel is circular because that's how color as qualia manifests in the mind of the artist. The physics of the qualia have more to do with the complex intrinsic response of the organism than with the extrinsic stimulus.

To speak of the color of light is common sense, but over-imbues the physics of the stimulus with human traits.

Make explanations as simple as possible but no simpler.

...Meaning you have to bend the real color line. And that's what we see in the chromaticity diagram.

Any explanation that regards "real color" as a trait of light rather than of perception misleads more than it clarifies.

SirMaster
7 replies
2d23h

It's probably good to start with XYZ, but we have much better color spaces now that do a better job of correlating with our vision.

Mainly CIE 1976 L*u*v*, and more recently ICtCp from Dolby research.

refulgentis
2 replies
2d22h

CAM-16. When in doubt, ask the color scientists :)

refulgentis
0 replies
2d20h

No, it's not, by definition. It's one matrix multiplication to do an approximation of it. More here: https://news.ycombinator.com/item?id=41081832

The only claim to superiority it makes is gradients, and that's a category error: they blend polar-opposite hues in the Cartesian space (i.e. x / y / z), rather than polar (i.e. h/s/l). Opposite hues mean lerp'ing in Cartesian brings it through the center of the circle, at 0 saturation. Thus, blue and yellow do combine to an off-white. Engineering around it indicates something fundamentally off, much less that it is better. I don't ascribe ill intent but I do worry very much about how widely this is misunderstood.
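
A toy illustration of that blending problem, in an abstract opponent (a, b) plane rather than any specific color space:

    import math

    # Two fully chromatic colors at opposite hue angles (110 deg vs 290 deg)
    # in an abstract (a, b) opponent plane, both at chroma 0.4.
    def ab(hue_deg, chroma=0.4):
        return (chroma * math.cos(math.radians(hue_deg)),
                chroma * math.sin(math.radians(hue_deg)))

    c1, c2 = ab(110), ab(290)

    # Cartesian midpoint passes through the achromatic center: chroma ~0 (gray).
    mid = ((c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2)
    print(math.hypot(*mid))                  # ~0.0

    # Polar midpoint holds chroma and interpolates the hue angle instead.
    print(math.hypot(*ab((110 + 290) / 2)))  # 0.4, still fully chromatic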

contravariant
1 replies
2d21h

The xyY colour space is designed such that the colours of light you get by blending two points all lie on the line between the two corresponding points. This makes it extremely helpful when you want to figure out which colours you can make with a particular set of primaries. Similarly you can draw the colours corresponding to pure wavelengths and figure out the entire space of physically possible colours by taking its convex closure.

These features are not really replicable in any other colour space, at best you can use a linear transformation of it (which XYZ already is, and it has almost all properties you could want of a choice of basis).
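
A quick numeric check of that line property (the XYZ values here are made up, not measured):

    # Additively mix two lights in XYZ and confirm the mixture's xy
    # chromaticity falls on the segment between the endpoints' xy.
    def xy(X, Y, Z):
        s = X + Y + Z
        return (X / s, Y / s)

    light1 = (0.2, 0.1, 0.7)   # arbitrary bluish light, XYZ
    light2 = (0.5, 0.6, 0.1)   # arbitrary greenish light, XYZ
    mix = tuple(a + b for a, b in zip(light1, light2))

    x1, y1 = xy(*light1); x2, y2 = xy(*light2); xm, ym = xy(*mix)
    # Collinear iff this cross product is zero:
    print((x2 - x1) * (ym - y1) - (y2 - y1) * (xm - x1))  # ~0.0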

JKCalhoun
0 replies
2d20h

That's true. And I will add that the viewable color gamut of a display can be depicted with a simple triangle on the xyY plot. All you need to know are the three chromaticity values for the red, green and blue phosphors — they make up the three corners of the triangle.

paipa
0 replies
2d19h

I don't think starting with XYZ and color matching functions is a good idea. LMS and cone response functions are a more fundamental and intuitive description of human color response, so if you're going to bother with XYZ at all, you should arrive there from first principles, via LMS.

mncharity
6 replies
2d18h

Does anyone know of a nice "pedagogical" color space? That is, one optimized for teaching and learning, for correctness rather than for simple math? Where the space's highly-noticeable characteristics are actual features of human perception, rather than the usual mess of "nope, that too is a model artifact" (mostly from optimizing for computation). And full-gamut, well behaved out to spectral locus. And with at least somewhat linear hues and color combination. Sort of the Munsell niche, but full gamut, and this century.

I wasn't able to find anything even close, for a "maybe teach color better by emphasizing spectra?" side project, so I kludged. CAM16UCS as state-of-the-art for perceptual color, untwisted with Jzazbz for linear hues (it also sanity checked absolute luminosity), with a rather-unprincipled mashing down of CAM's IIUC-non-perceptual near-locus silly blue tail. Implemented as lookup tables. If there is any related work out there, I'd love to hear of it. Tnx.

suzumer
3 replies
2d18h

CAM16 (as opposed to CAM16-UCS) is perception based. It calculates chroma, lightness, and hue, and is based on the Munsell color system. Hellwig and Fairchild recently simplified the model mathematically, improving its chroma accuracy (http://markfairchild.org/PDFs/PAP45.pdf). Another, simpler, model is CIELAB, which outputs parameters L, a, and b, where L is lightness, hypot(a,b) is chroma, and arctan2(b,a) is the hue.
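
The polar form of CIELAB in a couple of lines, for concreteness:

    import math

    def lab_to_lch(L, a, b):
        """CIELAB in polar form: lightness, chroma, hue angle in degrees."""
        return L, math.hypot(a, b), math.degrees(math.atan2(b, a)) % 360

    print(lab_to_lch(52.0, 40.0, -30.0))  # (52.0, 50.0, ~323 degrees)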

mncharity
2 replies
2d15h

Thanks! IIRC (fuzzily - it's been a while), I chose -UCS for a more Euclidean color difference metric; I should review that. My even fuzzier recollection is that CIELAB's visible gamut shape is very artifacty[1], perhaps misleadingly representing the volume outside sRGB/P3, for instance.

The pedagogical objectives of playing well with full visible 3D gamut, and spectral locus, and of avoiding shape artifacts (concavities, excursions), are... non-traditional. Characteristics which could be happily traded away in traditional uses of color spaces, for characteristics like model math and simplicity which here have near-zero value (lookup tables satisficing). And were - most spaces have "oh my, that's a hard downselect" bizarre visual hulls, and topologies outside of P3 or even sRGB can get quite strange. Thus the need to untwist CAM16's curving hue lines - they're not bad within sRGB, but by the time they hit visible hull, yipes, I recall some as near parallel to hull.

Having a color space to play with as a realistic 3D whole, seems not the kind of thing we collectively incentivize. A lot of science education content difficulty seems like that.

[1] https://commons.wikimedia.org/wiki/File:Visible_gamut_within...

suzumer
1 replies
2d14h

CAM16's hue lines are curved by design. Hue is not linear with regards to xy chromaticity, as evidenced by the Abney effect[1].

[1] https://en.wikipedia.org/wiki/Abney_effect

mncharity
0 replies
2d6h

But maybe not this[1] non-linear? Fun if real. But perhaps fitting was done within a gamut folks care more about, and model math then induced artifacts at the margins of the full visible gamut? I'd really love to know if that blue tail represents real perception.

[1] https://www.researchgate.net/profile/Volodymyr-Pyliavskyi/pu... [png] from https://ojs.suitt.edu.ua/index.php/digitech/article/download... [PDF dl] (Curiously, bing image search has this figure, but google doesn't.)

Daub
1 replies
2d11h

Does anyone know of a nice "pedagogical" color space? Where the space's highly-noticeable characteristics are actual features of human perception

When talking with students about color, I find the HSL space the easiest to employ. From a color maths point of view I have been told that it is very messy, which is one reason why Adobe stopped using it in version 3 of Photoshop. But from the perceptual point of view, it is the artist's favorite.

Perhaps a better option is the Munsell color space. Munsell chopped up the entirety of the color domain into thousands of small chunks. The distance between each chunk was a single unit of 'barely perceptible difference', which he established through meticulous user testing. Hence the green domain was much larger than the yellow. The story behind his development of this space makes for fascinating reading. He was an artist (a painter), yet his work paved the way for more modern spaces.

mncharity
0 replies
2d4h

When talking with students about color, I find the HSL space the easiest to employ. [...]

Nod. I was exploring how/whether an emphasis on spectra might be used to more successfully explain and teach color. Motivated by observations of wide-spread profound failure, like first-tier graduate students saying "the Sun doesn't have a color; it's rainbow color", and color instruction content, err, exercising diverse artistic license. So I'd hoped for Munsell-like web interactives, but with principled coupling to spectra. Hopefully as easily understood as HSL, if less convenient as a more-abstracted artist UI. I like implicit curriculum ("things noticed in passing provide insight and invite exploration"), and am leery of science education graphics' traditional "some aspects done with great care for correctness, mixed with others of utter bogosity, with students unable to tell which is which". And so wanted to avoid the high-profile "no, that too is merely a model artifact, not reality" of say CIE 1931 chromaticity diagrams ("such tiny sRGB coverage!", "so much green!", "a prism shaped solid!"). And also its frequently sloppy graphics (incorrect colors, misleading handling of out-of-gamut colors, misplaced white point not intersected by blackbody curve). I had hope for linear-hue absolute-physical-brightness Jzazbz, but meh[1]. Perception-optimized wide-gamut recent CAM16 seems unsurprisingly closest to this vision, but for some "hmm, is that bit real or model?", and "how does this behave with changes in absolute illumination?". A tweaked CAM16-UCS can end up a seemingly unobjectionable swelling blob. The scattered instances of intensive use of Munsell in art instruction suggested at least some hope of clarity and utility. Thanks for the thoughts.

[1] https://user-images.githubusercontent.com/181628/61452533-43... [png] from https://github.com/coloria-dev/coloria/issues/41

klysm
3 replies
2d21h

I think a good explanation of color spaces might start at a camera sensor with a Bayer array and how that's processed.

contravariant
1 replies
2d20h

Starting with the function of the human eye seems like a good choice, I mean how else would you explain why a Bayer matrix only has 3 colours?

JKCalhoun
0 replies
2d20h

Or why there are two green pixels in the Bayer pattern for every red and blue...

avidiax
0 replies
2d21h

I'm not so sure.

I think it's really important to understand spectral colors and metamerism.

If you start at the Bayer array, you are going to have an odd discussion about how the Bayer filters have spectral transfer functions that aren't directly related to the cones in our eyes, nor to any color space's primaries, etc.

It's going in the deep end.

kurthr
2 replies
2d20h

I prefer this. The failure to discuss Lab and OKLab in the main link is quite odd.

Also, I'd mention to those who think that violet/magenta aren't "real" colors that the red X response decays more slowly than the blue Z at short wavelengths, so you can get saturated violet/magenta single-wavelength colors (not well represented on the standard chroma charts) below 400nm at high power. Of course they aren't efficient for monitors (even blue isn't) and they're dangerous to look at for any length of time. But if you see a (single wavelength) violet/magenta laser, it's time to look away or shut your eyes.

jlongster
1 replies
2d19h

Yeah, that article is better. I'm the author and I wrote this only for myself as I studied it; it's not great as a way to describe it to others.

I wanted to start from the very beginning, and as far as I know Lab and OKLab came later. Studying the 1931 studies and such was a start, and I wanted to later bring up all the other things we've learned since then, but I haven't had time to write more about it.

kurthr
0 replies
12m

I appreciate this color to your article. My background (3rd/4th career) was developing tests and compensation algorithms for visible defects in consumer displays. It was a real learning experience, and trying to understand the many ways that people try to control and quantify displayed image color was daunting. Also amazing: both what trained people CAN see, and what most people CAN'T.

GrantMoyer
3 replies
2d20h

In my opinion, plotting chromaticity on a Cartesian grid — by far the most common way — is pretty misleading, since chromaticity diagrams use barycentric coordinates (and to be clear, I blame the institution, not the author). The effect is that the shape of the gamut looks skewed, but only because of how it's plotted; the weird skewedness of a typical XYZ chromaticity diagram doesn't represent anything real about the data.

Instead, a chromaticity diagram is better thought of as a 2D planar slice of a 3D color space, specifically the slice through all three standard unit vectors. From this conception, it's much more natural to plot a chromaticity diagram in an equilateral triangle, such as the diagram at [1]. A plot in a triangle makes it clear, for instance, that the full color gamut in XYZ space isn't some arbitrary, weird, squished shape, but instead was intentionally chosen in a way that fills the positive octant pretty well given the constraints of human vision.

[1]: https://physics.stackexchange.com/questions/777501/why-is-th...
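
For what it's worth, the barycentric plot is only a few lines of code: treat (x, y, z) with x + y + z = 1 as weights on the corners of an equilateral triangle (the corner placement here is an arbitrary choice):

    import math

    # Corners assigned to pure X, Y, Z respectively.
    CX, CY, CZ = (1.0, 0.0), (0.5, math.sqrt(3) / 2), (0.0, 0.0)

    def to_triangle(x, y):
        z = 1 - x - y
        return (x * CX[0] + y * CY[0] + z * CZ[0],
                x * CX[1] + y * CY[1] + z * CZ[1])

    print(to_triangle(0.3127, 0.3290))  # D65 white plots near the centroid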

meindnoch
0 replies
1d21h

Indeed.

But the chromaticity diagram is not a slice through the color space; rather, it is a central projection from the origin onto the X+Y+Z=1 plane.
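
The projection in code, which also shows why luminance drops out: scaling XYZ moves a color along a ray through the origin, and the whole ray lands on the same (x, y):

    def xy(X, Y, Z):
        s = X + Y + Z
        return (X / s, Y / s)

    print(xy(0.3, 0.4, 0.2))  # (0.333..., 0.444...)
    print(xy(3.0, 4.0, 2.0))  # same point: ten times the luminance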

jacobolus
0 replies
2d

The CIE 1976 u'v' chromaticity diagram is skewed to be closer to perceptually uniform, and is probably overall a better picture than the xy chromaticity diagram or than the equilateral triangle picture.

X, Y, and Z dimensions are somewhat arbitrary, so plotting xy inside an equilateral triangle isn't inherently more correct than plotting it inside an isosceles right triangle.

If you want you can use something closer to the cone cell responses as your coordinates, and the result may be conceptually clearer, but any 2d picture like this is going to be misleading to viewers who don't have a sophisticated understanding of how vision works and what the diagram means.

drmpeg
0 replies
2d18h

Here's a video that shows the concept. Each frame shows the allowable colors for a particular brightness in Rec. 709 YCbCr space.

https://www.w6rz.net/chromacity.mp4

meindnoch
2 replies
2d21h

Mostly correct, but I don't understand what the author is trying to do in the last section, where they try to fill the locus by generating spectra with two peaks and projecting them into the chromaticity diagram. Why do it like that?

This is how you should do it:

- You pick a Y value. This is going to be the luminance of your diagram.

- For each pixel inside the area bounded by the spectral locus (and the line of purples - the line connecting the two endpoints of the locus) you take its x, y coordinates.

- Together these 3 values specify your color in the CIE xyY color space. Converting from xyY to XYZ is trivial: X = Y / y * x, Y = Y, Z = Y / y * (1 - x - y)

- You map these XYZ values into your output image's color space (e.g. sRGB). If a given XYZ value maps outside the [0,1] interval in sRGB, then it's outside the sRGB gamut, and you may clip the values to the closest valid value inside the gamut.
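
Those steps in code form (a sketch; the matrix is the standard D65 XYZ-to-linear-sRGB matrix from IEC 61966-2-1):

    import numpy as np

    XYZ_TO_SRGB = np.array([
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    def xyY_to_srgb(x, y, Y=0.4):
        X, Z = Y / y * x, Y / y * (1 - x - y)
        rgb = XYZ_TO_SRGB @ np.array([X, Y, Z])
        in_gamut = bool(np.all((rgb >= 0) & (rgb <= 1)))
        rgb = np.clip(rgb, 0, 1)
        # gamma-encode the linear values for display
        rgb = np.where(rgb <= 0.0031308, 12.92 * rgb,
                       1.055 * rgb ** (1 / 2.4) - 0.055)
        return rgb, in_gamut

    print(xyY_to_srgb(0.3127, 0.3290))  # D65 white: equal channels, in gamut
    print(xyY_to_srgb(0.7, 0.3))        # near the red end of the locus: out of gamut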

jlongster
1 replies
2d18h

Author of the article here; I wasn't able to understand how to get that to work, and I talked about that in the post. This demo is doing that: https://jlongster.com/why-chromaticity-shape#block-31f373

The problem is "for each pixel inside the area". I could have done that, and then clipped the output by that shape. The problem is this doesn't answer why the shape is this way at all because you are using the shape itself to clip the output. It felt fake.

I do think this is what is most common though. I was trying to understand a more rigorous approach, and the one where you generate spectra and try to fill it is described here: https://clarkvision.com/articles/color-cie-chromaticity-and-...

That feels like a more rigorous approach, but clipping is probably "good enough" too

meindnoch
0 replies
2d9h

Ok, so you're asking why every visible color has to lie within the bounds of the spectral locus in the chromaticity diagram?

The reasoning is simple:

1. Spectral colors are basis vectors of the color spectrum. I.e. every possible spectrum can be thought of as a weighted sum of infinitely many Dirac deltas. With nonnegative weights, in particular, so it's a so-called conical combination (i.e. linear combination with nonnegative weights).

2. Taking the inner product with color matching functions is a linear transformation from this infinite dimensional space spanned by spectral colors to a 3-dimensional space. Linearity means that weighted sums are preserved, that is: every possible color spectrum's XYZ values are going to be the weighted sum of the spectral colors' XYZ values. And because the XYZ color matching functions are nonnegative everywhere, conical combinations are also preserved.

3. And finally, the conversion from XYZ to xyz is such, that it turns conical combinations into convex combinations (i.e. conical combinations where the weights sum to 1). It's easy to verify this with pen and paper.

It follows, that every color on the xy chart is going to be a convex combination of xy points corresponding to spectral colors, which geometrically means that they're going to lie inside the spectral colors' convex hull.
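
That argument, checked numerically with stand-in data (random nonnegative vectors in place of real spectral XYZ rows):

    import numpy as np

    rng = np.random.default_rng(0)
    spectral = rng.random((5, 3))  # stand-in XYZ rows for 5 "wavelengths"
    w = rng.random(5)              # nonnegative spectrum weights (conical)
    mix = w @ spectral

    xy = spectral[:, :2] / spectral.sum(axis=1, keepdims=True)
    mix_xy = mix[:2] / mix.sum()

    # Rescaling the conical weights by each row's X+Y+Z sum (and normalizing)
    # gives convex weights that reproduce the mixture's chromaticity exactly.
    cw = w * spectral.sum(axis=1)
    cw /= cw.sum()
    print(np.allclose(cw @ xy, mix_xy))  # True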

VanillaCafe
2 replies
2d20h

I thought this might be a useful article because I've often had a similar question. But there's a diagram with this text:

More simply put: imagine that you have red, green, and blue light sources. What is the intensity of each one so that the resulting light matches a specific color on the spectrum?

...

The CIE 1931 color space defines these RGB color matching functions. The red, green, and blue lines represent the intensity of each RGB light source:

This seems very oddly phrased to me. I would presume that what that chart is actually showing is the response for each color of cone in the human eye?

In which case it's not a question of "intensity of the light source" but more like "the visual response across different wavelengths of an otherwise uniform intensity light source"?

... fwiw, I'm not trying to be pedantic, just trying to see if I'm missing the point or not.

jlongster
0 replies
2d19h

I'm the author of the article and the intensity is referring to the level of the light source used in the study to generate the data. See the study explained here: https://medium.com/hipster-color-science/a-beginners-guide-t...

but you're right, the intensity needed of each R, G, and B light source to produce the correct color is directly related to how our eyes perceive each of those sources, so yes, you are correct

GrantMoyer
0 replies
2d19h

The wording in the article is correct, despite being confusing. The CIE 1931 RGB primaries each stimulate multiple types of cone in human eyes, so the RGB Color Matching Functions (CMFs) don't represent individual cone stimulations.

However, the CMFs for LMS space[1] do directly represent individual cone stimulations across the spectrum. Like the CIE RGB CMFs, the LMS CMFs can also be thought of as the required intensities of three primary colors needed to reproduce the color of a given spectrum. The reason these two definitions correspond for LMS space is that each primary would stimulate only one type of cone. However, unlike CIE RGB, no colors of light which stimulate only one type of cone physically exist.

Finally, CIE RGB and LMS space are linear transformations of each other, so the CIE RGB CMFs are linear combinations of the LMS CMFs, and each CIE RGB CMF can be thought of as representing a specific linear combination of cone stimulations (the combination excited by that primary color).

I often find it easiest to reason about these color spaces in terms of LMS space, since it's the most physically straightforward.

[1]: https://en.m.wikipedia.org/wiki/LMS_color_space
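
A numeric sketch of that last point, with a made-up invertible matrix M standing in for the real LMS-to-RGB transform:

    import numpy as np

    M = np.array([[ 0.9, -0.4,  0.1],
                  [-0.2,  1.1, -0.1],
                  [ 0.0, -0.1,  1.0]])

    rng = np.random.default_rng(1)
    lms_cmfs = rng.random((3, 40))  # 3 CMFs sampled at 40 wavelengths
    rgb_cmfs = M @ lms_cmfs         # transform the CMFs themselves

    spectrum = rng.random(40)
    # Integrating a spectrum against either CMF set commutes with M:
    print(np.allclose(M @ (lms_cmfs @ spectrum), rgb_cmfs @ spectrum))  # True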

ttoinou
1 replies
2d15h

I’ve been having problems studying this topic for years now. Is there actually an established scientific field, with authoritative books and a consensus, on this? It seems hard to know whom to trust on this wide-but-niche topic

gorgoiler
1 replies
2d14h

This is fantastic. It gave me an idea about colors, perception, and gamut.

Put simply, imagine that there is a combination of wavelengths of light that causes you to perceive the smell of ripe cheese, and another that causes you to think that there is a bear behind you. Now your diagrams must be filled in not only with colored pixels but also include a small picture of a cheese and a bear at the points where those specific perceptions occur.

I think, in real life, this is what magenta is: a non-spectral color that’s more of a feeling or sensation that, in order for our brains not to get too overwhelmed, we simply perceive as another color. This is also, I believe, close to describing a real phenomenon for those living with varying degrees of synesthesia or, if you will forgive a play on words, those on the synesthesia spectrum.

meindnoch
0 replies
1d21h

this is what magenta is: a non spectral color that’s more of a feeling or sensation that, in order for our brains to not get too overwhelmed, we simply perceive as another color.

Ummm. What?

Why would the brain get "overwhelmed" by non-spectral colors? You realize that spectral colors are pretty much non-existent in nature, right?

vinnyvichy
0 replies
2d13h

Bengtsson & Zyczkowski, in the introduction (see pp. 14-16) to their wonderful book, make use of chromaticity diagrams to motivate their study of quantum states.

https://www.researchgate.net/profile/Karol-Zyczkowski/public...

In a way, tradition suggests that colour theory should be studied before quantum mechanics, because this is what Schroedinger was working on before inventing his wave equation.

tylerneylon
0 replies
2d18h

This page is also a beautiful explanation of color spaces, with chromaticity explained toward the end: https://ciechanow.ski/color-spaces/

Note that many of the diagrams are interactive 3d graphics (I didn't realize that at first, and it makes the page more interesting.)

tylerneylon
0 replies
2d18h

I have a question for fellow color science nerds. I've been reading through Guild's original data: https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.1932...

However, I'm having trouble understanding the meaning of the numbers in table 4. Does anyone understand all the columns there?

What I'm particularly interested in is finding the unnormalized coefficients from the color matching experiments, or some way to un-normalize those coefficients. (By "those coefficients," I mean the trichromatic coefficients u{a,b,c}_\lambda listed in table 3.) I don't know if that data is in table 4 so maybe those are two separate questions.

refulgentis
0 replies
2d22h

TL;DR: Imagine color space has 3 dimensions in polar coordinates.

- hue, the angle. your familiar red, orange, yellow, green, blue...

- saturation/chroma, radial distance from center. intensity of the pigment

- lightness, top to bottom, white to black

The XY diagram shows 3d color space, from the top, in XYZ.

XYZ is a particular color space that Nathan Myhrvold picked at Microsoft in the early 90s.

There is no privileged "correct" color space, they're developed based on A/B tests and intuition by color scientists.

However, there are more correct color spaces, in that color science matters and is a real field. Commonly agreed state of the art is CAM16.

It's a significant mistake that Oklab is the first space with significant mindshare since HSL; it was a quick hack by an ex-game developer to make something akin to CAM16 with just one matrix multiplication.

CAM16 conversions involve significantly more than one matrix multiplication. But it's ~400 lines of code, and you can do millions of conversions a second on modern hardware.

The lightness scale is _way_ off from scientific color spaces, and thus it can't be used to create simple rules like "40 delta L* ~= 3.0 contrast ratio, 50 delta L* ~= 4.5 contrast ratio". Instead you're still manually plugging colors into a contrast checker :(

Then again, it's still a step forward. It's even more maddening that HSL was used for so long: it's absolutely absurd, e.g. lightness = the average of the highest and lowest RGB components. Great for demo hacks in 1976, not so great in 2016.
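
A quick demonstration of that lightness problem (the L* computation is standard CIELAB from relative luminance):

    import colorsys

    def srgb_to_lab_L(r, g, b):
        """CIELAB L* from sRGB, via linearization and relative luminance."""
        lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
               for c in (r, g, b)]
        Y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
        f = Y ** (1 / 3) if Y > (6 / 29) ** 3 else Y * (29 / 6) ** 2 / 3 + 4 / 29
        return 116 * f - 16

    for name, rgb in [("yellow", (1.0, 1.0, 0.0)), ("blue", (0.0, 0.0, 1.0))]:
        h, l, s = colorsys.rgb_to_hls(*rgb)
        print(name, "HSL L =", l, "CIELAB L* =", round(srgb_to_lab_L(*rgb), 1))
    # Both get HSL L = 0.5, but L* is ~97 for yellow vs ~32 for blue.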

klodolph
0 replies
2d6h

What do you think a negative red light source means?

It means that the subject turned a dial to add red light to the color being matched.

Basically, you have an unknown color C, and then an R+G+B color. Sometimes, you can’t match it, so you try matching C+R = G+B. This results in “negative” R, because you’re adding R to the other side of the equation.

The same happens with green and blue, but to a lesser extent.

hilbert42
0 replies
2d13h

That link is so slow where I live that I had difficulty getting the site to work, but as far as I could judge it gives a rather nice and understandable explanation of what is a rather complex matter. It's in considerable contrast to the corresponding sections of my textbooks on color theory, which are so dry as to make one yawn: full of algebra, the complex operator and matrices, with precious little other explanation of what it all means.

Some of the comments have already covered most of what I'd have mentioned so I won't dwell on them now, although I'd add that I reckon GrantMoyer is on the mark with his point about the inappropriateness of displaying chromaticity on Cartesian coordinates.

It's worth noting that understanding the intricacies of chromaticity and color theory is difficult to the extent that its 'opaqueness' has been used to protect trade secrets (and likely still is for reasons I'll mention in a moment).

Commercial lab printers that print masked color negative (neg film with the orange mask) to positives—color photos and color film print stock—go to great lengths to protect their matrices (precision resistor banks) against copying. Similarly, companies like Kodak do not publish the 'film terms' for their various emulsions ('film terms' being the unique matrix information for each film emulsion).

The reason for this is that to reverse-engineer the matrix with enough accuracy for a single film is a complex job, let alone to do so for a multitude of different films. Moreover, it's imperative the matrix be accurate if good color balance is to be achieved. Keeping this info secret provided a competitive edge; selling or licensing the info is worth money.

I'd add that the destructive orange mask used in color negative film is a brilliant concept for reasons I cannot cover here, however what's relevant here is that the mask makes reverse-engineering the negative's film terms that much more complicated.

I'm a bit out of touch these days but no doubt the same applies with inkjet printers and the like (matching coordinates to specific inks etc). So there's a modicum of truth to statements from Canon, Epson and HP when they say not to use third-party inks because the colors won't match properly (mind you, that's never stopped me at the exorbitant and outrageous prices they charge for inks).

My point is that if it were possible to unravel and make this chromaticity stuff simpler to understand then many of these expensive commercial decisions would disappear.

Ahh but alas, we're stuck with it.

BTW, for those who've used scanner software like SilverFast: the manufacturer provides a list of film emulsions to select from before the film is scanned. Selecting the correct emulsion type ensures the proper 'film terms' are used for the scan, which in turn ensures the color balance is optimal.

I'm a bit cynical about SilverFast's approach to the problem: they've a limited range of film emulsions to select from (many of the old and important color negative types are missing). SilverFast's literature suggests that if one's color negative type is not listed then one should select whichever best suits. I am at a loss as to how one does that except by guesstimate; so much for calibration. Also, one has to wonder why SilverFast has such a limited range given they've been in the business since many of said emulsions were still in production.

There are similar issues with Hamrick's VueScan software but I've not time to address them here.

Again, all these issues further illustrate the practical complexities surrounding the chromaticity diagram.

akira2501
0 replies
2d18h

I say "cursed" because I have no idea what that means. What the heck is that shape??

Reminds me of frinklang.

"The most-commonly used, CIE 1931, is long known to be off by a factor of 7 from average human perception at short wavelengths, (compare it to the 1978 definition at 400 nm) and is arbitrarily truncated before the limits of human perception. In addition, no one perceptually-weighted curve is possible because the human eye is differently sensitive for photopic (bright-light, cone cells) and scotopic (dark-adapted, rod cells), or if the illumination occurs over narrower or wider fields. Many incremental improvements on these systems have been proposed, but none are part of the authoritative, oversimplified definition of the candela, making it useless for unambiguous definitions that can be agreed upon or binding to any party. Pronouncements of the CIE are in no way binding on the BIPM, nor vice-versa, and the CIE has a proliferation of "standard curves," which all disagree with each other. Agreements to use one curve or another thus have to be agreed outside the definitions of the SI, and, of course, parties can disagree on which curve to use. You can use CIE 1931, or CIE 1978, or the "CIE 1988 Modified 2° Spectral Luminous Efficiency Function for Photopic Vision" or the 2005 improvements by Sharpe, Stockman, Jagla & Jägle, or ISO 23539:2005(E), or something else..."

https://frinklang.org/frinkdata/units.txt

_wire_
0 replies
2d21h

This article illustrates the theory and math that lead to the horseshoe diagram in a very approachable style that is as simple as possible without being too simple.

A Beginner’s Guide to (CIE) Colorimetry — Chandler Abraham

https://medium.com/hipster-color-science/a-beginners-guide-t...

Leftium
0 replies
2d19h

The https://oklch.com color picker shows another way to represent colors:

- The 3D version looks like a mountainscape of colors

- L(ightness), C(hroma), and H(ue) are orthogonal 2d slices of this mountainscape

---

And this software renders 3D chromaticity (gamut?) diagrams: https://youtu.be/FdFpJFSTMVw?t=679