Discussion:
[RFC wayland-protocols] Color management protocol
Niels Ole Salscheider
2016-12-02 12:17:46 UTC
Hi,
Then they use the "set_colorspace" request
to set the color space of their surface to the same color space in order to
indicate that the compositor must not perform any additional color
conversion.
That's one issue I have. This is a *really bad idea*,
based on experience with its use on Apple OS X,
and experience in designing other color management systems.
Use an explicit "color management off" flag instead.
That clearly indicates what is intended, is not time
sensitive (i.e. its meaning is the same irrespective of
what the device profile changes to in the future),
doesn't demand that the client, application or file have the
ability to identify or fetch the device profile, and is not
subject to mistakes or confusion about whether the right
profile has been set or whether profiles match, or whether
pixel values are really being altered or not.
The first version of my proposal had such a flag. I removed it and replaced it
by the described version based on feedback from Zoxc (***@gmail.com).

I can see advantages with both solutions. One advantage with the current
proposal is that you can have a surface that covers multiple screens. In this
case the compositor can still try its best to correct the colours for all but
the main screen.

Back then I argued that this might not be good enough if you want to calibrate
the monitor. But the consensus was that this would require another protocol to
disable all colour corrections anyway and that it could be developed at a
later point.

I've CCed the list again because others might have an opinion on that...
Cheers,
Graeme Gill.
Graeme Gill
2016-12-08 02:33:20 UTC
Niels Ole Salscheider wrote:

Hi,
Post by Niels Ole Salscheider
The first version of my proposal had such a flag. I removed it and replaced it
Do you have a link to the specifics ?
Post by Niels Ole Salscheider
I can see advantages with both solutions. One advantage with the current
proposal is that you can have a surface that covers multiple screens. In this
case the compositor can still try its best to correct the colours for all but
the main screen.
I'm not quite sure what you mean. Generally an application will have
specific reasons for wanting to do its own color management - for
instance, perhaps it is previewing a CMYKOGlclm file, and wants to
treat out of gamut mapping and black point mapping in a particular way, etc.
I don't think the Wayland compositor is going to be expected to handle
CMYKOGlclm etc. input rasters, never mind all the requirements of specialist
application color management!

Which is not to say that compositor color management doesn't have its
place - it is ideal for applications that just want to use "RGB", and
not deal with specific display behavior.
Post by Niels Ole Salscheider
Back then I argued that this might not be good enough if you want to calibrate
the monitor. But the consensus was that this would require another protocol to
disable all colour corrections anyway and that it could be developed at a
later point.
I strongly disagree with this idea - disabling application-side color
management is a fundamental step in achieving end to end color management.
You don't have color management until you are able to profile the output device,
so this is not something that can be left until later!

Graeme Gill.
Niels Ole Salscheider
2016-12-09 13:29:14 UTC
Post by Graeme Gill
Hi,
Post by Niels Ole Salscheider
The first version of my proposal had such a flag. I removed it and
replaced it by the described version based on feedback from Zoxc
Do you have a link to the specifics ?
Most of the discussion happened on IRC back then. It should be in the logs
but...
Post by Graeme Gill
Post by Niels Ole Salscheider
I can see advantages with both solutions. One advantage with the current
proposal is that you can have a surface that covers multiple screens. In
this case the compositor can still try its best to correct the colours
for all but the main screen.
I'm not quite sure what you mean. Generally an application will have
specific reasons for wanting to do its own color management - for
instance, perhaps it is previewing a CMYKOGlclm file, and wants to
treat out of gamut mapping and black point mapping in a particular way, etc.
I don't think the Wayland compositor is going to be expected to handle
CMYKOGlclm etc. input rasters, never mind all the requirements of
specialist application color management!
This is of course something that the client application has to do. It would
query the main output for its surface, do the conversions to that color space
and then attach the output color space to the surface.

The compositor now must not touch the parts of the surface on the main output
(where the color spaces match). But it could still try to convert from the
color space of the main output to that of a secondary screen if the surface
covers two screens with different color profiles.

This might of course cause artifacts when one of the screens has too small a
gamut, but it still seems better than ignoring the issue.
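The split described here can be sketched as follows. This is a hypothetical illustration in Python, not compositor code - the output names, rectangles, and profile identifiers are all invented:

```python
# Hypothetical sketch of the per-output decision described above: the
# region on the output whose profile matches the surface's attached
# color space is passed through untouched; the rest gets a best-effort
# conversion by the compositor.

def intersect(a, b):
    """Intersect two (x, y, w, h) rectangles; None if disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

outputs = [
    {"name": "main", "rect": (0, 0, 1920, 1080), "profile": "profile-A"},
    {"name": "secondary", "rect": (1920, 0, 1920, 1080), "profile": "profile-B"},
]

surface_rect = (1000, 100, 1800, 800)  # spans both outputs
surface_profile = "profile-A"          # attached via set_colorspace

plan = []
for out in outputs:
    region = intersect(surface_rect, out["rect"])
    if region is None:
        continue
    # Matching profiles -> leave pixels untouched; otherwise convert.
    convert = out["profile"] != surface_profile
    plan.append((out["name"], region, convert))

print(plan)
```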

But then again most people that work with professional applications would not
make them cover multiple screens, I guess. Therefore I'm not opposed to adding
a flag that indicates that the application wants to disable color corrections
completely for that surface, independent of the output.
Post by Graeme Gill
Which is not to say that compositor color management doesn't have its
place - it is ideal for applications that just want to use "RGB", and
not deal with specific display behavior.
Very simple applications would just keep the attached sRGB color space and
maybe place images on subsurfaces with the embedded color space from the image
attached.

Applications that care a bit more about color correction (but do not have
professional needs) could convert all their colors to the blending color space
of the compositor. I'd expect this blending color space to be linear if the
compositor cares about good colors.
This would have the advantage that the compositor does not have to do the
conversion "application output color space -> blending color space".
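To illustrate why a linear blending space matters, here is a small sketch (plain Python, purely illustrative) comparing a 50/50 blend done naively in sRGB gamma space with the same blend done in linear light, using the standard sRGB transfer function:

```python
# Sketch: blending in gamma space vs. linear light, using the
# standard sRGB transfer function. Values are illustrative only.

def srgb_to_linear(c):
    """sRGB electro-optical transfer function (per channel, 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse sRGB transfer function."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend(a, b, alpha, linear=True):
    """Blend two sRGB channel values with coverage `alpha`."""
    if linear:
        la, lb = srgb_to_linear(a), srgb_to_linear(b)
        return linear_to_srgb(alpha * la + (1 - alpha) * lb)
    return alpha * a + (1 - alpha) * b  # naive gamma-space blend

# A 50/50 blend of white and black: gamma-space blending gives 0.5,
# linear-light blending gives a lighter, physically correct result.
print(blend(1.0, 0.0, 0.5, linear=False))  # 0.5
print(blend(1.0, 0.0, 0.5, linear=True))   # roughly 0.735
```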
Post by Graeme Gill
Post by Niels Ole Salscheider
Back then I argued that this might not be good enough if you want to
calibrate the monitor. But the consensus was that this would require
another protocol to disable all colour corrections anyway and that it
could be developed at a later point.
I strongly disagree with this idea - disabling application-side color
management is a fundamental step in achieving end to end color management.
You don't have color management until you are able to profile the output
device, so this is not something that can be left until later!
Graeme Gill.
_______________________________________________
wayland-devel mailing list
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
Carsten Haitzler (The Rasterman)
2016-12-10 02:55:49 UTC
On Fri, 09 Dec 2016 14:29:14 +0100 Niels Ole Salscheider
Post by Niels Ole Salscheider
Post by Graeme Gill
Hi,
Post by Niels Ole Salscheider
The first version of my proposal had such a flag. I removed it and
replaced it by the described version based on feedback from Zoxc
Do you have a link to the specifics ?
Most of the discussion happened on IRC back then. It should be in the logs
but...
Post by Graeme Gill
Post by Niels Ole Salscheider
I can see advantages with both solutions. One advantage with the current
proposal is that you can have a surface that covers multiple screens. In
this case the compositor can still try its best to correct the colours
for all but the main screen.
I'm not quite sure what you mean. Generally an application will have
specific reasons for wanting to do its own color management - for
instance, perhaps it is previewing a CMYKOGlclm file, and wants to
treat out of gamut mapping and black point mapping in a particular way, etc.
I don't think the Wayland compositor is going to be expected to handle
CMYKOGlclm etc. input rasters, never mind all the requirements of
specialist application color management!
This is of course something that the client application has to do. It would
query the main output for its surface, do the conversions to that color space
and then attach the output color space to the surface.
The compositor now must not touch the parts of the surface on the main output
(where the color spaces match). But it could still try to convert from the
color space of the main output to that of a secondary screen if the surface
covers two screens with different color profiles.
This might of course cause artifacts when one of the screens has a too small
gamut but still seems better than ignoring this.
But then again most people that work with professional applications would not
make them cover multiple screens, I guess. Therefore I'm not opposed to
adding a flag that indicates that the application wants to disable color
corrections completely for that surface, independent of the output.
why not simply let the compositor decide. if a surface spans multiple screens it
may have to emulate on another screen (e.g. one screen can do adobe rgb, another
is ye-olde sRGB). this is simply a matter of letting the compositor know what
colorspace the rgb values are in so it can "do the appropriate thing". :)
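For the Adobe RGB vs sRGB case above, the "appropriate thing" is essentially a matrix conversion plus gamut clipping. A sketch using the published linear-light D65 matrices (illustrative only - real handling would also involve the transfer curves):

```python
# Illustrative sketch: convert a linear-light Adobe RGB color to
# linear sRGB via CIE XYZ, then clip what the sRGB gamut cannot
# represent. Matrices are the standard published D65 ones.

ADOBE_TO_XYZ = [
    [0.5767, 0.1856, 0.1882],
    [0.2973, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def apply(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Fully saturated Adobe RGB green lies outside the sRGB gamut:
xyz = apply(ADOBE_TO_XYZ, [0.0, 1.0, 0.0])
srgb = apply(XYZ_TO_SRGB, xyz)
clipped = [min(1.0, max(0.0, c)) for c in srgb]
print(srgb)     # red and blue channels come out negative: out of gamut
print(clipped)  # a best-effort clip onto the sRGB gamut boundary
```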
Post by Niels Ole Salscheider
Post by Graeme Gill
Which is not to say that compositor color management doesn't have its
place - it is ideal for applications that just want to use "RGB", and
not deal with specific display behavior.
Very simple applications would just keep the attached sRGB color space and
maybe place images on subsurfaces with the embedded color space from the
image attached.
Applications that care a bit more about color correction (but do not have
professional needs) could convert all their colors to the blending color
space of the compositor. I'd expect this blending color space to be linear if
the compositor cares about good colors.
This would have the advantage that the compositor does not have to do the
conversion "application output color space -> blending color space".
if compositor just lists what colorspaces it can do, which happen to have
native hardware support (i.e. the display panel itself is capable of it), then
client can choose whatever works best, and compositor just "does its best" too
which may mean adjusting display gamut or output transforms at the gpu level
or via side-band protocols with the display panel itself if that were to exist.
Post by Niels Ole Salscheider
Post by Graeme Gill
Post by Niels Ole Salscheider
Back then I argued that this might not be good enough if you want to
calibrate the monitor. But the consensus was that this would require
another protocol to disable all colour corrections anyway and that it
could be developed at a later point.
I strongly disagree with this idea - disabling application-side color
management is a fundamental step in achieving end to end color management.
You don't have color management until you are able to profile the output
device, so this is not something that can be left until later!
Graeme Gill.
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com
Graeme Gill
2016-12-17 10:16:41 UTC
a display may not have a single native colorspace. it may be able to switch.
embedded devices can do this as the display panel may have extra control lines
for switching to a different display gamut/profile. it may be done at the gfx
card output level too... so it can change on the fly.
That's not a typical situation though, but nothing special would be
happening - a new profile may be installed by the user as well,
in which case an application should re-render to accommodate
the change.
yes. compositors right now work in display colorspace. they do no conversions.
eventually they SHOULD, to display correctly. to do so they need a color profile
for the display.
For enhanced color management yes. But core comes first, and is necessary
for many color critical applications, because the compositor will never
have the color transformations they require.
it may be that a window spans 8 different screens all with different profiles.
then what?
As I've explained several times, what happens is that the application
is aware of this, and transforms each region appropriately - just
as they currently do on X11/OS X/MSWin systems.
currently the image looks a bit different on each display.
That would be because you haven't yet implemented the color management
support that makes it possible for applications to implement color management.
with a
proper color correcting compositor it can make them all look the same.
As will a color aware application given appropriate color management support.
if you
want apps to be able to provide "raw in screen colorspace pixels" this is going
to be horrible, especially as windows span multiple screens.
The code is already there to do all that in color critical application.
if i move the
window around the client has drawn different parts of its buffer with different
colorspaces/profiles in mind and then has to keep redrawing to adjust as it
moves.
Yes.
you'll be able to see "trails" of incorrect coloring around the
boundaries of the screens until the client catches up.
It's damage, just like any other, and color critical users using
color critical applications will take "trails" over wrong color
anytime. No "trails" and wrong color = a system they can't use.
the compositor SHOULD do any color correction needed at this point.
Not at all. That's a way to do it under some circumstances yes, but
it's not satisfactory for all.
if you want
PROPER color correction the compositor at a MINIMUM needs to be able to report
the color profile of a screen even if it does no correcting.
Yes - exactly what I'm suggesting as core color management support.
yes you may have
multiple screens. i really dislike the above scenario of incorrect pixel tails
because this goes against the whole philosophy of "every frame is perfect".
"Every pixel being perfect" except they are the wrong color, isn't perfect.

There are multiple ways of doing the best thing possible - you can't re-render
a frame in the compositor if it doesn't have the pixels needed to render it,
so you can 1) not re-render until the application provides the pixels
needed 2) Render the wrong color pixels until the application catches up
or 3) (if the compositor has some color management capability and
the application sets it up) get it to do an approximate correction to
the pixels until the application catches up with the correct color.
you
cannot do this given your proposal. it can only be done if the compositor
handles the color correction and the clients just provide the colorspace being
used for their pixel data.
And a compositor can't know how to transform color in the way some
applications require. This trumps such goals.
i'm totally ignoring the case of having alpha. yes. blending in gamma space is
"wrong". but it's fast. :)
Sure.
I'm not sure what you mean by that. Traditionally applications render
to the display colorspace. Changing the display setup (i.e. switching
display colorspace emulation) is a user action, complicated only by the
need to make the corresponding change to the display profile, and re-rendering
anything that depends on the display profile.
being able to modify what the screen colorspace is in any way is what i
dislike.
That's the reality of how displays work. The user presses a button on the
front that says "emulate sRGB" or "native" or "Preset 1" or something else.
only the compositor should affect this based on its own decisions.
And color critical users will scream bloody murder at anything related
to color that isn't under their control, if it affects the accuracy or scope
of the color workflow.
No, not supported = native device response = not color managed.
and for most displays that is sRGB.
Not in the slightest. Having (ahem!) profiled a few displays, none of them
are exactly sRGB. Some may aspire to be sRGB, they may approach sRGB,
but (because they are real devices, not an idealized norm) none are sRGB.
[ Black point alone is miles out for most LCD based displays. ]
either way monitors tend to have slightly different color reproduction and most
are "not that good" so basically sRGB.
All slightly different is certainly not the same as sRGB. That's why anyone
critically interested in color, profiles their display.
the compositor then is effectively
saying "unmanaged == sRGB, but it may really be anything so don't be fussy".
No display profile = can't know what to transform to = don't do anything.
No compositor is involved. If the application doesn't know
the output display profile, then it can't do color management.
it can assume sRGB.
That's up to the user. The user may have something else they can
assign if they are unable to profile the display (EDID derived
profile, model generic profile, etc.)
Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color critical applications requirements,
so color management without any participation of a compositor
is a core requirement.
of course it can. client provides 30bit (10bit per rgb) buffers for example and
compositor can remap. from the provided colorspace for that buffer to the real
display colorspace.
It's not about bit depth, it's about algorithms. No compositor can
do a transformation that it doesn't have an algorithm for.
Relying on an artificial side effect (the so called "null color transform")
to implement the ability to directly control what is displayed, is a poor
approach, as I've explained at length previously.
but that is EXACTLY what you have detailed to rely on for color managed
applications. for core color management you say that the client knows the
colorspace/profile/mappings of the monitor and renders appropriately and
expects its pixel values to be presented 1:1 without remapping on the screen
because it knows the colorspace...
Yes, a switch (Don't do color management) is far cleaner than trying
to trick a constant color management compositor into not doing color
management by feeding it a source profile that is (hopefully)
the same as the destination profile (and how do you do that
if the surface spans more than one Monitor ?)
No compositor should be involved for core support. The application
should be able to render appropriately to each portion of the span.
then no need for any extension. :) compositor HAS to be involved to at least
tell you the colorspace of the monitor... as the screen is its resource.
As I've explained a few times, an extension is needed to provide
the Output region information for each surface, as well as each
output's color profile, as well as to be able to set each Output's
per-channel VideoLUT tables for calibration.
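The per-channel VideoLUT mentioned here is just a 1D lookup table the CRTC applies to each channel on scanout. A minimal sketch of what "applying" one means (the curves here are invented for illustration; real LUTs are loaded through the compositor/KMS, not like this):

```python
# Sketch of a per-channel calibration LUT as applied at scanout.
# The LUT contents below are hypothetical examples.

def apply_videolut(value, lut):
    """Apply a 1D calibration LUT to a 0..1 channel value, with
    linear interpolation between entries."""
    pos = value * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# An identity LUT leaves the signal untouched (a "null" calibration);
# a calibration tool would instead load measured correction curves.
identity = [i / 255 for i in range(256)]
gamma_tweak = [(i / 255) ** 1.1 for i in range(256)]  # illustrative curve

print(apply_videolut(0.5, identity))     # unchanged
print(apply_videolut(0.5, gamma_tweak))  # slightly darker
```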
Post by Carsten Haitzler (The Rasterman)
this way client doesnt need to know about outputs, which outputs it spans
etc. and compositor will pick up the pieces. let me give some more complex
That only works if the client doesn't care about color management very much -
i.e. it's not a color critical application. I'd hope that the intended use of
Wayland is wider in scope than that.
how does it NOT work?
It doesn't work when the compositor doesn't have the color transform
capability that the application requires.
let me give a really simple version of this.
you have a YUV buffer. some screens can display yuv, some cannot. you want to
know which screens support yuv and know where your surface is mapped to which
screens so you can render some of your buffer (some regions) in yuv and some
in rgb (i'm assuming packed YUVxYUVxYUVx and RGBxRGBxRGBx layout here for
example)... you wish to move all color correct rendering, clipping that correct
(yuv vs rgb) rendering client-side and have the compositor just not care.
Let me give you an example. The application has a ProPhotoRGB buffer,
and wants to render it with image specific gamut mapping into the display
space. It has code and algorithms to 1) Gather the image gamut, 2) Compute
a gamut mapping from the image gamut to the Output Display gamut, and 3) invert
the A2B cLUT tables of the Output display profile to floating point
precision, with gamut clipping performed in a specially weighted CIECAM02 space.

I'm not quite sure how the Wayland compositor is going to manage all that,
especially given that the application could tweak or change this in
every release.
the point of wayland is to be "every frame is perfect". this breaks that.
A pixel is not perfect if it is the wrong color.
If you don't care so much about color, yes. i.e. this is
what I call "Enhanced" color management, rather than core.
It doesn't have to be as flexible or as accurate, but it has
the benefit of being easy to use for applications that don't care
as much, or currently aren't color managed at all.
how not? a colorspace/profile can be a full transform with r/g/b points in
space... not just a simple enum with only fixed values (well that's how i'm
imagining it).
A color profile can be quite complex, including scripted in the case of
something like an OCIO or ACES profile
<http://www.oscars.org/science-technology/sci-tech-projects/aces>.

But the device profile is only half the story - it does nothing on its
own, it needs to be linked with another device profile. And the flexibility
at that point is unlimited.
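The "linking" step can be sketched for the simplest possible case, matrix-only profiles: the source profile maps device RGB to the PCS (XYZ here), and the inverse of the destination profile maps the PCS to display RGB. Real ICC profiles add tone curves and cLUTs on top of this; the display matrix below is deliberately the sRGB one, so the link collapses to (near) identity, the very situation the "null transform" debate is about:

```python
# Sketch: building a "device link" from two matrix-only profiles.
# Source profile: device RGB -> XYZ. Destination profile inverted:
# XYZ -> display RGB. Real ICC profiles also carry curves/cLUTs.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def mat_inv(m):
    """Invert a 3x3 matrix via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [
        [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
        [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
        [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
    ]

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

# Destination == source here, so the linked transform is ~identity:
link = mat_mul(mat_inv(SRGB_TO_XYZ), SRGB_TO_XYZ)
print(link)
```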
in this case the api's tell the client the available colorspaces
and it chooses the best. it would have NO CHOICE in your core management anyway.
it'd be stuck with that colorspace and have to render accordingly which is the
exact same thing you are proposing for core.
Not at all. How it transforms from the source colorspace to
the display is then completely under its control, something
needed for color critical applications as well as calibration
and profiling software.
provide a list of 1 colorspace -
the monitor native one. application renders accordingly. if colorspace of
rendered buffer == colorspace of target screen, compositor doesn't touch pixel
values.
Bad way of doing it, for reasons I've pointed out multiple times.
Be explicit rather than rely on a trick - use a switch.
if it's RGB or YUV (YCbCr) it's the same thing. just vastly different color
mechanisms. color correction in RGB space is actually the same as in YUV. it's
different spectrum points in space that the primaries point to.
I'm aware of what YCbCr is - I've implemented code to convert many
such color formats.
color management require introducing such things. BT.601, BT.709, BT.2020.
the compositor MUST KNOW which colorspace the YUV data uses to get it correct.
Sure, but that's not an aspect I've mentioned. Ultimately the display
is RGB, irrespective of the encoding used to carry that information
to it.
i'm literally staring at datasheets of some hardware and you have to tell it
to use BT.601 or 709 equation when dealing with YUV. otherwise the video data
will look wrong. colors will be off. in fact BT.709 == sRGB.
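The point that the compositor must know which matrix applies can be shown numerically. A sketch using the luma coefficients from the two standards (full-range values, no quantization, purely illustrative):

```python
# Sketch: encode RGB to YCbCr with one standard's coefficients and
# decode with the other's. The mismatch visibly shifts the color.

BT601 = (0.299, 0.114)     # (Kr, Kb)
BT709 = (0.2126, 0.0722)

def rgb_to_ycbcr(rgb, k):
    kr, kb = k
    r, g, b = rgb
    y = kr * r + (1 - kr - kb) * g + kb * b
    return (y, (b - y) / (2 * (1 - kb)), (r - y) / (2 * (1 - kr)))

def ycbcr_to_rgb(ycc, k):
    kr, kb = k
    y, cb, cr = ycc
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return (r, g, b)

green = (0.0, 1.0, 0.0)
ok = ycbcr_to_rgb(rgb_to_ycbcr(green, BT709), BT709)     # round-trips
wrong = ycbcr_to_rgb(rgb_to_ycbcr(green, BT709), BT601)  # color shifts
print(ok)
print(wrong)
```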
Sure - complexity in managing encodings. But that has nothing
directly to do with color management, which is about colorspace
differences.
now here comes the problem... each hardware plane yuv may be assigned to MAY
have a different colorspace. they also then get affected by the color
reproduction of the screen at the other end.
To be fair, I'm not that aware of how the hardware presents itself
in regard to such things (data sheets seem hard to come by, and I
have gone looking for them in vain on a few occasions), but for many color
critical uses, it's not an immediate concern because such applications
are not going to be using yuv buffers. (Exception might be a video
editing/color grading application sending previews to a TV or
studio monitor - but all that is about encodings rather than
colorspaces.)
any list of colorspaces IMHO should also include the yuv colorspaces where/if
possible.
I don't think so. If you look at the video standards, the color spaces are
all specified as RGB. YCbCr is a different encoding of the same color
space with a precise definition of the transformation to/from.
if a colorspace is not supported by the compositor then the app just
needs to take a "best effort". the default colorspace today could be considered
BT.709/sRGB. also you could say "it's a null transform" colorspace. i.e. you know
nothing so don't try to color correct.
There is a distinction between color encoding and color space.
my point was i don't think it's needed to split this up.
compositor lists available colorspaces. a list of 1 sRGB or null-transform or
adobe-rgb(with transform matrix), wide-gamut, etc. means that that is the one
and only output supported.
I'm not quite sure of the context here - the display system only
knows about color spaces it has been told about. Someone has
to tell it what the color profile of its displays are, and
the application is the thing that knows what the color spaces
of the input spaces it deals with are.
not as i see it. given a choice of output colorspaces the client can choose to
do its own conversion, OR if its colorspace of preference is supported by the
compositor then choose to pass the data in that colorspace to the compositor
and have the compositor do it.
Yes. But one is not the equivalent of the other, if the compositor
doesn't have the same color transformation capability.
*sigh* and THAT IS WHY i keep saying that the client can choose to do its own!
I'm in furious agreement with this bit. I just want to make sure that
it is a core capability.
BUT this is not going to be perfect across multiple screens unless all screens
This is an already solved problem in other systems, including X11.
1 screen is a professional grade monitor with wide gamut rgb output.
1 screen is a $50 thing i picked up from the bargain basement bin at walmart.
null transform RGB
BT.709 YUV
Why is it reporting an encoding rather than a colorspace,
and why isn't it providing the two display profiles ?
null transform RGB
wide gammut RGB
sRGB
BT.709 YUV
BT.601 YUV
I don't see how extra encodings are useful without their
corresponding color spaces.
in the dumb case your app can't do much.
In the dumb or core case, it has two display profiles, one
for the professional grade monitor with wide gamut rgb output,
and the other for the bargain basement bin display from walmart.
It can then transform source images in whatever colorspace they
are tagged with, into the appropriate display colorspace,
in the way the application and user needs it to be transformed.
the smart case means that pixels
displaying on the pro monitor either with null transform OR with wide gamut
colorspace get no transform done. pixels in sRGB, BT.709 and BT.601 have to be
transformed to the wide gamut rgb colorspace by the compositor. of course the
user would place the window on the best quality screen. within the color
spectrum the screens share, colors SHOULD look identical. the client KNOWS the
colorspace being used and can transform/render data accordingly.
In the smart or enhanced case, the application would provide a source colorspace
profile, and the compositor would transform to the appropriate display
colorspace and encoding in the limited fashion it is capable of. This
is probably quite acceptable for many applications with a limited range
of input formats or color conversion requirements.
the point of wayland is "every frame is perfect". if you want clients to
render their content differently based on what screen their window is on,
then a compositor can NEVER get this right no matter how hard they try, because
clients are fighting them and making assumptions they absolutely should not. i
already told you of more realistic cases of windows in miniature in pagers that
are not on the same screen as the full sized window (as opposed to the silly
bunny rabbit example above, but it's meant to make a point).
If this is really the case, then the conclusion is that Wayland is
not suitable for serious applications, and certainly is not a replacement
for X11. I don't actually think that that is true.
you HAVE to abstract/hide this kind of information to ALLOW the compositor to
get things right.
I doubt that. You just have to make some allowance for the
application being able to determine the RGB values sent to the
display, if it wishes to. Given that this is basically
the case without compositor color management, and that
in the compositor there is a definition of how surfaces
get mapped to displays, I don't see at all why this is
now impossible, when it is supported in other serious
graphics systems.
A color critical user won't put up with such things - they expect to
be in control over what's happening, and if a system has proper
color management (core + enhanced), there is absolutely no
reason for them to run the display in anything other than its native gamut.
a user actually should not have to deal with most of these issues at all. even
a color critical one. they likely shouldn't have to remember which one of their
16 screens has the best colorspace support for that image.
Ideally, yes - but few have the money to get a full set of 16 EIZO displays.
No, it's a list of N output display colorspaces, one for each display.
see above. it should not be per display.
How can there be any color management unless the
colorspaces of each display is recorded somewhere ???
As explained, yes, core color management needs support -
control over VideoLUT state, plus registration of the output
display colorspaces + knowledge of which output the different
parts of a surface map to.
as you describe "core color management" - it's not control. that's simple
passive reading of the state and providing to the client. control is when you
start determining the state of these.
That part is just information needed for the client application to
perform color management, but calibration needs control over CRTC
per channel VideoLUTs.
sRGB is the colorspace of every HD display (or should be). how does it not come
into it?
Because that's not actually true. Each real display has its own
response. That's why I write tools to profile displays, and why
people use those tools.
you don't need anything special for color calibration beyond a null transform
and a compositor that won't go ignoring that null transform anyway for the
purpose of color calibration (when used by a calibration app).
Agreed, + control over calibration curves.
It's the simplest possible support, (hence calling it "core").
It's needed internally anyway for a compositor to implement CMM
operations for "enhance" color management.
it's also broken when you attach the color profile to a specific output. see
above.
No it's not - it already works on X11/OS X/MSWin.
that's out of scope for wayland.
Exactly, which is why you can't hope to cover all possible
client application requirements with color management
done in the compositor.
HOW it is transformed is either done
client-side to present whatever source data in a given output colorspace to the
compositor OR it's done by the compositor to fix colorspaces provided by
clients to display as correctly as possible on a given screen + hardware.
Right - so the client-side needs proper support for doing this, which
is what a "core" color management extension provides.
Hmm. Not really. Mostly a lot of other stuff has to go on top of that
to make things turn out how people expect (source colorspace definition,
white point mapping, gamut clipping or mapping, black point mapping etc.)
source definition is out of scope.
It can't be out of scope if the compositor is to do color management.
that's up to the app (e.g. photoshop). the
colorspace definition indeed covers what you say. and it is about adjusting. i
was saying the exact same thing. i am not unfamiliar with colorspaces, color
correction and mapping. it's necessary for YUV->RGB and is fundamentally the
same as RGB->RGB
I'm now wondering if we are talking about different things.
The color management protocol I'm commenting on, is about
transforming between different device color spaces,
defined by ICC profiles etc. You seem to be referring
mainly to color encoding transforms, although you are then
throwing in references to sRGB, which is a colorspace definition.
1 colorspace which is the screen's output space is NOT the same? is that not
the same as a single screen system with the display colorspace on that 1
screen? how is it not the same? it's 1 colorspace exposed by compositor to
client in both cases. the SAME colorspace. how is this not the same?
the difference is that i dont think it should be per monitor.
The whole point is that each display has a different color response,
and incoming color should be transformed to compensate for these
differences. So each display (ideally) should have an associated
color profile.
and that is why when a compositor DOES know the display colorspace it would
list that likely in addition to a null transform (there is basically no
downside to listing a null transform. it's the compositor just doing nothing
which is about as efficient as it gets).
This isn't typically true. A->B + B->A is not actually a null transform
for (say) a cLUT based ICC profile, since the B2A is not an exact inverse of
A2B. So you have to add a hack, that declares it a null transform.
if the colorspace of a provided buffer == colorspace of output then it IS
effectively a null transform for the compositor and it does (or should do) just
that.
This depends on technical details of the profiles. Some
sorts of profiles will be very close to null transforms, and some
will not. (See above).
Whether a profile is "exactly" invertible (i.e. to floating point
precision) depends on its type. Use a LUT for the per channel curves
(such as the original sRGB profile), and it's not quite perfectly
invertible (although it may be to low precision). Use cLUT based profiles,
and it certainly isn't. So it has to be declared to be
a special case and assumed to be a null transform.
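A toy demonstration (my illustration, not from the thread) of why a quantized LUT forward transform followed by its "inverse" is not an exact identity. The tables here approximate the sRGB encoding and decoding curves at 8-bit precision; real ICC per-channel or cLUT tables fail to round-trip for the same quantization reasons:

```python
# Build a 256-entry forward (linear -> encoded) table and an inverse
# (encoded -> linear) table, quantized to 8 bits as a simple LUT would be,
# then count how many codes fail to survive the round trip.
N = 256

def srgb_encode(x):
    # linear [0,1] -> sRGB-encoded [0,1]
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(y):
    # sRGB-encoded [0,1] -> linear [0,1]
    return y / 12.92 if y <= 0.04045 else ((y + 0.055) / 1.055) ** 2.4

fwd = [round(srgb_encode(i / (N - 1)) * (N - 1)) for i in range(N)]
inv = [round(srgb_decode(i / (N - 1)) * (N - 1)) for i in range(N)]

changed = sum(1 for v in range(N) if inv[fwd[v]] != v)
print(changed)  # > 0: the "null" round trip alters some pixel values
```

This is exactly why matching profiles have to be *declared* a null transform rather than actually executed as forward + inverse.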
no one is asking anyone to transform anything (thus invert or anything else)
with a null transform.
That's what a null transform is though - a forward conversion
(Device space to PCS) followed by an inverse transform
(PCS to Device space).
and if colorspaces match no one is converting anything
either.
That's the hack - declaring matching profiles to be a null
transform, even though if you actually performed the transform,
pixel values might be altered.
"colorspaces match". to me that means either a strictly standards defined
colorspace with fixed constants and both sides agree to use it, or its
something where the constants have adjustments based on doing a color profile
of the screen. in BOTH cases i argue that if you flatten the data into some
memory blob memcmp() == 0 if they match. the best way i see is the compositor
provides a list, client chooses and just says "i used the colorspace #6 you
told me". then it does match when on display/hardware that really exactly
physically matches. if it doesn't match compositor will have to "choose what to
do". see above.
The only way to make it almost certain, is for the client application
to download the display profile from the compositor, and then set it as the
surface source profile. But this breaks down if the surface covers more than
one display - you would need a means of setting a source profile for the
different regions that correspond to each display. This isn't a requirement
for normal compositor implemented color management. But if you can simply
mark the surface as "do not color manage" instead, then there are no such
problems, each pixel of the surface is in the display colorspace it maps to.
* If a surface straddles two displays, then labeling all the pixels
with one of the two displays profile is not the same as not
touching the pixels.
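The two designs being debated can be sketched side by side (my own illustration, not protocol code): a "do not color manage" flag acts as a wildcard regardless of the output, whereas profile matching is per output, so a surface spanning two displays can match at most one of them:

```python
def needs_transform(surface_profile, output_profile, bypass_flag):
    # flag variant: a wildcard "null", independent of the output's profile
    if bypass_flag:
        return False
    # matching variant: only an identical profile counts as a null transform
    return surface_profile != output_profile

# surface straddling two outputs with different profiles:
print(needs_transform(b"profA", b"profA", bypass_flag=False))  # False: null
print(needs_transform(b"profA", b"profB", bypass_flag=False))  # True: converted
print(needs_transform(b"profA", b"profB", bypass_flag=True))   # False on both
```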
either way if the client is colorcorrecting itself based on the display output
it thinks it might be on (and it may be on many display outputs or wrapped
around bunnies)... then it WILL look incorrect on at LEAST one of those
displays at some point. and the point is to not look incorrect.
I don't really see why that should be the case, any more than the situation
with any other client display content change.
* What happens at startup, before the output display profiles are
loaded into the compositor, or if there is no display profile ?
How do you create a null transform to do an initial calibration
or profile ?
at startup a compositor would load the color profiles that were
configured/stored from any previous execution that it knows match the displays
That's not possible if there was no previous execution.
it has. you mean at setup time - like when someone buys a new monitor...
There may be no profile initially - one is only available after
the system is running, and the user is able to profile it.
i'd have the compositor use a null transform (do nothing to rgb values) UNTIL
it has calibration data for that screen. you dont have to "create" a null
transform. it's just listed in the colorspaces supported. it is the "do
nothing" colorspace.
There's no colorspace to "list" without a profile though.
why specialize it to a flag when it actually is just an "identity transform"
really which math-wise == do nothing as a fast path, which is already what
compositors do.
Because they aren't the same thing. A flag is a "null" transform irrespective
of what the output colorspace is - it's the equivalent of a wild card profile.
A specific color profile will only match a specific display profile to be
a null transform. The surface spanning more than a single display is an
example of this distinction.
let me roll back in time. long ago in a land far far away i was working with
x11 systems...
Hey - so did I. I was hacking on Labtam X terminal cfb code in the late 80's/early 90's,
making sure we had the fastest X terminals in the world :-) :-)
and you then found some x11 apps that refused to work on your
xserver... because they NEEDED an 8bpp visual, but your display was just a 1
bit mono one? no emulation. apps were specifically bound to a specific depth
because thats how x11 worked. it strictly defined the output pixel value of
operations so emulation was disallowed. result - you cant run the app at all.
Sure - it took effort to write portable applications, and not everyone
was aware, or could be bothered. X terminals worked really well with
some applications, but quite poorly with those that had been written
on the assumption that the X server was running on the same box as
the client. Not much has changed - a lot of web applications seem to do
the same - they run really badly in real life, but I'm sure they
run perfectly on the developer's machine!
then not long after i had 8bpp x11 apps that refused to run on 16bpp. they also
didn't work on 1bpp. hooray! i ended up actually porting quake to 16bpp myself
(i had some ... let's say dubiously obtained source to have a linux and even
solaris/sparc (8bpp), osf1/alpha(8bpp) and linux/ix86(16bpp) port of quake to
x11...).
Dealing with 8bpp displays was what actually led me to take an interest in
color science. I started investigating perceptually uniform colorspaces
in developing 24->8 bpp color quantization code for my xli fork of xloadimage.
the problem was that you ended up with apps that just refused to work and if
i didn't have source and the time and desire to fix them, they would have
continued to not work and if i was a regular user i would likely have just
sworn and gotten unhappy and eventually moved to a platform where this doesn't
happen.
i do not want to see this kind of thing happen again in wayland land. that's
why it matters to me. it leads to a frustrating user experience.
I'm not sure of the relevance. There are many color managed applications
written for other graphics systems, and while there are things that can trigger
color management issues (OS X is somewhat notorious for issues caused
by Apple's API changes), I can't see how the situation could be analogous
to the 1bpp/8bpp/24bpp X11 situation you illustrate above.

Graeme Gill.
Chris Murphy
2016-12-20 18:11:07 UTC
Permalink
Hi Graeme,
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
then no need for any extension. :) compositor HAS to be involved to at least
tell you the colorspace of the monitor... as the screen is its resource.
As I've explained a few times, an extension is needed to provide
the Output region information for each surface, as well as each
Output's color profile, and to be able to set each Output's
per channel VideoLUT tables for calibration.
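The per channel VideoLUT tables mentioned here have a well-known shape: one table per channel of 16-bit entries, as loaded into the CRTC by the DRM/KMS or X11 gamma interfaces. A minimal sketch, with illustrative gamma values (the function name and numbers are my own):

```python
def make_ramp(size, gamma):
    # one 16-bit table per channel; gamma == 1.0 is the identity (null) ramp
    return [round(((i / (size - 1)) ** gamma) * 0xFFFF) for i in range(size)]

size = 256
red, green, blue = (make_ramp(size, g) for g in (1.0, 1.02, 0.98))

# the identity ramp a calibration tool would load before measuring:
assert red == [round(i / (size - 1) * 0xFFFF) for i in range(size)]
print(red[0], red[-1])  # 0 65535
```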
That's one way of looking at it, yes. But no, the exact thing you're
describing will never occur for any reason. If you'd like to take a
step back and explain your reasoning, as well as the alternate
solutions you've discarded, then that's fine, but otherwise, with a
firm and resolute 'no, never' to this point, we're at a dead end.
We can't have multiple white points on the display at the same time;
it causes incomplete user adaptation and breaks color matching
everywhere in the workflow. The traditional way to make sure there is
only one white point for all programs is manipulating the video card
LUTs. It's probably not the only way to do it. But if it's definitely
not possible (for reasons I'm not really following) for a privileged
display calibration program to inject LUT data, and restore it at boot
up time as well as wake from sleep, then another way to make certain
there's normalized color rendering on the display is necessary.

The holy grail is as Richard Hughes describes, late binding color
transforms. In effect every pixel that will go to a display is going
to be transformed. Every button, every bit of white text, for every
application. There is no such thing as opt in color management, the
dumbest program in the world will have its pixels intercepted and
transformed to make sure it does not really produce 255,255,255
(deviceRGB) white on the user display.

The consequences for a single dumb program, for even 2 seconds,
showing up on screen with 255,255,255 white on an uncalibrated display
is the user's white point adaptation is clobbered for at least 10
minutes, possibly 30 or more minutes. And it doesn't just affect the
other things they're viewing on the display, it will impact their
ability to reliably evaluate printed materials in the same
environment.

So the traditional way of making absolutely certain no program can
hose the workflow is this crude lever in the video card. If you can
come up with an equivalently sure-fire reliable mechanism that doesn't demand
that the user draw up a list of "don't ever run these programs" while
doing color critical work, then great. Otherwise, there's going to
need to be a way to access the crude calibration lever in the video
card. Even though crude, this use case is exactly what it's designed
for.
--
Chris Murphy
Graeme Gill
2016-12-21 01:43:50 UTC
Permalink
Hi Chris,
Post by Chris Murphy
So the traditional way of making absolutely certain no program can
hose the workflow is this crude lever in the video card. If you can
come up with an equivalently sure-fire reliable mechanism that doesn't demand
that the user draw up a list of "don't ever run these programs" while
doing color critical work, then great. Otherwise, there's going to
need to be a way to access the crude calibration lever in the video
card. Even though crude, this use case is exactly what it's designed
for.
I really wouldn't call it just a "crude lever", given the other
benefits, such as it being a higher resolution way to ensure
"nice" display characteristics. Other possible mechanisms
further up the rendering pipeline are not in as good
a place for this in some ways, and it has zero performance
impact. Tools, standards and workflows are already geared to this
mechanism.

Cheers,
Graeme Gill.
Chris Murphy
2016-12-21 19:18:08 UTC
Permalink
I'd like to attack and bottom out the
non-calibration usecase without muddying those waters, though ...
What about games? I'm not aware of any color managed games. As far as
I know they all assume deviceRGB and are content with the user getting
something rather obviously less chromatic with less dynamic range on a
laptop, and rather more if they have a 4K TV, or a wide gamut display.
Do games typically draw through X or do they use DRM/KMS?

Video. This is such a rat hole of a mess right now. There are
different codecs with different color spaces, and some of the wrappers
imply different color handling on top of this, and then the user
agents all end up interpreting this mess differently so the same video
content can appear differently depending on user agent, and then
depending on platform. It's so bad. In ancient times Apple had
quicktime video pumped through ColorSync with very good performance.
I'm going to guess this involved simple matrix + TRC defined source
and destination color spaces for the transform. I know that Adobe
Flash, about 3-4 years ago, started to do on-the-fly display
compensation if the metadata in the source file enabled it. (I don't
remember if the tag says "I'm sRGB" and therefore Flash color managed
it, or if the tag says "color manage me" and Flash assumes it's
sRGB and therefore color manages it.) And all of this would work in a
web browser's cut out area.

Web browsers. Right now all the web browsers are doing their own color
management, haphazardly. e.g. only images with embedded profiles are
color managed. Untagged images, html and css are deviceRGB, there is
no transformation, and thus no display compensation most of the time.
Browser developers have complained color managing every element in a
web page slows down rendering by a ton. Maybe they're doing it wrong.
:-D Also, browser plugins do their own thing, even if the browser has
full color management enabled, this doesn't affect plug-in content. I
don't know if there's any difference between NPAPI vs PPAPI plug-ins
in this regard.

GIMP, Scribus, Krita, Digicam, Darktable - are all color managing
their content in advance (early binding) and supply device RGB, having
used the current display profile as the destination in a transform. So
they already do display compensation and don't need it to happen
again. So either they've got a bunch of work to do to rip that out if
Wayland is going to do it for them, or they need a passthrough. Web
browsers are a good example of programs that should probably rip out
what they have and have something else completely color manage all of
their content regardless of whether it's native elements or plug-in,
but I digress.

There has been a need for an explicit pass through to avoid any (or
additional) color management, and there are presently programs that
continue to expect that there will be.

On macOS this is achieved by tagging the content with the current
display profile, since source equals destination, the CMM sees this
and does a null transform, it's a noop. But there's no actual,
literal, off switch. If the content is not explicitly tagged, it's
assumed to be sRGB and hence color managed. So it's both opt in (if
you want something other than sRGB) and opt out (you want a pass
through).

On iOS there are no transforms, there is no need for display
compensation because all displays are either sRGB or (recently)
DCI-P3, and apparently they have the quality control to ensure this
level of consistency at a hardware level. Ironically, they've reverted
to closed loop hardware based calibration. Don't underestimate the
power of consistency and ancient ideas!

On Windows, ICC based color management is opt in, there is an API. If
a transform is not requested, it doesn't happen.

On Android, it's the same as Windows except no API, and display gamut,
dynamic range and tone response are all over the map. (And yet the
world isn't ending, even though if you view the same image on 10
Android phones or tablets you will have 10 different experiences).

I don't assume that on Linux any of these must be mimicked. But there
are distinct advantages with mimicking: we'll have a good idea where
the bodies are buried, rather than having something completely
different from any other platform.
--
Chris Murphy
Graeme Gill
2017-01-05 02:30:23 UTC
Permalink
Daniel Stone wrote:

Hi Daniel,
For the purposes of this discussion, I'd like to park the topic of
calibration for now. It goes without saying that providing facilities
for non-calibration clients is useless without calibration existing,
I'm puzzled by what you mean by "non-calibration clients" ?

- taken literally that translates to applications that don't
perform calibration, which is almost all of them, which I guess
is not what you mean.
but the two are surprisingly different from a window-system-mechanism
point of view; different enough that my current thinking tends towards
a total compositor bypass for calibration, and just having it drive
DRM/KMS directly. I'd like to attack and bottom out the
non-calibration usecase without muddying those waters, though ...
There are dangers in bypasses that go outside the normal
rendering pipeline - if not very carefully understood, they
can lead to the situation where the test values are not
being processed in the same way that normal rendering is
processed, leading to invalid test patch results.

And in any case, this approach always strikes me as really hacky -
if there is a well thought out color management pipeline, then
the same mechanisms used to configure the steps in the pipeline
are the very ones that should work to configure it into a state suitable
for measurement. This is natural and elegant, and much safer.

("Safer" in color management terms typically translates to
"less likely to lead to difficult to diagnose failures to
get the expected color, due to processing steps that are
modifying color in hard to scrutinize ways".)
Post by Chris Murphy
The holy grail is as Richard Hughes describes, late binding color
transforms. In effect every pixel that will go to a display is going
to be transformed. Every button, every bit of white text, for every
application. There is no such thing as opt in color management, the
dumbest program in the world will have its pixels intercepted and
transformed to make sure it does not really produce 255,255,255
(deviceRGB) white on the user display.
I wouldn't call this a "holy grail", nor am I sure this really
falls into what is normally regarded as a "late binding" color
workflow.

In color workflows, "late binding" typically refers to delaying
the rendering of original (assumed photographic) source material
into an intermediate device dependent colorspace until it actually
gets to the point where this is necessary (i.e. printing
or display). But this is not automatically better or easier,
it actually comes down to where the rendering intent information
(creative judgment) resides, as well as where the gamut definitions reside.
[At this point I'll omit a whole discussion about the nuances and pro's
and con's of early & late binding.]

What Chris is talking about above, is simply providing a mechanism
in a display server to by-default manage non-color managed "RGB"
output from applications. That's very desirable in a world
full of non-color aware applications (including most desktop
software itself) and native wide gamut displays.
I completely agree with you! Not only on the 'opt in' point quoted
just here, but on this. When I said 'opt out' in my reply, I was
talking about several proposals in the discussion that applications be
able to 'opt out of colour management' because they already had the
perfect pipeline created.
Right. A completely different case to dealing with
non-color aware/managed applications.
As arguments to support his solution, Graeme presents a number of
cases such as complete perfect colour accuracy whilst dragging
surfaces between multiple displays, and others which are deeply
understandable coming from X11. The two systems are so vastly
different in their rendering and display pipelines that, whilst the
problems he raises are valid and worth solving, I think he is missing
an endless long tail of problems with his decreed solution caused by
the difference in processing pipelines.
Sorry, I'm not at all convinced that I don't understand
many of the differences between X11 and Wayland.
Put briefly, I don't believe it's possible to design a system which
makes these guarantees, without the compositor being intimately aware
of the detail.
Using device links is a direction to make this possible I think,
since this allows decoupling the setting up of the color transform,
from when (and who) executes the corresponding pixel transformation.
So the application can then (if it chooses to manage its own color)
setup the color transformations for each output, while the compositor
can execute the pixel transformation on whichever of the surface
pixels it chooses to whichever of the outputs it needs to render to.
Note the implications about what range of source colorspace widths
the compositor then needs to handle though!
I'm not fussy about the architectural details. But there's a real
world need for multiple display support, and for images to look
correct across multiple displays, and for images to look correct when
split across displays. I'm aware there are limits how good this can
be, which is why the high end use cases involve self-calibrating
displays with their own high bit internal LUT, and the video card LUT
is set to linear tone response. In this case, the display color space
is identical across all display devices. Simple.
Except it's not, unless all the displays have identical
chromaticity primaries and mix the colors in the
same way. So conceivably yes, for high quality displays that
are notionally identical. But even high end displays
with more complex internal calibration machinery
have the same limitations with regard to how to
match gamuts if the primaries are not identical, the same as
attempting to use video card lut-matrix-lut machinery
- you can't square the circle. You either reduce all
to a common smaller gamut, or lose any control over
how clipping occurs. This may be perfectly OK for
displays that are close to being the same, or for
users that are prepared to sacrifice some gamut,
but it is distinctly not so good when mixing different
types of displays, which happens all the time when someone
docks their laptop.
Similarly, I'd like to park the discussion about surfaces split across
multiple displays; it's a red herring.
I'm not sure why you say that, as it seems to lie at the
heart of what's different between (say) X11 and Wayland.
Again, in X11, your pixel
content exists in one single flat buffer which is shared between
displays. This is not a restriction we have in Wayland, and a lot of
the discussion here has rat-holed on the specifics of how to achieve
this based on assumptions from X11. It's entirely possible that the
best solution to this (a problem shared with heterogeneous-DPI
systems) is to provide multiple buffers.
I'm certainly not assuming anything like a single buffer shared
between displays - all I'm interested in is who sets up and who
does the transformation between the source color spaces
and the output colorspaces. Spatial transformation is orthogonal.
Or maybe, as you suggest
below, normalised to an intermediate colour system of perhaps wider
gamut.
Doesn't solve the problem. Stuffing color into a wide gamut
colorspace is easy, what to do with it after that is hard,
since the transformation of those colors to a specific
output may depend on the source gamut and the destination
gamut. i.e. stuffing things into a wide gamut space doesn't
decouple the transformation without also sacrificing
the color outcome due to loss of control over clipping
or gamut mapping behavior.
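The clipping-control point can be made concrete with a toy example (mine, not from the thread): naive per-component clipping of an out-of-gamut color changes the ratio of R:G:B and therefore the hue, while scaling toward the gamut preserves it. This is the "clip hue changes" problem referred to later in the thread as well:

```python
import colorsys

out_of_gamut = (1.3, 0.2, 0.1)  # a saturated red outside the display gamut

# per-component clipping: each channel clamped independently
clipped = tuple(min(1.0, max(0.0, c)) for c in out_of_gamut)  # (1.0, 0.2, 0.1)
# ratio-preserving scaling: divide by the largest component
scaled = tuple(c / max(out_of_gamut) for c in out_of_gamut)

h_clip = colorsys.rgb_to_hsv(*clipped)[0]
h_scale = colorsys.rgb_to_hsv(*scaled)[0]
print(h_clip != h_scale)  # True: clipping shifted the hue
```

Which trade-off is right (hue preservation vs lightness/saturation) depends on the source and destination gamuts, which is exactly why the transform can't be decoupled from them.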
The video card LUT is a fast transform. It applies to video playback
the same as anything else, and has no performance penalty.
So then I wonder where the real performance penalty is these days?
Video card LUT is a simplistic per-channel 1D transform. Maybe the "thing" that
ultimately pushes pixels to each display, can push those pixels
through a software LUT instead of the hardware one, and do it on 10
bits per channel rather than on full bit data.
Some of the LUTs/matrices in display controllers (see a partial
enumeration in reply to Mattias) can already handle wide-gamut colour,
with caveats. Sometimes they will be perfectly appropriate to use, and
sometimes the lack of granularity will destroy much of their value.
Interesting speculation of course, but I'm not actually sure it
helps, except for specific situations. For instance, it
may help the Video situation if there is a hardware decode
pipeline that renders directly to the display buffer (or
if further processing via the CPU or GPU is undesirable
from a processing overhead or power consumption point of view).
Doing accurate color in this situation is hard, because
simple machinery (i.e. matrix and 1D luts) assume a perfectly
behaved output device in terms of additivity, and either
a wider gamut than the video space or a willingness to put
up with whatever clipping the machinery implements (typically
per component clipping, leading to clip hue changes).

In general (i.e. for application supplied color), such machinery
doesn't offer much of interest over the far more flexible
and capable mechanisms that an ICC profile (and corresponding CMM)
offer, and that can be applied per application element rather than over the
whole output.

Even if it comes to the point where graphic card hardware routinely
offers a reasonable resolution multi-dimensional LUT
within a CRTC, I can't actually see that as being very useful,
apart from some very specific situations (for some reason you want
your whole desktop to be in a specific emulated colorspace
such as Rec709, rather than leaving it up to each application to take
full advantage of the displays capabilities).
If
the compositor is using the GPU for composition, then doing colour
transformations is extremely cheap, because we're rarely bound on the
GPU's ALU capacity.
Yes - programmable beats fixed pipelines for flexibility and
possible quality. I'm not so confident that super high quality
color management can't tax a GPU though - a high end
Video 3DLut box supports 64 x 64 x 64 resolution LUT
(that's something like 1.6 Mbytes of table for a single
transform @ 16 BPC), and MadVR uses a 256 x 256 x 256 table for
its color, i.e. 100 Mbytes using the GPU. And multiple
applications may use multiple transforms.
In practice I wouldn't expect apps. to normally push these limits,
because most of this stuff originated on systems with
much more limited CPU and memory, so 17^4 and 33^3 table
resolutions or similar are much more common.
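The table sizes quoted above check out arithmetically: a cLUT holds n^3 grid points of RGB triples at 16 bits (2 bytes) per component. A quick sketch of the calculation:

```python
def clut_bytes(n):
    # n^3 grid points, 3 output channels, 2 bytes (16 bits) per channel
    return n ** 3 * 3 * 2

print(clut_bytes(64) / 2**20)   # 1.5  MiB -> the "1.6 Mbytes" figure
print(clut_bytes(256) / 2**20)  # 96.0 MiB -> the "100 Mbytes" figure
```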
Mind you, I see an ideal steady state for non-alpha-blended
colour-aware applications on a calibrated display, as involving no
intermediate transformations other than a sufficiently capable display
controller LUT/matrix. Efficiency is important, after all. But I think
designing a system to the particular details of a subset of hardware
capability today is unnecessarily limiting, and we'd be kicking
ourselves further down the road if we did so.
Yep. Future project, as HW and possible usage becomes clearer.

Regards,

Graeme Gill.
Carsten Haitzler (The Rasterman)
2016-12-19 03:20:06 UTC
Permalink
Post by Graeme Gill
a display may not have a single native colorspace. it may be able to
switch. embedded devices can do this as the display panel may have extra
control lines for switching to a different display gamut/profile. it may
be done at the gfx card output level too... so it can change on the fly.
That's not a typical situation though, but nothing special would be
happening - a new profile may be installed by the user as well,
in which case an application should re-render to accommodate
the change.
the user at most should be interacting with compositor settings/tools to
configure a specific profile for a display (let's assume a fixed profile for
that screen), so a compositor tool to tell the compositor the profile changed
(pressed a button on the monitor to change it). when they alter the compositor,
then compositor can tell applications.
Post by Graeme Gill
yes. compositors right now work in display colorspace. they do no
conversions. eventually they SHOULD to display correctly. to do so they
need a color profile for the display.
For enhanced color management yes. But core comes first, and is necessary
for many color critical applications, because the compositor will never
have the color transformations they require.
they will have to have them, or then not support that colorspace at all.
Post by Graeme Gill
it may be that a window spans 8 different screens all with different
profiles. then what?
As I've explained several times, what happens is that the application
is aware of this, and transforms each region appropriately - just
as they currently do on X11/OS X/MSWin systems.
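What Graeme describes can be sketched as follows (an illustration under my own assumptions, not any real toolkit API): the client learns which rectangle of its surface maps to which output, and renders each region through that output's transform. Here the transforms are stand-in per-channel gains:

```python
# hypothetical per-output transforms; real ones would be ICC/CMM transforms
transforms = {
    "out-1": lambda px: tuple(min(255, round(c * g))
                              for c, g in zip(px, (1.0, 1.0, 1.0))),
    "out-2": lambda px: tuple(min(255, round(c * g))
                              for c, g in zip(px, (0.9, 1.0, 1.1))),
}
# which column span of the surface lands on which output
regions = {"out-1": (0, 2), "out-2": (2, 4)}

row = [(200, 200, 200)] * 4  # one scanline of source pixels
for name, (x0, x1) in regions.items():
    for x in range(x0, x1):
        row[x] = transforms[name](row[x])

print(row)  # left half untouched, right half adjusted for out-2
```

The Wayland objection that follows is precisely that the client cannot know `regions` in the general case.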
not in wayland. not acceptable. the app will never know which outputs it is on.
as i have said - could be wrapped around a sphere or a room full of bunnies
bouncing about across 8 screens.
Post by Graeme Gill
with a
proper color correcting compositor it can make them all look the same.
As will a color aware application given appropriate color management support.
it can't if it doesn't know how a window or buffer is transformed or mapped to
screens.
Post by Graeme Gill
The code is already there to do all that in color critical application.
then they get to pick a colorspace and render to that. attach that to the
buffer. compositor will do whatever it wants. e.g. if that buffer is on the
screen it matches it'd do no transform. maybe it does no transforms at all, and
simply hovers some indicator above the surface telling you if the surface
colorspace matches the screen's colorspace or not. maybe compositor disallows
the window to be moved off the screen that matches the colorspace. that can be
configured via the compositor.
Post by Graeme Gill
if i move the
window around the client has drawn different parts of its buffer with
different colorspaces/profiles in mind and then has to keep redrawing to
adjust as it moves.
Yes.
and now you just hit "unacceptable in wayland land" space. which is why i tell
you that this is not acceptable.
Post by Graeme Gill
you'll be able to see "trails" of incorrect coloring around the
boundaries of the screens until the client catches up.
It's damage, just like any other, and color critical users using
color critical applications will take "trails" over wrong color
anytime. No "trails" and wrong color = a system they can't use.
and totally unacceptable for wayland space. so a big fat no to that kind of
design.
Post by Graeme Gill
the compositor SHOULD do any color correction needed at this point.
Not at all. That's a way to do it under some circumstances yes, but
it's not satisfactory for all.
it is up to the compositor. if the goal is to make colors visibly as uniform as
is possible given the displays that exist then it should. not all compositors
will. they may do things differently like above. maybe limit location of a
window/surface or place an indicator above a window when it is fully on the
correct screen for its color profile, and/or they may transform colorspaces.
Post by Graeme Gill
if you want
PROPER color correction the compositor at a MINIMUM needs to be able to
report the color profile of a screen even if it does no correcting.
Yes - exactly what I'm suggesting as core color management support.
but it doesnt have to tell you which screen it is nor tell you which screen
your window is on. it's an option of 1 of various colorspaces to be able to use
given a specific compositor.
Post by Graeme Gill
yes you may have
multiple screens. i really dislike the above scenario of incorrect pixel
tails because this goes against the whole philosophy of "every frame is
perfect".
"Every pixel being perfect" except they are the wrong color, isn't perfect.
IF the compositor is doing the transforms of colorspaces, then you can achieve
perfect pixels.
Post by Graeme Gill
There are multiple ways of doing the best thing possible - you can't re-render
a frame in the compositor if it doesn't have the pixels needed to render it,
so you can 1) not re-render until the application provides the pixels
needed 2) Render the wrong color pixels until the application catches up
or 3) (if the compositor has some color management capability and
the application sets it up) get it to do an approximate correction to
the pixels until the application catches up with the correct color.
thus the compositor only supports the colorspaces/profiles it knows how to
transform if it is doing this, or it is passive and just reports what is
available doing no transforms and maybe saying "get monitors that share the
same color profile if you really care".

reality is compositors do have to do transforms to get 601, 709 and the new 2020
colorspaces right for video anyway.
Post by Graeme Gill
you
cannot do this given your proposal. it can only be done if the compositor
handles the color correction and the clients just provide the colorspace
being used for their pixel data.
And a compositor can't know how to transform color in the way some
applications require. This trumps such goals.
if it doesn't know, and it wants to display by transforming, then it shouldn't
offer it.
Post by Graeme Gill
being able to modify what the screen colorspace is in any way is what i
dislike.
That's the reality of how displays work. The user presses a button on the
front that says "emulate sRGB" or "native" or "Preset 1" or something else.
i mean control by applications. by wayland clients.
Post by Graeme Gill
only the compositor should affect this based on its own decisions.
And color critical users will scream bloody murder at anything related
to color that isn't under their control, if it affects the accuracy or scope
of the color workflow.
that's what compositor tools are for. the compositor settings are going to be
how you load/configure color profiles for a display. not via protocol.
Post by Graeme Gill
No, not supported = native device response = not color managed.
and for most displays that is sRGB.
Not in the slightest. Having (ahem!) profiled a few displays, none of them
are exactly sRGB. Some may aspire to be sRGB, they may approach sRGB,
but (because they are real devices, not an idealized norm) none are sRGB.
[ Black point alone is miles out for most LCD based displays. ]
of course they are not exact. they are approximately srgb which is vastly
different from adobe or wide gamut etc. no display other than insanely priced
professional displays is going to be EXACTLY sRGB or adobe rgb etc. it's going
to be a bit off and thus some color profile adjusting via either some table/LUT
with interpolation is going to be needed.
Post by Graeme Gill
No compositor is involved. If the application doesn't know
the output display profile, then it can't do color management.
it can assume sRGB.
That's up to the user. The user may have something else they can
assign if they are unable to profile the display (EDID derived
profile, model generic profile, etc.)
that's the compositor that deals with this internally. via its own tools and
settings. not some generic user tools or photoshop app etc.
Post by Graeme Gill
Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color critical applications requirements,
so color management without any participation of a compositor
is a core requirement.
of course it can. client provides 30bit (10bit per rgb) buffers for
example and compositor can remap. from the provided colorspace for that
buffer to the real display colorspace.
It's not about bit depth, it's about algorithms. No compositor can
do a transformation that it doesn't have an algorithm for.
and then it doesn't offer it if its display methodology is to transform to get
colors uniform. pretty simple.
Post by Graeme Gill
Relying on an artificial side effect (the so called "null color
transform") to implement the ability to directly control what is
displayed, is a poor approach, as I've explained at length previously.
but that is EXACTLY what you have detailed to rely on for color managed
applications. for core color management you say that the client knows the
colorspace/profile/mappings of the monitor and renders appropriately and
expects its pixel values to be presented 1:1 without remapping on the
screen because it knows the colorspace...
Yes, a switch (Don't do color management) is far cleaner than trying
to trick a constant color management compositor into not doing color
management by feeding it a source profile that is (hopefully)
the same as the destination profile (and how do you do that
if the surface spans more than one Monitor ?)
you need a bunch of "enums" or values for colorspaces provided plus some
matching LUT or transform adjustment data. it makes far more sense for sRGB/709
(the assumed default right now) to be part of this list with no lut/adjustments
rather than a special single flag.
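to make the list-entry idea concrete, here is a minimal sketch of what an advertised colorspace entry carrying its own transform data could look like. all names and values here are illustrative, not any real wayland protocol; the point is only that sRGB/709 becomes an ordinary entry with no adjustment data attached, rather than a special flag:

```python
# Hypothetical sketch: a compositor-advertised colorspace list where each entry
# carries its own adjustment data (matrix and/or per-channel LUT). sRGB/BT.709,
# the assumed default, is just an entry with no adjustment data at all.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Colorspace:
    id: int                                      # handle the client attaches to its buffer
    name: str                                    # e.g. "sRGB", "monitor-3-native"
    matrix: Optional[List[List[float]]] = None   # 3x3 adjustment matrix, None = identity
    lut: Optional[List[float]] = None            # per-channel 1D LUT, None = identity

advertised = [
    Colorspace(id=0, name="sRGB/BT.709"),        # no lut/adjustments: the "null" entry
    Colorspace(id=1, name="monitor-3-native",    # made-up matrix, for illustration only
               matrix=[[0.90, 0.05, 0.05],
                       [0.02, 0.95, 0.03],
                       [0.01, 0.02, 0.97]]),
]

def is_passthrough(cs: Colorspace) -> bool:
    # the "null transform" falls out naturally: no matrix and no LUT attached
    return cs.matrix is None and cs.lut is None

assert is_passthrough(advertised[0])
assert not is_passthrough(advertised[1])
```

the same structure covers both the "dumb" compositor (a list of one entry with no data) and the "smart" one (per-display entries with real measured data).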
Post by Graeme Gill
No compositor should be involved for core support. The application
should be able to render appropriately to each portion of the span.
then no need for any extension. :) compositor HAS to be involved to at
least tell you the colorspace of the monitor... as the screen is its
resource.
As I've explained a few times, an extension is needed to provide
the Output region information for each surface, as well as each
outputs color profile, as well as be able to set each Outputs
per channel VideoLUT tables for calibration.
that's not going to happen. it's a wayland design premise that applications
should not know this.
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
this way the client doesn't need to know about outputs, which outputs it spans
etc. and compositor will pick up the pieces. let me give some more
That only works if the client doesn't care about color management very
much - i.e. it's not a color critical application. I'd hope that the
intended use of Wayland is wider in scope than that.
how does it NOT work?
It doesn't work when the compositor doesn't have the color transform
capability that the application requires.
see above.
Post by Graeme Gill
let me give a really simple version of this.
you have a YUV buffer. some screens can display yuv, some cannot. you want
to know which screens support yuv and know where your surface is mapped to
which screens so you can render some of your buffer (some regions) in yuv
and some in rgb (i'm assuming packed YUVxYUVxYUVx and RGBxRGBxRGBx layout
here for example)... you wish to move all color correct rendering,
clipping that correct (yuv vs rgb) rendering client-side and have the
compositor just not care.
Let me give you an example. The application has a ProPhotoRGB buffer,
and wants to render it with image specific gamut mapping into the display
space. It has code and algorithms to 1) Gather the image gamut, 2) Compute
a gamut mapping from the image gamut to the Output Display gamut, invert
the A2B cLUT tables of the Output display profile to floating point
precision with gamut clipping performed in a specially weighted CIECAM02 space.
I'm not quite sure how the Wayland compositor is going to manage all that,
especially given that the application could tweak or change this in
every release.
app provides the above mapped data. it ends up being an R, G and B value for
one of the compositor supported colorspaces - e.g. one of them is monitor #3.
when on monitor #3 the compositor does passthrough of that buffer. it displays
with no changes (for example). but when it is on another screen the compositor
maps FROM the colorspace of that monitor TO another. yes. this can be lossy.
you can dither of course (and even do temporal dithering) to approximate
colors. it will look best on that display. it'll look "as good as possible" on
other displays. the compositor doesn't need to know or care what original data
is if it's prophotorgb or anything else. it just does its best to map colors
for 1 output colorspace to another *IF* it does remapping. if it doesnt then
you get what you get. but the compositor knows what colorspace a buffer is for
and CAN try and get it right.
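the screen-to-screen remap being described can be sketched like this, assuming each output colorspace is described by a 3x3 linear-RGB-to-XYZ matrix (a simplification: real profiles also carry per-channel curves, and out-of-gamut values are simply clipped here):

```python
# Sketch of a compositor remapping a buffer rendered for one monitor's
# colorspace onto another monitor: go through a common space (XYZ) and clip
# whatever is lossy. Matrices here describe linear RGB -> XYZ for each output.
import numpy as np

def remap(pixel_rgb, src_to_xyz, dst_to_xyz):
    xyz = np.asarray(src_to_xyz) @ np.asarray(pixel_rgb)
    rgb = np.linalg.inv(np.asarray(dst_to_xyz)) @ xyz
    return np.clip(rgb, 0.0, 1.0)   # clipping: "as good as possible" on that display

# sRGB primaries with D65 white, as one example matrix
M_SRGB = [[0.4124, 0.3576, 0.1805],
          [0.2126, 0.7152, 0.0722],
          [0.0193, 0.1192, 0.9505]]

# same source and destination space: pixels pass through unchanged
out = remap([0.2, 0.5, 0.8], M_SRGB, M_SRGB)
assert np.allclose(out, [0.2, 0.5, 0.8])
```

when the buffer is on the monitor whose space it was rendered for, the transform collapses to identity and the compositor can skip it entirely.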
Post by Graeme Gill
the point of wayland is to be "every frame is perfect". this breaks that.
A pixel is not perfect if it is the wrong color.
flickering/trails are worse.
Post by Graeme Gill
If you don't care so much about color, yes. i.e. this is
what I call "Enhanced" color management, rather than core.
It doesn't have to be as flexible or as accurate, but it has
the benefit of being easy to use for applications that don't care
as much, or currently aren't color managed at all.
how not? a colorspace/profile can be a full transform with r/g/b points in
space... not just a simple enum with only fixed values (well that's how i'm
imagining it).
A color profile can be quite complex, including scripted in the case of
something like an OCIO or ACES profile
<http://www.oscars.org/science-technology/sci-tech-projects/aces>.
But the device profile is only half the story - it does nothing on its
own, it needs to be linked with another device profile. And the flexibility
at that point is unlimited.
and the device end is all a compositor cares about.
Post by Graeme Gill
provide a list of 1 colorspace -
the monitor native one. application renders accordingly. if colorspace of
rendered buffer == colorspace of target screen, compositor doesn't touch
pixel values.
Bad way of doing it, for reasons I've pointed out multiple times.
Be explicit rather than rely on a trick - use a switch.
it's not a trick. it's a standard null op. src fmt == dest fmt. no conversion
needed.
Post by Graeme Gill
color management require introducing such things. BT.601, BT.709, BT.2020.
the compositor MUST KNOW which colorspace the YUV data uses to get it correct.
Sure, but that's not an aspect I've mentioned. Ultimately the display
is RGB, irrespective of the encoding used to carry that information
to it.
without a colorspace attached to yuv buffers you cannot know how to transform
them to rgb (srgb) values correctly at all. ignoring screen output colorspace
differences entirely here. just plain math to srgb space.
Post by Graeme Gill
i'm literally staring at datasheets of some hardware and you have to tell
it to use BT.601 or 709 equation when dealing with YUV. otherwise the
video data will look wrong. colors will be off. in fact BT.709 == sRGB.
Sure - complexity in managing encodings. But that has nothing
directly to do with color management, which is about colorspace
differences.
it's the same thing - it's mapping a set of colors to another set in order to
get correct physical colors.
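the BT.601 vs BT.709 point is easy to demonstrate numerically: the same Y'CbCr triple decodes to different RGB depending on which matrix is applied, which is why the compositor must know which colorspace a yuv buffer carries. (a sketch using full-range values with Cb/Cr centered on zero; the coefficients are the standard ones from the respective recommendations.)

```python
# Same Y'CbCr data, two different decode matrices => different colors on screen.
def yuv_to_rgb_601(y, u, v):
    # ITU-R BT.601 luma coefficients
    return (y + 1.402 * v,
            y - 0.344136 * u - 0.714136 * v,
            y + 1.772 * u)

def yuv_to_rgb_709(y, u, v):
    # ITU-R BT.709 luma coefficients (the sRGB primaries case)
    return (y + 1.5748 * v,
            y - 0.18733 * u - 0.46813 * v,
            y + 1.8556 * u)

r601 = yuv_to_rgb_601(0.5, 0.1, 0.2)
r709 = yuv_to_rgb_709(0.5, 0.1, 0.2)
assert r601 != r709              # same buffer, visibly different result
assert yuv_to_rgb_601(0.5, 0.0, 0.0) == (0.5, 0.5, 0.5)  # neutrals agree
```

using the wrong matrix leaves neutrals alone but shifts every saturated color, which is exactly the "colors will be off" failure described above.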
Post by Graeme Gill
now here comes the problem... each hardware plane yuv may be assigned to
MAY have a different colorspace. they also then get affected by the color
reproduction of the screen at the other end.
To be fair, I'm not that aware of how the hardware presents itself
in regard to such things (data sheets seem hard to come by, and I
have gone looking for them in vain on a few occasions), but for many color
critical uses, it's not an immediate concern because such applications
are not going to be using yuv buffers. (Exception might be a video
editing/color grading application sending previews to a TV or
studio monitor - but all that is about encodings rather than
colorspaces.)
tell the people building millions of smart tv's that yuv color correctness
doesn't matter for that tv they are selling for $20,000+. :) hell even over
$100k. yuv color correctness matters just as much.
Post by Graeme Gill
any list of colorspaces IMHO should also include the yuv colorspaces
where/if possible.
I don't think so. If you look at the video standards, the color spaces are
all specified as RGB. YCbCr is a different encoding of the same color
space with a precise definition of the transformation to/from.
well the 3 i mention are always associated with a yuv encoding of video data
specifically. as i have mentioned srgb/709 are the same rgb colorspace. 2020 is
closer to wide gamut rgb.
Post by Graeme Gill
my point was i don't think it's needed to split this up.
compositor lists available colorspaces. a list of 1 sRGB or null-transform
or adobe-rgb (with transform matrix), wide-gamut, etc. means that that is
the one and only output supported.
I'm not quite sure of the context here - the display system only
knows about color spaces it has been told about. Someone has
to tell it what the color profile of its displays are, and
the application is the thing that knows what the color spaces
of the input spaces it deals with are.
the compositor will have configuration/tools/data. it will be configured to
have a specific profile/LUT/mapping for a specific screen. it may come with an
edid database for "professional monitors" and known color profiles or something
else... but it's not the place of a general client protocol to do this.
Post by Graeme Gill
not as i see it. given a choice of output colorspaces the client can
choose to do its own conversion, OR if it's colorspace of preference is
supported by the compositor then choose to pass the data in that
colorspace to the compositor and have the compositor do it.
Yes. But one is not the equivalent of the other, if the compositor
doesn't have the same color transformation capability.
and as above. don't advertise the colorspace if you can't transform it properly
and no display supports it, or limit the window or otherwise indicate to the user
that it is on the correct display or can only be there etc. if you don't
transform.
Post by Graeme Gill
*sigh* and THAT IS WHY i keep saying that the client can choose to do it's own!
I'm in furious agreement with this bit. I just want to make sure that
it is a core capability.
but they can't be guaranteed the compositor won't do more on top - eg adjust
colors when on a different screen not matching that profile. compositor may or
may not. as above.
Post by Graeme Gill
BUT this is not going to be perfect across multiple screens unless all
This is an already solved problem in other systems, including X11.
1 screen is a professional grade monitor with wide gamut rgb output.
1 screen is a $50 thing i picked up from the bargain basement bin at walmart.
null transform RGB
BT.709 YUV
Why is it reporting an encoding rather than a colorspace,
and why isn't it providing the two display profiles ?
it doesn't know them from a stick in the mud. it's a dumb compositor.
Post by Graeme Gill
null transform RGB
wide gamut RGB
sRGB
BT.709 YUV
BT.601 YUV
I don't see how extra encodings are useful without their
corresponding color spaces.
as i have said before. the colorspace is a color transform WITH adjusting
constants/matrix/LUT etc.
Post by Graeme Gill
in the dumb case your app can't do much.
In the dumb or core case, it has two display profiles, one
for the professional grade monitor with wide gamut rgb output,
no. the dumb case is "compositor has no idea. so i do rgb and yuv". it has no
clue otherwise.
Post by Graeme Gill
and the other for the bargain basement bin display from walmart.
It can then transform source images in whatever colorspace they
are tagged with, into the appropriate display colorspace,
in the way the application and user needs it to be transformed.
it has no clue what the bargain basement bin monitor displays. the dumb
compositor isn't even going to try.

in the smart case the compositor at least knows what color profiles exist and
advertises them. it may or may not transform them from screen to screen for you.
Post by Graeme Gill
the point of wayland is "every frame is perfect". you want clients to
render their content differently based on what screen their window is on
then a compositor can NEVER get this right no matter how hard they try
because clients are fighting them and making assumptions they absolutely
should not. i already told you of more realistic cases of windows in
miniature in pagers that are not on the same screen as the full sized
window (as opposed to the silly bunny rabbit example above, but it's meant
to make a point).
If this is really the case, then the conclusion is that Wayland is
not suitable for serious applications, and certainly is not a replacement
for X11. I don't actually think that that is true.
multiple people have told you this by now. applications are not going to know
what screen(s) their buffers span. it's abstract. equating this to "then
wayland is not for serious use" is sticking your head in the sand because
wayland does not work like every other display system that is 20 years old.
Post by Graeme Gill
you HAVE to abstract/hide this kind of information to ALLOW the compositor
to get things right.
I doubt that. You just have to make some allowance for the
application being able to determine the RGB values sent to the
display, if it wishes to. Given that this is basically
the case without compositor color management, and that
in the compositor there is a definition of how surfaces
get mapped to displays, I don't see at all why this is
now impossible, when it is supported in other serious
graphics systems.
see above.
Post by Graeme Gill
A color critical user won't put up with such things - they expect to
be in control over what's happening, and if a system has proper
color management (core + enhanced), there is absolutely no
reason for them to run the display in anything other than its native gamut.
a user actually should not have to deal with most of these issues at all.
even a color critical one. they likely shouldn't have to remember which
one of their 16 screens has the best colorspace support for that image.
Ideally, yes - but few have the money to get a full set of 16 EIZO displays.
even if it's 2 screens. the principle holds.
Post by Graeme Gill
No, it's a list of N output display colorspaces, one for each display.
see above. it should not be per display.
How can there be any color management unless the
colorspaces of each display are recorded somewhere ???
as i have said several times. colorspace includes the details of the mapping.
e.g. a LUT or matrix or whatever blob sufficiently describes the mapping.
Post by Graeme Gill
sRGB is the colorspace of every HD display (or should be). how does it not
come into it?
Because that's not actually true. Each real display has its own
response. That's why I write tools to profile displays, and why
people use those tools.
yes. i know it's slightly off and correcting input can make it look right (to
some extent). i'm talking broadly that most displays are nominally sRGB, or
nominally adobe rgb, or something else. there is a small bit of adjusting to do
to make them "close to perfect" and that's where your colorimeter plus doing an
actual profile of the display and recording the data helps. or maybe there is a
manufacturer provided file for that already...
Post by Graeme Gill
you don't need anything special for color calibration beyond a null
transform and a compositor that won't go ignoring that null transform
anyway for the purpose of color calibration (when used by a calibration
app).
Agreed, + control over calibration curves.
no out-of-compositor tool will control those curves. likely they will provide
some calibration file that the compositor consumes and either adjusts the output
or the monitor to correct it, or transforms pixel data to correct it, OR it can pass that
data back to clients via the colorspace list.
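the "compositor consumes a calibration file" path amounts to loading per-channel curves (the kind of 256-entry tables a profile's calibration data supplies) into the output's per-channel VideoLUT. a sketch under those assumptions, with the file format and names made up for illustration:

```python
# Applying per-channel calibration curves to a frame, the way a compositor
# could apply a loaded VideoLUT. Curves are sampled tables in [0,1]; values
# between samples are linearly interpolated.
import numpy as np

def apply_videolut(frame, curves):
    """frame: HxWx3 float array in [0,1]; curves: 3 tables of N entries in [0,1]."""
    out = np.empty_like(frame)
    for ch in range(3):
        lut = np.asarray(curves[ch])
        x = np.linspace(0.0, 1.0, lut.size)       # sample positions of the table
        out[..., ch] = np.interp(frame[..., ch], x, lut)
    return out

# an identity table (uncalibrated display) must leave the frame untouched
identity = [np.linspace(0.0, 1.0, 256)] * 3
frame = np.random.default_rng(0).random((4, 4, 3))
assert np.allclose(apply_videolut(frame, identity), frame)
```

the identity case is again the "null" behavior: until calibration data exists for a screen, the curves are straight lines and the compositor does nothing to the pixels.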
Post by Graeme Gill
It's the simplest possible support, (hence calling it "core").
It's needed internally anyway for a compositor to implement CMM
operations for "enhanced" color management.
it's also broken when you attach the color profile to a specific output.
see above.
No it's not - it already works on X11/OS X/MSWin.
not in wayland. bunny rabbits.
Post by Graeme Gill
that's out of scope for wayland.
Exactly, which is why you can't hope to cover all possible
client application requirements with color management
done in the compositor.
HOW it is transformed is either done
client-side to present whatever source data in a given output colorspace
to the compositor OR it's done by the compositor to fix colorspaces
provided by clients to display as correctly as possible on a given screen
+ hardware.
Right - so the client-side needs proper support for doing this, which
is what a "core" color management extension provides.
which is just a list of colorspaces (as i've said - they contain transform data
such as matrices, LUT tables etc.).
Post by Graeme Gill
Hmm. Not really. Mostly a lot of other stuff has to go on top of that
to make things turn out how people expect (source colorspace definition,
white point mapping, gamut clipping or mapping, black point mapping etc.)
source definition is out of scope.
It can't be out of scope if the compositor is to do color management.
compositor adjusts from an output colorspace to another. source is not
important to it. it also may not adjust/map and just add some indicator.
Post by Graeme Gill
that's up to the app (e.g. photoshop). the
colorspace definition indeed covers what you say. and it is about
adjusting. i was saying the exact same thing. i am not unfamiliar with
colorspaces, color correction and mapping. it's necessary for YUV->RGB and
is fundamentally the same as RGB->RGB
I'm now wondering if we are talking about different things.
The color management protocol I'm commenting on, is about
transforming between different device color spaces,
defined by ICC profiles etc. You seem to be referring
mainly to color encoding transforms, although you are then
throwing in references to sRGB, which is a colorspace definition.
no - i'm talking about the device rgb colorspaces (yuv happens to be included
too as video colorspace definitions that map yuv values to specific defined rgb
output).
Post by Graeme Gill
1 colorspace which is the screen's output space is NOT the same? is that
not the same as a single screen system with the display colorspace on that
1 screen? how is it not the same? it's 1 colorspace exposed by compositor
to client in both cases. the SAME colorspace. how is this not the same?
the difference is that i don't think it should be per monitor.
The whole point is that each display has a different color response,
and incoming color should be transformed to compensate for these
differences. So each display (ideally) should have an associated
color profile.
but clients shouldn't know which output it's for or which output they are on.
Post by Graeme Gill
and that is why when a compositor DOES know the display colorspace it would
list that likely in addition to a null transform (there is basically no
downside to listing a null transform. it's the compositor just doing
nothing which is about as efficient as it gets).
This isn't typically true. A->B + B->A is not actually a null transform
for (say) a cLUT based ICC profile, since the B2A is not an exact inverse of
A2B. So you have to add a hack, that declares it a null transform.
eh? client rendered data for output given a specific colorspace/profile ... if
the monitor it is displaying on matches that... how does this not equate to a
null transform? the data is already in the correct colorspace...
Post by Graeme Gill
if the colorspace of a provided buffer == colorspace of output then it IS
effectively a null transform for the compositor and it does (or should do)
just that.
This depends on technical details of the profiles. Some
sorts of profiles will be very close to null transforms, and some
will not. (See above).
i'm not talking close. i'm talking "i used colorspace 36 in your list for this
buffer". and colorspace 36 is the exact profile of monitor B. and the window
now is displaying ONLY on monitor B. no transform needed at all for display
there.
Post by Graeme Gill
is a profile "exactly" invertible (i.e. to floating point
precision). Use a LUT for the per channel curves (such as the
original sRGB profile), and it's not quite perfectly invertible
(although it may be to low precision). Use cLUT based profiles,
and it certainly isn't. So it has to be declared to be
a special case and assumed to be a null transform.
no one is asking anyone to transform anything (thus invert or anything
else) with a null transform.
That's what a null transform is though - a forward conversion
(Device space to PCS) followed by an inverse transform
(PCS to Device space).
which is inputpixel == outputpixel. nothing changes. no transform is done at
all.
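the disagreement here is between two meanings of "null transform": a forward-then-inverse profile conversion (which need not round-trip exactly) versus a plain equality short-circuit. the short-circuit being argued for is trivial to state; structure names below are hypothetical:

```python
# The "null op" as described: if the buffer was rendered in the colorspace the
# target output actually has, the compositor passes pixels through untouched.
# No profile is inverted; no conversion runs at all.
def composite(buffer_cs_id, output_cs_id, pixels, transform):
    if buffer_cs_id == output_cs_id:
        return pixels                      # inputpixel == outputpixel
    return [transform(p) for p in pixels]  # remap only when the spaces differ

pixels = [(10, 20, 30), (40, 50, 60)]

# buffer tagged with colorspace 36, displayed on the output whose space is 36:
same = composite(36, 36, pixels, transform=lambda p: (0, 0, 0))
assert same is pixels                      # the very same data, no conversion
```

this is why the src == dest case costs nothing: it never enters the CMM, so the question of whether A2B followed by B2A inverts exactly never arises.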
Post by Graeme Gill
at startup a compositor would load the color profiles that were
configured/stored from any previous execution that it knows match the displays
That's not possible if there was no previous execution.
it could have a database of every monitor in the world with edid matches and get
it right first go. for embedded use the device maker would pre-configure the
device with the correct profile at the factory.
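the pre-shipped-database idea reduces to keying a profile store on EDID identity fields so a known monitor gets a sensible profile on first boot, with a plain fallback otherwise. a sketch with entirely illustrative database contents and field names:

```python
# Hypothetical EDID-keyed profile database: (manufacturer id, product code)
# maps to a shipped profile; unknown displays fall back to an assumed default.
profile_db = {
    ("ABC", 0x1234): "vendor-wide-gamut.icc",   # made-up entry for illustration
}

def profile_for_edid(manufacturer_id, product_code, fallback="sRGB built-in"):
    return profile_db.get((manufacturer_id, product_code), fallback)

assert profile_for_edid("ABC", 0x1234) == "vendor-wide-gamut.icc"
assert profile_for_edid("XYZ", 0x0001) == "sRGB built-in"
```

Graeme's counterpoint still applies: a database entry is a model-generic guess, not a measurement of the individual panel, so user-supplied profiles would override it.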
Post by Graeme Gill
it has. you mean at setup time - like when someone buys a new monitor...
There may be no profile initially, - one is only available after
the system is running, and the user is able to profile it.
nope. see above.
Post by Graeme Gill
i'd have the compositor use a null transform (do nothing to rgb values) UNTIL
it has calibration data for that screen. you don't have to "create" a null
transform. it's just listed in the colorspaces supported. it is the "do
nothing" colorspace.
There's no colorspace to "list" without a profile though.
yes. and profile can be pre-shipped, some database etc.
Post by Graeme Gill
let me roll back in time. long ago in a land far far away i was working
with x11 systems...
Hey - so did I. I was hacking on Labtam X terminal cfb code in the late
80's/early 90's, making sure we had the fastest X terminals in the
world :-) :-)
i remember our labtams... shame about that coax... :)
Post by Graeme Gill
i do not want to see this kind of thing happen again in wayland land.
that's why it matters to me. it leads to a frustrating user experience.
I'm not sure of the relevance. There are many color managed applications
written for other graphics systems, and while there are things can trigger
color management issues (OS X is somewhat notorious for issues caused
by Apples API changes), I can't see how the situation could be analogous
to the 1bpp/8bpp/24bpp X11 situation you illustrate above.
as long as it's not necessary to support it and applications run even if their
colors are a bit off.
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com
Graeme Gill
2016-12-19 06:01:50 UTC
Permalink
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
That's not a typical situation though, but nothing special would be
happening - a new profile may be installed by the user as well,
in which case an application should re-render to accommodate
the change.
the user at most should be interacting with compositor settings/tools to
configure a specific profile for a display (let's assume a fixed profile for
that screen), so a compositor tool to tell the compositor the profile changed
(e.g. they pressed a button on the monitor to change it). when they alter the
compositor, then the compositor can tell applications.
"Compositor tool" == color management application, such as ArgyllCMS "dispwin -I"!
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
yes. compositors right now work in display colorspace. they do no
conversions. eventually they SHOULD, to display correctly. to do so they
need a color profile for the display.
For enhanced color management yes. But core comes first, and is necessary
for many color critical applications, because the compositor will never
have the color transformations they require.
they will have to have them, or else not support that colorspace at all.
There's nothing "have to" about it. It's technically impossible for
a compositor to satisfy every applications color management requirements,
because they could be defined by the application, if it has sufficiently
specialized needs.

And I'm not sure what you mean by "supporting a colorspace".
A source colorspace is supported if a device profile exists for it that the CMM
knows how to handle. But that is not the complete story, because there
is then the details of how that profile is to be used, i.e. the
details of how it should be linked to the output profile.
Post by Carsten Haitzler (The Rasterman)
not in wayland. not acceptable. the app will never know which outputs it is on.
as i have said - could be wrapped around a sphere or a room full of bunnies
bouncing about across 8 screens.
It's also not acceptable that color be wrong. So how about trying to
come up with a solution, rather than saying "Wayland hasn't been
designed for that requirement up to now, so it can't be done" ?
Post by Carsten Haitzler (The Rasterman)
it can't if it doesn't know how a window or buffer is transformed or mapped to
screens.
So provide a mechanism for it to know!
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
The code is already there to do all that in color critical application.
then they get to pick a colorspace and render to that.
There is no colorspace to pick - a display has a characteristic
behavior. There is no choice in it, short of tweaking its controls,
altering its calibration, or changing one of its settings.
Post by Carsten Haitzler (The Rasterman)
attach that to the
buffer.
Different display devices have different characteristics, hence
different profiles. So one choice will not work for a surface that
spans multiple displays.
Post by Carsten Haitzler (The Rasterman)
compositor will do whatever it wants. e.g. if that buffer is on the
screen it matches it'd do no transform. maybe it does no transforms at all. and
simply hovers some indicator above the surface telling you if the surface
colorspace matches the screen's colorspace or not. maybe compositor disallows
the window to be moved off the screen that matches the colorspace. that can be
configured via the compositor.
The user will want to configure their windows as they see fit, including
moving images from one display to the other. There needs to be a mechanism
to allow color accurate display when this happens.
Post by Carsten Haitzler (The Rasterman)
and now you just hit "unacceptable in wayland land" space. which is why i tell
you that this is not acceptable.
Wrong color is unacceptable too. So how do you solve it ?
(And it's a solved problem on all other systems. Why do
you want to cripple Wayland ?)
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
It's damage, just like any other, and color critical users using
color critical applications will take "trails" over wrong color
anytime. No "trails" and wrong color = a system they can't use.
and totally unacceptable for wayland space. so a big fat no to that kind of
design.
So Wayland is a big fat no for anything that requires accurate color ?
Wow. And I thought Wayland was intended to ultimately replace X11.
Apparently not!
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Yes - exactly what I'm suggesting as core color management support.
but it doesnt have to tell you which screen it is nor tell you which screen
your window is on. it's an option of 1 of various colorspaces to be able to use
given a specific compositor.
I don't see how this can work, except for limited color accuracy requirements.
Pick a small colorspace with a common gamut, and the full gamut of all displays
can't be used. Pick a large gamut and clipping policy can't be managed.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
"Every pixel being perfect" except they are the wrong color, isn't perfect.
IF the compositor is doing the transforms of colorspaces, then you can achieve
perfect pixels.
No you can't, if the compositor doesn't have the information or algorithms
to do the transformation. This is absolutely no different from any
other situation - the compositor can't render stuff it doesn't have -
ultimately the application has to provide the content that appears on
screen, and the compositor is its agent in doing so. Color is part of
the content.
Post by Carsten Haitzler (The Rasterman)
reality is compositors do have to do transforms to get 601, 709 and the new 2020
colorspaces right for video anyway.
What's that got to do with anything ? They are just colorspaces, like any
number of other RGB colorspaces. (And yes, and I released a public domain
ICC profile for Rec 2020 in 2013).
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
That's the reality of how displays work. The user presses a button on the
front that says "emulate sRGB" or "native" or "Preset 1" or something else.
i mean control by applications. by wayland clients.
That's pretty rare - most displays use poorly documented or standardized
means of setting such options programmatically (EDID data lines or some have a USB).
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
And color critical users will scream bloody murder at anything related
to color that isn't under their control, if it affects the accuracy or scope
of the color workflow.
that's what compositor tools are for. the compositor settings are going to be
how you load/configure color profiles for a display. not via protocol.
Huh ? How are "compositor" tools meant to be written ? They need a means
of talking Wayland to do so !
Post by Carsten Haitzler (The Rasterman)
of course they are not exact. they are approximately srgb which is vastly
different from adobe rgb or wide gamut etc. no display other than insanely priced
professional displays is going to be EXACTLY sRGB or adobe rgb etc. it's going
to be a bit off and thus some color profile adjusting via either some table/LUT
with interpolation is going to be needed.
Right. Color management is needed! That's the topic at hand.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
That's up to the user. The user may have something else they can
assign if they are unable to profile the display (EDID derived
profile, model generic profile, etc.)
that's the compositor that deals with this internally. via its own tools and
settings. not some generic user tools or photoshop app etc.
What are "its own tools" ? You mean color management tools - the ones
used to calibrate, profile and configure the display! The ones I'd
like to be able to port to Wayland, if it had the actual protocol/API
support needed to do so !?
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
It's not about bit depth, it's about algorithms. No compositor can
do a transformation that it doesn't have an algorithm for.
and then it doesn't offer it if its display methodology is to transform to get
colors uniform. pretty simple.
So you basically saying that Wayland applications will have to have
color management that's crippled to the compositors color transformation
capability, even if the application itself knows exactly what to do ?
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
As I've explained a few times, an extension is needed to provide
the Output region information for each surface, as well as each
outputs color profile, as well as be able to set each Outputs
per channel VideoLUT tables for calibration.
that's not going to happen. it's a wayland design premise that applications
should not know this.
So you're condemning Wayland to be unsuitable for color critical
applications and users ?
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Let me give you an example. The application has a ProPhotoRGB buffer,
and wants to render it with image specific gamut mapping into the display
space. It has code and algorithms to 1) Gather the image gamut, 2) Compute
a gamut mapping from the image gamut to the Output Display gamut, invert
the A2B cLUT tables of the Output display profile to floating point
precision with gamut clipping performed in a specially weighted CIECAM02 space.
I'm not quite sure how the Wayland compositor is going to manage all that,
especially given that the application could tweak or change this in
every release.
app provides the above mapped data.
Huh ? You've just finished telling me that that can't be supported
by Wayland !
Post by Carsten Haitzler (The Rasterman)
it ends up being an R, G and B value for
one of the compositor supported colorspaces - e.g. one of them is monitor #3.
when on monitor #3 the compositor does passthrough of that buffer. it displays
with no changes (for example). but when it is on another screen the compositor
maps FROM the colorspace of that monitor TO another. yes. this can be lossy.
Not lossy - inaccurate, and not according to what the user is expecting.
Post by Carsten Haitzler (The Rasterman)
you can dither of course (and even do temporal dithering) to approximate
colors. it will look best on that display. it'll look "as good as possible" on
other displays.
That's not acceptable when color accuracy is expected. I was really
hoping that every pixel would be perfect, not compromised due to
the inability of Wayland to accommodate the application providing
the correct value.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
A pixel is not perfect if it is the wrong color.
flickering/trails is worse.
A user who is doing color critical work will disagree with you, and
they will be the ultimate arbiter of what is a viable system. They
care a lot less about such aesthetics, if it has to be a trade off
against color accuracy. (And I'm not convinced that there has
to be such a trade off).
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
A color profile can be quite complex, including scripted in the case of
something like an OCIO or ACES profile
<http://www.oscars.org/science-technology/sci-tech-projects/aces>.
But the device profile is only half the story - it does nothing on its
own, it needs to be linked with another device profile. And the flexibility
at that point is unlimited.
and the device end is all a compositor cares about.
Huh ? You've just spent considerable energy arguing that the compositor
has to do the conversion to device space, because only it is allowed
to know how a surface maps to Outputs!
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Bad way of doing it, for reasons I've pointed out multiple times.
Be explicit rather than rely on a trick - use a switch.
it's not a trick. it's a standard null op. src fmt == dest fmt. no conversion
needed.
It's not naturally a no-op. Pixel values can change if you actually
perform such a conversion.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Sure, but that's not an aspect I've mentioned. Ultimately the display
is RGB, irrespective of the encoding used to carry that information
to it.
without a colorspace attached to yuv buffers you cannot know how to transform
them to rgb (srgb) values correctly at all.
ignoring screen output colorspace
differences entirely here. just plain math to srgb space.
Sure, you need to know the decoding transform from YCbCr to RGB if that is
your workflow. But that conversion doesn't necessarily define the colorspace,
since the underlying RGB may or may not correspond to a standard colorspace.
It may be a device profile RGB instead, for instance.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Sure - complexity in managing encodings. But that has nothing
directly to do with color management, which is about colorspace
differences.
its the same thing - it's mapping a set of colors to another set in order to
get correct physical colors.
No it's not. An encoding change is just a mathematically different
representation of the same colorspace. There are mathematically
exact reversible equations for the transformation, but the
underlying colorimetry is defined to be the same. (i.e.
the primaries of a YCbCr space are that of the RGB primaries
it was derived from).
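Graeme's point, that an encoding change is an exact, reversible re-representation of the same colorimetry, can be shown numerically. A minimal sketch, assuming full-range floating-point BT.709 luma coefficients (no quantization or sub-sampling):

```python
# BT.709 RGB <-> YCbCr as an exact, reversible encoding change.
# Full-range, floating point: no information is lost, and the
# underlying primaries/colorimetry are unchanged.

KR, KB = 0.2126, 0.0722          # BT.709 luma coefficients
KG = 1.0 - KR - KB

def rgb_to_ycbcr(r, g, b):
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))
    cr = (r - y) / (2.0 * (1.0 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# Round-trip is exact up to floating point noise: same colorspace,
# different mathematical representation.
rgb = (0.25, 0.5, 0.75)
back = ycbcr_to_rgb(*rgb_to_ycbcr(*rgb))
assert all(abs(a - b) < 1e-12 for a, b in zip(rgb, back))
```

Contrast this with an ICC device-profile transform (like the xicclu round-trip quoted later in the thread), which is generally *not* an identity.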
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
To be fair, I'm not that aware of how the hardware presents itself
in regard to such things (data sheets seem hard to come by, and I
have gone looking for them in vain on a few occasions), but for many color
critical uses, it's not an immediate concern because such applications
are not going to be using yuv buffers. (Exception might be a video
editing/color grading application sending previews to a TV or
studio monitor - but all that is about encodings rather than
colorspaces.)
tell the people building millions of smart tv's that yuv color correctness
doesn't matter for that tv they are selling for $20,000+. :) hell even over
$100k. yuv color correctness matters just as much.
You've misinterpreted what I've said. I said the encodings
don't matter, because it's the underlying colorspaces that matter
with regard to color accuracy.

[ Are you really unaware that people use my color management
tools to create workflows that allow accurate emulation
of Video display standards ?

<http://www.avsforum.com/forum/139-display-calibration/1464890-eecolor-processor-argyllcms.html>
<http://www.avsforum.com/forum/139-display-calibration/1471169-madvr-argyllcms.html>
i.e. I have a clue.
]
Post by Carsten Haitzler (The Rasterman)
the compositor will have configuration/tools/data.
No it won't, if there are no Wayland protocols & API's to make that
possible!
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
null transform RGB
BT.709 YUV
Why is it reporting an encoding rather than a colorspace,
and why isn't it providing the two display profiles ?
it doesnt know them from a stick in the mud. it's a dumb compositor.
Huh ? This is what color management extensions provide !
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
null transform RGB
wide gamut RGB
sRGB
BT.709 YUV
BT.601 YUV
I don't see how extra encodings are useful without their
corresponding color spaces.
as i have said before. the colorspace is a color transform WITH adjusting
constants/matrix/LUT etc.
How can that be done without the colorspace profiles ?
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
In the dumb or core case, it has two display profiles, one
for the professional grade monitor with wide gamut rgb output,
no. the dumb case is "compositor has no idea. so i do rgb and yuv". it has no
clue otherwise.
That is the point. The user can use their color management tools
to profile their displays, register the ICC profiles with the
compositor, and then the color-aware application can then fetch the
output profiles for the display, and color manage the pixel
data given to the compositor. So color management works.
Existing color sensitive applications have all the code
to do that.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
and the other for the bargain basement bin display from walmart.
It can then transform source images in whatever colorspace they
are tagged with, into the appropriate display colorspace,
in the way the application and user needs it to be transformed.
it has no clue what the bargain basement bin monitor displays. the dumb
compositor isn't even going to try.
Nor should it for core support. But that doesn't stop
color management if the application is given the opportunity.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
If this is really the case, then the conclusion is that Wayland is
not suitable for serious applications, and certainly is not a replacement
for X11. I don't actually think that that is true.
multiple people have told you this by now. applications are not going to know
what screen(s) their buffers span. it's abstract. equating this to "then
wayland is not for serious use" is sticking your head in the sand because
wayland does not work like every other display system that is 20 years old.
So rather than finding reasons why it can't be done, how about
finding ways that it can be done ? Is the development
of Wayland really so inflexible and out of touch, that
when faced with the problem that one of the assumptions
made is incorrect, it can't adapt ?
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
How can there be any color management unless the
colorspaces of each display is recorded somewhere ???
as i have said several times. colorspace includes the details of the mapping.
e.g. a LUT or matrix or whatever blob sufficiently describes the mapping.
And as I've pointed out several times now, this isn't sufficient if
the source profile can't be described that way (ACES etc.), or
if the actual use of the profile (linking) requires more nuance
that the compositor knows how to do.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Because that's not actually true. Each real display has its own
response. That's why I write tools to profile displays, and why
people use those tools.
yes. i know its slightly off and correcting input can make it look right (to
some extent).
It's not slightly off - people spend a lot of time and money
making it right.
Post by Carsten Haitzler (The Rasterman)
i'm talking broadly that most displays are nominally sRGB, or
nominally adobe rgb, or something else.
Oh boy. Yes, if you don't care much about color. Definitely not if you are
doing something color critical, like photography, desktop publishing,
color grading, etc.
Post by Carsten Haitzler (The Rasterman)
there is a small bit of adjusting to do
to make them "close to perfect" and that's where your colorimeter plus doing an
actual profile of the display and recording the data helps. or maybe there is a
manufacturer provided file for that already...
All I can suggest is that you become a bit more familiar with
what color management entails. There is a lot of info out
there on the web. My small contribution is this:
<http://www.argyllcms.com/doc/ColorManagement.html>

Summary: there is a lot more going on than "a little bit of adjusting".
Post by Carsten Haitzler (The Rasterman)
no out-of-compositor tool will control those curves.
I hope you're wrong. Because no in-compositor tool has the smarts
needed to take on color management functions such as calibration
or profiling. Without color management tools having access to the curves,
I can state categorically that Wayland is unsuitable for color critical
applications, because it can't be calibrated. Shall I start spreading
the word ?
Post by Carsten Haitzler (The Rasterman)
likely they will provide
some calibration file that the compositor consumes and either adjust output or
monitor to correct it, or transforms pixel data to correct, OR it can pass that
data back to clients via the colorspace list.
Completely inadequate. If you don't really know very much about
color management, then you aren't in a position to make such
value judgments on the users behalf.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
No it's not - it already works on X11/OS X/MSWin.
not in wayland. bunny rabbits.
So when will there be a replacement for X11, if Wayland isn't it ?
Post by Carsten Haitzler (The Rasterman)
which is just a list of colorspaces (as i've said - they contain transform data
such as matrices, lut tables etc.).
I think I have a clue what an ICC profile contains.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
It can't be out of scope if the compositor is to do color management.
compositor adjusts from an output colorspace to another. source is not
important to it. it also may not adjust/map and just add some indicator.
Output to output transforms will make any but a single
display 2nd class. This is hardly "every pixel is perfect".
Post by Carsten Haitzler (The Rasterman)
no - i'm talking the device rgb colorspaces (yuv happens to be included too as
video colorspace definitions with yuv values map to specific defined rgb
output).
OK - that was your out.
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
The whole point is that each display has a different color response,
and incoming color should be transformed to compensate for these
differences. So each display (ideally) should have an associated
color profile.
but clients shouldn't know which output it's for or which output they are on.
You say that, but you can't back it up. I've explained at length and
rather repetitively why it is necessary, and you haven't
offered any technically feasible alternatives.

I'd be really interested to learn if everyone else working on Wayland
really thinks that accurate color doesn't matter.
Post by Carsten Haitzler (The Rasterman)
i'm not talking close. i'm talking "i used colorspace 36 in your list for this
buffer". and colorspace 36 is the exact profile of monitor B. and the window
now is displaying ONLY on monitor B. no transform needed at all for display
there.
OK - if there is a level of indirection (as suggested by Niels Ole Salscheider's
extension suggestion), then this is not ambiguous. The issue of
identifying which output profile is the one that a surface is going
to land on, remains. Without this the client can't know which
output profile enum to assign it to result in a null transform.
(And the bootstrap issue remains.)
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
That's what a null transform is though - a forward conversion
(Device space to PCS) followed by an inverse transform
(PCS to Device space).
which is inputpixel == outputpixel. nothing changes. no transform is done at
all.
Not true. For instance:

xicclu -ff -ir display_lut.icm
1.000000 1.000000 0.000000 [RGB] -> Lut -> 97.741833 -15.695131 91.298053 [Lab]

xicclu -fb -ir display_lut.icm
97.741833 -15.695131 91.298053 [Lab] -> Lut -> 0.991896 0.992829 0.084499 [RGB]
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
That's not possible if there was no previous execution.
it could have a database of every monitor in the world and edid matches and get
it right first go. for embedded use the device maker would pre-configure the
device with the correct profile at the factory.
Right, but you're performing contortions to avoid just saying what the
situation actually is - that there is no profile.
Post by Carsten Haitzler (The Rasterman)
yes. and profile can be pre-shipped, some database etc.
So you're assuming that people can only use systems that are
pre-packaged ? How do the packagers set them up initially ?
(i.e. there is a cascade of issues just to avoid adding one flag that
says what the situation actually is. This has flow on effects - for instance
if there is a flag, then the application can tell the user "this display
doesn't currently have a profile - the color is probably incorrect", rather
than not knowing what the situation is).
Post by Carsten Haitzler (The Rasterman)
as long as it's not necessary to support it and applications run even if their
colors are a bit off.
"Core" level is mainly information - the application is able to
use it if it chooses, and fall back to no color management,
or even fail if the author decides that is the best.
"Enhanced" level is a convenience for an application,
and a tool for the user to manage non color aware applications.
An application has similar choices to above - fall back
to core, fall back to none, or fail if the author decides
that the application has no value without color management.

Graeme Gill.
Carsten Haitzler (The Rasterman)
2016-12-19 08:27:20 UTC
Permalink
On Mon, 19 Dec 2016 17:01:50 +1100 Graeme Gill <***@argyllcms.com> said:

....

at this point i'm going to summarize this. this might be more helpful than
continuing point by point rebuttals as i sense that there's something missing
in this conversation:

summary of what i think should be the case (or roughly):

1. if colorspace/profile with buffer from client matches screen compositor
SHOULD do nothing with the pixels unless it has to (eg has to blend them with
content below or otherwise do effects etc. designated by its layout policy).
2. if null colorspace then compositors generally agree to ALSO not touch pixels
and transform them unless it has to blend them etc. like #1, but irrespective
of the output
3. if colorspace/profile does NOT match the display then compositor can either
a) transform colors to get them correct on another display or b) leave them
alone and just leave things as they are and perhaps let the user know that the
colors for that window/surface may be wrong on that display, or limit the
screens that display that buffer based on context.

how do clients select a colorspace? compositor sends available colorspaces to
client (maybe send per surface actually?). client picks one and renders content
given that colorspace/profile and presents that to the compositor for display.
compositor may update colorspaces that are available at any time.

colorspaces/profiles should be nominal colorspace with a whole bunch of numbers
detailing adjustments/mappings to do exact color correction for that
colorspace. if i have 2 sRGB monitors i may get 2 sRGB colorspaces each with
different transform constants/gamma lut tables etc.
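the client side of the above flow can be sketched in a few lines. this is purely illustrative pseudologic, not any real Wayland API: `pick_colorspace`, the colorspace names, and the "null" opt-out are all invented names standing in for whatever the protocol would actually advertise:

```python
# Client-side sketch of the selection scheme summarized above:
# the compositor advertises colorspaces, the client picks one and
# tags its buffer with it. All names here are illustrative only.

def pick_colorspace(available, content_space, null_space="null"):
    """Pick an advertised colorspace for a buffer.

    available     -- colorspace names the compositor advertised
    content_space -- what the client rendered its pixels in
    """
    if content_space in available:
        # Case 1: exact match -> compositor should pass pixels through
        # on the matching output.
        return content_space
    if null_space in available:
        # Case 2: client opts out of transforms entirely; compositor
        # leaves pixels alone (and may warn the user, per 3b).
        return null_space
    # Case 3: tag with something advertised and let the compositor
    # do a best-effort transform (per 3a).
    return available[0]

advertised = ["sRGB-monitor-1", "sRGB-monitor-2", "null"]
assert pick_colorspace(advertised, "sRGB-monitor-1") == "sRGB-monitor-1"
assert pick_colorspace(advertised, "AdobeRGB") == "null"
```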
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com
Benoit Gschwind
2017-01-13 21:39:03 UTC
Permalink
Hello,

It's very difficult to contribute to this discussion, but here is my
delta contribution.

I agree with the following proposal, with comments below.
Post by Carsten Haitzler (The Rasterman)
....
at this point i'm going to summarize this. this might be more helpful than
continuing point by point rebuttals as i sense that there's something missing
1. if colorspace/profile with buffer from client matches screen compositor
SHOULD do nothing with the pixels unless it has to (eg has to blend them with
content below or otherwise do effects etc. designated by its layout policy).
2. if null colorspace then compositors generally agree to ALSO not touch pixels
and transform them unless it has to blend them etc. like #1, but irrespective
of the output
3. if colorspace/profile does NOT match the display then compositor can either
a) transform colors to get them correct on another display or b) leave them
alone and just leave things as they are and perhaps let the user know that the
colors for that window/surface may be wrong on that display, or limit the
screens that display that buffer based on context.
how to clients select a colorspace? compositor sends available colorspaces to
client (maybe send per surface actually?). client picks one and renders content
given that colorspace/profile and presents that to the compositor for display.
compositor may update colorspaces that are available at any time.
colorspaces/profiles should be nominal colorspace with a whole bunch of numbers
detailing adjustments/mappings to do exact color correction for that
colorspace. if i have 2 sRGB monitors i may get 2 sRGB colorspaces each with
different transform constants/gamma lut tables etc.
First, this proposal does not cover the calibration and profiling
procedure. This is about 'non-calibration clients', to use the already
established vocabulary, and I agree with the proposition to treat this topic
independently of the calibration and profiling procedure.

The color space must be defined: a color space defines the link between the
pixel value and the corresponding physical light property
(intensity/luminance, the mix of wavelengths, ...?)[1]

To that proposal, I would add that the compositor should advertise the
preferred color space. The definition of 'preferred' color space is
implementation-defined (compositor-defined)[2]. For example a compositor
may set the preferred color space to the color space that gives the most
accurate result, or the most 'hardware' efficient color space
(null-transform), or simply the user's preferred color space[3]. It may
not match any monitor color space.

We should also agree that surfaces that do not include a color space
specification should be null-transformed, or maybe treated as sRGB or
something else (this is an open choice).

[1] I guess most color spaces are not defined as absolute light
intensity but with relative intensity between a given black point and
white point.

[2] I chose compositor-defined, because in the multi-monitor case it
would be difficult to give a definition.

[3] I chose only one preferred color space instead of one per monitor,
because a per-monitor color space implies that the client must know which
monitor it belongs to.
Graeme Gill
2016-12-21 01:39:21 UTC
Permalink
Daniel Stone wrote:

Hi Daniel,
That's one way of looking at it, yes. But no, the exact thing you're
describing will never occur for any reason. If you'd like to take a
step back and explain your reasoning, as well as the alternate
solutions you've discarded, then that's fine, but otherwise, with a
firm and resolute 'no, never' to this point, we're at a dead end.
I've gone through the technical reasons behind this quite a few
times so far in this discussion. I know color is a bit
esoteric to most people, and it's easy to not appreciate
some of the subtleties that none the less have an impact
in the real world. I'm really not sure how I can expand
on this much more, but I'm willing to continue trying
for a while, so can you point to where you want me
to start ?

Cheers,

Graeme Gill.
Niels Ole Salscheider
2016-12-10 09:01:40 UTC
Permalink
On Fri, Dec 9, 2016 at 5:29 AM, Niels Ole Salscheider
Post by Niels Ole Salscheider
Applications that care a bit more about color correction (but do not have
professional needs) could convert all their colors to the blending color
space of the compositor. I'd expect this blending color space to be
linear if the compositor cares about good colors.
This would have the advantage that the compositor does not have to do the
conversion "application output color space -> blending color space".
Actually it is quite likely that the compositor may use a "blending
space" that is *not* the preferred colorspace of the buffers. For
instance just using OpenGL sRGB sampling for the input and output
textures will cause a linear "blending space" while all the buffers
are using sRGB gamma. Even on the CPU it is reasonably efficient to do
this conversion as part of the composite, especially if you are
willing to have some inaccuracy in the effective power functions.
Maybe I should reword it then. The intention of allowing clients to query the
blending space was to tell applications the preferred color space of a surface.
That is, if they do any color management they should create surfaces with that
color space to minimize computing time.

Anyway, this preferred space should probably not be sRGB if the compositor
cares about accurate colors. This is because there are many monitors that have
wider gamuts and you would clip the colors at that point.
Graeme Gill
2016-12-13 06:53:58 UTC
Permalink
Post by Niels Ole Salscheider
Maybe I should reword it then. The intention of allowing to query the blending
space was to tell applications the preferred color space of a surface. That
is, if they do any color management they should create surfaces with that
color space to minimize computing time.
Sounds overly complex for a first cut.
Post by Niels Ole Salscheider
Anyway, this preferred space should probably not be sRGB if the compositor
cares about accurate colors. This is because there are many monitors that have
wider gamuts and you would clip the colors at that point.
See my previous suggestion - use a linearised light output display space
for compositing - it's simple, fast, and there are no gamut issues.
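What linearised-light compositing changes can be shown with a tiny numeric sketch, using the standard sRGB transfer functions (IEC 61966-2-1). Averaging sRGB-encoded values gives a different result than averaging in linear light and re-encoding:

```python
# sRGB transfer functions, used to show why blending in linear
# light differs from blending gamma-encoded values directly.

def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

a, b = 0.0, 1.0          # black and white, sRGB-encoded

naive = (a + b) / 2.0                      # blend encoded values: 0.5
linear = linear_to_srgb(
    (srgb_to_linear(a) + srgb_to_linear(b)) / 2.0
)                                          # blend in linear light: ~0.735

assert abs(naive - 0.5) < 1e-12
assert linear > naive    # linear-light blending gives a lighter mid-tone
```

A 50/50 black/white blend lands at roughly code value 0.735 in linear light versus 0.5 when blending encoded values, which is why the choice of blending space is visible in antialiasing and transparency.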

Graeme Gill.
Graeme Gill
2016-12-12 06:57:08 UTC
Permalink
Hi,
Post by Niels Ole Salscheider
Post by Graeme Gill
I'm not quite sure what you mean. Generally an application will have
specific reasons for wanting to do it's own color management - for
instance, perhaps it is previewing a CMYKOGlclm file, and wants to
treat out of gamut mapping and black point mapping in a particular way, etc.
I don't think the Wayland compositor is going to be expected to handle
CMYKOGlclm etc. input rasters, never mind all the requirements of
specialist application color management!
This is of course something that the client application has to do. It would
query the main output for its surface, do the conversions to that color space
and then attach the output color space to the surface.
Right. So a protocol for querying the profile of each output for its surface is
a base requirement.
Post by Niels Ole Salscheider
The compositor now must not touch the parts of the surface on the main output
(where the color spaces match). But it could still try to convert from the
color space of the main output to that of a secondary screen if the surface
covers two screens with different color profiles.
Not as a base requirement. The application needs to be able to
do its own color management, which means color managing for
every output the surface goes to. So the base requirement
has no rendering requirement for a composer - it's just
about signalling the required information to the client application.
Post by Niels Ole Salscheider
But then again most people that work with professional applications would not
make them cover multiple screens, I guess.
People using color managed applications expect color management to work as
best it can across multiple screens.
Post by Niels Ole Salscheider
Therefore I'm not opposed to adding
a flag that indicates that the application wants to disable color corrections
completely for that surface, independent of the output.
This is only something that becomes a question at the next
level, where there is an expectation that the compositor
has some degree of color management capability.
Post by Niels Ole Salscheider
Post by Graeme Gill
Which is not to say that compositor color management doesn't have its
place - it is ideal for applications that just want to use "RGB", and
not deal with specific display behavior.
Very simple applications would just keep the attached sRGB color space and
maybe place images on subsurfaces with the embedded color space from the image
attached.
That works only in the case that the compositor supports the image colorspaces.
This may well be the case for some applications (i.e. web browsers), but
I imagine it may not be desirable to insist that all compositors supporting
a color management capability support a full range of colorspaces (N-color in general).
Post by Niels Ole Salscheider
Applications that care a bit more about color correction (but do not have
professional needs) could convert all their colors to the blending color space
of the compositor. I'd expect this blending color space to be linear if the
compositor cares about good colors.
It's not clear to me that any application that is interested in good
color would trust it to blending operations.

I'd also imagine that there is a class of applications wishing to get the
compositor to deal with gross display differences (Wide gamut vs.
sRGB-like for instance) but is not interested enough in exactly what's going
on color wise to be aware of the distinctions between linear light space
compositing and other approaches.

Cheers,

Graeme Gill.
Carsten Haitzler (The Rasterman)
2016-12-13 03:12:03 UTC
Permalink
Post by Graeme Gill
Hi,
Post by Niels Ole Salscheider
Post by Graeme Gill
I'm not quite sure what you mean. Generally an application will have
specific reasons for wanting to do it's own color management - for
instance, perhaps it is previewing a CMYKOGlclm file, and wants to
treat out of gamut mapping and black point mapping in a particular way,
etc. I don't think the Wayland compositor is going to be expected to handle
CMYKOGlclm etc. input rasters, never mind all the requirements of
specialist application color management!
This is of course something that the client application has to do. It would
query the main output for its surface, do the conversions to that color
space and then attach the output color space to the surface.
Right. So a protocol for querying the profile of each output for its surface
is a base requirement.
i totally disagree. the compositor should simply provide available colorspaces
(and generally only provide those that hardware can do). what screen they apply
to is unimportant.

if the colorspace is native to that display or possible, the compositor will do
NO CONVERSION of your pixel data and display directly (and instead convert sRGB
data into that colorspace). if your surface spans 2 screens the compositor may
convert some to the colorspace of a monitor if it does not support that
colorspace. choose the colorspace (as a client) that matches your data best.
compositor will do a "best effort".

this way the client doesn't need to know about outputs, which outputs it spans etc.
and compositor will pick up the pieces. let me give some more complex examples:

compositor has a mirroring mode where it can mirror a window across multiple
screens. some screens can or cannot do color management. what do you do now?
compositor may display your window in a pager that is duplicated across
multiple screens and thus the content of that window should be rendered
correctly. what happens when the colorspace changes on the fly (you recalibrate
the screen or output driving hardware). you expect applications to directly
control this and have to respond to this and redraw content all the time?

this can be far simpler:

1. list of supported colorspaces (bonus points if flags say whether each is
native or emulated).
2. colorspace attached to buffer by client.

that's it.
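The two-step model described above can be sketched in C. Every name here is hypothetical (this is not a real Wayland protocol): the compositor advertises a list of colorspaces with native/emulated flags, and the client picks the best match for its content, falling back to sRGB as the compositor's "best effort" default.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the two-step model; these names do not come
 * from any real Wayland protocol. */
enum cs_flags { CS_NATIVE = 1u << 0, CS_EMULATED = 1u << 1 };

struct colorspace {
    const char *name;   /* advertised by the compositor */
    unsigned flags;     /* native to some output, or emulated */
};

/* Step 1: the compositor advertises what it supports. */
static const struct colorspace supported[] = {
    { "sRGB",     CS_NATIVE },
    { "AdobeRGB", CS_NATIVE },
    { "DCI-P3",   CS_EMULATED },
};

/* Step 2: the client picks the colorspace matching its data and
 * attaches it to the buffer; unknown spaces fall back to sRGB. */
static const struct colorspace *pick(const char *wanted)
{
    for (size_t i = 0; i < sizeof supported / sizeof supported[0]; i++)
        if (strcmp(supported[i].name, wanted) == 0)
            return &supported[i];
    return &supported[0]; /* compositor's "best effort" fallback */
}
```

The point of the sketch is that the client never needs to know which output its surface lands on; it only negotiates against the advertised list.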
Post by Graeme Gill
Post by Niels Ole Salscheider
The compositor now must not touch the parts of the surface on the main
output (where the color spaces match). But it could still try to convert
from the color space of the main output to that of a secondary screen if
the surface covers two screens with different color profiles.
Not as a base requirement. The application needs to be able to
do its own color management, which means color managing for
every output the surface goes to. So the base requirement
has no rendering requirement for a compositor - it's just
about signalling the required information to the client application.
Post by Niels Ole Salscheider
But then again most people that work with professional applications would
not make them cover multiple screens, I guess.
People using color managed applications expect color management to work as
best it can across multiple screens.
Post by Niels Ole Salscheider
Therefore I'm not opposed to adding
a flag that indicates that the application wants to disable color
corrections completely for that surface, independent of the output.
This is only something that becomes a question at the next
level, where there is an expectation that the compositor
has some degree of color management capability.
Post by Niels Ole Salscheider
Post by Graeme Gill
Which is not to say that compositor color management doesn't have its
place - it is ideal for applications that just want to use "RGB", and
not deal with specific display behavior.
Very simple applications would just keep the attached sRGB color space and
maybe place images on subsurfaces with the embedded color space from the
image attached.
That works only in the case that the compositor supports the image colorspaces.
This may well be the case for some applications (e.g. web browsers), but
I imagine it may not be desirable to insist that all compositors supporting
a color management capability support a full range of colorspaces (N-color in general).
Post by Niels Ole Salscheider
Applications that care a bit more about color correction (but do not have
professional needs) could convert all their colors to the blending color
space of the compositor. I'd expect this blending color space to be linear
if the compositor cares about good colors.
It's not clear to me that any application that is interested in good
color would trust the compositor with its blending operations.
I'd also imagine that there is a class of applications wishing to get the
compositor to deal with gross display differences (Wide gamut vs.
sRGB-like for instance) but is not interested enough in exactly what's going
on color wise to be aware of the distinctions between linear light space
compositing and other approaches.
Cheers,
Graeme Gill.
_______________________________________________
wayland-devel mailing list
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com
Graeme Gill
2016-12-14 07:49:14 UTC
Permalink
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Right. So a protocol for querying the profile of each output for its surface
is a base requirement.
i totally disagree. the compositor should simply provide available colorspaces
(and generally only provide those that hardware can do). what screen they apply
to is unimportant.
Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color-critical application's requirements,
so color management without any participation of a compositor
is a core requirement.
Post by Carsten Haitzler (The Rasterman)
if the colorspace is native to that display or possible, the compositor will do
NO CONVERSION of your pixel data and display directly (and instead convert sRGB
data into that colorspace).
Relying on an artificial side effect (the so called "null color transform")
to implement the ability to directly control what is displayed, is a poor
approach, as I've explained at length previously.
Post by Carsten Haitzler (The Rasterman)
if your surface spans 2 screens the compositor may
convert some to the colorspace of a monitor if it does not support that
colorspace. choose the colorspace (as a client) that matches your data best.
compositor will do a "best effort".
No compositor should be involved for core support. The application
should be able to render appropriately to each portion of the span.
Post by Carsten Haitzler (The Rasterman)
this way client doesnt need to know about outputs, which outputs it spans etc.
That only works if the client doesn't care about color management very much -
i.e. it's not a color critical application. I'd hope that the intended use of
Wayland is wider in scope than that.
Post by Carsten Haitzler (The Rasterman)
compositor has a mirroring mode where it can mirror a window across multiple
screens.
Sure, and in that case the user has a choice about which screen is
properly color managed. Nothing new there - the same currently
applies on X11, OS X, MSWin. Anyone doing color critical work
will not run in such modes, or will just use the color managed screen.
Post by Carsten Haitzler (The Rasterman)
some screens can or cannot do color management.
Nothing to do with screens - core color management is up to
the application, and all it needs is to know the display profile.
Post by Carsten Haitzler (The Rasterman)
what happens when the colorspace changes on the fly (you recalbrate
the screen or output driving hardware). you expect applications to directly
control this and have to respond to this and redraw content all the time?
Yep, same as any other sort of re-rendering event (i.e. exactly what happens with
current systems - nothing new here.)
Post by Carsten Haitzler (The Rasterman)
1. list of supported colorspaces (bonus points if flags say whether each is
native or emulated).
2. colorspace attached to buffer by client.
that's it.
If you don't care so much about color, yes. i.e. this is
what I call "Enhanced" color management, rather than core.
It doesn't have to be as flexible or as accurate, but it has
the benefit of being easy to use for applications that don't care
as much, or currently aren't color managed at all.

Graeme Gill.
Carsten Haitzler (The Rasterman)
2016-12-14 08:43:05 UTC
Permalink
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Right. So a protocol for querying the profile of each output for its
surface is a base requirement.
i totally disagree. the compositor should simply provide available
colorspaces (and generally only provide those that hardware can do). what
screen they apply to is unimportant.
Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color-critical application's requirements,
so color management without any participation of a compositor
is a core requirement.
of course it can. client provides 30bit (10bit per rgb) buffers for example and
compositor can remap from the provided colorspace for that buffer to the real
display colorspace.
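The 30-bit buffer case can be made concrete. A minimal sketch of packing 10-bit channels into a 32-bit word in the layout of DRM_FORMAT_XRGB2101010 (a real DRM fourcc format: bits [31:30] unused, [29:20] R, [19:10] G, [9:0] B); the helper name is mine.

```c
#include <assert.h>
#include <stdint.h>

/* Pack 10-bit-per-channel RGB into one 32-bit word, in the layout of
 * DRM_FORMAT_XRGB2101010: x:2 | R:10 | G:10 | B:10. */
static uint32_t pack_xrgb2101010(uint16_t r, uint16_t g, uint16_t b)
{
    return ((uint32_t)(r & 0x3ff) << 20) |
           ((uint32_t)(g & 0x3ff) << 10) |
           ((uint32_t)(b & 0x3ff));
}
```

With 10 bits per channel the compositor has headroom to remap between colorspaces without the banding an 8-bit round trip would introduce.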
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
if the colorspace is native to that display or possible, the compositor
will do NO CONVERSION of your pixel data and display directly (and instead
convert sRGB data into that colorspace).
Relying on an artificial side effect (the so called "null color transform")
to implement the ability to directly control what is displayed, is a poor
approach, as I've explained at length previously.
but that is EXACTLY what you have detailed to rely on for color managed
applications. for core color management you say that the client knows the
colorspace/profile/mappings of the monitor and renders appropriately and
expects its pixel values to be presented 1:1 without remapping on the screen
because it knows the colorspace...
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
if your surface spans 2 screens the compositor may
convert some to the colorspace of a monitor if it does not support that
colorspace. choose the colorspace (as a client) that matches your data best.
compositor will do a "best effort".
No compositor should be involved for core support. The application
should be able to render appropriately to each portion of the span.
then no need for any extension. :) compositor HAS to be involved to at least
tell you the colorspace of the monitor... as the screen is its resource.
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
this way client doesnt need to know about outputs, which outputs it spans
etc. and compositor will pick up the pieces. let me give some more complex
That only works if the client doesn't care about color management very much -
i.e. it's not a color critical application. I'd hope that the intended use of
Wayland is wider in scope than that.
how does it NOT work? let me give a really simple version of this.

you have a YUV buffer. some screens can display yuv, some cannot. you want to
know which screens support yuv and know where your surface is mapped to which
screens so you can render some of your buffer (some regions) in yuv and some
in rgb (i'm assuming packed YUVxYUVxYUVx and RGBxRGBxRGBx layout here for
example)... you wish to move all color correct rendering, clipping that correct
(yuv vs rgb) rendering client-side and have the compositor just not care.

this leads to the artifacts i was mentioning. just this one will be a LOT more
obvious.
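The per-region conversion the client would be forced to do itself can be sketched as follows. This assumes full-range BT.601 coefficients purely for illustration; the correct matrix depends on how the video was actually encoded.

```c
#include <assert.h>
#include <math.h>

/* Full-range BT.601 YUV -> RGB: the kind of conversion a client would
 * have to apply per region if one screen scans out YUV directly and
 * another needs RGB. Coefficients are illustrative (BT.601). */
static void yuv_to_rgb(double y, double u, double v,
                       double *r, double *g, double *b)
{
    *r = y + 1.402    * (v - 128.0);
    *g = y - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0);
    *b = y + 1.772    * (u - 128.0);
}
```

Clipping this conversion to the exact pixel boundary between two screens, client-side, is where the visible artifacts come from.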
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
compositor has a mirroring mode where it can mirror a window across multiple
screens.
Sure, and in that case the user has a choice about which screen is
properly color managed. Nothing new there - the same currently
applies on X11, OS X, MSWin. Anyone doing color critical work
will not run in such modes, or will just use the color managed screen.
the point of wayland is to be "every frame is perfect". this breaks that.
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
some screens can or cannot do color management.
Nothing to do with screens - core color management is up to
the application, and all it needs is to know the display profile.
i mean they are able to display a wider gamut of color beyond the limited sRGB
range that is common.
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
what happens when the colorspace changes on the fly (you recalibrate
the screen or output driving hardware). you expect applications to directly
control this and have to respond to this and redraw content all the time?
Yep, same as any other sort of re-rendering event (i.e. exactly what happens
with current systems - nothing new here.)
and this leads to imperfect frames.
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
1. list of supported colorspaces (bonus points if flags say whether each is
native or emulated).
2. colorspace attached to buffer by client.
that's it.
If you don't care so much about color, yes. i.e. this is
what I call "Enhanced" color management, rather than core.
It doesn't have to be as flexible or as accurate, but it has
the benefit of being easy to use for applications that don't care
as much, or currently aren't color managed at all.
how not? a colorspace/profile can be a full transform with r/g/b points in
space... not just a simple enum with only fixed values (well that's how i'm
imagining it). in this case the api's tell the client the available colorspaces
and the client chooses the best. it would have NO CHOICE in your core
management anyway. it'd be stuck with that colorspace and have to render
accordingly, which is the exact same thing you are proposing for core. provide
a list of 1 colorspace - the monitor native one. application renders
accordingly. if colorspace of rendered buffer == colorspace of target screen,
compositor doesn't touch pixel values. if the buffer is sRGB (or well, current
wayland rgb buffers) it might remap sRGB to this output colorspace OR it might
just do nothing and also leave it alone, thus doing exactly what you propose
for core color management.
Pekka Paalanen
2016-12-15 11:48:04 UTC
Permalink
On Wed, 14 Dec 2016 18:49:14 +1100
Post by Graeme Gill
Post by Carsten Haitzler (The Rasterman)
Post by Graeme Gill
Right. So a protocol for querying the profile of each output for its surface
is a base requirement.
i totally disagree. the compositor should simply provide available colorspaces
(and generally only provide those that hardware can do). what screen they apply
to is unimportant.
Please read my earlier posts. No (sane) compositor can implement CMM
capabilities to a color critical applications requirements,
so color management without any participation of a compositor
is a core requirement.
Hi,

that is a very strange requirement, and architecturally it is a big
step backwards.

Applications are never in direct control of the display hardware. If
they need to be, they need to be written directly on DRM/KMS, not
Wayland. That way you truly get what you want: no compositor to mess
with your colors, and no fighting between different apps demanding
different display configurations. If you really have applications this
color critical, then you'd better do exactly this or provide/audit the
compositor yourself.

In every other case, there is a display server *with a compositor*
between the application and the display. The compositor will
necessarily copy and translate pixel values from one format to another,
if not from color space to another.

The compositor is also in control of the display hardware: color lookup
tables, color transformation matrices, hardware overlays and direct
scanout of client buffers at opportunity. Scanout hardware can often do
color transformations on its own, with a method that might be unknown
to anyone but the vendor. All of these depend on the hardware whether
and which of them are available. What you are proposing necessarily
leads to moving the control of all of these into Wayland clients. It
may have been so with X11 where any client can change any display
setting any time it wants, but I cannot see that happening on Wayland.

If the pixels go through a display server, you have no choice but to
trust the display server to do the right thing. So the real question
is: "How do you let the compositor do the right thing?"
Not: "How do I bypass the compositor?"

You cannot take the compositor out of the equation unless you don't
have a compositor at all.


Here are some other points:

- Blending will only be an issue if a color-managed window has
non-opaque pixels. If the application cares about the end result, it
should not have non-opaque pixels, because not only can it not know
*how* they will be blended, it also cannot know *what* they will be
blended with.

- You do not know which parts of a window are shown on which outputs.
Some parts may be shown on multiple outputs at the same time. There
is no provision for clients to provide a buffer separately for each
output (you could add that as an extension). Therefore the same
buffer content must be applicable to any and all outputs in the
system. Unless all outputs have identical color profiles, this
necessarily means the compositor must convert while compositing.

- Different compositors will always have different levels of color
management support. You might want the color management protocol
extension(s) to implicitly or explicitly indicate the level, perhaps
even validation/auditing certificates. (Think about, say,
professionally calibrated workstations where all the hardware, the
drivers, the compositor and settings have been verified and locked
down.)

- Calibration (i.e. modifying compositor configuration) must
necessarily be a separate interface from the ones used by
applications that only want to present color-managed content.
Conflating it all into a single interface will cause problems with
privilege separation and encourage usage patterns where different
applications cannot work at the same time because they will be
fighting over who gets to set the current configuration.


Thanks,
pq
Pekka Paalanen
2016-12-20 09:24:57 UTC
Permalink
On Tue, 20 Dec 2016 15:17:31 +1100
Since Wayland doesn't currently implement any color management support,
I'm not following how it can be a step backwards.
It is step towards "clients are in control and the compositor does not
know what is happening". That is, a step towards X11 design and away
from Wayland design principles.
It depends on what you mean by "control". Wayland can't operate
without applications providing the content, and color is part of
content.
When I was talking about configuration, your reply talks about content
delivery. We are continuously talking past each other. I talk about one
thing, you deliberately interpret me talking about the other thing, so
that you can make me look silly. I'm tired of correcting that in every
turn.

Clients control what they send to the compositor: that is content
delivery.

Configuration controls compositor's global settings, like CRTC CLUT
values.

This difference is fundamental to any kind of Wayland protocol design.
And it is DIFFERENT from X11 on purpose.
Wayland is useless unless there is a way to manage it. Which means
that it should have channels for administrative tools to configure it.
Indeed: Administrative tools. These are part of the compositor or the
desktop environment distribution. They have the privileges to configure
the compositor.
And how are management functions that can't reasonably be contained
in the compositor handled then ?
A compositor is _not_ going to come with display color calibration
and profiling functionality - it's too complex and specialized.
The user will insist on being able to choose such tools anyway.
Why would you not let compositors use the CMS libraries people have
developed?

Or are all today's CMS implementations so intimately written to the X11
model that they simply cannot work at all? That would be very sad, but
also something that requires work on them, not only on Wayland and
compositors.

Maybe the CMS implementations should be primarily used by the
compositors rather than applications then.
However, in this thread we are talking about arbitrary applications,
not administrative tools.
There is no difference, apart from any privilege needed.
This is where we disagree, and that prevents us from making any
progress.


Thanks,
pq
Kai-Uwe
2016-12-21 20:45:24 UTC
Permalink
Hello Pekka,
On Wed, 21 Dec 2016 10:30:50 +0000
I think that's basically correct, argyllcms doesn't have any header
files or shared libraries. When using it to generate color profiles
for things like printers from gnome-color-manager I have to spawn the
binaries themselves (and only in a VT...) and then scrape the output.
https://git.gnome.org/browse/gnome-color-manager/tree/src/gcm-calibrate-argyll.c#n273
Oh, that's a huge surprise to me, being accustomed to open source.
At least the Oyranos CMS API is certainly able to integrate ArgyllCMS as
a CMM, despite Argyll providing only CLI interfaces. Oyranos CMS
provides libraries, tools. And GUIs and (X11) compositors use those
Oyranos libs.
Yes! The CMS needs to provide the API that all compositors could use.
Sorry about the typo, I meant "an API", not "the API". We're not
Khronos, indeed.
Just like programs can choose their toolkits, compositors should be
able to choose their color management providers for calibration and
color processing. We would still have the public and generic Wayland
extension for providing color-managed content, so it would not affect
normal application compatibility.
As a CMS author, who is much involved with the KDE community and working
as well for other DEs, I appreciate this openness. I wish to integrate
Oyranos CMS with Wayland/Weston.

colord is considered very Gnome centric.

Kai-Uwe
Daniel Stone
2016-12-21 10:42:41 UTC
Permalink
Hey Richard,
so "just set the CLUT" is already an outdated approach
This is another point: We're all talking about the
least-common-denominator approach of setting the RGB 8-bit ramps on
the logic it's the only way to set the white point without the
overhead of a shader lookup. Most modern hardware actually supports
some kind of *matrix* and LUT on the crtc output itself, although
there is no common abstract interface that's provided by libdrm, yet.
Not quite correct; the split-gamma-plus-matrix approach is already
supported by KMS properties. It doesn't quite expose (or at least
enumerate; it might be there if you poke sharply enough) the full
spectrum of complexity that I just outlined in reply to Mattias, but
it'll get there I'm sure ...

Cheers,
Daniel
Chris Murphy
2016-12-21 00:14:20 UTC
Permalink
But from the calibration application writer's point of view,
to write such an application it's very highly desirable
(i.e. may make the difference between such an application
existing or not) that there be a standard API to operate this
graphics system element, that is common and supported by the
graphics display system that it is intended to "configure".
What's the path for that, if it's not a Wayland protocol extension.
Is there a standard Wayland configuration protocol that can be extended ?
Doesn't colord already push vcgt tag contents to the video card? On
macOS, this is done through the display manager, no program directly
modifies the video card LUT. I'm pretty sure it's the same on Windows
in the modern era - it used to be every display calibration program
came with a helper program that would launch at startup time and apply
the calibration curve to the video card LUT.
--
Chris Murphy
Kai-Uwe
2016-12-21 20:20:16 UTC
Permalink
Hello Richard,
I suggest that compositors use the CMS you have spent so much time and
effort perfecting, and you start with the assumption that they will not
or cannot do so. Why?
I think lcms2 is fine to use; it's widely used in other projects,
tested, and already optionally used in weston.
For the simple shoot-and-forget interface for client color space
tagging, which Niels is thankfully working on, a CMS like littleCMS is
well established, maintained, reviewed, audited and supported.

However, as it is used in Weston, it has clipping and other not-so-good
side effects, but it is certainly valuable on its own for base color
correction. A different CMS can go beyond that and add more color
management features to the benefit of the whole desktop, not only some
esoteric clients, which support device links or other more powerful
expressions from their own GUIs.

Kai-Uwe
Graeme Gill
2017-01-04 22:05:03 UTC
Permalink
On Wed, 21 Dec 2016 10:30:50 +0000
I think that's basically correct, argyllcms doesn't have any header
files or shared libraries. When using it to generate color profiles
for things like printers from gnome-color-manager I have to spawn the
binaries themselves (and only in a VT...) and then scrape the output.
https://git.gnome.org/browse/gnome-color-manager/tree/src/gcm-calibrate-argyll.c#n273
Oh, that's a huge surprise to me, being accustomed to open source.
What connection do you expect between open source and a set of
applications somehow _having_ to be structured as a library ??

(Yes, of course my Open Source ArgyllCMS Color Management toolset
has a multitude of headers and libraries that it uses to implement
its functionality. But being a set of applications, it isn't necessarily
intended that other applications link to all that.)
Just like programs can choose their toolkits, compositors should be
able to choose their color management providers for calibration and
color processing.
You missed profiling. CMM yes, but calibration and profiling and complex
linking, etc. etc. - no - they are user applications. Users
want to be able to pick and choose between such applications, not
be locked into a single implementation. This fosters
competition and innovation, just like any other application
area.

Graeme Gill.
Graeme Gill
2017-01-05 01:51:35 UTC
Permalink
Pekka Paalanen wrote:

Hi,
I suggest that compositors use the CMS you have spent so much time and
effort perfecting, and you start with the assumption that they will not
or cannot do so. Why?
We need to be clear on terminology here - "CMS" can have a few
meanings, depending on the intent of the user.

CMS in the sense of the tools I've written is a suite
of applications. Unless my understanding of what
a compositor is, is very wide of the mark, I really
don't think you want to incorporate something like
a quarter of a million lines of extra source code that
is organized into a flexible suite of inter-operating
tools that work together, into a compositor.

(CMS can also loosely refer to things like a CMM
(Color Management Module). A CMM typically has much better defined
functionality and smaller scope, and can certainly be incorporated in a
compositor - i.e. lcms).

As an application writer, I don't think incorporation
into a compositor is a reasonable proposition either - the very
purpose of an operating system is to be a good
environment to support applications, so to propose
re-implementing the applications within some sort
of quasi-non-standard operating environment
such as a compositor, has nothing attractive about it.
(Take that to the power N if there are N different compositors.)

To have the compositor call out to the tools rather
than incorporate them is perfectly possible
(Richard does some of this in colord I think), but doesn't
solve the problem of how the tools can do their job
if the graphics sub-system (i.e OS) doesn't support the facilities
they need to access the graphics hardware.
Are you implying that the CMS you worked on so hard is impossible to use
from a compositor?
As incorporated code - yes - I don't think it is practical.
No, I do not mean forking the CMS and shoving the code into a compositor
project.
I mean libraries, services, daemons... does your CMS really not have
any such interfaces?
No - it's an application, so its organization is private - it doesn't
aim to provide services to anything except its own components. But even
if it was so organized, how does this solve the problem of those services
not being able to access the hardware ?
Why would the compositor or desktop environment want to re-implement
the applications UI's ?
Answer - this seems rather unlikely to be a viable path. It's
not reasonable - color management applications need
to be independent applications to be practical, and the
alternative is fairly simple by comparison :-
provide the API's needed as standard.
Yes! The CMS needs to provide the API that all compositors could use.
That's back to front. The CMS needs the APIs that the Compositor provides,
since the Compositor is the operating system element that
is providing a standard and coherent API for applications
to access the display hardware.
Even in the X11 case, simply shoving something into the CLUT directly
did not work.
Terminology confusion - in ICC speak, cLUT is a multi-dimensional Color lookup
table (aka "3DLUT" in the 3D case, aka "mLUT"). The terms
I'm familiar with for the single dimensional lookup tables
in the (used to be) Video D/A are either "RAMDAC" (From the Brooktree HW) or
"VideoLUT", the latter I thought being the generally adopted term.
You can have multiple apps messing with the CLUT (e.g.
redshift, right?) without knowing about each other, leaving the user
clueless of the state of the display.
There's a big difference between "did not work" and "could be
improved". In practice it all works well enough, because few applications
mess with the VideoLUTs, and the user can fix it if there is
a problem, by not installing or enabling applications that clash.
This is little different than the user dealing with other
clashes, such as which application is currently talking
to a serial port, which application is currently full screen, etc.

And "can be made to work" is infinitely better than
"impossible, because the application environment doesn't give access".

Could it be improved to better manage or resolve such clashes - yes!

But for a user who wants color management, there is no clash - they
don't run things like "redshift" or applications that alter the brightness
of the display, because that will mess up their display profile
and hence color accuracy.
I believe modern display hardware
starts with a CLUT-matrix-CLUT pipeline and continuously invents more
ways, so "just set the CLUT" is already an outdated approach, not to
mention that each hardware plane might have its own pipeline setup
This is a complete distraction.

1) Saying "there's more complicated stuff that could be standardized
in the future, so let's not implement things that people currently
depend on now" is not serving the users at all, and could go on
indefinitely.

2) CLUT-matrix-CLUT has no application to display calibration. It
has possible uses I'm sure, for instance you could use it to make the display
do a rough emulation of some other colorspace, the accuracy of which
depends on how additive the native display response is (typically
you can't assume more than good additivity, which is one reason why
cLUT based ICC profiles are often used), as well as having no real
control over gamut clipping :- but if a display is connected to
a general purpose operating system which aims to make the
full capabilities of the display available to applications,
the matrix and second 1D LUTs are useless. The reason is
that any matrix which is non-unity can only make the available
gamut smaller, or present a gamut boundary that is unexpectedly
small (i.e. a gamut boundary that doesn't coincide with the
apparent device gamut boundary).
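Graeme's gamut argument can be illustrated numerically. The matrices below are made up for illustration: a row-normalized "mixing" matrix maps the corners of the device RGB cube strictly inside it (the gamut shrinks), while a matrix that tries to expand pushes saturated values outside [0,1] and forces clipping, so a non-unity matrix never gains gamut.

```c
#include <assert.h>

/* Apply a 3x3 matrix (row-major) to an RGB triple in device space,
 * without clipping, so we can see where values land. */
static void apply(const double m[9], const double in[3], double out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = m[3*i] * in[0] + m[3*i+1] * in[1] + m[3*i+2] * in[2];
}

/* Hypothetical "emulate a smaller gamut" mix: rows sum to 1, so pure
 * primaries no longer reach the cube corners. */
static const double shrink[9] = { 0.8, 0.1, 0.1,
                                  0.1, 0.8, 0.1,
                                  0.1, 0.1, 0.8 };

/* Roughly the opposite direction: saturated inputs leave [0,1] and
 * the hardware has no choice but to clip them. */
static const double expand[9] = { 1.2, -0.1, -0.1,
                                 -0.1,  1.2, -0.1,
                                 -0.1, -0.1,  1.2 };
```

Pure red through `shrink` becomes (0.8, 0.1, 0.1): still inside the cube, but the corner is now unreachable, which is the "unexpectedly small gamut boundary" the paragraph above describes.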
The situation on Wayland is not that different. You still need the
{ display server, window manager, compositor } process to work *with*
CMS to produce best results, and it even offers significant benefits
like choosing a more appropriate blending space or automatic and
GPU-accelerated color mapping, plus preventing fighting over control.
All those things may be desirable, yes.
You don't have X11 for communicating between the compositor and CMS,
now you need a library interface. It's not something one wants Wayland
for, because Wayland is IPC, implying the two parts are in separate
processes.
Who is standardizing and providing that library interface in
a Wayland based system, so that applications can use Wayland
instead of X11 ?

Why do you think that the Compositor and CMS are not separate
processes ? It is certainly the case if the CMS is a set of independent
applications.

Why is it the compositors job to provide the API (Wayland) for accessing
just some parts of the display hardware, and not all of it that
applications need access to ?
If you design the compositor-facing interface as a library ABI in your
CMS, then you have the power to design it right and tell compositor
developers how to do the right thing by using it.
If Wayland ends up being a remote protocol, how does this work
to a remote display server ?
This, added with the power of adding implementation-specific Wayland
extensions at runtime(*) from within the CMS, should let you implement
much much more than you ever could if you stuck with the X11
architecture "I need to control hardware directly from a CMS tool that
is a Wayland client and the compositor has to stay out of the way".
Realistically, color management isn't crying out for "much much more",
just "is possible to implement current functionality" would be
a good place to start. (And I don't mean to imply by
this that things should be implemented in exactly the same
way as existing systems.) This is essentially because this is
all an application centered activity for color sensitive tasks,
so the required display server involvement is minimal.
As I understand it, Wayland already provides the ability
for applications to make use of the GPU in rendering to
their buffers, so taking advantage of this for color
management doesn't need any further enabling by the
display server. Lots of more basic things are missing though.

A display server providing some level of default
color management or basic color management for
otherwise non-color managed applications would
certainly be a good thing as well, and a reason to
incorporate at least some basic CMM capabilities in
the compositor.

Having a standardized means of coordinating the
display profile would make it possible for the compositor
to use that information to provide better blending and
anti-aliasing behavior at some cost to performance,
although color sensitive users probably won't want
to make use of transparency.

Regards,

Graeme Gill.
Daniel Stone
2016-12-21 13:36:22 UTC
Permalink
Hi Graeme,
I recommend you re-read Pekka's emails a couple of times until they
sink in. It is clear that you are extremely experienced in colour
management;
I think that applies in the other direction as well.
I would definitely benefit from seeing an expansion of some of the
terminology that is thrown around which can be subtly misleading. But
yes, as said, I've got some Christmas reading to do.
If you reduce Wayland's capabilities to that of X11, then some of
your suggestions may be less wildly unsuitable, but on the other hand,
we might as well be using X11 if that were the case.
It's not my intent to do so, but to contrast the gap in
capabilities that currently exists, and would like
to see filled in a workman like manner.
Per my reply to Chris, some of those gaps can be filled in ways which
are not obvious when coming from other environments. Trying to solve
them in the exact same way would be bashing a square peg into a round
hole, so it can be useful to step back and have a think about
first-order requirements, rather than just jumping directly to deeply
specific solutions.
Let's say that the target here is a user for whom absolute colour
correctness is so critical, that they have deeply specialised
applications and hardware, monitors most people have never heard of,
secondary measurement and calibration hardware, and have sunk a great
deal of time into specifically configuring this.
For such a user to pick a compositor which was actively bad at colour
management would be rank stupidity.
If you don't trust the compositor you're using to do the thing which
is above all else most important to you, maybe pick something else?
If there is no capability or standardization in this area for Wayland based
systems, I don't see how such a user has any other choice than to pick a
"compositor" called Microsoft Windows or Apple OS X. That would be a shame.
Yes. No-one in this thread wants to see a solution which precludes
fully correct and accurate colour management.
You're painting a false dichotomy where either people use external
tools to allow colour-critical users to compensate for the damage
their known-bad compositors do, or there is no chance of ever having
colour management ever. If only there was a third option.
I'm not sure what you mean by a "known-bad compositor",
nor do I really understand how this paints a false dichotomy.
By 'known-bad compositor', I speak of a compositor which, when
processing information from clients, does so destructively. Such that
the final output from the compositor (to any connected display, or
screenshot/screencast, be it via means of direct scanout from a plane,
or intermediate GPU rendering, or an intermediate hardware composition
through a dedicated block, or software composition, or, or), produces
incorrect results.

A lot of the discussion on this thread seems aimed at ensuring
compositors are not involved in colour management and colour-critical
applications are forced to use external systems to route around them
in order to achieve the desired colour result. This is a
self-fulfilling prophecy.
The external tools are a means of creating the data to
allow for managing and compensating for their particular displays.
A competent application to create this data is non-trivial.
If a graphical system makes implementation of such software
difficult or awkward or laborious, then such software
may be slow to be developed for such systems. Without
these tools then either color management can't exist
on those systems, or it is compromised (i.e. using EDID
derived profiles, or stock profiles), or awkward workarounds
are needed (switch to an X11 server, boot Windows etc.)
Completely understandable. Again though, I think it's important to
separate the discussion of how to create a calibrator, from how to
create a colour-aware application that can achieve its desired colour
results. The two may, at a very low level, share similar requirements
(along with that of managing the data and applying it to outputs etc),
but I expect them to look very different from a protocol point of
view. Some of it may, as said earlier, not involve Wayland at all.
This thread perhaps demonstrates why many projects which would
otherwise care about colour management have such difficulty supporting
colour management correctly.
It's a difficult subject to get a grasp on at times. Many programmers
get to the stage of knowing what RGB is, and thinking there is not
much more.
Yes, but it's also how you work with people. Security had the same
issue for the longest time, where there were application/system
people, and then there were security people, and the only time they
met was to tell each other that they didn't live in the real world. I
desperately want to avoid colour management remaining a ghetto where
the only way to get correct results is via complex external systems
which attempt to route around the underlying display pipeline. I want
it to be a first-class part of our system. But this thread is not a
good start.
Setting VideoLUTs has been standard in display systems almost
forever. Find a way of implementing support in Wayland, so
that color management can happen in Wayland.
No.
I think you need to be a whole lot more flexible here,
if progress is to be made. I'm open to other ideas on
how to manage this, but "no, it's not possible"
does not work towards this. Progress might be
made if you or others with a better grasp of
Wayland's architecture at least understand why, and what,
rather than simply saying "No" without any understanding.
I totally understand how driving (one of) the video LUT(s) is a
reasonable way to achieve your end goal.
The problem here starts with 'it's not even a complete solution',
continues on via 'colour control within display hardware is now more
complex than you realise', and ends with 'this is limiting enough to
be the worst of all possible worlds'.
I'm not sure what you are talking about.
Simply providing video LUT data and nothing else is not a complete
solution, because it means GPU-based composition pipelines (they do
exist for non-alpha-blended cases too) are unaware of colourspace, and
are thus liable to be destructive to the end result, especially if
they attempt to do things such as taking clients with no explicit
source information to be in sRGB colourspace.

That you keep saying 'program the video LUT' makes me think you
perhaps do not realise the full complexity of colour management in
modern display hardware (again, see other thread). This includes that
LUTs can sometimes be applied on a per-plane basis as well as
per-output.

The last part is important, because we can now sensibly make use of
hardware overlay planes in Wayland compositors. If we have enough
information about the colour source, we can even do it in such a way
that it still achieves perfectly colour-correct output. Treating 'the
video LUT' as a first-class, client-visible, object in and of itself
removes the compositor's flexibility to promote clients to hardware
planes, forcing us to choose between performance (a notable dent on
battery life; pushing thermals high enough to trigger limits; putting
enough extra load that full framerate cannot be achieved), and colour
correctness.
Reading you reiterate this yet again, I'm stunned that you want to
explicitly design for a system where a core and indispensable
component of the rendering pipeline is incapable of correct
rendering.
Sorry, I don't know what you mean by that. Can you be more
explicit in what you mean by "core and indispensable
component of the rendering pipeline", and "is incapable of
correct rendering" ?
The compositor is a core and indispensable component of the rendering
pipeline. There is no way for Wayland clients to display anything on a
display, without the compositor mediating the display. Often the
compositor will need to perform intermediate rendering in the process.
The compositor is empowered with vastly more flexibility than previous
rigid systems, and as such is capable of a lot more than you may
realise.

If the compositor does not have fully accurate information about the
colour properties of client surfaces, it stands to reason that
anything the compositor does outside the specific case the client
targets, will not retain colour accuracy. It could attempt to
guess/infer this, but building that into system design seems
self-defeating.
Niels, I think conceptually you have the foundations of a good system
in this proposal. I need to do some more looking into colour
management (bit of light reading for Christmas), so hopefully I can
have some more pertinent questions and suggestions after that.
It would be great if others did a bit of that too, so that
there is more chance of some of things I've said being
understood, or even taken seriously. (And yes, I will continue
to poke away at understanding Wayland a little better.)
I don't think anyone doubts your expertise in the specific field, but
much like a compositor with no explicit information on client colour
properties, I'm having to try to walk back on your reasoning, to go
from specific solutions you appear to be demanding, back to actual
first-order problems. A lossy and frustrating exercise.

Wayland and X11 are no more similar than two surfaces both in RGB
colourspace. They may look so from afar, but at the moment it just
appears like you're trying to bash a square peg into a round hole. In
order to do this properly, and finally give colour management its
deserved first-class place in open-source window systems, you really
need to take a step back and start unpicking some things you and
Argyll have previously taken as a given. At that point we can progress
from both sides saying 'no', to constructive design discussion. I'm
incredibly grateful to Niels here for starting this particular fire,
because his proposal is a model we can actually pick at, reason about,
and make concrete suggestions to.

Cheers,
Daniel
Graeme Gill
2017-01-05 10:50:02 UTC
Permalink
Daniel Stone wrote:

Hi Daniel,
Post by Daniel Stone
I would definitely benefit from seeing an expansion of some of the
terminology that is thrown around which can be subtly misleading. But
yes, as said, I've got some Christmas reading to do.
If anybody would like some concise explanations of color management
terms and concepts, I'm willing to provide them, as well as
references for further expansion or detail.
Being lucid enough to succeed in conveying comprehension
is another thing of course.
Post by Daniel Stone
Per my reply to Chris, some of those gaps can be filled in ways which
are not obvious when coming from other environments. Trying to solve
them in the exact same way would be bashing a square peg into a round
hole, so it can be useful to step back and have a think about
first-order requirements, rather than just jumping directly to deeply
specific solutions.
Agreed - and this is something I'm trying to do as well, to outline
the big picture as the background that leads to specific suggestions.
Post by Daniel Stone
I'm not sure what you mean by a "known-bad compositors",
nor do I really understand how this paints a false dichotomy.
By 'known-bad compositor', I speak of a compositor which, when
processing information from clients, does so destructively. Such that
the final output from the compositor (to any connected display, or
screenshot/screencast, be it via means of direct scanout from a plane,
or intermediate GPU rendering, or an intermediate hardware composition
through a dedicated block, or software composition, or, or), produces
incorrect results.
Incorrect in the sense of spatial rendering, or color values/fidelity or both ?
Post by Daniel Stone
A lot of the discussion on this thread seems aimed at ensuring
compositors are not involved in colour management and colour-critical
applications are forced to use external systems to route around them
in order to achieve the desired colour result. This is a
self-fulfilling prophecy.
That's not what I'm suggesting. Compositors have a vital
role in allowing different applications to share a desktop,
as well as help applications and toolkits compose and
operate a GUI. But that is all about buffer management,
spatial rendering and coordination of updates. Transparency
and color encoding formats starts to dip into some color
related aspects, but only in a simple way that isn't related
to the colorimetry. A color managed workflow is about how
color is best preserved between different device dependent
color spaces, and in the context of a display system driven
by applications, this is primarily about the color spaces the
application takes in, or defines its own colors in, and the
display colorspace. (And lets be clear, when I say "colorspace"
here I'm speaking of the colorimetry, tonal response and color
mixing characteristics represented by the (often) RGB values,
although applications in general will deal with many other
types of input colorspaces such as CMYK, device independent
colorspaces, multi-channel spaces, etc. etc.)
Post by Daniel Stone
Yes, but it's also how you work with people. Security had the same
issue for the longest time, where there were application/system
people, and then there were security people, and the only time they
met was to tell each other that they didn't live in the real world. I
desperately want to avoid colour management remaining a ghetto where
the only way to get correct results is via complex external systems
which attempt to route around the underlying display pipeline. I want
it to be a first-class part of our system. But this thread is not a
good start.
Agreed it's not a good start, and I guess the gulf in
understanding is far wider than I have imagined from my
normal technical interactions with people.
Post by Daniel Stone
Simply providing video LUT data and nothing else is not a complete
solution, because it means GPU-based composition pipelines (they do
exist for non-alpha-blended cases too) are unaware of colourspace, and
are thus liable to be destructive to the end result, especially if
they attempt to do things such as taking clients with no explicit
source information to be in sRGB colourspace.
I'm not sure what you are alluding to. A compositor pipeline
that alters color values in arbitrary ways will of course
make a color managed workflow difficult to impossible,
but what sort of compositor processing do you anticipate will
do this kind of thing ?
Post by Daniel Stone
That you keep saying 'program the video LUT' makes me think you
perhaps do not realise the full complexity of colour management in
modern display hardware (again, see other thread). This includes that
LUTs can sometimes be applied on a per-plane basis as well as
per-output.
Right - so this is perhaps something that might be made more
flexible. Currently it's kind of random whether things like video
overlay planes get VideoLUT curves applied to them or not, depending
on the hardware capability and/or diligence or understanding of the
driver writers.
Post by Daniel Stone
The last part is important, because we can now sensibly make use of
hardware overlay planes in Wayland compositors. If we have enough
information about the colour source, we can even do it in such a way
that it still achieves perfectly colour-correct output. Treating 'the
video LUT' as a first-class, client-visible, object in and of itself
removes the compositor's flexibility to promote clients to hardware
planes, forcing us to choose between performance (a notable dent on
battery life; pushing thermals high enough to trigger limits; putting
enough extra load that full framerate cannot be achieved), and colour
correctness.
A straightforward solution is to provide a virtual color management
VideoLUT that then gets translated in a device-dependent manner
to the actual hardware (i.e. duplication/adaptation to other planes).
Define its purpose clearly, and it will be ahead of the current
implementations that tend towards just exposing the hardware capability
(although it would reduce to that in the simplest case).
The color workflow then retains a simple element with a clear purpose
and role, which is supported by existing tools.
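A rough Python sketch of such a virtual LUT being adapted to a plane (hypothetical helper names; linear resampling is just one possible adaptation):

```python
# Illustrative only: a single "virtual" calibration LUT owned by the colour
# management side, resampled by the compositor onto each hardware plane's
# actual LUT entry count.

def sample_lut(lut, x):
    """Linearly interpolate a LUT (values in [0, 1]) at position x in [0, 1]."""
    pos = x * (len(lut) - 1)
    i = int(pos)
    if i >= len(lut) - 1:
        return lut[-1]
    frac = pos - i
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

def adapt_to_plane(virtual_lut, plane_size):
    """Resample the virtual LUT to whatever entry count a plane supports."""
    return [sample_lut(virtual_lut, i / (plane_size - 1))
            for i in range(plane_size)]

virtual = [i / 255 for i in range(256)]    # identity calibration, 256 entries
plane_lut = adapt_to_plane(virtual, 1024)  # e.g. a 1024-entry overlay plane
```

The colour-management tools keep seeing one LUT with a clear meaning; how it reaches each plane is the compositor's device-dependent business.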

Accommodating specific hardware capabilities such as video decoding
might well need something more, but since this is an area with no
current tools or API's and something that would (for good color
management) build on the conventional color workflow, this
seems like a project that would come a little later.
Post by Daniel Stone
Reading you reiterate this yet again, I'm stunned that you want to
explicitly design for a system where a core and indispensable
component of the rendering pipeline is incapable of correct
rendering.
Sorry, I don't know what you mean by that. Can you be more
explicit in what you mean by "core and indispensable
component of the rendering pipeline", and "is incapable of
correct rendering" ?
The compositor is a core and indispensable component of the rendering
pipeline. There is no way for Wayland clients to display anything on a
display, without the compositor mediating the display.
Sure.
Post by Daniel Stone
Often the
compositor will need to perform intermediate rendering in the process.
What sort of rendering though ?
Post by Daniel Stone
The compositor is empowered with vastly more flexibility than previous
rigid systems, and as such is capable of a lot more than you may
realise.
Maybe, and maybe not. We'll see.
Post by Daniel Stone
If the compositor does not have fully accurate information about the
colour properties of client surfaces, it stands to reason that
anything the compositor does outside the specific case the client
targets, will not retain colour accuracy.
This doesn't follow, unless there is a reason for the compositor
to do a color transformative operation. Nothing involving spatial
transformations or even transparency really falls into that category.
(Linear light blending/anti-aliasing is a nice to have that doesn't
need accurate colorspace information to be largely effective.)
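For illustration, a minimal Python sketch of why an approximate linearisation is "largely effective" for blending, using a simple 2.2 power curve rather than the exact sRGB curve:

```python
# Illustrative only: blending in gamma-encoded space vs. approximately
# linear light. Even a rough linearisation greatly improves blending,
# without needing accurate per-surface colorimetry.

GAMMA = 2.2  # crude approximation of a typical display response

def to_linear(v):
    return v ** GAMMA

def to_encoded(v):
    return v ** (1.0 / GAMMA)

def blend_encoded(a, b):
    """Naive: average the gamma-encoded values directly."""
    return (a + b) / 2.0

def blend_linear(a, b):
    """Average in (approximately) linear light, then re-encode."""
    return to_encoded((to_linear(a) + to_linear(b)) / 2.0)

# A 50/50 blend of black and white:
naive = blend_encoded(0.0, 1.0)   # 0.5 encoded, only ~22% of white's luminance
better = blend_linear(0.0, 1.0)   # ~0.73 encoded, a true 50% luminance mix
```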
Post by Daniel Stone
I don't think anyone doubts your expertise in the specific field, but
much like a compositor with no explicit information on client colour
properties, I'm having to try to walk back on your reasoning, to go
from specific solutions you appear to be demanding, back to actual
first-order problems. A lossy and frustrating exercise.
On the contrary, I think I've given copious levels of "big picture" to
back up specifics. Perhaps the overall context means that I'm not being
understood.

Regards,
Graeme Gill.
James Feeney
2016-12-19 15:04:48 UTC
Permalink
We very deliberately avoid defining any "standard" Wayland interfaces
for configuring a compositor, because every compositor is different.
With X11, you had the one single X server implementation and no other.
On Wayland, every compositor is an individual, just like every X11
window manager is.
I do not want to waste time in designing a "standard configuration
interface" when the realistic expectation is that none of the major
DEs' compositor will be implementing it. They already have their own
tailor-made ways. As a case study one could look at the feature set of
xrandr tool.
At first glance, that comes across as off-point and shirking responsibility,
where Weston boastfully promotes itself as "*the* reference implementation of a
Wayland compositor, and a useful compositor in its own right".

Where is *Weston's* "pixel perfect" Color Management System?

Unless the argument is convincingly made that *nothing* will ever be required
from the Wayland protocol in order for any compositor to implement a "pixel
perfect" CMS, on its own, then 'deliberately avoid[ing] defining any "standard"
Wayland interfaces for configuring a compositor' is just "throwing a monkey
wrench" into the conversation.

To convincingly make that argument, create the Weston "pixel perfect" CMS, and
demonstrate that nothing CMS related was required from the Wayland protocol.

What is the design outline of that Wayland-protocol-free CMS?
Pekka Paalanen
2016-12-21 09:38:41 UTC
Permalink
On Wed, 21 Dec 2016 11:08:06 +1100
Anything more and the application author will just
decide it's not worth the bother. To calibrate we just ask for a
surface that's not going to be tampered with, but we don't want to
optimize for this super-uncommon case.
I disagree - leave it to be an afterthought, and it will be
done badly or left out completely, crippling the practicality
of color management for the system.
Designing that is trivial:

GLOBAL cms_calibrator
- request: create_calibration_surface(wl_surface, new cms_calibration_surface)
# Assigns wl_surface role.

INTERFACE cms_calibration_surface
# Surfaces with this role will only be shown on the set output,
# with a direct color path bypassing all color management, and
# the hardware has been reset to neutral/identity settings.
# (or whatever requirements are appropriate, you can decide
# what to write here)
- request: set_output(wl_output)
# Which output this surface targets. The compositor replies
# with a configure event.
- event: configure(width, height)
# delivers the width and height the application needs to use


How it operates from a client perspective:

1. create a wl_surface
2. bind to cms_calibrator
3. send create_calibration_surface
4. send set_output
5. wait for configure
6. draw the calibration surface in the correct size
7. use Presentation feedback interface to ensure the calibration
surface is shown with the latest content
8. do what you want to do with the colorimeter
9. go to 6 to update the image if necessary
10. destroy cms_calibration_surface and wl_surface; the display
automatically returns to normal


To be user friendly, one probably wants to add an event in case the
user denies the request to show the calibration window as it will have
temporary global effects.

Whether the global needs to be privileged or not, and how privileges
are implemented are an orthogonal matter.


Thanks,
pq
Pekka Paalanen
2016-12-21 11:40:25 UTC
Permalink
On Wed, 21 Dec 2016 11:38:41 +0200
Post by Pekka Paalanen
On Wed, 21 Dec 2016 11:08:06 +1100
Anything more and the application author will just
decide it's not worth the bother. To calibrate we just ask for a
surface that's not going to be tampered with, but we don't want to
optimize for this super-uncommon case.
I disagree - leave it to be an afterthought, and it will be
done badly or left out completely, crippling the practicality
of color management for the system.
GLOBAL cms_calibrator
- request: create_calibration_surface(wl_surface, new cms_calibration_surface)
# Assigns wl_surface role.
INTERFACE cms_calibration_surface
# Surfaces with this role will only be shown on the set output,
# with a direct color path bypassing all color management, and
# the hardware has been reset to neutral/identity settings.
# (or whatever requirements are appropriate, you can decide
# what to write here)
- request: set_output(wl_output)
# Which output this surface targets. The compositor replies
# with a configure event.
- event: configure(width, height)
# delivers the width and height the application needs to use
1. create a wl_surface
2. bind to cms_calibrator
3. send create_calibration_surface
4. send set_output
5. wait for configure
6. draw the calibration surface in the correct size
7. use Presentation feedback interface to ensure the calibration
surface is shown with the latest content
8. do what you want to do with the colorimeter
9. go to 6 to update the image if necessary
10. destroy cms_calibration_surface and wl_surface; the display
automatically returns to normal
To be user friendly, one probably wants to add an event in case the
user denies the request to show the calibration window as it will have
temporary global effects.
Whether the global needs to be privileged or not, and how privileges
are implemented are an orthogonal matter.
Hi Niels,

I really should have CC'd you on this one.

I also forgot to mention that surfaces with the cms_calibration_surface
role, when actually presented, would also guarantee that nothing else
will be shown on that specific output, screen saving will not activate,
etc.; anything that might hamper calibration will not happen.

You'd also want an event telling that the user has interrupted the
showing of the calibration window, so that the calibration app cannot
hog the output indefinitely even if it freezes. That might be the same
event telling the user denied it in the first place.

This is something you cannot achieve with just a pass-through color
profile.


Thanks,
pq
Graeme Gill
2017-01-05 01:51:58 UTC
Permalink
Post by Pekka Paalanen
I also forgot to mention that surfaces with the cms_calibration_surface
role, when actually presented, would also guarantee that nothing else
will be shown on that specific output, screen saving will not activate,
etc. anything that might hamper calibration will not happen.
That's all good - but aren't these the sorts of controls
that other applications need too ?
Slide show or Video player apps need to prevent screensaving,
modal dialogs need to be able to pop to the top of
the window stack etc.

Graeme Gill.
Pekka Paalanen
2017-01-13 12:34:13 UTC
Permalink
On Thu, 5 Jan 2017 12:51:58 +1100
Post by Graeme Gill
Post by Pekka Paalanen
I also forgot to mention that surfaces with the cms_calibration_surface
role, when actually presented, would also guarantee that nothing else
will be shown on that specific output, screen saving will not activate,
etc. anything that might hamper calibration will not happen.
That's all good - but aren't these the sorts of controls
that other applications need too ?
Maybe, but not that way. Wayland is intended to have a semantic desktop
shell protocol where clients communicate exact intents, while X11 was a
purely mechanical protocol with no intents at all.

For instance, to fullscreen a window, in X11 you "set position, set
size, stack on top" (wait, what position? what size is appropriate?)
and the WM has to guess what you wanted. In Wayland, you "make me
fullscreen", and the WM will know exactly what you want and it will do
exactly what you need and it can even tell you what the appropriate
size is, so no-one has to guess anything.

Yes, X11 actually has the NET_WM fullscreening protocol added later by
EWMH IIRC. It communicates intent.
Post by Graeme Gill
Slide show or Video player apps need to prevent screensaving,
modal dialogs need to be able to pop to the top of
the window stack etc.
Preventing screensaving is a lot more complicated than you might first
think when integrated properly.

No application will be able to unconditionally steal your whole desktop
like traditional modal dialogs can, so there will not be such a
request, or it will be limited to the windows of the single client, or
a client-provided sub-set of windows. It would not help at all with a
calibration app. Similarly there is no "stack on top" unconditional
request, but there will likely be a "wants attention" request which
under the compositors policy may cause the window to be stacked on top.

And then the "etc.". Compositors may gain new features you have not
anticipated, that will interfere with calibration. How would you know
to write support for them all in your calibration app to disable them
one by one?

Would it not be simpler to just say "I'm doing calibration now, make
sure nothing interferes"?


Thanks,
pq
Graeme Gill
2017-01-05 01:40:08 UTC
Permalink
I'm not so sure.
Post by Pekka Paalanen
GLOBAL cms_calibrator
- request: create_calibration_surface(wl_surface, new cms_calibration_surface)
# Assigns wl_surface role.
INTERFACE cms_calibration_surface
# Surfaces with this role will only be shown on the set output,
# with a direct color path bypassing all color management, and
# the hardware has been reset to neutral/identity settings.
# (or whatever requirements are appropriate, you can decide
# what to write here)
Why does this have to be made a special case? The normal
machinery used to manage color is capable of
configuring things to be in a proper state for calibration
and profiling (if this was not the case, then it is not
truly able to do the color management!)

Due to the different bit depth of the VideoLUT entries and the
frame buffer, it is expected that it is possible to set
the VideoLUT value for the entry that corresponds
with the values set in the frame buffer (i.e. classically
10 bit VideoLUT entry depth in 8 bit frame buffer),
so that the test patch values can be of the same precision
as the resulting VideoLUT entries that get created from them.
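A small Python sketch of that precision argument, with the classic bit depths (illustrative values only):

```python
# Illustrative only: an 8-bit framebuffer value selects a VideoLUT entry,
# but the entry itself holds a 10-bit output value. A calibration tool can
# therefore present a test patch at 10-bit precision by writing the wanted
# value into the entry that a known framebuffer value will hit.

FB_BITS, LUT_BITS = 8, 10
FB_MAX, LUT_MAX = (1 << FB_BITS) - 1, (1 << LUT_BITS) - 1  # 255, 1023

# Start from an identity mapping: 256 entries, each holding a 10-bit value.
lut = [round(i * LUT_MAX / FB_MAX) for i in range(FB_MAX + 1)]

def display_output(fb_value):
    """Raw 10-bit value sent to the panel for an 8-bit framebuffer pixel."""
    return lut[fb_value]

# Calibration step: pick a 10-bit test value that plain 8-bit pixels could
# never produce, load it into entry 128, then draw the patch with pixel 128.
wanted = 517
lut[128] = wanted
patch_value = display_output(128)   # the panel now receives exactly 517
```

This is why the test patch values can carry the same precision as the VideoLUT entries eventually built from them, even through an 8-bit framebuffer.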
Post by Pekka Paalanen
- request: set_output(wl_output)
# Which output this surface targets. The compositor replies
# with a configure event.
- event: configure(width, height)
# delivers the width and height the application needs to use
Right, but none of this addresses the main point of calibration -
to create a set of 1D LUTs to load into the hardware. How
is the hardware configured ?
Post by Pekka Paalanen
Whether the global needs to be privileged or not, and how privileges
are implemented are an orthogonal matter.
It may be orthogonal, but still needs a concrete solution
to be implementable.

And let me raise a fundamental point about profiling here
(not to be confused with calibration). Profiling the display will not
work if the color values of the pixels to the display is different during
profiling, to what it is for normal application display.

Regards,
Graeme Gill.
Pekka Paalanen
2017-01-13 14:17:53 UTC
Permalink
Hi,

there is some controversy in whether this belongs in Wayland or not,
but if we assume that it does...


On Thu, 5 Jan 2017 12:40:08 +1100
Post by Graeme Gill
I'm not so sure.
Post by Pekka Paalanen
GLOBAL cms_calibrator
- request: create_calibration_surface(wl_surface, new cms_calibration_surface)
# Assigns wl_surface role.
INTERFACE cms_calibration_surface
# Surfaces with this role will only be shown on the set output,
# with direct color path bypassing all color-management, and
# and the hardware has been reset to neutral/identity settings.
# (or whatever requirements are appropriate, you can decide
# what to write here)
Why does this have to be made a special case? The normal
machinery used to manage color is capable of
configuring things to be in a proper state for calibration
and profiling (if this was not the case, then it is not
truly able to do the color management!)
So you say, but then you continue...
Post by Graeme Gill
Due to the different bit depth of the VideoLUT entries and the
frame buffer, it is expected that it is possible to set
the VideoLUT value for the entry that corresponds
with the values set in the frame buffer (i.e. classically
10 bit VideoLUT entry depth in 8 bit frame buffer),
so that the test patch values can be of the same precision
as the resulting VideoLUT entries that get created from them.
...which is actually a very important detail. In other words, the
normal pixel path cannot be used for calibration, because it won't
usually have enough precision: the VideoLUT output has more bits than
the buffer pixels have.

So this is why you keep insisting that applications need to have access
to the VideoLUT. Finally.

However, controlling the output values does not imply access to the
VideoLUT - it's just the only way you have had so far.

If I understand right, the calibrating or monitor profiling process
(are these the same thing?) needs to control the "raw" pixel values
going through the encoder/connector (DRM terminology), hence you need
access to the /last/ VideoLUT in the pipeline before the monitor. Right?
Or not even a VideoLUT per se, you just want to control the values to
the full precision the hardware has.

How does the profiling work? I mean, what kind of patterns do you show
on the monitor? All pixels always a uniform value? Or just some varying
areas? Individual pixels? Patterns that are not of uniform color?

If it was enough to just light up all pixels of a monitor with one
specific color value at a time, we could pretty easily define a
calibration protocol that instead of using buffers and surfaces, you
would just tell which values to emit to the monitor. Then the
compositor, which is in charge of the hardware pipeline, can do the
right thing. We could encode the values in e.g. 32-bits per channel or
whatever you like, and there could be a provision for the compositor to
report actual number of bits used.

Plus all the needed guarantees of non-interfering like we discussed in
the other email, and an ack from the compositor when the new value has
actually reached the monitor.

I would argue that it is much easier to make the above work reliably
than craft a buffer of pixels filled with certain values, then tell the
compositor to program the hardware to (not) mangle the values in a
certain way, and assume the output is something you wanted. The
application would not even know what manipulation stages the compositor
and the hardware might have for the pixels, so you would still need a
protocol to say "I want everything to be identity except for the last
LUT in the pipeline". IMO that is a hell of a hard way of saying
"output this value to the monitor".
Post by Graeme Gill
And let me raise a fundamental point about profiling here
(not to be confused with calibration). Profiling the display will not
work if the color values of the pixels sent to the display are different
during profiling from what they are for normal application display.
Right.

(What is the difference between calibrating and profiling?)

In the scheme above, there would indeed be very different paths for
profiling vs. normal usage. But I do think that is how it has to be,
they will always be different: normal usage will not have the
opportunity to change the VideoLUT at will.

You can still ensure the compositor works correctly. After you have
profiled the monitor, configured the compositor to use the new
profiles, you can use the normal usage path to show a test image and
verify that the colorimeter agrees.

I think one would want to do the verification step anyway, and with
various different content color... um, definitions(?) to see that the
compositor does indeed work correctly for more than one case.

I recall demands from earlier that there must be a "pass-through mode"
for pixels so that calibration apps can work. I think the design
described above provides that even better. "Pass-through mode" by
definition is a path different from the normal usage, too.

If you would agree to all this, then normal usage and profiling would
really be separate things and could be designed independently and to
the point.


Thanks,
pq
Benoit Gschwind
2017-01-13 17:44:09 UTC
Permalink
the calibrating or monitor profiling process (are these the same thing?)
As far as I know, no, they are different: calibrating a monitor is the
attempt to set the cLUT and monitor settings to be close to a given target,
often a given color space with a specific white/black point. Profiling
is the characterization of the monitor (before or after calibration),
i.e. you build a map from the monitor signal to the color that the monitor
actually shows.

Combining both allows applications to correct the signal sent to the
monitor to get as close as possible to the desired physical color.

If I take a concrete example:

Calibration will try to make the (128,128,128) RGB tuple show as a
gray with half the perceptual brightness of the white point, using the
cLUT or monitor settings.

Profiling will send the (128,128,128) RGB tuple to the screen, measure
the actual color shown as physical luminance, and store it in a file to
allow future use for color correction.

Please correct me if I'm wrong.
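The distinction described above can be sketched in a few lines. The `measure()` function here is a purely hypothetical stand-in for a real colorimeter reading, using a grossly simplified gamma 2.2 display model:

```python
# Profiling merely records what the monitor does; calibration would instead
# adjust the cLUT toward a target response.

def measure(rgb):
    # Hypothetical instrument: pretend the monitor is a gamma 2.2 device
    # with 0..100 cd/m^2 per channel (grossly simplified, no cross-talk).
    return tuple(100.0 * (c / 255) ** 2.2 for c in rgb)

def profile(patches):
    """Profiling: map device signal -> measured colour, store for later use."""
    return {rgb: measure(rgb) for rgb in patches}

table = profile([(0, 0, 0), (128, 128, 128), (255, 255, 255)])
# (128,128,128) measures well below half of white on a gamma 2.2 device:
# exactly the kind of fact a profile records, and that a calibration might
# instead try to correct toward a chosen target curve.
```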
Chris Murphy
2017-01-13 19:20:33 UTC
Permalink
Post by Pekka Paalanen
If I understand right, the calibrating or monitor profiling process
(are these the same thing?) needs to control the "raw" pixel values
going through the encoder/connector (DRM terminology), hence you need
access to the /last/ VideoLUT in the pipeline before the monitor. Right?
Or not even a VideoLUT per se, you just want to control the values to
the full precision the hardware has.
Calibration and characterization (profiling) are two distinctly
different things, that from a user perspective are blurred into single
event that's usually just called "display calibration" done by a
single program.

The software first linearizes the videoLUT (sounds like maybe there's
more than one in the hardware, but even from my technical perspective
we're only talking about one such thing and I lack the hardware knowledge
to differentiate), then displays RGB test values while measuring their
response (photometer, colorimeter, spectroradiometer). Then a
correction curve is created by the software and applied to the
VideoLUT. This is calibration.

Next the software displays more RGB test values, subject to that video
LUT's correction, while measuring their response with either a
colorimeter or spectroradiometer. The software creates an ICC profile
from these measurements. This is characterization.
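The calibration step described above can be sketched roughly as follows. `NATIVE_GAMMA` stands in for a real measured response and the pure power-law model is a simplification; real tools fit the measured curve rather than assuming one:

```python
# Measure the native per-channel response, then build a VideoLUT correction
# so the calibrated response follows a chosen target curve.

NATIVE_GAMMA = 2.4   # pretend measurement result
TARGET_GAMMA = 2.2   # calibration target

def native_response(fraction):
    return fraction ** NATIVE_GAMMA

def correction_lut(entries=256, out_max=1023):
    """For each input level, find the drive value whose native output
    matches the target curve: drive = target(x) ** (1 / NATIVE_GAMMA)."""
    lut = []
    for i in range(entries):
        x = i / (entries - 1)
        drive = (x ** TARGET_GAMMA) ** (1.0 / NATIVE_GAMMA)
        lut.append(round(drive * out_max))
    return lut

lut = correction_lut()
# After this "calibration", characterization would measure patches *through*
# the corrected LUT and build the ICC profile from the combined behaviour.
```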
Post by Pekka Paalanen
How does the profiling work? I mean, what kind of patterns do you show
on the monitor? All pixels always a uniform value? Or just some varying
areas? Individual pixels? Patterns that are not of uniform color?
The test pattern needs to be large enough for the user to get the
measuring device aperture over the pattern without any ambiguity. The
test pattern is made of identical RGB values - although whether
dithering is used by the panel itself isn't something the software can
control, but is taken into account by the measuring device the same
way our visual system would.

The minimum test is black, white, each primary, and some number of
intermediate values of each channel to determine the tone response
curve (incorrectly called gamma, though the shape of the curve could be
defined by a gamma function, a parametric function, or a table of
points). But each profiling software can do different things. There
are simple matrix + TRC only display profiles. And there are full 3D
LUT display profiles; these need more measurements, including more
than just measuring primaries - these include measurements of
secondary and tertiary colors.
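A minimal sketch of deriving a tone response curve from the intermediate measurements mentioned above, assuming a pure power-law (gamma) model; as the text notes, real profilers also support parametric curves and point tables:

```python
import math

def fit_gamma(samples):
    """Least-squares fit of Y = x ** g in log-log space.
    samples: (input, luminance) pairs normalised to [0, 1],
    with the endpoints (0 and 1) excluded."""
    num = sum(math.log(y) * math.log(x) for x, y in samples)
    den = sum(math.log(x) ** 2 for x, _ in samples)
    return num / den

# A display that is exactly gamma 2.2 fits back to 2.2:
g = fit_gamma([(x / 255, (x / 255) ** 2.2) for x in (32, 64, 96, 128, 192)])
```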

Nearby to this test pattern, often there's some sort of status
indicator so the user has some idea the process hasn't stalled,
sometimes also including the RGB values being displayed and their
respective XYZ or Lab measured values (to some degree for
entertainment value I guess).

The ICC profile is used to do various transforms, e.g. CMYK to display
RGB, sRGB/Rec 709 to display RGB, etc. which is what's meant by
"display compensation" so the display produces colors as if it has a
behavior other than its natural behavior. Those transforms are done
by ICC aware applications using a library such as lcms2. So whatever
pipeline is used for "calibration" needs to produce identical results
to the pipeline used by the application - otherwise all bets are off
and we'll get all kinds of crazy bugs that we will have no good way of
troubleshooting. In fact I'd consider it typical for me to display
sRGB 255,0,0 in say GIMP, and measure it with a colorimeter, and make
sure the XYZ values I get are identical to what the display ICC
profile says they should be. If not, I'm basically hosed. And I've
seen exactly this kind of bug before on every platform I've ever
tested and it's tedious to figure out.
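The verification described above can be sketched with a simple matrix + TRC forward model. The matrix below is the standard sRGB/D65 RGB-to-XYZ matrix, used here only as a stand-in for a real display profile's colorant matrix:

```python
# Predict XYZ for a display RGB from a matrix + TRC profile model; a real
# check would compare this prediction against the instrument reading.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def predict_xyz(rgb, gamma=2.2, matrix=SRGB_TO_XYZ):
    """rgb in [0, 1]; linearise via the TRC, then apply the colorant matrix."""
    lin = [c ** gamma for c in rgb]
    return [sum(matrix[i][j] * lin[j] for j in range(3)) for i in range(3)]

# Predict full red: if the measured XYZ differs from this beyond instrument
# tolerance, some stage in the pipeline is mangling the pixels.
xyz_red = predict_xyz((1.0, 0.0, 0.0))
```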

Apple has a great little tool called Digital Color Meter that shows
the pixels on screen (digitally enlarged) so I can see what RGB values
it's sending to the display (these are post ICC transformed values,
but have not been additionally transformed by the video LUT, so in
reality these RGBs are not what arrives at the panel; but the video LUT
calibration plus display are widely considered, from the user and even
expert perspective, to be one thing once the calibration is done).
Post by Pekka Paalanen
I would argue that it is much easier to make the above work reliably
than craft a buffer of pixels filled with certain values, then tell the
compositor to program the hardware to (not) mangle the values in a
certain way, and assume the output is something you wanted. The
application would not even know what manipulation stages the compositor
and the hardware might have for the pixels, so you would still need a
protocol to say "I want everything to be identity except for the last
LUT in the pipeline". IMO that is a hell of a hard way of saying
"output this value to the monitor".
OK. The only problem I see with two pipelines, where one is described
as "much easier to make reliable", is that it sounds like the other
pipeline may not be reliable - but that is the pipeline 99.9% of the
colors we care about are going through, namely the colors from all of
our applications.

So instead of testing one pipeline, we're going to have to test two
pipelines, with software and measuring devices, to make certain they
are in fact behaving the same. I'm not really sure what the advantage
of two pipelines is, in this context.
--
Chris Murphy
Kai-Uwe
2016-12-20 12:05:10 UTC
Permalink
Niels had an extremely good point that compositors *can* do all the
hard stuff too, by using the libraries the CMS experts have written.
This is not the X11 where you cannot add these features and
dependencies to the X server.
That's not correct. Keith Packard and other people rejected the idea of
adding a new XCMS layer in the X server. He said color conversion
belongs in the window manager. So today we are using the X property
system to communicate between applications, compositors and
configuration libraries/system settings dialogs (besides the
Xinerama/XRandR gamma table APIs).

IMO the main difference with Wayland is that we do not know of a similar
inherent transport mechanism for metadata like Xatom is for X11. (I
share Graeme's position that the metadata communication path should
match that of Wayland in order to remain compatible. I do not like the
idea of randomly adding different mechanisms as Wayland extends its
capabilities. Most relevant image file formats have adopted some way to
attach metadata; otherwise workflows are not flexible enough.) That
metadata communication path does __not__ necessarily need to configure
Wayland.

Kai-Uwe
Kai-Uwe
2016-12-20 14:31:14 UTC
Permalink
On Tue, 20 Dec 2016 13:05:10 +0100
Oh nice. So indeed, CMS belongs in the compositor too (not only
clients), because it is the window manager in the Wayland architecture.
The compositor has at least access to the gamma table API and output
dimension information, which matches Graeme's base CM
But yes, I did mean to include also political decisions on what belongs
where.
Covering the most common case, client surface/buffer ICC profile ->
Wayland, is a good political signal.
After thinking about my last reply to Graeme, I have become more
convinced that compositors must be full-fledged CMS users, not only
applications. Now the question becomes: what do you need from Wayland,
so that the application side instance of CMS can relay the necessary
information to the compositor-side CMS, so that the compositor can do
the right thing? And vice versa.
base CM
* set/get graphics card gamma table curves
* profiler and CMS
* set/get the compositor ICC device profile per output
* profiler and CMS
* CM off-switch per client surface/buffer
* useful for early color binding
* profiling
* get output dimensions and notified about changes
* help profilers with UI layout
* early color binding
* get output EDID and notified about changes
* identify monitor
* preselect profiles in UIs for a monitor
* configuration and online ICC Taxi DB
* ICC profile generation from EDID

advanced CM:
* set document/surface/buffer source ICC profile
* set and forget - great for most clients
* set document/surface/buffer <-> output ICC device link profile
* very advanced
* reduces even further the need for early color binding
* movie tools like blender, video players etc.
* use custom CMM with own gamut mapping algorithm
* use effects
* adapt to viewing environment ...
* set a CMS of choice inside the compositor to replace lcms if needed.
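On the "ICC profile generation from EDID" item above: EDID 1.3 stores the panel's primaries and white point as 10-bit CIE xy fractions. The byte offsets below follow my reading of the EDID standard and should be double-checked against the spec before relying on them:

```python
# Illustrative sketch: extract chromaticity coordinates from a 128-byte
# EDID base block. Bytes 0x19..0x1A hold the low 2 bits of each value,
# bytes 0x1B..0x22 the high 8 bits; each value is a fraction of 1024.

def edid_chromaticities(edid):
    """Return {'red': (x, y), ...} CIE xy coordinates from an EDID block."""
    lo1, lo2 = edid[0x19], edid[0x1A]

    def tenbit(hi, low2):
        return ((hi << 2) | low2) / 1024.0

    return {
        "red":   (tenbit(edid[0x1B], (lo1 >> 6) & 3), tenbit(edid[0x1C], (lo1 >> 4) & 3)),
        "green": (tenbit(edid[0x1D], (lo1 >> 2) & 3), tenbit(edid[0x1E], lo1 & 3)),
        "blue":  (tenbit(edid[0x1F], (lo2 >> 6) & 3), tenbit(edid[0x20], (lo2 >> 4) & 3)),
        "white": (tenbit(edid[0x21], (lo2 >> 2) & 3), tenbit(edid[0x22], lo2 & 3)),
    }
```

From these xy values plus an assumed tone curve, a rough matrix profile can be synthesized when no measured profile exists.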
When one integrates a CMS in a compositor, you no longer need to
expose configuration (hardware configuration, like CLUT programming)
via any protocol. The compositor talks directly with the CMS and if the
compositor can set e.g. CLUTs, CMS can tell it what to set.
Only if the content channel count is <= the display channel count, and
the content precision is <= the surface precision.

I doubt that we can always know what is needed for a CMS-hungry /
demanding client.
I am assuming that the compositor can interface with a CMS by calling
into a library offered by the CMS. If that interfacing was previously
done over X11, then you have to write that library. It will be more
efficient too, since you don't have to serialize and deserialize, and
asynchronicity (problems) appear only when you want to.
Making this selectable similar to XDG desktop file selectors for DE's.
Is there a path to install such libraries into? What is the API to
interface with? I guess ArgyllCMS, Oyranos CMS and others might be
interested. DE integrators, administrators and users switch those on a
regular basis, but would now need to de-/install packages.
I'm slowly starting to suspect that CMS designers need a slight paradigm
shift: the compositor is meant to integrate with the CMS, instead of
the CMS given low-level access to the hardware bypassing the window
manager. CMS is no longer something that can be bolted on externally,
like it is in X11. Embrace the idea of integration, and I believe and
hope that it will pay off as a much more reliable architecture and
polished user interfaces.
Agreed to the extent that clients can pass enough precision and channels
to the compositor CMS. (I was never able to handle e.g. CMYK, 6-channel
or HDR content inside the compositor.)
Some of what used to go over X11 would
probably be much better as a library ABI, but in X11 times it was not
possible because X11 implied separate processes.
Btw. in X11, how do CMS integrate/interface with compositing managers?
KolorServer:
* kded daemon + D-Bus messages <-> KWin core
https://userbase.kde.org/Color_Management

CompICC:
* plug-in to Compiz, is part of the main process with access to
most(all?) internals
http://www.oyranos.org/FOSDEM2012/ColourManagementInCompositors2012.pdf
Who does the colorspace etc. conversions?
ICC profiles -> CMM (lcms or others) -> 3D texture
How do you control blending spaces?
ugly monitor color space blending, its a known limitation
How have you implemented GPU-accelerated color mapping?
Yes, for instant and CPU relaxing color conversions, both of the above
implementations use shaders with OpenGL 3D textures.

For gamma 1.0 handling to work, the shaders need to accept input and
output curves. Otherwise there is really ugly color banding, as can be seen in
cameraRAW displaying through the above KDE/Compiz color servers.
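A hedged sketch of the 3D-texture approach described above, in plain Python rather than GLSL: a small 3D LUT sampled with trilinear interpolation. The input/output shaper curves mentioned above would be applied per channel before and after this lookup; they are what lets a coarse grid represent gamma ~1.0 data without banding:

```python
# Trilinear sampling of an n^3 colour grid, the CPU equivalent of what the
# shaders do with an OpenGL 3D texture.

def trilinear(lut, n, rgb):
    """lut[i][j][k] -> output triple on an n^3 grid; rgb components in [0, 1]."""
    def split(c):
        x = c * (n - 1)
        i = min(int(x), n - 2)
        return i, x - i

    (ri, rf), (gi, gf), (bi, bf) = split(rgb[0]), split(rgb[1]), split(rgb[2])
    out = []
    for ch in range(3):
        acc = 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    # weight of each of the 8 surrounding grid points
                    w = (rf if di else 1 - rf) * (gf if dj else 1 - gf) * (bf if dk else 1 - bf)
                    acc += w * lut[ri + di][gi + dj][bi + dk][ch]
        out.append(acc)
    return out

# Identity 5^3 grid: interpolation reproduces the input exactly, so a real
# transform baked into the grid is approximated piecewise-linearly.
N = 5
identity = [[[(i / (N - 1), j / (N - 1), k / (N - 1))
              for k in range(N)] for j in range(N)] for i in range(N)]
result = trilinear(identity, N, (0.3, 0.6, 0.9))
```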
The compositor internal interfaces can and should be used for what
Xinerama, RandR and whatever you have been using to configure an X
server through X11. This time, the compositor needs to load and
interface with the CMS.
I looked at the weston code and found some glue. However, the
communication from the compositor (and its CMS library) to the client is
not clear to me. People ask frequently about remote color management. So
pretty soon there will be a demand for distributed CMS and clients for
remote configuration, conversions etc.
As to what you really need from Wayland, there are two ways:
1. The one Niels started with: a standard protocol extension that is
used manually.
I like this approach for its shoot-and-forget nature as well as its integration feeling.
2. The approach what e.g. libEGL uses: if you have a particular CMS
implementation, and the compositor initializes the CMS, the CMS can
hook up its very own Wayland protocol extensions. When a client
initializes the same CMS, the CMS looks up the
CMS-implementation-specific extension from the Wayland display, and
uses it. This way everything about the Wayland protocol extension is
actually private to the CMS implementation. The cost is that you
need a library ABI in the CMS, one for compositors and one for
clients.
Interestingly you mention a way to extend Wayland privately. Is this a
good starter(?):
https://wayland.freedesktop.org/docs/html/ch04.html

That might be the path for many base CM features. The device link idea
could then be specified more independently of the Wayland community, as
it is _really_ advanced, I guess.
A "benefit" of option 2 is that you don't have to go through the
Wayland upstream review process but only your own.
Yes. That might accelerate things.

Still, option 1 has the political benefit of making Wayland embrace color
management by default. I doubt that any other group can have such an
impact. They can easily extend and complement.

Kai-Uwe
Benoit Gschwind
2017-01-13 23:20:07 UTC
Permalink
Hello Graeme Gill

I read many comments on this topic, including yours, and I agree with you
on some points, in particular about the calibration and profiling
procedure, but I am very puzzled by your position on other points. It
looks like you hate display managers[1] because they are not able to
produce accurate color from a given specification/definition/encoding.

I would like to point out that a colorspace with pixel values is just an
encoding for the color you would like to produce on the monitor. That
means the compositor is capable of performing the translation to the
monitor as accurately as the given client application. It can do it even
better, because it has the hardware under control and may have a wider
community to support this feature.

I think you should drop the idea that the compositor cannot do something
better than a specific application; by definition, the compositor is an
application.

Some comments about the realities you describe follow inline.
Or be prepared to re-visit Wayland's fundamental design decisions
if they turn out to be based on a false premise.
I don't think that's fair. I think Wayland is the opportunity to upset
the status quo with color management and do correctly what was never
possible with X11.
I think it is fair - a lot of push back is along the lines of
"that's impossible - Wayland doesn't work that way", which
if taken to a logical conclusion, (maybe) implies that Wayland is
incapable of supporting color management.
Assumption: "Applications are oblivious to the output their
pixels are rendered to"
Reality: An application needs to know the output it's pixels
land on if it is to present them in the correct output specific
colorspace.
I disagree: an application needs to know what it wants to show, and then
it needs a way to tell the compositor. An application that wants a pixel
to be 40 cd.m-2 of red with a wavelength of 750 nm must know how to tell
that to the compositor; it may be the (240,2,4) RGB tuple in the sRGB
color space (a totally random conversion). But once the encoding is
agreed upon, the compositor can do the same as any color-managed-aware
software. On the other hand, I can agree that a client may want to know
the monitor's capabilities to adapt its display: for example, if the
display isn't wide-gamut, I just fall back to normal-gamut rendering,
because rendering wide gamut would be a waste of effort.

In any case, please drop this reality, and help define how the client
should tell the compositor which color it wants to show, and what is
useful for an application to know.
Assumption: "The compositor is able to do all the transformations
necessary to render applications pixels to the output device"
Reality: It may not be practical for a compositor to have the
color conversion machinery capable of doing the transformations
in the manner and to the precision that an application requires.
This reality is correct because you used 'may', but just assume the
compositor can do all the transformations necessary to render
application pixels to the output device as well as any
color-managed-aware software, even better, because it will have a
broader community. Also consider that this kind of transformation can
probably be included in a library which all compositors will be able to
reuse, a little bit like libinput is currently used by a lot of
compositors out there.
Assumption: "Graphical system API's can be divided into
application operational API's, and Management API's.
Wayland need only cover the former".
Reality: Some graphical functions are both operational
for one type of application, and management for other types
of applications.
I do not understand this assumption, and consequently not the related reality either.
Assumption: "All management functions can be incorporated in
the compositor"
Reality: Some management functions require full blown applications,
and can't practically be incorporated in each compositor,
nor tailored to each compositor.
So if the realities challenge the assumptions, then
either clever means need to be invented to get things
to work within the assumptions, or the assumptions need
to be softened. Saying "it's impossible" doesn't progress
things, nor help bring Wayland up to having the capabilities
of other existing systems.
[ The above of course is, as best I grasp it from preceding
conversations. ]
I agree with you because I understand the context, but out of context
the assumption "All management functions can be incorporated in the
compositor" is valid: nothing forbids any software from being included
within a compositor; if you want a compositor with Tetris inside, it's
up to you. That said, I will explain why I agree with you: I agree that
it would be nice to split the software that finds the calibration values
and builds the profile out of the compositor, and I think we should find
a privileged protocol to enable this design. On the other hand, I don't
think that configuration must be delegated to separate software, i.e.
the privileged software is granted privileges for calibration and
profiling; once they are done, it produces configuration files that
should be fed to the compositor by its 'specific' configuration system.
Lets be honest for a moment. How many applications support color
management on the Linux desktop? We're asking application authors to
understand things like blending spaces, source and destination
profiles, vcgt, overlapping windows on different crtc's and horrible
concepts like that.
Agreed. Easier to access, and default color management framework
is desirable (and for that reason was included in my sketch). But
this shouldn't be at the expense of applications that need more
than that, nor at the expense of color management applications
needed to set things up so that all this can work in the first place.
As a framework guy, _and_ an app developer I just want to tag one
surface with a colorspace and then let the compositor figure out all
the grotty details.
And you should be able to. But what about applications that need
far more than that ? (Ordinary CMM's of the type that are likely
to be practical in a compositor, lack a lot of capabilities
that some applications will need).
Anything more and the application author will just
decide it's not worth the bother. To calibrate we just ask for a
surface that's not going to be tampered with, but we don't want to
optimize for this super-uncommon case.
I disagree - leave it to be an afterthought, and it will be
done badly or left out completely, crippling the practicality
of color management for the system.
Graeme Gill.
_______________________________________________
wayland-devel mailing list
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
Another thing that seems to mislead you and the Wayland community is
that you use 'standard' protocol; when you use these words on the
Wayland mailing list, it is read as a Wayland 'standard' protocol, in
other words a standard that the Wayland community is responsible for.
But another path exists: you can build your own 'standard' protocol
(understand: a Wayland extension, like XRandR is an extension of the X11
protocol), maintain it, and lobby the compositor developers to implement
it. This is how xdg-shell and wayland-wall are managed.

Before closing my email, I would like us to separate the discussion of
how regular clients share color data with the compositor from the topic
of how software can perform calibration and profiling. Even if I agree
that both are needed to have color-managed monitors, both topics are
mostly orthogonal, IMO.

My suggested titles: "Enable color managed clients" and "Enable color
calibration and profiling"

I hope this e-mail is constructive and will help further discussions.

Best Regards


[1] a generic name that include X11/Wayland or any others of that kind.

--
Benoit (blocage) Gschwind
Daniel Stone
2016-12-21 12:14:50 UTC
Permalink
Hi Niels,

On 21 December 2016 at 11:21, Niels Ole Salscheider
Maybe the solution for profiling would then be to just use KMS for fullscreen
display and bypass the compositor completely? The profiling application could
do whatever it wants to the hardware and the compositor would then restore the
proper state when it is started again...
My working view at the moment is that whatever is doing calibration
should be directly in charge of the full insane complexity of the
display hardware, and that even enumerating this, let alone offering
control over it, is not tractable. Which leaves us with two options:
the compositor runs calibration, or external calibration apps do not
run under a Wayland session and just drive DRM/KMS directly.

This didn't make any sense when all display drivers were Xorg
components, but hey, we do have a universal API in DRM/KMS that you
can write applications directly towards, so I don't see why we should
bend over backwards making these compromises for special-purpose
clients which by definition do not interoperate with a regular desktop
environment.

Cheers,
Daniel
Graeme Gill
2017-01-05 02:37:13 UTC
Permalink
Daniel Stone wrote:

Hi Daniel,
Post by Daniel Stone
My working view at the moment is that whatever is doing calibration
should be directly in charge of the full insane complexity of the
display hardware, and that even enumerating this, let alone offering
the compositor runs calibration, or external calibration apps do not
run under a Wayland session and just drive DRM/KMS directly.
Really unattractive (i.e. a big obstacle) from a color management
calibration/profiling application writer's point of view.

I certainly don't want to have to figure out any insane complexity
of display hardware - that's not my application's job - I just want the
access to the abstracted common color pipeline functionality that has always
been available in every desktop operating system, so that these
applications, and all the color sensitive applications can work.
Post by Daniel Stone
This didn't make any sense when all display drivers were Xorg
components, but hey, we do have a universal API in DRM/KMS that you
can write applications directly towards, so I don't see why we should
bend over backwards making these compromises for special-purpose
clients which by definition do not interoperate with a regular desktop
environment.
Sorry, but from my perspective this is completely insane.

I think that adding a couple of well understood API's doesn't
compare to modifying a desktop application to have to, on the fly
switch from a normal application context into configuring
and then driving a (basically) raw frame buffer, convey all
it's user interface to the frame buffer to run test patches,
and then switch back again. And I don't know what you mean by
"do not interoperate with a regular desktop environment". These
are perfectly regular desktop applications that happen to have
a special purpose. Casting them adrift from the normal desktop
environment raises their difficulty into the "requires heroic effort"
territory, due to huge breakage of cross platform application
compatibility alone, and is directly contrary to the very
idea of what a display server and the overlaying application UI
toolkits are meant to provide!

I'm also thinking it would really help a lot if you and
others contributing to this thread were able
to procure something like a X-Rite i1 display Pro,
run up the manufacturer provided software on
MSWindows or OS X to get a feel for what it does,
then fire up DisplayCAL+ArgyllCMS on Linux/X11
and take it for a spin.
(Another path would be to obtain one of Richard's
ColorHugs, but I think seeing how the commercial
software operates as well, would add a broader perspective.)

I can keep writing a lot of words, but they don't seem to
be conveying much meaning without some common context.

Cheers,

Graeme Gill.
Daniel Stone
2016-12-21 13:54:00 UTC
Permalink
Hi Niels,

On 21 December 2016 at 13:24, Niels Ole Salscheider
For example a still very common form of partial data loss is the dumb
program that can open a JPEG but ignore EXIF color space metadata and
an embedded ICC profile. What'd be nice is if the application doesn't
have to know how to handle this: reading that data, and then tagging
the image object with that color space metadata. Instead the
application should use some API that already knows how to read files,
knows what a JPEG is, knows all about the various metadata types and
their ordering rules, and that library is what does the display
request and knows it needs to attach the color space metadata. It's
really the application that needs routing around.
I agree (the specific JPEG case is something that's been bugging me
for some time), and it's something I want to build in to make it as
easy as possible for people to gradually build their clients to get
this right.
This is really something that should be done by the toolkits (Qt, GTK, ...).
I really hope that they start to read the profile from EXIF when opening an
image. They can then either attach it to the subsurface that is used to
display the image, or convert it to their blending space (which could match
the blending space of the compositor) if blending is performed.
Sure. Complicated of course by things like embedded web views, but ...
Similarly, I'd like to park the discussion about surfaces split across
multiple displays; it's a red herring. Again, in X11, your pixel
content exists in one single flat buffer which is shared between
displays. This is not a restriction we have in Wayland, and a lot of
the discussion here has rat-holed on the specifics of how to achieve
this based on assumptions from X11. It's entirely possible that the
best solution to this (a problem shared with heterogeneous-DPI
systems) is to provide multiple buffers. Or maybe, as you suggest
below, normalised to an intermediate colour system of perhaps wider
gamut. Either way, there's a lot of ways to attack it, but how we
solve that is almost incidental to core colour-management design.
Has there been any discussion about using a buffer per output to solve the
heterogeneous-DPI problem? If we end up doing that we might as well use it for
color correction. But otherwise I would prefer the device link profile
solution.
No, it's something I've just thrown out here because I thought this
thread was too relentlessly productive and on-topic. It's _a_ possible
solution which doesn't seem immediately useless though, so that's
something. I was mostly using it though, to illustrate that there may
be better long-term solutions than are immediately obvious.
So then I wonder where the real performance penalty is these days?
Video card LUT is a simplistic 1D transform. Maybe the "thing" that
ultimately pushes pixels to each display, can push those pixels
through a software 2D LUT instead of the hardware one, and do it on 10
bits per channel rather than on full bit data.
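The software-LUT idea above is just a per-channel 1D lookup applied to every pixel on its way out. A minimal sketch (the gamma value, table size and function names are illustrative, not from any real compositor):

```python
# Sketch of applying a per-channel 1D calibration LUT in software at
# 10-bit precision (1024 entries), instead of loading it into the video
# card's hardware LUT. All names and the curve are illustrative only.

def make_gamma_lut(gamma, entries=1024):
    """Build a 10-bit-in / 10-bit-out linearization curve."""
    top = entries - 1
    return [round(((i / top) ** gamma) * top) for i in range(entries)]

def apply_lut(pixel, lut):
    """pixel is an (r, g, b) tuple of 10-bit values; a real setup would
    use a separate LUT per channel, the same one is reused here."""
    return tuple(lut[c] for c in pixel)

lut = make_gamma_lut(1.0 / 2.2)   # compensate a 2.2-gamma panel
print(apply_lut((512, 512, 512), lut))
```

In a compositor this lookup would run in the composition shader or the final blit, which is why the GPU-composition case discussed below is cheap.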
Some of the LUTs/matrices in display controllers (see a partial
enumeration in reply to Mattias) can already handle wide-gamut colour,
with caveats. Sometimes they will be perfectly appropriate to use, and
sometimes the lack of granularity will destroy much of their value. If
the compositor is using the GPU for composition, then doing colour
transformations is extremely cheap, because we're rarely bound on the
GPU's ALU capacity.
Yes, but as Graeme has pointed out, doing it in a shader means lower precision
when using an 8 bit framebuffer. How feasible is it to use a higher resolution
framebuffer and how big would the performance impact be?
Well yeah, if you're using an 8-bit framebuffer then that caps your
effective precision. But even ignoring the fact that intermediate
calculations within shaders happen at vastly higher precision (often
32bpc) than either your source or destination buffer, I don't know of
hardware which supports 10bpc sampler targets but not render targets.
Meaning that sudden demotion to 8bpc precision doesn't just happen;
you either succeed or fail in the first place.

(By way of example, Mesa does not usefully expose 10bpc formats for
non-Intel drivers right now; the hardware and underlying drivers do;
it's just a small bit of missing glue code. On the other hand, there
is no silent jump between precision; it just wouldn't work.)

Cheers,
Daniel
Graeme Gill
2017-01-05 10:58:19 UTC
Permalink
Post by Daniel Stone
(By way of example, Mesa does not usefully expose 10bpc formats for
non-Intel drivers right now; the hardware and underlying drivers do;
it's just a small bit of missing glue code. On the other hand, there
is no silent jump between precision; it just wouldn't work.)
Right, so that's all (near) future stuff, whereas 10 bpc
VideoLUT output precision is rather old and currently
supported, even on ancient HW like my GeForce 8600 and
its predecessor. (So old in fact, that the D/A and VGA output
conveys that 10 bit to the display).

I'd also rather hope (but haven't had any confirmation)
that HW that permits 10 bpc frame buffers has 12 bit
VideoLUT precision, particularly with HDR now being a thing.

Graeme Gill.
Graeme Gill
2017-01-05 11:22:32 UTC
Permalink
Post by Daniel Stone
(By way of example, Mesa does not usefully expose 10bpc formats for
non-Intel drivers right now; the hardware and underlying drivers do;
it's just a small bit of missing glue code. On the other hand, there
is no silent jump between precision; it just wouldn't work.)
Right, so that's all (near) future stuff, whereas 10 bpc
VideoLUT output precision is rather old and currently
supported, even on ancient HW like my GeForce 8600 and
its predecessor. (So old in fact, that the D/A and VGA output
conveys that 10 bit to the display).

I'd also rather hope (but haven't had any confirmation)
that HW that permits 10 bpc frame buffers has 12 bit
VideoLUT precision....

Graeme Gill.
Graeme Gill
2017-01-05 02:21:56 UTC
Permalink
If the resolution of the frame buffer was high enough we could just apply the
VideoLUT in software when we also apply the display profile and leave the
hardware LUT alone.
Yes, but that's not typically how the HW is arranged. Even
10 bit frame buffer configurations may possibly be followed
by 12 bit output 1D LUTs, specifically so that
linearization curves can be applied at a high enough
resolution to be able to configure the 10 bit steps
to a satisfactory accuracy.
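The precision argument above comes down to step sizes, which a little arithmetic makes concrete (a sketch, not tied to any particular hardware): a 12 bit output LUT can place each of the 1024 framebuffer steps on a 4096-level grid, roughly four times finer than the framebuffer itself.

```python
# Quantization step size (as a fraction of full scale) for common
# channel depths, showing why a 12-bit LUT behind a 10-bit framebuffer
# still buys accuracy.

def step_size(bits):
    return 1.0 / (2 ** bits - 1)

print(f"8-bit step:  {step_size(8):.6f}")   # 1/255  ~ 0.003922
print(f"10-bit step: {step_size(10):.6f}")  # 1/1023 ~ 0.000978
print(f"12-bit step: {step_size(12):.6f}")  # 1/4095 ~ 0.000244
```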
You could then profile the screen by setting the device
link profile of your surface to the identity mapping without any vcgt tag.
That doesn't solve the problem for systems that do not have high resolution
frame buffers though. Such systems take a step back when running
Wayland if there is no means to create a set of higher resolution
calibration curves.

Summary - the 1D LUT HW is almost universally supported in
video cards, often offers higher resolution in the
non-linearized display space, comes for free in
terms of performance, ensures better behavior of
the display for all applications, color managed or not, and as
a color management processing step, is very widely supported by
color management systems and tools.
But I agree that we want to program the VideoLUT as long as we use 8 bit
framebuffers. We normally do not want to allow applications to change the
VideoLUT since that would have an influence on all applications and a broken
application might mess with it in some unintended way.
Theoretically, but not practically. All current systems are open
to this "flaw", yet people do useful work on them without
being subject to such problems. It's like saying that
theoretically an application could send loud, annoying
sounds to the audio output. Yes they can, but users don't
put up with that kind of thing, so such applications get
un-installed pretty fast.
Maybe the solution for profiling would then be to just use KMS for fullscreen
display and bypass the compositor completely? The profiling application could
do whatever it wants to the hardware and the compositor would then restore the
proper state when it is started again...
Maybe - but this seems rather hacky, and I'm not clear
if things like (say) Qt will continue to provide the UI
if the application plays with DRM/KMS (aren't you implicitly
shutting down Wayland for that output ? - I'm not clear on the details).

Doesn't this also imply that the calibration and profiling
applications then have to do a lot of fairly low level
configuration to set the display in the same state
that Wayland is configured to have it in ?
(i.e. how much does DRM operate in parallel
to Wayland ? Can I get/set CRTC VideoLUTs via DRM while
Wayland is running on an output ?)

Is it certain that every Wayland supporting system has DRM/KMS available ?

Having created an application calibration curve, how does
the application install it for loading into the correct CRTC when
Wayland is running ? (Packing it in some ICC profile may solve
normal usage situations, if there is a profile, but cuts off
other current uses such as checking that the VideoLUT
is set as expected, capturing it during profiling, etc.)

Cheers,
Graeme Gill.
Richard Hughes
2017-01-13 15:19:59 UTC
Permalink
.. also guarantee that nothing else will be shown on that specific output
FWIW, I'd be fine just displaying a color swatch on the entire screen
with no window decorations; the only reason we use a composited window
in gnome-color-manager is so the user knows where to put the sensor.

Richard.
Pekka Paalanen
2017-01-13 16:08:51 UTC
Permalink
On Fri, 13 Jan 2017 16:37:57 +0100
Post by Richard Hughes
.. also guarantee that nothing else will be shown on that specific output
FWIW, I'd be fine just displaying a color swatch on the entire screen
with no window decorations; the only reason we use a composited window
in gnome-color-manager is so the user knows where to put the sensor.
Well, that will work for profiling, as long as you don't mind not being
able to see progress information (which I would consider a real drawback).
Besides it's surely a bit awkward that the computer then cannot be as
easily used differently in the meantime if it doesn't have a second
monitor connected (not that I'd recommend it, but I often have a music
player open during measurements and sometimes it's nice to have the
ability to adjust the playlist while lengthy measurements are still
running, to give an example. Also who am I to tell people with their new
gigantic 4K 40" desktop monitor that they can't use the rest of the
available real estate while a comparatively tiny measurement swatch is
displayed in the center?).
Also a fullscreen color swatch will probably not be able to deal
gracefully with things like OLED screens which might have automatic
power limiting depending on displayed content and such, which could
botch the measurements in a way that makes the resulting profile useless.
It certainly won't work for guided adjustment (unless you re-implement
UI), where the feedback displayed in the UI is supposed to guide the
user through adjustments to (e.g.) monitor controls he/she needs to make.
Oh, ok, that's why. We could as easily have the compositor show the
color swatch only on a part of the output and leave the rest of the
area for normal use.

However, if that is done with a special protocol so that the compositor
actually knows this is the profiling color swatch, it can make sure
other windows cannot interfere. It could be like the color swatch was
on an always-on-top overlay. You cannot do that any other way from a
Wayland client.

And if uniform color for the swatch is all you need, the protocol could
simply take 3 numbers for the color instead of an image buffer. Then
people would not get the urge to abuse this interface for e.g.
application splash screens.
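A request carrying three bare channel values could look something like the following Wayland protocol fragment. The interface, request and argument names are invented here purely for illustration; no such protocol exists:

```xml
<!-- Hypothetical sketch only: swatch color as three values, no buffer. -->
<interface name="zcms_calibration_surface_v1" version="1">
  <request name="set_swatch_color">
    <description summary="set the uncalibrated swatch color">
      Display this exact color on the profiled output, bypassing any
      LUT, color transform or blending in the compositor.
    </description>
    <arg name="red" type="uint" summary="16-bit red channel value"/>
    <arg name="green" type="uint" summary="16-bit green channel value"/>
    <arg name="blue" type="uint" summary="16-bit blue channel value"/>
  </request>
</interface>
```

Since no wl_buffer is ever attached, there is simply nothing to abuse for splash screens.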


Thanks,
pq
Carsten Haitzler (The Rasterman)
2017-01-14 02:24:14 UTC
Permalink
Post by Pekka Paalanen
On Fri, 13 Jan 2017 16:37:57 +0100
Post by Richard Hughes
.. also guarantee that nothing else will be shown on that specific output
FWIW, I'd be fine just displaying a color swatch on the entire screen
with no window decorations; the only reason we use a composited window
in gnome-color-manager is so the user knows where to put the sensor.
Well, that will work for profiling, as long as you don't mind not being
able to see progress information (which I would consider a real drawback).
Besides it's surely a bit awkward that the computer then cannot be as
easily used differently in the meantime if it doesn't have a second
monitor connected (not that I'd recommend it, but I often have a music
player open during measurements and sometimes it's nice to have the
ability to adjust the playlist while lengthy measurements are still
running, to give an example. Also who am I to tell people with their new
gigantic 4K 40" desktop monitor that they can't use the rest of the
available real estate while a comparatively tiny measurement swatch is
displayed in the center?).
Also a fullscreen color swatch will probably not be able to deal
gracefully with things like OLED screens which might have automatic
power limiting depending on displayed content and such, which could
botch the measurements in a way that makes the resulting profile useless.
It certainly won't work for guided adjustment (unless you re-implement
UI), where the feedback displayed in the UI is supposed to guide the
user through adjustments to (e.g.) monitor controls he/she needs to make.
Oh, ok, that's why. We could as easily have the compositor show the
color swatch only on a part of the output and leave the rest of the
area for normal use.
However, if that is done with a special protocol so that the compositor
actually knows this is the profiling color swatch, it can make sure
other windows cannot interfere. It could be like the color swatch was
on an always-on-top overlay. You cannot do that any other way from a
Wayland client.
And if uniform color for the swatch is all you need, the protocol could
simply take 3 numbers for the color instead of an image buffer. Then
people would not get the urge to abuse this interface for e.g.
application splash screens.
i kind of like the idea of a special protocol with 32bit ints per rgb etc. to
say "display this exact uncalibrated color as-is without luts or anything else
in the way"... but apply it to a separate toplevel window/surface (and put your
guided ui controls in another window) or... use a subsurface.
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com
Kai-Uwe
2017-01-14 10:52:25 UTC
Permalink
Post by Carsten Haitzler (The Rasterman)
Post by Pekka Paalanen
Oh, ok, that's why. We could as easily have the compositor show the
color swatch only on a part of the output and leave the rest of the
area for normal use.
However, if that is done with a special protocol so that the compositor
actually knows this is the profiling color swatch, it can make sure
other windows cannot interfere. It could be like the color swatch was
on an always-on-top overlay. You cannot do that any other way from a
Wayland client.
And if uniform color for the swatch is all you need, the protocol could
simply take 3 numbers for the color instead of an image buffer. Then
people would not get the urge to abuse this interface for e.g.
application splash screens.
i kind of like the idea of a special protocol with 32bit ints per rgb etc. to
say "display this exact uncalibrated color as-is without luts or anything else
in the way"... but apply it to a separate toplevel window/surface (and put your
guided ui controls in another window) or... use a subsurface.
+1 for subsurface. The compositor can even take over responsibility for
moving the associated window to the desired output, in case the
underlying application cannot manage or handle that itself (e.g. for
mirrored outputs).
Carsten Haitzler (The Rasterman)
2017-01-15 02:47:29 UTC
Permalink
Post by Kai-Uwe
Post by Carsten Haitzler (The Rasterman)
Post by Pekka Paalanen
Oh, ok, that's why. We could as easily have the compositor show the
color swatch only on a part of the output and leave the rest of the
area for normal use.
However, if that is done with a special protocol so that the compositor
actually knows this is the profiling color swatch, it can make sure
other windows cannot interfere. It could be like the color swatch was
on an always-on-top overlay. You cannot do that any other way from a
Wayland client.
And if uniform color for the swatch is all you need, the protocol could
simply take 3 numbers for the color instead of an image buffer. Then
people would not get the urge to abuse this interface for e.g.
application splash screens.
i kind of like the idea of a special protocol with 32bit ints per rgb etc.
to say "display this exact uncalibrated color as-is without luts or
anything else in the way"... but apply it to a separate toplevel
window/surface (and put your guided ui controls in another window) or...
use a subsurface.
+1 for subsurface. The compositor can even take over responsibility for
moving the associated window to the desired output, in case the
underlying application cannot manage or handle that itself (e.g. for
mirrored outputs).
indeed as it knows the parent surface etc. and thus can "do the right thing"
without the calibration app needing to know or care at all.

in the end a calibrator really just needs to present some data (unmolested by
gamma luts/color correction/blending etc.) onto a display so some colorimeter
device dangling on the screen can read it and feed back what it sees. using
this data, produce a set of calibration data a compositor (or an app) can use
later that would be applicable to that monitor.
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com
Pekka Paalanen
2017-01-13 16:03:24 UTC
Permalink
On Fri, 13 Jan 2017 15:56:50 +0100
[ Bunch of replies to different posts crammed into one, apologies in
advance. ]
Post by Pekka Paalanen
Would it not be simpler to just say "I'm doing calibration now, make
sure nothing interferes"?
Sure, I just hope that "I'm doing calibration now, make sure nothing
interferes" still allows conventional application UI (e.g. using UI
frameworks like Gtk3, Qt etc) to be visible, because otherwise it would
effectively block users from interacting with said UI (going by your
comment "I also forgot to mention that surfaces with the
cms_calibration_surface role, when actually presented, would also
guarantee that nothing else will be shown on that specific output"
somehow sounds like it would not, but maybe I'm misunderstanding what
you mean by "output". I would certainly not be enthused if I had to
low-level re-implement parts of the UI I currently have - in fact I can
tell you right now that it would never, ever happen if it's not as
simple as writing for any of the common UI frameworks).
Hi,

my idea for a user story would be something like this:

- user starts a calibration app and clicks "profile output HDMI 2"

- optional: the compositor shows a dialog: "Application Foo wants to
profile output HDMI 2. Allow / Deny ? You can press Esc to abort
profiling after it starts."

- The output HDMI 2 gets filled with the profiling pattern according
the application requests, and literally nothing else will show up on
that output.

- If the user wants to abort the profiling run before it completes, he
presses Esc, which the compositor grabs and restores HDMI 2 to normal.

- If the profiling finishes by itself, HDMI 2 is restored to normal
automatically.

- The profiling application will know what happened in all cases.

How's that sound? I'm running blind here, because I've never used a
profiling app.

Would you really need the user to interact with the UI while the
profiling pattern is shown?

If you show any UI bits on the profiled output, how do you avoid the UI
affecting the profiling results? Keep in mind the window positioning
limitations in Wayland desktop.

Other outputs should remain normal during profiling, but of course
there might be only one output connected.
Put bluntly, but I agree as well. A "calibration/profiling/measurement"
application is not more of a "special-purpose client" than, say, a CAD
application, a code editor, or an application that lets you create 3D
scenes of rotating bunnies wrapped around bunnies (scnr), ... etc.
Graeme just managed to explain to me why the profiling app needs access
to the VideoLUT (see [1]) which is a global resource. That makes the
profiling app a very special case on Wayland, because on Wayland we
very very carefully avoid unconditional access to such global
resources, so that apps cannot take the user or the desktop hostage or
mess it up even if they tried to(*). Changing the VideoLUT will mess up
all other windows on that output, and the profiling app will not even
tolerate any other windows on that output at the same time.

Any application you listed does not require similar access to a global
resource. A profiling app OTOH cannot use the normal content delivery
paths because as Graeme explained they often do not reach the full
precision without changing the VideoLUT.

Another example of where Wayland denies access to global resources is
that an application cannot make an input grab at will. Input grabs are
always conditional, and breakable. We should have similar behaviour
also with "screen grabs" like these.

[1] https://lists.freedesktop.org/archives/wayland-devel/2017-January/032616.html

(*) The classic example of messing up the user's desktop is games
changing the X server video mode, and for one reason or another (don't
care, bug, broken, crashed) never restores it to original, or even if
it does restore, all your icons on the desktop get squashed in the
top-left corner. That is one reason why Wayland does not have a RandR
interface.
Post by Pekka Paalanen
I'm also thinking it would really help a lot if you and
others contributing to this thread were able
to procure something like a X-Rite i1 display Pro,
run up the manufacturer provided software on
MSWindows or OS X to get a feel for what it does,
then fire up DisplayCAL+ArgyllCMS on Linux/X11
and take it for a spin.
(Another path would be to obtain one of Richard's
ColorHug's, but I think seeing how the commercial
software operates as well, would add a broader perspective.)
Even watching some videos of a typical calibration/profiling workflow on
YouTube may already help.
That's an excellent idea! Could you recommend some, so I don't pick a
random "oh, but they're doing it wrong" video?


Thanks,
pq
Chris Murphy
2017-01-13 19:39:44 UTC
Permalink
Post by Pekka Paalanen
That's an excellent idea! Could you recommend some, so I don't pick a
random "oh, but they're doing it wrong" video?
I'd say probably none of them are doing it wrong, but they come with
editorialization of what's happening that they can't prove (and are
often wrong about). So either watch them with the audio muted, or just
take their conclusions with a grain of salt. For example, I often hear
calibration described as fixing coarse problems with a display, and
profiling as fixing remaining inaccuracies after calibration. Wrong. Profiling is
just measuring and recording those measurements, it says nothing about
the rightness or wrongness of the display. It's just a bidirectional
lookup: RGB to XYZ (or Lab) and XYZ (or Lab) to RGB. The profile
itself does not change a display's behavior, it takes two or more
profiles to define a transform, and it's that transform that
(indirectly) moderates the display's behavior; or even the
application's behavior for that matter.
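Chris's "bidirectional lookup" can be sketched with the one idealized case where the RGB-to-XYZ mapping is known in closed form: the standard sRGB (D65) primaries matrix. A real display profile is built from measurements of the actual monitor rather than this ideal matrix:

```python
# A display profile is essentially a measured RGB <-> XYZ mapping.
# Sketch of the forward direction using the published sRGB/D65 matrix;
# input is linear-light RGB in [0, 1].

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def linear_rgb_to_xyz(rgb):
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in SRGB_TO_XYZ)

# White (1, 1, 1) lands on the D65 white point, approx (0.9505, 1.0, 1.089)
print(linear_rgb_to_xyz((1.0, 1.0, 1.0)))
```

A transform between two devices then chains one profile's forward lookup with the other's inverse, which is exactly why a single profile on its own changes nothing on screen.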
--
Chris Murphy
Benoit Gschwind
2017-01-14 09:32:43 UTC
Permalink
Hello Pekka,

Your idea is mostly correct, but I have a few comments. Some elements of
context: on my side I do not do photo editing at all, but I calibrate and
profile my monitor often to get a similar visual experience from one
computer to another.
Post by Pekka Paalanen
On Fri, 13 Jan 2017 15:56:50 +0100
[ Bunch of replies to different posts crammed into one, apologies in
advance. ]
Post by Pekka Paalanen
Would it not be simpler to just say "I'm doing calibration now, make
sure nothing interferes"?
Sure, I just hope that "I'm doing calibration now, make sure nothing
interferes" still allows conventional application UI (e.g. using UI
frameworks like Gtk3, Qt etc) to be visible, because otherwise it would
effectively block users from interacting with said UI (going by your
comment "I also forgot to mention that surfaces with the
cms_calibration_surface role, when actually presented, would also
guarantee that nothing else will be shown on that specific output"
somehow sounds like it would not, but maybe I'm misunderstanding what
you mean by "output". I would certainly not be enthused if I had to
low-level re-implement parts of the UI I currently have - in fact I can
tell you right now that it would never, ever happen if it's not as
simple as writing for any of the common UI frameworks).
Hi,
- user starts a calibration app and clicks "profile output HDMI 2"
- optional: the compositor shows a dialog: "Application Foo wants to
profile output HDMI 2. Allow / Deny ? You can press Esc to abort
profiling after it starts."
optional but recommended; this avoids abuse of the protocol for
splash screens. This dialog should also grant access to change the
hardware settings (VideoLUT and other things).
Post by Pekka Paalanen
- The output HDMI 2 gets filled with the profiling pattern according
the application requests, and literally nothing else will show up on
that output.
As explained in another e-mail[1], the user may want to use unused space
to do something else. I do it often because: (1) calibrating takes quite
a long time, (2) my requirements are not very high in terms of accuracy,
(3) the software that performs the calibration may want to show some
calibration/profiling progress and/or feedback, or (4) the software that
performs the calibration may want to show a GUI button to pause/resume the
calibration.

Another random remark is that, as far as I understand, the calibration
process may affect all monitors, depending on the hardware, because it
may expose only one VideoLUT shared by all outputs.
Post by Pekka Paalanen
- If the user wants to abort the profiling run before it completes, he
presses Esc, which the compositor grabs and restores HDMI 2 to normal.
- If the profiling finishes by itself, HDMI 2 is restored to normal
automatically.
- The profiling application will know what happened in all cases.
How's that sound? I'm running blind here, because I've never used a
profiling app.
You are mostly correct in the approach, imo.
Post by Pekka Paalanen
Would you really need the user to interact with the UI while the
profiling pattern is shown?
If you show any UI bits on the profiled output, how do you avoid the UI
affecting the profiling results? Keep in mind the window positioning
limitations in Wayland desktop.
Other outputs should remain normal during profiling, but of course
there might be only one output connected.
Put bluntly, but I agree as well. A "calibration/profiling/measurement"
application is not more of a "special-purpose client" than, say, a CAD
application, a code editor, or an application that lets you create 3D
scenes of rotating bunnies wrapped around bunnies (scnr), ... etc.
Graeme just managed to explain to me why the profiling app needs access
to the VideoLUT (see [1]) which is a global resource. That makes the
profiling app a very special case on Wayland, because on Wayland we
very very carefully avoid unconditional access to such global
resources, so that apps cannot take the user or the desktop hostage or
mess it up even if they tried to(*). Changing the VideoLUT will mess up
all other windows on that output, and the profiling app will not even
tolerate any other windows on that output at the same time.
Any application you listed does not require similar access to a global
resource. A profiling app OTOH cannot use the normal content delivery
paths because as Graeme explained they often do not reach the full
precision without changing the VideoLUT.
Another example of where Wayland denies access to global resources is
that an application cannot make an input grab at will. Input grabs are
always conditional, and breakable. We should have similar behaviour
also with "screen grabs" like these.
[1] https://lists.freedesktop.org/archives/wayland-devel/2017-January/032616.html
(*) The classic example of messing up the user's desktop is games
changing the X server video mode, and for one reason or another (don't
care, bug, broken, crashed) never restores it to original, or even if
it does restore, all your icons on the desktop get squashed in the
top-left corner. That is one reason why Wayland does not have a RandR
interface.
Post by Pekka Paalanen
I'm also thinking it would really help a lot if you and
others contributing to this thread were able
to procure something like a X-Rite i1 display Pro,
run up the manufacturer provided software on
MSWindows or OS X to get a feel for what it does,
then fire up DisplayCAL+ArgyllCMS on Linux/X11
and take it for a spin.
(Another path would be to obtain one of Richard's
ColorHug's, but I think seeing how the commercial
software operates as well, would add a broader perspective.)
Even watching some videos of a typical calibration/profiling workflow on
YouTube may already help.
That's an excellent idea! Could you recommend some, so I don't pick a
random "oh, but they're doing it wrong" video?
Thanks,
pq
_______________________________________________
wayland-devel mailing list
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
Best regards

[1]
https://lists.freedesktop.org/archives/wayland-devel/2017-January/032619.html

--
Benoit (blocage) Gschwind
Carsten Haitzler (The Rasterman)
2017-01-15 02:53:00 UTC
Permalink
Post by Kai-Uwe
Hello Pekka,
Your idea is mostly correct, but I have few comment. Some element of
context: on my side I do not do photo editing at all but I calibrate and
profile my monitor often to get a similar visual experience from a
computer to another.
Post by Pekka Paalanen
On Fri, 13 Jan 2017 15:56:50 +0100
[ Bunch of replies to different posts crammed into one, apologies in
advance. ]
Post by Pekka Paalanen
Would it not be simpler to just say "I'm doing calibration now, make
sure nothing interferes"?
Sure, I just hope that "I'm doing calibration now, make sure nothing
interferes" still allows conventional application UI (e.g. using UI
frameworks like Gtk3, Qt etc) to be visible, because otherwise it would
effectively block users from interacting with said UI (going by your
comment "I also forgot to mention that surfaces with the
cms_calibration_surface role, when actually presented, would also
guarantee that nothing else will be shown on that specific output"
somehow sounds like it would not, but maybe I'm misunderstanding what
you mean by "output". I would certainly not be enthused if I had to
low-level re-implement parts of the UI I currently have - in fact I can
tell you right now that it would never, ever happen if it's not as
simple as writing for any of the common UI frameworks).
Hi,
- user starts a calibration app and clicks "profile output HDMI 2"
- optional: the compositor shows a dialog: "Application Foo wants to
profile output HDMI 2. Allow / Deny ? You can press Esc to abort
profiling after it starts."
optional but recommended; this avoids abuse of the protocol for
splash screens. This dialog should also grant access to change the
hardware settings (VideoLUT and other things).
not sure the app should directly change the lut ... but should provide the
calibration data it gathered and let the compositor figure out how to program
the lut to do the correction. compositor could now store that data in any way it
likes for future re-use (next time it starts, auto-apply this correction as long
as that monitor is attached to that output etc.)
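The store-and-reapply idea here is little more than a table keyed by monitor identity. A sketch of the compositor side, with EDID as the key and every name hypothetical:

```python
# Sketch of Carsten's suggestion: the compositor, not the client, owns
# the hardware LUT. It stores calibration curves keyed by monitor
# identity (EDID here) and re-applies them on hotplug.

calibration_store = {}  # edid -> calibration curve

def save_calibration(edid, curve):
    """Called when a profiling app hands its result to the compositor."""
    calibration_store[edid] = curve

def on_monitor_connected(edid, program_hw_lut):
    """Called by the compositor on hotplug; re-applies a stored curve.
    Returns True if a curve was found and programmed."""
    curve = calibration_store.get(edid)
    if curve is not None:
        program_hw_lut(curve)
    return curve is not None
```

Keying by EDID (rather than by connector) is what makes the correction follow the monitor to a different output, which also sidesteps the shared-VideoLUT concern raised above: the compositor knows the full output topology and can refuse or merge conflicting curves.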
Post by Kai-Uwe
Post by Pekka Paalanen
- The output HDMI 2 gets filled with the profiling pattern according
the application requests, and literally nothing else will show up on
that output.
As explained in another e-mail[1], the user may want to use unused space
to do something else. I do it often because: (1) calibrating takes quite
a long time, (2) my requirements are not very high in terms of accuracy,
(3) the software that performs the calibration may want to show some
calibration/profiling progress and/or feedback, or (4) the software that
performs the calibration may want to show a GUI button to pause/resume the
calibration.
Another random remark is that, as far as I understand, the calibration
process may affect all monitors, depending on the hardware, because it
may expose only one VideoLUT shared by all outputs.
this is why i think the video lut should be hidden from client view entirely as
above. :)
Post by Kai-Uwe
Post by Pekka Paalanen
- If the user wants to abort the profiling run before it completes, he
presses Esc, which the compositor grabs and restores HDMI 2 to normal.
- If the profiling finishes by itself, HDMI 2 is restored to normal
automatically.
- The profiling application will know what happened in all cases.
How's that sound? I'm running blind here, because I've never used a
profiling app.
You are mostly correct in the approach, imo.
Post by Pekka Paalanen
Would you really need the user to interact with the UI while the
profiling pattern is shown?
If you show any UI bits on the profiled output, how do you avoid the UI
affecting the profiling results? Keep in mind the window positioning
limitations in Wayland desktop.
Other outputs should remain normal during profiling, but of course
there might be only one output connected.
Put bluntly, but I agree as well. A "calibration/profiling/measurement"
application is not more of a "special-purpose client" than, say, a CAD
application, a code editor, or an application that lets you create 3D
scenes of rotating bunnies wrapped around bunnies (scnr), ... etc.
Graeme just managed to explain to me why the profiling app needs access
to the VideoLUT (see [1]) which is a global resource. That makes the
profiling app a very special case on Wayland, because on Wayland we
very very carefully avoid unconditional access to such global
resources, so that apps cannot take the user or the desktop hostage or
mess it up even if they tried to(*). Changing the VideoLUT will mess up
all other windows on that output, and the profiling app will not even
tolerate any other windows on that output at the same time.
None of the applications you listed requires similar access to a global
resource. A profiling app OTOH cannot use the normal content delivery
paths because, as Graeme explained, they often do not reach the full
precision without changing the VideoLUT.
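The precision argument can be illustrated with a bit of arithmetic (the bit
depths are just an example): a test value delivered through an 8-bit surface
is quantized to 1/255 steps, while writing a 10-bit VideoLUT entry directly
allows 1/1023 steps, so the LUT path can land measurably closer to the
intended target value.

```python
# Illustrative arithmetic only: compare the quantization error of a
# test value sent through an 8-bit content path vs. a direct 10-bit
# VideoLUT entry. The bit depths are assumptions for the example.

def through_8bit_surface(value):
    # value in [0.0, 1.0] rendered into an 8-bit framebuffer and back
    return round(value * 255) / 255

def via_10bit_lut(value):
    # the same value written directly as a 10-bit LUT entry
    return round(value * 1023) / 1023

target = 0.1234
err_surface = abs(through_8bit_surface(target) - target)
err_lut = abs(via_10bit_lut(target) - target)
# the direct LUT write reproduces the target with smaller error
assert err_lut < err_surface
```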
Another example of where Wayland denies access to global resources is
that an application cannot make an input grab at will. Input grabs are
always conditional, and breakable. We should have similar behaviour
also with "screen grabs" like these.
[1]
https://lists.freedesktop.org/archives/wayland-devel/2017-January/032616.html
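The "conditional, breakable grab" idea carries over naturally to a profiling
screen grab. A minimal sketch of the pattern, with hypothetical names (no such
Wayland interface exists):

```python
# Sketch of a conditional, breakable screen grab as described above:
# the compositor grants the grab but keeps the power to end it (user
# presses Esc, profiling finishes, output disappears), and the client
# is informed of the outcome in all cases. Names are hypothetical.

class ScreenGrab:
    def __init__(self, on_done):
        self.active = True
        self.on_done = on_done  # client callback, fired exactly once

    def finish(self, reason):
        # called by the compositor only; the client cannot prevent it
        if self.active:
            self.active = False
            self.on_done(reason)
```

The key property is that ending the grab is always the compositor's decision,
mirroring how input grabs on Wayland are conditional and breakable.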
(*) The classic example of messing up the user's desktop is games
changing the X server video mode and, for one reason or another (don't
care, bug, broken, crashed), never restoring it to the original; or even
if they do restore it, all your icons on the desktop get squashed into
the top-left corner. That is one reason why Wayland does not have a
RandR interface.
Post by Pekka Paalanen
I'm also thinking it would really help a lot if you and
others contributing to this thread were able
to procure something like a X-Rite i1 display Pro,
run up the manufacturer provided software on
MSWindows or OS X to get a feel for what it does,
then fire up DisplayCAL+ArgyllCMS on Linux/X11
and take it for a spin.
(Another path would be to obtain one of Richard's
ColorHug's, but I think seeing how the commercial
software operates as well, would add a broader perspective.)
Even watching some videos of a typical calibration/profiling workflow on
YouTube may already help.
That's an excellent idea! Could you recommend some, so I don't pick a
random "oh, but they're doing it wrong" video?
Thanks,
pq
_______________________________________________
wayland-devel mailing list
https://lists.freedesktop.org/mailman/listinfo/wayland-devel
Best regards
[1]
https://lists.freedesktop.org/archives/wayland-devel/2017-January/032619.html
--
Benoit (blocage) Gschwind
--
------------- Codito, ergo sum - "I code, therefore I am" --------------
The Rasterman (Carsten Haitzler) ***@rasterman.com