
Beautifying the tone mapped render #1915

Open
illwieckz wants to merge 4 commits into for-0.56.0/sync from illwieckz/tonemap-beautifying

Conversation

@illwieckz
Member

@illwieckz illwieckz commented Feb 27, 2026

  1. Set the contrast to 1.4 instead of 1.6; this better fits Unvanquished's needs.
  2. Implement dithering; this is enough to avoid the color banding that happens when the tone mapper contrast turns dark shadows to black. It doesn't prevent the blackening, but it hides the distracting visual artifacts by smoothing the banding. This is good enough to make the tone mapper usable on every map and to avoid a visual regression on dark shadows in gloomy room corner cases.
  3. Restore low lights in shadows blackened by the tone mapper contrast. Those shadows are so dark that restoring a bit of light there doesn't change the perceptual contrast; it looks very dark anyway, we just restore details lost by the tone mapper's shadow blackening.

I'll detail the work in comments.

With this, I no longer see specific corner cases that would make the tone mapped render look worse than the render without it. We can fully embrace tone mapping, which is very good and wanted, because we need tone mapping for adaptive lighting and we want adaptive lighting.

This includes the “tonemap before color conversion” fix as it is required to properly evaluate the renders produced by this branch:

Fixes:

@illwieckz illwieckz force-pushed the illwieckz/tonemap-beautifying branch from 14ae71d to 1223b10 on February 27, 2026 at 10:07
@slipher
Member

slipher commented Feb 27, 2026

What's an example where dithering is needed?

@illwieckz
Member Author

What's an example where dithering is needed?

Low lights, I'll give some examples later.

@illwieckz
Member Author

So, first, the contrast tweak.

In its GDC slides, Timothy Lottes used a contrast of 1.3.

Multiple implementations of the Lottes tone mapper out there use 1.6. This is very strong. It fits well in sunny scenes and other scenes with plenty of light, giving strong contrast to the rare dark details of the scene.

But it doesn't fit gloomy, dark rooms with a few rare bright elements. One example of such a scene is the chasm alien base.

Before (1.6):

unvanquished_2026-02-27_140635_000

After (1.4):

unvanquished_2026-02-27_140640_000

The 1.4 value fits those scenes better, and it still provides a cool and noticeable contrast effect on other scenes.
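For reference, the shape of the curve being tuned can be sketched in scalar form. This is a hedged Python sketch of the generic Lottes curve from the GDC talk, not the engine's GLSL; the shoulder, hdrMax, and midpoint values below are common defaults I assume for illustration, not necessarily Unvanquished's:

```python
def lottes_tonemap(x, contrast=1.4, shoulder=0.966,
                   hdr_max=8.0, mid_in=0.18, mid_out=0.267):
    """Generic Lottes curve: y = x^a / (x^(a*d) * b + c), per channel.

    b and c are fitted so mid_in maps to mid_out and hdr_max maps to 1.0.
    """
    a, d = contrast, shoulder
    mid_a, mid_ad = mid_in ** a, mid_in ** (a * d)
    hdr_a, hdr_ad = hdr_max ** a, hdr_max ** (a * d)
    b = (-mid_a + hdr_a * mid_out) / ((hdr_ad - mid_ad) * mid_out)
    c = (hdr_ad * mid_a - hdr_a * mid_ad * mid_out) / ((hdr_ad - mid_ad) * mid_out)
    return x ** a / (x ** (a * d) * b + c)
```

Raising the contrast exponent from 1.4 to 1.6 keeps the midpoint and white point fixed but pushes values below the midpoint further down, which is exactly the shadow blackening discussed here.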

{
	float t = threshold - ( i * ( threshold / 10 ) );

	bvec3 cutoff = lessThan(mapped, vec3(t));
Member

I don't know what r_toneMappingLowLightRestorationThreshold is trying to do, but it seems suspicious that it acts on each color channel individually. So like, you're not allowed to have a pure green color, you must have a legally mandated amount of red and blue.

Member Author

@illwieckz illwieckz Feb 27, 2026

There is no mandated amount of any channel.

If there is no green in the input color, there is no green in the mapped color, so blending a bit of the input color into the mapped color cannot create green color out of thin air.

The whole transformation (including the tone mapping) only uses the rendered colors as input, it doesn't create any color that doesn't exist first.

Member Author

@illwieckz illwieckz Feb 27, 2026

Just to make it clear, “mapped color” here is the color after the tone mapper operation. The tone mapper operation just does things like increasing contrast; it cannot create colors.

And the input color is the unmodified color, so it cannot create colors either, since it's not even modified.

Member

OK whatever, it can't be exactly 0, but if you have #03FF03 then the red and blue would be amplified.

Member

Apparently the base tone mapping works that way (per channel) so maybe it doesn't matter.

Member Author

Yes. If there is an amplification, it comes from the tone mapper. We may modify the tone mapper to only work on luminance, but that's outside of the scope of this PR, and my feature to restore low lights is working on what the tone mapper provides so that's not my concern right now.

Member Author

In fact, the whole reason why I implemented my low light restoration feature with a blend of half colors (input and mapped) is to avoid artifacts that can happen when working per channel. A strong cutoff would produce obvious artifacts at the frontier between where the restoration is done and where it isn't. But blending the two colors, especially since it is done on low values anyway, avoids the artifacts.

Member Author

@illwieckz illwieckz Feb 27, 2026

If we implement a luminance-based tone mapping, then the low light restoration can probably be a single step with a single mix with a binary cutoff. And the restoration would not tend to the input color but restore the exact input color.

That is, provided the low light color is low enough and the threshold is low enough for the frontier not to be seen (I believe they are).

@illwieckz
Member Author

So, dithering… The use case for it is low lights. Even when the sRGB lightmap carries enough precision to avoid dark blotches, such dark blotches may reappear as an unwanted byproduct of the tone mapper contrast.

I still use the same scene for my current local build of hangar28.

unvanquished_2026-02-27_141439_000

It may be hard to see depending on the display hardware, but there are many annoying dark blotches:

unvanquished_2026-02-27_141439_000-multiplied-annotations

If I do only one step of low light restoration (more on this later), fewer parts of the scene get blackened, but the amount of dark blotches increases. Those dark blotches can be reduced by doing more low light restoration steps, but dithering will smooth them out, whatever their amount.

unvanquished_2026-02-27_141618_000 unvanquished_2026-02-27_141618_000-multiplied-annotations

The dithering is smoothing them.

But dithering is problematic and to be honest I would prefer to avoid it.

It works by adding noise to the scene, so dithered screenshots look bad when scaled (moiré, tiles, or things like that can surface when scaling down), and it especially displeases JPEG encoders. I would like to avoid dithering if possible.

Maybe we can only apply dithering on low lights, so it would only look bad in places that would look bad anyway, and not pollute the whole screen.

Also, we can use other algorithms. I actually tested two algorithms yesterday, and I now notice the one I pushed is the most noticeable in screenshots. So we'd better use the other one.
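To illustrate the idea (not this branch's exact shader), here is a minimal Python sketch of ordered dithering applied only below a low light threshold, right before 8-bit quantization. The 4×4 Bayer matrix and the threshold value are illustrative assumptions:

```python
# Classic 4x4 Bayer ordered-dither matrix, values 0..15.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def quantize_dithered(value, x, y, low_light_threshold=0.0078):
    """Quantize a 0..1 channel to 8 bits, dithering only very low lights."""
    if value < low_light_threshold:
        # Offset by up to +/- half a quantization step, per pixel position,
        # so banding turns into a smoother-looking pattern.
        value += (BAYER4[y % 4][x % 4] / 16.0 - 0.5) / 255.0
    return max(0, min(255, round(value * 255.0)))
```

Brighter values pass through untouched, so screenshots only trade dithering artifacts against blackening artifacts in the areas that were problematic anyway.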

@sweet235
Contributor

sweet235 commented Feb 27, 2026

we need tone mapping for adaptive lighting and we want adaptive lighting

We'd have had adaptive lighting long ago if you had not rejected it in favour of your linear pipeline.

@slipher
Member

slipher commented Feb 27, 2026

How about a test case that I can try myself?

@illwieckz
Member Author

illwieckz commented Feb 27, 2026

we need tone mapping for adaptive lighting and we want adaptive lighting

We'd have had adaptive lighting long ago if you had not rejected it in favour of your linear pipeline.

@sweet235 I have good news for you! This is simply not true at all, so you don't have to worry at all! Isn't that nice? 🙂️

Maybe you don't know how those things interact, so let's just make it very clear so you can be reassured:

Adaptive lighting, tone mapping, and the linear pipeline are not competing with each other; they are meant to work together.

None of them has been rejected in favor of another, it would be meaningless to do so.

Also, maybe the misunderstanding extends to my intention, so let me make it very clear as well:

I never rejected adaptive lighting, be it in favor of anything else or for any other reason. I never rejected it, ever.

One cause of misunderstanding may have come from this thread:

It looks like many people deeply misunderstood my intention in that thread: the thread wasn't saying that the tone mapper was a waste, but that it was wasted. The nuance is very important! So yes, if someone reads it as meaning the tone mapper is a waste, it can sound like a rejection. But once you understand that what I was saying is that its potential was wasted, it means something very different.

So my thread led the author of the PR to close it. If my thread was misunderstood as saying the tone mapper was a waste, that is a logical action to take, because waste is meant to be rejected; and since the tone mapper is a requirement for adaptive lighting, misunderstanding my comments as a rejection of tone mapping would mean a rejection of adaptive lighting as a consequence.

But that's not what I meant.

Now, I believe we can pause and observe that we just misunderstood each other.

I didn't want to say that the tone mapper was a waste, but that it was wasted by either bugs or a configuration not yet fit for Unvanquished. The nuance is very important. In the second case (the real one), my concern was that I wanted tone mapping, but I was observing that it wasn't fully usable yet, that we needed more work, and I was regretting that it wasn't ready yet.

My strong wording wasn't because I strongly disagreed with it, but because I was strongly affected by it not being ready yet. That's very different: it means I strongly wanted it.

Tone mapping was always wanted, but for it to shine, more work was needed, that was what this thread was about.

Also, this thread was actually looking for things to fix to improve the tone mapper, and that includes fixing bugs. And you know what? There was indeed a bug! And it was even a bug I myself introduced! So all the harsh reactions to that thread actually discouraged me from investigating more and finding the bug I was looking for! Only some days ago did I find the courage to start investigating again.

The good news is that I found the bug and provided a fix for it:

Why would I spend precious resources to fix something I would have rejected? I simply never rejected those things.

Also, adaptive lighting is so unrejected that not only did I list it as something wanted in multiple places,

here in 2017:

and here in 2023:

but I never removed it from there, nor made a contradictory statement.

My mind has not changed on this topic, ever.

If you still had any doubt, I actually spent precious time yesterday rebasing the adaptive lighting patches so work can resume on them:

And you know, engine development is so active that such a rebase isn't an easy task. One should really want such a feature to spend such effort helping it in one way or another!

So, I can understand how you may have been misled into wrongly believing that adaptive lighting had been rejected by me in favor of the linear pipeline, but now you can see that all of this is not true; it was just a misunderstanding.

Wording can be misunderstood. Wording can be improved. Both writers and readers can make efforts to avoid misunderstanding. But once there is a misunderstanding (it happens!) and things are clarified after that, one should not stick to the misunderstanding.

It's fine if you misunderstood, such misunderstanding can happen. The misunderstanding may be on me, or on you, or on both of us in various ways, but that was a misunderstanding.

So now that I brought you all those proofs this is not true, and that I now know that you now know that it is not true, I expect you to not bring this wrong statement in the future anymore. Thanks in advance. 😉️

@illwieckz
Member Author

How about a test case that I can try myself?

I'll upload a build of the map soon.

@illwieckz
Member Author

I implemented partial dithering: dithering is now only applied on very low lights. So at worst, in screenshots, we trade dithering artifacts for blackening artifacts, no more.

@illwieckz
Member Author

I also implemented a r_showDithering cvar so we can see where it is applied:

unvanquished_2026-02-27_161700_001 unvanquished_2026-02-27_161719_001 unvanquished_2026-02-27_152427_000 unvanquished_2026-02-27_152412_001

@illwieckz
Member Author

Let's test JPEG (JPEG suffers a lot from dithering, plus GitHub display will downscale it, that's the nightmare case 😵️) :

unvanquished_2026-02-27_152546_000

@illwieckz
Member Author

illwieckz commented Feb 27, 2026

When tone mapping is enabled, we may even skip dithering when the color is already pitch black (0, 0, 0) before tone mapping (to skip the common/black color texture trick). Also, when tone mapping is enabled, we may only apply dithering under a certain threshold if and only if tone mapping darkened the color.

@illwieckz
Member Author

So, now, let's explain my low light restoration algorithm. First, the why:

The idea is that the contrast feeling produced by the tone mapper comes from the fact that there are strong lights and strong shadows.

But the darkening of shadows can blacken them, destroying the precision we had in the input before the tone mapper did its job.

The good news is that the range for which we then lack precision is small, it's basically the first values above zero.

Those values look very dark to the human eye, so them being that dark or black gives the same contrast feeling.

So if we can restore the input dark values, the contrast feeling is preserved, while the input precision in those areas is restored.

@illwieckz
Member Author

illwieckz commented Feb 27, 2026

Now, the how:

When working per-channel, doing a binary cutoff to shift from the mapped color value to the input color would produce artifacts around the frontier between the mapped colors and the restored input colors, as one color channel may be taken from the mapped colors, and another color channel from the input color, in the same pixel.

So the algorithm blends the input color with the mapped color. This avoids producing artifacts, because the channel difference, if it happens, is now low enough to be imperceptible. And all channels get at least half their value from the mapped colors, smoothing the frontier.

But since this only restores half of the input color (the mapped color is either black or lower than the input color anyway), if we do this multiple times (what I call restoration steps), blending half of the previous step with half of the input color, then more of the input color contributes to the pixel.

Since it is assumed that the mapped color is lower than the input color (that's the whole reason why we want a color restoration), and that the mapped color fed the algorithm at first, it should be impossible for the repeated blend steps to produce something brighter than the input color.

In the end, the restoration steps tend toward the input color. At some point the difference between the restored color and the input color becomes irrelevant because of precision limitations.

I observed that 5 steps basically fully restore the input color, or at least do it well enough for the eye. I haven't checked by comparing screenshots, just by looking at how many obvious dark blotches the scenes had. The purpose is to please perception anyway.

I also observed that when dithering is enabled, 3 steps produce results similar to 5. What's annoying about dark blotches is that they are obvious, so even if some stray pixels remain blackened with only three steps, what matters is that the whole area received light anew, and the dithering smoothing would produce the same feeling to the brain anyway, as the averaged-then-smoothed perceived color would now be the same.

Also, as a tweak, the threshold is lowered a bit at each step, so every step moves its frontier a bit, adding to the smoothing of the whole frontier between the mapped colors and the restored colors.
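The steps above, together with the per-step threshold lowering visible in the diff hunk quoted earlier (t = threshold - ( i * ( threshold / 10 ) )), can be sketched per channel in Python. This is my scalar reading of the algorithm, not the actual GLSL; the default threshold and step count are illustrative:

```python
def restore_low_lights(input_color, mapped_color, threshold=0.0078, steps=5):
    """Blend the pre-tonemap color back into blackened low lights."""
    restored = list(mapped_color)
    for i in range(steps):
        # Each step lowers the cutoff a bit, moving the frontier around
        # so it gets smoothed instead of forming a visible edge.
        t = threshold - i * (threshold / 10)
        for ch in range(3):
            if restored[ch] < t:
                # Half-blend toward the input color; repeated steps tend
                # toward the input color without ever exceeding it.
                restored[ch] = 0.5 * restored[ch] + 0.5 * input_color[ch]
    return restored
```

After 5 steps a channel blackened to 0 recovers 1 − 0.5⁵ ≈ 97% of its input value (when it stays under the shrinking threshold), and a channel that was 0 in the input stays 0, so no color is created out of thin air.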

@illwieckz
Member Author

illwieckz commented Feb 27, 2026

How about a test case that I can try myself?

I'll upload a build of the map soon.

@slipher: https://dl.illwieckz.net/b/unvanquished/pkg/preview/map-hangar28_0-20251031-224243-e332b37.dpk

Note that some shaders require that fix:

The main scene I use for testing this PR is setviewpos 1030 0 -600 140 0. No texture on that scene suffers from the #1918 bug.

@illwieckz illwieckz force-pushed the illwieckz/tonemap-beautifying branch from f158fc6 to 27bb6b6 on February 27, 2026 at 17:08
@slipher
Member

slipher commented Feb 28, 2026

I'm having some trouble finding any differences in my screenshot test set when changing the r_dithering or r_toneMappingLowLightRestorationSteps cvars. I tried making a graph showing the results of changing the steps cvar, with other tonemapping variables at their default values for the branch. If I haven't made any mistakes entering the formulas, the "low light" code only affects values less than 2 on a 0-255 scale. (0.0078 on the graph corresponds to 2). That would explain why I don't see anything...
tonemap curves 1915

@slipher
Member

slipher commented Feb 28, 2026

I guess the point is that 2 in the linear scale converts to a much larger value in the sRGB scale. So the map must use linear blending as well as having dark areas for the code to have any effect. It really was impossible to test without illwieckz's test map! 😆
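The conversion can be checked with the standard sRGB encoding function (a quick Python check, not code from the branch): a linear value of 2/255 encodes to roughly 22/255, so the affected range is noticeably wider on screen than the linear graph suggests.

```python
def linear_to_srgb(l):
    """Standard sRGB (IEC 61966-2-1) encoding transfer function."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055
```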

@slipher
Member

slipher commented Feb 28, 2026

New graph since the previous one didn't have enough data points. It would seem preferable not to have the function zigzagging up and down.

image
