Author Topic: Time to ditch sRGB/Linear as default (?)  (Read 33414 times)

2017-02-20, 11:33:06
Reply #30

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2562
  • Just another user
You see, this is what I meant by completely changing the mindset. You still perceive camera response as some sort of post-processing option, and you still perceive linear sRGB as the right default, the baseline. But the point here is that sRGB is simply not the right color space in which to display linearly rendered light. You have some input, in this case a computer-generated image, and you want to display it on a monitor in a way that resembles what the human eye sees in the real world as closely as possible. Digital cameras are already very good at this, but most renderers are not, yet...

Back in the day, people were rendering in the wrong Gamma 1.0 space, then someone came up with the linear workflow, and lots of people popped up saying "Why change something that works, why introduce new confusion?" - and then LWF slowly became the standard. This is simply another step in the evolution of displaying computer-generated lighting and shading data in a way that is most natural to the human eye.
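For readers who missed the LWF era: the workflow boils down to computing light in linear values and applying the sRGB transfer curve only for display. A minimal sketch of the standard sRGB encode/decode formulas (generic math, not any renderer's actual code):

```python
def linear_to_srgb(c):
    """Encode a linear-light value (0..1) with the sRGB transfer curve."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_to_linear(c):
    """Decode an sRGB-encoded value back to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# 18% mid-gray in linear light lands near the middle of the display range:
print(round(linear_to_srgb(0.18), 3))  # → 0.461
```

Rendering in "Gamma 1.0" meant skipping that encode (or texturing with already-encoded values), which is exactly the mismatch LWF fixed.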

You are still too preoccupied with mathematical data rather than visual data. You want the output to be linear by default, because you are used to adding the final photorealism to the output yourself. But it's also important to think about non-technical users, newbies and future users. You'll hardly find anyone trying to strip all the image processing algorithms out of a digital camera they bought just to get mathematically pure data out of it.

I think that every step taken towards reducing the number of manual steps newbie/migrating users need to achieve ultimate photorealism is a good step.

If we got new defaults, there would still probably be some legacy mechanism to render old scenes exactly as they were.

Matching old renders is a rather rare request, and definitely not something most users do on a daily basis. Why would you match something old when you can make it look better? I can understand it being a client request, but if someone has a very niche client base with very specific requests, the renderer's general defaults should not be built around that.

2017-02-20, 11:44:39
Reply #31

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2562
  • Just another user
On a more general note, I'd dare to say that over 95% of the Corona user base does not produce images by compositing separate shading elements (Refl, Refr, Diffuse, SSS, Self-illum, etc.), so nothing would be lost. Specialized render elements like velocity, world position, normals and such are already excluded from tone mapping by default. This could probably be taken one step further by excluding all CESSENTIAL render elements from tone mapping too, so that only the beauty pass would be affected.

But if 95% of the user base does not composite shading elements, and a similar portion uses Corona as a virtual digital camera, why should we conform the defaults to the ~5% minority?

Anyone who has ever done successful shading-element compositing knows that things need to be kept linear, so anyone who does it will know to linearize the image (remove tone mapping) before proceeding to the compositing stage. Whereas most new and future users who aren't technically minded - perhaps ex-photographers - will not enter the realm of CG rendering knowing there are extra steps to take to achieve photorealism. If we make it behave more like a digital camera by default, good results out of the box will be much closer within their reach.

Right now, Corona is not yet a very VFX-capable renderer. And by the time it becomes one, I am quite confident that viewing rendered images through a camera response rather than plain sRGB will be a well-established standard (in the same way I predicted 8 years ago that PBR would become the standard - even in games - while all the Blinn, Phong and ambient occlusion heroes were mocking me :) )
« Last Edit: 2017-02-20, 11:58:20 by Rawalanche »

2017-02-20, 11:49:56
Reply #32

Dionysios.TS

  • Active Users
  • **
  • Posts: 516
Like +1
Head of Imaging
Renzo Piano Building Workshop / Paris

https://dionysios.myportfolio.com/

2017-02-20, 11:53:59
Reply #33

PROH

  • Active Users
  • **
  • Posts: 973
Can't wait to see this in Corona :)

2017-02-20, 12:27:02
Reply #34

pokoy

  • Active Users
  • **
  • Posts: 1392
I haven't used 32-bit channels and probably never will. That's not to say others don't need this, as it is one of the standard ways of working, and yes, experiences vary from person to person. Please respect this and never judge based on your personal experience only.

The problem with a camera response is that there isn't one camera response across all camera models - quite the contrary, every camera has its own set of algorithms, and sometimes they're made different only to make sure the price tag is justified. Camera vendors - same thing: each of them has a processed look to make sure your clients know what they're buying (Nikon and Canon have distinctive looks, kept that way artificially so as not to alienate their customers when introducing new sensors). So what kind of progress is it to arbitrarily impose a certain way of processing rendered images when everything we need to customize them to our liking is already there?

I'm all for new tone mapping algorithms; in fact, Filmic Shadows was a wonderful addition. I'd only want this if I can get back to the old way with a click or a setting in the defaults. Again, the simple solution would be to introduce a 'Make it photorealistic' button that sets the parameters according to whatever algorithm you come up with and leaves everything else as it is.

As for legacy settings - well, this is a mess. 1.2, 1.3, 1.4 and 1.5 all introduced new things that need to carry code from earlier versions in order to render legacy results, and I assume it must be a nightmare to maintain.

And as for always-photorealistic-out-of-the-box: that's a holy grail promise you will not be able to live up to, as it always relies on the artist's skill and eye to properly set up a scene, lights, materials etc. How many renders have we seen from top-end renderers that are simply crap because some people don't go the extra mile to polish their materials, or actually work on their image until it becomes really good? That's something you won't be able to overcome with some math behind it.

I really don't want to dismiss the idea just for the sake of keeping everything as it is. It's just that I'm not convinced why it's better than what we have.

2017-02-20, 12:41:46
Reply #35

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2562
  • Just another user
Quote from: pokoy on 2017-02-20, 12:27:02

I haven't used 32-bit channels and probably never will. That's not to say others don't need this, as it is one of the standard ways of working, and yes, experiences vary from person to person. Please respect this and never judge based on your personal experience only.

The problem with a camera response is that there isn't one camera response across all camera models - quite the contrary, every camera has its own set of algorithms, and sometimes they're made different only to make sure the price tag is justified. Camera vendors - same thing: each of them has a processed look to make sure your clients know what they're buying (Nikon and Canon have distinctive looks, kept that way artificially so as not to alienate their customers when introducing new sensors). So what kind of progress is it to arbitrarily impose a certain way of processing rendered images when everything we need to customize them to our liking is already there?

I'm all for new tone mapping algorithms; in fact, Filmic Shadows was a wonderful addition. I'd only want this if I can get back to the old way with a click or a setting in the defaults. Again, the simple solution would be to introduce a 'Make it photorealistic' button that sets the parameters according to whatever algorithm you come up with and leaves everything else as it is.

As for legacy settings - well, this is a mess. 1.2, 1.3, 1.4 and 1.5 all introduced new things that need to carry code from earlier versions in order to render legacy results, and I assume it must be a nightmare to maintain.

And as for always-photorealistic-out-of-the-box: that's a holy grail promise you will not be able to live up to, as it always relies on the artist's skill and eye to properly set up a scene, lights, materials etc. How many renders have we seen from top-end renderers that are simply crap because some people don't go the extra mile to polish their materials, or actually work on their image until it becomes really good? That's something you won't be able to overcome with some math behind it.

I really don't want to dismiss the idea just for the sake of keeping everything as it is. It's just that I'm not convinced why it's better than what we have.

I don't get the point about cameras. Yes, they all have different curves, but they are all superior to CG-rendered light and shading interpreted in sRGB - that's the issue here. And while the image processing curves of different cameras are there to make an already-real image pop, in CG we don't even have that reality baseline, because we display synthetic, generated light and shading through a response curve that differs from the way the human eye is used to seeing reality captured by digital devices, in the form of digital photographs.

The whole idea of the button is problematic in that you basically have a wrong way of displaying something, with the right way hidden behind a button. You wouldn't expect any rendering software these days to ship with the Linear Workflow disabled by default, with a button somewhere in the settings that says "click here to enable LWF", now would you?

These days, LWF is simply the standard, and camera response is an evolution of that standard. Again, it's wrong to perceive it as an additional option; it's intended to be an update of the default.

I also did not claim that this change would produce photorealistic images out of the box. What I (obviously) intended to say is that whatever anyone renders would still, by default, be closer to photorealism than with the old workflow. Even if a complete noob just put a gray cube on a gray plane and lit it with a Corona spherical light, it would still look a bit closer to what the same scene would look like if it were re-made in the real world and shot with a digital camera.

What's also important is that most people do not realize that in order to successfully translate material properties from photos into 3D, you first need to at least roughly match your tonality to an average camera response; otherwise it gets really hard to nail material properties by eye when translating them from a photo to a CoronaMtl. Even I found this out relatively recently, and ever since, I start every scene with Highlight compression at 1.75, Filmic shadows at 0.5 and Contrast at 2. This is knowledge most people don't have. It would simply increase their success rate without them having to actively research it and reach this conclusion themselves, which took me personally several years. I wish I had known this earlier... No, actually... I wish there had been a renderer that did it for me earlier :)
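To give a feel for what a "highlight compression" control does, here is a generic Reinhard-style rolloff. This is purely an illustration of the curve family - it is not Corona's actual formula, and the k parameter only loosely stands in for a slider value like the 1.75 mentioned above:

```python
def compress_highlights(x, k=1.75):
    """Map an unbounded linear value into [0, 1).
    Hypothetical stand-in for a highlight-compression slider;
    NOT Corona's actual math."""
    return x / (x + k)

# A linear pixel at 8.0 (say, a bright window) no longer clips to white...
print(round(compress_highlights(8.0), 3))   # → 0.821
# ...at the cost of darkening midtones, which a contrast control then offsets:
print(round(compress_highlights(0.18), 3))  # → 0.093
```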




2017-02-20, 12:50:05
Reply #36

pokoy

  • Active Users
  • **
  • Posts: 1392
OK. Let's presume you replace the current state (which I expect) and someone saves out a linear image for comp - how will they know what Corona did to the image, to be able to reproduce it in comp? Will it be a black box with no info on what happened behind the scenes? I assume it will be. So instead of users asking how to use tone mapping, you'll now get questions from people asking how to match the VFB.

Now, what I'd really like to see is comparisons of the old way vs. the new way. That would really help.

2017-02-20, 14:40:49
Reply #37

agentdark45

  • Active Users
  • **
  • Posts: 455
Rawalanche, at the end of the day Corona is your software, and the direction it goes in is ultimately up to you. If you (as well as the majority of others in this thread) think this is the way things should be done, then please do it! As a general rule, people don't take kindly to change, even if the newer option is objectively better.

As you've noted, 95% of people in the CG business are only concerned with producing visually pleasing images with the least amount of time/effort required to get there. Streamlining that process is a plus, not a negative. People who are hung up on antiquated workflows will simply have to adapt to changing times (and you've even stated there will be a legacy pure-linear option for compositing, so I have zero idea why anyone is fighting you on this).
Vray who?

2017-02-20, 14:46:48
Reply #38

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2562
  • Just another user
Quote from: agentdark45 on 2017-02-20, 14:40:49

Rawalanche, at the end of the day Corona is your software, and the direction it goes in is ultimately up to you. If you (as well as the majority of others in this thread) think this is the way things should be done, then please do it! As a general rule, people don't take kindly to change, even if the newer option is objectively better.

As you've noted, 95% of people in the CG business are only concerned with producing visually pleasing images with the least amount of time/effort required to get there. Streamlining that process is a plus, not a negative. People who are hung up on antiquated workflows will simply have to adapt to changing times (and you've even stated there will be a legacy pure-linear option for compositing, so I have zero idea why anyone is fighting you on this).

Haha, definitely not mine, but Ondra's. I just occasionally have a say in the UI :)

2017-02-20, 16:04:57
Reply #39

lasse1309

  • Active Users
  • **
  • Posts: 70
just saying:

isn't the LUT section already pointing in the direction that the FStorms and Octanes etc. are working in?
It would just be consistent: if photorealism is the ultimate goal, why shouldn't the rendered image behave like a camera picture?

tbh1: I don't actually care which "workflow" is behind the image I am working on, as long as it looks good in the end - and
I would appreciate nothing more than an idiot-proof solution. There are so many things you can screw up in an image/project; render settings
shouldn't be among them (I can hear the old hands crying "you're making rendering too easy" already)...

tbh2: most people don't work on Iron Man, X-Men or Wolverine - in large studios with long pipelines, where it might make sense not to go five steps back to the shading department
to change the specular value of some random item and rather "fix it in post", whether or not that breaks the image, just from a practical point of view...
I would assume most users sit in small teams doing fast-turnaround jobs (not at X-Men quality) and would be happy if there were a "render-cool button".


So would that be said button? The "instant-photoreal button" we all dreamt of through all those long, sad years? Make rendering finally great again, guys! :D



2017-02-20, 16:53:44
Reply #40

Ludvik Koutny

  • VIP
  • Active Users
  • ***
  • Posts: 2562
  • Just another user
Quote from: pokoy on 2017-02-20, 12:50:05

OK. Let's presume you replace the current state (which I expect) and someone saves out a linear image for comp - how will they know what Corona did to the image, to be able to reproduce it in comp? Will it be a black box with no info on what happened behind the scenes? I assume it will be. So instead of users asking how to use tone mapping, you'll now get questions from people asking how to match the VFB.

Now, what I'd really like to see is comparisons of the old way vs. the new way. That would really help.

If someone saves out an EXR image for compositing with the camera response tone mapping baked in, and opens it in Fusion, or Nuke, or anything that loads EXRs with the correct gamma, they will get an exact 1:1 match to what they had in the Corona VFB. They won't need to know what happened to the image as long as they get the same thing as in the Corona VFB. A problem would arise only if they tried to composite the CESSENTIAL render elements, where they would get a different result.

Now, first of all, I doubt most new users will ever encounter this. As I already mentioned, this workflow is becoming obsolete. Secondly, new users most likely won't be able to tone map in post, because no compositing software ships by default with a node whose tone mapping matches the Corona VFB's. The Corona VFB has quite refined tone mapping tools compared to Fusion/Nuke/AE and so on. There won't be any black box; users will know exactly what's going on just by looking at the tone mapping settings in the Corona VFB.
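The CESSENTIAL caveat comes down to simple math: any tone curve is nonlinear, so tone-mapped elements no longer sum to the tone-mapped beauty pass. A quick check with an arbitrary placeholder curve (not Corona's):

```python
def tone_map(x, k=1.75):
    # Arbitrary nonlinear curve standing in for the VFB tone mapping.
    return x / (x + k)

diffuse, reflect = 0.4, 0.3   # two linear render elements
beauty = diffuse + reflect    # elements sum to the beauty pass in linear light

tm_beauty = tone_map(beauty)                    # tone map the sum
tm_sum = tone_map(diffuse) + tone_map(reflect)  # sum the tone-mapped parts

print(round(tm_beauty, 4), round(tm_sum, 4))  # → 0.2857 0.3324 - they diverge
```

Hence the standard advice: remove tone mapping (linearize) before compositing elements, and re-apply the look afterwards.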

As for the comparisons, I will make some as soon as I have time.

2017-02-20, 17:22:14
Reply #41

lasse1309

  • Active Users
  • **
  • Posts: 70
What about Corona tone-mapping plugins for the comp software then? :D

2017-02-21, 09:30:48
Reply #42

hybaj

  • Active Users
  • **
  • Posts: 5
This is such an interesting topic, but also one that is very, very difficult to fully understand.

In fact, it's so hard on the brain cells that many, many cinema production professionals still don't understand that when they get the 12/14/16-bit RAW files from their 60-thousand-dollar cameras, they just get an image that has not yet been debayered but has already been heavily modified by the firmware magic in the camera - the secret sauce of every camera manufacturer (just as pokoy wrote earlier). They actually believe that what they get is the direct signal from the sensor, which is very far from the truth.

Canon and Nikon DSLRs perform amazingly in studio lighting (light that is usually very "white" in kelvin terms) and really rival analog film in these situations. But when you switch to outdoor or other types of light, the image usually falls apart and no longer looks good (while analog did) - it needs to be rescued in Photoshop or some other image processing software. Canon, even with their understanding of color, have failed at creating a proper cinema camera (C100/C300/C500) - they have created a sort of bland, depressing look which actually works for documentaries but not for cinema. Their C700 has colors and "tone mapping" almost identical to their DSLR range, which in my opinion won't work for cinema either - it only shows that they are trying to backtrack to something that worked for them in the past... they are out of ideas. A multibillion-dollar imaging corporation has really run out of ideas, which is quite remarkable.

Arri Alexa - the first camera ever to provide a very durable "cinematic" look right out of the camera. Great dynamic range and beautiful color - very nice desaturation of highlights. Its color processing and dynamic range are still unrivaled, and the camera hardware is already over six years old.

Maxwell Render - during the beta and the first version it had very special tone mapping and color response... it made images look really good without any post work.

So really... it would be amazing if someone ever got their hands on the actual firmware code of these cameras to see what color-math acrobatics they perform. LUTs are relatively simple transforms which do not capture all of the intricacies of what goes on in the firmware.

Knowing what the cameras really do (from sensor signal to RAW file) would really help renderer developers too - if you could quickly match your renderings to live-action footage, that would be insanely helpful for VFX.

analog film - Kodak Vision 5207

digital camera - Arri Alexa

2017-02-21, 09:44:20
Reply #43

rampally

  • Active Users
  • **
  • Posts: 202
Quote from: lasse1309 on 2017-02-20, 17:22:14

What about Corona tone-mapping plugins for the comp software then? :D
this is good and interesting!!!

2017-02-21, 12:50:11
Reply #44

pokoy

  • Active Users
  • **
  • Posts: 1392
The Blender video is all about tone mapping, right? As far as I can see, it has nothing to do with sRGB; sRGB is just mentioned as the culprit, but it's actually tone mapping he's talking about.

Similarly, I'm not sure why sRGB is mentioned in the thread title. From what I understand, the initial idea is to add a default tone mapping preset replacing the linear display we have in the VFB now (though we can't be sure there isn't already a tone mapper present that just isn't exposed to the user). sRGB is the color space Max displays in, since it can't use anything else and falls back to the Windows default color space, which is sRGB. So without adding color profile support to Corona's VFB, it doesn't really make sense to mention it.

The simplified graph would be: render output (linear) > tone mapping (this is what you want to add by default, correct?) > sRGB (gamma applied here)
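That graph in a few lines of code - the tone curve is an arbitrary placeholder for whatever default would be chosen; the point is only the ordering: tone mapping operates on linear data, and the sRGB gamma is the final, display-only step:

```python
def tone_map(x):
    # Placeholder curve standing in for whatever default would be chosen.
    return x / (x + 1.0)

def srgb_encode(c):
    # sRGB transfer function, applied last and only for display.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

linear_pixel = 2.5                             # render output (linear)
display = srgb_encode(tone_map(linear_pixel))  # tone map first, gamma last
print(round(display, 3))  # → 0.862
```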

However, if you're about to add color profile support to Corona's VFB, we might actually achieve a more natural look. The current widely used standard in professional photo workflows is eciRGB v2, which is meant to reliably reproduce colors found in nature, with an emphasis on blue-cyan/orange-yellow tones. This would indeed help in the VFB, as sRGB is pretty dull. However, it's something entirely different from tone mapping - if that was the original idea - and claiming that a default hidden tone mapping curve overcomes the limits of the sRGB color space would be misleading.

I guess there's a reason why you want to add this. More or better tone mapping options certainly won't hurt. And as long as we get a legacy-behavior checkbox so comp departments get their channels right, I'm all for it. Also, please consider adding color profile support (per scene, not as a global default), as this can really affect how values are mapped to final colors after tone mapping - it's a very important factor indeed.