Show Posts



Messages - Fluss

Pages: 1 2 [3] 4 5 ... 25
31
Work in Progress/Tests / Re: dubcats secret little hideout
« on: 2019-01-14, 15:53:07 »
Basically, every microfacet BRDF, so diffuse, specular and transmission (including clear coat, thin surface, etc.). We know a new shader is coming soon, so maybe this will be addressed at some point. That would make Corona an absolute killer TBH.

32
Work in Progress/Tests / Re: dubcats secret little hideout
« on: 2019-01-14, 14:25:36 »
Here is an example:

0.9 glossiness:



0.1 glossiness:



As you can see, we're losing an incredible amount of energy here. I'll let you guess the impact of such behavior on an interior render.

33
Work in Progress/Tests / Re: dubcats secret little hideout
« on: 2019-01-14, 13:40:40 »
A question Fluss - let's say we have a very long and narrow room with a window at one end. Does what you imply mean that it will be darker at the other end of the room than, let's say, in FStorm (of course with the same camera exposure etc.)?

I guess it shouldn't affect light propagation that much, but rougher materials will appear brighter, so overall you should have the feeling that light travels a bit further, yes.

what you imply - that Corona does a poor job at maintaining the energy conservation rule

To clarify, the Corona team does it the right way, and that's totally expected for a single-scattering BRDF. That's one of the drawbacks of this kind of implementation. It looks like they've already implemented some energy compensation tech for the transmission part, as it doesn't seem to darken when the roughness increases (that has to be checked though, I made some quick tests a loooooong time ago). And I guess they tried to solve the issue on the specular part, and that's the reason why the glossiness range got fucked up before v1.5, as disabling the PBR checkbox in the shader produces a near perfect furnace test (except for edge darkening/brightening).
So basically, it would be really nice to get energy preservation for both the specular and diffuse lobes.

You can look at this video to see the phenomenon involved (a lot of the examples are about the transmission lobe, but it's the same for the specular and diffuse lobes):


I already submitted the idea a long time ago, and the devs checked it and said it would introduce a massive overhead. But it looks like Sony Imageworks did a pretty decent job of solving that issue, and I think it's worth considering. Notice that I've made a distinction between energy conservation and energy preservation. Even if it seems a bit cumbersome, that's how they described it in their presentation, and it's actually a neat way to pinpoint the exact phenomenon in question.

Have a look at their presentation for more info: https://blog.selfshadow.com/publications/s2017-shading-course/imageworks/s2017_pbs_imageworks_slides_v2.pdf
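For anyone curious how that energy preservation trick works mechanically, here is a little Python sketch of the compensation lobe described in those slides. The E() function below is a made-up analytic stand-in for the directional-albedo table a real renderer would precompute with a furnace test; none of this is Corona's or Imageworks' actual code.

```python
import numpy as np

# Sketch of the Kulla/Conti "energy preservation" lobe from the Imageworks
# slides. E(mu, alpha) stands in for the tabulated directional albedo of the
# single-scattering specular lobe; this analytic form is invented purely for
# illustration (a real renderer measures it with a furnace test).
def E(mu, alpha):
    # hypothetical albedo: more energy lost at grazing angles / high roughness
    return 1.0 - 0.5 * alpha * (1.0 - mu)

def E_avg(alpha, n=512):
    # E_avg = 2 * integral_0^1 E(mu) mu dmu  (cosine-weighted average albedo)
    mu = (np.arange(n) + 0.5) / n
    return 2.0 * np.mean(E(mu, alpha) * mu)

def f_ms(mu_o, mu_i, alpha):
    # Compensation BRDF added on top of the single-scattering lobe:
    # f_ms = (1 - E(mu_o)) * (1 - E(mu_i)) / (pi * (1 - E_avg))
    return (1.0 - E(mu_o, alpha)) * (1.0 - E(mu_i, alpha)) \
        / (np.pi * (1.0 - E_avg(alpha)))

# The construction guarantees the combined lobe passes a white furnace test:
alpha = 0.8
mu = (np.arange(512) + 0.5) / 512
total = E(0.7, alpha) + 2.0 * np.pi * np.mean(f_ms(0.7, mu, alpha) * mu)
print(round(total, 4))  # -> 1.0, all incoming energy accounted for
```

The nice property is that the compensation lobe returns exactly the energy the single-scattering lobe lost, whatever shape E actually has, which is why it works as a bolt-on fix.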

Also, Stephen Hill wrote some interesting blog posts on the subject:


34
Work in Progress/Tests / Re: dubcats secret little hideout
« on: 2019-01-13, 15:48:42 »
I also want to add that to transpose real-world scene-referred data to render scene-referred data, you'll at least need to thoroughly stick to energy conservation AND preservation principles. And in that regard, Corona is really far from producing accurate results.
Indeed, as roughness increases, we're losing an insane amount of energy. I've made a quick furnace test to check that out: we're losing close to 50-60% of the energy at high roughness values (well, we cannot completely disable Fresnel, but that's close enough to see the issue). Even if it's still not perfect, FStorm did a way better job on that side (it actually produces more energy than it receives, but the gap is smaller).
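If you want to reproduce that kind of furnace test without firing up a renderer, here is a toy Monte Carlo version in Python: it samples a GGX/Smith specular lobe at normal incidence with the Fresnel term forced to 1 and measures how much energy comes back. This is my own throwaway implementation (visible-normal sampling à la Heitz), not Corona's code, so treat the exact numbers as illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def ggx_albedo_normal_incidence(alpha, n=100_000):
    # Sample the GGX visible-normal distribution for a viewer at normal
    # incidence, then weight each sample by Smith G2/G1 with Fresnel = 1
    # ("white furnace"). The mean weight is the directional albedo:
    # 1.0 means no energy is lost by the single-scattering lobe.
    r1, r2 = rng.random(n), rng.random(n)
    r = np.sqrt(r1)
    phi = 2.0 * np.pi * r2
    t1, t2 = r * np.cos(phi), r * np.sin(phi)
    nz = np.sqrt(np.maximum(0.0, 1.0 - t1**2 - t2**2))
    # stretch the sampled normal back onto the rough ellipsoid
    h = np.stack([alpha * t1, alpha * t2, nz])
    h /= np.linalg.norm(h, axis=0)
    # reflect wo = (0, 0, 1) about h
    wi_z = 2.0 * h[2] * h[2] - 1.0
    valid = wi_z > 0  # samples reflected below the horizon contribute nothing
    # Smith lambda for the incoming direction (lambda(wo) = 0 at normal incidence)
    tan2 = (1.0 - wi_z[valid] ** 2) / wi_z[valid] ** 2
    lam = 0.5 * (-1.0 + np.sqrt(1.0 + alpha * alpha * tan2))
    weights = np.zeros(n)
    weights[valid] = 1.0 / (1.0 + lam)  # G2/G1, height-correlated Smith
    return weights.mean()

print(f"alpha=0.1: {ggx_albedo_normal_incidence(0.1):.3f}")  # close to 1
print(f"alpha=1.0: {ggx_albedo_normal_incidence(1.0):.3f}")  # well below 1
```

At low roughness the albedo sits near 1, while at roughness 1 this toy single-scattering lobe loses well over half of the incoming energy, which is exactly the darkening discussed above.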

35
Work in Progress/Tests / Re: dubcats secret little hideout
« on: 2019-01-13, 13:02:30 »
In the future, I hope that render engines will fuse "PBR" and "scanned cross specular" into glossiness. Right now, PBR glossiness reduces IOR at lower glossiness values, but it's global and ignores micro-shadowing.
My ultimate wish right now would be to get Diffuse Roughness in Corona, and then get some kind of futuristic PBR + scanned cross specular interaction in glossiness for micro-shadowing. This would not break the current PBR workflow and/or maps, but only improve them.

Hey Dubcat, can you elaborate about this? I'm not sure I get it.

edit: Here is what I understand: basically, you're asking for a slot to input a cross-polarized specular scan in order to modify the microfacet distribution, is that right?

36
General Discussion / Re: Overblown light
« on: 2018-12-14, 11:43:57 »
We can either change the way Corona interprets light internally (so high red values would mean white pixels), or somehow "fake" it, for example using LUTs.

Maru, I have to admit that this is a bit scary. The aforementioned phenomenon is purely related to the transition from scene-referred data to display-referred data, as discussed in many other posts. Corona is already computing light the right way, so why would you change the scene-referred behavior? I'm not criticizing anything here, I would just like to understand. Maybe I misunderstood what you said though, but I'm a bit concerned. Also, LUTs won't solve anything here.

37
I need help / Re: Resizing Renders to 4:6 ratio
« on: 2018-12-14, 09:37:05 »
Hi,

4:6 = 2:3 ≈ 0.67 aspect ratio -> 1280x1920, so that means it becomes a vertical format. I think they want it in 3:2. 3:2 = 1.5 aspect ratio, so 1920x1280 if you re-render it (1920/1280 = 1.5), or 1620x1080 if you need to crop (1620/1080 = 1.5). I hope that's clear.
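If it helps, the crop arithmetic can be wrapped in a few lines of Python (crop_to_ratio is just a hypothetical helper name for this sketch):

```python
# A minimal sketch of the crop math above: shrink one dimension so the
# image matches a target aspect ratio (e.g. 3:2), without resampling.
def crop_to_ratio(w, h, rw, rh):
    if w * rh > h * rw:                 # too wide: crop the width
        return (round(h * rw / rh), h)
    return (w, round(w * rh / rw))      # too tall (or exact): crop the height

print(crop_to_ratio(1920, 1080, 3, 2))  # -> (1620, 1080), as in the post
```

Comparing the cross products (w * rh vs h * rw) avoids floating-point ratios entirely.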

38
General CG discussion / Re: Fastest Network storage?
« on: 2018-12-12, 16:54:28 »
Heh yes, you can aggregate even 10GbE for the ultimate super-speed :- ). I actually wanted to do it as well, but I saved money by only buying an 8-port switch and have no space for such luxury.

Which I btw regret: my 8-port Netgear drops 10GbE to 1GbE on random ports if all ports are fully used at 10GbE!! It very easily overheats and struggles with performance.
So if you want to link-aggregate, buy a much bigger switch than you think you need. It seems like a waste of money, but this is what "pro-sumer" switches are like.

I'm now looking at putting a dual SFP+ card in our file server and aggregating that instead haha. We already have a switch with SFP+, so it's fairly high end.

SFP+ is expensive as hell. I'd rather go with standard cheap Cat 7 or even Cat 6a Base-T cables unless you plan to run 100+ meters.

39
General CG discussion / Re: Fastest Network storage?
« on: 2018-12-12, 16:49:24 »
Heh yes, you can aggregate even 10GbE for the ultimate super-speed :- ). I actually wanted to do it as well, but I saved money by only buying an 8-port switch and have no space for such luxury.

Which I btw regret: my 8-port Netgear drops 10GbE to 1GbE on random ports if all ports are fully used at 10GbE!! It very easily overheats and struggles with performance.
So if you want to link-aggregate, buy a much bigger switch than you think you need. It seems like a waste of money, but this is what "pro-sumer" switches are like.

I have the same issues with mine, and that pisses me off! Unplugging/replugging the incriminated ethernet port does the trick, but wtf... Stay away from those.

40
I'll leave that link here, really interesting and informative: https://renderman.pixar.com/stories/cars-3

Incredibly interesting! Worth it for the close-up Pixar renders haha.

If you read carefully, it's not about close-ups but, on the contrary, about detail preservation when an object is far away from the camera. :-)

41
I'll leave that link here, really interesting and informative: https://renderman.pixar.com/stories/cars-3

42
wow, there's so much more high frequency detail in FStorm!!

Well, setting blur to 0.5 for displacement in V-Ray is a terrible idea... No wonder it looks softer compared to FStorm.

If you look at the settings, the V-Ray displacement blur is set to 0.001; only the diffuse and gloss maps are set to 0.5. I could have lowered it a bit more though.

43
Corona Renderer for Cinema 4D - general / Re: Render times
« on: 2018-12-02, 21:14:14 »
Compare what's comparable: Corona is faster than V-Ray. Just set them up the same way. Forget the irradiance cache, use brute force instead, no light cutoff, 25 max ray depth for everything, and you will have a good base for comparison.

44
Just a random question - what happens in fstorm/other if you use some procedural map for displacement on a large area, so that it cannot be just one tile repeated multiple times?

FStorm only supports bitmaps, so I can't tell. For V-Ray 2D displacement, as raytracing is computed in texture space, it needs proper UVs to work. So you can't use object/world XYZ coordinates for displacement. It is still usable though, by using an explicit map channel: set a planar UVW map to fit the size of the plane (no tiling) and adjust the size and iterations of the noise map as desired. Then the amount of detail is directly driven by the resolution set in the displacement modifier (RAM usage will increase accordingly). See the example below:

Vray - 1k sampling - 2,100 MB RAM


Vray - 16k sampling - 21,400 MB RAM

45
So, where's the problem? Corona displacement looks as good as its competitors' and uses half the RAM compared to V-Ray. The only part where it sucks (sometimes) is that it cuts subdivision behind the camera very dramatically - look at the reflection in the mirror ball.

Well, just rotate the camera a bit and there you go:

Fstorm - still 3.65 GB - no reflection issue


Corona - 17GB


What's more, FStorm displacement is view-independent. In the current configuration, Corona displacement is view-dependent, and that does not work well with animation. To get rid of those issues, you have to set it up in world-space units. Try to reach the same quality with those settings and your computer will explode.
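Why screen-space displacement is view-dependent can be shown with a toy model: subdivision depth follows the projected edge size, so the same geometry seen from farther away (or only through a reflection) gets a coarser mesh. Everything below (the helper name, the focal length, the pixel threshold) is invented for illustration and is not Corona's actual heuristic.

```python
# Toy model of screen-space displacement subdivision: keep splitting an edge
# until its projection on screen drops under a pixel threshold. Hypothetical
# numbers, just to show why tessellation depends on the camera position.
def subdiv_levels(edge_m, distance_m, focal_px=1000.0, target_px=2.0):
    projected = edge_m * focal_px / distance_m  # pinhole projection, in pixels
    levels = 0
    while projected > target_px:
        projected /= 2.0
        levels += 1
    return levels

near = subdiv_levels(1.0, 2.0)   # a 1 m edge close to the camera
far = subdiv_levels(1.0, 50.0)   # the same edge, 25x farther away
print(near, far)  # the near edge gets many more subdivision levels
```

A world-space criterion would replace the projected size with the edge's actual length, giving the same tessellation from every viewpoint, at the RAM cost described above.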

Here is a (fancy) example I found; I'd kill to get that displacement quality in my animations:

