Author Topic: AI super-resolution speedup  (Read 2222 times)

2019-04-12, 23:18:38

lolec

  • Active Users
  • **
  • Posts: 81
    • View Profile
The Nvidia denoiser has made a huge impact on our workflow, allowing us to visualize a close approximation of the final render much faster.

With the latest set of supersampling algorithms, I think there is another opportunity to bring even more speed into the process.

My idea is to have a checkbox called SuperSampling that would essentially divide the render size by half and use AI to upscale to the expected resolution.

I'm not sure, but I believe upscaling is much faster than actually rendering 4x the pixels.

This isn't intended for final renders, but it could be awesome for speeding up work.

I've done a few tests with https://topazlabs.com/gigapixel-ai/ , but having it integrated into Corona would be amazing. What do you think?
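The arithmetic behind the hoped-for speedup is simple (a back-of-the-envelope sketch; the actual gain depends on how render time scales with pixel count and on the upscaler's own cost):

```python
def upscale_speedup(width, height, scale=2):
    """Pixel-count reduction from rendering at 1/scale resolution
    and AI-upscaling back to the target size."""
    full_pixels = width * height
    low_pixels = (width // scale) * (height // scale)
    return full_pixels / low_pixels

# Halving each dimension means tracing 4x fewer pixels.
print(upscale_speedup(1920, 1080))  # -> 4.0
```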

 

2019-04-13, 08:36:20
Reply #1

sebastian___

  • Active Users
  • **
  • Posts: 135
    • View Profile
Can you post your tests with Topaz AI? It would be cool if someone made a video test of the AI resize.

2019-04-13, 10:25:30
Reply #2

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 5903
  • Let's move this topic, shall we?
    • View Profile
    • My Models
The question is, would an AI upscaler benefit from the 3D data that the renderer can provide? If not, then I think it's better to leave it for post. Personally, I don't think I would use such a feature much, if at all, but then again, I thought the same about the denoiser before Corona had one, and now I can barely do a render without it.
I'm not Corona Team member. Everything i say, is my personal opinion only.

2019-04-13, 12:10:27
Reply #3

Juraj Talcik

  • Active Users
  • **
  • Posts: 3663
  • Tinkering away
    • View Profile
    • studio website
The question is, would an AI upscaler benefit from the 3D data that the renderer can provide? If not, then I think it's better to leave it for post. Personally, I don't think I would use such a feature much, if at all, but then again, I thought the same about the denoiser before Corona had one, and now I can barely do a render without it.

I believe the AI would benefit, since 3D data can provide information about where the edges are, so everything would be a bit less smooth than what a post-production tool provides.
Although, judging by the results from nVidia DLSS, which sort of does this (though primarily as a means of smooth animation AA), the result is kinda smooth.

I believe it will be necessary eventually anyway. Rendering a 2k animation is quite an expensive and time-consuming hassle in Corona (or any offline renderer, for that matter), yet we already see amazingly sharp 4k animation from Unreal, rendered basically instantly with the new cards. Offline rendering is still easier and much better quality, but at some point, if you can get an 8k Unreal animation within a single day for nothing with a single GPU, or wait two weeks for a 2k Corona animation, the decision will be a lot different.

I would never personally use these hacks for still images, where every loss of quality is noticeable (I barely use denoising on finals these days), but animation does concern me.
talcikdemovicova.com  Website and blog
be.net/jurajtalcik   Our studio Behance portfolio
Instagram   Our studio Instagram, managed by Veronika

2019-04-13, 15:59:19
Reply #4

lolec

  • Active Users
  • **
  • Posts: 81
    • View Profile
I will post some images later.

I didn't imagine it being meant for final images, but for drafting... however, it would be great for final animations. In my tests, upscaling 1920 to 4k is just about the best-case scenario; it's hard to see the difference.

Upscaling 960 to 1920 does produce some artifacts, but I still think it would be useful in the same way the Nvidia denoiser is.

2019-04-13, 20:44:51
Reply #5

sprayer

  • Active Users
  • **
  • Posts: 638
    • View Profile
Why do you want this in 3ds Max? It's post-processing, after all, and there are many upscaling algorithms already available to choose from without implementing it in 3ds Max. I can't imagine why you would need to save images with upscaling artifacts right after rendering, and how would you fix them if you render and save an animation sequence? Again, this should be post-processing work done on the raw rendered images.

2019-04-13, 22:34:14
Reply #6

sebastian___

  • Active Users
  • **
  • Posts: 135
    • View Profile
Why do you want this in 3ds Max? It's post-processing, after all

This is supposed to be magical "AI" upscaling that results in much faster rendering while looking almost the same, so it would help you in lookdev and also in time-consuming animation rendering.

Kind of like Nvidia denoising, which already does something similar, instantly showing you the result. You could make the same argument there too: why not denoise in post-processing instead of instantly, during rendering?

2019-04-14, 02:17:36
Reply #7

lolec

  • Active Users
  • **
  • Posts: 81
    • View Profile
AI upscaling is much better than pretty much any other upscaling algorithm out there, which opens up the possibility of a feature where you would render at a much lower resolution, 4x faster (or even more), while the result looks pretty much the same as a higher-resolution render.

Again, I'm not saying this feature would be useful for final renders; even the animation scenario described above would be covered by current plugins, as you only need to set up and run once.

Maybe you work in a different way, but when I'm working on a new scene, I don't need to see 100% of the details all the time. If I can get a 95% approximation that allows me to place and adjust lights, adjust materials, etc., it can speed up my workflow significantly, the same way the Nvidia denoiser did.


2019-04-14, 02:46:36
Reply #8

PROH

  • Active Users
  • **
  • Posts: 1024
    • View Profile
Hi. In this video (at 14 min) they show an example of Gigapixel used on a render (upscaled 200%):

:)


2019-04-14, 11:28:43
Reply #10

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 5903
  • Let's move this topic, shall we?
    • View Profile
    • My Models
I will post some images later.

Please do. Unless I see the magic with my own eyes, I have a hard time believing those "nearly indistinguishable from the original high-res" claims. I have very little experience with AI upscaling (I've only used some online service demo and occasionally waifu2x), but I never managed to see astonishing results, even at basic 2x upscaling.

Maybe you work in a different way, but when I'm working on a new scene, I don't need to see 100% of the details all the time. If I can get a 95% approximation that allows me to place and adjust lights, adjust materials, etc., it can speed up my workflow significantly, the same way the Nvidia denoiser did.

If you're referring to IR, then there are a few questions to be answered. Would IR still be interactive with an AI denoiser + AI upscaler? Also, you mentioned that the upscaler works best with fairly high resolution input already. Would IR benefit much from the upscaler if it doesn't give optimal results at lower resolutions?

I'm not against this request, but I hope the team won't jump on the AI bandwagon at the expense of more conventional features, which the community has been awaiting with great anticipation for a long time.
I'm not Corona Team member. Everything i say, is my personal opinion only.

2019-04-14, 12:25:23
Reply #11

Juraj Talcik

  • Active Users
  • **
  • Posts: 3663
  • Tinkering away
    • View Profile
    • studio website
I wouldn't worry about the devs jumping on a bandwagon ;- ). It took half a year (or a year?) to get the nVidia denoiser, which is amazing, and less than a day to get the Intel one, which is pure shit that no one asked for. It's not done on the basis of request intensity.

This tech doesn't look anywhere near ready for real-time implementation anyway, but let's watch how fast and good something like DLSS gets; it will be an absolute must eventually. 8k TVs are here, and 8k monitors are around the corner.
talcikdemovicova.com  Website and blog
be.net/jurajtalcik   Our studio Behance portfolio
Instagram   Our studio Instagram, managed by Veronika

2019-04-14, 16:40:55
Reply #12

sebastian___

  • Active Users
  • **
  • Posts: 135
    • View Profile
I have a hard time believing those "nearly indistinguishable from the original high-res" claims.

The results can vary a lot in quality depending on the source image. Some say it depends on what kind of images the resize algorithm was trained on.

the upscaler works best with fairly high resolution input already. Would IR benefit much from the upscaler if it doesn't give optimal results at lower resolutions?

Actually, it doesn't necessarily work best from high resolution images; a low resolution source can work just as well, but for good results that low resolution image should be at "final" quality and noise free, which might conflict with the idea of drafting or lookdev usage.

2019-04-14, 23:02:09
Reply #13

burnin

  • Active Users
  • **
  • Posts: 875
    • View Profile
I've been using Enhanced AI in my (R&D) pipeline for some time now... quite a good time saver.
BTW, bad quality comes from not having enough proper training, a lack of "learning" material to cover all cases. This is most obvious with OIDN and was also observed with the NVidia AI. But better times are coming, machines are learning, and we're just at the beginning... soon even standard scenes will be designed, visualized, produced, delivered and built by machines... still under human control ;)

Though let's not get ahead of ourselves... we've already seen idiots on speed doing more damage than good. Speed kills!
« Last Edit: 2019-04-14, 23:15:56 by burnin »

2019-04-15, 05:35:57
Reply #14

SairesArt

  • Active Users
  • **
  • Posts: 687
  • Pizza | The Cheesen One
    • View Profile
    • SairesArt Portfolio
[rant]
Ever since a thread linked a product called Evotis (their website is no longer public-facing, but there are press reports and the Wayback Machine), which saved samples instead of pixels to create subpixel-perfect masks, I have been very hopeful for a resolution-independent renderer.
Instead of losing a sample's information inside a pixel through the reconstruction filter, it would be interesting to save all the samples to disk. Then, after rendering, you could set the resolution after the fact.

Kinda like how RAW allows you to set the white balance because debayering has not been done yet, saving samples would allow you to set the resolution after rendering has finished, because the samples have not been collapsed into pixels yet. Be it 480p or 4k. Ignoring pixel-grid alignment, 16 passes at 1080p would equate to 4 passes at 4k with no loss in sharpness, allowing you to switch back and forth.
In the world of offline rendering this would be way more useful than upscaling.
I hope to code up a prototype of this sometime this year.
[/rant]
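A minimal sketch of the sample-storage idea (pure illustration, not how Evotis or Corona actually work): keep each sample's continuous image-plane coordinates instead of immediately binning them into pixels, then filter them into a pixel grid of any size after rendering. A simple box filter stands in here for a real reconstruction filter.

```python
import random

def render_samples(n, seed=0):
    """Stand-in for a renderer: each sample is (u, v, value), with
    continuous [0, 1) image-plane coordinates. The 'scene' here is a
    simple horizontal gradient, value = u."""
    rng = random.Random(seed)
    return [(u, v, u) for u, v in
            ((rng.random(), rng.random()) for _ in range(n))]

def reconstruct(samples, width, height):
    """Collapse stored samples into a width x height image with a box
    filter: average every sample that lands inside each pixel."""
    acc = [[0.0] * width for _ in range(height)]
    cnt = [[0] * width for _ in range(height)]
    for u, v, val in samples:
        x = min(int(u * width), width - 1)
        y = min(int(v * height), height - 1)
        acc[y][x] += val
        cnt[y][x] += 1
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(width)]
            for y in range(height)]

# The same sample set resolves to any resolution after the fact.
samples = render_samples(100_000)
preview = reconstruct(samples, 8, 8)    # draft resolution
final = reconstruct(samples, 32, 32)    # "final" resolution
```

The 16-passes-at-1080p vs 4-passes-at-4k equivalence falls out of this: the filter only cares about the total number of samples per unit area, not which pixel grid they were originally accumulated into.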

2019-04-15, 10:08:57
Reply #15

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 5903
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Now that would be a terrific feature to have. A thousand times more interesting than upscaling, IMHO.
I'm not Corona Team member. Everything i say, is my personal opinion only.

2019-04-15, 13:27:36
Reply #16

burnin

  • Active Users
  • **
  • Posts: 875
    • View Profile
Hmmm... I'm quite doubtful about it. From the "Performance Evaluation of Evotis within a VFX Environment" study by Tim Klink (August 2018), it seemed there was way too much overhead.

As Ondra put it years ago: "not in the foreseeable future ;)"

« Last Edit: 2019-04-15, 13:31:20 by burnin »

2019-04-15, 15:50:42
Reply #17

SairesArt

  • Active Users
  • **
  • Posts: 687
  • Pizza | The Cheesen One
    • View Profile
    • SairesArt Portfolio
Hmmm... I'm quite doubtful about it. From the "Performance Evaluation of Evotis within a VFX Environment" study by Tim Klink (August 2018), it seemed there was way too much overhead.

As Ondra put it years ago: "not in the foreseeable future ;)"
I mean, yeah. With 16 passes you will effectively have the storage footprint of 16 full-res EXRs (although the math works out differently).
That storage hit is no joke.

But Evotis had different goals in mind: subpixel-perfect compositing. Deep compositing has mostly solved this in the high-end VFX world. I would just use the samples to rebuild a dynamic-resolution image. There is basically no overhead in terms of calculation. Although running 100 passes' worth of samples through a reconstruction filter like the default "Tent" sounds like a lot, the averaging is very quick. (Every progressive renderer like Corona does this every time the VFB updates for a pass anyway...)
Compositing every frame of an animation that way is deadly for performance. Updating the VFB at a different resolution for a single image? Quite easy on resources.
Also, this would be a checkbox type of thing, if such an idea were ever implemented. Saving samples to disk is not really difficult to do, which is why I want to tackle it as a side project :]
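The per-pass averaging mentioned above is just an incremental mean, which is why a progressive VFB update is cheap (a generic sketch, not Corona's actual implementation):

```python
def accumulate_pass(frame_mean, new_pass, pass_index):
    """Fold one new pass into the running per-pixel mean.
    frame_mean and new_pass are flat lists of pixel values;
    pass_index is 1-based for the pass being added."""
    inv = 1.0 / pass_index
    return [m + (p - m) * inv for m, p in zip(frame_mean, new_pass)]

# Three passes over a 2-pixel "image":
mean = [0.0, 0.0]
for i, render_pass in enumerate([[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]], start=1):
    mean = accumulate_pass(mean, render_pass, i)
print(mean)  # -> [2.0, 2.0]
```

One multiply-add per pixel per pass, regardless of how many passes came before; the cost that actually hurts is moving and storing the raw samples, not averaging them.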

2019-04-15, 17:03:40
Reply #18

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 8900
  • Turning coffee to features since 2009
    • View Profile
[rant]
Ever since a thread linked a product called Evotis (their website is no longer public-facing, but there are press reports and the Wayback Machine), which saved samples instead of pixels to create subpixel-perfect masks, I have been very hopeful for a resolution-independent renderer.
Instead of losing a sample's information inside a pixel through the reconstruction filter, it would be interesting to save all the samples to disk. Then, after rendering, you could set the resolution after the fact.

Kinda like how RAW allows you to set the white balance because debayering has not been done yet, saving samples would allow you to set the resolution after rendering has finished, because the samples have not been collapsed into pixels yet. Be it 480p or 4k. Ignoring pixel-grid alignment, 16 passes at 1080p would equate to 4 passes at 4k with no loss in sharpness, allowing you to switch back and forth.
In the world of offline rendering this would be way more useful than upscaling.
I hope to code up a prototype of this sometime this year.
[/rant]

Just to get an idea: we could very easily do this. Do a render, take your "samples/s" value, and multiply it by 12 * the number of your render elements. That is the bandwidth produced. If you can store it somewhere, then we can talk about coding this ;).
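Plugging hypothetical numbers into that estimate (assuming the 12 means bytes per sample per render element, e.g. an RGB triple of 4-byte floats):

```python
def sample_bandwidth_bytes(samples_per_sec, num_elements):
    """Back-of-the-envelope formula from the post above:
    samples/s * 12 * number of render elements,
    with 12 taken as bytes per sample per element (3 floats)."""
    return samples_per_sec * 12 * num_elements

# e.g. 5 Msamples/s and 10 render elements:
bw = sample_bandwidth_bytes(5_000_000, 10)
print(f"{bw / 1e9:.1f} GB/s")  # -> 0.6 GB/s
```

At that rate a 10-minute render would produce roughly 360 GB of raw sample data, which is why storage, not computation, is the bottleneck.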

I wouldn't worry about the devs jumping on a bandwagon ;- ). It took half a year (or a year?) to get the nVidia denoiser, which is amazing, and less than a day to get the Intel one, which is pure shit that no one asked for. It's not done on the basis of request intensity.

Not sure how serious this jab was, but you need to consider that for the nVidia denoiser we had to solve CUDA deployment in Corona and add the concept of realtime denoising, which was not previously present. For the Intel denoiser we did not have to do anything but compile and link a new library. Also, we did get requests for the Intel denoiser. As always, I have no problem with people asking me daily to implement something, but what I really hate is some people dissing the feature requests of others as "nobody asked for it" or "that is useless" or "people only want this because they are noobs" etc.
Rendering is magic.
Private scene uploader | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2019-04-15, 18:08:37
Reply #19

Juraj Talcik

  • Active Users
  • **
  • Posts: 3663
  • Tinkering away
    • View Profile
    • studio website
It was more a reply to Romullus than a jab ;- ). Still... the Intel one is rather crap.
talcikdemovicova.com  Website and blog
be.net/jurajtalcik   Our studio Behance portfolio
Instagram   Our studio Instagram, managed by Veronika

2019-04-15, 18:13:43
Reply #20

Ondra

  • Administrator
  • Active Users
  • *****
  • Posts: 8900
  • Turning coffee to features since 2009
    • View Profile
They seem keen to cooperate with us and improve it, though. We are sharing our scenes with them to incorporate into the training set, and they also want to make the denoiser compatible with the new high quality filtering (nVidia might do the same; dunno if we got a reply yet). The memory usage thing was already fixed.
Rendering is magic.
Private scene uploader | How to get minidumps for crashed/frozen 3ds Max | Sorry for short replies, brief responses = more time to develop Corona ;)

2019-04-15, 18:33:21
Reply #21

Frood

  • Active Users
  • **
  • Posts: 1319
    • View Profile
    • Rakete GmbH
The memory usage thing was already fixed.

Oh, that's great news!


Good Luck



Never underestimate the power of a well placed level one spell.

2019-04-15, 18:35:35
Reply #22

dfcorona

  • Active Users
  • **
  • Posts: 118
    • View Profile
It was more a reply to Romullus than a jab ;- ). Still... the Intel one is rather crap.
I've got to say, we just used the Intel denoiser on an animation with Corona, and it was by far the best denoiser we have used (and we have used quite a few). It cleaned all the noise without losing any detail, in a very fast time. It was a life saver. What exactly did you find that you don't like about it?

2019-04-15, 18:59:27
Reply #23

Juraj Talcik

  • Active Users
  • **
  • Posts: 3663
  • Tinkering away
    • View Profile
    • studio website
What resolution was the animation? I find that at 2k, even the Corona native one is decently fast to use on animation, with good quality.

The quality of both the nVidia and Intel AI denoisers is simply not good enough (or even close to good enough) for finals in my eyes (with Intel being worse at refraction), but at least nVidia is sky-high fast, making it a very cool IR companion. The only benefit I've seen for Intel is that nodes don't have GPUs, so denoising on them can only be done with the native one or the Intel one. But if someone finds the AI denoise acceptable for finals, that's up to them; I find it very far below the acceptable threshold.
I really don't want final images from one of the best ray tracers on the market, one focused on photorealism, to be smeared and painterly like photon mapping in 1995. I might as well fully switch to Unreal instead and have a sharp result in zero time.

Quote
without losing any detail

I don't find this to be true at all from my standpoint, but you can post a single frame if you would like (ideally before & after). If you are satisfied, though, that's good; that's all that matters.
talcikdemovicova.com  Website and blog
be.net/jurajtalcik   Our studio Behance portfolio
Instagram   Our studio Instagram, managed by Veronika

2019-04-15, 22:11:33
Reply #24

burnin

  • Active Users
  • **
  • Posts: 875
    • View Profile
Hmmm... I'm quite doubtful about it. From the "Performance Evaluation of Evotis within a VFX Environment" study by Tim Klink (August 2018), it seemed there was way too much overhead.

As Ondra put it years ago: "not in the foreseeable future ;)"
I mean, yeah. With 16 passes you will effectively have the storage footprint of 16 full-res EXRs (although the math works out differently).
That storage hit is no joke.

But Evotis had different goals in mind: subpixel-perfect compositing. Deep compositing has mostly solved this in the high-end VFX world. I would just use the samples to rebuild a dynamic-resolution image. There is basically no overhead in terms of calculation. Although running 100 passes' worth of samples through a reconstruction filter like the default "Tent" sounds like a lot, the averaging is very quick. (Every progressive renderer like Corona does this every time the VFB updates for a pass anyway...)
Compositing every frame of an animation that way is deadly for performance. Updating the VFB at a different resolution for a single image? Quite easy on resources.
Also, this would be a checkbox type of thing, if such an idea were ever implemented. Saving samples to disk is not really difficult to do, which is why I want to tackle it as a side project :]
"A general increase was to be expected, as in a flat only one set of values per pixel gets saved, regardless of its contents, producing, not accounting for compression, content-independent file sizes, whereas Evotis' file sizes greatly depend on the images content. Nevertheless, a file size, on average, 140 times larger, especially for such a simple scene, for the adaptively optimized Evotis renderings, exceeds the scope of possibly being usable by far. Even the resampled 2-8 version is unlikely to be properly usable, as the files are, on average, 16.9 times as large as the flat rendering."

It's not just the extra data, but also the ~3x longer render times (power consumption) and, after that, the extra artistic & engineering work. The latest beta tested, on which the study was performed, didn't have deep support...

"In conclusion it is very difficult to predict whether Evotis will be successful and widely accepted in the industry this early in its development. The many advantages, non-uniform images, resolution independence, appending samples and sub-pixel-perfect object separation, as well as the disadvantages, no samples in depth, longer render times, insufficient optimization options and larger files, have all been explained in detail. While including depth sampling will be essential, improving render times and minimizing file size will be important, but not as critical for the short term, 1-2 years, progression. After having included deep support broadening the Nuke support and developing new techniques and approaches based on a sample workflow, not easily possible with flats, will be decisive, while constantly improving performance.
If this development phase will be successful and Evotis becomes an open standard it could well be possible for Evotis to be an industry-wide replacement for deep within the next 5-7 years, but it will probably never replace flats, just as deeps will never be able to replace flats.
The other question is: will this timeframe be fast enough considering all the movement within the industry at the moment? Possibly a new approach will emerge over the next few years making rendered images as an intermediate obsolete altogether."

Source: "Performance Evaluation of Evotis within a Visual Effects Environment" by Tim Klink
https://www.hdm-stuttgart.de/vfx/alumni/bamathesis/pdf_025

... and then there's Flame 2020, IFX Clarisse Builder, Houdini, Pixar, ChaosGroup, AMD, Apple, IBM... even Blender. Humanity surprises me bit by bit. Interesting times for my humble little mind.
« Last Edit: 2019-04-15, 22:19:08 by burnin »

2019-04-15, 22:43:21
Reply #25

dfcorona

  • Active Users
  • **
  • Posts: 118
    • View Profile
What resolution was the animation? I find that at 2k, even the Corona native one is decently fast to use on animation, with good quality.

The quality of both the nVidia and Intel AI denoisers is simply not good enough (or even close to good enough) for finals in my eyes (with Intel being worse at refraction), but at least nVidia is sky-high fast, making it a very cool IR companion. The only benefit I've seen for Intel is that nodes don't have GPUs, so denoising on them can only be done with the native one or the Intel one. But if someone finds the AI denoise acceptable for finals, that's up to them; I find it very far below the acceptable threshold.
I really don't want final images from one of the best ray tracers on the market, one focused on photorealism, to be smeared and painterly like photon mapping in 1995. I might as well fully switch to Unreal instead and have a sharp result in zero time.

Quote
without losing any detail

I don't find this to be true at all from my standpoint, but you can post a single frame if you would like (ideally before & after). If you are satisfied, though, that's good; that's all that matters.
Here is an example with a 7.0 noise limit and Intel denoise. It did a fantastic job and kept all the detail, especially in the grass and vegetation. It only added 10 sec to a 21 min render, but saved at least a third of the render time. These are of course straight out of the VFB.

2019-04-16, 13:17:39
Reply #26

SairesArt

  • Active Users
  • **
  • Posts: 687
  • Pizza | The Cheesen One
    • View Profile
    • SairesArt Portfolio
Thanks for the awesome resource! A really interesting read.
I hope that Bachelor's thesis got a 1.0 :]

So they already did the resolution-independence thing, as was to be expected. 240p to 1080p, and the results are basically perfect, just as I would have imagined (see the attached image).
"500% zoom-in of the resulting scaled up Evotis (a), of a flat rendered natively at full HD (b), and of a flat, 462x260px, scaled up to full HD using the cubic filtering algorithm (c) are shown."

"but it will probably never replace flats, just as deeps will never be able to replace flats." - Yes, it's a checkbox sidegrade to your workflow, specifically for stuff like print. Obviously a minor improvement at the cost of an insane space requirement...

If you can store it somewhere, then we can talk about coding this ;)
One hard drive per rendered frame, what's the issue? /s
Ohh shoot, passes, I totally forgot :S
Well, it shall live on as a programming showcase to bolster my portfolio and self-esteem...

2019-04-16, 14:47:57
Reply #27

romullus

  • Global Moderator
  • Active Users
  • ****
  • Posts: 5903
  • Let's move this topic, shall we?
    • View Profile
    • My Models
Here is an example with a 7.0 noise limit and Intel denoise. It did a fantastic job and kept all the detail, especially in the grass and vegetation. It only added 10 sec to a 21 min render, but saved at least a third of the render time. These are of course straight out of the VFB.

Sorry, I don't get it. The two images are almost identical. The denoiser didn't do anything there; it just added another 10 seconds to your render time.
I'm not Corona Team member. Everything i say, is my personal opinion only.

2019-04-16, 15:40:02
Reply #28

dfcorona

  • Active Users
  • **
  • Posts: 118
    • View Profile
Here is an example with a 7.0 noise limit and Intel denoise. It did a fantastic job and kept all the detail, especially in the grass and vegetation. It only added 10 sec to a 21 min render, but saved at least a third of the render time. These are of course straight out of the VFB.

Sorry, I don't get it. The two images are almost identical. The denoiser didn't do anything there; it just added another 10 seconds to your render time.
Lol, you really don't see the difference? Denoising in production is only meant for the last 10% of noise, but that last 10% can often mean a third of the render time. I can definitely tell the difference between the two, and if we hadn't denoised the renders, the animation would have been a mess of dancing noise.

2019-04-16, 16:50:06
Reply #29

dfcorona

  • Active Users
  • **
  • Posts: 118
    • View Profile
Here is another version, if this helps. It's at a 12.0 noise limit: 8 min 7 sec with no denoise and 8 min 2 sec with denoise. Don't ask me how it rendered faster with the denoiser; that wasn't the first render without denoise.