Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Ryuu

Pages: [1] 2 3 ... 20
General Discussion / Re: CXR Autosave
« on: 2019-03-28, 17:05:35 »
In the autosave file name pattern you can use a subset of the "tags" used in the render stamp. In the system settings dialog (the one you use to configure autosave), you can click the "?" button next to the VFB title setup to see all the available tags. Some of these are not available for autosave (like the rays/sec metric), but you can use %f, which will be replaced by the filename of the current scene.
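To make the tag idea concrete, here is a minimal sketch of how such a pattern expansion could work. Only %f (replaced by the current scene's filename) is confirmed above; the %p tag and the function itself are hypothetical illustrations, not Corona's actual code.

```python
import os

def expand_autosave_pattern(pattern: str, scene_path: str, pass_no: int) -> str:
    """Expand render-stamp style tags in an autosave file name pattern.

    %f (current scene file name) is the tag mentioned in the post above;
    %p is a hypothetical pass-count tag added purely for illustration.
    """
    scene_name = os.path.splitext(os.path.basename(scene_path))[0]
    return (pattern
            .replace("%f", scene_name)
            .replace("%p", str(pass_no)))

# expand_autosave_pattern("autosave_%f_%p.exr", "C:/scenes/interior.max", 120)
# -> "autosave_interior_120.exr"
```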

Feature requests / Re: Lightmix should correspond accurately
« on: 2019-01-04, 14:08:41 »
Your suggestion is perfectly valid as long as only a single light is allowed per LightSelect element. "Unfortunately" we allow multiple lights to be assigned to a single element, and then this wouldn't really work unless all of the lights have the same color.
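A tiny sketch (my illustration, not Corona's code) of why this breaks down: a LightSelect element stores only the summed RGB contribution of its lights, so LightMix can rescale or tint that sum as a whole, but it can no longer separate the individual lights afterwards.

```python
def render_element(lights):
    """Sum the per-light RGB contributions into one stored element."""
    return tuple(sum(l[c] for l in lights) for c in range(3))

def lightmix(element, tint, intensity):
    """All LightMix can do post-render: one tint/intensity for the whole sum."""
    return tuple(c * t * intensity for c, t in zip(element, tint))

warm = (2.0, 1.0, 0.2)   # orange light
cool = (0.2, 1.0, 2.0)   # blue light
element = render_element([warm, cool])   # the two colors are fused into one sum
# Tinting the element now affects both lights at once; recovering the
# original orange/blue split from the summed values is impossible.
```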

This should be fixed in today's daily build. Could you please just confirm that your problem is indeed fixed, so that we can close this thread?

Bug Reporting / Re: Light Lister doesn't show Sun
« on: 2018-11-25, 08:26:04 »
Yes, we know about that. It's one of the limitations of the current implementation. We'll be expanding the lister in v4, adding the sun & light material to the light lister as well as other object types (cameras & proxies, maybe some others).

We'll leave this thread open for a few days, as I guess others might view this as a bug as well and want to report it; then we'll move this thread to resolved. In the meantime, if you have any other suggestions on how to expand the lister, please note them in the feature requests section.

Thank you for the answer! If I understand correctly, given the same image dimensions and bit depth, the only thing that matters is the number of channels? So a 1024x1024 8-bit texture will take 1 MB of RAM if it's greyscale, 3 times as much if it's RGB, and 4 times as much if it's RGBA?

Yes, that's correct.
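The arithmetic above is simply width × height × channels × bytes per channel. A minimal sketch (ignoring things a real renderer might add, such as mip-maps or padding):

```python
def texture_ram_bytes(width, height, channels, bits_per_channel):
    """Uncompressed in-memory footprint of a texture; disk size is irrelevant."""
    return width * height * channels * (bits_per_channel // 8)

MB = 1024 * 1024
# The 8-bit 1024x1024 cases discussed above:
greyscale = texture_ram_bytes(1024, 1024, 1, 8) / MB   # 1.0 MB
rgb       = texture_ram_bytes(1024, 1024, 3, 8) / MB   # 3.0 MB
rgba      = texture_ram_bytes(1024, 1024, 4, 8) / MB   # 4.0 MB
```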

Also, which is the better option - 3 single-channel textures, or 1 RGB texture - in terms of RAM consumption and scene parsing time?

Good question. I never tried to measure it, but as long as we're talking about 3 files vs. 1 file, there should not be any noticeable difference in memory consumption or parsing time. When we get to 3000 vs. 1000, the difference might be a bit more noticeable in terms of parsing time, as there is some small overhead in opening each file, etc.

The overhead is much higher when the texture files are stored in a network location, as there are higher latencies involved in opening the individual files, etc.

So, I would definitely not worry about this unless you have problems with the parsing times and have a reason to suspect that the texture loading might be the cause of those problems.
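If you do suspect per-file open overhead, a quick way to check it on your own storage is to time opening many small files against reading one big one. This is my own hypothetical measurement sketch, not anything shipped with Corona:

```python
import os
import tempfile
import time

def time_opens(n_files: int, payload: bytes) -> float:
    """Write n_files files, then time opening and reading each one."""
    with tempfile.TemporaryDirectory() as d:
        paths = []
        for i in range(n_files):
            p = os.path.join(d, f"tex_{i}.bin")
            with open(p, "wb") as f:
                f.write(payload)
            paths.append(p)
        start = time.perf_counter()
        for p in paths:
            with open(p, "rb") as f:   # each open() pays the per-file cost
                f.read()
        return time.perf_counter() - start

# On a local disk, 3 files vs. 1 is lost in the noise; on a network share,
# per-open latency starts to dominate once you reach thousands of files.
many = time_opens(300, b"x" * 1024)
one  = time_opens(1, b"x" * 1024 * 300)
```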

One of the key points of Corona philosophy has always been that users should not worry about technical details. So if you ever get to the point where 3 files vs 1 makes any noticeable difference, I would prefer to solve this by optimizing the texture loading code rather than forcing the users to store their assets in a certain way.

Corona stores the images in memory uncompressed, so this really depends on resolution and bit depth. An 8-bit 1024x1024 JPEG will take exactly the same amount of memory as an 8-bit 1024x1024 TIFF, no matter what their size on disk is.

General Discussion / Re: Corona + Nvidia AI Denoising
« on: 2018-11-13, 08:24:00 »
Yes, just switch the denoising mode to "Fast preview (NVIDIA)".

Feature requests / Re: The most wanted feature?
« on: 2018-10-16, 13:57:48 »
Well, autobump is also already implemented, but the other displacement improvements are still valid.

Feature requests / Re: Corona Materials Library Resolution
« on: 2018-09-26, 09:41:00 »
This will apply to all textures loaded by Corona.

Thanks for making the list; we'll surely appreciate any more examples you encounter.

I didn't look at the specific examples you mentioned yet, but generally some of these restarts are impossible to avoid. In some situations all we get from 3ds Max is a notification like "hey, something about this material/object has changed", and we have no way of knowing what that something is, so we play it safe and restart the IR.

We could manually check whether the state of the material/map/object changed in any significant way, but that would add extra processing to IR restarts (slowing them down), and there would be plenty of opportunity for new bugs - instead of restarting when not needed, we might end up not restarting when needed in many cases until all the bugs are fixed (and they never are). So for now, we're just playing it safe and restarting whenever we know something has changed and we cannot reliably verify that it was nothing important.

Well, that was the general case. We'll definitely look into the examples you provided.

And one more example - the IR is restarted when you're navigating the material hierarchy in the slate material editor. We'll try to fix this in the next release.

General Discussion / Re: Corona + Nvidia AI Denoising
« on: 2018-08-17, 10:00:13 »
Nope, the original denoising implementation will stay for the foreseeable future. Both versions will co-exist side by side.

General CG discussion / Re: Nvidia real-time raytracing
« on: 2018-08-15, 16:25:16 »
DISCLAIMER: Anything said in this post (and my subsequent replies) is just my personal opinion and definitely not an official statement.

In other words, the GPU has much more powerful cores (RT Cores) than the most powerful CPUs these days.

You do understand that CPU cores and GPU cores are vastly different, and therefore comparing their counts does not make any sense, right? :) Also, "more powerful" is kinda relative. Are GPUs generally more powerful than CPUs at trivial number crunching? Definitely. Are GPUs generally more powerful than CPUs at parsing C++ source files? I wouldn't be so sure about that.

Now we have a 32-core CPU (1700 euro). And next year, maybe a 64-core CPU

I kinda doubt we'll see a 64-core CPU in the next AMD generation or anytime soon. 48 cores is a bit more likely, but I still wouldn't bet on that for the next generation.

Also all our plug-ins must be converted etc...

Yes, this is one of the major benefits of using the CPU. Unless a plugin has special requirements of the renderer, any new sexy plugin you find will work from day one. If Corona were a GPU renderer, you would have to request that we support this plugin, then wait at the very least a few days until we do, then you could finally try it with Corona - but you would still have to wait for us to debug it, and after many weeks, when all this is finally done and you've had a real chance to try the plugin, you might find that it's really useless for your needs ;) Of course, reality is not that simple: some plugins may need compatibility tweaking even for CPU rendering, and with good APIs, most plugins may work out of the box with a good GPU renderer.

But yes, Corona must look in the GPU direction too, and not only for the VFB or denoising.

We'll definitely start with baby steps by moving all the image post processing to GPU and then we'll see where we will get from there.

Single Quadro RTX 8000                        =              rays per second ($10,000)
Single Quadro RTX 5000                        =              rays per second ($2,300)
Single GeForce RTX 2080                       = ?            rays per second ($699?)
Single Threadripper 2990WX 32-Core Processor  = 13,024,100   rays per second ($1,799)

What exactly does "ray" mean in this context? Is it just computing a single ray-triangle or ray-box intersection? Is it traversing the whole scene and finding out which primitive the ray hit? Does it also include shading the hit, evaluating all the maps, etc.? Is this just for coherent primary rays, or are the numbers still the same for the wildly incoherent secondary rays? You're comparing two sets of numbers which can mean very different things.

My home path tracing code can process 30 megarays per second on a single core. I don't really think this means it's more powerful than Corona :)

I'm definitely not saying that GPUs are not powerful. An optimized GPU renderer may be able to process more data than an optimized CPU renderer (depending on the specific GPU and CPU). But these numbers don't really prove it unless we know what exactly they mean.
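To illustrate the point about what a "ray" can mean: the cheapest interpretation, a single ray-triangle test (here the well-known Moller-Trumbore algorithm), is only a few dozen floating-point operations - far less work than traversing a whole scene and shading the hit. A sketch of my own, not code from any renderer mentioned here:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return hit distance t, or None if the ray misses."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(dirn, q) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv          # distance along the ray
    return t if t > eps else None

# Ray from the origin along +z hits the unit triangle in the z=1 plane:
hit = ray_triangle((0.25, 0.25, 0.0), (0.0, 0.0, 1.0),
                   (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
# hit == 1.0
```

A "megarays per second" figure counting only this kernel says nothing about full scene traversal, shading, or incoherent secondary rays.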

As for the speculation about the 2080 - I'm not following the news; are future consumer GPUs supposed to feature the tensor cores, or are these just a Quadro feature?

Btw, the "AMD Ryzen Threadripper 2990WX 32-Core Processor (×4)" is only 1 CPU, not 4 CPUs. Look at the core/thread counts. "(×4)" is the die count in the 2990WX CPU itself (4 dies). To the Corona team: please fix this benchmark or update it to the recent version of Corona ;)

Yep, I know about that. It's not nice, but on the other hand it's not really critical enough to warrant releasing a new version of the benchmark just because of that. Releasing a new version would mostly invalidate all the previous results. We might do a new version once we finish porting Corona to another platform.

You know, I'm just messing around with the Corona team to wake 'em up. And I hope they are aware of it. Look at this:

I guess that I don't really have to mention that we knew about project Lavina before this blog post went public, right? ;)

Thanks for the report. Yes, this has already been reported and should be fixed in the next daily build (hopefully today). I'm leaving this topic open until we release the fix.

General Discussion / Re: Corona 1.3 Benchmark
« on: 2018-08-07, 09:31:32 »
If we released a new version of the benchmark, there would certainly be some performance difference (for the better, I hope), but the relative performance of 2 different CPUs should still be roughly the same. We haven't done any CPU-specific optimization in a long time, so anything which speeds up Corona on CPU A should (almost) equally speed it up on CPU B.

For this reason we don't have any immediate plans to release a new version of the benchmark, unless we do some optimization targeting specific CPUs (or specific instruction sets) or we port Corona to a new platform.

It's already fixed internally, so we are waiting for the daily build:

Just a minor correction - it's still waiting for code review, which might result in some more work being done. So the expected release in a daily build is in the interval (next week; heat death of the universe) :) But next week is the most probable time for it to find its way into a daily build.
