[Max] General Discussion / Re: Corona render speed
« on: 2014-09-23, 10:17:24 »

> But... for me... Corona wins! Hahaha

Haha, same here. :-)
> At the moment I sort of don't believe out-of-core rendering doesn't come at a decent performance cost

Oh, it most certainly does! But how big the performance hit is depends on numerous factors. If you render a scene where the geometry is so heavy and the rendering so complex that the renderer cannot keep all the geometry needed for a single bucket in memory at the same time, things can get awfully slow. However, if all that happens is that it can't keep the entire scene in memory and does most of the "paging" to RAM between buckets instead, the performance cost is much smaller. Basically, the less you go out of core, the better. A little bit of out-of-core only hurts performance by a very small amount, but doing it constantly means you might as well not render on the GPU at all.
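As a rough back-of-the-envelope illustration of why constant out-of-core access hurts so much (the bandwidth figures below are my own ballpark numbers, not from this thread): a GPU reading from its own VRAM has an order of magnitude more bandwidth than one paging data over PCIe.

```python
# Ballpark, illustrative bandwidth figures (assumptions, not measurements):
vram_bw_gbs = 224.0  # GB/s, GDDR5 on a GTX 980-class card
pcie_bw_gbs = 16.0   # GB/s, PCIe 3.0 x16 theoretical peak, one direction

slowdown = vram_bw_gbs / pcie_bw_gbs
print(f"Out-of-core fetches are roughly {slowdown:.0f}x slower than VRAM reads")
```

Real-world slowdown depends on access patterns and caching, but the gap is why "a little bit of out-of-core" is survivable while "constantly out-of-core" is not.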
> For that GTX you need a node, the GTX alone does nothing :) Of course you can assemble a node for that GPU for 400€ or 500€ with a cheap CPU, but then you are limited to GPU rendering. What will you do when you have to render a scene that does not fit in 4 GB of video RAM?

What will you do with your CPU farm when the scene doesn't fit in 16 GB of memory? Add more to each machine?
> The picture I showed you shows a 980 against a 780 Ti. Where did you see those benchmarks that put the 970 at the same performance level as a Titan? I'm really interested, and I'm also interested in a 980 vs Titan benchmark. I wasn't able to find anything that is not related to real-time gaming performance; the only GPGPU benchmark I found is that one.

Ask and ye shall receive. It's for a 980 rather than a 970, but it should be relatively easy to extrapolate performance based on clock speeds and core counts. The 980 is consistently around 50% faster than the 780 Ti, which itself is roughly comparable to a Titan in terms of performance. The 970 should give roughly 75% of the 980's performance, putting it firmly ahead of the Titan for single-precision computing. Double precision is a different matter, but ray tracing is all single precision.
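The clock-and-core extrapolation mentioned above can be done in one line. The specs below are Nvidia's published CUDA core counts and base clocks; the assumption that throughput scales linearly with cores × clock is mine, and real workloads won't scale perfectly.

```python
# Published specs (approximate; linear scaling with cores * clock is an assumption):
cores_980, mhz_980 = 2048, 1126  # GTX 980: CUDA cores, base clock in MHz
cores_970, mhz_970 = 1664, 1050  # GTX 970

ratio = (cores_970 * mhz_970) / (cores_980 * mhz_980)
print(f"GTX 970 ~ {ratio:.0%} of GTX 980 throughput")
```

This comes out just under 76%, in line with the "roughly 75%" estimate above.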
> The thing is that Redshift, as far as I know, doesn't store the scene in the GPU memory, because parts of the calculation are managed and done by the CPU in system memory. That is why it does not have the GPU memory limit, and I think algorithms like the ones used in iray, Octane or even Corona cannot be applied in that regard.

Not sure what you mean here. Redshift has memory buffers on the GPU. It stores the entire scene representation in GPU memory, and then it has buffers for things like triangles and textures. If the geometry or the textures exceed the buffers, it will dynamically offload stuff, but if the scene description itself can't fit, it'll fail to render.
> Captain Obvious, what do you mean by "results aren't great"? Are you referring to render time or to the final result?

Both, I guess? iray can produce really good results, but it suffers from some pretty severe workflow limitations: limited texturing, it's always limited by GPU memory (unlike Redshift, and Octane is also going in that direction), and it really is quite slow. For iray to be really fast you need a big cluster of GPU machines, and preferably Quadro cards to get the additional memory (which you will need), so the whole thing is going to be massively expensive. If you spent the same amount of money and just rendered in Corona or Maxwell, or hell, even V-Ray, you'd probably get better results in less time.
> Just to illustrate my point about why GPU rendering has lost its opportunity, at least for the moment, this is the improvement (benchmark-based) of the Maxwell series vs the previous tech :P

I respectfully but strongly disagree. If you look at the new generation of Nvidia cards, the GTX 970 basically gives you Titan-level performance (albeit with a third less memory) for a significantly lower cost, at a significantly lower power draw. The Titan was released about a year and a half ago, cost $999 and consumed 250 watts. Now, the GTX 970 gives you about the same performance for $329 and about 150 watts. In terms of performance per TCO (total cost of ownership), which is what really matters for a farm, the 900 series is a huge improvement over the previous generation. Raw performance isn't vastly improved, but the power and price reductions mean you can buy more of them.
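To make the performance-per-TCO point concrete, here's a sketch using the prices and power figures from the post. Roughly equal rendering performance between the two cards is assumed, and chassis, CPU and electricity costs are ignored.

```python
# Figures from the post; equal rendering performance between cards is assumed.
titan = {"usd": 999, "watts": 250}
gtx970 = {"usd": 329, "watts": 150}

perf_per_dollar_gain = titan["usd"] / gtx970["usd"]    # purchase-price advantage
perf_per_watt_gain = titan["watts"] / gtx970["watts"]  # running-cost advantage
print(f"{perf_per_dollar_gain:.2f}x per dollar, {perf_per_watt_gain:.2f}x per watt")
```

So per dollar of hardware you get about three times the throughput, and per watt about two thirds more, which is why a farm of 970s beats a farm of Titans even at equal per-card speed.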
> The problem is that GPU render engine evolution is SLOW, and the reason is the GPU's limitations: as soon as it has to work with system RAM instead of GPU RAM, the GPU render suddenly becomes SLOOOOOW.

You really need to have a look at Redshift :-) It's not a perfect render engine, but it definitely shows that fast development and a plethora of features are possible on the GPU as well.
> (But I didn't try Redshift yet, so take this with a slight grain of salt.)

Redshift is basically... V-Ray on the GPU. It's got similar settings, similar setup, similar algorithms: light cache, irradiance cache, adaptive anti-aliasing, etc.
> "LtWCJTK.png" is marginally noisier. I measured. It's something like 2.4% more noisy.
>
> How did you measure it without the reference image? If you have some reliable metric for estimating noise, tell me, I can use it for adaptivity ;)

I just measured the maximum deviation between each pixel and its neighbours. It doesn't work for calculating the error in a single image (you need the reference image for that), but it works for checking which of two images has the greater variance.
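A minimal sketch of the kind of neighbour-deviation measurement described above; this is my own NumPy reconstruction, so the function name and the exact neighbourhood (right/down differences only) are assumptions, not the actual tool used in the thread.

```python
import numpy as np

def max_neighbour_deviation(img):
    """Largest absolute difference between a pixel and its right/down neighbour.

    This cannot estimate absolute error without a reference image, but it
    gives a crude ranking of which of two renders is noisier.
    """
    img = np.asarray(img, dtype=np.float64)
    dx = np.abs(np.diff(img, axis=1))  # horizontal neighbour differences
    dy = np.abs(np.diff(img, axis=0))  # vertical neighbour differences
    return max(dx.max(), dy.max())

# A flat image has zero deviation; a single bright pixel dominates.
flat = np.zeros((4, 4))
noisy = flat.copy()
noisy[2, 2] = 5.0
print(max_neighbour_deviation(flat), max_neighbour_deviation(noisy))  # 0.0 5.0
```

Comparing the returned value for two renders of the same scene ranks them by noisiness, which is all the comparison above needed.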
edit: So that's image 1 that's cleaner.
Have you actually tried Redshift? The whole "flying foot" thing is likely a modelling issue; I haven't had any problems with missing shadows and the like. Redshift works really well in brute force + cache mode, and because the first bounce is brute force, it catches basically every last detail. I'm sure you can configure RS to give you problems like this, just like you can with V-Ray.

Which is silly, because you can use any render engine to produce unrealistic results. Redshift doesn't produce quite as realistic images as Corona, but on the other hand it is WAY faster, deals with large scenes much better, and has a bunch of other "production features" like volumes and whatnot. If you need really, really good interior renders and don't mind the render time, Corona is a great choice. If you need features Corona lacks, or you need really fast render times, and you can live with the slight reduction in realism, Redshift is a great choice. It's basically V-Ray on the GPU.
It's not silly, sir, I'm just saying what you're saying too, in a raw way. If you can bear this kind of render then yes, it's worth its cost.
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image4_1200_900.jpg
The good old chair's flying foot, etc... Of course it's more bearable in video production.