Messages - Captain Obvious

31
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-23, 10:17:24 »
But... for me... Corona wins! Hahaha
Haha, same here. :-)

32
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-22, 11:34:10 »
Arnold may be slow at some things, but it's fast at dealing with extremely heavy geometry. What do you think would happen if you tried to render 200 million unique triangles in Corona?

33
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-21, 23:28:22 »
I saw some tests comparing Arnold and Redshift while constantly upping the amount of geometry. The machine had enough RAM to cope, but only 2-3 gigs of VRAM. Somewhere around the 200 million triangle mark, Arnold overtook Redshift because of Redshift's out-of-core overhead. But still, that's 200 million unique, un-instanced triangles with just a couple of gigs of VRAM, rendered with full GI.

34
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-21, 22:11:35 »
At the moment I sort of don't believe that out-of-core rendering doesn't come at a decent performance cost
Oh, it most certainly does! But how big the performance detriment is depends on numerous factors. Specifically: if you render a scene where the geometry is so heavy and the rendering so complex that it cannot keep all the geometry needed for that one bucket in memory at the same time, things can get awfully slow. However, if all that happens is that it can't keep the entire scene in memory and does most of the "paging" to RAM between buckets instead, the performance cost is much smaller. Basically, the less you go out of core, the better. A little bit of out-of-core only hurts performance by a very small amount, but doing it constantly means you might as well not render on the GPU at all.

For image maps, apparently the performance hit is so small that they don't even try to keep everything in memory. The default is a 128 megabyte cache that they just stream everything to, chunk by chunk. Because it can load individual pixels straight from the images in RAM, it doesn't really matter much.

Of course, even if they do add out-of-core memory management to Octane, it will probably benefit less than Redshift does, since Redshift is a bucket renderer and Octane renders pixels "randomly," which would be terrible for geometry swapping.
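To make the trade-off concrete, here is a minimal sketch (not Redshift's or Octane's actual code) of the idea described above: a fixed device-memory budget with an LRU cache that pages chunks in from system RAM on demand. A coherent, bucket-style access pattern mostly pages between buckets and stays cheap; a scattered access pattern keeps evicting and re-uploading, which is where the big slowdown comes from. All names and sizes here are illustrative.

```python
from collections import OrderedDict

class OutOfCoreCache:
    """Toy model of a fixed-size on-device cache for texture/geometry chunks."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.chunks = OrderedDict()   # chunk_id -> size, kept in LRU order
        self.uploads = 0              # host->device transfers ("going out of core")

    def request(self, chunk_id, size):
        if chunk_id in self.chunks:            # already resident: cheap
            self.chunks.move_to_end(chunk_id)
            return
        while self.used + size > self.budget:  # evict least-recently-used chunks
            _, evicted_size = self.chunks.popitem(last=False)
            self.used -= evicted_size
        self.chunks[chunk_id] = size           # page the chunk in from host RAM
        self.used += size
        self.uploads += 1

# A bucket renderer touches a small, coherent set of chunks per bucket, so most
# paging happens between buckets; a random pixel order touches chunks all over
# the scene and thrashes the cache, which is why it suits geometry swapping badly.
cache = OutOfCoreCache(budget_bytes=128 * 2**20)   # e.g. a 128 MB streaming cache
for chunk in [0, 1, 2, 1, 0, 2, 3]:                # fairly coherent access pattern
    cache.request(chunk, size=48 * 2**20)
print(cache.uploads, "uploads")
```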


The rumours are that the 8 gig 980s will be out later this year. I heard November-December.

35
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-21, 20:26:07 »
For that GTX you need a node; the GTX alone does nothing :) Of course you can assemble a node for that GPU for 400€ or 500€ with a cheap CPU, but then you are limited to GPU rendering. What will you do when you have to render a scene that does not fit in 4 GB of video RAM?
What will you do with your CPU farm when the scene doesn't fit in 16 gigs of memory? Add more to each machine?

The Octane people are adding out-of-core stuff. Redshift already has it. It won't be very long before out-of-core is the standard for GPU renderers, at which point 4 gigs of VRAM will be plenty for all but the craziest of scenes.

36
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-21, 18:23:01 »
The picture I showed you shows a 980 against a 780 Ti. Where did you see those benchmarks that put the 970 at the same performance level as a Titan? I'm really interested, and I'm also interested in a 980 vs Titan benchmark. I wasn't able to find anything that isn't related to real-time gaming performance; the only GPGPU benchmark I found is that one.
Ask and ye shall receive. It's for a 980 rather than a 970, but it should be relatively easy to extrapolate performance based on clock speeds and core counts. The 980 is consistently around 50% faster than the 780 Ti, which itself is roughly comparable to a Titan in terms of performance. The 970 should give roughly 75% of the 980's performance, putting it firmly ahead of the Titan for single-precision computing. Double-precision is a different matter, but ray tracing is all single-precision.
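The back-of-the-envelope arithmetic behind that claim (the 75% figure is the rough estimate above, not a measured number):

```python
# Rough single-precision throughput relative to a 780 Ti (~ original Titan).
titan  = 1.00            # 780 Ti / Titan baseline
gtx980 = titan * 1.50    # "consistently around 50% faster than the 780 Ti"
gtx970 = gtx980 * 0.75   # ~75% of a 980, estimated from cores x clocks

print(f"GTX 980 vs Titan: {gtx980:.2f}x")   # 1.50x
print(f"GTX 970 vs Titan: {gtx970:.2f}x")   # ~1.13x, i.e. ahead of the Titan
```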

37
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-20, 17:22:03 »
The thing is that Redshift, as far as I know, doesn't store the scene in GPU memory, because there are parts of the calculation that are managed and done by the CPU in system memory. That is why it does not have the GPU memory limit, and I think algorithms like the ones used in iRay, Octane or even Corona cannot be applied in that regard.
Not sure what you mean here. Redshift has memory buffers on the GPU. It stores the entire scene representation in GPU memory, and then it has buffers for things like triangles and textures. If the geometry or the textures exceed the buffers, it will dynamically offload stuff, but if the scene description itself can't fit, it'll fail to render.
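A toy model of the behaviour described above (my illustration, not Redshift's actual code or settings): the scene description and the fixed buffers have to fit in VRAM, while triangle and texture data that exceed their buffers spill out of core.

```python
GB = 2**30

def plan_gpu_memory(vram, scene_desc, tri_bytes, tex_bytes, tri_buffer, tex_buffer):
    """Toy model: scene description + fixed buffers must fit; bulk data may spill."""
    if scene_desc + tri_buffer + tex_buffer > vram:
        raise MemoryError("scene description and buffers don't fit -> render fails")
    return {
        "triangles_out_of_core": tri_bytes > tri_buffer,  # paged in on demand
        "textures_out_of_core": tex_bytes > tex_buffer,
    }

# 4 GB card, 6 GB of triangles and 3 GB of textures: both spill, but it renders.
print(plan_gpu_memory(vram=4 * GB, scene_desc=1 * GB,
                      tri_bytes=6 * GB, tex_bytes=3 * GB,
                      tri_buffer=2 * GB, tex_buffer=1 * GB))
```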

Also, Redshift supports temporal interpolation for both the irradiance point cache (HD cache equivalent) and the irradiance cache, meaning you can blend values from several nearby frames. That suppresses flickering quite effectively. V-Ray supports the same thing. It does mean you have to pre-render the pre-passes, but it's easy enough to automate if you have a render farm, and easier still if you're rendering locally.
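The general idea of that temporal blending, as a minimal sketch (not V-Ray's or Redshift's implementation; it assumes the pre-passes store values at the same cache points in every frame):

```python
import numpy as np

def blend_gi_cache(cached_frames, frame, radius=2):
    """Average GI cache values over nearby frames to suppress flicker."""
    nearby = [cached_frames[f] for f in range(frame - radius, frame + radius + 1)
              if f in cached_frames]
    # Equal weights for simplicity; weighting by temporal distance is also common.
    return np.mean(nearby, axis=0)

# Noisy cached irradiance at three cache points for pre-rendered frames 10..14:
rng = np.random.default_rng(0)
frames = {f: 1.0 + 0.1 * rng.standard_normal(3) for f in range(10, 15)}
print(blend_gi_cache(frames, frame=12))  # far steadier than any single frame
```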

38
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-20, 14:35:17 »
My point about Redshift was mostly that it demonstrates that GPU memory isn't necessarily a major impediment. I see no reason why it wouldn't be possible to implement exactly the same rendering algorithms that Corona uses, but with Redshift's GPU engine. I don't really like their approach, frankly. Aiming to replicate V-Ray or mental ray seems like a step backwards. It does, however, show that GPU rendering is a valid approach for high-end production work, and it really is massively fast. Even with my lame-ass Quadro 2000M, Redshift is significantly faster than V-Ray, iray, Octane and even Corona. I still prefer Corona because it gives better quality and has a better workflow, but in terms of performance... No, there is no way Corona could compete with Redshift on speed if you had a machine with a GTX 980/970 or two. Not even on big heavy scenes. Especially not on big heavy scenes, in fact!

As I said: I still prefer Corona. But if the Redshift team manages to match Corona's rendering methodology, then I'm not so sure any more.


Edit: anyway, it's a moot point at this stage. Redshift is what it is. It's a great choice for some people, but for me Corona is a better choice. The only reason I brought it up is because it shatters a lot of myths about GPU rendering (memory limitations, limited to naive path tracing, lousy integration with the host software, etc).

39
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-20, 13:25:36 »
Captain Obvious, what do you mean by "results aren't great"? Are you referring to render time or to the final result?
Both, I guess? iray can produce really good results, but it suffers from some pretty severe workflow limitations. Texturing is limited, it's always constrained by GPU memory (unlike Redshift, and Octane is also going in that direction), and it really is quite slow. For iray to be really fast you need a big cluster of GPU machines, and preferably Quadro cards to get additional memory (which you will need), so the whole thing is going to be massively expensive. If you spent the same amount of money and just rendered in Corona or Maxwell, or hell even V-Ray, you'd probably get better results in less time.

One of the biggest strengths of Corona is that it's extremely well-integrated into 3ds Max -- more so than iray -- which means you don't need to adapt your workflow. It's not limited by GPU memory, and it supports (more or less) all the native textures, material blending, render elements, etc etc etc.



Just to illustrate my point about why GPU rendering has lost its opportunity, at least for the moment: this is the improvement (benchmark-based) of the Maxwell series vs the previous tech :P
I respectfully but strongly disagree. If you look at the new generation of Nvidia cards, the GTX 970 basically gives you Titan-level performance (albeit with 1/3rd less memory) for a significantly lower cost, at a significantly lower power level. The Titan was released about a year and a half ago, cost $999 and consumed 250 watts. Now, the GTX 970 gives you about the same performance for $329 and about 150 watts. In terms of performance per TCO* -- which is what really matters for a farm -- the 900-series is a huge improvement over the previous generation. Performance isn't vastly improved, but the power reduction and price reduction means you can buy more of them.

Additionally, the fact that they're able to get such great performance out of a card using a relatively low amount of power, despite being manufactured on the same 28 nm process, means that they have plenty of headroom to grow.



Honestly, in terms of hardware performance evolution, GPU rendering has never looked better. The new Haswell-E chips are a good improvement as well, but mostly for cost reasons. They're not much faster than the previous generation, but they are much cheaper.

* Total Cost of Ownership
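A rough illustration of the performance-per-TCO point, using the prices and power figures above. The electricity price, the three-year lifetime and the assumption of roughly equal rendering performance are mine, so treat the numbers as a sketch:

```python
def tco(price_usd, watts, years=3, usd_per_kwh=0.15):
    """Purchase price plus electricity for a farm node running 24/7."""
    kwh = watts / 1000 * 24 * 365 * years
    return price_usd + kwh * usd_per_kwh

titan  = tco(999, 250)   # ~$999 purchase + ~$985 of power
gtx970 = tco(329, 150)   # ~$329 purchase + ~$591 of power

print(f"Titan 3-year TCO:   ${titan:.0f}")
print(f"GTX 970 3-year TCO: ${gtx970:.0f}")
print(f"Advantage at roughly equal performance: {titan / gtx970:.1f}x")  # ~2.2x
```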

The problem is that GPU render engine evolution is SLOW, and the reason for that is the limitations of the GPU, like having to work with system RAM instead of GPU RAM -- suddenly the GPU renderer becomes SLOOOOOW.
You really need to have a look at Redshift :-) it's not a perfect render engine, but it definitely shows that fast development and a plethora of features is possible on the GPU as well.

40
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-19, 10:55:20 »
Bunkspeed is literally just iray packaged up in an easy-to-use interface. If you want to get an idea of how well it would perform rendering interiors, just set up an interior to render in iray inside Max or whatever.

(spoiler: as evidenced by their architectural gallery, the results aren't great)

41
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-19, 08:54:03 »
(But I haven't tried Redshift yet, so take this with a slight grain of salt)
Redshift is basically... V-Ray on the GPU. It's got similar settings, similar setup, similar algorithms. Light cache, irradiance cache, adaptive anti-aliasing, etc.


It's much faster than Corona (with a decent GPU, at least), but it doesn't produce as nice results.

42
[Max] Resolved Bugs / Re: Wierd DOF
« on: 2014-09-18, 09:44:24 »
Simple thought experiment:


Imagine that the amount of depth-of-field blur were based only on whether the reflective or refractive surface itself was in focus. How would a camera lens work then? If focus were on the front glass element of the camera itself, then everything would be in focus, and if focus were anywhere else, then everything would be out of focus.
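For reference, the physically correct behaviour: the blur is set by how far the point you actually see is from the focal plane (the circle of confusion), regardless of whether the glass or mirror the light passes through is itself in focus. A quick sketch using the standard thin-lens circle-of-confusion formula:

```python
def coc_diameter_mm(subject_mm, focus_mm, focal_len_mm, f_number):
    """Circle-of-confusion diameter for a thin lens (standard formula)."""
    aperture = focal_len_mm / f_number                    # aperture diameter
    return (aperture
            * abs(subject_mm - focus_mm) / subject_mm
            * focal_len_mm / (focus_mm - focal_len_mm))

# 50 mm f/2 lens focused at 2 m. A point seen in a mirror blurs according to its
# total optical distance (mirror distance + distance behind the mirror), not
# according to whether the mirror surface itself happens to be in focus.
for d in (1000, 2000, 3000):  # distances in mm
    print(f"{d} mm -> CoC {coc_diameter_mm(d, 2000, 50, 2.0):.3f} mm")
```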

43
[Max] General Discussion / Re: Experiment: Noise levels
« on: 2014-09-17, 09:25:50 »
"LtWCJTK.png" is marginally noisier. I measured. It's something like 2.4 % more noisy.

edit: So that's image 1 that's cleaner.
How did you measure it without the reference image? If you have some reliable metric for estimating noise, tell me -- I can use it for adaptivity ;)
I just measured the maximum deviation between each pixel and its neighbours. It doesn't work for calculating the error in a single image (you need the reference image for that), but it works for checking which of two images has the greater variance.
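A minimal sketch of that kind of measurement (assuming greyscale images as NumPy arrays; the choice of 4-neighbourhoods and averaging the per-pixel maxima into a single number are my assumptions, since the post doesn't spell them out):

```python
import numpy as np

def neighbour_deviation(img):
    """Per-pixel maximum absolute difference to the 4 axis-aligned neighbours,
    averaged over the image into one comparable number."""
    img = img.astype(np.float64)
    dv = np.abs(img[1:, :] - img[:-1, :])   # vertical neighbour differences
    dh = np.abs(img[:, 1:] - img[:, :-1])   # horizontal neighbour differences
    # Fold the difference maps back to full size and take the per-pixel maximum.
    d = np.zeros_like(img)
    d[:-1, :] = np.maximum(d[:-1, :], dv)
    d[1:, :]  = np.maximum(d[1:, :],  dv)
    d[:, :-1] = np.maximum(d[:, :-1], dh)
    d[:, 1:]  = np.maximum(d[:, 1:],  dh)
    return d.mean()

# Which of two renders is noisier? The one with the larger neighbour deviation.
rng = np.random.default_rng(1)
clean = np.full((64, 64), 0.5) + 0.01 * rng.standard_normal((64, 64))
noisy = np.full((64, 64), 0.5) + 0.02 * rng.standard_normal((64, 64))
print(neighbour_deviation(noisy) / neighbour_deviation(clean))  # > 1
```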

44
[Max] General Discussion / Re: Experiment: Noise levels
« on: 2014-09-16, 15:49:38 »
"LtWCJTK.png" is marginally noisier. I measured. It's something like 2.4 % more noisy.

edit: So that's image 1 that's cleaner.

45
[Max] General Discussion / Re: Corona render speed
« on: 2014-09-05, 09:47:52 »
Which is silly, because you can use any render engine to produce unrealistic results. Redshift doesn't produce quite as realistic images as Corona, but on the other hand it is WAY faster, deals with large scenes much better, and has a bunch of other "production features" like volumes and whatnot. If you need really really good interior renders and don't mind the render time, Corona is a great choice. If you need features Corona lacks, or you need really fast render times, and you can live with the slight reduction in realism, Redshift is a great choice. It's basically V-Ray on the GPU.

It's not silly, sir, I'm just saying what you're saying too, only in a raw way. If you can bear this kind of render then yes, it's worth its cost.
https://www.redshift3d.com/cms/ce_image/made/cms/assets/user_gallery/Image4_1200_900.jpg
The good old chair's flying foot, etc... of course it's more bearable in video production.
Have you actually tried Redshift? The whole "flying foot" thing is likely a modelling issue; I haven't had any problems with missing shadows and the like. Redshift works really well in brute force + cache mode, and because the first bounce is brute force it catches basically every last detail. I'm sure you can configure RS to give you problems like this -- just like you can with V-Ray.
