The CPU advantage
An in-depth look at CPU vs. GPU rendering.
By rendering only on the CPU we avoid all bottlenecks, problems, and limitations of GPU rendering, which are covered in detail below.
You will read a lot about how a GPU offers many thousands of processors able to act together, while a CPU may only have 12 to 64, which sounds like a scary comparison! The truth is more complex though.
Following a simple “more is faster” concept would make no more sense than saying a vehicle must be faster because it has more wheels. As it turns out, a CPU core and a GPU core are also very different things, arranged in very different ways. GPU cores have a smaller instruction set, more limited capabilities, a smaller cache, a lower clock speed, and are created specifically to work together in large groups rather than independently.
Insight64 principal analyst Nathan Brookwood once put it this way: “GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place”.
When it comes to light bouncing around in a 3D scene, the work DOES skip all over the place: exactly which calculation is required next is not easily predictable, and it may not match the other calculations that need to happen at the same time. That is why the architecture of a CPU is much better suited to this task than that of a GPU.
That means the benefit of “thousands of processors” is only fully realized when all the cores are doing more or less the same thing at more or less the same time. The more varied the work each core is doing, and the more branches in its logic, the more effort is spent keeping everything in sync and the less benefit you will see from the GPU architecture; at some level of complexity the CPU will easily overtake the performance of the GPU.
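The divergence penalty described above can be illustrated with a toy model. This is a deliberate simplification for illustration only: real GPU schedulers are far more sophisticated than a single lockstep group, and the branch costs here are made-up numbers.

```python
# Toy model of SIMD "lockstep" execution: lanes in one GPU warp that take
# different branches are serialized, so the group pays for every distinct
# branch any lane takes. A fully independent core pays only for its own
# branch. (Illustrative only; not a model of any real scheduler.)

def warp_cost(branch_costs):
    """Lockstep group: the distinct branches taken are executed one
    after another, and every lane waits for all of them."""
    return sum(set(branch_costs))

def independent_cost(branch_costs):
    """Independent cores: total time is just the slowest single lane."""
    return max(branch_costs)

# All 8 lanes hit the same shading branch (cost 5): lockstep is fine.
uniform = [5] * 8
# Rays scatter into 8 different branches (costs 1..8): lockstep pays for all.
divergent = list(range(1, 9))

print(warp_cost(uniform), independent_cost(uniform))      # 5 5
print(warp_cost(divergent), independent_cost(divergent))  # 36 8
```

With a uniform workload the lockstep group matches the independent cores; once the lanes diverge into eight different branches, the lockstep cost balloons to the sum of all the branches while independent cores still pay only for their slowest lane.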
The images below show two GPU engines allowed to render for 5 minutes (to give a fair power-price comparison, as the CPU used would cost roughly twice as much as the GPU used):
This means that the more complex your scene, and the more light must bounce from surface to surface and material to material, the less benefit you will see from GPU rendering.
This means it will always be possible to set up scenes where GPUs outperform CPUs, and scenes where CPUs outperform GPUs, and neither is “always better” than the other. Overall, according to academic papers and empirical evidence, CPUs and GPUs have roughly the same per-$ and per-Watt performance in non-trivial scenes.
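As a sketch of how such a cost-normalized comparison is made: raw throughput is divided by price and by power draw. The throughput, price, and wattage figures below are hypothetical placeholders chosen to match the rough-parity claim, not benchmark results.

```python
# Normalizing raw render throughput by price and by power draw.
# All numbers are hypothetical placeholders, not measurements; the point
# is the arithmetic behind "per-$" and "per-Watt" comparisons.

def normalized(samples_per_sec, price_usd, watts):
    return {
        "per_dollar": samples_per_sec / price_usd,
        "per_watt": samples_per_sec / watts,
    }

# Hypothetical: the CPU costs twice as much but delivers twice the raw
# throughput of the GPU in a non-trivial scene.
cpu = normalized(samples_per_sec=2.0e6, price_usd=2000, watts=200)
gpu = normalized(samples_per_sec=1.0e6, price_usd=1000, watts=100)

print(cpu)  # {'per_dollar': 1000.0, 'per_watt': 10000.0}
print(gpu)  # {'per_dollar': 1000.0, 'per_watt': 10000.0}
```

With these made-up figures the two come out identical once normalized, even though their raw throughputs differ by a factor of two; real hardware will land somewhere around this kind of parity in complex scenes, per the claim above.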
If GPUs were always better for all tasks, then we’d see computers built entirely without a CPU at all – this doesn’t happen, because each is better for particular types of logic and tasks.
Your PC likely has somewhere between 8 and 64 GB of memory installed. Currently, even with the most expensive graphics cards (which can cost more than some entire computers), the maximum memory is 12 GB.
Also, in many cases the memory your scene can occupy is limited by the GPU with the least memory: GPU memory does not stack! This means you have to take great care when adding extra GPUs to an existing setup, to ensure you are not bottlenecking the performance of your existing cards.
This also means that adding one extra texture or one extra object can take you over the memory limit, so a scene that was rendering fine now renders incorrectly, or not at all. This is not an issue with CPU-based solutions, where the very worst you can expect is a slowdown if you exceed your machine's maximum memory (which you are much less likely to do!)
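The "memory does not stack" pitfall can be sketched in a few lines; the card counts and memory sizes below are hypothetical examples.

```python
# Sketch of the multi-GPU memory pitfall: with several GPUs, the usable
# scene budget is the *smallest* card's memory, not the sum of all cards.
# Sizes in GB are hypothetical.

def gpu_scene_budget_gb(gpu_memories_gb):
    """The scene must fit on every card, so the smallest VRAM is the limit."""
    return min(gpu_memories_gb)

def fits(scene_gb, budget_gb):
    return scene_gb <= budget_gb

gpus = [12, 12, 8]            # adding an 8 GB card to two 12 GB cards...
budget = gpu_scene_budget_gb(gpus)
print(budget)                 # 8 -- the new card lowered the whole rig's limit

scene = 11.5
print(fits(scene, budget))    # False: a scene that rendered fine no longer fits
print(fits(scene, 64))        # True: with 64 GB of system RAM a CPU renderer
                              # has ample headroom, and exceeding it merely
                              # slows down rather than failing outright
```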
If you have a lot of third-party plugins in your pipeline, then you won't need to worry about their compatibility with Corona, while this can be a concern with a GPU-based solution.
We’re guessing that most of the crashes you run into on your computer come from your graphics card driver: it is one of the most complex and least stable parts of your setup. If your graphics card has problems as a result of a driver update, your renders have problems. With CPU-based solutions, there is no such concern.
As you can see from the above, it is hard to predict whether a given scene will work under a GPU-based solution, and hard to know if it will render faster or slower than with a CPU-based solution. In any commercial situation, we believe that dependability and predictability are as essential as fast render times.
While we have quite a long list of caveats about GPU rendering here, the strongest claim a GPU-based approach can offer in return is that “sometimes GPU rendering might be faster”; there is no comparable list of warnings about what could go wrong with a CPU-based rendering approach.
GPU-based solutions are an exciting and rapidly growing technology, while CPU-based solutions are mature and well-established, and sometimes that stability and predictability are just what your work requires!
To get the most from a GPU-based solution, you will need to turn your video card over to the renderer, resulting in a significant loss of desktop responsiveness (even moving windows around on screen needs your video card to do work!)
This is particularly troublesome for animations, where you may be rendering hundreds or even thousands of frames. While you can dedicate a second video card just to run your desktop, that's a hardware investment that you don't need to make for a CPU-based solution.
With Corona, you can control on a render-by-render basis just how many threads it will take control of for rendering, letting you choose how responsive you need your computer to be (and even with all cores rendering, you can still comfortably use your computer for other tasks at the same time).
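The idea of reserving threads for interactivity can be sketched generically. This is not Corona's actual API: the tile function and the reserve count below are illustrative stand-ins, with a thread pool playing the role of a renderer's scheduler.

```python
# Generic sketch of capping render threads so the machine stays responsive:
# use all cores minus a reserve kept free for the desktop/UI.
# render_tile is a placeholder for real per-tile work (ray tracing, shading).

import os
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile_id):
    # Stand-in for real rendering work on one image tile.
    return tile_id * tile_id

def render(tiles, reserve_threads=2):
    """Render tiles on (cpu_count - reserve_threads) worker threads."""
    workers = max(1, (os.cpu_count() or 1) - reserve_threads)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_tile, tiles))

results = render(range(8), reserve_threads=2)
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Setting the reserve to zero uses every core, which in a CPU renderer still leaves the machine usable; the GPU equivalent has no such knob, since the whole card is claimed at once.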
With a GPU-based solution, additional licenses are often required, and you have to worry about the compatibility of the graphics cards in each machine. Adding a machine with a GPU that has less memory than the others will limit the complexity of the scenes you can handle on the network, as you will have to work with that lowest common denominator.
With a CPU solution, it is easy to add additional computers as render nodes using software that comes with 3ds Max and Corona. Further, your Corona license allows you to add up to 3 additional computers alongside your workstation, making it easy to put spare or extra machines to use with a minimum of fuss!
Few GPU solutions are fully (and legally) supported by render farms. This means that if your own hardware can't finish a job in time, it simply won't get finished! For those particularly large jobs, and for animations in particular, CPU-based rendering can call upon long-established commercial render farms to meet a deadline.
Often GPU-based solutions will cite the fact that a GPU costs less than a whole PC. Of course, unless you are willing to invest in special hardware, a GPU still needs a PC to contain it! When you compare the cost of swapping out your GPU against the cost of swapping out only your CPU, the GPU option is just as expensive.
Of course, upgrading a CPU will bring benefits to every program you use, making that money a good investment. On the other hand, upgrading your graphics card will likely only benefit your render times: your OS, your 3D software, and every other program you use on that computer will not see any advantage. Unless you are gaming on your work computer (tsk!), investing in a GPU upgrade is money that gives you only very specific, limited benefits.
GPUs generally produce much more heat than CPUs, and their cooling is generally much noisier.