
Nvidia Mental Ray Explained

Mental Ray has been liberated and returned home to its rightful owner, Nvidia, which will now develop it as an independent product. This means that Mental Ray may be coming to your favorite 3D modeling software as an integrated plug-in, something that is already the case for 3ds Max 2016 and 2017.

Mental Ray has been put to use in some of the highest-profile projects you can name. The latter two Matrix films made use of it, as did Star Wars: Episode II. Say what you want about the movies themselves, but there's little argument that the CG was top notch for its time.

It’s All History Now
Mental Ray has been around since 1989 and currently stands at version 3.14. Its creator, Mental Images, was acquired in 2007 by graphics titan Nvidia. As you can probably tell from the name, Mental Ray uses ray tracing as its rendering method. This is one of the best techniques for getting as close to photorealism as possible, but the tradeoff is slower rendering and a heavy reliance on computing power. It makes sense that Nvidia would show an interest in a long-standing ray-tracing renderer, but historically renderers like Mental Ray relied on CPUs to do the work.
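To make the idea concrete, here is the basic operation every ray tracer builds on: testing a ray against a piece of geometry (a sphere in this case) and finding the nearest hit. This is a purely illustrative sketch, not Mental Ray code.

```python
# Minimal ray-sphere intersection: the primitive operation a ray tracer
# repeats millions of times per frame. Illustrative only; not Mental Ray code.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None."""
    # Vector from the ray origin to the sphere center
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None        # only count hits in front of the ray

# A camera ray fired down the z-axis toward a unit sphere centered at z = 5
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```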

While Mental Ray has mostly been CPU-centric, the latest version can now make use of the GPU, adding GPU acceleration to parts of the rendering pipeline.

Specifically, the GI Next component of Mental Ray is GPU-accelerated and can take over global illumination duties from the CPU, speeding up the entire rendering process. Global illumination calculations are a cornerstone of ray tracing, so this has the potential to really beef up rendering speeds.
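As a rough illustration of what a global illumination pass actually computes, the sketch below estimates the indirect diffuse light arriving at a surface point by Monte Carlo sampling of the hemisphere above it. This is a generic textbook approach, not Mental Ray's GI Next implementation, and incoming_radiance() is a hypothetical stand-in for tracing a secondary ray into the scene.

```python
# Conceptual sketch of a global illumination calculation: indirect diffuse
# light at a point, estimated by averaging samples over the hemisphere.
# Not Mental Ray's GI Next code; incoming_radiance() is a placeholder.
import math
import random

def incoming_radiance(point, direction):
    # Placeholder: a real renderer would trace a ray and shade whatever it hits.
    return 0.5

def sample_hemisphere(normal):
    """Uniformly sample a direction in the hemisphere around the normal."""
    while True:
        d = [random.uniform(-1, 1) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in d))
        if 0.0 < length <= 1.0:
            d = [x / length for x in d]
            # Flip the direction into the hemisphere above the surface
            if sum(a * b for a, b in zip(d, normal)) < 0:
                d = [-x for x in d]
            return d

def indirect_diffuse(point, normal, samples=256):
    """Average incoming light over the hemisphere (one bounce of GI)."""
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere(normal)
        cos_theta = sum(a * b for a, b in zip(d, normal))
        total += incoming_radiance(point, d) * cos_theta
    # Uniform hemisphere sampling has pdf = 1 / (2*pi), hence the 2*pi / N factor
    return total * 2.0 * math.pi / samples

print(indirect_diffuse((0, 0, 0), (0, 0, 1)))
```

Doing this for every shading point is exactly the kind of embarrassingly parallel work that maps well onto a GPU, which is why offloading it pays off.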

To The Hilt
The latest version of Mental Ray has received some serious upgrades from Nvidia. The most important ones relate to the all-important factors of speed and efficiency. Not only does this save you time and money, it also ensures that your existing hardware is used to its full potential.

Light importance sampling, for example, determines which light sources in the render matter most and devotes more compute power to them, so you no longer need to manually specify the number of sampling passes for each light.
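The general idea can be sketched in a few lines: pick lights with a probability proportional to a cheap estimate of their contribution, then weight each sample by that probability so the result stays unbiased. This is a toy illustration of the technique, not Mental Ray's implementation; light_importance() and evaluate_light() are hypothetical helpers.

```python
# Toy light importance sampling: bright, nearby lights soak up most of the
# samples, yet the estimate remains unbiased. Illustrative only.
import random

def light_importance(light, point):
    """Cheap importance heuristic: intensity falling off with squared distance."""
    dist_sq = sum((l - p) ** 2 for l, p in zip(light["position"], point))
    return light["intensity"] / max(dist_sq, 1e-6)

def evaluate_light(light, point):
    """Placeholder for the real (expensive) evaluation: shadow ray, BRDF, etc.
    For this sketch it simply reuses the falloff model."""
    return light_importance(light, point)

def direct_lighting(lights, point, total_samples=64):
    weights = [light_importance(l, point) for l in lights]
    total_weight = sum(weights)
    estimate = 0.0
    for _ in range(total_samples):
        # Pick a light with probability proportional to its estimated importance...
        i = random.choices(range(len(lights)), weights=weights, k=1)[0]
        pdf = weights[i] / total_weight
        # ...and divide by that probability so the average stays unbiased.
        estimate += evaluate_light(lights[i], point) / pdf
    return estimate / total_samples

lights = [
    {"position": (0.0, 2.0, 0.0), "intensity": 100.0},  # bright key light
    {"position": (9.0, 9.0, 9.0), "intensity": 1.0},    # dim fill light
]
print(direct_lighting(lights, point=(0.0, 0.0, 0.0)))
```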

Like Nvidia's other major renderer, Iray, Mental Ray uses the Material Definition Language (MDL), which allows material definitions to remain consistent across different rendering modes.

The global illumination (GI) engine in Mental Ray is very impressive as well. GI Next uses a brute-force algorithm that strives for maximum quality without shortcuts, and just about every advanced effect you can think of (motion blur, depth of field, hair, particles, and so on) is supported natively.

GI Next has been built with an eye on future GPU technology, so you’ll also get more and more out of it as you upgrade your hardware platform.

That’s Teamwork
Although Mental Ray still relies heavily on CPU rendering, Nvidia has managed to elegantly team the CPU up with the GPU for many of the most compute-intensive tasks, provided you have a CUDA-capable GPU. For example, the GPU can compute ambient occlusion while the CPU handles the rest of the rendering, lightening the load for both and speeding up the whole process.
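The division of labor can be pictured with a small sketch: one worker computes ambient occlusion for a tile while the main thread shades it, and the two results are combined at the end. This is purely conceptual, real CUDA offloading is far more involved, and compute_ao() and shade_tile() are hypothetical placeholders.

```python
# Conceptual sketch of overlapping CPU and GPU work on the same tile.
# A thread stands in for the GPU here; this is not how CUDA offloading works.
from concurrent.futures import ThreadPoolExecutor

def compute_ao(tile):
    # Stand-in for a GPU ambient-occlusion pass over a tile of pixels.
    return [0.8 for _ in tile]

def shade_tile(tile):
    # Stand-in for the CPU shading/ray-tracing work on the same tile.
    return [1.0 for _ in tile]

def render_tile(tile):
    with ThreadPoolExecutor(max_workers=1) as gpu:
        ao_future = gpu.submit(compute_ao, tile)   # the "GPU" works in the background...
        shaded = shade_tile(tile)                  # ...while the "CPU" shades.
        ao = ao_future.result()
    # Darken each shaded pixel by its occlusion factor.
    return [s * a for s, a in zip(shaded, ao)]

print(render_tile(tile=list(range(4))))  # -> [0.8, 0.8, 0.8, 0.8]
```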

Mental Ray is still very much an industry stalwart, but that also means it carries a bit more legacy than most of its competition. These latest moves by Nvidia put it on track to become completely modernized while still showing the kids how it's done.