Mathaeus wrote: I think they (all Rendermans) made a long-term mistake, insisting on 'faked' GI solutions, point-based stuff and so on, instead of trying to go directly into ray tracing.
That decision was probably due to how non-faked solutions can (still) take interminably long to produce what can be considered reasonably noiseless results, especially in any non-exterior setting.
A Sun/Sky setup with a box with a hole letting some sun in can easily take more than an hour for an HD frame with any RenderMan-compliant renderer, or of course with Arnold, which started the trend of purely brute-force rendering.
Yet the whole point of Arnold, although it's a remarkably fast path tracer (one of, if not still *the* fastest, CPU-based path tracers), wasn't (or wasn't just) final render speed: it was mostly about fast previewing and easy, worry-free look-dev in production, then letting lots of CPUs do all the work of cleaning up the noise.
But path tracing all the way into the darkest corners can take a very large number of rays to resolve, which can still take quite a bit of time to clean up even on the most efficient GPU path tracers.
I personally believe there is a sweet spot between "fake" and fully path-traced solutions, which I think V-Ray and Redshift nailed: path-trace the first bounce (the most important one) and point-cache the remaining bounces (with hardly any extra setup time). The result still looks very precise where it matters most, eliminates the flickering issues inherent to using a point cache or FG alone, and remains fully suitable for animated or deforming geometry, moving cameras, or whatnot.
In other words, if you look at the second bounce and beyond of a fully path-traced image, it's all blurry anyway, and can easily be approximated without compromising on 'quality', gaining some 50-70%+ in final render performance with negligible setup tweaking.
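To make the hybrid idea concrete, here is a toy sketch (entirely hypothetical, not any renderer's actual API): the first bounce is sampled per pixel by brute force, while the energy of bounces two and up comes from a smooth precomputed cache, so its noise never reaches the frame.

```python
import random

random.seed(0)

def secondary_bounce_cache(x):
    """Blurry precomputed approximation of bounces 2+ at point x.
    In a real renderer this would be an irradiance point-cloud lookup;
    here a flat stand-in value represents the cached energy."""
    return 0.25

def shade_hybrid(x, samples=16):
    """Brute-force the first (most important) bounce, read the rest
    from the cache, so only the first bounce contributes noise."""
    first_bounce = 0.0
    for _ in range(samples):
        # jittered estimate of the first indirect bounce (toy model)
        first_bounce += max(0.0, 0.6 + random.uniform(-0.2, 0.2))
    first_bounce /= samples
    return first_bounce + secondary_bounce_cache(x)

def shade_brute_force(x, samples=16):
    """Path-trace every bounce: bounces 2+ are noisy estimates
    of the same 0.25 the cache would have returned directly."""
    total = 0.0
    for _ in range(samples):
        total += max(0.0, 0.6 + random.uniform(-0.2, 0.2))      # bounce 1
        total += max(0.0, 0.25 + random.uniform(-0.25, 0.25))   # bounces 2+
    return total / samples

print(shade_hybrid(0.0))       # converges near 0.85 with less variance
print(shade_brute_force(0.0))  # same expected value, noisier
```

Both estimators converge to the same value; the hybrid one just has fewer noisy terms to average out, which is exactly where the render-time saving comes from.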
And MentalRay.. ah mentalray.. when they eventually do ship updates, they come up with new indirect-lighting methods that almost consistently turn out to be near misses: starting with traditional FG and GI, then Irradiance Particles, Importons, ... then a couple of prototypes up to the latest "next GI", which (at least currently) won't clean up the grain in your room to a decent level even if you let it run for hours if not days, or is barely any faster than using FG in "exact mode".
And considering that good old FG can easily produce the very highest-quality stills very quickly, I would have wished that, instead of following trends, they had found ways to improve or stabilize what they already had.
Also would have wished for that to be in Soft 2016 (in my fantasies)
But I'm quite sure it would be possible to track points on geometry that average a collection of samples over time (perhaps not unlike how video compression tracks pixels between keyframes), or find some other creative way to make final gathering stable for animation.
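The idea above could be sketched as a simple exponential moving average over the per-frame GI estimates at one tracked surface point (a speculative illustration of the principle, not how any shipping renderer does it): the temporal history damps frame-to-frame noise, which is exactly what shows up as FG flicker.

```python
import random

random.seed(1)

def temporally_averaged_irradiance(per_frame_samples, blend=0.2):
    """Exponential moving average across frames for one tracked point.
    blend controls how quickly new frames override the history."""
    history = per_frame_samples[0]
    smoothed = [history]
    for sample in per_frame_samples[1:]:
        history = (1.0 - blend) * history + blend * sample
        smoothed.append(history)
    return smoothed

# One tracked point's raw per-frame FG estimates: true value 0.5 plus noise.
frames = [0.5 + random.uniform(-0.2, 0.2) for _ in range(100)]
smoothed = temporally_averaged_irradiance(frames)

# Frame-to-frame jitter (the visible flicker) shrinks once history builds up.
raw_jitter = max(abs(a - b) for a, b in zip(frames[50:], frames[51:]))
smooth_jitter = max(abs(a - b) for a, b in zip(smoothed[50:], smoothed[51:]))
print(smooth_jitter < raw_jitter)  # the averaged signal flickers less
```

The trade-off, as with any temporal filter, is lag when the lighting actually changes, which is presumably where the motion-tracking part of the idea would come in.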
Which reminds me of this..
So perhaps they could prototype it in ICE lol, they have 16 days left.