3delight update

FXDude
Posts: 1129
Joined: 19 Jun 2012, 21:59

Re: 3delight update

Post by FXDude » 15 Jan 2016, 22:53

Despite some possible issues, in my own test a while back it was, perhaps like the newer RenderMan RIS engine, only slightly slower than Arnold for the same amount of noise when path tracing, which I think is the main thing, because other RenderMan-compliant renderers, including RenderMan itself (REYES), aren't always as fast at cleaning up grain, especially with multiple diffuse bounces.
Mathaeus wrote:I think they (all Rendermans) made a long-term mistake, insisting on 'faked' GI solutions, point-based stuff and so on, instead of trying to go directly into ray tracing.
That decision was probably due to how non-faked solutions can (still) take interminably long to reach what can be considered reasonably noiseless results, especially in anything other than exterior settings.

A Sun/Sky setup with a box with a hole letting some sun in can easily take more than an hour for an HD frame with any RenderMan-compliant renderer, or of course with Arnold, which started the purely brute-force trend.

Yet the whole point of Arnold, although it's a remarkably fast path tracer (one of, if not still -the- fastest CPU-based path tracers), wasn't (or wasn't just) final render speed: it was mostly about both previewing and easy, worry-free lookdev production speed, letting lots of CPUs then do all the work of cleaning up the noise.

But path tracing all the way into the darkest corners can require a very large number of rays, which can still take quite a bit of time to clean up, even for the most efficient GPU path tracers.

I personally believe there is a sweet spot between "fake" and fully path-traced solutions, which I think V-Ray and Redshift nailed: path-trace the first bounce (the most important one) and point-cache the remaining bounces (with hardly any extra setup time). The result still looks very precise in the most important aspects, eliminates the flickering issues inherent to using a point cache or FG alone, and is fully suitable for animated or deformed geometry, moving cameras or whatnot.

In other words, if you look at the 'second bounce and up' of a fully path-traced image, it's all blurry anyway, so it can easily be approximated without compromising 'quality', gaining some 50-70%+ in final render performance with negligible setup tweaking.
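To make that concrete, here's a minimal toy sketch of the hybrid (all names and numbers are mine for illustration, not V-Ray's or Redshift's actual APIs): brute-force the first bounce, and look the deeper bounces up from an interpolated point cache instead of tracing them.

```python
import math
import random

# Pretend point cache: sparse surface points -> precomputed multi-bounce irradiance.
SECONDARY_CACHE = {
    (0.0, 0.0, 0.0): 0.12,
    (1.0, 0.0, 0.0): 0.30,
    (0.0, 1.0, 0.0): 0.05,
}

def cached_secondary(p, k=2):
    """Interpolate bounces 2..N from the k nearest cache points -- the
    'blurry' part of the solution, cheap to look up instead of trace."""
    nearest = sorted(
        (math.dist(p, q) + 1e-6, e) for q, e in SECONDARY_CACHE.items()
    )[:k]
    weights = [1.0 / d for d, _ in nearest]
    return sum(w * e for w, (_, e) in zip(weights, nearest)) / sum(weights)

def first_bounce(p, n_samples=64):
    """Brute-force the first (sharpest, most important) bounce: average
    hemisphere rays, each returning direct light at the hit point plus
    the *cached* deeper bounces."""
    total = 0.0
    for _ in range(n_samples):
        hit = tuple(c + random.uniform(-1.0, 1.0) for c in p)  # stand-in for a ray hit
        direct_at_hit = 0.5                                    # stand-in for direct lighting
        total += direct_at_hit + cached_secondary(hit)
    return total / n_samples

random.seed(0)
print(first_bounce((0.2, 0.3, 0.0)))
```

Only the first-bounce loop scales with quality; the cache lookups stay cheap, which is roughly where that kind of saving would come from.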


And mental ray... ah, mental ray... when they eventually make updates, they come up with new indirect-lighting methods which almost consistently turn out to be near misses: starting with traditional FG and GI, then Irradiance Particles, Importons, ... then a couple of prototypes, up to the latest "GI Next", which (at least currently) won't clean up grain in your room to a decent level even if you let it run for hours if not days, or barely any faster than using FG in "exact mode".

And considering that good old FG can easily produce the very highest-quality stills very quickly, I would have wished that, instead of following trends, they had found ways to improve or stabilize what they already had.

I also would have wished for that to be in Soft 2016 (in my fantasies).

But I'm quite sure it would be possible to track points on geometry that average a collection of samples over time (perhaps not unlike how video compression tracks pixels between keyframes), or to find other creative ways to make final gathering stable for animation.
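Something like this, say (a purely hypothetical sketch, nothing like an actual FG implementation): each tracked point keeps a running average of its noisy per-frame samples, so the estimate settles over time instead of flickering.

```python
class TrackedIrradiancePoint:
    """A sample point anchored to the surface (so it follows deformation),
    blending each frame's noisy irradiance into a running average --
    loosely like a codec reusing tracked pixels between keyframes."""

    def __init__(self, point_id, blend=0.2):
        self.point_id = point_id  # stable surface ID, survives deformation
        self.blend = blend        # how fast new frames override history
        self.value = None         # accumulated irradiance estimate

    def update(self, noisy_sample):
        """Fold this frame's noisy FG sample into the temporal average."""
        if self.value is None:
            self.value = noisy_sample
        else:
            self.value = (1.0 - self.blend) * self.value + self.blend * noisy_sample
        return self.value

# One point over five noisy frames: the estimate converges instead of jumping.
p = TrackedIrradiancePoint(point_id=42)
for sample in [0.9, 0.4, 0.7, 0.5, 0.6]:
    print(round(p.update(sample), 3))
```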

Which reminds me of this..




So perhaps they could prototype it in ICE lol, they have 16 days left. :)

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: 3delight update

Post by Kzin » 20 Jan 2016, 01:23

if you want a real path tracer, use iray or renderman ris. ;)

don't take final gathering brute force as a benchmark for anything, especially not the si version.
fg is stable in animation WHEN you have enough direct light (and of course its speed outperforms every path tracer). there were developments involving importance sampling, to solve more complex lighting situations, but all that was stopped because users want brute force now. but that's a different type of renderer. (how fast renders would be with mental ray's current light IS and a fully IS-driven final gathering would be interesting to know)

the direction is clear: pure brute force, with bidirectional pt for example, and trying to solve the noise with more rays and/or with denoising tech (that's the current r&d direction). and of course throwing more computing power at it, with big cpu farms or smaller gpu farms.

interpolated gi is dead going forward (but maybe, let's see what ilm does in ep8). the demands changed too fast some years back. it was really like what pixar did with renderman, releasing a path-tracing core "overnight". there was no break to think about stuff. and now everybody is complaining about speed and/or quality.
that's why i liked the point-based way of reyes: it's fast and easy to use. sure, it's not accurate and always has that "blurry" feeling, like all interpolation techs, but it's damn fast with dense meshes and also things like hair. but you have that precomputation that makes interactivity impossible (or you have to use a different renderer just for that, and the solutions may differ). and lookdev shots without interactivity are a pain.

the ice thing is nice. would be interesting to know how practical the solution is and how easily/fast i could adapt it to other scenes, especially more complex ones.

FXDude
Posts: 1129
Joined: 19 Jun 2012, 21:59

Re: 3delight update

Post by FXDude » 20 Jan 2016, 15:56

Kzin wrote:if you want a real path tracer, use iray or renderman ris. ;)
RenderMan (RIS) or Arnold, no?

And you mention Iray, but I'm still unsure what's keeping it from being more commonly used in production; I assume it's still the GPU memory requirements, or maybe 'flexibility'?


I also said that one or two hours was long for one frame, but only relative to how little time it can take using approximation methods.

Because 2, 3 or 4 hours can otherwise be extremely fast for a fully path-traced interior with enough bounces that you aren't tempted to boost gamma (and therefore also the grain in darker corners) to compensate; that can easily go to 8-12 hours for architectural stills using Maxwell or any path tracer other than the aforementioned ones.
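(A quick toy illustration of the gamma point, with made-up numbers: a gamma curve is steepest in the darks, so the same absolute render noise becomes far more visible there once you boost it.)

```python
def gamma(v, g=2.2):
    # standard display gamma curve; steepest near black
    return v ** (1.0 / g)

noise = 0.005                      # same absolute render noise everywhere

dark = 0.01                        # a dark corner pixel
print(gamma(dark + noise) - gamma(dark - noise))      # ~0.06 spread: very visible

bright = 0.5                       # a well-lit pixel
print(gamma(bright + noise) - gamma(bright - noise))  # ~0.007 spread: barely visible
```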


Otherwise, as for FG in exact mode, I was only using it as a comparison with the new MR method, after reading a bunch of feedback, seeing benchmarks of tiny (and noisy) regions, comparing that with FG performance in exact mode, and also cross-referencing noise levels and the CPU speeds used. But I may very well have missed something, and no doubt the new method will improve to a degree (it's still considered a prototype, after all).

Perhaps at some point they will also find ways to optimize depth of field and motion blur (particularly for any highlights above 1), because those are of course quite common requirements (without resorting to 2D post); though at least the recently implemented MR deep images can make traditional 2D-post MB & DOF methods much less artifact-prone and rather zippy, despite some remaining drawbacks and extra steps in using deep images.

Especially since those are things where at least Arnold and Redshift shine (V-Ray not so much, and I don't know about RIS).

(Arnold or Redshift users are probably just laughing at this part of the discussion lol )


Otherwise, you mentioned that FG can be stable across frames, but I personally only found a combination (or a range) of settings that was reasonably stable while also being extra blurry.
I found it when looking for ways to use parts of FG as an element (more on that later) without involving any FG shooting, or single frozen cache files usable for camera travel only, or other extra-finicky things that are probably responsible for making brute-force 2-4 hour frames sound good.

One thing's for sure: Redshift really seems to have come up with something that combines most of the advantages of both CPU and GPU renderers into one (maybe kinda like Soft in its own field :) )

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: 3delight update

Post by Kzin » 20 Jan 2016, 23:17

iray is not a production renderer, that's a problem. renderman RIS is one, but it still misses some things that made reyes so great.

and as for rendertimes, especially in film production, 10 to 20 hours are normal for the big movies. pixar has more than 20 with pt, and they also had more than 20 with reyes for good dinosaur. rendertimes of reyes and ep7 would be interesting. and big hero six, in which they used a lot of indirect light without fake fill lights and with a high pt depth, was way more than these 20 hours. and this with their highly optimized way of rendering that project (a lot of tricks were involved).

it's the same for arch viz: fully path-traced images render a long time. that's why you use brute force for the first bounce and interpolation for all secondary ones. check the image galleries in the forums of pt render engines; the rendertimes are really high for pure pt.

motion blur in mr is fast, and you can clean it up fast; it's an old myth that it's slow. redshift is faster, yes, or let's say less problematic. but what you do in redshift is throw more samples at it, and that's not a big problem because of gpu rendering (that's true for all the fancy stuff).
dof in mr is a bit different, but it hasn't been touched yet in dev.

keep in mind that there is no clamping or filtering going on in gi next as it is. for example, a lamp with an intensity of 20,000 will result in gi carrying those high energies. try this in redshift and it will explode; you will not be able to render out clean results (i clamp with really aggressive low values in redshift to "solve" that problem for gi).
but all that will change. i wish they had written that in their docs; no one would wonder about the noise then. and to make that clear, this should not be an excuse for the noise, it's just an explanation. it has to do with them rewriting mr, which is only possible in steps.
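the trade-off is easy to show with toy numbers, by the way (the function and threshold below are made up, not redshift's actual clamp parameter): capping each indirect sample before averaging loses a little energy but kills the fireflies.

```python
import random

def indirect_estimate(samples, clamp=None):
    """Average indirect samples, optionally clamping each one first."""
    if clamp is not None:
        samples = [min(s, clamp) for s in samples]
    return sum(samples) / len(samples)

random.seed(1)
# Mostly dim samples, plus two rare 'fireflies' from a 20,000-intensity
# lamp reached through a low-probability path.
samples = [random.uniform(0.0, 1.0) for _ in range(998)] + [20000.0, 20000.0]

print(indirect_estimate(samples))             # unclamped: swamped by the fireflies
print(indirect_estimate(samples, clamp=4.0))  # clamped: stable, slightly darker (biased)
```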

the problem with fg is that the tech, as it is, is not able to render a detailed indirect solution in a reasonable amount of time (or at all). it will stay blurry, even with high settings. that's why they introduced the ao in the mia material, with its ability to render distance-based brute-force color bleeding. but because it's a shader effect and nothing in the core, it's slower than arnold, for example. but all that is deprecated now.
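for what it's worth, the idea behind that distance-based color bleeding is roughly this (my own toy sketch, not mental ray's actual shader code): trace short ao rays, and let only hits within some max distance bleed their surface color, weighted by proximity.

```python
import random

def ao_color_bleed(trace_ray, max_distance=1.0, n_rays=32):
    """trace_ray() returns (hit_distance, hit_color) or None for a miss.
    Returns (occlusion, bled_color) from short-range hemisphere rays."""
    occluded, bleed = 0, 0.0
    for _ in range(n_rays):
        hit = trace_ray()
        if hit is not None and hit[0] < max_distance:
            occluded += 1
            # closer surfaces bleed more color
            bleed += hit[1] * (1.0 - hit[0] / max_distance)
    return occluded / n_rays, bleed / max(occluded, 1)

# toy stand-in scene: 30% chance a ray hits a warm-colored wall within 2 units
def toy_ray():
    if random.random() < 0.3:
        return random.uniform(0.0, 2.0), 0.8
    return None

random.seed(0)
print(ao_color_bleed(toy_ray))
```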
you can render stable solutions, but only with low detail settings and enough direct light. forget trying to use fg for indirect-only lighting in animation, it will fail, and forget using it in animation for detailed indirect lighting. if you keep that in mind, it's possible. but whenever you can, bake fg: split the moving parts from the rest and render in passes, with a baked fg for the background and dynamic fg for the moving stuff. btw, i never used exact mode, it's way too slow, really slow...

FXDude
Posts: 1129
Joined: 19 Jun 2012, 21:59

Re: 3delight update

Post by FXDude » 21 Jan 2016, 16:08

Thanks for that insight, very interesting!

I heard (in a making-of) that some frames of Transformers II (or III) involved some 70 hours per frame, not per node but on the entire (ILM) farm (!), which I have trouble wrapping my head around, especially since ILM's farm is probably no small farm. TBH, without knowing the context at all, that sounds quite unnecessary or wasteful even for the most complex scenes with all sorts of things going on, as if very little effort went into any sort of optimization, and that needn't imply any compromise on final quality.

Maybe also especially for such shallow movie(s), often found to simply have too much of everything, more likely to make viewers go:
"wow, I can't keep track of what I'm actually looking at" rather than "wow! awesome!" :p

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: 3delight update

Post by Kzin » 21 Jan 2016, 21:47

for transformers 3, the highest frametime was 27X hours. it was rendered with mental ray because only mr was able to render the shot with raytracing at that time. but it was before mr got its much faster motion blur, so i can imagine why it rendered that long (after they changed it, mb rendering was up to 5 times faster for a frame in my tests in mental ray, from 18 down to 3.5 hours). ilm is known for not making compromises in quality.

the rendertime may sound high, but this is mostly per core, so a farm with 40k cores can render 40k frames in those 27X hours. but 70 hours per frame on the whole farm sounds like too much, maybe that was for a whole shot?
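(to put numbers on that: at roughly 270 core-hours per frame, a 40,000-core farm running one frame per core in parallel still finishes about 40,000 / 270 ≈ 148 frames per hour of wall-clock time, even though no single frame ever takes less than those 270 hours.)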

the wow-awesome moments are more or less over, i think. the last one i had was the storm wall in mad max, but that's less of a new-tech wow; it's more the great scaling, camera, depth, a good feeling for the shot in general. i like that more than a thousand ultron robots running around in a shot like headless chickens. ;)

FXDude
Posts: 1129
Joined: 19 Jun 2012, 21:59

Re: 3delight update

Post by FXDude » 22 Jan 2016, 05:36

Actually it was 72 hours (2nd sequel), and even a lot more for certain shots in the 3rd sequel, but the average was probably much lower, and that was indeed probably per node, because otherwise it just wouldn't make any sense.

https://en.wikipedia.org/wiki/Transform ... f_the_Moon
In Revenge of the Fallen, it took 72 hours per frame to fully render Devastator for the IMAX format, which amounts to approximately 4,000 frames.

For the Driller, which required the entire render farm, it was up to 122 hours per frame.

The most complex scene involved the Driller destroying a computer-generated skyscraper, which took 288 hours per frame.
It was at IMAX resolution, yet 70 h/frame would still be about a month for 10 frames on one machine (70 × 10 = 700 hours ≈ 29 days), which, despite the immense complexity, can still sound rather inefficient even for back then.

Yet it's not necessarily surprising; it's quite commonplace for efficiency in general to be inversely proportional to how many resources are available.
