Lit by light cone, condition tester
Is there a compound available somewhere that can test if an object or particle is lit within the cone of a specific light?
In ThinkingParticles there's a nifty little Light Condition tester that outputs True whenever a particle is lit by a light source in the scene. It can also be given a threshold for the boolean result, look only for particles of a specific colour, and look for particles within a certain RGB variation range.
The Raycast node out of the box does nothing similar.
I've searched the RRay database and couldn't find a compound that's already been built, so I wondered if anybody had made something that functions in this way. Thinking about Paul Smith's raytracer in ICE, I'm sure it's possible.
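For reference, the TP-style condition logic described above (a brightness threshold plus an optional colour match within an RGB variation range) can be sketched like this; the function and parameter names are hypothetical, not the actual ThinkingParticles interface:

```python
def light_condition(lit_rgb, threshold=0.1, target_rgb=None, rgb_range=0.0):
    """Sketch of a TP-style light condition test (hypothetical names).

    lit_rgb: the RGB light contribution received by the particle.
    Returns True when the particle counts as 'lit': its brightness exceeds
    the threshold and, if target_rgb is given, every channel stays within
    rgb_range of the target colour."""
    brightness = max(lit_rgb)
    if brightness < threshold:
        return False
    if target_rgb is not None:
        return all(abs(c - t) <= rgb_range for c, t in zip(lit_rgb, target_rgb))
    return True
```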
Re: Lit by light cone, condition tester
It's relatively simple to compare the angle between two directions against the light cone angle, something like this pic. The light direction in SI is the negative Z axis, which is why that vector is multiplied by the kine global matrix. This is much faster than any spatial query like Get Closest Location or Get Closest Point. Raycast is a special type of spatial query that returns results only from surfaces, poly or NURBS (the Houdini VOP equivalent is called Intersect), but not from particles.
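Outside ICE, the same test (angle between the light's negative Z axis and the light-to-particle direction, compared against half the cone angle) can be sketched in plain Python; the row-vector 4x4 matrix layout, with translation in the last row, is an assumption about how the kine global matrix is stored:

```python
import math

def in_light_cone(light_matrix, cone_angle_deg, particle_pos):
    """Return True if particle_pos lies inside the light's cone.

    light_matrix: 4x4 row-major global transform (row-vector convention,
    translation in row 3); in SI a light points down its local -Z axis.
    cone_angle_deg: full cone angle of the spot light."""
    # Light position: translation row of the global matrix.
    lx, ly, lz = light_matrix[3][0], light_matrix[3][1], light_matrix[3][2]
    # Light direction: local -Z rotated into global space (3x3 part only).
    dx, dy, dz = -light_matrix[2][0], -light_matrix[2][1], -light_matrix[2][2]
    # Direction from light position to particle position.
    px, py, pz = (particle_pos[0] - lx, particle_pos[1] - ly, particle_pos[2] - lz)
    dlen = math.sqrt(dx*dx + dy*dy + dz*dz)
    plen = math.sqrt(px*px + py*py + pz*pz)
    if dlen == 0 or plen == 0:
        return False
    cos_angle = (dx*px + dy*py + dz*pz) / (dlen * plen)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    # The angle is measured from the centre axis, so compare to half the cone.
    return angle <= cone_angle_deg / 2.0
```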
Re: Lit by light cone, condition tester
Cheers Mathaeus. That network will be useful as a basis for a compound that matches the TP functionality.

Mathaeus wrote:It's relatively simple to compare the angle between two directions against the light cone angle, something like this pic. The light direction in SI is the negative Z axis, which is why that vector is multiplied by the kine global matrix. This is much faster than any spatial query like Get Closest Location or Get Closest Point. Raycast is a special type of spatial query that returns results only from surfaces, poly or NURBS (the Houdini VOP equivalent is called Intersect), but not from particles.
I'd never attempted to use the Raycast node for particles but must admit I'd always assumed it would work. Good to know that I'd have been wasting my time.
Re: Lit by light cone, condition tester
This is nitpicking, anyway:
Raycast, or the Houdini Intersect VOP, traces a line along a certain direction; if that line intersects some polygon (in the case of NURBS I think it's still a polygon, created by tessellation), there's a result, or 'no hit'. A particle cannot be a target, but it can be an origin which makes use of the result, somehow.
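Under the hood, a raycast of this kind is just a per-triangle ray intersection test. A minimal sketch of what such a node computes for one triangle, using the standard Möller–Trumbore algorithm (not the actual Raycast internals):

```python
def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection: returns the distance t
    along the ray to the hit point, or None for 'no hit'."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return None                # ray parallel to the triangle plane
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None                # outside first barycentric bound
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None                # outside second barycentric bound
    t = f * dot(e2, q)
    return t if t > eps else None  # hit only in front of the origin
```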
Re: Lit by light cone, condition tester
I discovered it's not quite doing what I want, as the 'Matrix to SRT' is only taking into account the global translation as measured from the centre of the point cloud.

Mathaeus wrote:This is nitpicking, anyway:
Raycast, or the Houdini Intersect VOP, traces a line along a certain direction; if that line intersects some polygon (in the case of NURBS I think it's still a polygon, created by tessellation), there's a result, or 'no hit'. A particle cannot be a target, but it can be an origin which makes use of the result, somehow.
There's a compound that does exactly what I want with the camera FOV. It's just a matter of deciphering exactly what that jungle of dot product math is doing.
I can make sense of it, but '.aspect' and '.FOV' are custom attributes and I'm not sure specifically how they come into play, as all the main work seems to originate from the Matrix to SRT (much like in your own example, Mathaeus).
Re: Lit by light cone, condition tester
Here, 'Matrix to SRT' probably converts the global transform of the camera to a global position; the local point position (that is, relative to the centre of the point cloud) is unknown to that node. If the point cloud is not at the global zero position/orientation/scale, all calculations have to account for the offset. So generally it's a good idea to keep the point cloud at world zero. A lot of ICE compounds expect that world zero somehow, let's say 'modulate by null'. If the entire point cloud should be moved, it's always possible to move just the points, something like Get Point Position > Add > Set Point Position, in a post-simulation ICE tree.
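The offset handling described above amounts to multiplying each local point position by the cloud's global matrix before any light or camera test. A minimal sketch, assuming SI's row-vector convention (position times matrix, translation in the last row), with hypothetical names:

```python
def local_to_global(cloud_matrix, local_pos):
    """Transform a particle position from point-cloud local space into
    global space by the cloud's 4x4 global matrix (row-major, row-vector
    convention), so light/camera tests see the true world position."""
    x, y, z = local_pos
    m = cloud_matrix
    return (
        x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0],
        x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1],
        x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2],
    )
```

With an identity rotation this reduces to adding the cloud's translation, which is exactly the offset a world-zero cloud makes unnecessary.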
I think the dot products in 'test visibility to camera' are there to find the positions of particles relative to four planes. The planes (top, bottom, left, right) are the bounds of the space visible to the camera. A positive or negative dot product tells which side of a plane a certain position is on. FOV is the camera angle; I think this is the same as the light 'cone', except that with a camera the compound is looking into a 'quadratic' space, instead of a plain cone (which could be described just by an angle) with a light.
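The four-plane dot-product test can be written equivalently by projecting the point into camera space and comparing against the FOV slopes. A sketch assuming .fov holds the vertical field of view in degrees and .aspect the width/height ratio (both assumptions, since they are custom attributes here), with an orthonormal camera basis:

```python
import math

def visible_to_camera(cam_pos, forward, up, right, fov_deg, aspect, point):
    """Test a point against the four side planes (top, bottom, left, right)
    of a camera frustum. forward/up/right are assumed orthonormal, with
    'forward' the view direction."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    half_v = math.radians(fov_deg) / 2.0           # half vertical angle
    half_h = math.atan(math.tan(half_v) * aspect)  # half horizontal angle

    to_p = sub(point, cam_pos)
    # Decompose into camera-space coordinates via dot products.
    z = dot(to_p, forward)   # depth along the view direction
    y = dot(to_p, up)
    x = dot(to_p, right)
    if z <= 0:
        return False  # behind the camera
    # Inside the frustum when |x| and |y| stay under the plane slopes.
    return abs(y) <= z * math.tan(half_v) and abs(x) <= z * math.tan(half_h)
```

Dropping the horizontal test and comparing only the total off-axis angle would turn this 'quadratic' space back into the plain cone used for the light.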
P.S. Here is an SI model with the compound.
Re: Lit by light cone, condition tester
Thanks for your help again Mathaeus. Both the camera and light grabs are from the same point cloud situated at world zero, and I'm actually testing in a non-simulated ICE tree so I can see the effect of translations in real time. The thing with your original solution is that it treats the light cone as a point in space and ignores all rotation data of the light itself. Even if the light is pointing in the opposite direction, the light cone is still calculated as being directed at the point cloud.
I understood what the FOV stood for in the compound but am still confused as to what data is being used, as .fov is just a declared custom attribute (as is .aspect). Other than the camera FOV being rectangular and the light cone being circular, both node networks are effectively attempting the same thing. It's just that your original solution ignores any local rotation on the light.
Re: Lit by light cone, condition tester
There are two, let's say, directions in the ICE tree: a subtract from light position to particle position, and negative Z (the SI default direction for a light) multiplied by the global matrix of the light (that's position, orientation and scale). Finally, the angle between them, multiplied by two (because the angle is measured from the centre). Does the model from the download work?

jonmoore wrote:The thing with your original solution is that it treats the light cone as a point in space and ignores all rotation data of the light itself. Even if the light is pointing in the opposite direction, the light cone is still calculated as being directed at the point cloud.
Re: Lit by light cone, condition tester
Unfortunately not. I get the same problem once I remove the light's target null and translate and rotate its position.

Mathaeus wrote:Does the model from the download work?
What I need is something that matches the behaviour of the Test Visibility from Camera compound exactly, but using a spot light's cone rather than the camera's rectangular FOV (I'd actually like to be able to use gobos too). I'm using the setup to trigger other events based off the point data within an animated cone/FOV.
Re: Lit by light cone, condition tester
Well, I don't know what's happening. Kine global is the final result of any kind of transform; it should not be related to the existence of constraints or such. An adaptation of 'test visibility to camera' should be something like the solution I've already posted...
Re: Lit by light cone, condition tester
OK, it seems I was able to reproduce that. There's a new one, with a bit different method of calculation, that works in both cases, at least here with XSI 7.01. Most likely my fault, though I can't figure out for now what was wrong.

jonmoore wrote:Unfortunately not. I get the same problem once I remove the light's target null and translate and rotate its position.
Re: Lit by light cone, condition tester
Really appreciate your efforts here. I'm out this afternoon but will test your new version on my return.

Mathaeus wrote:OK, it seems I was able to reproduce that. There's a new one, with a bit different method of calculation, that works in both cases, at least here with XSI 7.01. Most likely my fault, though I can't figure out for now what was wrong.
Re: Lit by light cone, condition tester
That's working perfectly now.
And you've managed to teach me a little more about the Matrix to SRT node into the bargain.
Thanks again Mathaeus.