Reimplement ICE Modeling to usable state.

General discussion about 3D DCC and other topics
User avatar
AlexanderM
Posts: 28
Joined: 10 Jun 2009, 17:13
Contact:

Reimplement ICE Modeling to usable state.

Post by AlexanderM » 22 Jan 2014, 16:56

Image

Hi folks, I'm calling on all ICE modeling users who have experience with this topic and would like to improve the situation around it. I believe that with more voices we can really expect a response from Autodesk.

ICE modeling is poorly developed, and I think there are clear reasons for that. We won't find out who devised the original concept and development strategy, or who is to blame (they are no longer in the business); let's think instead about how to fix it.

Since the introduction of #ICE_Modeling, it became clear that truly flexible procedural processing of topology is not available here. Instead we were given a bunch of nodes that do not work as they should (the most obvious examples: #Slice Polygon, which can only produce a convex cap, and the incredibly slow and useless #Apply_Thickness and #Apply_Slice_Between_Two_Vertices compounds, which only clutter the UI), buggy work with clusters, and no topology changes in the simulation region/stack. To the above you can add the inability to get the result of applying a node in the middle of a branch: you first need to execute Set Topology.
It was a bad idea from the start to expose a list of standard modeling geometry commands from the SI core (as in scripting) instead of properly implementing topology components (points / polygon indices) as a datatype, accessible from any place in the tree and its nodes, with the possibility of operating locally on this topo data.

In this regard, here is a concept that could perhaps be implemented with some minor modifications:
-A new topology/geometry datatype that carries the array of points, the polygonal description, and user normals along with UVs (just as the vector3 datatype is composed of three scalars, here we compose arrays of topological data).
-The datatype should include the ability to tag individual components into abstract groups, which lets you filter these components later (for example, collect into group 1 all polygons whose normal deviates more than 10 degrees from the (0,1,0) normal of a neighboring polygon, and pass this group to a Delaunay triangulation node).
-SDK documentation and access.
-A basic set of nodes for geometry processing (e.g. triangulation, polygon reduction, subdivision, boolean operations).
-A basic set of nodes for modeling (Extrude / Bevel / Slice / Cap Holes, etc.).
-Nodes for working locally with components and groups, with both manual and procedural input (e.g. get elements from a cluster).
-A basic set of nodes for working with UVs (automatic UV generation and the ability to manipulate islands).
-A node for geometry queries (raycast and get-closest) over a local set of data (e.g. between two nodes), possibly merged into the geometry datatype.
-The ability to work in any stack/region (if that is too hard, let's skip it for now).
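A rough sketch of what such a datatype and its tag groups might look like (hypothetical Python; `TopoData`, `tag_by_normal_deviation`, and all other names are invented for illustration, and the filter compares each face normal against the reference direction directly rather than against a neighbor's normal, to keep the sketch short):

```python
import math
from dataclasses import dataclass, field

@dataclass
class TopoData:
    # Hypothetical analogue of the proposed topology datatype:
    # raw component arrays plus abstract tag groups.
    points: list          # [(x, y, z), ...]
    polygons: list        # [[i0, i1, i2, ...], ...] point indices per polygon
    tags: dict = field(default_factory=dict)  # group name -> set of polygon ids

    def face_normal(self, poly_id):
        """Unnormalized face normal from the first two edges."""
        a, b, c = (self.points[i] for i in self.polygons[poly_id][:3])
        u = [b[k] - a[k] for k in range(3)]
        v = [c[k] - a[k] for k in range(3)]
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    def tag_by_normal_deviation(self, group, ref=(0.0, 1.0, 0.0), max_deg=10.0):
        """Tag every polygon whose normal deviates from `ref` by more than max_deg."""
        for pid in range(len(self.polygons)):
            n = self.face_normal(pid)
            ln = math.sqrt(sum(c * c for c in n)) or 1.0
            cosang = sum(a * b for a, b in zip(n, ref)) / ln
            if math.degrees(math.acos(max(-1.0, min(1.0, cosang)))) > max_deg:
                self.tags.setdefault(group, set()).add(pid)
        return self.tags.get(group, set())

# One flat quad (normal +Y) and one vertical quad (normal +X):
topo = TopoData(
    points=[(0,0,0), (1,0,0), (1,0,1), (0,0,1),
            (2,0,0), (2,1,0), (2,1,1), (2,0,1)],
    polygons=[[0, 3, 2, 1], [4, 5, 6, 7]],
)
print(topo.tag_by_normal_deviation("group1"))   # → {1}: the vertical quad only
```

The tagged group could then be passed downstream (e.g. to a triangulation node) without ever leaving the datatype, which is the point of the proposal.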

To conclude, I want to say that it would be good to start with a solid base and sound logic, even with a minimal number of nodes, which could then be expanded over time instead of running into absurd restrictions. I strongly recommend looking at how a similar concept works inside Houdini and what it makes possible.

User avatar
rray
Moderator
Posts: 1774
Joined: 26 Sep 2009, 15:51
Location: Bonn, Germany
Contact:

Re: Reimplement ICE Modeling to usable state.

Post by rray » 23 Jan 2014, 00:17

Yay, another daydreaming session =)
I can't rate ICE modeling because I have zero understanding of it. It could be a good design; maybe the addition of operators like Thickness to Softimage itself was just done at too early a point.

I think such a redesign would only make sense if tetrahedral primitives, and conversions back and forth to polygonal primitives, were done as well. These would support volumetric operators like shell, booleans, slice, and shatter.
The reason is that compounds for tetrahedra would probably be much simpler to design and read than the equivalent compounds for polygon meshes. Maybe there's an even better-suited data structure, but tetrahedra are probably the most compatible when thinking of attributes at vertices/edges/polygons. Does Houdini have something like this?

In any case, I'd start such a redesign top-down from the end-user perspective, with different use cases and "fantasy attempts" to solve them, and see from there what needs to be implemented.

Component tag groups sound like a good idea; it could be interesting to think about the implications, and whether this really means topology changes could be done at any point in the stack.
softimage resources section updated Jan 5th 2024

scaron
Posts: 119
Joined: 08 Jul 2009, 05:16

Re: Reimplement ICE Modeling to usable state.

Post by scaron » 23 Jan 2014, 05:13

i do not think you can work around some of the issues you outline, like setting topo outside the modeling stack. i also think you are too harsh on ice modeling... the base nodes work fine; yes, the compounds can be a bit slow. i think the issue here has more to do with how ice does parallelism: you will see that ice modeling doesn't use as many cores as particle effects do. i have found ice modeling useful enough for the tasks i have used it for.

other than that, have you tried to make your own bevel compound? or script? or any operator?

Bullit
Moderator
Posts: 2621
Joined: 24 May 2012, 09:44

Re: Reimplement ICE Modeling to usable state.

Post by Bullit » 23 Jan 2014, 09:22

I haven't done any ICE modeling. IamVFX is one of the people you could talk with; check his posts here and maybe PM him.

User avatar
Mr.Core
Posts: 148
Joined: 10 Aug 2011, 12:35
Skype: giga-core
Location: Kharkov, Ukraine

Re: Reimplement ICE Modeling to usable state.

Post by Mr.Core » 26 Jan 2014, 17:39

The main issue is not performance (although I guess Alexander meant the large input lag when applying ICE compounds, not global performance); it is a question of usability and what we can actually do with it. With the current paradigm we are not even close to being able to do something complex (like Houdini can); of course we can do basic stuff like procedural fences and the like, which was already possible without ICE modeling using only instances. Extrude is almost useless; Add Vertex too (since it cannot add a vertex on existing polygons, we need to delete the polygon, add the vertex, and then cap all that mess); Split Edge is useless (it also needs a ton of manual operations); and important nodes like split-between-two-points and the mentioned bevel are absent entirely. The existing bunch of nodes assumes VERY low-level work with geometry, much lower than ICE really allows, even with constant use of speed-killing features like nested Repeat loops and array-to-set-to-array conversions. I saw the problems with clusters on XSIBase back when it was still alive; those threads dated from the days when Softimage was under Avid, and the existing ICE modeling simply ignores those problems instead of solving them.
In my opinion, the current implementation looks like "haha, let's try to implement this thing and see how it goes" instead of "let's ask people what they expect to be able to do with procedural modeling, or see how this works in Houdini or elsewhere". Sorry if I am too harsh on it, but I agree with Alexander: it covers 5% of what it should cover. Having no SDK for it is a super disappointing factor.

As far as I know, IamVFX was constantly frustrated doing a lot of hardcore stuff that should have been implemented from the beginning, and he still has not solved all of the problems (and since there is no SDK coverage, I doubt they are solvable).

luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: Reimplement ICE Modeling to usable state.

Post by luceric » 27 Jan 2014, 03:00

Mr.Core wrote:In my opinion, the current implementation looks like "haha, let's try to implement this thing and see how it goes" instead of "let's ask people what they expect to be able to do with procedural modeling, or see how this works in Houdini or elsewhere".
That's a side subject, but Guillaume Laforge worked on ICE Modeling, and I think he knows Houdini pretty well.

ICE Modeling is 100% a reflection of what Softimage is and its philosophy. Let me explain. Unlike many or most other apps, there is a geometry engine in XSI, a guardian and supervisor of data integrity, as opposed to just having arrays of points and triangles representing a mesh and letting "modifiers" or scripts have raw access. In Softimage, maintaining the non-linearity of the construction stack is more important than anything else.

This was a fundamental shift from Softimage|3D, where the raw-accessed data often and easily became corrupted (famously with Booleans, but there were other easy cases too). All the benefits are too complicated to get into here, but they include all the magic behind non-linear workflows that are hard to explain, such as adding points automatically updating all the envelopes, weight maps, and other property maps. Stuff that simply doesn't work in other apps that have construction history, but that we as users just assume works in XSI. The way that works is that the geometry core exposes a series of low-level "commands" to modify geometry, and these commands do all the dirty work of updating the clusters and the rest of the magic. The built-in operators in XSI use those commands, and all the non-linearity works.

And so in ICE Modeling, you're building a tree that builds a playlist of these commands, which is then executed in one shot. Executing them at every step would be horrendously slow, which is why we collect them in a playlist and update all the clusters in one shot at the end. This super-low-level approach is completely in line with the rest of ICE, which is also extremely low-level. But when you use ICE for particles, you're most likely using a compound, which is made up of hundreds and in some cases thousands of low-level nodes, so you don't realize that ICE is in fact low-level, like an assembly language.
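The playlist idea described above is essentially deferred batch execution: record the low-level commands while the tree evaluates, then apply them and pay the cluster-update cost once at the end. A toy sketch of that pattern (hypothetical Python; none of these names are the actual XSI API):

```python
class GeometryEngine:
    """Toy stand-in for the geometry core: each low-level command
    mutates the mesh, and a separate sync step keeps derived data
    (clusters, envelopes, weight maps) consistent."""
    def __init__(self):
        self.points = []
        self.polygons = []
        self.cluster_updates = 0   # stands in for the expensive bookkeeping

    def add_point(self, pos):
        self.points.append(pos)

    def add_polygon(self, indices):
        self.polygons.append(list(indices))

    def sync_clusters(self):
        # In the real engine this re-validates clusters and property maps.
        # Doing it after every single command would be the slow path.
        self.cluster_updates += 1

class TopoPlaylist:
    """Collect commands while the tree evaluates, execute in one shot."""
    def __init__(self):
        self.commands = []

    def record(self, method_name, *args):
        self.commands.append((method_name, args))

    def execute(self, engine):
        for name, args in self.commands:
            getattr(engine, name)(*args)
        engine.sync_clusters()     # single sync at the end, not per command

engine = GeometryEngine()
playlist = TopoPlaylist()
for i in range(4):
    playlist.record("add_point", (float(i), 0.0, 0.0))
playlist.record("add_polygon", [0, 1, 2, 3])
playlist.execute(engine)
print(len(engine.points), len(engine.polygons), engine.cluster_updates)  # 4 1 1
```

The trade-off is visible even in the toy: five commands, but only one cluster sync, which is why intermediate topology results are not observable mid-branch.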

It's not overly surprising that there is no SDK to add new ICE Modeling nodes, because the low-level topo nodes that are offered are exactly the commands that the geometry engine supports. Higher-level nodes are therefore necessarily expressed in terms of these low-level nodes. You might, for example, create a new ICE Modeling compound that combines some custom "normal" ICE nodes with the built-in topo nodes.

The "Thickness" operator was a performance disaster because applying a very large compound (of any kind, particle compounds too) was very slow, but the Singapore team did something about this in Softimage 2014. It actually executes very fast; it is applying the compound for the first time that is slow, a combination of file I/O and type propagation if I recall correctly. It was important to have this node in order to test and validate ICE Modeling, and it certainly did help find some issues.

Btw, it's a poor example to say that you could do "fences with instancing even without ICE Modeling". ICE Modeling is not about instancing; instancing is a particle problem. ICE Modeling is about building and modifying a geometry. It's also fairly nonsensical to complain that you can't modify topology in the simulation region, because the meaning of the simulation region is "the topology is not changing from here on". That is the reason the region exists: it was created specifically so that topology would be guaranteed to be stable during simulation. Simulation operators do not support topological changes. If you could modify topology there, it would not be called the simulation region.

NNois
Posts: 754
Joined: 09 Jun 2009, 20:33

Re: Reimplement ICE Modeling to usable state.

Post by NNois » 27 Jan 2014, 10:19

Off-topic, but I love these insights into Softimage. Thank you very much for this, Luc-Eric.

Bullit
Moderator
Posts: 2621
Joined: 24 May 2012, 09:44

Re: Reimplement ICE Modeling to usable state.

Post by Bullit » 28 Jan 2014, 11:46

True, but the question is: can anything be done to improve ICE Modeling? To justify its existence it needs to give users sizable benefits over what they have now.

Something like this is not enough: Generating a Forest with ICE Modeling.



I see people using Milan Vasek's or other scatter tools rather than ICE Modeling to do a forest.


luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: Reimplement ICE Modeling to usable state.

Post by luceric » 28 Jan 2014, 14:34

Bullit wrote:True, but the question is: can anything be done to improve ICE Modeling? To justify its existence it needs to give users sizable benefits over what they have now.

Something like this is not enough: Generating a Forest with ICE Modeling.

I see people using Milan Vasek's or other scatter tools rather than ICE Modeling to do a forest.
ICE Modeling is the name of a feature for modifying polygon mesh topology; it isn't for instancing.

One is a tutorial that procedurally generates one huge piece of geometry by adding meshes together, and the other is a tool that generates instances.

The first one is a cute tutorial, but in real life ICE Modeling would be used to create tools that design a few individual trees using procedural parameters, and then you would instance those trees across the hills with the other ICE tools. You would not create a forest as a single mesh with 40 billion polygons.

Bullit
Moderator
Posts: 2621
Joined: 24 May 2012, 09:44

Re: Reimplement ICE Modeling to usable state.

Post by Bullit » 28 Jan 2014, 17:26

Yes, that is another reason the Softimage How-Tos tutorial doesn't make sense; I was coming at it from a user-ability viewpoint. I don't know how things run at Autodesk, but it seems people sometimes don't know how to show the advantages of the product.

User avatar
Mr.Core
Posts: 148
Joined: 10 Aug 2011, 12:35
Skype: giga-core
Location: Kharkov, Ukraine

Re: Reimplement ICE Modeling to usable state.

Post by Mr.Core » 28 Jan 2014, 19:35

Thanks for the expanded answers.

>> Executing them at every step would be horrendously slow, which is why we collect them in a playlist and update all the clusters in one shot at the end.


You mean executed by the guard? I agree in that case, but I think Alexander meant a different approach, where almost all nodes bypass the "core guard" and process the raw data arrays directly, because for many ops that is enough. Building simplified topology-connectivity structures or geometry-query acceleration structures can be a bit slower, but without refreshing all the external stuff you listed (weight maps/clusters and the entire non-linear workflow) it should be very fast, considering ICE is not targeted at extremely large data sizes (e.g. ICE will crash if any data port exceeds 1 GB, which is not a very large threshold considering we are talking about bytes). An approach like the one Alexander described should not be hard to expose via the SDK, because you only have to allow reading and writing those raw topology arrays.

While ICE is extremely low-level, it still handles data in a SIMD-like way; it lacks pointers/references and all the other low-level machinery important for implementing things like the mentioned bevel. So a bevel should become one of the "building blocks" and work in a SIMD way like the rest of the ICE nodes, not like the current "Extrude islands" stuff, which forces us to pull each component through a ton of Repeat loops and context conversions.


>>Btw, it's a poor example to say that you could do "fences with instancing even without ICE Modeling". ICE Modeling is not about instancing; instancing is a particle problem. ICE Modeling is about building and modifying a geometry.

Yep, the example is poor, but "building and modifying a geometry" is also poor in its current state. I can imagine how we could assign per-point/per-polygon attributes (e.g. matID) on leaves to hook into the render tree for a sort of color randomization, using island-detection techniques and the extended set of topology connectivity attributes already exposed; but again, we lack many of the tools to do this in user-friendly ways and within the bounds of a single tree.

The current set could be improved by adding more useful nodes and making the existing nodes more high-level, along with the ability to accept per-component sets/arrays of controls.
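The island detection mentioned above is a connected-components problem; here is a minimal sketch using union-find over shared point indices (hypothetical Python, not an existing ICE node — polygons that share a point are considered part of the same island):

```python
def mesh_islands(polygons, num_points):
    """Group polygon ids into islands: polygons sharing any point
    are connected. Plain union-find over point indices."""
    parent = list(range(num_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Merge every point of a polygon into one component.
    for poly in polygons:
        for idx in poly[1:]:
            union(poly[0], idx)

    # Bucket polygons by the root of their first point.
    islands = {}
    for pid, poly in enumerate(polygons):
        islands.setdefault(find(poly[0]), []).append(pid)
    return list(islands.values())

# Two triangles sharing point 2 form one island; the third is separate:
polys = [[0, 1, 2], [2, 3, 4], [5, 6, 7]]
print(mesh_islands(polys, 8))  # → [[0, 1], [2]]
```

With per-island polygon ids in hand, a per-island attribute (a random matID per leaf, say) is one more array write, which is exactly the kind of building block the posts above are asking for.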

User avatar
AlexanderM
Posts: 28
Joined: 10 Jun 2009, 17:13
Contact:

Re: Reimplement ICE Modeling to usable state.

Post by AlexanderM » 28 Jan 2014, 20:45

luceric wrote: ICE Modeling is 100% a reflection of what Softimage is and its philosophy.
SI is not that terrible ).

Oleg explains this concept correctly. It is not necessary to build the final topology; there are many ways to work with points, polygon indices, and other data.
I believe we need a different, more flexible concept, such as the one presented. Better to move toward it slowly than to quickly add things just to tick boxes.

The port size restriction is a separate problem. For example, simply create a Build Array from Constant and set the size to 2000000000: voilà, an error. But in reality SI crashes with much smaller values for vectors, colors, and matrices (the exact threshold depends on the port type). I ran into this while trying to build a voxel grid (and my Mandelbulb compound). I was very upset :|
Image
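Taking the ~1 GB per-port figure mentioned earlier in the thread at face value, the point where each type hits the cap follows directly from its element size, which would explain why vector, color, and matrix ports fail at much smaller counts than scalars (the element sizes below are assumptions based on 4-byte floats; the real internal layout may differ):

```python
# Rough per-port element limits, assuming a 1 GiB cap and 4-byte floats.
# These sizes are guesses about ICE internals, for illustration only.
CAP = 1 << 30  # 1 GiB in bytes

element_sizes = {
    "scalar":  4,        # 1 float
    "vector3": 3 * 4,    # 3 floats
    "color4":  4 * 4,    # 4 floats
    "matrix4": 16 * 4,   # 16 floats
}

for name, size in element_sizes.items():
    print(f"{name:8s} ~{CAP // size:>12,d} elements max")
```

Under these assumptions a scalar port tops out near 268 million elements while a 4x4 matrix port tops out near 16 million, an order of magnitude apart, which is small for something like a dense voxel grid.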

Bullit
Moderator
Posts: 2621
Joined: 24 May 2012, 09:44

Re: Reimplement ICE Modeling to usable state.

Post by Bullit » 29 Jan 2014, 23:17

Modo's MeshFusion (booleans) has been released. It appears to have some proceduralism.

http://community.thefoundry.co.uk/store ... eshfusion/



Is something like that possible with ICE in the future? With interaction like Piotrek's Meshpaint:


User avatar
csaez
Posts: 253
Joined: 09 Jul 2012, 15:31
Skype: csaezmargotta
Location: Sydney, Australia
Contact:

Re: Reimplement ICE Modeling to usable state.

Post by csaez » 30 Jan 2014, 01:42

MeshFusion looks awesome!

User avatar
dwigfor
Posts: 395
Joined: 17 Nov 2009, 17:46

Re: Reimplement ICE Modeling to usable state.

Post by dwigfor » 30 Jan 2014, 08:28

Alexander, that's beautiful! Congrats

User avatar
McNistor
Posts: 605
Joined: 06 Aug 2009, 17:26

Re: Reimplement ICE Modeling to usable state.

Post by McNistor » 30 Jan 2014, 21:19

MeshFusion is indeed amazing. I didn't quite catch (due to the fast editing) whether it generates topology after a subtraction (or any other operation) or works only at the SubD display level. If it's the latter, it's not very useful when you have to do UVs for texturing and whatnot, but I'm hoping it's the former.

I never considered Modo for its modeling features, as they seemed not "so superior" to XSI's, but with this I'm starting to think otherwise.
The society that separates its scholars from its warriors will have its thinking done by cowards and its fighting done by fools.
-Thucydides
