User:Wendy


K-3D - Physics Engine Integration

Unlike some other popular 3D suites such as Blender, K-3D currently does not provide any tools to automate animation involving the dynamics of rigid bodies. Such an addition is a large undertaking, but the beginning of the task can be broken into a few simple steps. The Open Dynamics Engine (ODE) is a high-quality open-source physics engine which could be integrated into K-3D to provide such facilities. ODE excels at real-time simulation of articulated bodies. Its integrator sacrifices the goal of exact physical realism, and because of this it is highly stable; to the naked eye the dynamics still appear completely realistic. These features make it ideal for integration into a suite such as K-3D.

I think that this project could form the beginning of an extremely worthwhile new feature-set for K-3D, and something that will need to be added sooner or later. I am currently studying games development, and I have a background in both physics and open-source programming, so I would be very excited to take on this task and put all of these skills to use.

Deliverables

  1. A new 'Document Plugin' for creating a 'PhysicsTransform' node
  2. A new 'Document Plugin' for creating a 'ball-and-socket joint'
  3. A new 'Document Plugin' for creating a 'hinge joint'
  4. A new 'Document Plugin' for controlling the simulation process: starting it, stopping it, rewinding it, fast-forwarding it, possibly performing 'caching passes' to enable rewinding, and so on. This plug-in needs to take data about the objects in the scene and send it to ODE (in an appropriate form). The design for this will take some time, which is why the other deliverables I've listed only cover a small range of the possibilities allowed by ODE.
  5. A new 'Application plug-in' to add a new GUI interface element. This element will allow the editor to start/stop/rewind the simulation, as well as to edit overall physics-engine settings such as ERP, CFM, and time-step. This will probably be based on the existing AnimationTimeline panel.
  6. Regression tests for each of the above
  7. Documentation (Doxygen and wiki) for each of the above.

If it turns out to be necessary, I would drop the two joint deliverables (items 2 and 3) to focus on getting all the others working, including documentation and testing. If the design for the Transform Node and Simulation Controller is done right, then adding the joint behaviours later should be straightforward for myself or another developer; but if the design and coding for the PhysicsTransformNode and the Simulation Controller are not rock-solid then we will have bigger problems.

Timeline

Warmup Period - April 11 to May 28 (6 weeks total)

  • Get to know the k-3D community
  • Spend lots of time playing with K-3D, without worrying too much about exactly where it is all headed: just become very comfortable with it all.
  • Ensure that I have the K-3D SDK, sources, etc. all building correctly on both Windows and Ubuntu Linux (including ODE, OpenGL, and the QGLViewer used in the ODE tutorial by Xadek). Generally have the dev environment set up.
  • If I haven't already done this, write and build the 'minimal plug-in' example from the K-3D wiki at http://wiki.k-3d.org/Plugin_Tutorial
  • Carry out all the examples in the ODE tutorial at http://artis.imag.fr/Membres/Xavier.Decoret/resources/ode/tutorial1.html#A

Main Project - May 28 to August 30 (13 weeks total)

May 29 - June 5 (1 week)

  • Set up regression tests to test the 'minimal plug-in' made during the warmup-period. This will form a basis for a proper 'test-bed' for regression testing and integration
  • Make the minimal plug-in do something, anything.
  • Ensure that doxygen documentation is generating ok and that I know how to do this.
  • Continue familiarizing myself with the code-base, but in a more focused way than during the warm-up (that was 'read whatever catches my interest'; now focus on the relevant bits!)


June 5 - June 19 (2 weeks)

  • Set up a file to programmatically define parameters that will eventually be set through the user interface, such as the time-step. A simple header file with constants will do at this stage; it just has to be one place where I can collect these settings while developing the nodes but before the UI element is ready. This saves having settings strewn everywhere, and provides a good basis for thinking later about exactly what should be in the UI element.
  • Learn to create simple K-3D meshes programmatically via the Python scripting interface, and write a script which creates an ordinary PolyCube and PolySphere this way.
  • Modify the 'minimal plug-in' so that it creates a Transform Node which can be inserted into the pipeline for either the PolyCube or the PolySphere. Have it do nothing (ie no transformation) for now, but it must connect into the pipeline correctly, ie output connected to a MeshInstance, input to a PolyCube source. Settle on either PolyCube or PolySphere now, and only carry out work for the chosen one until it is working.
  • Once I've done this, think really hard about how to do the 'rewind' issue. Look at the current animation_track and key-frame stuff again!
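
The parameters file from the first bullet above might start out something like this (expressed in Python for consistency with the planned test scripts; every name and default below is a placeholder, not a final K-3D property name):

```python
# physics_params.py - temporary, programmatic home for simulation settings
# that will eventually be edited through the GUI panel. All names and
# defaults here are placeholders chosen for illustration.

TIME_STEP = 0.01              # seconds of simulated time per ODE world step
GRAVITY = (0.0, -9.81, 0.0)   # world gravity vector
ERP = 0.2                     # global error reduction parameter
CFM = 1e-5                    # global constraint force mixing
USE_QUICKSTEP = False         # worldStep vs quickStep integration
```

Keeping everything in one module makes it trivial to enumerate the settings later when deciding exactly what belongs in the UI element.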

June 20 - July 9 (2 weeks)

  • Modify the plug-in again so that, given a particular mesh as input, it:
  • Creates an ODE world and bodies with the appropriate parameters (including mass and collision geometry)
  • Calls the ODE simulation step - uses the resulting position to translate the mesh to the position defined by the ODE library.
  • Don't worry yet about the timing loop: just have it start at some point in time and finish at a later one! Some thought will need to be given to the relationship between K-3D space, camera space, and ODE 'space'.
  • Once the previous step is working, start thinking about how to integrate it with the K-3D 'TimeSource', and generally how the timing issue is going to work.
  • Begin to write the basics of the Document Plugin that will control the timing. To begin with, only worry about time moving forward; once that is understood and working, think about how to do the 'rewind'. Since there are no joints yet, try just a simple scenario like a block falling under gravity.
  • Using the knowledge I've gained from the previous step, produce diagrams using Dia or some other tool showing how the timing will work. Be very specific, and get this clear with Bart, Tim and Joaquin to be sure that the design is going to be feasible.
  • Modify the programmatic script/testbeds so that they create a cube, a time-source, and a time simulator. The test-bed should then be able to create a PolyCube programmatically, simulate it falling under gravity, and test the simulation-control loop using various time-step combinations and other test parameters.
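
To make the 'block falling under gravity' scenario concrete, here is a toy stand-in for the simulation loop. Nothing below touches the real ODE or K-3D APIs: step() hand-integrates one body (where the real plug-in would call dWorldStep and read the result back via dBodyGetPosition), but the shape of the control loop is the one described above.

```python
# A toy stand-in for the ODE simulation step: one body falling under
# gravity with a fixed time-step. We integrate by hand (semi-implicit
# Euler) so the control loop can be exercised without ODE present.

GRAVITY = -9.81   # m/s^2, acting along -y
TIME_STEP = 0.01  # seconds per simulation step

def step(y, vy, dt=TIME_STEP):
    """Advance one body by one fixed time-step (semi-implicit Euler)."""
    vy += GRAVITY * dt   # integrate velocity first...
    y += vy * dt         # ...then position, for stability
    return y, vy

def simulate(y0, n_steps):
    """Run the simulate-render loop; returns the y position at each frame."""
    y, vy = y0, 0.0
    frames = []
    for _ in range(n_steps):
        y, vy = step(y, vy)
        frames.append(y)   # this is what the PhysicsTransform would consume
    return frames

frames = simulate(10.0, 100)   # one simulated second of free fall
```

After 100 steps the body has fallen about 4.95 m; the small discrepancy from the analytic ½gt² = 4.905 m is ordinary fixed-step integration error, which is one reason the time-step needs to be a user-visible setting.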

July 9 - July 16 (1 week)

  • Write an Application-level plug-in to allow the user to interact directly with the simulation loop. This will be based closely on the existing animation timeline panel, but will have some additional settings such as time-step control. I don't want to do this until this late stage, since until the timing/simulation control design is finalized it is not clear exactly what the user will be able to set. Thus up until now the simulation has been controlled programmatically using the test-bed scripts; since these were necessary anyway for regression testing, this is no loss.

July 16 - July 30 (2 weeks)

  • Write a document plug-in that will allow the user to create either a 'hinge-joint' or a 'ball-and-socket joint' (choose one or the other to begin with)
  • Modify the programmatic script/testbeds so that they create two cubes and join them with the 'joint' type that has just been written, and then, again, simulate them falling under gravity.
  • Modify the programmatic script/testbeds so that they create several cubes and join them with combinations of whichever joint type has just been written, and then, again, simulate them falling under gravity, hitting each other, and so on.
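
The joint-node idea can be sketched as follows. The class and field names are hypothetical stand-ins for what will really be k3d::inode* properties on a C++ document plug-in; the point is only that a joint references two already-positioned bodies plus a world-space anchor (which, as discussed later, ODE expects in world coordinates).

```python
# Sketch of a joint node: not rendered, just references to two
# 'physics-enabled' bodies plus a world-space anchor point. In the real
# plug-in the anchor would be handed to ODE via e.g. dJointSetBallAnchor;
# everything here is an illustrative stand-in.

class Body:
    def __init__(self, name, position):
        self.name = name
        self.position = position  # world-space (x, y, z)

class BallJoint:
    def __init__(self, body_a, body_b, anchor):
        # bodies must already be created and positioned; the anchor is in
        # world coordinates, so it cannot live on either body alone
        self.body_a = body_a
        self.body_b = body_b
        self.anchor = anchor

cube_a = Body("cube_a", (0.0, 2.0, 0.0))
cube_b = Body("cube_b", (0.0, 0.0, 0.0))
joint = BallJoint(cube_a, cube_b, anchor=(0.0, 1.0, 0.0))
```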

July 31 - Aug 15 (2 weeks)

  • Finish the plug-in for whichever geometries/hinges were not chosen previously
  • Modify the programmatic script/testbeds so that they create a number of cubes and spheres, join them with the two 'joint' types that have just been written, and then, again, simulate them falling under gravity, hitting each other, and so on.


Aug 15 - Aug 22 (1 week)

  • Make sure that the user interface element allows users to easily edit settings such as the ERP and the CFM, at least globally (perhaps locally per-joint as well). This will help them ensure that their simulations are stable and look how they want (ie springy vs hard collisions, etc.).


Aug 23 - Aug 30 (1 week)

  • Polish the code, make sure documentation is all up to date
  • If it is ready, record a tutorial for end-users showing how to create animated cubes with joints and hinges!
  • If there is any extra time (unlikely!), then expand into other areas: more complex meshes? More joint-types?


Deliverables - more detail

A new 'Document Plugin' for creating a 'PhysicsTransform' node

A 'PhysicsTransform' is essentially a modifier for an ordinary MeshSource that transforms its position and rotation. To find out how these should be transformed, it calls on functions provided by the ODE physics engine. It will of course need to provide data related to the current position and shape of the mesh to these ODE functions.


A 'PhysicsTransform' should fit into the visualization pipeline roughly as follows:

Image:K3d_single_physics_mesh_type_in_pipeline.png


Following this diagram, we see that the MeshSource node plug-in produces the mesh data (for some sort of geometric object), then (possibly) other user modifications are applied.

Next ODE comes into things. ODE, when given information about the mesh and its current position, supplies the data for the 'PhysicsTransform' node. This is a modifier which translates and rotates the mesh as determined by the output of the simulation step. Finally, the MeshInstance consumes this data, and renders the appropriate output to the screen.

Note that the physics transform node has to know a little about the node it is transforming. In particular, it will need to know the size and shape of the node. Differently shaped 'source' nodes need to have different mass-tensors set and also different collision shapes, because the information that ODE needs to simulate them differs.

In fact it turns out that the diagram above is something of a simplification: the visual appearance of the mesh to be rendered can be decoupled from the 'dynamical' state of the mesh.

The 'dynamical' state is what ODE cares about, and essentially consists of a mass tensor and a current position and velocity. Wherever possible, the dynamical state should be based on a simple geometry - a sphere, a box, a cylinder, and so on. However if necessary, ODE can calculate a simulation using generalized geometries by using an appropriate mass-tensor.

The visual appearance is what K-3D usually deals in. This may involve a mesh with many polys and a complex topology.

Finally, ODE can be asked to do collision detection, in which case it needs to know the 'shape' of an object. In this context the shape is separate from both the visual appearance and from the dynamical state (mass-tensor). Like the dynamical state, the collision-shape should be kept simple wherever possible, although ODE does have the option of using a dTriMesh class to represent 'triangle soup' :)

Thus we can potentially have three separate meshes providing input for the final image. This situation is shown below:


Image:K3d_three_physics_mesh_types_in_pipeline_diagram.png

The PhysicsTransform node will have to invoke different ODE functions depending upon what kinds of input meshes it receives. For instance, to set up the mass-tensor ODE provides dMassSetSphere and dMassSetBox for setting the mass up as a sphere or a box, and it also provides the general dMassSetParameters for setting an arbitrary mass distribution. There is a similar set of functions for setting the collision geometry.

To begin with, I would like to write the PhysicsTransform node as a class which is capable of carrying out the most general case, but start by implementing actual behavior only for the two simplest forms of input mesh: a PolyCube and a PolySphere.

If these end up working ok, then I will look into more complex shapes.

There are also a number of other ODE parameters which may apply on a body-by-body basis (thresholds for automatic disabling and the like). These should be implemented as K-3D 'properties' on the transform node. Then when the ODE API is called, the values can be passed to it as appropriate.
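
A sketch of the mass-setup dispatch just described, under the assumption that only PolyCube and PolySphere inputs are handled at first. The inertia formulas are the standard solid-box and solid-sphere ones (what ODE computes for dMassSetBoxTotal and dMassSetSphereTotal given a total mass); the Python function and field names are purely illustrative.

```python
# Dispatch on the input mesh type to build the mass data for ODE, mirroring
# the box/sphere/general split described above.

def box_mass(m, lx, ly, lz):
    """Diagonal inertia tensor of a solid box of total mass m."""
    return {"mass": m,
            "inertia": (m / 12.0 * (ly * ly + lz * lz),
                        m / 12.0 * (lx * lx + lz * lz),
                        m / 12.0 * (lx * lx + ly * ly))}

def sphere_mass(m, r):
    """Solid sphere: I = 2/5 m r^2 about any axis through the centre."""
    i = 2.0 / 5.0 * m * r * r
    return {"mass": m, "inertia": (i, i, i)}

def mass_for_mesh(kind, m, **dims):
    if kind == "polycube":
        return box_mass(m, dims["lx"], dims["ly"], dims["lz"])
    if kind == "polysphere":
        return sphere_mass(m, dims["r"])
    # general meshes would go down the arbitrary-tensor path
    # (dMassSetParameters), which is out of scope to begin with
    raise NotImplementedError(kind)
```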

A new 'Document Plugin' for creating a 'ball-and-socket joint'
A new 'Document Plugin' for creating a 'hinge joint'

Joints will be implemented as nodes which are not rendered, but which connect two other node objects together. In particular, they will be able to connect any two appropriate 'physics-enabled' nodes. Thus, they will have two properties that are k3d::inode*, one pointing to the first body and the other to the second.

ODE expects the position information about joint 'anchor points' to be given in world co-ordinates. This means that

  • the bodies must be created and correctly positioned first, before the joints are attached.
  • the joint information should not be stored purely as a property of one or the other of the bodies that it connects

There are a number of other parameters which may apply on a joint-by-joint basis. These should be implemented as K-3D 'properties' on the joint node. Some examples are:

  1. Local ERP
  2. Local CFM
  3. Other local joint parameters such as 'hi stop' and 'lo stop', which determine the range of motion of the joint.

A new 'Application plug-in' to add a GUI panel for interacting with the simulation process

The user interface element for interacting with the simulation controller will closely resemble the existing animation timeline panel. It may even essentially be this panel, only with some extra properties (time-step, gravity, etc.) and hooked up differently behind the scenes! This GUI element will allow the editor to start/stop/rewind the simulation, as well as to edit overall physics-engine settings such as ERP, CFM, and time-step.

A new 'Document plug-in' to control the simulation process

The document plug-in which controls the simulation process will be responsible for a number of things: starting it, stopping it, rewinding it, fast-forwarding it, possibly performing 'caching passes' to enable rewinding, and so on. This plug-in needs to take data about the 'physics-enabled' nodes in the scene and send it to ODE (in an appropriate form). The design for this will need a lot of thought. The simple part will be taking the necessary position data from the K-3D meshes and forwarding it to ODE for a simulation step. The more difficult part will be allowing for 'rewinds', and possibly for things like adaptive time-steps or time 'super-sampling'. It will have some resemblance to existing code that deals with animation and key-frames, but there may be many issues that will have to be resolved in a new way. I am not yet sure how to do this, but I have noted down some possibilities in the section on 'Possible Work-flow' below.

I have also now begun a separate wiki page with my thoughts on this :)

Possible Work-flow

I have also created a new wiki page with some thoughts on the rewind-issue.

I have tried to imagine what kind of work-flow an artist might follow in practice if an integrated physics engine were to suddenly appear by magic in K-3D!

This was to help me get more of a conceptual grip on what needs to be done and how the code should be designed. I'm sure there will be many modifications to this, but at least it's a start. In particular, the 'rewind' issue needs a lot more thought.

To begin with, I have listed the steps required by ODE (these are listed in the ODE manual at http://www.ode.org/ode-latest-userguide.html#sec_3_10_0)

1. Create a dynamics world.
2. Create bodies in the dynamics world.
3. Set the state (position etc) of all bodies.
4. Create joints in the dynamics world.
5. Attach the joints to the bodies.
6. Set the parameters of all joints.
7. Create a collision world and collision geometry objects, as necessary.
8. Create a joint group to hold the contact joints.
Loop:
    9. Apply forces to the bodies as necessary.
    10. Adjust the joint parameters as necessary.
    11. Call collision detection.
    12. Create a contact joint for every collision point, and put it in the contact joint group.
    13. Take a simulation step.
    14. Remove all joints in the contact joint group.
15. Destroy the dynamics and collision worlds.
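
The per-frame part of the loop above can be sketched with a deliberately trivial 'world': one body, a ground plane at y=0, and contact handling reduced to clamping, just to show where steps 9-14 sit in the control flow. None of this is the real ODE API; the step numbers in the comments refer to the list above.

```python
# Skeleton of the ODE per-frame loop, with hand-rolled stand-ins.

def run(y0, dt, n_steps, g=-9.81):
    y, vy = y0, 0.0
    contacts = []                     # step 8: contact joint group
    for _ in range(n_steps):
        vy += g * dt                  # step 9: apply forces
        # step 11: collision detection against the ground plane
        if y + vy * dt < 0.0:
            contacts.append(("body", "ground"))  # step 12: contact joint
            vy = 0.0                  # hard non-penetration constraint
            y = 0.0
        else:
            y += vy * dt              # step 13: simulation step
        contacts.clear()              # step 14: empty the contact group
    return y

rest_y = run(1.0, 0.01, 200)          # body should come to rest on the plane
```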

So, in order to integrate ODE into K-3D, we need to 'hook into' this process using K-3d.

I envisage the following possible work-flow once there is a physics engine integrated into K-3d:


  1. The artist creates a scene with one or more objects in it. At least one of these objects must be a 'physics-enabled' node (discussed below). The simplest example might be a 'physics-enabled' PolyCube: just like an ordinary PolyCube, you can click on a 'PhysicsEnabledPolyCubeSource' node to add an instance to the scene, and so on. It will share most of the properties of an ordinary PolyCube node.
  2. Any of the objects in the scene which are 'physics-enabled' will also have extra properties that you can edit. At the simplest level, the user would be able to enable or disable this object, and to set whether it is part of an 'island' of other objects. At the next simplest level might be the ability to increase or decrease the total mass (independently of the size of the object). In the long run this could become more sophisticated and allow you to edit things such as the precise inertial mass-tensor (ie the overall weight-distribution), and to configure allowed joint-types, friction, and so on.
  3. The artist creates a number of 'Joint' objects. These are objects which are not rendered; they exist to hold information for ODE about how the objects are to be connected. Thus when each of these is created, it must also allow you to add two other existing nodes as 'joinees'. Once created, these joints should be able to be edited to set things such as the local ERP and other joint parameters. Also, it should be addable to a 'joint-group'.
  4. The artist uses a new gui interface element (application plug-in) to set-up some overall settings for the physics engine. These would include things such as:
* The global ERP (error reduction parameter)
* The global CFM (Constraint Force Mixing Parameter)
* Global auto-disable thresholds (how long a body has to be idle in order to be automatically set to rest, and that kind of thing)
* The strength of gravity, and whether it exists in this 'world'
* The time-step for the simulator. This may ultimately be 'adaptive', but there should be a simple constant setting allowed as well. 
* Whether to use the ordinary worldStep or the quickStep integration methods. Again, this may ultimately be 'adaptive', but there should be a simple constant setting allowed as well. 
* Since these are quite technical concepts, it would be worth thinking of better ways to convey to the user what they do! (ie CFM becomes the 'sponginess parameter' and quickStep/normalStep becomes a 'low-realism/high-realism simulation parameter')
  5. When ready, the artist uses the GUI panel plug-in to 'start simulation'. This triggers K-3D to call the 'simulation controller' document plug-in to run through a timing loop. This controller loop, and the ways in which it interacts with existing animation code such as the ikeyframer.h interface or the 'animation_track' class, will need a lot of thought and careful design. Overall though, it will look at each physics-enabled node in the scene, determine from its mesh-shape what kind of rigid body it should be set up as for ODE, and also what kind of collision shape it should have, and then invoke the appropriate ODE functions with this data. The returned values will determine the position and rotation for the visual mesh to be rendered.

As discussed, in theory these can be 'decoupled', that is, the Rigid Body mass distribution doesn't have to match the collision shape and neither of them need to match the k-3d mesh-shape nor the rendered final outcome. However, in the interests of maintaining basic sanity, to begin with I think we should just have a straightforward mapping of:

: K-3D PolyCube -> ODE box mass and ODE box collision shape
: K-3D PolySphere -> ODE sphere mass and ODE sphere collision shape
: K-3D Sphere -> ODE sphere mass and ODE sphere collision shape, and so on
  6. Once we have values for all these parameters, we are finally ready to hook into step 1 of the ODE process outlined earlier. When the artist hits 'start simulation', our 'simulation controller' plug-in will call ODE with the information we have determined about the shapes and masses of the 'physics-enabled' objects in the scene. ODE will run through its steps 1 to 15 repeatedly. Now the timing issues become important.
  7. At the end of each simulation step, K-3D takes the updated values for position and rotation of each physics-enabled mesh from ODE using functions such as dBodyGetPosition and dBodyGetRotation. It uses this data to update the 'PhysicsTransform' node in the pipeline for the node in question. Once all nodes have been updated, it re-renders the preview or final scene accordingly. This can possibly be tied in with the existing key-framing facilities (ie each time-step in the simulation is a key-frame), but I would have to look at them in more depth to see whether they do what is needed. Otherwise, the existing animation facilities (ie k3d/modules/animation/) can at least be used as a starting point for new code to carry out this 'simulate-render-simulate-render' loop.
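
A minimal sketch of this per-step update pass. Here dBodyGetPosition is stood in for by reading a plain dictionary, and the PhysicsTransform node is reduced to a single translation field; all of the names are hypothetical.

```python
# After each simulation step, copy each body's updated position into its
# PhysicsTransform node so the MeshInstance renders it in the new place.

class PhysicsTransform:
    def __init__(self):
        self.translation = (0.0, 0.0, 0.0)

def sync_transforms(ode_bodies, transforms):
    """ode_bodies: name -> position, as ODE would report it per body."""
    for name, pos in ode_bodies.items():
        transforms[name].translation = pos  # feed the pipeline modifier

transforms = {"cube": PhysicsTransform()}
sync_transforms({"cube": (0.0, 4.9, 0.0)}, transforms)
```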

As one possibility, I had a look at how Blender does this. Blender has two 'modes' when it comes to physics simulation.

  1. 'Animation Preview' mode.
  2. 'Simulation Baking' mode. 

The 'Simulation Baking' facility allows a physics animation to be 'recorded' to an IPO.

An IPO seems to be Blender's equivalent of an 'Animation Track' such as what we have in k3d/modules/animation/animation_track.cpp

Thus a good way to go might be to use the same idea. Whilst in 'Preview Mode' a user can run the simulation and get a general idea of what it is doing. If they are interested in a particular run, then they can 'bake' that version into a set of key-frames (ie an animation track). Once baked, they can rewind and fast-forward as much as they like. If they choose to change something at a particular frame, then when they start it again they will be back in 'preview mode' for a new run-through on this new 'historical track'. They can then again choose to bake it when they are happy with it, and so on. There are a number of issues surrounding this that need careful thought - what happens to the first 'baking' when the artist makes a change and starts a new preview?
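
The preview/bake idea might look something like this: run the simulation once, recording each frame as a key-frame, after which rewinding and scrubbing become cheap lookups over the baked track instead of re-running ODE. Linear interpolation is used here as a placeholder for whatever the animation_track code actually does, and bake() and sample() are hypothetical names.

```python
# Bake a simulated trajectory to key-frames, then scrub it by interpolation.

def bake(simulate_frame, n_frames, dt):
    """Run the simulation once, recording (time, value) key-frames."""
    return [(i * dt, simulate_frame(i)) for i in range(n_frames)]

def sample(track, t):
    """Scrub to time t by linearly interpolating the baked key-frames."""
    for (t0, v0), (t1, v1) in zip(track, track[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return v0 + a * (v1 - v0)
    return track[-1][1]   # past the end: hold the last frame

track = bake(lambda i: float(i * i), 5, 0.1)   # toy trajectory, not physics
```

Whether linear interpolation between baked frames looks *exactly* like the original ODE run is precisely the interpolation question raised under 'Possible Work-flow'.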


There are some more notes and some links to relevant blender pages on the new wiki page I have created for discussing the 'rewind' issue.

The more I think about the whole timing issue, the more I come to feel that it is really quite complex and will need very careful design. Because of this, I think that a significant part of the early period of the project should be devoted to getting the timing design absolutely right. That way, even if not that many features of ODE (arbitrary mass-tensors, many joint-types, and so on) get implemented, at least the core code is structurally correct, enabling future workers to continue with this feature. This is why I have only included a couple of simple mesh types and joint types amongst my deliverables - I think that a major deliverable will actually be the simulation controller plug-in, which must get all this timing stuff 'right'. Also, a properly set-up testbed which can test out a couple of different implementations of the timing loop, each with just a simple object being animated, would be useful.

Since the design won't be complete until after the project has begun, what I have listed here are mainly a series of 'questions that need to be answered'. I also have begun deconstructing what is happening in the pipeline during some existing animations such as the dancing_cube animation and the animation_track_test animation. I have created diagrams of the visualization pipeline for each of these, and discussed briefly how the TimeSource, MeshSource, MeshInstance, Position, and possibly other nodes interact in each case. This is to get me started thinking about how to design the simulation controller. These notes can be found here.

Some design questions are:

  • Will the 'timestep' between K-3D frames and ODE 'world steps' be tied together, or will they be allowed to vary independently?
  • If the former, what should be the ratio of the two?
  • If the latter, how will this be controllable by the user? How can we avoid letting the user select inherently 'bad' combinations of timestep/worldstep?
  • Whilst this is going on, if the artist is to be able to 'save out' a copy of the animation then this has to be done in one of two ways:

1) If the simulation is completely deterministic then it would be possible to just save the parameters and the start and end times that the user specified. Then upon re-opening the file, k3d would simply re-run the same animation with these parameters. This would save space in the saved files and be relatively simple to implement, however it would not be a portable solution for showing animations elsewhere, and there may be other issues with it that I haven't thought of.

2) The animation can be saved as a set of 'key-frames' using the existing key-framing base-class (k3d/k3dsdk/ikeyframer.h) or the AnimationTrack classes (k3d/modules/animation/animation_track.cpp). One issue with this would be interpolation. When K-3D re-runs such a set of key-frames, it will interpolate between frames - will this look *exactly* the same as the version that ODE produced on first creation? Does it even need to, as long as the effect is close enough to the 'naked eye'? Note that how this issue plays out will depend on things such as the ratio between the ODE world-step and the K-3D time-step mentioned earlier.

  • Separately from the issue of 'saving' an animation in a file for later use, we also have the question of 'rewinding' and 'scrubbing the time back and forth'. These issues are related, however, since the same design may be able to solve both. In terms of the work-flow apparent to the artist:
    • If the artist hits 'rewind', the simulation should be able to jump back the required number of time-steps. This should be selectable using a slider or numerically, with units in frames or seconds/milliseconds. Once back at the frame of interest, the artist can 're-arrange' the objects or edit their properties, and then resume the simulation. The last process ('rewind' functionality) will have to be carefully implemented. In addition to this, there will have to be careful safe-guards to stop the user from attempting 'rewinds' on systems that are too complex and will just eat all the memory / processor. I suspect that for a fairly small number of objects there could potentially be almost infinite 'rewind' facilities (that is, rewind as far as you like in the sequence), but that as the number of bodies grows, the resources needed will grow exponentially. Thus users should not be trying to rewind thousands of frames on a system with thousands of objects, for instance!
  8. When happy with the simulation, the artist can save out the sequence. Again, I haven't put a lot of thought into this aspect yet!

Terminology and Background

There are a number of terms used by K-3d and by ODE that have special meanings. I have tried to describe and discuss those that are used in this document in the following section.


But first, a brief introduction to ODE. ODE stands for 'Open Dynamics Engine'. It is a mature, robust, and high-quality engine for simulating rigid-body dynamics, with both a C and a C++ API, licensed under the BSD license.


Some key ODE Features

(These are as summarized in the ODE manual at http://www.ode.org/ode-latest-userguide.html#sec_3_2_0 . The original list is more detailed. I have left in mainly the features that are most relevant to this project.)


  • Good for simulating articulated rigid body structures. An articulated structure is created when rigid bodies of various shapes are connected together with joints of various kinds. Examples are ground vehicles (where the wheels are connected to the chassis), legged creatures (where the legs are connected to the body), or stacks of objects.
  • Designed to be used in interactive or real-time simulation. It is particularly good for simulating moving objects in changeable virtual reality environments. This is because it is fast, robust and stable, and the user has complete freedom to change the structure of the system even while the simulation is running.
  • Uses a highly stable integrator, so that the simulation errors should not grow out of control. The physical meaning of this is that the simulated system should not "explode" for no reason (believe me, this happens a lot with other simulators if you are not careful). ODE emphasizes speed and stability over physical accuracy.
  • ODE has hard contacts. This means that a special non-penetration constraint is used whenever two bodies collide. The alternative, used in many other simulators, is to use virtual springs to represent contacts. This is difficult to do right and extremely error-prone.
  • Built-in collision detection system. However you can ignore it and do your own collision detection if you want to. The current collision primitives are sphere, box, capped cylinder, plane, ray, and triangular mesh - more collision objects will come later.
  • ODE's collision system provides fast identification of potentially intersecting objects, through the concept of 'spaces'.
  • Rigid bodies with arbitrary mass distribution.
  • A number of joint types: ball-and-socket, hinge, slider (prismatic), hinge-2, fixed, angular motor, universal.
  • Choice of time stepping methods: either the standard 'big matrix' method or the newer iterative QuickStep method can be used.
  • Has a native C interface (even though ODE is mostly written in C++).
  • Has a C++ interface built on top of the C one.
  • Many unit tests, and more being written all the time.
  • Platform-specific optimizations.

K-3d terms

The K-3d visualization pipeline

The visualization pipeline is fundamental to K-3D. Essentially, the pipeline consists of a set of 'Nodes'. Each node is either a source, a modifier, or a sink of data. Easily recognizable entities such as meshes, cameras, and lights are all represented by K-3D as nodes in the pipeline, but more abstract things such as render-engines, transformations, and so on are also represented as nodes. Information flow in the pipeline is quite sophisticated, and can involve both 'fan-out', where a single source node feeds multiple sinks/modifiers, and 'fan-in', where data from multiple sources is combined into a single sink. More information about the K-3D visualization pipeline can be found at http://wiki.k-3d.org/Visualization_Pipeline.


K-3d Plug-ins

(The following is quoted with thanks from the K-3d wiki page at http://wiki.k-3d.org/Plugin_Design)


Document plug-ins

"...are the type users are most aware of - a document plug-in is linked with a specific user document at the time of its creation, saving and restoring its state along with the containing document. Pipeline components - sources, modifiers, sinks - are all document plug-ins, as are render model components - cameras, render engines, lights, materials, etc. It is not possible to create a document plug-in without a valid open document..."


Application plug-ins

"...in contrast, are not associated with any document, and do not save or restore any state. These plug-ins are usually created "behind the scenes" to perform a specific task, then destroyed, without any user intervention. Thus, application plug-ins often take on the role of strategy objects (as in the Strategy Design Pattern). Examples of application plug-ins include user interface plug-ins, scripting engines, and file format importers and exporters..."

Plugins are implemented as C++ classes. They can be instantiated using k3d::iapplication_plugin_factory and k3d::idocument_plugin_factory.


Mesh

A mesh is one of the fundamental elements which K-3d deals with. At its most basic, a mesh is typically made up of a set of triangles or polygons which represent some sort of three-dimensional object. They do this by defining a set of points on the two-dimensional surface of that object. These points are called 'vertices'.

A mesh is going to be in many ways the 'point of contact' between the two systems (ODE and the K-3d pipeline). In a nutshell:

  1. K-3d will be in control of the whole process, and will first create the meshes to represent each of the 'physics-enabled' objects in our scene.
  2. K-3d will then tell ODE to create a rigid body and a collision geom for each of these objects.
  3. K-3d will ask ODE to perform a simulation step.
  4. K-3d will take the updated position/velocity etc. information provided by ODE from the simulation step and use it to update a 'Physics Transform Node' attached to each 'physics-enabled' node.
  5. K-3d will render the updated scene.
  6. Return to step 3 (the bodies and geoms only need to be created once).

All our physics back-end needs to be able to do is provide K-3d with the information required to create an appropriate mesh instance. The mesh instance will provide information both about the position of the object and about its shape.

Note: The term 'mesh' is also used in ODE specifically to refer to a general type of mesh used in collision detection, called dTriMeshClass. However, in general ODE has no concept of the shape of a body; it only cares about the mass distribution.


ODE terms

Rigid Body

A rigid body is the most fundamental element which ODE deals with. It is also referred to as a 'body', or as a 'dynamics entity'. These entities have a 'physical presence' and are acted upon by whatever physical forces are defined for the scene (i.e. gravity, joint forces, and so on). This is in contrast to other objects which may be rendered by a graphics library but which do not participate in the dynamics simulation.

The way in which a rigid body behaves is determined by its mass distribution. Unlike in the real world, a rigid body's mass distribution is not tied directly to its shape (or to its geometry or its mesh). The mass distribution has to be explicitly set up using ODE's mass-setting functions.

The best way to understand a rigid body is probably in terms of its essential properties. These are:

  • Position vector (x,y,z) of the body's point of reference. (*)
  • Linear velocity of the point of reference, a 3D vector. (*)
  • Orientation of the body, represented by a quaternion or a 3x3 rotation matrix. (*)
  • Angular velocity vector (another 3D vector) which describes how the orientation changes over time. (*)
  • Mass of the body (a scalar).
  • Position of the center of mass with respect to the point of reference (a 3D vector).
  • Inertia matrix. This is a 3x3 matrix that describes how the body's mass is distributed around the center of mass.


Items marked with an asterisk (*) will typically change during a simulation. The others will remain constant during a simulation.

So, a rigid body can be defined using the above list of 7 attributes. The important thing to remember is that, as far as the physics simulation is concerned, the above 7 attributes are the only characteristics that a rigid body has. In particular, it does not have a shape, a geometry, a mesh, a visual appearance, or any of those things we usually associate with 3d objects!


Inertia matrix

Also referred to as the mass distribution. This determines the way in which a body's mass is distributed around its center of mass. It can be characterized for a particular body using the following:

  • Total mass
  • Position of center-of-gravity
  • A 3x3 inertia matrix

For complete generality, in ODE all of these parameters can be set 'by hand'; however, ODE provides convenience functions for setting up the following four common situations:

  • dMassSetSphere (...)
  • dMassSetCappedCylinder (...)
  • dMassSetCylinder (...)
  • dMassSetBox (...)

For the purposes of this project I would like to begin with just the Spherical and the Box-shaped mass distributions. These will be mapped to Polysphere and Polycube K-3d meshes, respectively.


Physics Simulation

Also referred to as dynamics simulation or as integration. The physics simulation is the process whereby ODE takes all of the 'Bodies' that have been defined, along with their joints and so on, and simulates what happens as they move forward through time. The simulation step only needs to know about rigid bodies and their properties - it does not need to know about the collision shapes or the final visual shapes (i.e. 3d meshes) of the bodies.

Collision Detection

The physics simulation just described will simulate how bodies move under the influence of various forces and constraining joints. But in order to proceed to the next obvious step in terms of physical realism, we need to implement 'collision detection'. At this point, our objects will need to have a shape. ODE provides its own facilities for giving a rigid body a shape and for performing collision detection on those shapes within the context of a simulation. ODE also allows you to swap in your own collision detection facilities (i.e. object shapes and techniques for determining whether a collision has occurred). However, I think it best if we stick to using the facilities that already come with ODE at this stage. The terminology used by ODE for the 'shape' used in collision detection is 'geometry object' (or 'geom').


Geometry Objects

Also referred to as 'collision geometries', 'collision shapes', or just 'shapes' ('geom' for short). These are the fundamental objects used in the ODE collision system. Note that they are separate, conceptually, from the rigid bodies used by the main physics simulation. A geom can represent a single rigid shape (such as a sphere or box), or it can represent a group of other geoms - this is a special kind of geom called a 'space'. Any geom can be collision-tested against any other geom. To use the collision engine in a rigid body simulation, geoms are associated with rigid body objects. This allows the collision engine to get the position and orientation of the geoms from the bodies.

A good way to remember the distinction between geoms and rigid bodies in ODE is this:

  • A geom has geometrical properties (size, shape, position and orientation) but no dynamical properties (such as velocity or mass).
  • A body has dynamical properties such as position, velocity, and mass-distribution, but no geometrical properties.

A body and a geom, taken together, represent all the properties of the simulated object.


In ODE, every geom is an instance of a geometry class, such as sphere, plane, or box. There are a number of built-in classes, listed below, and you can define your own classes as well.

  1. Sphere Class
  2. Box Class
  3. Capped Cylinder Class
  4. Plane Class
  5. Ray Class
  6. Triangle Mesh Class - can be used to represent any kind of triangle 'soup'
  7. Composite objects - A composite collision object is one that is made out of several component parts, such as a table that is made out of a box for the top and a box for each leg.
  8. User defined classes - You can also define your own geometry classes, either by adding to ODE's source code directly or by using the API provided for this purpose.


ERP and CFM

ERP is the 'Error Reduction Parameter', and CFM is the 'Constraint Force Mixing'.

I won't go into these in great depth now, but essentially they set the 'springiness' and 'sponginess' of joints. Think springs and dampers, but in a slightly different format. These can be set globally, and also locally for some joint types. They may need fiddling with if the system is unstable (i.e. things are 'exploding' for no apparent reason), or if it generally just doesn't 'look right'.

About me

Overall

I am a mature-age student from Melbourne, Australia. I am currently studying the final year of an Advanced Diploma in Games Programming at the Academy of Interactive Entertainment. I am particularly interested in 3d graphics programming - geometry, texturing, shading, and lighting.

In 1996 I obtained an honors degree in Physics from the University of Melbourne. Since then I have spent time working in web-development using various open-source technologies such as Python, Django, Zope, MySQL, and PostgreSQL. Please see my CV at http://lollipop.wordpress.com/about/curriculum-vitae/ for further details.

I became interested in Games Programming for a variety of reasons, but one important one was that it would make more use of my original physics degree than the web-development I had been doing for some time.

Coursework

As part of the requirements for the Advanced Diploma in Games Development, during the second part of the year I will be carrying out a research and implementation project related to spherical harmonic lighting using the commercial Gamebryo Game Engine.

We will also be carrying out a group project which involves taking a student-designed game from initial pitch to completed executable. We will be working in groups of about fifteen, with roughly five programmers and ten artists in each group. This project will be carried out in C++, again using the Gamebryo Engine.

This course takes up 2.5 days per week, leaving me another 2.5 days plus, say, a half day on the weekend for GSoC related work.

The equivalent of the 'finals' for the Diploma course do not occur until a month after the finish of the GSoC project, so the timing is as good as it can be considering that in Australia the GSoC project does not actually run during 'Summer' :)

Opensource Software

I've been interested in open-source software since about 1997, when a friend helped me to get my first installation of Slackware Linux running :) One of the highlights of this year so far has been attending Linux Conference Australia, which was held in Melbourne this January.

Currently, during my spare time, I do voluntary programming work for an open-source project at Computerbank Victoria.

Computerbank is a community organization in my local area which takes donated computer-hardware, installs Linux on it, and supplies these systems to financially disadvantaged individuals and to other community groups. The code and other documentation for the programming project I am involved with for them is hosted on google's 'googlecode' subversion repository at http://code.google.com/p/djangodb/

More information about Computerbank Victoria can be found at http://vic.computerbank.org.au/

Some Links

http://www.ode.org/ode-latest-userguide.html

http://wiki.k-3d.org/Plugin_Design

http://lollipop.wordpress.com/about/curriculum-vitae/

http://vic.computerbank.org.au/Members/jan/building-computerbanks-database-with-django