User:Wendy:Rewind

From K-3D


This page has not been properly edited - it is just in the form of notes at the moment - but may contain material of interest



Rewind and Timing

Tim Shead March 25:

"Wendy:

I forgot to mention earlier - if you put together a proposal on this topic, you will need to address a fundamental difference between a tool like K-3D and the typical game - in K-3D, time does not monotonically increase - time is a floating-point value that can change in arbitrary ways ... think of an artist "scrubbing" the current-time back-and-forth using a slider control. Even when rendering an animation, the delta between frames may change, due to skipped frames, shutter controls, time super-sampling, etc.

I believe some tools handle this by running a special "dynamics pass" and caching the results. It would be nice if we could avoid this."


Blender has two 'modes' when it comes to physics simulation.

  1. 'Animation Preview' mode.
  2. 'Simulation Baking' mode.

The 'Simulation Baking' facility allows a physics animation to be 'recorded' to an IPO.

An IPO seems to be Blender's equivalent of an 'Animation Track', such as the one we have in k3d/modules/animation/animation_track.cpp

Thus a good way to go might be to use the same idea. Whilst in 'Preview Mode' a user can run the simulation and get a general idea of what it is doing. If they are interested in a particular run, they can 'bake' that version into a set of keyframes (i.e. an animation track). Once baked, they can rewind and fast-forward as much as they like. If they choose to change something at a particular frame, then when they start the simulation again they will be back in 'preview mode' for a new run-through on this new 'historical track'. They can then bake that run when they are happy with it, and so on.

There are a number of issues surrounding this that need careful thought - for example, what happens to the first 'bake' when the user makes a change and starts a new preview.

Here are a few notes and links:


http://www.blender.org/development/release-logs/blender-240/bullet-physics/

"Simulation Baking

The physics engine does not have to be used strictly for playing games. Once a simulation has been created, it can be baked to IPOs (Blender animation curves) for rendering and modification. Each object's movement and rotation will be recorded as an IPO when this feature is enabled under the game menu. This allows for realistic animation for multiple colliding and falling objects to be built quickly with accurate results.


To create the IPO's a user must setup up their scene for the physics simulation. This includes identifying each object that will be affected as an actor and setting up each's attributes. Then the "Record Game Physics to IPO" option needs to be enabled in the Game menu, then start the game engine by using the "Start Game" in Game menu or pressing the 'P' Key in the 3D window. Each time the simulation is run the objects motion is recorded to IPO's."

http://wiki.blender.org/index.php/Manual/Ipo_Curves_and_Keyframes

"The Ipo Curve Editor allows you to edit the 2D curves that define animation in Blender. The curves represent the edited value in the vertical (Y) axis (location, size, rotation, energy, etc.) and time on the horizontal (X) axis. The rate of change of these values over time can be seen in the slope of the curve."

To clear the IPO data, un-check the "Record Game Physics to IPO" option and rerun the simulation.


http://wiki.blender.org/index.php/Manual/Ipo_Curves_and_Keyframes

"Special Notes on the Time Curve

With the Time curve you can manipulate the animation time of objects without changing the animation or the other Ipos. In fact, it changes the mapping of animation time to global animation time (Linear time IPO). The Time curve is a channel in the Object Ipo.

In frames where the slope of the Time curve is positive, your object will advance in its animation. The speed depends on the value of the slope. A slope bigger than 1 will animate faster than the base animation. A slope smaller than 1 will animate slower. A slope of 1 means no change in the animation, negative power slopes allow you to reverse the animation.

The Time curve is especially interesting for particle systems, allowing you to "freeze" the particles or to animate particles absorbed by an object instead of emitted. Other possibilities are to make a time lapse or slow motion animation. "


Animation Design

I read through the design notes for the animation tracker here:

http://wiki.k-3d.org/Animation_Design

Hints

Hints will *definitely* be useful for physics transform nodes, so that they can tell nodes further down the pipeline whether they change the topology of a mesh, etc.

[ http://wiki.k-3d.org/Hint_Design ]

" Implementation

Several hint classes are defined in k3dsdk/hints.h, and there are a few node implementation classes that are beginning to make use of them, k3d::dev::mesh_source and k3d::dev::mesh_modifier are good examples. "




New Array Based Mesh Design

http://wiki.k-3d.org/Array_Based_Mesh_Design

Relevant bit: "Geometric data in the K-3D pipeline is represented by the k3d::mesh datatype. Each instance of k3d::mesh is a heterogeneous container for the union of all the geometric types supported by K-3D (points, polyhedra, subdivision surfaces, curves, patches, etc)."


Time Design

http://wiki.k-3d.org/Time_Design


Python Docs

http://www.k-3d.com/docs/epydoc/

Visualisation Pipeline

The only limitation on property connections is that properties can only be connected to properties that share the same type - i.e. mesh properties can only be connected to mesh properties, string properties can only be connected to string properties, etc.

Tools

[ http://wiki.k-3d.org/Tool_Design ]

"Tool Design


By design, K-3D plugins do not expose any user interface or interactive behavior - a plugin is a "black box" that exposes some set of properties and interfaces. This ensures that plugins are easy to author, are independent of any particular UI toolkit, and will be compatible with any UI plugins that exist / evolve.

In the Next Generation User Interface, a "Tool" is an object that layers interactive behavior on top of a plugin. As an example, the Move Tool allows the user to click-and-drag in a viewport to interactively move nodes and points. In this case a transformation modifier is doing the work, while the Move Tool in the UI layer is mapping mouse movement to changes in the modifier properties. It is anticipated that some plugins will have dedicated Tools, some plugins may have more than one Tool, and some plugins may have no Tool, in which case their properties will only be editable through the general-purpose Node Properties Panel. Conversely, there is no reason why some special-purpose Tool might not modify the properties of more than one underlying plugin. Tools are not limited to modifying node properties - they may create nodes, make-and-break connections between properties, and destroy nodes, depending on their purpose.

Currently, the following behaviors apply to Tools in the NGUI:

   * There is always only one Tool active at a time.
   * The active Tool applies to all viewports (there are no per-viewport Tools).
   * The default Tool is the Selection Tool.
   * For the majority of Tools, viewport navigation works normally while the Tool is active.
   * In some cases, a Tool may need to override some-or-all viewport navigation events for its own needs.
   * In some cases, a Tool will need to draw graphics and/or text into the viewports, for manipulators, on-screen-display, etc.
   * In some cases, a Tool will need to have hotspots that respond to input events.
   * In some cases, a Tool will need support for interactive picking of on-screen objects from any viewport. Ideally this should also allow for picking objects from the Node List Panel, Node History Panel, Visualization Pipeline, etc.
   * In most cases, a Tool will display a custom cursor or cursors while active."

k-3d Data Containers

Data Container wiki documentation is at: http://wiki.k-3d.org/Data_Containers


For instance, in the animation_track class the time and value property types are typedef'd, via the k3d_data macro, as follows:


  • typedef k3d_data(time_t, immutable_name, change_signal, with_undo, local_storage, no_constraint, writable_property, with_serialization) time_property_t;
  • typedef k3d_data(value_t, immutable_name, change_signal, no_undo, local_storage, no_constraint, writable_property, with_serialization) value_property_t;

existing animation track

I have been looking more closely at the existing code for an animation_track. In order to get a better understanding I decided to deconstruct what is going on in a couple of animation examples first. I chose the anim_track_test one (because it is simple) and the 'dancing cube' one (because I like it :)

anim_track_test.k3d dancing_cube.k3d


I have made a diagram of the node relationships that are important for timing issues using Dia. These are inserted below. For the dancing cube one I skipped the camera and light pipelines as there is already enough going on in the diagram!

They should hopefully help when I start thinking about how my diagrams on the main page (i.e. with the three source meshes and the physics transform node) will fit in with a timing source.

Discussion

dancing_cube.k3d

As far as I can tell, the dancing cube animation works by

  1. Adding some new properties to a PolyCube source node (PhaseX, PhaseY, PhaseZ, ScaleX, ScaleY, ScaleZ)
  2. Using these to either add to or multiply the TimeSource value at various stages in the pipeline.
  3. The previous two things combine to give a Position node. This position node will thus be a function of time, since the time source was used as input early in the pipeline, and also of the Phase and Scale associated with the PolyCube source node.
  4. The Position node is then combined with the mesh shape from the PolyCube source node to create the final PolyCube instance.


Thus the Polycube Mesh source determines the final output in two ways:

    1. It determines the shape of the output mesh
    2. It determines the phase and scale of the 'dancing motion' via the new properties associated with it.

I'm guessing that this was done from within the k3d interface by creating some new properties (the scale and phase) and sticking them onto a PolyCube source node, then connecting them up to a time source. The result was then saved as a .k3d file.

anim_track_test.k3d

The relation between the Mesh sources/instances and the timing source in the anim_track_test works differently than the dancing_cube example.

In this case we have a TimeSource and a Position node both acting as inputs to an AnimationTrack node (specifically, an AnimationTrackDoubleMatrix4).

The output from this is then used as input to a Polycube source node.

Finally, the output_mesh of this Polycube source node is piped to the input_mesh of a Polycube instance node.

In this example there are no 'new properties' on the Polycube Mesh source - all timing-related behaviour is mediated via the animation_track class. This is probably closer to what is wise for the Physics Simulation timing loop.

Like the dancing_cube example, Mesh sources (i.e. PolyCube) will certainly need new properties, but these properties will be things such as the mass and ERP and so on.

*Unlike* the dancing_cube example, however, these properties will not be directly connected to other nodes in the pipeline. Rather, all timing-related issues will be managed by the TimingController plugin, which will be situated in the pipeline in a way similar to where the AnimationTrack sits in this example.

anim_track_test.k3d

Image:K3d_anim_track_test_dependencies.png

dancing_cube.k3d

Image:k3d_dancing_cube_dependencies.png