Hello,
I'd also like to discuss Conal Elliott's answer to Brian Park's
question a bit further. I'm a graphics researcher at Microsoft who's
been loosely associated with the ActiveVRML effort for quite some time.
There are really two separable issues: 1) time is manipulable, and 2)
time is implicit. Conal has pointed out that AV certainly has
considered the first capability important and has provided several
mechanisms to achieve it. The second issue is a deeper question and
indicates how AV has departed from older designs.
In older animation paradigms, time is either entirely missing or it is
explicit. Where the notion of time is missing, it must be laboriously
reconstructed by each application author, who simulates it via some
sort of animation loop. Here the programmer must carefully put
everything into a single thread that samples events, edits static scene
structures, and notifies the drawing engine to update the visible
frame. Notions of slow-in/slow-out, jump cuts, etc. must be manually
constructed from program flow control. This really isn't an animation
system at all, but just a static scene description language pasted
together with a general purpose programming language.
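To make that concrete, here is a minimal Haskell sketch of the kind of
hand-rolled loop I mean. The Scene type and the sample/edit/redraw
helpers are placeholders I made up for illustration, not any particular
system's API:

    import Data.IORef
    import Control.Monad (forever)

    -- A stand-in for the static scene structure the loop keeps editing.
    data Scene = Scene { ballX :: Double }

    -- The single thread: sample the clock and events, destructively
    -- edit the scene, then tell the drawing engine to redraw the frame.
    animationLoop :: IORef Scene -> IO ()
    animationLoop sceneRef = forever $ do
      t  <- sampleClock
      es <- sampleEvents
      modifyIORef sceneRef (editScene t es)
      readIORef sceneRef >>= redraw

    -- Placeholder helpers; a real system supplies its own versions.
    sampleClock :: IO Double
    sampleClock = return 0.0

    sampleEvents :: IO [String]
    sampleEvents = return []

    editScene :: Double -> [String] -> Scene -> Scene
    editScene t _ s = s { ballX = sin t }   -- slow-in/slow-out etc. by hand

    redraw :: Scene -> IO ()
    redraw s = putStrLn ("draw ball at x = " ++ show (ballX s))

All of the temporal structure lives in the control flow of that loop,
which is exactly the problem.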
When time is explicit, it must be explicitly managed--even when the
most common cases could be handled automatically. Conal's reference
to temporal modeling coordinates and time transforms refers to a
historical shift in a way of thinking about computer graphics related
to coordinate systems. Basically, in the early days of making pictures
with computers, there was no computer graphics, only computer plotting.
The predominant API was explicitly tied to emitting
drawing commands. In those early systems, there was only one
coordinate system--the device coordinate system--and every drawing
command had to be specified in terms of it. The coordinate system was
explicitly managed in that a cylinder drawing subroutine would include
position and orientation arguments to say where and how the cylinder
would be drawn.
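A hypothetical sketch of that plotting style (the routine and its
device-coordinate arguments are invented for this example, not taken
from any real API):

    type Position    = (Double, Double, Double)
    type Orientation = (Double, Double, Double)

    -- Every call says where and how the cylinder goes, in the one and
    -- only coordinate system; the caller manages coordinates by hand.
    drawCylinder :: Position -> Orientation -> Double -> Double -> IO ()
    drawCylinder pos orient radius height =
      putStrLn ("emit cylinder at " ++ show pos ++
                ", oriented " ++ show orient ++
                ", r = " ++ show radius ++ ", h = " ++ show height)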
By moving the management of the coordinate system from an explicit,
programmatically controlled task to one involving concatenation on a
matrix stack, it was possible to eliminate entirely the position and
orientation arguments. Given this and other context, the cylinder
drawing subroutine would then have no arguments at all. This allowed
us to conceive of it as an *object*, not as a cylinder drawing routine
with no arguments. Furthermore, by expanding the notion of
transformations to full matrix operations, we could think of the
cylinder as a *constant*, not as an object with editable state.
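Roughly, in this data-structure view (again a sketch of my own, with
invented names), the cylinder takes no placement arguments at all,
because the coordinate system is supplied by the transforms wrapped
around it:

    type Matrix = [[Double]]                  -- 4x4 homogeneous transform

    data Geometry = Cylinder                  -- the unit cylinder
                  | Transform Matrix Geometry -- concatenate a transform
                  | Union [Geometry]

    cylinder :: Geometry
    cylinder = Cylinder                       -- a constant, not a command

    translate :: Double -> Double -> Double -> Geometry -> Geometry
    translate x y z =
      Transform [[1,0,0,x],[0,1,0,y],[0,0,1,z],[0,0,0,1]]

    scene :: Geometry
    scene = Union [ cylinder                  -- one at the origin
                  , translate 2 0 0 cylinder  -- the same constant, placed
                  ]                           --   by its context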
One of the key insights that early computer graphics researchers had
was that we shouldn't be in the business of emitting drawing commands,
but in the business of building data structures which have a visible
manifestation. By doing so, we could consider those objects as the
central things to specify.
Making time implicit does the same thing for behaviors. We can then
think of behaviors not as some abstract wraith that suffuses through a
piece of program code, but as an explicit, concrete thing which we can
manipulate. Thus the motivation for AV is to promote time and behavior
to manipulable objects just as concrete as geometry is in traditional
graphics systems.
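To give the flavor in code, here is a toy Haskell model of time-implicit
behaviors. This is my own illustration, not AV's actual syntax or API:
a behavior is just a value that happens to be a function of time, and
authors combine whole behaviors rather than sampling a clock in a loop.

    newtype Behavior a = Behavior (Double -> a)

    time :: Behavior Double
    time = Behavior id                  -- the one primitive exposing time

    lift1 :: (a -> b) -> Behavior a -> Behavior b
    lift1 f (Behavior a) = Behavior (f . a)

    lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
    lift2 f (Behavior a) (Behavior b) = Behavior (\t -> f (a t) (b t))

    -- Because time is reified, it is also manipulable: warp a
    -- behavior's notion of time with another behavior.
    timeTransform :: Behavior a -> Behavior Double -> Behavior a
    timeTransform (Behavior a) (Behavior warp) = Behavior (a . warp)

    wiggle :: Behavior Double
    wiggle = lift1 sin time             -- an animated number, as a value

    slowWiggle :: Behavior Double
    slowWiggle = timeTransform wiggle (lift1 (/ 2) time)  -- half speed

Note that nothing here ever samples the clock; whole behaviors are
built, named, and transformed just like any other value.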
Furthermore, there's a big payoff in thinking of behavior as a *value*,
that is, as the thing denoted by a constant like 3, instead of as an
object with editable state. As this is a deep idea that would take much
space to fully explain (it's really the conceptual kernel that motivates
the functional programming community to refashion general purpose
programming), perhaps it would be better to mention one surface effect
of this principle. Using values instead of objects makes shared,
distributed programs--the very www and VRML applications of the future
that people are excited about--very easy, because we don't have to
synchronize the update of state in disparate locations.