VRML 2.0 and Behaviours

Chris Hall ([email protected])
Fri, 14 Apr 95 12:32:00 PDT


Just to put a quick version of my two cents into this discussion...
I believe that the "behaviour" pieces of VRML can be divided into two parts.
First, we will need a way of expressing the semantic content of a scene -
i.e. the geometry underneath this separator is an arm, and it is supposed to
be able to move in the following fashion...
Next, we will need a way of driving the models based on this semantic
information - i.e. from time 0-10 seconds raise the arm at the shoulder,
from time 5-15 seconds bend the arm at the elbow, etc.
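As a rough illustration of that split (this is not VRML syntax or a concrete
proposal - Joint, TimedMove and applyBehaviours are names I've made up just
for the sketch), something like the following C++ captures the idea: part 1
says what the arm is and how it may move, part 2 is one possible driver.

#include <stdio.h>

// Part 1: semantic content.  The geometry under some separator is "an arm",
// and these records say how it is allowed to move.
struct Joint {
    const char *name;      // e.g. "shoulder", "elbow"
    float       minAngle;  // degrees
    float       maxAngle;  // degrees
    float       angle;     // current value, set by a driver
};

// Part 2: a driver.  This one is a fixed time schedule, e.g.
// "from 0-10 s raise the shoulder, from 5-15 s bend the elbow".
struct TimedMove {
    Joint *joint;
    float  startTime, endTime;    // seconds
    float  startAngle, endAngle;  // degrees
};

// Advance every scheduled move to time t, clamping to the joint's limits.
void applyBehaviours(TimedMove *moves, int count, float t)
{
    for (int i = 0; i < count; i++) {
        TimedMove &m = moves[i];
        if (t < m.startTime || t > m.endTime)
            continue;
        float f = (t - m.startTime) / (m.endTime - m.startTime);
        float a = m.startAngle + f * (m.endAngle - m.startAngle);
        if (a < m.joint->minAngle) a = m.joint->minAngle;
        if (a > m.joint->maxAngle) a = m.joint->maxAngle;
        m.joint->angle = a;
    }
}

int main(void)
{
    Joint shoulder = { "shoulder", -90.0f, 180.0f, 0.0f };
    Joint elbow    = { "elbow",      0.0f, 150.0f, 0.0f };

    TimedMove schedule[] = {
        { &shoulder,  0.0f, 10.0f, 0.0f, 90.0f },  // raise arm at shoulder
        { &elbow,     5.0f, 15.0f, 0.0f, 60.0f },  // bend arm at elbow
    };

    for (float t = 0.0f; t <= 15.0f; t += 5.0f) {
        applyBehaviours(schedule, 2, t);
        printf("t=%4.1fs  shoulder=%6.1f  elbow=%6.1f\n",
               t, shoulder.angle, elbow.angle);
    }
    return 0;
}

The point is only that the Joint table could live with the geometry in the
file, while the TimedMove table is just one of many possible drivers.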

I haven't gone into much detail here, but I wanted to get people thinking
about the two problems separately. In particular, as Don Brutzman pointed
out at the Symposium in Monterey, this kind of separation allows the
semantic models to be driven by different mechanisms (including, with the
right protocol, MUD/MUSH/MOOs).
This will also make the standard more easily adopted by people who want to
be able to share the geometry and semantics, but use them in different ways.
Again, this can be the difference between a VRML browser viewing a "static"
dynamic scene which moves in a pre-determined fashion, and some kind of
interactive viewer where some objects are being driven by a sophisticated
server process.
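To make that last point concrete: once the joint model is its own thing, the
code that decides the joint angles becomes pluggable. Another made-up C++
sketch (JointDriver, ScriptedDriver and NetworkDriver are names invented
here, not anything from the VRML discussions):

#include <stdio.h>

class JointDriver {
public:
    virtual ~JointDriver() {}
    // Called once per frame to produce new joint angles for the model.
    virtual void update(float timeSeconds) = 0;
};

// Plays back a pre-determined schedule stored with the scene --
// the "static" dynamic case.
class ScriptedDriver : public JointDriver {
public:
    void update(float t) { printf("scripted: step animation to t=%.1f\n", t); }
};

// Applies updates received from a server process over the network
// (a MUD/MUSH/MOO, a simulation, another user's browser...).
class NetworkDriver : public JointDriver {
public:
    void update(float t) { printf("network: apply latest message at t=%.1f\n", t); }
};

int main(void)
{
    ScriptedDriver scripted;
    NetworkDriver  networked;
    JointDriver   *driver = &scripted;   // a browser could choose either
    driver->update(1.0f);
    driver = &networked;
    driver->update(2.0f);
    return 0;
}

The geometry and semantics stay shared; only the driver changes.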

(Ducking the flames...)
Chris Hall
NeTpower Inc