> Before working on WebSpace, I worked on InPerson, which is a
> teleconferencing product. Users could join a remote conference with
> audio, video and a shared whiteboard. The part that is interesting to
> this discussion is the shared whiteboard. This is a flat surface upon
> which you can draw, drop images and 3D objects, and type text.
> Whiteboards on all machines display the same thing. If I mark on my
> whiteboard, I see it locally, and you see it remotely. Additionally,
> everyone sees everyone else's cursor position in the form of a unique sprite.
> When I talk about a 3D object that got placed on the whiteboard, I can
> move my cursor around it and everyone can see what I'm talking about.
> This seems like a 2D version of the same problem we're trying to solve.
Division's dVS does *exactly* this for 3D objects.
> Here's how we could do it:
> [snip .... ]
> In InPerson, we encountered issues of latency and baton passing to make
> operations atomic, but the basic protocol was very solid.
>
> Note that this deals only with the issue of shared presence in a world and
> not connectivity between worlds. I see that as a separate and orthogonal
> issue.
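One way the baton-passing idea mentioned above might look in code; this is my own minimal sketch (all class and method names invented here, not taken from InPerson) of serialising edits through a single baton holder:

```python
# Hypothetical baton-passing sketch: one site holds the baton at a time,
# and only that site may mutate the shared whiteboard state, so concurrent
# edits collapse into one global order seen identically by every replica.

class Whiteboard:
    def __init__(self, members):
        self.members = list(members)
        self.baton = self.members[0]   # initial baton holder
        self.ops = []                  # replicated, ordered operation log

    def request_baton(self, member):
        # in a real system this is a network round-trip to the current
        # holder; here we grant it immediately
        self.baton = member

    def apply(self, member, op):
        # atomicity rule: acquire the baton before mutating shared state
        if member != self.baton:
            raise RuntimeError("acquire the baton first")
        self.ops.append(op)   # would be broadcast to all replicas in order
```

The cost of this scheme is exactly the latency issue the poster mentions: every edit by a non-holder pays a round-trip to fetch the baton first.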
dVS takes this on also - we have a concept of Zones, which are simply
collections of objects (visual, audio, etc.). Any user's presence
(avatar) can exist in a zone (or in many, since zones can overlap). dVS
agents across the network register in zones; this can be controlled by
the user's location, or by the selection of a hyperlink, for example.
Registering in a zone results in the zone 'owner' (the VRML server in
this case) serving the zone contents to the requestor. Once this basic
connection and exchange has happened, the agents remain in
communication, passing changes to the zone, avatars, etc. between
them. Once the connection is broken, the remote agent can do what it
likes with the data (we cache it at present).
This method of specific registration of interest in objects keeps the
update requirements low. Our scene description format also includes
basic animation functionality, so simple object behaviour and
interaction (if touched, do this...) can be passed between agents and
executed locally. As it is all time-based, remote viewers see the
same world.
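The "executed locally, time-based" point can be made concrete with a toy example (entirely my own construction, assuming behaviours are deterministic functions of time): only the trigger crosses the wire, and every site replays the same animation from the same trigger time.

```python
# Sketch of a locally-executed, time-based behaviour. Because the
# behaviour is a deterministic function of elapsed time, every agent
# that received the same trigger computes the same pose each frame.

def spin(state, t):
    # 90 degrees per second, purely a function of elapsed time t
    state["angle"] = (t * 90.0) % 360.0

class SharedObject:
    def __init__(self):
        self.state = {"angle": 0.0}
        self.behaviour = None
        self.trigger_time = None

    def touch(self, now):
        # "if touched, do this": the trigger (object id + timestamp)
        # is all that needs to be passed between agents
        self.behaviour, self.trigger_time = spin, now

    def tick(self, now):
        # run locally on every site, every frame
        if self.behaviour is not None:
            self.behaviour(self.state, now - self.trigger_time)
```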
There are problems with latency, and these need to be overcome. We
have adopted the dead-reckoning ideas used in DIS to help reduce
bandwidth requirements, and also to allow some motion prediction that
overcomes the lag inherent in tracking and rendering systems.
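A minimal sketch of the DIS-style dead reckoning referred to above (thresholds and function names are illustrative): each site extrapolates remote entities from the last reported position and velocity, and the owning site sends a fresh update only when its true path diverges too far from what the others are predicting.

```python
# Minimal dead-reckoning sketch in the DIS spirit: linear extrapolation
# plus a divergence threshold that gates new state updates.

def extrapolate(pos, vel, dt):
    # first-order (linear) prediction; DIS also defines higher-order models
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, predicted_pos, threshold=0.5):
    # the owner sends a new state packet only when the prediction error
    # exceeds the threshold, which is what saves bandwidth
    err = sum((a - b) ** 2 for a, b in zip(true_pos, predicted_pos)) ** 0.5
    return err > threshold
```

The same extrapolation doubles as the motion prediction mentioned above: rendering the predicted pose for "now" rather than the last received pose hides some of the tracking and network lag.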
>>-------------------------------------<<
Steve Ghee ( [email protected] )
Director of Technology
Division Ltd
19 Apex Court
Woodlands
Almondsbury
Bristol, UK
BS12 4JT
Tel : +44 1454 615554
Fax : +44 1454 615532
>>-------------------------------------<<