DIVE is a distributed VR system that's been around for a while. There
are a number of papers available on the Net that describe how they
use a space-based model to "scope" the use of sound (mostly speech audio
in a conference-type environment). For example, their model lets the
system compute the "awareness" one user has of another as their virtual
bodies move closer together. That awareness can then be used to degrade
the audio, so a distant speaker comes across as a quiet mumble or fades
out altogether.
Check out http://sics.se/dce/dive/dive.html for more information.
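To give a flavour of how that might work, here's a rough sketch in
Python. This is not DIVE's actual code (their spatial model derives
awareness from "focus" and "nimbus" rather than raw distance), and all
the names and thresholds below are made up for illustration:

# Sketch only: distance-based awareness used to scale speech audio.
# Function names, the linear falloff, and the thresholds are all
# illustrative assumptions, not part of DIVE.

def awareness(distance, aura_radius):
    """Awareness of one user by another: 1.0 when they coincide,
    falling off linearly to 0.0 at the edge of the aura."""
    if distance >= aura_radius:
        return 0.0
    return 1.0 - distance / aura_radius

def audio_gain(a, mumble_threshold=0.3):
    """Map awareness to a playback gain: intelligible speech at high
    awareness, a barely-audible mumble at low awareness, silence
    outside the aura."""
    if a <= 0.0:
        return 0.0            # out of range: no audio at all
    if a < mumble_threshold:
        return 0.1 * a        # low awareness: just a mumble
    return a                  # intelligible speech, scaled by awareness

# Example: another user 8 metres away inside a 10-metre aura
a = awareness(8.0, 10.0)      # about 0.2
print(f"{audio_gain(a):.3f}") # 0.020, i.e. a quiet mumble

In a real system you'd compute a gain like this per speaker, per
listener, and feed it into the audio mixer as the avatars move around.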
There are also some more papers available from Steve Benford's web
pages at http://www.crg.cs.nott.ac.uk/~sdb/publications.html; possibly
the most relevant one is "From Rooms to Cyberspace: Models of Interaction
in Large Virtual Computer Spaces".
Chris
--
Chris Hand, Senior Lecturer          | Dept of Computer Science,
e-mail: [email protected]               | De Montfort University, The Gateway
WWW: http://www.cms.dmu.ac.uk/~cph/  | LEICESTER, UK LE1 9BH
talk: [email protected]       "In Cyberspace nobody knows you're bald"