Re: MISC: Inlined Sound Support

Andrew C. Esh ([email protected])
Mon, 17 Apr 1995 08:48:36 -0600 (CST)


On Fri, 14 Apr 1995, Ross Carlson wrote:

> >I'm curious as to people's ideas on inline sound. I read an idea
> >about its volume being related to its distance from you, but
> >what about direction? How difficult would it be to send the
> >"sound waves" in a specific direction? So if you walk up
> >behind a radio, it's not as loud as when you walk in front of it.
>
> Hello everyone. Let me start by saying that I, Ross Carlson, have been
> lurking here for quite some time, and the work that you are all doing is
> absolutely FASCINATING, to say the least. The possibilities of this are
> immense.
>
> About the inline noise thing: how hard would it be to code the browser
> to check for obstacles in the path of that noise? The relative size of
> any obstacles in that path would diminish the sound volume accordingly.
> The radio itself would be one obstacle, so the sound would appear
> louder when the user stands in front of the radio. Of course, the
> browser would have to know which side of the radio object is the front.

Maybe we could simplify this a bit by considering sound as particles, each
with a frequency attribute, and with their color or visibility attribute
set to "invisible" or "transparent". They could be plotted like normal
objects, and moved. The speed and direction of movement would cause the
browser to add small changes to the pitch of the sound. If a particle
bumps into something with an absorbent surface, it is destroyed; otherwise
it is reflected, with an appropriate reduction in volume (reflected sound
is softer).
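
To pin the idea down, here is a very rough sketch in C++ of what such a
particle might look like and what one simulation step would do to it.
Everything in it (the names, the 0.5 reflection loss, the way collisions
are reported) is invented for illustration; the real collision test would
belong to the environment server, not the browser.

// A rough sketch only; all names and numbers are invented for
// illustration, not taken from any VRML draft.

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct SoundParticle {
    Vec3  position;   // plotted like a normal object, but never drawn
    Vec3  velocity;   // direction and speed of travel
    float frequency;  // pitch carried by the particle
    float amplitude;  // current volume contribution
    bool  alive;      // false once absorbed
};

struct Surface {
    bool absorbent;   // absorbs or reflects incoming particles
    Vec3 normal;      // unit normal, used to mirror the velocity on reflection
};

// Mirror a velocity about a surface normal: v' = v - 2(v.n)n
static Vec3 reflect(Vec3 v, Vec3 n)
{
    return add(v, scale(n, -2.0f * dot(v, n)));
}

// Advance one particle by one time step.  'hit' is whatever surface (if
// any) the environment server's collision test reports for the new
// position; the collision test itself is left out of the sketch.
void stepParticle(SoundParticle& p, const Surface* hit, float dt)
{
    p.position = add(p.position, scale(p.velocity, dt));
    if (hit) {
        if (hit->absorbent)
            p.alive = false;                     // absorbed: particle destroyed
        else {
            p.velocity   = reflect(p.velocity, hit->normal);
            p.amplitude *= 0.5f;                 // reflected sound is softer
        }
    }
}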

A sound source would be a point and a direction (a vector) along which
particles are introduced into the environment, just as the end of a
firehose is a vector along which water is introduced.
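
Sticking with the same hypothetical sketch (and reusing the Vec3 and
SoundParticle types from above), the source itself would need very
little: a position, a direction, and an emit step that sprays particles
with a small spread around that direction. The jitter amount and particle
count are again just placeholders.

#include <cstdlib>
#include <vector>

// Hypothetical sound source: a point and a direction, like the end of
// the firehose.  Reuses Vec3 and SoundParticle from the sketch above.
struct SoundSource {
    Vec3  position;
    Vec3  direction;  // main direction of emission
    float frequency;  // pitch given to each emitted particle
    float loudness;   // initial amplitude given to each emitted particle
};

// Spray 'count' particles per time step, each jittered a little off the
// main direction so the sound spreads out as it travels.
std::vector<SoundParticle> emit(const SoundSource& src, int count)
{
    std::vector<SoundParticle> out;
    for (int i = 0; i < count; ++i) {
        float jx = (std::rand() / (float)RAND_MAX - 0.5f) * 0.2f;
        float jy = (std::rand() / (float)RAND_MAX - 0.5f) * 0.2f;
        float jz = (std::rand() / (float)RAND_MAX - 0.5f) * 0.2f;
        Vec3 dir = { src.direction.x + jx,
                     src.direction.y + jy,
                     src.direction.z + jz };
        out.push_back({ src.position, dir, src.frequency, src.loudness, true });
    }
    return out;
}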

I know this will rankle those of you with experience in physics, but I
don't think we want to tackle the concept of sound waves, since then we'd
have to model all the air particles, and add another sort of object to
define the wave, and what it does to the air particles.

This may also be a prohibitively computation-intensive solution, since
what I am suggesting is that the environment server ray trace all the
sound. It would make the browser simple, though. The browser would only
have to add up all the sound particles that reached your virtual position,
and shift each particle's frequency according to its velocity relative to
you. The result is one time period of sound, ready to be sent to the
sound system.
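
Still in the same made-up sketch, the browser's whole job could then be a
single pass over the particles that arrived at the listener this time
period: add up their amplitudes, and nudge each pitch by the particle's
speed toward (or away from) the listener, which is just a crude Doppler
shift. The speed of sound and the mixing model below are assumptions for
illustration only.

#include <cmath>
#include <vector>

// One time period of sound, ready to hand to the sound system.
// Reuses Vec3, SoundParticle, and dot() from the earlier sketch.
struct AudioFrame {
    float volume;     // summed amplitudes of the arriving particles
    float frequency;  // amplitude-weighted average pitch, Doppler-shifted
};

AudioFrame mixAtListener(const std::vector<SoundParticle>& arrived,
                         Vec3 listener)
{
    const float speedOfSound = 343.0f;   // assumed metres per second
    AudioFrame frame = { 0.0f, 0.0f };

    for (const SoundParticle& p : arrived) {
        if (!p.alive)
            continue;

        // Component of the particle's velocity toward the listener.
        Vec3 toward = { listener.x - p.position.x,
                        listener.y - p.position.y,
                        listener.z - p.position.z };
        float dist     = std::sqrt(dot(toward, toward));
        float approach = dist > 0.0f ? dot(p.velocity, toward) / dist : 0.0f;

        // Crude Doppler shift: pitch rises as the particle approaches.
        float shifted = p.frequency * (1.0f + approach / speedOfSound);

        frame.volume    += p.amplitude;
        frame.frequency += shifted * p.amplitude;
    }
    if (frame.volume > 0.0f)
        frame.frequency /= frame.volume;  // weighted average pitch
    return frame;
}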

Has anyone considered adding a MIDI track to the environment? Movies have
background music; shouldn't VRML? If we do, then we'd have to define
attributes for an "area". Other such attributes could be smell,
temperature, visibility (fog), light, vibration, radiation, atmosphere,
and so on.
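
If an "area" node like that ever existed, its attributes might amount to
nothing more than a small bag of ambient fields. Everything in the sketch
below is made up purely to show how compact such a node could be; none of
it comes from any VRML draft.

// Hypothetical per-area ambience attributes, invented for illustration.
struct AreaAmbience {
    const char* midiTrackURL;   // background music for the region
    float       temperature;
    float       fogDensity;     // visibility
    float       ambientLight;
    float       vibration;
    float       radiation;
    const char* atmosphere;     // e.g. "thin", "breathable"
    const char* smell;
};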

---
Andrew C. Esh                 mailto:[email protected]
Computer Network Technology   [email protected] (finger for PGP key)
6500 Wedgwood Road            612.550.8000 (main)
Maple Grove MN 55311          612.550.8229 (direct)
<A HREF="http://www.mtn.org/~andrewes">ACE Home Page</A>