I am relatively new to the list, but I've read through the archive and
several days' worth of messages. I have not seen any discussion of 3D
binocular stereo viewing, so I thought I would post to the list and see
what type of reaction I got (always a dangerous thing, but hey, you only
live once! :-)
I am involved with a research project here at Carnegie Mellon examining
applications for high-definition 3D stereoscopic displays (think HDTV).
We are very interested in any standards for 3D scene descriptions, which is
how VRML caught my eye. The type of research we are doing is not geared
toward full-immersion VR, but toward "VR in a box", where a standard
terminal is a window into a stereoscopic 3D world.
In the simple case, creating binocular stereo computer-generated images
requires two perspective drawings, one for each eye (or camera). However,
it really is not that simple. There is normally one modelling matrix for a
scene, and two projection matrices that are linked to create a binocular
viewing system. For example, you can't take out one of your eyeballs to
look at an object from a different position; both eyeballs move together
as a pair.
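To make the "two linked projections" point concrete, here is a minimal sketch (the function name, the fixed interocular distance, and the coordinate conventions are my own assumptions, not anything from the VRML spec): a single shared head pose yields both eye positions, offset symmetrically along the head's "right" axis, while the modelling matrix for the scene stays the same for both renderings.

```python
import numpy as np

IOD = 0.065  # assumed interocular distance in meters (illustrative)

def eye_positions(head_pos, head_right):
    """Both eyes move together as a pair: offset the shared head
    position along the head's normalized 'right' vector by +/- half
    the interocular distance. Returns (left_eye, right_eye)."""
    head_pos = np.asarray(head_pos, dtype=float)
    right = np.asarray(head_right, dtype=float)
    right = right / np.linalg.norm(right)
    half = 0.5 * IOD * right
    return head_pos - half, head_pos + half

# One modelling matrix is shared by both eyes; only the two
# (linked) projection setups differ.
left, right = eye_positions([0.0, 1.6, 0.0], [1.0, 0.0, 0.0])
```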
One of our (more obvious) results is that there is a unique
geometrically-correct rendering of a scene for each viewer. For example,
if you and the person next to you are viewing the same object, you each see
it differently. There are 12 parameters to describe an arbitrary
binocular viewing system -- the position and the orientation of the eyes
(6 parameters) relative to the terminal, and the position and orientation
of a screen in the virtual world (the screen is where the rendering takes
place and is the terminal in the real world). One of these degrees of
freedom can be eliminated if you assume that the eyes are separated by a
fixed distance. The monocular viewing case has 9 parameters -- 6 for the
position and orientation of the screen, and 3 for the single eye. Unless I
am mistaken, the specification for VRML provides only 9 parameters for the
viewing system. (This is a very brief summary. People interested in more
details can check out our homepage (with links to research papers) at
http://www.cs.cmu.edu/afs/cs/project/sensor-9/ftp/www/homepage.html)
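The parameter counting above can be written out explicitly. A sketch (all field names are my invention; only the counts come from the text): 6 numbers place and orient the eye pair relative to the terminal, and 6 place and orient the screen in the virtual world, for 12 in all.

```python
from dataclasses import dataclass, fields

@dataclass
class BinocularView:
    # Eye pair relative to the terminal: position (3) + orientation (3).
    eye_x: float
    eye_y: float
    eye_z: float
    eye_roll: float
    eye_pitch: float
    eye_yaw: float
    # Screen in the virtual world: position (3) + orientation (3).
    screen_x: float
    screen_y: float
    screen_z: float
    screen_roll: float
    screen_pitch: float
    screen_yaw: float

# 12 parameters in total; assuming a fixed eye separation removes one
# degree of freedom (11 free). The monocular case is 6 for the screen
# plus 3 for the position of the single eye = 9.
n_params = len(fields(BinocularView))
```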
Some of the capabilities for accurate geometrically-correct binocular
rendering are obviously browser dependent. For example, head (eye)
tracking is necessary to generate correct scenes, and the viewer's real
eye position cannot be specified in any viewing language. However,
simplifying assumptions can be made (e.g., the viewer is always aligned with
the center of the screen).
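A sketch of that simplifying assumption (the function, its parameters, and the default values are illustrative, not from any spec): without head tracking, a browser could simply place the eye pair on the screen's center axis at a fixed viewing distance.

```python
def assumed_eye_positions(screen_w, screen_h, distance, iod=0.065):
    """Assume the viewer is centered in front of the screen, facing it
    perpendicularly. Returns (left_eye, right_eye) as (x, y, z) in
    screen coordinates with the origin at the screen's lower-left
    corner and z pointing toward the viewer."""
    cx, cy = screen_w / 2.0, screen_h / 2.0
    return ((cx - iod / 2.0, cy, distance),
            (cx + iod / 2.0, cy, distance))

# e.g. a 0.4 m x 0.3 m display viewed from 0.6 m away:
left, right = assumed_eye_positions(0.4, 0.3, 0.6)
```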
I do not want to make any recommendations at this time, but wanted to
introduce myself, describe a little of our research, start some discussion
about binocular stereo, and possibly get some people thinking about it. If
VRML exists because the world is not flat, why render the scene on a flat
piece of paper when you can do it in stereo and see real depth! :-)
Scott Safier Robotics Institute
internet: [email protected] Carnegie Mellon Univ.
http://www.cs.cmu.edu/afs/cs/user/scotts/www/homepage.html
check out my new graphical homepage!