Rather than hashing these points out abstractly, let me give a simple example.
If we were to look for the most stereotypical VR-esque scene sequence from
Snow Crash, True Names, or others of the genre, it might involve a virtual
cityspace with the navigator first flying in a survey view from above, then
sweeping down to street level amongst the virtual buildings, and finally
entering a particular establishment through transparent doors. It seems this
scenario should be straightforwardly supported by VRML, and yet I find it
breaks down on several levels.
First of all, with the navigator standing at street level in the cityspace, we
would expect the various buildings to be independently owned, with an
appearance determined largely by the owner, and with a straightforward and
consistent interface to the surrounding public infrastructure (parking lots
attached to streets, sidewalks to building entries, etc.). The VRML analogues
here would seem to demand that private and public building and infrastructure
representations be maintained independently across the distributed Web space
while simultaneously being linked seamlessly together in the spatial
environment.
The requirement here is consistent with cal's comments on 9/16 re:
> ... I have seen no discussion of the "scene" itself as an
> object in the scene, i.e., the background that holds the objects together,
> sets the basic scale for the display itself, and shows the objects'
> relationships to each other. I think it needs to be considered.
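To make the composition concrete, a public street-scene file might look
something like the following. The syntax is Inventor-style ASCII, and the
SpatialInline node is pure invention on my part (nothing like it exists in the
current draft) -- it stands for whatever construct ultimately pulls a remote
scene into the local one:

    # Public infrastructure scene -- maintained by the city/street authority
    Separator {
        Separator {    # roadway, sidewalks, parking
            Cube { width 200  height 0.1  depth 10 }
        }
        Separator {    # lot 1: placement is public, appearance is the owner's
            Translation { translation -40 0 15 }
            SpatialInline { name "http://somecorp.com/hq/building.wrl" }
        }
        Separator {    # lot 2: another owner, another server
            Translation { translation 25 0 15 }
            SpatialInline { name "http://vpizza.com/shop.wrl" }
        }
    }

The point is that the public file owns only the lot transforms and the shared
infrastructure, while each building's representation is maintained on its
owner's server and composed in place. (The server names and URLs are of
course made up.)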
Similarly, while it's clearly advantageous to see a building's entryway and
other surface features from a distance of 10 meters, it's clearly
*dis*advantageous to have all building features represented at the kilometer
fly-over scale, as we quickly reach intractability from a graphical
perspective, not to mention the net-bandwidth limitations stemming from the
distributed nature of the constituent scene-space elements.
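The obvious direction here is some form of level-of-detail switching.
Inventor's SoLevelOfDetail suggests the shape of it, though a VRML version
would also need to gate the *fetching* of remote elements, not merely their
rendering. Illustratively (screenArea thresholds per the Inventor node;
SpatialInline is again my own invention):

    Separator {
        LevelOfDetail {
            screenArea [ 5000, 100 ]   # projected pixel-area thresholds
            Separator {                # > 5000 pixels: fetch full facade detail
                SpatialInline { name "http://somecorp.com/hq/full.wrl" }
            }
            Separator {                # 100 - 5000 pixels: local massing model
                Cube { width 30  height 60  depth 30 }
            }
            Separator {                # < 100 pixels (fly-over): draw nothing,
            }                          # and fetch nothing
        }
    }

Note that at fly-over scale nothing remote is ever requested, which is exactly
the behavior the bandwidth picture demands.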
That's a quick example of the distributed-elements/spatial-continuity
concern. Equally fundamental is the set of bindings between the spatial
representation and the information itself. Granted, some spaces may be
constructed for their entertainment value, but at least as often it seems
visual spaces would be designed as semantic carriers for information best
viewed in native form (e.g., traditional HTML/Web content). If I wish to
cross-link the visual/acoustical/etc. representations of the aforementioned
city buildings with some non-spatial HTML multimedia presenting a more prosaic
expression of a building's content/meaning/etc., how am I to express that
linkage?
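The proposed WWWButton is presumably aimed at this. Reading it as an
SoSeparator-like group (more on the alternative reading below), the usage I'd
want is roughly:

    WWWButton {                 # speculative usage -- the proposal's syntax
                                # and fields aren't settled; "name" is my guess
        name "http://somecorp.com/about.html"    # the non-spatial HTML content
        Material { diffuseColor 0.8 0.7 0.5 }
        Cube { width 30  height 60  depth 30 }   # the building itself; picking
    }                                            # it follows the link

That is, the geometry itself serves as the anchor, with no additional
on-screen apparatus.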
If the proposed WWWFile is an information-association analogue of SoMaterial
and WWWButton is an SoSeparator-like grouping equivalent of the same, then it
seems another construct is needed for expressing a seamlessly-incorporated
spatial-continuation to a remote VRML scene. (This is actually quite
analogous to Mosaic's early support for inline graphics.) And if
WWWFile/WWWButton are intended not as attributes devoid of representation but
instead as text or iconic nodes or groups with a direct screen representation,
then (a) shudder, and (b) both the node-attribute and scene-continuation
constructs are separately needed. And this is not to bash WWWFile/WWWButton --
they're the only concrete attempts to relate visualization and content that
I've seen. In any case, a consistent addressing of these issues for VRML
isn't immediately clear to me.
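For what it's worth, the spatial-continuation construct I have in mind needs
little more than a URL plus a locally available bound, so a browser can place
and cull (and defer fetching) a remote sub-scene before a single byte of it
arrives:

    SpatialInline {             # hypothetical, as in the sketches above
        name       "http://nextcity.org/downtown.wrl"
        bboxSize   500 200 500  # stand-in bounding box: lets the browser
        bboxCenter 0 100 0      # lay out and cull before/without fetching
    }

The bounding-box fields are my own guess at the minimum needed; the essential
property is that the remote scene appears as seamless geometry, not as a
document boundary.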
Are these issues of concern to others? Are others on the list working on
developments addressing these particular concerns? Have I missed threads
keying on these issues? I've begun addressing the spatial-continuity and
common-environment distributed-element challenges over multiple spatial scales
(presented in a paper net-linked as
gopher://gopher.math.scarolina.edu/00/computer/pgopher/refs/multiscale.ps
-- I'll append the abstract below), and I have worked on several
implementations linking Inventor visual representations to WWW-based content,
so I would be especially interested to learn of other work in the area.
Thanks!
Brygg ([email protected])
--- Abstract for "Multiscale Spatial Architectures for Complex Information Spaces" ---
URL: gopher://gopher.math.scarolina.edu/00/computer/pgopher/refs/multiscale.ps

Traditional browsing and querying information-navigation techniques lose
valuable contextual information by artificially partitioning information
spaces. We introduce an architecture supporting the presentation of
spatialized information which preserves spatial continuity (eliminating
arbitrary partitioning) while maintaining support for unbounded volumes of
distributed information. The multiscale structure enabling this functionality
is discussed, and the role of spatial continuity in realizing both the
presence and relation of dynamic information is described. Finally, the
possible union between new spatialized information technologies and mainstream
Internet distributed information systems is explored.