Re: WEB : Mapping out communal cyberspace

Mike Roberts ([email protected])
Tue, 14 Jun 1994 09:46:58 PDT


On Tue, 14 Jun 1994 10:57:38 -0400 John W. Barrus wrote:

> > Object libraries could be distributed on CDROM in some common format,
> >with symbolic indexing database. If the object is new to the local client,
> >it could request a copy from the server (perhaps not the same one the VRML
> >document resides upon, but a different 'geometry server')

> I'm having a little trouble with this line of thought. I think that one of
> the most interesting things about 3D environments is seeing what new and
> interesting things people have done. If we depended on a pre-defined set
> of objects (which we could view at our leisure from the CD-ROM) and the
> only thing different about the scene is which objects are contained in it
> and how they are arranged, I'm going to get bored fairly quickly.

I don't think that a media caching system necessarily has to be boring.
Generic media caching could be applied to all sorts of data types; texture
maps and object generation algorithms (assuming the existence of a scripting
language) can be stored alongside object models. Some form of inheritance
mechanism for the cached objects, working along similar lines to the attribute
inheritance currently used in MOOs and other places, could greatly aid in
producing interesting scenes from small amounts of data. For example, the
generic teapot object could be subclassed by a user so that its texture map
is pink spots (both the basic teapot model and the pink-spot texture map
reside in the local cache, either on a distribution CD or having been
downloaded automatically at some earlier time).
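
To make that concrete, here's a rough sketch in Python of how attribute
inheritance over locally cached objects might look. The names are invented
for illustration; nothing here is a real VRML or MOO API:

    class SceneObject:
        """An object whose unset attributes fall back to a cached parent."""
        def __init__(self, parent=None, **attrs):
            self.parent = parent   # e.g. the generic cached teapot
            self.attrs = attrs     # local overrides only

        def get(self, name):
            # MOO-style inheritance: local override first, then the parent chain.
            if name in self.attrs:
                return self.attrs[name]
            if self.parent is not None:
                return self.parent.get(name)
            raise KeyError(name)

    # Both pieces of media live in the local cache (distribution CD or an
    # earlier automatic download); describing the subclass costs almost nothing.
    teapot = SceneObject(model="teapot.geom", texture="plain-white.map")
    pink_teapot = SceneObject(parent=teapot, texture="pink-spots.map")

    assert pink_teapot.get("model") == "teapot.geom"        # inherited
    assert pink_teapot.get("texture") == "pink-spots.map"   # overridden

The subclass itself is tiny; all the heavy data is already sitting in the cache.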

With a suitable object coagulation scheme tied to an inheritance mechanism which
allows you to parametrically affect a subclassed model (for example, by
specifying transformation matrices), there is no reason why one could not produce
an interesting two-handled, pink-spotted teapot, stretched parametrically in Y,
all for the transmission of a very, very small amount of data (the teapot
database reference, a handle database reference, a texture map reference,
some model-meshing information (contact points, etc.), elementary
transformations, etc.). In a non-CD (net-download only) based system you would
have to take an initial hit the first time you see a teapot, but subsequent
teapot references would be very inexpensive. Ideally the size/discard rate of
this cache would be tuneable, and it would update if an object's time/date
stamp indicated that the locally cached version was past its sell-by date. The
more cache you have, the faster you run over generic common objects, such
as teapots. Having dealt with getting stuff off CDs, I'd say that for real
performance, a cache several levels deep (memory, disk, and CD) would
probably be essential. Most current interactive titles run quite slowly from
CD. Journeyman, for example, recommends that you copy it **all** to disk for
real performance.
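
As an illustration, the whole stretched, two-handled, pink-spotted teapot
might travel over the wire as something like the following (a Python literal
with made-up field names; just references and parameters, no geometry):

    teapot_reference = {
        "base":      "lib:teapot#3",                    # teapot database reference
        "parts":     ["lib:handle#1", "lib:handle#1"],  # two handle references
        "texture":   "lib:pink-spots#2",                # texture map reference
        "contacts":  [(0.0, 0.5, 0.1), (0.0, 0.5, -0.1)],  # model-meshing info
        "transform": [[1, 0, 0, 0],                     # elementary transformation:
                      [0, 2, 0, 0],                     # stretch by 2 in Y
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]],
        "stamp":     "1994-06-01",                      # for sell-by-date checks
    }

    def cache_is_fresh(local_stamp, reference_stamp):
        # Re-download only when the reference carries a newer stamp than
        # the locally cached copy.
        return local_stamp >= reference_stamp

Everything the references point at is resolved against the local memory/disk/CD
cache, so a scene full of teapots costs almost nothing after the first one.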

Where this type of scheme makes very efficient use of bandwidth is in the
production of things like forests, where you have a zillion very similar trees,
all of which are derived from a basic tree model, with variations in size,
color, position, number of limbs, leaf type, etc. Local caching (regarded as an
optimisation) also potentially gives us the ability to store "native"
(renderactor :) ) format objects and thus avoid the hit of running a text file
through the parser on each initial object reference.
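
A quick sketch of the forest case (again Python, with invented parameters):
the base tree model is cached once, and each instance is just a few numbers:

    import random

    random.seed(1994)
    forest = []
    for _ in range(1000):
        forest.append({
            "base":     "lib:tree#1",   # one shared, locally cached tree model
            "position": (random.uniform(0, 500), 0.0, random.uniform(0, 500)),
            "scale":    random.uniform(0.7, 1.3),
            "limbs":    random.randint(4, 9),
            "leaf":     random.choice(["oak", "birch", "pine"]),
        })
    # Each entry is a handful of bytes; the tree geometry itself is pulled
    # from the cache once and reused for every instance.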

But I think none of this really helps for the exact modeling of physical spaces
(Barcelona is a good example). It really only helps us in modeling
mindscapes of which we have a good **idea** what they look like, and where we are
flexible about them looking perhaps a little different on each machine, which I
believe we are, because of the differences in the machines our clients will run
upon.

Mike (Tamarac)