Re: VRML issues

Kevin Goldsmith ([email protected])
Mon, 13 Jun 1994 15:28:09 -0700


Ok, I'll try an example to show what I mean:

I'm flying over a landscape. This landscape has buildings and
mountains, etc... each located at a different place on the net. Now, let's say
that we have a really smart culling and level of detail algorithm, so that I
don't even try to see things too far away from me, and I see the things close
to me in not much detail (say, cubes) until I get closer. Well, for each
"frame" I see (and VR isn't worth it at less than 10-15 frames per second), I
have to look up on a remote map (I won't have my own map, because it is likely
to change too often) to find out what I can see, cull out the things I can't
see and then grab descriptions of areas off the net as I get closer to them.
Let's even say that my system is smarter than that, so it only grabs a little
bit per frame (only one building or mountain). This is still not a minor
amount of data grabbing per frame, given that I might be on a PowerBook
100 running SLIP over a 14.4 modem to Netcom. Even on a 4-processor
Onyx Reality Engine 2, I would not get 30 fps, due to the network lag.
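The per-frame loop described above can be sketched roughly as follows. This is only an illustration of the idea, not anything from the VRML proposal itself; every name here (`frame`, `fetch_description`, the range constants) is invented, and the network fetch is a stand-in:

```python
import math

VIEW_RANGE = 100.0     # beyond this distance, cull the object entirely
DETAIL_RANGE = 25.0    # inside this distance, we want the full description

def fetch_description(obj):
    """Stand-in for grabbing an object's full description off the net."""
    return "full-detail-model"

def frame(viewer_pos, remote_map, cache):
    """One frame: cull, pick level of detail, fetch at most one
    missing description so the per-frame network cost stays bounded."""
    visible = [obj for obj in remote_map
               if math.dist(viewer_pos, obj["pos"]) <= VIEW_RANGE]
    fetched_this_frame = False
    scene = []
    for obj in visible:
        d = math.dist(viewer_pos, obj["pos"])
        have_detail = obj["id"] in cache
        # only one remote grab per frame, as in the example above
        if d <= DETAIL_RANGE and not have_detail and not fetched_this_frame:
            cache[obj["id"]] = fetch_description(obj)
            fetched_this_frame = True
            have_detail = True
        if have_detail and d <= DETAIL_RANGE:
            scene.append((obj["id"], cache[obj["id"]]))
        else:
            scene.append((obj["id"], "cube"))  # stand-in until detail arrives
    return scene
```

Even with the one-fetch-per-frame cap, every frame still pays for the remote map lookup and whatever fetch is pending, which is the latency Kevin is pointing at.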

The PowerBook is the actual setup I have at home, and it works fine for
Mosaic, so I expect it should work just as well for vrml.

Kevin

On Jun 13, 3:35pm, Mark Waks wrote:
> Subject: Re: VRML issues
> Kevin writes:
> >Unlike html, which is not real-time, vrml cannot have latency issues. The
> >net will never be fast enough to expect to continually build a 3D scene while
> >grabbing data over a net of questionable bandwidth. So when I enter a vrml
> >document and traverse a link to another vrml document, I assume that I will
> >have to load that entire vr space into my machine.
>
> I question this assumption. *If* our definition of a space is a full
> description of the image, with all the details, then you're right that
> it'll be a while before the bandwidth catches up (saying that the net
> will *never* be fast enough is playing with fire, in my experience).
> But that's probably not the right model. As many people have said here,
> the best way to do it is probably with some combination of local models
> of objects, plus remote descriptions of how to associate those objects,
> maybe with incremental filling in of the details as objects get closer.
> That doesn't necessarily require all *that* much bandwidth...
>
> -- Justin
>
> Random Quote du Jour:
>
> What They Say: What They Mean:
> "We need to massage the numbers a bit."   "We want to raise the price a lot."
>-- End of excerpt from Mark Waks