Re: Implementing Browsers

Bernie Roehl ([email protected])
Fri, 2 Jun 1995 23:36:28 -0400


According to [email protected]:
> > Problem 1: Non-convex faces
> As a browser writer, I *do* sympathise with your position, but the
> alternative to allowing non-convex faces is to put the burden of
> decomposition onto the authoring tools

True. I can live with that, though; ensuring the model is consistent is
the burden of the modeling application in any case. It already has to
ensure that the faces are planar and non-complex, so ensuring that they're
convex as well shouldn't be much of a burden.

In any case, I suspect that the majority of modeling packages deal only
with convex faces already; for example, 3D Studio, arguably one of the
most widely used modelers in the world, deals only in triangles. My
guess is that it would be easier for the modelers to export convex
facets than for every single browser to implement render-time decomposition
of non-convex facets.
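
For concreteness, here's roughly what that decomposition step involves.
This is just a sketch of the standard ear-clipping approach in C++,
assuming the face has already been projected onto its plane and is
simple (non-self-intersecting); none of the names come from any actual
browser or toolkit:

#include <vector>
#include <cstddef>

struct Point2 { double x, y; };
struct Tri    { std::size_t a, b, c; };  // indices into the vertex list

// z-component of (p - o) x (q - o); sign gives the turn direction
static double cross(const Point2& o, const Point2& p, const Point2& q) {
    return (p.x - o.x) * (q.y - o.y) - (p.y - o.y) * (q.x - o.x);
}

static bool pointInTri(const Point2& p, const Point2& a,
                       const Point2& b, const Point2& c) {
    double d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);   // p is inside or on an edge
}

// Assumes counter-clockwise winding; returns triangles as index triples.
std::vector<Tri> earClip(const std::vector<Point2>& poly) {
    std::vector<std::size_t> idx(poly.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;

    std::vector<Tri> out;
    while (idx.size() > 3) {
        bool clipped = false;
        for (std::size_t i = 0; i < idx.size(); ++i) {
            std::size_t ia = idx[(i + idx.size() - 1) % idx.size()];
            std::size_t ib = idx[i];
            std::size_t ic = idx[(i + 1) % idx.size()];
            // A convex corner...
            if (cross(poly[ia], poly[ib], poly[ic]) <= 0) continue;
            // ...containing no other vertex is an "ear" we can cut off.
            bool ear = true;
            for (std::size_t j : idx) {
                if (j == ia || j == ib || j == ic) continue;
                if (pointInTri(poly[j], poly[ia], poly[ib], poly[ic])) {
                    ear = false;
                    break;
                }
            }
            if (!ear) continue;
            out.push_back({ia, ib, ic});
            idx.erase(idx.begin() + i);
            clipped = true;
            break;
        }
        if (!clipped) break;  // degenerate input; bail out
    }
    if (idx.size() == 3) out.push_back({idx[0], idx[1], idx[2]});
    return out;
}

Note that a *convex* face, by contrast, can be fanned from any vertex in
linear time, which is exactly why pushing convexity onto the modelers is
so attractive.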

> and settle for (perhaps significantly) larger VRML files.

Clearly, we want to keep the file sizes small. However, I think it's
even more critical that we keep the rendering speed high; having to
decompose the polygons at render time would slow things down.

And yes, I do indeed mean *render* time; if we have to keep the original
scene graph intact (as discussed below), then that would include the
coordinates and the faces as separate nodes; to do things "right",
we'd have to redo the convex decomposition each time we traverse.

> Much though I do not relish the task, I think that in terms of
> performance hits, it should be a browser's responsibility to render
> what it "sees". If this includes non-convex facets then so be it.

I'd be curious to hear how the various browser-writers out there are
planning to handle this: decompose at parse time, decompose at traverse
time, or ignore the convexity issue and let the scan-conversion routines
(or the hardware) do what they will?
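
My own inclination, for what it's worth, would be a compromise along
these lines -- decompose once at parse time, and cache the result on the
node itself (node and field names below are purely hypothetical):

#include <vector>
#include <cstddef>

struct IndexedFaceSetNode {
    std::vector<long> coordIndex;          // original VRML face indices
    std::vector<std::size_t> triCache;     // cached triangle index triples
    bool cacheValid = false;

    // Called by a behaviour whenever it rewrites the geometry.
    void touch() { cacheValid = false; }

    const std::vector<std::size_t>& triangles() {
        if (!cacheValid) {
            triCache.clear();
            // ... run the convex decomposition here (e.g. the earClip
            // sketch above), appending three indices per triangle ...
            cacheValid = true;
        }
        return triCache;   // traversal renders from the cache
    }
};

That keeps the original coordinates and faces intact in the graph, while
the per-traversal cost drops to a cache check.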

> As a general pointer, whilst much of what you say regarding renderers'
> abilities is true, it would *always* be prudent for a browser developer
> to build his/her own layer of code between the VRML parser and the
> rendering library.

Hmm... that would involve basically ignoring the renderer's
hierarchy mechanisms, and relying on the scene graph instead. That's a
*lot* of overhead; I suspect it's part of the reason that VRML scenes
render relatively slowly, even on SGI boxes.

My major concern is the performance hit we'd be taking.
Most of these rendering libraries are pretty well-optimized; I'd prefer
not to bypass parts of them just for the sake of maintaining a scene
graph.
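
If I were forced to add such a layer, I'd want it to do all its work at
load time: walk the VRML graph once, flatten the transforms, and hand
everything to the renderer in its native form, so the per-frame traversal
belongs entirely to the (well-optimized) rendering library. A sketch of
the idea, with all types invented for illustration:

#include <vector>
#include <memory>
#include <utility>

struct Matrix44 { float m[16]; };
struct Mesh     { std::vector<float> verts; std::vector<unsigned> tris; };

// Parsed VRML graph (grossly simplified).
struct VrmlNode {
    Matrix44 localXform;
    std::vector<Mesh> meshes;
    std::vector<std::unique_ptr<VrmlNode>> children;
};

// The renderer's flat, compiled form: world-space meshes only.
struct CompiledScene {
    std::vector<std::pair<Matrix44, const Mesh*>> drawList;
};

// Row-major 4x4 multiply.
static Matrix44 concat(const Matrix44& a, const Matrix44& b) {
    Matrix44 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i*4 + j] += a.m[i*4 + k] * b.m[k*4 + j];
    return r;
}

// Load-time compile: accumulate transforms, emit one flat draw list.
void compileNode(const VrmlNode& n, const Matrix44& parent,
                 CompiledScene& out) {
    Matrix44 world = concat(parent, n.localXform);
    for (const Mesh& m : n.meshes)
        out.drawList.push_back({world, &m});
    for (const auto& child : n.children)
        compileNode(*child, world, out);
}

The catch, of course, is that once everything is flattened this way the
original scene graph is gone -- which is exactly the problem raised below.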

> > if a behaviour tries to update just one transform
> > node, it won't be able to (the information isn't independently available)
> Excellent point. However there must be some mechanism whereby VRML can
> communicate an object hierarchy to the browser.

Agreed -- the *object* hierarchy must be maintained. The question is
whether every transform, every material binding, every single node of
the entire scene graph has to be maintained as well.

If the answer to that question is "yes", then we're essentially looking
at re-implementing a large part of Open Inventor. Unless of course
SGI would be willing to put the source in the public domain... :-)
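
Failing that :-), the middle ground I'd speculate about is keeping only
the DEF-named Transforms "live" -- addressable by behaviours -- and
folding everything else flat at load time. Again, this is pure
speculation on my part, and every name here is invented:

#include <map>
#include <string>
#include <vector>

struct Matrix44  { float m[16]; };
struct DrawBatch { /* flattened, renderer-native geometry */ };

struct LiveTransform {
    Matrix44 local;                        // writable by behaviours
    std::vector<DrawBatch*> batches;       // geometry folded beneath it
    std::vector<LiveTransform*> children;  // other *named* transforms only
    bool dirty = false;
};

// Filled in during the load-time compile; every node that *isn't*
// DEF-named gets folded away and costs nothing at run time.
std::map<std::string, LiveTransform*> byName;

// A behaviour updates one transform without touching the full graph.
void setTransform(const std::string& name, const Matrix44& m) {
    auto it = byName.find(name);
    if (it == byName.end()) return;   // unknown DEF name
    it->second->local = m;
    it->second->dirty = true;   // re-concatenate just this branch next frame
}

That would give behaviours the object hierarchy they need, without the
browser having to retain (and re-traverse) every material binding and
coordinate node in between.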

-- 
   Bernie Roehl
   University of Waterloo Dept of Electrical and Computer Engineering
   Mail: [email protected]    Voice:  (519) 888-4567 x 2607 [work]
   URL: http://sunee.uwaterloo.ca/~broehl