Re: Last time, i18n options

Jason Cunliffe ([email protected])
Tue, 25 Apr 95 05:28:35 +0200


Actually the subject does not make me glaze over but I confess great ignorance
about which way is up in regard to the global text options.

My interest in VRML is as a key component for a global visualization system I
am trying to build. The basic requirements are multi-layered, multi-user, time-stamped,
animated, distributed, neurally networked engines which merge GIS (geographic information
systems), multimedia and communications protocols. Central to all of this is the
development of *SMART MAPS* where all objects are both display devices and input/control
mechanisms. User layers and user worlds may comprise or combine any types of data or meta-data,
including paths and links to other users' worlds. This data may be abstract, contextual, geographic,
topographic, symbolic, topological, numeric, multimedia audio, video, text, sculptural, VR, 3-D, linguistic,
navigational, meta-linked etc...

This is currently described in the form of a large proposal to the European Commission for advanced
communications experiments. We are still waiting for results and so I have been quietly but intensely
following this newsgroup from the beginning.

Clearly there are many issues which belong in the 2.0 + interactive behaviours + DIS + newsgroup arenas
and I have not wanted to distract from the 1.0 get it out now focus.

But Jan's and Chris's debate about text, and the need to do something sensible now, are I believe really
relevant to *more* than just text; the logic is similar to that of dropping the spectrum down to
zero Hz.

Since my own particular interest demands integration of maps, I consider the continuum of smart objects to
contain 2 and 3 dimensions (and those in between). There are many types: points, blobs, lines, perimeters,
intersections, paths, areas, labels etc.. In an explicitly defined 3-D environment many of these would
perhaps remain as 2-D objects, but could be usefully ascribed 3-D coordinates and combined with all those other
delicious 3-D objects in the great sculpture garden of eden now on the drawing boards... Time stamping adds a very
significant other dimension and utility to all objects. Animation is of course possible, but so is the
detection and control of when and where people and objects appear, definition of path mechanisms, branched NNTP-based
links of interest; the entire issue of user and object dynamics IMHO needs implicit time-stamping of
everything as an available resource. What follows is the ability to then record and describe gestural dynamics.
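To make the idea concrete, here is a minimal sketch (entirely my own illustration, not any part of the VRML 1.0 draft) of what implicit time-stamping might mean in practice: every object carries a history of (timestamp, state) pairs, so a browser or replay engine can later ask "when and where" something appeared. All names here are invented for the example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TimedObject:
    """A scene object whose every state change is implicitly time-stamped."""
    name: str
    history: list = field(default_factory=list)  # list of (timestamp, position) pairs

    def update(self, position, timestamp=None):
        """Record a new position; the stamp is implicit when not supplied."""
        stamp = timestamp if timestamp is not None else time.time()
        self.history.append((stamp, position))

    def position_at(self, t):
        """Return the last recorded position at or before time t (None if unborn)."""
        latest = None
        for stamp, pos in self.history:
            if stamp <= t:
                latest = pos
        return latest

# A map label that appears at t=100 and moves at t=200:
marker = TimedObject("city-label")
marker.update((0.0, 0.0, 0.0), timestamp=100.0)
marker.update((1.0, 2.0, 0.0), timestamp=200.0)
print(marker.position_at(150.0))  # → (0.0, 0.0, 0.0)
```

Recording gestural dynamics then falls out for free: a user's path through the world is just another object's history.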

Leave it to the browsers and the interface I/O engines to decide what to do with it. But the distinction between
authoring and use, navigation and transmission is further closed, which I understand to be an aspect of VRML's
underlying philosophy.

TEXT TEXT TEXT - it's all of these things

It's handwriting and it's calligraphy.
It's dumb ASCII labels.
It's 3-D spinning logos.
It's semantic code for you know who knows what...
It's instructions
It's mail
It's something in another language which you may or may not know how to read yet.
It's names on a map (but whose language and whose character set?)
It's coordinates and links.
It's legal caveats
It's a whole mess of symbols, some of which are already standard and some not yet born...
It's translations into other languages of what is written, so *everyone* can use it

etc..

I know this was somewhat debated before - at the time I was swamped and didn't have time to partake.

I am sure that the serious application of VTML (Virtual Text Modelling Language) is a ways off yet, and that
VRML 2.0 will provide a nice leg for it. I think it is very important to put the maximum versatility possible
into 1.0 (even if it is slow and clumsy and inelegant). At least the crack in the door is opened to some
proof of concept and some inspired browser plug-ins.

I read that the recent pen-based systems have sold well in Japan. It seems that western (roman) cursive is a
bitch for handwriting recognition systems, while pictogram, kanji and ideogram writing is very successful on the same
systems. I imagine this has a lot to do with the fact that the strokes are taught and executed in a very specific
gestural sequence. In this sense they are already gestural algorithms. Navigating through VR environments, camera
tracking, cartographic objects, "signatures", musical attributes, choreography, puppetry etc. all depend upon a
recognizable gestural pattern language.
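A toy sketch of the point about stroke order: when a character is always executed as the same sequence of strokes, recognition reduces to matching a sequence rather than a free-form shape. The stroke templates below are invented for illustration only, not real kanji data.

```python
# Invented toy templates mapping a name to its canonical stroke-direction
# sequence; a fixed, taught stroke order makes each entry unambiguous.
STROKE_TEMPLATES = {
    "ten": ["down", "right"],
    "juu": ["right", "down"],
}

def recognize(strokes):
    """Match an observed stroke-direction sequence against the templates."""
    for name, template in STROKE_TEMPLATES.items():
        if strokes == template:
            return name
    return None  # no template matched

print(recognize(["right", "down"]))  # → juu
```

The same sequence-matching idea extends to camera paths, signatures or choreography: a gesture becomes a comparable symbol once its execution order is fixed.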

I apologize if this is no longer the correct forum for these comments, and also for the lousy typing (PPP gremlins out tonight,
and this terminal is crude and rude).

- Jason.