> >> > Remote method invocation. Wouldn't this make things even more
> >> >intolerably slow than they already are?
> >>
> >> In a word: no.
> >>
> >In a word: Why?
>
> ??
> cgi-bin is already a form of remote invocation. Is that slow?
> Depends on the ratio of compute cycles to bandwidth. Writing
> a cgi-bin script to add two numbers and return the results is
> stupid.
Exactly my point!!!
> A classic remote-invocation problem is database searches. Instead
> of moving the whole database to your local machine and searching it
> there, you leave it at the remote site, and let the search occur there.
> You move less data. You get faster performance.
Yes, but now you are taking my comment completely out of context!!
OF COURSE what you describe is faster. Why? Because what the "processor
cycles" are doing (the database search) doesn't have to be transmitted,
only the RESULT needs to be transmitted.
When the "processor cycles" are turning a windmill in a virtual world, it is
just plain stupid to send a new windmill position for each frame across the
net!!
Stuff like that should run LOCALLY.
If you check my behaviour paper (from my Web page, click at the top, I
really suggest you read it) or Bernie's (link available from mine), we
propose a DIVISION between LOW LEVEL behaviours (called "engines"), which do
the simple, mundane, deterministic things, like "turn a windmill" or "move
object x along path y", and HIGH LEVEL behaviours (called "brains"), which
decide which low-level behaviour to execute!
This division is *CRITICAL* for getting minimum net bandwidth. And we MUST
aim for minimum net bandwidth, so that things work TODAY, on 14.4 modems,
and scale upwards in the future.
The "engine" part of the program runs on each host that is present in that
"virtual world". The "engines" are small and simple programs. An example of
a simple engine is e.g. moving an object linearily. The "engine" doesn't
respond to the world in any way, it just ticks off and does what it's told
to do, blind and dumb for anything except orders from it's "brain".
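To make that concrete, here is a rough sketch of an "engine" in Java (the
interface and all names are my own invention here, not from the papers):

    import java.io.*;

    // A minimal "engine": runs on EVERY host, ticks locally each frame,
    // and is blind to the world; it only obeys what its "brain" last said.
    interface Engine {
        void tick(double dt);    // advance one local frame
        void command(DataInputStream in) throws IOException;
    }

    // Example: linear motion. The brain sends position and velocity ONCE;
    // after that, every host animates the object with zero net traffic.
    class LinearMotionEngine implements Engine {
        double x, y, z;          // current position
        double vx, vy, vz;       // velocity, last set by the brain

        public void tick(double dt) {
            x += vx * dt;        // cheap, deterministic, dumb
            y += vy * dt;
            z += vz * dt;
        }

        public void command(DataInputStream in) throws IOException {
            x  = in.readDouble(); y  = in.readDouble(); z  = in.readDouble();
            vx = in.readDouble(); vy = in.readDouble(); vz = in.readDouble();
        }
    }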
The "brain" only runs on exactly ONE host, somewhere on the net. The brain
sends out messages to the "engine" what to do next.
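Something like this (again just a sketch; the multicast group and port
number are made up):

    import java.io.*;
    import java.net.*;

    // The "brain": lives on exactly ONE host. It only speaks when a
    // decision changes, never once per frame.
    class Brain {
        DatagramSocket sock;
        InetAddress group;
        static final int PORT = 4446;    // made-up port

        Brain() throws IOException {
            sock  = new DatagramSocket();
            group = InetAddress.getByName("230.0.0.1");  // made-up group
        }

        void command(byte[] msg) throws IOException {
            sock.send(new DatagramPacket(msg, msg.length, group, PORT));
        }
    }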
So: if the task is computationally trivial, you put it in the engine (e.g.
linear motion, follow-a-path, drop-to-the-floor, etc.). But if it is
complex (e.g. 200 bouncing balls interacting) you can put the collision
detection stuff into a complicated "brain" program running on a dedicated
host. The "brain" program sends out simple spline paths for each of the 200
balls to follow, and, for each collision it detects, sends out updated
splines for those balls.
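With a made-up packet layout, that could look something like:

    import java.io.*;

    // One packet per ball per collision; then silence until the next one.
    class SplineUpdate {
        int ballId;
        double t0, t1;                   // time interval the spline covers
        double[] ctrl = new double[12];  // 4 control points, x/y/z each

        byte[] pack() throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(ballId);
            out.writeDouble(t0);
            out.writeDouble(t1);
            for (int i = 0; i < ctrl.length; i++)
                out.writeDouble(ctrl[i]);
            return buf.toByteArray();
        }
    }

That is 116 bytes per ball per collision. Streaming raw positions instead
would cost each ball ~24 bytes 30 times a second, FOREVER, collisions or
not: over a megabit per second for all 200 balls.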
The bottom line is: let the local host do everything that is a light-weight
processor load, simple, deterministic, predefined. Anything complex, that
responds to outside events, includes randomness, or has "ideas of its own",
should be in the brain.
The network resides between these two.
That is, IMHO, the only way that we can keep net traffic minimal.
Using remote method invocation for EVERYTHING is just plain dumb!
/Z
--
Hakan "Zap" Andersson | http://www.lysator.liu.se/~zap | Q: 0x2b | ~0x2B
Job: GCS Scandinavia  | Fax: +46 16 96014              | A: 42
[email protected]      | Voice: +46 16 96460            | "Whirled Peas"
------------------------------------------------------------------------
Heard on sci.virtual-worlds some years ago: "We probably shouldn't
immediately go for the direct neural interface, just because it is 'the
techy thing to do'"
------------------------------------------------------------------------