Re: two ideas...

[email protected]
Wed, 29 Nov 1995 23:21:31 -0800


> From [email protected] Wed Nov 29 19:56:31 1995

> On Wed, 29 Nov 1995 [email protected] wrote:
> > I would predict that they would scale to around BW = 7^3, or around
> > 350x, and latency would reduce from 2.1 RTTs down to 0.2 RTTs avg
> > per request.
> >
>
> I grabbed a copy of the Touch and Farber paper cited earlier in this
> thread, which seemed to deal with FTP. It described a pre-send selection
> algorithm of sending everything in the currently selected directory. The
> Boston University system used a simple 1-level probabilistic model to pick
> the best candidates for pre-send, and used far less extra bandwidth,
> though with a higher probability of a cache miss. There's lots of stuff to
> tune with speculation.
>
> Simon

This is correct - the Touch/Farber paper dealt with FTP, and outlined
a method to extend the specifics to the Web; the method originally
appeared in a paper published in 1989 (as distributed interactive
hypermedia). The IEEE JSAC paper on "Five Challenges" outlines the
method used for the Web in particular.

The difference is precisely that of presending likely candidates
vs. presending *all* subordinate web pages; where it matters most is
in how the protocol at the server side models the receiver state.
When *all* subordinate web pages have been sent, the top-level web
page's state can be removed from the "possible client states" set,
since any page the client can request next from it is already in
flight.
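
To make the state-tracking concrete, here is a minimal Python sketch
(the toy graph and all names are made up for illustration; this is
the idea, not either paper's actual implementation):

    # Sketch: server-side model of "possible client states".
    # web: page -> list of subordinate pages (a toy web graph).
    web = {
        "index": ["a", "b", "c"],
        "a": [], "b": [], "c": [],
    }

    def presend_all(page, possible_states):
        # Presend every subordinate page of 'page'. Once all
        # children are in flight, the client's next request from
        # 'page' is already covered, so 'page' can be dropped
        # from the set of states the server must model.
        sent = list(web[page])
        possible_states.discard(page)
        possible_states.update(sent)
        return sent

    def presend_likely(page, possible_states, predict):
        # Presend only the candidates 'predict' picks. The client
        # may still follow an unsent link, so 'page' must remain
        # in the possible-states set alongside the candidates.
        sent = predict(page)
        possible_states.update(sent)   # 'page' is NOT discarded
        return sent

    states = {"index"}
    presend_all("index", states)
    print(states)   # {'a', 'b', 'c'} -- 'index' removed

    states = {"index"}
    presend_likely("index", states, lambda p: web[p][:1])
    print(states)   # {'index', 'a'} -- 'index' retained

The payoff of the "send all" policy is exactly the smaller state
set; the probabilistic policy trades a larger state set for less
bandwidth.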

Probabilistic methods can be used to sequence the pre-sent pages,
as well as to create a "virtual" web graph with virtual intermediate
nodes. In a degenerate case, this virtual graph provides the same
power as the probabilistic methods described by the OCEANS group
at BU.
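
Continuing the sketch above (again hypothetical: a 1-level model
in the style Simon describes, not the OCEANS group's code):

    # Sketch: sequence presend candidates with a 1-level
    # probabilistic model built from observed transitions.
    from collections import Counter, defaultdict

    follows = defaultdict(Counter)   # page -> Counter of next pages

    def observe(page, next_page):
        # Record one observed client transition.
        follows[page][next_page] += 1

    def presend_order(page, k=2):
        # Return the k most likely next pages, best first;
        # pages are pre-sent in this order, so the likeliest
        # candidate arrives at the client soonest.
        return [p for p, _ in follows[page].most_common(k)]

    observe("index", "a"); observe("index", "a"); observe("index", "c")
    print(presend_order("index"))   # ['a', 'c']
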

Joe