Re: two ideas...

[email protected]
Thu, 7 Dec 1995 18:34:02 -0800


> From [email protected] Thu Dec 7 17:43:17 1995

> Although the HTTP working group hasn't really addressed this yet,
> it presumably will happen fairly soon, so I think one can assume
> that HTTP 1.1 will have preemption whether or not it has prefetching.

How "optimized" this is depends on the transport protocol.

> cache hits are forwarded to the server presender
> so that the server presender can update
> its speculation set
>
> This won't be popular with the HTTP community, since it adds
> server load. And it's not clear to me that the server's "speculation
> set" (predictive model) should be updated to reflect cache-hit
> behavior ... since (in our approach, at least) its purpose is
> to predict cache-miss behavior!

Predicting the next cache miss is helped by feeding the cache-hit
behavior back to the source. Granted, this adds load.

The idea is as follows:
    Client asks for the root page
    Server sends the root page
    Server presends the pages the root page points to
    Server presends their children, recursively,
        breadth-first, until getting feedback or running
        out of capacity
    Client HITS a root child in the cache
    Client sends the name of the child hit to the server
    Server deletes subtrees not based at that child,
        refocusing its "presending" set
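
In rough Python (the names here - PresendServer, links_in, on_cache_hit -
are my own, just to make the bookkeeping concrete; this isn't anything
in HTTP):

from collections import deque

class PresendServer:
    def __init__(self, links_in, capacity):
        self.links_in = links_in   # page -> list of pages it points to
        self.capacity = capacity   # max pages we are willing to (pre)send
        self.queue = deque()       # breadth-first frontier of candidate pages
        self.sent = set()          # pages already sent or present

    def serve_root(self, root):
        self.send(root)                        # the ordinary response
        self.queue.extend(self.links_in(root))
        self.presend_some()

    def presend_some(self):
        # Presend breadth-first until we hit capacity or run out of pages.
        while self.queue and len(self.sent) < self.capacity:
            page = self.queue.popleft()
            if page in self.sent:
                continue
            self.presend(page)
            self.queue.extend(self.links_in(page))   # children go to the back

    def on_cache_hit(self, child):
        # Client reported a HIT on `child`: delete queued subtrees not
        # rooted at that child, then keep presending the survivors.
        keep = self.subtree(child)
        self.queue = deque(p for p in self.queue if p in keep)
        self.presend_some()

    def subtree(self, root):
        seen, frontier = {root}, deque([root])
        while frontier:
            for c in self.links_in(frontier.popleft()):
                if c not in seen:
                    seen.add(c)
                    frontier.append(c)
        return seen

    # Stand-ins for the actual transfer of a page to the client.
    def send(self, page):    self.sent.add(page)
    def presend(self, page): self.sent.add(page)

The point is only that the server keeps a breadth-first frontier and
throws away whatever the feedback rules out; the actual transfer
mechanics are a separate question.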

The idea is based on the information-theoretic implications of
sending and receiving packets. Receiving a packet decreases the
imprecision in the estimated state of the remote side. Sending a
packet (I have proposed) increases that imprecision by an equivalent
amount. The goal of presending is to create a loosely coupled
feedback loop in which the state imprecision stays bounded.
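
In the crudest possible terms (the symmetric +1/-1 accounting below is
my simplification, not a claim about the right measure):

class ImprecisionBudget:
    def __init__(self, bound):
        self.bound = bound
        self.imprecision = 0      # uncertainty about the client's cache state

    def may_presend(self):
        return self.imprecision < self.bound

    def on_presend(self):
        self.imprecision += 1     # sending a packet raises the imprecision

    def on_feedback(self):
        # Receiving a feedback packet lowers it by the same amount.
        self.imprecision = max(0, self.imprecision - 1)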

(Aside - presending is a potentially EXPENSIVE technique that
is designed primarily to get around *propagation* latency)

>
> Note - the server rules imply that cache updates
> arrive on a different IP port than direct requests,
> and that the cache loads come on different IP ports
> than direct responses.
>
> This might be an optimization, but it's not necessary. And if
> you don't insist on this optimization, the changes to HTTP are
> quite minimal (and hence easy to get into the standard).

I wasn't sure if it was required, but it sure makes backward
compatibility trivial. If you're not listening on the port, the
packets just get dropped. That way, I don't need it in the
"standard" - so long as I provide the client-side mux that
aggregates the data, it'll work with existing clients.
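
Concretely, the mux needn't be much more than this (a Python sketch;
the port number, the UDP framing, and the cache shape are all
assumptions on my part):

import socket, threading

PRESEND_PORT = 8081               # hypothetical, not assigned anywhere
cache = {}                        # url -> body, shared with the normal fetch path

def presend_listener():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PRESEND_PORT))
    while True:
        datagram, _ = sock.recvfrom(65535)
        url, _, body = datagram.partition(b"\n")   # toy framing: "<url>\n<body>"
        cache[url.decode()] = body

def http_get(url):
    # Stand-in for the client's ordinary request path (unchanged).
    raise NotImplementedError

def fetch(url):
    if url in cache:
        # Cache HIT on a presend - this is the event reported upstream.
        return cache[url]
    return http_get(url)

threading.Thread(target=presend_listener, daemon=True).start()

An existing client that never starts the listener behaves exactly as it
does today - the presends just get dropped on the floor.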

Joe