Re: two ideas...

[email protected]
Thu, 7 Dec 1995 18:23:24 -0800


> From [email protected] Thu Dec 7 17:12:27 1995
>
> We believe that predictions need to be signalled not only to HTTP,
> but to the client's IP processing mechanism, and to the network.
> The problem is that predictions are opportunistic use of available
> bandwidth, which must not interfere with other "real" traffic.
>
> You can believe this as much as you want, but I don't think you
> will be able to insist on it. For one thing, HTTP already uses
> non-reserved bandwidth, and it's not possible to decide what is
> "real". (Should posting of sports scores get priority over

If I partition the direct request/response and guessed-answer/feedback
traffic onto different ports, and I provide the muxing mechanism, then
I can decide what's real and what's not before the client sees it.
This relies on running a proxy on the client host.
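To make the partitioning concrete, here is a toy sketch of the
client-host proxy's mux policy. The port numbers and class names are
assumptions for illustration; the only point it demonstrates is that
direct traffic is always delivered ahead of guessed answers.

```python
from collections import deque

# Hypothetical port assignments (an assumption, not part of any spec):
DIRECT_PORT = 8080    # normal request/response traffic
PREDICT_PORT = 8081   # guessed-answer/feedback traffic ("droppable")

class ClientSideMux:
    """Toy model of the client-host proxy: 'real' traffic is always
    handed to the client before any opportunistic predicted traffic."""

    def __init__(self):
        self.queues = {DIRECT_PORT: deque(), PREDICT_PORT: deque()}

    def receive(self, port, data):
        # Data arrives already separated by port.
        self.queues[port].append(data)

    def next_for_client(self):
        # Drain the direct queue first; predictions only fill idle time.
        for port in (DIRECT_PORT, PREDICT_PORT):
            if self.queues[port]:
                return self.queues[port].popleft()
        return None
```

A real implementation would of course sit on sockets rather than
in-memory queues, but the priority rule is the same.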

> My intuition is that, at the moment, the primary contributor to
> delay for the average web user (on a 14.4 or 28.8 modem) is the
> long transmission time over the "tail circuit" dialup link.

Actually, there are two major contributors to delay for the
"average" user -
- bandwidth to the server
this has less to do with the modem speed,
and more to do with shared access to an
often limited and highly contended resource
i.e., even over a 14.4k modem, we often see
4-6 kbps transfer rates
- rendering speed
consider how much time it takes to display
a page, *after* it has been received, which
is a function of the client's processing
power
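To put a number on the first point: at the effective rates above, the
page size (30 KB here) is an assumed figure for illustration, but the
arithmetic shows why the bottleneck isn't the modem's rated speed.

```python
def transfer_time_s(page_bytes, effective_kbps):
    """Seconds to move a page at a given *effective* rate."""
    return (page_bytes * 8) / (effective_kbps * 1000)

# A 30 KB page at the 5 kbps we actually observe:
# transfer_time_s(30_000, 5)  -> 48.0 seconds
# The same page at the modem's rated 14.4 kbps would take ~16.7 s.
```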

> You write:
> As a result, we send them:
> - to a different PORT on the client-side cache
> - flagged as "droppable" (ABR user-flagged 'red', in ATM terms)
> but then you also write:
> The other advantage is that this requires NO modification to HTTP.
>
> This seems inconsistent to me. Use of a different port and
> requiring network-level priority settings definitely means
> changes to the HTTP spec, and almost certainly would require
> major changes to proxies, routers, etc.

Not if I hide the additional ports between the proxies I provide, which
is what I plan to do. The split is invisible to the client and server,
and isn't an HTTP extension.
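The invisibility property can be sketched in a few lines. Everything
here is an assumption about the inter-proxy framing (the ports, the
function names): the server-side proxy picks a port by traffic class,
and the client-side proxy hands the bytes over unchanged, so neither
endpoint ever sees anything but ordinary HTTP.

```python
# Assumed inter-proxy port numbers, hidden from both endpoints:
DIRECT_PORT, PREDICT_PORT = 8080, 8081

def server_proxy_route(response, predicted):
    """Server-side proxy: choose the inter-proxy port by class."""
    return (PREDICT_PORT if predicted else DIRECT_PORT, response)

def client_proxy_deliver(port, response):
    """Client-side proxy: the port split stops here -- the browser
    receives the response bytes exactly as the server sent them."""
    return response
```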

> Of course, one could get into an argument over whether prefetching
> or presending is more wasteful of bandwidth (for a given reduction
> in latency), but I'll leave that for later.

The difference between the two has been a source of contemplation,
and I think I have some examples of when each is better.

Prefetching
better when TRANSMISSION latency is dominant
prediction is the job of the RECEIVER

Presending
better when PROPAGATION latency is dominant
prediction is the job of the SENDER

Which is more complicated to implement, and whether they are really
two sides of the same coin, are questions I'd be interested in discussing.
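The two regimes above can be illustrated with a toy latency model.
All the numbers are assumptions chosen only to show which component
dominates in each case, not measurements.

```python
def fetch_latency_s(rtt_s, size_bytes, bw_bps):
    """Split a demand fetch into its two latency components."""
    propagation = rtt_s                      # round trip to ask and be answered
    transmission = size_bytes * 8 / bw_bps   # time to clock the bits through
    return propagation, transmission

# 28.8 modem, nearby server: transmission dominates -> prefetching regime
# fetch_latency_s(0.1, 30_000, 28_800)      -> (0.1, ~8.33 s)

# fast link, transcontinental RTT: propagation dominates -> presending regime
# fetch_latency_s(0.2, 30_000, 10_000_000)  -> (0.2, 0.024 s)
```

In the first case the receiver has plenty of time to predict and ask
ahead; in the second, only the sender can save the request round trip.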

Joe