1) Prefetching, if it were to be widely deployed, would throw web site
traffic analysis out the window. Right now we have a pretty good
presumed mapping between a page access and the likelihood that someone actually saw
it. Prefetching would mean that we couldn't tell, on the server side,
whether a fetched document actually ever got rendered or not. The
content provider needs some way of distinguishing between prefetches and
actual looks.
The solution to this might be to have the prefetcher obtain only the
first, say, 1500 bytes of the document-to-be-prefetched. This could be
accomplished via a Range: request, or perhaps even a different method
could be invented for this. Then, if the document is actually selected,
the first 1500 bytes are instantly rendered while the rest is being
grabbed. This should increase perceived performance, at least. Then on
the server end I can simply look for "give me the rest of this
document"-type requests.
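A rough sketch of what the client side of this might look like, in
modern Python purely for illustration (the helper names are made up,
and the fixed 1500-byte cutoff is just the guess from above; a server
that ignores Range: would simply hand back the whole body):

    import urllib.request

    PREFETCH_BYTES = 1500  # the "first, say, 1500 bytes" cutoff from above

    def prefetch_head(url):
        # Ask for only the opening chunk of a document we *might* need.
        # A server that honors Range: answers 206 Partial Content; one
        # that ignores it answers 200 with the full body, which the
        # prefetcher can simply keep.
        req = urllib.request.Request(
            url, headers={"Range": "bytes=0-%d" % (PREFETCH_BYTES - 1)})
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read()

    def fetch_rest(url, already_have):
        # The "give me the rest of this document" request, sent only
        # when the user actually follows the link, which is exactly
        # the request the server can count as a real look.
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-" % already_have})
        with urllib.request.urlopen(req) as resp:
            return resp.read()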
2) Control of prefetching by the server. Let's say I have a page with
900 inlined clickable images, trying to emulate a 30x30 grid Brite-Lite
(and I'm not using an imagemap because, well, I'm not). If, when someone
came to that page, each of those 900 was "prefetched", I might
have a server meltdown. The content provider needs some way of saying,
it would seem, that they're not interested in having each element
pre-fetched. Perhaps as an attribute to <A>? I don't have an easy
answer to this one.
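Purely as a hypothetical sketch (no such attribute exists in any
current spec), suppose it were spelled NOPREFETCH, as in
<A HREF="cell17.html" NOPREFETCH>. A well-behaved prefetcher would then
skip those anchors when it walks the page, something like:

    from html.parser import HTMLParser

    class PrefetchCandidates(HTMLParser):
        # Collect HREFs from a page, skipping any anchor that carries
        # the hypothetical NOPREFETCH attribute, so the content
        # provider can opt individual links out of prefetching.
        def __init__(self):
            super().__init__()
            self.urls = []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            if "noprefetch" in attrs:
                return
            if "href" in attrs:
                self.urls.append(attrs["href"])

    parser = PrefetchCandidates()
    parser.feed('<A HREF="cell17.html" NOPREFETCH>17</A> '
                '<A HREF="about.html">about</A>')
    print(parser.urls)  # only about.html gets queued for prefetching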
Brian