The second disadvantage you list, I don't have an answer to - yeah, there
will be no prefetching there simply because it's a (potentially) different
administrative domain. However, the first problem isn't a problem when you
compare it to *no* prefetching - sure, a second document request is needed (I
won't say new TCP connection or round trip because we could be talking
persistent connections here), but at least you have the first screenful of
the document to read while the rest is loading, so the *perception* is that
there was no delay between the "click" and the beginning of the document
rendering. Furthermore, you don't have the bandwidth hit of having all pages
prefetched, only the very beginnings of those pages. I say make that
"beginning" mark arbitrary, so server/site authors can configure it on an
object-by-object basis.
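To put rough numbers on the bandwidth saving, here's a quick sketch (the page sizes are made-up figures, and the 1500-byte mark is just one possible setting):

```python
# Compare the transfer cost of prefetching whole linked pages versus
# prefetching only the first top_limit bytes of each.  Pages smaller
# than the limit are transferred whole either way.
def prefetch_cost(page_sizes, top_limit=1500):
    full = sum(page_sizes)
    partial = sum(min(size, top_limit) for size in page_sizes)
    return full, partial

# Four hypothetical linked pages; one (800 bytes) fits under the mark.
full, partial = prefetch_cost([40_000, 12_000, 800, 25_000])
```

With those numbers, partial prefetching moves 5,300 bytes instead of 77,800 - you still get instant first-screen rendering on a click, at a small fraction of the bandwidth.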
If we want to push this "smarts" back to the client, we could have a new
method, say "TOP", which means "give me the headers and however many bytes of
content you think I should be able to see before the full request goes
through". In a typical session over a persistent HTTP connection, that
means a GET is placed on a document, the document is parsed for IMG and
EMBED-ed objects, those are fetched using GET, and finally the document is
parsed for HREF-linked objects, which are each sent a TOP request. When an
HREF is selected, another
full request happens just like nowadays, but the browser can render the TOP
info it got immediately. Just how many bytes a TOP request returns is left
up to the server/site author. Some servers may configure it to be the
first <HR> in an HTML doc - others may say the first 1500 bytes. The server
should also have some way of saying "look, the object you wanted was so
small, I gave you the whole thing anyway".
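For concreteness, here's what a minimal TOP-capable server might look like, sketched with Python's stdlib http.server (which dispatches custom methods to do_<METHOD> handlers). The TOP_LIMIT knob, the <HR> cut rule, and the "X-Top-Complete" header are all invented for illustration - the proposal deliberately leaves those details to the server/site author:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy in-memory document store.
DOCS = {"/index.html": b"<HTML><BODY>" + b"x" * 5000 + b"</BODY></HTML>"}
TOP_LIMIT = 1500  # default "beginning" mark, in bytes (site-configurable)

def top_slice(body, limit=TOP_LIMIT):
    """Cut at the first <HR> if one appears before the byte limit,
    otherwise at the byte limit itself."""
    hr = body.find(b"<HR>")
    if 0 <= hr < limit:
        return body[:hr + len(b"<HR>")]
    return body[:limit]

class TopHandler(BaseHTTPRequestHandler):
    def do_TOP(self):  # invoked for request lines like "TOP /index.html"
        body = DOCS.get(self.path)
        if body is None:
            self.send_error(404)
            return
        head = top_slice(body)
        self.send_response(200)
        self.send_header("Content-Length", str(len(head)))
        # Signal "the object was so small, you got the whole thing":
        self.send_header("X-Top-Complete",
                         "yes" if len(head) == len(body) else "no")
        self.end_headers()
        self.wfile.write(head)
```

A real deployment would also need a graceful fallback: a browser sending TOP to a server that doesn't understand it should get a recognizable error and retry with plain GET.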
This is academic theory until it's implemented as a test somewhere, so
I won't press too much more on it.
Brian
--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
[email protected] [email protected] http://www.[hyperreal,organic].com/