Re: Protocol Benchmarking (HTTP protocol)

frans van hoesel ([email protected])
Wed, 2 Feb 1994 12:05:52 +0100 (MET)


My problem with HTTP too!!

But the solution I would suggest is a multiple GET request for
multiple files (perhaps named MGET?), which has these big
advantages:

o it closes the connection as soon as possible
o it doesn't break anything: a browser could send the MGET
  to any HTTP daemon; if it finds that the daemon doesn't understand
  MGET, it can fall back to the old method. Eventually all HTTP
  daemons would understand it and everybody would be happy
o it's much faster! Not only is transmission time saved, but also
  a lot of time is saved by not having to wait for the connection
  to be set up all over again. Although I am on a SLIP line, I can
  see from the LEDs on the modem that much time is wasted just in
  the phase where the connection has to be made; the modem is just
  idle, so I'm definitely not limited by transmission speed during
  that phase.

It has one disadvantage:

o for a given document it still needs two accesses: one to get the
  document and one to get the images using MGET (a sketch of such a
  session follows below).
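
To make this concrete, the second access could look something like
this (no MGET syntax exists yet, I'm just inventing something
plausible):

Connection opened:
MGET /logo.gif /photo1.gif /photo2.gif HTTP/1.0
[ batch of "Accept" and other headers sent once ]

All three files are sent (with some separator between them)
Connection closed.

So the headers and the connection setup are paid for only once for
all the images, instead of once per image.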

However, there is an advantage hiding here: during the first GET, the
httpd could actually tell the browser that it understands MGET
requests, so an MGET request need never fail on the system the
document is coming from.
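
For example (the header name here is pure invention on my part), the
reply to the first GET could carry one extra line:

HTTP/1.0 200 OK
MGET: supported
Content-type: text/html

The browser then knows in advance that an MGET for the inline images
will succeed on this server.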

Another possibility is that during the Accept-headers phase the
browser tells the server it wants all the images too, and the server
itself would see which files are needed from the local system and
send them as well (as if an MGET had been sent). This might look
like it costs a lot of CPU cycles on the server, but it could
actually save a lot, because the images will be requested anyway.
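
Again only as a sketch (none of these headers exist, the names are
invented), such a request could look like:

GET /index.html HTTP/1.0
Accept: text/html
Accept: image/gif
Send-Inline: yes
[ other headers ]

Document is sent, followed by the local inline images it references
Connection closed.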
In all cases the browser would act as usual to get the needed images
from systems other than the one the document is coming from (using
MGET where those systems understand it).

- frans

> Though it would be hard to "see" in a protocol study. HTTP has one
> problem that I would enjoy seeing fixed in the near future (it is my
> personal beef with HTTP).
>
> HTTP 1.0 example negotiating session:
>
> Connection opened:
> GET /index.html HTTP/1.0
> [ batch of "Accept" and other headers sent,
> mosaic sends about 1K worth ]
>
> File is sent
> Connection closed:
> Connection opened (to the same host):
> GET /logo.gif HTTP/1.0
> [ batch of "Accept" and other headers sent,
> mosaic sends about 1K worth ]
>
> File is sent
> Connection closed.
>
> This is awful, since not only are there connection creation/teardown
> expenses, but also retransmission of "client information" to the
> server. Also, since (from the survey of my server) most of the HTML
> documents are ~1K in size, it means that twice as much information
> is being sent as necessary... Not good for a slow link..
>
> This hopefully could get changed (HTTP 1.1?) into a protocol that
> doesn't close the connection after one file is transmitted, but
> rather leaves it open for a "short" while, where it is closed either
> through a client "QUIT" or a server timeout.
>
> [email protected]
>
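
A session under the kept-open scheme proposed above would presumably
run something like this (again just a sketch; the "QUIT" is taken
from the proposal, the rest is guesswork):

Connection opened:
GET /index.html HTTP/1.0
[ batch of "Accept" and other headers sent once ]

File is sent
GET /logo.gif HTTP/1.0

File is sent
QUIT
Connection closed.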