Re: uh oh -- halp!

Tony Johnson ([email protected])
Tue, 21 Sep 1993 17:54:25 -0700 (PDT)


A couple of weeks ago there was discussion (below) of the problem of documents
getting truncated when a client sends HTTP/1.0 requests to HTTP 0.9 servers.

Did anyone come to a clear understanding of these problems, and decide on a
solution??

Tony

Marc wrote:
>Tony Sanders writes:
>> [problem about supporting HTTP/1.0 and HTTP0 in same client]
>> ...
>> > > The result is that the socket gets confused and the client only ends
>> > > up getting the first chunk of data (usually 1024 bytes).
>> I think the client is getting ENOTCONN when the server does the close.
>> You could work around this by detecting ENOTCONN and retrying without
>> "HTTP/1.0". For performance you would probably want to cache a list of
>> HTTP0 servers (though I wouldn't bother keeping it around between sessions).
>>
>> > 1) To change the HTTP/1.0 protocol to use a different separator
>> > between accept fields, and to use CR LF as a terminator. This
>> > means getting new versions of all the servers and clients, and
>> > also means getting all existing HTTP/1.0 servers upgraded. Anyone
>> > know the install base out there? This will cause lots of user
>> > headaches until all the servers get upgraded.
>> I don't think \r\n is the problem. Just set the connection to unbuffered
>> (I'll bet you have it line buffered now) and flush when done building the
>> request. However, I think you still get bitten if it fragments going out.
>>
>> Maybe someone with more TCP knowledge could dig into this and figure out
>> another solution.
>
>I'll try to pursue both of Tony's suggestions this weekend in Mosaic.
>If neither solves the problem completely, I suggest as the easiest
>solution that we require that existing HTTP0 servers *at least* be
>upgraded to be able to accept full, multiline HTTP/1.0 queries, even
>if they can't understand them.
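
For reference, here is a rough sketch of the fallback Tony Sanders describes
above: try the full HTTP/1.0 request first, and if the read blows up because
the old server has already closed the connection, throw any partial document
away and retry with a bare HTTP 0.9 "simple request". This is not Mosaic's
actual code; open_http_connection() is a hypothetical helper that does the
usual socket()/connect() dance, and error handling is pared to the bone.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

extern int open_http_connection(const char *host, int port);  /* hypothetical */

/* Connect, send one request string, and slurp the whole reply into a
 * malloc'd buffer.  Returns the byte count, or -1 if the read failed
 * part way through (e.g. ENOTCONN after the old server's close). */
static long fetch_once(const char *host, int port, const char *request,
                       char **doc)
{
    char buf[4096], *data = NULL;
    long len = 0;
    int s, n;

    if ((s = open_http_connection(host, port)) < 0)
        return -1;
    /* the request is built beforehand and pushed out in a single write */
    if (write(s, request, strlen(request)) < 0) {
        close(s);
        return -1;
    }
    while ((n = read(s, buf, sizeof buf)) > 0) {
        data = realloc(data, len + n);
        memcpy(data + len, buf, n);
        len += n;
    }
    close(s);
    if (n < 0) {                      /* ENOTCONN, ECONNRESET, ... */
        free(data);
        return -1;
    }
    *doc = data;
    return len;
}

/* Try HTTP/1.0 first; on failure discard any partial data and retry in
 * the old style.  A client could also remember the host in a per-session
 * list of HTTP0 servers so it skips the doomed attempt next time. */
long get_document(const char *host, int port, const char *path, char **doc)
{
    char req[1024];
    long len;

    sprintf(req, "GET %s HTTP/1.0\r\nAccept: */*\r\n\r\n", path);
    if ((len = fetch_once(host, port, req, doc)) >= 0)
        return len;

    sprintf(req, "GET %s\r\n", path);          /* HTTP 0.9 simple request */
    return fetch_once(host, port, req, doc);
}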
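
Sanders' other point, about buffering: if the client writes the request
through a line-buffered stdio stream, each Accept line goes out in its own
little write.  Here is a sketch of the fix, assuming (purely for illustration)
that the socket is wrapped with fdopen() -- switch the stream to full
buffering and flush once when the whole request has been built.  As he notes,
the kernel can still fragment the single write on the wire, so this is not a
complete guarantee.

#include <stdio.h>

void send_request(int sock, const char *path)
{
    FILE *fp = fdopen(sock, "w");

    /* fully buffered, not line buffered; must be set before any output */
    setvbuf(fp, NULL, _IOFBF, 8192);

    fprintf(fp, "GET %s HTTP/1.0\r\n", path);
    fprintf(fp, "Accept: text/html\r\n");
    fprintf(fp, "Accept: text/plain\r\n");
    fprintf(fp, "\r\n");

    fflush(fp);     /* one flush, so the request leaves in a single write */
    /* fp now owns the descriptor; don't fclose() it until the reply
       has been read */
}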
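
And a sketch of the minimal change Marc's last suggestion would ask of an
existing HTTP0 server (again, not any particular server's code): after
reading the request line it already understands, swallow any header lines
up to the blank line that ends a full HTTP/1.0 request instead of choking
on them.  It assumes the server reads the request through a stdio stream.

#include <stdio.h>
#include <string.h>

/* Read the request line into reqline; if the client sent a full
 * (multiline) HTTP/1.0 request, read and ignore the rest of it. */
int read_request(FILE *in, char *reqline, int size)
{
    char junk[1024];

    if (fgets(reqline, size, in) == NULL)
        return -1;

    /* An HTTP 0.9 simple request is just "GET /path CRLF"; a full
       request carries the version plus header lines ending with an
       empty line, which we discard here. */
    if (strstr(reqline, " HTTP/") != NULL) {
        while (fgets(junk, sizeof junk, in) != NULL)
            if (junk[0] == '\r' || junk[0] == '\n')
                break;               /* blank line: end of the headers */
    }
    return 0;
}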