Legacy documents are no problem. For the most part, valid HTML 2.0
documents will also be valid HTML 3.0 documents. If they aren't, the
differences are minor enough that any browser author should be able to
support 2.0 without much trouble. I can't imagine any browser company
refusing to do that.
> Breaking the browser may not be a serious problem for you, but there are a
> great number of people on the net who don't have the most up-to-date
> equipment or software, including government sites, educational sites and
> others, not to mention text-only and Braille browsers. What may for you
> seem like an aesthetic issue is for others an inability to read a document.
Absolutely correct. Content negotiation, when used correctly, will
ensure that (for example) HTML 2.0-only browsers are *not* given HTML 3.0
documents. Somewhere a downtranslation will happen, where 3.0 documents
will be reduced to 2.0. There will definitely be some loss - instead of
using <table> to represent a table, the downconversion utility will
format the table to some default width and use <pre>, for example. Where
and how that translation happens isn't all that important - it can be
done in the server at the time of the request, or it can be done at
production time, creating two separate files, etc.
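To make that concrete, here is a rough sketch of what such a
downconversion might do (the table contents and the chosen widths are my
own invented illustration, not from any real utility). A 3.0 document
containing

    <table border>
    <tr><th>Format</th>   <th>Status</th></tr>
    <tr><td>HTML 2.0</td> <td>Current</td></tr>
    <tr><td>HTML 3.0</td> <td>Draft</td></tr>
    </table>

could be flattened for 2.0-only browsers into a preformatted block laid
out to some default width:

    <pre>
    Format     Status
    HTML 2.0   Current
    HTML 3.0   Draft
    </pre>

The table semantics are lost, but every browser can display the result.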
Will downconversion be sufficient? That's mostly a quality of
implementation issue, i.e., how good your downconversion utility is.
Hopefully W3O will have a pointer to a reference utility for this
purpose... the only issue for the development of HTML 3.0 is that we try
to avoid tags that would be impossible to downconvert (and I can't think
of an example).
> Not to harp on Netscape (more than has already been done), but how many
> times have you been browsing and come upon a page that looked absolutely
> horrendous or was unreadable because it relied on code that your browser
> didn't support? Unless the language truly supports an alternative display
> method (such as ALT for images) it will inevitably create this have and
> have-not division. Relying on the server rather than the structure of the
> language seems to me to be a stopgap.
If Netscape had put "Accept: text/x-Mozilla-html" in their HTTP headers,
this would not have been a problem! :)
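With content negotiation the exchange would look roughly like this (the
URL and the quality values are only illustrative, not from any real
server). The browser's request carries

    GET /index.html HTTP/1.0
    Accept: text/x-Mozilla-html, text/html; q=0.8

and a server holding both variants could answer

    HTTP/1.0 200 OK
    Content-Type: text/x-Mozilla-html

while a 2.0-only browser sending just "Accept: text/html" would get the
downconverted variant instead.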
> Kee, on the issue of legality, having a user jump to a new Web page lets
> them know that they have possibly changed sites. If we assume that the only
> way an included document will make sense when inserted into a display is
> that the author's header and footer graphics are absent (it would look
> pretty confusing if they were included), then it seems altogether unclear
> how a reader would ascertain authorship or the intellectual property status
> of the current page. I don't believe we can legislate behaviour via HTML,
> but that's where I see the legal problems arising. These issues currently
> exist in other easily-copied content.
I don't see a legal distinction between (on a document that sits on
organic.com)
<A rel="embed" src="http://hyperreal.com">
and
<IMG rel="embed" src="http://hyperreal.com/hyperreal.gif">
It would behoove the HTML browser to be able to note included text
somehow (make the mouse change shape when it passes over, have a
"highlight included text" mode, etc), but as long as the source can
be viewed there's no question where something came from.
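One possibility (purely a sketch on my part) is for the browser to mark
the boundaries when the user views the source, e.g.

    <!-- begin text included from http://hyperreal.com/ -->
    ...the embedded document...
    <!-- end text included from http://hyperreal.com/ -->

so the origin of any passage is visible even without a special
"highlight included text" mode.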
Brian
--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
[email protected] [email protected] http://www.[hyperreal,organic].com/