I certainly hope we are going to do something about it. As the web gets larger
this problem is only going to get worse.
I currently run a spider about once a month on my server, but that doesn't
help when other servers are pointing at bits of my server that no longer exist
(or never existed in the first place).
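For what it's worth, the sort of check the spider does is roughly the
following (only a rough sketch in Python, not the script I actually run; the
start URL, the single-host restriction and the way broken links are collected
are all just illustrative):

    import urllib.request, urllib.error
    from urllib.parse import urljoin, urlparse
    from html.parser import HTMLParser

    class LinkParser(HTMLParser):
        # collect the href of every <a> tag on a page
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                for name, value in attrs:
                    if name == 'href' and value:
                        self.links.append(value)

    def check_site(start_url):
        host = urlparse(start_url).netloc
        seen, queue, broken = set(), [start_url], []
        while queue:
            url = queue.pop()
            if url in seen:
                continue
            seen.add(url)
            try:
                with urllib.request.urlopen(url) as resp:
                    page = resp.read().decode('latin-1', 'replace')
            except urllib.error.HTTPError as e:
                broken.append((url, e.code))   # e.g. 404: flag for the report
                continue
            except urllib.error.URLError:
                broken.append((url, None))     # server unreachable
                continue
            parser = LinkParser()
            parser.feed(page)
            for href in parser.links:
                target = urljoin(url, href)
                # only follow links that stay on our own server
                if urlparse(target).netloc == host:
                    queue.append(target)
        return broken

Of course this only finds broken links *on* my pages, which is exactly the
limitation I'm complaining about above.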
> Some random thoughts about this:
>
> Idea 2: rather than the client implementing this, the server can do so
> instead; when finding a failed URL it can initiate the BROKEN method
> to the server found in the Referer (pity so many Referers lie). This
> also reduces the repeats if a server remembers it has flagged a
> particular error situation.
>
> My favourite so far is number 2.
>
This is my favourite too. Additionally, a REDIRECTED method would be useful
(this could even go so far as to perform an automatic correction :). Having a
separate broken-links log would be really handy; it would save working through
the error file looking for all the "connection aborted" messages.
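To make that concrete, here is roughly what I picture happening on the server
that answers the failed request (just a sketch in Python; the BROKEN method,
the Broken-Link header and the broken_links.log file name are all invented
here for the sake of example, since nothing of the kind exists yet):

    import http.client
    from urllib.parse import urlparse

    NOTIFIED = set()   # remember what has already been flagged, to cut down repeats

    def report_broken_link(missing_url, referer):
        # Called by the server just after it has answered a request for
        # missing_url with a 404; referer is taken from the Referer header.
        if not referer or (referer, missing_url) in NOTIFIED:
            return
        NOTIFIED.add((referer, missing_url))

        # keep a separate broken-links log, distinct from the main error log
        with open("broken_links.log", "a") as log:
            log.write("%s -> %s\n" % (referer, missing_url))

        # tell the referring server about its stale link
        # (hypothetical BROKEN method, as in the idea quoted above)
        ref = urlparse(referer)
        try:
            conn = http.client.HTTPConnection(ref.netloc, timeout=10)
            conn.request("BROKEN", ref.path or "/",
                         headers={"Broken-Link": missing_url})
            conn.getresponse()
            conn.close()
        except (OSError, http.client.HTTPException):
            pass   # the other server may be down or may not understand BROKEN

Remembering which (referer, URL) pairs have already been reported is what
keeps the repeats down, as the quoted idea suggests.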
The main downside to this mechanism is that it won't cope with a link that
points to a server which doesn't exist: if no server ever receives the failed
request, there is nothing on the far end to notice the failure and report it.
I believe that in the long term we *have* to have some mechanism for finding
and correcting broken links.
Derek
---
Derek Harding                        [email protected]
Product and Software Developer       +44 1223 250422
PIPEX (The Public IP Exchange Ltd)
http://www.pipex.net/people/derek/