Tim,
Some more results of wais/www/gopher collaboration.
I have a new WAIS server running at wais.cic.net, called
"midwest-weather". It's fed by loading in a bunch of weather reports
from a gopher at Minnesota every hour. That system gets them from the
"weather underground" at Michigan using some hairy expect scripts; I
figured it'd be easier to get things out of gopher instead.
The script looks like:
WEATHER=gopher://mermaid.micro.umn.edu:150/00/Weather
www -n -np ${WEATHER}/Indiana/Fort%20Wayne | sed -e 's/.$//' > fort-wayne.in
www -n -np ${WEATHER}/Indiana/Indianapolis | sed -e 's/.$//' > indianapolis.in
www -n -np ${WEATHER}/Indiana/South%20Bend | sed -e 's/.$//' > south-bend.in
[...]
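The hourly part would just be a cron entry plus a waisindex run over
whatever the script wrote out; something like this, with the path and
script name invented and the waisindex flags from memory:

# hourly crontab entry
0 * * * * /usr/local/wais/weather/get-weather

where "get-weather" is the fetch script above followed by a reindex:

waisindex -d midwest-weather -export -t text *.in

(-export puts the host and port into the .src file so other sites can
point at the database.)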
For some reason the gopher files are coming out of www with extra ^M's
on the end of each line, as if they were DOS files, so the sed thing
gets rid of them.
I don't see a way to do this with just one invocation of www, so
instead it runs once for each file.
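A shell loop would at least cut down on the retyping; something like
this (untested, just a sample of the city list, and with the .in suffix
hard-wired since these are all Indiana):

WEATHER=gopher://mermaid.micro.umn.edu:150/00/Weather
for city in "Indiana/Fort Wayne" "Indiana/Indianapolis" "Indiana/South Bend"
do
    # "Indiana/Fort Wayne" -> url path Indiana/Fort%20Wayne, file fort-wayne.in
    path=`echo "$city" | sed -e 's/ /%20/g'`
    out=`echo "$city" | sed -e 's:.*/::' -e 's/ /-/g' | tr A-Z a-z`.in
    www -n -np "${WEATHER}/${path}" | sed -e 's/.$//' > "$out"
done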
Neither gopher nor WWW has the notion of a "recursive directory
listing": either a complete overview of the structure of the system
or some skeleton outline. (I realize it's arbitrarily hard to do in
general, since any link could point off anywhere else.) That makes it tougher
to do an archie-style catalog. I think it wouldn't be that hard to
build a tree-walker for gopher that prints out a list of the
directories on every system that it can find and also the text of all
of the stuff that's in the ".about" directories. At the very least
I'm doing some of that by hand now (just a script like the one above)
& waising it so I have some clue what all is out there. It's *not* a
replacement for the per-site indexes, but a cross-section.
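A rough cut at the walker could even be plain shell talking the gopher
protocol directly: send the selector plus a CRLF, read back the
tab-separated menu lines, and recurse on the type-1 (directory) entries.
A sketch only, assuming a netcat-style "nc" for the TCP part, with a
crude depth cap standing in for real loop detection:

# walk host port selector depth
walk () {
    [ "$4" -gt 3 ] && return
    printf '%s\r\n' "$3" | nc "$1" "$2" | tr -d '\015' |
    while IFS="`printf '\t'`" read -r name sel host port junk
    do
        case "$name" in
        0*) echo "FILE $host $sel" ;;              # text item
        1*) echo "DIR  $host $sel"                 # submenu: recurse into it
            walk "$host" "$port" "$sel" `expr $4 + 1` ;;
        esac
    done
}
walk mermaid.micro.umn.edu 150 "" 0

Pulling the text out of the ".about" directories would just be one more
fetch hung off this, keyed on the directory name.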
--Ed