Multiple sockets (was Re: NetScape...)

Ramin Firoozye ([email protected])
Tue, 25 Oct 1994 15:06:59 -0700 (PDT)


Folks are complaining about NetScape opening multiple sockets at once.
From a UI perspective, you can't be faulted for trying to display the
most material in the fastest possible time.

The Miss Manners approach to network programming dictates, however, that
a single application not go about hogging resources (like sockets,
bandwidth, etc...) to accomplish this.

Analogy time: Folks programming under Windows or Mac realized very
quickly that hogging the event loop pisses off the users (and other
application vendors). Under Unix, the O/S acts as the dictator and parcels
out cycles so this becomes a non-issue. The lesson: unless there's a
higher force at work, shared policies have to be self-enforced.

In a TCP/IP based application, we're back to the cooperative model.
There is NO central authority passing out network "cycles" so it is
assumed that the various applications will behave in a "polite" manner
and not hog resources.

Allocating multiple sockets and pumping the max out of them is offensive
to some because it violates this spirit of cooperation. If a client is
simultaneously using NetScape and WhizBang (a distributed application),
and WhizBang uses a single socket to talk to its server, it isn't difficult
to understand WhizBang developers being pissed off at MCC for making their
application look sluggish. In their next release, they too will go about
soaking up resources and screw over PopSqueak, the cool whiteboard
application that is still foolish enough to open a single socket.
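
For the curious, here's roughly what the multi-socket trick boils down
to. This is a minimal sketch of my own, assuming a placeholder hostname,
port 80, and a count of 4; it is not anybody's actual code:

/* Sketch: open several TCP connections to one server at once.
   The hostname, port, and NUM_SOCKS are placeholders for illustration. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define NUM_SOCKS 4

int main(void)
{
    struct hostent *hp = gethostbyname("www.example.com"); /* placeholder */
    struct sockaddr_in sin;
    int fds[NUM_SOCKS];
    int i;

    if (hp == NULL)
        return 1;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(80);
    memcpy(&sin.sin_addr, hp->h_addr_list[0], hp->h_length);

    /* open all the connections up front; each one would then carry its
       own GET request (say, one per inline image) in parallel */
    for (i = 0; i < NUM_SOCKS; i++) {
        fds[i] = socket(AF_INET, SOCK_STREAM, 0);
        if (fds[i] < 0 ||
            connect(fds[i], (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            perror("connect");
            return 1;
        }
    }
    for (i = 0; i < NUM_SOCKS; i++)
        close(fds[i]);
    return 0;
}

Four connections means four slices of the server's and the net's
attention, which is exactly the point of contention.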

Under PCs and Macs, running multiple TCP-based applications is not
very efficient in any case (mainly due to the non-preemptiveness of the OS).
Under Unix, VMS, OS/2, and NT, though, the kernel does multitask, so it's
conceivable the user is running multiple TCP-based apps both in the foreground
and the background. However, there is no bandwidth allocation mechanism
amongst the various systems on the net. TCP-induced artificial delays
and routers are about the only things giving applications a fair chance.

Is there a solution? Basically, there are four ways to handle the situation.

1. First is to say everyone's on their own. Go wild. Hell, why stop at 4 sockets?
Go for the whole shebang. And while you're at it, create extra processes
on the server to cache remote stuff in the background. Take all the
free disk space as well.

2. The second way is for everyone to agree to certain polite rules of network
behavior. This is honor-based, however: everyone has to trust everyone else
to stick to the rules.

3. The third is to enforce rules unilaterally. Servers WILL NOT accept
multiple connections from the same process on the same host, etc. (a rough
sketch of this follows below the list). Even more Draconian would be to
enforce this rule in the kernel. Going even further, auto-throttling schemes
would be put into the various kernels to limit bandwidth hogging.

4. The last way is to let the users decide. Have them pick how many sockets
they think they need. This is not too fair to the other users, but hey,
who says life's fair (:-)
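
To put a little flesh on option 3: a server could keep a per-host tally
and simply turn away extra connections. A rough sketch of my own, where
the table size, the one-connection-per-host limit, and the port are
assumptions and not anything any server actually ships:

/* Option-3 sketch: refuse extra connections from the same host.
   MAX_HOSTS, PER_HOST_LIMIT, and the port are made-up values. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MAX_HOSTS       256
#define PER_HOST_LIMIT  1

static struct in_addr hosts[MAX_HOSTS];
static int counts[MAX_HOSTS];
static int overflow;                     /* shared counter if the table fills */

/* find (or create) the connection counter for a given peer address */
static int *slot(struct in_addr a)
{
    int i, spare = -1;
    for (i = 0; i < MAX_HOSTS; i++) {
        if (counts[i] > 0 && hosts[i].s_addr == a.s_addr)
            return &counts[i];
        if (counts[i] == 0 && spare < 0)
            spare = i;
    }
    if (spare < 0)
        return &overflow;
    hosts[spare] = a;
    return &counts[spare];
}

int main(void)
{
    struct sockaddr_in sin, peer;
    socklen_t len;
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(8080);
    bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
    listen(lfd, 5);

    for (;;) {
        int *c, cfd;

        len = sizeof(peer);
        cfd = accept(lfd, (struct sockaddr *)&peer, &len);
        if (cfd < 0)
            continue;
        c = slot(peer.sin_addr);
        if (*c >= PER_HOST_LIMIT) {
            close(cfd);                  /* already at the limit: turn it away */
            continue;
        }
        (*c)++;
        /* ... serve the request on cfd here ... */
        close(cfd);
        (*c)--;                          /* done: release this host's slot */
    }
}

(A real server would also have to track connections per process rather
than just per host, and clean up when clients vanish, but you get the
flavor.)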

Personally, I think that until there are kernels and protocol stacks that
enforce bandwidth and resource allocation, options 2 and 4 are the polite
way to go. My preference would be for NetScape to set the default
socket count to 1, but allow the user to bump it up to a civilized limit
(say 4) with a message box that pops up and says something like
"Choosing more than 1 socket may speed up your access but will slow down
other running network applications and other users accessing the server."
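
The client end of option 4 is barely any code at all. Another sketch,
assuming a hypothetical WWW_SOCKETS preference and my suggested cap of 4;
the names are inventions for illustration only:

/* Option-4 sketch: let the user pick the socket count, politely capped.
   WWW_SOCKETS and MAX_POLITE_SOCKETS are made up for this example. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_POLITE_SOCKETS 4

static int socket_count(void)
{
    char *pref = getenv("WWW_SOCKETS");   /* hypothetical user preference */
    int n = pref ? atoi(pref) : 1;        /* the default stays at 1 */

    if (n < 1)
        n = 1;
    if (n > MAX_POLITE_SOCKETS)
        n = MAX_POLITE_SOCKETS;
    if (n > 1)
        fprintf(stderr, "Choosing more than 1 socket may speed up your access "
                        "but will slow down other running network applications "
                        "and other users accessing the server.\n");
    return n;
}

int main(void)
{
    printf("using %d socket(s)\n", socket_count());
    return 0;
}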

So the choice is the user's. And if WhizBang slows to a crawl, it's their
own fault. They probably shouldn't be trying to get work done while running
Mosa-I mean NetScape anyway... Caveat usor (:-)

Cheers,
Ramin.

-- 
Ramin Firoozye' 
rp&A Inc. - San Francisco, California
Internet: [email protected] - CIS: 70751,252
--