> By this logic if my chance of failure is originally 0.001 and I add a spoofer
> with a chance of failure of 0.001, I should not be distressed that my chance
> of failure has roughly doubled (to 0.002)?
The problem is that it might increase by several orders of magnitude.
I would say that "failure" here is a false positive, i.e. data not
being delivered to a well-behaved receiver (one which does all the
read()s it is supposed to do) without the sender realizing it --
basically the sender even sees the last ack, but the receiving
endpoint crashes before passing all the data to the application.
If the receiving protocol endpoint and the application are on the
same physical hw, then P(failure) can easily be several orders of
magnitude below 1.
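To make the arithmetic concrete, here is a small sketch (the 0.001 figures are just the numbers from the quote above, not measurements): for independent links in series the end-to-end failure probability is 1 minus the product of the per-link success probabilities, which for small values roughly adds up -- so two equally good links about double the risk, while one fragile proxy-to-application hop dominates everything else.

```python
# Combined failure probability of independent links in series:
# P(fail) = 1 - product(1 - p_i). For small p_i this is ~ sum(p_i).

def series_failure(probs):
    ok = 1.0
    for p in probs:
        ok *= (1.0 - p)
    return 1.0 - ok

# Two equally reliable links: failure roughly doubles, as the quote says.
print(series_failure([0.001, 0.001]))   # ~0.002

# A fragile hop behind the proxy dominates the end-to-end failure:
print(series_failure([0.001, 0.05]))    # ~0.051
```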
 
When you split the receiving endpoint from the application (by
means of a proxy somewhere), depending on what you have in the
middle and how you implement the proxy, the P(failure) of the
connection between the proxy and the receiving application can
be much higher (or more vulnerable to attacks).
 
E.g. in the config below, if the proxy buffers everything and
even generates the last ack, the sender will be able to terminate,
and it is easy to see that the connection on path <2> might be
torn down with a significantly higher probability than the
P(failure) above. It is much better if the proxy retains the last
ack until it gets one from the remote end (this way you can
accommodate chains of proxies) -- modulo the timeout issues that
someone mentioned.
        [sender]----<1>-----[proxy]-----<2>-----[receiver]
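A minimal sketch of why retaining the last ack matters (the model and the p2 value are mine, purely illustrative): with a buffering proxy that acks early, a loss on path <2> becomes a silent failure -- the sender believes delivery succeeded. If the proxy holds the last ack until the receiver acks, the same loss is visible to the sender, which can then retry or report an error.

```python
import random

# Model one transfer over the [sender]--[proxy]--[receiver] config:
# path <2> loses the delivery with probability p2. A "silent failure"
# is the bad case: sender sees success but the receiver got nothing.

def transfer(retain_last_ack, p2, rng):
    delivered = rng.random() >= p2          # did path <2> succeed?
    if retain_last_ack:
        sender_sees_success = delivered     # ack propagated end-to-end
    else:
        sender_sees_success = True          # proxy acked early
    return sender_sees_success and not delivered

def silent_failure_rate(retain, p2, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(transfer(retain, p2, rng) for _ in range(trials)) / trials

print(silent_failure_rate(False, 0.05))  # ~0.05: losses go unnoticed
print(silent_failure_rate(True, 0.05))   # 0.0: every failure is visible
```

Note this only changes *who notices* the failure, not whether path <2> fails; and, as mentioned above, holding the ack reintroduces the timeout question when the downstream hop stalls.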
        cheers
        luigi
----------------------------------+-----------------------------------------
 Luigi RIZZO, [email protected]  . ACIRI/ICSI (on leave from Univ. di Pisa)
 http://www.iet.unipi.it/~luigi/  . 1947 Center St, Berkeley CA 94704
 Phone (510) 666 2927             .
----------------------------------+-----------------------------------------
This archive was generated by hypermail 2b29 : Wed Jan 10 2001 - 12:08:48 EST