
bug report - TCPSink/*, packetSize_, headers and recvBytes()



I ran into an anomaly while trying to get an end-application
receiver's view of a TCP transfer, for an accurate application-level
measure of transfer time and throughput.

It seems to affect all one-way TCP variants in ns up to and including
the current snapshot, but doesn't affect FullTcp, which does the Right
Thing.


The one-way TCPSinks treat packetSize_ (confusingly declared as size_
rather than packetSize_ in the C++, alas) as the segment size as
well: header overhead is never taken into account at any point
between receiving a packet and passing the received segment data from
the window up to recvBytes() for an application to take.

I was expecting numToDeliver to differ from n*(packetSize_) by
multiples of 40 or so, to account for the necessary TCP and IP header
overhead. You can see that it doesn't by adding to tcp-sink.{cc,h}:

void TcpSink::recvBytes(int bytes)
{
	// Would it be possible to have #define NOW available 
	// across all of ns globally? I'm really quite taken with it,
	// and surprised when it's not available without another
	// #include.
	double now = Scheduler::instance().clock();

	// show that chunks being passed up to application are
	// n*foo from set packetSize_ foo - it's now segment size!
	printf("%f\t%d\n",now,bytes);
}


do
set packetSize_ 1000
before initiating an FTP agent over a TCP connection, and you'll see
chunks of 1000 bytes being handed up to the application. So
packetSize_ has become size_, which has mutated into segment size as
far as the receiving sink and the application it hands data to are
concerned, and the values passed to recvBytes() via numBytes
and numToDeliver are somewhat misleading.

(If there are better ways of recording transfer rates at the receiver,
 I'm all ears.)

Since, as far as the tracefiles and ns are concerned, packetSize_
really *is* packet size (as nsDoc 25.1 states), numBytes and
numToDeliver in tcp-sink.cc are at least multiples of 40 too large
for recvBytes() (TCP options are probably moot since it's one-way).
Getting an accurate measure of the rate of data delivered to the
application via recvBytes() therefore means a postprocessing
adjustment for header overhead is needed to correct this for one-way
TCP - once you'd noticed it.

Ouch.
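
In the meantime the correction can at least be folded into the probe
itself. A rough sketch, assuming - as observed above - that
recvBytes() only ever sees exact multiples of the configured
packetSize_; the 1000-byte packet size and 40-byte base header
overhead are hard-coded assumptions that have to match the script:

void TcpSink::recvBytes(int bytes)
{
	double now = Scheduler::instance().clock();

	const int pktsize = 1000;	// must match packetSize_ in the script
	const int hdrsize = 40;		// base TCP/IP header overhead

	// strip 40 bytes for each packetSize_-sized chunk handed up, so
	// what gets printed is payload actually available to the
	// application rather than on-the-wire bytes
	int pkts = bytes / pktsize;
	printf("%f\t%d\n", now, bytes - pkts * hdrsize);
}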

I think you could use a variable similar to the bound
tcpip_base_hdr_size_ variable in tcp.h - which seems to be used to
adjust for headers on the way out in the one-way sender - to handle
this by subtraction; better might be to use hlen_ in hdr_tcp for
one-way TCPs as well.
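
Very roughly, and only as a sketch - hdrsize_ below is hypothetical,
an int member that would have to be added to TcpSink, given a default
in ns-default.tcl, and bound in the constructor so that it mirrors
the sender's tcpip_base_hdr_size_ (or is filled in from hlen_):

	// hypothetical: in the TcpSink constructor, alongside the
	// existing bind() calls
	bind("hdrsize_", &hdrsize_);

	// hypothetical: in TcpSink::recv(), once the received packet's
	// byte count is known as numBytes, strip the header overhead
	// before that count feeds the delivery accounting that ends in
	// recvBytes()
	numBytes -= hdrsize_;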

(btw, if you're setting packetSize_ to less than 40, things get
 extremely odd. An impossible 30-byte tcp packet will actually send 30
 bytes up to recvBytes() for the application - imo this needs
 guarding/warning against, and a simple tcpip_base_hdr_size_-style
 subtraction to give the correct value would go negative and take
 bytes _back_ from the receiving application...)
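
The guard could sit right next to that subtraction; a sketch, using
the same hypothetical hdrsize_ as above:

	// hypothetical: an ns packet smaller than its own TCP/IP headers
	// can't be carrying payload, so warn and deliver nothing rather
	// than handing a negative byte count up to the application
	if (numBytes < hdrsize_) {
		fprintf(stderr, "TcpSink: %d-byte packet is smaller than "
			"its %d bytes of TCP/IP headers; check packetSize_\n",
			numBytes, hdrsize_);
		numBytes = 0;
	}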

Adding a similar recvBytes() method to tcp-full.cc shows that FullTcp
is doing the Right Thing as far as header overhead is concerned -
multiples of segsize_ are passed up, and the tracefile records a value
40 bytes bigger, including the header overhead just as you'd expect.
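
The probe added to tcp-full.cc was essentially the same thing - a
sketch, assuming the usual FullTcpAgent class name from tcp-full.h:

void FullTcpAgent::recvBytes(int bytes)
{
	// print deliveries to the application, as for TcpSink above
	double now = Scheduler::instance().clock();
	printf("%f\t%d\n", now, bytes);
}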

Anyone comparing throughputs from FullTcp/reality and one-way TCPs
for the same applications, using the apparently-same recvBytes() of
an identical API, could be tripped up by this (and I was). 40
bytes/packet doesn't seem like much, but over long flows it adds up -
with 1000-byte packets, 40 of every 1000 bytes reported are really
header overhead - and if your application depends on chunks of data
arriving before it does any processing...

[How much performance overhead would turning the bytes received into a
 bound variable - to be read off at intervals - add?]
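
A sketch of what that might look like, with a hypothetical bytes_
counter added to TcpSink (it too would need a default in
ns-default.tcl) so a script can poll "$sink set bytes_" at whatever
interval it likes:

	// hypothetical: int bytes_ member added to TcpSink, zeroed and
	// bound in the constructor so Tcl can read the running total back
	bind("bytes_", &bytes_);

void TcpSink::recvBytes(int bytes)
{
	bytes_ += bytes;	// still includes header overhead, as above
}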

cheers,

L.

I'm blaming this all on calling it size_, you know.

<[email protected]>PGP<http://www.ee.surrey.ac.uk/Personal/L.Wood/>