
Getting around delayed ack problem

Hi,

I have an application that sends data via BSD TCP sockets every 50ms to some software on the PC.
The problem is that Windows uses delayed ACK - msdn.microsoft.com/.../aa505957.aspx
which means it waits up to 200ms before sending an ACK back to my device, to improve the efficiency of the network.
It seems the only way to turn it off is to change the registry, which I don't really want to do.
An effective rate of one frame per 200ms is really bad for this application. Does anyone know a way to get around this in TCPnet?

I read somewhere on the forum that the Keil HTTP server gets around this by sending out a second frame containing just a few bytes. With BSD sockets you can only send out one frame at a time, and nothing more will be sent until that frame has been ACKed.

Any ideas?

Thanks

  • As you mentioned already, the straightforward workaround is to split each packet into two. Since the sockets layer will not let you do that, it must be done at a lower level.
    By the way, I hope you realize that TCP is not really well suited to this kind of application.
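A rough illustration of the splitting idea at the BSD-sockets level. Most stacks that implement RFC 1122 delayed ACKs will still ACK immediately when a *second* segment arrives, so transmitting the payload as two segments sidesteps the 200ms timer. This is only a sketch: `send_split()` is a hypothetical helper (not a TCPnet API), and it assumes a stack where `TCP_NODELAY` stops the two `send()` calls being coalesced into one segment.

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Hypothetical helper: send a payload as two TCP segments so that a
   delayed-ACK receiver acknowledges at once instead of waiting 200 ms. */
ssize_t send_split(int sock, const char *buf, size_t len)
{
    int one = 1;

    /* Disable Nagle so the two sends go out as separate segments. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

    if (len < 2)
        return send(sock, buf, len, 0);

    /* First segment: all but the last byte. */
    if (send(sock, buf, len - 1, 0) < 0)
        return -1;

    /* Second, tiny segment: the final byte. Its arrival should trigger
       an immediate ACK under the ack-every-other-segment rule. */
    return send(sock, buf + len - 1, 1, 0);
}
```

Whether this actually defeats the delayed-ACK timer depends on the receiving stack honouring the every-other-segment rule, which most do.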

  • I have no direct answer or suggestion for what you're wanting but you say:

    "With BSD sockets you can only send out one frame at a time, and will not send anything else till that frame has been acked."

    As far as I am aware, this is not a limitation of the BSD socket layer but rather the way the lower levels of the protocol stack operate. I have not used BSD sockets with TCPnet myself; I have used raw TCP connections. With these, the sequence is certainly: send a packet, then wait for the ACK before attempting to send the next.

    To see how it all goes together, you can look at:

    www.keil.com/.../rlarm_tn_using_tcpsoc_example.htm

  • Hi,

    Yes I believe it is a feature of the stack, not BSD.

    I am starting to see that TCP is not ideal, but UDP has its drawbacks as well.

    When you say splitting packets into two must be done at a lower level, do you mean outside of TCPNet?
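On the UDP point above: since UDP has no ACKs, nothing on the receiver side can throttle a periodic sender, but the application must then tolerate (or at least detect) lost and reordered datagrams itself. A minimal sketch of what a 50ms telemetry frame could look like over UDP, with a sequence number so the receiver can spot gaps (all names here are illustrative, not part of TCPnet):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

/* Hypothetical helper: send one telemetry frame as a single datagram,
   prefixed with a 16-bit big-endian sequence number. Returns bytes
   sent, or -1 on error / oversized payload. */
ssize_t send_frame(int sock, const struct sockaddr_in *dst,
                   unsigned short seq, const char *data, size_t len)
{
    char pkt[2 + 256];

    if (len > 256)
        return -1;

    pkt[0] = (char)(seq >> 8);     /* sequence number lets the */
    pkt[1] = (char)(seq & 0xff);   /* receiver detect lost frames */
    memcpy(pkt + 2, data, len);

    return sendto(sock, pkt, 2 + len, 0,
                  (const struct sockaddr *)dst, sizeof *dst);
}
```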

  • "When you say splitting packets into two must be done at a lower level, do you mean outside of TCPNet?"

    I believe the splitting should be implemented inside TCPNet. uIP does it, for example.

  • I've just found this old thread after investigating a problem report from someone experiencing very slow performance between a Windows 7 PC and our product that uses the Keil TCP/IP stack and web server.

    I was provided with a Wireshark log from (a) the Windows 7 PC that exhibited the slow performance problem (access to the web server is extremely slow), and (b) another Windows 7 PC that accesses the web server just fine.

    In each case, the Wireshark log captures the transfer of a particular .cgi file that is quite large. It is transferred over many TCP packets of around 800 bytes at a time. So, the sequence of packets I see as this .cgi file transfers is:

    1. Web server to PC: 800(ish) byte packet with payload data.
    2. Web server to PC: A 60-byte packet with just a tiny bit of payload data, and PSH flag set.
    3. PC to web server: A 54-byte ACK packet.

    To transfer the whole file, steps 1 to 3 obviously repeat over and over.

    In the Wireshark log from the problem Windows 7 PC, there is a 200ms delay between steps 2 and 3 each time. In the log from the Windows 7 PC that doesn't exhibit the slow speed issue, the delay between all steps is just a few milliseconds.

    So, a few questions:

    1: From comments I've read above, I gather that step 2 (sending a second packet with a tiny bit of payload) is Keil's way of mitigating the delayed ACK. Is that indeed why step 2 exists? Exactly how does sending that second packet 'get around' the delayed ACK, and why does it not 'work' with this one particular PC?

    2: Any ideas why this has only been seen by us on one particular Windows 7 machine? Also, this machine was apparently fine (i.e. communication with our product's web server was fast) until a particular point recently. I can only assume that installation of some software perhaps changed a registry setting.

    The only workaround I have to try at the moment is a registry change I found via Google.
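For reference, the registry setting usually cited for this is the per-interface `TcpAckFrequency` value; setting it to 1 disables delayed ACK for that interface (a reboot is typically required). The interface GUID is system-specific, so this is only a sketch of the key layout, not a recommendation:

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface-GUID}
    TcpAckFrequency  (REG_DWORD) = 1
```

Note that this changes TCP behaviour machine-wide for that interface, which is exactly why the original poster was reluctant to rely on it.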

    I realise this isn't the place for Keil support, so I'll submit a help request to technical support. However, in the meantime I was just interested to hear any further thoughts on here.