TCP Send failing from server application.

Hello all,

I'm having some issues getting my server application to work with TCPnet. I've gotten a few of the examples working with TCP and BSD sockets, but I must be missing something moving forward.

Here is a basic rundown of what I'm trying to accomplish right now:
The TCP server resides on our embedded device (the CPU is an LPC1788) and waits for a client connection. When the client (a Windows Forms application on a PC) connects to the server, the server begins periodically sending some measurement data and responding to commands from the client application. Presently, I'm trying to get a proof of concept working where my server simply echoes back the command I send it, while periodically (about every 100 ms) sending a simple two-byte message.

My problem is this:
My client attempts to connect and the server accepts the connection.
Upon establishing a connection, I send a command packet from the client.
The server echoes the client command; however, whenever the server attempts to send the "streaming" two-byte message, the

send(socket_bsd_con, (char *)&sbuf, 2,0);

command always returns

SCK_EWOULDBLOCK

. I've tried using TCP sockets, BSD sockets, and a number of scenarios (streaming without echoing, echoing without streaming, playing with the timing), and it always seems as though the server-side socket can only send a packet if it has just received one.

Also, I'm attempting to use non-blocking sockets, and I am not presently using RTX or an RTOS. Rather, I'm using a simpler SysTick-interrupt-based scheduler.
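
For context, the stack and this server task are driven from the scheduler roughly as sketched below. This is a trimmed-down illustration rather than my exact code: it assumes the classic RL-TCPnet polling API (init_TcpNet(), main_TcpNet(), timer_tick()) and a CMSIS device header, and the tick-counter plumbing is just a placeholder for my scheduler.

#include <stdint.h>
#include <LPC177x_8x.h>     //CMSIS device header for the LPC1788 (SysTick, SystemCoreClock)
#include <RTL.h>
#include <Net_Config.h>     //RL-TCPnet: init_TcpNet(), main_TcpNet(), timer_tick()

static volatile uint32_t ms_ticks;          //1 ms time base maintained by SysTick

void SysTick_Handler (void)
{
  ms_ticks++;
}

int main (void)
{
  uint32_t last_tick = 0;

  init_TcpNet ();                           //initialize the TCPnet stack
  SysTick_Config (SystemCoreClock / 1000);  //1 ms SysTick

  //...create, bind and listen on socket_bsd_server here (see snippet below)...

  while (1)
  {
    main_TcpNet ();                         //poll the stack as often as possible

    if (ms_ticks != last_tick)              //once per millisecond
    {
      last_tick = ms_ticks;
      if ((last_tick % 100) == 0)
      {
        timer_tick ();                      //TCPnet timer tick (100 ms by default)
      }
      TCP_BsdServerTask ();                 //the server task shown further below
    }
  }
}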

Here are snippets of the code I'm currently trying to get working:

This is where I initialize the server socket for handling connections:

//initialize the BSD socket and start listening
socket_bsd_server = socket (AF_INET, SOCK_STREAM, 0);
sck_mode=1;//indicates non-blocking mode for the socket
ioctlres = ioctlsocket(socket_bsd_server,FIONBIO,&sck_mode); //pass control parameters
addr.sin_port        = htons(SMC_PORT);
addr.sin_family      = PF_INET;
addr.sin_addr.s_addr = INADDR_ANY;
bres=bind (socket_bsd_server, (SOCKADDR *)&addr, sizeof(addr)); //bind the socket to the addr
lres=listen (socket_bsd_server, 1); //listen for connection requests with a backlog of 1
MsTimerReset(5);//reset one of my "utility" timers

And then, after initializing, I call this code on each 1 ms tick of the scheduler:

void TCP_BsdServerTask (void)
{
  char dbuf[8]; //command buffer
  char sbuf[8]; //the "streaming" data buffer

  if(listen_flag)
  {
     socket_bsd_con = accept (socket_bsd_server, NULL, NULL);

     if(socket_bsd_con>0) //if we got a valid socket handle
     {
       closesocket (socket_bsd_server); //close the listening socket to free up resources
       listen_flag=FALSE; //we accepted a connection, don't need to listen anymore
     }
     if(socket_bsd_con==SCK_EWOULDBLOCK)
     {
        //accept doesn't have a connection in queue, we have to try back later
        return;
     }
   }//if listen flag

   res = recv (socket_bsd_con, dbuf, sizeof (dbuf), 0); //try to grab data

   //handle cmd data
   if(res>0) //if there is data available and we're connected
   {
      listen_flag=FALSE;

      if (socket_bsd_con > 0) //if a valid socket handle exists to send with
      {
        send(socket_bsd_con, dbuf, res, 0); //echo back only the bytes actually received
      }
   }

   if(MsTimerGet(5)>100) //if 100 ms has elapsed
   {
      MsTimerReset(5);

      if (socket_bsd_con > 0) //if a valid socket handle exists to send with
      {
           sbuf[0]=0xAA;
           sbuf[1]=0xCC;
           success=send(socket_bsd_con, sbuf, 2, 0); //<--Problem Here
           /* ^^^^ ALWAYS RETURNS SCK_EWOULDBLOCK ^^^^ */
      }
   }//if Ms timer
}//server task
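
One thing the snippet above doesn't handle yet is the client disconnecting: if recv() reports that the connection is gone, I intend to close the data socket and go back to listening. Roughly like the following (the exact return codes to test for besides SCK_EWOULDBLOCK are an assumption on my part, and I'm assuming addr and sck_mode from the init snippet are still in scope):

//inside TCP_BsdServerTask(), right after the recv() call:
if (res == 0 || (res < 0 && res != SCK_EWOULDBLOCK))
{
   //the peer closed the connection or an unrecoverable error occurred
   closesocket (socket_bsd_con);
   socket_bsd_con = 0;

   //re-create the listening socket so the next client can connect
   socket_bsd_server = socket (AF_INET, SOCK_STREAM, 0);
   ioctlsocket (socket_bsd_server, FIONBIO, &sck_mode); //non-blocking again
   bind (socket_bsd_server, (SOCKADDR *)&addr, sizeof(addr));
   listen (socket_bsd_server, 1);
   listen_flag = TRUE;
   return;
}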


Sorry if it's a bit messy; I've been trying a bunch of different things to get this to work. Either I'm making some fundamental mistake in how I'm using these sockets, or perhaps I have some issue with the way memory is being allocated for sending data. I can't seem to figure out what I'm doing wrong.

Thanks in advance to anyone that might be able to help me accomplish my goal here.


  • Avoid writing normal text within the "pre" tags. As you can see, you get issues with line length
    breaking the forum formatting. After the formatting has been broken, everyone must manually add
    line breaks in their posts to make them readable.

    I don't work with the Keil stack, so can't really help with specifics.

    But how have you configured the stack? How much memory does it have, so that it has room to
    accept multiple outgoing messages even if it is waiting for an acknowledgement of an earlier
    transmission? A block doesn't get freed for reuse just because it has been sent - the TCP/IP
    stack must support retransmissions, so the outgoing data must be kept until an acknowledgement
    has been received.

    Is there any configuration option for the handling of acknowledgements? A number of TCP/IP
    stacks have issues with acknowledgements - nothing technically wrong, but performance can take
    a big hit depending on the configured behaviour of the stack on the other side.

  • Hi Per,

    Thanks for your reply and apologies for my formatting error.

    Regarding your questions:

    1) The memory pool size, from which TCPnet allocates packet buffers, is 8192 bytes. I would
    think it would take much longer than a few packets to start seeing problems if the memory pool
    size were an issue, but I will investigate this.

    2) TCPnet does offer the option to turn off delayed ACKs (which is related to, but not the
    same as, Nagle's algorithm). I've done some reading on other forum posts stating that Windows
    machines use delayed ACKs (the client software resides on a Windows machine) and that this can
    cause throughput problems. However, even if I add code to "eliminate the delayed
    acknowledge impact" by calling something similar to the code below, it doesn't change
    the apparent behavior (debugging verifies that ioctlsocket() is returning success on
    each call).

    ...
    socket_bsd_con = accept (socket_bsd_server, NULL, NULL);
    
    if(socket_bsd_con>0) //if we got a valid socket handle
    {
     ioctlres = ioctlsocket(socket_bsd_con,FIONBIO,&sck_mode); //set non-blocking
     ioctlres = ioctlsocket(socket_bsd_con,FIO_DELAY_ACK,&sck_mode); //delayed-ACK option (distinct from Nagle)
     closesocket (socket_bsd_server); //close the server socket to free up resources
     listen_flag=FALSE; //we accepted a connection, don't need to listen anymore
    }
    ...
    
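    As an aside, my current understanding is that delayed ACK and Nagle are two different knobs:
    delayed ACK is a receiver-side behaviour (on Windows it's controlled per interface by the
    TcpAckFrequency registry value), while Nagle is a sender-side behaviour that the client can
    switch off itself with the standard TCP_NODELAY option. Purely for illustration, disabling
    Nagle on the Winsock client would look something like this (plain Winsock, nothing
    TCPnet-specific; client_sock is a made-up name for the already-connected socket):

    #include <winsock2.h>
    #include <ws2tcpip.h>

    void disable_nagle (SOCKET client_sock)
    {
      int no_delay = 1;   //non-zero disables Nagle's algorithm on this socket
      setsockopt (client_sock, IPPROTO_TCP, TCP_NODELAY,
                  (const char *)&no_delay, sizeof(no_delay));
    }
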

    I'm looking at the communications in Wireshark and I am seeing the following
    (where host refers to the uC running TCPnet)

    SRC------DEST------INFO
    client----host-----[SYN] seq=0, win=1024, len=0, mss=1460, ws=1, sack_perm=1
    host-----client----[SYN,ACK] seq=0, ack=1, win=4380, len=0, mss=1460
    client----host-----[ACK] seq=1, ack=1, win=1024, len=0
    host-----client----[PSH, ACK] seq=1, ack=1, win=4380, len=2 //the only 2 "streaming" bytes
    client----host-----[PSH, ACK] seq=1, ack=3, win=4380, len=8 //8 byte cmd + ack of above
    host-----client----[PSH, ACK] seq=3, ack=9, win=1014, len=8 //cmd echo + ack of above
    client----host-----[ACK] seq=9, ack=11, win=1014, len=0 //ack the echo

    ...a long time passes where the host should be sending data but isn't

    host-----client----[FIN,ACK] seq=11, ack=9, win=4380, len=0
    client----host-----[ACK] seq=9, ack=12, win=1014, len=0

    Then the connection dies.

    It looks to me like the ACKs are all showing up (in less than 100 ms), unless I'm reading
    the trace incorrectly, so I'm at a loss for why the send function should refuse to keep
    sending data every 100 ms. Instead the stack refuses to send, and the send function
    comes back with SCK_EWOULDBLOCK.
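
    Working through the sequence numbers above (assuming I'm reading the trace correctly): after
    the client's last ACK, the host's next sequence number is 11 and the highest ACK it has
    received is also 11, so 11 - 11 = 0 bytes are unacknowledged and the host's send buffer should
    be empty. The client's last advertised window is 1014 bytes, so a 2-byte segment should have
    plenty of room.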

  • The connection dies because the host (your embedded device) sends out a FIN flag, which says that it wants to end its side of the communication. So the host can't send any more data after that - all it can do is receive data while waiting for the other side to also send a FIN.

    So the embedded side must be dying from a timeout.

  • OK, but the application running on the host is trying to send data every 100 ms
    during the entire 100 s period before the packet with the FIN flag set is sent from
    the host. So why do all of those attempts fail with SCK_EWOULDBLOCK?

    That's what I don't get. Why isn't the server socket able to send data during that period?
    It would appear it has gotten all the ACKs it is waiting for, so why does it just sit
    for so long failing to send?

    We purchased MDK Pro less than a month ago and are well within the support period.
    Is there an applications engineer or someone at Keil I can contact who can help
    me with their stack specifically?

  • Look at the 'Contact' link at bottom-left of the page...

  • I'm becoming increasingly convinced that this is an issue with TCPnet's implementation
    of TCP/BSD sockets.

    Right now my experiment is simply that the client connects, the server accepts, and then
    the server should begin sending the 8-byte sequence to the client every 100 ms until the
    client closes the connection. Instead what happens is that the server sends the heartbeat
    packet on the first two iterations, and then subsequently fails to send, returning
    SCK_EWOULDBLOCK forever.

    What's really weird is: I DO get the heartbeat from the server whenever I send data to the
    server from the client. For instance, if I send a command from the client, the server
    will respond with a heartbeat packet - even if I'm NEVER calling recv() anywhere for
    that socket.

    I've tried so many combinations to get this to work:
    Non-blocking sockets in a scheduler, blocking sockets in a task using RTX
    (based on the BSD_server demo for the EA LPC1788-32), BSD sockets, raw TCP sockets,
    changing numerous parameters using ioctlsocket(). I get the same result in all cases.

    I've demonstrated the desired server-side functionality using simple BSD-style sockets
    on Windows, and I'm confident I can do the same on a Linux/Unix platform.
    I've also found other BSD socket tutorials on the web demonstrating this
    data-published-from-the-server functionality. My client application is fairly well
    tested and has been demonstrated to communicate correctly with other third-party
    TCP/IP-capable devices. I don't have reason to think there is an issue with the
    communications engine on the client side.
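
    For reference, the desktop version that does work boils down to something like the sketch
    below (standard POSIX/BSD sockets, trimmed way down with error handling omitted; the port
    number is just an example):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main (void)
    {
      unsigned char heartbeat[2] = { 0xAA, 0xCC };
      struct sockaddr_in addr;

      int srv = socket (AF_INET, SOCK_STREAM, 0);

      memset (&addr, 0, sizeof(addr));
      addr.sin_family      = AF_INET;
      addr.sin_port        = htons (1000);          //example port
      addr.sin_addr.s_addr = INADDR_ANY;

      bind (srv, (struct sockaddr *)&addr, sizeof(addr));
      listen (srv, 1);

      int con = accept (srv, NULL, NULL);           //blocks until a client connects

      //publish the heartbeat every 100 ms until the client goes away
      while (send (con, heartbeat, sizeof(heartbeat), 0) > 0)
      {
        usleep (100 * 1000);
      }

      close (con);
      close (srv);
      return 0;
    }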

    On the positive side, it only took me about two hours to get a working
    tcp-server-publishing-data demo going using LwIP.
    We'll see what Keil support has to say when they get back to me; maybe I'm just
    really misunderstanding aspects of their API. In the meantime, I'll continue to
    take the Pepsi challenge with some of the open-source middleware available through
    the NXP community.

    If I learn anything on the Keil front, I'll post back for posterity.

    Thanks for your thoughtful replies. I'm a big fan of the community so far.

  • After spending some time working over the past weekend and earlier this week, I narrowed
    my issue down to being related to the TCP window size. The receive buffer/window size
    on the client was initially selected to be small (1024 bytes), which doesn't seem to work
    with the TCPnet stack (though it does seem to work with some other devices that are possibly
    running some flavor of embedded Linux).

    I've done some experimenting, and it looks like I get problems if the window sizes
    are set fairly low (less than several times the MSS). I get why it's a 'best practice'
    in many cases that the buffer/window size should be several times larger than the MSS and
    about the same size as the application buffer. However, practically speaking, I don't think
    I should be touching those limits sending a stream of 8-byte messages at a rate that's
    in the 100-200 bytes per second range. The Wireshark trace shows the ACKs making it back
    with negligible delay, and there should be much, much less than 1 KB in either side's buffer
    at a given time, yet TCPnet seems to hang up under these circumstances as if the send buffer
    is getting full. Maybe there's something else at play that I'm not presently aware of.
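
    One hypothesis I'm entertaining (purely an assumption on my part - I haven't seen the TCPnet
    sources) is sender-side silly-window avoidance: if the stack only transmits when it can send
    a full MSS-sized segment, or when the advertised window has opened up to some fraction of its
    maximum, then a 1024-byte window against a 1460-byte MSS would keep failing the first test
    and small sends could get deferred. A rough illustration of that style of rule (this is NOT
    TCPnet code, just the textbook RFC 1122 formulation):

    #include <stdint.h>

    //illustration only: one common sender-side SWS-avoidance test (RFC 1122, 4.2.3.4)
    int sender_may_transmit (uint32_t usable_win,     //peer window minus bytes in flight
                             uint32_t max_peer_win,   //largest window the peer has offered
                             uint32_t queued,         //bytes waiting in the send buffer
                             uint32_t mss,
                             int      nothing_in_flight)
    {
      if (queued >= mss && usable_win >= mss)          return 1; //a full segment can be sent
      if (nothing_in_flight && queued <= usable_win)   return 1; //all data fits, nothing unacked
      if (usable_win >= max_peer_win / 2)              return 1; //window at least half open
      return 0;
    }

    //With mss = 1460 and a peer window of only 1024, the first test can never pass, so a stack
    //that is stricter about (or skips) the other two tests would sit on small sends.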

    As a side note, I seem to have the opposite issue with LwIP: So far it seems like it
    only works if the window size on the client is set very low. I haven't really spent enough
    time tweaking all the buffer and memory allocation settings in LwIP to narrow down an
    issue there.

    At any rate, using a buffer size of 8760 bytes (6 × the 1460-byte MSS) appears to resolve
    the problem at hand for now. Though I'd still like to get down to the nuts and bolts of why
    I was having issues at lower window sizes, for my own peace of mind as I move forward.