TCP Send failing from server application.

Hello all,

I'm having some issues getting my server application to work with TCPnet. I've gotten a few of the examples working with TCP and BSD sockets, but I must be missing something moving forward.

Here is a basic rundown of what I'm trying to accomplish right now:
The TCP server resides on our embedded device (the CPU is an LPC1788) and waits for a client connection. When the client (a Windows Forms application on a PC) connects to the server, the server begins periodically sending some measurement data and responding to commands from the client application. Presently, I'm trying to get a proof of concept working where my server simply echoes back the command I send it while periodically (about every 100 ms) sending a simple two-byte message.
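
For reference, the actual client is the Windows Forms application, but a minimal Winsock console sketch of the exchange I'm after looks roughly like this (the IP address, port, and command bytes are placeholders, not my real client code):

#include <winsock2.h>    /* link against ws2_32.lib */
#include <string.h>
#include <stdio.h>

int main (void)
{
  WSADATA wsa;
  SOCKET  s;
  struct sockaddr_in srv;
  char    cmd[2] = { 0x01, 0x02 };                   /* hypothetical command packet */
  char    rx[64];
  int     n;

  WSAStartup (MAKEWORD(2,2), &wsa);
  s = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);

  memset (&srv, 0, sizeof (srv));
  srv.sin_family      = AF_INET;
  srv.sin_port        = htons (1000);                /* placeholder for SMC_PORT    */
  srv.sin_addr.s_addr = inet_addr ("192.168.0.100"); /* placeholder device address  */
  connect (s, (struct sockaddr *)&srv, sizeof (srv));

  send (s, cmd, sizeof (cmd), 0);                    /* send one command            */

  /* expect the echo plus the periodic 0xAA 0xCC "streaming" messages */
  while ((n = recv (s, rx, sizeof (rx), 0)) > 0) {
    printf ("received %d byte(s)\n", n);
  }

  closesocket (s);
  WSACleanup ();
  return 0;
}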

My problem is this:
My client attempts to connect and the server accepts the connection.
Upon establishing a connection, I send a command packet from the client.
The server echoes the client command; however, whenever the server attempts to send the "streaming" two-byte message, the call

send(socket_bsd_con, sbuf, 2, 0);

always returns SCK_EWOULDBLOCK. I've tried using TCP sockets, BSD sockets, and a number of scenarios (streaming without echoing, echoing without streaming, playing with the timing), and it always seems as though the server-side socket can only send a packet if it has just received one.

Also, I'm attempting to use non-blocking sockets, and I am not presently using RTX or any other RTOS; rather, I'm using a simpler SysTick-interrupt-based scheduler.
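
In case it matters, the overall loop is structured roughly like this (a simplified sketch, not my full scheduler: BsdServerInit is a placeholder name for the socket/bind/listen code shown below, the SysTick handler that increments ms_tick isn't shown, and init_TcpNet/main_TcpNet/timer_tick are the standard RL-TCPnet housekeeping calls for running the stack without an RTOS, with timer_tick on the default 100 ms interval):

#include <RTL.h>                       /* RL-TCPnet / BSD socket API          */
#include <stdint.h>

extern volatile uint32_t ms_tick;      /* incremented in the SysTick handler  */

extern void BsdServerInit (void);      /* placeholder: wraps the listen setup */
extern void TCP_BsdServerTask (void);  /* the server task shown further down  */

int main (void)
{
  uint32_t last_ms  = 0;
  uint32_t ms_count = 0;

  init_TcpNet ();                      /* bring up the stack                  */
  BsdServerInit ();                    /* socket/bind/listen code shown below */

  while (1) {
    main_TcpNet ();                    /* poll the stack as often as possible */

    if (ms_tick != last_ms) {          /* once per millisecond                */
      last_ms = ms_tick;
      TCP_BsdServerTask ();            /* my server task                      */

      if (++ms_count >= 100) {         /* TCPnet timer tick every 100 ms      */
        ms_count = 0;
        timer_tick ();
      }
    }
  }
}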

Here are snippets of the code I'm currently trying to get working:

This is where I initialize the server socket for handling connections:

//initialize the BSD socket and start listening
socket_bsd_server = socket (AF_INET, SOCK_STREAM, 0);
sck_mode = 1;                                        //non-zero selects non-blocking mode
ioctlres = ioctlsocket (socket_bsd_server, FIONBIO, &sck_mode); //make the socket non-blocking
addr.sin_port        = htons (SMC_PORT);
addr.sin_family      = AF_INET;                      //address family for sin_family
addr.sin_addr.s_addr = INADDR_ANY;
bres = bind (socket_bsd_server, (SOCKADDR *)&addr, sizeof (addr)); //bind the socket to the address
lres = listen (socket_bsd_server, 1);                //listen for connections with a backlog of 1
MsTimerReset (5);                                    //reset one of my "utility" timers

Then, after initializing, I call this code on each 1 ms tick of the scheduler:

void TCP_BsdServerTask (void)
{

  char dbuf[8]; //command buffer
  char sbuf[8]; //the "streaming" data buffer

  if(listen_flag)
  {

     socket_bsd_con = accept (socket_bsd_server, NULL, NULL);


     if(socket_bsd_con>0) //if we got a valid socket handle
     {
       closesocket (socket_bsd_server); //close the listening socket to free up resources
       listen_flag=FALSE; //we accepted a connection, no need to keep listening
     }
     if(socket_bsd_con ==SCK_EWOULDBLOCK)
     {
        //accept doesn't have a connection in queue, we have to try back later
        return;
     }

   }//if listen flag

   res = recv (socket_bsd_con, dbuf, sizeof (dbuf), 0); //try to grab data

   //handle cmd data
   if(res>0) //if there is data available and we're connected
   {
      listen_flag=FALSE;

      if (socket_bsd_con > 0) //if a valid socket handle exists to send with
      {
        send(socket_bsd_con, dbuf, res, 0); //echo back only the bytes actually received
      }
    }

    if(MsTimerGet(5)>100)//if 100 ms has elapsed
    {
       MsTimerReset(5); //restart the 100 ms interval timer

        if (socket_bsd_con > 0) //if a valid socket handle exists to send with
        {
             sbuf[0]=0xAA;
             sbuf[1]=0xCC;
             success=send(socket_bsd_con, sbuf, 2, 0); //<--Problem Here
            /* ^^^^ ALWAYS RETURNS SCK_EWOULDBLOCK ^^^^ */
        }
    }//if Ms timer
}//server task
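
What I expected to be able to do is simply treat SCK_EWOULDBLOCK as "try again on a later tick" rather than as an error, something like the sketch below (streaming branch only, reusing socket_bsd_con and the MsTimer helpers from above; stream_pending is a flag introduced just for this sketch). In practice the send never succeeds, which is the problem:

static uint8_t stream_pending = 0;     /* set when a 100 ms message is due    */

void StreamTick (void)
{
  char sbuf[2];
  int  res;

  if (MsTimerGet (5) > 100) {          /* 100 ms elapsed -> queue a message   */
    MsTimerReset (5);
    stream_pending = 1;
  }

  if (stream_pending && (socket_bsd_con > 0)) {
    sbuf[0] = 0xAA;
    sbuf[1] = 0xCC;
    res = send (socket_bsd_con, sbuf, 2, 0);
    if (res > 0) {
      stream_pending = 0;              /* sent; wait for the next interval    */
    }
    /* on SCK_EWOULDBLOCK (or any other negative code) leave stream_pending
       set and retry on the next 1 ms tick                                    */
  }
}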


Sorry if it's a bit messy; I've been trying a bunch of different things to get this to work. Either I'm making some fundamental mistake in how I'm using these sockets, or perhaps there's an issue with the way memory is being allocated for sending data. I can't seem to figure out what I'm doing wrong.

Thanks in advance to anyone that might be able to help me accomplish my goal here.


  • After spending some time working over the past weekend and earlier this week I narrowed
    my issue down to being related to the TCP window size. The receive buffer/window size
    on the client was initially selected to be small (1024 bytes), which doesn't seem to work
    with the TCPnet stack (though it seems to work with some other devices that are possibly
    running some flavor of embedded Linux).

    I've done some experimenting and it looks like I get problems if the window sizes
    are set fairly low (less than several times the MSS). I get why it's a 'best practice'
    in many cases that the buffer/window size should be several times larger than the MSS and
    about the same size as the application buffer. However, practically speaking I don't think
    I should be touching those limits sending a stream of 8 byte messages at a rate that's
    in the 100-200 bytes per second range. The Wireshark trace shows the ACKs making it back
    with negligible delay and there should be much much less than 1KB in either side's buffer
    at a given time, yet TCPnet seems to hang up under these circumstances as if the send buffer
    is getting full. Maybe there's something else at play that I'm not presently aware of.

    As a side note, I seem to have the opposite issue with LwIP: So far it seems like it
    only works if the window size on the client is set very low. I haven't really spent enough
    time tweaking all the buffer and memory allocation settings in LwIP to narrow down an
    issue there.

    At any rate, using a buffer size of 8760 (6 * the 1460-byte MSS) appears to resolve the
    problem at hand for now, though I'd still like to get down to the nuts and bolts of why
    I was having issues at lower window sizes for my own peace of mind as I move forward.
    A sketch of how the client-side receive buffer can be sized is below.
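
    For reference, the sketch below is the knob I mean. In the actual WinForms client it is
    the Socket.ReceiveBufferSize property; the plain-Winsock equivalent is setsockopt with
    SO_RCVBUF, applied before connecting, which bounds the receive window the client will
    advertise (the value here is just the one from my test):

    /* Sketch: sizing the client-side receive buffer (and hence the advertised
       TCP receive window) before connecting.  WSAStartup and the sockaddr_in
       setup are omitted for brevity.                                         */
    #include <winsock2.h>

    SOCKET open_client_socket (void)
    {
      SOCKET s   = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
      int    len = 8760;               /* 6 * 1460-byte MSS, per the test above */

      /* set the receive buffer before connect so the initial advertised
         window reflects it */
      setsockopt (s, SOL_SOCKET, SO_RCVBUF, (const char *)&len, sizeof (len));

      /* ...then connect as usual */
      return s;
    }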
