Hi,
I have established five TCP socket connections using the Network and RTOS libraries.
uint32_t tcp_cb_func (int32_t socket, netTCP_Event event,
                      const NET_ADDR *addr, const uint8_t *buf, uint32_t len) {
  switch (event) {
    case netTCP_EventConnect:     break;
    case netTCP_EventEstablished: break;
    case netTCP_EventClosed:      break;
    case netTCP_EventAborted:     break;
    case netTCP_EventACK:         break;
    case netTCP_EventData:        break;
  }
  // 1 = incoming connection accepted (only evaluated for netTCP_EventConnect)
  return 1;
}

void Send_Message_To_TCP (int S) {
  uint8_t *Mem;
  if (Socket[S].SCK >= 0) {
    if (netTCP_SendReady (Socket[S].SCK)) {
      Mem = netTCP_GetBuffer (TCP_Size);
      if (Mem == NULL) {
        return;
      }
      Socket[S].sendbuf = Mem;
      memcpy (Socket[S].sendbuf, TCP_MSG, TCP_Size);
      netTCP_Send (Socket[S].SCK, Socket[S].sendbuf, TCP_Size);
      osDelay (5);
    }
  }
}

__NO_RETURN static void app_main (void *argument) {
  // ... other code ...
  // executed every 30 ms:
  for (i = 0; i < 5; i++) {
    Send_Message_To_TCP (i);
  }
}
The system functions correctly until a TCP connection is disrupted, either by disconnecting the Ethernet cable or by suddenly powering off the device. For instance, when Socket 4's Ethernet cable is disconnected abruptly, its timeout counter starts counting down from 60 seconds. Meanwhile, when a new connection for Socket 3 is accepted and its state changes to Established, it sends only one packet and transmits no further data during the timeout period.
As the new connection (Socket 3) begins counting down its own timeout, the timeout of the previously disconnected Socket 4 is sometimes reset to 60 seconds. Consequently, the new connection is disconnected sooner than the previously disconnected socket times out. Upon investigation, I found that netTCP_SendReady returns true only once, right after a new connection is accepted; for the rest of the established connection it returns false, which explains why only one packet is sent.
[Screenshot: Socket 4 connection status]
I also checked the thread priorities, and all of the threads work fine (I've disabled round-robin scheduling).
What is my mistake? Any help would be appreciated.
A TCP socket stores data in a memory pool until the remote station acknowledges it. If the Ethernet connection is interrupted, the data is stored until the socket is closed or the data is acknowledged.
Therefore, it is possible that you have filled the memory pool with data because more sockets are open and sending data. This explains why no new data can be sent. Note that the memory pool is shared by all sockets and services that require dynamic memory.
TCP sockets have a timer for resending and disconnecting the connection. If the Ethernet cable is disconnected and reconnected before the socket is closed, and both ends have the socket in an established state, TCP socket communication would normally continue and the TCP socket would not even notice the brief disconnection. Retransmission and recovery algorithms correct errors and ensure consistent data transmission.
Dear Franc Urbanc,
I appreciate your response, and I am truly thankful for your consistent help with my network problems. I value your time and assistance.
If the other sockets were simply filling the memory pool, I could work around that in software.
I have set the network memory pool to be larger than the received plus transmitted data (22 KB). These are my TCP settings:
Number of TCP Sockets: 5
Maximum Segment Size: 576
TCP_RECEIVE_WIN_SIZE: 2200
This means that each socket, if its packets are not acknowledged, can hold about 600 bytes (I round up to 1 KB for the calculation): 5 × 1 KB = 5 KB. My transmitted data is less than 450 bytes, so it fits in a single segment. If one of my sockets gets disconnected, it holds only about 1 KB of unacknowledged data.
Regarding the netTCP_SendReady() function:
The function netTCP_SendReady determines whether the TCP socket can send data. It does this by checking whether the TCP connection has been established and whether the socket has received an acknowledgment from the remote machine for data sent previously.
This function doesn't allow sending a new packet until the last one has been acknowledged, so the stored memory stays around 1 KB per socket. Because I use netTCP_SendReady, each socket that worked fine before the other socket was disconnected is in the Established state (verified in debug mode), and its data is presumably being acknowledged, yet they all stop working properly.
Because I call netTCP_SendReady() before netTCP_GetBuffer(), I make sure not to allocate additional memory until each socket's previously sent data has been acknowledged.
However, as soon as one socket is disconnected, all the sockets stop sending (no socket sends more than one packet). Each socket also receives no more than 2 KB per second: 5 × 2 KB = 10 KB. Summing everything, 10 KB + 5 KB = 15 KB, which is less than the 22 KB memory pool.
Hi again,
I have set up a new scenario to assess memory usage. I defined five sockets and disabled all packet transmissions to my embedded board, allowing only this board to transmit 450 bytes to the two connected devices (PC1 and PC2). When both devices are connected and I disconnect the cable from one of them (e.g. PC1), the other connection (PC2) stops working and begins counting down its timeout.
Well, TCP is more complex, and so is the implementation.
There are two windows. One is the receive window, which you control: it tells the remote station how much data it is allowed to transmit. In this respect you are right.
But there is also a transmit window, which is controlled by the remote station, in your case a PC. This window is normally 256 KB or more in size. Since the network TCP sockets implement a sliding-window protocol, the embedded system follows this protocol. Initially the send window is very small, e.g. 2 packets, and it then grows with each packet sent. After a large amount of data has been sent, the send window can therefore be very large, so that a single socket can fill the entire memory pool with send data. As mentioned, this happens not under the control of the embedded system, but under the control of the remote PC.
netTCP_SendReady checks virtual acknowledgments. That is, when the TCP socket sees that it can send more data according to the sliding-window protocol, it generates a virtual ACK to the user, and the user can send the next packet. The larger the send window is, the more virtual ACKs are generated.
However, there are limits to the management of the memory pool. If the free memory becomes too small, the netTCP_SendReady function also fails. As a result, the system can wait until some data is released from the memory to continue with the transfer.
Thank you so much for your response.
I’ve been thinking about your explanation, and I realize that this issue will affect any MCU whose SRAM is smaller than the PC's TCP window (256 KB).
I’ve searched quite a bit online, but I haven’t found any effective ways to control the TCP receive window in Windows when using different software.
It’s frustrating because this situation leads to network interruptions for a simple disconnection, especially when the device is working as an IoT device. These interruptions can cause many problems.
Do you know if there's a solution to this problem? I would appreciate any advice you can provide. Thanks again for your help!
As already mentioned, TCP is a complex protocol. The default configuration is good enough for optimal performance, i.e. up to 10 Mbytes/s in both directions. Of course, there are many factors that affect transfer speed, to name just a few: MCU speed, media speed (usually Ethernet), network topology, etc. For example, if a packet has to pass through several routers and gateways, the speed drops.
I suggest that you don't worry so much about the internals of TCP, but instead focus on using the socket. You could use the BSD socket interface instead. BSD is a software layer that wraps native TCP sockets so they behave like standard BSD sockets. This allows you to use network sockets on a PC and on an embedded system in generally the same way.
Thank you again for your quick response.
Sure, but I need to know how to handle this scenario: memory is limited, the timeout should be about 60 seconds, and during an interruption on one socket, the other sockets should keep working.
Do you have any suggestions on how to bypass this interruption as quickly as possible and make the sockets operate independently of each other?
But sockets do work independently. The Telnet server can be configured to accept multiple simultaneous connections, with each client doing something different. If you configure it for three sessions, the server can accept three simultaneous clients. They do not interfere with each other (except, say, if the first user turns on the LED on the board and the second user turns it off).
My suggestion is only for a test. Take the Telnet_Server example, configure it for three sessions, and connect three PCs to it at the same time (as in your picture above). Then unplug the cable from one PC; the other two should still have active, working connections. You should not notice any problems with memory allocation.
Thank you Franc,
It seems to be working fine even after disconnecting the Ethernet cable. However, I need to understand how to send data from the embedded board to the PC and which function to use for this purpose. Based on my review of the Telnet documentation, it only transfers data in response to commands (requests) received from the user. I would appreciate any guidance you could provide on this matter. Thank you.
You can take a look at the examples BSD_Client and BSD_Server. They show how to connect to a remote server via a stream socket (TCP) and transfer some data. I assume this is what you really need.
Dear Franc, I appreciate your help. I would have preferred to keep using the native TCP socket API, which has many options, but I will try out the BSD interface and check whether it works properly. Thank you again for your assistance.