I'm using the Keil TCP/IP stack on many devices and I've observed something strange: my device is the server, and the client opens the socket. If the client doesn't transmit any data after the socket is opened (SYN handshake), the server (my device) closes the socket in less than a second. The TCP socket "lease" time is set to 120 seconds, yet the stack actually generates a TCP_EVT_CLOSE event. Can anyone explain this, please?
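To illustrate the situation, here is a minimal sketch of a listening server socket using the legacy RL-TCPnet API; the event names (TCP_EVT_*), tcp_get_socket and tcp_listen are as in the RL-ARM documentation, while the port number and the printf logging are just placeholders for this example. The 120 in tcp_get_socket is the socket timeout in seconds that I would expect to apply before the close.

```c
#include <RTL.h>
#include <stdio.h>

static U8 tcp_soc;

/* Listener callback: only reports the events raised by the stack. */
static U16 tcp_callback (U8 soc, U8 evt, U8 *ptr, U16 par) {
  switch (evt) {
    case TCP_EVT_CONREQ:
      /* Incoming SYN: return 1 to accept the connection. */
      return (1);
    case TCP_EVT_CONNECT:
      printf ("Connection established\n");
      break;
    case TCP_EVT_DATA:
      printf ("%d bytes received\n", par);
      break;
    case TCP_EVT_CLOSE:
      /* This is the event I see well before the 120 s timeout expires. */
      printf ("Socket closed by stack\n");
      break;
    case TCP_EVT_ABORT:
      printf ("Connection aborted (RST)\n");
      break;
  }
  return (0);
}

void server_init (void) {
  init_TcpNet ();
  /* Server socket with a 120 s timeout; port 1001 is just an example. */
  tcp_soc = tcp_get_socket (TCP_TYPE_SERVER, 0, 120, tcp_callback);
  if (tcp_soc != 0) {
    tcp_listen (tcp_soc, 1001);
  }
}
```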
It is better not to reduce the tick interval too much. It makes no difference for the TCP stack, because the timeout timers have a resolution of 1 second. A higher tick rate only means the stack's timer handling runs more often per second, which effectively means a heavier CPU load.
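To make that concrete, a common RL-TCPnet polling pattern looks roughly like this; timer_tick() and main_TcpNet() are the documented stack entry points, while the tick_flag set from a hardware timer at the configured tick interval (e.g. 100 ms) is an assumption for this sketch.

```c
#include <RTL.h>

/* Assumed to be set by a hardware timer ISR every tick interval,
   matching the tick interval configured in Net_Config.c. */
volatile BOOL tick_flag = __FALSE;

void poll_network (void) {
  while (1) {
    if (tick_flag) {
      tick_flag = __FALSE;
      /* Advances the stack's timers. Socket timeouts such as the 120 s
         value are still counted with 1-second resolution, so a faster
         tick only makes this run more often; it does not give finer
         timeouts, it just costs more CPU time. */
      timer_tick ();
    }
    /* Runs the TCP/IP protocol engine. */
    main_TcpNet ();
  }
}
```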