I'm using the Keil TCP/IP stack on many devices, but I've observed a strange thing: my device is the server and the client opens the socket. If the client doesn't transmit any data after the socket is opened (SYN sequence), the server (my device) closes the socket after less than a second. The TCP socket "lease" time is set to 120 seconds, yet the stack generates a real TCP_EVT_CLOSE event. Can anyone explain this, please?
Check that the timings for the TCP/IP stack are correct.
The tick timer interval is configured in Net_Config.c:
#define TICK_INTERVAL 100
The default setting is 100 ms ticks.
Make sure that the function timer_tick() is called at the configured interval, which means every 100 ms for the default TCP/IP configuration.
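For reference, here is a minimal sketch of the usual polling pattern, assuming the standard RL-TCPnet calls (init_TcpNet, main_TcpNet, timer_tick); the interrupt handler name timer_poll_irq and the hardware timer setup itself are target-specific and only illustrative:

#include <RTL.h>                      /* RL-ARM / RL-TCPnet prototypes          */

static volatile BOOL tick_100ms = __FALSE;

/* Called from a hardware timer interrupt configured for 100 ms,
   i.e. the same value as TICK_INTERVAL in Net_Config.c.                        */
void timer_poll_irq (void) {
  tick_100ms = __TRUE;
}

int main (void) {
  init_TcpNet ();                     /* initialize the TCP/IP stack            */
  /* ... get and listen on the server socket here ...                           */
  while (1) {
    main_TcpNet ();                   /* run the stack's main loop              */
    if (tick_100ms == __TRUE) {
      tick_100ms = __FALSE;
      timer_tick ();                  /* must be called every TICK_INTERVAL     */
    }
  }
}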
Thanks for the advice. The timings are correct.
What does the TCP connect sequence look like?
The correct sequence for the server should be:
- receive SYN
- send SYN+ACK
- receive ACK
After this, the 120-second timeout is started. While a TCP socket is in TCP_STATE_CONNECT, every data packet resets the timeout timer.
Are you sure that you accept the connection? (The callback function must return __TRUE for the TCP_EVT_CONREQ event; see the sketch at the end of this post.)
Are there any retransmitted packets?
Can you check this sequence with Ethereal?
You can also check the debug messages printed to a serial port if you include the debug library RTLCD.lib in your project. Please select FULL debug for the TCP module.
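To illustrate the accept step, here is a minimal sketch of a listening server socket with its callback, assuming the standard RL-TCPnet API (tcp_get_socket, tcp_listen) and the usual headers; the port number 1001 and the names tcp_callback / server_init are only examples:

#include <RTL.h>
#include <Net_Config.h>

static U8 tcp_soc;

/* Listener callback registered with tcp_get_socket().
   The return value is only evaluated for the TCP_EVT_CONREQ event.   */
U16 tcp_callback (U8 soc, U8 evt, U8 *ptr, U16 par) {
  if (soc != tcp_soc) {
    return (0);
  }
  switch (evt) {
    case TCP_EVT_CONREQ:
      /* Incoming SYN: return __TRUE to accept the connection,
         __FALSE to reject it immediately.                             */
      return (__TRUE);
    case TCP_EVT_CONNECT:
      /* 3-way handshake completed, socket is in TCP_STATE_CONNECT.    */
      break;
    case TCP_EVT_DATA:
      /* 'par' bytes received at 'ptr'; each packet also restarts
         the socket's idle timeout.                                     */
      break;
    case TCP_EVT_CLOSE:
    case TCP_EVT_ABORT:
      /* Connection closed or aborted.                                  */
      break;
  }
  return (0);
}

void server_init (void) {
  /* 120 s idle timeout, matching the "lease" time mentioned above.     */
  tcp_soc = tcp_get_socket (TCP_TYPE_SERVER, 0, 120, tcp_callback);
  if (tcp_soc != 0) {
    tcp_listen (tcp_soc, 1001);       /* example port                   */
  }
}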
I've captured both the correct sequences and the wrong ones with Ethereal, and they match your description completely. Thanks a lot. Unfortunately I don't have a debug serial line, but everything seems to work well now. About the 100 ms ticks: this value makes the "send_data" state machine quite slow, so I've tried to speed up the system (RTL tick = 1 ms, timer and TCP tick = 5 ms) and everything runs well. Thanks again.
It is better not to reduce the tick interval too much. It makes no difference for the TCP stack, because the timeout timers have a resolution of 1 second. A higher tick rate only means more decrements of the timeout timers per second, which effectively means a heavier CPU load.
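To put rough numbers on it (these simply follow from 1000 ms divided by the tick interval):

/* Ticks per second = 1000 / TICK_INTERVAL                               */
/*   TICK_INTERVAL = 100 ms  ->  timer_tick() runs  10 times per second  */
/*   TICK_INTERVAL =   5 ms  ->  timer_tick() runs 200 times per second  */
/* The timeout timers still count whole seconds in both cases, so the    */
/* faster tick only adds per-tick bookkeeping overhead.                  */
#define TICK_INTERVAL 100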