Hi,
I have a webserver running on an LPC1768 with a maximum of 15 clients at any one time.
I am using an application, written in JavaScript, to make HTTP GET requests, and it uses one socket per GET request (I can't find a way to send multiple GET requests over a single socket from JavaScript).
The problem is that when my application submits too many GET requests in a short space of time (20 when the application opens, 15 of them within ~100 ms), I can see in Wireshark that a RST is sent once all 15 connections have been used and the 16th request tries to open a socket.
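To give a rough idea of why so many sockets get opened at once, the startup burst looks something like this (a simplified sketch rather than my actual code; the URLs are placeholders and I'm assuming a browser environment with fetch):

```javascript
// Simplified illustration of the startup burst (placeholder URLs).
// Each fetch() can end up on its own TCP connection, so around 15
// of them are in flight within roughly 100 ms.
const startupUrls = ['/status', '/config', '/io' /* ...about 20 in total... */];

startupUrls.forEach(url => {
  fetch(url)
    .then(response => response.text())
    .then(body => console.log(url, body.length));
});
```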
I am assuming this is due to the TCP TIME_WAIT state. Could you tell me what TIME_WAIT is set to? I have found that it should be 2*MSL, but wouldn't that be 2 minutes? It can connect again once the client side has gone through its reset timeout (3 seconds).
If I knew the timeout value, I could probably slow my application down so it doesn't have to wait 3 seconds before it can connect again.
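For what it's worth, this is roughly how I would pace the requests if slowing the application down is the answer (again just a sketch, assuming fetch and async/await are available; MAX_IN_FLIGHT and GAP_MS are values I'd tune once I know the actual timeout):

```javascript
// Sketch of a paced fetch loop: issue the GETs in small batches and
// pause between batches so the 15-connection limit is never exceeded.
const MAX_IN_FLIGHT = 10;   // stay safely below the 15-socket limit
const GAP_MS = 250;         // pause between batches; would be tuned to the real timeout

function delay(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function fetchAllPaced(urls) {
  for (let i = 0; i < urls.length; i += MAX_IN_FLIGHT) {
    const batch = urls.slice(i, i + MAX_IN_FLIGHT);
    // Wait for every response in this batch before starting the next one.
    await Promise.all(batch.map(url => fetch(url).then(r => r.text())));
    await delay(GAP_MS);
  }
}

// e.g. fetchAllPaced(startupUrls).then(() => console.log('all requests done'));
```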
Increasing the number of sockets/client connections is not an option for me.
Thanks,
Neil