
TCPNET response to PING

I'm using TCPNET and the RTX kernel on an STR912 ARM9.

A simple question first: can TCPNET be configured to respond to a broadcast PING?

Second, I have noticed that sometimes the response time to PINGs gets erratic and long. Normally the response time is ~1 ms, but sometimes (for no reason I have yet been able to pin down) the response times go up to 1 or 2 seconds (frequently almost exactly 1 or 2 seconds).

The only cure seems to be to power-cycle my board. This happens both on my own board and on an MCBSTR9 board from Keil. When I was running the HTTP demo I noticed that when the ping time goes up, the HTTP response also becomes very slow. What is happening?

Christopher Hicks
==

NORMAL:

hiss:~# ping 192.168.2.12
PING 192.168.2.12 (192.168.2.12) 56(84) bytes of data.
64 bytes from 192.168.2.12: icmp_seq=1 ttl=128 time=1.03 ms
64 bytes from 192.168.2.12: icmp_seq=2 ttl=128 time=0.967 ms
64 bytes from 192.168.2.12: icmp_seq=3 ttl=128 time=0.998 ms
64 bytes from 192.168.2.12: icmp_seq=4 ttl=128 time=1.00 ms
64 bytes from 192.168.2.12: icmp_seq=5 ttl=128 time=0.908 ms

SOMETIMES:

hiss:~# ping 192.168.2.12
PING 192.168.2.12 (192.168.2.12) 56(84) bytes of data.
64 bytes from 192.168.2.12: icmp_seq=1 ttl=128 time=352 ms
64 bytes from 192.168.2.12: icmp_seq=2 ttl=128 time=2001 ms
64 bytes from 192.168.2.12: icmp_seq=3 ttl=128 time=1940 ms
64 bytes from 192.168.2.12: icmp_seq=4 ttl=128 time=941 ms
64 bytes from 192.168.2.12: icmp_seq=5 ttl=128 time=1001 ms
64 bytes from 192.168.2.12: icmp_seq=6 ttl=128 time=1186 ms
64 bytes from 192.168.2.12: icmp_seq=7 ttl=128 time=1011 ms
64 bytes from 192.168.2.12: icmp_seq=8 ttl=128 time=510 ms

  • Hi

    Was this issue ever resolved?

    I too am experiencing this problem with RL-TCPnet (see Thread no 10561). I am using the MCB2300 with the LPC2368 and MDK 3.11.

    If you have any further information on this issue (or, even better, a resolution) could you let me know?

    Many Thanks

    Des

  • Not fully resolved.

    The situation was much improved with the V3.05 driver. The long trains of long response times are gone, but occasionally there is still a single, isolated long response time.

    This happens about once per minute when pinging at 1-second intervals, with the TCPNet timer_tick() called every 100 ms. Here is a typical sequence:

    64 bytes from 192.168.2.203: icmp_seq=226 ttl=128 time=1.20 ms
    64 bytes from 192.168.2.203: icmp_seq=227 ttl=128 time=1.61 ms
    64 bytes from 192.168.2.203: icmp_seq=228 ttl=128 time=1.32 ms
    64 bytes from 192.168.2.203: icmp_seq=229 ttl=128 time=1001 ms
    64 bytes from 192.168.2.203: icmp_seq=230 ttl=128 time=2.88 ms
    64 bytes from 192.168.2.203: icmp_seq=231 ttl=128 time=1.51 ms
    64 bytes from 192.168.2.203: icmp_seq=232 ttl=128 time=1.25 ms
    64 bytes from 192.168.2.203: icmp_seq=233 ttl=128 time=1.64 ms
    

    This shows two interesting features:

    1. The lengthened response time is often, but not always, almost exactly 1 second.

    2. The response time to the ping immediately following the long one is always approximately double the normal response time.

    Taken together, this evidence strongly suggests to me that a packet is sometimes received but TCPNet is not informed of it until the subsequent packet arrives; both are then processed together, thankfully in the order in which they arrived.

    CH
    ==

  • Thanks Christopher.

    I am using the V3.05 driver, but my PING response times are much greater:
    64 bytes from dev_00 (10.51.21.48): icmp_seq=35 ttl=128 time=0.734 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=36 ttl=128 time=0.780 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=37 ttl=128 time=0.759 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=38 ttl=128 time=0.747 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=39 ttl=128 time=0.771 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=40 ttl=128 time=200.714 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=41 ttl=128 time=44.451 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=42 ttl=128 time=266.451 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=43 ttl=128 time=162.466 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=44 ttl=128 time=195.019 ms
    64 bytes from dev_00 (10.51.21.48): icmp_seq=45 ttl=128 time=214.114 ms

    It would appear that the PING response time jumps whenever my TCP socket connects.
    If I disable TELNET in the NETCONFIG.C file and do not make a call to tcp_listen(), then the PING responses are of the order of 0.7 ms. If TELNET is enabled, or I make a call to tcp_listen(), then the PING response times jump up once the TCP connection is made.
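
    For reference, the listening socket in my test code is set up roughly as follows. This is only a minimal sketch: the port number, idle timeout and callback body are from my own test, not from the Keil demo.

    #include <RTL.h>                            /* RL-ARM / RL-TCPnet API      */

    static U8 tcp_soc;

    /* Socket event callback: accept any connection request, ignore data.      */
    static U16 tcp_callback (U8 soc, U8 evt, U8 *ptr, U16 par) {
       if (evt == TCP_EVT_CONREQ) {
          return (1);                           /* accept incoming connection  */
       }
       /* TCP_EVT_DATA: 'ptr' points to 'par' bytes of data - not used here.   */
       return (0);
    }

    void open_listener (void) {
       /* Allocate a server socket with a 120 s idle timeout and listen on an  */
       /* arbitrary test port (1000 here).                                     */
       tcp_soc = tcp_get_socket (TCP_TYPE_SERVER, 0, 120, tcp_callback);
       if (tcp_soc != 0) {
          tcp_listen (tcp_soc, 1000);
       }
    }

    As noted above, the jump in PING times only appears once a client actually connects to the socket, not merely from having it listening.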

    As an aside, I have also noticed that the PING response times increase (with no TCP socket connection or TELNET) if you exit the debugger, i.e. build your code, download it to flash via the debugger and then exit the debugger.

    I have raised these issues with Keil and will let you know their findings whenever they get back to me.

    Des

  • Maybe that is expected behaviour (lengthened ping response times while there is TCP activity).

    Remember that TCPNet is single-threaded, so while a TCP packet is being processed (i.e. while in the TCP socket callback), any other incoming packets (including incoming pings) are buffered, and processed once the TCP callback has completed (or maybe the next time you call main_TcpNet() - I am not sure).

    This is in contrast to bigger systems where typically separate threads/processes respond to each TCP/UDP socket. In this case even if one thread takes a long time to respond to a packet on a given socket, the pings are handled by a separate thread and so respond immediately.
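
    For reference, the polling structure under RTX typically looks roughly like this. It is only a sketch: the task names, priorities and the tick setting are assumptions, not taken from any particular Keil example.

    #include <RTL.h>                      /* RL-ARM RTX and RL-TCPnet API   */

    /* Provides the 100 ms time base for TCPNet timeouts and retransmits.  */
    __task void tick_task (void) {
       os_itv_set (10);                   /* 10 RTX ticks = 100 ms,         */
                                          /* assuming a 10 ms system tick   */
       for (;;) {
          timer_tick ();
          os_itv_wait ();
       }
    }

    /* Runs the TCPNet engine; all socket callbacks execute in this task.  */
    __task void net_task (void) {
       init_TcpNet ();
       os_tsk_create (tick_task, 20);
       for (;;) {
          main_TcpNet ();                 /* processes queued frames and    */
                                          /* runs the socket callbacks      */
          os_tsk_pass ();
       }
    }

    Any frame that arrives while main_TcpNet() is still inside a socket callback has to sit in the queue until the next pass of this loop, which is why a busy TCP connection can delay the ICMP echo reply.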

    In my case I have no IP activity except the pings themselves.

    CH

  • Hi Des,
    while investigating this issue on the LPC2366, we noticed a possible problem in lpc23_emac.c (at least in the version included in RL-ARM 3.10).

    Its ISR, interrupt_ethernet(), appears to ignore the case of multiple frames already queued in the RX buffer. IMHO there should be a loop comparing the consume and produce indexes so that all available frames are popped from the buffer (this can happen if one or more further frames are received before the ISR is triggered). It should be something like this (note the additional loop):

    static void interrupt_ethernet (void) __irq {
       /* EMAC Ethernet Controller Interrupt function. */
       OS_FRAME *frame;
       U32 idx, int_stat, RxLen, info;
       U32 *sp, *dp;

       while ((int_stat = (MAC_INTSTATUS & MAC_INTENABLE)) != 0) {
          MAC_INTCLEAR = int_stat;
          if (int_stat & INT_RX_DONE) {
             /* Additional loop: keep popping frames until the consume index  */
             /* catches up with the produce index, so that frames already     */
             /* queued in the EMAC buffer are not left behind.                */
             while (MAC_RXCONSUMEINDEX != MAC_RXPRODUCEINDEX) {
                /* Packet received, check if packet is valid. */
                idx  = MAC_RXCONSUMEINDEX;
                info = Rx_Stat[idx].Info;
                if (!(info & RINFO_LAST_FLAG)) {
                   goto rel;
                }
                RxLen = (info & RINFO_SIZE) - 3;
                if (RxLen > ETH_MTU || (info & RINFO_ERR_MASK)) {
                   /* Invalid frame, ignore it and free buffer. */
                   goto rel;
                }
                /* Flag 0x80000000 to skip sys_error() call when out of memory. */
                frame = alloc_mem (RxLen | 0x80000000);
                /* If 'alloc_mem()' has failed, ignore this packet. */
                if (frame != NULL) {
                   dp = (U32 *)&frame->data[0];
                   sp = (U32 *)Rx_Desc[idx].Packet;
                   for (RxLen = (RxLen + 3) >> 2; RxLen; RxLen--) {
                      *dp++ = *sp++;
                   }
                   put_in_queue (frame);
                }
    rel:        if (++idx == NUM_RX_FRAG) idx = 0;
                /* Release the frame back to the EMAC buffer. */
                MAC_RXCONSUMEINDEX = idx;
             }
          }
          if (int_stat & INT_TX_DONE) {
             /* Frame transmit completed. */
          }
       }
       /* Acknowledge the interrupt. */
       VICVectAddr = 0;
    }
    

    This change was suggested by a different implementation of the driver found in the example code bundle for the LPC2300 from NXP (www.standardics.nxp.com/.../code.bundle.lpc23xx.lpc24xx.uvision.zip).

    We are still digging into the LPC2300 user manual to fully understand all the magic behind its internal Ethernet controller, but our preliminary patch to the driver shows promising results.

    Maybe this is the same fix that Christopher Hicks reported as improving Ethernet performance in his setup after upgrading to the latest driver source. If so, since he is using a different CPU (ARM9), the fix might not have been propagated to lpc23_emac.c. I have not had time to verify this yet, so please confirm.

    Please let us know if we are on the right track to solving the issue, and/or any other changes this could suggest for the MAC driver.

  • Hi Andrea

    I have amended the driver I was using (LPC23_EMAC.C Rev 3.05, as provided with MDK 3.11) to include the extra while loop in the Ethernet interrupt handler, as you suggested. My initial testing indicates that you are correct about the old driver ignoring the reception of multiple frames - well done.

    I have also amended the EasyWeb demo provided with MDK 3.11 to run with the RL-ARM RTOS. This demo does not use the same LPC23_EMAC.C Ethernet driver; instead it uses a polling mechanism, checking the consume and produce indexes to determine whether a frame has been received. I ran this application for a period of 72 hours with typical PING response times of 0.7 ms (similar to the timings I am now getting with the amended LPC23_EMAC.C driver). These results strengthen your conclusion that the interrupt-driven driver was not dealing correctly with the reception of multiple frames.
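
    The polling check in that demo is essentially of the following form. This is only a rough sketch using the register and buffer names from the LPC23_EMAC.C listing above; frame validation and the copy into a TCPNet frame are elided.

    /* Poll the EMAC receive buffer and drain every frame that is waiting. */
    void poll_rx_frames (void) {
       U32 idx;
       while (MAC_RXCONSUMEINDEX != MAC_RXPRODUCEINDEX) {
          idx = MAC_RXCONSUMEINDEX;
          /* ... check Rx_Stat[idx].Info and copy Rx_Desc[idx].Packet ... */
          if (++idx == NUM_RX_FRAG) idx = 0;
          MAC_RXCONSUMEINDEX = idx;       /* release the frame to the EMAC */
       }
    }

    Because the produce and consume indexes are compared on every poll, a received frame can never be left stranded in the buffer the way it could with the original one-frame-per-interrupt handler.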

    Thanks Andrea.

    Des

  • Thank you all for your help. We have fixed the original LPC23_EMAC.c driver and it now works without problems. The fix will be included in the next RL-ARM release.

    If you need an updated driver, please send email to support@keil.com

    Franc