RTX os_itv_set microsec periodic wakeup

I wrote a small application using Keil RTX on a SmartFusion device and measured the RTX overhead (the kernel tick comes from the Cortex-M3 SysTick timer) on a digital CRO. I changed the OS_TICK macro to 50 µs, even though the manual suggests setting it >= 1 ms. Per our design we want to run the foreground processing frame every 100 µs. Probing the waveform on the CRO, I see an RTX overhead of roughly 34 µs, and it changes with the OS_TICK value. Am I doing something wrong, or is this how it is supposed to behave? (If it is, we won't be able to achieve microsecond time resolution with RTX.)

In task phaseA I call os_itv_set(2), so the periodic wakeup interval is 2 system ticks (i.e. 100 µs), and in the loop I switch LED_A on and off.

When I probe the via on the eval board for LED_A (D1), I see the ON operation performed every 134.2 µs.

File: RTX_Conf_CM.c

// </h>
// <h>SysTick Timer Configuration
// =============================
//   <o>Timer clock value [Hz] <1-1000000000>
//    Set the timer clock value for selected timer.
//    Default: 100000000  (100MHz)
#ifndef OS_CLOCK
 #define OS_CLOCK       100000000
#endif

//   <o>Timer tick value [us] <1-1000000>
//    Set the timer tick value for selected timer.
//    Default: 10000  (10ms)
#ifndef OS_TICK
 #define OS_TICK        50
#endif
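
For reference, a sketch of how the configuration file turns these two values into the SysTick reload (OS_TRV is the macro defined further down in the shipped RTX_Conf_CM.c; the comment just substitutes the values above). If OS_CLOCK does not match the real core clock, every tick, and with it every os_itv_wait() interval, stretches by the same proportion:

#define OS_TRV   ((U32)(((double)OS_CLOCK*(double)OS_TICK)/1E6)-1)
/* (100000000 * 50) / 1E6 - 1 = 4999 -> SysTick reloads every
   5000 cycles, i.e. every 50 us at a true 100 MHz core clock. */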
File: main.c

#include <RTL.h>
#include "a2fxxxm3.h"                   /* A2FxxxM3x definitions             */

OS_TID t_phaseA;                        /* assigned task id of task: phase_a */

#define LED_A      0x01
#define LED_On(led)     (GPIO->GPIO_OUT &= ~(led))
#define LED_Off(led)    (GPIO->GPIO_OUT |=  (led))

__task void phaseA (void) {
  os_evt_wait_and (0x0001, 0xffff);    /* wait for an event flag 0x0001    */
  os_itv_set (2);

  for (;;) {
    os_itv_wait ();

    LED_On (LED_A);
    LED_Off(LED_A);

  }
}

__task void init (void) {

  GPIO->GPIO_0_CFG = 5;                  /* Configure GPIO for LEDs          */
  GPIO->GPIO_1_CFG = 5;
  GPIO->GPIO_2_CFG = 5;
  GPIO->GPIO_3_CFG = 5;
  GPIO->GPIO_4_CFG = 5;
  GPIO->GPIO_5_CFG = 5;
  GPIO->GPIO_6_CFG = 5;
  GPIO->GPIO_7_CFG = 5;
  GPIO->GPIO_OUT  |= 0xFF;


  t_phaseA = os_tsk_create (phaseA, 0);  /* start task phaseA                */
  os_evt_set (0x0001, t_phaseA);         /* send signal event to task phaseA */
  os_tsk_delete_self ();
}



int main (void) {
  WATCHDOG->WDOGENABLE = 0x4C6E55FA;    /* Disable the watchdog              */
  os_sys_init (init);                   /* Initialize RTX and start init     */
}
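
One way to cross-check the CRO reading from software is the Cortex-M3 DWT cycle counter. A minimal sketch, using the architecturally fixed DWT/DEMCR register addresses; the capture logic is an assumption for illustration, not part of the original code:

#define DEMCR       (*(volatile unsigned long *)0xE000EDFC)  /* debug control */
#define DWT_CTRL    (*(volatile unsigned long *)0xE0001000)
#define DWT_CYCCNT  (*(volatile unsigned long *)0xE0001004)

static void cyccnt_init (void) {
  DEMCR      |= (1UL << 24);            /* TRCENA: enable the DWT unit        */
  DWT_CYCCNT  = 0;
  DWT_CTRL   |= 1UL;                    /* CYCCNTENA: start the cycle counter */
}

/* Inside the phaseA loop, right after os_itv_wait():
     static unsigned long last;
     unsigned long now   = DWT_CYCCNT;
     unsigned long delta = now - last;  // cycles per period
     last = now;                        // delta / 100 = microseconds at 100 MHz
*/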

Reply
  • Tamir, thanks for your reply. I appreciate your time and effort.

    What worries me is that the 34% drift keeps accumulating, and I am not sure where I am going wrong. For a periodic interval of 20 µs I get 36 µs [a difference of 16 µs for the context switch and other overheads], yet for a 10 ms periodic interval I get 13.4 ms [a difference of 3.4 ms]. I wanted to drill into why there is so much difference, since theoretically I should get 10 ms + 16 µs. Because the error scales with the interval rather than staying a fixed 16 µs, it looks as if each tick itself runs long, not like a per-wakeup context-switch cost.

    I know the system overhead goes up at a 100 µs frame rate. I discussed it with my customer and he is fine with that. I suggested using an FPGA module to make life easier, but he wants to stick with a software implementation.

    What would you suggest if I mix a timer ISR with RTX to get the job done? A timer ISR gives me microsecond precision and takes a fixed interval to service. While the ISR is being serviced, RTX doesn't stand a chance to intervene. The timer ISR would do the task scheduling [using isr_evt_set]. There is definitely a chance of frame overrun, but that situation can also happen with a 100 µs RTX periodic timer (if the drift were deterministic).
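
    If you go that route, the mechanism would look something like the sketch below. isr_evt_set() is the ISR-side counterpart of os_evt_set() in RTX; the handler name and the interrupt-clearing step are assumptions, since the exact timer registers depend on the SmartFusion header:

    extern OS_TID t_phaseA;

    void Timer1_IRQHandler (void) {        /* handler name is an assumption    */
      /* clear the timer interrupt flag here (device-specific register)        */
      isr_evt_set (0x0001, t_phaseA);      /* wake phaseA from interrupt level */
    }

    __task void phaseA (void) {            /* blocks on the event instead     */
      for (;;) {                           /* of os_itv_wait()                */
        os_evt_wait_and (0x0001, 0xffff);  /* released by the timer ISR       */
        LED_On (LED_A);
        LED_Off(LED_A);
      }
    }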

Children
  • While the ISR is being serviced, RTX doesn't stand a chance to intervene.

    As stated above, this is not the case. If the priority of the timer interrupt used by RTX is higher, it will preempt any running ISR with a lower priority.
    I did some rigorous measurements in the past while (successfully) hunting down a bug in RTX (it was solved in the meantime - don't worry). I never encountered the drift you report.
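
    If the peripheral timer is meant to win over the RTX tick, its NVIC priority has to be set numerically lower (i.e. more urgent) than SysTick's. A sketch with the standard CMSIS calls; Timer1_IRQn is an assumed name from the SmartFusion device header, and the RTX porting notes should be checked before moving the kernel's own interrupt priorities:

    NVIC_SetPriority (Timer1_IRQn,  1);    /* application timer: more urgent  */
    NVIC_SetPriority (SysTick_IRQn, 14);   /* RTX tick: less urgent           */
    NVIC_EnableIRQ   (Timer1_IRQn);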

  • Yes Tamir, you are correct. Cortex-M3/RTX uses the SysTick timer to generate the RTOS clock, and by default I guess it has a higher priority than Timers 1 & 2.

    Ahhhh... nice catch, though. These are the points people usually miss, with the forum experts coming to the rescue. I need to rethink my design.