
RTX os_itv_set microsec periodic wakeup

I wrote a small application using Keil RTX for a SmartFusion device and probed the RTX overhead (the kernel is driven by the Cortex-M3 SysTick timer) on a digital CRO. I changed the OS_TICK macro to 50 microseconds, even though the manual suggests setting it to >= 1 ms. Per our design we want to run the foreground processing frame every 100 microseconds. When probing the waveform on the CRO I see an RTX overhead of roughly 34 microseconds (this changes with the OS_TICK value). Am I doing something wrong, or is this how it is supposed to behave? (It looks like we won't be able to achieve microsecond-level time resolution with the RTX OS.)

In task phaseA I call os_itv_set(2), so the periodic wakeup interval is 2 system ticks (i.e. 100 microseconds), and I switch LED_A on and off.

When I probe the via on the eval board for LED_A (D1), I see the On operation performed every 134.2 microseconds.
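
Worked out explicitly, using the values above (the macro names here are only for illustration, not part of the project):

/* Illustrative arithmetic only */
#define OS_TICK_US           50u                         /* configured RTX tick (OS_TICK)   */
#define ITV_TICKS             2u                         /* os_itv_set(2)                   */
#define EXPECTED_PERIOD_US   (ITV_TICKS * OS_TICK_US)    /* 2 * 50 us = 100 us expected     */
/* Measured on the CRO: ~134.2 us, i.e. roughly 34 us more than expected per frame          */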

File: RTX_Conf_CM.c

// </h>
// <h>SysTick Timer Configuration
// =============================
//   <o>Timer clock value [Hz] <1-1000000000>
//    Set the timer clock value for selected timer.
//    Default: 100000000  (100MHz)
#ifndef OS_CLOCK
 #define OS_CLOCK       100000000
#endif

//   <o>Timer tick value [us] <1-1000000>
//    Set the timer tick value for selected timer.
//    Default: 10000  (10ms)
#ifndef OS_TICK
 #define OS_TICK        50
#endif
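
For context: the standard RTX_Conf_CM.c template derives the SysTick reload value from these two macros roughly as follows (a sketch of the usual RTX v4 formula, not necessarily this project's exact file):

/* SysTick reload value as the RTX template typically computes it (sketch)  */
#define OS_TRV   ((U32)(((double)OS_CLOCK * (double)OS_TICK) / 1E6) - 1)
/* With OS_CLOCK = 100000000 and OS_TICK = 50 this works out to 4999,       */
/* i.e. an exact 50 us tick - but only if the core really runs at 100 MHz.  */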
File: main.c

#include <RTL.h>
#include "a2fxxxm3.h"                   /* A2FxxxM3x definitions             */

OS_TID t_phaseA;                        /* assigned task id of task: phase_a */

#define LED_A      0x01
#define LED_On(led)     (GPIO->GPIO_OUT &= ~(led))   /* LED on  = drive pin low  */
#define LED_Off(led)    (GPIO->GPIO_OUT |=  (led))   /* LED off = drive pin high */

__task void phaseA (void) {
  os_evt_wait_and (0x0001, 0xffff);    /* wait for an event flag 0x0001    */
  os_itv_set (2);

  for (;;) {
    os_itv_wait ();

    LED_On (LED_A);
    LED_Off(LED_A);

  }
}

__task void init (void) {

  GPIO->GPIO_0_CFG = 5;                  /* Configure GPIO for LEDs          */
  GPIO->GPIO_1_CFG = 5;
  GPIO->GPIO_2_CFG = 5;
  GPIO->GPIO_3_CFG = 5;
  GPIO->GPIO_4_CFG = 5;
  GPIO->GPIO_5_CFG = 5;
  GPIO->GPIO_6_CFG = 5;
  GPIO->GPIO_7_CFG = 5;
  GPIO->GPIO_OUT  |= 0xFF;


  t_phaseA = os_tsk_create (phaseA, 0);  /* start task phaseA                */
  os_evt_set (0x0001, t_phaseA);         /* send signal event to task phaseA */
  os_tsk_delete_self ();
}



int main (void) {
  WATCHDOG->WDOGENABLE = 0x4C6E55FA;    /* Disable the watchdog              */
  os_sys_init (init);                   /* Initialize RTX and start init     */
}
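
As a side note on the resolution concern above: rather than shrinking OS_TICK below the recommended 1 ms, a common alternative for a hard 100 microsecond frame is to drive it from a dedicated hardware timer interrupt and leave RTX at a coarser tick for background work. The sketch below only illustrates the idea; the interrupt handler name and the timer setup are hypothetical placeholders (a SmartFusion hardware timer would have to be configured for a 100 us period and enabled in the NVIC), while isr_evt_set and os_evt_wait_and are standard RTX calls.

/* Sketch only: drive the 100 us frame from a hardware timer ISR instead of the RTX tick */

void Timer1_IRQHandler (void) {          /* hypothetical 100 us timer interrupt         */
  /* clear the timer interrupt flag here (device-specific, not shown)                   */
  isr_evt_set (0x0001, t_phaseA);        /* signal the frame task from the ISR          */
}

__task void phaseA_hw (void) {           /* frame task paced by the timer, not os_itv   */
  for (;;) {
    os_evt_wait_and (0x0001, 0xffff);    /* block until the timer ISR signals           */
    LED_On (LED_A);                      /* 100 us foreground frame work                */
    LED_Off(LED_A);
  }
}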

Reply
  • see an RTX overhead of roughly 34 microseconds

    I think you can find RTX performance metrics on this website.
    Either way, a context switch within 34 microseconds is not bad at all! What is your processor speed? Is your internal flash MAM enabled?
    As already mentioned, you are trying to solve your problem with the wrong method.

Children
  • Thanks, Tamir, for replying. Finally I've found someone other than "Per" on the Keil forum.

    My only question at this point: even if I set the interval wait time to 10 milliseconds, why am I seeing a periodic wakeup of 13.4 milliseconds? I can understand my 100 microsecond resolution actually turning out to be 134 microseconds, but where am I going wrong here? Have you ever faced anything like this in the lab?
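
For what it's worth, the two measurements quoted in this thread scale by the same factor (illustrative arithmetic only; the interpretation is just something to verify):

/* Both observations are ~1.34x the expected period:                          */
/*   100 us expected -> 134.2 us measured   (134.2 / 100 = 1.342)             */
/*    10 ms expected ->  13.4 ms measured   ( 13.4 / 10  = 1.34 )             */
/* A constant ratio like this would fit the tick itself running ~34% slow,    */
/* e.g. OS_CLOCK configured as 100 MHz while the core actually runs slower,   */
/* rather than a fixed per-wakeup RTX overhead - worth checking the real      */
/* core clock against the OS_CLOCK setting.                                   */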