
Strange delay value on Task

I'm working with an LPC2478 micro running RTX and I have noticed strange behaviour in the delay associated with a task during execution.

Sometimes, unpredictably, a task that should run every 1 second, defined as follows:

__task void Background_1sTask (void) {
  os_itv_set (100);

  while (1) {
    os_itv_wait ();
    // TASK A ITV
  }
}

stops working, so if I put a breakpoint inside this task the code never stops there.
In this blocked condition, checking the "RTX Tasks and System" window in debug mode, I noticed that the Delay column for this task shows a strangely large value, for example 28520, when it should be lower than 100: with the system timer set to 10 ms, 100 ticks is the maximum delay expected for a task scheduled every 1 second.

My application is made up of 6-7 tasks running at the same time, so the problem could be related to wrong priority management, or to a stray access to the RAM location where this delay value is stored that overwrites it. I don't have any other ideas.

Thanks in advance for any suggestion.

BR

  • Dear stefano & tamir,
    I recently had a long debugging session for a similar symptom.
    To sketch the situation:
    A monitoring task is invoked periodically using os_itv_set() / os_itv_wait().
    In case of an abnormal situation, data was written to flash, which takes some time (~100 ms) and has to be done without being interrupted. The abnormal situation has to be acknowledged by the user. Now, during stress testing, we hit the acknowledgment at high frequency while the abnormal situation persisted.
    The observed behavior was that the task slowed down, from an expected 10 ms interval up to 2 s! All other tasks continued working normally. After 1-2 s without stressing it, the system magically recovered and continued execution.
    My explanation is that the worst-case execution time of ~100 ms violates the initial 'hard' deadline assumption of a 10 ms task invocation; RTX somehow 'gets confused', accumulates task invocations, but tries to recover.
    I never had time to deeply investigate this 'RTX-is-confused' question in the RTX source code.

    Since the 10 ms periodic task invocation was not mission critical, we circumvented the problem by using os_dly_wait() instead of os_itv_wait() (see the sketch below).

    Good luck debugging random runtime behavior!
    Thomas
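
    P.S. For illustration, a minimal sketch of that os_dly_wait() pattern; the task and function names are placeholders rather than our actual code, and a 10 ms system tick is assumed:

    #include <RTL.h>

    extern void Do_Monitoring_Work (void);   /* placeholder; may occasionally take ~100 ms */

    __task void Monitoring_Task (void) {
      for (;;) {
        Do_Monitoring_Work ();
        /* os_dly_wait() measures the delay from this point, so an overlong cycle
           simply starts the next one late instead of piling up missed intervals
           the way the fixed os_itv_wait() schedule did for us. */
        os_dly_wait (1);                     /* 1 tick = 10 ms with a 10 ms system tick */
      }
    }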

  • Dear all, thanks for your interesting considerations. It seems that I have solved my strange freezing, so I want to share my experience to help other people.
    My error was to have used, inside a task called periodically with os_itv_wait(), the OS function os_sem_wait() to manage a shared resource.
    In the previous revision my code had:

    if(os_sem_wait(SEM_I2C_1, 100) != OS_R_TMO)
    { ...

    This means that the system could wait up to 100 system ticks for the I2C bus to become free; that is a lot of time in such a fast task (especially if this task has a high priority)!

    My solution has been:

    if(os_sem_wait(SEM_I2C_1, -1) != OS_R_TMO)
    {

    In this case the system is very fast: it just tries once to see whether it can use the I2C resource, and if that is not possible it will try again on the next cycle, without this particular task spending all of its time waiting. (A full sketch is at the end of this post.)

    Have a nice day!

    Stefano
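
    P.S. For completeness, a sketch of the whole task as described above; everything except SEM_I2C_1 and the os_sem_wait() call is a placeholder:

    #include <RTL.h>

    OS_SEM SEM_I2C_1;                        /* initialised elsewhere with os_sem_init (SEM_I2C_1, 1) */
    extern void Read_I2C_Devices (void);     /* placeholder for the shared-resource work */

    __task void Background_1sTask (void) {
      os_itv_set (100);                      /* 100 ticks * 10 ms = 1 s period */
      while (1) {
        os_itv_wait ();
        /* Try to take the I2C semaphore; on OS_R_TMO this cycle is simply skipped
           and the task tries again on the next interval. */
        if (os_sem_wait (SEM_I2C_1, -1) != OS_R_TMO) {
          Read_I2C_Devices ();
          os_sem_send (SEM_I2C_1);
        }
      }
    }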

  • The RTX user manual says clearly:

    "You cannot mix the wait method os_itv_wait and os_dly_wait (or any other wait with a timeout) in the same task. RL-RTX maintains only one timer per task. It is used either for interval or delay waits."

    http://www.keil.com/support/man/docs/rlarm/rlarm_os_itv_wait.htm

    So I don't think it is a very good solution. You should probably construct your program differently (one possible restructuring is sketched below).
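
    For example (just a sketch of one possible split, with placeholder names; the event-flag trigger is my suggestion, not something from the manual): keep the strictly periodic task free of any timed wait and let a second task do the semaphore-guarded work.

    #include <RTL.h>

    OS_SEM SEM_I2C_1;                         /* from the post above */
    OS_TID tid_worker;
    extern void Read_I2C_Devices (void);      /* placeholder */

    __task void I2C_Worker_Task (void) {
      for (;;) {
        os_evt_wait_or (0x0001, 0xFFFF);      /* wait for the 1 s trigger */
        /* A timed semaphore wait should be fine here: this task never calls
           os_itv_wait(), so its single per-task timer is not tied to an interval. */
        if (os_sem_wait (SEM_I2C_1, 100) != OS_R_TMO) {
          Read_I2C_Devices ();
          os_sem_send (SEM_I2C_1);
        }
      }
    }

    __task void Background_1sTask (void) {
      tid_worker = os_tsk_create (I2C_Worker_Task, 1);
      os_itv_set (100);                       /* 1 s at a 10 ms system tick */
      while (1) {
        os_itv_wait ();                       /* the only wait in this task */
        os_evt_set (0x0001, tid_worker);      /* trigger the worker */
      }
    }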

  • Hi Tamir,
    thanks for the hint. Actually I read that line some time ago, but I remembered only not to mix os_itv_wait() with os_dly_wait()... Now, reading it again, the quote from the manual implies: "You can NOT use mutexes or mailboxes in Tasks with strict periodic invocation!"

    Please correct me if I am mistaken.

    Thomas