
Unexplained resets of an NXP LPC2368 running RTX when modifying variables

Hello everyone,

Today I'm asking for hints on a tricky problem. We have firmware that uses the RTX kernel, running on an NXP LPC2368. The device this firmware was written for is now supposed to get a new LC display.
My task is to change the firmware so that it can drive the new display.

I've spent several weeks on this during the year, and every now and then I've had the problem that the controller resets shortly after start, again and again...

Every time this behaviour occurred, I had previously deleted one or more obsolete variables (mostly global) or functions. In most cases I solved the problem by searching for further obsolete variables and deleting them from the source code - trial and error. That is a real time-killer.

While testing the firmware on Wednesday, I tried to make the adapted routine for writing data to the display RAM a little faster. I moved a global unsigned int into the function and changed it to a static unsigned char, because the value it has to carry is 0x0D at most.

After flashing the firmware to the controller, the controller hung after a short, random time.

Yesterday I tried to solve this problem of the firmware hanging at random times and found that it happens when no task is ready to run: the OS calls os_idle_demon() and is then unable to return from it. I found a workaround on the web: create an empty low-priority task that never calls any os_wait function, which prevents the OS from ever entering the idle task. (It apparently has something to do with incorrect interrupt states when returning from the idle task.)
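A minimal sketch of such a replacement task, assuming the RL-ARM RTX conventions (__task, os_tsk_create) and that priority 1 is the lowest priority in use - the task name is a placeholder:

    #include <RTL.h>

    /* Busy-loop task that never blocks, so the scheduler never has to
       fall back to os_idle_demon().                                    */
    __task void task_idle_replacement (void) {
       for (;;) {
          /* deliberately no os_wait calls here */
       }
    }

    /* created once from the initial task:
       os_tsk_create (task_idle_replacement, 1);                        */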

Today I tried again to make the display-writing function faster and changed two unsigned char variables inside the function from static to non-static. After flashing this firmware, the controller resets again and again. I will now try to find out why the controller behaves this way.

What I have found out so far is that no watchdog is enabled by the user code (or is one part of the OS?). Neither os_stk_overflow() nor os_idle_demon() is called by the OS. I debug the firmware using a ULINK2.

Any ideas where to search for the problem?

Best regards

Parents
  • I use individually sized stacks for every task.

    So I have a number of global arrays that I pass as parameters for the stacks when I create the tasks. It's quite easy to fill these arrays before the tasks are created, as I already know their addresses and sizes. And if I verify that the linker doesn't split them across two memory regions (for processors that have multiple RAM regions), I can use a single loop to fill all of the stack memory.

    If you configure the OS to supply the stacks, then you should still have access to a symbol for the memory area the OS will make use of, so you don't need to find the individual start address of each task stack.
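
    A minimal sketch of this fill-before-create approach, assuming two hypothetical task stacks and the RTX types from RTL.h:

    #include <RTL.h>                          /* RTX types such as U64          */

    /* hypothetical task stacks, sized in multiples of 8 bytes (U64)            */
    static U64 Task1_Stk[512/8];
    static U64 Task2_Stk[256/8];

    /* fill one stack array with a recognisable pattern so its high-water
       mark can later be inspected in the debugger's memory window              */
    static void fill_stack (U64 *stk, unsigned int bytes) {
       unsigned int i;
       for (i = 0; i < bytes / 8; ++i) {
          stk[i] = 0xDEADFADEDEADFADEULL;
       }
    }

    /* first task, started by the OS; it would create the other tasks           */
    __task void task1 (void) {
       for (;;) {
          os_dly_wait (10);
       }
    }

    int main (void) {
       fill_stack (Task1_Stk, sizeof (Task1_Stk));
       fill_stack (Task2_Stk, sizeof (Task2_Stk));
       os_sys_init (task1);                   /* tasks are created after the fill */
    }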

Children
  • I see. I'll try to generate an example with a user-defined stack for my idle task 'task3', which needs no stack space for variables. So the only thing I have to do is reserve a stack space of at least 68 bytes and fill it with a pattern:

    
    static U64 Idle_Stk[88/8];
    OS_TID id3;
    
    main(){
       unsigned char pattern[8]= {0xDE, 0xAD, 0xFA, 0xDE, 0xDE, 0xAD, 0xFA, 0xDE};
       int i;
       for(i= 0; i < sizeof(Idle_Stk); ++i)
          memcpy(&Idle_Stk[i], pattern, 8);
       // ...
       os_sys_init(task1);
    }
    //-------------------------------------------
    
    __task void task1(void){
       //...
       id3= os_tsk_create_user(task3, 1, &Idle_Stk, sizeof(Idle_Stk));
       //...
    }
    //-------------------------------------------
    
    

    Is that code right?

    How can I verify that the stack is not split by the linker?

    Best regards and thank you very much so far!

  • I've made a little mistake... sizeof(Idle_Stk) returns 88, but in the for-loop I need the result 11. So the for-loop should look like this:

    for(i= 0; i < (sizeof(Idle_Stk) / 8); ++i)
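
    An equivalent loop that avoids the manual /8 by counting elements instead of bytes (just a sketch of the same fill):

    for(i= 0; i < sizeof(Idle_Stk)/sizeof(Idle_Stk[0]); ++i)
       memcpy(&Idle_Stk[i], pattern, sizeof(Idle_Stk[0]));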
    

  • Not sure where you got your value of 68 from. But it is quite an "odd" value - do note the alignment requirements for the stack. You would normally also size the stacks as a multiple of your alignment requirement.

    So it's quite common to have something like:

    U64 render_stack[1280/8];
    U64 display_stack[1024/8];
    ...
    

  • I successfully tested my first task creation with a user-defined stack (including initializing the stack with a pattern). I could see the pattern in the debugger and how much of it got overwritten. I'm very proud!

    The 68 bytes come from here: http://www.keil.com/support/man/docs/rlarm/rlarm_ar_cfgstack.htm , where it says:
    On the full context task switch, the RTX kernel stores all ARM registers on the stack. Full task context storing requires 64 bytes of stack.

    Additionally, I remember having read somewhere these days that in some cases 4 more bytes are needed for a successful task switch, but I can't find it at the moment.

    That's why I "guessed" that I need at least 68 bytes for the stack of my idle task.

    I verified the stack usage of my idle task with the debugger and found 4 bytes used at the very beginning of the stack and 64 bytes used at the end of the stack, so I believe that 68 bytes are quite fine.

    I want to thank you very much again - I now have a wide set of tools if I need to track down any errors in the future!

    If I get into a situation again where the controller resets while starting, I will investigate the reasons more deeply and report back in this thread.

    So let's go on to estimating how much stack space a task needs. Let's say I have another simple task. Looking at the file generated by the --callgraph linker option, I find a Max Depth of 128 bytes, and the task itself needs 0 bytes of extra stack. So I would simply estimate that the task needs a 196-byte stack (68 bytes of basic stack for the task switch plus 128 bytes for the longest call chain) - see the small sizing sketch after this post.
    Could that be right?
    Another question regarding this task: the task has a local unsigned short. Why does this variable need no stack space?

    Best regards
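
    A minimal sizing sketch of that estimate, assuming the 68-byte context figure from above and rounding up to a multiple of 8 for U64 alignment (the macro names are placeholders):

    #include <RTL.h>                                       /* for the U64 type                */

    #define TASK_CONTEXT_BYTES  68u                        /* 64-byte context + 4-byte marker */
    #define STACK_BYTES(depth)  (((depth) + TASK_CONTEXT_BYTES + 7u) & ~7u)

    static U64 Simple_Stk[STACK_BYTES(128) / 8];           /* 128-byte Max Depth -> 200 bytes */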

  • Note that the compiler can decide to use a register instead of allocating a variable on the stack - then the stack space for that variable will be included in the stack space used for a state save during a task switch.

    The four bytes you saw at one end of the stack were probably the OS overwrite marker, which it uses to detect a stack overflow.

  • Ok, I see. Your explanations sound logical to me, thank you Per.

    To continue with user-defined stack space for most of the tasks in our firmware, I checked the 'Max Depth' value output by --callgraph. Then I added 68 bytes to estimate the maximum stack space needed and rounded the value up to a multiple of eight. That works fine so far.

    But now, there is a task in the callgraph output file that looks like this:

    task4 (ARM, 848 bytes, Stack size 0 bytes, ma96.o(.text), UNUSED)
    

    If I create the task with a user-defined stack of 68 (72) bytes, os_stk_overflow() is called right after the task has been started.
    I wonder why the word 'UNUSED' appears in the callgraph output. The task is called often, and there are several functions that will be called by the task at runtime.

    Why can callgraph not calculate any call chain?

    Why is the task marked as 'UNUSED' in the callgraph output file?

    Should I manually estimate the worst-case call chain for the task?

    Best regards

  • Hello,

    here we go again! I have spent the last few days optimizing the tasks' stack sizes, among other things.

    Today I tried to use a user-defined stack for a task that waits for an event which is set when a USB connection is made to the device.

    I checked the callgraph output, which estimates a Max Depth of 232 bytes for the task, and created my own stack area with a size of 512 bytes.

    Then I changed the task-creation call to use the user-defined stack and incremented the number of tasks with user-provided stacks by 1 in the config file (see the sketch at the end of this post). My plan was to check afterwards, with a pattern, whether the stack is big enough.

    Compiling the code and writing it to the device resulted in a permanent reset. The RTOS stack-overflow handler is not called. The reset occurs before the changed task is even created in the initial task.
    I experimentally changed the stack size to 1096 bytes (which is the task's default stack size in the config file), but nothing changed - the device still resets permanently.
    If I change the task back to an RTOS-defined stack, my program runs correctly.

    So by now I am able to check whether a stack overflow occurs, I have implemented the RTA in my program, and I have disabled caching while debugging. But I have no idea where to look for the reason for this reset.

    Any hints?

    Could the reason be that my own stack is located in a different memory area than the stack provided by the RTOS?
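
    A sketch of this kind of change in isolation, with hypothetical names (usb_task, Usb_Stk, init_task) and assuming the RL-ARM call os_tsk_create_user():

    #include <RTL.h>

    static U64 Usb_Stk[512/8];                 /* my own 512-byte stack              */
    static OS_TID id_usb;

    /* placeholder for the task that waits for the USB event                         */
    __task void usb_task (void) {
       for (;;) {
          os_evt_wait_or (0x0001, 0xFFFF);
       }
    }

    __task void init_task (void) {
       /* before: stack supplied by the RTOS
          id_usb = os_tsk_create (usb_task, 2);                                       */

       /* after: user-provided stack                                                  */
       id_usb = os_tsk_create_user (usb_task, 2, Usb_Stk, sizeof (Usb_Stk));

       os_tsk_delete_self ();
    }

    In RTX_Config.c the count of tasks with user-provided stacks also has to go up by one (I believe the define is called OS_PRIVCNT, but check your own config file).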

  • Maybe writing a pattern into the user-defined stacks can tell you which task's stack is overwritten, and maybe even by how much. On the other hand, RTX does have such a mechanism, and it is not triggered. Maybe it is one of _your_ buffers that is being overrun, which causes entry into an abort handler and is thus not a stack overrun at all?

  • Note that it is possible to have a stack overflow that jumps past the marker the OS may use for overflow detection. When a program declares lots of auto variables, the stack may overflow but leave unused holes - for example a 100-char write buffer that isn't completely filled (a tiny example follows below).

    A simulator that explicitly keeps track of the stack pointer can detect such a stack overflow. But an OS that is limited to a single marker word cannot.

    And as noted - stack overflows are bad, but it is quite easy to get similar problems from buffer overruns, uninitialized pointers or uninitialized array indices.
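
    A tiny sketch of that failure mode with hypothetical names - the large local buffer moves the stack pointer well past the end of the stack in one step, and because only its first few bytes are ever written, a marker word lying in the skipped region is never touched:

    #include <stdio.h>
    #include <string.h>

    void format_status (char *out) {
       char line[100];                     /* SP drops ~100 bytes in one step   */
       sprintf (line, "OK %d", 42);        /* only a handful of bytes written   */
       strcpy (out, line);                 /* the skipped bytes stay untouched  */
    }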

  • Maybe ...

    It's one of those Dealy linker bugs???

  • Dealy? Deadly! Since I'm a native German speaker, I don't know how to take the last post. Is there any known linker bug that I should check my linker for?

    Some more background information for you: 'Use Microlib' is enabled in the target options. I do not know if that has any relevance.

    I really do not have any idea where to search. I have spent the last hours running debugger sessions and trying to find a fixed point at which to catch the error, but every time the program behaves differently. Now my program code is back at the point it was this morning, but it is not resetting any more... really no difference from the code of a few hours ago, yet no reset occurs.

    I wish I had a pro here right by my side! I am tired of this sick program.

  • A simulator that explicitly keeps track of the stack pointer can detect such a stack overflow. But an OS that is limited to a single marker word cannot.

    Is there a chance to use a simulator with this ability on my program?

  • Hi Robert,

    Please don't take that last post seriously.

    The general consensus is that there are no serious bugs in the linker.

    (Unless someone knows something different that they don't want to share.)

  • S(tunned) Steve is just bored and wants to throw gravel in the machinery by h(a)unting Tamir about a previous thread. Nothing you need to worry about.

  • Hi Robert,

    I don't think anyone has asked yet, but what is the source of the reset?

    In other words, what is the value of the RSID register after the reset? (A small read-out sketch follows below.)

    M
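
    A small sketch of how that could be checked on the LPC2368, assuming the RSID definition from the usual LPC23xx.h device header and the bit layout from the LPC23xx user manual (bit 0 POR, bit 1 EXTR, bit 2 WDTR, bit 3 BODR):

    #include <LPC23xx.h>

    /* rough classification of the last reset source, read early in main()    */
    static const char *reset_source (void) {
       unsigned int rsid = RSID;               /* Reset Source Identification  */
       if (rsid & 0x04) return "watchdog";     /* WDTR                         */
       if (rsid & 0x08) return "brown-out";    /* BODR                         */
       if (rsid & 0x02) return "external pin"; /* EXTR                         */
       return "power-on";                      /* POR                          */
    }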