Hello, I am working on an STR9 scheduler (it is open source - I will post a link here when it is ready, on the condition that you help me solve this problem :-) ). I don't understand why my code to measure CPU utilization does not work. It is very simple really: upon startup an idle task counts as fast as possible, with the scheduler disabled, for one second. It saves the accumulated value, then the scheduler is re-enabled and the counting begins again, evaluated every second. The ratio between the once-per-second count and the initial accumulated value gives the idle fraction, and 100% minus that is the CPU utilization - for example, 700,000 counts per second against an initial 1,000,000 would mean 30% load. The amazing thing is that the once-per-second count, with the scheduler enabled, tends to be bigger than the initial count (with the scheduler disabled)! I am quite sure there is no overflow involved - it goes wrong even if I make a measurement every 300 milliseconds (as demonstrated below). Any ideas/recommendations?
// idle task. always executed, and scheduled first
static void idle_task(void)
{
    int32u l_scheduler_state ;
    int32u l_counter = 0 ;

    // calibration pass: count with the scheduler disabled
    scheduler_disable(&l_scheduler_state) ;
    while (1)
    {
        // it is faster to count in a register-allocated variable
        // and later assign the accumulated value to a static
        ++l_counter ;
        if (timer_poll(IDLE_TASK_TIMER, MILLISECONDS_X_10(30), 0))
        {
            s_performance_counter_init = l_counter ;
            l_counter = 0 ;
            break ;
        }
    }
    scheduler_restore(l_scheduler_state) ;

    // measurement pass: count with the scheduler enabled
    while (1)
    {
        ++l_counter ;
        if (timer_poll(IDLE_TASK_TIMER, MILLISECONDS_X_10(30), 0))
        {
            s_cpu_utilization = 100 - ((l_counter * 100) / s_performance_counter_init) ;
            l_counter = 0 ;
            debug_message("cpu load %d%%\n", s_cpu_utilization) ;
        }
    }
}
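A small standalone illustration of the symptom (invented numbers, not taken from the scheduler code): whenever the measured count comes out bigger than the calibration count, the unsigned arithmetic in the formula wraps around instead of going slightly negative.

/* Illustration only: what the utilization formula yields when the measured
   count exceeds the calibration count. The two values are invented. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t baseline = 29800u ;   /* counts accumulated with the scheduler disabled */
    uint32_t measured = 30500u ;   /* counts accumulated with the scheduler enabled  */

    /* same formula as in idle_task(): measured > baseline, so the result wraps */
    uint32_t utilization = 100u - ((measured * 100u) / baseline) ;
    printf("cpu load %u%%\n", utilization) ;   /* prints 4294967294, not a small negative value */
    return 0 ;
}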
Arthur, F D, thanks for your replies. I know what's wrong: the answer is in the function's name! Having several tasks execute instructions causes jitter in the exact moment the value of the hardware timer is evaluated, so the measurement windows do not all have exactly the same length. I wanted to reduce interrupt latency by moving some functionality into program context, but that is not working as expected and does not generate predictable delays, which is unacceptable. I will fix it, and try to make it fit elegantly. You will get to see the result (hopefully soon...)
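For completeness, here is a minimal sketch of one way to make the calculation tolerant of that jitter, assuming the actual length of each window can be read from a free-running hardware tick counter. compute_utilization() and its parameter names are illustrative only, not part of the posted scheduler API.

/* Sketch: normalize each idle count by the window length that actually elapsed,
   so a slightly longer polling window can no longer push the result past 100%. */
#include <stdint.h>

static uint32_t compute_utilization(uint32_t calib_count, uint32_t calib_ticks,
                                    uint32_t meas_count,  uint32_t meas_ticks)
{
    /* idle counts per hardware tick in each window (64-bit to avoid overflow) */
    uint64_t calib_rate = ((uint64_t)calib_count * 1000u) / calib_ticks ;
    uint64_t meas_rate  = ((uint64_t)meas_count  * 1000u) / meas_ticks ;

    if (meas_rate >= calib_rate)
        return 0u ;                 /* clamp: the idle loop ran at full speed, so 0% load */

    return (uint32_t)(100u - ((meas_rate * 100u) / calib_rate)) ;
}

For example, compute_utilization(29800, 2980, 30500, 3050) returns 0: both windows show the same counts-per-tick rate, so the longer second window no longer inflates the load.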