I am really surprised that, with the latest version of RTX, Keil has changed the behaviour of the millisecond parameter to the delay functions.
See
http://www.keil.com/support/docs/3766.htm
It seems that the delay parameter now has 1 added to it in the latest version of RTX.
This is a significant functional change, and I would have thought it would not have been made without reaching out to the user community. It breaks a ton of existing code that relies on polling at the tick frequency.
I regularly have threads that poll hardware devices every 1 ms, implemented as a simple 1 ms delay in a loop. Granted, the first call to the delay function may return in less than 1 ms, but after that each iteration is consistently 1 ms in duration. With this change I don't believe I will be able to poll at the 1 ms tick frequency any more; the interval becomes 2 ms. Effectively, the minimum polling period has increased to twice the tick period in the latest version.
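For reference, the pattern I am describing is just a polling thread like this (a minimal sketch; poll_hardware() is a stand-in for the real device access):

#include "cmsis_os.h"

extern void poll_hardware(void);   /* stand-in for the real device access */

/* Polling thread: with the old behaviour each iteration takes one 1 ms
   tick (only the first may be shorter); with the new behaviour osDelay(1)
   waits two ticks, so the loop runs every 2 ms. */
void poll_thread(void const *argument) {
  for (;;) {
    poll_hardware();
    osDelay(1);   /* intended: wake on the next 1 ms tick */
  }
}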
I would strongly encourage Keil to restore the original behaviour, but I was wondering whether others share this concern.
But that is just it: it isn't a simple task.
To make my application work with the new RTOS I would need either to change my SYSTICK period to 0.5 ms, so that my state machines still poll every 1 ms, or to change many states over to a new polling period of 2 ms.
Neither is desirable. Halving the SYSTICK period means twice as many timer interrupts, reducing available CPU bandwidth. Changing the polling period, though possible, means that my synchronization period is now twice what it used to be, with possible ramifications given the real-time nature of my application, and the change would touch many threads.
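For what it's worth, the first workaround would amount to a one-line change in the RTX configuration file (a sketch, assuming the stock RTX_Conf_CM.c layout, where OS_TICK is the tick interval in microseconds and OS_CLOCK is the CPU clock in Hz):

/* RTX_Conf_CM.c -- sketch of the 0.5 ms tick workaround */
#define OS_CLOCK  72000000   /* CPU clock in Hz; example value only */
#define OS_TICK   500        /* 0.5 ms tick instead of the default 1000 us */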
In addition, the code is now more confusing: either way the source will say a delay of one timer tick, but the actual delay between polling iterations will be two timer ticks. How is this helpful?
Far from making the behavior easier to understand, we have reduced the readability and understandability of the code: an osDelay(1) in a loop now actually introduces a delay of 2 ms. That doesn't make sense to me.
Hi Andrew,
We have documented this change in the Revision History of CMSIS-RTOS RTX: www.keil.com/.../rtx_revision_history.html. A delay or timeout value of '1' now ensures that the wait is at least 1 millisecond (actually between 1 and 2 ms). Before the modification, a delay or timeout value of '1' was actually between 0 and 1 ms, and this caused complaints from other users.
The real issue is that the behaviour was never specified precisely. Also, osDelay cannot be used to implement interval timers; use the osTimer functions for that instead.
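For example, a 1 ms interval timer with the osTimer API looks roughly like this (a sketch against the CMSIS-RTOS v1 cmsis_os.h; poll_hardware() is a placeholder for the application's work):

#include "cmsis_os.h"

void poll_hardware(void const *argument);   /* placeholder callback */

osTimerDef(poll_timer, poll_hardware);      /* timer definition and callback */

void start_polling(void) {
  /* A periodic timer fires every 1 ms regardless of how long the
     callback takes, unlike a loop built around osDelay(). */
  osTimerId id = osTimerCreate(osTimer(poll_timer), osTimerPeriodic, NULL);
  osTimerStart(id, 1);                      /* period in milliseconds */
}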
We are currently discussing this within the team and are considering reverting to the previous behaviour, where a delay or timeout value was effectively up to 1 millisecond less than the specified value.
Can we get some opinions from other users?
Thanks
Almost all delay functions in existence are implemented in a way where the user must understand the concept of timer granularity, namely that the initial period may be shorter because the delay might be called at a random point within the first time quantum. Those that aren't implemented that way usually manage by internally operating on a much faster time base than the delay parameter: if 1 ms is requested, they might be based on a 1 us timer, so the shortest possible delay is 999 us.
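As a sketch of that second approach (assuming a hypothetical free-running microsecond counter micros_now(); this is not an RTX function):

#include <stdint.h>

extern uint32_t micros_now(void);   /* hypothetical free-running 32-bit us counter */

/* Delay on a 1 us time base: a requested 1 ms delay is at worst 1 us
   short (999 us) instead of up to a full tick short. The unsigned
   subtraction handles counter wrap-around correctly. */
void delay_ms(uint32_t ms) {
  uint32_t start = micros_now();
  while ((micros_now() - start) < ms * 1000u) {
    /* spin */
  }
}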
Sometimes, people are lucky enough to reserve a single timer for their task, allowing them to run it at a high enough frequency that the length of the first tick doesn't matter. One such example is the multimedia timers in Windows.
In this case, Keil is deviating from normal practice for the single reason of protecting beginners from a mistake. Should Keil protect beginners from bad assumptions by forcing their experienced users to suffer?
The scary scenario here would be that a junior developer at Keil recently got burned by assuming the first tick would be a full time period, and so decided that every delay should have +1 added, instead of letting individual developers decide whether they need a guaranteed minimum time, whether it is more important that multiple delays in sequence accumulate correctly, or whether to run a timer at 10 times the granularity so that the first tick can never be shorter than 0.9 times the nominal time.
I don't like this change, even though users can recompile the source code to get their own compatible version. I agree that osTimer is better than osDelay for implementing interval timers, but that doesn't prevent existing code from breaking.
Keil support just made me aware of another change to RTX that wasn't clear in the release notes.
>>> in this new CMSIS RTOS, you must call your delays in milliseconds,
>>> so even if you speed up the OS clock, you can't get a delay of less than 1 mSec.
>>> I don't see a way to use the RTOS to generate a timeout of less than 1 mSec.
>>> You can use a hardware timer to create any timeout you like, but the RTOS is
>>> not set up for that kind of use.
So now my only hope for a workaround (reducing my SYSTICK period to 0.5 ms) won't actually work.
Considering that FreeRTOS, uC/OS-III, and ThreadX all implement the old behavior, I am surprised that Keil wants their RTOS to behave differently, and quite unexpectedly for the advanced user.
Again, please consider making this new functionality configurable.