I am really surprised that, with the latest version of RTX, Keil has changed the functionality of the millisecond parameter to the delay functions.
See
http://www.keil.com/support/docs/3766.htm
It seems that the delay parameter now has 1 added to it in the latest version of RTX.
This is a significant functional change that I would have thought would not have been implemented without reaching out to the community of users. This change breaks a ton of existing code that relies on polling intervals of the tick frequency.
I regularly have threads that implement 1 ms polling of hardware devices. This is implemented as a simple delay of 1 ms. Granted, the first call to this delay function may return in less than 1 ms, but after that it is consistently 1 ms in duration. With the changes I don't believe I will be able to poll at the tick frequency of 1 ms; it would be 2 ms. It seems to me that the minimum polling period has effectively been increased to twice the tick period in the latest version.
I would strongly encourage KEIL to put back the original functionality, but I was wondering if others had the same concern.
I am really surprised to see that Keil would make such a change that will impact their customers' existing code so drastically! As Andrew mentioned, it is now impossible to get timing from the RTOS at 1 ms granularity. This really messes up my existing code and will require quite a large change to be compatible. I would strongly urge Keil to reverse this change.
Thanks!
Am I misunderstanding something? The Keil change just adds one on entry to the function. The reason for doing so seems sound to me. What's difficult about changing existing code to subtract one for use with that change? For the meticulous, that addition could be a macro with a value based on the RTX version.
Yes, you are missing the fact that you can't subtract 1 from 1. That leaves 0, which doesn't cause any delay.
The point is that there is lots of existing code that relies on being able to create repeating timing delays of single increments of the RTOS timer interval, which is typically 1 ms. This change breaks that code, because now the minimum timing delay is 2 ms.
An example is in my state machine code implemented as a thread that jumps to a function depending on a state variable.
In one state function I might wait on a mail message, in another I might wait on a signal, and in another I might poll on a hardware signal. That polling is currently implemented as an osDelay(1), which effectively polls the hardware once every ms. There may also be other side effects that are dependent on that delay having a 1 ms period.
I realize that the first time that this state runs that the delay may be less than 1ms, but after the first iteration the delay is very accurately 1 ms.
Now, with the change, my polling frequency is reduced, with other potential side effects, and there is no workaround to get it to poll at 1 ms short of reducing the SYSTICK period to 0.5 ms, which reduces processor bandwidth due to the increased number of timer interrupts it must service. That is all notwithstanding the confusion from a readability perspective, where an osDelay(1) now really means a 2 ms delay.
Nope. Can't see the problem.
Reading: http://www.keil.com/support/docs/3766.htm
Looking at os_dly_wait (since I use RL-ARM RTX):
os_dly_wait(1 /* rest of current tick */ + 1) ; /* in KEIL RL-ARM RTX */
The library code now adds one on entry. I change my code to pass one less and, hey presto, the library uses the same value as it did before.
You are missing that I am using a delay of '1'. Subtracting 1 from 1 leaves 0. Looking at the RTX source, you will find that when you pass 0 to the delay function it will not block; there will be no delay, so I can't use your workaround in my case.
Please don't take offence, your code may have worked but it could be argued that it was not safe in the first instance.
If this is the only complaint of the Keil change, I would personally accept it. At least the operation is now fully defined.
No offence taken, but without looking at my code I don't know why you would think it wasn't safe? Do you mean because the initial delay was anywhere between 0 and 1 second instead of between 1-2 seconds? In either case there is a variable timing that needs to be accounted for.
I am curious however in the following code what would you expect to happen?
osDelay(1); osDelay(1); osDelay(1); osDelay(1); osDelay(1);
Regardless of whether anyone would actually do this, a straightforward reading of the code suggests it should cause a delay of approximately 5 ms. With the new RTX, however, the delay will be between 9 and 10 ms; this is not at all obvious and doesn't make sense to me.
In my first paragraph of the previous comment I meant ms not seconds.
My comment really relates to the small value you are using for the delay. With a granularity of 1ms, I consider it unwise to play around with delays around that value and I would endeavor not to do so.
I take your point concerning the decrease of the SYSTICK period, but that is the cost/compromise which must be considered.
Anyway. That's my view. It's likely others will have their own take on it.
"My comment really relates to the small value you are using for the delay. With a granularity of 1ms, I consider it unwise to play around with delays around that value and I would endeavor not to do so."
I agree with this - and would recommend that developers think carefully about adding a manual +1 when they need to play with really short delays and require a hard minimum delay. Or maybe consider a secondary timing mechanism, such as a faster timer + interrupt + event.
But I would not normally expect:
delay(10); delay(10); delay(10); delay(10); delay(10);
to be intentionally designed to give 54-55 ticks of delay. With a more traditional delay, the expected outcome would be 49-50 ticks, as long as nothing else jumps in and steals time (as would be normal on a Windows machine, where you can't control when the next task switch will happen).
It seems Keil provided a simple workaround in the original support ticket: the user can add 1 to the delay parameter if they want a minimum number of milliseconds of delay. Actually, I wouldn't call this a workaround; I would call it simply explaining how an OS delay function works. This is exactly what I do when I need a minimum delay.
I use 1ms delay many places in my code, with the understanding that the first time through it may be less than 1ms, which is acceptable for my application. But now, Keil has made a change to RTX making it impossible to set a delay of 1ms. Also, now all the delays that I’ve added 1 to achieve a minimum delay will delay an additional millisecond, which I don’t want.
What isn't safe about using a 1ms delay? The os change was brought about by a user using a 1ms delay. If you assume that the first call to the delay function will delay exactly 1ms, then that is simply incorrect because that is not how delay functions work (at least in the older version).
I cannot accept this new version of RTX. I hope Keil will revert this change and simply explain how users can achieve a minimum number of milliseconds of delay if required. I hope the examples above, showing how multiple calls to osDelay do not accumulate time correctly, will be justification enough for Keil to revert this change.