I am really surprised that, in the latest version of RTX, Keil has changed the behaviour of the millisecond parameter passed to the delay functions.
See
http://www.keil.com/support/docs/3766.htm
It seems that the delay parameter now has 1 added to it in the latest version of RTX.
This is a significant functional change that I would have thought would not have been made without reaching out to the community of users. It breaks a lot of existing code that relies on polling intervals equal to the tick period.
I regularly have threads that poll hardware devices every 1 ms, implemented as a simple 1 ms delay. Granted, the first call to the delay function may return in less than 1 ms, but after that each iteration is consistently 1 ms long. With the change I don't believe I will be able to poll at the 1 ms tick rate any more; the period would become 2 ms. It seems to me that the minimum polling period has effectively been doubled to two tick periods in the latest version.
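For concreteness, here is a minimal sketch of the kind of polling thread I mean, assuming the CMSIS-RTOS2 osDelay() API (RTX4 code would use os_dly_wait() instead); poll_hardware() is just a hypothetical placeholder for the real device access:

```c
#include "cmsis_os2.h"   /* assumed CMSIS-RTOS2 header; RTX4 would use rtl.h / os_dly_wait() */

extern void poll_hardware(void);   /* hypothetical placeholder for the real device access */

void polling_thread(void *argument) {
    (void)argument;
    for (;;) {
        poll_hardware();
        /* Old behaviour: blocks for one tick (1 ms at a 1 kHz tick), except that the
           very first call may return early if it lands part-way through a tick.
           With the change described in the linked article, 1 is added to the parameter,
           so every iteration waits two ticks and the polling period becomes 2 ms. */
        osDelay(1);
    }
}
```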
I would strongly encourage Keil to restore the original behaviour, but I was wondering whether others share this concern.
Almost all delay functions in existence are implemented in a way where the user is expected to understand timer granularity and accept that the initial period may be shorter, because the delay might be called at a random point within the first time quantum. Those that are not implemented that way normally cope by operating internally on a much faster time base than the delay parameter: if 1 ms is requested, they might be based on a 1 us timer, giving a shortest possible delay of 999 us.
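As a rough illustration of that faster-time-base approach (a sketch, not any particular library's implementation), assuming a hypothetical free-running 1 MHz counter called timer_us:

```c
#include <stdint.h>

/* Hypothetical free-running 1 MHz counter; stand-in for whatever hardware timer is available. */
extern volatile uint32_t timer_us;

/* Delay built on a time base 1000x finer than the requested unit: a request for 1 ms can be
   truncated by at most one 1 us tick, so the shortest possible delay is 999 us, not 0. */
void delay_ms(uint32_t ms)
{
    uint32_t start = timer_us;
    uint32_t ticks = ms * 1000u;
    while ((uint32_t)(timer_us - start) < ticks) {
        /* busy-wait; a real implementation would yield to the scheduler here */
    }
}
```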
Sometimes, people are lucky enough to have a timer reserved for their task, allowing them to run it at a high enough frequency that the length of the first tick doesn't matter. One such example is the multimedia timers in Windows.
In this case, Keil is deviating from normal practice for the single reason that they want to protect beginners from making a mistake. Should Keil protect beginners from careless assumptions by forcing their experienced users to suffer instead?
The scary scenario here would be that a junior developer at Keil recently got burned by the specific assumption that the first tick will be a full time period, and so decided that the delay should always add +1, instead of letting individual developers decide whether they need a guaranteed minimum time, whether it is more important that multiple delays in sequence accumulate correctly, or whether they want to run a timer at 10 times the granularity so that the first tick can never be shorter than 0.9 times the nominal time.
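If the worry is developers who need both a guaranteed rate and correct accumulation, the usual answer is to schedule against an absolute tick count in the application, rather than change the meaning of the relative delay for everyone. A minimal sketch, assuming the CMSIS-RTOS2 osKernelGetTickCount()/osDelayUntil() calls and a hypothetical poll_hardware() placeholder:

```c
#include "cmsis_os2.h"   /* assumed CMSIS-RTOS2 API */

extern void poll_hardware(void);   /* hypothetical placeholder for the real device access */

/* Fixed-rate polling that accumulates correctly: wake-ups are scheduled against an
   absolute tick count, so jitter in one period is not carried over into the next. */
void polling_thread_fixed_rate(void *argument) {
    (void)argument;
    uint32_t next = osKernelGetTickCount();
    for (;;) {
        poll_hardware();
        next += 1u;              /* one tick period, i.e. 1 ms at a 1 kHz tick */
        osDelayUntil(next);
    }
}
```

That leaves the choice with the application: use a relative delay when a guaranteed minimum matters, or an absolute deadline when the long-term rate matters.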