RTX timing change

I am really surprised that, in the latest version of RTX, Keil has changed the functionality of the millisecond parameter to the delay functions.

See

http://www.keil.com/support/docs/3766.htm

It seems that in the latest version of RTX the delay parameter now has 1 added to it.

This is a significant functional change that I would not have expected to be made without consulting the community of users. It breaks a great deal of existing code that relies on polling at intervals equal to the tick period.

I regularly have threads that poll hardware devices every 1 ms, implemented as a simple 1 ms delay. Granted, the first call to the delay function may return in less than 1 ms, but after that it is consistently 1 ms in duration. With this change I don't believe I will be able to poll at the 1 ms tick frequency any more; the shortest interval would be 2 ms. It seems to me that the minimum polling period has effectively been increased to twice the tick period in the latest version.

I would strongly encourage Keil to restore the original functionality, but I was wondering whether others share this concern.

  • I agree that the API shouldn't try to shield the user from sample-point synchronization by unconditionally adding 1 to the requested delay value.

    It should be up to the individual developers to add that extra tick.

    Right now, you have changed the behavior to always give a delay that is too long. It is quite strange that existing software should need to subtract 1 just to work around a design change, when it is the less experienced users who should really learn about sampling theory.

    Changing the behavior of an existing API really is a no-no! Especially when the new design always produces a delay that is off by one. A busy loop with a 10 ms delay will no longer manage 100 iterations per second, since each delay will now average 11 ms: roughly 91 iterations instead of 100.

    You introduced medicine with worse side effects than the sickness you wanted to cure.

