Guys,
I am having a problem with float division using PK51. I was wondering if anyone could explain this or has seen it before.
I have a float, and want to divide it by 36000. If I do this directly, I get strange results, even negative numbers, although all the variables are positive.
accrued_seconds = (total_accrued_seconds / 36000);
Both variables are floats. However, if I do the following:
accrued_seconds = (unsigned long)(total_accrued_seconds / 10);
accrued_hours = (float)(accrued_seconds / 3600);
I get the correct result. For some reason that I don't understand, using too big a divisor in the first version causes an error, but the two-step version does not. I have also tried the two-step version without the cast to an unsigned long, and I get the erroneous result again.
I tried the same thing with GCC on a PC and of course it works fine.
Any clues anyone?
Cheers,
Dirk
An increasing number of lint-type warnings have been introduced into most PC compilers. Of course with corresponding option flags to turn them on or off.
I can't see why embedded compilers should not go the same route. It takes seconds to silence a warning you don't like. It can take months to notice an assumption error you made - and the cost of updating already-shipped devices can be very high.
A good developer should not make mistakes. And a good organisation should have good methods to test a product with as close to 100% coverage as possible after any single little code change. In the real world, we know that developers do make mistakes - even experienced ones - and we know that some tests are sometimes skipped because they take a lot of time and are considered unaffected by a change. And no company manages to set up tests for 100% coverage. Module tests can't replace system tests, and system tests explode the number of test combinations toward infinity.
Take a digital camera. 100% coverage of the exposure logic would mean every shutter value between 8s and 1/8000s in 1/2-stop and 1/3-stop steps, multiplied by every aperture value spanned by any usable lens in 1/2-stop and 1/3-stop steps, multiplied by every ISO setting between 100 and xx ... multiplied by the different metering modes (spot, evaluative, partial, ...), multiplied by exposure compensation settings, multiplied by flash settings... And the exposure logic is still just a tiny part of the camera.
In the end, the cheapest level to catch an error is immediately in the source code. Every further step the bug survives greatly increases the cost.
Yes, companies should invest in separate tools for static code analysis, but most companies probably don't. A number of good compiler warnings would help save cost and time. And catching a problem with the compiler is still faster than catching it with a separate analysis run.
I can't see why embedded compilers should not go the same route.
Here are two possible reasons why they don't, or shouldn't.
1) From a rather high-level point of view, it would be quite a waste of effort if every tool vendor were to re-invent half of lint. It increases overall tool costs for us all through duplicated work.
2) Different compilers tend to need different changes to the source code to silence warnings: #pragmas, special comments, whatever. In code intended for re-use it can be nigh-on impossible to pass through all used compilers without triggering unwanted warnings.
Ultimately the Unix gurus got it right: use one tool for one task. The tool for the task of checking C code for likely bugs is not the compiler, it's lint. Use it.