Guys,
I am having a problem with float division using PK51. I was wondering if anyone could explain this or has seen it before.
I have a float and want to divide it by 36000. If I do this directly, I get strange results, even negative numbers, although all the variables are positive.
accrued_seconds = (total_accrued_seconds / 36000);
Both variables are floats. However, if I do the following:
accrued_seconds = (unsigned long)(total_accrued_seconds / 10);
accrued_hours = (float)(accrued_seconds / 3600);
I get the correct result. For some reason that I don't understand, using too big a divisor in the first version causes an error, but the two-step version does not. I have also tried the two-step version without the cast to unsigned long, and I get the wrong result again.
I tried the same thing with GCC on a PC and of course it works fine.
Any clues anyone?
Cheers,
Dirk
The standard doesn't say anything about a requirement to issue warnings, but most compilers do generate warnings for integer constants larger than what fits in an int.
I tested a version of gcc with a value larger than a signed int, in this case 3000000000 on a 32-bit machine. The warning: test.c:5: warning: this decimal constant is unsigned only in ISO C90
Would you like a compiler that does not upgrade to unsigned or long, but silently makes it a negative value without a warning, even when all warnings are turned on? Isn't the goal of warnings to inform the developer that what he expects and what he gets will not match?
The suffixes u, l, ul etc. are there to specify a required size of an integer constant. A compiler for an embedded target should really consider giving a warning about the potential cost of upgrading a constant to long. And using a negative integer value instead of switching to an unsigned constant should merit a warning.
"And using a negative integer value instead of switching to an unsigned constant should merit a warning."
In the case of C51, with its 16-bit int and 32-bit long, if it does in fact use the integer value -29536 for the constant 36000, then it is not abiding by the standard, which says it should have treated 36000 as 'long int'. "Switching" to an unsigned type should not be an option in this case.
Correct. Since long is larger than int, it should first have stepped up to a signed long.
then it is not abiding by the standard, which says it should have treated 36000 as 'long int'
Which makes it all the more important for the OP to answer the question raised upthread: is this happening with Keil C51, and if so, was its option "standard integer promotions" turned on? Because if it is, and it wasn't (in that order ;-), it's not really surprising that this happened. C51 doesn't promise ANSI compatibility in that mode, so code shouldn't expect it.
I would prefer it to do what I expect, then no warning would be required.
For a compiler for an embedded target, the compiler should really consider giving a warning about potential cost of upgrading a constant to long.
I'm not so sure. One of the purposes of standardisation is to provide you with a manual which, if followed to the letter, allows you to write code which should produce the result you expect. If the compiler translates that code in any way that does not conform to the standard it should issue a warning.
While it may seem a good idea to warn about other things too, in order to be helpful, before you know it you can't see the wood for the trees.
And using a negative integer value instead of switching to an unsigned constant should merit a warning.
Absolutely. It's a deviation from the standard.
Your implicit trust of human beings and their flawed abilities (evident in the forum, from time to time) is the scariest thing about you, Jack.
"Your implicit trust of human beings and their flawed abilities..."
It would be a terribly boring world if one didn't have trust.
Can't fly? Don't trust the pilot. Can't drive? Don't trust the EMU. Can't drink? Don't trust the bottling company.
etc
That assumes that your expectation is correct!
Unfortunately, the compiler has no way to check your expectations - so it tries to be "helpful" by issuing warnings where people's expectations are known to be frequently wrong.
"While it may seem a good idea to warn about things other than this to be helpful before you know it you can't see the wood for the trees."
Absolutely!
I have one more:
Jack Sprat: Can't program? Don't trust the compiler... :-)
An increasing number of lint-type warnings have been introduced into most PC compilers. Of course with corresponding option flags to turn them on or off.
I can't see why embedded compilers should not go the same route. It takes seconds to silence a warning you don't like. It can take months to notice an assumption error you made - and the cost of updating already-shipped devices can be very high.
A good developer should not make mistakes. And a good organisation should have good methods to test a product with as close to 100% coverage as possible after any single little code change. In the real world, we know that developers do make mistakes - even experienced ones - and we know that some tests may sometimes be skipped because they take a lot of time and are considered unaffected by a change. And no company manages to set up tests for 100% coverage. Module tests can't replace system tests, and system tests explode the number of test combinations towards infinity.
Take a digital camera. 100% coverage of the exposure logic would mean every shutter value between 8s and 1/8000s in 1/2-stop and 1/3-stop steps, multiplied by every aperture value spanned by any usable lens in 1/2-stop and 1/3-stop steps, multiplied by every ISO setting between 100 and xx ... multiplied by the different metering modes (spot, evaluative, partial, ...), multiplied by exposure compensation settings, multiplied by flash settings... And the exposure logic is still just a tiny part of the camera.
In the end, the cheapest level to catch an error is immediately in the source code. Every further step the bug survives greatly increases the cost.
Yes, companies should invest in separate tools for static code analysis, but most companies probably don't. A number of good compiler warnings would help save cost/time. And catching a problem with the compiler is still faster than catching it with a separate analysis run.
This really is why people need to make sure that their expectations are correct. There's only one way to do that, but unfortunately not everyone is prepared to do it.
This must be too subtle for me. I don't implicitly trust human beings and/or their flawed abilities. You'll have to explain what you mean.
that assumes that the manual is written in language intelligible to the user...
If someone has the skill to interpret "Unexpected end of file" as "Missing closing brace", then I shouldn't foresee much of a problem with a mere manual!
Something that I do think is useful and Microsoft seem to be taking on board is adding a bit of helpful information to existing warnings/errors, so instead you get:
C12345: Unexpected end of file (Possibly a closing brace is missing)
or something similar. Now, there's progress.
"If someone has the skill to interpret "Unexpected end of file" as "Missing closing brace" then I shouldn't foresee much of a problem with a mere manual!"
That'd be a trivial example.
But trying to decipher what some of the arcane descriptions in the ISO spec actually mean is nigh-on impossible for mere mortals...
"Something that I do think is useful and Microsoft seem to be taking on board is adding a bit of helpful information to existing warnings/errors"
Keil made a start at this in C51: you could press F1 on an error message and get a (hopefully) fuller description; e.g.,
http://www.keil.com/support/man/docs/c51/c51_c101.htm
Unfortunately, it's pretty half-baked and most of the "descriptions" are missing any really useful explanation.
:-(
The facility doesn't even exist at all in the ARM tools, as far as I can see.
Ach, Andy, I must agree and expand: Have you tried to look up details about linker errors/warnings for ARM chips at http://infocenter.arm.com? Oh, yeah.