Has anyone any idea where I can get a double precision maths library which will work with the Keil C51 compiler?
Hi Paul I see what you mean about the price - how on earth do they justify that! We have decided that we can just about do the job by using 64-bit integers, provided we put some code in place to stop the numbers overflowing, so we will not need anything more - but thanks for the offer! I wish you well with your task, and if I can help in any way don't hesitate to contact me. Dave
"I see what you mean about the price - how on earth do they justify that!" Have you tried adding-up how long you've spent on this so far? Give this to your accounts department, and ask what cost it represents - be sure to get the full cost, with full overhead weighting, etc. And don't forget the costs of testing, maintenance, documentation, etc, etc,... They may even want to factor-in the time that you haven't been spending on other work...
It's interesting to note that ANSI requires a minimum of only 10 decimal digits of precision for a double and 6 for a float. In fact, that is the only difference in requirements between the two. Long double has the same minimum requirements as double. I don't know whether there is an exemption for freestanding implementations from supporting 10-digit doubles or if this is an example of 'ANSI compliance' being slightly redefined. Either way, I don't think the ANSI minimum requirements for double are in line with most people's expectations.
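As a quick check of what any particular toolchain actually claims, something like this (purely illustrative) prints the guarantees straight out of its own float.h - on a compiler that quietly maps double onto float you would expect DBL_DIG to come back the same as FLT_DIG:

    #include <stdio.h>
    #include <float.h>

    /* Print the decimal-digit guarantees advertised by the compiler's float.h. */
    void show_float_limits(void)
    {
        printf("FLT_DIG = %d\n", (int)FLT_DIG);   /* digits a float preserves  */
        printf("DBL_DIG = %d\n", (int)DBL_DIG);   /* digits a double preserves */
    }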
Good luck Dave. I am going to write some functions to extend the maths to do double precision floating-point maths (+ - / * only) - it doesn't look too hard. Just let me know if you would like to receive a copy. best regards - Paul
Hello Ian Yes - this one had me perplexed. 10 decades would be enough for me, but since the IEEE format calls for 8 bytes with a 53-bit mantissa (including the assumed leading 1) I don't know why ANSI C only calls for 10 decades. Since the maths will be coded in binary, I will use all the mantissa bits, which will give me over 15 decades of resolution. best regards - Paul
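(For the record: 53 × log10(2) ≈ 53 × 0.30103 ≈ 15.95, so a 53-bit mantissa is worth just under 16 significant decimal digits.)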
It was mentioned before, let me repeat (with a suggestion). Why did you choose an 8-bit processor to do 64-bit processing? The processor (Cypress CY7C68013) was chosen because it has a built-in USB 2.0 engine. The Keil tools because a demo came with the Cypress development kit & looked good (still think it is). There are several ARM derivatives with USB 2.0 built in, e.g. Philips. They will cost about the same as the '51 derivative. Erik
Hello Erik The Cypress chip looked good & seemed to do all we needed (if the Keil tools had supported ANSI C we would have been happy). We found out too late that the Keil tools ignore double declarations. To change tracks now would cost a fortune (re-designing & tooling a large multi-layer PCB). If only we could re-run history with the benefit of hindsight... but sadly that isn't possible. Anyway, I am happy that an 8-bit processor with a little extra code will eat the job - (A-2B+C)/(D-2E+F) in double precision floating-point once every 100ms. thanks - Paul
"(if the Keil tools had supported ANSI C we would have been happy)" Rather than missing "tools had supported", I think you missed "time to process using a slow 8 bitter". Alas, that is water under the bridge. If you do a bit of paper calculation, you will see that fixed point with scaling really is not that big a deal. In most cases you can, simply by knowing the possible ranges, scale by fixed values. If not, there always is the "floating fixed point" method. Let us have a look at x = ((a * b)/c). You need one variable, "fudge", set to zero before the calculation. Here we go (syntactically incorrect):
if (a > one_half_max) { a /= 2; fudge--; }
if (b > one_half_max) { b /= 2; fudge--; }
temp = a * b;
while (temp < one_half_max) { temp *= 2; fudge++; }
x = temp / c;
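Fleshed out a little (this is only an illustration - the 32-bit types, the one_half_max threshold and the positive-operand, non-zero-c assumptions are all just to keep it short), it looks something like this:

    /* "Floating fixed point": pre-scale the operands so a*b cannot overflow,
       keep track of the scaling in 'fudge', then undo it at the end.
       unsigned long is 32 bits under C51; range checking of the final
       result is omitted. */
    unsigned long scaled_muldiv(unsigned long a, unsigned long b, unsigned long c)
    {
        signed char fudge = 0;                        /* net number of halvings */
        unsigned long temp, x;
        const unsigned long one_half_max = 0xFFFFUL;  /* keeps a*b within 32 bits */

        while (a > one_half_max) { a /= 2; fudge--; }
        while (b > one_half_max) { b /= 2; fudge--; }

        temp = a * b;

        /* scale the intermediate back up before the divide to keep precision */
        while (temp != 0 && temp < one_half_max) { temp *= 2; fudge++; }

        x = temp / c;

        /* undo the scaling: each fudge-- means x came out a factor of 2 too small */
        while (fudge < 0) { x *= 2; fudge++; }
        while (fudge > 0) { x /= 2; fudge--; }

        return x;
    }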
Hello Erik Many thanks for the code ideas (gives me food for thought). I am now in the throes of coding functions with double precision floats... handling NaN, zero, +INF & -INF conditions for each variable & combination like +INF + -INF seems to take up most of the effort! best regards - Paul
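For anyone following along, the classification itself isn't too bad once the double is held as raw bits - something along these lines (a rough sketch only; the names and the split into two 32-bit halves are just my way of writing it down; the exponent field is all ones for NaN/INF and all zeros for zero/denormals):

    /* Rough classification of an IEEE-754 double passed as two 32-bit halves
       (hi = sign, exponent and top 20 fraction bits; lo = low 32 fraction bits). */
    #define EXP_MASK_HI   0x7FF00000UL   /* 11-bit exponent, in the high word */
    #define FRAC_MASK_HI  0x000FFFFFUL   /* top 20 fraction bits */

    enum dclass { D_ZERO, D_SUBNORMAL, D_NORMAL, D_INF, D_NAN };

    enum dclass classify(unsigned long hi, unsigned long lo)
    {
        unsigned long e = hi & EXP_MASK_HI;
        unsigned long f = hi & FRAC_MASK_HI;

        if (e == EXP_MASK_HI)                 /* exponent all ones  */
            return (f | lo) ? D_NAN : D_INF;  /* any fraction bit set => NaN */
        if (e == 0)                           /* exponent all zeros */
            return (f | lo) ? D_SUBNORMAL : D_ZERO;
        return D_NORMAL;
    }

Once both operands are classified, cases like +INF + -INF reduce to a small table lookup on the two classes before the mantissas are ever touched.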
"I think you missed 'time to process using a slow 8 bitter'." Time is not an issue in some applications. Not having the double datatype is probably just a way of not leading clueless programmers into the temptation of actually using them - which will then make them complain that their stuff does not work (especially when they try to cram heavy double crunching into ISRs).
Hello Christoph You are right (& all the other contributors to this discussion)... it is slow doing this on an 8-bitter. I have just started to test my first attempt & adds go from around 30us with floats to 300us for doubles (10 times slower!!). I hadn't expected such a big impact, but I am writing it in C rather than assembler (I thought C was supposed to be just about as fast as assembler!). Back to the fun of testing... thanks - Paul.
"I hadn't expected such a big impact..." Maybe the $5000 is sounding a bit less unreasonable...? "I thought C was supposed to be just about as fast as assembler!" That does, of course, depend very largely upon your skill... And there are some tasks for which 'C' is particularly unsuited - I guess this could well be one of them?
Hello Paul Many thanks for the offer and I would welcome a copy when you get it all sorted. We are doing polynomial calculations to linearise a 24-bit sensor input on some microbalances (scales). Speed is a secondary requirement - well, no requirement at all really - the most important thing is accuracy. best regards Dave
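(For reference, the evaluation itself is normally done Horner-style, so each coefficient costs one multiply and one add - a rough sketch only, and the coefficient array and degree are just placeholders:)

    /* Horner evaluation of c[0] + c[1]*x + c[2]*x^2 + ... + c[degree]*x^degree. */
    double poly_eval(const double *c, unsigned char degree, double x)
    {
        double y = c[degree];
        unsigned char i;

        for (i = degree; i > 0; i--)
            y = y * x + c[i - 1];

        return y;
    }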
Hello David If you send me your contact details (to paul.bramley@metrosol-ltd.co.uk), I will get in touch & send you the results of my efforts. Best Regards - Paul
"I hadn't expected such a big impact, but I am writing it in C rather than assembler (I thought C was supposed to be just about as fast as assembler!)." This is one of the cases where hand-coded assembler will beat any kind of high-level language (even C) hands down. Having access to the CPU's overflow and carry flags alone will speed up the calculations significantly, since it removes the need to detect these conditions with explicit comparisons. You can, of course, access these flags in C code ... however, this is an ugly hack at best (when you know exactly what you are doing and double-check the resulting compiler output, since the actual behavior of the compiler is pretty much undefined in this case), but much more likely it is a sure-fire way to make the program a horrible, bug-infested nightmare. (Read: don't do it unless you're really willing to verify that the compiler output does what you want it to do, and don't expect anyone after you to do any maintenance on the code.) If you need speed, it might be worth your while to familiarize yourself with '51 assembler (which, due to the simplicity of the CPU, isn't too bad, as you don't have to deal with pipelines and other things that make an assembler programmer's life hard on more modern architectures) to optimize your double arithmetic routines.
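To make the carry-flag point concrete, here is roughly what a multi-word add looks like when it has to be written in portable C (my own illustration, byte-array layout assumed) - in '51 assembler the ADD/ADDC pair propagates the carry with no comparisons at all:

    /* Add two 64-bit values held as byte arrays, least significant byte first.
       Portable C has no access to the carry flag, so the carry must be recovered
       with a comparison after every byte - exactly the overhead ADDC avoids. */
    unsigned char add64(unsigned char *r, const unsigned char *a, const unsigned char *b)
    {
        unsigned char carry = 0;
        unsigned char i;

        for (i = 0; i < 8; i++) {
            unsigned char s = a[i] + carry;
            carry = (s < carry);          /* carry out of the first add  */
            r[i] = s + b[i];
            carry |= (r[i] < s);          /* carry out of the second add */
        }
        return carry;                     /* final carry out of the top byte */
    }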
"We are doing polynomial calculations to linearise a 24-bit sensor input on some microbalances (scales)." Have you considered (and discarded) other approaches for doing this (e.g. a lookup table with interpolation)?
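Something along these lines, for illustration only (the table size, the scaling and the assumption of a monotonic calibration table are all placeholders) - it trades table space for arithmetic and avoids floating point altogether:

    /* Linearise a 24-bit raw reading with a 257-entry table of corrected values
       and linear interpolation between entries.  Assumes the table is monotonic
       and that adjacent entries differ by less than 2^16, so the product below
       stays inside 32 bits. */
    #define TABLE_BITS  8                      /* 2^8 segments over the 24-bit range */
    #define FRAC_BITS   (24 - TABLE_BITS)      /* 16 fractional bits per segment     */

    extern const unsigned long lin_table[(1 << TABLE_BITS) + 1];  /* calibration data */

    unsigned long linearise(unsigned long raw)   /* raw: 0 .. 0xFFFFFF */
    {
        unsigned int  idx  = (unsigned int)(raw >> FRAC_BITS);   /* table segment   */
        unsigned long frac = raw & ((1UL << FRAC_BITS) - 1);     /* position inside */
        unsigned long y0   = lin_table[idx];
        unsigned long y1   = lin_table[idx + 1];

        /* linear interpolation: y0 + (y1 - y0) * frac / 2^FRAC_BITS */
        return y0 + (((y1 - y0) * frac) >> FRAC_BITS);
    }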