
Need to convert ASCII to float?

In embedded programming, why do we convert ASCII to float in the first place? What is the use of it?

  • Define "we".

    If "we" convert a floating-point number from ASCII format to binary format, it is because someone gave it to us in ASCII format and we need to perform computations on it - and processors don't perform numeric operations while the numbers are in "printed" form.

    But "we" should think twice before using any floating-point numbers - are our processors up to using floating point? And is it needed at all?
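
    As a minimal sketch of the first point: suppose a reading arrives over a serial link as ASCII text (the string "3.14" below is a made-up example). Standard C's `strtod` turns the printed form into a binary `double` the processor can actually compute with:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Hypothetical reading received as ASCII text, e.g. over a UART. */
        const char *ascii = "3.14";
        char *end;

        /* Convert the printed form into binary floating point.
         * While it is text, the CPU cannot do arithmetic on it. */
        double value = strtod(ascii, &end);

        if (end == ascii) {          /* no digits were consumed */
            printf("not a number\n");
            return 1;
        }

        /* Now numeric operations are possible. */
        printf("doubled: %f\n", value * 2.0);
        return 0;
    }
    ```

    On a small micro without an FPU, the same input is often better scaled into an integer (e.g. parse "3.14" as 314 hundredths) to avoid pulling in software floating point - which is the second point above.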