In embedded programming, why do we convert ASCII to float initially? What is the use of it?
Define "we".
If "we" convert a floating point number from ASCII format to binary format it is because someone gave it to us in ASCII format and we need to perform any computations - and processors don't perform numeric operations while the numbers are in "printed" form.
But "we" think twice before using any floating point numbers - are our processors up to using floating point? And is it needed?