I am trying to write a very simple program that outputs a serial string, including hex codes, when I press a button. It works fine as long as the string does not contain 0x00, but if it does, the 0x00 is treated as the end of the string and nothing after it is output. For example:
printf("\x01\x02\x00\x03\x04");
should result in: SBUF = 0x01, SBUF = 0x02, SBUF = 0x00, SBUF = 0x03, SBUF = 0x04
But who said fwrite() would have to end up at a serial port? I didn't. But that shouldn't matter. It's similar to memcpy() in goal: both can easily be implemented by the user as a loop in C. fwrite() is defined to have the same effect as a loop of putc() (or putchar(), if you like) calls; memcpy() is defined to behave the same as a simple copy loop in C.

So why are these "obvious" things defined as ANSI C Standard Library functions? Because a compiler implementor can do things that the user can't, or that are rather difficult for a user to achieve. E.g. the setup costs of a long string of calls to putc() can be avoided if you know you'll be doing lots of putc()s in a row. Calling a particular incarnation of putchar() in a tight loop may or may not be wasteful because of such setup costs; if it is, a hand-optimized fwrite() becomes a very useful thing.

For now, stdout and stderr would be the only valid FILE* arguments to fwrite() (and stdin, of course, for its companion fread()). But once these are in place, this concept could indeed be extended to support other types of output.