Hello all, I am trying to compile C++ code in Keil uVision3. Saving a file as .cpp causes all color formatting to disappear, and the code will not compile. There is no option to save as a .cpp file either; I have to do it manually. There is an option that associates .cpp files with C++ source, but .cpp is not found in the file-extension lists when I open or save a file (I have to select *.* when I want to see the .cpp file).
I read another post that says saving as .cpp should be all that is needed. What gives? I need to be able to write classes, and this cannot be done in C.
"It can be done in C, it's just not as pretty. In fact, the first C++ compiler actually compiled C++ to C, and the output then had to be compiled with a C compiler."
Now why did you have to wake memories of Cfont :)
I believe that the Ceibo C++ compiler uses the same process. I may be wrong there, though.
Never looked at Ceibo C++. Should have written Cfront however... Not sure what Cfont is - Courier? ;)
Your belief is right; the Ceibo stuff generates C source for the Keil compiler.
My 10 cents' worth on this: there seem to be more and more people who do not understand that, while a GHz PC can easily handle code written for the programmer's convenience, for a (relatively) slow 8-bitter you need to write for the processor's convenience.
Erik
To some extent, you can make use of the improved namespace handling of C++ even on a C51.
If you already have structs and write functions:
struct a_struct { ... };
void do_xx_on_a_struct(struct a_struct *my_struct);

struct a_struct my_struct;
do_xx_on_a_struct(&my_struct);
then you can change to:
class a_class {
public:
    ...
    void do_xx();
};

a_class my_object;
my_object.do_xx();
without extra cost. The problem comes when trying to use full object orientation.
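As a minimal sketch of this "namespace only" use of classes - assuming a hypothetical counter member, which is not in the original post - the two styles compile to essentially the same code, since the hidden 'this' pointer simply replaces the explicit struct pointer:

```cpp
// C style: struct plus a free function taking an explicit pointer.
struct a_struct { int count; };
void do_xx_on_a_struct(struct a_struct *my_struct) { my_struct->count++; }

// C++ style: same data, the function becomes a member. The hidden 'this'
// pointer takes the place of the explicit parameter, so the generated
// call is essentially the same instruction sequence with the same argument.
class a_class {
public:
    int count;
    void do_xx() { count++; }
};
```

Both versions do one pointer pass and one increment; the class buys you scoping, not overhead.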
Virtual methods aren't so much fun on a processor that requires call-tree analysis to globalize auto variables and parameters.
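The reason is the indirection: a virtual call compiles to a jump through a table of function pointers, so the linker cannot see the call tree at the call site. A hedged sketch (the Sensor names are made up for illustration):

```cpp
// A virtual call is an indirect call through the vtable. On a C51, an
// indirect call defeats the linker's call-tree analysis, so overlaid
// "auto" variables can no longer be safely shared between functions.
struct Sensor {
    virtual int read() = 0;
    virtual ~Sensor() {}
};

struct FixedSensor : Sensor {
    int value;
    explicit FixedSensor(int v) : value(v) {}
    int read() { return value; }  // reached only via the vtable below
};

// At this call site the compiler cannot know which read() will run -
// exactly like calling through a raw function pointer in C.
int sample(Sensor &s) { return s.read(); }
```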
Dynamic object creation requires a heap, and a heap does not work well unless the processor has a lot of RAM and preferably also a virtual memory manager. It is devilishly hard to prove that the application will not fail from memory leaks or memory fragmentation.
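The usual embedded workaround is to preallocate everything at build time. As a sketch (this pool class is my own illustration, not part of any Keil or Ceibo library), a fixed pool makes exhaustion a deterministic, testable failure instead of a leak or fragmentation lurking in the heap:

```cpp
// Fixed-size pool: all storage is reserved at compile time, so there is
// nothing to leak and nothing to fragment. acquire() either returns a
// slot or a null pointer - no hidden heap walk, no non-determinism.
template <typename T, int N>
class StaticPool {
    T slots[N];
    bool used[N];
public:
    StaticPool() { for (int i = 0; i < N; ++i) used[i] = false; }
    T *acquire() {
        for (int i = 0; i < N; ++i)
            if (!used[i]) { used[i] = true; return &slots[i]; }
        return 0;  // pool exhausted: a bounded, predictable failure mode
    }
    void release(T *p) { used[p - slots] = false; }
};
```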
Templated code trades code size for a reduced number of source lines - but larger code size is normally also followed by a higher production price when comparing within a processor family.
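To make the trade concrete (a made-up example): each distinct type instantiates a separate copy of the function body in flash, even though the source contains only one definition.

```cpp
// One source definition...
template <typename T>
T clamped_add(T a, T b, T limit) {
    T sum = a + b;
    return (sum > limit) ? limit : sum;
}
// ...but clamped_add<int> and clamped_add<long> become two distinct
// functions in the image: fewer source lines, more code bytes.
```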
A very large part of the C++ runtime library isn't applicable to an embedded target, since the hardware is missing or there is no standardized way to interface the C++ RTL methods with the existing hardware.
C++ constructors/destructors are very powerful for taking care of leaks and avoiding uninitialized struct members. But they represent hidden code that may end up consuming a significant percentage of a tiny processor's available cycles.
Exceptions are "the way" to protect all resources and correctly clean up after a failure. But if you don't have a real stack, how can you then unwind the stack?
Operator overloading gives elegance, but in a way that hides the computational cost. It is easy to get a very elegant program that does not fulfill the real-time requirements. A higher clock speed affects the EMC tests and power consumption, and a faster chip may affect the production cost.
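A small sketch of how the cost hides (the Vec3 type is invented for illustration): 'a + b' looks like a single machine operation at the call site, but actually calls a function that does several additions plus a struct copy.

```cpp
struct Vec3 {
    long x, y, z;
};

// This innocent-looking '+' costs three long additions plus a struct
// copy on return - invisible where it is used, but real cycles on an
// 8-bitter doing this inside a tight loop.
Vec3 operator+(const Vec3 &a, const Vec3 &b) {
    Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z };
    return r;
}
```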
In the end, C++ is quite a nice language. But for embedded targets it is often better to write C-ish applications with limited use of objects - just enough for the namespaces - and with limited use of protected/private together with accessor methods.
The interesting thing is that C51 processors can be had with quite huge flash sizes. But in my view, that flash should be used only for quite small applications, with the extra space used for storing const data or (if IAP is supported) captured measurements or similar. If the code size starts to grow (making it more important to look at namespaces, very clearly specified functions and extensive data structures), then I think it is better to switch to another processor than to start looking at C++. After all, there isn't a significant price difference between 32-bit ARM chips and C51 processors.
The important thing is that switching to a "traditional" 32-bit processor does allow almost full use of the C++ extensions. Dynamic memory allocation should preferably still be avoided (unless we are talking about a processor with very much internal and/or external RAM), except when the application can preallocate all data directly on startup. The amount of non-deterministic behaviour should be minimized as much as possible. The timing jitter between interrupts and the main loop, or between tasks, can't be avoided, but the only reason I can see for intentionally adding non-determinism is when implementing a (pseudo) random number generator.
http://www.keil.com/forum/docs/thread10074.asp