I am looking for opinions on how to write a code sequence in "C" that must execute as an uninterruptible sequence. I want to disable interrupts (globally), execute a code sequence, and then re-enable interrupts. I am asking for your input because I don't see how to guarantee this from what I know of the "C" standard. As I understand it, the compiler is allowed to optimize the sequence so that the actual linear code could be placed outside my intended disable-interrupt/enable-interrupt start and end points. The compiler knows that the enable/disable of interrupts is volatile and must be performed, but it doesn't know that there is an architectural dependency on the code order I want. In other words, part of my sequence could be optimized out to a region where interrupts are not globally disabled. The generated opcodes would still be correct as far as the compiler is concerned, but not from a system-behavior point of view.
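For illustration, here is a minimal sketch of the pattern I mean, assuming a GCC-style toolchain and hypothetical disable_interrupts()/enable_interrupts() intrinsics that expand to volatile inline assembly:

```c
/* Hypothetical intrinsics: assume each expands to a volatile inline
 * assembly instruction that clears/sets the global interrupt flag. */
extern void disable_interrupts(void);
extern void enable_interrupts(void);

static unsigned int shared_counter;   /* deliberately NOT volatile */

void critical_update(unsigned int delta)
{
    disable_interrupts();
    /* The compiler knows the intrinsics have side effects, but it
     * sees no data dependency between them and this statement, so
     * as far as the "C" standard is concerned it may hoist or sink
     * this read-modify-write outside the protected region. */
    shared_counter += delta;
    enable_interrupts();
}
```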
Outside of writing it in assembly, has anyone experienced this, and how did you end up handling it? Thanks in advance for your input.
=> Interrupts and multiple CPUs do need more support in the language standard. (Mike Kleshov) <=
I know almost nothing about such an arduous subject, but I am still curious: what hardware platform is the OP working on? Is it a multi-core or multithreaded (in hardware, like Hyper-Threading) platform?
Is it a multi-core or multithreaded (in hardware, like Hyper-Threading) platform?
It doesn't really have to be. A simple single-core processor with interrupts is enough to cause the headaches mentioned by the OP.
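As a concrete (hypothetical) example of the single-core headache: a 16-bit counter shared with an ISR on an 8-bit machine can be torn mid-read if interrupts stay enabled:

```c
#include <stdint.h>

volatile uint16_t tick_count;   /* incremented in a timer ISR */

uint16_t read_ticks(void)
{
    /* On an 8-bit CPU this load takes two instructions; if the
     * timer interrupt fires between them, the caller can see a
     * half-updated value (e.g. 0x00FF -> 0x0100 read as 0x01FF). */
    return tick_count;
}
```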
Assuming there are some parallel-computing features on the specific platform: is it possible that part of the code gets placed outside the disable/enable sequence, but is actually executed in the correct order at runtime, because the instructions are dispatched to different processing units? Or maybe it is an in-order CPU (like the Intel Atom), so the compiler has to provide some magic.
I am just imagining; I don't have any real knowledge or experience of these arduous subjects. I don't even know how to code in assembly.
Processors with multiple cores, or that are intended to share memory, have special instructions to force synchronization of cache contents and to make sure that writes aren't stuck in pipelines.
But that is a very different issue from the single-processor problem of synchronizing several software threads, or a main thread and an interrupt handler.
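On the multi-core side, C11 at least gives a portable way to issue such a synchronization instruction. A minimal sketch, assuming a toolchain with C11 atomics:

```c
#include <stdatomic.h>

int payload;        /* ordinary data shared with another core */
atomic_int ready;   /* flag that publishes the data */

void publish(int value)
{
    payload = value;
    /* Release fence: neither the compiler nor the CPU may reorder
     * the store to 'payload' past the following store to 'ready'. */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}
```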
But in both situations, you often end up needing assembler to create code blocks that the compiler just can't fiddle with. And you need a "dumb" linker that doesn't try to reorder or rewrite code.
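A common way to get such a compiler-opaque block without dropping fully into assembler is a volatile asm statement with a "memory" clobber. A sketch, assuming GCC-style inline assembly on an ARM Cortex-M target (cpsid/cpsie are that architecture's global interrupt disable/enable mnemonics; other targets need their own):

```c
#define ENTER_CRITICAL()  __asm__ volatile ("cpsid i" ::: "memory")
#define EXIT_CRITICAL()   __asm__ volatile ("cpsie i" ::: "memory")

static unsigned int shared_counter;

void critical_update(unsigned int delta)
{
    ENTER_CRITICAL();
    /* The "memory" clobber tells the compiler that each asm statement
     * may read or write any memory, so it must not cache
     * 'shared_counter' in a register across the boundary, nor move
     * this statement outside the protected region. */
    shared_counter += delta;
    EXIT_CRITICAL();
}
```

This addresses the OP's concern directly: the clobber, not the volatile qualifier, is what pins the memory accesses inside the disable/enable window.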
I don't think we're quite at the point where a CPU turns originally single-threaded code into something that's executed in several threads. That would be kind of a holy grail of parallel computing. ;)
A high-end x86 processor may spend a million transistors just on analyzing the relationships between instructions, reordering them, and then dispatching them to multiple ALU, FP, etc. units.
Some day we may get similar behaviour in larger embedded processors too. But even when it can be done, it would be impossible to keep track of tight timing, so we would most probably still have "dumb" sequential processors for the hard real-time controller tasks, and use a superscalar, multi-core processor as a "back-end" computation engine responsible for number crunching and non-critical tasks.
How do you prove something correct when there is an infinite number of ways your source code can be converted into instructions, and those instructions can be sliced and diced between concurrently working execution units? A tiny little asynchronous interrupt (not to mention an exception) will completely rearrange the execution sequence.
Most probably, we will get better languages for describing concurrent and sequential operations, where the language itself will help with critical sections and concurrency. Even our embedded platforms have come a long way from the hardware that existed when C was invented.
From the Linux kernel documentation:
www.kernel.org/.../volatile-considered-harmful.txt
Why the "volatile" type class should not be used
Just for your reference. (I read the kernel document above, but I don't think I really understand it.)
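As I understand the document's core point (my paraphrase, with a hypothetical example): volatile only forces the compiler to perform each access; it gives no atomicity and no memory ordering, so it is usually the wrong tool where a lock or barrier is actually needed:

```c
volatile int flag;   /* 'volatile' forces the loads, nothing more */

void wait_for_flag(void)
{
    /* The loop does re-read 'flag' on each iteration, but volatile
     * provides no ordering guarantee: data written by the other side
     * before it set 'flag' may still be seen stale here.  The kernel
     * document's advice is to use proper primitives (locks, barriers)
     * instead of sprinkling volatile on shared data. */
    while (flag == 0)
        ;   /* busy-wait */
}
```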