I am looking for opinions on how to write a code sequence in C that must be performed as an uninterruptible unit. I want to disable interrupts globally, execute the sequence, and then re-enable interrupts. I don't see how to guarantee this from what I know of the C standard: I believe the compiler is allowed to optimize the sequence so that some of the actual code ends up outside my intended disable/enable window. The compiler knows that the interrupt enable/disable accesses are volatile and must be performed, but it doesn't know that there is an architectural dependency on the code order I want. In other words, part of my sequence could be moved to a point where interrupts are not globally disabled; the generated code is still correct as far as the compiler is concerned, but not from a system-behavior point of view.
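To make the question concrete, the shape of code I have in mind is something like this (disable_interrupts() and enable_interrupts() are just placeholder names for whatever the port actually provides):

extern void disable_interrupts(void);   /* placeholder names */
extern void enable_interrupts(void);

static unsigned int shared_state;       /* static, address never taken */

void update_state(unsigned int new_value)
{
    disable_interrupts();
    shared_state = new_value;   /* must execute with interrupts masked */
    enable_interrupts();
}

Since shared_state is not volatile and its address never escapes this file, I don't see what stops the compiler from moving the assignment before the first call or after the second one.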
Other than writing it in assembly, has anyone run into this, and how did you end up handling it? Thanks in advance for your input.
I guess it would depend on whether the compiler could determine any possible side-effects due to ConnectGrannyAcrossLiveAndNeutral() that might be affected by not having previously completed SwitchOffPower()...
=:0
The compiler can't reorder operations unless it has full visibility and control. Whenever it can't see the body of a called function, it has to treat the call as untouchable. If the function is inlinable, the compiler may look at its individual instructions and interleave them with the caller's - but still only when that doesn't change the order of side effects.
That is why the volatile keyword can be so important in multithreaded applications or when sharing variables with interrupt handlers. The compiler assumes that "normal" memory accesses can be reordered. A volatile variable that turns off the power will instantly tell the compiler that something magic happens that the compiler must not rearrange.
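For example, a minimal sketch of the interrupt-handler case (the handler name is just a placeholder for whatever the vector table uses):

volatile int rx_ready = 0;      /* shared between ISR and main loop */

void uart_rx_handler(void)      /* placeholder interrupt handler name */
{
    rx_ready = 1;
}

void wait_for_byte(void)
{
    /* Without volatile, the compiler could read rx_ready once and spin
     * forever; with volatile, every iteration performs a fresh read. */
    while (!rx_ready)
        ;
}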
> That is why the volatile keyword can be so important in multithreaded
> applications or when sharing variables with interrupt handlers. The
> compiler assumes that "normal" memory accesses can be reordered. A
> volatile variable that turns off the power will instantly tell the
> compiler that something magic happens that the compiler must not
> rearrange.
volatile does not guarantee this for all situations. A nice summary of everything that can go wrong with volatiles is here: www.cs.utah.edu/.../emsoft08-preprint.pdf
Enjoy!
Marcus http://www.doulos.com/arm/
Marcus,
A most outstanding article. Thanks!
An interesting article. However, I'm not quite sure that every description is correct. On page 2, right side, it states that the loop
for (i=0; i<BUF_SIZE; i++) buffer[i] = 0;
does not have any side effects - but modifying _any_ object is defined as a side effect in my copy of the C standard (merely _accessing_ a volatile object is another side effect, calling a function that does either of the two is the third side effect). Hence the compiler would not be free to move the loop around.
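For comparison, if the buffer itself were volatile-qualified there would be no room for debate, since every store would then be an access to a volatile object and would have to be performed, in order, at that point. A small illustration (the size and names are made up for the example):

#define BUF_SIZE 64

volatile unsigned char buffer[BUF_SIZE];

void clear_buffer(void)
{
    int i;
    for (i = 0; i < BUF_SIZE; i++)
        buffer[i] = 0;   /* each store is an observable side effect */
}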
A common mistake I could see would be something along the lines of
int foo, bar;

...

void some_function(void)
{
    ...
    disable_interrupts();
    foo = bar;
    enable_interrupts();
    ...
}
If bar is not volatile, the compiler is free to move the read access (since read accesses to non-volatile variables don't count as side effects), and it may end up being read before the interrupts are disabled.
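One possible fix, sketched with the same placeholder names, is to qualify the shared variables as volatile so their accesses cannot be moved past the surrounding opaque calls:

volatile int foo, bar;          /* shared with an interrupt handler */

void some_function(void)
{
    disable_interrupts();       /* opaque call - compiler can't see inside */
    foo = bar;                  /* volatile accesses stay put */
    enable_interrupts();
}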
I agree with you, but find that I could argue the point either way based on what I read in different sections of the standard.
Someone ought to post this on comp.lang.c for a definitive answer.
Ok, this is a lengthy thread but well worth a read. The code in question is dealt with starting around the 50th post:
groups.google.com/.../eec8b4a8060c510f
=> Interrupts and multiple CPU's do need more support in the language standard. (Mike Kleshov) <=
I know almost nothing about such a difficult subject, but I am still curious about what hardware platform the OP is working on. Is it a multi-core or multithreading (hardware, like Hyper-Threading) platform?
> Is it a multi-core or multithreading (hardware, like Hyper-Threading) platform?
It doesn't really have to be. A simple single core processor with interrupts is enough to cause the headaches mentioned by the OP.
Assuming such a platform has some parallel-computing features, is it possible that part of the code gets placed outside the disable/enable sequence, but is still executed in the correct order at runtime because the instructions are dispatched to different processing units?

Or maybe it is an in-order CPU (like the Intel Atom), so the compiler has to provide the magic instead.
I am just speculating - I don't have any real knowledge or experience of these difficult subjects. I don't even know how to code in assembly.
Processors with multiple cores, or processors intended to share memory, have special instructions to force synchronization of cache contents and to make sure that writes aren't left stuck in pipelines.
But that is a very different issue from the single-processor problem of synchronizing several software threads, or a main thread and an interrupt handler.
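If a C11 toolchain is available, the portable spelling of those synchronization instructions is the fence/atomic machinery in <stdatomic.h>; the fences emit the processor's barrier instructions (e.g. DMB on ARM) as well as restricting compiler reordering. A rough sketch of a producer/consumer pair (the names are made up for the example):

#include <stdatomic.h>
#include <stdbool.h>

int shared_data;
atomic_bool data_ready = false;

void producer(void)
{
    shared_data = 42;
    atomic_thread_fence(memory_order_release);      /* publish the data */
    atomic_store_explicit(&data_ready, true, memory_order_relaxed);
}

bool consumer(int *out)
{
    if (atomic_load_explicit(&data_ready, memory_order_relaxed)) {
        atomic_thread_fence(memory_order_acquire);  /* see the data */
        *out = shared_data;
        return true;
    }
    return false;
}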
But in both situations, you often end up needing to use assembler to create code blocks that the compiler just can't fiddle with. And you need a "dumb" linker that doesn't try to reorder or rewrite code.
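For what it's worth, on a GCC-style toolchain that "block the compiler can't fiddle with" can often be as small as an inline-asm statement with a "memory" clobber. A sketch for an ARM Cortex-M class part (the instruction names apply only to that architecture):

static inline void irq_lock(void)
{
    /* "cpsid i" masks interrupts; the "memory" clobber stops the
     * compiler from moving memory accesses across this statement. */
    __asm__ volatile ("cpsid i" ::: "memory");
}

static inline void irq_unlock(void)
{
    __asm__ volatile ("cpsie i" ::: "memory");
}

void critical_update(volatile unsigned int *reg, unsigned int value)
{
    irq_lock();
    *reg = value;           /* guaranteed to happen with IRQs masked */
    irq_unlock();
}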
I don't think we're quite at a point where a CPU can turn originally single-threaded code into something that's executed in several threads. That would be kind of a holy grail of parallel computing. ;)
A high-end x86 processor may spend a million transistors just on analyzing the relationships between instructions, reordering them, and then dispatching them to multiple ALUs, FP units, and so on.
Some day we may get similar behaviour in larger embedded processors too, but even when it can be done, it would be impossible to keep track of tight timing. So we would most probably still have "dumb" sequential processors for the hard real-time controller tasks, and use a superscalar, multi-core processor as a "back-end" computation engine responsible for number crunching and non-critical tasks.
How do you prove something correct when there is an effectively infinite number of ways your source code can be converted into instructions, and those instructions can be sliced and diced between concurrently working execution units? A tiny asynchronous interrupt (not to mention an exception) will completely rearrange the execution sequence.
Most probably, we will get better languages for describing concurrent and sequential operations, where the language will help with critical sections and concurrency. Even our embedded platforms have come a long way from the hardware platforms originally in existence when C was invented.
From the Linux kernel documentation:
www.kernel.org/.../volatile-considered-harmful.txt
Why the "volatile" type class should not be used
Just for your reference. (I have read the above kernel documentation, but I don't think I really understand it.)