I'm an embedded developer wanting to undertake my first ARM project, so I'm completely new to the ARM architecture. I want a low-power device with serial, USB and some ADC channels.
How do I go about picking an architecture? My first thoughts suggested Cortex-M3, but the more I look into it the less sure I am. If there's one thing wrong with ARM, it's the almost infinite number of devices.
Obviously I don't want to go to the trouble of learning a new technology only to find that I've made a bad processor choice (i.e. one that is nearly end-of-life). I've spent many hours looking at many websites and have yet to find any high-level guidance on choosing a first ARM device.
I will not provide a quote, but in this particular instance you are right - r0-r3 can freely be corrupted in any processor mode (I don't know the M3).
Based on: infocenter.arm.com/.../IHI0042C_aapcs.pdf
Section 5.1.1 seems to indicate that you need not preserve r0 to r3:
r0 is named scratch register 1 ... r3 is named scratch register 4
and: "The first four registers r0-r3 (a1-a4) are used to pass argument values into a subroutine and to return a result value from a function. They may also be used to hold intermediate values within a routine (but, in general, only between subroutine calls)."
The "(but, in general, only between subroutine calls)" part seems to indicate that, if the called function is visible to the compiler, the compiler can see whether any of these registers are unused in the function. In the general case, they are assumed to be destroyed.
This also seems to indicate that r0 .. r3 need not be preserved (still 5.1.1): "A subroutine must preserve the contents of the registers r4-r8, r10, r11 and SP (and r9 in PCS variants that designate r9 as v6)."
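To make the caller's side of this concrete, here is a minimal sketch, assuming a GCC/Clang ARM toolchain with extended inline assembly (external_helper and call_helper are hypothetical names, not anything from the document):

    /* The clobber list tells the compiler that r0-r3, r12 and lr may be
       corrupted by the call, exactly as 5.1.1 allows, while r4-r8, r10
       and r11 are trusted to survive it. */
    extern int external_helper(int x);   /* hypothetical AAPCS-conforming callee */

    int call_helper(int x)
    {
        int result;

        __asm volatile (
            "mov  r0, %1          \n\t"  /* first argument is passed in r0 */
            "bl   external_helper \n\t"  /* callee may freely use r0-r3 as scratch */
            "mov  %0, r0          \n\t"  /* result comes back in r0 */
            : "=r" (result)
            : "r" (x)
            : "r0", "r1", "r2", "r3", "r12", "lr", "cc", "memory");

        return result;
    }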
---
Then something about using volatile in loops. We have had a debate about this earlier and did not manage to find definitive proof in the C standard of what is allowed.
The AAPCS specifies in 7.1.5 (my emphasis): "A data type declaration may be qualified with the volatile type qualifier. The compiler may not remove any access to a volatile data type unless it can prove that the code containing the access will never be executed; *however, a compiler may ignore a volatile qualification of an automatic variable whose address is never taken unless the function calls setjmp()*."
In short, a loop whose volatile local loop variable never has its address taken may be removed, so volatile doesn't guarantee that a delay loop will survive.
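To make this concrete, here is a minimal C sketch of the two cases under that reading of 7.1.5 (the function names and the count are mine, for illustration only):

    /* The counter is a volatile automatic variable whose address is never
       taken, so under the 7.1.5 wording quoted above a conforming compiler
       could still discard the whole loop. */
    void delay_maybe_removed(void)
    {
        volatile unsigned i;
        for (i = 0; i < 10000u; i++)
        {
            /* empty body: nothing forces the compiler to keep the loop */
        }
    }

    /* Taking the address removes that escape clause, at the cost of a
       memory access on every iteration - still not a calibrated delay. */
    void delay_harder_to_remove(void)
    {
        volatile unsigned i;
        volatile unsigned *p = &i;       /* address taken */
        for (*p = 0; *p < 10000u; (*p)++)
        {
        }
    }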
Tapeer.
You are magnanimous in your learning.
Always your friend.
Zeusti.
(holding no grudge)
I know, I just found it myself following the lack of response from our talented peer.
'Our talented peer.'
Respect at last.
Ha ha ha, ho ho ho.
I don't know what is worse - making a mistake because I got confused with another processor architecture that I used to work with, or writing delay loops that can kill people. I have a guess that you will pick the first option - well, well, well....
In other words - preserving 2 redundant registers is not going to do any harm to anyone. But writing broken C code like you do - and you even brag about it (making me laugh time after time after time...) - is on the verge of criminality in certain circles.
I have still never seen a compiler remove a loop where the loop variable is volatile. And I haven't been able to find a sentence in the C standard that clearly says whether the compiler is allowed to or not.
But the ARM document seems to say that it is an allowed, but not required, optimization.
Another issue here is that you may have delay loops in a program for more than one reason.
One reason may be to make something happen once per second. Such a product isn't likely to get out on the market without you noticing that it runs at 100 times the normal speed because of a missing delay. Any use of a loop for creating exact delays is a sure way to disaster. Using assembler or __nop() could give a minimum delay, but the loop will not be able to adjust for the interrupt load. Obviously, a timer or similar should be used when the actual delay length is important.
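As a sketch of the timer approach, assuming a Cortex-M part with CMSIS headers (SysTick_Config(), SystemCoreClock and SysTick_Handler are CMSIS names; "device.h" stands in for the vendor's real device header):

    #include <stdint.h>
    #include "device.h"                  /* hypothetical CMSIS device header */

    static volatile uint32_t ms_ticks;   /* incremented by the interrupt */

    void SysTick_Handler(void)
    {
        ms_ticks++;
    }

    void delay_ms(uint32_t ms)
    {
        uint32_t start = ms_ticks;
        while ((uint32_t)(ms_ticks - start) < ms)
        {
            /* busy-wait: the elapsed time does not depend on compiler
               optimisation or on how many other interrupts fire meanwhile */
        }
    }

    /* During initialisation, e.g. SysTick_Config(SystemCoreClock / 1000u);
       gives a 1 ms tick. */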
The other reason may be to create short micro-delays in the code. You may need, say, 30 NOP instructions to let an external signal settle. Using a small loop with a few __nop() calls consumes less code space than having all the __nop() calls in a long sequence; for ARM mode, 30 NOP instructions would consume 120 bytes. As long as the compiler treats __nop() as having a side effect it isn't allowed to optimize away, such software loops should be OK, and shouldn't even need any "volatile" on the loop variable. Such a loop will not create a fixed delay, but rather a minimum delay. That is fine if the requirement is just to get enough settle time on a signal; the extra cycles from the loop instructions, or from an interrupt in the middle of the delay, can then be ignored.
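A minimal sketch of such a micro-delay, assuming the __nop() intrinsic mentioned above (Keil/ARM compilers provide it; other toolchains spell it __NOP() or use inline assembly):

    /* Gives at least roughly 30 NOPs of settle time in a few instructions
       of code space. This is a minimum delay, never an exact one, and the
       counter does not need to be volatile as long as the compiler treats
       __nop() as having a side effect it must keep. */
    static void settle_delay(void)
    {
        unsigned i;

        for (i = 0; i < 10u; i++)
        {
            __nop();
            __nop();
            __nop();
        }
    }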
But the bottom line here is that whenever you design critical hardware - whatever algorithms the program contains - you will have to get the software through a release acceptance test of some form. Such a test is seriously broken if it does not verify all timing mechanisms used by the software. It should verify that the real-time clock ticks one second per second with a suitable precision. If changing a couple of source-code lines results in a day or a week of testing, then a change of compiler, processor frequency or processor model would most definitely not lead to less testing.
The issue with software delays is that the testing may show that the delay is long enough, but it can be hard for the test to uncover potential problems if the software delay gets extended to the maximum because of maximum interrupt loads from all possible sources during the delay. Such behaviour will require a lot of theoretical work too, since a test rig has limitations.
Tapir.
<quote> I have a guess that you will pick the first option - well, well, well.... </quote>
Why do you keep judging the book when you cannot see it?
Can you remember back to when I first said you do not need to preserve r0, etc.?
I then wrote a delay routine in ASSEMBLY, which you said was bad because you said I did not preserve registers. (Now you learn you do not need to.)
Today I wrote a loop in C and you complain. But I did it for you as an example, because you did not believe me. I do not normally write code like that; I did it because I needed to highlight your poor understanding of the register usage.
Now that you know I was right, you still try to say I am bad.
Please read the books. Then you too can maybe write efficient, reliable code.
<quote> but writing broken C code like you do </quote>
Books and false judgements again?
Please read the previous postings.
Remember:
You only guess what I do. I know what I do!
Tamir wrote: "I wonder: can you point to anywhere in the document (that you obviously did not read at all!) to prove that I'm wrong? "
From the point of view of a bystander, I just wanted to point out that you seem to put a lot of emphasis / weight / priority on "proving yourself right". Have you ever thought about why that's the case? It sounds like a very insecure person looking for ways to build up self-esteem.
There is nothing wrong with being wrong: it is human to be wrong.
There is everything wrong with refusing to acknowledge and learn from your wrongs: because you stay wrong that way.
If you look at this very thread, there is no shortage of people trying to explain their bogus reasons in order to "prove" that they are right.
You don't have to be any one of those people, because you are better, as long as you are comfortable with your wrongs.