
How do you choose an ARM family

How do I go about picking an architecture? My first thoughts suggested the Cortex-M3, but the more I look into it, the less sure I am.

Obviously I don't want to go to the trouble of learning a new technology only to find that I've made a bad processor choice (i.e. one that is nearly at end-of-life). I've spent many hours looking at many websites and have yet to find any high-level guidance on choosing a first ARM device.

If there's one thing wrong with ARM, it's the almost infinite number of devices.

I'm an embedded developer wanting to undertake my first ARM project, so I'm completely new to the ARM architecture. I want a low-power device with serial, USB and some ADC channels.

  • I have still never seen a compiler remove a loop whose loop variable is volatile, and I haven't been able to find a sentence in the C standard that clearly says whether or not the compiler is allowed to. (A sketch of the kind of loop in question is at the end of this post.)

    But the ARM document seems to say that it is an allowed, but not required, optimization.

    Another issue here is that you may have delay loops in a program for more than one reason.

    One reason may be to make something happen once per second. Such a product isn't likely to reach the market without you noticing that it runs at 100 times the normal speed because the delay is missing. Any use of a loop for creating exact delays is a sure way to disaster: assembler or __nop() can give a minimum delay, but the loop cannot adjust for the interrupt load. Obviously, a timer or similar should be used when the actual delay length is important (see the timer sketch at the end of this post).

    The other reason may be to create short micro-delays in the code. You may need, say, 30 NOP instructions to let an external signal settle. A small loop around a few __nop() calls consumes less code space than writing all the __nop() calls out in one long sequence; in ARM mode, 30 NOP instructions would consume 120 bytes. As long as the compiler treats __nop() as having a side effect it isn't allowed to optimize away, such software loops should be fine, and shouldn't even need any "volatile" on the loop variable. Such a loop will not create a fixed delay, but a minimum delay, which is fine if the requirement is just to give a signal enough settle time: the extra cycles from the loop instructions, or from an interrupt in the middle of the delay, can then be ignored. A sketch of such a loop is also included at the end of this post.

    But the bottom line here is that whenever you design critical hardware - whatever algorithms the program contains - you will have to get the software through a release acceptance test of some form. Such a test is seriously broken if it does not verify every timing mechanism the software uses. It should verify that the real-time clock ticks one second per second with a suitable precision. And if changing a couple of source-code lines results in a day or a week of testing, then a change of compiler, processor frequency or processor model would most definitely not lead to less testing.

    The issue with software delays is that testing may show the delay to be long enough, yet it can be hard for a test to expose the potential problems when the software delay is stretched to its maximum by a worst-case interrupt load from all possible sources during the delay. Analysing such behaviour also requires a good deal of theoretical work, since a test rig has its limitations.
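    For reference, here is a minimal sketch of the kind of loop being discussed: a crude busy-wait whose only protection against removal is the volatile loop counter. The iteration count is an arbitrary placeholder, not a value calibrated for any particular device, and whether an optimizer may still drop the loop is exactly the open question above.

        void crude_delay(void)
        {
            volatile unsigned long i;

            /* Empty body: the intent is that the volatile counter forces the
               compiler to perform every load and store, so the loop survives
               optimization. The actual delay still depends on the clock, the
               compiler and the interrupt load, which is why this pattern is
               discouraged above when the delay length really matters. */
            for (i = 0; i < 100000UL; i++)
            {
            }
        }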

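    As a contrast, here is a minimal sketch of the timer-based approach, assuming a Cortex-M device with CMSIS-Core available: SysTick_Config(), SystemCoreClock and SysTick_Handler() are standard CMSIS names, while "device.h", delay_init(), delay_ms() and g_ms_ticks are just illustrative names invented for this sketch.

        #include <stdint.h>
        #include "device.h"          /* placeholder for your vendor's CMSIS device header */

        static volatile uint32_t g_ms_ticks;   /* advanced once per millisecond */

        void SysTick_Handler(void)             /* name fixed by the CMSIS vector table */
        {
            g_ms_ticks++;
        }

        void delay_init(void)
        {
            /* Fire the SysTick interrupt 1000 times per second. */
            (void)SysTick_Config(SystemCoreClock / 1000u);
        }

        void delay_ms(uint32_t ms)
        {
            uint32_t start = g_ms_ticks;

            /* Waits at least 'ms' milliseconds. The timer keeps counting no
               matter what the optimizer does to this loop or how heavy the
               interrupt load is, which is the whole point compared with a
               calibrated software loop. */
            while ((uint32_t)(g_ms_ticks - start) < ms)
            {
                /* could sleep with __WFI() here instead of spinning */
            }
        }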

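    And here is a minimal sketch of the micro-delay loop described above, assuming the __nop() intrinsic of the ARM/Keil compiler (other toolchains spell it differently, e.g. __NOP() with CMSIS); the iteration count is illustrative, not calibrated for any particular signal.

        /* __nop() is provided by the toolchain as a compiler intrinsic;
           adjust the name or add the required header for your compiler. */

        void settle_delay(void)
        {
            unsigned int i;

            /* Ten iterations of three NOPs give at least roughly the delay of
               30 inline NOPs, in much less code space. Assuming the compiler
               treats __nop() as a side effect it must keep (as discussed
               above), the loop is not removed even without 'volatile' on the
               counter; loop overhead or an interrupt only lengthens the
               delay, which is acceptable for a minimum settle time. */
            for (i = 0; i < 10u; i++)
            {
                __nop();
                __nop();
                __nop();
            }
        }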