another newbie question.
I have a piece of code that I am porting from PICC. In PICC, you can make reference to specific pins ("#define sclk GPIO5" for example), and in your code, you can write to sclk to change the pin output on GPIO5. This makes porting code or changing pin layout so much easier, as you can simply redefine sclk to a different pin to make the code work.
ARM seems to prefer to change its pin output through IOSET / IOCLR.
To make the existing PICC code work, I would prefer to be able to define certain pins logically, and change their states by referencing their logic names.
how do you do that in ARM? Thanks in advance.
There is no "in ARM". The ARM architecture is just that - an architecture.
It is then a question of what peripheral hardware is glued around that core that controls what you can - and cannot - do with your processor.
Because of this, it is hard to give an answer to your question.
But a processor that controls its pins with set and clear registers does not work well with that metaphor.
It may be better to write the source code so that it makes use of inline functions for:
void set_my_signal_x();
void clear_my_signal_x();
The compiler can then inline the actual code that writes a specific bit to a specific set or clear register.
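A minimal sketch of that inline-function approach, assuming LPC2000-style IOSET/IOCLR set/clear registers. The registers are modeled here as plain variables so the snippet is self-contained; on a real chip they would be volatile memory-mapped registers, and the pin number is a made-up example:

```c
#include <stdint.h>

/* On a real LPC2000-family part these would be memory-mapped, e.g.
   #define IOSET (*(volatile uint32_t *)0xE0028004)
   Plain variables stand in here so the sketch is self-contained. */
static uint32_t IOSET, IOCLR;

#define SCLK_PIN 5  /* hypothetical pin assignment */

/* Each function compiles down (inlined) to a single store to the
   set or clear register - no read-modify-write of the port needed. */
static inline void set_sclk(void)   { IOSET = 1u << SCLK_PIN; }
static inline void clear_sclk(void) { IOCLR = 1u << SCLK_PIN; }
```

Moving the pin just means editing SCLK_PIN; every call site stays unchanged.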
Some ARM chips also have a mask register, so you can mask which pins should be updateable, and then perform a direct assignment to a register updating (set or clear) the pins that the mask allows updates to. This can be used for multi-bit assignments - for example, if you are going to write an 8-bit value to a 32-bit wide port (unless the 8 bits are byte-aligned and your ARM also has byte-wide port registers, in which case you don't need to play with any masks).
When designing hardware that requires you to update multiple bits at the same time, you should spend a lot of time considering exactly where to place these signals in relation to each other and to other signals.
thanks, per.
I took a different approach.
#define sclk_pin 2
...
IOSET = 1 << sclk_pin; // set sclk_pin high
...
IOCLR = 1 << sclk_pin; // clear sclk_pin
...
so if I were to change the pin connection (sclk on pin 5, for example), I would just change it to "#define sclk_pin 5", and the rest of the code remains.
not as elegant as PIC but works.
you wrote: "When designing hardware that requires you to update multiple bits at the same time, you should spend a lot of time considering exactly where to place these signals in relation to each other and to other signals."
so if I do "IOSET=0b00010001;", are the 0th and 4th bits updated sequentially (if so, which one first?) or concurrently (almost)?
in PIC, you can update them sequentially by setting the bits individually, or concurrently if you like by setting the port those bits belong to.
I normally use the approach you took.
But I normally name the pin name to include the port it is defined for, so I get
FIO1SET = 1u << P1_STROBE;
This means that if the hardware is modified so the pin is moved to a different port, I will then rename the pin constant. This will result in a compilation error if I don't update the code. While replacing P1_STROBE with P0_STROBE, I will be able to catch all accesses to FIO1SET and have them replaced with FIO0SET at the same time.
The code "IOSET=0b00010001;" will set both pins concurrently. But IOSET can only set pins, and IOCLR can only clear pins. This means that you can't set one pin at the exact same time that you clear another pin. If your ARM has a PIN register, then you can directly assign values to the port pins, i.e. some pins may be set while others are cleared. But this requires either that you can mask out which of the 32 pins to update, or that the relevant pins are grouped so that you can do a 16-bit or 8-bit write operation if you don't want all 32 pins to change.
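To illustrate setting one pin while clearing another in a single write, here is a sketch using LPC23xx-style FIO1MASK/FIO1PIN register names. The registers are mocked as plain variables (on the real part, the mask hardware gates which bits the PIN write actually affects), and PIN_A/PIN_B are hypothetical:

```c
#include <stdint.h>

/* Stand-ins for the LPC23xx fast-GPIO registers; on real hardware
   these are volatile memory-mapped registers. */
static uint32_t FIO1MASK, FIO1PIN;

#define PIN_A 0
#define PIN_B 4

/* Set PIN_A and clear PIN_B in one PIN-register write. On the real
   part, any bit that is 1 in FIO1MASK is untouched by the write, so
   all other pins keep their state. */
void set_a_clear_b(void)
{
    uint32_t old_mask = FIO1MASK;
    FIO1MASK = ~((1u << PIN_A) | (1u << PIN_B)); /* 0 = writable */
    FIO1PIN  = 1u << PIN_A;  /* PIN_A high, PIN_B low */
    FIO1MASK = old_mask;     /* restore for other users of the port */
}
```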
Writing eight randomly located bits to a 32-bit port may require (example for fast GPIO access on NXP LPC23xx processor):
old_mask = FIO1MASK;
FIO1MASK = ~((1u << P1_DATA0) | (1u << P1_DATA1) | ... | (1u << P1_DATA7));
FIO1PIN = data;
FIO1MASK = old_mask;
If the eight bits had been consecutive and byte-aligned, this could have been simplified to a:
FIO1PINx = data;
where x is 0, 1, 2 or 3, depending on which of the four bytes the eight bits are allocated in.
Another thing here: if you need to use the mask function to limit which pins to change, and you modify the same port from both the main application and an ISR, or from two threads (using an RTOS), then having one task operate only on pins in one byte or 16-bit word of the port, while another task modifies only pins in another byte or 16-bit word, means that you don't need to worry about one task destroying the mask configured by another task.
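The byte-partitioning idea can be sketched as follows, using the LPC23xx byte-wide FIO1PINx aliases (each aliases one byte of the 32-bit port). The registers are mocked as plain variables, and which context owns which byte is an illustrative assumption:

```c
#include <stdint.h>

/* Stand-ins for the LPC23xx byte-wide fast-GPIO registers
   FIO1PIN0..FIO1PIN3; on real hardware they are volatile and
   memory-mapped. */
static uint8_t FIO1PIN0, FIO1PIN2;

/* Main task owns pins 0..7: a byte-wide write needs no mask and
   cannot disturb pins outside its byte. */
void main_task_write(uint8_t data) { FIO1PIN0 = data; }

/* ISR owns pins 16..23: same property, so neither context has to
   save/restore FIO1MASK, and there is no mask to corrupt. */
void isr_write(uint8_t data) { FIO1PIN2 = data; }
```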
I never understood the way ARM did it: using ioset/ioclr. to me, the pic approach is so much easier and more intuitive.
Note that most ARM chips don't have direct access to the processor pins. They are on a bus several clock cycles away. Treating a pin as a one-bit variable requires that you either slow down every access, or move the pins to a very high-speed local bus supporting access times similar to the RAM's.
It isn't so easy to scale things to 100MHz+
Another thing is that a number of ARM chips can perform multiple very fast accesses for a random set of pins on a port, without the need for read-modify-write operations. A SET register doesn't require that you know which pins were already set.
I understand the 1st point but I am not sure if I agree with it. PIC's pins are connected through the bus as well - I suppose most mcus are like that in that regard.
I am completely lost on your 2nd argument.
But I am happy that I have found my solution, :).
The pins on the PIC are integrated in the core, allowing direct access.
The ARM core is just a naked core. See it as a PC processor where the serial port etc are connected on PCI boards on the other side of a bus that has way lower bandwidth than the processor core.
When doing read/modify accesses to a device that is 10-20 clock cycles away, the processor core must stall while waiting for the read access to produce a result. The set and clear registers mean that you can change individual pins without needing a read - you don't need to know the old states of any pins. The actual processing of the set or clear access isn't handled by the ARM core, but out at the GPIO peripheral module.
Some later-generation ARM releases have managed to move the GPIO functionality closer to the core, greatly increasing the bandwidth of port accesses. But the GPIO is still not integrated with the core, so while a 100 MIPS ARM may be able to toggle an output pin 100 million times a second, it would not be able to do 100 million read+write pairs for performing an xor of a pin.
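The difference can be sketched like this, with register names following the LPC23xx fast-GPIO convention (mocked as plain variables so the snippet is self-contained) and a hypothetical LED_PIN:

```c
#include <stdint.h>

/* Stand-ins for the memory-mapped fast-GPIO registers. */
static uint32_t FIO1SET, FIO1CLR, FIO1PIN;

#define LED_PIN 8  /* hypothetical */

/* Write-only: a single store, which the core can pipeline at full
   speed even over a slow bus, since it never waits for a read. */
void led_on(void)  { FIO1SET = 1u << LED_PIN; }
void led_off(void) { FIO1CLR = 1u << LED_PIN; }

/* XOR toggle: the core must read FIO1PIN (stalling for the bus
   round-trip), flip the bit in a register, then write it back. */
void led_toggle(void) { FIO1PIN ^= 1u << LED_PIN; }
```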
When running a processor at 10 MIPS, it is important to have fast I/O pins. Having a processor capable of 500 MIPS reduces the need for having a 1:1 relation between core bandwidth and GPIO bandwidth.
I was just going through the literature on the Cortex-M3, and one of its marketing points is that the chip allows unaligned accesses and bit-for-bit access (bit-banding), unlike the ARM7 chips. So it looks like those PIC programmers like myself will have an easier time with that chip, after all.
not sure if that's true with all other cortex chips.
BTW, I thought the cortex chips sound quite good on paper - I haven't used any of them. what do you think?
Ashley; Please read Per's posts carefully. They contain a lot of useful information. As Per points out, the Cortex is just a licensed core. Or I should say a family of licensed cores. They range from the Cortex-A series to the Cortex-R series to the Cortex-M series.
One of the best references for the Cortex-M3 is "The Definitive Guide to the ARM Cortex-M3" by Joseph Yiu, ISBN 978-0-7506-8534-4, at http://www.newsnespress.com. Also, the ARMv7-M Architecture Application Level Reference Manual on the ARM website. Do not read ARMv7 as ARM7 - the ARM7 devices use the ARMv4 architecture.
As Per points out, the actual peripherals of a device are the design responsibility of the chip vendor, such as TI, NXP, STMicro, Atmel, etc. I don't know why I list these chip vendors - in fact, you can name just about any major chip vendor and they will have one or more ARM cores in their product line. In TI's famous Beagleboard OMAP3530, they have a Cortex-A8 and a DSP in the same device at a very low price.
But, having pointed to some of the ARM products available, they still may not be the right choice for your application, as you pointed out. If your need is a compact microcontroller with a lot of bit diddling, then the ARM devices may not be the best choice.
As Erik often points out on this forum, there are many PIC/8051 derivatives and variants that support heavy bit diddling, DACs, ADCs, PWM, etc. So the SYSTEM requirements should first dictate a device family or a single device.
Then what alternate devices are available that are close to 'drop in' replacement if a device is no longer available?
Then what tools sets are readily available to support the present selected device and possible alternates?
The ARM core devices and the 8051 devices are the least likely to rely on a proprietary instruction set, and device type changes are far easier to accommodate.
As you have found, the Cortex-M3 devices have indeed moved to support some of the features we have found most useful over the last many years. The one I find most helpful is the 'Real' NVIC interrupt controller.
Bradford
Al, I am sorry but your point is?
The point is you have questions about the move from PICC to ARM. The GPIO features left a lot to be desired in your estimation. The point that I took from your original post was that you had not looked at the many examples in the Keil tools to see how you could emulate bit diddling with macros that appear in the many Keil examples.
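The gist of those macro examples is something like the following - a sketch, not the exact Keil code, with LPC2000-style IOSET/IOCLR register names mocked as plain variables and made-up pin numbers:

```c
#include <stdint.h>

/* Register stand-ins; on a real part these map to the volatile
   memory-mapped IOSET/IOCLR registers. */
static uint32_t IOSET, IOCLR;

/* Logical pin names, PIC-style: re-pinning the board means editing
   one line per signal. */
#define SCLK_PIN 2
#define MOSI_PIN 3

/* Macros that read almost like direct pin writes, but expand to a
   single store to the set or clear register. */
#define PIN_HIGH(pin) (IOSET = 1u << (pin))
#define PIN_LOW(pin)  (IOCLR = 1u << (pin))
```

Usage then looks like PIN_HIGH(SCLK_PIN); in place of a PIC-style bit assignment.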
The point is that you are concerned with having to handle GPIO differently than the PICC. So why did you move from the PICC device? Does your ARM device offer any advantages over your previous device? Is the ARM device really needed in your design?
It's obvious that the good old 8 bitters are better at bit diddling than the newer ARM devices. So, I guess that I was asking you to verify to yourself your system requirements. You certainly do not owe me or anyone else reasons for your device choices.
And I guess I was reacting to Per's post as much as your original post.
And last of all is the point that I just wanted to ramble late at night when I'm too tired to do real work. Bradford
"The GPIO features left a lot to be desired in your estimation."
Nope. I was actually more interested in why ARM did it differently (from PIC) on the older ARMs, why they changed their mind in ARMv7, and what advantages/disadvantages there are to the ARM way vs. the PIC way.
"The point that I took from your original post was that you had not looked at the many examples in the Keil tools to see how you could emulate bit diddling with macros that appear in the many Keil examples."
no, I haven't, but I would love to. what's the gist of it? Per seems to be doing it the ARM way too - i.e. using SET and CLR instead of direct GPIO bit access (the PIC way).
I haven't had a chance to go over the ARMv7 datasheet in detail to see how that's done differently. if you have some sample code there, it would be greatly appreciated.
Thanks.
Note once more that it is the chip manufacturer that decides what peripherals to glue to the ARM core. So you can find different chips that have the identical ARM core but different registers and functionality for GPIO, UART, ADC, ...
When it comes to accessing GPIO, I use different methods depending on the requirements. I normally assign to set and clear registers. But I sometimes use the masking feature of the NXP LPC23xx chips to allow me to do many millions of pin toggles/s while still running the ARM at a quite low clock frequency. In this project, I may need to use nop statements to slow down some of the output to get the required hold times between the toggling of individual port pins.
If the project requires very fast GPIO accesses, then it is important to select a chip (chip, not core) that has moved the GPIO functionality to a high-speed local bus to keep down the number of access cycles for the GPIO pins. With LPC23xx chips, two of the ports can be accessed using either a slow or a fast interface (different set of controlling registers selected by a configuration bit) and the speed difference is huge. This is similar to having a PC either controlling I/O operations using a PCI card at 33MHz or 66MHz, or having the I/O hardware on an old 16-bit ISA card running at 8MHz.
But since it is the different chip manufacturers (Atmel, NXP, ...) that add the peripherals, you can't just look at a core but must spend time reading up on the individual chip offerings. Just as it is simpler to build a slow ISA card than a PCI or PCI-E card, it is simpler for the chip manufacturers to design peripheral devices running at a significantly lower speed than the ARM core.
The thing is that even when the GPIO is on a high-speed local bus, it is still not part of the core, so the behaviour will still not be identical to that of a PIC chip. Instead of consuming 15-20 clock cycles or more per access, the clock logic in the chip may run the GPIO at half or a quarter of the core speed.
"Note once more that it is the chip manufacturer that decides what peripherials to glue to the ARM core. So you can find different chips that has the identical ARM core but different registers and functionality for GPIO, UART, ADC, ..."
that makes sense.
I guess then the question becomes: a) which ARM chips (chips that use an ARM core) allow PIC-style GPIO access? I haven't seen one, but I have very limited experience with those chips, so others with more experience may be able to provide a few examples. b) what prompted such diverse decisions, if such chips indeed exist?
I did read somewhere that some ARM chips have fast IO vs. regular IO but I am not entirely sure how that works.
It may be that no manufacturer has designed PIC-style GPIO, since that would require them to modify the actual ARM core and insert the GPIO logic inside the core - similar to extra processor registers - instead of as external memory-mapped logic.
Remember that the ARM core is designed to load from memory and store to memory, but perform operations on registers. If the GPIO is memory-mapped, then the ARM core is designed to read into a register, perform an operation and store back. With such a core, the fastest pin manipulations you can get is when a write operation can be done with just a memory write, without first having a memory read and a register operation. Really fast read/modify/write with such a core would indirectly require that you took one or more of the processor registers and mapped it to 32 or more GPIO pins, in which case any load into the register would represent a write to GPIO pins.
As soon as the GPIO is outside the processor core, you further get the problem that a given silicon process suitable for a 100MHz core is not fast enough to also manage 100MHz external memory-mapped I/O. And if you shrink the core and use a process that is fast enough for 100MHz GPIO, then it would be better to instead use the advantages of such a design process and let the core run at 500MHz.
The NXP LPC23xx is a variant with fast GPIO and normal GPIO. Fast GPIO just means that the GPIO is controlled from a high-speed local bus. But this is still outside the core - representing extra delays needed - and the instruction set of the ARM is still designed for load/store operation, where changes are done to registers and not directly to memory locations. The write-only changes you get from the set and clear registers can be pipelined and processed at full speed, even when there are several cycles of delay between core and the GPIO-controlling hardware.
The important thing here is that the ARM chips are designed to run at quite high clock frequencies, and designing them like the PIC would either hamper the allowed speed of the core, or break the existing separation between the core and the external logic that distinguishes one manufacturer's offerings from other manufacturers' offerings.
If the GPIO was part of the core - how would a manufacturer then be able to buy a license for the core and release 80-pin, 100-pin, 144-pin, ... variants of the chip without having to modify that core? Or should ARM sell a core supporting 400 port pins, and then leave it to the manufacturers to decide if the now much larger and more energy-consuming chip should have all pads bonded or not?