Many thanks to those who help me! (:
I've got a problem here...
If I were to use 3 types of interrupts (external interrupt 0, timer 0 and timer 1) in one single program, how should I go about writing the code?
Write three separate interrupt service routines.
One for each type of interrupt.
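To make that concrete, here is a minimal sketch of three separate service routines on a classic 8051. The `interrupt n` suffix is Keil C51 syntax (vector numbers 0, 1 and 3 for external interrupt 0, timer 0 and timer 1); it is shown in comments so the fragment also compiles as plain C, and the counter names are invented for illustration.

```c
#include <stdint.h>

volatile uint16_t ext0_events;   /* counted by the INT0 handler    */
volatile uint16_t t0_ticks;      /* counted by the timer 0 handler */
volatile uint16_t t1_ticks;      /* counted by the timer 1 handler */

void ex0_isr(void)    /* interrupt 0 */
{
    /* edge-triggered INT0: the IE0 flag is cleared by hardware on vectoring */
    ext0_events++;
}

void timer0_isr(void) /* interrupt 1 */
{
    /* TF0 is cleared by hardware on vectoring; reload TH0/TL0 here
       if you are not using auto-reload mode 2 */
    t0_ticks++;
}

void timer1_isr(void) /* interrupt 3 */
{
    t1_ticks++;
}

/* In main() you would then enable the sources, e.g. (Keil C51):
   IT0 = 1; EX0 = 1; ET0 = 1; ET1 = 1; EA = 1; and idle in a loop. */
```

Each routine stays short: capture the event, get out, and let main() do any heavy processing.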
Alternatively, you can poll the interrupts, which means to write a loop that runs around and checks each interrupt status bit in turn to see if it's active, and if so, handles the interrupt.
Pros and cons of polling versus ISRs are left as an exercise for the student.
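For comparison, a sketch of the polling alternative. Plain variables stand in for the IE0/TF0/TF1 bits of TCON so it compiles as portable C (on target they would be `sbit` declarations into the SFR), and the handler bodies are placeholders.

```c
#include <stdint.h>

volatile uint8_t IE0_flag, TF0_flag, TF1_flag;  /* stand-ins for TCON bits */
uint16_t ext0_handled, t0_handled, t1_handled;

/* One trip around the loop: check each source's flag in turn and,
   if it's active, clear it and handle the event. */
void poll_once(void)
{
    if (IE0_flag) { IE0_flag = 0; ext0_handled++; }  /* handle INT0    */
    if (TF0_flag) { TF0_flag = 0; t0_handled++;   }  /* handle timer 0 */
    if (TF1_flag) { TF1_flag = 0; t1_handled++;   }  /* handle timer 1 */
}

/* The main program then "runs around" forever:
   for (;;) { poll_once(); do_background_work(); }   */
```

Note the interrupt enable bits stay off in this scheme; the flags are still set by the hardware, but the CPU only notices them when the loop comes around.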
Drew, PLEASE give me one con re using interrupts.
Erik
Some disadvantages of ISRs compared to polling:
- More memory usage, as ISRs cannot be overlaid; need to preserve and restore the context the ISR interrupts
- More care necessary with shared RAM to guarantee atomicity
- Sometimes more difficult to guarantee worst-case service latency (with prioritized interrupts)
- Often requires some sort of throttling mechanism to prevent the CPU from being hogged by a busy or stuck interrupt
- Occasionally has those obscure bugs where someone managed to mask interrupts and not unmask them, particularly with nested interrupts or deep call trees. (People hardly ever forget to put a service routine in the polling loop, and when they do, it's not hard to find...)
The advantages of ISRs are of course the flip side of these disadvantages:
- Rapid and consistent response to events (polling has to wait the entire trip around the loop)
- Main body code can remain ignorant of the interrupt; as with an RTOS, the independent contexts make development of different tasks more modular, at the cost of a bit more overhead and synchronization effort
- Hardware can prioritize your execution (once you define it)
- It's why they put in interrupt hardware
"More memory usage as ISRs cannot be overlaid; need to preserve and restore context that ISR interrupts"

OK, in 0.01% of cases that could be true.

"More care necessary with shared RAM to guarantee atomicity"

Care is required regardless, and if "care avoidance" ever sneaks into development, we are up the proverbial creek.

"Sometimes more difficult to guarantee worst-case service latency (with prioritized interrupts)"

You contradict this yourself: "Rapid and consistent response to events (polling has to wait the entire trip around the loop)".

"Often requires some sort of throttling mechanism to prevent the CPU from being hogged by a busy or stuck interrupt"

Care, IP and KISS will take care of 'busy'; stuck is 'care' again.

"Occasionally has those obscure bugs where someone managed to mask interrupts and not unmask them, particularly with nested interrupts or deep call trees."

Care again.

Thus I think the pros for looping can be summed up as one: you can care less (pun intended).
"You contradict this yourself"

No, he's not. High-priority interrupts can hog the CPU almost indefinitely, preventing low-priority interrupts (and the rest of the program) from being serviced. With polling, this can't happen.
Ok, the scenario is very theoretical, but it's not contradictory.
It's not even all that theoretical. For example, you might have an interrupt that represents the arrival of a network packet. This is essentially a random event from the point of view of the program.
If I'm measuring the time between points A and B in the code, or the latency of a lower-priority interrupt, I can't know how long it will take, because I don't know how many high-priority interrupts will occur. Some interrupts, like timer interrupts, may be predictable, but not all of them are. The maximum rate of some interrupts may be so slow, and the handling required so minor, that the CPU can always handle the worst case; but again, that might not be the case.
If you poll, then you know (because you wrote the code) that you'll handle each event once; therefore, the worst case to handle any event is the sum of the times to handle all the other events. (You can of course write a more complex scheduler.) You have a guaranteed time and predictable order for operation, at the cost of not being able to quickly service any event at any time. The latency might or might not be a problem, depending on your application.
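That bound can be put in a few lines of C; the helper and the microsecond figures below are hypothetical numbers for illustration, not from any particular system.

```c
#include <stdint.h>

/* Worst case for a round-robin polled scheme: every other handler may
   run once (back to back) before the loop gets to the one we care about,
   so the bound is the sum of all the other handlers' worst-case times. */
uint32_t worst_case_latency_us(const uint32_t t[], int n, int which)
{
    uint32_t sum = 0;
    for (int i = 0; i < n; i++)
        if (i != which)
            sum += t[i];    /* everyone else runs once before us */
    return sum;
}

/* e.g. handlers taking {50, 120, 30} us: the 30-us handler may have to
   wait 50 + 120 = 170 us in the worst case. */
```

The point is that the bound is known at design time, because you wrote the loop; with unconstrained interrupts, no such sum exists.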
Both polling and ISRs have their uses. Neither one is dominant in all possible virtues. Hence it's useful to understand the tradeoffs so as to select the right tool for the job.
"High priority interrupts can hog the CPU almost indefinitely, preventing low priority interrupts (and the rest of the program) from being serviced. With polling, this can't happen."

Except for some extremely obscure cases, this is incorrect.

If something done in a high-priority interrupt "can hog the CPU almost indefinitely", then changing to polling in order to service something else will make what was previously done in the high-priority interrupt 'miss'.

If you do not have the time to do it all, you need to redo your code (but NOT by changing to polling) or change processor.
A level-triggered interrupt can kill an application if the external hardware for some reason gets into trouble and doesn't deactivate the input signal.

A memory overwrite somewhere that happens to turn on an interrupt source the ISR wasn't supposed to handle can also result in the application locking up. Every time the ISR returns, the processor generates a new interrupt, because the ISR didn't know to clear/acknowledge the event flag.

The interrupt handler would need some protective logic to try to detect an unreasonable number of interrupts, while a polling loop would iterate through all input sources without needing deadlock protection.
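One shape such protective logic might take, sketched with invented names: the ISR body counts events against a budget that the main loop refills each pass, and masks itself when the budget is exceeded. `STORM_LIMIT` and both functions are assumptions for illustration; on a real part the disable would clear the relevant enable bit (e.g. ES for the 8051 UART).

```c
#include <stdint.h>

#define STORM_LIMIT 100u   /* hypothetical: max events per main-loop pass */

volatile uint16_t uart_irq_count;
volatile uint8_t  uart_irq_enabled = 1;

/* Called from the real ISR: if the source fires an unreasonable number
   of times between main-loop check-ins, assume it is stuck and mask it. */
void uart_isr_body(void)
{
    if (++uart_irq_count > STORM_LIMIT) {
        uart_irq_enabled = 0;       /* on target: ES = 0; mask the source */
    }
}

/* Called once per main-loop pass: a healthy rate resets the budget. */
void main_loop_checkin(void)
{
    uart_irq_count = 0;
}
```

A polling loop gets this behavior for free, which is exactly the point being made above; the sketch shows the extra code an interrupt-driven design has to carry.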
Remember that even if most embedded applications tend to use interrupts, our toolbox of possible solutions can be very large, and there are always situations where multiple solutions can solve a problem in a good way.
I can't really see a reason to argue against polling. It is just one very valid method to implement something.
In a situation where every single device is interrupt-driven, it can be very hard to know if the main loop is guaranteed to always have enough CPU capacity available to be able to do the required processing of received data.
In a system where some devices are polled, the system may for example gracefully decrease the poll frequency of the polled devices. An example is to poll the ADC to check the voltage of an accumulator. No data is lost by the poll frequency being reduced at high load. It just affects how fast the unit turns off to protect the accumulator from deep-discharge.
"A level-triggered interrupt can kill an application if the external hardware for some reason gets into trouble and doesn't deactivate the input signal."

1) Why ever use level-triggered? 2) How would polling not "kill an application" if the above happened?

"A memory overwrite somewhere that happens to turn on an interrupt source the ISR wasn't supposed to handle can also result in the application locking up. Every time the ISR returns, the processor generates a new interrupt, because the ISR didn't know to clear/acknowledge the event flag."

1) "A memory overwrite" will get you in trouble regardless of interrupt or polling. 2) "The ISR didn't know to clear/acknowledge the event flag" is a programming error which will give trouble regardless.

"The interrupt handler would need some protective logic to try to detect an unreasonable number of interrupts, while a polling loop would iterate through all input sources without needing deadlock protection."

Here you, while being correct in the statement, are wrong. If "an unreasonable amount of interrupts" happens, something is wrong, and polling will not remove what is wrong.

"Remember that even if most embedded applications tend to use interrupts, our toolbox of possible solutions can be very large, and there are always situations where multiple solutions can solve a problem in a good way."

Of course, I agree with this; however, I see waaaay too much interruptophobia getting people in trouble.

"I can't really see a reason to argue against polling. It is just one very valid method to implement something."

I both agree and disagree. Polling IS a solution in some (rare) instances; however, it should not be a preference because of the side effects, or out of interruptophobia. I have seen way more polling than ISRs falling by the wayside when an unrelated change was made.

"In a situation where every single device is interrupt-driven, it can be very hard to know if the main loop is guaranteed to always have enough CPU capacity available to be able to do the required processing of received data."

Same for polling. Whether you use an ISR or polling, the "CPU capacity available" must handle the same 'exceptions' (e.g. much UART processing vs. none).

"In a system where some devices are polled, the system may for example gracefully decrease the poll frequency of the polled devices. An example is to poll the ADC to check the voltage of an accumulator. No data is lost by the poll frequency being reduced at high load. It just affects how fast the unit turns off to protect the accumulator from deep-discharge."

Equally easy to do in an ISR: just say "if the main does not get every value read in the (2-line-long) ISR processed, so what".

ONE ISSUE: if you use this microcontroller (the '51) as a microprocessor, it does, of course, not matter in the least which method you use. However, in most microcontroller applications the 'cost' of missing an event is very high.

PS: I do, very rarely, use polling, and IMHO polling routines take much more care to write re impact than ISRs.
"Well, it's good enough for the PCI bus on a PC."

Oh, come on. Do not make "PC technology" a qualifier.
"A polling loop would just adjust its loop frequency. An interrupt-driven solution will completely starve low-prio interrupts. Don't assume that all interrupts produce received data, or ticks, and that a lower service frequency means a failure. A lower polling frequency of a temperature sensor need not break the application."

And as I said in my previous post: "ISR reading it is cheap (2 lines); you can skip processing it if you need to." This, come to think of it, makes the reading an ISR and the processing polled. Anyhow, I would consider it an error if something, whether significant or not, was occasionally skipped.
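The "2-line ISR" split described here can be sketched as follows; the names and the stand-in `read_hw()` are invented, and on a real 8051 the capture routine would be the ISR body itself.

```c
#include <stdint.h>

volatile uint8_t sample;        /* latest value captured by the ISR */
volatile uint8_t sample_ready;  /* set by the ISR, cleared by main  */

uint8_t read_hw(void) { return 42; }  /* stand-in for reading a port/SFR */

/* The whole ISR: two lines. Reading is interrupt-driven... */
void capture_isr(void)
{
    sample = read_hw();
    sample_ready = 1;
}

/* ...while processing is polled. Skipping a main-loop pass under load
   just means a newer sample overwrites an unprocessed one; nothing jams. */
uint16_t processed;
void main_pass(void)
{
    if (sample_ready) {
        sample_ready = 0;
        processed++;            /* do the real work with 'sample' here */
    }
}
```

Whether silently overwriting an unprocessed sample is acceptable or an error is exactly the disagreement in the thread; the mechanism itself is the same either way.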
"2) We must also try to make our code self-repairing. ... were perfect, we would not need any watchdog timers..."

Well, a watchdog tries, in effect, to make a 'self repair' regardless of polled or ISR.
"An ISR is written to handle the conditions it has been specified to handle. That is not the same as being required to handle all possible events a device may be able to generate. If I don't turn on a FIFO for a serial device, there is no need for the ISR to know about FIFO events. However, a problem that results in the serial device issuing a FIFO event to an ISR that doesn't know about it may result in the device generating an infinite number of FIFO interrupts."

I have no idea what this means; there is no '51 with a FIFO.
"It is a programming error to not service any events I have enabled."

Which is MUCH MORE likely to happen with polling than with an ISR.
"3) It normally takes less code for a polled solution to gracefully continue with limited capabilities in case of a hardware error."

Now you are talking PC again. An embedded system has no purpose "continuing with limited capabilities".
"Interruptophobia? No! Just a question of selecting tools. My current ARM project has 16 out of 28 possible interrupt sources covered by interrupt handlers, so I can't be too scared of interrupts."

I would not suspect you of being a sufferer, but many exist.
"The polling loop will continue to do the "batch" jobs. A fully interrupted solution will permanently stop batch jobs if any interrupt source jams."

Again, you are talking PC. An embedded system has no purpose "continuing with limited capabilities".
"Always remember: There is no use having an ISR that never misses the reception of a serial character if the "main loop" is jammed and can't process the data in the receive buffers. Such a unit is dead. If the problem is because of a broken hw device, it doesn't matter if the unit has a watchdog. After the reboot it will just deadlock again."

AH, now you realize we are talking embedded.
Per, I see from your statements that you are "talking PC (alike)" and I am "talking embedded". These are two totally different worlds.

I know of no true embedded system where there is ANY point in "continuing with limited capabilities"; if you want to give some non-exotic examples, I'd like to see them.
"Oh, come on. Do not make "PC technology" a qualifier."

"Completely ignoring my argument: level-triggered interrupts allow interrupt sharing. I can have hardware with 100+ signals to supervise. I can't readily have a hw chip with 100+ interrupt pins. ORing of interrupt sources is a very valid concept and can't be ignored "just because"."

I have no problem with your statement "sometimes you need level"; however, I DO have a problem with "PC technology as a qualifier". Now I ask you: "What makes it more difficult to kill a hung level in an ISR than in the main?" and "How often in true non-exotic embedded do you operate with limited capability?" DO note that I am by no means saying that you, in case of (partial) failure, should not provide a safe "shut down".
"ISR reading it is cheap (2 lines); you can skip processing it if you need to."

"Yes, it's enough to turn off a flag to disable an interrupt. But it can be very hard for an ISR to know that there is a problem and make the decision to disable itself. And if you do add code, it can be very hard to test that code."

Not really. I'll illustrate with an example: a 'parallel read' ISR detects that the value it supplied to the main has not been read. OOPS.
"A round-robin loop will not starve any other device/signal, making sure that low-priority (as in allowing a long latency but requiring a finite finish time) will get serviced."

As stated above, an ISR can do the same.
"Now you are talking PC again. An embedded system has no purpose "continuing with limited capabilities"."

"No! You have a rigid mind, and have decided that you want to think that I am talking PC. You also want to think that no embedded system has a purpose continuing with limited capabilities."

As above, I hold "continuing with limited capabilities" to be advanced exotic, whereas I fully agree with "going failsafe", e.g. an elevator should go to a floor, stop, open the door and shut down.
"The electronics in your car detects a malfunction. The "best" way would probably be to turn off. However, losing your engine on a freeway can be very dangerous, so your little embedded system will just have to try to continue, even if blindfolded."

OK, I'll almost buy that one. However, the transmission processor in my former truck did not "continue with limited capabilities" but went failsafe (permanent 2nd gear).
"An elevator controller can't just blindly shut down either."

But neither should it "continue with limited capability"; that would be dangerous. It should go to a safe place.
"Should a bus-stop sign turn off just because the light sensor has started to report completely bogus values?"

Not a bus stop but a bus sign (my product) goes 'failsafe' in that case. There are rules and regulations that prohibit e.g. full intensity after dark.
"A traffic light that detects an error should do its darndest to try to flash a yellow light."

Again, not "working with limited capability", but going to failsafe mode.
"Everywhere around you, the world is full of embedded systems that can't be allowed to just "blue screen" or turn off, but should go to failsafe mode."

I would always, where appropriate, provide 'failsafe', but have yet to come across 'working with limited capabilities'.
Now to conclude: I really do not see what "working with limited capabilities" has to do with ISR vs polling. In either case you can detect the OOPS and, in my case, go failsafe. What you do is, of course, dependent on the spec you are given.

I know of NOTHING that you cannot equally well make detect an oops in an ISR as in polled mode. But I do know a whole lot of failures that have been due to someone's interruptophobia.
Have a great weekend.
Lots of too narrow assumptions in your answers.
You are still missing the issue.
Completely ignoring my argument:
My full text did specifically say...
No! You have a rigid mind, and have decided that you want to think that I am talking PC.
Give up, it's hopeless. You're venturing into another universe that operates according to completely rigid laws invented by its creator.
Erik, as Per Westermark mentioned in the following post, a level-triggered interrupt that remains asserted can cause havoc. That can happen even if a peripheral is shut down! I still haven't finished reading all the posts, but another con I can think of is that when using interrupts you are often bound to a certain processor pin, making portability a little more difficult.
Hello again Erik, you wrote: "I would always, where appropriate, provide 'failsafe', but have yet to come across 'working with limited capabilities'."
Now I am probably a little fish compared to you, but yesterday I had a discussion with a colleague who worked for a company making communication equipment for trains, and she seemed to contradict the above statement. In their products, when digital control is gone (processor fried, some software failure) they switch to analog control driven by a small CPLD, which has very limited capabilities indeed (due to hardware cost).