Currently we use the CAPCOM units to read in frequency inputs. The results are fast and accurate but quite consumptive of processor resources: for example, moving two frequency inputs from 40 Hz to 1500 Hz results in 5% less processor idle time. I am looking for a method, short of adding hardware, to reduce the consumption to 1%.

We fire an interrupt on the rising edge of the incoming frequency signal. The ISR determines the period elapsed since the last interrupt, then generates a value based on the average of the last n periods; rollovers are handled by a timer ISR that fires every 26 ms.

If I instead fired a PEC transfer on the rising edge of the incoming frequency signal, I could fill an array of capture times. The timer ISR would then be modified to average the array of times, create a frequency value, and reset the CC unit and the PEC. This would mean that the frequency would be updated only every 26 ms, which is acceptable for our application.

Given that the new PEC method would entail parsing an array while the old method did not, I am suspicious that the benefit gained by going to a PEC method would be offset by the use of for loops to process the array of data. Has anybody tried this before, and if so, was the PEC method substantially less consumptive of system resources? (I have written a PEC method for A/D conversions and it was markedly less consumptive.)
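For concreteness, here is a minimal sketch of the interrupt-per-edge scheme described above. All names, the averaging depth, and the timer details are hypothetical, and the sketch simplifies rollover handling by letting unsigned 16-bit subtraction absorb a single timer wrap between edges rather than using a separate 26 ms rollover ISR:

```c
#include <stdint.h>

#define N_AVG 8u  /* hypothetical averaging depth (the "last n periods") */

static volatile uint16_t periods[N_AVG]; /* last N_AVG periods, timer ticks */
static volatile uint8_t  idx;
static volatile uint16_t last_capture;

/* Per-edge capture ISR (name hypothetical): runs on every rising edge,
   which is why its cost scales with the input frequency. */
void capture_isr(uint16_t capture_value)
{
    /* unsigned 16-bit subtraction absorbs one timer rollover */
    periods[idx] = (uint16_t)(capture_value - last_capture);
    last_capture = capture_value;
    idx = (uint8_t)((idx + 1u) % N_AVG);
}

/* Average period over the ring buffer, in timer ticks. */
uint32_t average_period(void)
{
    uint32_t sum = 0;
    uint8_t i;
    for (i = 0; i < N_AVG; i++)
        sum += periods[i];
    return sum / N_AVG;
}
```

The point of contention in the question is visible here: the averaging loop itself is cheap, but `capture_isr` fires once per input edge, so the interrupt entry/exit overhead grows linearly with frequency.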
Hi Brad,

Would a worst-case scenario of 13% idle time be acceptable to you? I'm not familiar with the application, so I can't tell whether it's acceptable or not. If it works, then it's OK; if it doesn't, then it needs to be fixed.

Once I designed a stepper motor controller which used periodic interrupts to generate control pulses. I had to make sure that CPU load stayed below 100%, because otherwise some interrupts would obviously be skipped, which would lead to unexpected behaviour. Any spare CPU time went to processing commands from the RS-232 interface, which didn't require much CPU time anyway.

How low would your idle time percentage have to be before you decided to optimize? Sometimes I can't resist and start optimizing if I see that there is potential for optimization, even if it's not necessary. But I try to remember that I do it for the fun of it rather than to get the job done :-) People tend to over-optimize.

Do you have your own separate hardware to process frequency inputs, or do you use the CAPCOM? I haven't worked with an application where I had to measure frequency, but I wouldn't mind using CAPCOM for that purpose. If it gets the job done, then why not? It might not seem like an elegant solution to some, but it would seem very elegant to others, like the hardware engineer :-) There are things to consider when making design choices: cost, hardware and software complexity, etc. Quite often there are tradeoffs.

- mike
Check out the LM2907 frequency-to-voltage converter: http://cache.national.com/ds/LM/LM2907.pdf
Yeah, I know. The hardware guy won't go for it, though.

I did go ahead and implement the frequency reading with the PEC controller. What an improvement!! Now at high frequencies there is no observable additional load on processor idle time. In fact, I have more idle time at high frequencies now than I had at low frequencies without the PEC.

The reason is simple. Instead of averaging the array of values every 26 ms, I just subtract the first capture time from the last and divide by (number of samples - 1). It works beautifully.
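The endpoint trick avoids the for loop the original question worried about because the sum of consecutive period differences telescopes: (t[1]-t[0]) + (t[2]-t[1]) + ... + (t[n-1]-t[n-2]) collapses to t[n-1] - t[0]. A hedged sketch of the per-window calculation (the timer width, clock rate, and all names are assumptions, not the poster's actual code):

```c
#include <stdint.h>

#define TIMER_HZ 1000000UL  /* assumed 1 MHz capture-timer clock */

/* cap[] holds n >= 2 rising-edge timestamps moved in by the PEC.
   The average of the n-1 periods is one subtraction and one division:
   no loop over the array. */
uint32_t avg_period_ticks(const uint16_t *cap, unsigned n)
{
    /* unsigned 16-bit subtraction absorbs at most one timer wrap
       within the 26 ms service window */
    uint16_t span = (uint16_t)(cap[n - 1u] - cap[0]);
    return (uint32_t)span / (n - 1u);
}

/* Frequency in Hz from the averaged period (0 if the period is 0). */
uint32_t frequency_hz(const uint16_t *cap, unsigned n)
{
    uint32_t p = avg_period_ticks(cap, n);
    return p ? TIMER_HZ / p : 0u;
}
```

Note the cost is now constant per 26 ms window regardless of input frequency, which matches the observation that idle time no longer degrades at high frequencies.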