How can I increase the interrupt priority of UART1 on the LPC2364? Due to heavy load from the timer interrupts, I'm losing received characters every 5 - 10 seconds.
The initialization of UART1 is as follows:
void enableUART1(void)
{
    VICVectAddr7  = (unsigned long)UART1_IRQHandler; // Set Interrupt Vector
    VICVectCntl7  = 17; //15;                        // use it for UART1 Interrupt
    VICIntEnable |= (1 << 7);                        // Enable UART1 Interrupt
}
// Enable RS485: P2.7 initialized as output, high
PINSEL4 = PINSEL4 | P2_00TXD1 | P2_01RXD1;  // Route P2.0/P2.1 to TXD1/RXD1
U1LCR = bits | parity | stops | _DLABON;    // Open divisor latch
U1DLL = (U8)baud;
U1DLM = (U8)(baud >> 8);
U1LCR = bits | parity | stops | _DLABOFF;   // Close divisor latch
uart[_UART1].openChar1  = uart[_UART1].openChar2  = uart[_UART1].openChar3  = uart[_UART1].openChar4  = 0;
uart[_UART1].closeChar1 = uart[_UART1].closeChar2 = uart[_UART1].closeChar3 = uart[_UART1].closeChar4 = 0;
uart[_UART1].closeTimeout = timeout;
flushBufferUART1();
U1IER = (1 << 0);                           // Enable Rx Interrupt
uart[_UART1].initialized = 1;
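For reference, on the LPC23xx VIC each interrupt source has a small programmable priority register (0 = highest, 15 = lowest), and UART1 is VIC channel 7. A minimal sketch of raising the UART1 priority, assuming Keil-style LPC23xx register names (some headers call the priority register VICVectPriority7 rather than VICVectCntl7) and noting that only values 0 through 15 are valid:

/* Sketch: give UART1 (VIC channel 7) the highest vectored priority.
   Register names assume a Keil-style LPC23xx header; yours may use
   VICVectPriority7 instead of VICVectCntl7. */
VICVectAddr7 = (unsigned long)UART1_IRQHandler; /* vector for channel 7 */
VICVectCntl7 = 0;        /* priority 0 = highest, 15 = lowest */
VICIntEnable = (1 << 7); /* writing a 1 enables the UART1 interrupt */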
Many thanks for any input. Peter
Try to measure the min/max/avg time consumed by each individual interrupt.
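One way to do this, sketched below under the assumption that a free-running counter such as Timer0's T0TC is available (any counter with enough resolution will do), is to timestamp the ISR at entry and exit and keep running min/max/sum statistics:

/* Sketch: measure ISR execution time with a free-running timer.
   T0TC (Timer0's counter) is an assumption - substitute whatever
   free-running counter your system has available. */
static unsigned long isrMin = 0xFFFFFFFF, isrMax = 0, isrSum = 0, isrCount = 0;

void UART1_IRQHandler(void)
{
    unsigned long start, elapsed;

    start = T0TC;               /* timestamp at ISR entry */

    /* ... actual UART1 service code ... */

    elapsed = T0TC - start;     /* ticks spent inside the ISR body */
    if (elapsed < isrMin) isrMin = elapsed;
    if (elapsed > isrMax) isrMax = elapsed;
    isrSum += elapsed;          /* average = isrSum / isrCount */
    isrCount++;
}

The statistics can then be read out periodically from the main loop. Note that this measures only the ISR body, not the entry/exit overhead.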
Then look at the expected frequency (at burst rate) for the individual interrupts. Check if you have the raw processing power to fulfill your requirements.
Unless the protocol or connection mode has very strict timing requirements for the serial communication, increasing the UART priority may make things far worse overall, since it only shifts the latency onto the other interrupts.
You should - at all times - have enough processing power to service all critical interrupts within the allowed latency and still have a reasonable amount of time left for non-critical interrupts and any main loops. If you don't have nested interrupts (which really complicate your analysis), then you should treat all interrupts as critical, and see what happens if all interrupt sources are triggered at almost the same time, with the longest-running low-priority interrupt arriving just before all the others.
Will you still be able to service all interrupts (potentially multiple high-priority interrupts before the last low-priority interrupt) without losing any interrupt, and without violating the narrowest timing requirement in any main loop?
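As a concrete illustration of that worst case, the sketch below checks whether the stacked-up ISR times still fit within one UART character time. All durations are made-up placeholders, not measurements, and the check ignores the LPC2364's Rx FIFO, which relaxes the deadline in practice:

/* Sketch: worst-case latency check for a non-nested interrupt system.
   All numbers are hypothetical - substitute measured values. */
#include <stdio.h>

int main(void)
{
    /* Assumed worst-case execution times per ISR, in microseconds. */
    unsigned timerIsr = 40;
    unsigned uart1Isr = 25;
    unsigned otherIsr = 60;  /* longest-running low-priority ISR */

    /* Without nesting, the UART ISR may have to wait for the longest
       already-running ISR plus every other pending ISR. */
    unsigned worstLatency = otherIsr + timerIsr;

    /* One character time at 115200 baud, 10 bits per frame
       (8N1 + start/stop): 10 / 115200 s = ~87 us. */
    unsigned charTime = 87;

    printf("worst-case latency %u us vs. character time %u us -> %s\n",
           worstLatency, charTime,
           (worstLatency + uart1Isr <= charTime) ? "OK" : "characters can be lost");
    return 0;
}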
Not everything is testable, but you need to produce some form of confidence interval for the system, or you need to perform traditional testing at a significantly higher load (higher baud rates with faster/longer bursts of data, etc.) than the system will see in real life.
In some situations you can let different parts of the software drive a number of LEDs and use a logic analyzer or similar to trace the LED pattern. Just remember that an LED turned on/off inside an ISR does not capture the time needed to enter/leave the ISR.
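A minimal sketch of that technique, using the LPC23xx fast-GPIO registers and an arbitrary spare pin (P1.24 here is just an example, not a pin from the poster's design):

/* Sketch: mark ISR activity on a spare pin for a logic analyzer.
   P1.24 is an arbitrary example pin; FIO1DIR/FIO1SET/FIO1CLR are the
   LPC23xx fast-GPIO registers. */
#define TRACE_PIN (1UL << 24)

void traceInit(void)
{
    FIO1DIR |= TRACE_PIN;   /* P1.24 as output */
    FIO1CLR  = TRACE_PIN;   /* start low */
}

void UART1_IRQHandler(void)
{
    FIO1SET = TRACE_PIN;    /* pin high while the ISR body runs */

    /* ... actual UART1 service code ... */

    FIO1CLR = TRACE_PIN;    /* entry/exit overhead is not captured */
}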
Anyway - it is very important to have a lot of spare capacity, since "normal" testing seldom comes close to worst-case scenarios. Some worst-case scenarios may be hard to even identify, and you can't create a test case for a scenario you can't identify.
Per's comments are very much to the point. To calculate your timing budget, you can create a table that lists every time-consuming operation in your system in the form of period, duration, and total cost. If you know that your system should complete all of its work within X ms (usually the least common multiple of all the periods), then you can calculate a worst-case scenario and the resulting CPU load. This assumes, of course, simple (non-preemptive) scheduling.
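A small sketch of that kind of budget calculation, with entirely made-up periods and durations standing in for measured values:

/* Sketch: worst-case CPU load over one hyperperiod (LCM of all periods).
   The table entries are hypothetical placeholders - fill in your own
   measured periods and worst-case execution times. */
#include <stdio.h>

struct Task {
    const char *name;
    unsigned period_us;    /* how often it runs */
    unsigned duration_us;  /* worst-case execution time per run */
};

int main(void)
{
    struct Task tasks[] = {
        { "timer ISR",   100,  40 },
        { "UART1 ISR",   870,  25 },  /* ~one character time at 115200 baud */
        { "main loop",  1000, 300 },
    };
    unsigned hyperperiod_us = 87000;  /* LCM of the three periods above */
    unsigned long busy_us = 0;
    unsigned i;

    for (i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
        unsigned runs = hyperperiod_us / tasks[i].period_us;
        busy_us += (unsigned long)runs * tasks[i].duration_us;
        printf("%-10s %5u runs, %6lu us total\n",
               tasks[i].name, runs, (unsigned long)runs * tasks[i].duration_us);
    }
    printf("worst-case CPU load: %lu%%\n", busy_us * 100 / hyperperiod_us);
    return 0;
}

If the computed load leaves little headroom, no priority shuffling will save you; if there is plenty, reordering priorities becomes a safe option.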
Hi people,
I've tried your project and it works fine... Thanks again...
Regards, Marine bOY