G'day to all,
once again I return with an HID question.
Recently I implemented a low-level protocol on top of HID in order to transport larger amounts of data. In essence, it uses packet counters to split larger data across reports, plus the usual sender/receiver states (send data, busy, receiving data, etc.).
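Just to give an idea of what I mean - the field names and sizes below are only illustrative, not our actual protocol layout:

    typedef struct {
      unsigned char state;        /* SEND_DATA / BUSY / RECEIVING_DATA ...       */
      unsigned char packet_cnt;   /* running counter to reassemble larger data   */
      unsigned char payload[62];  /* rest of a 64-byte report                    */
    } PROTO_REPORT;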
It worked very well when it was based on the plain USBHID example. Then we integrated it into our RL-ARM based application, but there it did not work and threw exceptions all the time. My USB and RL-ARM understanding is not deep enough to see why, so I switched to the RTX_HID sample as a new basis.
Our application is based on an LPC17xx and we use an OS tick of 1000 us, which should result in 1 ms time slices for the scheduler.
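For reference, this is roughly what the relevant part of our RTX configuration looks like - only OS_TICK is the value I am sure about here, the other numbers are just from memory:

    #define OS_CLOCK       100000000   /* LPC17xx core clock in Hz                  */
    #define OS_TICK        1000        /* tick interval in usec -> 1 ms scheduler   */
    #define OS_ROBIN       1           /* round-robin task switching enabled        */
    #define OS_ROBINTOUT   5           /* round-robin timeout in ticks              */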
After integration into the target it still works, but with totally different timing and response behaviour: very slow and with many duplicated reports - quite close to unusable.
After that discovery I inspected the priorities assigned to the USB tasks (default is 3 for the USB core task and 2 for the endpoint tasks), found them to be way too low, and raised them above all my own tasks - but no change in behaviour.
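What I did is roughly the following - the task names are only placeholders here; in the sample the USB tasks are set up by the library/usbuser code and I merely changed the priority values:

    #include <RTL.h>

    __task void app_task (void);        /* one of my own application tasks       */
    __task void usb_core_task (void);   /* stands in for the USB core task       */
    __task void usb_ep1_task (void);    /* stands in for the endpoint 1 task     */

    __task void init (void) {
      os_tsk_create (app_task,      2);   /* my own tasks stay at 2              */
      os_tsk_create (usb_core_task, 10);  /* USB core raised above everything    */
      os_tsk_create (usb_ep1_task,  10);  /* endpoint task raised as well        */
      os_tsk_delete_self ();
    }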
Put together, the main symptom is: the host says "do something", the target should immediately start the job and should return BUSY in an instant.
Using the plain HID sample it worked as expected; using RTX_HID it does not.
Considering the 1 ms time slice and the high priorities I assigned to the USB and endpoint tasks (above all others), I simply don't understand why it takes 5-30 ms to see a change on the input reports, when it was reliably 1 ms with the other sample. I could live with a latency of 2-3 reports, but not with 30.
1) Why can't I use the original (non-RTX) USB code? I thought all the "code down there" executes in the USB IRQ anyway ... and my low-level protocol could live with that well, as there is a clean layer in between to separate it from the RTX-driven application.
2) Any suggestions on which parameters to change in order to speed up the RTX sample?
many thanks in advance, as usual ULI
We added our code where the original GetReport in the Keil sample was - without bothering about style - and the non-RL-ARM version worked very well.
> We added the code where the original GetReport in the Keil-sample was
I don't see any GetReport() function in the original code, neither in the non-RL nor in the RL version. Maybe you mean the GetInReport() function?
> I simply don't understand why it takes from 5-30 msec to see a change on the input reports (when it was 1 ms on the other sample - reliably)
What is the bInterval value of the interrupt IN endpoint? The original examples set it to 32 ms:
C:\Keil\ARM\Boards\Keil\MCB1700\RL\USB\RTX_HID\usbdesc.c

    /* USB Configuration Descriptor */
    /* All Descriptors (Configuration, Interface, Endpoint, Class, Vendor) */
    const U8 USB_ConfigDescriptor[] = {
      ...
      ...
    /* Endpoint, HID Interrupt In */
      USB_ENDPOINT_DESC_SIZE,         /* bLength          */
      USB_ENDPOINT_DESCRIPTOR_TYPE,   /* bDescriptorType  */
      USB_ENDPOINT_IN(1),             /* bEndpointAddress */
      USB_ENDPOINT_TYPE_INTERRUPT,    /* bmAttributes     */
      WBVAL(0x0004),                  /* wMaxPacketSize   */
      0x20,          /* 32ms */       /* bInterval        */   <----------
    /* Terminator */
      0                               /* bLength          */
    };
Tsuneo
SORRY - a dumb typo!
GetInReport is correct. And thank you, Tsuneo, for thinking of everything!! But in this particular case that is not it - bInterval was already set to 1 ms.
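Just to show where we plugged in: the sample's GetInReport() only reads the push button into the one-byte InReport, and we replaced that with a call into our protocol layer (the function name below is of course our own, not part of the Keil sample):

    extern void Proto_FillNextInReport (void);  /* our protocol layer            */

    void GetInReport (void) {
      /* instead of reading the push button as the sample does, let the
         protocol state machine fill the next input report (data/BUSY/...)       */
      Proto_FillNextInReport ();
    }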
The thing that really confuses me is the general timing behaviour. Using the RL-less sample brought us _VERY_ close to 64 kB/s ... and everything was fine. Now, using the OS-based sample, that throughput is gone, as some aspect of the task switching causes these problems (at least I think so).
thank you as usual Uli