
KEIL Sample USBHID vs RTX_HID

G'day to all,

once again I return with an HID question.

Recently I implemented a low-level protocol on top of HID in order to transport more data.
In short, it uses packet counters to embed larger data within reports, plus the usual sender/receiver states (send data, busy, receiving data, etc.).
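As an illustration only (the report layout, field names and sizes here are my own assumptions, not the actual protocol), such a scheme might pack a payload into 64-byte reports like this:

```c
/* Sketch: split a payload into 64-byte HID reports, carrying a
 * packet counter and a chunk length in the first two bytes.
 * Layout and names are illustrative, not the poster's protocol. */
#include <stdint.h>
#include <string.h>

#define REPORT_SIZE 64
#define HDR_SIZE    2                      /* counter + chunk length  */
#define CHUNK_SIZE  (REPORT_SIZE - HDR_SIZE)

/* Fill one report with chunk number 'counter' of 'data'.
 * Caller must ensure counter * CHUNK_SIZE < len.
 * Returns the number of payload bytes placed in the report. */
static size_t build_report(uint8_t report[REPORT_SIZE],
                           const uint8_t *data, size_t len,
                           uint8_t counter)
{
    size_t offset = (size_t)counter * CHUNK_SIZE;
    size_t chunk  = (len - offset < CHUNK_SIZE) ? len - offset : CHUNK_SIZE;

    memset(report, 0, REPORT_SIZE);        /* pad short final report  */
    report[0] = counter;
    report[1] = (uint8_t)chunk;
    memcpy(&report[HDR_SIZE], data + offset, chunk);
    return chunk;
}
```

The receiver reassembles by checking that the counters arrive in sequence, which is also how duplicated reports (as seen below) would be detected.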

It worked very well when it was based on the USBHID example.
Then we integrated it into our RL-ARM based application, but there it did not work and threw exceptions all the time. My understanding of USB and RL-ARM is not deep enough to see why, so I switched to the RTX_HID sample as a new basis.

Our application is based on LPC17x and we use an OS tick of 1000
-> this should result in 1 ms slices for the scheduler.
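For reference, these settings live in the RL-ARM RTX configuration file (RTX_Config.c). A sketch with illustrative values; the clock value is an assumption and must match the real LPC17x CPU clock:

```c
/* RTX_Config.c (RL-ARM) -- illustrative fragment, not the poster's file. */
#define OS_CLOCK  100000000   /* CPU clock in Hz (assumed value)          */
#define OS_TICK   1000        /* tick interval in microseconds -> 1 ms    */
#define OS_ROBIN  1           /* round-robin switching of same-prio tasks */
```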

After integration into the target it still works, but with totally different timing and response behaviour: very slow, with many duplicated reports - quite close to unusable.

After that discovery I inspected the priorities assigned to the USB tasks (the default USB task is 3, the endpoint tasks 2) and found them far too low, so I raised them above all my own tasks - but the behaviour did not change.

Put together, the main symptom is:
The host says: do something.
The target should immediately start the job and return BUSY in an instant.

With the plain HID sample it worked as expected; with RTX_HID it does not.

Considering the 1 ms time slice and the high priorities I assigned to the USB and endpoint tasks (above all others), I simply don't understand why it takes 5-30 ms to see a change in the input reports, when the other sample managed it reliably in 1 ms.
I could live with a delay of 2-3 reports, but not with 30.
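The intended behaviour can be sketched as a tiny state machine (all names here are made up for illustration): the OUT-report handler flips the state in the same pass, so the very next IN report already carries BUSY:

```c
/* Sketch of the expected command/BUSY handshake. Names are
 * illustrative, not the poster's actual code. */
#include <stdint.h>

enum dev_state { ST_IDLE, ST_BUSY, ST_DONE };

static enum dev_state state = ST_IDLE;

/* Called on the OUT-report path (host -> device). */
static void on_command(uint8_t cmd)
{
    (void)cmd;
    state = ST_BUSY;          /* switch immediately, no deferral */
}

/* Called on the IN-report path (device -> host). */
static uint8_t get_in_report(void)
{
    return (uint8_t)state;    /* host should see BUSY on the next 1 ms poll */
}
```

Any task switching or queuing between on_command() and the next get_in_report() adds directly to the latency the host observes.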

1) Why can't I use the original USB code?
I thought all the "code down there" would execute in IRQ context anyway ... and my low-level protocol could live with that just fine (there is a clean layer in between that separates it from the RTX-driven application).

2) Any suggestions on which parameters to change in order to speed up the RTX sample?

many thanks in advance, as usual
ULI

  • SORRY - a dumb typo!

    GetInReport is correct. And thank you, Tsuneo, for thinking of everything!
    But in this particular case, no: bInterval was already set to 1 ms.

    What really confuses me is the general timing behaviour.
    The RL-less sample brought us _VERY_ close to 64 kB/s, and everything was fine. Now, with the OS-based sample, that throughput is gone; some aspect of the task switching seems to cause these problems (at least I think so).
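    For context, ~64 kB/s is exactly the full-speed interrupt-endpoint ceiling: one 64-byte report per 1 ms frame. A trivial sketch of that arithmetic (the function name is mine):

```c
/* Full-speed HID interrupt endpoint: at most one report per
 * bInterval frame, i.e. 64 B/ms = 64000 B/s for bInterval = 1. */
static unsigned max_throughput_bps(unsigned report_bytes, unsigned interval_ms)
{
    return report_bytes * (1000u / interval_ms);   /* bytes per second */
}
```

    So the RL-less sample was already saturating the endpoint, and any missed frame in the RTX version shows up directly as lost throughput.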

    thank you as usual
    Uli

