Hoping to get some expert advice here,
I have an embedded USB application running on a Cortex-M4. The application uses full-speed USB and is based on a Freescale-provided CDC USB sample project. My bulk "IN" endpoint is defined for 64 bytes. The embedded application communicates with an XP SP3 2.5 GHz Core 2 Duo laptop using the usbser.sys device driver and a COM terminal program. I'm also running a software USB analyzer (USBlyzer). I have come up to speed on USB about as much as I humanly can in a few weeks' time, including reading Jan Axelson's "USB Complete", so I'd like to think I know mostly what is going on.
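For reference, the bulk IN endpoint descriptor looks roughly like this (the endpoint number and array name are illustrative, not copied from my actual project):

#include <stdint.h>

/* Full-speed bulk IN endpoint, 64-byte max packet size.
   Endpoint address 0x82 (EP2 IN) is just an example. */
static const uint8_t bulk_in_ep_desc[7] = {
    0x07,        /* bLength */
    0x05,        /* bDescriptorType: ENDPOINT */
    0x82,        /* bEndpointAddress: EP2, IN */
    0x02,        /* bmAttributes: Bulk */
    0x40, 0x00,  /* wMaxPacketSize: 64 */
    0x00         /* bInterval: ignored for full-speed bulk */
};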
Every millisecond, my embedded application needs to send roughly 500 bytes to the host PC in a single "IN" transfer. According to everything I've read, I should be able to do this by sending a burst of eight 64-byte packets followed by a ZLP to terminate the transfer.

So here's what's happening: the terminal application polls my device with an IN request using a 4096-byte buffer. This generates a "token received" interrupt on my embedded device, which I _immediately_ service by sending a burst of eight consecutive packets followed by a ZLP. (I also made sure to wait for the "OWN" bit to clear between packets during this burst.) Half of these packets appear to be lost on the host end. The only way I seem to get a reliable transfer is to send a single packet at a time and wait for the host PC to issue the next "IN" request before sending another one, which winds up killing my performance.
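To make that concrete, here is roughly what my IN-token handler does. bdt_owned_by_sie() and queue_in_packet() below are simplified stand-ins for the actual Freescale BDT/SIE calls, not the real stack API:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-ins for the Freescale BDT/SIE layer. */
extern int  bdt_owned_by_sie(uint8_t ep);                              /* OWN bit still set? */
extern void queue_in_packet(uint8_t ep, const uint8_t *buf, uint16_t len);

#define BULK_IN_EP  2u
static uint8_t frame_buffer[512];      /* data staged during the current millisecond */

/* Called from the "token received" interrupt for the bulk IN endpoint. */
void on_in_token_received(void)
{
    const uint8_t *p = frame_buffer;

    /* Burst out eight full 64-byte packets back to back. */
    for (int pkt = 0; pkt < 8; pkt++) {
        while (bdt_owned_by_sie(BULK_IN_EP))   /* wait for the OWN bit to clear */
            ;
        queue_in_packet(BULK_IN_EP, p, 64);    /* hand one 64-byte packet to the SIE */
        p += 64;
    }

    /* Terminate the transfer with a zero-length packet. */
    while (bdt_owned_by_sie(BULK_IN_EP))
        ;
    queue_in_packet(BULK_IN_EP, NULL, 0);
}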
Am I simply asking too much of the usbser.sys driver, or am I missing something simple here?