
USB Mass Storage High Latency Memory

Hi there,

Suppose we have a slow memory that is interfaced to USB (as an MSC device) via an ARM7 (LPC23xx). If I understand the MSC protocol correctly, then:

1) when the host issues a CBW to "out" the desired data block,
- the device must get the data from the bulk endpoint,
- write it to the memory,
- and respond with the CSW

2) when the host issues a CBW to "in" the desired data block,
- the device must read the data from the memory,
- write it to the bulk endpoint,
- and respond with the CSW
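
For reference, the two wrappers are fixed little-endian structures defined by the USB Mass Storage Bulk-Only Transport spec; in C they look like this (the packed attribute is shown in GCC syntax):

    #include <stdint.h>

    /* Command Block Wrapper -- 31 bytes, received on the bulk OUT endpoint */
    typedef struct __attribute__((packed)) {
        uint32_t dCBWSignature;          /* 0x43425355 ("USBC") */
        uint32_t dCBWTag;                /* echoed back in the CSW */
        uint32_t dCBWDataTransferLength; /* bytes expected in the data phase */
        uint8_t  bmCBWFlags;             /* bit 7: 1 = IN (device-to-host) */
        uint8_t  bCBWLUN;
        uint8_t  bCBWCBLength;           /* valid length of CBWCB */
        uint8_t  CBWCB[16];              /* SCSI command, e.g. READ(10) */
    } msc_cbw_t;

    /* Command Status Wrapper -- 13 bytes, sent on the bulk IN endpoint */
    typedef struct __attribute__((packed)) {
        uint32_t dCSWSignature;          /* 0x53425355 ("USBS") */
        uint32_t dCSWTag;                /* must equal dCBWTag */
        uint32_t dCSWDataResidue;        /* bytes requested but not transferred */
        uint8_t  bCSWStatus;             /* 0 = passed, 1 = failed, 2 = phase error */
    } msc_csw_t;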

So each transaction as a whole is blocking. Unfortunately our memory is slow, so we cannot afford to wait for the read/write to complete inside the endpoint's interrupt service routine and have to split the operation somehow:

For the "in" operation, the device automagically NACKs the host's repeated "in" requests until the endpoint is filled with (part of) requested data (upon slow memory read completion). After all block data is subsequently put to the endpoint, the device can finally send the CSW.

For the "out" operation, the device subsequently reads the (parts of) data block to be written. When data block is complete the slow memory write is started. After the write is completed the device can finally send the CSW.

Does this make sense? Would it work, assuming the slow memory is still quick enough to fit within the USB timeouts?

Thanks for any opinions.

Regards Pavel

  • Tsuneo,

    I would like to ask you, as a skilled USB expert, for more details about the following:

    I have added some buffering (sketched below), so the host gets some NAKs only at the very beginning of the SCSI READ(10) command. After that, each further IN token is serviced immediately as it arrives.
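
    Roughly, the buffering is a two-buffer ping-pong: while one 64-byte chunk sits armed in the endpoint, the next one is already being fetched from the slow memory, so the data is ready by the time the next IN token arrives. A sketch of the idea, reusing the placeholder helpers from the first post and assuming the two interrupts do not preempt each other:

        static uint8_t  buf[2][EP_SIZE];
        static int      armed;               /* buffer owned by the endpoint */
        static volatile int prefetch_ready;  /* buf[armed ^ 1] holds valid data */
        static volatile int ep_idle = 1;     /* endpoint has nothing armed */

        static void arm_next(void)
        {
            armed ^= 1;
            prefetch_ready = 0;
            ep_idle = 0;
            usb_write_ep(EP_BULK_IN, buf[armed], EP_SIZE);
            slow_mem_read_start(mem_addr, buf[armed ^ 1], EP_SIZE);
            mem_addr += EP_SIZE;
        }

        /* slow-memory interrupt: prefetch finished */
        void mem_read_done(void)
        {
            prefetch_ready = 1;
            if (ep_idle)
                arm_next();      /* endpoint was starved: feed it now */
        }

        /* USB interrupt: host took buf[armed] */
        void ep_in_sent(void)
        {
            ep_idle = 1;
            if (prefetch_ready)
                arm_next();      /* otherwise the endpoint just NAKs */
        }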

    Being very optimistic, I would believe the design could perform faster (currently the read speed is about 670 kB/s), but it seems the host is the slow side now: it sends the IN tokens approximately every 100 us, and I can provide at most 64 bytes of data per transaction. (64 bytes every 100 us works out to about 640 kB/s, which matches the observed throughput, so the token rate really is the limit.)

    Is there any well-known method by which the host (the OS? => WinXP) determines/chooses this interval? Can I alter it in any way? Have I missed anything important? I would rather see some NAKs indicating the design is overloaded than this "lazy IN tokening" from the host side ;)

    Thanks for any info.

    Regards Pavel