
Why does osMessageQueueGet in a thread result in host buffer overflows?

I have multiple threads that try to read from a message queue via

osMessageQueueGet(MsgQueue, &msg, NULL, 0U)
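Roughly, each thread runs a loop like the following sketch (the message type MyMsg_t, the thread function name, and the externally created MsgQueue handle are placeholders for illustration, not my actual code):

#include "cmsis_os2.h"

/* Placeholder message type and queue handle to keep the sketch self-contained. */
typedef struct { uint32_t id; uint32_t data; } MyMsg_t;
extern osMessageQueueId_t MsgQueue;   /* created elsewhere with osMessageQueueNew() */

/* Worker thread polling the queue with a zero timeout: every call returns
   immediately, so the loop spins and, with RTX5 events enabled, the Event
   Recorder presumably logs an event on each pass even when the queue is empty. */
void WorkerThread(void *argument) {
  (void)argument;
  MyMsg_t msg;
  for (;;) {
    if (osMessageQueueGet(MsgQueue, &msg, NULL, 0U) == osOK) {
      /* process msg */
    }
    /* other per-loop work runs here while the queue is empty */
  }
}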

The Event Recorder shows me very frequent "Host Buffer Overflows". According to https://www.keil.com/support/man/docs/uv4/uv4_db_dbg_evr_view.htm this happens when the buffer is written too frequently, but I also see it when the queue is empty and nothing is actually written to the message buffer. When I set the timeout of osMessageQueueGet() to 1 ms or more, the overflows become less frequent, but that also makes the algorithm less responsive.
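For comparison, a sketch of the non-zero-timeout variant (assuming the default 1 kHz kernel tick, so a timeout of 1 tick is roughly 1 ms; the names are the same placeholders as above):

/* Same loop, but blocking for up to one kernel tick on an empty queue.
   The thread sleeps between polls instead of spinning, so far fewer
   RTOS events are generated per second. */
void WorkerThread_Timeout(void *argument) {
  (void)argument;
  MyMsg_t msg;
  for (;;) {
    if (osMessageQueueGet(MsgQueue, &msg, NULL, 1U) == osOK) {
      /* process msg */
    }
    /* work done here is now delayed by up to one tick when the queue is empty */
  }
}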

Does anyone have an idea why this happens? The code runs on a Cortex-M3. Is it simply too slow?

I followed the documentation, and it describes exactly what I have implemented. Still, the event switching doesn't look good.
