Hi!
I'm using an MCB2300 board and TCPnet V3.7.
I have the following application:
process tsk:
    while (true)
        Wait (event)
        led0_off
        do some stuff

timer1 match interrupt:
    led0_on
    set event
Process tsk has the highest priority.
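In RTX terms the structure is roughly this (a sketch only, not compilable as-is: LED_On/LED_Off and tsk_id are placeholder names for my board-support code and task ID, and the timer1 handler details are simplified):

```c
__task void tsk (void) {              /* highest-priority task */
  for (;;) {
    os_evt_wait_or (0x0001, 0xFFFF);  /* Wait (event), wait forever */
    LED_Off (0);                      /* led0_off */
    /* do some stuff */
  }
}

void T1_IRQHandler (void) __irq {     /* timer1 match interrupt */
  LED_On (0);                         /* led0_on */
  isr_evt_set (0x0001, tsk_id);       /* set event for process tsk */
  T1IR = 1;                           /* clear the match 0 interrupt */
  VICVectAddr = 0;                    /* acknowledge the VIC */
}
```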
I checked the latency with an oscilloscope: it is about 25 us and stable. When I use TCPnet (with a 100 ms tick), I notice that the latency jitters up to 43 us.
The only other way I could reproduce this phenomenon was to create a task that loops forever (for (;;)) and contains some code between tsk_lock()/tsk_unlock().
So my conclusion is that even though TCPnet is standalone, it "detects" the presence of RTX and disables the scheduler to protect some non-reentrant functions?
What do you think?
Having doubts about the way I ran the experiment, I added a numerical measurement to my test: timer2 measures the latency and the maximum value measured is displayed on the LCD.
I do the following:
1. Plug in the Ethernet cable.
2. Reset the CPU.
3. Unplug the Ethernet cable.
(I restart the run if the value displayed is 10% over the minimum value measured, because that means an Ethernet interrupt may have occurred.)
I will let it run for hours; after 10 minutes, no jitter has been observed.
When I leave the cable plugged in, there is still some Ethernet activity (processing ARP requests, etc.) that may influence the test.
I will post the results of this test, but I'm confident that there will not be any jitter anymore.