Hi!
I'm using the MCB2300 and TCPnet V3.7.
I have the following application:

process tsk:
    while (true)
        Wait(event)
        led0_off
        do some stuff

timer1 match interrupt:
    led0_on
    set event
process tsk has the highest priority.
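For clarity, the setup above might be sketched with the RL-RTX API roughly as follows (a sketch only: the LED pin on FIO2, the event flag value, and the register details for the MCB2300's LPC2368 are my assumptions, and interrupt installation is omitted):

```c
#include <RTL.h>                       /* Keil RL-ARM RTX kernel API */

OS_TID tsk_id;                         /* filled in at task creation */

/* Highest-priority task: waits for the event set by the timer1 ISR */
__task void tsk (void) {
  for (;;) {
    os_evt_wait_or (0x0001, 0xFFFF);   /* Wait(event), no timeout */
    FIO2CLR = 1UL << 0;                /* led0_off (assumed LED on P2.0) */
    /* do some stuff */
  }
}

/* Timer1 match interrupt: turn LED on and wake the task */
void timer1_irq (void) __irq {
  FIO2SET = 1UL << 0;                  /* led0_on */
  isr_evt_set (0x0001, tsk_id);        /* set event for the waiting task */
  T1IR = 1;                            /* clear the match interrupt flag */
  VICVectAddr = 0;                     /* acknowledge the VIC */
}
```

The oscilloscope then measures the delay between the rising edge (led0_on in the ISR) and the falling edge (led0_off in the task), i.e. the ISR-to-task wake-up latency.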
I checked the latency with an oscilloscope: it is about 25 µs and stable. When I use TCPnet (with a 100 ms tick), I notice that the latency jitters up to 43 µs.
The only other way I could reproduce this phenomenon was by creating a task that loops forever (for(;;)) with some code between tsk_lock()/tsk_unlock().
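The looping task that reproduces the jitter looks roughly like this (a sketch, assuming a low-priority RTX task):

```c
#include <RTL.h>                  /* Keil RL-ARM RTX kernel API */

/* Low-priority task that periodically disables the scheduler.
   While tsk_lock() is in effect, a timer1 event cannot cause a
   task switch, so the wake-up of the high-priority task is
   delayed until tsk_unlock() -- producing the observed jitter. */
__task void busy_tsk (void) {
  for (;;) {
    tsk_lock ();
    /* some code executed with the scheduler disabled */
    tsk_unlock ();
  }
}
```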
So my conclusion is that even though TCPnet is standalone, it "detects" the presence of RTX and disables the scheduler to protect some non-reentrant functions?
What do you think about it ?
Even though round-robin is disabled, a task switch from the TCPnet task to some other task is either a reduced-context switch (on os_tsk_pass) or a full-context switch (on os_dly_wait, isr_evt_set, etc.) to a higher-priority task that becomes ready. That is where the latency jitter comes from.
You are probably right about the task-switch distinction, but consider that the context switch is not 'triggered' from the TCPnet code. The sequence is: interrupt + isr_evt_set() + switch to the task that IS waiting for the event. Whatever task is executing, as long as it does not lock the scheduler or disable OS interrupts, the time to switch to the timer interrupt context should not vary unless some other interrupt occurs.
Having doubts about the way I ran the experiment, I added a numerical measurement to my test using timer2, and I display the maximum measured latency on the LCD.
I do the following:
1. Plug in the Ethernet cable.
2. Reset the CPU.
3. Unplug the Ethernet cable (restart if the displayed value is 10% over the minimum value measured, since that would mean an Ethernet interrupt occurred).
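The timer2-based measurement might be sketched like this (assumptions: timer2 free-runs as a timestamp counter via T2TC, the event flag is 0x0001, and the LCD routine is a placeholder):

```c
#include <RTL.h>                  /* Keil RL-ARM RTX kernel API */

OS_TID tsk_id;
volatile U32 t_set;               /* timestamp of the wake-up request */

/* Timer1 match ISR: latch timer2's free-running count when the event
   is set, so the task can compute how long the wake-up took. */
void timer1_irq (void) __irq {
  t_set = T2TC;                   /* LPC23xx timer2 counter register */
  isr_evt_set (0x0001, tsk_id);
  T1IR = 1;                       /* clear the match interrupt flag */
  VICVectAddr = 0;
}

__task void tsk (void) {
  U32 latency, max_latency = 0;
  for (;;) {
    os_evt_wait_or (0x0001, 0xFFFF);
    latency = T2TC - t_set;       /* timer2 ticks from ISR to wake-up */
    if (latency > max_latency) {
      max_latency = latency;
      /* display max_latency on the LCD (hardware-specific, omitted) */
    }
  }
}
```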
I will let it run for hours. After the first 10 minutes, no jitter has been observed.
When I leave the cable plugged in, there is still some Ethernet activity (processing ARP requests, etc.) that may influence the test.
I will post the result of this test, but I'm confident that there will not be any jitter anymore.