I have written a small embedded TCP/IP server application but it needs to work lock-step: one query then one response.
My problem is that the client (not under my control) making the requests is running ahead and I don't have the resources to buffer-up an arbitrarily large number of queries.
When a query comes into the server, it arrives in the tcp_callback function. Data for the next query is arriving before I've fully sent the response to the previous one.
How do I impose some flow control on incoming data so that I can work lock-step?
state machine?
Err. I don't understand. I already have a state machine.
The point is that I CAN NOT CONTROL when TCP data is received. It comes into a callback function, which means I MUST deal with it there and then, whatever my state machine is doing. I want to defer dealing with it. If I had infinite resources of memory, I could buffer everything up. I don't have infinite resources.
How do you suggest I implement a state-machine to achieve this ? I'm using a state machine for sending TCP data and using the callback for handling reception.
What I would like to do is to be able to "peek" to see if there is another TCP packet incoming and only deal with it when my state machine is ready to do so. I can't do this, because the callback mechanism effectively works asynchronously to the state machine.
Richard, I have to admit that I have never worked with TCP/IP in an embedded environment (that is going to change...). I can only offer you some help based on my experience in a PC environment. The 'recv' API in a Windows environment
int recv( SOCKET s, char FAR *buf, int len, int flags );
offers the 'flags' parameter, which can be set to MSG_PEEK to peek at the incoming data: the data is copied into the buffer but is not removed from the input queue, and the function returns the number of bytes currently pending.
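For illustration, here is a minimal, self-contained POSIX sketch of the peek-then-read behaviour (the Winsock MSG_PEEK flag behaves the same way); a socketpair stands in for a real TCP connection:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Peek at queued data without consuming it, then read it for real.
   Returns the number of bytes seen by the peek, or -1 on error. */
static long peek_then_read(void)
{
    int sv[2];
    char buf[16];
    long peeked, read_n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;
    write(sv[0], "QUERY", 5);

    /* MSG_PEEK: data is copied out but stays on the input queue */
    peeked = recv(sv[1], buf, sizeof buf, MSG_PEEK);

    /* a normal recv afterwards still sees the same bytes */
    read_n = recv(sv[1], buf, sizeof buf, 0);

    close(sv[0]);
    close(sv[1]);
    return (peeked == read_n) ? peeked : -1;
}
```

Both calls return 5 here, demonstrating that the peek left the data queued.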
You can also make the socket non-blocking using the 'ioctlsocket' API with the FIONBIO command; the same API lets you check how much data the socket has in store (FIONREAD), but I guess it is very platform/application specific. You may be able to use 'setsockopt' to set up different parameters. Generally, if you can use the built-in buffers of TCP/IP, your problem is solved.
My previous response was related to the control mechanisms of the protocol itself. At the application level, can't you implement some internal protocol to give you more control over the meaningful (that is, valuable to you) messages that are exchanged?
Oops, I did work with TCP/IP in an embedded environment, but it was with the support of a TCP/IP library for VxWorks. Am I correct in assuming that you are not working with an RTOS...?
I suppose the answer depends on the API of the TCP/IP library you are using. I am using LwIP and its so-called 'raw API'. It has this function
void tcp_recved(struct tcp_pcb *pcb, u16_t len)
Must be called when the application has received the data. The len argument indicates the length of the received data.
The function is used to acknowledge the reception of data. If this function is not called, data reception will be stalled through the TCP window flow-control mechanism.
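The deferral pattern this enables can be sketched as follows. This is a hedged, self-contained sketch: struct tcp_pcb and tcp_recved() are stubbed with just enough bookkeeping to show the idea; real lwIP code would use the library's own definitions and its tcp_recv callback.

```c
#include <stdint.h>

/* Stubs standing in for lwIP's real type and tcp_recved(); only the
   flow-control bookkeeping below is the point of this sketch. */
struct tcp_pcb { uint16_t window_credit; };
static void tcp_recved(struct tcp_pcb *pcb, uint16_t len)
{
    pcb->window_credit += len;     /* lwIP re-opens the receive window */
}

/* Receive path: hold the data and do NOT call tcp_recved() yet.
   The peer's send window stays closed, so it stops sending queries. */
static uint16_t pending_len;
static void on_data(struct tcp_pcb *pcb, const void *data, uint16_t len)
{
    (void)pcb; (void)data;
    pending_len = len;             /* defer: window remains closed */
}

/* Called by the state machine once the previous response is out. */
static void query_processed(struct tcp_pcb *pcb)
{
    tcp_recved(pcb, pending_len);  /* now the client may send again */
    pending_len = 0;
}
```

The key point is the split: reception merely records the data, and the window is only re-opened when the application decides it is ready.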
- mike
I am using RL-ARM without the RTX (because I can't get it to work). The characteristic of this environment is that all the TCP/IP functions have to work in a single thread: the TCP/IP API functions are non re-entrant.
In any other operating environment, I'd just have the receiving thread block when it has no resources to deal with incoming packets, but I can't do this here because blocking the reception engine (which has to be a callback) would necessarily block all TCP/IP activity.
The system that I am designing is an HTTP server for a specific application. For various reasons, it needs to maintain a continuous TCP connexion between it and the client across several transactions. Unfortunately, this seems to mean that the HTTP client (browser) will start making a new transaction request before it has finished receiving the result of the previous one. Taken to extremes, it may well be possible for the browser to pipeline, in theory, several hundred requests while still waiting on the result of the first one.
This is why I need to impose some flow control on the incoming TCP stream. Documentation on the API is very patchy, amounting to nothing more than a hastily written note.
Richard,
you wrote:
The point is that I CAN NOT CONTROL when TCP data is received
I am not sure you can do that without access to the client - which is a browser so you probably don't have any control over when it sends data.
and then
If the functions involved are not re-entrant, why can't you make them so with a small wrapper? According to your posts you cannot get RTX to work, and at the same time the TCP/IP API is not re-entrant. So, if you need a "threaded" environment, and given some basic re-entrancy measures, you can build a small task scheduler without too much effort! This link http://www.keil.com/forum/docs/thread12635.asp will help you get the job done for an ARM7 or ARM9.
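A minimal sketch of such a wrapper, assuming hypothetical disable_irq()/enable_irq() primitives (stand-ins for the platform's real interrupt-masking calls) and a stubbed non-reentrant TCP call so the guard itself can be shown:

```c
#include <assert.h>

/* Stubs: in a real system these would be the platform's interrupt
   mask primitives and the non-reentrant TCP/IP API call. */
static int irq_depth;                  /* >0 means the scheduler tick is masked */
static void disable_irq(void) { irq_depth++; }
static void enable_irq(void)  { irq_depth--; }

static int tcp_send_raw(const void *buf, int len)   /* non-reentrant */
{
    (void)buf;
    assert(irq_depth > 0);             /* must never run preemptibly */
    return len;
}

/* Wrapper: the preemptive scheduler's tick interrupt is masked for
   the duration of the call, so two tasks can never be inside the
   non-reentrant function at the same time. */
static int tcp_send_guarded(const void *buf, int len)
{
    int r;
    disable_irq();
    r = tcp_send_raw(buf, len);
    enable_irq();
    return r;
}
```

The cost is added interrupt latency for the duration of each wrapped call, so the wrapped sections should be kept short.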
I am confused as to what you are wanting me to do.
The fact that the RTX doesn't work isn't really that relevant; even if it did work, I couldn't separate the transmit/receive aspects of my TCP application because ALL of the TCP/IP API functions are non-re-entrant WITH EACH OTHER as well as internally. This means, that even in a multithreaded environment, I couldn't have a "send" and a "receive" on the go simultaneously.
The only interface that the TCP API offers to me (once the connexion is established) is this:
a) a callback to say "data has arrived"
b) a call to tcp_send()
Both of those (indeed, all TCP activity) need to operate in a single thread (which is all I have in a non-RTX environment anyway).
I have a state machine ultimately driven by the reception of data, which clocks the sending of data out.
This is a very simplistic TCP/IP implementation, but I am forced to use it by management.
Richard, I am trying hard to understand your situation and to offer help - forgive me if I'm not exactly on target... The internal non-re-entrance you indicated is indeed a bigger problem: you are saying that your "send" is asynchronous (as it usually is), and you cannot receive until the data is physically sent? I understand that you do not have a callback saying "data was sent", right? But how then can your software work at all, if you call these functions one after the other? However, if this is not the case (hence, you can "send" and immediately after that "receive"), you certainly can implement your own task scheduler as I suggested, and separate sending and receiving into separate tasks: "send" and "receive" cannot collide if you disable the interrupt of the preemptive scheduler until such a non-re-entrant call returns.
We are currently adding TCP flow control to the TCPnet stack. It will allow you to reduce the TCP window size and in this way tell the client to stop sending data (TCP sliding window). When the data has been processed, the window size will be reset to its default value and data transfer will continue.
Please contact support in a couple of days for an RL-ARM update.
Franc
Richard, I had a quick look at the docs for RL-ARM TCP/IP library here: http://www.keil.com/support/man/docs/rlarm/rlarm_tn_tcpip_prot.htm
It does appear that with this API there is no way to stall reception of TCP data without stalling all TCP activity. Basically, you must process the incoming data as soon as it arrives. This appears to be a severe limitation of this library. One could argue that for some applications this simply cannot work.
This isn't a question about TCP/IP in general. It really needs to be answered by someone who knows or has used the TcpNet subsystem in the RL-ARM.
The situation is this. TcpNet is a very minimalist implementation of TCP/IP. According to the "framework" examples provided by Keil, you have to structure your program in a very specific way. After initialisation, your program's main loop basically has to run like this:
void main_loop() {
    main_TcpNet();
    timer_Task();
    clock_application_state_machine();
}

U16 tcp_callback(..event data..) { }
and that's it. All of the above runs in a single thread. The actual transmission of packets is handled by the timer task. The higher levels of the TCP protocol are handled in main_TcpNet().
TcpNet doesn't maintain a pipeline of outgoing packets: it only allows one packet in flight at a time. This is why it is so slow (not a problem for me).
Also, it doesn't maintain a pipeline of incoming packets. Once an incoming packet has hit the network layer, it *must* call the callback, and I *have* to be prepared to handle it.
All I need is some method whereby I can return some value from the callback saying "I can't handle this packet; please throttle". The callback, however, does not take notice of any value returned from this event.
My clock_application_state_machine() runs the business part of my program.
The callback function provides notification of a few things:
a) connection of the TCP socket
b) closing of the TCP socket
c) incoming TCP packet
d) acknowledgement of the sending of a TCP packet
My clock_application_state_machine() is generally kicked off by the connection of a TCP socket. It then collects incoming packets and parses the incoming request. The result of this request moves the state machine into a different state where it transmits the result, with (d) above moving it from "waiting for ack" to "sending next packet".
This proceeds until the end of the request, whereupon it can receive another one.
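The event flow described above can be sketched as a small dispatch function. This is a hedged approximation: the event names and callback shape only loosely resemble RL-ARM TcpNet's real API (check your library's headers); the states and transitions are the point, and sent_one_packet() is a hypothetical hook called from the send path.

```c
#include <stdint.h>

enum tcp_event { EVT_CONNECT, EVT_DATA, EVT_ACK, EVT_CLOSE };
enum app_state { ST_IDLE, ST_PARSING, ST_SENDING, ST_WAIT_ACK };

static enum app_state state = ST_IDLE;

static uint16_t tcp_callback(enum tcp_event evt)
{
    switch (evt) {
    case EVT_CONNECT:              /* socket up: start collecting */
        state = ST_PARSING;
        break;
    case EVT_DATA:                 /* request bytes arrived */
        if (state == ST_PARSING)
            state = ST_SENDING;    /* request complete: send result */
        /* else: nothing stops more data arriving here - the problem */
        break;
    case EVT_ACK:                  /* previous packet acknowledged */
        if (state == ST_WAIT_ACK)
            state = ST_SENDING;    /* clock out the next packet */
        break;
    case EVT_CLOSE:
        state = ST_IDLE;
        break;
    }
    return 0;
}

/* Hypothetical hook from the send path, once a packet is queued. */
static void sent_one_packet(void) { state = ST_WAIT_ACK; }
```

The comment in the EVT_DATA branch marks the gap: when the machine is in ST_SENDING or ST_WAIT_ACK, the callback still fires and there is no return value it can use to push back.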
The problem is that I CAN'T STOP DATA COMING IN, even though TCP has this facility naturally. As soon as one request has come in, a new one can come in, then another, and another, and another, while the state machine is still transmitting the results of several earlier requests.
Ah, thank you. That seems like just what I wanted. I am currently working on an out-of-date version of RL-ARM anyway, due to management messing about where I work.
I will convince my boss that I need a more up-to-date version of RL-ARM.
Of course you can stop data from coming in - just disable the respective interrupt for a short while. The client will eventually retransmit, because it won't receive an ACK from you for that packet. How about that? And of course you can manipulate the size of the sliding window, as suggested here already, but at a performance cost.