stm32plus::net, TCP module

Here’s the big one: the Transmission Control Protocol (TCP) is the connection-oriented, reliable, end-to-end transport required by all the biggest internet protocols, most notably HTTP.

My expectation going into this module was that it would take the longest to get right, and I wasn’t wrong. The TCP protocol itself is not a difficult one to implement: the state machine is well documented and the state variables for the opposing sides are easy to maintain.

The fun part came when tuning the module so that it played well with stacks at the ‘other end’. With so few resources available to our little microcontroller it was really easy to get into a state where the other end thought we were in trouble and backed off its transmissions, killing throughput. Or, even worse, its behaviour forced us into silly window syndrome, throttling throughput down to a trickle. I’m happy to say that all those problems were overcome and stm32plus::net can be a first-class citizen of the net.

TCP implementation

TCP is fundamentally based around the concept of a connection which represents the current state of a conversation between you and your peer.

stm32plus::net embraces this concept with a TcpConnection base class that provides the core connection functionality; you are expected to subclass it to add your own.

There is no requirement to periodically call any kind of ‘loop’ method: TCP data is received whenever you’re ready for it, and transmissions happen when you ask for them, with any necessary retransmissions being taken care of at that time.

TCP has no compile-time parameters; you can include it by adding its module name to your transport layer configuration like this:

typedef TransportLayer<Tcp> MyTransportLayer;

Note that this code fragment only shows Tcp in the transport layer. In practice a useful transport layer will include more than just TCP.
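For example, a fuller transport layer might be declared like the sketch below. The Udp module name is assumed from the other stm32plus::net transport documentation, and as above only the module list is shown; the full examples pass in the lower layers as well.

// a sketch of a transport layer that carries both UDP and TCP; Udp is
// assumed from the other stm32plus::net transport modules
typedef TransportLayer<Udp,Tcp> MyTransportLayer;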

Configuration parameters

There are many configuration options, some of which are applicable to the TCP module as a whole and some of which are applicable on a per-connection basis. You can extend the per-connection parameters to provide configuration options of your own.

Let’s start with the module-level configuration that you can apply when the stack starts up.

// maximum number of servers at any one time. default is 5
uint16_t tcp_maxServers;

// maximum segment lifetime, in seconds. default is 30
uint16_t tcp_msl;

// the time, in milliseconds, to wait for a SYN-ACK before sending another. Default is 4000.
uint16_t tcp_connectRetryInterval;

// number of times to retry a connect if SYN-ACK not received. Default is 5.
uint16_t tcp_connectMaxRetries;

tcp_maxServers is the maximum number of TCP servers that can be created at any time. As you are in complete control of the number of servers (listeners) that you create this should only ever be a ‘safety belt’ option.

TCP defines a constant called the maximum segment lifetime (MSL). It is an estimate of the longest time a segment can survive in the network before it is delivered or discarded.

When a connection is actively closed, the TCP protocol mandates that the socket we were using goes into a state called TIME_WAIT, which persists for 2*MSL seconds before the port can be re-used and the state variables for the closed connection can be cleaned up.

The default value for MSL is 30 seconds, so a closed port stays in TIME_WAIT for 60 seconds. You can change it with the tcp_msl option.

Establishing a TCP connection involves sending a SYN segment and then waiting for a matching SYN-ACK from the other side. tcp_connectRetryInterval is the time, in milliseconds, to wait for that SYN-ACK before giving up and sending another one.

tcp_connectMaxRetries defines the number of times that we will automatically retry sending a SYN segment before giving up and returning an error code to indicate that there is probably nothing listening on the requested port at the far end.
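As a concrete illustration, here is a sketch of how these module-level values might be set when the stack is brought up. The MyNetworkStack typedef and the initialise()/startup() calls follow the pattern used by the stm32plus::net examples and are assumptions here.

// a minimal sketch, assuming the usual example pattern in which the stack
// typedef aggregates the module-level fields into its Parameters structure

MyNetworkStack::Parameters params;

params.tcp_msl=15;                     // TIME_WAIT becomes 2*15 = 30 seconds
params.tcp_connectRetryInterval=2000;  // wait 2s for a SYN-ACK before retrying
params.tcp_connectMaxRetries=3;        // then give up and report an error

_net=new MyNetworkStack;

if(!_net->initialise(params) || !_net->startup())
  error();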

The following configuration parameters are connection-specific and are available to you in the constructor of your subclass of TcpConnection. See the MyHttpConnection class in the net_web_server example for an example of how to customise the default values.

// per-connection receive buffer size. Default is 256 bytes.
uint16_t tcp_receiveBufferSize;

// first delay to resend an un-acked segment. Default is 4 seconds.
uint32_t tcp_initialResendDelay;

// the resend delay exponential backoff is capped at this value. Default is 60 (1 minute)
uint32_t tcp_maxResendDelay;

// if true, set the PSH flag in sent segments. Default is false.
bool tcp_push;

// if true, single-packet sends are broken into 2 segments to force the receiver to ACK without delay. Default is true.
bool tcp_nagleAvoidance;

// if true, delay ack-ing data until the application has consumed all the receive buffer. Default is true.
bool tcp_delayedDataAck;

Each connection has an internal buffer that is used to receive incoming bytes from the sender. The size of that buffer is configured with tcp_receiveBufferSize. This buffer size sets an upper limit on the amount of data that we allow the sender to send in one segment. Increasing it can increase network throughput at the expense of increased SRAM usage; the default is a conservative 256 bytes.

When we send a segment to the other end we expect to receive an acknowledgement within tcp_initialResendDelay seconds. If we don’t receive one then we will resend the segment.

If an acknowledgement is still not received in response to the resend then we back off exponentially, doubling the number of seconds that we wait for a response, up to a cap of tcp_maxResendDelay seconds. With the defaults, the delays run 4, 8, 16, 32 and then 60 (capped) seconds.

TCP supports something called the PUSH flag, which tells the receiver not to buffer the data in the segment but to deliver it directly to the application. Support for this flag is inconsistent across implementations and indeed stm32plus::net ignores it on receipt. You can use the tcp_push option to indicate that data sent on this connection should have the PUSH flag set.

TCP is acknowledgement based (ACK) as opposed to negative-acknowledgement based (NACK). Because replies are sent for the common case (successful delivery) rather than the uncommon case (loss), the protocol is chatty, throughput suffers, and a strategy is required to make the best of the available network bandwidth.

Windows and Linux both implement the controversial Nagle algorithm, together with delayed acknowledgements, as part of their congestion control strategy. While these band-aids do increase throughput by reducing the number of small segments and ACKs on the wire, they increase latency when only small amounts of data are transmitted in a segment. Since we are a small microcontroller, sending small amounts of data is the common case and so we must provide a strategy to avoid falling victim to the 200ms delayed-ACK timer that interacts so badly with Nagle.

If you set tcp_nagleAvoidance to true (the default) then the send() method of the TcpConnection class will ensure that a minimum of two segments are sent to the other end, even if all the data would fit in a single segment. Receiving the second segment forces the other end to send an ACK immediately rather than holding it back. To make best use of this you should ensure that you send as much data as you can in each call to send().

tcp_delayedDataAck could also be described as the silly window syndrome avoidance flag, and the default is true.

With this flag set, the TCP module delays sending an ACK for received data until it has all been consumed by you. The module can then advertise an empty buffer, optimising the amount of data that can be transferred, at the possible expense of unnecessary retransmits by the sender if you are slow to consume the data from the receive buffer.

If the flag is not set (not recommended) then unnecessary retransmits will be avoided, but it’s highly likely that you’ll cause the sender to back off because we will be advertising a small receive window.

Methods exposed by the TCP module

template<class TConnection,class TUser=void>
bool tcpCreateServer(uint16_t port,
                     TcpServer<TConnection,TUser> *&server,
                     TUser *userptr=nullptr);

template<class TConnection>
bool tcpConnect(const IpAddress& remoteAddress,
                uint16_t remotePort,
                TConnection *& connection);

template<class TConnection>
bool tcpConnect(const IpAddress& remoteAddress,
                uint16_t localPort,
                uint16_t remotePort,
                TConnection *& connection);

TCP servers

tcpCreateServer is the method used to create a new TCP server. A TCP server listens on the port that you specify for incoming connections. When a connection arrives an instance of the TConnection type is created to handle it.

This method is templated with the class name of your subclass of TcpConnection and the name of an optional TUser type that you would like to pass in to the constructor of your connection class. This feature can be used to ‘link back’ cleanly into one of your main application classes without having to resort to unpleasant hacks such as a global class variable pointer.

For example, the net_tcp_server example subclasses TcpConnection with a class called MyTcpConnection and sets up its server like this:

TcpServer<MyTcpConnection> *tcpServer;

if(!_net->tcpCreateServer(12345,tcpServer))
  error();

The net_web_server example uses the optional TUser parameter: its connections need the FileSystem class to read the files they send to the client, so it creates its server like this:

TcpServer<MyHttpConnection,FileSystem> *httpServer;

if(!_net->tcpCreateServer(80,httpServer,_fs))
  error();

The low-level way to get notified when a new connection has arrived at your server is to subscribe to the TcpAcceptEvent event on the TcpServer instance, like this:

server.TcpAcceptEventSender.insertSubscriber(
    TcpAcceptEventSourceSlot::bind(
        this,
        &MyClass::onAccept
    )
  );

You would then implement onAccept() as a member of MyClass:

void MyClass::onAccept(TcpAcceptEvent& event) {
}

Here’s the definition of TcpAcceptEvent:

		struct TcpAcceptEvent : NetEventDescriptor {

			TcpServerBase& server;							///< server that raised this event
			TcpConnection *connection;					///< new connection reference
			bool accepted;


			/**
			 * Constructor
			 * @param s The server that's accepting this connection
			 * @param c The connection to potentially accept
			 */

			TcpAcceptEvent(TcpServerBase&s,TcpConnection *c)
				: NetEventDescriptor(NetEventDescriptor::NetEventType::TCP_ACCEPT),
				  server(s),
				  connection(c),
				  accepted(false) {
			}


			/**
			 * The client MUST call this to accept the connection. If not then the server
			 * will delete the connection.
			 * @return the connection
			 */

			TcpConnection *acceptConnection() {
				accepted=true;
				return connection;
			}
		};

If you want to accept the connection then call acceptConnection() and stash the returned pointer somewhere safe, because you now own it. If you don’t call it then accepted stays false and the server will refuse the connection and clean it up.
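For example, a minimal onAccept() handler might look like this; m_connection is a hypothetical member variable standing in for ‘somewhere safe’.

void MyClass::onAccept(TcpAcceptEvent& event) {

  // take ownership of the new connection; deleting it later is what
  // closes it gracefully

  m_connection=event.acceptConnection();
}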

Looking after arrays of incoming connections is just the kind of repetitive, boring task that a framework should do for you and, guess what, that’s exactly what stm32plus::net offers with its TcpConnectionArray class.

TcpConnectionArray accepts incoming connections on your behalf and calls methods on your connection class when data is ready to receive or the other end is ready to accept new data from you. All of the TCP server examples use this method so please see the example code and inline documentation for TcpConnectionArray for how to use it.

When you are ready to begin accepting connections on your server, call the start() method on your instance of TcpServer.

To close an instance of TcpServer just delete it.
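Putting the server calls together, the whole lifecycle looks roughly like this sketch:

TcpServer<MyTcpConnection> *tcpServer;

// create the server on port 12345 and begin accepting connections

if(!_net->tcpCreateServer(12345,tcpServer))
  error();

tcpServer->start();

// ... serve clients for as long as required ...

// deleting the server closes it and releases its resources

delete tcpServer;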

TCP clients

tcpConnect is the method that you can use to create TCP client connections for connecting to remote servers. Both of the overloads require you to specify the remoteAddress, remotePort and a connection output parameter for the new connection. One of the overloads allows you to specify the localPort number for the connection. You probably don’t need this because the usual case is to allow the TCP module to choose a free port from the ephemeral port range.

stm32plus::net includes a TcpClientConnection class that subclasses TcpConnection so that you don’t have to.

For example, the net_tcp_client example creates its connection to the remote server like this:

TcpClientConnection *ptr;

if(_net->tcpConnect<TcpClientConnection>(
           "192.168.1.9",
           12345,
           ptr)) {
  
  // interact with the server

}

TCP connections

Now that we’ve seen how to create TCP connections either indirectly as a server or directly as a client, here’s how to send data to and receive data from the peer. TcpConnection exposes the following public methods:

bool receive(void *data,
             uint32_t dataSize,
             uint32_t& actuallyReceived,
             uint32_t timeoutMillis=0);

bool send(const void *data,
          uint32_t dataSize,
          uint32_t& actuallySent,
          uint32_t timeoutMillis=0);

uint16_t getLocalPort() const;
const IpAddress& getRemoteAddress() const;
const TcpConnectionState& getConnectionState() const;

bool abort();

bool isRemoteEndClosed() const;
bool isLocalEndClosed() const;

uint16_t getTransmitWindowSize() const;
uint16_t getDataAvailable() const;

receive() allows you to receive data from the remote end, blocking until it has all arrived, the remote end closes the connection, or you optionally time out. timeoutMillis can be set to zero to block until all the data has arrived or the remote end closes the connection.

If all your data is received then the method returns true and actuallyReceived will equal dataSize. If the remote end closes the connection then actuallyReceived will be less than dataSize and the method returns true. If timeoutMillis is greater than zero and no data is received for timeoutMillis milliseconds then the method will return false if no data at all has been received (actuallyReceived==0) or true if some data has been received.

send() allows you to send data to the other end. dataSize is the amount of data you would like to send, actuallySent reports how much was actually sent and timeoutMillis allows you to set a timeout for the send to complete. If timeoutMillis is set to zero then the method does not time out. Data counted as ‘sent’ by this method has been transmitted and acknowledged by the recipient. For efficiency’s sake it is much better to call send() when you have as much data available as possible; try to avoid sequences of calls to send() that each transfer just a few bytes.
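To make this concrete, here is a sketch of a simple exchange over the client connection (ptr) created in the earlier net_tcp_client fragment; the request string, buffer size and 5-second timeouts are arbitrary values for illustration.

uint32_t actuallySent,actuallyReceived;
const char *request="hello\r\n";
char response[128];

// send the request, allowing up to 5 seconds for it to be sent and ACKed

if(!ptr->send(request,strlen(request),actuallySent,5000))
  error();

// read back up to 128 bytes, giving up if nothing arrives for 5 seconds

if(!ptr->receive(response,sizeof(response),actuallyReceived,5000))
  error();

// deleting the connection triggers the graceful close described below

delete ptr;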

abort() allows you to forcibly terminate a connection without performing the TCP graceful shutdown; a ‘reset’ (RST) segment is sent to the remote end. This is a highly abnormal way to close a connection: under normal circumstances you just delete the TcpConnection and let the framework take care of the graceful shutdown.

isRemoteEndClosed() is a helper method that you can use to tell if the remote end has closed. A closed remote-end will not send any more data but may receive more data from you. Half-open TCP connections are permitted, but are quite unusual.

isLocalEndClosed() is a helper method that you can use to tell if the local end of the connection has closed. If you close the local end then you are saying that you will not send any more data to the remote end, but you may receive data from it.

getTransmitWindowSize() is a helper method that tells you the current size of the sender’s transmit window, i.e. the amount of data that the remote end is willing to accept from us. The value returned by this method may change as a result of IRQ activity.

getDataAvailable() allows you to determine how many bytes are available to read using the receive() method without blocking. The value returned by this method may change as a result of IRQ activity but will only ever increase. Only you can cause it to decrease by reading some data.
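For instance, a non-blocking polling pattern built on these helpers might look like the sketch below; handleData() is a hypothetical application function and the 256-byte buffer is arbitrary.

uint16_t avail;
uint32_t actuallyReceived;
uint8_t buffer[256];

// consume only what has already arrived so that receive() cannot block

if((avail=connection->getDataAvailable())!=0) {

  if(avail>sizeof(buffer))
    avail=sizeof(buffer);

  if(connection->receive(buffer,avail,actuallyReceived))
    handleData(buffer,actuallyReceived);      // hypothetical application call
}

// only try to send if the peer is currently advertising space for our data

if(connection->getTransmitWindowSize()!=0) {
  // ... build and send() a response here ...
}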

Closing a TCP connection is simple, just delete the object. The framework will take care of the TCP graceful shutdown process for you.

Events raised by the TCP module

The TCP module raises the following events:

sender:      TcpReceiveEventSender
event:       TcpSegmentEvent
identifier:  NetEventType::TCP_SEGMENT
context:     IRQ
purpose:     TCP segment received

		/**
		 * Event that is sent when a TCP segment is received from the IP module
		 */

		struct TcpSegmentEvent : NetEventDescriptor {

			IpPacket& ipPacket;								///< the original IP packet
			TcpHeader& tcpHeader;							///< the TCP header structure
			uint8_t *payload;										///< pointer to the payload
			uint16_t payloadLength;							///< the size of the payload
			uint16_t sourcePort;								///< taken from the header and converted to host order
			uint16_t destinationPort;						///< taken from the header and converted to host order
			bool handled;												///< set if any connection or server recognises this segment

			TcpSegmentEvent(IpPacket& packet,TcpHeader& header,uint8_t *data,uint16_t datalen,uint16_t sPort,uint16_t dPort)
				: NetEventDescriptor(NetEventType::TCP_SEGMENT),
				  ipPacket(packet),
				  tcpHeader(header),
				  payload(data),
				  payloadLength(datalen),
				  sourcePort(sPort),
				  destinationPort(dPort),
				  handled(false) {
			}
		};

This event is raised when a TCP segment arrives. Various other objects within the stack such as TcpServer and TcpConnection instances subscribe to this event. If the segment is consumed by a subscriber then the handled flag is set to true.

If no subscribers set handled to true then the module tries to match the segment against any of the connections it is managing that are currently in the closing state. If it still doesn’t match then this must be an errant segment and the module sends an RST (reset) segment back to the sender.

Events raised by a TCP server

If you have any active TCP servers then they can raise the following events:

sender:      NetworkNotificationEventSender
event:       TcpServerReleasedEvent
identifier:  NetEventType::TCP_SERVER_RELEASED
context:     Normal code
purpose:     TCP server destructor invoked

		/**
		 * Event descriptor for a TCP server being released
		 */

		class TcpServerBase;

		struct TcpServerReleasedEvent : NetEventDescriptor {

			const TcpServerBase& server;

			/**
			 * Constructor
			 * @param s The server being released
			 */

			TcpServerReleasedEvent(const TcpServerBase& s)
				: NetEventDescriptor(NetEventDescriptor::NetEventType::TCP_SERVER_RELEASED),
				  server(s) {
			}
		};

A TCP server will send this event during the execution of its destructor. It’s a notification to anyone that’s interested that the TCP server is going away. The main TCP module subscribes to this event so that it can decrement the number of active TCP servers that it is monitoring.

sender:      NetworkNotificationEventSender
event:       TcpFindConnectionNotificationEvent
identifier:  NetEventType::TCP_FIND_CONNECTION
context:     Normal code
purpose:     TCP server looking for a connection

		/**
		 * Event used to find an existing TCP connection that matches
		 * the given source and destination ports
		 */

		class TcpConnection;

		struct TcpFindConnectionNotificationEvent : NetEventDescriptor {

			const IpAddress& remoteAddress;				///< the remote address
			uint16_t localPort;											///< local port
			uint16_t remotePort;										///< remote port

			TcpConnection *tcpConnection;   				///< the connection if found, nullptr if not


			/**
			 * Constructor
			 * @param raddr The remote address
			 * @param sPort The source port (remote end)
			 * @param dPort The destination port (local end)
			 */

			TcpFindConnectionNotificationEvent(const IpAddress& raddr,uint16_t sPort,uint16_t dPort)
				: NetEventDescriptor(NetEventType::TCP_FIND_CONNECTION),
				  remoteAddress(raddr),
				  localPort(dPort),
				  remotePort(sPort),
				  tcpConnection(nullptr) {
			}
		};

This notification event is used internally by a TcpServer to search for any outstanding TcpConnection objects that are handling this address/port combination. The query is one of the decision points that a TcpServer considers when deciding whether or not to accept a new connection from a remote client.

sender:      TcpAcceptEventSender
event:       TcpAcceptEvent
identifier:  NetEventType::TCP_ACCEPT
context:     IRQ
purpose:     TCP server accepting a connection

		struct TcpAcceptEvent : NetEventDescriptor {

			TcpServerBase& server;							///< server that raised this event
			TcpConnection *connection;					///< new connection reference
			bool accepted;


			/**
			 * Constructor
			 * @param s The server that's accepting this connection
			 * @param c The connection to potentially accept
			 */

			TcpAcceptEvent(TcpServerBase&s,TcpConnection *c)
				: NetEventDescriptor(NetEventDescriptor::NetEventType::TCP_ACCEPT),
				  server(s),
				  connection(c),
				  accepted(false) {
			}


			/**
			 * The client MUST call this to accept the connection. If not then the server
			 * will delete the connection.
			 * @return the connection
			 */

			TcpConnection *acceptConnection() {
				accepted=true;
				return connection;
			}
		};

This event is raised by a TCP server when a new client is attempting to connect. If you are not using the automation offered by the TcpConnectionArray class then you should subscribe to this event and call acceptConnection() if the incoming connection is acceptable to you. The TcpConnection pointer returned by this method will then be owned by you and should be deleted when you want it to be closed.

If the incoming connection is not acceptable to you then don’t do anything and the TcpServer will clean up.

Events raised by a TCP connection

If you have any active TCP connections then they can raise the following events:

sender:      TcpConnectionClosedEventSender
event:       TcpConnectionClosedEvent
identifier:  NetEventType::TCP_CONNECTION_CLOSED
context:     IRQ
purpose:     Remote end closed connection

		/**
		 * TCP connection closed event. This event signifies that the remote end has closed and will
		 * not be sending any more data. It is legal to send data to the remote end in this state but
		 * most likely you'll want to delete your TcpConnection object, thereby closing your end and
		 * cleaning up.
		 */

		struct TcpConnectionClosedEvent : NetEventDescriptor {

			/**
			 * Reference to the TCP connection object.
			 */

			TcpConnection& connection;


			/**
			 * Constructor
			 * @param c The connection reference
			 */

			TcpConnectionClosedEvent(TcpConnection& c)
				: NetEventDescriptor(NetEventDescriptor::NetEventType::TCP_CONNECTION_CLOSED),
				  connection(c) {
			}
		};

This event is raised when the remote end closes its end of the connection, either by sending us an RST or a FIN segment.

sender:      NetworkNotificationEventSender
event:       TcpConnectionReleasedEvent
identifier:  NetEventType::TCP_CONNECTION_RELEASED
context:     Normal code
purpose:     TCP connection destructor invoked

		/**
		 * Event descriptor for a TCP connection being released
		 */

		struct TcpConnectionReleasedEvent : NetEventDescriptor {

			const TcpConnection& connection;

			/**
			 * Constructor
			 * @param c The connection being released
			 */

			TcpConnectionReleasedEvent(const TcpConnection& c)
				: NetEventDescriptor(NetEventDescriptor::NetEventType::TCP_CONNECTION_RELEASED),
				  connection(c) {
			}
		};

This event is raised during the execution of the TcpConnection object’s destructor. The stack’s TCP module subscribes to this event and uses it to take over the graceful close sequence that the connection must go through before its local port can be released for re-use.

sender:      TcpConnectionDataReadyEventSender
event:       TcpConnectionDataReadyEvent
identifier:  NetEventType::TCP_CONNECTION_DATA_READY
context:     IRQ
purpose:     TCP data is ready to read

		/**
		 * TCP connection data ready event. This event signifies that we have buffered
		 * some data from the remote end that the application should consume.
		 */

		struct TcpConnectionDataReadyEvent : NetEventDescriptor {

			/**
			 * Reference to the TCP connection object.
			 */

			TcpConnection& connection;

			/**
			 * Constructor
			 * @param c The connection reference
			 */

			TcpConnectionDataReadyEvent(TcpConnection& c)
				: NetEventDescriptor(NetEventDescriptor::NetEventType::TCP_CONNECTION_DATA_READY),
				  connection(c) {
			}
		};

This event is raised by the connection when there is data in the receive buffer ready to read. If you subscribe to this event then it’s possible for you to process data as fast as it arrives, bearing in mind that you are in an IRQ context.
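If you want to handle that event yourself, the subscription follows the same pattern shown earlier for TcpAcceptEvent, as in the sketch below; the TcpConnectionDataReadyEventSourceSlot name is assumed from that convention and onDataReady() is a hypothetical member of your own class.

connection->TcpConnectionDataReadyEventSender.insertSubscriber(
    TcpConnectionDataReadyEventSourceSlot::bind(
        this,
        &MyClass::onDataReady
    )
  );

void MyClass::onDataReady(TcpConnectionDataReadyEvent& event) {
  // IRQ context: do the minimum here, e.g. set a flag that your main code
  // checks before calling receive() on event.connection
}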