Boost.Asio async_send question

I am using Boost.Asio for the server application I am writing.

async_send requires the caller to keep the data being sent alive until the send has completed. This means that my code (which looks like this) will fail, because data will no longer be a valid object by the time the write actually takes place.

    void func()
    {
        std::vector<unsigned char> data;
        // ...
        // fill data with stuff
        // ...
        socket.async_send(boost::asio::buffer(data), handler);
    }

So, I decided to do something like this:

    std::vector<unsigned char> data;

    void func()
    {
        // ...
        // fill data with stuff
        // ...
        socket.async_send(boost::asio::buffer(data), handler);
    }

But now I wonder: if I have several clients, do I need to create a separate vector for each connection?

Or can I use one vector for all of them? And if I can use a single vector, will rewriting its contents corrupt the data being sent to my clients?

+6
c++ boost-asio
7 answers

A possible solution is to use a shared_ptr to hold the local vector and change the handler's signature to receive that shared_ptr, extending the lifetime of the data until the send has completed (thanks to Tim for pointing me to this):

    void handler(boost::shared_ptr<std::vector<char> > data)
    {
    }

    void func()
    {
        boost::shared_ptr<std::vector<char> > data(new std::vector<char>);
        // ...
        // fill data with stuff
        // ...
        socket.async_send(boost::asio::buffer(*data), boost::bind(handler, data));
    }
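For what it is worth, the same lifetime trick works without boost::bind if your compiler supports C++11: capture the shared_ptr by value in a lambda. A minimal sketch under that assumption (the free socket parameter and std::shared_ptr are my substitutions, not part of the answer above):

    #include <boost/asio.hpp>
    #include <memory>
    #include <vector>

    void func(boost::asio::ip::tcp::socket& socket)
    {
        auto data = std::make_shared<std::vector<char> >();
        // ...
        // fill *data with stuff
        // ...
        // The lambda copies the shared_ptr, so the vector stays alive
        // until the completion handler has run.
        socket.async_send(
            boost::asio::buffer(*data),
            [data](const boost::system::error_code& ec, std::size_t /*n*/)
            {
                // check ec; data is released once the handler returns
            });
    }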
+13

I solved a similar problem by binding a shared_ptr to the data into my handler function. Since asio holds on to the handler functor until it is called, and that functor keeps a shared_ptr reference, the data remains allocated as long as there is an open request on it.

edit - here is the code:

Here the connection object owns the data buffer currently being written, so the shared_ptr refers to the connection object itself: the bind call attaches the member-function functor to that object reference, and the pending asio operation keeps the object alive.

The key is that each handler must start a new async operation with another reference, or the connection will be released. Once the connection is done, or an error occurs, we simply stop generating new read/write requests. One caveat: you need to make sure you check the error object in all of your callbacks.

    boost::asio::async_write(
        mSocket,
        buffers,
        mHandlerStrand.wrap(
            boost::bind(
                &TCPConnection::InternalHandleAsyncWrite,
                shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred)));

    void TCPConnection::InternalHandleAsyncWrite(
        const boost::system::error_code& e,
        std::size_t bytes_transferred)
    {
        // (body truncated in the original post) check e here and, if all
        // is well, start the next async operation with another
        // shared_from_this() reference; otherwise stop issuing requests
    }
+6

But now I wonder: if I have several clients, do I need to create a separate vector for each connection?

Yes, although each vector need not be at global scope. The typical solution is to keep the buffer as a member of a connection object and bind a member function of that object into the functor passed as the async_write completion handler. That way the buffer stays in scope for the entire lifetime of the asynchronous write. The Asio examples are dotted with this use of binding member functions via this and shared_from_this. In general it is recommended to use shared_from_this to simplify object lifetime management, particularly in the face of io_service::stop() and ~io_service(), although for simple examples this scaffolding is often not needed.

This destruction sequence permits programs to simplify their resource management by using shared_ptr<>. Where an object's lifetime is tied to the lifetime of a connection (or some other sequence of asynchronous operations), a shared_ptr to the object is bound into the handlers for all asynchronous operations associated with it.

A good place to start is the asynchronous echo server example, due to its simplicity.

    boost::asio::async_write(
        socket,
        boost::asio::buffer(data, bytes_transferred),
        boost::bind(
            &session::handle_write,
            this,
            boost::asio::placeholders::error));
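To make the buffer-as-member pattern concrete with shared_from_this rather than a bare this, here is a hedged sketch; the session class, its members, and write() are my illustration, not code from the echo server example:

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <boost/enable_shared_from_this.hpp>
    #include <vector>

    // Instances must be created via boost::shared_ptr<session> for
    // shared_from_this() to work.
    class session : public boost::enable_shared_from_this<session>
    {
    public:
        explicit session(boost::asio::io_service& io) : socket_(io) {}

        void write(const std::vector<char>& payload)
        {
            buffer_ = payload; // lives as long as the session itself

            boost::asio::async_write(
                socket_,
                boost::asio::buffer(buffer_),
                boost::bind(&session::handle_write,
                            shared_from_this(), // keeps session and buffer_ alive
                            boost::asio::placeholders::error));
        }

    private:
        void handle_write(const boost::system::error_code& ec)
        {
            // check ec; the session is destroyed once the last
            // outstanding handler holding shared_from_this() finishes
        }

        boost::asio::ip::tcp::socket socket_;
        std::vector<char> buffer_;
    };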
+5

The way I do this is to really embrace the concept that "TCP is a stream". So I have a boost::asio::streambuf for each connection to represent what I am sending to the client.

Like most of the examples in Boost, I have a tcp_connection class with one object per connection. Each has a member boost::asio::streambuf response_;, and when I want to send something to the client, I just do this:

    std::ostream response_stream(&response_);
    response_stream << "whatever my response message happens to be!\r\n";

    boost::asio::async_write(
        socket_,
        response_,
        boost::bind(
            &tcp_connection::handle_write,
            shared_from_this(),
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
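One detail worth knowing: when a streambuf is passed straight to async_write like this, the written bytes are consumed from its input sequence, so response_ is empty again by the time the handler runs. A sketch of a matching handler, with a body that is my assumption rather than the answerer's code:

    void tcp_connection::handle_write(const boost::system::error_code& ec,
                                      std::size_t /*bytes_transferred*/)
    {
        if (ec)
            return; // dropping the last shared_from_this() reference
                    // lets the connection clean itself up

        // response_ has been drained by async_write; it is now safe to
        // stream the next response into it or start the next read
    }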
+3

You cannot use a single vector unless you send the same, constant data to all clients (a greeting message, for example). This is due to the nature of asynchronous I/O. When you send, the system keeps a pointer to your buffer in its queue along with an AIO request structure. Once it is done with previously queued operations and has free space in its own buffers, the system starts forming packets for your data, copying pieces of your buffer into the appropriate places inside TCP frames. So if you change the contents of your buffer along the way, you corrupt the data sent to the client. On the receive side the system can optimize even further and feed your buffer to the network adapter as the target of a DMA operation; in that case a significant number of CPU cycles is saved on copying, since the DMA controller does it. That optimization, however, is likely to work only if the NIC supports TCP hardware offloading.
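To make the hazard concrete, here is a deliberately wrong sketch of the single-buffer misuse being described; fill_message_for and handler are hypothetical names, not from this answer:

    #include <boost/asio.hpp>
    #include <vector>

    void fill_message_for(boost::asio::ip::tcp::socket&, std::vector<char>&); // hypothetical
    void handler(const boost::system::error_code&, std::size_t);              // hypothetical

    std::vector<char> shared_buffer; // one buffer reused for every client

    void broadcast(boost::asio::ip::tcp::socket& a,
                   boost::asio::ip::tcp::socket& b)
    {
        fill_message_for(a, shared_buffer);
        a.async_send(boost::asio::buffer(shared_buffer), handler);

        // BUG: the first send may still be reading shared_buffer when
        // the next line overwrites it, so client A can end up receiving
        // bytes meant for client B.
        fill_message_for(b, shared_buffer);
        b.async_send(boost::asio::buffer(shared_buffer), handler);
    }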

UPDATE: On Windows, Boost.Asio uses overlapped WSA I/O with completion notifications delivered via an I/O completion port (IOCP).

+2

Krit explained the data corruption, so I will give you an implementation suggestion.

I would suggest using a separate vector for each send operation currently in progress, rather than one per connection, since you may want to send several messages on the same connection back to back, without waiting for the earlier ones to complete.
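One common way to realize "a buffer per send in progress" is a per-connection write queue: every message gets its own storage, and each completion chains the next write so only one async_write is outstanding at a time. A minimal sketch (the writer class is my illustration; real code would also want shared_from_this-style lifetime handling as in the other answers):

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>
    #include <deque>
    #include <vector>

    class writer
    {
    public:
        explicit writer(boost::asio::ip::tcp::socket& s) : socket_(s) {}

        void send(const std::vector<char>& message)
        {
            bool idle = queue_.empty();
            queue_.push_back(message); // one buffer per pending send
            if (idle)
                write_next();
        }

    private:
        void write_next()
        {
            // queue_.front() stays valid while further sends are queued,
            // because std::deque::push_back does not move existing elements
            boost::asio::async_write(
                socket_,
                boost::asio::buffer(queue_.front()),
                boost::bind(&writer::handle_write, this,
                            boost::asio::placeholders::error));
        }

        void handle_write(const boost::system::error_code& ec)
        {
            queue_.pop_front(); // that send's buffer is done
            if (!ec && !queue_.empty())
                write_next();
        }

        boost::asio::ip::tcp::socket& socket_;
        std::deque<std::vector<char> > queue_;
    };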

+2

You will need one write buffer per connection. Others have said to use a vector per connection, as in your original idea, but for simplicity I would recommend using a std::string rather than a vector with your new approach.

Boost.Asio has some conveniences built in for using strings as write buffers, which makes them simpler to work with.
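For illustration, boost::asio::buffer does accept a std::string directly, so the per-connection buffer can simply be a string member. This sketch reuses the tcp_connection names from the streambuf answer above and is my assumption, not the answerer's code:

    std::string message_; // member; must outlive the write

    void tcp_connection::start_write()
    {
        message_ = "whatever the response happens to be\r\n";
        boost::asio::async_write(
            socket_,
            boost::asio::buffer(message_), // const buffer over the string
            boost::bind(&tcp_connection::handle_write, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }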

Just a thought.

0
