Description
I'm trying to understand the exact behavior intended for this function:
poco/Net/include/Poco/Net/SocketImpl.h, lines 224 to 231 in 3f76ad6:

```cpp
virtual int receiveBytes(Poco::Buffer<char>& buffer, int flags = 0, const Poco::Timespan& timeout = 100000);
	/// Receives data from the socket and stores it in the buffer.
	/// If needed, the buffer will be resized to accomodate the
	/// data. Note that this function may impose additional
	/// performance penalties due to the check for the available
	/// amount of data.
	///
	/// Returns the number of bytes received.
```
Does it simply return 0 in case of a timeout? Is it intended as a polling function?
Similar functions throw a TimeoutException on timeout, with the limit set by setReceiveTimeout. This overload, however, takes its own timeout parameter with a default value. At first glance it might seem to supersede the socket's timeout attribute, but that's not quite the case: there is still a timeout check inside the function body, and, as mentioned above, the function appears to return 0 when the parameter-passed timeout is reached.
Is this by design? If so, should I submit a patch to update the documentation?
And in that case, how can I get a TimeoutException when receiving into a `Poco::Buffer<char>&` rather than a `SocketBufVec&` or a `void* buffer, int length` pair? As it stands, the behavior seems to differ based solely on the choice of output data structure.