Commit b91d29a28e170c16d65d956db79f2cd3a82372d2 introduces a bug and breaks the Curl_closesocket function. The sock_accepted flag for the second socket should be set to TRUE before the sockopt callback is called, because if the callback returns an error, Curl_closesocket is going to call the fclosesocket callback for the accept()ed socket.
For active FTP connections, applications may need to set socket options after the accept() call returns successfully. This fix invokes the callback registered with the CURLOPT_SOCKOPTFUNCTION option. Also, a new socket type, CURLSOCKTYPE_ACCEPT, is added. This type is passed to application callbacks in the 'purpose' parameter, so applications can use it to distinguish between socket types.
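A minimal sketch of how an application callback might use the new purpose value; the setsockopt() call and TCP_NODELAY are placeholder choices for the example, and setup_active_ftp() is an invented helper, not part of the fix itself:

  #include <curl/curl.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  /* The 'purpose' argument lets the callback tell the primary connection
     socket apart from a socket that libcurl accept()ed for an active FTP
     data connection. */
  static int sockopt_cb(void *clientp, curl_socket_t curlfd,
                        curlsocktype purpose)
  {
    (void)clientp;
    if(purpose == CURLSOCKTYPE_ACCEPT) {
      int yes = 1;
      /* placeholder option applied to the accept()ed data socket */
      if(setsockopt(curlfd, IPPROTO_TCP, TCP_NODELAY,
                    (void *)&yes, sizeof(yes)))
        return CURL_SOCKOPT_ERROR; /* makes libcurl close the socket */
    }
    return CURL_SOCKOPT_OK;
  }

  void setup_active_ftp(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);
    curl_easy_setopt(curl, CURLOPT_FTPPORT, "-"); /* request active FTP */
  }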
Commit e351972bc8 brought in the ssh agent support, but some uses of the libssh2 agent API were done unconditionally, which wasn't good enough since that API hasn't always been present.
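The general shape of guarding such usage looks something like the sketch below. The HAVE_AGENT_API macro and try_agent_auth() helper are invented for the example, and the 1.2.2 version threshold is an assumption about when the agent API appeared; the libssh2_agent_* calls themselves are the real agent API:

  #include <libssh2.h>

  /* derive an availability macro from the libssh2 version at build time */
  #if defined(LIBSSH2_VERSION_NUM) && (LIBSSH2_VERSION_NUM >= 0x010202)
  #define HAVE_AGENT_API 1
  #endif

  int try_agent_auth(LIBSSH2_SESSION *session, const char *user)
  {
  #ifdef HAVE_AGENT_API
    LIBSSH2_AGENT *agent = libssh2_agent_init(session);
    int rc = -1;
    if(agent) {
      if(!libssh2_agent_connect(agent) &&
         !libssh2_agent_list_identities(agent)) {
        struct libssh2_agent_publickey *identity = NULL;
        /* walk the identities offered by the agent */
        while(!libssh2_agent_get_identity(agent, &identity, identity)) {
          if(!libssh2_agent_userauth(agent, user, identity)) {
            rc = 0; /* authenticated */
            break;
          }
        }
      }
      libssh2_agent_disconnect(agent);
      libssh2_agent_free(agent);
    }
    return rc;
  #else
    (void)session; (void)user;
    return -1; /* this libssh2 has no agent support */
  #endif
  }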
By reading the ->head pointer and using that instead of the ->size number to figure out if there's a list remaining, we avoid the (false positive) clang-analyzer warning that we might dereference a NULL pointer.
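The pattern boils down to something like this; the struct names below are simplified stand-ins, not the actual linked-list declarations:

  #include <stddef.h>

  struct ll_element {
    void *ptr;
    struct ll_element *next;
  };

  struct llist {
    struct ll_element *head;
    size_t size;
  };

  void *first_entry(struct llist *list)
  {
    /* before: if(list->size) return list->head->ptr;
       the analyzer cannot prove that a non-zero size implies a non-NULL
       head, so it warns about a possible NULL dereference */
    if(list->head)
      return list->head->ptr;
    return NULL;
  }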
I suspect this is a regression introduced in commit 207cf150, included
since 7.24.0.
Avoid showing '(nil)' as hostname in verbose output by making sure the
hostname fixup function is called early enough to set the pointers that
are used for this. The name data is set again for each request even for
re-used connections, to handle multiple hostnames over the same connection (like with a proxy) or cases where the casing etc. of the host name changes between requests (which has proven to be important at least once in the past).
Test 1011 was modified to use a redirect with a re-used connection, since it then showed the bug and now no longer does. There's currently no easy way to have the test suite detect '(nil)' texts in verbose output, so no tests will detect if this problem gets reintroduced.
Bug: http://curl.haxx.se/mail/lib-2012-07/0111.html
Reported by: Gisle Vanem
We found a problem with ftp transfer using libcurl (7.23 and 7.25)
inside an application which is receiving unix signals (SIGUSR1,
SIGUSR2...) almost continuously. (Linux 2.4, PowerPC, HAVE_POLL_FINE
defined).
Curl_socket_check() uses poll() to wait for the socket, and retries it
when a signal is received (EINTR). However, if a signal is received and
it also happens that the timeout has been reached, Curl_socket_check()
returns -1 instead of 0 (indicating an error instead of a timeout).
In our case, the result is an aborted connection even before the ftp
banner is received from the server, and a return value of
CURLE_OUT_OF_MEMORY from curl_easy_perform() (Curl_pp_multi_statemach(),
in pingpong.c, actually returns OOM if Curl_socket_check() fails :-)
Funny to debug on a system on which OOM is a possible cause).
Bug: http://curl.haxx.se/mail/lib-2012-07/0122.html
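The fix amounts to treating "interrupted, but the deadline has already passed" as a timeout rather than an error. A standalone sketch of that retry logic (a single read-only descriptor with a finite timeout, much simplified compared to Curl_socket_check(); wait_readable() is an invented helper name):

  #include <errno.h>
  #include <poll.h>
  #include <time.h>

  /* returns >0 when readable, 0 on timeout, -1 on real error */
  int wait_readable(int fd, int timeout_ms)
  {
    struct timespec ts;
    long long deadline, now;
    struct pollfd pfd;
    int rc;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    deadline = (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000 +
               timeout_ms;

    pfd.fd = fd;
    pfd.events = POLLIN;

    for(;;) {
      rc = poll(&pfd, 1, timeout_ms);
      if(rc >= 0)
        return rc;            /* 0 = timeout, >0 = ready */
      if(errno != EINTR)
        return -1;            /* real error */

      /* interrupted by a signal: recompute the remaining time */
      clock_gettime(CLOCK_MONOTONIC, &ts);
      now = (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
      if(now >= deadline)
        return 0;             /* the timeout expired while interrupted */
      timeout_ms = (int)(deadline - now);
    }
  }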
Due to WSAPoll bugs, libcurl does not work as intended. When the cURL library is used to set up a connection to an incorrect port, normally the result is CURLE_COULDNT_CONNECT /* 7 */, but due to the bug in WSAPoll the result now is CURLE_OPERATION_TIMEDOUT /* 28 - the timeout time was reached */.
On August 1, Jan Koen Annot opened a case for this to Microsoft Premier
Online (https://premier.microsoft.com/). The support engineer handling
the case wrote that the case description is quite clear. He will try to
reproduce the issue and then proceed with troubleshooting it.
Reported by: Jan Koen Annot
Bug: http://curl.haxx.se/mail/lib-2012-07/0310.html
When figuring out if the data stream needs to be rewound when the
request is to be resent, we must not access the HTTP struct unless the
protocol used is indeed HTTP...
Bug: http://curl.haxx.se/bug/view.cgi?id=3544688
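A sketch of the guard pattern: only look at the HTTP-specific part of a per-protocol union when the handler really is HTTP. The struct layout and names below are invented for the example, not the actual curl internals:

  #include <stdbool.h>

  enum proto { PROTO_HTTP, PROTO_FTP, PROTO_OTHER };

  struct http_state { bool sending_request_body; };
  struct ftp_state  { int transfer_phase; };

  struct connection {
    enum proto protocol;
    union {
      struct http_state http;
      struct ftp_state ftp;
    } proto;
  };

  bool needs_rewind(const struct connection *conn)
  {
    /* the point of the fix: check the protocol before touching the HTTP
       member of the union */
    if(conn->protocol == PROTO_HTTP)
      return conn->proto.http.sending_request_body;
    return false;
  }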
Previously the curl_multi interface would freeze if darwinssl was
enabled and at least one of the handles tried to connect to a Web site
using HTTPS. Removed the "wouldblock" state darwinssl was using because
I figured out a solution for our "would block but in which direction?"
dilemma.
In many states the easy_conn pointer is referenced and just assumed to
be working. This adds an extra check, since analysis indicates there's a risk we can end up in these states with a NULL pointer there.
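The shape of such a precaution, sketched here with invented names rather than the literal multi-interface code:

  #include <stdio.h>

  struct connection { int sock; };

  enum mstate { STATE_CONNECT, STATE_DO, STATE_PERFORM, STATE_DONE };

  int run_state(enum mstate s, struct connection *conn)
  {
    if(s > STATE_CONNECT && s < STATE_DONE && !conn) {
      /* every state in this range dereferences 'conn', so bail out with
         an internal error instead of crashing on a NULL pointer */
      fprintf(stderr, "in state %d with no connection, bailing out\n",
              (int)s);
      return -1;
    }
    /* per-state handling would follow here */
    return 0;
  }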
A HEAD response has no body length and gets the headers like the corresponding GET would, so it should not get closed after the response based on the same rules. This mistake caused connections that did HEAD to get closed too often without a valid reason.
Bug: http://curl.haxx.se/bug/view.cgi?id=3542731
Reported by: Eelco Dolstra
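The rule being corrected can be sketched like this (field names invented for the example): a response without Content-Length and without chunked encoding normally means the body is delimited by the server closing the connection, but a HEAD response carries no body at all, so that rule must not force a close.

  #include <stdbool.h>

  struct response_info {
    bool has_content_length;
    bool is_chunked;
    bool request_was_head;
  };

  bool must_close_after_response(const struct response_info *r)
  {
    if(r->request_was_head)
      return false;  /* no body follows; same keep-alive rules as for GET */
    if(!r->has_content_length && !r->is_chunked)
      return true;   /* body is terminated by the connection closing */
    return false;
  }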
The function https_getsock was only implemented properly when USE_SSLEAY or USE_GNUTLS is defined, but it is also necessary for USE_SCHANNEL.
The problem occurs when Curl_read_plain or Curl_write_plain returns CURLE_AGAIN. In that case CURLE_OK is returned to the multi-interface, the used socket is set to state CURL_POLL_REMOVE and the easy-state is set to CURLM_STATE_PROTOCONNECT. This is fine, because later the socket should be set to CURL_POLL_IN or CURL_POLL_OUT via multi_getsock. That's where https_getsock is called, but it doesn't return any sockets.
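What such a getsock function has to do can be sketched as follows; the macros, types and names below are simplified imitations of curl's internal style, not the actual schannel code. When the TLS handshake previously returned "again", the socket must be handed back to the multi interface together with the direction it is waiting on:

  #define GETSOCK_BLANK        0
  #define GETSOCK_READSOCK(i)  (1 << (i))
  #define GETSOCK_WRITESOCK(i) (1 << (16 + (i)))

  enum tls_connect_state {
    ssl_connect_done,
    ssl_connect_reading,   /* handshake wants to read more */
    ssl_connect_writing    /* handshake wants to write more */
  };

  struct tls_conn {
    int sock;
    enum tls_connect_state state;
  };

  int tls_getsock(const struct tls_conn *conn, int *socks, int numsocks)
  {
    if(numsocks < 1)
      return GETSOCK_BLANK;

    if(conn->state == ssl_connect_writing) {
      socks[0] = conn->sock;
      return GETSOCK_WRITESOCK(0);  /* poll the socket for writability */
    }
    if(conn->state == ssl_connect_reading) {
      socks[0] = conn->sock;
      return GETSOCK_READSOCK(0);   /* poll the socket for readability */
    }
    return GETSOCK_BLANK;           /* nothing to wait for right now */
  }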
The code was printing a warning when SNI was set up successfully. Oops.
Printing the cipher number in verbose mode was something only TLS/SSL
programmers might understand, so I had it print the name of the cipher,
just like in the OpenSSL code. That'll be at least a little bit easier
to understand. The SecureTransport API doesn't have a method of getting
a string from a cipher like OpenSSL does, so I had to generate the
strings manually.
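Mapping the negotiated suite to a printable name ends up being a lookup written out by hand. A tiny illustration: only a few IANA cipher suite code points are shown, cipher_name() is an invented helper, and the 0xC013 value fed to it is just an example input such as one obtained from SSLGetNegotiatedCipher():

  #include <stdio.h>

  static const char *cipher_name(unsigned short suite)
  {
    switch(suite) {
    case 0x000A: return "TLS_RSA_WITH_3DES_EDE_CBC_SHA";
    case 0x002F: return "TLS_RSA_WITH_AES_128_CBC_SHA";
    case 0x0035: return "TLS_RSA_WITH_AES_256_CBC_SHA";
    case 0xC013: return "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA";
    case 0xC014: return "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA";
    default:     return "TLS_UNKNOWN_CIPHER";
    }
  }

  int main(void)
  {
    /* print the name instead of the bare cipher number */
    printf("SSL connection using %s\n", cipher_name(0xC013));
    return 0;
  }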
When doing CONNECT requests, libcurl must make sure the connection is
alive as much as possible. NTLM requires it and it is generally good for
other cases as well.
NTLM over CONNECT requests has been broken since this regression I
introduced in my CONNECT cleanup commits that started with 41b0237834,
included since 7.25.0.
Bug: http://curl.haxx.se/bug/view.cgi?id=3538625
Reported by: Marcel Raad