When switching to HTTP because an HTTP proxy is being used, the existing
handler is now checked to see if it is already "compatible". This allows the
https handler to remain while other non-http handlers will be redirected.
Bug: http://curl.haxx.se/mail/lib-2011-05/0214.html
Reported by: Jerome Robert
Fix compiler warning: `keycheck' might be used uninitialized in this function.
Fix compiler warning: `keybit' might be used uninitialized in this function.
Introduced the initial setup to allow closesocket callbacks by making
sure sclose() is only ever called from one place in the libcurl source,
while still passing all test cases.
The protocol handler's flags field can now indicate that the protocol
requires a password, so that the set_userpass function doesn't have to
have specific knowledge of which protocols do.
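A minimal sketch of the idea, with illustrative names (the real flag bit
and handler struct live in the library headers and may differ):

  #include <stdio.h>

  #define PROTOPT_NEEDSPWD (1 << 0)  /* hypothetical flag bit */

  struct handler {
    const char *scheme;
    unsigned int flags;              /* protocol characteristics */
  };

  /* a handler declares the need via the flag */
  static const struct handler ftp_like = { "ftp", PROTOPT_NEEDSPWD };

  /* no per-protocol knowledge needed here, just check the flag */
  static void set_userpass(const struct handler *h,
                           char *passwd, size_t pwlen)
  {
    if((h->flags & PROTOPT_NEEDSPWD) && !passwd[0])
      snprintf(passwd, pwlen, "%s", "default-password");
  }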
Made several functions static
Defined one function to nothing when RTSP is disabled to avoid
#ifdefs in the code.
Removed explicit rtsp.h includes
Using 'socks5h' as proxy protocol will make it a
CURLPROXY_SOCKS5_HOSTNAME proxy, which is SOCKS5 with the proxy asked
to resolve host names. I found no "standard" protocol name for this.
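For example, from application code (proxy address illustrative):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* 'socks5h' asks the proxy itself to resolve host names */
      curl_easy_setopt(curl, CURLOPT_PROXY,
                       "socks5h://127.0.0.1:1080");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }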
Introduce an INIT state for the SSH state machine and set libssh2
non-blocking in that state, so that non-blocking mode is set properly
before libssh2_session_startup() is called.
Bug: http://curl.haxx.se/mail/archive-2011-05/0001.html
Now use gai_strerror() to get proper error messages when getaddrinfo()
has failed. Detect the function in configure.
Code based on work and suggestions by Jeff Pohlmeyer and Guenter Knauf
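The standard call in question works like this (host name illustrative):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
    struct addrinfo hints, *res = NULL;
    int rc;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    rc = getaddrinfo("nonexistent.example", "80", &hints, &res);
    if(rc != 0)
      /* gai_strerror() maps the EAI_* code to readable text */
      fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
    else
      freeaddrinfo(res);
    return 0;
  }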
Improved library search by the check_function_exists_concat() macro:
it no longer reverses the list of libraries.
Improved OpenSSL library search: first find zlib, then search for
openssl libraries that may depend on zlib.
For Unix: openssl libraries can now be detected in nonstandard
locations. Supply CMAKE_LIBRARY_PATH to CMake on command line.
Added installation capability (very basic so far).
When connecting to a socks or similar proxy we do the proxy handshake at
once when we know the TCP connect is completed and we only consider the
"connection" complete after the proxy handshake. This fixes test 564
which is now no longer considered disabled.
Reported by: Dmitri Shubin
Bug: http://curl.haxx.se/mail/lib-2011-04/0127.html
The make target checksrc now works in the root makefile and in both the
src and lib directories.
It is also run automatically on "all" if configure --enable-debug was
used.
It now scans multiple files and outputs an error+warning count summary
at the end in case at least one was detected.
-D can be used to specify in which dir the files are located
The script now scans for conditions that start with a space for
if/while/for lines.
For now provide prototypes instead of including the
non-standard normalisation.h which is only available in the
"Internationalized Domain Names Mitigation APIs" download.
asyn-ares.c and asyn-thread.c are two separate backends that implement
the same (internal) async resolver API for libcurl to use. Backend is
specified at build time.
The internal resolver API for asynchronous resolvers is defined in asyn.h.
Fixed indents, coding conventions and white space edits.
Modified the c-ares completion callback function to again NOT read the
conn data when the ares handle is being taken down, as it may already
have been freed by then.
For now we directly import the Idn* symbols with the linker;
an upcoming release of OWC will have these added to the import
lib normaliz.lib, and prototypes are added to winnls.h.
Make sure that files are closed before the post quote commands run,
since if they operate on the just-transferred file they could otherwise
easily fail.
Patch by: Rajesh Naganathan (edited)
libcurl failed to check the correct struct for HTTPS after CONNECT was
issued to the proxy, so it didn't do the TLS handshake and subsequently
failed the connection. A regression released in 7.21.5 (introduced
around commit 8831000bc0).
Bug: http://curl.haxx.se/mail/lib-2011-04/0134.html
Reported by: Josue Andrade Gomes
It is now possible to use any combination of features without
having to first add makefile targets to the main makefile. The
main makefile now passes the 'mingw32-feat1-feat2' as var CFG,
and the ./[lib|src]/Makefile.m32 parses the CFG var to determine
the features to be enabled.
changed windows.h include to system header;
changed obsolete 2nd check for str_w to str_utf8 in order to catch
malloc() failure and avoid a free(NULL);
changed calls to GetLastError() to void to kill unused var compiler
warnings;
moved one call to GetLastError() into the else case so that it's only
called when WideCharToMultiByte() really fails.
Added CURLOPT_TRANSFER_ENCODING as the option to set to request Transfer
Encoding in HTTP requests (if built zlib enabled). I also renamed
CURLOPT_ENCODING to CURLOPT_ACCEPT_ENCODING (while keeping the old name
around) to reduce the confusion now that we have two encoding options
for HTTP.
--tr-encoding is now the new command line option for curl to request
this, and thus I updated the test cases accordingly.
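A minimal usage sketch from application code (error handling omitted):

  /* in the transfer setup, with 'curl' being a CURL easy handle: */
  /* ask for a compressed Transfer-Encoding (zlib-enabled build) */
  curl_easy_setopt(curl, CURLOPT_TRANSFER_ENCODING, 1L);
  /* the renamed option; the old CURLOPT_ENCODING name remains an alias */
  curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "gzip");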
When TE: is inserted in the request, we must add a "Connection: TE" as
well to be HTTP 1.1 compliant. If a custom Connection: header is passed
in, we must use that and only append TE to it. Test case 1125 verifies
TE: + custom Connection:.
Since this struct member is used in the code to determine what and how
to decode automatically and since it is now also used for compressed
Transfer-Encodings, I renamed it to the more suitable 'auto_decoding'
Transfer-Encoding differs from Content-Encoding in a few subtle ways,
but primarily it concerns the transfer only and not the content, so when
a transfer is discovered to be compressed we know we have to uncompress
it. Compressed transfers will only arrive in a response after we have
requested them with the appropriate TE: header.
Test cases 1122 and 1123 verify this.
When checking if an existing RTSP connection is alive or not, the
checkconnection function might be called with a SessionHandle pointer
being NULL and then referenced causing a crash. This happened only using
the multi interface.
Reported by: Tinus van den Berg
Bug: http://curl.haxx.se/bug/view.cgi?id=3280739
In case a client certificate is used, invalidate SSL session cache
at the end of a session. This forces NSS to ask for a new client
certificate when connecting second time to the same host.
Bug: https://bugzilla.redhat.com/689031
* Rename the object file directory from 'objs' to 'BCC_obj'. I feel
it should be named properly. Ref. Makefile.Watcom where it's called
'WC_Win32.obj'.
* Turn off these warnings to keep the build totally silent (with CBuilder-6
that is).
-w-inl 8026 Functions X are not expanded inline.
-w-pia 8060 Possibly incorrect assignment
-w-pin 8061 Initialization is only partially bracketed
I'm sure the warnings could be fixed the "proper" way or with some added
"#pragma" statements. But that just clutters the sources IMHO.
* $(MKDIR) and $(RMDIR) have been replaced with the shell-commands 'md'
and 'rd'. When having MingW/Msys programs 'mkdir.exe' and 'rmdir.exe' in
$PATH, this confuses Borland's make and the result (the cleaning etc.) would
not be as expected.
* Added a ".path.int = $(OBJDIR)" to tell make where the $(PREPROCESSED)
files are. Why we need the preprocess step in the first place is beyond me
(Yang?). But I'll leave that for now.
Stop the abuse of CURLE_FAILED_INIT as return code for things not being
init related by introducing two new return codes:
CURLE_NOT_BUILT_IN and CURLE_UNKNOWN_OPTION
CURLE_NOT_BUILT_IN replaces return code 4, which has been obsoleted for
several years. It is used for returning an error when something is
attempted to be used but the feature/option was not enabled or was
explicitly disabled at build-time. Getting this error mostly means that
libcurl needs to be rebuilt.
CURLE_FAILED_INIT is now reserved and used strictly for init
failures. Getting this problem means something went seriously wrong,
like a resource shortage or similar.
CURLE_UNKNOWN_OPTION is the option formerly known as
CURLE_UNKNOWN_TELNET_OPTION (and the old name is still present,
separately defined to be removed in a very distant future). This error
code is meant to be used to return when an option is given to libcurl
that isn't known. This problem would mostly indicate a problem in the
program that uses libcurl.
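A sketch of how an application might tell the new codes apart (the
option used here is just an arbitrary example):

  CURLcode rc = curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
  if(rc == CURLE_NOT_BUILT_IN) {
    /* feature disabled or absent at build time: rebuild libcurl */
  }
  else if(rc == CURLE_UNKNOWN_OPTION) {
    /* libcurl doesn't know this option: likely a bug in the app or
       a too-old library */
  }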
In my attempts to reduce #ifdefs in code, the SOCKS functions are now
macros when libcurl is built without proxy support and therefore the FTP
code could avoid some #ifs.
The new http_proxy.* files now host HTTP proxy specific code (500+ lines
moved out from http.c), and as a consequence there is a macro introduced
for the Curl_proxyCONNECT() function so that code can use it without
actually supporting proxy (or HTTP) in builds.
1 - make sure to #define macros for cookie functions in the cookie
header when cookies are disabled to avoid having to use #ifdefs in code
using those functions.
2 - move cookie-specific code to cookie.c and use the function
conditionally as mentioned in (1).
net result: 6 #if lines removed, and 9 lines of code less
Within multi_socket, where conn is used as a shorthand, data could be
changed and multi_runsingle could modify which connectdata struct to
deal with. This bug has not been included in a public release.
Using 'conn' like that turned out to be ugly. This change is a partial
revert of commit f1c6cd42f4.
Reported by: Miroslav Spousta
Bug: http://curl.haxx.se/bug/view.cgi?id=3265485
When asked to bind the local end of a connection when doing a request,
the code will now disqualify other existing connections from re-use even
if they are connected to the correct remote host.
This will also affect which connections can be used for pipelining,
so that only connections that aren't bound, or are bound to the same
device/port you're asking for, will be considered.
The RTSP-specific function for checking for "dead" connection is better
located in rtsp.c. The code using this is now written without #ifdefs as
the function call is instead turned into a macro (in rtsp.h) when RTSP
is disabled.
Move ipv6-functional-probe into a single function that is used from all
places that need to know.
Make the probe function store the result in a static variable so that
subsequent invocations just return the previous result and won't have to
probe again.
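A minimal sketch of the pattern (the real probe in connect.c may differ
in detail):

  #include <sys/socket.h>
  #include <unistd.h>

  static int ipv6_works(void)
  {
    static int works = -1;      /* -1 means not yet probed */
    if(works == -1) {
      int s = socket(PF_INET6, SOCK_DGRAM, 0);
      if(s < 0)
        works = 0;              /* no usable IPv6 stack */
      else {
        works = 1;
        close(s);
      }
    }
    return works;               /* later calls skip the probe */
  }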
Curl_posttransfer is called too soon to add the final newline.
Moved the newline logic to pgrsDone as there are no more calls to
update the progress status after that call.
Reported by: Dmitri Shubin <sbn_at_tbricks.com>
http://curl.haxx.se/mail/lib-2010-12/0162.html
When libcurl sends an HTTP request on a re-used connection and detects it
being closed (ie no data at all was read from it), it is important to
rewind if any data in the request was sent using the read callback or
was read from file, as otherwise the retried request will be broken.
Reported by: Chris Smowton
Bug: http://curl.haxx.se/bug/view.cgi?id=3195205
When NSS-powered libcurl connected to an SSL server with
CURLOPT_SSL_VERIFYPEER equal to zero, NSS remembered that the peer
certificate was accepted by libcurl and did not ask the second time when
connecting to the same server with CURLOPT_SSL_VERIFYPEER equal to one.
This patch turns off the SSL session cache for the particular SSL socket
if peer verification is disabled. In order to avoid any performance
impact, the peer verification is completely skipped in that case, which
makes it even faster than before.
Bug: https://bugzilla.redhat.com/678580
The PROT_* set of internal defines for the protocols is no longer
used. We now use the same bits internally as we have defined in the
public header using the CURLPROTO_ prefix. This is for simplicity and
because the PROT_* prefix was already used internally for a duplicate
set of KRB4 values.
The PROTOPT_* defines were moved up to just below the struct definition
within which they are used.
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move away central protocol information such as
CLOSEACTION and DUALCHANNEL from single defines in a central place, out
to each protocol's definition. It also made us stop abusing the protocol
field for other info than the protocol, and we could start cleaning up
other protocol-specific things by adding flags bits to set in the
handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
The non-blocking connect improvement for IMAP showed that we didn't
properly define the Curl_ssl_connect_nonblocking function for non-SSL
builds.
Reported by: Tor Arntsen
Only download and convert the certdata to the ca-bundle.crt if Mozilla
changed the data
The Perl LWP module (which in a bit of a circular reference is used by
mk-ca-bundle.pl) is now indirectly using this script. I made this small
tweak to make it easier to automatically maintain the generated
ca-bundle.crt file in version control.
Some protocols have to call the underlying functions without regard to
what exact state the socket signals. For example even if the socket says
"readable", the send function might need to be called while uploading,
or vice versa. This is the case for the libssh2-based protocols SCP and
SFTP, so we now introduce a define to mark those protocols and we make
the multi interface code aware of this concept.
This is another fix to make test 582 run properly.
As a new state was recently added to the IMAP state machine, it has to
be in the array of names as well, as otherwise libcurl crashes when a
debug version runs...
For uploads we want to use the _sending_ function even when the socket
turns out readable as the underlying libssh2 sftp send function will
deal with both accordingly. This is what the cselect_bits magic is for.
Fixes test 582.
Make GSS authentication work when a curl handle is reused for multiple
authenticated requests, by always setting negdata->state in
output_auth_headers().
Signed-off-by: Marcus Sundberg <marcus.sundberg@aptilo.com>
When using the multi interface and a handle using SFTP was removed very
early on, we would get a segfault because the code assumed data was
present that hadn't yet been set up.
Bug: http://curl.haxx.se/mail/lib-2011-03/0066.html
Reported by: Saqib Ali
Both SFTP and SCP are protocols that need to shut down stuff properly
when the connection is about to get torn down. The primary effect of
not doing this shows up as memory leaks (when using SCP or SFTP with the
multi interface).
This is one of the problems detected by test 582.
As we know how much to send, we can and should stop once we've sent that
much data as it avoids having to rely on other mechanisms to detect the
end.
This is one of the problems detected by test 582.
Reported by: Henry Ludemann <misc@hl.id.au>
When using the multi_socket API to do SFTP upload, it is important that
we set a quick expire when leaving the SSH_SFTP_UPLOAD_INIT state as
there's nothing happening on the socket so there's no read or write to
wait for, but the next libssh2 API function needs to be called to get
the ball rolling.
This is one of the problems detected by test 582.
Reported by: Henry Ludemann <misc@hl.id.au>
All C and H files now (should) feature the proper project curl source
code header, which includes basic info, a copyright statement and some
basic disclaimers.
CyaSSL (available from git@github.com:cyassl/cyassl.git) has been
added to the SSL abstraction layer.
To test:
1) git CyaSSL sources
2) autoreconf -i
3) ./configure --disable-static
4) make
5) sudo make install
6) autoreconf -i
7) git curl sources (and this patch)
8) ./configure --disable-shared --with-cyassl --without-ssl --enable-debug
9) make
10) normal testing
Please send questions or comments to todd@yassl.com .
libssh2_knownhost_readfile() returns a negative value on error or
otherwise the number of parsed known hosts - this was previously not
documented correctly in the libssh2 man page for the function.
Bug: http://curl.haxx.se/mail/lib-2011-02/0327.html
Reported by: murat
Removed the "netrc_debug" keyword replaced with --netrc-file additions.
Removed the debug code from Curl_parsenetrc as it is superseeded by
--netrc-file.
After a request timed out, the connection wasn't properly closed and
wasn't prevented from getting re-used, so subsequent transfers could
still mistakenly get to use the previously aborted connection.
When failing to connect the protocol during the CURLM_STATE_PROTOCONNECT
state, Curl_done() has to be called with the premature flag set TRUE as
for the pingpong protocols this can be important.
When Curl_done() is called with premature == TRUE, it needs to call
Curl_disconnect() with its 'dead_connection' argument set to TRUE as
well so that any protocol handler's disconnect function won't attempt to
use the (control) connection for anything.
This problem caused the pingpong protocols to fail to disconnect when
STARTTLS failed.
Reported by: Alona Rossen
Bug: http://curl.haxx.se/mail/lib-2011-02/0195.html
Introducing a few CURL_SOCKOPT* defines for convenience. The new
CURL_SOCKOPT_ALREADY_CONNECTED signals to libcurl that the socket is to
be treated as already connected and thus it will skip the connect()
call.
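For example, a sockopt callback for a socket the application connected
itself (typically paired with a CURLOPT_OPENSOCKETFUNCTION callback
that hands over that socket):

  static int sockopt_cb(void *clientp, curl_socket_t s,
                        curlsocktype purpose)
  {
    (void)clientp; (void)s; (void)purpose;
    /* tell libcurl to skip its own connect() on this socket */
    return CURL_SOCKOPT_ALREADY_CONNECTED;
  }

  /* ... */
  curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);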
It turns out some systems rely on gmtime or gmtime_r being defined
already in the system headers, and thus my "precaution" redefining of
them only caused trouble. They are now removed.
On second thought, I think CURLE_TLSAUTH_FAILED should be eliminated. It
was only being raised when an internal error occurred while allocating
or setting the GnuTLS SRP client credentials struct. For TLS
authentication failures, the general CURLE_SSL_CONNECT_ERROR seems
appropriate; its error string already includes "passwords" as a possible
cause. Having a separate TLS auth error code might also cause people to
think that a TLS auth failure means the wrong username or password was
entered, when it could also be a sign of a man-in-the-middle attack.
When the callback returns an error, this function must make sure to return
CURLE_ABORTED_BY_CALLBACK properly and not CURLE_OK as before to allow the
callback to properly abort the operation.
The main() function has not been updated for some time and is out of
sync with the code. The code is now tested by several test cases so
there is no need for a separate code path.
Instead of polluting many places with #ifdefs, we create a single place
for this function, and also check return code properly so that a NULL
pointer returned won't cause problems.
The official Mozilla page at http://www.mozilla.org/projects/security/certs/
points out a new place as the "proper" place to get Mozilla's CA certs from
so this script is now updated to use that instead.
Reported by: Daniel Mentz
The code in the toofast state needs to first recalculate the values
before using them again, since it may have been a while since it last
did so by the time it reaches this point.
This will be used by file_do() and Curl_readwrite() as a unified method
of checking to see if a remote document meets the supplied
CURLOPT_TIMEVALUE and CURLOPT_TIMECONDITION.
Signed-off-by: Dave Reisner <d@falconindy.com>
When this callback is called due to the destruction of the ares handle,
the connection pointer passed in as an argument may no longer point to
valid data, and this function doesn't need to do anything with it
anyway so we make sure it doesn't.
Bug: http://curl.haxx.se/mail/lib-2011-01/0333.html
Reported by: Vsevolod Novikov
The HTTP parser allocated memory on each received Location: header
without properly freeing old data. Starting now, the code only considers
the first Location: header and will blissfully ignore subsequent ones.
Bug: http://curl.haxx.se/bug/view.cgi?id=3165129
Reported by: Martin Lemke
... and update the curl.1 and curl_easy_setopt.3 man pages such that
they do not suggest to use an OpenSSL utility if curl is not built
against OpenSSL.
Bug: https://bugzilla.redhat.com/669702
The idea that the protocol and socktype are part of name resolving in
the libc functions is nuts. We keep the name resolver functions assuming
TCP/STREAM and we make sure that when we want to connect to a UDP
service we use the correct UDP/DGRAM set instead. This bug occurred
because the ->protocol field was not always set correctly.
This bug was only affecting ipv6-disabled non-cares non-threaded builds.
Bug: http://curl.haxx.se/bug/view.cgi?id=3154436
Reported by: "dperham"
When configure --enable-debug has been used, all files in lib/ are now
built twice and a separate static library crafted for unit-testing will
be linked. The unit tests in the tests/unit subdir will use that
library.
Since some systems don't have PATH_MAX and it isn't that clever to
assume a fixed maximum path length, the code now allocates buffer space
instead of using stack.
Reported by: Samuel Thibault
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=608521
Sending "pwd" as a QUOTE command only sent the reply to the
DEBUGFUNCTION. Now it also sends an FTP-like header to the header
callback to allow similar operations as with FTP, and apps can re-use
the same parser.
When built IPv6-enabled, we could do Curl_done() with one of the two
resolves having returned already, so when ares_cancel() is called the
resolve callback ends up doing funny things (sometimes resulting in a
segfault) since it would try to actually store the previous resolve even
though we're shutting down the resolve.
This bug was introduced in commit 8ab137b2bc so it hasn't been
included in any public release.
Bug: http://curl.haxx.se/bug/view.cgi?id=3145445
Reported by: Pedro Larroy
Providing multiple dots in a series in the domain field (domain=..com)
could trick the cookie engine into wrongly accepting the cookie,
believing it to be fine. Since the tail matching would then match all
.com sites, the cookie would then be sent to all of them.
The code now requires at least one letter between each dot for them to be
counted. Edited test case 61 to verify this.
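The added rule essentially rejects consecutive dots; a sketch (not the
exact cookie-engine code):

  /* refuse domain strings such as "..com": require at least one
     character between any two dots */
  static int domain_dots_ok(const char *domain)
  {
    const char *p;
    for(p = domain; *p; p++)
      if(p[0] == '.' && p[1] == '.')
        return 0;  /* two dots in a row: reject the cookie */
    return 1;
  }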
When using the multi interface and connecting to a host name that
resolves to multiple IP addresses, there was no logic that made it
continue to the next IP if connecting to the first address times
out. This is now corrected.
The info about pipe status and expire cleared is clearly debug-related
and not anything mere mortals will or should care about, so it is now
ifdef'ed DEBUGBUILD
Similar to what is done already for RCPT TO, the code now checks for and
adds angle brackets (<>) around the email address that is provided for
CURLOPT_MAIL_FROM unless the app has done so itself.
Make sure that Curl_cache_addr() errors are propagated to callers of
loadhostpairs().
(this loadhostpairs function caused a scan-build warning due to the
'dns' variable getting assigned but never used)
Doing curlx_strtoofft() on the size just to figure out the end of it
caused a compiler warning since the result wasn't used, and was also a
bit of a waste.
Since the original `conn' pointer was used after the `connectdata' it
points to had been closed/cleaned up by Curl_reconnect_request, it
caused a crash. We must make sure to use the newly created connection
instead!
URL: http://curl.haxx.se/mail/lib-2010-12/0202.html
Make the c-ares resolver code ask for both IPv4 and IPv6 addresses when
IPv6 is enabled.
This is a workaround for the missing ares_getaddrinfo() and is a lot
easier to implement.
Note that as long as c-ares returns IPv4 addresses when IPv6 addresses
were requested but missing, this will cause a host's IPv4 addresses to
occur twice in the DNS cache.
URL: http://curl.haxx.se/mail/lib-2010-12/0041.html
The SSL_SERVER_VERIFY_LATER bit in the ssl_ctx_new() call allows the
code to verify the peer certificate explicitly after the handshake and
then the "data->set.ssl.verifypeer" option works.
The public axTLS header (at least as of 1.2.7) redefines the memory
functions. We #undef those again immediately after the public header to
limit the damage. This should be fixed in axTLS.
Failed HTTPS tests: 301, 306, 311, 312, 313, 560
311, 312 need more detailed error reporting from axTLS.
313 relates to CRL, which hasn't been implemented yet.
Added axTLS to autotool files and glue code to misc other files.
axtls.h maps SSL API functions, but may change.
axtls.c is just a stub file and will definitely change.
The function that checks if pipelining is possible now requires the HTTP
bit to be set so that it doesn't mistakenly try to do it for other
protocols.
Bug: http://curl.haxx.se/mail/lib-2010-12/0152.html
Reported by: Dmitri Shubin
The generic timeout code must not check easy handles that are already
completed. Moving them to completed (again) in there risked decreasing
the number of alive handles again and thus it could go negative.
This regression bug was added in 7.21.2 in commit ca10e28f06
ossl_connect_common() now checks whether or not 'struct
connectdata->state' equals 'ssl_connection_complete' and if so, will
return CURLE_OK with 'done' set to 'TRUE'. This check prevents
ossl_connect_common() from creating a new ssl connection on an existing
ssl session which causes openssl to fail when it tries to parse an
encrypted TLS packet since the cipher data was effectively thrown away
when the new ssl connection was created.
Bug: http://curl.haxx.se/mail/lib-2010-11/0169.html
It helps to prevent a hangup with some FTP servers in case the idle
session timeout has been exceeded. But it may be useful also for other
protocols
that send any quit message on disconnect. Currently used by FTP, POP3,
IMAP and SMTP.
When looping in this function and checking for the timeout being
expired, it was not updating the reference time when calculating the
timediff since the previous round, which made it think each subsequent
loop had taken longer than it actually did.
I also modified the function to use the generic Curl_timeleft() function
instead of the custom logic.
Bug: http://curl.haxx.se/bug/view.cgi?id=3112579
Ensure that spurious results from the system's getaddrinfo() are not
propagated by Curl_getaddrinfo_ex() into the library.
Also ensure that the ai_addrlen member of Curl_getaddrinfo_ex()'s output
linked list of Curl_addrinfo structures has the appropriate
family-specific address size.
On Windows, translate WSAGetLastError() to errno values as GNU
TLS does it internally, too. This is necessary because send() and
recv() on Windows don't set errno when they fail but GNU TLS
expects a proper errno value.
Bug: http://curl.haxx.se/bug/view.cgi?id=3110991
When no timeout is set, we call the socket_ready function with a timeout
value of 0 during handshake, which makes it loop too much/fast in this
function. It also made this function return CURLE_OPERATION_TIMEDOUT
wrongly on a slow handshake.
However, the particular bug report that highlighted this problem is not
solved by this fix, as this fix only makes the more proper error get
reported instead.
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=594150
Reported by: Johannes Ernst
While changing Curl_sec_read_msg to accept an enum protection_level
instead of an int, I went ahead and fixed the usage of the associated
fields.
Some code was assuming that prot_clear == 0. Fixed those to use the
proper value. Added assertions prior to any code that would set the
protection level.
This is the advised way of checking for errors in the GSS-API RFC.
Also added some '\n' to the error messages so that they are not mixed
with other output.
The IP version choice was previously only in the UserDefined struct
within the SessionHandle, but since we sometimes alter that option
during a request we need to have it on a per-connection basis.
I also moved more "init conn" code into the allocate_conn() function
which is designed for that purpose more or less.
CURLOPT_RESOLVE is a new option that sends along a curl_slist with
name:port:address sets that will populate the DNS cache with entries so
that requests can be "fooled" into using another host than what
otherwise would've been used. Previously we've encouraged the use of
Host: for
that when dealing with HTTP, but this new feature has the added bonus
that it allows the name from the URL to be used for TLS SNI and server
certificate name checks as well.
This is a first change. Surely more will follow to make it decent.
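Usage from application code looks like this (host and address
illustrative; error handling omitted):

  /* entries use the HOST:PORT:ADDRESS format */
  struct curl_slist *dns = NULL;
  dns = curl_slist_append(dns, "example.com:443:127.0.0.1");
  curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);
  /* the URL keeps the real name, so TLS SNI and the certificate
     name check still use it */
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* ... perform the transfer, then: */
  curl_slist_free_all(dns);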
If the query result has a binary attribute, the binary attribute is
base64 encoded. But all following non-binary attributes are also base64
encoded, which is wrong.
This is a test (LDAP server is public).
curl
ldap://x500.bund.de:389/o=Bund,c=DE?userCertificate,certificateSerialNumber?sub
?cn=*Woehleke*
If you use a custom Host: name in a request to an SSL server, libcurl
will now use that given name when it verifies the server certificate to
be correct rather than using the host name used in the actual URL.
When given a custom host name in a Host: header, we can use it for
several different purposes other than just cookies, so we rename it and
use it for SSL SNI etc.
Some FTP servers (e.g. Pure-ftpd) end up hanging if we close the data
connection before transferring all the requested data. If we send ABOR
in that case, it prevents the server from hanging.
Bug: https://bugzilla.redhat.com/643656
Reported by: Pasi Karkkainen, Patrick Monnerat
These haven't worked in at least 8 years due to missing source
files, and most active RiscOS developers these days apparently
cross-compile anyway.
Signed-off-by: James Bursa <james@zamez.org>
In libssh2 1.2.8, libssh2_session_handshake() replaces
libssh2_session_startup() to fix the previous portability problem with
the socket type that was too small for win64 and thus easily could cause
crashes and more.
It is a bad idea to use the public prefix used by another library and
now we realize that libssh2 introduces a symbol in the upcoming version
1.2.8 that conflicts with our static function named libssh2_free.
When failing to build form post due to an error, the code now does a
proper failf(). Previously libcurl would report an error like "failed
creating formpost data" when a file couldn't be opened, which was not
easy for users to figure out.
I also lowercased a function name to be more curl-style and removed
some unnecessary code.
The URL parser got a little stricter as it now considers a ? to be a
host name divider so that slightly sloppier URLs work too. The problem
that made me do this change was the reported problem with a URL like:
www.example.com?email=name@example.com This form of URL is not really a
legal URL (due to the missing slash after the host name) but is widely
accepted by all major browsers and libcurl also already accepted it; it
was just the '@' letter that triggered the problem now.
The side-effect of this change is that now libcurl no longer accepts the
? letter as part of user-name or password when given in the URL, which
it used to accept (and is tested in test 191). That letter is however
mentioned in RFC3986 to be required to be percent encoded since it is
used as a divider.
Bug: http://curl.haxx.se/bug/view.cgi?id=3090268
In order to avoid, for example, having the pingpong protocols issue
STARTTLS (or equivalent) even though there's no SSL support built in.
Reported by: Sune Ahlgren
Bug: http://curl.haxx.se/mail/archive-2010-10/0045.html
As the change in 5f0ae7a062 added a precaution against negative
file sizes that for some reason managed to get returned, this change now
introduces the same check at the second place in the code where the file
size from the libssh2 stat call is used.
This check might not be suitable for a 32 bit curl_off_t, but libssh2.h
assumes long long to work and to be 64 bit so I believe such a small
curl_off_t will be very unlikely to occur in the wild.
Renamed SDK_* to NDK_*; made NDK_* defines overridable from the
environment; removed the now obsolete YACC macro;
moved some curl_config.h defines to IPv6 section since they
are only needed when IPv6 is enabled - this makes libcurl compile
with older NDKs too which were not IPv6-aware.
We forgot to release the buffer passed to gss_init_sec_context.
The previous logic was difficult to read as we were reusing the same
variable (gssbuf) for both the input buffer and the output buffer. Split
the logic into two variables to better underline which needs to be
released.
Also made the code break at 80 columns.
This fixes a memory leak related to the GSS-API code.
Added krb5_init and krb5_end functions. Also removed a work-around for
the lack of proper initialization of the GSS-API context.
It was pointed out that the special case libcurl did for 416 was
incorrect. 416 is not really different from other errors, so the
response body must be handled like for other errors/http responses.
Reported by: Chris Smowton
Bug: http://curl.haxx.se/bug/view.cgi?id=3076808
It is still not clarified exactly why this happens, but libssh2
sometimes reports a negative file size for the remote SFTP file and that
deeply confuses libcurl (or crashes it) so this precaution is added to
avoid badness.
Reported by: Ernest Beinrohr
Bug: http://curl.haxx.se/bug/view.cgi?id=3076430
Remove a leak seen on Kerberos/MIT (gss_OID is copied internally and
we were leaking it). Now we just pass NULL as advised in RFC2744.
|tmp| was never set back to buf->data.
Cleaned up Curl_sec_end to take into account failure in Curl_sec_login
(where conn->mech would be NULL but not conn->app_data or
conn->in_buffer->data).
Following a change in the way socket handlers are registered, the
custom recv and send methods were conditionally registered.
We need to register them every time to handle the ftp security
extensions.
Re-added the clear text handling in sec_recv.
Curl_sec_login was returning the opposite result from what the code in
ftp.c was expecting. Simplified the return code (using a CURLcode) to
see more clearly what is going on.
The functions Curl_disconnect() and Curl_done() are both used within the
scope of a single request so they cannot be allowed to use
Curl_expire(... 0) to kill all timeouts as there are some timeouts that
are set before a request that are supposed to remain until the request
is done.
The timeouts are now instead cleared at curl_easy_cleanup() and when the
multi state machine changes a handle to the complete state.
The date format in RFC822 allows the seconds part of HH:MM:SS to be
left out, but this function didn't allow it. This change also includes a
modified test case that makes sure that this now works.
Reported by: Matt Ford
Bug: http://curl.haxx.se/bug/view.cgi?id=3076529
tftpd-hpa has a bug where it will send an incorrect ack when the block
counter wraps and tftp options have been sent. Work around that by
accepting an ack for 65535 when we're expecting one for 0.
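A sketch of the check (variable names illustrative):

  /* the received block normally must match what we expect, but also
     accept an ACK for 65535 when expecting block 0, to cope with
     tftpd-hpa's wrap bug */
  if(rblock != expected_block &&
     !(rblock == 65535 && expected_block == 0)) {
    /* genuinely unexpected block number: handle as error */
  }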
- |fd| is now a curl_socket_t and |len| a size_t to avoid conversions.
- Added 2 FIXMEs about the 2 unsigned -> signed conversions.
- Included 2 minor changes to Curl_sec_end.
- Renamed it to do_sec_send as it is the function doing the actual
transfer.
- Do not return any values as no one was checking it and it never
reported a failure (added a FIXME about checking for errors).
- Renamed the variables to make their use more specific.
- Removed some casts (int -> curl_socket_t, ...)
- Avoid doing the htonl <-> ntohl dance twice by caching the 2 results.
- Renamed the variables to better match their intent.
- Unified the |decoded_len| checks.
- Added some FIXMEs to flag some improvement that did not go in this
change.
- Removed sec_prot_internal as it is now inlined in the function (this removed
a redundant check).
- Changed the prototype to return an error code.
- Updated the method to use the new ftp_send_command function.
- Added a level_to_char helper method to avoid relying on the compiler's
bounds checks. This defaults to the maximum security we have in case of
a wrong input.
Tighten the type of the |data| parameter to avoid a cast. Also made
it const as we should not modify it.
Added a DEBUGASSERT on the size to be written while changing it.
To do so, made block_read call Curl_read_plain instead of read.
While changing them renamed block_read to socket_read and sec_get_data
to read_data to better match their function.
Also fixed a potential memory leak in block_read.
Obviously, browsers ignore a colon without a following port number. Both
Firefox and Chrome just remove the colon for such URLs. This change
does not remove the colon for URLs sent over an HTTP proxy, so we should
consider doing that change as well.
Reported by: github user 'kreshano'
curl_easy_duphandle() was not properly duping the ares channel. The
ares_dup() function was introduced in c-ares 1.6.0 so by starting to use
this function we also raise the bar and require c-ares >= 1.6.0
(released Dec 9, 2008) for such builds.
Reported by: Ning Dong
Bug: http://curl.haxx.se/mail/lib-2010-08/0318.html
If built without HTTP or proxy support it would cause a compiler warning
due to the unused variable. I moved the declaration of it into the only
scope it is used.
bool_false is the internal name used in the setup_once.h definition
we fall back to for non-C99 non-stdbool systems; it's not the actual
name to use in assignments (we use bool_false, bool_true there to
avoid global namespace problems, see the comment in setup_once.h).
The correct C99 value to use is 'false', but let's use FALSE as
used elsewhere when assigning to bits.close. FALSE is set equal
to 'false' in setup_once.h when possible.
This fixes a build problem on C99 targets.
As of curl-7.21.1 tunnelling ldap queries through HTTP Proxies is not
supported. Actually if the --proxytunnel command-line option (or the
equivalent CURLOPT_HTTPPROXYTUNNEL) is used for ldap queries like
ldap://ldap.my.server.com/... You are unable to successfully execute the
query. In fact ldap_*_bind is executed directly against the ldap server
and proxy is totally ignored. This is true for both openLDAP and
Microsoft LDAP API.
Step to reproduce the error:
Just launch "curl --proxytunnel --proxy 192.168.1.1:8080
ldap://ldap.my.server.com/dc=... "
This fix adds an invocation to Curl_proxyCONNECT against the provided
proxy address and on successful "CONNECT" it tunnels ldap query to the
final ldap server through the HTTP proxy. As far as I know Microsoft
LDAP APIs don't permit tunnelling in any way so the patch provided is
for OpenLDAP only. The patch has been developed against OpenLDAP 2.4.23
and has been tested with Microsoft ISA Server 2006 and works properly
with basic, digest and NTLM authentication.
Rodric provided an awesome recipe that proved libcurl didn't time out at
the requested time - it instead often timed out at [connect time] +
[timeout time] instead of the documented and intended [timeout time]
only. This bug was due to the code using the wrong base offset when
comparing against "now". I also took the opportunity to simplify the
code by properly using the generic helper function for this:
Curl_timeleft.
Reported by: Rodric Glaser
Bug: http://curl.haxx.se/bug/view.cgi?id=3061535
As this function uses return code 0 to mean that there is no timeout, it
needs to check that it doesn't return a time left value that is exactly
zero. It could lead to libcurl doing an extra 1000 ms select() call and
thus not timing out as accurately as it should.
I fell over this bug when working on bug 3061535 but this fix does
not correct that problem alone, although this is a problem that needs to
be fixed.
Reported by: Rodric Glaser
Bug: http://curl.haxx.se/bug/view.cgi?id=3061535
The timeout is set for the connect phase already at the start of the
request so we should not add a new one, and we MUST not set expire to 0
as that will remove any other potentially existing timeouts.
The code reading chunked encoding attempts to rewind the stream if it
has read more data than the chunky parser consumes. The rewinding can
fail and it will then cause an error. This change now makes the rewinding
only happen if pipelining is in use - as that's the only time it really
needs to be done.
Bug: http://curl.haxx.se/mail/lib-2010-08/0297.html
Reported by: Ron Parker
Curl_getconnectinfo() is changed to return a proper curl_socket_t for
the last socket so that it'll work more portably (and cause less
compiler warnings).
Add a timeout check for handles in the state machine so that they will
time out in all states regardless of what actions may or may not
happen.
Fixed a bug in socket_action introduced recently when looping over timed
out handles: it wouldn't assign the 'data' variable and thus it wouldn't
properly take care of handles.
In the update_timer function, the code now checks if the timeout has
been removed and then it tells the application. Previously it would
always let the remaining timeout(s) just linger to expire later on.
Each easy handle has a list of timeouts, so as soon as the main timeout
for a handle expires, we must make sure to get the next entry from the
list and re-add the handle to the splay tree.
This was attempted previously but was done poorly in my commit
232ad6549a.
When a new transfer is about to start we now set the proper timeouts to
expire for the multi interface if they are set for the handle. This is a
follow-up bugfix to make sure that easy handles timeout properly when
the times expire and the multi interface is used. This also improves
curl_multi_timeout().
The fix for the busyloop really only is a temporary work-around. It
causes BLOCKING behavior, which is a NO-NO. This function should rather
be split up in a do and a doing piece where the pieces that aren't
possible to send now will be sent in the doing function repeatedly until
the entire request is sent.
HTTP allows a server to send trailing headers after all the chunks
have been sent WITHOUT signalling their presence in the first response
headers. The "Trailer:" header is only a SHOULD there, and as we need to
handle the situation even without that header I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
The script works exactly the same as the Perl one except for one thing:
when the text descriptions generated with openssl are included, the
md5 fingerprints are missing; it seems openssl has either a bug or
a feature which prints the md5 fingerprint output to stdout instead
of writing it to the specified file; this script could do the same
as the Perl script does (redirect stdout into a file) but this
makes the script take double the time because it needs to launch
cmd.exe 140 times (for each openssl call). So I think for now we just
omit the md5 fingerprints, and see if openssl will be fixed.
I fell over this bug report that mentioned that libcurl could wrongly
send more than one complete message at the end of a transfer. Reading
the code confirmed this, so I've added a new multi state to make it not
happen. The mentioned bug report was made by Brad Jorsch but is (oddly
enough) filed in Debian's bug tracker for the "wmweather+" tool.
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=593390
There's an error in http_negotiation.c where it mistakenly uses only
userpwd even for proxy requests. Ludek provided a patch, but I decided
to write the fix slightly differently, using his patch as inspiration.
Reported by: Ludek Finstrle
Bug: http://curl.haxx.se/bug/view.cgi?id=3046066
When detecting that the send or recv speed is above the threshold, the
multi interface changes state to TOOFAST, and previously there was no
timeout set that would force a recheck; it would rely on the application
to somehow call libcurl anyway. This now sets a timeout for a suitable
future time to check again if the average transfer speed is then below
the threshold.
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the closest in time will be the one used as the primary
timeout for the handle and will be used for the splay tree (which sorts
and lists all handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
Instead of looping over all attached easy handles, this now keeps a list
of messages in the multi handle. It allows curl_multi_info_read() to
perform in O(1) no matter how many easy handles are handled. This is
of importance since this function may be polled very frequently by apps
using the multi interface.
When the progress callback is called during the TCP connection, an error
return would accidentally not abort the operation as intended but would
instead be counted as a failure to connect to that particular IP and
libcurl would just continue to try the next. I made singleipconnect()
and trynextip() return CURLcode properly.
Added bonus: it corrected the error code for bad --interface usages,
like tested in test 1084 and test 1085.
Reported by: Adam Light
Bug: http://curl.haxx.se/mail/lib-2010-08/0105.html
Added the -br switch to dynamic builds which fixes the issue I saw
with curl's --version output. Added debug info and symfile for debug
builds to the linker opts. Added the DLL loader for wlink back, but this
time dependent on the wlink version.
Patch posted to the list by malak.jiri AT gmail.com.
The var %MAKEFLAGS is only set in 3 cases: if set as an environment
var or as a macro definition on the command line, and either with the
-u or -ms switch. Since all these cases are unlikely for the average
user it should be safe to only test if %MAKEFLAGS is defined; this
has the benefit that now all other switches can be used again in
addition to the -u which was formerly not possible.
Curl_llist_init is never used outside of llist.c and thus it should be
static. I also removed the protos for Curl_llist_insert_prev and
Curl_llist_remove_next which are functions we removed from llist.c ages
ago.
Test 563 is enabled now and verifies that the combo FTP type=A URL,
CURLOPT_PORT set and proxy work fine. As a bonus I managed to remove the
somewhat odd FTP check in parse_remote_port() and instead converted it
to a better and more generic 'slash_removed' struct field. Checking the
->protocol field isn't right since when an FTP:// URL is sent over an
HTTP proxy, the protocol is HTTP but the URL was handled by the FTP code
and thus slash_removed is set TRUE for this case.
The struct used for storing the message for a completed transfer is now
no longer allocated separately but is kept within the main struct kept
for each easy handle so that we avoid one malloc (and the subsequent
free).
When libcurl internally decided to wait for a 100-continue header, there
was no call to the timeout function so there was no timeout callback
called when the multi_socket API was used and thus applications became
either completely wrong or at least inefficient, depending on how they
handled the situation. We now set a timeout to get triggered.
Reported by: Ben Darnell
Bug: http://curl.haxx.se/bug/view.cgi?id=3039744
libssh2 1.2.6 and later handle >32bit file sizes properly even on 32bit
architectures and we make sure to use that ability.
Reported by: Mikael Johansson
Bug: http://curl.haxx.se/mail/lib-2010-08/0052.html
Simply because the TCP connection might already be established, we
cannot skip the proxy connect procedure. We need to be careful not to
overload more meaning onto the bits.tcpconnect field like this.
With this fix, SOCKS proxies work again when the multi interface is
used. I believe this regression was added with commit 4b351d018e,
released as 7.20.1.
Left todo: add a test case that verifies this functionality that
prevents us from breaking it again in the future!
Reported by: Robin Cornelius
Bug: http://curl.haxx.se/bug/view.cgi?id=3033966
Commit 496002ea1c (released in 7.20.1) broke FTPS when using the
multi interface and OpenSSL was used. The condition for the non-blocking
connect was incorrect.
Reported by: Georg Lippitsch
Bug: http://curl.haxx.se/mail/lib-2010-07/0270.html
Previously the host name buffer was only used if gethostname() exists,
but since we converted that into a curl private function that function
always exists and will be used so the buffer needs to exist for all
cases/systems.
A shared library tests/libtest/.libs/libhostname.so is preloaded in NTLM
test-cases to override the system implementation of gethostname(). It
makes it possible to test the NTLM authentication for an exact match, and
this way test the implementation of MD4 and DES.
If LD_PRELOAD doesn't work, a debug build will also work, as debug
builds are now made to prefer a specific environment variable and will
then return that content as host name instead of the actual one.
Kamil wrote the bulk of this, Daniel Stenberg polished it.
lib/Makefile.Watcom works fine already, for src/Makefile.Watcom we
need first to tweak src/Makefile.inc a bit - therefore the handtweaked
list still exists for now.
- make both libcurl and curl makefiles use register calling convention
(previously libcurl had stack calling convention).
- added include paths to the Watcom headers so it's no longer required
  to set the environment vars for this.
- added -wcd=201 to suppress the compiler warning about unreachable code.
- use macros for all tools, and removed dependency on GNU tools like rm.
- make ipv6 and debug builds controllable via env vars and so make them
  optional instead of default.
- commented WINLDAPAPI and WINBERAPI since they broke with OW 1.8, and
it seems they're not needed (anymore?).
- added a rule for hugehelp.c.cvs so that it will be created when it
  does not already exist - this is required for building from a release
  tarball since there we have no hugehelp.c.cvs, thus compilation broke.
- removed C_ARG creation from lib/Makefile.Watcom and use CFLAGS
directly as done too in src/Makefile.Watcom - this has the benefit
that we will see all active cflags and defines during compile.
- added LINK-ARG to src/Makefile.Watcom in order to better control
linker input.
- a couple of other minor makefile tweaks here and there ...
- added largefile support for Watcom builds to config-win32.h. Not yet
tested if it really works, but should since Win32 supports it.
- added loaddll stuff to speed up builds if supported.
Win64's 32 bit long but 64 bit size_t caused a warning that we avoid
with a typecast. A small whitespace indent fix was also applied.
Reported by: Adam Light
This passes -Werror to gcc when building curl and libcurl,
allowing easy detection of compile warnings.
Signed-off-by: Ben Greear <greearb@candelatech.com>
... since FTP is using it as well, and potentially other protocols!
Also, an #endif CURL_DISABLE_HTTP was incorrectly marked, as it seems to
end the proxy block instead.
The FTP implementation was missing a timestamp reset point, making the
waiting for responses after sending a post-transfer "QUOTE" command not
work as intended. This bug was introduced in 7.20.0
curl_multi_perform has two phases: run through every easy handle calling
multi_runsingle and remove expired timers (timer removal).
If a small timer (e.g. 1-10ms) is set during multi_runsingle, then it's
possible that the timer has passed by when the timer removal runs. The
timer which was just added is then removed. This will potentially cause
the timer list to be empty and cause the next call to curl_multi_timeout
to return -1. Ideally, curl_multi_timeout should return 0 in this case.
One way to fix this is to move the 'struct timeval now = Curl_tvnow();'
to the top of curl_multi_perform. This change does that.
As mentioned in bug report #2956968, the HTTP code wouldn't send the
first empty chunk during the auth negotiation phase of the HTTP request
sending, so the server would wait for data to come and libcurl would
wait for data to arrive... I've made the code not enable chunked
encoding until the auth negotiation is done and thus this scenario
doesn't occur anymore.
Reported by: Sidney San Martín
Bug: http://curl.haxx.se/bug/view.cgi?id=2956968
When curl_multi_remove_handle() is called and an easy handle is returned
to the connection cache held in the multi handle, then we cannot allow
CURLINFO_LASTSOCKET to extract it since that will more or less encourage
that the user uses the socket while it can get used by libcurl again.
Without this fix, we'd get a segfault in Curl_getconnectinfo() trying to
dereference the NULL pointer in 'data->state.connc'.
Bug: http://curl.haxx.se/bug/view.cgi?id=3023840
When configured with '--without-ssl --with-nss', NTLM authentication
now uses NSS crypto library for MD5 and DES. For MD4 we have a local
implementation in that case. More details are available at
https://bugzilla.redhat.com/603783
In order to get it working, curl_global_init() must be called with
CURL_GLOBAL_SSL or CURL_GLOBAL_ALL. That's necessary because NSS needs
to be initialized globally and we do so only when the NSS library is
actually required by protocol. The mentioned call of curl_global_init()
is responsible for creating the initialization mutex.
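In practice that means starting the application like this (a minimal
sketch):

  #include <curl/curl.h>

  int main(void)
  {
    /* CURL_GLOBAL_ALL includes CURL_GLOBAL_SSL, which creates the
       mutex guarding the lazy NSS initialization */
    curl_global_init(CURL_GLOBAL_ALL);
    /* ... transfers ... */
    curl_global_cleanup();
    return 0;
  }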
The NSS initialization scenario was also slightly changed, in
particular the loading of the NSS PEM module. It used to be loaded
always right after the NSS library was initialized. Now the library is
initialized as soon as any SSL or NTLM is required, while the PEM module
is prevented from being loaded until SSL is actually required.
When a hostname resolves to multiple IP addresses and the first one
tried doesn't work, the socket for the second attempt may get dropped on
the floor, causing the request to eventually time out. The issue is that
when using kqueue (as on mac and bsd platforms) instead of select, the
kernel removes the first fd from kqueue when it is closed (in trynextip,
connect.c:503). Trynextip() then goes on to open a new socket, which
gets assigned the same number as the one it just closed. Later in
multi.c, socket_cb is not called because the fd is already in
multi->sockhash, so the new socket is never added to kqueue.
The correct fix is to ensure that socket_cb is called to remove the fd
when trynextip() closes the socket, and again to re-add it after
singleipconnect(). I'm not sure how to cleanly do that, but the attached
patch works around the problem in an admittedly kludgy way by delaying
the close to ensure that the newly-opened socket gets a different fd.
Daniel's added comment: I didn't spot a way to easily do a nicer fix so
I've proceeded with Ben's patch.
Bug: http://curl.haxx.se/bug/view.cgi?id=3017819
Patch by: Ben Darnell
For example the libssh2 based functions return other negative
values than -1 to signal errors and it is important that we catch
them properly. Right before this, various failures from libssh2
were treated as negative download amounts which caused havoc.
My additional call to Curl_pgrsUpdate() would sometimes happen even
though there's no connection (left), so a NULL pointer would get passed,
causing a segfault.
1) no need to call the progress function twice when in the
CURLM_STATE_TOOFAST state.
2) Make sure that the progress callback's return code is
acknowledged when used
As long as no error is reported, the progress function can get
called. This may be a little TOO often so we should keep an eye
on this and possibly make this conditional somehow.
Older unixes want an 'int' instead of 'size_t' as the 3rd
argument, so before this change it would cause warnings such as:
There is an implicit conversion from "unsigned long" to "int";
rounding, sign extension, or loss of accuracy may result.
Was seeing spurious SSL connection aborts using libcurl and
OpenSSL. I tracked it down to uncleared error state on the
OpenSSL error stack - the attached patch deals with that.
Rough idea of problem:
Code that uses libcurl calls some library that uses OpenSSL but
doesn't clear the OpenSSL error stack after an error.
ssluse.c calls SSL_read which eventually gets an EWOULDBLOCK from
the OS and returns -1 to indicate an error.
ssluse.c calls SSL_get_error. First thing, SSL_get_error calls
ERR_get_error to check the OpenSSL error stack, finds an old
error and returns SSL_ERROR_SSL instead of SSL_ERROR_WANT_READ or
SSL_ERROR_WANT_WRITE.
ssluse.c returns an error and aborts the connection
Solution:
Clear the OpenSSL error stack before calling an SSL_* operation if
we're going to call SSL_get_error afterwards - see the sketch after
the notes below.
Notes:
This is much more likely to happen with multi because it's easier
to intersperse other calls to the OpenSSL library in the same
thread.
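A sketch of the resulting pattern (function and buffer handling
illustrative):

  #include <openssl/ssl.h>
  #include <openssl/err.h>

  static int read_some(SSL *ssl, char *buf, int len)
  {
    int nread, err;
    /* drop stale errors other code left on the stack so that
       SSL_get_error() judges only this SSL_read() call */
    ERR_clear_error();
    nread = SSL_read(ssl, buf, len);
    if(nread <= 0) {
      err = SSL_get_error(ssl, nread);
      if(err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
        return 0;   /* not fatal: retry when the socket is ready */
      return -1;    /* real failure */
    }
    return nread;
  }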
Enable OpenLDAP support for cygwin builds. This support was disabled back
in 2008 due to incompatibilities between OpenSSL and OpenLDAP headers.
cygwin's OpenSSL 0.9.8l and OpenLDAP 2.3.43 versions on cygwin 1.5.25
allow building an OpenLDAP enabled libcurl supporting back to Windows 95.
Remove non-functional CURL_LDAP_HYBRID code and references.
Jason McDonald posted bug report #3006786 when he found that the
SFTP code didn't time out properly in several places in the code
even if a timeout was set properly.
Based on his suggested patch, I wrote a different implementation
that I think addressed the issue better and also uses the connect
timeout for the initial part of the SSH/SFTP done during the
"protocol connect" phase.
(http://curl.haxx.se/bug/view.cgi?id=3006786)
Igor Novoseltsev reported a problem with the multi socket API and
using timeouts and timers. It boiled down to a problem with
libcurl's use of GetTickCount() internally to figure out the
current time, while Igor's own application code used another
function call.
It made his app call the socket API timeout function a bit
_before_ libcurl would consider the timeout to trigger, and that
could easily lead to timeouts or stalls in the app. It seems
GetTickCount() in general often has no better resolution than
16ms and switching to the alternative function
QueryPerformanceCounter has its share of problems:
http://www.virtualdub.org/blog/pivot/entry.php?id=106
We address this problem by simply having libcurl treat timers
that have already occurred or will occur within 40ms as subject for
treatment. I'm confident that there are other implementations and
operating systems with similarly inaccurate timer functions so
it makes sense to have this applied generically and I don't believe we
sacrifice much by adding a 40ms inaccuracy on these timeouts.
This makes the LDAP code much cleaner, nicer and in general a
better libcurl citizen. If a new enough OpenLDAP version is
detected, the new and shiny lib/openldap.c code is then used
instead of the old cruft.
Code by Howard, minor cleanups by Daniel.
bool in curl internals is unsigned char and should not be used
to receive return values from functions returning int - this fails
when using IBM VisualAge and Tru64 compilers.
Eric Mertens posted bug #3003705: when we made TFTP use the
correct timeout option when sent to the server (fixed May 18th
2010) it became obvious that libcurl used invalid timeout values
(300 by default while the RFC allows nothing above 255). While of
course it is obvious that as TFTP has worked thus far without
being able to set timeout at all, just removing the setting
wouldn't make any difference in behavior. I decided to still keep
it (but fix the problem) as it now actually allows for easier
(future) customization of the timeout.
(http://curl.haxx.se/bug/view.cgi?id=3003705)
In a normal expression, doing [unsigned short] + 1 will not wrap
at 16 bits so the comparisons and outputs were done wrong. I
added a macro to make sure it gets done right (see the sketch below).
Douglas Kilpatrick filed bug report #3004787 about it:
http://curl.haxx.se/bug/view.cgi?id=3004787
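The problem and a hypothetical fix in miniature:

  #define BLOCKNUM_NEXT(x) (((x) + 1) & 0xffff)  /* illustrative name */

  unsigned short block = 65535;
  /* 'block + 1' promotes to int and yields 65536, never 0, so a
     plain comparison against the 16-bit on-the-wire counter fails */
  unsigned short next = BLOCKNUM_NEXT(block);    /* 0, as intended */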
By undefining a bunch of E* defines that VC10 has started to define
but that we redefine internally to their WSA* alternatives when
building for Windows.