adding unit test for Curl_llist_move, documenting unit-tested functions
in llist.c, changing unit-test to unittest, replacing assert calls with
abort_unless calls
The CURLFORM_STREAM is documented to only insert a file name (and thus
look like a file upload) in the part if CURLFORM_FILENAME is set, but in
reality it always inserted a filename="" and if CURLFORM_FILENAME wasn't
set, it would insert rubbish (or possibly crash).
This is now fixed to work as documented, and test 554 has been extended
to verify this.
Reported by: Sascha Swiercy
Bug: http://curl.haxx.se/mail/lib-2011-06/0070.html
Properly deal with the fact that the last fread() call most probably is
a short read, and when using callbacks in fact all calls can be short
reads. No longer consider a file read done until it returns a 0 from the
read function.
Reported by: Aaron Orenstein
Bug: http://curl.haxx.se/mail/lib-2011-06/0048.html
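For illustration only (not part of the patch), a read callback that
honors this rule might look like the sketch below, assuming an
already-created easy handle 'curl' and an open FILE 'f':

  /* short reads are fine; returning 0 signals the end of the data */
  static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    FILE *f = (FILE *)userp;
    return fread(ptr, 1, size * nmemb, f); /* returns 0 only at EOF */
  }

  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, f);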
If a piece is set to use a callback to get the data, it should not be
treated as data. It unfortunately also requires that curl_easy_perform()
or similar has been used as otherwise the callback function hasn't been
figured out and curl_formget won't know how to get the content.
When closing a connection, the speedchecker's timestamp is now deleted
so that it cannot accidentally be used by a fresh connection on the same
handle when examining the transfer speed.
Bug: https://bugzilla.redhat.com/679709
When a time condition isn't met, so that no body is delivered to the
application even though a 2xx response is being read from the server, we
must close the connection so that it doesn't get re-used in a
completely confused state.
Added test 1128 to verify.
cross-compilation of unit tests static library/programs fails when
libcurl shared library is also built. This might be due to a libtool or
automake issue. In this case we disable unit tests.
When switching to HTTP because a HTTP proxy is being used, the existing
handler is now checked to see if it is already "compatible". This
allows the https handler to remain while other non-http handlers are
redirected.
Bug: http://curl.haxx.se/mail/lib-2011-05/0214.html
Reported by: Jerome Robert
Fix compiler warning: `keycheck' might be used uninitialized in this function.
Fix compiler warning: `keybit' might be used uninitialized in this function.
Introduced the initial setup to allow closesocket callbacks by making
sure sclose() is only ever called from one place in the libcurl source
and still run all test cases fine.
Added tests 1126 and 1127 to verify curl's behaviour when If-Modified-Since
is used and a 200 is returned.
The list of test cases in Makefile.am is now sorted numerically.
Made the public headers checksrc compliant
Removed types.h (it's been unused since April 2004)
Made the root makefile do make in include by default as well, so that
TAGS and the checksrc will work better.
The protocol handler's flags field can now signal that the protocol
requires a password, so that the set_userpass function doesn't need
specific knowledge of which protocols do.
Made several functions static
Made one function defined to nothing when RTSP is disabled to avoid
the #ifdefs in code.
Removed explicit rtsp.h includes
Using 'socks5h' as proxy protocol will make it a
CURLPROXY_SOCKS5_HOSTNAME proxy which is SOCKS5 and asking the proxy to
resolve host names. I found no "standard" protocol name for this.
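For illustration, either of these equivalent setups (proxy address made
up) selects the hostname-resolving SOCKS5 variant:

  /* scheme form in the proxy string */
  curl_easy_setopt(curl, CURLOPT_PROXY, "socks5h://127.0.0.1:1080");

  /* explicit form */
  curl_easy_setopt(curl, CURLOPT_PROXY, "127.0.0.1:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);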
Follow style of GNU layout (cp, mv ...) where options are separated with
comma: -o, --option
Order item alphabetically (by length also): -o, -O, --option
Follow style of GNU layout by moving help related options to the end:
--help, -M, --version
Clarify that the '-', '.', '_' and '~' letters are also not escaped,
since they shouldn't be according to RFC 3986 section 2.3.
This is how this function has behaved since September 2010, commit
5df13c3173.
Introduce an INIT state for the SSH state machine and set libssh2
non-blocking in that so that it is set properly before
libssh2_session_startup() is called.
Bug: http://curl.haxx.se/mail/archive-2011-05/0001.html
As it is already included by curlbuild.h when that exists on the
platform, it was included here superfluously anyway.
Reported by: Dagobert Michelsen
Bug: http://curl.haxx.se/bug/view.cgi?id=3294509
Now use gai_strerror() to get proper error messages when getaddrinfo()
has failed. Detect the function in configure.
Code based on work and suggestions by Jeff Pohlmeyer and Guenter Knauf
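A small sketch of the pattern with the plain POSIX API (the host name
is illustrative):

  #include <sys/socket.h>
  #include <netdb.h>
  #include <stdio.h>
  #include <string.h>

  static void try_resolve(const char *host)
  {
    struct addrinfo hints, *res;
    int rc;
    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    rc = getaddrinfo(host, "80", &hints, &res);
    if(rc)
      /* gai_strerror() turns the code into a readable message */
      fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
    else
      freeaddrinfo(res);
  }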
Improved library search by check_function_exists_concat() macro:
it no longer reverses the list of libraries.
Improved OpenSSL library search: first find zlib, then search for
openssl libraries that may depend on zlib.
For Unix: openssl libraries can now be detected in nonstandard
locations. Supply CMAKE_LIBRARY_PATH to CMake on command line.
Added installation capability (very basic one yet).
When connecting to a socks or similar proxy we do the proxy handshake at
once when we know the TCP connect is completed and we only consider the
"connection" complete after the proxy handshake. This fixes test 564
which is now no longer considered disabled.
Reported by: Dmitri Shubin
Bug: http://curl.haxx.se/mail/lib-2011-04/0127.html
The make target checksrc now works in the root makefile and in both the
src and lib directories.
It is also run automatically on "all" if configure --enable-debug was
used.
It now scans multiple files and outputs an error+warning count summary
at the end in case at least one was detected.
-D can be used to specify in which dir the files are located
The script now scans for conditions that start with a space in
if/while/for lines.
For now provide prototypes instead of including the
non-standard normalisation.h which is only available in the
"Internationalized Domain Names Mitigation APIs" download.
asyn-ares.c and asyn-thread.c are two separate backends that implement
the same (internal) async resolver API for libcurl to use. Backend is
specified at build time.
The internal resolver API is defined in asyn.h for asynch resolvers.
Fixed indents, coding conventions and white space edits.
Modified the c-ares completion callback function to again NOT read the
conn data when the ares handle is being taken down as then it may have
been freed already.
For now we directly import the Idn* symbols with the linker;
an upcoming release of OWC will have these added to the import
lib normaliz.lib, and prototypes are added to winnls.h.
Make sure that files are closed before the post quote commands run; if
they operate on the just-transferred file they could otherwise easily
fail.
Patch by: Rajesh Naganathan (edited)
libcurl failed to check the correct struct for HTTPS after CONNECT was
issued to the proxy, so it didn't do the TLS handshake and subsequently
failed the connection. A regression released in 7.21.5 (introduced
around commit 8831000bc0).
Bug: http://curl.haxx.se/mail/lib-2011-04/0134.html
Reported by: Josue Andrade Gomes
It is now possible to use any combination of features without
having to 1st add makefile targets to the main makefile. The
main makefile now passes the 'mingw32-feat1-feat2' as var CFG,
and the ./[lib|src]/Makefile.m32 parses the CFG var to determine
the features to be enabled.
changed windows.h include to system header;
changed obsolete 2nd check for str_w to str_utf8 in order to catch
malloc() failure and avoid a free(NULL);
changed calls to GetLastError() to void to kill unused var compiler
warnings;
moved one call to GetLastError() into the else case so that it's only
called when WideCharToMultiByte() really fails.
Added CURLOPT_TRANSFER_ENCODING as the option to set to request Transfer
Encoding in HTTP requests (if built zlib enabled). I also renamed
CURLOPT_ENCODING to CURLOPT_ACCEPT_ENCODING (while keeping the old name
around) to reduce the confusion now that we have two encoding options for
HTTP.
--tr-encoding is now the new command line option for curl to request
this, and thus I updated the test cases accordingly.
When TE: is inserted in the request, we must add a "Connection: TE" as
well to be HTTP 1.1 compliant. If a custom Connection: header is passed
in, we must use that and only append TE to it. Test case 1125 verifies
TE: + custom Connection:.
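A hedged sketch of an application combining the two (the header value
is made up, 'curl' is an existing easy handle):

  struct curl_slist *hdrs = NULL;
  hdrs = curl_slist_append(hdrs, "Connection: keep-alive");
  /* request compressed Transfer-Encoding; libcurl adds the TE: header
     and appends "TE" to the custom Connection: header */
  curl_easy_setopt(curl, CURLOPT_TRANSFER_ENCODING, 1L);
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);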
Since this struct member is used in the code to determine what and how
to decode automatically and since it is now also used for compressed
Transfer-Encodings, I renamed it to the more suitable 'auto_decoding'.
Transfer-Encoding differs from Content-Encoding in a few subtle ways,
but primarily it concerns the transfer only and not the content, so
when discovered to be compressed we know we have to uncompress it.
Compressed transfers will only arrive in a response after we have
requested them with the appropriate TE: header.
Test case 1122 and 1123 verify.
curl-config --version didn't output the correct version string (bug
introduced in commit 0355e33b5f), and unfortunately the test
case 1022 that was supposed to check for this was broken.
This change fixes the test to detect this problem and it fixes the
output.
Bug: http://curl.haxx.se/bug/view.cgi?id=3288727
As we're closing in on the release, I give up on the remaining ones but
I leave them in here for now to try to fix for next release.
I removed the 281 issue about warnings from the static analyzer scans,
as they seem to be mostly false positives at this point.
The script didn't properly add the -lssh2 link option when it enabled
libssh2 linking where pkg-config isn't found.
Reported by: Saqib Ali
Bug: http://curl.haxx.se/mail/lib-2011-04/0054.html
When checking if an existing RTSP connection is alive or not, the
checkconnection function might be called with a SessionHandle pointer
being NULL and then referenced causing a crash. This happened only using
the multi interface.
Reported by: Tinus van den Berg
Bug: http://curl.haxx.se/bug/view.cgi?id=3280739
In case a client certificate is used, invalidate SSL session cache
at the end of a session. This forces NSS to ask for a new client
certificate when connecting second time to the same host.
Bug: https://bugzilla.redhat.com/689031
* Rename the object directory from 'objs' to 'BCC_obj' to be in
sync with my previous patch for lib/Makefile.b32.
* Turn off these warnings to keep the build totally silent (with CBuilder-6
that is).
-w-inl 8026 Functions X are not expanded inline.
-w-pia 8060 Possibly incorrect assignment
-w-pin 8061 Initialization is only partially bracketed
(same added in src/Makefile.b32)
* $(MKDIR) and $(RMDIR) have been replaced with the shell-commands 'md'
and 'rd'. When having MinGW/Msys programs 'mkdir.exe' and 'rmdir.exe' in
$PATH, this confuses Borland's make and the result (the cleaning etc.) would
not be as expected.
* Removed the preprocessing step; no need for PP_CMD and the .int files.
curl.exe builds fine w/o and the makefile gets simpler.
* Added a target for creating a compressed hugehelp.c if WITH_ZLIB is defined.
It assumes groff, gzip and perl are available if such an "advanced"
user requests it. Okay? BTW. My groff and Perl need unix-slashes ('/').
Other perls should handle both forms ('/' and '\').
* Rename the object directory from 'objs' to 'BCC_obj'. I feel
it should be named properly. Ref. Makefile.Watcom where it's called
'WC_Win32.obj'.
* Turn off these warnings to keep the build totally silent (with CBuilder-6
that is).
-w-inl 8026 Functions X are not expanded inline.
-w-pia 8060 Possibly incorrect assignment
-w-pin 8061 Initialization is only partially bracketed
I'm sure the warnings could be fixed the "proper" way or with some added
"#pragma" statements. But that just clutters the sources IMHO.
* $(MKDIR) and $(RMDIR) have been replaced with the shell-commands 'md'
and 'rd'. When having MinGW/Msys programs 'mkdir.exe' and 'rmdir.exe' in
$PATH, this confuses Borland's make and the result (the cleaning etc.) would
not be as expected.
* Added a ".path.int = $(OBJDIR)" to tell make where the $(PREPROCESSED)
files are. Why we need the preprocess step in the first place is beyond me
(Yang?). But I'll leave that for now.
These problems have gotten no interest/feedback from users:
-275 - Introduce a way to avoid sending USER for FTP connections
-288 - bug 3219997 curl rtmp request curl: (55) select/poll returned error
This problem is rather an autoconf bug with little user interest and it
can be worked around with an older autoconf:
-278 - "Configure $as_echo does not work"
This problem is not fixed:
-286 - bug 3214223 Pipelined HTTP requests with a zero-length body broken
Stop the abuse of CURLE_FAILED_INIT as return code for things not being
init related by introducing two new return codes:
CURLE_NOT_BUILT_IN and CURLE_UNKNOWN_OPTION
CURLE_NOT_BUILT_IN replaces return code 4 that has been obsoleted for
several years. It is used for returning error when something is
attempted to be used but the feature/option was not enabled or
explicitly disabled at build-time. Getting this error mostly means that
libcurl needs to be rebuilt.
CURLE_FAILED_INIT is now saved and used strictly for init
failures. Getting this problem means something went seriously wrong,
like a resource shortage or similar.
CURLE_UNKNOWN_OPTION is the option formerly known as
CURLE_UNKNOWN_TELNET_OPTION (and the old name is still present,
separately defined to be removed in a very distant future). This error
code is meant to be returned when an option given to libcurl isn't
known. This problem would mostly indicate a problem in the
program that uses libcurl.
In my attempts to reduce #ifdefs in code, the SOCKS functions are now
macros when libcurl is built without proxy support and therefore the FTP
code could avoid some #ifs.
The new http_proxy.* files now host HTTP proxy specific code (500+ lines
moved out from http.c), and as a consequence there is a macro introduced
for the Curl_proxyCONNECT() function so that code can use it without
actually supporting proxy (or HTTP) in builds.
1 - make sure to #define macros for cookie functions in the cookie
header when cookies are disabled to avoid having to use #ifdefs in code
using those functions.
2 - move cookie-specific code to cookie.c and use the function
conditionally as mentioned in (1).
net result: 6 #if lines removed and 9 fewer lines of code
Within multi_socket, when 'conn' was used as a shorthand, 'data' could
be changed and multi_runsingle could then modify which connectdata
struct it dealt with. This bug has not been included in a public
release.
Using 'conn' like that turned out to be ugly. This change is a partial
revert of commit f1c6cd42f4.
Reported by: Miroslav Spousta
Bug: http://curl.haxx.se/bug/view.cgi?id=3265485
The read callback must return the exact requested amount of data when it
is used for doing TFTP uploads. This is due to how it deals with data
internally. This could/should be fixed but for now we document the
existing behavior.
Reported by: Colin Blair
Bug: http://curl.haxx.se/mail/lib-2011-03/0319.html
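A sketch of a read callback that meets this requirement by looping
until the full requested amount is gathered (the FILE-based source is
an assumption, not from the patch):

  static size_t tftp_read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    FILE *f = (FILE *)userp;
    size_t want = size * nmemb;
    size_t got = 0;
    while(got < want) {
      size_t n = fread((char *)ptr + got, 1, want - got, f);
      if(!n)
        break; /* EOF: only the final block may be short */
      got += n;
    }
    return got;
  }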
When asked to bind the local end of a connection when doing a request,
the code will now disqualify other existing connections from re-use even
if they are connected to the correct remote host.
This will also affect which connections can be used for pipelining,
so that only connections that aren't bound or bound to the same
device/port you're asking for will be considered.
The RTSP-specific function for checking for "dead" connection is better
located in rtsp.c. The code using this is now written without #ifdefs as
the function call is instead turned into a macro (in rtsp.h) when RTSP
is disabled.
Fixed:
271 - fix the IPv6-working probing to only exist at one place in the code and
only get done once
A problem not repeatable and no proper recipe given and therefore simply
removed for now until we hear something else:
282 - 100 Continue responses should return the "final" HTTP response code:
"Getting the HTTP response code following a 100 Continue"
Move ipv6-functional-probe into a single function that is used from all
places that need to know.
Make the probe function store the result in a static variable so that
subsequent invocations just return the previous result and won't have to
probe again.
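The caching pattern amounts to something like this sketch (not the
actual libcurl code):

  #include <sys/socket.h>
  #include <unistd.h>

  static int ipv6_works(void)
  {
    static int probed = -1;          /* -1 means not probed yet */
    if(probed < 0) {
      int s = socket(AF_INET6, SOCK_DGRAM, 0);
      probed = (s >= 0);             /* remember the verdict */
      if(s >= 0)
        close(s);
    }
    return probed;
  }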
This is a new document for the source tree. This information has been
available for a long time at
http://curl.haxx.se/mail/etiquette.html but now it is put into a plain
text version too for wider distribution. The web version will be
automatically generated from this source document.
Curl_posttransfer is called too soon to add the final new line.
Moved the new line logic to pgrsDone as there is no more call to
update the progress status after this call.
Reported by: Dmitri Shubin <sbn_at_tbricks.com>
http://curl.haxx.se/mail/lib-2010-12/0162.html
When libcurl sends a HTTP request on a re-used connection and detects it
being closed (ie no data at all was read from it), it is important to
rewind if any data in the request was sent using the read callback or
was read from file, as otherwise the retried request will be broken.
Reported by: Chris Smowton
Bug: http://curl.haxx.se/bug/view.cgi?id=3195205
When NSS-powered libcurl connected to a SSL server with
CURLOPT_SSL_VERIFYPEER equal to zero, NSS remembered that the peer
certificate was accepted by libcurl and did not ask the second time when
connecting to the same server with CURLOPT_SSL_VERIFYPEER equal to one.
This patch turns off the SSL session cache for the particular SSL socket
if peer verification is disabled. In order to avoid any performance
impact, the peer verification is completely skipped in that case, which
makes it even faster than before.
Bug: https://bugzilla.redhat.com/678580
The PROT_* set of internal defines for the protocols is no longer
used. We now use the same bits internally as we have defined in the
public header using the CURLPROTO_ prefix. This is for simplicity and
because the PROT_* prefix was already duplicated internally for a
set of KRB4 values.
The PROTOPT_* defines were moved up to just below the struct definition
within which they are used.
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move away central protocol information such as
CLOSEACTION and DUALCHANNEL from single defines in a central place, out
to each protocol's definition. It also made us stop abusing the protocol
field for other info than the protocol, and we could start cleaning up
other protocol-specific things by adding flags bits to set in the
handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
The non-blocking connect improvement for IMAP showed that we didn't
properly define the Curl_ssl_connect_nonblocking function for non-SSL
builds.
Reported by: Tor Arntsen
Only download and convert the certdata to the ca-bundle.crt if Mozilla
changed the data
The Perl LWP module (which in a bit of a circular reference is used by
mk-ca-bundle.pl) is now indirectly using this script. I made this small
tweak to make it easier to automatically maintain the generated
ca-bundle.crt file in version control.
Some protocols have to call the underlying functions without regard to
what exact state the socket signals. For example even if the socket says
"readable", the send function might need to be called while uploading,
or vice versa. This is the case for the libssh2-based protocols SCP and
SFTP, so we now introduce a define to mark such protocols and make the
multi interface code aware of this concept.
This is another fix to make test 582 run properly.
As a new state was recently added to the IMAP state machine, it has to
be in the array of names as well, as otherwise libcurl crashes when a
debug version runs...
For uploads we want to use the _sending_ function even when the socket
turns out readable as the underlying libssh2 sftp send function will
deal with both accordingly. This is what the cselect_bits magic is for.
Fixes test 582.
These issues are now addressed:
276 - Karl M's vc makefile patch
277 - The "Stall when uploading to sftp using multi interface" bug
279 - curl_multi_remove_handle() crashes
280 - Marcus Sundberg's gss patch
Make GSS authentication work when a curl handle is reused for multiple
authenticated requests, by always setting negdata->state in
output_auth_headers().
Signed-off-by: Marcus Sundberg <marcus.sundberg@aptilo.com>
This test case is meant to verify that the logic in commit
60172a0446 actually works. This test failed for me before that
change and it works after it.
When using the multi interface and a handle using SFTP was removed very
early on, we would get a segfault because the code assumed data was
there that hadn't yet been set up.
Bug: http://curl.haxx.se/mail/lib-2011-03/0066.html
Reported by: Saqib Ali
recvfrom in bionic (the android libc) deviates from POSIX and uses a
const in the 5th argument ("const struct sockaddr *") so the check now
tests for that as well.
Both SFTP and SCP are protocols that need to shut down stuff properly
when the connection is about to get torn down. The primary effect of
not doing this shows up as memory leaks (when using SCP or SFTP with the
multi interface).
This is one of the problems detected by test 582.
As we know how much to send, we can and should stop once we've sent that
much data as it avoids having to rely on other mechanisms to detect the
end.
This is one of the problems detected by test 582.
Reported by: Henry Ludemann <misc@hl.id.au>
When using the multi_socket API to do SFTP upload, it is important that
we set a quick expire when leaving the SSH_SFTP_UPLOAD_INIT state as
there's nothing happening on the socket so there's no read or write to
wait for, but the next libssh2 API function needs to be called to get
the ball rolling.
This is one of the problems detected by test 582.
Reported by: Henry Ludemann <misc@hl.id.au>
All C and H files now (should) feature the proper project curl source
code header, which includes basic info, a copyright statement and some
basic disclaimers.
CyaSSL (available from git@github.com:cyassl/cyassl.git) has been
added to the SSL abstraction layer.
To test:
1) git CyaSSL sources
2) autoreconf -i
3) ./configure --disable-static
4) make
5) sudo make install
6) autoreconf -i
7) git curl sources (and this patch)
8) ./configure --disable-shared --with-cyassl --without-ssl --enable-debug
9) make
10) normal testing
Please send questions or comments to todd@yassl.com .
Stress that it is for client certificates and then mention that it also
works for all the other SSL-based protocols besides HTTPS and FTPS:
namely POP3S, IMAPS and SMTPS for now.
Add test 582 for uploading a file using sftp and the multi interface.
(Patch and test slightly tweaked by Daniel Stenberg)
Initially marked as disabled until it is fixed in the source.
libssh2_knownhost_readfile() returns a negative value on error or
otherwise number of parsed known hosts - this was previously not
documented correctly in the libssh2 man page for the function.
Bug: http://curl.haxx.se/mail/lib-2011-02/0327.html
Reported by: murat
The stopserver function would append pids to kill and could append them
without separating them with space properly. The result would be a very
large number that by (some implementations of) kill would be interpreted
as a negative number and that process group would be wiped...
Bug: http://curl.haxx.se/bug/view.cgi?id=3188836
Reported by: Greg Pratt
Removed the "netrc_debug" keyword replaced with --netrc-file additions.
Removed the debug code from Curl_parsenetrc as it is superseded by
--netrc-file.
This enables people to specify a path to the netrc file to use.
The new option overrides --netrc if both are present. However it
does follow --netrc-optional if specified.
After a request times out, the connection wasn't properly closed and
prevented from being re-used, so subsequent transfers could still mistakenly
get to use the previously aborted connection.
When failing to connect the protocol during the CURLM_STATE_PROTOCONNECT
state, Curl_done() has to be called with the premature flag set TRUE as
for the pingpong protocols this can be important.
When Curl_done() is called with premature == TRUE, it needs to call
Curl_disconnect() with its 'dead_connection' argument set to TRUE as
well so that any protocol handler's disconnect function won't attempt to
use the (control) connection for anything.
This problem caused the pingpong protocols to fail to disconnect when
STARTTLS failed.
Reported by: Alona Rossen
Bug: http://curl.haxx.se/mail/lib-2011-02/0195.html
Introducing a few CURL_SOCKOPT* defines for convenience. The new
CURL_SOCKOPT_ALREADY_CONNECTED signals to libcurl that the socket is to
be treated as already connected and thus it will skip the connect()
call.
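A sketch of a sockopt callback using the new define; the application
would typically also hand libcurl the already-connected socket via
CURLOPT_OPENSOCKETFUNCTION:

  static int sockopt_cb(void *clientp, curl_socket_t fd, curlsocktype purpose)
  {
    (void)clientp;
    (void)fd;
    (void)purpose;
    return CURL_SOCKOPT_ALREADY_CONNECTED; /* make libcurl skip connect() */
  }

  curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);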
It turns out some systems rely on the gmtime or gmtime_r to be defined
already in the system headers and thus my "precaution" redefining of
them only caused trouble. They are now removed.
Since the feature requires support for TCP_KEEPIDLE and TCP_KEEPINTVL to
function as documented, it now warns if that support is missing when the
option is used.
On second thought, I think CURLE_TLSAUTH_FAILED should be eliminated. It
was only being raised when an internal error occurred while allocating
or setting the GnuTLS SRP client credentials struct. For TLS
authentication failures, the general CURLE_SSL_CONNECT_ERROR seems
appropriate; its error string already includes "passwords" as a possible
cause. Having a separate TLS auth error code might also cause people to
think that a TLS auth failure means the wrong username or password was
entered, when it could also be a sign of a man-in-the-middle attack.
When the callback returns an error, this function must make sure to return
CURLE_ABORTED_BY_CALLBACK properly and not CURLE_OK as before to allow the
callback to properly abort the operation.
The main function has not been updated for some time and is out of sync
with the code. The code is now tested by several test cases so there is
no need for a separate code path.
Instead of polluting many places with #ifdefs, we create a single place
for this function, and also check return code properly so that a NULL
pointer returned won't cause problems.
The official Mozilla page at http://www.mozilla.org/projects/security/certs/
points out a new place as the "proper" place to get Mozilla's CA certs from
so this script is now updated to use that instead.
Reported by: Daniel Mentz
The code in the toofast state needs to first recalculate the values
before it uses them again since it may have been a while since it last
did it when it reaches this point.
This will be used by file_do() and Curl_readwrite() as a unified method
of checking to see if a remote document meets the supplied
CURLOPT_TIMEVAL and CURLOPT_TIMECONDITION.
Signed-off-by: Dave Reisner <d@falconindy.com>
"6.7 What are my obligations when using libcurl in my commercial apps?"
got the piece about what exactly "in all copies" mean to a user of the
code.
This interpretation is based on what other MIT-like licenses have made
more explicit.
This is a separate makefile for MSVC builds. It is deliberately put in
another dir than src/ and lib/ to allow a different build experience
than the previous - at least during a period. Eventually we should
unify.
When this callback is called due to the destruction of the ares handle,
the connection pointer passed in as an argument may no longer point
to valid data and this function doesn't need to do anything with it
anyway so we make sure it doesn't.
Bug: http://curl.haxx.se/mail/lib-2011-01/0333.html
Reported by: Vsevolod Novikov
The HTTP parser allocated memory on each received Location: header
without properly freeing old data. Starting now, the code only considers
the first Location: header and will blissfully ignore subsequent ones.
Bug: http://curl.haxx.se/bug/view.cgi?id=3165129
Reported by: Martin Lemke
... to not make the connection between the tool and the libcurl used
tighter than necessary, the tlsauth options are now always present but
if the used libcurl doesn't have TLSAUTH support it will return failure.
Also, replaced strncmp() with strequal to get case insensitive matching.
Extended the initial HTTP protocol part and added a mention of --trace and
--trace-ascii.
Replaced most URLs in the text to use example.com instead of all the
made up strange names.
Shortened a bunch of lines.
... and update the curl.1 and curl_easy_setopt.3 man pages such that
they do not suggest to use an OpenSSL utility if curl is not built
against OpenSSL.
Bug: https://bugzilla.redhat.com/669702
The idea that the protocol and socktype are part of name resolving in
the libc functions is nuts. We keep the name resolver functions assuming
TCP/STREAM and we make sure that when we want to connect to a UDP
service we use the correct UDP/DGRAM set instead. This bug was because
the ->protocol field was not always set correctly.
This bug was only affecting ipv6-disabled non-cares non-threaded builds.
Bug: http://curl.haxx.se/bug/view.cgi?id=3154436
Reported by: "dperham"
This makes it possible to skip the call to unit_stop() in such
cases. Also use Curl_safefree() in unit test 1302 so it will
pass the memory torture test.
The CheckTypeSize module that comes with CMake 2.6.2 and above does
everything we need and also supports cross-compiling. Avoid duplicating
an older version of it here. This also fixes a cross-compiling error
because the old line
include ("${CMAKE_MODULE_PATH}/CheckTypeSize.cmake")
failed because CMAKE_MODULE_PATH is a search path and not a directory.
Signed-off-by: Brad King <brad.king@kitware.com>
The UNITTEST_START and UNITTEST_STOP defines needed to do a new brace
level so that test cases can declare variables fine and still remain
fine C89 code.
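A sketch of what a test body looks like under these macros;
some_internal_call() is a made-up stand-in for the function under test:

  UNITTEST_START
    int rc;  /* the extra brace level makes this declaration valid C89 */

    rc = some_internal_call();
    abort_unless(rc == 0, "some_internal_call failed");
  UNITTEST_STOP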
The test runner script now knows if unittests can run and the unit test
setup file says it is one. I also made runtests.pl deal with no
<command> tag set, so that the description file can get even simpler.
When configure --enable-debug has been used, all files in lib/ are now
built twice and a separate static library crafted for unit-testing will
be linked. The unit tests in the tests/unit subdir will use that
library.
Since some systems don't have PATH_MAX and it isn't that clever to
assume a fixed maximum path length, the code now allocates buffer space
instead of using stack.
Reported by: Samuel Thibault
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=608521
Sending "pwd" as a QUOTE command only sent the reply to the
DEBUGFUNCTION. Now it also sends an FTP-like header to the header
callback to allow similar operations as with FTP, and apps can re-use
the same parser.
configure.ac: Test harness libhostname library will not be built for Windows.
runtests.pl: LD_PRELOAD mechanism will not be used to load libhostname
library on operating systems which lack LD_PRELOAD support.
When built IPv6-enabled, we could do Curl_done() with one of the two
resolves having returned already, so when ares_cancel() is called the
resolve callback ends up doing funny things (sometimes resulting in a
segfault) since it would try to actually store the previous resolve even
though we're shutting down the resolve.
This bug was introduced in commit 8ab137b2bc so it hasn't been
included in any public release.
Bug: http://curl.haxx.se/bug/view.cgi?id=3145445
Reported by: Pedro Larroy
Providing multiple dots in a series in the domain field (domain=..com) could
trick the cookie engine to wrongly accept the cookie believing it to be
fine. Since the tailmatching would then match all .com sites, the cookie would
then be sent to all of them.
The code now requires at least one letter between each dot for them to be
counted. Edited test case 61 to verify this.
When using the multi interface and connecting to a host name that
resolves to multiple IP addresses, there was no logic that made it
continue to the next IP if connecting to the first address times
out. This is now corrected.
The info about pipe status and expire cleared are clearly debug-related
and not anything mere mortals will or should care about so they are now
ifdef'ed DEBUGBUILD
They were all wrong previously since none used the <brackets> they
should for MAIL FROM. Now libcurl adds them itself if the app doesn't,
so they end up wrong less easily.
Similar to what is done already for RCPT TO, the code now checks for and
adds angle brackets (<>) around the email address that is provided for
CURLOPT_MAIL_RCPT unless the app has done so itself.
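For illustration (addresses made up), both plain and pre-bracketed
forms now end up correctly bracketed on the wire:

  struct curl_slist *rcpt = NULL;
  curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "sender@example.com"); /* <> added */
  rcpt = curl_slist_append(rcpt, "receiver@example.com");          /* <> added */
  rcpt = curl_slist_append(rcpt, "<other@example.com>");           /* kept as-is */
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);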
Make sure that Curl_cache_addr() errors are propagated to callers of
loadhostpairs().
(this loadhostpairs function caused a scan-build warning due to the
'dns' variable getting assigned but never used)
Doing curlx_strtoofft() on the size just to figure out the end of it
causes a compiler warning since the result wasn't used, but is also a
bit of a waste.
Since the original `conn' pointer was used after the `connectdata' it
points to has been closed/cleaned up by Curl_reconnect_request, it caused
a crash. We must make sure to use the newly created connection instead!
URL: http://curl.haxx.se/mail/lib-2010-12/0202.html
Make the c-ares resolver code ask for both IPv4 and IPv6 addresses when
IPv6 is enabled.
This is a workaround for the missing ares_getaddrinfo() and is a lot
easier to implement.
Note that as long as c-ares returns IPv4 addresses when IPv6 addresses
were requested but missing, this will cause a host's IPv4 addresses to
occur twice in the DNS cache.
URL: http://curl.haxx.se/mail/lib-2010-12/0041.html
Add a simple SMTP example program, patterned after some of the existing
examples, and the curl application.
This version addresses issues raised by David Woodhouse on comments in
the simplesmtp.c example.
The SSL_SERVER_VERIFY_LATER bit in the ssl_ctx_new() call allows the
code to verify the peer certificate explicitly after the handshake and
then the "data->set.ssl.verifypeer" option works.
The public axTLS header (at least as of 1.2.7) redefines the memory
functions. We #undef those again immediately after the public header to
limit the damage. This should be fixed in axTLS.
Failed HTTPS tests: 301, 306, 311, 312, 313, 560
311, 312 need more detailed error reporting from axTLS.
313 relates to CRL, which hasn't been implemented yet.
Added axTLS to autotool files and glue code to misc other files.
axtls.h maps SSL API functions, but may change.
axtls.c is just a stub file and will definitely change.
The function that checks if pipelining is possible now requires the HTTP
bit to be set so that it doesn't mistakenly try to do it for other
protocols.
Bug: http://curl.haxx.se/mail/lib-2010-12/0152.html
Reported by: Dmitri Shubin
The generic timeout code must not check easy handles that are already
completed. Going to completed (again) within there risked decreasing the
number of alive handles again and thus it could go negative.
This regression bug was added in 7.21.2 in commit ca10e28f06
ossl_connect_common() now checks whether or not 'struct
connectdata->state' is equal 'ssl_connection_complete' and if so, will
return CURLE_OK with 'done' set to 'TRUE'. This check prevents
ossl_connect_common() from creating a new ssl connection on an existing
ssl session which causes openssl to fail when it tries to parse an
encrypted TLS packet since the cipher data was effectively thrown away
when the new ssl connection was created.
Bug: http://curl.haxx.se/mail/lib-2010-11/0169.html
It helps to prevent a hangup with some FTP servers in case the idle
session timeout has been exceeded. But it may also be useful for other
protocols
that send any quit message on disconnect. Currently used by FTP, POP3,
IMAP and SMTP.
When looping in this function and checking for the timeout being
expired, it was not updating the reference time when calculating the
timediff since the previous round, which made it think each subsequent
loop had taken longer than it actually did.
I also modified the function to use the generic Curl_timeleft() function
instead of the custom logic.
Bug: http://curl.haxx.se/bug/view.cgi?id=3112579
Ensure that spurious results from the system's getaddrinfo() are not propagated
by Curl_getaddrinfo_ex() into the library.
Also ensure that the ai_addrlen member of Curl_getaddrinfo_ex()'s output linked
list of Curl_addrinfo structures has appropriate family-specific address size.
On Windows, translate WSAGetLastError() to errno values as GNU
TLS does it internally, too. This is necessary because send() and
recv() on Windows don't set errno when they fail but GNU TLS
expects a proper errno value.
Bug: http://curl.haxx.se/bug/view.cgi?id=3110991
Temporarily, when cross-compiling with gcc 3.0 or later, enable strict
aliasing rules and warnings, given that cross-compiled target autobuilds
do not run the test suite.
If --librtmp was specified but pkg-config could not find the librtmp
file, we would have undefined symbols when linking curl.
We prevent this error by disabling this case as suggested on the mailing
list.
When no timeout is set, we call the socket_ready function with a timeout
value of 0 during handshake, which makes it loop too much/fast in this
function. It also made this function return CURLE_OPERATION_TIMEDOUT
wrongly on a slow handshake.
However, the particular bug report that highlighted this problem is not
solved by this fix, as this fix only makes the more proper error get
reported instead.
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=594150
Reported by: Johannes Ernst
While changing Curl_sec_read_msg to accept an enum protection_level
instead of an int, I went ahead and fixed the usage of the associated
fields.
Some code was assuming that prot_clear == 0. Fixed those to use the
proper value. Added assertions prior to any code that would set the
protection level.
This script is the start of a helper tool that scans a source code and
outputs the most recent libcurl version it finds symbols for. Meaning
that if there's no conditions in the code, that's the earliest libcurl
version the scanned code requires.
It is not added to the Makefile.am yet as it is still a bit crude, but
I'm committing it to keep it and allow us to work on it.
This is the advised way of checking for errors in the GSS-API RFC.
Also added some '\n' to the error message so that they are not mixed
with other outputs.
This is a meta symbol. OR this value together with a single specific
auth value to make libcurl first probe with unrestricted auth while
still only accepting that single auth algorithm.
For example you can use CURLAUTH_DIGEST|CURLAUTH_ONLY to make libcurl
first probe for what method to use, but yet only consider Digest to be
acceptable.
Using _only_ CURLAUTH_DIGEST without the CURLAUTH_ONLY field, will make
libcurl explicitly use Digest right away and not do any probing.
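In code, the two variants described above look like this:

  /* probe first, but only ever accept Digest */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH,
                   (long)(CURLAUTH_DIGEST | CURLAUTH_ONLY));

  /* use Digest immediately, no probing */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);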
The IP version choice was previously only in the UserDefined struct
within the SessionHandle, but since we sometimes alter that option
during a request we need to have it on a per-connection basis.
I also moved more "init conn" code into the allocate_conn() function
which is designed for that purpose more or less.
Instead of reopening the downloaded file, fsetxattr uses the (already
open) file descriptor to attach extended attributes. This makes the
procedure more robust against errors caused by moved or deleted files.
CURLOPT_RESOLVE is a new option that sends along a curl_slist with
name:port:address sets that will populate the DNS cache with entries so
that request can be "fooled" to use another host than what otherwise
would've been used. Previously we've encouraged the use of Host: for
that when dealing with HTTP, but this new feature has the added bonus
that it allows the name from the URL to be used for TLS SNI and server
certificate name checks as well.
This is a first change. Surely more will follow to make it decent.
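A minimal sketch (host and address made up) of populating the cache
this way:

  struct curl_slist *dns = NULL;
  dns = curl_slist_append(dns, "example.com:80:127.0.0.1");
  curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_perform(curl); /* connects to 127.0.0.1 yet uses the real name */
  curl_slist_free_all(dns);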
If the query result has a binary attribute, the binary attribute is
base64 encoded. But all following non-binary attributes are then also
base64 encoded, which is wrong.
This is a test (the LDAP server is public):

  curl ldap://x500.bund.de:389/o=Bund,c=DE?userCertificate,certificateSerialNumber?sub?cn=*Woehleke*
setxattr is a glibc call to set extended attributes, so configure now
checks for it and the code is adapted to only build when the
functionality is present.
It is often convenient to track back the source of a once-downloaded
file; this patch makes curl store the source URL and other metadata
alongside the retrieved file by using the extended attributes (if
supported by the file system and enabled by --xattr).
Test 580 is removed again for two reasons:
1) Some compilers aren't satisfied by just a data variable called 'test'
when first.o wants a function called 'test'. The Solaris compiler says
"ld: warning: symbol `test' has differing types:" while the AIX compiler
downright rejects it.
2) Test case 1119 that was added after this test is way more complete
and cover everything test 580 does and more without introducing the same
problems.
If you use a custom Host: name in a request to a SSL server, libcurl
will now use that given name when it verifies the server certificate to
be correct rather than using the host name used in the actual URL.
When given a custom host name in a Host: header, we can use it for
several different purposes other than just cookies, so we rename it and
use it for SSL SNI etc.
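A hedged example (addresses made up): connect to a specific IP while
verifying the certificate against the Host: name:

  struct curl_slist *hdrs = NULL;
  hdrs = curl_slist_append(hdrs, "Host: www.example.com");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
  curl_easy_setopt(curl, CURLOPT_URL, "https://192.0.2.1/");
  /* the certificate name check and SNI now use www.example.com */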
An example application source code sending SMTP mail with the multi
interface. It is based on the code Alona Rossen provided, which in turn
is based on existing example/test code, and I converted it even more
into a decent example with a fair multi API use, put the info required
to edit at the top and I added some comments.
If a command is set type="perl", it can now specify a perl program that will
be run instead of an ordinary curl or built tool.
A perl test automatically disables memory and valgrind debugging.
This new script scans for all enums and #defines used by the curl/curl.h
and curl/multi.h headers. Then it reads all symbols mentioned in
symbols-in-versions and makes sure that no entries are missing in
there. It then proceeds to verify that the entries that
symbols-in-versions mentions but aren't found in the sources are truly
documented as removed.
This script is used in the new test case 1119
I've developed a script I call symbol-scan.pl that scans the curl.h and
multi.h header files and compare the symbols it finds in there with the
symbols symbols-in-versions documents and outputs a report on the
differences. Using this I've dug through the history to fill up
symbols-in-versions with all the symbols my script found mismatches for.
I will commit symbol-scan.pl separately and think of a way to put it to
use in the build/tests so that we from now on will get this in-sync
check automatically.
The new perl script mk580.pl generates a C table in a fresh source file
named lib580.c and if that compiles fine we know that the file
docs/libcurl/symbols-in-versions at least doesn't include any symbols
that are misspelled.
An additional feature would be to somehow scan curl/curl.h and compare
with symbols-in-versions to see if there are symbols missing.
Some FTP servers (e.g. Pure-ftpd) end up hanging if we close the data
connection before transferring all the requested data. If we send ABOR
in that case, it prevents the server from hanging.
Bug: https://bugzilla.redhat.com/643656
Reported by: Pasi Karkkainen, Patrick Monnerat
These haven't worked in at least 8 years due to missing source
files, and most active RiscOS developers these days apparently
cross-compile anyway.
Signed-off-by: James Bursa <james@zamez.org>
In libssh2 1.2.8, libssh2_session_handshake() replaces
libssh2_session_startup() to fix the previous portability problem with
the socket type that was too small for win64 and thus easily could cause
crashes and more.
It is a bad idea to use the public prefix used by another library and
now we realize that libssh2 introduces a symbol in the upcoming version
1.2.8 that conflicts with our static function named libssh2_free.
When failing to build form post due to an error, the code now does a
proper failf(). Previously libcurl would report an error like "failed
creating formpost data" when a file wasn't possible to open which was
not easy for users to figure out.
I also lower cased a function name to be named more curl-style and
removed some unnecessary code.
The URL parser got a little stricter as it now considers a ? to be a
host name divider so that the slightly sloppier URLs work too. The
problem that made me do this change was a reported problem with a URL
like: www.example.com?email=name@example.com. This form of URL is not
really a legal URL (due to the missing slash after the host name) but is
widely accepted by all major browsers, and libcurl also already accepted
it; it was just the '@' letter that triggered the problem now.
The side-effect of this change is that now libcurl no longer accepts the
? letter as part of user-name or password when given in the URL, which
it used to accept (and is tested in test 191). That letter is however
mentioned in RFC3986 to be required to be percent encoded since it is
used as a divider.
Bug: http://curl.haxx.se/bug/view.cgi?id=3090268
In order to avoid for example the pingpong protocols to issue STARTTLS
(or equivalent) even though there's no SSL support built-in.
Reported by: Sune Ahlgren
Bug: http://curl.haxx.se/mail/archive-2010-10/0045.html
Some options, such as the automatic decompression and some SSL related
ones now will bail out if the underlying libcurl doesn't have support
for the particular feature needed.
Do not match the trailing '\n' in the regular expression as this would
make us dump a ) parenthesis on a new line.
This fixes an error where an expression's closing ) parenthesis would
get transformed onto a line of its own.
Bug: http://curl.haxx.se/mail/lib-2010-10/0065.html
Reported by: Dimitre Dimitrov
If the filename contains a backslash, only use filename portion. The
idea is that even systems that don't handle backslashes as path
separators probably want that path removed for convenience.
This flaw is considered a security problem, see the curl security
vulnerability http://curl.haxx.se/docs/adv_20101013.html
As the change in 5f0ae7a062 added a precaution against negative
file sizes that for some reason managed to get returned, this change now
introduces the same check at the second place in the code where the file
size from the libssh2 stat call is used.
This check might not be suitable for a 32 bit curl_off_t, but libssh2.h
assumes long long to work and to be 64 bit so I believe such a small
curl_off_t will be very unlikely to occur in the wild.
Having an open brace without a closing brace caused a segfault.
Having a closing brace too many caused a silent error to occur, which
caused curl to bail out and return an error code but no error message
was shown. It does now!
All error message outputs no longer wrongly get _two_ newlines written
after the error message.
Reported by: Vlad Ureche
Bug: http://curl.haxx.se/bug/view.cgi?id=3083942
The invocation of autoconf's AC_PATH_PROG( ) is not quite right for
finding curl-config. This fix corrects the negative case (where
curl-config is not found).
"261 - configure and libidn" is removed from the list since Julien
Chaffraix tried to repeat it but failed and the reporter did not return
to provide further details.
Reported by: Lyndon Hill
Bug: http://curl.haxx.se/mail/lib-2010-07/0029.html
The macro provides a --with-libcurl option that expects a PREFIX to be
specified and not actually a "directory" in which libcurl will be found.
This now spells that out more clearly.
Reported by: Dan Locks
Bug: http://curl.haxx.se/bug/view.cgi?id=3079891
Renamed SDK_* to NDK_*; made NDK_* defines overridable from the
environment; removed the now obsolete YACC macro;
moved some curl_config.h defines to IPv6 section since they
are only needed when IPv6 is enabled - this makes libcurl compile
with older NDKs too which were not IPv6-aware.
We forgot to release the buffer passed to gss_init_sec_context.
The previous logic was difficult to read as we were reusing the same
variable (gssbuf) for both the input buffer and the output buffer. Split
the logic into two variables to better underline what needs to be
released. Also made the code break at 80 columns.
This fixes a memory leak related to the GSS-API code.
Added krb5_init and krb5_end functions. Also removed a work-around for
the lack of proper initialization of the GSS-API context.
It was pointed out that the special case libcurl did for 416 was
incorrect and wrong. 416 is not really different to other errors so the
response body must be handled like for other errors/http responses.
Reported by: Chris Smowton
Bug: http://curl.haxx.se/bug/view.cgi?id=3076808
This delays between write operations, hopefully making it easier
to spot problems where libcurl doesn't flush the socket properly
before waiting for the next response.
It is still not clarified exactly why this happens, but libssh2
sometimes reports a negative file size for the remote SFTP file and that
deeply confuses libcurl (or crashes it) so this precaution is added to
avoid badness.
Reported by: Ernest Beinrohr
Bug: http://curl.haxx.se/bug/view.cgi?id=3076430
all multi and hiper examples:
* don't loop curl_multi_perform calls, that was <7.20.0 style, currently
the exported multi functions will not return CURLM_CALL_MULTI_PERFORM
all hiper examples:
* renamed check_run_count to check_multi_info
* don't compare current running handle count with previous value, this
was the wrong way to check for finished requests, simply call
curl_multi_info_read
* it's also safe to call curl_multi_remove_handle inside the
curl_multi_info_read loop.
ghiper.c:
* replaced curl_multi_socket (that function is marked as obsolete) calls
with curl_multi_socket_action calls (as in hiperfifo.c and
evhiperfifo.c)
ghiper.c and evhiperfifo.c:
* be smart like hiperfifo.c, don't do unnecessary curl_multi_* calls in
new_conn and main
Remove a leak seen on Kerberos/MIT (gss_OID is copied internally and
we were leaking it). Now we just pass NULL as advised in RFC2744.
|tmp| was never set back to buf->data.
Cleaned up Curl_sec_end to take into account failure in Curl_sec_login
(where conn->mech would be NULL but not conn->app_data or
conn->in_buffer->data).
Following a change in the way socket handlers are registered, the
custom recv and send methods were conditionally registered. We need to
register them every time to handle the ftp security extensions.
Re-added the clear text handling in sec_recv.
Curl_sec_login was returning the opposite result that the code in ftp.c
was expecting. Simplified the return code (using a CURLcode) so to see
more clearly what is going on.
The functions Curl_disconnect() and Curl_done() are both used within the
scope of a single request so they cannot be allowed to use
Curl_expire(... 0) to kill all timeouts as there are some timeouts that
are set before a request that are supposed to remain until the request
is done.
The timeouts are now instead cleared at curl_easy_cleanup() and when the
multi state machine changes a handle to the complete state.
The date format in RFC822 allows that the seconds part of HH:MM:SS is
left out, but this function didn't allow it. This change also includes a
modified test case that makes sure that this now works.
Reported by: Matt Ford
Bug: http://curl.haxx.se/bug/view.cgi?id=3076529
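For example, a date string lacking the seconds part now parses fine:

  #include <curl/curl.h>

  time_t t = curl_getdate("Sun, 06 Nov 1994 08:49 GMT", NULL);
  /* returns -1 on parse failure; succeeds here without the :SS part */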
tftpd-hpa has a bug where it will send an incorrect ack when the block
counter wraps and tftp options have been sent. Work around that by
accepting an ack for 65535 when we're expecting one for 0.
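The accept check amounts to something like this sketch (names
illustrative, not the actual source):

  /* does the received ack match the expected block counter, allowing for
     tftpd-hpa acking 65535 where a wrapped-around 0 is expected? */
  static int ack_matches(unsigned short rblock, unsigned short block)
  {
    return (rblock == block) || (block == 0 && rblock == 65535);
  }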
- |fd| is now a curl_socket_t and |len| a size_t to avoid conversions.
- Added 2 FIXMEs about the 2 unsigned -> signed conversions.
- Included 2 minor changes to Curl_sec_end.
- Renamed it to do_sec_send as it is the function doing the actual
transfer.
- Do not return any values as no one was checking it and it never
reported a failure (added a FIXME about checking for errors).
- Renamed the variables to make their use more specific.
- Removed some casts (int -> curl_socket_t, ...)
- Avoid doing the htnl <-> nthl twice by caching the 2 results.
- Renamed the variables to better match their intent.
- Unified the |decoded_len| checks.
- Added some FIXMEs to flag some improvement that did not go in this
change.
- Removed sec_prot_internal as it is now inlined in the function (this removed
a redundant check).
- Changed the prototype to return an error code.
- Updated the method to use the new ftp_send_command function.
- Added a level_to_char helper method to avoid relying on the compiler's
bound checks. This defaults to the maximum security we have in case of a
wrong input.
Tighten the type of the |data| parameter to avoid a cast. Also made
it const as we should not modify it.
Added a DEBUGASSERT on the size to be written while changing it.
To do so, made block_read call Curl_read_plain instead of read.
While changing them renamed block_read to socket_read and sec_get_data
to read_data to better match their function.
Also fixed a potential memory leak in block_read.
... for example when LDAP is not compiled.
Fixed the logic to match the rest of the options' messages; that is, we
update the default message only if the option is not disabled after the
different checks.
Reported by: Guenter Knauf
Obviously, browsers ignore a colon without a following port number. Both
Firefox and Chrome just remove the colon for such URLs. This change
does not remove the colon for URLs sent over a HTTP proxy, so we should
consider doing that change as well.
Reported by: github user 'kreshano'
curl_easy_duphandle() was not properly duping the ares channel. The
ares_dup() function was introduced in c-ares 1.6.0 so by starting to use
this function we also raise the bar and require c-ares >= 1.6.0
(released Dec 9, 2008) for such builds.
Reported by: Ning Dong
Bug: http://curl.haxx.se/mail/lib-2010-08/0318.html
1) PPC64 appears to be a 10.5-only supported architecture, so I
forced 10.5 for 64bit if there is a need for PPC64, else 64bit only
does x86_64
2) proper "make clean" after every ./configure. fixes a bug where
subsequent runs the 32bit do not get compiled
3) Added a version numbering curl-${VERSION} rather than the "stock standard" A
librtmp is often statically linked and using sub dependencies like
OpenSSL, so we need to make sure we can actually link with it properly
before enabling it. Otherwise we easily end up trying to link with a
RTMP lib that fails.
1 - libcurl assumes that there are gcrypt functions available when
GnuTLS is.
2 - GnuTLS can be built to use libnettle instead as crypto library,
which breaks assumption (1)
This change makes configure make sure that if GnuTLS is requested and
detected, it also makes sure that gcrypt is present or it errors
out. This is mostly a way to make the user more aware of this flaw, the
correct fix would be to detect which crypto layer that is in use and
adapt our code to use that instead of blindly assuming gcrypt.
Reported by: Michal Gorny
Bug: http://curl.haxx.se/bug/view.cgi?id=3071038
If built without HTTP or proxy support it would cause a compiler warning
due to the unused variable. I moved the declaration of it into the only
scope it is used.
bool_false is the internal name used in the setup_once.h definition
we fall back to for non-C99 non-stdbool systems, it's not the actual
name to use in assignments (we use bool_false, bool_true there to
avoid global namespace problems, see comment in setup_once.h).
The correct C99 value to use is 'false', but let's use FALSE as
used elsewhere when assigning to bits.close. FALSE is set equal
to 'false' in setup_once.h when possible.
This fixes a build problem on C99 targets.
As of curl-7.21.1 tunnelling ldap queries through HTTP Proxies is not
supported. Actually if --proxytunnel command-line option (or equivalent
CURLOPT_HTTPPROXYTUNNEL) is used for ldap queries like
ldap://ldap.my.server.com/... you are unable to successfully execute
the query. In fact, ldap_*_bind is executed directly against the ldap
server and the proxy is totally ignored. This is true for both OpenLDAP
and
Microsoft LDAP API.
Steps to reproduce the error:
Just launch "curl --proxytunnel --proxy 192.168.1.1:8080
ldap://ldap.my.server.com/dc=... "
This fix adds an invocation to Curl_proxyCONNECT against the provided
proxy address and on successful "CONNECT" it tunnels ldap query to the
final ldap server through the HTTP proxy. As far as I know Microsoft
LDAP APIs don't permit tunnelling in any way so the patch provided is
for OpenLDAP only. The patch has been developed against OpenLDAP 2.4.23
and has been tested with Microsoft ISA Server 2006 and works properly
with basic, digest and NTLM authentication.
Rodric provided an awesome recipe that proved libcurl didn't time out at
the requested time - it often timed out at [connect time] +
[timeout time] rather than the documented and intended [timeout time]
only. This bug was due to the code using the wrong base offset when
comparing against "now". I also took the opportunity to simplify
the code by properly using the generic helper function for this:
Curl_timeleft.
Reported by: Rodric Glaser
Bug: http://curl.haxx.se/bug/view.cgi?id=3061535
As this function uses return code 0 to mean that there is no timeout, it
needs to check that it doesn't return a time left value that is exactly
zero. It could lead to libcurl doing an extra 1000 ms select() call and
thus not timing out as accurately as it should.
I fell over this bug when working on bug 3061535, but this fix alone
does not correct that problem; it is still a problem that needs to
be fixed.
Reported by: Rodric Glaser
Bug: http://curl.haxx.se/bug/view.cgi?id=3061535
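A minimal sketch of the guard, assuming (as described above) that 0 is
reserved for "no timeout" and a negative value signals an expired one:

  long timeout_ms = (long)(expiretime - now);  /* time left in ms */
  if(!timeout_ms)
    timeout_ms = -1;  /* already due; 0 would wrongly mean "no timeout" */
  return timeout_ms;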
1. Remove the comment warning that it's "not been verified to work". It
works with no problems in my testing.
2. Remove 2 unnecessary includes.
3. Remove the myrealloc(). Initialize chunk.memory with malloc() instead
of NULL. The comments for these two parts contradicted each other.
4. Handle out of memory from realloc() instead of continuing.
5. Print a brief status message at the end.
The timeout is set for the connect phase already at the start of the
request so we should not add a new one, and we MUST not set expire to 0
as that will remove any other potentially existing timeouts.
When curl calls a function from that library then it needs to
explicitly link to the library instead of piggybacking on
libcurl's own dependency. Without this, GNU ld with the
--no-add-needed flag fails when linking (which Fedora now does
by default).
Reported by: Quanah Gibson-Mount
Bug: http://curl.haxx.se/mail/lib-2010-09/0085.html
The code reading chunked encoding attempts to rewind the stream if it has
read more data than the chunked parser consumes. The rewinding can fail
and it will then cause an error. This change now makes the rewinding
only happen if pipelining is in use - as that's the only time it really
needs to be done.
Bug: http://curl.haxx.se/mail/lib-2010-08/0297.html
Reported by: Ron Parker
Curl_getconnectinfo() is changed to return a proper curl_socket_t for
the last socket so that it'll work more portably (and cause less
compiler warnings).
Add a timeout check for handles in the state machine so that they will
timeout in all states regardless of what actions may or may not
happen.
Fixed a bug in socket_action introduced recently when looping over timed
out handles: it wouldn't assign the 'data' variable and thus it wouldn't
properly take care of handles.
In the update_timer function, the code now checks if the timeout has
been removed and then it tells the application. Previously it would
always let the remaining timeout(s) just linger to expire later on.
Each easy handle has a list of timeouts, so as soon as the main timeout
for a handle expires, we must make sure to get the next entry from the
list and re-add the handle to the splay tree.
This was attempted previously but was done poorly in my commit
232ad6549a.
When a new transfer is about to start we now set the proper timeouts to
expire for the multi interface if they are set for the handle. This is a
follow-up bugfix to make sure that easy handles timeout properly when
the times expire and the multi interface is used. This also improves
curl_multi_timeout().
Fixed some issues that caused xmllint failures, added features
and keywords, fixed some quotes and removed some <strip> sections
that unnecessarily limited test checking.
The initial gopher commits added logic to do GOPHER test serving in the
pingpong server, but as gopher resembles HTTP much more than FTP or SMTP,
the gopher testing has been moved over to instead use the sws (HTTP)
server. This change simply removes the now unused code.
The fix for the busyloop really is only a temporary work-around. It
causes a BLOCKING behavior which is a NO-NO. This function should rather
be split up in a do and a doing piece where the pieces that aren't
possible to send now will be sent in the doing function repeatedly until
the entire request is sent.
HTTP allows that a server sends trailing headers after all the chunks
have been sent WITHOUT signalling their presence in the first response
headers. The "Trailer:" header is only a SHOULD there and as we need to
handle the situation even without that header I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
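For illustration, a chunked response may thus legally end like this (each
line CRLF-terminated, the trailing header name made up for the example),
with no "Trailer:" header ever announcing it:

  HTTP/1.1 200 OK
  Transfer-Encoding: chunked

  7
  payload
  0
  X-Extra-Info: some value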
It was introduced in commit eeb2cb05 along with the -F type=
change. Also fixed a typo in the name of the magic filename=
parameter. Tweaked tests 39 and 173 to better test this path.
The numerical value passed to CURLOPT_RESUME_FROM for FTP uploads is
interpreted and used as the position from which to resume the _reading_ of the
local file and it will "blindly" append that data on the remote
file. This was certainly not clear in the docs previously.
Reported by: catalin
Bug: http://curl.haxx.se/bug/view.cgi?id=3048174
The -F option allows some custom parameters within the given string, and
those strings are separated with semicolons. You can for example specify
"name=daniel;type=text/plain" to set content-type for the
field. However, the use of semicolons like that made it not work if
you specified one within the content-type, as in:
"name=daniel;type=text/plain;charset=UTF-8"
... as the second one would be seen as a separator, and since "charset" is
not a parameter curl knows anything about, it was just silently discarded.
The new logic now checks whether the semicolon and the following keyword
look like a parameter it knows about; if not, they are assumed to be
meant for use within the content-type string itself.
I modified test case 186 to verify that this works as intended.
Reported by: Larry Stone
Bug: http://curl.haxx.se/bug/view.cgi?id=3048988
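As a usage illustration (URL and file names made up), both of these now
behave as expected - "charset" stays inside the content-type while
"filename" is still treated as a known parameter:

  curl -F "name=daniel;type=text/plain;charset=UTF-8" http://example.com/
  curl -F "file=@doc.txt;type=text/plain;filename=renamed.txt" http://example.com/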
The script works exactly the same as the Perl one except for one thing:
when the text descriptions generated with openssl are included, then
the md5 fingerprints are missing; it seems openssl has either a bug or
a feature which prints the md5 fingerprint output to stdout instead
of writing it to the specified file; this script could do the same
as what the Perl script does (redirect stdout into a file) but this
makes the script take double the time because it needs to launch
cmd.exe 140 times (for each openssl call). So I think for now we just
omit the md5 fingerprints, and see if openssl will be fixed.
It seems that it's time to look at some better ideas for the win32
non-configure builds; probably a prebuild target which copies
config-win32.h to curl_config.h and then also appends feature
defines like USE_ARES.
The 66 bytes checked are those 38 bytes with the chunked encoding
headers added: 8+8+10+35+5 = 66
The three-letter words become 8 bytes on the wire because they are sent
like: "3\r\none\r\n"
... and there's the trailing 5 bytes written after the four lines since
the final chunk is sent (which is "0\r\n\r\n").
I fell over this bug report that mentioned that libcurl could wrongly
send more than one complete message at the end of a transfer. Reading
the code confirmed this, so I've added a new multi state to make it not
happen. The mentioned bug report was made by Brad Jorsch but is (oddly
enough) filed in Debian's bug tracker for the "wmweather+" tool.
Bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=593390
In some situations, libtool will change directories and perform
a link step before executing the libtest test app. Since
LD_PRELOAD is in effect for this entire process, the path to the
binary must be absolute so it will be valid no matter in which
directory the app is running.
There's an error in http_negotiation.c where, by mistake, only userpwd
was used even for proxy requests. Ludek provided a patch, but I decided
to write the fix slightly different using his patch as inspiration.
Reported by: Ludek Finstrle
Bug: http://curl.haxx.se/bug/view.cgi?id=3046066
When detecting that the send or recv speed exceeds the threshold, the
multi interface changes state to TOOFAST, and previously there was no
timeout set that would force a recheck; it would rely on the application
to somehow call libcurl anyway. This now sets a timeout for a suitable
future time to check again whether the average transfer speed is below
the threshold by then.
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the closest in time will be the one used as the primary
timeout for the handle and will be used for the splay tree (which sorts
and lists all handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
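In spirit, the per-handle bookkeeping works along these lines (a
simplified standalone C sketch, not curl's actual splay/list code):

  #include <stdlib.h>

  struct timenode {
    long expire_ms;         /* absolute expiry time in ms */
    struct timenode *next;  /* kept sorted, soonest first */
  };

  /* insert keeping the list sorted; a zero timeout cancels them all */
  static struct timenode *expire(struct timenode *head, long when_ms)
  {
    struct timenode **pp = &head;
    struct timenode *node;
    if(!when_ms) {          /* Curl_expire(..., 0) semantics */
      while(head) {
        node = head->next;
        free(head);
        head = node;
      }
      return NULL;
    }
    node = malloc(sizeof(*node));
    if(!node)
      return head;
    node->expire_ms = when_ms;
    while(*pp && (*pp)->expire_ms <= when_ms)
      pp = &(*pp)->next;    /* find the sorted position */
    node->next = *pp;
    *pp = node;
    return head;            /* head->expire_ms is the primary timeout */
  }

When the head entry fires, the next node is promoted to become the new
splay key, so no later-set timeout is lost anymore.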
Instead of looping over all attached easy handles, this now keeps a list
of messages in the multi handle. It allows curl_multi_info_read() to
perform in O(1) time no matter how many easy handles are handled. This
is of importance since this function may be polled very frequently by apps
using the multi interface.
Due to the layout of the singletest function there are situations where
it returns before it clears the environment variables that were
especially set for the single specific test case. That could lead to
subsequent tests getting executed with environment variables sticking
around from a previous test which could lead to badness.
This change makes sure to clear all custom variables that may be lying
around from a previous round, before running a test case.
Reported by: Kamil Dudka
Bug: http://curl.haxx.se/mail/lib-2010-08/0141.html
When the progress callback is called during the TCP connection, an error
return would accidentally not abort the operation as intended but would
instead be counted as a failure to connect to that particular IP and
libcurl would just continue to try the next. I made singleipconnect()
and trynextip() return CURLcode properly.
Added bonus: it corrected the error code for bad --interface usages,
like tested in test 1084 and test 1085.
Reported by: Adam Light
Bug: http://curl.haxx.se/mail/lib-2010-08/0105.html
Added the -br switch to dynamic builds which fixes the issue I saw
with curl's --version output. Added debug info and symfile for debug
builds to linker opts. Added DLL loader for wlink back, but this time
dependent on the wlink version.
Patch posted to the list by malak.jiri AT gmail.com.
The var %MAKEFLAGS is only set in 3 cases: if set as environment
var or as macro definition from commandline, and either with the
-u or -ms switch. Since all these cases are unlikely for the average
user it should be safe to only test if %MAKEFLAGS is defined; this
has the benefit that now all other switches can be used again in
addition to the -u which was formerly not possible.
Curl_llist_init is never used outside of llist.c and thus it should be
static. I also removed the protos for Curl_llist_insert_prev and
Curl_llist_remove_next which are functions we removed from llist.c ages
ago.
Test 563 is enabled now and verifies that the combo FTP type=A URL,
CURLOPT_PORT set and proxy work fine. As a bonus I managed to remove the
somewhat odd FTP check in parse_remote_port() and instead converted it
to a better and more generic 'slash_removed' struct field. Checking the
->protocol field isn't right since when an FTP:// URL is sent over a
HTTP proxy, the protocol is HTTP but the URL was handled by the FTP code
and thus slash_removed is set TRUE for this case.
The struct used for storing the message for a completed transfer is now
no longer allocated separately but is kept within the main struct kept
for each easy handle so that we avoid one malloc (and the subsequent
free).
In some places where the name 'stream' has been used for naming a
function argument that is in fact settable with a setopt() option we now
call that argument 'userdata' to make it more obvious that it is in fact
possible to set by the application.
Suggested by: Jeff Pohlmeyer
When libcurl internally decided to wait for a 100-continue header, there
was no call to the timeout function so there was no timeout callback
called when the multi_socket API was used, and thus applications behaved
either completely wrongly or at least inefficiently, depending on how
they handled the situation. We now set a timeout to get triggered.
Reported by: Ben Darnell
Bug: http://curl.haxx.se/bug/view.cgi?id=3039744
libssh2 1.2.6 and later handle >32bit file sizes properly even on 32bit
architectures and we make sure to use that ability.
Reported by: Mikael Johansson
Bug: http://curl.haxx.se/mail/lib-2010-08/0052.html
I added all OBJECTPOINT curl_easy_setopt() options from 178 to 202. Left
to add: the five FUNCTIONPOINT (callbacks) options added since:
SSH_KEYFUNCTION
INTERLEAVEFUNCTION
CHUNK_BGN_FUNCTION
CHUNK_END_FUNCTION
FNMATCH_FUNCTION
Simply because the TCP connection might already be up, we cannot skip the
proxy connect procedure. We need to be careful not to overload more
meaning to the bits.tcpconnect field like this.
With this fix, SOCKS proxies work again when the multi interface is
used. I believe this regression was added with commit 4b351d018e,
released as 7.20.1.
Left todo: add a test case that verifies this functionality that
prevents us from breaking it again in the future!
Reported by: Robin Cornelius
Bug: http://curl.haxx.se/bug/view.cgi?id=3033966
The --retry logic does retry HTTP when some specific response codes are
returned, but because the -f option sets the CURLOPT_FAILONERROR to
libcurl, the return codes are different for such situations, so the
curl tool failed to consider them for retrying.
Reported by: Mike Power
Bug: http://curl.haxx.se/bug/view.cgi?id=3037362
Commit 496002ea1c (released in 7.20.1) broke FTPS when using the
multi interface and OpenSSL was used. The condition for the non-blocking
connect was incorrect.
Reported by: Georg Lippitsch
Bug: http://curl.haxx.se/mail/lib-2010-07/0270.html
The SOCKET type in Win64 is 64 bits large (and thus so is curl_socket_t
on that platform), and long is only 32 bits. It makes it impossible for
curl_easy_getinfo() to return a socket properly with the
CURLINFO_LASTSOCKET option as for all other operating systems.
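A small sketch of why the old interface cannot work there (the getinfo
call is real, the wrapper function is illustrative):

  #include <curl/curl.h>

  static curl_socket_t last_socket(CURL *handle)
  {
    long sock = -1;
    /* CURLINFO_LASTSOCKET stores into a 'long': 32 bits on Win64 while
       SOCKET/curl_socket_t is 64 bits, so the value may not fit */
    curl_easy_getinfo(handle, CURLINFO_LASTSOCKET, &sock);
    return (curl_socket_t)sock;
  }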
Previously the host name buffer was only used if gethostname() exists,
but since we converted that into a curl-private function, it now always
exists and will be used, so the buffer needs to exist for all
cases/systems.
A shared library tests/libtest/.libs/libhostname.so is preloaded in NTLM
test-cases to override the system implementation of gethostname(). It
makes it possible to test the NTLM authentication for exact match, and
this way test the implementation of MD4 and DES.
If LD_PRELOAD doesn't work, a debug build will also work as debug
builds are now made to prefer a specific environment variable and will
then return that content as host name instead of the actual one.
Kamil wrote the bulk of this, Daniel Stenberg polished it.
lib/Makefile.Watcom works fine already, for src/Makefile.Watcom we
need first to tweak src/Makefile.inc a bit - therefore the handtweaked
list still exists for now.
- make both libcurl and curl makefiles use register calling convention
(previously libcurl had stack calling convention).
- added include paths to the Watcom headers so it's no longer required
to set the environment vars for this.
- added -wcd=201 to suppress the compiler warning about unreachable code.
- use macros for all tools, and removed dependency on GNU tools like rm.
- make ipv6 and debug builds controllable via env vars and so make them
optional instead of default.
- commented WINLDAPAPI and WINBERAPI since they broke with OW 1.8, and
it seems they're not needed (anymore?).
- added rule for hugehelp.c.cvs so that it will be created when it does
not already exist - this is required for building from a release tarball
since there we have no hugehelp.c.cvs, thus compilation broke.
- removed C_ARG creation from lib/Makefile.Watcom and use CFLAGS
directly as done too in src/Makefile.Watcom - this has the benefit
that we will see all active cflags and defines during compile.
- added LINK-ARG to src/Makefile.Watcom in order to better control
linker input.
- a couple of other minor makefile tweaks here and there ...
- added largefile support for Watcom builds to config-win32.h. Not yet
tested if it really works, but should since Win32 supports it.
- added loaddll stuff to speed up builds if supported.
The curl-config now features a --built-shared command line option that
will output 'yes' or 'no' depending on whether the build process was
asked to build shared library/libraries or not.
It is primarily made to offer more details to the test suite to know
what kind of stunts it can expect to work.
Win64's 32 bit long but 64 bit size_t caused a warning that we avoid
with a typecast. A small whitespace indent fix was also applied.
Reported by: Adam Light
This passes -Werror to gcc when building curl and libcurl,
allowing easy detection of compile warnings.
Signed-off-by: Ben Greear <greearb@candelatech.com>
... since FTP is using it as well, and potentially other protocols!
Also, an #endif CURL_DISABLE_HTTP was incorrectly marked, as it seems to
end the proxy block instead.
The FTP implementation was missing a timestamp reset point, making the
waiting for responses after sending a post-transfer "QUOTE" command not
work as intended. This bug was introduced in 7.20.0.
The --remote-header-name option for the command-line tool assumes that
everything beyond the filename= field is part of the filename, but that
might not always be the case, for example:
Content-Disposition: attachment; filename=file.txt; modification-date=...
This fix chops the filename off at the next semicolon, if there is one.
When getting multiple URLs, curl didn't properly reset the byte counter
after a successful transfer so if the subsequent transfer failed it
would wrongly use the previous byte counter and behave badly (segfault)
because of that. The code assumes that the byte counter and the 'stream'
pointer are well in sync.
Reported by: Jon Sargeant
Bug: http://curl.haxx.se/bug/view.cgi?id=3028241
curl_multi_perform has two phases: run through every easy handle calling
multi_runsingle, and remove expired timers (timer removal).
If a small timer (e.g. 1-10ms) is set during multi_runsingle, then it's
possible that the timer has already expired by the time the timer removal runs. The
timer which was just added is then removed. This will potentially cause
the timer list to be empty and cause the next call to curl_multi_timeout
to return -1. Ideally, curl_multi_timeout should return 0 in this case.
One way to fix this is to move the struct timeval now = Curl_tvnow(); to
the top of curl_multi_perform. The change does that.
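In outline (curl-internal names, heavily simplified), the change amounts
to sampling "now" once, before any handle runs:

  CURLMcode curl_multi_perform(CURLM *multi, int *running_handles)
  {
    struct timeval now = Curl_tvnow(); /* taken BEFORE running handles */

    /* phase 1: multi_runsingle() for every easy handle; this may add
       new, very short timers */

    /* phase 2: remove only timers that expired before 'now', so a timer
       added during phase 1 can no longer be swept away here */

    return CURLM_OK;
  }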
configure checks for grep, egrep, sed and ar and sets the variables GREP,
EGREP, SED and AR accordingly. We now let already-set variables override
the internal choices to let users make decisions when they know the
right choice already. This is a regression as our configure script used
to allow this back before commit 0b57c475 (up to 7.18.2).
Reported by: "kdekker"
Bug: http://curl.haxx.se/bug/view.cgi?id=3028318
Since uploading from stdin is very likely to not work with anyauth and
its multi-phase probing for what authentication to actually use, alert
the user about it. Multi-phase negotiate almost certainly will involve
sending data and thus libcurl will need to rewind the stream to send
again, and it cannot do that with stdin.
As mentioned in bug report #2956968, the HTTP code wouldn't send the
first empty chunk during the auth negotiation phase of the HTTP request
sending, so the server would wait for data to come and libcurl would
wait for data to arrive... I've made the code not enable chunked
encoding until the auth negotiation is done and thus this scenario
doesn't occur anymore.
Reported by: Sidney San Martín
Bug: http://curl.haxx.se/bug/view.cgi?id=2956968
I think the [REMARK] and commented function calls cluttered the code a
bit too much and made the generated code ugly to read. Now we instead
track the remarks separately and just list them at the end of the
generated code, more as additional information.
And additionally, don't show function or object pointers' actual values
since they make no sense to anyone. Show 'functionpointer' and
'objectpointer' instead.
In the generated code --libcurl makes, all calls to curl_easy_setopt()
that use *_LARGE options now have the value typecasted to curl_off_t, so
that it works correctly for 32bit systems with 64bit curl_off_t type.
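For example, a generated line now reads like this (option and value here
purely illustrative):

  curl_easy_setopt(hnd, CURLOPT_RESUME_FROM_LARGE, (curl_off_t)1024);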
When curl_multi_remove_handle() is called and an easy handle is returned
to the connection cache held in the multi handle, then we cannot allow
CURLINFO_LASTSOCKET to extract it since that will more or less encourage
that the user uses the socket while it can get used by libcurl again.
Without this fix, we'd get a segfault in Curl_getconnectinfo() trying to
dereference the NULL pointer in 'data->state.connc'.
Bug: http://curl.haxx.se/bug/view.cgi?id=3023840
When configured with '--without-ssl --with-nss', NTLM authentication
now uses NSS crypto library for MD5 and DES. For MD4 we have a local
implementation in that case. More details are available at
https://bugzilla.redhat.com/603783
In order to get it working, curl_global_init() must be called with
CURL_GLOBAL_SSL or CURL_GLOBAL_ALL. That's necessary because NSS needs
to be initialized globally and we do so only when the NSS library is
actually required by the protocol. The mentioned call of curl_global_init()
is responsible for creating the initialization mutex.
The NSS initialization scenario was also slightly changed, in
particular the loading of the NSS PEM module. It used to always be loaded
right after the NSS library was initialized. Now the library is
initialized as soon as any SSL or NTLM is required, while the PEM module
is prevented from being loaded until the SSL is actually required.
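A minimal usage sketch for an application that relies on the NSS-backed
NTLM:

  #include <curl/curl.h>

  int main(void)
  {
    /* CURL_GLOBAL_ALL includes CURL_GLOBAL_SSL, which creates the
       initialization mutex that the NSS code needs */
    curl_global_init(CURL_GLOBAL_ALL);
    /* ... set up handles and perform transfers ... */
    curl_global_cleanup();
    return 0;
  }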
curl didn't properly handle escaping characters in a URL with the use of
backslash. It made an attempt, but that failed as reported in bug
3022551. The described example was using the URL
"http://example.com?{AB,C\,D}".
I've now removed the special-handling of letters following the backslash
and I also removed the bad extra check that triggered this particular
bug.
Bug: http://curl.haxx.se/bug/view.cgi?id=3022551
Reported by: Jon Sargeant
When a hostname resolves to multiple IP addresses and the first one
tried doesn't work, the socket for the second attempt may get dropped on
the floor, causing the request to eventually time out. The issue is that
when using kqueue (as on mac and bsd platforms) instead of select, the
kernel removes the first fd from kqueue when it is closed (in trynextip,
connect.c:503). Trynextip() then goes on to open a new socket, which
gets assigned the same number as the one it just closed. Later in
multi.c, socket_cb is not called because the fd is already in
multi->sockhash, so the new socket is never added to kqueue.
The correct fix is to ensure that socket_cb is called to remove the fd
when trynextip() closes the socket, and again to re-add it after
singleipconnect(). I'm not sure how to cleanly do that, but the attached
patch works around the problem in an admittedly kludgy way by delaying
the close to ensure that the newly-opened socket gets a different fd.
Daniel's added comment: I didn't spot a way to easily do a nicer fix so
I've proceeded with Ben's patch.
Bug: http://curl.haxx.se/bug/view.cgi?id=3017819
Patch by: Ben Darnell
--decorate=full is needed with my git 1.7.1 to get the necessary
output so that the previous edit would work to extract the
Version stuff.
... but I had to edit how the refs/tags was extracted since it
had a little flaw that made it miss the 7.20.1 output.
Finally, I changed it so that Version is output even more similarly
to how CHANGES does it.
$ git log --pretty=fuller --no-color --date=short | ./log2changes.pl
Of course, limiting the log output with a range like with
"[tag]..HEAD" appended can be very useful too.
For example the libssh2 based functions return other negative
values than -1 to signal errors and it is important that we catch
them properly. Right before this, various failures from libssh2
were treated as negative download amounts which caused havoc.
My additional call to Curl_pgrsUpdate() would sometimes get
called even though there's no connection (left) so a NULL pointer
would get passed, causing a segfault.
1) no need to call the progress function twice when in the
CURLM_STATE_TOOFAST state.
2) Make sure that the progress callback's return code is
acknowledged when used
As long as no error is reported, the progress function can get
called. This may be a little TOO often so we should keep an eye
on this and possibly make this conditional somehow.
Older unixes want an 'int' instead of 'size_t' as the 3rd
argument, so before this change it would cause warnings such as:
There is an implicit conversion from "unsigned long" to "int";
rounding, sign extension, or loss of accuracy may result.
Was seeing spurious SSL connection aborts using libcurl and
OpenSSL. I tracked it down to uncleared error state on the
OpenSSL error stack - patch attached deals with that.
Rough idea of problem:
Code that uses libcurl calls some library that uses OpenSSL but
doesn't clear the OpenSSL error stack after an error.
ssluse.c calls SSL_read which eventually gets an EWOULDBLOCK from
the OS. Returns -1 to indicate an error
ssluse.c calls SSL_get_error. First thing, SSL_get_error calls
ERR_get_error to check the OpenSSL error stack, finds an old
error and returns SSL_ERROR_SSL instead of SSL_ERROR_WANT_READ or
SSL_ERROR_WANT_WRITE.
ssluse.c returns an error and aborts the connection
Solution:
Clear the openssl error stack before calling SSL_* operation if
we're going to call SSL_get_error afterwards.
Notes:
This is much more likely to happen with multi because it's easier
to intersperse other calls to the OpenSSL library in the same
thread.
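A sketch of the resulting pattern (wrapper name made up, the OpenSSL
calls are the real ones):

  #include <openssl/ssl.h>
  #include <openssl/err.h>

  static int ssl_read_checked(SSL *ssl, void *buf, int len, int *err)
  {
    int rc;
    ERR_clear_error();        /* drain whatever older code left behind */
    rc = SSL_read(ssl, buf, len);
    if(rc <= 0)               /* now SSL_get_error() judges only this call */
      *err = SSL_get_error(ssl, rc);
    return rc;
  }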
Enable OpenLDAP support for cygwin builds. This support was disabled back
in 2008 due to incompatibilities between OpenSSL and OpenLDAP headers.
cygwin's OpenSSL 0.9.8l and OpenLDAP 2.3.43 versions on cygwin 1.5.25
allow building an OpenLDAP enabled libcurl supporting back to Windows 95.
Remove non-functional CURL_LDAP_HYBRID code and references.
Jason McDonald posted bug report #3006786 when he found that the
SFTP code didn't timeout properly in several places in the code
even if a timeout was set properly.
Based on his suggested patch, I wrote a different implementation
that I think addressed the issue better and also uses the connect
timeout for the initial part of the SSH/SFTP done during the
"protocol connect" phase.
(http://curl.haxx.se/bug/view.cgi?id=3006786)
Igor Novoseltsev reported a problem with the multi socket API and
using timeouts and timers. It boiled down to a problem with
libcurl's use of GetTickCount() internally to figure out the
current time, while Igor's own application code used another
function call.
It made his app call the socket API timeout function a bit
_before_ libcurl would consider the timeout to trigger, and that
could easily lead to timeouts or stalls in the app. It seems
GetTickCount() in general often has no better resolution than
16ms and switching to the alternative function
QueryPerformanceCounter has its share of problems:
http://www.virtualdub.org/blog/pivot/entry.php?id=106
We address this problem by simply having libcurl treat timers
that have already occurred or will occur within 40ms as subject for
treatment. I'm confident that there are other implementations and
operating systems with similarly inaccurate timer functions, so it
makes sense to apply this generically, and I don't believe we
sacrifice much by adding a 40ms inaccuracy on these timeouts.
makes the LDAP code much cleaner, nicer and in general a
better libcurl citizen. If a new enough OpenLDAP version is
detected, the new and shiny lib/openldap.c code is then used
instead of the old cruft
Code by Howard, minor cleanups by Daniel.
bool in curl internals is unsigned char and should not be used
to receive return value from functions returning int - this fails
when using IBM VisualAge and Tru64 compilers.
Eric Mertens posted bug #3003705: when we made TFTP use the
correct timeout option when sent to the server (fixed May 18th
2010) it became obvious that libcurl used invalid timeout values
(300 by default while the RFC allows nothing above 255). Of course,
as TFTP has worked thus far without being able to set the timeout at
all, just removing the setting wouldn't make any difference in
behavior. I decided to still keep
it (but fix the problem) as it now actually allows for easier
(future) customization of the timeout.
(http://curl.haxx.se/bug/view.cgi?id=3003705)
In a normal expression, doing [unsigned short] + 1 will not wrap
at 16 bits, so the comparisons and outputs were done wrong. I
added a macro to make sure it gets done right.
Douglas Kilpatrick filed bug report #3004787 about it:
http://curl.haxx.se/bug/view.cgi?id=3004787
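A sketch of such a macro (the actual one may differ in detail): due to
integer promotion, 'block + 1' is computed as an int and never wraps at
16 bits on its own, so the truncation is forced explicitly:

  #define NEXT_BLOCKNUM(x) ((unsigned short)(((x) + 1) & 0xffff))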
By undefing a bunch of E* defines that VC10 has started to define
but that we redefine internally to their WSA* alternatives when
building for Windows.
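The pattern, sketched for one of the affected names (EWOULDBLOCK picked
as the example):

  #ifdef EWOULDBLOCK   /* VC10's errno.h may have defined it already */
  #undef EWOULDBLOCK
  #endif
  #define EWOULDBLOCK WSAEWOULDBLOCK  /* keep the WSA* mapping on Windows */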
curl_easy_getinfo() called with a pointer to long instead of double
would sigbus on RISC processors (e.g. MIPS) due to wrong alignment
of pointer address.
Eric Mertens posted bug report #3003005 pointing out that the
libcurl TFTP code was not sending the timeout option properly to
the server, and suggested a fix.
(http://curl.haxx.se/bug/view.cgi?id=3003005)
John-Mark Bell filed bug #3000052 that identified a problem (with
an associated patch) with the OpenSSL handshake state machine
when the multi interface is used:
Performing an https request using a curl multi handle and using
select or epoll to wait for events results in a hang. It appears
that the cause is the fix for bug #2958179, which makes
ossl_connect_common unconditionally return from the step 2 loop
when fetching from a multi handle.
When ossl_connect_step2 has completed, it updates
connssl->connecting_state to ssl_connect_3. ossl_connect_common
will then return to the caller, as a multi handle is in
use. Eventually, the client code will call curl_multi_fdset to
obtain an updated fdset to select or epoll on. For https
requests, curl_multi_fdset will cause https_getsock to be called.
https_getsock will only return a socket handle if the
connecting_state is ssl_connect_2_reading or
ssl_connect_2_writing. Therefore, the client will never obtain a
valid fdset, and thus not drive the multi handle, resulting in a
hang.
(http://curl.haxx.se/bug/view.cgi?id=3000052)
Sebastian V reported bug #3000056 identifying a problem with
redirect following. It showed that when curl followed redirects
it didn't properly ignore the response body of the 30X response
if that response was using compressed Content-Encoding!
(http://curl.haxx.se/bug/view.cgi?id=3000056)
"The BSD version of PolarSSL was made for migratory purposes only and is not
maintained. The GPL version of PolarSSL is actually the only actively
developed version, so I would be very reluctant to use the BSD version." /
Paul Bakker, PolarSSL hacker.
Signed-off-by: Hoi-Ho Chan <hoiho.chan@gmail.com>
FTP(S) use two connections that can be set to different recv and
send functions independently, so by introducing recv+send pairs
in the same manner we already have sockets/connections we can
work with FTPS fine.
This commit fixes the FTPS regression introduced in change d64bd82.
Kalle Vahlman's patch applied a while ago broke how the findtool
function searches for tools, as it would always check if "$file"
was present first, which thus made the bad assumption that a file
in the current directory would be a match.
I noticed when it found 'libtool' in the current directory but
libtoolize is not there, which confused the script.
Dirk Manske reported a regression. When connecting with the multi
interface, there were situations where libcurl wouldn't store
connect time correctly as it used to (and is documented to) do.
Using his fine sample program we could repeat it, and I wrote up
test case 573 using that code. The problem does not easily show
itself using the local test suite though.
The fix, also as suggested by Dirk, is a bit on the ugly side as
it adds yet another call to Curl_verboseconnect() and setting the
TIMER_CONNECT time. That situation is subject for some closer
inspection in the future.
Howard Chu brought the bulk work of this patch that properly
moves out the sending and receiving of data to the parts of the
code that are properly responsible for the various ways of doing
so.
Daniel Stenberg assisted with polishing a few bits and fixed some
minor flaws in the original patch.
Another upside of this patch is that we now abuse CURLcodes less
with the "magic" -1 return codes and instead use CURLE_AGAIN more
consistently.
This is Hoi-Ho Chan's patch with some minor fixes by me. There
are some potential issues in this, but none worse than we can
sort out on the list and over time.
The main change is to allow input from user-specified methods,
when they are specified with CURLOPT_READFUNCTION.
All calls to fflush(stdout) in telnet.c were removed, which makes
using 'curl telnet://foo.com' painful since prompts and other data
are not always returned to the user promptly. Use
'curl --no-buffer telnet://foo.com' instead. In general,
the user should have their CURLOPT_WRITEFUNCTION do a fflush
for interactive use.
Also fix assumption that reading from stdin never returns < 0.
Old code could crash in that case.
Call progress functions in telnet main loop.
Signed-off-by: Ben Greear <greearb@candelatech.com>
Make sure we don't call memcpy() if the argument is NULL even
though we also passed a zero length then, as the clang analyzer
whined and we want to limit warnings (even false positives) when
they're this easy to fix.
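The equivalent guard, as a self-contained sketch (the wrapper name is
made up):

  #include <string.h>

  static void copy_maybe(void *dst, const void *src, size_t len)
  {
    /* memcpy() with a NULL source is formally undefined even for a zero
       length, so skip the call; this also silences the analyzer */
    if(len && src)
      memcpy(dst, src, len);
  }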
The change of (char) to (unsigned char) will fix long user names
and passwords on systems that have the char type signed by
default.
The feature that uses the file name given in a
Content-disposition: header didn't properly skip trailing
carriage returns and linefeed characters from the end of the file
name when it was given without quotes.
The recent overhaul of the SSL recv function made this treat a
zero returned from gnutls_record_recv() as an error, and this
caused our HTTPS test cases to fail. We leave it to upper layer
code to detect if an EOF is a problem or not.
On some ancient distributions such as RHEL-3, <gssapi/gssapi_krb5.h> needs
to be processed after <gssapi/gssapi.h>, but does not include it itself.
This patch checks for <gssapi/gssapi.h> first and then includes it
in the test for <gssapi/gssapi_krb5.h>, resolving the problem.
Without the patch, <gssapi/gssapi_krb5.h> is "present but cannot be
compiled".
This code would previously use dns_entry->addr->ai_canonname
instead of the given host name, which caused us grief and
problems since not all our resolver options do the reverse lookup
and I would also guess that it caused problems with KRB5/GSS with
virtual name-based hosts. Now the host name from the URL is used.
As reported in bug report #2987196, the code for ipv6 already did
the setting of this bit correctly so we copied that logic into
the Curl_ipv4_resolve_r() function as well. KRB code is the only
code we know that might need the canonical name so only resolve
it for such requests!
curl_multi_timeout(3) is simply the wrong function to use
if you're using the multi_socket API and this document now
states this pretty clearly to help guiding users.
I've done this blindly, and the last piece that works with ares
should possibly be done differently now that c-ares isn't a
subtree within the curl tree anymore...
Prefixing the FTP quote commands with an asterisk really only
worked for the postquote actions. This is now fixed and test case
227 has been extended to verify.
Matt Wixson found and fixed a bug in the SCP/SFTP area where the
code treated a 0 return code from libssh2 to be the same as
EAGAIN while in reality it isn't. The problem caused a hang in
SFTP transfers from a MessageWay server.
strlen() returns size_t, but ssh libraries are wanting 'unsigned int'. Add
explicit casts and use _ex versions of the ssh library calls.
Signed-off-by: Ben Greear <greearb@candelatech.com>
If you pass a URL to pop3 that does not contain a message ID as
part of the URL, it will currently ask for 'INBOX' which just
causes the pop3 server to return an error.
The change makes libcurl treat an empty message ID as a request
for LIST (list of pop3 message IDs). User's code could then
parse this and download individual messages as desired.
My first instinct was to run the test script within the checked out
repository. This small change to the script allows that to work as
expected.
Signed-off-by: Ben Greear <greearb@candelatech.com>
Ben Greear brought a patch that from now on allows all protocols
to specify name and user within the URL, in the same manner HTTP
and FTP have been allowed to in the past - although far from all
of the libcurl supported protocols actually have that feature in
their URL definition spec.
The backtick command which extracts 'git log' lines come with a
newline, so chomp the newline before calling logit(), as the logit
function adds a newline by itself.
'git log --oneline' is a relatively recent Git function. It is
documented to be the same as 'git log --pretty=oneline --abbrev-commit',
so use that instead. It works all the way back to Git 1.5.0.
since c-ares is no longer embedded, we must not touch such files
anymore
we show the last 5 git commits if git is proven to be in use, to help
us see exactly what's being tested
Bob Richmond: There's an annoying situation where libcurl will
read new HTTP response data from a socket, then check whether the
timeout has passed, if one is set. If the last packet received constitutes
the end of the response body, libcurl still treats it as a
timeout condition and reports a message like:
"Operation timed out after 3000 milliseconds with 876 out of 876
bytes received"
It should only be a timeout if the timer lapsed and we DIDN'T
receive the end of the response body yet.
This commit fixes the cmake build of curl, and cleans up the
cmake code a little. It removes some commented out code and
some trailing whitespace. To get curl to build, the binary
tree's include/curl directory needed to be added to the include
path. Also, SIZEOF_SHORT needed to be added. A check for the
lack of defines of SIZEOF_* for warnless.c was added.
Christopher Conroy fixed a problem with RTSP and GET_PARAMETER
reported to us by Massimo Callegari. There's a new test case 572
that verifies this now.
In order to get back on track, I've removed all the plans for
stuff I had in the queue. I will instead focus on fixing bugs and
relying on that people who truly want things added will come back
on the mailing list and nag and provide patches.
7.20.1 should be possible to release in April 2010
c-ares is now hosted entirely separate from the curl project
see http://c-ares.haxx.se/ for all details concerning c-ares,
its source repository and more.
Kenny To filed the bug report #2963679 with patch to fix a
problem he experienced with doing multi interface HTTP POST over
a proxy using PROXYTUNNEL. He found a case where it would connect
fine but bits.tcpconnect was not set correctly so libcurl didn't
work properly.
(http://curl.haxx.se/bug/view.cgi?id=2963679)
Akos Pasztory filed debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that returns chunked-encoded
_and_ with a Content-Length and libcurl failed to properly ignore
the latter information.
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface and
did first one FTP transfer and after completion it used a second
easy handle and did another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive it must keep
an easy handle around internally - only for the purpose of having
an easy handle when it later disconnects it. The code assumed
that when the easy handle was removed and an internal reference
was made, that version could be killed later on when a new easy
handle came using the same connection. This was wrong as Hauke's
example showed that the removed handle wasn't killed for real
until later. This caused a double close attempt => segfault.
Looking at the code of Curl_resolv_timeout() in hostip.c, I think
that in case of a timeout, the signal handler for SIGALRM never
gets removed. I think that in my case it gets executed at some
point later on when execution has long left Curl_resolv_timeout()
or even the cURL library.
The code that is jumped to with siglongjmp() simply sets the
error message to "name lookup timed out" and then returns with
CURLRESOLV_ERROR. I guess that instead of simply returning
without cleaning up, the code should have a goto that jumps to
the spot right after the call to Curl_resolv().
Error codes were not properly returned to the main curl code (and on to apps
using libcurl).
tftp was crapping out when tsize == 0 on upload, but I see no reason to fail
to upload just because the remote file is zero-length. Ignore tsize option on
upload.
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially because an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the easy
handle). The previous patch only cleaned up the first one. The new fix now
removes the easy handle from all connections, not just the first one.
ran into some issues with the GSSAPI tests in configure.ac. The tests first
try to determine the include dirs and libs and set CPPFLAGS and LIBS
accordingly. It then checks for the headers and finally sets LIBS a second
time, causing the libs to be included twice. The first setting of LIBS seems
redundant and should be left out, since the first part is otherwise just
about finding headers.
My second issue is that 'krb5-config --libs gssapi' on Darwin is less than
useless and returns junk that, while it happens to work with gcc, causes
clang to choke. For example, --libs returns $CFLAGS along with the libs,
which is really broken. Simply setting 'LIBS="$LIBS -lgssapi_krb5
-lresolv"' on Darwin is sufficient.
makes sure that when using sub-second timeouts, there's no final bad 1000ms
wait. Previously, a sub-second timeout would often make the elapsed time end
up being the timeout rounded up to the nearest second (e.g. 1s for a 200ms
timeout).
the global timeout if set. Also, as was reported in the bug report #2956437
by Ryan Chan, the time stamp to use as basis for the per command timeout was
not set properly in the DONE phase for FTP (and not for SMTP) so I fixed
that just now. This was a regression compared to 7.19.7 due to the
conversion of FTP code over to the generic pingpong concepts.
http://curl.haxx.se/bug/view.cgi?id=2956437
(http://curl.haxx.se/bug/view.cgi?id=2958074) that curl on Windows with
option --trace-time did not use local time when timestamping trace lines.
This could also happen on other systems depending on the time source.
- SMTP falls back to RFC821 HELO when EHLO fails (and SSL is not required).
- Use of true local host name (i.e.: via gethostname()) when available, as default argument to SMTP HELO/EHLO.
- Test case 804 for HELO fallback.
properly in angle brackets. Recipients provided with CURLOPT_MAIL_RCPT now
get angle bracket wrapping automatically by libcurl unless the recipient
starts with an angle bracket as then the app is assumed to deal with that
properly on its own.
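A usage sketch (addresses made up); the first recipient gets the angle
brackets added by libcurl, the second is passed through untouched:

  #include <curl/curl.h>

  static void set_recipients(CURL *handle)
  {
    struct curl_slist *rcpt = NULL;
    rcpt = curl_slist_append(rcpt, "person@example.com");   /* wrapped */
    rcpt = curl_slist_append(rcpt, "<other@example.com>");  /* left as-is */
    curl_easy_setopt(handle, CURLOPT_MAIL_RCPT, rcpt);
  }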
full DATA has been sent, and I modified the test SMTP server to also send
that response. As usual, the DONE operation that is made after a completed
transfer is still not doable in a non-blocking way so this waiting for 250
is unfortunately done in a blocking way.
in the same RCPT TO line, when they should be sent in separate single
commands. I updated test case 802 to verify this.
- I also fixed a bad use of my_setopt_str() of CURLOPT_MAIL_RCPT in the curl
tool which made it try to output it as string for the --libcurl feature
which could lead to crashes.
VMS builder bad behavior when used in a batch job.
Various ".LIS" and ".MAP" files created without being requested
by a "LIST" command-line option, and in the wrong place, too.
Some minor typographical changes.
to automatically uncompress it with the CURLOPT_ENCODING option, libcurl
could wrongly provide the callback with more data than the documented
maximum amount. An application could thus get tricked into badness if the
maximum limit was trusted to be enforced by libcurl itself (as it is
documented).
This is further detailed and explained in the libcurl security advisory
20100209 at
http://curl.haxx.se/docs/adv_20100209.html
simply check for CURLM_CALL_MULTI_PERFORM internally. This has the added
benefit that this goes in line with my long-term wishes to get rid of the
CURLM_CALL_MULTI_PERFORM altogether from the public API.
from hostip.h to setup.h in order to allow proper inclusion in any file.
This represents no functional change at all in which resolver is used,
everything still works as usual, internally and externally there is no
difference in behavior.
HTTP Cookie: header _needs_ to be sorted on the path length in the cases
where two cookies using the same name are set more than once using
(overlapping) paths. Realizing this, identically named cookies must be
sorted correctly. But detecting only identically named cookies and taking
care of them individually is harder than just blindly and unconditionally
sorting all cookies based on their path lengths. All major browsers already do
this, so this makes our behavior one step closer to them in the cookie area.
Test case 8 was the only one that broke due to this change and I updated it
accordingly.
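The ordering rule can be sketched as a comparator (not curl's actual
cookie code):

  #include <string.h>

  /* longer paths must be emitted first in the Cookie: header */
  static int cookie_path_cmp(const char *path_a, const char *path_b)
  {
    size_t la = path_a ? strlen(path_a) : 0;
    size_t lb = path_b ? strlen(path_b) : 0;
    if(la != lb)
      return (la < lb) ? 1 : -1;  /* longer path sorts earlier */
    return 0;
  }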
again when downloading files over FTP using ASCII and it turns out that the
final size of the file is not the same as the initial size the server
reported. This is very common since servers don't take the newline
conversions into account.
being properly detected under certain circumstances. It had been caused by
strange behavior of pkg-config when handling PKG_CONFIG_LIBDIR; pkg-config
distinguishes between an empty and a non-existent environment variable in
that case.
transfers: curl_multi_fdset() would return -1 and not set any file
descriptors several times during a transfer of a single file. It turned out
to be due to two different flaws now fixed. Gil's excellent recipe helped me
nail this.
much as possible in one go, as long as it doesn't block and hasn't reached the
end of the state machine.
This avoids spurious -1 returns from curl_multi_fdset() simply because
previously it would return from this function without anything in EWOULDBLOCK
and thus basically it wasn't actually waiting for anything!!
state, we return CURLM_CALL_MULTI_PERFORM unconditionally then so that we
can act faster like in the case the protocol-specific connect doesn't block
on anything and we can just pursue the next action immediately. It also
then avoids a case where curl_multi_fdset() would return -1.
present in the tests/data/Makefile.am and outputs a notice message on the
screen if not. Each test file has to be included in that Makefile.am to get
included in release archives and forgetting to add files there is a common
mistake. This is an attempt to make it harder to forget.
ossl_connect_step3() increments an SSL session handle reference counter on
each call. When sessions are re-used this reference counter may be
incremented many times, but it will be decremented only once when done (by
Curl_ossl_session_free()); and the internal OpenSSL data will not be freed
if this reference count remains positive. When a session is re-used the
reference counter should be corrected by explicitly calling
SSL_SESSION_free() after each consecutive SSL_get1_session() to avoid
introducing a memory leak.
(http://curl.haxx.se/bug/view.cgi?id=2926284)
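A sketch of the corrected re-use path (helper name made up, the OpenSSL
calls are real):

  #include <openssl/ssl.h>

  static void remember_session(SSL *ssl, SSL_SESSION **cached)
  {
    SSL_SESSION *sess = SSL_get1_session(ssl); /* bumps the refcount */
    if(*cached)
      SSL_SESSION_free(sess);  /* re-used session: drop the extra ref */
    else
      *cached = sess;          /* first use: keep the reference */
  }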
versions --ftp-ssl and --ftp-ssl-reqd as these options are now used to
control SSL/TLS for IMAP, POP3 and SMTP as well in addition to FTP. The old
option names still work but the new ones are the preferred ones
(listed and documented).
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
sequences in uploaded data. The test server doesn't "decode" escaped dot-lines
but instead test cases must be written to take them into account. Added test
case 803 to verify dot-escaping.
replaced with equivalent /RTCsu for Visual Studio 2003 and newer versions.
- Compiler option /GX is now replaced with equivalent /EHsc for all versions.
detects and uses proxies based on the environment variables. If the proxy
was given as an explicit option it worked, but due to the setup order
mistake proxies would not be used properly for a few protocols when picked up
from '[protocol]_proxy'. Obviously this broke after 7.19.4. I now also added
test case 1106 that verifies this functionality.
(http://curl.haxx.se/bug/view.cgi?id=2913886)
on FTP errors in the transient 5xx range. Transient FTP errors are in the
4xx range. The code itself only retried on 5xx errors that occurred _at login_.
Now the retry code retries on all FTP transfer failures that ended with a
4xx response.
(http://curl.haxx.se/bug/view.cgi?id=2911279)
accessing already freed memory and thus crash when using HTTPS (with
OpenSSL), multi interface and the CURLOPT_DEBUGFUNCTION and a certain order
of cleaning things up. I fixed it.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
instead of being repeated several times. This also includes Authenticate: and
Proxy-Authenticate: headers, and while this hardly ever happens in real life
it will confuse libcurl, which does not properly support it for all headers -
like those Authenticate headers.
curl_easy_setopt with CURLOPT_HTTPHEADER, the library should set
data->state.expect100header accordingly - the current code (in 7.19.7 at
least) doesn't handle this properly. Martin Storsjo provided the fix!
given file, the serverpid sub is renamed to pidfromfile. In addition it is
enhanced to make sure that it always returns zero unless a positive
numerical value is found.
- To better reflect that only process existence is actually checked,
checkserver sub is renamed to processexists. In addition it is enhanced
making it remove the given pid file when the extracted pid is no longer
alive.
rework patch that now integrates TFTP properly into libcurl so that it can
be used non-blocking with the multi interface and more. BLKSIZE also works.
The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from
the command line.
meter/callback during FTP command/response sequences. It turned out it was
really lame before and now the progress meter SHOULD get called at least
once per second.
though it failed to write a very small download to disk (done in a single
fwrite call). It turned out to be because fwrite() returned success, but
there was insufficient error-checking for the fclose() call, which tricked
curl into believing things were fine.
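The missing check, as a minimal sketch:

  #include <stdio.h>

  static int close_output(FILE *f)
  {
    /* buffered data is flushed at close time, so a write error (e.g.
       full disk) can surface only here - the return code matters */
    return fclose(f) ? 1 : 0;
  }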
ares_addr6ttl in order to prevent name space pollution, along with
necessary changes to code base and man pages. This change does not break
ABI, there is no need to recompile existing applications. But existing
applications using these structs with the old name will need source code
adjustments when recompiled using c-ares 1.6.1.
for use by non-configure systems. As intended, configure would overwrite the
distributed one when doing in-tree builds. But VPATH builds would end up
having two curlbuild.h files, one in the source tree and another in the
build tree.
CURLOPT_HTTPPROXYTUNNEL enabled over a proxy, a subsequent request using the
same proxy with the tunnel option disabled would still wrongly re-use that
previous connection and the outcome would only be badness.
to return a linked list of results. These were also modified to internally
use the ares_data memory struct and as such its result must be free'ed with
ares_free_data().
end up with entries that wouldn't time-out:
1. Set up a first web server that redirects (307) to a http://server:port
that's down
2. Have curl connect to the first web server using curl multi
After the curl_easy_cleanup call, there will be curl dns entries hanging
around with in_use != 0.
(http://curl.haxx.se/bug/view.cgi?id=2891591)
its pkg-config file. So -Wl stuff ended up in the .pc file, which is really
bad, and breaks if there are multiple -Wl in our LDFLAGS (which are in
PTXdist). bug #2893592 (http://curl.haxx.se/bug/view.cgi?id=2893592)
implement the function even when h_errno is not a macro.
The h_errno macro test is now only done on systems for which there
is no hard-coded knowledge about getaddrinfo's thread-safeness.
--with-nss is set but not "yes".
I think we can still improve that to check for pkg-config in that path etc,
but at least this patch brings back the same functionality we had before.
the client certificate. It also disables the key name test as some engines
can select a private key/cert automatically (when there is only one key
and/or certificate on the hardware device used by the engine).
- Constantine Sapuntzakis reported that Darwin 6.0 a.k.a. Mac OS X 10.2
and newer have a threadsafe getaddrinfo.
- Fix Dragonfly BSD triplet detection.
- In case the hard-coded knowledge says that getaddrinfo is threadsafe,
an additional check is done to verify that h_errno is also defined.
If h_errno isn't defined, we finally assume that it isn't threadsafe.
Jamie Lokier provided the inspiration for this extra check.
No need for a separate variable ndns.
The memory leak detection will detect code that fails to release a dns reference.
The DEBUGASSERT will detect code that releases too many references.
closed NSPR descriptor. The issue was hard to find, reported several times
before and always closed unresolved. More info at the RH bug:
https://bugzilla.redhat.com/534176
(http://curl.haxx.se/bug/view.cgi?id=2891595) which identified how an entry
in the DNS cache would linger too long if the request that added it was in
use that long. He also provided the patch that now makes libcurl capable of
still doing a request while the DNS hash entry may get timed out.
used during the FTP connection phase (after the actual TCP connect), while
it of course should be. I also made the speed check get called correctly so
that really slow servers will trigger that properly too.
wrong percentage for small files, most notably for <1000 bytes, and could
easily end up showing more than 100% at the end. It also didn't show any
percentage, transfer size or estimated transfer times when transferring
less than 100 bytes.
moved to 7.19.8. I removed the bugs already in KNOWN_BUGS (but they should
of course still get fixed).
Added three recent bugs. 7.19.8 is targeted to get shipped in January 2010
c-ares with --enable-curldebug uses memdebug.h from libcurl's lib subdirectory.
memdebug.h needs access to libcurl's setup.h from libcurl's lib subdirectory
and also needs access to libcurl's generated curl_config.h
--enable-symbol-hiding and --disable-symbol-hiding as well as related
macro names and some internal variables used for them.
Related configuration file preprocessor symbols named to
CARES_SYMBOL_HIDING and CARES_SYMBOL_SCOPE_EXTERN.
auth is used, as it caused a crash. I failed to repeat the issue, but still
made a change that now forces the TCP connection used for a freed SCP
session to get closed and not be re-used.
POST using a read callback, with Digest authentication and
"Transfer-Encoding: chunked" enforced. I would then cause the first request
to be wrongly sent and then basically hang until the server closed the
connection. I fixed the problem and added test case 565 to verify it.
shows that this one is actually a modified copy of ares_parse_a_reply.c.
In order to comply with ares_parse_a_reply.c's M.I.T. license, the old
1998 M.I.T. copyright notice is now also preserved in this file, the same
way as is done in other ares_parse_*.c files.
ares_parse_txt_reply() current version:
- Fixed a couple of potential double free's.
- Fixed memory leaks upon out of memory condition.
- Fixed pointer arithmetic.
- Setting ntxtreply to zero upon entry for all failure cases.
- Changed data type to size_t for variables substr_len, str_len and
the length member of ares_txt_reply struct.
- Avoided a couple of memcpy() calls.
- Changed i data type to unsigned int to prevent compiler warnings.
- Adjusted a comment.
- Use ARES_SUCCESS literal for successful completion.
- Added CVS Id tag.
dynamic and static c-ares libraries in debug and release flavours.
Additionally, each of the three sample programs is built against
each of the four possible c-ares libraries, resulting in
a total of 12 executables and 4 libraries.
based on the 'visibility' attribute for GNUC and __global for Sun
compilers, taking also in account __declspec function decoration
for Win32 and Symbian DLL's.
Introducing configure options --enable-hidden-symbols and
--disable-hidden-symbols following libcurl's naming.
unparsable expiry dates and then treat them as session cookies - previously
libcurl would reject cookies with a date format it couldn't parse. Research
shows that the major browsers treat such cookies as session cookies. I
modified test 8 and 31 to verify this.
by user 'koresh' introduced the --crlfile option to curl, which makes curl
tell libcurl about a file with CRL (certificate revocation list) data to
read.
fail to build when this happens, and show an appropriate error.
The brave of heart can circumvent this: defining ALLOW_MSVC6_WITHOUT_PSDK
in lib/config-win32.h, although absolutely discouraged and unsupported,
will allow the die-hard MSVC hacker to build in such an
environment.
The actually supported 'fix' is to install 'February 2003 Platform SDK'
a.k.a. 'Windows Server 2003 PSDK' which can be freely downloaded from
http://www.microsoft.com/msdownload/platformsdk/sdkupdate/psdk-full.htm
(http://curl.haxx.se/bug/view.cgi?id=2873666) which identified a problem which
made libcurl loop infinitely when given incorrect credentials when using HTTP
GSS negotiate authentication.
(http://curl.haxx.se/bug/view.cgi?id=2870221) that libcurl returned an
incorrect return code from the internal trynextip() function which caused
him grief. This is a regression that was introduced in 7.19.1 and I find it
strange it hasn't hit us harder, but I won't pursue figuring out
exactly why.
SO_SNDBUF to CURL_WRITE_SIZE even if the SO_SNDBUF starts out larger. The
patch doesn't do a setsockopt if SO_SNDBUF is already greater than
CURL_WRITE_SIZE. This should help folks who have set up their computer with
large send buffers.
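A rough sketch of that guarding logic, using plain BSD socket calls (the
variable names here are illustrative, not libcurl's internals):

  int cur = 0;
  socklen_t len = sizeof(cur);
  int want = CURL_WRITE_SIZE;
  /* only set SO_SNDBUF when the current buffer is smaller; never
     shrink a buffer the system or admin already made larger */
  if(getsockopt(fd, SOL_SOCKET, SO_SNDBUF, (void *)&cur, &len) == 0 &&
     cur < want)
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (const void *)&want,
               sizeof(want));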
the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
file to allow users to fairly easily rebuild libcurl with a modified
limit. The rationale for a fixed limit is that libcurl is realloc()ing a
buffer to be able to put a full header into it, so that it can call the
header callback with the entire header, but that also risks getting it into
trouble if a server by mistake or willingly sends a header that is more or
less without an end. The limit is set to 100K.
saving received cookies with no given path, if the path in the request had a
query part, meaning a question mark (?) and the characters to the right of
it. I wrote test case 1105 and fixed this problem.
transfer.c for blocking. It is currently used only by SCP and SFTP protocols.
This enhancement resolves an issue with 100% CPU usage during SFTP upload,
reported by Vourhey.
(http://curl.haxx.se/bug/view.cgi?id=2861587) identifying that libcurl used
the OpenSSL function X509_load_crl_file() wrongly and failed if it would
load a CRL file with more than one certificate within. This is now fixed.
powered libcurl in 7.19.6. If there was an X509v3 Subject Alternative Name
field in the certificate it had to match, and so even if a non-DNS and
non-IP entry was present it caused the verification to fail.
statically linking since libssh2 needs the SSL library link flags to be
set up already to satisfy its dependencies. This wouldn't be necessary
if the libssh2 configure check was changed to use pkg-config since the
--static flag would add the dependencies automatically.
POLLIN, and sets POLLERR without setting POLLIN and POLLOUT. In some
libcurl code execution paths this could trigger busy wait loops with
high CPU usage until a timeout condition aborted the loop.
This fix for Curl_poll addresses the above in a libcurl-wide manner.
Some systems' poll function sets POLLHUP in revents without setting
POLLIN, and sets POLLERR without setting POLLIN and POLLOUT. In some
libcurl code execution paths this could trigger busy wait loops with
high CPU usage until a timeout condition aborted the loop.
The reverted patch addressed the above issue for a very specific case,
when awaiting c-ares to resolve. A libcurl-wide fix supersedes this one.
http://cool.haxx.se/cvs.cgi/curl/lib/select.c.diff?r1=1.52&r2=1.53
start second "Thu Jan 1 00:00:00 GMT 1970" as the date parser then returns 0
which internally is then treated as a session cookie. That particular date
is now made to get the value of 1.
libcurl to resolve 'localhost' whatever name you use in the URL *if* you set
the --interface option to (exactly) "LocalHost". This will enable us to
write tests for custom hosts names but still use a local host server.
when cross-compiling. The key to success is then to properly set up
PKG_CONFIG_PATH before invoking configure.
I also improved how NSS is detected by trying nss-config if pkg-config isn't
present, and as a last resort just use the lib name and force the user to
setup the LIBS/LDFLAGS/CFLAGS etc properly. The previous last resort would
add a range of various libs that would almost never be quite correct.
sends the 220 response or otherwise is dead slow, libcurl will not
acknowledge the connection timeout during that phase but only the "real"
timeout - which may surprise users, since most people probably consider
that to be part of the connect phase. Brought up (and is being misunderstood) in:
http://curl.haxx.se/bug/view.cgi?id=2844077
QUOTE commands and the request used the same path as the connection had
already changed to, it would decide that no commands would be necessary for
the "DO" action and that was not handled properly but libcurl would instead
hang.
read stdin in a non-blocking fashion. This also brings back -T- (minus) to
the previous blocking behavior since it could break stuff for people at
times.
strdup() that could lead to segfault if it returned NULL. I extended his
suggested patch to now have Curl_retry_request() return a regular return code
and better check that.
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
than what's absolutely necessary:
curl will do its best to use what you pass to it as a URL. It is not trying to
validate it as a syntactically correct URL by any means but is instead
VERY liberal with what it accepts.
mail posted to the http-state mailing list, from Adam Barth, and is said to be
the set of date formats the Chrome browser code is tested against:
http://www.ietf.org/mail-archive/web/http-state/current/msg00129.html
libcurl parses most of them identically, but not all of them.
sending of the TSIZE option. I don't like fixing bugs just hours before
a release, but since it was broken and the patch fixes this for him I decided
to get it in anyway.
each test, so that the test suite can now be used to actually test the
verification of cert names etc. This made an error show up in the OpenSSL-
specific code where it would attempt to match the CN field even if a
subjectAltName exists that doesn't match. This is now fixed and verified
in test 311.
Fix OS400 makefile for tests to use the new Makefile.inc in libtest
Update the OS400 wrappers and RPG binding according to the current CVS source state
POSIX.1-2001. Note that RFC 2553 defines a prototype where the last parameter cnt is of type size_t.
Many systems follow RFC 2553. Glibc 2.0 and 2.1 have size_t, but 2.2 has socklen_t.
and the name length differ in those cases and thus leave the matching function
unmodified from before, as the matching functions never have to bother with
the zero bytes in legitimate cases. Peter Sylvester helped me realize that
this fix is slightly better as it leaves more code unmodified and makes the
detection a bit more obvious in the code.
should introduce an option to disable SNI, but as we're in feature freeze
now I've addressed the obvious bug here (pointed out by Peter Sylvester): we
shouldn't try to enable SNI when SSLv2 or SSLv3 is explicitly selected.
Code for OpenSSL and GnuTLS was fixed. NSS doesn't seem to have a particular
option for SNI, or are we simply not using it?
(http://curl.haxx.se/bug/view.cgi?id=2829955) mentioning the recent SSL cert
verification flaw found and exploited by Moxie Marlinspike. The presentation
he did at Black Hat is available here:
https://www.blackhat.com/html/bh-usa-09/bh-usa-09-archives.html#Marlinspike
Apparently at least one CA allowed a subjectAltName or CN that contains a
zero byte, and thus clients that assumed they would never have zero bytes
were exploited to OK a certificate that didn't actually match the site. Like
if the name in the cert was "example.com\0theactualsite.com", libcurl would
happily verify that cert for example.com.
libcurl now properly uses the length of the extracted name instead of
assuming it is zero terminated.
only in some OpenSSL installs - like on Windows) isn't thread-safe and we
agreed that moving it to the global_init() function is a decent way to deal
with this situation.
something beyond ascii but currently libcurl will only pass in the verbatim
string the app provides. There are several browsers that already do this
encoding. The key seems to be the updated draft to RFC2231:
http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
CURLOPT_PREQUOTE) now accept a preceding asterisk before the command to
send when using FTP, as a sign that libcurl shall simply ignore the response
from the server instead of treating it as an error. Not treating a 400+ FTP
response code as an error means that failed commands will not abort the
chain of commands, nor will they cause the connection to get disconnected.
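A minimal sketch of the new prefix in use (assuming an already created easy
handle named 'curl'):

  struct curl_slist *cmds = NULL;
  /* the leading asterisk makes libcurl ignore a failure response to
     this command instead of aborting the command chain */
  cmds = curl_slist_append(cmds, "*SITE CHMOD 0644 upload.txt");
  curl_easy_setopt(curl, CURLOPT_POSTQUOTE, cmds);
  /* ... perform the transfer, then: */
  curl_slist_free_all(cmds);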
"you replaced the old SSLeay_add_ssl_algorithms() call
with OpenSSL_add_all_algorithms(), however unlike the name suggests,
the second function is not a superset of the first. When using SSL
both these functions will need to be called in order to offer complete
functionality"
out that OpenSSL-powered libcurl didn't support the SHA-2 digest algorithm,
and provided the solution too: to use OpenSSL_add_all_algorithms() instead
of the older SSLeay_* alternative. OpenSSL_add_all_algorithms was added in
OpenSSL 0.9.5
in NSS-powered libcurl. Now the client certificates can be selected
automatically by a NSS built-in hook. Additionally, pre-login to all PKCS11
slots is no longer performed; it used to cause problems with HW tokens.
- Fixed reference counting for NSS client certificates. Now the PEM reader
module should be always properly unloaded on Curl_nss_cleanup(). If the unload
fails though, libcurl will try to reuse the already loaded instance.
(http://curl.haxx.se/bug/view.cgi?id=2813123) and a patch that fixes the
problem:
Url A is accessed using auth. Url A redirects to Url B (on a different
server). Url B reuses a persistent connection. Url B has auth, even though
it's on a different server.
Note: if Url B does not reuse a persistent connection, auth is not sent.
to use the "standard" ENABLE_IPV6 one. Also, if port number cannot be figured
out to connect to after a name resolve (due to it not being IPv4 or IPv6),
that particular address will now simply be skipped.
don't know how they ended up wrong in the first place, but using this output
format makes it possible to quite easily separate the string into an array
of multiple items.
This allows curl(1) to be used as a client-side tunnel for arbitrary stream
protocols by abusing chunked transfer encoding in both the HTTP request and
HTTP response. This requires server support for sending a response while a
request is still being read, of course.
If attempting to read from stdin returns EAGAIN, then we pause our sender.
This leaves curl to attempt to read from the socket while reading from stdin
(and thus sending) is paused.
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
to detect gnutls build options with pkg-config only and not libgnutls-config
anymore since GnuTLS has stopped distributing that tool. If an explicit path
is given to configure, we will instead guess on how to link and use that
lib. I did not use the patch from the bug report.
installed in the subdirectory at different stages. With some versions it is
installed when libtoolize finishes, but with others it is not installed
until automake has finished.
So we can not attempt to use config.guess until the very last buildconf stage.
is almost always a VERY BAD IDEA. Yet there are still apps out there doing
this, and now recently it triggered a bug/side-effect in libcurl as when
libcurl sends a POST or PUT with NTLM, it sends an empty post first when it
knows it will just get a 401/407 back. If the app then replaced the
Content-Length header, it caused the server to wait for input that libcurl
wouldn't send. Aaron Oneal reported this problem in bug report #2799008
(http://curl.haxx.se/bug/view.cgi?id=2799008) and helped us verify the fix.
out that the cookie parser would leak memory when it parses cookies that are
received with domain, path etc set multiple times in the same header. While
such a cookie is questionable, they occur in the wild and libcurl no longer
leaks memory for them. I added such a header to test case 8.
not in the mood enough to fight this now.
65. When doing FTP over a socks proxy or CONNECT through HTTP proxy and the
multi interface is used, libcurl will fail if the (passive) TCP connection
for the data transfer isn't more or less instant as the code does not
properly wait for the connect to be confirmed. See test case 564 for a first
shot at a test case.
of streams that had some parts (legitimately) missing. We now provide and use
a proper cleanup function for the content encoding submodule.
http://curl.haxx.se/mail/lib-2009-05/0092.html
as reported by Ebenezer Ikonne (on curl-users) and Laurent Rabret (on
curl-library). The transfer was mistakenly marked to get more data to send
but since it didn't actually have that, it just hung there...
KEEP_RECV to better match the general terminology: receive and send is what we
do from the (remote) servers. We read and write from and to the local fs.
(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem to
connect to SOCKS proxies when using the multi interface. It turned out to
almost not work at all previously. We need to wait for the TCP connect to
be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
(http://curl.haxx.se/bug/view.cgi?id=2786255) with a patch, identifying how
libcurl did not deal with SSL session ids properly if the server rejected a
re-use of one. Starting now, it will forget the rejected one and remember
the new. This change was for OpenSSL only, it is likely that other SSL lib
code needs similar fixes.
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A" the ";type=A" is stripped off.
I added test case 562 to verify, only to find out that I couldn't repeat
this bug so I hereby consider it not a bug anymore!
I've now made TFTP "connections" not being kept for re-use within libcurl.
TFTP is UDP-based so the benefit was really low (if even existing) to begin
with so instead of tracking down to fix this problem we instead removed the
re-use. I also enabled test case 1099 that I wrote a few days ago to verify
that this change fixes the reported problem.
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will attempt to instead read that much data from the stream instead
of seeking and that way curl can now upload with resume when data is read
from a stream!
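A sketch of a seek callback taking advantage of this, assuming a POSIX
lseek() and an 'fd' passed via CURLOPT_SEEKDATA (CURL_SEEKFUNC_CANTSEEK is
the new return code this change introduced):

  static int my_seek(void *userp, curl_off_t offset, int origin)
  {
    int fd = *(int *)userp;
    if(lseek(fd, (off_t)offset, origin) == -1)
      /* can't seek (e.g. a pipe); let libcurl read and discard instead */
      return CURL_SEEKFUNC_CANTSEEK;
    return CURL_SEEKFUNC_OK;
  }

  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
  curl_easy_setopt(curl, CURLOPT_SEEKDATA, &fd);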
Previous workaround proved useful, but triggered the following warning:
warning #556: a value of type "volatile Curl_addrinfo *" cannot be assigned to an entity of type "Curl_addrinfo *"
Koenig pointed out that the man page didn't tell that the *_proxy
environment variables can be specified lower case or UPPER CASE and the
lower case takes precedence.
Previous 'volatile' variables workaround proved useful, but it triggered the following warning:
warning #167: argument of type "volatile Curl_addrinfo *" is incompatible with parameter of type "void *"
how it occurs (http://curl.haxx.se/mail/lib-2009-04/0289.html). The
conclusion was that if an error is detected and Curl_done() is called for
the connection, ftp_done() could at times return another error code that
then would take precedence and that new code confused existing logic that
works for the first error code (CURLE_SEND_ERROR) only.
OBJECTPOINT options. Now we've introduced a new function - my_setopt_str -
within the app for setting plain string options to avoid the risk of this
mistake happening.
for any further requests or transfers. The work-around is then to close that
handle with curl_easy_cleanup() and create a new one. Some more details:
http://curl.haxx.se/mail/lib-2009-04/0300.html
proxy. libcurl would then wrongly close the connection after each
request. In his case it had the weird side-effect that it killed NTLM auth
for the proxy, causing an infinite loop!
I added test case 1098 to verify this fix. The test case does however not
properly verify that the transfers are done persistently - as I couldn't
think of a clever way to achieve it right now - but you need to read the
stderr output after a test run to see that it truly did the right thing.
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
curl_easy_setopt(curl, CURLOPT_PUT, 1);
curl_easy_setopt(curl, CURLOPT_NOBODY, 0);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1 the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
not hit any existing users badly.
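So with the new behavior, something like this now does what one would
expect (a sketch, assuming 'curl' is an easy handle):

  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* would make a HEAD request */
  curl_easy_setopt(curl, CURLOPT_NOBODY, 0L); /* back to GET again */
  /* the method is decided when the request starts, so this does a GET */
  curl_easy_perform(curl);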
out to be leaking cacerts. Kamil Dudka helped me complete the fix. The issue
is found in Redhat's bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=453612
There are still memory leaks present, but they seem to have other reasons.
and 1 on fatal errors. Previously it only mentioned non-zero on fatal
errors. This is a slight change in meaning, but it follows what we've done
elsewhere before and it opens up for LOTS more useful return codes
whenever we can think of them...
non-configured libcurl. In this case curl_off_t data type was gated
to the off_t data type which depends on the _FILE_OFFSET_BITS. This
configuration is exactly the unwanted configuration for our curl_off_t
data type which must not depend on such setting. This breaks ABI for
libcurl libraries built with Sun compilers which were built without
having run the configure script with _FILE_OFFSET_BITS different than
64 and using the ILP32 data model.
curl_easy_duphandle did not necessarily duplicate the CURLOPT_COOKIEFILE
option. It only enabled the cookie engine in the destination handle if
data->cookies is not NULL (where data is the source handle). In case of a
newly initialized handle which just had the cookie support enabled by a
curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "") call, handle->cookies was
still NULL because the setopt-call only appends the value to
data->change.cookielist, hence duplicating this handle would not have the
cookie engine switched on.
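The failing scenario, in short (a sketch; error checks omitted):

  CURL *src = curl_easy_init();
  /* a blank CURLOPT_COOKIEFILE enables the cookie engine without
     actually reading any file */
  curl_easy_setopt(src, CURLOPT_COOKIEFILE, "");
  CURL *dup = curl_easy_duphandle(src);
  /* previously 'dup' had no cookie engine enabled here; now it does */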
We also concluded that the slist-functionality would be suitable for being
put in its own module rather than simply hanging out in lib/sendf.c so I
created lib/slist.[ch] for them.
scripts to make it detect a bad checkout earlier. People with older
checkouts who don't do cvs update with the -d option won't get the new dirs
and then will get funny outputs that can be a bit hard to understand and
fix.
in the gnutls code where we were checking for negative values for errors,
when the man pages state that GNUTLS_E_SUCCESS is returned on success and
other values indicate error conditions.
curl didn't use sprintf() in a way that is documented to work in POSIX but
since we use our own printf() code (from libcurl) that shouldn't be a
problem. Nonetheless I modified the code to not rely on such particular
features and to not cause further raised eyebrows for no good reason.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols and thus you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL to pass to libcurl and yet not allow all
protocols libcurl may have been built to support.
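A minimal sketch of the two options in use (assuming an easy handle named
'curl'):

  /* only allow the initial URL to be HTTP or HTTPS... */
  curl_easy_setopt(curl, CURLOPT_PROTOCOLS, CURLPROTO_HTTP | CURLPROTO_HTTPS);
  /* ...and only follow redirects to HTTPS */
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, CURLPROTO_HTTPS);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);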
curl_global_init() function to properly keep the performing functions
thread-safe. We've previously (28 April 2007) moved the init to a later time
just to avoid it failing very early when libgcrypt dislikes the situation,
but that move was bad and the fix should rather be in libgcrypt or
elsewhere.
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between actually zero and unknown.
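Checking for the unknown-size case thus becomes possible, along these lines
(a sketch; these sizes are returned as doubles):

  double cl;
  if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &cl) ==
     CURLE_OK) {
    if(cl == -1)
      printf("download size unknown\n"); /* previously looked like zero */
    else
      printf("download size: %.0f bytes\n", cl);
  }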
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to cleanup the code somewhat in this area, as
now the FTP code attempts to keep (better) track on pending responses
necessary to get read in ftp_done().
libcurl did a superfluous 1000ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP and then
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (in 1000ms) before doing a
libssh2-read. But in this case libssh2 had already read and buffered the
data so we ended up always just waiting 1000ms before getting to work on the
data!
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then CWD may first fail but then another connection does MKD before this
connection and thus MKD fails but trying CWD works! The numbers can
(should?) now be set with the convenience enums called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections, and this option will then be very useful.
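A minimal sketch of enabling the retry behavior (assuming an easy handle
named 'curl'):

  /* create missing dirs, and retry the CWD once if the MKD fails
     because a parallel connection already created the directory */
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                   (long)CURLFTP_CREATE_DIR_RETRY);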
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION and was previously not possible to
reliably figure out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
getaddrinfo() sorts the response list
This isn't a libcurl bug since this is how getaddrinfo() is *supposed* to work!
Apparently you deal with this using the /etc/gai.conf file.
interface and setting CURLMOPT_MAXCONNECTS to something less than the number
of handles you add to the multi handle. All the connections that didn't fit
in the cache would not be properly disconnected nor freed!
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT: some now use --proxy1.0 and
some do CONNECT 1.1, so both versions get exercised.
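Switching back to CONNECT 1.0 from an application could then look like this
(a sketch; the proxy host name is made up):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  /* issue "CONNECT host:port HTTP/1.0" instead of the new 1.1 default */
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);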
enabled, we can now take advantage of its brand new AF_UNSPEC support in
ares_gethostbyname(). This makes test case 241 finally run fine for me with
this setup since it now parses the "::1 ip6-localhost" line fine in my
/etc/hosts file!
(http://curl.haxx.se/bug/view.cgi?id=2550061) mentioning that I failed to
properly make sure that the VC9 makefiles got included in the latest
release. I've now fixed the release script and verified it so next release
will hopefully include them properly!
Curl_sspi_global_init() and Curl_sspi_global_cleanup() which previously were
named Curl_ntlm_global_init() and Curl_ntlm_global_cleanup() in http_ntlm.c
Also adjusted socks_sspi.c to remove the link-time dependency on the Windows
SSPI library, using it now in the same way as it was done in http_ntlm.c.
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
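A sketch of the libcurl side of this (the proxy host is made up, and "rcmd"
is just the documented default service name shown for illustration):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
  /* set to 1L to use the NEC SOCKS5 server's protection-mode quirk */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 0L);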
disable "rfc4507bis session ticket support". rfc4507bis was later turned
into the proper RFC5077 it seems: http://tools.ietf.org/html/rfc5077
The enabled extension concerns the session management. I wonder how often
libcurl stops a connection and then resumes a TLS session. Also, sending the
session data is some overhead. I suggest that you just use your proposed
patch (which explicitly disables TICKET).
If someone writes an application with libcurl and openssl who wants to
enable the feature, one can do this in the SSL callback.
Sharad Gupta brought this to my attention. Peter Sylvester helped me decide
on the proper action.
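As a sketch, such a callback might look like this with OpenSSL, assuming a
version that offers SSL_CTX_clear_options():

  static CURLcode sslctx_cb(CURL *handle, void *sslctx, void *parm)
  {
    /* re-enable the session ticket extension that libcurl switched off */
    SSL_CTX_clear_options((SSL_CTX *)sslctx, SSL_OP_NO_TICKET);
    return CURLE_OK;
  }

  curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);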
(http://curl.haxx.se/bug/view.cgi?id=2535504) pointing out that realms with
quoted quotation marks in HTTP Digest headers didn't work. I've now added
test case 1095 that verifies my fix.
They basically offer the same thing the NO_PROXY environment variable only
offered previously: list a set of host names that shall not use the proxy
even if one is specified.
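For the libcurl side this would be used along these lines (a sketch;
CURLOPT_NOPROXY is, to my understanding, the option name this work added):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  /* never use the proxy for these hosts, like the NO_PROXY environment
     variable but set per-handle */
  curl_easy_setopt(curl, CURLOPT_NOPROXY, "localhost,.example.com");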
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561)
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
by Daniel Black, I've now added magic to the configure script that makes it
use pkg-config to detect gnutls details as well if the existing method
(using libgnutls-config) fails. While doing this, I cleaned up and unified
the pkg-config usage when detecting openssl and nss as well.
that is now used by the ares_parse_*_reply() functions instead of the
ares_expand_name(), simply to more easily return ARES_EBADRESP for the cases
where the name expansion fails, as in responses that really aren't expected.
When using the multi interface over HTTP and the server returns a Location
header, the running easy handle will get stuck in the CURLM_STATE_PERFORM
state, leaving the external event loop stuck waiting for data from the
incoming socket (when using the curl_multi_socket_action stuff). While this
bug was pretty hard to find, it seems to require only a one-line fix. The
break statement on line 1374 in multi.c caused the function to skip the call
to multistate().
How to reproduce this bug? Well, that's another question. evhiperfifo.c in
the examples directory chokes on this bug only _sometimes_, probably
depending on how fast the URLs are added. One way of testing the bug out is
writing to hiper.fifo from more than one source at the same time.
curl_easy_reset() by creating Curl_init_userdefined(). This had the side effect
of fixing curl_easy_reset() so it now also resets CURLOPT_FTP_FILEMETHOD and
CURLOPT_SSL_SESSIONID_CACHE
I have to jump through a few hoops now with the NSS library initialization
since another part of an application may have already initialized NSS by the
time Curl gets invoked. This patch is more careful to only shutdown the NSS
library if Curl did the initialization.
It also adds in a bit of code to set the default ciphers if the app that
called NSS_Init* did not call NSS_SetDomesticPolicy() or set specific
ciphers. One might argue that this lets other application developers get
lazy and/or they aren't using the NSS API correctly, and you'd be right.
But still, this will avoid terribly difficult-to-trace crashes and is
generally helpful.
(http://curl.haxx.se/bug/view.cgi?id=2413067) that identified a problem that
would cause libcurl to mark a DNS cache entry "in use" eternally if the
subsequent TCP connect failed. It would thus never get pruned and refreshed
as it should've been.
The curl tool parts are postponed to a later time
201 - "bug: header data output to the body callback function after set header"
Was probably not a bug, I asked about it but I didn't get any response.
202 - "hangs up of application above libcurl" - problems with the multi_socket
Fixes from Igor have been committed and there's currently no pending ones.
pipelining, as libcurl could then easily get confused and A) work on the
handle that was not "first in queue" on a pipeline, or even B) tell the app
to REMOVE a socket while it was in use by a second handle in a pipeline. Both
errors caused hanging or stalling applications.
was actually ready to get done, as the internal time resolution is higher
than the returned millisecond timer. Therefore it could cause applications
running on fast processors to do short bursts of busy-loops.
curl_multi_timeout() will now only return 0 if the timeout is actually
already triggered.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
operation didn't complete properly if the EAGAIN equivalent was returned, as
libcurl would simply continue with a half-completed close operation
performed. This ruined persistent connection re-use and caused some
SSH-protocol errors in general. The correction is unfortunately adding a
blocking function - doing it entirely non-blocking should be considered for
a better fix.
If USE_WATT32=1 one needs to use stack-based calls (-3s).
So to keep the makefile nice and clean, specify -3s for
Winsock target too (there's hardly any speed-gain using -3r).
removing easy handles from multi handles when the easy handle is/was within
a HTTP pipeline. His bug report #2351653
(http://curl.haxx.se/bug/view.cgi?id=2351653) was also related and was
eventually fixed by a patch by Igor himself.
duphandle+curl_mutli" (http://curl.haxx.se/bug/view.cgi?id=2416182) showed
that curl_easy_duphandle() wrongly also copied the pointer to the connection
cache, which was plain wrong and caused a segfault if the handle would be
used in a different multi handle than the handle it was duplicated from.
_ Adjust OS400 make script for non-CVS distributions.
_ Upgrade ILE/RPG binding.
_ Define CURL_HIDDEN_SYMBOLS on OS400, since only CURL_EXTERN-marked symbols are exported.
there are servers "out there" that relies on the client doing this broken
Digest authentication. Apache even comes with an option to work with such
broken clients.
The difference is only for URLs that contain a query-part (a '?'-letter and
text to the right of it).
libcurl now supports this quirk, and you enable it by setting the
CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
CURLOPT_PROXYAUTH options. They are thus individually controllable for
server and proxy.
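Enabling the quirk for both could thus look like this (a sketch):

  /* accept the IE-style Digest computation from the server... */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST_IE);
  /* ...and, independently, from the proxy */
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_DIGEST_IE);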
particular state for the control connection like it did before for implicit
FTPS (libcurl assumed such control connections to be encrypted while some
FTPS servers such as FileZilla assume such connections to be clear
mode). Use the CURLOPT_USE_SSL option to set your desired level.
researching it, it turned out he got a 550 response back from a SIZE command
and then I fell over the text in RFC3659 that says:
The presence of the 550 error response to a SIZE command MUST NOT be taken
by the client as an indication that the file cannot be transferred in the
current MODE and TYPE.
In other words: the change I did on September 30th 2008 and that has been
included in the last two releases was a regression and a bad idea. We MUST
NOT take a 550 response from SIZE as a hint that the file doesn't exist.
(http://curl.haxx.se/bug/view.cgi?id=2221237) that identified an infinite
loop during GSS authentication given some specific conditions. With his
patience and great feedback I managed to narrow down the problem and
eventually fix it although I can't test any of this myself!
(http://curl.haxx.se/bug/view.cgi?id=2351645) that identified a problem with
the multi interface that occurred if you removed an easy handle while in
progress and the handle was used in a HTTP pipeline.
function when built to support SCP and SFTP that helps the library to know
in which direction a particular libssh2 operation would return EAGAIN so
that libcurl knows what socket conditions to wait for before trying the
function call again. Previously (and still when using libssh2 0.18 or
earlier), libcurl will busy-loop in this situation when the easy interface
is used!
when uploading files to a single FTP server using multiple easy handles
with the multi interface. Occasionally a handle would stall in
mysterious ways.
The problem turned out to be a side-effect of the ConnectionExists()
function's eagerness to re-use a handle for HTTP pipelining so it would
select it even if already being in use, due to an inadequate check for its
chances of being used for pipelining.
codes for all calls to malloc and strdup that were missing. I also changed
a few malloc(13) to use arrays on the stack and a few malloc(PATH_MAX) to
instead use aprintf() to lower memory use.
I also fixed a memory leak in Curl_nss_connect() when CURLOPT_ISSUERCERT is
in use.
(http://curl.haxx.se/bug/view.cgi?id=2255627) which pointed out that a
program using libcurl's multi interface to download a HTTPS page with a
libcurl built powered by OpenSSL, would easily get silly and hand
over SSL details as data instead of the actual HTTP headers and body. This
happened because libcurl would consider the connection handshake done too
early. This problem was introduced at September 22nd 2008 with my fix of the
bug #2107377
The correct fix is now instead done within the GnuTLS-handling code, as both
the OpenSSL and the NSS code already deal with this situation in similar
fashion. I added test case 560 in an attempt to verify this fix, but
unfortunately it didn't trigger it even before this fix!
This test was added after the HTTPS-using-multi-interface with OpenSSL
regression of 7.19.1 to hopefully prevent this embarrassing mistake from
appearing again... Unfortunately the bug wasn't triggered by this test, which
presumably is because the connect to a local server is too fast/different
compared to the real/distant servers we saw the bug happen with.
problem with MSVC 6 makefile that caused a build failure. It was noted that
the curl_addrinfo.obj reference was missing. I took the opportunity to sort
the list in which this was missing.
problem with my CURLINFO_PRIMARY_IP fix from October 7th that caused a NULL
pointer read. I also took the opportunity to clean up this logic (storing of
the connection's IP address) somewhat as we had it stored in two different
places and ways previously and they are now unified.
in man resolv.conf:
causes round robin selection of nameservers from among those listed. This
has the effect of spreading the query load among all listed servers, rather
than having all clients try the first listed server first every time.
You can enable it with ARES_OPT_ROTATE
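A minimal sketch of enabling it (the rotate option needs no struct member,
just the mask bit):

  ares_channel channel;
  struct ares_options options;
  int optmask = ARES_OPT_ROTATE; /* round-robin among the name servers */
  memset(&options, 0, sizeof(options));
  if(ares_init_options(&channel, &options, optmask) != ARES_SUCCESS)
    fprintf(stderr, "c-ares init failed\n");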
can be created before resolving the IPv6 name. In the context of running
a test, it doesn't make sense to run an IPv6 test when a host is resolvable
but IPv6 isn't usable. This should fix failures of test 1085 on hosts with
library and DNS support for IPv6 but where actual use of IPv6 has been
administratively disabled.
Changed checkprefix() to use it and those instances of strnequal() that
compare host names or other protocol strings that are defined to be
independent of case in the C locale. This should fix a few more
Turkish locale problems.
decided it was a good idea to properly document my thoughts in a comment near
the code that was identified as a possible flaw. A false positive as far as I
can see.
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they have no problems with the colon separator
that the CURLOPT_PROXYUSERPWD option does.
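A sketch of how this looks (CURLOPT_PROXYUSERNAME and CURLOPT_PROXYPASSWORD
are, to my understanding, the option names added here):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  /* colons in either credential are fine, which the single
     CURLOPT_PROXYUSERPWD string cannot express */
  curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "user:name");
  curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "pass:word");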
are consecutive and with a 0x20 "distance" to the uppercase letter), since we do
support EBCDIC as well. Thus I replaced the macro with a (larger) switch case.
I'd better change the function name...
(http://curl.haxx.se/bug/view.cgi?id=2154627) which pointed out that libcurl
uses strcasecmp() in multiple places where it causes failures when the
Turkish locale is used. This is because 'i' and 'I' aren't the same letter
in Turkish, so strcasecmp() on those letters gives a different result in
Turkish than in English (or just about all other languages). I thus
introduced a totally new internal function in libcurl (called
Curl_ascii_equal) for doing case insensitive comparisons of
english-(ascii?) style strings, which will make "file" and "FILE" match
even if the Turkish locale is selected.
return code. This way, if the precheck command can't be run at all for
whatever reason, it's treated as a precheck failure which causes the
test to be skipped.
(http://curl.haxx.se/bug/view.cgi?id=2155496) pointing out an error case
without a proper human-readable error message. When a read callback returns
a too large value (like when trying to return a negative number) it would
trigger, and the generic error message then made the problem harder
to track down. I've added an error message for this now.
Better disable following warnings when cross-compiling with a gcc older
than 3.0, to avoid warnings from third party system headers:
-Wmissing-declarations
-Wmissing-prototypes
-Wunused
-Wshadow
Disable following warnings when cross-compiling with a gcc older
than 3.0, to avoid warnings from third party system headers:
-Wmissing-prototypes
-Wunused
-Wshadow
Highest warning level is double -A, next is single -A.
Due to the big number of warnings these trigger on third
party header files it is impractical for us to use any of
them here. If you want them, simply define them in CPPFLAGS.
Due to the HP-UX socklen_t issue it is insane to use the +w1 warning level.
It generates more than 1100 warnings on socklen_t related statements.
Until the issue is somehow fixed we will just use the +w2 warning level.
because the struct is declared on the stack and not all members are used so
we could just as well make a struct with only the members we actually need.
systems supporting getifaddrs(). Also fixed a problem where an IPv6
address could be chosen instead of an IPv4 one for --interface when it
involved a name lookup.
Disallow run-time dereferencing of null pointers.
Disable some remarks:
#4227: padding struct with n bytes to align member.
#4255: padding size of struct with n bytes to alignment boundary.
fixed a CURLINFO_REDIRECT_URL memory leak and an additional wrong-doing:
Any subsequent transfer with a redirect leaks memory, potentially
crashing the process eventually.
Any subsequent transfer WITHOUT a redirect causes the most recent redirect
that DID occur on some previous transfer to still be reported.
eventually identified a flaw in how the multi_socket interface in some cases
missed calling the timeout callback when easy handles are removed and
added within the same millisecond.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
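A minimal sketch (assuming an easy handle named 'curl'):

  /* both values may contain colons, which CURLOPT_USERPWD's single
     "user:password" string cannot fully express */
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user:name");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret:word");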
a fresh connection to be made in such cases and the request retransmitted.
This should fix test case 160. Added test case 1079 in an attempt to
test a similar connection dropping scenario, but as a race condition, it's
hard to test reliably.
the app re-used the handle to do a connection to host B and then again
re-used the handle to host A, it would not update the info with host A's IP
address (due to the connection being re-used) but it would instead report
the info from host B.
option to specify (de)activation of compiler optimizations.
If the option is specified, it will be honored independent of the
--(dis|en)able-debug option.
option to specify (de)activation of picky compiler warnings.
If the option is specified, it will be honored independent of the
--(dis|en)able-debug option.
If the option is not specified, it will follow the --(dis|en)able-debug
setting, whose default is disabled if not specified.
gets a 550 response back for the cases where a download (or NOBODY) is
wanted. It still allows a 550 as response if the SIZE is used as part of an
upload process (like if resuming an upload is requested and the file isn't
there before the upload). I also modified the FTP test server and a few test
cases accordingly to match this modified behavior.
and when not cross-compiling verifies if it is IPv6 capable.
HAVE_INET_NTOP will only be defined when an IPv6 capable working
inet_ntop function is available.
2008-09-24 stable snapshot has a buf_mem_st.length structure member with
'int' data type.
OpenSSL un-released 0.9.9 CVS version has a buf_mem_st.length structure member
with 'size_t' data type since 2007-Oct-09.
These 4 typecasts should silence compiler warnings in all cases.
switching from one protocol to another in a single request (e.g.
redirecting from HTTP to FTP as in test 1055) by resetting
state.expect100header before every request.
date parser function. This makes our function less dependent on system-
provided functions and instead we do all the magic ourselves. We also no
longer depend on the TZ environment variable.
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than those equal to EAGAIN now cause proper
CURLE_RECV_ERROR to get returned. This made test case 160 fail so I've now
disabled it until we can figure out another way to exercise that logic.
proxy" (http://curl.haxx.se/bug/view.cgi?id=2107377) that showed how a multi
interface using program didn't work when built with GnuTLS and a CONNECT
request was done over a proxy (basically test 502 over a proxy to a HTTPS
site). It turned out the ssl connect function would get called twice which
caused the second call to fail.
Disable remark #981: operands are evaluated in unspecified order
Function calls which are triggering this remark, today, do not depend
on the order of evaluation of its arguments.
Disable remark #1469: "cc" clobber ignored
Remark triggered on htons() and ntohs() due to glibc header files.
sites in cases where the cookie clearly has a very old expiry date. The
condition was simply that libcurl's date parser would fail to convert the
date and it would then count as a (time-based) match. Starting now, a
missed date due to an unsupported date format or date range will now cause
the cookie to not match.
CURLOPT_POST301 (but adds a define for backwards compatibility for you who
don't define CURL_NO_OLDIES). This option allows you to now also change the
libcurl behavior for a HTTP response 302 after a POST to not use GET in the
subsequent request (when CURLOPT_FOLLOWLOCATION is enabled). I edited the
patch somewhat before commit. The curl tool got a matching --post302
option. Test case 1076 was added to verify this.
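A sketch of the option in use (CURLOPT_POSTREDIR being, to my understanding,
the renamed option, with CURL_REDIR_POST_301 and CURL_REDIR_POST_302 as the
assumed bitmask names):

  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* keep the POST method after both 301 and 302 responses */
  curl_easy_setopt(curl, CURLOPT_POSTREDIR,
                   (long)(CURL_REDIR_POST_301 | CURL_REDIR_POST_302));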
enabling this feature with CURLOPT_CERTINFO for a request using SSL (HTTPS
or FTPS), libcurl will gather lots of server certificate info and that info
can then get extracted by a client after the request has completed with
curl_easy_getinfo()'s CURLINFO_CERTINFO option. Linus Nielsen Feltzing
helped me test and smooth out this feature.
Unfortunately, this feature currently only works with libcurl built to use
OpenSSL.
This feature was sponsored by networking4all.com - thanks!
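Extraction could look roughly like this (a sketch; struct curl_certinfo
holds one curl_slist of "name:value" strings per certificate):

  struct curl_certinfo *ci = NULL;
  curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);
  curl_easy_perform(curl);
  if(curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
    int i;
    for(i = 0; i < ci->num_of_certs; i++) {
      struct curl_slist *slist;
      for(slist = ci->certinfo[i]; slist; slist = slist->next)
        printf("cert %d: %s\n", i, slist->data);
    }
  }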
file for libcurl, and while doing that fix he unified with curl-config.in
how the supported protocols and features are extracted and used, so both those
tools should now always be synced.
to HTTP 1.0 upon receiving a response from the HTTP server. Tests 1072
and 1073 are similar to test 1069 in that they involve the impossible
scenario of sending chunked data to a HTTP 1.0 server. All these currently
fail and are added to DISABLED.
Added test 1075 to test --anyauth with Basic authentication.
"Connection: close" and actually close the connection after the
response-body, libcurl could still have outstanding data to send and it
would not properly notice this and stop sending. This caused weirdness and
sad faces. http://curl.haxx.se/bug/view.cgi?id=2080222
Note that there are still reasons to consider libcurl's behavior when
getting a >= 400 response code while sending data, as Craig Perras' note
"http upload: how to stop on error" specifies:
http://curl.haxx.se/mail/archive-2008-08/0138.html
an unlock in between) for a certain case and that in fact works when using
regular Windows mutexes but not with pthreads'! Locks should of course not
get locked again so this is now fixed.
http://curl.haxx.se/mail/lib-2008-08/0422.html
supporting configure's --disable-largefile option for WIN32 targets also.
Non-configure systems which do not use config-win32.h configuration file,
and want to use the WIN32 file API, must define USE_WIN32_LARGE_FILES or
USE_WIN32_SMALL_FILES as appropriate in their own configuration files.
- Logic based on CURL_SIZEOF_CURL_OFF_T and SIZEOF_OFF_T already adjusted.
- Test case 557 already passes on all autobuilds.
- System off_t, or equivalent, size is finally not recorded in curlbuild.h
for this release. SIZEOF_OFF_T from config file is used.
firefox-db2pem.sh conversion script that converts a local Firefox db of ca
certs into PEM format, suitable for use with an OpenSSL or GnuTLS built
libcurl.
which caused an error when the second header was dumped due to stdout
being closed. Added test case 1066 to verify. Also fixed a potential
problem where a closed file descriptor might be used for an upload
when more than one URL is given.
this really hasn't bitten anyone else. The issuer of the report (Felix) suggested
the closure himself and he will get back when (if?) he manages to get a more
reliable way to see the problem.
154 - bug #2041827 "Segfault in http_output_auth w/ FORBID_REUSE (7.18.2)"
Server with the correct content-length. Sending a file with 511 or less
bytes, content-length 512 is used. Sending a file with 513 - 1023 bytes,
content-length 1024 is used. Files with a length of a multiple of 512 Bytes
show the correct content-length. Only these files work for upload.
http://curl.haxx.se/bug/view.cgi?id=2057858
memory leak because it never called the OpenSSL function
CRYPTO_cleanup_all_ex_data() as it was supposed to. This was because of a
missing define in config-win32.h!
when including the OpenSSL header files. This is the recommended setting, as
it prevents the undesired inclusion of header files with the same name as
those of OpenSSL but which do not belong to the OpenSSL package. The visible
change from previously released libcurl versions is that OpenSSL enabled
NetWare builds now also define USE_OPENSSL in config files, and that OpenSSL
header files
must be located in a subdirectory named 'openssl'.
remain in use as internal curl_off_t print formatting strings for the internal
*printf functions which still cannot handle print formatting string directives
such as "I64d", "I64u", and others available on MSVC, MinGW, Intel's ICC, and
other DOS/Windows compilers.
This reverts previous commit part which did:
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
was discovered to be problematic while investigating an incident reported by
Von back in May. curl in this case doesn't include a Content-Length: or
Transfer-Encoding: chunked header which is illegal. This test case is
added to DISABLED until a solution is found.
the names of the curl_off_t formatting string directives now become
CURL_FORMAT_CURL_OFF_T and CURL_FORMAT_CURL_OFF_TU.
CURL_FMT_OFF_T -> CURL_FORMAT_CURL_OFF_T
CURL_FMT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
Remove the use of an internal name for the curl_off_t formatting string directives
and use the common one available from the inside and outside of the library.
FORMAT_OFF_T -> CURL_FORMAT_CURL_OFF_T
FORMAT_OFF_TU -> CURL_FORMAT_CURL_OFF_TU
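Usage-wise, the public names work along these lines (a sketch; the macros
supply the conversion letters, the caller adds the '%'):

  curl_off_t size = 1234567890;
  printf("size: %" CURL_FORMAT_CURL_OFF_T " bytes\n", size);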
when a server responded with long headers and data. Luckily, the buffer
overflowed into another unused buffer, so no actual harm was done.
Added test cases 1060 and 1061 to verify.
line of a multiline FTP response whose last byte landed exactly at the end
of the BUFSIZE-length buffer would be treated as the terminal response
line. The following response code read in would then actually be the
end of the previous response line, and all responses from then on would
correspond to the wrong command. Test case 1062 verifies this.
Stop closing a never-opened ftp socket.
(http://curl.haxx.se/bug/view.cgi?id=2042430) with a patch. "NTLM Windows
SSPI code is not thread safe". This was due to libcurl using static
variables to tell whether to load the necessary SSPI DLL, but now the loading
has been moved to the more suitable curl_global_init() call.
(http://curl.haxx.se/bug/view.cgi?id=2042440) with a patch. He identified a
problem when using NTLM over a proxy but the end-point does Basic, and then
libcurl would do wrong when the host sent "Connection: close" as the proxy's
NTLM state was erroneously cleared.
NetWare curlbuild.h settings depend on whether LIBC or CLIB is used.
The NetWare specific Makefile is capable of knowing which target is being built.
So, finally, the NetWare Makefile will take care of generating curlbuild.h
from the CVS checked out curlbuild.h.dist for any non-configure target
when the host system is not running buildconf.bat.
All the curlbuild.h stuff was done taking into consideration that no adjustment
would be needed in non-configure makefiles.
As it is documented, when trying to build on non-configure capable systems or on
systems which for any reason don't run the true configure script, it is required
to have the proper curlbuild.h in place before calling any makefile.
Due to the hardcore memory debugging stuff c-ares enabled debug builds also need
the file in the proper place before attempting to build c-ares.
in a set of double-quoted strings, this macro will now return an expansion which
consists of a single double-quoted string resulting from concatenating all
of them.
Validate that aclocal and automake versions match.
Improve removal of previous run generated files.
Remove verbose debug logging of aclocal on Solaris.
connection with the multi interface even if a previous use of it caused a
CURLE_PEER_FAILED_VERIFICATION to get returned. I now make sure that failed
SSL connections properly close the connections.
proved how PUT and POST with a redirect could lead to a "hang" due to the
data stream not being rewound properly when it had to in order to get sent
properly (again) to the subsequent URL. This is now fixed and these test
cases are no longer disabled.
The symptom:
* Users (usually, but not always) on 2-Wire routers and the Comcast service
and a wired connection to their router would find that the second and
subsequent DNS lookups from fresh processes using c-ares to resolve the same
address would cause the process to never see a reply (it keeps polling for
around 1m15s before giving up).
The repro:
* On such a machine (and yeah, it took us a lot of QA to find the systems
that reproduce such a specific problem!), do 'ahost www.secondlife.com',
then do it again. The first process's lookup will work, subsequent lookups
will time-out and fail.
The cause:
* init_id_key() was calling randomize_key() *before* it initialized
key->state, meaning that the randomness generated by randomize_key() is
immediately overwritten with deterministic values. (/dev/urandom was also
being read incorrectly in the c-ares version we were using, but this was
fixed in a later version.)
* This makes the stream of generated query-IDs from any new c-ares process
be an identical and predictable sequence of IDs.
* This makes the 2-Wire's default built-in DNS server detect these queries
as probable-duplicates and (erroneously) not respond at all.
Prior versions of autoconf defined _ALL_SOURCE if _AIX was defined. But,
autoconf 2.62's version of AC_AIX defines _ALL_SOURCE along with four other
preprocessor symbols no matter if the system is AIX or not. To keep the
traditional behaviour, as well as a uniform one, across autoconf versions,
AC_AIX is replaced with our own internal macro.
with -C - sent garbage in the Content-Range: header. I fixed this problem by
making sure libcurl always sets the size of the _entire_ upload if an app
attempts to do resumed uploads since libcurl simply cannot know the size of
what is currently at the server end. Test 1041 is no longer disabled.
Rebooting the Solaris system, releasing allocated memory and swap,
has allowed buildconf and configure to complete successfully. Further
tests on the system might allow determination of the problem origin.
Solaris AutoBuilds succeeded on August 2 and 3.
when we have been doing this since revision 1.47 of configure.ac 4 years and
5 months ago when cross-compiling a Windows target. We actually don't use any
function from the Windows GDI (Graphics Device Interface) related with drawing
or graphics-related operations.
incorrectly--the host name is treated as part of the user name and the
port number becomes the password. This can be observed in test 279
(was KNOWN_ISSUE #54).
an URL in a Location: header didn't have the scope ID removed, so an
invalid host name was used. Second, when the scope ID was removed, it
also removed any port number that may have existed in the URL.
parser to allow numerical IPv6-addresses to be specified with the scope
given, as per RFC4007 - with a percent letter that itself needs to be URL
escaped. For example, for an address of fe80::1234%1 the HTTP URL is:
"http://[fe80::1234%251]/"
true bug in libcurl built with OpenSSL. It made curl_easy_getinfo() more or
less always return 0 for CURLINFO_SSL_VERIFYRESULT because the function that
would set it to something non-zero would return before the assign in almost
all error cases. The internal variable is now set to non-zero from the start
of the function only to get cleared later on if things work out fine.
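With the fix the info can be trusted again; a minimal sketch, assuming a
handle that just completed an HTTPS transfer:

  #include <curl/curl.h>

  static long ssl_verify_result(CURL *curl)
  {
    long verifyresult = 1;  /* 0 means the peer verification succeeded */
    curl_easy_getinfo(curl, CURLINFO_SSL_VERIFYRESULT, &verifyresult);
    return verifyresult;
  }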
by Ben Sutcliffe. The test when run manually shows a problem in curl,
but the test harness web server doesn't run the test correctly so it's
disabled for now.
server using the multi interface, the commands are not being sent correctly
and instead the connection is "cancelled" (the operation is considered done)
prematurely. There is a half-baked (busy-looping) patch provided in the bug
report but it cannot be accepted as-is. See
http://curl.haxx.se/bug/view.cgi?id=2006544
146 - Yehoshua Hershberg's re-using of connections that failed with
CURLE_PEER_FAILED_VERIFICATION
147 - PHP's bug report #43158 (http://bugs.php.net/bug.php?id=43158) identifies
a true bug in libcurl built with OpenSSL.
This quadrigraph, used before a C preprocessor 'define' directive, could
fool M4 when processing this file and make it think that the line contains
a pure M4 'define' macro.
in top Makefile.am triggered a problem that prevented aclocal from running
successfully on SunOS 5.10 with GNU m4 1.4.5 and GNU Autoconf 2.61
A tarball which reproduces the mentioned problem is the one dated July-28-2008:
http://cool.haxx.se/curl-daily/curl-7.19.0-20080728.tar.gz
We actually don't need all the bells and whistles that the above mechanism
provides. We only need to include our m4/reentrant.m4 file in acinclude.m4
so here we go with this simpler mechanism.
because at the current point in time I think the benefit of adding that new
return code is very slim and it is a lot of work to introduce new return codes
(for docs and maintenance etc)
I added "145 - Phil Blundell's CURLOPT_SCOPE patch/work" since I want it
sorted/committed.
but it breaks aclocal execution on some systems, with the following error:
Can't locate object method "rel2abs" via package "File::Spec" at /usr/local/bin/aclocal line 256.
overrun" (http://curl.haxx.se/bug/view.cgi?id=2026240) identifying two
problems, and providing the fix for them:
- CURL_READFUNC_PAUSE did in fact not pause the _sending_ of data that it is
  designed for but paused _receiving_ of data! (see the callback sketch
  after this list)
- libcurl didn't internally set the read counter to zero when this return
code was detected, which would potentially lead to junk getting sent to
the server.
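A sketch of a read callback relying on the now-working return code (the
callback and its state variable are illustrative names, not from the patch):

  #include <curl/curl.h>

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    int *have_data = (int *)userp;  /* illustrative application state */
    if(!*have_data)
      return CURL_READFUNC_PAUSE;  /* pauses *sending*; resume later with
                                      curl_easy_pause() */
    /* otherwise copy at most size*nitems bytes into buf and return the
       number of bytes copied; returning 0 signals end of data */
    return 0;
  }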
http://sources.redhat.com/automake/automake.html#Extending-aclocal documents
that starting with Automake 1.8, aclocal will warn about all underquoted calls
to AC_DEFUN due to the fact that in a single aclocal run it might include more
than once all .m4 files which it finds available, this includes .m4 files from
other software packages.
If the first argument to AC_DEFUN is underquoted and the same macro is included
more than once, successive inclusions after the first one will expand the macro
instead of assuming it is the same as the first one included.
needed, and being able to define it if appropriate for further configure tests
as well as for the generated config file.
Introduced reentrant.m4 intended for our reentrant related autotools/m4 macros.
non-zero with the fixed value of 1. We should strive to make options
support '1' for explicitly enabling them, as that will allow us to extend
them in the future without breaking older programs.
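In application code that means passing exactly 1 (as a long) for such
boolean-style options; a minimal sketch, assuming an initialized easy handle:

  /* prefer the explicit value 1; other non-zero values may acquire
     special meanings in future libcurl versions */
  curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);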
function recvfrom as a result of the info additionally logged when running on a
Solaris system.
The compiler error showed that the prototype being used on Solaris was the one
declared in line 427 of "/usr/include/sys/socket.h" as:
function(int,
pointer to void,
unsigned int,
int,
pointer to struct sockaddr,
pointer to void) returning int
finds out its return type and the types of its arguments. Added definitions
for the config files of non-configure systems, and introduced the macro
sreadfrom which will be used on UDP sockets as a recvfrom() wrapper.
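A sketch of the wrapper's shape; the RECVFROM_TYPE_* macros stand for the
per-platform argument types that configure or the static config files are
expected to provide (the exact macro names are an assumption here):

  /* in a config/setup header; casts paper over platform prototype
     differences such as the Solaris one quoted above */
  #define sreadfrom(s, buf, len, from, fromlen)           \
    (ssize_t)recvfrom((RECVFROM_TYPE_ARG1)(s),            \
                      (RECVFROM_TYPE_ARG2 *)(buf),        \
                      (RECVFROM_TYPE_ARG3)(len), 0,       \
                      (struct sockaddr *)(from),          \
                      (RECVFROM_TYPE_ARG6 *)(fromlen))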
set the attribute that has changed instead of all possible ones. Hopefully,
this will solve the "Permission denied" problem that Nagarajan Sreenivasan
reported when setting some modes, but regardless, it saves a protocol
round trip in the chmod case.
curl-library list on July 9th 2008 by Mathew Hounsell)
NOTE: the name resolver functions of various libc implementations don't
re-read name server information unless explicitly told so (by for example
calling res_init(3)). This may cause libcurl to keep using the older server
even if DHCP has updated the server info, and this may look like a DNS cache
issue to the casual libcurl-app user.
is set in fdset.events" (http://curl.haxx.se/bug/view.cgi?id=2015126) which
exactly pinpointed the problem only triggered on Windows Vista, provided
reference to docs and also a fix. There is much work behind Peter Lamberg's
excellent bug report. Thank You!
fix for it. It occurred when you did an FTP transfer using
CURLFTPMETHOD_SINGLECWD and then did another one on the same easy handle but
switched to CURLFTPMETHOD_NOCWD. Due to the "dir depth" variable not being
cleared properly. Scott's test case is now known as test 539 and it
verifies the fix.
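In application terms the failing sequence was this (a sketch, assuming a
reused easy handle with URLs already set):

  /* first transfer: single CWD into the target directory */
  curl_easy_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long)CURLFTPMETHOD_SINGLECWD);
  curl_easy_perform(curl);

  /* second transfer on the same handle: no CWD at all; the stale
     "dir depth" counter used to break this */
  curl_easy_setopt(curl, CURLOPT_FTP_FILEMETHOD, (long)CURLFTPMETHOD_NOCWD);
  curl_easy_perform(curl);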
the target host has only A records, it automatically falls back to an
AF_INET lookup and gives you the A results. However, if the target host has
a CNAME record, this behaviour is defeated since the original query does
return some data even though ares_parse_aaaa_reply() doesn't consider it
relevant. Here's a small patch to make it behave the same with and without
the CNAME.
CURLINFO_APPCONNECT_TIME. This is set when the "application layer"
handshake/connection is completed (typically SSL, TLS or SSH). By using this
you can figure out the application layer's own connect time. You can extract
the time stamp using curl's -w option and the new variable named
'time_appconnect'. This feature was sponsored by Lenny Rachitsky at NeuStar.
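From an application it is read as a double, and the -w variable gives the
same value (a sketch, assuming a completed transfer on 'curl'):

  double appconnect = 0.0;  /* seconds from start until SSL/SSH is done */
  curl_easy_getinfo(curl, CURLINFO_APPCONNECT_TIME, &appconnect);

  /* command line equivalent: curl -w '%{time_appconnect}\n' <URL> */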
not posix or anything and thus c-ares failed to build on hurd (and possibly
elsewhere). The define was also somewhat artificially used in the windows
port. Now, I instead rewrote the use of gethostbyname to enlarge the host
name buffer in case of need and totally avoid the use of the MAXHOSTNAMELEN
define. I thus also removed the define from the namser.h file where it was
once added for the windows build.
I also fixed the init_by_defaults() function to not leak memory in case of
error.
some systems" (http://curl.haxx.se/bug/view.cgi?id=1999181). The problem was
that the configure script did not use the _POSIX_MONOTONIC_CLOCK feature test
macro when checking monotonic clock availability. This is now fixed and the
monotonic clock will not be used unless the feature test macro is defined
with a value greater than zero indicating always supported.
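The corresponding compile-time test, as a sketch:

  #include <unistd.h>

  #if defined(_POSIX_MONOTONIC_CLOCK) && (_POSIX_MONOTONIC_CLOCK > 0)
  /* the monotonic clock is always supported; clock_gettime() may be
     used with CLOCK_MONOTONIC */
  #endif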
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=487567) pointing out that
libcurl used Content-Range: instead of Range when doing a range request with
--head (CURLOPT_NOBODY). This is now fixed and test case 1032 was added to
verify.
enough at detecting compilation errors or at least it has been properly
configured to do so. Configuration heavily depends on this capability, so
if this compiler sanity check fails the configuration process will now fail.
handshake with an SSLv2 server, and it turned out to be because it didn't
recognize the cipher named "rc4-md5". In our list that cipher was named
plainly "rc4". I've now added rc4-md5 to work as an alias as Phil reported
that it made things work for him again.
crashed libcurl. This is now addressed by making sure we use "plain send"
internally when doing the socks handshake instead of the Curl_write()
function which is designed to use the "target" protocol. That's then SCP or
SFTP in this case. I also took the opportunity and cleaned up some ssh-
related #ifdefs in the code for readability.
libcurl to not tell the app properly when a socket was closed (when the name
resolve done by c-ares is done) and then immediately re-created and put to
use again (for the actual connection). Since the closure will make the
"watch status" get lost in several event-based systems libcurl will need to
tell the app about this close/re-create case.
multi interface with pipelining enabled as it would wrongly check for,
detect and close "dead connections" even though that connection was already
in use!
warning in the code though but we need NSS' base64.h header for that and we
don't currently have a suitable way to include it as our own base64.h header
kind of "blocks" it.
libraries are supported. Starting now, each underlying SSL library support
code does a set of defines for the 16 functions the generic layer (sslgen.c)
uses (all these new function defines use the prefix "curlssl_"). This
greatly improves the readability of the generic layer by involving far
fewer #ifdefs and other preprocessor conditionals, and should make it
easier for people to make libcurl work with new SSL libraries.
Hopefully I can later on document these 16 functions somewhat as well.
I also made most of the internal SSL-dependent functions (using Curl_ssl_
prefix) #defined to nothing when no SSL support is requested - previously
they would unnecessarily call mostly empty functions.
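The mapping pattern is roughly the following (these define names are
illustrative for the OpenSSL backend, not verbatim from the source):

  /* in the backend-specific header */
  #define curlssl_init()      Curl_ossl_init()
  #define curlssl_cleanup()   Curl_ossl_cleanup()
  /* ...one define per function that the generic sslgen.c layer calls,
     16 in total... */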
All boolean options (such as -O, -I, -v etc), both short and long versions,
now always switch on/enable the option named. Using the same option multiple
times thus makes no difference. To switch off one of those options, you need
to use the long version of the option and type --no-OPTION. For example, to
disable verbose mode you use --no-verbose!
- Added --remote-name-all to curl, which if used changes the default for all
given URLs to be dealt with as if -O is used. So if you want to disable that
for a specific URL after --remote-name-all has been used, you must use -o -
or --no-remote-name.
curl_easy_getinfo. It returns a pointer to a string with the most recently
used IP address. Modified test case 500 to also verify this feature. The
implementation of this feature was sponsored by Lenny Rachitsky at NeuStar.
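A usage sketch; the info name is presumably CURLINFO_PRIMARY_IP, and the
returned string is owned by the easy handle:

  #include <stdio.h>
  #include <curl/curl.h>

  static void show_ip(CURL *curl)  /* a handle that completed a transfer */
  {
    char *ip = NULL;
    if(curl_easy_getinfo(curl, CURLINFO_PRIMARY_IP, &ip) == CURLE_OK && ip)
      printf("most recently used IP: %s\n", ip);
  }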
- Updated main.c to return CURLE_OK if PARAM_HELP_REQUESTED was returned
from getparameter instead of CURLE_FAILED_INIT. No point in returning
an error if --help or --version were requested.
the curl_multi_socket() API with HTTP pipelining enabled and could lead to
the pipeline basically stalling for a very long period of time until it took
off again.
provided excellent repeat recipes. I fixed the cases I managed to reproduce
but Jeff still got some (SCP) problems even after these fixes:
http://curl.haxx.se/mail/lib-2008-05/0342.html
due to KfW's library header files exporting symbols/macros that should be
kept private to the KfW library. See ticket #5601 at http://krbdev.mit.edu/rt/
how the HTTP redirect following code didn't properly follow to a new URL if
the new URL was but a query string such as "Location: ?moo=foo". Test case
1031 was added to verify this fix.
_ Updated packages/OS400/curl.inc.in with new definitions.
_ New connect/bind/sendto/recvfrom wrappers to support AF_UNIX sockets.
_ Include files line length shortened below 100 chars.
_ Const parameter in lib/qssl.[ch].
_ Typos in packages/OS400/initscript.sh.
interface problems:
o with pipelining disabled, the state should never be set to WAITDO but
rather go straight to DO
o we had multiple states for which the internal function returned no socket
at all to wait for, with the effect that libcurl calls the socket callback
(when curl_multi_socket() is used) with REMOVE prematurely (as it would be
added again within very shortly)
o when in DO and DOING states, the HTTP and HTTPS protocol handler functions
didn't return that the socket should be waited for writing, but instead it
was treated as if no socket was needing monitoring so again REMOVE was
called prematurely.
When cross compiling WinCE with the arm-wince-cegcc-gcc C compiler
symbol __CEGCC__ is defined and the unix-like compatibility layer
is used. For our purposes this is not a native Windows build.
When cross compiling WinCE with the arm-wince-mingw32ce-gcc C compiler
symbol __MINGW32CE__ is defined and the unix-like compatibility layer
is not used. For our purposes this _is_ a native Windows build.
and receive data over a connection previously set up with curl_easy_perform()
and its CURLOPT_CONNECT_ONLY option. The sendrecv.c example was added to
show how they can be used.
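A condensed sketch along the lines of that example (error handling omitted;
both functions can also return CURLE_AGAIN when the socket isn't ready):

  #include <string.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    size_t n = 0;
    char buf[1024];

    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
    curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
    if(curl_easy_perform(curl) == CURLE_OK) {  /* connects, nothing more */
      const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
      curl_easy_send(curl, req, strlen(req), &n);
      curl_easy_recv(curl, buf, sizeof(buf), &n);
    }
    curl_easy_cleanup(curl);
    return 0;
  }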
when function clock_gettime() is available and the monotonic timer is
also available. Otherwise, in some cases, librt or libposix4 could be used
for linking even when finally not using the clock_gettime() function due
to lack of the monotonic clock.
autoconf 2.57 usage (which is the version you have specified as the minimum
version). It's a minor change but it does clean up some warnings with newer
autoconf (specifically 2.62).
when using CURL_AUTH_ANY" (http://curl.haxx.se/bug/view.cgi?id=1945240).
The problem was that when libcurl rewound a stream meant for upload when it
would prepare for a second request, it could accidentally continue the
sending of the rewound data on the first request instead of on the second.
Ben also provided test case 1030 that verifies this fix.
since libcurl used getprotobyname() and that isn't thread-safe. We now
switched to use IPPROTO_TCP unconditionally, but perhaps the proper fix is
to detect the thread-safe version of the function and use that.
http://curl.haxx.se/mail/lib-2008-05/0011.html
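One typical place for such a lookup is enabling TCP_NODELAY; a sketch of the
thread-safe variant (the helper name is hypothetical):

  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>

  static void set_nodelay(int sockfd)
  {
    int on = 1;
    /* IPPROTO_TCP is a compile-time constant; getprotobyname("tcp")
       returns the same number but is not thread-safe everywhere */
    setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, (void *)&on, sizeof(on));
  }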
redirections and thus cannot use CURLOPT_FOLLOWLOCATION easily, we now
introduce the new CURLINFO_REDIRECT_URL option that lets applications
extract the URL libcurl would've redirected to if it had been told to. This
then enables the application to continue to that URL as it thinks is
suitable, without having to re-implement the magic of creating the new URL
from the Location: header etc. Test 1029 verifies it.
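A usage sketch, assuming an easy handle that just returned from
curl_easy_perform() without following the redirect:

  char *redirect_url = NULL;
  curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &redirect_url);
  if(redirect_url) {
    /* the application decides whether and how to follow */
    curl_easy_setopt(curl, CURLOPT_URL, redirect_url);
    curl_easy_perform(curl);
  }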
server input and response request files of the test harness sws server.
Reintroduce the small <postcheck> delay for test 1001. The delay is needed
even with the accelerated writing of server input and response request
files in the test harness sws server.
http://curl.haxx.se/mail/lib-2008-04/0385.html
Define HAVE_GSSMIT if <gssapi/{gssapi.h,gssapi_generic.h,gssapi_krb5.h}> are
available, otherwise define HAVE_GSSHEIMDAL if <gssapi.h> is available.
Only define GSS_C_NT_HOSTBASED_SERVICE to gss_nt_service_name if
GSS_C_NT_HOSTBASED_SERVICE isn't declared by the gssapi headers. This should
avoid breakage in case we wrongly recognize Heimdal as MIT again.
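In preprocessor terms the fallback then amounts to something like this, with
HAVE_GSS_C_NT_HOSTBASED_SERVICE standing in for whatever symbol configure
sets when the declaration is found:

  #ifndef HAVE_GSS_C_NT_HOSTBASED_SERVICE
  /* older MIT-style headers only provide the gss_nt_service_name symbol */
  #define GSS_C_NT_HOSTBASED_SERVICE gss_nt_service_name
  #endif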
message when libcurl doesn't get a 220 back immediately on connect, I now
changed it to be more specific on what the problem is. Also worth noticing:
while the bug report contains an example where the response is:
421 There are too many connected users, please try again later
we cannot assume that the error message will always be this readable nor
that it fits within a particular boundary etc.
GET simply because previously when you set CURLOPT_NOBODY to TRUE first and
then FALSE you'd end up in a broken state where a HTTP request would do a
HEAD but still act a lot like a GET and hang waiting for the content etc.
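The formerly broken toggling, as application code (a sketch with a reused
easy handle):

  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);  /* issue a HEAD request */
  curl_easy_perform(curl);

  curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);  /* now a proper GET again */
  curl_easy_perform(curl);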
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
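A sketch of the option in use; my_ctx, datalen and read_cb are hypothetical
application-side names, and the read callback is set with
CURLOPT_READFUNCTION:

  struct curl_httppost *post = NULL, *last = NULL;

  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "sendfile",
               CURLFORM_STREAM, &my_ctx,                /* handed to read_cb */
               CURLFORM_CONTENTSLENGTH, (long)datalen,  /* mandatory here */
               CURLFORM_FILENAME, "upload.bin",         /* only if wanted */
               CURLFORM_END);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);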
# The output .so file lacks the soname number which we currently have within the lib/Makefile.am file
# Add full (4 or 5 libs) SSL support
# Add INSTALL target (EXTRA_DIST variables in Makefile.am may be moved to Makefile.inc so that CMake/CPack is aware of what's to include).
# Add CTests(?)
# Check on all possible platforms
# Test with as many configurations as possible (with or without each option)
# Create scripts that help keep the CMake build system up to date (to reduce maintenance). According to Tetetest:
# - lists of headers that 'configure' checks for;
# - curl-specific tests (the ones that are in m4/curl-*.m4 files);
# - (most obvious thing:) curl version numbers.
# Add documentation subproject
#
# To check:
# (From Daniel Stenberg) The cmake build selected to run gcc with -fPIC on my box while the plain configure script did not.
# (From Daniel Stenberg) The gcc command line uses neither -g nor any -O options. As a developer, I also treasure our configure script's --enable-debug option that sets a long range of "picky" compiler options.
string(REGEX REPLACE "\\$\\(([a-zA-Z_][a-zA-Z0-9_]*)\\)" "\${\\1}" MAKEFILE_INC_TEXT ${MAKEFILE_INC_TEXT}) # Replace $() with ${}
string(REGEX REPLACE "@([a-zA-Z_][a-zA-Z0-9_]*)@" "\${\\1}" MAKEFILE_INC_TEXT ${MAKEFILE_INC_TEXT}) # Replace @VAR@ with ${VAR}, even if that may not be read by CMake scripts
file(WRITE ${OUTPUT_FILE} ${MAKEFILE_INC_TEXT})
endfunction()
add_subdirectory(lib)
if(BUILD_CURL_EXE)
add_subdirectory(src)
endif()
if(BUILD_CURL_TESTS)
add_subdirectory(tests)
endif()
# This needs to be run very last so other parts of the scripts can take advantage of this.
if(NOT CURL_CONFIG_HAS_BEEN_RUN_BEFORE)
  set(CURL_CONFIG_HAS_BEEN_RUN_BEFORE 1 CACHE INTERNAL
      "Flag to track whether this is the first time running CMake or if CMake has been configured before")
endif()
    /* No domain search to do; just try the name as-is. */
    *s = strdup(name);
    return (*s) ? ARES_SUCCESS : ARES_ENOMEM;
  }
  *s = NULL;
  return ARES_SUCCESS;
}