message when libcurl doesn't get a 220 back immediately on connect, I have
now changed it to be more specific about what the problem is. Also worth
noting:
while the bug report contains an example where the response is:
421 There are too many connected users, please try again later
we cannot assume that the error message will always be this readable, nor
that it fits within a particular boundary, etc.
GET simply because previously, when you set CURLOPT_NOBODY to TRUE first
and then FALSE, you'd end up in a broken state where a HTTP request would
do a HEAD but still act a lot like a GET and hang waiting for the content
etc.
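A minimal sketch of the toggle in question, with a hypothetical easy
handle 'curl' and an example URL:

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* request a HEAD */
  curl_easy_perform(curl);
  curl_easy_setopt(curl, CURLOPT_NOBODY, 0L); /* back to GET again */
  curl_easy_perform(curl); /* this second transfer used to misbehave */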
application to provide data for a multipart with the read callback. Note
that the size needs to be provided with CURLFORM_CONTENTSLENGTH when the
stream option is used. This feature is verified by the new test case
554. This feature was sponsored by Xponaut.
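A rough sketch of how an application can use this, where my_read_cb,
my_state and my_size are made-up names for the application's own read
callback and data:

  struct curl_httppost *post = NULL, *last = NULL;
  curl_formadd(&post, &last,
               CURLFORM_COPYNAME, "data",
               CURLFORM_STREAM, &my_state,             /* read callback data */
               CURLFORM_CONTENTSLENGTH, (long)my_size, /* mandatory size */
               CURLFORM_END);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, my_read_cb);
  curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);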
support when curl didn't even have regular LDAP support. It looks like
this could happen when the --enable-ldaps configure switch is given but
configure couldn't find the LDAP headers or libraries.
default instead of a ca bundle. The configure script will also look for a
ca path if no ca bundle is found and no option given.
- Fixed detection of previously installed curl-ca-bundle.crt
CURLOPT_FTP_CREATE_MISSING_DIRS to create a directory, and then re-used the
handle and uploaded another file to another directory that needed to be
created, the second upload would fail. Another case of a state variable that
wasn't properly reset between requests.
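The now-working sequence looks roughly like this (example URLs; a real
upload of course also needs a read callback or input file set up):

  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS, 1L);
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/dir1/file1");
  curl_easy_perform(curl); /* creates dir1, uploads file1 */
  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/dir2/file2");
  curl_easy_perform(curl); /* creating dir2 here used to fail */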
- I rewrote the 100-continue code to use a single state variable instead of
the previous two ones. I think it made the logic somewhat clearer.
(http://curl.haxx.se/bug/view.cgi?id=1911069) that identified a race
condition in the name resolver code when the DNS cache is shared between
multiple easy handles, each running in simultaneous threads that could cause
crashes.
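For reference, such a shared DNS cache is set up with the share interface,
sketched here with made-up lock callbacks (which must do real mutex locking
when the handles run in different threads):

  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS);
  curl_share_setopt(share, CURLSHOPT_LOCKFUNC, my_lock_cb);
  curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, my_unlock_cb);
  /* then for each easy handle, in each thread: */
  curl_easy_setopt(curl, CURLOPT_SHARE, share);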
does nothing with them, just to make sure libcurl users always use three
arguments to this function. Due to its use of ... for the third argument, it
is otherwise hard to detect abuse.
easy handle if curl_easy_reset() was used between them. I fixed it and Brian
verified that it cured his problem.
- Brian Ulm reported that if you first tried to download a non-existing SFTP
file and then fetched an existing one and re-used the handle, libcurl would
still report the second one as non-existing as well! I fixed it and Brian
verified that it cured his problem.
select/poll calls will only be retried upon EINTR failures, as was
previously the case in lib/select.c revision 1.29. In this way
Curl_socket_ready() and Curl_poll() will again fail on any select/poll
errors other than EINTR.
better control over the exact state of the connection's SSL status, so that
we know exactly when it has completed the SSL negotiation or not and there
won't be accidental re-uses of connections that are wrongly believed to be
in an SSL-negotiation-completed state.
such as the CURLOPT_SSL_CTX_FUNCTION one treat that as if it was a Location:
following. The patch that introduced this feature was done for 7.11.0, but
this code and functionality has been broken since about 7.15.4 (March 2006)
with the introduction of non-blocking OpenSSL "connects".
It was a hack to begin with, and since it doesn't work, hasn't worked
correctly for a long time and nobody has even noticed, I consider it a very
suitable subject for plain removal. And so it was done.
get a fresh one downloaded and created with 'make ca-bundle', or you can
get one from here => http://curl.haxx.se/docs/caextract.html if you want
one freshly extracted from Mozilla's recent list of ca certs.
The configure option --with-ca-bundle now lets you specify what file to use
as default ca bundle for your build. If not specified, the configure script
will check a few known standard places for a global ca cert to use.
connection by force when it was called before the entire request is
completed, simply because we can't know if the connection really can be
re-used safely at that point.
verification is requested. Previously it would return failure if gnutls
failed to get the server cert, even though no verification was asked for.
- Fix my Curl_timeleft() leftover mistake in the gnutls code
out and provides a test program that demonstrates that libcurl might not
set the error description message for error CURLE_COULDNT_RESOLVE_HOST for
Windows threaded name resolver builds. Fixed now.
(http://curl.haxx.se/bug/view.cgi?id=1889856): When using the gnutls ssl
layer, cleaning-up and reinitializing curl ends up with https requests
failing with "ASN1 parser: Element was not found" errors. Obviously a
regression added in 7.16.3.
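The failing sequence was essentially this sketch:

  curl_global_init(CURL_GLOBAL_ALL);
  /* ... https transfers ... */
  curl_global_cleanup();

  curl_global_init(CURL_GLOBAL_ALL);
  /* https transfers here failed with the ASN1 parser error */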
all of curl's test-generated configuration and key files are fine, a real
connection is established to the test harness sftp server, authenticating
and running a simple sftp remote pwd command.
The verification is done using OpenSSH's or SunSSH's sftp client tool, with
a configuration file that uses the same options as the test harness socks
server, except that dynamic forwarding is not used for sftp.
creates a suitable ca-bundle.crt file in PEM format for use with curl. The
recommended way to run it is to use 'make ca-bundle' in the build tree root.
"HttpOnly" feature introduced by Microsoft and apparently also supported by
Firefox: http://msdn2.microsoft.com/en-us/library/ms533046.aspx . HttpOnly
is now supported when received from servers in HTTP headers, when written to
cookie jars and when read from existing cookie jars.
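As an illustration (with a made-up cookie), a server can send:

  Set-Cookie: sid=abc123; HttpOnly

and in the Netscape-format cookie jar the flag is preserved by prefixing
the domain field, roughly like:

  #HttpOnly_example.com  FALSE  /  FALSE  0  sid  abc123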
the SingleRequest one to make pipelining better. It is a bit tricky to keep
things related to the actual request and things related to the actual
connection in their right places.
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
proxy authentication and the proxy replies with an auth (like NTLM) and then
closes the connection after that initial informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it is supposed to. It does now, however,
as it will detect if one or more authentication methods were available and
asked for, and will thus retry the connection and continue from there.
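In libcurl terms, the scenario is roughly this (proxy address and
credentials are examples):

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
  curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_NTLM);
  curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:secret");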
- I made the progress callback get called properly during proxy CONNECT.
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as they then wouldn't pipeline nearly as
nicely as one would think. Test case 530 was also updated to take the
improved functionality into account.
silly code left from when we switched to let the multi handle "hold" the dns
cache when using the multi interface... Of course this only triggered when a
certain function call returned an error at the correct moment.
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang
problem that occurred when doing a large HTTP POST request with the
request-body read from a callback.
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep the naming consistent and to avoid getting two dashes
in these option names. Eric also provided an update to the man page for
the new option.
spanking new CURLOPT_SEEKFUNCTION simply to take advantage of the improved
performance for the upload resume cases where you want to upload the last
few bytes of a very large file. To implement this decently, I had to switch
the client code for uploading from fopen()/fread() to plain open()/read()
so that we can use lseek() to do >32 bit seeks (which fseek() doesn't
allow) on systems that offer support for that.
(it already skipped /usr/lib before). /usr/lib64 is the default library
directory on many 64-bit systems and it's unlikely that anyone would use
the path privately on systems where it's not.
libcurl to seek in a given input stream. This is particularly important
when doing upload resumes when there's already a huge part of the file
present remotely. Before, and still if this callback isn't used, libcurl
will read and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size, I/O
bandwidth and more). This new function will also be preferred over
CURLOPT_IOCTLFUNCTION for seeking back in a stream when doing multi-stage
HTTP auth with POST/PUT.
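A sketch of what such a seek callback can look like, with invented names
and a plain file descriptor passed via CURLOPT_SEEKDATA:

  static int my_seek(void *userp, curl_off_t offset, int origin)
  {
    int fd = *(int *)userp;
    /* origin is SEEK_SET, SEEK_CUR or SEEK_END */
    return (lseek(fd, (off_t)offset, origin) == (off_t)-1) ? 1 : 0;
  }

  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
  curl_easy_setopt(curl, CURLOPT_SEEKDATA, &fd);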
(http://curl.haxx.se/bug/view.cgi?id=1868255) with a patch. It identifies
and fixes a problem with parsing WWW-Authenticate: headers with additional
spaces in the line, which the parser wasn't written to deal with.
(http://curl.haxx.se/bug/view.cgi?id=1863171) where he pointed out that
libcurl's date parser didn't accept a +1300 time zone, which actually is
used fairly often (such as during New Zealand's Daylight Saving Time), so I
modified the parser to now accept up to and including -1400 to +1400.
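For example, curl_getdate() now deals with a (made-up) date string like
this:

  time_t t = curl_getdate("Thu, 14 Feb 2008 02:00:00 +1300", NULL);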
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
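In libcurl terms, letting the proxy do the name resolving now looks like
this (proxy address is an example):

  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);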
proxy do the host name resolving, and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used do we resolve the host name locally
and pass on only the IP address to the proxy.
made it an unsigned int. The type was only used in the curl_sockaddr struct
definition (only used by the curl_opensocket_callback). On all platforms I
could find information about, socklen_t is a 32-bit unsigned type, so I
don't think this will break the API or ABI. The main reason for this change
is of course for all the platforms that don't have a socklen_t definition
in their headers to build fine again. Providing our own configure magic and
a custom definition of socklen_t on those systems proved to work, but was a
lot of cruft, code and extra magic, when this very small change of type
seems harmless and still solves the missing socklen_t problem.
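The struct in curl.h thus now looks roughly like this:

  struct curl_sockaddr {
    int family;
    int socktype;
    int protocol;
    unsigned int addrlen; /* previously a socklen_t */
    struct sockaddr addr;
  };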
is an unofficial SOCKS4 variant that sends the host name to the proxy
instead of the resolved address (which is already supported by SOCKS5).
--socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
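The libcurl equivalent of --socks4a, with an example proxy address:

  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS4A);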
(http://curl.haxx.se/bug/view.cgi?id=1850730) I wrote up test case 552. The
test is doing a 70K POST with a read callback and an ioctl callback over a
proxy requiring Digest auth. The test case code is more or less identical to
the test recipe code provided by Spacen Jasset (who submitted the bug report).
(http://curl.haxx.se/bug/view.cgi?id=1856628) and provided a fix for the
(small) memory leak in the SSL session ID caching code. It happened when a
previous entry in the cache was re-used.
and makes wrong assumptions about the build target when it isn't specified.
So, if no build target has been defined, we will target WinXP when building
with MSVC 9.0 (VS2008).
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem for re-used connections that previously had sent
Expect: 100-continue and in some situations the subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix
(which makes the flag get set or reset unconditionally in every single
request) is fine and is now used!
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added test cases 547 and 548 to
verify two variations of POST over proxy with NTLM.
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
the same state struct as the host auth, so both could never be used at the
same time! I fixed it (without being able to check) to use two separate
structs to allow authentication using Negotiate on host and proxy
simultaneously.
callback was used, as it could wrongly pass on a bad size for the outgoing
HTTP header. The bad size would be a very large value as it was a
wrapped-around size_t. This happened when the whole HTTP request failed to get sent
in one single send. http://curl.haxx.se/mail/lib-2007-11/0165.html
forwarded from the Gentoo bug tracker by Daniel Black and was originally
submitted by Robin Johnson, pointed out that libcurl would do bad memory
references when it failed and bailed out before the handler thing was set
up. My fix is not done like the provided patch does it, but instead I
make sure that there's never any chance for a NULL pointer in that struct
member.
out that SFTP requests didn't use persistent connections. Neither did SCP
ones. I gave the SSH code a good beating and now both SCP and SFTP should
use persistent connections fine. I also did a bunch of indent changes as
well as a bug fix for the "keyboard interactive" auth.
out a problem in curl.h when building C++ apps with MSVC. To fix it, the
inclusion of header files in curl.h is moved outside of the C++ extern "C"
linkage block.
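Simplified, the curl.h layout now looks like this illustration:

  #include <limits.h>   /* (system) includes are done up here */

  #ifdef __cplusplus
  extern "C" {
  #endif

  /* only curl's own declarations get the C linkage */

  #ifdef __cplusplus
  }
  #endif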
building with VC8 to get the "manifest" embedded to make fine stand-alone
binaries. The maketgz and the src/Makefile.vc6 files were adjusted
accordingly.
https://bugzilla.novell.com/show_bug.cgi?id=332917 about a HTTP redirect to
FTP that caused memory havoc. His work together with my efforts created two
fixes:
#1 - FTP::file was moved to struct ftp_conn, because it has to be dealt with
at connection cleanup, at which time the struct HandleData could be
used by another connection.
Also, the unused char *urlpath member is removed from struct FTP.
#2 - provide a Curl_reset_reqproto() function that frees
data->reqdata.proto.* on connection setup if needed (that is if the
SessionHandle was used by a different connection).