Zero-copy and "Avoid having to remove/readd handles" are not really
features we think are worthwhile to add. Removed.
SRP features have been added already, removed.
11.9 IPv6 addresses with globbing added
Trimmed the newlines to be LF-only. Converted the source to plain C, to
use curl style indents, to compile warning-free with picky options and
fixed the minor fprintf() bug on line 245. Added to makefile.
Drop the pre-release part from this text as we have not used that in
practice for many years.
Update the phrasing to reflect our more strict interpretation:
http://curl.haxx.se/mail/lib-2011-08/0064.html
Using 'socks5h' as proxy protocol will make it a
CURLPROXY_SOCKS5_HOSTNAME proxy which is SOCKS5 and asking the proxy to
resolve host names. I found no "standard" protocol name for this.
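A minimal sketch of how an application can ask for it (the proxy address
localhost:1080 is made up for illustration; setting CURLOPT_PROXYTYPE to
CURLPROXY_SOCKS5_HOSTNAME is the equivalent non-string way):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* the 'socks5h' scheme asks the proxy to resolve host names */
      curl_easy_setopt(curl, CURLOPT_PROXY, "socks5h://localhost:1080");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }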
Follow style of GNU layout (cp, mv ...) where options are separated with
comma: -o, --option
Order item alphabetically (by length also): -o, -O, --option
Follow style of GNU layout by moving help related options to the end:
--help, -M, --version
Clarify that the '-', '.', '_' and '~' letters are also not escaped, since
they shouldn't be according to RFC3986 section 2.3.
This is how this function has behaved since September 2010, commit
5df13c3173.
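A small sketch of the documented behavior (the input string is made up):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* '-', '.', '_' and '~' are unreserved per RFC3986 2.3 and stay as-is,
         while the space gets percent-encoded */
      char *out = curl_easy_escape(curl, "file-name_1.2~x y", 0);
      if(out) {
        printf("%s\n", out); /* prints file-name_1.2~x%20y */
        curl_free(out);
      }
      curl_easy_cleanup(curl);
    }
    return 0;
  }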
As it is already included by curlbuild.h when that exists on the platform,
including it here was superfluous anyway.
Reported by: Dagobert Michelsen
Bug: http://curl.haxx.se/bug/view.cgi?id=3294509
Improved library search by the check_function_exists_concat() macro:
it no longer reverses the list of libraries.
Improved OpenSSL library search: first find zlib, then search for
openssl libraries that may depend on zlib.
For Unix: openssl libraries can now be detected in nonstandard
locations. Supply CMAKE_LIBRARY_PATH to CMake on command line.
Added installation capability (a very basic one so far).
Added CURLOPT_TRANSFER_ENCODING as the option to set to request Transfer
Encoding in HTTP requests (if built with zlib enabled). I also renamed
CURLOPT_ENCODING to CURLOPT_ACCEPT_ENCODING (while keeping the old name
around) to reduce the confusion now that we have two encoding options for
HTTP.
--tr-encoding is now the new command line option for curl to request
this, and thus I updated the test cases accordingly.
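A brief sketch of how an application uses the two options (the URL is just
an example):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* ask for a compressed Content-Encoding; "" enables all built-in ones */
      curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
      /* additionally ask for a compressed Transfer-Encoding */
      curl_easy_setopt(curl, CURLOPT_TRANSFER_ENCODING, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }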
Stop the abuse of CURLE_FAILED_INIT as return code for things not being
init related by introducing two new return codes:
CURLE_NOT_BUILT_IN and CURLE_UNKNOWN_OPTION
CURLE_NOT_BUILT_IN replaces return code 4, which has been obsolete for
several years. It is returned when something is attempted to be used but
the feature/option was not enabled or was explicitly disabled at
build-time. Getting this error mostly means that libcurl needs to be
rebuilt.
CURLE_FAILED_INIT is now reserved and used strictly for init
failures. Getting this problem means something went seriously wrong,
like a resource shortage or similar.
CURLE_UNKNOWN_OPTION is the error code formerly known as
CURLE_UNKNOWN_TELNET_OPTION (and the old name is still present, separately
defined, to be removed in a very distant future). This error code is meant
to be returned when an option is passed to libcurl that isn't known. This
problem would mostly indicate a problem in the program that uses libcurl.
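A sketch of how an application can tell the two new codes apart when a
setopt call fails (the helper function is hypothetical):

  #include <stdio.h>
  #include <curl/curl.h>

  static void report_setopt_result(CURLcode rc)
  {
    if(rc == CURLE_UNKNOWN_OPTION)
      fprintf(stderr, "option not known to this libcurl\n");
    else if(rc == CURLE_NOT_BUILT_IN)
      fprintf(stderr, "feature disabled or missing in this libcurl build\n");
    else if(rc != CURLE_OK)
      fprintf(stderr, "setopt failed: %s\n", curl_easy_strerror(rc));
  }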
The read callback must return the exact requested amount of data when it
is used for doing TFTP uploads. This is due to how it deals with data
internally. This could/should be fixed but for now we document the
existing behavior.
Reported by: Colin Blair
Bug: http://curl.haxx.se/mail/lib-2011-03/0319.html
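A read callback along these lines satisfies the TFTP requirement by filling
the whole buffer on every call except the last one (a sketch; the FILE *
userdata setup is illustrative):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userdata)
  {
    FILE *in = (FILE *)userdata;
    size_t want = size * nitems;
    size_t got = 0;
    /* keep reading until the full requested amount is gathered or EOF hits */
    while(got < want) {
      size_t n = fread(buffer + got, 1, want - got, in);
      if(!n)
        break; /* EOF or error: this becomes the final, short chunk */
      got += n;
    }
    return got;
  }

  /* usage: curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
            curl_easy_setopt(curl, CURLOPT_READDATA, infile);
            curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); */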
This is new documentation for the source tree. This information has been
available for a long time at http://curl.haxx.se/mail/etiquette.html but it
is now provided in a plain text version too, for wider distribution. The
web version will be automatically generated from this source document.
When NSS-powered libcurl connected to an SSL server with
CURLOPT_SSL_VERIFYPEER equal to zero, NSS remembered that the peer
certificate was accepted by libcurl and did not ask the second time when
connecting to the same server with CURLOPT_SSL_VERIFYPEER equal to one.
This patch turns off the SSL session cache for the particular SSL socket
if peer verification is disabled. In order to avoid any performance
impact, the peer verification is completely skipped in that case, which
makes it even faster than before.
Bug: https://bugzilla.redhat.com/678580
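For reference, the option involved is toggled like this (a sketch, assuming
'curl' is an initialized easy handle; disabling verification is of course
insecure):

  /* skip peer verification; with NSS the SSL session cache is now also
     disabled for this socket so the decision is not reused later */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);

  /* back to the default: verify the peer certificate */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);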
All C and H files now (should) feature the proper project curl source
code header, which includes basic info, a copyright statement and some
basic disclaimers.
Stress that it is for client certificates, and then mention that it also
works for all other SSL-based protocols besides HTTPS and FTPS: namely
POP3S, IMAPS and SMTPS for now.
This enables people to specify a path to the netrc file to use.
The new option overrides --netrc if both are present. However, it
does follow --netrc-optional if specified.
On second thought, I think CURLE_TLSAUTH_FAILED should be eliminated. It
was only being raised when an internal error occurred while allocating
or setting the GnuTLS SRP client credentials struct. For TLS
authentication failures, the general CURLE_SSL_CONNECT_ERROR seems
appropriate; its error string already includes "passwords" as a possible
cause. Having a separate TLS auth error code might also cause people to
think that a TLS auth failure means the wrong username or password was
entered, when it could also be a sign of a man-in-the-middle attack.
"6.7 What are my obligations when using libcurl in my commercial apps?"
got the piece about what exactly "in all copies" mean to a user of the
code.
This interpretation is based on what other MIT-like licenses have made
more explicit.
Extended the initial HTTP protocol part and added a mention of --trace and
--trace-ascii.
Replaced most URLs in the text to use example.com instead of all the
made up strange names.
Shortened a bunch of lines.
... and update the curl.1 and curl_easy_setopt.3 man pages such that
they do not suggest to use an OpenSSL utility if curl is not built
against OpenSSL.
Bug: https://bugzilla.redhat.com/669702
Add a simple SMTP example program, patterned after some of the existing
examples, and the curl application.
This version addresses issues raised by David Woodhouse on comments in
the simplesmtp.c example.
This script is the start of a helper tool that scans source code and
outputs the most recent libcurl version it finds symbols for, meaning
that if there are no conditions in the code, that's the earliest libcurl
version the scanned code requires.
It is not added to the Makefile.am yet as it is still a bit crude, but
I'm committing it to keep it and allow us to work on it.
This is a meta symbol. OR this value together with a single specific
auth value to force libcurl to probe for un-restricted auth and if not,
only that single auth algorithm is acceptable.
For example you can use CURLAUTH_DIGEST|CURLAUTH_ONLY to make libcurl
first probe for what method to use, but yet only consider Digest to be
acceptable.
Using _only_ CURLAUTH_DIGEST without the CURLAUTH_ONLY bit will make
libcurl explicitly use Digest right away and not do any probing.
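Expressed in code (a sketch, assuming 'curl' is an initialized easy handle):

  /* probe first, but accept nothing but Digest */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH,
                   (long)(CURLAUTH_DIGEST | CURLAUTH_ONLY));

  /* by contrast, plain CURLAUTH_DIGEST uses Digest right away, no probing */
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);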
An example application source code sending SMTP mail with the multi
interface. It is based on the code Alona Rossen provided, which in turn is
based on existing example/test code. I converted it further into a decent
example with proper multi API use, moved the info the user needs to edit to
the top, and added some comments.
I've developed a script I call symbol-scan.pl that scans the curl.h and
multi.h header files and compares the symbols it finds there with the
symbols that symbols-in-versions documents, and outputs a report on the
differences. Using this I've dug through the history to fill up
symbols-in-versions with all the symbols my script found mismatches for.
I will commit symbol-scan.pl separately and think of a way to put it to
use in the build/tests so that we from now on will get this in-sync
check automatically.
The invocation of autoconf's AC_PATH_PROG( ) is not quite right for
finding curl-config. This fix corrects the negative case (where
curl-config is not found).
The macro provides a --with-libcurl option that expects a PREFIX to be
specified and not actually a "directory" in which libcurl will be found.
This now spells that out more clearly.
Reported by: Dan Locks
Bug: http://curl.haxx.se/bug/view.cgi?id=3079891
all multi and hiper examples:
* don't loop curl_multi_perform() calls; that was pre-7.20.0 style, and
currently the exported multi functions will not return
CURLM_CALL_MULTI_PERFORM
all hiper examples:
* renamed check_run_count to check_multi_info
* don't compare the current running handle count with the previous value;
that was the wrong way to check for finished requests. Simply call
curl_multi_info_read (see the sketch after this list)
* it's also safe to call curl_multi_remove_handle inside the
curl_multi_info_read loop.
ghiper.c:
* replaced curl_multi_socket (that function is marked as obsolete) calls
with curl_multi_socket_action calls (as in hiperfifo.c and
evhiperfifo.c)
ghiper.c and evhiperfifo.c:
* be as smart as hiperfifo.c: don't do unnecessary curl_multi_* calls in
new_conn and main
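The loop now used looks roughly like this (a sketch, assuming 'multi' is the
multi handle and <curl/curl.h> plus <stdio.h> are included):

  CURLMsg *msg;
  int msgs_left;
  while((msg = curl_multi_info_read(multi, &msgs_left))) {
    if(msg->msg == CURLMSG_DONE) {
      CURL *easy = msg->easy_handle;
      CURLcode result = msg->data.result;
      fprintf(stderr, "transfer done: %s\n", curl_easy_strerror(result));
      /* removing (and cleaning up) the handle inside this loop is safe */
      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
    }
  }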
curl_easy_duphandle() was not properly duping the ares channel. The
ares_dup() function was introduced in c-ares 1.6.0 so by starting to use
this function we also raise the bar and require c-ares >= 1.6.0
(released Dec 9, 2008) for such builds.
Reported by: Ning Dong
Bug: http://curl.haxx.se/mail/lib-2010-08/0318.html
1. Remove the comment warning that it's "not been verified to work". It
works with no problems in my testing.
2. Remove 2 unnecessary includes.
3. Remove the myrealloc(). Initialize chunk.memory with malloc() instead
of NULL. The comments for these two parts contradicted each other.
4. Handle out of memory from realloc() instead of continuing.
5. Print a brief status message at the end.
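A sketch of what the out-of-memory handling in such a write callback can
look like (struct and names are illustrative, not necessarily identical to
the example file):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct MemoryStruct {
    char *memory;  /* start this out with malloc(1) rather than NULL */
    size_t size;
  };

  static size_t write_cb(void *contents, size_t size, size_t nmemb, void *userp)
  {
    size_t realsize = size * nmemb;
    struct MemoryStruct *mem = (struct MemoryStruct *)userp;

    char *ptr = realloc(mem->memory, mem->size + realsize + 1);
    if(!ptr) {
      /* out of memory: returning less than 'realsize' aborts the transfer */
      fprintf(stderr, "not enough memory (realloc returned NULL)\n");
      return 0;
    }
    mem->memory = ptr;
    memcpy(&(mem->memory[mem->size]), contents, realsize);
    mem->size += realsize;
    mem->memory[mem->size] = 0;
    return realsize;
  }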
The numerical value passed to CURLOPT_RESUME_FROM for FTP uploads is
interpreted and used as the position where to resume the _reading_ of the
local file, and libcurl will "blindly" append that data to the remote
file. This was certainly not clear in the docs previously.
Reported by: catalin
Bug: http://curl.haxx.se/bug/view.cgi?id=3048174
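A sketch of the upload-resume use the clarified documentation is about
(assuming 'curl' is an initialized easy handle and 'fp' the local file
opened for reading; URL and offset are examples):

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/upload.bin");
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_READDATA, fp);
  /* skip the first 1000 bytes of the local file; what remains is then
     "blindly" appended to whatever already exists remotely */
  curl_easy_setopt(curl, CURLOPT_RESUME_FROM, 1000L);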
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the closest in time will be the one used as the primary
timeout for the handle and will be used for the splay tree (which sorts
and lists all handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
In some places where the name 'stream' has been used for naming a
function argument that is in fact settable with a setopt() option we now
call that argument 'userdata' to make it more obvious that it is in fact
possible to set by the application.
Suggested by: Jeff Pohlmeyer
The SOCKET type in Win64 is 64 bits large (and thus so is curl_socket_t
on that platform), while long is only 32 bits. That makes it impossible for
curl_easy_getinfo() to return a socket properly with the
CURLINFO_LASTSOCKET option as it does for all other operating systems.
curl_multi_timeout(3) is simply the wrong function to use
if you're using the multi_socket API, and this document now
states this pretty clearly to help guide users.
- SMTP falls back to RFC821 HELO when EHLO fails (and SSL is not required).
- Use of true local host name (i.e.: via gethostname()) when available, as default argument to SMTP HELO/EHLO.
- Test case 804 for HELO fallback.
properly in angle brackets. Recipients provided with CURLOPT_MAIL_RCPT now
get angle bracket wrapping automatically by libcurl unless the recipient
starts with an angle bracket as then the app is assumed to deal with that
properly on its own.
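A sketch of the recipient handling described above (addresses are
examples; 'curl' is an initialized easy handle):

  struct curl_slist *rcpt = NULL;
  /* libcurl wraps this one in angle brackets for the RCPT TO command */
  rcpt = curl_slist_append(rcpt, "person@example.com");
  /* this one already starts with '<' so it is passed on as-is */
  rcpt = curl_slist_append(rcpt, "<other@example.com>");
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);
  /* ... and after the transfer: curl_slist_free_all(rcpt); */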
simply check for CURLM_CALL_MULTI_PERFORM internally. This has the added
benefit that this goes in line with my long-term wish to get rid of
CURLM_CALL_MULTI_PERFORM altogether from the public API.
versions --ftp-ssl and --ftp-ssl-reqd as these options are now used to
control SSL/TLS for IMAP, POP3 and SMTP as well, in addition to FTP. The old
option names still work but the new ones are the preferred ones
(listed and documented).
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
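From an application the new option is enabled like this (a sketch; the URL
is made up and 'curl' is an initialized easy handle):

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://drftpd.example.com/file");
  /* send PRET before PASV/EPSV; servers without PRET will make this fail */
  curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);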
on FTP errors in the transient 5xx range. Transient FTP errors are in the
4xx range. The code itself only retried on 5xx errors that occurred _at
login_. Now the retry code retries on all FTP transfer failures that ended
with a 4xx response.
(http://curl.haxx.se/bug/view.cgi?id=2911279)
instead of being repeated several times. This also includes Authenticate:
and Proxy-Authenticate: headers, and while this hardly ever happens in real
life it will confuse libcurl, which does not properly support it for all
headers - like those Authenticate headers.
sends the 220 response or otherwise is dead slow, libcurl will not
acknowledge the connection timeout during that phase but only the "real"
timeout - which may surprise users as it is probably considered to be the
connect phase to most people. Brought up (and is being misunderstood) in:
http://curl.haxx.se/bug/view.cgi?id=2844077
read stdin in a non-blocking fashion. This also brings back -T- (minus) to
the previous blocking behavior since it could break stuff for people at
times.
Fix SIGSEGV on free'd easy_conn when pipe unexpectedly breaks
Fix data corruption issue with re-connected transfers
Fix use after free if we're completed but easy_conn not NULL
than what's absolutely necessary:
curl will do its best to use what you pass to it as a URL. It is not trying to
validate it as a syntactically correct URL by any means but is instead
VERY liberal with what it accepts.
something beyond ascii but currently libcurl will only pass in the verbatim
string the app provides. There are several browsers that already do this
encoding. The key seems to be the updated draft to RFC2231:
http://tools.ietf.org/html/draft-reschke-rfc2231-in-http-02
With the curl memory tracking feature decoupled from the debug build feature,
CURLDEBUG and DEBUGBUILD preprocessor symbol definitions are used as follows:
CURLDEBUG used for curl debug memory tracking specific code (--enable-curldebug)
DEBUGBUILD used for debug enabled specific code (--enable-debug)
not in the mood enough to fight this now.
65. When doing FTP over a socks proxy or CONNECT through HTTP proxy and the
multi interface is used, libcurl will fail if the (passive) TCP connection
for the data transfer isn't more or less instant as the code does not
properly wait for the connect to be confirmed. See test case 564 for a first
shot at a test case.
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A" the ";type=A" is stripped off.
I added test case 562 to verify, only to find out that I couldn't repeat
this bug so I hereby consider it not a bug anymore!
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will attempt to read that much data from the stream instead of
seeking, and that way curl can now upload with resume when data is read
from a stream!
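A sketch of a seek callback for such a non-seekable stream; going by the
public header, the new return code mentioned above is CURL_SEEKFUNC_CANTSEEK:

  #include <curl/curl.h>

  /* illustrative seek callback for input read from a pipe/stream */
  static int seek_cb(void *userp, curl_off_t offset, int origin)
  {
    (void)userp;
    (void)offset;
    (void)origin;
    /* we cannot seek, but libcurl may work around it by reading and
       discarding data up to the resume point instead */
    return CURL_SEEKFUNC_CANTSEEK;
  }

  /* usage: curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb); */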
Koenig pointed out that the man page didn't tell that the *_proxy
environment variables can be specified lower case or UPPER CASE and that the
lower case version takes precedence.
for any further requests or transfers. The work-around is then to close that
handle with curl_easy_cleanup() and create a new one. Some more details:
http://curl.haxx.se/mail/lib-2009-04/0300.html
and 1 on fatal errors. Previously it only mentioned non-zero on fatal
errors. This is a slight change in meaning, but it follows what we've done
elsewhere before and it opens up for LOTS of more useful return codes
whenever we can think of them...
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols and thus you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a user or
other outsiders to control what URL to pass to libcurl and yet not allow all
protocols libcurl may have been built to support.
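A sketch of how an application restricts a user-supplied URL with the two
new options (the allowed set here is just an example; 'curl' is an
initialized easy handle):

  /* only fetch HTTP and HTTPS URLs ... */
  curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                   CURLPROTO_HTTP | CURLPROTO_HTTPS);
  /* ... and only follow redirects to HTTP and HTTPS */
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                   CURLPROTO_HTTP | CURLPROTO_HTTPS);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);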
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between actually zero and unknown.
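Checking for the unknown-size case then looks like this (a sketch, run
after a performed transfer on 'curl'):

  double len;
  if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &len) == CURLE_OK) {
    if(len == -1)
      printf("download size not known\n");
    else
      printf("download size: %.0f bytes\n", len);
  }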
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled,
as then CWD may first fail but then another connection does MKD before this
connection and thus MKD fails but trying CWD works! The numbers can
(should?) now be set with the convenience enums now called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a set of
files to an ftp server, you will get a noticeable gain in speed if you're
using multiple connections, and this option will then be very useful.
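The enums mentioned above are used like this (a sketch; 'curl' is an
initialized easy handle):

  /* create missing directories, and if MKD fails (for example because a
     parallel connection just created the directory) retry the CWD once */
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                   (long)CURLFTP_CREATE_DIR_RETRY);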
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION and was previously not possible to
reliably figure out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
getaddrinfo() sorts the response list
This isn't a libcurl bug since this is how getaddrinfo() is *supposed* to work!
Apparently you deal with this using the /etc/gai.conf file.
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT and I tried to make some use
--proxy1.0 and updated some to do CONNECT 1.1, to get both versions run.
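In libcurl terms, switching back looks like this (a sketch; the proxy host
is an example and 'curl' is an initialized easy handle):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:3128");
  /* issue the CONNECT request as HTTP/1.0 instead of the new 1.1 default */
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_HTTP_1_0);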
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
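A sketch of the libcurl side (proxy address and service name are example
values; 'curl' is an initialized easy handle):

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
  /* GSS-API service name to authenticate against (example value) */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
  /* set to 1L to allow the unprotected negotiation mode used by the NEC
     reference SOCKS5 server implementation */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 0L);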