Use preprocessor symbols WINBIND_NTLM_AUTH_ENABLED and WINBIND_NTLM_AUTH_FILE
for Samba's winbind daemon ntlm_auth helper code implementation and filename.
Retain the preprocessor symbol USE_NTLM_SSO for NTLM single-sign-on feature
availability, independent of the implementation.
For the test harness, prefix the NTLM_AUTH environment variables with CURL_.
Refactor and rename configure option --with-ntlm-auth to --enable-wb-ntlm-auth[=FILE]
Introduced the initial setup to allow closesocket callbacks, by making
sure sclose() is only ever called from one place in the libcurl source,
while still keeping all test cases running fine.
The protocol handler's flags field can now signal that the protocol
requires a password, so that the set_userpass function doesn't need
specific knowledge of which protocols do.
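A minimal sketch of the idea (the PROTOPT_NEEDSPWD flag name and the
struct layout here are illustrative assumptions, not the exact
internals):

    /* sketch: one generic flags check replaces per-protocol knowledge;
       PROTOPT_NEEDSPWD is an assumed name */
    #define PROTOPT_NEEDSPWD (1<<2)

    struct handler {
      unsigned int flags; /* PROTOPT_* characteristic bits */
    };

    static int needs_password(const struct handler *h)
    {
      return (h->flags & PROTOPT_NEEDSPWD) != 0;
    }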
Added CURLOPT_TRANSFER_ENCODING as the option to set to request Transfer
Encoding in HTTP requests (if built with zlib enabled). I also renamed
CURLOPT_ENCODING to CURLOPT_ACCEPT_ENCODING (while keeping the old name
around) to reduce the confusion now that we have two encoding options
for HTTP.
--tr-encoding is now the new command line option for curl to request
this, and thus I updated the test cases accordingly.
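A short usage sketch of the two options as described above:

    #include <curl/curl.h>

    /* ask for compressed responses: Accept-Encoding for the content,
       TE: for the transfer itself */
    static void request_compression(CURL *curl)
    {
      /* "" means: accept all encodings this libcurl build supports */
      curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");
      /* request compressed Transfer-Encoding (needs zlib) */
      curl_easy_setopt(curl, CURLOPT_TRANSFER_ENCODING, 1L);
    }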
Since this struct member is used in the code to determine what and how
to decode automatically, and since it is now also used for compressed
Transfer-Encodings, I renamed it to the more suitable 'auto_decoding'.
Transfer-Encoding differs from Content-Encoding in a few subtle ways,
but primarily it concerns the transfer only and not the content, so when
a transfer is discovered to be compressed we know we have to uncompress
it. Compressed transfers will only arrive in a response after we have
requested them with the appropriate TE: header.
Test case 1122 and 1123 verify.
When asked to bind the local end of a connection when doing a request,
the code will now disqualify other existing connections from re-use even
if they are connected to the correct remote host.
This will also affect which connections can be used for pipelining, so
that only connections that aren't bound, or are bound to the same
device/port you're asking for, will be considered.
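For reference, a connection gets "bound" with options like these (the
interface name and port are just examples):

    #include <curl/curl.h>

    /* pin the local end; such a connection won't be re-used for
       requests that ask for a different (or no) binding */
    static void bind_local_end(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_INTERFACE, "eth1");
      curl_easy_setopt(curl, CURLOPT_LOCALPORT, 49152L);
    }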
The PROT_* set of internal defines for the protocols is no longer
used. We now use the same bits internally as we have defined in the
public header using the CURLPROTO_ prefix. This is for simplicity and
because the PROT_* prefix was already duplicated internally by a set of
KRB4 values.
The PROTOPT_* defines were moved up to just below the struct definition
within which they are used.
The protocol handler struct got a 'flags' field for special information
and characteristics of the given protocol.
This now enables us to move central protocol information such as
CLOSEACTION and DUALCHANNEL away from single defines in a central place,
out to each protocol's definition. It also made us stop abusing the
protocol
field for other info than the protocol, and we could start cleaning up
other protocol-specific things by adding flags bits to set in the
handler struct.
The "protocol" field connectdata struct was removed as well and the code
now refers directly to the conn->handler->protocol field instead. To
make things work properly, the code now always store a conn->given
pointer that points out the original handler struct so that the code can
learn details from the original protocol even if conn->handler is
modified along the way - for example when switching to go over a HTTP
proxy.
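A self-contained sketch of the concept (struct, field and flag names are
simplified assumptions, not the exact source):

    #include <stdio.h>

    #define CURLPROTO_HTTP (1<<0)
    #define CURLPROTO_FTP  (1<<2)

    #define PROTOPT_CLOSEACTION (1<<0) /* needs action before disconnect */
    #define PROTOPT_DUAL        (1<<1) /* uses two channels (like FTP) */

    struct Curl_handler_sketch {
      const char *scheme;
      unsigned int protocol; /* same CURLPROTO_* bit as the public header */
      unsigned int flags;    /* PROTOPT_* characteristic bits */
    };

    static const struct Curl_handler_sketch handler_ftp =
      { "ftp", CURLPROTO_FTP, PROTOPT_DUAL | PROTOPT_CLOSEACTION };
    static const struct Curl_handler_sketch handler_http =
      { "http", CURLPROTO_HTTP, 0 };

    struct connectdata_sketch {
      const struct Curl_handler_sketch *handler; /* may change */
      const struct Curl_handler_sketch *given;   /* the original one */
    };

    int main(void)
    {
      struct connectdata_sketch conn = { &handler_ftp, &handler_ftp };
      conn.handler = &handler_http; /* e.g. tunneled over an HTTP proxy */
      if(conn.given->flags & PROTOPT_DUAL)
        printf("original protocol %s uses two channels\n",
               conn.given->scheme);
      return 0;
    }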
Some protocols have to call the underlying functions without regard to
what exact state the socket signals. For example even if the socket says
"readable", the send function might need to be called while uploading,
or vice versa. This is the case for the libssh2-based protocols SCP and
SFTP, so we now introduce a define to mark those protocols, and we make
the multi interface code aware of this concept.
This is another fix to make test 582 run properly.
Both SFTP and SCP are protocols that need to shut things down properly
when the connection is about to get torn down. The primary effect of
not doing this shows up as memory leaks (when using SCP or SFTP with the
multi interface).
This is one of the problems detected by test 582.
When using the multi interface and connecting to a host name that
resolves to multiple IP addresses, there was no logic that made it
continue to the next IP if connecting to the first address timed
out. This is now corrected.
Make the c-ares resolver code ask for both IPv4 and IPv6 addresses when
IPv6 is enabled.
This is a workaround for the missing ares_getaddrinfo() and is a lot
easier to implement.
Note that as long as c-ares returns IPv4 addresses when IPv6 addresses
were requested but missing, this will cause a host's IPv4 addresses to
occur twice in the DNS cache.
URL: http://curl.haxx.se/mail/lib-2010-12/0041.html
The public axTLS header (at least as of 1.2.7) redefines the memory
functions. We #undef those again immediately after the public header to
limit the damage. This should be fixed in axTLS.
Added axTLS to the autotool files and glue code to various other files.
axtls.h maps SSL API functions, but may change.
axtls.c is just a stub file and will definitely change.
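A sketch of the #undef workaround mentioned above (the include path and
the exact set of redefined symbols are assumptions here):

    /* limit the damage from the public header's macro redefinitions */
    #include <axTLS/ssl.h>
    #undef malloc
    #undef calloc
    #undef realloc
    #undef free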
It helps to prevent a hangup with some FTP servers in case the idle
session timeout has been exceeded. But it may also be useful for other
protocols that send a quit message on disconnect. Currently used by FTP,
POP3, IMAP and SMTP.
While changing Curl_sec_read_msg to accept an enum protection_level
instead of an int, I went ahead and fixed the usage of the associated
fields.
Some code was assuming that prot_clear == 0. Fixed those places to use
the proper enum value. Added assertions prior to any code that would set
the protection level.
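A hedged sketch of the pattern, with hypothetical names:

    #include <assert.h>

    /* the enum makes explicit that the "clear" level need not be 0 */
    enum protection_level {
      PROT_CLEAR = 1, /* deliberately non-zero */
      PROT_SAFE,
      PROT_CONFIDENTIAL,
      PROT_PRIVATE
    };

    static void set_protection(enum protection_level *dst,
                               enum protection_level lvl)
    {
      assert(lvl >= PROT_CLEAR && lvl <= PROT_PRIVATE);
      *dst = lvl;
    }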
The IP version choice was previously only in the UserDefined struct
within the SessionHandle, but since we sometimes alter that option
during a request we need to have it on a per-connection basis.
I also moved more "init conn" code into the allocate_conn() function,
which is more or less designed for that purpose.
CURLOPT_RESOLVE is a new option that sends along a curl_slist with
name:port:address sets that will populate the DNS cache with entries so
that requests can be "fooled" into using another host than what
otherwise would've been used. Previously we've encouraged the use of
Host: for
that when dealing with HTTP, but this new feature has the added bonus
that it allows the name from the URL to be used for TLS SNI and server
certificate name checks as well.
This is a first change. Surely more will follow to make it decent.
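Usage looks like this:

    #include <curl/curl.h>

    /* make https://example.com/ go to 127.0.0.1 while keeping the
       original name for TLS SNI and certificate checks */
    static struct curl_slist *pin_address(CURL *curl)
    {
      struct curl_slist *host =
        curl_slist_append(NULL, "example.com:443:127.0.0.1");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
      return host; /* free with curl_slist_free_all() after the transfer */
    }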
When given a custom host name in a Host: header, we can use it for
several different purposes other than just cookies, so we rename it and
use it for SSL SNI etc.
HTTP allows a server to send trailing headers after all the chunks have
been sent WITHOUT signalling their presence in the first response
headers. The "Trailer:" header is only a SHOULD there, and since we need
to handle the situation even without that header, I made libcurl ignore
Trailer: completely.
Test case 1116 was added to verify this and to make sure we handle more
than one trailer header properly.
Reported by: Patrick McManus
Bug: http://curl.haxx.se/bug/view.cgi?id=3052450
Curl_expire() is now expanded to hold a list of timeouts for each easy
handle. Only the closest in time will be the one used as the primary
timeout for the handle and will be used for the splay tree (which sorts
and lists all handles within the multi handle).
When the main timeout has triggered/expired, the next timeout in time
that is kept in the list will be moved to the main timeout position and
used as the key to splay with. This way, all timeouts that are set with
Curl_expire() internally will end up as a proper timeout. Previously any
Curl_expire() that set a _later_ timeout than what was already set was
just silently ignored and thus missed.
Setting Curl_expire() with timeout 0 (zero) will cancel all previously
added timeouts.
Corrects known bug #62.
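A sketch of the semantics (this is an internal function; the exact
signature is an assumption):

    struct SessionHandle;
    void Curl_expire(struct SessionHandle *data, long milli);

    static void expire_demo(struct SessionHandle *data)
    {
      Curl_expire(data, 500); /* queue a timeout 500 ms from now */
      Curl_expire(data, 90);  /* a nearer timeout becomes the primary one */
      Curl_expire(data, 0);   /* zero cancels all queued timeouts */
    }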
Test 563 is enabled now and verifies that the combination of an FTP
type=A URL, CURLOPT_PORT set and a proxy works fine. As a bonus I
managed to remove the somewhat odd FTP check in parse_remote_port() and
instead converted it to a better and more generic 'slash_removed' struct
field. Checking the ->protocol field isn't right, since when an FTP://
URL is sent over an HTTP proxy the protocol is HTTP but the URL was
handled by the FTP code, and thus slash_removed is set TRUE in this
case.
"The BSD version of PolarSSL was made for migratory purposes only and is not
maintained. The GPL version of PolarSSL is actually the only actively
developed version, so I would be very reluctant to use the BSD version." /
Paul Bakker, PolarSSL hacker.
Signed-off-by: Hoi-Ho Chan <hoiho.chan@gmail.com>
FTP(S) uses two connections that can be set to different recv and
send functions independently, so by introducing recv+send pairs
in the same manner we already have sockets/connections, we can
make FTPS work fine.
This commit fixes the FTPS regression introduced in change d64bd82.
Howard Chu brought the bulk of this patch, which properly
moves the sending and receiving of data out to the parts of the
code that are properly responsible for the various ways of doing
so.
Daniel Stenberg assisted with polishing a few bits and fixed some
minor flaws in the original patch.
Another upside of this patch is that we now abuse CURLcodes less
with the "magic" -1 return codes and instead use CURLE_AGAIN more
consistently.
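A hedged sketch of the recv+send pair idea (the typedefs and names are
assumptions, not the actual internals):

    #include <sys/types.h> /* ssize_t */
    #include <stddef.h>    /* size_t */

    typedef ssize_t (recv_func)(void *conn, int sockindex,
                                char *buf, size_t len);
    typedef ssize_t (send_func)(void *conn, int sockindex,
                                const void *buf, size_t len);

    #define FIRSTSOCKET     0 /* the control connection */
    #define SECONDARYSOCKET 1 /* the data connection */

    /* each socket gets its own pair, so FTPS can run TLS-aware
       functions on either connection independently */
    struct connection_sketch {
      recv_func *recv[2];
      send_func *send[2];
    };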
This is Hoi-Ho Chan's patch with some minor fixes by me. There
are some potential issues in this, but none worse than we can
sort out on the list and over time.
The main change is to allow input from user-specified methods,
when they are specified with CURLOPT_READFUNCTION.
All calls to fflush(stdout) in telnet.c were removed, which makes
using 'curl telnet://foo.com' painful since prompts and other data
are not always returned to the user promptly. Use
'curl --no-buffer telnet://foo.com' instead. In general,
the user should have their CURLOPT_WRITEFUNCTION do a fflush
for interactive use.
Also fix the assumption that reading from stdin never returns < 0.
The old code could crash in that case.
Call progress functions in telnet main loop.
Signed-off-by: Ben Greear <greearb@candelatech.com>
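For interactive use, a write callback along these lines does the trick:

    #include <stdio.h>
    #include <curl/curl.h>

    /* flush after every chunk so prompts show up promptly */
    static size_t write_cb(char *ptr, size_t size, size_t nmemb,
                           void *userdata)
    {
      size_t written = fwrite(ptr, size, nmemb, stdout);
      (void)userdata;
      fflush(stdout);
      return written * size; /* bytes taken care of */
    }

    static void setup_interactive(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    }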
from hostip.h to setup.h in order to allow proper inclusion in any file.
This represents no functional change at all in which resolver is used,
everything still works as usual, internally and externally there is no
difference in behavior.
command is a special "hack" used by the drftpd server, but even though it is
a custom extension I've deemed it fine to add to libcurl since this server
seems to survive and people keep using it and want libcurl to support
it. The new libcurl option is named CURLOPT_FTP_USE_PRET, and it is also
usable from the curl tool with --ftp-pret. Using this option on a server
that doesn't support this command will make libcurl fail.
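Usage from libcurl:

    #include <curl/curl.h>

    /* send PRET before PASV/EPSV, as the drftpd server expects */
    static void enable_pret(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_FTP_USE_PRET, 1L);
    }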
rework patch that now integrates TFTP properly into libcurl so that it can
be used non-blocking with the multi interface and more. BLKSIZE also works.
The --tftp-blksize option was added to allow setting the TFTP BLKSIZE from
the command line.
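The corresponding libcurl option is CURLOPT_TFTP_BLKSIZE:

    #include <curl/curl.h>

    /* request a 1024-byte TFTP block size (the protocol default is 512) */
    static void set_tftp_blksize(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_TFTP_BLKSIZE, 1024L);
    }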
transfer.c for blocking. It is currently used only by the SCP and SFTP
protocols.
This enhancement resolves an issue with 100% CPU usage during SFTP upload,
reported by Vourhey.
Fix SIGSEGV on freed easy_conn when a pipe unexpectedly breaks
Fix a data corruption issue with re-connected transfers
Fix use-after-free if we're completed but easy_conn is not NULL
in NSS-powered libcurl. Now the client certificates can be selected
automatically by an NSS built-in hook. Additionally, pre-login to all
PKCS11 slots is no longer performed. It used to cause problems with HW
tokens.
- Fixed reference counting for NSS client certificates. Now the PEM
reader module should always be properly unloaded on Curl_nss_cleanup().
If the unload fails though, libcurl will try to reuse the already loaded
instance.
KEEP_RECV to better match the general terminology: receive and send are
what we do with the (remote) servers. We read and write from and to the
local fs.
out to be leaking cacerts. Kamil Dudka helped me complete the fix. The
issue is found in Red Hat's bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=453612
There are still memory leaks present, but they seem to have other reasons.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this
release together with the addition of two new setopt options for
controlling this new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow to when CURLOPT_FOLLOWLOCATION is enabled. By default, this
option excludes the FILE and SCP protocols, so you need to explicitly
allow them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to allow a
user or other outsiders to control what URL to pass to libcurl and yet
not allow all protocols libcurl may have been built to support.
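For example, to accept an outsider-supplied URL but only ever speak
HTTP(S), for the initial request as well as for any followed redirects:

    #include <curl/curl.h>

    static void restrict_protocols(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                       CURLPROTO_HTTP | CURLPROTO_HTTPS);
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                       CURLPROTO_HTTP | CURLPROTO_HTTPS);
    }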
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to clean up the code somewhat in this
area, as the FTP code now attempts to keep (better) track of the pending
responses that need to be read in ftp_done().
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many
simultaneous connections against the same server and they all have this
option enabled, as then CWD may first fail but then another connection
does MKD before this connection, and thus MKD fails but trying CWD
works! The numbers can (should?) now be set with the convenience enums
called CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have proven that if you're making an application that uploads a
set of files to an FTP server, you will get a noticeable gain in speed
if you're using multiple connections, and this option will then be very
useful.
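Usage:

    #include <curl/curl.h>

    /* create missing dirs, retrying CWD once if MKD loses the race */
    static void create_dirs_with_retry(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                       (long)CURLFTP_CREATE_DIR_RETRY);
    }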
the condition in the previous request was unmet. This is typically a
time condition set with CURLOPT_TIMECONDITION, and it was previously not
possible to figure this out reliably. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
Curl_sspi_global_init() and Curl_sspi_global_cleanup() which previously were
named Curl_ntlm_global_init() and Curl_ntlm_global_cleanup() in http_ntlm.c
Also adjusted socks_sspi.c to remove the link-time dependency on the
Windows SSPI library, using it now in the same way as was done in
http_ntlm.c.
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
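A usage sketch (the proxy address and the "rcmd" service name are
examples):

    #include <curl/curl.h>

    static void socks5_gssapi(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);
      curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
      curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 0L);
    }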
They basically offer the same thing that previously only the NO_PROXY
environment variable offered: list a set of host names that shall not
use the proxy even if one is specified.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
there are servers "out there" that rely on the client doing this broken
Digest authentication. Apache even comes with an option to work with
such broken clients.
The difference only shows for URLs that contain a query-part (a '?'
character and the text to the right of it).
libcurl now supports this quirk, and you enable it by setting the
CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
CURLOPT_PROXYAUTH options. They are thus individually controllable for
server and proxy.
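For example, to accept the quirk for the server but not for the proxy:

    #include <curl/curl.h>

    static void allow_ie_digest(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_HTTPAUTH,
                       (long)(CURLAUTH_DIGEST | CURLAUTH_DIGEST_IE));
    }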
(http://curl.haxx.se/bug/view.cgi?id=2221237) that identified an infinite
loop during GSS authentication given some specific conditions. With his
patience and great feedback I managed to narrow down the problem and
eventually fix it, although I can't test any of this myself!
problem with my CURLINFO_PRIMARY_IP fix from October 7th that caused a NULL
pointer read. I also took the opportunity to clean up this logic
(storing of the connection's IP address) somewhat, as we previously had
it stored in two different places and ways, and they are now unified.
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they have none of the problems with the colon
separator that the CURLOPT_PROXYUSERPWD option has.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
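Usage, with colons in both values:

    #include <curl/curl.h>

    static void set_credentials(CURL *curl)
    {
      /* both values may contain colons, which CURLOPT_USERPWD can't
         express unambiguously */
      curl_easy_setopt(curl, CURLOPT_USERNAME, "user:name");
      curl_easy_setopt(curl, CURLOPT_PASSWORD, "pass:word");
    }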