Compare commits

...

242 Commits

Author SHA1 Message Date
Daniel Stenberg
a7b98f5f6b 7.18.0 2008-01-28 17:28:21 +00:00
Daniel Stenberg
6bae091c1b Add the three currently discussed bugs that won't make it into the 7.18.0
release but hopefully they'll all be fixed in 7.18.1...
2008-01-28 16:04:52 +00:00
Daniel Stenberg
33d68653f0 this was modified this year so we bump the copyright year 2008-01-28 11:56:13 +00:00
Daniel Stenberg
267836e83c updated copyright year in the generated configure 2008-01-28 11:48:41 +00:00
Daniel Stenberg
87fdfe770d Dmitry Kurochkin: In "real world" testing I found more bugs in
pipelining. A broken connection is not restored and we get into an infinite
loop. It happens because of wrong is_in_pipeline values.
2008-01-27 22:53:09 +00:00
Yang Tse
8fca5c2e69 Don't rely on PAMAuthenticationViaKbdInt default being 'no' 2008-01-27 02:35:20 +00:00
Daniel Stenberg
5f2055729e added test 1021 to verify my fix for bug report #1879375 2008-01-26 00:13:38 +00:00
Daniel Stenberg
c6df788866 - Kevin Reed filed bug report #1879375
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
  got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
  proxy authentication and the proxy replies with an auth (like NTLM) and then
  closes the connection after that initial informational response.

  libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it is supposed to. It does now, however, as it will
  now detect if one or more authentication methods were available and asked
  for, and will thus retry the connection and continue from there.

- I made the progress callback get called properly during proxy CONNECT.
2008-01-25 23:33:45 +00:00
Daniel Stenberg
e67b2524d1 using anyauth isn't unconditionally an extra roundtrip 2008-01-25 22:35:06 +00:00
Daniel Stenberg
d7bcc26179 just wanted to mention two uclinux archs I've tried libcurl builds on myself 2008-01-25 22:10:10 +00:00
Yang Tse
69e540dfa6 improve request initialization for test harness HTTP server 2008-01-25 05:08:53 +00:00
Yang Tse
2198869eb1 Dmitry Kurochkin's test harness HTTP server pipelining fix for test 530 2008-01-25 05:07:04 +00:00
Daniel Stenberg
fb07259e0d and Igor Franchuk is his name! 2008-01-24 17:17:18 +00:00
Gunter Knauf
9d28a0252c fixed link to latest native awk. 2008-01-24 15:39:51 +00:00
Gunter Knauf
d54c14ccf9 updated makefiles to use global copyright define. 2008-01-24 15:28:47 +00:00
Gunter Knauf
41def4be6e updated awk script to fetch copyright from header. 2008-01-24 15:27:06 +00:00
Gunter Knauf
2d38d0d515 minor makefile tweaks. 2008-01-24 15:05:56 +00:00
Gunter Knauf
e796c79d18 happy new year 2008-01-24 14:15:49 +00:00
Gunter Knauf
c93ba48da2 use more correctly named define. 2008-01-24 14:14:34 +00:00
Gunter Knauf
e322513698 use copyright define instead of hardcoded string. 2008-01-24 14:10:59 +00:00
Gunter Knauf
6fa72e6417 added copyright define to curlver.h. 2008-01-24 14:05:56 +00:00
Daniel Stenberg
c914e6ea5d "Igor" pointed out that CURLOPT_COOKIELIST set to "ALL" leaked memory, and so
did "SESS". Fixed now.
2008-01-23 22:22:12 +00:00
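As a reminder of the option this leak fix concerns, a minimal sketch (not code from the commit; the handle setup is illustrative only):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");     /* enable the cookie engine */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST, "SESS"); /* wipe the session cookies */
        curl_easy_setopt(curl, CURLOPT_COOKIELIST, "ALL");  /* wipe all known cookies */
        curl_easy_cleanup(curl);
      }
      return 0;
    }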
Daniel Stenberg
79cb74f03a Dmitry Kurochkin's pipelining close-down segfault fix 2008-01-23 12:22:04 +00:00
Yang Tse
34cf35051a update openssl version 2008-01-23 07:27:40 +00:00
Yang Tse
9bd28a021f STDIN_FILENO, STDOUT_FILENO and STDERR_FILENO clone macros 2008-01-23 06:11:11 +00:00
Gunter Knauf
5ee3f41e0d happy new year 2008-01-23 02:12:13 +00:00
Gunter Knauf
64e88ff6a7 removed inclusion of libcurl memory debug headers since this lib stub is a well proofed method suggested by Novell. This enables usage of the stub with language bindings. 2008-01-23 02:10:40 +00:00
Yang Tse
acd7c94598 when unable to initialize sftp session, also log failure reason 2008-01-22 17:26:42 +00:00
Yang Tse
bdb2beb8e4 check availability of poll.h header at configuration time, and include
it when sys/poll.h is unavailable
2008-01-22 14:52:54 +00:00
Yang Tse
727e23322f update copyright year 2008-01-22 03:48:16 +00:00
Daniel Stenberg
ef0ed9b720 Dmitry Kurochkin removed the cancelled state for pipelining, as we agreed
that it is bad anyway. Starting now, removing a handle that is in use in a
pipeline will break the pipeline - it'll be set back up again but still...
2008-01-21 23:48:58 +00:00
Yang Tse
a674654f83 Disable ldap support for cygwin builds, since it breaks whole build process. 2008-01-21 20:22:33 +00:00
Yang Tse
3caeb0a91f undo using internal *printf() clones for test #530 2008-01-21 05:35:08 +00:00
Yang Tse
a4eddf0d0d use internal *printf() clones since snprintf() not available on all platforms 2008-01-20 22:53:56 +00:00
Daniel Stenberg
fcf9029179 Judson provided an example, and the added mirror adds the count 2008-01-20 11:29:30 +00:00
Daniel Stenberg
e40327ba00 This is a multi threaded application that uses a progress bar to show
status.  It uses Gtk+ to make a smooth pulse. Written by Jud Bishop
2008-01-20 11:12:11 +00:00
Daniel Stenberg
bdd0e3d3f5 http://curl.very-clever.com/ is a new mirror in Nuremberg, Germany 2008-01-20 11:07:43 +00:00
Yang Tse
e9490fdbd9 Also disable GSSAPIAuthentication for the test harness ssh client 2008-01-20 04:05:25 +00:00
Daniel Stenberg
bd40b3ff3f added a (sample) target for 64bit msvc builds 2008-01-19 11:33:06 +00:00
Daniel Stenberg
8c66811e09 rephrased the --socks5-hostname help output somewhat 2008-01-19 10:30:15 +00:00
Daniel Stenberg
daadcfd1de Dmitry Kurochkin fixed test case 530 (pipelining) 2008-01-19 10:14:45 +00:00
Daniel Stenberg
62df0ff025 Lau Hang Kin found and fixed a problem with the multi interface when doing
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
2008-01-18 21:51:10 +00:00
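For context, a hedged sketch of the curl_multi_fdset() pattern this fix matters for; the helper name and the one-second fallback wait are illustrative:

    #include <sys/select.h>
    #include <curl/curl.h>

    /* assumes 'multi' is a CURLM handle with easy handles already added */
    static void wait_on_multi(CURLM *multi)
    {
      fd_set rd, wr, ex;
      int maxfd = -1;
      struct timeval tv = {1, 0};   /* fall-back wait of one second */

      FD_ZERO(&rd);
      FD_ZERO(&wr);
      FD_ZERO(&ex);
      /* after the fix this also reports the socket used during a proxy CONNECT */
      curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
      if(maxfd >= 0)                /* maxfd == -1 means nothing to wait on yet */
        select(maxfd + 1, &rd, &wr, &ex, &tv);
    }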
Yang Tse
01d95b56a0 fix failure to properly detect SSH and SOCKS servers start up on loaded systems 2008-01-18 09:18:59 +00:00
Yang Tse
f6adae8d35 to actually allow really big HTTP POSTs, curl's postfieldsize type is changed to
curl_off_t and CURLOPT_POSTFIELDSIZE_LARGE is used to pass the value to libcurl
2008-01-18 05:58:00 +00:00
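For illustration, a small sketch of the equivalent libcurl calls, assuming an application that already has the POST data in memory (the helper and variable names are made up):

    #include <curl/curl.h>

    /* assumes 'curl' is an initialized easy handle and 'postdata'/'postsize'
       describe a buffer that may well be larger than 2GB */
    static void set_big_post(CURL *curl, const char *postdata, curl_off_t postsize)
    {
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postdata);
      curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, postsize);
    }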
Daniel Stenberg
bcaadb4284 curl-java 0.2.1 2008-01-17 22:43:29 +00:00
Daniel Stenberg
8d963aa0e2 the java binding is not really maintained 2008-01-17 21:46:21 +00:00
Yang Tse
0530b0a5ca Don't abort tests 518 and 537 when unable to raise the open-file soft limit 2008-01-17 18:57:50 +00:00
Yang Tse
5396121595 fix compiler warning 2008-01-17 18:03:07 +00:00
Dan Fandrich
bcfc7d90d1 Put the comments in an XML-valid location. 2008-01-17 04:10:28 +00:00
Gunter Knauf
47246eb401 updated lib versions. 2008-01-17 01:25:46 +00:00
Gunter Knauf
3620e71010 updated copyright for new year. 2008-01-17 01:20:03 +00:00
Daniel Stenberg
c522f349fe Added test 553. This test case and code is based on the bug recipe Joe Malicki
provided for bug report #1871269, fixed on Jan 14 2008 before the 7.18.0
release.
2008-01-16 22:54:54 +00:00
Daniel Stenberg
6893fcaa9b remove trailing comma too, even though I don't think it does any harm 2008-01-16 22:09:51 +00:00
Daniel Stenberg
301ae1ae1b Nathan Coulter's patch that makes runtests.pl respect the PATH when figuring
out what valgrind to run.
2008-01-16 22:08:37 +00:00
Daniel Stenberg
ddaa78f08b Dmitry Kurochkin's additional pipelining bugfix 2008-01-16 21:33:52 +00:00
Yang Tse
3d55877764 fix handling of out of memory in the command line tool that affected
data url encoded HTTP POSTs when reading it from a file.
2008-01-16 21:01:30 +00:00
Patrick Monnerat
3ee32d7920 OS/400 update:
New declarations in curl.h reported to curl.inc.in.
Copyrights extended to 2008.
SONAME handling introduced in build scripts.
2008-01-16 16:04:47 +00:00
Daniel Stenberg
b3de497d83 Dmitry Kurochkin worked a lot on improving the HTTP Pipelining support that
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as then they wouldn't pipeline at all as
nicely as one would think... Test case 530 was also updated to take the
improved functionality into account.
2008-01-16 12:24:00 +00:00
Daniel Stenberg
ed6466d176 Calls to Curl_failf() are not supposed to provide a trailing newline as the
function itself adds that. Fixed on 50 or something strings!
2008-01-15 23:19:02 +00:00
Daniel Stenberg
991505e077 Woops, partly revert my previous commit and do it slightly differently instead.
The signalling of that a global DNS cache is wanted is done by setting the
option but the setting of the internal variable that it is in use must not be
done until it finally actually gets used!

NOTE and WARNING: I noticed that you can't actually switch off the global dns
cache with CURLOPT_DNS_USE_GLOBAL_CACHE but you couldn't do that previously
either and the option is very clearly and loudly documented as DO NOT USE so
I won't bother to fix this bug now.
2008-01-15 22:44:12 +00:00
Daniel Stenberg
56f17d2c9f I made the torture test on test 530 go through. This was actually due to
silly code left from when we switched to let the multi handle "hold" the dns
cache when using the multi interface... Of course this only triggered when a
certain function call returned error at the correct moment.
2008-01-15 22:15:55 +00:00
Daniel Stenberg
19ae96f4d0 Michal Marek's improved .curlrc syntax description 2008-01-15 08:45:22 +00:00
Daniel Stenberg
53108806af Joe Malicki filed bug report #1871269
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang-
problem that occurred when doing a large HTTP POST request with the
response-body read from a callback.
2008-01-14 22:02:14 +00:00
Yang Tse
1d620a3df4 fix compiler warning 2008-01-14 19:40:10 +00:00
Yang Tse
69f685056d startnew() shouldn't return a positive pid as reported in the pidfile
by the spawned server itself unless it is actually alive
2008-01-14 19:28:54 +00:00
Daniel Stenberg
9c7d4394f9 5.3 support FF3 sqlite cookie files 2008-01-14 17:49:06 +00:00
Gisle Vanem
bcc3c9279a Trying GnuTLS and OpenSSL together fails to compile in not so
obvious ways. Give an explicit error.
2008-01-14 16:51:32 +00:00
Yang Tse
5d63404966 #115 is done 2008-01-14 01:53:17 +00:00
Yang Tse
a8ae8087c4 fix compiler warning 2008-01-13 04:39:32 +00:00
Yang Tse
502da27d65 add client features part 2008-01-13 03:27:14 +00:00
Daniel Stenberg
4ab8ebb232 I re-arranged the curl --help output. All the options are now sorted on
their long option names and all descriptions are one-liners.
2008-01-12 22:56:12 +00:00
Daniel Stenberg
f866af912d Eric Landes provided the patch (edited by me) that introduces the
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep a consistent naming and to avoid getting two dashes in
these option names. Eric also provided an update to the man page for the new
option.
2008-01-12 22:10:53 +00:00
Daniel Stenberg
4f00a8db73 added release dates for four very old releases 2008-01-12 10:31:07 +00:00
Yang Tse
5004529685 Remove hardcoded verbosity 2008-01-12 04:32:03 +00:00
Yang Tse
2b63eb8511 Ooops 2008-01-12 00:12:16 +00:00
Yang Tse
f09fe4b49f Ooops 2008-01-11 21:59:05 +00:00
Daniel Stenberg
22c76df44d new year 2008-01-11 21:23:57 +00:00
Yang Tse
35be09cf58 When verifying that test harness's SSH and SOCKS servers have been
started check also that the process is actually alive, since they
could have died once the pidfile was written out
2008-01-11 20:17:33 +00:00
Yang Tse
3564aec388 fix compiler warning 2008-01-11 17:35:10 +00:00
Yang Tse
a042090467 fix compiler warning 2008-01-11 16:49:35 +00:00
Daniel Stenberg
148d727525 "114 - Ranged downloads on file:// URLs" done 2008-01-11 15:21:21 +00:00
Daniel Stenberg
08adf67969 Daniel Egger made CURLOPT_RANGE work on file:// URLs the very same way it
already worked for FTP:// URLs
2008-01-11 14:20:41 +00:00
Daniel Stenberg
e2c817731a I made the curl tool switch from using CURLOPT_IOCTLFUNCTION to now use the
spanking new CURLOPT_SEEKFUNCTION simply to take advantage of the improved
performance for the upload resume cases where you want to upload the last
few bytes of a very large file. To implement this decently, I had to switch
the client code for uploading from fopen()/fread() to plain open()/read() so
that we can use lseek() to do >32bit seeks (as fseek() doesn't allow that)
on systems that offer support for that.
2008-01-11 14:00:47 +00:00
Daniel Stenberg
8df7e0bdba Michal Marek made curl-config --libs not include /usr/lib64 in the output
(it already before skipped /usr/lib).  /usr/lib64 is the default library
directory on many 64bit systems and it's unlikely that anyone would use the
path privately on systems where it's not.
2008-01-10 22:14:02 +00:00
Yang Tse
14ff7e75e0 Temporary change to help debugging SSH server verification failures 2008-01-10 16:19:14 +00:00
Daniel Stenberg
d270d6518a Two more items done:
109 - curl_easy_pause
110 - seekfunction
2008-01-10 10:31:01 +00:00
Daniel Stenberg
18faa50940 Georg Lippitsch brought CURLOPT_SEEKFUNCTION and CURLOPT_SEEKDATA to allow
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new function will also be preferred to get
used instead of the CURLOPT_IOCTLFUNCTION for seeking back in a stream when
doing multi-stage HTTP auth with POST/PUT.
2008-01-10 10:30:19 +00:00
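A minimal sketch of such a seek callback, assuming the upload is read from a plain file descriptor; the names are illustrative and the zero/non-zero return convention follows the callback's documentation:

    #include <unistd.h>
    #include <curl/curl.h>

    static int my_seek(void *userp, curl_off_t offset, int origin)
    {
      int fd = *(int *)userp;
      /* origin has the same meaning as for lseek() (SEEK_SET etc.) */
      if(lseek(fd, (off_t)offset, origin) == (off_t)-1)
        return 1;   /* non-zero: seek failed, let libcurl fail the transfer */
      return 0;     /* success */
    }

    /* ... then, with 'curl' an easy handle and 'fd' the upload file: */
    /* curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);         */
    /* curl_easy_setopt(curl, CURLOPT_SEEKDATA, &fd);                 */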
Daniel Stenberg
0ce484eed9 Nikitinskit Dmitriy filed bug report #1868255
(http://curl.haxx.se/bug/view.cgi?id=1868255) with a patch. It identifies
and fixes a problem with parsing WWW-Authenticate: headers with additional
spaces in the line that the parser wasn't written to deal with.
2008-01-10 09:17:07 +00:00
Daniel Stenberg
bce5ae9a07 corrected comment 2008-01-10 09:16:21 +00:00
Yang Tse
15f832d1c2 fix compiler warning 2008-01-09 19:11:56 +00:00
Yang Tse
c249a8aa1b Fix file Id 2008-01-09 01:11:59 +00:00
Yang Tse
fc794ae012 Add /usr/freeware/sbin and /usr/freeware/libexec to the ssh binaries
locations search list.
2008-01-09 00:58:48 +00:00
Daniel Stenberg
07227e8089 added the --retry problems mention on the curl-library list today 2008-01-08 22:15:19 +00:00
Yang Tse
32cc75d6cb Partially cleanup debugging messages in test harness, introduced for
new minimum SSH version support for SCP, SFTP and SOCKS tests.

Some verbosity which still remains, will go out before next release.
2008-01-08 20:12:43 +00:00
Yang Tse
1c0a19ad53 Remove increased loglevel intended to debug autobuild's publickey
authentication failures when using OpenSSH 2.9.9 or SunSSH.

Verified fact: Even when only using publickey authentication,
OpenSSH and SunSSH first validate the user, this implies that
if the user validation fails, 'invalid user', the publickey
authentication will not be allowed to complete.
2008-01-08 19:18:25 +00:00
Daniel Stenberg
de23b98522 Introducing curl_easy_pause() and new magic return codes for both the read
and the write callbacks that now can make a connection's reading and/or
writing get paused.
2008-01-08 14:52:05 +00:00
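A hedged sketch of how an application might use the new codes; the callback and the later unpause call are illustrative only:

    #include <curl/curl.h>

    /* returning CURL_WRITEFUNC_PAUSE from the write callback pauses receiving */
    static size_t pausing_write(char *ptr, size_t size, size_t nmemb, void *userp)
    {
      (void)ptr;
      (void)size;
      (void)nmemb;
      (void)userp;
      return CURL_WRITEFUNC_PAUSE;   /* instead of the usual size * nmemb */
    }

    /* ... later, from elsewhere in the program, resume the transfer: */
    /* curl_easy_pause(handle, CURLPAUSE_CONT); */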
Daniel Stenberg
5e1c9e90d9 removed 113, both bugs #1850730 and #1854175 are fixed in CVS 2008-01-08 11:11:20 +00:00
Yang Tse
59b4bdf78d Change typecast due to http://cool.haxx.se/cvs.cgi/curl/include/curl/curl.h.diff?r1=1.336&r2=1.337 2008-01-08 01:05:50 +00:00
Yang Tse
34d02d1969 Increase loglevel to debug autobuild's publickey authentication
failures when using OpenSSH 2.9.9 or SunSSH
2008-01-08 00:40:02 +00:00
Yang Tse
2408b236ca Display ssh server log and configuration upon socks server failure 2008-01-08 00:39:31 +00:00
Dan Fandrich
4acd437952 Fixed test description 2008-01-07 19:54:40 +00:00
Patrick Monnerat
314f62958d ILE RPG support update (from include/curl/curl.h) 2008-01-07 16:32:49 +00:00
Daniel Stenberg
c616d56e96 updated URLs and moved down two issues to the new "less likely" section 2008-01-06 23:22:06 +00:00
Daniel Stenberg
f111c9edae more SOCKS5_HOSTNAME adjustments from Richard Atterer 2008-01-06 21:41:38 +00:00
Daniel Stenberg
7138296633 make sure we deal with SOCKS5_HOSTNAME as a proxy type as well 2008-01-06 12:56:34 +00:00
Daniel Stenberg
195e94c0fa Richard Atterer reverted back what I missed in my previous revert ;-) 2008-01-06 12:56:19 +00:00
Daniel Stenberg
cadd08f36a make sure CURLPROXY_SOCKS5_HOSTNAME is taken care of as well 2008-01-06 12:54:16 +00:00
Daniel Stenberg
7306b7829b fixed: 116 - bug #1863171, curl_getdate() bug
added: 117 - Eric Landes patch for introducing the --tcp-keep* options
2008-01-06 11:10:35 +00:00
Daniel Stenberg
423309541a Jeff Johnson filed bug report #1863171
(http://curl.haxx.se/bug/view.cgi?id=1863171) where he pointed out that
libcurl's date parser didn't accept a +1300 time zone which actually is used
fairly often (like New Zealand's Daylight Saving Time), so I modified the
parser to now accept up to and including -1400 to +1400.
2008-01-06 10:50:57 +00:00
Yang Tse
9c6533d287 Increase MaxAuthTries from 0 to 10. Using a value of 0 is too restrictive 2008-01-06 02:02:55 +00:00
Daniel Stenberg
b430576436 Based on further discussion on curl-library, I reverted yesterday's SOCKS5
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.

The default SOCKS5 proxy is again back to sending the IP address to the
proxy.  The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
2008-01-05 22:04:18 +00:00
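In libcurl terms that amounts to roughly the following (a sketch; the proxy host is a placeholder):

    #include <curl/curl.h>

    static void use_socks5_hostname(CURL *curl)
    {
      /* let the SOCKS5 proxy resolve the host name instead of resolving locally */
      curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);
    }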
Daniel Stenberg
65008a4e55 Added Daniel Egger and extended the --no-keep-alive description 2008-01-05 21:04:18 +00:00
Daniel Stenberg
3df484088f added keyword 2008-01-05 12:15:41 +00:00
Yang Tse
2912189875 Don't abort operation when attempting to set SO_KEEPALIVE
fails, just issue a warning and ignore the failure.
2008-01-05 01:39:07 +00:00
Dan Fandrich
fcb2595ed6 "yes" must be in quotes to be XML compatible 2008-01-04 23:57:39 +00:00
Daniel Stenberg
0878af3ec0 111 - DNS resolve over socks5 is done
added 116 - bug #1863171, curl_getdate() bug
2008-01-04 23:55:22 +00:00
Daniel Stenberg
fe0d7aee49 Daniel Egger provided 'nonewline=yes' support for the <stdout> section 2008-01-04 23:31:04 +00:00
Daniel Stenberg
2e42b0a252 Based on Maxim Perenesenko's patch, we now do SOCKS5 operations and let the
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used we resolve the host name locally and
pass on the IP address only to the proxy.
2008-01-04 23:01:00 +00:00
Daniel Stenberg
fcc485092a 14.3 extend CURLOPT_SOCKOPTFUNCTION prototype
(for next SONAME bump)
2008-01-04 22:16:16 +00:00
Yang Tse
a4945fe687 Missing newline at end of message 2008-01-04 19:56:56 +00:00
Yang Tse
88d89b2177 Fix 'format string' compiler warning 2008-01-04 15:39:06 +00:00
Yang Tse
61a2d5ea75 'ControlPath' ssh client configuration file option requires OpenSSH 4.2 or
later to accept 'none' as an indication to disable connection multiplexing
2008-01-04 14:12:10 +00:00
Yang Tse
c479c64333 SunSSH 1.1 ssh client does not support config file options:
ConnectTimeout
 ForwardX11Trusted
 HashKnownHosts
 RekeyLimit
 ServerAliveCountMax
 ServerAliveInterval
2008-01-04 13:24:17 +00:00
Yang Tse
7a2177dc42 - Display curl_ssh_config when socks server fails to start.
- Capability of running socks5 tests must be based on ssh daemon version
  and not on ssh client version.
2008-01-04 13:00:40 +00:00
Yang Tse
bf6e2f28ba Make sure @INC is modified before 'using' the sshhelp module. 2008-01-04 03:05:33 +00:00
Yang Tse
f5da1e5484 'LocalCommand' no longer used for ssh client config file. When used it
requires a non-blank argument.
2008-01-04 03:04:30 +00:00
Yang Tse
fd8d862c37 Modify test harness so that the minimum SSH version required to run
SCP, SFTP and SOCKS4 tests is now OpenSSH 2.9.9 or SunSSH 1.0

For SOCKS5 tests minimum versions are OpenSSH 3.7 or SunSSH 1.0
2008-01-03 20:48:22 +00:00
Gisle Vanem
083d3190e5 'false' and 'true' are not built-ins on most compilers.
Use TRUE/FALSE from setup_once.h.
2008-01-03 15:18:27 +00:00
Daniel Stenberg
6787d1ed35 one gone, one added 2008-01-02 22:46:15 +00:00
Daniel Stenberg
d9023c16ab - I fixed two cases of missing return code checks when handling chunked
decoding where a write error (or abort return from a callback) didn't stop
  libcurl's processing.
2008-01-02 22:30:34 +00:00
Daniel Stenberg
193d33fd4a I removed the socklen_t use from the public curl/curl.h header and instead
made it an unsigned int. The type was only used in the curl_sockaddr struct
definition (only used by the curl_opensocket_callback). On all platforms I
could find information about, socklen_t is 32 unsigned bits large so I don't
think this will break the API or ABI. The main reason for this change is of
course for all the platforms that don't have a socklen_t definition in their
headers to build fine again. Providing our own configure magic and custom
definition of socklen_t on those systems proved to work but was a lot of
cruft, code and extra magic needed - when this very small change of type seems
harmless and still solves the missing socklen_t problem.
2008-01-02 22:23:27 +00:00
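For context, a sketch of the struct in question and its only user, the open socket callback, as an application typically wires them up; this is not code from the commit:

    #include <sys/socket.h>
    #include <curl/curl.h>

    /* curl_sockaddr now carries the address length as a plain unsigned int:
       struct curl_sockaddr {
         int family;
         int socktype;
         int protocol;
         unsigned int addrlen;    (previously socklen_t)
         struct sockaddr addr;
       };
    */
    static curl_socket_t my_opensocket(void *clientp, curlsocktype purpose,
                                       struct curl_sockaddr *address)
    {
      (void)clientp;
      (void)purpose;
      /* create the socket ourselves, e.g. to set custom options on it */
      return socket(address->family, address->socktype, address->protocol);
    }

    /* curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, my_opensocket); */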
Daniel Stenberg
a46b40b7fd Richard Atterer brought a patch that added support for SOCKS4a proxies, which
is an unofficial PROXY4 variant that sends the hostname to the proxy instead
of the resolved address (which is already supported by SOCKS5).  --socks4a is
the curl command line option for it and CURLOPT_PROXYTYPE can now be set to
CURLPROXY_SOCKS4A as well.
2008-01-02 21:40:11 +00:00
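Mirroring the SOCKS5 hostname sketch above, the libcurl side would look roughly like this (placeholder proxy host):

    #include <curl/curl.h>

    static void use_socks4a(CURL *curl)
    {
      /* SOCKS4a: pass the host name to the proxy and let it do the resolving */
      curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS4A);
    }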
Daniel Stenberg
0b9b8acb08 updated 2008-01-02 21:39:46 +00:00
Gisle Vanem
bf98b635cd Added '-d' option for Watt-32 debugging. 2008-01-02 05:30:52 +00:00
Daniel Stenberg
7795eb6db8 Mohun Biswas pointed out that --libcurl generated a source code with an int
function but without a return statement. While fixing that, I also took care
about adding some better comments for the generated code.
2008-01-01 21:11:26 +00:00
Daniel Stenberg
31674559d3 --libcurl was added in 7.16.1, a useful information 2007-12-27 21:44:21 +00:00
Daniel Stenberg
04e4d9a0b3 Dmitry Kurochkin mentioned a flaw
(http://curl.haxx.se/mail/lib-2007-12/0252.html) in detect_proxy() which
failed to set the bits.proxy variable properly when an environment variable
told libcurl to use a http proxy.
2007-12-26 23:29:35 +00:00
Daniel Stenberg
f277124a0f In an attempt to repeat the problem in bug report #1850730
(http://curl.haxx.se/bug/view.cgi?id=1850730) I wrote up test case 552. The
test is doing a 70K POST with a read callback and an ioctl callback over a
proxy requiring Digest auth. The test case code is more or less identical to
the test recipe code provided by Spacen Jasset (who submitted the bug report).
2007-12-26 21:48:52 +00:00
Daniel Stenberg
6adf5880f5 what we're having atm 2007-12-26 21:46:51 +00:00
Gunter Knauf
4e8c4fc80b added missing semicolon from last commit. 2007-12-25 13:26:01 +00:00
Daniel Stenberg
fc1d1ea934 Gary Maxwell filed bug report #1856628
(http://curl.haxx.se/bug/view.cgi?id=1856628) and provided a fix for the
(small) memory leak in the SSL session ID caching code. It happened when a
previous entry in the cache was re-used.
2007-12-24 23:45:48 +00:00
Dan Fandrich
9cd30c2012 Use getcwd() to get the directory, which works even if one of the directory
components doesn't have read permission set.
2007-12-22 18:25:43 +00:00
Dan Fandrich
d639ed1aaf Use getcwd() to get the directory, which works even if one of the
directory components doesn't have read permission set.
2007-12-20 21:21:43 +00:00
Dan Fandrich
c3a02f5407 Ensure that nroff doesn't put anything but ASCII characters into the
--manual text.
2007-12-19 21:19:01 +00:00
Yang Tse
674845f239 (http://curl.haxx.se/mail/archive-2007-12/0039.html) reported and fixed
a file truncation problem on Windows build targets triggered when retrying
a download with curl.
2007-12-18 18:33:24 +00:00
Yang Tse
07a1857d59 MSVC 9.0 (VS2008) does not support Windows build targets prior to WinXP,
and makes wrong assumptions about the build target when it isn't specified. So,
if no build target has been defined we will target WinXP when building
with MSVC 9.0 (VS2008).
2007-12-18 18:08:19 +00:00
Yang Tse
f4ffa85f60 pollfd struct and WSA_poll fixes for Windows Vista already present in CVS 2007-12-18 10:36:32 +00:00
Daniel Stenberg
bcd7d03b3b Mateusz Loskot pointed out that VC++ 9.0 (2008) has the pollfd struct and
defines in the SDK somehow differently so we have to add a define to the
config-win32.h file to make select.h compile nicely.
2007-12-17 21:19:42 +00:00
Daniel Stenberg
82c9379b6c spell! 2007-12-15 22:19:08 +00:00
Daniel Stenberg
c1730dc50a Add test 551 that tests callback-post over a proxy that requires Digest auth.
A failed attempt to repeat bug report #1850730 (ie the test works fine).
2007-12-15 22:13:07 +00:00
Daniel Stenberg
20695098c8 remove mistaken "-d" from here 2007-12-14 22:09:15 +00:00
Daniel Stenberg
ee52ae001c -u addition: If you just give the user name (without entering a colon) curl
will prompt for a password. Denis Bredelet pointed out!
2007-12-14 11:19:56 +00:00
Dan Fandrich
26115aac5d Added missing <features> 2007-12-14 01:09:45 +00:00
Dan Fandrich
ca6b27aed2 Fixed typo in test title 2007-12-14 01:05:30 +00:00
Yang Tse
4fabe22173 Fix compiler warning 2007-12-13 14:39:51 +00:00
Daniel Stenberg
7b1a22147e David Wright filed bug report #1849764
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem for re-used connections that previously had sent
Expect: 100-continue and in some situations the subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix (that
makes the setting of the flag in every single request unconditionally) is
fine and is now used!
2007-12-13 10:00:06 +00:00
Daniel Stenberg
dc24540ed1 Gilles Blanc made the curl tool enable SO_KEEPALIVE for the connections and
added the --no-keep-alive option that can disable that on demand.
2007-12-12 11:22:15 +00:00
Daniel Stenberg
92eae30f4d clarify that the CURLMOPT_TIMERFUNCTION callback can pass in 0 and -1 as legal
values and what they mean
2007-12-11 21:19:38 +00:00
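A hedged sketch of such a timer callback with the two special values spelled out; the application-side names are invented:

    #include <curl/curl.h>

    static int my_timer_cb(CURLM *multi, long timeout_ms, void *userp)
    {
      (void)multi;
      (void)userp;
      if(timeout_ms == -1) {
        /* -1: delete the timer the application currently has running */
      }
      else if(timeout_ms == 0) {
        /* 0: act on the multi handle as soon as possible */
      }
      else {
        /* otherwise: (re)start a timer to expire in timeout_ms milliseconds */
      }
      return 0;
    }

    /* curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, my_timer_cb); */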
Daniel Stenberg
79ef08f631 build acountry too 2007-12-11 19:34:31 +00:00
Gisle Vanem
e3c5f8374b Added acountry.c. 2007-12-11 17:26:07 +00:00
Gisle Vanem
6dc68b4193 Added build of acountry.nlm. 2007-12-11 17:24:43 +00:00
Gisle Vanem
afab4d888f Added build of acountry.exe. 2007-12-11 17:23:18 +00:00
Gisle Vanem
c751dfd65d Build acountry.exe. Added 'socklen_t' define. 2007-12-11 17:22:20 +00:00
Gisle Vanem
dbca1347f1 Another sample application that returns country-code and
name from an IPv4-address or host-name. Using the service of
countries.nerd.dk.
2007-12-11 17:21:12 +00:00
Daniel Stenberg
3b6315ce1f grrr, the previous commit was meant to properly make sure that we don't
link any executables when doing debug builds since they kind of assume
symbols provided by libcurl, but it also wrongly included acountry.c
2007-12-10 22:20:26 +00:00
Daniel Stenberg
3c1db5f250 when building 2007-12-10 22:19:06 +00:00
Daniel Stenberg
562e9b7bf3 build ahost and adig by default but don't install them 2007-12-10 21:42:04 +00:00
Patrick Monnerat
a83e72692f Define new options in OS400 RPG interface
Port OS400 compilation scripts to >= V5R2M0
2007-12-10 17:09:09 +00:00
Gisle Vanem
bd99a7dc8c Fix for targets that do have 'struct in6_addr', but which don't
define 's6_addr' as a macro.
2007-12-10 16:14:02 +00:00
Daniel Stenberg
db2d52a792 cut out the number of contributors from this file since it'll always be wrong 2007-12-10 11:33:46 +00:00
Daniel Stenberg
24602edc17 5.13 How do I stop an ongoing transfer? 2007-12-10 10:28:56 +00:00
Daniel Stenberg
b0b40d9a00 Andrew Moise filed bug report #1847501
(http://curl.haxx.se/bug/view.cgi?id=1847501) and pointed out a memcpy()
that should be memmove() in the convert_lineends() function.
2007-12-09 22:31:53 +00:00
Daniel Stenberg
71b105ceb1 add in toc too 2007-12-09 12:26:05 +00:00
Daniel Stenberg
ccb4956145 RTMP support? 2007-12-09 12:22:22 +00:00
Daniel Stenberg
3d09cb0a88 oops another bad numbering 2007-12-09 12:20:06 +00:00
Daniel Stenberg
a03c2d825b oops duplicate numbering 2007-12-09 12:12:52 +00:00
Daniel Stenberg
06fb242e23 slightly rephrased 2007-12-09 12:00:54 +00:00
Gisle Vanem
a086952244 Removed use of '..\lib\libcurl_wc.lib' as this is not really
a static-lib. Renamed 'OBJ_DIR' to 'WC_Win32.obj'.
2007-12-09 09:58:56 +00:00
Gisle Vanem
2b314064ae Removed building 'libcurl_wc.lib' as this isn't a static-library
in the common sense. Renamed 'OBJ_DIR' to 'WC_Win32.obj'.
2007-12-09 09:44:05 +00:00
Daniel Stenberg
439990be88 Travelling some 500km by train back and forth on the same day gives you time
to do things you don't otherwise do, but here's the summary of today's work...
2007-12-08 23:01:46 +00:00
Daniel Stenberg
41d8186c7e reformat to FAQ/CONTRIBUTE style, for nicer web-look when I apply the magic
script(s) on it online
2007-12-08 23:00:00 +00:00
Daniel Stenberg
6e9276229f cleanup 2007-12-08 22:58:12 +00:00
Daniel Stenberg
636f5eb882 fix a crash in oom situations (thanks runtests.pl -t!) 2007-12-08 22:57:17 +00:00
Daniel Stenberg
963ef5414c add keywords 2007-12-08 22:56:17 +00:00
Daniel Stenberg
975812d246 add missing files 2007-12-08 22:56:05 +00:00
Daniel Stenberg
089668ec73 correct the comment about size 2007-12-08 22:53:49 +00:00
Daniel Stenberg
cc0ce38acc add test 549 and 550 2007-12-08 22:53:28 +00:00
Daniel Stenberg
8cdff55b80 mention how to enable chunked encoding for POSTs 2007-12-08 22:52:39 +00:00
Daniel Stenberg
662bee7193 All static functions that were previously named Curl_* something no longer
use that prefix as we use that prefix only for library-wide internal global
symbols.
2007-12-08 22:50:55 +00:00
Daniel Stenberg
f8172f85b1 clarify that when curl_multi_timeout() returns -1 it just means that there
is no current timeout. It does not mean wait forever and it does not mean
do not wait at all. It means there is no timeout value known at this point in
time.
2007-12-06 22:36:52 +00:00
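As a sketch of the intended use, the application picks its own upper bound when -1 comes back; the helper name and the one-second cap are illustrative:

    #include <curl/curl.h>

    /* assumes 'multi' is a CURLM handle; returns a wait time in milliseconds */
    static long pick_wait_time(CURLM *multi)
    {
      long timeout_ms = -1;
      curl_multi_timeout(multi, &timeout_ms);
      if(timeout_ms < 0)
        timeout_ms = 1000;   /* no timeout known right now: use our own cap */
      return timeout_ms;
    }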
Daniel Stenberg
7d3ea12b62 Spacen Jasset reported a problem with doing POST (with data read with a
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added the test case 547 and 548 to
verify two variations of POST over proxy with NTLM.
2007-12-05 21:20:14 +00:00
Daniel Stenberg
59dc9085d1 fix compiler warning 2007-12-05 11:10:24 +00:00
Daniel Stenberg
4e4f33a297 added test548 which uses the lib547 source file, preparing for test547 which
is supposed to repeat the bug report "NTLM proxy authentication with
CURLOPT_READDATA seems broken." posted on the curl-library mailing list on dec
3 2007.
2007-12-05 11:08:56 +00:00
Yang Tse
8fa599215b Fix compiler warning: variable may be used uninitialized 2007-12-04 00:15:03 +00:00
Daniel Stenberg
31e2409d6b Ray Pekowski filed bug report #1842029 2007-12-03 22:44:47 +00:00
Yang Tse
15c304225f Fix three issues previous cleanup introduces. 2007-12-03 19:57:18 +00:00
Daniel Stenberg
e1998e3b58 SSL session id caching bugfix 2007-12-03 11:49:20 +00:00
Daniel Stenberg
5c447f2499 Bug report #1842029 (http://curl.haxx.se/bug/view.cgi?id=1842029) identified
a problem with SSL session caching that prevented it from working, and the
associated fix!
2007-12-03 11:48:09 +00:00
Daniel Stenberg
9d0ffb9cc6 mention "no longer default-appends ;type= on FTP URLs thru proxies" as a bug
fix even if kind of implied by the new option
2007-12-03 11:41:36 +00:00
Daniel Stenberg
2be50baf97 Now libcurl (built with OpenSSL) doesn't return error anymore if the remote
SSL-based server doesn't present a certificate when the request is told to
ignore certificate verification anyway.
2007-12-03 11:39:27 +00:00
Daniel Stenberg
a1772ca406 Erik Kline cleaned up ares_gethostbyaddr.c:next_lookup() somewhat 2007-12-03 10:25:05 +00:00
Daniel Stenberg
30eda92a53 Brad Spencer fixed the configure script to assume that there's no
/dev/urandom when built cross-compiled as then the script cannot check for
it.
2007-12-03 10:22:29 +00:00
Daniel Stenberg
1f058f1014 removed the ;type= thing for FTP urls through proxy, since that's now only
present when enabled by on option which isn't done by default (and isn't even
available for the curl app atm)
2007-12-03 09:50:32 +00:00
Daniel Stenberg
84d0477cb9 107 - resolve the type= thing for FTP URLs over HTTP proxies, is solved 2007-12-02 23:39:39 +00:00
Daniel Stenberg
1c93e75375 Michal Marek introduced CURLOPT_PROXY_TRANSFER_MODE which is used to control
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
2007-12-02 23:38:23 +00:00
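In application code the new option is a plain long flag; a sketch with an illustrative helper name:

    #include <curl/curl.h>

    static void keep_type_suffix(CURL *curl)
    {
      /* 1 = append ;type=a / ;type=i to ftp:// URLs passed to a HTTP proxy,
         0 (the default) = leave the URL alone */
      curl_easy_setopt(curl, CURLOPT_PROXY_TRANSFER_MODE, 1L);
    }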
Dan Fandrich
380ed8bebf Upped copyright year 2007-11-30 02:31:07 +00:00
Daniel Stenberg
98e8978857 uh, corrected pretty major write error! 2007-11-29 22:27:51 +00:00
Daniel Stenberg
56ddfbea6e ftp resumed upload and long Digest nonces 2007-11-29 22:15:22 +00:00
Daniel Stenberg
45a2240ead A bug report on the curl-library list showed a HTTP Digest session going on
with a 700+ letter nonce. Previously libcurl only supported 127 letter ones
and now I bumped it to 1023.
2007-11-29 22:14:48 +00:00
Daniel Stenberg
f75ba55b51 Fixed the resumed FTP upload loop to not require that the read callback
returns a full buffer on each invoke.
2007-11-29 22:14:33 +00:00
Daniel Stenberg
46e6115d72 include the libssh2 return code in the output for these failures to ease
debugging
2007-11-29 11:25:10 +00:00
Daniel Stenberg
800a72878a the gethostbyname fix applied here as well 2007-11-28 15:18:27 +00:00
Daniel Stenberg
649f7b7fd3 fix next_lookup() to continue searching even if c-ares failed to load the
/etc/hosts file, pointed out by Erik Kline:
http://daniel.haxx.se/projects/c-ares/mail/c-ares-archive-2007-11/0027.shtml
2007-11-28 10:46:40 +00:00
Daniel Stenberg
c1b734a3e1 When --with-gssapi (without given path) is used, we must use krb5-config to
get the libs as well and not only the include path like we used to.
2007-11-28 10:33:47 +00:00
Yang Tse
cf806748ec To allow remote log inspection avoid redirecting messages to stderr.
Cleanup some debugging messages. Unlink log file on exit.
2007-11-28 01:46:28 +00:00
Daniel Stenberg
b28dc011e0 Remove the check for libdl since that isn't actually used and it causes
warnings. Pointed out by Robin Cornelius.
2007-11-27 22:41:53 +00:00
Daniel Stenberg
ee4fef3768 pkgconfig fix by Andreas Schuldei 2007-11-27 22:38:11 +00:00
Daniel Stenberg
058a023fae spellfix 2007-11-27 22:37:55 +00:00
Yang Tse
0c367fef94 ConnectTimeout requires OpenSSH 3.7 or later 2007-11-27 20:57:22 +00:00
Yang Tse
a418d290f1 Explicitly disallow remote hosts to connect to local forwarded ports,
the socks server port in the test suite. This is the default setting
unless a tinkered ssh build is being used.
2007-11-27 00:52:30 +00:00
Yang Tse
08cb30801c Stop ssh and socks servers when verification fails 2007-11-26 14:26:40 +00:00
Yang Tse
788de4f7ba Providing an explicit bind address besides the port for dynamic application-level
port forwarding, our socks port, prevents ssh from running on some systems.

By default, ssh binds local port forwardings to the loopback address; since this
was the address being given as the explicit bind address, it is now simply not given.
2007-11-26 14:07:09 +00:00
Daniel Stenberg
ebce0a16f6 more blurb 2007-11-26 12:26:58 +00:00
Daniel Stenberg
df546bd58c Added recent changes and spellchecked 2007-11-26 11:04:51 +00:00
Daniel Stenberg
05221e9056 test1015 --data-urlencode 2007-11-26 11:04:21 +00:00
Daniel Stenberg
e963714de6 #1 fixed --data-urlencode when no = or @ was used
#2 extended the user-agent buffer since I hit the 128 byte boundary!
2007-11-26 11:03:32 +00:00
Daniel Stenberg
dc11239ff1 slightly less outdated 2007-11-26 11:02:45 +00:00
Yang Tse
d59841618d Temporary change to better debug startup failures
of test suite ssh and socks servers.
2007-11-26 02:45:24 +00:00
Yang Tse
8d3964782a Allow different start timeout specification for each server 2007-11-25 03:55:53 +00:00
Daniel Stenberg
162c039e9d reqdata doesn't exist anymore and the path moved to the UrlState struct 2007-11-24 23:18:21 +00:00
Daniel Stenberg
13648f8ccd struct HandleData is now called struct SingleRequest, and is only for data that
is inited at the start of the DO action. I removed the Curl_transfer_keeper
struct completely, and I had to move out a few struct members (that had to
be set before DO or used after DONE) to the UrlState struct. The SingleRequest
struct is accessed with SessionHandle->req.

One of the biggest reasons for doing this was the bunch of duplicate struct
members in HandleData and Curl_transfer_keeper since it was really messy to
keep track of two variables with the same name and basically the same purpose!
2007-11-24 23:16:55 +00:00
Yang Tse
5b809a3104 make 'checkdied' in runtests.pl more robust 2007-11-23 12:18:45 +00:00
Yang Tse
3daa54d636 Revert last change since it breaks running the test suite
when builddir is different from srcdir.
2007-11-23 09:50:44 +00:00
Yang Tse
8f1829d1d2 Improve chance of running runtests.pl from outside the
source tree 'tests' directory
2007-11-23 04:03:46 +00:00
Yang Tse
6efb6addf2 Debugging messages to trace startnew failures 2007-11-22 19:56:38 +00:00
Yang Tse
d789097af0 Provide a socklen_t definition in curl.h for Win32 API build targets
which don't have one.
2007-11-22 16:35:07 +00:00
Daniel Stenberg
4bd2d49ca1 make nlen a size_t to better hold diffs between pointers etc 2007-11-22 09:39:04 +00:00
Daniel Stenberg
ecfede9b3c Alessandro Vesely helped me improve the --data-urlencode's syntax, parser
and documentation.
2007-11-22 09:36:28 +00:00
Daniel Stenberg
cb04619de2 Make the do_complete() function not get called until the DO actually is
complete, which basically means when used with the multi interface
2007-11-21 22:37:55 +00:00
Yang Tse
61e2e86aef Temporary change adding additional debugging messages to better pinpoint
startup failures of test suite ssh and socks servers.
2007-11-21 19:33:09 +00:00
Yang Tse
9b86eecb94 Fix trying to return outside of a subroutine 2007-11-21 17:50:30 +00:00
Daniel Stenberg
35212da048 and we start on 1.5.2! 2007-11-21 10:16:44 +00:00
149 changed files with 7553 additions and 2291 deletions

325
CHANGES

@@ -6,24 +6,338 @@
Changelog
Version 7.18.0 (28 January 2008)
Daniel S (27 Jan 2008)
- Dmitry Kurochkin: In "real world" testing I found more bugs in
pipelining. A broken connection is not restored and we get into an infinite
loop. It happens because of wrong is_in_pipeline values.
Daniel S (26 Jan 2008)
- Kevin Reed filed bug report #1879375
(http://curl.haxx.se/bug/view.cgi?id=1879375) which describes how libcurl
got lost in this scenario: proxy tunnel (or HTTPS over proxy), ask to do any
proxy authentication and the proxy replies with an auth (like NTLM) and then
closes the connection after that initial informational response.
libcurl would not properly re-initialize the connection to the proxy and
continue the auth negotiation as it is supposed to. It does now, however, as it will
now detect if one or more authentication methods were available and asked
for, and will thus retry the connection and continue from there.
- I made the progress callback get called properly during proxy CONNECT.
Daniel S (23 Jan 2008)
- Igor Franchuk pointed out that CURLOPT_COOKIELIST set to "ALL" leaked
memory, and so did "SESS". Fixed now.
Yang Tse (22 Jan 2008)
- Check poll.h at configuration time, and use it when sys/poll.h unavailable
Daniel S (22 Jan 2008)
- Dmitry Kurochkin removed the cancelled state for pipelining, as we agreed
that it is bad anyway. Starting now, removing a handle that is in use in a
pipeline will break the pipeline - it'll be set back up again but still...
Yang Tse (21 Jan 2008)
- Disable ldap support for cygwin builds, since it breaks whole build process.
Fixing it will affect other platforms, so it is postponed for another release.
Daniel S (18 Jan 2008)
- Lau Hang Kin found and fixed a problem with the multi interface when doing
CONNECT over a proxy. curl_multi_fdset() didn't report back the socket
properly during that state, due to a missing case in the switch in the
multi_getsock() function.
Yang Tse (17 Jan 2008)
- Don't abort tests 518 and 537 when unable to raise the open-file soft limit.
Daniel S (16 Jan 2008)
- Nathan Coulter's patch that makes runtests.pl respect the PATH when figuring
out what valgrind to run.
Yang Tse (16 Jan 2008)
- Improved handling of out of memory in the command line tool that affected
data url encoded HTTP POSTs when reading it from a file.
Daniel S (16 Jan 2008)
- Dmitry Kurochkin worked a lot on improving the HTTP Pipelining support that
previously had a number of flaws, perhaps most notably when an application
fired up N transfers at once, as then they wouldn't pipeline at all as
nicely as one would think... Test case 530 was also updated to take the
improved functionality into account.
- Calls to Curl_failf() are not supposed to provide a trailing newline as the
function itself adds that. Fixed on 50 or something strings!
Daniel S (15 Jan 2008)
- I made the torture test on test 530 go through. This was actually due to
silly code left from when we switched to let the multi handle "hold" the dns
cache when using the multi interface... Of course this only triggered when a
certain function call returned error at the correct moment.
Daniel S (14 Jan 2008)
- Joe Malicki filed bug report #1871269
(http://curl.haxx.se/bug/view.cgi?id=1871269) and we could fix his hang-
problem that occurred when doing a large HTTP POST request with the
response-body read from a callback.
Daniel S (12 Jan 2008)
- I re-arranged the curl --help output. All the options are now sorted on
their long option names and all descriptions are one-liners.
- Eric Landes provided the patch (edited by me) that introduces the
--keepalive-time to curl to set the keepalive probe interval. I also took
the opportunity to rename the recently added no-keep-alive option to
no-keepalive to keep a consistent naming and to avoid getting two dashes in
these option names. Eric also provided an update to the man page for the new
option.
Daniel S (11 Jan 2008)
- Daniel Egger made CURLOPT_RANGE work on file:// URLs the very same way it
already worked for FTP:// URLs.
- I made the curl tool switch from using CURLOPT_IOCTLFUNCTION to now use the
spanking new CURLOPT_SEEKFUNCTION simply to take advantage of the improved
performance for the upload resume cases where you want to upload the last
few bytes of a very large file. To implement this decently, I had to switch
the client code for uploading from fopen()/fread() to plain open()/read() so
that we can use lseek() to do >32bit seeks (as fseek() doesn't allow that)
on systems that offer support for that.
Daniel S (10 Jan 2008)
- Michal Marek made curl-config --libs not include /usr/lib64 in the output
(it already before skipped /usr/lib). /usr/lib64 is the default library
directory on many 64bit systems and it's unlikely that anyone would use the
path privately on systems where it's not.
- Georg Lippitsch brought CURLOPT_SEEKFUNCTION and CURLOPT_SEEKDATA to allow
libcurl to seek in a given input stream. This is particularly important when
doing upload resumes when there's already a huge part of the file present
remotely. Before, and still if this callback isn't used, libcurl will read
and throw away the entire file up to the point where the resuming
begins (which of course can be a slow operation depending on file size,
I/O bandwidth and more). This new function will also be preferred to get
used instead of the CURLOPT_IOCTLFUNCTION for seeking back in a stream when
doing multi-stage HTTP auth with POST/PUT.
- Nikitinskit Dmitriy filed bug report #1868255
(http://curl.haxx.se/bug/view.cgi?id=1868255) with a patch. It identifies
and fixes a problem with parsing WWW-Authenticate: headers with additional
spaces in the line that the parser wasn't written to deal with.
Daniel S (8 Jan 2008)
- Introducing curl_easy_pause() and new magic return codes for both the read
and the write callbacks that now can make a connection's reading and/or
writing get paused.
Daniel S (6 Jan 2008)
- Jeff Johnson filed bug report #1863171
(http://curl.haxx.se/bug/view.cgi?id=1863171) where he pointed out that
libcurl's date parser didn't accept a +1300 time zone which actually is used
fairly often (like New Zealand's Daylight Saving Time), so I modified the
parser to now accept up to and including -1400 to +1400.
Daniel S (5 Jan 2008)
- Based on further discussion on curl-library, I reverted yesterday's SOCKS5
code to instead introduce support for a new proxy type called
CURLPROXY_SOCKS5_HOSTNAME that is used to send the host name to the proxy
instead of IP address and there's thus no longer any need for a new
curl_easy_setopt() option.
The default SOCKS5 proxy is again back to sending the IP address to the
proxy. The new curl command line option for enabling sending host name to a
SOCKS5 proxy is now --socks5-hostname.
Daniel S (4 Jan 2008)
- Based on Maxim Perenesenko's patch, we now do SOCKS5 operations and let the
proxy do the host name resolving and only if --socks5ip (or
CURLOPT_SOCKS5_RESOLVE_LOCAL) is used we resolve the host name locally and
pass on the IP address only to the proxy.
Yang Tse (3 Jan 2008)
- Modified test harness to allow SCP, SFTP and SOCKS4 tests to run with
OpenSSH 2.9.9, SunSSH 1.0 or later versions. SOCKS5 tests need OpenSSH
3.7, SunSSH 1.0 or later.
Daniel S (2 Jan 2008)
- I fixed two cases of missing return code checks when handling chunked
decoding where a write error (or abort return from a callback) didn't stop
libcurl's processing.
- I removed the socklen_t use from the public curl/curl.h header and instead
made it an unsigned int. The type was only used in the curl_sockaddr struct
definition (only used by the curl_opensocket_callback). On all platforms I
could find information about, socklen_t is 32 unsigned bits large so I don't
think this will break the API or ABI. The main reason for this change is of
course for all the platforms that don't have a socklen_t definition in their
headers to build fine again. Providing our own configure magic and custom
definition of socklen_t on those systems proved to work but was a lot of
cruft, code and extra magic needed - when this very small change of type
seems harmless and still solves the missing socklen_t problem.
- Richard Atterer brought a patch that added support for SOCKS4a proxies,
which is an unofficial PROXY4 variant that sends the hostname to the proxy
instead of the resolved address (which is already supported by SOCKS5).
--socks4a is the curl command line option for it and CURLOPT_PROXYTYPE can
now be set to CURLPROXY_SOCKS4A as well.
Daniel S (1 Jan 2008)
- Mohun Biswas pointed out that --libcurl generated a source code with an int
function but without a return statement. While fixing that, I also took care
about adding some better comments for the generated code.
Daniel S (27 Dec 2007)
- Dmitry Kurochkin mentioned a flaw
(http://curl.haxx.se/mail/lib-2007-12/0252.html) in detect_proxy() which
failed to set the bits.proxy variable properly when an environment variable
told libcurl to use a http proxy.
Daniel S (26 Dec 2007)
- In an attempt to repeat the problem in bug report #1850730
(http://curl.haxx.se/bug/view.cgi?id=1850730) I wrote up test case 552. The
test is doing a 70K POST with a read callback and an ioctl callback over a
proxy requiring Digest auth. The test case code is more or less identical to
the test recipe code provided by Spacen Jasset (who submitted the bug
report).
Daniel S (25 Dec 2007)
- Gary Maxwell filed bug report #1856628
(http://curl.haxx.se/bug/view.cgi?id=1856628) and provided a fix for the
(small) memory leak in the SSL session ID caching code. It happened when a
previous entry in the cache was re-used.
Daniel Fandrich (19 Dec 2007)
- Ensure that nroff doesn't put anything but ASCII characters into the
--manual text.
Yang Tse (18 Dec 2007)
- MSVC 9.0 (VS2008) does not support Windows build targets prior to WinXP,
and makes wrong assumptions about the build target when it isn't specified. So,
if no build target has been defined we will target WinXP when building
curl/libcurl with MSVC 9.0 (VS2008).
- (http://curl.haxx.se/mail/archive-2007-12/0039.html) reported and fixed
a file truncation problem on Windows build targets triggered when retrying
a download with curl.
Daniel S (17 Dec 2007)
- Mateusz Loskot pointed out that MSVC 9.0 (VS2008) has the pollfd struct and
defines in winsock2.h somehow differently than previous versions and that
curl 7.17.1 would fail to compile out of the box.
Daniel S (13 Dec 2007)
- David Wright filed bug report #1849764
(http://curl.haxx.se/bug/view.cgi?id=1849764) with an included fix. He
identified a problem for re-used connections that previously had sent
Expect: 100-continue and in some situations the subsequent POST (that didn't
use Expect:) still had the internal flag set for its use. David's fix (that
makes the setting of the flag in every single request unconditionally) is
fine and is now used!
Daniel S (12 Dec 2007)
- Gilles Blanc made the curl tool enable SO_KEEPALIVE for the connections and
added the --no-keep-alive option that can disable that on demand.
Daniel S (9 Dec 2007)
- Andrew Moise filed bug report #1847501
(http://curl.haxx.se/bug/view.cgi?id=1847501) and pointed out a memcpy()
that should be memmove() in the convert_lineends() function.
Daniel S (8 Dec 2007)
- Renamed all internal static functions that had Curl_ prefixes to no longer
have them. The Curl_ prefix is exclusively used for library internal global
symbols. Static functions can be named anything, except for using Curl_ or
curl_ prefixes. This is for consistency and for easier maintenance and
overview.
- Cleaned up and reformatted the TODO document to look like the FAQ and
CONTRIBUTE, which makes nicer web pages
- Added test cases 549 and 550 that test CURLOPT_PROXY_TRANSFER_MODE.
- Added keywords on a bunch of test cases
- Fixed an OOM problem in the curl code that would lead to fclose on a bad
handle and crash
Daniel S (5 Dec 2007)
- Spacen Jasset reported a problem with doing POST (with data read with a
callback) over a proxy when NTLM is used as auth with the proxy. The bug
also concerned Digest and was limited to using callback only. Spacen worked
with us to provide a useful patch. I added the test case 547 and 548 to
verify two variations of POST over proxy with NTLM.
Daniel S (3 Dec 2007)
- Ray Pekowski filed bug report #1842029
(http://curl.haxx.se/bug/view.cgi?id=1842029) in which he identified a
problem with SSL session caching that prevented it from working, and provided
the associated fix!
- Now libcurl (built with OpenSSL) doesn't return error anymore if the remote
SSL-based server doesn't present a certificate when the request is told to
ignore certificate verification anyway.
- Michal Marek introduced CURLOPT_PROXY_TRANSFER_MODE which is used to control
the appending of the "type=" thing on FTP URLs when they are passed to a
HTTP proxy. Some proxies just don't like that appending (which is done
unconditionally in 7.17.1), and some proxies treat binary/ascii transfers
better with the appending done!
Daniel S (29 Nov 2007)
- A bug report on the curl-library list showed a HTTP Digest session going on
with a 700+ letter nonce. Previously libcurl only supported 127 letter ones
and now I bumped it to 1023.
- Fixed the resumed FTP upload loop to not require that the read callback
returns a full buffer on each invoke.
Daniel S (25 Nov 2007)
- Added test case 1015 that tests --data-urlencode in multiple ways
- Fixed --data-urlencode for when no @ or = are used
- Extended the user-agent buffer curl uses, since we can hit the 128 byte
border with plenty development libraries used. Like my current set: "curl
7.17.2-CVS (i686-pc-linux-gnu) libcurl/7.17.2-CVS OpenSSL/0.9.8g
zlib/1.2.3.3 c-ares/1.5.2-CVS libidn/1.1 libssh2/0.19.0-CVS"
Daniel S (24 Nov 2007)
- Internal rearrangements, so that the previous struct HandleData is no more.
It is now known as SingleRequest and the Curl_transfer_keeper struct within
that was removed entirely. This has the upside that there are fewer duplicate
struct members that made it hard to see and remember what struct that was
used to store what data. The transfer_keeper thing was once stored on a
per-connection basis and then it made sense to have the duplicate info but
since it was moved to the SessionHandle (in 7.16.0) it just added weirdness.
The SingleRequest struct is used by data that only is valid for this single
request.
Yang Tse (22 Nov 2007)
- Provide a socklen_t definition in curl.h for Win32 API build targets
which don't have one.
Daniel S (22 Nov 2007)
- Alessandro Vesely helped me improve the --data-urlencode's syntax, parser
and documentation.
Daniel S (21 Nov 2007)
- While inspecting the Negotiate code, I noticed how the proxy auth was using
the same state struct as the host auth, so both could never be used at the
same time! I fixed it (without being able to check) to use two separate
structs to allow authentication using Negotiate on host and proxy
-simultanouesly.
+simultaneously.
Daniel S (20 Nov 2007)
- Emil Romanus pointed out a bug that made an easy handle get the cookie
engine activated when set to use a share (even if the share doesn't share
cookies). I fixed it.
-- Fixed a very long-lasting mprintf() bug that occured when we did "%.*s%s",
+- Fixed a very long-lasting mprintf() bug that occurred when we did "%.*s%s",
since the second %s would then wrongly used the numerical precision argument
instead and crash.
-- Introuced --data-urlencode to the curl tool for easier url encoding of the
+- Introduced --data-urlencode to the curl tool for easier url encoding of the
data sent in a post.
Daniel S (18 Nov 2007)
@@ -68,6 +382,11 @@ Daniel S (12 Nov 2007)
make sure that there's never any chance for a NULL pointer in that struct
member.
Yang Tse (10 Nov 2007)
- Vikram Saxena (http://curl.haxx.se/mail/lib-2007-11/0096.html) pointed out
that the pollfd struct was being multi defined when using VS2008. This is
now fixed in /curl/lib/select.h
Daniel S (8 Nov 2007)
- Bug report #1823487 (http://curl.haxx.se/bug/view.cgi?id=1823487) pointed
out that SFTP requests didn't use persistent connections. Neither did SCP


@@ -10864,7 +10864,7 @@ Version 6.2
the configure script to leave SSL alone. The previous functionality has
been retained. Troy Engel helped test this new one.
-Version 6.1
+Version 6.1 (October 17 1999)
Daniel (17 October 1999):
- I ifdef'ed or commented all the zlib stuff in the sources and configure
@@ -10939,7 +10939,7 @@ Version 6.1beta
- Made the -F option allow stdin when specifying files. By using '-' instead
of file name, the data will be read from stdin.
-Version 6.0
+Version 6.0 (September 14 1999)
Daniel (13 September 1999)
- Added -X/--http-request <request> to enable any HTTP command to be sent.
@@ -11201,7 +11201,7 @@ Version 5.9.1
with form posting where the variable shouldn't have any content, as in curl
-F "form=" www.site.com. It was now fixed.
-Version 5.9
+Version 5.9 (May 22 1999)
Daniel (22 May 1999)
- I've got a bug report from Aaron Scarisbrick in which he states he has some
@@ -11939,7 +11939,7 @@ Version 4.8.1
had nothing but header. Appearantly Solaris deals with negative sizes in
fwrite() calls a lot better than Linux does... =B-]
-Version 4.8
+Version 4.8 (Aug 31, 1998)
Daniel Stenberg
- Continue FTP file transfer. -c is the switch. Note that you need to
specify a file name if you wanna resume a download (you can't resume a


@@ -1,6 +1,6 @@
COPYRIGHT AND PERMISSION NOTICE
-Copyright (c) 1996 - 2007, Daniel Stenberg, <daniel@haxx.se>.
+Copyright (c) 1996 - 2008, Daniel Stenberg, <daniel@haxx.se>.
All rights reserved.


@@ -5,7 +5,7 @@
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
-# Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
+# Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
@@ -128,6 +128,12 @@ vc:
cd ..\src
nmake /f Makefile.$(VC)
vc-x64:
cd lib
MACHINE=x64 nmake /f Makefile.$(VC) cfg=release
cd ..\src
MACHINE=x64 nmake /f Makefile.$(VC)
vc-zlib:
cd lib
nmake /f Makefile.$(VC) cfg=release-zlib

View File

@@ -1,16 +1,25 @@
Curl and libcurl 7.18.0
Public curl releases: 103
Command line options: 126
curl_easy_setopt() options: 150
Public functions in libcurl: 56
Public web site mirrors: 43
Known libcurl bindings: 36
Contributors: 597
This release includes the following changes:
o --data-urlencode
o CURLOPT_PROXY_TRANSFER_MODE
o --no-keepalive - now curl does connections with keep-alive enabled by
default
o --socks4a added (proxy type CURLPROXY_SOCKS4A for libcurl)
o --socks5-hostname added (CURLPROXY_SOCKS5_HOSTNAME for libcurl)
o curl_easy_pause()
o CURLOPT_SEEKFUNCTION and CURLOPT_SEEKDATA
o --keepalive-time
o curl --help output was re-ordered
This release includes the following bugfixes:
@@ -26,6 +35,34 @@ This release includes the following bugfixes:
o SSL connections with NSS done with the multi-interface
o setting a share no longer activates cookies
o Negotiate now works on auth and proxy simultanouesly
o support HTTP Digest nonces up to 1023 letters
o resumed ftp upload no longer requires the read callback to return full
buffers
o no longer default-appends ;type= on FTP URLs thru proxies
o SSL session id caching
o POST with callback over proxy requiring NTLM or Digest
o Expect: 100-continue flaw on re-used connection with POSTs
o build fix for MSVC 9.0 (VS2008)
o Windows curl builds failed file truncation when retry downloading
o SSL session ID cache memory leak
o bad connection re-use check with environment variable-activated proxy use
o --libcurl now generates a return statement as well
o socklen_t is no longer used in the public includes
o time zone offsets from -1400 to +1400 are now accepted by the date parser
o allows more spaces in WWW/Proxy-Authenticate: headers
o curl-config --libs skips /usr/lib64
o range support for file:// transfers
o libcurl hang with huge POST request and request-body read from callback
o removed extra newlines from many error messages
o improved pipelining
o improved OOM handling for data url encoded HTTP POSTs when read from a file
o test suite could pick wrong tool(s) if more than one existed in the PATH
o curl_multi_fdset() failed to return socket while doing CONNECT over proxy
o curl_multi_remove_handle() on a handle that is in used for a pipeline now
break that pipeline
o CURLOPT_COOKIELIST memory leaks
o progress meter/callback during http proxy CONNECT requests
o auth for http proxy when the proxy closes connection after first response
This release includes the following known bugs:
@@ -35,16 +72,23 @@ Other curl-related news:
o TclCurl 7.17.1 => http://personal1.iddeo.es/andresgarci/tclcurl/english/
o Ruby Curl::Multi 0.1 => http://curl-multi.rubyforge.org/
o curl-java 0.2.1 => http://curl.haxx.se/libcurl/java/
New curl mirrors:
o http://curl.gominet.net/ is new web mirror in Vizcaya, Portugal
o http://curl.gominet.net/ is new mirror in Vizcaya, Portugal
o http://curl.very-clever.com/ is a new mirror in Nuremberg, Germany
This release would not have looked like this without help, code, reports and
advice from friends like these:
Dan Fandrich, Gisle Vanem, Toby Peterson, Yang Tse, Daniel Black,
Robin Johnson, Michal Marek, Ates Goral, Andres Garcia, Rob Crittenden,
Emil Romanus, Alessandro Vesely, Ray Pekowski, Spacen Jasset, Andrew Moise,
Gilles Blanc, David Wright, Vikram Saxena, Mateusz Loskot, Gary Maxwell,
Dmitry Kurochkin, Mohun Biswas, Richard Atterer, Maxim Perenesenko,
Daniel Egger, Jeff Johnson, Nikitinskit Dmitriy, Georg Lippitsch, Eric Landes,
Joe Malicki, Nathan Coulter, Lau Hang Kin, Judson Bishop, Igor Franchuk,
Kevin Reed
Thanks! (and sorry if I forgot to mention someone)

View File

@@ -1,6 +1,4 @@
To be addressed before 7.18.0 (planned release: January 2008)
=============================
107 - resolve the type= thing for FTP URLs over HTTP proxies
108 -
118 -

View File

@@ -1,13 +1,27 @@
Changelog for the c-ares project
Version 1.5.1 (Nov 20, 2007)
* December 11 2007 (Gisle Vanem)
* November 20 2007 (Daniel Stenberg)
- Added another sample application, acountry.c, which converts IPv4
address(es) and/or host name(s) to country name and country code.
This uses the service of the DNSBL at countries.nerd.dk.
* December 3 2007 (Daniel Stenberg)
- Brad Spencer fixed the configure script to assume that there's no
/dev/urandom when built cross-compiled as then the script cannot check for
it.
- Erik Kline cleaned up ares_gethostbyaddr.c:next_lookup() somewhat
Version 1.5.1 (Nov 21, 2007)
* November 21 2007 (Daniel Stenberg)
- Robin Cornelius pointed out that ares_llist.h was missing in the release
archive for 1.5.0
Version 1.5.0 (Nov 21, 2007)
* October 2 2007 (Daniel Stenberg)
@@ -58,7 +72,7 @@ Version 1.5.0 (Nov 20, 2007)
* July 14 2007 (Daniel Stenberg)
- Vlad Dinulescu fixed two outstanding valgrind reports:
1. In ares_query.c , in find_query_by_id we compare q->qid (which is a short
int variable) with qid, which is declared as an int variable. Moreover,
DNS_HEADER_SET_QID is used to set the value of qid, but DNS_HEADER_SET_QID
@@ -144,7 +158,7 @@ Version 1.4.0 (June 8, 2007)
- Brad House added ares_save_options() and ares_destroy_options() that can be
used to keep options for later re-usal when ares_init_options() is used.
Problem: Calling ares_init() for each lookup can be unnecessarily resource
intensive. On windows, it must LoadLibrary() or search the registry
on each call to ares_init(). On unix, it must read and parse

View File

@@ -8,11 +8,21 @@ MSVCFILES = vc/adig/adig.dep vc/adig/adig.dsp vc/vc.dsw vc/ahost/ahost.dep \
vc/ahost/ahost.dsp vc/areslib/areslib.dep vc/areslib/areslib.dsp \
vc/areslib/areslib.dsw
if DEBUGBUILD
PROGS =
else
PROGS = ahost adig acountry
endif
noinst_PROGRAMS =$(PROGS)
# adig and ahost are just sample programs and thus not mentioned with the
# regular sources and headers
EXTRA_DIST = CHANGES README.cares Makefile.inc adig.c ahost.c $(man_MANS) \
$(MSVCFILES) AUTHORS config-win32.h RELEASE-NOTES libcares.pc.in
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = libcares.pc
VER=-version-info 2:0:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
@@ -61,6 +71,15 @@ libcares_ladir = $(includedir)
# what headers to install on 'make install':
libcares_la_HEADERS = ares.h ares_version.h ares_dns.h
ahost_SOURCES = ahost.c ares_getopt.c
ahost_LDADD = $(top_builddir)/$(lib_LTLIBRARIES)
adig_SOURCES = adig.c ares_getopt.c
adig_LDADD = $(top_builddir)/$(lib_LTLIBRARIES)
acountry_SOURCES = acountry.c ares_getopt.c
acountry_LDADD = $(top_builddir)/$(lib_LTLIBRARIES)
# Make files named *.dist replace the file without .dist extension
dist-hook:
find $(distdir) -name "*.dist" -exec rm {} \;

View File

@@ -22,7 +22,7 @@ CFLAGS += -DWATT32 -DHAVE_AF_INET6 -DHAVE_PF_INET6 -DHAVE_FIONBIO \
-DRECV_TYPE_ARG1='int' -DRECV_TYPE_ARG2='void*' \
-DRECV_TYPE_ARG3='int' -DRECV_TYPE_ARG4='int' \
-DRECV_TYPE_RETV='int' -DHAVE_STRUCT_TIMEVAL \
-Dselect=select_s -UHAVE_CONFIG_H
-Dselect=select_s -Dsocklen_t=int -UHAVE_CONFIG_H
LDFLAGS = -s
@@ -49,7 +49,7 @@ EX_LIBS += $(WATT32_ROOT)/lib/libwatt.a
OBJECTS = $(addprefix $(OBJ_DIR)/, $(CSOURCES:.c=.o))
all: $(OBJ_DIR) libcares.a ahost.exe adig.exe
all: $(OBJ_DIR) libcares.a ahost.exe adig.exe acountry.exe
@echo Welcome to c-ares.
libcares.a: $(OBJECTS)
@@ -61,11 +61,14 @@ ahost.exe: ahost.c $(OBJ_DIR)/ares_getopt.o $(OBJ_HACK)
adig.exe: adig.c $(OBJ_DIR)/ares_getopt.o $(OBJ_HACK)
$(CC) $(LDFLAGS) $(CFLAGS) -o $@ $^ $(EX_LIBS)
acountry.exe: acountry.c $(OBJ_DIR)/ares_getopt.o $(OBJ_HACK)
$(CC) $(LDFLAGS) $(CFLAGS) -o $@ $^ $(EX_LIBS)
clean:
rm -f $(OBJECTS) libcares.a
vclean realclean: clean
rm -f ahost.exe adig.exe depend.dj
rm -f ahost.exe adig.exe acountry.exe depend.dj
- rmdir $(OBJ_DIR)
-include depend.dj

View File

@@ -32,7 +32,7 @@ $(LIB): $(OBJLIB)
all: $(LIB) demos
demos: adig.exe ahost.exe
demos: adig.exe ahost.exe acountry.exe
tags:
etags *.[ch]
@@ -61,7 +61,7 @@ install:
done)
clean:
$(RM) ares_getopt.o $(OBJLIB) $(LIB) adig.exe ahost.exe
$(RM) ares_getopt.o $(OBJLIB) $(LIB) adig.exe ahost.exe acountry.exe
distclean: clean
$(RM) config.cache config.log config.status Makefile

View File

@@ -18,10 +18,10 @@ INSTDIR = ../curl-$(LIBCURL_VERSION_STR)-bin-nw
endif
# Edit the vars below to change NLM target settings.
TARGETS = adig.nlm ahost.nlm
TARGETS = adig.nlm ahost.nlm acountry.nlm
LTARGET = libcares.$(LIBEXT)
VERSION = $(LIBCARES_VERSION)
COPYR = Copyright (C) 1996 - 2007, Daniel Stenberg, <daniel@haxx.se>
COPYR = Copyright (C) 1996 - 2008, Daniel Stenberg, <daniel@haxx.se>
DESCR = cURL $(subst .def,,$(notdir $@)) $(LIBCARES_VERSION_STR) - http://curl.haxx.se
MTSAFE = YES
STACK = 64000

View File

@@ -76,7 +76,7 @@ OBJECTS = $(OBJ_DIR)\ares_fds.obj \
$(OBJ_DIR)\inet_net_pton.obj \
$(OBJ_DIR)\inet_ntop.obj
all: $(OBJ_DIR) cares.lib cares.dll cares_imp.lib ahost.exe adig.exe
all: $(OBJ_DIR) cares.lib cares.dll cares_imp.lib ahost.exe adig.exe acountry.exe
@echo Welcome to c-ares library and examples
$(OBJ_DIR):
@@ -131,6 +131,9 @@ ahost.exe: $(OBJ_DIR) $(OBJ_DIR)\ahost.obj $(OBJ_DIR)\ares_getopt.obj cares_imp.
adig.exe: $(OBJ_DIR) $(OBJ_DIR)\adig.obj $(OBJ_DIR)\ares_getopt.obj cares_imp.lib
link $(LDFLAGS) -out:$@ $(OBJ_DIR)\adig.obj $(OBJ_DIR)\ares_getopt.obj cares_imp.lib $(EX_LIBS)
acountry.exe: $(OBJ_DIR) $(OBJ_DIR)\acountry.obj $(OBJ_DIR)\ares_getopt.obj cares_imp.lib
link $(LDFLAGS) -out:$@ $(OBJ_DIR)\acountry.obj $(OBJ_DIR)\ares_getopt.obj cares_imp.lib $(EX_LIBS)
clean:
- del $(OBJ_DIR)\*.obj *.ilk *.pdb *.pbt *.pbi *.pbo *._xe *.map

View File

@@ -1,9 +1,9 @@
This is what's new and changed in the c-ares 1.5.1 release:
This is what's new and changed in the c-ares 1.5.2 release:
o added the ares_llist.h header that was missing in the 1.5.0 release
o
Thanks go to these friendly people for their efforts and contributions:
Robin Cornelius
Have fun!

ares/acountry.c Normal file
View File

@@ -0,0 +1,589 @@
/*
* $Id$
*
* IP-address/hostname to country converter.
*
* Problem; you want to know where IP a.b.c.d is located.
*
* Use ares_gethostbyname ("d.c.b.a.zz.countries.nerd.dk")
* and get the CNAME (host->h_name). Result will be:
* CNAME = zz<CC>.countries.nerd.dk with address 127.0.x.y (ver 1) or
* CNAME = <a.b.c.d>.zz.countries.nerd.dk with address 127.0.x.y (ver 2)
*
* The 2 letter country code in <CC> and the ISO-3166 country
* number in x.y (number = x*256 + y). Version 2 of the protocol is missing
* the <CC> number.
*
* Ref: http://countries.nerd.dk/more.html
*
* Written by G. Vanem <gvanem@broadpark.no> 2006, 2007
*
* NB! This program may not be big-endian aware.
*
* Permission to use, copy, modify, and distribute this
* software and its documentation for any purpose and without
* fee is hereby granted, provided that the above copyright
* notice appear in all copies and that both that copyright
* notice and this permission notice appear in supporting
* documentation, and that the name of M.I.T. not be used in
* advertising or publicity pertaining to distribution of the
* software without specific, written prior permission.
* M.I.T. makes no representations about the suitability of
* this software for any purpose. It is provided "as is"
* without express or implied warranty.
*/
#include "setup.h"
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <ctype.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#if defined(WIN32)
#include <winsock.h>
#else
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netdb.h>
#endif
#include "ares.h"
#include "ares_getopt.h"
#include "inet_net_pton.h"
#include "inet_ntop.h"
static const char *usage = "acountry [-vh?] {host|addr} ...\n";
static const char nerd_fmt[] = "%u.%u.%u.%u.zz.countries.nerd.dk";
static const char *nerd_ver1 = nerd_fmt + 14;
static const char *nerd_ver2 = nerd_fmt + 11;
static int verbose = 0;
#define TRACE(fmt) do { \
if (verbose > 0) \
printf fmt ; \
} while (0)
static void wait_ares(ares_channel channel);
static void callback(void *arg, int status, int timeouts, struct hostent *host);
static void callback2(void *arg, int status, int timeouts, struct hostent *host);
static void find_country_from_cname(const char *cname, struct in_addr addr);
static void Abort(const char *fmt, ...)
{
va_list args;
va_start(args, fmt);
vfprintf(stderr, fmt, args);
va_end(args);
exit(1);
}
int main(int argc, char **argv)
{
ares_channel channel;
int ch, status;
#ifdef WIN32
WORD wVersionRequested = MAKEWORD(USE_WINSOCK,USE_WINSOCK);
WSADATA wsaData;
WSAStartup(wVersionRequested, &wsaData);
#endif
while ((ch = ares_getopt(argc, argv, "dvh?")) != -1)
switch (ch)
{
case 'd':
#ifdef WATT32
dbug_init();
#endif
break;
case 'v':
verbose++;
break;
case 'h':
case '?':
default:
Abort(usage);
}
argc -= optind;
argv += optind;
if (argc < 1)
Abort(usage);
status = ares_init(&channel);
if (status != ARES_SUCCESS)
{
fprintf(stderr, "ares_init: %s\n", ares_strerror(status));
return 1;
}
/* Initiate the queries, one per command-line argument. */
for ( ; *argv; argv++)
{
struct in_addr addr;
char buf[100];
/* If this fails, assume '*argv' is a host-name that
* must be resolved first
*/
if (ares_inet_pton(AF_INET, *argv, &addr) != 1)
{
ares_gethostbyname(channel, *argv, AF_INET, callback2, &addr);
wait_ares(channel);
if (addr.s_addr == INADDR_NONE)
{
printf("Failed to lookup %s\n", *argv);
continue;
}
}
sprintf(buf, nerd_fmt,
(unsigned int)(addr.s_addr >> 24),
(unsigned int)((addr.s_addr >> 16) & 255),
(unsigned int)((addr.s_addr >> 8) & 255),
(unsigned int)(addr.s_addr & 255));
TRACE(("Looking up %s...", buf));
fflush(stdout);
ares_gethostbyname(channel, buf, AF_INET, callback, buf);
}
wait_ares(channel);
ares_destroy(channel);
#ifdef WIN32
WSACleanup();
#endif
return 0;
}
/*
* Wait for the queries to complete.
*/
static void wait_ares(ares_channel channel)
{
while (1)
{
struct timeval *tvp, tv;
fd_set read_fds, write_fds;
int nfds;
FD_ZERO(&read_fds);
FD_ZERO(&write_fds);
nfds = ares_fds(channel, &read_fds, &write_fds);
if (nfds == 0)
break;
tvp = ares_timeout(channel, NULL, &tv);
select(nfds, &read_fds, &write_fds, NULL, tvp);
ares_process(channel, &read_fds, &write_fds);
}
}
/*
* This is the callback used when we have the IP-address of interest.
* Extract the CNAME and figure out the country-code from it.
*/
static void callback(void *arg, int status, int timeouts, struct hostent *host)
{
const char *name = (const char*)arg;
const char *cname;
char buf[20];
(void)timeouts;
if (!host || status != ARES_SUCCESS)
{
printf("Failed to lookup %s: %s\n", name, ares_strerror(status));
return;
}
TRACE(("\nFound address %s, name %s\n",
ares_inet_ntop(AF_INET,(const char*)host->h_addr,buf,sizeof(buf)),
host->h_name));
cname = host->h_name; /* CNAME gets put here */
if (!cname)
printf("Failed to get CNAME for %s\n", name);
else
find_country_from_cname(cname, *(struct in_addr*)host->h_addr);
}
/*
* This is the callback used to obtain the IP-address of the host of interest.
*/
static void callback2(void *arg, int status, int timeouts, struct hostent *host)
{
struct in_addr *addr = (struct in_addr*) arg;
(void)timeouts;
if (!host || status != ARES_SUCCESS)
memset(addr, INADDR_NONE, sizeof(*addr));
else
memcpy(addr, host->h_addr, sizeof(*addr));
}
struct search_list {
int country_number; /* ISO-3166 country number */
char short_name[3]; /* A2 short country code */
const char *long_name; /* normal country name */
};
const struct search_list *list_lookup(int number, const struct search_list *list, int num)
{
while (num > 0 && list->long_name)
{
if (list->country_number == number)
return (list);
num--;
list++;
}
return (NULL);
}
/*
* Ref: ftp://ftp.ripe.net/iso3166-countrycodes.txt
*/
static const struct search_list country_list[] = {
{ 4, "af", "Afghanistan" },
{ 248, "ax", "<EFBFBD>land Island" },
{ 8, "al", "Albania" },
{ 12, "dz", "Algeria" },
{ 16, "as", "American Samoa" },
{ 20, "ad", "Andorra" },
{ 24, "ao", "Angola" },
{ 660, "ai", "Anguilla" },
{ 10, "aq", "Antarctica" },
{ 28, "ag", "Antigua & Barbuda" },
{ 32, "ar", "Argentina" },
{ 51, "am", "Armenia" },
{ 533, "aw", "Aruba" },
{ 36, "au", "Australia" },
{ 40, "at", "Austria" },
{ 31, "az", "Azerbaijan" },
{ 44, "bs", "Bahamas" },
{ 48, "bh", "Bahrain" },
{ 50, "bd", "Bangladesh" },
{ 52, "bb", "Barbados" },
{ 112, "by", "Belarus" },
{ 56, "be", "Belgium" },
{ 84, "bz", "Belize" },
{ 204, "bj", "Benin" },
{ 60, "bm", "Bermuda" },
{ 64, "bt", "Bhutan" },
{ 68, "bo", "Bolivia" },
{ 70, "ba", "Bosnia & Herzegowina" },
{ 72, "bw", "Botswana" },
{ 74, "bv", "Bouvet Island" },
{ 76, "br", "Brazil" },
{ 86, "io", "British Indian Ocean Territory" },
{ 96, "bn", "Brunei Darussalam" },
{ 100, "bg", "Bulgaria" },
{ 854, "bf", "Burkina Faso" },
{ 108, "bi", "Burundi" },
{ 116, "kh", "Cambodia" },
{ 120, "cm", "Cameroon" },
{ 124, "ca", "Canada" },
{ 132, "cv", "Cape Verde" },
{ 136, "ky", "Cayman Islands" },
{ 140, "cf", "Central African Republic" },
{ 148, "td", "Chad" },
{ 152, "cl", "Chile" },
{ 156, "cn", "China" },
{ 162, "cx", "Christmas Island" },
{ 166, "cc", "Cocos Islands" },
{ 170, "co", "Colombia" },
{ 174, "km", "Comoros" },
{ 178, "cg", "Congo" },
{ 180, "cd", "Congo" },
{ 184, "ck", "Cook Islands" },
{ 188, "cr", "Costa Rica" },
{ 384, "ci", "Cote d'Ivoire" },
{ 191, "hr", "Croatia" },
{ 192, "cu", "Cuba" },
{ 196, "cy", "Cyprus" },
{ 203, "cz", "Czech Republic" },
{ 208, "dk", "Denmark" },
{ 262, "dj", "Djibouti" },
{ 212, "dm", "Dominica" },
{ 214, "do", "Dominican Republic" },
{ 218, "ec", "Ecuador" },
{ 818, "eg", "Egypt" },
{ 222, "sv", "El Salvador" },
{ 226, "gq", "Equatorial Guinea" },
{ 232, "er", "Eritrea" },
{ 233, "ee", "Estonia" },
{ 231, "et", "Ethiopia" },
{ 238, "fk", "Falkland Islands" },
{ 234, "fo", "Faroe Islands" },
{ 242, "fj", "Fiji" },
{ 246, "fi", "Finland" },
{ 250, "fr", "France" },
{ 249, "fx", "France, Metropolitan" },
{ 254, "gf", "French Guiana" },
{ 258, "pf", "French Polynesia" },
{ 260, "tf", "French Southern Territories" },
{ 266, "ga", "Gabon" },
{ 270, "gm", "Gambia" },
{ 268, "ge", "Georgia" },
{ 276, "de", "Germany" },
{ 288, "gh", "Ghana" },
{ 292, "gi", "Gibraltar" },
{ 300, "gr", "Greece" },
{ 304, "gl", "Greenland" },
{ 308, "gd", "Grenada" },
{ 312, "gp", "Guadeloupe" },
{ 316, "gu", "Guam" },
{ 320, "gt", "Guatemala" },
{ 324, "gn", "Guinea" },
{ 624, "gw", "Guinea-Bissau" },
{ 328, "gy", "Guyana" },
{ 332, "ht", "Haiti" },
{ 334, "hm", "Heard & Mc Donald Islands" },
{ 336, "va", "Vatican City" },
{ 340, "hn", "Honduras" },
{ 344, "hk", "Hong kong" },
{ 348, "hu", "Hungary" },
{ 352, "is", "Iceland" },
{ 356, "in", "India" },
{ 360, "id", "Indonesia" },
{ 364, "ir", "Iran" },
{ 368, "iq", "Iraq" },
{ 372, "ie", "Ireland" },
{ 376, "il", "Israel" },
{ 380, "it", "Italy" },
{ 388, "jm", "Jamaica" },
{ 392, "jp", "Japan" },
{ 400, "jo", "Jordan" },
{ 398, "kz", "Kazakhstan" },
{ 404, "ke", "Kenya" },
{ 296, "ki", "Kiribati" },
{ 408, "kp", "Korea (north)" },
{ 410, "kr", "Korea (south)" },
{ 414, "kw", "Kuwait" },
{ 417, "kg", "Kyrgyzstan" },
{ 418, "la", "Laos" },
{ 428, "lv", "Latvia" },
{ 422, "lb", "Lebanon" },
{ 426, "ls", "Lesotho" },
{ 430, "lr", "Liberia" },
{ 434, "ly", "Libya" },
{ 438, "li", "Liechtenstein" },
{ 440, "lt", "Lithuania" },
{ 442, "lu", "Luxembourg" },
{ 446, "mo", "Macao" },
{ 807, "mk", "Macedonia" },
{ 450, "mg", "Madagascar" },
{ 454, "mw", "Malawi" },
{ 458, "my", "Malaysia" },
{ 462, "mv", "Maldives" },
{ 466, "ml", "Mali" },
{ 470, "mt", "Malta" },
{ 584, "mh", "Marshall Islands" },
{ 474, "mq", "Martinique" },
{ 478, "mr", "Mauritania" },
{ 480, "mu", "Mauritius" },
{ 175, "yt", "Mayotte" },
{ 484, "mx", "Mexico" },
{ 583, "fm", "Micronesia" },
{ 498, "md", "Moldova" },
{ 492, "mc", "Monaco" },
{ 496, "mn", "Mongolia" },
{ 500, "ms", "Montserrat" },
{ 504, "ma", "Morocco" },
{ 508, "mz", "Mozambique" },
{ 104, "mm", "Myanmar" },
{ 516, "na", "Namibia" },
{ 520, "nr", "Nauru" },
{ 524, "np", "Nepal" },
{ 528, "nl", "Netherlands" },
{ 530, "an", "Netherlands Antilles" },
{ 540, "nc", "New Caledonia" },
{ 554, "nz", "New Zealand" },
{ 558, "ni", "Nicaragua" },
{ 562, "ne", "Niger" },
{ 566, "ng", "Nigeria" },
{ 570, "nu", "Niue" },
{ 574, "nf", "Norfolk Island" },
{ 580, "mp", "Northern Mariana Islands" },
{ 578, "no", "Norway" },
{ 512, "om", "Oman" },
{ 586, "pk", "Pakistan" },
{ 585, "pw", "Palau" },
{ 275, "ps", "Palestinian Territory" },
{ 591, "pa", "Panama" },
{ 598, "pg", "Papua New Guinea" },
{ 600, "py", "Paraguay" },
{ 604, "pe", "Peru" },
{ 608, "ph", "Philippines" },
{ 612, "pn", "Pitcairn" },
{ 616, "pl", "Poland" },
{ 620, "pt", "Portugal" },
{ 630, "pr", "Puerto Rico" },
{ 634, "qa", "Qatar" },
{ 638, "re", "Reunion" },
{ 642, "ro", "Romania" },
{ 643, "ru", "Russia" },
{ 646, "rw", "Rwanda" },
{ 659, "kn", "Saint Kitts & Nevis" },
{ 662, "lc", "Saint Lucia" },
{ 670, "vc", "Saint Vincent" },
{ 882, "ws", "Samoa" },
{ 674, "sm", "San Marino" },
{ 678, "st", "Sao Tome & Principe" },
{ 682, "sa", "Saudi Arabia" },
{ 686, "sn", "Senegal" },
{ 891, "cs", "Serbia and Montenegro" },
{ 690, "sc", "Seychelles" },
{ 694, "sl", "Sierra Leone" },
{ 702, "sg", "Singapore" },
{ 703, "sk", "Slovakia" },
{ 705, "si", "Slovenia" },
{ 90, "sb", "Solomon Islands" },
{ 706, "so", "Somalia" },
{ 710, "za", "South Africa" },
{ 239, "gs", "South Georgia" },
{ 724, "es", "Spain" },
{ 144, "lk", "Sri Lanka" },
{ 654, "sh", "St. Helena" },
{ 666, "pm", "St. Pierre & Miquelon" },
{ 736, "sd", "Sudan" },
{ 740, "sr", "Suriname" },
{ 744, "sj", "Svalbard & Jan Mayen Islands" },
{ 748, "sz", "Swaziland" },
{ 752, "se", "Sweden" },
{ 756, "ch", "Switzerland" },
{ 760, "sy", "Syrian Arab Republic" },
{ 626, "tl", "Timor-Leste" },
{ 158, "tw", "Taiwan" },
{ 762, "tj", "Tajikistan" },
{ 834, "tz", "Tanzania" },
{ 764, "th", "Thailand" },
{ 768, "tg", "Togo" },
{ 772, "tk", "Tokelau" },
{ 776, "to", "Tonga" },
{ 780, "tt", "Trinidad & Tobago" },
{ 788, "tn", "Tunisia" },
{ 792, "tr", "Turkey" },
{ 795, "tm", "Turkmenistan" },
{ 796, "tc", "Turks & Caicos Islands" },
{ 798, "tv", "Tuvalu" },
{ 800, "ug", "Uganda" },
{ 804, "ua", "Ukraine" },
{ 784, "ae", "United Arab Emirates" },
{ 826, "gb", "United Kingdom" },
{ 840, "us", "United States" },
{ 581, "um", "United States Minor Outlying Islands" },
{ 858, "uy", "Uruguay" },
{ 860, "uz", "Uzbekistan" },
{ 548, "vu", "Vanuatu" },
{ 862, "ve", "Venezuela" },
{ 704, "vn", "Vietnam" },
{ 92, "vg", "Virgin Islands (British)" },
{ 850, "vi", "Virgin Islands (US)" },
{ 876, "wf", "Wallis & Futuna Islands" },
{ 732, "eh", "Western Sahara" },
{ 887, "ye", "Yemen" },
{ 894, "zm", "Zambia" },
{ 716, "zw", "Zimbabwe" }
};
/*
* Check if start of 'str' is simply an IPv4 address.
*/
#define BYTE_OK(x) ((x) >= 0 && (x) <= 255)
static int is_addr(char *str, char **end)
{
int a0, a1, a2, a3, num, rc = 0, length = 0;
if ((num = sscanf(str,"%3d.%3d.%3d.%3d%n",&a0,&a1,&a2,&a3,&length)) == 4 &&
BYTE_OK(a0) && BYTE_OK(a1) && BYTE_OK(a2) && BYTE_OK(a3) &&
length >= (3+4))
{
rc = 1;
*end = str + length;
}
return rc;
}
/*
* Find the country-code and name from the CNAME. E.g.:
* version 1: CNAME = zzno.countries.nerd.dk with address 127.0.2.66
* yields ccode_A" = "no" and cnumber 578 (2.66).
* version 2: CNAME = <a.b.c.d>.zz.countries.nerd.dk with address 127.0.2.66
* yields cnumber 578 (2.66). ccode_A is "";
*/
static void find_country_from_cname(const char *cname, struct in_addr addr)
{
const struct search_list *country;
char ccode_A2[3], *ccopy, *dot_4;
int cnumber, z0, z1, ver_1, ver_2;
u_long ip;
ip = ntohl(addr.s_addr);
z0 = tolower(cname[0]);
z1 = tolower(cname[1]);
ccopy = strdup(cname);
ver_1 = (z0 == 'z' && z1 == 'z' && !strcasecmp(cname+4,nerd_ver1));
ver_2 = (is_addr(ccopy,&dot_4) && !strcasecmp(dot_4,nerd_ver2));
if (ver_1)
{
const char *dot = strchr(cname, '.');
if ((z0 != 'z' && z1 != 'z') || dot != cname+4)
{
printf("Unexpected CNAME %s (ver_1)\n", cname);
return;
}
}
else if (ver_2)
{
z0 = tolower(dot_4[1]);
z1 = tolower(dot_4[2]);
if (z0 != 'z' && z1 != 'z')
{
printf("Unexpected CNAME %s (ver_2)\n", cname);
return;
}
}
else
{
printf("Unexpected CNAME %s (ver?)\n", cname);
return;
}
if (ver_1)
{
ccode_A2[0] = tolower(cname[2]);
ccode_A2[1] = tolower(cname[3]);
ccode_A2[2] = '\0';
}
else
ccode_A2[0] = '\0';
cnumber = ip & 0xFFFF;
TRACE(("Found country-code `%s', number %d\n",
ver_1 ? ccode_A2 : "<n/a>", cnumber));
country = list_lookup(cnumber, country_list,
sizeof(country_list) / sizeof(country_list[0]));
if (!country)
printf("Name for country-number %d not found.\n", cnumber);
else
{
if (ver_1 && *(unsigned short*)&country->short_name != *(unsigned*)&ccode_A2)
printf("short-name mismatch; %s vs %s\n", country->short_name, ccode_A2);
printf("%s (%s), number %d.\n",
country->long_name, country->short_name, cnumber);
}
free(ccopy);
}

View File

@@ -245,7 +245,7 @@ int ares_expand_name(const unsigned char *encoded, const unsigned char *abuf,
int ares_expand_string(const unsigned char *encoded, const unsigned char *abuf,
int alen, unsigned char **s, long *enclen);
#ifndef s6_addr
#if !defined(HAVE_STRUCT_IN6_ADDR) && !defined(s6_addr)
struct in6_addr {
union {
unsigned char _S6_u8[16];

View File

@@ -58,6 +58,7 @@ static void addr_callback(void *arg, int status, int timeouts,
static void end_aquery(struct addr_query *aquery, int status,
struct hostent *host);
static int file_lookup(union ares_addr *addr, int family, struct hostent **host);
static void ptr_rr_name(char *name, int family, union ares_addr *addr);
void ares_gethostbyaddr(ares_channel channel, const void *addr, int addrlen,
int family, ares_host_callback callback, void *arg)
@@ -101,48 +102,26 @@ static void next_lookup(struct addr_query *aquery)
{
const char *p;
char name[128];
int a1, a2, a3, a4, status;
int status;
struct hostent *host;
unsigned long addr;
for (p = aquery->remaining_lookups; *p; p++)
{
switch (*p)
{
case 'b':
if (aquery->family == AF_INET)
{
addr = ntohl(aquery->addr.addr4.s_addr);
a1 = (int)((addr >> 24) & 0xff);
a2 = (int)((addr >> 16) & 0xff);
a3 = (int)((addr >> 8) & 0xff);
a4 = (int)(addr & 0xff);
sprintf(name, "%d.%d.%d.%d.in-addr.arpa", a4, a3, a2, a1);
aquery->remaining_lookups = p + 1;
ares_query(aquery->channel, name, C_IN, T_PTR, addr_callback,
aquery);
}
else
{
unsigned char *bytes;
bytes = (unsigned char *)&aquery->addr.addr6.s6_addr;
sprintf(name, "%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.ip6.arpa",
bytes[15]&0xf, bytes[15] >> 4, bytes[14]&0xf, bytes[14] >> 4,
bytes[13]&0xf, bytes[13] >> 4, bytes[12]&0xf, bytes[12] >> 4,
bytes[11]&0xf, bytes[11] >> 4, bytes[10]&0xf, bytes[10] >> 4,
bytes[9]&0xf, bytes[9] >> 4, bytes[8]&0xf, bytes[8] >> 4,
bytes[7]&0xf, bytes[7] >> 4, bytes[6]&0xf, bytes[6] >> 4,
bytes[5]&0xf, bytes[5] >> 4, bytes[4]&0xf, bytes[4] >> 4,
bytes[3]&0xf, bytes[3] >> 4, bytes[2]&0xf, bytes[2] >> 4,
bytes[1]&0xf, bytes[1] >> 4, bytes[0]&0xf, bytes[0] >> 4);
aquery->remaining_lookups = p + 1;
ares_query(aquery->channel, name, C_IN, T_PTR, addr_callback,
aquery);
}
ptr_rr_name(name, aquery->family, &aquery->addr);
aquery->remaining_lookups = p + 1;
ares_query(aquery->channel, name, C_IN, T_PTR, addr_callback,
aquery);
return;
case 'f':
status = file_lookup(&aquery->addr, aquery->family, &host);
if (status != ARES_ENOTFOUND)
/* this status check below previously checked for !ARES_ENOTFOUND,
but we should not assume that this single error code is the one
that can occur, as that is in fact no longer the case */
if (status == ARES_SUCCESS)
{
end_aquery(aquery, status, host);
return;
@@ -264,3 +243,31 @@ static int file_lookup(union ares_addr *addr, int family, struct hostent **host)
*host = NULL;
return status;
}
static void ptr_rr_name(char *name, int family, union ares_addr *addr)
{
if (family == AF_INET)
{
unsigned long laddr = ntohl(addr->addr4.s_addr);
int a1 = (int)((laddr >> 24) & 0xff);
int a2 = (int)((laddr >> 16) & 0xff);
int a3 = (int)((laddr >> 8) & 0xff);
int a4 = (int)(laddr & 0xff);
sprintf(name, "%d.%d.%d.%d.in-addr.arpa", a4, a3, a2, a1);
}
else
{
unsigned char *bytes = (unsigned char *)&addr->addr6.s6_addr;
sprintf(name,
"%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x."
"%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.ip6.arpa",
bytes[15]&0xf, bytes[15] >> 4, bytes[14]&0xf, bytes[14] >> 4,
bytes[13]&0xf, bytes[13] >> 4, bytes[12]&0xf, bytes[12] >> 4,
bytes[11]&0xf, bytes[11] >> 4, bytes[10]&0xf, bytes[10] >> 4,
bytes[9]&0xf, bytes[9] >> 4, bytes[8]&0xf, bytes[8] >> 4,
bytes[7]&0xf, bytes[7] >> 4, bytes[6]&0xf, bytes[6] >> 4,
bytes[5]&0xf, bytes[5] >> 4, bytes[4]&0xf, bytes[4] >> 4,
bytes[3]&0xf, bytes[3] >> 4, bytes[2]&0xf, bytes[2] >> 4,
bytes[1]&0xf, bytes[1] >> 4, bytes[0]&0xf, bytes[0] >> 4);
}
}

View File

@@ -138,7 +138,11 @@ static void next_lookup(struct host_query *hquery, int status_code)
case 'f':
/* Host file lookup */
status = file_lookup(hquery->name, hquery->family, &host);
if (status != ARES_ENOTFOUND)
/* this status check below previously checked for !ARES_ENOTFOUND,
but we should not assume that this single error code is the one
that can occur, as that is in fact no longer the case */
if (status == ARES_SUCCESS)
{
end_hquery(hquery, status, host);
return;

View File

@@ -22,7 +22,7 @@
#define PF_INET6 AF_INET6
#endif
#ifndef s6_addr
#if !defined(HAVE_STRUCT_IN6_ADDR) && !defined(s6_addr)
struct in6_addr {
union {
unsigned char _S6_u8[16];
@@ -43,7 +43,7 @@ struct sockaddr_in6
#endif
#ifndef HAVE_STRUCT_ADDRINFO
struct addrinfo
struct addrinfo
{
int ai_flags;
int ai_family;

View File

@@ -5,11 +5,11 @@
#define ARES_VERSION_MAJOR 1
#define ARES_VERSION_MINOR 5
#define ARES_VERSION_PATCH 1
#define ARES_VERSION_PATCH 2
#define ARES_VERSION ((ARES_VERSION_MAJOR<<16)|\
(ARES_VERSION_MINOR<<8)|\
(ARES_VERSION_PATCH))
#define ARES_VERSION_STR "1.5.1-CVS"
#define ARES_VERSION_STR "1.5.2-CVS"
#ifdef __cplusplus
extern "C" {

View File

@@ -169,6 +169,20 @@
#define _CRT_NONSTDC_NO_DEPRECATE 1
#endif
/* VS2008 does not support Windows build targets prior to WinXP, */
/* so, if no build target has been defined we will target WinXP. */
#if defined(_MSC_VER) && (_MSC_VER >= 1500)
# ifndef _WIN32_WINNT
# define _WIN32_WINNT 0x0501
# endif
# ifndef WINVER
# define WINVER 0x0501
# endif
# if (_WIN32_WINNT < 0x0501) || (WINVER < 0x0501)
# error VS2008 does not support Windows build targets prior to WinXP
# endif
#endif
/* ---------------------------------------------------------------- */
/* IPV6 COMPATIBILITY */
/* ---------------------------------------------------------------- */

View File

@@ -46,6 +46,8 @@ AC_HELP_STRING([--disable-debug],[Disable debug options]),
dnl when doing the debug stuff, use static library only
AC_DISABLE_SHARED
debugbuild="yes"
dnl the entire --enable-debug is a hack that lives and runs on top of
dnl libcurl stuff so this BUILDING_LIBCURL is not THAT much uglier
AC_DEFINE(BUILDING_LIBCURL, 1, [when building as static part of libcurl])
@@ -70,6 +72,7 @@ AC_HELP_STRING([--disable-debug],[Disable debug options]),
esac ],
AC_MSG_RESULT(no)
)
AM_CONDITIONAL(DEBUGBUILD, test x$debugbuild = xyes)
dnl skip libtool C++ and Fortran compiler checks
m4_ifdef([AC_PROG_CXX], [m4_undefine([AC_PROG_CXX])])
@@ -258,9 +261,6 @@ fi
dnl socket lib?
AC_CHECK_FUNC(connect, , [ AC_CHECK_LIB(socket, connect) ])
dnl dl lib?
AC_CHECK_FUNC(dlclose, , [ AC_CHECK_LIB(dl, dlopen) ])
AC_MSG_CHECKING([whether to use libgcc])
AC_ARG_ENABLE(libgcc,
AC_HELP_STRING([--enable-libgcc],[use libgcc when linking]),
@@ -831,8 +831,15 @@ AC_HELP_STRING([--with-random=FILE],
[read randomness from FILE (default=/dev/urandom)]),
[ RANDOM_FILE="$withval" ],
[
dnl Check for random device
AC_CHECK_FILE("/dev/urandom", [ RANDOM_FILE="/dev/urandom"] )
dnl Check for random device. If we're cross compiling, we can't
dnl check, and it's better to assume it doesn't exist than it is
dnl to fail on AC_CHECK_FILE or later.
if test "$cross_compiling" = "no"; then
AC_CHECK_FILE("/dev/urandom", [ RANDOM_FILE="/dev/urandom"] )
else
AC_MSG_WARN([cannot check for /dev/urandom while cross compiling; assuming none])
fi
]
)
if test -n "$RANDOM_FILE" && test X"$RANDOM_FILE" != Xno ; then

View File

@@ -12,7 +12,7 @@ includedir=@includedir@
Name: c-ares
URL: http://daniel.haxx.se/projects/c-ares/
Description: asynchronous DNS lookup library
Version: @VERSION@
Requires:
Requires.private:

View File

@@ -5,7 +5,7 @@
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
# Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
@@ -24,11 +24,11 @@ dnl Process this file with autoconf to produce a configure script.
AC_PREREQ(2.57)
dnl We don't know the version number "staticly" so we use a dash here
dnl We don't know the version number "statically" so we use a dash here
AC_INIT(curl, [-], [a suitable curl mailing list => http://curl.haxx.se/mail/])
dnl configure script copyright
AC_COPYRIGHT([Copyright (c) 1998 - 2006 Daniel Stenberg, <daniel@haxx.se>
AC_COPYRIGHT([Copyright (c) 1998 - 2008 Daniel Stenberg, <daniel@haxx.se>
This configure script may be copied, distributed and modified under the
terms of the curl license; see COPYING for more details])
@@ -323,10 +323,29 @@ AC_HELP_STRING([--disable-ldap],[Disable LDAP support]),
AC_DEFINE(CURL_DISABLE_LDAP, 1, [to disable LDAP])
AC_SUBST(CURL_DISABLE_LDAP, [1])
;;
*) AC_MSG_RESULT(yes)
*)
case $host in
*-*-cygwin*)
# Force no ldap. config/build process is broken for cygwin
AC_DEFINE(CURL_DISABLE_LDAP, 1, [to disable LDAP])
AC_SUBST(CURL_DISABLE_LDAP, [1])
AC_MSG_RESULT(no)
;;
*)
AC_MSG_RESULT(yes)
esac
;;
esac ],
AC_MSG_RESULT(yes)
esac ],[
case $host in
*-*-cygwin*)
# Force no ldap. config/build process is broken for cygwin
AC_DEFINE(CURL_DISABLE_LDAP, 1, [to disable LDAP])
AC_SUBST(CURL_DISABLE_LDAP, [1])
AC_MSG_RESULT(no)
;;
*)
AC_MSG_RESULT(yes)
esac ]
)
AC_MSG_CHECKING([whether to support ldaps])
AC_ARG_ENABLE(ldaps,
@@ -903,6 +922,9 @@ dnl **********************************************************************
dnl Check for GSS-API libraries
dnl **********************************************************************
dnl check for gss stuff in the /usr as default
GSSAPI_ROOT="/usr"
AC_ARG_WITH(gssapi-includes,
AC_HELP_STRING([--with-gssapi-includes=DIR],
[Specify location of GSSAPI header]),
@@ -923,6 +945,10 @@ AC_ARG_WITH(gssapi,
GSSAPI_ROOT="$withval"
if test x"$GSSAPI_ROOT" != xno; then
want_gss="yes"
if test x"$GSSAPI_ROOT" = xyes; then
dnl if yes, then use default root
GSSAPI_ROOT="/usr"
fi
fi
])
@@ -934,11 +960,15 @@ if test x"$want_gss" = xyes; then
if test -z "$GSSAPI_INCS"; then
if test -f "$GSSAPI_ROOT/bin/krb5-config"; then
GSSAPI_INCS=`$GSSAPI_ROOT/bin/krb5-config --cflags gssapi`
GSSAPI_LIBS=`$GSSAPI_ROOT/bin/krb5-config --libs gssapi`
elif test "$GSSAPI_ROOT" != "yes"; then
GSSAPI_INCS="-I$GSSAPI_ROOT/include"
GSSAPI_LIBS="-lgssapi"
fi
fi
CPPFLAGS="$CPPFLAGS $GSSAPI_INCS"
LIBS="$LIBS $GSSAPI_LIBS"
AC_CHECK_HEADER(gss.h,
[
@@ -1520,7 +1550,7 @@ if test "$OPENSSL_ENABLED" != "1" -a "$GNUTLS_ENABLED" != "1"; then
dnl Check for functionPK11_CreateGenericObject
dnl this is needed for using the PEM PKCS#11 module
AC_CHECK_LIB(nss3, PK11_CreateGenericObject-d,
AC_CHECK_LIB(nss3, PK11_CreateGenericObject,
[
AC_DEFINE(HAVE_PK11_CREATEGENERICOBJECT, 1, [if you have the function PK11_CreateGenericObject])
AC_SUBST(HAVE_PK11_CREATEGENERICOBJECT, [1])
@@ -1790,7 +1820,7 @@ if test x$cross_compiling != xyes; then
)
fi
else
dnl and for crosscompilings
dnl and for crosscompiling
AC_CHECK_FUNCS(gmtime_r)
fi
@@ -1835,6 +1865,7 @@ AC_CHECK_HEADERS(
utime.h \
sys/utime.h \
sys/poll.h \
poll.h \
sys/resource.h \
libgen.h \
locale.h \
@@ -1882,6 +1913,7 @@ AC_CHECK_SIZEOF(curl_off_t, ,[
AC_CHECK_SIZEOF(size_t)
AC_CHECK_SIZEOF(long)
AC_CHECK_SIZEOF(time_t)
AC_CHECK_SIZEOF(off_t)
AC_CHECK_TYPE(long long,
[AC_DEFINE(HAVE_LONGLONG, 1, [if your compiler supports long long])]
@@ -2096,6 +2128,8 @@ if test "$disable_poll" = "no"; then
AC_RUN_IFELSE([
#ifdef HAVE_SYS_POLL_H
#include <sys/poll.h>
#elif defined(HAVE_POLL_H)
#include <poll.h>
#endif
int main(void)

View File

@@ -6,7 +6,7 @@
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 2001 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
# Copyright (C) 2001 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
@@ -189,7 +189,7 @@ while test $# -gt 0; do
;;
--libs)
if test "X@libdir@" != "X/usr/lib"; then
if test "X@libdir@" != "X/usr/lib" -a "X@libdir@" != "X/usr/lib64"; then
CURLLIBDIR="-L@libdir@ "
else
CURLLIBDIR=""

View File

@@ -72,7 +72,7 @@ glib/GTK+
Java
Maintained by Vic Hanson
Maintained by [blank]
http://curl.haxx.se/libcurl/java/
Lisp

View File

@@ -1,4 +1,4 @@
Updated: Dec 10, 2007 (http://curl.haxx.se/docs/faq.html)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -83,6 +83,7 @@ FAQ
5.10 How do I prevent libcurl from writing the response to stdout?
5.11 How do I make libcurl not receive the whole HTTP response?
5.12 Can I make libcurl fake or hide my real IP address?
5.13 How do I stop an ongoing transfer?
6. License Issues
6.1 I have a GPL program, can I use the libcurl library?
@@ -214,8 +215,7 @@ FAQ
improvements and have them inserted in the main sources (of course on the
condition that developers agree on that the fixes are good).
The full list of all contributors is found in the docs/THANKS file.
curl is developed by a community, with Daniel at the wheel.
@@ -1033,6 +1033,18 @@ FAQ
that makes you see and use a different IP address locally than what the
remote server will see you coming from.
5.13 How do I stop an ongoing transfer?
There are several ways, but none of them are instant. There is no function
you can call from another thread or similar that will stop it immediately.
Instead you need to make sure that one of the callbacks you use returns an
appropriate value that will stop the transfer.
Suitable callbacks that you can do this with include the progress callback,
the read callback and the write callback.
If you're using the multi interface, you also stop a transfer by removing
the particular easy handle from the multi stack.
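A minimal sketch of the progress-callback approach described above (URL is a
placeholder; error checking omitted):

  #include <curl/curl.h>

  static int stop_flag; /* set to 1 from elsewhere in the application */

  static int progress_cb(void *clientp, double dltotal, double dlnow,
                         double ultotal, double ulnow)
  {
    (void)clientp; (void)dltotal; (void)dlnow; (void)ultotal; (void)ulnow;
    /* a non-zero return makes libcurl abort the transfer and return
       CURLE_ABORTED_BY_CALLBACK */
    return stop_flag ? 1 : 0;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big-file");
      curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
      curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, progress_cb);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }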
6. License Issues

View File

@@ -188,7 +188,7 @@ Win32
environment variables, for example:
set ZLIB_PATH=c:\zlib-1.2.3
set OPENSSL_PATH=c:\openssl-0.9.8e
set OPENSSL_PATH=c:\openssl-0.9.8g
set LIBSSH2_PATH=c:\libssh2-0.17
ATTENTION: if you want to build with libssh2 support you have to use latest
@@ -257,7 +257,7 @@ Win32
Before running nmake define the OPENSSL_PATH environment variable with
the root/base directory of OpenSSL, for example:
set OPENSSL_PATH=c:\openssl-0.9.8e
set OPENSSL_PATH=c:\openssl-0.9.8g
Then run 'nmake vc-ssl' or 'nmake vc-ssl-dll' in curl's root
directory. 'nmake vc-ssl' will create a libcurl static and dynamic
@@ -521,7 +521,7 @@ NetWare
http://www.gknw.net/development/ossl/netware/
for CLIB-based builds OpenSSL needs to be patched to build with BSD
sockets (currently only a winsock-based CLIB build is supported):
http://www.gknw.net/development/ossl/netware/patches/v_0.9.8e/openssl-0.9.8e.diff
http://www.gknw.net/development/ossl/netware/patches/v_0.9.8g/openssl-0.9.8g.diff
- optional SSH2 sources (version 0.17 or later);
Set a search path to your compiler, linker and tools; on Linux make
@@ -808,10 +808,12 @@ PORTS
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k Linux
- m68k uClinux
- m68k OpenBSD
- m88k dg-dgux5.4R3.00
- s390 Linux
- XScale/PXA250 Linux 2.4
- Nios II uClinux
Useful URLs
===========

View File

@@ -97,7 +97,9 @@ Library
... analyzes the URL, it separates the different components and connects to
the remote host. This may involve using a proxy and/or using SSL. The
Curl_resolv() function in lib/hostip.c is used for looking up host names
(it does then use the proper underlying method, which may vary between
platforms and builds).
When Curl_connect is done, we are connected to the remote site. Then it is
time to tell the server to get a document/file. Curl_do() arranges this.
@@ -122,17 +124,20 @@ Library
Curl_Transfer() function (in lib/transfer.c) to setup the transfer and
returns.
If this DO function fails and the connection is being re-used, libcurl will
then close this connection, setup a new connection and re-issue the DO
request on that. This is because there is no way to be perfectly sure that
we have discovered a dead connection before the DO function and thus we
might wrongly be re-using a connection that was closed by the remote peer.
Some time during the DO function, the Curl_setup_transfer() function must
be called with some basic info about the upcoming transfer: what socket(s)
to read/write and the expected file transfer sizes (if known).
o Transfer()
Curl_perform() then calls Transfer() in lib/transfer.c that performs the
entire file transfer.
During transfer, the progress functions in lib/progress.c are called at a
frequent interval (or at the user's choice, a specified callback might get
@@ -236,9 +241,8 @@ Library
URL encoding and decoding, called escaping and unescaping in the source code,
is found in lib/escape.c.
While transferring data in Transfer() a few functions might get used.
curl_getdate() in lib/parsedate.c is for HTTP date comparisons (and more).
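A small usage sketch of the public curl_getdate() entry point (the date string
is just an example):

  #include <stdio.h>
  #include <time.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* returns the time as seconds since the epoch, or -1 on parse failure */
    time_t t = curl_getdate("Sun, 06 Nov 1994 08:49:37 GMT", NULL);
    if(t != (time_t)-1)
      printf("parsed: %ld\n", (long)t);
    return 0;
  }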
lib/getenv.c offers curl_getenv() which is for reading environment variables
in a neat platform independent way. That's used in the client, but also in
@@ -254,10 +258,6 @@ Library
A function named curl_version() that returns the full curl version string is
found in lib/version.c.
If authentication is requested but no password is given, a getpass_r() clone
exists in lib/getpass.c. libcurl offers a custom callback that can be used
instead of this, but it doesn't change much to us.
Persistent Connections
======================
@@ -269,9 +269,11 @@ Persistent Connections
all the options etc that the library-user may choose.
o The 'SessionHandle' struct holds the "connection cache" (an array of
pointers to 'connectdata' structs). There's one connectdata struct
allocated for each connection that libcurl knows about. Note that when you
use the multi interface, the multi handle will hold the connection cache
and not the particular easy handle. This is of course to allow all easy
handles in a multi stack to share and re-use connections.
o This enables the 'curl handle' to be reused on subsequent transfers.
o When we are about to perform a transfer with curl_easy_perform(), we first
check for an already existing connection in the cache that we can use,
otherwise we create a new one and add to the cache. If the cache is full
@@ -281,11 +283,46 @@ Persistent Connections
o When the transfer operation is complete, we try to leave the connection
open. Particular options may tell us not to, and protocols may signal
closure on connections and then we don't keep it open of course.
o When curl_easy_cleanup() is called, we close all still opened connections,
unless of course the multi interface "owns" the connections.
You do realize that the curl handle must be re-used in order for the
persistent connections to work.
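A short sketch of what that re-use looks like in practice (URLs are
placeholders): two transfers on the same easy handle to the same host will
normally re-use the connection kept open after the first one.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/first");
      curl_easy_perform(curl);

      /* same handle, same host: the second transfer can re-use the
         connection left open by the first */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/second");
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }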
multi interface/non-blocking
============================
We make an effort to provide a non-blocking interface to the library, the
multi interface. To make that interface work as well as possible, no
low-level function within libcurl may be written to work in a blocking
manner.
One of the primary reasons we introduced c-ares support was to allow the name
resolve phase to be perfectly non-blocking as well.
The ultimate goal is to provide the easy interface simply by wrapping the
multi interface functions and thus treat everything internally as the multi
interface is the single interface we have.
The FTP and the SFTP/SCP protocols are thus perfect examples of how we adapt
and adjust the code to allow non-blocking operations even on multi-stage
protocols. The DICT, TELNET and TFTP are crappy examples and they are subject
for rewrite in the future to better fit the libcurl protocol family.
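As a rough illustration of that goal, this is approximately what "easy as a
wrapper around multi" looks like when expressed with the public multi API (a
simplified sketch for POSIX systems, not the actual planned implementation):

  #include <sys/select.h>
  #include <curl/curl.h>

  /* drive a single easy handle to completion using only multi calls */
  static void perform_via_multi(CURL *easy)
  {
    CURLM *multi = curl_multi_init();
    int running = 0;

    curl_multi_add_handle(multi, easy);
    do {
      fd_set rd, wr, ex;
      int maxfd = -1;
      struct timeval tv = { 1, 0 }; /* fallback timeout */

      while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
        ;
      if(!running)
        break;

      FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
      curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
      if(maxfd >= 0)
        select(maxfd + 1, &rd, &wr, &ex, &tv);
    } while(running);

    curl_multi_remove_handle(multi, easy);
    curl_multi_cleanup(multi);
  }

  int main(void)
  {
    CURL *easy = curl_easy_init();
    if(easy) {
      curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
      perform_via_multi(easy);
      curl_easy_cleanup(easy);
    }
    return 0;
  }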
SSL libraries
=============
Originally libcurl supported SSLeay for SSL/TLS transports, but that was then
extended to its successor OpenSSL but has since also been extended to several
other SSL/TLS libraries and we expect and hope to further extend the support
in future libcurl versions.
To deal with this internally in the best way possible, we have a generic SSL
function API as provided by the sslgen.[ch] system, and they are the only SSL
functions we must use from within libcurl. sslgen is then crafted to use the
appropriate lower-level function calls to whatever SSL library that is in
use.
Library Symbols
===============
@@ -309,6 +346,13 @@ Return Codes and Informationals
them. They are best used when revealing information that isn't otherwise
obvious.
API/ABI
=======
We make an effort to not export or show internals or how internals work, as
that makes it easier to keep a solid API/ABI over time. See docs/libcurl/ABI
for our promise to users.
Client
======

View File

@@ -3,6 +3,24 @@ join in and help us correct one or more of these! Also be sure to check the
changelog of the current development status, as one or more of these problems
may have been fixed since this was written!
52. Gautam Kachroo's issue that identifies a problem with the multi interface
where a connection can be re-used without actually being properly
SSL-negotiated:
http://curl.haxx.se/mail/lib-2008-01/0277.html
51. Kevin Reed's reported problem with a proxy when doing CONNECT and it
wants NTLM and closes the connection after the initial CONNECT response:
http://curl.haxx.se/bug/view.cgi?id=1879375
50. Curl_done() and pipelining aren't totally cool together:
http://curl.haxx.se/mail/lib-2008-01/0330.html
49. If using --retry and the transfer times out (possibly due to using -m or
-y/-Y) the next attempt doesn't resume the transfer properly from what was
downloaded in the previous attempt but will truncate and restart at the
original position where it was at before the previous failed attempt. See
http://curl.haxx.se/mail/lib-2008-01/0080.html
48. If the CONNECT response headers are larger than BUFSIZE (16KB) when the
connection is meant to be kept alive (like for NTLM proxy auth), the
function will return prematurely and will confuse the rest of the HTTP
@@ -43,23 +61,9 @@ may have been fixed since this was written!
Also see #12. According to bug #1556528, even the SOCKS5 connect code does
not do it right: http://curl.haxx.se/bug/view.cgi?id=1556528,
33. Doing multi-pass HTTP authentication on a non-default port does not work.
This happens because the multi-pass code abuses the redirect following code
for doing multiple requests, and when we following redirects to an absolute
URL we must use the newly specified port and not the one specified in the
original URL. A proper fix to this would need to separate the negotiation
"redirect" from an actual redirect.
32. (At least on Windows) If libcurl is built with c-ares and there's no DNS
server configured in the system, the ares_init() call fails and thus
curl_easy_init() fails as well. This causes weird effects for people who use
numerical IP addresses only.
31. "curl-config --libs" will include details set in LDFLAGS when configure is
run that might be needed only for building libcurl. Further, curl-config
--cflags suffers from the same effects with CFLAGS/CPPFLAGS.
30. You need to use -g to the command line tool in order to use RFC2732-style
IPv6 numerical addresses in URLs.

docs/TODO
View File

@@ -4,346 +4,569 @@
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
TODO
Things that could be nice to do in the future
Things to do in project cURL. Please tell us what you think, contribute and
send us patches that improve things! Also check the http://curl.haxx.se/dev
web section for various technical development notes.
send us patches that improve things!
All bugs documented in the KNOWN_BUGS document are subject for fixing!
LIBCURL
1. libcurl
1.1 Zero-copy interface
1.2 More data sharing
1.3 struct lifreq
1.4 Get IP address
1.5 c-ares ipv6
1.6 configure-based info in public headers
* Introduce another callback interface for upload/download that makes one
less copy of data and thus a faster operation.
[http://curl.haxx.se/dev/no_copy_callbacks.txt]
2. libcurl - multi interface
2.1 More non-blocking
2.2 Pause transfers
2.3 Remove easy interface internally
2.4 Avoid having to remove/readd handles
* More data sharing. curl_share_* functions already exist and work, and they
can be extended to share more. For example, enable sharing of the ares
channel and the connection cache.
3. Documentation
3.1 More and better
* Introduce a new error code indicating authentication problems (for proxy
CONNECT error 407 for example). This cannot be an error code, we must not
return informational stuff as errors, consider a new info returned by
curl_easy_getinfo() http://curl.haxx.se/bug/view.cgi?id=845941
4. FTP
4.1 PRET
4.2 Alter passive/active on failure and retry
4.3 Earlier bad letter detection
4.4 REST for large files
4.5 FTP proxy support
4.6 PORT port range
4.7 ASCII support
* Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
To support ipv6 interface addresses properly.
5. HTTP
5.1 Other HTTP versions with CONNECT
5.2 Better persistence for HTTP 1.0
5.3 support FF3 sqlite cookie files
* Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
GET_FTP_DATA_IP. Return a string with the used IP. Suggested by Alan.
6. TELNET
6.1 ditch stdin
6.2 ditch telnet-specific select
* Add option that changes the interval in which the progress callback is
called at most.
7. SSL
7.1 Disable specific versions
7.2 Provide mutex locking API
7.3 dumpcert
7.4 Evaluate SSL patches
7.5 Cache OpenSSL contexts
7.6 Export session ids
7.7 Provide callback for cert verification
7.8 Support other SSL libraries
7.9 Support SRP on the TLS layer
7.10 improve configure --with-ssl
* Make libcurl built with c-ares use c-ares' IPv6 abilities. They weren't
present when we first added c-ares support but they have been added since!
When this is done and works, we can actually start considering making c-ares
powered libcurl the default build (which of course would require that we'd
bundle the c-ares source code in the libcurl source code releases).
8. GnuTLS
8.1 Make NTLM work without OpenSSL functions
8.2 SSL engine stuff
8.3 SRP
8.4 non-blocking
8.5 check connection
* Make the curl/*.h headers include the proper system includes based on what
was present at the time when configure was run. Currently, the sys/select.h
header is for example included by curl/multi.h only on specific platforms
we know MUST have it. This is error-prone. We therefore want the header
files to adapt to configure results. Those results must be stored in a new
header and they must use a curl name space, i.e. not have a HAVE_* prefix (as
that would risk colliding with other apps that use libcurl and that run
configure).
9. LDAP
9.1 ditch ldap-specific select
Work on this has been started but hasn't been finished, and the initial
patch and some details are found here:
http://curl.haxx.se/mail/lib-2006-12/0084.html
10. New protocols
10.1 RTSP
10.2 RSYNC
10.3 RTMP
LIBCURL - multi interface
11. Client
11.1 Content-Disposition
11.2 sync
11.3 glob posts
11.4 prevent file overwriting
11.5 ftp wildcard download
11.6 simultaneous parallel transfers
11.7 provide formpost headers
11.8 url-specific options
* Make sure we don't ever loop because non-blocking sockets return
EWOULDBLOCK or similar. The GnuTLS connection etc.
12. Build
12.1 roffit
* Make transfers treated more carefully. We need a way to tell libcurl we
have data to write, as the current system expects us to upload data each
time the socket is writable and there is no way to say that we want to
upload data soon just not right now, without that aborting the upload. The
opposite situation should be possible as well, that we tell libcurl we're
ready to accept read data. Today libcurl feeds the data as soon as it is
available for reading, no matter what.
13. Test suite
13.1 SSL tunnel
13.2 nicer lacking perl message
13.3 more protocols supported
13.4 more platforms supported
* Make curl_easy_perform() a wrapper-function that simply creates a multi
handle, adds the easy handle to it, runs curl_multi_perform() until the
transfer is done, then detach the easy handle, destroy the multi handle and
return the easy handle's return code. This will thus make everything
internally use and assume the multi interface. The select()-loop should use
curl_multi_socket().
14. Next SONAME bump
14.1 http-style HEAD output for ftp
14.2 combine error codes
14.3 extend CURLOPT_SOCKOPTFUNCTION prototype
* curl_multi_handle_control() - this can control the easy handle (while)
added to a multi handle in various ways:
o RESTART, unconditionally restart this easy handle's transfer from the
start, re-init the state
o RESTART_COMPLETED, restart this easy handle's transfer but only if the
existing transfer has already completed and it is in a "finished state".
o STOP, just stop this transfer and consider it completed
o PAUSE?
o RESUME?
15. Next major release
15.1 cleanup return codes
15.2 remove obsolete defines
15.3 size_t
15.4 remove several functions
15.5 remove CURLOPT_FAILONERROR
15.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
DOCUMENTATION
==============================================================================
* More and better
1. libcurl
FTP
1.1 Zero-copy interface
* PRET is a command that primarily "drftpd" supports, which could be useful
when using libcurl against such a server. It is a non-standard and a rather
oddly designed command, but...
http://curl.haxx.se/bug/feature.cgi?id=1729967
Introduce another callback interface for upload/download that makes one less
copy of data and thus a faster operation.
[http://curl.haxx.se/dev/no_copy_callbacks.txt]
* When trying to connect passively to a server which only supports active
connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
connection. There could be a way to fallback to an active connection (and
vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793
1.2 More data sharing
* Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in
the process to avoid doing a resolve and connect in vain.
curl_share_* functions already exist and work, and they can be extended to
share more. For example, enable sharing of the ares channel and the
connection cache.
* REST fix for servers not behaving well on >2GB requests. This should fail
if the server doesn't set the pointer to the requested index. The tricky
(impossible?) part is to figure out if the server did the right thing or
not.
1.3 struct lifreq
* Support the most common FTP proxies, Philip Newton provided a list
allegedly from ncftp:
http://curl.haxx.se/mail/archive-2003-04/0126.html
Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
SIOCGIFADDR on newer Solaris versions as they claim the latter is obsolete.
To support ipv6 interface addresses for network interfaces properly.
1.4 Get IP address
Add the following to curl_easy_getinfo(): GET_HTTP_IP, GET_FTP_IP and
GET_FTP_DATA_IP. Return a string with the used IP.
1.5 c-ares ipv6
Make libcurl built with c-ares use c-ares' IPv6 abilities. They weren't
present when we first added c-ares support but they have been added since!
When this is done and works, we can actually start considering making c-ares
powered libcurl the default build (which of course would require that we'd
bundle the c-ares source code in the libcurl source code releases).
1.6 configure-based info in public headers
Make the public headers include the proper system includes based on what was
present at the time when configure was run. Currently, the sys/select.h
header is for example included by curl/multi.h only on specific platforms we
know MUST have it. This is error-prone. We therefore want the header files to
adapt to configure results. Those results must be stored in a new header and
they must use a curl name space, i.e. not a HAVE_* prefix (as that would risk
colliding with other apps that use libcurl and that also run configure).
* "Better" support for persistent connections over HTTP 1.0
http://curl.haxx.se/bug/feature.cgi?id=1089001
Work on this has been started but hasn't been finished, and the initial patch
and some details are found here:
http://curl.haxx.se/mail/lib-2006-12/0084.html
TELNET
The remaining problems to solve involve the platforms that can't run
configure.
2. libcurl - multi interface
2.1 More non-blocking
Make sure we don't ever loop because of non-blocking sockets return
EWOULDBLOCK or similar. The GnuTLS connection etc.
2.2 Pause transfers
Make transfers treated more carefully. We need a way to tell libcurl we have
data to write, as the current system expects us to upload data each time the
socket is writable and there is no way to say that we want to upload data
soon just not right now, without that aborting the upload. The opposite
situation should be possible as well, that we tell libcurl we're ready to
accept read data. Today libcurl feeds the data as soon as it is available for
reading, no matter what.
2.3 Remove easy interface internally
Make curl_easy_perform() a wrapper-function that simply creates a multi
handle, adds the easy handle to it, runs curl_multi_perform() until the
transfer is done, then detach the easy handle, destroy the multi handle and
return the easy handle's return code. This will thus make everything
internally use and assume the multi interface. The select()-loop should use
curl_multi_socket().
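To make the idea concrete, here is an editor-added sketch (not from this changeset) of how such a wrapper could be written on top of the existing public multi interface. The function name easy_perform_via_multi, the 1000 ms fallback timeout and the trimmed error handling are illustrative choices only.
/* editor's sketch: an easy_perform()-style wrapper built on the multi API */
#include <sys/select.h>
#include <curl/curl.h>
static CURLcode easy_perform_via_multi(CURL *easy)
{
  CURLM *multi = curl_multi_init();
  CURLMsg *msg;
  CURLcode result = CURLE_OK;
  int still_running = 0;
  int msgs_left;
  if(!multi)
    return CURLE_FAILED_INIT;
  curl_multi_add_handle(multi, easy);
  do {
    fd_set rd, wr, exc;
    int maxfd = -1;
    long timeout_ms = 1000;
    struct timeval tv;
    /* drive the transfer as far as it goes right now */
    while(curl_multi_perform(multi, &still_running) == CURLM_CALL_MULTI_PERFORM)
      ;
    if(!still_running)
      break;
    /* wait for socket activity, using libcurl's suggested timeout */
    FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&exc);
    curl_multi_fdset(multi, &rd, &wr, &exc, &maxfd);
    curl_multi_timeout(multi, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000;
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    select(maxfd + 1, &rd, &wr, &exc, &tv);
  } while(still_running);
  /* pick up the easy handle's own return code */
  while((msg = curl_multi_info_read(multi, &msgs_left)))
    if(msg->msg == CURLMSG_DONE && msg->easy_handle == easy)
      result = msg->data.result;
  curl_multi_remove_handle(multi, easy);
  curl_multi_cleanup(multi);
  return result;
}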
* "Look at SSL cafile - quick traces look to me like these are done on every
request as well, when they should only be necessary once per ssl context
(or once per handle)". The major improvement we can rather easily do is to
make sure we don't create and kill a new SSL "context" for every request,
but instead make one for every connection and re-use that SSL context in
the same style connections are re-used. It will make us use slightly more
memory but it will libcurl do less creations and deletions of SSL contexts.
2.4 Avoid having to remove/readd handles
* Add an interface to libcurl that enables "session IDs" to get
exported/imported. Cris Bailiff said: "OpenSSL has functions which can
serialise the current SSL state to a buffer of your choice, and
recover/reset the state from such a buffer at a later date - this is used
by mod_ssl for apache to implement and SSL session ID cache".
curl_multi_handle_control() - this can control the easy handle while it is
added to a multi handle, in various ways:
o RESTART, unconditionally restart this easy handle's transfer from the
start, re-init the state
o RESTART_COMPLETED, restart this easy handle's transfer but only if the
existing transfer has already completed and it is in a "finished state".
o STOP, just stop this transfer and consider it completed
o PAUSE?
o RESUME?
3. Documentation
3.1 More and better
Exactly
4. FTP
4.1 PRET
PRET is a command that primarily "drftpd" supports, which could be useful
when using libcurl against such a server. It is a non-standard and a rather
oddly designed command, but...
http://curl.haxx.se/bug/feature.cgi?id=1729967
4.2 Alter passive/active on failure and retry
When trying to connect passively to a server which only supports active
connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
connection. There could be a way to fallback to an active connection (and
vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793
4.3 Earlier bad letter detection
Make the detection of (bad) %0d and %0a codes in FTP url parts earlier in the
process to avoid doing a resolve and connect in vain.
4.4 REST for large files
REST fix for servers not behaving well on >2GB requests. This should fail if
the server doesn't set the pointer to the requested index. The tricky
(impossible?) part is to figure out if the server did the right thing or not.
4.5 FTP proxy support
Support the most common FTP proxies, Philip Newton provided a list allegedly
from ncftp. This is not a subject without debate, and is probably not really
suitable for libcurl. http://curl.haxx.se/mail/archive-2003-04/0126.html
4.6 PORT port range
Make CURLOPT_FTPPORT support an additional port number on the IP/if/name,
like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]".
http://curl.haxx.se/bug/feature.cgi?id=1505166
4.7 ASCII support
FTP ASCII transfers do not follow RFC959. They don't convert the data
accordingly.
* Since USERPWD always override the user and password specified in URLs, we
might need another way to specify user+password for anonymous ftp logins.
* The FTP code should get a way of returning errors that is known to still
have the control connection alive and sound. Currently, a returned error
from within ftp-functions does not tell if the control connection is still
OK to use or not. This causes libcurl to fail to re-use connections
slightly too often.
5. HTTP
5.1 Other HTTP versions with CONNECT
When doing CONNECT to a HTTP proxy, libcurl always uses HTTP/1.0. This has
never been reported as causing trouble to anyone, but should be considered to
use the HTTP version the user has chosen.
5.2 Better persistence for HTTP 1.0
"Better" support for persistent connections over HTTP 1.0
http://curl.haxx.se/bug/feature.cgi?id=1089001
Work on this has been started but hasn't been finished, and the initial patch
and some details are found here:
http://curl.haxx.se/mail/lib-2006-12/0084.html
5.3 support FF3 sqlite cookie files
Firefox 3 is changing from its former format to a sqlite database instead.
We should consider how (lib)curl can/should support this.
http://curl.haxx.se/bug/feature.cgi?id=1871388
6. TELNET
6.1 ditch stdin
Reading input (to send to the remote server) on stdin is a crappy solution for
library purposes. We need to invent a good way for the application to be able
to provide the data to send.
6.2 ditch telnet-specific select
Move the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface won't
work for telnet.
7. SSL
7.1 Disable specific versions
Provide an option that allows for disabling specific SSL versions, such as
SSLv2 http://curl.haxx.se/bug/feature.cgi?id=1767276
7.2 Provide mutex locking API
Provide a libcurl API for setting mutex callbacks in the underlying SSL
library, so that the same application code can use mutex-locking
independently of OpenSSL or GnuTLS being used.
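For reference, a rough editor-added sketch of the OpenSSL-specific locking setup an application has to do by itself today; this TODO item is about hiding it behind a libcurl call. The sketch assumes pthreads and the OpenSSL 0.9.x/1.0.x callback API, and omits the thread-id callback that some platforms also need.
#include <stdlib.h>
#include <pthread.h>
#include <openssl/crypto.h>
static pthread_mutex_t *ssl_locks;
static void ssl_lock_cb(int mode, int n, const char *file, int line)
{
  (void)file; (void)line;
  if(mode & CRYPTO_LOCK)
    pthread_mutex_lock(&ssl_locks[n]);
  else
    pthread_mutex_unlock(&ssl_locks[n]);
}
void setup_ssl_locking(void)
{
  int i;
  ssl_locks = malloc(CRYPTO_num_locks() * sizeof(pthread_mutex_t));
  for(i = 0; i < CRYPTO_num_locks(); i++)
    pthread_mutex_init(&ssl_locks[i], NULL);
  CRYPTO_set_locking_callback(ssl_lock_cb);
}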
7.3 dumpcert
Anton Fedorov's "dumpcert" patch:
http://curl.haxx.se/mail/lib-2004-03/0088.html
7.4 Evaluate SSL patches
Evaluate/apply Gertjan van Wingerde's SSL patches:
http://curl.haxx.se/mail/lib-2004-03/0087.html
7.5 Cache OpenSSL contexts
"Look at SSL cafile - quick traces look to me like these are done on every
request as well, when they should only be necessary once per ssl context (or
once per handle)". The major improvement we can rather easily do is to make
sure we don't create and kill a new SSL "context" for every request, but
instead make one for every connection and re-use that SSL context in the same
style connections are re-used. It will make us use slightly more memory but
it will let libcurl do fewer creations and deletions of SSL contexts.
7.6 Export session ids
Add an interface to libcurl that enables "session IDs" to get
exported/imported. Cris Bailiff said: "OpenSSL has functions which can
serialise the current SSL state to a buffer of your choice, and recover/reset
the state from such a buffer at a later date - this is used by mod_ssl for
apache to implement an SSL session ID cache".
7.7 Provide callback for cert verification
OpenSSL supports a callback for customised verification of the peer
certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were! (brought by Chris Clark)
7.8 Support other SSL libraries
Make curl's SSL layer capable of using other free SSL libraries. Such as
MatrixSSL (http://www.matrixssl.org/).
7.9 Support SRP on the TLS layer
Peter Sylvester's patch for SRP on the TLS layer. Awaits OpenSSL support for
this, no need to support this in libcurl before there's an OpenSSL release
that does it.
7.10 improve configure --with-ssl
make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
then NSS...
8. GnuTLS
8.1 Make NTLM work without OpenSSL functions
Get NTLM working using the functions provided by libgcrypt, since GnuTLS
already depends on that to function. Not strictly SSL/TLS related, but
hey... Another option is to get available DES and MD4 source code from the
cryptopp library. They are fine license-wise, but are C++.
8.2 SSL engine stuff
Is this even possible?
8.3 SRP
Work out a common method with Peter Sylvester's OpenSSL-patch for SRP on the
TLS to provide name and password. GnuTLS already supports it...
8.4 non-blocking
Fix the connection phase to be non-blocking when multi interface is used
8.5 check connection
Add a way to check if the connection seems to be alive, to correspond to the
SSL_peek() way we use with OpenSSL.
9. LDAP
9.1 ditch ldap-specific select
Look over the implementation. The looping will have to "go away" from the
lib/ldap.c source file and get moved to the main network code so that the
multi interface and friends will work for LDAP as well.
10. New protocols
10.1 RTSP
RFC2326 (protocol - very HTTP-like, also contains URL description)
10.2 RSYNC
There's no RFC for protocol nor URI/URL format. An implementation should
most probably use an existing rsync library, such as librsync.
10.3 RTMP
There exists a patch that claims to introduce this protocol:
http://osdir.com/ml/gnu.gnash.devel2/2006-11/msg00278.html, further details
in the feature-request: http://curl.haxx.se/bug/feature.cgi?id=1843469
11. Client
11.1 Content-Disposition
Add option that is similar to -O but that takes the output file name from the
Content-Disposition: header, and/or uses the local file name used in
redirections for the cases the server bounces the request further to a
different file (name): http://curl.haxx.se/bug/feature.cgi?id=1364676
11.2 sync
"curl --sync http://example.com/feed[1-100].rss" or
"curl --sync http://example.net/{index,calendar,history}.html"
Downloads a range or set of URLs using the remote name, but only if the
remote file is newer than the local file. A Last-Modified HTTP date header
should also be used to set the mod date on the downloaded file. (idea from
"Brianiac")
11.3 glob posts
Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
Requested by Dane Jensen and others. This is easily scripted though.
* ability to specify the classic computing suffixes on the range
specifications. For example, to download the first 500 Kilobytes of a file,
be able to specify the following for the -r option: "-r 0-500K" or for the
first 2 Megabytes of a file: "-r 0-2M". (Mark Smith suggested)
* --data-encode that URL encodes the data before posting
http://curl.haxx.se/mail/archive-2003-11/0091.html (Kevin Roth suggested)
11.4 prevent file overwriting
Add an option that prevents cURL from overwriting existing local files. When
used, and there already is an existing file with the target file name
(either -O or -o), a number should be appended (and increased if already
existing). So that index.html becomes first index.html.1 and then
index.html.2 etc. Jeff Pohlmeyer suggested.
11.5 ftp wildcard download
"curl ftp://site.com/*.txt"
11.6 simultaneous parallel transfers
The client could be told to use maximum N simultaneous parallel transfers and
then just make sure that happens. It should of course not make more than one
connection to the same remote host. This would require the client to use the
multi interface. http://curl.haxx.se/bug/feature.cgi?id=1558595
11.7 provide formpost headers
Extending the capabilities of the multipart formposting. How about leaving
the ';type=foo' syntax as it is and adding an extra tag (headers) which
works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
fil1.hdr contains extra headers like
Content-Type: text/plain; charset=KOI8-R"
Content-Transfer-Encoding: base64
X-User-Comment: Please don't use browser specific HTML code
which should override the program's reasonable defaults (text/plain,
8bit...). (Idea brought to us by kromJx)
11.8 url-specific options
Provide a way to make options bound to a specific URL among several on the
command line. Possibly by letting ':' separate options between URLs,
similar to this:
curl --data foo --url url.com : \
--url url2.com : \
--url url3.com --data foo3
(More details: http://curl.haxx.se/mail/archive-2004-07/0133.html)
The example would do a POST-GET-POST combination on a single command line.
12. Build
12.1 roffit
Consider extending 'roffit' to produce decent ASCII output, and use that
instead of (g)nroff when building src/hugehelp.c
13. Test suite
13.1 SSL tunnel
Make our own version of stunnel for simple port forwarding to enable HTTPS
and FTP-SSL tests without the stunnel dependency, and it could allow us to
provide test tools built with either OpenSSL or GnuTLS
13.2 nicer lacking perl message
If perl wasn't found by the configure script, don't attempt to run the tests
but explain something nice why it doesn't.
13.3 more protocols supported
Extend the test suite to include more protocols. The telnet could just do ftp
or http operations (for which we have test servers).
13.4 more platforms supported
Make the test suite work on more platforms. OpenBSD and Mac OS. Remove
fork()s and it should become even more portable.
14. Next SONAME bump
14.1 http-style HEAD output for ftp
#undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
from being output in NOBODY requests over ftp
14.2 combine error codes
Combine some of the error codes to remove duplicates. The original
numbering should not be changed, and the old identifiers would be
macroed to the new ones in a CURL_NO_OLDIES section to help with
backward compatibility.
Candidates for removal and their replacements:
CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND
CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND
CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR
CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT
CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT
CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL
CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND
CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
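One possible shape for that backward-compatibility section, sketched here purely for illustration (this is not the actual curl/curl.h):
#ifndef CURL_NO_OLDIES
/* the removed names would simply map onto their replacements */
#define CURLE_FILE_COULDNT_READ_FILE CURLE_REMOTE_FILE_NOT_FOUND
#define CURLE_FTP_COULDNT_RETR_FILE  CURLE_REMOTE_FILE_NOT_FOUND
#define CURLE_FTP_COULDNT_USE_REST   CURLE_RANGE_ERROR
#endif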
14.3 extend CURLOPT_SOCKOPTFUNCTION prototype
The current prototype only provides 'purpose' that tells what the
connection/socket is for, but not any protocol or similar. It makes it hard
for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
similar.
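For illustration, an editor-added sketch of a callback using today's prototype: 'purpose' is all the information it gets, which is exactly the limitation described. The SO_KEEPALIVE use and the POSIX socket API are assumptions of the sketch.
#include <sys/socket.h>
#include <curl/curl.h>
static int sockopt_cb(void *clientp, curl_socket_t curlfd, curlsocktype purpose)
{
  int keepalive = 1;
  (void)clientp;
  if(purpose == CURLSOCKTYPE_IPCXN)
    setsockopt(curlfd, SOL_SOCKET, SO_KEEPALIVE,
               (void *)&keepalive, sizeof(keepalive));
  return 0; /* nothing here tells TCP from UDP, or HTTP from FTP */
}
/* installed with: curl_easy_setopt(handle, CURLOPT_SOCKOPTFUNCTION, sockopt_cb); */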
15. Next major release
15.1 cleanup return codes
curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
CURLMcode. These should be changed to be the same.
15.2 remove obsolete defines
remove obsolete defines from curl/curl.h
15.3 size_t
make several functions use size_t instead of int in their APIs
15.4 remove several functions
remove the following functions from the public API:
curl_getenv
curl_mprintf (and variations)
curl_strequal
curl_strnequal
They will instead become curlx_ - alternatives. That makes the curl app
still capable of building with them from source.
15.5 remove CURLOPT_FAILONERROR
Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
internally. Let the app judge success or not for itself.
15.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
Remove support for a global DNS cache. Anything global is silly, and we
already offer the share interface for the same functionality but done
"right".

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
.\" * Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" * Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -21,7 +21,7 @@
.\" * $Id$
.\" **************************************************************************
.\"
.TH curl 1 "20 Nov 2007" "Curl 7.17.2" "Curl Manual"
.TH curl 1 "5 Jan 2008" "Curl 7.18.0" "Curl Manual"
.SH NAME
curl \- transfer a URL
.SH SYNOPSIS
@@ -82,7 +82,7 @@ specified on a single command line and cannot be used between separate curl
invokes.
.SH "PROGRESS METER"
curl normally displays a progress meter during operations, indicating amount
of transfered data, transfer speeds and estimated time left etc.
of transferred data, transfer speeds and estimated time left etc.
However, since curl displays data to the terminal by default, if you invoke
curl to do an operation and it is about to write data to the terminal, it
@@ -115,10 +115,10 @@ used.
.IP "--anyauth"
(HTTP) Tells curl to figure out authentication method by itself, and use the
most secure one the remote site claims it supports. This is done by first
doing a request and checking the response-headers, thus inducing an extra
network round-trip. This is used instead of setting a specific authentication
method, which you can do with \fI--basic\fP, \fI--digest\fP, \fI--ntlm\fP, and
\fI--negotiate\fP.
doing a request and checking the response-headers, thus possibly inducing an
extra network round-trip. This is used instead of setting a specific
authentication method, which you can do with \fI--basic\fP, \fI--digest\fP,
\fI--ntlm\fP, and \fI--negotiate\fP.
Note that using --anyauth is not recommended if you do uploads from stdin,
since it may require data to be sent twice and then the client must be able to
@@ -224,56 +224,62 @@ To create remote directories when using FTP or SFTP, try
If this option is used several times, the following occurrences make no
difference.
.IP "-d/--data <data>"
(HTTP) Sends the specified data in a POST request to the HTTP server, in a way
that can emulate as if a user has filled in a HTML form and pressed the submit
button. Note that the data is sent exactly as specified with no extra
processing (with all newlines cut off). The data is expected to be
\&"url-encoded". This will cause curl to pass the data to the server using the
content-type application/x-www-form-urlencoded. Compare to \fI-F/--form\fP. If
this option is used more than once on the same command line, the data pieces
specified will be merged together with a separating &-letter. Thus, using '-d
name=daniel -d skill=lousy' would generate a post chunk that looks like
\&'name=daniel&skill=lousy'.
(HTTP) Sends the specified data in a POST request to the HTTP server, in the
same way that a browser does when a user has filled in an HTML form and
presses the submit button. This will cause curl to pass the data to the server
using the content-type application/x-www-form-urlencoded. Compare to
\fI-F/--form\fP.
\fI-d/--data\fP is the same as \fI--data-ascii\fP. To post data purely binary,
you should instead use the \fI--data-binary\fP option. To URL encode the value
of a form field you may use \fI--data-urlencode\fP.
If any of these options is used more than once on the same command line, the
data pieces specified will be merged together with a separating
&-letter. Thus, using '-d name=daniel -d skill=lousy' would generate a post
chunk that looks like \&'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file name to
read the data from, or - if you want curl to read the data from stdin. The
contents of the file must already be url-encoded. Multiple files can also be
specified. Posting data from a file named 'foobar' would thus be done with
\fI--data\fP @foobar".
To post data purely binary, you should instead use the \fI--data-binary\fP
option.
\fI-d/--data\fP is the same as \fI--data-ascii\fP.
If this option is used several times, the ones following the first will
append data.
.IP "--data-ascii <data>"
(HTTP) This is an alias for the \fI-d/--data\fP option.
If this option is used several times, the ones following the first will
append data.
\fI--data @foobar\fP.
.IP "--data-binary <data>"
(HTTP) This posts data in a similar manner as \fI--data-ascii\fP does,
although when using this option the entire context of the posted data is kept
as-is. If you want to post a binary file without the strip-newlines feature of
the \fI--data-ascii\fP option, this is for you.
(HTTP) This posts data exactly as specified with no extra processing
whatsoever.
If this option is used several times, the ones following the first will
append data.
If you start the data with the letter @, the rest should be a filename. Data
is posted in a similar manner as \fI--data-ascii\fP does, except that newlines
are preserved and conversions are never done.
If this option is used several times, the ones following the first will append
data. As described in \fI-d/--data\fP.
.IP "--data-urlencode <data>"
(HTTP) This posts data, similar to the other --data options with the exception
that this will do partial URL encoding. (Added in 7.17.2)
that this performs URL encoding. (Added in 7.18.0)
The <data> part should be using one of the two following syntaxes:
To be CGI compliant, the <data> part should begin with a \fIname\fP followed
by a separator and a content specification. The <data> part can be passed to
curl using one of the following syntaxes:
.RS
.IP "content"
This will make curl URL encode the content and pass that on. Just be careful
so that the content doesn't contain any = or @ letters, as that will then make
the syntax match one of the other cases below!
.IP "=content"
This will make curl URL encode the content and pass that on. The preceding =
letter is not included in the data.
.IP "name=content"
This will make curl URL encode the content part and pass that on. Note that
the name part is expected to be URL encoded already.
.IP "@filename"
This will make curl load data from the given file (including any newlines),
URL encode that data and pass it on in the POST.
.IP "name@filename"
This will make curl load data from the given file, URL encode that data and
pass it on in the POST like \fIname=urlencoded-data\fP. Note that the name
is expected to be URL encoded already.
This will make curl load data from the given file (including any newlines),
URL encode that data and pass it on in the POST. The name part gets an equal
sign appended, resulting in \fIname=urlencoded-file-content\fP. Note that the
name is expected to be URL encoded already.
.RE
.IP "--digest"
(HTTP) Enables HTTP Digest authentication. This is a authentication that
@@ -560,7 +566,7 @@ See also the \fI-A/--user-agent\fP and \fI-e/--referer\fP options.
This option can be used multiple times to add/replace/remove multiple headers.
.IP "--hostpubmd5"
Pass a string containing 32 hexadecimal digits. The string should be the 128
bit MD5 cheksum of the remote host's public key, curl will refuse the
bit MD5 checksum of the remote host's public key, curl will refuse the
connection with the host unless the md5sums match. This option is only for SCP
and SFTP transfers. (Added in 7.17.1)
.IP "--ignore-content-length"
@@ -606,6 +612,14 @@ See this online resource for further details:
\fBhttp://curl.haxx.se/docs/sslcerts.html\fP
If this option is used twice, the second time will again disable it.
.IP "--keepalive-time <seconds>"
This option sets the time a connection needs to remain idle before sending
keepalive probes and the time between individual keepalive probes. It is
currently effective on operating systems offering the TCP_KEEPIDLE and
TCP_KEEPINTVL socket options (meaning Linux, recent AIX, HP-UX and more). This
option has no effect if \fI--no-keepalive\fP is used. (Added in 7.18.0)
If this option is used multiple times, the last occurrence sets the amount.
.IP "--key <key>"
(SSL/SSH) Private key file name. Allows you to provide your private key in this
separate file.
@@ -631,8 +645,12 @@ If this option is used several times, the last one will be used.
Specify which config file to read curl arguments from. The config file is a
text file in which command line arguments can be written which then will be
used as if they were written on the actual command line. Options and their
parameters must be specified on the same config file line. If the parameter is
to contain white spaces, the parameter must be enclosed within quotes. If the
parameters must be specified on the same config file line, separated by
white space, colon, the equals sign or any combination thereof (however,
the preferred separator is the equals sign). If the parameter is to contain
white spaces, the parameter must be enclosed within quotes. Within double
quotes, the following escape sequences are available: \\\\, \\", \\t, \\n,
\\r and \\v. A backslash preceding any other letter is ignored. If the
first column of a config line is a '#' character, the rest of the line will be
treated as a comment. Only write one option per physical line in the config
file.
@@ -687,7 +705,8 @@ NOTE: this does not properly support -F and the sending of multipart
formposts, so in those cases the output program will be missing necessary
calls to \fIcurl_formadd(3)\fP, and possibly more.
If this option is used several times, the last given file name will be used.
If this option is used several times, the last given file name will be
used. (Added in 7.16.1)
.IP "--limit-rate <speed>"
Specify the maximum transfer rate you want curl to use. This feature is useful
if you have a limited pipe and you'd like your transfer not use your entire
@@ -813,6 +832,11 @@ will output the data in chunks, not necessarily exactly when the data arrives.
Using this option will disable that buffering.
If this option is used twice, the second will again switch on buffering.
.IP "--no-keepalive"
Disables the use of keepalive messages on the TCP connection, as by default
curl enables them.
If this option is used twice, the second will again enable keepalive.
.IP "--no-sessionid"
(SSL) Disable curl's use of SSL session-ID caching. By default all transfers
are done using the cache. Note that while nothing ever should get hurt by
@@ -875,7 +899,7 @@ a redirection. This option is meaningful only when using \fI-L/--location\fP
(Added in 7.17.1)
.IP "--proxy-anyauth"
Tells curl to pick a suitable authentication method when communicating with
the given proxy. This will cause an extra request/response round-trip. (Added
the given proxy. This might cause an extra request/response round-trip. (Added
in 7.13.2)
If this option is used twice, the second will again disable the proxy use-any
@@ -962,9 +986,9 @@ This option can be used multiple times.
random data. The data is used to seed the random engine for SSL connections.
See also the \fI--egd-file\fP option.
.IP "-r/--range <range>"
(HTTP/FTP)
Retrieve a byte range (i.e a partial document) from a HTTP/1.1 or FTP
server. Ranges can be specified in a number of ways.
(HTTP/FTP/FILE) Retrieve a byte range (i.e. a partial document) from a
HTTP/1.1, FTP server or a local FILE. Ranges can be specified in a number of
ways.
.RS
.TP 10
.B 0-499
@@ -1063,9 +1087,28 @@ This option overrides any previous use of \fI-x/--proxy\fP, as they are
mutually exclusive.
If this option is used several times, the last one will be used.
.IP "--socks4a <host[:port]>"
Use the specified SOCKS4a proxy. If the port number is not specified, it is
assumed at port 1080. (Added in 7.18.0)
This option overrides any previous use of \fI-x/--proxy\fP, as they are
mutually exclusive.
If this option is used several times, the last one will be used.
.IP "--socks5-hostname <host[:port]>"
Use the specified SOCKS5 proxy (and let the proxy resolve the host name). If
the port number is not specified, it is assumed at port 1080. (Added in
7.18.0)
This option overrides any previous use of \fI-x/--proxy\fP, as they are
mutually exclusive.
If this option is used several times, the last one will be used. (This option
was previously wrongly documented and used as --socks without the number
appended.)
.IP "--socks5 <host[:port]>"
Use the specified SOCKS5 proxy. If the port number is not specified, it is
assumed at port 1080. (Added in 7.11.1)
Use the specified SOCKS5 proxy - but resolve the host name locally. If the
port number is not specified, it is assumed at port 1080.
This option overrides any previous use of \fI-x/--proxy\fP, as they are
mutually exclusive.
@@ -1142,6 +1185,9 @@ If this option is used several times, each occurrence will toggle it on/off.
Specify user and password to use for server authentication. Overrides
\fI-n/--netrc\fP and \fI--netrc-optional\fP.
If you just give the user name (without entering a colon) curl will prompt for
a password.
If you use an SSPI-enabled curl binary and do NTLM authentication, you can
force curl to pick up the user name and password from your environment by
simply specifying a single colon with this option: "-u :".

View File

@@ -0,0 +1,217 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* This is a multi threaded application that uses a progress bar to show
* status. It uses Gtk+ to make a smooth pulse.
*
* Written by Jud Bishop after studying the other examples provided with
* libcurl.
*
* To compile (on a single line):
* gcc -ggdb `pkg-config --cflags --libs gtk+-2.0` -lcurl -lssl -lcrypto
* -lgthread-2.0 -ldl smooth-gtk-thread.c -o smooth-gtk-thread
*/
#include <stdio.h>
#include <gtk/gtk.h>
#include <glib.h>
#include <unistd.h>
#include <pthread.h>
#include <curl/curl.h>
#include <curl/types.h> /* new for v7 */
#include <curl/easy.h> /* new for v7 */
#define NUMT 4
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int j = 0;
gint num_urls = 9; /* Just make sure this is less than urls[]*/
char *urls[]= {
"90022",
"90023",
"90024",
"90025",
"90026",
"90027",
"90028",
"90029",
"90030"
};
size_t write_file(void *ptr, size_t size, size_t nmemb, FILE *stream)
{
/* printf("write_file\n"); */
return fwrite(ptr, size, nmemb, stream);
}
/* http://xoap.weather.com/weather/local/46214?cc=*&dayf=5&unit=i */
void *pull_one_url(void *NaN)
{
CURL *curl;
CURLcode res;
gchar *http;
FILE *outfile;
gint i;
/* Stop threads from entering unless j is incremented */
pthread_mutex_lock(&lock);
while ( j < num_urls )
{
printf("j = %d\n", j);
http =
g_strdup_printf("xoap.weather.com/weather/local/%s?cc=*&dayf=5&unit=i\n",
urls[j]);
printf( "http %s", http );
curl = curl_easy_init();
if(curl)
{
outfile = fopen(urls[j], "w");
/* printf("fopen\n"); */
/* Set the URL and transfer type */
curl_easy_setopt(curl, CURLOPT_URL, http);
/* Write to the file */
curl_easy_setopt(curl, CURLOPT_WRITEDATA, outfile);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_file);
j++; /* critical line */
pthread_mutex_unlock(&lock);
res = curl_easy_perform(curl);
fclose(outfile);
printf("fclose\n");
curl_easy_cleanup(curl);
}
g_free (http);
/* Adds more latency, testing the mutex.*/
sleep(1);
pthread_mutex_lock(&lock); /* re-acquire the lock before re-testing j */
} /* end while */
pthread_mutex_unlock(&lock); /* release the lock held when leaving the loop */
return NULL;
}
gboolean pulse_bar(gpointer data)
{
gdk_threads_enter();
gtk_progress_bar_pulse (GTK_PROGRESS_BAR (data));
gdk_threads_leave();
/* Return true so the function will be called again;
* returning false removes this timeout function.
*/
return TRUE;
}
void *create_thread(void *progress_bar)
{
pthread_t tid[NUMT];
int i;
int error;
/* Make sure I don't create more threads than urls. */
for(i=0; i < NUMT && i < num_urls ; i++) {
error = pthread_create(&tid[i],
NULL, /* default attributes please */
pull_one_url,
NULL);
if(0 != error)
fprintf(stderr, "Couldn't run thread number %d, errno %d\n", i, error);
else
fprintf(stderr, "Thread %d, gets %s\n", i, urls[i]);
}
/* Wait for all threads to terminate. */
for(i=0; i < NUMT && i < num_urls; i++) {
error = pthread_join(tid[i], NULL);
fprintf(stderr, "Thread %d terminated\n", i);
}
/* This stops the pulsing if you have it turned on in the progress bar
section */
g_source_remove(GPOINTER_TO_INT(g_object_get_data(G_OBJECT(progress_bar),
"pulse_id")));
/* This destroys the progress bar */
gtk_widget_destroy(progress_bar);
/* [Un]Comment this out to kill the program rather than pushing close. */
/* gtk_main_quit(); */
return NULL;
}
static gboolean cb_delete(GtkWidget *window, gpointer data)
{
gtk_main_quit();
return FALSE;
}
int main(int argc, char **argv)
{
GtkWidget *top_window, *outside_frame, *inside_frame, *progress_bar;
GtkAdjustment *adj;
/* Init thread */
g_thread_init(NULL);
gdk_threads_init ();
gdk_threads_enter ();
gtk_init(&argc, &argv);
/* Base window */
top_window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
/* Frame */
outside_frame = gtk_frame_new(NULL);
gtk_frame_set_shadow_type(GTK_FRAME(outside_frame), GTK_SHADOW_OUT);
gtk_container_add(GTK_CONTAINER(top_window), outside_frame);
/* Frame */
inside_frame = gtk_frame_new(NULL);
gtk_frame_set_shadow_type(GTK_FRAME(inside_frame), GTK_SHADOW_IN);
gtk_container_set_border_width(GTK_CONTAINER(inside_frame), 5);
gtk_container_add(GTK_CONTAINER(outside_frame), inside_frame);
/* Progress bar */
progress_bar = gtk_progress_bar_new();
gtk_progress_bar_pulse (GTK_PROGRESS_BAR (progress_bar));
/* Make uniform pulsing */
gint pulse_ref = g_timeout_add (300, pulse_bar, progress_bar);
g_object_set_data(G_OBJECT(progress_bar), "pulse_id",
GINT_TO_POINTER(pulse_ref));
gtk_container_add(GTK_CONTAINER(inside_frame), progress_bar);
gtk_widget_show_all(top_window);
printf("gtk_widget_show_all\n");
g_signal_connect(G_OBJECT (top_window), "delete-event",
G_CALLBACK(cb_delete), NULL);
if (g_thread_create(&create_thread, progress_bar, FALSE, NULL) == NULL)
g_warning("can't create the thread");
gtk_main();
gdk_threads_leave();
printf("gdk_threads_leave\n");
return 0;
}

View File

@@ -18,7 +18,8 @@ man_MANS = curl_easy_cleanup.3 curl_easy_getinfo.3 curl_easy_init.3 \
curl_multi_strerror.3 curl_share_strerror.3 curl_global_init_mem.3 \
libcurl-tutorial.3 curl_easy_reset.3 curl_easy_escape.3 \
curl_easy_unescape.3 curl_multi_setopt.3 curl_multi_socket.3 \
curl_multi_timeout.3 curl_formget.3 curl_multi_assign.3
curl_multi_timeout.3 curl_formget.3 curl_multi_assign.3 \
curl_easy_pause.3
HTMLPAGES = curl_easy_cleanup.html curl_easy_getinfo.html \
curl_easy_init.html curl_easy_perform.html curl_easy_setopt.html \
@@ -36,7 +37,7 @@ HTMLPAGES = curl_easy_cleanup.html curl_easy_getinfo.html \
curl_share_strerror.html curl_global_init_mem.html libcurl-tutorial.html \
curl_easy_reset.html curl_easy_escape.html curl_easy_unescape.html \
curl_multi_setopt.html curl_multi_socket.html curl_multi_timeout.html \
curl_formget.html curl_multi_assign.html
curl_formget.html curl_multi_assign.html curl_easy_pause.html
PDFPAGES = curl_easy_cleanup.pdf curl_easy_getinfo.pdf curl_easy_init.pdf \
curl_easy_perform.pdf curl_easy_setopt.pdf curl_easy_duphandle.pdf \
@@ -53,7 +54,7 @@ PDFPAGES = curl_easy_cleanup.pdf curl_easy_getinfo.pdf curl_easy_init.pdf \
curl_share_strerror.pdf curl_global_init_mem.pdf libcurl-tutorial.pdf \
curl_easy_reset.pdf curl_easy_escape.pdf curl_easy_unescape.pdf \
curl_multi_setopt.pdf curl_multi_socket.pdf curl_multi_timeout.pdf \
curl_formget.pdf curl_multi_assign.pdf
curl_formget.pdf curl_multi_assign.pdf curl_easy_pause.pdf
CLEANFILES = $(HTMLPAGES) $(PDFPAGES)

View File

@@ -0,0 +1,63 @@
.\" $Id$
.\"
.TH curl_easy_pause 3 "17 Dec 2007" "libcurl 7.18.0" "libcurl Manual"
.SH NAME
curl_easy_pause - pause and unpause a connection
.SH SYNOPSIS
.B #include <curl/curl.h>
.BI "CURLcode curl_easy_pause(CURL *"handle ", int "bitmask " );"
.SH DESCRIPTION
Using this function, you can explicitly mark a running connection to get
paused, and you can unpause a connection that was previously paused.
A connection can be made to pause by using this function or by letting the read
or the write callbacks return the proper magic return code
(\fICURL_READFUNC_PAUSE\fP and \fICURL_WRITEFUNC_PAUSE\fP).
NOTE: while it may feel tempting, take care and notice that you cannot call
this function from another thread.
When this function is called to unpause reading, the chance is high that you
will get your write callback called before this function returns.
The \fBhandle\fP argument is of course identifying the handle that operates on
the connection you want to pause or unpause.
The \fBbitmask\fP argument is a set of bits that sets the new state of the
connection. The following bits can be used:
.IP CURLPAUSE_RECV
Pause receiving data. There will be no data received on this connection until
this function is called again without this bit set. Thus, the write callback
(\fICURLOPT_WRITEFUNCTION\fP) won't be called.
.IP CURLPAUSE_SEND
Pause sending data. There will be no data sent on this connection until this
function is called again without this bit set. Thus, the read callback
(\fICURLOPT_READFUNCTION\fP) won't be called.
.IP CURLPAUSE_ALL
Convenience define that pauses both directions.
.IP CURLPAUSE_CONT
Convenience define that unpauses both directions
.SH RETURN VALUE
CURLE_OK (zero) means that the option was set properly, and a non-zero return
code means something wrong occurred after the new state was set. See the
\fIlibcurl-errors(3)\fP man page for the full list with descriptions.
.SH AVAILABILITY
This function was added in libcurl 7.18.0. Before this version, there was no
explicit support for pausing transfers.
.SH "MEMORY USE"
When pausing a read by returning the magic return code from a write callback,
the read data is already in libcurl's internal buffers so it'll have to keep
it in an allocated buffer until the reading is again unpaused using this
function.
If the downloaded data is compressed and is asked to get uncompressed
automatically on download, libcurl will continue to uncompress the entire
downloaded chunk and it will cache the data uncompressed. This has the side-
effect that if you download something that is compressed a lot, it can result
in a very large data amount needing to be allocated to save the data during
the pause. This said, you should probably consider not using paused reading if
you allow libcurl to uncompress data automatically.
.SH "SEE ALSO"
.BR curl_easy_cleanup "(3), " curl_easy_reset "(3)"
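An editor-added illustration (not part of the man page) of the two pause mechanisms working together: pausing from a write callback and unpausing later with curl_easy_pause(). The struct app_state and its fields are hypothetical application state.
#include <stdio.h>
#include <curl/curl.h>
struct app_state {
  int buffer_full;
  FILE *out;
};
static size_t write_cb(char *buf, size_t size, size_t nmemb, void *userp)
{
  struct app_state *s = (struct app_state *)userp;
  if(s->buffer_full)
    return CURL_WRITEFUNC_PAUSE; /* stop receiving on this transfer for now */
  return fwrite(buf, size, nmemb, s->out);
}
/* later, from the same thread that drives the transfer:
   curl_easy_pause(handle, CURLPAUSE_CONT); */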

View File

@@ -5,7 +5,7 @@
.\" * | (__| |_| | _ <| |___
.\" * \___|\___/|_| \_\_____|
.\" *
.\" * Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" * Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
.\" *
.\" * This software is licensed as described in the file COPYING, which
.\" * you should have received as part of this distribution. The terms
@@ -21,7 +21,7 @@
.\" * $Id$
.\" **************************************************************************
.\"
.TH curl_easy_setopt 3 "30 Aug 2007" "libcurl 7.17.0" "libcurl Manual"
.TH curl_easy_setopt 3 "5 Jan 2008" "libcurl 7.18.0" "libcurl Manual"
.SH NAME
curl_easy_setopt \- set options for a curl easy handle
.SH SYNOPSIS
@@ -95,6 +95,10 @@ of bytes actually taken care of. If that amount differs from the amount passed
to your function, it'll signal an error to the library and it will abort the
transfer and return \fICURLE_WRITE_ERROR\fP.
From 7.18.0, the function can return CURL_WRITEFUNC_PAUSE which then will
cause writing to this connection to become paused. See
\fIcurl_easy_pause(3)\fP for further details.
This function may be called with zero bytes data if the transfered file is
empty.
@@ -142,6 +146,10 @@ The read callback may return \fICURL_READFUNC_ABORT\fP to stop the current
operation immediately, resulting in a \fICURLE_ABORTED_BY_CALLBACK\fP error
code from the transfer (Added in 7.12.1)
From 7.18.0, the function can return CURL_READFUNC_PAUSE which then will cause
reading from this connection to become paused. See \fIcurl_easy_pause(3)\fP
for further details.
If you set the callback pointer to NULL, or doesn't set it at all, the default
internal read function will be used. It is simply doing an fread() on the FILE
* stream set with \fICURLOPT_READDATA\fP.
@@ -163,11 +171,32 @@ something special I/O-related needs to be done that the library can't do by
itself. For now, rewinding the read data stream is the only action it can
request. The rewinding of the read data stream may be necessary when doing a
HTTP PUT or POST with a multi-pass authentication method. (Option added in
7.12.3)
7.12.3).
Use \fICURLOPT_SEEKFUNCTION\fP instead to provide seeking!
.IP CURLOPT_IOCTLDATA
Pass a pointer that will be untouched by libcurl and passed as the 3rd
argument in the ioctl callback set with \fICURLOPT_IOCTLFUNCTION\fP. (Option
added in 7.12.3)
.IP CURLOPT_SEEKFUNCTION
Function pointer that should match the following prototype: \fIint
function(void *instream, curl_off_t offset, int origin);\fP This function gets
called by libcurl to seek to a certain position in the input stream and can be
used to fast forward a file in a resumed upload (instead of reading all
uploaded bytes with the normal read function/callback). It is also called to
rewind a stream when doing a HTTP PUT or POST with a multi-pass authentication
method. The function shall work like "fseek" or "lseek" and accepts SEEK_SET,
SEEK_CUR and SEEK_END as argument for origin, although (in 7.18.0) libcurl
only passes SEEK_SET. The callback must return 0 on success as returning
non-zero will cause the upload operation to fail.
If you forward the input arguments directly to "fseek" or "lseek", note that
the data type for \fIoffset\fP is not the same as defined for curl_off_t on
many systems! (Option added in 7.18.0)
.IP CURLOPT_SEEKDATA
Data pointer to pass to the file read function. If you use the
\fICURLOPT_SEEKFUNCTION\fP option, this is the pointer you'll get as input. If
you don't specify a seek callback, NULL is passed. (Option added in 7.18.0)
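An editor-added sketch of a seek callback for a plain FILE * upload source, following the prototype above; the cast to long is exactly the curl_off_t caveat just mentioned, and upload_file is a hypothetical FILE * the application opened.
#include <stdio.h>
#include <curl/curl.h>
static int seek_cb(void *instream, curl_off_t offset, int origin)
{
  FILE *f = (FILE *)instream;
  if(fseek(f, (long)offset, origin) != 0)
    return 1; /* non-zero makes the upload operation fail */
  return 0;
}
/* set up with:
   curl_easy_setopt(handle, CURLOPT_SEEKFUNCTION, seek_cb);
   curl_easy_setopt(handle, CURLOPT_SEEKDATA, (void *)upload_file); */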
.IP CURLOPT_SOCKOPTFUNCTION
Function pointer that should match the \fIcurl_sockopt_callback\fP prototype
found in \fI<curl/curl.h>\fP. This function gets called by libcurl after the
@@ -430,13 +459,21 @@ Pass a long with this option to set the proxy port to connect to unless it is
specified in the proxy string \fICURLOPT_PROXY\fP.
.IP CURLOPT_PROXYTYPE
Pass a long with this option to set type of the proxy. Available options for
this are \fICURLPROXY_HTTP\fP, \fICURLPROXY_SOCKS4\fP (added in 7.15.2)
\fICURLPROXY_SOCKS5\fP. The HTTP type is default. (Added in 7.10)
this are \fICURLPROXY_HTTP\fP, \fICURLPROXY_SOCKS4\fP (added in 7.15.2),
\fICURLPROXY_SOCKS5\fP, \fICURLPROXY_SOCKS4A\fP (added in 7.18.0) and
\fICURLPROXY_SOCKS5_HOSTNAME\fP (added in 7.18.0). The HTTP type is
default. (Added in 7.10)
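A short editor-added snippet (not from the man page) showing the new proxy type combined with CURLOPT_PROXY; the proxy address is only a placeholder.
#include <curl/curl.h>
static void use_socks5_hostname_proxy(CURL *handle)
{
  /* let the SOCKS5 proxy resolve the target host name */
  curl_easy_setopt(handle, CURLOPT_PROXY, "socksproxy.example.com:1080");
  curl_easy_setopt(handle, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);
}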
.IP CURLOPT_HTTPPROXYTUNNEL
Set the parameter to non-zero to get the library to tunnel all operations
through a given HTTP proxy. There is a big difference between using a proxy
and to tunnel through it. If you don't know what this means, you probably
don't want this tunneling option.
.IP CURLOPT_SOCKS5_RESOLVE_LOCAL
Set the parameter to 1 to get the library to resolve the host name locally
instead of passing it to the proxy to resolve, when using a SOCKS5 proxy.
Note that libcurl before 7.18.0 always resolved the host name locally even
when SOCKS5 was used. (Added in 7.18.0)
.IP CURLOPT_INTERFACE
Pass a char * as parameter. This set the interface name to use as outgoing
network interface. The name can be an interface name, an IP address or a host
@@ -670,7 +707,9 @@ and \fICURLOPT_READDATA\fP options but then you must make sure to not set
\fICURLOPT_POSTFIELDS\fP to anything but NULL. When providing data with a
callback, you must transmit it using chunked transfer-encoding or you must set
the size of the data with the \fICURLOPT_POSTFIELDSIZE\fP or
\fICURLOPT_POSTFIELDSIZE_LARGE\fP option.
\fICURLOPT_POSTFIELDSIZE_LARGE\fP option. To enable chunked encoding, you
simply pass in the appropriate Transfer-Encoding header, see the
post-callback.c example.
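An editor-added sketch of that combination: a read-callback POST of unknown size announced with a chunked Transfer-Encoding header; the static string stands in for a real data source.
#include <string.h>
#include <curl/curl.h>
static const char *payload = "field=value";
static size_t sent;
static size_t read_cb(char *buf, size_t size, size_t nmemb, void *userp)
{
  size_t room = size * nmemb;
  size_t left = strlen(payload) - sent;
  size_t n = (left < room) ? left : room;
  (void)userp;
  memcpy(buf, payload + sent, n);
  sent += n;
  return n; /* returning 0 ends the chunked POST */
}
static void setup_chunked_post(CURL *handle)
{
  struct curl_slist *hdrs =
    curl_slist_append(NULL, "Transfer-Encoding: chunked");
  curl_easy_setopt(handle, CURLOPT_POST, 1L);
  curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(handle, CURLOPT_HTTPHEADER, hdrs);
}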
You can override the default POST Content-Type: header by setting your own
with \fICURLOPT_HTTPHEADER\fP.
@@ -1064,6 +1103,13 @@ or similar.
libcurl does not do a complete ASCII conversion when doing ASCII transfers
over FTP. This is a known limitation/flaw that nobody has rectified. libcurl
simply sets the mode to ascii and performs a standard transfer.
.IP CURLOPT_PROXY_TRANSFER_MODE
Pass a long. If the value is set to 1 (one), it tells libcurl to set the
transfer mode (binary or ASCII) for FTP transfers done via an HTTP proxy, by
appending ;type=a or ;type=i to the URL. Without this setting, or it being
set to 0 (zero, the default), \fICURLOPT_TRANSFERTEXT\fP has no effect when
doing FTP via a proxy. Beware that not all proxies support this feature.
(Added in 7.18.0)
.IP CURLOPT_CRLF
Convert Unix newlines to CRLF newlines on transfers.
.IP CURLOPT_RANGE
@@ -1073,6 +1119,8 @@ transfers also support several intervals, separated with commas as in
\fI"X-Y,N-M"\fP. Using this kind of multiple intervals will cause the HTTP
server to send the response document in pieces (using standard MIME separation
techniques). Pass a NULL to this option to disable the use of ranges.
Ranges work on HTTP, FTP and FILE (since 7.18.0) transfers only.
.IP CURLOPT_RESUME_FROM
Pass a long as parameter. It contains the offset in number of bytes that you
want the transfer to start from. Set this option to 0 to make the transfer

View File

@@ -47,10 +47,11 @@ changes. The timeout value is at what latest time the application should call
one of the \&"performing" functions of the multi interface
(\fIcurl_multi_socket(3)\fP, \fIcurl_multi_socket_all(3)\fP and
\fIcurl_multi_perform(3)\fP) - to allow libcurl to keep timeouts and retries
etc to work. Libcurl attempts to limit calling this only when the fixed future
timeout time actually change. See also \fICURLMOPT_TIMERDATA\fP. This callback
can be used instead of, or in addition to, \fIcurl_multi_timeout(3)\fP. (Added
in 7.16.0)
etc to work. A timeout value of -1 means that there is no timeout at all, and
0 means that the timeout is already reached. Libcurl attempts to limit calling
this only when the fixed future timeout time actually changes. See also
\fICURLMOPT_TIMERDATA\fP. This callback can be used instead of, or in addition
to, \fIcurl_multi_timeout(3)\fP. (Added in 7.16.0)
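An editor-added sketch of a timer callback matching the curl_multi_timer_callback prototype; app_timeout_ms is a stand-in for whatever value the application's own event loop consults.
#include <curl/curl.h>
static long app_timeout_ms = -1;
static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  (void)multi; (void)userp;
  app_timeout_ms = timeout_ms; /* -1 = no timeout set, 0 = act right away */
  return 0;
}
/* installed with: curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb); */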
.IP CURLMOPT_TIMERDATA
Pass a pointer to whatever you want passed to the
\fBcurl_multi_timer_callback\fP's third argument, the userp pointer. This is

View File

@@ -23,6 +23,10 @@ The timeout value returned in the long \fBtimeout\fP points to, is in number
of milliseconds at this very moment. If 0, it means you should proceed
immediately without waiting for anything. If it returns -1, there's no timeout
at all set.
Note: if libcurl returns a -1 timeout here, it just means that libcurl
currently has no stored timeout value. You must not wait too long (more than a
few seconds perhaps) before you call curl_multi_perform() again.
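A small editor-added sketch of the pattern this note implies: use the returned value as a select() timeout, but cap a -1 result rather than waiting forever; the fd_sets are assumed to come from curl_multi_fdset().
#include <sys/select.h>
#include <curl/curl.h>
static void wait_on_multi(CURLM *multi, fd_set *rd, fd_set *wr, fd_set *exc,
                          int maxfd)
{
  long timeout_ms = -1;
  struct timeval tv;
  curl_multi_timeout(multi, &timeout_ms);
  if(timeout_ms < 0)
    timeout_ms = 1000; /* no stored timeout: don't sleep for long */
  tv.tv_sec = timeout_ms / 1000;
  tv.tv_usec = (timeout_ms % 1000) * 1000;
  select(maxfd + 1, rd, wr, exc, &tv);
}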
.SH "RETURN VALUE"
The standard CURLMcode for multi interface error codes.
.SH "TYPICAL USAGE"

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -230,7 +230,9 @@ typedef int (*curl_progress_callback)(void *clientp,
time for those who feel adventurous. */
#define CURL_MAX_WRITE_SIZE 16384
#endif
/* This is a magic return code for the write callback that, when returned,
will signal libcurl to pause receiving on the current transfer. */
#define CURL_WRITEFUNC_PAUSE 0x10000001
typedef size_t (*curl_write_callback)(char *buffer,
size_t size,
size_t nitems,
@@ -239,6 +241,13 @@ typedef size_t (*curl_write_callback)(char *buffer,
/* This is a return code for the read callback that, when returned, will
signal libcurl to immediately abort the current transfer. */
#define CURL_READFUNC_ABORT 0x10000000
/* This is a return code for the read callback that, when returned, will
signal libcurl to pause sending data on the current transfer. */
#define CURL_READFUNC_PAUSE 0x10000001
typedef int (*curl_seek_callback)(void *instream,
curl_off_t offset,
int origin); /* 'whence' */
typedef size_t (*curl_read_callback)(char *buffer,
size_t size,
size_t nitems,
@@ -257,7 +266,9 @@ struct curl_sockaddr {
int family;
int socktype;
int protocol;
socklen_t addrlen;
unsigned int addrlen; /* addrlen was a socklen_t type before 7.18.0 but it
turned really ugly and painful on the systems that
lack this type */
struct sockaddr addr;
};
@@ -493,10 +504,15 @@ typedef CURLcode (*curl_ssl_ctx_callback)(CURL *curl, /* easy handle */
void *userptr);
typedef enum {
CURLPROXY_HTTP = 0,
CURLPROXY_SOCKS4 = 4,
CURLPROXY_SOCKS5 = 5
} curl_proxytype;
CURLPROXY_HTTP = 0, /* added in 7.10 */
CURLPROXY_SOCKS4 = 4, /* support added in 7.15.2, enum existed already
in 7.10 */
CURLPROXY_SOCKS5 = 5, /* added in 7.10 */
CURLPROXY_SOCKS4A = 6, /* added in 7.18.0 */
CURLPROXY_SOCKS5_HOSTNAME = 7 /* Use the SOCKS5 protocol but pass along the
host name rather than the IP address. added
in 7.18.0 */
} curl_proxytype; /* this enum was added in 7.10 */
#define CURLAUTH_NONE 0 /* nothing */
#define CURLAUTH_BASIC (1<<0) /* Basic (default) */
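The two new enum values can be requested through CURLOPT_PROXYTYPE just like the existing ones. A small hedged example (the proxy address is made up) that lets the proxy do the name resolving:

#include <curl/curl.h>

/* Illustrative only: fetch a URL through a SOCKS5 proxy and let the proxy
   resolve the host name (CURLPROXY_SOCKS5_HOSTNAME, new in 7.18.0). */
static CURLcode fetch_via_socks5_hostname(const char *url)
{
  CURLcode rc = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
    curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);
    rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return rc;
}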
@@ -941,7 +957,7 @@ typedef enum {
CINIT(SHARE, OBJECTPOINT, 100),
/* indicates type of proxy. accepted values are CURLPROXY_HTTP (default),
CURLPROXY_SOCKS4 and CURLPROXY_SOCKS5. */
CURLPROXY_SOCKS4, CURLPROXY_SOCKS4A and CURLPROXY_SOCKS5. */
CINIT(PROXYTYPE, LONG, 101),
/* Set the Accept-Encoding string. Use this to tell a server you would like
@@ -1165,6 +1181,13 @@ typedef enum {
/* POST volatile input fields. */
CINIT(COPYPOSTFIELDS, OBJECTPOINT, 165),
/* set transfer mode (;type=<a|i>) when doing FTP via an HTTP proxy */
CINIT(PROXY_TRANSFER_MODE, LONG, 166),
/* Callback function for seeking in the input stream */
CINIT(SEEKFUNCTION, FUNCTIONPOINT, 167),
CINIT(SEEKDATA, OBJECTPOINT, 168),
CURLOPT_LASTENTRY /* the last unused */
} CURLoption;
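To illustrate the new CURLOPT_SEEKFUNCTION/CURLOPT_SEEKDATA pair (this sketch is not from the changeset), a seek callback over a plain FILE* could look like this; returning zero signals success to libcurl:

#include <stdio.h>
#include <curl/curl.h>

/* Hypothetical seek callback for upload data read from a FILE*. The cast to
   long may truncate very large offsets; this is illustrative only. */
static int my_seek(void *instream, curl_off_t offset, int origin)
{
  FILE *fp = (FILE *)instream;
  return fseek(fp, (long)offset, origin) ? 1 : 0;
}

/* after curl_easy_init():
 *   curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
 *   curl_easy_setopt(curl, CURLOPT_SEEKDATA, fp);
 */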
@@ -1739,6 +1762,26 @@ CURL_EXTERN const char *curl_easy_strerror(CURLcode);
*/
CURL_EXTERN const char *curl_share_strerror(CURLSHcode);
/*
* NAME curl_easy_pause()
*
* DESCRIPTION
*
* The curl_easy_pause function pauses or unpauses transfers. Select the new
* state by setting the bitmask, use the convenience defines below.
*
*/
CURL_EXTERN CURLcode curl_easy_pause(CURL *handle, int bitmask);
#define CURLPAUSE_RECV (1<<0)
#define CURLPAUSE_RECV_CONT (0)
#define CURLPAUSE_SEND (1<<2)
#define CURLPAUSE_SEND_CONT (0)
#define CURLPAUSE_ALL (CURLPAUSE_RECV|CURLPAUSE_SEND)
#define CURLPAUSE_CONT (CURLPAUSE_RECV_CONT|CURLPAUSE_SEND_CONT)
#ifdef __cplusplus
}
#endif

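Putting the new pause pieces together, here is a hedged sketch (not part of the diff; the 'app' struct and its fields are invented) of a write callback that pauses receiving and a helper that later resumes it with curl_easy_pause():

#include <stdio.h>
#include <curl/curl.h>

struct app {
  CURL *handle;
  int want_pause; /* set by the application when it cannot take more data */
};

static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
{
  struct app *a = (struct app *)userp;
  if(a->want_pause)
    return CURL_WRITEFUNC_PAUSE; /* libcurl keeps this data for later */
  fwrite(ptr, size, nmemb, stdout);
  return size * nmemb;
}

static void resume_receiving(struct app *a)
{
  /* unpausing may invoke write_cb immediately with the buffered data */
  curl_easy_pause(a->handle, CURLPAUSE_CONT);
}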
View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -26,15 +26,18 @@
/* This header file contains nothing but libcurl version info, generated by
a script at release-time. This was made its own header file in 7.11.2 */
/* This is the global package copyright */
#define LIBCURL_COPYRIGHT "1996 - 2008 Daniel Stenberg, <daniel@haxx.se>."
/* This is the version number of the libcurl package from which this header
file origins: */
#define LIBCURL_VERSION "7.17.2-CVS"
#define LIBCURL_VERSION "7.18.0-CVS"
/* The numeric version number is also available "in parts" by using these
defines: */
#define LIBCURL_VERSION_MAJOR 7
#define LIBCURL_VERSION_MINOR 17
#define LIBCURL_VERSION_PATCH 2
#define LIBCURL_VERSION_MINOR 18
#define LIBCURL_VERSION_PATCH 0
/* This is the numeric version of the libcurl version number, meant for easier
parsing and comparisons by programs. The LIBCURL_VERSION_NUM define will
@@ -51,7 +54,7 @@
and it is always a greater number in a more recent release. It makes
comparisons with greater than and less than work.
*/
#define LIBCURL_VERSION_NUM 0x071102
#define LIBCURL_VERSION_NUM 0x071200
/*
* This is the date and time when the full source package was created. The

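As a hedged example of the comparisons the comment refers to (HAVE_CURL_PAUSE is an invented macro name), application code can gate its use of the new 7.18.0 APIs at compile time:

#include <curl/curl.h>

/* 0x071200 is 7.18.0 in LIBCURL_VERSION_NUM's 0xXXYYZZ encoding */
#if LIBCURL_VERSION_NUM >= 0x071200
#  define HAVE_CURL_PAUSE 1
#else
#  define HAVE_CURL_PAUSE 0
#endif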
View File

@@ -1,18 +1,19 @@
#
# Watcom / OpenWatcom / Win32 makefile for libcurl.
# G. Vanem <giva@bgnett.no>
# G. Vanem <gvanem@broadpark.no>
#
# $Id$
TARGETS = ca-bundle.h libcurl_wc.lib libcurl_wc.dll libcurl_wc_imp.lib
TARGETS = ca-bundle.h libcurl_wc.dll libcurl_wc_imp.lib
CC = wcc386
CFLAGS = -3r -mf -d3 -hc -zff -zgf -zq -zm -zc -s -fr=con -w2 -fpi -oilrtfm -bt=nt -bd &
-d+ -dWIN32 -dCURL_CA_BUNDLE=getenv("CURL_CA_BUNDLE") &
CFLAGS = -3r -mf -d3 -hc -zff -zgf -zq -zm -zc -s -fr=con -w2 -fpi -oilrtfm -bt=nt &
-bd -d+ -dWIN32 -dCURL_CA_BUNDLE=getenv("CURL_CA_BUNDLE") &
-dBUILDING_LIBCURL -dWITHOUT_MM_LIB -dHAVE_SPNEGO=1 -dENABLE_IPV6 &
-dDEBUG_THREADING_GETADDRINFO -dDEBUG=1 -dCURLDEBUG -d_WIN32_WINNT=0x0501 &
-I. -I..\include -dWINBERAPI=__declspec(cdecl) -dWINLDAPAPI=__declspec(cdecl)
-dWINBERAPI=__declspec(cdecl) -dWINLDAPAPI=__declspec(cdecl) &
-I. -I..\include
#
# Change to suit.
@@ -24,9 +25,8 @@ USE_ZLIB = 0
CFLAGS += -dHAVE_ZLIB_H -dHAVE_LIBZ -I$(ZLIB_ROOT)
!endif
OBJ_DIR = Watcom_obj
OBJ_DIR = WC_Win32.obj
C_ARG = $(OBJ_DIR)\wcc386.arg
LIB_ARG = $(OBJ_DIR)\wlib.arg
LINK_ARG = $(OBJ_DIR)\wlink.arg
OBJS = $(OBJ_DIR)\base64.obj $(OBJ_DIR)\connect.obj &
@@ -70,17 +70,14 @@ $(OBJ_DIR):
ca-bundle.h:
@echo /* dummy ca-bundle.h. Not used */ > $@
libcurl_wc.lib: $(OBJS) $(LIB_ARG)
wlib -q -b -c $@ @$(LIB_ARG)
libcurl_wc.dll: $(OBJS) $(RESOURCE) $(LINK_ARG)
libcurl_wc.dll libcurl_wc_imp.lib: $(OBJS) $(RESOURCE) $(LINK_ARG)
wlink name libcurl_wc.dll @$(LINK_ARG)
clean: .SYMBOLIC
- rm -f $(OBJS) $(RESOURCE)
vclean realclean: clean .SYMBOLIC
- rm -f $(TARGETS) $(C_ARG) $(LIB_ARG) $(LINK_ARG) libcurl_wc.map
- rm -f $(TARGETS) $(C_ARG) $(LINK_ARG) libcurl_wc.map
- rmdir $(OBJ_DIR)
.ERASE
@@ -95,10 +92,6 @@ $(C_ARG): $(__MAKEFILES__)
%create $^@
%append $^@ $(CFLAGS)
$(LIB_ARG): $(__MAKEFILES__)
%create $^@
for %f in ($(OBJS)) do @%append $^@ +- %f
$(LINK_ARG): $(__MAKEFILES__)
%create $^@
@%append $^@ system nt dll

View File

@@ -21,11 +21,11 @@ ZLIB_PATH = ../../zlib-1.2.3
endif
# Edit the path below to point to the base of your OpenSSL package.
ifndef OPENSSL_PATH
OPENSSL_PATH = ../../openssl-0.9.8e
OPENSSL_PATH = ../../openssl-0.9.8g
endif
# Edit the path below to point to the base of your LibSSH2 package.
ifndef LIBSSH2_PATH
LIBSSH2_PATH = ../../libssh2-0.17
LIBSSH2_PATH = ../../libssh2-0.18
endif
# Edit the path below to point to the base of your Novell LDAP NDK.
ifndef LDAP_SDK

View File

@@ -35,7 +35,7 @@ endif
# Edit the vars below to change NLM target settings.
TARGET = libcurl
VERSION = $(LIBCURL_VERSION)
COPYR = Copyright (C) 1996 - 2007, Daniel Stenberg, <daniel@haxx.se>
COPYR = Copyright (C) $(LIBCURL_COPYRIGHT_STR)
DESCR = cURL libcurl $(LIBCURL_VERSION_STR) ($(LIBARCH)) - http://curl.haxx.se
MTSAFE = YES
STACK = 64000
@@ -72,7 +72,7 @@ else
CC = gcc
endif
# a native win32 awk can be downloaded from here:
# http://www.gknw.net/development/prgtools/awk-20050424.zip
# http://www.gknw.net/development/prgtools/awk-20070501.zip
AWK = awk
YACC = bison -y
CP = cp -afv
@@ -336,9 +336,9 @@ endif
ifdef IMPORTS
@echo $(DL)import $(IMPORTS)$(DL) >> $@
endif
ifeq ($(LD),nlmconv)
@echo $(DL)input $(OBJL)$(DL) >> $@
ifeq ($(findstring nlmconv,$(LD)),nlmconv)
@echo $(DL)input $(PRELUDE)$(DL) >> $@
@echo $(DL)input $(OBJL)$(DL) >> $@
#ifdef LDLIBS
# @echo $(DL)input $(LDLIBS)$(DL) >> $@
#endif

View File

@@ -35,7 +35,7 @@ IMPLIB_NAME = libcurl_imp
IMPLIB_NAME_DEBUG = libcurld_imp
!IFNDEF OPENSSL_PATH
OPENSSL_PATH = ../../openssl-0.9.8e
OPENSSL_PATH = ../../openssl-0.9.8g
!ENDIF
!IFNDEF ZLIB_PATH

View File

@@ -353,6 +353,20 @@
#define _CRT_NONSTDC_NO_DEPRECATE 1
#endif
/* VS2008 does not support Windows build targets prior to WinXP, */
/* so, if no build target has been defined we will target WinXP. */
#if defined(_MSC_VER) && (_MSC_VER >= 1500)
# ifndef _WIN32_WINNT
# define _WIN32_WINNT 0x0501
# endif
# ifndef WINVER
# define WINVER 0x0501
# endif
# if (_WIN32_WINNT < 0x0501) || (WINVER < 0x0501)
# error VS2008 does not support Windows build targets prior to WinXP
# endif
#endif
/* ---------------------------------------------------------------- */
/* LDAP SUPPORT */
/* ---------------------------------------------------------------- */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -694,7 +694,7 @@ singleipconnect(struct connectdata *conn,
addr->protocol=ai->ai_protocol;
addr->addrlen =
(ai->ai_addrlen < (socklen_t)sizeof(struct Curl_sockaddr_storage)) ?
ai->ai_addrlen : (socklen_t)sizeof(struct Curl_sockaddr_storage);
(unsigned int)ai->ai_addrlen : sizeof(struct Curl_sockaddr_storage);
memcpy(&addr->addr, ai->ai_addr, addr->addrlen);
/* If set, use opensocket callback to get the socket */

View File

@@ -77,7 +77,7 @@ exit_zlib(z_stream *z, zlibInitState *zlib_init, CURLcode result)
static CURLcode
inflate_stream(struct connectdata *conn,
struct Curl_transfer_keeper *k)
struct SingleRequest *k)
{
int allow_restart = 1;
z_stream *z = &k->z; /* zlib state structure */
@@ -152,7 +152,7 @@ inflate_stream(struct connectdata *conn,
CURLcode
Curl_unencode_deflate_write(struct connectdata *conn,
struct Curl_transfer_keeper *k,
struct SingleRequest *k,
ssize_t nread)
{
z_stream *z = &k->z; /* zlib state structure */
@@ -265,7 +265,7 @@ static enum {
CURLcode
Curl_unencode_gzip_write(struct connectdata *conn,
struct Curl_transfer_keeper *k,
struct SingleRequest *k,
ssize_t nread)
{
z_stream *z = &k->z; /* zlib state structure */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2006, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -32,10 +32,10 @@
#endif
CURLcode Curl_unencode_deflate_write(struct connectdata *conn,
struct Curl_transfer_keeper *k,
struct SingleRequest *req,
ssize_t nread);
CURLcode
Curl_unencode_gzip_write(struct connectdata *conn,
struct Curl_transfer_keeper *k,
struct SingleRequest *k,
ssize_t nread);

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -812,7 +812,7 @@ struct Cookie *Curl_cookie_getlist(struct CookieInfo *c,
void Curl_cookie_clearall(struct CookieInfo *cookies)
{
if(cookies) {
Curl_cookie_freelist(cookies->cookies);
Curl_cookie_freelist(cookies->cookies, TRUE);
cookies->cookies = NULL;
cookies->numcookies = 0;
}
@@ -824,16 +824,22 @@ void Curl_cookie_clearall(struct CookieInfo *cookies)
*
* Free a list of cookies previously returned by Curl_cookie_getlist();
*
* The 'cookiestoo' argument tells this function whether to just free the
* list or actually also free all cookies within the list as well.
*
****************************************************************************/
void Curl_cookie_freelist(struct Cookie *co)
void Curl_cookie_freelist(struct Cookie *co, bool cookiestoo)
{
struct Cookie *next;
if(co) {
while(co) {
next = co->next;
free(co); /* we only free the struct since the "members" are all
just copied! */
if(cookiestoo)
freecookie(co);
else
free(co); /* we only free the struct since the "members" are all just
pointed out in the main cookie list! */
co = next;
}
}
@@ -867,7 +873,7 @@ void Curl_cookie_clearsess(struct CookieInfo *cookies)
else
prev->next = next;
free(curr);
freecookie(curr);
cookies->numcookies--;
}
else

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2006, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -91,7 +91,7 @@ struct CookieInfo *Curl_cookie_init(struct SessionHandle *data,
const char *, struct CookieInfo *, bool);
struct Cookie *Curl_cookie_getlist(struct CookieInfo *, const char *,
const char *, bool);
void Curl_cookie_freelist(struct Cookie *);
void Curl_cookie_freelist(struct Cookie *cookies, bool cookiestoo);
void Curl_cookie_clearall(struct CookieInfo *cookies);
void Curl_cookie_clearsess(struct CookieInfo *cookies);
void Curl_cookie_cleanup(struct CookieInfo *);

View File

@@ -87,7 +87,7 @@
* Forward declarations.
*/
static CURLcode Curl_dict(struct connectdata *conn, bool *done);
static CURLcode dict_do(struct connectdata *conn, bool *done);
/*
* DICT protocol handler.
@@ -96,7 +96,7 @@ static CURLcode Curl_dict(struct connectdata *conn, bool *done);
const struct Curl_handler Curl_handler_dict = {
"DICT", /* scheme */
ZERO_NULL, /* setup_connection */
Curl_dict, /* do_it */
dict_do, /* do_it */
ZERO_NULL, /* done */
ZERO_NULL, /* do_more */
ZERO_NULL, /* connect_it */
@@ -142,7 +142,7 @@ static char *unescape_word(struct SessionHandle *data, const char *inp)
return dictp;
}
static CURLcode Curl_dict(struct connectdata *conn, bool *done)
static CURLcode dict_do(struct connectdata *conn, bool *done)
{
char *word;
char *eword;
@@ -155,8 +155,8 @@ static CURLcode Curl_dict(struct connectdata *conn, bool *done)
struct SessionHandle *data=conn->data;
curl_socket_t sockfd = conn->sock[FIRSTSOCKET];
char *path = data->reqdata.path;
curl_off_t *bytecount = &data->reqdata.keep.bytecount;
char *path = data->state.path;
curl_off_t *bytecount = &data->req.bytecount;
*done = TRUE; /* unconditionally */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -466,13 +466,23 @@ CURLcode curl_easy_perform(CURL *curl)
return CURLE_BAD_FUNCTION_ARGUMENT;
if( ! (data->share && data->share->hostcache) ) {
/* this handle is not using a shared dns cache */
if(data->set.global_dns_cache &&
(data->dns.hostcachetype != HCACHE_GLOBAL)) {
/* global dns cache was requested but still isn't */
struct curl_hash *ptr;
if(Curl_global_host_cache_use(data) &&
(data->dns.hostcachetype != HCACHE_GLOBAL)) {
if(data->dns.hostcachetype == HCACHE_PRIVATE)
/* if the current cache is private, kill it first */
Curl_hash_destroy(data->dns.hostcache);
data->dns.hostcache = Curl_global_host_cache_get();
data->dns.hostcachetype = HCACHE_GLOBAL;
ptr = Curl_global_host_cache_init();
if(ptr) {
/* only do this if the global cache init works */
data->dns.hostcache = ptr;
data->dns.hostcachetype = HCACHE_GLOBAL;
}
}
if(!data->dns.hostcache) {
@@ -528,9 +538,9 @@ void Curl_easy_addmulti(struct SessionHandle *data,
void Curl_easy_initHandleData(struct SessionHandle *data)
{
memset(&data->reqdata, 0, sizeof(struct HandleData));
memset(&data->req, 0, sizeof(struct SingleRequest));
data->reqdata.maxdownload = -1;
data->req.maxdownload = -1;
}
/*
@@ -676,11 +686,11 @@ void curl_easy_reset(CURL *curl)
{
struct SessionHandle *data = (struct SessionHandle *)curl;
Curl_safefree(data->reqdata.pathbuffer);
data->reqdata.pathbuffer=NULL;
Curl_safefree(data->state.pathbuffer);
data->state.pathbuffer=NULL;
Curl_safefree(data->reqdata.proto.generic);
data->reqdata.proto.generic=NULL;
Curl_safefree(data->state.proto.generic);
data->state.proto.generic=NULL;
/* zero out UserDefined data: */
Curl_freeset(data);
@@ -744,6 +754,107 @@ void curl_easy_reset(CURL *curl)
data->set.new_directory_perms = 0755; /* Default permissions */
}
/*
* curl_easy_pause() allows an application to pause or unpause a specific
* transfer and direction. This function sets the full new state for the
* current connection this easy handle operates on.
*
* NOTE: if you have the receiving paused and you call this function to remove
* the pausing, you may get your write callback called at this point.
*
* Action is a bitmask consisting of CURLPAUSE_* bits in curl/curl.h
*/
CURLcode curl_easy_pause(CURL *curl, int action)
{
struct SessionHandle *data = (struct SessionHandle *)curl;
struct SingleRequest *k = &data->req;
CURLcode result = CURLE_OK;
/* first switch off both pause bits */
int newstate = k->keepon &~ (KEEP_READ_PAUSE| KEEP_WRITE_PAUSE);
/* set the new desired pause bits */
newstate |= ((action & CURLPAUSE_RECV)?KEEP_READ_PAUSE:0) |
((action & CURLPAUSE_SEND)?KEEP_WRITE_PAUSE:0);
/* put it back in the keepon */
k->keepon = newstate;
if(!(newstate & KEEP_READ_PAUSE) && data->state.tempwrite) {
/* we have a buffer for writing that we now seem to be able to deliver since
the receive pausing is lifted! */
/* get the pointer, type and length in local copies since the function may
return PAUSE again and then we'll get a new copy allocated and stored in
the tempwrite variables */
char *tempwrite = data->state.tempwrite;
size_t tempsize = data->state.tempwritesize;
int temptype = data->state.tempwritetype;
size_t chunklen;
/* clear tempwrite here just to make sure it gets cleared if there's no
further use of it, and make sure we don't clear it after the function
invoke as it may have been set to a new value by then */
data->state.tempwrite = NULL;
/* since the write callback API is defined to never exceed
CURL_MAX_WRITE_SIZE bytes in a single call, and since we may in fact
have more data than that in our buffer here, we must loop sending the
data in multiple calls until there's no data left or we get another
pause returned.
A tricky part is that the function we call will "buffer" the data
itself when it pauses on a particular buffer, so we may need to do some
extra trickery if we get a pause return here.
*/
do {
chunklen = (tempsize > CURL_MAX_WRITE_SIZE)?CURL_MAX_WRITE_SIZE:tempsize;
result = Curl_client_write(data->state.current_conn,
temptype, tempwrite, chunklen);
if(result)
/* failures abort the loop at once */
break;
if(data->state.tempwrite && (tempsize - chunklen)) {
/* Ouch, the reading is paused again and the block we just sent is now
"cached". If this is the final chunk we can leave it like this, but
if there are more chunks to cache after this one, we need to free
the newly cached copy and put back a version that truly holds the
entire remaining contents saved for later
*/
char *newptr;
free(data->state.tempwrite); /* free the one just cached as it isn't
enough */
/* note that tempsize is still the size as before the callback was
used, and thus the whole piece of data to keep */
newptr = malloc(tempsize);
if(!newptr) {
result = CURLE_OUT_OF_MEMORY;
/* tempwrite will be freed further down */
break;
}
data->state.tempwrite = newptr; /* store new pointer */
memcpy(newptr, tempwrite, tempsize);
data->state.tempwritesize = tempsize; /* store new size */
/* tempwrite will be freed further down */
break; /* go back to pausing until further notice */
}
else {
tempsize -= chunklen; /* left after the call above */
tempwrite += chunklen; /* advance the pointer */
}
} while((result == CURLE_OK) && tempsize);
free(tempwrite); /* this is unconditionally no longer used */
}
return result;
}
#ifdef CURL_DOES_CONVERSIONS
/*
* Curl_convert_to_network() is an internal function

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -70,6 +70,7 @@
#endif
#include "strtoofft.h"
#include "urldata.h"
#include <curl/curl.h>
#include "progress.h"
@@ -94,10 +95,10 @@
* Forward declarations.
*/
static CURLcode Curl_file(struct connectdata *, bool *done);
static CURLcode Curl_file_done(struct connectdata *conn,
CURLcode status, bool premature);
static CURLcode Curl_file_connect(struct connectdata *conn, bool *done);
static CURLcode file_do(struct connectdata *, bool *done);
static CURLcode file_done(struct connectdata *conn,
CURLcode status, bool premature);
static CURLcode file_connect(struct connectdata *conn, bool *done);
/*
* FILE scheme handler.
@@ -106,10 +107,10 @@ static CURLcode Curl_file_connect(struct connectdata *conn, bool *done);
const struct Curl_handler Curl_handler_file = {
"FILE", /* scheme */
ZERO_NULL, /* setup_connection */
Curl_file, /* do_it */
Curl_file_done, /* done */
file_do, /* do_it */
file_done, /* done */
ZERO_NULL, /* do_more */
Curl_file_connect, /* connect_it */
file_connect, /* connect_it */
ZERO_NULL, /* connecting */
ZERO_NULL, /* doing */
ZERO_NULL, /* proto_getsock */
@@ -119,15 +120,70 @@ const struct Curl_handler Curl_handler_file = {
PROT_FILE /* protocol */
};
/*
Check if this is a range download, and if so, set the internal variables
properly. This code is copied from the FTP implementation and might as
well be factored out.
*/
static CURLcode file_range(struct connectdata *conn)
{
curl_off_t from, to;
curl_off_t totalsize=-1;
char *ptr;
char *ptr2;
struct SessionHandle *data = conn->data;
if(data->state.use_range && data->state.range) {
from=curlx_strtoofft(data->state.range, &ptr, 0);
while(ptr && *ptr && (isspace((int)*ptr) || (*ptr=='-')))
ptr++;
to=curlx_strtoofft(ptr, &ptr2, 0);
if(ptr == ptr2) {
/* we didn't get any digit */
to=-1;
}
if((-1 == to) && (from>=0)) {
/* X - */
data->state.resume_from = from;
DEBUGF(infof(data, "RANGE %" FORMAT_OFF_T " to end of file\n",
from));
}
else if(from < 0) {
/* -Y */
totalsize = -from;
data->req.maxdownload = -from;
data->state.resume_from = from;
DEBUGF(infof(data, "RANGE the last %" FORMAT_OFF_T " bytes\n",
totalsize));
}
else {
/* X-Y */
totalsize = to-from;
data->req.maxdownload = totalsize+1; /* include last byte */
data->state.resume_from = from;
DEBUGF(infof(data, "RANGE from %" FORMAT_OFF_T
" getting %" FORMAT_OFF_T " bytes\n",
from, data->req.maxdownload));
}
DEBUGF(infof(data, "range-download from %" FORMAT_OFF_T
" to %" FORMAT_OFF_T ", totally %" FORMAT_OFF_T " bytes\n",
from, to, data->req.maxdownload));
}
else
data->req.maxdownload = -1;
return CURLE_OK;
}
/*
* Curl_file_connect() gets called from Curl_protocol_connect() to allow us to
* file_connect() gets called from Curl_protocol_connect() to allow us to
* do protocol-specific actions at connect-time. We emulate a
* connect-then-transfer protocol and "connect" to the file here
*/
static CURLcode Curl_file_connect(struct connectdata *conn, bool *done)
static CURLcode file_connect(struct connectdata *conn, bool *done)
{
struct SessionHandle *data = conn->data;
char *real_path = curl_easy_unescape(data, data->reqdata.path, 0, NULL);
char *real_path = curl_easy_unescape(data, data->state.path, 0, NULL);
struct FILEPROTO *file;
int fd;
#if defined(WIN32) || defined(MSDOS) || defined(__EMX__)
@@ -142,17 +198,17 @@ static CURLcode Curl_file_connect(struct connectdata *conn, bool *done)
sessionhandle, deal with it */
Curl_reset_reqproto(conn);
if(!data->reqdata.proto.file) {
if(!data->state.proto.file) {
file = (struct FILEPROTO *)calloc(sizeof(struct FILEPROTO), 1);
if(!file) {
free(real_path);
return CURLE_OUT_OF_MEMORY;
}
data->reqdata.proto.file = file;
data->state.proto.file = file;
}
else {
/* file is not a protocol that can deal with "persistence" */
file = data->reqdata.proto.file;
file = data->state.proto.file;
Curl_safefree(file->freepath);
if(file->fd != -1)
close(file->fd);
@@ -200,8 +256,8 @@ static CURLcode Curl_file_connect(struct connectdata *conn, bool *done)
file->fd = fd;
if(!data->set.upload && (fd == -1)) {
failf(data, "Couldn't open file %s", data->reqdata.path);
Curl_file_done(conn, CURLE_FILE_COULDNT_READ_FILE, FALSE);
failf(data, "Couldn't open file %s", data->state.path);
file_done(conn, CURLE_FILE_COULDNT_READ_FILE, FALSE);
return CURLE_FILE_COULDNT_READ_FILE;
}
*done = TRUE;
@@ -209,10 +265,10 @@ static CURLcode Curl_file_connect(struct connectdata *conn, bool *done)
return CURLE_OK;
}
static CURLcode Curl_file_done(struct connectdata *conn,
static CURLcode file_done(struct connectdata *conn,
CURLcode status, bool premature)
{
struct FILEPROTO *file = conn->data->reqdata.proto.file;
struct FILEPROTO *file = conn->data->state.proto.file;
(void)status; /* not used */
(void)premature; /* not used */
Curl_safefree(file->freepath);
@@ -231,7 +287,7 @@ static CURLcode Curl_file_done(struct connectdata *conn,
static CURLcode file_upload(struct connectdata *conn)
{
struct FILEPROTO *file = conn->data->reqdata.proto.file;
struct FILEPROTO *file = conn->data->state.proto.file;
const char *dir = strchr(file->path, DIRSEP);
FILE *fp;
CURLcode res=CURLE_OK;
@@ -250,7 +306,7 @@ static CURLcode file_upload(struct connectdata *conn)
*/
conn->fread_func = data->set.fread_func;
conn->fread_in = data->set.in;
conn->data->reqdata.upload_fromhere = buf;
conn->data->req.upload_fromhere = buf;
if(!dir)
return CURLE_FILE_COULDNT_READ_FILE; /* fix: better error code */
@@ -258,7 +314,7 @@ static CURLcode file_upload(struct connectdata *conn)
if(!dir[1])
return CURLE_FILE_COULDNT_READ_FILE; /* fix: better error code */
if(data->reqdata.resume_from)
if(data->state.resume_from)
fp = fopen( file->path, "ab" );
else {
int fd;
@@ -287,14 +343,14 @@ static CURLcode file_upload(struct connectdata *conn)
Curl_pgrsSetUploadSize(data, data->set.infilesize);
/* treat the negative resume offset value as the case of "-" */
if(data->reqdata.resume_from < 0){
if(stat(file->path, &file_stat)){
if(data->state.resume_from < 0) {
if(stat(file->path, &file_stat)) {
fclose(fp);
failf(data, "Can't get the size of %s", file->path);
return CURLE_WRITE_ERROR;
}
else
data->reqdata.resume_from = (curl_off_t)file_stat.st_size;
data->state.resume_from = (curl_off_t)file_stat.st_size;
}
while(res == CURLE_OK) {
@@ -309,16 +365,16 @@ static CURLcode file_upload(struct connectdata *conn)
nread = (size_t)readcount;
/*skip bytes before resume point*/
if(data->reqdata.resume_from) {
if( (curl_off_t)nread <= data->reqdata.resume_from ) {
data->reqdata.resume_from -= nread;
if(data->state.resume_from) {
if( (curl_off_t)nread <= data->state.resume_from ) {
data->state.resume_from -= nread;
nread = 0;
buf2 = buf;
}
else {
buf2 = buf + data->reqdata.resume_from;
nread -= data->reqdata.resume_from;
data->reqdata.resume_from = 0;
buf2 = buf + data->state.resume_from;
nread -= (size_t)data->state.resume_from;
data->state.resume_from = 0;
}
}
else
@@ -349,14 +405,14 @@ static CURLcode file_upload(struct connectdata *conn)
}
/*
* Curl_file() is the protocol-specific function for the do-phase, separated
* file_do() is the protocol-specific function for the do-phase, separated
* from the connect-phase above. Other protocols merely setup the transfer in
* the do-phase, to have it done in the main transfer loop but since some
* platforms we support don't allow select()ing etc on file handles (as
* opposed to sockets) we instead perform the whole do-operation in this
* function.
*/
static CURLcode Curl_file(struct connectdata *conn, bool *done)
static CURLcode file_do(struct connectdata *conn, bool *done)
{
/* This implementation ignores the host name in conformance with
RFC 1738. Only local files (reachable via the standard file system)
@@ -370,6 +426,7 @@ static CURLcode Curl_file(struct connectdata *conn, bool *done)
curl_off_t expected_size=0;
bool fstated=FALSE;
ssize_t nread;
size_t bytestoread;
struct SessionHandle *data = conn->data;
char *buf = data->state.buffer;
curl_off_t bytecount = 0;
@@ -385,7 +442,7 @@ static CURLcode Curl_file(struct connectdata *conn, bool *done)
return file_upload(conn);
/* get the fd from the connection phase */
fd = conn->data->reqdata.proto.file->fd;
fd = conn->data->state.proto.file->fd;
/* VMS: This only works reliable for STREAMLF files */
if( -1 != fstat(fd, &statbuf)) {
@@ -434,13 +491,31 @@ static CURLcode Curl_file(struct connectdata *conn, bool *done)
return result;
}
if(data->reqdata.resume_from <= expected_size)
expected_size -= data->reqdata.resume_from;
/* Check whether file range has been specified */
file_range(conn);
/* Adjust the start offset in case we want to get the N last bytes
* of the stream iff the filesize could be determined */
if(data->state.resume_from < 0) {
if(!fstated) {
failf(data, "Can't get the size of file.");
return CURLE_READ_ERROR;
}
else
data->state.resume_from += (curl_off_t)statbuf.st_size;
}
if(data->state.resume_from <= expected_size)
expected_size -= data->state.resume_from;
else {
failf(data, "failed to resume file:// transfer");
return CURLE_BAD_DOWNLOAD_RESUME;
}
/* A high water mark has been specified so we obey... */
if (data->req.maxdownload > 0)
expected_size = data->req.maxdownload;
if(fstated && (expected_size == 0))
return CURLE_OK;
@@ -451,24 +526,27 @@ static CURLcode Curl_file(struct connectdata *conn, bool *done)
if(fstated)
Curl_pgrsSetDownloadSize(data, expected_size);
if(data->reqdata.resume_from) {
if(data->reqdata.resume_from !=
lseek(fd, data->reqdata.resume_from, SEEK_SET))
if(data->state.resume_from) {
if(data->state.resume_from !=
lseek(fd, data->state.resume_from, SEEK_SET))
return CURLE_BAD_DOWNLOAD_RESUME;
}
Curl_pgrsTime(data, TIMER_STARTTRANSFER);
while(res == CURLE_OK) {
nread = read(fd, buf, BUFSIZE-1);
/* Don't fill a whole buffer if we want less than all data */
bytestoread = (expected_size < BUFSIZE-1)?(size_t)expected_size:BUFSIZE-1;
nread = read(fd, buf, bytestoread);
if( nread > 0)
buf[nread] = 0;
if(nread <= 0)
if (nread <= 0 || expected_size == 0)
break;
bytecount += nread;
expected_size -= nread;
res = Curl_client_write(conn, CLIENTWRITE_BODY, buf, nread);
if(res)

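For illustration of what the new file_range()/resume handling enables (not part of the diff; the path is a made-up example), an application can now request a byte range from a file:// URL:

#include <curl/curl.h>

/* Fetch bytes 100-199 of a local file; output goes to stdout by default. */
static CURLcode fetch_file_range(void)
{
  CURLcode rc = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "file:///tmp/example.dat");
    curl_easy_setopt(curl, CURLOPT_RANGE, "100-199"); /* the X-Y form above */
    rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return rc;
}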
347
lib/ftp.c
View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -134,21 +134,21 @@ static CURLcode ftp_nb_type(struct connectdata *conn,
bool ascii, ftpstate newstate);
static int ftp_need_type(struct connectdata *conn,
bool ascii);
static CURLcode Curl_ftp(struct connectdata *conn, bool *done);
static CURLcode Curl_ftp_done(struct connectdata *conn,
static CURLcode ftp_do(struct connectdata *conn, bool *done);
static CURLcode ftp_done(struct connectdata *conn,
CURLcode, bool premature);
static CURLcode Curl_ftp_connect(struct connectdata *conn, bool *done);
static CURLcode Curl_ftp_disconnect(struct connectdata *conn);
static CURLcode Curl_ftp_nextconnect(struct connectdata *conn);
static CURLcode Curl_ftp_multi_statemach(struct connectdata *conn, bool *done);
static int Curl_ftp_getsock(struct connectdata *conn,
static CURLcode ftp_connect(struct connectdata *conn, bool *done);
static CURLcode ftp_disconnect(struct connectdata *conn);
static CURLcode ftp_nextconnect(struct connectdata *conn);
static CURLcode ftp_multi_statemach(struct connectdata *conn, bool *done);
static int ftp_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks);
static CURLcode Curl_ftp_doing(struct connectdata *conn,
static CURLcode ftp_doing(struct connectdata *conn,
bool *dophase_done);
static CURLcode Curl_ftp_setup_connection(struct connectdata * conn);
static CURLcode ftp_setup_connection(struct connectdata * conn);
#ifdef USE_SSL
static CURLcode Curl_ftps_setup_connection(struct connectdata * conn);
static CURLcode ftps_setup_connection(struct connectdata * conn);
#endif
/* easy-to-use macro: */
@@ -164,16 +164,16 @@ static CURLcode Curl_ftps_setup_connection(struct connectdata * conn);
const struct Curl_handler Curl_handler_ftp = {
"FTP", /* scheme */
Curl_ftp_setup_connection, /* setup_connection */
Curl_ftp, /* do_it */
Curl_ftp_done, /* done */
Curl_ftp_nextconnect, /* do_more */
Curl_ftp_connect, /* connect_it */
Curl_ftp_multi_statemach, /* connecting */
Curl_ftp_doing, /* doing */
Curl_ftp_getsock, /* proto_getsock */
Curl_ftp_getsock, /* doing_getsock */
Curl_ftp_disconnect, /* disconnect */
ftp_setup_connection, /* setup_connection */
ftp_do, /* do_it */
ftp_done, /* done */
ftp_nextconnect, /* do_more */
ftp_connect, /* connect_it */
ftp_multi_statemach, /* connecting */
ftp_doing, /* doing */
ftp_getsock, /* proto_getsock */
ftp_getsock, /* doing_getsock */
ftp_disconnect, /* disconnect */
PORT_FTP, /* defport */
PROT_FTP /* protocol */
};
@@ -186,16 +186,16 @@ const struct Curl_handler Curl_handler_ftp = {
const struct Curl_handler Curl_handler_ftps = {
"FTPS", /* scheme */
Curl_ftps_setup_connection, /* setup_connection */
Curl_ftp, /* do_it */
Curl_ftp_done, /* done */
Curl_ftp_nextconnect, /* do_more */
Curl_ftp_connect, /* connect_it */
Curl_ftp_multi_statemach, /* connecting */
Curl_ftp_doing, /* doing */
Curl_ftp_getsock, /* proto_getsock */
Curl_ftp_getsock, /* doing_getsock */
Curl_ftp_disconnect, /* disconnect */
ftps_setup_connection, /* setup_connection */
ftp_do, /* do_it */
ftp_done, /* done */
ftp_nextconnect, /* do_more */
ftp_connect, /* connect_it */
ftp_multi_statemach, /* connecting */
ftp_doing, /* doing */
ftp_getsock, /* proto_getsock */
ftp_getsock, /* doing_getsock */
ftp_disconnect, /* disconnect */
PORT_FTPS, /* defport */
PROT_FTP | PROT_FTPS | PROT_SSL /* protocol */
};
@@ -403,7 +403,7 @@ static CURLcode ftp_readresp(curl_socket_t sockfd,
int *ftpcode, /* return the ftp-code if done */
size_t *size) /* size of the response */
{
int perline; /* count bytes per line */
ssize_t perline; /* count bytes per line */
bool keepon=TRUE;
ssize_t gotbytes;
char *ptr;
@@ -418,8 +418,9 @@ static CURLcode ftp_readresp(curl_socket_t sockfd,
ptr=buf + ftpc->nread_resp;
perline= (int)(ptr-ftpc->linestart_resp); /* number of bytes in the current
line, so far */
/* number of bytes in the current line, so far */
perline = (ssize_t)(ptr-ftpc->linestart_resp);
keepon=TRUE;
while((ftpc->nread_resp<BUFSIZE) && (keepon && !result)) {
@@ -479,10 +480,10 @@ static CURLcode ftp_readresp(curl_socket_t sockfd,
* byte to a set of lines and possible just a piece of the last
* line */
ssize_t i;
int clipamount = 0;
ssize_t clipamount = 0;
bool restart = FALSE;
data->reqdata.keep.headerbytecount += gotbytes;
data->req.headerbytecount += gotbytes;
ftpc->nread_resp += gotbytes;
for(i = 0; i < gotbytes; ptr++, i++) {
@@ -788,7 +789,7 @@ static void state(struct connectdata *conn,
static CURLcode ftp_state_user(struct connectdata *conn)
{
CURLcode result;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
/* send USER */
NBFTPSENDF(conn, "USER %s", ftp->user?ftp->user:"");
@@ -810,7 +811,7 @@ static CURLcode ftp_state_pwd(struct connectdata *conn)
}
/* For the FTP "protocol connect" and "doing" phases only */
static int Curl_ftp_getsock(struct connectdata *conn,
static int ftp_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks)
{
@@ -933,7 +934,7 @@ static CURLcode ftp_state_use_port(struct connectdata *conn,
rc = getnameinfo((struct sockaddr *)&ss, sslen, hbuf, sizeof(hbuf), NULL,
0, NIFLAGS);
if(rc) {
failf(data, "getnameinfo() returned %d \n", rc);
failf(data, "getnameinfo() returned %d", rc);
return CURLE_FTP_PORT_FAILED;
}
host = hbuf; /* use this host name */
@@ -1313,7 +1314,7 @@ static CURLcode ftp_state_use_pasv(struct connectdata *conn)
static CURLcode ftp_state_post_rest(struct connectdata *conn)
{
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
struct SessionHandle *data = conn->data;
if(ftp->transfer != FTPTRANSFER_BODY) {
@@ -1337,7 +1338,7 @@ static CURLcode ftp_state_post_rest(struct connectdata *conn)
static CURLcode ftp_state_post_size(struct connectdata *conn)
{
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
if((ftp->transfer != FTPTRANSFER_BODY) && ftpc->file) {
@@ -1358,7 +1359,7 @@ static CURLcode ftp_state_post_size(struct connectdata *conn)
static CURLcode ftp_state_post_type(struct connectdata *conn)
{
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
if((ftp->transfer == FTPTRANSFER_INFO) && ftpc->file) {
@@ -1398,11 +1399,11 @@ static CURLcode ftp_state_post_listtype(struct connectdata *conn)
lstArg = NULL;
if((data->set.ftp_filemethod == FTPFILE_NOCWD) &&
data->reqdata.path &&
data->reqdata.path[0] &&
strchr(data->reqdata.path,'/')) {
data->state.path &&
data->state.path[0] &&
strchr(data->state.path,'/')) {
lstArg = strdup(data->reqdata.path);
lstArg = strdup(data->state.path);
if(!lstArg)
return CURLE_OUT_OF_MEMORY;
@@ -1465,7 +1466,7 @@ static CURLcode ftp_state_post_stortype(struct connectdata *conn)
static CURLcode ftp_state_post_mdtm(struct connectdata *conn)
{
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
struct SessionHandle *data = conn->data;
struct ftp_conn *ftpc = &conn->proto.ftpc;
@@ -1522,13 +1523,12 @@ static CURLcode ftp_state_ul_setup(struct connectdata *conn,
bool sizechecked)
{
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
struct SessionHandle *data = conn->data;
struct ftp_conn *ftpc = &conn->proto.ftpc;
curl_off_t passed=0;
if((data->reqdata.resume_from && !sizechecked) ||
((data->reqdata.resume_from > 0) && sizechecked)) {
if((data->state.resume_from && !sizechecked) ||
((data->state.resume_from > 0) && sizechecked)) {
/* we're about to continue the uploading of a file */
/* 1. get already existing file's size. We use the SIZE command for this
which may not exist in the server! The SIZE command is not in
@@ -1542,7 +1542,7 @@ static CURLcode ftp_state_ul_setup(struct connectdata *conn,
/* 4. lower the infilesize counter */
/* => transfer as usual */
if(data->reqdata.resume_from < 0 ) {
if(data->state.resume_from < 0 ) {
/* Got no given size to start from, figure it out */
NBFTPSENDF(conn, "SIZE %s", ftpc->file);
state(conn, FTP_STOR_SIZE);
@@ -1552,34 +1552,43 @@ static CURLcode ftp_state_ul_setup(struct connectdata *conn,
/* enable append */
data->set.ftp_append = TRUE;
/* Let's read off the proper amount of bytes from the input. If we knew it
was a proper file we could've just fseek()ed but we only have a stream
here */
/* Let's read off the proper amount of bytes from the input. */
if(conn->seek_func) {
curl_off_t readthisamountnow = data->state.resume_from;
/* TODO: allow the ioctlfunction to provide a fast forward function that
can be used here and use this method only as a fallback! */
do {
curl_off_t readthisamountnow = (data->reqdata.resume_from - passed);
curl_off_t actuallyread;
if(readthisamountnow > BUFSIZE)
readthisamountnow = BUFSIZE;
actuallyread = (curl_off_t)
conn->fread_func(data->state.buffer, 1, (size_t)readthisamountnow,
conn->fread_in);
passed += actuallyread;
if(actuallyread != readthisamountnow) {
failf(data, "Could only read %" FORMAT_OFF_T
" bytes from the input", passed);
if(conn->seek_func(conn->seek_client,
readthisamountnow, SEEK_SET) != 0) {
failf(data, "Could not seek stream");
return CURLE_FTP_COULDNT_USE_REST;
}
} while(passed != data->reqdata.resume_from);
}
else {
curl_off_t passed=0;
do {
curl_off_t readthisamountnow = (data->state.resume_from - passed);
curl_off_t actuallyread;
if(readthisamountnow > BUFSIZE)
readthisamountnow = BUFSIZE;
actuallyread = (curl_off_t)
conn->fread_func(data->state.buffer, 1, (size_t)readthisamountnow,
conn->fread_in);
passed += actuallyread;
if((actuallyread <= 0) || (actuallyread > readthisamountnow)) {
/* this checks for greater-than only to make sure that the
CURL_READFUNC_ABORT return code still aborts */
failf(data, "Failed to read data");
return CURLE_FTP_COULDNT_USE_REST;
}
} while(passed < data->state.resume_from);
}
/* now, decrease the size of the read */
if(data->set.infilesize>0) {
data->set.infilesize -= data->reqdata.resume_from;
data->set.infilesize -= data->state.resume_from;
if(data->set.infilesize <= 0) {
infof(data, "File already completely uploaded\n");
@@ -1588,7 +1597,7 @@ static CURLcode ftp_state_ul_setup(struct connectdata *conn,
result=Curl_setup_transfer(conn, -1, -1, FALSE, NULL, -1, NULL);
/* Set ->transfer so that we won't get any error in
* Curl_ftp_done() because we didn't transfer anything! */
* ftp_done() because we didn't transfer anything! */
ftp->transfer = FTPTRANSFER_NONE;
state(conn, FTP_STOP);
@@ -1612,7 +1621,7 @@ static CURLcode ftp_state_quote(struct connectdata *conn,
{
CURLcode result = CURLE_OK;
struct SessionHandle *data = conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
bool quote=FALSE;
struct curl_slist *item;
@@ -1725,8 +1734,10 @@ static CURLcode ftp_state_pasv_resp(struct connectdata *conn,
newport = (unsigned short)(num & 0xffff);
if(conn->bits.tunnel_proxy ||
data->set.proxytype == CURLPROXY_SOCKS5 ||
data->set.proxytype == CURLPROXY_SOCKS4)
data->set.proxytype == CURLPROXY_SOCKS5 ||
data->set.proxytype == CURLPROXY_SOCKS5_HOSTNAME ||
data->set.proxytype == CURLPROXY_SOCKS4 ||
data->set.proxytype == CURLPROXY_SOCKS4A)
/* proxy tunnel -> use other host info because ip_addr_str is the
proxy address not the ftp host */
snprintf(newhost, sizeof(newhost), "%s", conn->host.name);
@@ -1780,7 +1791,9 @@ static CURLcode ftp_state_pasv_resp(struct connectdata *conn,
conn->ip_addr_str);
if(conn->bits.tunnel_proxy ||
data->set.proxytype == CURLPROXY_SOCKS5 ||
data->set.proxytype == CURLPROXY_SOCKS4)
data->set.proxytype == CURLPROXY_SOCKS5_HOSTNAME ||
data->set.proxytype == CURLPROXY_SOCKS4 ||
data->set.proxytype == CURLPROXY_SOCKS4A)
/* proxy tunnel -> use other host info because ip_addr_str is the
proxy address not the ftp host */
snprintf(newhost, sizeof(newhost), "%s", conn->host.name);
@@ -1877,6 +1890,7 @@ static CURLcode ftp_state_pasv_resp(struct connectdata *conn,
switch(data->set.proxytype) {
case CURLPROXY_SOCKS5:
case CURLPROXY_SOCKS5_HOSTNAME:
result = Curl_SOCKS5(conn->proxyuser, conn->proxypasswd, newhost, newport,
SECONDARYSOCKET, conn);
break;
@@ -1885,7 +1899,11 @@ static CURLcode ftp_state_pasv_resp(struct connectdata *conn,
break;
case CURLPROXY_SOCKS4:
result = Curl_SOCKS4(conn->proxyuser, newhost, newport,
SECONDARYSOCKET, conn);
SECONDARYSOCKET, conn, FALSE);
break;
case CURLPROXY_SOCKS4A:
result = Curl_SOCKS4(conn->proxyuser, newhost, newport,
SECONDARYSOCKET, conn, TRUE);
break;
default:
failf(data, "unknown proxytype option given");
@@ -1907,13 +1925,13 @@ static CURLcode ftp_state_pasv_resp(struct connectdata *conn,
* FTP pointer
*/
struct HTTP http_proxy;
struct FTP *ftp_save = data->reqdata.proto.ftp;
struct FTP *ftp_save = data->state.proto.ftp;
memset(&http_proxy, 0, sizeof(http_proxy));
data->reqdata.proto.http = &http_proxy;
data->state.proto.http = &http_proxy;
result = Curl_proxyCONNECT(conn, SECONDARYSOCKET, newhost, newport);
data->reqdata.proto.ftp = ftp_save;
data->state.proto.ftp = ftp_save;
if(CURLE_OK != result)
return result;
@@ -1963,7 +1981,7 @@ static CURLcode ftp_state_mdtm_resp(struct connectdata *conn,
{
CURLcode result = CURLE_OK;
struct SessionHandle *data=conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
switch(ftpcode) {
@@ -2095,7 +2113,7 @@ static CURLcode ftp_state_post_retr_size(struct connectdata *conn,
{
CURLcode result = CURLE_OK;
struct SessionHandle *data=conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
if(data->set.max_filesize && (filesize > data->set.max_filesize)) {
@@ -2104,7 +2122,7 @@ static CURLcode ftp_state_post_retr_size(struct connectdata *conn,
}
ftp->downloadsize = filesize;
if(data->reqdata.resume_from) {
if(data->state.resume_from) {
/* We always (attempt to) get the size of downloads, so it is done before
this even when not doing resumes. */
if(filesize == -1) {
@@ -2117,28 +2135,28 @@ static CURLcode ftp_state_post_retr_size(struct connectdata *conn,
else {
/* We got a file size report, so we check that there actually is a
part of the file left to get, or else we go home. */
if(data->reqdata.resume_from< 0) {
if(data->state.resume_from< 0) {
/* We're supposed to download the last abs(from) bytes */
if(filesize < -data->reqdata.resume_from) {
if(filesize < -data->state.resume_from) {
failf(data, "Offset (%" FORMAT_OFF_T
") was beyond file size (%" FORMAT_OFF_T ")",
data->reqdata.resume_from, filesize);
data->state.resume_from, filesize);
return CURLE_BAD_DOWNLOAD_RESUME;
}
/* convert to size to download */
ftp->downloadsize = -data->reqdata.resume_from;
ftp->downloadsize = -data->state.resume_from;
/* download from where? */
data->reqdata.resume_from = filesize - ftp->downloadsize;
data->state.resume_from = filesize - ftp->downloadsize;
}
else {
if(filesize < data->reqdata.resume_from) {
if(filesize < data->state.resume_from) {
failf(data, "Offset (%" FORMAT_OFF_T
") was beyond file size (%" FORMAT_OFF_T ")",
data->reqdata.resume_from, filesize);
data->state.resume_from, filesize);
return CURLE_BAD_DOWNLOAD_RESUME;
}
/* Now store the number of bytes we are expected to download */
ftp->downloadsize = filesize-data->reqdata.resume_from;
ftp->downloadsize = filesize-data->state.resume_from;
}
}
@@ -2147,7 +2165,7 @@ static CURLcode ftp_state_post_retr_size(struct connectdata *conn,
result = Curl_setup_transfer(conn, -1, -1, FALSE, NULL, -1, NULL);
infof(data, "File already completely downloaded\n");
/* Set ->transfer so that we won't get any error in Curl_ftp_done()
/* Set ->transfer so that we won't get any error in ftp_done()
* because we didn't transfer any file */
ftp->transfer = FTPTRANSFER_NONE;
state(conn, FTP_STOP);
@@ -2156,9 +2174,9 @@ static CURLcode ftp_state_post_retr_size(struct connectdata *conn,
/* Set resume file transfer offset */
infof(data, "Instructs server to resume from offset %" FORMAT_OFF_T
"\n", data->reqdata.resume_from);
"\n", data->state.resume_from);
NBFTPSENDF(conn, "REST %" FORMAT_OFF_T, data->reqdata.resume_from);
NBFTPSENDF(conn, "REST %" FORMAT_OFF_T, data->state.resume_from);
state(conn, FTP_RETR_REST);
@@ -2202,7 +2220,7 @@ static CURLcode ftp_state_size_resp(struct connectdata *conn,
result = ftp_state_post_retr_size(conn, filesize);
}
else if(instate == FTP_STOR_SIZE) {
data->reqdata.resume_from = filesize;
data->state.resume_from = filesize;
result = ftp_state_ul_setup(conn, TRUE);
}
@@ -2250,7 +2268,7 @@ static CURLcode ftp_state_stor_resp(struct connectdata *conn,
{
CURLcode result = CURLE_OK;
struct SessionHandle *data = conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
if(ftpcode>=400) {
failf(data, "Failed FTP upload: %0d", ftpcode);
@@ -2297,7 +2315,7 @@ static CURLcode ftp_state_get_resp(struct connectdata *conn,
{
CURLcode result = CURLE_OK;
struct SessionHandle *data = conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
char *buf = data->state.buffer;
if((ftpcode == 150) || (ftpcode == 125)) {
@@ -2384,10 +2402,10 @@ static CURLcode ftp_state_get_resp(struct connectdata *conn,
return result;
}
if(size > data->reqdata.maxdownload && data->reqdata.maxdownload > 0)
size = data->reqdata.size = data->reqdata.maxdownload;
if(size > data->req.maxdownload && data->req.maxdownload > 0)
size = data->req.size = data->req.maxdownload;
infof(data, "Maxdownload = %" FORMAT_OFF_T "\n", data->reqdata.maxdownload);
infof(data, "Maxdownload = %" FORMAT_OFF_T "\n", data->req.maxdownload);
if(instate != FTP_LIST)
infof(data, "Getting file with size: %" FORMAT_OFF_T "\n", size);
@@ -2465,7 +2483,7 @@ static CURLcode ftp_state_user_resp(struct connectdata *conn,
{
CURLcode result = CURLE_OK;
struct SessionHandle *data = conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
(void)instate; /* no use for this yet */
@@ -2612,7 +2630,7 @@ static CURLcode ftp_statemach_act(struct connectdata *conn)
ftpc->count1 = 1;
break;
default:
failf(data, "unsupported parameter to CURLOPT_FTPSSLAUTH: %d\n",
failf(data, "unsupported parameter to CURLOPT_FTPSSLAUTH: %d",
data->set.ftpsslauth);
return CURLE_FAILED_INIT; /* we don't know what to do */
}
@@ -2929,7 +2947,7 @@ static long ftp_state_timeout(struct connectdata *conn)
/* called repeatedly until done from multi.c */
static CURLcode Curl_ftp_multi_statemach(struct connectdata *conn,
static CURLcode ftp_multi_statemach(struct connectdata *conn,
bool *done)
{
curl_socket_t sock = conn->sock[FIRSTSOCKET];
@@ -3009,17 +3027,17 @@ static CURLcode ftp_init(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
struct FTP *ftp;
if(data->reqdata.proto.ftp)
if(data->state.proto.ftp)
return CURLE_OK;
ftp = (struct FTP *)calloc(sizeof(struct FTP), 1);
if(!ftp)
return CURLE_OUT_OF_MEMORY;
data->reqdata.proto.ftp = ftp;
data->state.proto.ftp = ftp;
/* get some initial data into the ftp struct */
ftp->bytecountp = &data->reqdata.keep.bytecount;
ftp->bytecountp = &data->req.bytecount;
/* no need to duplicate them, this connectdata struct won't change */
ftp->user = conn->user;
@@ -3031,14 +3049,14 @@ static CURLcode ftp_init(struct connectdata *conn)
}
/*
* Curl_ftp_connect() should do everything that is to be considered a part of
* ftp_connect() should do everything that is to be considered a part of
* the connection phase.
*
* The variable 'done' points to will be TRUE if the protocol-layer connect
* phase is done when this function returns, or FALSE is not. When called as
* a part of the easy interface, it will always be TRUE.
*/
static CURLcode Curl_ftp_connect(struct connectdata *conn,
static CURLcode ftp_connect(struct connectdata *conn,
bool *done) /* see description above */
{
CURLcode result;
@@ -3076,14 +3094,14 @@ static CURLcode Curl_ftp_connect(struct connectdata *conn,
* Curl_proxyCONNECT we have to set back the member to the original struct
* FTP pointer
*/
ftp_save = data->reqdata.proto.ftp;
ftp_save = data->state.proto.ftp;
memset(&http_proxy, 0, sizeof(http_proxy));
data->reqdata.proto.http = &http_proxy;
data->state.proto.http = &http_proxy;
result = Curl_proxyCONNECT(conn, FIRSTSOCKET,
conn->host.name, conn->remote_port);
data->reqdata.proto.ftp = ftp_save;
data->state.proto.ftp = ftp_save;
if(CURLE_OK != result)
return result;
@@ -3106,7 +3124,7 @@ static CURLcode Curl_ftp_connect(struct connectdata *conn,
ftpc->response = Curl_tvnow(); /* start response time-out now! */
if(data->state.used_interface == Curl_if_multi)
result = Curl_ftp_multi_statemach(conn, done);
result = ftp_multi_statemach(conn, done);
else {
result = ftp_easy_statemach(conn);
if(!result)
@@ -3118,26 +3136,25 @@ static CURLcode Curl_ftp_connect(struct connectdata *conn,
/***********************************************************************
*
* Curl_ftp_done()
* ftp_done()
*
* The DONE function. This does what needs to be done after a single DO has
* performed.
*
* Input argument is already checked for validity.
*/
static CURLcode Curl_ftp_done(struct connectdata *conn, CURLcode status,
static CURLcode ftp_done(struct connectdata *conn, CURLcode status,
bool premature)
{
struct SessionHandle *data = conn->data;
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
ssize_t nread;
int ftpcode;
CURLcode result=CURLE_OK;
bool was_ctl_valid = ftpc->ctl_valid;
char *path;
char *path_to_use = data->reqdata.path;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
char *path_to_use = data->state.path;
if(!ftp)
/* When the easy handle is removed from the multi while libcurl is still
@@ -3281,22 +3298,24 @@ static CURLcode Curl_ftp_done(struct connectdata *conn, CURLcode status,
}
}
else {
if((-1 != k->size) && (k->size != *ftp->bytecountp) &&
if((-1 != data->req.size) &&
(data->req.size != *ftp->bytecountp) &&
#ifdef CURL_DO_LINEEND_CONV
/* Most FTP servers don't adjust their file SIZE response for CRLFs, so
* we'll check to see if the discrepancy can be explained by the number
* of CRLFs we've changed to LFs.
*/
((k->size + data->state.crlf_conversions) != *ftp->bytecountp) &&
((data->req.size + data->state.crlf_conversions) !=
*ftp->bytecountp) &&
#endif /* CURL_DO_LINEEND_CONV */
(k->maxdownload != *ftp->bytecountp)) {
(data->req.maxdownload != *ftp->bytecountp)) {
failf(data, "Received only partial file: %" FORMAT_OFF_T " bytes",
*ftp->bytecountp);
result = CURLE_PARTIAL_FILE;
}
else if(!ftpc->dont_check &&
!*ftp->bytecountp &&
(k->size>0)) {
(data->req.size>0)) {
failf(data, "No data was received!");
result = CURLE_FTP_COULDNT_RETR_FILE;
}
@@ -3426,8 +3445,8 @@ static CURLcode ftp_range(struct connectdata *conn)
struct SessionHandle *data = conn->data;
struct ftp_conn *ftpc = &conn->proto.ftpc;
if(data->reqdata.use_range && data->reqdata.range) {
from=curlx_strtoofft(data->reqdata.range, &ptr, 0);
if(data->state.use_range && data->state.range) {
from=curlx_strtoofft(data->state.range, &ptr, 0);
while(ptr && *ptr && (ISSPACE(*ptr) || (*ptr=='-')))
ptr++;
to=curlx_strtoofft(ptr, &ptr2, 0);
@@ -3437,53 +3456,53 @@ static CURLcode ftp_range(struct connectdata *conn)
}
if((-1 == to) && (from>=0)) {
/* X - */
data->reqdata.resume_from = from;
data->state.resume_from = from;
DEBUGF(infof(conn->data, "FTP RANGE %" FORMAT_OFF_T " to end of file\n",
from));
}
else if(from < 0) {
/* -Y */
totalsize = -from;
data->reqdata.maxdownload = -from;
data->reqdata.resume_from = from;
data->req.maxdownload = -from;
data->state.resume_from = from;
DEBUGF(infof(conn->data, "FTP RANGE the last %" FORMAT_OFF_T " bytes\n",
totalsize));
}
else {
/* X-Y */
totalsize = to-from;
data->reqdata.maxdownload = totalsize+1; /* include last byte */
data->reqdata.resume_from = from;
data->req.maxdownload = totalsize+1; /* include last byte */
data->state.resume_from = from;
DEBUGF(infof(conn->data, "FTP RANGE from %" FORMAT_OFF_T
" getting %" FORMAT_OFF_T " bytes\n",
from, data->reqdata.maxdownload));
from, data->req.maxdownload));
}
DEBUGF(infof(conn->data, "range-download from %" FORMAT_OFF_T
" to %" FORMAT_OFF_T ", totally %" FORMAT_OFF_T " bytes\n",
from, to, data->reqdata.maxdownload));
from, to, data->req.maxdownload));
ftpc->dont_check = TRUE; /* dont check for successful transfer */
}
else
data->reqdata.maxdownload = -1;
data->req.maxdownload = -1;
return CURLE_OK;
}
/*
* Curl_ftp_nextconnect()
* ftp_nextconnect()
*
* This function shall be called when the second FTP (data) connection is
* connected.
*/
static CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
static CURLcode ftp_nextconnect(struct connectdata *conn)
{
struct SessionHandle *data=conn->data;
struct ftp_conn *ftpc = &conn->proto.ftpc;
CURLcode result = CURLE_OK;
/* the ftp struct is inited in Curl_ftp_connect() */
struct FTP *ftp = data->reqdata.proto.ftp;
/* the ftp struct is inited in ftp_connect() */
struct FTP *ftp = data->state.proto.ftp;
DEBUGF(infof(data, "DO-MORE phase starts\n"));
@@ -3558,7 +3577,7 @@ CURLcode ftp_perform(struct connectdata *conn,
if(conn->bits.no_body) {
/* requested no body means no transfer... */
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
ftp->transfer = FTPTRANSFER_INFO;
}
@@ -3572,7 +3591,7 @@ CURLcode ftp_perform(struct connectdata *conn,
/* run the state-machine */
if(conn->data->state.used_interface == Curl_if_multi)
result = Curl_ftp_multi_statemach(conn, dophase_done);
result = ftp_multi_statemach(conn, dophase_done);
else {
result = ftp_easy_statemach(conn);
*dophase_done = TRUE; /* with the easy interface we are done here */
@@ -3587,14 +3606,14 @@ CURLcode ftp_perform(struct connectdata *conn,
/***********************************************************************
*
* Curl_ftp()
* ftp_do()
*
* This function is registered as 'curl_do' function. It decodes the path
* parts etc as a wrapper to the actual DO function (ftp_perform).
*
* The input argument is already checked for validity.
*/
static CURLcode Curl_ftp(struct connectdata *conn, bool *done)
static CURLcode ftp_do(struct connectdata *conn, bool *done)
{
CURLcode retcode = CURLE_OK;
@@ -3604,7 +3623,7 @@ static CURLcode Curl_ftp(struct connectdata *conn, bool *done)
Since connections can be re-used between SessionHandles, this might be a
connection already existing but on a fresh SessionHandle struct so we must
make sure we have a good 'struct FTP' to play with. For new connections,
the struct FTP is allocated and setup in the Curl_ftp_connect() function.
the struct FTP is allocated and setup in the ftp_connect() function.
*/
Curl_reset_reqproto(conn);
retcode = ftp_init(conn);
@@ -3787,12 +3806,12 @@ static CURLcode ftp_quit(struct connectdata *conn)
/***********************************************************************
*
* Curl_ftp_disconnect()
* ftp_disconnect()
*
* Disconnect from an FTP server. Cleanup protocol-specific per-connection
* resources. BLOCKING.
*/
static CURLcode Curl_ftp_disconnect(struct connectdata *conn)
static CURLcode ftp_disconnect(struct connectdata *conn)
{
struct ftp_conn *ftpc= &conn->proto.ftpc;
@@ -3840,11 +3859,11 @@ CURLcode ftp_parse_url_path(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
/* the ftp struct is already inited in ftp_connect() */
struct FTP *ftp = data->reqdata.proto.ftp;
struct FTP *ftp = data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
size_t dlen;
char *slash_pos; /* position of the first '/' char in curpos */
char *path_to_use = data->reqdata.path;
char *path_to_use = data->state.path;
char *cur_pos;
cur_pos = path_to_use; /* current position in path. point at the begin
@@ -3864,10 +3883,10 @@ CURLcode ftp_parse_url_path(struct connectdata *conn)
the first condition in the if() right here, is there just in case
someone decides to set path to NULL one day
*/
if(data->reqdata.path &&
data->reqdata.path[0] &&
(data->reqdata.path[strlen(data->reqdata.path) - 1] != '/') )
ftpc->file = data->reqdata.path; /* this is a full file path */
if(data->state.path &&
data->state.path[0] &&
(data->state.path[strlen(data->state.path) - 1] != '/') )
ftpc->file = data->state.path; /* this is a full file path */
else
ftpc->file = NULL;
/*
@@ -3924,7 +3943,7 @@ CURLcode ftp_parse_url_path(struct connectdata *conn)
/* parse the URL path into separate path components */
while((slash_pos = strchr(cur_pos, '/')) != NULL) {
/* 1 or 0 to indicate absolute directory */
bool absolute_dir = (bool)((cur_pos - data->reqdata.path > 0) &&
bool absolute_dir = (bool)((cur_pos - data->state.path > 0) &&
(ftpc->dirdepth == 0));
/* seek out the next path component */
@@ -3995,7 +4014,7 @@ CURLcode ftp_parse_url_path(struct connectdata *conn)
if(ftpc->prevpath) {
/* prevpath is "raw" so we convert the input path before we compare the
strings */
char *path = curl_easy_unescape(conn->data, data->reqdata.path, 0, NULL);
char *path = curl_easy_unescape(conn->data, data->state.path, 0, NULL);
if(!path) {
freedirs(ftpc);
return CURLE_OUT_OF_MEMORY;
@@ -4018,11 +4037,11 @@ static CURLcode ftp_dophase_done(struct connectdata *conn,
bool connected)
{
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->data->reqdata.proto.ftp;
struct FTP *ftp = conn->data->state.proto.ftp;
struct ftp_conn *ftpc = &conn->proto.ftpc;
if(connected)
result = Curl_ftp_nextconnect(conn);
result = ftp_nextconnect(conn);
if(result && (conn->sock[SECONDARYSOCKET] != CURL_SOCKET_BAD)) {
/* Failure detected, close the second socket if it was created already */
@@ -4044,11 +4063,11 @@ static CURLcode ftp_dophase_done(struct connectdata *conn,
}
/* called from multi.c while DOing */
static CURLcode Curl_ftp_doing(struct connectdata *conn,
static CURLcode ftp_doing(struct connectdata *conn,
bool *dophase_done)
{
CURLcode result;
result = Curl_ftp_multi_statemach(conn, dophase_done);
result = ftp_multi_statemach(conn, dophase_done);
if(*dophase_done) {
result = ftp_dophase_done(conn, FALSE /* not connected */);
@@ -4068,7 +4087,7 @@ static CURLcode Curl_ftp_doing(struct connectdata *conn,
* remote host.
*
* ftp->ctl_valid starts out as FALSE, and gets set to TRUE if we reach the
* Curl_ftp_done() function without finding any major problem.
* ftp_done() function without finding any major problem.
*/
static
CURLcode ftp_regular_transfer(struct connectdata *conn,
@@ -4078,7 +4097,7 @@ CURLcode ftp_regular_transfer(struct connectdata *conn,
bool connected=0;
struct SessionHandle *data = conn->data;
struct ftp_conn *ftpc = &conn->proto.ftpc;
data->reqdata.size = -1; /* make sure this is unknown at this point */
data->req.size = -1; /* make sure this is unknown at this point */
Curl_pgrsSetUploadCounter(data, 0);
Curl_pgrsSetDownloadCounter(data, 0);
@@ -4107,7 +4126,7 @@ CURLcode ftp_regular_transfer(struct connectdata *conn,
return result;
}
static CURLcode Curl_ftp_setup_connection(struct connectdata * conn)
static CURLcode ftp_setup_connection(struct connectdata * conn)
{
struct SessionHandle *data = conn->data;
char * type;
@@ -4134,11 +4153,11 @@ static CURLcode Curl_ftp_setup_connection(struct connectdata * conn)
#endif
}
data->reqdata.path++; /* don't include the initial slash */
data->state.path++; /* don't include the initial slash */
/* FTP URLs support an extension like ";type=<typecode>" that
* we'll try to get now! */
type = strstr(data->reqdata.path, ";type=");
type = strstr(data->state.path, ";type=");
if(!type)
type = strstr(conn->host.rawalloc, ";type=");
@@ -4168,12 +4187,12 @@ static CURLcode Curl_ftp_setup_connection(struct connectdata * conn)
}
#ifdef USE_SSL
static CURLcode Curl_ftps_setup_connection(struct connectdata * conn)
static CURLcode ftps_setup_connection(struct connectdata * conn)
{
struct SessionHandle *data = conn->data;
conn->ssl[SECONDARYSOCKET].use = data->set.ftp_ssl != CURLUSESSL_CONTROL;
return Curl_ftp_setup_connection(conn);
return ftp_setup_connection(conn);
}
#endif

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -127,22 +127,19 @@ static void freednsentry(void *freethis);
* Curl_global_host_cache_init() initializes and sets up a global DNS cache.
* Global DNS cache is general badness. Do not use. This will be removed in
* a future version. Use the share interface instead!
*
* Returns a struct curl_hash pointer on success, NULL on failure.
*/
void Curl_global_host_cache_init(void)
struct curl_hash *Curl_global_host_cache_init(void)
{
int rc = 0;
if(!host_cache_initialized) {
Curl_hash_init(&hostname_cache, 7, Curl_hash_str, Curl_str_key_compare,
freednsentry);
host_cache_initialized = 1;
rc = Curl_hash_init(&hostname_cache, 7, Curl_hash_str,
Curl_str_key_compare, freednsentry);
if(!rc)
host_cache_initialized = 1;
}
}
/*
* Return a pointer to the global cache
*/
struct curl_hash *Curl_global_host_cache_get(void)
{
return &hostname_cache;
return rc?NULL:&hostname_cache;
}
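A minimal sketch (stand-in types, not libcurl's) of the new contract above: the one-time initializer now returns a pointer on success and NULL when the underlying hash setup fails, so callers can detect the failure.

#include <stdio.h>

struct demo_hash { int inited; };

static struct demo_hash hostname_cache;
static int host_cache_initialized;

static int demo_hash_init(struct demo_hash *h)
{
  h->inited = 1;
  return 0;                            /* 0 means success, as assumed here */
}

static struct demo_hash *global_host_cache_init(void)
{
  int rc = 0;
  if(!host_cache_initialized) {
    rc = demo_hash_init(&hostname_cache);
    if(!rc)
      host_cache_initialized = 1;
  }
  return rc ? NULL : &hostname_cache;  /* NULL tells the caller init failed */
}

int main(void)
{
  if(!global_host_cache_init())
    fprintf(stderr, "DNS cache setup failed\n");
  else
    puts("DNS cache ready");
  return 0;
}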
/*

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -125,11 +125,15 @@ struct hostent;
struct SessionHandle;
struct connectdata;
void Curl_global_host_cache_init(void);
/*
* Curl_global_host_cache_init() initializes and sets up a global DNS cache.
* Global DNS cache is general badness. Do not use. This will be removed in
* a future version. Use the share interface instead!
*
* Returns a struct curl_hash pointer on success, NULL on failure.
*/
struct curl_hash *Curl_global_host_cache_init(void);
void Curl_global_host_cache_dtor(void);
struct curl_hash *Curl_global_host_cache_get(void);
#define Curl_global_host_cache_use(__p) ((__p)->set.global_dns_cache)
struct Curl_dns_entry {
Curl_addrinfo *addr;

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -109,9 +109,9 @@
* Forward declarations.
*/
static CURLcode Curl_https_connecting(struct connectdata *conn, bool *done);
static CURLcode https_connecting(struct connectdata *conn, bool *done);
#ifdef USE_SSL
static int Curl_https_getsock(struct connectdata *conn,
static int https_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks);
#endif
@@ -146,9 +146,9 @@ const struct Curl_handler Curl_handler_https = {
Curl_http_done, /* done */
ZERO_NULL, /* do_more */
Curl_http_connect, /* connect_it */
Curl_https_connecting, /* connecting */
https_connecting, /* connecting */
ZERO_NULL, /* doing */
Curl_https_getsock, /* proto_getsock */
https_getsock, /* proto_getsock */
ZERO_NULL, /* doing_getsock */
ZERO_NULL, /* disconnect */
PORT_HTTPS, /* defport */
@@ -176,12 +176,12 @@ static char *checkheaders(struct SessionHandle *data, const char *thisheader)
}
/*
* Curl_output_basic() sets up an Authorization: header (or the proxy version)
* http_output_basic() sets up an Authorization: header (or the proxy version)
* for HTTP Basic authentication.
*
* Returns CURLcode.
*/
static CURLcode Curl_output_basic(struct connectdata *conn, bool proxy)
static CURLcode http_output_basic(struct connectdata *conn, bool proxy)
{
char *authorization;
struct SessionHandle *data=conn->data;
@@ -275,8 +275,7 @@ static bool pickoneauth(struct auth *pick)
static CURLcode perhapsrewind(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
struct HTTP *http = data->reqdata.proto.http;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct HTTP *http = data->state.proto.http;
curl_off_t bytessent;
curl_off_t expectsend = -1; /* default is unknown */
@@ -338,7 +337,7 @@ static CURLcode perhapsrewind(struct connectdata *conn)
/* This is not NTLM or NTLM with many bytes left to send: close
*/
conn->bits.close = TRUE;
k->size = 0; /* don't download any more than 0 bytes */
data->req.size = 0; /* don't download any more than 0 bytes */
}
if(bytessent)
@@ -361,7 +360,7 @@ CURLcode Curl_http_auth_act(struct connectdata *conn)
bool pickproxy = FALSE;
CURLcode code = CURLE_OK;
if(100 == data->reqdata.keep.httpcode)
if(100 == data->req.httpcode)
/* this is a transient response code, ignore */
return CURLE_OK;
@@ -369,23 +368,23 @@ CURLcode Curl_http_auth_act(struct connectdata *conn)
return data->set.http_fail_on_error?CURLE_HTTP_RETURNED_ERROR:CURLE_OK;
if(conn->bits.user_passwd &&
((data->reqdata.keep.httpcode == 401) ||
(conn->bits.authneg && data->reqdata.keep.httpcode < 300))) {
((data->req.httpcode == 401) ||
(conn->bits.authneg && data->req.httpcode < 300))) {
pickhost = pickoneauth(&data->state.authhost);
if(!pickhost)
data->state.authproblem = TRUE;
}
if(conn->bits.proxy_user_passwd &&
((data->reqdata.keep.httpcode == 407) ||
(conn->bits.authneg && data->reqdata.keep.httpcode < 300))) {
((data->req.httpcode == 407) ||
(conn->bits.authneg && data->req.httpcode < 300))) {
pickproxy = pickoneauth(&data->state.authproxy);
if(!pickproxy)
data->state.authproblem = TRUE;
}
if(pickhost || pickproxy) {
data->reqdata.newurl = strdup(data->change.url); /* clone URL */
if(!data->reqdata.newurl)
data->req.newurl = strdup(data->change.url); /* clone URL */
if(!data->req.newurl)
return CURLE_OUT_OF_MEMORY;
if((data->set.httpreq != HTTPREQ_GET) &&
@@ -397,7 +396,7 @@ CURLcode Curl_http_auth_act(struct connectdata *conn)
}
}
else if((data->reqdata.keep.httpcode < 300) &&
else if((data->req.httpcode < 300) &&
(!data->state.authhost.done) &&
conn->bits.authneg) {
/* no (known) authentication available,
@@ -406,15 +405,15 @@ CURLcode Curl_http_auth_act(struct connectdata *conn)
we didn't try HEAD or GET */
if((data->set.httpreq != HTTPREQ_GET) &&
(data->set.httpreq != HTTPREQ_HEAD)) {
data->reqdata.newurl = strdup(data->change.url); /* clone URL */
if(!data->reqdata.newurl)
data->req.newurl = strdup(data->change.url); /* clone URL */
if(!data->req.newurl)
return CURLE_OUT_OF_MEMORY;
data->state.authhost.done = TRUE;
}
}
if(Curl_http_should_fail(conn)) {
failf (data, "The requested URL returned error: %d",
data->reqdata.keep.httpcode);
data->req.httpcode);
code = CURLE_HTTP_RETURNED_ERROR;
}
@@ -436,11 +435,11 @@ CURLcode Curl_http_auth_act(struct connectdata *conn)
* @returns CURLcode
*/
static CURLcode
Curl_http_output_auth(struct connectdata *conn,
const char *request,
const char *path,
bool proxytunnel) /* TRUE if this is the request setting
up the proxy tunnel */
http_output_auth(struct connectdata *conn,
const char *request,
const char *path,
bool proxytunnel) /* TRUE if this is the request setting
up the proxy tunnel */
{
CURLcode result = CURLE_OK;
struct SessionHandle *data = conn->data;
@@ -503,11 +502,11 @@ Curl_http_output_auth(struct connectdata *conn,
if(conn->bits.proxy_user_passwd &&
!checkheaders(data, "Proxy-authorization:")) {
auth="Basic";
result = Curl_output_basic(conn, TRUE);
result = http_output_basic(conn, TRUE);
if(result)
return result;
}
/* NOTE: Curl_output_basic() should set 'done' TRUE, as the other auth
/* NOTE: http_output_basic() should set 'done' TRUE, as the other auth
functions work that way */
authproxy->done = TRUE;
}
@@ -583,7 +582,7 @@ Curl_http_output_auth(struct connectdata *conn,
if(conn->bits.user_passwd &&
!checkheaders(data, "Authorization:")) {
auth="Basic";
result = Curl_output_basic(conn, FALSE);
result = http_output_basic(conn, FALSE);
if(result)
return result;
}
@@ -660,8 +659,8 @@ CURLcode Curl_http_input_auth(struct connectdata *conn,
/* if exactly this is wanted, go */
int neg = Curl_input_negotiate(conn, (bool)(httpcode == 407), start);
if(neg == 0) {
data->reqdata.newurl = strdup(data->change.url);
data->state.authproblem = (data->reqdata.newurl == NULL);
data->req.newurl = strdup(data->change.url);
data->state.authproblem = (data->req.newurl == NULL);
}
else {
infof(data, "Authentication problem. Ignoring this.\n");
@@ -743,16 +742,13 @@ CURLcode Curl_http_input_auth(struct connectdata *conn,
int Curl_http_should_fail(struct connectdata *conn)
{
struct SessionHandle *data;
struct Curl_transfer_keeper *k;
int httpcode;
DEBUGASSERT(conn);
data = conn->data;
DEBUGASSERT(data);
/*
** For readability
*/
k = &data->reqdata.keep;
httpcode = data->req.httpcode;
/*
** If we haven't been asked to fail on error,
@@ -764,12 +760,12 @@ int Curl_http_should_fail(struct connectdata *conn)
/*
** Any code < 400 is never terminal.
*/
if(k->httpcode < 400)
if(httpcode < 400)
return 0;
if(data->reqdata.resume_from &&
(data->set.httpreq==HTTPREQ_GET) &&
(k->httpcode == 416)) {
if(data->state.resume_from &&
(data->set.httpreq==HTTPREQ_GET) &&
(httpcode == 416)) {
/* "Requested Range Not Satisfiable", just proceed and
pretend this is no error */
return 0;
@@ -779,14 +775,14 @@ int Curl_http_should_fail(struct connectdata *conn)
** Any code >= 400 that's not 401 or 407 is always
** a terminal error
*/
if((k->httpcode != 401) &&
(k->httpcode != 407))
if((httpcode != 401) &&
(httpcode != 407))
return 1;
/*
** All we have left to deal with is 401 and 407
*/
DEBUGASSERT((k->httpcode == 401) || (k->httpcode == 407));
DEBUGASSERT((httpcode == 401) || (httpcode == 407));
/*
** Examine the current authentication state to see if this
@@ -807,7 +803,8 @@ int Curl_http_should_fail(struct connectdata *conn)
infof(data,"%s: authavail = 0x%08x\n",__FUNCTION__,data->state.authavail);
infof(data,"%s: httpcode = %d\n",__FUNCTION__,k->httpcode);
infof(data,"%s: authdone = %d\n",__FUNCTION__,data->state.authdone);
infof(data,"%s: newurl = %s\n",__FUNCTION__,data->reqdata.newurl ? data->reqdata.newurl : "(null)");
infof(data,"%s: newurl = %s\n",__FUNCTION__,data->req.newurl ?
data->req.newurl : "(null)");
infof(data,"%s: authproblem = %d\n",__FUNCTION__,data->state.authproblem);
#endif
@@ -815,9 +812,9 @@ int Curl_http_should_fail(struct connectdata *conn)
** Either we're not authenticating, or we're supposed to
** be authenticating something else. This is an error.
*/
if((k->httpcode == 401) && !conn->bits.user_passwd)
if((httpcode == 401) && !conn->bits.user_passwd)
return TRUE;
if((k->httpcode == 407) && !conn->bits.proxy_user_passwd)
if((httpcode == 407) && !conn->bits.proxy_user_passwd)
return TRUE;
return data->state.authproblem;
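A simplified sketch (not the real function) of the decision tree spelled out in the comments above; the 401/407 branch that consults the authentication state is elided.

#include <stdbool.h>
#include <stdio.h>

static int should_fail(int httpcode, bool failonerror, bool resumed_get)
{
  if(!failonerror)
    return 0;                /* not asked to fail on HTTP errors */
  if(httpcode < 400)
    return 0;                /* any code < 400 is never terminal */
  if(resumed_get && httpcode == 416)
    return 0;                /* resumed GET past EOF: pretend it's fine */
  if(httpcode != 401 && httpcode != 407)
    return 1;                /* other >= 400 codes are always terminal */
  return 0;                  /* 401/407: depends on auth state, elided */
}

int main(void)
{
  printf("404 -> %d, 416 (resumed GET) -> %d\n",
         should_fail(404, true, false), should_fail(416, true, true));
  return 0;
}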
@@ -837,7 +834,7 @@ static size_t readmoredata(char *buffer,
void *userp)
{
struct connectdata *conn = (struct connectdata *)userp;
struct HTTP *http = conn->data->reqdata.proto.http;
struct HTTP *http = conn->data->state.proto.http;
size_t fullsize = size * nitems;
if(0 == http->postsize)
@@ -929,7 +926,7 @@ CURLcode add_buffer_send(send_buffer *in,
CURLcode res;
char *ptr;
size_t size;
struct HTTP *http = conn->data->reqdata.proto.http;
struct HTTP *http = conn->data->state.proto.http;
size_t sendsize;
curl_socket_t sockfd;
size_t headersize;
@@ -1220,7 +1217,7 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
{
int subversion=0;
struct SessionHandle *data=conn->data;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
CURLcode result;
int res;
long timeout =
@@ -1246,12 +1243,12 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
infof(data, "Establish HTTP proxy tunnel to %s:%d\n",
hostname, remote_port);
if(data->reqdata.newurl) {
if(data->req.newurl) {
/* This only happens if we've looped here due to authentication
reasons, and we don't really use the newly cloned URL here
then. Just free() it. */
free(data->reqdata.newurl);
data->reqdata.newurl = NULL;
free(data->req.newurl);
data->req.newurl = NULL;
}
/* initialize a dynamic send-buffer */
@@ -1267,7 +1264,7 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
}
/* Setup the proxy-authorization header, if any */
result = Curl_http_output_auth(conn, (char *)"CONNECT", host_port, TRUE);
result = http_output_auth(conn, (char *)"CONNECT", host_port, TRUE);
if(CURLE_OK == result) {
char *host=(char *)"";
@@ -1412,8 +1409,15 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
keepon = FALSE;
else if(gotbytes <= 0) {
keepon = FALSE;
error = SELECT_ERROR;
failf(data, "Proxy CONNECT aborted");
if(data->set.proxyauth && data->state.authproxy.avail) {
/* proxy auth was requested and there was proxy auth available,
then deem this as "mere" proxy disconnect */
conn->bits.proxy_connect_closed = TRUE;
}
else {
error = SELECT_ERROR;
failf(data, "Proxy CONNECT aborted");
}
}
else {
/*
@@ -1593,6 +1597,8 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
}
break;
} /* switch */
if(Curl_pgrsUpdate(conn))
return CURLE_ABORTED_BY_CALLBACK;
} /* while there's buffer left and loop is requested */
if(error)
@@ -1603,20 +1609,20 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
headers. 'newurl' is set to a new URL if we must loop. */
Curl_http_auth_act(conn);
if(closeConnection && data->reqdata.newurl) {
if(closeConnection && data->req.newurl) {
/* Connection closed by server. Don't use it anymore */
sclose(conn->sock[sockindex]);
conn->sock[sockindex] = CURL_SOCKET_BAD;
break;
}
} /* END NEGOTIATION PHASE */
} while(data->reqdata.newurl);
} while(data->req.newurl);
if(200 != k->httpcode) {
if(200 != data->req.httpcode) {
failf(data, "Received HTTP code %d from proxy after CONNECT",
k->httpcode);
data->req.httpcode);
if(closeConnection && data->reqdata.newurl)
if(closeConnection && data->req.newurl)
conn->bits.proxy_connect_closed = TRUE;
return CURLE_RECV_ERROR;
@@ -1631,7 +1637,7 @@ CURLcode Curl_proxyCONNECT(struct connectdata *conn,
data->state.authproxy.done = TRUE;
infof (data, "Proxy replied OK to CONNECT request\n");
k->ignorebody = FALSE; /* put it (back) to non-ignore state */
data->req.ignorebody = FALSE; /* put it (back) to non-ignore state */
return CURLE_OK;
}
@@ -1685,7 +1691,7 @@ CURLcode Curl_http_connect(struct connectdata *conn, bool *done)
if(conn->protocol & PROT_HTTPS) {
/* perform SSL initialization */
if(data->state.used_interface == Curl_if_multi) {
result = Curl_https_connecting(conn, done);
result = https_connecting(conn, done);
if(result)
return result;
}
@@ -1704,7 +1710,7 @@ CURLcode Curl_http_connect(struct connectdata *conn, bool *done)
return CURLE_OK;
}
static CURLcode Curl_https_connecting(struct connectdata *conn, bool *done)
static CURLcode https_connecting(struct connectdata *conn, bool *done)
{
CURLcode result;
DEBUGASSERT((conn) && (conn->protocol & PROT_HTTPS));
@@ -1720,7 +1726,7 @@ static CURLcode Curl_https_connecting(struct connectdata *conn, bool *done)
#ifdef USE_SSLEAY
/* This function is OpenSSL-specific. It should be made to query the generic
SSL layer instead. */
static int Curl_https_getsock(struct connectdata *conn,
static int https_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks)
{
@@ -1745,7 +1751,7 @@ static int Curl_https_getsock(struct connectdata *conn,
}
#else
#ifdef USE_GNUTLS
int Curl_https_getsock(struct connectdata *conn,
int https_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks)
{
@@ -1756,7 +1762,7 @@ int Curl_https_getsock(struct connectdata *conn,
}
#else
#ifdef USE_NSS
int Curl_https_getsock(struct connectdata *conn,
int https_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks)
{
@@ -1767,7 +1773,7 @@ int Curl_https_getsock(struct connectdata *conn,
}
#else
#ifdef USE_QSOSSL
int Curl_https_getsock(struct connectdata *conn,
int https_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks)
{
@@ -1790,13 +1796,14 @@ CURLcode Curl_http_done(struct connectdata *conn,
CURLcode status, bool premature)
{
struct SessionHandle *data = conn->data;
struct HTTP *http =data->reqdata.proto.http;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct HTTP *http =data->state.proto.http;
(void)premature; /* not used */
/* set the proper values (possibly modified on POST) */
conn->fread_func = data->set.fread_func; /* restore */
conn->fread_in = data->set.in; /* restore */
conn->seek_func = data->set.seek_func; /* restore */
conn->seek_client = data->set.seek_client; /* restore */
if(http == NULL)
return CURLE_OK;
@@ -1810,7 +1817,7 @@ CURLcode Curl_http_done(struct connectdata *conn,
}
if(HTTPREQ_POST_FORM == data->set.httpreq) {
k->bytecount = http->readbytecount + http->writebytecount;
data->req.bytecount = http->readbytecount + http->writebytecount;
Curl_formclean(&http->sendit); /* Now free that whole lot */
if(http->form.fp) {
@@ -1820,15 +1827,15 @@ CURLcode Curl_http_done(struct connectdata *conn,
}
}
else if(HTTPREQ_PUT == data->set.httpreq)
k->bytecount = http->readbytecount + http->writebytecount;
data->req.bytecount = http->readbytecount + http->writebytecount;
if(status != CURLE_OK)
return (status);
if(!conn->bits.retry &&
((http->readbytecount +
data->reqdata.keep.headerbytecount -
data->reqdata.keep.deductheadercount)) <= 0) {
data->req.headerbytecount -
data->req.deductheadercount)) <= 0) {
/* If this connection isn't simply closed to be retried, AND nothing was
read from the HTTP server (that counts), this can't be right so we
return an error here */
@@ -1911,7 +1918,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
char *buf = data->state.buffer; /* this is a short cut to the buffer */
CURLcode result=CURLE_OK;
struct HTTP *http;
char *ppath = data->reqdata.path;
char *ppath = data->state.path;
char ftp_typecode[sizeof(";type=?")] = "";
char *host = conn->host.name;
const char *te = ""; /* transfer-encoding */
@@ -1930,16 +1937,16 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
sessionhandle, deal with it */
Curl_reset_reqproto(conn);
if(!data->reqdata.proto.http) {
if(!data->state.proto.http) {
/* Only allocate this struct if we don't already have it! */
http = (struct HTTP *)calloc(sizeof(struct HTTP), 1);
if(!http)
return CURLE_OUT_OF_MEMORY;
data->reqdata.proto.http = http;
data->state.proto.http = http;
}
else
http = data->reqdata.proto.http;
http = data->state.proto.http;
if( (conn->protocol&(PROT_HTTP|PROT_FTP)) &&
data->set.upload) {
@@ -1983,7 +1990,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
}
/* setup the authentication headers */
result = Curl_http_output_auth(conn, request, ppath, FALSE);
result = http_output_auth(conn, request, ppath, FALSE);
if(result)
return result;
@@ -2126,22 +2133,24 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
}
}
ppath = data->change.url;
/* when doing ftp, append ;type=<a|i> if not present */
if(checkprefix("ftp://", ppath) || checkprefix("ftps://", ppath)) {
char *p = strstr(ppath, ";type=");
if(p && p[6] && p[7] == 0) {
switch (toupper((int)((unsigned char)p[6]))) {
case 'A':
case 'D':
case 'I':
break;
default:
p = NULL;
if (data->set.proxy_transfer_mode) {
/* when doing ftp, append ;type=<a|i> if not present */
if(checkprefix("ftp://", ppath) || checkprefix("ftps://", ppath)) {
char *p = strstr(ppath, ";type=");
if(p && p[6] && p[7] == 0) {
switch (toupper((int)((unsigned char)p[6]))) {
case 'A':
case 'D':
case 'I':
break;
default:
p = NULL;
}
}
if(!p)
snprintf(ftp_typecode, sizeof(ftp_typecode), ";type=%c",
data->set.prefer_ascii ? 'a' : 'i');
}
if(!p)
snprintf(ftp_typecode, sizeof(ftp_typecode), ";type=%c",
data->set.prefer_ascii ? 'a' : 'i');
}
}
if(HTTPREQ_POST_FORM == httpreq) {
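A standalone sketch (hypothetical URL, not the patch itself) of the ;type= handling in the hunk above: an ftp:// URL sent through an HTTP proxy gets ;type=a or ;type=i appended unless a valid type code is already present.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
  const char *ppath = "ftp://example.com/file.txt";  /* hypothetical URL */
  char ftp_typecode[sizeof(";type=?")] = "";
  int prefer_ascii = 0;
  const char *p = strstr(ppath, ";type=");

  if(p && p[6] && p[7] == 0) {
    switch(toupper((unsigned char)p[6])) {
    case 'A': case 'D': case 'I':
      break;                             /* valid type code, keep it */
    default:
      p = NULL;                          /* unknown code, append our own */
    }
  }
  if(!p)
    snprintf(ftp_typecode, sizeof(ftp_typecode), ";type=%c",
             prefer_ascii ? 'a' : 'i');
  printf("%s%s\n", ppath, ftp_typecode);  /* ...file.txt;type=i */
  return 0;
}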
@@ -2169,7 +2178,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
if(( (HTTPREQ_POST == httpreq) ||
(HTTPREQ_POST_FORM == httpreq) ||
(HTTPREQ_PUT == httpreq) ) &&
data->reqdata.resume_from) {
data->state.resume_from) {
/**********************************************************************
* Resuming upload in HTTP means that we PUT or POST and that we have
* got a resume_from value set. The resume value has already created
@@ -2178,44 +2187,55 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
* file size before we continue this venture in the dark lands of HTTP.
*********************************************************************/
if(data->reqdata.resume_from < 0 ) {
if(data->state.resume_from < 0 ) {
/*
* This is meant to get the size of the present remote-file by itself.
* We don't support this now. Bail out!
*/
data->reqdata.resume_from = 0;
data->state.resume_from = 0;
}
if(data->reqdata.resume_from && !data->state.this_is_a_follow) {
if(data->state.resume_from && !data->state.this_is_a_follow) {
/* do we still game? */
curl_off_t passed=0;
/* Now, let's read off the proper amount of bytes from the
input. If we knew it was a proper file we could've just
fseek()ed but we only have a stream here */
do {
size_t readthisamountnow = (size_t)(data->reqdata.resume_from - passed);
size_t actuallyread;
input. */
if(conn->seek_func) {
curl_off_t readthisamountnow = data->state.resume_from;
if(readthisamountnow > BUFSIZE)
readthisamountnow = BUFSIZE;
actuallyread =
data->set.fread_func(data->state.buffer, 1, (size_t)readthisamountnow,
data->set.in);
passed += actuallyread;
if(actuallyread != readthisamountnow) {
failf(data, "Could only read %" FORMAT_OFF_T
" bytes from the input",
passed);
if(conn->seek_func(conn->seek_client,
readthisamountnow, SEEK_SET) != 0) {
failf(data, "Could not seek stream");
return CURLE_READ_ERROR;
}
} while(passed != data->reqdata.resume_from); /* loop until done */
}
else {
curl_off_t passed=0;
do {
size_t readthisamountnow = (size_t)(data->state.resume_from - passed);
size_t actuallyread;
if(readthisamountnow > BUFSIZE)
readthisamountnow = BUFSIZE;
actuallyread = data->set.fread_func(data->state.buffer, 1,
(size_t)readthisamountnow,
data->set.in);
passed += actuallyread;
if(actuallyread != readthisamountnow) {
failf(data, "Could only read %" FORMAT_OFF_T
" bytes from the input",
passed);
return CURLE_READ_ERROR;
}
} while(passed != data->state.resume_from); /* loop until done */
}
/* now, decrease the size of the read */
if(data->set.infilesize>0) {
data->set.infilesize -= data->reqdata.resume_from;
data->set.infilesize -= data->state.resume_from;
if(data->set.infilesize <= 0) {
failf(data, "File already completely uploaded");
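A standalone sketch (plain stdio, not libcurl's callbacks) of the resume logic above: seek straight to resume_from when seeking is possible, otherwise read and discard until that offset is reached.

#include <stdio.h>

#define BUFSIZE 512

static int skip_to_offset(FILE *in, long resume_from)
{
  char buffer[BUFSIZE];
  long passed = 0;

  if(fseek(in, resume_from, SEEK_SET) == 0)
    return 0;                            /* seekable input: one call */

  /* non-seekable stream: read and throw away until the offset is reached */
  do {
    size_t readthisamountnow = (size_t)(resume_from - passed);
    size_t actuallyread;
    if(readthisamountnow > BUFSIZE)
      readthisamountnow = BUFSIZE;
    actuallyread = fread(buffer, 1, readthisamountnow, in);
    passed += (long)actuallyread;
    if(actuallyread != readthisamountnow)
      return -1;                         /* input ended before the offset */
  } while(passed != resume_from);
  return 0;
}

int main(void)
{
  FILE *in = fopen("upload.bin", "rb");  /* hypothetical input file */
  if(in && skip_to_offset(in, 1024) == 0)
    puts("resuming upload at byte 1024");
  if(in)
    fclose(in);
  return 0;
}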
@@ -2225,7 +2245,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
/* we've passed, proceed as normal */
}
}
if(data->reqdata.use_range) {
if(data->state.use_range) {
/*
* A range is selected. We use different headers whether we're downloading
* or uploading and we always let customized headers override our internal
@@ -2237,7 +2257,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
if(conn->allocptr.rangeline)
free(conn->allocptr.rangeline);
conn->allocptr.rangeline = aprintf("Range: bytes=%s\r\n",
data->reqdata.range);
data->state.range);
}
else if((httpreq != HTTPREQ_GET) &&
!checkheaders(data, "Content-Range:")) {
@@ -2246,14 +2266,14 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
if(conn->allocptr.rangeline)
free(conn->allocptr.rangeline);
if(data->reqdata.resume_from) {
if(data->state.resume_from) {
/* This is because "resume" was selected */
curl_off_t total_expected_size=
data->reqdata.resume_from + data->set.infilesize;
data->state.resume_from + data->set.infilesize;
conn->allocptr.rangeline =
aprintf("Content-Range: bytes %s%" FORMAT_OFF_T
"/%" FORMAT_OFF_T "\r\n",
data->reqdata.range, total_expected_size-1,
data->state.range, total_expected_size-1,
total_expected_size);
}
else {
@@ -2261,7 +2281,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
append total size */
conn->allocptr.rangeline =
aprintf("Content-Range: bytes %s/%" FORMAT_OFF_T "\r\n",
data->reqdata.range, data->set.infilesize);
data->state.range, data->set.infilesize);
}
if(!conn->allocptr.rangeline)
return CURLE_OUT_OF_MEMORY;
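A worked example (standalone snprintf, not libcurl's aprintf) of the Content-Range line built above for a resumed upload: with a range string of "100-", resume_from = 100 and infilesize = 50, the header covers bytes 100-149 of 150.

#include <stdio.h>

int main(void)
{
  const char *range = "100-";          /* as stored in the request state */
  long resume_from = 100;
  long infilesize = 50;                /* bytes still left to send */
  long total_expected_size = resume_from + infilesize;
  char rangeline[128];

  snprintf(rangeline, sizeof(rangeline),
           "Content-Range: bytes %s%ld/%ld\r\n",
           range, total_expected_size - 1, total_expected_size);
  fputs(rangeline, stdout);            /* Content-Range: bytes 100-149/150 */
  return 0;
}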
@@ -2306,7 +2326,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
conn->allocptr.proxyuserpwd?
conn->allocptr.proxyuserpwd:"",
conn->allocptr.userpwd?conn->allocptr.userpwd:"",
(data->reqdata.use_range && conn->allocptr.rangeline)?
(data->state.use_range && conn->allocptr.rangeline)?
conn->allocptr.rangeline:"",
(data->set.str[STRING_USERAGENT] &&
*data->set.str[STRING_USERAGENT] && conn->allocptr.uagent)?
@@ -2340,7 +2360,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
co = Curl_cookie_getlist(data->cookies,
conn->allocptr.cookiehost?
conn->allocptr.cookiehost:host,
data->reqdata.path,
data->state.path,
(bool)(conn->protocol&PROT_HTTPS?TRUE:FALSE));
Curl_share_unlock(data, CURL_LOCK_DATA_COOKIE);
}
@@ -2363,7 +2383,7 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
}
co = co->next; /* next cookie please */
}
Curl_cookie_freelist(store); /* free the cookie list */
Curl_cookie_freelist(store, FALSE); /* free the cookie list */
}
if(addcookies && (CURLE_OK == result)) {
if(!count)
@@ -2615,17 +2635,19 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
return result;
}
if(data->set.postfields) {
/* For really small posts we don't use Expect: headers at all, and for
the somewhat bigger ones we allow the app to disable it. Just make
sure that the expect100header is always set to the preferred value
here. */
if(postsize > TINY_INITIAL_POST_SIZE) {
result = expect100(data, req_buffer);
if(result)
return result;
}
else
data->state.expect100header = FALSE;
/* for really small posts we don't use Expect: headers at all, and for
the somewhat bigger ones we allow the app to disable it */
if(postsize > TINY_INITIAL_POST_SIZE) {
result = expect100(data, req_buffer);
if(result)
return result;
}
else
data->state.expect100header = FALSE;
if(data->set.postfields) {
if(!data->state.expect100header &&
(postsize < MAX_INITIAL_POST_SIZE)) {
@@ -2689,9 +2711,13 @@ CURLcode Curl_http(struct connectdata *conn, bool *done)
/* set the upload size to the progress meter */
Curl_pgrsSetUploadSize(data, postsize?postsize:-1);
/* set the pointer to mark that we will send the post body using
the read callback */
http->postdata = (char *)&http->postdata;
/* set the pointer to mark that we will send the post body using the
read callback, but only if we're not in authenticate
negotiation */
if(!conn->bits.authneg) {
http->postdata = (char *)&http->postdata;
http->postsize = postsize;
}
}
}
/* issue the request */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -109,7 +109,7 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
CURLcode result=CURLE_OK;
struct SessionHandle *data = conn->data;
struct Curl_chunker *ch = &conn->chunk;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
size_t piece;
size_t length = (size_t)datalen;
size_t *wrote = (size_t *)wrotep;
@@ -118,8 +118,11 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
/* the original data is written to the client, but we go on with the
chunk read process, to properly calculate the content length*/
if(data->set.http_te_skip && !k->ignorebody)
Curl_client_write(conn, CLIENTWRITE_BODY, datap,datalen);
if(data->set.http_te_skip && !k->ignorebody) {
result = Curl_client_write(conn, CLIENTWRITE_BODY, datap, datalen);
if(result)
return CHUNKE_WRITE_ERROR;
}
while(length) {
switch(ch->state) {
@@ -217,7 +220,7 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
/* Write the data portion available */
#ifdef HAVE_LIBZ
switch (conn->data->set.http_ce_skip?
IDENTITY : data->reqdata.keep.content_encoding) {
IDENTITY : data->req.content_encoding) {
case IDENTITY:
#endif
if(!k->ignorebody) {
@@ -231,16 +234,16 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
break;
case DEFLATE:
/* update data->reqdata.keep.str to point to the chunk data. */
data->reqdata.keep.str = datap;
result = Curl_unencode_deflate_write(conn, &data->reqdata.keep,
/* update data->req.keep.str to point to the chunk data. */
data->req.str = datap;
result = Curl_unencode_deflate_write(conn, &data->req,
(ssize_t)piece);
break;
case GZIP:
/* update data->reqdata.keep.str to point to the chunk data. */
data->reqdata.keep.str = datap;
result = Curl_unencode_gzip_write(conn, &data->reqdata.keep,
/* update data->req.keep.str to point to the chunk data. */
data->req.str = datap;
result = Curl_unencode_gzip_write(conn, &data->req,
(ssize_t)piece);
break;
@@ -362,9 +365,12 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
return(CHUNKE_BAD_CHUNK);
}
#endif /* CURL_DOES_CONVERSIONS */
if( !data->set.http_te_skip )
Curl_client_write(conn, CLIENTWRITE_HEADER,
conn->trailer, conn->trlPos);
if(!data->set.http_te_skip) {
result = Curl_client_write(conn, CLIENTWRITE_HEADER,
conn->trailer, conn->trlPos);
if(result)
return CHUNKE_WRITE_ERROR;
}
}
ch->state = CHUNK_TRAILER;
conn->trlPos=0;

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -90,19 +90,19 @@ CURLdigest Curl_input_digest(struct connectdata *conn,
Curl_digest_cleanup_one(d);
while(more) {
char value[32];
char content[128];
char value[256];
char content[1024];
size_t totlen=0;
while(*header && ISSPACE(*header))
header++;
/* how big can these strings be? */
if((2 == sscanf(header, "%31[^=]=\"%127[^\"]\"",
if((2 == sscanf(header, "%255[^=]=\"%1023[^\"]\"",
value, content)) ||
/* try the same scan but without quotes around the content but don't
include the possibly trailing comma, newline or carriage return */
(2 == sscanf(header, "%31[^=]=%127[^\r\n,]",
(2 == sscanf(header, "%255[^=]=%1023[^\r\n,]",
value, content)) ) {
if(strequal(value, "nonce")) {
d->nonce = strdup(content);
@@ -180,6 +180,9 @@ CURLdigest Curl_input_digest(struct connectdata *conn,
break; /* we're done here */
header += totlen;
/* pass all additional spaces here */
while(*header && ISSPACE(*header))
header++;
if(',' == *header)
/* allow the list to be comma-separated */
header++;
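A standalone sketch of the widened sscanf() parsing shown above: the field widths are one less than the buffer sizes, so the 256/1024-byte buffers take %255 and %1023.

#include <stdio.h>

int main(void)
{
  const char *quoted = "nonce=\"abc123\", realm=\"example\"";
  const char *plain = "algorithm=MD5,";
  char value[256];
  char content[1024];

  /* quoted form: name="content" */
  if(2 == sscanf(quoted, "%255[^=]=\"%1023[^\"]\"", value, content))
    printf("quoted:   %s -> %s\n", value, content);

  /* unquoted form: name=content, stopping at CR, LF or comma */
  if(2 == sscanf(plain, "%255[^=]=%1023[^\r\n,]", value, content))
    printf("unquoted: %s -> %s\n", value, content);
  return 0;
}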

View File

@@ -7,7 +7,7 @@
*
* Copyright (c) 1995, 1996, 1997, 1998, 1999 Kungliga Tekniska Högskolan
* (Royal Institute of Technology, Stockholm, Sweden).
* Copyright (c) 2004 - 2007 Daniel Stenberg
* Copyright (c) 2004 - 2008 Daniel Stenberg
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -362,7 +362,7 @@ CURLcode Curl_krb_kauth(struct connectdata *conn)
tmp=0;
}
if(!tmp || !ptr) {
Curl_failf(conn->data, "Failed to decode base64 in reply.\n");
Curl_failf(conn->data, "Failed to decode base64 in reply");
Curl_set_command_prot(conn, save);
return CURLE_FTP_WEIRD_SERVER_REPLY;
}

View File

@@ -555,7 +555,7 @@ static bool unescape_elements (void *data, LDAPURLDesc *ludp)
*
* <hostname> already known from 'conn->host.name'.
* <port> already known from 'conn->remote_port'.
* extract the rest from 'conn->data->reqdata.path+1'. All fields are optional.
* extract the rest from 'conn->data->state.path+1'. All fields are optional.
* e.g.
* ldap://<hostname>:<port>/?<attributes>?<scope>?<filter>
* yields ludp->lud_dn = "".
@@ -568,8 +568,8 @@ static int _ldap_url_parse2 (const struct connectdata *conn, LDAPURLDesc *ludp)
int i;
if(!conn->data ||
!conn->data->reqdata.path ||
conn->data->reqdata.path[0] != '/' ||
!conn->data->state.path ||
conn->data->state.path[0] != '/' ||
!checkprefix(conn->protostr, conn->data->change.url))
return LDAP_INVALID_SYNTAX;
@@ -579,7 +579,7 @@ static int _ldap_url_parse2 (const struct connectdata *conn, LDAPURLDesc *ludp)
/* parse DN (Distinguished Name).
*/
ludp->lud_dn = strdup(conn->data->reqdata.path+1);
ludp->lud_dn = strdup(conn->data->state.path+1);
if(!ludp->lud_dn)
return LDAP_NO_MEMORY;

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -52,7 +52,8 @@ BEGIN
VALUE "OriginalFilename", "libcurl.dll\0"
VALUE "ProductName", "The cURL library\0"
VALUE "ProductVersion", LIBCURL_VERSION "\0"
VALUE "LegalCopyright", "Copyright 1996-2007 by Daniel Stenberg. http://curl.haxx.se/docs/copyright.html\0"
VALUE "LegalCopyright", "<EFBFBD> " LIBCURL_COPYRIGHT "\0"
VALUE "License", "http://curl.haxx.se/docs/copyright.html\0"
END
END

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -136,3 +136,52 @@ Curl_llist_count(struct curl_llist *list)
{
return list->size;
}
int Curl_llist_move(struct curl_llist *list, struct curl_llist_element *e,
struct curl_llist *to_list, struct curl_llist_element *to_e)
{
/* Remove element from list */
if(e == NULL || list->size == 0)
return 0;
if(e == list->head) {
list->head = e->next;
if(list->head == NULL)
list->tail = NULL;
else
e->next->prev = NULL;
}
else {
e->prev->next = e->next;
if(!e->next)
list->tail = e->prev;
else
e->next->prev = e->prev;
}
--list->size;
/* Add element to to_list after to_e */
if(to_list->size == 0) {
to_list->head = e;
to_list->head->prev = NULL;
to_list->head->next = NULL;
to_list->tail = e;
}
else {
e->next = to_e->next;
e->prev = to_e;
if(to_e->next) {
to_e->next->prev = e;
}
else {
to_list->tail = e;
}
to_e->next = e;
}
++to_list->size;
return 1;
}

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2005, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -56,5 +56,7 @@ int Curl_llist_remove_next(struct curl_llist *, struct curl_llist_element *,
void *);
size_t Curl_llist_count(struct curl_llist *);
void Curl_llist_destroy(struct curl_llist *, void *);
int Curl_llist_move(struct curl_llist *, struct curl_llist_element *,
struct curl_llist *, struct curl_llist_element *);
#endif

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -85,7 +85,6 @@ typedef enum {
CURLM_STATE_TOOFAST, /* wait because limit-rate exceeded */
CURLM_STATE_DONE, /* post data transfer operation */
CURLM_STATE_COMPLETED, /* operation complete */
CURLM_STATE_CANCELLED, /* cancelled */
CURLM_STATE_LAST /* not a true state, never use this */
} CURLMstate;
@@ -190,6 +189,14 @@ static void add_closure(struct Curl_multi *multi,
struct SessionHandle *data);
static int update_timer(struct Curl_multi *multi);
static CURLcode addHandleToSendOrPendPipeline(struct SessionHandle *handle,
struct connectdata *conn);
static int checkPendPipeline(struct connectdata *conn);
static int moveHandleFromSendToRecvPipeline(struct SessionHandle *handle,
struct connectdata *conn);
static bool isHandleAtHead(struct SessionHandle *handle,
struct curl_llist *pipeline);
#ifdef CURLDEBUG
static const char * const statename[]={
"INIT",
@@ -208,7 +215,6 @@ static const char * const statename[]={
"TOOFAST",
"DONE",
"COMPLETED",
"CANCELLED"
};
void curl_multi_dump(CURLM *multi_handle);
@@ -578,15 +584,17 @@ CURLMcode curl_multi_remove_handle(CURLM *multi_handle,
alive connections when this is removed */
multi->num_alive--;
if(easy->easy_handle->state.is_in_pipeline &&
easy->state > CURLM_STATE_DO &&
if(easy->easy_conn &&
easy->easy_handle->state.is_in_pipeline &&
easy->state > CURLM_STATE_WAITDO &&
easy->state < CURLM_STATE_COMPLETED) {
/* If the handle is in a pipeline and has finished sending off its
request but not received its response yet, we need to remember the
fact that we want to remove this handle but do the actual removal at
a later time */
easy->easy_handle->state.cancelled = TRUE;
return CURLM_OK;
/* If the handle is in a pipeline and has started sending off its
request but not received its response yet, we need to close
connection. */
easy->easy_conn->bits.close = TRUE;
/* Set connection owner so that Curl_done() closes it.
We can safely do this here since connection is killed. */
easy->easy_conn->data = easy->easy_handle;
}
/* The timer must be shut down before easy->multi is set to NULL,
@@ -772,6 +780,7 @@ static int multi_getsock(struct Curl_one_easy *easy,
case CURLM_STATE_DOING:
return Curl_doing_getsock(easy->easy_conn, socks, numsocks);
case CURLM_STATE_WAITPROXYCONNECT:
case CURLM_STATE_WAITCONNECT:
return waitconnect_getsock(easy->easy_conn, socks, numsocks);
@@ -845,7 +854,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
bool dophase_done;
bool done;
CURLMcode result = CURLM_OK;
struct Curl_transfer_keeper *k;
struct SingleRequest *k;
do {
bool disconnect_conn = FALSE;
@@ -857,11 +866,12 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
we're using gets cleaned up and we're left with nothing. */
if(easy->easy_handle->state.pipe_broke) {
infof(easy->easy_handle, "Pipe broke: handle 0x%x, url = %s\n",
easy, easy->easy_handle->reqdata.path);
easy, easy->easy_handle->state.path);
if(easy->easy_handle->state.is_in_pipeline) {
/* Head back to the CONNECT state */
multistate(easy, CURLM_STATE_CONNECT);
easy->easy_handle->state.is_in_pipeline = FALSE;
result = CURLM_CALL_MULTI_PERFORM;
easy->result = CURLE_OK;
}
@@ -932,28 +942,36 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
&async, &protocol_connect);
if(CURLE_OK == easy->result) {
/* Add this handle to the send pipeline */
easy->result = Curl_addHandleToPipeline(easy->easy_handle,
easy->easy_conn->send_pipe);
/* Add this handle to the send or pend pipeline */
easy->result = addHandleToSendOrPendPipeline(easy->easy_handle,
easy->easy_conn);
if(CURLE_OK == easy->result) {
if(async)
/* We're now waiting for an asynchronous name lookup */
multistate(easy, CURLM_STATE_WAITRESOLVE);
if (easy->easy_handle->state.is_in_pipeline) {
multistate(easy, CURLM_STATE_WAITDO);
if(isHandleAtHead(easy->easy_handle,
easy->easy_conn->send_pipe))
result = CURLM_CALL_MULTI_PERFORM;
}
else {
/* after the connect has been sent off, go WAITCONNECT unless the
protocol connect is already done and we can go directly to
WAITDO! */
result = CURLM_CALL_MULTI_PERFORM;
if(protocol_connect)
multistate(easy, CURLM_STATE_WAITDO);
if(async)
/* We're now waiting for an asynchronous name lookup */
multistate(easy, CURLM_STATE_WAITRESOLVE);
else {
/* after the connect has been sent off, go WAITCONNECT unless the
protocol connect is already done and we can go directly to
WAITDO! */
result = CURLM_CALL_MULTI_PERFORM;
if(protocol_connect)
multistate(easy, CURLM_STATE_WAITDO);
else {
#ifndef CURL_DISABLE_HTTP
if(easy->easy_conn->bits.tunnel_connecting)
multistate(easy, CURLM_STATE_WAITPROXYCONNECT);
else
if(easy->easy_conn->bits.tunnel_connecting)
multistate(easy, CURLM_STATE_WAITPROXYCONNECT);
else
#endif
multistate(easy, CURLM_STATE_WAITCONNECT);
multistate(easy, CURLM_STATE_WAITCONNECT);
}
}
}
}
@@ -1077,12 +1095,12 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
easy->easy_conn->connectindex,
easy->easy_conn->send_pipe->size,
easy->easy_conn->writechannel_inuse,
Curl_isHandleAtHead(easy->easy_handle,
easy->easy_conn->send_pipe));
isHandleAtHead(easy->easy_handle,
easy->easy_conn->send_pipe));
#endif
if(!easy->easy_conn->writechannel_inuse &&
Curl_isHandleAtHead(easy->easy_handle,
easy->easy_conn->send_pipe)) {
isHandleAtHead(easy->easy_handle,
easy->easy_conn->send_pipe)) {
/* Grab the channel */
easy->easy_conn->writechannel_inuse = TRUE;
multistate(easy, CURLM_STATE_DO);
@@ -1190,12 +1208,10 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
break;
case CURLM_STATE_DO_DONE:
/* Remove ourselves from the send pipeline */
Curl_removeHandleFromPipeline(easy->easy_handle,
easy->easy_conn->send_pipe);
/* Add ourselves to the recv pipeline */
easy->result = Curl_addHandleToPipeline(easy->easy_handle,
easy->easy_conn->recv_pipe);
/* Move ourselves from the send to recv pipeline */
moveHandleFromSendToRecvPipeline(easy->easy_handle, easy->easy_conn);
/* Check if we can move pending requests to send pipe */
checkPendPipeline(easy->easy_conn);
multistate(easy, CURLM_STATE_WAITPERFORM);
result = CURLM_CALL_MULTI_PERFORM;
break;
@@ -1206,13 +1222,13 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
easy->easy_conn->connectindex,
easy->easy_conn->recv_pipe->size,
easy->easy_conn->readchannel_inuse,
Curl_isHandleAtHead(easy->easy_handle,
easy->easy_conn->recv_pipe));
isHandleAtHead(easy->easy_handle,
easy->easy_conn->recv_pipe));
#endif
/* Wait for our turn to PERFORM */
if(!easy->easy_conn->readchannel_inuse &&
Curl_isHandleAtHead(easy->easy_handle,
easy->easy_conn->recv_pipe)) {
isHandleAtHead(easy->easy_handle,
easy->easy_conn->recv_pipe)) {
/* Grab the channel */
easy->easy_conn->readchannel_inuse = TRUE;
multistate(easy, CURLM_STATE_PERFORM);
@@ -1252,16 +1268,16 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/* read/write data if it is ready to do so */
easy->result = Curl_readwrite(easy->easy_conn, &done);
k = &easy->easy_handle->reqdata.keep;
k = &easy->easy_handle->req;
if(!(k->keepon & KEEP_READ)) {
/* We're done reading */
easy->easy_conn->readchannel_inuse = FALSE;
/* We're done reading */
easy->easy_conn->readchannel_inuse = FALSE;
}
if(!(k->keepon & KEEP_WRITE)) {
/* We're done writing */
easy->easy_conn->writechannel_inuse = FALSE;
/* We're done writing */
easy->easy_conn->writechannel_inuse = FALSE;
}
if(easy->result) {
@@ -1271,6 +1287,7 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
easy->easy_conn->bits.close = TRUE;
Curl_removeHandleFromPipeline(easy->easy_handle,
easy->easy_conn->recv_pipe);
easy->easy_handle->state.is_in_pipeline = FALSE;
if(CURL_SOCKET_BAD != easy->easy_conn->sock[SECONDARYSOCKET]) {
/* if we failed anywhere, we must clean up the secondary socket if
@@ -1289,14 +1306,17 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
Curl_posttransfer(easy->easy_handle);
/* When we follow redirects, we must go back to the CONNECT state */
if(easy->easy_handle->reqdata.newurl || retry) {
if(easy->easy_handle->req.newurl || retry) {
Curl_removeHandleFromPipeline(easy->easy_handle,
easy->easy_conn->recv_pipe);
/* Check if we can move pending requests to send pipe */
checkPendPipeline(easy->easy_conn);
easy->easy_handle->state.is_in_pipeline = FALSE;
if(!retry) {
/* if the URL is a follow-location and not just a retried request
then figure out the URL here */
newurl = easy->easy_handle->reqdata.newurl;
easy->easy_handle->reqdata.newurl = NULL;
newurl = easy->easy_handle->req.newurl;
easy->easy_handle->req.newurl = NULL;
}
easy->result = Curl_done(&easy->easy_conn, CURLE_OK, FALSE);
if(easy->result == CURLE_OK)
@@ -1323,6 +1343,8 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
/* Remove ourselves from the receive pipeline */
Curl_removeHandleFromPipeline(easy->easy_handle,
easy->easy_conn->recv_pipe);
/* Check if we can move pending requests to send pipe */
checkPendPipeline(easy->easy_conn);
easy->easy_handle->state.is_in_pipeline = FALSE;
if(easy->easy_conn->bits.stream_was_rewound) {
@@ -1332,22 +1354,16 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
result = CURLM_CALL_MULTI_PERFORM;
}
if(!easy->easy_handle->state.cancelled) {
/* post-transfer command */
easy->result = Curl_done(&easy->easy_conn, CURLE_OK, FALSE);
/* post-transfer command */
easy->result = Curl_done(&easy->easy_conn, CURLE_OK, FALSE);
/* after we have DONE what we're supposed to do, go COMPLETED, and
it doesn't matter what the Curl_done() returned! */
multistate(easy, CURLM_STATE_COMPLETED);
}
/* after we have DONE what we're supposed to do, go COMPLETED, and
it doesn't matter what the Curl_done() returned! */
multistate(easy, CURLM_STATE_COMPLETED);
break;
case CURLM_STATE_COMPLETED:
if(easy->easy_handle->state.cancelled)
/* Go into the CANCELLED state if we were cancelled */
multistate(easy, CURLM_STATE_CANCELLED);
/* this is a completed transfer, it is likely to still be connected */
/* This node should be delinked from the list now and we should post
@@ -1358,13 +1374,6 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
easy->easy_conn = NULL;
break;
case CURLM_STATE_CANCELLED:
/* Cancelled transfer, wait to be cleaned up */
/* Reset the conn pointer so we don't leave it dangling */
easy->easy_conn = NULL;
break;
default:
return CURLM_INTERNAL_ERROR;
}
@@ -1390,6 +1399,8 @@ static CURLMcode multi_runsingle(struct Curl_multi *multi,
easy->easy_conn->send_pipe);
Curl_removeHandleFromPipeline(easy->easy_handle,
easy->easy_conn->recv_pipe);
/* Check if we can move pending requests to send pipe */
checkPendPipeline(easy->easy_conn);
}
if(disconnect_conn) {
@@ -1460,15 +1471,6 @@ CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles)
while(easy != &multi->easy) {
CURLMcode result;
if(easy->easy_handle->state.cancelled &&
easy->state == CURLM_STATE_CANCELLED) {
/* Remove cancelled handles once it's safe to do so */
Curl_multi_rmeasy(multi_handle, easy->easy_handle);
easy->easy_handle = NULL;
easy = easy->next;
continue;
}
result = multi_runsingle(multi, easy);
if(result)
returncode = result;
@@ -1952,6 +1954,77 @@ static int update_timer(struct Curl_multi *multi)
return multi->timer_cb((CURLM*)multi, timeout_ms, multi->timer_userp);
}
static CURLcode addHandleToSendOrPendPipeline(struct SessionHandle *handle,
struct connectdata *conn)
{
size_t pipeLen = conn->send_pipe->size + conn->recv_pipe->size;
struct curl_llist *pipeline;
if(!Curl_isPipeliningEnabled(handle) ||
pipeLen == 0)
pipeline = conn->send_pipe;
else {
if(conn->server_supports_pipelining &&
pipeLen < MAX_PIPELINE_LENGTH)
pipeline = conn->send_pipe;
else
pipeline = conn->pend_pipe;
}
return Curl_addHandleToPipeline(handle, pipeline);
}
static int checkPendPipeline(struct connectdata *conn)
{
int result = 0;
if (conn->server_supports_pipelining) {
size_t pipeLen = conn->send_pipe->size + conn->recv_pipe->size;
struct curl_llist_element *curr = conn->pend_pipe->head;
while(pipeLen < MAX_PIPELINE_LENGTH && curr) {
Curl_llist_move(conn->pend_pipe, curr,
conn->send_pipe, conn->send_pipe->tail);
Curl_pgrsTime(curr->ptr, TIMER_CONNECT);
++result; /* count how many handles we moved */
curr = conn->pend_pipe->head;
++pipeLen;
}
if (result > 0)
conn->now = Curl_tvnow();
}
return result;
}
static int moveHandleFromSendToRecvPipeline(struct SessionHandle *handle,
struct connectdata *conn)
{
struct curl_llist_element *curr;
curr = conn->send_pipe->head;
while(curr) {
if(curr->ptr == handle) {
Curl_llist_move(conn->send_pipe, curr,
conn->recv_pipe, conn->recv_pipe->tail);
return 1; /* we moved a handle */
}
curr = curr->next;
}
return 0;
}
static bool isHandleAtHead(struct SessionHandle *handle,
struct curl_llist *pipeline)
{
struct curl_llist_element *curr = pipeline->head;
if(curr)
return (bool)(curr->ptr == handle);
return FALSE;
}
/* given a number of milliseconds from now to use to set the 'act before
this'-time for the transfer, to be extracted by curl_multi_timeout() */
void Curl_expire(struct SessionHandle *data, long milli)

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -436,10 +436,10 @@ static int display_error(struct connectdata *conn, PRInt32 err,
{
switch(err) {
case SEC_ERROR_BAD_PASSWORD:
failf(conn->data, "Unable to load client key: Incorrect password\n");
failf(conn->data, "Unable to load client key: Incorrect password");
return 1;
case SEC_ERROR_UNKNOWN_CERT:
failf(conn->data, "Unable to load certificate %s\n", filename);
failf(conn->data, "Unable to load certificate %s", filename);
return 1;
default:
break;
@@ -521,10 +521,10 @@ static SECStatus nss_Init_Tokens(struct connectdata * conn)
if(PK11_NeedLogin(slot) && PK11_NeedUserInit(slot)) {
if(slot == PK11_GetInternalKeySlot()) {
failf(conn->data, "The NSS database has not been initialized.\n");
failf(conn->data, "The NSS database has not been initialized");
}
else {
failf(conn->data, "The token %s has not been initialized.",
failf(conn->data, "The token %s has not been initialized",
PK11_GetTokenName(slot));
}
PK11_FreeSlot(slot);
@@ -1057,7 +1057,7 @@ int Curl_nss_send(struct connectdata *conn, /* connection data */
return CURLE_OPERATION_TIMEDOUT;
}
failf(conn->data, "SSL write: error %d\n", err);
failf(conn->data, "SSL write: error %d", err);
return -1;
}
return rc; /* number of bytes */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -35,8 +35,6 @@
#include <nks/thread.h>
#include <nks/synch.h>
#include "memory.h"
#include "memdebug.h"
typedef struct
{

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -84,7 +84,7 @@
#include <curl/curl.h>
static time_t Curl_parsedate(const char *date);
static time_t parsedate(const char *date);
const char * const Curl_wkday[] =
{"Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"};
@@ -223,7 +223,7 @@ enum assume {
DATE_TIME
};
static time_t Curl_parsedate(const char *date)
static time_t parsedate(const char *date)
{
time_t t = 0;
int wdaynum=-1; /* day of the week number, 0-6 (mon-sun) */
@@ -289,11 +289,17 @@ static time_t Curl_parsedate(const char *date)
if((tzoff == -1) &&
((end - date) == 4) &&
(val < 1300) &&
(val <= 1400) &&
(indate< date) &&
((date[-1] == '+' || date[-1] == '-'))) {
/* four digits and a value less than 1300 and it is preceded with
a plus or minus. This is a time zone indication. */
/* four digits and a value less than or equal to 1400 (to take into
account all sorts of funny time zone diffs) and it is preceded
with a plus or minus. This is a time zone indication. 1400 is
picked since +1300 is frequently used and +1400 is mentioned as
an edge number in the document "ISO C 200X Proposal: Timezone
Functions" at http://david.tribble.com/text/c0xtimezone.html If
anyone has a more authoritative source for the exact maximum time
zone offsets, please speak up! */
found = TRUE;
tzoff = (val/100 * 60 + val%100)*60;
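Worked through with a concrete value (not part of the patch): a "+0130" suffix parses to val = 130, so tzoff = (130/100 * 60 + 130%100) * 60 = (60 + 30) * 60 = 5400 seconds east of UTC.

#include <stdio.h>

int main(void)
{
  int val = 130;                                /* parsed from "+0130" */
  long tzoff = (val/100 * 60 + val%100) * 60;   /* hours + minutes, in seconds */
  printf("+%04d = %ld seconds east of UTC\n", val, tzoff);  /* 5400 */
  return 0;
}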
@@ -421,5 +427,5 @@ static time_t Curl_parsedate(const char *date)
time_t curl_getdate(const char *p, const time_t *now)
{
(void)now;
return Curl_parsedate(p);
return parsedate(p);
}

View File

@@ -356,11 +356,10 @@ int Curl_pgrsUpdate(struct connectdata *conn)
progress */
if(!(data->progress.flags & PGRS_HEADERS_OUT)) {
if(data->reqdata.resume_from) {
if(data->state.resume_from) {
fprintf(data->set.err,
"** Resuming transfer from byte position %" FORMAT_OFF_T
"\n",
data->reqdata.resume_from);
"** Resuming transfer from byte position %" FORMAT_OFF_T "\n",
data->state.resume_from);
}
fprintf(data->set.err,
" %% Total %% Received %% Xferd Average Speed Time Time Time Current\n"

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -108,7 +108,7 @@ static CURLcode Curl_qsossl_init_session(struct SessionHandle * data)
break;
case SSL_ERROR_IO:
failf(data, "SSL_Init() I/O error: %s\n", strerror(errno));
failf(data, "SSL_Init() I/O error: %s", strerror(errno));
return CURLE_SSL_CONNECT_ERROR;
case SSL_ERROR_BAD_CIPHER_SUITE:
@@ -125,7 +125,7 @@ static CURLcode Curl_qsossl_init_session(struct SessionHandle * data)
return CURLE_SSL_CERTPROBLEM;
default:
failf(data, "SSL_Init(): %s\n", SSL_Strerror(rc, NULL));
failf(data, "SSL_Init(): %s", SSL_Strerror(rc, NULL));
return CURLE_SSL_CONNECT_ERROR;
}
@@ -142,9 +142,9 @@ static CURLcode Curl_qsossl_create(struct connectdata * conn, int sockindex)
h = SSL_Create(conn->sock[sockindex], SSL_ENCRYPT);
if(!h) {
failf(conn->data, "SSL_Create() I/O error: %s\n", strerror(errno));
failf(conn->data, "SSL_Create() I/O error: %s", strerror(errno));
return CURLE_SSL_CONNECT_ERROR;
}
}
connssl->handle = h;
return CURLE_OK;
@@ -232,11 +232,11 @@ static CURLcode Curl_qsossl_handshake(struct connectdata * conn, int sockindex)
return CURLE_SSL_CERTPROBLEM;
case SSL_ERROR_IO:
failf(data, "SSL_Handshake(): %s\n", SSL_Strerror(rc, NULL));
failf(data, "SSL_Handshake(): %s", SSL_Strerror(rc, NULL));
return CURLE_SSL_CONNECT_ERROR;
default:
failf(data, "SSL_Init(): %s\n", SSL_Strerror(rc, NULL));
failf(data, "SSL_Init(): %s", SSL_Strerror(rc, NULL));
return CURLE_SSL_CONNECT_ERROR;
}
@@ -282,12 +282,12 @@ static int Curl_qsossl_close_one(struct ssl_connect_data * conn,
if(rc) {
if(rc == SSL_ERROR_IO) {
failf(data, "SSL_Destroy() I/O error: %s\n", strerror(errno));
failf(data, "SSL_Destroy() I/O error: %s", strerror(errno));
return -1;
}
/* An SSL error. */
failf(data, "SSL_Destroy() returned error %d\n", SSL_Strerror(rc, NULL));
failf(data, "SSL_Destroy() returned error %d", SSL_Strerror(rc, NULL));
return -1;
}
@@ -359,7 +359,7 @@ int Curl_qsossl_shutdown(struct connectdata * conn, int sockindex)
nread = read(conn->sock[sockindex], buf, sizeof(buf));
if(nread < 0) {
failf(data, "read: %s\n", strerror(errno));
failf(data, "read: %s", strerror(errno));
rc = -1;
}
@@ -399,12 +399,12 @@ ssize_t Curl_qsossl_send(struct connectdata * conn, int sockindex, void * mem,
return 0;
}
failf(conn->data, "SSL_Write() I/O error: %s\n", strerror(errno));
failf(conn->data, "SSL_Write() I/O error: %s", strerror(errno));
return -1;
}
/* An SSL error. */
failf(conn->data, "SSL_Write() returned error %d\n",
failf(conn->data, "SSL_Write() returned error %d",
SSL_Strerror(rc, NULL));
return -1;
}
@@ -442,11 +442,11 @@ ssize_t Curl_qsossl_recv(struct connectdata * conn, int num, char * buf,
return -1;
}
failf(conn->data, "SSL_Read() I/O error: %s\n", strerror(errno));
failf(conn->data, "SSL_Read() I/O error: %s", strerror(errno));
return -1;
default:
failf(conn->data, "SSL read error: %s\n", SSL_Strerror(nread, NULL));
failf(conn->data, "SSL read error: %s", SSL_Strerror(nread, NULL));
return -1;
}
}

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -27,6 +27,8 @@
#ifdef HAVE_SYS_POLL_H
#include <sys/poll.h>
#elif defined(HAVE_POLL_H)
#include <poll.h>
#endif
/*
@@ -49,7 +51,9 @@
* Definition of pollfd struct and constants for platforms lacking them.
*/
#if !defined(HAVE_STRUCT_POLLFD) && !defined(HAVE_SYS_POLL_H)
#if !defined(HAVE_STRUCT_POLLFD) && \
!defined(HAVE_SYS_POLL_H) && \
!defined(HAVE_POLL_H)
#define POLLIN 0x01
#define POLLPRI 0x02

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -160,7 +160,7 @@ static size_t convert_lineends(struct SessionHandle *data,
if(*startPtr == '\n') {
/* This block of incoming data starts with the
previous block's LF so get rid of it */
memcpy(startPtr, startPtr+1, size-1);
memmove(startPtr, startPtr+1, size-1);
size--;
/* and it wasn't a bare CR but a CRLF conversion instead */
data->state.crlf_conversions++;
@@ -310,10 +310,10 @@ CURLcode Curl_sendf(curl_socket_t sockfd, struct connectdata *conn,
return res;
}
static ssize_t Curl_plain_send(struct connectdata *conn,
int num,
void *mem,
size_t len)
static ssize_t send_plain(struct connectdata *conn,
int num,
void *mem,
size_t len)
{
curl_socket_t sockfd = conn->sock[num];
ssize_t bytes_written = swrite(sockfd, mem, len);
@@ -368,7 +368,7 @@ CURLcode Curl_write(struct connectdata *conn,
/* only TRUE if krb enabled */
bytes_written = Curl_sec_send(conn, num, mem, len);
else
bytes_written = Curl_plain_send(conn, num, mem, len);
bytes_written = send_plain(conn, num, mem, len);
*written = bytes_written;
retcode = (-1 != bytes_written)?CURLE_OK:CURLE_SEND_ERROR;
@@ -376,6 +376,36 @@ CURLcode Curl_write(struct connectdata *conn,
return retcode;
}
static CURLcode pausewrite(struct SessionHandle *data,
int type, /* what type of data */
char *ptr,
size_t len)
{
/* signalled to pause sending on this connection, but since we have data
we want to send we need to dup it to save a copy for when the sending
is again enabled */
struct SingleRequest *k = &data->req;
char *dupl = malloc(len);
if(!dupl)
return CURLE_OUT_OF_MEMORY;
memcpy(dupl, ptr, len);
/* store this information in the state struct for later use */
data->state.tempwrite = dupl;
data->state.tempwritesize = len;
data->state.tempwritetype = type;
/* mark the connection as RECV paused */
k->keepon |= KEEP_READ_PAUSE;
DEBUGF(infof(data, "Pausing with %d bytes in buffer for type %02x\n",
(int)len, type));
return CURLE_OK;
}
/* client_write() sends data to the write callback(s)
The bit pattern defines to what "streams" to write to. Body and/or header.
@@ -389,9 +419,33 @@ CURLcode Curl_client_write(struct connectdata *conn,
struct SessionHandle *data = conn->data;
size_t wrote;
if(data->state.cancelled) {
/* We just suck everything into a black hole */
return CURLE_OK;
/* If reading is actually paused, we're forced to append this chunk of data
to the already held data, but only if it is the same type as otherwise it
can't work and it'll return error instead. */
if(data->req.keepon & KEEP_READ_PAUSE) {
size_t newlen;
char *newptr;
if(type != data->state.tempwritetype)
/* major internal confusion */
return CURLE_RECV_ERROR;
/* figure out the new size of the data to save */
newlen = len + data->state.tempwritesize;
/* allocate the new memory area */
newptr = malloc(newlen);
if(!newptr)
return CURLE_OUT_OF_MEMORY;
/* copy the previously held data to the new area */
memcpy(newptr, data->state.tempwrite, data->state.tempwritesize);
/* copy the new data to the end of the new area */
memcpy(newptr + data->state.tempwritesize, ptr, len);
/* free the old data */
free(data->state.tempwrite);
/* update the pointer and the size */
data->state.tempwrite = newptr;
data->state.tempwritesize = newlen;
return CURLE_OK;
}
if(0 == len)
@@ -422,8 +476,11 @@ CURLcode Curl_client_write(struct connectdata *conn,
wrote = len;
}
if(CURL_WRITEFUNC_PAUSE == wrote)
return pausewrite(data, type, ptr, len);
if(wrote != len) {
failf (data, "Failed writing body");
failf(data, "Failed writing body (%d != %d)", (int)wrote, (int)len);
return CURLE_WRITE_ERROR;
}
}
@@ -441,6 +498,12 @@ CURLcode Curl_client_write(struct connectdata *conn,
regardless of the ftp transfer mode (ASCII/Image) */
wrote = writeit(ptr, 1, len, data->set.writeheader);
if(CURL_WRITEFUNC_PAUSE == wrote)
/* here we pass in the HEADER bit only since if this was body as well
then it was passed already and clearly that didn't trigger the pause,
so this is saved for later with the HEADER bit only */
return pausewrite(data, CLIENTWRITE_HEADER, ptr, len);
if(wrote != len) {
failf (data, "Failed writing header");
return CURLE_WRITE_ERROR;
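
pausewrite() is what backs the new CURL_WRITEFUNC_PAUSE return code: the undelivered chunk is stashed in state.tempwrite and KEEP_READ_PAUSE stays set until the application unpauses. A sketch of the application side, assuming the curl_easy_pause() call introduced with this release; resume() is expected to be invoked from the application's own logic once it wants data again, so this is a fragment to wire into a transfer rather than a full program:

#include <curl/curl.h>

struct ctx {
  CURL *handle;
  int paused;
};

/* write callback that asks libcurl to pause delivery after the first chunk */
static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
{
  struct ctx *c = userp;
  (void)ptr;
  if(!c->paused) {
    c->paused = 1;
    return CURL_WRITEFUNC_PAUSE;  /* libcurl keeps this chunk for later */
  }
  return size * nmemb;            /* consume data normally once resumed */
}

/* called by the application when it is ready to receive data again */
static void resume(struct ctx *c)
{
  curl_easy_pause(c->handle, CURLPAUSE_CONT);
}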

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -118,16 +118,19 @@ static int blockread_all(struct connectdata *conn, /* connection data */
* http://socks.permeo.com/protocol/socks4.protocol
*
* Note :
* Nonsupport "SOCKS 4A (Simple Extension to SOCKS 4 Protocol)"
* Set protocol4a=true for "SOCKS 4A (Simple Extension to SOCKS 4 Protocol)"
* Nonsupport "Identification Protocol (RFC1413)"
*/
CURLcode Curl_SOCKS4(const char *proxy_name,
const char *hostname,
int remote_port,
int sockindex,
struct connectdata *conn)
struct connectdata *conn,
bool protocol4a)
{
unsigned char socksreq[262]; /* room for SOCKS4 request incl. user id */
#define SOCKS4REQLEN 262
unsigned char socksreq[SOCKS4REQLEN]; /* room for SOCKS4 request incl. user
id */
int result;
CURLcode code;
curl_socket_t sock = conn->sock[sockindex];
@@ -165,8 +168,8 @@ CURLcode Curl_SOCKS4(const char *proxy_name,
socksreq[1] = 1; /* connect */
*((unsigned short*)&socksreq[2]) = htons((unsigned short)remote_port);
/* DNS resolve */
{
/* DNS resolve only for SOCKS4, not SOCKS4a */
if (!protocol4a) {
struct Curl_dns_entry *dns;
Curl_addrinfo *hp=NULL;
int rc;
@@ -225,15 +228,40 @@ CURLcode Curl_SOCKS4(const char *proxy_name,
{
ssize_t actualread;
ssize_t written;
ssize_t hostnamelen = 0;
int packetsize = 9 +
(int)strlen((char*)socksreq + 8); /* size including NUL */
/* If SOCKS4a, set special invalid IP address 0.0.0.x */
if (protocol4a) {
socksreq[4] = 0;
socksreq[5] = 0;
socksreq[6] = 0;
socksreq[7] = 1;
/* If still enough room in buffer, also append hostname */
hostnamelen = (ssize_t)strlen(hostname) + 1; /* length including NUL */
if (packetsize + hostnamelen <= SOCKS4REQLEN)
strcpy((char*)socksreq + packetsize, hostname);
else
hostnamelen = 0; /* Flag: hostname did not fit in buffer */
}
/* Send request */
code = Curl_write(conn, sock, (char *)socksreq, packetsize, &written);
if((code != CURLE_OK) || (written != packetsize)) {
code = Curl_write(conn, sock, (char *)socksreq, packetsize + hostnamelen,
&written);
if((code != CURLE_OK) || (written != packetsize + hostnamelen)) {
failf(data, "Failed to send SOCKS4 connect request.");
return CURLE_COULDNT_CONNECT;
}
if (protocol4a && hostnamelen == 0) {
/* SOCKS4a with very long hostname - send that name separately */
hostnamelen = (ssize_t)strlen(hostname) + 1;
code = Curl_write(conn, sock, (char *)hostname, hostnamelen, &written);
if((code != CURLE_OK) || (written != hostnamelen)) {
failf(data, "Failed to send SOCKS4 connect request.");
return CURLE_COULDNT_CONNECT;
}
}
packetsize = 8; /* receive data size */
@@ -275,7 +303,10 @@ CURLcode Curl_SOCKS4(const char *proxy_name,
switch(socksreq[1])
{
case 90:
infof(data, "SOCKS4 request granted.\n");
if (protocol4a)
infof(data, "SOCKS4a request granted.\n");
else
infof(data, "SOCKS4 request granted.\n");
break;
case 91:
failf(data,
@@ -359,6 +390,17 @@ CURLcode Curl_SOCKS5(const char *proxy_name,
curl_socket_t sock = conn->sock[sockindex];
struct SessionHandle *data = conn->data;
long timeout;
bool socks5_resolve_local = (bool)(data->set.proxytype == CURLPROXY_SOCKS5);
const size_t hostname_len = strlen(hostname);
ssize_t packetsize = 0;
/* RFC1928 chapter 5 specifies max 255 chars for domain name in packet */
if(!socks5_resolve_local && hostname_len > 255)
{
infof(conn->data,"SOCKS5: server resolving disabled for hostnames of "
"length > 255 [actual len=%d]\n", hostname_len);
socks5_resolve_local = TRUE;
}
/* get timeout */
if(data->set.timeout && data->set.connecttimeout) {
@@ -522,13 +564,26 @@ CURLcode Curl_SOCKS5(const char *proxy_name,
socksreq[0] = 5; /* version (SOCKS5) */
socksreq[1] = 1; /* connect */
socksreq[2] = 0; /* must be zero */
socksreq[3] = 1; /* IPv4 = 1 */
{
if(!socks5_resolve_local) {
packetsize = (ssize_t)(5 + hostname_len + 2);
socksreq[3] = 3; /* ATYP: domain name = 3 */
socksreq[4] = (char) hostname_len; /* address length */
memcpy(&socksreq[5], hostname, hostname_len); /* address bytes w/o NULL */
*((unsigned short*)&socksreq[hostname_len+5]) =
htons((unsigned short)remote_port);
}
else {
struct Curl_dns_entry *dns;
Curl_addrinfo *hp=NULL;
int rc = Curl_resolv(conn, hostname, remote_port, &dns);
packetsize = 10;
socksreq[3] = 1; /* IPv4 = 1 */
if(rc == CURLRESOLV_ERROR)
return CURLE_COULDNT_RESOLVE_HOST;
@@ -564,40 +619,76 @@ CURLcode Curl_SOCKS5(const char *proxy_name,
hostname);
return CURLE_COULDNT_RESOLVE_HOST;
}
*((unsigned short*)&socksreq[8]) = htons((unsigned short)remote_port);
}
*((unsigned short*)&socksreq[8]) = htons((unsigned short)remote_port);
code = Curl_write(conn, sock, (char *)socksreq, packetsize, &written);
if((code != CURLE_OK) || (written != packetsize)) {
failf(data, "Failed to send SOCKS5 connect request.");
return CURLE_COULDNT_CONNECT;
}
{
const int packetsize = 10;
packetsize = 10; /* minimum packet size is 10 */
code = Curl_write(conn, sock, (char *)socksreq, packetsize, &written);
if((code != CURLE_OK) || (written != packetsize)) {
failf(data, "Failed to send SOCKS5 connect request.");
result = blockread_all(conn, sock, (char *)socksreq, packetsize,
&actualread, timeout);
if((result != CURLE_OK) || (actualread != packetsize)) {
failf(data, "Failed to receive SOCKS5 connect request ack.");
return CURLE_COULDNT_CONNECT;
}
if(socksreq[0] != 5) { /* version */
failf(data,
"SOCKS5 reply has wrong version, version should be 5.");
return CURLE_COULDNT_CONNECT;
}
if(socksreq[1] != 0) { /* Anything besides 0 is an error */
failf(data,
"Can't complete SOCKS5 connection to %d.%d.%d.%d:%d. (%d)",
(unsigned char)socksreq[4], (unsigned char)socksreq[5],
(unsigned char)socksreq[6], (unsigned char)socksreq[7],
(unsigned int)ntohs(*(unsigned short*)(&socksreq[8])),
socksreq[1]);
return CURLE_COULDNT_CONNECT;
}
}
result = blockread_all(conn, sock, (char *)socksreq, packetsize,
/* Fix: in general, returned BND.ADDR is variable length parameter by RFC
1928, so the reply packet should be read until the end to avoid errors at
subsequent protocol level.
+----+-----+-------+------+----------+----------+
|VER | REP | RSV | ATYP | BND.ADDR | BND.PORT |
+----+-----+-------+------+----------+----------+
| 1 | 1 | X'00' | 1 | Variable | 2 |
+----+-----+-------+------+----------+----------+
ATYP:
o IP v4 address: X'01', BND.ADDR = 4 byte
o domain name: X'03', BND.ADDR = [ 1 byte length, string ]
o IP v6 address: X'04', BND.ADDR = 16 byte
*/
/* Calculate real packet size */
if(socksreq[3] == 3) {
/* domain name */
int addrlen = (int) socksreq[4];
packetsize = 5 + addrlen + 2;
}
else if(socksreq[3] == 4) {
/* IPv6 */
packetsize = 4 + 16 + 2;
}
/* At this point we already read first 10 bytes */
if(packetsize > 10) {
packetsize -= 10;
result = blockread_all(conn, sock, (char *)&socksreq[10], packetsize,
&actualread, timeout);
if((result != CURLE_OK) || (actualread != packetsize)) {
failf(data, "Failed to receive SOCKS5 connect request ack.");
return CURLE_COULDNT_CONNECT;
}
if(socksreq[0] != 5) { /* version */
failf(data,
"SOCKS5 reply has wrong version, version should be 5.");
return CURLE_COULDNT_CONNECT;
}
if(socksreq[1] != 0) { /* Anything besides 0 is an error */
failf(data,
"Can't complete SOCKS5 connection to %d.%d.%d.%d:%d. (%d)",
(unsigned char)socksreq[4], (unsigned char)socksreq[5],
(unsigned char)socksreq[6], (unsigned char)socksreq[7],
(unsigned int)ntohs(*(unsigned short*)(&socksreq[8])),
socksreq[1]);
return CURLE_COULDNT_CONNECT;
}
}
Curl_nonblock(sock, TRUE);
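
With the protocol4a flag and the SOCKS5 remote-resolution branch above, host name lookup can be left to the proxy. An application selects that behaviour through the proxy type; a minimal sketch assuming CURLPROXY_SOCKS4A and CURLPROXY_SOCKS5_HOSTNAME are the matching public values, with placeholder host names:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(!curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_PROXY, "socksproxy.example.com:1080");
  /* CURLPROXY_SOCKS4A sends the host name in the request (0.0.0.x trick);
     CURLPROXY_SOCKS5_HOSTNAME does the same for SOCKS5 (ATYP 3) */
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS4A);

  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  return 0;
}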

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -24,14 +24,15 @@
***************************************************************************/
/*
* This function logs in to a SOCKS4 proxy and sends the specifics to the
* This function logs in to a SOCKS4(a) proxy and sends the specifics to the
* final destination server.
*/
CURLcode Curl_SOCKS4(const char *proxy_name,
const char *hostname,
int remote_port,
int sockindex,
struct connectdata *conn);
struct connectdata *conn,
bool protocol4a);
/*
* This function logs in to a SOCKS5 proxy and sends the specifics to the

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -374,7 +374,7 @@ static CURLcode ssh_getworkingpath(struct connectdata *conn,
char *working_path;
int working_path_len;
working_path = curl_easy_unescape(data, data->reqdata.path, 0,
working_path = curl_easy_unescape(data, data->state.path, 0,
&working_path_len);
if(!working_path)
return CURLE_OUT_OF_MEMORY;
@@ -432,7 +432,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
{
CURLcode result = CURLE_OK;
struct SessionHandle *data = conn->data;
struct SSHPROTO *sftp_scp = data->reqdata.proto.ssh;
struct SSHPROTO *sftp_scp = data->state.proto.ssh;
struct ssh_conn *sshc = &conn->proto.sshc;
curl_socket_t sock = conn->sock[FIRSTSOCKET];
#ifdef CURL_LIBSSH2_DEBUG
@@ -733,7 +733,11 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
break;
}
else {
failf(data, "Failure initialising sftp session\n");
char *err_msg;
(void)libssh2_session_last_error(sshc->ssh_session,
&err_msg, NULL, 0);
failf(data, "Failure initializing sftp session: %s", err_msg);
state(conn, SSH_SESSION_FREE);
sshc->actualcode = CURLE_FAILED_INIT;
break;
@@ -1350,7 +1354,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
}
/* since this counts what we send to the client, we include the newline
in this counter */
data->reqdata.keep.bytecount += sshc->readdir_len+1;
data->req.bytecount += sshc->readdir_len+1;
/* output debug output if that is requested */
if(data->set.verbose) {
@@ -1469,7 +1473,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
Curl_debug(data, CURLINFO_DATA_OUT, sshc->readdir_line,
sshc->readdir_currLen, conn);
}
data->reqdata.keep.bytecount += sshc->readdir_currLen;
data->req.bytecount += sshc->readdir_currLen;
}
Curl_safefree(sshc->readdir_line);
sshc->readdir_line = NULL;
@@ -1533,18 +1537,18 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
* libssh2_sftp_open() didn't return an error, so maybe the server
* just doesn't support stat()
*/
data->reqdata.size = -1;
data->reqdata.maxdownload = -1;
data->req.size = -1;
data->req.maxdownload = -1;
}
else {
data->reqdata.size = attrs.filesize;
data->reqdata.maxdownload = attrs.filesize;
data->req.size = attrs.filesize;
data->req.maxdownload = attrs.filesize;
Curl_pgrsSetDownloadSize(data, attrs.filesize);
}
}
/* Setup the actual download */
result = Curl_setup_transfer(conn, FIRSTSOCKET, data->reqdata.size,
result = Curl_setup_transfer(conn, FIRSTSOCKET, data->req.size,
FALSE, NULL, -1, NULL);
if(result) {
state(conn, SSH_SFTP_CLOSE);
@@ -1648,7 +1652,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
}
/* upload data */
result = Curl_setup_transfer(conn, -1, data->reqdata.size, FALSE, NULL,
result = Curl_setup_transfer(conn, -1, data->req.size, FALSE, NULL,
FIRSTSOCKET, NULL);
if(result) {
@@ -1696,7 +1700,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
/* download data */
bytecount = (curl_off_t)sb.st_size;
data->reqdata.maxdownload = (curl_off_t)sb.st_size;
data->req.maxdownload = (curl_off_t)sb.st_size;
result = Curl_setup_transfer(conn, FIRSTSOCKET,
bytecount, FALSE, NULL, -1, NULL);
@@ -1736,7 +1740,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
break;
}
else if(rc) {
infof(data, "Failed to get channel EOF\n");
infof(data, "Failed to get channel EOF: %d\n", rc);
}
}
state(conn, SSH_SCP_WAIT_CLOSE);
@@ -1749,7 +1753,7 @@ static CURLcode ssh_statemach_act(struct connectdata *conn)
break;
}
else if(rc) {
infof(data, "Channel failed to close\n");
infof(data, "Channel failed to close: %d\n", rc);
}
}
state(conn, SSH_SCP_CHANNEL_FREE);
@@ -1849,14 +1853,14 @@ static CURLcode ssh_init(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
struct SSHPROTO *ssh;
if(data->reqdata.proto.ssh)
if(data->state.proto.ssh)
return CURLE_OK;
ssh = (struct SSHPROTO *)calloc(sizeof(struct SSHPROTO), 1);
if(!ssh)
return CURLE_OUT_OF_MEMORY;
data->reqdata.proto.ssh = ssh;
data->state.proto.ssh = ssh;
return CURLE_OK;
}
@@ -1989,7 +1993,7 @@ static CURLcode ssh_do(struct connectdata *conn, bool *done)
*done = FALSE; /* default to false */
data->reqdata.size = -1; /* make sure this is unknown at this point */
data->req.size = -1; /* make sure this is unknown at this point */
Curl_pgrsSetUploadCounter(data, 0);
Curl_pgrsSetDownloadCounter(data, 0);
@@ -2011,8 +2015,8 @@ static CURLcode scp_disconnect(struct connectdata *conn)
{
CURLcode result;
Curl_safefree(conn->data->reqdata.proto.ssh);
conn->data->reqdata.proto.ssh = NULL;
Curl_safefree(conn->data->state.proto.ssh);
conn->data->state.proto.ssh = NULL;
state(conn, SSH_SESSION_DISCONNECT);
@@ -2046,7 +2050,7 @@ static CURLcode scp_done(struct connectdata *conn, CURLcode status,
}
if(done) {
struct SSHPROTO *sftp_scp = conn->data->reqdata.proto.ssh;
struct SSHPROTO *sftp_scp = conn->data->state.proto.ssh;
Curl_safefree(sftp_scp->path);
sftp_scp->path = NULL;
Curl_pgrsDone(conn);
@@ -2154,8 +2158,8 @@ static CURLcode sftp_disconnect(struct connectdata *conn)
DEBUGF(infof(conn->data, "SSH DISCONNECT starts now\n"));
Curl_safefree(conn->data->reqdata.proto.ssh);
conn->data->reqdata.proto.ssh = NULL;
Curl_safefree(conn->data->state.proto.ssh);
conn->data->state.proto.ssh = NULL;
state(conn, SSH_SFTP_SHUTDOWN);
result = ssh_easy_statemach(conn);

View File

@@ -96,6 +96,7 @@ bool
Curl_clone_ssl_config(struct ssl_config_data *source,
struct ssl_config_data *dest)
{
dest->sessionid = source->sessionid;
dest->verifyhost = source->verifyhost;
dest->verifypeer = source->verifypeer;
dest->version = source->version;
@@ -383,6 +384,9 @@ CURLcode Curl_ssl_addsessionid(struct connectdata *conn,
store->sessionid = ssl_sessionid;
store->idsize = idsize;
store->age = data->state.sessionage; /* set current age */
if (store->name)
/* free it if there's one already present */
free(store->name);
store->name = clone_host; /* clone host name */
store->remote_port = conn->remote_port; /* port number */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -426,7 +426,7 @@ int cert_stuff(struct connectdata *conn,
key_file=cert_file;
case SSL_FILETYPE_ASN1:
if(SSL_CTX_use_PrivateKey_file(ctx, key_file, file_type) != 1) {
failf(data, "unable to set private key file: '%s' type %s\n",
failf(data, "unable to set private key file: '%s' type %s",
key_file, key_type?key_type:"PEM");
return 0;
}
@@ -440,7 +440,7 @@ int cert_stuff(struct connectdata *conn,
UI_METHOD *ui_method = UI_OpenSSL();
#endif
if(!key_file || !key_file[0]) {
failf(data, "no key set to load from crypto engine\n");
failf(data, "no key set to load from crypto engine");
return 0;
}
/* the typecast below was added to please mingw32 */
@@ -451,40 +451,40 @@ int cert_stuff(struct connectdata *conn,
#endif
data->set.str[STRING_KEY_PASSWD]);
if(!priv_key) {
failf(data, "failed to load private key from crypto engine\n");
failf(data, "failed to load private key from crypto engine");
return 0;
}
if(SSL_CTX_use_PrivateKey(ctx, priv_key) != 1) {
failf(data, "unable to set private key\n");
failf(data, "unable to set private key");
EVP_PKEY_free(priv_key);
return 0;
}
EVP_PKEY_free(priv_key); /* we don't need the handle any more... */
}
else {
failf(data, "crypto engine not set, can't load private key\n");
failf(data, "crypto engine not set, can't load private key");
return 0;
}
}
break;
#else
failf(data, "file type ENG for private key not supported\n");
failf(data, "file type ENG for private key not supported");
return 0;
#endif
case SSL_FILETYPE_PKCS12:
if(!cert_done) {
failf(data, "file type P12 for private key not supported\n");
failf(data, "file type P12 for private key not supported");
return 0;
}
break;
default:
failf(data, "not supported file type for private key\n");
failf(data, "not supported file type for private key");
return 0;
}
ssl=SSL_new(ctx);
if(NULL == ssl) {
failf(data,"unable to create an SSL structure\n");
failf(data,"unable to create an SSL structure");
return 0;
}
@@ -850,9 +850,9 @@ int Curl_ossl_close_all(struct SessionHandle *data)
return 0;
}
static int Curl_ASN1_UTCTIME_output(struct connectdata *conn,
const char *prefix,
const ASN1_UTCTIME *tm)
static int asn1_output(struct connectdata *conn,
const char *prefix,
const ASN1_UTCTIME *tm)
{
const char *asn1_string;
int gmt=FALSE;
@@ -1256,8 +1256,8 @@ static void ssl_tls_trace(int direction, int ssl_ver, int content_type,
/* ====================================================== */
static CURLcode
Curl_ossl_connect_step1(struct connectdata *conn,
int sockindex)
ossl_connect_step1(struct connectdata *conn,
int sockindex)
{
CURLcode retcode = CURLE_OK;
@@ -1443,8 +1443,8 @@ Curl_ossl_connect_step1(struct connectdata *conn,
}
static CURLcode
Curl_ossl_connect_step2(struct connectdata *conn,
int sockindex, long *timeout_ms)
ossl_connect_step2(struct connectdata *conn,
int sockindex, long *timeout_ms)
{
struct SessionHandle *data = conn->data;
int err;
@@ -1569,14 +1569,105 @@ Curl_ossl_connect_step2(struct connectdata *conn,
}
}
static CURLcode
Curl_ossl_connect_step3(struct connectdata *conn,
int sockindex)
/*
* Get the server cert, verify it and show it etc, only call failf() if the
* 'strict' argument is TRUE as otherwise all this is for informational
* purposes only!
*
* We check certificates to authenticate the server; otherwise we risk
* man-in-the-middle attack.
*/
static CURLcode servercert(struct connectdata *conn,
struct ssl_connect_data *connssl,
bool strict)
{
CURLcode retcode = CURLE_OK;
char * str;
char *str;
long lerr;
ASN1_TIME *certdate;
struct SessionHandle *data = conn->data;
connssl->server_cert = SSL_get_peer_certificate(connssl->handle);
if(!connssl->server_cert) {
if(strict)
failf(data, "SSL: couldn't get peer certificate!");
return CURLE_PEER_FAILED_VERIFICATION;
}
infof (data, "Server certificate:\n");
str = X509_NAME_oneline(X509_get_subject_name(connssl->server_cert),
NULL, 0);
if(!str) {
if(strict)
failf(data, "SSL: couldn't get X509-subject!");
X509_free(connssl->server_cert);
connssl->server_cert = NULL;
return CURLE_SSL_CONNECT_ERROR;
}
infof(data, "\t subject: %s\n", str);
CRYPTO_free(str);
certdate = X509_get_notBefore(connssl->server_cert);
asn1_output(conn, "\t start date: ", certdate);
certdate = X509_get_notAfter(connssl->server_cert);
asn1_output(conn, "\t expire date: ", certdate);
if(data->set.ssl.verifyhost) {
retcode = verifyhost(conn, connssl->server_cert);
if(retcode) {
X509_free(connssl->server_cert);
connssl->server_cert = NULL;
return retcode;
}
}
str = X509_NAME_oneline(X509_get_issuer_name(connssl->server_cert),
NULL, 0);
if(!str) {
if(strict)
failf(data, "SSL: couldn't get X509-issuer name!");
retcode = CURLE_SSL_CONNECT_ERROR;
}
else {
infof(data, "\t issuer: %s\n", str);
CRYPTO_free(str);
/* We could do all sorts of certificate verification stuff here before
deallocating the certificate. */
lerr = data->set.ssl.certverifyresult=
SSL_get_verify_result(connssl->handle);
if(data->set.ssl.certverifyresult != X509_V_OK) {
if(data->set.ssl.verifypeer) {
/* We probably never reach this, because SSL_connect() will fail
and we return earlyer if verifypeer is set? */
if(strict)
failf(data, "SSL certificate verify result: %s (%ld)",
X509_verify_cert_error_string(lerr), lerr);
retcode = CURLE_PEER_FAILED_VERIFICATION;
}
else
infof(data, "SSL certificate verify result: %s (%ld),"
" continuing anyway.\n",
X509_verify_cert_error_string(lerr), lerr);
}
else
infof(data, "SSL certificate verify ok.\n");
}
X509_free(connssl->server_cert);
connssl->server_cert = NULL;
connssl->connecting_state = ssl_connect_done;
return retcode;
}
static CURLcode
ossl_connect_step3(struct connectdata *conn,
int sockindex)
{
CURLcode retcode = CURLE_OK;
void *ssl_sessionid=NULL;
struct SessionHandle *data = conn->data;
struct ssl_connect_data *connssl = &conn->ssl[sockindex];
@@ -1615,88 +1706,28 @@ Curl_ossl_connect_step3(struct connectdata *conn,
}
/* Get server's certificate (note: beware of dynamic allocation) - opt */
/* major serious hack alert -- we should check certificates
* to authenticate the server; otherwise we risk man-in-the-middle
* attack
/*
* We check certificates to authenticate the server; otherwise we risk
* man-in-the-middle attack; NEVERTHELESS, if we're told explicitly not to
* verify the peer ignore faults and failures from the server cert
* operations.
*/
connssl->server_cert = SSL_get_peer_certificate(connssl->handle);
if(!connssl->server_cert) {
failf(data, "SSL: couldn't get peer certificate!");
return CURLE_PEER_FAILED_VERIFICATION;
}
infof (data, "Server certificate:\n");
if(!data->set.ssl.verifypeer)
(void)servercert(conn, connssl, FALSE);
else
retcode = servercert(conn, connssl, TRUE);
str = X509_NAME_oneline(X509_get_subject_name(connssl->server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-subject!");
X509_free(connssl->server_cert);
connssl->server_cert = NULL;
return CURLE_SSL_CONNECT_ERROR;
}
infof(data, "\t subject: %s\n", str);
CRYPTO_free(str);
certdate = X509_get_notBefore(connssl->server_cert);
Curl_ASN1_UTCTIME_output(conn, "\t start date: ", certdate);
certdate = X509_get_notAfter(connssl->server_cert);
Curl_ASN1_UTCTIME_output(conn, "\t expire date: ", certdate);
if(data->set.ssl.verifyhost) {
retcode = verifyhost(conn, connssl->server_cert);
if(retcode) {
X509_free(connssl->server_cert);
connssl->server_cert = NULL;
return retcode;
}
}
str = X509_NAME_oneline(X509_get_issuer_name(connssl->server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-issuer name!");
retcode = CURLE_SSL_CONNECT_ERROR;
}
else {
infof(data, "\t issuer: %s\n", str);
CRYPTO_free(str);
/* We could do all sorts of certificate verification stuff here before
deallocating the certificate. */
lerr = data->set.ssl.certverifyresult=
SSL_get_verify_result(connssl->handle);
if(data->set.ssl.certverifyresult != X509_V_OK) {
if(data->set.ssl.verifypeer) {
/* We probably never reach this, because SSL_connect() will fail
and we return earlyer if verifypeer is set? */
failf(data, "SSL certificate verify result: %s (%ld)",
X509_verify_cert_error_string(lerr), lerr);
retcode = CURLE_PEER_FAILED_VERIFICATION;
}
else
infof(data, "SSL certificate verify result: %s (%ld),"
" continuing anyway.\n",
X509_verify_cert_error_string(lerr), lerr);
}
else
infof(data, "SSL certificate verify ok.\n");
}
X509_free(connssl->server_cert);
connssl->server_cert = NULL;
connssl->connecting_state = ssl_connect_done;
if(CURLE_OK == retcode)
connssl->connecting_state = ssl_connect_done;
return retcode;
}
static CURLcode
Curl_ossl_connect_common(struct connectdata *conn,
int sockindex,
bool nonblocking,
bool *done)
ossl_connect_common(struct connectdata *conn,
int sockindex,
bool nonblocking,
bool *done)
{
CURLcode retcode;
struct SessionHandle *data = conn->data;
@@ -1705,7 +1736,7 @@ Curl_ossl_connect_common(struct connectdata *conn,
long timeout_ms;
if(ssl_connect_1==connssl->connecting_state) {
retcode = Curl_ossl_connect_step1(conn, sockindex);
retcode = ossl_connect_step1(conn, sockindex);
if(retcode)
return retcode;
}
@@ -1749,7 +1780,7 @@ Curl_ossl_connect_common(struct connectdata *conn,
}
/* get the timeout from step2 to avoid computing it twice. */
retcode = Curl_ossl_connect_step2(conn, sockindex, &timeout_ms);
retcode = ossl_connect_step2(conn, sockindex, &timeout_ms);
if(retcode)
return retcode;
@@ -1757,7 +1788,7 @@ Curl_ossl_connect_common(struct connectdata *conn,
if(ssl_connect_3==connssl->connecting_state) {
retcode = Curl_ossl_connect_step3(conn, sockindex);
retcode = ossl_connect_step3(conn, sockindex);
if(retcode)
return retcode;
}
@@ -1780,7 +1811,7 @@ Curl_ossl_connect_nonblocking(struct connectdata *conn,
int sockindex,
bool *done)
{
return Curl_ossl_connect_common(conn, sockindex, TRUE, done);
return ossl_connect_common(conn, sockindex, TRUE, done);
}
CURLcode
@@ -1790,7 +1821,7 @@ Curl_ossl_connect(struct connectdata *conn,
CURLcode retcode;
bool done = FALSE;
retcode = Curl_ossl_connect_common(conn, sockindex, FALSE, &done);
retcode = ossl_connect_common(conn, sockindex, FALSE, &done);
if(retcode)
return retcode;
@@ -1824,19 +1855,19 @@ ssize_t Curl_ossl_send(struct connectdata *conn,
equivalent. */
return 0;
case SSL_ERROR_SYSCALL:
failf(conn->data, "SSL_write() returned SYSCALL, errno = %d\n",
failf(conn->data, "SSL_write() returned SYSCALL, errno = %d",
SOCKERRNO);
return -1;
case SSL_ERROR_SSL:
/* A failure in the SSL library occurred, usually a protocol error.
The OpenSSL error queue contains more information on the error. */
sslerror = ERR_get_error();
failf(conn->data, "SSL_write() error: %s\n",
failf(conn->data, "SSL_write() error: %s",
ERR_error_string(sslerror, error_buffer));
return -1;
}
/* a true error */
failf(conn->data, "SSL_write() return error %d\n", err);
failf(conn->data, "SSL_write() return error %d", err);
return -1;
}
return (ssize_t)rc; /* number of bytes */
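
servercert() is now called in strict mode only when peer verification is requested; otherwise the same checks run but merely log. From the application side that switch is CURLOPT_SSL_VERIFYPEER; a brief sketch, where the URL and CA bundle path are placeholders:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  CURLcode rc;
  if(!curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  /* with verifypeer enabled, servercert() runs strict and a bad
     certificate fails the transfer */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
  curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/certs/ca-bundle.crt");

  rc = curl_easy_perform(curl);
  if(rc == CURLE_PEER_FAILED_VERIFICATION || rc == CURLE_SSL_CACERT)
    fprintf(stderr, "server certificate could not be verified\n");

  /* with CURLOPT_SSL_VERIFYPEER set to 0L the checks still run but only
     report via infof(), matching the non-strict servercert() call */
  curl_easy_cleanup(curl);
  return (int)rc;
}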

View File

@@ -125,8 +125,8 @@ static void printsub(struct SessionHandle *data,
size_t length);
static void suboption(struct connectdata *);
static CURLcode Curl_telnet(struct connectdata *conn, bool *done);
static CURLcode Curl_telnet_done(struct connectdata *conn,
static CURLcode telnet_do(struct connectdata *conn, bool *done);
static CURLcode telnet_done(struct connectdata *conn,
CURLcode, bool premature);
/* For negotiation compliant to RFC 1143 */
@@ -182,8 +182,8 @@ struct TELNET {
const struct Curl_handler Curl_handler_telnet = {
"TELNET", /* scheme */
ZERO_NULL, /* setup_connection */
Curl_telnet, /* do_it */
Curl_telnet_done, /* done */
telnet_do, /* do_it */
telnet_done, /* done */
ZERO_NULL, /* do_more */
ZERO_NULL, /* connect_it */
ZERO_NULL, /* connecting */
@@ -245,7 +245,7 @@ CURLcode init_telnet(struct connectdata *conn)
if(!tn)
return CURLE_OUT_OF_MEMORY;
conn->data->reqdata.proto.telnet = (void *)tn; /* make us known */
conn->data->state.proto.telnet = (void *)tn; /* make us known */
tn->telrcv_state = CURL_TS_DATA;
@@ -264,7 +264,7 @@ CURLcode init_telnet(struct connectdata *conn)
static void negotiate(struct connectdata *conn)
{
int i;
struct TELNET *tn = (struct TELNET *) conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *) conn->data->state.proto.telnet;
for(i = 0;i < CURL_NTELOPTS;i++)
{
@@ -340,7 +340,7 @@ static void send_negotiation(struct connectdata *conn, int cmd, int option)
static
void set_remote_option(struct connectdata *conn, int option, int newstate)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
if(newstate == CURL_YES)
{
switch(tn->him[option])
@@ -422,7 +422,7 @@ void set_remote_option(struct connectdata *conn, int option, int newstate)
static
void rec_will(struct connectdata *conn, int option)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
switch(tn->him[option])
{
case CURL_NO:
@@ -475,7 +475,7 @@ void rec_will(struct connectdata *conn, int option)
static
void rec_wont(struct connectdata *conn, int option)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
switch(tn->him[option])
{
case CURL_NO:
@@ -520,7 +520,7 @@ void rec_wont(struct connectdata *conn, int option)
static void
set_local_option(struct connectdata *conn, int option, int newstate)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
if(newstate == CURL_YES)
{
switch(tn->us[option])
@@ -602,7 +602,7 @@ set_local_option(struct connectdata *conn, int option, int newstate)
static
void rec_do(struct connectdata *conn, int option)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
switch(tn->us[option])
{
case CURL_NO:
@@ -655,7 +655,7 @@ void rec_do(struct connectdata *conn, int option)
static
void rec_dont(struct connectdata *conn, int option)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
switch(tn->us[option])
{
case CURL_NO:
@@ -817,7 +817,7 @@ static CURLcode check_telnet_options(struct connectdata *conn)
char option_arg[256];
char *buf;
struct SessionHandle *data = conn->data;
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
/* Add the user name as an environment variable if it
was given on the command line */
@@ -888,7 +888,7 @@ static void suboption(struct connectdata *conn)
char varname[128];
char varval[128];
struct SessionHandle *data = conn->data;
struct TELNET *tn = (struct TELNET *)data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)data->state.proto.telnet;
printsub(data, '<', (unsigned char *)tn->subbuffer, CURL_SB_LEN(tn)+2);
switch (CURL_SB_GET(tn)) {
@@ -956,7 +956,7 @@ void telrcv(struct connectdata *conn,
int in = 0;
int startwrite=-1;
struct SessionHandle *data = conn->data;
struct TELNET *tn = (struct TELNET *)data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)data->state.proto.telnet;
#define startskipping() \
if(startwrite >= 0) \
@@ -1117,22 +1117,22 @@ void telrcv(struct connectdata *conn,
bufferflush();
}
static CURLcode Curl_telnet_done(struct connectdata *conn,
static CURLcode telnet_done(struct connectdata *conn,
CURLcode status, bool premature)
{
struct TELNET *tn = (struct TELNET *)conn->data->reqdata.proto.telnet;
struct TELNET *tn = (struct TELNET *)conn->data->state.proto.telnet;
(void)status; /* unused */
(void)premature; /* not used */
curl_slist_free_all(tn->telnet_vars);
free(conn->data->reqdata.proto.telnet);
conn->data->reqdata.proto.telnet = NULL;
free(conn->data->state.proto.telnet);
conn->data->state.proto.telnet = NULL;
return CURLE_OK;
}
static CURLcode Curl_telnet(struct connectdata *conn, bool *done)
static CURLcode telnet_do(struct connectdata *conn, bool *done)
{
CURLcode code;
struct SessionHandle *data = conn->data;
@@ -1166,7 +1166,7 @@ static CURLcode Curl_telnet(struct connectdata *conn, bool *done)
if(code)
return code;
tn = (struct TELNET *)data->reqdata.proto.telnet;
tn = (struct TELNET *)data->state.proto.telnet;
code = check_telnet_options(conn);
if(code)

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -151,11 +151,11 @@ typedef struct tftp_state_data {
/* Forward declarations */
static CURLcode tftp_rx(tftp_state_data_t *state, tftp_event_t event) ;
static CURLcode tftp_tx(tftp_state_data_t *state, tftp_event_t event) ;
static CURLcode Curl_tftp_connect(struct connectdata *conn, bool *done);
static CURLcode Curl_tftp(struct connectdata *conn, bool *done);
static CURLcode Curl_tftp_done(struct connectdata *conn,
static CURLcode tftp_connect(struct connectdata *conn, bool *done);
static CURLcode tftp_do(struct connectdata *conn, bool *done);
static CURLcode tftp_done(struct connectdata *conn,
CURLcode, bool premature);
static CURLcode Curl_tftp_setup_connection(struct connectdata * conn);
static CURLcode tftp_setup_connection(struct connectdata * conn);
/*
@@ -164,11 +164,11 @@ static CURLcode Curl_tftp_setup_connection(struct connectdata * conn);
const struct Curl_handler Curl_handler_tftp = {
"TFTP", /* scheme */
Curl_tftp_setup_connection, /* setup_connection */
Curl_tftp, /* do_it */
Curl_tftp_done, /* done */
tftp_setup_connection, /* setup_connection */
tftp_do, /* do_it */
tftp_done, /* done */
ZERO_NULL, /* do_more */
Curl_tftp_connect, /* connect_it */
tftp_connect, /* connect_it */
ZERO_NULL, /* connecting */
ZERO_NULL, /* doing */
ZERO_NULL, /* proto_getsock */
@@ -306,7 +306,7 @@ static CURLcode tftp_send_first(tftp_state_data_t *state, tftp_event_t event)
if(data->set.upload) {
/* If we are uploading, send an WRQ */
setpacketevent(&state->spacket, TFTP_EVENT_WRQ);
state->conn->data->reqdata.upload_fromhere =
state->conn->data->req.upload_fromhere =
(char *)&state->spacket.data[4];
if(data->set.infilesize != -1)
Curl_pgrsSetUploadSize(data, data->set.infilesize);
@@ -317,7 +317,7 @@ static CURLcode tftp_send_first(tftp_state_data_t *state, tftp_event_t event)
}
/* As RFC3617 describes the separator slash is not actually part of the
file name so we skip the always-present first letter of the path string. */
filename = curl_easy_unescape(data, &state->conn->data->reqdata.path[1], 0,
filename = curl_easy_unescape(data, &state->conn->data->state.path[1], 0,
NULL);
if(!filename)
return CURLE_OUT_OF_MEMORY;
@@ -331,7 +331,7 @@ static CURLcode tftp_send_first(tftp_state_data_t *state, tftp_event_t event)
state->conn->ip_addr->ai_addr,
state->conn->ip_addr->ai_addrlen);
if(sbytes < 0) {
failf(data, "%s\n", Curl_strerror(state->conn, SOCKERRNO));
failf(data, "%s", Curl_strerror(state->conn, SOCKERRNO));
}
Curl_safefree(filename);
break;
@@ -353,7 +353,7 @@ static CURLcode tftp_send_first(tftp_state_data_t *state, tftp_event_t event)
break;
default:
failf(state->conn->data, "tftp_send_first: internal error\n");
failf(state->conn->data, "tftp_send_first: internal error");
break;
}
return res;
@@ -384,7 +384,7 @@ static CURLcode tftp_rx(tftp_state_data_t *state, tftp_event_t event)
"Received unexpected DATA packet block %d\n", rblock);
state->retries++;
if(state->retries>state->retry_max) {
failf(data, "tftp_rx: giving up waiting for block %d\n",
failf(data, "tftp_rx: giving up waiting for block %d",
state->block+1);
return CURLE_TFTP_ILLEGAL;
}
@@ -399,7 +399,7 @@ static CURLcode tftp_rx(tftp_state_data_t *state, tftp_event_t event)
(struct sockaddr *)&state->remote_addr,
state->remote_addrlen);
if(sbytes < 0) {
failf(data, "%s\n", Curl_strerror(state->conn, SOCKERRNO));
failf(data, "%s", Curl_strerror(state->conn, SOCKERRNO));
return CURLE_SEND_ERROR;
}
@@ -429,7 +429,7 @@ static CURLcode tftp_rx(tftp_state_data_t *state, tftp_event_t event)
state->remote_addrlen);
/* Check all sbytes were sent */
if(sbytes<0) {
failf(data, "%s\n", Curl_strerror(state->conn, SOCKERRNO));
failf(data, "%s", Curl_strerror(state->conn, SOCKERRNO));
return CURLE_SEND_ERROR;
}
}
@@ -440,7 +440,7 @@ static CURLcode tftp_rx(tftp_state_data_t *state, tftp_event_t event)
break;
default:
failf(data, "%s\n", "tftp_rx: internal error");
failf(data, "%s", "tftp_rx: internal error");
return CURLE_TFTP_ILLEGAL; /* not really the perfect return code for
this */
}
@@ -460,7 +460,7 @@ static CURLcode tftp_tx(tftp_state_data_t *state, tftp_event_t event)
int sbytes;
int rblock;
CURLcode res = CURLE_OK;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
switch(event) {
@@ -487,7 +487,7 @@ static CURLcode tftp_tx(tftp_state_data_t *state, tftp_event_t event)
state->remote_addrlen);
/* Check all sbytes were sent */
if(sbytes<0) {
failf(data, "%s\n", Curl_strerror(state->conn, SOCKERRNO));
failf(data, "%s", Curl_strerror(state->conn, SOCKERRNO));
res = CURLE_SEND_ERROR;
}
}
@@ -512,7 +512,7 @@ static CURLcode tftp_tx(tftp_state_data_t *state, tftp_event_t event)
state->remote_addrlen);
/* Check all sbytes were sent */
if(sbytes<0) {
failf(data, "%s\n", Curl_strerror(state->conn, SOCKERRNO));
failf(data, "%s", Curl_strerror(state->conn, SOCKERRNO));
return CURLE_SEND_ERROR;
}
/* Update the progress meter */
@@ -538,7 +538,7 @@ static CURLcode tftp_tx(tftp_state_data_t *state, tftp_event_t event)
state->remote_addrlen);
/* Check all sbytes were sent */
if(sbytes<0) {
failf(data, "%s\n", Curl_strerror(state->conn, SOCKERRNO));
failf(data, "%s", Curl_strerror(state->conn, SOCKERRNO));
return CURLE_SEND_ERROR;
}
/* since this was a re-send, we remain at the still byte position */
@@ -551,7 +551,7 @@ static CURLcode tftp_tx(tftp_state_data_t *state, tftp_event_t event)
break;
default:
failf(data, "%s\n", "tftp_tx: internal error");
failf(data, "%s", "tftp_tx: internal error");
break;
}
@@ -588,7 +588,7 @@ static CURLcode tftp_state_machine(tftp_state_data_t *state,
break;
default:
DEBUGF(infof(data, "STATE: %d\n", state->state));
failf(data, "%s\n", "Internal state machine error");
failf(data, "%s", "Internal state machine error");
res = CURLE_TFTP_ILLEGAL;
break;
}
@@ -598,12 +598,12 @@ static CURLcode tftp_state_machine(tftp_state_data_t *state,
/**********************************************************
*
* Curl_tftp_connect
* tftp_connect
*
* The connect callback
*
**********************************************************/
static CURLcode Curl_tftp_connect(struct connectdata *conn, bool *done)
static CURLcode tftp_connect(struct connectdata *conn, bool *done)
{
CURLcode code;
tftp_state_data_t *state;
@@ -613,9 +613,10 @@ static CURLcode Curl_tftp_connect(struct connectdata *conn, bool *done)
sessionhandle, deal with it */
Curl_reset_reqproto(conn);
if(!(state = conn->data->reqdata.proto.tftp)) {
state = conn->data->reqdata.proto.tftp = calloc(sizeof(tftp_state_data_t),
1);
state = conn->data->state.proto.tftp;
if(!state) {
state = conn->data->state.proto.tftp = calloc(sizeof(tftp_state_data_t),
1);
if(!state)
return CURLE_OUT_OF_MEMORY;
}
@@ -649,7 +650,7 @@ static CURLcode Curl_tftp_connect(struct connectdata *conn, bool *done)
rc = bind(state->sockfd, (struct sockaddr *)&state->local_addr,
conn->ip_addr->ai_addrlen);
if(rc) {
failf(conn->data, "bind() failed; %s\n",
failf(conn->data, "bind() failed; %s",
Curl_strerror(conn, SOCKERRNO));
return CURLE_COULDNT_CONNECT;
}
@@ -665,21 +666,17 @@ static CURLcode Curl_tftp_connect(struct connectdata *conn, bool *done)
/**********************************************************
*
* Curl_tftp_done
* tftp_done
*
* The done callback
*
**********************************************************/
static CURLcode Curl_tftp_done(struct connectdata *conn, CURLcode status,
static CURLcode tftp_done(struct connectdata *conn, CURLcode status,
bool premature)
{
(void)status; /* unused */
(void)premature; /* not used */
#if 0
free(conn->data->reqdata.proto.tftp);
conn->data->reqdata.proto.tftp = NULL;
#endif
Curl_pgrsDone(conn);
return CURLE_OK;
@@ -688,7 +685,7 @@ static CURLcode Curl_tftp_done(struct connectdata *conn, CURLcode status,
/**********************************************************
*
* Curl_tftp
* tftp
*
* The do callback
*
@@ -696,7 +693,7 @@ static CURLcode Curl_tftp_done(struct connectdata *conn, CURLcode status,
*
**********************************************************/
static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
static CURLcode tftp_do(struct connectdata *conn, bool *done)
{
struct SessionHandle *data = conn->data;
tftp_state_data_t *state;
@@ -706,7 +703,7 @@ static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
struct Curl_sockaddr_storage fromaddr;
socklen_t fromlen;
int check_time = 0;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
*done = TRUE;
@@ -714,16 +711,16 @@ static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
Since connections can be re-used between SessionHandles, this might be a
connection already existing but on a fresh SessionHandle struct so we must
make sure we have a good 'struct TFTP' to play with. For new connections,
the struct TFTP is allocated and setup in the Curl_tftp_connect() function.
the struct TFTP is allocated and setup in the tftp_connect() function.
*/
Curl_reset_reqproto(conn);
if(!data->reqdata.proto.tftp) {
code = Curl_tftp_connect(conn, done);
if(!data->state.proto.tftp) {
code = tftp_connect(conn, done);
if(code)
return code;
}
state = (tftp_state_data_t *)data->reqdata.proto.tftp;
state = (tftp_state_data_t *)data->state.proto.tftp;
/* Run the TFTP State Machine */
for(code=tftp_state_machine(state, TFTP_EVENT_INIT);
@@ -737,7 +734,7 @@ static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
if(rc == -1) {
/* bail out */
int error = SOCKERRNO;
failf(data, "%s\n", Curl_strerror(conn, error));
failf(data, "%s", Curl_strerror(conn, error));
event = TFTP_EVENT_ERROR;
}
else if(rc==0) {
@@ -762,7 +759,7 @@ static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
/* Sanity check packet length */
if(state->rbytes < 4) {
failf(data, "Received too short packet\n");
failf(data, "Received too short packet");
/* Not a timeout, but how best to handle it? */
event = TFTP_EVENT_TIMEOUT;
}
@@ -794,7 +791,7 @@ static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
case TFTP_EVENT_RRQ:
case TFTP_EVENT_WRQ:
default:
failf(data, "%s\n", "Internal error: Unexpected packet");
failf(data, "%s", "Internal error: Unexpected packet");
break;
}
@@ -868,7 +865,7 @@ static CURLcode Curl_tftp(struct connectdata *conn, bool *done)
return code;
}
static CURLcode Curl_tftp_setup_connection(struct connectdata * conn)
static CURLcode tftp_setup_connection(struct connectdata * conn)
{
struct SessionHandle *data = conn->data;
char * type;
@@ -878,7 +875,7 @@ static CURLcode Curl_tftp_setup_connection(struct connectdata * conn)
/* TFTP URLs support an extension like ";mode=<typecode>" that
* we'll try to get now! */
type = strstr(data->reqdata.path, ";mode=");
type = strstr(data->state.path, ";mode=");
if(!type)
type = strstr(conn->host.rawalloc, ";mode=");

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -122,18 +122,26 @@ CURLcode Curl_fillreadbuffer(struct connectdata *conn, int bytes, int *nreadp)
if(conn->bits.upload_chunky) {
/* if chunked Transfer-Encoding */
buffersize -= (8 + 2 + 2); /* 32bit hex + CRLF + CRLF */
data->reqdata.upload_fromhere += 10; /* 32bit hex + CRLF */
data->req.upload_fromhere += 10; /* 32bit hex + CRLF */
}
/* this function returns a size_t, so we typecast to int to prevent warnings
with picky compilers */
nread = (int)conn->fread_func(data->reqdata.upload_fromhere, 1,
nread = (int)conn->fread_func(data->req.upload_fromhere, 1,
buffersize, conn->fread_in);
if(nread == CURL_READFUNC_ABORT) {
failf(data, "operation aborted by callback\n");
failf(data, "operation aborted by callback");
return CURLE_ABORTED_BY_CALLBACK;
}
else if(nread == CURL_READFUNC_PAUSE) {
struct SingleRequest *k = &data->req;
k->keepon |= KEEP_READ_PAUSE; /* mark reading as paused */
return CURLE_OK; /* nothing was read */
}
else if((size_t)nread > buffersize)
/* the read function returned a too large value */
return CURLE_READ_ERROR;
if(!conn->bits.forbidchunk && conn->bits.upload_chunky) {
/* if chunked Transfer-Encoding */
@@ -141,18 +149,18 @@ CURLcode Curl_fillreadbuffer(struct connectdata *conn, int bytes, int *nreadp)
int hexlen = snprintf(hexbuffer, sizeof(hexbuffer),
"%x\r\n", nread);
/* move buffer pointer */
data->reqdata.upload_fromhere -= hexlen;
data->req.upload_fromhere -= hexlen;
nread += hexlen;
/* copy the prefix to the buffer */
memcpy(data->reqdata.upload_fromhere, hexbuffer, hexlen);
memcpy(data->req.upload_fromhere, hexbuffer, hexlen);
/* always append CRLF to the data */
memcpy(data->reqdata.upload_fromhere + nread, "\r\n", 2);
memcpy(data->req.upload_fromhere + nread, "\r\n", 2);
if((nread - hexlen) == 0) {
/* mark this as done once this chunk is transfered */
data->reqdata.keep.upload_done = TRUE;
data->req.upload_done = TRUE;
}
nread+=2; /* for the added CRLF */
@@ -163,7 +171,7 @@ CURLcode Curl_fillreadbuffer(struct connectdata *conn, int bytes, int *nreadp)
#ifdef CURL_DOES_CONVERSIONS
if(data->set.prefer_ascii) {
CURLcode res;
res = Curl_convert_to_network(data, data->reqdata.upload_fromhere, nread);
res = Curl_convert_to_network(data, data->req.upload_fromhere, nread);
/* Curl_convert_to_network calls failf if unsuccessful */
if(res != CURLE_OK) {
return(res);
@@ -237,16 +245,25 @@ CURLcode Curl_readrewind(struct connectdata *conn)
(data->set.httpreq == HTTPREQ_POST_FORM))
; /* do nothing */
else {
if(data->set.ioctl_func) {
if(data->set.seek_func) {
int err;
err = (data->set.seek_func)(data->set.seek_client, 0, SEEK_SET);
if(err) {
failf(data, "seek callback returned error %d", (int)err);
return CURLE_SEND_FAIL_REWIND;
}
}
else if(data->set.ioctl_func) {
curlioerr err;
err = (data->set.ioctl_func) (data, CURLIOCMD_RESTARTREAD,
data->set.ioctl_client);
err = (data->set.ioctl_func)(data, CURLIOCMD_RESTARTREAD,
data->set.ioctl_client);
infof(data, "the ioctl callback returned %d\n", (int)err);
if(err) {
/* FIXME: convert to a human readable error message */
failf(data, "ioctl callback returned error %d\n", (int)err);
failf(data, "ioctl callback returned error %d", (int)err);
return CURLE_SEND_FAIL_REWIND;
}
}
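
The new branch prefers a seek callback over the older ioctl rewind when a request body has to be re-sent, for instance after an authentication retry. A sketch of providing one via CURLOPT_SEEKFUNCTION/CURLOPT_SEEKDATA; the file name and URL are placeholders:

#include <stdio.h>
#include <curl/curl.h>

/* seek callback used by Curl_readrewind() when the body must be re-sent */
static int seek_cb(void *userp, curl_off_t offset, int origin)
{
  FILE *f = userp;
  if(fseek(f, (long)offset, origin))
    return 1;   /* non-zero tells libcurl the rewind failed */
  return 0;     /* success */
}

int main(void)
{
  FILE *f = fopen("upload.bin", "rb");
  CURL *curl = curl_easy_init();
  if(!f || !curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_READDATA, f);
  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, seek_cb);
  curl_easy_setopt(curl, CURLOPT_SEEKDATA, f);

  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  fclose(f);
  return 0;
}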
@@ -261,7 +278,7 @@ CURLcode Curl_readrewind(struct connectdata *conn)
}
/* no callback set or failure above, makes us fail at once */
failf(data, "necessary data rewind wasn't possible\n");
failf(data, "necessary data rewind wasn't possible");
return CURLE_SEND_FAIL_REWIND;
}
}
@@ -315,7 +332,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
bool *done)
{
struct SessionHandle *data = conn->data;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
CURLcode result;
ssize_t nread; /* number of bytes read */
int didwhat=0;
@@ -330,7 +347,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
/* only use the proper socket if the *_HOLD bit is not set simultaneously as
then we are in rate limiting state in that transfer direction */
if((k->keepon & (KEEP_READ|KEEP_READ_HOLD)) == KEEP_READ) {
if((k->keepon & KEEP_READBITS) == KEEP_READ) {
fd_read = conn->sockfd;
#if defined(USE_LIBSSH2)
if(conn->protocol & (PROT_SCP|PROT_SFTP))
@@ -339,7 +356,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
} else
fd_read = CURL_SOCKET_BAD;
if((k->keepon & (KEEP_WRITE|KEEP_WRITE_HOLD)) == KEEP_WRITE)
if((k->keepon & KEEP_WRITEBITS) == KEEP_WRITE)
fd_write = conn->writesockfd;
else
fd_write = CURL_SOCKET_BAD;
@@ -628,12 +645,12 @@ CURLcode Curl_readwrite(struct connectdata *conn,
return result;
data->info.header_size += (long)headerlen;
data->reqdata.keep.headerbytecount += (long)headerlen;
data->req.headerbytecount += (long)headerlen;
data->reqdata.keep.deductheadercount =
(100 == k->httpcode)?data->reqdata.keep.headerbytecount:0;
data->req.deductheadercount =
(100 == k->httpcode)?data->req.headerbytecount:0;
if(data->reqdata.resume_from &&
if(data->state.resume_from &&
(data->set.httpreq==HTTPREQ_GET) &&
(k->httpcode == 416)) {
/* "Requested Range Not Satisfiable" */
@@ -792,7 +809,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
((k->httpcode != 401) || !conn->bits.user_passwd) &&
((k->httpcode != 407) || !conn->bits.proxy_user_passwd) ) {
if(data->reqdata.resume_from &&
if(data->state.resume_from &&
(data->set.httpreq==HTTPREQ_GET) &&
(k->httpcode == 416)) {
/* "Requested Range Not Satisfiable", just proceed and
@@ -813,6 +830,15 @@ CURLcode Curl_readwrite(struct connectdata *conn,
infof(data, "HTTP 1.0, assume close after body\n");
conn->bits.close = TRUE;
}
else if(k->httpversion >= 11 &&
!conn->bits.close) {
/* If the HTTP version is >= 1.1 and the connection is persistent,
the server supports pipelining. */
DEBUGF(infof(data,
"HTTP 1.1 or later with persistent connection, "
"pipelining supported\n"));
conn->server_supports_pipelining = TRUE;
}
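The new server_supports_pipelining flag only matters once the application has opted in to pipelining on its multi handle; a usage sketch (error handling omitted):

#include <curl/curl.h>

int main(void)
{
  CURLM *multi = curl_multi_init();
  /* allow libcurl to pipeline requests on re-used HTTP/1.1 connections */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, 1L);
  /* ... create easy handles and curl_multi_add_handle() them ... */
  curl_multi_cleanup(multi);
  return 0;
}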
switch(k->httpcode) {
case 204:
@@ -1042,7 +1068,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
k->offset = curlx_strtoofft(ptr, NULL, 10);
if(data->reqdata.resume_from == k->offset)
if(data->state.resume_from == k->offset)
/* we asked for a resume and we got it */
k->content_range = TRUE;
}
@@ -1057,7 +1083,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
here, or else use real peer host name. */
conn->allocptr.cookiehost?
conn->allocptr.cookiehost:conn->host.name,
data->reqdata.path);
data->state.path);
Curl_share_unlock(data, CURL_LOCK_DATA_COOKIE);
}
#endif
@@ -1105,9 +1131,9 @@ CURLcode Curl_readwrite(struct connectdata *conn,
backup = *ptr; /* store the ending letter */
if(ptr != start) {
*ptr = '\0'; /* zero terminate */
data->reqdata.newurl = strdup(start); /* clone string */
data->req.newurl = strdup(start); /* clone string */
*ptr = backup; /* restore ending letter */
if(!data->reqdata.newurl)
if(!data->req.newurl)
return CURLE_OUT_OF_MEMORY;
}
}
@@ -1131,7 +1157,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
return result;
data->info.header_size += (long)k->hbuflen;
data->reqdata.keep.headerbytecount += (long)k->hbuflen;
data->req.headerbytecount += (long)k->hbuflen;
/* reset hbufp pointer && hbuflen */
k->hbufp = data->state.headerbuff;
@@ -1160,7 +1186,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
if(conn->protocol&PROT_HTTP) {
/* HTTP-only checks */
if(data->reqdata.newurl) {
if(data->req.newurl) {
if(conn->bits.close) {
/* Abort after the headers if "follow Location" is set
and we're set to close anyway. */
@@ -1174,7 +1200,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
k->ignorebody = TRUE;
infof(data, "Ignoring the response-body\n");
}
if(data->reqdata.resume_from && !k->content_range &&
if(data->state.resume_from && !k->content_range &&
(data->set.httpreq==HTTPREQ_GET) &&
!k->ignorebody) {
/* we wanted to resume a download, although the server doesn't
@@ -1185,7 +1211,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
return CURLE_RANGE_ERROR;
}
if(data->set.timecondition && !data->reqdata.range) {
if(data->set.timecondition && !data->state.range) {
/* A time condition has been set AND no ranges have been
requested. This seems to be what chapter 13.3.4 of
RFC 2616 defines to be the correct action for a
@@ -1284,7 +1310,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
" bytes on url %s (size = %" FORMAT_OFF_T
", maxdownload = %" FORMAT_OFF_T
", bytecount = %" FORMAT_OFF_T ", nread = %d)\n",
excess, conn->data->reqdata.path,
excess, data->state.path,
k->size, k->maxdownload, k->bytecount, nread);
read_rewind(conn, excess);
}
@@ -1394,9 +1420,9 @@ CURLcode Curl_readwrite(struct connectdata *conn,
/* only read more data if there's no upload data already
present in the upload buffer */
if(0 == data->reqdata.upload_present) {
if(0 == data->req.upload_present) {
/* init the "upload from here" pointer */
data->reqdata.upload_fromhere = k->uploadbuf;
data->req.upload_fromhere = k->uploadbuf;
if(!k->upload_done) {
/* HTTP pollution, this should be written nicer to become more
@@ -1404,7 +1430,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
int fillcount;
if(k->wait100_after_headers &&
(data->reqdata.proto.http->sending == HTTPSEND_BODY)) {
(data->state.proto.http->sending == HTTPSEND_BODY)) {
/* If this call is to send body data, we must take some action:
We have sent off the full HTTP 1.1 request, and we shall now
go into the Expect: 100 state and await such a header */
@@ -1425,9 +1451,11 @@ CURLcode Curl_readwrite(struct connectdata *conn,
else
nread = 0; /* we're done uploading/reading */
/* the signed int typecast of nread is for systems that have an
unsigned size_t */
if(nread<=0) {
if(!nread && (k->keepon & KEEP_READ_PAUSE)) {
/* this is a paused transfer */
break;
}
else if(nread<=0) {
/* done */
k->keepon &= ~KEEP_WRITE; /* we're done writing */
writedone = TRUE;
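The KEEP_READ_PAUSE test above handles a transfer that was paused from the read callback; a self-contained sketch of such a callback, assuming the CURL_READFUNC_PAUSE / curl_easy_pause() API shipping with this release (the upload_src type is purely illustrative):

#include <string.h>
#include <curl/curl.h>

struct upload_src {        /* hypothetical application state */
  const char *data;
  size_t left;
  int ready;
};

static size_t app_read(char *buf, size_t size, size_t nitems, void *userp)
{
  struct upload_src *src = (struct upload_src *)userp;
  size_t room = size * nitems;
  if(!src->ready)
    return CURL_READFUNC_PAUSE;  /* pause until the data shows up */
  if(room > src->left)
    room = src->left;
  memcpy(buf, src->data, room);
  src->data += room;
  src->left -= room;
  return room;
}

/* later, when data is available again:
   curl_easy_pause(easy, CURLPAUSE_CONT);   un-pauses the transfer */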
@@ -1441,7 +1469,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
/* store number of bytes available for upload */
data->reqdata.upload_present = nread;
data->req.upload_present = nread;
/* convert LF to CRLF if so asked */
#ifdef CURL_DO_LINEEND_CONV
@@ -1463,7 +1491,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
* must be used instead of the escape sequences \r & \n.
*/
for(i = 0, si = 0; i < nread; i++, si++) {
if(data->reqdata.upload_fromhere[i] == 0x0a) {
if(data->req.upload_fromhere[i] == 0x0a) {
data->state.scratch[si++] = 0x0d;
data->state.scratch[si] = 0x0a;
if(!data->set.crlf) {
@@ -1473,7 +1501,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
}
else
data->state.scratch[si] = data->reqdata.upload_fromhere[i];
data->state.scratch[si] = data->req.upload_fromhere[i];
}
if(si != nread) {
/* only perform the special operation if we really did replace
@@ -1481,10 +1509,10 @@ CURLcode Curl_readwrite(struct connectdata *conn,
nread = si;
/* upload from the new (replaced) buffer instead */
data->reqdata.upload_fromhere = data->state.scratch;
data->req.upload_fromhere = data->state.scratch;
/* set the new amount too */
data->reqdata.upload_present = nread;
data->req.upload_present = nread;
}
}
}
@@ -1496,33 +1524,33 @@ CURLcode Curl_readwrite(struct connectdata *conn,
/* write to socket (send away data) */
result = Curl_write(conn,
conn->writesockfd, /* socket to send to */
data->reqdata.upload_fromhere, /* buffer pointer */
data->reqdata.upload_present, /* buffer size */
data->req.upload_fromhere, /* buffer pointer */
data->req.upload_present, /* buffer size */
&bytes_written); /* actually send away */
if(result)
return result;
if(data->set.verbose)
/* show the data before we change the pointer upload_fromhere */
Curl_debug(data, CURLINFO_DATA_OUT, data->reqdata.upload_fromhere,
Curl_debug(data, CURLINFO_DATA_OUT, data->req.upload_fromhere,
(size_t)bytes_written, conn);
if(data->reqdata.upload_present != bytes_written) {
if(data->req.upload_present != bytes_written) {
/* we only wrote a part of the buffer (if anything), deal with it! */
/* store the amount of bytes left in the buffer to write */
data->reqdata.upload_present -= bytes_written;
data->req.upload_present -= bytes_written;
/* advance the pointer where to find the buffer when the next send
is to happen */
data->reqdata.upload_fromhere += bytes_written;
data->req.upload_fromhere += bytes_written;
writedone = TRUE; /* we are done, stop the loop */
}
else {
/* we've uploaded that buffer now */
data->reqdata.upload_fromhere = k->uploadbuf;
data->reqdata.upload_present = 0; /* no more bytes left */
data->req.upload_fromhere = k->uploadbuf;
data->req.upload_present = 0; /* no more bytes left */
if(k->upload_done) {
/* switch off writing, we're done! */
@@ -1609,7 +1637,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
*/
(k->bytecount != (k->size + data->state.crlf_conversions)) &&
#endif /* CURL_DO_LINEEND_CONV */
!data->reqdata.newurl) {
!data->req.newurl) {
failf(data, "transfer closed with %" FORMAT_OFF_T
" bytes remaining to read",
k->size - k->bytecount);
@@ -1635,7 +1663,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
/* Now update the "done" boolean we return */
*done = (bool)(0 == (k->keepon&(KEEP_READ|KEEP_WRITE)));
*done = (bool)(0 == (k->keepon&(KEEP_READ|KEEP_WRITE|KEEP_READ_PAUSE|KEEP_WRITE_PAUSE)));
return CURLE_OK;
}
@@ -1660,7 +1688,8 @@ int Curl_single_getsock(const struct connectdata *conn,
/* simple check but we might need two slots */
return GETSOCK_BLANK;
if(data->reqdata.keep.keepon & KEEP_READ) {
/* don't include HOLD and PAUSE connections */
if((data->req.keepon & KEEP_READBITS) == KEEP_READ) {
DEBUGASSERT(conn->sockfd != CURL_SOCKET_BAD);
@@ -1668,13 +1697,14 @@ int Curl_single_getsock(const struct connectdata *conn,
sock[sockindex] = conn->sockfd;
}
if(data->reqdata.keep.keepon & KEEP_WRITE) {
/* don't include HOLD and PAUSE connections */
if((data->req.keepon & KEEP_WRITEBITS) == KEEP_WRITE) {
if((conn->sockfd != conn->writesockfd) ||
!(data->reqdata.keep.keepon & KEEP_READ)) {
!(data->req.keepon & KEEP_READ)) {
/* only if they are not the same socket or we didn't have a readable
one, we increase index */
if(data->reqdata.keep.keepon & KEEP_READ)
if(data->req.keepon & KEEP_READ)
sockindex++; /* increase index if we need two entries */
DEBUGASSERT(conn->writesockfd != CURL_SOCKET_BAD);
@@ -1708,7 +1738,7 @@ Transfer(struct connectdata *conn)
{
CURLcode result;
struct SessionHandle *data = conn->data;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
bool done=FALSE;
if((conn->sockfd == CURL_SOCKET_BAD) &&
@@ -1751,10 +1781,17 @@ Transfer(struct connectdata *conn)
k->keepon |= KEEP_READ_HOLD; /* hold it */
}
/* The *_HOLD logic is necessary since even though there might be no
traffic during the select interval, we still call Curl_readwrite() for
the timeout case and if we limit transfer speed we must make sure that
this function doesn't transfer anything while in HOLD status. */
/* pause logic. Don't check descriptors for paused connections */
if(k->keepon & KEEP_READ_PAUSE)
fd_read = CURL_SOCKET_BAD;
if(k->keepon & KEEP_WRITE_PAUSE)
fd_write = CURL_SOCKET_BAD;
/* The *_HOLD and *_PAUSE logic is necessary since even though there might
be no traffic during the select interval, we still call
Curl_readwrite() for the timeout case and if we limit transfer speed we
must make sure that this function doesn't transfer anything while in
HOLD status. */
switch (Curl_socket_ready(fd_read, fd_write, 1000)) {
case -1: /* select() error, stop reading */
@@ -1790,7 +1827,7 @@ CURLcode Curl_pretransfer(struct SessionHandle *data)
CURLcode res;
if(!data->change.url) {
/* we can't do anything without a URL */
failf(data, "No URL set!\n");
failf(data, "No URL set!");
return CURLE_URL_MALFORMAT;
}
@@ -2133,9 +2170,9 @@ CURLcode Curl_follow(struct SessionHandle *data,
* a HTTP (proxy-) authentication scheme other than Basic.
*/
switch(data->info.httpcode) {
/* 401 - Act on a www-authentication, we keep on moving and do the
/* 401 - Act on a WWW-Authenticate, we keep on moving and do the
Authorization: XXXX header in the HTTP request code snippet */
/* 407 - Act on a proxy-authentication, we keep on moving and do the
/* 407 - Act on a Proxy-Authenticate, we keep on moving and do the
Proxy-Authorization: XXXX header in the HTTP request code snippet */
/* 300 - Multiple Choices */
/* 306 - Not used */
@@ -2219,8 +2256,8 @@ CURLcode Curl_follow(struct SessionHandle *data,
}
static CURLcode
Curl_connect_host(struct SessionHandle *data,
struct connectdata **conn)
connect_host(struct SessionHandle *data,
struct connectdata **conn)
{
CURLcode res = CURLE_OK;
int urlchanged = FALSE;
@@ -2277,8 +2314,8 @@ bool Curl_retry_request(struct connectdata *conn,
if(data->set.upload && !(conn->protocol&PROT_HTTP))
return retry;
if((data->reqdata.keep.bytecount +
data->reqdata.keep.headerbytecount == 0) &&
if((data->req.bytecount +
data->req.headerbytecount == 0) &&
conn->bits.reuse &&
!conn->bits.no_body) {
/* We got no data, we attempted to re-use a connection and yet we want a
@@ -2327,7 +2364,7 @@ CURLcode Curl_perform(struct SessionHandle *data)
*/
do {
res = Curl_connect_host(data, &conn); /* primary connection */
res = connect_host(data, &conn); /* primary connection */
if(res == CURLE_OK) {
bool do_done;
@@ -2349,7 +2386,7 @@ CURLcode Curl_perform(struct SessionHandle *data)
* We must duplicate the new URL here as the connection data may
* be free()ed in the Curl_done() function.
*/
newurl = data->reqdata.newurl?strdup(data->reqdata.newurl):NULL;
newurl = data->req.newurl?strdup(data->req.newurl):NULL;
}
else {
/* The transfer phase returned error, we mark the connection to get
@@ -2435,12 +2472,12 @@ Curl_setup_transfer(
)
{
struct SessionHandle *data;
struct Curl_transfer_keeper *k;
struct SingleRequest *k;
DEBUGASSERT(conn != NULL);
data = conn->data;
k = &data->reqdata.keep;
k = &data->req;
DEBUGASSERT((sockindex <= 1) && (sockindex >= -1));
@@ -2451,9 +2488,9 @@ Curl_setup_transfer(
CURL_SOCKET_BAD:conn->sock[writesockindex];
conn->bits.getheader = getheader;
data->reqdata.size = size;
data->reqdata.bytecountp = bytecountp;
data->reqdata.writebytecountp = writecountp;
k->size = size;
k->bytecountp = bytecountp;
k->writebytecountp = writecountp;
/* The code sequence below is placed in this function just because all
necessary input is not always known in do_complete() as this function may
@@ -2461,8 +2498,8 @@ Curl_setup_transfer(
if(!conn->bits.getheader) {
k->header = FALSE;
if(k->size > 0)
Curl_pgrsSetDownloadSize(data, k->size);
if(size > 0)
Curl_pgrsSetDownloadSize(data, size);
}
/* we want header and/or body, if neither then don't do this! */
if(conn->bits.getheader || !conn->bits.no_body) {
@@ -2482,7 +2519,7 @@ Curl_setup_transfer(
state info where we wait for the 100-return code
*/
if(data->state.expect100header &&
(data->reqdata.proto.http->sending == HTTPSEND_BODY)) {
(data->state.proto.http->sending == HTTPSEND_BODY)) {
/* wait with write until we either got 100-continue or a timeout */
k->write_after_100_header = TRUE;
k->start100 = k->start;

lib/url.c

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -159,7 +159,6 @@ static bool ConnectionExists(struct SessionHandle *data,
static long ConnectionStore(struct SessionHandle *data,
struct connectdata *conn);
static bool IsPipeliningPossible(const struct SessionHandle *handle);
static bool IsPipeliningEnabled(const struct SessionHandle *handle);
static void conn_free(struct connectdata *conn);
static void signalPipeClose(struct curl_llist *pipeline);
@@ -176,8 +175,6 @@ static void flush_cookies(struct SessionHandle *data, int cleanup);
#define verboseconnect(x) do { } while (0)
#endif
#define MAX_PIPELINE_LENGTH 5
#ifndef USE_ARES
/* not for ares builds */
@@ -298,7 +295,7 @@ void Curl_freeset(struct SessionHandle * data)
Curl_safefree(data->set.str[i]);
}
static CURLcode Curl_setstropt(char **charp, char * s)
static CURLcode setstropt(char **charp, char * s)
{
/* Release the previous storage at `charp' and replace by a dynamic storage
copy of `s'. Return CURLE_OK or CURLE_OUT_OF_MEMORY. */
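In plain C the behaviour that comment describes boils down to roughly the following (a sketch of the contract, not the exact libcurl body):

#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

static CURLcode setstropt(char **charp, char *s)
{
  free(*charp);              /* release the previous storage, if any */
  *charp = NULL;
  if(s) {
    *charp = strdup(s);      /* keep a private, dynamically allocated copy */
    if(!*charp)
      return CURLE_OUT_OF_MEMORY;
  }
  return CURLE_OK;
}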
@@ -334,7 +331,7 @@ CURLcode Curl_dupset(struct SessionHandle * dst, struct SessionHandle * src)
/* duplicate all strings */
for(i=(enum dupstring)0; i< STRING_LAST; i++) {
r = Curl_setstropt(&dst->set.str[i], src->set.str[i]);
r = setstropt(&dst->set.str[i], src->set.str[i]);
if(r != CURLE_OK)
break;
}
@@ -425,6 +422,16 @@ CURLcode Curl_close(struct SessionHandle *data)
}
}
}
pipeline = connptr->pend_pipe;
if(pipeline) {
for (curr = pipeline->head; curr; curr=curr->next) {
if(data == (struct SessionHandle *) curr->ptr) {
fprintf(stderr,
"MAJOR problem we %p are still in pend pipe for %p done %d\n",
data, connptr, connptr->bits.done);
}
}
}
}
}
#endif
@@ -457,18 +464,15 @@ CURLcode Curl_close(struct SessionHandle *data)
return CURLE_OK;
}
if( ! (data->share && data->share->hostcache) ) {
if( !Curl_global_host_cache_use(data)) {
Curl_hash_destroy(data->dns.hostcache);
}
}
if(data->dns.hostcachetype == HCACHE_PRIVATE)
Curl_hash_destroy(data->dns.hostcache);
if(data->reqdata.rangestringalloc)
free(data->reqdata.range);
if(data->state.rangestringalloc)
free(data->state.range);
/* Free the pathbuffer */
Curl_safefree(data->reqdata.pathbuffer);
Curl_safefree(data->reqdata.proto.generic);
Curl_safefree(data->state.pathbuffer);
Curl_safefree(data->state.proto.generic);
/* Close down all open SSL info and sessions */
Curl_ssl_close_all(data);
@@ -685,6 +689,10 @@ CURLcode Curl_open(struct SessionHandle **curl)
/* use fread as default function to read input */
data->set.fread_func = (curl_read_callback)fread;
/* don't use a seek function by default */
data->set.seek_func = ZERO_NULL;
data->set.seek_client = ZERO_NULL;
/* conversion callbacks for non-ASCII hosts */
data->set.convfromnetwork = ZERO_NULL;
data->set.convtonetwork = ZERO_NULL;
@@ -743,7 +751,7 @@ CURLcode Curl_open(struct SessionHandle **curl)
data->set.ssl.sessionid = TRUE; /* session ID caching enabled by default */
#ifdef CURL_CA_BUNDLE
/* This is our preferred CA cert bundle since install time */
res = Curl_setstropt(&data->set.str[STRING_SSL_CAFILE],
res = setstropt(&data->set.str[STRING_SSL_CAFILE],
(char *) CURL_CA_BUNDLE);
#endif
}
@@ -777,16 +785,14 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
break;
case CURLOPT_DNS_USE_GLOBAL_CACHE:
{
/* remember we want this enabled */
long use_cache = va_arg(param, long);
if(use_cache)
Curl_global_host_cache_init();
data->set.global_dns_cache = (bool)(0 != use_cache);
}
break;
case CURLOPT_SSL_CIPHER_LIST:
/* set a list of cipher we want to use in the SSL connection */
result = Curl_setstropt(&data->set.str[STRING_SSL_CIPHER_LIST],
result = setstropt(&data->set.str[STRING_SSL_CIPHER_LIST],
va_arg(param, char *));
break;
@@ -795,14 +801,14 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
* This is the path name to a file that contains random data to seed
* the random SSL stuff with. The file is only used for reading.
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_RANDOM_FILE],
result = setstropt(&data->set.str[STRING_SSL_RANDOM_FILE],
va_arg(param, char *));
break;
case CURLOPT_EGDSOCKET:
/*
* The Entropy Gathering Daemon socket pathname
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_EGDSOCKET],
result = setstropt(&data->set.str[STRING_SSL_EGDSOCKET],
va_arg(param, char *));
break;
case CURLOPT_MAXCONNECTS:
@@ -926,7 +932,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Use this file instead of the $HOME/.netrc file
*/
result = Curl_setstropt(&data->set.str[STRING_NETRC_FILE],
result = setstropt(&data->set.str[STRING_NETRC_FILE],
va_arg(param, char *));
break;
case CURLOPT_TRANSFERTEXT:
@@ -979,7 +985,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
*
*/
argptr = va_arg(param, char *);
result = Curl_setstropt(&data->set.str[STRING_ENCODING],
result = setstropt(&data->set.str[STRING_ENCODING],
(argptr && !*argptr)?
(char *) ALL_CONTENT_ENCODINGS: argptr);
break;
@@ -1036,7 +1042,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
argptr = va_arg(param, char *);
if(!argptr || data->set.postfieldsize == -1)
result = Curl_setstropt(&data->set.str[STRING_COPYPOSTFIELDS], argptr);
result = setstropt(&data->set.str[STRING_COPYPOSTFIELDS], argptr);
else {
/*
* Check that requested length does not overflow the size_t type.
@@ -1049,7 +1055,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
else {
char * p;
(void) Curl_setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
(void) setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
/* Allocate even when size == 0. This satisfies the need of possible
later address compare to detect the COPYPOSTFIELDS mode, and
@@ -1079,7 +1085,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
*/
data->set.postfields = va_arg(param, void *);
/* Release old copied data. */
(void) Curl_setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
(void) setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
data->set.httpreq = HTTPREQ_POST;
break;
@@ -1093,7 +1099,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
if(data->set.postfieldsize < bigsize &&
data->set.postfields == data->set.str[STRING_COPYPOSTFIELDS]) {
/* Previous CURLOPT_COPYPOSTFIELDS is no longer valid. */
(void) Curl_setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
(void) setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
data->set.postfields = NULL;
}
@@ -1110,7 +1116,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
if(data->set.postfieldsize < bigsize &&
data->set.postfields == data->set.str[STRING_COPYPOSTFIELDS]) {
/* Previous CURLOPT_COPYPOSTFIELDS is no longer valid. */
(void) Curl_setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
(void) setstropt(&data->set.str[STRING_COPYPOSTFIELDS], NULL);
data->set.postfields = NULL;
}
@@ -1134,7 +1140,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
free(data->change.referer);
data->change.referer_alloc = FALSE;
}
result = Curl_setstropt(&data->set.str[STRING_SET_REFERER],
result = setstropt(&data->set.str[STRING_SET_REFERER],
va_arg(param, char *));
data->change.referer = data->set.str[STRING_SET_REFERER];
break;
@@ -1143,7 +1149,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* String to use in the HTTP User-Agent field
*/
result = Curl_setstropt(&data->set.str[STRING_USERAGENT],
result = setstropt(&data->set.str[STRING_USERAGENT],
va_arg(param, char *));
break;
@@ -1166,7 +1172,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Cookie string to send to the remote server in the request.
*/
result = Curl_setstropt(&data->set.str[STRING_COOKIE],
result = setstropt(&data->set.str[STRING_COOKIE],
va_arg(param, char *));
break;
@@ -1192,7 +1198,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Set cookie file name to dump all cookies to when we're done.
*/
result = Curl_setstropt(&data->set.str[STRING_COOKIEJAR],
result = setstropt(&data->set.str[STRING_COOKIEJAR],
va_arg(param, char *));
/*
@@ -1296,7 +1302,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Set a custom string to use as request
*/
result = Curl_setstropt(&data->set.str[STRING_CUSTOMREQUEST],
result = setstropt(&data->set.str[STRING_CUSTOMREQUEST],
va_arg(param, char *));
/* we don't set
@@ -1363,7 +1369,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
* Setting it to NULL, means no proxy but allows the environment variables
* to decide for us.
*/
result = Curl_setstropt(&data->set.str[STRING_PROXY],
result = setstropt(&data->set.str[STRING_PROXY],
va_arg(param, char *));
break;
@@ -1390,7 +1396,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Use FTP PORT, this also specifies which IP address to use
*/
result = Curl_setstropt(&data->set.str[STRING_FTPPORT],
result = setstropt(&data->set.str[STRING_FTPPORT],
va_arg(param, char *));
data->set.ftp_use_port = (bool)(NULL != data->set.str[STRING_FTPPORT]);
break;
@@ -1475,7 +1481,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
free(data->change.url);
data->change.url_alloc=FALSE;
}
result = Curl_setstropt(&data->set.str[STRING_SET_URL],
result = setstropt(&data->set.str[STRING_SET_URL],
va_arg(param, char *));
data->change.url = data->set.str[STRING_SET_URL];
if(data->change.url)
@@ -1514,7 +1520,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* user:password to use in the operation
*/
result = Curl_setstropt(&data->set.str[STRING_USERPWD],
result = setstropt(&data->set.str[STRING_USERPWD],
va_arg(param, char *));
break;
case CURLOPT_POSTQUOTE:
@@ -1556,14 +1562,14 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* user:password needed to use the proxy
*/
result = Curl_setstropt(&data->set.str[STRING_PROXYUSERPWD],
result = setstropt(&data->set.str[STRING_PROXYUSERPWD],
va_arg(param, char *));
break;
case CURLOPT_RANGE:
/*
* What range of the file you want to transfer
*/
result = Curl_setstropt(&data->set.str[STRING_SET_RANGE],
result = setstropt(&data->set.str[STRING_SET_RANGE],
va_arg(param, char *));
break;
case CURLOPT_RESUME_FROM:
@@ -1627,6 +1633,18 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/* When set to NULL, reset to our internal default function */
data->set.fread_func = (curl_read_callback)fread;
break;
case CURLOPT_SEEKFUNCTION:
/*
* Seek callback. Might be NULL.
*/
data->set.seek_func = va_arg(param, curl_seek_callback);
break;
case CURLOPT_SEEKDATA:
/*
* Seek control callback. Might be NULL.
*/
data->set.seek_client = va_arg(param, void *);
break;
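How an application would wire the two new options together, assuming the default fread-based read callback and an illustrative file name and URL:

#include <stdio.h>
#include <curl/curl.h>

static int app_seek(void *userp, curl_off_t offset, int origin)
{
  return fseek((FILE *)userp, (long)offset, origin) ? 1 : 0;
}

int main(void)
{
  FILE *upload = fopen("body.txt", "rb");   /* illustrative file */
  CURL *easy = curl_easy_init();
  if(easy && upload) {
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(easy, CURLOPT_UPLOAD, 1L);
    curl_easy_setopt(easy, CURLOPT_READDATA, upload);
    curl_easy_setopt(easy, CURLOPT_SEEKFUNCTION, app_seek);
    curl_easy_setopt(easy, CURLOPT_SEEKDATA, upload);
    curl_easy_perform(easy);
  }
  if(easy)
    curl_easy_cleanup(easy);
  if(upload)
    fclose(upload);
  return 0;
}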
case CURLOPT_CONV_FROM_NETWORK_FUNCTION:
/*
* "Convert from network encoding" callback
@@ -1661,35 +1679,35 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* String that holds file name of the SSL certificate to use
*/
result = Curl_setstropt(&data->set.str[STRING_CERT],
result = setstropt(&data->set.str[STRING_CERT],
va_arg(param, char *));
break;
case CURLOPT_SSLCERTTYPE:
/*
* String that holds file type of the SSL certificate to use
*/
result = Curl_setstropt(&data->set.str[STRING_CERT_TYPE],
result = setstropt(&data->set.str[STRING_CERT_TYPE],
va_arg(param, char *));
break;
case CURLOPT_SSLKEY:
/*
* String that holds file name of the SSL certificate to use
*/
result = Curl_setstropt(&data->set.str[STRING_KEY],
result = setstropt(&data->set.str[STRING_KEY],
va_arg(param, char *));
break;
case CURLOPT_SSLKEYTYPE:
/*
* String that holds file type of the SSL certificate to use
*/
result = Curl_setstropt(&data->set.str[STRING_KEY_TYPE],
result = setstropt(&data->set.str[STRING_KEY_TYPE],
va_arg(param, char *));
break;
case CURLOPT_KEYPASSWD:
/*
* String that holds the SSL or SSH private key password.
*/
result = Curl_setstropt(&data->set.str[STRING_KEY_PASSWD],
result = setstropt(&data->set.str[STRING_KEY_PASSWD],
va_arg(param, char *));
break;
case CURLOPT_SSLENGINE:
@@ -1719,7 +1737,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
* Set what interface or address/hostname to bind the socket to when
* performing an operation and thus what from-IP your connection will use.
*/
result = Curl_setstropt(&data->set.str[STRING_DEVICE],
result = setstropt(&data->set.str[STRING_DEVICE],
va_arg(param, char *));
break;
case CURLOPT_LOCALPORT:
@@ -1738,7 +1756,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* A string that defines the kerberos security level.
*/
result = Curl_setstropt(&data->set.str[STRING_KRB_LEVEL],
result = setstropt(&data->set.str[STRING_KRB_LEVEL],
va_arg(param, char *));
data->set.krb = (bool)(NULL != data->set.str[STRING_KRB_LEVEL]);
break;
@@ -1770,7 +1788,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Set CA info for SSL connection. Specify file name of the CA certificate
*/
result = Curl_setstropt(&data->set.str[STRING_SSL_CAFILE],
result = setstropt(&data->set.str[STRING_SSL_CAFILE],
va_arg(param, char *));
break;
case CURLOPT_CAPATH:
@@ -1779,7 +1797,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
* certificates which have been prepared using openssl c_rehash utility.
*/
/* This does not work on windows. */
result = Curl_setstropt(&data->set.str[STRING_SSL_CAPATH],
result = setstropt(&data->set.str[STRING_SSL_CAPATH],
va_arg(param, char *));
break;
case CURLOPT_TELNETOPTIONS:
@@ -1868,7 +1886,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
case CURLOPT_PROXYTYPE:
/*
* Set proxy type. HTTP/SOCKS4/SOCKS5
* Set proxy type. HTTP/SOCKS4/SOCKS4a/SOCKS5/SOCKS5_HOSTNAME
*/
data->set.proxytype = (curl_proxytype)va_arg(param, long);
break;
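Usage sketch for the two proxy-side name-resolving types the comment now mentions (proxy address is illustrative):

#include <curl/curl.h>

int main(void)
{
  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(easy, CURLOPT_PROXY, "socks.example.com:1080");
  /* let the proxy resolve the host name instead of doing it locally;
     CURLPROXY_SOCKS5_HOSTNAME is the SOCKS5 equivalent */
  curl_easy_setopt(easy, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS4A);
  curl_easy_perform(easy);
  curl_easy_cleanup(easy);
  return 0;
}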
@@ -1929,7 +1947,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
These former 3rd party transfer options are deprecated */
case CURLOPT_FTP_ACCOUNT:
result = Curl_setstropt(&data->set.str[STRING_FTP_ACCOUNT],
result = setstropt(&data->set.str[STRING_FTP_ACCOUNT],
va_arg(param, char *));
break;
@@ -1945,7 +1963,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
break;
case CURLOPT_FTP_ALTERNATIVE_TO_USER:
result = Curl_setstropt(&data->set.str[STRING_FTP_ALTERNATIVE_TO_USER],
result = setstropt(&data->set.str[STRING_FTP_ALTERNATIVE_TO_USER],
va_arg(param, char *));
break;
@@ -1990,7 +2008,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Use this file instead of the $HOME/.ssh/id_dsa.pub file
*/
result = Curl_setstropt(&data->set.str[STRING_SSH_PUBLIC_KEY],
result = setstropt(&data->set.str[STRING_SSH_PUBLIC_KEY],
va_arg(param, char *));
break;
@@ -1998,7 +2016,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
/*
* Use this file instead of the $HOME/.ssh/id_dsa file
*/
result = Curl_setstropt(&data->set.str[STRING_SSH_PRIVATE_KEY],
result = setstropt(&data->set.str[STRING_SSH_PRIVATE_KEY],
va_arg(param, char *));
break;
case CURLOPT_SSH_HOST_PUBLIC_KEY_MD5:
@@ -2006,7 +2024,7 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
* Option to allow for the MD5 of the host public key to be checked
* for validation purposes.
*/
result = Curl_setstropt(&data->set.str[STRING_SSH_HOST_PUBLIC_KEY_MD5],
result = setstropt(&data->set.str[STRING_SSH_HOST_PUBLIC_KEY_MD5],
va_arg(param, char *));
break;
case CURLOPT_HTTP_TRANSFER_DECODING:
@@ -2036,6 +2054,23 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option,
*/
data->set.new_directory_perms = va_arg(param, long);
break;
case CURLOPT_PROXY_TRANSFER_MODE:
/*
* set transfer mode (;type=<a|i>) when doing FTP via an HTTP proxy
*/
switch (va_arg(param, long)) {
case 0:
data->set.proxy_transfer_mode = FALSE;
break;
case 1:
data->set.proxy_transfer_mode = TRUE;
break;
default:
/* reserve other values for future use */
result = CURLE_FAILED_INIT;
break;
}
break;
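A sketch of the option from the application side, fetching an FTP URL through an HTTP proxy so the type= suffix gets appended automatically (host names illustrative):

#include <curl/curl.h>

int main(void)
{
  CURL *easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "ftp://ftp.example.com/file.bin");
  curl_easy_setopt(easy, CURLOPT_PROXY, "http://proxy.example.com:3128");
  /* append ";type=a" or ";type=i" to the proxied FTP URL as appropriate */
  curl_easy_setopt(easy, CURLOPT_PROXY_TRANSFER_MODE, 1L);
  curl_easy_perform(easy);
  curl_easy_cleanup(easy);
  return 0;
}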
default:
/* unknown tag and its companion, just ignore: */
@@ -2077,6 +2112,7 @@ static void conn_free(struct connectdata *conn)
Curl_llist_destroy(conn->send_pipe, NULL);
Curl_llist_destroy(conn->recv_pipe, NULL);
Curl_llist_destroy(conn->pend_pipe, NULL);
/* possible left-overs from the async name resolvers */
#if defined(USE_ARES)
@@ -2160,13 +2196,14 @@ CURLcode Curl_disconnect(struct connectdata *conn)
Curl_ssl_close(conn, FIRSTSOCKET);
/* Indicate to all handles on the pipe that we're dead */
if(IsPipeliningEnabled(data)) {
if(Curl_isPipeliningEnabled(data)) {
signalPipeClose(conn->send_pipe);
signalPipeClose(conn->recv_pipe);
signalPipeClose(conn->pend_pipe);
}
conn_free(conn);
data->reqdata.current_conn = NULL;
data->state.current_conn = NULL;
return CURLE_OK;
}
@@ -2200,7 +2237,7 @@ static bool IsPipeliningPossible(const struct SessionHandle *handle)
return FALSE;
}
static bool IsPipeliningEnabled(const struct SessionHandle *handle)
bool Curl_isPipeliningEnabled(const struct SessionHandle *handle)
{
if(handle->multi && Curl_multi_canPipeline(handle->multi))
return TRUE;
@@ -2223,9 +2260,8 @@ CURLcode Curl_addHandleToPipeline(struct SessionHandle *data,
return CURLE_OK;
}
int Curl_removeHandleFromPipeline(struct SessionHandle *handle,
struct curl_llist *pipeline)
struct curl_llist *pipeline)
{
struct curl_llist_element *curr;
@@ -2249,23 +2285,12 @@ static void Curl_printPipeline(struct curl_llist *pipeline)
curr = pipeline->head;
while(curr) {
struct SessionHandle *data = (struct SessionHandle *) curr->ptr;
infof(data, "Handle in pipeline: %s\n", data->reqdata.path);
infof(data, "Handle in pipeline: %s\n", data->state.path);
curr = curr->next;
}
}
#endif
bool Curl_isHandleAtHead(struct SessionHandle *handle,
struct curl_llist *pipeline)
{
struct curl_llist_element *curr = pipeline->head;
if(curr) {
return (bool)(curr->ptr == handle);
}
return FALSE;
}
static struct SessionHandle* gethandleathead(struct curl_llist *pipeline)
{
struct curl_llist_element *curr = pipeline->head;
@@ -2341,43 +2366,6 @@ ConnectionExists(struct SessionHandle *data,
from the multi */
}
if(pipeLen > 0 && !canPipeline) {
/* can only happen within multi handles, and means that another easy
handle is using this connection */
continue;
}
#ifdef CURLRES_ASYNCH
/* ip_addr_str is NULL only if the resolving of the name hasn't completed
yet and until then we don't re-use this connection */
if(!check->ip_addr_str) {
infof(data,
"Connection #%ld hasn't finished name resolve, can't reuse\n",
check->connectindex);
continue;
}
#endif
if((check->sock[FIRSTSOCKET] == CURL_SOCKET_BAD) || check->bits.close) {
/* Don't pick a connection that hasn't connected yet or that is going to
get closed. */
infof(data, "Connection #%ld isn't open enough, can't reuse\n",
check->connectindex);
#ifdef CURLDEBUG
if(check->recv_pipe->size > 0) {
infof(data, "BAD! Unconnected #%ld has a non-empty recv pipeline!\n",
check->connectindex);
}
#endif
continue;
}
if(pipeLen >= MAX_PIPELINE_LENGTH) {
infof(data, "Connection #%ld has its pipeline full, can't reuse\n",
check->connectindex);
continue;
}
if(canPipeline) {
/* Make sure the pipe has only GET requests */
struct SessionHandle* sh = gethandleathead(check->send_pipe);
@@ -2390,6 +2378,45 @@ ConnectionExists(struct SessionHandle *data,
if(!IsPipeliningPossible(rh))
continue;
}
#ifdef CURLDEBUG
if(pipeLen > MAX_PIPELINE_LENGTH) {
infof(data, "BAD! Connection #%ld has too big pipeline!\n",
check->connectindex);
}
#endif
}
else {
if(pipeLen > 0) {
/* can only happen within multi handles, and means that another easy
handle is using this connection */
continue;
}
#ifdef CURLRES_ASYNCH
/* ip_addr_str is NULL only if the resolving of the name hasn't completed
yet and until then we don't re-use this connection */
if(!check->ip_addr_str) {
infof(data,
"Connection #%ld hasn't finished name resolve, can't reuse\n",
check->connectindex);
continue;
}
#endif
if((check->sock[FIRSTSOCKET] == CURL_SOCKET_BAD) || check->bits.close) {
/* Don't pick a connection that hasn't connected yet or that is going to
get closed. */
infof(data, "Connection #%ld isn't open enough, can't reuse\n",
check->connectindex);
#ifdef CURLDEBUG
if(check->recv_pipe->size > 0) {
infof(data, "BAD! Unconnected #%ld has a non-empty recv pipeline!\n",
check->connectindex);
}
#endif
continue;
}
}
if((needle->protocol&PROT_SSL) != (check->protocol&PROT_SSL))
@@ -2450,7 +2477,7 @@ ConnectionExists(struct SessionHandle *data,
}
if(match) {
if(!IsPipeliningEnabled(data)) {
if(!check->is_in_pipeline) {
/* The check for a dead socket makes sense only in the
non-pipelining case */
bool dead = SocketIsDead(check->sock[FIRSTSOCKET]);
@@ -2533,7 +2560,7 @@ static void
ConnectionDone(struct connectdata *conn)
{
conn->inuse = FALSE;
if(!conn->send_pipe && !conn->recv_pipe)
if(!conn->send_pipe && !conn->recv_pipe && !conn->pend_pipe)
conn->is_in_pipeline = FALSE;
}
@@ -2619,15 +2646,21 @@ static CURLcode ConnectPlease(struct SessionHandle *data,
switch(data->set.proxytype) {
case CURLPROXY_SOCKS5:
result = Curl_SOCKS5(conn->proxyuser, conn->proxypasswd, conn->host.name,
conn->remote_port, FIRSTSOCKET, conn);
case CURLPROXY_SOCKS5_HOSTNAME:
result = Curl_SOCKS5(conn->proxyuser, conn->proxypasswd,
conn->host.name, conn->remote_port,
FIRSTSOCKET, conn);
break;
case CURLPROXY_HTTP:
/* do nothing here. handled later. */
break;
case CURLPROXY_SOCKS4:
result = Curl_SOCKS4(conn->proxyuser, conn->host.name, conn->remote_port,
FIRSTSOCKET, conn);
result = Curl_SOCKS4(conn->proxyuser, conn->host.name,
conn->remote_port, FIRSTSOCKET, conn, FALSE);
break;
case CURLPROXY_SOCKS4A:
result = Curl_SOCKS4(conn->proxyuser, conn->host.name,
conn->remote_port, FIRSTSOCKET, conn, TRUE);
break;
default:
failf(data, "unknown proxytype option given");
@@ -2872,7 +2905,7 @@ static CURLcode ParseURLAndFillConnection(struct SessionHandle *data,
char *at;
char *tmp;
char *path = data->reqdata.path;
char *path = data->state.path;
/*************************************************************
* Parse the URL.
@@ -3030,7 +3063,7 @@ static CURLcode ParseURLAndFillConnection(struct SessionHandle *data,
* So if the URL was A://B/C,
* conn->protostr is A
* conn->host.name is B
* data->reqdata.path is /C
* data->state.path is /C
*/
return CURLE_OK;
@@ -3049,28 +3082,27 @@ static CURLcode setup_range(struct SessionHandle *data)
* If we're doing a resumed transfer, we need to setup our stuff
* properly.
*/
struct HandleData *req = &data->reqdata;
struct UrlState *s = &data->state;
s->resume_from = data->set.set_resume_from;
if(s->resume_from || data->set.str[STRING_SET_RANGE]) {
if(s->rangestringalloc)
free(s->range);
req->resume_from = data->set.set_resume_from;
if(req->resume_from || data->set.str[STRING_SET_RANGE]) {
if(req->rangestringalloc)
free(req->range);
if(req->resume_from)
req->range = aprintf("%" FORMAT_OFF_T "-", req->resume_from);
if(s->resume_from)
s->range = aprintf("%" FORMAT_OFF_T "-", s->resume_from);
else
req->range = strdup(data->set.str[STRING_SET_RANGE]);
s->range = strdup(data->set.str[STRING_SET_RANGE]);
req->rangestringalloc = (unsigned char)(req->range?TRUE:FALSE);
s->rangestringalloc = (bool)(s->range?TRUE:FALSE);
if(!req->range)
if(!s->range)
return CURLE_OUT_OF_MEMORY;
/* tell ourselves to fetch this range */
req->use_range = TRUE; /* enable range download */
s->use_range = TRUE; /* enable range download */
}
else
req->use_range = FALSE; /* disable range download */
s->use_range = FALSE; /* disable range download */
return CURLE_OK;
}
@@ -3226,7 +3258,7 @@ static char *detect_proxy(struct connectdata *conn)
if(conn->proxytype == CURLPROXY_HTTP) {
/* force this connection's protocol to become HTTP */
conn->protocol = PROT_HTTP | bits;
conn->bits.httpproxy = TRUE;
conn->bits.proxy = conn->bits.httpproxy = TRUE;
}
}
} /* if(!nope) - it wasn't specified non-proxy */
@@ -3522,7 +3554,8 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* Initialize the pipeline lists */
conn->send_pipe = Curl_llist_alloc((curl_llist_dtor) llist_dtor);
conn->recv_pipe = Curl_llist_alloc((curl_llist_dtor) llist_dtor);
if(!conn->send_pipe || !conn->recv_pipe)
conn->pend_pipe = Curl_llist_alloc((curl_llist_dtor) llist_dtor);
if(!conn->send_pipe || !conn->recv_pipe || !conn->pend_pipe)
return CURLE_OUT_OF_MEMORY;
/* This initing continues below, see the comment "Continue connectdata
@@ -3539,7 +3572,7 @@ static CURLcode CreateConnection(struct SessionHandle *data,
urllen=LEAST_PATH_ALLOC;
/* Free the old buffer */
Curl_safefree(data->reqdata.pathbuffer);
Curl_safefree(data->state.pathbuffer);
/*
* We malloc() the buffers below urllen+2 to make room for two possibilities:
@@ -3547,10 +3580,10 @@ static CURLcode CreateConnection(struct SessionHandle *data,
* 2 - an extra slash (in case a syntax like "www.host.com?moo" is used)
*/
data->reqdata.pathbuffer=(char *)malloc(urllen+2);
if(NULL == data->reqdata.pathbuffer)
data->state.pathbuffer=(char *)malloc(urllen+2);
if(NULL == data->state.pathbuffer)
return CURLE_OUT_OF_MEMORY; /* really bad error */
data->reqdata.path = data->reqdata.pathbuffer;
data->state.path = data->state.pathbuffer;
conn->host.rawalloc=(char *)malloc(urllen+2);
if(NULL == conn->host.rawalloc)
@@ -3803,7 +3836,7 @@ static CURLcode CreateConnection(struct SessionHandle *data,
char *url;
url = aprintf("%s://%s:%d%s", conn->protostr, conn->host.name,
conn->remote_port, data->reqdata.path);
conn->remote_port, data->state.path);
if(!url)
return CURLE_OUT_OF_MEMORY;
@@ -3986,6 +4019,7 @@ static CURLcode CreateConnection(struct SessionHandle *data,
Curl_safefree(old_conn->proxypasswd);
Curl_llist_destroy(old_conn->send_pipe, NULL);
Curl_llist_destroy(old_conn->recv_pipe, NULL);
Curl_llist_destroy(old_conn->pend_pipe, NULL);
Curl_safefree(old_conn->master_buffer);
free(old_conn); /* we don't need this anymore */
@@ -4016,6 +4050,8 @@ static CURLcode CreateConnection(struct SessionHandle *data,
* the persistent connection stuff */
conn->fread_func = data->set.fread_func;
conn->fread_in = data->set.in;
conn->seek_func = data->set.seek_func;
conn->seek_client = data->set.seek_client;
if((conn->protocol&PROT_HTTP) &&
data->set.upload &&
@@ -4237,7 +4273,7 @@ static CURLcode SetupConnection(struct connectdata *conn,
return CURLE_OUT_OF_MEMORY;
}
data->reqdata.keep.headerbytecount = 0;
data->req.headerbytecount = 0;
#ifdef CURL_DO_LINEEND_CONV
data->state.crlf_conversions = 0; /* reset CRLF conversion counter */
@@ -4318,26 +4354,24 @@ CURLcode Curl_connect(struct SessionHandle *data,
if(CURLE_OK == code) {
/* no error */
if(dns || !*asyncp)
/* If an address is available it means that we already have the name
resolved, OR it isn't async. if this is a re-used connection 'dns'
will be NULL here. Continue connecting from here */
code = SetupConnection(*in_connect, dns, protocol_done);
/* else
response will be received and treated async wise */
}
if(CURLE_OK != code) {
/* We're not allowed to return failure with memory left allocated
in the connectdata struct, free those here */
if(*in_connect) {
Curl_disconnect(*in_connect); /* close the connection */
*in_connect = NULL; /* return a NULL */
}
}
else {
if((*in_connect)->is_in_pipeline)
data->state.is_in_pipeline = TRUE;
else {
if(dns || !*asyncp)
/* If an address is available it means that we already have the name
resolved, OR it isn't async. if this is a re-used connection 'dns'
will be NULL here. Continue connecting from here */
code = SetupConnection(*in_connect, dns, protocol_done);
/* else
response will be received and treated async wise */
}
}
if(CURLE_OK != code && *in_connect) {
/* We're not allowed to return failure with memory left allocated
in the connectdata struct, free those here */
Curl_disconnect(*in_connect); /* close the connection */
*in_connect = NULL; /* return a NULL */
}
return code;
@@ -4391,11 +4425,12 @@ CURLcode Curl_done(struct connectdata **connp,
if(Curl_removeHandleFromPipeline(data, conn->send_pipe) &&
conn->writechannel_inuse)
conn->writechannel_inuse = FALSE;
Curl_removeHandleFromPipeline(data, conn->pend_pipe);
/* Cleanup possible redirect junk */
if(data->reqdata.newurl) {
free(data->reqdata.newurl);
data->reqdata.newurl = NULL;
if(data->req.newurl) {
free(data->req.newurl);
data->req.newurl = NULL;
}
if(conn->dns_entry) {
@@ -4411,6 +4446,13 @@ CURLcode Curl_done(struct connectdata **connp,
Curl_pgrsDone(conn); /* done with the operation */
/* if the transfer was completed in a paused state there can be buffered
data left to write and then kill */
if(data->state.tempwrite) {
free(data->state.tempwrite);
data->state.tempwrite = NULL;
}
/* for ares-using, make sure all possible outstanding requests are properly
cancelled before we proceed */
ares_cancel(data->state.areschannel);
@@ -4458,14 +4500,13 @@ CURLcode Curl_done(struct connectdata **connp,
static CURLcode do_init(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
struct SingleRequest *k = &data->req;
conn->bits.done = FALSE; /* Curl_done() is not called yet */
conn->bits.do_more = FALSE; /* by default there's no curl_do_more() to use */
/* NB: the content encoding software depends on this initialization of
Curl_transfer_keeper.*/
memset(k, 0, sizeof(struct Curl_transfer_keeper));
/* NB: the content encoding software depends on this initialization */
Curl_easy_initHandleData(data);
k->start = Curl_tvnow(); /* start time */
k->now = k->start; /* current time is now */
@@ -4496,19 +4537,11 @@ static CURLcode do_init(struct connectdata *conn)
*/
static void do_complete(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
struct Curl_transfer_keeper *k = &data->reqdata.keep;
conn->bits.chunk=FALSE;
conn->bits.trailerhdrpresent=FALSE;
k->maxfd = (conn->sockfd>conn->writesockfd?
conn->sockfd:conn->writesockfd)+1;
k->size = data->reqdata.size;
k->maxdownload = data->reqdata.maxdownload;
k->bytecountp = data->reqdata.bytecountp;
k->writebytecountp = data->reqdata.writebytecountp;
conn->data->req.maxfd = (conn->sockfd>conn->writesockfd?
conn->sockfd:conn->writesockfd)+1;
}
CURLcode Curl_do(struct connectdata **connp, bool *done)
@@ -4575,8 +4608,8 @@ CURLcode Curl_do(struct connectdata **connp, bool *done)
}
}
if(result == CURLE_OK)
/* pre readwrite must be called after the protocol-specific DO function */
if((result == CURLE_OK) && *done)
/* do_complete must be called after the protocol-specific DO function */
do_complete(conn);
}
return result;
@@ -4589,6 +4622,10 @@ CURLcode Curl_do_more(struct connectdata *conn)
if(conn->handler->do_more)
result = conn->handler->do_more(conn);
if(result == CURLE_OK)
/* do_complete must be called after the protocol-specific DO function */
do_complete(conn);
return result;
}
@@ -4598,9 +4635,9 @@ CURLcode Curl_do_more(struct connectdata *conn)
void Curl_reset_reqproto(struct connectdata *conn)
{
struct SessionHandle *data = conn->data;
if(data->reqdata.proto.generic && data->reqdata.current_conn != conn) {
free(data->reqdata.proto.generic);
data->reqdata.proto.generic = NULL;
if(data->state.proto.generic && data->state.current_conn != conn) {
free(data->state.proto.generic);
data->state.proto.generic = NULL;
}
data->reqdata.current_conn = conn;
data->state.current_conn = conn;
}


@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -64,12 +64,11 @@ int Curl_doing_getsock(struct connectdata *conn,
curl_socket_t *socks,
int numsocks);
bool Curl_isPipeliningEnabled(const struct SessionHandle *handle);
CURLcode Curl_addHandleToPipeline(struct SessionHandle *handle,
struct curl_llist *pipeline);
int Curl_removeHandleFromPipeline(struct SessionHandle *handle,
struct curl_llist *pipeline);
bool Curl_isHandleAtHead(struct SessionHandle *handle,
struct curl_llist *pipeline);
void Curl_close_connections(struct SessionHandle *data);


@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -73,6 +73,9 @@
#include "ssl.h"
#include "err.h"
#endif /* USE_OPENSSL */
#ifdef USE_GNUTLS
#error Configuration error; cannot use GnuTLS *and* OpenSSL.
#endif
#endif /* USE_SSLEAY */
#ifdef USE_GNUTLS
@@ -629,12 +632,18 @@ struct hostname {
*/
#define KEEP_NONE 0
#define KEEP_READ 1 /* there is or may be data to read */
#define KEEP_WRITE 2 /* there is or may be data to write */
#define KEEP_READ_HOLD 4 /* when set, no reading should be done but there
might still be data to read */
#define KEEP_WRITE_HOLD 8 /* when set, no writing should be done but there
might still be data to write */
#define KEEP_READ (1<<0) /* there is or may be data to read */
#define KEEP_WRITE (1<<1) /* there is or may be data to write */
#define KEEP_READ_HOLD (1<<2) /* when set, no reading should be done but there
might still be data to read */
#define KEEP_WRITE_HOLD (1<<3) /* when set, no writing should be done but there
might still be data to write */
#define KEEP_READ_PAUSE (1<<4) /* reading is paused */
#define KEEP_WRITE_PAUSE (1<<5) /* writing is paused */
#define KEEP_READBITS (KEEP_READ | KEEP_READ_HOLD | KEEP_READ_PAUSE)
#define KEEP_WRITEBITS (KEEP_WRITE | KEEP_WRITE_HOLD | KEEP_WRITE_PAUSE)
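With the HOLD and PAUSE bits grouped into one mask, "should this direction be serviced right now?" stays a single comparison; a standalone sketch of the idiom:

#include <stdio.h>

#define KEEP_READ        (1<<0)
#define KEEP_READ_HOLD   (1<<2)
#define KEEP_READ_PAUSE  (1<<4)
#define KEEP_READBITS    (KEEP_READ | KEEP_READ_HOLD | KEEP_READ_PAUSE)

int main(void)
{
  int keepon = KEEP_READ | KEEP_READ_PAUSE;  /* reading wanted but paused */
  /* true only when reading is wanted AND it is neither held nor paused */
  if((keepon & KEEP_READBITS) == KEEP_READ)
    puts("check the socket for readability");
  else
    puts("skip the read socket this round");
  return 0;
}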
#ifdef HAVE_LIBZ
typedef enum {
@@ -646,16 +655,36 @@ typedef enum {
} zlibInitState;
#endif
#if defined(USE_ARES) || defined(USE_THREADING_GETHOSTBYNAME) || \
defined(USE_THREADING_GETADDRINFO)
struct Curl_async {
char *hostname;
int port;
struct Curl_dns_entry *dns;
bool done; /* set TRUE when the lookup is complete */
int status; /* if done is TRUE, this is the status from the callback */
void *os_specific; /* 'struct thread_data' for Windows */
};
#endif
#define FIRSTSOCKET 0
#define SECONDARYSOCKET 1
/* These function pointer types are here only to allow easier typecasting
within the source when we need to cast between data pointers (such as NULL)
and function pointers. */
typedef CURLcode (*Curl_do_more_func)(struct connectdata *);
typedef CURLcode (*Curl_done_func)(struct connectdata *, CURLcode, bool);
/*
* This struct is all the previously local variables from Curl_perform() moved
* to struct to allow the function to return and get re-invoked better without
* losing state.
* Request specific data in the easy handle (SessionHandle). Previously,
* these members were on the connectdata struct but since a conn struct may
* now be shared between different SessionHandles, we store connection-specific
* data here. This struct only keeps stuff that's interesting for *this*
* request, as it will be cleared between multiple ones
*/
struct Curl_transfer_keeper {
/** Values copied over from the HandleData struct each time on init **/
struct SingleRequest {
curl_off_t size; /* -1 if unknown at this point */
curl_off_t *bytecountp; /* return number of bytes read or NULL */
@@ -663,17 +692,15 @@ struct Curl_transfer_keeper {
-1 means unlimited */
curl_off_t *writebytecountp; /* return number of bytes written or NULL */
/** End of HandleData struct copies **/
curl_off_t bytecount; /* total number of bytes read */
curl_off_t writebytecount; /* number of bytes written */
long headerbytecount; /* only count received headers */
long headerbytecount; /* only count received headers */
long deductheadercount; /* this amount of bytes doesn't count when we check
if anything has been transferred at the end of
a connection. We use this counter to make only
a 100 reply (without a following second response
code) result in a CURLE_GOT_NOTHING error code */
if anything has been transferred at the end of a
connection. We use this counter to make only a
100 reply (without a following second response
code) result in a CURLE_GOT_NOTHING error code */
struct timeval start; /* transfer started at this time */
struct timeval now; /* current time */
@@ -733,48 +760,10 @@ struct Curl_transfer_keeper {
bool ignorebody; /* we read a response-body but we ignore it! */
bool ignorecl; /* This HTTP response has no body so we ignore the Content-
Length: header */
};
#if defined(USE_ARES) || defined(USE_THREADING_GETHOSTBYNAME) || \
defined(USE_THREADING_GETADDRINFO)
struct Curl_async {
char *hostname;
int port;
struct Curl_dns_entry *dns;
bool done; /* set TRUE when the lookup is complete */
int status; /* if done is TRUE, this is the status from the callback */
void *os_specific; /* 'struct thread_data' for Windows */
};
#endif
#define FIRSTSOCKET 0
#define SECONDARYSOCKET 1
/* These function pointer types are here only to allow easier typecasting
within the source when we need to cast between data pointers (such as NULL)
and function pointers. */
typedef CURLcode (*Curl_do_more_func)(struct connectdata *);
typedef CURLcode (*Curl_done_func)(struct connectdata *, CURLcode, bool);
/*
* Stores request specific data in the easy handle (SessionHandle).
* Previously, these members were on the connectdata struct but since
* a conn struct may now be shared between different SessionHandles,
* we store connection-specific data here.
*
*/
struct HandleData {
char *pathbuffer;/* allocated buffer to store the URL's path part in */
char *path; /* path to use, points to somewhere within the pathbuffer
area */
char *newurl; /* This can only be set if a Location: was in the
document headers */
/* This struct is inited when needed */
struct Curl_transfer_keeper keep;
/* 'upload_present' is used to keep a byte counter of how much data there is
still left in the buffer, aimed for upload. */
ssize_t upload_present;
@@ -784,40 +773,6 @@ struct HandleData {
and the 'upload_present' contains the number of bytes available at this
position */
char *upload_fromhere;
curl_off_t size; /* -1 if unknown at this point */
curl_off_t *bytecountp; /* return number of bytes read or NULL */
curl_off_t maxdownload; /* in bytes, the maximum amount of data to fetch, -1
means unlimited */
curl_off_t *writebytecountp; /* return number of bytes written or NULL */
bool use_range;
bool rangestringalloc; /* the range string is malloc()'ed */
char *range; /* range, if used. See README for detailed specification on
this syntax. */
curl_off_t resume_from; /* continue [ftp] transfer from here */
/* Protocol specific data.
*
*************************************************************************
* Note that this data will be REMOVED after each request, so anything that
* should be kept/stored on a per-connection basis and thus live for the
* next request on the same connection MUST be put in the connectdata struct!
*************************************************************************/
union {
struct HTTP *http;
struct HTTP *https; /* alias, just for the sake of being more readable */
struct FTP *ftp;
void *tftp; /* private for tftp.c-eyes only */
struct FILEPROTO *file;
void *telnet; /* private for telnet.c-eyes only */
void *generic;
struct SSHPROTO *ssh;
} proto;
/* current user of this HandleData instance, or NULL */
struct connectdata *current_conn;
};
/*
@@ -997,11 +952,16 @@ struct connectdata {
bool writechannel_inuse; /* whether the write channel is in use by an easy
handle */
bool is_in_pipeline; /* TRUE if this connection is in a pipeline */
bool server_supports_pipelining; /* TRUE if server supports pipelining,
set after first response */
struct curl_llist *send_pipe; /* List of handles waiting to
send on this pipeline */
struct curl_llist *recv_pipe; /* List of handles waiting to read
their responses on this pipeline */
struct curl_llist *pend_pipe; /* List of pending handles on
this pipeline */
#define MAX_PIPELINE_LENGTH 5
char* master_buffer; /* The master buffer allocated on-demand;
used for pipelining. */
@@ -1009,6 +969,9 @@ struct connectdata {
size_t buf_len; /* Length of the buffer?? */
curl_seek_callback seek_func; /* function that seeks the input */
void *seek_client; /* pointer to pass to the seek() above */
/*************** Request - specific items ************/
/* previously this was in the urldata struct */
@@ -1180,10 +1143,13 @@ struct UrlState {
following not keep sending user+password... This is
strdup() data.
*/
struct curl_ssl_session *session; /* array of 'numsessions' size */
long sessionage; /* number of the most recent session */
char *tempwrite; /* allocated buffer to keep data in when a write
callback returns to make the connection paused */
size_t tempwritesize; /* size of the 'tempwrite' allocated buffer */
int tempwritetype; /* type of the 'tempwrite' buffer as a bitmask that is
used with Curl_client_write() */
char *scratch; /* huge buffer[BUFSIZE*2] when doing upload CRLF replacing */
bool errorbuf; /* Set to TRUE if the error buffer is already filled in.
This must be set to FALSE every time _easy_perform() is
@@ -1228,7 +1194,6 @@ struct UrlState {
bool pipe_broke; /* TRUE if the connection we were pipelined on broke
and we need to restart from the beginning */
bool cancelled; /* TRUE if the request was cancelled */
#ifndef WIN32
/* do FTP line-end conversions on most platforms */
@@ -1246,6 +1211,36 @@ struct UrlState {
bool closed; /* set to TRUE when curl_easy_cleanup() has been called on this
handle, but it is kept around as mentioned for
shared_conn */
char *pathbuffer;/* allocated buffer to store the URL's path part in */
char *path; /* path to use, points to somewhere within the pathbuffer
area */
bool use_range;
bool rangestringalloc; /* the range string is malloc()'ed */
char *range; /* range, if used. See README for detailed specification on
this syntax. */
curl_off_t resume_from; /* continue [ftp] transfer from here */
/* Protocol specific data.
*
*************************************************************************
* Note that this data will be REMOVED after each request, so anything that
* should be kept/stored on a per-connection basis and thus live for the
* next request on the same connection MUST be put in the connectdata struct!
*************************************************************************/
union {
struct HTTP *http;
struct HTTP *https; /* alias, just for the sake of being more readable */
struct FTP *ftp;
void *tftp; /* private for tftp.c-eyes only */
struct FILEPROTO *file;
void *telnet; /* private for telnet.c-eyes only */
void *generic;
struct SSHPROTO *ssh;
} proto;
/* current user of this SessionHandle instance, or NULL */
struct connectdata *current_conn;
};
@@ -1339,6 +1334,7 @@ struct UserDefined {
bool free_referer; /* set TRUE if 'referer' points to a string we
allocated */
void *postfields; /* if POST, set the fields' values here */
curl_seek_callback seek_func; /* function that seeks the input */
curl_off_t postfieldsize; /* if POST, this might have a size to use instead
of strlen(), and then the data *may* be binary
(contain zero bytes) */
@@ -1357,6 +1353,7 @@ struct UserDefined {
the address and opening the socket */
void* opensocket_client;
void *seek_client; /* pointer to pass to the seek callback */
/* the 3 curl_conv_callback functions below are used on non-ASCII hosts */
/* function to convert from the network encoding: */
curl_conv_callback convfromnetwork;
@@ -1428,7 +1425,7 @@ struct UserDefined {
bool ftp_create_missing_dirs; /* create directories that don't exist */
bool ftp_use_port; /* use the FTP PORT command */
bool hide_progress; /* don't use the progress meter */
bool http_fail_on_error; /* fail on HTTP error codes >= 300 */
bool http_fail_on_error; /* fail on HTTP error codes >= 300 */
bool http_follow_location; /* follow HTTP redirects */
bool http_disable_hostname_check_before_authentication;
bool include_header; /* include received protocol headers in data output */
@@ -1463,7 +1460,8 @@ struct UserDefined {
content-encoded (chunked, compressed) */
long new_file_perms; /* Permissions to use when creating remote files */
long new_directory_perms; /* Permissions to use when creating remote dirs */
bool proxy_transfer_mode; /* set transfer mode (;type=<a|i>) when doing FTP
via an HTTP proxy */
char *str[STRING_LAST]; /* array of strings, pointing to allocated memory */
};
@@ -1496,7 +1494,7 @@ struct SessionHandle {
in multi controlling structure to assist
in removal. */
struct Curl_share *share; /* Share, handles global variable mutexing */
struct HandleData reqdata; /* Request-specific data */
struct SingleRequest req; /* Request-specific data */
struct UserDefined set; /* values set by the libcurl user */
struct DynamicStatic change; /* possibly modified userdefined data */
struct CookieInfo *cookies; /* the cookies, read from files and servers.

View File

@@ -5,7 +5,7 @@
# * | (__| |_| | _ <| |___
# * \___|\___/|_| \_\_____|
# *
# * Copyright (C) 1998 - 2004, Daniel Stenberg, <daniel@haxx.se>, et al.
# * Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
# *
# * This software is licensed as described in the file COPYING, which
# * you should have received as part of this distribution. The terms
@@ -20,48 +20,56 @@
# *
# * $Id$
# ***************************************************************************
# awk script which fetches libcurl version number and string from input file
# and writes them to STDOUT. Here you can get an awk version for Win32:
# http://www.gknw.com/development/prgtools/awk.zip
# awk script which fetches curl / ares version number and string from input
# file and writes them to STDOUT. Here you can get an awk version for Win32:
# http://www.gknw.net/development/prgtools/awk-20070501.zip
#
BEGIN {
if (match (ARGV[1], /curlver.h/)) {
while ((getline < ARGV[1]) > 0) {
if (match ($0, /^#define LIBCURL_VERSION "[^"]+"/)) {
if (match ($0, /^#define LIBCURL_COPYRIGHT "[^"]+"$/)) {
libcurl_copyright_str = substr($0, 28, length($0)-28);
}
else if (match ($0, /^#define LIBCURL_VERSION "[^"]+"$/)) {
libcurl_ver_str = substr($3, 2, length($3)-2);
}
else if (match ($0, /^#define LIBCURL_VERSION_MAJOR [^"]+/)) {
else if (match ($0, /^#define LIBCURL_VERSION_MAJOR [0-9]+$/)) {
libcurl_ver_major = substr($3, 1, length($3));
}
else if (match ($0, /^#define LIBCURL_VERSION_MINOR [^"]+/)) {
else if (match ($0, /^#define LIBCURL_VERSION_MINOR [0-9]+$/)) {
libcurl_ver_minor = substr($3, 1, length($3));
}
else if (match ($0, /^#define LIBCURL_VERSION_PATCH [^"]+/)) {
else if (match ($0, /^#define LIBCURL_VERSION_PATCH [0-9]+$/)) {
libcurl_ver_patch = substr($3, 1, length($3));
}
}
libcurl_ver = libcurl_ver_major "," libcurl_ver_minor "," libcurl_ver_patch;
print "LIBCURL_VERSION = " libcurl_ver "";
print "LIBCURL_VERSION_STR = " libcurl_ver_str "";
print "LIBCURL_COPYRIGHT_STR = " libcurl_copyright_str "";
}
if (match (ARGV[1], /ares_version.h/)) {
while ((getline < ARGV[1]) > 0) {
if (match ($0, /^#define ARES_VERSION_STR "[^"]+"/)) {
if (match ($0, /^#define ARES_COPYRIGHT "[^"]+"$/)) {
libcares_copyright_str = substr($0, 25, length($0)-25);
}
else if (match ($0, /^#define ARES_VERSION_STR "[^"]+"$/)) {
libcares_ver_str = substr($3, 2, length($3)-2);
}
else if (match ($0, /^#define ARES_VERSION_MAJOR [^"]+/)) {
else if (match ($0, /^#define ARES_VERSION_MAJOR [0-9]+$/)) {
libcares_ver_major = substr($3, 1, length($3));
}
else if (match ($0, /^#define ARES_VERSION_MINOR [^"]+/)) {
else if (match ($0, /^#define ARES_VERSION_MINOR [0-9]+$/)) {
libcares_ver_minor = substr($3, 1, length($3));
}
else if (match ($0, /^#define ARES_VERSION_PATCH [^"]+/)) {
else if (match ($0, /^#define ARES_VERSION_PATCH [0-9]+$/)) {
libcares_ver_patch = substr($3, 1, length($3));
}
}
libcares_ver = libcares_ver_major "," libcares_ver_minor "," libcares_ver_patch;
print "LIBCARES_VERSION = " libcares_ver "";
print "LIBCARES_VERSION_STR = " libcares_ver_str "";
print "LIBCARES_COPYRIGHT_STR = " libcares_copyright_str "";
}
}

View File

@@ -181,10 +181,11 @@ objects:
_ Library CURL. All other objects will be stored in this library.
_ Modules for all libcurl units.
_ Binding directory CURL_A, to be used at calling program link time for
statically binding the modules (specify BNDSRVPGM(QADRTTS) when creating a
program using CURL_A).
_ Service program CURL, to be used at calling program run-time when this program
has dynamically bound curl at link time.
statically binding the modules (specify BNDSRVPGM(QADRTTS QGLDCLNT QGLDBRDR)
when creating a program using CURL_A).
_ Service program CURL.<soname>, where <soname> is extracted from the
lib/Makefile.am VERSION variable. To be used at calling program run-time
when this program has dynamically bound curl at link time.
_ Binding directory CURL. To be used to dynamically bind libcurl when linking a
calling program.
_ Source file H. It contains all the include members needed to compile a C/C++

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -112,6 +112,11 @@
*
d CURL_READFUNC_ABORT...
d c X'10000000'
d CURL_READFUNC_PAUSE...
d c X'10000001'
*
d CURL_WRITEFUNC_PAUSE...
d c X'10000001'
*
d CURLAUTH_NONE c X'00000000'
d CURLAUTH_BASIC c X'00000001'
@@ -196,6 +201,15 @@
d CURL_CSELECT_ERR...
d c X'00000004'
*
d CURLPAUSE_RECV c X'00000001'
d CURLPAUSE_RECV_CONT...
d c X'00000000'
d CURLPAUSE_SEND c X'00000004'
d CURLPAUSE_SEND_CONT...
d c X'00000000'
d CURLPAUSE_ALL c X'00000005'
d CURLPAUSE_CONT c X'00000000'
*
**************************************************************************
* Types
**************************************************************************
@@ -404,6 +418,10 @@
d c 4
d CURLPROXY_SOCKS5...
d c 5
d CURLPROXY_SOCKS4A...
d c 6
d CURLPROXY_SOCKS5_HOSTNAME...
d c 7
*
d curl_usessl s 10i 0 based(######ptr######) Enum
d CURLUSESSL_NONE...
@@ -552,7 +570,7 @@
d c 00061
d CURLOPT_INTERFACE...
d c 10062
d CURLOPT_KRB4LEVEL...
d CURLOPT_KRBLEVEL...
d c 10063
d CURLOPT_SSL_VERIFYPEER...
d c 00064
@@ -731,6 +749,14 @@
d c 20163
d CURLOPT_OPENSOCKETDATA...
d c 10164
d CURLOPT_COPYPOSTFIELDS...
d c 10165
d CURLOPT_PROXY_TRANSFER_MODE...
d c 00166
d CURLOPT_SEEKFUNCTION...
d c 20167
d CURLOPT_SEEKDATA...
d c 10168
*
d CURLFORMcode s 10i 0 based(######ptr######) Enum
d CURL_FORMADD_OK...
@@ -1080,6 +1106,9 @@
d s * based(######ptr######) procptr
*
d curl_read_callback...
d s * based(######ptr######) procptr
*
d curl_seek_callback...
d s * based(######ptr######) procptr
*
d curl_sockopt_callback...
@@ -1325,6 +1354,11 @@
d pr extproc('curl_easy_reset')
d curl * value CURL *
*
d curl_easy_pause...
d pr extproc('curl_easy_pause')
d curl * value CURL *
d bitmask 10i 0 value
*
d curl_multi_init...
d pr * extproc('curl_multi_init') CURLM *
*
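The CURLPAUSE_* constants, the CURL_READFUNC_PAUSE/CURL_WRITEFUNC_PAUSE return codes and the curl_easy_pause() prototype added above bind the pause API that is new in 7.18.0. A small illustrative C sketch of the pattern (not taken from this diff): a write callback asks libcurl to pause delivery, and the application resumes the transfer later with curl_easy_pause().

#include <curl/curl.h>

/* pause the transfer the first time data arrives */
static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *userp)
{
  int *paused = (int *)userp;
  (void)ptr;
  if(!*paused) {
    *paused = 1;
    return CURL_WRITEFUNC_PAUSE; /* hold further delivery for now */
  }
  return size * nmemb;           /* otherwise consume the data */
}

/* later, typically from the application's event loop, resume with:
     curl_easy_pause(easy, CURLPAUSE_CONT);
   CURLPAUSE_ALL pauses both directions, CURLPAUSE_RECV/CURLPAUSE_SEND one. */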

View File

@@ -20,6 +20,12 @@ TOPDIR=`dirname "${SCRIPTDIR}"`
TOPDIR=`dirname "${TOPDIR}"`
export SCRIPTDIR TOPDIR
# Extract the SONAME from the library makefile.
SONAME=`sed -e '/^VERSION=/!d' -e 's/^.* \([0-9]*\):.*$/\1/' \
< "${TOPDIR}/lib/Makefile.am"`
export SONAME
################################################################################
#
@@ -30,12 +36,12 @@ export SCRIPTDIR TOPDIR
TARGETLIB='CURL' # Target OS/400 program library
STATBNDDIR='CURL_A' # Static binding directory.
DYNBNDDIR='CURL' # Dynamic binding directory.
SRVPGM='CURL' # Service program.
SRVPGM="CURL.${SONAME}" # Service program.
TGTCCSID='500' # Target CCSID of objects
DEBUG='*ALL' # Debug level
OPTIMIZE='10' # Optimisation level
OUTPUT='*NONE' # Compilation output option.
TGTRLS='V5R1M0' # Target OS release
TGTRLS='V5R2M0' # Target OS release
export TARGETLIB STATBNDDIR DYNBNDDIR SRVPGM TGTCCSID DEBUG OPTIMIZE OUTPUT
export TGTRLS

View File

@@ -28,7 +28,7 @@ fi
echo '#pragma comment(user, "libcurl version '"${LIBCURL_VERSION}"'")' > os400.c
echo '#pragma comment(date)' >> os400.c
echo '#pragma comment(copyright, "Copyright (C) 1998-2007 Daniel Stenberg et al. OS/400 version by P. Monnerat")' >> os400.c
echo '#pragma comment(copyright, "Copyright (C) 1998-2008 Daniel Stenberg et al. OS/400 version by P. Monnerat")' >> os400.c
make_module OS400 os400.c
LINK= # No need to rebuild service program yet.
MODULES=
@@ -113,12 +113,13 @@ EXPORTS=`grep '^CURL_EXTERN[ ]' \
BSF="${LIBIFSNAME}/TOOLS.FILE/BNDSRC.MBR"
if action_needed "${BSF}"
if action_needed "${BSF}" Makefile.am
then LINK=YES
fi
if [ "${LINK}" ]
then echo " STRPGMEXP PGMLVL(*CURRENT) SIGNATURE('LIBCURL')" > "${BSF}"
then echo " STRPGMEXP PGMLVL(*CURRENT) SIGNATURE('LIBCURL_${SONAME}')" \
> "${BSF}"
for EXPORT in ${EXPORTS}
do echo ' EXPORT SYMBOL("'"${EXPORT}"'")' >> "${BSF}"
done
@@ -138,7 +139,7 @@ then CMD="CRTSRVPGM SRVPGM(${TARGETLIB}/${SRVPGM})"
CMD="${CMD} SRCFILE(${TARGETLIB}/TOOLS) SRCMBR(BNDSRC)"
CMD="${CMD} MODULE(${TARGETLIB}/OS400)"
CMD="${CMD} BNDDIR(${TARGETLIB}/${STATBNDDIR})"
CMD="${CMD} BNDSRVPGM(QADRTTS)"
CMD="${CMD} BNDSRVPGM(QADRTTS QGLDCLNT QGLDBRDR)"
CMD="${CMD} TEXT('curl API library')"
CMD="${CMD} TGTRLS(${TGTRLS})"
system "${CMD}"

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms

View File

@@ -1,15 +1,9 @@
#
# Watcom / OpenWatcom / Win32 makefile for cURL.
# G. Vanem <giva@bgnett.no>
# G. Vanem <gvanem@broadpark.no>
#
# $Id$
#
# Set to 1 to use static lib.
# Set to 0 to use DLL and import lib.
#
STATIC = 0
CC = wcc386
CFLAGS = -3r -mf -d3 -hc -zff -zgf -zq -zm -s -fr=con -w2 -fpi -oilrtfm &
@@ -17,15 +11,7 @@ CFLAGS = -3r -mf -d3 -hc -zff -zgf -zq -zm -s -fr=con -w2 -fpi -oilrtfm &
-dSIZEOF_CURL_OFF_T=8 -dCURLDEBUG -dENABLE_IPV6 -dHAVE_WINSOCK2_H &
-I..\include -I..\lib
!ifeq STATIC 0
LIBCURL = ..\lib\libcurl_wc_imp.lib
!else
CFLAGS += -dCURL_STATICLIB
LIBCURL = ..\lib\libcurl_wc.lib
!endif
OBJ_DIR = Watcom_obj
OBJ_DIR = WC_Win32.obj
OBJS = $(OBJ_DIR)\getpass.obj $(OBJ_DIR)\homedir.obj $(OBJ_DIR)\hugehelp.obj &
$(OBJ_DIR)\main.obj $(OBJ_DIR)\urlglob.obj $(OBJ_DIR)\writeenv.obj &
@@ -46,7 +32,7 @@ curl.exe: $(OBJS) $(RESOURCE)
wlink name $@ system nt file { $(OBJS) } &
option quiet, map, caseexact, eliminate, res=$(RESOURCE) &
libpath $(%watcom)\lib386;$(%watcom)\lib386\nt &
library $(LIBCURL), clib3r.lib, ws2_32.lib
library ..\lib\libcurl_wc_imp.lib, clib3r.lib, ws2_32.lib
clean: .SYMBOLIC
- rm -f $(OBJS) $(RESOURCE)
@@ -61,7 +47,6 @@ $(RESOURCE): curl.rc
.ERASE
.c{$(OBJ_DIR)}.obj:
$(CC) $[@ $(CFLAGS) -fo=$@
@echo .
#
# Dependencies based on "gcc -MM .."

View File

@@ -41,7 +41,9 @@ curl_LDADD = ../lib/libcurl.la @CURL_LIBS@
curl_DEPENDENCIES = ../lib/libcurl.la
BUILT_SOURCES = hugehelp.c
CLEANFILES = hugehelp.c
NROFF=@NROFF@ @MANOPT@ # figured out by the configure script
# Use the C locale to ensure that only ASCII characters appear in the
# embedded text.
NROFF=env LC_ALL=C @NROFF@ @MANOPT@ # figured out by the configure script
EXTRA_DIST = mkhelp.pl makefile.dj Makefile.vc6 Makefile.b32 Makefile.m32 \
Makefile.riscos config.h.in macos/curl.mcp.xml.sit.hqx \

View File

@@ -21,11 +21,11 @@ ZLIB_PATH = ../../zlib-1.2.3
endif
# Edit the path below to point to the base of your OpenSSL package.
ifndef OPENSSL_PATH
OPENSSL_PATH = ../../openssl-0.9.8e
OPENSSL_PATH = ../../openssl-0.9.8g
endif
# Edit the path below to point to the base of your LibSSH2 package.
ifndef LIBSSH2_PATH
LIBSSH2_PATH = ../../libssh2-0.17
LIBSSH2_PATH = ../../libssh2-0.18
endif
# Edit the path below to point to the base of your Novell LDAP NDK.
ifndef LDAP_SDK

View File

@@ -35,7 +35,7 @@ endif
# Edit the vars below to change NLM target settings.
TARGET = curl
VERSION = $(LIBCURL_VERSION)
COPYR = Copyright (C) 1996 - 2007, Daniel Stenberg, <daniel@haxx.se>
COPYR = Copyright (C) $(LIBCURL_COPYRIGHT_STR)
DESCR = cURL $(LIBCURL_VERSION_STR) ($(LIBARCH)) - http://curl.haxx.se
MTSAFE = YES
STACK = 64000
@@ -73,7 +73,7 @@ else
CC = gcc
endif
# a native win32 awk can be downloaded from here:
# http://www.gknw.net/development/prgtools/awk-20050424.zip
# http://www.gknw.net/development/prgtools/awk-20070501.zip
AWK = awk
CP = cp -afv
# RM = rm -f
@@ -324,9 +324,9 @@ endif
ifdef IMPORTS
@echo $(DL)import $(IMPORTS)$(DL) >> $@
endif
ifeq ($(LD),nlmconv)
@echo $(DL)input $(OBJS)$(DL) >> $@
ifeq ($(findstring nlmconv,$(LD)),nlmconv)
@echo $(DL)input $(PRELUDE)$(DL) >> $@
@echo $(DL)input $(OBJS)$(DL) >> $@
ifdef LDLIBS
@echo $(DL)input $(LDLIBS)$(DL) >> $@
endif

View File

@@ -22,7 +22,7 @@ ZLIB_PATH = ../../zlib-1.2.3
!ENDIF
!IFNDEF OPENSSL_PATH
OPENSSL_PATH = ../../openssl-0.9.8e
OPENSSL_PATH = ../../openssl-0.9.8g
!ENDIF
!IFNDEF MACHINE

View File

@@ -187,6 +187,20 @@
#define _CRT_NONSTDC_NO_DEPRECATE 1
#endif
/* VS2008 does not support Windows build targets prior to WinXP, */
/* so, if no build target has been defined we will target WinXP. */
#if defined(_MSC_VER) && (_MSC_VER >= 1500)
# ifndef _WIN32_WINNT
# define _WIN32_WINNT 0x0501
# endif
# ifndef WINVER
# define WINVER 0x0501
# endif
# if (_WIN32_WINNT < 0x0501) || (WINVER < 0x0501)
# error VS2008 does not support Windows build targets prior to WinXP
# endif
#endif
/* ---------------------------------------------------------------- */
/* ADDITIONAL DEFINITIONS */
/* ---------------------------------------------------------------- */

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -52,7 +52,8 @@ BEGIN
VALUE "OriginalFilename", "curl.exe\0"
VALUE "ProductName", "The cURL executable\0"
VALUE "ProductVersion", CURL_VERSION "\0"
VALUE "LegalCopyright", "Copyright 1996-2007 by Daniel Stenberg. http://curl.haxx.se/docs/copyright.html\0"
VALUE "LegalCopyright", "<EFBFBD> " CURL_COPYRIGHT "\0"
VALUE "License", "http://curl.haxx.se/docs/copyright.html\0"
END
END

View File

@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2007, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -85,6 +85,8 @@
#ifdef HAVE_SYS_POLL_H
#include <sys/poll.h>
#elif defined(HAVE_POLL_H)
#include <poll.h>
#endif
#ifdef HAVE_LOCALE_H
@@ -104,6 +106,13 @@
#endif
#endif /* CURL_DOES_CONVERSIONS && HAVE_ICONV */
#ifdef HAVE_NETINET_IN_H
#include <netinet/in.h> /* for IPPROTO_TCP */
#endif
#ifdef HAVE_NETINET_TCP_H
#include <netinet/tcp.h> /* for TCP_KEEPIDLE, TCP_KEEPINTVL */
#endif
/* The last #include file should be: */
#ifdef CURLDEBUG
#ifndef CURLTOOLDEBUG
@@ -127,6 +136,12 @@
#define SET_BINMODE(file) ((void)0)
#endif
#ifndef O_BINARY
/* since O_BINARY is used in bitmasks, setting it to zero makes it usable in
source code and yet it doesn't ruin anything */
#define O_BINARY 0
#endif
#ifdef MSDOS
#include <dos.h>
@@ -143,6 +158,18 @@ char **__crt0_glob_function (char *arg)
#endif /* __DJGPP__ */
#endif /* MSDOS */
#ifndef STDIN_FILENO
#define STDIN_FILENO fileno(stdin)
#endif
#ifndef STDOUT_FILENO
#define STDOUT_FILENO fileno(stdout)
#endif
#ifndef STDERR_FILENO
#define STDERR_FILENO fileno(stderr)
#endif
#define CURL_PROGRESS_STATS 0 /* default progress display */
#define CURL_PROGRESS_BAR 1
@@ -201,6 +228,7 @@ typedef enum {
/* Support uploading and resuming of >2GB files
*/
#if defined(WIN32) && (SIZEOF_CURL_OFF_T > 4)
#define lseek(x,y,z) _lseeki64(x, y, z)
#define struct_stat struct _stati64
#define stat(file,st) _stati64(file,st)
#else
@@ -323,21 +351,19 @@ char convert_char(curl_infotype infotype, char this_char)
#define _lseeki64(hnd,ofs,whence) lseek(hnd,ofs,whence)
#endif
#ifndef HAVE_FTRUNCATE
#define HAVE_FTRUNCATE 1
#endif
static int ftruncate64 (int fd, curl_off_t where)
{
curl_off_t curr;
int rc = 0;
if(_lseeki64(fd, where, SEEK_SET) < 0)
return -1;
if ((curr = _lseeki64(fd, 0, SEEK_CUR)) < 0)
return -1;
if(!SetEndOfFile((HANDLE)_get_osfhandle(fd)))
return -1;
if (_lseeki64(fd, where, SEEK_SET) < 0)
return -1;
if (write(fd, 0, 0) < 0)
rc = -1;
_lseeki64(fd, curr, SEEK_SET);
return rc;
return 0;
}
#define ftruncate(fd,where) ftruncate64(fd,where)
#endif
@@ -374,7 +400,7 @@ struct Configurable {
bool disable_eprt;
curl_off_t resume_from;
char *postfields;
long postfieldsize;
curl_off_t postfieldsize;
char *referer;
long timeout;
long connecttimeout;
@@ -481,6 +507,9 @@ struct Configurable {
char *libcurl; /* output libcurl code to this file name */
bool raw;
bool post301;
bool nokeepalive; /* for keepalive needs */
long alivetime;
struct OutStruct *outs;
};
@@ -606,140 +635,152 @@ struct getout {
static void help(void)
{
int i;
/* A few of these source lines are >80 columns wide, but that's only because
breaking the strings narrower makes this chunk look even worse!
Starting with 7.18.0, this list of command line options is sorted based
on the long option name. It is not done automatically, although a command
line like the following can help out:
curl --help | cut -c5- | grep "^-" | sort
*/
static const char * const helptext[]={
"Usage: curl [options...] <url>",
"Options: (H) means HTTP/HTTPS only, (F) means FTP only",
" -a/--append Append to target file when uploading (F)",
" -A/--user-agent <string> User-Agent to send to server (H)",
" --anyauth Pick \"any\" authentication method (H)",
" -b/--cookie <name=string/file> Cookie string or file to read cookies from (H)",
" -a/--append Append to target file when uploading (F)",
" --basic Use HTTP Basic Authentication (H)",
" -B/--use-ascii Use ASCII/text transfer",
" -c/--cookie-jar <file> Write cookies to this file after operation (H)",
" --cacert <file> CA certificate to verify peer against (SSL)",
" --capath <directory> CA directory to verify peer against (SSL)",
" -E/--cert <cert[:passwd]> Client certificate file and password (SSL)",
" --cert-type <type> Certificate file type (DER/PEM/ENG) (SSL)",
" --ciphers <list> SSL ciphers to use (SSL)",
" --compressed Request compressed response (using deflate or gzip)",
" -K/--config Specify which config file to read",
" --connect-timeout <seconds> Maximum time allowed for connection",
" -C/--continue-at <offset> Resumed transfer offset",
" -b/--cookie <name=string/file> Cookie string or file to read cookies from (H)",
" -c/--cookie-jar <file> Write cookies to this file after operation (H)",
" --create-dirs Create necessary local directory hierarchy",
" --crlf Convert LF to CRLF in upload",
" -d/--data <data> HTTP POST data (H)",
" --data-ascii <data> HTTP POST ASCII data (H)",
" --data-binary <data> HTTP POST binary data (H)",
" --data-urlencode <name=data/name@filename> HTTP POST data url encoded (H)",
" --negotiate Use HTTP Negotiate Authentication (H)",
" --digest Use HTTP Digest Authentication (H)",
" --disable-eprt Inhibit using EPRT or LPRT (F)",
" --disable-epsv Inhibit using EPSV (F)",
" -D/--dump-header <file> Write the headers to this file",
" --egd-file <file> EGD socket path for random data (SSL)",
" --tcp-nodelay Use the TCP_NODELAY option",
" --engine <eng> Crypto engine to use (SSL). \"--engine list\" for list",
#ifdef USE_ENVIRONMENT
" --environment Write results to environment variables (RISC OS)",
#endif
" -e/--referer Referer URL (H)",
" -E/--cert <cert[:passwd]> Client certificate file and password (SSL)",
" --cert-type <type> Certificate file type (DER/PEM/ENG) (SSL)",
" --key <key> Private key file name (SSL/SSH)",
" --key-type <type> Private key file type (DER/PEM/ENG) (SSL)",
" --pass <pass> Pass phrase for the private key (SSL/SSH)",
" --pubkey <key> Public key file name (SSH)",
" --engine <eng> Crypto engine to use (SSL). \"--engine list\" for list",
" --cacert <file> CA certificate to verify peer against (SSL)",
" --capath <directory> CA directory (made using c_rehash) to verify",
" peer against (SSL)",
" --hostpubmd5 <md5> Hex encoded MD5 string of the host public key. (SSH)",
" --ciphers <list> SSL ciphers to use (SSL)",
" --compressed Request compressed response (using deflate or gzip)",
" --connect-timeout <seconds> Maximum time allowed for connection",
" --create-dirs Create necessary local directory hierarchy",
" --crlf Convert LF to CRLF in upload",
" -f/--fail Fail silently (no output at all) on HTTP errors (H)",
" -F/--form <name=content> Specify HTTP multipart POST data (H)",
" --form-string <name=string> Specify HTTP multipart POST data (H)",
" --ftp-account <data> Account data to send when requested by server (F)",
" --ftp-alternative-to-user String to replace \"USER [name]\" (F)",
" --ftp-create-dirs Create the remote dirs if not present (F)",
" --ftp-method [multicwd/nocwd/singlecwd] Control CWD usage (F)",
" --ftp-pasv Use PASV/EPSV instead of PORT (F)",
" -P/--ftp-port <address> Use PORT with address instead of PASV (F)",
" --ftp-skip-pasv-ip Skip the IP address for PASV (F)\n"
" --ftp-ssl Try SSL/TLS for ftp transfer (F)",
" --ftp-ssl-control Require SSL/TLS for ftp login, clear for transfer (F)",
" --ftp-ssl-reqd Require SSL/TLS for ftp transfer (F)",
" --ftp-ssl-ccc Send CCC after authenticating (F)",
" --ftp-ssl-ccc-mode [active/passive] Set CCC mode (F)",
" -F/--form <name=content> Specify HTTP multipart POST data (H)",
" --form-string <name=string> Specify HTTP multipart POST data (H)",
" -g/--globoff Disable URL sequences and ranges using {} and []",
" --ftp-ssl-control Require SSL/TLS for ftp login, clear for transfer (F)",
" --ftp-ssl-reqd Require SSL/TLS for ftp transfer (F)",
" -G/--get Send the -d data with a HTTP GET (H)",
" -h/--help This help text",
" -g/--globoff Disable URL sequences and ranges using {} and []",
" -H/--header <line> Custom header to pass to server (H)",
" -I/--head Show document info only",
" -h/--help This help text",
" --hostpubmd5 <md5> Hex encoded MD5 string of the host public key. (SSH)",
" -0/--http1.0 Use HTTP 1.0 (H)",
" --ignore-content-length Ignore the HTTP Content-Length header",
" -i/--include Include protocol headers in the output (H/F)",
" -I/--head Show document info only",
" -j/--junk-session-cookies Ignore session cookies read from file (H)",
" --interface <interface> Specify network interface/address to use",
" --krb <level> Enable kerberos with specified security level (F)",
" -k/--insecure Allow connections to SSL sites without certs (H)",
" -K/--config Specify which config file to read",
" --interface <interface> Specify network interface/address to use",
" -4/--ipv4 Resolve name to IPv4 address",
" -6/--ipv6 Resolve name to IPv6 address",
" -j/--junk-session-cookies Ignore session cookies read from file (H)",
" --keepalive-time <seconds> Interval between keepalive probes",
" --key <key> Private key file name (SSL/SSH)",
" --key-type <type> Private key file type (DER/PEM/ENG) (SSL)",
" --krb <level> Enable kerberos with specified security level (F)",
" --libcurl <file> Dump libcurl equivalent code of this command line",
" -l/--list-only List only names of an FTP directory (F)",
" --limit-rate <rate> Limit transfer speed to this rate",
" --local-port <num>[-num] Force use of these local port numbers\n",
" -l/--list-only List only names of an FTP directory (F)",
" --local-port <num>[-num] Force use of these local port numbers",
" -L/--location Follow Location: hints (H)",
" --location-trusted Follow Location: and send authentication even ",
" to other hostnames (H)",
" -m/--max-time <seconds> Maximum time allowed for the transfer",
" --max-redirs <num> Maximum number of redirects allowed (H)",
" --max-filesize <bytes> Maximum file size to download (H/F)",
" --location-trusted Follow Location: and send auth to other hosts (H)",
" -M/--manual Display the full manual",
" --max-filesize <bytes> Maximum file size to download (H/F)",
" --max-redirs <num> Maximum number of redirects allowed (H)",
" -m/--max-time <seconds> Maximum time allowed for the transfer",
" --negotiate Use HTTP Negotiate Authentication (H)",
" -n/--netrc Must read .netrc for user name and password",
" --netrc-optional Use either .netrc or URL; overrides -n",
" --ntlm Use HTTP NTLM authentication (H)",
" -N/--no-buffer Disable buffering of the output stream",
" --no-keepalive Disable keepalive use on the connection",
" --no-sessionid Disable SSL session-ID reusing (SSL)",
" --ntlm Use HTTP NTLM authentication (H)",
" -o/--output <file> Write output to <file> instead of stdout",
" -O/--remote-name Write output to a file named as the remote file",
" --pass <pass> Pass phrase for the private key (SSL/SSH)",
" --post301 Do not switch to GET after following a 301 redirect (H)",
" -p/--proxytunnel Operate through a HTTP proxy tunnel (using CONNECT)",
" -#/--progress-bar Display transfer progress as a progress bar",
" -x/--proxy <host[:port]> Use HTTP proxy on given port",
" --proxy-anyauth Pick \"any\" proxy authentication method (H)",
" --proxy-basic Use Basic authentication on the proxy (H)",
" --proxy-digest Use Digest authentication on the proxy (H)",
" --proxy-negotiate Use Negotiate authentication on the proxy (H)",
" --proxy-ntlm Use NTLM authentication on the proxy (H)",
" -P/--ftp-port <address> Use PORT with address instead of PASV (F)",
" -q If used as the first parameter disables .curlrc",
" -U/--proxy-user <user[:password]> Set proxy user and password",
" -p/--proxytunnel Operate through a HTTP proxy tunnel (using CONNECT)",
" --pubkey <key> Public key file name (SSH)",
" -Q/--quote <cmd> Send command(s) to server before file transfer (F/SFTP)",
" -r/--range <range> Retrieve a byte range from a HTTP/1.1 or FTP server",
" --random-file <file> File for reading random data from (SSL)",
" -r/--range <range> Retrieve a byte range from a HTTP/1.1 or FTP server",
" --raw Pass HTTP \"raw\", without any transfer decoding (H)",
" -e/--referer Referer URL (H)",
" -O/--remote-name Write output to a file named as the remote file",
" -R/--remote-time Set the remote file's time on the local output",
" -X/--request <command> Specify request command to use",
" --retry <num> Retry request <num> times if transient problems occur",
" --retry-delay <seconds> When retrying, wait this many seconds between each",
" --retry-max-time <seconds> Retry only within this period",
" -s/--silent Silent mode. Don't output anything",
" -S/--show-error Show error. With -s, make curl show errors when they occur",
" --socks4 <host[:port]> Use SOCKS4 proxy on given host + port",
" --socks5 <host[:port]> Use SOCKS5 proxy on given host + port",
" -s/--silent Silent mode. Don't output anything",
" --socks4 <host[:port]> SOCKS4 proxy on given host + port",
" --socks4a <host[:port]> SOCKS4a proxy on given host + port",
" --socks5 <host[:port]> SOCKS5 proxy on given host + port",
" --socks5-hostname <host[:port]> SOCKS5 proxy, pass host name to proxy",
" -Y/--speed-limit Stop transfer if below speed-limit for 'speed-time' secs",
" -y/--speed-time Time needed to trig speed-limit abort. Defaults to 30",
" -2/--sslv2 Use SSLv2 (SSL)",
" -3/--sslv3 Use SSLv3 (SSL)",
" --stderr <file> Where to redirect stderr. - means stdout",
" --tcp-nodelay Use the TCP_NODELAY option",
" -t/--telnet-option <OPT=val> Set telnet option",
" -z/--time-cond <time> Transfer based on a time condition",
" -1/--tlsv1 Use TLSv1 (SSL)",
" --trace <file> Write a debug trace to the given file",
" --trace-ascii <file> Like --trace but without the hex output",
" --trace-time Add time stamps to trace/verbose output",
" -T/--upload-file <file> Transfer <file> to remote site",
" --url <URL> Set URL to work with",
" -B/--use-ascii Use ASCII/text transfer",
" -u/--user <user[:password]> Set server user and password",
" -U/--proxy-user <user[:password]> Set proxy user and password",
" -A/--user-agent <string> User-Agent to send to server (H)",
" -v/--verbose Make the operation more talkative",
" -V/--version Show version number and quit",
#ifdef MSDOS
" --wdebug Turn on Watt-32 debugging under DJGPP",
#endif
" -w/--write-out [format] What to output after completion",
" -x/--proxy <host[:port]> Use HTTP proxy on given port",
" -X/--request <command> Specify request command to use",
" -y/--speed-time Time needed to trig speed-limit abort. Defaults to 30",
" -Y/--speed-limit Stop transfer if below speed-limit for 'speed-time' secs",
" -z/--time-cond <time> Transfer based on a time condition",
" -0/--http1.0 Use HTTP 1.0 (H)",
" -1/--tlsv1 Use TLSv1 (SSL)",
" -2/--sslv2 Use SSLv2 (SSL)",
" -3/--sslv3 Use SSLv3 (SSL)",
" -4/--ipv4 Resolve name to IPv4 address",
" -6/--ipv6 Resolve name to IPv6 address",
" -#/--progress-bar Display transfer progress as a progress bar",
" -q If used as the first parameter disables .curlrc",
NULL
};
for(i=0; helptext[i]; i++) {
@@ -776,71 +817,6 @@ static void GetStr(char **string,
*string = NULL;
}
static char *file2string(FILE *file)
{
char buffer[256];
char *ptr;
char *string=NULL;
size_t len=0;
size_t stringlen;
if(file) {
while(fgets(buffer, sizeof(buffer), file)) {
ptr= strchr(buffer, '\r');
if(ptr)
*ptr=0;
ptr= strchr(buffer, '\n');
if(ptr)
*ptr=0;
stringlen=strlen(buffer);
if(string)
string = realloc(string, len+stringlen+1);
else
string = malloc(stringlen+1);
strcpy(string+len, buffer);
len+=stringlen;
}
return string;
}
else
return NULL; /* no string */
}
static char *file2memory(FILE *file, long *size)
{
char buffer[1024];
char *string=NULL;
char *newstring=NULL;
size_t len=0;
long stringlen=0;
if(file) {
while((len = fread(buffer, 1, sizeof(buffer), file))) {
if(string) {
newstring = realloc(string, len+stringlen+1);
if(newstring)
string = newstring;
else
break; /* no more strings attached! :-) */
}
else
string = malloc(len+1);
memcpy(&string[stringlen], buffer, len);
stringlen+=len;
}
if (string) {
/* NUL terminate the buffer in case it's treated as a string later */
string[stringlen] = 0;
}
*size = stringlen;
return string;
}
else
return NULL; /* no string */
}
static void clean_getout(struct Configurable *config)
{
struct getout *node=config->url_list;
@@ -1283,6 +1259,82 @@ static const char *param2text(int res)
}
}
static ParameterError file2string(char **bufp, FILE *file)
{
char buffer[256];
char *ptr;
char *string = NULL;
size_t stringlen = 0;
size_t buflen;
if(file) {
while(fgets(buffer, sizeof(buffer), file)) {
if((ptr = strchr(buffer, '\r')) != NULL)
*ptr = '\0';
if((ptr = strchr(buffer, '\n')) != NULL)
*ptr = '\0';
buflen = strlen(buffer);
if((ptr = realloc(string, stringlen+buflen+1)) == NULL) {
if(string)
free(string);
return PARAM_NO_MEM;
}
string = ptr;
strcpy(string+stringlen, buffer);
stringlen += buflen;
}
}
*bufp = string;
return PARAM_OK;
}
static ParameterError file2memory(char **bufp, size_t *size, FILE *file)
{
char *newbuf;
char *buffer = NULL;
size_t alloc = 512;
size_t nused = 0;
size_t nread;
if(file) {
do {
if(!buffer || (alloc == nused)) {
/* size_t overflow detection for huge files */
if(alloc+1 > ((size_t)-1)/2) {
if(buffer)
free(buffer);
return PARAM_NO_MEM;
}
alloc *= 2;
/* allocate an extra char, reserved space, for null termination */
if((newbuf = realloc(buffer, alloc+1)) == NULL) {
if(buffer)
free(buffer);
return PARAM_NO_MEM;
}
buffer = newbuf;
}
nread = fread(buffer+nused, 1, alloc-nused, file);
nused += nread;
} while(nread);
/* null terminate the buffer in case it's used as a string later */
buffer[nused] = '\0';
/* free trailing slack space, if possible */
if(alloc != nused) {
if((newbuf = realloc(buffer, nused+1)) != NULL)
buffer = newbuf;
}
/* discard buffer if nothing was read */
if(!nused) {
free(buffer);
buffer = NULL; /* no string */
}
}
*size = nused;
*bufp = buffer;
return PARAM_OK;
}
static void cleanarg(char *str)
{
#ifdef HAVE_WRITABLE_ARGV
@@ -1432,6 +1484,57 @@ static int ftpcccmethod(struct Configurable *config, char *str)
return CURLFTPSSL_CCC_PASSIVE;
}
static int sockoptcallback(void *clientp, curl_socket_t curlfd,
curlsocktype purpose)
{
struct Configurable *config = (struct Configurable *)clientp;
int onoff = 1; /* this callback is only used if we ask for keepalives on the
connection */
#if defined(TCP_KEEPIDLE) || defined(TCP_KEEPINTVL)
int keepidle = (int)config->alivetime;
#endif
switch (purpose) {
case CURLSOCKTYPE_IPCXN:
if(setsockopt(curlfd, SOL_SOCKET, SO_KEEPALIVE, (void *)&onoff,
sizeof(onoff)) < 0) {
/* don't abort operation, just issue a warning */
SET_SOCKERRNO(0);
warnf(clientp, "Could not set SO_KEEPALIVE!\n");
return 0;
}
else {
if (config->alivetime) {
#ifdef TCP_KEEPIDLE
if(setsockopt(curlfd, IPPROTO_TCP, TCP_KEEPIDLE, (void *)&keepidle,
sizeof(keepidle)) < 0) {
/* don't abort operation, just issue a warning */
SET_SOCKERRNO(0);
warnf(clientp, "Could not set TCP_KEEPIDLE!\n");
return 0;
}
#endif
#ifdef TCP_KEEPINTVL
if(setsockopt(curlfd, IPPROTO_TCP, TCP_KEEPINTVL, (void *)&keepidle,
sizeof(keepidle)) < 0) {
/* don't abort operation, just issue a warning */
SET_SOCKERRNO(0);
warnf(clientp, "Could not set TCP_KEEPINTVL!\n");
return 0;
}
#endif
}
}
break;
default:
break;
}
return 0;
}
static ParameterError getparameter(char *flag, /* f or -long-flag */
char *nextarg, /* NULL if unset */
bool *usedarg, /* set to TRUE if the arg
@@ -1490,10 +1593,10 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"*z", "disable-eprt", FALSE},
{"$a", "ftp-ssl", FALSE},
{"$b", "ftp-pasv", FALSE},
{"$c", "socks5", TRUE},
{"$c", "socks", TRUE}, /* this is how the option was documented but
we prefer the --socks5 version for explicit
version */
{"$c", "socks5", TRUE},
{"$c", "socks", TRUE}, /* this is how the option once was documented
but we prefer the --socks5 version for
explicit version */
{"$d", "tcp-nodelay",FALSE},
{"$e", "proxy-digest", FALSE},
{"$f", "proxy-basic", FALSE},
@@ -1509,6 +1612,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"$r", "ftp-method", TRUE},
{"$s", "local-port", TRUE},
{"$t", "socks4", TRUE},
{"$T", "socks4a", TRUE},
{"$u", "ftp-alternative-to-user", TRUE},
{"$v", "ftp-ssl-reqd", FALSE},
{"$w", "no-sessionid", FALSE},
@@ -1518,6 +1622,9 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"$z", "libcurl", TRUE},
{"$#", "raw", FALSE},
{"$0", "post301", FALSE},
{"$1", "no-keepalive", FALSE},
{"$2", "socks5-hostname", TRUE},
{"$3", "keepalive-time", TRUE},
{"0", "http1.0", FALSE},
{"1", "tlsv1", FALSE},
@@ -1824,7 +1931,8 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
break;
case 'x': /* --krb */
/* kerberos level string */
if(curlinfo->features & (CURL_VERSION_KERBEROS4 | CURL_VERSION_GSSNEGOTIATE))
if(curlinfo->features & (CURL_VERSION_KERBEROS4 |
CURL_VERSION_GSSNEGOTIATE))
GetStr(&config->krblevel, nextarg);
else
return PARAM_LIBCURL_DOESNT_SUPPORT;
@@ -1874,7 +1982,8 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
free(config->ftpport);
config->ftpport = NULL;
break;
case 'c': /* --socks5 specifies a socks5 proxy to use */
case 'c': /* --socks5 specifies a socks5 proxy to use, and resolves
the name locally and passes on the resolved address */
GetStr(&config->socksproxy, nextarg);
config->socksver = CURLPROXY_SOCKS5;
break;
@@ -1882,6 +1991,15 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
GetStr(&config->socksproxy, nextarg);
config->socksver = CURLPROXY_SOCKS4;
break;
case 'T': /* --socks4a specifies a socks4a proxy to use */
GetStr(&config->socksproxy, nextarg);
config->socksver = CURLPROXY_SOCKS4A;
break;
case '2': /* --socks5-hostname specifies a socks5 proxy and enables name
resolving with the proxy */
GetStr(&config->socksproxy, nextarg);
config->socksver = CURLPROXY_SOCKS5_HOSTNAME;
break;
case 'd': /* --tcp-nodelay option */
config->tcp_nodelay ^= TRUE;
break;
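The new --socks4a and --socks5-hostname cases above map straight onto the CURLPROXY_SOCKS4A and CURLPROXY_SOCKS5_HOSTNAME values added to the headers. A minimal illustrative sketch of selecting proxy-side name resolution directly through the API (the proxy host is a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *easy = curl_easy_init();

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(easy, CURLOPT_PROXY, "socks.example.net:1080");
  /* --socks5-hostname: hand the host name to the proxy for resolving */
  curl_easy_setopt(easy, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5_HOSTNAME);
  /* --socks4a would use CURLPROXY_SOCKS4A here instead */

  curl_easy_perform(easy);
  curl_easy_cleanup(easy);
  return 0;
}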
@@ -1974,6 +2092,13 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
case '0': /* --post301 */
config->post301 ^= TRUE;
break;
case '1': /* --no-keepalive */
config->nokeepalive ^= TRUE;
break;
case '3': /* --keepalive-time */
if(str2num(&config->alivetime, nextarg))
return PARAM_BAD_NUMERIC;
break;
}
break;
case '#': /* --progress-bar */
@@ -2058,20 +2183,24 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
* the content.
*/
char *p = strchr(nextarg, '=');
long size = 0;
size_t size = 0;
size_t nlen;
char is_file;
if(!p)
/* there was no '=' letter, check for a '@' instead */
p = strchr(nextarg, '@');
if(!p) {
warnf(config, "bad use of --data-urlencode\n");
return PARAM_BAD_USE;
if (p) {
nlen = p - nextarg; /* length of the name part */
is_file = *p++; /* pass the separator */
}
nlen = p - nextarg; /* length of the name part */
if('@' == *p) {
else {
/* neither @ nor =, so no name and it isn't a file */
nlen = is_file = 0;
p = nextarg;
}
if('@' == is_file) {
/* a '@' letter, it means that a file name or - (stdin) follows */
p++; /* pass the separator */
if(curlx_strequal("-", p)) {
file = stdin;
SET_BINMODE(stdin);
@@ -2084,13 +2213,15 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
"an empty POST.\n", nextarg);
}
postdata = file2memory(file, &size);
err = file2memory(&postdata, &size, file);
if(file && (file != stdin))
fclose(file);
if(err)
return err;
}
else {
GetStr(&postdata, ++p);
GetStr(&postdata, p);
size = strlen(postdata);
}
@@ -2108,8 +2239,10 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
char *n = malloc(outlen);
if(!n)
return PARAM_NO_MEM;
snprintf(n, outlen, "%.*s=%s", nlen, nextarg, enc);
if (nlen > 0) /* only append '=' if we have a name */
snprintf(n, outlen, "%.*s=%s", nlen, nextarg, enc);
else
strcpy(n, enc);
curl_free(enc);
free(postdata);
if(n) {
@@ -2123,6 +2256,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
}
}
else if('@' == *nextarg) {
size_t size = 0;
/* the data begins with a '@' letter, it means that a file name
or - (stdin) follows */
nextarg++; /* pass the @ */
@@ -2139,13 +2273,18 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
"an empty POST.\n", nextarg);
}
if(subletter == 'b') /* forced binary */
postdata = file2memory(file, &config->postfieldsize);
if(subletter == 'b') {
/* forced binary */
err = file2memory(&postdata, &size, file);
config->postfieldsize = (curl_off_t)size;
}
else
postdata = file2string(file);
err = file2string(&postdata, file);
if(file && (file != stdin))
fclose(file);
if(err)
return err;
if(!postdata) {
/* no data from the file, point to a zero byte string to make this
@@ -2608,11 +2747,13 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
file = stdin;
else
file = fopen(nextarg, "r");
config->writeout = file2string(file);
if(!config->writeout)
warnf(config, "Failed to read %s", file);
err = file2string(&config->writeout, file);
if(file && (file != stdin))
fclose(file);
if(err)
return err;
if(!config->writeout)
warnf(config, "Failed to read %s", file);
}
else
GetStr(&config->writeout, nextarg);
@@ -2967,38 +3108,73 @@ static size_t my_fwrite(void *buffer, size_t sz, size_t nmemb, void *stream)
}
struct InStruct {
FILE *stream;
int fd;
struct Configurable *config;
};
static curlioerr my_ioctl(CURL *handle, curliocmd cmd, void *userp)
#define MAX_SEEK 2147483647
#ifndef SIZEOF_OFF_T
/* (Jan 11th 2008) this is a reasonably new define in the config.h so there
might be older handicrafted configs that don't define it properly and then
we assume 32bit off_t */
#define SIZEOF_OFF_T 4
#endif
/*
* my_seek() is the CURLOPT_SEEKFUNCTION we use
*/
static int my_seek(void *stream, curl_off_t offset, int whence)
{
struct InStruct *in=(struct InStruct *)userp;
(void)handle; /* not used in here */
struct InStruct *in=(struct InStruct *)stream;
switch(cmd) {
case CURLIOCMD_RESTARTREAD:
/* mr libcurl kindly asks us to rewind the read data stream to start */
if(-1 == fseek(in->stream, 0, SEEK_SET))
/* couldn't rewind, the reason is in errno but errno is just not
portable enough and we don't actually care that much why we failed. */
return CURLIOE_FAILRESTART;
#if (SIZEOF_CURL_OFF_T > SIZEOF_OFF_T) && !defined(lseek)
/* The sizeof check following here is only interesting if curl_off_t is
larger than off_t, but also not on windows-like systems for which lseek
is a defined macro that works around the 32bit off_t-problem and thus do
64bit seeks correctly anyway */
break;
if(offset > MAX_SEEK) {
/* Some precaution code to work around problems with different data sizes
to allow seeking >32bit even if off_t is 32bit. Should be very rare and
is really valid on weirdo-systems. */
curl_off_t left = offset;
default: /* ignore unknown commands */
return CURLIOE_UNKNOWNCMD;
if(whence != SEEK_SET)
/* this code path doesn't support other types */
return 1;
if(-1 == lseek(in->fd, 0, SEEK_SET))
/* couldn't rewind to beginning */
return 1;
while(left) {
long step = (left>MAX_SEEK ? MAX_SEEK : (long)left);
if(-1 == lseek(in->fd, step, SEEK_CUR))
/* couldn't seek forwards the desired amount */
return 1;
left -= step;
}
return 0;
}
return CURLIOE_OK;
#endif
if(-1 == lseek(in->fd, offset, whence))
/* couldn't rewind, the reason is in errno but errno is just not
portable enough and we don't actually care that much why we failed. */
return 1;
return 0;
}
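my_seek() above is the tool's CURLOPT_SEEKFUNCTION; the 7.18.0 contract is simply to reposition the upload source to the given offset and return 0 on success, non-zero on failure. A stripped-down illustrative version of the same callback for a plain FILE* upload (file name and URL are placeholders):

#include <stdio.h>
#include <curl/curl.h>

static int seek_cb(void *userp, curl_off_t offset, int origin)
{
  /* fseek() takes a long; good enough for this small illustration */
  return fseek((FILE *)userp, (long)offset, origin) ? 1 : 0;
}

static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
{
  return fread(ptr, size, nmemb, (FILE *)userp);
}

int main(void)
{
  FILE *upload = fopen("upload.bin", "rb");
  CURL *easy = curl_easy_init();

  if(!upload || !easy)
    return 1;

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/receiver");
  curl_easy_setopt(easy, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(easy, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(easy, CURLOPT_READDATA, upload);
  /* lets libcurl rewind/seek the source, e.g. when an HTTP auth retry or a
     resumed transfer needs to re-send data */
  curl_easy_setopt(easy, CURLOPT_SEEKFUNCTION, seek_cb);
  curl_easy_setopt(easy, CURLOPT_SEEKDATA, upload);

  curl_easy_perform(easy);
  curl_easy_cleanup(easy);
  fclose(upload);
  return 0;
}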
static size_t my_fread(void *buffer, size_t sz, size_t nmemb, void *userp)
{
size_t rc;
ssize_t rc;
struct InStruct *in=(struct InStruct *)userp;
rc = fread(buffer, sz, nmemb, in->stream);
return rc;
rc = read(in->fd, buffer, sz*nmemb);
if(rc < 0)
/* since size_t is unsigned we can't return a negative value, so return 0 */
return 0;
return (size_t)rc;
}
struct ProgressData {
@@ -3542,6 +3718,8 @@ static const char * const srchead[]={
" * libcurl.",
" * If you use any *_LARGE options, make sure your compiler figure",
" * out the correct size for the curl_off_t variable.",
" * Read the details for all curl_easy_setopt() options online on:",
" * http://curlm.haxx.se/libcurl/c/curl_easy_setopt.html",
" ************************************************************************/",
"[m]",
"#include <curl/curl.h>",
@@ -3589,8 +3767,9 @@ static void dumpeasycode(struct Configurable *config)
ptr = ptr->next;
}
fprintf(out,
" return (int)ret;\n"
"}\n"
"/* */\n");
"/**** End of sample code ****/\n");
if(fopened)
fclose(out);
}
@@ -3598,11 +3777,12 @@ static void dumpeasycode(struct Configurable *config)
curl_slist_free_all(easycode);
}
static int
operate(struct Configurable *config, int argc, argv_item_t argv[])
{
char errorbuffer[CURL_ERROR_SIZE];
char useragent[128]; /* buah, we don't want a larger default user agent */
char useragent[256]; /* buah, we don't want a larger default user agent */
struct ProgressData progressbar;
struct getout *urlnode;
struct getout *nextnode;
@@ -3617,8 +3797,8 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
int infilenum;
char *uploadfile=NULL; /* a single file, never a glob */
FILE *infd = stdin;
bool infdfopen;
int infd = STDIN_FILENO;
bool infdopen;
FILE *headerfilep = NULL;
curl_off_t uploadfilesize; /* -1 means unknown */
bool stillflags=TRUE;
@@ -3634,6 +3814,9 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
long retry_sleep;
char *env;
memset(&heads, 0, sizeof(struct OutStruct));
#ifdef CURLDEBUG
/* this sends all memory debug messages to a logfile named memdump */
env = curlx_getenv("CURL_MEMDEBUG");
@@ -4037,7 +4220,7 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
outs.stream = NULL; /* open when needed */
}
}
infdfopen=FALSE;
infdopen=FALSE;
if(uploadfile && !curlx_strequal(uploadfile, "-")) {
/*
* We have specified a file to upload and it isn't "-".
@@ -4105,11 +4288,11 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
* to be considered with one appended if implied CC
*/
infd=(FILE *) fopen(uploadfile, "rb");
if (!infd || stat(uploadfile, &fileinfo)) {
infd= open(uploadfile, O_RDONLY | O_BINARY);
if ((infd == -1) || stat(uploadfile, &fileinfo)) {
helpf("Can't open '%s'!\n", uploadfile);
if(infd)
fclose(infd);
if(infd != -1)
close(infd);
/* Free the list of remaining URLs and globbed upload files
* to force curl to exit immediately
@@ -4126,13 +4309,13 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
res = CURLE_READ_ERROR;
goto quit_urls;
}
infdfopen=TRUE;
infdopen=TRUE;
uploadfilesize=fileinfo.st_size;
}
else if(uploadfile && curlx_strequal(uploadfile, "-")) {
SET_BINMODE(stdin);
infd = stdin;
infd = STDIN_FILENO;
}
if(uploadfile && config->resume_from_current)
@@ -4194,8 +4377,8 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
config->errors = stderr;
if(!outfile && !(config->conf & CONF_GETTEXT)) {
/* We get the output to stdout and we have not got the ASCII/text flag,
then set stdout to be binary */
/* We get the output to stdout and we have not got the ASCII/text
flag, then set stdout to be binary */
SET_BINMODE(stdout);
}
@@ -4208,15 +4391,16 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
my_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* for uploads */
input.stream = infd;
input.fd = infd;
input.config = config;
my_setopt(curl, CURLOPT_READDATA, &input);
/* what call to read */
my_setopt(curl, CURLOPT_READFUNCTION, my_fread);
/* libcurl 7.12.3 business: */
my_setopt(curl, CURLOPT_IOCTLDATA, &input);
my_setopt(curl, CURLOPT_IOCTLFUNCTION, my_ioctl);
/* in 7.18.0, the CURLOPT_SEEKFUNCTION/DATA pair is taking over what
CURLOPT_IOCTLFUNCTION/DATA pair previously provided for seeking */
my_setopt(curl, CURLOPT_SEEKDATA, &input);
my_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);
if(config->recvpersecond)
/* tell libcurl to use a smaller sized buffer as it allows us to
@@ -4258,7 +4442,7 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
switch(config->httpreq) {
case HTTPREQ_SIMPLEPOST:
my_setopt(curl, CURLOPT_POSTFIELDS, config->postfields);
my_setopt(curl, CURLOPT_POSTFIELDSIZE, config->postfieldsize);
my_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, config->postfieldsize);
break;
case HTTPREQ_POST:
my_setopt(curl, CURLOPT_HTTPPOST, config->httppost);
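Since postfieldsize is now a curl_off_t, the tool switches to CURLOPT_POSTFIELDSIZE_LARGE: the plain option only takes a long, while the _LARGE variant carries the full 64-bit size. A minimal illustrative use (URL and data are made up):

#include <string.h>
#include <curl/curl.h>

int main(void)
{
  static const char data[] = "name=daniel&project=curl";
  CURL *easy = curl_easy_init();

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/post");
  curl_easy_setopt(easy, CURLOPT_POSTFIELDS, data);
  /* pass the size as curl_off_t so >2GB bodies work even where long is 32 bits */
  curl_easy_setopt(easy, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)strlen(data));

  curl_easy_perform(easy);
  curl_easy_cleanup(easy);
  return 0;
}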
@@ -4288,12 +4472,13 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
my_setopt(curl, CURLOPT_SSLKEYTYPE, config->key_type);
my_setopt(curl, CURLOPT_KEYPASSWD, config->key_passwd);
/* SSH private key uses the same command-line option as SSL private key */
/* SSH private key uses the same command-line option as SSL private
key */
my_setopt(curl, CURLOPT_SSH_PRIVATE_KEYFILE, config->key);
my_setopt(curl, CURLOPT_SSH_PUBLIC_KEYFILE, config->pubkey);
/* SSH host key md5 checking allows us to fail if we are
* not talking to who we think we should
* not talking to who we think we should
*/
my_setopt(curl, CURLOPT_SSH_HOST_PUBLIC_KEY_MD5, config->hostpubmd5);
@@ -4486,6 +4671,10 @@ operate(struct Configurable *config, int argc, argv_item_t argv[])
/* curl 7.17.1 */
my_setopt(curl, CURLOPT_POST301, config->post301);
if (!config->nokeepalive) {
my_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockoptcallback);
my_setopt(curl, CURLOPT_SOCKOPTDATA, config);
}
retry_numretries = config->req_retry;
@@ -4693,8 +4882,8 @@ quit_urls:
if(outfile)
free(outfile);
if(infdfopen)
fclose(infd);
if(infdopen)
close(infd);
} /* loop to the next URL */

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2004, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 1998 - 2008, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
@@ -26,6 +26,7 @@
#include <curl/curlver.h>
#define CURL_NAME "curl"
#define CURL_COPYRIGHT LIBCURL_COPYRIGHT
#define CURL_VERSION LIBCURL_VERSION
#define CURL_VERSION_MAJOR LIBCURL_VERSION_MAJOR
#define CURL_VERSION_MINOR LIBCURL_VERSION_MINOR

Some files were not shown because too many files have changed in this diff.