Compare commits

...

121 Commits

Author SHA1 Message Date
Daniel Stenberg
95a4b8db68 7.10.5 commit 2003-05-19 11:45:10 +00:00
Daniel Stenberg
663c1898a3 known AIX ipv6 problems 2003-05-16 10:57:53 +00:00
Daniel Stenberg
465de793e8 Skip any preceding dots from the domain name of cookies when we keep them
in memory, only add it when we save the cookie. This makes all tailmatching
and domain string matching internally a lot easier.

This was also the reason for a remaining bug I introduced in my overhaul.
2003-05-15 22:28:19 +00:00
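The tailmatch simplification this commit describes can be sketched as follows. This is a hypothetical helper, not libcurl's actual code: once the leading dot is stripped from the stored domain, matching reduces to a suffix comparison plus a label-boundary check.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: with no leading dot kept in memory, a cookie's
   domain matches a host when the domain is a suffix of the host name
   and the match falls on a '.' label boundary (or is the whole host). */
static int tail_match(const char *domain, const char *host)
{
    size_t dlen = strlen(domain);
    size_t hlen = strlen(host);
    if (dlen > hlen)
        return 0;
    if (strcmp(host + (hlen - dlen), domain) != 0)
        return 0;
    /* exact match, or the character before the suffix is a dot */
    return hlen == dlen || host[hlen - dlen - 1] == '.';
}
```

The boundary check keeps "ample.com" from matching "example.com" even though it is a textual suffix.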
Daniel Stenberg
de9b76cef0 change the order of the in_addr_t tests, so that 'unsigned long' is tested
for first, as it seems to be what many systems use
2003-05-15 21:13:36 +00:00
Daniel Stenberg
1747a8d3d9 1. George Comninos' progress meter fix
2. I also added the pre-releases and dates to the log
2003-05-15 08:13:19 +00:00
Daniel Stenberg
1094e79749 documented CURLOPT_FTP_USE_EPRT 2003-05-14 09:03:51 +00:00
Daniel Stenberg
22569681bc George Comninos provided a fix that calls the progress meter when waiting
for FTP command responses takes >1 second.
2003-05-14 06:31:00 +00:00
Daniel Stenberg
e615d117a0 Setup and use CURL_INADDR_NONE all over instead of INADDR_NONE. We setup
the define accordingly in the hostip.h header to work nicely all over.
2003-05-13 12:12:17 +00:00
Daniel Stenberg
a51258b6bb before using if2ip(), check if the address is an ip address and skip it if
it is.
2003-05-13 12:11:31 +00:00
Daniel Stenberg
8894bd07b6 libtool 1.4.2 is enough 2003-05-13 09:38:09 +00:00
Daniel Stenberg
ec45a9e825 fix comment 2003-05-13 09:37:45 +00:00
Daniel Stenberg
871358a6e5 before checking for network interfaces using if2ip(), check that the given
name isn't an ip address
2003-05-12 13:06:48 +00:00
Daniel Stenberg
2e2e0fba60 no more complaining when I have 1.5 and it tests for 1.4.2 2003-05-12 13:05:11 +00:00
Daniel Stenberg
4a5139e3f4 fixes from the last week+ 2003-05-12 12:49:22 +00:00
Daniel Stenberg
8f85933d7c Dan F clarified the CURLOPT_ENCODING description after his changes to
allow "" to enable all supported formats.
2003-05-12 12:47:35 +00:00
Daniel Stenberg
246f3a63f6 Dan Fandrich added --compressed docu 2003-05-12 12:46:45 +00:00
Daniel Stenberg
e99eff4eb0 setting ENCODING to "" means enable-all-you-support 2003-05-12 12:45:57 +00:00
Daniel Stenberg
c0197f19cf Dan Fandrich changed CURLOPT_ENCODING to select all supported encodings if
set to "".  This frees the application from having to know which encodings
 the library supports.
2003-05-12 12:45:14 +00:00
Daniel Stenberg
3994d67eea Dan Fandrich lowered the libtool requirement 2003-05-12 12:38:52 +00:00
Daniel Stenberg
9ead79c9d4 when we have accepted the server's connection in a PORT sequence, we set
the new socket to non-blocking
2003-05-12 12:37:35 +00:00
Daniel Stenberg
9371aed46c avoid the write loop 2003-05-12 12:37:05 +00:00
Daniel Stenberg
940707ad66 incoming proxy headers shall be sent to the debug function as HEADERs, not
DATA
2003-05-12 12:29:00 +00:00
Daniel Stenberg
e6c267fb4c oops, run libtoolize as the first tool 2003-05-09 08:17:41 +00:00
Daniel Stenberg
93538fccd6 run libtoolize too 2003-05-09 08:13:02 +00:00
Daniel Stenberg
83a7fad308 run libtoolize to generate these files 2003-05-09 08:12:46 +00:00
Daniel Stenberg
3c7e33388e CURLOPT_FTP_USE_EPRT added 2003-05-09 07:42:47 +00:00
Daniel Stenberg
7b0f35edb6 --disable-eprt added 2003-05-09 07:39:50 +00:00
Daniel Stenberg
94a157d0b0 support for CURLOPT_FTP_USE_EPRT added 2003-05-09 07:39:29 +00:00
Daniel Stenberg
ca04620253 AIX wants sys/select.h 2003-05-09 07:37:27 +00:00
Daniel Stenberg
073ef0b36a clarify on the curl name issue and that there may be other libcurl-based
tools that provide GUI
2003-05-09 07:07:13 +00:00
Daniel Stenberg
c41c05d4f4 Kevin Delafield reported another case where we didn't correctly check for
EAGAIN but only EWOULDBLOCK, which caused badness on HPUX. We now check for
EINTR errors as well and act on them the same way.
2003-05-06 08:19:36 +00:00
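The fix above boils down to classifying all three "try again" errno values alike. A minimal sketch (hypothetical helper name, not libcurl's actual code):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical sketch: treat EAGAIN, EWOULDBLOCK and EINTR alike as
   transient "retry" conditions.  On some systems (HP-UX in the report
   above) EAGAIN and EWOULDBLOCK are distinct values, so checking only
   one of them silently misses the other. */
static int is_transient_error(int err)
{
    return err == EAGAIN || err == EWOULDBLOCK || err == EINTR;
}
```

On platforms where EAGAIN and EWOULDBLOCK share a value the extra comparison is harmless; on platforms where they differ it is the actual bug fix.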
Daniel Stenberg
f1ea54e07a fixed the required tools' version numbers 2003-05-05 14:19:54 +00:00
Daniel Stenberg
a139ce901a the writable argv check now should not exit when building a cross-compiled
curl
2003-05-04 16:07:19 +00:00
Daniel Stenberg
7431957113 put back the libtool test, now for 1.5
require autoconf 2.57
require automake 1.7
2003-05-03 16:25:49 +00:00
Daniel Stenberg
1752d80915 If there is a custom Host: header specified, we use that host name to
extract the correct set of cookies to send. This functionality is verified
by test case 62.
2003-05-02 09:13:19 +00:00
Daniel Stenberg
aa7420e109 send correct cookies when using a custom Host: 2003-05-02 09:12:26 +00:00
Daniel Stenberg
a290d4b9db fixed the format slightly 2003-05-02 09:11:53 +00:00
Daniel Stenberg
19a4314e7f corrected a comment about gzip not being supported 2003-05-01 17:49:47 +00:00
Daniel Stenberg
d166e85e0a FTP URL with type=a 2003-05-01 17:48:59 +00:00
Daniel Stenberg
f213e857ab Andy Cedilnik fixed some compiler warnings 2003-05-01 13:37:36 +00:00
Daniel Stenberg
eb6130baa7 ourerrno became Curl_ourerrno() and is now available to all libcurl 2003-05-01 13:37:05 +00:00
Daniel Stenberg
f69ea2c68a Use the proper Curl_ourerrno() function instead of plain errno, for better
portability. Also use Andy Cedilnik's compiler warning fixes.
2003-05-01 13:36:28 +00:00
Daniel Stenberg
078441d477 the test numbers are now only for human readability; the numbers no longer
enforce protocol/server
2003-04-30 20:29:31 +00:00
Daniel Stenberg
95f6b15a67 no longer assume that the test number implies servers to run 2003-04-30 20:28:49 +00:00
Daniel Stenberg
ee29dbdb8f Each test case now specifies which server(s) it needs, without relying on the
test number.
2003-04-30 20:25:39 +00:00
Daniel Stenberg
15f3f4c93f we say welcome to test 142 2003-04-30 20:08:01 +00:00
Daniel Stenberg
6932e94e0e verify that curl fails fine when an FTP URL with a too deep dir hierarchy
is used
2003-04-30 20:07:37 +00:00
Daniel Stenberg
3ef06d7efe when making up the list of path parts, save the last entry pointing to NULL
as otherwise we'll go nuts
2003-04-30 20:04:17 +00:00
Daniel Stenberg
fb012b48e9 recent action 2003-04-30 20:01:22 +00:00
Daniel Stenberg
bc77bf217f if there's a cookiehost allocated, free that too 2003-04-30 19:58:36 +00:00
Daniel Stenberg
37d1e9351e ok, make the test run ok too 2003-04-30 19:56:53 +00:00
Daniel Stenberg
4494c0dee0 various new cookie tests with a custom Host: header set 2003-04-30 19:49:51 +00:00
Daniel Stenberg
26afc604ac modified to work with modified code 2003-04-30 17:16:25 +00:00
Daniel Stenberg
9aefcada19 modified to produce nicer output when a single test fails 2003-04-30 17:15:38 +00:00
Daniel Stenberg
69fc363760 make the diffs with 'diff -u' to make them nicer and easier to read 2003-04-30 17:15:00 +00:00
Daniel Stenberg
bea02ddebe stop parsing Host: host names at colons too 2003-04-30 17:12:29 +00:00
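Stopping at the colon matters because a custom Host: header may carry a port ("Host: example.com:8080") that must not become part of the cookie host name. A hypothetical sketch of that extraction (assumed helper name, not libcurl's code):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Hypothetical sketch: copy the host name out of a "Host:" header
   value, skipping leading blanks and stopping at a colon (the port
   separator) or at whitespace. */
static void host_from_header(const char *value, char *out, size_t outlen)
{
    size_t i = 0;
    while (*value == ' ' || *value == '\t')
        value++;
    while (*value && *value != ':' && !isspace((unsigned char)*value)
           && i + 1 < outlen)
        out[i++] = *value++;
    out[i] = '\0';
}
```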
Daniel Stenberg
3fb257c39c modified to the new cookie function proto 2003-04-30 17:05:19 +00:00
Daniel Stenberg
7c96c5a39b extract host name from custom Host: headers to use for cookies 2003-04-30 17:04:53 +00:00
Daniel Stenberg
efd836d971 Many cookie fixes:
  o Save domains in jars like Mozilla does. It means all domains set in
    Set-Cookie: headers are dot-prefixed.
  o Save and use the 'tailmatch' field in the Mozilla/Netscape cookie jars (the
    second column).
  o Reject cookies using illegal domains in the Set-Cookie: line. Concerns
    both domains with too few dots or domains that are outside the currently
    operating server host's domain.
  o Set the path part by default to the one used in the request, if none was
    set in the Set-Cookie line.
2003-04-30 17:03:43 +00:00
Daniel Stenberg
836aaa1647 changes need for the new ftp path treatment and the new cookie code 2003-04-30 17:01:00 +00:00
Daniel Stenberg
bf2b3dbf3e David Balazic's patch to make the FTP operations "do right" according to
RFC1738, which means it'll use one CWD for each pathpart.
2003-04-30 16:59:42 +00:00
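The RFC 1738 treatment means each directory level in the FTP URL path costs one CWD command. A hypothetical sketch of counting those levels (not libcurl's actual code; compare commit 3ef06d7efe above, which NULL-terminates the part list):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: in an FTP URL path like "a/b/file", each slash
   ends one directory part, so "a/b/file" needs two CWD commands and
   the segment after the last slash is the file name. */
static int count_cwd_parts(const char *path)
{
    int n = 0;
    const char *slash;
    while ((slash = strchr(path, '/')) != NULL) {
        n++;
        path = slash + 1;
    }
    return n;
}
```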
Daniel Stenberg
b4fa2ff995 two more platforms Rich Gray built curl on 2003-04-30 07:32:43 +00:00
Daniel Stenberg
2f9cabc30b Peter Kovacs provided a patch that makes the CURLINFO_CONNECT_TIME work fine
when using the multi interface (too).
2003-04-29 18:03:30 +00:00
Daniel Stenberg
63593f5597 mention configure --help 2003-04-29 16:55:17 +00:00
Daniel Stenberg
c0acaa5d2c CURLOPT_FTPPORT could support port number too 2003-04-28 17:29:32 +00:00
Daniel Stenberg
2e46f8d0a6 corrected the comment which wasn't correct 2003-04-28 13:48:16 +00:00
Daniel Stenberg
51da6aaa07 RSAglue.lib is no longer needed with recent OpenSSL versions 2003-04-25 15:08:46 +00:00
Daniel Stenberg
c8b79e36db Dan Fandrich added support for the gzip Content-Encoding for --compressed 2003-04-24 06:34:31 +00:00
Daniel Stenberg
208374bcc9 Bryan Kemp's reported problems with curl and PUT from stdin and a faked
content-length made me add test case 60, which does exactly this, but it
seems to run fine...
2003-04-23 12:09:58 +00:00
Daniel Stenberg
7f0a6e7203 last 10 days or so 2003-04-22 23:30:04 +00:00
Daniel Stenberg
54ebb9cfd4 libtool 1.5 stuff 2003-04-22 23:29:27 +00:00
Daniel Stenberg
49e9c1495b stop checking for libtool, we don't run that in this script 2003-04-22 23:26:00 +00:00
Daniel Stenberg
a84b0fbd52 Dan Fandrich corrected the error messages on "bad encoding". 2003-04-22 22:33:39 +00:00
Daniel Stenberg
c95814c04d Dan Fandrich's gzip bugfix 2003-04-22 22:32:02 +00:00
Daniel Stenberg
9f8123f1b8 Dan Fandrich's fix 2003-04-22 22:31:02 +00:00
Daniel Stenberg
8b23db4f4d Peter Sylvester pointed out that curl_easy_setopt() will always (wrongly)
return CURLE_OK no matter what happens.
2003-04-22 21:42:39 +00:00
Daniel Stenberg
d77cc13374 two dashes is enough 2003-04-16 12:46:20 +00:00
Daniel Stenberg
9a12db1aa2 typecast the setting of the size, as it might be an off_t which is bigger
than long and libcurl expects a long...
2003-04-15 14:18:37 +00:00
Daniel Stenberg
eb54d34bec If MALLOCDEBUG, include the lib's setup.h here so that the proper defines
are set before all system headers, as otherwise we get compiler warnings
on my Solaris at least.
2003-04-15 14:01:57 +00:00
Daniel Stenberg
4b1203d4c9 include config.h before all system headers, so that _FILE_OFFSET_BITS and
similar is set properly by us first
2003-04-15 13:32:26 +00:00
Daniel Stenberg
183a9c6244 extended the -F section 2003-04-15 09:58:27 +00:00
Daniel Stenberg
1f2294d585 treat uploaded .html files as text/html by default 2003-04-15 09:29:39 +00:00
Daniel Stenberg
0b839c4f77 return the same error for the sslv2 "certificate verify failed" code 2003-04-14 22:00:36 +00:00
Daniel Stenberg
1d4fd1fcae new wording by Kevin Roth 2003-04-14 14:54:18 +00:00
Daniel Stenberg
b1d8d72c16 ignore all stamp-h* 2003-04-14 13:09:44 +00:00
Daniel Stenberg
bafb68b844 With the recent fix of libcurl, it shall now return CURLE_SSL_CACERT when
it had problems with the CA cert and thus we offer a huge blurb of verbose
help to explain to the poor user why this happens.
2003-04-14 13:09:09 +00:00
Daniel Stenberg
21873b52e9 Restored the SSL error codes since they were broken in the 7.10.4 release,
also now attempt to detect and return the specific CACERT error code.
2003-04-14 12:53:29 +00:00
Daniel Stenberg
0aa8b82871 FTP CWD response fixed
gzip content-encoding added
chunked content-encoding fixed
2003-04-14 07:13:08 +00:00
Daniel Stenberg
f9781afafd clarified the CURLINFO_SIZE_DOWNLOAD somewhat on Juan F. Codagnone's
suggestion
2003-04-11 16:52:30 +00:00
Daniel Stenberg
fece361a55 Nic fixed so that Curl_client_write() must not be called with 0 length data.
I edited somewhat and removed trailing whitespaces.
2003-04-11 16:31:18 +00:00
Daniel Stenberg
7b51b2f128 Nic Hines fixed this bug when deflate or gzip contents were downloaded using
chunked encoding.
2003-04-11 16:23:43 +00:00
Daniel Stenberg
22d88fb28e ah, move the zero byte too or havoc will occur 2003-04-11 16:23:06 +00:00
Daniel Stenberg
f7c5b28e76 verify the new url parser fix 2003-04-11 16:22:27 +00:00
Daniel Stenberg
5760f2a307 support ? as separator instead of / even if no protocol was given
Daniel Stenberg
ee46efb5a5 these guys deserve a mentioning here as well 2003-04-11 08:57:19 +00:00
Daniel Stenberg
eb6ffebfc7 Dan the man on the list 2003-04-11 08:55:08 +00:00
Daniel Stenberg
c06c44f286 Dan Fandrich's added gzip support documented. 2003-04-11 08:51:24 +00:00
Daniel Stenberg
019c4088cf Dan Fandrich's gzip patch applied 2003-04-11 08:49:20 +00:00
Daniel Stenberg
0b0a88b78d when saving a cookie jar fails, you don't get an error code or anything,
just a warning in the verbose output stream
2003-04-11 08:19:06 +00:00
Daniel Stenberg
028e9cc56f According to RFC959, CWD is supposed to return 250 on success, but
there seem to be non-compliant FTP servers out there that return 200,
 so we accept any '2xy' response now.
2003-04-11 08:10:54 +00:00
Daniel Stenberg
e0d8615ece show a verbose warning message in case cookie-saving fails, after
Ralph Mitchell's notification.
2003-04-11 07:39:16 +00:00
Daniel Stenberg
c8ecbda40b new ftp tests 2003-04-10 11:43:47 +00:00
Daniel Stenberg
2324c10d43 another week has passed 2003-04-10 11:36:56 +00:00
Daniel Stenberg
89cfa76291 Vlad Krupin's URL parsing patch to fix the URL parsing when the URL has no
slash after the host name, but still a ? and following "parameters".
2003-04-10 09:44:39 +00:00
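The essence of the fix above is that the host name part of a URL ends at the first '/' or '?', whichever comes first. A minimal sketch (hypothetical helper, not libcurl's parser):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: given the URL text after the scheme, the host
   name runs up to the first '/' or '?', so "hostname.com?foobar=moo"
   parses the same host as "hostname.com/?foobar=moo". */
static size_t host_len(const char *after_scheme)
{
    return strcspn(after_scheme, "/?");
}
```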
Daniel Stenberg
072070a22c oops, committed test code not meant to be here 2003-04-09 12:02:06 +00:00
Daniel Stenberg
3c3ad134ea the default debugfunction shows incoming headers as well 2003-04-09 11:57:06 +00:00
Daniel Stenberg
a4ffcfd4d5 timecond support added
made the Last-Modified (faked) header look correct using GMT always
2003-04-09 11:56:31 +00:00
Daniel Stenberg
136670c58a three new ftp tests 2003-04-09 11:55:24 +00:00
Daniel Stenberg
28169725fa <mdtm> added 2003-04-09 11:53:09 +00:00
Daniel Stenberg
5b13106f54 MDTM support added 2003-04-09 11:52:24 +00:00
Daniel Stenberg
1a2db0dfb1 James Bursa fixed a flaw in the content-type extracting code that could
miss the first letter
2003-04-08 14:48:38 +00:00
Daniel Stenberg
696f95bb0a share.c added 2003-04-08 10:35:35 +00:00
Daniel Stenberg
acec588fe3 --disable-eprt perhaps? 2003-04-07 06:41:24 +00:00
Daniel Stenberg
6ed0da8e98 Ryan Weaver's fix to prevent the CA bundle from getting installed even when
building curl without SSL support!
2003-04-06 12:29:45 +00:00
Daniel Stenberg
7fd91d70bd adjusted the formpost testcases to the new boundary string construction 2003-04-04 12:30:35 +00:00
Daniel Stenberg
61788a0389 Changed how boundary strings are generated. This new way uses 28 dashes
and 12 following hexadecimal letters, which seems to be what IE uses.
This makes curl work smoother with more stupidly written server apps.

Worked this out together with Martijn Broenland.
2003-04-04 12:24:01 +00:00
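The described format is easy to sketch: 28 dashes followed by 12 hexadecimal digits, 40 characters in all. The helper below is a hypothetical illustration of that shape (seeding and random source are assumptions, not curl's implementation):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the IE-style boundary: 28 dashes followed by
   12 lower-case hex digits.  'buf' must hold at least 41 bytes. */
static void make_boundary(char *buf, size_t len, unsigned int seed)
{
    srand(seed);
    memset(buf, '-', 28);
    snprintf(buf + 28, len - 28, "%04x%04x%04x",
             (unsigned)rand() & 0xffffu,
             (unsigned)rand() & 0xffffu,
             (unsigned)rand() & 0xffffu);
}
```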
Daniel Stenberg
0821447b5b spell fix 2003-04-03 16:11:47 +00:00
Daniel Stenberg
3cba274ba6 kill a compiler warning on cygwin 2003-04-03 14:16:15 +00:00
Daniel Stenberg
df7bbcfd21 Added log output for when the writing of the input HTTP request is successful
or unsuccessful. Used to track down the recent cygwin test suite problems.
2003-04-03 13:43:15 +00:00
Daniel Stenberg
021d406f0c Modified how we log data to server.input, as we can't keep the file open
very much as it makes it troublesome on certain operating systems.
2003-04-03 13:42:06 +00:00
Daniel Stenberg
294569c502 new 2003-04-03 13:39:36 +00:00
173 changed files with 2056 additions and 8273 deletions

CHANGES

@@ -6,6 +6,184 @@
Changelog
Version 7.10.5 (19 May 2003)
Daniel (15 May)
- Changed the order for the in_addr_t testing, as 'unsigned long' seems to be
a very common type inet_addr() returns.
Daniel (14 May)
- George Comninos provided a fix that calls the progress meter when waiting
for FTP command responses takes >1 second. It'll make applications more
"responsive" even when dealing with very slow ftp servers.
Daniel (12 May)
- George Comninos pointed out that libcurl uploads had two quirks:
o when using FTP PORT command, it used blocking sockets!
o it could loop a long time without doing progress meter updates
Both items are fixed now.
Daniel (9 May)
- Dan Fandrich changed CURLOPT_ENCODING to select all supported encodings if
set to "". This frees the application from having to know which encodings
the library supports.
- Dan Fandrich pointed out we had three unnecessary files in CVS that are
generated with libtoolize, so they're now removed and libtoolize is invoked
accordingly in the buildconf script.
- Avery Fay found out that the CURLOPT_INTERFACE way of first checking if the
given name is a network interface gave a real performance penalty on Linux,
so now we more appropriately first check if it is an IP number and if so
we don't check for a network interface with that name.
- CURLOPT_FTP_USE_EPRT added. Set this to FALSE to disable libcurl's attempts
to use EPRT and LPRT before the traditional PORT command. The command line
tool sets this option with '--disable-eprt'.
Version 7.10.5-pre2 (6 May 2003)
Daniel (6 May)
- Kevin Delafield reported another case where we didn't correctly check for
EAGAIN but only EWOULDBLOCK, which caused badness on HPUX.
Daniel (4 May)
- Ben Greear noticed that the check for 'writable argv' exited the configure
script when run for cross-compiling, which wasn't nice. Now it'll default to
no and output a warning about the fact that it was not checked for.
Daniel (2 May)
- Added test case 62 and fixed some more on the cookie sending with a custom
Host: header set.
Daniel (1 May)
- Andy Cedilnik fixed a few compiler warnings.
- Made the "SSL read error: 5" error message more verbose, by adding code that
queries the OpenSSL library to fill in the error buffer.
Daniel (30 Apr)
- Added sys/select.h include in the curl/multi.h file, after having been
reminded about this by Rich Gray.
- I made each test set its own server requirements, thus abandoning the
previous system where the test number implied what server(s) to use for a
specific test.
- David Balazic made curl more RFC1738-compliant for FTP URLs, by fixing so
that libcurl now uses one CWD command for each path part. A bunch of test
cases were fixed to work accordingly.
- Cookie fixes:
A. Save domains in jars like Mozilla does. It means all domains set in
Set-Cookie: headers are dot-prefixed.
B. Save and use the 'tailmatch' field in the Mozilla/Netscape cookie jars
(the second column).
C. Reject cookies using illegal domains in the Set-Cookie: line. Concerns
both domains with too few dots or domains that are outside the currently
operating server host's domain.
D. Set the path part by default to the one used in the request, if none was
set in the Set-Cookie line.
To make item C really good, I also made libcurl notice custom Host: headers
and extract the host name set in there and use that as the host name for the
site we're getting the cookies from. This allows the user to specify a site's
IP address but still receive and send its cookies properly, provided a valid
Host: name is given for the site.
Daniel (29 Apr)
- Peter Kovacs provided a patch that makes the CURLINFO_CONNECT_TIME work fine
when using the multi interface (too).
Version 7.10.5-pre1 (23 Apr 2003)
Daniel (23 Apr)
- Upgraded to libtool 1.5.
Daniel (22 Apr)
- Peter Sylvester pointed out that curl_easy_setopt() will always (wrongly)
return CURLE_OK no matter what happens.
- Dan Fandrich fixed some gzip decompression bugs and flaws.
Daniel (16 Apr)
- Fixed minor typo in man page, reported in the Debian bug tracker.
Daniel (15 Apr)
- Fixed some FTP tests in the test suite that failed on my Solaris host, due
to the config.h not being included before the system headers. When done that
way, it got a mixed sense of whether big files are supported, and then
stat() and fstat() (as used in test case 505) got confused and failed to
return a proper file size.
- Formposting a file using a .html suffix is now properly set to Content-Type: text/html.
Daniel (14 Apr)
- Fixed the SSL error handling to return proper SSL error messages again, they
broke in 7.10.4. I also attempt to track down CA cert problems and then
return the CURLE_SSL_CACERT error code.
- The curl tool now intercepts the CURLE_SSL_CACERT error code and displays
a fairly big and explanatory error message. Kevin Roth helped me out with
the wording.
Daniel (11 Apr)
- Nic Hines provided a second patch for gzip decompression, and fixed a bug
when deflate or gzip contents were downloaded using chunked encoding.
- Dan Fandrich made libcurl support automatic decompression of gzip contents
(as an addition to the previous deflate support).
- I made the CWD command during FTP session consider all 2xy codes to be OK
responses.
Daniel (10 Apr)
- Vlad Krupin fixed a URL parsing issue. URLs that were not using a slash
after the host name, but still had "?" and parameters appended, as in
"http://hostname.com?foobar=moo", were not properly parsed by libcurl.
Daniel (9 Apr)
- Made CURLOPT_TIMECONDITION work for FTP transfers, using the same syntax as
for HTTP. This then made -z work for ftp transfers too. Added test case 139
and 140 for verifying this.
- Getting the file date of an ftp file used the wrong time zone when
displayed. It is supposedly always GMT. Added test case 141 for this.
- Made the test suite's FTP server support MDTM.
- The default DEBUGFUNCTION, as enabled with CURLOPT_VERBOSE now outputs
CURLINFO_HEADER_IN data as well. The most notable effect from this is that
using curl -v, you get to see the incoming "headers" as well. This is
perhaps most useful when doing ftp.
Daniel (8 Apr)
- James Bursa fixed a flaw in the Content-Type extraction code, which missed
the first letter if no space followed the colon.
- Magnus Nilsson pointed out that share.c was missing in the MSVC project
file.
Daniel (6 Apr)
- Ryan Weaver provided a patch that makes the CA cert bundle not get installed
anymore when 'configure --without-ssl' has been used.
Daniel (4 Apr)
- Martijn Broenland found another case where a server application didn't
like the boundary string used by curl when doing a multipart formpost. We
modified the boundary string to look like the one IE uses, as this is
probably gonna make curl work with more applications.
Daniel (3 Apr)
- Kevin Roth reported that a bunch of tests fails on cygwin. One set fails
when using perl 5.8 (and they run fine with perl 5.6), and another set
failed because of an artifact in the test suite's FTP server that I
corrected. It turned out the FTP server code was still having a file opened
while the main test script removed it and invoked the HTTP server, which
attempted to create a file with the same name as the one the FTP server kept
open.
This operation works fine on unix, but not on cygwin.
Version 7.10.4 (2 Apr 2003)
Daniel (1 Apr)


@@ -40,9 +40,9 @@ REQUIREMENTS
You need the following software installed:
-  o autoconf 2.50 (or later)
-  o automake 1.5 (or later)
-  o libtool 1.4 (or later)
+  o autoconf 2.57 (or later)
+  o automake 1.7 (or later)
+  o libtool 1.4.2 (or later)
o GNU m4 (required by autoconf)
o nroff + perl


@@ -152,10 +152,8 @@ AC_DEFUN([TYPE_IN_ADDR_T],
AC_MSG_CHECKING([for in_addr_t equivalent])
AC_CACHE_VAL([curl_cv_in_addr_t_equiv],
[
-  # Systems have either "struct sockaddr *" or
-  # "void *" as the second argument to getpeername
   curl_cv_in_addr_t_equiv=
-  for t in int size_t unsigned long "unsigned long"; do
+  for t in "unsigned long" int size_t unsigned long; do
AC_TRY_COMPILE([
#include <sys/types.h>
#include <sys/socket.h>


@@ -6,18 +6,19 @@ die(){
}
#--------------------------------------------------------------------------
# autoconf 2.50 or newer
# autoconf 2.57 or newer
#
need_autoconf="2.57"
ac_version=`${AUTOCONF:-autoconf} --version 2>/dev/null|head -1| sed -e 's/^[^0-9]*//' -e 's/[a-z]* *$//'`
if test -z "$ac_version"; then
echo "buildconf: autoconf not found."
echo " You need autoconf version 2.50 or newer installed."
echo " You need autoconf version $need_autoconf or newer installed."
exit 1
fi
IFS=.; set $ac_version; IFS=' '
if test "$1" = "2" -a "$2" -lt "50" || test "$1" -lt "2"; then
if test "$1" = "2" -a "$2" -lt "57" || test "$1" -lt "2"; then
echo "buildconf: autoconf version $ac_version found."
echo " You need autoconf version 2.50 or newer installed."
echo " You need autoconf version $need_autoconf or newer installed."
echo " If you have a sufficient autoconf installed, but it"
echo " is not named 'autoconf', then try setting the"
echo " AUTOCONF environment variable."
@@ -48,18 +49,19 @@ fi
echo "buildconf: autoheader version $ah_version (ok)"
#--------------------------------------------------------------------------
# automake 1.5 or newer
# automake 1.7 or newer
#
am_version=`${AUTOMAKE:-automake} --version 2>/dev/null|head -1| sed -e 's/^[^0-9]*//' -e 's/[a-z]* *$//'`
need_automake="1.7"
am_version=`${AUTOMAKE:-automake} --version 2>/dev/null|head -1| sed -e 's/^.* \([0-9]\)/\1/' -e 's/[a-z]* *$//'`
if test -z "$am_version"; then
echo "buildconf: automake not found."
echo " You need automake version 1.5 or newer installed."
echo " You need automake version $need_automake or newer installed."
exit 1
fi
IFS=.; set $am_version; IFS=' '
if test "$1" = "1" -a "$2" -lt "5" || test "$1" -lt "1"; then
if test "$1" = "1" -a "$2" -lt "7" || test "$1" -lt "1"; then
echo "buildconf: automake version $am_version found."
echo " You need automake version 1.5 or newer installed."
echo " You need automake version $need_automake or newer installed."
echo " If you have a sufficient automake installed, but it"
echo " is not named 'autommake', then try setting the"
echo " AUTOMAKE environment variable."
@@ -68,33 +70,37 @@ fi
echo "buildconf: automake version $am_version (ok)"
#--------------------------------------------------------------------------
# libtool 1.4 or newer
# libtool check
#
LIBTOOL_WANTED_MAJOR=1
LIBTOOL_WANTED_MINOR=4
LIBTOOL_WANTED_PATCH=
LIBTOOL_WANTED_VERSION=1.4
LIBTOOL_WANTED_PATCH=2
LIBTOOL_WANTED_VERSION=1.4.2
libtool=`which glibtool 2>/dev/null`
if test ! -x "$libtool"; then
libtool=`which libtool`
fi
lt_pversion=`$libtool --version 2>/dev/null|sed -e 's/^[^0-9]*//' -e 's/[- ].*//'`
#lt_pversion=`${LIBTOOL:-$libtool} --version 2>/dev/null|head -1| sed -e 's/^.* \([0-9]\)/\1/' -e 's/[a-z]* *$//'`
lt_pversion=`$libtool --version 2>/dev/null|head -1|sed -e 's/^[^0-9]*//g' -e 's/[- ].*//'`
if test -z "$lt_pversion"; then
echo "buildconf: libtool not found."
echo " You need libtool version $LIBTOOL_WANTED_VERSION or newer installed"
exit 1
fi
lt_version=`echo $lt_pversion|sed -e 's/\([a-z]*\)$/.\1/'`
lt_version=`echo $lt_pversion` #|sed -e 's/\([a-z]*\)$/.\1/'`
IFS=.; set $lt_version; IFS=' '
lt_status="good"
if test "$1" = "$LIBTOOL_WANTED_MAJOR"; then
if test "$2" -lt "$LIBTOOL_WANTED_MINOR"; then
lt_status="bad"
elif test ! -z "$LIBTOOL_WANTED_PATCH"; then
if test "$3" -lt "$LIBTOOL_WANTED_PATCH"; then
lt_status="bad"
if test -n "$3"; then
if test "$3" -lt "$LIBTOOL_WANTED_PATCH"; then
lt_status="bad"
fi
fi
fi
fi
@@ -104,15 +110,20 @@ if test $lt_status != "good"; then
exit 1
fi
echo "buildconf: libtool version $lt_pversion (ok)"
echo "buildconf: libtool version $lt_version (ok)"
# ------------------------------------------------------------
# run the correct scripts now
echo "buildconf: running libtoolize"
${LIBTOOLIZE:-libtoolize} --copy --automake || die "The command '${LIBTOOLIZE:-libtoolize} --copy --automake' failed"
echo "buildconf: running aclocal"
aclocal || die "The command 'aclocal' failed"
${ACLOCAL:-aclocal} || die "The command '${AUTOHEADER:-aclocal}' failed"
echo "buildconf: running autoheader"
autoheader || die "The command 'autoheader' failed"
${AUTOHEADER:-autoheader} || die "The command '${AUTOHEADER:-autoheader}' failed"
echo "buildconf: running autoconf"
autoconf || die "The command 'autoconf' failed"
${AUTOCONF:-autoconf} || die "The command '${AUTOCONF:-autoconf}' failed"
echo "buildconf: running automake"
automake -a || die "The command 'automake -a' failed"
${AUTOMAKE:-automake} -a || die "The command '${AUTOMAKE:-automake} -a' failed"
exit 0

config.guess (vendored): diff suppressed because it is too large

config.sub (vendored): diff suppressed because it is too large


@@ -349,15 +349,17 @@ dnl Check if the operating system allows programs to write to their own argv[]
dnl **********************************************************************
AC_MSG_CHECKING([if argv can be written to])
AC_TRY_RUN([
AC_RUN_IFELSE([[
int main(int argc, char ** argv) {
argv[0][0] = ' ';
return (argv[0][0] == ' ')?0:1;
}
],
]],
AC_DEFINE(HAVE_WRITABLE_ARGV, 1, [Define this symbol if your OS supports changing the contents of argv])
AC_MSG_RESULT(yes),
AC_MSG_RESULT(no)
AC_MSG_RESULT(no),
AC_MSG_RESULT(no)
AC_MSG_WARN([the previous check could not be made default was used])
)
dnl **********************************************************************
@@ -834,6 +836,11 @@ AC_HELP_STRING([--without-ca-bundle], [Don't install the CA bundle]),
fi
] )
if test X"$OPT_SSL" = Xno
then
ca="no"
fi
if test "x$ca" = "xno"; then
dnl let's not keep "no" as path name, blank it instead
ca=""


@@ -1,4 +1,4 @@
-Updated: February 25, 2003 (http://curl.haxx.se/docs/faq.html)
+Updated: May 9, 2003 (http://curl.haxx.se/docs/faq.html)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -97,6 +97,12 @@ FAQ
We spell it cURL or just curl. We pronounce it with an initial k sound:
[kurl].
NOTE: there are numerous sub-projects and related projects that also use the
word curl in the project names in various combinations, but you should take
notice that this FAQ is directed at the command-line tool named curl (and
libcurl the library), and may therefore not be valid for other curl
projects.
1.2 What is libcurl?
libcurl is a reliable and portable library which provides you with an easy
@@ -132,11 +138,9 @@ FAQ
better. We do however believe in a few rules when it comes to the future of
curl:
* Curl is to remain a command line tool. If you want GUIs or fancy scripting
capabilities, you're free to write another tool that uses libcurl and that
offers this. There's no point in having a single tool that does every
imaginable thing. That's also one of the great advantages of having the
core of curl as a library.
* Curl -- the command line tool -- is to remain a non-graphical command line
tool. If you want GUIs or fancy scripting capabilities, you should look
for another tool that uses libcurl.
* We do not add things to curl that other small and available tools already
do very fine at the side. Curl's output is fine to pipe into another


@@ -31,6 +31,10 @@ UNIX
If you have checked out the sources from the CVS repository, read the
CVS-INFO on how to proceed.
Get a full listing of all available configure options by invoking it like:
./configure --help
If you want to install curl in a different file hierarchy than /usr/local,
you need to specify that already when running configure:
@@ -454,10 +458,12 @@ PORTS
- i386 Solaris 2.7
- i386 Windows 95, 98, ME, NT, 2000
- i386 QNX 6
- i486 ncr-sysv4.3.03 (NCR MP-RAS)
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k Linux
- m68k OpenBSD
- m88k dg-dgux5.4R3.00
- s390 Linux
- XScale/PXA250 Linux 2.4


@@ -3,6 +3,9 @@ join in and help us correct one or more of these! Also be sure to check the
changelog of the current development status, as one or more of these problems
may have been fixed since this was written!
* IPv6 support on AIX 4.3.3 doesn't work due to a missing sockaddr_storage
struct. It has been reported to work on AIX 5.1 though.
* Running 'make test' on Mac OS X gives 4 errors. This seems to be related
to some kind of libtool problem:
http://curl.haxx.se/mail/archive-2002-03/0029.html and


@@ -87,3 +87,6 @@ that have contributed with non-trivial parts:
- Miklos Nemeth <mnemeth@kfkisystems.com>
- Kevin Roth <kproth@users.sourceforge.net>
- Ralph Mitchell <rmitchell@eds.com>
- Dan Fandrich <dan@coneharvesters.com>
- Jean-Philippe Barrette-LaPierre <jpb@rrette.com>
- Richard Bramante <RBramante@on.com>

View File

@@ -73,6 +73,9 @@ TODO
FTP
* Make CURLOPT_FTPPORT support an additional port number on the IP/if/name,
like "blabla:[port]" or possibly even "blabla:[portfirst]-[portsecond]".
* FTP ASCII upload does not follow RFC959 section 3.1.1.1: "The sender
converts the data from an internal character representation to the standard
8-bit NVT-ASCII representation (see the Telnet specification). The
@@ -84,13 +87,16 @@ TODO
* An option to only download remote FTP files if they're newer than the local
one is a good idea, and it would fit right into the same syntax as the
already working http dito works. It of course requires that 'MDTM' works,
and it isn't a standard FTP command.
already working http ditto works (-z). It of course requires that 'MDTM'
works, and it isn't a standard FTP command.
* Add FTPS support with SSL for the data connection too. This should be made
according to the specs written in draft-murray-auth-ftp-ssl-08.txt,
"Securing FTP with TLS"
* --disable-epsv exists, but for active connections we have no --disable-eprt
(or even --disable-lprt).
HTTP
* If the "body" of the POST is < MSS it really ought to be sent along with

View File

@@ -96,6 +96,10 @@ must be using valid ciphers. Read up on SSL cipher list details on this URL:
.I http://www.openssl.org/docs/apps/ciphers.html (Option added in curl 7.9)
If this option is used several times, the last one will override the others.
.IP "--compressed"
(HTTP) Request a compressed response using the deflate or gzip
algorithms and return the uncompressed document. If this option is used
and the server sends an unsupported encoding, Curl will report an error.
.IP "--connect-timeout <seconds>"
Maximum time in seconds that you allow the connection to the server to take.
This only limits the connection phase, once curl has connected this option is
@@ -110,6 +114,12 @@ no file will be written. The file will be written using the Netscape cookie
file format. If you set the file name to a single dash, "-", the cookies will
be written to stdout. (Option added in curl 7.9)
.B NOTE
If the cookie jar can't be created or written to, the whole curl operation
won't fail or even report an error clearly. Using -v will get a warning
displayed, but that is the only visible feedback you get about this possibly
lethal situation.
If this option is used several times, the last specified file name will be
used.
.IP "-C/--continue-at <offset>"
@@ -122,7 +132,7 @@ Use "-C -" to tell curl to automatically find out where/how to resume the
transfer. It then uses the given output/input files to figure that out.
If this option is used several times, the last one will be used.
.IP "---create-dirs"
.IP "--create-dirs"
When used in conjunction with the -o option, curl will create the necessary
local directory hierarchy as needed.
.IP "--crlf"
@@ -256,12 +266,18 @@ Example, to send your password file to the server, where
\&'password' is the name of the form-field to which /etc/passwd will be the
input:
.B curl
-F password=@/etc/passwd www.mypasswords.com
\fBcurl\fP -F password=@/etc/passwd www.mypasswords.com
To read the file's content from stdin instead of a file, use - where the file
name should've been. This goes for both @ and < constructs.
You can also tell curl what Content-Type to use for the file upload part, by
using 'type=', in a manner similar to:
\fBcurl\fP -F "web=@index.html;type=text/html" url.com
See further examples and details in the MANUAL.
This option can be used multiple times.
.IP "-g/--globoff"
This option switches off the "URL globbing parser". When you set this option,

View File

@@ -78,7 +78,8 @@ uploaded.
.TP
.B CURLINFO_SIZE_DOWNLOAD
Pass a pointer to a double to receive the total amount of bytes that were
downloaded.
downloaded. The amount is only for the latest transfer and will be reset again
for each new transfer.
.TP
.B CURLINFO_SPEED_DOWNLOAD
Pass a pointer to a double to receive the average download speed that curl

View File

@@ -335,10 +335,18 @@ prompt function.
.SH HTTP OPTIONS
.TP 0.4i
.B CURLOPT_ENCODING
Two encodings are supported: \fIidentity\fP, which does nothing, and
\fIdeflate\fP to request the server to compress its response using the
zlib algorithm. This is not an order; the server may or may not do it.
See the special file lib/README.encoding for details.
Sets the contents of the Accept-Encoding: header sent in an HTTP
request, and enables decoding of a response when a Content-Encoding:
header is received. Three encodings are supported: \fIidentity\fP,
which does nothing, \fIdeflate\fP which requests the server to
compress its response using the zlib algorithm, and \fIgzip\fP which
requests the gzip algorithm. If a zero-length string is set, then an
Accept-Encoding: header containing all supported encodings is sent.
This is a request, not an order; the server may or may not do it. This
option must be set (to any non-NULL value) or else any unsolicited
encoding done by the server is ignored. See the special file
lib/README.encoding for details.
.TP
.B CURLOPT_FOLLOWLOCATION
A non-zero parameter tells the library to follow any Location: header that the
@@ -478,6 +486,13 @@ is called. If no cookies are known, no file will be created. Specify "-" to
instead have the cookies written to stdout. Using this option also enables
cookies for this session, so if you for example follow a location it will make
matching cookies get sent accordingly. (Added in 7.9)
.B NOTE
If the cookie jar file can't be created or written to (when
curl_easy_cleanup() is called), libcurl will not and cannot report an error
for this. Using CURLOPT_VERBOSE or CURLOPT_DEBUGFUNCTION will get a warning to
display, but that is the only visible feedback you get about this possibly
lethal situation.
.TP
.B CURLOPT_TIMECONDITION
Pass a long as parameter. This defines how the CURLOPT_TIMEVALUE time value is
@@ -560,6 +575,13 @@ and symbolic links.
A non-zero parameter tells the library to append to the remote file instead of
overwrite it. This is only useful when uploading to a ftp site.
.TP
.B CURLOPT_FTP_USE_EPRT
Pass a long. If the value is non-zero, it tells curl to use the EPRT (and
LPRT) command when doing active FTP downloads (which is enabled by
CURLOPT_FTPPORT). Using EPRT means that it will first attempt to use EPRT and
then LPRT before using PORT, but if you pass FALSE (zero) to this option, it
will not try using EPRT or LPRT, only plain PORT. (Added in 7.10.5)
.TP
.B CURLOPT_FTP_USE_EPSV
Pass a long. If the value is non-zero, it tells curl to use the EPSV command
when doing passive FTP downloads (which it always does by default). Using EPSV

View File

@@ -624,6 +624,11 @@ typedef enum {
and password to whatever host the server decides. */
CINIT(UNRESTRICTED_AUTH, LONG, 105),
/* Specifically switch on or off the FTP engine's use of the EPRT command (it
also disables the LPRT attempt). By default, those will always be
attempted before the good old traditional PORT command. */
CINIT(FTP_USE_EPRT, LONG, 106),
CURLOPT_LASTENTRY /* the last unused */
} CURLoption;
@@ -814,8 +819,8 @@ CURLcode curl_global_init(long flags);
void curl_global_cleanup(void);
/* This is the version number */
#define LIBCURL_VERSION "7.10.4"
#define LIBCURL_VERSION_NUM 0x070a04
#define LIBCURL_VERSION "7.10.5"
#define LIBCURL_VERSION_NUM 0x070a05
/* linked-list structure for the CURLOPT_QUOTE option (and other) */
struct curl_slist {

View File

@@ -49,6 +49,7 @@
#if defined(WIN32) && !defined(__GNUC__) || defined(__MINGW32__)
#include <winsock.h>
#else
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

View File

@@ -33,7 +33,7 @@
LIB_NAME = libcurl
LIB_NAME_DEBUG = libcurld
!IFNDEF OPENSSL_PATH
OPENSSL_PATH = ../../openssl-0.9.6
OPENSSL_PATH = ../../openssl-0.9.7a
!ENDIF
#############################################################
@@ -48,7 +48,8 @@ LNKDLL = link.exe /DLL /def:libcurl.def
LNKLIB = link.exe -lib
LFLAGS = /nologo
LINKLIBS = ws2_32.lib winmm.lib
SSLLIBS = libeay32.lib ssleay32.lib RSAglue.lib
SSLLIBS = libeay32.lib ssleay32.lib
# RSAglue.lib was formerly needed in the SSLLIBS
CFGSET = FALSE
######################

View File

@@ -5,15 +5,15 @@
HTTP/1.1 [RFC 2616] specifies that a client may request that a server encode
its response. This is usually used to compress a response using one of a set
of commonly available compression techniques. These schemes are `deflate'
(the zlib algorithm), `gzip' and `compress' [sec 3.5, RFC 2616]. A client
requests that the server perform an encoding by including an Accept-Encoding
header in the request document. The value of the header should be one of the
recognized tokens `deflate', ... (there's a way to register new
schemes/tokens, see sec 3.5 of the spec). A server MAY honor the client's
encoding request. When a response is encoded, the server includes a
Content-Encoding header in the response. The value of the Content-Encoding
header indicates which scheme was used to encode the data.
of commonly available compression techniques. These schemes are `deflate' (the
zlib algorithm), `gzip' and `compress' [sec 3.5, RFC 2616]. A client requests
that the server perform an encoding by including an Accept-Encoding header in
the request document. The value of the header should be one of the recognized
tokens `deflate', ... (there's a way to register new schemes/tokens, see sec
3.5 of the spec). A server MAY honor the client's encoding request. When a
response is encoded, the server includes a Content-Encoding header in the
response. The value of the Content-Encoding header indicates which scheme was
used to encode the data.
A client may tell a server that it can understand several different encoding
schemes. In this case the server may choose any one of those and use it to
@@ -24,11 +24,10 @@ information on the Accept-Encoding header.
* Current support for content encoding:
I added support for the 'deflate' content encoding to both libcurl and curl.
Both regular and chunked transfers should work although I've tested only the
former. The library zlib is required for this feature. Places where I
modified the source code are commented and typically include my initials and
the date (e.g., 08/29/02 jhrg).
The 'deflate' and 'gzip' content encodings are supported by
libcurl. Both regular and chunked transfers should work fine. The library
zlib is required for this feature. 'deflate' support was added by James
Gallagher, and support for the 'gzip' encoding was added by Dan Fandrich.
* The libcurl interface:
@@ -39,15 +38,23 @@ To cause libcurl to request a content encoding use:
where <string> is the intended value of the Accept-Encoding header.
Currently, libcurl only understands how to process responses that use the
`deflate' Content-Encoding, so the only value for CURLOPT_ENCODING that will
work (besides "identity," which does nothing) is "deflate." If a response is
encoded using either the `gzip' or `compress' methods, libcurl will return an
error indicating that the response could not be decoded. If <string> is null
or empty no Accept-Encoding header is generated.
"deflate" or "gzip" Content-Encoding, so the only values for CURLOPT_ENCODING
that will work (besides "identity," which does nothing) are "deflate" and
"gzip". If a response is encoded using the "compress" method, libcurl will
return an error indicating that the response could not be decoded. If
<string> is NULL no Accept-Encoding header is generated. If <string> is a
zero-length string, then an Accept-Encoding header containing all supported
encodings will be generated.
The CURLOPT_ENCODING must be set to any non-NULL value for content to be
automatically decoded. If it is not set and the server still sends encoded
content (despite not having been asked), the data is returned in its raw form
and the Content-Encoding type is not checked.
* The curl interface:
Use the --compressed option with curl to cause it to ask servers to compress
responses using deflate.
responses using deflate.
James Gallagher <jgallagher@gso.uri.edu>
Dan Fandrich <dan@coneharvesters.com>

View File

@@ -81,8 +81,7 @@
#include "memdebug.h"
#endif
static
int ourerrno(void)
int Curl_ourerrno(void)
{
#ifdef WIN32
return (int)GetLastError();
@@ -198,10 +197,6 @@ static CURLcode bindlocal(struct connectdata *conn,
#ifdef HAVE_INET_NTOA
#ifndef INADDR_NONE
#define INADDR_NONE (in_addr_t) ~0
#endif
struct SessionHandle *data = conn->data;
/*************************************************************
@@ -214,7 +209,11 @@ static CURLcode bindlocal(struct connectdata *conn,
char myhost[256] = "";
in_addr_t in;
if(Curl_if2ip(data->set.device, myhost, sizeof(myhost))) {
/* First check if the given name is an IP address */
in=inet_addr(data->set.device);
if((in == CURL_INADDR_NONE) &&
Curl_if2ip(data->set.device, myhost, sizeof(myhost))) {
/*
* We now have the numerical IPv4-style x.y.z.w in the 'myhost' buffer
*/
@@ -247,7 +246,7 @@ static CURLcode bindlocal(struct connectdata *conn,
infof(data, "We bind local end to %s\n", myhost);
in=inet_addr(myhost);
if (INADDR_NONE != in) {
if (CURL_INADDR_NONE != in) {
if ( h ) {
Curl_addrinfo *addr = h->addr;
@@ -350,7 +349,7 @@ int socketerror(int sockfd)
if( -1 == getsockopt(sockfd, SOL_SOCKET, SO_ERROR,
(void *)&err, &errSize))
err = ourerrno();
err = Curl_ourerrno();
return err;
}
@@ -414,7 +413,7 @@ CURLcode Curl_is_connected(struct connectdata *conn,
return CURLE_COULDNT_CONNECT;
}
else if(1 != rc) {
int error = ourerrno();
int error = Curl_ourerrno();
failf(data, "Failed connect to %s:%d, errno: %d",
conn->hostname, conn->port, error);
return CURLE_COULDNT_CONNECT;
@@ -526,7 +525,7 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
rc = connect(sockfd, ai->ai_addr, ai->ai_addrlen);
if(-1 == rc) {
int error=ourerrno();
int error=Curl_ourerrno();
switch (error) {
case EINPROGRESS:
@@ -645,7 +644,7 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
sizeof(serv_addr));
if(-1 == rc) {
int error=ourerrno();
int error=Curl_ourerrno();
switch (error) {
case EINPROGRESS:

View File

@@ -37,4 +37,6 @@ CURLcode Curl_connecthost(struct connectdata *conn,
Curl_ipconnect **addr, /* the one we used */
bool *connected /* truly connected? */
);
int Curl_ourerrno(void);
#endif

View File

@@ -1,8 +1,8 @@
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2003, Daniel Stenberg, <daniel@haxx.se>, et al.
@@ -10,7 +10,7 @@
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at http://curl.haxx.se/docs/copyright.html.
*
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
@@ -25,13 +25,26 @@
#ifdef HAVE_LIBZ
#include <stdlib.h>
#include <string.h>
#include "urldata.h"
#include <curl/curl.h>
#include <curl/types.h>
#include "sendf.h"
#define DSIZ 4096 /* buffer size for decompressed data */
#define DSIZ 0x10000 /* buffer size for decompressed data */
#define GZIP_MAGIC_0 0x1f
#define GZIP_MAGIC_1 0x8b
/* gzip flag byte */
#define ASCII_FLAG 0x01 /* bit 0 set: file probably ascii text */
#define HEAD_CRC 0x02 /* bit 1 set: header CRC present */
#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
#define COMMENT 0x10 /* bit 4 set: file comment present */
#define RESERVED 0xE0 /* bits 5..7: reserved */
static CURLcode
process_zlib_error(struct SessionHandle *data, z_stream *z)
@@ -55,7 +68,7 @@ exit_zlib(z_stream *z, bool *zlib_init, CURLcode result)
}
CURLcode
Curl_unencode_deflate_write(struct SessionHandle *data,
Curl_unencode_deflate_write(struct SessionHandle *data,
struct Curl_transfer_keeper *k,
ssize_t nread)
{
@@ -63,7 +76,7 @@ Curl_unencode_deflate_write(struct SessionHandle *data,
int result; /* Curl_client_write status */
char decomp[DSIZ]; /* Put the decompressed data here. */
z_stream *z = &k->z; /* zlib state structure */
/* Initialize zlib? */
if (!k->zlib_init) {
z->zalloc = (alloc_func)Z_NULL;
@@ -74,7 +87,7 @@ Curl_unencode_deflate_write(struct SessionHandle *data,
k->zlib_init = 1;
}
/* Set the compressed input when this fucntion is called */
/* Set the compressed input when this function is called */
z->next_in = (Bytef *)k->str;
z->avail_in = nread;
@@ -87,11 +100,12 @@ Curl_unencode_deflate_write(struct SessionHandle *data,
status = inflate(z, Z_SYNC_FLUSH);
if (status == Z_OK || status == Z_STREAM_END) {
result = Curl_client_write(data, CLIENTWRITE_BODY, decomp,
DSIZ - z->avail_out);
/* if !CURLE_OK, clean up, return */
if (result) {
return exit_zlib(z, &k->zlib_init, result);
if (DSIZ - z->avail_out) {
result = Curl_client_write(data, CLIENTWRITE_BODY, decomp,
DSIZ - z->avail_out);
/* if !CURLE_OK, clean up, return */
if (result)
return exit_zlib(z, &k->zlib_init, result);
}
/* Done?; clean up, return */
@@ -103,7 +117,233 @@ Curl_unencode_deflate_write(struct SessionHandle *data,
}
/* Done with these bytes, exit */
if (status == Z_OK && z->avail_in == 0 && z->avail_out > 0)
if (status == Z_OK && z->avail_in == 0 && z->avail_out > 0)
return result;
}
else { /* Error; exit loop, handle below */
return exit_zlib(z, &k->zlib_init, process_zlib_error(data, z));
}
}
}
/* Skip over the gzip header */
static enum {
GZIP_OK,
GZIP_BAD,
GZIP_UNDERFLOW
} check_gzip_header(unsigned char const *data, ssize_t len, ssize_t *headerlen)
{
int method, flags;
const ssize_t totallen = len;
/* The shortest header is 10 bytes */
if (len < 10)
return GZIP_UNDERFLOW;
if ((data[0] != GZIP_MAGIC_0) || (data[1] != GZIP_MAGIC_1))
return GZIP_BAD;
method = data[2];
flags = data[3];
if (method != Z_DEFLATED || (flags & RESERVED) != 0) {
/* Can't handle this compression method or unknown flag */
return GZIP_BAD;
}
/* Skip over time, xflags, OS code and all previous bytes */
len -= 10;
data += 10;
if (flags & EXTRA_FIELD) {
ssize_t extra_len;
if (len < 2)
return GZIP_UNDERFLOW;
extra_len = (data[1] << 8) | data[0];
if (len < (extra_len+2))
return GZIP_UNDERFLOW;
len -= (extra_len + 2);
}
if (flags & ORIG_NAME) {
/* Skip over NUL-terminated file name */
while (len && *data) {
--len;
++data;
}
if (!len || *data)
return GZIP_UNDERFLOW;
/* Skip over the NUL */
--len;
++data;
}
if (flags & COMMENT) {
/* Skip over NUL-terminated comment */
while (len && *data) {
--len;
++data;
}
if (!len || *data)
return GZIP_UNDERFLOW;
/* Skip over the NUL */
--len;
++data;
}
if (flags & HEAD_CRC) {
if (len < 2)
return GZIP_UNDERFLOW;
len -= 2;
data += 2;
}
*headerlen = totallen - len;
return GZIP_OK;
}
CURLcode
Curl_unencode_gzip_write(struct SessionHandle *data,
struct Curl_transfer_keeper *k,
ssize_t nread)
{
int status; /* zlib status */
int result; /* Curl_client_write status */
char decomp[DSIZ]; /* Put the decompressed data here. */
z_stream *z = &k->z; /* zlib state structure */
/* Initialize zlib? */
if (!k->zlib_init) {
z->zalloc = (alloc_func)Z_NULL;
z->zfree = (free_func)Z_NULL;
z->opaque = 0; /* of dubious use 08/27/02 jhrg */
if (inflateInit2(z, -MAX_WBITS) != Z_OK)
return process_zlib_error(data, z);
k->zlib_init = 1; /* Initial call state */
}
/* This next mess is to get around the potential case where there isn't
enough data passed in to skip over the gzip header. If that happens,
we malloc a block and copy what we have then wait for the next call. If
there still isn't enough (this is definitely a worst-case scenario), we
make the block bigger, copy the next part in and keep waiting. */
/* Skip over gzip header? */
if (k->zlib_init == 1) {
/* Initial call state */
ssize_t hlen;
switch (check_gzip_header((unsigned char *)k->str, nread, &hlen)) {
case GZIP_OK:
z->next_in = (Bytef *)k->str + hlen;
z->avail_in = nread - hlen;
k->zlib_init = 3; /* Inflating stream state */
break;
case GZIP_UNDERFLOW:
/* We need more data so we can find the end of the gzip header.
It's possible that the memory block we malloc here will never be
freed if the transfer abruptly aborts after this point. Since it's
unlikely that circumstances will be right for this code path to be
followed in the first place, and it's even more unlikely for a transfer
to fail immediately afterwards, it should seldom be a problem. */
z->avail_in = nread;
z->next_in = malloc(z->avail_in);
if (z->next_in == NULL) {
return exit_zlib(z, &k->zlib_init, CURLE_OUT_OF_MEMORY);
}
memcpy(z->next_in, k->str, z->avail_in);
k->zlib_init = 2; /* Need more gzip header data state */
/* We don't have any data to inflate yet */
return CURLE_OK;
case GZIP_BAD:
default:
return exit_zlib(z, &k->zlib_init, process_zlib_error(data, z));
}
}
else if (k->zlib_init == 2) {
/* Need more gzip header data state */
ssize_t hlen;
unsigned char *oldblock = z->next_in;
z->avail_in += nread;
z->next_in = realloc(z->next_in, z->avail_in);
if (z->next_in == NULL) {
free(oldblock);
return exit_zlib(z, &k->zlib_init, CURLE_OUT_OF_MEMORY);
}
/* Append the new block of data to the previous one */
memcpy(z->next_in + z->avail_in - nread, k->str, nread);
switch (check_gzip_header(z->next_in, z->avail_in, &hlen)) {
case GZIP_OK:
/* This is the zlib stream data */
free(z->next_in);
/* Don't point into the malloced block since we just freed it */
z->next_in = (Bytef *)k->str + hlen + nread - z->avail_in;
z->avail_in = z->avail_in - hlen;
k->zlib_init = 3; /* Inflating stream state */
break;
case GZIP_UNDERFLOW:
/* We still don't have any data to inflate! */
return CURLE_OK;
case GZIP_BAD:
default:
free(z->next_in);
return exit_zlib(z, &k->zlib_init, process_zlib_error(data, z));
}
}
else {
/* Inflating stream state */
z->next_in = (Bytef *)k->str;
z->avail_in = nread;
}
if (z->avail_in == 0) {
/* We don't have any data to inflate; wait until next time */
return CURLE_OK;
}
/* because the buffer size is fixed, iteratively decompress
and transfer to the client via client_write. */
for (;;) {
/* (re)set buffer for decompressed output for every iteration */
z->next_out = (Bytef *)&decomp[0];
z->avail_out = DSIZ;
status = inflate(z, Z_SYNC_FLUSH);
if (status == Z_OK || status == Z_STREAM_END) {
if(DSIZ - z->avail_out) {
result = Curl_client_write(data, CLIENTWRITE_BODY, decomp,
DSIZ - z->avail_out);
/* if !CURLE_OK, clean up, return */
if (result)
return exit_zlib(z, &k->zlib_init, result);
}
/* Done?; clean up, return */
/* We should really check the gzip CRC here */
if (status == Z_STREAM_END) {
if (inflateEnd(z) == Z_OK)
return exit_zlib(z, &k->zlib_init, result);
else
return exit_zlib(z, &k->zlib_init, process_zlib_error(data, z));
}
/* Done with these bytes, exit */
if (status == Z_OK && z->avail_in == 0 && z->avail_out > 0)
return result;
}
else { /* Error; exit loop, handle below */

View File

@@ -20,7 +20,22 @@
*
* $Id$
***************************************************************************/
#include "setup.h"
/*
* Comma-separated list of all supported Content-Encodings ('identity' is implied)
*/
#ifdef HAVE_LIBZ
#define ALL_CONTENT_ENCODINGS "deflate, gzip"
#else
#define ALL_CONTENT_ENCODINGS "identity"
#endif
CURLcode Curl_unencode_deflate_write(struct SessionHandle *data,
struct Curl_transfer_keeper *k,
ssize_t nread);
CURLcode
Curl_unencode_gzip_write(struct SessionHandle *data,
struct Curl_transfer_keeper *k,
ssize_t nread);

View File

@@ -111,6 +111,17 @@ free_cookiemess(struct Cookie *co)
free(co);
}
static bool tailmatch(const char *little, const char *bigone)
{
unsigned int littlelen = strlen(little);
unsigned int biglen = strlen(bigone);
if(littlelen > biglen)
return FALSE;
return strequal(little, bigone+biglen-littlelen);
}
/****************************************************************************
*
* Curl_cookie_add()
@@ -123,7 +134,10 @@ struct Cookie *
Curl_cookie_add(struct CookieInfo *c,
bool httpheader, /* TRUE if HTTP header-style line */
char *lineptr, /* first character of the line */
char *domain) /* default domain */
char *domain, /* default domain */
char *path) /* full path used when this cookie is set,
used to get default path for the cookie
unless set */
{
struct Cookie *clist;
char what[MAX_COOKIE_LINE];
@@ -134,6 +148,7 @@ Curl_cookie_add(struct CookieInfo *c,
struct Cookie *lastc=NULL;
time_t now = time(NULL);
bool replace_old = FALSE;
bool badcookie = FALSE; /* cookies are good by default. mmmmm yummy */
/* First, alloc and init a new struct for it */
co = (struct Cookie *)malloc(sizeof(struct Cookie));
@@ -186,8 +201,63 @@ Curl_cookie_add(struct CookieInfo *c,
co->path=strdup(whatptr);
}
else if(strequal("domain", name)) {
co->domain=strdup(whatptr);
co->field1= (whatptr[0]=='.')?2:1;
/* note that this name may or may not have a preceding dot, but
we don't care about that, we treat the names the same anyway */
char *ptr=whatptr;
int dotcount=1;
unsigned int i;
static const char *seventhree[]= {
"com", "edu", "net", "org", "gov", "mil", "int"
};
/* Count the dots, we need to make sure that there are THREE dots
in the normal domains, or TWO in the seventhree-domains. */
if('.' == whatptr[0])
/* don't count the initial dot, assume it */
ptr++;
do {
ptr = strchr(ptr, '.');
if(ptr) {
ptr++;
dotcount++;
}
} while(ptr);
for(i=0;
i<sizeof(seventhree)/sizeof(seventhree[0]); i++) {
if(tailmatch(seventhree[i], whatptr)) {
dotcount++; /* we allow one dot less for these */
break;
}
}
if(dotcount < 3) {
/* Received and skipped a cookie with a domain using too few
dots. */
badcookie=TRUE; /* mark this as a bad cookie */
}
else {
/* Now, we make sure that our host is within the given domain,
or the given domain is not valid and thus cannot be set. */
if(!domain || tailmatch(whatptr, domain)) {
char *ptr=whatptr;
if(ptr[0] == '.')
ptr++;
co->domain=strdup(ptr); /* don't prefix with dots internally */
co->tailmatch=TRUE; /* we always do that if the domain name was
given */
}
else {
/* we did not get a tailmatch and then the attempted set domain
is not a domain to which the current host belongs. Mark as
bad. */
badcookie=TRUE;
}
}
}
else if(strequal("version", name)) {
co->version=strdup(whatptr);
@@ -249,8 +319,11 @@ Curl_cookie_add(struct CookieInfo *c,
semiptr=strchr(ptr, '\0');
} while(semiptr);
if(NULL == co->name) {
/* we didn't get a cookie name, this is an illegal line, bail out */
if(badcookie || (NULL == co->name)) {
/* we didn't get a cookie name or a bad one,
this is an illegal line, bail out */
if(co->expirestr)
free(co->expirestr);
if(co->domain)
free(co->domain);
if(co->path)
@@ -264,8 +337,20 @@ Curl_cookie_add(struct CookieInfo *c,
}
if(NULL == co->domain)
/* no domain given in the header line, set the default now */
/* no domain was given in the header line, set the default now */
co->domain=domain?strdup(domain):NULL;
if((NULL == co->path) && path) {
/* no path was given in the header line, set the default now */
char *endslash = strrchr(path, '/');
if(endslash) {
int pathlen = endslash-path+1; /* include the ending slash */
co->path=malloc(pathlen+1); /* one extra for the zero byte */
if(co->path) {
memcpy(co->path, path, pathlen);
co->path[pathlen]=0; /* zero terminate */
}
}
}
}
else {
/* This line is NOT a HTTP header style line, we do offer support for
@@ -297,9 +382,12 @@ Curl_cookie_add(struct CookieInfo *c,
/* Now loop through the fields and init the struct we already have
allocated */
for(ptr=firstptr, fields=0; ptr; ptr=strtok_r(NULL, "\t", &tok_buf), fields++) {
for(ptr=firstptr, fields=0; ptr;
ptr=strtok_r(NULL, "\t", &tok_buf), fields++) {
switch(fields) {
case 0:
if(ptr[0]=='.') /* skip preceding dots */
ptr++;
co->domain = strdup(ptr);
break;
case 1:
@@ -312,10 +400,8 @@ Curl_cookie_add(struct CookieInfo *c,
As far as I can see, it is set to true when the cookie says
.domain.com and to false when the domain is complete www.domain.com
We don't currently take advantage of this knowledge.
*/
co->field1=strequal(ptr, "TRUE")+1; /* store information */
co->tailmatch=strequal(ptr, "TRUE"); /* store information */
break;
case 2:
/* It turns out, that sometimes the file format allows the path
@@ -374,13 +460,8 @@ Curl_cookie_add(struct CookieInfo *c,
/* the names are identical */
if(clist->domain && co->domain) {
if(strequal(clist->domain, co->domain) ||
(clist->domain[0]=='.' &&
strequal(&(clist->domain[1]), co->domain)) ||
(co->domain[0]=='.' &&
strequal(clist->domain, &(co->domain[1]))) )
/* The domains are identical, or at least identical if you skip the
preceding dot */
if(strequal(clist->domain, co->domain))
/* The domains are identical */
replace_old=TRUE;
}
else if(!clist->domain && !co->domain)
@@ -470,7 +551,6 @@ Curl_cookie_add(struct CookieInfo *c,
}
c->numcookies++; /* one more cookie in the jar */
return co;
}
@@ -532,7 +612,7 @@ struct CookieInfo *Curl_cookie_init(char *file,
while(*lineptr && isspace((int)*lineptr))
lineptr++;
Curl_cookie_add(c, headerline, lineptr, NULL);
Curl_cookie_add(c, headerline, lineptr, NULL, NULL);
}
if(fromfile)
fclose(fp);
@@ -561,9 +641,6 @@ struct Cookie *Curl_cookie_getlist(struct CookieInfo *c,
struct Cookie *newco;
struct Cookie *co;
time_t now = time(NULL);
int hostlen=strlen(host);
int domlen;
struct Cookie *mainco=NULL;
if(!c || !c->cookies)
@@ -572,43 +649,42 @@ struct Cookie *Curl_cookie_getlist(struct CookieInfo *c,
co = c->cookies;
while(co) {
/* only process this cookie if it is not expired or had no expire
date AND that if the cookie requires we're secure we must only
continue if we are! */
/* only process this cookie if it is not expired or had no expire
date AND that if the cookie requires we're secure we must only
continue if we are! */
if( (co->expires<=0 || (co->expires> now)) &&
(co->secure?secure:TRUE) ) {
/* now check if the domain is correct */
if(!co->domain ||
(co->tailmatch && tailmatch(co->domain, host)) ||
(!co->tailmatch && strequal(host, co->domain)) ) {
/* the right part of the host matches the domain stuff in the
cookie data */
/* now check the left part of the path with the cookies path
requirement */
if(!co->path ||
checkprefix(co->path, path) ) {
/* now check if the domain is correct */
domlen=co->domain?strlen(co->domain):0;
if(!co->domain ||
((domlen<=hostlen) &&
strequal(host+(hostlen-domlen), co->domain)) ) {
/* the right part of the host matches the domain stuff in the
cookie data */
/* and now, we know this is a match and we should create an
entry for the return-linked-list */
newco = (struct Cookie *)malloc(sizeof(struct Cookie));
if(newco) {
/* first, copy the whole source cookie: */
memcpy(newco, co, sizeof(struct Cookie));
/* now check the left part of the path with the cookies path
requirement */
if(!co->path ||
checkprefix(co->path, path) ) {
/* and now, we know this is a match and we should create an
entry for the return-linked-list */
newco = (struct Cookie *)malloc(sizeof(struct Cookie));
if(newco) {
/* first, copy the whole source cookie: */
memcpy(newco, co, sizeof(struct Cookie));
/* then modify our next */
newco->next = mainco;
/* point the main to us */
mainco = newco;
}
}
}
}
co = co->next;
/* then modify our next */
newco->next = mainco;
/* point the main to us */
mainco = newco;
}
}
}
}
co = co->next;
}
return mainco; /* return the new list */
@@ -716,15 +792,19 @@ int Curl_cookie_output(struct CookieInfo *c, char *dumphere)
while(co) {
fprintf(out,
"%s%s\t" /* domain */
"%s\t" /* tailmatch */
"%s\t" /* path */
"%s\t" /* secure */
"%u\t" /* expires */
"%s\t" /* name */
"%s\n", /* value */
/* Make sure all domains are prefixed with a dot if they allow
tailmatching. This is Mozilla-style. */
(co->tailmatch && co->domain && co->domain[0] != '.')? ".":"",
co->domain?co->domain:"unknown",
co->tailmatch?"TRUE":"FALSE",
co->path?co->path:"/",
co->secure?"TRUE":"FALSE",
(unsigned int)co->expires,


@@ -40,8 +40,7 @@ struct Cookie {
char *domain; /* domain = <this> */
long expires; /* expires = <this> */
char *expirestr; /* the plain text version */
char field1; /* read from a cookie file, 1 => FALSE, 2=> TRUE */
bool tailmatch; /* whether we do tail-matching of the domain name */
/* RFC 2109 keywords. Version=1 means 2109-compliant cookie sending */
char *version; /* Version = <value> */
@@ -70,11 +69,11 @@ struct CookieInfo {
#define MAX_NAME_TXT "255"
/*
* Add a cookie to the internal list of cookies. The domain argument is only
* used if the header boolean is TRUE.
* Add a cookie to the internal list of cookies. The domain and path arguments
* are only used if the header boolean is TRUE.
*/
struct Cookie *Curl_cookie_add(struct CookieInfo *, bool header, char *line,
char *domain);
char *domain, char *path);
struct CookieInfo *Curl_cookie_init(char *, struct CookieInfo *, bool);
struct Cookie *Curl_cookie_getlist(struct CookieInfo *, char *, char *, bool);


@@ -243,6 +243,10 @@ SOURCE=.\url.c
# End Source File
# Begin Source File
SOURCE=.\share.c
# End Source File
# Begin Source File
SOURCE=.\version.c
# End Source File
# End Group


@@ -200,6 +200,7 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption tag, ...)
long param_long = 0;
void *param_obj = NULL;
struct SessionHandle *data = curl;
CURLcode ret=CURLE_FAILED_INIT;
va_start(arg, tag);
@@ -213,20 +214,20 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption tag, ...)
if(tag < CURLOPTTYPE_OBJECTPOINT) {
/* This is a LONG type */
param_long = va_arg(arg, long);
Curl_setopt(data, tag, param_long);
ret = Curl_setopt(data, tag, param_long);
}
else if(tag < CURLOPTTYPE_FUNCTIONPOINT) {
/* This is a object pointer type */
param_obj = va_arg(arg, void *);
Curl_setopt(data, tag, param_obj);
ret = Curl_setopt(data, tag, param_obj);
}
else {
param_func = va_arg(arg, func_T );
Curl_setopt(data, tag, param_func);
ret = Curl_setopt(data, tag, param_func);
}
va_end(arg);
return CURLE_OK;
return ret;
}
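The fix above is simply propagating the inner setopt result through the varargs wrapper instead of hard-coding success. A minimal sketch of that dispatch pattern (all names, types and option ranges hypothetical, not libcurl's):

```c
#include <stdarg.h>
#include <stddef.h>
#include <assert.h>

/* Hypothetical option-type boundary, mimicking CURLOPTTYPE_OBJECTPOINT */
enum { OPTTYPE_LONG = 0, OPTTYPE_OBJECT = 10000 };

static int setopt_long(long v)    { return v < 0 ? 1 : 0; } /* 1 = error */
static int setopt_object(void *p) { return p ? 0 : 1; }

/* Varargs wrapper: capture and return the real result, not a fixed OK */
static int easy_setopt(int tag, ...)
{
  va_list arg;
  int ret;
  va_start(arg, tag);
  if(tag < OPTTYPE_OBJECT)
    ret = setopt_long(va_arg(arg, long));      /* long-typed option */
  else
    ret = setopt_object(va_arg(arg, void *));  /* pointer-typed option */
  va_end(arg);
  return ret; /* propagated, as in the patch */
}
```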
CURLcode curl_easy_perform(CURL *curl)


@@ -128,11 +128,8 @@ Content-Disposition: form-data; name="FILECONTENT"
#include "memdebug.h"
#endif
/* Length of the random boundary string. The risk of this being used
in binary data is very close to zero, 64^32 makes
6277101735386680763835789423207666416102355444464034512896
combinations... */
#define BOUNDARY_LENGTH 32
/* Length of the random boundary string. */
#define BOUNDARY_LENGTH 40
/* What kind of Content-Type to use on un-specified files with unrecognized
extensions. */
@@ -520,7 +517,7 @@ static const char * ContentTypeForFilename (const char *filename,
{".jpg", "image/jpeg"},
{".jpeg", "image/jpeg"},
{".txt", "text/plain"},
{".html", "text/plain"}
{".html", "text/html"}
};
if(prevtype)
@@ -1049,22 +1046,23 @@ char *Curl_FormBoundary(void)
the same form won't be identical */
int i;
static char table62[]=
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
static char table16[]="abcdef0123456789";
retstring = (char *)malloc(BOUNDARY_LENGTH);
retstring = (char *)malloc(BOUNDARY_LENGTH+1);
if(!retstring)
return NULL; /* failed */
srand(time(NULL)+randomizer++); /* seed */
strcpy(retstring, "curl"); /* bonus commercials 8*) */
strcpy(retstring, "----------------------------");
for(i=4; i<(BOUNDARY_LENGTH-1); i++) {
retstring[i] = table62[rand()%62];
}
retstring[BOUNDARY_LENGTH-1]=0; /* zero terminate */
for(i=strlen(retstring); i<BOUNDARY_LENGTH; i++)
retstring[i] = table16[rand()%16];
/* 28 dashes and 12 hexadecimal digits makes 16^12 (281474976710656)
combinations */
retstring[BOUNDARY_LENGTH]=0; /* zero terminate */
return retstring;
}
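The new boundary layout can be sketched on its own: 28 dashes followed by 12 random hex digits, with the zero byte stored past `BOUNDARY_LENGTH` (the off-by-one the `malloc(BOUNDARY_LENGTH+1)` change addresses):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define BOUNDARY_LENGTH 40

/* Sketch of the new Curl_FormBoundary(): 28 dashes then 12 random
   hexadecimal digits (16^12 possibilities). Caller free()s result. */
static char *form_boundary(void)
{
  static const char table16[] = "abcdef0123456789";
  char *s = malloc(BOUNDARY_LENGTH + 1); /* +1 for the terminating zero */
  size_t i;
  if(!s)
    return NULL;
  strcpy(s, "----------------------------"); /* 28 dashes */
  for(i = strlen(s); i < BOUNDARY_LENGTH; i++)
    s[i] = table16[rand() % 16];
  s[BOUNDARY_LENGTH] = '\0'; /* zero terminate past the 40 data bytes */
  return s;
}
```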

lib/ftp.c

@@ -158,6 +158,7 @@ static CURLcode AllowServerConnect(struct SessionHandle *data,
infof(data, "Connection accepted from server\n");
conn->secondarysocket = s;
Curl_nonblock(s, TRUE); /* enable non-blocking */
}
break;
}
@@ -237,7 +238,7 @@ CURLcode Curl_GetFTPResponse(ssize_t *nreadp, /* return number of bytes read */
if(!ftp->cache) {
readfd = rkeepfd; /* set every lap */
interval.tv_sec = timeout;
interval.tv_sec = 1; /* use 1 second timeout intervals */
interval.tv_usec = 0;
switch (select (sockfd+1, &readfd, NULL, NULL, &interval)) {
@@ -246,9 +247,10 @@ CURLcode Curl_GetFTPResponse(ssize_t *nreadp, /* return number of bytes read */
failf(data, "Transfer aborted due to select() error: %d", errno);
break;
case 0: /* timeout */
result = CURLE_OPERATION_TIMEDOUT;
failf(data, "Transfer aborted due to timeout");
break;
if(Curl_pgrsUpdate(conn))
return CURLE_ABORTED_BY_CALLBACK;
continue; /* just continue in our loop for the timeout duration */
default:
break;
}
@@ -726,7 +728,10 @@ CURLcode ftp_cwd(struct connectdata *conn, char *path)
if (result)
return result;
if (ftpcode != 250) {
/* According to RFC959, CWD is supposed to return 250 on success, but
there seem to be non-compliant FTP servers out there that return 200,
so we accept any '2xy' code here. */
if (ftpcode/100 != 2) {
failf(conn->data, "Couldn't cd to %s", path);
return CURLE_FTP_ACCESS_DENIED;
}
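The relaxed CWD check boils down to testing the reply-code class, which can be sketched as:

```c
#include <assert.h>

/* Sketch: accept any 2xy completion reply for CWD instead of
   requiring exactly 250, since some servers answer e.g. 200. */
static int ftp_cwd_ok(int ftpcode)
{
  return ftpcode/100 == 2;
}
```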
@@ -933,6 +938,7 @@ ftp_pasv_verbose(struct connectdata *conn,
# endif
# else
(void)hostent_buf; /* avoid compiler warning */
answer = gethostbyaddr((char *) &address, sizeof(address), AF_INET);
# endif
#else
@@ -961,7 +967,7 @@ ftp_pasv_verbose(struct connectdata *conn,
#else
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV;
#endif
port = 0; /* unused, prevent warning */
(void)port; /* prevent compiler warning */
if (getnameinfo(addr->ai_addr, addr->ai_addrlen,
nbuf, sizeof(nbuf), sbuf, sizeof(sbuf), niflags)) {
snprintf(nbuf, sizeof(nbuf), "?");
@@ -1076,7 +1082,8 @@ CURLcode ftp_use_port(struct connectdata *conn)
return CURLE_FTP_PORT_FAILED;
}
for (modep = (char **)mode; modep && *modep; modep++) {
for (modep = (char **)(data->set.ftp_use_eprt?&mode[0]:&mode[2]);
modep && *modep; modep++) {
int lprtaf, eprtaf;
int alen=0, plen=0;
@@ -1206,7 +1213,13 @@ CURLcode ftp_use_port(struct connectdata *conn)
bool sa_filled_in = FALSE;
if(data->set.ftpport) {
if(Curl_if2ip(data->set.ftpport, myhost, sizeof(myhost))) {
in_addr_t in;
/* First check if the given name is an IP address */
in=inet_addr(data->set.ftpport);
if((in == CURL_INADDR_NONE) &&
Curl_if2ip(data->set.ftpport, myhost, sizeof(myhost))) {
h = Curl_resolv(data, myhost, 0);
}
else {
@@ -1963,17 +1976,46 @@ CURLcode ftp_perform(struct connectdata *conn,
return result;
}
/* change directory first! */
if(ftp->dir && ftp->dir[0]) {
if ((result = ftp_cwd(conn, ftp->dir)) != CURLE_OK)
{
int i; /* counter for loop */
for (i=0; ftp->dirs[i]; i++) {
/* RFC 1738 says empty components should be respected too */
if ((result = ftp_cwd(conn, ftp->dirs[i])) != CURLE_OK)
return result;
}
}
/* Requested time of file? */
if(data->set.get_filetime && ftp->file) {
/* Requested time of file or time-depended transfer? */
if((data->set.get_filetime || data->set.timecondition) &&
ftp->file) {
result = ftp_getfiletime(conn, ftp->file);
if(result)
return result;
if(data->set.timecondition) {
if((data->info.filetime > 0) && (data->set.timevalue > 0)) {
switch(data->set.timecondition) {
case TIMECOND_IFMODSINCE:
default:
if(data->info.filetime < data->set.timevalue) {
infof(data, "The requested document is not new enough\n");
ftp->no_transfer = TRUE; /* mark this to not transfer data */
return CURLE_OK;
}
break;
case TIMECOND_IFUNMODSINCE:
if(data->info.filetime > data->set.timevalue) {
infof(data, "The requested document is not old enough\n");
ftp->no_transfer = TRUE; /* mark this to not transfer data */
return CURLE_OK;
}
break;
} /* switch */
}
else {
infof(data, "Skipping time comparison\n");
}
}
}
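The time-condition decision added above can be sketched as a pure helper (hypothetical name, mirroring the switch in `ftp_perform()`):

```c
#include <assert.h>

enum timecond { TIMECOND_IFMODSINCE = 1, TIMECOND_IFUNMODSINCE = 2 };

/* Sketch: returns 1 if the transfer should proceed, 0 if it should
   be skipped. With no usable times, transfer anyway ("Skipping time
   comparison" in the patch). */
static int should_transfer(enum timecond cond, long filetime, long condtime)
{
  if(filetime <= 0 || condtime <= 0)
    return 1; /* no comparison possible */
  if(cond == TIMECOND_IFUNMODSINCE)
    return filetime <= condtime; /* skip if the file is newer */
  return filetime >= condtime;   /* IFMODSINCE: skip if not new enough */
}
```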
/* If we have selected NOBODY and HEADER, it means that we only want file
@@ -2016,7 +2058,7 @@ CURLcode ftp_perform(struct connectdata *conn,
tm = localtime((unsigned long *)&data->info.filetime);
#endif
/* format: "Tue, 15 Nov 1994 12:45:26 GMT" */
strftime(buf, BUFSIZE-1, "Last-Modified: %a, %d %b %Y %H:%M:%S %Z\r\n",
strftime(buf, BUFSIZE-1, "Last-Modified: %a, %d %b %Y %H:%M:%S GMT\r\n",
tm);
result = Curl_client_write(data, CLIENTWRITE_BOTH, buf, 0);
if(result)
@@ -2061,34 +2103,70 @@ CURLcode ftp_perform(struct connectdata *conn,
*/
CURLcode Curl_ftp(struct connectdata *conn)
{
CURLcode retcode;
bool connected;
CURLcode retcode=CURLE_OK;
bool connected=0;
struct SessionHandle *data = conn->data;
struct FTP *ftp;
int dirlength=0; /* 0 forces strlen() */
char *slash_pos; /* position of the first '/' char in curpos */
char *cur_pos=conn->ppath; /* current position in ppath. point at the begin
of next path component */
int path_part=0;/* current path component */
/* the ftp struct is already inited in ftp_connect() */
ftp = conn->proto.ftp;
conn->size = -1; /* make sure this is unknown at this point */
/* We split the path into dir and file parts *before* we URLdecode
it */
ftp->file = strrchr(conn->ppath, '/');
if(ftp->file) {
if(ftp->file != conn->ppath)
dirlength=ftp->file-conn->ppath; /* don't count the trailing slash */
Curl_pgrsSetUploadCounter(data, 0);
Curl_pgrsSetDownloadCounter(data, 0);
Curl_pgrsSetUploadSize(data, 0);
Curl_pgrsSetDownloadSize(data, 0);
ftp->file++; /* point to the first letter in the file name part or
remain NULL */
}
else {
ftp->file = conn->ppath; /* there's only a file part */
/* fixed: initializing ftp->dirs[xxx] to NULL is done in
Curl_ftp_connect() */
/* parse the URL path into separate path components */
while((slash_pos=strchr(cur_pos, '/'))) {
/* seek out the next path component */
if (0 == slash_pos-cur_pos) /* empty path component, like "x//y" */
ftp->dirs[path_part] = strdup(""); /* empty string */
else
ftp->dirs[path_part] = curl_unescape(cur_pos,slash_pos-cur_pos);
if (!ftp->dirs[path_part]) { /* run out of memory ... */
failf(data, "no memory");
retcode = CURLE_OUT_OF_MEMORY;
}
else {
cur_pos = slash_pos + 1; /* jump to the rest of the string */
if(++path_part >= (CURL_MAX_FTP_DIRDEPTH-1)) {
/* too deep, we need the last entry to be kept NULL at all
times to signal end of list */
failf(data, "too deep dir hierarchy");
retcode = CURLE_URL_MALFORMAT;
}
}
if (retcode) {
int i;
for (i=0;i<path_part;i++) { /* free previous parts */
free(ftp->dirs[i]);
ftp->dirs[i]=NULL;
}
return retcode; /* failure */
}
}
ftp->file = cur_pos; /* the rest is the file name */
if(*ftp->file) {
ftp->file = curl_unescape(ftp->file, 0);
if(NULL == ftp->file) {
int i;
for (i=0;i<path_part;i++){
free(ftp->dirs[i]);
ftp->dirs[i]=NULL;
}
failf(data, "no memory");
return CURLE_OUT_OF_MEMORY;
}
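The new path handling splits the URL path into separate components, keeping empty ones per RFC 1738. A simplified sketch (no URL-unescaping, hypothetical names):

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

#define MAX_DIRDEPTH 100

/* Sketch: split "a//b/file" into dirs {"a", "", "b"} and file "file".
   Returns the number of dir components, or -1 on malloc failure. */
static int split_path(const char *path, char *dirs[], const char **file)
{
  int n = 0;
  const char *cur = path, *slash;
  while((slash = strchr(cur, '/')) != NULL && n < MAX_DIRDEPTH - 1) {
    size_t len = (size_t)(slash - cur);
    dirs[n] = malloc(len + 1);
    if(!dirs[n])
      return -1;
    memcpy(dirs[n], cur, len); /* empty components become "" */
    dirs[n][len] = '\0';
    n++;
    cur = slash + 1; /* jump past the slash */
  }
  *file = cur; /* the rest is the file name (possibly empty) */
  return n;
}
```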
@@ -2096,28 +2174,22 @@ CURLcode Curl_ftp(struct connectdata *conn)
else
ftp->file=NULL; /* instead of point to a zero byte, we make it a NULL
pointer */
ftp->urlpath = conn->ppath;
if(dirlength) {
ftp->dir = curl_unescape(ftp->urlpath, dirlength);
if(NULL == ftp->dir) {
if(ftp->file)
free(ftp->file);
failf(data, "no memory");
return CURLE_OUT_OF_MEMORY; /* failure */
}
}
else
ftp->dir = NULL;
retcode = ftp_perform(conn, &connected);
if(CURLE_OK == retcode) {
if(connected)
retcode = Curl_ftp_nextconnect(conn);
else
/* since we didn't connect now, we want do_more to get called */
conn->bits.do_more = TRUE;
else {
if(ftp->no_transfer) {
/* no data to transfer */
retcode=Curl_Transfer(conn, -1, -1, FALSE, NULL, -1, NULL);
}
else {
/* since we didn't connect now, we want do_more to get called */
conn->bits.do_more = TRUE;
}
}
}
return retcode;
@@ -2182,6 +2254,7 @@ CURLcode Curl_ftpsendf(struct connectdata *conn,
CURLcode Curl_ftp_disconnect(struct connectdata *conn)
{
struct FTP *ftp= conn->proto.ftp;
int i;
/* The FTP session may or may not have been allocated/setup at this point! */
if(ftp) {
@@ -2191,10 +2264,12 @@ CURLcode Curl_ftp_disconnect(struct connectdata *conn)
free(ftp->cache);
if(ftp->file)
free(ftp->file);
if(ftp->dir)
free(ftp->dir);
for (i=0;ftp->dirs[i];i++){
free(ftp->dirs[i]);
ftp->dirs[i]=NULL;
}
ftp->file = ftp->dir = NULL; /* zero */
ftp->file = NULL; /* zero */
}
return CURLE_OK;
}


@@ -38,6 +38,8 @@
/* Make this the last #include */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#else
#include <stdlib.h>
#endif
/*


@@ -239,7 +239,7 @@ struct Curl_dns_entry *Curl_resolv(struct SessionHandle *data,
will generate a signal and we will siglongjmp() from that here */
if(!data->set.no_signal && sigsetjmp(curl_jmpenv, 1)) {
/* this is coming from a siglongjmp() */
failf(data, "name lookup time-outed");
failf(data, "name lookup timed out");
return NULL;
}
#endif
@@ -532,10 +532,6 @@ static char *MakeIP(unsigned long num, char *addr, int addr_len)
return (addr);
}
#ifndef INADDR_NONE
#define INADDR_NONE (in_addr_t) ~0
#endif
static void hostcache_fixoffset(struct hostent *h, int offset)
{
int i=0;
@@ -573,7 +569,8 @@ static Curl_addrinfo *my_getaddrinfo(struct SessionHandle *data,
port=0; /* unused in IPv4 code */
ret = 0; /* to prevent the compiler warning */
if ( (in=inet_addr(hostname)) != INADDR_NONE ) {
in=inet_addr(hostname);
if (in != CURL_INADDR_NONE) {
struct in_addr *addrentry;
struct namebuf {
struct hostent hostentry;


@@ -79,4 +79,11 @@ int curl_getaddrinfo(char *hostname, char *service,
int line, const char *source);
#endif
#ifndef INADDR_NONE
#define CURL_INADDR_NONE (in_addr_t) ~0
#else
#define CURL_INADDR_NONE INADDR_NONE
#endif
#endif
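The `CURL_INADDR_NONE` define above is used to check whether a string is already a numeric IPv4 address before trying interface lookup via `if2ip()`. A sketch of that check on POSIX:

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <assert.h>

#ifndef INADDR_NONE
#define CURL_INADDR_NONE ((in_addr_t) ~0)
#else
#define CURL_INADDR_NONE INADDR_NONE
#endif

/* Sketch: treat the string as an IPv4 literal only if inet_addr()
   can parse it. (Known quirk: "255.255.255.255" parses to the same
   value as INADDR_NONE, a limitation of the inet_addr() interface.) */
static int is_ipv4_literal(const char *name)
{
  return inet_addr(name) != CURL_INADDR_NONE;
}
```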


@@ -491,16 +491,16 @@ CURLcode Curl_ConnectHTTPProxyTunnel(struct connectdata *conn,
/* a newline is CRLF in ftp-talk, so the CR is ignored as
the line isn't really terminated until the LF comes */
/* output debug output if that is requested */
if(data->set.verbose)
Curl_debug(data, CURLINFO_DATA_IN, line_start, perline);
if('\r' == line_start[0]) {
/* end of headers */
keepon=FALSE;
break; /* breaks out of loop, not switch */
}
/* output debug output if that is requested */
if(data->set.verbose)
Curl_debug(data, CURLINFO_HEADER_IN, line_start, perline);
if(2 == sscanf(line_start, "HTTP/1.%d %d",
&subversion,
&httperror)) {
@@ -626,6 +626,7 @@ CURLcode Curl_http(struct connectdata *conn)
char *ppath = conn->ppath; /* three previous function arguments */
char *host = conn->name;
const char *te = ""; /* transfer-encoding */
char *ptr;
if(!conn->proto.http) {
/* Only allocate this struct if we don't already have it! */
@@ -714,30 +715,30 @@ CURLcode Curl_http(struct connectdata *conn)
}
}
if(data->cookies) {
co = Curl_cookie_getlist(data->cookies,
host, ppath,
(bool)(conn->protocol&PROT_HTTPS?TRUE:FALSE));
}
if (data->change.proxy && *data->change.proxy &&
!data->set.tunnel_thru_httpproxy &&
!(conn->protocol&PROT_HTTPS)) {
/* The path sent to the proxy is in fact the entire URL */
ppath = data->change.url;
}
if(HTTPREQ_POST_FORM == data->set.httpreq) {
/* we must build the whole darned post sequence first, so that we have
a size of the whole shebang before we start to send it */
result = Curl_getFormData(&http->sendit, data->set.httppost,
&http->postsize);
if(CURLE_OK != result) {
/* Curl_getFormData() doesn't use failf() */
failf(data, "failed creating formpost data");
return result;
}
}
ptr = checkheaders(data, "Host:");
if(ptr) {
/* If we have a given custom Host: header, we extract the host name
in order to possibly use it for cookie reasons later on. */
char *start = ptr+strlen("Host:");
char *ptr;
while(*start && isspace((int)*start ))
start++;
ptr = start; /* start host-scanning here */
if(!checkheaders(data, "Host:")) {
/* scan through the string to find the end (space or colon) */
while(*ptr && !isspace((int)*ptr) && !(':'==*ptr))
ptr++;
if(ptr != start) {
int len=ptr-start;
conn->allocptr.cookiehost = malloc(len+1);
if(!conn->allocptr.cookiehost)
return CURLE_OUT_OF_MEMORY;
memcpy(conn->allocptr.cookiehost, start, len);
conn->allocptr.cookiehost[len]=0;
}
}
else {
/* if ptr_host is already set, it is almost OK since we only re-use
connections to the very same host and port, but when we use an HTTP
proxy we have a persistent connection and yet we must change the Host:
@@ -765,6 +766,32 @@ CURLcode Curl_http(struct connectdata *conn)
conn->remote_port);
}
if(data->cookies) {
co = Curl_cookie_getlist(data->cookies,
conn->allocptr.cookiehost?
conn->allocptr.cookiehost:host, ppath,
(bool)(conn->protocol&PROT_HTTPS?TRUE:FALSE));
}
if (data->change.proxy && *data->change.proxy &&
!data->set.tunnel_thru_httpproxy &&
!(conn->protocol&PROT_HTTPS)) {
/* The path sent to the proxy is in fact the entire URL */
ppath = data->change.url;
}
if(HTTPREQ_POST_FORM == data->set.httpreq) {
/* we must build the whole darned post sequence first, so that we have
a size of the whole shebang before we start to send it */
result = Curl_getFormData(&http->sendit, data->set.httppost,
&http->postsize);
if(CURLE_OK != result) {
/* Curl_getFormData() doesn't use failf() */
failf(data, "failed creating formpost data");
return result;
}
}
if(!checkheaders(data, "Pragma:"))
http->p_pragma = "Pragma: no-cache\r\n";


@@ -186,15 +186,22 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
break;
case DEFLATE:
/* update conn->keep.str to point to the chunk data. */
conn->keep.str = datap;
result = Curl_unencode_deflate_write(conn->data, &conn->keep, piece);
break;
case GZIP:
/* update conn->keep.str to point to the chunk data. */
conn->keep.str = datap;
result = Curl_unencode_gzip_write(conn->data, &conn->keep, piece);
break;
case COMPRESS:
default:
failf (conn->data,
"Unrecognized content encoding type. "
"libcurl understands `identity' and `deflate' "
"libcurl understands `identity', `deflate' and `gzip' "
"content encodings.");
return CHUNKE_BAD_ENCODING;
}


@@ -38,6 +38,7 @@
#include "transfer.h"
#include "url.h"
#include "connect.h"
#include "progress.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
@@ -328,6 +329,7 @@ CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles)
}
/* Connect. We get a connection identifier filled in. */
Curl_pgrsTime(easy->easy_handle, TIMER_STARTSINGLE);
easy->result = Curl_connect(easy->easy_handle, &easy->easy_conn);
/* after the connect has been sent off, go WAITCONNECT */
@@ -468,11 +470,12 @@ CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles)
}
if(CURLM_STATE_COMPLETED != easy->state) {
if(CURLE_OK != easy->result)
if(CURLE_OK != easy->result) {
/*
* If an error was returned, and we aren't in completed state now,
* then we go to completed and consider this transfer aborted. */
easy->state = CURLM_STATE_COMPLETED;
}
else
/* this one still lives! */
(*running_handles)++;


@@ -172,18 +172,20 @@ void Curl_pgrsSetUploadCounter(struct SessionHandle *data, double size)
void Curl_pgrsSetDownloadSize(struct SessionHandle *data, double size)
{
if(size > 0) {
data->progress.size_dl = size;
data->progress.size_dl = size;
if(size > 0)
data->progress.flags |= PGRS_DL_SIZE_KNOWN;
}
else
data->progress.flags &= ~PGRS_DL_SIZE_KNOWN;
}
void Curl_pgrsSetUploadSize(struct SessionHandle *data, double size)
{
if(size > 0) {
data->progress.size_ul = size;
data->progress.size_ul = size;
if(size > 0)
data->progress.flags |= PGRS_UL_SIZE_KNOWN;
}
else
data->progress.flags &= ~PGRS_UL_SIZE_KNOWN;
}
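The progress-size fix above records the size unconditionally but only marks it "known" when positive, and crucially clears the flag otherwise so a later unknown-size transfer on the same handle doesn't inherit it. A sketch with hypothetical struct and flag names:

```c
#include <assert.h>

#define PGRS_DL_SIZE_KNOWN (1<<0)

struct progress { double size_dl; int flags; };

/* Sketch of the corrected Curl_pgrsSetDownloadSize() logic */
static void set_download_size(struct progress *p, double size)
{
  p->size_dl = size;            /* always record the value */
  if(size > 0)
    p->flags |= PGRS_DL_SIZE_KNOWN;
  else
    p->flags &= ~PGRS_DL_SIZE_KNOWN; /* clear stale flag */
}
```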
/* EXAMPLE OUTPUT to follow:


@@ -46,6 +46,7 @@
#include <curl/curl.h>
#include "urldata.h"
#include "sendf.h"
#include "connect.h" /* for the Curl_ourerrno() proto */
#define _MPRINTF_REPLACE /* use the internal *printf() functions */
#include <curl/mprintf.h>
@@ -228,6 +229,7 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
ssize_t *written)
{
ssize_t bytes_written;
(void)conn;
#ifdef USE_SSLEAY
/* SSL_write() is said to return 'int' while write() and send() returns
@@ -246,7 +248,8 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
*written = 0;
return CURLE_OK;
case SSL_ERROR_SYSCALL:
failf(conn->data, "SSL_write() returned SYSCALL, errno = %d\n", errno);
failf(conn->data, "SSL_write() returned SYSCALL, errno = %d\n",
Curl_ourerrno());
return CURLE_SEND_ERROR;
}
/* a true error */
@@ -267,14 +270,15 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
bytes_written = swrite(sockfd, mem, len);
}
if(-1 == bytes_written) {
int err = Curl_ourerrno();
#ifdef WIN32
if(WSAEWOULDBLOCK == GetLastError())
if(WSAEWOULDBLOCK == err)
#else
/* As pointed out by Christophe Demory on March 11 2003, errno
may be EWOULDBLOCK or on some systems EAGAIN when it returned
due to its inability to send off data without blocking. We
therefor treat both error codes the same here */
if((EWOULDBLOCK == errno) || (EAGAIN == errno))
if((EWOULDBLOCK == err) || (EAGAIN == err) || (EINTR == err))
#endif
{
/* this is just a case of EWOULDBLOCK */
@@ -345,6 +349,7 @@ int Curl_read(struct connectdata *conn,
ssize_t *n)
{
ssize_t nread;
(void)conn;
*n=0; /* reset amount to zero */
#ifdef USE_SSLEAY
@@ -363,6 +368,17 @@ int Curl_read(struct connectdata *conn,
case SSL_ERROR_WANT_WRITE:
/* there's data pending, re-invoke SSL_read() */
return -1; /* basically EWOULDBLOCK */
case SSL_ERROR_SYSCALL:
/* openssl/ssl.h says "look at error stack/return value/errno" */
{
char error_buffer[120]; /* OpenSSL documents that this must be at least
120 bytes long. */
int sslerror = ERR_get_error();
failf(conn->data, "SSL read: %s, errno %d",
ERR_error_string(sslerror, error_buffer),
Curl_ourerrno() );
}
return CURLE_RECV_ERROR;
default:
failf(conn->data, "SSL read error: %d", err);
return CURLE_RECV_ERROR;
@@ -379,10 +395,11 @@ int Curl_read(struct connectdata *conn,
nread = sread (sockfd, buf, buffersize);
if(-1 == nread) {
int err = Curl_ourerrno();
#ifdef WIN32
if(WSAEWOULDBLOCK == GetLastError())
if(WSAEWOULDBLOCK == err)
#else
if(EWOULDBLOCK == errno)
if((EWOULDBLOCK == err) || (EAGAIN == err) || (EINTR == err))
#endif
return -1;
}
@@ -408,6 +425,7 @@ int Curl_debug(struct SessionHandle *data, curl_infotype type,
switch(type) {
case CURLINFO_TEXT:
case CURLINFO_HEADER_OUT:
case CURLINFO_HEADER_IN:
fwrite(s_infotype[type], 2, 1, data->set.err);
fwrite(ptr, size, 1, data->set.err);
break;


@@ -697,6 +697,7 @@ static int Curl_ASN1_UTCTIME_output(struct connectdata *conn,
#endif
/* ====================================================== */
#ifdef USE_SSLEAY
static int
cert_hostcheck(const char *certname, const char *hostname)
{
@@ -733,6 +734,7 @@ cert_hostcheck(const char *certname, const char *hostname)
}
return 0;
}
#endif
/* ====================================================== */
CURLcode
@@ -900,14 +902,34 @@ Curl_SSLConnect(struct connectdata *conn)
/* untreated error */
char error_buffer[120]; /* OpenSSL documents that this must be at least
120 bytes long. */
/* detail is already set to the SSL error above */
failf(data, "SSL: %s", ERR_error_string(detail, error_buffer));
/* OpenSSL 0.9.6 and later has a function named
ERR_error_string_n() that takes the size of the buffer as a third
argument, and we should possibly switch to using that one in the
future. */
return CURLE_SSL_CONNECT_ERROR;
detail = ERR_get_error(); /* Gets the earliest error code from the
thread's error queue and removes the
entry. */
switch(detail) {
case 0x1407E086:
/* 1407E086:
SSL routines:
SSL2_SET_CERTIFICATE:
certificate verify failed */
case 0x14090086:
/* 14090086:
SSL routines:
SSL3_GET_SERVER_CERTIFICATE:
certificate verify failed */
failf(data,
"SSL certificate problem, verify that the CA cert is OK");
return CURLE_SSL_CACERT;
default:
/* detail is already set to the SSL error above */
failf(data, "SSL: %s", ERR_error_string(detail, error_buffer));
/* OpenSSL 0.9.6 and later has a function named
ERR_error_string_n() that takes the size of the buffer as a third
argument, and we should possibly switch to using that one in the
future. */
return CURLE_SSL_CONNECT_ERROR;
}
}
}
else


@@ -568,7 +568,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
int len;
/* Find the first non-space letter */
for(start=k->p+14;
for(start=k->p+13;
*start && isspace((int)*start);
start++);
@@ -647,12 +647,12 @@ CURLcode Curl_readwrite(struct connectdata *conn,
else if (checkprefix("Content-Encoding:", k->p) &&
data->set.encoding) {
/*
* Process Content-Encoding. Look for the values: identity, gzip,
* defalte, compress, x-gzip and x-compress. x-gzip and
* Process Content-Encoding. Look for the values: identity,
* gzip, deflate, compress, x-gzip and x-compress. x-gzip and
* x-compress are the same as gzip and compress. (Sec 3.5 RFC
* 2616). zlib cannot handle compress, and gzip is not currently
* implemented. However, errors are handled further down when the
* response body is processed 08/27/02 jhrg */
* 2616). zlib cannot handle compress. However, errors are
* handled further down when the response body is processed
*/
char *start;
/* Find the first non-space letter */
@@ -686,7 +686,12 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
else if(data->cookies &&
checkprefix("Set-Cookie:", k->p)) {
Curl_cookie_add(data->cookies, TRUE, k->p+11, conn->name);
Curl_cookie_add(data->cookies, TRUE, k->p+11,
/* If there is a custom-set Host: name, use it
here, or else use real peer host name. */
conn->allocptr.cookiehost?
conn->allocptr.cookiehost:conn->name,
conn->ppath);
}
else if(checkprefix("Last-Modified:", k->p) &&
(data->set.timecondition || data->set.get_filetime) ) {
@@ -888,7 +893,9 @@ CURLcode Curl_readwrite(struct connectdata *conn,
if(k->badheader < HEADER_ALLBAD) {
/* This switch handles various content encodings. If there's an
error here, be sure to check over the almost identical code
in http_chunk.c. 08/29/02 jhrg */
in http_chunks.c. 08/29/02 jhrg
Make sure that ALL_CONTENT_ENCODINGS contains all the
encodings handled here. */
#ifdef HAVE_LIBZ
switch (k->content_encoding) {
case IDENTITY:
@@ -907,11 +914,15 @@ CURLcode Curl_readwrite(struct connectdata *conn,
result = Curl_unencode_deflate_write(data, k, nread);
break;
case GZIP: /* FIXME 08/27/02 jhrg */
case COMPRESS:
case GZIP:
/* Assume CLIENTWRITE_BODY; headers are not encoded. */
result = Curl_unencode_gzip_write(data, k, nread);
break;
case COMPRESS: /* FIXME 08/27/02 jhrg */
default:
failf (data, "Unrecognized content encoding type. "
"libcurl understands `identity' and `deflate' "
"libcurl understands `identity', `deflate' and `gzip' "
"content encodings.");
result = CURLE_BAD_CONTENT_ENCODING;
break;
@@ -940,7 +951,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
int i, si;
ssize_t bytes_written;
bool writedone=FALSE;
bool writedone=TRUE;
if ((k->bytecount == 0) && (k->writebytecount == 0))
Curl_pgrsTime(data, TIMER_STARTTRANSFER);


@@ -106,6 +106,7 @@
#include "escape.h"
#include "strtok.h"
#include "share.h"
#include "content_encoding.h"
/* And now for the protocols */
#include "ftp.h"
@@ -210,9 +211,11 @@ CURLcode Curl_close(struct SessionHandle *data)
free(data->state.headerbuff);
#ifndef CURL_DISABLE_HTTP
if(data->set.cookiejar)
if(data->set.cookiejar) {
/* we have a "destination" for all the cookies to get dumped to */
Curl_cookie_output(data->cookies, data->set.cookiejar);
if(Curl_cookie_output(data->cookies, data->set.cookiejar))
infof(data, "WARNING: failed to save cookies in given jar\n");
}
Curl_cookie_cleanup(data->cookies);
#endif
@@ -282,6 +285,7 @@ CURLcode Curl_open(struct SessionHandle **curl)
data->set.httpreq = HTTPREQ_GET; /* Default HTTP request */
data->set.ftp_use_epsv = TRUE; /* FTP defaults to EPSV operations */
data->set.ftp_use_eprt = TRUE; /* FTP defaults to EPRT operations */
data->set.dns_cache_timeout = 60; /* Timeout every 60 seconds by default */
@@ -632,10 +636,14 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option, ...)
data->set.ftp_use_port = data->set.ftpport?1:0;
break;
case CURLOPT_FTP_USE_EPRT:
data->set.ftp_use_eprt = va_arg(param, long)?TRUE:FALSE;
break;
case CURLOPT_FTP_USE_EPSV:
data->set.ftp_use_epsv = va_arg(param, long)?TRUE:FALSE;
break;
case CURLOPT_HTTPHEADER:
/*
* Set a list with HTTP headers to use (or replace internals with)
@@ -818,8 +826,16 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option, ...)
case CURLOPT_ENCODING:
/*
* String to use at the value of Accept-Encoding header. 08/28/02 jhrg
*
* If the encoding is set to "" we use an Accept-Encoding header that
* encompasses all the encodings we support.
* If the encoding is set to NULL we don't send an Accept-Encoding header
* and ignore an received Content-Encoding header.
*
*/
data->set.encoding = va_arg(param, char *);
if(data->set.encoding && !*data->set.encoding)
data->set.encoding = (char*)ALL_CONTENT_ENCODINGS;
break;
case CURLOPT_USERPWD:
@@ -1215,6 +1231,8 @@ CURLcode Curl_disconnect(struct connectdata *conn)
free(conn->allocptr.cookie);
if(conn->allocptr.host)
free(conn->allocptr.host);
if(conn->allocptr.cookiehost)
free(conn->allocptr.cookiehost);
if(conn->proxyhost)
free(conn->proxyhost);
@@ -1775,7 +1793,6 @@ static CURLcode CreateConnection(struct SessionHandle *data,
struct connectdata **in_connect)
{
char *tmp;
char *buf;
CURLcode result=CURLE_OK;
char resumerange[40]="";
struct connectdata *conn;
@@ -1921,16 +1938,20 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* Set default host and default path */
strcpy(conn->gname, "curl.haxx.se");
strcpy(conn->path, "/");
/* We need to search for '/' OR '?' - whichever comes first after host
* name but before the path. We need to change that to handle things like
* http://example.com?param= (notice the missing '/'). Later we'll insert
* that missing slash at the beginning of the path.
*/
if (2 > sscanf(data->change.url,
"%64[^\n:]://%512[^\n/]%[^\n]",
"%64[^\n:]://%512[^\n/?]%[^\n]",
conn->protostr, conn->gname, conn->path)) {
/*
* The URL was badly formatted, let's try the browser-style _without_
* protocol specified like 'http://'.
*/
if((1 > sscanf(data->change.url, "%512[^\n/]%[^\n]",
if((1 > sscanf(data->change.url, "%512[^\n/?]%[^\n]",
conn->gname, conn->path)) ) {
/*
* We couldn't even get this format.
@@ -1972,7 +1993,17 @@ static CURLcode CreateConnection(struct SessionHandle *data,
}
}
buf = data->state.buffer; /* this is our buffer */
/* If the URL is malformatted (missing a '/' after hostname before path) we
* insert a slash here. The only letter except '/' we accept to start a path
* is '?'.
*/
if(conn->path[0] == '?') {
/* We need this function to deal with overlapping memory areas. We know
that the memory area 'path' points to is 'urllen' bytes big and that
is bigger than the path. Use +1 to move the zero byte too. */
memmove(&conn->path[1], conn->path, strlen(conn->path)+1);
conn->path[0] = '/';
}
/*
* So if the URL was A://B/C,


@@ -93,6 +93,9 @@
of need. */
#define HEADERSIZE 256
/* Maximum number of dirs supported by libcurl in a FTP dir hierarchy */
#define CURL_MAX_FTP_DIRDEPTH 100
/* Just a convenience macro to get the larger value out of two given */
#ifndef MAX
#define MAX(x,y) ((x)>(y)?(x):(y))
@@ -193,7 +196,7 @@ struct FTP {
char *user; /* user name string */
char *passwd; /* password string */
char *urlpath; /* the originally given path part of the URL */
char *dir; /* decoded directory */
char *dirs[CURL_MAX_FTP_DIRDEPTH]; /* path components */
char *file; /* decoded file */
char *entrypath; /* the PWD reply when we logged on */
@@ -296,7 +299,7 @@ struct Curl_transfer_keeper {
#ifdef HAVE_LIBZ
bool zlib_init; /* True if zlib already initialized;
undefined if Content-Encdoing header. */
undefined if Content-Encoding header. */
z_stream z; /* State structure for zlib. */
#endif
@@ -435,6 +438,7 @@ struct connectdata {
char *ref; /* free later if not NULL! */
char *cookie; /* free later if not NULL! */
char *host; /* free later if not NULL */
char *cookiehost; /* free later if not NULL */
} allocptr;
char *newurl; /* This can only be set if a Location: was in the
@@ -755,6 +759,7 @@ struct UserDefined {
bool reuse_fresh; /* do not re-use an existing connection */
bool expect100header; /* TRUE if we added Expect: 100-continue */
bool ftp_use_epsv; /* if EPSV is to be attempted or not */
bool ftp_use_eprt; /* if EPRT is to be attempted or not */
bool no_signal; /* do not use any signal/alarm handler */
bool global_dns_cache;

ltmain.sh (4998 lines changed; diff suppressed because it is too large)

@@ -3,10 +3,8 @@
#
# formfind.pl
#
# This script gets a HTML page from the specified URL and presents form
# information you may need in order to machine-make a respond to the form.
#
# Written to use 'curl' for URL fetching.
# This script gets a HTML page on stdin and presents form information on
# stdout.
#
# Author: Daniel Stenberg <daniel@haxx.se>
# Version: 0.2 Nov 18, 2002


@@ -5,4 +5,4 @@ Makefile.in
curl
config.h
hugehelp.c
stamp-h2*
stamp-h*


@@ -345,7 +345,8 @@ static void help(void)
" -C/--continue-at <offset> Specify absolute resume offset\n"
" -d/--data <data> HTTP POST data (H)\n"
" --data-ascii <data> HTTP POST ASCII data (H)\n"
" --data-binary <data> HTTP POST binary data (H)\n"
" --data-binary <data> HTTP POST binary data (H)");
puts(" --disable-eprt Prevents curl from using EPRT or LPRT (F)\n"
" --disable-epsv Prevents curl from using EPSV (F)\n"
" -D/--dump-header <file> Write the headers to this file\n"
" --egd-file <file> EGD socket path for random data (SSL)\n"
@@ -359,11 +360,11 @@ static void help(void)
" --key-type <type> Specifies private key file type (DER/PEM/ENG) (HTTPS)\n"
" --pass <pass> Specifies passphrase for the private key (HTTPS)");
puts(" --engine <eng> Specifies the crypto engine to use (HTTPS)\n"
" --cacert <file> CA certifciate to verify peer against (SSL)\n"
" --cacert <file> CA certificate to verify peer against (SSL)\n"
" --capath <directory> CA directory (made using c_rehash) to verify\n"
" peer against (SSL)\n"
" --ciphers <list> What SSL ciphers to use (SSL)\n"
" --compressed Request a compressed response (using deflate).");
" --compressed Request a compressed response (using deflate or gzip).");
puts(" --connect-timeout <seconds> Maximum time allowed for connection\n"
" --create-dirs Create the necessary local directory hierarchy\n"
" --crlf Convert LF to CRLF in upload. Useful for MVS (OS/390)\n"
@@ -445,6 +446,7 @@ struct Configurable {
bool use_resume;
bool resume_from_current;
bool disable_epsv;
bool disable_eprt;
int resume_from;
char *postfields;
long postfieldsize;
@@ -1213,6 +1215,9 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
case 'e': /* --disable-epsv */
config->disable_epsv ^= TRUE;
break;
case 'f': /* --disable-eprt */
config->disable_eprt ^= TRUE;
break;
#ifdef USE_ENVIRONMENT
case 'f':
config->writeenv ^= TRUE;
@@ -2932,6 +2937,11 @@ operate(struct Configurable *config, int argc, char *argv[])
/* disable it */
curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, FALSE);
/* new in libcurl 7.10.5 */
if(config->disable_eprt)
/* disable it */
curl_easy_setopt(curl, CURLOPT_FTP_USE_EPRT, FALSE);
/* new in curl 7.9.7 */
if(config->trace_dump) {
curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace);
@@ -2942,7 +2952,7 @@ operate(struct Configurable *config, int argc, char *argv[])
/* new in curl 7.10 */
curl_easy_setopt(curl, CURLOPT_ENCODING,
(config->encoding) ? "deflate" : NULL);
(config->encoding) ? "" : NULL);
res = curl_easy_perform(curl);
@@ -2966,8 +2976,31 @@ operate(struct Configurable *config, int argc, char *argv[])
vms_show = VMSSTS_HIDE;
}
#else
if((res!=CURLE_OK) && config->showerror)
fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
if((res!=CURLE_OK) && config->showerror) {
if(CURLE_SSL_CACERT == res) {
fprintf(config->errors, "curl: (%d) %s\n\n", res, errorbuffer);
#define CURL_CA_CERT_ERRORMSG1 \
"More details here: http://curl.haxx.se/docs/sslcerts.html\n\n" \
"curl performs SSL certificate verification by default, using a \"bundle\"\n" \
" of Certificate Authority (CA) public keys (CA certs). The default\n" \
" bundle is named curl-ca-bundle.crt; you can specify an alternate file\n" \
" using the --cacert option.\n"
#define CURL_CA_CERT_ERRORMSG2 \
"If this HTTPS server uses a certificate signed by a CA represented in\n" \
" the bundle, the certificate verification probably failed due to a\n" \
" problem with the certificate (it might be expired, or the name might\n" \
" not match the domain name in the URL).\n" \
"If you'd like to turn off curl's verification of the certificate, use\n" \
" the -k (or --insecure) option.\n"
fprintf(config->errors, "%s%s",
CURL_CA_CERT_ERRORMSG1,
CURL_CA_CERT_ERRORMSG2 );
}
else
fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
}
#endif
if (outfile && !curl_strequal(outfile, "-") && outs.stream)


@@ -23,8 +23,6 @@
* $Id$
***************************************************************************/
#include <stdio.h>
#if !defined(WIN32) && defined(__WIN32__)
/* Borland fix */
#define WIN32
@@ -50,6 +48,15 @@
#endif
#endif
#ifdef MALLOCDEBUG
/* This is an ugly hack for MALLOCDEBUG conditions only. We need to include
the file here, since it might set the _FILE_OFFSET_BITS define, which must
be set BEFORE all normal system headers. */
#include "../lib/setup.h"
#endif
#include <stdio.h>
#ifndef OS
#define OS "unknown"
#endif


@@ -1,3 +1,3 @@
#define CURL_NAME "curl"
#define CURL_VERSION "7.10.4"
#define CURL_VERSION "7.10.5"
#define CURL_ID CURL_NAME " " CURL_VERSION " (" OS ") "


@@ -27,6 +27,10 @@ before comparing with the one actually received by the client
<size>
number to return on a ftp SIZE command (set to -1 to make this command fail)
</size>
<mdtm>
what to send back if the client sends a (FTP) MDTM command, set to -1 to
have it return that the file doesn't exist
</mdtm>
<cmd>
special purpose server-command to control its behavior *before* the
reply is sent


@@ -48,7 +48,8 @@ Debug:
Logs:
All logs are generated in the logs/ subdirctory (it is emtpied first
in the runtests.pl script)
in the runtests.pl script). Use runtests.pl -k to make the temporary files
to be kept after the test run.
Data:
All test-data are put in the data/ subdirctory. Each test is stored in the
@@ -69,12 +70,10 @@ TEST CASE NUMBERS
300 - 399 HTTPS
400 - 499 FTPS
... if we run out of test numbers for a particular protocol, then we need
to fix it.
Since 30-apr-2003, there's nothing in the system that requires us to keep
within these number series. Each test case now specify their own server
requirements, independent of test number.
TODO:
* Port old test cases to the new file format
* Make httpserver.pl work when we PUT without Content-Length:
* Add persistant connection support and test cases


@@ -19,4 +19,4 @@ test304 test39 test32 test128 test48 test306 \
test130 test131 test132 test133 test134 test135 test403 test305 \
test49 test50 test51 test52 test53 test54 test55 test56 \
test500 test501 test502 test503 test504 test136 test57 test137 test138 \
test58
test58 test139 test140 test141 test59 test60 test61 test142 test143 test62


@@ -20,6 +20,9 @@ Funny-head: yesyes
#
# Client-side
<client>
<server>
http
</server>
<name>
simple HTTP GET
</name>


@@ -12,6 +12,9 @@ blablabla
# Client-side
<client>
<server>
http
</server>
<name>
simple HTTP PUT from file
</name>


@@ -19,6 +19,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
#
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP dir list PASV
</name>


@@ -17,6 +17,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP dir list, PORT with specified IP
</name>


@@ -12,6 +12,9 @@ works
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP RETR PASV
</name>


@@ -12,6 +12,9 @@ works
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP RETR PORT with CWD
</name>
@@ -31,7 +34,8 @@ ftp://%HOSTIP:%FTPPORT/a/path/103 -P -
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD a/path
CWD a
CWD path
PORT 127,0,0,1,246,33
TYPE I
SIZE 103


@@ -7,6 +7,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP --head to get file size only
</name>
@@ -21,7 +24,8 @@ ftp://%HOSTIP:%FTPPORT/a/path/103 --head
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD a/path
CWD a
CWD path
MDTM 103
TYPE I
SIZE 103


@@ -12,6 +12,9 @@ works
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP user+password in URL and ASCII transfer
</name>


@@ -12,6 +12,9 @@ works
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP GET with type=A style ASCII URL using %20 codes
</name>
@@ -26,7 +29,9 @@ FTP GET with type=A style ASCII URL using %20 codes
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD /path with spaces/and things2
CWD
CWD path with spaces
CWD and things2
EPSV
TYPE A
SIZE 106


@@ -1,6 +1,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP PASV upload file
</name>


@@ -6,6 +6,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP PORT upload with CWD
</name>
@@ -29,7 +32,9 @@ Moooooooooooo
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD CWD/STOR/RETR
CWD CWD
CWD STOR
CWD RETR
PORT 127,0,0,1,5,109
TYPE I
STOR 108


@@ -6,6 +6,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP PASV upload append
</name>


@@ -36,6 +36,9 @@ If this is received, the location following worked
# Client-side
<client>
<server>
http
</server>
<name>
simple HTTP Location: following
</name>


@@ -11,6 +11,9 @@ but we emulate that
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download resume with set limit
</name>


@@ -7,6 +7,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download resume beyond file size
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP PASV upload resume
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed login: USER not valid
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed login: PASS not valid
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed PASV
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed PORT
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed TYPE
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed RETR
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed RETR with PORT
</name>


@@ -20,6 +20,9 @@ ink="#ffffff" vlink="#cccccc">
# Client-side
<client>
<server>
http
</server>
<name>
HTTP range support
</name>


@@ -12,6 +12,9 @@ works
# Client-side
<client>
<server>
ftp
</server>
<name>
ftp download with post-quote delete operation
</name>


@@ -12,6 +12,9 @@ works
# Client-side
<client>
<server>
ftp
</server>
<name>
ftp download with post- and pre-transfer delete operations
</name>


@@ -7,6 +7,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download resume with whole file already downloaded
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP upload resume with whole file already downloaded
</name>


@@ -7,6 +7,9 @@ we can still send data even if pwd fails!
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed PWD
</name>


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download, failed CWD
</name>
@@ -24,6 +27,6 @@ REPLY CWD 314 bluah you f00l!
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD path/to/file
CWD path
</protocol>
</verify>


@@ -7,6 +7,9 @@ this is file contents
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download with multiple replies at once in RETR
</name>
@@ -24,7 +27,8 @@ RETRWEIRDO
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD blalbla/lululul
CWD blalbla
CWD lululul
EPSV
TYPE I
SIZE 126


@@ -7,6 +7,9 @@ moooooooo
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP --disable-epsv
</name>
@@ -21,7 +24,9 @@ ftp://%HOSTIP:%FTPPORT/path/to/file/127 --disable-epsv
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD path/to/file
CWD path
CWD to
CWD file
PASV
TYPE I
SIZE 127


@@ -4,6 +4,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP upload with --crlf
</name>


@@ -11,6 +11,9 @@ blabla custom request result
# Client-side
<client>
<server>
http
</server>
<name>
HTTP custom request 'DELETE'
</name>


@@ -19,6 +19,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
#
# Client-side
<client requires=netrc_debug>
<server>
ftp
</server>
<name>
FTP (optional .netrc; no user/pass) dir list PASV
</name>


@@ -19,6 +19,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
#
# Client-side
<client requires=netrc_debug>
<server>
ftp
</server>
<name>
FTP (optional .netrc; user/no pass) dir list PASV
</name>


@@ -19,6 +19,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
#
# Client-side
<client requires=netrc_debug>
<server>
ftp
</server>
<name>
FTP (optional .netrc; user/passwd supplied) dir list PASV
</name>


@@ -19,6 +19,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
#
# Client-side
<client requires=netrc_debug>
<server>
ftp
</server>
<name>
FTP (compulsory .netrc; ignored user/passwd) dir list PASV
</name>


@@ -19,6 +19,9 @@ dr-xr-xr-x 5 0 1 512 Oct 1 1997 usr
#
# Client-side
<client requires=netrc_debug>
<server>
ftp
</server>
<name>
FTP (optional .netrc; programmatic user/passwd) dir list PASV
</name>


@@ -16,6 +16,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP retrieve a byte-range
</name>


@@ -7,6 +7,9 @@
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP with user and no password
</name>


@@ -7,6 +7,9 @@ this is file contents
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download without size in RETR string
</name>
@@ -24,7 +27,8 @@ RETRNOSIZE
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD blalbla/lululul
CWD blalbla
CWD lululul
EPSV
TYPE I
SIZE 137


@@ -10,6 +10,9 @@ this is file contents
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download without size in RETR string and no SIZE command
</name>
@@ -27,7 +30,8 @@ RETRNOSIZE
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD blalbla/lululul
CWD blalbla
CWD lululul
EPSV
TYPE I
SIZE 138

tests/data/test139 (new file, 37 lines)

@@ -0,0 +1,37 @@
# Server-side
<reply>
<data>
this is file contents
</data>
<mdtm>
213 20030409102659
</mdtm>
</reply>
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download a newer file with -z
</name>
<command>
ftp://%HOSTIP:%FTPPORT/blalbla/139 -z "1 jan 1989"
</command>
</test>
# Verify data after the test has been "shot"
<verify>
<protocol>
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD blalbla
MDTM 139
EPSV
TYPE I
SIZE 139
RETR 139
</protocol>
</verify>


@@ -10,6 +10,9 @@ Connection: close
# Client-side
<client>
<server>
http
</server>
<name>
HTTP HEAD with Connection: close
</name>

tests/data/test140 (new file, 32 lines)

@@ -0,0 +1,32 @@
# Server-side
<reply>
<data>
</data>
<mdtm>
213 20030409102659
</mdtm>
</reply>
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download file with -z, expected to not transfer
</name>
<command>
ftp://%HOSTIP:%FTPPORT/blalbla/140 -z "1 jan 2004"
</command>
</test>
# Verify data after the test has been "shot"
<verify>
<protocol>
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD blalbla
MDTM 140
</protocol>
</verify>

tests/data/test141 (new file, 37 lines)

@@ -0,0 +1,37 @@
# Server-side
<reply>
<data>
</data>
<mdtm>
213 20030409102659
</mdtm>
</reply>
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP download info with -I
</name>
<command>
ftp://%HOSTIP:%FTPPORT/blalbla/141 -I
</command>
</test>
# Verify data after the test has been "shot"
<verify>
<protocol>
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD blalbla
MDTM 141
TYPE I
SIZE 141
</protocol>
<stdout>
Last-Modified: Wed, 09 Apr 2003 10:26:59 GMT
</stdout>
</verify>

tests/data/test142 (new file, 30 lines)

@@ -0,0 +1,30 @@
# Server-side
<reply>
<data>
</data>
</reply>
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP URL with too deep (100+) dir hierarchy
</name>
<command>
ftp://%HOSTIP:%FTPPORT/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/b
</command>
</test>
# Verify data after the test has been "shot"
<verify>
<errorcode>
3
</errorcode>
<protocol>
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
</protocol>
</verify>

tests/data/test143 (new file, 34 lines)

@@ -0,0 +1,34 @@
# Server-side
<reply>
<data>
bla bla bla
</data>
</reply>
# Client-side
<client>
<server>
ftp
</server>
<name>
FTP URL with type=a
</name>
<command>
"ftp://%HOSTIP:%FTPPORT/%2ftmp/moo/143;type=a"
</command>
</test>
# Verify data after the test has been "shot"
<verify>
<protocol>
USER anonymous
PASS curl_by_daniel@haxx.se
PWD
CWD /tmp
CWD moo
EPSV
TYPE A
SIZE 143
RETR 143
</protocol>
</verify>


@@ -12,6 +12,9 @@ Repeated nonsense-headers
# Client-side
<client>
<server>
http
</server>
<name>
--write-out test
</name>

Some files were not shown because too many files have changed in this diff.