Compare commits


189 Commits

Author SHA1 Message Date
Daniel Stenberg
1841c8ee6a curl 7.7 beta 3 2001-03-14 11:25:44 +00:00
Daniel Stenberg
70793595fe removed the two unnecessary include files 2001-03-14 10:27:13 +00:00
Daniel Stenberg
28a8e1602d ssluse fixed, various win32 fixes 2001-03-14 10:21:52 +00:00
Daniel Stenberg
cce05b9138 Björn Stenberg corrected the silly '(void)data' usage when SSL is not
used
2001-03-14 10:15:42 +00:00
Daniel Stenberg
72a7fd4dc7 Jörn's updated file 2001-03-14 10:06:23 +00:00
Daniel Stenberg
9a6a476cf5 the URL escape/unescape functions are also public but undocumented 2001-03-14 08:59:34 +00:00
Daniel Stenberg
5d0efedd2d First Jörn's updates were applied, then
my take at removing the private functions from the list, then I renamed
the *str(n)equal functions...
2001-03-14 08:58:36 +00:00
Daniel Stenberg
a426818a78 no longer includes the curl/types.h and curl/easy.h include files
explicitly, as they're taken care of indirectly by curl/curl.h these
days.
2001-03-14 08:55:17 +00:00
Daniel Stenberg
bfe413d8bd increased the 'current' number for the interface 2001-03-14 08:54:18 +00:00
Daniel Stenberg
dbbd20646f Curl_str(n)equal renamed to curl_str(n)equal 2001-03-14 08:53:31 +00:00
Daniel Stenberg
b8fe4deb13 documented the undocumented public functions in libcurl 2001-03-14 08:51:04 +00:00
Daniel Stenberg
332a016e3c chunked bugfix, Jörn's fixes, the interface number increase 2001-03-14 08:49:11 +00:00
Daniel Stenberg
3738e4bdc0 The Curl_* prefixes are now changed for curl_* ones, as these two functions
are used externally and thus are public symbols.
2001-03-14 08:47:56 +00:00
Daniel Stenberg
3201d2dafa Jörn added "#define socklen_t int" 2001-03-14 08:28:54 +00:00
Daniel Stenberg
0a1e002ca4 Jörn fixed it to compile on win32 again 2001-03-14 08:28:19 +00:00
Daniel Stenberg
9195bb64d4 Jörn Hartroth added a set of files 2001-03-14 08:23:51 +00:00
Daniel Stenberg
11ee547a0e Jörn Hartroth fixed a bad #endif placement 2001-03-14 08:20:41 +00:00
Daniel Stenberg
147de35d41 re-added the default switch for weird states 2001-03-13 23:29:53 +00:00
Daniel Stenberg
e16e9b91ae removed the random seeding and persistent stuff, as both are already in
this version!
2001-03-13 22:31:56 +00:00
Daniel Stenberg
f9cde0646f Added a failf() error message when the chunked read returns failure 2001-03-13 22:20:14 +00:00
Daniel Stenberg
195233ed5c updated the chunked state-machine to deal with the trailing CRLF that comes
after the data part
2001-03-13 22:16:42 +00:00
Daniel Stenberg
048e654514 made 'X to Y' sequences not include X twice 2001-03-13 22:14:53 +00:00
Daniel Stenberg
dfbd45142d corrected the chunked format 2001-03-13 22:13:06 +00:00
Daniel Stenberg
ff681f7bfd 7.7 beta 2 fixes 2001-03-13 15:44:31 +00:00
Daniel Stenberg
60bbb64a81 EXTRA_DIST got too long, I shortened it now but we have to do something
else as it will grow a lot more...
2001-03-13 13:31:14 +00:00
Daniel Stenberg
c622f2bb4e failf() now respects the mute flag 2001-03-13 13:22:58 +00:00
Daniel Stenberg
cd59f13da6 Guenole Bescon's bug found on march 8 is added 2001-03-13 13:14:21 +00:00
Daniel Stenberg
11d718bf52 exchanged I and me to we and us in a lot of places
updated for persistent connections and 7.7
2001-03-13 11:47:30 +00:00
Daniel Stenberg
8e8846d876 Added test case 37, HTTP GET with name+password in the URL 2001-03-13 09:44:09 +00:00
Daniel Stenberg
7d562bb685 a whole new section on persistent connections and how they're treated
internally
2001-03-13 08:16:54 +00:00
Daniel Stenberg
20ddd35669 we speak HTTP 1.1 now
more bragging about the portability
2001-03-13 08:16:25 +00:00
Daniel Stenberg
063f88cd14 close policies 2001-03-13 07:59:19 +00:00
Daniel Stenberg
87b0b7cab9 initial close policy support 2001-03-13 07:54:18 +00:00
Daniel Stenberg
70d0d9d4da Added 'created' to the connectdata struct to hold the creation date, to
be used for the close policy decision
2001-03-13 07:53:59 +00:00
Daniel Stenberg
4ae3bd71ea Curl_tvnow is now properly declared with (void) 2001-03-13 07:53:06 +00:00
Daniel Stenberg
a9390665b8 curl_getinfo is removed, not a public function 2001-03-13 07:46:19 +00:00
Daniel Stenberg
fb7a6e3423 added --random-file and --egd-file to the command line client 2001-03-12 16:02:29 +00:00
Daniel Stenberg
cc99e3f7de Added the two new seeding options 2001-03-12 15:52:18 +00:00
Daniel Stenberg
e6b40bb6ac two new random seed options for the ssl config struct 2001-03-12 15:47:41 +00:00
Daniel Stenberg
f2fd1b8856 two new random seed options: CURLOPT_RANDOM_FILE and CURLOPT_EGDSOCKET 2001-03-12 15:47:17 +00:00
Daniel Stenberg
cb4efcf275 better chunked error detection 2001-03-12 15:29:04 +00:00
Daniel Stenberg
56a27d608a Added test case 36:
[HTTP GET with badly formatted chunked Transfer-Encoding]
2001-03-12 15:27:01 +00:00
Daniel Stenberg
46c9075eab updated the comment for the chunked reading 2001-03-12 15:21:11 +00:00
Daniel Stenberg
d95fa648e9 made it return illegal hex in case no hexadecimal digit was read when at
least one was expected
2001-03-12 15:20:35 +00:00
Daniel Stenberg
563ad213dc added an error code for illegal hex values in the chunked stream 2001-03-12 15:20:02 +00:00
Daniel Stenberg
0121d7d731 Added new libcurl options in include/curl/curl.h, they're documented in
curl_easy_setopt.3 and they're partly implemented in lib/url.c

Slowly, we're getting there...
2001-03-12 15:11:38 +00:00
Daniel Stenberg
8495fac1c5 Added options for the persistent support, they're also documented in
curl_easy_setopt.3 now
2001-03-12 15:06:29 +00:00
Daniel Stenberg
38c349f751 support for a few new libcurl 7.7 CURLOPT_* options added 2001-03-12 15:05:54 +00:00
Daniel Stenberg
542df800ab Added four new options that come with the new persistent support:
CURLOPT_MAXCONNECTS, CURLOPT_CLOSEPOLICY, CURLOPT_FRESH_CONNECT and
CURLOPT_FORBID_REUSE
2001-03-12 14:54:00 +00:00
Daniel Stenberg
3e88b1cac5 the client is adjusted to work with persistent curl handles, and *gee* it
seems to be working!!!
2001-03-12 13:59:38 +00:00
Daniel Stenberg
d774b10afb Added infof() calls for persistent connection info, we are very likely to
need these at least for debugging 7.7 and probably later as well...
2001-03-12 13:58:03 +00:00
Daniel Stenberg
b449b94393 moved the libcurl init call 2001-03-12 13:57:02 +00:00
Daniel Stenberg
a6cb9b08b2 persistent updates 2001-03-12 13:55:06 +00:00
Daniel Stenberg
440a3101d0 added a note about persistent connections through HTTP proxies 2001-03-12 13:54:46 +00:00
Daniel Stenberg
9778a5356b Added some persistent notes 2001-03-12 13:54:10 +00:00
Daniel Stenberg
de7dcdbc54 modified to make the curl client with persistent connection support
behave correctly
2001-03-12 13:47:07 +00:00
Daniel Stenberg
070968abbc include the failed test case numbers in the end summary 2001-03-12 13:46:23 +00:00
Daniel Stenberg
e97fc2aab5 Added description of the new test case ranges support 2001-03-12 12:58:57 +00:00
Daniel Stenberg
a23ac24192 made it support test case ranges on the command line, specified as
"X to Y", where X is smaller than Y.
2001-03-12 12:58:30 +00:00
Daniel Stenberg
9ee14644a7 adjusted to work with the HTTP 1.1-speaking libcurl 2001-03-12 12:45:12 +00:00
Daniel Stenberg
c576e114b9 output the protocol data to stderr when verbose is on 2001-03-12 12:44:44 +00:00
Daniel Stenberg
639a7982ba server problems,
libcurl *works* persistently over HTTP proxy!!!!
2001-03-12 10:18:01 +00:00
Daniel Stenberg
5bbe189420 modified Curl_disconnect() so that it unlinks itself from the data struct,
it saves me from more mistakes when the connectindex is -1 ... also, there's
no point in having its parent do it as all parents would do it anyway.
2001-03-12 10:13:42 +00:00
Daniel Stenberg
93ff159e32 split up the big printf() into several ones to never use strings longer
than 509 letters (as newer gcc warns on with -Wall)
2001-03-12 09:47:23 +00:00
Daniel Stenberg
8eb8a0a8e4 bugfix: don't use the connectindex if it is -1 2001-03-12 09:44:57 +00:00
Daniel Stenberg
a4af638867 added persistent connection details 2001-03-12 09:44:08 +00:00
Daniel Stenberg
75a9a87ec2 replaced I and my with we and us 2001-03-12 09:43:43 +00:00
Daniel Stenberg
b5ba011110 updated 2001-03-12 09:42:22 +00:00
Daniel Stenberg
e9b763ff05 use the new name and hostname even though an old connection is reused, since
we can re-use a proxy connection that actually has different host names on
the same connection
2001-03-09 16:50:08 +00:00
Daniel Stenberg
ac0bad2433 remake Host: for each connection and it'll work with proxies too 2001-03-09 16:48:18 +00:00
Daniel Stenberg
67d5c0a970 for HTTP/1.0 we default to non keep-alive connections, but when we get a
1.0-reply from a proxy we use and the Proxy-Connection: keep-alive header
is used, we switch it on and live happily ever after
2001-03-09 16:02:59 +00:00
Daniel Stenberg
580896d615 Added httpversion to the progress struct, we do read it, we can just as well
store it.
2001-03-09 15:58:36 +00:00
Daniel Stenberg
11693c0faa the socklen_t check is more involved now, but works on linux at least 2001-03-09 15:38:59 +00:00
Daniel Stenberg
26cd8eda4a Added socklen_t 2001-03-09 15:24:33 +00:00
Daniel Stenberg
8cd3f44040 added a check for socklen_t
removed the tiny/Makefile that was added accidentally before
2001-03-09 15:21:00 +00:00
Daniel Stenberg
2b30bfc349 all comments for the former public "low level" interface have been removed
since they were out-of-date and not correct anymore.

moved around some struct fields
2001-03-09 15:19:42 +00:00
Daniel Stenberg
8ec4dba599 removed handles and states from the main structs
renamed prefixes from curl_ to Curl_
made persistant connections work with http proxies (at least partly)
2001-03-09 15:18:25 +00:00
Daniel Stenberg
1efec6572e curl_transfer became Curl_perform() to better match the public name and
use the correct prefix
2001-03-09 15:17:09 +00:00
Daniel Stenberg
781dd7a9bf prefix changes curl_ to Curl_
made it work (partly) with persistant connections for HTTP/1.0 replies
moved the 'newurl' struct field for Location: to the connectdata struct
2001-03-09 15:16:28 +00:00
Daniel Stenberg
beb8761b22 #include <string.h> removed a warning 2001-03-09 15:14:51 +00:00
Daniel Stenberg
071c7de9fe removed curl_read() and curl_write() - they weren't used and the public
"low leve" interface is dumped
2001-03-09 15:14:22 +00:00
Daniel Stenberg
3e7ebcd051 uses socklen_t now 2001-03-09 15:13:34 +00:00
Daniel Stenberg
c67952fc5c curl_ prefix modified to Curl_ 2001-03-09 15:13:11 +00:00
Daniel Stenberg
7d7c24f915 accept() and getsockname() now use socklen_t types, as that was just added
to configure
2001-03-09 15:12:22 +00:00
Daniel Stenberg
0dc8c4d451 use unsigned int hex to receive the hex digit in, caused a warning with
-Wall and a new gcc
2001-03-09 15:11:39 +00:00
Daniel Stenberg
9cf4434ae2 Modified to use Curl_* functions instead of curl_* ones 2001-03-09 15:10:58 +00:00
Daniel Stenberg
8ccd8b6dbc only generate maximum 509 characters in each string 2001-03-09 13:11:28 +00:00
Daniel Stenberg
b4f70aa2c8 version 7.7-beta1 2001-03-08 12:35:51 +00:00
Daniel Stenberg
f54a282ccc persistent connection adjustments 2001-03-08 12:32:03 +00:00
Daniel Stenberg
2a11bdc216 HTTP HEAD tests 2001-03-08 10:39:36 +00:00
Daniel Stenberg
5cd4c3ed24 return from transfer when all headers have been received and no body is expected,
as is the case when doing HEAD requests
2001-03-08 10:32:27 +00:00
Daniel Stenberg
147a673063 updated for persistent connections 2001-03-08 09:25:09 +00:00
Daniel Stenberg
9ce5827fc1 made it split the version number on '-' too, so that 7.7-blabla makes a
better version number define in the header file
2001-03-08 09:23:11 +00:00
Daniel Stenberg
97f1c93674 added lots of numbers for the error codes as they're often printed
and used
2001-03-08 09:04:43 +00:00
Daniel Stenberg
e61ceaf1bd clarified the 0001-files use a bit, I couldn't understand it myself! :-) 2001-03-08 08:33:17 +00:00
Daniel Stenberg
1118612249 Added test #34 - HTTP GET with chunked Transfer-Encoding 2001-03-08 08:30:35 +00:00
Daniel Stenberg
a23db7b7c7 "Transfer-Encoding: chunked" support added 2001-03-07 23:51:41 +00:00
Daniel Stenberg
f6b6dff46a added the http_chunks files 2001-03-07 23:50:00 +00:00
Daniel Stenberg
55b8ceac18 chunked transfer encoding support 2001-03-07 23:28:22 +00:00
Daniel Stenberg
bcf448ee32 connection timeout is in for 7.7 2001-03-07 23:24:23 +00:00
Daniel Stenberg
91e4da7ddb initial chunked transfer-encoding support 2001-03-07 17:12:12 +00:00
Daniel Stenberg
2873c18132 removed compiler warning if HAVE_RAND_STATUS is false 2001-03-07 17:08:20 +00:00
Daniel Stenberg
5dd0a8a63e Added persistent connections blurb even if it doesn't really work yet... 2001-03-06 14:37:37 +00:00
Daniel Stenberg
2103dc41f5 cleaned up for the 7.7 fixes 2001-03-06 12:50:42 +00:00
Daniel Stenberg
2ef13230cb new seeding stuff as mentioned by Albert Chin 2001-03-06 00:04:58 +00:00
Daniel Stenberg
9479ac6dda Added a persistent connection example 2001-03-05 16:56:10 +00:00
Daniel Stenberg
4e878eae79 updated to libcurl 7.7 conditions 2001-03-05 15:51:34 +00:00
Daniel Stenberg
1e8e90a220 mucho updated with new 7.7 concepts 2001-03-05 15:38:06 +00:00
Daniel Stenberg
fe95c7dc34 removed an incorrect comment 2001-03-05 14:52:23 +00:00
Daniel Stenberg
6dae34d5da all test cases run OK now (again) 2001-03-05 14:13:15 +00:00
Daniel Stenberg
36c621c9df more details on debugging with the test suite 2001-03-05 14:08:22 +00:00
Daniel Stenberg
1717963e3d show the ftp server invoke line when -d is used 2001-03-05 14:03:48 +00:00
Daniel Stenberg
4646a1ffa9 talks more on verbose 2001-03-05 14:03:20 +00:00
Daniel Stenberg
0cb4eba002 free the struct on done 2001-03-05 14:01:13 +00:00
Daniel Stenberg
5eba359b5d telnet without any static variables 2001-03-05 13:59:43 +00:00
Daniel Stenberg
07ce7539a8 set download size properly for HTTP downloads 2001-03-05 13:40:31 +00:00
Daniel Stenberg
c21f848c1c enable persistent connections by default 2001-03-05 13:40:08 +00:00
Daniel Stenberg
84e94fda8b remade FILE:// support to look more as the other protocols 2001-03-05 13:39:01 +00:00
Daniel Stenberg
ebd6897b10 runtests -g explained 2001-03-04 18:11:25 +00:00
Daniel Stenberg
5ab8a9d32f persistent support protocol updates 2001-03-04 18:07:13 +00:00
Daniel Stenberg
cf8704ccdf 7.7 alpha 2 commit 2001-03-04 16:34:20 +00:00
Daniel Stenberg
5543c2f11f Added include of easy.h to enable libcurl-using programs to *only* have to
include <curl/curl.h>
2001-03-04 15:32:44 +00:00
Daniel Stenberg
90ac37a683 Curl_http() could crash on connection re-use 2001-03-04 15:25:54 +00:00
Daniel Stenberg
dd893fd8a4 ipv6 fix for the 'port' no longer in urldata 2001-03-03 17:50:01 +00:00
Daniel Stenberg
834f079918 fixed for persistent stuff 2001-03-03 16:28:59 +00:00
Daniel Stenberg
2665c763df latest 2001-03-02 15:38:06 +00:00
Daniel Stenberg
d1cfbd51b5 remade the port number stuff so that following locations work and doing
intermixed HTTP and FTP persistent connections also work!
2001-03-02 15:34:15 +00:00
Daniel Stenberg
a3ba6b7a6a Added the disconnect proto 2001-03-02 07:44:22 +00:00
Daniel Stenberg
415d2e7cb7 removed the slist -functions from here
added the Curl_ftp_disconnect function for FTP-specific disconnects
2001-03-02 07:44:05 +00:00
Daniel Stenberg
af4451ec26 improved connections 2001-03-02 07:43:20 +00:00
Daniel Stenberg
7c6562683a extending connectdata 2001-03-02 07:42:35 +00:00
Daniel Stenberg
b6fa2f882c moved the slist-functions here from FTP since they're more generic than simply
for FTP-stuff
2001-03-02 07:42:11 +00:00
Daniel Stenberg
b6c5da337a strdup() takes a const char * now 2001-03-02 07:41:40 +00:00
Daniel Stenberg
9bc24e4876 cleanup better when connects fail 2001-02-28 14:03:46 +00:00
Daniel Stenberg
4af55809e4 added some infof() calls for persistent info 2001-02-22 23:51:17 +00:00
Daniel Stenberg
9c63fcf210 we only allocate the HTTP struct if we need to 2001-02-22 23:41:15 +00:00
Daniel Stenberg
1f17fb5f89 Now persistent connection download works thanks to the Content-Length
being taken into account
2001-02-22 23:32:41 +00:00
Daniel Stenberg
584dbffe60 moved the dynamically set pointers to the connectdata struct 2001-02-22 23:32:02 +00:00
Daniel Stenberg
1c6f6f6972 Douglas R. Horner's corrections applied 2001-02-22 22:33:49 +00:00
Daniel Stenberg
da06a6e7e3 IPv6-adjustments 2001-02-21 17:15:09 +00:00
Daniel Stenberg
46e0937263 corrected memory leaks when re-using connections 2001-02-20 17:46:35 +00:00
Daniel Stenberg
a1d6ad2610 multiple connection support initial commit 2001-02-20 17:35:51 +00:00
Daniel Stenberg
5f3d63ed5b bugfix 2001-02-20 13:58:56 +00:00
Daniel Stenberg
63b5748eb6 -g runs the specified test(s) with gdb! 2001-02-20 13:58:39 +00:00
Daniel Stenberg
e2590430c5 removed the #ifdef 2001-02-20 13:57:50 +00:00
Daniel Stenberg
ada9bc2b24 win32sockets.c is now added with winsock init/cleanup example functions 2001-02-20 13:56:38 +00:00
Daniel Stenberg
43da41e73e Added three tiny PHP examples 2001-02-19 13:39:21 +00:00
Daniel Stenberg
720fa45b56 blurb about different languages and environments added 2001-02-19 13:38:29 +00:00
Daniel Stenberg
7de874c438 just a few PHP/curl examples 2001-02-19 13:38:05 +00:00
Daniel Stenberg
2078c1a01a added two VC++ files for project stuff 2001-02-19 09:29:40 +00:00
Daniel Stenberg
f7a8909372 Made CURLOPT_POST no longer necessary when CURLOPT_POSTFIELDS is used 2001-02-19 09:29:19 +00:00
Daniel Stenberg
250df30e64 Moved a bunch of prototypes from curl.h here, they're no longer public and
I merely stuffed them here before I decide where they belong and if they
are to remain at all
2001-02-19 09:28:10 +00:00
Daniel Stenberg
b887cf7521 removed a bunch of "low level" functions that were never used and are about
to never become reality either
2001-02-19 09:27:12 +00:00
Daniel Stenberg
630e932091 MS VC++ stuff 2001-02-19 09:26:29 +00:00
Daniel Stenberg
cdabd67aa9 Bob Schader updated this 2001-02-19 09:26:01 +00:00
Daniel Stenberg
42e4f9d776 added stuff to the mailing list chapter 2001-02-19 09:25:18 +00:00
Daniel Stenberg
c111033595 removed --continue task (done)
added URL to the NTLM task
2001-02-16 13:41:34 +00:00
Daniel Stenberg
26d1aaccdf 2.2 - rephrased 2001-02-16 13:41:09 +00:00
Daniel Stenberg
ce95d2020f better english timeouted => timed out, as suggested by Larry Fahnoe 2001-02-13 21:57:04 +00:00
Daniel Stenberg
948c3b3aa9 7.6.1 commit 2001-02-13 13:37:14 +00:00
Daniel Stenberg
a140e5311d moved the protocol-specific free to allow easier multiple transfers 2001-02-13 13:34:16 +00:00
Daniel Stenberg
7686ac3f2c ftp response fix, netrc fix for non-http/ftp, https put research 2001-02-12 13:20:04 +00:00
Daniel Stenberg
54778134e4 corrected the prototype 2001-02-12 13:19:09 +00:00
Daniel Stenberg
c59baa06f0 Added 3.10 and a few minor updates 2001-02-12 10:05:09 +00:00
Daniel Stenberg
c107303ade very minor indentation fix 2001-02-12 08:22:19 +00:00
Daniel Stenberg
21b05afc99 removed getenv.h from the package as it was unused 2001-02-12 08:21:45 +00:00
Daniel Stenberg
eebcf7d4f5 Not used anymore 2001-02-09 07:33:58 +00:00
Daniel Stenberg
8d169dfadd Added a failf() call in the error-check just added 2001-02-09 07:14:28 +00:00
Daniel Stenberg
b12e334d83 if netrc is parsed and our host was found in there, set data->bits.user_passwd
unconditionally!
2001-02-08 13:53:13 +00:00
Daniel Stenberg
7e36c4437e today's FTP response check fix 2001-02-08 13:52:38 +00:00
Daniel Stenberg
3c7a80a275 postit.c was added as a HTML form file upload example 2001-02-08 08:26:54 +00:00
Daniel Stenberg
61e2a8108b 7.6.1-pre3 2001-02-07 09:49:06 +00:00
Daniel Stenberg
abb14de7e0 GetLine() didn't properly act on -1 lengths returned from Curl_read() 2001-02-07 09:31:03 +00:00
Daniel Stenberg
ccd57e58f6 Added #define ssize_t int since ssize_t doesn't seem to exist in normal
win32 systems
2001-02-07 09:23:54 +00:00
Daniel Stenberg
58d70db92e no longer #includes "getenv.h" 2001-02-07 08:36:23 +00:00
Daniel Stenberg
09f6fc22ed silly me, corrected the strlcat() to compile 2001-02-06 09:12:39 +00:00
Daniel Stenberg
833ce37cb9 new openbsd inspired implementation of strlcat() 2001-02-06 09:08:24 +00:00
Daniel Stenberg
07e7018564 nntp@iname.com's suggested fix to set the libpath 2001-02-06 07:14:44 +00:00
Daniel Stenberg
db70cd28b3 adjusted the IPv6 stuff to compile and build on Linux as well 2001-02-05 23:35:44 +00:00
Daniel Stenberg
f6e2bfd464 Jun-ichiro itojun Hagino's IPv6 adjustments 2001-02-05 23:04:44 +00:00
Daniel Stenberg
1ae5dab8fb Robert Weaver's VC experiences 2001-02-05 22:35:55 +00:00
Daniel Stenberg
c6355e6a43 Added a telnet section 2001-02-05 22:35:21 +00:00
Daniel Stenberg
7d26eb61fe Added a few more configure option explanations 2001-02-05 10:24:12 +00:00
Daniel Stenberg
8613ce377f the new getinfo() stuff and the cygwin patch 2001-02-04 20:10:52 +00:00
Daniel Stenberg
d6b94488a1 Added blurb about the win32 thing that prevents a DLL from using a pointer
passed to it from user-space!
2001-02-04 20:10:02 +00:00
Daniel Stenberg
5d7b32d09f extended 5.5 2001-02-04 20:08:42 +00:00
Daniel Stenberg
ed16d30ea8 CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD documented 2001-02-04 20:07:53 +00:00
Daniel Stenberg
6f7c70fbbc CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD were
added as suggested by Bob Schader
2001-02-04 20:03:30 +00:00
Daniel Stenberg
9ab5d30e3b Ingo Ralf Blum made it compile with the newest cygwin 2001-02-04 19:00:27 +00:00
165 changed files with 5527 additions and 2550 deletions

CHANGES

@@ -7,10 +7,261 @@
History of Changes
Daniel (14 March 2001)
- Björn Stenberg provided similar fixes as Jörn did and some additional patches
for non-SSL compiles.
- I increased the interface number for libcurl as I've removed the low level
functions from the interface. I also took this opportunity to rename the
Curl_strequal function to curl_strequal and Curl_strnequal to
curl_strnequal, as they're public libcurl functions (even if they're still
undocumented).
This means that older programs will not be able to use the new libcurl as
just a drop-in replacement.
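For illustration, a minimal sketch (not from the changelog itself) of how an
application would call the renamed, now-public comparison functions; it assumes
they are declared in <curl/curl.h> as described above:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* case-insensitive comparisons; a non-zero return means "equal" */
    if(curl_strequal("Content-Length", "content-length"))
      printf("full match\n");
    if(curl_strnequal("Transfer-Encoding: chunked", "transfer-encoding", 17))
      printf("prefix match\n");
    return 0;
  }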
- Jörn Hartroth updated stuff for win32 compiles:
o config-win32.h was fixed for socklen_t (see the sketch after this list)
o lib/ssluse.c had a bad #endif placement
o lib/file.c was made to compile on win32 again
o lib/Makefile.m32 was updated with the new files
o lib/libcurl.def matches the current interface state
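For illustration, a minimal C sketch of why the socklen_t fix matters: accept()
takes a socklen_t pointer for the address length, so code like the following
compiles whether the system provides the type or a config header defines it to
int (the function name here is made up):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  static int accept_client(int listen_fd)
  {
    struct sockaddr_in addr;
    socklen_t addrlen = sizeof(addr); /* plain 'int' where the define kicks in */
    return accept(listen_fd, (struct sockaddr *)&addr, &addrlen);
  }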
Daniel (13 March 2001)
- It only took an hour or so before Jörn Hartroth found a problem in the
chunked transfer-encoding. Given his fine example-site, I could easily spot
the problem and when I re-read the spec (the part I have pasted in the top
of the http_chunks.h file), I realized I had made my state-machine slightly
wrong and didn't expect/handle the trailing CRLF that comes after the data
in each chunk (and those extra two bytes sure feel wasted).
Had to modify test case 34 to match this as well.
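For reference, a small sketch (written here as a C string literal with made-up
data) of the chunk framing the entry talks about; note the trailing CRLF after
the data that the state machine originally didn't expect:

  static const char example_chunked_body[] =
    "5\r\n"       /* chunk size in hex */
    "hello\r\n"   /* 5 bytes of data, then the trailing CRLF */
    "0\r\n"       /* a zero-sized chunk ends the body */
    "\r\n";       /* end of the (empty) trailer */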
Version 7.7-beta2
Daniel (13 March 2001)
- Added the policy stuff to the curl_easy_setopt man page for the two supported
policies.
- Implemented some support for the CURLOPT_CLOSEPOLICY option. The policies
CURLCLOSEPOLICY_LEAST_RECENTLY_USED and CURLCLOSEPOLICY_OLDEST are now
supported, and the "least recently used" is used as default if no policy
is chosen.
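For illustration, a minimal sketch of selecting one of the two policies from an
application (assumes a handle from curl_easy_init(); the helper name is made up):

  #include <curl/curl.h>

  static void set_close_policy(CURL *curl)
  {
    /* decide which cached connection gets closed when the cache is full;
       leaving this out gives the "least recently used" default */
    curl_easy_setopt(curl, CURLOPT_CLOSEPOLICY, (long)CURLCLOSEPOLICY_OLDEST);
  }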
Daniel (12 March 2001)
- Added CURLOPT_RANDOM_FILE and CURLOPT_EGDSOCKET to libcurl for seeding the
SSL random engine. The random seeding support was also brought to the curl
client with the new options --random-file <file> and --egd-file <file>. I
need some people to really test this to verify that it works as intended.
Remember that libcurl now informs (if verbose is on) when the random seed is
considered weak (HTTPS connections).
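For illustration, a minimal sketch of the libcurl side of the new seeding
options (the paths are example values, not defaults; the helper name is made up):

  #include <curl/curl.h>

  static void seed_ssl_random(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_RANDOM_FILE, "/dev/urandom");
    curl_easy_setopt(curl, CURLOPT_EGDSOCKET, "/var/run/egd-pool");
  }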
- Made the chunked transfer-encoding engine detect badly formatted data
lengths and return an error if so (we can't possibly extract sensible data
if this is the case). Added a test case that detects this: number 36. Now
there are 60 test cases.
- Added 5 new libcurl options to curl/curl.h that can be used to control the
persistent connection support in libcurl. They're also documented (fairly
thoroughly) in the curl_easy_setopt.3 man page. Three of them are now
implemented, although not really tested at this point... Anyway, the newly
implemented options are named CURLOPT_MAXCONNECTS, CURLOPT_FRESH_CONNECT and
CURLOPT_FORBID_REUSE. The ones still left to write code for are:
CURLOPT_CLOSEPOLICY and its related option CURLOPT_CLOSEFUNCTION.
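For illustration, a minimal sketch of the three options described as
implemented (the values are only examples; the helper name is made up):

  #include <curl/curl.h>

  static void tune_connection_reuse(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 5L);   /* size of the connection cache */
    curl_easy_setopt(curl, CURLOPT_FRESH_CONNECT, 1L); /* don't reuse a cached connection */
    curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);  /* close this connection when done */
  }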
- Made curl (the actual command line tool) use the new libcurl 7.7 persistent
connection support by re-using the same curl handle for every specified file
transfer, and after some more test case tweaking we have 100% test cases OK.
I made some test cases return HTTP/1.0 now to make sure that works as well.
- Had to add 'Connection: close' to the headers of a bunch of test cases so
that curl behaves "old-style" since the test http server doesn't do multiple
connections... Now I get 100% test case OK.
- The curl.haxx.se site, the main curl mailing list and my personal email are
all dead today due to power blackout in the area where the main servers are
located. Horrible.
- I've made persistence work over a squid HTTP proxy. I find it disturbing
that it uses headers that aren't present in any HTTP standard though
(Proxy-Connection:) and that makes me feel that I'm now on the edge of what
the standard actually defines. I need to get this code exercised on a lot
of different HTTP proxies before I feel safe.
Now I'm facing the problem that my test suite servers (both FTP and HTTP)
don't support persistent connections while libcurl now does them. I have
to fix the test servers to get all the test cases to run OK.
Daniel (8 March 2001)
- Guenole Bescon reported that libcurl did output errors to stderr even if
MUTE and NOPROGRESS was set. It turned out to be a bug and happens if
there's an error and no ERRORBUFFER is set. This is now corrected.
Version 7.7-beta1
Daniel (8 March 2001)
- "Transfer-Encoding: chunked" is no longer any trouble for libcurl. I've
added two source files and I've run some test downloads that look fine.
- HTTP HEAD works too, even on 1.1 servers.
Daniel (5 March 2001)
- The current 57 test cases now pass OK. It would suggest that libcurl works
using the old-style with one connection per handle. The test suite doesn't
handle multiple connections yet so there are no test cases for this.
- I patched the telnet.c heavily to not use any global variables anymore. It
should make it a lot nicer library-wise.
- The file:// support was modified slightly to use the internal connect-first-
then-do approach.
Daniel (4 March 2001)
- More bugs erased.
Version 7.7-alpha2
Daniel (4 March 2001)
- Now, there's even a basic check that a re-used connection is still alive
before it is assumed so. A few first tests have proven that libcurl will
then re-connect instead of re-use the dead connection!
Daniel (2 March 2001)
- Now they work intermixed as well. Major coolness!
- More fiddling around, my 'tiny' client I have for testing purposes has now
proven to download both FTP and HTTP with persistent connections. They do
not work intermixed yet though.
Daniel (1 March 2001)
- Wilfredo Sanchez pointed out a minor spelling mistake in a man page and that
curl_slist_append() should take a const char * as second argument. It does
now.
Daniel (22 February 2001)
- The persistent connections start to look good for HTTP. On a subsequent
request, it seems that libcurl now can pick an already existing connection
if a suitable one exists, or it opens a new one.
- Douglas R. Horner mailed me corrections to the curl_formparse() man page
that I applied.
Daniel (20 February 2001)
- Added the docs/examples/win32sockets.c file for our windows friends.
- Linus Nielsen Feltzing provided brand new TELNET functionality and
improvements:
* Negotiation is now passive. Curl does not negotiate until the peer does.
* Possibility to set negotiation options on the command line, currently only
XDISPLOC, TTYPE and NEW_ENVIRON (called NEW_ENV).
* Now sends the USER environment variable if the -u switch is used.
* Use -t to set telnet options (Linus even updated the man page, awesome!)
- Haven't done changes this big to curl for a while. Moved around a lot of
struct fields and stuff to make multiple connections get connection specific
data in separate structs so that they can co-exist in a nice way. See the
mailing lists for discussions around how this is gonna be implemented. Docs
and more will follow.
Studied the HTTP RFC to better find out how persistent connections should
work. Seems cool enough.
Daniel (19 February 2001)
- Bob Schader brought me two files that help set up a MS VC++ libcurl project
easier. He also provided me with an up-to-date libcurl.def file.
- I moved a bunch of prototypes from the public <curl/curl.h> file to the
library private urldata.h. This is because of the upcoming changes. The
low level interface is no longer being planned to become reality.
Daniel (15 February 2001)
- CURLOPT_POST is not required anymore. Just setting the POST string with
CURLOPT_POSTFIELDS will switch on the HTTP POST. Most other things in
libcurl already work this way, i.e. they require only the parameter to
switch on a feature so I think this works well with the rest. Setting a NULL
string switches off the POST again.
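For illustration, a minimal sketch (URL and field data made up) of a POST done
with nothing but CURLOPT_POSTFIELDS:

  #include <curl/curl.h>

  static void post_form(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/form.cgi");
    /* setting the data is enough to switch the request to POST;
       a NULL here would switch it off again */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel&tool=curl");
    curl_easy_perform(curl);
  }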
- Excellent suggestions from Rich Gray, Rick Jones, Johan Nilsson and Bjorn
Reese helped me define a way to incorporate persistent connections into
libcurl in a very smooth way. If done right, no change may have to be made
to older programs and they will just start using persistent connections when
applicable!
Daniel (13 February 2001)
- Changed the word 'timeouted' to 'timed out' in two different error messages.
Suggested by Larry Fahnoe.
Version 7.6.1
Daniel (9 February 2001)
- Frank Reid and Cain Hopwood provided information and research around a HTTPS
PUT/upload problem we seem to have. No solution found yet.
Daniel (8 February 2001)
- An interesting discussion is how to specify an empty password without having
curl ask for it interactively? The current implementation takes an empty
password as a request for a password prompt. However, I still want to
support a blank user field. Thus, today if you enter "-u :" (without user
and password) curl will prompt for the password. Tricky. How would you
specify you want the prompt otherwise?
- Made the netrc parse result possible to use for other protocols than FTP and
HTTP (such as the upcoming TELNET fixes).
- The previously mentioned "MSVC++ problems" turned out to be a non-issue.
- Added a HTTP file upload code example in the docs/examples/ section on
request.
- Adjusted the FTP response fix slightly.
Version 7.6.1-pre3
Daniel (7 February 2001)
- SM found a flaw in the response reading function for FTP that could make
libcurl not get out of the loop properly when it should, if libcurl got -1
returned when reading the socket.
- I found a similar mistake in http.c when using a proxy and reading the
results from the proxy connection.
Daniel (6 February 2001)
- A friendly person named "SM" (nntp at iname.com) pointed out that the VC
makefile in src/ needed the libpath set for the debug build to work.
- Daniel Gehriger stepped in to assist with the VC++ stuff Robert Weaver
brought up yesterday.
Daniel (5 February 2001)
- Jun-ichiro itojun Hagino brought a big patch that brings IPv6-awareness to
a bunch of different areas within libcurl.
- Robert Weaver told me about the problems the MS VC++ 6.0 compiler has with
the 'static' keyword on a number of libcurl functions. I might need to add a
patch that redefines static when libcurl is compiled with that compiler.
How do I know when VC++ compiles, anyone?
Daniel (4 February 2001)
- curl_getinfo() was extended with two new options:
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD. They
return the full assumed content length of the transfer in the given
direction. The CURLINFO_CONTENT_LENGTH_DOWNLOAD will be the Content-Length:
size of a HTTP download. Added descriptions to the man page as well. This
was done after discussions with Bob Schader.
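For illustration, a minimal sketch of reading the two new values back after a
transfer, assuming the public curl_easy_getinfo() entry point (the helper name
is made up):

  #include <stdio.h>
  #include <curl/curl.h>

  static void print_content_lengths(CURL *curl)
  {
    double dl = 0.0, ul = 0.0;
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &dl);
    curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_UPLOAD, &ul);
    printf("download: %.0f bytes, upload: %.0f bytes\n", dl, ul);
  }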
Daniel (3 February 2001)
- Ingo Ralf Blum provided another fix that makes curl build under the more
recent cygwin installations. It seems they've changed the preset defines to
not include WIN32 anymore.
Version 7.6.1-pre2
Daniel (31 January 2001)
- Curl_read() and curl_read() now return a ssize_t for the size, as it had to
be able to return -1. The telnet support crashed due to this and there was
a possibility to weird behaviour all over.
be able to return -1. The telnet support crashed due to this and there was a
possibility to weird behaviour all over. Linus Nielsen Feltzing helped me
find this.
- Added a configure.in check for a working getaddrinfo() if IPv6 is requested.
I also made the configure script feature --enable-debug which sets a couple


@@ -39,3 +39,15 @@
/* Define if you want to enable IPv6 support */
#undef ENABLE_IPV6
/* Define this to 'int' if ssize_t is not an available typedefed type */
#undef ssize_t
/* Define this to 'int' if socklen_t is not an available typedefed type */
#undef socklen_t
/* Define this as a suitable file to read random data from */
#undef RANDOM_FILE
/* Define this to your Entropy Gathering Daemon socket pathname */
#undef EGD_SOCKET


@@ -23,6 +23,12 @@
/* Define to `unsigned' if <sys/types.h> doesn't define. */
/* #undef size_t */
/* Define this to 'int' if ssize_t is not an available typedefed type */
#define ssize_t int
/* Define this to 'int' if socklen_t is not an available typedefed type */
#define socklen_t int
/* Define if you have the ANSI C header files. */
#define STDC_HEADERS 1


@@ -53,15 +53,9 @@ dnl
AC_DEFUN(CURL_CHECK_WORKING_GETADDRINFO,[
AC_CACHE_CHECK(for working getaddrinfo, ac_cv_working_getaddrinfo,[
AC_TRY_RUN( [
#ifdef HAVE_NETDB_H
#include <netdb.h>
#endif
#ifdef HAVE_STRING_H
#include <string.h>
#endif
#ifdef HAVE_SYS_SOCKET_H
#include <sys/types.h>
#include <sys/socket.h>
#endif
void main(void) {
struct addrinfo hints, *ai;
@@ -397,6 +391,36 @@ AC_CHECK_FUNC(gethostname, , AC_CHECK_LIB(ucb, gethostname))
dnl dl lib?
AC_CHECK_FUNC(dlopen, , AC_CHECK_LIB(dl, dlopen))
dnl **********************************************************************
dnl Check for the random seed preferences
dnl **********************************************************************
AC_ARG_WITH(egd-socket,
[ --with-egd-socket=FILE Entropy Gathering Daemon socket pathname],
[ EGD_SOCKET="$withval" ]
)
if test -n "$EGD_SOCKET" ; then
AC_DEFINE_UNQUOTED(EGD_SOCKET, "$EGD_SOCKET")
fi
dnl Check for user-specified random device
AC_ARG_WITH(random,
[ --with-random=FILE read randomness from FILE (default=/dev/urandom)],
[ RANDOM_FILE="$withval" ],
[
dnl Check for random device
AC_CHECK_FILE("/dev/urandom",
[
RANDOM_FILE="/dev/urandom";
]
)
]
)
if test -n "$RANDOM_FILE" ; then
AC_SUBST(RANDOM_FILE)
AC_DEFINE_UNQUOTED(RANDOM_FILE, "$RANDOM_FILE")
fi
dnl **********************************************************************
dnl Check for the presence of Kerberos4 libraries and headers
dnl **********************************************************************
@@ -434,6 +458,10 @@ AC_MSG_CHECKING([if Kerberos4 support is requested])
if test "$want_krb4" = yes
then
if test "$ipv6" = "yes"; then
echo krb4 is not compatible with IPv6
exit 1
fi
AC_MSG_RESULT(yes)
dnl Check for & handle argument to --with-krb4
@@ -547,7 +575,8 @@ else
dnl these can only exist if openssl exists
AC_CHECK_FUNCS( RAND_status \
RAND_screen )
RAND_screen \
RAND_egd )
fi
@@ -661,6 +690,31 @@ AC_CHECK_SIZEOF(long double, 8)
# check for 'long long'
AC_CHECK_SIZEOF(long long, 4)
# check for ssize_t
AC_CHECK_TYPE(ssize_t, int)
dnl
dnl We can't just AC_CHECK_TYPE() for socklen_t since it doesn't appear
dnl in the standard headers. We egrep for it in the socket headers and
dnl if it is used there we assume we have the type defined, otherwise
dnl we search for it with AC_CHECK_TYPE() the "normal" way
dnl
if test "$ac_cv_header_sys_socket_h" = "yes"; then
AC_MSG_CHECKING(for socklen_t in sys/socket.h)
AC_EGREP_HEADER(socklen_t,
sys/socket.h,
socklen_t=yes
AC_MSG_RESULT(yes),
AC_MSG_RESULT(no))
fi
if test "$socklen_t" != "yes"; then
# check for socklen_t the standard way if it wasn't found before
AC_CHECK_TYPE(socklen_t, int)
fi
dnl Get system canonical name
AC_CANONICAL_HOST
AC_DEFINE_UNQUOTED(OS, "${host}")
@@ -691,7 +745,8 @@ AC_CHECK_FUNCS( socket \
setvbuf \
sigaction \
signal \
getpass_r
getpass_r \
strlcat
)
dnl removed 'getpass' check on October 26, 2000


@@ -6,9 +6,9 @@
BUGS
Curl has grown substantially from that day, several years ago, when I
started fiddling with it. When I write this, there are 16500 lines of source
code, and by the time you read this it has probably grown even more.
Curl and libcurl have grown substantially since the beginning. At the time
of writing (mid March 2001), there are 23000 lines of source code, and by
the time you read this it has probably grown even more.
Of course there are lots of bugs left. And lots of misfeatures.
@@ -21,10 +21,11 @@ BUGS
http://sourceforge.net/bugs/?group_id=976
When reporting a bug, you should include information that will help us
understand what's wrong, what's expected and how to repeat it. You therefore
need to supply your operating system's name and version number (uname -a
under a unix is fine), what version of curl you're using (curl -v is fine),
what URL you were working with and anything else you think matters.
understand what's wrong, what you expected to happen and how to repeat the
bad behaviour. You therefore need to supply your operating system's name and
version number (uname -a under a unix is fine), what version of curl you're
using (curl -V is fine), what URL you were working with and anything else
you think matters.
If curl crashed, causing a core dump (in unix), there is hardly any use to
send that huge file to anyone of us. Unless we have an exact same system
@@ -32,7 +33,7 @@ BUGS
a stack trace and send that (much smaller) output to us instead!
The address and how to subscribe to the mailing list is detailed in the
README.curl file.
MANUAL file.
HOW TO GET A STACK TRACE with a common unix debugger
====================================================


@@ -13,7 +13,7 @@ To Think About When Contributing Source Code
The License Issue
When contributing with code, you agree to put your changes and new code under
the same license curl and libcurl is already using.
the same license curl and libcurl is already using unless stated otherwise.
If you add a larger piece of code, you can opt to make that file or set of
files to use a different license as long as they don't enforce any changes to
@@ -26,19 +26,19 @@ Naming
Try using a non-confusing naming scheme for your new functions and variable
names. It doesn't necessarily have to mean that you should use the same as in
other places of the code, just that the names should be logical,
understandable and be named according to what they're used for.
understandable and be named according to what they're used for. File-local
functions should be made static.
Indenting
Please try using the same indenting levels and bracing method as all the
other code already does. It makes the source code a lot easier to follow if
all of it is written using the same style. I don't ask you to like it, I just
ask you to follow the tradition! ;-)
all of it is written using the same style. We don't ask you to like it, we
just ask you to follow the tradition! ;-)
Commenting
Comment your source code extensively. I don't see myself as a very good
source commenter, but I try to become one. Commented code is quality code and
Comment your source code extensively. Commented code is quality code and
enables future modifications much more. Uncommented code runs a much bigger
risk of being completely replaced when someone wants to extend things, since other persons'
source code can get quite hard to read.
@@ -71,9 +71,9 @@ Separate Patches Doing Different Things
Patch Against Recent Sources
Please try to get the latest available sources to make your patches
against. It makes my life so much easier. The very best is if you get the
most up-to-date sources from the CVS repository, but the latest release
archive is quite OK as well!
against. It makes the life of the developers so much easier. The very best is
if you get the most up-to-date sources from the CVS repository, but the
latest release archive is quite OK as well!
Document
@@ -91,9 +91,9 @@ Write Access to CVS Repository
Test Cases
Since the introduction of the test suite, we will get the possibility to
quickly verify that the main features are working as supposed to. To maintain
this situation and improve it, all new features and functions that are added
need tro be tested. Every feature that is added should get at least one valid
Since the introduction of the test suite, we can quickly verify that the main
features are working as they're supposed to. To maintain this situation and
improve it, all new features and functions that are added need to be tested
in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
posts a few test cases, it won't end up as a heavy burden on a single person!

docs/FAQ

@@ -1,4 +1,4 @@
Updated: January 29, 2001 (http://curl.haxx.se/docs/faq.shtml)
Updated: March 13, 2001 (http://curl.haxx.se/docs/faq.shtml)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -31,6 +31,7 @@ FAQ
3.7 Can I use curl to delete/rename a file through FTP?
3.8 How do I tell curl to follow HTTP redirects?
3.9 How do I use curl in PHP?
3.10 What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
4. Running Problems
4.1 Problems connecting to SSL servers.
@@ -53,7 +54,7 @@ FAQ
5.2 How can I receive all data into a large memory chunk?
5.3 How do I fetch multiple files with libcurl?
5.4 Does libcurl do Winsock initing on win32 systems?
5.5 Does CURLOPT_FILE work on win32 ?
5.5 Does CURLOPT_FILE and CURLOPT_INFILE work on win32 ?
5.6 What about Keep-Alive or persistent connections?
6. License Issues
@@ -106,34 +107,35 @@ FAQ
or with PHP.
Curl is not a single-OS program. Curl exists, compiles, builds and runs
under a wide range of operating systems, including all modern Unixes,
Windows, Amiga, BeOS, OS/2, OS X, QNX etc.
under a wide range of operating systems, including all modern Unixes (and a
bunch of older ones too), Windows, Amiga, BeOS, OS/2, OS X, QNX etc.
1.4 When will you make curl do XXXX ?
I love suggestions of what to change in order to make curl and libcurl
better. I do however believe in a few rules when it comes to the future of
We love suggestions of what to change in order to make curl and libcurl
better. We do however believe in a few rules when it comes to the future of
curl:
* It is to remain a command line tool. If you want GUIs or fancy scripting
* Curl is to remain a command line tool. If you want GUIs or fancy scripting
capabilities, you're free to write another tool that uses libcurl and that
offers this. There's no point in having one single tool that does every
offers this. There's no point in having a single tool that does every
imaginable thing. That's also one of the great advantages of having the
core of curl as a library: libcurl.
core of curl as a library.
* I do not add things to curl that other small and available tools already
* We do not add things to curl that other small and available tools already
do very fine at the side. Curl's output is fine to pipe into another
program or redirect to another file for the next program to interpret.
* I focus on protocol related issues and improvements. If you wanna do more
* We focus on protocol related issues and improvements. If you wanna do more
magic with the supported protocols than curl currently does, chances are
big I will agree. If you wanna add more protocols, I may very well
agree.
* If you want me to make all the work while you wait for me to implement it
for you, that is not a very friendly attitude. I spend a considerable time
already on maintaining and developing curl. In order to get more out of
me, I trust you will offer some of your time and efforts in return.
* If you want someone else to make all the work while you wait for us to
implement it for you, that is not a very friendly attitude. We spend a
considerable time already on maintaining and developing curl. In order to
get more out of us, you should consider trading in some of your time and
efforts in return.
* If you write the code, chances are bigger that it will get into curl
faster.
@@ -181,26 +183,24 @@ FAQ
2.2. Does curl work/build with other SSL libraries?
Curl has been written to use OpenSSL, although I doubt there would be much
problems using a different library. I just don't know any other free one and
that has limited my possibilities to develop against anything else.
If anyone does "port" curl to use a commercial SSL library, I am of course
very interested in getting the patch!
Curl has been written to use OpenSSL, although there should not be much
problems using a different library. If anyone does "port" curl to use a
different SSL library, we are of course very interested in getting the
patch!
2.3. Where can I find a copy of LIBEAY32.DLL?
That is an OpenSSL binary built for Windows.
Curl uses OpenSSL to do the SSL stuff. The LIBEAY32.DLL is what curl needs
on a windows machine to do https://. Check out the curl web page to find
on a windows machine to do https://. Check out the curl web site to find
accurate and up-to-date pointers to recent OpenSSL DLLs and other binary
packages.
2.4. Does cURL support Socks (RFC 1928) ?
No. Nobody has wanted it that badly yet. I would appriciate patches that
brings this functionality.
No. Nobody has wanted it that badly yet. We appreciate patches that bring
this functionality.
3. Usage problems
@@ -222,7 +222,7 @@ FAQ
3.2. How do I tell curl to resume a transfer?
Curl supports resume both ways on FTP, download ways on HTTP.
Curl supports resumed transfers both ways on both FTP and HTTP.
Try the -C option.
@@ -230,14 +230,14 @@ FAQ
You can't simply use -F or -d at your choice. The web server that will
receive your post assumes one of the formats. If the form you're trying to
"fake" sets the type to 'multipart/form-data', than and only then you must
"fake" sets the type to 'multipart/form-data', then and only then you must
use the -F type. In all the most common cases, you should use -d which then
causes a posting with the type 'application/x-www-form-urlencoded'.
I have described this in some detail in the README.curl file, and if you
don't understand it the first time, read it again before you post questions
about this to the mailing list. I would also suggest that you read through
the mailing list archives for old postings and questions regarding this.
This is described in some detail in the README.curl file, and if you don't
understand it the first time, read it again before you post questions about
this to the mailing list. Also, try reading through the mailing list
archives for old postings and questions regarding this.
3.4. How do I tell curl to run custom FTP commands?
@@ -294,13 +294,23 @@ FAQ
invoke the curl tool using a command line. This is the way to use curl if
you're using PHP3 or PHP4 built without curl module support.
3.10 What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
Curl adheres to the HTTP spec, which basically means you can play with *any*
protocol that is built on top of HTTP. Protocols such as SOAP, WEBDAV and
XML-RPC are all such ones. You can use -X to set custom requests and -H to
set custom headers (or replace internally generated ones).
Using libcurl or PHP's curl modules is just as fine and you'd just use the
proper library options to do the same.
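For illustration, a minimal libcurl sketch of the same idea (the method and
header are just examples, not anything curl itself defines):

  #include <curl/curl.h>

  static void custom_request(CURL *curl)
  {
    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Content-Type: text/xml");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PROPFIND"); /* like -X */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);          /* like -H */
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
  }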
4. Running Problems
4.1. Problems connecting to SSL servers.
It took a very long time before I could sort out why curl had problems
to connect to certain SSL servers when using SSLeay or OpenSSL v0.9+.
The error sometimes showed up similar to:
It took a very long time before we could sort out why curl had problems to
connect to certain SSL servers when using SSLeay or OpenSSL v0.9+. The
error sometimes showed up similar to:
16570:error:1407D071:SSL routines:SSL2_READ:bad mac decode:s2_pkt.c:233:
@@ -308,12 +318,12 @@ FAQ
requests properly. To correct this problem, tell curl to select SSLv2 from
the command line (-2/--sslv2).
I have also seen examples where the remote server didn't like the SSLv2
There have also been examples where the remote server didn't like the SSLv2
request and instead you had to force curl to use SSLv3 with -3/--sslv3.
4.2. Why do I get problems when I use & or % in the URL?
In general unix shells, the & letter is treated special and when used it
In general unix shells, the & letter is treated special and when used, it
runs the specified command in the background. To safely send the & as a part
of a URL, you should quote the entire URL by using single (') or double (")
quotes around it.
@@ -338,8 +348,8 @@ FAQ
curl '{curl,www}.haxx.se'
To be able to use those letters as actual parts of the URL (without using
them for the curl URL "globbing" system), use the -g/--globoff option
(included in curl 7.6 and later):
them for the curl URL "globbing" system), use the -g/--globoff option (curl
7.6 and later):
curl -g 'www.site.com/weirdname[].html'
@@ -355,8 +365,8 @@ FAQ
4.5 Why do I get return code XXX from a HTTP server?
RFC2616 clearly explains the return codes. I'll make a short transcript
here. Go read the RFC for exact details:
RFC2616 clearly explains the return codes. This is a short transcript. Go
read the RFC for exact details:
4.5.1 "400 Bad Request"
@@ -392,7 +402,7 @@ FAQ
4.7. How do I keep usernames and passwords secret in Curl command lines?
I see this problem as two parts:
This problem has two sides:
The first part is to avoid having clear-text passwords in the command line
so that they don't appear in 'ps' outputs and similar. That is easily
@@ -426,7 +436,8 @@ FAQ
4.9. Curl can't authenticate to the server that requires NTLM?
NTLM is a Microsoft proprietary protocol. Unfortunately, curl does not
currently support that.
currently support that. Proprietary formats are evil. You should not use
such ones.
5. libcurl Issues
@@ -438,9 +449,8 @@ FAQ
programs. libcurl will use thread-safe functions instead of non-safe ones if
your system has such.
I am very interested in once and for all getting some kind of report or
README file from those who have used libcurl in a threaded environment,
since I haven't and I get this question more and more frequently!
We would appreciate some kind of report or README file from those who have
used libcurl in a threaded environment.
5.2 How can I receive all data into a large memory chunk?
@@ -477,9 +487,16 @@ FAQ
5.3 How do I fetch multiple files with libcurl?
The easy interface of libcurl does not support multiple requests using the
same connection. The only available way to do multiple requests is to
init/perform/cleanup for each request.
Starting with version 7.7, curl and libcurl will have excellent support for
transferring multiple files. You should just repeatedly set new URLs with
curl_easy_setopt() and then transfer it with curl_easy_perform(). The handle
you get from curl_easy_init() is not only reusable starting with libcurl
7.7, but also you're encouraged to reuse it if you can, as that will enable
libcurl to use persistent connections.
For libcurl prior to 7.7, there was no multiple file support. The only
available way to do multiple requests was to init/perform/cleanup for each
transfer.
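For illustration, a minimal sketch of the 7.7-style reuse described above (the
URLs are placeholders):

  #include <curl/curl.h>

  static void fetch_two_files(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/one.html");
      curl_easy_perform(curl);
      /* same handle, new URL: the connection can be reused */
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/two.html");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
  }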
5.4 Does libcurl do Winsock initing on win32 systems?
@@ -491,28 +508,28 @@ FAQ
use several different libraries and parts, and there's no reason for every
single library to do this.
5.5 Does CURLOPT_FILE work on win32 ?
5.5 Does CURLOPT_FILE and CURLOPT_INFILE work on win32 ?
Yes, but you cannot open a FILE * and pass the pointer to a DLL and have
that DLL use the FILE *. You must use CURLOPT_WRITEFUNCTION as well to set a
function that writes the file, even if that simply writes the data to the
specified FILE*.
that DLL use the FILE *. If you set CURLOPT_FILE you must also use
CURLOPT_WRITEFUNCTION as well to set a function that writes the file, even
if that simply writes the data to the specified FILE*. Similarly, if you use
CURLOPT_INFILE you must also specify CURLOPT_READFUNCTION.
(provided by Joel DeYoung)
(Provided by Joel DeYoung and Bob Schader)
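For illustration, a minimal sketch of the CURLOPT_FILE/CURLOPT_WRITEFUNCTION
pairing described above; the FILE * stays on the application's side of the DLL
boundary and only the callback touches it (function names are made up):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *stream)
  {
    return fwrite(ptr, size, nmemb, (FILE *)stream);
  }

  static void setup_output(CURL *curl, FILE *out)
  {
    curl_easy_setopt(curl, CURLOPT_FILE, out);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
  }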
5.6 What about Keep-Alive or persistent connections?
This is closely related to issue 5.3. Since libcurl has no real support
for doing multiple file transfers, there's no support for Keep-Alive or
persistant connections either.
Starting with version 7.7, curl and libcurl will have excellent support for
persistent connections when transferring several files from the same server.
Curl will attempt to reuse connections for all URLs specified on the same
command line/config file, and libcurl will reuse connections for all
transfers that are made using the same libcurl handle.
This is of course subject to change as soon as libcurl gets support for
multiple files. Feel free to join in and make this change happen sooner!
Previous versions had no persistent connection support.
6. License Issues
NOTE: This section is now updated to concern curl 7.5.2 or later!
Curl and libcurl are released under a MIT/X derivate license *or* the MPL,
the Mozilla Public License. To get a really good answer to your license
conflict questions, you should study the MPL and MIT/X licenses and the
@@ -529,27 +546,25 @@ FAQ
6.2. I have a closed-source program, can I use the libcurl library?
Yes.
Yes!
libcurl does not put any restrictions on the program that uses the
library.
libcurl does not put any restrictions on the program that uses the library.
6.3. I have a BSD licensed program, can I use the libcurl library?
Yes.
Yes!
libcurl does not put any restrictions on the program that uses the
library.
libcurl does not put any restrictions on the program that uses the library.
6.4. I have a program that uses LGPL libraries, can I use libcurl?
Yes.
Yes!
The LGPL license don't clash with other licenses.
The LGPL license doesn't clash with other licenses.
6.5. Can I modify curl/libcurl for my program and keep the changes secret?
Yes.
Yes!
The MIT/X derivate license practically allows you to do almost anything with
the sources, on the condition that the copyright texts in the sources are
@@ -557,9 +572,12 @@ FAQ
6.6. Can you please change the curl/libcurl license to XXXX?
No. We carefully picked this license years ago and a large amount of people
have contributed with source code knowing that this is the license we
use. This license puts the restrictions we want on curl/libcurl and it does
not spread to other programs or libraries that use it. The recent dual
license modification should make it possible for everyone to use libcurl or
curl in their projects, no matter what license they already have in use.
No.
We have carefully picked this license after years of development and
discussions and a large amount of people have contributed with source code
knowing that this is the license we use. This license puts the restrictions
we want on curl/libcurl and it does not spread to other programs or
libraries that use it. The recent dual license modification should make it
possible for everyone to use libcurl or curl in their projects, no matter
what license they already have in use.


@@ -17,18 +17,21 @@ Misc
- progress bar/time specs while downloading
- "standard" proxy environment variables support
- config file support
- compiles on win32
- compiles on win32 (reported built on 29 operating systems)
- redirectable stderr
- use selected network interface for outgoing traffic
- IPv6 support
- persistent connections
HTTP
- HTTP/1.1 compliant
- GET
- PUT
- HEAD
- POST
- multipart POST
- authentication
- resume
- resume (both GET and PUT)
- follow redirects
- maximum amount of redirects to follow
- custom HTTP request
@@ -71,6 +74,7 @@ FTP
TELNET
- connection negotiation
- custom telnet options
- stdin/stdout I/O
LDAP (*2)


@@ -84,9 +84,10 @@ UNIX
KNOWN PROBLEMS
If you happen to have autoconf installed, but a version older than
2.12 you will get into trouble. Then you can still build curl by
issuing these commands: (from Ralph Beckmann)
If you happen to have autoconf installed, but a version older than 2.12
you will get into trouble. Then you can still build curl by issuing these
commands (note that this requires curl to be built statically): (from Ralph
Beckmann)
./configure [...]
cd lib; make; cd ..
@@ -139,6 +140,14 @@ UNIX
./configure --with-krb4=/usr/athena
If your system supports shared libraries, but you want to build a static
version only, you can disable building the shared version by using:
./configure --disable-shared
If you're a curl developer and use gcc, you might want to enable more
debug options with the --enable-debug option.
Win32
=====

View File

@@ -1,4 +1,4 @@
Updated for curl 7.6 on January 26, 2001
Updated for curl 7.7 on March 13, 2001
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -7,11 +7,11 @@
INTERNALS
The project is kind of split in two. The library and the client. The client
part uses the library, but the library is meant to be designed to allow other
applications to use it.
The project is split in two. The library and the client. The client part uses
the library, but the library is designed to allow other applications to use
it.
Thus, the largest amount of code and complexity is in the library part.
The largest amount of code and complexity is in the library part.
CVS
===
@@ -35,13 +35,13 @@ Windows vs Unix
the same at all places except for the header file that defines them. The
macros in use are sclose(), sread() and swrite().
2. Windows requires a couple of init calls for the socket stuff
2. Windows requires a couple of init calls for the socket stuff.
Those must be made by the application that uses libcurl, in curl that means
src/main.c has some code #ifdef'ed to do just that.
3. The file descriptors for network communication and file operations are
not easily interchangable as in unix
not easily interchangeable as in unix.
We avoid this by not trying any funny tricks on file descriptors.
@@ -51,10 +51,10 @@ Windows vs Unix
We set stdout to binary under windows
Inside the source code, I do make an effort to avoid '#ifdef WIN32'. All
Inside the source code, we make an effort to avoid '#ifdef [Your OS]'. All
conditionals that deal with features *should* instead be in the format
'#ifdef HAVE_THAT_WEIRD_FUNCTION'. Since Windows can't run configure scripts,
I maintain two config-win32.h files (one in / and one in src/) that are
we maintain two config-win32.h files (one in / and one in src/) that are
supposed to look exactly as a config.h file would have looked like on a
Windows machine!
@@ -64,12 +64,6 @@ Windows vs Unix
Library
=======
As described elsewhere, libcurl is meant to get two different "layers" of
interfaces. At the present point only the high-level, the "easy", interface
has been fully implemented and documented. We assume the easy-interface in
this description, the low-level interface will be documented when fully
implemented.
There are plenty of entry points to the library, namely each publicly defined
function that libcurl offers to applications. All of those functions are
rather small and easy-to-follow. All the ones prefixed with 'curl_easy' are
@@ -103,8 +97,9 @@ Library
lib/sendf.c) function to send printf-style formatted data to the remote host
and when they're ready to make the actual file transfer they call the
Curl_Transfer() function (in lib/transfer.c) to setup the transfer and
returns. curl_transfer() then calls _Tranfer() in lib/transfer.c that
performs the entire file transfer.
returns. Curl_perform() then calls Transfer() in lib/transfer.c that performs
the entire file transfer. Curl_perform() is what does the main "connect - do
- transfer - done" loop. It loops if there's a Location: to follow.
During transfer, the progress functions in lib/progress.c are called at a
frequent interval (or at the user's choice, a specified callback might get
@@ -114,6 +109,22 @@ Library
When completed, the curl_easy_cleanup() should be called to free up used
resources.
A quick roundup on internal function sequences (many of these call
protocol-specific function-pointers):
curl_connect - connects to a remote site and does initial connect fluff
This also checks for an existing connection to the requested site and uses
that one if it is possible.
curl_do - starts a transfer
curl_transfer() - transfers data
curl_done - ends a transfer
curl_disconnect - disconnects from a remote site. This is called when the
disconnect is really requested, which doesn't necessarily have to be
exactly after curl_done in case we want to keep the connection open for
a while.
HTTP(S)
HTTP offers a lot and is the protocol in curl that uses the most lines of
@@ -129,6 +140,14 @@ Library
the source by the use of curl_read() for reading and curl_write() for writing
data to the remote server.
http_chunks.c contains functions that understand HTTP 1.1 chunked transfer
encoding.
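As a simplified illustration, a chunked-encoded body is a series of chunks,
each consisting of a hexadecimal chunk length followed by CRLF, the chunk data
followed by a trailing CRLF, and finally a zero-length chunk that ends the
body:
  4\r\n
  curl\r\n
  5\r\n
  rocks\r\n
  0\r\n
  \r\n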
An interesting detail with the HTTP(S) request, is the add_buffer() series of
functions we use. They append data to one single buffer, and when the
building is done the entire request is sent off in one single write. This is
done this way to overcome problems with flawed firewalls and lame servers.
FTP
The Curl_if2ip() function can be used for getting the IP number of a
@@ -160,7 +179,7 @@ Library
URL encoding and decoding, called escaping and unescaping in the source code,
is found in lib/escape.c.
While transfering data in _Transfer() a few functions might get
While transferring data in Transfer() a few functions might get
used. curl_getdate() in lib/getdate.c is for HTTP date comparisons (and
more).
@@ -182,6 +201,34 @@ Library
exists in lib/getpass.c. libcurl offers a custom callback that can be used
instead of this, but it doesn't change much to us.
Persistent Connections
======================
With curl 7.7, we added persistent connection support to libcurl, which has
introduced a somewhat different treatment of things inside libcurl.
o The 'UrlData' struct returned in the curl_easy_init() call must never
hold connection-oriented data. It is meant to hold the root data as well
as all the options etc that the library-user may choose.
o The 'UrlData' struct holds the cache array of pointers to 'connectdata'
structs. There's one connectdata struct for each connection that libcurl
knows about.
o This also enables the 'curl handle' to be reused on subsequent transfers,
something that was illegal in pre-7.7 versions.
o When we are about to perform a transfer with curl_easy_perform(), we first
check for an already existing connection in the cache that we can use,
otherwise we create a new one and add to the cache. If the cache is full
already when we add a new connection, we close one of the present ones. We
select which one to close dependent on the close policy that may have been
previously set.
o When the transfer operation is complete, we try to leave the connection open.
Particular options may tell us not to, and protocols may signal closure on
connections, in which case we of course don't keep them open.
o When curl_easy_cleanup() is called, we close all still opened connections.
Do note that the curl handle must be re-used in order for the
persistent connections to work.
Library Symbols
===============
@@ -236,12 +283,12 @@ Memory Debugging
deal with resources that might give us problems if we "leak" them. The
functions in the memdebug system do nothing fancy, they do their normal
function and then log information about what they just did. The logged data
is then analyzed after a complete session,
can then be analyzed after a complete session,
memanalyze.pl is a perl script present only in CVS (not part of the release
archives) that analyzes a log file generated by the memdebug system. It
detects if resources are allocated but never freed and other kinds of errors
related to resource management.
memanalyze.pl is a perl script only present in CVS (not part of the
release archives) that analyzes a log file generated by the memdebug
system. It detects if resources are allocated but never freed and other kinds
of errors related to resource management.
Use -DMALLOCDEBUG when compiling to enable memory debugging.
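One possible way to do that from the shell, assuming the generated makefiles
let you override CFLAGS on the command line, would be:
  make clean
  make CFLAGS="-g -DMALLOCDEBUG"
The log that such a build produces is what memanalyze.pl reads.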
@@ -256,8 +303,8 @@ Test Suite
httpserver.pl and ftpserver.pl before all the test cases are performed. The
test suite currently only runs on unix-like platforms.
You'll find a complete description of the test case data files in the README
file in the test directory.
You'll find a complete description of the test case data files in the
tests/README file.
The test suite automatically detects if curl was built with the memory
debugging enabled, and if it was it will detect memory leaks too.
@@ -269,6 +316,7 @@ Building Releases
released, run the 'maketgz' script (using 'make distcheck' will give you a
pretty good view on the status of the current sources). maketgz prompts for
version number of the client and the library before it creates a release
archive.
archive. maketgz uses 'make dist' for the actual archive building, which is
why you need to fill in the Makefile.am files properly for which files should
be included in the release archives.
You must have autoconf installed to build release archives.

View File

@@ -4,58 +4,91 @@
| | | |_) | (__| |_| | | | |
|_|_|_.__/ \___|\__,_|_| |_|
How To Use Libcurl In Your C/C++ Program
How To Use Libcurl In Your Program
[ libcurl can be used directly from within your PHP or Perl programs as well,
look elsewhere for documentation on this ]
Interfaces
libcurl currently offers two different interfaces to the URL transfer
engine. They can be seen as one low-level and one high-level, in the sense
that the low-level one will allow you to deal with a lot more details but on
the other hand not offer as many fancy features (such as Location:
following). The high-level interface is supposed to be a built-in
implementation of the low-level interface. You will not be able to mix
function calls from the different layers.
As we currently ONLY support the high-level interface, the so called easy
interface, I will not attempt to describe any low-level functions at this
point.
Function descriptions
The interface is meant to be very simple for very simple
implementations. Thus, we have minimized the number of entries.
The interface is meant to be very simple for applications/programmers, hence
the name "easy". We have therefore minimized the number of entries.
The Easy Interface
When using the easy interface, you init your easy-session and get a handle,
which you use as input to the following interface functions you use.
When using the easy interface, you init your session and get a handle, which
you use as input to the following interface functions you use. Use
curl_easy_init() to get the handle.
You continue by setting all the options you want in the upcoming transfer,
most important among them is the URL itself. You might want to set some
callbacks as well that will be called from the library when data is available
etc.
most important among them is the URL itself (you can't transfer anything
without a specified URL as you may have figured out yourself). You might want
to set some callbacks as well that will be called from the library when data
is available etc. curl_easy_setopt() is there for this.
When all is setup, you tell libcurl to perform the transfer. It will then do
the entire operation and won't return until it is done or failed.
When all is setup, you tell libcurl to perform the transfer using
curl_easy_perform(). It will then do the entire operation and won't return
until it is done or failed.
After the transfer has been made, you cleanup the easy-session's handle and
libcurl is entirely off the hook!
After the transfer has been made, you clean up the session with
curl_easy_cleanup() and libcurl is entirely off the hook! If you want
persistent connections, you don't clean up immediately, but instead run ahead
and perform other transfers. See the chapter below on Persistent
Connections.
curl_easy_init()
curl_easy_setopt()
curl_easy_perform()
curl_easy_cleanup()
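As a minimal sketch, a complete transfer using only these four functions could
look like the program below (the URL is just an example, and on win32 you
would do the winsock init described further down first):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *handle = curl_easy_init();   /* start the easy session */

    if(handle) {
      /* the URL is the only option you really must set */
      curl_easy_setopt(handle, CURLOPT_URL, "http://curl.haxx.se/");

      /* do the transfer; received data goes to stdout by default */
      curl_easy_perform(handle);

      /* end the session and close any connections it kept open */
      curl_easy_cleanup(handle);
    }
    return 0;
  }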
While the above mentioned four functions are the main functions to use in the
easy interface, there is a series of other helpful functions to use. They
are:
While the above four functions are the main functions to use in the easy
interface, there is a series of helpful functions to use. They are:
curl_version() - displays the libcurl version
curl_getdate() - converts a date string to time_t
curl_getenv() - portable environment variable reader
curl_easy_getinfo() - get information about a performed transfer
curl_formparse() - helps building a HTTP form POST
curl_formfree() - free a list built with curl_formparse()
curl_slist_append() - builds a linked list
curl_slist_free_all() - frees a whole curl_slist
curl_version() - displays the libcurl version
curl_getdate() - converts a date string to time_t
curl_getenv() - portable environment variable reader
curl_formparse() - helps building a HTTP form POST
curl_slist_append() - builds a linked list
curl_slist_free_all() - frees a whole curl_slist
For details on these, read the separate man pages.
Read the separate man pages for these functions for details!
Portability
libcurl works *exactly* the same, on any of the platforms it compiles and
builds on.
There's only one caution, and that is the win32 platform that may(*) require
you to init the winsock stuff before you use the libcurl functions. Details
on this are noted on the curl_easy_init() man page.
(*) = it appears as if users of the cygwin environment get this done
automatically.
Threads
Never *ever* call curl-functions simultaneously using the same handle from
several threads. libcurl is thread-safe and can be used in any number of
threads, but you must use separate curl handles if you want to use libcurl in
more than one thread simultaneously.
Persistent Connections
With libcurl 7.7, persistent connections were added. Persistent connections
mean that libcurl can re-use the same connection for several transfers, if
the conditions are right.
libcurl will *always* attempt to use persistent connections. Whenever you use
curl_easy_perform(), libcurl will attempt to use an existing connection to do
the transfer, and if none exists it'll open a new one that will be subject
to re-use on a possible following call to curl_easy_perform().
To allow libcurl to take full advantage of persistent connections, you should
do as many of your file transfers as possible using the same curl
handle. When you call curl_easy_cleanup(), all the possibly open connections
held by libcurl will be closed and forgotten.
Note that the options set with curl_easy_setopt() will be used on every
repeated curl_easy_perform() call.
Compatibility with older libcurls
Repeated curl_easy_perform() calls on the same handle were not supported in
pre-7.7 versions, and caused confusion and undefined behaviour.

View File

@@ -25,12 +25,16 @@ SIMPLE USAGE
Get a list of the root directory of an FTP site:
curl ftp://ftp.fts.frontec.se/
curl ftp://cool.haxx.se/
Get the definition of curl from a dictionary:
curl dict://dict.org/m:curl
Fetch two documents at once:
curl ftp://cool.haxx.se/ http://www.weirdserver.com:8000/
DOWNLOAD TO A FILE
Get a web page and store in a local file:
@@ -43,6 +47,10 @@ DOWNLOAD TO A FILE
curl -O http://www.netscape.com/index.html
Fetch two files and store them with their remote names:
curl -O www.haxx.se/index.html -O curl.haxx.se/download.html
USING PASSWORDS
FTP
@@ -455,9 +463,13 @@ EXTRA HEADERS
curl -H "X-you-and-me: yes" www.love.com
This can also be useful in case you want curl to send a different text in
a header than it normally does. The -H header you specify then replaces the
header curl would normally send.
This can also be useful in case you want curl to send a different text in a
header than it normally does. The -H header you specify then replaces the
header curl would normally send. If you replace an internal header with an
empty one, you prevent that header from being sent. To prevent the Host:
header from being used:
curl -H "Host:" www.server.com
FTP and PATH NAMES
@@ -726,16 +738,60 @@ KERBEROS4 FTP TRANSFER
There's no use for a password on the -u switch, but a blank one will make
curl ask for one and you already entered the real password to kauth.
MAILING LIST
TELNET
We have an open mailing list to discuss curl, its development and things
relevant to this.
The curl telnet support is basic and very easy to use. Curl passes all data
it reads from stdin on to the remote server. Connect to a remote telnet
server using a command line similar to:
To subscribe, mail curl-request@contactor.se with "subscribe <fill in your
email address>" in the body.
curl telnet://remote.server.com
To post to the list, mail curl@contactor.se.
And enter the data to pass to the server on stdin. The result will be sent
to stdout or to the file you specify with -o.
To unsubcribe, mail curl-request@contactor.se with "unsubscribe <your
subscribed email address>" in the body.
You might want the -N/--no-buffer option to switch off the buffered output
for slow connections or similar.
NOTE: the telnet protocol does not specify any way to login with a specified
user and password so curl can't do that automatically. To do that, you need
to track when the login prompt is received and send the username and
password accordingly.
PERSISTENT CONNECTIONS
Specifying multiple files on a single command line will make curl transfer
all of them, one after the other in the specified order.
libcurl will attempt to use persistent connections for the transfers so that
the second transfer to the same host can use the same connection that was
already initiated and was left open in the previous transfer. This greatly
decreases connection time for all but the first transfer and it makes a far
better use of the network.
Note that curl cannot use persistent connections for transfers that are used
in subsequent curl invocations. Try to stuff as many URLs as possible on the
same command line if they are using the same host, as that'll make the
transfers faster. If you use an HTTP proxy for file transfers, practically
all transfers will be persistent.
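For example, fetching two documents from the same server in one command lets
the second transfer re-use the connection opened by the first (the paths are
just examples):
  curl http://www.weirdserver.com:8000/one.html http://www.weirdserver.com:8000/two.html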
Persistent connections were introduced in curl 7.7.
MAILING LISTS
For your convenience, we have several open mailing lists to discuss curl,
its development and things relevant to this.
To subscribe to the main curl list, mail curl-request@contactor.se with
"subscribe <fill in your email address>" in the body.
To subscribe to the curl-library users/developers list, follow the
instructions at http://curl.haxx.se/mail/
To subscribe to the curl-announce list, to only get information about new
releases, follow the instructions at http://curl.haxx.se/mail/
To subscribe to the curl-and-PHP list in which using curl with PHP is
discussed, follow the instructions at http://curl.haxx.se/mail/
Please direct curl questions, feature requests and trouble reports to one of
these mailing lists instead of mailing any individual.

View File

@@ -6,44 +6,46 @@
TODO
For the future
Things to do in project cURL. Please tell me what you think, contribute and
send me patches that improve things!
Ok, this is what I wanna do with Curl. Please tell me what you think, and
please don't hesitate to contribute and send me patches that improve this
product! (Yes, you may add things not mentioned here, these are just a
few teasers...)
To do for the 7.7 release:
* Add a special connection-timeout that only goes for the connection phase.
To do for the 7.8 release:
* Make SSL session ids get used if multiple HTTPS documents from the same
host are requested.
* Make the curl tool support URLs that start with @ that would then mean that
the following is a plain list with URLs to download. Thus @filename.txt
reads a list of URLs from a local file. A fancy option would then be to
support @http://whatever.com that would first load a list and then get the
URLs mentioned in the list. I figure -O or something would have to be
implied by such an action.
To do in a future release (random order):
* Improve the regular progress meter when --continue is used. It should be
noticeable when there's a resume going on.
* Document the undocumented libcurl functions: the printf clones (like
curl_msprintf, curl_mfprintf, curl_msnprintf, curl_maprintf and
curl_mvfprintf), the string compare functions (curl_strequal
and curl_strnequal) and the URL escape/unescape functions.
* Add configure options that disables certain protocols in libcurl to
decrease footprint. '--disable-[protocol]' where protocol is http, ftp,
telnet, ldap, dict or file.
* Extend the test suite to include telnet and https. The telnet could just do
ftp or http operations (for which we have test servers) and the https would
probably work against/with some of the openssl tools.
* Add a command line option that allows the output file to get the same time
stamp as the remote file. We already are capable of fetching the remote
stamp as the remote file. libcurl already is capable of fetching the remote
file's date.
* Make the SSL layer option capable of using the Mozilla Security Services as
an alternative to OpenSSL:
http://www.mozilla.org/projects/security/pki/nss/
* Make sure the low-level interface works. highlevel.c should basically be
possible to write using that interface. Document the low-level interface
* Make the easy-interface support multiple file transfers. If they're done
to the same host, they should use persistant connections or similar.
Figure out a nice design for this.
* Add asynchronous name resolving, as this enables full timeout support for
fork() systems.
* Non-blocking connect(), also to make timeouts work on windows.
* Move non-URL related functions that are used by both the lib and the curl
application to a separate "portability lib".
@@ -51,50 +53,40 @@ For the future
something being worked on in this area) and perl (we have seen the first
versions of this!) comes to mind. Python anyone?
* "Content-Encoding: compress/gzip/zlib"
* "Content-Encoding: compress/gzip/zlib" HTTP 1.1 clearly defines how to get
and decode compressed documents. There is the zlib that is pretty good at
decompressing stuff. This work was started in October 1999 but halted again
since it proved more work than we thought. It is still a good idea to
implement though.
HTTP 1.1 clearly defines how to get and decode compressed documents. There
is the zlib that is pretty good at decompressing stuff. This work was
started in October 1999 but halted again since it proved more work than we
thought. It is still a good idea to implement though.
* Authentication: NTLM. It would be cool to support that MS crap called NTLM
* Authentication: NTLM. Support for that MS crap called NTLM
authentication. MS proxies and servers sometimes require that. Since that
protocol is a proprietary one, it involves reverse engineering and network
sniffing. This should however be a library-based functionality. There are a
few different efforts "out there" to make open source HTTP clients support
this and it should be possible to take advantage of other people's hard
work. http://modntlm.sourceforge.net/ is one.
work. http://modntlm.sourceforge.net/ is one. There's a web page at
http://www.innovation.ch/java/ntlm.html that contains detailed reverse-
engineered info.
* RFC2617 compliance, "Digest Access Authentication"
A valid test page seems to exist at:
http://hopf.math.nwu.edu/testpage/digest/
http://hopf.math.nwu.edu/testpage/digest/
And some friendly person's server source code is available at
http://hopf.math.nwu.edu/digestauth/index.html
http://hopf.math.nwu.edu/digestauth/index.html
Then there's the Apache mod_digest source code too of course. It seems as
if Netscape doesn't support this, and not many servers do, although this is
a much better authentication method than the more common "Basic". Basic
sends the password in cleartext over the network, while this "Digest" method
uses a challenge-response protocol which increases security quite a lot.
* Multiple Proxies?
Is there anyone that actually uses serial-proxies? I mean, send CONNECT to
the first proxy to connect to the second proxy to which you send CONNECT to
connect to the remote host (or even more iterations). Is there anyone
wanting curl to support it? (Not that it would be hard, just confusing...)
* Other proxies
Ftp-kind proxy, Socks5, whatever kind of proxies are there?
* IPv6 Awareness and support
Where ever it would fit. configure search for v6-versions of a few
functions and then use them instead is of course the first thing to do...
RFC 2428 "FTP Extensions for IPv6 and NATs" will be interesting. PORT
should be replaced with EPRT for IPv6, and EPSV instead of PASV.
* IPv6 Awareness and support. (This is partly done.) RFC 2428 "FTP
Extensions for IPv6 and NATs" is interesting. PORT should be replaced with
EPRT for IPv6 (done), and EPSV instead of PASV. HTTP proxies are left to
add support for.
* SSL for more protocols, like SSL-FTP...
(http://search.ietf.org/internet-drafts/draft-murray-auth-ftp-ssl-05.txt)
* HTTP POST resume using Range:

View File

@@ -2,7 +2,7 @@
.\" nroff -man curl.1
.\" Written by Daniel Stenberg
.\"
.TH curl 1 "19 January 2001" "Curl 7.6" "Curl Manual"
.TH curl 1 "12 March 2001" "Curl 7.7" "Curl Manual"
.SH NAME
curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or
HTTPS syntax.
@@ -41,6 +41,12 @@ supported at the moment:
Starting with curl 7.6, you can specify any amount of URLs on the command
line. They will be fetched in a sequential manner in the specified order.
Starting with curl 7.7, curl will attempt to re-use connections for multiple
file transfers, so that getting many files from the same server will not do
multiple connects/handshakes. This improves speed. Of course this is only done
on files specified on a single command line and cannot be used between
separate curl invocations.
.SH OPTIONS
.IP "-a/--append"
(FTP)
@@ -425,11 +431,14 @@ If this option is used twice, the second will again disable mute.
When used with -s it makes curl show error message if it fails.
If this option is used twice, the second will again disable show error.
.IP "-t/--upload"
.B Deprecated. Use '-T -' instead.
Transfer the stdin data to the specified file. Curl will read
everything from stdin until EOF and store with the supplied name. If
this is used on a http(s) server, the PUT command will be used.
.IP "-t/--telnet-option <OPT=val>"
Pass options to the telnet protocol. Supported options are:
TTYPE=<term> Sets the terminal type.
XDISPLOC=<X display> Sets the X display location.
NEW_ENV=<var,val> Sets an environment variable.
.IP "-T/--upload-file <file>"
Like -t, but this transfers the specified local file. If there is no
file part in the specified URL, Curl will append the local file
@@ -758,7 +767,7 @@ If you do find bugs, mail them to curl-bug@haxx.se.
- Lars J. Aas <larsa@sim.no>
- Jörn Hartroth <Joern.Hartroth@computer.org>
- Matthew Clarke <clamat@van.maves.ca>
- Linus Nielsen <Linus.Nielsen@haxx.se>
- Linus Nielsen Feltzing <linus@haxx.se>
- Felix von Leitner <felix@convergence.de>
- Dan Zitter <dzitter@zitter.net>
- Jongki Suwandi <Jongki.Suwandi@eng.sun.com>
@@ -788,6 +797,7 @@ If you do find bugs, mail them to curl-bug@haxx.se.
- Loic Dachary <loic@senga.org>
- Robert Weaver <robert.weaver@sabre.com>
- Ingo Ralf Blum <ingoralfblum@ingoralfblum.com>
- Jun-ichiro itojun Hagino <itojun@iijlab.net>
.SH WWW
http://curl.haxx.se

View File

@@ -2,13 +2,13 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_cleanup 3 "22 May 2000" "Curl 7.0" "libcurl Manual"
.TH curl_easy_cleanup 3 "5 March 2001" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_cleanup - End a libcurl "easy" session
curl_easy_cleanup - End a libcurl session
.SH SYNOPSIS
.B #include <curl/easy.h>
.B #include <curl/curl.h>
.sp
.BI "curl_easy_cleanup(CURL *" handle ");
.BI "curl_easy_cleanup(CURL *" handle ");"
.ad
.SH DESCRIPTION
This function must be the last function to call for a curl session. It is the
@@ -17,6 +17,10 @@ opposite of the
function and must be called with the same
.I handle
as input as the curl_easy_init call returned.
This will effectively close all connections this handle has used and possibly
has kept open until now. Don't call this function if you intend to transfer
more files (libcurl 7.7 or later).
.SH RETURN VALUE
None
.SH "SEE ALSO"

View File

@@ -2,11 +2,11 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_init 3 "22 November 2000" "Curl 7.5" "libcurl Manual"
.TH curl_easy_init 3 "5 March 2001" "libcurl 7.6.1" "libcurl Manual"
.SH NAME
curl_easy_getinfo - Extract information from a curl session (added in 7.4)
.SH SYNOPSIS
.B #include <curl/easy.h>
.B #include <curl/curl.h>
.sp
.BI "CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ... );"
.ad
@@ -81,6 +81,14 @@ than one request if FOLLOWLOCATION is true.
Pass a pointer to a long to receive the result of the certification
verification that was requested (using the CURLOPT_SSL_VERIFYPEER option to
curl_easy_setopt). (Added in 7.4.2)
.TP
.B CURLINFO_CONTENT_LENGTH_DOWNLOAD
Pass a pointer to a double to receive the content-length of the download. This
is the value read from the Content-Length: field. (Added in 7.6.1)
.TP
.B CURLINFO_CONTENT_LENGTH_UPLOAD
Pass a pointer to a double to receive the specified size of the upload.
(Added in 7.6.1)
.PP
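As a rough sketch (the variable name is just an example), such a value is
fetched by passing the address of a double:
.nf
  double size;
  curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &size);
.fi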
.SH RETURN VALUE

View File

@@ -2,11 +2,11 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_init 3 "26 September 2000" "Curl 7.0" "libcurl Manual"
.TH curl_easy_init 3 "5 March 2001" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_init - Start a libcurl "easy" session
curl_easy_init - Start a libcurl session
.SH SYNOPSIS
.B #include <curl/easy.h>
.B #include <curl/curl.h>
.sp
.BI "CURL *curl_easy_init( );"
.ad
@@ -19,6 +19,10 @@ when the operation is complete.
On win32 systems, you need to init the winsock stuff manually, libcurl will
not do that for you. WSAStartup() and WSACleanup() should be used accordingly.
Using libcurl 7.7 and later, you should perform all your sequential file
transfers using the same curl handle. This enables libcurl to use persistent
connections where possible.
.SH RETURN VALUE
If this function returns NULL, something went wrong and you cannot use the
other curl functions.

View File

@@ -2,11 +2,11 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_perform 3 "25 Jan 2001" "Curl 7.0" "libcurl Manual"
.TH curl_easy_perform 3 "5 Mar 2001" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_perform - Do the actual transfer in a "easy" session
curl_easy_perform - Perform a file transfer
.SH SYNOPSIS
.B #include <curl/easy.h>
.B #include <curl/curl.h>
.sp
.BI "CURLcode curl_easy_perform(CURL *" handle ");
.ad
@@ -17,15 +17,28 @@ It must be called with the same
.I handle
as input as the curl_easy_init call returned.
You are only allowed to call this function once using the same handle. If you
want to do repeated calls, you must call curl_easy_cleanup and curl_easy_init
again first.
libcurl version 7.7 or later (for older versions see below): You can do any
number of calls to curl_easy_perform() while using the same handle. If you
intend to transfer more than one file, you are even encouraged to do
so. libcurl will then attempt to re-use the same connection for the following
transfers, thus making the operations faster, less CPU intensive and using less
network resources. Just note that you will have to use
.I curl_easy_setopt
between the calls to set options for the following curl_easy_perform.
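As a rough sketch, two transfers re-using the same handle (and thus, when
possible, the same connection) could look like this, with the URLs being just
examples:
.nf
  curl_easy_setopt(handle, CURLOPT_URL, "http://curl.haxx.se/");
  curl_easy_perform(handle);

  curl_easy_setopt(handle, CURLOPT_URL, "http://curl.haxx.se/docs/");
  curl_easy_perform(handle);
.fi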
You must never call this function simultaneously from two places using the
same handle. Let the function return first before invoking it another time. If
you want parallel transfers, you must use several curl handles.
Before libcurl version 7.7: You are only allowed to call this function once
using the same handle. If you want to do repeated calls, you must call
curl_easy_cleanup and curl_easy_init again first.
.SH RETURN VALUE
0 means everything was ok, non-zero means an error occurred as
.I <curl/curl.h>
defines. If the CURLOPT_ERRORBUFFER was set with
.I curl_easy_setopt
there willo be a readable error message in the error buffer when non-zero is
there will be a readable error message in the error buffer when non-zero is
returned.
.SH "SEE ALSO"
.BR curl_easy_init "(3), " curl_easy_setopt "(3), "

View File

@@ -2,11 +2,11 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_setopt 3 "28 November 2000" "Curl 7.5" "libcurl Manual"
.TH curl_easy_setopt 3 "13 March 2001" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_setopt - Set curl easy-session options
.SH SYNOPSIS
.B #include <curl/easy.h>
.B #include <curl/curl.h>
.sp
.BI "CURLcode curl_easy_setopt(CURL *" handle ", CURLoption "option ", ...);
.ad
@@ -20,7 +20,18 @@ followed by a parameter. That parameter can be a long, a function pointer or
an object pointer, all depending on what the option in question expects. Read
this manual carefully as bad input values may cause libcurl to behave badly!
You can only set one option in each function call. A typical application uses
many calls in the setup phase.
many curl_easy_setopt() calls in the setup phase.
NOTE: strings passed to libcurl as 'char *' arguments will not be copied by
the library. Instead you should keep them available until libcurl no longer
needs them. Failing to do so will cause very odd behaviour or even crashes.
Another note: the options set with this function call are valid for the
forthcoming data transfers that are performed when you invoke
.I curl_easy_perform .
The options are not in any way reset between transfers, so if you want
subsequent transfers with different options, you must change them between the
transfers.
The
.I "handle"
@@ -35,6 +46,12 @@ Data pointer to pass instead of FILE * to the file write function. Note that
if you specify the
.I CURLOPT_WRITEFUNCTION
, this is the pointer you'll get as input.
NOTE: If you're using libcurl as a win32 .DLL, you MUST use a
.I CURLOPT_WRITEFUNCTION
if you set the
.I CURLOPT_FILE
option.
.TP
.B CURLOPT_WRITEFUNCTION
Function pointer that should match the following prototype:
@@ -53,6 +70,12 @@ Data pointer to pass instead of FILE * to the file read function. Note that if
you specify the
.I CURLOPT_READFUNCTION
, this is the pointer you'll get as input.
NOTE: If you're using libcurl as a win32 .DLL, you MUST use a
.I CURLOPT_READFUNCTION
if you set the
.I CURLOPT_INFILE
option.
.TP
.B CURLOPT_READFUNCTION
Function pointer that should match the following prototype:
@@ -74,14 +97,16 @@ libcurl what the expected size of the infile is.
.TP
.B CURLOPT_URL
The actual URL to deal with. The parameter should be a char * to a zero
terminated string. NOTE: this option is currently required!
terminated string. The string must remain present until curl no longer needs
it, as it doesn't copy the string. NOTE: this option is required to be set
before curl_easy_perform() is called.
.TP
.B CURLOPT_PROXY
If you need libcurl to use a http proxy to access the outside world, set the
proxy string with this option. The parameter should be a char * to a zero
terminated string. To specify port number in this string, append":[port]" to
terminated string. To specify port number in this string, append :[port] to
the end of the host name. The proxy string may be prefixed with
"[protocol]://" since any such prefix will be ignored.
[protocol]:// since any such prefix will be ignored.
.TP
.B CURLOPT_PROXYPORT
Set this long with this option to set the proxy port to use unless it is
@@ -177,9 +202,11 @@ prompted for it.
.TP
.B CURLOPT_RANGE
Pass a char * as parameter, which should contain the specified range you
want. It should be in the format "X-Y", where X or Y may be left out. The HTTP
want. It should be in the format "X-Y", where X or Y may be left out. HTTP
transfers also support several intervals, separated with commas as in
.I "X-Y,N-M".
.I "X-Y,N-M"
. Using this kind of multiple intervals will cause the HTTP server to send the
response document in pieces.
.TP
.B CURLOPT_ERRORBUFFER
Pass a char * to a buffer that the libcurl may store human readable error
@@ -190,7 +217,8 @@ library. The buffer must be at least CURL_ERROR_SIZE big.
Pass a long as parameter containing the maximum time in seconds that you allow
the libcurl transfer operation to take. Do note that normally, name lookups
may take a considerable time and that limiting the operation to less than a
few minutes risk aborting perfectly normal operations.
few minutes risks aborting perfectly normal operations. This option will cause
curl to use SIGALRM to enable time-outs in system calls.
.TP
.B CURLOPT_POSTFIELDS
Pass a char * as parameter, which should be the full data to post in a HTTP
@@ -398,6 +426,50 @@ Pass a long. The set number will be the redirection limit. If that many
redirections have been followed, the next redirect will cause an error. This
option only makes sense if the CURLOPT_FOLLOWLOCATION is used at the same
time. (Added in 7.5)
.TP
.B CURLOPT_MAXCONNECTS
Pass a long. The set number will be the persistent connection cache size. The
set amount will be the maximum number of simultaneous connections that libcurl
may cache between file transfers. Default is 5, and there isn't much point in
changing this value unless you are perfectly aware of how this works and how it
changes libcurl's behaviour. Note: if you have already performed transfers
with this curl handle, setting a smaller MAXCONNECTS than before may cause
open connections to unnecessarily get closed. (Added in 7.7)
.TP
.B CURLOPT_CLOSEPOLICY
Pass a long. This option sets what policy libcurl should use when the
connection cache is filled and one of the open connections has to be closed to
make room for a new connection. This must be one of the CURLCLOSEPOLICY_*
defines. Use CURLCLOSEPOLICY_LEAST_RECENTLY_USED to make libcurl close the
connection that was least recently used, that connection is also least likely
to be capable of re-use. Use CURLCLOSEPOLICY_OLDEST to make libcurl close the
oldest connection, the one that was created first among the ones in the
connection cache. The other close policies are not supported yet. (Added in 7.7)
.TP
.B CURLOPT_FRESH_CONNECT
Pass a long. Set to non-zero to make the next transfer use a new connection by
force. If the connection cache is full before this connection, one of the
existing connections will be closed according to the set policy. This
option should be used with caution and only if you understand what it
does. Set to 0 to have libcurl attempt re-use of an existing connection.
(Added in 7.7)
.TP
.B CURLOPT_FORBID_REUSE
Pass a long. Set to non-zero to make the next transfer explicitly close the
connection when done. Normally, libcurl keeps all connections alive when done
with one transfer in case there comes a succeeding one that can re-use them.
This option should be used with caution and only if you understand what it
does. Set to 0 to have libcurl keep the connection open for possibly later
re-use. (Added in 7.7)
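As a rough sketch, the connection cache options described here could be
combined like this on one handle (the values are only examples):
.nf
  curl_easy_setopt(handle, CURLOPT_MAXCONNECTS, 10);
  curl_easy_setopt(handle, CURLOPT_CLOSEPOLICY,
                   CURLCLOSEPOLICY_LEAST_RECENTLY_USED);
  curl_easy_setopt(handle, CURLOPT_FORBID_REUSE, 0);
.fi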
.TP
.B CURLOPT_RANDOM_FILE
Pass a char * to a zero terminated file name. The file will be used to read
from to seed the random engine for SSL. The more random the specified file is,
the more secure the SSL connection will become.
.TP
.B CURLOPT_EGDSOCKET
Pass a char * to the zero terminated path name to the Entropy Gathering Daemon
socket. It will be used to seed the random engine for SSL.
.PP
.SH RETURN VALUE
0 means the option was set properly, non-zero means an error as

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_formfree 3 "17 November 2000" "Curl 7.5" "libcurl Manual"
.TH curl_formfree 3 "5 March 2001" "libcurl 7.5" "libcurl Manual"
.SH NAME
curl_formfree - free a previously built multipart/formdata HTTP POST chain
.SH SYNOPSIS

View File

@@ -2,13 +2,13 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_formparse 3 "6 June 2000" "Curl 7.0" "libcurl Manual"
.TH curl_formparse 3 "5 March 2001" "libcurl 7.0" "libcurl Manual"
.SH NAME
curl_formparse - add a section to a multipart/formdata HTTP POST
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURLcode *curl_formparse(char *" string, "struct HttpPost **" firstitem,
.BI "CURLcode curl_formparse(char *" string, "struct HttpPost **" firstitem,
.BI "struct HttpPost ** "lastitem ");"
.ad
.SH DESCRIPTION
@@ -42,14 +42,14 @@ Add a form field named 'name' with the contents as read from the local files
named 'filename1' and 'filename2'. This is identical to the upper, except that
you get the contents of several files in one section.
.TP
.B [name]=@[filename];[content-type]
.B [name]=@[filename];[type=<content-type>]
Whenever you specify a file to read from, you can optionally specify the
content-type as well. The content-type is passed to the server together with
the contents of the file. curl_formparse() will guess content-type for a
number of well-known extensions and otherwise it will set it to binary. You
can override the internal decision by using this option.
.TP
.B [name]=@[filename1,filename2,...];[content-type]
.B [name]=@[filename1,filename2,...];[type=<content-type>]
When you specify several files to read the contents from, you can set the
content-type for all of them in the same way as with a single file.
.PP

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_getdate 3 "2 June 2000" "Curl 7.0" "libcurl Manual"
.TH curl_getdate 3 "5 March 2001" "libcurl 7.0" "libcurl Manual"
.SH NAME
curl_getdate - Convert a date in an ASCII string to the number of seconds since
January 1, 1970

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_getenv 3 "2 June 2000" "Curl 7.0" "libcurl Manual"
.TH curl_getenv 3 "5 March 2001" "libcurl 7.0" "libcurl Manual"
.SH NAME
curl_getenv - return value for environment name
.SH SYNOPSIS

View File

@@ -2,14 +2,14 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_slist_append 3 "2 June 2000" "Curl 7.0" "libcurl Manual"
.TH curl_slist_append 3 "5 March 2001" "libcurl 7.0" "libcurl Manual"
.SH NAME
curl_slist_append - add a string to an slist
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "struct curl_slist *curl_slist_append(struct curl_slit *" list,
.BI "char * "string ");"
.BI "const char * "string ");"
.ad
.SH DESCRIPTION
curl_slist_append() appends a specified string to a linked list of

View File

@@ -2,13 +2,13 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_slist_free_all 3 "2 June 2000" "Curl 7.0" "libcurl Manual"
.TH curl_slist_free_all 3 "5 March 2001" "libcurl 7.0" "libcurl Manual"
.SH NAME
curl_slist_free_all - free an entire curl_slist list
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "void curl_slist_free_all(struct curl_slit *" list);
.BI "void curl_slist_free_all(struct curl_slist *" list);
.ad
.SH DESCRIPTION
curl_slist_free_all() removes all traces of a previously built curl_slist

View File

@@ -2,11 +2,11 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_version 3 "2 June 2000" "Curl 7.0" "libcurl Manual"
.TH curl_version 3 "5 March 2001" "libcurl 7.0" "libcurl Manual"
.SH NAME
curl_version - returns the libcurl version string
.SH SYNOPSIS
.B #include <curl/easy.h>
.B #include <curl/curl.h>
.sp
.BI "char *curl_version( );"
.ad
@@ -14,9 +14,9 @@ curl_version - returns the libcurl version string
Returns a human readable string with the version number of libcurl and some of
its important components (like OpenSSL version).
Do note that this returns the actual running lib's version, you might have
installed a newer lib's include files in your system which may turn your
LIBCURL_VERSION #define value to differ from this result.
Note: this returns the actual running lib's version, you might have installed
a newer lib's include files in your system which may cause your LIBCURL_VERSION
#define value to differ from this result.
.SH RETURN VALUE
A pointer to a zero terminated string.
.SH "SEE ALSO"

View File

@@ -5,7 +5,9 @@
AUTOMAKE_OPTIONS = foreign no-dependencies
EXTRA_DIST =
README curlgtk.c sepheaders.c simple.c
README curlgtk.c sepheaders.c simple.c postit.c \
win32sockets.c persistant.c \
getpageinvar.php simpleget.php simplepost.php
all:
@echo "done"

View File

@@ -6,3 +6,6 @@ advantage of libcurl.
If you end up with other small but still useful example sources, please mail
them for submission in future packages and on the web site.
There are examples for different languages and environments. Browse around to
find those that fit you.

View File

@@ -1,4 +1,12 @@
/* curlgtk.c */
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
/* Copyright (c) 2000 David Odin (aka DindinX) for MandrakeSoft */
/* an attempt to use the curl library in concert with a gtk-threaded application */

View File

@@ -0,0 +1,10 @@
#
# The PHP curl module supports returning the received page in a variable
# if told to.
#
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,"http://www.myurl.com/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
$result=curl_exec ($ch);
curl_close ($ch);

View File

@@ -0,0 +1,53 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
#include <stdio.h>
#include <unistd.h>
#include <curl/curl.h>
/* to make this work under windows, use the win32-functions from the
docs/examples/win32socket.c file as well */
/* This example REQUIRES libcurl 7.7 or later */
#if (LIBCURL_VERSION_NUM < 0x070700)
#error Too old libcurl version, upgrade or stay away.
#endif
int main(int argc, char **argv)
{
CURL *curl;
CURLcode res;
#ifdef MALLOCDEBUG
/* this sends all memory debug messages to a specified logfile */
curl_memdebug("memdump");
#endif
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_VERBOSE, 1);
curl_easy_setopt(curl, CURLOPT_HEADER, 1);
/* get the first document */
curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/");
res = curl_easy_perform(curl);
/* get another document from the same server using the same
connection */
curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/docs/");
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
}
return 0;
}

docs/examples/postit.c
View File

@@ -0,0 +1,71 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* Example code that uploads a file named 'foo' to a remote script that accepts
* "HTML form based" (as described in RFC1738) uploads using HTTP POST.
*
* The imaginary form we'll fill in looks like:
*
* <form method="post" enctype="multipart/form-data" action="examplepost.cgi">
* Enter file: <input type="file" name="sendfile" size="40">
* Enter file name: <input type="text" name="filename" size="30">
* <input type="submit" value="send" name="submit">
* </form>
*
* This exact source code has not been verified to work.
*/
/* to make this work under windows, use the win32-functions from the
win32socket.c file as well */
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
int main(int argc, char **argv)
{
CURL *curl;
CURLcode res;
struct HttpPost *formpost=NULL;
struct HttpPost *lastptr=NULL;
/* Fill in the file upload field */
curl_formparse("sendfile=@foo",
&formpost,
&lastptr);
/* Fill in the filename field */
curl_formparse("filename=foo",
&formpost,
&lastptr);
/* Fill in the submit field too, even if this is rarely needed */
curl_formparse("submit=send",
&formpost,
&lastptr);
curl = curl_easy_init();
if(curl) {
/* what URL that receives this POST */
curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/examplepost.cgi");
curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
/* then cleanup the formpost chain */
curl_formfree(formpost);
}
return 0;
}

View File

@@ -1,3 +1,16 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
/* to make this work under windows, use the win32-functions from the
win32socket.c file as well */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

View File

@@ -1,9 +1,22 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
/* to make this work under windows, use the win32-functions from the
win32socket.c file as well */
int main(int argc, char **argv)
{
CURL *curl;

View File

@@ -0,0 +1,13 @@
#
# A very simple example that gets a HTTP page.
#
$ch = curl_init();
curl_setopt ($ch, CURLOPT_URL, "http://www.zend.com/");
curl_setopt ($ch, CURLOPT_HEADER, 0);
curl_exec ($ch);
curl_close ($ch);

View File

@@ -0,0 +1,12 @@
#
# A very simple PHP example that sends a HTTP POST to a remote site
#
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,"http://www.mysite.com/tester.phtml");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "postvar1=value1&postvar2=value2&postvar3=value3");
curl_exec ($ch);
curl_close ($ch);

View File

@@ -0,0 +1,40 @@
/*
* These are example functions doing socket init that Windows
* require. If you don't use windows, you can safely ignore this crap.
*/

#include <windows.h>     /* WSAStartup(), WSACleanup() and friends */
#include <curl/curl.h>   /* CURLcode */
static void win32_cleanup(void)
{
WSACleanup();
}
static CURLcode win32_init(void)
{
WORD wVersionRequested;
WSADATA wsaData;
int err;
wVersionRequested = MAKEWORD(1, 1);
err = WSAStartup(wVersionRequested, &wsaData);
if (err != 0)
/* Tell the user that we couldn't find a useable */
/* winsock.dll. */
return 1;
/* Confirm that the Windows Sockets DLL supports 1.1.*/
/* Note that if the DLL supports versions greater */
/* than 1.1 in addition to 1.1, it will still return */
/* 1.1 in wVersion since that is the version we */
/* requested. */
if ( LOBYTE( wsaData.wVersion ) != 1 ||
HIBYTE( wsaData.wVersion ) != 1 ) {
/* Tell the user that we couldn't find a useable */
/* winsock.dll. */
WSACleanup();
return 1;
}
return 0; /* 0 is ok */
}

View File

@@ -97,68 +97,57 @@ typedef int (*curl_passwd_callback)(void *clientp,
typedef enum {
CURLE_OK = 0,
CURLE_UNSUPPORTED_PROTOCOL,
CURLE_FAILED_INIT,
CURLE_URL_MALFORMAT,
CURLE_URL_MALFORMAT_USER,
CURLE_COULDNT_RESOLVE_PROXY,
CURLE_COULDNT_RESOLVE_HOST,
CURLE_COULDNT_CONNECT,
CURLE_FTP_WEIRD_SERVER_REPLY,
CURLE_FTP_ACCESS_DENIED,
CURLE_FTP_USER_PASSWORD_INCORRECT,
CURLE_FTP_WEIRD_PASS_REPLY,
CURLE_FTP_WEIRD_USER_REPLY,
CURLE_FTP_WEIRD_PASV_REPLY,
CURLE_FTP_WEIRD_227_FORMAT,
CURLE_FTP_CANT_GET_HOST,
CURLE_FTP_CANT_RECONNECT,
CURLE_FTP_COULDNT_SET_BINARY,
CURLE_PARTIAL_FILE,
CURLE_FTP_COULDNT_RETR_FILE,
CURLE_FTP_WRITE_ERROR,
CURLE_FTP_QUOTE_ERROR,
CURLE_HTTP_NOT_FOUND,
CURLE_WRITE_ERROR,
CURLE_UNSUPPORTED_PROTOCOL, /* 1 */
CURLE_FAILED_INIT, /* 2 */
CURLE_URL_MALFORMAT, /* 3 */
CURLE_URL_MALFORMAT_USER, /* 4 */
CURLE_COULDNT_RESOLVE_PROXY, /* 5 */
CURLE_COULDNT_RESOLVE_HOST, /* 6 */
CURLE_COULDNT_CONNECT, /* 7 */
CURLE_FTP_WEIRD_SERVER_REPLY, /* 8 */
CURLE_FTP_ACCESS_DENIED, /* 9 */
CURLE_FTP_USER_PASSWORD_INCORRECT, /* 10 */
CURLE_FTP_WEIRD_PASS_REPLY, /* 11 */
CURLE_FTP_WEIRD_USER_REPLY, /* 12 */
CURLE_FTP_WEIRD_PASV_REPLY, /* 13 */
CURLE_FTP_WEIRD_227_FORMAT, /* 14 */
CURLE_FTP_CANT_GET_HOST, /* 15 */
CURLE_FTP_CANT_RECONNECT, /* 16 */
CURLE_FTP_COULDNT_SET_BINARY, /* 17 */
CURLE_PARTIAL_FILE, /* 18 */
CURLE_FTP_COULDNT_RETR_FILE, /* 19 */
CURLE_FTP_WRITE_ERROR, /* 20 */
CURLE_FTP_QUOTE_ERROR, /* 21 */
CURLE_HTTP_NOT_FOUND, /* 22 */
CURLE_WRITE_ERROR, /* 23 */
CURLE_MALFORMAT_USER, /* 24 - user name is illegally specified */
CURLE_FTP_COULDNT_STOR_FILE, /* 25 - failed FTP upload */
CURLE_READ_ERROR, /* 26 - couldn't open/read from file */
CURLE_OUT_OF_MEMORY, /* 27 */
CURLE_OPERATION_TIMEOUTED, /* 28 - the timeout time was reached */
CURLE_FTP_COULDNT_SET_ASCII, /* 29 - TYPE A failed */
CURLE_FTP_PORT_FAILED, /* 30 - FTP PORT operation failed */
CURLE_FTP_COULDNT_USE_REST, /* 31 - the REST command failed */
CURLE_FTP_COULDNT_GET_SIZE, /* 32 - the SIZE command failed */
CURLE_HTTP_RANGE_ERROR, /* 33 - RANGE "command" didn't work */
CURLE_HTTP_POST_ERROR, /* 34 */
CURLE_SSL_CONNECT_ERROR, /* 35 - wrong when connecting with SSL */
CURLE_FTP_BAD_DOWNLOAD_RESUME, /* 36 - couldn't resume download */
CURLE_FILE_COULDNT_READ_FILE, /* 37 */
CURLE_LDAP_CANNOT_BIND, /* 38 */
CURLE_LDAP_SEARCH_FAILED, /* 39 */
CURLE_LIBRARY_NOT_FOUND, /* 40 */
CURLE_FUNCTION_NOT_FOUND, /* 41 */
CURLE_ABORTED_BY_CALLBACK, /* 42 */
CURLE_BAD_FUNCTION_ARGUMENT, /* 43 */
CURLE_BAD_CALLING_ORDER, /* 44 */
CURLE_HTTP_PORT_FAILED, /* 45 - HTTP Interface operation failed */
CURLE_BAD_PASSWORD_ENTERED, /* 46 - my_getpass() returns fail */
CURLE_TOO_MANY_REDIRECTS , /* 47 - catch endless re-direct loops */
CURLE_UNKNOWN_TELNET_OPTION, /* 48 - User specified an unknown option */
CURLE_TELNET_OPTION_SYNTAX , /* 49 - Malformed telnet option */
CURLE_MALFORMAT_USER, /* the user name is illegally specified */
CURLE_FTP_COULDNT_STOR_FILE, /* failed FTP upload */
CURLE_READ_ERROR, /* could open/read from file */
CURLE_OUT_OF_MEMORY,
CURLE_OPERATION_TIMEOUTED, /* the timeout time was reached */
CURLE_FTP_COULDNT_SET_ASCII, /* TYPE A failed */
CURLE_FTP_PORT_FAILED, /* FTP PORT operation failed */
CURLE_FTP_COULDNT_USE_REST, /* the REST command failed */
CURLE_FTP_COULDNT_GET_SIZE, /* the SIZE command failed */
CURLE_HTTP_RANGE_ERROR, /* The RANGE "command" didn't seem to work */
CURLE_HTTP_POST_ERROR,
CURLE_SSL_CONNECT_ERROR, /* something was wrong when connecting with SSL */
CURLE_FTP_BAD_DOWNLOAD_RESUME, /* couldn't resume download */
CURLE_FILE_COULDNT_READ_FILE,
CURLE_LDAP_CANNOT_BIND,
CURLE_LDAP_SEARCH_FAILED,
CURLE_LIBRARY_NOT_FOUND,
CURLE_FUNCTION_NOT_FOUND,
CURLE_ABORTED_BY_CALLBACK,
CURLE_BAD_FUNCTION_ARGUMENT,
CURLE_BAD_CALLING_ORDER,
CURLE_HTTP_PORT_FAILED, /* HTTP Interface operation failed */
CURLE_BAD_PASSWORD_ENTERED, /* when the my_getpass() returns fail */
CURLE_TOO_MANY_REDIRECTS , /* catch endless re-direct loops */
CURL_LAST
CURL_LAST /* never use! */
} CURLcode;
/* This is just to make older programs not break: */
@@ -406,6 +395,36 @@ typedef enum {
document! Pass a NULL to shut it off. */
CINIT(FILETIME, OBJECTPOINT, 69),
/* This points to a linked list of telnet options */
CINIT(TELNETOPTIONS, OBJECTPOINT, 70),
/* Max amount of cached alive connections */
CINIT(MAXCONNECTS, LONG, 71),
/* What policy to use when closing connections when the cache is filled
up */
CINIT(CLOSEPOLICY, LONG, 72),
/* Callback to use when CURLCLOSEPOLICY_CALLBACK is set */
CINIT(CLOSEFUNCTION, FUNCTIONPOINT, 73),
/* Set to explicitly use a new connection for the upcoming transfer.
Do not use this unless you're absolutely sure of this, as it makes the
operation slower and is less friendly for the network. */
CINIT(FRESH_CONNECT, LONG, 74),
/* Set to explicitly forbid the upcoming transfer's connection to be re-used
when done. Do not use this unless you're absolutely sure of this, as it
makes the operation slower and is less friendly for the network. */
CINIT(FORBID_REUSE, LONG, 75),
/* Set to a file name that contains random data for libcurl to use to
seed the random engine when doing SSL connects. */
CINIT(RANDOM_FILE, OBJECTPOINT, 76),
/* Set to the Entropy Gathering Daemon socket pathname */
CINIT(EGDSOCKET, OBJECTPOINT, 77),
CURLOPT_LASTENTRY /* the last unused */
} CURLoption;
@@ -431,10 +450,10 @@ typedef enum {
NOTE: they return TRUE if the strings match *case insensitively*.
*/
extern int (Curl_strequal)(const char *s1, const char *s2);
extern int (Curl_strnequal)(const char *s1, const char *s2, size_t n);
#define strequal(a,b) Curl_strequal(a,b)
#define strnequal(a,b,c) Curl_strnequal(a,b,c)
extern int (curl_strequal)(const char *s1, const char *s2);
extern int (curl_strnequal)(const char *s1, const char *s2, size_t n);
#define strequal(a,b) curl_strequal(a,b)
#define strnequal(a,b,c) curl_strnequal(a,b,c)
/* external form function */
int curl_formparse(char *string,
@@ -452,8 +471,8 @@ char *curl_getenv(char *variable);
char *curl_version(void);
/* This is the version number */
#define LIBCURL_VERSION "7.6.1-pre2"
#define LIBCURL_VERSION_NUM 0x070601
#define LIBCURL_VERSION "7.7-beta3"
#define LIBCURL_VERSION_NUM 0x070700
/* linked-list structure for the CURLOPT_QUOTE option (and other) */
struct curl_slist {
@@ -461,184 +480,8 @@ struct curl_slist {
struct curl_slist *next;
};
struct curl_slist *curl_slist_append(struct curl_slist *list, char *data);
void curl_slist_free_all(struct curl_slist *list);
/*
* NAME curl_init()
*
* DESCRIPTION
*
* Inits libcurl globally. This must be used before any libcurl calls can
* be used. This may install global plug-ins or whatever. (This does not
* do winsock inits in Windows.)
*
* EXAMPLE
*
* curl_init();
*
*/
CURLcode curl_init(void);
/*
* NAME curl_init()
*
* DESCRIPTION
*
* Frees libcurl globally. This must be used after all libcurl calls have
* been used. This may remove global plug-ins or whatever. (This does not
* do winsock cleanups in Windows.)
*
* EXAMPLE
*
* curl_free();
*
*/
void curl_free(void);
/*
* NAME curl_open()
*
* DESCRIPTION
*
* Opens a general curl session. This call does not connect or do anything
* on the network. The specified URL is only required so that curl can
* figure out which protocol to "activate".
*
* A session should be looked upon as a series of requests to a single host. A
* session interacts with one host only, using one single protocol.
*
* The URL is not required. If set to "" or NULL, it can still be set later
* using the curl_setopt() function. If the curl_connect() function is called
* without the URL being known, it will return an error.
*
* EXAMPLE
*
* CURLcode result;
* CURL *curl;
* result = curl_open(&curl, "http://curl.haxx.nu/libcurl/");
* if(result != CURLE_OK) {
* return result;
* }
* */
CURLcode curl_open(CURL **curl, char *url);
/*
* NAME curl_setopt()
*
* DESCRIPTION
*
* Sets a particular option to the specified value.
*
* EXAMPLE
*
* CURL *curl;
* curl_setopt(curl, CURLOPT_FOLLOWLOCATION, TRUE);
*/
CURLcode curl_setopt(CURL *handle, CURLoption option, ...);
/*
* NAME curl_close()
*
* DESCRIPTION
*
* Closes a session previously opened with curl_open()
*
* EXAMPLE
*
* CURL *curl;
* CURLcode result;
*
* result = curl_close(curl);
*/
CURLcode curl_close(CURL *curl); /* the opposite of curl_open() */
CURLcode curl_read(CURLconnect *c_conn, char *buf, size_t buffersize,
ssize_t *n);
CURLcode curl_write(CURLconnect *c_conn, char *buf, size_t amount,
size_t *n);
/*
* NAME curl_connect()
*
* DESCRIPTION
*
* Connects to the peer server and performs the initial setup. This function
* writes a connect handle to its second argument that is a unique handle for
* this connect. This allows multiple connects from the same handle returned
* by curl_open().
*
* EXAMPLE
*
* CURLcode result;
* CURL *curl;
* CURLconnect *connect;
* result = curl_connect(curl, &connect);
*/
CURLcode curl_connect(CURL *curl, CURLconnect **in_connect);
/*
* NAME curl_do()
*
* DESCRIPTION
*
* (Note: May 3rd 2000: this function does not currently allow you to
* specify a document, it will use the one set previously)
*
* This function asks for the particular document, file or resource that
* resides on the server we have connected to. You may specify a full URL,
* just an absolute path or even a relative path. That means, if you're just
* getting one file from the remote site, you can use the same URL as input
* for both curl_open() as well as for this function.
*
* In the event there are host name, port number, user name or password parts
* in the URL, you can use the 'flags' argument to ignore them completely or,
* at your choice, make the function fail if you're trying to get a URL from a
* different host than the one you connected to with curl_connect().
*
* You can only get one document at a time over the same connection. Once a
* document has been received you can, however, issue another request.
*
* When the transfer is done, curl_done() MUST be called.
*
* EXAMPLE
*
* CURLcode result;
* CURLconnect *connect;
* result = curl_do(connect); */
CURLcode curl_do(CURLconnect *in_conn);
/*
* NAME curl_done()
*
* DESCRIPTION
*
* When the transfer following a curl_do() call is done, this function should
* get called.
*
* EXAMPLE
*
* CURLcode result;
* CURLconnect *connect;
* result = curl_done(connect); */
CURLcode curl_done(CURLconnect *connect);
/*
* NAME curl_disconnect()
*
* DESCRIPTION
*
* Disconnects from the peer server and performs connection cleanup.
*
* EXAMPLE
*
* CURLcode result;
* CURLconnect *connect;
* result = curl_disconnect(connect); */
CURLcode curl_disconnect(CURLconnect *connect);
struct curl_slist *curl_slist_append(struct curl_slist *, const char *);
void curl_slist_free_all(struct curl_slist *);
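A minimal usage sketch of the two list functions declared above, the way a program would typically build a command list for the CURLOPT_QUOTE option (the FTP commands are made up):

#include <curl/curl.h>

int main(void)
{
  struct curl_slist *quote = NULL;

  /* the first call creates the list, later calls append to it */
  quote = curl_slist_append(quote, "SITE CHMOD 644 file.txt");
  quote = curl_slist_append(quote, "DELE oldfile.txt");

  /* ... the list would be handed to a handle with CURLOPT_QUOTE here ... */

  curl_slist_free_all(quote); /* frees every node and its copied string */
  return 0;
}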
/*
* NAME curl_getdate()
@@ -676,22 +519,28 @@ typedef enum {
CURLINFO_SSL_VERIFYRESULT = CURLINFO_LONG + 13,
CURLINFO_FILETIME = CURLINFO_LONG + 14,
CURLINFO_LASTONE = 15
CURLINFO_CONTENT_LENGTH_DOWNLOAD = CURLINFO_DOUBLE + 15,
CURLINFO_CONTENT_LENGTH_UPLOAD = CURLINFO_DOUBLE + 16,
CURLINFO_LASTONE = 17
} CURLINFO;
/*
* NAME curl_getinfo()
*
* DESCRIPTION
*
* Request internal information from the curl session with this function.
* The third argument MUST be a pointer to a long or a pointer to a char *.
* The data pointed to will be filled in accordingly and can be relied upon
* only if the function returns CURLE_OK.
* This function is intended to be used *AFTER* a performed transfer; all
* results are undefined before the transfer has completed.
*/
CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...);
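A sketch of reading one of the new double-typed values through the easy-interface wrapper (which, per the lib/easy.c hunks further down, forwards straight to the getinfo code); the URL is only an example:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  double dl_size = 0.0;

  if(!curl)
    return 1;
  curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.nu/");
  if((curl_easy_perform(curl) == CURLE_OK) &&
     (curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &dl_size) == CURLE_OK))
    printf("Content-Length: %.0f\n", dl_size); /* only meaningful after the transfer */
  curl_easy_cleanup(curl);
  return 0;
}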
/* unfortunately, the easy.h include file needs the options and info stuff
before it can be included! */
#include <curl/easy.h> /* nothing in curl is fun without the easy stuff */
typedef enum {
CURLCLOSEPOLICY_NONE, /* first, never use this */
CURLCLOSEPOLICY_OLDEST,
CURLCLOSEPOLICY_LEAST_RECENTLY_USED,
CURLCLOSEPOLICY_LEAST_TRAFFIC,
CURLCLOSEPOLICY_SLOWEST,
CURLCLOSEPOLICY_CALLBACK,
CURLCLOSEPOLICY_LAST /* last, never use this */
} curl_closepolicy;
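A sketch pairing CURLOPT_MAXCONNECTS with one of the policies from the enum above; the cache size and URL are arbitrary examples:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(!curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 5L); /* cache at most 5 alive connections */
  curl_easy_setopt(curl, CURLOPT_CLOSEPOLICY, (long)CURLCLOSEPOLICY_LEAST_RECENTLY_USED);

  curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.nu/");
  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  return 0;
}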
#ifdef __cplusplus
}


@@ -6,7 +6,7 @@ AUTOMAKE_OPTIONS = foreign
EXTRA_DIST = getdate.y \
Makefile.b32 Makefile.b32.resp Makefile.m32 Makefile.vc6 \
libcurl.def dllinit.c
libcurl.def dllinit.c curllib.dsp curllib.dsw
lib_LTLIBRARIES = libcurl.la
@@ -16,7 +16,7 @@ lib_LTLIBRARIES = libcurl.la
INCLUDES = -I$(top_srcdir)/include
libcurl_la_LDFLAGS = -version-info 1:0:0
libcurl_la_LDFLAGS = -version-info 2:0:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1.
@@ -55,10 +55,11 @@ dict.c ftp.h if2ip.c speedcheck.c url.h \
dict.h getdate.c if2ip.h speedcheck.h urldata.h \
getdate.h ldap.c ssluse.c version.c \
getenv.c ldap.h ssluse.h \
escape.c getenv.h mprintf.c telnet.c \
escape.c mprintf.c telnet.c \
escape.h getpass.c netrc.c telnet.h \
getinfo.c transfer.c strequal.c strequal.h easy.c \
security.h security.c krb4.c krb4.h memdebug.c memdebug.h inet_ntoa_r.h
security.h security.c krb4.c krb4.h memdebug.c memdebug.h inet_ntoa_r.h \
http_chunks.c http_chunks.h
noinst_HEADERS = setup.h transfer.h


@@ -33,13 +33,13 @@ libcurl_a_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c \
urldata.h transfer.c getdate.h ldap.c ssluse.c version.c transfer.h getenv.c \
ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c \
telnet.h getinfo.c strequal.c strequal.h easy.c security.h \
security.c krb4.c
security.c krb4.h krb4.c memdebug.h memdebug.c inet_ntoa_r.h http_chunks.h http_chunks.c
libcurl_a_OBJECTS = file.o timeval.o base64.o hostip.o progress.o \
formdata.o cookie.o http.o sendf.o ftp.o url.o dict.o if2ip.o \
speedcheck.o getdate.o transfer.o ldap.o ssluse.o version.o \
getenv.o escape.o mprintf.o telnet.o getpass.o netrc.o getinfo.o \
strequal.o easy.o security.o krb4.o
strequal.o easy.o security.o krb4.o memdebug.o http_chunks.o
LIBRARIES = $(libcurl_a_LIBRARIES)
SOURCES = $(libcurl_a_SOURCES)

lib/curllib.dsp (new file, 367 lines)

@@ -0,0 +1,367 @@
# Microsoft Developer Studio Project File - Name="curllib" - Package Owner=<4>
# Microsoft Developer Studio Generated Build File, Format Version 6.00
# ** DO NOT EDIT **
# TARGTYPE "Win32 (x86) Dynamic-Link Library" 0x0102
CFG=curllib - Win32 Debug
!MESSAGE This is not a valid makefile. To build this project using NMAKE,
!MESSAGE use the Export Makefile command and run
!MESSAGE
!MESSAGE NMAKE /f "curllib.mak".
!MESSAGE
!MESSAGE You can specify a configuration when running NMAKE
!MESSAGE by defining the macro CFG on the command line. For example:
!MESSAGE
!MESSAGE NMAKE /f "curllib.mak" CFG="curllib - Win32 Debug"
!MESSAGE
!MESSAGE Possible choices for configuration are:
!MESSAGE
!MESSAGE "curllib - Win32 Release" (based on "Win32 (x86) Dynamic-Link Library")
!MESSAGE "curllib - Win32 Debug" (based on "Win32 (x86) Dynamic-Link Library")
!MESSAGE
# Begin Project
# PROP AllowPerConfigDependencies 0
# PROP Scc_ProjName ""
# PROP Scc_LocalPath ""
CPP=cl.exe
MTL=midl.exe
RSC=rc.exe
!IF "$(CFG)" == "curllib - Win32 Release"
# PROP BASE Use_MFC 0
# PROP BASE Use_Debug_Libraries 0
# PROP BASE Output_Dir "Release"
# PROP BASE Intermediate_Dir "Release"
# PROP BASE Target_Dir ""
# PROP Use_MFC 0
# PROP Use_Debug_Libraries 0
# PROP Output_Dir "Release"
# PROP Intermediate_Dir "Release"
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /MT /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /YX /FD /c
# ADD CPP /nologo /MT /W3 /GX /O2 /I "C:\jdk1.3.0_01\include" /I "C:\jdk1.3.0_01\include\win32" /I "..\include" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /YX /FD /c
# ADD BASE MTL /nologo /D "NDEBUG" /mktyplib203 /win32
# ADD MTL /nologo /D "NDEBUG" /mktyplib203 /win32
# ADD BASE RSC /l 0x409 /d "NDEBUG"
# ADD RSC /l 0x409 /d "NDEBUG"
BSC32=bscmake.exe
# ADD BASE BSC32 /nologo
# ADD BSC32 /nologo
LINK32=link.exe
# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /dll /machine:I386
# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib wsock32.lib /nologo /dll /machine:I386 /out:"Release/curl.dll"
!ELSEIF "$(CFG)" == "curllib - Win32 Debug"
# PROP BASE Use_MFC 0
# PROP BASE Use_Debug_Libraries 1
# PROP BASE Output_Dir "Debug"
# PROP BASE Intermediate_Dir "Debug"
# PROP BASE Target_Dir ""
# PROP Use_MFC 0
# PROP Use_Debug_Libraries 1
# PROP Output_Dir "Debug"
# PROP Intermediate_Dir "Debug"
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /MTd /W3 /Gm /GX /ZI /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /YX /FD /GZ /c
# ADD CPP /nologo /MTd /W3 /Gm /GX /ZI /Od /I "C:\jdk1.3.0_01\include" /I "C:\jdk1.3.0_01\include\win32" /I "..\include" /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /YX /FD /GZ /c
# ADD BASE MTL /nologo /D "_DEBUG" /mktyplib203 /win32
# ADD MTL /nologo /D "_DEBUG" /mktyplib203 /win32
# ADD BASE RSC /l 0x409 /d "_DEBUG"
# ADD RSC /l 0x409 /d "_DEBUG"
BSC32=bscmake.exe
# ADD BASE BSC32 /nologo
# ADD BSC32 /nologo
LINK32=link.exe
# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /dll /debug /machine:I386 /pdbtype:sept
# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib wsock32.lib /nologo /dll /debug /machine:I386 /out:"Debug/curl.dll" /pdbtype:sept
!ENDIF
# Begin Target
# Name "curllib - Win32 Release"
# Name "curllib - Win32 Debug"
# Begin Group "Source Files"
# PROP Default_Filter "cpp;c;cxx;rc;def;r;odl;idl;hpj;bat"
# Begin Source File
SOURCE=.\base64.c
# End Source File
# Begin Source File
SOURCE=.\cookie.c
# End Source File
# Begin Source File
SOURCE=.\dict.c
# End Source File
# Begin Source File
SOURCE=.\dllinit.c
# End Source File
# Begin Source File
SOURCE=.\easy.c
# End Source File
# Begin Source File
SOURCE=.\easyswig.c
# End Source File
# Begin Source File
SOURCE=.\easyswig_wrap.c
# End Source File
# Begin Source File
SOURCE=.\escape.c
# End Source File
# Begin Source File
SOURCE=.\file.c
# End Source File
# Begin Source File
SOURCE=.\formdata.c
# End Source File
# Begin Source File
SOURCE=.\ftp.c
# End Source File
# Begin Source File
SOURCE=.\getdate.c
# End Source File
# Begin Source File
SOURCE=.\getenv.c
# End Source File
# Begin Source File
SOURCE=.\getinfo.c
# End Source File
# Begin Source File
SOURCE=.\getpass.c
# End Source File
# Begin Source File
SOURCE=.\hostip.c
# End Source File
# Begin Source File
SOURCE=.\http.c
# End Source File
# Begin Source File
SOURCE=.\if2ip.c
# End Source File
# Begin Source File
SOURCE=.\krb4.c
# End Source File
# Begin Source File
SOURCE=.\ldap.c
# End Source File
# Begin Source File
SOURCE=.\libcurl.def
# End Source File
# Begin Source File
SOURCE=.\memdebug.c
# End Source File
# Begin Source File
SOURCE=.\mprintf.c
# End Source File
# Begin Source File
SOURCE=.\netrc.c
# End Source File
# Begin Source File
SOURCE=.\progress.c
# End Source File
# Begin Source File
SOURCE=.\security.c
# End Source File
# Begin Source File
SOURCE=.\sendf.c
# End Source File
# Begin Source File
SOURCE=.\speedcheck.c
# End Source File
# Begin Source File
SOURCE=.\ssluse.c
# End Source File
# Begin Source File
SOURCE=.\strequal.c
# End Source File
# Begin Source File
SOURCE=.\telnet.c
# End Source File
# Begin Source File
SOURCE=.\timeval.c
# End Source File
# Begin Source File
SOURCE=.\transfer.c
# End Source File
# Begin Source File
SOURCE=.\url.c
# End Source File
# Begin Source File
SOURCE=.\version.c
# End Source File
# End Group
# Begin Group "Header Files"
# PROP Default_Filter "h;hpp;hxx;hm;inl"
# Begin Source File
SOURCE=.\arpa_telnet.h
# End Source File
# Begin Source File
SOURCE=.\base64.h
# End Source File
# Begin Source File
SOURCE=.\cookie.h
# End Source File
# Begin Source File
SOURCE=.\dict.h
# End Source File
# Begin Source File
SOURCE=.\escape.h
# End Source File
# Begin Source File
SOURCE=.\file.h
# End Source File
# Begin Source File
SOURCE=.\formdata.h
# End Source File
# Begin Source File
SOURCE=.\ftp.h
# End Source File
# Begin Source File
SOURCE=.\getdate.h
# End Source File
# Begin Source File
SOURCE=.\getenv.h
# End Source File
# Begin Source File
SOURCE=.\getpass.h
# End Source File
# Begin Source File
SOURCE=.\hostip.h
# End Source File
# Begin Source File
SOURCE=.\http.h
# End Source File
# Begin Source File
SOURCE=.\if2ip.h
# End Source File
# Begin Source File
SOURCE=.\inet_ntoa_r.h
# End Source File
# Begin Source File
SOURCE=.\krb4.h
# End Source File
# Begin Source File
SOURCE=.\ldap.h
# End Source File
# Begin Source File
SOURCE=.\memdebug.h
# End Source File
# Begin Source File
SOURCE=.\netrc.h
# End Source File
# Begin Source File
SOURCE=.\progress.h
# End Source File
# Begin Source File
SOURCE=.\security.h
# End Source File
# Begin Source File
SOURCE=.\sendf.h
# End Source File
# Begin Source File
SOURCE=.\setup.h
# End Source File
# Begin Source File
SOURCE=.\speedcheck.h
# End Source File
# Begin Source File
SOURCE=.\ssluse.h
# End Source File
# Begin Source File
SOURCE=.\strequal.h
# End Source File
# Begin Source File
SOURCE=.\telnet.h
# End Source File
# Begin Source File
SOURCE=.\timeval.h
# End Source File
# Begin Source File
SOURCE=.\transfer.h
# End Source File
# Begin Source File
SOURCE=.\url.h
# End Source File
# Begin Source File
SOURCE=.\urldata.h
# End Source File
# End Group
# Begin Group "Resource Files"
# PROP Default_Filter "ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe"
# End Group
# End Target
# End Project

lib/curllib.dsw (new file, 29 lines)

@@ -0,0 +1,29 @@
Microsoft Developer Studio Workspace File, Format Version 6.00
# WARNING: DO NOT EDIT OR DELETE THIS WORKSPACE FILE!
###############################################################################
Project: "curllib"=".\curllib.dsp" - Package Owner=<4>
Package=<5>
{{{
}}}
Package=<4>
{{{
}}}
###############################################################################
Global:
Package=<5>
{{{
}}}
Package=<3>
{{{
}}}
###############################################################################


@@ -141,7 +141,7 @@ CURLcode Curl_dict(struct connectdata *conn)
nth = atoi(nthdef);
}
Curl_sendf(data->firstsocket, conn,
Curl_sendf(conn->firstsocket, conn,
"CLIENT " LIBCURL_NAME " " LIBCURL_VERSION "\n"
"MATCH "
"%s " /* database */
@@ -154,7 +154,7 @@ CURLcode Curl_dict(struct connectdata *conn)
word
);
result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
result = Curl_Transfer(conn, conn->firstsocket, -1, FALSE, bytecount,
-1, NULL); /* no upload */
if(result)
@@ -191,7 +191,7 @@ CURLcode Curl_dict(struct connectdata *conn)
nth = atoi(nthdef);
}
Curl_sendf(data->firstsocket, conn,
Curl_sendf(conn->firstsocket, conn,
"CLIENT " LIBCURL_NAME " " LIBCURL_VERSION "\n"
"DEFINE "
"%s " /* database */
@@ -202,7 +202,7 @@ CURLcode Curl_dict(struct connectdata *conn)
word
);
result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
result = Curl_Transfer(conn, conn->firstsocket, -1, FALSE, bytecount,
-1, NULL); /* no upload */
if(result)
@@ -220,13 +220,13 @@ CURLcode Curl_dict(struct connectdata *conn)
if (ppath[i] == ':')
ppath[i] = ' ';
}
Curl_sendf(data->firstsocket, conn,
Curl_sendf(conn->firstsocket, conn,
"CLIENT " LIBCURL_NAME " " LIBCURL_VERSION "\n"
"%s\n"
"QUIT\n",
ppath);
result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
result = Curl_Transfer(conn, conn->firstsocket, -1, FALSE, bytecount,
-1, NULL);
if(result)


@@ -83,15 +83,11 @@ CURL *curl_easy_init(void)
CURLcode res;
struct UrlData *data;
if(curl_init())
return NULL;
/* We use curl_open() with undefined URL so far */
res = curl_open((CURL **)&data, NULL);
res = Curl_open((CURL **)&data, NULL);
if(res != CURLE_OK)
return NULL;
data->interf = CURLI_EASY; /* mark it as an easy one */
/* SAC */
data->device = NULL;
@@ -119,16 +115,16 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption tag, ...)
if(tag < CURLOPTTYPE_OBJECTPOINT) {
/* This is a LONG type */
param_long = va_arg(arg, long);
curl_setopt(data, tag, param_long);
Curl_setopt(data, tag, param_long);
}
else if(tag < CURLOPTTYPE_FUNCTIONPOINT) {
/* This is a object pointer type */
param_obj = va_arg(arg, void *);
curl_setopt(data, tag, param_obj);
Curl_setopt(data, tag, param_obj);
}
else {
param_func = va_arg(arg, func_T );
curl_setopt(data, tag, param_func);
Curl_setopt(data, tag, param_func);
}
va_end(arg);
@@ -137,13 +133,12 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption tag, ...)
CURLcode curl_easy_perform(CURL *curl)
{
return curl_transfer(curl);
return Curl_perform(curl);
}
void curl_easy_cleanup(CURL *curl)
{
curl_close(curl);
curl_free();
Curl_close(curl);
}
CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...)
@@ -153,5 +148,5 @@ CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...)
va_start(arg, info);
paramp = va_arg(arg, void *);
return curl_getinfo(curl, info, paramp);
return Curl_getinfo(curl, info, paramp);
}


@@ -78,7 +78,7 @@ char *curl_unescape(char *string, int length)
char *ns = malloc(alloc);
unsigned char in;
int index=0;
int hex;
unsigned int hex;
char querypart=FALSE; /* everything to the right of a '?' letter is
the "query part" where '+' should become ' '.
RFC 2316, section 3.10 */
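For reference, a small standalone use of the public unescape function touched above; a length of 0 means "use strlen()", which is how the file.c hunk further down calls it:

#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

int main(void)
{
  char *plain = curl_unescape("hello%20world%21", 0); /* returns a malloc()ed string */
  if(plain) {
    printf("%s\n", plain); /* prints: hello world! */
    free(plain);
  }
  return 0;
}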


@@ -91,29 +91,24 @@
#include "memdebug.h"
#endif
CURLcode file(struct connectdata *conn)
/* Emulate a connect-then-transfer protocol. We connect to the file here */
CURLcode Curl_file_connect(struct connectdata *conn)
{
/* This implementation ignores the host name in conformance with
RFC 1738. Only local files (reachable via the standard file system)
are supported. This means that files on remotely mounted directories
(via NFS, Samba, NT sharing) can be accessed through a file:// URL
*/
CURLcode res = CURLE_OK;
char *path = conn->path;
struct stat statbuf;
size_t expected_size=-1;
size_t nread;
struct UrlData *data = conn->data;
char *buf = data->buffer;
int bytecount = 0;
struct timeval start = Curl_tvnow();
struct timeval now = start;
char *actual_path = curl_unescape(conn->path, 0);
struct FILE *file;
int fd;
char *actual_path = curl_unescape(path, 0);
#if defined(WIN32) || defined(__EMX__)
int i;
#endif
file = (struct FILE *)malloc(sizeof(struct FILE));
if(!file)
return CURLE_OUT_OF_MEMORY;
memset(file, 0, sizeof(struct FILE));
conn->proto.file = file;
#if defined(WIN32) || defined(__EMX__)
/* change path separators from '/' to '\\' for Windows and OS/2 */
for (i=0; actual_path[i] != '\0'; ++i)
if (actual_path[i] == '/')
@@ -126,9 +121,37 @@ CURLcode file(struct connectdata *conn)
free(actual_path);
if(fd == -1) {
failf(data, "Couldn't open file %s", path);
failf(conn->data, "Couldn't open file %s", conn->path);
return CURLE_FILE_COULDNT_READ_FILE;
}
file->fd = fd;
return CURLE_OK;
}
/* This is the do-phase, separated from the connect-phase above */
CURLcode Curl_file(struct connectdata *conn)
{
/* This implementation ignores the host name in conformance with
RFC 1738. Only local files (reachable via the standard file system)
are supported. This means that files on remotely mounted directories
(via NFS, Samba, NT sharing) can be accessed through a file:// URL
*/
CURLcode res = CURLE_OK;
struct stat statbuf;
size_t expected_size=-1;
size_t nread;
struct UrlData *data = conn->data;
char *buf = data->buffer;
int bytecount = 0;
struct timeval start = Curl_tvnow();
struct timeval now = start;
int fd;
/* get the fd from the connection phase */
fd = conn->proto.file->fd;
if( -1 != fstat(fd, &statbuf)) {
/* we could stat it, then read out the size */
expected_size = statbuf.st_size;


@@ -23,6 +23,6 @@
*
* $Id$
*****************************************************************************/
CURLcode file(struct connectdata *conn);
CURLcode Curl_file(struct connectdata *conn);
CURLcode Curl_file_connect(struct connectdata *conn);
#endif

lib/ftp.c (567 changed lines)

@@ -88,76 +88,8 @@
/* easy-to-use macro: */
#define ftpsendf Curl_ftpsendf
/* returns last node in linked list */
static struct curl_slist *slist_get_last(struct curl_slist *list)
{
struct curl_slist *item;
/* if caller passed us a NULL, return now */
if (!list)
return NULL;
/* loop through to find the last item */
item = list;
while (item->next) {
item = item->next;
}
return item;
}
/* append a struct to the linked list. It always returns the address of the
* first record, so that you can use this function as an initialization
* function as well as an append function. If you find this bothersome,
* then simply create a separate _init function and call it appropriately from
* within the program. */
struct curl_slist *curl_slist_append(struct curl_slist *list, char *data)
{
struct curl_slist *last;
struct curl_slist *new_item;
new_item = (struct curl_slist *) malloc(sizeof(struct curl_slist));
if (new_item) {
new_item->next = NULL;
new_item->data = strdup(data);
}
else {
fprintf(stderr, "Cannot allocate memory for QUOTE list.\n");
return NULL;
}
if (list) {
last = slist_get_last(list);
last->next = new_item;
return list;
}
/* if this is the first item, then new_item *is* the list */
return new_item;
}
/* be nice and clean up resources */
void curl_slist_free_all(struct curl_slist *list)
{
struct curl_slist *next;
struct curl_slist *item;
if (!list)
return;
item = list;
do {
next = item->next;
if (item->data) {
free(item->data);
}
free(item);
item = next;
} while (next);
}
static CURLcode AllowServerConnect(struct UrlData *data,
struct connectdata *conn,
int sock)
{
fd_set rdset;
@@ -187,8 +119,8 @@ static CURLcode AllowServerConnect(struct UrlData *data,
size_t size = sizeof(struct sockaddr_in);
struct sockaddr_in add;
getsockname(sock, (struct sockaddr *) &add, (int *)&size);
s=accept(sock, (struct sockaddr *) &add, (int *)&size);
getsockname(sock, (struct sockaddr *) &add, (socklen_t *)&size);
s=accept(sock, (struct sockaddr *) &add, (socklen_t *)&size);
sclose(sock); /* close the first socket */
@@ -199,7 +131,7 @@ static CURLcode AllowServerConnect(struct UrlData *data,
}
infof(data, "Connection accepted from server\n");
data->secondarysocket = s;
conn->secondarysocket = s;
}
break;
}
@@ -282,6 +214,10 @@ int Curl_GetFTPResponse(int sockfd, char *buf,
*/
if(CURLE_OK != Curl_read(conn, sockfd, ptr, 1, &keepon))
keepon = FALSE;
else if(keepon <= 0) {
error = SELECT_ERROR;
failf(data, "Connection aborted");
}
else if ((*ptr == '\n') || (*ptr == '\r'))
keepon = FALSE;
}
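The hunk above makes the response reader treat a zero or negative read as an aborted connection. A standalone sketch of the same read-one-byte-until-CRLF idea, using plain recv(2) instead of Curl_read and a made-up helper name (sockfd is assumed to be a connected socket):

#include <sys/socket.h>

static int read_response_line(int sockfd, char *buf, int bufsize)
{
  int nread = 0;
  char c;

  while(nread < bufsize - 1) {
    if(recv(sockfd, &c, 1, 0) <= 0)
      break;              /* error or connection closed: give up */
    if(c == '\r' || c == '\n')
      break;              /* end of this response line */
    buf[nread++] = c;
  }
  buf[nread] = '\0';
  return nread;           /* bytes stored; 0 if nothing was read */
}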
@@ -358,23 +294,28 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
return CURLE_OUT_OF_MEMORY;
memset(ftp, 0, sizeof(struct FTP));
data->proto.ftp = ftp;
conn->proto.ftp = ftp;
/* We always support persistent connections on ftp */
conn->bits.close = FALSE;
/* get some initial data into the ftp struct */
ftp->bytecountp = &conn->bytecount;
ftp->user = data->user;
ftp->passwd = data->passwd;
/* duplicate to keep them even when the data struct changes */
ftp->user = strdup(data->user);
ftp->passwd = strdup(data->passwd);
if (data->bits.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
result = Curl_ConnectHTTPProxyTunnel(conn, data->firstsocket,
data->hostname, data->remote_port);
result = Curl_ConnectHTTPProxyTunnel(conn, conn->firstsocket,
conn->hostname, conn->remote_port);
if(CURLE_OK != result)
return result;
}
/* The first thing we do is wait for the "220*" line: */
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -394,7 +335,7 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
set a valid level */
sec_request_prot(conn, data->krb4_level);
data->cmdchannel = fdopen(data->firstsocket, "w");
data->cmdchannel = fdopen(conn->firstsocket, "w");
if(sec_login(conn) != 0)
infof(data, "Logging in with password in cleartext!\n");
@@ -404,10 +345,10 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
#endif
/* send USER */
ftpsendf(data->firstsocket, conn, "USER %s", ftp->user);
ftpsendf(conn->firstsocket, conn, "USER %s", ftp->user);
/* wait for feedback */
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -420,8 +361,8 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
else if(ftpcode == 331) {
/* 331 Password required for ...
(the server requires to send the user's password too) */
ftpsendf(data->firstsocket, conn, "PASS %s", ftp->passwd);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
ftpsendf(conn->firstsocket, conn, "PASS %s", ftp->passwd);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -465,6 +406,58 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
return CURLE_FTP_WEIRD_USER_REPLY;
}
/* send PWD to discover our entry point */
ftpsendf(conn->firstsocket, conn, "PWD");
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(ftpcode == 257) {
char *dir = (char *)malloc(nread+1);
char *store=dir;
char *ptr=&buf[4]; /* start on the first letter */
/* Reply format is like
257<space>"<directory-name>"<space><commentary> and the RFC959 says
The directory name can contain any character; embedded double-quotes
should be escaped by double-quotes (the "quote-doubling" convention).
*/
if('\"' == *ptr) {
/* it started good */
ptr++;
while(ptr && *ptr) {
if('\"' == *ptr) {
if('\"' == ptr[1]) {
/* "quote-doubling" */
*store = ptr[1];
ptr++;
}
else {
/* end of path */
*store = '\0'; /* zero terminate */
break; /* get out of this loop */
}
}
else
*store = *ptr;
store++;
ptr++;
}
ftp->entrypath =dir; /* remember this */
infof(data, "Entry path is '%s'\n", ftp->entrypath);
}
else {
/* couldn't get the path */
}
}
else {
/* We couldn't read the PWD response! */
}
return CURLE_OK;
}
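The new PWD handling above applies the RFC 959 "quote-doubling" rule when it extracts the entry path from a 257 reply. A standalone sketch of just that unescaping step, with a made-up helper and reply string:

#include <stdio.h>
#include <string.h>

static void pwd_path(const char *reply, char *out)
{
  const char *ptr = strchr(reply, '"'); /* find the opening quote */
  if(!ptr) {
    *out = '\0';
    return;
  }
  for(ptr++; *ptr; ptr++) {
    if('"' == *ptr) {
      if('"' == ptr[1])
        *out++ = *++ptr;  /* "" collapses to one embedded quote */
      else
        break;            /* a single quote ends the path */
    }
    else
      *out++ = *ptr;
  }
  *out = '\0';
}

int main(void)
{
  char path[128];
  pwd_path("257 \"/home/\"\"odd\"\"name\" is current directory", path);
  printf("%s\n", path); /* prints: /home/"odd"name */
  return 0;
}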
@@ -473,7 +466,7 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
CURLcode Curl_ftp_done(struct connectdata *conn)
{
struct UrlData *data = conn->data;
struct FTP *ftp = data->proto.ftp;
struct FTP *ftp = conn->proto.ftp;
size_t nread;
char *buf = data->buffer; /* this is our buffer */
struct curl_slist *qitem; /* QUOTE item */
@@ -488,7 +481,7 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
}
else {
if((-1 != conn->size) && (conn->size != *ftp->bytecountp) &&
(data->maxdownload != *ftp->bytecountp)) {
(conn->maxdownload != *ftp->bytecountp)) {
failf(data, "Received only partial file");
return CURLE_PARTIAL_FILE;
}
@@ -498,16 +491,16 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
}
}
#ifdef KRB4
sec_fflush_fd(conn, data->secondarysocket);
sec_fflush_fd(conn, conn->secondarysocket);
#endif
/* shut down the socket to inform the server we're done */
sclose(data->secondarysocket);
data->secondarysocket = -1;
sclose(conn->secondarysocket);
conn->secondarysocket = -1;
if(!data->bits.no_body) {
/* now let's see what the server says about the transfer we
just performed: */
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -525,9 +518,9 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
while (qitem) {
/* Send string */
if (qitem->data) {
ftpsendf(data->firstsocket, conn, "%s", qitem->data);
ftpsendf(conn->firstsocket, conn, "%s", qitem->data);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -541,9 +534,6 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
}
}
free(ftp);
data->proto.ftp=NULL; /* it is gone */
return CURLE_OK;
}
@@ -564,10 +554,13 @@ CURLcode _ftp(struct connectdata *conn)
#if defined (HAVE_INET_NTOA_R)
char ntoa_buf[64];
#endif
#ifdef ENABLE_IPV6
struct addrinfo *ai;
#endif
struct curl_slist *qitem; /* QUOTE item */
/* the ftp struct is already inited in ftp_connect() */
struct FTP *ftp = data->proto.ftp;
struct FTP *ftp = conn->proto.ftp;
long *bytecountp = ftp->bytecountp;
int ftpcode; /* for ftp status */
@@ -579,9 +572,9 @@ CURLcode _ftp(struct connectdata *conn)
while (qitem) {
/* Send string */
if (qitem->data) {
ftpsendf(data->firstsocket, conn, "%s", qitem->data);
ftpsendf(conn->firstsocket, conn, "%s", qitem->data);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -595,10 +588,27 @@ CURLcode _ftp(struct connectdata *conn)
}
}
if(conn->bits.reuse) {
/* This is a re-used connection. Since we change directory to where the
transfer is taking place, we must now get back to the original dir
where we ended up after login: */
ftpsendf(conn->firstsocket, conn, "CWD %s", ftp->entrypath);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(ftpcode != 250) {
failf(data, "Couldn't change back to directory %s", ftp->entrypath);
return CURLE_FTP_ACCESS_DENIED;
}
}
/* change directory first! */
if(ftp->dir && ftp->dir[0]) {
ftpsendf(data->firstsocket, conn, "CWD %s", ftp->dir);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
ftpsendf(conn->firstsocket, conn, "CWD %s", ftp->dir);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -611,9 +621,9 @@ CURLcode _ftp(struct connectdata *conn)
if(data->bits.get_filetime && ftp->file) {
/* we have requested to get the modified-time of the file, this is yet
again a grey area as the MDTM is not kosher RFC959 */
ftpsendf(data->firstsocket, conn, "MDTM %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "MDTM %s", ftp->file);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -647,10 +657,10 @@ CURLcode _ftp(struct connectdata *conn)
/* Some servers return different sizes for different modes, and thus we
must set the proper type before we check the size */
ftpsendf(data->firstsocket, conn, "TYPE %s",
ftpsendf(conn->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -661,9 +671,9 @@ CURLcode _ftp(struct connectdata *conn)
CURLE_FTP_COULDNT_SET_BINARY;
}
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "SIZE %s", ftp->file);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -702,6 +712,178 @@ CURLcode _ftp(struct connectdata *conn)
/* We have chosen to use the PORT command */
if(data->bits.ftp_use_port) {
#ifdef ENABLE_IPV6
struct addrinfo hints, *res, *ai;
struct sockaddr_storage ss;
int sslen;
char hbuf[NI_MAXHOST];
char *localaddr;
struct sockaddr *sa=(struct sockaddr *)&ss;
#ifdef NI_WITHSCOPEID
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV | NI_WITHSCOPEID;
#else
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV;
#endif
unsigned char *ap;
unsigned char *pp;
int alen, plen;
char portmsgbuf[4096], tmp[4096];
char *p;
char *mode[] = { "EPRT", "LPRT", "PORT", NULL };
char **modep;
/*
* we should use Curl_if2ip? given pickiness of recent ftpd,
* I believe we should use the same address as the control connection.
*/
sslen = sizeof(ss);
if (getsockname(conn->firstsocket, (struct sockaddr *)&ss, &sslen) < 0)
return CURLE_FTP_PORT_FAILED;
if (getnameinfo((struct sockaddr *)&ss, sslen, hbuf, sizeof(hbuf), NULL, 0,
niflags))
return CURLE_FTP_PORT_FAILED;
memset(&hints, 0, sizeof(hints));
hints.ai_family = sa->sa_family;
/*hints.ai_family = ss.ss_family;
this way can be used if sockaddr_storage is properly defined, as glibc
2.1.X doesn't do*/
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;
if (getaddrinfo(hbuf, "0", &hints, &res))
return CURLE_FTP_PORT_FAILED;
portsock = -1;
for (ai = res; ai; ai = ai->ai_next) {
portsock = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
if (portsock < 0)
continue;
if (bind(portsock, ai->ai_addr, ai->ai_addrlen) < 0) {
close(portsock);
portsock = -1;
continue;
}
if (listen(portsock, 1) < 0) {
close(portsock);
portsock = -1;
continue;
}
break;
}
if (portsock < 0) {
failf(data, strerror(errno));
freeaddrinfo(res);
return CURLE_FTP_PORT_FAILED;
}
sslen = sizeof(ss);
if (getsockname(portsock, sa, &sslen) < 0) {
failf(data, strerror(errno));
freeaddrinfo(res);
return CURLE_FTP_PORT_FAILED;
}
for (modep = mode; modep && *modep; modep++) {
int lprtaf, eprtaf;
switch (sa->sa_family) {
case AF_INET:
ap = (char *)&((struct sockaddr_in *)&ss)->sin_addr;
alen = sizeof(((struct sockaddr_in *)&ss)->sin_addr);
pp = (char *)&((struct sockaddr_in *)&ss)->sin_port;
plen = sizeof(((struct sockaddr_in *)&ss)->sin_port);
lprtaf = 4;
eprtaf = 1;
break;
case AF_INET6:
ap = (char *)&((struct sockaddr_in6 *)&ss)->sin6_addr;
alen = sizeof(((struct sockaddr_in6 *)&ss)->sin6_addr);
pp = (char *)&((struct sockaddr_in6 *)&ss)->sin6_port;
plen = sizeof(((struct sockaddr_in6 *)&ss)->sin6_port);
lprtaf = 6;
eprtaf = 2;
break;
default:
ap = pp = NULL;
lprtaf = eprtaf = -1;
break;
}
if (strcmp(*modep, "EPRT") == 0) {
if (eprtaf < 0)
continue;
if (getnameinfo((struct sockaddr *)&ss, sslen,
portmsgbuf, sizeof(portmsgbuf), tmp, sizeof(tmp), niflags))
continue;
/* do not transmit IPv6 scope identifier to the wire */
if (sa->sa_family == AF_INET6) {
char *q = strchr(portmsgbuf, '%');
if (q)
*q = '\0';
}
ftpsendf(conn->firstsocket, conn, "%s |%d|%s|%s|", *modep, eprtaf,
portmsgbuf, tmp);
} else if (strcmp(*modep, "LPRT") == 0 || strcmp(*modep, "PORT") == 0) {
int i;
if (strcmp(*modep, "LPRT") == 0 && lprtaf < 0)
continue;
if (strcmp(*modep, "PORT") == 0 && sa->sa_family != AF_INET)
continue;
portmsgbuf[0] = '\0';
if (strcmp(*modep, "LPRT") == 0) {
snprintf(tmp, sizeof(tmp), "%d,%d", lprtaf, alen);
if (strlcat(portmsgbuf, tmp, sizeof(portmsgbuf)) >= sizeof(portmsgbuf)) {
goto again;
}
}
for (i = 0; i < alen; i++) {
if (portmsgbuf[0])
snprintf(tmp, sizeof(tmp), ",%u", ap[i]);
else
snprintf(tmp, sizeof(tmp), "%u", ap[i]);
if (strlcat(portmsgbuf, tmp, sizeof(portmsgbuf)) >= sizeof(portmsgbuf)) {
goto again;
}
}
if (strcmp(*modep, "LPRT") == 0) {
snprintf(tmp, sizeof(tmp), ",%d", plen);
if (strlcat(portmsgbuf, tmp, sizeof(portmsgbuf)) >= sizeof(portmsgbuf))
goto again;
}
for (i = 0; i < plen; i++) {
snprintf(tmp, sizeof(tmp), ",%u", pp[i]);
if (strlcat(portmsgbuf, tmp, sizeof(portmsgbuf)) >= sizeof(portmsgbuf)) {
goto again;
}
}
ftpsendf(conn->firstsocket, conn, "%s %s", *modep, portmsgbuf);
}
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if (nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if (ftpcode != 200) {
failf(data, "Server does not grok %s", *modep);
continue;
} else
break;
again:;
}
if (!*modep) {
close(portsock);
freeaddrinfo(res);
return CURLE_FTP_PORT_FAILED;
}
#else
struct sockaddr_in sa;
struct hostent *h=NULL;
char *hostdataptr=NULL;
@@ -733,7 +915,7 @@ CURLcode _ftp(struct connectdata *conn)
/* we set the secondary socket variable to this for now, it
is only so that the cleanup function will close it in case
we fail before the true secondary stuff is made */
data->secondarysocket = portsock;
conn->secondarysocket = portsock;
memset((char *)&sa, 0, sizeof(sa));
memcpy((char *)&sa.sin_addr,
@@ -750,7 +932,7 @@ CURLcode _ftp(struct connectdata *conn)
size = sizeof(add);
if(getsockname(portsock, (struct sockaddr *) &add,
(int *)&size)<0) {
(socklen_t *)&size)<0) {
failf(data, "getsockname() failed");
return CURLE_FTP_PORT_FAILED;
}
@@ -795,13 +977,13 @@ CURLcode _ftp(struct connectdata *conn)
sscanf( inet_ntoa(in), "%hu.%hu.%hu.%hu",
&ip[0], &ip[1], &ip[2], &ip[3]);
#endif
ftpsendf(data->firstsocket, conn, "PORT %d,%d,%d,%d,%d,%d",
ftpsendf(conn->firstsocket, conn, "PORT %d,%d,%d,%d,%d,%d",
ip[0], ip[1], ip[2], ip[3],
porttouse >> 8,
porttouse & 255);
}
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
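The PORT branch above ends by sending the classic h1,h2,h3,h4,p1,p2 argument. A self-contained sketch of that encoding, with made-up address and port values:

#include <stdio.h>

int main(void)
{
  int ip[4] = { 192, 168, 1, 2 };   /* illustrative local address */
  unsigned short porttouse = 49500; /* illustrative listening port */
  char portmsg[64];

  /* the port travels as two decimal bytes: high byte, then low byte */
  snprintf(portmsg, sizeof(portmsg), "%d,%d,%d,%d,%d,%d",
           ip[0], ip[1], ip[2], ip[3], porttouse >> 8, porttouse & 255);
  printf("PORT %s\n", portmsg);     /* prints: PORT 192,168,1,2,193,92 */
  return 0;
}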
@@ -809,26 +991,43 @@ CURLcode _ftp(struct connectdata *conn)
failf(data, "Server does not grok PORT, try without it!");
return CURLE_FTP_PORT_FAILED;
}
#endif /* ENABLE_IPV6 */
}
else { /* we use the PASV command */
#if 0
char *mode[] = { "EPSV", "LPSV", "PASV", NULL };
int results[] = { 229, 228, 227, 0 };
#else
char *mode[] = { "PASV", NULL };
int results[] = { 227, 0 };
#endif
int modeoff;
ftpsendf(data->firstsocket, conn, "PASV");
for (modeoff = 0; mode[modeoff]; modeoff++) {
ftpsendf(conn->firstsocket, conn, mode[modeoff]);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if (ftpcode == results[modeoff])
break;
}
if(ftpcode != 227) {
if (!mode[modeoff]) {
failf(data, "Odd return code after PASV");
return CURLE_FTP_WEIRD_PASV_REPLY;
}
else {
else if (strcmp(mode[modeoff], "PASV") == 0) {
int ip[4];
int port[2];
unsigned short newport; /* remote port, not necessarily the local one */
unsigned short connectport; /* the local port connect() should use! */
char newhost[32];
#ifdef ENABLE_IPV6
struct addrinfo *res;
#else
struct hostent *he;
#endif
char *str=buf,*ip_addr;
char *hostdataptr=NULL;
@@ -863,21 +1062,79 @@ CURLcode _ftp(struct connectdata *conn)
* proxy again here. We already have the name info for it since the
* previous lookup.
*/
#ifdef ENABLE_IPV6
res = conn->hp;
#else
he = conn->hp;
#endif
connectport =
(unsigned short)data->port; /* we connect to the proxy's port */
(unsigned short)conn->port; /* we connect to the proxy's port */
}
else {
/* normal, direct, ftp connection */
#ifdef ENABLE_IPV6
res = Curl_getaddrinfo(data, newhost, newport);
if(!res)
#else
he = Curl_gethost(data, newhost, &hostdataptr);
if(!he) {
if(!he)
#endif
{
failf(data, "Can't resolve new host %s", newhost);
return CURLE_FTP_CANT_GET_HOST;
}
connectport = newport; /* we connect to the remote port */
}
data->secondarysocket = socket(AF_INET, SOCK_STREAM, 0);
#ifdef ENABLE_IPV6
conn->secondarysocket = -1;
for (ai = res; ai; ai = ai->ai_next) {
/* XXX for now, we can do IPv4 only */
if (ai->ai_family != AF_INET)
continue;
conn->secondarysocket = socket(ai->ai_family, ai->ai_socktype,
ai->ai_protocol);
if (conn->secondarysocket < 0)
continue;
if(data->bits.verbose) {
char hbuf[NI_MAXHOST];
char nbuf[NI_MAXHOST];
char sbuf[NI_MAXSERV];
#ifdef NI_WITHSCOPEID
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV | NI_WITHSCOPEID;
#else
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV;
#endif
if (getnameinfo(res->ai_addr, res->ai_addrlen, nbuf, sizeof(nbuf),
sbuf, sizeof(sbuf), niflags)) {
snprintf(nbuf, sizeof(nbuf), "?");
snprintf(sbuf, sizeof(sbuf), "?");
}
if (getnameinfo(res->ai_addr, res->ai_addrlen, hbuf, sizeof(hbuf),
NULL, 0, 0)) {
infof(data, "Connecting to %s port %s\n", nbuf, sbuf);
} else {
infof(data, "Connecting to %s (%s) port %s\n", hbuf, nbuf, sbuf);
}
}
if (connect(conn->secondarysocket, ai->ai_addr, ai->ai_addrlen) < 0) {
close(conn->secondarysocket);
conn->secondarysocket = -1;
continue;
}
break;
}
if (conn->secondarysocket < 0) {
failf(data, strerror(errno));
return CURLE_FTP_CANT_RECONNECT;
}
#else
conn->secondarysocket = socket(AF_INET, SOCK_STREAM, 0);
memset((char *) &serv_addr, '\0', sizeof(serv_addr));
memcpy((char *)&(serv_addr.sin_addr), he->h_addr, he->h_length);
@@ -951,7 +1208,7 @@ CURLcode _ftp(struct connectdata *conn)
if(hostdataptr)
free(hostdataptr);
if (connect(data->secondarysocket, (struct sockaddr *) &serv_addr,
if (connect(conn->secondarysocket, (struct sockaddr *) &serv_addr,
sizeof(serv_addr)) < 0) {
switch(errno) {
#ifdef ECONNREFUSED
@@ -962,7 +1219,7 @@ CURLcode _ftp(struct connectdata *conn)
#endif
#ifdef EINTR
case EINTR:
failf(data, "Connection timeouted to ftp server");
failf(data, "Connection timed out to ftp server");
break;
#endif
default:
@@ -971,14 +1228,17 @@ CURLcode _ftp(struct connectdata *conn)
}
return CURLE_FTP_CANT_RECONNECT;
}
#endif /*ENABLE_IPV6*/
if (data->bits.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
result = Curl_ConnectHTTPProxyTunnel(conn, data->secondarysocket,
result = Curl_ConnectHTTPProxyTunnel(conn, conn->secondarysocket,
newhost, newport);
if(CURLE_OK != result)
return result;
}
} else {
return CURLE_FTP_CANT_RECONNECT;
}
}
/* we have the (new) data connection ready */
@@ -987,10 +1247,10 @@ CURLcode _ftp(struct connectdata *conn)
if(data->bits.upload) {
/* Set type to binary (unless specified ASCII) */
ftpsendf(data->firstsocket, conn, "TYPE %s",
ftpsendf(conn->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1019,9 +1279,9 @@ CURLcode _ftp(struct connectdata *conn)
/* we could've got a specified offset from the command line,
but now we know we didn't */
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "SIZE %s", ftp->file);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1078,11 +1338,11 @@ CURLcode _ftp(struct connectdata *conn)
/* Send everything on data->in to the socket */
if(data->bits.ftp_append)
/* we append onto the file instead of rewriting it */
ftpsendf(data->firstsocket, conn, "APPE %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "APPE %s", ftp->file);
else
ftpsendf(data->firstsocket, conn, "STOR %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "STOR %s", ftp->file);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1093,7 +1353,7 @@ CURLcode _ftp(struct connectdata *conn)
}
if(data->bits.ftp_use_port) {
result = AllowServerConnect(data, portsock);
result = AllowServerConnect(data, conn, portsock);
if( result )
return result;
}
@@ -1106,7 +1366,7 @@ CURLcode _ftp(struct connectdata *conn)
Curl_pgrsSetUploadSize(data, data->infilesize);
result = Curl_Transfer(conn, -1, -1, FALSE, NULL, /* no download */
data->secondarysocket, bytecountp);
conn->secondarysocket, bytecountp);
if(result)
return result;
@@ -1138,16 +1398,16 @@ CURLcode _ftp(struct connectdata *conn)
else if(from < 0) {
/* -Y */
totalsize = -from;
data->maxdownload = -from;
conn->maxdownload = -from;
data->resume_from = from;
infof(data, "FTP RANGE the last %d bytes\n", totalsize);
}
else {
/* X-Y */
totalsize = to-from;
data->maxdownload = totalsize+1; /* include the last mentioned byte */
conn->maxdownload = totalsize+1; /* include the last mentioned byte */
data->resume_from = from;
infof(data, "FTP RANGE from %d getting %d bytes\n", from, data->maxdownload);
infof(data, "FTP RANGE from %d getting %d bytes\n", from, conn->maxdownload);
}
infof(data, "range-download from %d to %d, totally %d bytes\n",
from, to, totalsize);
@@ -1160,9 +1420,9 @@ CURLcode _ftp(struct connectdata *conn)
dirlist = TRUE;
/* Set type to ASCII */
ftpsendf(data->firstsocket, conn, "TYPE A");
ftpsendf(conn->firstsocket, conn, "TYPE A");
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1175,16 +1435,16 @@ CURLcode _ftp(struct connectdata *conn)
better used since the LIST command output is not specified or
standard in any way */
ftpsendf(data->firstsocket, conn, "%s",
ftpsendf(conn->firstsocket, conn, "%s",
data->customrequest?data->customrequest:
(data->bits.ftp_list_only?"NLST":"LIST"));
}
else {
/* Set type to binary (unless specified ASCII) */
ftpsendf(data->firstsocket, conn, "TYPE %s",
ftpsendf(conn->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1203,9 +1463,9 @@ CURLcode _ftp(struct connectdata *conn)
* of the file we're gonna get. If we can get the size, this is by far
* the best way to know if we're trying to resume beyond the EOF. */
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "SIZE %s", ftp->file);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1247,9 +1507,9 @@ CURLcode _ftp(struct connectdata *conn)
infof(data, "Instructs server to resume from offset %d\n",
data->resume_from);
ftpsendf(data->firstsocket, conn, "REST %d", data->resume_from);
ftpsendf(conn->firstsocket, conn, "REST %d", data->resume_from);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1259,10 +1519,10 @@ CURLcode _ftp(struct connectdata *conn)
}
}
ftpsendf(data->firstsocket, conn, "RETR %s", ftp->file);
ftpsendf(conn->firstsocket, conn, "RETR %s", ftp->file);
}
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1326,7 +1586,7 @@ CURLcode _ftp(struct connectdata *conn)
size = downloadsize;
if(data->bits.ftp_use_port) {
result = AllowServerConnect(data, portsock);
result = AllowServerConnect(data, conn, portsock);
if( result )
return result;
}
@@ -1334,7 +1594,7 @@ CURLcode _ftp(struct connectdata *conn)
infof(data, "Getting file with size: %d\n", size);
/* FTP download: */
result=Curl_Transfer(conn, data->secondarysocket, size, FALSE,
result=Curl_Transfer(conn, conn->secondarysocket, size, FALSE,
bytecountp,
-1, NULL); /* no upload here */
if(result)
@@ -1363,7 +1623,7 @@ CURLcode Curl_ftp(struct connectdata *conn)
int dirlength=0; /* 0 forces strlen() */
/* the ftp struct is already inited in ftp_connect() */
ftp = data->proto.ftp;
ftp = conn->proto.ftp;
/* We split the path into dir and file parts *before* we URLdecode
it */
@@ -1452,3 +1712,16 @@ size_t Curl_ftpsendf(int fd, struct connectdata *conn, char *fmt, ...)
}
CURLcode Curl_ftp_disconnect(struct connectdata *conn)
{
struct FTP *ftp= conn->proto.ftp;
if(ftp->user)
free(ftp->user);
if(ftp->passwd)
free(ftp->passwd);
if(ftp->entrypath)
free(ftp->entrypath);
return CURLE_OK;
}


@@ -26,12 +26,10 @@
CURLcode Curl_ftp(struct connectdata *conn);
CURLcode Curl_ftp_done(struct connectdata *conn);
CURLcode Curl_ftp_connect(struct connectdata *conn);
CURLcode Curl_ftp_disconnect(struct connectdata *conn);
size_t Curl_ftpsendf(int fd, struct connectdata *, char *fmt, ...);
struct curl_slist *curl_slist_append(struct curl_slist *list, char *data);
void curl_slist_free_all(struct curl_slist *list);
/* The kerberos stuff needs this: */
int Curl_GetFTPResponse(int sockfd, char *buf,
struct connectdata *conn,


@@ -390,7 +390,7 @@ static const short yycheck[] = { 0,
56
};
/* -*-C-*- Note some compilers choke on comments on `#line' lines. */
#line 3 "/usr/local/share/bison.simple"
#line 3 "/usr/lib/bison.simple"
/* This file comes from bison-1.28. */
/* Skeleton output parser for bison,
@@ -604,7 +604,7 @@ __yy_memcpy (char *to, char *from, unsigned int count)
#endif
#endif
#line 217 "/usr/local/share/bison.simple"
#line 217 "/usr/lib/bison.simple"
/* The user can define YYPARSE_PARAM as the name of an argument to be passed
into yyparse. The argument should have type void *.
@@ -1295,7 +1295,7 @@ case 50:
break;}
}
/* the action file gets copied in in place of this dollarsign */
#line 543 "/usr/local/share/bison.simple"
#line 543 "/usr/lib/bison.simple"
yyvsp -= yylen;
yyssp -= yylen;


@@ -1,28 +0,0 @@
#ifndef __GETENV_H
#define __GETENV_H
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2000, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the MPL or the MIT/X-derivate
* licenses. You may pick one of these licenses.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* $Id$
*****************************************************************************/
#include <curl/curl.h>
#endif


@@ -31,7 +31,7 @@
#include <string.h>
#include <stdarg.h>
CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...)
CURLcode Curl_getinfo(CURL *curl, CURLINFO info, ...)
{
va_list arg;
long *param_longp;
@@ -103,6 +103,12 @@ CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...)
case CURLINFO_SSL_VERIFYRESULT:
*param_longp = data->ssl.certverifyresult;
break;
case CURLINFO_CONTENT_LENGTH_DOWNLOAD:
*param_doublep = data->progress.size_dl;
break;
case CURLINFO_CONTENT_LENGTH_UPLOAD:
*param_doublep = data->progress.size_ul;
break;
default:
return CURLE_BAD_FUNCTION_ARGUMENT;
}


@@ -83,6 +83,29 @@ static char *MakeIP(unsigned long num,char *addr, int addr_len)
return (addr);
}
#ifdef ENABLE_IPV6
struct addrinfo *Curl_getaddrinfo(struct UrlData *data,
char *hostname,
int port)
{
struct addrinfo hints, *res;
int error;
char sbuf[NI_MAXSERV];
memset(&hints, 0, sizeof(hints));
hints.ai_family = PF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_CANONNAME;
snprintf(sbuf, sizeof(sbuf), "%d", port);
error = getaddrinfo(hostname, sbuf, &hints, &res);
if (error) {
infof(data, "getaddrinfo(3) failed for %s\n", hostname);
return NULL;
}
return res;
}
#endif
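Curl_getaddrinfo() above is a thin wrapper around getaddrinfo(3); its callers (see the IPv6 paths in lib/ftp.c) walk the returned list until one address accepts a connection. A standalone sketch of that pattern with a made-up host and port:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
  struct addrinfo hints, *res, *ai;
  int sockfd = -1;

  memset(&hints, 0, sizeof(hints));
  hints.ai_family = PF_UNSPEC;       /* IPv4 or IPv6, like the wrapper */
  hints.ai_socktype = SOCK_STREAM;

  if(getaddrinfo("ftp.example.com", "21", &hints, &res))
    return 1;

  for(ai = res; ai; ai = ai->ai_next) {
    sockfd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
    if(sockfd < 0)
      continue;
    if(connect(sockfd, ai->ai_addr, ai->ai_addrlen) == 0)
      break;                         /* first address that answers wins */
    close(sockfd);
    sockfd = -1;
  }
  printf("%s\n", (sockfd >= 0) ? "connected" : "no address worked");
  if(sockfd >= 0)
    close(sockfd);
  freeaddrinfo(res);
  return 0;
}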
/* The original code to this function was once stolen from the Dancer source
code, written by Bjorn Reese, it has since been patched and modified
considerably. */


@@ -23,6 +23,11 @@
* $Id$
*****************************************************************************/
struct addrinfo;
struct addrinfo *Curl_getaddrinfo(struct UrlData *data,
char *hostname,
int port);
struct hostent *Curl_gethost(struct UrlData *data,
char *hostname,
char **bufp);


@@ -104,6 +104,7 @@
#include "memdebug.h"
#endif
/* ------------------------------------------------------------------------- */
/*
* The add_buffer series of functions are used to build one large memory chunk
* from repeated function invokes. Used so that the entire HTTP request can
@@ -205,7 +206,7 @@ CURLcode add_buffer(send_buffer *in, void *inptr, size_t size)
}
/* end of the add_buffer functions */
/*****************************************************************************/
/* ------------------------------------------------------------------------- */
/*
* Read everything until a newline.
@@ -226,17 +227,18 @@ int GetLine(int sockfd, char *buf, struct connectdata *conn)
(nread<BUFSIZE) && read_rc;
nread++, ptr++) {
if((CURLE_OK != Curl_read(conn, sockfd, ptr, 1, &nread)) ||
(nread <= 0) ||
(*ptr == '\n'))
break;
}
*ptr=0; /* zero terminate */
if(data->bits.verbose) {
fputs("< ", data->err);
fwrite(buf, 1, nread, data->err);
fputs("\n", data->err);
}
return nread;
return nread>0?nread:0;
}
@@ -281,8 +283,8 @@ CURLcode Curl_ConnectHTTPProxyTunnel(struct connectdata *conn,
"%s"
"\r\n",
hostname, remote_port,
(data->bits.proxy_user_passwd)?data->ptr_proxyuserpwd:"",
(data->useragent?data->ptr_uagent:"")
(data->bits.proxy_user_passwd)?conn->allocptr.proxyuserpwd:"",
(data->useragent?conn->allocptr.uagent:"")
);
/* wait for the proxy to send us a HTTP/1.0 200 OK header */
@@ -308,6 +310,9 @@ CURLcode Curl_ConnectHTTPProxyTunnel(struct connectdata *conn,
return CURLE_OK;
}
/*
* HTTP stuff to do at connect-time.
*/
CURLcode Curl_http_connect(struct connectdata *conn)
{
struct UrlData *data;
@@ -324,21 +329,21 @@ CURLcode Curl_http_connect(struct connectdata *conn)
if (conn->protocol & PROT_HTTPS) {
if (data->bits.httpproxy) {
/* HTTPS through a proxy can only be done with a tunnel */
result = Curl_ConnectHTTPProxyTunnel(conn, data->firstsocket,
data->hostname, data->remote_port);
result = Curl_ConnectHTTPProxyTunnel(conn, conn->firstsocket,
conn->hostname, conn->remote_port);
if(CURLE_OK != result)
return result;
}
/* now, perform the SSL initialization for this socket */
if(Curl_SSLConnect(data))
if(Curl_SSLConnect(conn))
return CURLE_SSL_CONNECT_ERROR;
}
if(data->bits.user_passwd && !data->bits.this_is_a_follow) {
/* Authorization: is requested, this is not a followed location, get the
original host name */
data->auth_host = strdup(data->hostname);
data->auth_host = strdup(conn->hostname);
}
return CURLE_OK;
@@ -360,7 +365,7 @@ CURLcode Curl_http_done(struct connectdata *conn)
struct HTTP *http;
data=conn->data;
http=data->proto.http;
http=conn->proto.http;
if(data->bits.http_formpost) {
*bytecount = http->readbytecount + http->writebytecount;
@@ -374,9 +379,6 @@ CURLcode Curl_http_done(struct connectdata *conn)
*bytecount = http->readbytecount + http->writebytecount;
}
free(http);
data->proto.http=NULL; /* it is gone */
return CURLE_OK;
}
@@ -392,11 +394,20 @@ CURLcode Curl_http(struct connectdata *conn)
char *host = conn->name;
long *bytecount = &conn->bytecount;
http = (struct HTTP *)malloc(sizeof(struct HTTP));
if(!http)
return CURLE_OUT_OF_MEMORY;
memset(http, 0, sizeof(struct HTTP));
data->proto.http = http;
if(!conn->proto.http) {
/* Only allocate this struct if we don't already have it! */
http = (struct HTTP *)malloc(sizeof(struct HTTP));
if(!http)
return CURLE_OUT_OF_MEMORY;
memset(http, 0, sizeof(struct HTTP));
conn->proto.http = http;
}
else
http = conn->proto.http;
/* We default to persistent connections */
conn->bits.close = FALSE;
if ( (conn->protocol&(PROT_HTTP|PROT_FTP)) &&
data->bits.upload) {
@@ -407,9 +418,9 @@ CURLcode Curl_http(struct connectdata *conn)
have been used in the proxy connect, but if we have got a header with
the user-agent string specified, we erase the previously made string
here. */
if(checkheaders(data, "User-Agent:") && data->ptr_uagent) {
free(data->ptr_uagent);
data->ptr_uagent=NULL;
if(checkheaders(data, "User-Agent:") && conn->allocptr.uagent) {
free(conn->allocptr.uagent);
conn->allocptr.uagent=NULL;
}
if((data->bits.user_passwd) && !checkheaders(data, "Authorization:")) {
@@ -419,21 +430,27 @@ CURLcode Curl_http(struct connectdata *conn)
host due to a location-follow, we do some weirdo checks here */
if(!data->bits.this_is_a_follow ||
!data->auth_host ||
strequal(data->auth_host, data->hostname)) {
strequal(data->auth_host, conn->hostname)) {
sprintf(data->buffer, "%s:%s", data->user, data->passwd);
if(Curl_base64_encode(data->buffer, strlen(data->buffer),
&authorization) >= 0) {
data->ptr_userpwd = aprintf( "Authorization: Basic %s\015\012",
if(conn->allocptr.userpwd)
free(conn->allocptr.userpwd);
conn->allocptr.userpwd = aprintf( "Authorization: Basic %s\015\012",
authorization);
free(authorization);
}
}
}
if((data->bits.http_set_referer) && !checkheaders(data, "Referer:")) {
data->ptr_ref = aprintf("Referer: %s\015\012", data->referer);
if(conn->allocptr.ref)
free(conn->allocptr.ref);
conn->allocptr.ref = aprintf("Referer: %s\015\012", data->referer);
}
if(data->cookie && !checkheaders(data, "Cookie:")) {
data->ptr_cookie = aprintf("Cookie: %s\015\012", data->cookie);
if(conn->allocptr.cookie)
free(conn->allocptr.cookie);
conn->allocptr.cookie = aprintf("Cookie: %s\015\012", data->cookie);
}
if(data->cookies) {
@@ -453,13 +470,22 @@ CURLcode Curl_http(struct connectdata *conn)
}
if(!checkheaders(data, "Host:")) {
if(((conn->protocol&PROT_HTTPS) && (data->remote_port == PORT_HTTPS)) ||
(!(conn->protocol&PROT_HTTPS) && (data->remote_port == PORT_HTTP)) )
/* if ptr_host is already set, it is almost OK since we only re-use
connections to the very same host and port, but when we use an HTTP
proxy we have a persistent connection and yet we must change the Host:
header! */
if(conn->allocptr.host)
free(conn->allocptr.host);
if(((conn->protocol&PROT_HTTPS) && (conn->remote_port == PORT_HTTPS)) ||
(!(conn->protocol&PROT_HTTPS) && (conn->remote_port == PORT_HTTP)) )
/* If (HTTPS on port 443) OR (non-HTTPS on port 80) then don't include
the port number in the host string */
data->ptr_host = aprintf("Host: %s\r\n", host);
conn->allocptr.host = aprintf("Host: %s\r\n", host);
else
data->ptr_host = aprintf("Host: %s:%d\r\n", host, data->remote_port);
conn->allocptr.host = aprintf("Host: %s:%d\r\n", host,
conn->remote_port);
}
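/* Illustrative restatement of the rule above as a stand-alone helper
   (hypothetical function, not part of this change): the port is only
   spelled out in the Host: header when it differs from the protocol
   default. */
static char *example_host_header(const char *host, int port, bool is_https)
{
  int default_port = is_https ? PORT_HTTPS : PORT_HTTP;
  if(port == default_port)
    return aprintf("Host: %s\r\n", host);
  return aprintf("Host: %s:%d\r\n", host, port);
}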
if(!checkheaders(data, "Pragma:"))
@@ -533,7 +559,7 @@ CURLcode Curl_http(struct connectdata *conn)
*/
if((data->httpreq == HTTPREQ_GET) &&
!checkheaders(data, "Range:")) {
data->ptr_rangeline = aprintf("Range: bytes=%s\r\n", data->range);
conn->allocptr.rangeline = aprintf("Range: bytes=%s\r\n", data->range);
}
else if((data->httpreq != HTTPREQ_GET) &&
!checkheaders(data, "Content-Range:")) {
@@ -541,14 +567,14 @@ CURLcode Curl_http(struct connectdata *conn)
if(data->resume_from) {
/* This is because "resume" was selected */
long total_expected_size= data->resume_from + data->infilesize;
data->ptr_rangeline = aprintf("Content-Range: bytes %s%ld/%ld\r\n",
conn->allocptr.rangeline = aprintf("Content-Range: bytes %s%ld/%ld\r\n",
data->range, total_expected_size-1,
total_expected_size);
}
else {
/* Range was selected and then we just pass the incoming range and
append total size */
data->ptr_rangeline = aprintf("Content-Range: bytes %s/%d\r\n",
conn->allocptr.rangeline = aprintf("Content-Range: bytes %s/%d\r\n",
data->range, data->infilesize);
}
}
@@ -564,7 +590,7 @@ CURLcode Curl_http(struct connectdata *conn)
/* add the main request stuff */
add_bufferf(req_buffer,
"%s " /* GET/HEAD/POST/PUT */
"%s HTTP/1.0\r\n" /* path */
"%s HTTP/1.1\r\n" /* path */
"%s" /* proxyuserpwd */
"%s" /* userpwd */
"%s" /* range */
@@ -580,15 +606,15 @@ CURLcode Curl_http(struct connectdata *conn)
(data->bits.http_post || data->bits.http_formpost)?"POST":
(data->bits.http_put)?"PUT":"GET"),
ppath,
(data->bits.proxy_user_passwd && data->ptr_proxyuserpwd)?data->ptr_proxyuserpwd:"",
(data->bits.user_passwd && data->ptr_userpwd)?data->ptr_userpwd:"",
(data->bits.set_range && data->ptr_rangeline)?data->ptr_rangeline:"",
(data->useragent && *data->useragent && data->ptr_uagent)?data->ptr_uagent:"",
(data->ptr_cookie?data->ptr_cookie:""), /* Cookie: <data> */
(data->ptr_host?data->ptr_host:""), /* Host: host */
(data->bits.proxy_user_passwd && conn->allocptr.proxyuserpwd)?conn->allocptr.proxyuserpwd:"",
(data->bits.user_passwd && conn->allocptr.userpwd)?conn->allocptr.userpwd:"",
(data->bits.set_range && conn->allocptr.rangeline)?conn->allocptr.rangeline:"",
(data->useragent && *data->useragent && conn->allocptr.uagent)?conn->allocptr.uagent:"",
(conn->allocptr.cookie?conn->allocptr.cookie:""), /* Cookie: <data> */
(conn->allocptr.host?conn->allocptr.host:""), /* Host: host */
http->p_pragma?http->p_pragma:"",
http->p_accept?http->p_accept:"",
(data->bits.http_set_referer && data->ptr_ref)?data->ptr_ref:"" /* Referer: <data> <CRLF> */
(data->bits.http_set_referer && conn->allocptr.ref)?conn->allocptr.ref:"" /* Referer: <data> <CRLF> */
);
if(co) {
@@ -692,10 +718,10 @@ CURLcode Curl_http(struct connectdata *conn)
Curl_pgrsSetUploadSize(data, http->postsize);
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
result = Curl_Transfer(conn, data->firstsocket, -1, TRUE,
add_buffer_send(conn->firstsocket, conn, req_buffer);
result = Curl_Transfer(conn, conn->firstsocket, -1, TRUE,
&http->readbytecount,
data->firstsocket,
conn->firstsocket,
&http->writebytecount);
if(result) {
Curl_FormFree(http->sendit); /* free that whole lot */
@@ -718,12 +744,12 @@ CURLcode Curl_http(struct connectdata *conn)
/* this sends the buffer and frees all the buffer resources */
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
add_buffer_send(conn->firstsocket, conn, req_buffer);
/* prepare for transfer */
result = Curl_Transfer(conn, data->firstsocket, -1, TRUE,
result = Curl_Transfer(conn, conn->firstsocket, -1, TRUE,
&http->readbytecount,
data->firstsocket,
conn->firstsocket,
&http->writebytecount);
if(result)
return result;
@@ -764,10 +790,10 @@ CURLcode Curl_http(struct connectdata *conn)
/* issue the request */
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
add_buffer_send(conn->firstsocket, conn, req_buffer);
/* HTTP GET/HEAD download: */
result = Curl_Transfer(conn, data->firstsocket, -1, TRUE, bytecount,
result = Curl_Transfer(conn, conn->firstsocket, -1, TRUE, bytecount,
-1, NULL); /* nothing to upload */
}
if(result)


@@ -35,4 +35,9 @@ CURLcode Curl_http_done(struct connectdata *conn);
CURLcode Curl_http_connect(struct connectdata *conn);
CURLcode Curl_http_close(struct connectdata *conn);
/* The following functions are defined in http_chunks.c */
void Curl_httpchunk_init(struct connectdata *conn);
CHUNKcode Curl_httpchunk_read(struct connectdata *conn, char *datap,
ssize_t length, ssize_t *wrote);
#endif

lib/http_chunks.c (new file, 222 lines)

@@ -0,0 +1,222 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2001, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the MPL or the MIT/X-derivate
* licenses. You may pick one of these licenses.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* $Id$
*****************************************************************************/
#include "setup.h"
/* -- WIN32 approved -- */
#include <stdio.h>
#include <string.h>
#include <stdarg.h>
#include <stdlib.h>
#include <ctype.h>
#include "urldata.h" /* it includes http_chunks.h */
#include "sendf.h" /* for the client write stuff */
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/*
* Chunk format (simplified):
*
* <HEX SIZE>[ chunk extension ] CRLF
* <DATA> CRLF
*
* Highlights from RFC2616 section 3.6 say:
The chunked encoding modifies the body of a message in order to
transfer it as a series of chunks, each with its own size indicator,
followed by an OPTIONAL trailer containing entity-header fields. This
allows dynamically produced content to be transferred along with the
information necessary for the recipient to verify that it has
received the full message.
Chunked-Body = *chunk
last-chunk
trailer
CRLF
chunk = chunk-size [ chunk-extension ] CRLF
chunk-data CRLF
chunk-size = 1*HEX
last-chunk = 1*("0") [ chunk-extension ] CRLF
chunk-extension= *( ";" chunk-ext-name [ "=" chunk-ext-val ] )
chunk-ext-name = token
chunk-ext-val = token | quoted-string
chunk-data = chunk-size(OCTET)
trailer = *(entity-header CRLF)
The chunk-size field is a string of hex digits indicating the size of
the chunk. The chunked encoding is ended by any chunk whose size is
zero, followed by the trailer, which is terminated by an empty line.
*/
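/* For illustration (hypothetical payload, not from the curl sources): a
   complete chunked body carrying the 9 bytes "Wikipedia" as two chunks,
   followed by the terminating zero-size chunk and the empty trailer. */
static const char example_chunked_body[] =
  "4\r\n"      /* chunk-size in hex */
  "Wiki\r\n"   /* chunk-data, then the trailing CRLF */
  "5\r\n"
  "pedia\r\n"
  "0\r\n"      /* last-chunk: size zero ends the body */
  "\r\n";      /* empty trailer, terminated by CRLF */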
void Curl_httpchunk_init(struct connectdata *conn)
{
struct Curl_chunker *chunk = &conn->proto.http->chunk;
chunk->hexindex=0; /* start at 0 */
chunk->dataleft=0; /* no data left yet! */
chunk->state = CHUNK_HEX; /* we get hex first! */
}
/*
* Curl_httpchunk_read() returns CHUNKE_OK for normal operations, or a positive return code
* for errors. STOP means this sequence of chunks is complete. The 'wrote'
* argument is set to tell the caller how many bytes we actually passed to the
* client (for byte-counting and whatever).
*
* The states and the state-machine is further explained in the header file.
*/
CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
char *datap,
ssize_t length,
ssize_t *wrote)
{
CURLcode result;
struct Curl_chunker *ch = &conn->proto.http->chunk;
int piece;
*wrote = 0; /* nothing yet */
while(length) {
switch(ch->state) {
case CHUNK_HEX:
if(isxdigit((int)*datap)) {
if(ch->hexindex < MAXNUM_SIZE) {
ch->hexbuffer[ch->hexindex] = *datap;
datap++;
length--;
ch->hexindex++;
}
else {
return CHUNKE_TOO_LONG_HEX; /* longer hex than we support */
}
}
else {
if(0 == ch->hexindex) {
/* This is illegal data, we received junk where we expected
a hexadecimal digit. */
return CHUNKE_ILLEGAL_HEX;
}
/* length and datap are unmodified */
ch->hexbuffer[ch->hexindex]=0;
ch->datasize=strtoul(ch->hexbuffer, NULL, 16);
ch->state = CHUNK_POSTHEX;
}
break;
case CHUNK_POSTHEX:
/* In this state, we're waiting for CRLF to arrive. We support
this to allow so called chunk-extensions to show up here
before the CRLF comes. */
if(*datap == '\r')
ch->state = CHUNK_CR;
length--;
datap++;
break;
case CHUNK_CR:
/* waiting for the LF */
if(*datap == '\n') {
/* we're now expecting data to come, unless size was zero! */
if(0 == ch->datasize) {
ch->state = CHUNK_STOP; /* stop reading! */
if(1 == length) {
/* This was the final byte, return right now */
return CHUNKE_STOP;
}
}
else
ch->state = CHUNK_DATA;
}
else
/* previously we got a fake CR, go back to CR waiting! */
ch->state = CHUNK_CR;
datap++;
length--;
break;
case CHUNK_DATA:
/* we get pure and fine data
We expect another 'datasize' of data. We have 'length' right now,
it can be more or less than 'datasize'. Get the smallest piece.
*/
piece = (ch->datasize >= length)?length:ch->datasize;
/* Write the data portion available */
result = Curl_client_write(conn->data, CLIENTWRITE_BODY, datap, piece);
if(result)
return CHUNKE_WRITE_ERROR;
*wrote += piece;
ch->datasize -= piece; /* decrease amount left to expect */
datap += piece; /* move read pointer forward */
length -= piece; /* decrease space left in this round */
if(0 == ch->datasize)
/* end of data this round, we now expect a trailing CRLF */
ch->state = CHUNK_POSTCR;
break;
case CHUNK_POSTCR:
if(*datap == '\r') {
ch->state = CHUNK_POSTLF;
datap++;
length--;
}
else
return CHUNKE_BAD_CHUNK;
break;
case CHUNK_POSTLF:
if(*datap == '\n') {
/*
* The last one before we go back to hex state and start all
* over.
*/
Curl_httpchunk_init(conn);
datap++;
length--;
}
else
return CHUNKE_BAD_CHUNK;
break;
case CHUNK_STOP:
/* If we arrive here, there is data left at the end of the buffer
even though there are no more chunks to read */
ch->dataleft = length;
return CHUNKE_STOP; /* return stop */
default:
return CHUNKE_STATE_ERROR;
}
}
return CHUNKE_OK;
}
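/*
 * Illustrative caller-side sketch (hypothetical helper, not part of this
 * change): assuming 'conn' is fully set up with conn->proto.http allocated
 * and Curl_httpchunk_init() already called, each buffer of received body
 * bytes would be pushed through the parser roughly like this.
 */
static CURLcode example_feed_body(struct connectdata *conn,
                                  char *buf, ssize_t nread)
{
  ssize_t wrote;
  CHUNKcode res = Curl_httpchunk_read(conn, buf, nread, &wrote);

  if(CHUNKE_OK < res)
    return CURLE_READ_ERROR; /* broken chunk data */

  /* 'wrote' bytes of de-chunked data were already handed to the client
     write callback by Curl_httpchunk_read() itself */

  if(CHUNKE_STOP == res) {
    /* the zero-size chunk was seen; any bytes that followed it in 'buf'
       are counted in conn->proto.http->chunk.dataleft */
  }
  return CURLE_OK;
}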

lib/http_chunks.h (new file, 87 lines)

@@ -0,0 +1,87 @@
#ifndef __HTTP_CHUNKS_H
#define __HTTP_CHUNKS_H
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2001, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the MPL or the MIT/X-derivate
* licenses. You may pick one of these licenses.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* $Id$
*****************************************************************************/
/*
* The longest possible hexadecimal number we support in a chunked transfer.
* Oddly enough, RFC2616 doesn't set a maximum size! Since we use strtoul()
* to convert it, we "only" support 2^32 bytes of chunk data.
*/
#define MAXNUM_SIZE 16
typedef enum {
CHUNK_FIRST, /* never use */
/* In this state we await and buffer all hexadecimal digits until we get one
that isn't a hexadecimal digit. When done, we go to POSTHEX */
CHUNK_HEX,
/* We have received the hexadecimal digits and eat all characters until
we get a CRLF pair. When we see a CR we go to the CR state. */
CHUNK_POSTHEX,
/* A single CR has been found and we should get a LF right away in this
state or we go back to POSTHEX. When LF is received, we go to DATA.
If the size given was zero, we set state to STOP and return. */
CHUNK_CR,
/* We eat the amount of data specified. When done, we move on to the
POST_CR state. */
CHUNK_DATA,
/* POSTCR should get a CR and nothing else, then move to POSTLF */
CHUNK_POSTCR,
/* POSTLF should get a LF and nothing else, then move back to HEX as
the CRLF combination marks the end of a chunk */
CHUNK_POSTLF,
/* This is mainly used to mark that we're out of the game.
NOTE: there's a 'dataleft' field in the struct that tells how many
bytes were not passed to the client at the end of the last
buffer! */
CHUNK_STOP,
CHUNK_LAST /* never use */
} ChunkyState;
typedef enum {
CHUNKE_STOP = -1,
CHUNKE_OK = 0,
CHUNKE_TOO_LONG_HEX = 1,
CHUNKE_ILLEGAL_HEX,
CHUNKE_BAD_CHUNK,
CHUNKE_WRITE_ERROR,
CHUNKE_STATE_ERROR,
CHUNKE_LAST
} CHUNKcode;
struct Curl_chunker {
char hexbuffer[ MAXNUM_SIZE + 1];
int hexindex;
ChunkyState state;
size_t datasize;
size_t dataleft; /* untouched data amount at the end of the last buffer */
};
#endif


@@ -24,7 +24,7 @@
*****************************************************************************/
#include "setup.h"
#if ! defined(WIN32) && ! defined(__BEOS__)
#if ! defined(WIN32) && ! defined(__BEOS__) && !defined(__CYGWIN32__)
extern char *Curl_if2ip(char *interface, char *buf, int buf_size);
#else
#define Curl_if2ip(a,b,c) NULL


@@ -290,7 +290,7 @@ krb4_auth(void *app_data, struct connectdata *conn)
size_t nread;
int l = sizeof(local_addr);
if(getsockname(conn->data->firstsocket,
if(getsockname(conn->firstsocket,
(struct sockaddr *)LOCAL_ADDR, &l) < 0)
perror("getsockname()");
@@ -339,9 +339,9 @@ krb4_auth(void *app_data, struct connectdata *conn)
return AUTH_CONTINUE;
}
/*ret = command("ADAT %s", p)*/
Curl_ftpsendf(conn->data->firstsocket, conn, "ADAT %s", p);
Curl_ftpsendf(conn->firstsocket, conn, "ADAT %s", p);
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->data->firstsocket,
nread = Curl_GetFTPResponse(conn->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
@@ -409,10 +409,10 @@ void krb_kauth(struct connectdata *conn)
save = set_command_prot(conn, prot_private);
/*ret = command("SITE KAUTH %s", name);***/
Curl_ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->firstsocket, conn,
"SITE KAUTH %s", conn->data->user);
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->data->firstsocket, conn->data->buffer,
nread = Curl_GetFTPResponse(conn->firstsocket, conn->data->buffer,
conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/;
@@ -486,10 +486,10 @@ void krb_kauth(struct connectdata *conn)
}
memset (tktcopy.dat, 0, tktcopy.length);
/*ret = command("SITE KAUTH %s %s", name, p);***/
Curl_ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->firstsocket, conn,
"SITE KAUTH %s %s", name, p);
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->data->firstsocket, conn->data->buffer,
nread = Curl_GetFTPResponse(conn->firstsocket, conn->data->buffer,
conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/;


@@ -171,10 +171,10 @@ CURLcode Curl_ldap(struct connectdata *conn)
DYNA_GET_FUNCTION(int (*)(void *, char *, void *, void *, char **, char **, int (*)(void *, char *, int), void *, char *, int, unsigned long), ldap_entry2text);
DYNA_GET_FUNCTION(int (*)(void *, char *, void *, void *, char **, char **, int (*)(void *, char *, int), void *, char *, int, unsigned long, char *, char *), ldap_entry2html);
server = ldap_open(data->hostname, data->port);
server = ldap_open(conn->hostname, conn->port);
if (server == NULL) {
failf(data, "LDAP: Cannot connect to %s:%d",
data->hostname, data->port);
conn->hostname, conn->port);
status = CURLE_COULDNT_CONNECT;
} else {
rc = ldap_simple_bind_s(server, data->user, data->passwd);


@@ -7,36 +7,25 @@ LIBRARY LIBCURL
DESCRIPTION 'curl libcurl - http://curl.haxx.se'
EXPORTS
curl_close @ 1 ;
curl_connect @ 2 ;
curl_disconnect @ 3 ;
curl_do @ 4 ;
curl_done @ 5 ;
curl_easy_cleanup @ 6 ;
curl_easy_getinfo @ 7 ;
curl_easy_init @ 8 ;
curl_easy_perform @ 9 ;
curl_easy_setopt @ 10 ;
curl_escape @ 11 ;
curl_formparse @ 12 ;
curl_free @ 13 ;
curl_getdate @ 14 ;
curl_getenv @ 15 ;
curl_init @ 16 ;
curl_open @ 17 ;
curl_read @ 18 ;
curl_setopt @ 19 ;
curl_slist_append @ 20 ;
curl_slist_free_all @ 21 ;
curl_transfer @ 22 ;
curl_unescape @ 23 ;
curl_version @ 24 ;
curl_write @ 25 ;
maprintf @ 26 ;
mfprintf @ 27 ;
mprintf @ 28 ;
msprintf @ 29 ;
msnprintf @ 30 ;
mvfprintf @ 31 ;
strequal @ 32 ;
strnequal @ 33 ;
curl_easy_cleanup @ 1 ;
curl_easy_getinfo @ 2 ;
curl_easy_init @ 3 ;
curl_easy_perform @ 4 ;
curl_easy_setopt @ 5 ;
curl_escape @ 6 ;
curl_formparse @ 7 ;
curl_formfree @ 8 ;
curl_getdate @ 9 ;
curl_getenv @ 10 ;
curl_slist_append @ 11 ;
curl_slist_free_all @ 12 ;
curl_unescape @ 13 ;
curl_version @ 14 ;
curl_maprintf @ 15 ;
curl_mfprintf @ 16 ;
curl_mprintf @ 17 ;
curl_msprintf @ 18 ;
curl_msnprintf @ 19 ;
curl_mvfprintf @ 20 ;
curl_strequal @ 21 ;
curl_strnequal @ 22 ;


@@ -72,7 +72,7 @@ void *curl_domalloc(size_t size, int line, char *source)
return mem;
}
char *curl_dostrdup(char *str, int line, char *source)
char *curl_dostrdup(const char *str, int line, char *source)
{
char *mem;
size_t len;
@@ -120,7 +120,7 @@ int curl_socket(int domain, int type, int protocol, int line, char *source)
return sockfd;
}
int curl_accept(int s, struct sockaddr *addr, int *addrlen,
int curl_accept(int s, struct sockaddr *addr, socklen_t *addrlen,
int line, char *source)
{
int sockfd=(accept)(s, addr, addrlen);


@@ -7,13 +7,13 @@
void *curl_domalloc(size_t size, int line, char *source);
void *curl_dorealloc(void *ptr, size_t size, int line, char *source);
void curl_dofree(void *ptr, int line, char *source);
char *curl_dostrdup(char *str, int line, char *source);
char *curl_dostrdup(const char *str, int line, char *source);
void curl_memdebug(char *logname);
/* file descriptor manipulators */
int curl_socket(int domain, int type, int protocol, int, char *);
int curl_sclose(int sockfd, int, char *);
int curl_accept(int s, struct sockaddr *addr, int *addrlen,
int curl_accept(int s, struct sockaddr *addr, socklen_t *addrlen,
int line, char *source);
/* FILE functions */


@@ -27,7 +27,8 @@
#include <stdlib.h>
#include <string.h>
#include "getenv.h"
#include <curl/curl.h>
#include "strequal.h"
/* Debug this single source file with:


@@ -482,10 +482,10 @@ sec_prot_internal(struct connectdata *conn, int level)
}
if(level){
Curl_ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->firstsocket, conn,
"PBSZ %u", s);
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->data->firstsocket,
nread = Curl_GetFTPResponse(conn->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
@@ -501,10 +501,10 @@ sec_prot_internal(struct connectdata *conn, int level)
conn->buffer_size = s;
}
Curl_ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->firstsocket, conn,
"PROT %c", level["CSEP"]);
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->data->firstsocket,
nread = Curl_GetFTPResponse(conn->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
@@ -610,10 +610,10 @@ sec_login(struct connectdata *conn)
}
infof(data, "Trying %s...\n", (*m)->name);
/*ret = command("AUTH %s", (*m)->name);***/
Curl_ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->firstsocket, conn,
"AUTH %s", (*m)->name);
/* wait for feedback */
nread = Curl_GetFTPResponse(conn->data->firstsocket,
nread = Curl_GetFTPResponse(conn->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;


@@ -50,7 +50,77 @@
#include "memdebug.h"
#endif
/* infof() is for info message along the way */
/* returns last node in linked list */
static struct curl_slist *slist_get_last(struct curl_slist *list)
{
struct curl_slist *item;
/* if caller passed us a NULL, return now */
if (!list)
return NULL;
/* loop through to find the last item */
item = list;
while (item->next) {
item = item->next;
}
return item;
}
/* append a struct to the linked list. It always returns the address of the
* first record, so that you can use this function as an initialization
* function as well as an append function. If you find this bothersome,
* then simply create a separate _init function and call it appropriately from
* within the program. */
struct curl_slist *curl_slist_append(struct curl_slist *list,
const char *data)
{
struct curl_slist *last;
struct curl_slist *new_item;
new_item = (struct curl_slist *) malloc(sizeof(struct curl_slist));
if (new_item) {
new_item->next = NULL;
new_item->data = strdup(data);
}
else {
fprintf(stderr, "Cannot allocate memory for QUOTE list.\n");
return NULL;
}
if (list) {
last = slist_get_last(list);
last->next = new_item;
return list;
}
/* if this is the first item, then new_item *is* the list */
return new_item;
}
/* be nice and clean up resources */
void curl_slist_free_all(struct curl_slist *list)
{
struct curl_slist *next;
struct curl_slist *item;
if (!list)
return;
item = list;
do {
next = item->next;
if (item->data) {
free(item->data);
}
free(item);
item = next;
} while (next);
}
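/*
 * Usage sketch for the two list functions above (application-side code,
 * hypothetical header values): curl_slist_append() both creates and grows
 * a list, so the returned pointer must always be stored back.
 */
static struct curl_slist *example_build_header_list(void)
{
  struct curl_slist *headers = NULL;

  headers = curl_slist_append(headers, "Pragma:");        /* creates list */
  headers = curl_slist_append(headers, "X-Example: yes"); /* appends */

  /* ... hand the list to the transfer, then release every node with
     curl_slist_free_all(headers); */
  return headers;
}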
/* Curl_infof() is for info messages along the way */
void Curl_infof(struct UrlData *data, char *fmt, ...)
{
@@ -63,7 +133,7 @@ void Curl_infof(struct UrlData *data, char *fmt, ...)
}
}
/* failf() is for messages stating why we failed, the LAST one will be
/* Curl_failf() is for messages stating why we failed, the LAST one will be
returned for the user (if requested) */
void Curl_failf(struct UrlData *data, char *fmt, ...)
@@ -72,8 +142,11 @@ void Curl_failf(struct UrlData *data, char *fmt, ...)
va_start(ap, fmt);
if(data->errorbuffer)
vsnprintf(data->errorbuffer, CURL_ERROR_SIZE, fmt, ap);
else /* no errorbuffer receives this, write to data->err instead */
else if(!data->bits.mute) {
/* no errorbuffer receives this, write to data->err instead */
vfprintf(data->err, fmt, ap);
fprintf(data->err, "\n");
}
va_end(ap);
}
@@ -111,15 +184,14 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
size_t *written)
{
size_t bytes_written;
struct UrlData *data=conn->data; /* conn knows data, not vice versa */
#ifdef USE_SSLEAY
if (data->ssl.use) {
if (conn->ssl.use) {
int loop=100; /* just a precaution to never loop endlessly */
while(loop--) {
bytes_written = SSL_write(data->ssl.handle, mem, len);
bytes_written = SSL_write(conn->ssl.handle, mem, len);
if((-1 != bytes_written) ||
(SSL_ERROR_WANT_WRITE != SSL_get_error(data->ssl.handle,
(SSL_ERROR_WANT_WRITE != SSL_get_error(conn->ssl.handle,
bytes_written) ))
break;
}
@@ -141,23 +213,6 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
return CURLE_OK;
}
/*
* External write-function, writes to the data-socket.
* Takes care of plain sockets, SSL or kerberos transparently.
*/
CURLcode curl_write(CURLconnect *c_conn, char *buf, size_t amount,
size_t *n)
{
struct connectdata *conn = (struct connectdata *)c_conn;
if(!n || !conn || (conn->handle != STRUCT_CONNECT))
return CURLE_FAILED_INIT;
return Curl_write(conn, conn->sockfd, buf, amount, n);
}
/* client_write() sends data to the write callback(s)
The bit pattern defines which "streams" to write to: body and/or header.
@@ -200,16 +255,15 @@ CURLcode Curl_read(struct connectdata *conn, int sockfd,
char *buf, size_t buffersize,
ssize_t *n)
{
struct UrlData *data = conn->data;
ssize_t nread;
#ifdef USE_SSLEAY
if (data->ssl.use) {
if (conn->ssl.use) {
int loop=100; /* just a precaution to never loop endlessly */
while(loop--) {
nread = SSL_read(data->ssl.handle, buf, buffersize);
nread = SSL_read(conn->ssl.handle, buf, buffersize);
if((-1 != nread) ||
(SSL_ERROR_WANT_READ != SSL_get_error(data->ssl.handle, nread) ))
(SSL_ERROR_WANT_READ != SSL_get_error(conn->ssl.handle, nread) ))
break;
}
}
@@ -228,19 +282,3 @@ CURLcode Curl_read(struct connectdata *conn, int sockfd,
return CURLE_OK;
}
/*
* The public read function reads from the 'sockfd' file descriptor only.
* Use the Curl_read() internally when you want to specify fd.
*/
CURLcode curl_read(CURLconnect *c_conn, char *buf, size_t buffersize,
ssize_t *n)
{
struct connectdata *conn = (struct connectdata *)c_conn;
if(!n || !conn || (conn->handle != STRUCT_CONNECT))
return CURLE_FAILED_INIT;
return Curl_read(conn, conn->sockfd, buf, buffersize, n);
}


@@ -24,6 +24,7 @@
#include "setup.h"
#include <stdio.h>
#include <string.h>
#if defined(__MINGW32__)
#include <winsock.h>
#endif


@@ -35,6 +35,7 @@
#include "formdata.h" /* for the boundary function */
#ifdef USE_SSLEAY
#include <openssl/rand.h>
static char global_passwd[64];
@@ -58,15 +59,108 @@ static int passwd_callback(char *buf, int num, int verify
return 0;
}
/* This function is *highly* inspired by (and parts are directly stolen
* from) source from the SSLeay package written by Eric Young
* (eay@cryptsoft.com). */
static
bool seed_enough(struct connectdata *conn, /* unused for now */
int nread)
{
#ifdef HAVE_RAND_STATUS
/* only available in OpenSSL 0.9.5a and later */
if(RAND_status())
return TRUE;
#else
if(nread > 500)
/* this is a very silly decision to make */
return TRUE;
#endif
return FALSE; /* not enough */
}
static
int cert_stuff(struct UrlData *data,
int random_the_seed(struct connectdata *conn)
{
char *buf = conn->data->buffer; /* point to the big buffer */
int nread=0;
struct UrlData *data=conn->data;
/* Q: should we add support for a random file name as a libcurl option?
A: Yes, it is here */
#ifndef RANDOM_FILE
/* if RANDOM_FILE isn't defined, we only perform this if an option tells
us to! */
if(data->ssl.random_file)
#define RANDOM_FILE "" /* doesn't matter won't be used */
#endif
{
/* let the option override the define */
nread += RAND_load_file((data->ssl.random_file?
data->ssl.random_file:RANDOM_FILE),
16384);
if(seed_enough(conn, nread))
return nread;
}
#if defined(HAVE_RAND_EGD)
/* only available in OpenSSL 0.9.5 and later */
/* EGD_SOCKET is set at configure time or not at all */
#ifndef EGD_SOCKET
/* If we don't have the define set, we only do this if the egd-option
is set */
if(data->ssl.egdsocket)
#define EGD_SOCKET "" /* doesn't matter won't be used */
#endif
{
/* If there's an option and a define, the option overrides the
define */
int ret = RAND_egd(data->ssl.egdsocket?data->ssl.egdsocket:EGD_SOCKET);
if(-1 != ret) {
nread += ret;
if(seed_enough(conn, nread))
return nread;
}
}
#endif
/* If we get here, it means we need to seed the PRNG using a "silly"
approach! */
#ifdef HAVE_RAND_SCREEN
/* This one gets a random value by reading the currently shown screen */
RAND_screen();
nread = 100; /* just a value */
#else
{
int len;
char *area = Curl_FormBoundary();
if(!area)
return 3; /* out of memory */
len = strlen(area);
RAND_seed(area, len);
free(area); /* now remove the random junk */
}
#endif
/* generates a default path for the random seed file */
buf[0]=0; /* blank it first */
RAND_file_name(buf, BUFSIZE);
if ( buf[0] ) {
/* we got a file name to try */
nread += RAND_load_file(buf, 16384);
if(seed_enough(conn, nread))
return nread;
}
infof(conn->data, "Your connection is using a weak random seed!\n");
return nread;
}
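/*
 * Stand-alone sketch of the same idea (hypothetical helper; assumes
 * OpenSSL 0.9.5 or later for RAND_status()): seed the PRNG from an
 * explicit file and report whether that was enough before any
 * SSL_connect() is attempted.
 */
static int example_seed_from_file(const char *path)
{
  int nread = RAND_load_file(path, 16384); /* read up to 16K of entropy */
#ifdef HAVE_RAND_STATUS
  if(!RAND_status())
    return -1; /* still not seeded well enough */
#endif
  return nread;
}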
static
int cert_stuff(struct connectdata *conn,
char *cert_file,
char *key_file)
{
struct UrlData *data = conn->data;
if (cert_file != NULL) {
SSL *ssl;
X509 *x509;
@@ -78,10 +172,10 @@ int cert_stuff(struct UrlData *data,
*/
strcpy(global_passwd, data->cert_passwd);
/* Set passwd callback: */
SSL_CTX_set_default_passwd_cb(data->ssl.ctx, passwd_callback);
SSL_CTX_set_default_passwd_cb(conn->ssl.ctx, passwd_callback);
}
if (SSL_CTX_use_certificate_file(data->ssl.ctx,
if (SSL_CTX_use_certificate_file(conn->ssl.ctx,
cert_file,
SSL_FILETYPE_PEM) <= 0) {
failf(data, "unable to set certificate file (wrong password?)\n");
@@ -90,14 +184,14 @@ int cert_stuff(struct UrlData *data,
if (key_file == NULL)
key_file=cert_file;
if (SSL_CTX_use_PrivateKey_file(data->ssl.ctx,
if (SSL_CTX_use_PrivateKey_file(conn->ssl.ctx,
key_file,
SSL_FILETYPE_PEM) <= 0) {
failf(data, "unable to set public key file\n");
return(0);
}
ssl=SSL_new(data->ssl.ctx);
ssl=SSL_new(conn->ssl.ctx);
x509=SSL_get_certificate(ssl);
if (x509 != NULL)
@@ -111,7 +205,7 @@ int cert_stuff(struct UrlData *data,
/* Now we know that a key and cert have been set against
* the SSL context */
if (!SSL_CTX_check_private_key(data->ssl.ctx)) {
if (!SSL_CTX_check_private_key(conn->ssl.ctx)) {
failf(data, "Private key does not match the certificate public key\n");
return(0);
}
@@ -122,9 +216,6 @@ int cert_stuff(struct UrlData *data,
return(1);
}
#endif
#ifdef USE_SSLEAY
static
int cert_verify_callback(int ok, X509_STORE_CTX *ctx)
{
@@ -141,152 +232,133 @@ int cert_verify_callback(int ok, X509_STORE_CTX *ctx)
/* ====================================================== */
int
Curl_SSLConnect (struct UrlData *data)
Curl_SSLConnect(struct connectdata *conn)
{
#ifdef USE_SSLEAY
int err;
char * str;
SSL_METHOD *req_method;
struct UrlData *data = conn->data;
int err;
char * str;
SSL_METHOD *req_method;
/* mark this is being ssl enabled from here on out. */
data->ssl.use = TRUE;
/* mark this as being SSL-enabled from here on out. */
conn->ssl.use = TRUE;
/* Lets get nice error messages */
SSL_load_error_strings();
/* Lets get nice error messages */
SSL_load_error_strings();
#ifdef HAVE_RAND_STATUS
/* RAND_status() was introduced in OpenSSL 0.9.5 */
if(0 == RAND_status())
#endif
{
/* We need to seed the PRNG properly! */
#ifdef HAVE_RAND_SCREEN
/* This one gets a random value by reading the currently shown screen */
RAND_screen();
#else
int len;
char *area = Curl_FormBoundary();
if(!area)
return 3; /* out of memory */
len = strlen(area);
RAND_seed(area, len);
free(area); /* now remove the random junk */
#endif
}
/* Make funny stuff to get random input */
random_the_seed(conn);
/* Setup all the global SSL stuff */
SSLeay_add_ssl_algorithms();
/* Setup all the global SSL stuff */
SSLeay_add_ssl_algorithms();
switch(data->ssl.version) {
default:
req_method = SSLv23_client_method();
break;
case 2:
req_method = SSLv2_client_method();
break;
case 3:
req_method = SSLv3_client_method();
break;
}
switch(data->ssl.version) {
default:
req_method = SSLv23_client_method();
break;
case 2:
req_method = SSLv2_client_method();
break;
case 3:
req_method = SSLv3_client_method();
break;
}
data->ssl.ctx = SSL_CTX_new(req_method);
conn->ssl.ctx = SSL_CTX_new(req_method);
if(!data->ssl.ctx) {
failf(data, "SSL: couldn't create a context!");
return 1;
}
if(!conn->ssl.ctx) {
failf(data, "SSL: couldn't create a context!");
return 1;
}
if(data->cert) {
if (!cert_stuff(data, data->cert, data->cert)) {
failf(data, "couldn't use certificate!\n");
return 2;
}
if(data->cert) {
if (!cert_stuff(conn, data->cert, data->cert)) {
failf(data, "couldn't use certificate!\n");
return 2;
}
}
if(data->ssl.verifypeer){
SSL_CTX_set_verify(data->ssl.ctx,
SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT|
SSL_VERIFY_CLIENT_ONCE,
cert_verify_callback);
if (!SSL_CTX_load_verify_locations(data->ssl.ctx,
data->ssl.CAfile,
data->ssl.CApath)) {
failf(data,"error setting cerficate verify locations\n");
return 2;
}
if(data->ssl.verifypeer){
SSL_CTX_set_verify(conn->ssl.ctx,
SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT|
SSL_VERIFY_CLIENT_ONCE,
cert_verify_callback);
if (!SSL_CTX_load_verify_locations(conn->ssl.ctx,
data->ssl.CAfile,
data->ssl.CApath)) {
failf(data,"error setting cerficate verify locations\n");
return 2;
}
else
SSL_CTX_set_verify(data->ssl.ctx, SSL_VERIFY_NONE, cert_verify_callback);
}
else
SSL_CTX_set_verify(conn->ssl.ctx, SSL_VERIFY_NONE, cert_verify_callback);
/* Lets make an SSL structure */
data->ssl.handle = SSL_new (data->ssl.ctx);
SSL_set_connect_state (data->ssl.handle);
/* Lets make an SSL structure */
conn->ssl.handle = SSL_new (conn->ssl.ctx);
SSL_set_connect_state (conn->ssl.handle);
data->ssl.server_cert = 0x0;
conn->ssl.server_cert = 0x0;
/* pass the raw socket into the SSL layers */
SSL_set_fd (data->ssl.handle, data->firstsocket);
err = SSL_connect (data->ssl.handle);
/* pass the raw socket into the SSL layers */
SSL_set_fd (conn->ssl.handle, conn->firstsocket);
err = SSL_connect (conn->ssl.handle);
if (-1 == err) {
err = ERR_get_error();
failf(data, "SSL: %s", ERR_error_string(err, NULL));
return 10;
}
if (-1 == err) {
err = ERR_get_error();
failf(data, "SSL: %s", ERR_error_string(err, NULL));
return 10;
}
/* Informational message */
infof (data, "SSL connection using %s\n",
SSL_get_cipher(data->ssl.handle));
/* Informational message */
infof (data, "SSL connection using %s\n",
SSL_get_cipher(conn->ssl.handle));
/* Get server's certificate (note: beware of dynamic allocation) - opt */
/* major serious hack alert -- we should check certificates
* to authenticate the server; otherwise we risk man-in-the-middle
* attack
*/
/* Get server's certificate (note: beware of dynamic allocation) - opt */
/* major serious hack alert -- we should check certificates
* to authenticate the server; otherwise we risk man-in-the-middle
* attack
*/
data->ssl.server_cert = SSL_get_peer_certificate (data->ssl.handle);
if(!data->ssl.server_cert) {
failf(data, "SSL: couldn't get peer certificate!");
return 3;
}
infof (data, "Server certificate:\n");
conn->ssl.server_cert = SSL_get_peer_certificate (conn->ssl.handle);
if(!conn->ssl.server_cert) {
failf(data, "SSL: couldn't get peer certificate!");
return 3;
}
infof (data, "Server certificate:\n");
str = X509_NAME_oneline (X509_get_subject_name (data->ssl.server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-subject!");
return 4;
}
infof(data, "\t subject: %s\n", str);
CRYPTO_free(str);
str = X509_NAME_oneline (X509_get_subject_name (conn->ssl.server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-subject!");
return 4;
}
infof(data, "\t subject: %s\n", str);
CRYPTO_free(str);
str = X509_NAME_oneline (X509_get_issuer_name (data->ssl.server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-issuer name!");
return 5;
}
infof(data, "\t issuer: %s\n", str);
CRYPTO_free(str);
str = X509_NAME_oneline (X509_get_issuer_name (conn->ssl.server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-issuer name!");
return 5;
}
infof(data, "\t issuer: %s\n", str);
CRYPTO_free(str);
/* We could do all sorts of certificate verification stuff here before
deallocating the certificate. */
/* We could do all sorts of certificate verification stuff here before
deallocating the certificate. */
if(data->ssl.verifypeer) {
data->ssl.certverifyresult=SSL_get_verify_result(data->ssl.handle);
infof(data, "Verify result: %d\n", data->ssl.certverifyresult);
}
else
data->ssl.certverifyresult=0;
if(data->ssl.verifypeer) {
data->ssl.certverifyresult=SSL_get_verify_result(conn->ssl.handle);
infof(data, "Verify result: %d\n", data->ssl.certverifyresult);
}
else
data->ssl.certverifyresult=0;
X509_free(data->ssl.server_cert);
X509_free(conn->ssl.server_cert);
#else /* USE_SSLEAY */
/* this is for "-ansi -Wall -pedantic" to stop complaining! (rabe) */
(void) data;
/* this is for "-ansi -Wall -pedantic" to stop complaining! (rabe) */
(void) conn;
#endif
return 0;
return 0;
}


@@ -22,5 +22,6 @@
*
* $Id$
*****************************************************************************/
int Curl_SSLConnect (struct UrlData *data);
#include "urldata.h"
int Curl_SSLConnect(struct connectdata *conn);
#endif


@@ -25,7 +25,7 @@
#include <string.h>
int Curl_strequal(const char *first, const char *second)
int curl_strequal(const char *first, const char *second)
{
#if defined(HAVE_STRCASECMP)
return !strcasecmp(first, second);
@@ -45,7 +45,7 @@ int Curl_strequal(const char *first, const char *second)
#endif
}
int Curl_strnequal(const char *first, const char *second, size_t max)
int curl_strnequal(const char *first, const char *second, size_t max)
{
#if defined(HAVE_STRCASECMP)
return !strncasecmp(first, second, max);
@@ -66,3 +66,44 @@ int Curl_strnequal(const char *first, const char *second, size_t max)
#endif
}
#ifndef HAVE_STRLCAT
/*
* The strlcat() function appends the NUL-terminated string src to the end
* of dst. It will append at most size - strlen(dst) - 1 bytes,
* NUL-terminating the result.
*
* The strlcpy() and strlcat() functions return the total length of the
* string they tried to create. For strlcpy() that means the length of src.
* For strlcat() that means the initial length of dst plus the length of
* src. While this may seem somewhat confusing it was done to make
* truncation detection simple.
*
*
*/
size_t strlcat(char *dst, const char *src, size_t siz)
{
char *d = dst;
const char *s = src;
size_t n = siz;
size_t dlen;
/* Find the end of dst and adjust bytes left but don't go past end */
while (n-- != 0 && *d != '\0')
d++;
dlen = d - dst;
n = siz - dlen;
if (n == 0)
return(dlen + strlen(s));
while (*s != '\0') {
if (n != 1) {
*d++ = *s;
n--;
}
s++;
}
*d = '\0';
return(dlen + (s - src)); /* count does not include NUL */
}
#endif
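/*
 * Truncation-detection sketch for the strlcat() fallback above
 * (hypothetical snippet): the return value is the length the result
 * wanted to be, so a value >= the buffer size means the output was cut.
 */
static void example_strlcat_usage(void)
{
  char buf[8];
  strcpy(buf, "foo");
  if(strlcat(buf, "barbaz", sizeof(buf)) >= sizeof(buf)) {
    /* truncated: buf now holds "foobarb" plus the terminating NUL */
  }
}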


@@ -22,10 +22,14 @@
*
* $Id$
*****************************************************************************/
int Curl_strequal(const char *first, const char *second);
int Curl_strnequal(const char *first, const char *second, size_t max);
#define strequal(a,b) Curl_strequal(a,b)
#define strnequal(a,b,c) Curl_strnequal(a,b,c)
/*
* These two actually are public functions.
*/
int curl_strequal(const char *first, const char *second);
int curl_strnequal(const char *first, const char *second, size_t max);
#define strequal(a,b) curl_strequal(a,b)
#define strnequal(a,b,c) curl_strnequal(a,b,c)
#endif
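/*
 * Usage sketch (hypothetical snippet): the now-public case-insensitive
 * helpers, and the internal shorthand macros that map onto them.
 */
static int example_is_chunked_header(const char *header)
{
  if(curl_strequal("transfer-encoding: chunked", header))
    return 1; /* exact header line, any letter case */
  /* compare only the first 26 bytes, i.e. "Transfer-Encoding: chunked" */
  return strnequal("Transfer-Encoding: chunked", header, 26);
}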

(file diff suppressed because it is too large)


@@ -53,7 +53,7 @@ gettimeofday (struct timeval *tp, void *nothing)
#endif
#endif
struct timeval Curl_tvnow ()
struct timeval Curl_tvnow (void)
{
struct timeval now;
#ifdef HAVE_GETTIMEOFDAY


@@ -82,7 +82,6 @@
#include <curl/types.h>
#include "netrc.h"
#include "getenv.h"
#include "hostip.h"
#include "transfer.h"
#include "sendf.h"
@@ -90,6 +89,7 @@
#include "getpass.h"
#include "progress.h"
#include "getdate.h"
#include "http.h"
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
@@ -107,7 +107,7 @@
<butlerm@xmission.com>. */
CURLcode static
_Transfer(struct connectdata *c_conn)
Transfer(struct connectdata *c_conn)
{
ssize_t nread; /* number of bytes read */
int bytecount = 0; /* total number of bytes read */
@@ -128,6 +128,7 @@ _Transfer(struct connectdata *c_conn)
int offset = 0; /* possible resume offset read from the
Content-Range: header */
int code = 0; /* error code from the 'HTTP/1.? XXX' line */
int httpversion = -1; /* the last digit in the HTTP/1.1 string */
/* for the low speed checks: */
CURLcode urg;
@@ -141,9 +142,6 @@ _Transfer(struct connectdata *c_conn)
char *buf;
int maxfd;
if(!conn || (conn->handle != STRUCT_CONNECT))
return CURLE_BAD_FUNCTION_ARGUMENT;
data = conn->data; /* there's the root struct */
buf = data->buffer;
maxfd = (conn->sockfd>conn->writesockfd?conn->sockfd:conn->writesockfd)+1;
@@ -320,9 +318,9 @@ _Transfer(struct connectdata *c_conn)
p++; /* pass the \r byte */
if ('\n' == *p)
p++; /* pass the \n byte */
#if 0 /* headers are not included in the size */
Curl_pgrsSetDownloadSize(data, conn->size);
#endif
header = FALSE; /* no more header to parse! */
/* now, only output this if the header AND body are requested:
@@ -337,13 +335,25 @@ _Transfer(struct connectdata *c_conn)
return urg;
data->header_size += p - data->headerbuff;
/*
* end-of-headers.
*
* If we requested a "no body" and this isn't a "close"
* connection, this is a good time to get out and return
* home.
*/
if(!conn->bits.close && data->bits.no_body)
return CURLE_OK;
break; /* exit header line loop */
}
if (!headerline++) {
/* This is the first header, it MUST be the error code line
or else we consider this to be the body right away! */
if (sscanf (p, " HTTP/1.%*c %3d", &code)) {
if (2 == sscanf (p, " HTTP/1.%d %3d", &httpversion, &code)) {
/* 404 -> URL not found! */
if (
( ((data->bits.http_follow_location) && (code >= 400))
@@ -357,6 +367,12 @@ _Transfer(struct connectdata *c_conn)
return CURLE_HTTP_NOT_FOUND;
}
data->progress.httpcode = code;
data->progress.httpversion = httpversion;
if(httpversion == 0)
/* Default action for HTTP/1.0 must be to close, unless
we get one of those fancy headers that tell us the
server keeps it open for us! */
conn->bits.close = TRUE;
}
else {
header = FALSE; /* this is not a header line */
@@ -365,12 +381,52 @@ _Transfer(struct connectdata *c_conn)
}
/* check for Content-Length: header lines to get size */
if (strnequal("Content-Length", p, 14) &&
sscanf (p+14, ": %ld", &contentlength))
sscanf (p+14, ": %ld", &contentlength)) {
conn->size = contentlength;
Curl_pgrsSetDownloadSize(data, contentlength);
}
else if((httpversion == 0) &&
conn->bits.httpproxy &&
strnequal("Proxy-Connection: keep-alive", p,
strlen("Proxy-Connection: keep-alive"))) {
/*
* When a HTTP/1.0 reply comes when using a proxy, the
* 'Proxy-Connection: keep-alive' line tells us the
* connection will be kept alive for our pleasure.
* Default action for 1.0 is to close.
*/
conn->bits.close = FALSE; /* don't close when done */
infof(data, "HTTP/1.0 proxy connection set to keep alive!\n");
}
else if (strnequal("Connection: close", p,
strlen("Connection: close"))) {
/*
* [RFC 2616, section 8.1.2.1]
* "Connection: close" is HTTP/1.1 language and means that
* the connection will close when this request has been
* served.
*/
conn->bits.close = TRUE; /* close when done */
}
else if (strnequal("Transfer-Encoding: chunked", p,
strlen("Transfer-Encoding: chunked"))) {
/*
* [RFC 2616, section 3.6.1] A 'chunked' transfer encoding
* means that the server will send a series of "chunks". Each
* chunk starts with a line of info (including the size of the
* coming block) terminated with CRLF, then a block of data
* with the previously mentioned size. There can be any number
* of chunks, and a chunk-size of zero signals the
* end-of-chunks. */
conn->bits.chunk = TRUE; /* chunks coming our way */
/* init our chunky engine */
Curl_httpchunk_init(conn);
}
else if (strnequal("Content-Range", p, 13)) {
if (sscanf (p+13, ": bytes %d-", &offset) ||
sscanf (p+13, ": bytes: %d-", &offset)) {
/* This second format was added August 1st by Igor
/* This second format was added August 1st 2000 by Igor
Khristophorov since Sun's webserver JavaWebServer/1.1.1
obviously sends the header this way! :-( */
if (data->resume_from == offset) {
@@ -406,7 +462,7 @@ _Transfer(struct connectdata *c_conn)
ptr++;
backup = *ptr; /* store the ending letter */
*ptr = '\0'; /* zero terminate */
data->newurl = strdup(start); /* clone string */
conn->newurl = strdup(start); /* clone string */
*ptr = backup; /* restore ending letter */
}
@@ -447,12 +503,12 @@ _Transfer(struct connectdata *c_conn)
if(0 == bodywrites) {
/* These checks are only made the first time we are about to
write a chunk of the body */
write a piece of the body */
if(conn->protocol&PROT_HTTP) {
/* HTTP-only checks */
if (data->newurl) {
if (conn->newurl) {
/* abort after the headers if "follow Location" is set */
infof (data, "Follow to new URL: %s\n", data->newurl);
infof (data, "Follow to new URL: %s\n", conn->newurl);
return CURLE_OK;
}
else if (data->resume_from &&
@@ -490,13 +546,49 @@ _Transfer(struct connectdata *c_conn)
} /* switch */
} /* two valid time strings */
} /* we have a time condition */
if(!conn->bits.close) {
/* If this is not the last request before a close, we must
set the maximum download size to the size of the expected
document or else, we won't know when to stop reading! */
if(-1 != conn->size)
conn->maxdownload = conn->size;
/* What to do if the size is *not* known? */
}
} /* this is HTTP */
} /* this is the first time we write a body part */
bodywrites++;
if(data->maxdownload &&
(bytecount + nread > data->maxdownload)) {
nread = data->maxdownload - bytecount;
if(conn->bits.chunk) {
/*
* Bless me father for I have sinned. Here comes a chunked
* transfer flying and we need to decode this properly. While
* the name says read, this function both reads and writes away
* the data. The returned 'nread' holds the number of bytes it
* actually wrote to the client. */
CHUNKcode res =
Curl_httpchunk_read(conn, str, nread, &nread);
if(CHUNKE_OK < res) {
failf(data, "Receeived problem in the chunky parser");
return CURLE_READ_ERROR;
}
else if(CHUNKE_STOP == res) {
/* we're done reading chunks! */
keepon &= ~KEEP_READ; /* read no more */
/* There are now possibly N number of bytes at the end of the
str buffer that weren't written to the client, but we don't
care about them right now. */
}
/* If it returned OK, we just keep going */
}
if(conn->maxdownload &&
(bytecount + nread >= conn->maxdownload)) {
nread = conn->maxdownload - bytecount;
if((signed int)nread < 0 ) /* this should be unusual */
nread = 0;
keepon &= ~KEEP_READ; /* we're done reading */
@@ -506,9 +598,12 @@ _Transfer(struct connectdata *c_conn)
Curl_pgrsSetDownloadCounter(data, (double)bytecount);
urg = Curl_client_write(data, CLIENTWRITE_BODY, str, nread);
if(urg)
return urg;
if(! conn->bits.chunk) {
/* If this is chunky transfer, it was already written */
urg = Curl_client_write(data, CLIENTWRITE_BODY, str, nread);
if(urg)
return urg;
}
} /* if (! header and data to read ) */
} /* if( read from socket ) */
@@ -605,28 +700,27 @@ _Transfer(struct connectdata *c_conn)
return CURLE_OK;
}
typedef int (*func_T)(void);
CURLcode curl_transfer(CURL *curl)
CURLcode Curl_perform(CURL *curl)
{
CURLcode res;
struct UrlData *data = curl;
struct connectdata *c_connect=NULL;
struct UrlData *data = (struct UrlData *)curl;
struct connectdata *conn=NULL;
bool port=TRUE; /* allow data->use_port to set port to use */
Curl_pgrsStartNow(data);
do {
Curl_pgrsTime(data, TIMER_STARTSINGLE);
res = curl_connect(curl, (CURLconnect **)&c_connect);
res = Curl_connect(data, &conn, port);
if(res == CURLE_OK) {
res = curl_do(c_connect);
res = Curl_do(conn);
if(res == CURLE_OK) {
res = _Transfer(c_connect); /* now fetch that URL please */
res = Transfer(conn); /* now fetch that URL please */
if(res == CURLE_OK)
res = curl_done(c_connect);
res = Curl_done(conn);
}
if((res == CURLE_OK) && data->newurl) {
if((res == CURLE_OK) && conn->newurl) {
/* Location: redirect
This is assumed to happen for HTTP(S) only!
@@ -634,9 +728,14 @@ CURLcode curl_transfer(CURL *curl)
char prot[16]; /* URL protocol string storage */
char letter; /* used for a silly sscanf */
port=TRUE; /* by default we use the user set port number even after
a Location: */
if (data->maxredirs && (data->followlocation >= data->maxredirs)) {
failf(data,"Maximum (%d) redirects followed", data->maxredirs);
#ifdef USE_OLD_DISCONNECT
curl_disconnect(c_connect);
#endif
res=CURLE_TOO_MANY_REDIRECTS;
break;
}
@@ -661,7 +760,7 @@ CURLcode curl_transfer(CURL *curl)
data->bits.http_set_referer = TRUE; /* might have been false */
}
if(2 != sscanf(data->newurl, "%15[^:]://%c", prot, &letter)) {
if(2 != sscanf(conn->newurl, "%15[^:]://%c", prot, &letter)) {
/***
*DANG* this is an RFC 2068 violation. The URL is supposed
to be absolute and this doesn't seem to be that!
@@ -679,13 +778,14 @@ CURLcode curl_transfer(CURL *curl)
if(!protsep)
protsep=data->url;
else {
/* TBD: set the port with curl_setopt() */
data->port=0; /* we got a full URL and then we should reset the
port number here to re-initiate it later */
port=FALSE; /* we got a full URL and thus we should not obey the
port number that might have been set by the user
in data->use_port */
protsep+=2; /* pass the slashes */
}
if('/' != data->newurl[0]) {
if('/' != conn->newurl[0]) {
/* First we need to find out if there's a ?-letter in the URL,
and cut it and the right-side of that off */
pathsep = strrchr(protsep, '?');
@@ -708,27 +808,26 @@ CURLcode curl_transfer(CURL *curl)
newest=(char *)malloc( strlen(data->url) +
1 + /* possible slash */
strlen(data->newurl) + 1/* zero byte */);
strlen(conn->newurl) + 1/* zero byte */);
if(!newest)
return CURLE_OUT_OF_MEMORY;
sprintf(newest, "%s%s%s", data->url, ('/' == data->newurl[0])?"":"/",
data->newurl);
free(data->newurl);
data->newurl = newest;
sprintf(newest, "%s%s%s", data->url, ('/' == conn->newurl[0])?"":"/",
conn->newurl);
free(conn->newurl);
conn->newurl = newest;
}
else {
/* This was an absolute URL, clear the port number! */
/* TBD: set the port with curl_setopt() */
data->port = 0;
/* This is an absolute URL, don't use the custom port number */
port = FALSE;
}
if(data->bits.urlstringalloc)
free(data->url);
/* TBD: set the URL with curl_setopt() */
data->url = data->newurl;
data->newurl = NULL; /* don't show! */
data->url = conn->newurl;
conn->newurl = NULL; /* don't show! */
data->bits.urlstringalloc = TRUE; /* the URL is allocated */
infof(data, "Follows Location: to new URL: '%s'\n", data->url);
@@ -773,18 +872,24 @@ CURLcode curl_transfer(CURL *curl)
*/
break;
}
#ifdef USE_OLD_DISCONNECT
curl_disconnect(c_connect);
#endif
continue;
}
#ifdef USE_OLD_DISCONNECT
curl_disconnect(c_connect);
#endif
}
break; /* it only reaches here when this shouldn't loop */
} while(1); /* loop if Location: */
if(data->newurl)
free(data->newurl);
if(conn->newurl) {
free(conn->newurl);
conn->newurl = NULL;
}
return res;
}
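/*
 * Application-side sketch (hypothetical snippet, option names as used by
 * libcurl of this era): curl_easy_perform() ends up in Curl_perform(), and
 * the Location: loop above is what CURLOPT_FOLLOWLOCATION and
 * CURLOPT_MAXREDIRS control.
 */
static void example_follow_redirects(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5L);
    if(CURLE_TOO_MANY_REDIRECTS == curl_easy_perform(curl)) {
      /* the redirect limit set above was hit */
    }
    curl_easy_cleanup(curl);
  }
}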


@@ -22,8 +22,9 @@
*
* $Id$
*****************************************************************************/
CURLcode curl_transfer(CURL *curl);
CURLcode Curl_perform(CURL *curl);
/* This sets up a forthcoming transfer */
CURLcode
Curl_Transfer (struct connectdata *data,
int sockfd, /* socket to read from or -1 */

lib/url.c (1551 lines changed; diff suppressed because it is too large)


@@ -79,6 +79,8 @@
#include <curl/curl.h>
#include "http_chunks.h" /* for the structs and enum stuff */
/* Download buffer size, keep it fairly big for speed reasons */
#define BUFSIZE (1024*50)
@@ -96,27 +98,6 @@
#define MAX(x,y) ((x)>(y)?(x):(y))
#endif
/* Type of handle. All publicly returned 'handles' in the curl interface
have a handle first in the struct that describes what kind of handle it
is. Used to detect bad handle usage. */
typedef enum {
STRUCT_NONE,
STRUCT_OPEN,
STRUCT_CONNECT,
STRUCT_LAST
} Handle;
/* Connecting to a remote server using the curl interface is moving through
a state machine, this type is used to store the current state */
typedef enum {
CONN_NONE, /* illegal state */
CONN_INIT, /* curl_connect() has been called */
CONN_DO, /* curl_do() has been called successfully */
CONN_DONE, /* curl_done() has been called successfully */
CONN_ERROR, /* and error has occurred */
CONN_LAST /* illegal state */
} ConnState;
#ifdef KRB4
/* Types needed for krb4-ftp connections */
struct krb4buffer {
@@ -133,20 +114,86 @@ enum protection_level {
};
#endif
/* struct for data related to SSL and SSL connections */
struct ssl_connect_data {
bool use; /* use ssl encrypted communications TRUE/FALSE */
#ifdef USE_SSLEAY
/* these ones requires specific SSL-types */
SSL_CTX* ctx;
SSL* handle;
X509* server_cert;
#endif /* USE_SSLEAY */
};
struct ssl_config_data {
long version; /* what version the client wants to use */
long certverifyresult; /* result from the certificate verification */
long verifypeer; /* set TRUE if this is desired */
char *CApath; /* DOES NOT WORK ON WINDOWS */
char *CAfile; /* certificate to verify peer against */
char *random_file; /* path to file containing "random" data */
char *egdsocket; /* path to file containing the EGD daemon socket */
};
/****************************************************************************
* HTTP unique setup
***************************************************************************/
struct HTTP {
struct FormData *sendit;
int postsize;
char *p_pragma; /* Pragma: string */
char *p_accept; /* Accept: string */
long readbytecount;
long writebytecount;
/* For FORM posting */
struct Form form;
size_t (*storefread)(char *, size_t , size_t , FILE *);
FILE *in;
struct Curl_chunker chunk;
};
/****************************************************************************
* FTP unique setup
***************************************************************************/
struct FTP {
long *bytecountp;
char *user; /* user name string */
char *passwd; /* password string */
char *urlpath; /* the originally given path part of the URL */
char *dir; /* decoded directory */
char *file; /* decoded file */
char *entrypath; /* the PWD reply when we logged on */
};
/****************************************************************************
* FILE unique setup
***************************************************************************/
struct FILE {
int fd; /* open file descriptor to read from! */
};
/*
* Boolean values that concerns this connection.
*/
struct ConnectBits {
bool close; /* if set, we close the connection after this request */
bool reuse; /* if set, this is a re-used connection */
bool chunk; /* if set, this is a chunked transfer-encoding */
bool httpproxy; /* if set, this transfer is done through a http proxy */
};
/*
* The connectdata struct contains all fields and variables that should be
* unique for an entire connection.
*/
struct connectdata {
/**** Fields set when inited and not modified again */
/* To better see what kind of struct that is passed as input, *ALL* publicly
returned handles MUST have this initial 'Handle'. */
Handle handle; /* struct identifier */
struct UrlData *data; /* link to the root CURL struct */
/**** curl_connect() phase fields */
ConnState state; /* for state dependent actions */
int connectindex; /* what index in the connects index this particular
struct has */
long protocol; /* PROT_* flags concerning the protocol set */
#define PROT_MISSING (1<<0)
@@ -159,20 +206,42 @@ struct connectdata {
#define PROT_LDAP (1<<7)
#define PROT_FILE (1<<8)
#ifdef ENABLE_IPV6
struct addrinfo *hp; /* host info pointer list */
struct addrinfo *ai; /* the particular host we use */
#else
char *hostent_buf; /* pointer to allocated memory for name info */
struct hostent *hp;
struct sockaddr_in serv_addr;
char proto[64]; /* store the protocol string in this buffer */
#endif
char protostr[64]; /* store the protocol string in this buffer */
char gname[257]; /* store the hostname in this buffer */
char *name; /* host name pointer to fool around with */
char *path; /* allocated buffer to store the URL's path part in */
char *hostname; /* hostname to connect, as parsed from url */
long port; /* which port to use locally */
unsigned short remote_port; /* what remote port to connect to,
not the proxy port! */
char *ppath;
long bytecount;
struct timeval now; /* current time */
char *proxyhost; /* name of the http proxy host */
struct timeval now; /* "current" time */
struct timeval created; /* creation time */
int firstsocket; /* the main socket to use */
int secondarysocket; /* for i.e ftp transfers */
long upload_bufsize; /* adjust as you see fit, never bigger than BUFSIZE
never smaller than UPLOAD_BUFSIZE */
long maxdownload; /* in bytes, the maximum amount of data to fetch, 0
means unlimited */
struct ssl_connect_data ssl; /* this is for ssl-stuff */
struct ConnectBits bits; /* various state-flags for this connection */
/* These two functions MUST be set by the curl_connect() function to be
protocol dependent */
CURLcode (*curl_do)(struct connectdata *connect);
@@ -183,6 +252,11 @@ struct connectdata {
*/
CURLcode (*curl_connect)(struct connectdata *connect);
/* This function *MAY* be set to a protocol-dependent function that is run
* by the curl_disconnect(), as a step in the disconnection.
*/
CURLcode (*curl_disconnect)(struct connectdata *connect);
/* This function *MAY* be set to a protocol-dependent function that is run
* in the curl_close() function if protocol-specific cleanups are required.
*/
@@ -201,6 +275,21 @@ struct connectdata {
the same we read from. -1 disables */
long *writebytecountp; /* return number of bytes written or NULL */
/** Dynamically allocated strings, may need to be freed before this **/
/** struct is killed. **/
struct dynamically_allocated_data {
char *proxyuserpwd; /* free later if not NULL! */
char *uagent; /* free later if not NULL! */
char *userpwd; /* free later if not NULL! */
char *rangeline; /* free later if not NULL! */
char *ref; /* free later if not NULL! */
char *cookie; /* free later if not NULL! */
char *host; /* free later if not NULL */
} allocptr;
char *newurl; /* This can only be set if a Location: was in the
document headers */
#ifdef KRB4
enum protection_level command_prot;
@@ -214,6 +303,24 @@ struct connectdata {
void *app_data;
#endif
/*************** Request - specific items ************/
/* previously this was in the urldata struct */
union {
struct HTTP *http;
struct HTTP *gopher; /* alias, just for the sake of being more readable */
struct HTTP *https; /* alias, just for the sake of being more readable */
struct FTP *ftp;
struct FILE *file;
void *telnet; /* private for telnet.c-eyes only */
#if 0 /* no need for special ones for these: */
struct LDAP *ldap;
struct DICT *dict;
#endif
void *generic;
} proto;
};
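/* Illustration only, not part of the original header: the 'proto' union above
   holds the protocol-private state for the connection, so each protocol
   handler picks its own member. A simplified, hypothetical sketch of how an
   HTTP-specific function could use it (not actual libcurl code): */
#if 0
static CURLcode http_example(struct connectdata *conn)
{
  struct HTTP *http = conn->proto.http; /* HTTP-private data for this
                                           connection */
  http->readbytecount = 0;              /* reset the per-request counter */
  return CURLE_OK;
}
#endif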
struct Progress {
@@ -240,6 +347,7 @@ struct Progress {
double t_connect;
double t_pretransfer;
int httpcode;
int httpversion;
time_t filetime; /* If requested, this might get set. It may be 0 if
the time was unretrievable */
@@ -249,35 +357,6 @@ struct Progress {
int speeder_c;
};
/****************************************************************************
* HTTP unique setup
***************************************************************************/
struct HTTP {
struct FormData *sendit;
int postsize;
char *p_pragma; /* Pragma: string */
char *p_accept; /* Accept: string */
long readbytecount;
long writebytecount;
/* For FORM posting */
struct Form form;
size_t (*storefread)(char *, size_t , size_t , FILE *);
FILE *in;
};
/****************************************************************************
* FTP unique setup
***************************************************************************/
struct FTP {
long *bytecountp;
char *user; /* user name string */
char *passwd; /* password string */
char *urlpath; /* the originally given path part of the URL */
char *dir; /* decoded directory */
char *file; /* decoded file */
};
typedef enum {
HTTPREQ_NONE, /* first in list */
HTTPREQ_GET,
@@ -324,30 +403,10 @@ struct Configbits {
bool proxystringalloc; /* the http proxy string is malloc()'ed */
bool rangestringalloc; /* the range string is malloc()'ed */
bool urlstringalloc; /* the URL string is malloc()'ed */
};
/* What type of interface initiated this struct */
typedef enum {
CURLI_NONE,
CURLI_EASY,
CURLI_NORMAL,
CURLI_LAST
} CurlInterface;
/* struct for data related to SSL and SSL connections */
struct ssldata {
bool use; /* use ssl encrypted communications TRUE/FALSE */
long version; /* what version the client wants to use */
long certverifyresult; /* result from the certificate verification */
long verifypeer; /* set TRUE if this is desired */
char *CApath; /* DOES NOT WORK ON WINDOWS */
char *CAfile; /* certificate to verify peer against */
#ifdef USE_SSLEAY
/* these ones require specific SSL-types */
SSL_CTX* ctx;
SSL* handle;
X509* server_cert;
#endif /* USE_SSLEAY */
bool reuse_forbid; /* if this is forbidden to be reused, close
after use */
bool reuse_fresh; /* do not re-use an existing connection for this
transfer */
};
/*
@@ -364,20 +423,24 @@ struct ssldata {
*
* (Request)
* 3 - Request-specific. Variables that are of interest for this particular
* transfer being made right now.
* transfer being made right now. THIS IS THE WRONG STRUCT FOR THOSE.
*
* In February 2001, this is being done more strictly. The 'connectdata' struct
* MUST have all the connection oriented stuff as we may now have several
* simultaneous connections and connection structs in memory.
*
* From now on, the 'UrlData' must only contain data that is set once and then
* used for many (perhaps) independent connections. Values that are generated or
* calculated internally MUST NOT be a part of this struct.
*/
struct UrlData {
Handle handle; /* struct identifier */
CurlInterface interf; /* created by WHAT interface? */
/*************** Global - specific items ************/
FILE *err; /* the stderr writes go here */
char *errorbuffer; /* store failure messages in here */
/*************** Session - specific items ************/
char *proxy; /* if proxy, set it here, set CONF_PROXY to use this */
char *proxy; /* if proxy, set it here */
char *proxyuserpwd; /* Proxy <user:password>, if used */
long proxyport; /* If non-zero, use this port number by default. If the
proxy string features a ":[port]" that one will override
@@ -387,33 +450,14 @@ struct UrlData {
long header_size; /* size of read header(s) in bytes */
long request_size; /* the amount of bytes sent in the request(s) */
/*************** Request - specific items ************/
union {
struct HTTP *http;
struct HTTP *gopher; /* alias, just for the sake of being more readable */
struct HTTP *https; /* alias, just for the sake of being more readable */
struct FTP *ftp;
#if 0 /* no need for special ones for these: */
struct TELNET *telnet;
struct FILE *file;
struct LDAP *ldap;
struct DICT *dict;
#endif
void *generic;
} proto;
FILE *out; /* the fetched file goes here */
FILE *in; /* the uploaded file is read from here */
FILE *writeheader; /* write the header to this if non-NULL */
char *url; /* what to get */
char *freethis; /* if non-NULL, an allocated string for the URL */
char *hostname; /* hostname to connect, as parsed from url */
long port; /* which port to use (if non-protocol bind) set
CONF_PORT to use this */
unsigned short remote_port; /* what remote port to connect to, not the proxy
port! */
long use_port; /* which port to use (when not using default) */
struct Configbits bits; /* new-style (v7) flag data */
struct ssl_config_data ssl; /* this is for ssl-stuff */
char *userpwd; /* <user:password>, if used */
char *range; /* range, if used. See README for detailed specification on
@@ -455,13 +499,6 @@ struct UrlData {
long timeout; /* in seconds, 0 means no timeout */
long infilesize; /* size of file to upload, -1 means unknown */
long maxdownload; /* in bytes, the maximum amount of data to fetch, 0
means unlimited */
/* fields only set and used within _urlget() */
int firstsocket; /* the main socket to use */
int secondarysocket; /* for i.e ftp transfers */
char buffer[BUFSIZE+1]; /* buffer with size BUFSIZE */
double current_speed; /* the ProgressShow() function sets this */
@@ -473,9 +510,6 @@ struct UrlData {
char *cookie; /* HTTP cookie string to send */
char *newurl; /* This can only be set if a Location: was in the
document headers */
struct curl_slist *headers; /* linked list of extra headers */
struct HttpPost *httppost; /* linked list of POST data */
@@ -484,12 +518,13 @@ struct UrlData {
struct CookieInfo *cookies;
struct ssldata ssl; /* this is for ssl-stuff */
long crlf;
struct curl_slist *quote; /* before the transfer */
struct curl_slist *postquote; /* after the transfer */
/* Telnet negotiation options */
struct curl_slist *telnet_options; /* linked list of telnet options */
TimeCond timecondition; /* kind of comparison */
time_t timevalue; /* what time to compare with */
@@ -500,12 +535,6 @@ struct UrlData {
char *headerbuff; /* allocated buffer to store headers in */
int headersize; /* size of the allocation */
#if 0
/* this was removed in libcurl 7.4 */
char *writeinfo; /* if non-NULL describes what to output on a successful
completion */
#endif
struct Progress progress; /* for all the progress meter data */
#define MAX_CURL_USER_LENGTH 128
@@ -522,25 +551,44 @@ struct UrlData {
char proxyuser[MAX_CURL_USER_LENGTH];
char proxypasswd[MAX_CURL_PASSWORD_LENGTH];
/**** Dynamically allocated strings, may need to be freed on return ****/
char *ptr_proxyuserpwd; /* free later if not NULL! */
char *ptr_uagent; /* free later if not NULL! */
char *ptr_userpwd; /* free later if not NULL! */
char *ptr_rangeline; /* free later if not NULL! */
char *ptr_ref; /* free later if not NULL! */
char *ptr_cookie; /* free later if not NULL! */
char *ptr_host; /* free later if not NULL */
char *krb4_level; /* what security level */
#ifdef KRB4
FILE *cmdchannel;
#endif
struct timeval keeps_speed; /* this should be request-specific */
/* 'connects' will be an allocated array with pointers. If the pointer is
set, it holds an allocated connection. */
struct connectdata **connects;
size_t numconnects; /* size of the 'connects' array */
curl_closepolicy closepolicy;
};
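/* Illustration only, not part of the original header: a simplified sketch of
   how the 'connects' array above can act as a connection cache. The helper
   name is hypothetical and the matching is reduced to host name and port;
   this is not the actual libcurl lookup code (string.h assumed for strcmp). */
#if 0
static struct connectdata *find_reusable(struct UrlData *data,
                                         char *hostname,
                                         unsigned short port)
{
  size_t i;
  for(i = 0; i < data->numconnects; i++) {
    struct connectdata *check = data->connects[i]; /* NULL if slot is empty */
    if(check && (check->remote_port == port) &&
       (0 == strcmp(check->name, hostname)))
      return check; /* matching connection found, re-use it */
  }
  return NULL; /* no match, the caller has to create a new connection */
}
#endif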
#define LIBCURL_NAME "libcurl"
#define LIBCURL_ID LIBCURL_NAME " " LIBCURL_VERSION " " SSL_ID
CURLcode Curl_getinfo(CURL *curl, CURLINFO info, ...);
/*
* Here follow function prototypes from what we used to plan to call
* the "low level" interface. It is no longer prioritized and it is not likely
* ever to be supported for external users.
*
* I removed all the comments for them as well, as they were no longer accurate
* and they're not meant for "public use" anymore.
*/
CURLcode Curl_open(CURL **curl, char *url);
CURLcode Curl_setopt(CURL *handle, CURLoption option, ...);
CURLcode Curl_close(CURL *curl); /* the opposite of curl_open() */
CURLcode Curl_connect(struct UrlData *,
struct connectdata **,
bool allow_port);
CURLcode Curl_do(struct connectdata *);
CURLcode Curl_done(struct connectdata *);
CURLcode Curl_disconnect(struct connectdata *);
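/* Rough call order for the prototypes above, as suggested by the comments in
   the connectdata struct; this is an assumption, not a documented contract:
   Curl_open() creates the handle, Curl_setopt() configures it, and then for
   each transfer Curl_connect() picks or creates a connection, Curl_do()
   issues the request, Curl_done() finishes it, and Curl_disconnect() tears
   the connection down when it is not kept around for re-use. Curl_close()
   finally frees the handle. */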
#endif

View File

@@ -11,7 +11,7 @@ libversion="$version"
# Now we have a section to get the major, minor and patch number from the
# full version string. We create a single hexadecimal number from it '0xMMmmpp'
#
perl='$a=<STDIN>;@p=split("\\.",$a);for(0..2){printf STDOUT ("%02x",$p[0+$_]);}';
perl='$a=<STDIN>;@p=split("[\\.-]",$a);for(0..2){printf STDOUT ("%02x",$p[0+$_]);}';
numeric=`echo $libversion | perl -e "$perl"`
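For illustration only (not part of the script above): the same packing of major, minor and patch numbers into a single '0xMMmmpp' value can be expressed in C; for example, version 7.7.3 becomes 0x070703. This is a hand-written sketch, not code from the repository.

#include <stdio.h>

/* pack major/minor/patch into one number, e.g. 7.7.3 -> 0x070703 */
static long version_num(int major, int minor, int patch)
{
  return ((long)major << 16) | (minor << 8) | patch;
}

int main(void)
{
  printf("0x%06lx\n", version_num(7, 7, 3)); /* prints 0x070703 */
  return 0;
}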

View File

@@ -145,7 +145,7 @@ if($totalmem) {
for(keys %sizeataddr) {
$addr = $_;
$size = $sizeataddr{$addr};
if($size) {
if($size > 0) {
print "At $addr, there's $size bytes.\n";
print " allocated by ".$getmem{$addr}."\n";
}

View File

@@ -17,7 +17,7 @@ LINKR = link.exe /incremental:no /libpath:"../lib"
## Debug
CCD = cl.exe /MDd /Gm /ZI /Od /D "_DEBUG" /GZ
LINKD = link.exe /incremental:yes /debug
LINKD = link.exe /incremental:yes /debug /libpath:"../lib"
CFLAGS = /I "../include" /nologo /W3 /GX /D "WIN32" /D "_CONSOLE" /D "_MBCS" /YX /FD /c
LFLAGS = /nologo /out:$(PROGRAM_NAME) /subsystem:console /machine:I386

View File

@@ -29,8 +29,6 @@
#include <ctype.h>
#include <curl/curl.h>
#include <curl/types.h> /* new for v7 */
#include <curl/easy.h> /* new for v7 */
#define _MPRINTF_REPLACE /* we want curl-functions instead of native ones */
#include <curl/mprintf.h>
@@ -102,16 +100,13 @@ typedef enum {
#define CONF_NOBODY (1<<11) /* use HEAD to get http document */
#define CONF_FAILONERROR (1<<12) /* no output on http error codes >= 300 */
#define CONF_UPLOAD (1<<14) /* this is an upload */
#define CONF_POST (1<<15) /* HTTP POST method */
#define CONF_FTPLISTONLY (1<<16) /* Use NLST when listing ftp dir */
#define CONF_FTPAPPEND (1<<20) /* Append instead of overwrite on upload! */
#define CONF_NETRC (1<<22) /* read user+password from .netrc */
#define CONF_FOLLOWLOCATION (1<<23) /* use Location: Luke! */
#define CONF_GETTEXT (1<<24) /* use ASCII/text for transfer */
#define CONF_HTTPPOST (1<<25) /* multipart/form-data HTTP POST */
#if 0
#define CONF_PUT (1<<27) /* PUT the input file */
#endif
#define CONF_MUTE (1<<28) /* force NOPROGRESS */
#ifndef HAVE_STRDUP
@@ -245,60 +240,62 @@ static void help(void)
" -a/--append Append to target file when uploading (F)\n"
" -A/--user-agent <string> User-Agent to send to server (H)\n"
" -b/--cookie <name=string/file> Cookie string or file to read cookies from (H)\n"
" -B/--use-ascii Use ASCII/text transfer\n"
" -C/--continue-at <offset> Specify absolute resume offset\n"
" -B/--use-ascii Use ASCII/text transfer\n",
curl_version());
puts(" -C/--continue-at <offset> Specify absolute resume offset\n"
" -d/--data <data> HTTP POST data (H)\n"
" --data-ascii <data> HTTP POST ASCII data (H)\n"
" --data-binary <data> HTTP POST binary data (H)\n"
" -D/--dump-header <file> Write the headers to this file\n"
" -e/--referer Referer page (H)\n"
" -E/--cert <cert[:passwd]> Specifies your certificate file and password (HTTPS)\n"
" -e/--referer Referer page (H)");
puts(" -E/--cert <cert[:passwd]> Specifies your certificate file and password (HTTPS)\n"
" --cacert <file> CA certifciate to verify peer against (HTTPS)\n"
" -f/--fail Fail silently (no output at all) on errors (H)\n"
" -F/--form <name=content> Specify HTTP POST data (H)\n"
" -g/--globoff Disable URL sequences and ranges using {} and []\n"
" -h/--help This help text\n"
" -H/--header <line> Custom header to pass to server. (H)\n"
" -i/--include Include the HTTP-header in the output (H)\n"
" -H/--header <line> Custom header to pass to server. (H)");
puts(" -i/--include Include the HTTP-header in the output (H)\n"
" -I/--head Fetch document info only (HTTP HEAD/FTP SIZE)\n"
" --interface <interface> Specify the interface to be used\n"
" --krb4 <level> Enable krb4 with specified security level (F)\n"
" -K/--config Specify which config file to read\n"
" -l/--list-only List only names of an FTP directory (F)\n"
" -L/--location Follow Location: hints (H)\n"
" -l/--list-only List only names of an FTP directory (F)");
puts(" -L/--location Follow Location: hints (H)\n"
" -m/--max-time <seconds> Maximum time allowed for the transfer\n"
" -M/--manual Display huge help text\n"
" -n/--netrc Read .netrc for user name and password\n"
" -N/--no-buffer Disables the buffering of the output stream\n"
" -o/--output <file> Write output to <file> instead of stdout\n"
" -N/--no-buffer Disables the buffering of the output stream");
puts(" -o/--output <file> Write output to <file> instead of stdout\n"
" -O/--remote-name Write output to a file named as the remote file\n"
" -p/--proxytunnel Perform non-HTTP services through a HTTP proxy\n"
" -P/--ftpport <address> Use PORT with address instead of PASV when ftping (F)\n"
" -q When used as the first parameter disables .curlrc\n"
" -Q/--quote <cmd> Send QUOTE command to FTP before file transfer (F)\n"
" -r/--range <range> Retrieve a byte range from a HTTP/1.1 or FTP server\n"
" -Q/--quote <cmd> Send QUOTE command to FTP before file transfer (F)");
puts(" -r/--range <range> Retrieve a byte range from a HTTP/1.1 or FTP server\n"
" -s/--silent Silent mode. Don't output anything\n"
" -S/--show-error Show error. With -s, make curl show errors when they occur\n"
" -t/--telnet-option <OPT=val> Set telnet option\n"
" -T/--upload-file <file> Transfer/upload <file> to remote site\n"
" --url <URL> Another way to specify URL to work with\n"
" -u/--user <user[:password]> Specify user and password to use\n"
" --url <URL> Another way to specify URL to work with");
puts(" -u/--user <user[:password]> Specify user and password to use\n"
" -U/--proxy-user <user[:password]> Specify Proxy authentication\n"
" -v/--verbose Makes the operation more talkative\n"
" -V/--version Outputs version number then quits\n"
" -w/--write-out [format] What to output after completion\n"
" -x/--proxy <host[:port]> Use proxy. (Default port is 1080)\n"
" -X/--request <command> Specific request command to use\n"
" -y/--speed-time Time needed to trig speed-limit abort. Defaults to 30\n"
" -X/--request <command> Specific request command to use");
puts(" -y/--speed-time Time needed to trig speed-limit abort. Defaults to 30\n"
" -Y/--speed-limit Stop transfer if below speed-limit for 'speed-time' secs\n"
" -z/--time-cond <time> Includes a time condition to the server (H)\n"
" -Z/--max-redirs <num> Set maximum number of redirections allowed (H)\n"
" -2/--sslv2 Force usage of SSLv2 (H)\n"
" -3/--sslv3 Force usage of SSLv3 (H)\n"
" -#/--progress-bar Display transfer progress as a progress bar\n"
" -3/--sslv3 Force usage of SSLv3 (H)");
puts(" -#/--progress-bar Display transfer progress as a progress bar\n"
" --crlf Convert LF to CRLF in upload. Useful for MVS (OS/390)\n"
" --stderr <file> Where to redirect stderr. - means stdout.\n",
curl_version()
);
" --stderr <file> Where to redirect stderr. - means stdout.\n"
" --random-file <file> File to use for reading random data from (SSL)\n"
" --egd-file <file> EGD socket path for random data (SSL)");
}
struct LongShort {
@@ -308,6 +305,8 @@ struct LongShort {
};
struct Configurable {
char *random_file;
char *egd_file;
char *useragent;
char *cookie;
bool use_resume;
@@ -366,6 +365,8 @@ struct Configurable {
struct HttpPost *httppost;
struct HttpPost *last_post;
struct curl_slist *telnet_options;
HttpReq httpreq;
};
@@ -525,6 +526,8 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"7", "interface", TRUE},
{"6", "krb4", TRUE},
{"5", "url", TRUE},
{"5a", "random-file", TRUE},
{"5b", "egd-file", TRUE},
{"2", "sslv2", FALSE},
{"3", "sslv3", FALSE},
@@ -565,7 +568,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"r", "range", TRUE},
{"s", "silent", FALSE},
{"S", "show-error", FALSE},
{"t", "upload", FALSE},
{"t", "telnet-options", TRUE},
{"T", "upload-file", TRUE},
{"u", "user", TRUE},
{"U", "proxy-user", TRUE},
@@ -674,29 +677,37 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
GetStr(&config->krb4level, nextarg);
break;
case '5':
/* the URL! */
{
struct getout *url;
if(config->url_get || (config->url_get=config->url_list)) {
/* there's a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_get && (config->url_get->flags&GETOUT_URL))
config->url_get = config->url_get->next;
}
switch(subletter) {
case 'a': /* random-file */
GetStr(&config->random_file, nextarg);
break;
case 'b': /* egd-file */
GetStr(&config->egd_file, nextarg);
break;
default: /* the URL! */
{
struct getout *url;
if(config->url_get || (config->url_get=config->url_list)) {
/* there's a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_get && (config->url_get->flags&GETOUT_URL))
config->url_get = config->url_get->next;
}
/* now there might or might not be an available node to fill in! */
/* now there might or might not be an available node to fill in! */
if(config->url_get)
/* existing node */
url = config->url_get;
else
/* there was no free node, create one! */
url=new_getout(config);
if(url) {
/* fill in the URL */
GetStr(&url->url, nextarg);
url->flags |= GETOUT_URL;
if(config->url_get)
/* existing node */
url = config->url_get;
else
/* there was no free node, create one! */
url=new_getout(config);
if(url) {
/* fill in the URL */
GetStr(&url->url, nextarg);
url->flags |= GETOUT_URL;
}
}
}
break;
@@ -785,8 +796,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
else
config->postfields=postdata;
}
if(config->postfields)
config->conf |= CONF_POST;
if(SetHTTPrequest(HTTPREQ_SIMPLEPOST, &config->httpreq))
return PARAM_BAD_USE;
break;
@@ -959,9 +969,8 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
config->showerror ^= TRUE; /* toggle on if used with -s */
break;
case 't':
/* we are uploading */
config->conf ^= CONF_UPLOAD;
fprintf(stderr, "-t is a deprecated switch, use '-T -' instead!\n");
/* Telnet options */
config->telnet_options = curl_slist_append(config->telnet_options, nextarg);
break;
case 'T':
/* we are uploading */
@@ -1370,6 +1379,10 @@ void progressbarinit(struct ProgressData *bar)
void free_config_fields(struct Configurable *config)
{
if(config->random_file)
free(config->random_file);
if(config->egd_file)
free(config->egd_file);
if(config->userpwd)
free(config->userpwd);
if(config->postfields)
@@ -1443,12 +1456,10 @@ operate(struct Configurable *config, int argc, char *argv[])
curl_memdebug("memdump");
#endif
main_init(); /* inits winsock crap for windows */
config->showerror=TRUE;
config->conf=CONF_DEFAULT;
#if 0
config->crlf=FALSE;
config->quote=NULL;
#endif
if(argc>1 &&
(!strnequal("--", argv[1], 2) && (argv[1][0] == '-')) &&
@@ -1457,9 +1468,6 @@ operate(struct Configurable *config, int argc, char *argv[])
* The first flag, that is not a verbose name, but a shortname
* and it includes the 'q' flag!
*/
#if 0
fprintf(stderr, "I TURNED OFF THE CRAP\n");
#endif
;
}
else {
@@ -1539,6 +1547,15 @@ operate(struct Configurable *config, int argc, char *argv[])
else
allocuseragent = TRUE;
/*
* Get a curl handle to use for all forthcoming curl transfers. Cleanup
* when all transfers are done. This is supported with libcurl 7.7 and
* should not be attempted on previous versions.
*/
curl = curl_easy_init();
if(!curl)
return CURLE_FAILED_INIT;
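/*
 * Aside, not part of the original source: with libcurl 7.7 the same easy
 * handle is meant to be re-used for several transfers so that persistent
 * connections can be exploited. A minimal stand-alone sketch of that pattern,
 * using only documented easy-interface calls (URLs are placeholders):
 *
 *   CURL *h = curl_easy_init();
 *   if(h) {
 *     curl_easy_setopt(h, CURLOPT_URL, "http://example.com/one");
 *     curl_easy_perform(h);
 *     curl_easy_setopt(h, CURLOPT_URL, "http://example.com/two");
 *     curl_easy_perform(h);   (may re-use the first connection)
 *     curl_easy_cleanup(h);   (tears down any connections kept alive)
 *   }
 */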
urlnode = config->url_list;
/* loop through the list of given URLs */
@@ -1724,122 +1741,113 @@ operate(struct Configurable *config, int argc, char *argv[])
#endif
main_init();
curl_easy_setopt(curl, CURLOPT_FILE, (FILE *)&outs); /* where to store */
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
curl_easy_setopt(curl, CURLOPT_INFILE, infd); /* for uploads */
/* size of uploaded file: */
curl_easy_setopt(curl, CURLOPT_INFILESIZE, infilesize);
curl_easy_setopt(curl, CURLOPT_URL, url); /* what to fetch */
curl_easy_setopt(curl, CURLOPT_PROXY, config->proxy); /* proxy to use */
curl_easy_setopt(curl, CURLOPT_VERBOSE, config->conf&CONF_VERBOSE);
curl_easy_setopt(curl, CURLOPT_HEADER, config->conf&CONF_HEADER);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, config->conf&CONF_NOPROGRESS);
curl_easy_setopt(curl, CURLOPT_NOBODY, config->conf&CONF_NOBODY);
curl_easy_setopt(curl, CURLOPT_FAILONERROR,
config->conf&CONF_FAILONERROR);
curl_easy_setopt(curl, CURLOPT_UPLOAD, config->conf&CONF_UPLOAD);
curl_easy_setopt(curl, CURLOPT_FTPLISTONLY,
config->conf&CONF_FTPLISTONLY);
curl_easy_setopt(curl, CURLOPT_FTPAPPEND, config->conf&CONF_FTPAPPEND);
curl_easy_setopt(curl, CURLOPT_NETRC, config->conf&CONF_NETRC);
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION,
config->conf&CONF_FOLLOWLOCATION);
curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, config->conf&CONF_GETTEXT);
curl_easy_setopt(curl, CURLOPT_MUTE, config->conf&CONF_MUTE);
curl_easy_setopt(curl, CURLOPT_USERPWD, config->userpwd);
curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, config->proxyuserpwd);
curl_easy_setopt(curl, CURLOPT_RANGE, config->range);
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errorbuffer);
curl_easy_setopt(curl, CURLOPT_TIMEOUT, config->timeout);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, config->postfields);
/* new in libcurl 7.2: */
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, config->postfieldsize);
curl_easy_setopt(curl, CURLOPT_REFERER, config->referer);
curl_easy_setopt(curl, CURLOPT_AUTOREFERER,
config->conf&CONF_AUTO_REFERER);
curl_easy_setopt(curl, CURLOPT_USERAGENT, config->useragent);
curl_easy_setopt(curl, CURLOPT_FTPPORT, config->ftpport);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, config->low_speed_limit);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, config->low_speed_time);
curl_easy_setopt(curl, CURLOPT_RESUME_FROM,
config->use_resume?config->resume_from:0);
curl_easy_setopt(curl, CURLOPT_COOKIE, config->cookie);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, config->headers);
curl_easy_setopt(curl, CURLOPT_HTTPPOST, config->httppost);
curl_easy_setopt(curl, CURLOPT_SSLCERT, config->cert);
curl_easy_setopt(curl, CURLOPT_SSLCERTPASSWD, config->cert_passwd);
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_FILE, (FILE *)&outs); /* where to store */
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
curl_easy_setopt(curl, CURLOPT_INFILE, infd); /* for uploads */
/* size of uploaded file: */
curl_easy_setopt(curl, CURLOPT_INFILESIZE, infilesize);
curl_easy_setopt(curl, CURLOPT_URL, url); /* what to fetch */
curl_easy_setopt(curl, CURLOPT_PROXY, config->proxy); /* proxy to use */
curl_easy_setopt(curl, CURLOPT_VERBOSE, config->conf&CONF_VERBOSE);
curl_easy_setopt(curl, CURLOPT_HEADER, config->conf&CONF_HEADER);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, config->conf&CONF_NOPROGRESS);
curl_easy_setopt(curl, CURLOPT_NOBODY, config->conf&CONF_NOBODY);
curl_easy_setopt(curl, CURLOPT_FAILONERROR,
config->conf&CONF_FAILONERROR);
curl_easy_setopt(curl, CURLOPT_UPLOAD, config->conf&CONF_UPLOAD);
curl_easy_setopt(curl, CURLOPT_POST, config->conf&CONF_POST);
curl_easy_setopt(curl, CURLOPT_FTPLISTONLY,
config->conf&CONF_FTPLISTONLY);
curl_easy_setopt(curl, CURLOPT_FTPAPPEND, config->conf&CONF_FTPAPPEND);
curl_easy_setopt(curl, CURLOPT_NETRC, config->conf&CONF_NETRC);
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION,
config->conf&CONF_FOLLOWLOCATION);
curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, config->conf&CONF_GETTEXT);
#if 0
curl_easy_setopt(curl, CURLOPT_PUT, config->conf&CONF_PUT);
#endif
curl_easy_setopt(curl, CURLOPT_MUTE, config->conf&CONF_MUTE);
curl_easy_setopt(curl, CURLOPT_USERPWD, config->userpwd);
curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, config->proxyuserpwd);
curl_easy_setopt(curl, CURLOPT_RANGE, config->range);
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errorbuffer);
curl_easy_setopt(curl, CURLOPT_TIMEOUT, config->timeout);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, config->postfields);
/* new in libcurl 7.2: */
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, config->postfieldsize);
curl_easy_setopt(curl, CURLOPT_REFERER, config->referer);
curl_easy_setopt(curl, CURLOPT_AUTOREFERER,
config->conf&CONF_AUTO_REFERER);
curl_easy_setopt(curl, CURLOPT_USERAGENT, config->useragent);
curl_easy_setopt(curl, CURLOPT_FTPPORT, config->ftpport);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, config->low_speed_limit);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, config->low_speed_time);
curl_easy_setopt(curl, CURLOPT_RESUME_FROM,
config->use_resume?config->resume_from:0);
curl_easy_setopt(curl, CURLOPT_COOKIE, config->cookie);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, config->headers);
curl_easy_setopt(curl, CURLOPT_HTTPPOST, config->httppost);
curl_easy_setopt(curl, CURLOPT_SSLCERT, config->cert);
curl_easy_setopt(curl, CURLOPT_SSLCERTPASSWD, config->cert_passwd);
if(config->cacert) {
/* available from libcurl 7.5: */
curl_easy_setopt(curl, CURLOPT_CAINFO, config->cacert);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, TRUE);
}
if(config->conf&(CONF_NOBODY|CONF_USEREMOTETIME)) {
/* no body or use remote time */
/* new in 7.5 */
curl_easy_setopt(curl, CURLOPT_FILETIME, TRUE);
}
/* 7.5 news: */
if (config->maxredirs)
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, config->maxredirs);
else
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, DEFAULT_MAXREDIRS);
curl_easy_setopt(curl, CURLOPT_CRLF, config->crlf);
curl_easy_setopt(curl, CURLOPT_QUOTE, config->quote);
curl_easy_setopt(curl, CURLOPT_POSTQUOTE, config->postquote);
curl_easy_setopt(curl, CURLOPT_WRITEHEADER,
config->headerfile?&heads:NULL);
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, config->cookiefile);
curl_easy_setopt(curl, CURLOPT_SSLVERSION, config->ssl_version);
curl_easy_setopt(curl, CURLOPT_TIMECONDITION, config->timecond);
curl_easy_setopt(curl, CURLOPT_TIMEVALUE, config->condtime);
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, config->customrequest);
curl_easy_setopt(curl, CURLOPT_STDERR, config->errors);
/* three new ones in libcurl 7.3: */
curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, config->proxytunnel);
curl_easy_setopt(curl, CURLOPT_INTERFACE, config->iface);
curl_easy_setopt(curl, CURLOPT_KRB4LEVEL, config->krb4level);
if((config->progressmode == CURL_PROGRESS_BAR) &&
!(config->conf&(CONF_NOPROGRESS|CONF_MUTE))) {
/* we want the alternative style, then we have to implement it
ourselves! */
progressbarinit(&progressbar);
curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, myprogress);
curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, &progressbar);
}
res = curl_easy_perform(curl);
if(config->writeout) {
ourWriteOut(curl, config->writeout);
}
/* always cleanup */
curl_easy_cleanup(curl);
if((res!=CURLE_OK) && config->showerror)
fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
if(config->cacert) {
/* available from libcurl 7.5: */
curl_easy_setopt(curl, CURLOPT_CAINFO, config->cacert);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, TRUE);
}
else
fprintf(config->errors, "curl: failed to init libcurl!\n");
if(config->conf&(CONF_NOBODY|CONF_USEREMOTETIME)) {
/* no body or use remote time */
/* new in 7.5 */
curl_easy_setopt(curl, CURLOPT_FILETIME, TRUE);
}
/* 7.5 news: */
if (config->maxredirs)
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, config->maxredirs);
else
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, DEFAULT_MAXREDIRS);
curl_easy_setopt(curl, CURLOPT_CRLF, config->crlf);
curl_easy_setopt(curl, CURLOPT_QUOTE, config->quote);
curl_easy_setopt(curl, CURLOPT_POSTQUOTE, config->postquote);
curl_easy_setopt(curl, CURLOPT_WRITEHEADER,
config->headerfile?&heads:NULL);
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, config->cookiefile);
curl_easy_setopt(curl, CURLOPT_SSLVERSION, config->ssl_version);
curl_easy_setopt(curl, CURLOPT_TIMECONDITION, config->timecond);
curl_easy_setopt(curl, CURLOPT_TIMEVALUE, config->condtime);
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, config->customrequest);
curl_easy_setopt(curl, CURLOPT_STDERR, config->errors);
/* three new ones in libcurl 7.3: */
curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, config->proxytunnel);
curl_easy_setopt(curl, CURLOPT_INTERFACE, config->iface);
curl_easy_setopt(curl, CURLOPT_KRB4LEVEL, config->krb4level);
if((config->progressmode == CURL_PROGRESS_BAR) &&
!(config->conf&(CONF_NOPROGRESS|CONF_MUTE))) {
/* we want the alternative style, then we have to implement it
ourselves! */
progressbarinit(&progressbar);
curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, myprogress);
curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, &progressbar);
}
/* new in libcurl 7.6.2: */
curl_easy_setopt(curl, CURLOPT_TELNETOPTIONS, config->telnet_options);
main_free();
/* new in libcurl 7.7: */
curl_easy_setopt(curl, CURLOPT_RANDOM_FILE, config->random_file);
curl_easy_setopt(curl, CURLOPT_EGDSOCKET, config->egd_file);
res = curl_easy_perform(curl);
if(config->writeout) {
ourWriteOut(curl, config->writeout);
}
if((res!=CURLE_OK) && config->showerror)
fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
if((config->errors != stderr) &&
(config->errors != stdout))
@@ -1887,6 +1895,11 @@ operate(struct Configurable *config, int argc, char *argv[])
if(allocuseragent)
free(config->useragent);
/* cleanup the curl handle! */
curl_easy_cleanup(curl);
main_free(); /* cleanup the winsock stuff for windows */
return res;
}

View File

@@ -75,18 +75,18 @@ for(@out) {
$new = $_;
$outsize += length($new);
$outsize += length($new)+1; # one for the newline
$new =~ s/\\/\\\\/g;
$new =~ s/\"/\\\"/g;
printf("\"%s\\n\"\n", $new);
if($outsize > 10000) {
# gcc 2.96 claims ISO C89 only requires support for 509-character string literals
if($outsize > 500) {
# terminate and make another puts() call here
print ");\n puts(\n";
$outsize=0;
$outsize=length($new)+1;
}
printf("\"%s\\n\"\n", $new);
}
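For illustration only, a hand-written sketch (not output generated by the script above) of the shape this produces: rather than one huge string literal, the help text ends up as several shorter puts() calls, each staying safely below the 509-character minimum that ISO C89 guarantees for string literals.

#include <stdio.h>

int main(void)
{
  /* each chunk stays well below the 509-character C89 minimum */
  puts(
"line one of the help text\n"
"line two of the help text");
  puts(
"line three, emitted from a fresh puts() call once the previous\n"
"chunk approached the limit");
  return 0;
}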

View File

@@ -1,3 +1,3 @@
#define CURL_NAME "curl"
#define CURL_VERSION "7.6.1-pre2"
#define CURL_VERSION "7.7-beta3"
#define CURL_ID CURL_NAME " " CURL_VERSION " (" OS ") "

View File

@@ -25,8 +25,6 @@
#include <string.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
#define _MPRINTF_REPLACE /* we want curl-functions instead of native ones */
#include <curl/mprintf.h>

View File

@@ -7,7 +7,7 @@
The cURL Test Suite
Requires:
perl
perl (and a unix-style shell)
Run:
'make test'. This invokes the 'runtests.pl' perl script. Edit the top
@@ -15,10 +15,13 @@ Run:
The script breaks on the first test that doesn't do OK. Use -a to prevent
the script from aborting on the first error. Run the script with -v for more
verbose output.
verbose output. Use -d to run the test servers with debug output enabled as
well.
Use -s fort shorter output, or pass a string with test numbers to run
specific tests only (like ./runtests.pl "3 4" to test 3 and 4 only)
Use -s for shorter output, or pass test numbers to run specific tests only
(like "./runtests.pl 3 4" to test 3 and 4 only). It also supports test case
ranges with 'to', as in "./runtests.pl 3 to 9", which runs the seven tests from
3 to 9.
Memory:
The test script will check that all allocated memory is freed properly IF
@@ -26,9 +29,24 @@ Memory:
automatically detect if that is the case, and it will use the ../memanalyze
script to analyze the memory debugging output.
Debug:
If a test case fails, you can conveniently get the script to invoke the
debugger (gdb) for you with the server running and the exact same command
line parameters that failed. Just invoke 'runtests.pl <test number> -g' and
then type 'run' in the debugger to perform the command through the
debugger.
If a test case causes a core dump, analyze it by running gdb like:
# gdb ../curl/src core
... and get a stack trace with the gdb command:
(gdb) where
Logs:
All logs are generated in the logs/ subdirectory (it is emptied first
in the runtests.sh script)
in the runtests.pl script)
Data:
All test data are put in the data/ subdirectory.
@@ -45,8 +63,10 @@ Data:
replyN.txt: the full dump the server should reply to curl for this test.
If the final result that curl should've got is not in this
file, you can instead name the file replyN0001.txt. This enables
you to fiddle more. ;-)
file, you can instead name the file replyN0001.txt. This
enables you to fiddle more. ;-) That is, the server sends the
replyN.txt file but checks the result after the test against
the *0001.txt file.
stdoutN.txt: if this file is present, curl's stdout is compared against
this file to see that they're identical. If this is present,
@@ -63,7 +83,7 @@ Data:
of the ftp server. It uses a simple syntax that is yet to be
described here!
FIX:
TODO:
* Make httpserver.pl work when we PUT without Content-Length:
* Add persistent connection support and test cases

View File

@@ -5,53 +5,53 @@ test:
[ -f command1.txt ] || ln -s $(srcdir)/*.txt .
EXTRA_DIST = command1.txt error113.txt name17.txt prot8.txt \
command10.txt error114.txt name18.txt prot9.txt \
command100.txt error115.txt name19.txt reply1.txt \
command101.txt error116.txt name2.txt reply10.txt \
command102.txt error117.txt name20.txt reply100.txt \
command103.txt error118.txt name200.txt reply101.txt \
command104.txt error119.txt name201.txt reply102.txt \
command105.txt error19.txt name21.txt reply103.txt \
command106.txt error20.txt name22.txt reply104.txt \
command107.txt error201.txt name23.txt reply105.txt \
command108.txt error21.txt name24.txt reply106.txt \
command109.txt error23.txt name25.txt reply11.txt \
command11.txt error24.txt name3.txt reply110.txt \
command110.txt error25.txt name4.txt reply110001.txt \
command111.txt ftpd113.txt name5.txt reply110002.txt \
command112.txt ftpd114.txt name6.txt reply12.txt \
command113.txt ftpd115.txt name7.txt reply13.txt \
command114.txt ftpd116.txt name8.txt reply14.txt \
command115.txt ftpd117.txt name9.txt reply15.txt \
command116.txt ftpd118.txt prot1.txt reply16.txt \
command117.txt name1.txt prot10.txt reply17.txt \
command118.txt name10.txt prot100.txt reply2.txt \
command119.txt name100.txt prot101.txt reply200.txt \
command12.txt name101.txt prot102.txt reply22.txt \
command13.txt name102.txt prot103.txt reply24.txt \
command14.txt name103.txt prot104.txt reply25.txt \
command15.txt name104.txt prot105.txt reply3.txt \
command16.txt name105.txt prot106.txt reply4.txt \
command17.txt name106.txt prot107.txt reply5.txt \
command18.txt name107.txt prot108.txt reply6.txt \
command19.txt name108.txt prot109.txt reply7.txt \
command2.txt name109.txt prot11.txt reply8.txt \
command20.txt name11.txt prot110.txt reply9.txt \
command200.txt name110.txt prot112.txt stdin17.txt \
command201.txt name111.txt prot12.txt stdout107.txt \
command21.txt name112.txt prot13.txt stdout108.txt \
command22.txt name113.txt prot14.txt stdout109.txt \
command23.txt name114.txt prot15.txt stdout110.txt \
command24.txt name115.txt prot16.txt stdout112.txt \
command25.txt name116.txt prot17.txt stdout15.txt \
command3.txt name117.txt prot18.txt stdout18.txt \
command4.txt name118.txt prot2.txt upload107.txt \
command5.txt name119.txt prot22.txt upload108.txt \
command6.txt name12.txt prot3.txt upload109.txt \
command7.txt name13.txt prot4.txt upload112.txt \
command8.txt name14.txt prot5.txt \
command9.txt name15.txt prot6.txt \
error111.txt name16.txt prot7.txt \
command10.txt error114.txt name18.txt prot9.txt \
command100.txt error115.txt name19.txt reply1.txt \
command101.txt error116.txt name2.txt reply10.txt \
command102.txt error117.txt name20.txt reply100.txt \
command103.txt error118.txt name200.txt reply101.txt \
command104.txt error119.txt name201.txt reply102.txt \
command105.txt error19.txt name21.txt reply103.txt \
command106.txt error20.txt name22.txt reply104.txt \
command107.txt error201.txt name23.txt reply105.txt \
command108.txt error21.txt name24.txt reply106.txt \
command109.txt error23.txt name25.txt reply11.txt \
command11.txt error24.txt name3.txt reply110.txt \
command110.txt error25.txt name4.txt reply110001.txt \
command111.txt ftpd113.txt name5.txt reply110002.txt \
command112.txt ftpd114.txt name6.txt reply12.txt \
command113.txt ftpd115.txt name7.txt reply13.txt \
command114.txt ftpd116.txt name8.txt reply14.txt \
command115.txt ftpd117.txt name9.txt reply15.txt \
command116.txt ftpd118.txt prot1.txt reply16.txt \
command117.txt name1.txt prot10.txt reply17.txt \
command118.txt name10.txt prot100.txt reply2.txt \
command119.txt name100.txt prot101.txt reply200.txt \
command12.txt name101.txt prot102.txt reply22.txt \
command13.txt name102.txt prot103.txt reply24.txt \
command14.txt name103.txt prot104.txt reply25.txt \
command15.txt name104.txt prot105.txt reply3.txt \
command16.txt name105.txt prot106.txt reply4.txt \
command17.txt name106.txt prot107.txt reply5.txt \
command18.txt name107.txt prot108.txt reply6.txt \
command19.txt name108.txt prot109.txt reply7.txt \
command2.txt name109.txt prot11.txt reply8.txt \
command20.txt name11.txt prot110.txt reply9.txt \
command200.txt name110.txt prot112.txt stdin17.txt \
command201.txt name111.txt prot12.txt stdout107.txt \
command21.txt name112.txt prot13.txt stdout108.txt \
command22.txt name113.txt prot14.txt stdout109.txt \
command23.txt name114.txt prot15.txt stdout110.txt \
command24.txt name115.txt prot16.txt stdout112.txt \
command25.txt name116.txt prot17.txt stdout15.txt \
command3.txt name117.txt prot18.txt stdout18.txt \
command4.txt name118.txt prot2.txt upload107.txt \
command5.txt name119.txt prot22.txt upload108.txt \
command6.txt name12.txt prot3.txt upload109.txt \
command7.txt name13.txt prot4.txt upload112.txt \
command8.txt name14.txt prot5.txt \
command9.txt name15.txt prot6.txt \
error111.txt name16.txt prot7.txt \
command26.txt prot26.txt command27.txt prot27.txt \
name26.txt reply26.txt name27.txt stdout27.txt \
command28.txt name28.txt prot28.txt reply28.txt \
@@ -62,4 +62,8 @@ command30.txt name29.txt prot29.txt reply29.txt \
command31.txt name32.txt reply31.txt reply32.txt \
command32.txt prot31.txt reply310001.txt reply320001.txt \
name31.txt prot32.txt reply310002.txt reply320002.txt \
command33.txt extra33.txt name33.txt prot33.txt reply33.txt
command33.txt extra33.txt name33.txt prot33.txt reply33.txt \
command34.txt prot34.txt reply340001.txt name34.txt reply34.txt \
command35.txt name35.txt prot35.txt reply35.txt \
command36.txt error36.txt name36.txt reply36.txt \
command37.txt name37.txt prot37.txt reply37.txt

View File

@@ -1,4 +1,4 @@
http://%HOSTIP:%HOSTPORT/want/25 -o - -o -
http://%HOSTIP:%HOSTPORT/want/26 -o - -o -

tests/data/command34.txt Normal file
View File

@@ -0,0 +1 @@
http://%HOSTIP:%HOSTPORT/34

tests/data/command35.txt Normal file
View File

@@ -0,0 +1 @@
http://%HOSTIP:%HOSTPORT/want/35 --include --head

tests/data/command36.txt Normal file
View File

@@ -0,0 +1 @@
http://%HOSTIP:%HOSTPORT/36

tests/data/command37.txt Normal file
View File

@@ -0,0 +1 @@
http://uUsSeErrr:pppasswrd@%HOSTIP:%HOSTPORT/37

tests/data/error36.txt Normal file
View File

@@ -0,0 +1 @@
26

View File

@@ -1 +1 @@
HTTP HEAD
HTTP HEAD with Connection: close

View File

@@ -1 +1 @@
looping HTTP Location: following with --max-redirs
looping HTTP Location: following with --max-redirs, no persistance

tests/data/name34.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with chunked Transfer-Encoding

tests/data/name35.txt Normal file
View File

@@ -0,0 +1 @@
HTTP HEAD without Connection: close

tests/data/name36.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with badly formatted chunked Transfer-Encoding

tests/data/name37.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with name+password in the URL

View File

@@ -1,4 +1,4 @@
GET /1 HTTP/1.0
GET /1 HTTP/1.1
User-Agent: curl/7.4.2-pre3 (sparc-sun-solaris2.7) libcurl 7.4.2-pre3 (SSL 0.9.6)
Host: 127.0.0.1:8999
Pragma: no-cache

Some files were not shown because too many files have changed in this diff.