Compare commits

...

135 Commits

Author SHA1 Message Date
Daniel Stenberg
ea409d0374 7.7-beta5 commit 2001-03-19 08:42:00 +00:00
Daniel Stenberg
eaaa1a1fd4 test case 39 added, HTTP location and continue 2001-03-19 08:36:08 +00:00
Daniel Stenberg
78b4851da1 Added support for HTTP code 100 continue, as 8.2.3 in RFC2616 defines 2001-03-19 07:47:57 +00:00
Daniel Stenberg
38c47803dd detect if chunked transfers are aborted 2001-03-16 15:45:12 +00:00
Daniel Stenberg
455663ba5e corrected the close to sclose() so that the memdebug stuff works 2001-03-16 15:44:38 +00:00
Daniel Stenberg
efb5d9a403 new directories 2001-03-16 15:22:51 +00:00
Daniel Stenberg
b1a5208e6b removed the CURL_SEPARATORS define 2001-03-16 15:21:26 +00:00
Daniel Stenberg
e6dacd92ec re-generated with the memdebug.h include 2001-03-16 15:20:36 +00:00
Daniel Stenberg
952b3a2c0f added memdebug.h include 2001-03-16 15:19:36 +00:00
Daniel Stenberg
721f9bca84 moved to ../../php/examples/ 2001-03-16 13:45:42 +00:00
Daniel Stenberg
ad4d5fabf8 the PHP examples are moved 2001-03-16 13:44:57 +00:00
Daniel Stenberg
aa860990ad fix the new makefiles in php/ and perl/ 2001-03-16 13:35:45 +00:00
Daniel Stenberg
0fa9135d9f use perl in two ways 2001-03-16 13:35:11 +00:00
Daniel Stenberg
8f0114a4dd Short about the perl interface 2001-03-16 13:34:08 +00:00
Daniel Stenberg
5980c2977b filled in 2001-03-16 13:30:56 +00:00
Daniel Stenberg
19f8d71508 for the php examples 2001-03-16 13:29:57 +00:00
Daniel Stenberg
6f3bccd911 PHP examples 2001-03-16 13:28:11 +00:00
Daniel Stenberg
96f81a5c4a new PHP section 2001-03-16 13:27:42 +00:00
Daniel Stenberg
ca05d1b59c a perl script that can be used to mirror all curl archives 2001-03-16 13:10:42 +00:00
Daniel Stenberg
895dc5e530 Added README for releases 2001-03-16 13:09:21 +00:00
Daniel Stenberg
bcc6ca6fd1 Added to build proper releases 2001-03-16 13:09:05 +00:00
Daniel Stenberg
d538241a58 Georg Horn's Curl::easy interface for perl 2001-03-16 13:05:39 +00:00
Daniel Stenberg
71b4b2ffa9 moved to contrib/ 2001-03-16 13:05:18 +00:00
Daniel Stenberg
65b4a63f56 moved here from ../ 2001-03-16 13:04:57 +00:00
Daniel Stenberg
ecbee01f4b moved the documentation item to 7.8, it is rather important to have things
documented
2001-03-15 14:45:03 +00:00
Daniel Stenberg
34fed76a35 updated to have the windows builds instructions use the root Makefile that
is delivered with each source archive
2001-03-15 14:44:01 +00:00
Daniel Stenberg
0adf0cfde7 connection timeouts added 2001-03-15 14:38:54 +00:00
Daniel Stenberg
d6c456db85 added connect timeout support 2001-03-15 14:38:30 +00:00
Daniel Stenberg
36c88343d3 Added --connect-timeout support 2001-03-15 14:38:03 +00:00
Daniel Stenberg
2360e5ce12 Added CURLOPT_CONNECTTIMEOUT 2001-03-15 14:37:41 +00:00
Daniel Stenberg
d445eac162 connection timeout is now supported 2001-03-15 14:37:17 +00:00
Daniel Stenberg
e0a6d20e20 Jörn's win32-fix to make it work better 2001-03-15 12:34:40 +00:00
Daniel Stenberg
3bb979b897 corrected it, did I mention IPv6 with HTTP proxy? 2001-03-15 09:14:43 +00:00
Daniel Stenberg
010daec776 Put more concentrated unix install help already at the top, with a note that
you might need to be root to use 'make install'.
2001-03-15 08:38:15 +00:00
Daniel Stenberg
e2b0ad8429 added some text for -d that says it "emulates filling in HTML forms" as that
is what most people will use -d for
2001-03-14 19:48:29 +00:00
Daniel Stenberg
6eed95103a ipv6 adjustments 2001-03-14 18:26:54 +00:00
Daniel Stenberg
4eb2a165e8 removed a bunch of warnings for IPv6-compiles 2001-03-14 18:24:07 +00:00
Daniel Stenberg
b7fc1e45b5 now works with IPv6 and HTTP proxy 2001-03-14 18:18:02 +00:00
Daniel Stenberg
3395a2fa9e netrc fix 2001-03-14 16:59:49 +00:00
Daniel Stenberg
a564a54e21 hm, don't free the home dir and append the .netrc part properly 2001-03-14 16:12:47 +00:00
Daniel Stenberg
92186dc3d3 checks for a few functions and include files more for the new getpwuid()
stuff in lib/netrc.c
2001-03-14 16:05:31 +00:00
Daniel Stenberg
7bd6507eec uses getpwuid() to find user's home dir 2001-03-14 16:05:00 +00:00
Daniel Stenberg
d4cc810de3 added a missing \ 2001-03-14 14:35:35 +00:00
Daniel Stenberg
bea7bbee1b always append the incoming request to the server.input file, it allows
the mainscript to verify a whole series of requests
2001-03-14 14:26:56 +00:00
Daniel Stenberg
fe64570d5d updated to work with the modified http server 2001-03-14 14:26:16 +00:00
Daniel Stenberg
df6ad8d8d6 Added test case 38 2001-03-14 14:25:57 +00:00
Daniel Stenberg
f8e1fc32de Edin Kadribaic's bug report #408488 forced a rearrange of two struct fields
from urldata to connectdata, quite correctly.
2001-03-14 14:11:11 +00:00
Daniel Stenberg
8c6d56f1f9 Added the --egd-file and --random-file options 2001-03-14 11:47:55 +00:00
Daniel Stenberg
1841c8ee6a curl 7.7 beta 3 2001-03-14 11:25:44 +00:00
Daniel Stenberg
70793595fe removed the two unnecessary include files 2001-03-14 10:27:13 +00:00
Daniel Stenberg
28a8e1602d ssluse fixed, various win32 fixes 2001-03-14 10:21:52 +00:00
Daniel Stenberg
cce05b9138 Björn Stenberg corrected the silly '(void)data' usage when SSL is not
used
2001-03-14 10:15:42 +00:00
Daniel Stenberg
72a7fd4dc7 Jörn's updated file 2001-03-14 10:06:23 +00:00
Daniel Stenberg
9a6a476cf5 the URL escape/unescape functions are also public but undocumented 2001-03-14 08:59:34 +00:00
Daniel Stenberg
5d0efedd2d First Jörn's updates were applied, then
my take at removing the private functions from the list, then I renamed
the *str(n)equal functions...
2001-03-14 08:58:36 +00:00
Daniel Stenberg
a426818a78 no longer includes the curl/types.h and curl/easy.h include files
explicitly, as they're taken care of indirectly by curl/curl.h these
days.
2001-03-14 08:55:17 +00:00
Daniel Stenberg
bfe413d8bd increased the 'current' number for the interface 2001-03-14 08:54:18 +00:00
Daniel Stenberg
dbbd20646f Curl_str(n)equal renamed to curl_str(n)equal 2001-03-14 08:53:31 +00:00
Daniel Stenberg
b8fe4deb13 documented the undocumented public functions in libcurl 2001-03-14 08:51:04 +00:00
Daniel Stenberg
332a016e3c chunked bugfix, Jörn's fixes, the interface number increase 2001-03-14 08:49:11 +00:00
Daniel Stenberg
3738e4bdc0 The Curl_* prefixes are now changed for curl_* ones, as these two functions
are used externally and thus are public symbols.
2001-03-14 08:47:56 +00:00
Daniel Stenberg
3201d2dafa Jörn added "#define socklen_t int" 2001-03-14 08:28:54 +00:00
Daniel Stenberg
0a1e002ca4 Jörn fixed it to compile on win32 again 2001-03-14 08:28:19 +00:00
Daniel Stenberg
9195bb64d4 Jörn Hartroth added a set of files 2001-03-14 08:23:51 +00:00
Daniel Stenberg
11ee547a0e Jörn Hartroth fixed a bad #endif placement 2001-03-14 08:20:41 +00:00
Daniel Stenberg
147de35d41 re-added the default switch for weird states 2001-03-13 23:29:53 +00:00
Daniel Stenberg
e16e9b91ae removed the random seeding and persistent stuff, as both are already in
this version!
2001-03-13 22:31:56 +00:00
Daniel Stenberg
f9cde0646f Added a failf() error message when the chunked read returns failure 2001-03-13 22:20:14 +00:00
Daniel Stenberg
195233ed5c updated the chunked state-machine to deal with the trailing CRLF that comes
after the data part
2001-03-13 22:16:42 +00:00
Daniel Stenberg
048e654514 made 'X to Y' sequences not include X twice 2001-03-13 22:14:53 +00:00
Daniel Stenberg
dfbd45142d corrected the chunked format 2001-03-13 22:13:06 +00:00
Daniel Stenberg
ff681f7bfd 7.7 beta 2 fixes 2001-03-13 15:44:31 +00:00
Daniel Stenberg
60bbb64a81 EXTRA_DIST got too long, I shortened it now but we have to do something
else as it will grow a lot more...
2001-03-13 13:31:14 +00:00
Daniel Stenberg
c622f2bb4e failf() now respects the mute flag 2001-03-13 13:22:58 +00:00
Daniel Stenberg
cd59f13da6 Guenole Bescon's bug found on march 8 is added 2001-03-13 13:14:21 +00:00
Daniel Stenberg
11d718bf52 exchanged I and me to we and us in a lot of places
updated for persistent connections and 7.7
2001-03-13 11:47:30 +00:00
Daniel Stenberg
8e8846d876 Added test case 37, HTTP GET with name+password in the URL 2001-03-13 09:44:09 +00:00
Daniel Stenberg
7d562bb685 a whole new section on persistent connections and how they're treated
internally
2001-03-13 08:16:54 +00:00
Daniel Stenberg
20ddd35669 we speak HTTP 1.1 now
more bragging about the portability
2001-03-13 08:16:25 +00:00
Daniel Stenberg
063f88cd14 close policies 2001-03-13 07:59:19 +00:00
Daniel Stenberg
87b0b7cab9 initial close policy support 2001-03-13 07:54:18 +00:00
Daniel Stenberg
70d0d9d4da Added 'created' to the connectdata struct to hold the creation date, to
be used for the close policy decision
2001-03-13 07:53:59 +00:00
Daniel Stenberg
4ae3bd71ea Curl_tvnow is now properly declared with (void) 2001-03-13 07:53:06 +00:00
Daniel Stenberg
a9390665b8 curl_getinfo is removed, not a public function 2001-03-13 07:46:19 +00:00
Daniel Stenberg
fb7a6e3423 added --random-file and --egd-file to the command line client 2001-03-12 16:02:29 +00:00
Daniel Stenberg
cc99e3f7de Added the two new seeding options 2001-03-12 15:52:18 +00:00
Daniel Stenberg
e6b40bb6ac two new random seed options for the ssl config struct 2001-03-12 15:47:41 +00:00
Daniel Stenberg
f2fd1b8856 two new random seed options: CURLOPT_RANDOM_FILE and CURLOPT_EGDSOCKET 2001-03-12 15:47:17 +00:00
Daniel Stenberg
cb4efcf275 better chunked error detection 2001-03-12 15:29:04 +00:00
Daniel Stenberg
56a27d608a Added test case 36:
[HTTP GET with badly formatted chunked Transfer-Encoding]
2001-03-12 15:27:01 +00:00
Daniel Stenberg
46c9075eab updated the comment for the chunked reading 2001-03-12 15:21:11 +00:00
Daniel Stenberg
d95fa648e9 made it return illegal hex in case no hexadecimal digit was read when at
least one was expected
2001-03-12 15:20:35 +00:00
Daniel Stenberg
563ad213dc added an error code for illegal hex values in the chunked stream 2001-03-12 15:20:02 +00:00
Daniel Stenberg
0121d7d731 Added new libcurl options in include/curl/curl.h, they're documented in
curl_easy_setopt.3 and they're partly implemented in lib/url.c

Slowly, we're getting there...
2001-03-12 15:11:38 +00:00
Daniel Stenberg
8495fac1c5 Added options for the persistent support, they're also documented in
curl_easy_setopt.3 now
2001-03-12 15:06:29 +00:00
Daniel Stenberg
38c349f751 support for a few new libcurl 7.7 CURLOPT_* options added 2001-03-12 15:05:54 +00:00
Daniel Stenberg
542df800ab Added four new options that come with the new persistent support:
CURLOPT_MAXCONNECTS, CURLOPT_CLOSEPOLICY, CURLOPT_FRESH_CONNECT and
CURLOPT_FORBID_REUSE
2001-03-12 14:54:00 +00:00
Daniel Stenberg
3e88b1cac5 the client is adjusted to work with persistent curl handles, and *gee* it
seems to be working!!!
2001-03-12 13:59:38 +00:00
Daniel Stenberg
d774b10afb Added infof() calls for persistent connection info, we are very likely to
need these at least for debugging 7.7 and probably later as well...
2001-03-12 13:58:03 +00:00
Daniel Stenberg
b449b94393 moved the libcurl init call 2001-03-12 13:57:02 +00:00
Daniel Stenberg
a6cb9b08b2 persistent updates 2001-03-12 13:55:06 +00:00
Daniel Stenberg
440a3101d0 added a note about persistent connections through HTTP proxies 2001-03-12 13:54:46 +00:00
Daniel Stenberg
9778a5356b Added some persistent notes 2001-03-12 13:54:10 +00:00
Daniel Stenberg
de7dcdbc54 modified to make the curl client with persistent connection support behave
correctly
2001-03-12 13:47:07 +00:00
Daniel Stenberg
070968abbc include the failed test case numbers in the end summary 2001-03-12 13:46:23 +00:00
Daniel Stenberg
e97fc2aab5 Added description of the new test case ranges support 2001-03-12 12:58:57 +00:00
Daniel Stenberg
a23ac24192 made it support test case ranges on the command line, specified as
"X to Y", where X is smaller than Y.
2001-03-12 12:58:30 +00:00
Daniel Stenberg
9ee14644a7 adjusted to work with the HTTP 1.1-speaking libcurl 2001-03-12 12:45:12 +00:00
Daniel Stenberg
c576e114b9 output the protocol data to stderr when verbose is on 2001-03-12 12:44:44 +00:00
Daniel Stenberg
639a7982ba server problems,
libcurl *works* persistent over HTTP proxy!!!!
2001-03-12 10:18:01 +00:00
Daniel Stenberg
5bbe189420 modified Curl_disconnect() so that it unlinks itself from the data struct,
it saves me from more mistakes when the connectindex is -1 ... also, there's
no point in having its parent do it as all parents would do it anyway.
2001-03-12 10:13:42 +00:00
Daniel Stenberg
93ff159e32 split up the big printf() into several ones to never use strings longer
than 509 letters (as newer gcc warns on with -Wall)
2001-03-12 09:47:23 +00:00
Daniel Stenberg
8eb8a0a8e4 bugfix: don't use the connectindex if it is -1 2001-03-12 09:44:57 +00:00
Daniel Stenberg
a4af638867 added persistent connection details 2001-03-12 09:44:08 +00:00
Daniel Stenberg
75a9a87ec2 replaced I and my with we and us 2001-03-12 09:43:43 +00:00
Daniel Stenberg
b5ba011110 updated 2001-03-12 09:42:22 +00:00
Daniel Stenberg
e9b763ff05 use the new name and hostname even though an old connection is reused, since
we can re-use a proxy connection that actually has different host names on
the same connection
2001-03-09 16:50:08 +00:00
Daniel Stenberg
ac0bad2433 remake Host: for each connection and it'll work with proxies too 2001-03-09 16:48:18 +00:00
Daniel Stenberg
67d5c0a970 for HTTP/1.0 we default to non keep-alive connections, but when we get a
1.0-reply from a proxy we use and the Proxy-Connection: keep-alive header
is used, we switch it on and live happily ever after
2001-03-09 16:02:59 +00:00
Daniel Stenberg
580896d615 Added httpversion to the progress struct, we do read it, we can just as well
store it.
2001-03-09 15:58:36 +00:00
Daniel Stenberg
11693c0faa the socklen_t check is more involved now, but works on linux at least 2001-03-09 15:38:59 +00:00
Daniel Stenberg
26cd8eda4a Added socklen_t 2001-03-09 15:24:33 +00:00
Daniel Stenberg
8cd3f44040 added a check for socklen_t
removed the tiny/Makefile that was added accidentally before
2001-03-09 15:21:00 +00:00
Daniel Stenberg
2b30bfc349 all comments for the former public "low level" interface have been removed
since they were out-of-date and not correct anymore.

moved around some struct fields
2001-03-09 15:19:42 +00:00
Daniel Stenberg
8ec4dba599 removed handles and states from the main structs
renamed prefixes from curl_ to Curl_
made persistent connections work with http proxies (at least partly)
2001-03-09 15:18:25 +00:00
Daniel Stenberg
1efec6572e curl_transfer became Curl_perform() to better match the public name and
use the correct prefix
2001-03-09 15:17:09 +00:00
Daniel Stenberg
781dd7a9bf prefix changes curl_ to Curl_
made it work (partly) with persistent connections for HTTP/1.0 replies
moved the 'newurl' struct field for Location: to the connectdata struct
2001-03-09 15:16:28 +00:00
Daniel Stenberg
beb8761b22 #include <string.h> removed a warning 2001-03-09 15:14:51 +00:00
Daniel Stenberg
071c7de9fe removed curl_read() and curl_write() - they weren't used and the public
"low leve" interface is dumped
2001-03-09 15:14:22 +00:00
Daniel Stenberg
3e7ebcd051 uses socklen_t now 2001-03-09 15:13:34 +00:00
Daniel Stenberg
c67952fc5c curl_ prefix modified to Curl_ 2001-03-09 15:13:11 +00:00
Daniel Stenberg
7d7c24f915 accept() and getsockname() now use socklen_t types, as that was just added
to configure
2001-03-09 15:12:22 +00:00
Daniel Stenberg
0dc8c4d451 use unsigned int hex to receive the hex digit in, caused a warning with
-Wall and a new gcc
2001-03-09 15:11:39 +00:00
Daniel Stenberg
9cf4434ae2 Modified to use Curl_* functions instead of curl_* ones 2001-03-09 15:10:58 +00:00
Daniel Stenberg
8ccd8b6dbc only generate maximum 509 characters in each string 2001-03-09 13:11:28 +00:00
122 changed files with 2763 additions and 1242 deletions

161
CHANGES
View File

@@ -6,11 +6,172 @@
History of Changes
Version 7.7-beta5
Daniel (19 March 2001)
- Georg Ottinger reported problems with using -C together with -L in the sense
that the -C info got lost when it was redirected. I could not repeat this
problem on the 7.7 branch, which is why I leave this for the moment. Test case
39 was added to do exactly this, and it seems to do the right thing.
- Christian Robottom Reis reported how his 7.7 beta didn't successfully do
form posts as elegantly as 7.6.1 did. Indeed, this was a flaw in the header
engine, as HTTP 1.1 has introduced a new 100 "transient" return code for PUT
and POST operations that I need to add support for. Section 8.2.3 in RFC2616
has all the details. Seems to work now!
Daniel (16 March 2001)
- After having experienced another machine break-down, we're back.
- Georg Horn's perl interface Curl::easy is now included in the curl release
archive. The perl/ directory is now present. Please help me with docs,
examples and updates you think fit.
- Made a new php/ directory in the release archive and moved the PHP examples
into a subdirectory in there. Not much PHP info yet, but I plan to. Please
help me here as well!
- Made libcurl return error if a transfer is aborted in the middle of a
"chunk". It actually enables libcurl to discover premature transfer aborts
even if the Content-Length: size is unknown.
Daniel (15 March 2001)
- Added --connect-timeout to curl, which sets the new CURLOPT_CONNECTTIMEOUT
option in libcurl. It limits the time curl is allowed to spend in the
connection phase. This differs from -m/--max-time that limits the entire
file transfer operation. Requested by Larry Fahnoe and others.
I also updated the curl.1 and curl_easy_setopt.3 man pages and removed the
item from the TODO.
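  A minimal sketch of how an application might set the new option (the URL is
  just a placeholder and error checking is left out for brevity):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        /* give up if the connect phase alone takes more than 30 seconds;
           -m/--max-time instead limits the entire transfer */
        curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 30L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }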
Version 7.7-beta4
Daniel (14 March 2001)
- Made curl grok IPv6 with HTTP proxies and got everything to compile nicely
again when ENABLE_IPV6 is set.
I need to remake things in the test suite. I can't test the FTP parts with
curl built for IPv6 as it uses a different set of FTP commands then!
- I fell onto a bug report on php.net (posted by Lars Torben Wilson) that was
a report meant for our project. Anyway, it said the .netrc parsing didn't
work as supposed, and as I agreed with Lars, I made the netrc parser use
getpwuid() to figure out the home directory of the effective user and try
that netrc. It still uses the environment variable HOME for those that don't
have that function or if the user doesn't return valid pwd info.
- Edin Kadribaic posted a bug report where he got a crash when a fetch with
user+password in the URL followed a Location: to a second URL (absolute,
without name+password). This bug has been around for a long while and
crashes due to a read at address zero. Fixed now. Wrote test case 38, that
tests this.
- Modified the test suite's httpserver slightly to append all client request
data to its log file so that the test script now better can verify a range
of requests and not only the last one, as it did previously.
- Updated the curl man page with --random-file and --egd-file details.
Version 7.7-beta3
Daniel (14 March 2001)
- Björn Stenberg provided similar fixes as Jörn did and some additional patches
for non-SSL compiles.
- I increased the interface number for libcurl as I've removed the low level
functions from the interface. I also took this opportunity to rename the
Curl_strequal function to curl_strequal and Curl_strnequal to
curl_strnequal, as they're public libcurl functions (even if they're still
undocumented).
This will make older programs not capable of using the new libcurl with
just a drop-in replacement.
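  For illustration, this is how the renamed public functions are meant to be
  called from application code (the arguments here are made up; both return
  non-zero on a case-insensitive match):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      /* full-string and length-limited case-insensitive comparisons */
      printf("%d\n", curl_strequal("HTTP", "http"));
      printf("%d\n", curl_strnequal("Content-Type", "content-length", 8));
      return 0;
    }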
- Jörn Hartroth updated stuff for win32 compiles:
o config-win32.h was fixed for socklen_t
o lib/ssluse.c had a bad #endif placement
o lib/file.c was made to compile on win32 again
o lib/Makefile.m32 was updated with the new files
o lib/libcurl.def matches the current interface state
Daniel (13 March 2001)
- It only took an hour or so before Jörn Hartroth found a problem in the
chunked transfer-encoding. Given his fine example-site, I could easily spot
the problem and when I re-read the spec (the part I have pasted in the top
of the http_chunks.h file), I realized I had made my state-machine slightly
wrong and didn't expect/handle the trailing CRLF that comes after the data
in each chunk (and those extra two bytes sure feel wasted).
Had to modify test case 34 to match this as well.
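  For reference, this is the framing the state-machine has to accept (as laid
  out in section 3.6.1 of RFC 2616); note the CRLF that terminates the data
  part of every chunk:

    /* illustration only: a chunked-encoded body carrying the bytes "hello" */
    static const char chunked_body[] =
      "5\r\n"       /* chunk size, in hex */
      "hello\r\n"   /* 5 bytes of data, followed by the trailing CRLF */
      "0\r\n"       /* a zero-sized chunk marks the end of the body */
      "\r\n";       /* final CRLF after the (empty) trailer */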
Version 7.7-beta2
Daniel (13 March 2001)
- Added the policy stuff to the curl_easy_setopt man page for the two supported
policies.
- Implemented some support for the CURLOPT_CLOSEPOLICY option. The policies
CURLCLOSEPOLICY_LEAST_RECENTLY_USED and CURLCLOSEPOLICY_OLDEST are now
supported, and the "least recently used" is used as default if no policy
is chosen.
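  A sketch of how an application is expected to pick a policy, assuming 'curl'
  is an easy handle returned by curl_easy_init():

    #include <curl/curl.h>

    static void pick_close_policy(CURL *curl)
    {
      /* when the connection cache is full, close the least recently used
         connection; this is also what libcurl defaults to */
      curl_easy_setopt(curl, CURLOPT_CLOSEPOLICY,
                       (long)CURLCLOSEPOLICY_LEAST_RECENTLY_USED);
    }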
Daniel (12 March 2001)
- Added CURLOPT_RANDOM_FILE and CURLOPT_EGDSOCKET to libcurl for seeding the
SSL random engine. The random seeding support was also brought to the curl
client with the new options --random-file <file> and --egd-file <file>. I
need some people to really test this to know they work as supposed. Remember
that libcurl now informs (if verbose is on) if the random seed is considered
weak (HTTPS connections).
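  A sketch of how the new options can be used, assuming 'curl' is an easy
  handle and that the two paths below are just example locations for a random
  file and an EGD socket on the local system:

    #include <curl/curl.h>

    static void seed_ssl_random(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_RANDOM_FILE, "/dev/urandom");
      curl_easy_setopt(curl, CURLOPT_EGDSOCKET, "/var/run/egd-pool");
    }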
- Made the chunked transfer-encoding engine detect badly formatted data length
and return error if so (we can't possibly extract sensible data if this is
the case). Added a test case that detects this. Number 36. Now there are 60
test cases.
- Added 5 new libcurl options to curl/curl.h that can be used to control the
persistent connection support in libcurl. They're also documented (fairly
thoroughly) in the curl_easy_setopt.3 man page. Three of them are now
implemented, although not really tested at this point... Anyway, the new
implemented options are named CURLOPT_MAXCONNECTS, CURLOPT_FRESH_CONNECT,
CURLOPT_FORBID_REUSE. The ones still left to write code for are:
CURLOPT_CLOSEPOLICY and its related option CURLOPT_CLOSEFUNCTION.
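  A sketch of how the three implemented options might be combined, assuming
  'curl' is an easy handle that gets reused for a series of transfers:

    #include <curl/curl.h>

    static void tune_connection_reuse(CURL *curl)
    {
      /* keep at most 10 connections alive in this handle's cache */
      curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 10L);
      /* make the next transfer open a brand new connection... */
      curl_easy_setopt(curl, CURLOPT_FRESH_CONNECT, 1L);
      /* ...and close it again once that transfer is done */
      curl_easy_setopt(curl, CURLOPT_FORBID_REUSE, 1L);
    }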
- Made curl (the actual command line tool) use the new libcurl 7.7 persistent
connection support by re-using the same curl handle for every specified file
transfer and after some more test case tweaking we have 100% test case OK.
I made some test cases return HTTP/1.0 now to make sure that works as well.
- Had to add 'Connection: close' to the headers of a bunch of test cases so
that curl behaves "old-style" since the test http server doesn't do multiple
connections... Now I get 100% test case OK.
- The curl.haxx.se site, the main curl mailing list and my personal email are
all dead today due to power blackout in the area where the main servers are
located. Horrible.
- I've made persistence work over a squid HTTP proxy. I find it disturbing
that it uses headers that aren't present in any HTTP standard though
(Proxy-Connection:) and that makes me feel that I'm now on the edge of what
the standard actually defines. I need to get this code exercised on a lot
of different HTTP proxies before I feel safe.
Now I'm facing the problem with my test suite servers (both FTP and HTTP)
not supporting persistent connections and libcurl is doing them now. I have
to fix the test servers to get all the test cases to do OK.
Daniel (8 March 2001)
- Guenole Bescon reported that libcurl did output errors to stderr even if
MUTE and NOPROGRESS were set. It turned out to be a bug that happens if
there's an error and no ERRORBUFFER is set. This is now corrected.
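  A sketch of the intended usage, assuming 'curl' is an easy handle: with an
  ERRORBUFFER set, failure descriptions end up in the buffer instead of on
  stderr:

    #include <curl/curl.h>

    static char errbuf[CURL_ERROR_SIZE];

    static void silence_output(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
      curl_easy_setopt(curl, CURLOPT_MUTE, 1L);
      curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 1L);
    }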
Version 7.7-beta1
Daniel (8 March 2001)
- "Transfer-Encoding: chunked" is no longer any trouble for libcurl. I've
added two source files and I've run some test downloads that look fine.
- HTTP HEAD works too, even on 1.1 servers.
Daniel (5 March 2001)
- The current 57 test cases now pass OK. It would suggest that libcurl works
using the old-style with one connection per handle. The test suite doesn't

View File

@@ -10,7 +10,7 @@ memanalyze.pl is for analyzing the output generated by curl if -DMALLOCDEBUG
Makefile.dist is included as the root Makefile in distribution archives
-perl/ is a subdirectory with various perl scripts
+perl/contrib/ is a subdirectory with various perl scripts
To build after having extracted everything from CVS, do this:

View File

@@ -8,7 +8,7 @@ EXTRA_DIST = \
CHANGES LEGAL maketgz MITX.txt MPL-1.1.txt \
config-win32.h reconf packages/README Makefile.dist
-SUBDIRS = docs lib src include tests packages
+SUBDIRS = docs lib src include tests packages perl php
# create a root makefile in the distribution:
dist-hook:

View File

@@ -30,16 +30,16 @@ ssl:
make
borland:
-cd lib; make -f Makefile.b32
+cd lib & make -f Makefile.b32
-cd src; make -f Makefile.b32
+cd src & make -f Makefile.b32
mingw32:
-cd lib; make -f Makefile.m32
+cd lib & make -f Makefile.m32
-cd src; make -f Makefile.m32
+cd src & make -f Makefile.m32
mingw32-ssl:
-cd lib; make -f Makefile.m32 SSL=1
+cd lib & make -f Makefile.m32 SSL=1
-cd src; make -f Makefile.m32 SSL=1
+cd src & make -f Makefile.m32 SSL=1
vc:
cd lib

View File

@@ -43,6 +43,9 @@
/* Define this to 'int' if ssize_t is not an available typedefed type */
#undef ssize_t
+/* Define this to 'int' if socklen_t is not an available typedefed type */
+#undef socklen_t
/* Define this as a suitable file to read random data from */
#undef RANDOM_FILE

View File

@@ -26,6 +26,9 @@
/* Define this to 'int' if ssize_t is not an available typedefed type */
#define ssize_t int
+/* Define this to 'int' if socklen_t is not an available typedefed type */
+#define socklen_t int
/* Define if you have the ANSI C header files. */
#define STDC_HEADERS 1

View File

@@ -673,6 +673,7 @@ AC_CHECK_HEADERS( \
winsock.h \
time.h \
io.h \
+pwd.h
)
dnl Check for libz header
@@ -693,6 +694,28 @@ AC_CHECK_SIZEOF(long long, 4)
# check for ssize_t
AC_CHECK_TYPE(ssize_t, int)
dnl
dnl We can't just AC_CHECK_TYPE() for socklen_t since it doesn't appear
dnl in the standard headers. We egrep for it in the socket headers and
dnl if it is used there we assume we have the type defined, otherwise
dnl we search for it with AC_CHECK_TYPE() the "normal" way
dnl
if test "$ac_cv_header_sys_socket_h" = "yes"; then
AC_MSG_CHECKING(for socklen_t in sys/socket.h)
AC_EGREP_HEADER(socklen_t,
sys/socket.h,
socklen_t=yes
AC_MSG_RESULT(yes),
AC_MSG_RESULT(no))
fi
if test "$socklen_t" != "yes"; then
# check for socklen_t the standard way if it wasn't found before
AC_CHECK_TYPE(socklen_t, int)
fi
dnl Get system canonical name
AC_CANONICAL_HOST
AC_DEFINE_UNQUOTED(OS, "${host}")
@@ -724,7 +747,9 @@ AC_CHECK_FUNCS( socket \
sigaction \
signal \
getpass_r \
-strlcat
+strlcat \
+getpwuid \
+geteuid
)
dnl removed 'getpass' check on October 26, 2000
@@ -765,5 +790,9 @@ AC_OUTPUT( Makefile \
packages/Linux/RPM/Makefile \
packages/Linux/RPM/curl.spec \
packages/Linux/RPM/curl-ssl.spec \
-tiny/Makefile )
+perl/Makefile \
+perl/Curl_easy/Makefile \
+php/Makefile \
+php/examples/Makefile
+)

View File

@@ -6,9 +6,9 @@
BUGS
-Curl has grown substantially from that day, several years ago, when I
-started fiddling with it. When I write this, there are 16500 lines of source
-code, and by the time you read this it has probably grown even more.
+Curl and libcurl have grown substantially since the beginning. At the time
+of writing (mid March 2001), there are 23000 lines of source code, and by
+the time you read this it has probably grown even more.
Of course there are lots of bugs left. And lots of misfeatures.
@@ -21,10 +21,11 @@ BUGS
http://sourceforge.net/bugs/?group_id=976
When reporting a bug, you should include information that will help us
-understand what's wrong, what's expected and how to repeat it. You therefore
-need to supply your operating system's name and version number (uname -a
-under a unix is fine), what version of curl you're using (curl -v is fine),
-what URL you were working with and anything else you think matters.
+understand what's wrong, what you expected to happen and how to repeat the
+bad behaviour. You therefore need to supply your operating system's name and
+version number (uname -a under a unix is fine), what version of curl you're
+using (curl -V is fine), what URL you were working with and anything else
+you think matters.
If curl crashed, causing a core dump (in unix), there is hardly any use to
send that huge file to anyone of us. Unless we have an exact same system
@@ -32,7 +33,7 @@ BUGS
a stack trace and send that (much smaller) output to us instead!
The address and how to subscribe to the mailing list is detailed in the
-README.curl file.
+MANUAL file.
HOW TO GET A STACK TRACE with a common unix debugger
====================================================

View File

@@ -13,7 +13,7 @@ To Think About When Contributing Source Code
The License Issue
When contributing with code, you agree to put your changes and new code under
-the same license curl and libcurl is already using.
+the same license curl and libcurl is already using unless stated otherwise.
If you add a larger piece of code, you can opt to make that file or set of
files to use a different license as long as they don't enfore any changes to
@@ -26,19 +26,19 @@ Naming
Try using a non-confusing naming scheme for your new functions and variable
names. It doesn't necessarily have to mean that you should use the same as in
other places of the code, just that the names should be logical,
-understandable and be named according to what they're used for.
+understandable and be named according to what they're used for. File-local
+functions should be made static.
Indenting
Please try using the same indenting levels and bracing method as all the
other code already does. It makes the source code a lot easier to follow if
-all of it is written using the same style. I don't ask you to like it, I just
-ask you to follow the tradition! ;-)
+all of it is written using the same style. We don't ask you to like it, we
+just ask you to follow the tradition! ;-)
Commenting
-Comment your source code extensively. I don't see myself as a very good
-source commenter, but I try to become one. Commented code is quality code and
+Comment your source code extensively. Commented code is quality code and
enables future modifications much more. Uncommented code much more risk being
completely replaced when someone wants to extend things, since other persons'
source code can get quite hard to read.
@@ -71,9 +71,9 @@ Separate Patches Doing Different Things
Patch Against Recent Sources
Please try to get the latest available sources to make your patches
-against. It makes my life so much easier. The very best is if you get the
-most up-to-date sources from the CVS repository, but the latest release
-archive is quite OK as well!
+against. It makes the life of the developers so much easier. The very best is
+if you get the most up-to-date sources from the CVS repository, but the
+latest release archive is quite OK as well!
Document
@@ -91,9 +91,9 @@ Write Access to CVS Repository
Test Cases
-Since the introduction of the test suite, we will get the possibility to
-quickly verify that the main features are working as supposed to. To maintain
-this situation and improve it, all new features and functions that are added
-need tro be tested. Every feature that is added should get at least one valid
+Since the introduction of the test suite, we can quickly verify that the main
+features are working as they're supposed to. To maintain this situation and
+improve it, all new features and functions that are added need to be tested
+in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
post a few test cases, it won't end up as a heavy burden on a single person!

108
docs/FAQ
View File

@@ -1,4 +1,4 @@
-Updated: March 6, 2001 (http://curl.haxx.se/docs/faq.shtml)
+Updated: March 13, 2001 (http://curl.haxx.se/docs/faq.shtml)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -112,29 +112,30 @@ FAQ
1.4 When will you make curl do XXXX ?
-I love suggestions of what to change in order to make curl and libcurl
-better. I do however believe in a few rules when it comes to the future of
+We love suggestions of what to change in order to make curl and libcurl
+better. We do however believe in a few rules when it comes to the future of
curl:
-* It is to remain a command line tool. If you want GUIs or fancy scripting
+* Curl is to remain a command line tool. If you want GUIs or fancy scripting
capabilities, you're free to write another tool that uses libcurl and that
-offers this. There's no point in having one single tool that does every
+offers this. There's no point in having a single tool that does every
imaginable thing. That's also one of the great advantages of having the
-core of curl as a library: libcurl.
+core of curl as a library.
-* I do not add things to curl that other small and available tools already
+* We do not add things to curl that other small and available tools already
do very fine at the side. Curl's output is fine to pipe into another
program or redirect to another file for the next program to interpret.
-* I focus on protocol related issues and improvements. If you wanna do more
+* We focus on protocol related issues and improvements. If you wanna do more
magic with the supported protocols than curl currently does, chances are
big I will agree. If you wanna add more protocols, I may very well
agree.
-* If you want me to make all the work while you wait for me to implement it
-for you, that is not a very friendly attitude. I spend a considerable time
-already on maintaining and developing curl. In order to get more out of
-me, I trust you will offer some of your time and efforts in return.
+* If you want someone else to make all the work while you wait for us to
+implement it for you, that is not a very friendly attitude. We spend a
+considerable time already on maintaining and developing curl. In order to
+get more out of us, you should consider trading in some of your time and
+efforts in return.
* If you write the code, chances are bigger that it will get into curl
faster.
@@ -182,23 +183,24 @@ FAQ
2.2. Does curl work/build with other SSL libraries?
-Curl has been written to use OpenSSL, although I doubt there would be much
+Curl has been written to use OpenSSL, although there should not be much
problems using a different library. If anyone does "port" curl to use a
-different SSL library, I am of course very interested in getting the patch!
+different SSL library, we are of course very interested in getting the
+patch!
2.3. Where can I find a copy of LIBEAY32.DLL?
That is an OpenSSL binary built for Windows.
Curl uses OpenSSL to do the SSL stuff. The LIBEAY32.DLL is what curl needs
-on a windows machine to do https://. Check out the curl web page to find
+on a windows machine to do https://. Check out the curl web site to find
accurate and up-to-date pointers to recent OpenSSL DDLs and other binary
packages.
2.4. Does cURL support Socks (RFC 1928) ?
-No. Nobody has wanted it that badly yet. I would appriciate patches that
-brings this functionality.
+No. Nobody has wanted it that badly yet. We appriciate patches that bring
+this functionality.
3. Usage problems
@@ -220,7 +222,7 @@ FAQ
3.2. How do I tell curl to resume a transfer?
-Curl supports resume both ways on FTP, download ways on HTTP.
+Curl supports resumed transfers both ways on both FTP and HTTP.
Try the -C option.
@@ -232,10 +234,10 @@ FAQ
use the -F type. In all the most common cases, you should use -d which then
causes a posting with the type 'application/x-www-form-urlencoded'.
-I have described this in some detail in the README.curl file, and if you
-don't understand it the first time, read it again before you post questions
-about this to the mailing list. I would also suggest that you read through
-the mailing list archives for old postings and questions regarding this.
+This is described in some detail in the README.curl file, and if you don't
+understand it the first time, read it again before you post questions about
+this to the mailing list. Also, try reading through the mailing list
+archives for old postings and questions regarding this.
3.4. How do I tell curl to run custom FTP commands?
@@ -306,9 +308,9 @@ FAQ
4.1. Problems connecting to SSL servers.
-It took a very long time before I could sort out why curl had problems
-to connect to certain SSL servers when using SSLeay or OpenSSL v0.9+.
-The error sometimes showed up similar to:
+It took a very long time before we could sort out why curl had problems to
+connect to certain SSL servers when using SSLeay or OpenSSL v0.9+. The
+error sometimes showed up similar to:
16570:error:1407D071:SSL routines:SSL2_READ:bad mac decode:s2_pkt.c:233:
@@ -316,12 +318,12 @@ FAQ
requests properly. To correct this problem, tell curl to select SSLv2 from
the command line (-2/--sslv2).
-I have also seen examples where the remote server didn't like the SSLv2
+There has also been examples where the remote server didn't like the SSLv2
request and instead you had to force curl to use SSLv3 with -3/--sslv3.
4.2. Why do I get problems when I use & or % in the URL?
-In general unix shells, the & letter is treated special and when used it
+In general unix shells, the & letter is treated special and when used, it
runs the specified command in the background. To safely send the & as a part
of a URL, you should qoute the entire URL by using single (') or double (")
quotes around it.
@@ -346,8 +348,8 @@ FAQ
curl '{curl,www}.haxx.se'
To be able to use those letters as actual parts of the URL (without using
-them for the curl URL "globbing" system), use the -g/--globoff option
-(included in curl 7.6 and later):
+them for the curl URL "globbing" system), use the -g/--globoff option (curl
+7.6 and later):
curl -g 'www.site.com/weirdname[].html'
@@ -363,8 +365,8 @@ FAQ
4.5 Why do I get return code XXX from a HTTP server?
-RFC2616 clearly explains the return codes. I'll make a short transcript
-here. Go read the RFC for exact details:
+RFC2616 clearly explains the return codes. This is a short transcript. Go
+read the RFC for exact details:
4.5.1 "400 Bad Request"
@@ -400,7 +402,7 @@ FAQ
4.7. How do I keep usernames and passwords secret in Curl command lines?
-I see this problem as two parts:
+This problem has two sides:
The first part is to avoid having clear-text passwords in the command line
so that they don't appear in 'ps' outputs and similar. That is easily
@@ -447,9 +449,8 @@ FAQ
programs. libcurl will use thread-safe functions instead of non-safe ones if
your system has such.
-I am very interested in once and for all getting some kind of report or
-README file from those who have used libcurl in a threaded environment,
-since I haven't and I get this question more and more frequently!
+We would appriciate some kind of report or README file from those who have
+used libcurl in a threaded environment.
5.2 How can I receive all data into a large memory chunk?
@@ -487,11 +488,15 @@ FAQ
5.3 How do I fetch multiple files with libcurl?
Starting with version 7.7, curl and libcurl will have excellent support for
-transferring multiple files.
+transferring multiple files. You should just repeatedly set new URLs with
+curl_easy_setopt() and then transfer it with curl_easy_perform(). The handle
+you get from curl_easy_init() is not only reusable starting with libcurl
+7.7, but also you're encouraged to reuse it if you can, as that will enable
+libcurl to use persistant connections.
-The easy interface of libcurl does not support multiple requests using the
-same connection. The only available way to do multiple requests is to
-init/perform/cleanup for each request.
+For libcurl prior to 7.7, there was no multiple file support. The only
+available way to do multiple requests was to init/perform/cleanup for each
+transfer.
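A rough sketch of the reuse pattern this answer describes (the URLs are
placeholders and error handling is omitted):

    #include <curl/curl.h>

    int main(void)
    {
      const char *urls[] = { "http://example.com/one",
                             "http://example.com/two" };
      CURL *curl = curl_easy_init();
      if(curl) {
        int i;
        for(i = 0; i < 2; i++) {
          /* same handle, new URL: libcurl can keep the connection alive */
          curl_easy_setopt(curl, CURLOPT_URL, urls[i]);
          curl_easy_perform(curl);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }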
5.4 Does libcurl do Winsock initing on win32 systems?
@@ -517,18 +522,14 @@ FAQ
Starting with version 7.7, curl and libcurl will have excellent support for
persistant connections when transferring several files from the same server.
+Curl will attempt to reuse connections for all URLs specified on the same
+command line/config file, and libcurl will reuse connections for all
+transfers that are made using the same libcurl handle.
-This is closely related to issue 5.3. Since libcurl has no real support
-for doing multiple file transfers, there's no support for Keep-Alive or
-persistant connections either.
-This is of course subject to change as soon as libcurl gets support for
-multiple files. Feel free to join in and make this change happen sooner!
+Previous versions had no persistant connection support.
6. License Issues
NOTE: This section concerns curl 7.5.2 or later!
Curl and libcurl are released under a MIT/X derivate license *or* the MPL,
the Mozilla Public License. To get a really good answer to your license
conflict questions, you should study the MPL and MIT/X licenses and the
@@ -573,9 +574,10 @@ FAQ
No.
-We carefully picked this license years ago and a large amount of people have
-contributed with source code knowing that this is the license we use. This
-license puts the restrictions we want on curl/libcurl and it does not spread
-to other programs or libraries that use it. The recent dual license
-modification should make it possible for everyone to use libcurl or curl in
-their projects, no matter what license they already have in use.
+We have carefully picked this license after years of development and
+discussions and a large amount of people have contributed with source code
+knowing that this is the license we use. This license puts the restrictions
+we want on curl/libcurl and it does not spread to other programs or
+libraries that use it. The recent dual license modification should make it
+possible for everyone to use libcurl or curl in their projects, no matter
+what license they already have in use.

View File

@@ -17,12 +17,14 @@ Misc
- progress bar/time specs while downloading
- "standard" proxy environment variables support
- config file support
-- compiles on win32
+- compiles on win32 (reported built on 29 operating systems)
- redirectable stderr
- use selected network interface for outgoing traffic
- IPv6 support
+- persistant connections
HTTP
+- HTTP/1.1 compliant
- GET
- PUT
- HEAD
@@ -72,6 +74,7 @@ FTP
TELNET
- connection negotiation
+- custom telnet options
- stdin/stdout I/O
LDAP (*2)

View File

@@ -10,21 +10,32 @@ Curl has been compiled and built on numerous different operating systems. The
way to proceed is mainly divided in two different ways: the unix way or the
windows way.
-If you're using Windows (95, 98, NT) or OS/2, you should continue reading from
-the Win32 or OS/2 headers further down. All other systems should be capable of
-being installed as described below.
+If you're using Windows (95/98/NT/ME/2000 or whatever) or OS/2, you should
+continue reading from the Win32 or OS/2 headers further down. All other
+systems should be capable of being installed as described below.
UNIX
====
-The configure script *always* tries to find a working SSL library unless
-explicitly told not to. If you have OpenSSL installed in the default
-search path for your compiler/linker, you don't need to do anything
-special:
+A normal unix installation is made in three or four steps (after you've
+unpacked the source archive):
./configure
+make
+make test (optional)
+make install
-If you have OpenSSL installed in /usr/local/ssl, you can run configure
+You probably need to be root when doing the last command.
+If you want to install curl in a different file hierarchy than /usr/local,
+you need to specify that already when running configure:
+./configure --prefix=/path/to/curl/tree
+The configure script always tries to find a working SSL library unless
+explicitly told not to. If you have OpenSSL installed in the default search
+path for your compiler/linker, you don't need to do anything special. If
+you have OpenSSL installed in e.g /usr/local/ssl, you can run configure
like:
./configure --with-ssl
@@ -54,33 +65,11 @@ UNIX
env CPPFLAGS="-I/path/to/ssl/include" LDFLAGS="-L/path/to/ssl/lib" \
./configure
-If your SSL library was compiled with rsaref (usually for use in
-the United States), you may also need to set:
+If your SSL library was compiled with rsaref (usually for use in the United
+States), you may also need to set:
LIBS=-lRSAglue -lrsaref
-(from Doug Kaufman <dkaufman@rahul.net>)
+(as suggested by Doug Kaufman)
-Without SSL support, just run:
-./configure
-Then run:
-make
-Use the executable `curl` in src/ directory.
-To install curl on your system, run
-make install
-This will copy curl to /usr/local/bin/ (or $prefix/bin if you used the
---prefix option to configure) and it copies the man pages, the lib and the
-include files to suitable places.
-To make sure everything runs as supposed, run the test suite:
-make test
KNOWN PROBLEMS
@@ -109,7 +98,7 @@ UNIX
they're executable and set to appear in the path *BEFORE* the actual (but
obsolete) autoconf and autoheader scripts.
-OPTIONS
+MORE OPTIONS
Remember, to force configure to use the standard cc compiler if both
cc and gcc are present, run configure like
@@ -156,29 +145,27 @@ Win32
MingW32 (GCC-2.95) style
------------------------
Run the 'mingw32.bat' file to get the proper environment variables
-set, then run 'make -f Makefile.m32' in the lib/ dir and then
-'make -f Makefile.m32' in the src/ dir.
+set, then run 'make mingw32' in the root dir.
-If you have any problems linking libraries or finding header files,
-be sure to look at the provided "Makefile.m32" files for the proper
+If you have any problems linking libraries or finding header files, be
+sure to verify that the provided "Makefile.m32" files use the proper
paths, and adjust as necessary.
Cygwin style
------------
-Almost identical to the unix installation. Run the configure script
-in the curl root with 'sh configure'. Make sure you have the sh
-executable in /bin/ or you'll see the configure fail towards the
-end.
+Almost identical to the unix installation. Run the configure script in
+the curl root with 'sh configure'. Make sure you have the sh
+executable in /bin/ or you'll see the configure fail towards the end.
Run 'make'
Microsoft command line style
----------------------------
Run the 'vcvars32.bat' file to get the proper environment variables
-set, then run 'nmake -f Makefile.vc6' in the lib/ dir and then
-'nmake -f Makefile.vc6' in the src/ dir.
+set, then run 'nmake vc' in the root dir.
-The vcvars32.bat file is part of the Microsoft development environment.
+The vcvars32.bat file is part of the Microsoft development
+environment.
IDE-style
-------------------------
@@ -206,26 +193,24 @@ Win32
MingW32 (GCC-2.95) style
------------------------
Run the 'mingw32.bat' file to get the proper environment variables
-set, then run 'make -f Makefile.m32 SSL=1' in the lib/ dir and then
-'make -f Makefile.m32 SSL=1' in the src/ dir.
+set, then run 'make mingw32-ssl' in the root dir.
-If you have any problems linking libraries or finding header files,
-be sure to look at the provided "Makefile.m32" files for the proper
+If you have any problems linking libraries or finding header files, be
+sure to look at the provided "Makefile.m32" files for the proper
paths, and adjust as necessary.
Cygwin style
------------
Haven't done, nor got any reports on how to do. It should although be
identical to the unix setup for the same purpose. See above.
Microsoft command line style
----------------------------
Run the 'vcvars32.bat' file to get the proper environment variables
-set, then run 'nmake -f Makefile.vc6 release-ssl' in the lib/ dir and
-then 'nmake -f Makefile.vc6' in the src/ dir.
+set, then run 'nmake vc-ssl' in the root dir.
-The vcvars32.bat file is part of the Microsoft development environment.
+The vcvars32.bat file is part of the Microsoft development
+environment.
Microsoft / Borland style
-------------------------

View File

@@ -1,4 +1,4 @@
-Updated for curl 7.6 on January 26, 2001
+Updated for curl 7.7 on March 13, 2001
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -7,11 +7,11 @@
INTERNALS
-The project is kind of split in two. The library and the client. The client
-part uses the library, but the library is meant to be designed to allow other
-applications to use it.
+The project is split in two. The library and the client. The client part uses
+the library, but the library is designed to allow other applications to use
+it.
-Thus, the largest amount of code and complexity is in the library part.
+The largest amount of code and complexity is in the library part.
CVS
===
@@ -35,13 +35,13 @@ Windows vs Unix
the same at all places except for the header file that defines them. The
macros in use are sclose(), sread() and swrite().
-2. Windows requires a couple of init calls for the socket stuff
+2. Windows requires a couple of init calls for the socket stuff.
Those must be made by the application that uses libcurl, in curl that means
src/main.c has some code #ifdef'ed to do just that.
3. The file descriptors for network communication and file operations are
-not easily interchangable as in unix
+not easily interchangable as in unix.
We avoid this by not trying any funny tricks on file descriptors.
@@ -51,10 +51,10 @@ Windows vs Unix
We set stdout to binary under windows We set stdout to binary under windows
Inside the source code, we make an effort to avoid '#ifdef [Your OS]'. All
conditionals that deal with features *should* instead be in the format
'#ifdef HAVE_THAT_WEIRD_FUNCTION'. Since Windows can't run configure scripts,
we maintain two config-win32.h files (one in / and one in src/) that are
supposed to look exactly like a config.h file would have looked on a
Windows machine!
@@ -64,12 +64,6 @@ Windows vs Unix
Library Library
======= =======
As described elsewhere, libcurl is meant to get two different "layers" of
interfaces. At the present point only the high-level, the "easy", interface
has been fully implemented and documented. We assume the easy-interface in
this description, the low-level interface will be documented when fully
implemented.
There are plenty of entry points to the library, namely each publicly defined There are plenty of entry points to the library, namely each publicly defined
function that libcurl offers to applications. All of those functions are function that libcurl offers to applications. All of those functions are
rather small and easy-to-follow. All the ones prefixed with 'curl_easy' are rather small and easy-to-follow. All the ones prefixed with 'curl_easy' are
@@ -103,8 +97,9 @@ Library
lib/sendf.c) function to send printf-style formatted data to the remote host lib/sendf.c) function to send printf-style formatted data to the remote host
and when they're ready to make the actual file transfer they call the and when they're ready to make the actual file transfer they call the
Curl_Transfer() function (in lib/transfer.c) to setup the transfer and Curl_Transfer() function (in lib/transfer.c) to setup the transfer and
returns. curl_transfer() then calls _Tranfer() in lib/transfer.c that returns. Curl_perform() then calls Transfer() in lib/transfer.c that performs
performs the entire file transfer. the entire file transfer. Curl_perform() is what does the main "connect - do
- transfer - done" loop. It loops if there's a Location: to follow.
During transfer, the progress functions in lib/progress.c are called at a During transfer, the progress functions in lib/progress.c are called at a
frequent interval (or at the user's choice, a specified callback might get frequent interval (or at the user's choice, a specified callback might get
@@ -114,6 +109,22 @@ Library
When completed, the curl_easy_cleanup() should be called to free up used When completed, the curl_easy_cleanup() should be called to free up used
resources. resources.
A quick roundup on internal function sequences (many of these call
protocol-specific function-pointers):
curl_connect - connects to a remote site and does initial connect fluff
This also checks for an existing connection to the requested site and uses
that one if it is possible.
curl_do - starts a transfer
curl_transfer() - transfers data
curl_done - ends a transfer
curl_disconnect - disconnects from a remote site. This is called when the
disconnect is really requested, which doesn't necessarily have to be
exactly after curl_done in case we want to keep the connection open for
a while.
HTTP(S) HTTP(S)
HTTP offers a lot and is the protocol in curl that uses the most lines of HTTP offers a lot and is the protocol in curl that uses the most lines of
@@ -129,6 +140,14 @@ Library
the source by the use of curl_read() for reading and curl_write() for writing the source by the use of curl_read() for reading and curl_write() for writing
data to the remote server. data to the remote server.
http_chunks.c contains functions that understand HTTP 1.1 chunked transfer
encoding.
An interesting detail with the HTTP(S) request is the add_buffer() series of
functions we use. They append data to one single buffer, and when the
building is done the entire request is sent off in one single write. This is
done to overcome problems with flawed firewalls and lame servers.
FTP FTP
The Curl_if2ip() function can be used for getting the IP number of a The Curl_if2ip() function can be used for getting the IP number of a
@@ -160,7 +179,7 @@ Library
URL encoding and decoding, called escaping and unescaping in the source code, URL encoding and decoding, called escaping and unescaping in the source code,
is found in lib/escape.c. is found in lib/escape.c.
While transfering data in _Transfer() a few functions might get While transfering data in Transfer() a few functions might get
used. curl_getdate() in lib/getdate.c is for HTTP date comparisons (and used. curl_getdate() in lib/getdate.c is for HTTP date comparisons (and
more). more).
@@ -182,6 +201,34 @@ Library
exists in lib/getpass.c. libcurl offers a custom callback that can be used exists in lib/getpass.c. libcurl offers a custom callback that can be used
instead of this, but it doesn't change much to us. instead of this, but it doesn't change much to us.
Persistent Connections
======================
With curl 7.7, we added persistent connection support to libcurl, which has
introduced a somewhat different treatment of things inside of libcurl.
o The 'UrlData' struct returned in the curl_easy_init() call must never
hold connection-oriented data. It is meant to hold the root data as well
as all the options etc that the library-user may choose.
o The 'UrlData' struct holds the cache array of pointers to 'connectdata'
structs. There's one connectdata struct for each connection that libcurl
knows about.
o This also enables the 'curl handle' to be reused on subsequent transfers,
something that was illegal in pre-7.7 versions.
o When we are about to perform a transfer with curl_easy_perform(), we first
check for an already existing connection in the cache that we can use,
otherwise we create a new one and add it to the cache. If the cache is full
already when we add a new connection, we close one of the present ones. We
select which one to close depending on the close policy that may have been
previously set.
o When the transfer operation is complete, we try to leave the connection open.
Particular options may tell us not to, and protocols may signal closure on
connections and then we don't keep it open of course.
o When curl_easy_cleanup() is called, we close all still-open connections.
Note that the curl handle must be re-used in order for persistent connections
to work.
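To make the handle re-use described above concrete, here is a minimal sketch
of an application using the documented easy interface; the URL is a made-up
placeholder and error checking is left out for brevity:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();   /* one handle, re-used for both transfers */
    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/first.html");
    curl_easy_perform(curl);         /* the connection is left open afterwards */

    /* same host again, so the cached connection can be picked up and re-used */
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/second.html");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);         /* closes all still-open connections */
    return 0;
  }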
Library Symbols Library Symbols
=============== ===============
@@ -236,12 +283,12 @@ Memory Debugging
deal with resources that might give us problems if we "leak" them. The deal with resources that might give us problems if we "leak" them. The
functions in the memdebug system do nothing fancy, they do their normal functions in the memdebug system do nothing fancy, they do their normal
function and then log information about what they just did. The logged data function and then log information about what they just did. The logged data
can then be analyzed after a complete session.
memanalyze.pl is a perl script only present in CVS (not part of the
release archives) that analyzes a log file generated by the memdebug
system. It detects if resources are allocated but never freed and other
kinds of errors related to resource management.
Use -DMALLOCDEBUG when compiling to enable memory debugging. Use -DMALLOCDEBUG when compiling to enable memory debugging.
@@ -256,8 +303,8 @@ Test Suite
httpserver.pl and ftpserver.pl before all the test cases are performed. The httpserver.pl and ftpserver.pl before all the test cases are performed. The
test suite currently only runs on unix-like platforms. test suite currently only runs on unix-like platforms.
You'll find a complete description of the test case data files in the README You'll find a complete description of the test case data files in the
file in the test directory. tests/README file.
The test suite automatically detects if curl was built with the memory The test suite automatically detects if curl was built with the memory
debugging enabled, and if it was it will detect memory leaks too. debugging enabled, and if it was it will detect memory leaks too.
@@ -269,6 +316,7 @@ Building Releases
released, run the 'maketgz' script (using 'make distcheck' will give you a released, run the 'maketgz' script (using 'make distcheck' will give you a
pretty good view on the status of the current sources). maketgz prompts for pretty good view on the status of the current sources). maketgz prompts for
version number of the client and the library before it creates a release version number of the client and the library before it creates a release
archive. maketgz uses 'make dist' for the actual archive building, which is
why you need to fill in the Makefile.am files properly to specify which files
should be included in the release archives.
You must have autoconf installed to build release archives.


@@ -58,9 +58,16 @@ Portability
you to init the winsock stuff before you use the libcurl functions. Details you to init the winsock stuff before you use the libcurl functions. Details
on this are noted on the curl_easy_init() man page. on this are noted on the curl_easy_init() man page.
(*) = it appears users of the cygwin environment gets this done (*) = it appears as if users of the cygwin environment get this done
automatically. automatically.
Threads
Never *ever* call curl-functions simultaneously using the same handle from
several threads. libcurl is thread-safe and can be used in any number of
threads, but you must use separate curl handles if you want to use libcurl in
more than one thread simultaneously.
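As a hedged illustration of this rule, the sketch below (assuming POSIX
threads and made-up URLs, with all error checking omitted) gives every thread
its own easy handle instead of sharing one:

  #include <curl/curl.h>
  #include <pthread.h>

  /* each thread creates, uses and destroys its own handle */
  static void *fetch(void *url)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, (char *)url);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return NULL;
  }

  int main(void)
  {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, fetch, "http://www.example.com/one.html");
    pthread_create(&t2, NULL, fetch, "http://www.example.com/two.html");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
  }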
Persistent Connections
With libcurl 7.7, persistent connections were added. Persistent connections


@@ -25,12 +25,16 @@ SIMPLE USAGE
Get a list of the root directory of an FTP site: Get a list of the root directory of an FTP site:
curl ftp://ftp.fts.frontec.se/ curl ftp://cool.haxx.se/
Get the definition of curl from a dictionary: Get the definition of curl from a dictionary:
curl dict://dict.org/m:curl curl dict://dict.org/m:curl
Fetch two documents at once:
curl ftp://cool.haxx.se/ http://www.weirdserver.com:8000/
DOWNLOAD TO A FILE DOWNLOAD TO A FILE
Get a web page and store in a local file: Get a web page and store in a local file:
@@ -43,6 +47,10 @@ DOWNLOAD TO A FILE
curl -O http://www.netscape.com/index.html curl -O http://www.netscape.com/index.html
Fetch two files and store them with their remote names:
curl -O www.haxx.se/index.html -O curl.haxx.se/download.html
USING PASSWORDS USING PASSWORDS
FTP FTP
@@ -455,9 +463,13 @@ EXTRA HEADERS
curl -H "X-you-and-me: yes" www.love.com curl -H "X-you-and-me: yes" www.love.com
This can also be useful in case you want curl to send a different text in This can also be useful in case you want curl to send a different text in a
a header than it normally does. The -H header you specify then replaces the header than it normally does. The -H header you specify then replaces the
header curl would normally send. header curl would normally send. If you replace an internal header with an
empty one, you prevent that header from being sent. To prevent the Host:
header from being used:
curl -H "Host:" www.server.com
FTP and PATH NAMES FTP and PATH NAMES
@@ -745,6 +757,25 @@ TELNET
to track when the login prompt is received and send the username and to track when the login prompt is received and send the username and
password accordingly. password accordingly.
PERSISTENT CONNECTIONS
Specifying multiple files on a single command line will make curl transfer
all of them, one after the other in the specified order.
libcurl will attempt to use persistent connections for the transfers so that
the second transfer to the same host can use the same connection that was
already initiated and was left open in the previous transfer. This greatly
decreases connection time for all but the first transfer and it makes a far
better use of the network.
Note that curl cannot use persistent connections for transfers that are used
in subsequent curl invocations. Try to stuff as many URLs as possible on the
same command line if they are using the same host, as that'll make the
transfers faster. If you use an HTTP proxy for file transfers, practically
all transfers will be persistent.
Persistent connections were introduced in curl 7.7.
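For example, this fetches two documents from one (made-up) host over a single
connection, where two separate curl invocations would have set up two
connections:

curl http://www.example.com/first.html http://www.example.com/second.html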
MAILING LISTS MAILING LISTS
For your convenience, we have several open mailing lists to discuss curl, For your convenience, we have several open mailing lists to discuss curl,
@@ -753,10 +784,10 @@ MAILING LISTS
To subscribe to the main curl list, mail curl-request@contactor.se with To subscribe to the main curl list, mail curl-request@contactor.se with
"subscribe <fill in your email address>" in the body. "subscribe <fill in your email address>" in the body.
To subscribe to the curl-library users/developers list, follow the
instructions at http://curl.haxx.se/mail/
To subscribe to the curl-announce list, to only get information about new
releases, follow the instructions at http://curl.haxx.se/mail/
To subscribe to the curl-and-PHP list in which curl using with PHP is To subscribe to the curl-and-PHP list in which curl using with PHP is


@@ -9,21 +9,21 @@ TODO
Things to do in project cURL. Please tell me what you think, contribute and Things to do in project cURL. Please tell me what you think, contribute and
send me patches that improve things! send me patches that improve things!
To do for the 7.7 release:
* Fix the random seeding. Add --egd-socket and --random-file options to the
curl client and libcurl curl_easy_setopt() interface.
* Support persistant connections (fully detailed elsewhere)
* Add a special connection-timeout that only goes for the connection phase.
To do for the 7.8 release: To do for the 7.8 release:
* Make SSL session ids get used if multiple HTTPS documents from the same * Make SSL session ids get used if multiple HTTPS documents from the same
host is requested. host is requested.
To do in a future release: * Document the undocumented libcurl functions: the printf clones (like
curl_msprintf, curl_mfprintf, curl_msnprintf, curl_maprintf and
curl_mvfprintf), the string compare functions (curl_strequal
and curl_strnequal) and the URL escape/unescape functions.
To do in a future release (random order):
* Add configure options that disable certain protocols in libcurl to
decrease footprint. '--disable-[protocol]' where protocol is http, ftp,
telnet, ldap, dict or file.
* Extend the test suite to include telnet and https. The telnet could just do * Extend the test suite to include telnet and https. The telnet could just do
ftp or http operations (for which we have test servers) and the https would ftp or http operations (for which we have test servers) and the https would


@@ -2,7 +2,7 @@
.\" nroff -man curl.1 .\" nroff -man curl.1
.\" Written by Daniel Stenberg .\" Written by Daniel Stenberg
.\" .\"
.TH curl 1 "19 January 2001" "Curl 7.6" "Curl Manual" .TH curl 1 "15 March 2001" "Curl 7.7" "Curl Manual"
.SH NAME .SH NAME
curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or
HTTPS syntax. HTTPS syntax.
@@ -41,6 +41,12 @@ supported at the moment:
Starting with curl 7.6, you can specify any amount of URLs on the command Starting with curl 7.6, you can specify any amount of URLs on the command
line. They will be fetched in a sequential manner in the specified order. line. They will be fetched in a sequential manner in the specified order.
Starting with curl 7.7, curl will attempt to re-use connections for multiple
file transfers, so that getting many files from the same server will not do
multiple connects/handshakes. This improves speed. Of course this is only done
on files specified on a single command line and cannot be used between
separate curl invocations.
.SH OPTIONS .SH OPTIONS
.IP "-a/--append" .IP "-a/--append"
(FTP) (FTP)
@@ -85,6 +91,14 @@ also be enforced by using an URL that ends with ";type=A". This option causes
data sent to stdout to be in text mode for win32 systems. data sent to stdout to be in text mode for win32 systems.
If this option is used twice, the second one will disable ASCII usage. If this option is used twice, the second one will disable ASCII usage.
.IP "--connect-timeout <seconds>"
Maximum time in seconds that you allow the connection to the server to take.
This only limits the connection phase; once curl has connected, this option
is of no more use. This option doesn't work in win32 systems. See also the
.I "--max-time"
option.
If this option is used several times, the last one will be used.
.IP "-c/--continue" .IP "-c/--continue"
.B Deprecated. Use '-C -' instead. .B Deprecated. Use '-C -' instead.
Continue/Resume a previous file transfer. This instructs curl to Continue/Resume a previous file transfer. This instructs curl to
@@ -105,14 +119,15 @@ HTTP resume is only possible with HTTP/1.1 or later servers.
If this option is used serveral times, the last one will be used. If this option is used serveral times, the last one will be used.
.IP "-d/--data <data>" .IP "-d/--data <data>"
(HTTP) Sends the specified data in a POST request to the HTTP server, in a
way that emulates a user filling in an HTML form and pressing the submit
button. Note that the data is sent exactly as specified with no extra
processing (with all newlines cut off). The data is expected to be
"url-encoded". This will cause curl to pass the data to the server using the
content-type application/x-www-form-urlencoded. Compare to -F. If more than
one -d/--data option is used on the same command line, the data pieces
specified will be merged together with a separating &-letter. Thus, using '-d
name=daniel -d skill=lousy' would generate a post chunk that looks like
'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file name to If you start the data with the letter @, the rest should be a file name to
@@ -160,6 +175,11 @@ previous URL when it follows a Location: header. The ";auto" string can be
used alone, even if you don't set an initial referer. used alone, even if you don't set an initial referer.
If this option is used serveral times, the last one will be used. If this option is used serveral times, the last one will be used.
.IP "--egd-file <file>"
(HTTPS) Specify the path name to the Entropy Gathering Daemon socket. The
socket is used to seed the random engine for SSL connections. See also the
.I "--random-file"
option.
.IP "-E/--cert <certificate[:password]>" .IP "-E/--cert <certificate[:password]>"
(HTTPS) (HTTPS)
Tells curl to use the specified certificate file when getting a file Tells curl to use the specified certificate file when getting a file
@@ -283,6 +303,9 @@ If this option is used twice, the second will again disable location following.
Maximum time in seconds that you allow the whole operation to take. This is Maximum time in seconds that you allow the whole operation to take. This is
useful for preventing your batch jobs from hanging for hours due to slow useful for preventing your batch jobs from hanging for hours due to slow
networks or links going down. This doesn't work fully in win32 systems. networks or links going down. This doesn't work fully in win32 systems.
See also the
.I "--connect-timeout"
option.
If this option is used serveral times, the last one will be used. If this option is used serveral times, the last one will be used.
.IP "-M/--manual" .IP "-M/--manual"
@@ -377,6 +400,12 @@ to be run before and after the transfer. If the server returns failure for one
of the commands, the entire operation will be aborted. of the commands, the entire operation will be aborted.
This option can be used multiple times. This option can be used multiple times.
.IP "--random-file <file>"
(HTTPS) Specify the path name to a file containing what will be considered
as random data. The data is used to seed the random engine for SSL
connections. See also the
.I "--egd-file"
option.
.IP "-r/--range <range>" .IP "-r/--range <range>"
(HTTP/FTP) (HTTP/FTP)
Retrieve a byte range (i.e a partial document) from a HTTP/1.1 or FTP Retrieve a byte range (i.e a partial document) from a HTTP/1.1 or FTP


@@ -2,7 +2,7 @@
.\" nroff -man [file] .\" nroff -man [file]
.\" Written by daniel@haxx.se .\" Written by daniel@haxx.se
.\" .\"
.TH curl_easy_setopt 3 "6 March 2001" "libcurl 7.5" "libcurl Manual" .TH curl_easy_setopt 3 "13 March 2001" "libcurl 7.7" "libcurl Manual"
.SH NAME .SH NAME
curl_easy_setopt - Set curl easy-session options curl_easy_setopt - Set curl easy-session options
.SH SYNOPSIS .SH SYNOPSIS
@@ -26,6 +26,13 @@ NOTE: strings passed to libcurl as 'char *' arguments, will not be copied by
the library. Instead you should keep them available until libcurl no longer the library. Instead you should keep them available until libcurl no longer
needs them. Failing to do so will cause very odd behaviour or even crashes. needs them. Failing to do so will cause very odd behaviour or even crashes.
One more note: the options set with this function call are valid for the
forthcoming data transfers that are performed when you invoke
.I curl_easy_perform .
The options are not in any way reset between transfers, so if you want
subsequent transfers with different options, you must change them between the
transfers.
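As an illustration (a sketch only, with made-up URLs and no error checking),
an option set before the first transfer here is explicitly reset before the
second one, since performing a transfer does not clear it:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/redirected");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* follow Location: */
    curl_easy_perform(curl);

    /* still set from above; switch it off explicitly for the next transfer */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 0L);
    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/plain");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    return 0;
  }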
The The
.I "handle" .I "handle"
is the return code from the is the return code from the
@@ -419,6 +426,59 @@ Pass a long. The set number will be the redirection limit. If that many
redirections have been followed, the next redirect will cause an error. This redirections have been followed, the next redirect will cause an error. This
option only makes sense if the CURLOPT_FOLLOWLOCATION is used at the same option only makes sense if the CURLOPT_FOLLOWLOCATION is used at the same
time. (Added in 7.5) time. (Added in 7.5)
.TP
.B CURLOPT_MAXCONNECTS
Pass a long. The set number will be the persistent connection cache size. The
set amount will be the maximum amount of simultaneous connections that libcurl
may cache between file transfers. Default is 5, and there isn't much point in
changing this value unless you are perfectly aware of how this works and
changes libcurl's behaviour. Note: if you have already performed transfers
with this curl handle, setting a smaller MAXCONNECTS than before may cause
open connections to unnecessarily get closed. (Added in 7.7)
.TP
.B CURLOPT_CLOSEPOLICY
Pass a long. This option sets what policy libcurl should use when the
connection cache is filled and one of the open connections has to be closed to
make room for a new connection. This must be one of the CURLCLOSEPOLICY_*
defines. Use CURLCLOSEPOLICY_LEAST_RECENTLY_USED to make libcurl close the
connection that was least recently used, that connection is also least likely
to be capable of re-use. Use CURLCLOSEPOLICY_OLDEST to make libcurl close the
oldest connection, the one that was created first among the ones in the
connection cache. The other close policies are not supported yet. (Added in 7.7)
.TP
.B CURLOPT_FRESH_CONNECT
Pass a long. Set to non-zero to make the next transfer use a new connection by
force. If the connection cache is full before this connection, one of the
existing connections will be closed according to the set policy. This
option should be used with caution and only if you understand what it
does. Set to 0 to have libcurl attempt re-use of an existing connection.
(Added in 7.7)
.TP
.B CURLOPT_FORBID_REUSE
Pass a long. Set to non-zero to make the next transfer explicitly close the
connection when done. Normally, libcurl keeps all connections alive when done
with one transfer in case there comes a succeeding one that can re-use them.
This option should be used with caution and only if you understand what it
does. Set to 0 to have libcurl keep the connection open for possibly later
re-use. (Added in 7.7)
.TP
.B CURLOPT_RANDOM_FILE
Pass a char * to a zero terminated file name. The file will be used to read
from to seed the random engine for SSL. The more random the specified file is,
the more secure the SSL connection will become.
.TP
.B CURLOPT_EGDSOCKET
Pass a char * to the zero terminated path name to the Entropy Gathering Daemon
socket. It will be used to seed the random engine for SSL.
.TP
.B CURLOPT_CONNECTTIMEOUT
Pass a long. It should contain the maximum time in seconds that you allow the
connection to the server to take. This only limits the connection phase, once
it has connected, this option is of no more use. Set to zero to disable
connection timeout (it will then only timeout on the system's internal
timeouts). This option doesn't work in win32 systems. See also the
.I CURLOPT_TIMEOUT
option.
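To show how these additions fit together, here is a minimal sketch (made-up
URL, no error checking) that shrinks the connection cache, picks the
least-recently-used close policy and caps the connect phase at 30 seconds:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(!curl)
      return 1;

    /* tune the persistent connection cache and the connect phase timeout */
    curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 3L);
    curl_easy_setopt(curl, CURLOPT_CLOSEPOLICY,
                     (long)CURLCLOSEPOLICY_LEAST_RECENTLY_USED);
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 30L);

    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
    curl_easy_perform(curl);

    curl_easy_cleanup(curl);
    return 0;
  }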
.PP .PP
.SH RETURN VALUE .SH RETURN VALUE
0 means the option was set properly, non-zero means an error as 0 means the option was set properly, non-zero means an error as


@@ -7,5 +7,4 @@ advantage of libcurl.
If you end up with other small but still useful example sources, please mail If you end up with other small but still useful example sources, please mail
them for submission in future packages and on the web site. them for submission in future packages and on the web site.
Try the php/examples/ directory for PHP programming snippets!


@@ -398,6 +398,38 @@ typedef enum {
/* This points to a linked list of telnet options */ /* This points to a linked list of telnet options */
CINIT(TELNETOPTIONS, OBJECTPOINT, 70), CINIT(TELNETOPTIONS, OBJECTPOINT, 70),
/* Max amount of cached alive connections */
CINIT(MAXCONNECTS, LONG, 71),
/* What policy to use when closing connections when the cache is filled
up */
CINIT(CLOSEPOLICY, LONG, 72),
/* Callback to use when CURLCLOSEPOLICY_CALLBACK is set */
CINIT(CLOSEFUNCTION, FUNCTIONPOINT, 73),
/* Set to explicitly use a new connection for the upcoming transfer.
Do not use this unless you're absolutely sure of this, as it makes the
operation slower and is less friendly for the network. */
CINIT(FRESH_CONNECT, LONG, 74),
/* Set to explicitly forbid the upcoming transfer's connection to be re-used
when done. Do not use this unless you're absolutely sure of this, as it
makes the operation slower and is less friendly for the network. */
CINIT(FORBID_REUSE, LONG, 75),
/* Set to a file name that contains random data for libcurl to use to
seed the random engine when doing SSL connects. */
CINIT(RANDOM_FILE, OBJECTPOINT, 76),
/* Set to the Entropy Gathering Daemon socket pathname */
CINIT(EGDSOCKET, OBJECTPOINT, 77),
/* Time-out connect operations after this amount of seconds, if connects
are OK within this time, then fine... This only aborts the connect
phase. [Only works on unix-style/SIGALRM operating systems] */
CINIT(CONNECTTIMEOUT, LONG, 78),
CURLOPT_LASTENTRY /* the last unusued */ CURLOPT_LASTENTRY /* the last unusued */
} CURLoption; } CURLoption;
@@ -423,10 +455,10 @@ typedef enum {
NOTE: they return TRUE if the strings match *case insensitively*. NOTE: they return TRUE if the strings match *case insensitively*.
*/ */
extern int (Curl_strequal)(const char *s1, const char *s2); extern int (curl_strequal)(const char *s1, const char *s2);
extern int (Curl_strnequal)(const char *s1, const char *s2, size_t n); extern int (curl_strnequal)(const char *s1, const char *s2, size_t n);
#define strequal(a,b) Curl_strequal(a,b) #define strequal(a,b) curl_strequal(a,b)
#define strnequal(a,b,c) Curl_strnequal(a,b,c) #define strnequal(a,b,c) curl_strnequal(a,b,c)
/* external form function */ /* external form function */
int curl_formparse(char *string, int curl_formparse(char *string,
@@ -444,7 +476,7 @@ char *curl_getenv(char *variable);
char *curl_version(void); char *curl_version(void);
/* This is the version number */ /* This is the version number */
#define LIBCURL_VERSION "7.7-beta1" #define LIBCURL_VERSION "7.7-beta5"
#define LIBCURL_VERSION_NUM 0x070700 #define LIBCURL_VERSION_NUM 0x070700
/* linked-list structure for the CURLOPT_QUOTE option (and other) */ /* linked-list structure for the CURLOPT_QUOTE option (and other) */
@@ -502,21 +534,6 @@ typedef enum {
before it can be included! */ before it can be included! */
#include <curl/easy.h> /* nothing in curl is fun without the easy stuff */ #include <curl/easy.h> /* nothing in curl is fun without the easy stuff */
/*
* NAME curl_getinfo()
*
* DESCRIPTION
*
* Request internal information from the curl session with this function.
* The third argument MUST be a pointer to a long or a pointer to a char *.
* The data pointed to will be filled in accordingly and can be relied upon
* only if the function returns CURLE_OK.
* This function is intended to get used *AFTER* a performed transfer, all
* results are undefined before the transfer is completed.
*/
CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...);
typedef enum { typedef enum {
CURLCLOSEPOLICY_NONE, /* first, never use this */ CURLCLOSEPOLICY_NONE, /* first, never use this */


@@ -16,7 +16,7 @@ lib_LTLIBRARIES = libcurl.la
INCLUDES = -I$(top_srcdir)/include INCLUDES = -I$(top_srcdir)/include
libcurl_la_LDFLAGS = -version-info 1:0:0 libcurl_la_LDFLAGS = -version-info 2:0:0
# This flag accepts an argument of the form current[:revision[:age]]. So, # This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to # passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1. # 1.


@@ -33,13 +33,13 @@ libcurl_a_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c \
urldata.h transfer.c getdate.h ldap.c ssluse.c version.c transfer.h getenv.c \ urldata.h transfer.c getdate.h ldap.c ssluse.c version.c transfer.h getenv.c \
ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c \ ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c \
telnet.h getinfo.c strequal.c strequal.h easy.c security.h \ telnet.h getinfo.c strequal.c strequal.h easy.c security.h \
security.c krb4.c security.c krb4.h krb4.c memdebug.h memdebug.c inet_ntoa_r.h http_chunks.h http_chunks.c
libcurl_a_OBJECTS = file.o timeval.o base64.o hostip.o progress.o \ libcurl_a_OBJECTS = file.o timeval.o base64.o hostip.o progress.o \
formdata.o cookie.o http.o sendf.o ftp.o url.o dict.o if2ip.o \ formdata.o cookie.o http.o sendf.o ftp.o url.o dict.o if2ip.o \
speedcheck.o getdate.o transfer.o ldap.o ssluse.o version.o \ speedcheck.o getdate.o transfer.o ldap.o ssluse.o version.o \
getenv.o escape.o mprintf.o telnet.o getpass.o netrc.o getinfo.o \ getenv.o escape.o mprintf.o telnet.o getpass.o netrc.o getinfo.o \
strequal.o easy.o security.o krb4.o strequal.o easy.o security.o krb4.o memdebug.o http_chunks.o
LIBRARIES = $(libcurl_a_LIBRARIES) LIBRARIES = $(libcurl_a_LIBRARIES)
SOURCES = $(libcurl_a_SOURCES) SOURCES = $(libcurl_a_SOURCES)


@@ -100,7 +100,7 @@ CURLcode Curl_dict(struct connectdata *conn)
char *path = conn->path; char *path = conn->path;
long *bytecount = &conn->bytecount; long *bytecount = &conn->bytecount;
if(data->bits.user_passwd) { if(conn->bits.user_passwd) {
/* AUTH is missing */ /* AUTH is missing */
} }


@@ -83,15 +83,11 @@ CURL *curl_easy_init(void)
CURLcode res; CURLcode res;
struct UrlData *data; struct UrlData *data;
if(curl_init())
return NULL;
/* We use curl_open() with undefined URL so far */ /* We use curl_open() with undefined URL so far */
res = curl_open((CURL **)&data, NULL); res = Curl_open((CURL **)&data, NULL);
if(res != CURLE_OK) if(res != CURLE_OK)
return NULL; return NULL;
data->interf = CURLI_EASY; /* mark it as an easy one */
/* SAC */ /* SAC */
data->device = NULL; data->device = NULL;
@@ -119,16 +115,16 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption tag, ...)
if(tag < CURLOPTTYPE_OBJECTPOINT) { if(tag < CURLOPTTYPE_OBJECTPOINT) {
/* This is a LONG type */ /* This is a LONG type */
param_long = va_arg(arg, long); param_long = va_arg(arg, long);
curl_setopt(data, tag, param_long); Curl_setopt(data, tag, param_long);
} }
else if(tag < CURLOPTTYPE_FUNCTIONPOINT) { else if(tag < CURLOPTTYPE_FUNCTIONPOINT) {
/* This is a object pointer type */ /* This is a object pointer type */
param_obj = va_arg(arg, void *); param_obj = va_arg(arg, void *);
curl_setopt(data, tag, param_obj); Curl_setopt(data, tag, param_obj);
} }
else { else {
param_func = va_arg(arg, func_T ); param_func = va_arg(arg, func_T );
curl_setopt(data, tag, param_func); Curl_setopt(data, tag, param_func);
} }
va_end(arg); va_end(arg);
@@ -137,13 +133,12 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption tag, ...)
CURLcode curl_easy_perform(CURL *curl) CURLcode curl_easy_perform(CURL *curl)
{ {
return curl_transfer(curl); return Curl_perform(curl);
} }
void curl_easy_cleanup(CURL *curl) void curl_easy_cleanup(CURL *curl)
{ {
curl_close(curl); Curl_close(curl);
curl_free();
} }
CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...) CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...)
@@ -153,5 +148,5 @@ CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...)
va_start(arg, info); va_start(arg, info);
paramp = va_arg(arg, void *); paramp = va_arg(arg, void *);
return curl_getinfo(curl, info, paramp); return Curl_getinfo(curl, info, paramp);
} }
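Since curl_getinfo() has become internal (Curl_getinfo) and applications now
go through curl_easy_getinfo() instead, here is a hedged usage sketch: the URL
is made up, CURLINFO_HTTP_CODE is assumed to be available, error checking is
omitted, and the third argument is a pointer to a long as the removed header
comment above describes:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    long http_code = 0;
    CURL *curl = curl_easy_init();
    if(!curl)
      return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
    curl_easy_perform(curl);

    /* ask for the HTTP response code after the transfer has completed */
    curl_easy_getinfo(curl, CURLINFO_HTTP_CODE, &http_code);
    printf("server answered with code %ld\n", http_code);

    curl_easy_cleanup(curl);
    return 0;
  }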

View File

@@ -78,7 +78,7 @@ char *curl_unescape(char *string, int length)
char *ns = malloc(alloc); char *ns = malloc(alloc);
unsigned char in; unsigned char in;
int index=0; int index=0;
int hex; unsigned int hex;
char querypart=FALSE; /* everything to the right of a '?' letter is char querypart=FALSE; /* everything to the right of a '?' letter is
the "query part" where '+' should become ' '. the "query part" where '+' should become ' '.
RFC 2316, section 3.10 */ RFC 2316, section 3.10 */
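Since curl_unescape() is part of the public API (its prototype is shown
above), a small usage sketch may help; the input string is made up, and
passing 0 as the length is assumed to mean "use the whole zero-terminated
string", which is how libcurl's own file.c calls it further down:

  #include <stdio.h>
  #include <stdlib.h>
  #include <curl/curl.h>

  int main(void)
  {
    /* decode %XX sequences; 0 length means the full string (assumption) */
    char *plain = curl_unescape("one%20two%2Fthree", 0);
    if(plain) {
      printf("%s\n", plain);  /* prints "one two/three" */
      free(plain);            /* the returned string was allocated for us */
    }
    return 0;
  }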


@@ -97,6 +97,9 @@ CURLcode Curl_file_connect(struct connectdata *conn)
char *actual_path = curl_unescape(conn->path, 0); char *actual_path = curl_unescape(conn->path, 0);
struct FILE *file; struct FILE *file;
int fd; int fd;
#if defined(WIN32) || defined(__EMX__)
int i;
#endif
file = (struct FILE *)malloc(sizeof(struct FILE)); file = (struct FILE *)malloc(sizeof(struct FILE));
if(!file) if(!file)
@@ -106,8 +109,6 @@ CURLcode Curl_file_connect(struct connectdata *conn)
conn->proto.file = file; conn->proto.file = file;
#if defined(WIN32) || defined(__EMX__) #if defined(WIN32) || defined(__EMX__)
int i;
/* change path separators from '/' to '\\' for Windows and OS/2 */ /* change path separators from '/' to '\\' for Windows and OS/2 */
for (i=0; actual_path[i] != '\0'; ++i) for (i=0; actual_path[i] != '\0'; ++i)
if (actual_path[i] == '/') if (actual_path[i] == '/')


@@ -77,6 +77,8 @@
#include "krb4.h" #include "krb4.h"
#endif #endif
#include "strequal.h"
#define _MPRINTF_REPLACE /* use our functions only */ #define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h> #include <curl/mprintf.h>
@@ -119,8 +121,8 @@ static CURLcode AllowServerConnect(struct UrlData *data,
size_t size = sizeof(struct sockaddr_in); size_t size = sizeof(struct sockaddr_in);
struct sockaddr_in add; struct sockaddr_in add;
getsockname(sock, (struct sockaddr *) &add, (int *)&size); getsockname(sock, (struct sockaddr *) &add, (socklen_t *)&size);
s=accept(sock, (struct sockaddr *) &add, (int *)&size); s=accept(sock, (struct sockaddr *) &add, (socklen_t *)&size);
sclose(sock); /* close the first socket */ sclose(sock); /* close the first socket */
@@ -549,13 +551,14 @@ CURLcode _ftp(struct connectdata *conn)
char *buf = data->buffer; /* this is our buffer */ char *buf = data->buffer; /* this is our buffer */
/* for the ftp PORT mode */ /* for the ftp PORT mode */
int portsock=-1; int portsock=-1;
struct sockaddr_in serv_addr;
char hostent_buf[8192];
#if defined (HAVE_INET_NTOA_R) #if defined (HAVE_INET_NTOA_R)
char ntoa_buf[64]; char ntoa_buf[64];
#endif #endif
#ifdef ENABLE_IPV6 #ifdef ENABLE_IPV6
struct addrinfo *ai; struct addrinfo *ai;
#else
struct sockaddr_in serv_addr;
char hostent_buf[8192];
#endif #endif
struct curl_slist *qitem; /* QUOTE item */ struct curl_slist *qitem; /* QUOTE item */
@@ -715,20 +718,20 @@ CURLcode _ftp(struct connectdata *conn)
#ifdef ENABLE_IPV6 #ifdef ENABLE_IPV6
struct addrinfo hints, *res, *ai; struct addrinfo hints, *res, *ai;
struct sockaddr_storage ss; struct sockaddr_storage ss;
int sslen; socklen_t sslen;
char hbuf[NI_MAXHOST]; char hbuf[NI_MAXHOST];
char *localaddr;
struct sockaddr *sa=(struct sockaddr *)&ss; struct sockaddr *sa=(struct sockaddr *)&ss;
#ifdef NI_WITHSCOPEID #ifdef NI_WITHSCOPEID
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV | NI_WITHSCOPEID; const int niflags = NI_NUMERICHOST | NI_NUMERICSERV | NI_WITHSCOPEID;
#else #else
const int niflags = NI_NUMERICHOST | NI_NUMERICSERV; const int niflags = NI_NUMERICHOST | NI_NUMERICSERV;
#endif #endif
unsigned char *ap; char *ap;
unsigned char *pp; char *pp;
int alen, plen; int alen, plen;
char portmsgbuf[4096], tmp[4096]; char portmsgbuf[4096], tmp[4096];
char *p;
char *mode[] = { "EPRT", "LPRT", "PORT", NULL }; char *mode[] = { "EPRT", "LPRT", "PORT", NULL };
char **modep; char **modep;
@@ -761,13 +764,13 @@ CURLcode _ftp(struct connectdata *conn)
continue; continue;
if (bind(portsock, ai->ai_addr, ai->ai_addrlen) < 0) { if (bind(portsock, ai->ai_addr, ai->ai_addrlen) < 0) {
close(portsock); sclose(portsock);
portsock = -1; portsock = -1;
continue; continue;
} }
if (listen(portsock, 1) < 0) { if (listen(portsock, 1) < 0) {
close(portsock); sclose(portsock);
portsock = -1; portsock = -1;
continue; continue;
} }
@@ -878,7 +881,7 @@ again:;
} }
if (!*modep) { if (!*modep) {
close(portsock); sclose(portsock);
freeaddrinfo(res); freeaddrinfo(res);
return CURLE_FTP_PORT_FAILED; return CURLE_FTP_PORT_FAILED;
} }
@@ -932,7 +935,7 @@ again:;
size = sizeof(add); size = sizeof(add);
if(getsockname(portsock, (struct sockaddr *) &add, if(getsockname(portsock, (struct sockaddr *) &add,
(int *)&size)<0) { (socklen_t *)&size)<0) {
failf(data, "getsockname() failed"); failf(data, "getsockname() failed");
return CURLE_FTP_PORT_FAILED; return CURLE_FTP_PORT_FAILED;
} }
@@ -1027,9 +1030,10 @@ again:;
struct addrinfo *res; struct addrinfo *res;
#else #else
struct hostent *he; struct hostent *he;
#endif
char *str=buf,*ip_addr;
char *hostdataptr=NULL; char *hostdataptr=NULL;
char *ip_addr;
#endif
char *str=buf;
/* /*
* New 227-parser June 3rd 1999. * New 227-parser June 3rd 1999.


@@ -104,6 +104,11 @@
# include <string.h> # include <string.h>
#endif #endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 7) #if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 7)
# define __attribute__(x) # define __attribute__(x)
#endif #endif
@@ -222,7 +227,7 @@ static int yyRelSeconds;
static int yyRelYear; static int yyRelYear;
#line 205 "getdate.y" #line 210 "getdate.y"
typedef union { typedef union {
int Number; int Number;
enum _MERIDIAN Meridian; enum _MERIDIAN Meridian;
@@ -305,11 +310,11 @@ static const short yyrhs[] = { -1,
#if YYDEBUG != 0 #if YYDEBUG != 0
static const short yyrline[] = { 0, static const short yyrline[] = { 0,
221, 222, 225, 228, 231, 234, 237, 240, 243, 249, 226, 227, 230, 233, 236, 239, 242, 245, 248, 254,
255, 264, 270, 282, 285, 288, 294, 298, 302, 308, 260, 269, 275, 287, 290, 293, 299, 303, 307, 313,
312, 330, 336, 342, 346, 351, 355, 362, 370, 373, 317, 335, 341, 347, 351, 356, 360, 367, 375, 378,
376, 379, 382, 385, 388, 391, 394, 397, 400, 403, 381, 384, 387, 390, 393, 396, 399, 402, 405, 408,
406, 409, 412, 415, 418, 421, 424, 429, 462, 466 411, 414, 417, 420, 423, 426, 429, 434, 467, 471
}; };
#endif #endif
@@ -933,37 +938,37 @@ yyreduce:
switch (yyn) { switch (yyn) {
case 3: case 3:
#line 225 "getdate.y" #line 230 "getdate.y"
{ {
yyHaveTime++; yyHaveTime++;
; ;
break;} break;}
case 4: case 4:
#line 228 "getdate.y" #line 233 "getdate.y"
{ {
yyHaveZone++; yyHaveZone++;
; ;
break;} break;}
case 5: case 5:
#line 231 "getdate.y" #line 236 "getdate.y"
{ {
yyHaveDate++; yyHaveDate++;
; ;
break;} break;}
case 6: case 6:
#line 234 "getdate.y" #line 239 "getdate.y"
{ {
yyHaveDay++; yyHaveDay++;
; ;
break;} break;}
case 7: case 7:
#line 237 "getdate.y" #line 242 "getdate.y"
{ {
yyHaveRel++; yyHaveRel++;
; ;
break;} break;}
case 9: case 9:
#line 243 "getdate.y" #line 248 "getdate.y"
{ {
yyHour = yyvsp[-1].Number; yyHour = yyvsp[-1].Number;
yyMinutes = 0; yyMinutes = 0;
@@ -972,7 +977,7 @@ case 9:
; ;
break;} break;}
case 10: case 10:
#line 249 "getdate.y" #line 254 "getdate.y"
{ {
yyHour = yyvsp[-3].Number; yyHour = yyvsp[-3].Number;
yyMinutes = yyvsp[-1].Number; yyMinutes = yyvsp[-1].Number;
@@ -981,7 +986,7 @@ case 10:
; ;
break;} break;}
case 11: case 11:
#line 255 "getdate.y" #line 260 "getdate.y"
{ {
yyHour = yyvsp[-3].Number; yyHour = yyvsp[-3].Number;
yyMinutes = yyvsp[-1].Number; yyMinutes = yyvsp[-1].Number;
@@ -993,7 +998,7 @@ case 11:
; ;
break;} break;}
case 12: case 12:
#line 264 "getdate.y" #line 269 "getdate.y"
{ {
yyHour = yyvsp[-5].Number; yyHour = yyvsp[-5].Number;
yyMinutes = yyvsp[-3].Number; yyMinutes = yyvsp[-3].Number;
@@ -1002,7 +1007,7 @@ case 12:
; ;
break;} break;}
case 13: case 13:
#line 270 "getdate.y" #line 275 "getdate.y"
{ {
yyHour = yyvsp[-5].Number; yyHour = yyvsp[-5].Number;
yyMinutes = yyvsp[-3].Number; yyMinutes = yyvsp[-3].Number;
@@ -1015,53 +1020,53 @@ case 13:
; ;
break;} break;}
case 14: case 14:
#line 282 "getdate.y" #line 287 "getdate.y"
{ {
yyTimezone = yyvsp[0].Number; yyTimezone = yyvsp[0].Number;
; ;
break;} break;}
case 15: case 15:
#line 285 "getdate.y" #line 290 "getdate.y"
{ {
yyTimezone = yyvsp[0].Number - 60; yyTimezone = yyvsp[0].Number - 60;
; ;
break;} break;}
case 16: case 16:
#line 289 "getdate.y" #line 294 "getdate.y"
{ {
yyTimezone = yyvsp[-1].Number - 60; yyTimezone = yyvsp[-1].Number - 60;
; ;
break;} break;}
case 17: case 17:
#line 294 "getdate.y" #line 299 "getdate.y"
{ {
yyDayOrdinal = 1; yyDayOrdinal = 1;
yyDayNumber = yyvsp[0].Number; yyDayNumber = yyvsp[0].Number;
; ;
break;} break;}
case 18: case 18:
#line 298 "getdate.y" #line 303 "getdate.y"
{ {
yyDayOrdinal = 1; yyDayOrdinal = 1;
yyDayNumber = yyvsp[-1].Number; yyDayNumber = yyvsp[-1].Number;
; ;
break;} break;}
case 19: case 19:
#line 302 "getdate.y" #line 307 "getdate.y"
{ {
yyDayOrdinal = yyvsp[-1].Number; yyDayOrdinal = yyvsp[-1].Number;
yyDayNumber = yyvsp[0].Number; yyDayNumber = yyvsp[0].Number;
; ;
break;} break;}
case 20: case 20:
#line 308 "getdate.y" #line 313 "getdate.y"
{ {
yyMonth = yyvsp[-2].Number; yyMonth = yyvsp[-2].Number;
yyDay = yyvsp[0].Number; yyDay = yyvsp[0].Number;
; ;
break;} break;}
case 21: case 21:
#line 312 "getdate.y" #line 317 "getdate.y"
{ {
/* Interpret as YYYY/MM/DD if $1 >= 1000, otherwise as MM/DD/YY. /* Interpret as YYYY/MM/DD if $1 >= 1000, otherwise as MM/DD/YY.
The goal in recognizing YYYY/MM/DD is solely to support legacy The goal in recognizing YYYY/MM/DD is solely to support legacy
@@ -1082,7 +1087,7 @@ case 21:
; ;
break;} break;}
case 22: case 22:
#line 330 "getdate.y" #line 335 "getdate.y"
{ {
/* ISO 8601 format. yyyy-mm-dd. */ /* ISO 8601 format. yyyy-mm-dd. */
yyYear = yyvsp[-2].Number; yyYear = yyvsp[-2].Number;
@@ -1091,7 +1096,7 @@ case 22:
; ;
break;} break;}
case 23: case 23:
#line 336 "getdate.y" #line 341 "getdate.y"
{ {
/* e.g. 17-JUN-1992. */ /* e.g. 17-JUN-1992. */
yyDay = yyvsp[-2].Number; yyDay = yyvsp[-2].Number;
@@ -1100,14 +1105,14 @@ case 23:
; ;
break;} break;}
case 24: case 24:
#line 342 "getdate.y" #line 347 "getdate.y"
{ {
yyMonth = yyvsp[-1].Number; yyMonth = yyvsp[-1].Number;
yyDay = yyvsp[0].Number; yyDay = yyvsp[0].Number;
; ;
break;} break;}
case 25: case 25:
#line 346 "getdate.y" #line 351 "getdate.y"
{ {
yyMonth = yyvsp[-3].Number; yyMonth = yyvsp[-3].Number;
yyDay = yyvsp[-2].Number; yyDay = yyvsp[-2].Number;
@@ -1115,14 +1120,14 @@ case 25:
; ;
break;} break;}
case 26: case 26:
#line 351 "getdate.y" #line 356 "getdate.y"
{ {
yyMonth = yyvsp[0].Number; yyMonth = yyvsp[0].Number;
yyDay = yyvsp[-1].Number; yyDay = yyvsp[-1].Number;
; ;
break;} break;}
case 27: case 27:
#line 355 "getdate.y" #line 360 "getdate.y"
{ {
yyMonth = yyvsp[-1].Number; yyMonth = yyvsp[-1].Number;
yyDay = yyvsp[-2].Number; yyDay = yyvsp[-2].Number;
@@ -1130,7 +1135,7 @@ case 27:
; ;
break;} break;}
case 28: case 28:
#line 362 "getdate.y" #line 367 "getdate.y"
{ {
yyRelSeconds = -yyRelSeconds; yyRelSeconds = -yyRelSeconds;
yyRelMinutes = -yyRelMinutes; yyRelMinutes = -yyRelMinutes;
@@ -1141,115 +1146,115 @@ case 28:
; ;
break;} break;}
case 30: case 30:
#line 373 "getdate.y" #line 378 "getdate.y"
{ {
yyRelYear += yyvsp[-1].Number * yyvsp[0].Number; yyRelYear += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 31: case 31:
#line 376 "getdate.y" #line 381 "getdate.y"
{ {
yyRelYear += yyvsp[-1].Number * yyvsp[0].Number; yyRelYear += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 32: case 32:
#line 379 "getdate.y" #line 384 "getdate.y"
{ {
yyRelYear += yyvsp[0].Number; yyRelYear += yyvsp[0].Number;
; ;
break;} break;}
case 33: case 33:
#line 382 "getdate.y" #line 387 "getdate.y"
{ {
yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number; yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 34: case 34:
#line 385 "getdate.y" #line 390 "getdate.y"
{ {
yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number; yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 35: case 35:
#line 388 "getdate.y" #line 393 "getdate.y"
{ {
yyRelMonth += yyvsp[0].Number; yyRelMonth += yyvsp[0].Number;
; ;
break;} break;}
case 36: case 36:
#line 391 "getdate.y" #line 396 "getdate.y"
{ {
yyRelDay += yyvsp[-1].Number * yyvsp[0].Number; yyRelDay += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 37: case 37:
#line 394 "getdate.y" #line 399 "getdate.y"
{ {
yyRelDay += yyvsp[-1].Number * yyvsp[0].Number; yyRelDay += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 38: case 38:
#line 397 "getdate.y" #line 402 "getdate.y"
{ {
yyRelDay += yyvsp[0].Number; yyRelDay += yyvsp[0].Number;
; ;
break;} break;}
case 39: case 39:
#line 400 "getdate.y" #line 405 "getdate.y"
{ {
yyRelHour += yyvsp[-1].Number * yyvsp[0].Number; yyRelHour += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 40: case 40:
#line 403 "getdate.y" #line 408 "getdate.y"
{ {
yyRelHour += yyvsp[-1].Number * yyvsp[0].Number; yyRelHour += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 41: case 41:
#line 406 "getdate.y" #line 411 "getdate.y"
{ {
yyRelHour += yyvsp[0].Number; yyRelHour += yyvsp[0].Number;
; ;
break;} break;}
case 42: case 42:
#line 409 "getdate.y" #line 414 "getdate.y"
{ {
yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number; yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 43: case 43:
#line 412 "getdate.y" #line 417 "getdate.y"
{ {
yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number; yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 44: case 44:
#line 415 "getdate.y" #line 420 "getdate.y"
{ {
yyRelMinutes += yyvsp[0].Number; yyRelMinutes += yyvsp[0].Number;
; ;
break;} break;}
case 45: case 45:
#line 418 "getdate.y" #line 423 "getdate.y"
{ {
yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number; yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 46: case 46:
#line 421 "getdate.y" #line 426 "getdate.y"
{ {
yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number; yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number;
; ;
break;} break;}
case 47: case 47:
#line 424 "getdate.y" #line 429 "getdate.y"
{ {
yyRelSeconds += yyvsp[0].Number; yyRelSeconds += yyvsp[0].Number;
; ;
break;} break;}
case 48: case 48:
#line 430 "getdate.y" #line 435 "getdate.y"
{ {
if (yyHaveTime && yyHaveDate && !yyHaveRel) if (yyHaveTime && yyHaveDate && !yyHaveRel)
yyYear = yyvsp[0].Number; yyYear = yyvsp[0].Number;
@@ -1282,13 +1287,13 @@ case 48:
; ;
break;} break;}
case 49: case 49:
#line 463 "getdate.y" #line 468 "getdate.y"
{ {
yyval.Meridian = MER24; yyval.Meridian = MER24;
; ;
break;} break;}
case 50: case 50:
#line 467 "getdate.y" #line 472 "getdate.y"
{ {
yyval.Meridian = yyvsp[0].Meridian; yyval.Meridian = yyvsp[0].Meridian;
; ;
@@ -1515,7 +1520,7 @@ yyerrhandle:
} }
return 1; return 1;
} }
#line 472 "getdate.y" #line 477 "getdate.y"
/* Include this file down here because bison inserts code above which /* Include this file down here because bison inserts code above which


@@ -80,6 +80,11 @@
# include <string.h> # include <string.h>
#endif #endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 7) #if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 7)
# define __attribute__(x) # define __attribute__(x)
#endif #endif


@@ -31,7 +31,7 @@
#include <string.h> #include <string.h>
#include <stdarg.h> #include <stdarg.h>
CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...) CURLcode Curl_getinfo(CURL *curl, CURLINFO info, ...)
{ {
va_list arg; va_list arg;
long *param_longp; long *param_longp;


@@ -66,6 +66,11 @@
# endif # endif
#endif #endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* no perror? make an fprintf! */ /* no perror? make an fprintf! */
#ifndef HAVE_PERROR #ifndef HAVE_PERROR
# define perror(x) fprintf(stderr, "Error in: %s\n", x) # define perror(x) fprintf(stderr, "Error in: %s\n", x)


@@ -283,7 +283,7 @@ CURLcode Curl_ConnectHTTPProxyTunnel(struct connectdata *conn,
"%s" "%s"
"\r\n", "\r\n",
hostname, remote_port, hostname, remote_port,
(data->bits.proxy_user_passwd)?conn->allocptr.proxyuserpwd:"", (conn->bits.proxy_user_passwd)?conn->allocptr.proxyuserpwd:"",
(data->useragent?conn->allocptr.uagent:"") (data->useragent?conn->allocptr.uagent:"")
); );
@@ -340,7 +340,7 @@ CURLcode Curl_http_connect(struct connectdata *conn)
return CURLE_SSL_CONNECT_ERROR; return CURLE_SSL_CONNECT_ERROR;
} }
if(data->bits.user_passwd && !data->bits.this_is_a_follow) { if(conn->bits.user_passwd && !data->bits.this_is_a_follow) {
/* Authorization: is requested, this is not a followed location, get the /* Authorization: is requested, this is not a followed location, get the
original host name */ original host name */
data->auth_host = strdup(conn->hostname); data->auth_host = strdup(conn->hostname);
@@ -423,7 +423,7 @@ CURLcode Curl_http(struct connectdata *conn)
conn->allocptr.uagent=NULL; conn->allocptr.uagent=NULL;
} }
if((data->bits.user_passwd) && !checkheaders(data, "Authorization:")) { if((conn->bits.user_passwd) && !checkheaders(data, "Authorization:")) {
char *authorization; char *authorization;
/* To prevent the user+password to get sent to other than the original /* To prevent the user+password to get sent to other than the original
@@ -469,10 +469,14 @@ CURLcode Curl_http(struct connectdata *conn)
http->sendit = Curl_getFormData(data->httppost, &http->postsize); http->sendit = Curl_getFormData(data->httppost, &http->postsize);
} }
if(!checkheaders(data, "Host:") && if(!checkheaders(data, "Host:")) {
!conn->allocptr.host) { /* if ptr_host is already set, it is almost OK since we only re-use
/* if ptr_host is already set, it is OK since we only re-use connections connections to the very same host and port, but when we use a HTTP
to the very same host and port */ proxy we have a persistant connect and yet we must change the Host:
header! */
if(conn->allocptr.host)
free(conn->allocptr.host);
if(((conn->protocol&PROT_HTTPS) && (conn->remote_port == PORT_HTTPS)) || if(((conn->protocol&PROT_HTTPS) && (conn->remote_port == PORT_HTTPS)) ||
(!(conn->protocol&PROT_HTTPS) && (conn->remote_port == PORT_HTTP)) ) (!(conn->protocol&PROT_HTTPS) && (conn->remote_port == PORT_HTTP)) )
@@ -602,10 +606,14 @@ CURLcode Curl_http(struct connectdata *conn)
(data->bits.http_post || data->bits.http_formpost)?"POST": (data->bits.http_post || data->bits.http_formpost)?"POST":
(data->bits.http_put)?"PUT":"GET"), (data->bits.http_put)?"PUT":"GET"),
ppath, ppath,
(data->bits.proxy_user_passwd && conn->allocptr.proxyuserpwd)?conn->allocptr.proxyuserpwd:"", (conn->bits.proxy_user_passwd &&
(data->bits.user_passwd && conn->allocptr.userpwd)?conn->allocptr.userpwd:"", conn->allocptr.proxyuserpwd)?conn->allocptr.proxyuserpwd:"",
(data->bits.set_range && conn->allocptr.rangeline)?conn->allocptr.rangeline:"", (conn->bits.user_passwd && conn->allocptr.userpwd)?
(data->useragent && *data->useragent && conn->allocptr.uagent)?conn->allocptr.uagent:"", conn->allocptr.userpwd:"",
(data->bits.set_range && conn->allocptr.rangeline)?
conn->allocptr.rangeline:"",
(data->useragent && *data->useragent && conn->allocptr.uagent)?
conn->allocptr.uagent:"",
(conn->allocptr.cookie?conn->allocptr.cookie:""), /* Cookie: <data> */ (conn->allocptr.cookie?conn->allocptr.cookie:""), /* Cookie: <data> */
(conn->allocptr.host?conn->allocptr.host:""), /* Host: host */ (conn->allocptr.host?conn->allocptr.host:""), /* Host: host */
http->p_pragma?http->p_pragma:"", http->p_pragma?http->p_pragma:"",


@@ -115,10 +115,15 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
ch->hexindex++; ch->hexindex++;
} }
else { else {
return 1; /* longer hex than we support */ return CHUNKE_TOO_LONG_HEX; /* longer hex than we support */
} }
} }
else { else {
if(0 == ch->hexindex) {
/* This is illegal data, we received junk where we expected
a hexadecimal digit. */
return CHUNKE_ILLEGAL_HEX;
}
/* length and datap are unmodified */ /* length and datap are unmodified */
ch->hexbuffer[ch->hexindex]=0; ch->hexbuffer[ch->hexindex]=0;
ch->datasize=strtoul(ch->hexbuffer, NULL, 16); ch->datasize=strtoul(ch->hexbuffer, NULL, 16);
@@ -127,7 +132,9 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
break; break;
case CHUNK_POSTHEX: case CHUNK_POSTHEX:
/* just a lame state waiting for CRLF to arrive */ /* In this state, we're waiting for CRLF to arrive. We support
this to allow so called chunk-extensions to show up here
before the CRLF comes. */
if(*datap == '\r') if(*datap == '\r')
ch->state = CHUNK_CR; ch->state = CHUNK_CR;
length--; length--;
@@ -174,10 +181,34 @@ CHUNKcode Curl_httpchunk_read(struct connectdata *conn,
length -= piece; /* decrease space left in this round */ length -= piece; /* decrease space left in this round */
if(0 == ch->datasize) if(0 == ch->datasize)
/* end of data this round, go back to get a new size */ /* end of data this round, we now expect a trailing CRLF */
Curl_httpchunk_init(conn); ch->state = CHUNK_POSTCR;
break; break;
case CHUNK_POSTCR:
if(*datap == '\r') {
ch->state = CHUNK_POSTLF;
datap++;
length--;
}
else
return CHUNKE_BAD_CHUNK;
break;
case CHUNK_POSTLF:
if(*datap == '\n') {
/*
* The last one before we go back to hex state and start all
* over.
*/
Curl_httpchunk_init(conn);
datap++;
length--;
}
else
return CHUNKE_BAD_CHUNK;
break;
case CHUNK_STOP: case CHUNK_STOP:
/* If we arrive here, there is data left in the end of the buffer /* If we arrive here, there is data left in the end of the buffer
even if there's no more chunks to read */ even if there's no more chunks to read */


@@ -24,13 +24,13 @@
  *****************************************************************************/

 /*
  * The longest possible hexadecimal number we support in a chunked transfer.
- * Weird enoug, RFC2616 doesn't set a maximum size! Since we use strtoul()
+ * Weird enough, RFC2616 doesn't set a maximum size! Since we use strtoul()
  * to convert it, we "only" support 2^32 bytes chunk data.
  */
 #define MAXNUM_SIZE 16

 typedef enum {
-  CHUNK_LOST, /* never use */
+  CHUNK_FIRST, /* never use */

   /* In this we await and buffer all hexadecimal digits until we get one
      that isn't a hexadecimal digit. When done, we go POSTHEX */
@@ -45,10 +45,17 @@ typedef enum {
      If the size given was zero, we set state to STOP and return. */
   CHUNK_CR,

-  /* We eat the amount of data specified. When done, we move back to the
-     HEX state. */
+  /* We eat the amount of data specified. When done, we move on to the
+     POST_CR state. */
   CHUNK_DATA,

+  /* POSTCR should get a CR and nothing else, then move to POSTLF */
+  CHUNK_POSTCR,
+
+  /* POSTLF should get a LF and nothing else, then move back to HEX as
+     the CRLF combination marks the end of a chunk */
+  CHUNK_POSTLF,
+
   /* This is mainly used to really mark that we're out of the game.
      NOTE: that there's a 'dataleft' field in the struct that will tell how
      many bytes that were not passed to the client in the end of the last
@@ -62,6 +69,8 @@ typedef enum {
   CHUNKE_STOP = -1,
   CHUNKE_OK = 0,
   CHUNKE_TOO_LONG_HEX = 1,
+  CHUNKE_ILLEGAL_HEX,
+  CHUNKE_BAD_CHUNK,
   CHUNKE_WRITE_ERROR,
   CHUNKE_STATE_ERROR,
   CHUNKE_LAST


@@ -70,6 +70,11 @@
 #include "inet_ntoa_r.h"
 #endif

+/* The last #include file should be: */
+#ifdef MALLOCDEBUG
+#include "memdebug.h"
+#endif
+
 #define SYS_ERROR -1

 char *Curl_if2ip(char *interface, char *buf, int buf_size)
@@ -90,6 +95,7 @@ char *Curl_if2ip(char *interface, char *buf, int buf_size)
     strcpy(req.ifr_name, interface);
     req.ifr_addr.sa_family = AF_INET;
     if (SYS_ERROR == ioctl(dummy, SIOCGIFADDR, &req, sizeof(req))) {
+      sclose(dummy);
       return NULL;
     }
     else {
@@ -104,7 +110,7 @@ char *Curl_if2ip(char *interface, char *buf, int buf_size)
       ip[buf_size - 1] = 0;
 #endif
     }
-    close(dummy);
+    sclose(dummy);
   }
   return ip;
 }
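
The added sclose() call closes the probe socket on the error path too, so the descriptor is no longer leaked when the ioctl fails. A sketch of the same SIOCGIFADDR lookup with both exit paths releasing the descriptor, assuming a Linux/POSIX socket API (plain close() here, since the memdebug wrapper is libcurl-specific):

    #include <arpa/inet.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Resolve an interface name to its IPv4 address, closing the probe
       socket on every exit path. */
    static int if2ip(const char *ifname, char *buf, size_t bufsize)
    {
      struct ifreq req;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      if(fd < 0)
        return -1;

      memset(&req, 0, sizeof(req));
      strncpy(req.ifr_name, ifname, IFNAMSIZ - 1);
      req.ifr_addr.sa_family = AF_INET;

      if(ioctl(fd, SIOCGIFADDR, &req) < 0) {
        close(fd);          /* the fix above: don't leak the descriptor */
        return -1;
      }
      snprintf(buf, bufsize, "%s",
               inet_ntoa(((struct sockaddr_in *)&req.ifr_addr)->sin_addr));
      close(fd);
      return 0;
    }

    int main(void)
    {
      char ip[64];
      if(if2ip("lo", ip, sizeof(ip)) == 0)
        printf("lo is %s\n", ip);
      return 0;
    }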


@@ -2,41 +2,30 @@
 ; Definition file for the DLL version of the LIBCURL library from curl
 ;
-LIBRARY CURL
+LIBRARY LIBCURL
 DESCRIPTION 'curl libcurl - http://curl.haxx.se'
 EXPORTS
-    curl_close @ 1 ;
-    curl_connect @ 2 ;
-    curl_disconnect @ 3 ;
-    curl_do @ 4 ;
-    curl_done @ 5 ;
-    curl_easy_cleanup @ 6 ;
-    curl_easy_getinfo @ 7 ;
-    curl_easy_init @ 8 ;
-    curl_easy_perform @ 9 ;
-    curl_easy_setopt @ 10 ;
-    curl_escape @ 11 ;
-    curl_formparse @ 12 ;
-    curl_free @ 13 ;
-    curl_getdate @ 14 ;
-    curl_getenv @ 15 ;
-    curl_init @ 16 ;
-    curl_open @ 17 ;
-    curl_read @ 18 ;
-    curl_setopt @ 19 ;
-    curl_slist_append @ 20 ;
-    curl_slist_free_all @ 21 ;
-    curl_transfer @ 22 ;
-    curl_unescape @ 23 ;
-    curl_version @ 24 ;
-    curl_write @ 25 ;
-    curl_maprintf @ 26 ;
-    curl_mfprintf @ 27 ;
-    curl_mprintf @ 28 ;
-    curl_msprintf @ 29 ;
-    curl_msnprintf @ 30 ;
-    curl_mvfprintf @ 31 ;
-    Curl_strequal @ 32 ;
-    Curl_strnequal @ 33 ;
+    curl_easy_cleanup @ 1 ;
+    curl_easy_getinfo @ 2 ;
+    curl_easy_init @ 3 ;
+    curl_easy_perform @ 4 ;
+    curl_easy_setopt @ 5 ;
+    curl_escape @ 6 ;
+    curl_formparse @ 7 ;
+    curl_formfree @ 8 ;
+    curl_getdate @ 9 ;
+    curl_getenv @ 10 ;
+    curl_slist_append @ 11 ;
+    curl_slist_free_all @ 12 ;
+    curl_unescape @ 13 ;
+    curl_version @ 14 ;
+    curl_maprintf @ 15 ;
+    curl_mfprintf @ 16 ;
+    curl_mprintf @ 17 ;
+    curl_msprintf @ 18 ;
+    curl_msnprintf @ 19 ;
+    curl_mvfprintf @ 20 ;
+    curl_strequal @ 21 ;
+    curl_strnequal @ 22 ;


@@ -120,7 +120,7 @@ int curl_socket(int domain, int type, int protocol, int line, char *source)
   return sockfd;
 }

-int curl_accept(int s, struct sockaddr *addr, int *addrlen,
+int curl_accept(int s, struct sockaddr *addr, socklen_t *addrlen,
                 int line, char *source)
 {
   int sockfd=(accept)(s, addr, addrlen);


@@ -13,7 +13,7 @@ void curl_memdebug(char *logname);
 /* file descriptor manipulators */
 int curl_socket(int domain, int type, int protocol, int, char *);
 int curl_sclose(int sockfd, int, char *);
-int curl_accept(int s, struct sockaddr *addr, int *addrlen,
+int curl_accept(int s, struct sockaddr *addr, socklen_t *addrlen,
                 int line, char *source);

 /* FILE functions */
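
The wrapper now takes socklen_t, matching the POSIX accept() prototype. A minimal sketch of such a logging wrapper (names are illustrative, not libcurl's):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Same shape as the curl_accept() wrapper: call through to accept()
       and record where in the source the call was made. */
    int logging_accept(int s, struct sockaddr *addr, socklen_t *addrlen,
                       int line, const char *source)
    {
      int fd = accept(s, addr, addrlen);
      fprintf(stderr, "%s:%d accept() -> %d\n", source, line, fd);
      return fd;
    }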


@@ -27,10 +27,26 @@
 #include <stdlib.h>
 #include <string.h>

+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+#ifdef HAVE_UNISTD_H
+#include <unistd.h>
+#endif
+#ifdef HAVE_PWD_H
+#include <pwd.h>
+#endif
+
 #include <curl/curl.h>
 #include "strequal.h"

+/* The last #include file should be: */
+#ifdef MALLOCDEBUG
+#include "memdebug.h"
+#endif
+
 /* Debug this single source file with:
    'make netrc' then run './netrc'!
@@ -60,7 +76,7 @@ int Curl_parsenetrc(char *host,
   char netrcbuffer[256];
   int retcode=1;

-  char *home = curl_getenv("HOME"); /* portable environment reader */
+  char *home = NULL;
   int state=NOTHING;

   char state_login=0;
@@ -68,10 +84,24 @@ int Curl_parsenetrc(char *host,
 #define NETRC DOT_CHAR "netrc"

-  if(!home)
+#if defined(HAVE_GETPWUID) && defined(HAVE_GETEUID)
+  struct passwd *pw;
+  pw= getpwuid(geteuid());
+  if (pw)
+    home = pw->pw_dir;
+#else
+  void *pw=NULL;
+#endif
+
+  if(NULL == pw) {
+    home = curl_getenv("HOME"); /* portable environment reader */
+    if(!home) {
       return -1;
+    }
+  }

   if(strlen(home)>(sizeof(netrcbuffer)-strlen(NETRC))) {
+    if(NULL==pw)
       free(home);
     return -1;
   }
@@ -140,6 +170,7 @@ int Curl_parsenetrc(char *host,
     fclose(file);
   }

+  if(NULL==pw)
     free(home);

   return retcode;
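
The rewritten lookup prefers the password database over $HOME and only frees the string when it came from the environment reader. A standalone sketch of that ordering, using plain getenv()+strdup() in place of curl_getenv():

    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Find the home directory to read .netrc from: password database first,
       $HOME as the fallback. Only the strdup()'ed fallback must be freed. */
    static char *netrc_home(int *must_free)
    {
      struct passwd *pw = getpwuid(geteuid());
      *must_free = 0;
      if(pw && pw->pw_dir)
        return pw->pw_dir;            /* owned by the passwd database */
      *must_free = 1;
      return getenv("HOME") ? strdup(getenv("HOME")) : NULL;
    }

    int main(void)
    {
      int must_free;
      char *home = netrc_home(&must_free);
      if(home) {
        printf("would read %s/.netrc\n", home);
        if(must_free)
          free(home);
      }
      return 0;
    }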


@@ -120,7 +120,7 @@ void curl_slist_free_all(struct curl_slist *list)
 }

-/* infof() is for info message along the way */
+/* Curl_infof() is for info message along the way */

 void Curl_infof(struct UrlData *data, char *fmt, ...)
 {
@@ -133,7 +133,7 @@ void Curl_infof(struct UrlData *data, char *fmt, ...)
   }
 }

-/* failf() is for messages stating why we failed, the LAST one will be
+/* Curl_failf() is for messages stating why we failed, the LAST one will be
    returned for the user (if requested) */

 void Curl_failf(struct UrlData *data, char *fmt, ...)
@@ -142,7 +142,7 @@ void Curl_failf(struct UrlData *data, char *fmt, ...)
   va_start(ap, fmt);
   if(data->errorbuffer)
     vsnprintf(data->errorbuffer, CURL_ERROR_SIZE, fmt, ap);
-  else {
+  else if(!data->bits.mute) {
     /* no errorbuffer receives this, write to data->err instead */
     vfprintf(data->err, fmt, ap);
     fprintf(data->err, "\n");
@@ -213,23 +213,6 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
   return CURLE_OK;
 }

-/*
- * External write-function, writes to the data-socket.
- * Takes care of plain sockets, SSL or kerberos transparently.
- */
-CURLcode curl_write(CURLconnect *c_conn, char *buf, size_t amount,
-                    size_t *n)
-{
-  struct connectdata *conn = (struct connectdata *)c_conn;
-  if(!n || !conn || (conn->handle != STRUCT_CONNECT))
-    return CURLE_FAILED_INIT;
-
-  return Curl_write(conn, conn->sockfd, buf, amount, n);
-}
-
 /* client_write() sends data to the write callback(s)

    The bit pattern defines to what "streams" to write to. Body and/or header.
@@ -299,19 +282,3 @@ CURLcode Curl_read(struct connectdata *conn, int sockfd,

   return CURLE_OK;
 }
-
-/*
- * The public read function reads from the 'sockfd' file descriptor only.
- * Use the Curl_read() internally when you want to specify fd.
- */
-CURLcode curl_read(CURLconnect *c_conn, char *buf, size_t buffersize,
-                   ssize_t *n)
-{
-  struct connectdata *conn = (struct connectdata *)c_conn;
-  if(!n || !conn || (conn->handle != STRUCT_CONNECT))
-    return CURLE_FAILED_INIT;
-
-  return Curl_read(conn, conn->sockfd, buf, buffersize, n);
-}
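
With this change Curl_failf() formats into the caller's error buffer when one is set and otherwise writes to the error stream unless the session is muted. A self-contained sketch of that varargs pattern (hypothetical struct, not libcurl's UrlData):

    #include <stdarg.h>
    #include <stdio.h>

    #define ERRBUF_SIZE 256

    struct session {
      char *errorbuffer;  /* user-provided buffer, or NULL */
      int mute;           /* nonzero: stay quiet when no buffer is set */
      FILE *err;          /* where unsolicited errors go */
    };

    /* Format the message once, into whichever sink applies. */
    static void failf(struct session *s, const char *fmt, ...)
    {
      va_list ap;
      va_start(ap, fmt);
      if(s->errorbuffer)
        vsnprintf(s->errorbuffer, ERRBUF_SIZE, fmt, ap);
      else if(!s->mute) {
        vfprintf(s->err, fmt, ap);
        fputc('\n', s->err);
      }
      va_end(ap);
    }

    int main(void)
    {
      char buf[ERRBUF_SIZE];
      struct session s = { buf, 0, stderr };
      failf(&s, "connect to %s failed", "example.com");
      printf("stored error: %s\n", buf);
      return 0;
    }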


@@ -24,6 +24,7 @@
 #include "setup.h"

 #include <stdio.h>
+#include <string.h>

 #if defined(__MINGW32__)
 #include <winsock.h>
 #endif


@@ -80,34 +80,39 @@ int random_the_seed(struct connectdata *conn)
 {
   char *buf = conn->data->buffer; /* point to the big buffer */
   int nread=0;
+  struct UrlData *data=conn->data;

   /* Q: should we add support for a random file name as a libcurl option?
-     A: Yes */
-#if 0
-  /* something like this */
-  nread += RAND_load_file(filename, number_of_bytes);
+     A: Yes, it is here */
+#ifndef RANDOM_FILE
+  /* if RANDOM_FILE isn't defined, we only perform this if an option tells
+     us to! */
+  if(data->ssl.random_file)
+#define RANDOM_FILE "" /* doesn't matter won't be used */
 #endif
-  /* generates a default path for the random seed file */
-  buf[0]=0; /* blank it first */
-  RAND_file_name(buf, BUFSIZE);
-  if ( buf[0] ) {
-    /* we got a file name to try */
-    nread += RAND_load_file(buf, 16384);
+  {
+    /* let the option override the define */
+    nread += RAND_load_file((data->ssl.random_file?
+                             data->ssl.random_file:RANDOM_FILE),
+                            16384);
     if(seed_enough(conn, nread))
       return nread;
   }

-#ifdef RANDOM_FILE
-  nread += RAND_load_file(RANDOM_FILE, 16384);
-  if(seed_enough(conn, nread))
-    return nread;
-#endif
-
-#if defined(HAVE_RAND_EGD) && defined(EGD_SOCKET)
+#if defined(HAVE_RAND_EGD)
   /* only available in OpenSSL 0.9.5 and later */
-  /* EGD_SOCKET is set at configure time */
+  /* EGD_SOCKET is set at configure time or not at all */
+#ifndef EGD_SOCKET
+  /* If we don't have the define set, we only do this if the egd-option
+     is set */
+  if(data->ssl.egdsocket)
+#define EGD_SOCKET "" /* doesn't matter won't be used */
+#endif
   {
-    int ret = RAND_egd(EGD_SOCKET);
+    /* If there's an option and a define, the option overrides the
+       define */
+    int ret = RAND_egd(data->ssl.egdsocket?data->ssl.egdsocket:EGD_SOCKET);
     if(-1 != ret) {
       nread += ret;
       if(seed_enough(conn, nread))
@@ -133,7 +138,17 @@ int random_the_seed(struct connectdata *conn)
     RAND_seed(area, len);
     free(area); /* now remove the random junk */
+  }
 #endif
+
+  /* generates a default path for the random seed file */
+  buf[0]=0; /* blank it first */
+  RAND_file_name(buf, BUFSIZE);
+  if ( buf[0] ) {
+    /* we got a file name to try */
+    nread += RAND_load_file(buf, 16384);
+    if(seed_enough(conn, nread))
+      return nread;
   }

   infof(conn->data, "Your connection is using a weak random seed!\n");
@@ -343,7 +358,7 @@ Curl_SSLConnect(struct connectdata *conn)
   X509_free(conn->ssl.server_cert);
 #else /* USE_SSLEAY */
   /* this is for "-ansi -Wall -pedantic" to stop complaining! (rabe) */
-  (void) data;
+  (void) conn;
 #endif
   return 0;
 }
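
The reworked seeding lets run-time options (random_file, egdsocket) override the compile-time RANDOM_FILE/EGD_SOCKET defaults. A reduced sketch of the same idea against the OpenSSL RAND API, assuming RAND_load_file()/RAND_file_name()/RAND_status() are available (the EGD branch is left out since RAND_egd() is not present in all builds):

    #include <openssl/rand.h>
    #include <stdio.h>

    /* Seed OpenSSL's PRNG from an optional user-supplied file first, then
       from the default location, stopping once the pool looks good. */
    static int seed_prng(const char *user_random_file)
    {
      char path[256];
      int nread = 0;

      if(user_random_file)                 /* option overrides any default */
        nread += RAND_load_file(user_random_file, 16384);
      if(RAND_status())
        return nread;

      path[0] = 0;
      RAND_file_name(path, sizeof(path));  /* e.g. $HOME/.rnd */
      if(path[0])
        nread += RAND_load_file(path, 16384);
      return nread;
    }

    int main(void)
    {
      int n = seed_prng(NULL);
      printf("read %d bytes, PRNG %sseeded\n", n, RAND_status() ? "" : "not ");
      return 0;
    }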


@@ -25,7 +25,7 @@
 #include <string.h>

-int Curl_strequal(const char *first, const char *second)
+int curl_strequal(const char *first, const char *second)
 {
 #if defined(HAVE_STRCASECMP)
   return !strcasecmp(first, second);
@@ -45,7 +45,7 @@ int Curl_strequal(const char *first, const char *second)
 #endif
 }

-int Curl_strnequal(const char *first, const char *second, size_t max)
+int curl_strnequal(const char *first, const char *second, size_t max)
 {
 #if defined(HAVE_STRCASECMP)
   return !strncasecmp(first, second, max);


@@ -22,10 +22,14 @@
  *
  * $Id$
  *****************************************************************************/

-int Curl_strequal(const char *first, const char *second);
-int Curl_strnequal(const char *first, const char *second, size_t max);
-
-#define strequal(a,b) Curl_strequal(a,b)
-#define strnequal(a,b,c) Curl_strnequal(a,b,c)
+/*
+ * These two actually are public functions.
+ */
+int curl_strequal(const char *first, const char *second);
+int curl_strnequal(const char *first, const char *second, size_t max);
+
+#define strequal(a,b) curl_strequal(a,b)
+#define strnequal(a,b,c) curl_strnequal(a,b,c)

 #endif
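
curl_strequal()/curl_strnequal() wrap strcasecmp()/strncasecmp() where configure finds them; otherwise the library falls back to comparing character by character. A sketch of that portable fallback:

    #include <ctype.h>
    #include <stdio.h>

    /* Case-insensitive string equality without strcasecmp(). */
    static int my_strequal(const char *a, const char *b)
    {
      while(*a && *b) {
        if(toupper((unsigned char)*a) != toupper((unsigned char)*b))
          return 0;
        a++;
        b++;
      }
      return *a == *b;   /* both strings must end at the same place */
    }

    int main(void)
    {
      printf("%d\n", my_strequal("Content-Length", "content-length")); /* 1 */
      printf("%d\n", my_strequal("Host", "Hostname"));                 /* 0 */
      return 0;
    }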


@@ -82,6 +82,11 @@
 #include "arpa_telnet.h"

+/* The last #include file should be: */
+#ifdef MALLOCDEBUG
+#include "memdebug.h"
+#endif
+
 #define SUBBUFSIZE 512

 #define SB_CLEAR(x)  x->subpointer = x->subbuffer;
@@ -745,7 +750,7 @@ static int check_telnet_options(struct connectdata *conn)
   /* Add the user name as an environment variable if it
      was given on the command line */
-  if(data->bits.user_passwd)
+  if(conn->bits.user_passwd)
   {
     char *buf = malloc(256);
     sprintf(buf, "USER,%s", data->user);


@@ -53,7 +53,7 @@ gettimeofday (struct timeval *tp, void *nothing)
 #endif
 #endif

-struct timeval Curl_tvnow ()
+struct timeval Curl_tvnow (void)
 {
   struct timeval now;
 #ifdef HAVE_GETTIMEOFDAY
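
Curl_tvnow(void) wraps gettimeofday() where available so the transfer loop can compute elapsed time for timeouts and the progress meter. A sketch of the wrapper plus a millisecond-difference helper, assuming POSIX gettimeofday():

    #include <stdio.h>
    #include <sys/time.h>

    /* Capture "now" as a timeval. */
    static struct timeval tvnow(void)
    {
      struct timeval now;
      gettimeofday(&now, NULL);
      return now;
    }

    /* Elapsed milliseconds between two captures. */
    static long tvdiff_ms(struct timeval newer, struct timeval older)
    {
      return (newer.tv_sec - older.tv_sec) * 1000 +
             (newer.tv_usec - older.tv_usec) / 1000;
    }

    int main(void)
    {
      struct timeval start = tvnow();
      struct timeval later = tvnow();
      printf("elapsed: %ld ms\n", tvdiff_ms(later, start));
      return 0;
    }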


@@ -107,7 +107,7 @@
    <butlerm@xmission.com>. */

 CURLcode static
-_Transfer(struct connectdata *c_conn)
+Transfer(struct connectdata *c_conn)
 {
   ssize_t nread;      /* number of bytes read */
   int bytecount = 0;  /* total number of bytes read */
@@ -127,7 +127,7 @@ _Transfer(struct connectdata *c_conn)
   bool content_range = FALSE; /* set TRUE if Content-Range: was found */
   int offset = 0;       /* possible resume offset read from the
                            Content-Range: header */
-  int code = 0;         /* error code from the 'HTTP/1.? XXX' line */
+  int httpcode = 0;     /* error code from the 'HTTP/1.? XXX' line */
   int httpversion = -1; /* the last digit in the HTTP/1.1 string */

   /* for the low speed checks: */
@@ -142,9 +142,6 @@ _Transfer(struct connectdata *c_conn)
   char *buf;
   int maxfd;

-  if(!conn || (conn->handle != STRUCT_CONNECT))
-    return CURLE_BAD_FUNCTION_ARGUMENT;
-
   data = conn->data; /* there's the root struct */
   buf = data->buffer;
   maxfd = (conn->sockfd>conn->writesockfd?conn->sockfd:conn->writesockfd)+1;
@@ -184,7 +181,7 @@ _Transfer(struct connectdata *c_conn)
     int keepon=0;

     /* timeout every X second
-       - makes a better progressmeter (i.e even when no data is read, the
+       - makes a better progress meter (i.e even when no data is read, the
          meter can be updated and reflect reality)
        - allows removal of the alarm() crap
        - variable timeout is easier
@@ -313,8 +310,11 @@ _Transfer(struct connectdata *c_conn)
           /* we now have a full line that p points to */
           if (('\n' == *p) || ('\r' == *p)) {
             /* Zero-length line means end of header! */
+#if 0
             if (-1 != conn->size) /* if known */
-              conn->size += bytecount; /* we append the already read size */
+              conn->size += bytecount; /* we append the already read
+                                          size */
+#endif

             if ('\r' == *p)
@@ -324,6 +324,19 @@ _Transfer(struct connectdata *c_conn)
 #if 0 /* headers are not included in the size */
             Curl_pgrsSetDownloadSize(data, conn->size);
 #endif
+            if(100 == httpcode) {
+              /*
+               * we have made a HTTP PUT or POST and this is 1.1-lingo
+               * that tells us that the server is OK with this and ready
+               * to receive our stuff.
+               * However, we'll get more headers now so we must get
+               * back into the header-parsing state!
+               */
+              header = TRUE;
+              headerline = 0; /* we restart the header line counter */
+            }
+            else
               header = FALSE; /* no more header to parse! */

             /* now, only output this if the header AND body are requested:
@@ -339,7 +352,7 @@ _Transfer(struct connectdata *c_conn)
             data->header_size += p - data->headerbuff;

+            if(!header) {
             /*
              * end-of-headers.
              *
@@ -349,19 +362,28 @@ _Transfer(struct connectdata *c_conn)
              */
             if(!conn->bits.close && data->bits.no_body)
               return CURLE_OK;
             break; /* exit header line loop */
             }
+
+            /* We continue reading headers, so reset the line-based
+               header parsing variables hbufp && hbuflen */
+            hbufp = data->headerbuff;
+            hbuflen = 0;
+            continue;
+          }

           if (!headerline++) {
             /* This is the first header, it MUST be the error code line
                or else we consiser this to be the body right away! */
-            if (2 == sscanf (p, " HTTP/1.%d %3d", &httpversion, &code)) {
+            if (2 == sscanf (p, " HTTP/1.%d %3d", &httpversion,
+                             &httpcode)) {
               /* 404 -> URL not found! */
               if (
-                  ( ((data->bits.http_follow_location) && (code >= 400))
+                  ( ((data->bits.http_follow_location) &&
+                     (httpcode >= 400))
                    ||
-                   (!data->bits.http_follow_location && (code >= 300)))
+                   (!data->bits.http_follow_location &&
+                    (httpcode >= 300)))
                   && (data->bits.http_fail_on_error)) {
                 /* If we have been told to fail hard on HTTP-errors,
                    here is the check for that: */
@@ -369,7 +391,13 @@ _Transfer(struct connectdata *c_conn)
                 failf (data, "The requested file was not found");
                 return CURLE_HTTP_NOT_FOUND;
               }
-              data->progress.httpcode = code;
+              data->progress.httpcode = httpcode;
+              data->progress.httpversion = httpversion;
+              if(httpversion == 0)
+                /* Default action for HTTP/1.0 must be to close, unless
+                   we get one of those fancy headers that tell us the
+                   server keeps it open for us! */
+                conn->bits.close = TRUE;
             }
             else {
               header = FALSE; /* this is not a header line */
@@ -382,6 +410,19 @@ _Transfer(struct connectdata *c_conn)
               conn->size = contentlength;
               Curl_pgrsSetDownloadSize(data, contentlength);
             }
+            else if((httpversion == 0) &&
+                    conn->bits.httpproxy &&
+                    strnequal("Proxy-Connection: keep-alive", p,
+                              strlen("Proxy-Connection: keep-alive"))) {
+              /*
+               * When a HTTP/1.0 reply comes when using a proxy, the
+               * 'Proxy-Connection: keep-alive' line tells us the
+               * connection will be kept alive for our pleasure.
+               * Default action for 1.0 is to close.
+               */
+              conn->bits.close = FALSE; /* don't close when done */
+              infof(data, "HTTP/1.0 proxy connection set to keep alive!\n");
+            }
             else if (strnequal("Connection: close", p,
                                strlen("Connection: close"))) {
               /*
@@ -431,7 +472,7 @@ _Transfer(struct connectdata *c_conn)
               if(data->bits.get_filetime)
                 data->progress.filetime = timeofdoc;
             }
-            else if ((code >= 300 && code < 400) &&
+            else if ((httpcode >= 300 && httpcode < 400) &&
                      (data->bits.http_follow_location) &&
                      strnequal("Location: ", p, 10)) {
               /* this is the URL that the server advices us to get instead */
@@ -446,7 +487,7 @@ _Transfer(struct connectdata *c_conn)
                 ptr++;
               backup = *ptr; /* store the ending letter */
               *ptr = '\0';   /* zero terminate */
-              data->newurl = strdup(start); /* clone string */
+              conn->newurl = strdup(start); /* clone string */
               *ptr = backup; /* restore ending letter */
             }
@@ -490,9 +531,9 @@ _Transfer(struct connectdata *c_conn)
              write a piece of the body */
           if(conn->protocol&PROT_HTTP) {
             /* HTTP-only checks */
-            if (data->newurl) {
+            if (conn->newurl) {
               /* abort after the headers if "follow Location" is set */
-              infof (data, "Follow to new URL: %s\n", data->newurl);
+              infof (data, "Follow to new URL: %s\n", conn->newurl);
               return CURLE_OK;
             }
             else if (data->resume_from &&
@@ -530,7 +571,8 @@ _Transfer(struct connectdata *c_conn)
                 } /* switch */
               } /* two valid time strings */
             } /* we have a time condition */

-            if(!conn->bits.close) {
+            if(!conn->bits.close && (httpversion == 1)) {
               /* If this is not the last request before a close, we must
                  set the maximum download size to the size of the expected
                  document or else, we won't know when to stop reading! */
@@ -554,15 +596,17 @@ _Transfer(struct connectdata *c_conn)
             CHUNKcode res =
               Curl_httpchunk_read(conn, str, nread, &nread);

-            if(CHUNKE_OK < res)
+            if(CHUNKE_OK < res) {
+              failf(data, "Receeived problem in the chunky parser");
               return CURLE_READ_ERROR;
+            }
             else if(CHUNKE_STOP == res) {
               /* we're done reading chunks! */
               keepon &= ~KEEP_READ; /* read no more */

-              /* There are now possibly bytes at the end of the str buffer
-                 that weren't written to the client, but we don't care
-                 about them right now. */
+              /* There are now possibly N number of bytes at the end of the
+                 str buffer that weren't written to the client, but we don't
+                 care about them right now. */
             }
             /* If it returned OK, we just keep going */
           }
@@ -670,6 +714,11 @@ _Transfer(struct connectdata *c_conn)
             contentlength-bytecount);
       return CURLE_PARTIAL_FILE;
     }
+    else if(conn->bits.chunk && conn->proto.http->chunk.datasize) {
+      failf(data, "transfer closed with at least %d bytes remaining",
+            conn->proto.http->chunk.datasize);
+      return CURLE_PARTIAL_FILE;
+    }

     if(Curl_pgrsUpdate(data))
       return CURLE_ABORTED_BY_CALLBACK;
@@ -681,27 +730,27 @@ _Transfer(struct connectdata *c_conn)
   return CURLE_OK;
 }

-CURLcode curl_transfer(CURL *curl)
+CURLcode Curl_perform(CURL *curl)
 {
   CURLcode res;
-  struct UrlData *data = curl;
-  struct connectdata *c_connect=NULL;
+  struct UrlData *data = (struct UrlData *)curl;
+  struct connectdata *conn=NULL;
   bool port=TRUE; /* allow data->use_port to set port to use */

   Curl_pgrsStartNow(data);

   do {
     Curl_pgrsTime(data, TIMER_STARTSINGLE);
-    res = curl_connect(curl, (CURLconnect **)&c_connect, port);
+    res = Curl_connect(data, &conn, port);
     if(res == CURLE_OK) {
-      res = curl_do(c_connect);
+      res = Curl_do(conn);
       if(res == CURLE_OK) {
-        res = _Transfer(c_connect); /* now fetch that URL please */
+        res = Transfer(conn); /* now fetch that URL please */
         if(res == CURLE_OK)
-          res = curl_done(c_connect);
+          res = Curl_done(conn);
       }

-      if((res == CURLE_OK) && data->newurl) {
+      if((res == CURLE_OK) && conn->newurl) {
         /* Location: redirect

            This is assumed to happen for HTTP(S) only!
@@ -741,7 +790,7 @@ CURLcode curl_transfer(CURL *curl)
           data->bits.http_set_referer = TRUE;  /* might have been false */
         }

-        if(2 != sscanf(data->newurl, "%15[^:]://%c", prot, &letter)) {
+        if(2 != sscanf(conn->newurl, "%15[^:]://%c", prot, &letter)) {
           /***
            *DANG* this is an RFC 2068 violation. The URL is supposed
            to be absolute and this doesn't seem to be that!
@@ -766,7 +815,7 @@ CURLcode curl_transfer(CURL *curl)
             protsep+=2; /* pass the slashes */
           }

-          if('/' != data->newurl[0]) {
+          if('/' != conn->newurl[0]) {
             /* First we need to find out if there's a ?-letter in the URL,
                and cut it and the right-side of that off */
             pathsep = strrchr(protsep, '?');
@@ -789,14 +838,14 @@ CURLcode curl_transfer(CURL *curl)
           newest=(char *)malloc( strlen(data->url) +
                                  1 + /* possible slash */
-                                 strlen(data->newurl) + 1/* zero byte */);
+                                 strlen(conn->newurl) + 1/* zero byte */);

           if(!newest)
             return CURLE_OUT_OF_MEMORY;
-          sprintf(newest, "%s%s%s", data->url, ('/' == data->newurl[0])?"":"/",
-                  data->newurl);
-          free(data->newurl);
-          data->newurl = newest;
+          sprintf(newest, "%s%s%s", data->url, ('/' == conn->newurl[0])?"":"/",
+                  conn->newurl);
+          free(conn->newurl);
+          conn->newurl = newest;
         }
         else {
           /* This is an absolute URL, don't use the custom port number */
@@ -807,8 +856,8 @@ CURLcode curl_transfer(CURL *curl)
         free(data->url);

         /* TBD: set the URL with curl_setopt() */
-        data->url = data->newurl;
-        data->newurl = NULL; /* don't show! */
+        data->url = conn->newurl;
+        conn->newurl = NULL; /* don't show! */
         data->bits.urlstringalloc = TRUE; /* the URL is allocated */

         infof(data, "Follows Location: to new URL: '%s'\n", data->url);
@@ -867,8 +916,10 @@ CURLcode curl_transfer(CURL *curl)
   } while(1); /* loop if Location: */

-  if(data->newurl)
-    free(data->newurl);
+  if(conn->newurl) {
+    free(conn->newurl);
+    conn->newurl = NULL;
+  }

   return res;
 }
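
Because of the new 100-continue handling, a PUT or POST may now see two header blocks before any body arrives, and the parser re-enters its header state on code 100. A standalone sketch of that control flow over a pre-read response buffer (simplified: real responses arrive incrementally):

    #include <stdio.h>
    #include <string.h>

    /* Skip an optional "HTTP/1.x 100" header block and report the status
       code of the final header block. */
    static int final_status(const char *resp)
    {
      int httpversion, code = -1;

      while(2 == sscanf(resp, "HTTP/1.%d %3d", &httpversion, &code)) {
        const char *end = strstr(resp, "\r\n\r\n"); /* end of this block */
        if(!end || code != 100)
          break;             /* not a 100: this is the real response */
        resp = end + 4;      /* 100-continue: parse the next header block */
      }
      return code;
    }

    int main(void)
    {
      const char reply[] =
        "HTTP/1.1 100 Continue\r\n\r\n"
        "HTTP/1.1 201 Created\r\nContent-Length: 0\r\n\r\n";
      printf("final status: %d\n", final_status(reply));  /* 201 */
      return 0;
    }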


@@ -22,8 +22,9 @@
  *
  * $Id$
  *****************************************************************************/
-CURLcode curl_transfer(CURL *curl);
+CURLcode Curl_perform(CURL *curl);

+/* This sets up a forthcoming transfer */
 CURLcode
 Curl_Transfer (struct connectdata *data,
                int sockfd,           /* socket to read from or -1 */

lib/url.c (736 changed lines): file diff suppressed because it is too large.


@@ -98,27 +98,6 @@
 #define MAX(x,y) ((x)>(y)?(x):(y))
 #endif

-/* Type of handle. All publicly returned 'handles' in the curl interface
-   have a handle first in the struct that describes what kind of handle it
-   is. Used to detect bad handle usage. */
-typedef enum {
-  STRUCT_NONE,
-  STRUCT_OPEN,
-  STRUCT_CONNECT,
-  STRUCT_LAST
-} Handle;
-
-/* Connecting to a remote server using the curl interface is moving through
-   a state machine, this type is used to store the current state */
-typedef enum {
-  CONN_NONE,  /* illegal state */
-  CONN_INIT,  /* curl_connect() has been called */
-  CONN_DO,    /* curl_do() has been called successfully */
-  CONN_DONE,  /* curl_done() has been called successfully */
-  CONN_ERROR, /* and error has occurred */
-  CONN_LAST   /* illegal state */
-} ConnState;
-
 #ifdef KRB4
 /* Types needed for krb4-ftp connections */
 struct krb4buffer {
@@ -152,6 +131,8 @@ struct ssl_config_data {
   long verifypeer;   /* set TRUE if this is desired */
   char *CApath;      /* DOES NOT WORK ON WINDOWS */
   char *CAfile;      /* cerficate to verify peer against */
+  char *random_file; /* path to file containing "random" data */
+  char *egdsocket;   /* path to file containing the EGD daemon socket */
 };

 /****************************************************************************
@@ -201,6 +182,9 @@ struct ConnectBits {
   bool close; /* if set, we close the connection after this request */
   bool reuse; /* if set, this is a re-used connection */
   bool chunk; /* if set, this is a chunked transfer-encoding */
+  bool httpproxy;    /* if set, this transfer is done through a http proxy */
+  bool user_passwd;  /* do we use user+password for this connection? */
+  bool proxy_user_passwd; /* user+password for the proxy? */
 };

 /*
@@ -209,18 +193,10 @@ struct ConnectBits {
  */
 struct connectdata {
   /**** Fields set when inited and not modified again */
-
-  /* To better see what kind of struct that is passed as input, *ALL* publicly
-     returned handles MUST have this initial 'Handle'. */
-  Handle handle; /* struct identifier */
   struct UrlData *data; /* link to the root CURL struct */
   int connectindex; /* what index in the connects index this particular
                        struct has */
-
-  /**** curl_connect() phase fields */
-  ConnState state; /* for state dependent actions */
-
   long protocol; /* PROT_* flags concerning the protocol set */
 #define PROT_MISSING (1<<0)
 #define PROT_GOPHER  (1<<1)
@@ -250,7 +226,11 @@ struct connectdata {
                          not the proxy port! */
   char *ppath;
   long bytecount;
-  struct timeval now; /* current time */
+
+  char *proxyhost; /* name of the http proxy host */
+
+  struct timeval now;     /* "current" time */
+  struct timeval created; /* creation time */

   int firstsocket;     /* the main socket to use */
   int secondarysocket; /* for i.e ftp transfers */
@@ -309,6 +289,9 @@ struct connectdata {
     char *host; /* free later if not NULL */
   } allocptr;

+  char *newurl; /* This can only be set if a Location: was in the
+                   document headers */
+
 #ifdef KRB4
   enum protection_level command_prot;
@@ -366,6 +349,7 @@ struct Progress {
   double t_connect;
   double t_pretransfer;
   int httpcode;
+  int httpversion;
   time_t filetime; /* If requested, this is might get set. It may be 0 if
                       the time was unretrievable */
@@ -409,28 +393,22 @@ struct Configbits {
   bool httpproxy;
   bool mute;
   bool no_body;
-  bool proxy_user_passwd;
   bool set_port;
   bool set_range;
   bool upload;
   bool use_netrc;
-  bool user_passwd;
   bool verbose;
   bool this_is_a_follow; /* this is a followed Location: request */
   bool krb4;             /* kerberos4 connection requested */
   bool proxystringalloc; /* the http proxy string is malloc()'ed */
   bool rangestringalloc; /* the range string is malloc()'ed */
   bool urlstringalloc;   /* the URL string is malloc()'ed */
+  bool reuse_forbid;     /* if this is forbidden to be reused, close
+                            after use */
+  bool reuse_fresh;      /* do not re-use an existing connection for this
+                            transfer */
 };

-/* What type of interface that intiated this struct */
-typedef enum {
-  CURLI_NONE,
-  CURLI_EASY,
-  CURLI_NORMAL,
-  CURLI_LAST
-} CurlInterface;
-
 /*
  * As of April 11, 2000 we're now trying to split up the urldata struct in
  * three different parts:
@@ -457,9 +435,6 @@
  */

 struct UrlData {
-  Handle handle; /* struct identifier */
-  CurlInterface interf; /* created by WHAT interface? */
-
   /*************** Global - specific items  ************/
   FILE *err;         /* the stderr writes goes here */
   char *errorbuffer; /* store failure messages in here */
@@ -522,6 +497,7 @@ struct UrlData {
   void *passwd_client; /* pointer to pass to the passwd callback */

   long timeout;        /* in seconds, 0 means no timeout */
+  long connecttimeout; /* in seconds, 0 means no timeout */
   long infilesize;     /* size of file to upload, -1 means unknown */

   char buffer[BUFSIZE+1]; /* buffer with size BUFSIZE */
@@ -535,9 +511,6 @@ struct UrlData {
   char *cookie;       /* HTTP cookie string to send */

-  char *newurl; /* This can only be set if a Location: was in the
-                   document headers */
-
   struct curl_slist *headers; /* linked list of extra headers */
   struct HttpPost *httppost;  /* linked list of POST data */
@@ -597,191 +570,26 @@ struct UrlData {
 #define LIBCURL_NAME "libcurl"
 #define LIBCURL_ID LIBCURL_NAME " " LIBCURL_VERSION " " SSL_ID

+CURLcode Curl_getinfo(CURL *curl, CURLINFO info, ...);
+
 /*
  * Here follows function prototypes from what we used to plan to call
  * the "low level" interface. It is no longer prioritized and it is not likely
  * to ever be supported to external users.
+ *
+ * I removed all the comments to them as well, as they were no longer accurate
+ * and they're not meant for "public use" anymore.
  */
+CURLcode Curl_open(CURL **curl, char *url);
+CURLcode Curl_setopt(CURL *handle, CURLoption option, ...);
+CURLcode Curl_close(CURL *curl); /* the opposite of curl_open() */
+CURLcode Curl_connect(struct UrlData *,
+                      struct connectdata **,
+                      bool allow_port);
+CURLcode Curl_do(struct connectdata *);
+CURLcode Curl_done(struct connectdata *);
+CURLcode Curl_disconnect(struct connectdata *);

/*
 * NAME curl_init()
 *
 * DESCRIPTION
 *
* Inits libcurl globally. This must be used before any libcurl calls can
* be used. This may install global plug-ins or whatever. (This does not
* do winsock inits in Windows.)
*
* EXAMPLE
*
* curl_init();
*
*/
CURLcode curl_init(void);
/*
* NAME curl_init()
*
* DESCRIPTION
*
* Frees libcurl globally. This must be used after all libcurl calls have
* been used. This may remove global plug-ins or whatever. (This does not
* do winsock cleanups in Windows.)
*
* EXAMPLE
*
* curl_free(curl);
*
*/
void curl_free(void);
/*
* NAME curl_open()
*
* DESCRIPTION
*
* Opens a general curl session. It does not try to connect or do anything
* on the network because of this call. The specified URL is only required
* to enable curl to figure out what protocol to "activate".
*
* A session should be looked upon as a series of requests to a single host. A
* session interacts with one host only, using one single protocol.
*
* The URL is not required. If set to "" or NULL, it can still be set later
* using the curl_setopt() function. If the curl_connect() function is called
* without the URL being known, it will return error.
*
* EXAMPLE
*
* CURLcode result;
* CURL *curl;
* result = curl_open(&curl, "http://curl.haxx.nu/libcurl/");
* if(result != CURL_OK) {
* return result;
* }
* */
CURLcode curl_open(CURL **curl, char *url);
/*
* NAME curl_setopt()
*
* DESCRIPTION
*
* Sets a particular option to the specified value.
*
* EXAMPLE
*
* CURL curl;
* curl_setopt(curl, CURL_HTTP_FOLLOW_LOCATION, TRUE);
*/
CURLcode curl_setopt(CURL *handle, CURLoption option, ...);
/*
* NAME curl_close()
*
* DESCRIPTION
*
* Closes a session previously opened with curl_open()
*
* EXAMPLE
*
* CURL *curl;
* CURLcode result;
*
* result = curl_close(curl);
*/
CURLcode curl_close(CURL *curl); /* the opposite of curl_open() */
CURLcode curl_read(CURLconnect *c_conn, char *buf, size_t buffersize,
ssize_t *n);
CURLcode curl_write(CURLconnect *c_conn, char *buf, size_t amount,
size_t *n);
/*
* NAME curl_connect()
*
* DESCRIPTION
*
* Connects to the peer server and performs the initial setup. This function
* writes a connect handle to its second argument that is a unique handle for
* this connect. This allows multiple connects from the same handle returned
* by curl_open().
*
* By setting 'allow_port' to FALSE, the data->use_port will *NOT* be
* respected.
*
* EXAMPLE
*
* CURLCode result;
* CURL curl;
* CURLconnect connect;
* result = curl_connect(curl, &connect); */
CURLcode curl_connect(CURL *curl,
CURLconnect **in_connect,
                      bool allow_port);

/*
 * NAME curl_do()
*
* DESCRIPTION
*
* (Note: May 3rd 2000: this function does not currently allow you to
* specify a document, it will use the one set previously)
*
* This function asks for the particular document, file or resource that
* resides on the server we have connected to. You may specify a full URL,
* just an absolute path or even a relative path. That means, if you're just
* getting one file from the remote site, you can use the same URL as input
* for both curl_open() as well as for this function.
*
* In the even there is a host name, port number, user name or password parts
* in the URL, you can use the 'flags' argument to ignore them completely, or
* at your choice, make the function fail if you're trying to get a URL from
* different host than you connected to with curl_connect().
*
* You can only get one document at a time using the same connection. When one
* document has been received you can although request again.
*
* When the transfer is done, curl_done() MUST be called.
*
* EXAMPLE
*
* CURLCode result;
* char *url;
* CURLconnect *connect;
* result = curl_do(connect, url, CURL_DO_NONE); */
CURLcode curl_do(CURLconnect *in_conn);
/*
* NAME curl_done()
*
* DESCRIPTION
*
* When the transfer following a curl_do() call is done, this function should
* get called.
*
* EXAMPLE
*
* CURLCode result;
* char *url;
* CURLconnect *connect;
* result = curl_done(connect); */
CURLcode curl_done(CURLconnect *connect);
/*
* NAME curl_disconnect()
*
* DESCRIPTION
*
* Disconnects from the peer server and performs connection cleanup.
*
* EXAMPLE
*
* CURLcode result;
* CURLconnect *connect;
* result = curl_disconnect(connect); */
CURLcode curl_disconnect(CURLconnect *connect);
#endif


@@ -1 +1,3 @@
 SUBDIRS = Win32 Linux
+
+EXTRA_DIST = README

perl/Curl_easy/Changes (new file, 35 lines)

@@ -0,0 +1,35 @@
Revision history for Perl extension Curl::easy.
Check out the file README for more info.
1.0.2 Tue Oct 10 2000:
- runs with libcurl 7.4
- modified curl_easy_getinfo(). It now calls curl_getinfo() that has
been added to libcurl in version 7.4.
1.0.1 Tue Oct 10 2000:
- Added some missing features of curl_easy_setopt():
- CURLOPT_ERRORBUFFER now works by passing the name of a perl
variable that shall be crated and the errormessage (if any)
be stored to.
- Passing filehandles (Options FILE, INFILE and WRITEHEADER) now works.
Have a look at test.pl to see how it works...
- Added a new function, curl_easy_getinfo(), that for now always
returns the number of bytes that where written to disk during the last
download. If the curl_easy_getinfo() function is included in libcurl,
(as promised by Daniel ;-)) i will turn this into just a call to this
function.
1.0 Thu Oct 5 2000:
- first released version
- runs with libcurl 7.3
- some features of curl_easy_setopt() are still missing:
- passing function pointers doesn't work (options WRITEFUNCTION,
READFUNCTION and PROGRESSFUNCTION).
- passing FILE * pointers doesn't work (options FILE, INFILE and
WRITEHEADER).
- passing linked lists doesn't work (options HTTPHEADER and
HTTPPOST).
- setting the buffer where to store error messages in doesn't work
(option ERRORBUFFER).

perl/Curl_easy/MANIFEST (new file, 6 lines)

@@ -0,0 +1,6 @@
Changes
MANIFEST
Makefile.PL
easy.pm
easy.xs
test.pl


@@ -0,0 +1,14 @@
# Makefile.PL for Perl extension Curl::easy.
# Check out the file README for more info.
use ExtUtils::MakeMaker;
# See lib/ExtUtils/MakeMaker.pm for details of how to influence
# the contents of the Makefile that is written.
WriteMakefile(
'NAME' => 'Curl::easy',
'VERSION_FROM' => 'easy.pm', # finds $VERSION
'LIBS' => ['-lcurl '], # e.g., '-lm'
'DEFINE' => '', # e.g., '-DHAVE_SOMETHING'
'INC' => '', # e.g., '-I/usr/include/other'
'clean' => {FILES => "head.out body.out"}
);


@@ -0,0 +1 @@
EXTRA_DIST = Changes easy.pm easy.xs Makefile.PL MANIFEST README test.pl

perl/Curl_easy/README (new file, 27 lines)

@@ -0,0 +1,27 @@
README for Perl extension Curl::easy.
The perl module Curl::easy provides an interface to the cURL library "libcurl".
See http://curl.haxx.se/ for more information on cURL and libcurl.
This module requires libcurl and the corresponding headerfiles to be
installed. You then may install this module via the usual way:
perl Makefile.PL
make
make test
make install
The module provides the same functionality as libcurl provides to C programs,
please refer to the documentation of libcurl.
A short example how to use the module may be found in test.pl.
This Software is distributed AS IS, WITHOUT WARRANTY OF ANY KIND,
either express or implied. Send praise, patches, money, beer and
pizza to the author. Send complaints to /dev/null. ;-)
The author of this module is Georg Horn <horn@koblenz-net.de>
The latest version of this module can be dowloaded from
http://koblenz-net.de/~horn/export/

perl/Curl_easy/easy.pm (new file, 139 lines)

@@ -0,0 +1,139 @@
# Perl interface for libcurl. Check out the file README for more info.
package Curl::easy;
use strict;
use Carp;
use vars qw($VERSION @ISA @EXPORT @EXPORT_OK $AUTOLOAD);
require Exporter;
require DynaLoader;
require AutoLoader;
@ISA = qw(Exporter DynaLoader);
# Items to export into callers namespace by default. Note: do not export
# names by default without a very good reason. Use EXPORT_OK instead.
# Do not simply export all your public functions/methods/constants.
@EXPORT = qw(
CURLOPT_AUTOREFERER
CURLOPT_COOKIE
CURLOPT_COOKIEFILE
CURLOPT_CRLF
CURLOPT_CUSTOMREQUEST
CURLOPT_ERRORBUFFER
CURLOPT_FAILONERROR
CURLOPT_FILE
CURLOPT_FOLLOWLOCATION
CURLOPT_FTPAPPEND
CURLOPT_FTPASCII
CURLOPT_FTPLISTONLY
CURLOPT_FTPPORT
CURLOPT_HEADER
CURLOPT_HTTPHEADER
CURLOPT_HTTPPOST
CURLOPT_HTTPPROXYTUNNEL
CURLOPT_HTTPREQUEST
CURLOPT_INFILE
CURLOPT_INFILESIZE
CURLOPT_INTERFACE
CURLOPT_KRB4LEVEL
CURLOPT_LOW_SPEED_LIMIT
CURLOPT_LOW_SPEED_TIME
CURLOPT_MUTE
CURLOPT_NETRC
CURLOPT_NOBODY
CURLOPT_NOPROGRESS
CURLOPT_NOTHING
CURLOPT_PORT
CURLOPT_POST
CURLOPT_POSTFIELDS
CURLOPT_POSTFIELDSIZE
CURLOPT_POSTQUOTE
CURLOPT_PROGRESSDATA
CURLOPT_PROGRESSFUNCTION
CURLOPT_PROXY
CURLOPT_PROXYPORT
CURLOPT_PROXYUSERPWD
CURLOPT_PUT
CURLOPT_QUOTE
CURLOPT_RANGE
CURLOPT_READFUNCTION
CURLOPT_REFERER
CURLOPT_RESUME_FROM
CURLOPT_SSLCERT
CURLOPT_SSLCERTPASSWD
CURLOPT_SSLVERSION
CURLOPT_STDERR
CURLOPT_TIMECONDITION
CURLOPT_TIMEOUT
CURLOPT_TIMEVALUE
CURLOPT_TRANSFERTEXT
CURLOPT_UPLOAD
CURLOPT_URL
CURLOPT_USERAGENT
CURLOPT_USERPWD
CURLOPT_VERBOSE
CURLOPT_WRITEFUNCTION
CURLOPT_WRITEHEADER
CURLINFO_EFFECTIVE_URL
CURLINFO_HTTP_CODE
CURLINFO_TOTAL_TIME
CURLINFO_NAMELOOKUP_TIME
CURLINFO_CONNECT_TIME
CURLINFO_PRETRANSFER_TIME
CURLINFO_SIZE_UPLOAD
CURLINFO_SIZE_DOWNLOAD
CURLINFO_SPEED_DOWNLOAD
CURLINFO_SPEED_UPLOAD
CURLINFO_HEADER_SIZE
CURLINFO_REQUEST_SIZE
);
$VERSION = '1.0.1';
sub AUTOLOAD {
# This AUTOLOAD is used to 'autoload' constants from the constant()
# XS function.
(my $constname = $AUTOLOAD) =~ s/.*:://;
return constant($constname, 0);
}
bootstrap Curl::easy $VERSION;
# Preloaded methods go here.
# Autoload methods go after =cut, and are processed by the autosplit program.
1;
__END__
# Below is the stub of documentation for your module. You better edit it!
=head1 NAME
Curl::easy - Perl extension for libcurl
=head1 SYNOPSIS
use Curl::easy;
$CURL = curl_easy_init();
$CURLcode = curl_easy_setopt($CURL, CURLoption, Value);
$CURLcode = curl_easy_perform($CURL);
curl_easy_cleanup($CURL);
=head1 DESCRIPTION
This perl module provides an interface to the libcurl C library. See
http://curl.haxx.se/ for more information on cURL and libcurl.
=head1 AUTHOR
Georg Horn <horn@koblenz-net.de>
=head1 SEE ALSO
http://curl.haxx.se/
=cut

perl/Curl_easy/easy.xs (new file, 290 lines)

@@ -0,0 +1,290 @@
/* Perl interface for libcurl. Check out the file README for more info. */
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include <curl/curl.h>
#include <curl/easy.h>
/* Buffer and varname for option CURLOPT_ERRORBUFFER */
static char errbuf[CURL_ERROR_SIZE];
static char *errbufvarname = NULL;
static int
constant(char *name, int arg)
{
errno = 0;
if (strncmp(name, "CURLINFO_", 9) == 0) {
name += 9;
switch (*name) {
case 'A':
case 'B':
case 'C':
case 'D':
if (strEQ(name, "CONNECT_TIME")) return CURLINFO_CONNECT_TIME;
break;
case 'E':
case 'F':
if (strEQ(name, "EFFECTIVE_URL")) return CURLINFO_EFFECTIVE_URL;
break;
case 'G':
case 'H':
if (strEQ(name, "HEADER_SIZE")) return CURLINFO_HEADER_SIZE;
if (strEQ(name, "HTTP_CODE")) return CURLINFO_HTTP_CODE;
break;
case 'I':
case 'J':
case 'K':
case 'L':
case 'M':
case 'N':
if (strEQ(name, "NAMELOOKUP_TIME")) return CURLINFO_NAMELOOKUP_TIME;
break;
case 'O':
case 'P':
if (strEQ(name, "PRETRANSFER_TIME")) return CURLINFO_PRETRANSFER_TIME;
break;
case 'Q':
case 'R':
if (strEQ(name, "REQUEST_SIZE")) return CURLINFO_REQUEST_SIZE;
break;
case 'S':
case 'T':
if (strEQ(name, "SIZE_DOWNLOAD")) return CURLINFO_SIZE_DOWNLOAD;
if (strEQ(name, "SIZE_UPLOAD")) return CURLINFO_SIZE_UPLOAD;
if (strEQ(name, "SPEED_DOWNLOAD")) return CURLINFO_SPEED_DOWNLOAD;
if (strEQ(name, "SPEED_UPLOAD")) return CURLINFO_SPEED_UPLOAD;
if (strEQ(name, "TOTAL_TIME")) return CURLINFO_TOTAL_TIME;
break;
case 'U':
case 'V':
case 'W':
case 'X':
case 'Y':
case 'Z':
break;
}
}
if (strncmp(name, "CURLOPT_", 8) == 0) {
name += 8;
switch (*name) {
case 'A':
case 'B':
if (strEQ(name, "AUTOREFERER")) return CURLOPT_AUTOREFERER;
break;
case 'C':
case 'D':
if (strEQ(name, "COOKIE")) return CURLOPT_COOKIE;
if (strEQ(name, "COOKIEFILE")) return CURLOPT_COOKIEFILE;
if (strEQ(name, "CRLF")) return CURLOPT_CRLF;
if (strEQ(name, "CUSTOMREQUEST")) return CURLOPT_CUSTOMREQUEST;
break;
case 'E':
case 'F':
if (strEQ(name, "ERRORBUFFER")) return CURLOPT_ERRORBUFFER;
if (strEQ(name, "FAILONERROR")) return CURLOPT_FAILONERROR;
if (strEQ(name, "FILE")) return CURLOPT_FILE;
if (strEQ(name, "FOLLOWLOCATION")) return CURLOPT_FOLLOWLOCATION;
if (strEQ(name, "FTPAPPEND")) return CURLOPT_FTPAPPEND;
if (strEQ(name, "FTPASCII")) return CURLOPT_FTPASCII;
if (strEQ(name, "FTPLISTONLY")) return CURLOPT_FTPLISTONLY;
if (strEQ(name, "FTPPORT")) return CURLOPT_FTPPORT;
break;
case 'G':
case 'H':
if (strEQ(name, "HEADER")) return CURLOPT_HEADER;
if (strEQ(name, "HTTPHEADER")) return CURLOPT_HTTPHEADER;
if (strEQ(name, "HTTPPOST")) return CURLOPT_HTTPPOST;
if (strEQ(name, "HTTPPROXYTUNNEL")) return CURLOPT_HTTPPROXYTUNNEL;
if (strEQ(name, "HTTPREQUEST")) return CURLOPT_HTTPREQUEST;
break;
case 'I':
case 'J':
if (strEQ(name, "INFILE")) return CURLOPT_INFILE;
if (strEQ(name, "INFILESIZE")) return CURLOPT_INFILESIZE;
if (strEQ(name, "INTERFACE")) return CURLOPT_INTERFACE;
break;
case 'K':
case 'L':
if (strEQ(name, "KRB4LEVEL")) return CURLOPT_KRB4LEVEL;
if (strEQ(name, "LOW_SPEED_LIMIT")) return CURLOPT_LOW_SPEED_LIMIT;
if (strEQ(name, "LOW_SPEED_TIME")) return CURLOPT_LOW_SPEED_TIME;
break;
case 'M':
case 'N':
if (strEQ(name, "MUTE")) return CURLOPT_MUTE;
if (strEQ(name, "NETRC")) return CURLOPT_NETRC;
if (strEQ(name, "NOBODY")) return CURLOPT_NOBODY;
if (strEQ(name, "NOPROGRESS")) return CURLOPT_NOPROGRESS;
if (strEQ(name, "NOTHING")) return CURLOPT_NOTHING;
break;
case 'O':
case 'P':
if (strEQ(name, "PORT")) return CURLOPT_PORT;
if (strEQ(name, "POST")) return CURLOPT_POST;
if (strEQ(name, "POSTFIELDS")) return CURLOPT_POSTFIELDS;
if (strEQ(name, "POSTFIELDSIZE")) return CURLOPT_POSTFIELDSIZE;
if (strEQ(name, "POSTQUOTE")) return CURLOPT_POSTQUOTE;
if (strEQ(name, "PROGRESSDATA")) return CURLOPT_PROGRESSDATA;
if (strEQ(name, "PROGRESSFUNCTION")) return CURLOPT_PROGRESSFUNCTION;
if (strEQ(name, "PROXY")) return CURLOPT_PROXY;
if (strEQ(name, "PROXYPORT")) return CURLOPT_PROXYPORT;
if (strEQ(name, "PROXYUSERPWD")) return CURLOPT_PROXYUSERPWD;
if (strEQ(name, "PUT")) return CURLOPT_PUT;
break;
case 'Q':
case 'R':
if (strEQ(name, "QUOTE")) return CURLOPT_QUOTE;
if (strEQ(name, "RANGE")) return CURLOPT_RANGE;
if (strEQ(name, "READFUNCTION")) return CURLOPT_READFUNCTION;
if (strEQ(name, "REFERER")) return CURLOPT_REFERER;
if (strEQ(name, "RESUME_FROM")) return CURLOPT_RESUME_FROM;
break;
case 'S':
case 'T':
if (strEQ(name, "SSLCERT")) return CURLOPT_SSLCERT;
if (strEQ(name, "SSLCERTPASSWD")) return CURLOPT_SSLCERTPASSWD;
if (strEQ(name, "SSLVERSION")) return CURLOPT_SSLVERSION;
if (strEQ(name, "STDERR")) return CURLOPT_STDERR;
if (strEQ(name, "TIMECONDITION")) return CURLOPT_TIMECONDITION;
if (strEQ(name, "TIMEOUT")) return CURLOPT_TIMEOUT;
if (strEQ(name, "TIMEVALUE")) return CURLOPT_TIMEVALUE;
if (strEQ(name, "TRANSFERTEXT")) return CURLOPT_TRANSFERTEXT;
break;
case 'U':
case 'V':
if (strEQ(name, "UPLOAD")) return CURLOPT_UPLOAD;
if (strEQ(name, "URL")) return CURLOPT_URL;
if (strEQ(name, "USERAGENT")) return CURLOPT_USERAGENT;
if (strEQ(name, "USERPWD")) return CURLOPT_USERPWD;
if (strEQ(name, "VERBOSE")) return CURLOPT_VERBOSE;
break;
case 'W':
case 'X':
case 'Y':
case 'Z':
if (strEQ(name, "WRITEFUNCTION")) return CURLOPT_WRITEFUNCTION;
if (strEQ(name, "WRITEHEADER")) return CURLOPT_WRITEHEADER;
if (strEQ(name, "WRITEINFO")) return CURLOPT_WRITEINFO;
break;
}
}
errno = EINVAL;
return 0;
}
MODULE = Curl::easy PACKAGE = Curl::easy
int
constant(name,arg)
char * name
int arg
void *
curl_easy_init()
CODE:
if (errbufvarname) free(errbufvarname);
errbufvarname = NULL;
RETVAL = curl_easy_init();
OUTPUT:
RETVAL
int
curl_easy_setopt(curl, option, value)
void * curl
int option
char * value
CODE:
if (option < CURLOPTTYPE_OBJECTPOINT) {
/* This is an option specifying an integer value: */
long value = (long)SvIV(ST(2));
RETVAL = curl_setopt(curl, option, value);
} else if (option == CURLOPT_FILE || option == CURLOPT_INFILE ||
option == CURLOPT_WRITEHEADER) {
/* This is an option specifying a FILE * value: */
FILE * value = IoIFP(sv_2io(ST(2)));
RETVAL = curl_setopt(curl, option, value);
} else if (option == CURLOPT_ERRORBUFFER) {
SV *sv;
RETVAL = curl_setopt(curl, option, errbuf);
if (errbufvarname) free(errbufvarname);
errbufvarname = strdup(value);
sv = perl_get_sv(errbufvarname, TRUE | GV_ADDMULTI);
} else if (option == CURLOPT_WRITEFUNCTION || option ==
CURLOPT_READFUNCTION || option == CURLOPT_PROGRESSFUNCTION) {
/* This is an option specifying a callback function */
/* not yet implemented */
RETVAL = -1;
} else {
/* default, option specifying a char * value: */
RETVAL = curl_setopt(curl, option, value);
}
OUTPUT:
RETVAL
int
curl_easy_perform(curl)
void * curl
CODE:
RETVAL = curl_easy_perform(curl);
if (RETVAL && errbufvarname) {
SV *sv = perl_get_sv(errbufvarname, TRUE | GV_ADDMULTI);
sv_setpv(sv, errbuf);
}
OUTPUT:
RETVAL
int
curl_easy_getinfo(curl, option, value)
void * curl
int option
double value
CODE:
switch (option & CURLINFO_TYPEMASK) {
case CURLINFO_STRING: {
char * value = (char *)SvPV(ST(2), PL_na);
RETVAL = curl_getinfo(curl, option, &value);
sv_setpv(ST(2), value);
break;
}
case CURLINFO_LONG: {
long value = (long)SvIV(ST(2));
RETVAL = curl_getinfo(curl, option, &value);
sv_setiv(ST(2), value);
break;
}
case CURLINFO_DOUBLE: {
double value = (double)SvNV(ST(2));
RETVAL = curl_getinfo(curl, option, &value);
sv_setnv(ST(2), value);
break;
}
default: {
RETVAL = CURLE_BAD_FUNCTION_ARGUMENT;
break;
}
}
OUTPUT:
RETVAL
int
curl_easy_cleanup(curl)
void * curl
CODE:
curl_easy_cleanup(curl);
if (errbufvarname) free(errbufvarname);
errbufvarname = NULL;
RETVAL = 0;
OUTPUT:
RETVAL

perl/Curl_easy/test.pl (new file, 101 lines)

@@ -0,0 +1,101 @@
# Test script for Perl extension Curl::easy.
# Check out the file README for more info.
# Before `make install' is performed this script should be runnable with
# `make test'. After `make install' it should work as `perl test.pl'
######################### We start with some black magic to print on failure.
# Change 1..1 below to 1..last_test_to_print .
# (It may become useful if the test is moved to ./t subdirectory.)
BEGIN { $| = 1; print "1..5\n"; }
END {print "not ok 1\n" unless $loaded;}
use Curl::easy;
$loaded = 1;
print "ok 1\n";
######################### End of black magic.
# Insert your test code below (better if it prints "ok 13"
# (correspondingly "not ok 13") depending on the success of chunk 13
# of the test code):
# Read URL to get
$defurl = "http://www/";
$url = "";
print "Please enter an URL to fetch [$defurl]: ";
$url = <STDIN>;
if ($url =~ /^\s*\n/) {
$url = $defurl;
}
# Use this for simple benchmarking
#for ($i=0; $i<1000; $i++) {
# Init the curl session
if (($curl = Curl::easy::curl_easy_init()) != 0) {
print "ok 2\n";
} else {
print "ko 2\n";
}
# Set URL to get
if (Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_URL, $url) == 0) {
print "ok 3\n";
} else {
print "ko 3\n";
}
# No progress meter please
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_NOPROGRESS, 1);
# Shut up completely
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_MUTE, 1);
# Follow location headers
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_FOLLOWLOCATION, 1);
# Set timeout
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_TIMEOUT, 30);
# Set file where to read cookies from
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_COOKIEFILE, "cookies");
# Set file where to store the header
open HEAD, ">head.out";
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_WRITEHEADER, HEAD);
# Set file where to store the body
open BODY, ">body.out";
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_FILE, BODY);
# Store error messages in variable $errbuf
# NOTE: The name of the variable is passed as a string!
# curl_easy_setopt() creates a perl variable with that name, and
# curl_easy_perform() stores the error message into it if an error occurs.
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_ERRORBUFFER, "errbuf");
# Go get it
if (Curl::easy::curl_easy_perform($curl) == 0) {
Curl::easy::curl_easy_getinfo($curl, Curl::easy::CURLINFO_SIZE_DOWNLOAD, $bytes);
print "ok 4: $bytes bytes read\n";
print "check out the files head.out and body.out\n";
print "for the headers and content of the URL you just fetched...\n";
Curl::easy::curl_easy_getinfo($curl, Curl::easy::CURLINFO_EFFECTIVE_URL, $realurl);
Curl::easy::curl_easy_getinfo($curl, Curl::easy::CURLINFO_HTTP_CODE, $httpcode);
print "effective fetched url (http code: $httpcode) was: $url\n";
} else {
# We can access the error message in $errbuf here
print "not ok 4: '$errbuf'\n";
}
# Cleanup
close HEAD;
close BODY;
Curl::easy::curl_easy_cleanup($curl);
print "ok 5\n";
# Use this for simple benchmarking
#}

3
perl/Makefile.am Normal file
View File

@@ -0,0 +1,3 @@
SUBDIRS = Curl_easy
EXTRA_DIST = README

View File

@@ -1,33 +1,17 @@
-This is just a small collection of perl scripts that use curl to do
-their jobs.
-If you need a proxy configuration in order to get HTTP or FTP
-documents, do edit your .curlrc file in your HOME dir to contain:
-  -x <proxy host>:<proxy port>
-These scripts are all written by Daniel Stenberg.
-checklinks.pl
-=============
-This script fetches an HTML page, extracts all links and references to
-other documents and then goes through them to check that they work.
-Reports progress in a format intended for machine-parsing.
-getlinks.pl
-===========
-You ever wanted to download a bunch of programs a certain HTML page has
-links to? This program extracts all links and references from a web page
-and then compares them to the regex you supply. All matches will be
-downloaded in the target directory of your choice.
-recursiveftpget.pl
-==================
-This script recursively downloads all files from a directory on an ftp site
-and all subdirectories it has. Optional depth-level.
-formfind.pl
-===========
-Downloads an HTML page (or reads stdin) and reports a human readable report
-about the FORM(s) present. What method, what URL, which input or select
-field, what default values they have and what submit buttons there are. It
-is useful if you intend to use curl to properly fake a form submission.
+                                  _   _ ____  _
+                              ___| | | |  _ \| |
+                             / __| | | | |_) | |
+                            | (__| |_| |  _ <| |___
+                             \___|\___/|_| \_\_____|
+
+                                    Perl
+
+Perl's a great script language, not the least for quick prototyping. Curl is
+elegantly used from within it. You can either invoke external curl command
+line or use the curl interface.
+
+Georg Horn's Perl interface to curl is available in the Curl_easy/
+subdirectory.
+
+Unfortunately, we don't have any examples nor any documentation for it at
+this point.
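As the new README says, a script can either shell out to the curl binary or call libcurl through Georg Horn's Curl::easy binding. A rough sketch of both styles, with the URL and output file name as placeholders:

# 1) invoke the external curl command line
$page = `curl -s http://localhost/`;

# 2) use the Curl::easy interface from the Curl_easy/ subdirectory
use Curl::easy;
$curl = Curl::easy::curl_easy_init();
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_URL, "http://localhost/");
open OUT, ">page.out";
Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_FILE, OUT);
Curl::easy::curl_easy_perform($curl);
Curl::easy::curl_easy_cleanup($curl);
close OUT;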

View File

0
perl/getlinks.pl.in → perl/contrib/getlinks.pl.in Executable file → Normal file
View File

104
perl/contrib/mirror.pl Normal file
View File

@@ -0,0 +1,104 @@
#!/usr/bin/perl
#
# Author: Daniel Stenberg <daniel@haxx.se>
# Version: 0.1
# Date: October 10, 2000
#
# This is public domain. Feel free to do whatever you please with this script.
# There are no warranties whatsoever! It might work, it might ruin your hard
# disk. Use this at your own risk.
#
# PURPOSE
#
# This script uses a local directory to maintain a "mirror" of the curl
# packages listed in the remote curl web sites package list. Files present in
# the local directory that aren't present in the remote list will be removed.
# Files that are present in the remote list but not in the local directory
# will be downloaded and put there. Files present at both places will not
# be touched.
#
# WARNING: don't put other files in the mirror directory; they will be removed
# when this script runs if they don't exist in the remote package list!
#
# this is the directory to keep all the mirrored curl files in:
$some_dir = $ARGV[0];
if( ! -d $some_dir ) {
print "$some_dir is not a dir!\n";
exit;
}
# path to the curl binary
$curl = "/home/danste/bin/curl";
# this is the remote file list
$filelist = "http://curl.haxx.se/download/curldist.txt";
# prepend URL:
$prepend = "http://curl.haxx.se/download";
opendir(DIR, $some_dir) || die "can't opendir $some_dir: $!";
@existing = grep { /^[^\.]/ } readdir(DIR);
closedir DIR;
$LOCAL_FILE = 1;
$REMOTE_FILE = 2;
# create a hash array
for(@existing) {
$allfiles{$_} |= $LOCAL_FILE;
}
# get remote file list
print "Getting file list from $filelist\n";
@remotefiles=`$curl -s $filelist`;
# fill in the hash array
for(@remotefiles) {
chomp;
$allfiles{$_} |= $REMOTE_FILE;
$remote++;
}
if($remote < 10) {
print "There's something wrong. The remote file list seems too smallish!\n";
exit;
}
@sfiles = sort { $a cmp $b } keys %allfiles;
$leftalone = $downloaded = $removed = 0;
for(@sfiles) {
$file = $_;
$info = $allfiles{$file};
if($info == ($REMOTE_FILE|$LOCAL_FILE)) {
print "$file is LOCAL and REMOTE, left alone\n";
$leftalone++;
}
elsif($info == $REMOTE_FILE) {
print "$file is only REMOTE, getting it...\n";
system("$curl $prepend/$file -o $some_dir/$file");
$downloaded++;
}
elsif($info == $LOCAL_FILE) {
print "$file is only LOCAL, removing it...\n";
system("rm $some_dir/$file");
$removed++;
}
else {
print "Problem, file $file was marked $info\n";
}
$loops++;
}
if(!$loops) {
print "No remote or local files were found!\n";
exit;
}
print "$leftalone files were already present\n",
"$downloaded files were added\n",
"$removed files were removed\n";

View File

2
php/Makefile.am Normal file
View File

@@ -0,0 +1,2 @@
SUBDIRS = examples
EXTRA_DIST = README

15
php/README Normal file
View File

@@ -0,0 +1,15 @@
                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                                     PHP
There's an excellent interface to curl written for PHP by Sterling Hughes. See
the subdirectory examples/ for some examples on how to program with it.
Unfortunately, we don't have much more information about the interface
included here yet, but there's a detailed online manual for it over at:
http://www.php.net/manual/ref.curl.php

1
php/examples/Makefile.am Normal file
View File

@@ -0,0 +1 @@
EXTRA_DIST = README getpageinvar.php simpleget.php simplepost.php

16
php/examples/README Normal file
View File

@@ -0,0 +1,16 @@
                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                            PHP program examples
getpageinvar.php
- Fetch a single URL and return in a variable
simpleget.php
- A very simple example that gets an HTTP page
simplepost.php
- Example that sends an HTTP POST to a remote site

View File

@@ -29,8 +29,6 @@
 #include <ctype.h>
 #include <curl/curl.h>
-#include <curl/types.h> /* new for v7 */
-#include <curl/easy.h> /* new for v7 */
 #define _MPRINTF_REPLACE /* we want curl-functions instead of native ones */
 #include <curl/mprintf.h>
@@ -40,11 +38,6 @@
 #define CURLseparator "--_curl_--"
-/* This define make use of the "Curlseparator" as opposed to the
-   MIMEseparator. We might add support for the latter one in the
-   future, and that's why this is left in the source. */
-#define CURL_SEPARATORS
 /* This is now designed to have its own local setup.h */
 #include "setup.h"
@@ -242,61 +235,63 @@ static void help(void)
" -a/--append Append to target file when uploading (F)\n" " -a/--append Append to target file when uploading (F)\n"
" -A/--user-agent <string> User-Agent to send to server (H)\n" " -A/--user-agent <string> User-Agent to send to server (H)\n"
" -b/--cookie <name=string/file> Cookie string or file to read cookies from (H)\n" " -b/--cookie <name=string/file> Cookie string or file to read cookies from (H)\n"
" -B/--use-ascii Use ASCII/text transfer\n" " -B/--use-ascii Use ASCII/text transfer\n",
" -C/--continue-at <offset> Specify absolute resume offset\n" curl_version());
puts(" -C/--continue-at <offset> Specify absolute resume offset\n"
" -d/--data <data> HTTP POST data (H)\n" " -d/--data <data> HTTP POST data (H)\n"
" --data-ascii <data> HTTP POST ASCII data (H)\n" " --data-ascii <data> HTTP POST ASCII data (H)\n"
" --data-binary <data> HTTP POST binary data (H)\n" " --data-binary <data> HTTP POST binary data (H)\n"
" -D/--dump-header <file> Write the headers to this file\n" " -D/--dump-header <file> Write the headers to this file\n"
" -e/--referer Referer page (H)\n" " --egd-file <file> EGD socket path for random data (SSL)\n"
" -E/--cert <cert[:passwd]> Specifies your certificate file and password (HTTPS)\n" " -e/--referer Referer page (H)");
puts(" -E/--cert <cert[:passwd]> Specifies your certificate file and password (HTTPS)\n"
" --cacert <file> CA certifciate to verify peer against (HTTPS)\n" " --cacert <file> CA certifciate to verify peer against (HTTPS)\n"
" --connect-timeout <seconds> Maximum time allowed for connection\n"
" -f/--fail Fail silently (no output at all) on errors (H)\n" " -f/--fail Fail silently (no output at all) on errors (H)\n"
" -F/--form <name=content> Specify HTTP POST data (H)\n" " -F/--form <name=content> Specify HTTP POST data (H)\n"
" -g/--globoff Disable URL sequences and ranges using {} and []\n" " -g/--globoff Disable URL sequences and ranges using {} and []\n"
" -h/--help This help text\n" " -h/--help This help text\n"
" -H/--header <line> Custom header to pass to server. (H)\n" " -H/--header <line> Custom header to pass to server. (H)");
" -i/--include Include the HTTP-header in the output (H)\n" puts(" -i/--include Include the HTTP-header in the output (H)\n"
" -I/--head Fetch document info only (HTTP HEAD/FTP SIZE)\n" " -I/--head Fetch document info only (HTTP HEAD/FTP SIZE)\n"
" --interface <interface> Specify the interface to be used\n" " --interface <interface> Specify the interface to be used\n"
" --krb4 <level> Enable krb4 with specified security level (F)\n" " --krb4 <level> Enable krb4 with specified security level (F)\n"
" -K/--config Specify which config file to read\n" " -K/--config Specify which config file to read\n"
" -l/--list-only List only names of an FTP directory (F)\n" " -l/--list-only List only names of an FTP directory (F)");
" -L/--location Follow Location: hints (H)\n" puts(" -L/--location Follow Location: hints (H)\n"
" -m/--max-time <seconds> Maximum time allowed for the transfer\n" " -m/--max-time <seconds> Maximum time allowed for the transfer\n"
" -M/--manual Display huge help text\n" " -M/--manual Display huge help text\n"
" -n/--netrc Read .netrc for user name and password\n" " -n/--netrc Read .netrc for user name and password\n"
" -N/--no-buffer Disables the buffering of the output stream\n" " -N/--no-buffer Disables the buffering of the output stream");
" -o/--output <file> Write output to <file> instead of stdout\n" puts(" -o/--output <file> Write output to <file> instead of stdout\n"
" -O/--remote-name Write output to a file named as the remote file\n" " -O/--remote-name Write output to a file named as the remote file\n"
" -p/--proxytunnel Perform non-HTTP services through a HTTP proxy\n" " -p/--proxytunnel Perform non-HTTP services through a HTTP proxy\n"
" -P/--ftpport <address> Use PORT with address instead of PASV when ftping (F)\n" " -P/--ftpport <address> Use PORT with address instead of PASV when ftping (F)\n"
" -q When used as the first parameter disables .curlrc\n" " -q When used as the first parameter disables .curlrc\n"
" -Q/--quote <cmd> Send QUOTE command to FTP before file transfer (F)\n" " -Q/--quote <cmd> Send QUOTE command to FTP before file transfer (F)");
" -r/--range <range> Retrieve a byte range from a HTTP/1.1 or FTP server\n" puts(" -r/--range <range> Retrieve a byte range from a HTTP/1.1 or FTP server\n"
" -s/--silent Silent mode. Don't output anything\n" " -s/--silent Silent mode. Don't output anything\n"
" -S/--show-error Show error. With -s, make curl show errors when they occur\n" " -S/--show-error Show error. With -s, make curl show errors when they occur\n"
" --stderr <file> Where to redirect stderr. - means stdout.\n"
" -t/--telnet-option <OPT=val> Set telnet option\n" " -t/--telnet-option <OPT=val> Set telnet option\n"
" -T/--upload-file <file> Transfer/upload <file> to remote site\n" " -T/--upload-file <file> Transfer/upload <file> to remote site\n"
" --url <URL> Another way to specify URL to work with\n" " --url <URL> Another way to specify URL to work with");
" -u/--user <user[:password]> Specify user and password to use\n" puts(" -u/--user <user[:password]> Specify user and password to use\n"
" -U/--proxy-user <user[:password]> Specify Proxy authentication\n" " -U/--proxy-user <user[:password]> Specify Proxy authentication\n"
" -v/--verbose Makes the operation more talkative\n" " -v/--verbose Makes the operation more talkative\n"
" -V/--version Outputs version number then quits\n" " -V/--version Outputs version number then quits\n"
" -w/--write-out [format] What to output after completion\n" " -w/--write-out [format] What to output after completion\n"
" -x/--proxy <host[:port]> Use proxy. (Default port is 1080)\n" " -x/--proxy <host[:port]> Use proxy. (Default port is 1080)\n"
" -X/--request <command> Specific request command to use\n" " --random-file <file> File to use for reading random data from (SSL)\n"
" -y/--speed-time Time needed to trig speed-limit abort. Defaults to 30\n" " -X/--request <command> Specific request command to use");
puts(" -y/--speed-time Time needed to trig speed-limit abort. Defaults to 30\n"
" -Y/--speed-limit Stop transfer if below speed-limit for 'speed-time' secs\n" " -Y/--speed-limit Stop transfer if below speed-limit for 'speed-time' secs\n"
" -z/--time-cond <time> Includes a time condition to the server (H)\n" " -z/--time-cond <time> Includes a time condition to the server (H)\n"
" -Z/--max-redirs <num> Set maximum number of redirections allowed (H)\n" " -Z/--max-redirs <num> Set maximum number of redirections allowed (H)\n"
" -2/--sslv2 Force usage of SSLv2 (H)\n" " -2/--sslv2 Force usage of SSLv2 (H)\n"
" -3/--sslv3 Force usage of SSLv3 (H)\n" " -3/--sslv3 Force usage of SSLv3 (H)");
" -#/--progress-bar Display transfer progress as a progress bar\n" puts(" -#/--progress-bar Display transfer progress as a progress bar\n"
" --crlf Convert LF to CRLF in upload. Useful for MVS (OS/390)\n" " --crlf Convert LF to CRLF in upload. Useful for MVS (OS/390)");
" --stderr <file> Where to redirect stderr. - means stdout.\n",
curl_version()
);
} }
struct LongShort { struct LongShort {
@@ -306,6 +301,8 @@ struct LongShort {
 };
 struct Configurable {
+char *random_file;
+char *egd_file;
 char *useragent;
 char *cookie;
 bool use_resume;
@@ -314,6 +311,7 @@ struct Configurable {
 long postfieldsize;
 char *referer;
 long timeout;
+long connecttimeout;
 long maxredirs;
 char *headerfile;
 char *ftpport;
@@ -525,6 +523,9 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
 {"7", "interface", TRUE},
 {"6", "krb4", TRUE},
 {"5", "url", TRUE},
+{"5a", "random-file", TRUE},
+{"5b", "egd-file", TRUE},
+{"5c", "connect-timeout", TRUE},
 {"2", "sslv2", FALSE},
 {"3", "sslv3", FALSE},
@@ -674,7 +675,17 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
 GetStr(&config->krb4level, nextarg);
 break;
 case '5':
-/* the URL! */
+switch(subletter) {
+case 'a': /* random-file */
+GetStr(&config->random_file, nextarg);
+break;
+case 'b': /* egd-file */
+GetStr(&config->egd_file, nextarg);
+break;
+case 'c': /* connect-timeout */
+config->connecttimeout=atoi(nextarg);
+break;
+default: /* the URL! */
 {
 struct getout *url;
 if(config->url_get || (config->url_get=config->url_list)) {
@@ -699,6 +710,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
 url->flags |= GETOUT_URL;
 }
 }
+}
 break;
 case '#': /* added 19990617 larsa */
 config->progressmode ^= CURL_PROGRESS_BAR;
@@ -1368,6 +1380,10 @@ void progressbarinit(struct ProgressData *bar)
 void free_config_fields(struct Configurable *config)
 {
+if(config->random_file)
+free(config->random_file);
+if(config->egd_file)
+free(config->egd_file);
 if(config->userpwd)
 free(config->userpwd);
 if(config->postfields)
@@ -1441,12 +1457,10 @@ operate(struct Configurable *config, int argc, char *argv[])
 curl_memdebug("memdump");
 #endif
+main_init(); /* inits winsock crap for windows */
 config->showerror=TRUE;
 config->conf=CONF_DEFAULT;
-#if 0
-config->crlf=FALSE;
-config->quote=NULL;
-#endif
 if(argc>1 &&
 (!strnequal("--", argv[1], 2) && (argv[1][0] == '-')) &&
@@ -1455,9 +1469,6 @@ operate(struct Configurable *config, int argc, char *argv[])
 * The first flag, that is not a verbose name, but a shortname
 * and it includes the 'q' flag!
 */
-#if 0
-fprintf(stderr, "I TURNED OFF THE CRAP\n");
-#endif
 ;
 }
 else {
@@ -1537,6 +1548,15 @@ operate(struct Configurable *config, int argc, char *argv[])
 else
 allocuseragent = TRUE;
+/*
+ * Get a curl handle to use for all forthcoming curl transfers. Cleanup
+ * when all transfers are done. This is supported with libcurl 7.7 and
+ * should not be attempted on previous versions.
+ */
+curl = curl_easy_init();
+if(!curl)
+return CURLE_FAILED_INIT;
 urlnode = config->url_list;
 /* loop through the list of given URLs */
@@ -1722,10 +1742,6 @@ operate(struct Configurable *config, int argc, char *argv[])
 #endif
-main_init();
-curl = curl_easy_init();
-if(curl) {
 curl_easy_setopt(curl, CURLOPT_FILE, (FILE *)&outs); /* where to store */
 /* what call to write: */
 curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
@@ -1821,22 +1837,19 @@ operate(struct Configurable *config, int argc, char *argv[])
 /* new in libcurl 7.6.2: */
 curl_easy_setopt(curl, CURLOPT_TELNETOPTIONS, config->telnet_options);
+/* new in libcurl 7.7: */
+curl_easy_setopt(curl, CURLOPT_RANDOM_FILE, config->random_file);
+curl_easy_setopt(curl, CURLOPT_EGDSOCKET, config->egd_file);
+curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, config->connecttimeout);
 res = curl_easy_perform(curl);
 if(config->writeout) {
 ourWriteOut(curl, config->writeout);
 }
-/* always cleanup */
-curl_easy_cleanup(curl);
 if((res!=CURLE_OK) && config->showerror)
 fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
-}
-else
-fprintf(config->errors, "curl: failed to init libcurl!\n");
-main_free();
 if((config->errors != stderr) &&
 (config->errors != stdout))
@@ -1884,6 +1897,11 @@ operate(struct Configurable *config, int argc, char *argv[])
 if(allocuseragent)
 free(config->useragent);
+/* cleanup the curl handle! */
+curl_easy_cleanup(curl);
+main_free(); /* cleanup the winsock stuff for windows */
 return res;
 }
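The operate() changes above create a single libcurl handle before the URL loop and clean it up only after all transfers are done, instead of calling curl_easy_init() and curl_easy_cleanup() once per URL; that is what the new comment means by requiring libcurl 7.7. A sketch of the same reuse pattern on the Curl::easy side (URLs and output names are placeholders, and whether connections actually persist depends on the libcurl version underneath):

use Curl::easy;

# one handle for the whole batch, as in the reworked operate() above
$curl = Curl::easy::curl_easy_init();

$n = 0;
for $url ("http://localhost/one.html", "http://localhost/two.html") {
    $n++;
    open OUT, ">out.$n" or die "can't write out.$n: $!";
    Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_URL, $url);
    Curl::easy::curl_easy_setopt($curl, Curl::easy::CURLOPT_FILE, OUT);
    if (Curl::easy::curl_easy_perform($curl) != 0) {
        print "fetching $url failed\n";
    }
    close OUT;
}

# clean up once, when all transfers are done
Curl::easy::curl_easy_cleanup($curl);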

View File

@@ -75,18 +75,18 @@ for(@out) {
 $new = $_;
-$outsize += length($new);
+$outsize += length($new)+1; # one for the newline
 $new =~ s/\\/\\\\/g;
 $new =~ s/\"/\\\"/g;
-printf("\"%s\\n\"\n", $new);
-if($outsize > 10000) {
+# gcc 2.96 claims ISO C89 only is required to support 509 letter strings
+if($outsize > 500) {
 # terminate and make another puts() call here
 print ");\n puts(\n";
-$outsize=0;
+$outsize=length($new)+1;
 }
+printf("\"%s\\n\"\n", $new);
 }
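The threshold change above exists because ISO C89 only guarantees 509 characters per string literal, which is what gcc 2.96 warns about, so the generator now closes the current puts() argument before it can grow past roughly 500 bytes, counting one extra byte per line for the newline. A stand-alone sketch of that chunking idea with made-up input lines (the real script also escapes backslashes and double quotes first):

@lines = ("first help line", "second help line", "third help line");

$limit = 500;       # stay under ISO C89's 509-character literal guarantee
$outsize = 0;

print "puts(\n";
for (@lines) {
    $len = length($_) + 1;            # one for the newline
    if ($outsize + $len > $limit) {   # close this literal, start a new puts()
        print ");\n puts(\n";
        $outsize = 0;
    }
    $outsize += $len;
    printf("\"%s\\n\"\n", $_);
}
print ");\n";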

View File

@@ -1,3 +1,3 @@
 #define CURL_NAME "curl"
-#define CURL_VERSION "7.7-beta1"
+#define CURL_VERSION "7.7-beta5"
 #define CURL_ID CURL_NAME " " CURL_VERSION " (" OS ") "

View File

@@ -25,8 +25,6 @@
 #include <string.h>
 #include <curl/curl.h>
-#include <curl/types.h>
-#include <curl/easy.h>
 #define _MPRINTF_REPLACE /* we want curl-functions instead of native ones */
 #include <curl/mprintf.h>

View File

@@ -18,8 +18,10 @@ Run:
 verbose output. Use -d to run the test servers with debug output enabled as
 well.
-Use -s fort shorter output, or pass a string with test numbers to run
-specific tests only (like ./runtests.pl "3 4" to test 3 and 4 only)
+Use -s for shorter output, or pass test numbers to run specific tests only
+(like "./runtests.pl 3 4" to test 3 and 4 only). It also supports test case
+ranges with 'to'. As in "./runtests 3 to 9" which runs the seven tests from
+3 to 9.
 Memory:
 The test script will check that all allocated memory is freed properly IF
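The 'N to M' form described above is just shorthand for the inclusive list of test numbers, so "3 to 9" runs seven tests. A sketch of how such arguments could be expanded, purely as an illustration and not necessarily the exact logic inside runtests.pl:

# example arguments; "3 to 9" expands to 3..9, bare numbers stay as they are
@args = ("3", "to", "9", "12");
@tests = ();
for ($i = 0; $i <= $#args; $i++) {
    if ($i + 2 <= $#args && $args[$i + 1] eq "to") {
        push @tests, ($args[$i] .. $args[$i + 2]);
        $i += 2;
    }
    else {
        push @tests, $args[$i];
    }
}
print "will run tests: @tests\n";     # prints: 3 4 5 6 7 8 9 12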

View File

@@ -64,4 +64,8 @@ command32.txt prot31.txt reply310001.txt reply320001.txt \
 name31.txt prot32.txt reply310002.txt reply320002.txt \
 command33.txt extra33.txt name33.txt prot33.txt reply33.txt \
 command34.txt prot34.txt reply340001.txt name34.txt reply34.txt \
-command35.txt name35.txt prot35.txt reply35.txt
+command35.txt name35.txt prot35.txt reply35.txt \
+command36.txt error36.txt name36.txt reply36.txt \
+command37.txt name37.txt prot37.txt reply37.txt \
+command38.txt prot38.txt reply380001.txt name38.txt reply38.txt \
+command39.txt prot39.txt reply390001.txt name39.txt reply39.txt reply390002.txt

View File

@@ -1,4 +1,4 @@
-http://%HOSTIP:%HOSTPORT/want/25 -o - -o -
+http://%HOSTIP:%HOSTPORT/want/26 -o - -o -

1
tests/data/command36.txt Normal file
View File

@@ -0,0 +1 @@
http://%HOSTIP:%HOSTPORT/36

1
tests/data/command37.txt Normal file
View File

@@ -0,0 +1 @@
http://uUsSeErrr:pppasswrd@%HOSTIP:%HOSTPORT/37

1
tests/data/command38.txt Normal file
View File

@@ -0,0 +1 @@
http://user:pwd@%HOSTIP:%HOSTPORT/38 -L

3
tests/data/command39.txt Normal file
View File

@@ -0,0 +1,3 @@
http://%HOSTIP:%HOSTPORT/want/39 -L -C 20

1
tests/data/error36.txt Normal file
View File

@@ -0,0 +1 @@
26

View File

@@ -1 +1 @@
-looping HTTP Location: following with --max-redirs
+looping HTTP Location: following with --max-redirs, no persistance

1
tests/data/name36.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with badly formatted chunked Transfer-Encoding

1
tests/data/name37.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with name+password in the URL

1
tests/data/name38.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with user+password in URL and Location: and --include

1
tests/data/name39.txt Normal file
View File

@@ -0,0 +1 @@
HTTP GET with location following and -C

View File

@@ -1,3 +1,8 @@
+GET /want/11 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
 GET /want/data/110002.txt?coolsite=yes HTTP/1.1
 Host: 127.0.0.1:8999
 Pragma: no-cache

View File

@@ -1,5 +1,14 @@
-GET /11 HTTP/1.1
-User-Agent: curl/7.4.2 (sparc-sun-solaris2.7) libcurl 7.4.2
+GET /3 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
+GET /10 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
+GET /11 HTTP/1.1
 Host: 127.0.0.1:8999
 Pragma: no-cache
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

View File

@@ -1,4 +1,4 @@
-GET /want/25 HTTP/1.1
+GET /want/26 HTTP/1.1
 User-Agent: curl/7.6-pre1 (sparc-sun-solaris2.7) libcurl 7.5.2 (SSL 0.9.6) (krb4 enabled)
 Host: 127.0.0.1:8999
 Pragma: no-cache

View File

@@ -1,5 +1,14 @@
-GET /want/22 HTTP/1.1
-User-Agent: curl/7.6-pre1 (sparc-sun-solaris2.7) libcurl 7.6-pre1 (SSL 0.9.6) (krb4 enabled)
+GET /want/25 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
+GET /want/24 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
+GET /want/22 HTTP/1.1
 Host: 127.0.0.1:8999
 Pragma: no-cache
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

View File

@@ -1,5 +1,12 @@
-POST /moo/moo/moo/310002 HTTP/1.1
-User-Agent: curl/7.6 (i686-pc-linux-gnu) libcurl 7.6 (SSL 0.9.5) (ipv6 enabled)
+POST /31 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
+Content-Length: 9
+Content-Type: application/x-www-form-urlencoded
+mooo=fooo
+POST /moo/moo/moo/310002 HTTP/1.1
 Host: 127.0.0.1:8999
 Pragma: no-cache
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

View File

@@ -1,5 +1,12 @@
+POST /32 HTTP/1.1
+Host: 127.0.0.1:8999
+Pragma: no-cache
+Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
+Content-Length: 9
+Content-Type: application/x-www-form-urlencoded
+mooo=fooo
 GET /moo/moo/moo/320002 HTTP/1.1
-User-Agent: curl/7.6 (i686-pc-linux-gnu) libcurl 7.6 (SSL 0.9.5) (ipv6 enabled)
 Host: 127.0.0.1:8999
 Pragma: no-cache
 Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

7
tests/data/prot37.txt Normal file
View File

@@ -0,0 +1,7 @@
GET /37 HTTP/1.1
Authorization: Basic dVVzU2VFcnJyOnBwcGFzc3dyZA==
User-Agent: curl/7.7-beta1 (i686-pc-linux-gnu) libcurl 7.7-beta1 (SSL 0.9.5)
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

11
tests/data/prot38.txt Normal file
View File

@@ -0,0 +1,11 @@
GET /38 HTTP/1.1
Authorization: Basic dXNlcjpwd2Q=
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
GET /ffffooooooooooooooooooooooooooooooooooooooooooooooooooo/37?fake HTTP/1.1
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

14
tests/data/prot39.txt Normal file
View File

@@ -0,0 +1,14 @@
GET /want/39 HTTP/1.1
Range: bytes=20-
User-Agent: curl/7.7-beta4 (sparc-sun-solaris2.7) libcurl 7.7-beta4 (SSL 0.9.6) (krb4 enabled)
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
GET /want/data/390002.txt?coolsite=yes HTTP/1.1
Range: bytes=20-
User-Agent: curl/7.7-beta4 (sparc-sun-solaris2.7) libcurl 7.7-beta4 (SSL 0.9.6) (krb4 enabled)
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

Some files were not shown because too many files have changed in this diff.