Compare commits

...

436 Commits

Author SHA1 Message Date
Daniel Stenberg
c2d4fd876c 7.9.5 commit 2002-03-07 08:50:18 +00:00
Daniel Stenberg
58cad04bbb added the "known bugs" file 2002-03-07 08:29:24 +00:00
Daniel Stenberg
9bb64d6827 new VMS messages from Nico Baggus 2002-03-06 23:18:22 +00:00
Daniel Stenberg
4441df90c1 Kevin Roth nicely saved us from this backslash-removing problem! 2002-03-06 22:52:00 +00:00
Daniel Stenberg
f51f2417c5 Brad corrected the include path (again) 2002-03-06 22:19:16 +00:00
Daniel Stenberg
aad617647d corrected the newlines 2002-03-06 22:08:11 +00:00
Daniel Stenberg
49c0d62dda two items since pre6 2002-03-06 15:05:00 +00:00
Daniel Stenberg
f752098ba5 when removed, an easy handle can be curl_easy_perform()ed again 2002-03-06 15:01:45 +00:00
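As a rough illustration of that behaviour (a minimal sketch only; the URL is a placeholder and the busy loop stands in for a proper select()-driven event loop):

#include <curl/curl.h>

int main(void)
{
  CURL *easy = curl_easy_init();
  CURLM *multi = curl_multi_init();
  int running = 0;

  curl_easy_setopt(easy, CURLOPT_URL, "http://curl.haxx.se/"); /* placeholder URL */
  curl_multi_add_handle(multi, easy);
  do {
    curl_multi_perform(multi, &running);  /* drive the transfer */
  } while(running);                       /* crude busy loop, for illustration only */

  /* once removed from the multi handle, the same easy handle can be
     curl_easy_perform()ed directly again */
  curl_multi_remove_handle(multi, easy);
  curl_easy_perform(easy);

  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}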
Daniel Stenberg
a4477b9e4b Paul Nolan built it on pocket pc 2002-03-06 12:33:34 +00:00
Daniel Stenberg
ad3cef0fc8 Ralph Mitchell's minor #include patch to prevent some warnings 2002-03-06 09:40:06 +00:00
Daniel Stenberg
d89dbe5bd6 we don't skip what looks like already escaped strings, that was fixed
ages ago
2002-03-06 07:44:49 +00:00
Daniel Stenberg
b0475dbdbc read POST data using the read callback 2002-03-05 14:14:22 +00:00
Daniel Stenberg
60b2e74fa3 corrected the progress callback prototype!!! 2002-03-05 10:15:38 +00:00
Daniel Stenberg
cda16297d1 added text to the progress chapter 2002-03-05 09:01:58 +00:00
Daniel Stenberg
d6c9a72e15 explicitly mention easy handle 2002-03-04 13:10:15 +00:00
Daniel Stenberg
4d7b1512c1 mention 'easy handle' and not just handle, there will soon be other handles
to keep track of too
2002-03-04 13:06:46 +00:00
Daniel Stenberg
d8a35d745e cut off 2001 and put those changes in a separate file 2002-03-04 10:34:58 +00:00
Daniel Stenberg
e22657ea13 added docs/libcurl/
removed multi/
2002-03-04 10:28:02 +00:00
Daniel Stenberg
d06d6b5534 moved lots to the new subdir 'libcurl' 2002-03-04 10:27:37 +00:00
Daniel Stenberg
cec8ab1fde remove this directory, this is history 2002-03-04 10:19:32 +00:00
Daniel Stenberg
9fc62a8dd0 multi interface using examples 2002-03-04 10:15:44 +00:00
Daniel Stenberg
61540b98c2 no longer include the multi dir, the examples should be in the examples
dir
2002-03-04 10:15:12 +00:00
Daniel Stenberg
465ae39e86 moved to the new libcurl/ directory 2002-03-04 10:10:58 +00:00
Daniel Stenberg
01f04b9a41 ripped out from ../ and put in its own directory now 2002-03-04 10:09:48 +00:00
Daniel Stenberg
34f9ab1046 Added packages/EPM 2002-03-04 08:00:25 +00:00
Daniel Stenberg
699876778b Added EPM stuff, thanks to Giuseppe Corbelli 2002-03-04 07:59:53 +00:00
Daniel Stenberg
8fc5a0d19e bug report #524427 pointed out a mistake in the example source 2002-03-01 17:22:40 +00:00
Daniel Stenberg
62b5926d58 initial and still basic curl multi interface documentation 2002-03-01 15:34:23 +00:00
Daniel Stenberg
4d1037f385 removed incorrect and unnecessary words 2002-03-01 13:38:56 +00:00
Daniel Stenberg
e4addb3975 several little things since pre4 2002-03-01 10:48:08 +00:00
Daniel Stenberg
2aef351980 memanalyze is now moved to the tests/ dir 2002-03-01 09:20:03 +00:00
Daniel Stenberg
d88c153c7d include memanalyze.pl in the dist archive 2002-03-01 09:19:28 +00:00
Daniel Stenberg
9e9883082e moved memanalyze.pl into the tests dir 2002-03-01 09:18:54 +00:00
Daniel Stenberg
71440df4c7 Nico Baggus added more error codes to the VMS stuff. 2002-02-28 23:55:18 +00:00
Daniel Stenberg
80b004a57d Wesley Laxton's CURLOPT_PREQUOTE work 2002-02-28 23:31:23 +00:00
Daniel Stenberg
ea8476a2dc Ralph Mitchell's SSL problems made me notice that we didn't increase the
header byte counter properly
2002-02-28 15:13:35 +00:00
Daniel Stenberg
cb85ca18ab more fancy alloc, we store the size in each allocated block so that we
can destroy the full allocated area just before we free it
2002-02-28 12:37:05 +00:00
Daniel Stenberg
f1103b95cf set CURL_MEMDEBUG to enable memory debugging in case curl is compiled
with it
2002-02-28 12:36:25 +00:00
Daniel Stenberg
aa5ff53bcf added -t for trace, helps searching for leaks and similar 2002-02-28 12:35:54 +00:00
Daniel Stenberg
907dabed5d memory debugging is now only enabled if the CURL_MEMDEBUG environment
variable is set when curl is invoked
2002-02-28 12:35:09 +00:00
Daniel Stenberg
0cacbc892c always allocates at least 64 bytes for real, and damages them before free 2002-02-28 12:18:15 +00:00
Daniel Stenberg
6753c3c715 made building outside the source tree work again, Kevin Roth reported 2002-02-27 15:09:23 +00:00
Daniel Stenberg
36e1363e3d minor edit 2002-02-27 12:40:01 +00:00
Daniel Stenberg
d1a711eb6a oops, we weren't doing HTTPS - now we are 2002-02-27 07:50:22 +00:00
Daniel Stenberg
d8dea4dcc7 test 304, HTTPS multipart formpost 2002-02-27 07:49:01 +00:00
Daniel Stenberg
ca161737bc use the correct time in the cookie jar 2002-02-27 07:41:46 +00:00
Daniel Stenberg
3612c3774e made Max-Age work as defined in the RFC.
My brain-damaged fix to not parse spaces as part of the value is now fixed
to instead strip off trailing spaces from values.
2002-02-27 07:38:04 +00:00
Daniel Stenberg
e6a65bb3ef modified cookie expire date 2002-02-26 13:38:12 +00:00
Daniel Stenberg
ff291eee48 new field1 functionality testing too 2002-02-26 13:18:39 +00:00
Daniel Stenberg
66b8f48a88 When saving a cookie jar, set field 1 (counted from 0) properly to TRUE if the
domain starts with a dot.
2002-02-26 13:18:08 +00:00
Daniel Stenberg
634760cbdc test 31: "HTTP with weirdly formatted cookies and cookiejar storage" 2002-02-26 13:09:46 +00:00
Daniel Stenberg
a23a897ad2 removed crash on weird input, this also better discards silly input 2002-02-26 13:07:53 +00:00
Daniel Stenberg
d9c244278d 7.9.5-pre4 commit 2002-02-26 07:59:43 +00:00
Daniel Stenberg
b6c4185b27 more custom stuff, much about dealing with cookies 2002-02-25 15:25:34 +00:00
Daniel Stenberg
5896d35e72 a never ending stream of things to do... 2002-02-25 14:09:31 +00:00
Daniel Stenberg
b4dfdd8bbc use env to run perl 2002-02-25 14:08:51 +00:00
Daniel Stenberg
e6ed3478ea automake usage and options cleanup 2002-02-25 14:08:18 +00:00
Daniel Stenberg
db08d9c6b9 happy new year 2002-02-25 13:25:33 +00:00
Daniel Stenberg
9490278ece We got this web server's embryo from Georg Horn, muchos gracias. 2002-02-25 12:49:21 +00:00
Daniel Stenberg
fd8bf5f171 the test suite http server is now automake'd 2002-02-25 12:45:48 +00:00
Daniel Stenberg
c9bc14a222 use the pid file, use the automake subdir 2002-02-25 12:45:20 +00:00
Daniel Stenberg
63708cbfb0 automake this dir too 2002-02-25 12:44:58 +00:00
Daniel Stenberg
d9f307623c use the former logfile name again since the ftp server also uses that... 2002-02-25 12:14:24 +00:00
Daniel Stenberg
540f77a627 we actually ran all tests just now, Feb 25th, 2002 12:11 MET, with the
new http server on Linux
2002-02-25 11:12:10 +00:00
Daniel Stenberg
71bb2d0b8b reply/postcmd support for "wait" 2002-02-25 11:11:03 +00:00
Daniel Stenberg
87dc44e434 portability, step one, use a config.h.in file 2002-02-25 11:00:16 +00:00
Daniel Stenberg
29e0fcd091 generate a config file for the test suite http server too 2002-02-25 10:56:37 +00:00
Daniel Stenberg
2e9a798f09 create the pidfile and store the pid on invoke 2002-02-25 10:27:29 +00:00
Daniel Stenberg
b32a39f44f oops, #if not #ifdef 2002-02-25 10:12:04 +00:00
Daniel Stenberg
d86f9611b3 support HUGE requests too 2002-02-25 09:42:58 +00:00
Daniel Stenberg
6a62fc4a40 make sure -d is treated as a POST request and thus should fail if mixed
with -I for example
2002-02-25 09:08:28 +00:00
Daniel Stenberg
7cdd6455d7 modified the command to fail properly! ;-) 2002-02-25 09:07:26 +00:00
Daniel Stenberg
e4fefd088d cygnus can't include winsock.h even though it has it, which is why we need
to make a different and more complicated check for when to include it
2002-02-25 08:20:29 +00:00
Daniel Stenberg
95e601e2b1 "Yet Another Geek" made %{content_type} work in the -w/--writeout option. 2002-02-25 07:40:49 +00:00
Daniel Stenberg
b1ffe7b74a better time selection for the connect timeout 2002-02-22 15:44:37 +00:00
Daniel Stenberg
417c8fb602 16 tests OK 2002-02-22 15:40:17 +00:00
Daniel Stenberg
85efa64c31 cut off big parts of the banner 2002-02-22 15:17:41 +00:00
Daniel Stenberg
d8cb026e80 make sure the custom config-*.h files are in the dist as well 2002-02-22 15:12:17 +00:00
Daniel Stenberg
41dd5121f0 adjusted to work on test case 11 better 2002-02-22 13:54:06 +00:00
Daniel Stenberg
94482d7ca5 use -W too 2002-02-22 13:53:41 +00:00
Daniel Stenberg
4d0e51aead fixed to work with 'nonewline' and thus this passes OK with the new http
server and things
2002-02-22 10:51:19 +00:00
Daniel Stenberg
ae8a8c8ba4 support for using protocol without a trailing newline 2002-02-22 10:50:36 +00:00
Daniel Stenberg
7d043f46d5 hide debug output from screen, use log/ for logfiles 2002-02-22 10:40:05 +00:00
Daniel Stenberg
cbca19d6c2 lib/config.h.in added to dist 2002-02-22 07:51:23 +00:00
Daniel Stenberg
b40b9677b6 VMS adjustments 2002-02-20 23:24:04 +00:00
Daniel Stenberg
c80ad865db new from Nico! 2002-02-20 13:48:03 +00:00
Daniel Stenberg
758eae49ab four more bugfixes, one VMS adjustment 2002-02-20 13:47:36 +00:00
Daniel Stenberg
721b05e343 Nico Baggus' VMS tweaks 2002-02-20 13:46:53 +00:00
Daniel Stenberg
a333bddeeb Andrés García solved bug report #515228 by making sure the progress meter
is updated even if everything is read in one single pass, as the windows
functions apparently do more often than other systems.
2002-02-20 13:38:34 +00:00
Daniel Stenberg
4c6a52fe90 corrected reference to multi-using examples 2002-02-19 11:02:01 +00:00
Daniel Stenberg
792d73a9cf include winsock.h on windows boxen to work more smoothly there 2002-02-19 11:00:34 +00:00
Daniel Stenberg
9a95a3f8c3 moved the config.h to lib/config.h 2002-02-19 01:06:56 +00:00
Daniel Stenberg
485edb777f a minor step forwards 2002-02-19 01:04:46 +00:00
Daniel Stenberg
a8c3431ae9 use the new HTTP server input file 2002-02-19 01:04:31 +00:00
Daniel Stenberg
6fe4a6fa9a cut off the old perl one, this only runs the C coded version 2002-02-19 01:03:45 +00:00
Daniel Stenberg
6d8c7356d6 fixed the huge text just in case anyone actually reads it 2002-02-19 00:26:44 +00:00
Daniel Stenberg
a782c96e81 no .. in path 2002-02-19 00:26:25 +00:00
Daniel Stenberg
c795123cd5 fixed a long long mistake 2002-02-18 23:32:45 +00:00
Daniel Stenberg
0ec370e6fb auth on multiple hosts with follow-location 2002-02-18 23:17:57 +00:00
Daniel Stenberg
3d5732d4e0 Rick Richardson's getaddrinfo() usage fix to speed up name resolves 2002-02-18 23:12:37 +00:00
Daniel Stenberg
b795929858 INADDR_NONE should be in_addr_t to work with 64bit archs better.
Really, we should only #define this in one file, not both here and in
connect.c!
2002-02-18 22:59:26 +00:00
Daniel Stenberg
535258ffe4 Philip Gladstone's size problem in add_buffer_send() 2002-02-18 22:41:52 +00:00
Daniel Stenberg
cc161b96ac 4 fixes 2002-02-18 10:51:50 +00:00
Daniel Stenberg
5c4b422b18 offer SSL verification callback,
add 'headers=' in client formpost
2002-02-18 10:51:28 +00:00
Daniel Stenberg
89bad584c3 updated LDAP URL syntax references by Aron Roberts 2002-02-18 10:47:27 +00:00
Daniel Stenberg
e21926f7f0 connection timeout comparison fix by Emil 2002-02-18 10:05:18 +00:00
Daniel Stenberg
e452f467d4 Philip Gladstone's 64-bit issues corrected.
Reminder for the future: when we're using malloc() we MUST include <stdlib.h>
as otherwise 64bit archs go bananas.

Bug report #517687
2002-02-17 14:55:35 +00:00
Daniel Stenberg
dfda7ba456 corrected the Expect: ignore, made Content-Type: possible to skip 2002-02-17 14:42:44 +00:00
Daniel Stenberg
feb6b6445e Giaslas Georgios's Host: over proxy fix 2002-02-17 11:17:37 +00:00
Daniel Stenberg
0b57fa9c51 http server added to CVS, config*h files moved 2002-02-07 15:13:11 +00:00
Daniel Stenberg
55c6f60c90 ugh. the VMS stuff must've been like that for a reason, I put it back again 2002-02-07 14:47:41 +00:00
Daniel Stenberg
9def011e8e moved the config-* files to lib/Makefile.am 2002-02-07 14:35:14 +00:00
Daniel Stenberg
7cf6e8c9cc moved the config-* files here from the ../Makefile.am 2002-02-07 14:34:34 +00:00
Daniel Stenberg
cdee43aa59 use the config files in this directory now, not ../ 2002-02-07 14:33:36 +00:00
Daniel Stenberg
9c25b58b4c moved the config-*.h files from root to the lib/ dir 2002-02-07 14:32:28 +00:00
Daniel Stenberg
83f35463f5 added note about persistence in the server 2002-02-07 12:52:04 +00:00
Daniel Stenberg
818cdb879e POSTs seem to work somewhat now 2002-02-07 12:42:59 +00:00
Daniel Stenberg
3eead2d6c4 port number fix, now stores the processed request sent to the server 2002-02-07 12:40:06 +00:00
Daniel Stenberg
5cffe055ad added Cris Bailiff's CAdir option suggestion 2002-02-07 10:43:43 +00:00
Daniel Stenberg
3d4511daf3 the initial C code for the new HTTP test server 2002-02-07 09:39:15 +00:00
Daniel Stenberg
4748b40ad9 changes since 7.9.4 2002-02-07 09:34:43 +00:00
Daniel Stenberg
c40b4f6c39 don't add 2 to the post size, that was a previous mistake because there
was an extra CRLF added to the post data
2002-02-07 09:32:40 +00:00
Daniel Stenberg
d3b96dd394 Miklos Nemeth windows update 2002-02-06 16:04:03 +00:00
Daniel Stenberg
f946df640b Miklos Nemeth added comments 2002-02-06 16:03:28 +00:00
Daniel Stenberg
fef78bd6f1 Miklos Nemeth improved the windows section 2002-02-06 16:01:10 +00:00
Daniel Stenberg
9e6cc86bf7 Miklos Nemeth improved 2002-02-06 16:00:55 +00:00
Daniel Stenberg
b544c5fa5c ARGH the CRLF I removed recently was not only done after the initial
content-type header, it was used for each part and thus without this it
failed MISERABLY. *smacks forehead*
2002-02-06 15:48:53 +00:00
Daniel Stenberg
afa64ee31f a few of the SSL options were added in 7.9.3 and it should be noted
accordingly
2002-02-06 09:49:34 +00:00
Daniel Stenberg
e9bfef0eb1 Brent Beardsley found the content-type bug! 2002-02-06 07:02:13 +00:00
Daniel Stenberg
ddbcccd43d Kevin Roth's discovered SSL download problem 2002-02-05 15:33:00 +00:00
Daniel Stenberg
5370d7a6eb 7.9.4 2002-02-05 11:43:29 +00:00
Daniel Stenberg
685b180ab6 7.9.4-pre2 2002-02-04 09:51:41 +00:00
Daniel Stenberg
9dab850874 Eric Melville fixed spell mistakes on a few places 2002-02-03 15:00:51 +00:00
Daniel Stenberg
0d5bfe883e Andreas Damm made getdate use gmtime_r if available 2002-02-01 11:11:26 +00:00
Daniel Stenberg
cc2f1d4894 Added the recycle handles chapter
Added most of the Customizing Operations chapter
2002-01-31 14:41:01 +00:00
Daniel Stenberg
a8dd13db4c struct HttpHeader died ages ago, corrected comments 2002-01-31 14:24:55 +00:00
Daniel Stenberg
325391aef9 Albert Chin:
Forgot one case. On HP-UX 11.00, gethostbyname_r() is properly defined
if -D_REENTRANT is used. Without it, the compiler still accepts the
function prototype but gives a warning about hostent_data going out of
scope. This is because struct hostent_data is not declared. So, we
force an error by trying to set a variable to the struct.
2002-01-31 07:53:20 +00:00
Daniel Stenberg
3474ec4ecb _num_chars did wrong when called with a number that starts with 1! 2002-01-31 07:51:06 +00:00
Daniel Stenberg
ec1736d488 corrected the docs for CURLINFO_FILETIME 2002-01-31 07:17:32 +00:00
Daniel Stenberg
4522579688 Giaslas Georgios provided docs for CURLINFO_CONTENT_TYPE 2002-01-31 07:10:41 +00:00
Daniel Stenberg
907a6e0eed Georg Horn the previous SSL_read() fix, this was actually the fix I did
on my test machine! :-)
2002-01-30 21:49:29 +00:00
Daniel Stenberg
d20186a7b8 I have too many ideas of what to mention in this docs 2002-01-30 15:35:02 +00:00
Daniel Stenberg
b28051881e Georg Horn found yet another SSL reading problem caused by the non-blocks.
This was a real bummer!
2002-01-30 15:11:47 +00:00
Daniel Stenberg
bdea56cd3f big-time alert that this doesn't work 2002-01-30 10:18:47 +00:00
Daniel Stenberg
8a3ec2c659 the interface is simply called the "C" one these days 2002-01-30 10:07:49 +00:00
Daniel Stenberg
14e9420d2c extended the proxy chapter mucho 2002-01-30 10:04:40 +00:00
Daniel Stenberg
5b58e61f28 now re-seed by force (even if already seeded) if a random file or egd socket
is given
2002-01-30 08:17:23 +00:00
Daniel Stenberg
be2f3071b5 conn->upload_bufsize exists no more 2002-01-29 20:34:30 +00:00
Daniel Stenberg
85dbf82d93 append a CRLF pair after the content-type line 2002-01-29 20:32:10 +00:00
Daniel Stenberg
a9c4963cc0 removed three loust fprintf()s
removed the initial CRLF in the formpost, as they are part of the request
and should be written by the code in http.c!
2002-01-29 20:30:56 +00:00
Daniel Stenberg
a4934387d5 upload progress counter fix, removed the adjustable upload buffer size 2002-01-29 20:28:59 +00:00
Daniel Stenberg
e88a2ec6fc no more adjustable upload buffer size, we use non-blocking sockets now so
this work-around is not needed anymore!
2002-01-29 20:28:26 +00:00
Daniel Stenberg
0666960173 nine items since 7.9.3 2002-01-29 14:12:12 +00:00
Daniel Stenberg
f114caca90 - T. Bharath pointed out that we seed SSL on every connect, which is a time-
consuming operation that should only be needed to do once. We patched
  libcurl to now only seed on the first connect when unseeded. The seeded
  status is global so it'll now only happen once during a program's life time.
2002-01-29 14:11:38 +00:00
Daniel Stenberg
9468c9c796 bad tag 2002-01-29 10:55:57 +00:00
Daniel Stenberg
76c53c690c Giaslas Georgios introduced CURLINFO_CONTENT_TYPE 2002-01-29 10:49:32 +00:00
Daniel Stenberg
c341b11aaf Steve Marx helped us realize that we shouldn't treat customrequest as a
request of its own, it just changes the keyword of a request.
2002-01-28 19:31:26 +00:00
Daniel Stenberg
6212e6990a someone should have me punished, but this bug made curl bug seriously
on IPv4-linux machines
2002-01-28 19:23:18 +00:00
Daniel Stenberg
28049a183c don't count a custom request as a request type of its own, it is merely
a modifier of another type
2002-01-28 19:22:40 +00:00
Daniel Stenberg
5d3dd7911e newly generated 2002-01-28 18:39:55 +00:00
Daniel Stenberg
ae8375516b Andreas Damm made it reentrant safe! 2002-01-28 18:39:40 +00:00
Daniel Stenberg
e3f10eb825 no longer add CRLF _after_ POST data, it should not be needed. Pedro Neves
pointed out this ugliness.
2002-01-27 11:51:11 +00:00
Daniel Stenberg
2b1f683239 set header and request size to 0 before each *_perform() 2002-01-27 11:49:17 +00:00
Daniel Stenberg
a2b19c9a63 postit.c is removed, it used the deprecated curl_formparse() and may
encourage people to use bad functions
2002-01-25 10:07:07 +00:00
Daniel Stenberg
4146ce8267 bug report #508235 identified a non-working Location: following, and this
little fix seems to correct it. Another case where we just returned and
didn't shut off the reading. This bug was introduced in 7.9.3 due to the
new internal "order".
2002-01-25 08:35:49 +00:00
Daniel Stenberg
170bd6dafc don't install the example programs! :-O 2002-01-24 07:38:01 +00:00
Daniel Stenberg
7e16ec8724 7.9.3 2002-01-23 18:10:00 +00:00
Daniel Stenberg
8c459156f8 7.9.3 public 2002-01-23 18:01:16 +00:00
Daniel Stenberg
2db894807b Andrés García found out that we didn't properly stop reading from a connection
after the headers on a HEAD request. This bug was introduced in 7.9.3 and was
not present earlier.
2002-01-23 07:15:32 +00:00
Daniel Stenberg
95ceeb6e0b more about passwords and started about proxies 2002-01-22 13:41:00 +00:00
Daniel Stenberg
c9c00d2a23 verify big files 2002-01-22 13:10:16 +00:00
Daniel Stenberg
1afe49864d minor edit 2002-01-22 08:22:04 +00:00
Daniel Stenberg
6924bee3a0 added --cc description and an example 2002-01-21 14:57:07 +00:00
Daniel Stenberg
39d4552dab pre4 2002-01-21 12:11:45 +00:00
Daniel Stenberg
a23c63738f HTTP POST explained 2002-01-21 10:54:56 +00:00
Daniel Stenberg
e911945c55 #505514, as correctly pointed out by Antonio (anton@concord.ru), trying to
post a non-existing file should include nothing, not an error text!
2002-01-19 11:08:05 +00:00
Daniel Stenberg
6d58d13710 mingw fix, mac os x fix, long long check removed from configure,
--enable-debug uses even stricter options now
2002-01-18 15:16:08 +00:00
Daniel Stenberg
0b177cb165 newly generated 2002-01-18 15:14:35 +00:00
Daniel Stenberg
3e31b619de added more text in the 'passwords' section 2002-01-18 15:08:32 +00:00
Daniel Stenberg
f925979b2f satisfy gcc -Wundef 2002-01-18 13:10:41 +00:00
Daniel Stenberg
49f7fa82b9 #if [undefined] => #ifdef [undefined] 2002-01-18 13:04:48 +00:00
Daniel Stenberg
e4cd4cf3f3 playing with more strict gcc warnings with --enable-debug 2002-01-18 13:00:13 +00:00
Daniel Stenberg
e74b20926d prevents gcc -Wcast-align from complaining 2002-01-18 12:59:33 +00:00
Daniel Stenberg
a312127c91 made gcc -Wcast-align happy 2002-01-18 12:56:10 +00:00
Daniel Stenberg
1dc5bf4f73 #ifndef and #define magic to prevent compiler warnings when doing #if BLA
where BLA is undefined
2002-01-18 12:53:05 +00:00
Daniel Stenberg
01cfe670c5 updated to 2002 status ;-) 2002-01-18 12:48:36 +00:00
Daniel Stenberg
fd307bfe29 cut off a big piece of comment and added a pointer to the Trio web page
should anyone ever want a good printf() clone
2002-01-18 10:45:03 +00:00
Daniel Stenberg
a00de093a7 commented out the 'long long' and 'long double' checks, as we don't really
use them anyway and they cause warnings in lib/mprint.c
2002-01-18 10:43:55 +00:00
Daniel Stenberg
7bfe853af3 I wish I could type. Anyway, this proved it is a good habit to put the NULL
on the left side of comparisons...
2002-01-18 10:36:25 +00:00
Daniel Stenberg
cbaecca8e9 added typecast for a malloc() return, and added check for NULL 2002-01-18 10:30:51 +00:00
Daniel Stenberg
8edfb370a8 Added #include <errno.h> 2002-01-18 09:25:58 +00:00
Daniel Stenberg
4c08c8f7db Andrés García patched. It now checks for EWOULDBLOCK properly on windows
boxes.
2002-01-18 08:03:54 +00:00
Daniel Stenberg
c174680a03 patched by Andrés García 2002-01-18 08:03:12 +00:00
Daniel Stenberg
cb5f6e18e6 7.9.3-pre3 2002-01-17 14:34:26 +00:00
Daniel Stenberg
b798e7a5ae correct ssl version, fixed ssl writes, solved time-out disconnect without
text, fixed dns cache problem, made it compile with openssl before 0.9.5
again and extended libcurl-the-guide a bit more
2002-01-17 14:25:49 +00:00
Daniel Stenberg
5deab7ad27 more text added 2002-01-17 14:24:25 +00:00
Daniel Stenberg
12cdfd282d added a comment about this example only works with 7.9.3 and newer libs 2002-01-17 13:45:19 +00:00
Daniel Stenberg
eba8035e12 Richard Archer made it compile and build with OpenSSL versions prior to
0.9.5
2002-01-17 10:40:13 +00:00
Daniel Stenberg
edcbf4350b include our own sprintf() prototype to make it return sensible data on
all platforms, I also edited a few data types slightly to prevent my
compiler from warning on comparisons between signed and unsigned values
2002-01-17 08:03:48 +00:00
Sterling Hughes
9289ea471f Get this working, still need to check for leaks and such, but should be
fine..
2002-01-17 07:38:25 +00:00
Sterling Hughes
7d06185aa6 Make the keys for hostcache entries be in the format::
host:port, so accessing curl.haxx.se on port 80 would yield a key value
of ::
curl.haxx.se:80
2002-01-17 06:55:37 +00:00
Daniel Stenberg
01ecb1d7e7 filled-in text in the "Building" chapter and added a "libcurl with C++"
chapter
2002-01-17 00:27:56 +00:00
Daniel Stenberg
e177f14595 SSL writes passed back a silly length... 2002-01-16 23:28:58 +00:00
Daniel Stenberg
5c6eddcadd fixed time-out returned without error text set 2002-01-16 22:26:01 +00:00
Daniel Stenberg
b3b4786990 Kevin Roth's SSLeay() patch, slightly edited by me. Works with OpenSSL 0.9.5
now.
2002-01-16 17:45:08 +00:00
Daniel Stenberg
fbe2907599 7.9.3-pre2 2002-01-16 15:12:12 +00:00
Daniel Stenberg
343da8d4b3 --cc and working non-blocking sockets uploads 2002-01-16 15:04:37 +00:00
Daniel Stenberg
8d97792dbc - shrunk the BUFSIZE define from 50K to 20K
- made a separate buffer for uploads (due to the non-blocking stuff)
- added two connectdata struct fields for non-blocking uploads
2002-01-16 14:53:19 +00:00
Daniel Stenberg
8d07c87be7 modified to deal with the new non-blocking versions of Curl_read() and
Curl_write().
2002-01-16 14:50:53 +00:00
Daniel Stenberg
ed21701df3 Curl_write's 5th argument now is signed 2002-01-16 14:49:51 +00:00
Daniel Stenberg
df01507582 Curl_read() and Curl_write() are both now adjusted to return properly in
cases where EWOULDBLOCK or equivalent is returned. We must not block.
2002-01-16 14:49:08 +00:00
Daniel Stenberg
f2bda5fd5b Curl_write() now takes a different 5th argument 2002-01-16 14:47:50 +00:00
Daniel Stenberg
cba9838e8f Somewhat ugly fix to deal with non-blocking sockets. We just loop and try
again. THIS IS NOT A NICE FIX.
2002-01-16 14:47:00 +00:00
Daniel Stenberg
b6dba9f5dd Somewhat ugly fix to deal with non-blocking sockets. We just loop and try
again. THIS IS NOT A NICE FIX. We should/must make a select() then and only
retry when we can write to the socket again.
2002-01-16 14:46:00 +00:00
Daniel Stenberg
6e9d1617c6 added support for --cc to output the compiler name. This makes it possible
to compile libcurl stuff without any prior knowledge:

cc=`curl-config --cc`
cflags=`curl-config --cflags`
libs=`curl-config --libs`

$cc $cflags $libs -o example example.c

Or if you prefer, the oh-so-cool single-line version:

`curl-config --cc --cflags --libs` -o example example.c
2002-01-16 14:20:06 +00:00
Daniel Stenberg
ea811fee52 added a somewhat cool single-line command that builds most example sources
on unix-like systems
2002-01-16 14:13:54 +00:00
Daniel Stenberg
7391fd8f6a initial attempt to write a tutorial-like libcurl guide 2002-01-15 08:22:00 +00:00
Daniel Stenberg
6c00c58f2a fixed non-blocking reads, fixed ssl sessions, in_addr_t and more non-blocking 2002-01-14 23:32:57 +00:00
Daniel Stenberg
4931fbce49 Curl_read() now returns a negative return code if EWOULDBLOCK or similar 2002-01-14 23:14:59 +00:00
Daniel Stenberg
fefc7ea600 a memory leak when name lookup failed is now removed 2002-01-14 23:14:24 +00:00
Daniel Stenberg
d220389647 Stoned Elipot's patch for the in_addr_t test 2002-01-14 07:53:09 +00:00
Sterling Hughes
a1f910c159 Remove erroneous include, setup.h is included one line above 2002-01-14 05:36:28 +00:00
Daniel Stenberg
e4866563de Götz Babin-Ebell updated with some new 7.9.3 features 2002-01-13 11:32:36 +00:00
Daniel Stenberg
47f45aa229 Götz Babin-Ebell provided some documentation for the ENGINE stuff 2002-01-13 11:32:05 +00:00
Daniel Stenberg
affe334675 added http-post.c 2002-01-10 09:00:02 +00:00
Daniel Stenberg
ee7e184e26 slightly extended to mention that -v and -i are good options to use when
reporting bugs
2002-01-10 07:38:53 +00:00
Daniel Stenberg
bec0ebacf1 bad comment begone 2002-01-09 13:23:01 +00:00
Daniel Stenberg
5bd6d631c6 cut off argc and argv as well 2002-01-09 13:22:31 +00:00
Daniel Stenberg
fd1799f3bb Cleaned up this example to make it even simpler. 2002-01-09 13:22:03 +00:00
Daniel Stenberg
d84a0c51e0 Cris Bailiff found out that when the SSL session cache was filled, libcurl
would crash. This corrects the problem.
2002-01-09 09:38:37 +00:00
Daniel Stenberg
d9a7c7de51 David Bentham's updated QNX notification 2002-01-08 23:27:42 +00:00
Daniel Stenberg
d57e09889a added a missing failf() before returning an error code 2002-01-08 23:23:24 +00:00
Daniel Stenberg
eecb86bfb0 this seems to correct the SSL reading problem introduced when switching
over to non-blocking sockets, but this loops very nastily. We should return
back to the select() and wait there until more data arrives, not just blindly
attempt again and again...
2002-01-08 23:19:32 +00:00
Daniel Stenberg
0b1197936c I made the write callback create the file the first time it gets called so
that it won't create an empty file if the remote file doesn't exist
2002-01-08 13:05:44 +00:00
Daniel Stenberg
b545ac6391 test case 38 added a few new requirements 2002-01-08 09:32:41 +00:00
Daniel Stenberg
a922132e4a updated 2002-01-08 09:32:21 +00:00
Daniel Stenberg
9474e8d6d2 added some tracability 2002-01-08 09:32:10 +00:00
Daniel Stenberg
6328428568 test case 38, try a HTTP download resume without the server supporting
ranges
2002-01-08 09:31:40 +00:00
Daniel Stenberg
ea9a88a9b8 another example source added 2002-01-08 08:26:22 +00:00
Daniel Stenberg
aec7358ca4 7.9.3 pre-release commit 2002-01-08 08:25:44 +00:00
Daniel Stenberg
3c334b2bb6 non-blocking sockets, DNS caching updated, cookies corrected, bool is now
unsigned everywhere
2002-01-08 07:22:33 +00:00
Daniel Stenberg
75bba0da92 added two typecasts to prevent compiler (gcc3) warnings 2002-01-08 07:06:07 +00:00
Sterling Hughes
c0bfe7be15 1) the dns_cache_timeout should be an integer, not a bool
2) in the curl_dns_cache_entry structure, timestamp should be
a time_t instead of an integer (although I doubt it matters).
2002-01-08 04:30:59 +00:00
Sterling Hughes
22ac08e06d Add support for DNS cache timeouts via the CURLOPT_DNS_CACHE_TIMEOUT option.
The default cache timeout for this is 60 seconds, which is arbitrary and
completely subject to change :)
2002-01-08 04:26:47 +00:00
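A short usage sketch for that option (the URL and the 120-second value are placeholders; only the option name and its unit, seconds, come from the commit message):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/"); /* placeholder URL */
    /* keep resolved host names cached for 120 seconds instead of the
       60-second default mentioned in the commit message */
    curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 120L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}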
Daniel Stenberg
87037136ef As identified in bug report #495290, the last "name=value" pair in a
Set-Cookie: line was ignored if it didn't end with a trailing
semicolon. This is indeed wrong syntax, but there are high-profile web sites
out there sending cookies like that so we must make a best-effort to parse
them.
2002-01-07 23:05:36 +00:00
Daniel Stenberg
2182e37433 the bool typedef is now made unsigned, to make sure it stays that on all
platforms, unrelated to what they might prefer by default
2002-01-07 22:47:21 +00:00
Daniel Stenberg
1de82b220d removed silly check for >=0 of a supposedly unsigned value! 2002-01-07 22:46:38 +00:00
Sterling Hughes
bd878756fc Probably not necessary, but good practice. 2002-01-07 20:55:35 +00:00
Sterling Hughes
8d7f402efb Make caching work with threads now, there are now three cases:
- Use a global dns cache (via setting the tentatively named
    CURLOPT_DNS_USE_GLOBAL_CACHE option to true)
    - Use a per-handle dns cache, by default
    - Use a pooled dns cache when in the "multi" interface
2002-01-07 20:52:32 +00:00
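The three cases roughly map onto option settings like this (a sketch assuming the option name as given in the commit message; nothing needs to be set for the per-handle or multi-interface cases):

#include <curl/curl.h>

void pick_dns_cache(CURL *easy)
{
  /* case 1: opt in to one global DNS cache shared by all easy handles */
  curl_easy_setopt(easy, CURLOPT_DNS_USE_GLOBAL_CACHE, 1L);

  /* case 2: the per-handle cache is the default, so leaving the option
     unset (or setting it back to 0L) keeps that behaviour */

  /* case 3: handles driven through the "multi" interface use the pooled
     cache automatically, no option needs to be set */
}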
Daniel Stenberg
d3299beec7 Modified to use non-blocking sockets all the time. 2002-01-07 18:38:01 +00:00
Daniel Stenberg
f9192db358 VC++ makefile, HTTP 204, cookie fix, non-blocking socket for better SSL
connection timeout
2002-01-07 16:03:36 +00:00
Daniel Stenberg
c69c0c0446 added proper breaks in the switch() 2002-01-07 15:24:52 +00:00
Daniel Stenberg
deb2911c0e Added David Bentham's notes about QNX and FD_SETSIZE 2002-01-07 15:14:01 +00:00
Daniel Stenberg
e31a306a38 HTTP response 204 should be treated similarly to 304, that is we must not
expect (nor read) any response-body
2002-01-07 14:57:18 +00:00
Daniel Stenberg
d9a7773011 added precautions to not go insane when two matching cookies end up in the
cookie list, even though they're not supposed to do that...
2002-01-07 14:56:15 +00:00
sm
2b14916813 Add hash and llist to VC dsp file 2002-01-04 23:48:28 +00:00
sm
1d1530e14c Add hash and llist to VC makefile 2002-01-04 23:47:07 +00:00
Daniel Stenberg
b4fdc025a8 -l lists all tests 2002-01-04 13:20:17 +00:00
Daniel Stenberg
f1c14fe0b4 The former -c is "-C -" these days 2002-01-04 13:15:07 +00:00
Daniel Stenberg
38306cda54 dns cache, ftp response read, 64bit fixes, printf replaces, inet_ntoa_r
corrections
2002-01-04 09:57:57 +00:00
Daniel Stenberg
5a0f0023cf replaced printf() => Curl_sendf() 2002-01-04 09:53:39 +00:00
Daniel Stenberg
6dcdb8b821 removed a commented line 2002-01-04 09:53:10 +00:00
Daniel Stenberg
781f52a287 fixed an inet_ntoa() occurrence to use inet_ntoa_r() if it is available.
I also replaced all printf() calls with calls to Curl_failf()
2002-01-04 09:52:44 +00:00
Daniel Stenberg
f75ff58b4b an unconditional occurrence of inet_ntoa() now uses inet_ntoa_r() on all
platforms that have such a function.
This affects multi-threaded libcurl running on IPv4 systems that have VERBOSE
switched on. The previous version was risking that another thread overwrote
the data before it was read out in this thread. There could possibly also
be a slight risk that the data isn't zero terminated for a short while and
thus could cause the thread to crash...
2002-01-04 09:38:52 +00:00
Daniel Stenberg
ae9bf16dee #include the local "inet_ntoa_r.h" file if no proto was found in the global
header directory but the function *is* present!
2002-01-04 09:35:23 +00:00
Daniel Stenberg
17a8bf212f The buffer in ftp_pasv_verbose(), used for gethostbyaddr_r(), is now defined
to become properly 8-byte aligned on 64-bit archs. Philip Gladstone reported.
2002-01-04 09:17:52 +00:00
Daniel Stenberg
4fc76afef4 The FTP response lines are now passed to the function callback registered for
headers.
2002-01-04 09:03:11 +00:00
Daniel Stenberg
a31155a72a multi stuff from the multi-dev branch 2002-01-03 15:03:57 +00:00
Daniel Stenberg
75601f7924 multi interface example/test sources from the multi-dev branch 2002-01-03 15:03:14 +00:00
Daniel Stenberg
8b6314ccfb merged the multi-dev branch back into MAIN again 2002-01-03 15:01:22 +00:00
Daniel Stenberg
6de7dc5879 Sterling Hughes' provided initial DNS cache source code. 2002-01-03 10:22:59 +00:00
Daniel Stenberg
6aaee5f23b minor changes 2002-01-03 09:43:17 +00:00
Daniel Stenberg
dd06dcebe1 added required software and Guido Neitzer's Mac OS X build instructions 2002-01-03 09:12:41 +00:00
Daniel Stenberg
b35c26b751 added a little percentage for "ok coverage" 2002-01-03 08:22:05 +00:00
Daniel Stenberg
128f341635 Changed how -I/--head works when --include is also used... Test case 104
stopped working after the Dec 20 fixes that now support FTP operations to
skip the transfer phase.
2002-01-03 08:07:29 +00:00
Daniel Stenberg
e48bc1be48 Philip Gladstone's fixes 2002-01-03 07:23:21 +00:00
Daniel Stenberg
0077b9c0a2 pass an 'int' as the third argument to bind() 2002-01-03 00:51:33 +00:00
Daniel Stenberg
fe37fb5921 Philip Gladstone's 64-bit sparc native compiler compatibility issues fixed. 2002-01-02 10:06:47 +00:00
Daniel Stenberg
221ecd0a30 the changes from 1999 is now in CHANGES.1999 2001-12-21 09:55:13 +00:00
Daniel Stenberg
560492707d moved the changes from 1999 into its own file 2001-12-21 09:54:45 +00:00
Daniel Stenberg
dfdf4916fa rewrote 3.9 to be more generic with more languages:
"3.9 How do I use curl in my favourite programming language?"
2001-12-21 09:20:04 +00:00
Daniel Stenberg
97a8c98886 spell 2001-12-21 08:10:34 +00:00
Daniel Stenberg
62fb70e9d1 recent fixes 2001-12-21 08:02:35 +00:00
Daniel Stenberg
8a9098a36c *cool* fix by Björn Stenberg, makes proxy transfers work better...! :-) 2001-12-20 15:58:22 +00:00
Daniel Stenberg
28027c2aa2 If nobody is set we won't download any FTP file. If include_header is set,
we return a set of headers not more. This enables FTP operations that don't
transfer any data, only perform FTP commands.
2001-12-20 11:22:01 +00:00
Daniel Stenberg
d60029d66e Added 4.5.6 "301 Moved Permanently", as a reply to bug report #495215 2001-12-19 23:25:04 +00:00
Daniel Stenberg
226fe8bdf9 Götz Babin-Ebell's contributed "simplessl.c" example source code 2001-12-18 10:13:41 +00:00
Daniel Stenberg
33237b4502 run automake last 2001-12-18 01:00:24 +00:00
Daniel Stenberg
af6c394785 Götz Babin-Ebell's OpenSSL ENGINE patch 2001-12-17 23:01:39 +00:00
Daniel Stenberg
558d12d7f6 strip trailing CRs 2001-12-17 10:32:10 +00:00
Daniel Stenberg
bfa8a6da26 cut off the description to prevent people from using this! 2001-12-17 09:33:54 +00:00
Daniel Stenberg
aa6b3d22a2 Marcus Webster's added CURLFORM_CONTENTHEADER docs 2001-12-16 12:54:42 +00:00
Daniel Stenberg
2eb355733f Marcus Webster's newly added CURLFORM_CONTENTHEADER 2001-12-14 12:59:16 +00:00
Daniel Stenberg
e66cdacb93 minor changes 2001-12-13 07:16:27 +00:00
Daniel Stenberg
c67f2da283 solaris 2.5.1 needs the sys/types.h file before the sys/socket.h 2001-12-11 15:08:27 +00:00
Daniel Stenberg
e192261788 failf() calls should not have newlines in the message string! 2001-12-11 13:13:01 +00:00
Daniel Stenberg
c63ca99c1c when the file name given to -T is used to build an upload path, the local
directory part is now stripped off and only the actual file name part will be
used
2001-12-11 00:48:55 +00:00
Daniel Stenberg
1c99c4ad11 HTTP_PROXY => http_proxy as Björn pointed out 2001-12-10 11:59:05 +00:00
Daniel Stenberg
bbcfc10677 corrected the READFUNCTION docs slightly 2001-12-10 07:46:43 +00:00
Daniel Stenberg
47e67eab26 corrected the comment above gmtime_r 2001-12-07 15:56:57 +00:00
Daniel Stenberg
650b95045d added gmtime_r check 2001-12-07 15:51:59 +00:00
Cris Bailiff
5603134e58 Updated location information for Curl_easy 2001-12-07 09:24:42 +00:00
Daniel Stenberg
d12fd897cb Jason Mancini's -Oalways suggestion 2001-12-06 14:40:16 +00:00
Daniel Stenberg
5e95203a5d let us know if curl compiles on more platforms 2001-12-06 12:48:41 +00:00
Daniel Stenberg
cad4a571ce curl compiles on HURD 2001-12-06 07:11:33 +00:00
Daniel Stenberg
139ab3740a 7.9.2 commit 2001-12-05 08:36:48 +00:00
Daniel Stenberg
7b832e1745 Jon Travis' suggested fix: when CURLOPT_HTTPGET is used we must assign
set.upload to FALSE or else we might still get an upload if the previous
operation was an upload!
2001-12-05 06:47:01 +00:00
Daniel Stenberg
914b9e441b Eric-update 2001-12-04 16:33:40 +00:00
Daniel Stenberg
f0f6ab49f5 Eric's updated version 2001-12-04 13:03:27 +00:00
Daniel Stenberg
436d147925 Eric's #include fixes for better macos compiles 2001-12-04 13:03:08 +00:00
Daniel Stenberg
4bd78a7df4 Eric brought some files for macos compiles 2001-12-04 09:16:09 +00:00
Daniel Stenberg
7ee6a9dc25 i'm soooo funny 2001-12-04 09:14:41 +00:00
Daniel Stenberg
1b56ae8478 added macos files to the distribution archive 2001-12-04 08:48:37 +00:00
Daniel Stenberg
d52c0b6f05 more comments 2001-12-04 07:47:21 +00:00
Daniel Stenberg
3ff2bfa0e4 MacOS (not Mac OS X) compilation files 2001-12-04 06:56:24 +00:00
Daniel Stenberg
aa21a3d5c3 Eric's update 2001-12-04 06:52:19 +00:00
Daniel Stenberg
fc33ad8cf2 the happy events so far today 2001-12-03 13:56:48 +00:00
Daniel Stenberg
779043f7a3 As Eric Lavigne pointed out, the ftp response reader MUST cache data that
is not dealt with when we find an end-of-response line, as there might be
important stuff even after the correct line. So on subsequent invokes, the
cached data must be used!
2001-12-03 13:48:59 +00:00
Daniel Stenberg
265bb99382 test case 126 added, this uses RETRWEIRDO that makes the FTP server send two
responses at once, to exercise the part of curl to make sure it can cache
(parts of) responses properly.
2001-12-03 13:46:56 +00:00
Daniel Stenberg
7493db2338 Eric nailed a bug in strnequal() for macintosh 2001-12-03 12:57:45 +00:00
Daniel Stenberg
c3ad019c99 the final ftp ipv6 support has been added! 2001-12-03 10:38:31 +00:00
Daniel Stenberg
05b84bfe91 updates 2001-12-03 10:07:49 +00:00
Daniel Stenberg
dbfa1e55b6 updated the copyright year range 2001-12-03 10:00:19 +00:00
Daniel Stenberg
a0fd63f611 cool.haxx.se now only allows http downloads 2001-12-03 09:59:44 +00:00
Daniel Stenberg
4ec0401529 modified the stack trace section slightly 2001-12-03 09:44:11 +00:00
Daniel Stenberg
61e6554b7f pre7 and pre8 details 2001-12-03 08:22:59 +00:00
Daniel Stenberg
f6f3f79aa8 test127~ should not be included! 2001-12-03 07:43:42 +00:00
Daniel Stenberg
c16c017f8b more careful re-use of connections when SSL is used over proxies 2001-12-02 14:16:34 +00:00
Daniel Stenberg
2f03ef39d1 SM renamed the debug DLL 2001-12-02 12:09:00 +00:00
Daniel Stenberg
db33926432 added a in_addr_t #define 2001-12-02 12:07:36 +00:00
Daniel Stenberg
946090b9cd documented CURLOPT_HTTP_VERSION and CURLOPT_FTP_USE_EPSV 2001-11-30 13:40:23 +00:00
Daniel Stenberg
1f7f0fda71 added --disable-epsv 2001-11-30 13:30:02 +00:00
Daniel Stenberg
b84d947be4 no include, no const in strdup 2001-11-30 09:29:31 +00:00
Daniel Stenberg
07c67138c9 fixed the option parser to not loop when a long option is specified 2001-11-30 09:26:06 +00:00
Daniel Stenberg
10717bd39b remove the command file after each test 2001-11-29 20:15:59 +00:00
Daniel Stenberg
302bb4a4b3 test126 renamed to test190 as it has to be last among the FTP tests because
of some problems in the test server :-/
2001-11-29 20:15:41 +00:00
Daniel Stenberg
81b5af2d1b test 127 --disable-epsv 2001-11-29 19:58:16 +00:00
Daniel Stenberg
87c562845c --disable-epsv 2001-11-29 19:42:51 +00:00
Daniel Stenberg
6c81d74626 fixes for tru64, fixes for mac 2001-11-29 12:50:35 +00:00
Daniel Stenberg
533c24a471 disabling EPSV is now possible 2001-11-29 12:49:10 +00:00
Daniel Stenberg
6a9697387a stdin is file descriptor 0 2001-11-29 12:48:08 +00:00
Daniel Stenberg
85c8981b3d mac fixes 2001-11-29 12:47:41 +00:00
Daniel Stenberg
6c5b8e1d59 added mac stuff 2001-11-29 12:42:50 +00:00
Daniel Stenberg
2cc16d89e6 updated mac specific include files 2001-11-29 12:40:36 +00:00
Daniel Stenberg
42eb74922d unix newlines 2001-11-29 12:33:02 +00:00
Daniel Stenberg
c528a7ee33 wrongly set binary 2001-11-29 12:32:25 +00:00
Daniel Stenberg
eb2da7ec2b mucho stuff since pre6! 2001-11-28 23:29:38 +00:00
Daniel Stenberg
01ed950bbe added CURLOPT_FTP_USE_EPSV 2001-11-28 23:21:55 +00:00
Daniel Stenberg
b1076e0a9e in_addr_t added 2001-11-28 23:21:31 +00:00
Daniel Stenberg
332eb7651a CURLOPT_FTP_USE_EPSV can now be set to FALSE to prevent libcurl from
attempting to use EPSV before the standard PASV.
2001-11-28 23:20:14 +00:00
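For example, to skip the EPSV attempt and go straight to the standard PASV (a sketch; the ftp URL is a placeholder):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file"); /* placeholder URL */
    curl_easy_setopt(curl, CURLOPT_FTP_USE_EPSV, 0L); /* FALSE: use only PASV */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}

The --disable-epsv command line option added elsewhere in this log maps to the same setting.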
Daniel Stenberg
cfdcf5c933 fill memory with junk on malloc() 2001-11-28 23:19:17 +00:00
Daniel Stenberg
820de919b6 now sets a type for in_addr_t even if it isn't found in the #include files
like on my linux box
2001-11-28 23:14:20 +00:00
Daniel Stenberg
a32cd520bd more more more MORE 2001-11-28 16:00:18 +00:00
Daniel Stenberg
b93a60daf9 the perform "state machine" is more explained now 2001-11-28 15:46:25 +00:00
Daniel Stenberg
e2844f5e04 mods 2001-11-28 15:25:01 +00:00
Daniel Stenberg
cabb46db3d adjusted to new FTP commands in the command sequence 2001-11-28 13:45:50 +00:00
Daniel Stenberg
d09b436937 Added an in_addr_t check 2001-11-28 13:16:56 +00:00
Daniel Stenberg
10fdb1d743 EPSV and SIZE adjustments 2001-11-28 13:07:49 +00:00
Daniel Stenberg
f0d3fccd4b Added EPSV which is now unconditionally always tried before PASV, which
makes it work reaaaaly nicely on IPv6-enabled hosts!
Added SIZE before RETR is made, always done on downloads. It makes us know
the size prior to download much more frequently.
Unfortunately, this breaks all the FTP test cases. *fixfixfix*
2001-11-28 13:05:39 +00:00
Daniel Stenberg
aff19f64b5 use in_addr_t for inet_addr() return code. Now, how portable is this *REALLY*?
We should add some configure tests for this!
2001-11-28 12:16:52 +00:00
Daniel Stenberg
15a56b42d6 used in the new multi interface, not yet actually part of libcurl but
added to CVS to make them available to others
2001-11-28 11:09:18 +00:00
Daniel Stenberg
d3706814e9 support para makes more sense now 2001-11-27 13:37:29 +00:00
Daniel Stenberg
6513dcef68 language 2001-11-27 13:34:59 +00:00
Daniel Stenberg
81f22465ba the list of contributors are in the THANKS file these days 2001-11-27 13:33:21 +00:00
Daniel Stenberg
dccc77a325 Eric Lavigne updates 2001-11-27 07:27:32 +00:00
Daniel Stenberg
13ac89af24 for building on Mac before OS X 2001-11-27 07:27:05 +00:00
Daniel Stenberg
ffefcab1bc greep at mindspring.com provided an index.html file that links to all the
existing HTML documents. It makes it easier to browse all the docs with
your browser.
2001-11-27 06:53:39 +00:00
Daniel Stenberg
0226b53b75 EPSV details 2001-11-27 00:53:13 +00:00
Daniel Stenberg
bbf80d0f93 commented out the EPSV support 2001-11-27 00:50:52 +00:00
Daniel Stenberg
6003f24f78 initial code added to support EPSV (IPv6-style PASV) 2001-11-27 00:48:45 +00:00
Daniel Stenberg
4382a80b9a recent changes 2001-11-27 00:47:52 +00:00
Daniel Stenberg
9fe920cd90 made the -C more correct and detailed 2001-11-26 09:57:02 +00:00
Daniel Stenberg
f0ee7115d3 Andrés García's minor fix to make it compile on win32 2001-11-23 09:04:56 +00:00
Daniel Stenberg
5986c653ef recent fixes 2001-11-22 14:16:21 +00:00
Daniel Stenberg
0e7203be89 this fix seems to make the connect fail properly even on IPv4-only Linux
machines!
2001-11-22 13:57:00 +00:00
Daniel Stenberg
52dbc96c32 updated the list of machines 2001-11-22 13:03:11 +00:00
Daniel Stenberg
1c8da21083 Eric fixed a wild write 2001-11-22 09:40:34 +00:00
Daniel Stenberg
8f304d8167 Eric found a missing comma!! 2001-11-22 09:39:03 +00:00
sm
30a0bd9cf5 Fixed release-ssl build 2001-11-22 00:12:48 +00:00
sm
ae40cdf92f Undefine long_long - not supported by VC 2001-11-22 00:06:21 +00:00
Daniel Stenberg
b342fbdcda SM corrected wsock32 to ws2_32 2001-11-21 23:11:47 +00:00
Daniel Stenberg
d1ea596f88 SM added connect.obj 2001-11-21 23:10:55 +00:00
Daniel Stenberg
064cf971ef init the errorbuf to prevent junk from being output 2001-11-21 23:01:01 +00:00
Daniel Stenberg
91b1598756 SM's vc target updates 2001-11-21 22:59:29 +00:00
Daniel Stenberg
17b18bca3c added error text for a failed connect case 2001-11-21 22:57:42 +00:00
Daniel Stenberg
be3d601217 another Kevin Roth update 2001-11-21 08:10:29 +00:00
Daniel Stenberg
ca0fd33d2d Georg Horn's STARTTRANSFER_TIME patch 2001-11-20 15:00:50 +00:00
Daniel Stenberg
271f96f78f -p, not -P, for proxy tunneling 2001-11-20 08:03:01 +00:00
Daniel Stenberg
b0130e6b3b use the ws2_32.lib now (Miklos Nemeth reported) 2001-11-19 20:09:02 +00:00
Daniel Stenberg
d0c1f3e25b long port => int port, as the c source uses! (Miklos Nemeth found this) 2001-11-19 20:08:01 +00:00
Daniel Stenberg
b244710ddb Miklos Nemeth pointed out the missing connect.obj 2001-11-19 20:06:29 +00:00
Daniel Stenberg
d465291ded recent fixes 2001-11-19 19:56:07 +00:00
Daniel Stenberg
84e462d5f6 Lars M Gustafsson showed us that the free(urlbuffer) was totally unnecessary
and plain wrong.
2001-11-19 19:21:06 +00:00
Daniel Stenberg
508466a175 Kevin Roth's fixes 2001-11-19 09:42:15 +00:00
Daniel Stenberg
e6dd4a6456 Klevtsov Vadim's time condition fix 2001-11-16 11:21:50 +00:00
Sterling Hughes
8d62e21072 looks better on one line (testing the cvs diffing via mail, but I also think
this looks a bit better ;)
2001-11-15 14:16:13 +00:00
Daniel Stenberg
25fe47f262 spell, slightly modified "what you can do" crap 2001-11-14 20:13:38 +00:00
Daniel Stenberg
fe8365d214 added Richard Prescott's email 2001-11-14 13:43:15 +00:00
Daniel Stenberg
2519a8cc9f added Richard Levitte's suggestion to support multiple -T options 2001-11-14 09:32:30 +00:00
Daniel Stenberg
b8ff21124a Samuel Listopad's fix to allow global_init => global_cleanup => global_init
for ssl
2001-11-14 07:11:39 +00:00
Daniel Stenberg
6aafc2dfd2 corrected the ftp_getsize() usage, as the HPUX compiler warned on them 2001-11-13 12:46:29 +00:00
Daniel Stenberg
65b22480f4 uninitialized variable 2001-11-13 12:09:05 +00:00
Daniel Stenberg
60f19269d0 interface to export/import SSL session IDs 2001-11-13 09:56:29 +00:00
Daniel Stenberg
5121499082 more more more 2001-11-13 09:07:32 +00:00
Daniel Stenberg
3e049a90b7 2 removed, 1 added 2001-11-13 09:06:32 +00:00
Daniel Stenberg
c5d97df7f1 disable QUOTEs with NULL 2001-11-13 09:05:10 +00:00
Daniel Stenberg
c2479ccb7a my proxytunnel fix accidentally ruined the normal https connects 2001-11-13 08:34:24 +00:00
Daniel Stenberg
fc07eb45f4 point out that calling this function more than once is a severe error 2001-11-13 07:20:37 +00:00
Daniel Stenberg
c7cdb0f266 make sure to "read out" the server reply even if we didn't get any data from
the server when that's the only error
2001-11-12 22:27:05 +00:00
Daniel Stenberg
92aedf850e made Curl_tvdiff round the diff better and do the subtraction before
the multiplication to avoid wrap-around
2001-11-12 22:10:09 +00:00
Daniel Stenberg
dd157fc349 post-weekend fixes 2001-11-12 14:15:14 +00:00
Daniel Stenberg
05f3ca880f made CURLOPT_HTTPPROXYTUNNEL work for plain HTTP as well 2001-11-12 14:08:41 +00:00
Daniel Stenberg
a18d41a463 include setup.h 2001-11-12 10:19:36 +00:00
Daniel Stenberg
1affbff8f9 new Curl_ConnectHTTPProxyTunnel() function, needs a **lot** of testing!!! 2001-11-12 09:47:09 +00:00
Daniel Stenberg
c55d0bb804 We need at least one millisecond to calculate current speed with! I also
made the getinfo() stuff divide by 1000.0 now to enforce floating point
since Paul Harrington claims 7.9.1 still uses even-second resolution
in the timers there
2001-11-12 08:50:59 +00:00
Daniel Stenberg
0ffec712e1 Marcus Webster reported and fixed this read-one-byte-too-many problem... 2001-11-08 15:06:58 +00:00
Daniel Stenberg
6ebac3dc76 now we make sure that NULL is defined in the gethostbyname_r() test compiles,
as it turns out it isn't defined everywhere, and that causes the compiles to fail
and then we don't find the proper function call!
2001-11-08 14:48:50 +00:00
Daniel Stenberg
3b976ea9f1 Added two missing return codes... 2001-11-08 12:36:00 +00:00
Daniel Stenberg
2c16dfb526 the proof I did something yesterday as well 2001-11-08 12:16:10 +00:00
Daniel Stenberg
fe3a78ab19 we use signal() to ignore signals only as long as we have to, and we now
restore the previous (if any) signal handler properly on return.
2001-11-07 14:13:29 +00:00
Daniel Stenberg
1a984ea847 get the previous struct keep_sigact 2001-11-07 12:56:13 +00:00
Daniel Stenberg
2a0cde3041 adjusted after Ramana Mokkapati's comments 2001-11-07 09:39:49 +00:00
Daniel Stenberg
3552775b52 moo 2001-11-07 09:37:57 +00:00
Daniel Stenberg
818a632e80 Added VERSIONS that explains about the (lib)curl version numbers 2001-11-07 08:26:51 +00:00
Daniel Stenberg
00afb0f638 bug report #478780 fixed, cygwin stripped on install, some more details on
the changes of yesterday
2001-11-06 19:37:13 +00:00
Daniel Stenberg
2e32d415c0 myalarm() is history, we now use HAVE_ALARM and we now do our very best to
1 - restore the previous sigaction struct as soon as we are about to shut
off our timeout
2 - restore the previous alarm() timeout, in case an application or similar
had it running before we "borrowed" it for a while.

No, this does not fix the multi-thread problem you get with alarm(). This
patch should correct bug report #478780:
//sourceforge.net/tracker/?func=detail&atid=100976&aid=478780&group_id=976

If not, please post details!
2001-11-06 19:33:13 +00:00
Daniel Stenberg
3dfc509d33 Kevin's patch to install the binary stripped 2001-11-06 08:44:58 +00:00
Daniel Stenberg
4379142af7 Ramana Mokkapati's, John Lask's and Detlef Schmier's reports/changes 2001-11-05 14:11:19 +00:00
Daniel Stenberg
8a6dc57212 John Lask's fix that adds "-1/--TLSv1" support 2001-11-05 14:08:27 +00:00
Daniel Stenberg
af636c535c Added an CURL_SSLVERSION_* enum for SSL protocol versions 2001-11-05 14:07:20 +00:00
Daniel Stenberg
2f77b0a4c6 we can now tell ssl to use TLSv1 protocol, and we now use defines instead
of real integers for versions, the defines are added to curl.h
2001-11-05 14:06:42 +00:00
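A sketch of how the new define is meant to be used (placeholder URL; the cast to long matches curl_easy_setopt's variadic argument handling):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); /* placeholder URL */
    /* force the TLSv1 protocol instead of letting the SSL layer pick */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION, (long)CURL_SSLVERSION_TLSv1);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}

This is what the -1/--TLSv1 command line flag from the commit above sets.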
Daniel Stenberg
08ad385e0e Ramana Mokkapati did some good bug hunting, and with these fixes ldap transfers
should work a lot better!
2001-11-05 14:04:57 +00:00
Daniel Stenberg
5623e0bb0e corrected the Curl_tvnow prototype (-Wstrict-prototypes found it) 2001-11-05 12:37:22 +00:00
Daniel Stenberg
3d438d8d64 Curl_ftpsendf() had wrong return type 2001-11-05 12:24:21 +00:00
Daniel Stenberg
d89c495782 added john lask 2001-11-05 11:57:36 +00:00
Daniel Stenberg
f5ba174f4d John Lask's new makefile 2001-11-05 11:56:26 +00:00
199 changed files with 11722 additions and 5498 deletions

CHANGES: 1836 lines changed (file diff suppressed because it is too large)

CHANGES.0: 835 lines changed

@@ -1,838 +1,3 @@
Daniel (28 December 1999):
- Tim Verhoeven correctly identified that curl
doesn't support URL formatted file names when getting ftp. Now, there's a
problem with getting very weird file names off FTP servers. RFC 959 defines
that the file name syntax to use should be the same as in the native OS of
the server. Since we don't know the peer server system we currently just
translate the URL syntax into plain letters. It is still better and with
the solaris 2.6-supplied ftp server it works with spaces in the file names.
Daniel (27 December 1999):
- When curl parsed cookies straight off a remote site, it corrupted the input
data, which, if the downloaded headers were stored made very odd characters
in the saved data. Correctly identified and reported by Paul Harrington.
Daniel (13 December 1999):
- General cleanups in the library interface. There had been some bad kludges
added during times of stress and I did my best to clean them off. It was
both regarding the lib API as well as include file confusions.
Daniel (3 December 1999):
- A small --stderr bug was reported by Eetu Ojanen...
- who also brought the suggestion of extending the -X flag to ftp list as
well. So, now it is and the long option is now --request instead. It is
only for ftp list for now (and the former http stuff too of course).
Lars J. Aas (24 November 1999):
- Patched curl to compile and build under BeOS. Doesn't work yet though!
- Corrected the Makefile.am files to allow putting object files in
different directories than the sources.
Version 6.3.1
Daniel (23 November 1999):
- I've had this major disk crash. My good old trust-worthy source disk died
along with the machine that hosted it. Thank goodness most of all the
things I've done are either backed up elsewhere or stored in this CVS
server!
- Michael S. Steuer pointed out a bug in the -F handling
that made curl hang if you posted an empty variable such as '-F name='. It
was one of those old bugs that never have worked properly...
- Jason Baietto pointed out a general flaw in the HTTP
download. Curl didn't complain if it was prematurely aborted before the
entire download was completed. It does now.
Daniel (19 November 1999):
- Chris Maltby very accurately criticized the lack of
return code checks on the fwrite() calls. I did a thorough check for all
occurrences and corrected this.
Daniel (17 November 1999):
- Paul Harrington pointed out that the -m/--max-time option
doesn't work for the slow system calls like gethostbyname()... I don't have
any good fix yet, just a slightly less bad one that makes curl exit hard
when the timeout is reached.
- Bjorn Reese helped me point out a possible problem that might be the reason
why Thomas Hurst experienced problems in his Amiga version.
Daniel (12 November 1999):
- I found a crash in the new cookie file parser. It crashed when you gave
a plain http header file as input...
Version 6.3
Daniel (10 November 1999):
- I kind of found out that the HTTP time-conditional GETs (-z) aren't always
respected by the web server and the document is therefore sent in whole
again, even though it doesn't match the requested condition. After reading
section 13.3.4 of RFC 2616, I think I'm doing the right thing now when I do
my own check as well. If curl thinks the condition isn't met, the transfer
is aborted prematurely (after all the headers have been received).
- After comments from Robert Linden I also rewrote some parts of the man page
to better describe how the -F works.
- Michael Anti put up a new curl download mirror in
China: http://www.pshowing.com/curl/
- I added the list of download mirrors to the README file
- I did add more explanations to the man page
Daniel (8 November 1999):
- I made the -b/--cookie option capable of reading netscape formatted cookie
files as well as normal http-header files. It should be able to
transparently figure out what kind of file it got as input.
Daniel (29 October 1999):
- Another one of Sebastiaan van Erk's ideas (that has been requested before
but I seem to have forgotten who it was), is to add support for ranges in
FTP downloads. As usual, one request is just a request, when they're two
it is a demand. I've added simple support for X-Y style fetches. X has to
be the lower number, though you may omit one of the numbers. Use the -r/
--range switch (previously HTTP-only).
- Sebastiaan van Erk suggested that curl should be
able to show the file size of a specified file. I think this is a splendid
idea and the -I flag is now working for FTP. It displays the file size in
this manner:
Content-Length: XXXX
As it resembles normal headers, and leaves us the opportunity to add more
info in that display if we can come up with more in the future! It also
makes sense since if you access ftp through a HTTP proxy, you'd get the
file size the same way.
I changed the order of the QUOTE command executions. They're now executed
just after the login and before any other command. I made this to enable
quote commands to run before the -I stuff is done too.
- I found out that -D/--dump-header and -V/--version weren't documented in
the man page.
- Many HTTP/1.1 servers do not support ranges. Don't ask me why. I did add
some text about this in the man page for the range option. The thread in
the mailing list that started this was initiated by Michael Anti.
- I get reports about nroff crashes on solaris 2.6+ when displaying the curl
man page. Switch to gnroff instead, it is reported to work(!). Adam Barclay
reported and brought the suggestion.
- In a dialogue with Johannes G. Kristinsson we came
up with the idea to let headers specified with -H/--header replace the
internally generated headers, if you happen to add a header that curl
normally uses by itself. The advantage of this is not entirely obvious,
but in Johannes' case it means that he can use another Host: than the one
curl would set.
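A hypothetical example (both host names are placeholders) of overriding the
internally generated header:
  curl -H "Host: www.other-site.com" http://www.site.com/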
Daniel (27 October 1999):
- Jongki Suwandi brought a nice patch for (yet another) crash when following
a location:. This time you had to follow a https:// server's redirect to
get the core.
Version 6.2
Daniel (21 October 1999):
- I think I managed to remove the suspicious (nil) that has been seen just
before the "Host:" in HTTP requests when -v was used.
- I found out that if you followed a location: when using a proxy, without
having specified http:// in the URL, the protocol part was added once again
when moving to the next URL! (The protocol part has to be added to the
URL when going through a proxy since the proxy has no protocol-guessing
system like curl has.)
- Benjamin Ritcey reported a core dump under solaris 2.6
with OpenSSL 0.9.4. It turned out this was due to a bad free() in main.c
that occurred after the download was done and completed.
- Benjamin found that ftp downloads wrote the first line of the download
meter twice, and I removed that problem. It was introduced with the
multiple URL support.
- Dan Zitter correctly pointed out that curl 6.1 and earlier versions didn't
honor RFC 2616 chapter 4 section 2, "Message Headers": "...Field names are
case-insensitive..." HTTP header parsing assumed a certain casing. Dan
also provided me with a patch that corrected this, which I took the liberty
of editing slightly.
- Dan Zitter also provided a nice patch for config.guess to better recognize
Mac OS X.
- Dan also corrected a minor problem in the lib/Makefile that caused linking
to fail on OS X.
Daniel (19 October 1999):
- Len Marinaccio ran into some problems with curl. Since Windows has a
crippled shell, it can't redirect stderr and that causes trouble. I added
--stderr today, which allows the user to redirect the stderr stream to a
file or stdout.
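For example (the file name and URL are just placeholders), the error stream
can be sent to a file like:
  curl --stderr errors.txt http://site/foo.html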
Daniel (18 October 1999):
- The configure script now understands the '--without-ssl' flag, which
totally disables SSL/https support. Previously it wasn't possible to force
the configure script to leave SSL alone. The previous functionality has
been retained. Troy Engel helped test this new one.
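For example, building without any SSL support is then simply:
  ./configure --without-ssl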
Version 6.1
Daniel (17 October 1999):
- I ifdef'ed or commented all the zlib stuff in the sources and configure
script. It turned out we needed to mock more with zlib than I initially
thought, to make it capable of downloading compressed HTTP documents and
uncompress them on the fly. I didn't mean the zlib parts of curl to become
more than minor so this means I halt the zlib expedition for now and wait
until someone either writes the code or zlib gets updated and better
adjusted for this kind of usage. I won't get into details here, but a
short summary is suitable:
- zlib can't automatically detect whether to use zlib or gzip
decompression methods.
- zlib is very neat for reading gzipped files from a file descriptor,
although not as nice for reading buffer-based data the way we would
want it to.
- there are still some problems with the win32 version when reading from
a file descriptor if that is a socket
Daniel (14 October 1999):
- Moved the (external) include files for libcurl into a subdirectory named
curl and adjusted all #include lines to use <curl/XXXX> to maintain a
better name space and control of the headers. This has been requested.
Daniel (12 October 1999):
- I modified the 'maketgz' script to perform a 'make' too before a release
archive is put together, in an attempt to make the time stamps better and
hopefully avoid the double configure-running that used to occur.
Daniel (11 October 1999):
- Applied Jörn's patches that fix zlib for mingw32 compiles as well as
some other missing zlib #ifdef and more text on the multiple URL docs in
the man page.
Version 6.1beta
Daniel (6 October 1999):
- Douglas E. Wegscheid sent me a patch that did the exact same thing as I
had just done: the -d switch is now capable of reading post data from a
named file or stdin. Use it similarly to -F. To read the post data from a
given file:
curl -d @path/to/filename www.postsite.com
or let curl read it out from stdin:
curl -d @- www.postit.com
Jörn Hartroth (3 October 1999):
- Brought some more patches for multiple URL functionality. The MIME
separation ideas are almost scrapped now, and a custom separator is being
used instead. This is still compile-time "flagged".
Daniel
- Updated curl.1 with multiple URL info.
Daniel (30 September 1999):
- Felix von Leitner brought openssl-check fixes for configure.in to work
out-of-the-box when the openssl files are installed in the system default
dirs.
Daniel (28 September 1999)
- Added libz functionality. This should enable decompressing gzip, compress
or deflate encoded HTTP documents. It also makes curl send an Accept-Encoding
header saying that it accepts that kind of encoding. Compressed content
usually shortens download time. I *need* someone to tell me a site that uses
compressed HTTP documents so that I can test this out properly.
- As a result of the adding of zlib awareness, I changed the version string
a little. I plan to add openldap version reporting in there too.
Daniel (17 September 1999)
- Made the -F option allow stdin when specifying files. By using '-' instead
of a file name, the data will be read from stdin.
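A sketch of how this might be used (field name and site are placeholders),
feeding the file data from stdin:
  cat localfile | curl -F "upload=@-" http://www.site.com/upload.cgi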
Version 6.0
Daniel (13 September 1999)
- Added -X/--http-request <request> to enable any HTTP command to be sent.
Do note that your server has to support the exact string you enter. This
would typically be a string like DELETE or TRACE.
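For illustration (the URL is a placeholder), sending a DELETE request could
look like:
  curl -X DELETE http://www.site.com/resource.html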
- Applied Douglas' mingw32-fixes for the makefiles.
Daniel (10 September 1999)
- Douglas E. Wegscheid pointed out a problem. Curl didn't check the FTP
server's return code properly after the --quote commands were issued. It
took anything non-200 as an error, when all 2XX codes should be accepted
as OK.
- Sending cookies to the same site in multiple lines like curl used to do
turned out to be bad and breaking the cookie specs. Curl now sends all
cookies on a single Cookie: line. Curl is not yet RFC 2109 compliant, but I
doubt that many servers do use that syntax (yet).
Daniel (8 September 1999)
- Jörn helped me make sure it still compiles nicely with mingw32 under win32.
Daniel (7 September 1999)
- FTP upload through a proxy is now turned into an HTTP PUT. Requested by
Stefan Kanthak.
- Added the ldap files to the .m32 makefile.
Daniel (3 September 1999)
- Made cookie matching work while using HTTP proxy.
Bjorn Reese (31 August 1999)
- Passed along his ldap:// patch. Note that this requires the openldap shared
library to be installed and that LD_LIBRARY_PATH points to the
directory where the lib will be found when curl is run with an
ldap:// URL.
Jörn Hartroth (31 August 1999)
- Made the Mingw32 makefiles into single files.
- Made file:// work for Win32. The same code is now used for unix as well for
performance reasons.
Douglas E. Wegscheid (30 August 1999)
- Patched the Mingw32 makefiles for SSL builds.
Matthew Clarke (30 August 1999)
- Made a cool patch for configure.in to allow --with-ssl to specify the
root dir of the openssl installation, as in
./configure --with-ssl=/usr/ssl_here
- Corrected the 'reconf' script to work better with some shells.
Jörn Hartroth (26 August 1999)
- Fixed the Mingw32 makefiles in lib/ and corrected the file.c for win32
compiles.
Version 5.11
Daniel (25 August 1999)
- John Weismiller pointed out a bug in the header-line
realloc() system in download.c.
- I added lib/file.[ch] to offer a first, simple, file:// support. It
probably won't do much good on win32 systems at this point, but I see it
as a start.
- Made the release archives get a Makefile in the root dir, which can be
used to start the compiling/building process more easily. I haven't really
changed any INSTALL text yet, I wanted to get some feed-back on this
first.
Daniel (17 August 1999)
- Another Location: bug. Curl didn't do proper relative locations if the
original URL had cgi-parameters that contained a slash. Nusu's page
again.
- Corrected the NO_PROXY usage. It is a list of substrings; if one of
them matches the tail of the host name curl is to connect to, curl should
not use a proxy for that connection. Pointed out to me by Douglas
E. Wegscheid. I also changed the README text a little regarding this.
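A minimal sketch (proxy and host names are placeholders), assuming both
variables are picked up from the environment; the www.site.com request then
bypasses the proxy:
  http_proxy=http://proxy.site.com:80/ NO_PROXY=site.com \
    curl http://www.site.com/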
Daniel (16 August 1999)
- Fixed a memory bug with http-servers that sent Location: to a Location:
page. Nusu's page showed this too.
- Made cookies work a lot better. Setting the same cookie name several times
used to add more cookies instead of replacing the former one, which it
should have. Nusu <nus at intergorj.ro> brought me an URL that made this
painfully visible...
Troy (15 August 1999)
- Brought new .spec files as well as a patch for configure.in that lets the
configure script find the openssl files better, even when the include
files are in /usr/include/openssl
Version 5.10
Daniel (13 August 1999)
- SSL_CTX_set_default_passwd_cb() has been modified in the 0.9.4 version of
OpenSSL. Now why couldn't they simply add a *new* function instead of
modifying the parameters of an already existing function? This way, we get
a compiler warning if compiling with 0.9.4 but not with earlier. So, I had
to come up with a #if construction that deals with this...
- Made the SSL version number in curl's output get displayed properly with 0.9.4.
Troy (12 August 1999)
- Added MingW32 (GCC-2.95) support under Win32. The INSTALL file was also
a bit rearranged.
Daniel (12 August 1999)
- I had to copy a good <arpa/telnet.h> include file into the curl source
tree to enable the silly win32 systems to compile. The distribution rights
allow us to do that as long as the file remains unmodified.
- I corrected a few minor things that made the compiler complain when
-Wall -pedantic was used.
- I'm moving the official curl web page to http://curl.haxx.nu. I think it
will make it easier to remember as it is a lot shorter and less cryptic.
The old one still works and shows the same info.
Daniel (11 August 1999)
- Albert Chin-A-Young mailed me another correction for NROFF in the
configure.in that is supposed to be better for IRIX users.
Daniel (10 August 1999)
- Albert Chin-A-Young helped me with some stupid Makefile things, as well as
some fiddling with the getdate.c stuff that he had problems with under
HP-UX v10. getdate.y will now be compiled into getdate.c if the appropriate
yacc or bison is found by the configure script. Since this is slightly new,
we need to test the output getdate.c with win32 systems to make sure it
still compiles there.
Daniel (5 August 1999)
- I've just set up a new mailing list with the intention to keep discussions
around libcurl development in it. I mainly expect it to be for thoughts and
brainstorming around a "next generation" library, rather than nitpicking
about the current implementation or details in the current libcurl.
To join our happy bunch of future-looking geeks, enter 'subscribe
<address>' in the body of a mail and send it to
libcurl-request@listserv.fts.frontec.se. Curl bug reports, the usual curl
talk and everything else should still be kept in this mailing list. I've
started to archive this mailing list and have put the libcurl web page at
www.fts.frontec.se/~dast/libcurl/.
- Stefan Kanthak contacted me regarding a few problems in the configure
script which he discovered when trying to make curl compile and build under
Siemens SINIX-Z V5.42B2004!
- Marcus Klein very accurately informed me that src/version.h was not present
in the CVS repository. Oh, how silly...
- Linus Nielsen rewrote the telnet:// part and now curl offers limited telnet
support. If you run curl like 'curl telnet://host' you'll get all output on
the screen and curl will read input from stdin. You'll be able to log in and
run commands etc, but since the output is buffered, expect to get a little
weird output.
This is still in its infancy and it might get changed. We need your
feed-back and input on how this is best done.
WIN32 NOTE: I bet we'll get problems when trying to compile the current
lib/telnet.c on win32, but I think we can sort them out in time.
- David Sanderson reported that FORCE_ALLOCA_H or HAVE_ALLOCA_H must be
defined for getdate.c to compile properly on HP-UX 11.0. I updated the
configure script to check for alloca.h, which should take care of it.
Daniel (4 August 1999)
- I finally got to understand Marcus Klein's ftp download resume problem,
which turns out to be due to different outputs from different ftp
servers. It makes ftp download resuming a little trickier, but I've made
some modifications I really believe will work for most ftp servers and I do
hope you report if you have problems with this!
- Added text about file transfer resuming to README.curl.
Daniel (2 August 1999)
- Applied a progress-bar patch from Lars J. Aas. It offers
a new styled progress bar enabled with -#/--progress-bar.
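For example (URL and output file name are placeholders):
  curl -# -o page.html http://www.site.com/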
T. Yamada <tai at imasy.or.jp> (30 July 1999)
- It breaks with a segfault when 1) curl is using .netrc to obtain
username/password (option '-n'), and 2) is automatically redirected to
another location (option '-L').
There is a small bug in lib/url.c (block starting from line 641), which
tries to take out username/password from the user-supplied command-line
argument ('-u' option). This block is never executed on first attempt since
CONF_USERPWD bit isn't set at first, but curl later turns it on when it
checks for CONF_NETRC bit. So when curl tries to redo everything due to
redirection, it segfaults trying to access *data->userpwd.
Version 5.9.1
Daniel (30 July 1999)
- Steve Walch pointed out that there is a memory leak in the formdata
functions. I added a FormFree() function that is now used and supposed to
correct this flaw.
- Mark Wotton reported:
'curl -L https://www.cwa.com.au/' core dumps. I managed to cure this by
correcting the cleanup procedure. The bug seems to be gone with my OpenSSL
0.9.2b, although still occurs when I run the ~100 years old SSLeay 0.8.0. I
don't know whether it is curl or SSLeay that is to blame for that.
- Marcus Klein:
Reported an FTP upload resume bug that I really can't repeat nor understand.
I leave it here so that it won't be forgotten.
Daniel (29 July 1999)
- Costya Shulyupin suggested support for longer URLs when following Location:
and I could only agree and fix it!
- Leigh Purdie found a problem in the upload/POST department. It turned out
that http.c accidentally cleared the pointer instead of the byte counter
when it was supposed to.
- Costya Shulyupin pointed out a problem with port numbers and Location:. If
you had a server at a non-standard port that redirected to an URL using a
standard port number, curl still used that first port number.
- Ralph Beckmann pointed out a problem when using both CONF_FOLLOWLOCATION
and CONF_FAILONERROR simultaneously. Since CONF_FAILONERROR exits on the
302 code that comes with the Location: header, it would never show any
html on location: pages. I have now made it look for >=400 codes if
CONF_FOLLOWLOCATION is set.
- 'struct slist' is now renamed to 'struct curl_slist' (as suggested by Ralph
Beckmann).
- Joshua Swink and Rick Welykochy were the first to point out to me that the
latest OpenSSL package has now moved the standard include path. It is now
in /usr/local/ssl/include/openssl and I have now modified the --enable-ssl
option for the configure script to use that as the primary path, while
keeping the former path as well to work with older packages of OpenSSL.
Daniel (9 June 1999)
- I finally understood the IRIX problem and now it seems to compile on it!
I am gonna remove those #define strcasecmp() things once and for all now.
Daniel (4 June 1999)
- I adjusted the FTP reply 227 parser to make the PASV command work better
with more ftp servers. Apparently the Roxen Challenger server replied
something curl 5.9 couldn't deal with! :-( Reported by Ashley Reid-Montanaro,
and Mark Butler brought a solution for it.
Daniel (26 May 1999)
- Rearranged. README is new, the old one is now README.curl and I added a
README.libcurl with text I got from Ralph Beckmann.
- I also updated the INSTALL text.
Daniel (25 May 1999)
- David Jonathan Lowsky correctly pointed out that curl didn't properly deal
with form posting where the variable shouldn't have any content, as in curl
-F "form=" www.site.com. It is now fixed.
Version 5.9
Daniel (22 May 1999)
- I've got a bug report from Aaron Scarisbrick in which he states he has some
problems with -L under FreeBSD 3.0. I had previously got another bug
report from Stefan Grether which points at an error with similar symptoms
when using win32. I made the allocation of the new url string a bit faster
and different, though I don't know if it actually improves anything...
Daniel (20 May 1999)
- Made the cookie parser deal with CRLF newlines too.
Daniel (19 May 1999)
- Download() didn't properly deal with failing return codes from the sread()
function. Adam Coyne found the problem in the win32 version, and Troy Engel
helped me out isolating it.
Daniel (16 May 1999)
- Richard Adams pointed out a bug I introduced in 5.8. --dump-header doesn't
work anymore! :-/ I fixed it now.
- After a suggestion by Joshua Swink I added -S / --show-error to force curl
to display the error message in case of an error, even if -s/--silent was
used.
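For example (the URL is a placeholder), to keep the transfer silent but
still get an error message if something fails:
  curl -s -S http://site/foo.html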
Daniel (10 May 1999)
- I moved the stuff concerning HTTP, DICT and TELNET into their own source
files now. It is a beginning of my clean-up of the sources to make them
layer all those protocols better and enable more to be added more easily
in the future!
- Leon Breedt sent me some files I've not put into the main curl
archive. They're for creating the Debian package thingie. He also sent me a
debian package that I've made available for download at the web page.
Daniel (9 May 1999)
- Made it compile on cygwin too.
Troy Engel (7 May 1999)
- Brought a series of patches to allow curl to compile smoothly on MSVC++ 6
again!
Daniel (6 May 1999)
- I changed the #ifdef HAVE_STRFTIME placement for the -z code so that it
will be easier to discover systems that don't have that function and thus
can't use -z successfully. Made the strftime() get used if WIN32 is defined
too.
Version 5.8
Daniel (5 May 1999)
- I've had it with this autoconf/automake mess. It seems to work all right
for most people who don't have automake installed, but for those who do
there are problems all over.
I've got like five different bug reports on this in the last week
alone... Claudio Neves and Federico Bianchi and root <duggerj001 at
hawaii.rr.com> are some of those reporting this.
Currently, I have no really good fix since I want to use automake myself to
generate the Makefile.in files. I've found out that the @SHELL@-problems
can often be fixed by manually invoking 'automake' in the archive root
before you run ./configure... I've hacked my maketgz script now to fiddle
a bit with this and my tests seem to work better than before at least!
Daniel (4 May 1999)
- mkhelp.pl has been doing badly lately. I corrected a case problem in
the regexes.
- I've now remade the -o option to not touch the file unless it needs to.
I had to do this to make the -z option really useful, since now you can
make a curl fetch and use a local copy's time when downloading to that
file, as in:
curl -z dump -o dump remote.site.com/file.html
This will only get the file if the remote one is newer than the local.
I'm aware that this alters previous behaviour a little. Some scripts out
there may depend on the file always being touched...
- Corrected a bug in the SSLv2/v3 selection.
- Felix von Leitner requested that curl should be able to send
"If-Modified-Since" headers, which indeed is a fair idea. I implemented it
right away! Try -z <expression> where expression is a full GNU date
expression or a file name to get the date from!
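For illustration (the date and URL are placeholders), both forms might look
like:
  curl -z "1 Jan 1999" http://www.site.com/page.html
  curl -z local.html http://www.site.com/page.html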
Stephan Lagerholm (30 Apr 1999)
- Pointed out a problem with the src/Makefile for FreeBSD. The RM variable
isn't set and causes the make to fail.
Daniel (26 April 1999)
- Am I silly or what? Irving Wolfe pointed out to me that the curl version
number was not set properly. Hasn't been since 5.6. This was due to a bug
in my maketgz script!
David Eriksson (25 Apr 1999)
- Found a bug in cookies.c that made it crash at times.
Version 5.7.1
Doug Kaufman (23 Apr 1999)
- Brought two sunos 4 fixes. One of them being the hostip.c fix mentioned
below and the other one a correction in include/stdcheaders.h
- Added a paragraph about compiling with the US-version of openssl to the
INSTALL file.
Daniel
- New mailing list address. Info updated on the web page as well as in the
README file
Greg Onufer (20 Apr 1999)
- hostip.c didn't compile properly on SunOS 5.5.1.
It needs an #include <sys/types.h>
Version 5.7
Daniel (Apr 20 1999)
- Decided to upload a non-beta version right now!
- Made curl support any-length HTTP headers. The destination buffer is now
simply enlarged every time it turns out to be too small!
- Added the FAQ file to the archive. Still a bit smallish, but it is a
start.
Eric Thelin (15 Apr 1999)
- Made -D accept '-' instead of filename to write to stdout.
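For example (the URL is a placeholder):
  curl -D - http://site/foo.html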
Version 5.6.3beta
Daniel (Apr 12 1999)
- Changed two #ifdef WIN32 to better #ifdef <errorcode> when connect()ing
in url.c and ftp.c. Makes cygwin32 deal with them better too. We should
try to get some decent win32-replacement there. Anyone?
- The old -3/--crlf option is now ONLY --crlf!
- I changed the "SSL fix" to a more lame one, but that doesn't remove as
much functionality. Now I've enabled the lib to select what SSL version it
should try first. Appearantly some older SSL-servers don't like when you
talk v3 with them so you need to be able to force curl to talk v2 from the
start. The fix dated April 6 and posted on the mailing list forced curl to
use v2 at all times using a modern OpenSSL version, but we don't really
want such a crippled solution.
- Marc Boucher sent me a patch that corrected a math error for the
"Curr.Speed" progress meter.
- Eric Thelin sent me a patch that enables '-K -' to read a config file from
stdin.
- I found out we didn't close the file properly before so I added it!
Daniel (Apr 9 1999)
- Yu Xin pointed out a problem with ftp download resume. It didn't work at
all! ;-O
Daniel (Apr 6 1999)
- Corrected the version string part generated for the SSL version.
- I found a way to make some other SSL pages work with openssl 0.9.1+ that
previously didn't (ssleay 0.8.0 works with them though!). Trying to get
some real info from the OpenSSL guys to see what I should do to behave the
best way. SSLeay 0.8.0 shouldn't be that much in use anyway these days!
Version 5.6.2beta
Daniel (Apr 4 1999)
- Finally have curl more cookie "aware". Now read carefully. This is how
it works.
To make curl read cookies from an already existing file, in plain header-
format (like from the headers of a previous fetch) invoke curl with the
-b flag like:
curl -b file http://site/foo.html
Curl will then use all cookies it finds matching. The old style that sets
a single cookie with -b is still supported and is used if the string
following -b includes a '=' character, as in "-b name=daniel".
To make curl read the cookies sent in combination with a location: (which
sites often do) point curl to read a non-existing file at first (i.e.
to start with no existing cookies), like:
curl -b nowhere http://site/setcookieandrelocate.html
- Added a paragraph in the TODO file about the SSL problems recently
reported. Evidently, there is some kind of SSL problem that curl may need
to address.
- Better "Location:" following.
Douglas E. Wegscheid (Tue, 30 Mar 1999)
- A subsecond display patch.
Daniel (Mar 14 1999)
- I've separated the version number of libcurl and curl now. To make
things a little easier, I decided to start the curl numbering from
5.6 and the former version number known as "curl" is now the one
set for libcurl.
- Removed the 'enable-no-pass' from configure, I doubt anyone wanted
that.
- Made lots of tiny adjustments to compile smoothly with cygwin under
win32. It's a killer for porting this to win32, bye bye VC++! ;-)
Compiles and builds out-of-the-box now. See the new wordings in
INSTALL for details.
- Beginning experiments with downloading multiple documents from an http
server while remaining connected.
Version 5.6beta
Daniel (Mar 13 1999)
- Since I've changed so much, I thought I'd just go ahead and implement the
suggestion from Douglas E. Wegscheid. -D or --dump-header is now storing
HTTP headers separately in the specified file.
- Added new text to INSTALL on what to do to build this on win32 now.
- Aaargh. I had to take a step back and prefix the shared #include files
in the sources with "../include/" to please VC++...
Daniel (Mar 12 1999)
- Split the url.c source into many tiny sources for better readability
and smaller size.
Daniel (Mar 11 1999)
- Started to change stuff for a move to make libcurl a separate library,
with a curl application that uses the libcurl. Made the libcurl sources into
the new lib directory while the curl application will remain in src as
before. New makefiles, adjusted configure script and so on.
libcurl.a built quickly and easily. I'd better make a better interface to
the lib functions though.
The new root dir include/ is supposed to contain the public information
about the new libcurl. It is a little ugly so far :-)
Daniel (Mar 1 1999)
- Todd Kaufmann sent me a good link to Netscape's cookie spec as well as the
info that RFC 2109 specifies how to use them. The link is now in the
README and the RFC in the RESOURCES.
Daniel (Feb 23 1999)
- Finally made configure accept --with-ssl to look for SSL libs and includes
in the "standard" place /usr/local/ssl...
Daniel (Feb 22 1999)
- Verified that curl linked fine with OpenSSL 0.9.1c which seems to be
the most recent.
Henri Gomez (Fri Feb 5 1999)
- Sent in an updated curl-ssl.spec. I still miss the script that builds an
RPM automatically...
Version 5.5.1
Mark Butler (27 Jan 1999)
- Corrected problems in Download().
Daniel Stenberg (25 Jan 1999)
- Jeremie Petit pointed out a few flaws in the source that prevented it from
compiling warning-free with the native compiler under Digital Unix v4.0d.
Version 5.5
Daniel Stenberg (15 Jan 1999)
- Added Bjorn's small text to the README about the DICT protocol.
Daniel Stenberg (11 Jan 1999)
- <jswink at softcom.net> reported about the win32 version: "Doesn't use
ALL_PROXY environment variable". Turned out to be because of the
static-buffer nature of the win32 environment variable calls!
Bjorn Reese (10 Jan 1999)
- I have attached a simple addition for the DICT protocol (RFC 2229).
It performs dictionary lookups. The output still needs to be better
formatted.
To test it try (the exact format, and more examples are described in
the RFC)
dict://dict.org/m:hello
dict://dict.org/m:hello::soundex
Vicente Garcia (10 Jan 1999)
- Corrected the progress meter for files larger than 20MB.
Daniel Stenberg (7 Jan 1999)
- Corrected the -t and -T help texts. They claimed to be FTP only.
Version 5.4
Daniel Stenberg
(7 Jan 1999)
- Irving Wolfe reported that curl -s didn't always suppress the progress
reporting. It was the form post that automatically switched it back on
again. This is now corrected!
(4 Jan 1999)
- Andreas Kostyrka suggested I'd add PUT and he helped me out to test it. If
you use -t or -T now on an http or https server, PUT will be used for file
upload.
I removed the former use of -T with HTTP. I doubt anyone ever really used
that.
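A small sketch (file name and URL are placeholders) of such an upload:
  curl -T localfile http://www.site.com/dir/localfile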
(4 Jan 1999)
- Erik Jacobsen found a width bug in the mprintf() function. I corrected it
now.
(4 Jan 1999)
- As John V. Chow pointed out to me, curl accepted very limited URL sizes. It
should now accept path parts that are up to at least 4096 bytes.
- Somehow I screwed up when applying the AIX fix from Gilbert Ramirez, so
I redid that now.
Version 5.3a (win32 only)
Troy Engel

CHANGES.1999 (new file, 835 lines added)
@@ -0,0 +1,835 @@
Daniel (28 December 1999):
- Tim Verhoeven correctly identified that curl
doesn't support URL-formatted file names when getting over ftp. Now, there's
a problem with getting very weird file names off FTP servers. RFC 959 defines
that the file name syntax to use should be the same as in the native OS of
the server. Since we don't know the peer server system we currently just
translate the URL syntax into plain letters. It is still better, and with
the solaris 2.6-supplied ftp server it works with spaces in the file names.
Daniel (27 December 1999):
- When curl parsed cookies straight off a remote site, it corrupted the input
data which, if the downloaded headers were stored, made for very odd
characters in the saved data. Correctly identified and reported by Paul
Harrington.
Daniel (13 December 1999):
- General cleanups in the library interface. There had been some bad kludges
added during times of stress and I did my best to clean them off. It was
both regarding the lib API as well as include file confusions.
Daniel (3 December 1999):
- A small --stderr bug was reported by Eetu Ojanen...
- who also brought the suggestion of extending the -X flag to ftp list as
well. So now it is extended, and the long option is now --request instead.
It is only for the ftp list command for now (and the former http stuff too,
of course).
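For illustration (the server name is a placeholder), a custom list command
might be sent like:
  curl --request NLST ftp://ftp.site.com/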
Lars J. Aas (24 November 1999):
- Patched curl to compile and build under BeOS. Doesn't work yet though!
- Corrected the Makefile.am files to allow putting object files in
different directories than the sources.
Version 6.3.1
Daniel (23 November 1999):
- I've had a major disk crash. My good old trustworthy source disk died
along with the machine that hosted it. Thank goodness most of the
things I've done are either backed up elsewhere or stored in this CVS
server!
- Michael S. Steuer pointed out a bug in the -F handling
that made curl hang if you posted an empty variable such as '-F name='. It
was one of those old bugs that never have worked properly...
- Jason Baietto pointed out a general flaw in the HTTP
download. Curl didn't complain if it was prematurely aborted before the
entire download was completed. It does now.
Daniel (19 November 1999):
- Chris Maltby very accurately criticized the lack of
return code checks on the fwrite() calls. I did a thorough check for all
occurrences and corrected this.
Daniel (17 November 1999):
- Paul Harrington pointed out that the -m/--max-time option
doesn't work for the slow system calls like gethostbyname()... I don't have
any good fix yet, just a slightly less bad one that makes curl exit hard
when the timeout is reached.
- Bjorn Reese helped me point out a possible problem that might be the reason
why Thomas Hurst experience problems in his Amiga version.
Daniel (12 November 1999):
- I found a crash in the new cookie file parser. It crashed when you gave
a plain http header file as input...
Version 6.3
Daniel (10 November 1999):
- I kind of found out that the HTTP time-conditional GETs (-z) aren't always
respected by the web server and the document is therefore sent in whole
again, even though it doesn't match the requested condition. After reading
section 13.3.4 of RFC 2616, I think I'm doing the right thing now when I do
my own check as well. If curl thinks the condition isn't met, the transfer
is aborted prematurely (after all the headers have been received).
- After comments from Robert Linden I also rewrote some parts of the man page
to better describe how the -F works.
- Michael Anti put up a new curl download mirror in
China: http://www.pshowing.com/curl/
- I added the list of download mirrors to the README file
- I did add more explanations to the man page
Daniel (8 November 1999):
- I made the -b/--cookie option capable of reading netscape formatted cookie
files as well as normal http-header files. It should be able to
transparently figure out what kind of file it got as input.
Daniel (29 October 1999):
- Another one of Sebastiaan van Erk's ideas (that has been requested before
but I seem to have forgotten who it was), is to add support for ranges in
FTP downloads. As usual, one request is just a request, when they're two
it is a demand. I've added simple support for X-Y style fetches. X has to
be the lower number, though you may omit one of the numbers. Use the -r/
--range switch (previously HTTP-only).
- Sebastiaan van Erk suggested that curl should be
able to show the file size of a specified file. I think this is a splendid
idea and the -I flag is now working for FTP. It displays the file size in
this manner:
Content-Length: XXXX
As it resembles normal headers, and leaves us the opportunity to add more
info in that display if we can come up with more in the future! It also
makes sense since if you access ftp through a HTTP proxy, you'd get the
file size the same way.
I changed the order of the QUOTE command executions. They're now executed
just after the login and before any other command. I made this to enable
quote commands to run before the -I stuff is done too.
- I found out that -D/--dump-header and -V/--version weren't documented in
the man page.
- Many HTTP/1.1 servers do not support ranges. Don't ask me why. I did add
some text about this in the man page for the range option. The thread in
the mailing list that started this was initiated by Michael Anti.
- I get reports about nroff crashes on solaris 2.6+ when displaying the curl
man page. Switch to gnroff instead, it is reported to work(!). Adam Barclay
reported and brought the suggestion.
- In a dialogue with Johannes G. Kristinsson we came
up with the idea to let -H/--header specified headers replace the
internally generated headers, if you happened to select to add a header
that curl normally uses by itself. The advantage with this is not entirely
obvious, but in Johannes' case it means that he can use another Host: than
the one curl would set.
Daniel (27 October 1999):
- Jongki Suwandi brought a nice patch for (yet another) crash when following
a location:. This time you had to follow a https:// server's redirect to
get the core.
Version 6.2
Daniel (21 October 1999):
- I think I managed to remove the suspicious (nil) that has been seen just
before the "Host:" in HTTP requests when -v was used.
- I found out that if you followed a location: when using a proxy, without
having specified http:// in the URL, the protocol part was added once again
when moving to the next URL! (The protocol part has to be added to the
URL when going through a proxy since it has no protocol-guessing system
such as curl has.)
- Benjamin Ritcey reported a core dump under solaris 2.6
with OpenSSL 0.9.4. It turned out this was due to a bad free() in main.c
that occurred after the download was done and completed.
- Benjamin found ftp downloads to show the first line of the download meter
to get written twice, and I removed that problem. It was introduced with
the multiple URL support.
- Dan Zitter correctly pointed out that curl 6.1 and earlier versions didn't
honor RFC 2616 chapter 4 section 2, "Message Headers": "...Field names are
case-insensitive..." HTTP header parsing assumed a certain casing. Dan
also provided me with a patch that corrected this, which I took the liberty
of editing slightly.
- Dan Zitter also provided a nice patch for config.guess to better recognize
the Mac OS X
- Dan also corrected a minor problem in the lib/Makefile that caused linking
to fail on OS X.
Daniel (19 October 1999):
- Len Marinaccio came up with some problems with curl. Since Windows has a
crippled shell, it can't redirect stderr and that causes trouble. I added
--stderr today which allows the user to redirect the stderr stream to a
file or stdout.
Daniel (18 October 1999):
- The configure script now understands the '--without-ssl' flag, which now
totally disable SSL/https support. Previously it wasn't possible to force
the configure script to leave SSL alone. The previous functionality has
been retained. Troy Engel helped test this new one.
Version 6.1
Daniel (17 October 1999):
- I ifdef'ed or commented all the zlib stuff in the sources and configure
script. It turned out we needed to mock more with zlib than I initially
thought, to make it capable of downloading compressed HTTP documents and
uncompress them on the fly. I didn't mean the zlib parts of curl to become
more than minor so this means I halt the zlib expedition for now and wait
until someone either writes the code or zlib gets updated and better
adjusted for this kind of usage. I won't get into details here, but a
short a summary is suitable:
- zlib can't automatically detect whether to use zlib or gzip
decompression methods.
- zlib is very neat for reading gzipped files from a file descriptor,
although not as nice for reading buffer-based data such as we would
want it.
- there are still some problems with the win32 version when reading from
a file descriptor if that is a socket
Daniel (14 October 1999):
- Moved the (external) include files for libcurl into a subdirectory named
curl and adjusted all #include lines to use <curl/XXXX> to maintain a
better name space and control of the headers. This has been requested.
Daniel (12 October 1999):
- I modified the 'maketgz' script to perform a 'make' too before a release
archive is put together in an attempt to make the time stamps better and
hopefully avoid the double configure-running that use to occur.
Daniel (11 October 1999):
- Applied J<>rn's patches that fixes zlib for mingw32 compiles as well as
some other missing zlib #ifdef and more text on the multiple URL docs in
the man page.
Version 6.1beta
Daniel (6 October 1999):
- Douglas E. Wegscheid sent me a patch that made the exact same thing as I
just made: the -d switch is now capable of reading post data from a named
file or stdin. Use it similarly to the -F. To read the post data from a
given file:
curl -d @path/to/filename www.postsite.com
or let curl read it out from stdin:
curl -d @- www.postit.com
J<>rn Hartroth (3 October 1999):
- Brought some more patches for multiple URL functionality. The MIME
separation ideas are almost scrapped now, and a custom separator is being
used instead. This is still compile-time "flagged".
Daniel
- Updated curl.1 with multiple URL info.
Daniel (30 September 1999):
- Felix von Leitner brought openssl-check fixes for configure.in to work
out-of-the-box when the openssl files are installed in the system default
dirs.
Daniel (28 September 1999)
- Added libz functionality. This should enable decompressing gzip, compress
or deflate encoding HTTP documents. It also makes curl send an accept that
it accepts that kind of encoding. Compressed contents usually shortens
download time. I *need* someone to tell me a site that uses compressed HTTP
documents so that I can test this out properly.
- As a result of the adding of zlib awareness, I changed the version string
a little. I plan to add openldap version reporting in there too.
Daniel (17 September 1999)
- Made the -F option allow stdin when specifying files. By using '-' instead
of file name, the data will be read from stdin.
Version 6.0
Daniel (13 September 1999)
- Added -X/--http-request <request> to enable any HTTP command to be sent.
Do not that your server has to support the exact string you enter. This
should possibly a string like DELETE or TRACE.
- Applied Douglas' mingw32-fixes for the makefiles.
Daniel (10 September 1999)
- Douglas E. Wegscheid pointed out a problem. Curl didn't check the FTP
servers return code properly after the --quote commands were issued. It
took anything non 200 as an error, when all 2XX codes should be accepted as
OK.
- Sending cookies to the same site in multiple lines like curl used to do
turned out to be bad and breaking the cookie specs. Curl now sends all
cookies on a single Cookie: line. Curl is not yet RFC 2109 compliant, but I
doubt that many servers do use that syntax (yet).
Daniel (8 September 1999)
- J<>rn helped me make sure it still compiles nicely with mingw32 under win32.
Daniel (7 September 1999)
- FTP upload through proxy is now turned into a HTTP PUT. Requested by
Stefan Kanthak.
- Added the ldap files to the .m32 makefile.
Daniel (3 September 1999)
- Made cookie matching work while using HTTP proxy.
Bjorn Reese (31 August 1999)
- Passed his ldap:// patch. Note that this requires the openldap shared
library to be installed and that LD_LIBRARY_PATH points to the
directory where the lib will be found when curl is run with a
ldap:// URL.
J<>rn Hartroth (31 August 1999)
- Made the Mingw32 makefiles into single files.
- Made file:// work for Win32. The same code is now used for unix as well for
performance reasons.
Douglas E. Wegscheid (30 August 1999)
- Patched the Mingw32 makefiles for SSL builds.
Matthew Clarke (30 August 1999)
- Made a cool patch for configure.in to allow --with-ssl to specify the
root dir of the openssl installation, as in
./configure --with-ssl=/usr/ssl_here
- Corrected the 'reconf' script to work better with some shells.
J<>rn Hartroth (26 August 1999)
- Fixed the Mingw32 makefiles in lib/ and corrected the file.c for win32
compiles.
Version 5.11
Daniel (25 August 1999)
- John Weismiller pointed out a bug in the header-line
realloc() system in download.c.
- I added lib/file.[ch] to offer a first, simple, file:// support. It
probably won't do much good on win32 system at this point, but I see it
as a start.
- Made the release archives get a Makefile in the root dir, which can be
used to start the compiling/building process easier. I haven't really
changed any INSTALL text yet, I wanted to get some feed-back on this
first.
Daniel (17 August 1999)
- Another Location: bug. Curl didn't do proper relative locations if the
original URL had cgi-parameters that contained a slash. Nusu's page
again.
- Corrected the NO_PROXY usage. It is a list of substrings that if one of
them matches the tail of the host name it should connect to, curl should
not use a proxy to connect there. Pointed out to me by Douglas
E. Wegscheid. I also changed the README text a little regarding this.
Daniel (16 August 1999)
- Fixed a memory bug with http-servers that sent Location: to a Location:
page. Nusu's page showed this too.
- Made cookies work a lot better. Setting the same cookie name several times
used to add more cookies instead of replacing the former one which it
should've. Nusu <nus at intergorj.ro> brought me an URL that made this
painfully visible...
Troy (15 August 1999)
- Brought new .spec files as well as a patch for configure.in that lets the
configure script find the openssl files better, even when the include
files are in /usr/include/openssl
Version 5.10
Daniel (13 August 1999)
- SSL_CTX_set_default_passwd_cb() has been modified in the 0.9.4 version of
OpenSSL. Now why couldn't they simply add a *new* function instead of
modifying the parameters of an already existing function? This way, we get
a compiler warning if compiling with 0.9.4 but not with earlier. So, I had
to come up with a #if construction that deals with this...
- Made curl output the SSL version number get displayed properly with 0.9.4.
Troy (12 August 1999)
- Added MingW32 (GCC-2.95) support under Win32. The INSTALL file was also
a bit rearranged.
Daniel (12 August 1999)
- I had to copy a good <arpa/telnet.h> include file into the curl source
tree to enable the silly win32 systems to compile. The distribution rights
allows us to do that as long as the file remains unmodified.
- I corrected a few minor things that made the compiler complain when
-Wall -pedantic was used.
- I'm moving the official curl web page to http://curl.haxx.nu. I think it
will make it easier to remember as it is a lot shorter and less cryptic.
The old one still works and shows the same info.
Daniel (11 August 1999)
- Albert Chin-A-Young mailed me another correction for NROFF in the
configure.in that is supposed to be better for IRIX users.
Daniel (10 August 1999)
- Albert Chin-A-Young helped me with some stupid Makefile things, as well as
some fiddling with the getdate.c stuff that he had problems with under
HP-UX v10. getdate.y will now be compiled into getdate.c if the appropriate
yacc or bison is found by the configure script. Since this is slightly new,
we need to test the output getdate.c with win32 systems to make sure it
still compiles there.
Daniel (5 August 1999)
- I've just setup a new mailing list with the intention to keep discussions
around libcurl development in it. I mainly expect it to be for thoughts and
brainstorming around a "next generation" library, rather than nitpicking
about the current implementation or details in the current libcurl.
To join our happy bunch of future-looking geeks, enter 'subscribe
<address>' in the body of a mail and send it to
libcurl-request@listserv.fts.frontec.se. Curl bug reports, the usual curl
talk and everything else should still be kept in this mailing list. I've
started to archive this mailing list and have put the libcurl web page at
www.fts.frontec.se/~dast/libcurl/.
- Stefan Kanthak contacted me regarding a few problems in the configure
script which he discovered when trying to make curl compile and build under
Siemens SINIX-Z V5.42B2004!
- Marcus Klein very accurately informed me that src/version.h was not present
in the CVS repository. Oh, how silly...
- Linus Nielsen rewrote the telnet:// part and now curl offers limited telnet
support. If you run curl like 'curl telnet://host' you'll get all output on
the screen and curl will read input from stdin. You'll be able to login and
run commands etc, but since the output is buffered, expect to get a little
weird output.
This is still in its infancy and it might get changed. We need your
feed-back and input in how this is best done.
WIN32 NOTE: I bet we'll get problems when trying to compile the current
lib/telnet.c on win32, but I think we can sort them out in time.
- David Sanderson reported that FORCE_ALLOCA_H or HAVE_ALLOCA_H must be
defined for getdate.c to compile properly on HP-UX 11.0. I updated the
configure script to check for alloca.h which should make it.
Daniel (4 August 1999)
- I finally got to understand Marcus Klein's ftp download resume problem,
which turns out to be due to different outputs from different ftp
servers. It makes ftp download resuming a little trickier, but I've made
some modifications I really believe will work for most ftp servers and I do
hope you report if you have problems with this!
- Added text about file transfer resuming to README.curl.
Daniel (2 August 1999)
- Applied a progress-bar patch from Lars J. Aas. It offers
a new styled progress bar enabled with -#/--progress-bar.
T. Yamada <tai at imasy.or.jp> (30 July 1999)
- It breaks with segfault when 1) curl is using .netrc to obtain
username/password (option '-n'), and 2) is automatically redirected to
another location (option '-L').
There is a small bug in lib/url.c (block starting from line 641), which
tries to take out username/password from user- supplied command-line
argument ('-u' option). This block is never executed on first attempt since
CONF_USERPWD bit isn't set at first, but curl later turns it on when it
checks for CONF_NETRC bit. So when curl tries to redo everything due to
redirection, it segfaults trying to access *data->userpwd.
Version 5.9.1
Daniel (30 July 1999)
- Steve Walch pointed out that there is a memory leak in the formdata
functions. I added a FormFree() function that is now used and supposed to
correct this flaw.
- Mark Wotton reported:
'curl -L https://www.cwa.com.au/' core dumps. I managed to cure this by
correcting the cleanup procedure. The bug seems to be gone with my OpenSSL
0.9.2b, although still occurs when I run the ~100 years old SSLeay 0.8.0. I
don't know whether it is curl or SSLeay that is to blame for that.
- Marcus Klein:
Reported an FTP upload resume bug that I really can't repeat nor understand.
I leave it here so that it won't be forgotten.
Daniel (29 July 1999)
- Costya Shulyupin suggested support for longer URLs when following Location:
and I could only agree and fix it!
- Leigh Purdie found a problem in the upload/POST department. It turned out
that http.c accidentaly cleared the pointer instead of the byte counter
when supposed to.
- Costya Shulyupin pointed out a problem with port numbers and Location:. If
you had a server at a non-standard port that redirected to an URL using a
standard port number, curl still used that first port number.
- Ralph Beckmann pointed out a problem when using both CONF_FOLLOWLOCATION
and CONF_FAILONERROR simultaneously. Since the CONF_FAILONERROR exits on
the 302-code that the follow location header outputs it will never show any
html on location: pages. I have now made it look for >=400 codes if
CONF_FOLLOWLOCATION is set.
- 'struct slist' is now renamed to 'struct curl_slist' (as suggested by Ralph
Beckmann).
- Joshua Swink and Rick Welykochy were the first to point out to me that the
latest OpenSSL package now have moved the standard include path. It is now
in /usr/local/ssl/include/openssl and I have now modified the --enable-ssl
option for the configure script to use that as the primary path, and I
leave the former path too to work with older packages of OpenSSL too.
Daniel (9 June 1999)
- I finally understood the IRIX problem and now it seem to compile on it!
I am gonna remove those #define strcasecmp() things once and for all now.
Daniel (4 June 1999)
- I adjusted the FTP reply 227 parser to make the PASV command work better
with more ftp servers. Appearantly the Roxen Challanger server replied
something curl 5.9 could deal with! :-( Reported by Ashley Reid-Montanaro
and Mark Butler brought a solution for it.
Daniel (26 May 1999)
- Rearranged. README is new, the old one is now README.curl and I added a
README.libcurl with text I got from Ralph Beckmann.
- I also updated the INSTALL text.
Daniel (25 May 1999)
- David Jonathan Lowsky correctly pointed out that curl didn't properly deal
with form posting where the variable shouldn't have any content, as in curl
-F "form=" www.site.com. It was now fixed.
Version 5.9
Daniel (22 May 1999)
- I've got a bug report from Aaron Scarisbrick in which he states he has some
problems with -L under FreeBSD 3.0. I have previously got another bug
report from Stefan Grether which points at an error with similar sympthoms
when using win32. I made the allocation of the new url string a bit faster
and different, don't know if it actually improves anything though...
Daniel (20 May 1999)
- Made the cookie parser deal with CRLF newlines too.
Daniel (19 May 1999)
- Download() didn't properly deal with failing return codes from the sread()
function. Adam Coyne found the problem in the win32 version, and Troy Engel
helped me out isolating it.
Daniel (16 May 1999)
- Richard Adams pointed out a bug I introduced in 5.8. --dump-header doesn't
work anymore! :-/ I fixed it now.
- After a suggestion by Joshua Swink I added -S / --show-error to force curl
to display the error message in case of an error, even if -s/--silent was
used.
Daniel (10 May 1999)
- I moved the stuff concerning HTTP, DICT and TELNET it their own source
files now. It is a beginning on my clean-up of the sources to make them
layer all those protocols better to enable more to be added easier in the
future!
- Leon Breedt sent me some files I've not put into the main curl
archive. They're for creating the Debian package thingie. He also sent me a
debian package that I've made available for download at the web page
Daniel (9 May 1999)
- Made it compile on cygwin too.
Troy Engel (7 May 1999)
- Brought a series of patches to allow curl to compile smoothly on MSVC++ 6
again!
Daniel (6 May 1999)
- I changed the #ifdef HAVE_STRFTIME placement for the -z code so that it
will be easier to discover systems that don't have that function and thus
can't use -z successfully. Made the strftime() get used if WIN32 is defined
too.
Version 5.8
Daniel (5 May 1999)
- I've had it with this autoconf/automake mess. It seems to work allright
for most people who don't have automake installed, but for those who have
there are problems all over.
I've got like five different bug reports on this only the last
week... Claudio Neves and Federico Bianchi and root <duggerj001 at
hawaii.rr.com> are some of them reporting this.
Currently, I have no really good fix since I want to use automake myself to
generate the Makefile.in files. I've found out that the @SHELL@-problems
can often be fixed by manually invoking 'automake' in the archive root
before you run ./configure... I've hacked my maketgz script now to fiddle
a bit with this and my tests seem to work better than before at least!
Daniel (4 May 1999)
- mkhelp.pl has been doing badly lately. I corrected a case problem in
the regexes.
- I've now remade the -o option to not touch the file unless it needs to.
I had to do this to make -z option really fine, since now you can make a
curl fetch and use a local copy's time when downloading to that file, as
in:
curl -z dump -o dump remote.site.com/file.html
This will only get the file if the remote one is newer than the local.
I'm aware that this alters previous behaviour a little. Some scripts out
there may depend on that the file is always touched...
- Corrected a bug in the SSLv2/v3 selection.
- Felix von Leitner requested that curl should be able to send
"If-Modified-Since" headers, which indeed is a fair idea. I implemented it
right away! Try -z <expression> where expression is a full GNU date
expression or a file name to get the date from!
Stephan Lagerholm (30 Apr 1999)
- Pointed out a problem with the src/Makefile for FreeBSD. The RM variable
isn't set and causes the make to fail.
Daniel (26 April 1999)
- Am I silly or what? Irving Wolfe pointed out to me that the curl version
number was not set properly. Hasn't been since 5.6. This was due to a bug
in my maketgz script!
David Eriksson (25 Apr 1999)
- Found a bug in cookies.c that made it crash at times.
Version 5.7.1
Doug Kaufman (23 Apr 1999)
- Brought two sunos 4 fixes. One of them being the hostip.c fix mentioned
below and the other one a correction in include/stdcheaders.h
- Added a paragraph about compiling with the US-version of openssl to the
INSTALL file.
Daniel
- New mailing list address. Info updated on the web page as well as in the
README file
Greg Onufer (20 Apr 1999)
- hostip.c didn't compile properly on SunOS 5.5.1.
It needs an #include <sys/types.h>
Version 5.7
Daniel (Apr 20 1999)
- Decided to upload a non-beta version right now!
- Made curl support any-length HTTP headers. The destination buffer is now
simply enlarged every time it turns out to be too small!
- Added the FAQ file to the archive. Still a bit smallish, but it is a
start.
Eric Thelin (15 Apr 1999)
- Made -D accept '-' instead of filename to write to stdout.
Version 5.6.3beta
Daniel (Apr 12 1999)
- Changed two #ifdef WIN32 to better #ifdef <errorcode> when connect()ing
in url.c and ftp.c. Makes cygwin32 deal with them better too. We should
try to get some decent win32-replacement there. Anyone?
- The old -3/--crlf option is now ONLY --crlf!
- I changed the "SSL fix" to a more lame one, but that doesn't remove as
much functionality. Now I've enabled the lib to select what SSL version it
should try first. Appearantly some older SSL-servers don't like when you
talk v3 with them so you need to be able to force curl to talk v2 from the
start. The fix dated April 6 and posted on the mailing list forced curl to
use v2 at all times using a modern OpenSSL version, but we don't really
want such a crippled solution.
- Marc Boucher sent me a patch that corrected a math error for the
"Curr.Speed" progress meter.
- Eric Thelin sent me a patch that enables '-K -' to read a config file from
stdin.
- I found out we didn't close the file properly before so I added it!
Daniel (Apr 9 1999)
- Yu Xin pointed out a problem with ftp download resume. It didn't work at
all! ;-O
Daniel (Apr 6 1999)
- Corrected the version string part generated for the SSL version.
- I found a way to make some other SSL page work with openssl 0.9.1+ that
previously didn't (ssleay 0.8.0 works with it though!). Trying to get
some real info from the OpenSSL guys to see how I should do to behave the
best way. SSLeay 0.8.0 shouldn't be that much in use anyway these days!
Version 5.6.2beta
Daniel (Apr 4 1999)
- Finally have curl more cookie "aware". Now read carefully. This is how
it works.
To make curl read cookies from an already existing file, in plain header-
format (like from the headers of a previous fetch) invoke curl with the
-b flag like:
curl -b file http://site/foo.html
Curl will then use all cookies it finds matching. The old style that sets
a single cookie with -b is still supported and is used if the string
following -b includes a '=' letter, as in "-b name=daniel".
To make curl read the cookies sent in combination with a location: (which
sites often do) point curl to read a non-existing file at first (i.e
to start with no existing cookies), like:
curl -b nowhere http://site/setcookieandrelocate.html
- Added a paragraph in the TODO file about the SSL problems recently
reported. Evidently, there is some kind of SSL problem that curl may need to address.
- Better "Location:" following.
Douglas E. Wegscheid (Tue, 30 Mar 1999)
- A subsecond display patch.
Daniel (Mar 14 1999)
- I've separated the version number of libcurl and curl now. To make
things a little easier, I decided to start the curl numbering from
5.6 and the former version number known as "curl" is now the one
set for libcurl.
- Removed the 'enable-no-pass' from configure, I doubt anyone wanted
that.
- Made lots of tiny adjustments to compile smoothly with cygwin under
win32. It's a killer for porting this to win32, bye bye VC++! ;-)
Compiles and builds out-of-the-box now. See the new wordings in
INSTALL for details.
- Beginning experiments with downloading multiple documents from an http
server while remaining connected.
Version 5.6beta
Daniel (Mar 13 1999)
- Since I've changed so much, I thought I'd just go ahead and implement the
suggestion from Douglas E. Wegscheid. -D or --dump-header is now storing
HTTP headers separately in the specified file.
- Added new text to INSTALL on what to do to build this on win32 now.
- Aaargh. I had to take a step back and prefix the shared #include files
in the sources with "../include/" to please VC++...
Daniel (Mar 12 1999)
- Split the url.c source into many tiny sources for better readability
and smaller size.
Daniel (Mar 11 1999)
- Started to change stuff for a move to make libcurl and a more separate
curl application that uses the libcurl. Made the libcurl sources into
the new lib directory while the curl application will remain in src as
before. New makefiles, adjusted configure script and so.
libcurl.a built quickly and easily. I better make a better interface to
the lib functions though.
The new root dir include/ is supposed to contain the public information
about the new libcurl. It is a little ugly so far :-)
Daniel (Mar 1 1999)
- Todd Kaufmann sent me a good link to Netscape's cookie spec as well as the
info that RFC 2109 specifies how to use them. The link is now in the
README and the RFC in the RESOURCES.
Daniel (Feb 23 1999)
- Finally made configure accept --with-ssl to look for SSL libs and includes
in the "standard" place /usr/local/ssl...
Daniel (Feb 22 1999)
- Verified that curl linked fine with OpenSSL 0.9.1c which seems to be
the most recent.
Henri Gomez (Fri Feb 5 1999)
- Sent in an updated curl-ssl.spec. I still miss the script that builds an
RPM automatically...
Version 5.5.1
Mark Butler (27 Jan 1999)
- Corrected problems in Download().
Daniel Stenberg (25 Jan 1999)
- Jeremie Petit pointed out a few flaws in the source that prevented it from
compiling warning-free with the native compiler under Digital Unix v4.0d.
Version 5.5
Daniel Stenberg (15 Jan 1999)
- Added Bjorn's small text to the README about the DICT protocol.
Daniel Stenberg (11 Jan 1999)
- <jswink at softcom.net> reported about the win32 version: "Doesn't use
ALL_PROXY environment variable". Turned out to be because of the static-
buffer nature of the win32 environment variable calls!
Bjorn Reese (10 Jan 1999)
- I have attached a simple addition for the DICT protocol (RFC 2229).
It performs dictionary lookups. The output still needs to be better
formatted.
To test it try (the exact format, and more examples are described in
the RFC)
dict://dict.org/m:hello
dict://dict.org/m:hello::soundex
Vicente Garcia (10 Jan 1999)
- Corrected the progress meter for files larger than 20MB.
Daniel Stenberg (7 Jan 1999)
- Corrected the -t and -T help texts. They claimed to be FTP only.
Version 5.4
Daniel Stenberg
(7 Jan 1999)
- Irving Wolfe reported that curl -s didn't always suppress the progress
reporting. It was the form post that automatically always switched it on
again. This is now corrected!
(4 Jan 1999)
- Andreas Kostyrka suggested I'd add PUT and he helped me out to test it. If
you use -t or -T now on a http or https server, PUT will be used for file
upload.
I removed the former use of -T with HTTP. I doubt anyone ever really used
that.
(4 Jan 1999)
- Erik Jacobsen found a width bug in the mprintf() function. I corrected it
now.
(4 Jan 1999)
- As John V. Chow pointed out to me, curl accepted very limited URL sizes. It
should now accept path parts that are up to at least 4096 bytes.
- Somehow I screwed up when applying the AIX fix from Gilbert Ramirez, so
I redid that now.

CHANGES.2001 (new file, 1957 lines; diff suppressed because it is too large)

@@ -10,14 +10,10 @@ This file is only present in the CVS - never in release archives. It contains
information about other files and things that the CVS repository keeps in its
inner sanctum.
Use autoconf 2.50 and no earlier. Also, try having automake 1.5 and libtool
1.4.1 at least.
You will need perl to generate the src/hugehelp.c file. The file
src/hugehelp.c.cvs is a one-shot file that you can rename to src/hugehelp.c if
you really can't generate the true file yourself!
Compile and build instructions follow below.
CHANGES.0 contains ancient changes.
CHANGES.$year contains changes for the particular year.
memanalyze.pl is for analyzing the output generated by curl if -DMALLOCDEBUG
is used when compiling
@@ -26,12 +22,38 @@ you really can't generate the true file yourself!
Makefile.dist is included as the root Makefile in distribution archives
perl/contrib/ is a subdirectory with various perl scripts
java/ is a subdirectory with the Java interface to libcurl
perl/ is a subdirectory with various perl scripts
To build after having extracted everything from CVS, do this:
./buildconf
./configure
make
REQUIREMENTS
You need the following software installed:
o autoconf 2.50 (or later)
o automake 1.5 (or later)
o libtool 1.4 (or later)
o GNU m4 (required by autoconf)
o nroff + perl (if you don't have nroff and perl and you for some reason
don't want to install them, you can rename the source file
src/hugehelp.c.cvs to src/hugehelp.c and avoid having to generate this
file. This will of course give you an older version of the file that isn't
up-to-date. That file was checked in once and won't be updated very
regularly.)
MAC OS X
For Mac OS X users, Guido Neitzer wrote down the following step-by-step guide:
1. Install fink (http://fink.sourceforge.net)
2. Update fink to the newest version (with the installed fink)
3. Install the latest version of autoconf, automake and m4 with fink
4. Install version 1.4.1 of libtool - you find it in the "unstable" section
(read the manual to see how to get unstable versions)
5. Get cURL from the cvs
6. Build cURL with "./buildconf", "./configure", "make", "sudo make install"

LEGAL (2 changed lines)

@@ -1,4 +1,4 @@
Copyright (C) 2000, Daniel Stenberg, <daniel@haxx.se>, et al.
Copyright (C) 1998-2001, Daniel Stenberg, <daniel@haxx.se>, et al.
Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.


@@ -2,13 +2,11 @@
# $Id$
#
AUTOMAKE_OPTIONS = foreign no-dependencies
AUTOMAKE_OPTIONS = foreign
EXTRA_DIST = \
CHANGES LEGAL maketgz MITX.txt MPL-1.1.txt \
config-win32.h reconf Makefile.dist \
curl-config.in build_vms.com config-riscos.h \
config-vms.h curl-mode.el
reconf Makefile.dist curl-config.in build_vms.com curl-mode.el
bin_SCRIPTS = curl-config


@@ -5,7 +5,7 @@
# | (__| |_| | _ <| |___
# \___|\___/|_| \_\_____|
#
# Copyright (C) 2000, Daniel Stenberg, <daniel@haxx.se>, et al.
# Copyright (C) 2002, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# In order to be useful for every potential user, curl and libcurl are
# dual-licensed under the MPL and the MIT/X-derivate licenses.
@@ -43,13 +43,19 @@ mingw32-ssl:
vc:
cd lib
nmake -f Makefile.vc6
nmake -f Makefile.vc6 cfg=release
cd ..\src
nmake -f Makefile.vc6
vc-ssl:
cd lib
nmake -f Makefile.vc6 release-ssl
nmake -f Makefile.vc6 cfg=release-ssl
cd ..\src
nmake -f Makefile.vc6 cfg=release-ssl
vc-ssl-dll:
cd lib
nmake -f Makefile.vc6 cfg=release-ssl-dll
cd ..\src
nmake -f Makefile.vc6

README (2 changed lines)

@@ -26,7 +26,7 @@ README
The official download mirror sites are:
Sweden -- ftp://ftp.sunet.se/pub/www/utilities/curl/
Sweden -- ftp://cool.haxx.se/curl/
Sweden -- http://cool.haxx.se/curl/
Germany -- ftp://ftp.fu-berlin.de/pub/unix/network/curl/
To download the very latest source off the CVS server do this:


@@ -62,3 +62,5 @@
#undef HAVE_O_NONBLOCK
#undef HAVE_DISABLED_NONBLOCKING
/* Define this to 'int' if in_addr_t is not an available typedefed type */
#undef in_addr_t


@@ -143,6 +143,43 @@ AC_DEFUN([TYPE_SOCKLEN_T],
#include <sys/socket.h>])
])
dnl Check for in_addr_t: it is used to receive the return code of inet_addr()
dnl and a few other things. If not found, we set it to unsigned int, as even
dnl 64-bit implementations use to set it to a 32-bit type.
AC_DEFUN([TYPE_IN_ADDR_T],
[
AC_CHECK_TYPE([in_addr_t], ,[
AC_MSG_CHECKING([for in_addr_t equivalent])
AC_CACHE_VAL([curl_cv_in_addr_t_equiv],
[
# Systems have either "struct sockaddr *" or
# "void *" as the second argument to getpeername
curl_cv_in_addr_t_equiv=
for t in int size_t unsigned long "unsigned long"; do
AC_TRY_COMPILE([
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
],[
$t data = inet_addr ("1.2.3.4");
],[
curl_cv_in_addr_t_equiv="$t"
break
])
done
if test "x$curl_cv_in_addr_t_equiv" = x; then
AC_MSG_ERROR([Cannot find a type to use in place of in_addr_t])
fi
])
AC_MSG_RESULT($curl_cv_in_addr_t_equiv)
AC_DEFINE_UNQUOTED(in_addr_t, $curl_cv_in_addr_t_equiv,
[type to use in place of in_addr_t if not defined])],
[#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>])
])
dnl ************************************************************
dnl check for "localhost", if it doesn't exist, we can't do the
dnl gethostbyname_r tests!
@@ -335,9 +372,12 @@ AC_DEFUN(CURL_CHECK_GETHOSTBYNAME_R,
#include <string.h>
#include <sys/types.h>
#include <netdb.h>
#undef NULL
#define NULL (void *)0
int
gethostbyname_r(const char *, struct hostent *, struct hostent_data *);],[
struct hostent_data data;
gethostbyname_r(NULL, NULL, NULL);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_3)
@@ -350,9 +390,12 @@ gethostbyname_r(NULL, NULL, NULL);],[
#include <string.h>
#include <sys/types.h>
#include <netdb.h>
#undef NULL
#define NULL (void *)0
int
gethostbyname_r(const char *,struct hostent *, struct hostent_data *);],[
struct hostent_data data;
gethostbyname_r(NULL, NULL, NULL);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_3)
@@ -363,6 +406,8 @@ gethostbyname_r(NULL, NULL, NULL);],[
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>
#undef NULL
#define NULL (void *)0
struct hostent *
gethostbyname_r(const char *, struct hostent *, char *, int, int *);],[
@@ -375,6 +420,8 @@ gethostbyname_r(NULL, NULL, NULL, 0, NULL);],[
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>
#undef NULL
#define NULL (void *)0
int
gethostbyname_r(const char *, struct hostent *, char *, size_t,


@@ -6,14 +6,16 @@ $ loc = f$environment("PROCEDURE")
$ def = f$parse("X.X;1",loc) - "X.X;1"
$
$ set def 'def'
$ cc_qual = "/define=HAVE_CONFIG_H=1/include=(""../include/"",""../"")"
$ cc_qual = "/define=HAVE_CONFIG_H=1/include=(""../include/"",""../"",""../../openssl-0_9_6c/include/"")"
$ if p1 .eqs. "LISTING" then cc_qual = cc_qual + "/LIST/MACHINE"
$ if p1 .eqs. "DEBUG" then cc_qual = cc_qual + "/LIST/MACHINE/DEBUG"
$ msg_qual = ""
$ call build "[.lib]" "*.c"
$ call build "[.src]" "*.c"
$ call build "[.src]" "*.msg"
$ link /exe=curl.exe [.src]curl/lib/include=main,[.lib]curl/lib
$ link /exe=curl.exe [.src]curl/lib/include=main,[.lib]curl/lib, -
[-.openssl-0_9_6c.axp.exe.ssl]libssl/lib, -
[-.openssl-0_9_6c.axp.exe.crypto]libcrypto/lib
$
$
$ goto Common_Exit


@@ -5,7 +5,7 @@ die(){
exit
}
automake || die "The command 'automake $MAKEFILES' failed"
aclocal || die "The command 'aclocal' failed"
autoheader || die "The command 'autoheader' failed"
autoconf || die "The command 'autoconf' failed"
automake || die "The command 'automake $MAKEFILES' failed"


@@ -8,7 +8,7 @@ AC_PREREQ(2.50)
dnl First some basic init macros
AC_INIT
AC_CONFIG_SRCDIR([lib/urldata.h])
AM_CONFIG_HEADER(config.h src/config.h)
AM_CONFIG_HEADER(lib/config.h src/config.h tests/server/config.h)
dnl figure out the libcurl version
VERSION=`sed -ne 's/^#define LIBCURL_VERSION "\(.*\)"/\1/p' ${srcdir}/include/curl/curl.h`
@@ -69,7 +69,7 @@ AC_ARG_ENABLE(debug,
*) AC_MSG_RESULT(yes)
CPPFLAGS="$CPPFLAGS -DMALLOCDEBUG"
CFLAGS="-W -Wall -Wwrite-strings -pedantic -g"
CFLAGS="-W -Wall -Wwrite-strings -pedantic -Wundef -Wpointer-arith -Wcast-align -Wnested-externs -g"
;;
esac ],
AC_MSG_RESULT(no)
@@ -392,6 +392,10 @@ else
OPENSSL_ENABLED=1)
fi
dnl Check for the OpenSSL engine header, it is kind of "separated"
dnl from the main SSL check
AC_CHECK_HEADERS(openssl/engine.h)
AC_SUBST(OPENSSL_ENABLED)
fi
@@ -469,6 +473,8 @@ else
dnl is there a localtime_r()
CURL_CHECK_LOCALTIME_R()
AC_CHECK_FUNCS( gmtime_r )
fi
dnl **********************************************************************
@@ -492,7 +498,6 @@ AC_CHECK_HEADERS( \
sys/stat.h \
sys/types.h \
sys/time.h \
getopt.h \
sys/param.h \
termios.h \
termio.h \
@@ -519,14 +524,15 @@ AC_HEADER_TIME
# mprintf() checks:
# check for 'long double'
AC_CHECK_SIZEOF(long double, 8)
# AC_CHECK_SIZEOF(long double, 8)
# check for 'long long'
AC_CHECK_SIZEOF(long long, 4)
# AC_CHECK_SIZEOF(long long, 4)
# check for ssize_t
AC_CHECK_TYPE(ssize_t, int)
TYPE_SOCKLEN_T
TYPE_IN_ADDR_T
dnl Checks for library functions.
dnl AC_PROG_GCC_TRADITIONAL
@@ -588,12 +594,14 @@ dnl AC_SUBST(RANLIB)
AC_CONFIG_FILES([Makefile \
docs/Makefile \
docs/examples/Makefile \
docs/libcurl/Makefile \
include/Makefile \
include/curl/Makefile \
src/Makefile \
lib/Makefile \
tests/Makefile \
tests/data/Makefile \
tests/server/Makefile \
packages/Makefile \
packages/Win32/Makefile \
packages/Win32/cygwin/Makefile \
@@ -602,6 +610,8 @@ AC_CONFIG_FILES([Makefile \
packages/Linux/RPM/curl.spec \
packages/Linux/RPM/curl-ssl.spec \
packages/Solaris/Makefile \
packages/EPM/curl.list \
packages/EPM/Makefile \
curl-config
])
AC_OUTPUT


@@ -16,6 +16,7 @@ Usage: curl-config [OPTION]
Available values for OPTION include:
--cc compiler
--cflags pre-processor and compiler flags
--feature newline separated list of enabled features
--help display this help and exit
@@ -42,6 +43,10 @@ while test $# -gt 0; do
esac
case "$1" in
--cc)
echo @CC@
;;
--prefix)
echo $prefix
;;


@@ -1,3 +1,4 @@
$Id$
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -22,11 +23,16 @@ BUGS
When reporting a bug, you should include information that will help us
understand what's wrong, what you expected to happen and how to repeat the
bad behaviour. You therefore need to supply your operating system's name and
bad behavior. You therefore need to supply your operating system's name and
version number (uname -a under a unix is fine), what version of curl you're
using (curl -V is fine), what URL you were working with and anything else
you think matters.
Since curl deals with networks, it often helps us a lot if you include a
protocol debug dump with your bug report; that is the output you get by using
the -v flag. Usually, you also get more info by using -i, so that is likely to be
useful when reporting bugs as well.
If curl crashed, causing a core dump (in unix), there is hardly any use to
send that huge file to anyone of us. Unless we have an exact same system
setup as you, we can't do much with it. What we instead ask of you is to get
@@ -35,23 +41,23 @@ BUGS
The address and how to subscribe to the mailing list is detailed in the
MANUAL file.
HOW TO GET A STACK TRACE with a common unix debugger
====================================================
HOW TO GET A STACK TRACE
First, you must make sure that you compile all sources with -g and that you
don't 'strip' the final executable.
don't 'strip' the final executable. Try to avoid optimizing the code as
well, remove -O, -O2 etc from the compiler options.
Run the program until it bangs.
Run the program until it dumps core.
Run your debugger on the core file, like '<debugger> curl core'. <debugger>
should be replaced with the name of your debugger, in most cases that will
be 'gdb', but 'dbx' and others also occur.
When the debugger has finished loading the core file and presents you a
prompt, you can give the compiler instructions. Enter 'where' (without the
quotes) and press return.
prompt, enter 'where' (without the quotes) and press return.
The list that is presented is the stack trace. If everything worked, it is
supposed to contain the chain of functions that were called when curl
crashed.
crashed. Include the stack trace with your detailed bug report. It'll help a
lot.


@@ -1,4 +1,4 @@
Updated: November 1, 2001 (http://curl.haxx.se/docs/faq.shtml)
Updated: January 22, 2002 (http://curl.haxx.se/docs/faq.shtml)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -33,10 +33,10 @@ FAQ
3.6 Does curl support javascript, ASP, XML, XHTML or HTML version Y?
3.7 Can I use curl to delete/rename a file through FTP?
3.8 How do I tell curl to follow HTTP redirects?
3.9 How do I use curl in PHP, Perl, Tcl, Ruby or Java?
3.9 How do I use curl in my favourite programming language?
3.10 What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
3.11 How do I POST with a different Content-Type?
3.12 Why do FTP specific features over HTTP proxy fails?
3.12 Why do FTP specific features over HTTP proxy fail?
4. Running Problems
4.1 Problems connecting to SSL servers.
@@ -49,6 +49,7 @@ FAQ
4.5.3 "403 Forbidden"
4.5.4 "404 Not Found"
4.5.5 "405 Method Not Allowed"
4.5.6 "301 Moved Permanently"
4.6 Can you tell me what error code 142 means?
4.7 How do I keep user names and passwords secret in Curl command lines?
4.8 I found a bug!
@@ -154,26 +155,27 @@ FAQ
have them inserted in the main sources (of course on the condition that
developers agree on that the fixes are good).
The list of contributors in the bottom of the man page is only a small part
of all the people that every day provide us with bug reports, suggestions,
ideas and source code.
The list of contributors in the docs/THANKS file is only a small part of all
the people that every day provide us with bug reports, suggestions, ideas
and source code.
curl is developed by a community, with Daniel at the wheel.
1.6 What do you get for making cURL?
Project cURL is entirely free and open, without any commercial interests or
money involved. No person gets paid in any way for developing curl. We all
do this voluntarily on our spare time.
Project cURL is entirely free and open. No person gets paid in any way for
developing curl. We all do this voluntarily on our spare time.
We get some help from companies. Contactor Data hosts the curl web site and
the main mailing list, Haxx owns the curl web site's domain and
sourceforge.net hosts several project tools we take advantage of, like the
bug tracker, mailing lists and more.
If you feel you want to show support our project with a donation, a very
nice way of doing that would be to buy "gift certificates" at useful online
shopping sites, such as amazon.com or thinkgeek.com.
If you want to support our project with a donation or similar, one way of
doing that would be to buy "gift certificates" at useful online shopping
sites, such as amazon.com or thinkgeek.com. Another way would be to sponsor
us through a banner-program or by simply helping us coding, documenting,
testing etc.
1.7 What about CURL from curl.com?
@@ -332,11 +334,12 @@ FAQ
curl -L http://redirector.com
3.9 How do I use curl in PHP, Perl, Tcl, Ruby or Java?
3.9 How do I use curl in my favourite programming language?
There exist many language-interfaces for curl that integrates it better with
various languages. If you are fluid in a script language, you may very well
opt to use such an interface instead of using the command line tool.
There exist many language interfaces/bindings for curl that integrates it
better with various languages. If you are fluid in a script language, you
may very well opt to use such an interface instead of using the command line
tool.
At the time of writing, there are bindings for the five language mentioned
above, but chances are there are even more by the time you read this. Or you
@@ -346,17 +349,10 @@ FAQ
install and use them, in the libcurl section of the curl web site:
http://curl.haxx.se/libcurl/
PHP4 has the ability to use libcurl as an internal module if built with that
option enabled. You then get a set of extra functions that can be used
within your PHP programs. You find all details about those functions in the
curl section in the PHP manual, see the online version at:
http://www.php.net/manual/ref.curl.php
PHP also offers the option to run a command line, and then you can of course
invoke the curl tool using a command line. This is the way to use curl if
you're using PHP3 or PHP4 built without curl module support.
In December 2001, there are interfaces available for the following
languages: C/C++, Cocoa, Dylan, Java, Perl, PHP, Python, Rexx, Ruby, Scheme
and Tcl. By the time you read this, additional ones may have appeared!
3.10 What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
@@ -383,8 +379,8 @@ FAQ
etc.
There is one exception to this rule, and that is if you can "tunnel through"
the given HTTP proxy. Proxy tunneling is enabled with a special option (-P)
and is generally not available as proxy admins usuable disable tunneling to
the given HTTP proxy. Proxy tunneling is enabled with a special option (-p)
and is generally not available as proxy admins usually disable tunneling to
other ports than 443 (which is used for HTTPS access through proxies).
4. Running Problems
@@ -476,11 +472,22 @@ FAQ
identified by the Request-URI. The response MUST include an Allow header
containing a list of valid methods for the requested resource.
4.5.6 "301 Moved Permanently"
If you get this return code and an HTML output similar to this:
<H1>Moved Permanently</H1> The document has moved <A
HREF="http://same_url_now_with_a_trailing_slash/">here</A>.
it might be because you request a directory URL but without the trailing
slash. Try the same operation again _with_ the trailing slash, or use the
-L/--location option to follow the redirection.
4.6. Can you tell me what error code 142 means?
All error codes that are larger than the highest documented error code means
that curl has exited due to a crash. This is a serious error, and we
appriciate a detailed bug report from you that describes how we could go
appreciate a detailed bug report from you that describes how we could go
ahead and repeat this!
4.7. How do I keep user names and passwords secret in Curl command lines?


@@ -56,7 +56,7 @@ FTP
- download
- authentication
- kerberos security
- PORT or PASV
- active/passive using PORT, EPRT, PASV or EPSV
- single file size information (compare to HTTP HEAD)
- 'type=' URL support
- dir listing


@@ -36,8 +36,7 @@ UNIX
The configure script always tries to find a working SSL library unless
explicitly told not to. If you have OpenSSL installed in the default search
path for your compiler/linker, you don't need to do anything special. If
you have OpenSSL installed in e.g /usr/local/ssl, you can run configure
like:
you have OpenSSL installed in /usr/local/ssl, you can run configure like:
./configure --with-ssl
@@ -46,13 +45,13 @@ UNIX
./configure --with-ssl=/opt/OpenSSL
If you insist on forcing a build *without* SSL support, even though you may
have it installed in your system, you can run configure like this:
If you insist on forcing a build without SSL support, even though you may
have OpenSSL installed in your system, you can run configure like this:
./configure --without-ssl
If you have OpenSSL installed, but with the libraries in one place and the
header files somewhere else, you'll have to set the LDFLAGS and CPPFLAGS
header files somewhere else, you have to set the LDFLAGS and CPPFLAGS
environment variables prior to running configure. Something like this
should work:
@@ -72,7 +71,7 @@ UNIX
LIBS=-lRSAglue -lrsaref
(as suggested by Doug Kaufman)
KNOWN PROBLEMS
KNOWN PROBLEMS (these ones should not happen anymore)
If you happen to have autoconf installed, but a version older than 2.12
you will get into trouble. Then you can still build curl by issuing these
@@ -101,8 +100,8 @@ UNIX
MORE OPTIONS
Remember, to force configure to use the standard cc compiler if both
cc and gcc are present, run configure like
To force configure to use the standard cc compiler if both cc and gcc are
present, run configure like
CC=cc ./configure
or
@@ -130,11 +129,6 @@ UNIX
./configure --with-krb4=/usr/athena
If your system support shared libraries, but you want to built a static
version only, you can disable building the shared version by using:
./configure --disable-shared
If you're a curl developer and use gcc, you might want to enable more
debug options with the --enable-debug option.
@@ -177,16 +171,16 @@ Win32
Make the sources in the src/ drawer be a "win32 console application"
project. Name it curl.
With VC++, add 'wsock32.lib' to the link libs when you build curl!
Borland seems to do that itself magically. Of course you have to
make sure it links with the libcurl too!
With VC++, add 'ws2_32.lib' to the link libs when you build curl!
Borland seems to do that itself magically. Of course you have to make
sure it links with the libcurl too!
For VC++ 6, there's an included Makefile.vc6 that should be possible
to use out-of-the-box.
Microsoft note: add /Zm200 to the compiler options to increase the
compiler's memory allocation limit, as the hugehelp.c won't compile
due to "too long puts string".
Microsoft note: add /Zm200 to the compiler options to increase the
compiler's memory allocation limit, as the hugehelp.c won't compile
due to "too long puts string".
With SSL:
@@ -209,15 +203,32 @@ Win32
----------------------------
Please read the OpenSSL documentation on how to compile and install
the OpenSSL library. This generates the libeay32.dll and ssleay32.dll
files.
files in the out32dll subdirectory in the OpenSSL home directory. If
you compiled OpenSSL static libraries (libeay32.lib, ssleay32.lib,
RSAglue.lib) they are created in the out32 subdirectory.
Run the 'vcvars32.bat' file to get the proper environment variables
set. Edit the makefile.vc6 in the lib directory and define
OPENSSL_PATH. Set the location of the OpenSSL library and run 'nmake
vc-ssl' in the root directory.
set. The vcvars32.bat file is part of the Microsoft development
environment and you may find it in 'C:\Program Files\Microsoft Visual
Studio\vc98\bin' if you installed Visual C/C++ 6 in the default
directory.
The vcvars32.bat file is part of the Microsoft development
environment.
Before running nmake define the OPENSSL_PATH environment variable with
the root/base directory of OpenSSL, for example:
set OPENSSL_PATH=c:\openssl-0.9.6b
Then run 'nmake vc-ssl' or 'nmake vc-ssl-dll' in curl's root
directory. 'nmake vc-ssl' will create static and dynamic libcurl
libraries in the lib subdirectory, as well as a statically linked
version of curl.exe in the src subdirectory. This statically linked
version is a standalone executable not requiring any DLL at
runtime. This build method requires that you have built the static
libraries of OpenSSL available in OpenSSL's out32 subdirectory.
'nmake vc-ssl-dll' creates the libcurl dynamic library and
links curl.exe against libcurl and OpenSSL dynamically.
This executable requires libcurl.dll and the OpenSSL DLLs
at runtime.
Microsoft / Borland style
-------------------------
@@ -328,6 +339,20 @@ VMS
13-jul-2001
N. Baggus
QNX
===
(This section was graciously brought to us by David Bentham)
As QNX is targeted for resource-constrained environments, the QNX headers
set conservative limits. This includes the FD_SETSIZE macro, set by default
to 32. Socket descriptors returned within the CURL library may exceed this,
resulting in memory faults/SIGSEGV crashes when passed into select(..)
calls using fd_set macros.
A good all-round solution to this is to override the default when building
libcurl, by overriding CFLAGS during configure, for example:
# configure CFLAGS='-DFD_SETSIZE=64 -g -O2'
CROSS COMPILE
=============
@@ -370,7 +395,8 @@ CROSS COMPILE
PORTS
=====
This is a probably incomplete list of known hardware and operating systems
that curl has been compiled for:
that curl has been compiled for. If you know of a system that curl compiles
and runs on that isn't listed, please let us know!
- Alpha DEC OSF 4
- Alpha Digital UNIX v3.2
@@ -379,10 +405,14 @@ PORTS
- Alpha OpenVMS V7.1-1H2
- Alpha Tru64 v5.0 5.1
- HP-PA HP-UX 9.X 10.X 11.X
- HP-PA Linux
- MIPS IRIX 6.2, 6.5
- MIPS Linux
- Pocket PC/Win CE 3.0
- Power AIX 4.2, 4.3.1, 4.3.2
- PowerPC Darwin 1.0
- PowerPC Linux
- PowerPC Mac OS 9
- PowerPC Mac OS X
- SINIX-Z v5
- Sparc Linux
@@ -394,6 +424,7 @@ PORTS
- Ultrix 4.3a
- i386 BeOS
- i386 FreeBSD
- i386 HURD
- i386 Linux 1.3, 2.0, 2.2, 2.3, 2.4
- i386 NetBSD
- i386 OS/2
@@ -403,6 +434,7 @@ PORTS
- i386 Windows 95, 98, ME, NT, 2000
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k Linux
- m68k OpenBSD
- s390 Linux


@@ -54,7 +54,7 @@ Windows vs Unix
Inside the source code, We make an effort to avoid '#ifdef [Your OS]'. All
conditionals that deal with features *should* instead be in the format
'#ifdef HAVE_THAT_WEIRD_FUNCTION'. Since Windows can't run configure scripts,
we maintain two config-win32.h files (one in / and one in src/) that are
we maintain two config-win32.h files (one in lib/ and one in src/) that are
supposed to look exactly as a config.h file would have looked like on a
Windows machine!
@@ -69,10 +69,10 @@ Library
rather small and easy-to-follow. All the ones prefixed with 'curl_easy' are
put in the lib/easy.c file.
Starting with libcurl 7.8, curl_global_init_() and curl_global_cleanup() were
introduced. They should be called by the application to initialize and clean
up global stuff in the library. As of today, they just do the global SSL
initing if SSL is enabled. libcurl itself has no "global" scope.
curl_global_init_() and curl_global_cleanup() should be called by the
application to initialize and clean up global stuff in the library. As of
today, it can handle the global SSL initing if SSL is enabled and it can init
the socket layer on windows machines. libcurl itself has no "global" scope.
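As a hedged illustration of that calling pattern (a minimal sketch of a plain C
program linked against libcurl, not an excerpt from the curl sources):

    #include <curl/curl.h>

    int main(void)
    {
      /* set up the library-global state once, before any transfer; on
         SSL-enabled builds this inits SSL, on Windows the socket layer */
      curl_global_init(CURL_GLOBAL_ALL);

      /* ... create easy handles and perform transfers here ... */

      /* tear the global state down once, when no more transfers will be made */
      curl_global_cleanup();
      return 0;
    }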
All printf()-style functions use the supplied clones in lib/mprintf.c. This
makes sure we stay absolutely platform independent.

docs/KNOWN_BUGS (new file, 14 lines)

@@ -0,0 +1,14 @@
These are problems known to exist at the time of this release. Feel free to
join in and help us correct one or more of these! Also be sure to check the
changelog of the current development status, as one or more of these problems
may have been fixed since this was written!
* curl_formadd() fails on OSF1. Why? Fix! Need help from OSF1 dudes.
https://sourceforge.net/tracker/index.php?func=detail&aid=524433&group_id=976&atid=100976
* Running 'make test' on Mac OS X gives 4 errors. This seems to be related
to some kind of libtool problem:
http://curl.haxx.se/mail/archive-2002-03/0029.html and
http://curl.haxx.se/mail/archive-2002-03/0033.html
* libcurl does not deal nicely with files larger than 2GB


@@ -246,25 +246,25 @@ POST (HTTP)
-F accepts parameters like -F "name=contents". If you want the contents to
be read from a file, use <@filename> as contents. When specifying a file,
you can also specify the file content type by appending ';type=<mime type>'
to the file name. You can also post the contents of several files in one field.
For example, the field name 'coolfiles' is used to send three files, with
different content types using the following syntax:
to the file name. You can also post the contents of several files in one
field. For example, the field name 'coolfiles' is used to send three files,
with different content types using the following syntax:
curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html" \
http://www.post.com/postit.cgi
If the content-type is not specified, curl will try to guess from the file
extension (it only knows a few), or use the previously specified type
(from an earlier file if several files are specified in a list) or else it
will using the default type 'text/plain'.
extension (it only knows a few), or use the previously specified type (from
an earlier file if several files are specified in a list) or else it will
using the default type 'text/plain'.
Emulate a fill-in form with -F. Let's say you fill in three fields in a
form. One field is a file name which to post, one field is your name and one
field is a file description. We want to post the file we have written named
"cooltext.txt". To let curl do the posting of this data instead of your
favourite browser, you have to read the HTML source of the form page and find
the names of the input fields. In our example, the input field names are
'file', 'yourname' and 'filedescription'.
favourite browser, you have to read the HTML source of the form page and
find the names of the input fields. In our example, the input field names
are 'file', 'yourname' and 'filedescription'.
curl -F "file=@cooltext.txt" -F "yourname=Daniel" \
-F "filedescription=Cool text file with cool text inside" \
@@ -601,15 +601,15 @@ RESUMING FILE TRANSFERS
Continue downloading a document:
curl -c -o file ftp://ftp.server.com/path/file
curl -C - -o file ftp://ftp.server.com/path/file
Continue uploading a document(*1):
curl -c -T file ftp://ftp.server.com/path/file
curl -C - -T file ftp://ftp.server.com/path/file
Continue downloading a document from a web server(*2):
curl -c -o file http://www.server.com/
curl -C - -o file http://www.server.com/
(*1) = This requires that the ftp server supports the non-standard command
SIZE. If it doesn't, curl will say so.
@@ -668,8 +668,14 @@ LDAP
and offer ldap:// support.
LDAP is a complex thing and writing an LDAP query is not an easy task. I do
advice you to dig up the syntax description for that elsewhere, RFC 1959 if
no other place is better.
advice you to dig up the syntax description for that elsewhere. Two places
that might suit you are:
Netscape's "Netscape Directory SDK 3.0 for C Programmer's Guide Chapter 10:
Working with LDAP URLs":
http://developer.netscape.com/docs/manuals/dirsdk/csdk30/url.htm
RFC 2255, "The LDAP URL Format" http://www.rfc-editor.org/rfc/rfc2255.txt
To show you an example, this is how I can get all people from my local LDAP
server that has a certain sub-domain in their email address:


@@ -6,68 +6,24 @@ AUTOMAKE_OPTIONS = foreign no-dependencies
man_MANS = \
curl.1 \
curl-config.1 \
curl_easy_cleanup.3 \
curl_easy_getinfo.3 \
curl_easy_init.3 \
curl_easy_perform.3 \
curl_easy_setopt.3 \
curl_easy_duphandle.3 \
curl_formparse.3 \
curl_formadd.3 \
curl_formfree.3 \
curl_getdate.3 \
curl_getenv.3 \
curl_slist_append.3 \
curl_slist_free_all.3 \
curl_version.3 \
curl_escape.3 \
curl_unescape.3 \
curl_strequal.3 \
curl_strnequal.3 \
curl_mprintf.3 \
curl_global_init.3 \
curl_global_cleanup.3 \
libcurl.3
SUBDIRS = examples
curl-config.1
HTMLPAGES = \
curl.html \
curl-config.html \
curl_easy_cleanup.html \
curl_easy_getinfo.html \
curl_easy_init.html \
curl_easy_perform.html \
curl_easy_setopt.html \
curl_easy_duphandle.html \
curl_formadd.html \
curl_formparse.html \
curl_formfree.html \
curl_getdate.html \
curl_getenv.html \
curl_slist_append.html \
curl_slist_free_all.html \
curl_version.html \
curl_escape.html \
curl_unescape.html \
curl_strequal.html \
curl_strnequal.html \
curl_mprintf.html \
curl_global_init.html \
curl_global_cleanup.html \
libcurl.html
curl-config.html
EXTRA_DIST = $(man_MANS) \
MANUAL BUGS CONTRIBUTE FAQ FEATURES INTERNALS \
README.win32 RESOURCES TODO TheArtOfHttpScripting THANKS \
$(HTMLPAGES)
SUBDIRS = examples libcurl
EXTRA_DIST = MANUAL BUGS CONTRIBUTE FAQ FEATURES INTERNALS \
README.win32 RESOURCES TODO TheArtOfHttpScripting THANKS \
VERSIONS KNOWN_BUGS $(man_MANS) $(HTMLPAGES)
MAN2HTML= gnroff -man $< | man2html >$@
SUFFIXES = .1 .3 .html
html: $(HTMLPAGES)
cd libcurl; make html
.3.html:
$(MAN2HTML)


@@ -12,18 +12,11 @@ README.win32
systems. While not being the main development target, a fair share of curl users
are win32-based.
Some documentation in this archive will be tricky to read for Windows
people, as they come in unix-style man pages. You can either download a
freely available nroff binary for win32 (*pointers appriciated*), convert
the files into plain-text on your neighbor's unix machine or run over to the
curl web site and view them as plain HTML.
The unix-style man pages are tricky to read on windows, so therefore are all
those pages also converted to HTML and those are also included in the
release archives.
The main curl.1 man page is "built-in". Use a command line similar to this
in order to extract a separate text file:
The main curl.1 man page is also "built-in" in the command line tool. Use a
command line similar to this in order to extract a separate text file:
curl -M >manual.txt
Download all the libcurl man pages in HTML format using the link on the
bottom of this page:
http://curl.haxx.se/libcurl/c/


@@ -53,7 +53,7 @@ that have contributed with non-trivial parts:
- Albert Chin-A-Young <china@thewrittenword.com>
- Stephen Kick <skick@epicrealm.com>
- Martin Hedenfalk <mhe@stacken.kth.se>
- Richard Prescott
- Richard Prescott <rip at step.polymtl.ca>
- Jason S. Priebe <priebe@wral-tv.com>
- T. Bharath <TBharath@responsenetworks.com>
- Alexander Kourakos <awk@users.sourceforge.net>
@@ -75,3 +75,7 @@ that have contributed with non-trivial parts:
- Andrew Francis <locust@familyhealth.com.au>
- Tomasz Lacki <Tomasz.Lacki@primark.pl>
- Georg Huettenegger <georg@ist.org>
- John Lask <johnlask@hotmail.com>
- Eric Lavigne <erlavigne@wanadoo.fr>
- Marcus Webster <marcus.webster@phocis.com>
- Götz Babin-Ebell <babinebell@trustcenter.de>

docs/TODO (131 changed lines)

@@ -19,10 +19,7 @@ TODO
* The new 'multi' interface is being designed. Work out the details, start
implementing and write test applications!
[http://curl.haxx.se/dev/multi.h]
* Add a name resolve cache to libcurl to make repeated fetches to the same
host name (when persitancy isn't available) faster.
[http://curl.haxx.se/lxr/source/lib/multi.h]
* Introduce another callback interface for upload/download that makes one
less copy of data and thus a faster operation.
@@ -33,11 +30,36 @@ TODO
telnet, ldap, dict or file.
* Add asynchronous name resolving. http://curl.haxx.se/dev/async-resolver.txt
This should be made to work on most of the supported platforms, or
otherwise it isn't really interesting.
* Data sharing. Tell which easy handles within a multi handle should
share cookies, connection cache, dns cache, ssl session cache.
* Mutexes. By adding mutex callback support, the 'data sharing' mentioned
above can be made between several easy handles running in different threads
too. The actual mutex implementations will be left for the application to
implement, libcurl will merely call 'getmutex' and 'leavemutex' callbacks.
* No-faster-than-this transfers. Many people have limited bandwidth and they
want the ability to make sure their transfers never use more bandwidth than
they think is good.
* Set the SO_KEEPALIVE socket option to make libcurl notice and disconnect
connections that have been idle for a very long time.
* Make sure we don't ever loop because non-blocking sockets return
EWOULDBLOCK or similar. This concerns the HTTP request sending (and
especially regular HTTP POST), the FTP command sending etc.
* Go through the code and verify that libcurl deals with big files >2GB and
>4GB all over. Bug reports indicate that it doesn't currently work
properly.
DOCUMENTATION
* Document all CURLcode error codes, why they happen and what most likely
will make them not happen again.
will make them not happen again, from a libcurl point of view.
FTP
@@ -52,22 +74,24 @@ TODO
already working http dito works. It of course requires that 'MDTM' works,
and it isn't a standard FTP command.
* Suggested on the mailing list: CURLOPT_FTP_MKDIR...!
* Always use the FTP SIZE command before downloading, as that makes it more
likely that we know the size when downloading. Some sites support SIZE but
don't show the size in the RETR response!
* Make FTP PASV work with IPv6 support. RFC 2428 "FTP Extensions for IPv6 and
NATs" is interesting.
* Add FTPS support with SSL for the data connection too.
HTTP
* Make it possible to supply normal POST data through the ordinary read data
callback.
* HTTP PUT for files passed on stdin *OR* when the --crlf option is
used. Requires libcurl to send the file with chunked content
encoding. [http://curl.haxx.se/dev/HTTP-PUT-stdin.txt] When the filter
system mentioned above gets real, it'll be a piece of cake to add.
* Pass a list of host names to libcurl to which we allow the user name and
password to get sent. Currently, they only get sent to the host name that
the first URL uses (to prevent others from being able to read them), but this
also prevents the authentication info from getting sent when following
locations to legitimate other host names.
* "Content-Encoding: compress/gzip/zlib" HTTP 1.1 clearly defines how to get
and decode compressed documents. There is the zlib that is pretty good at
decompressing stuff. This work was started in October 1999 but halted again
@@ -84,23 +108,55 @@ TODO
http://www.innovation.ch/java/ntlm.html that contains detailed reverse-
engineered info.
* RFC2617 compliance, "Digest Access Authentication"
A valid test page seem to exist at:
http://hopf.math.nwu.edu/testpage/digest/
And some friendly person's server source code is available at
http://hopf.math.nwu.edu/digestauth/index.html
Then there's the Apache mod_digest source code too of course. It seems as
if Netscape doesn't support this, and not many servers do. Although this is
a lot better authentication method than the more common "Basic". Basic
sends the password in cleartext over the network, this "Digest" method uses
a challange-response protocol which increases security quite a lot.
* RFC2617 compliance, "Digest Access Authentication" A valid test page seem
to exist at: http://hopf.math.nwu.edu/testpage/digest/ And some friendly
person's server source code is available at
http://hopf.math.nwu.edu/digestauth/index.html Then there's the Apache
mod_digest source code too of course. It seems as if Netscape doesn't
support this, and not many servers do. Although this is a lot better
authentication method than the more common "Basic". Basic sends the
password in cleartext over the network, this "Digest" method uses a
challange-response protocol which increases security quite a lot.
* Pipelining. Sending multiple requests before the previous one(s) are done.
This could possibly be implemented using the multi interface to queue
requests and the response data.
TELNET
* Make TELNET work on windows98!
* Reading input (to send to the remote server) on stdin is a crappy solution
for library purposes. We need to invent a good way for the application to
be able to provide the data to send.
* Make the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface won't
work for telnet.
SSL
* If you really want to improve the SSL situation, you should probably have a
look at SSL cafile loading as well - quick traces look to me like these are
done on every request as well, when they should only be necessary once per
ssl context (or once per handle). Even better would be to support the SSL
CAdir option - instead of loading all of the root CA certs for every
request, this option allows you to only read the CA chain that is actually
required (into the cache)...
* Add an interface to libcurl that enables "session IDs" to get
exported/imported. Cris Bailiff said: "OpenSSL has functions which can
serialise the current SSL state to a buffer of your choice, and
recover/reset the state from such a buffer at a later date - this is used
by mod_ssl for apache to implement an SSL session ID cache". This whole
idea might become moot if we enable the 'data sharing' as mentioned in the
LIBCURL label above.
* OpenSSL supports a callback for customised verification of the peer
certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were! (brought by Chris
Clark)
* Make curl's SSL layer option capable of using other free SSL libraries.
Such as the Mozilla Security Services
(http://www.mozilla.org/projects/security/pki/nss/) and GNUTLS
@@ -108,7 +164,34 @@ TODO
LDAP
* Multiple URL requests don't work. [http://sourceforge.net/tracker/index.php?func=detail&aid=475407&group_id=976&atid=100976]
* Look over the implementation. The looping will have to "go away" from the
lib/ldap.c source file and get moved to the main network code so that the
multi interface and friends will work for LDAP as well.
CLIENT
* "curl ftp://site.com/*.txt"
* Several URLs can be specified to get downloaded. We should be able to use
the same syntax to specify several files to get uploaded (using the same
persistent connection), using -T.
* When the multi interface has been implemented and proved to work, the
client could be told to use maximum N simultaneous transfers and then just
make sure that happens. It should of course not make more than one
connection to the same remote host.
* Extending the capabilities of the multipart formposting. How about leaving
the ';type=foo' syntax as it is and adding an extra tag (headers) which
works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
fil1.hdr contains extra headers like
Content-Type: text/plain; charset=KOI8-R"
Content-Transfer-Encoding: base64
X-User-Comment: Please don't use browser specific HTML code
which should override the program's reasonable defaults (text/plain,
8bit...) (Idea brought to us by kromJx)
TEST SUITE

docs/VERSIONS (new file, 64 lines)

@@ -0,0 +1,64 @@
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
Version Numbers and Releases
Curl is not only curl. Curl is also libcurl. They're actually individually
versioned, but they mostly follow each other rather closely.
The version numbering is always built up using the same system:
X.Y[.Z][-preN]
Where
X is main version number
Y is release number
Z is patch number
N is pre-release number
One of these numbers will get bumped in each new release. The numbers to the
right of a bumped number will be reset to zero. If Z is zero, it is not
included in the version number. The pre release number is only included in
pre releases (they're never used in public, official, releases).
The main version number will get bumped when *really* big, world colliding
changes are made. The release number is bumped when big changes are
performed. The patch number is bumped when the changes are mere bugfixes and
only minor feature changes. The pre-release is a counter, to identify which
pre-release a certain release is.
When reaching the end of a pre-release period, the version without the
pre-release part will be released as a public release.
It means that after release 1.2.3, we can release 2.0 if something really big
has been made, 1.3 if not that big changes were made or 1.2.4 if mostly bugs
were fixed. Before 1.2.4 is released, we might release a 1.2.4-pre1 release
for the brave people to try before the actual release.
Bumping, as in increasing the number by 1, only ever affects one of the
numbers (except the ones to the right of it, which may be reset to zero).
1 becomes 2, 3 becomes 4, 9 becomes 10, 88 becomes 89 and 99
becomes 100. So, after 1.2.9 comes 1.2.10. After 3.99.3, 3.100 might come.
All original curl source release archives are named according to the libcurl
version (not according to the curl client version that, as said before, might
differ).
As a service to any application that might want to support new libcurl
features while still being able to build with older versions, all releases
have the libcurl version stored in the curl/curl.h file using a static
numbering scheme that can be used for comparison. The version number is
defined as:
#define LIBCURL_VERSION_NUM 0xXXYYZZ
Where XX, YY and ZZ are the main version, release and patch numbers in
hexadecimal. All three numbers are always represented using two digits. 1.2
would appear as "0x010200" while version 9.11.7 appears as "0x090b07".
This 6-digit hexadecimal number does not show pre-release number, and it is
always a greater number in a more recent release. It makes comparisons with
greater than and less than work.
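As an aside, a typical way to use that scheme is a compile-time check like the
following sketch (the 7.9.5 cut-off below is only an example, not something the
text above mandates):

    #include <curl/curl.h>

    /* refuse to build against anything older than an example cut-off version:
       7.9.5 encodes as 0x070905 in the XXYYZZ hexadecimal scheme */
    #if LIBCURL_VERSION_NUM < 0x070905
    #error "this code wants libcurl 7.9.5 or later"
    #endif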


@@ -2,7 +2,7 @@
.\" nroff -man curl-config.1
.\" Written by Daniel Stenberg
.\"
.TH curl-config 1 "16 August 2001" "Curl 7.8.1" "curl-config manual"
.TH curl-config 1 "21 January 2002" "Curl 7.9.3" "curl-config manual"
.SH NAME
curl-config \- Get information about a libcurl installation
.SH SYNOPSIS
@@ -11,6 +11,8 @@ curl-config \- Get information about a libcurl installation
.B curl-config
displays information about a previous curl and libcurl installation.
.SH OPTIONS
.IP "--cc"
Displays the compiler used to build libcurl.
.IP "--cflags"
Set of compiler options (CFLAGS) to use when compiling files that use
libcurl. Currently that is only the include path to the curl include files.
@@ -38,18 +40,23 @@ major, minor, patch. So that libcurl 7.7.4 would appear as 070704 and libcurl
.SH "EXAMPLES"
What linker options do I need when I link with libcurl?
curl-config --libs
$ curl-config --libs
What compiler options do I need when I compile using libcurl functions?
curl-config --cflags
$ curl-config --cflags
How do I know if libcurl was built with SSL support?
curl-config --feature | grep SSL
$ curl-config --feature | grep SSL
What's the installed libcurl version?
curl-config --version
$ curl-config --version
How do I build a single file with a one-line command?
$ `curl-config --cc --cflags --libs` -o example example.c
.SH "SEE ALSO"
.BR curl (1)


@@ -2,10 +2,9 @@
.\" nroff -man curl.1
.\" Written by Daniel Stenberg
.\"
.TH curl 1 "30 Oct 2001" "Curl 7.9.1" "Curl Manual"
.TH curl 1 "25 Feb 2002" "Curl 7.9.5" "Curl Manual"
.SH NAME
curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or
HTTPS syntax.
curl \- transfer a URL
.SH SYNOPSIS
.B curl [options]
.I [URL...]
@@ -117,13 +116,13 @@ be written to stdout. (Option added in curl 7.9)
If this option is used several times, the last specified file name will be
used.
.IP "-C/--continue-at <offset>"
Continue/Resume a previous file transfer at the given offset. The
given offset is the exact number of bytes that will be skipped
counted from the beginning of the source file before it is transfered
to the destination.
If used with uploads, the ftp server command SIZE will not be used by
curl. Upload resume is for FTP only.
HTTP resume is only possible with HTTP/1.1 or later servers.
Continue/Resume a previous file transfer at the given offset. The given offset
is the exact number of bytes that will be skipped counted from the beginning
of the source file before it is transfered to the destination. If used with
uploads, the ftp server command SIZE will not be used by curl.
Use "-C -" to tell curl to automatically find out where/how to resume the
transfer. It then uses the given output/input files to figure that out.
If this option is used several times, the last one will be used.
.IP "-d/--data <data>"
@@ -161,10 +160,14 @@ using this option the entire context of the posted data is kept as-is. If you
want to post a binary file without the strip-newlines feature of the
--data-ascii option, this is for you.
If this option is used several times, the last one will be used.
If this option is used several times, the ones following the first will
append data.
.IP "--disable-epsv"
(FTP) Tell curl to disable the use of the EPSV command when doing passive FTP
downloads. Curl will normally always first attempt to use EPSV before PASV,
but with this option, it will not try using EPSV.
If this option is used several times, each occurrence will toggle this on/off.
.IP "-D/--dump-header <file>"
(HTTP/FTP)
Write the HTTP headers to this file. Write the FTP file info to this
@@ -507,7 +510,7 @@ password is specified, curl will ask for it interactively.
If this option is used several times, the last one will be used.
.IP "--url <URL>"
Specify a URL to fetch. This option is mostly handy when you wanna specify
Specify a URL to fetch. This option is mostly handy when you want to specify
URL(s) in a config file.
This option may be used any number of times. To control where this URL is written, use the
@@ -535,7 +538,7 @@ write "@-".
The variables present in the output format will be substituted by the value or
text that curl thinks fit, as described below. All variables are specified
like %{variable_name} and to output a normal % you just write them like
%%. You can output a newline by using \\n, a carrige return with \\r and a tab
%%. You can output a newline by using \\n, a carriage return with \\r and a tab
space with \\t.
.B NOTE:
@@ -569,6 +572,11 @@ The time, in seconds, it took from the start until the file transfer is just
about to begin. This includes all pre-transfer commands and negotiations that
are specific to the particular protocol(s) involved.
.TP
.B time_starttransfer
The time, in seconds, it took from the start until the first byte is just about
to be transferred. This includes time_pretransfer and also the time the
server needs to calculate the result.
.TP
.B size_download
The total amount of bytes that were downloaded.
.TP
@@ -586,6 +594,9 @@ The average download speed that curl measured for the complete download.
.TP
.B speed_upload
The average upload speed that curl measured for the complete upload.
.TP
.B content_type
The Content-Type of the requested document, if there was any. (Added in 7.9.5)
.RE
If this option is used several times, the last one will be used.
@@ -668,7 +679,7 @@ If this option is used several times, the last one will be used.
Default config file.
.SH ENVIRONMENT
.IP "HTTP_PROXY [protocol://]<host>[:port]"
.IP "http_proxy [protocol://]<host>[:port]"
Sets proxy server to use for HTTP.
.IP "HTTPS_PROXY [protocol://]<host>[:port]"
Sets proxy server to use for HTTPS.
@@ -679,11 +690,8 @@ Sets proxy server to use for GOPHER.
.IP "ALL_PROXY [protocol://]<host>[:port]"
Sets proxy server to use if no protocol-specific proxy is set.
.IP "NO_PROXY <comma-separated list of hosts>"
list of host names that shouldn't go through any proxy. If set to a
asterisk '*' only, it matches all hosts.
.IP "COLUMNS <integer>"
The width of the terminal. This variable only affects curl when the
--progress-bar option is used.
list of host names that shouldn't go through any proxy. If set to a asterisk
'*' only, it matches all hosts.
.SH EXIT CODES
There exists a bunch of different error codes and their corresponding error
messages that may appear during bad conditions. At the time of this writing,
@@ -783,13 +791,17 @@ Internal error. A function was called in a bad order.
.IP 45
Interface error. A specified outgoing interface could not be used.
.IP 46
Bad password entered. An error was signalled when the password was entered.
Bad password entered. An error was signaled when the password was entered.
.IP 47
Too many redirects. When following redirects, curl hit the maximum amount.
.IP 48
Unknown TELNET option specified.
.IP 49
Malformed telnet option.
.IP 51
The remote peer's SSL certificate wasn't ok
.IP 52
The server didn't reply anything, which here is considered an error.
.IP XX
There will appear more error codes here in future releases. The existing ones
are meant to never change.


@@ -1,29 +0,0 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_cleanup 3 "5 March 2001" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_cleanup - End a libcurl session
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "void curl_easy_cleanup(CURL *" handle ");"
.ad
.SH DESCRIPTION
This function must be the last function to call for a curl session. It is the
opposite of the
.I curl_easy_init
function and must be called with the same
.I handle
as input as the curl_easy_init call returned.
This will effectively close all connections libcurl has been used and possibly
has kept open until now. Don't call this function if you intend to transfer
more files (libcurl 7.7 or later).
.SH RETURN VALUE
None
.SH "SEE ALSO"
.BR curl_easy_init "(3), "
.SH BUGS
Surely there are some, you tell me!


@@ -1,34 +0,0 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_init 3 "14 August 2001" "libcurl 7.8.1" "libcurl Manual"
.SH NAME
curl_easy_init - Start a libcurl session
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURL *curl_easy_init( );"
.ad
.SH DESCRIPTION
This function must be the first function to call, and it returns a CURL handle
that you shall use as input to the other easy-functions. The init call
initializes curl and this call MUST have a corresponding call to
.I curl_easy_cleanup
when the operation is complete.
On win32 systems, if you want to init the winsock stuff manually, libcurl will
not do that for you. WSAStartup() and WSACleanup() should then be called
accordingly. If you want libcurl to handle this, use the CURL_GLOBAL_WIN32
flag in the initial curl_global_init() call.
Using libcurl 7.7 and later, you should perform all your sequential file
transfers using the same curl handle. This enables libcurl to use persistent
connections where possible.
.SH RETURN VALUE
If this function returns NULL, something went wrong and you cannot use the
other curl functions.
.SH "SEE ALSO"
.BR curl_easy_cleanup "(3), " curl_global_init "(3)
.SH BUGS
Surely there are some, you tell me!


@@ -1,87 +0,0 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_formparse 3 "21 May 2001" "libcurl 7.7.4" "libcurl Manual"
.SH NAME
curl_formparse - add a section to a multipart/formdata HTTP POST:
deprecated (use curl_formadd instead)
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURLcode curl_formparse(char * " string, " struct HttpPost ** " firstitem,
.BI "struct HttpPost ** " lastitem ");"
.ad
.SH DESCRIPTION
curl_formparse() is used to append sections when building a multipart/formdata
HTTP POST (sometimes referred to as RFC 1867-style posts). Append one section
at a time until you've added all the sections you want included, and then pass
the \fIfirstitem\fP pointer as parameter to \fBCURLOPT_HTTPPOST\fP.
\fIlastitem\fP is set after each call and should be left as set between calls,
to allow repeated invocations to find the end of the list faster.
\fIstring\fP must be a zero-terminated string abiding by the syntax
described in a section below.
The pointers \fI*firstitem\fP and \fI*lastitem\fP should both be pointing to
NULL in the first call to this function. All list-data will be allocated by
the function itself. You must call \fIcurl_formfree\fP after the form post has
been done to free the resources again.
This function will copy all input data and keep its own version of it
allocated until you call \fIcurl_formfree\fP. When you've passed the pointer
to \fIcurl_easy_setopt\fP, you must not free the list until after you've
called \fIcurl_easy_cleanup\fP for the curl handle.
See example below.
.SH "FORM PARSE STRINGS"
The
.I string
parameter must be using one of the following patterns. Note that the []
letters should not be included in the real-life string.
.TP 0.8i
.B [name]=[contents]
Add a form field named 'name' with the contents 'contents'. This is the
typical contents of the HTML tag <input type=text>.
.TP
.B [name]=@[filename]
Add a form field named 'name' with the contents as read from the local file
named 'filename'. This is the typical contents of the HTML tag <input
type=file>.
.TP
.B [name]=@[filename1,filename2,...]
Add a form field named 'name' with the contents as read from the local files
named 'filename1' and 'filename2'. This is identical to the above, except that
you get the contents of several files in one section.
.TP
.B [name]=@[filename];[type=<content-type>]
Whenever you specify a file to read from, you can optionally specify the
content-type as well. The content-type is passed to the server together with
the contents of the file. curl_formparse() will guess content-type for a
number of well-known extensions and otherwise it will set it to binary. You
can override the internal decision by using this option.
.TP
.B [name]=@[filename1,filename2,...];[type=<content-type>]
When you specify several files to read the contents from, you can set the
content-type for all of them in the same way as with a single file.
.PP
.SH RETURN VALUE
Returns non-zero if an error occurs.
.SH EXAMPLE
HttpPost* post = NULL;
HttpPost* last = NULL;
/* Add an image section */
curl_formparse("picture=@my-face.jpg", &post, &last);
/* Add a normal text section */
curl_formparse("name=FooBar", &post, &last);
/* Set the form info */
curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
.SH "SEE ALSO"
.BR curl_easy_setopt "(3), "
.BR curl_formadd "(3), "
.BR curl_formfree "(3)
.SH BUGS
Surely there are some, you tell me!


@@ -4,9 +4,10 @@
AUTOMAKE_OPTIONS = foreign no-dependencies
EXTRA_DIST = README curlgtk.c sepheaders.c simple.c postit.c postit2.c \
EXTRA_DIST = README curlgtk.c sepheaders.c simple.c postit2.c \
win32sockets.c persistant.c ftpget.c Makefile.example \
multithread.c getinmemory.c ftpupload.c httpput.c
multithread.c getinmemory.c ftpupload.c httpput.c \
simplessl.c ftpgetresp.c http-post.c
all:
@echo "done"


@@ -10,6 +10,10 @@ them for submission in future packages and on the web site.
The Makefile.example is an example makefile that could be used to build these
examples. Just edit the file according to your system and requirements first.
Most examples should build fine using a command line like this:
$ gcc `curl-config --cflags` `curl-config --libs` -o example example.c
Try the php/examples/ directory for PHP programming snippets!
*PLEASE* do not use the curl.haxx.se site as a test target for your libcurl


@@ -14,31 +14,70 @@
#include <curl/types.h>
#include <curl/easy.h>
/* to make this work under windows, use the win32-functions from the
win32socket.c file as well */
/*
* This is an example showing how to get a single file from an FTP server.
* It delays the actual destination file creation until the first write
* callback so that it won't create an empty file in case the remote file
* doesn't exist or something else fails.
*/
int main(int argc, char **argv)
struct FtpFile {
char *filename;
FILE *stream;
};
int my_fwrite(void *buffer, size_t size, size_t nmemb, void *stream)
{
struct FtpFile *out=(struct FtpFile *)stream;
if(out && !out->stream) {
/* open file for writing */
out->stream=fopen(out->filename, "wb");
if(!out->stream)
return -1; /* failure, can't open file to write */
}
return fwrite(buffer, size, nmemb, out->stream);
}
int main(void)
{
CURL *curl;
CURLcode res;
FILE *ftpfile;
/* local file name to store the file as */
ftpfile = fopen("curl.tar.gz", "wb"); /* b is binary for win */
struct FtpFile ftpfile={
"curl.tar.gz", /* name to store the file as if succesful */
NULL
};
curl_global_init(CURL_GLOBAL_DEFAULT);
curl = curl_easy_init();
if(curl) {
/* Get curl 7.7 from sunet.se's FTP site: */
/* Get curl 7.9.2 from sunet.se's FTP site: */
curl_easy_setopt(curl, CURLOPT_URL,
"ftp://ftp.sunet.se/pub/www/utilities/curl/curl-7.7.tar.gz");
curl_easy_setopt(curl, CURLOPT_FILE, ftpfile);
"ftp://ftp.sunet.se/pub/www/utilities/curl/curl-7.9.2.tar.gz");
/* Define our callback to get called when there's data to be written */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
/* Set a pointer to our struct to pass to the callback */
curl_easy_setopt(curl, CURLOPT_FILE, &ftpfile);
/* Switch on full protocol/debug output */
curl_easy_setopt(curl, CURLOPT_VERBOSE, TRUE);
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
if(CURLE_OK != res) {
/* we failed */
fprintf(stderr, "curl told us %d\n", res);
}
}
fclose(ftpfile); /* close the local file */
if(ftpfile.stream)
fclose(ftpfile.stream); /* close the local file */
curl_global_cleanup();
return 0;
}


@@ -0,0 +1,61 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
/*
* Similar to ftpget.c but this also stores the received response-lines
* in a separate file using our own callback!
*
* This functionality was introduced in libcurl 7.9.3.
*/
size_t
write_response(void *ptr, size_t size, size_t nmemb, void *data)
{
FILE *writehere = (FILE *)data;
return fwrite(ptr, size, nmemb, writehere);
}
int main(int argc, char **argv)
{
CURL *curl;
CURLcode res;
FILE *ftpfile;
FILE *respfile;
/* local file name to store the file as */
ftpfile = fopen("ftp-list", "wb"); /* b is binary, needed on win32 */
/* local file name to store the FTP server's response lines in */
respfile = fopen("ftp-responses", "wb"); /* b is binary, needed on win32 */
curl = curl_easy_init();
if(curl) {
/* Get a file listing from sunet */
curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.sunet.se/");
curl_easy_setopt(curl, CURLOPT_FILE, ftpfile);
curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, write_response);
curl_easy_setopt(curl, CURLOPT_WRITEHEADER, respfile);
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
}
fclose(ftpfile); /* close the local file */
fclose(respfile); /* close the response file */
return 0;
}

docs/examples/http-post.c Normal file

@@ -0,0 +1,35 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
#include <stdio.h>
#include <curl/curl.h>
int main(void)
{
CURL *curl;
CURLcode res;
curl = curl_easy_init();
if(curl) {
/* First set the URL that is about to receive our POST. This URL can
just as well be a https:// URL if that is what should receive the
data. */
curl_easy_setopt(curl, CURLOPT_URL, "http://postit.example.com/moo.cgi");
/* Now specify the POST data */
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel&project=curl");
/* Perform the request, res will get the return code */
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
}
return 0;
}

docs/examples/multi-app.c Normal file

@@ -0,0 +1,92 @@
/*
* This is an example application source code using the multi interface.
*/
#include <stdio.h>
#include <string.h>
/* somewhat unix-specific */
#include <sys/time.h>
#include <unistd.h>
/* To start with, we include the header from the lib directory. This should
later of course be moved to the proper include dir. */
#include "../lib/multi.h"
/*
* Download a HTTP file and upload an FTP file simultaneously.
*/
int main(int argc, char **argv)
{
CURL *http_handle;
CURL *ftp_handle;
CURLM *multi_handle;
int still_running; /* keep number of running handles */
http_handle = curl_easy_init();
ftp_handle = curl_easy_init();
/* set the options (I left out a few, you'll get the point anyway) */
curl_easy_setopt(http_handle, CURLOPT_URL, "http://website.com");
curl_easy_setopt(ftp_handle, CURLOPT_URL, "ftp://ftpsite.com");
curl_easy_setopt(ftp_handle, CURLOPT_UPLOAD, TRUE);
/* init a multi stack */
multi_handle = curl_multi_init();
/* add the individual transfers */
curl_multi_add_handle(multi_handle, http_handle);
curl_multi_add_handle(multi_handle, ftp_handle);
/* we start some action by calling perform right away */
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
while(still_running) {
struct timeval timeout;
int rc; /* select() return code */
fd_set fdread;
fd_set fdwrite;
fd_set fdexcep;
int maxfd;
FD_ZERO(&fdread);
FD_ZERO(&fdwrite);
FD_ZERO(&fdexcep);
/* set a suitable timeout to play around with */
timeout.tv_sec = 1;
timeout.tv_usec = 0;
/* get file descriptors from the transfers */
curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);
rc = select(maxfd+1, &fdread, &fdwrite, &fdexcep, &timeout);
switch(rc) {
case -1:
/* select error */
break;
case 0:
/* timeout, do something else */
break;
default:
/* one or more of curl's file descriptors say there's data to read
or write */
curl_multi_perform(multi_handle, &still_running);
break;
}
}
curl_multi_cleanup(multi_handle);
curl_easy_cleanup(http_handle);
curl_easy_cleanup(ftp_handle);
return 0;
}


@@ -0,0 +1,87 @@
/*
* This is a simple example using the multi interface.
*/
#include <stdio.h>
#include <string.h>
/* somewhat unix-specific */
#include <sys/time.h>
#include <unistd.h>
/* To start with, we include the header from the lib directory. This should
later of course be moved to the proper include dir. */
#include "../lib/multi.h"
/*
* Simply download two HTTP files!
*/
int main(int argc, char **argv)
{
CURL *http_handle;
CURL *http_handle2;
CURLM *multi_handle;
int still_running; /* keep number of running handles */
http_handle = curl_easy_init();
http_handle2 = curl_easy_init();
/* set options */
curl_easy_setopt(http_handle, CURLOPT_URL, "http://www.haxx.se/");
/* set options */
curl_easy_setopt(http_handle2, CURLOPT_URL, "http://localhost/");
/* init a multi stack */
multi_handle = curl_multi_init();
/* add the individual transfers */
curl_multi_add_handle(multi_handle, http_handle);
curl_multi_add_handle(multi_handle, http_handle2);
/* we start some action by calling perform right away */
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
while(still_running) {
struct timeval timeout;
int rc; /* select() return code */
fd_set fdread;
fd_set fdwrite;
fd_set fdexcep;
int maxfd;
FD_ZERO(&fdread);
FD_ZERO(&fdwrite);
FD_ZERO(&fdexcep);
/* set a suitable timeout to play around with */
timeout.tv_sec = 1;
timeout.tv_usec = 0;
/* get file descriptors from the transfers */
curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);
rc = select(maxfd+1, &fdread, &fdwrite, &fdexcep, &timeout);
switch(rc) {
case -1:
/* select error */
break;
case 0:
default:
/* timeout or readable/writable sockets */
curl_multi_perform(multi_handle, &still_running);
break;
}
}
curl_multi_cleanup(multi_handle);
curl_easy_cleanup(http_handle);
curl_easy_cleanup(http_handle2);
return 0;
}


@@ -0,0 +1,80 @@
/*
* This is a very simple example using the multi interface.
*/
#include <stdio.h>
#include <string.h>
/* somewhat unix-specific */
#include <sys/time.h>
#include <unistd.h>
/* To start with, we include the header from the lib directory. This should
later of course be moved to the proper include dir. */
#include "../lib/multi.h"
/*
* Simply download a HTTP file.
*/
int main(int argc, char **argv)
{
CURL *http_handle;
CURLM *multi_handle;
int still_running; /* keep number of running handles */
http_handle = curl_easy_init();
/* set the options (I left out a few, you'll get the point anyway) */
curl_easy_setopt(http_handle, CURLOPT_URL, "http://www.haxx.se/");
/* init a multi stack */
multi_handle = curl_multi_init();
/* add the individual transfers */
curl_multi_add_handle(multi_handle, http_handle);
/* we start some action by calling perform right away */
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
while(still_running) {
struct timeval timeout;
int rc; /* select() return code */
fd_set fdread;
fd_set fdwrite;
fd_set fdexcep;
int maxfd;
FD_ZERO(&fdread);
FD_ZERO(&fdwrite);
FD_ZERO(&fdexcep);
/* set a suitable timeout to play around with */
timeout.tv_sec = 1;
timeout.tv_usec = 0;
/* get file descriptors from the transfers */
curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);
rc = select(maxfd+1, &fdread, &fdwrite, &fdexcep, &timeout);
switch(rc) {
case -1:
/* select error */
break;
case 0:
default:
/* timeout or readable/writable sockets */
curl_multi_perform(multi_handle, &still_running);
break;
}
}
curl_multi_cleanup(multi_handle);
curl_easy_cleanup(http_handle);
return 0;
}


@@ -1,71 +0,0 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* Example code that uploads a file name 'foo' to a remote script that accepts
* "HTML form based" (as described in RFC1738) uploads using HTTP POST.
*
* The imaginary form we'll fill in looks like:
*
* <form method="post" enctype="multipart/form-data" action="examplepost.cgi">
* Enter file: <input type="file" name="sendfile" size="40">
* Enter file name: <input type="text" name="filename" size="30">
* <input type="submit" value="send" name="submit">
* </form>
*
* This exact source code has not been verified to work.
*/
/* to make this work under windows, use the win32-functions from the
win32socket.c file as well */
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
int main(int argc, char **argv)
{
CURL *curl;
CURLcode res;
struct HttpPost *formpost=NULL;
struct HttpPost *lastptr=NULL;
/* Fill in the file upload field */
curl_formparse("sendfile=@foo",
&formpost,
&lastptr);
/* Fill in the filename field */
curl_formparse("filename=foo",
&formpost,
&lastptr);
/* Fill in the submit field too, even if this is rarely needed */
curl_formparse("submit=send",
&formpost,
&lastptr);
curl = curl_easy_init();
if(curl) {
/* what URL that receives this POST */
curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/examplepost.cgi");
curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost);
res = curl_easy_perform(curl);
/* always cleanup */
curl_easy_cleanup(curl);
/* then cleanup the formpost chain */
curl_formfree(formpost);
}
return 0;
}


@@ -9,27 +9,16 @@
*/
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
/* to make this work under windows, use the win32-functions from the
win32socket.c file as well */
int main(int argc, char **argv)
int main(void)
{
CURL *curl;
CURLcode res;
FILE *headerfile;
headerfile = fopen("dumpit", "w");
curl = curl_easy_init();
if(curl) {
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_URL, "curl.haxx.se");
curl_easy_setopt(curl, CURLOPT_WRITEHEADER, headerfile);
res = curl_easy_perform(curl);
/* always cleanup */

docs/examples/simplessl.c Normal file

@@ -0,0 +1,118 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*/
#include <stdio.h>
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
/* some requirements for this to work:
1. set pCertFile to the file with the client certificate
2. if the key is passphrase protected, set pPassphrase to the
passphrase you use
3. if you are using a crypto engine:
3.1. set a #define USE_ENGINE
3.2. set pEngine to the name of the crypto engine you use
3.3. set pKeyName to the key identifier you want to use
4. if you don't use a crypto engine:
4.1. set pKeyName to the file name of your client key
4.2. if the format of the key file is DER, set pKeyType to "DER"
!! verification of the server certificate is not implemented here !!
**** This example only works with libcurl 7.9.3 and later! ****
*/
int main(int argc, char **argv)
{
CURL *curl;
CURLcode res;
FILE *headerfile;
const char *pCertFile = "testcert.pem";
const char *pCACertFile="cacert.pem";
const char *pKeyName;
const char *pKeyType;
const char *pEngine;
#ifdef USE_ENGINE
pKeyName = "rsa_test";
pKeyType = "ENG";
pEngine = "chil"; /* for nChiper HSM... */
#else
pKeyName = "testkey.pem";
pKeyType = "PEM";
pEngine = NULL;
#endif
const char *pPassphrase = NULL;
headerfile = fopen("dumpit", "w");
curl_global_init(CURL_GLOBAL_DEFAULT);
curl = curl_easy_init();
if(curl) {
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_URL, "HTTPS://curl.haxx.se");
curl_easy_setopt(curl, CURLOPT_WRITEHEADER, headerfile);
while(1) /* do some ugly short cut... */
{
if (pEngine) /* use crypto engine */
{
if (curl_easy_setopt(curl, CURLOPT_SSLENGINE,pEngine) != CURLE_OK)
{ /* load the crypto engine */
fprintf(stderr,"can't set crypto engine\n");
break;
}
if (curl_easy_setopt(curl, CURLOPT_SSLENGINE_DEFAULT,1) != CURLE_OK)
{ /* set the crypto engine as default */
/* only needed for the first time you load
a engine in a curl object... */
fprintf(stderr,"can't set crypto engine as default\n");
break;
}
}
/* cert is stored PEM coded in file... */
/* since PEM is default, we needn't set it for PEM */
curl_easy_setopt(curl,CURLOPT_SSLCERTTYPE,"PEM");
/* set the cert for client authentication */
curl_easy_setopt(curl,CURLOPT_SSLCERT,pCertFile);
/* sorry, for engine we must set the passphrase
(if the key has one...) */
if (pPassphrase)
curl_easy_setopt(curl,CURLOPT_SSLKEYPASSWD,pPassphrase);
/* if we use a key stored in a crypto engine,
we must set the key type to "ENG" */
curl_easy_setopt(curl,CURLOPT_SSLKEYTYPE,pKeyType);
/* set the private key (file or ID in engine) */
curl_easy_setopt(curl,CURLOPT_SSLKEY,pKeyName);
/* set the file with the certs validating the server */
curl_easy_setopt(curl,CURLOPT_CAINFO,pCACertFile);
/* disconnect if we can't validate server's cert */
curl_easy_setopt(curl,CURLOPT_SSL_VERIFYPEER,1);
res = curl_easy_perform(curl);
break; /* we are done... */
}
/* always cleanup */
curl_easy_cleanup(curl);
}
curl_global_cleanup();
return 0;
}

docs/libcurl-the-guide Normal file

@@ -0,0 +1,911 @@
$Id$
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
PROGRAMMING WITH LIBCURL
About this Document
This document will attempt to describe the general principles and some basic
approaches to consider when programming with libcurl. The text will focus
mainly on the C interface, but it might apply fairly well to other interfaces
as well, as they usually follow the C one pretty closely.
This document will refer to 'the user' as the person writing the source code
that uses libcurl. That would probably be you or someone in your position.
What will generally be referred to as 'the program' is the collected
source code that you write that is using libcurl for transfers. The program
is outside libcurl and libcurl is outside of the program.
To get more details on all options and functions described herein, please
refer to their respective man pages.
Building
There are many different ways to build C programs. This chapter will assume a
unix-style build process. If you use a different build system, you can still
read this to get general information that may apply to your environment as
well.
Compiling the Program
Your compiler needs to know where the libcurl headers are
located. Therefore you must set your compiler's include path to point to
the directory where you installed them. The 'curl-config'[3] tool can be
used to get this information:
$ curl-config --cflags
Linking the Program with libcurl
When you have compiled the program, you need to link your object files to
create a single executable. For that to succeed, you need to link with
libcurl and possibly also with other libraries that libcurl itself depends
on, such as the OpenSSL libraries; even some standard OS libraries may be
needed on the command line. To figure out which flags to use, once again
the 'curl-config' tool comes to the rescue:
$ curl-config --libs
SSL or Not
libcurl can be built and customized in many ways. One of the things that
varies between different libraries and builds is the support for SSL-based
transfers, like HTTPS and FTPS. If OpenSSL was detected properly at
build-time, libcurl will be built with SSL support. To figure out if an
installed libcurl has been built with SSL support enabled, use
'curl-config' like this:
$ curl-config --feature
And if SSL is supported, the keyword 'SSL' will be written to stdout,
possibly together with a few other features that can be on and off on
different libcurls.
Portable Code in a Portable World
The people behind libcurl have put considerable effort into making libcurl
work on a large number of different operating systems and environments.
You program libcurl the same way on all platforms that libcurl runs on. There
are only a few minor considerations that differ. If you just make sure to
write your code portable enough, you may very well create yourself a very
portable program. libcurl shouldn't stop you from that.
Global Preparation
The program must initialize some of the libcurl functionality globally. That
means it should be done exactly once, no matter how many times you intend to
use the library. Once for your program's entire life time. This is done using
curl_global_init()
and it takes one parameter which is a bit pattern that tells libcurl what to
initialize. Using CURL_GLOBAL_ALL will make it initialize all known internal
sub modules, and might be a good default option. The two bits that are
currently specified are:
CURL_GLOBAL_WIN32, which only does anything on Windows machines. When used on
a Windows machine, it'll make libcurl initialize the win32 socket
stuff. Without having that initialized properly, your program cannot use
sockets properly. You should only do this once for each application, so if
your program already does this or if another library in use does it, you
should not tell libcurl to do this as well.
CURL_GLOBAL_SSL which only does anything on libcurls compiled and built
SSL-enabled. On these systems, this will make libcurl init OpenSSL properly
for this application. This only needs to be done once for each application, so
if your program or another library already does this, this bit should not be
needed.
libcurl has a default protection mechanism that detects if curl_global_init()
hasn't been called by the time curl_easy_perform() is called and if that is
the case, libcurl runs the function itself with a guessed bit pattern. Please
note that depending solely on this is not considered nice nor very good.
When the program no longer uses libcurl, it should call
curl_global_cleanup(), which is the opposite of the init call. It will then
do the reversed operations to cleanup the resources the curl_global_init()
call initialized.
Repeated calls to curl_global_init() and curl_global_cleanup() should be
avoided. They should only be called once each.
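As a minimal sketch of this lifecycle (assuming nothing beyond a standard
libcurl installation), the global calls simply bracket everything else the
program does with the library:

#include <curl/curl.h>

int main(void)
{
  /* initialize libcurl's global state exactly once */
  curl_global_init(CURL_GLOBAL_ALL);

  /* ... create easy handles and perform transfers here ... */

  /* undo the global initialization when the program is done with libcurl */
  curl_global_cleanup();
  return 0;
}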
Handle the Easy libcurl
libcurl version 7 is oriented around the so-called easy interface. All
operations in the easy interface are prefixed with 'curl_easy'.
Future libcurls will also offer the multi interface. More about that
interface, what it is targeted for and how to use it is still only debated on
the libcurl mailing list and developer web pages. Join up to discuss and
figure out!
To use the easy interface, you must first create yourself an easy handle. You
need one handle for each easy session you want to perform. Basically, you
should use one handle for every thread you plan to use for transferring. You
must never share the same handle in multiple threads.
Get an easy handle with
easyhandle = curl_easy_init();
It returns an easy handle. Using that you proceed to the next step: setting
up your preferred actions. A handle is just a logical entity for the upcoming
transfer or series of transfers.
You set properties and options for this handle using curl_easy_setopt(). They
control how the subsequent transfer or transfers will be made. Options remain
set in the handle until set again to something different. Thus, multiple
requests using the same handle will use the same options.
Many of the options you set in libcurl are "strings", pointers to data
terminated with a zero byte. Keep in mind that when you set strings with
curl_easy_setopt(), libcurl will not copy the data. It will merely point to
the data. You MUST make sure that the data remains available for libcurl to
use until finished or until you use the same option again to point to
something else.
One of the most basic properties to set in the handle is the URL. You set
your preferred URL to transfer with CURLOPT_URL in a manner similar to:
curl_easy_setopt(easyhandle, CURLOPT_URL, "http://curl.haxx.se/");
Let's assume for a while that you want to receive data, as the URL identifies
a remote resource you want to get here. Since you write a sort of application
that needs this transfer, I assume that you would like to get the data passed
to you directly instead of simply getting it passed to stdout. So, you write
your own function that matches this prototype:
size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp);
You tell libcurl to pass all data to this function by issuing a function
similar to this:
curl_easy_setopt(easyhandle, CURLOPT_WRITEFUNCTION, write_data);
You can control what data your function gets in the fourth argument by setting
another property:
curl_easy_setopt(easyhandle, CURLOPT_FILE, &internal_struct);
Using that property, you can easily pass local data between your application
and the function that gets invoked by libcurl. libcurl itself won't touch the
data you pass with CURLOPT_FILE.
libcurl offers its own default internal callback that'll take care of the
data if you don't set the callback with CURLOPT_WRITEFUNCTION. It will then
simply output the received data to stdout. You can have the default callback
write the data to a different file handle by passing a 'FILE *' to a file
opened for writing with the CURLOPT_FILE option.
Now, we need to take a step back and have a deep breath. Here's one of those
rare platform-dependent nitpicks. Did you spot it? On some platforms[2],
libcurl won't be able to operate on files opened by the program. Thus, if you
use the default callback and pass in an open file with CURLOPT_FILE, it
will crash. You should therefore avoid this to make your program run fine
virtually everywhere.
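To make this concrete, here is a minimal sketch of a write callback paired
with CURLOPT_FILE. The URL and output file name are just placeholders, and
since the fwrite() happens in the program's own code, it also sidesteps the
FILE* portability issue just described:

#include <stdio.h>
#include <curl/curl.h>

/* write callback: 'userp' is whatever pointer we set with CURLOPT_FILE */
size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp)
{
  return fwrite(buffer, size, nmemb, (FILE *)userp);
}

int main(void)
{
  CURL *easyhandle;
  FILE *out = fopen("saved-page.html", "wb"); /* placeholder file name */

  curl_global_init(CURL_GLOBAL_ALL);
  easyhandle = curl_easy_init();
  if(easyhandle && out) {
    curl_easy_setopt(easyhandle, CURLOPT_URL, "http://curl.haxx.se/");
    curl_easy_setopt(easyhandle, CURLOPT_WRITEFUNCTION, write_data);
    curl_easy_setopt(easyhandle, CURLOPT_FILE, out); /* becomes 'userp' above */
    curl_easy_perform(easyhandle);
    curl_easy_cleanup(easyhandle);
  }
  if(out)
    fclose(out);
  curl_global_cleanup();
  return 0;
}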
There are of course many more options you can set, and we'll get back to a
few of them later. Let's instead continue to the actual transfer:
success = curl_easy_perform(easyhandle);
The curl_easy_perform() will connect to the remote site, do the necessary
commands and receive the transfer. Whenever it receives data, it calls the
callback function we previously set. The function may get one byte at a time,
or it may get many kilobytes at once. libcurl delivers as much as possible as
often as possible. Your callback function should return the number of bytes
it "took care of". If that is not the exact same amount of bytes that was
passed to it, libcurl will abort the operation and return with an error code.
When the transfer is complete, the function returns a return code that
informs you if it succeeded in its mission or not. If a return code isn't
enough for you, you can use the CURLOPT_ERRORBUFFER to point libcurl to a
buffer of yours where it'll store a human readable error message as well.
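A small sketch of how such an error buffer can be wired up (the buffer must
stay alive for as long as the handle may write to it):

char errorbuffer[CURL_ERROR_SIZE]; /* CURL_ERROR_SIZE is defined in curl.h */

curl_easy_setopt(easyhandle, CURLOPT_ERRORBUFFER, errorbuffer);
success = curl_easy_perform(easyhandle);
if(success != CURLE_OK)
  fprintf(stderr, "transfer failed: %s\n", errorbuffer);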
If you then want to transfer another file, the handle is ready to be used
again. Mind you, it is even preferred that you re-use an existing handle if
you intend to make another transfer. libcurl will then attempt to re-use the
previous connection.
When It Doesn't Work
There will always be times when the transfer fails for some reason. You might
have set the wrong libcurl option or misunderstood what the libcurl option
actually does, or the remote server might return non-standard replies that
confuse the library which then confuses your program.
There's one golden rule when these things occur: set the CURLOPT_VERBOSE
option to TRUE. It'll cause the library to spew out the entire protocol
details it sends, some internal info and some received protocol data as well
(especially when using FTP). If you're using HTTP, adding the received headers
to the output you study is also a clever way to get a better understanding of
why the server behaves the way it does. Include headers in the normal body
output with CURLOPT_HEADER set to TRUE.
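In code, that debugging aid is just two extra options on the handle (a sketch;
1 is used here for TRUE):

curl_easy_setopt(easyhandle, CURLOPT_VERBOSE, 1); /* spew out protocol details */
curl_easy_setopt(easyhandle, CURLOPT_HEADER, 1);  /* include headers in the body output */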
Of course there are bugs left. We need to get to know about them to be able
to fix them, so we're quite dependent on your bug reports! When you do report
suspected bugs in libcurl, please include as many details as you possibly can: a
protocol dump that CURLOPT_VERBOSE produces, library version, as much as
possible of your code that uses libcurl, operating system name and version,
compiler name and version etc.
Getting some in-depth knowledge about the protocols involved is never wrong,
and if you're trying to do funny things, you might very well understand
libcurl and how to use it better if you study the appropriate RFC documents
at least briefly.
Upload Data to a Remote Site
libcurl tries to keep a protocol independent approach to most transfers, thus
uploading to a remote FTP site is very similar to uploading data to a HTTP
server with a PUT request.
Of course, first you either create an easy handle or you re-use an existing
one. Then you set the URL to operate on just like before. This is the remote
URL that we will now upload to.
Since we write an application, we most likely want libcurl to get the upload
data by asking us for it. To make it do that, we set the read callback and
the custom pointer libcurl will pass to our read callback. The read callback
should have a prototype similar to:
size_t function(char *bufptr, size_t size, size_t nitems, void *userp);
Where bufptr is the pointer to a buffer we fill in with data to upload and
size*nitems is the size of the buffer and therefore also the maximum amount
of data we can return to libcurl in this call. The 'userp' pointer is the
custom pointer we set to point to a struct of ours to pass private data
between the application and the callback.
curl_easy_setopt(easyhandle, CURLOPT_READFUNCTION, read_function);
curl_easy_setopt(easyhandle, CURLOPT_INFILE, &filedata);
Tell libcurl that we want to upload:
curl_easy_setopt(easyhandle, CURLOPT_UPLOAD, TRUE);
A few protocols won't behave properly when uploads are done without any prior
knowledge of the expected file size. HTTP PUT is one example [1]. So, set the
upload file size using the CURLOPT_INFILESIZE like this:
curl_easy_setopt(easyhandle, CURLOPT_INFILESIZE, file_size);
When you call curl_easy_perform() this time, it'll perform all the necessary
operations and when it has invoked the upload it'll call your supplied
callback to get the data to upload. The callback should provide as much data
as possible in each invocation, as that is likely to make the upload perform as
fast as possible. The callback should return the number of bytes it wrote in
the buffer. Returning 0 will signal the end of the upload.
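Putting the upload pieces together, a minimal sketch could look like the
following. The local file name and FTP URL are only placeholders, and the
error handling is kept to a bare minimum:

#include <stdio.h>
#include <curl/curl.h>

/* read callback: feed libcurl from the FILE* we set with CURLOPT_INFILE */
size_t read_function(char *bufptr, size_t size, size_t nitems, void *userp)
{
  return fread(bufptr, size, nitems, (FILE *)userp);
}

int main(void)
{
  CURL *easyhandle;
  FILE *src = fopen("upload-this.txt", "rb"); /* placeholder local file */
  long file_size;

  if(!src)
    return 1;
  fseek(src, 0, SEEK_END);
  file_size = ftell(src);   /* size to announce with CURLOPT_INFILESIZE */
  rewind(src);

  curl_global_init(CURL_GLOBAL_ALL);
  easyhandle = curl_easy_init();
  if(easyhandle) {
    curl_easy_setopt(easyhandle, CURLOPT_URL,
                     "ftp://ftp.example.com/upload-this.txt"); /* placeholder */
    curl_easy_setopt(easyhandle, CURLOPT_READFUNCTION, read_function);
    curl_easy_setopt(easyhandle, CURLOPT_INFILE, src);
    curl_easy_setopt(easyhandle, CURLOPT_UPLOAD, 1); /* TRUE */
    curl_easy_setopt(easyhandle, CURLOPT_INFILESIZE, file_size);
    curl_easy_perform(easyhandle);
    curl_easy_cleanup(easyhandle);
  }
  fclose(src);
  curl_global_cleanup();
  return 0;
}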
Passwords
Many protocols use or even require that user name and password are provided
to be able to download or upload the data of your choice. libcurl offers
several ways to specify them.
Most protocols support that you specify the name and password in the URL
itself. libcurl will detect this and use them accordingly. This is written
like this:
protocol://user:password@example.com/path/
If you need any odd letters in your user name or password, you should enter
them URL encoded, as %XX where XX is a two-digit hexadecimal number.
libcurl also provides options to set various passwords. The user name and
password as shown embedded in the URL can instead get set with the
CURLOPT_USERPWD option. The argument passed to libcurl should be a char * to
a string in the format "user:password". In a manner like this:
curl_easy_setopt(easyhandle, CURLOPT_USERPWD, "myname:thesecret");
Another case where a name and password might be needed is for those
users who need to authenticate themselves to a proxy they use. libcurl offers
another option for this, CURLOPT_PROXYUSERPWD. It is used quite similarly
to the CURLOPT_USERPWD option, like this:
curl_easy_setopt(easyhandle, CURLOPT_PROXYUSERPWD, "myname:thesecret");
There's a long-time unix "standard" way of storing FTP user names and
passwords, namely in the $HOME/.netrc file. The file should be made private
so that only the user may read it (see also the "Security Considerations"
chapter), as it might contain the password in plain text. libcurl has the
ability to use this file to figure out what set of user name and password to
use for a particular host. As an extension to the normal functionality,
libcurl also supports this file for non-FTP protocols such as HTTP. To make
curl use this file, use the CURLOPT_NETRC option:
curl_easy_setopt(easyhandle, CURLOPT_NETRC, TRUE);
And a very basic example of what such a .netrc file may look like:
machine myhost.mydomain.com
login userlogin
password secretword
All these examples have been cases where the password has been optional, or
at least you could leave it out and have libcurl attempt to do its job
without it. There are times when the password isn't optional, like when
you're using an SSL private key for secure transfers.
You can in this situation either pass a password to libcurl to use to unlock
the private key, or you can let libcurl prompt the user for it. If you prefer
to ask the user, then you can provide your own callback function that will be
called when libcurl wants the password. That way, you can control how the
question will appear to the user.
To pass the known private key password to libcurl:
curl_easy_setopt(easyhandle, CURLOPT_SSLKEYPASSWD, "keypassword");
To make a password callback:
int enter_passwd(void *ourp, const char *prompt, char *buffer, int len);
curl_easy_setopt(easyhandle, CURLOPT_PASSWDFUNCTION, enter_passwd);
HTTP POSTing
We get many questions regarding how to issue HTTP POSTs with libcurl the
proper way. This chapter will thus include examples of both versions of
HTTP POST that libcurl supports.
The first version is the simple POST, the most common version, which most HTML
pages using the <form> tag use. We provide a pointer to the data and tell
libcurl to post it all to the remote site:
char *data="name=daniel&project=curl";
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDS, data);
curl_easy_setopt(easyhandle, CURLOPT_URL, "http://posthere.com/");
curl_easy_perform(easyhandle); /* post away! */
Simple enough, huh? Since you set the POST options with the
CURLOPT_POSTFIELDS, this automatically switches the handle to use POST in the
upcoming request.
Ok, so what if you want to post binary data that also requires you to set the
Content-Type: header of the post? Well, binary posts prevent libcurl from
being able to do strlen() on the data to figure out the size, so therefore we
must tell libcurl the size of the post data. Setting headers in libcurl
requests is done in a generic way, by building a list of our own headers and
then passing that list to libcurl.
struct curl_slist *headers=NULL;
headers = curl_slist_append(headers, "Content-Type: text/xml");
/* post binary data */
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDS, binaryptr);
/* set the size of the postfields data */
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDSIZE, 23);
/* pass our list of custom made headers */
curl_easy_setopt(easyhandle, CURLOPT_HTTPHEADER, headers);
curl_easy_perform(easyhandle); /* post away! */
curl_slist_free_all(headers); /* free the header list */
While the simple examples above cover the majority of all cases where HTTP
POST operations are required, they don't do multipart formposts. Multipart
formposts were introduced as a better way to post (possibly large) binary
data and were first documented in RFC 1867. They're called multipart
because they're built by a chain of parts, each being a single unit. Each
part has its own name and contents. You can in fact create and post a
multipart formpost with the regular libcurl POST support described above, but
that would require that you build a formpost yourself and provide to
libcurl. To make that easier, libcurl provides curl_formadd(). Using this
function, you add parts to the form. When you're done adding parts, you post
the whole form.
The following example sets two simple text parts with plain textual contents,
and then a file with binary contents, and uploads the whole thing.
struct HttpPost *post=NULL;
struct HttpPost *last=NULL;
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "name",
CURLFORM_COPYCONTENTS, "daniel", CURLFORM_END);
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "project",
CURLFORM_COPYCONTENTS, "curl", CURLFORM_END);
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "logotype-image",
CURLFORM_FILECONTENT, "curl.png", CURLFORM_END);
/* Set the form info */
curl_easy_setopt(easyhandle, CURLOPT_HTTPPOST, post);
curl_easy_perform(easyhandle); /* post away! */
/* free the post data again */
curl_formfree(post);
Multipart formposts are chains of parts using MIME-style separators and
headers. It means that each one of these separate parts gets a few headers set
that describe the individual content-type, size etc. To enable your
application to handicraft this formpost even more, libcurl allows you to
supply your own set of custom headers to such an individual form part. You
can of course supply headers to as many parts as you like, but this little
example will show how you set headers to one specific part when you add that
to the post handle:
struct curl_slist *headers=NULL;
headers = curl_slist_append(headers, "Content-Type: text/xml");
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "logotype-image",
CURLFORM_FILECONTENT, "curl.xml",
CURLFORM_CONTENTHEADER, headers,
CURLFORM_END);
curl_easy_perform(easyhandle); /* post away! */
curl_formfree(post); /* free post */
curl_slist_free_all(headers); /* free custom header list */
Since all options on an easy handle are "sticky" and remain in effect until
changed, even after you call curl_easy_perform(), you may need to tell curl to
go back to a plain GET request if that is what you intend to issue as your
next request. You force an easy handle back to GET by using the CURLOPT_HTTPGET
option:
curl_easy_setopt(easyhandle, CURLOPT_HTTPGET, TRUE);
Just setting CURLOPT_POSTFIELDS to "" or NULL will *not* stop libcurl from
doing a POST. It will just make it POST without any data to send!
Showing Progress
For historical and traditional reasons, libcurl has a built-in progress meter
that can be switched on and will then present a progress meter in your
terminal.
Switch on the progress meter by, oddly enough, setting CURLOPT_NOPROGRESS to
FALSE. This option is set to TRUE by default.
For most applications however, the built-in progress meter is useless and
what instead is interesting is the ability to specify a progress
callback. The function pointer you pass to libcurl will then be called on
irregular intervals with information about the current transfer.
Set the progress callback by using CURLOPT_PROGRESSFUNCTION. And pass a
pointer to a function that matches this prototype:
int progress_callback(void *clientp,
double dltotal,
double dlnow,
double ultotal,
double ulnow);
If any of the input arguments is unknown, a 0 will be passed. The first
argument, the 'clientp' is the pointer you pass to libcurl with
CURLOPT_PROGRESSDATA. libcurl won't touch it.
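A minimal sketch of hooking up such a callback (the function name and the
fprintf() output are of course just examples):

/* example progress callback: report how far the download has come */
int progress_callback(void *clientp,
                      double dltotal, double dlnow,
                      double ultotal, double ulnow)
{
  fprintf(stderr, "downloaded %.0f of %.0f bytes\n", dlnow, dltotal);
  return 0; /* returning a non-zero value aborts the transfer */
}

/* then, when setting up the easy handle: */
curl_easy_setopt(easyhandle, CURLOPT_NOPROGRESS, 0); /* FALSE, or no callback calls */
curl_easy_setopt(easyhandle, CURLOPT_PROGRESSFUNCTION, progress_callback);
curl_easy_setopt(easyhandle, CURLOPT_PROGRESSDATA, NULL); /* becomes 'clientp' */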
libcurl with C++
There's basically only one thing to keep in mind when using C++ instead of C
when interfacing libcurl:
"The Callbacks Must Be Plain C"
So if you want a write callback set in libcurl, you should put it within an
'extern "C"' block. Similar to this:
extern "C" {
size_t write_data(void *ptr, size_t size, size_t nmemb,
void *ourpointer)
{
/* do what you want with the data */
}
}
This will of course effectively turn the callback code into C. There won't be
any "this" pointer available etc.
Proxies
What "proxy" means according to Merriam-Webster: "a person authorized to act
for another" but also "the agency, function, or office of a deputy who acts
as a substitute for another".
Proxies are exceedingly common these days. Companies often only offer
internet access to employees through their HTTP proxies. Network clients or
user-agents ask the proxy for documents, the proxy does the actual request
and then it returns them.
libcurl has full support for HTTP proxies, so when a given URL is wanted,
libcurl will ask the proxy for it instead of trying to connect to the actual
host identified in the URL.
The fact that the proxy is a HTTP proxy puts certain restrictions on what can
actually happen. A requested URL that might not be a HTTP URL will still
be passed to the HTTP proxy to deliver back to libcurl. This happens
transparently, and an application may not need to know. I say "may", because
at times it is very important to understand that all operations over a HTTP
proxy use the HTTP protocol. For example, you can't invoke your own
custom FTP commands or even proper FTP directory listings.
Proxy Options
To tell libcurl to use a proxy at a given port number:
curl_easy_setopt(easyhandle, CURLOPT_PROXY, "proxy-host.com:8080");
Some proxies require user authentication before allowing a request, and
you pass that information similar to this:
curl_easy_setopt(easyhandle, CURLOPT_PROXYUSERPWD, "user:password");
If you want to, you can specify the host name only in the CURLOPT_PROXY
option, and set the port number separately with CURLOPT_PROXYPORT.
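For example (a sketch with a made-up proxy host name), the two options can be
combined like this:

curl_easy_setopt(easyhandle, CURLOPT_PROXY, "proxy-host.com");
curl_easy_setopt(easyhandle, CURLOPT_PROXYPORT, 8080L);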
Environment Variables
libcurl automatically checks and uses a set of environment variables to know
what proxies to use for certain protocols. The names of the variables follow
an ancient de facto standard and are built up as "[protocol]_proxy" (note the
lower casing). This makes the variable 'http_proxy' the one checked for the
name of a proxy to use when the input URL is HTTP. Following the same rule,
the variable named 'ftp_proxy' is checked for FTP URLs. Again, the proxies
are always HTTP proxies; the different names of the variables simply allow
different HTTP proxies to be used.
The proxy environment variable contents should be in the format
"[protocol://]machine[:port]", where the protocol:// part is simply
ignored if present (so http://proxy and bluerk://proxy will do the same)
and the optional port number specifies on which port the proxy operates on
the host. If not specified, the internal default port number will be used
and that is most likely *not* the one you would like it to be.
There are two special environment variables. 'all_proxy' is what sets the
proxy for any URL in case the protocol-specific variable wasn't set, and
'no_proxy' defines a list of hosts that should not use a proxy even though
a variable may say so. If 'no_proxy' is a plain asterisk ("*") it matches
all hosts.
SSL and Proxies
SSL is for secure point-to-point connections. This involves strong
encryption and similar things, which effectively makes it impossible for a
proxy to operate as a "man in between", which is the proxy's task as
previously discussed. Instead, the only way to have SSL work over a HTTP
proxy is to ask the proxy to tunnel through everything without being able
to check or fiddle with the traffic.
Opening an SSL connection over a HTTP proxy is therefore a matter of asking
the proxy for a straight connection to the target host on a specified
port. This is done with the HTTP request CONNECT. ("please mr proxy,
connect me to that remote host").
Because of the nature of this operation, where the proxy has no idea what
kind of data is passed in and out through this tunnel, this breaks
some of the very few advantages that come from using a proxy, such as
caching. Many organizations prevent this kind of tunneling to other
destination port numbers than 443 (which is the default HTTPS port
number).
Tunneling Through Proxy
As explained above, tunneling is required for SSL to work and often even
restricted to the operation intended for SSL: HTTPS.
This is however not the only time proxy-tunneling might offer benefits to
you or your application.
As tunneling opens a direct connection from your application to the remote
machine, it suddenly also re-introduces the ability to do non-HTTP
operations over a HTTP proxy. You can in fact use things such as FTP
upload or FTP custom commands this way.
Again, this is often prevented by the administrators of proxies and is
rarely allowed.
Tell libcurl to use proxy tunneling like this:
curl_easy_setopt(easyhandle, CURLOPT_HTTPPROXYTUNNEL, TRUE);
In fact, there might even be times when you want to do plain HTTP
operations using a tunnel like this, as it then enables you to operate on
the remote server instead of asking the proxy to do so. libcurl will not
stand in the way of such innovative actions either!
Proxy Auto-Config
Netscape first came up with this. It is basically a web page (usually using
a .pac extension) with JavaScript that, when executed by the browser with
the requested URL as input, returns information to the browser on how to
connect to the URL. The returned information might be "DIRECT" (which
means no proxy should be used), "PROXY host:port" (to tell the browser
where the proxy for this particular URL is) or "SOCKS host:port" (to
direct the browser to a SOCKS proxy).
libcurl has no means to interpret or evaluate JavaScript and thus it
doesn't support this. If you get yourself in a position where you face
this nasty invention, the following pieces of advice have been mentioned
and used in the past:
- Depending on the JavaScript complexity, write up a script that
translates it to another language and execute that.
- Read the JavaScript code and rewrite the same logic in another language.
- Implement a JavaScript interpreter; people have successfully used the
Mozilla JavaScript engine in the past.
- Ask your admins to stop this, for a static proxy setup or similar.
Persistence Is The Way to Happiness
Re-cycling the same easy handle several times when doing multiple requests is
the way to go.
After each single curl_easy_perform() operation, libcurl will keep the
connection alive and open. A subsequent request using the same easy handle to
the same host might just be able to use the already open connection! This
reduces network impact a lot.
Even if the connection is dropped, all connections involving SSL to the same
host again will benefit from libcurl's session ID cache, which drastically
reduces re-connection time.
FTP connections that are kept alive save a lot of time, as the command-
response roundtrips are skipped, and you also don't risk getting blocked from
logging in again, as many FTP servers only allow N persons to be logged in at
the same time.
libcurl caches DNS name resolving results, to make lookups of a previously
looked up name a lot faster.
Other interesting details that improve performance for subsequent requests
may also be added in the future.
Each easy handle will attempt to keep the last few connections alive for a
while in case they are to be used again. You can set the size of this "cache"
with the CURLOPT_MAXCONNECTS option. The default is 5. There is very seldom
any point in changing this value, and if you think of changing this it is
often just a matter of thinking again.
When the connection cache gets filled, libcurl must close an existing
connection in order to get room for the new one. To know which connection to
close, libcurl uses a "close policy" that you can affect with the
CURLOPT_CLOSEPOLICY option. There are only two policies implemented as of
this writing (libcurl 7.9.4) and they are:
CURLCLOSEPOLICY_LEAST_RECENTLY_USED simply closes the one that hasn't been
used for the longest time. This is the default behavior.
CURLCLOSEPOLICY_OLDEST closes the oldest connection, the one that was
created the longest time ago.
There are, or at least were, plans to support a close policy that would call
a user-specified callback to let the user decide which connection to dump
when this is necessary, which is why CURLOPT_CLOSEFUNCTION still exists as an
option today. Nothing ever uses it though, and it will not be used within the
foreseeable future either.
To force your upcoming request not to use an already existing connection (it
will even close one first if there happens to be one alive to the same host
you're about to operate on), set CURLOPT_FRESH_CONNECT to TRUE. In a similar
spirit, you can also forbid the connection used by the upcoming request from
lying around and possibly getting re-used after the request, by setting
CURLOPT_FORBID_REUSE to TRUE.
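A small sketch of what this re-use looks like in practice (the URLs are
placeholders; the point is simply that one easy handle performs both
transfers):

easyhandle = curl_easy_init();

curl_easy_setopt(easyhandle, CURLOPT_URL, "http://example.com/first.html");
curl_easy_perform(easyhandle);   /* the connection is kept alive afterwards */

/* same host, same handle: libcurl may re-use the open connection */
curl_easy_setopt(easyhandle, CURLOPT_URL, "http://example.com/second.html");
curl_easy_perform(easyhandle);

curl_easy_cleanup(easyhandle);   /* this closes the cached connections */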
Customizing Operations
There is an ongoing development today where more and more protocols are built
upon HTTP for transport. This has obvious benefits as HTTP is a tested and
reliable protocol that is widely deployed and has excellent proxy support.
When you use one of these protocols, and even when doing other kinds of
programming you may need to change the traditional HTTP (or FTP or...)
manners. You may need to change words, headers or various data.
libcurl is your friend here too.
If just changing the actual HTTP request keyword is what you want, like when
GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST is there
for you. It is very simple to use:
curl_easy_setopt(easyhandle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST");
When using the custom request, you change the request keyword of the actual
request you are performing. Thus, by default you make a GET request but you can
also make a POST operation (as described before) and then replace the POST
keyword if you want to. You're the boss.
HTTP-like protocols pass a series of headers to the server when doing the
request, and you're free to pass any number of extra headers that you think
fit. Adding headers is this easy:
struct curl_slist *headers=NULL;
headers = curl_slist_append(headers, "Hey-server-hey: how are you?");
headers = curl_slist_append(headers, "X-silly-content: yes");
/* pass our list of custom made headers */
curl_easy_setopt(easyhandle, CURLOPT_HTTPHEADER, headers);
curl_easy_perform(easyhandle); /* transfer http */
curl_slist_free_all(headers); /* free the header list */
... and if you think some of the internally generated headers, such as
User-Agent:, Accept: or Host: don't contain the data you want them to
contain, you can replace them by simply setting them too:
headers = curl_slist_append(headers, "User-Agent: 007");
headers = curl_slist_append(headers, "Host: munged.host.line");
If you replace an existing header with one with no contents, you will prevent
the header from being sent. For instance, if you want to completely prevent the
"Accept:" header from being sent, you can disable it with code similar to this:
headers = curl_slist_append(headers, "Accept:");
Both replacing and cancelling internal headers should be done with careful
consideration and you should be aware that you may violate the HTTP protocol
when doing so.
There's only one aspect left in the HTTP requests that we haven't yet
mentioned how to modify: the version field. All HTTP requests include the
version number to tell the server which version we support. libcurl speaks
HTTP 1.1 by default. Some very old servers don't like getting 1.1-requests
and when dealing with stubborn old things like that, you can tell libcurl to
use 1.0 instead by doing something like this:
curl_easy_setopt(easyhandle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
Not all protocols are HTTP-like, and thus the above may not help you when you
want to make, for example, your FTP transfers behave differently.
Sending custom commands to a FTP server means that you need to send the
commands exactly as the FTP server expects them (RFC 959 is a good guide here),
and you can only use commands that work on the control-connection alone. All
kinds of commands that require data interchange and thus need a
data-connection must be left to libcurl's own judgement. Also be aware that
libcurl will do its very best to change directory to the target directory
before doing any transfer, so if you change directory (with CWD or similar)
you might confuse libcurl and then it might not attempt to transfer the file
in the correct remote directory.
A little example that deletes a given file before an operation:
headers = curl_slist_append(headers, "DELE file-to-remove");
/* pass the list of custom commands to the handle */
curl_easy_setopt(easyhandle, CURLOPT_QUOTE, headers);
curl_easy_perform(easyhandle); /* transfer ftp data! */
curl_slist_free_all(headers); /* free the header list */
If you would instead want this operation (or chain of operations) to happen
_after_ the data transfer has taken place, the option to curl_easy_setopt()
is instead called CURLOPT_POSTQUOTE and is used in exactly the same way.
The custom FTP commands will be issued to the server in the same order they
are added to the list, and if a command gets an error code returned back from
the server, no more commands will be issued and libcurl will bail out with an
error code (CURLE_FTP_QUOTE_ERROR). Note that if you use CURLOPT_QUOTE to
send commands before a transfer, no transfer will actually take place when a
quote command has failed.
If you set CURLOPT_HEADER to TRUE, you will tell libcurl to get
information about the target file and output "headers" about it. The headers
will be in "HTTP-style", looking like they do in HTTP.
The option to enable headers or to run custom FTP commands may be useful to
combine with CURLOPT_NOBODY. If this option is set, no actual file content
transfer will be performed.
Cookies Without Chocolate Chips
In the HTTP sense, a cookie is a name with an associated value. A server
sends the name and value to the client, and expects it to get sent back on
every subsequent request to the server that matches the particular conditions
set. The conditions include that the domain name and path match and that the
cookie hasn't become too old.
In real-world cases, servers send new cookies to replace existing ones in
order to update them. Servers use cookies to "track" users and to keep
"sessions".
Cookies are sent from server to clients with the header Set-Cookie: and
they're sent from clients to servers with the Cookie: header.
To just send whatever cookie you want to a server, you can use CURLOPT_COOKIE
to set a cookie string like this:
curl_easy_setopt(easyhandle, CURLOPT_COOKIE, "name1=var1; name2=var2;");
In many cases, that is not enough. You might want to dynamically save whatever
cookies the remote server passes to you, and make sure those cookies are then
used accordingly on later requests.
One way to do this is to save all headers you receive in a plain file and,
when you make a later request, tell libcurl to read the previous headers to
figure out which cookies to use. Set the file to read cookies from with
CURLOPT_COOKIEFILE.
The CURLOPT_COOKIEFILE option also automatically enables the cookie parser in
libcurl. Until the cookie parser is enabled, libcurl will not parse or
understand incoming cookies and they will just be ignored. However, when the
parser is enabled, the cookies will be understood, kept in memory and used
properly in subsequent requests when the same handle is used. Many times this
is enough, and you may not have to save the cookies to disk at all. Note that
the file you specify to CURLOPT_COOKIEFILE doesn't have to exist to enable the
parser, so a common way to just enable the parser without reading any file is
to use a file name you know doesn't exist.
If you would rather use existing cookies that you've previously received with
your Netscape or Mozilla browser, you can make libcurl use that cookie file as
input. CURLOPT_COOKIEFILE is used for that too, as libcurl will automatically
find out what kind of file it is and act accordingly.
Perhaps the most advanced cookie operation libcurl offers is saving the
entire internal cookie state back into a Netscape/Mozilla formatted cookie
file. We call that the cookie-jar. When you set a file name with
CURLOPT_COOKIEJAR, that file will be created and all received cookies will be
stored in it when curl_easy_cleanup() is called. This enables cookies to get
passed on properly between multiple handles without any information getting
lost.
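As a sketch, reading previously saved cookies from a file and writing the
updated cookie state back to the same file when the handle is cleaned up (the
file name is made up):
curl_easy_setopt(easyhandle, CURLOPT_COOKIEFILE, "cookies.txt"); /* read cookies, enables the parser */
curl_easy_setopt(easyhandle, CURLOPT_COOKIEJAR, "cookies.txt");  /* write all known cookies here */
curl_easy_perform(easyhandle); /* cookies are received and used */
curl_easy_cleanup(easyhandle); /* the cookie-jar file gets written now */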
Headers Equal Fun
[ use the header callback for HTTP, FTP etc ]
Post Transfer Information
[ curl_easy_getinfo ]
Security Considerations
[ ps output, netrc plain text, plain text protocols / base64 ]
SSL, Certificates and Other Tricks
[ seeding, passwords, keys, certificates, ENGINE, ca certs ]
Future
[ multi interface, sharing between handles, mutexes, pipelining ]
-----
Footnotes:
[1] = HTTP PUT without knowing the size prior to the transfer is indeed
possible, but libcurl does not support the chunked transfer encoding on
uploads that is necessary for this feature to work. We'd greatly appreciate
patches that bring this functionality...
[2] = This happens on Windows machines when libcurl is built and used as a
DLL. However, you can still do this on Windows if you link with a static
library.
[3] = The curl-config tool is generated at build-time (on unix-like systems)
and should be installed with the 'make install' or similar instruction
that installs the library, header files, man pages etc.

68
docs/libcurl/Makefile.am Normal file
View File

@@ -0,0 +1,68 @@
#
# $Id$
#
AUTOMAKE_OPTIONS = foreign no-dependencies
man_MANS = \
curl_easy_cleanup.3 \
curl_easy_getinfo.3 \
curl_easy_init.3 \
curl_easy_perform.3 \
curl_easy_setopt.3 \
curl_easy_duphandle.3 \
curl_formparse.3 \
curl_formadd.3 \
curl_formfree.3 \
curl_getdate.3 \
curl_getenv.3 \
curl_slist_append.3 \
curl_slist_free_all.3 \
curl_version.3 \
curl_escape.3 \
curl_unescape.3 \
curl_strequal.3 \
curl_strnequal.3 \
curl_mprintf.3 \
curl_global_init.3 \
curl_global_cleanup.3 \
libcurl.3
HTMLPAGES = \
curl_easy_cleanup.html \
curl_easy_getinfo.html \
curl_easy_init.html \
curl_easy_perform.html \
curl_easy_setopt.html \
curl_easy_duphandle.html \
curl_formadd.html \
curl_formparse.html \
curl_formfree.html \
curl_getdate.html \
curl_getenv.html \
curl_slist_append.html \
curl_slist_free_all.html \
curl_version.html \
curl_escape.html \
curl_unescape.html \
curl_strequal.html \
curl_strnequal.html \
curl_mprintf.html \
curl_global_init.html \
curl_global_cleanup.html \
libcurl.html \
index.html
EXTRA_DIST = $(man_MANS) $(HTMLPAGES)
MAN2HTML= gnroff -man $< | man2html >$@
SUFFIXES = .1 .3 .html
html: $(HTMLPAGES)
.3.html:
$(MAN2HTML)
.1.html:
$(MAN2HTML)

View File

@@ -0,0 +1,25 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_cleanup 3 "4 March 2002" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_cleanup - End a libcurl easy session
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "void curl_easy_cleanup(CURL *" handle ");"
.ad
.SH DESCRIPTION
This function must be the last function to call for an easy session. It is the
opposite of the \fIcurl_easy_init\fP function and must be called with the same
\fIhandle\fP as input that the curl_easy_init call returned.
This will effectively close all connections this handle has used and possibly
has kept open until now. Don't call this function if you intend to transfer
more files.
.SH RETURN VALUE
None
.SH "SEE ALSO"
.BR curl_easy_init "(3), "

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_init 3 "5 March 2001" "libcurl 7.6.1" "libcurl Manual"
.TH curl_easy_init 3 "31 Jan 2001" "libcurl 7.9.4" "libcurl Manual"
.SH NAME
curl_easy_getinfo - Extract information from a curl session (added in 7.4)
.SH SYNOPSIS
@@ -30,9 +30,11 @@ Pass a pointer to a long to receive the last received HTTP code.
.TP
.B CURLINFO_FILETIME
Pass a pointer to a long to receive the remote time of the retrieved
document. If you get 0, it can be because of many reasons (unknown, the server
hides it or the server doesn't support the command that tells document time
etc) and the time of the document is unknown. (Added in 7.5)
document. If you get -1, it can be because of many reasons (unknown, the
server hides it or the server doesn't support the command that tells document
time etc) and the time of the document is unknown. Note that you must tell the
server to collect this information before the transfer is made, by using the
CURLOPT_FILETIME option to \fIcurl_easy_setopt(3)\fP. (Added in 7.5)
.TP
.B CURLINFO_TOTAL_TIME
Pass a pointer to a double to receive the total transaction time in seconds
@@ -52,6 +54,12 @@ start until the file transfer is just about to begin. This includes all
pre-transfer commands and negotiations that are specific to the particular
protocol(s) involved.
.TP
.B CURLINFO_STARTTRANSFER_TIME
Pass a pointer to a double to receive the time, in seconds, it took from the
start until the first byte is just about to be transferred. This includes
CURLINFO_PRETRANSFER_TIME and also the time the server needs to calculate
the result.
.TP
.B CURLINFO_SIZE_UPLOAD
Pass a pointer to a double to receive the total amount of bytes that were
uploaded.
@@ -89,6 +97,12 @@ is the value read from the Content-Length: field. (Added in 7.6.1)
.B CURLINFO_CONTENT_LENGTH_UPLOAD
Pass a pointer to a double to receive the specified size of the upload.
(Added in 7.6.1)
.TP
.B CURLINFO_CONTENT_TYPE
Pass a pointer to a 'char *' to receive the content-type of the downloaded
object. This is the value read from the Content-Type: field. If you get NULL,
it means that the server didn't send a valid Content-Type header or that the
protocol used doesn't support this. (Added in 7.9.4)
.PP
.SH RETURN VALUE

View File

@@ -0,0 +1,25 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_init 3 "4 March 2002" "libcurl 7.8.1" "libcurl Manual"
.SH NAME
curl_easy_init - Start a libcurl easy session
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURL *curl_easy_init( );"
.ad
.SH DESCRIPTION
This function must be the first function to call, and it returns a CURL easy
handle that you must use as input to other easy-functions. curl_easy_init
initializes curl and this call MUST have a corresponding call to
\fIcurl_easy_cleanup\fP when the operation is complete.
.SH RETURN VALUE
If this function returns NULL, something went wrong and you cannot use the
other curl functions.
.SH "SEE ALSO"
.BR curl_easy_cleanup "(3), " curl_global_init "(3)
.SH BUGS
Surely there are some, you tell me!

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_setopt 3 "11 Oct 2001" "libcurl 7.9.1" "libcurl Manual"
.TH curl_easy_setopt 3 "10 Dec 2001" "libcurl 7.9.2" "libcurl Manual"
.SH NAME
curl_easy_setopt - Set curl easy-session options
.SH SYNOPSIS
@@ -77,9 +77,8 @@ function gets called by libcurl as soon as it needs to read data in order to
send it to the peer. The data area pointed at by the pointer \fIptr\fP may be
filled with at most \fIsize\fP multiplied with \fInmemb\fP number of
bytes. Your function must return the actual number of bytes that you stored in
that memory area. Returning -1 will signal an error to the library and cause
it to abort the current transfer immediately (with a \fICURLE_READ_ERROR\fP
return code).
that memory area. Returning 0 will signal end-of-file to the library and cause
it to stop the current transfer.
.TP
.B CURLOPT_INFILESIZE
When uploading a file to a remote site, this option should be used to tell
@@ -320,13 +319,59 @@ with \fIcurl_easy_cleanup(3)\fP.
.TP
.B CURLOPT_SSLCERT
Pass a pointer to a zero terminated string as parameter. The string should be
the file name of your certificate in PEM format.
the file name of your certificate. The default format is "PEM" and can be
changed with \fICURLOPT_SSLCERTTYPE\fP.
.TP
.B CURLOPT_SSLCERTTYPE
Pass a pointer to a zero terminated string as parameter. The string should be
the format of your certificate. Supported formats are "PEM" and "DER". (Added
in 7.9.3)
.TP
.B CURLOPT_SSLCERTPASSWD
Pass a pointer to a zero terminated string as parameter. It will be used as
the password required to use the CURLOPT_SSLCERT certificate. If the password
is not supplied, you will be prompted for it. \fICURLOPT_PASSWDFUNCTION\fP can
be used to set your own prompt function.
\fBNOTE:\fPThis option is replaced by \fICURLOPT_SSLKEYPASSWD\fP and is only
kept for backward compatibility. You never needed a pass phrase to load
a certificate but you need one to load your private key.
.TP
.B CURLOPT_SSLKEY
Pass a pointer to a zero terminated string as parameter. The string should be
the file name of your private key. The default format is "PEM" and can be
changed with \fICURLOPT_SSLKEYTYPE\fP. (Added in 7.9.3)
.TP
.B CURLOPT_SSLKEYTYPE
Pass a pointer to a zero terminated string as parameter. The string should be
the format of your private key. Supported formats are "PEM", "DER" and "ENG".
(Added in 7.9.3)
\fBNOTE:\fPThe format "ENG" enables you to load the private key from a crypto
engine. In this case \fICURLOPT_SSLKEY\fP is used as an identifier passed to
the engine. You have to set the crypto engine with \fICURLOPT_SSL_ENGINE\fP.
.TP
.B CURLOPT_SSLKEYPASSWD
Pass a pointer to a zero terminated string as parameter. It will be used as
the password required to use the \fICURLOPT_SSLKEY\fP private key. If the
password is not supplied, you will be prompted for
it. \fICURLOPT_PASSWDFUNCTION\fP can be used to set your own prompt function.
(Added in 7.9.3)
.TP
.B CURLOPT_SSL_ENGINE
Pass a pointer to a zero terminated string as parameter. It will be used as
the identifier for the crypto engine you want to use for your private
key. (Added in 7.9.3)
\fBNOTE:\fPIf the crypto device cannot be loaded,
\fICURLE_SSL_ENGINE_NOTFOUND\fP is returned.
.TP
.B CURLOPT_SSL_ENGINEDEFAULT
Sets the actual crypto engine as the default for (asymmetric) crypto
operations. (Added in 7.9.3)
\fBNOTE:\fPIf the crypto device cannot be set,
\fICURLE_SSL_ENGINE_SETFAILED\fP is returned.
.TP
.B CURLOPT_CRLF
Convert Unix newlines to CRLF newlines on FTP uploads.
@@ -336,13 +381,15 @@ Pass a pointer to a linked list of FTP commands to pass to the server prior to
your ftp request. The linked list should be a fully valid list of 'struct
curl_slist' structs properly filled in. Use \fIcurl_slist_append(3)\fP to
append strings (commands) to the list, and clear the entire list afterwards
with \fIcurl_slist_free_all(3)\fP.
with \fIcurl_slist_free_all(3)\fP. Disable this operation again by setting a
NULL to this option.
.TP
.B CURLOPT_POSTQUOTE
Pass a pointer to a linked list of FTP commands to pass to the server after
your ftp transfer request. The linked list should be a fully valid list of
struct curl_slist structs properly filled in as described for
\fICURLOPT_QUOTE\fP.
\fICURLOPT_QUOTE\fP. Disable this operation again by setting a NULL to this
option.
.TP
.B CURLOPT_WRITEHEADER
Pass a pointer to be used to write the header part of the received data to. If
@@ -545,6 +592,29 @@ compile OpenSSL.
You'll find more details about cipher lists on this URL:
\fIhttp://www.openssl.org/docs/apps/ciphers.html\fP
.TP
.B CURLOPT_HTTP_VERSION
Pass a long, set to one of the values described below. They force libcurl to
use the specific HTTP versions. This is not sensible to do unless you have a
good reason.
.RS
.TP 5
.B CURL_HTTP_VERSION_NONE
We don't care about what version the library uses. libcurl will use whatever
it thinks fit.
.TP
.B CURL_HTTP_VERSION_1_0
Enforce HTTP 1.0 requests.
.TP
.B CURL_HTTP_VERSION_1_1
Enforce HTTP 1.1 requests.
.RE
.TP
.B CURLOPT_FTP_USE_EPSV
Pass a long. If the value is non-zero, it tells curl to use the EPSV command
when doing passive FTP downloads (which it always does by default). Using EPSV
means that it will first attempt to use EPSV before using PASV, but if you
pass FALSE (zero) to this option, it will not try using EPSV, only plain PASV.
.PP
.SH RETURN VALUE
CURLE_OK (zero) means that the option was set properly, non-zero means an

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_escape 3 "22 March 2001" "libcurl 7.7" "libcurl Manual"
.TH curl_escape 3 "6 March 2002" "libcurl 7.9" "libcurl Manual"
.SH NAME
curl_escape - URL encodes the given string
.SH SYNOPSIS
@@ -13,10 +13,8 @@ curl_escape - URL encodes the given string
.SH DESCRIPTION
This function will convert the given input string to an URL encoded string and
return that as a new allocated string. All input characters that are not a-z,
A-Z or 0-9 will be converted to their "URL escaped" version. If a sequence of
%NN (where NN is a two-digit hexadecimal number) is found in the string to
encode, that 3-letter combination will be copied to the output unmodifed,
assuming that it is an already encoded piece of data.
A-Z or 0-9 will be converted to their "URL escaped" version (%NN where NN is a
two-digit hexadecimal number).
If the 'length' argument is set to 0, curl_escape() will use strlen() on the
input 'url' string to find out the size.

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_formadd 3 "29 October 2001" "libcurl 7.9.1" "libcurl Manual"
.TH curl_formadd 3 "1 March 2002" "libcurl 7.9.1" "libcurl Manual"
.SH NAME
curl_formadd - add a section to a multipart/formdata HTTP POST
.SH SYNOPSIS
@@ -58,6 +58,13 @@ are allowed. The effect of this parameter is the same as giving multiple
\fBCURLFORM_FILE\fP options possibly with \fBCURLFORM_CONTENTTYPE\fP after or
before each \fBCURLFORM_FILE\fP option.
Should you need to specify extra headers for the form POST section, use
\fBCURLFORM_CONTENTHEADER\fP. This takes a curl_slist prepared in the usual way
using \fBcurl_slist_append\fP and appends the list of headers to those Curl
automatically generates for \fBCURLFORM_CONTENTTYPE\fP and the content
disposition. The list must exist while the POST occurs; if you free it before
the POST completes you may experience problems.
The last argument in such an array must always be \fBCURLFORM_END\fP.
The pointers \fI*firstitem\fP and \fI*lastitem\fP should both be pointing to
@@ -80,8 +87,8 @@ Returns non-zero if an error occurs.
.SH EXAMPLE
.nf
HttpPost* post = NULL;
HttpPost* last = NULL;
struct HttpPost* post = NULL;
struct HttpPost* last = NULL;
char namebuffer[] = "name buffer";
long namelength = strlen(namebuffer);
char buffer[] = "test buffer";

View File

@@ -0,0 +1,18 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_formparse 3 "17 Dec 2001" "libcurl 7.9.2" "libcurl Manual"
.SH NAME
curl_formparse - add a section to a multipart/formdata HTTP POST:
deprecated (use curl_formadd instead)
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURLcode curl_formparse(char * " string, " struct HttpPost ** " firstitem,
.BI "struct HttpPost ** " lastitem ");"
.ad
.SH DESCRIPTION
This has been removed deliberately. The \fBcurl_formadd\fP has been introduced
to replace this function. Do not use this. Convert to the new function
now. curl_formparse() will be removed from a future version of libcurl.

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_global_init 3 "14 August 2001" "libcurl 7.8.1" "libcurl Manual"
.TH curl_global_init 3 "13 Nov 2001" "libcurl 7.9.1" "libcurl Manual"
.SH NAME
curl_global_init - Global libcurl initialisation
.SH SYNOPSIS
@@ -11,8 +11,8 @@ curl_global_init - Global libcurl initialisation
.BI "CURLcode curl_global_init(long " flags ");"
.ad
.SH DESCRIPTION
This function should be called once (no matter how many threads or libcurl
sessions that'll be used) by every application that uses libcurl.
This function should only be called once (no matter how many threads or
libcurl sessions that'll be used) by every application that uses libcurl.
If this function hasn't been invoked when \fIcurl_easy_init\fP is called, it
will be done automatically by libcurl.
@@ -23,6 +23,8 @@ init, as described below. Set the desired bits by ORing the values together.
You must however \fBalways\fP use the \fIcurl_global_cleanup\fP function, as
that cannot be called automatically for you by libcurl.
Calling this function more than once will cause unpredictable results.
This function was added in libcurl 7.8.
.SH FLAGS
.TP 5

View File

@@ -0,0 +1,20 @@
.\" $Id$
.\"
.TH curl_multi_add_handle 3 "4 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_add_handle - add an easy handle to a multi session
.SH SYNOPSIS
#include <curl/curl.h>
CURLMcode curl_multi_add_handle(CURLM *multi_handle, CURL *easy_handle);
.ad
.SH DESCRIPTION
Adds a standard easy handle to the multi stack. This will make this multi
handle control the specified easy handle.
When an easy handle has been added to a multi stack, you can not and you must
not use curl_easy_perform() on that handle!
.SH RETURN VALUE
CURLMcode type, general libcurl multi interface error code.
.SH "SEE ALSO"
.BR curl_multi_cleanup "(3)," curl_multi_init "(3)"

View File

@@ -0,0 +1,18 @@
.\" $Id$
.\"
.TH curl_multi_cleanup 3 "1 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_cleanup - close down a multi session
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURLMcode curl_multi_cleanup( CURLM *multi_handle );"
.ad
.SH DESCRIPTION
Cleans up and removes a whole multi stack. It does not free or touch any
individual easy handles in any way - they still need to be closed
individually, using the usual curl_easy_cleanup() way.
.SH RETURN VALUE
CURLMcode type, general libcurl multi interface error code.
.SH "SEE ALSO"
.BR curl_multi_init "(3)," curl_easy_cleanup "(3)," curl_easy_init "(3)"

View File

@@ -0,0 +1,23 @@
.\" $Id$
.\"
.TH curl_multi_fdset 3 "1 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_fdset - extract file descriptor information from a multi handle
.SH SYNOPSIS
#include <curl/curl.h>
CURLMcode curl_multi_fdset(CURLM *multi_handle,
fd_set *read_fd_set,
fd_set *write_fd_set,
fd_set *exc_fd_set,
int *max_fd);
.ad
.SH DESCRIPTION
This function extracts file descriptor information from a given multi_handle.
libcurl returns its fd_set sets. The application can use these to select() or
poll() on. The curl_multi_perform() function should be called as soon as one
of them is ready to be read from or written to.
.SH RETURN VALUE
CURLMcode type, general libcurl multi interface error code.
.SH "SEE ALSO"
.BR curl_multi_cleanup "(3)," curl_multi_init "(3)"

View File

@@ -0,0 +1,35 @@
.\" $Id$
.\"
.TH curl_multi_info_read 3 "1 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_info_read - read multi stack informationals
.SH SYNOPSIS
#include <curl/curl.h>
CURLMsg *curl_multi_info_read( CURLM *multi_handle,
int *msgs_in_queue);
.ad
.SH DESCRIPTION
Ask the multi handle if there are any messages/informationals from the
individual transfers. Messages include informationals such as an error code
from the transfer or just the fact that a transfer is completed. More details
on these should be written down as well.
Repeated calls to this function will return a new struct each time, until a
special "end of msgs" struct is returned as a signal that there is no more to
get at this point. The integer pointed to with \fImsgs_in_queue\fP will
contain the number of remaining messages after this function was called.
The data the returned pointer points to will not survive calling
curl_multi_cleanup().
The 'CURLMsg' struct is very simple and only contains very basic information.
If more involved information is wanted, the particular "easy handle" is
present in that struct and can thus be used in subsequent regular
curl_easy_getinfo() calls (or similar).
.SH "RETURN VALUE"
A pointer to a filled-in struct, or NULL if it failed or ran out of
structs. It also writes the number of messages left in the queue (after this
read) in the integer the second argument points to.
.SH "SEE ALSO"
.BR curl_multi_cleanup "(3)," curl_multi_init "(3)," curl_multi_perform "(3)"

View File

@@ -0,0 +1,22 @@
.\" $Id$
.\"
.TH curl_multi_init 3 "1 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_init - Start a multi session
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "CURLM *curl_multi_init( );"
.ad
.SH DESCRIPTION
This function returns a CURLM handle to be used as input to all the other
multi-functions, sometimes referred to as a multi handle in some places in the
documentation. This init call MUST have a corresponding call to
\fIcurl_multi_cleanup\fP when the operation is complete.
.SH RETURN VALUE
If this function returns NULL, something went wrong and you cannot use the
other curl functions.
.SH "SEE ALSO"
.BR curl_multi_cleanup "(3)," curl_global_init "(3)," curl_easy_init "(3)"
.SH BUGS
Surely there are some, you tell me!

View File

@@ -0,0 +1,30 @@
.\" $Id$
.\"
.TH curl_multi_perform 3 "1 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_perform - reads/writes available data from each easy handle
.SH SYNOPSIS
#include <curl/curl.h>
CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles);
.ad
.SH DESCRIPTION
When the app thinks there's data available for the multi_handle, it should
call this function to read/write whatever there is to read or write right
now. curl_multi_perform() returns as soon as the reads/writes are done. This
function does not require that there actually is any data available for
reading or that data can be written, it can be called just in case. It will
write the number of handles that still transfer data in the second argument's
integer-pointer.
.SH "RETURN VALUE"
CURLMcode type, general libcurl multi interface error code.
NOTE that this only returns errors etc regarding the whole multi stack. There
might still have occurred problems on individual transfers even when this
function returns OK.
.SH "TYPICAL USAGE"
Most applications will use \fIcurl_multi_fdset\fP to get the multi_handle's
file descriptors, then it'll wait for action on them using select() and as
soon as one or more of them are ready, \fIcurl_multi_perform\fP gets called.
.SH "SEE ALSO"
.BR curl_multi_cleanup "(3)," curl_multi_init "(3)"
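For illustration only, a bare-bones sketch of that typical usage, using just
the functions documented in these pages (error checking omitted):
#include <sys/types.h>
#include <sys/time.h>
#include <unistd.h>
#include <curl/curl.h>
/* sketch: drive one easy handle to completion through a multi handle */
void fetch_with_multi(CURL *easy)
{
  CURLM *multi = curl_multi_init();
  int running = 0;
  curl_multi_add_handle(multi, easy);
  curl_multi_perform(multi, &running); /* kick off the transfer */
  while(running) {
    fd_set rd, wr, ex;
    int maxfd = -1;
    struct timeval timeout = { 1, 0 }; /* wait at most one second */
    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_ZERO(&ex);
    curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd); /* get libcurl's descriptors */
    select(maxfd + 1, &rd, &wr, &ex, &timeout); /* wait for action on them */
    curl_multi_perform(multi, &running); /* read/write whatever is ready now */
  }
  curl_multi_remove_handle(multi, easy); /* the easy handle can be re-used again */
  curl_multi_cleanup(multi);
}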

View File

@@ -0,0 +1,20 @@
.\" $Id$
.\"
.TH curl_multi_remove_handle 3 "6 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_remove_handle - remove an easy handle from a multi session
.SH SYNOPSIS
#include <curl/curl.h>
CURLMcode curl_multi_remove_handle(CURLM *multi_handle, CURL *easy_handle);
.ad
.SH DESCRIPTION
Removes a given easy_handle from the multi_handle. This will make the
specified easy handle be removed from this multi handle's control.
When the easy handle has been removed from a multi stack, it is again
perfectly legal to invoke \fIcurl_easy_perform()\fP on this easy handle.
.SH RETURN VALUE
CURLMcode type, general libcurl multi interface error code.
.SH "SEE ALSO"
.BR curl_multi_cleanup "(3)," curl_multi_init "(3)"

38
docs/libcurl/index.html Normal file
View File

@@ -0,0 +1,38 @@
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<TITLE>Index to Curl documentation</TITLE>
</HEAD>
<BODY>
<H1 ALIGN="CENTER">Index to Curl documentation</H1>
<H2>Programs</H2>
<P><A HREF="curl-config.html">curl-config.html</A>
<P><A HREF="curl.html">curl.html</A>
<H2>Library routines</H2>
<P><A HREF="libcurl.html">libcurl.html</A>
<P><A HREF="curl_easy_cleanup.html">curl_easy_cleanup.html</A>
<P><A HREF="curl_easy_duphandle.html">curl_easy_duphandle.html</A>
<P><A HREF="curl_easy_getinfo.html">curl_easy_getinfo.html</A>
<P><A HREF="curl_easy_init.html">curl_easy_init.html</A>
<P><A HREF="curl_easy_perform.html">curl_easy_perform.html</A>
<P><A HREF="curl_easy_setopt.html">curl_easy_setopt.html</A>
<P><A HREF="curl_escape.html">curl_escape.html</A>
<P><A HREF="curl_formadd.html">curl_formadd.html</A>
<P><A HREF="curl_formfree.html">curl_formfree.html</A>
<P><A HREF="curl_formparse.html">curl_formparse.html</A>
<P><A HREF="curl_getdate.html">curl_getdate.html</A>
<P><A HREF="curl_getenv.html">curl_getenv.html</A>
<P><A HREF="curl_global_cleanup.html">curl_global_cleanup.html</A>
<P><A HREF="curl_global_init.html">curl_global_init.html</A>
<P><A HREF="curl_mprintf.html">curl_mprintf.html</A>
<P><A HREF="curl_slist_append.html">curl_slist_append.html</A>
<P><A HREF="curl_slist_free_all.html">curl_slist_free_all.html</A>
<P><A HREF="curl_strequal.html">curl_strequal.html</A>
<P><A HREF="curl_strnequal.html">curl_strnequal.html</A>
<P><A HREF="curl_unescape.html">curl_unescape.html</A>
<P><A HREF="curl_version.html">curl_version.html</A>
</BODY>
</HTML>

View File

@@ -6,8 +6,10 @@
.SH NAME
libcurl \- client-side URL transfers
.SH DESCRIPTION
This is an overview on how to use libcurl in your c/c++ programs. There are
specific man pages for each function mentioned in here.
This is an overview on how to use libcurl in your C programs. There are
specific man pages for each function mentioned in here. There's also the
libcurl-the-guide document for a complete tutorial to programming with
libcurl.
libcurl can also be used directly from within your Java, PHP, Perl, Ruby or
Tcl programs as well, look elsewhere for documentation on this!
@@ -56,9 +58,6 @@ get information about a performed transfer
.B curl_formadd()
helps building a HTTP form POST
.TP
.B curl_formparse()
helps building a HTTP form POST (deprecated since 7.9 use curl_formadd()!)
.TP
.B curl_formfree()
free a list built with curl_formparse()/curl_formadd()
.TP

View File

@@ -7,7 +7,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2001, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 2002, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
@@ -30,11 +30,11 @@
# include <time.h>
#else
# include <sys/types.h>
# if TIME_WITH_SYS_TIME
# ifdef TIME_WITH_SYS_TIME
# include <sys/time.h>
# include <time.h>
# else
# if HAVE_SYS_TIME_H
# ifdef HAVE_SYS_TIME_H
# include <sys/time.h>
# else
# include <time.h>
@@ -62,6 +62,7 @@ struct HttpPost {
char *contents; /* pointer to allocated data contents */
long contentslength; /* length of contents field */
char *contenttype; /* Content-Type */
struct curl_slist* contentheader; /* list of extra headers for this form */
struct HttpPost *more; /* if one field name has more than one file, this
link should link to following files */
long flags; /* as defined below */
@@ -74,10 +75,10 @@ struct HttpPost {
};
typedef int (*curl_progress_callback)(void *clientp,
size_t dltotal,
size_t dlnow,
size_t ultotal,
size_t ulnow);
double dltotal,
double dlnow,
double ultotal,
double ulnow);
typedef size_t (*curl_write_callback)(char *buffer,
size_t size,
@@ -155,7 +156,9 @@ typedef enum {
CURLE_OBSOLETE, /* 50 - removed after 7.7.3 */
CURLE_SSL_PEER_CERTIFICATE, /* 51 - peer's certificate wasn't ok */
CURLE_GOT_NOTHING, /* 52 - when this is a specific error */
CURLE_SSL_ENGINE_NOTFOUND, /* 53 - SSL crypto engine not found */
CURLE_SSL_ENGINE_SETFAILED, /* 54 - can not set SSL crypto engine as default */
CURL_LAST /* never use! */
} CURLcode;
@@ -212,10 +215,8 @@ typedef enum {
in the CURLOPT_FLAGS to activate this */
CINIT(RANGE, OBJECTPOINT, 7),
#if 0
/* Configuration flags */
CINIT(FLAGS, LONG, 8),
#endif
/* not used */
/* Specified file stream to upload from (use as input): */
CINIT(INFILE, OBJECTPOINT, 9),
@@ -271,7 +272,7 @@ typedef enum {
/* Set cookie in request: */
CINIT(COOKIE, OBJECTPOINT, 22),
/* This points to a linked list of headers, struct HttpHeader kind */
/* This points to a linked list of headers, struct curl_slist kind */
CINIT(HTTPHEADER, OBJECTPOINT, 23),
/* This points to a linked list of post entries, struct HttpPost */
@@ -280,8 +281,10 @@ typedef enum {
/* name of the file keeping your private SSL-certificate */
CINIT(SSLCERT, OBJECTPOINT, 25),
/* password for the SSL-certificate */
/* password for the SSL-private key, keep this for compatibility */
CINIT(SSLCERTPASSWD, OBJECTPOINT, 26),
/* password for the SSL private key */
CINIT(SSLKEYPASSWD, OBJECTPOINT, 26),
/* send TYPE parameter? */
CINIT(CRLF, LONG, 27),
@@ -298,7 +301,7 @@ typedef enum {
CINIT(COOKIEFILE, OBJECTPOINT, 31),
/* What version to specifly try to use.
3 = SSLv3, 2 = SSLv2, all else makes it try v3 first then v2 */
See CURL_SSLVERSION defines below. */
CINIT(SSLVERSION, LONG, 32),
/* What kind of HTTP time condition to use, see defines */
@@ -321,11 +324,8 @@ typedef enum {
/* HTTP request, for odd commands like DELETE, TRACE and others */
CINIT(STDERR, OBJECTPOINT, 37),
#if 0
/* Progress mode set alternative progress mode displays. Alternative
ones should now be made by the client, not the lib! */
CINIT(PROGRESSMODE, LONG, 38),
#endif
/* 38 is not used */
/* send linked-list of post-transfer QUOTE commands */
CINIT(POSTQUOTE, OBJECTPOINT, 39),
@@ -465,6 +465,37 @@ typedef enum {
/* Specify which HTTP version to use! This must be set to one of the
CURL_HTTP_VERSION* enums set below. */
CINIT(HTTP_VERSION, LONG, 84),
/* Specifically switch on or off the FTP engine's use of the EPSV command. By
default, that one will always be attempted before the more traditional
PASV command. */
CINIT(FTP_USE_EPSV, LONG, 85),
/* type of the file keeping your SSL-certificate ("DER", "PEM", "ENG") */
CINIT(SSLCERTTYPE, OBJECTPOINT, 86),
/* name of the file keeping your private SSL-key */
CINIT(SSLKEY, OBJECTPOINT, 87),
/* type of the file keeping your private SSL-key ("DER", "PEM", "ENG") */
CINIT(SSLKEYTYPE, OBJECTPOINT, 88),
/* crypto engine for the SSL-sub system */
CINIT(SSLENGINE, OBJECTPOINT, 89),
/* set the crypto engine for the SSL-sub system as default
the param has no meaning...
*/
CINIT(SSLENGINE_DEFAULT, LONG, 90),
/* Non-zero value means to use the global dns cache */
CINIT(DNS_USE_GLOBAL_CACHE, LONG, 91),
/* DNS cache timeout */
CINIT(DNS_CACHE_TIMEOUT, LONG, 92),
/* send linked-list of pre-transfer QUOTE commands (Wesley Laxton)*/
CINIT(PREQUOTE, OBJECTPOINT, 93),
CURLOPT_LASTENTRY /* the last unused */
} CURLoption;
@@ -480,6 +511,15 @@ enum {
CURL_HTTP_VERSION_LAST /* *ILLEGAL* http version */
};
enum {
CURL_SSLVERSION_DEFAULT,
CURL_SSLVERSION_TLSv1,
CURL_SSLVERSION_SSLv2,
CURL_SSLVERSION_SSLv3,
CURL_SSLVERSION_LAST /* never use, keep last */
};
typedef enum {
TIMECOND_NONE,
@@ -534,6 +574,7 @@ typedef enum {
CFINIT(ARRAY_START), /* below are the options allowed within a array */
CFINIT(FILE),
CFINIT(CONTENTTYPE),
CFINIT(CONTENTHEADER),
CFINIT(END),
CFINIT(ARRAY_END), /* up are the options allowed within a array */
@@ -575,8 +616,8 @@ CURLcode curl_global_init(long flags);
void curl_global_cleanup(void);
/* This is the version number */
#define LIBCURL_VERSION "7.9.1"
#define LIBCURL_VERSION_NUM 0x070901
#define LIBCURL_VERSION "7.9.5"
#define LIBCURL_VERSION_NUM 0x070905
/* linked-list structure for the CURLOPT_QUOTE option (and other) */
struct curl_slist {
@@ -626,7 +667,13 @@ typedef enum {
CURLINFO_CONTENT_LENGTH_DOWNLOAD = CURLINFO_DOUBLE + 15,
CURLINFO_CONTENT_LENGTH_UPLOAD = CURLINFO_DOUBLE + 16,
CURLINFO_LASTONE = 17
CURLINFO_STARTTRANSFER_TIME = CURLINFO_DOUBLE + 17,
CURLINFO_CONTENT_TYPE = CURLINFO_STRING + 18,
/* Fill in new entries here! */
CURLINFO_LASTONE = 19
} CURLINFO;
/* unfortunately, the easy.h include file needs the options and info stuff

View File

@@ -2,19 +2,20 @@
# $Id$
#
AUTOMAKE_OPTIONS = foreign no-dependencies
AUTOMAKE_OPTIONS = foreign nostdinc
EXTRA_DIST = getdate.y \
Makefile.b32 Makefile.b32.resp Makefile.m32 Makefile.vc6 \
libcurl.def dllinit.c curllib.dsp curllib.dsw
libcurl.def dllinit.c curllib.dsp curllib.dsw \
config-vms.h config-win32.h config-riscos.h config-mac.h \
config.h.in
lib_LTLIBRARIES = libcurl.la
# Some flags needed when trying to cause warnings ;-)
# CFLAGS = -DMALLOCDEBUG -g # -Wall #-pedantic
INCLUDES = -I$(top_srcdir)/include
# we use srcdir/include for the static global include files
# we use builddir/lib for the generated lib/config.h file to get found
# we use srcdir/lib for the lib-private header files
INCLUDES = -I$(top_srcdir)/include -I$(top_builddir)/lib -I$(top_srcdir)/lib
libcurl_la_LDFLAGS = -no-undefined -version-info 2:2:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
@@ -59,7 +60,9 @@ escape.c mprintf.c telnet.c \
escape.h getpass.c netrc.c telnet.h \
getinfo.c getinfo.h transfer.c strequal.c strequal.h easy.c \
security.h security.c krb4.c krb4.h memdebug.c memdebug.h inet_ntoa_r.h \
http_chunks.c http_chunks.h strtok.c strtok.h connect.c connect.h
http_chunks.c http_chunks.h strtok.c strtok.h connect.c connect.h \
llist.c llist.h hash.c hash.h multi.c multi.h
noinst_HEADERS = setup.h transfer.h

View File

@@ -23,7 +23,7 @@ INCDIRS = -I$(CURNTDIR);$(TOPDIR)/include/
# 'BCCDIR' has to be set up in your c:\autoexec.bat
# i.e. SET BCCDIR = c:\Borland\BCC55
# where c:\Borland\BCC55 is the compiler is installed
LINKLIB = $(BCCDIR)/lib/psdk/wsock32.lib
LINKLIB = $(BCCDIR)/lib/psdk/ws2_32.lib
LIBCURLLIB = libcurl.lib
.SUFFIXES: .c

View File

@@ -27,5 +27,5 @@
+version.obj &
+easy.obj &
+strequal.obj &
+strtok.obj
+strtok.obj &
+connect.obj

View File

@@ -35,14 +35,14 @@ libcurl_a_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c \
ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c \
telnet.h getinfo.c strequal.c strequal.h easy.c security.h \
security.c krb4.h krb4.c memdebug.h memdebug.c inet_ntoa_r.h http_chunks.h http_chunks.c \
strtok.c connect.c
strtok.c connect.c hash.c llist.c
libcurl_a_OBJECTS = file.o timeval.o base64.o hostip.o progress.o \
formdata.o cookie.o http.o sendf.o ftp.o url.o dict.o if2ip.o \
speedcheck.o getdate.o transfer.o ldap.o ssluse.o version.o \
getenv.o escape.o mprintf.o telnet.o getpass.o netrc.o getinfo.o \
strequal.o easy.o security.o krb4.o memdebug.o http_chunks.o \
strtok.o connect.o
strtok.o connect.o hash.o llist.o
LIBRARIES = $(libcurl_a_LIBRARIES)
SOURCES = $(libcurl_a_SOURCES)

View File

@@ -1,377 +1,226 @@
#############################################################
#
## Makefile for building libcurl.lib with MSVC6
## Use: nmake -f makefile.vc6 [release | release-ssl | debug]
## (default is release)
##
## Originally written by: Troy Engel <tengel@sonic.net>
## Updated by: Craig Davison <cd@securityfocus.com>
## Updated by: SM <sm@technologist.com>
# Makefile for building libcurl with MSVC6
#
# Usage: see usage message below
# Should be invoked from \lib directory
# Edit the paths and desired library name
# SSL path is only required if you intend compiling
# with SSL.
#
# This make file leaves the result either a .lib or .dll file
# in the \lib directory. It should be called from the \lib
# directory.
#
# An option would have been to allow the source directory to
# be specified, but I saw no requirement.
#
# Another option would have been to leave the .lib and .dll
# files in the "cfg" directory, but then the make file
# in \src would need to be changed.
#
##############################################################
# CHANGE LOG
# ------------------------------------------------------------
# 05.11.2001 John Lask Initial Release
# 02.05.2002 Miklos Nemeth OPENSSL_PATH environment; no need
# for OpenSSL libraries when creating a
# static libcurl.lib
#
#
##############################################################
PROGRAM_NAME = libcurl.lib
PROGRAM_NAME_DEBUG = libcurld.lib
#OPENSSL_PATH = ../../openssl-0.9.6b
LIB_NAME = libcurl
LIB_NAME_DEBUG = libcurld
!IFNDEF OPENSSL_PATH
OPENSSL_PATH = ../../openssl-0.9.6
!ENDIF
########################################################
#############################################################
## Nothing more to do below this line!
## Release
CCR = cl.exe /MD /O2 /D "NDEBUG"
LINKR = link.exe -lib /out:$(PROGRAM_NAME)
## Debug
CCD = cl.exe /MDd /Gm /ZI /Od /D "_DEBUG" /GZ
LINKD = link.exe -lib /out:$(PROGRAM_NAME_DEBUG)
## SSL Release
CCRS = cl.exe /MD /O2 /D "NDEBUG" /D "USE_SSLEAY" /I "$(OPENSSL_PATH)/inc32" /I "$(OPENSSL_PATH)/inc32/openssl"
LINKRS = link.exe -lib /out:$(PROGRAM_NAME) /LIBPATH:$(OPENSSL_PATH)/out32dll
CCNODBG = cl.exe /MD /O2 /D "NDEBUG"
CCDEBUG = cl.exe /MDd /Od /Gm /Zi /D "_DEBUG" /GZ
CFLAGSSSL = /D "USE_SSLEAY" /I "$(OPENSSL_PATH)/inc32" /I "$(OPENSSL_PATH)/inc32/openssl"
CFLAGS = /I "../include" /nologo /W3 /GX /D "WIN32" /D "VC6" /D "_MBCS" /D "_LIB" /YX /FD /c /D "MSDOS"
LFLAGS = /nologo
LINKLIBS = ws2_32.lib
LINKSLIBS = libeay32.lib ssleay32.lib RSAglue.lib
RELEASE_OBJS= \
base64r.obj \
cookier.obj \
transferr.obj \
escaper.obj \
formdatar.obj \
ftpr.obj \
httpr.obj \
http_chunksr.obj \
ldapr.obj \
dictr.obj \
telnetr.obj \
getdater.obj \
getenvr.obj \
getpassr.obj \
hostipr.obj \
if2ipr.obj \
mprintfr.obj \
netrcr.obj \
progressr.obj \
sendfr.obj \
speedcheckr.obj \
ssluser.obj \
timevalr.obj \
urlr.obj \
filer.obj \
getinfor.obj \
versionr.obj \
easyr.obj \
strequalr.obj \
strtokr.obj \
connectr.obj
LNKDLL = link.exe /DLL /def:libcurl.def
LNKLIB = link.exe -lib
LFLAGS = /nologo
LFLAGSSSL = /LIBPATH:$(OPENSSL_PATH)/out32dll
LINKLIBS = ws2_32.lib
SSLLIBS = libeay32.lib ssleay32.lib RSAglue.lib
CFGSET = FALSE
LFLAGSSSL=
SSLLIBS =
DEBUG_OBJS= \
base64d.obj \
cookied.obj \
transferd.obj \
escaped.obj \
formdatad.obj \
ftpd.obj \
httpd.obj \
http_chunksd.obj \
ldapd.obj \
dictd.obj \
telnetd.obj \
getdated.obj \
getenvd.obj \
getpassd.obj \
hostipd.obj \
if2ipd.obj \
mprintfd.obj \
netrcd.obj \
progressd.obj \
sendfd.obj \
speedcheckd.obj \
sslused.obj \
timevald.obj \
urld.obj \
filed.obj \
getinfod.obj \
versiond.obj \
easyd.obj \
strequald.obj \
strtokd.obj \
connectd.obj
######################
# release
RELEASE_SSL_OBJS= \
base64rs.obj \
cookiers.obj \
transferrs.obj \
escapers.obj \
formdatars.obj \
ftprs.obj \
httprs.obj \
http_chunksrs.obj \
ldaprs.obj \
dictrs.obj \
telnetrs.obj \
getdaters.obj \
getenvrs.obj \
getpassrs.obj \
hostiprs.obj \
if2iprs.obj \
mprintfrs.obj \
netrcrs.obj \
progressrs.obj \
sendfrs.obj \
speedcheckrs.obj \
sslusers.obj \
timevalrs.obj \
urlrs.obj \
filers.obj \
getinfors.obj \
versionrs.obj \
easyrs.obj \
strequalrs.obj \
strtokrs.obj \
connectrs.obj
!IF "$(CFG)" == "release"
TARGET =$(LIB_NAME).lib
DIROBJ =.\$(CFG)
LNK = $(LNKLIB) /out:$(TARGET)
CC = $(CCNODBG)
CFGSET = TRUE
!ENDIF
LINK_OBJS= \
base64.obj \
cookie.obj \
transfer.obj \
escape.obj \
formdata.obj \
ftp.obj \
http.obj \
http_chunks.obj \
ldap.obj \
dict.obj \
telnet.obj \
getdate.obj \
getenv.obj \
getpass.obj \
hostip.obj \
if2ip.obj \
mprintf.obj \
netrc.obj \
progress.obj \
sendf.obj \
speedcheck.obj \
ssluse.obj \
timeval.obj \
url.obj \
file.obj \
getinfo.obj \
version.obj \
easy.obj \
strequal.obj \
strtok.obj \
connect.obj
######################
# release-dll
all : release
!IF "$(CFG)" == "release-dll"
TARGET =$(LIB_NAME).dll
DIROBJ =.\$(CFG)
LNK = $(LNKDLL) /out:$(TARGET) /IMPLIB:"$(LIB_NAME).lib"
CC = $(CCNODBG)
CFGSET = TRUE
!ENDIF
release: $(RELEASE_OBJS)
$(LINKR) $(LFLAGS) $(LINKLIBS) $(LINK_OBJS)
######################
# release-ssl
debug: $(DEBUG_OBJS)
$(LINKD) $(LFLAGS) $(LINKLIBS) $(LINK_OBJS)
!IF "$(CFG)" == "release-ssl"
TARGET =$(LIB_NAME).lib
DIROBJ =.\$(CFG)
LNK = $(LNKLIB) $(LFLAGSSSL) /out:$(TARGET)
LINKLIBS = $(LINKLIBS) $(SSLLIBS)
CC = $(CCNODBG) $(CFLAGSSSL)
CFGSET = TRUE
!ENDIF
release-ssl: $(RELEASE_SSL_OBJS)
$(LINKRS) $(LFLAGS) $(LINKLIBS) $(LINKSLIBS) $(LINK_OBJS)
######################
# release-ssl-dll
## Release
base64r.obj: base64.c
$(CCR) $(CFLAGS) base64.c
cookier.obj: cookie.c
$(CCR) $(CFLAGS) cookie.c
transferr.obj: transfer.c
$(CCR) $(CFLAGS) transfer.c
escaper.obj: escape.c
$(CCR) $(CFLAGS) escape.c
formdatar.obj: formdata.c
$(CCR) $(CFLAGS) formdata.c
ftpr.obj: ftp.c
$(CCR) $(CFLAGS) ftp.c
httpr.obj: http.c
$(CCR) $(CFLAGS) http.c
http_chunksr.obj: http_chunks.c
$(CCR) $(CFLAGS) http_chunks.c
ldapr.obj: ldap.c
$(CCR) $(CFLAGS) ldap.c
dictr.obj: dict.c
$(CCR) $(CFLAGS) dict.c
telnetr.obj: telnet.c
$(CCR) $(CFLAGS) telnet.c
getdater.obj: getdate.c
$(CCR) $(CFLAGS) getdate.c
getenvr.obj: getenv.c
$(CCR) $(CFLAGS) getenv.c
getpassr.obj: getpass.c
$(CCR) $(CFLAGS) getpass.c
hostipr.obj: hostip.c
$(CCR) $(CFLAGS) hostip.c
if2ipr.obj: if2ip.c
$(CCR) $(CFLAGS) if2ip.c
mprintfr.obj: mprintf.c
$(CCR) $(CFLAGS) mprintf.c
netrcr.obj: netrc.c
$(CCR) $(CFLAGS) netrc.c
progressr.obj: progress.c
$(CCR) $(CFLAGS) progress.c
sendfr.obj: sendf.c
$(CCR) $(CFLAGS) sendf.c
speedcheckr.obj: speedcheck.c
$(CCR) $(CFLAGS) speedcheck.c
ssluser.obj: ssluse.c
$(CCR) $(CFLAGS) ssluse.c
timevalr.obj: timeval.c
$(CCR) $(CFLAGS) timeval.c
urlr.obj: url.c
$(CCR) $(CFLAGS) url.c
filer.obj: file.c
$(CCR) $(CFLAGS) file.c
getinfor.obj: getinfo.c
$(CCR) $(CFLAGS) getinfo.c
versionr.obj: version.c
$(CCR) $(CFLAGS) version.c
easyr.obj: easy.c
$(CCR) $(CFLAGS) easy.c
strequalr.obj: strequal.c
$(CCR) $(CFLAGS) strequal.c
strtokr.obj:strtok.c
$(CCR) $(CFLAGS) strtok.c
connectr.obj:connect.c
$(CCR) $(CFLAGS) connect.c
!IF "$(CFG)" == "release-ssl-dll"
TARGET =$(LIB_NAME).dll
DIROBJ =.\$(CFG)
LNK = $(LNKDLL) $(LFLAGSSSL) /out:$(TARGET) /IMPLIB:"$(LIB_NAME).lib"
LINKLIBS = $(LINKLIBS) $(SSLLIBS)
CC = $(CCNODBG) $(CFLAGSSSL)
CFGSET = TRUE
!ENDIF
## Debug
base64d.obj: base64.c
$(CCD) $(CFLAGS) base64.c
cookied.obj: cookie.c
$(CCD) $(CFLAGS) cookie.c
transferd.obj: transfer.c
$(CCD) $(CFLAGS) transfer.c
escaped.obj: escape.c
$(CCD) $(CFLAGS) escape.c
formdatad.obj: formdata.c
$(CCD) $(CFLAGS) formdata.c
ftpd.obj: ftp.c
$(CCD) $(CFLAGS) ftp.c
httpd.obj: http.c
$(CCD) $(CFLAGS) http.c
http_chunksd.obj: http_chunks.c
$(CCD) $(CFLAGS) http_chunks.c
ldapd.obj: ldap.c
$(CCD) $(CFLAGS) ldap.c
dictd.obj: dict.c
$(CCD) $(CFLAGS) dict.c
telnetd.obj: telnet.c
$(CCD) $(CFLAGS) telnet.c
getdated.obj: getdate.c
$(CCD) $(CFLAGS) getdate.c
getenvd.obj: getenv.c
$(CCD) $(CFLAGS) getenv.c
getpassd.obj: getpass.c
$(CCD) $(CFLAGS) getpass.c
hostipd.obj: hostip.c
$(CCD) $(CFLAGS) hostip.c
if2ipd.obj: if2ip.c
$(CCD) $(CFLAGS) if2ip.c
mprintfd.obj: mprintf.c
$(CCD) $(CFLAGS) mprintf.c
netrcd.obj: netrc.c
$(CCD) $(CFLAGS) netrc.c
progressd.obj: progress.c
$(CCD) $(CFLAGS) progress.c
sendfd.obj: sendf.c
$(CCD) $(CFLAGS) sendf.c
speedcheckd.obj: speedcheck.c
$(CCD) $(CFLAGS) speedcheck.c
sslused.obj: ssluse.c
$(CCD) $(CFLAGS) ssluse.c
timevald.obj: timeval.c
$(CCD) $(CFLAGS) timeval.c
urld.obj: url.c
$(CCD) $(CFLAGS) url.c
filed.obj: file.c
$(CCD) $(CFLAGS) file.c
getinfod.obj: getinfo.c
$(CCD) $(CFLAGS) getinfo.c
versiond.obj: version.c
$(CCD) $(CFLAGS) version.c
easyd.obj: easy.c
$(CCD) $(CFLAGS) easy.c
strequald.obj: strequal.c
$(CCD) $(CFLAGS) strequal.c
strtokd.obj:strtok.c
$(CCD) $(CFLAGS) strtok.c
connectd.obj:connect.c
$(CCR) $(CFLAGS) connect.c
## Release SSL
base64rs.obj: base64.c
$(CCRS) $(CFLAGS) base64.c
cookiers.obj: cookie.c
$(CCRS) $(CFLAGS) cookie.c
transferrs.obj: transfer.c
$(CCRS) $(CFLAGS) transfer.c
escapers.obj: escape.c
$(CCRS) $(CFLAGS) escape.c
formdatars.obj: formdata.c
$(CCRS) $(CFLAGS) formdata.c
ftprs.obj: ftp.c
$(CCRS) $(CFLAGS) ftp.c
httprs.obj: http.c
$(CCRS) $(CFLAGS) http.c
http_chunksrs.obj: http_chunks.c
$(CCRS) $(CFLAGS) http_chunks.c
ldaprs.obj: ldap.c
$(CCRS) $(CFLAGS) ldap.c
dictrs.obj: dict.c
$(CCRS) $(CFLAGS) dict.c
telnetrs.obj: telnet.c
$(CCRS) $(CFLAGS) telnet.c
getdaters.obj: getdate.c
$(CCRS) $(CFLAGS) getdate.c
getenvrs.obj: getenv.c
$(CCRS) $(CFLAGS) getenv.c
getpassrs.obj: getpass.c
$(CCRS) $(CFLAGS) getpass.c
hostiprs.obj: hostip.c
$(CCRS) $(CFLAGS) hostip.c
if2iprs.obj: if2ip.c
$(CCRS) $(CFLAGS) if2ip.c
mprintfrs.obj: mprintf.c
$(CCRS) $(CFLAGS) mprintf.c
netrcrs.obj: netrc.c
$(CCRS) $(CFLAGS) netrc.c
progressrs.obj: progress.c
$(CCRS) $(CFLAGS) progress.c
sendfrs.obj: sendf.c
$(CCRS) $(CFLAGS) sendf.c
speedcheckrs.obj: speedcheck.c
$(CCRS) $(CFLAGS) speedcheck.c
sslusers.obj: ssluse.c
$(CCRS) $(CFLAGS) ssluse.c
timevalrs.obj: timeval.c
$(CCRS) $(CFLAGS) timeval.c
urlrs.obj: url.c
$(CCRS) $(CFLAGS) url.c
filers.obj: file.c
$(CCRS) $(CFLAGS) file.c
getinfors.obj: getinfo.c
$(CCRS) $(CFLAGS) getinfo.c
versionrs.obj: version.c
$(CCRS) $(CFLAGS) version.c
easyrs.obj: easy.c
$(CCRS) $(CFLAGS) easy.c
strequalrs.obj: strequal.c
$(CCRS) $(CFLAGS) strequal.c
strtokrs.obj:strtok.c
$(CCRS) $(CFLAGS) strtok.c
connectrs.obj:connect.c
$(CCR) $(CFLAGS) connect.c
######################
# debug
!IF "$(CFG)" == "debug"
TARGET =$(LIB_NAME_DEBUG).lib
DIROBJ =.\$(CFG)
LNK = $(LNKLIB) /out:$(TARGET)
CC = $(CCDEBUG)
CFGSET = TRUE
!ENDIF
######################
# debug-dll
!IF "$(CFG)" == "debug-dll"
TARGET =$(LIB_NAME_DEBUG).dll
DIROBJ =.\$(CFG)
LNK = $(LNKDLL) /out:$(TARGET) /IMPLIB:"$(LIB_NAME_DEBUG).lib"
CC = $(CCDEBUG)
CFGSET = TRUE
!ENDIF
######################
# debug-ssl
#todo
!IF "$(CFG)" == "debug-ssl"
TARGET = $(LIB_NAME_DEBUG).lib
DIROBJ =.\$(CFG)
LNK = $(LNKLIB) $(LFLAGSSSL) /out:$(TARGET)
LINKLIBS = $(LINKLIBS) $(SSLLIBS)
CC = $(CCDEBUG) $(CFLAGSSSL)
CFGSET = TRUE
!ENDIF
######################
# debug-ssl-dll
!IF "$(CFG)" == "debug-ssl-dll"
TARGET =$(LIB_NAME_DEBUG).dll
DIROBJ =.\$(CFG)
LNK = $(LNKDLL) $(LFLAGSSSL) /out:$(TARGET) /IMPLIB:"$(LIB_NAME_DEBUG).lib"
LINKLIBS = $(LINKLIBS) $(SSLLIBS)
CC = $(CCDEBUG) $(CFLAGSSSL)
CFGSET = TRUE
!ENDIF
#######################
# Usage
#
!IF "$(CFGSET)" == "FALSE"
!MESSAGE Usage: nmake -f makefile.vc6 CFG=<config> <target>
!MESSAGE where <config> is one of:
!MESSAGE release - release static library
!MESSAGE release-dll - release dll
!MESSAGE release-ssl - release static library with ssl
!MESSAGE release-ssl-dll - release dll library with ssl
!MESSAGE debug - debug static library
!MESSAGE debug-dll - debug dll
!MESSAGE debug-ssl - debug static library with ssl
!MESSAGE debug-ssl-dll - debug dll library with ssl
!MESSAGE <target> can be left blank in which case all is assumed
!ERROR please choose a valid configuration "$(CFG)"
!ENDIF
#######################
#
X_OBJS= \
$(DIROBJ)\base64.obj \
$(DIROBJ)\cookie.obj \
$(DIROBJ)\transfer.obj \
$(DIROBJ)\escape.obj \
$(DIROBJ)\formdata.obj \
$(DIROBJ)\ftp.obj \
$(DIROBJ)\http.obj \
$(DIROBJ)\http_chunks.obj \
$(DIROBJ)\ldap.obj \
$(DIROBJ)\dict.obj \
$(DIROBJ)\telnet.obj \
$(DIROBJ)\getdate.obj \
$(DIROBJ)\getenv.obj \
$(DIROBJ)\getpass.obj \
$(DIROBJ)\hostip.obj \
$(DIROBJ)\if2ip.obj \
$(DIROBJ)\mprintf.obj \
$(DIROBJ)\netrc.obj \
$(DIROBJ)\progress.obj \
$(DIROBJ)\sendf.obj \
$(DIROBJ)\speedcheck.obj \
$(DIROBJ)\ssluse.obj \
$(DIROBJ)\timeval.obj \
$(DIROBJ)\url.obj \
$(DIROBJ)\file.obj \
$(DIROBJ)\getinfo.obj \
$(DIROBJ)\version.obj \
$(DIROBJ)\easy.obj \
$(DIROBJ)\strequal.obj \
$(DIROBJ)\strtok.obj \
$(DIROBJ)\connect.obj \
$(DIROBJ)\hash.obj \
$(DIROBJ)\llist.obj
all : $(TARGET)
$(TARGET): $(X_OBJS)
$(LNK) $(LFLAGS) $(LINKLIBS) $(X_OBJS)
$(X_OBJS): $(DIROBJ)
$(DIROBJ):
@if not exist "$(DIROBJ)" mkdir $(DIROBJ)
.SUFFIXES: .c .obj
{.\}.c{$(DIROBJ)\}.obj:
$(CC) $(CFLAGS) /Fo"$@" $<
clean:
-@erase *.obj
-@erase $(DIROBJ)\*.obj
-@erase vc60.idb
-@erase vc60.pch
distrib: clean
-@erase $(PROGRAM_NAME)

View File

@@ -42,7 +42,7 @@
static const char *telnetoptions[]=
{
"BINARY", "ECHO", "RCP", "SUPPRESS GO AHEAD",
"NAME", "STATUS" "TIMING MARK", "RCTE",
"NAME", "STATUS", "TIMING MARK", "RCTE",
"NAOL", "NAOP", "NAOCRD", "NAOHTS",
"NAOHTD", "NAOFFD", "NAOVTS", "NAOVTD",
"NAOLFD", "EXTEND ASCII", "LOGOUT", "BYTE MACRO",

45
lib/config-mac.h Normal file
View File

@@ -0,0 +1,45 @@
#define OS "mac"
#define HAVE_NETINET_IN_H 1
#define HAVE_SYS_SOCKET_H 1
#define HAVE_SYS_SELECT_H 1
#define HAVE_NETDB_H 1
#define HAVE_ARPA_INET_H 1
#define HAVE_UNISTD_H 1
#define HAVE_NET_IF_H 1
#define HAVE_SYS_TYPES_H 1
#define HAVE_GETTIMEOFDAY 1
#define HAVE_FCNTL_H 1
#define HAVE_SYS_STAT_H 1
#define HAVE_ALLOCA_H 1
#define HAVE_TIME_H 1
#define HAVE_STDLIB_H 1
#define HAVE_UTIME_H 1
#define TIME_WITH_SYS_TIME 1
#define HAVE_STRDUP 1
#define HAVE_UTIME 1
#define HAVE_INET_NTOA 1
#define HAVE_SETVBUF 1
#define HAVE_STRFTIME 1
#define HAVE_INET_ADDR 1
#define HAVE_MEMCPY 1
#define HAVE_SELECT 1
#define HAVE_SOCKET 1
//#define HAVE_STRICMP 1
#define HAVE_SIGACTION 1
#ifdef MACOS_SSL_SUPPORT
# define USE_SSLEAY 1
# define USE_OPENSSL 1
#endif
#define HAVE_RAND_STATUS 1
#define HAVE_RAND_EGD 1
#define HAVE_FIONBIO 1
#include <extra/stricmp.h>
#include <extra/strdup.h>

View File

@@ -221,22 +221,22 @@
#define HAVE_NETINET_IN_H 1
/* Define if you have the <openssl/crypto.h> header file. */
#undef HAVE_OPENSSL_CRYPTO_H
#define HAVE_OPENSSL_CRYPTO_H 1
/* Define if you have the <openssl/err.h> header file. */
#undef HAVE_OPENSSL_ERR_H
#define HAVE_OPENSSL_ERR_H 1
/* Define if you have the <openssl/pem.h> header file. */
#undef HAVE_OPENSSL_PEM_H
#define HAVE_OPENSSL_PEM_H 1
/* Define if you have the <openssl/rsa.h> header file. */
#undef HAVE_OPENSSL_RSA_H
#define HAVE_OPENSSL_RSA_H 1
/* Define if you have the <openssl/ssl.h> header file. */
#undef HAVE_OPENSSL_SSL_H
#define HAVE_OPENSSL_SSL_H 1
/* Define if you have the <openssl/x509.h> header file. */
#undef HAVE_OPENSSL_X509_H
#define HAVE_OPENSSL_X509_H 1
/* Define if you have the <pem.h> header file. */
#undef HAVE_PEM_H
@@ -296,7 +296,7 @@
#undef HAVE_X509_H
/* Define if you have the crypto library (-lcrypto). */
#undef HAVE_LIBCRYPTO
#define HAVE_LIBCRYPTO 1
/* Define if you have the dl library (-ldl). */
#undef HAVE_LIBDL
@@ -314,7 +314,7 @@
#define HAVE_LIBSOCKET 1
/* Define if you have the ssl library (-lssl). */
#undef HAVE_LIBSSL
#define HAVE_LIBSSL 1
/* Define if you have the ucb library (-lucb). */
#undef HAVE_LIBUCB
@@ -346,7 +346,7 @@
#undef HAVE_GETPASS
/* Define if you have a working OpenSSL installation */
#undef OPENSSL_ENABLED
#define OPENSSL_ENABLED 1
/* Define if you have the `dlopen' function. */
#undef HAVE_DLOPEN
@@ -365,3 +365,4 @@
#define HAVE_MEMORY_H 1
#define HAVE_FIONBIO 1

View File

@@ -42,7 +42,7 @@
#define SIZEOF_LONG_DOUBLE 16
/* The number of bytes in a long long. */
#define SIZEOF_LONG_LONG 8
/* #define SIZEOF_LONG_LONG 8 */
/* Define if you have the gethostbyaddr function. */
#define HAVE_GETHOSTBYADDR 1
@@ -179,6 +179,9 @@
/* Define if you have the RAND_screen function when using SSL */
#define HAVE_RAND_SCREEN 1
/* Define this to if in_addr_t is not an available typedefed type */
#define in_addr_t unsigned long
/*************************************************
* This section is for compiler specific defines.*
*************************************************/

View File

@@ -44,6 +44,14 @@
#ifdef HAVE_ARPA_INET_H
#include <arpa/inet.h>
#endif
#ifdef HAVE_STDLIB_H
#include <stdlib.h> /* required for free() prototype, without it, this crashes
on macos 68K */
#endif
#ifdef VMS
#include <in.h>
#include <inet.h>
#endif
#endif
#include <stdio.h>
@@ -61,6 +69,7 @@
#include <winsock.h>
#define EINPROGRESS WSAEINPROGRESS
#define EWOULDBLOCK WSAEWOULDBLOCK
#define EISCONN WSAEISCONN
#endif
#include "urldata.h"
@@ -188,7 +197,7 @@ static CURLcode bindlocal(struct connectdata *conn,
#ifdef HAVE_INET_NTOA
#ifndef INADDR_NONE
#define INADDR_NONE (unsigned long) ~0
#define INADDR_NONE (in_addr_t) ~0
#endif
struct SessionHandle *data = conn->data;
@@ -202,14 +211,14 @@ static CURLcode bindlocal(struct connectdata *conn,
char *hostdataptr=NULL;
size_t size;
char myhost[256] = "";
unsigned long in;
in_addr_t in;
if(Curl_if2ip(data->set.device, myhost, sizeof(myhost))) {
h = Curl_getaddrinfo(data, myhost, 0, &hostdataptr);
h = Curl_resolv(data, myhost, 0, &hostdataptr);
}
else {
if(strlen(data->set.device)>1) {
h = Curl_getaddrinfo(data, data->set.device, 0, &hostdataptr);
h = Curl_resolv(data, data->set.device, 0, &hostdataptr);
}
if(h) {
/* we know data->set.device is shorter than the myhost array */
@@ -224,14 +233,13 @@ static CURLcode bindlocal(struct connectdata *conn,
hostent_buf,
sizeof(hostent_buf));
*/
if(hostdataptr)
free(hostdataptr); /* allocated by Curl_getaddrinfo() */
return CURLE_HTTP_PORT_FAILED;
}
infof(data, "We bind local end to %s\n", myhost);
if ( (in=inet_addr(myhost)) != INADDR_NONE ) {
in=inet_addr(myhost);
if (INADDR_NONE != in) {
if ( h ) {
memset((char *)&sa, 0, sizeof(sa));
@@ -279,7 +287,7 @@ static CURLcode bindlocal(struct connectdata *conn,
failf(data, "Insufficient kernel memory was available: %d", errno);
break;
default:
failf(data, "errno %d\n", errno);
failf(data, "errno %d", errno);
break;
} /* end of switch(errno) */
@@ -298,9 +306,6 @@ static CURLcode bindlocal(struct connectdata *conn,
return CURLE_HTTP_PORT_FAILED;
}
if(hostdataptr)
free(hostdataptr); /* allocated by Curl_getaddrinfo() */
return CURLE_OK;
} /* end of device selection support */
@@ -309,9 +314,8 @@ static CURLcode bindlocal(struct connectdata *conn,
return CURLE_HTTP_PORT_FAILED;
}
#else /* end of ipv4-specific section */
#endif /* end of ipv4-specific section */
/* we only use socketerror() on IPv6-enabled machines */
static
int socketerror(int sockfd)
{
@@ -324,7 +328,6 @@ int socketerror(int sockfd)
return err;
}
#endif
/*
* TCP connect to the given host with timeout, proxy or remote doesn't matter.
@@ -361,8 +364,13 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
#endif
/* get the most strict timeout of the ones converted to milliseconds */
if(data->set.timeout &&
(data->set.timeout>data->set.connecttimeout))
if(data->set.timeout && data->set.connecttimeout) {
if (data->set.timeout < data->set.connecttimeout)
timeout_ms = data->set.timeout*1000;
else
timeout_ms = data->set.connecttimeout*1000;
}
else if(data->set.timeout)
timeout_ms = data->set.timeout*1000;
else
timeout_ms = data->set.connecttimeout*1000;
@@ -370,9 +378,11 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
/* subtract the passed time */
timeout_ms -= (long)has_passed;
if(timeout_ms < 0)
if(timeout_ms < 0) {
/* a precaution, no need to continue if time already is up */
failf(data, "Connection time-out");
return CURLE_OPERATION_TIMEOUTED;
}
}
#ifdef ENABLE_IPV6
@@ -449,8 +459,7 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
return CURLE_COULDNT_CONNECT;
}
/* now disable the non-blocking mode again */
Curl_nonblock(sockfd, FALSE);
/* leave the socket in non-blocking mode */
if(addr)
*addr = ai; /* the address we ended up connected to */
@@ -525,6 +534,16 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
}
}
if(0 == rc) {
int err = socketerror(sockfd);
if ((0 == err) || (EISCONN == err)) {
/* we are connected, awesome! */
break;
}
/* nope, not connected for real */
rc = -1;
}
if(0 != rc) {
/* get a new timeout for next attempt */
after = Curl_tvnow();
@@ -542,11 +561,11 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
/* no good connect was made */
sclose(sockfd);
*sockconn = -1;
failf(data, "Couldn't connect to host");
return CURLE_COULDNT_CONNECT;
}
/* now disable the non-blocking mode again */
Curl_nonblock(sockfd, FALSE);
/* leave the socket in non-blocking mode */
if(addr)
/* this is the address we've connected to */

View File

@@ -28,7 +28,7 @@ int Curl_nonblock(int socket, /* operate on this */
CURLcode Curl_connecthost(struct connectdata *conn,
Curl_addrinfo *host, /* connect to this */
long port, /* connect to this port number */
int port, /* connect to this port number */
int *sockconn, /* not set if error is returned */
Curl_ipconnect **addr /* the one we used */
); /* index we used */


@@ -5,7 +5,7 @@
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2001, Daniel Stenberg, <daniel@haxx.se>, et al.
* Copyright (C) 2002, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
@@ -127,22 +127,37 @@ Curl_cookie_add(struct CookieInfo *c,
if(httpheader) {
/* This line was read off a HTTP-header */
char *sep;
semiptr=strchr(lineptr, ';'); /* first, find a semicolon */
ptr = lineptr;
do {
/* we have a <what>=<this> pair or a 'secure' word here */
if(strchr(ptr, '=')) {
sep = strchr(ptr, '=');
if(sep && (!semiptr || (semiptr>sep)) ) {
/*
* There is a = sign and, if there is a semicolon too, we make sure
* that the semicolon comes _after_ the equal sign.
*/
name[0]=what[0]=0; /* init the buffers */
if(1 <= sscanf(ptr, "%" MAX_NAME_TXT "[^=]=%"
if(1 <= sscanf(ptr, "%" MAX_NAME_TXT "[^;=]=%"
MAX_COOKIE_LINE_TXT "[^;\r\n]",
name, what)) {
/* this is a legal <what>=<this> pair */
/* this is a <name>=<what> pair */
/* Strip off trailing whitespace from the 'what' */
int len=strlen(what);
while(len && isspace((int)what[len-1])) {
what[len-1]=0;
len--;
}
if(strequal("path", name)) {
co->path=strdup(what);
}
else if(strequal("domain", name)) {
co->domain=strdup(what);
co->field1= (what[0]=='.')?2:1;
}
else if(strequal("version", name)) {
co->version=strdup(what);
@@ -159,7 +174,7 @@ Curl_cookie_add(struct CookieInfo *c,
*/
co->maxage = strdup(what);
co->expires =
atoi((*co->maxage=='\"')?&co->maxage[1]:&co->maxage[0]);
atoi((*co->maxage=='\"')?&co->maxage[1]:&co->maxage[0]) + now;
}
else if(strequal("expires", name)) {
co->expirestr=strdup(what);
@@ -187,15 +202,37 @@ Curl_cookie_add(struct CookieInfo *c,
}
}
if(!semiptr)
continue; /* we already know there are no more cookies */
if(!semiptr || !*semiptr) {
/* we already know there are no more cookies */
semiptr = NULL;
continue;
}
ptr=semiptr+1;
while(ptr && *ptr && isspace((int)*ptr))
ptr++;
semiptr=strchr(ptr, ';'); /* now, find the next semicolon */
if(!semiptr && *ptr)
/* There are no more semicolons, but there's a final name=value pair
coming up */
semiptr=strchr(ptr, '\0');
} while(semiptr);
if(NULL == co->name) {
/* we didn't get a cookie name, this is an illegal line, bail out */
if(co->domain)
free(co->domain);
if(co->path)
free(co->path);
if(co->name)
free(co->name);
if(co->value)
free(co->value);
free(co);
return NULL;
}
if(NULL == co->domain)
/* no domain given in the header line, set the default now */
co->domain=domain?strdup(domain):NULL;
@@ -377,8 +414,15 @@ Curl_cookie_add(struct CookieInfo *c,
free(co); /* free the newly alloced memory */
co = clist; /* point to the previous struct instead */
}
/* We have replaced a cookie, now skip the rest of the list but
make sure the 'lastc' pointer is properly set */
do {
lastc = clist;
clist = clist->next;
} while(clist);
break;
}
}
lastc = clist;
clist = clist->next;
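
The cookie parser above now also accepts a final name=value pair that has no trailing semicolon, strips trailing whitespace from the value, and converts a relative Max-Age into an absolute expiry by adding the current time. A minimal sketch of just the Max-Age arithmetic (helper name invented):

#include <stdlib.h>
#include <time.h>

static time_t maxage_to_expires(const char *maxage, time_t now)
{
  if(*maxage == '\"')
    maxage++;                 /* skip a leading double quote, as above */
  return (time_t)atoi(maxage) + now;
}
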


@@ -38,7 +38,7 @@ struct Cookie {
char *value; /* name = <this> */
char *path; /* path = <this> */
char *domain; /* domain = <this> */
time_t expires; /* expires = <this> */
long expires; /* expires = <this> */
char *expirestr; /* the plain text version */
char field1; /* read from a cookie file, 1 => FALSE, 2=> TRUE */


@@ -151,6 +151,10 @@ SOURCE=.\getpass.c
# End Source File
# Begin Source File
SOURCE=.\hash.c
# End Source File
# Begin Source File
SOURCE=.\hostip.c
# End Source File
# Begin Source File
@@ -179,6 +183,10 @@ SOURCE=.\libcurl.def
# End Source File
# Begin Source File
SOURCE=.\llist.c
# End Source File
# Begin Source File
SOURCE=.\memdebug.c
# End Source File
# Begin Source File


@@ -121,7 +121,7 @@ CURLcode Curl_dict(struct connectdata *conn)
}
if ((word == NULL) || (*word == (char)0)) {
failf(data, "lookup word is missing\n");
failf(data, "lookup word is missing");
}
if ((database == NULL) || (*database == (char)0)) {
database = (char *)"!";
@@ -174,7 +174,7 @@ CURLcode Curl_dict(struct connectdata *conn)
}
if ((word == NULL) || (*word == (char)0)) {
failf(data, "lookup word is missing\n");
failf(data, "lookup word is missing");
}
if ((database == NULL) || (*database == (char)0)) {
database = (char *)"!";


@@ -81,6 +81,10 @@ DllMain (
}
return TRUE;
}
#else
#ifdef VMS
int VOID_VAR_DLLINIT;
#endif
#endif
/*


@@ -76,6 +76,7 @@
#include "ssluse.h"
#include "url.h"
#include "getinfo.h"
#include "hostip.h"
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
@@ -139,7 +140,7 @@ CURLcode curl_global_init(long flags)
{
if (initialized)
return CURLE_OK;
if (flags & CURL_GLOBAL_SSL)
Curl_SSL_init();
@@ -162,6 +163,8 @@ void curl_global_cleanup(void)
if (!initialized)
return;
Curl_global_host_cache_dtor();
if (init_flags & CURL_GLOBAL_SSL)
Curl_SSL_cleanup();
@@ -230,12 +233,24 @@ CURLcode curl_easy_perform(CURL *curl)
{
struct SessionHandle *data = (struct SessionHandle *)curl;
if (!data->hostcache) {
if (Curl_global_host_cache_use(data)) {
data->hostcache = Curl_global_host_cache_get();
}
else {
data->hostcache = curl_hash_alloc(7, Curl_freeaddrinfo);
}
}
return Curl_perform(data);
}
void curl_easy_cleanup(CURL *curl)
{
struct SessionHandle *data = (struct SessionHandle *)curl;
if (!Curl_global_host_cache_use(data)) {
curl_hash_destroy(data->hostcache);
}
Curl_close(data);
}
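
curl_easy_perform() above now lazily gives the handle a DNS cache: the shared global cache when that has been enabled, otherwise a private hash that curl_easy_cleanup() destroys again. Nothing changes for applications; a plain easy-interface program such as this hedged example (URL invented) picks up the cache automatically:

#include <curl/curl.h>

int main(void)
{
  CURL *curl;

  curl_global_init(CURL_GLOBAL_ALL);   /* also sets up SSL if built in */

  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_perform(curl);           /* first perform sets up the cache */
    curl_easy_cleanup(curl);           /* frees the private cache, if any */
  }

  curl_global_cleanup();
  return 0;
}
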


@@ -235,7 +235,6 @@ int FormParse(char *input,
if(2 != sscanf(type, "%127[^/]/%127[^,\n]",
major, minor)) {
fprintf(stderr, "Illegally formatted content-type field!\n");
free(contents);
return 2; /* illegal content-type syntax! */
}
@@ -371,7 +370,6 @@ int FormParse(char *input,
}
else {
fprintf(stderr, "Illegally formatted input field!\n");
free(contents);
return 1;
}
@@ -396,15 +394,16 @@ int curl_formparse(char *input,
* Returns newly allocated HttpPost on success and NULL if malloc failed.
*
***************************************************************************/
static struct HttpPost * AddHttpPost (char * name,
long namelength,
char * value,
long contentslength,
char *contenttype,
long flags,
struct HttpPost *parent_post,
struct HttpPost **httppost,
struct HttpPost **last_post)
static struct HttpPost * AddHttpPost(char * name,
long namelength,
char * value,
long contentslength,
char *contenttype,
long flags,
struct curl_slist* contentHeader,
struct HttpPost *parent_post,
struct HttpPost **httppost,
struct HttpPost **last_post)
{
struct HttpPost *post;
post = (struct HttpPost *)malloc(sizeof(struct HttpPost));
@@ -415,6 +414,7 @@ static struct HttpPost * AddHttpPost (char * name,
post->contents = value;
post->contentslength = contentslength;
post->contenttype = contenttype;
post->contentheader = contentHeader;
post->flags = flags;
}
else
@@ -823,8 +823,22 @@ FORMcode FormAdd(struct HttpPost **httppost,
}
break;
}
case CURLFORM_CONTENTHEADER:
{
struct curl_slist* list = NULL;
if( array_state )
list = (struct curl_slist*)array_value;
else
list = va_arg(params,struct curl_slist*);
if( current_form->contentheader )
return_value = FORMADD_OPTION_TWICE;
else
current_form->contentheader = list;
break;
}
default:
fprintf (stderr, "got unknown CURLFORM_OPTION: %d\n", option);
return_value = FORMADD_UNKNOWN_OPTION;
}
}
@@ -872,13 +886,16 @@ FORMcode FormAdd(struct HttpPost **httppost,
break;
}
}
if ( (post = AddHttpPost(form->name, form->namelength,
form->value, form->contentslength,
form->contenttype, form->flags,
post, httppost,
last_post)) == NULL) {
post = AddHttpPost(form->name, form->namelength,
form->value, form->contentslength,
form->contenttype, form->flags,
form->contentheader,
post, httppost,
last_post);
if(!post)
return_value = FORMADD_MEMORY;
}
if (form->contenttype)
prevtype = form->contenttype;
}
@@ -923,7 +940,7 @@ static int AddFormData(struct FormData **formp,
length = strlen((char *)line);
newform->line = (char *)malloc(length+1);
memcpy(newform->line, line, length+1);
memcpy(newform->line, line, length);
newform->length = length;
newform->line[length]=0; /* zero terminate for easier debugging */
@@ -1029,6 +1046,8 @@ struct FormData *Curl_getFormData(struct HttpPost *post,
int size =0;
char *boundary;
char *fileboundary=NULL;
struct curl_slist* curList;
if(!post)
return NULL; /* no input => no output! */
@@ -1046,8 +1065,11 @@ struct FormData *Curl_getFormData(struct HttpPost *post,
do {
if(size)
size += AddFormDataf(&form, "\r\n");
/* boundary */
size += AddFormDataf(&form, "\r\n--%s\r\n", boundary);
size += AddFormDataf(&form, "--%s\r\n", boundary);
size += AddFormData(&form,
"Content-Disposition: form-data; name=\"", 0);
@@ -1090,6 +1112,13 @@ struct FormData *Curl_getFormData(struct HttpPost *post,
file->contenttype);
}
curList = file->contentheader;
while( curList ) {
/* Process the additional headers specified for this form */
size += AddFormDataf( &form, "\r\n%s", curList->data );
curList = curList->next;
}
#if 0
/* The header Content-Transfer-Encoding: seems to confuse some receivers
* (like the built-in PHP engine). While I can't see any reason why it
@@ -1126,10 +1155,13 @@ struct FormData *Curl_getFormData(struct HttpPost *post,
}
if(fileread != stdin)
fclose(fileread);
} else {
size += AddFormData(&form, "[File wasn't found by client]", 0);
}
} else {
else {
/* File wasn't found, add a nothing field! */
size += AddFormData(&form, "", 0);
}
}
else {
/* include the contents we got */
size += AddFormData(&form, post->contents, post->contentslength);
}
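
The formdata changes above introduce a CURLFORM_CONTENTHEADER option so a curl_slist of extra headers can be attached to a single form part and is written out right after that part's Content-Disposition/Content-Type lines. A hedged usage sketch (field name and header are invented; the struct is this era's struct HttpPost):

#include <curl/curl.h>

static void build_form(struct HttpPost **post, struct HttpPost **last)
{
  struct curl_slist *headers = NULL;

  headers = curl_slist_append(headers, "Content-Language: en");

  curl_formadd(post, last,
               CURLFORM_COPYNAME, "logfile",
               CURLFORM_COPYCONTENTS, "no news is good news",
               CURLFORM_CONTENTHEADER, headers,
               CURLFORM_END);
  /* hand *post to CURLOPT_HTTPPOST; free the list after the transfer */
}
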


@@ -44,6 +44,7 @@ typedef struct FormInfo {
long contentslength;
char *contenttype;
long flags;
struct curl_slist* contentheader;
struct FormInfo *more;
} FormInfo;

lib/ftp.c (421 changed lines)

@@ -55,6 +55,7 @@
#include <netdb.h>
#endif
#ifdef VMS
#include <in.h>
#include <inet.h>
#endif
#endif
@@ -84,6 +85,10 @@
#include "ssluse.h"
#include "connect.h"
#if defined(HAVE_INET_NTOA_R) && !defined(HAVE_INET_NTOA_R_DECL)
#include "inet_ntoa_r.h"
#endif
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
@@ -192,11 +197,16 @@ int Curl_GetFTPResponse(char *buf,
char *line_start;
int code=0; /* default "error code" to return */
#define SELECT_OK 0
#define SELECT_ERROR 1
#define SELECT_TIMEOUT 2
#define SELECT_OK 0
#define SELECT_ERROR 1 /* select() problems */
#define SELECT_TIMEOUT 2 /* took too long */
#define SELECT_MEMORY 3 /* no available memory */
#define SELECT_CALLBACK 4 /* aborted by callback */
int error = SELECT_OK;
struct FTP *ftp = conn->proto.ftp;
if (ftpcode)
*ftpcode = 0; /* 0 for errors */
@@ -229,23 +239,48 @@ int Curl_GetFTPResponse(char *buf,
interval.tv_sec = timeout;
interval.tv_usec = 0;
switch (select (sockfd+1, &readfd, NULL, NULL, &interval)) {
case -1: /* select() error, stop reading */
error = SELECT_ERROR;
failf(data, "Transfer aborted due to select() error");
break;
case 0: /* timeout */
error = SELECT_TIMEOUT;
failf(data, "Transfer aborted due to timeout");
break;
default:
if(!ftp->cache)
switch (select (sockfd+1, &readfd, NULL, NULL, &interval)) {
case -1: /* select() error, stop reading */
error = SELECT_ERROR;
failf(data, "Transfer aborted due to select() error");
break;
case 0: /* timeout */
error = SELECT_TIMEOUT;
failf(data, "Transfer aborted due to timeout");
break;
default:
error = SELECT_OK;
break;
}
if(SELECT_OK == error) {
/*
* This code previously didn't use the kerberos sec_read() code
* to read, but when we use Curl_read() it may do so. Do confirm
* that this is still ok and then remove this comment!
*/
if(CURLE_OK != Curl_read(conn, sockfd, ptr, BUFSIZE-nread, &gotbytes))
keepon = FALSE;
if(ftp->cache) {
/* we had data in the "cache", copy that instead of doing an actual
read */
memcpy(ptr, ftp->cache, ftp->cache_size);
gotbytes = ftp->cache_size;
free(ftp->cache); /* free the cache */
ftp->cache = NULL; /* clear the pointer */
ftp->cache_size = 0; /* zero the size just in case */
}
else {
int res = Curl_read(conn, sockfd, ptr,
BUFSIZE-nread, &gotbytes);
if(res < 0)
/* EWOULDBLOCK */
continue; /* go looping again */
if(CURLE_OK != res)
keepon = FALSE;
}
if(!keepon)
;
else if(gotbytes <= 0) {
keepon = FALSE;
error = SELECT_ERROR;
@@ -263,6 +298,7 @@ int Curl_GetFTPResponse(char *buf,
if(*ptr=='\n') {
/* a newline is CRLF in ftp-talk, so the CR is ignored as
the line isn't really terminated until the LF comes */
CURLcode result;
/* output debug output if that is requested */
if(data->set.verbose) {
@@ -271,6 +307,16 @@ int Curl_GetFTPResponse(char *buf,
/* no need to output LF here, it is part of the data */
}
/*
* We pass all response-lines to the callback function registered
* for "headers". The response lines can be seen as a kind of
* headers.
*/
result = Curl_client_write(data, CLIENTWRITE_HEADER,
line_start, perline);
if(result)
return -SELECT_CALLBACK;
#define lastline(line) (isdigit((int)line[0]) && isdigit((int)line[1]) && \
isdigit((int)line[2]) && (' ' == line[3]))
@@ -279,26 +325,41 @@ int Curl_GetFTPResponse(char *buf,
* line to the start of the buffer and zero terminate,
* for old times sake (and krb4)! */
char *meow;
int i;
for(meow=line_start, i=0; meow<ptr; meow++, i++)
buf[i] = *meow;
meow[i]=0; /* zero terminate */
int n;
for(meow=line_start, n=0; meow<ptr; meow++, n++)
buf[n] = *meow;
*meow=0; /* zero terminate */
keepon=FALSE;
line_start = ptr+1; /* advance pointer */
i++; /* skip this before getting out */
break;
}
perline=0; /* line starts over here */
line_start = ptr+1;
}
}
}
break;
} /* switch */
if(!keepon && (i != gotbytes)) {
/* We found the end of the response lines, but we didn't parse the
full chunk of data we have read from the server. We therefore
need to store the rest of the data to be checked on the next
invoke as it may actually contain another end of response
already! Cleverly figured out by Eric Lavigne in December
2001. */
ftp->cache_size = gotbytes - i;
ftp->cache = (char *)malloc(ftp->cache_size);
if(ftp->cache)
memcpy(ftp->cache, line_start, ftp->cache_size);
else
return -SELECT_MEMORY; /**BANG**/
}
} /* there was data */
} /* if(no error) */
} /* while there's buffer left and loop is requested */
if(!error)
code = atoi(buf);
#if KRB4
#ifdef KRB4
/* handle the security-oriented responses 6xx ***/
/* FIXME: some errorchecking perhaps... ***/
switch(code) {
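
Curl_GetFTPResponse() above now remembers any bytes that follow the final "NNN " status line: they are stashed in ftp->cache and served up before the next Curl_read(), so a read chunk that already contains the start of another response is no longer lost. A hedged sketch of that save/restore idea with invented names:

#include <stdlib.h>
#include <string.h>

struct respcache {
  char *data;
  size_t size;
};

/* Save the unparsed tail of a read chunk for the next invocation. */
static int cache_leftover(struct respcache *c, const char *rest, size_t len)
{
  c->data = malloc(len);
  if(!c->data)
    return -1;                     /* out of memory */
  memcpy(c->data, rest, len);
  c->size = len;
  return 0;
}

/* Consume the saved bytes instead of reading from the control socket. */
static size_t use_leftover(struct respcache *c, char *buf)
{
  size_t n = c->size;
  memcpy(buf, c->data, n);
  free(c->data);
  c->data = NULL;
  c->size = 0;
  return n;
}
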
@@ -558,6 +619,7 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
ssize_t nread;
char *buf = data->state.buffer; /* this is our buffer */
int ftpcode;
CURLcode result=CURLE_OK;
if(data->set.upload) {
if((-1 != data->set.infilesize) && (data->set.infilesize != *ftp->bytecountp)) {
@@ -575,8 +637,12 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
else if(!conn->bits.resume_done &&
!data->set.no_body &&
(0 == *ftp->bytecountp)) {
/* We consider this an error, but since there's no true FTP error received
we still need to "read out" the server response too.
We don't want to leave a "waiting" server reply if we'll get told
to make a second request on this same connection! */
failf(data, "No data was received!");
return CURLE_FTP_COULDNT_RETR_FILE;
result = CURLE_FTP_COULDNT_RETR_FILE;
}
}
@@ -604,12 +670,10 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
conn->bits.resume_done = FALSE; /* clean this for next connection */
/* Send any post-transfer QUOTE strings? */
if(data->set.postquote) {
CURLcode result = ftp_sendquote(conn, data->set.postquote);
return result;
}
if(!result && data->set.postquote)
result = ftp_sendquote(conn, data->set.postquote);
return CURLE_OK;
return result;
}
/***********************************************************************
@@ -633,8 +697,7 @@ CURLcode ftp_sendquote(struct connectdata *conn, struct curl_slist *quote)
if (item->data) {
FTPSENDF(conn, "%s", item->data);
nread = Curl_GetFTPResponse(
conn->data->state.buffer, conn, &ftpcode);
nread = Curl_GetFTPResponse(conn->data->state.buffer, conn, &ftpcode);
if (nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -810,24 +873,35 @@ ftp_pasv_verbose(struct connectdata *conn,
#ifdef HAVE_INET_NTOA_R
char ntoa_buf[64];
#endif
char hostent_buf[8192];
/* The array size trick below is to make this a large chunk of memory
suitably 8-byte aligned on 64-bit platforms. This was thoughtfully
suggested by Philip Gladstone. */
long bigbuf[9000 / sizeof(long)];
#if defined(HAVE_INET_ADDR)
unsigned long address;
in_addr_t address;
# if defined(HAVE_GETHOSTBYADDR_R)
int h_errnop;
# endif
char *hostent_buf = (char *)bigbuf; /* get a char * to the buffer */
address = inet_addr(newhost);
# ifdef HAVE_GETHOSTBYADDR_R
# ifdef HAVE_GETHOSTBYADDR_R_5
/* AIX, Digital Unix style:
/* AIX, Digital Unix (OSF1, Tru64) style:
extern int gethostbyaddr_r(char *addr, size_t len, int type,
struct hostent *htent, struct hostent_data *ht_data); */
/* Fred Noz helped me try this out, now it at least compiles! */
/* Bjorn Reese (November 28 2001):
The Tru64 man page on gethostbyaddr_r() says that
the hostent struct must be filled with zeroes before the call to
gethostbyaddr_r(). */
memset(hostent_buf, 0, sizeof(struct hostent));
if(gethostbyaddr_r((char *) &address,
sizeof(address), AF_INET,
(struct hostent *)hostent_buf,
@@ -838,7 +912,7 @@ ftp_pasv_verbose(struct connectdata *conn,
# ifdef HAVE_GETHOSTBYADDR_R_7
/* Solaris and IRIX */
answer = gethostbyaddr_r((char *) &address, sizeof(address), AF_INET,
(struct hostent *)hostent_buf,
(struct hostent *)bigbuf,
hostent_buf + sizeof(*answer),
sizeof(hostent_buf) - sizeof(*answer),
&h_errnop);
@@ -1125,30 +1199,30 @@ CURLcode ftp_use_port(struct connectdata *conn)
struct sockaddr_in sa;
struct hostent *h=NULL;
char *hostdataptr=NULL;
size_t size;
unsigned short porttouse;
char myhost[256] = "";
if(data->set.ftpport) {
if(Curl_if2ip(data->set.ftpport, myhost, sizeof(myhost))) {
h = Curl_getaddrinfo(data, myhost, 0, &hostdataptr);
h = Curl_resolv(data, myhost, 0, &hostdataptr);
}
else {
if(strlen(data->set.ftpport)>1)
h = Curl_getaddrinfo(data, data->set.ftpport, 0, &hostdataptr);
int len = strlen(data->set.ftpport);
if(len>1)
h = Curl_resolv(data, data->set.ftpport, 0, &hostdataptr);
if(h)
strcpy(myhost, data->set.ftpport); /* buffer overflow risk */
}
}
if(! *myhost) {
h=Curl_getaddrinfo(data,
getmyhost(myhost, sizeof(myhost)),
0, &hostdataptr);
char *tmp_host = getmyhost(myhost, sizeof(myhost));
h=Curl_resolv(data, tmp_host, 0, &hostdataptr);
}
infof(data, "We connect from %s\n", myhost);
if ( h ) {
if( (portsock = socket(AF_INET, SOCK_STREAM, 0)) >= 0 ) {
int size;
/* we set the secondary socket variable to this for now, it
is only so that the cleanup function will close it in case
@@ -1167,10 +1241,10 @@ CURLcode ftp_use_port(struct connectdata *conn)
if(bind(portsock, (struct sockaddr *)&sa, size) >= 0) {
/* we succeeded to bind */
struct sockaddr_in add;
size = sizeof(add);
socklen_t socksize = sizeof(add);
if(getsockname(portsock, (struct sockaddr *) &add,
(socklen_t *)&size)<0) {
&socksize)<0) {
failf(data, "getsockname() failed");
return CURLE_FTP_PORT_FAILED;
}
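
The hunk above passes a real socklen_t to getsockname() so the code can learn which ephemeral port the kernel bound for the PORT data socket. A minimal sketch of that query (helper name invented):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* After bind()ing to port 0, return the port the kernel picked. */
static int bound_port(int sock)
{
  struct sockaddr_in add;
  socklen_t socksize = sizeof(add);

  if(getsockname(sock, (struct sockaddr *)&add, &socksize) < 0)
    return -1;
  return (int)ntohs(add.sin_port);
}
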
@@ -1193,9 +1267,6 @@ CURLcode ftp_use_port(struct connectdata *conn)
free(hostdataptr);
return CURLE_FTP_PORT_FAILED;
}
if(hostdataptr)
/* free the memory used for name lookup */
Curl_freeaddrinfo(hostdataptr);
}
else {
failf(data, "could't find my own IP address (%s)", myhost);
@@ -1256,41 +1327,64 @@ CURLcode ftp_use_pasv(struct connectdata *conn)
char *buf = data->state.buffer; /* this is our buffer */
int ftpcode; /* receive FTP response codes in this */
CURLcode result;
Curl_addrinfo *addr=NULL;
Curl_ipconnect *conninfo;
/*
Here's the executive summary on what to do:
PASV is RFC959, expect:
227 Entering Passive Mode (a1,a2,a3,a4,p1,p2)
LPSV is RFC1639, expect:
228 Entering Long Passive Mode (4,4,a1,a2,a3,a4,2,p1,p2)
EPSV is RFC2428, expect:
229 Entering Extended Passive Mode (|||port|)
*/
#if 1
const char *mode[] = { "EPSV", "PASV", NULL };
int results[] = { 229, 227, 0 };
#else
#if 0
/* no support for IPv6 passive mode yet */
char *mode[] = { "EPSV", "LPSV", "PASV", NULL };
int results[] = { 229, 228, 227, 0 };
#else
const char *mode[] = { "PASV", NULL };
int results[] = { 227, 0 };
#endif
#endif
int modeoff;
unsigned short connectport; /* the local port connect() should use! */
unsigned short newport; /* remote port, not necessarily the local one */
char *hostdataptr=NULL;
for (modeoff = 0; mode[modeoff]; modeoff++) {
FTPSENDF(conn, mode[modeoff], "");
/* newhost must be able to hold a full IP-style address in ASCII, which
in the IPv6 case means 5*8-1 = 39 letters */
char newhost[48];
char *newhostp=NULL;
for (modeoff = (data->set.ftp_use_epsv?0:1);
mode[modeoff]; modeoff++) {
result = Curl_ftpsendf(conn, mode[modeoff]);
if(result)
return result;
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if (ftpcode == results[modeoff])
break;
if (ftpcode == results[modeoff])
break;
}
if (!mode[modeoff]) {
failf(data, "Odd return code after PASV");
return CURLE_FTP_WEIRD_PASV_REPLY;
}
else if (strcmp(mode[modeoff], "PASV") == 0) {
else if (227 == results[modeoff]) {
int ip[4];
int port[2];
unsigned short newport; /* remote port, not necessarily the local one */
unsigned short connectport; /* the local port connect() should use! */
char newhost[32];
Curl_addrinfo *addr;
char *hostdataptr=NULL;
Curl_ipconnect *conninfo;
char *str=buf;
/*
@@ -1318,55 +1412,82 @@ CURLcode ftp_use_pasv(struct connectdata *conn)
}
sprintf(newhost, "%d.%d.%d.%d", ip[0], ip[1], ip[2], ip[3]);
newhostp = newhost;
newport = (port[0]<<8) + port[1];
if(data->change.proxy) {
/*
* This is a tunnel through a http proxy and we need to connect to the
* proxy again here. We already have the name info for it since the
* previous lookup.
*/
addr = conn->hostaddr;
connectport =
(unsigned short)conn->port; /* we connect to the proxy's port */
}
else {
/* normal, direct, ftp connection */
addr = Curl_getaddrinfo(data, newhost, newport, &hostdataptr);
if(!addr) {
failf(data, "Can't resolve new host %s", newhost);
return CURLE_FTP_CANT_GET_HOST;
}
connectport = newport; /* we connect to the remote port */
}
result = Curl_connecthost(conn,
addr,
connectport,
&conn->secondarysocket,
&conninfo);
if((CURLE_OK == result) &&
data->set.verbose)
/* this just dumps information about this second connection */
ftp_pasv_verbose(conn, conninfo, newhost, connectport);
if(hostdataptr)
Curl_freeaddrinfo(hostdataptr);
}
#if 1
else if (229 == results[modeoff]) {
char *ptr = strchr(buf, '(');
if(ptr) {
unsigned int num;
char separator[4];
ptr++;
if(5 == sscanf(ptr, "%c%c%c%u%c",
&separator[0],
&separator[1],
&separator[2],
&num,
&separator[3])) {
/* the four separators should be identical */
newport = num;
if(CURLE_OK != result)
return result;
if (data->set.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
result = Curl_ConnectHTTPProxyTunnel(conn, conn->secondarysocket,
newhost, newport);
if(CURLE_OK != result)
return result;
/* we should use the same host we already are connected to */
newhostp = conn->name;
}
else
ptr=NULL;
}
if(!ptr) {
failf(data, "Weirdly formatted EPSV reply");
return CURLE_FTP_WEIRD_PASV_REPLY;
}
}
#endif
else
return CURLE_FTP_CANT_RECONNECT;
if(data->change.proxy) {
/*
* This is a tunnel through a http proxy and we need to connect to the
* proxy again here. We already have the name info for it since the
* previous lookup.
*/
addr = conn->hostaddr;
connectport =
(unsigned short)conn->port; /* we connect to the proxy's port */
}
else {
/* normal, direct, ftp connection */
addr = Curl_resolv(data, newhostp, newport, &hostdataptr);
if(!addr) {
failf(data, "Can't resolve new host %s", newhost);
return CURLE_FTP_CANT_GET_HOST;
}
connectport = newport; /* we connect to the remote port */
}
result = Curl_connecthost(conn,
addr,
connectport,
&conn->secondarysocket,
&conninfo);
if((CURLE_OK == result) &&
data->set.verbose)
/* this just dumps information about this second connection */
ftp_pasv_verbose(conn, conninfo, newhost, connectport);
if(CURLE_OK != result)
return result;
if (data->set.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
result = Curl_ConnectHTTPProxyTunnel(conn, conn->secondarysocket,
newhost, newport);
if(CURLE_OK != result)
return result;
}
return CURLE_OK;
}
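
The passive-mode rework above tries EPSV (RFC 2428) before PASV whenever data->set.ftp_use_epsv allows it, and pulls the port out of a '229 Entering Extended Passive Mode (|||port|)' reply with a five-item sscanf(). A hedged sketch of just that reply parsing (the separator-equality check is added here for illustration):

#include <stdio.h>
#include <string.h>

/* Returns the data port from an EPSV 229 reply, or -1 if malformed. */
static int parse_epsv_port(const char *reply)
{
  const char *ptr = strchr(reply, '(');
  char sep[4];
  unsigned int num;

  if(ptr &&
     (5 == sscanf(ptr + 1, "%c%c%c%u%c",
                  &sep[0], &sep[1], &sep[2], &num, &sep[3])) &&
     (sep[0] == sep[1]) && (sep[1] == sep[2]) && (sep[2] == sep[3]))
    return (int)num;

  return -1;                        /* weirdly formatted EPSV reply */
}
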
@@ -1420,9 +1541,10 @@ CURLcode ftp_perform(struct connectdata *conn)
return result;
}
/* If we have selected NOBODY, it means that we only want file information.
Which in FTP can't be much more than the file size! */
if(data->set.no_body) {
/* If we have selected NOBODY and HEADER, it means that we only want file
information, which in FTP can't be much more than the file size and
date. */
if(data->set.no_body && data->set.include_header) {
/* The SIZE command is _not_ RFC 959 specified, and therefore many servers
may not support it! It is however the only way we have to get a file's
size! */
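
With the condition above, requesting 'no body' together with 'include headers' makes the FTP code issue SIZE (and use the file date further down) and report the outcome as response headers such as Last-Modified. A hedged application-side example, roughly what 'curl -I ftp://...' does (URL handling left to the caller):

#include <curl/curl.h>

static void ftp_file_info(const char *url)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);  /* skip the transfer */
    curl_easy_setopt(curl, CURLOPT_HEADER, 1L);  /* but show the headers */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
}
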
@@ -1454,7 +1576,7 @@ CURLcode ftp_perform(struct connectdata *conn)
struct tm buffer;
tm = (struct tm *)localtime_r(&data->info.filetime, &buffer);
#else
tm = localtime(&data->info.filetime);
tm = localtime((unsigned long *)&data->info.filetime);
#endif
/* format: "Tue, 15 Nov 1994 12:45:26 GMT" */
strftime(buf, BUFSIZE-1, "Last-Modified: %a, %d %b %Y %H:%M:%S %Z\r\n",
@@ -1468,20 +1590,27 @@ CURLcode ftp_perform(struct connectdata *conn)
return CURLE_OK;
}
if(data->set.no_body)
/* don't transfer the data */
;
/* Get us a second connection up and connected */
if(data->set.ftp_use_port)
else if(data->set.ftp_use_port) {
/* We have chosen to use the PORT command */
result = ftp_use_port(conn);
else
if(CURLE_OK == result)
/* we have the data connection ready */
infof(data, "Connected the data stream with PORT!\n");
}
else {
/* We have chosen (this is default) to use the PASV command */
result = ftp_use_pasv(conn);
if(CURLE_OK == result)
infof(data, "Connected the data stream with PASV!\n");
}
if(result)
return result;
/* we have the data connection ready */
infof(data, "Connected the data stream!\n");
if(data->set.upload) {
/* Set type to binary (unless specified ASCII) */
@@ -1489,6 +1618,12 @@ CURLcode ftp_perform(struct connectdata *conn)
if(result)
return result;
/* Send any PREQUOTE strings after transfer type is set? (Wesley Laxton)*/
if(data->set.prequote) {
if ((result = ftp_sendquote(conn, data->set.prequote)) != CURLE_OK)
return result;
}
if(conn->resume_from) {
/* we're about to continue the uploading of a file */
/* 1. get already existing file's size. We use the SIZE
@@ -1506,11 +1641,13 @@ CURLcode ftp_perform(struct connectdata *conn)
if(conn->resume_from < 0 ) {
/* we could've got a specified offset from the command line,
but now we know we didn't */
ssize_t gottensize;
if(CURLE_OK != ftp_getsize(conn, ftp->file, &conn->resume_from)) {
if(CURLE_OK != ftp_getsize(conn, ftp->file, &gottensize)) {
failf(data, "Couldn't get remote file size");
return CURLE_FTP_COULDNT_GET_SIZE;
}
conn->resume_from = gottensize;
}
if(conn->resume_from) {
@@ -1535,7 +1672,7 @@ CURLcode ftp_perform(struct connectdata *conn)
passed += actuallyread;
if(actuallyread != readthisamountnow) {
failf(data, "Could only read %d bytes from the input\n", passed);
failf(data, "Could only read %d bytes from the input", passed);
return CURLE_FTP_COULDNT_USE_REST;
}
}
@@ -1602,7 +1739,7 @@ CURLcode ftp_perform(struct connectdata *conn)
return result;
}
else {
else if(!data->set.no_body) {
/* Retrieve file or directory */
bool dirlist=FALSE;
long downloadsize=-1;
@@ -1665,22 +1802,38 @@ CURLcode ftp_perform(struct connectdata *conn)
(data->set.ftp_list_only?"NLST":"LIST"));
}
else {
ssize_t foundsize;
/* Set type to binary (unless specified ASCII) */
result = ftp_transfertype(conn, data->set.ftp_ascii);
if(result)
return result;
/* Send any PREQUOTE strings after transfer type is set? (Wesley Laxton)*/
if(data->set.prequote) {
if ((result = ftp_sendquote(conn, data->set.prequote)) != CURLE_OK)
return result;
}
/* Attempt to get the size, it'll be useful in some cases: for resumed
downloads and when talking to servers that don't give away the size
in the RETR response line. */
result = ftp_getsize(conn, ftp->file, &foundsize);
if(CURLE_OK == result)
downloadsize = foundsize;
if(conn->resume_from) {
/* Daniel: (August 4, 1999)
*
* We start with trying to use the SIZE command to figure out the size
* of the file we're gonna get. If we can get the size, this is by far
* the best way to know if we're trying to resume beyond the EOF. */
int foundsize=-1;
result = ftp_getsize(conn, ftp->file, &foundsize);
* the best way to know if we're trying to resume beyond the EOF.
*
* Daniel, November 28, 2001. We *always* get the size on downloads
* now, so it is done before this even when not doing resumes. I saved
* the comment above for nostalgic reasons! ;-)
*/
if(CURLE_OK != result) {
infof(data, "ftp server doesn't support SIZE\n");
/* We couldn't get the size and therefore we can't know if there
@@ -1918,9 +2071,11 @@ CURLcode Curl_ftp(struct connectdata *conn)
CURLcode Curl_ftpsendf(struct connectdata *conn,
const char *fmt, ...)
{
size_t bytes_written;
ssize_t bytes_written;
char s[256];
size_t write_len;
ssize_t write_len;
char *sptr=s;
CURLcode res = CURLE_OK;
va_list ap;
va_start(ap, fmt);
@@ -1934,9 +2089,23 @@ CURLcode Curl_ftpsendf(struct connectdata *conn,
bytes_written=0;
write_len = strlen(s);
Curl_write(conn, conn->firstsocket, s, write_len, &bytes_written);
return (bytes_written==write_len)?CURLE_OK:CURLE_WRITE_ERROR;
do {
res = Curl_write(conn, conn->firstsocket, sptr, write_len,
&bytes_written);
if(CURLE_OK != res)
break;
if(bytes_written != write_len) {
write_len -= bytes_written;
sptr += bytes_written;
}
else
break;
} while(1);
return res;
}
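
Curl_ftpsendf() above now loops on Curl_write() until the whole command has gone out (or a real error occurs) instead of treating a short write as failure, and it returns a CURLcode. The same retry pattern over a plain write(), as a hedged sketch:

#include <unistd.h>
#include <errno.h>

/* Keep writing until every byte is sent or write() reports an error. */
static int write_all(int fd, const char *buf, size_t len)
{
  while(len) {
    ssize_t n = write(fd, buf, len);
    if(n < 0) {
      if(errno == EINTR)
        continue;                  /* interrupted, just retry */
      return -1;
    }
    buf += n;
    len -= (size_t)n;
  }
  return 0;
}
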
/***********************************************************************
@@ -1954,6 +2123,8 @@ CURLcode Curl_ftp_disconnect(struct connectdata *conn)
if(ftp) {
if(ftp->entrypath)
free(ftp->entrypath);
if(ftp->cache)
free(ftp->cache);
}
return CURLE_OK;
}
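
Among the ftp.c changes above, the new PREQUOTE handling sends the strings in data->set.prequote after the transfer type has been set but before the actual RETR/STOR. On the application side that maps to CURLOPT_PREQUOTE taking a curl_slist; a hedged usage sketch (the SITE command is only an example):

#include <curl/curl.h>

static void add_prequote(CURL *curl)
{
  struct curl_slist *cmds = NULL;

  cmds = curl_slist_append(cmds, "SITE UMASK 022");
  curl_easy_setopt(curl, CURLOPT_PREQUOTE, cmds);
  /* free with curl_slist_free_all(cmds) once the transfer is done */
}
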


@@ -28,7 +28,7 @@ CURLcode Curl_ftp_done(struct connectdata *conn);
CURLcode Curl_ftp_connect(struct connectdata *conn);
CURLcode Curl_ftp_disconnect(struct connectdata *conn);
size_t Curl_ftpsendf(struct connectdata *, const char *fmt, ...);
CURLcode Curl_ftpsendf(struct connectdata *, const char *fmt, ...);
/* The kerberos stuff needs this: */
int Curl_GetFTPResponse(char *buf, struct connectdata *conn,


@@ -34,8 +34,6 @@
#include "setup.h"
#ifdef HAVE_CONFIG_H
# include "config.h"
# ifdef HAVE_ALLOCA_H
# include <alloca.h>
# endif
@@ -43,6 +41,10 @@
# ifdef HAVE_TIME_H
# include <time.h>
# endif
#ifndef YYDEBUG
/* to satisfy gcc -Wundef, we set this to 0 */
#define YYDEBUG 0
#endif
/* Since the code of getdate.y is not included in the Emacs executable
@@ -192,38 +194,40 @@ typedef enum _MERIDIAN {
MERam, MERpm, MER24
} MERIDIAN;
/* parse results and input string */
typedef struct _CONTEXT {
const char *yyInput;
int yyDayOrdinal;
int yyDayNumber;
int yyHaveDate;
int yyHaveDay;
int yyHaveRel;
int yyHaveTime;
int yyHaveZone;
int yyTimezone;
int yyDay;
int yyHour;
int yyMinutes;
int yyMonth;
int yySeconds;
int yyYear;
MERIDIAN yyMeridian;
int yyRelDay;
int yyRelHour;
int yyRelMinutes;
int yyRelMonth;
int yyRelSeconds;
int yyRelYear;
} CONTEXT;
/*
** Global variables. We could get rid of most of these by using a good
** union as the yacc stack. (This routine was originally written before
** yacc had the %union construct.) Maybe someday; right now we only use
** the %union very rarely.
/* enable use of extra argument to yyparse and yylex which can be used to pass
** in a user defined value (CONTEXT struct in our case)
*/
static const char *yyInput;
static int yyDayOrdinal;
static int yyDayNumber;
static int yyHaveDate;
static int yyHaveDay;
static int yyHaveRel;
static int yyHaveTime;
static int yyHaveZone;
static int yyTimezone;
static int yyDay;
static int yyHour;
static int yyMinutes;
static int yyMonth;
static int yySeconds;
static int yyYear;
static MERIDIAN yyMeridian;
static int yyRelDay;
static int yyRelHour;
static int yyRelMinutes;
static int yyRelMonth;
static int yyRelSeconds;
static int yyRelYear;
#define YYPARSE_PARAM cookie
#define YYLEX_PARAM cookie
#define context ((CONTEXT *) cookie)
#line 206 "getdate.y"
#line 215 "getdate.y"
typedef union {
int Number;
enum _MERIDIAN Meridian;
@@ -306,11 +310,11 @@ static const short yyrhs[] = { -1,
#if YYDEBUG != 0
static const short yyrline[] = { 0,
222, 223, 226, 229, 232, 235, 238, 241, 244, 250,
256, 265, 271, 283, 286, 289, 295, 299, 303, 309,
313, 331, 337, 343, 347, 352, 356, 363, 371, 374,
377, 380, 383, 386, 389, 392, 395, 398, 401, 404,
407, 410, 413, 416, 419, 422, 425, 430, 463, 467
231, 232, 235, 238, 241, 244, 247, 250, 253, 259,
265, 274, 280, 292, 295, 298, 304, 308, 312, 318,
322, 340, 346, 352, 356, 361, 365, 372, 380, 383,
386, 389, 392, 395, 398, 401, 404, 407, 410, 413,
416, 419, 422, 425, 428, 431, 434, 439, 473, 477
};
#endif
@@ -390,6 +394,8 @@ static const short yycheck[] = { 0,
11, 15, 13, 14, 16, 19, 17, 16, 21, 0,
56
};
#define YYPURE 1
/* -*-C-*- Note some compilers choke on comments on `#line' lines. */
#line 3 "/usr/local/share/bison.simple"
/* This file comes from bison-1.28. */
@@ -934,135 +940,135 @@ yyreduce:
switch (yyn) {
case 3:
#line 226 "getdate.y"
#line 235 "getdate.y"
{
yyHaveTime++;
context->yyHaveTime++;
;
break;}
case 4:
#line 229 "getdate.y"
#line 238 "getdate.y"
{
yyHaveZone++;
context->yyHaveZone++;
;
break;}
case 5:
#line 232 "getdate.y"
#line 241 "getdate.y"
{
yyHaveDate++;
context->yyHaveDate++;
;
break;}
case 6:
#line 235 "getdate.y"
#line 244 "getdate.y"
{
yyHaveDay++;
context->yyHaveDay++;
;
break;}
case 7:
#line 238 "getdate.y"
#line 247 "getdate.y"
{
yyHaveRel++;
context->yyHaveRel++;
;
break;}
case 9:
#line 244 "getdate.y"
#line 253 "getdate.y"
{
yyHour = yyvsp[-1].Number;
yyMinutes = 0;
yySeconds = 0;
yyMeridian = yyvsp[0].Meridian;
context->yyHour = yyvsp[-1].Number;
context->yyMinutes = 0;
context->yySeconds = 0;
context->yyMeridian = yyvsp[0].Meridian;
;
break;}
case 10:
#line 250 "getdate.y"
#line 259 "getdate.y"
{
yyHour = yyvsp[-3].Number;
yyMinutes = yyvsp[-1].Number;
yySeconds = 0;
yyMeridian = yyvsp[0].Meridian;
context->yyHour = yyvsp[-3].Number;
context->yyMinutes = yyvsp[-1].Number;
context->yySeconds = 0;
context->yyMeridian = yyvsp[0].Meridian;
;
break;}
case 11:
#line 256 "getdate.y"
#line 265 "getdate.y"
{
yyHour = yyvsp[-3].Number;
yyMinutes = yyvsp[-1].Number;
yyMeridian = MER24;
yyHaveZone++;
yyTimezone = (yyvsp[0].Number < 0
? -yyvsp[0].Number % 100 + (-yyvsp[0].Number / 100) * 60
: - (yyvsp[0].Number % 100 + (yyvsp[0].Number / 100) * 60));
context->yyHour = yyvsp[-3].Number;
context->yyMinutes = yyvsp[-1].Number;
context->yyMeridian = MER24;
context->yyHaveZone++;
context->yyTimezone = (yyvsp[0].Number < 0
? -yyvsp[0].Number % 100 + (-yyvsp[0].Number / 100) * 60
: - (yyvsp[0].Number % 100 + (yyvsp[0].Number / 100) * 60));
;
break;}
case 12:
#line 265 "getdate.y"
#line 274 "getdate.y"
{
yyHour = yyvsp[-5].Number;
yyMinutes = yyvsp[-3].Number;
yySeconds = yyvsp[-1].Number;
yyMeridian = yyvsp[0].Meridian;
context->yyHour = yyvsp[-5].Number;
context->yyMinutes = yyvsp[-3].Number;
context->yySeconds = yyvsp[-1].Number;
context->yyMeridian = yyvsp[0].Meridian;
;
break;}
case 13:
#line 271 "getdate.y"
#line 280 "getdate.y"
{
yyHour = yyvsp[-5].Number;
yyMinutes = yyvsp[-3].Number;
yySeconds = yyvsp[-1].Number;
yyMeridian = MER24;
yyHaveZone++;
yyTimezone = (yyvsp[0].Number < 0
? -yyvsp[0].Number % 100 + (-yyvsp[0].Number / 100) * 60
: - (yyvsp[0].Number % 100 + (yyvsp[0].Number / 100) * 60));
context->yyHour = yyvsp[-5].Number;
context->yyMinutes = yyvsp[-3].Number;
context->yySeconds = yyvsp[-1].Number;
context->yyMeridian = MER24;
context->yyHaveZone++;
context->yyTimezone = (yyvsp[0].Number < 0
? -yyvsp[0].Number % 100 + (-yyvsp[0].Number / 100) * 60
: - (yyvsp[0].Number % 100 + (yyvsp[0].Number / 100) * 60));
;
break;}
case 14:
#line 283 "getdate.y"
#line 292 "getdate.y"
{
yyTimezone = yyvsp[0].Number;
context->yyTimezone = yyvsp[0].Number;
;
break;}
case 15:
#line 286 "getdate.y"
#line 295 "getdate.y"
{
yyTimezone = yyvsp[0].Number - 60;
context->yyTimezone = yyvsp[0].Number - 60;
;
break;}
case 16:
#line 290 "getdate.y"
#line 299 "getdate.y"
{
yyTimezone = yyvsp[-1].Number - 60;
context->yyTimezone = yyvsp[-1].Number - 60;
;
break;}
case 17:
#line 295 "getdate.y"
#line 304 "getdate.y"
{
yyDayOrdinal = 1;
yyDayNumber = yyvsp[0].Number;
context->yyDayOrdinal = 1;
context->yyDayNumber = yyvsp[0].Number;
;
break;}
case 18:
#line 299 "getdate.y"
#line 308 "getdate.y"
{
yyDayOrdinal = 1;
yyDayNumber = yyvsp[-1].Number;
context->yyDayOrdinal = 1;
context->yyDayNumber = yyvsp[-1].Number;
;
break;}
case 19:
#line 303 "getdate.y"
#line 312 "getdate.y"
{
yyDayOrdinal = yyvsp[-1].Number;
yyDayNumber = yyvsp[0].Number;
context->yyDayOrdinal = yyvsp[-1].Number;
context->yyDayNumber = yyvsp[0].Number;
;
break;}
case 20:
#line 309 "getdate.y"
#line 318 "getdate.y"
{
yyMonth = yyvsp[-2].Number;
yyDay = yyvsp[0].Number;
context->yyMonth = yyvsp[-2].Number;
context->yyDay = yyvsp[0].Number;
;
break;}
case 21:
#line 313 "getdate.y"
#line 322 "getdate.y"
{
/* Interpret as YYYY/MM/DD if $1 >= 1000, otherwise as MM/DD/YY.
The goal in recognizing YYYY/MM/DD is solely to support legacy
@@ -1070,226 +1076,227 @@ case 21:
you want portability, use the ISO 8601 format. */
if (yyvsp[-4].Number >= 1000)
{
yyYear = yyvsp[-4].Number;
yyMonth = yyvsp[-2].Number;
yyDay = yyvsp[0].Number;
context->yyYear = yyvsp[-4].Number;
context->yyMonth = yyvsp[-2].Number;
context->yyDay = yyvsp[0].Number;
}
else
{
yyMonth = yyvsp[-4].Number;
yyDay = yyvsp[-2].Number;
yyYear = yyvsp[0].Number;
context->yyMonth = yyvsp[-4].Number;
context->yyDay = yyvsp[-2].Number;
context->yyYear = yyvsp[0].Number;
}
;
break;}
case 22:
#line 331 "getdate.y"
#line 340 "getdate.y"
{
/* ISO 8601 format. yyyy-mm-dd. */
yyYear = yyvsp[-2].Number;
yyMonth = -yyvsp[-1].Number;
yyDay = -yyvsp[0].Number;
context->yyYear = yyvsp[-2].Number;
context->yyMonth = -yyvsp[-1].Number;
context->yyDay = -yyvsp[0].Number;
;
break;}
case 23:
#line 337 "getdate.y"
#line 346 "getdate.y"
{
/* e.g. 17-JUN-1992. */
yyDay = yyvsp[-2].Number;
yyMonth = yyvsp[-1].Number;
yyYear = -yyvsp[0].Number;
context->yyDay = yyvsp[-2].Number;
context->yyMonth = yyvsp[-1].Number;
context->yyYear = -yyvsp[0].Number;
;
break;}
case 24:
#line 343 "getdate.y"
#line 352 "getdate.y"
{
yyMonth = yyvsp[-1].Number;
yyDay = yyvsp[0].Number;
context->yyMonth = yyvsp[-1].Number;
context->yyDay = yyvsp[0].Number;
;
break;}
case 25:
#line 347 "getdate.y"
#line 356 "getdate.y"
{
yyMonth = yyvsp[-3].Number;
yyDay = yyvsp[-2].Number;
yyYear = yyvsp[0].Number;
context->yyMonth = yyvsp[-3].Number;
context->yyDay = yyvsp[-2].Number;
context->yyYear = yyvsp[0].Number;
;
break;}
case 26:
#line 352 "getdate.y"
#line 361 "getdate.y"
{
yyMonth = yyvsp[0].Number;
yyDay = yyvsp[-1].Number;
context->yyMonth = yyvsp[0].Number;
context->yyDay = yyvsp[-1].Number;
;
break;}
case 27:
#line 356 "getdate.y"
#line 365 "getdate.y"
{
yyMonth = yyvsp[-1].Number;
yyDay = yyvsp[-2].Number;
yyYear = yyvsp[0].Number;
context->yyMonth = yyvsp[-1].Number;
context->yyDay = yyvsp[-2].Number;
context->yyYear = yyvsp[0].Number;
;
break;}
case 28:
#line 363 "getdate.y"
#line 372 "getdate.y"
{
yyRelSeconds = -yyRelSeconds;
yyRelMinutes = -yyRelMinutes;
yyRelHour = -yyRelHour;
yyRelDay = -yyRelDay;
yyRelMonth = -yyRelMonth;
yyRelYear = -yyRelYear;
context->yyRelSeconds = -context->yyRelSeconds;
context->yyRelMinutes = -context->yyRelMinutes;
context->yyRelHour = -context->yyRelHour;
context->yyRelDay = -context->yyRelDay;
context->yyRelMonth = -context->yyRelMonth;
context->yyRelYear = -context->yyRelYear;
;
break;}
case 30:
#line 374 "getdate.y"
#line 383 "getdate.y"
{
yyRelYear += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelYear += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 31:
#line 377 "getdate.y"
#line 386 "getdate.y"
{
yyRelYear += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelYear += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 32:
#line 380 "getdate.y"
#line 389 "getdate.y"
{
yyRelYear += yyvsp[0].Number;
context->yyRelYear += yyvsp[0].Number;
;
break;}
case 33:
#line 383 "getdate.y"
#line 392 "getdate.y"
{
yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 34:
#line 386 "getdate.y"
#line 395 "getdate.y"
{
yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelMonth += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 35:
#line 389 "getdate.y"
#line 398 "getdate.y"
{
yyRelMonth += yyvsp[0].Number;
context->yyRelMonth += yyvsp[0].Number;
;
break;}
case 36:
#line 392 "getdate.y"
#line 401 "getdate.y"
{
yyRelDay += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelDay += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 37:
#line 395 "getdate.y"
#line 404 "getdate.y"
{
yyRelDay += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelDay += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 38:
#line 398 "getdate.y"
#line 407 "getdate.y"
{
yyRelDay += yyvsp[0].Number;
context->yyRelDay += yyvsp[0].Number;
;
break;}
case 39:
#line 401 "getdate.y"
#line 410 "getdate.y"
{
yyRelHour += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelHour += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 40:
#line 404 "getdate.y"
#line 413 "getdate.y"
{
yyRelHour += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelHour += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 41:
#line 407 "getdate.y"
#line 416 "getdate.y"
{
yyRelHour += yyvsp[0].Number;
context->yyRelHour += yyvsp[0].Number;
;
break;}
case 42:
#line 410 "getdate.y"
#line 419 "getdate.y"
{
yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 43:
#line 413 "getdate.y"
#line 422 "getdate.y"
{
yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelMinutes += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 44:
#line 416 "getdate.y"
#line 425 "getdate.y"
{
yyRelMinutes += yyvsp[0].Number;
context->yyRelMinutes += yyvsp[0].Number;
;
break;}
case 45:
#line 419 "getdate.y"
#line 428 "getdate.y"
{
yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 46:
#line 422 "getdate.y"
#line 431 "getdate.y"
{
yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number;
context->yyRelSeconds += yyvsp[-1].Number * yyvsp[0].Number;
;
break;}
case 47:
#line 425 "getdate.y"
#line 434 "getdate.y"
{
yyRelSeconds += yyvsp[0].Number;
context->yyRelSeconds += yyvsp[0].Number;
;
break;}
case 48:
#line 431 "getdate.y"
#line 440 "getdate.y"
{
if (yyHaveTime && yyHaveDate && !yyHaveRel)
yyYear = yyvsp[0].Number;
if (context->yyHaveTime && context->yyHaveDate &&
!context->yyHaveRel)
context->yyYear = yyvsp[0].Number;
else
{
if (yyvsp[0].Number>10000)
{
yyHaveDate++;
yyDay= (yyvsp[0].Number)%100;
yyMonth= (yyvsp[0].Number/100)%100;
yyYear = yyvsp[0].Number/10000;
context->yyHaveDate++;
context->yyDay= (yyvsp[0].Number)%100;
context->yyMonth= (yyvsp[0].Number/100)%100;
context->yyYear = yyvsp[0].Number/10000;
}
else
{
yyHaveTime++;
context->yyHaveTime++;
if (yyvsp[0].Number < 100)
{
yyHour = yyvsp[0].Number;
yyMinutes = 0;
context->yyHour = yyvsp[0].Number;
context->yyMinutes = 0;
}
else
{
yyHour = yyvsp[0].Number / 100;
yyMinutes = yyvsp[0].Number % 100;
context->yyHour = yyvsp[0].Number / 100;
context->yyMinutes = yyvsp[0].Number % 100;
}
yySeconds = 0;
yyMeridian = MER24;
context->yySeconds = 0;
context->yyMeridian = MER24;
}
}
;
break;}
case 49:
#line 464 "getdate.y"
#line 474 "getdate.y"
{
yyval.Meridian = MER24;
;
break;}
case 50:
#line 468 "getdate.y"
#line 478 "getdate.y"
{
yyval.Meridian = yyvsp[0].Meridian;
;
@@ -1516,7 +1523,7 @@ yyerrhandle:
}
return 1;
}
#line 473 "getdate.y"
#line 483 "getdate.y"
/* Include this file down here because bison inserts code above which
@@ -1772,7 +1779,8 @@ ToYear (Year)
}
static int
LookupWord (buff)
LookupWord (yylval, buff)
YYSTYPE *yylval;
char *buff;
{
register char *p;
@@ -1788,12 +1796,12 @@ LookupWord (buff)
if (strcmp (buff, "am") == 0 || strcmp (buff, "a.m.") == 0)
{
yylval.Meridian = MERam;
yylval->Meridian = MERam;
return tMERIDIAN;
}
if (strcmp (buff, "pm") == 0 || strcmp (buff, "p.m.") == 0)
{
yylval.Meridian = MERpm;
yylval->Meridian = MERpm;
return tMERIDIAN;
}
@@ -1814,13 +1822,13 @@ LookupWord (buff)
{
if (strncmp (buff, tp->name, 3) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
}
else if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
}
@@ -1828,7 +1836,7 @@ LookupWord (buff)
for (tp = TimezoneTable; tp->name; tp++)
if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
@@ -1838,7 +1846,7 @@ LookupWord (buff)
for (tp = UnitsTable; tp->name; tp++)
if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
@@ -1850,7 +1858,7 @@ LookupWord (buff)
for (tp = UnitsTable; tp->name; tp++)
if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
buff[i] = 's'; /* Put back for "this" in OtherTable. */
@@ -1859,7 +1867,7 @@ LookupWord (buff)
for (tp = OtherTable; tp->name; tp++)
if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
@@ -1869,7 +1877,7 @@ LookupWord (buff)
for (tp = MilitaryTable; tp->name; tp++)
if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
}
@@ -1885,7 +1893,7 @@ LookupWord (buff)
for (tp = TimezoneTable; tp->name; tp++)
if (strcmp (buff, tp->name) == 0)
{
yylval.Number = tp->value;
yylval->Number = tp->value;
return tp->type;
}
@@ -1893,7 +1901,9 @@ LookupWord (buff)
}
static int
yylex ()
yylex (yylval, cookie)
YYSTYPE *yylval;
void *cookie;
{
register unsigned char c;
register char *p;
@@ -1903,42 +1913,42 @@ yylex ()
for (;;)
{
while (ISSPACE ((unsigned char) *yyInput))
yyInput++;
while (ISSPACE ((unsigned char) *context->yyInput))
context->yyInput++;
if (ISDIGIT (c = *yyInput) || c == '-' || c == '+')
if (ISDIGIT (c = *context->yyInput) || c == '-' || c == '+')
{
if (c == '-' || c == '+')
{
sign = c == '-' ? -1 : 1;
if (!ISDIGIT (*++yyInput))
if (!ISDIGIT (*++context->yyInput))
/* skip the '-' sign */
continue;
}
else
sign = 0;
for (yylval.Number = 0; ISDIGIT (c = *yyInput++);)
yylval.Number = 10 * yylval.Number + c - '0';
yyInput--;
for (yylval->Number = 0; ISDIGIT (c = *context->yyInput++);)
yylval->Number = 10 * yylval->Number + c - '0';
context->yyInput--;
if (sign < 0)
yylval.Number = -yylval.Number;
yylval->Number = -yylval->Number;
return sign ? tSNUMBER : tUNUMBER;
}
if (ISALPHA (c))
{
for (p = buff; (c = *yyInput++, ISALPHA (c)) || c == '.';)
for (p = buff; (c = *context->yyInput++, ISALPHA (c)) || c == '.';)
if (p < &buff[sizeof buff - 1])
*p++ = c;
*p = '\0';
yyInput--;
return LookupWord (buff);
context->yyInput--;
return LookupWord (yylval, buff);
}
if (c != '(')
return *yyInput++;
return *context->yyInput++;
Count = 0;
do
{
c = *yyInput++;
c = *context->yyInput++;
if (c == '\0')
return c;
if (c == '(')
@@ -1978,10 +1988,11 @@ curl_getdate (const char *p, const time_t *now)
{
struct tm tm, tm0, *tmp;
time_t Start;
CONTEXT cookie;
#ifdef HAVE_LOCALTIME_R
struct tm keeptime;
#endif
yyInput = p;
cookie.yyInput = p;
Start = now ? *now : time ((time_t *) NULL);
#ifdef HAVE_LOCALTIME_R
tmp = (struct tm *)localtime_r(&Start, &keeptime);
@@ -1990,52 +2001,55 @@ curl_getdate (const char *p, const time_t *now)
#endif
if (!tmp)
return -1;
yyYear = tmp->tm_year + TM_YEAR_ORIGIN;
yyMonth = tmp->tm_mon + 1;
yyDay = tmp->tm_mday;
yyHour = tmp->tm_hour;
yyMinutes = tmp->tm_min;
yySeconds = tmp->tm_sec;
cookie.yyYear = tmp->tm_year + TM_YEAR_ORIGIN;
cookie.yyMonth = tmp->tm_mon + 1;
cookie.yyDay = tmp->tm_mday;
cookie.yyHour = tmp->tm_hour;
cookie.yyMinutes = tmp->tm_min;
cookie.yySeconds = tmp->tm_sec;
tm.tm_isdst = tmp->tm_isdst;
yyMeridian = MER24;
yyRelSeconds = 0;
yyRelMinutes = 0;
yyRelHour = 0;
yyRelDay = 0;
yyRelMonth = 0;
yyRelYear = 0;
yyHaveDate = 0;
yyHaveDay = 0;
yyHaveRel = 0;
yyHaveTime = 0;
yyHaveZone = 0;
cookie.yyMeridian = MER24;
cookie.yyRelSeconds = 0;
cookie.yyRelMinutes = 0;
cookie.yyRelHour = 0;
cookie.yyRelDay = 0;
cookie.yyRelMonth = 0;
cookie.yyRelYear = 0;
cookie.yyHaveDate = 0;
cookie.yyHaveDay = 0;
cookie.yyHaveRel = 0;
cookie.yyHaveTime = 0;
cookie.yyHaveZone = 0;
if (yyparse ()
|| yyHaveTime > 1 || yyHaveZone > 1 || yyHaveDate > 1 || yyHaveDay > 1)
if (yyparse (&cookie)
|| cookie.yyHaveTime > 1 || cookie.yyHaveZone > 1 ||
cookie.yyHaveDate > 1 || cookie.yyHaveDay > 1)
return -1;
tm.tm_year = ToYear (yyYear) - TM_YEAR_ORIGIN + yyRelYear;
tm.tm_mon = yyMonth - 1 + yyRelMonth;
tm.tm_mday = yyDay + yyRelDay;
if (yyHaveTime || (yyHaveRel && !yyHaveDate && !yyHaveDay))
tm.tm_year = ToYear (cookie.yyYear) - TM_YEAR_ORIGIN + cookie.yyRelYear;
tm.tm_mon = cookie.yyMonth - 1 + cookie.yyRelMonth;
tm.tm_mday = cookie.yyDay + cookie.yyRelDay;
if (cookie.yyHaveTime ||
(cookie.yyHaveRel && !cookie.yyHaveDate && !cookie.yyHaveDay))
{
tm.tm_hour = ToHour (yyHour, yyMeridian);
tm.tm_hour = ToHour (cookie.yyHour, cookie.yyMeridian);
if (tm.tm_hour < 0)
return -1;
tm.tm_min = yyMinutes;
tm.tm_sec = yySeconds;
tm.tm_min = cookie.yyMinutes;
tm.tm_sec = cookie.yySeconds;
}
else
{
tm.tm_hour = tm.tm_min = tm.tm_sec = 0;
}
tm.tm_hour += yyRelHour;
tm.tm_min += yyRelMinutes;
tm.tm_sec += yyRelSeconds;
tm.tm_hour += cookie.yyRelHour;
tm.tm_min += cookie.yyRelMinutes;
tm.tm_sec += cookie.yyRelSeconds;
/* Let mktime deduce tm_isdst if we have an absolute timestamp,
or if the relative timestamp mentions days, months, or years. */
if (yyHaveDate | yyHaveDay | yyHaveTime | yyRelDay | yyRelMonth | yyRelYear)
if (cookie.yyHaveDate | cookie.yyHaveDay | cookie.yyHaveTime |
cookie.yyRelDay | cookie.yyRelMonth | cookie.yyRelYear)
tm.tm_isdst = -1;
tm0 = tm;
@@ -2053,18 +2067,18 @@ curl_getdate (const char *p, const time_t *now)
we apply mktime to 1970-01-02 08:00:00 instead and adjust the time
zone by 24 hours to compensate. This algorithm assumes that
there is no DST transition within a day of the time_t boundaries. */
if (yyHaveZone)
if (cookie.yyHaveZone)
{
tm = tm0;
if (tm.tm_year <= EPOCH - TM_YEAR_ORIGIN)
{
tm.tm_mday++;
yyTimezone -= 24 * 60;
cookie.yyTimezone -= 24 * 60;
}
else
{
tm.tm_mday--;
yyTimezone += 24 * 60;
cookie.yyTimezone += 24 * 60;
}
Start = mktime (&tm);
}
@@ -2073,22 +2087,29 @@ curl_getdate (const char *p, const time_t *now)
return Start;
}
if (yyHaveDay && !yyHaveDate)
if (cookie.yyHaveDay && !cookie.yyHaveDate)
{
tm.tm_mday += ((yyDayNumber - tm.tm_wday + 7) % 7
+ 7 * (yyDayOrdinal - (0 < yyDayOrdinal)));
tm.tm_mday += ((cookie.yyDayNumber - tm.tm_wday + 7) % 7
+ 7 * (cookie.yyDayOrdinal - (0 < cookie.yyDayOrdinal)));
Start = mktime (&tm);
if (Start == (time_t) -1)
return Start;
}
if (yyHaveZone)
if (cookie.yyHaveZone)
{
long delta;
struct tm *gmt = gmtime (&Start);
struct tm *gmt;
#ifdef HAVE_GMTIME_R
/* thread-safe version */
struct tm keeptime;
gmt = (struct tm *)gmtime_r(&Start, &keeptime);
#else
gmt = gmtime(&Start);
#endif
if (!gmt)
return -1;
delta = yyTimezone * 60L + difftm (&tm, gmt);
delta = cookie.yyTimezone * 60L + difftm (&tm, gmt);
if ((Start + delta < Start) != (delta < 0))
return -1; /* time_t overflow */
Start += delta;
@@ -2126,11 +2147,3 @@ main (ac, av)
/* NOTREACHED */
}
#endif /* defined (TEST) */
/*
* local variables:
* eval: (load-file "../curl-mode.el")
* end:
* vim600: fdm=marker
* vim: et sw=2 ts=2 sts=2 tw=78
*/
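
The getdate rework above moves every piece of parser state out of static globals into a CONTEXT struct that is threaded through yyparse()/yylex() via YYPARSE_PARAM/YYLEX_PARAM, so concurrent curl_getdate() calls no longer share state, and it prefers gmtime_r()/localtime_r() where available. A hedged usage example of the public entry point (the relative date string is only a guess at typical getdate input):

#include <curl/curl.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
  time_t base = time(NULL);

  /* absolute date, parsed relative to "now" (second argument NULL) */
  time_t t1 = curl_getdate("Tue, 15 Nov 1994 12:45:26 GMT", NULL);

  /* relative date, parsed against an explicit reference time */
  time_t t2 = curl_getdate("2 days", &base);

  printf("%ld %ld\n", (long)t1, (long)t2);
  return 0;
}
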


@@ -7,9 +7,7 @@
** This code is in the public domain and has no copyright.
*/
#if HAVE_CONFIG_H
# include <config.h>
#endif
# include "setup.h"
#ifndef PARAMS
# if defined PROTOTYPES || (defined __STDC__ && __STDC__)

Some files were not shown because too many files have changed in this diff.