Compare commits


268 Commits

Author SHA1 Message Date
Daniel Stenberg
52707f9590 7.5-commit 2000-12-04 09:44:57 +00:00
Daniel Stenberg
be2369ed14 Craig Davison updated and made it work again! 2000-12-01 07:02:26 +00:00
Daniel Stenberg
76af68e8ab Craig Davison fixed the VC++ lines 2000-12-01 07:01:14 +00:00
Daniel Stenberg
421fccb12a Added -version-info and lots of info 2000-11-30 22:22:08 +00:00
Daniel Stenberg
173f12db68 added a typecast to shut up a VC++ warning when converting from long
to unsigned short
2000-11-30 21:59:51 +00:00
Daniel Stenberg
983e3ae8c5 Craig Davison updated this 2000-11-30 21:54:00 +00:00
Daniel Stenberg
62213e529c README.curl is now MANUAL 2000-11-30 08:08:49 +00:00
Daniel Stenberg
ea3b6914cc Added a small note about referer needing to be complete to comply to the
HTTP spec
2000-11-30 08:08:23 +00:00
Daniel Stenberg
c8cd35e640 Includes MANUAL instead of README.curl now 2000-11-30 07:56:32 +00:00
Daniel Stenberg
706f5e1a5d README.curl is renamed to MANUAL 2000-11-30 07:55:30 +00:00
Daniel Stenberg
db7d772d3e removed #if 0 sections 2000-11-29 08:19:23 +00:00
Daniel Stenberg
64761bc786 removed #if 0 section 2000-11-29 08:17:12 +00:00
Daniel Stenberg
9980568f42 removed '#if 0' sections 2000-11-29 08:16:27 +00:00
Daniel Stenberg
05a1910968 I'd love to see test cases with submitted patches... 2000-11-29 07:48:14 +00:00
Daniel Stenberg
a5217dd10e minor things about the test suite added 2000-11-29 07:47:51 +00:00
Daniel Stenberg
0d7ba0ec61 now counts all test cases and presents a counter at the end 2000-11-28 12:49:39 +00:00
Daniel Stenberg
b2f0ca8a43 maxredirs 2000-11-28 12:45:20 +00:00
Daniel Stenberg
a00bb13766 max-redirs test case data 2000-11-28 09:42:15 +00:00
Daniel Stenberg
7c7923761d free the URL on redirections, this was a previous memory leak 2000-11-28 09:41:01 +00:00
Daniel Stenberg
e9b69bc757 added maxredirs 2000-11-28 09:11:24 +00:00
Daniel Stenberg
2aaae10fe8 Added max-redirs support (James Griffiths' patch) 2000-11-28 09:10:43 +00:00
Daniel Stenberg
6bd75ab840 added maxredirs, moved CURL_PROGRESS* defines to src/main.c 2000-11-28 09:10:04 +00:00
Daniel Stenberg
b8f7d94ef1 James Griffiths' max-redirs fix 2000-11-28 09:05:47 +00:00
Daniel Stenberg
d4cd079b9c Added tests/ftpserver.pl 2000-11-27 15:11:25 +00:00
Daniel Stenberg
013770a7e2 I rearranged it and added 'make test' 2000-11-27 13:39:11 +00:00
Daniel Stenberg
f4c26ddb6a spell check 2000-11-27 13:32:11 +00:00
Daniel Stenberg
9f77434c3a modified for ftp custom commands 2000-11-27 12:53:50 +00:00
Daniel Stenberg
989ff585b1 allows simple custom modifications for single test cases 2000-11-27 12:53:32 +00:00
Daniel Stenberg
f589c1c024 Added the ftpdN.txt file 2000-11-27 12:53:05 +00:00
Daniel Stenberg
e86f3b9144 ftp graceful error detection check data 2000-11-27 12:52:36 +00:00
Daniel Stenberg
79a84d20f2 Added the memdebug include file 2000-11-27 12:04:51 +00:00
Daniel Stenberg
20801181b2 file:// test data 2000-11-27 11:54:35 +00:00
Daniel Stenberg
3723c52057 if the server doesn't output a logfile, the protocol file is not compared
with it! This makes tests without server (like for file://) smarter.
2000-11-27 11:53:50 +00:00
Daniel Stenberg
0e78911ce3 modified the --help output to have the passwords within [brackets] as they
are optional...
2000-11-27 10:24:08 +00:00
Daniel Stenberg
b7a5fb1794 added the new FILETIME stuff 2000-11-22 14:57:58 +00:00
Daniel Stenberg
6f4f3c79b6 7.5-pre3 commit 2000-11-22 14:27:47 +00:00
Daniel Stenberg
593df2f18a multiple URL support? forked? 2000-11-22 14:18:30 +00:00
Daniel Stenberg
fde82cd4e0 adjusted to modified FTP behaviour 2000-11-22 14:15:46 +00:00
Daniel Stenberg
801626de19 Added a curl-target to make it easier to build from this dir 2000-11-22 14:15:15 +00:00
Daniel Stenberg
92f53b0e4d added filetime for opt and info 2000-11-22 13:59:41 +00:00
Daniel Stenberg
d419d975b3 Added cacert and filetime support 2000-11-22 13:51:11 +00:00
Daniel Stenberg
b5739b3a97 document time fixes 2000-11-22 13:50:17 +00:00
Daniel Stenberg
86d4488cc7 Added struct fields to deal with time-of-document 2000-11-22 12:57:16 +00:00
Daniel Stenberg
ce1cb29d20 client_write() proto and defines added 2000-11-22 12:55:55 +00:00
Daniel Stenberg
526eca191a uses client_write() 2000-11-22 12:55:24 +00:00
Daniel Stenberg
79beebdabe uses client_write() now 2000-11-22 12:54:48 +00:00
Daniel Stenberg
39abde5db5 Added the client_write() function 2000-11-22 12:53:56 +00:00
Daniel Stenberg
fb962a281e uses the new client_write() function 2000-11-22 12:51:18 +00:00
Daniel Stenberg
2f6e61d5fb GetLastResponse() modified to return ftp code as integer
initial modified-time support
2000-11-22 12:50:41 +00:00
Daniel Stenberg
ea9ede15e3 HTTP GET fail silently on HTTP error return 2000-11-22 08:57:24 +00:00
Daniel Stenberg
4768c9cdbb Added --cacert 2000-11-22 08:51:41 +00:00
Daniel Stenberg
d6b1162a63 working with the test suite brings things up 2000-11-22 08:16:36 +00:00
Daniel Stenberg
486591f9d1 Added --url 2000-11-22 07:53:15 +00:00
Daniel Stenberg
458ec524e1 updated the config file section 2000-11-22 07:52:48 +00:00
Daniel Stenberg
a40b55d5c8 Added 5.2 How can I receive all data into a large memory chunk? 2000-11-22 07:27:26 +00:00
Daniel Stenberg
5aa5ecb29b modified to work with printf()s that writes 0x-prefix on %p data 2000-11-21 19:37:15 +00:00
Daniel Stenberg
20dd0670ba I run the perl programs with 'perl [script]' instead, to overcome the
hardcoded-path-in-scripts problem.
2000-11-21 19:30:09 +00:00
Daniel Stenberg
43e1e1cd1a upload check, better ability to specify test cases on the command line 2000-11-21 19:28:11 +00:00
Daniel Stenberg
55b7c1c364 REST support seems to work
NLST sends an NLST-looking list
renamed the upload file
2000-11-21 19:25:14 +00:00
Daniel Stenberg
190ecd652a Added the uploadN.txt description 2000-11-21 19:21:31 +00:00
Daniel Stenberg
2677c27b08 FTP test case data 2000-11-21 19:20:14 +00:00
Daniel Stenberg
c938166520 set rangestringalloc to 0 after the string has been freed to prevent it
from being freed twice (a NULL free the second time)
2000-11-21 19:06:55 +00:00
Daniel Stenberg
50d564b4d4 uses the "internal" mprintf() routines for formatted output 2000-11-21 19:05:26 +00:00
Daniel Stenberg
29d21bea18 bad directory name extractor fixed, now always free the file and directory
very early, as that could leak memory before
2000-11-21 19:04:25 +00:00
Daniel Stenberg
b734bc37eb curl_unescape() did not stop at the set length properly when %-codes were
used
2000-11-21 19:01:53 +00:00
Daniel Stenberg
2c123051bb added a command line log that logs all command lines run in the complete
test run
2000-11-21 17:07:16 +00:00
Daniel Stenberg
b82fa8d959 FTP test case data 2000-11-21 17:04:59 +00:00
Daniel Stenberg
c84aa663a1 httpN => protN and some other minor updates 2000-11-21 15:51:05 +00:00
Daniel Stenberg
7db43ae0ed says nothing if no errors were found 2000-11-21 15:50:17 +00:00
Daniel Stenberg
ae58d84429 Added support for verifiedserver that returns a static silly string that
allows the test script to verify that it is our test server running on the
particular port
2000-11-21 15:49:34 +00:00
Daniel Stenberg
eb993c28ca starts and stops both HTTP and FTP servers now
checks memanalyze output better
filters PORT output when doing FTP compares
2000-11-21 15:48:40 +00:00
Daniel Stenberg
2830504f4f removed the twice free_config_all() calls
made the big config struct local (big . => -> replace)
2000-11-21 15:37:54 +00:00
Daniel Stenberg
2a5e68ea89 added some defensive code around the GetHost()'s third argument result 2000-11-21 15:36:38 +00:00
Daniel Stenberg
c06f726614 GetHost() now sets the third pointer to NULL when the lookup fails, as the
memory is then freed in the function
2000-11-21 15:35:45 +00:00
Daniel Stenberg
52909688cf when using PORT, we now free the host name buffer properly 2000-11-21 15:34:40 +00:00
Daniel Stenberg
c1474b9507 http* is now prot* since we're about to use other protocols as well 2000-11-21 14:24:03 +00:00
Daniel Stenberg
708e9cf294 attempt to use a bad protocol 2000-11-21 13:41:11 +00:00
Daniel Stenberg
70778f2cb6 NLST does a LIST (a normal unix ftp client 'ls' becomes NLST)
multiple transfers are supported
2000-11-21 13:36:55 +00:00
Daniel Stenberg
bdb411c6ca STOR works! 2000-11-21 13:22:32 +00:00
Daniel Stenberg
56ac132401 removed the storenonprintable function as it isn't used anymore 2000-11-21 13:18:30 +00:00
Daniel Stenberg
44137c7932 fancier login text
removed lots of wasted comments
cleaned up a little
STOR doesn't work
2000-11-21 12:54:08 +00:00
Daniel Stenberg
19a754dc8c removed the forks, we don't need forking for single-task testing 2000-11-21 12:00:24 +00:00
Daniel Stenberg
641351ee16 runtests.pl -c should be -a 2000-11-21 11:37:58 +00:00
Daniel Stenberg
7b49d40bb0 removed pedantic compiler warnings 2000-11-21 09:38:41 +00:00
Daniel Stenberg
3e5ba33e2d removed two unused variables and added an extra set of parentheses, done
to remove pedantic compiler warnings
2000-11-21 09:31:55 +00:00
Daniel Stenberg
9a9013ac25 typecasted the argument to isspace() to int, to remove a pedantic compiler
warning
2000-11-21 09:31:03 +00:00
Daniel Stenberg
59693250c4 includes http.h for the proxytunnel stuff 2000-11-21 09:30:07 +00:00
Daniel Stenberg
336b0b7d82 added comment on a variable that is unused on some platforms 2000-11-21 09:29:21 +00:00
Daniel Stenberg
f22c690b1f flushes the log handles before fork, now the logs work too! 2000-11-20 16:02:53 +00:00
Daniel Stenberg
05ec503eac QUIT works, and now I can run a unix ftp client against the server and it
runs pretty good
2000-11-20 14:26:09 +00:00
Daniel Stenberg
4b8fd86f04 CWD runs 2000-11-20 13:47:25 +00:00
Daniel Stenberg
16cf5ee1c9 RETR seems to work too 2000-11-20 13:19:22 +00:00
Daniel Stenberg
a7937ed49c this is now a working ftp server, both PASV and PORT run fine, LIST works,
RETR and STORE don't
2000-11-20 13:07:04 +00:00
Daniel Stenberg
4c0bae3649 changed the comment for URL_MAX_LENGTH 2000-11-20 09:40:09 +00:00
Daniel Stenberg
4a7d62c8c3 formfree, config file, --url, more testcases, infinite URL lengths and more 2000-11-20 09:37:57 +00:00
Daniel Stenberg
d4a4b564ec extremely long URL test 2000-11-20 09:04:27 +00:00
Daniel Stenberg
5d4bceda20 removed URL size restrictions, dynamically allocates the needed buffer
size instead
2000-11-20 08:54:32 +00:00
Daniel Stenberg
42280e95bf removed URL size restrictions 2000-11-20 08:53:21 +00:00
Daniel Stenberg
b2ad1f68cc this is the first attempt of a tiny and simple ftp server in perl for curl
test purposes
2000-11-20 08:00:33 +00:00
Daniel Stenberg
13e9a4d8f4 added a description about the memory checks 2000-11-20 07:59:25 +00:00
Daniel Stenberg
9c0d9784f6 no more "leaked" memory when this fails on various kinds of bad usage 2000-11-20 07:54:57 +00:00
Daniel Stenberg
91c879461e Alexander Kourakos's lowercase environment variable fix 2000-11-20 07:35:21 +00:00
Daniel Stenberg
bda9fde4d8 spell correction resolv => resolve in two error messages 2000-11-18 16:31:27 +00:00
Daniel Stenberg
0def60bf9d now supports checks for exit codes and check for memory even when curl
returns (expected) exit code
2000-11-17 15:58:25 +00:00
Daniel Stenberg
1665435040 graceful failure test 2000-11-17 15:57:35 +00:00
Daniel Stenberg
aa86f697f6 output FAILED properly even when -s is used 2000-11-17 15:34:33 +00:00
Daniel Stenberg
e48747d95d updated to the new stdout stuff and the new -a option 2000-11-17 15:33:54 +00:00
Daniel Stenberg
0a72154cd2 fixed strdup() of a NULL pointer 2000-11-17 15:32:17 +00:00
Daniel Stenberg
3e6a354c4c now exits and alerts on bad uses of strdup() and free() 2000-11-17 15:31:45 +00:00
Daniel Stenberg
f0b8aac325 updated to the new stdout file behaviour of runtests.pl 2000-11-17 15:30:33 +00:00
Daniel Stenberg
ec3054e1f2 make test in root now runs make quiet-test in the test dir 2000-11-17 15:30:01 +00:00
Daniel Stenberg
7c6414ebbd uses stricter output 2000-11-17 15:15:48 +00:00
Daniel Stenberg
85705e105c better stdout check, full support for memory debug tests 2000-11-17 15:07:29 +00:00
Daniel Stenberg
874f6024e6 multiple URL test 2000-11-17 15:07:03 +00:00
Daniel Stenberg
a03cdd7e83 curl_formfree() added 2000-11-17 14:21:07 +00:00
Daniel Stenberg
f9155568c6 this has been missing all the time... 2000-11-17 14:11:22 +00:00
Daniel Stenberg
c0936824d4 added curl_formfree() 2000-11-17 14:06:24 +00:00
Daniel Stenberg
57ddd7e928 now includes stdlib.h 2000-11-17 14:05:43 +00:00
Daniel Stenberg
868488b518 memory leak cleanup campaign 2000-11-17 14:03:58 +00:00
Daniel Stenberg
7f77a061dd allows \r \n \t \v in config file parameters within quotes 2000-11-17 10:08:39 +00:00
Daniel Stenberg
2d16e1a777 config file test 2000-11-17 10:05:56 +00:00
Daniel Stenberg
2297bc4791 changed the 'port' field to long to better work with the va_arg() system 2000-11-17 09:48:21 +00:00
Daniel Stenberg
34a2d446e0 major config file hack, now works a lot better and slightly different
Added --url to allow URLs to be specified in the config file that way
2000-11-17 09:47:18 +00:00
Daniel Stenberg
fdd91b2209 moved out the FTP part 2000-11-16 09:06:18 +00:00
Daniel Stenberg
7ea4551b1b forgot to commit before 2000-11-16 07:32:45 +00:00
Daniel Stenberg
77bbbd868b data->err must be used, not stderr 2000-11-16 07:20:12 +00:00
Daniel Stenberg
3b91db110b fixed crash in config file parser 2000-11-15 20:45:29 +00:00
Daniel Stenberg
ab9dfac24e updated to catch bug 122480 2000-11-15 15:48:15 +00:00
Daniel Stenberg
5a07305dc8 not printf()ing %s normally for characters that weren't isprint() made things
go weird, had to remove this. I should use trio soon for all the *printf()
stuff as this is too broken
2000-11-15 15:36:41 +00:00
Daniel Stenberg
56c0c67dff 'use strict' compliant
complains better if input files are missing for a test case
replaced exit-calls with returns instead
2000-11-15 12:13:24 +00:00
Daniel Stenberg
885184aa14 proxy authorization test case 2000-11-15 12:06:59 +00:00
Daniel Stenberg
e0e67812de now sorts the test cases when "all" is used 2000-11-15 08:21:14 +00:00
Daniel Stenberg
eb72e001a7 'use strict' compliant 2000-11-15 07:09:37 +00:00
Daniel Stenberg
cdfa5f5d7b removed some /= 256 that was wrongly left 2000-11-14 11:56:16 +00:00
Daniel Stenberg
0c19d2518c added help text on -h 2000-11-14 10:28:25 +00:00
Daniel Stenberg
e64b8a8f86 more descriptions 2000-11-14 10:24:26 +00:00
Daniel Stenberg
e2641a394d removed lots of external program dependencies (for windows compliance)
added lots of comments
added -s for short output and made it possible to run specific test cases
from the command line
2000-11-14 10:18:44 +00:00
Daniel Stenberg
bd3dca96f6 somewhat more functioning FTP 2000-11-13 20:47:09 +00:00
Daniel Stenberg
3cd77a19ca basic and early ftp support 2000-11-13 19:58:40 +00:00
Daniel Stenberg
e02affb5d0 logs stderr as well now, which is good if the program crashes, and also
dumps more information in case curl doesn't return success
2000-11-13 18:34:27 +00:00
Daniel Stenberg
24f9ae1f72 *** empty log message *** 2000-11-13 18:23:52 +00:00
Daniel Stenberg
2bd70e1351 moved the followlocation field from the http struct to the urldata struct
since it has to survive http struct deletion
2000-11-13 18:23:21 +00:00
Daniel Stenberg
336124c3dc updated 2000-11-13 16:07:17 +00:00
Daniel Stenberg
8e735d1eea converted shell script to perl 2000-11-13 16:06:16 +00:00
Daniel Stenberg
aa9a60287d more test case data 2000-11-13 16:05:39 +00:00
Daniel Stenberg
6736c1610c removed the check that prevents -T and -o being used simultaneously! 2000-11-13 11:59:19 +00:00
Daniel Stenberg
1cc8af2779 if the server is already running when the script is started, it now verifies
that it actually is our test server that runs
2000-11-13 11:45:41 +00:00
Daniel Stenberg
bfb118e42a Added space after the Cookie: header keyword 2000-11-13 11:29:32 +00:00
Daniel Stenberg
3f0aa0648f defaults to run all available test cases in (1 - last) order 2000-11-13 09:51:01 +00:00
Daniel Stenberg
a58e336d85 updated test cases 2000-11-13 09:44:39 +00:00
Daniel Stenberg
27435f0648 new pid stuff, more filters, various fixes 2000-11-13 09:43:40 +00:00
Daniel Stenberg
69e82e7383 changed pid stuff, made it work with rfc1867 posts and made it work better
on paths
2000-11-13 09:42:58 +00:00
Daniel Stenberg
b2daec2477 more details added 2000-11-13 09:41:47 +00:00
Daniel Stenberg
c605f81a09 Jörg updated the list of exported functions 2000-11-13 08:36:17 +00:00
Daniel Stenberg
d5b06bcf3b replaced by a working server! 2000-11-13 08:03:16 +00:00
Daniel Stenberg
d5e6404b8b uses the new httpd server, runs the tests much faster 2000-11-13 08:02:26 +00:00
Daniel Stenberg
bc84fe1cf3 new perl http server that works better 2000-11-13 08:02:02 +00:00
Daniel Stenberg
460aa295e0 Chris Faherty fixed a free-twice problem 2000-11-13 07:51:23 +00:00
Daniel Stenberg
143ff23c4f updated config file section 2000-11-12 15:14:35 +00:00
Daniel Stenberg
6195412005 Added empty actions for all: and install: 2000-11-12 15:11:50 +00:00
Daniel Stenberg
4e120f34a5 The last few days of changes 2000-11-10 15:26:48 +00:00
Daniel Stenberg
14bcdcfcdd test files 2000-11-10 15:24:54 +00:00
Daniel Stenberg
3c0194bb72 initial checkin 2000-11-10 15:24:09 +00:00
Daniel Stenberg
172f0ba12d the tests dir is added 2000-11-10 14:42:06 +00:00
Daniel Stenberg
4035543763 set type before checking --head size, as the type may cause the server
to return different sizes
2000-11-10 13:42:45 +00:00
Daniel Stenberg
920579ba11 doing an ftp upload append that was already completed resulted in a
"hang", it now results in an error instead
2000-11-10 11:28:01 +00:00
Daniel Stenberg
1ff573c649 added getpass_r check 2000-11-10 09:19:47 +00:00
Daniel Stenberg
7b5c551835 adjusted to the changed getpass_r() 2000-11-10 09:19:09 +00:00
Daniel Stenberg
a5b2eb7962 new interface, updated Angus' license, dependent on HAVE_GETPASS_R 2000-11-10 09:18:25 +00:00
Daniel Stenberg
78423c5899 Venkataramana Mokkapati corrected a cookie parser bug 2000-11-10 08:10:04 +00:00
Daniel Stenberg
2bcb8abf40 haxx.nu => haxx.se 2000-11-09 12:51:43 +00:00
Daniel Stenberg
b32bf42763 Added RSAglue/rsaref lib check if the crypto lib is there but the ssl lib
check fails.
2000-11-09 12:35:45 +00:00
Daniel Stenberg
61fb8fea10 cleaned up the thread-safe checks into separate functions, added check for
gethostbyname() in the socket lib as it seems some systems need it
2000-11-08 14:27:46 +00:00
Daniel Stenberg
c0a44b4b9b Added typecast to localtime_r() 2000-11-07 23:09:08 +00:00
Daniel Stenberg
ef8741d23c removed the perror() outputs as they did nothing good to us 2000-11-07 07:33:40 +00:00
Daniel Stenberg
56548f9a13 getpass_r() is the new getpass name for thread-safe getpass! 2000-11-06 23:18:50 +00:00
Daniel Stenberg
36000e5287 Added T. Bharath to the list of contributors 2000-11-06 23:12:36 +00:00
Daniel Stenberg
8cb15395d0 Added descriptions for: CURLOPT_PASSWDDATA, CURLOPT_PASSWDFUNCTION,
CURLOPT_CAINFO and CURLOPT_SSL_VERIFYPEER.
2000-11-06 23:11:23 +00:00
Daniel Stenberg
4ccda6d692 Added CURLINFO_SSL_VERIFYRESULT 2000-11-06 22:59:05 +00:00
Daniel Stenberg
7390c3a8af bugfixes and improvements 2000-11-06 22:56:46 +00:00
Daniel Stenberg
e5e259030f removed bad mirror, added text about source contents (that should be here
according to the source license)
2000-11-06 22:55:59 +00:00
Daniel Stenberg
9f4f16b55d new getpass proto and function pointer usage 2000-11-06 22:53:50 +00:00
Daniel Stenberg
e05922c428 modified pgrsTime() to the new functionality 2000-11-06 15:32:16 +00:00
Daniel Stenberg
71fb701168 adjusted the time-keeping function to work better for location following
requests
2000-11-06 15:31:10 +00:00
Daniel Stenberg
b6bb734215 Emmanuel Tychon found a problem when specifying user-name only in a URL
(and the password entered interactively). This fix also includes proper
URL-decoding of the user name and password if specified in the URL.
2000-11-06 08:12:30 +00:00
Daniel Stenberg
e7736324b4 David Odin (aka DindinX) for MandrakeSoft, tiny example with GTK 2000-11-03 14:47:07 +00:00
Daniel Stenberg
e0e01e5a59 error code fix 2000-11-02 14:34:46 +00:00
Daniel Stenberg
852b664e45 added signal in case sigaction is missing 2000-11-01 08:19:10 +00:00
Daniel Stenberg
e6cdb68a88 adjusted to the new packages dir 2000-10-31 09:54:29 +00:00
Daniel Stenberg
349811f3da removed, see packages/Linux/RPM 2000-10-31 09:53:54 +00:00
Daniel Stenberg
823785c53e new package related file 2000-10-31 09:50:22 +00:00
Daniel Stenberg
1c0fd24a36 removed extra comma in the CURLINFO enum typedef 2000-10-30 23:17:06 +00:00
Daniel Stenberg
5c0b2f29b9 Added CURLOPT_SSL_VERIFYPEER and CURLOPT_CAINFO 2000-10-30 23:15:15 +00:00
Daniel Stenberg
e446edc288 the verify cert stuff is now added! 2000-10-30 15:07:58 +00:00
Daniel Stenberg
b5d152caf7 T. Bharath's ssl patch 2000-10-30 12:43:08 +00:00
Daniel Stenberg
6f7dcf3f22 typecast the localtime_r() return code so that it doesn't warn even if the
function prototype is missing
2000-10-30 11:54:27 +00:00
Daniel Stenberg
0cff279063 new urldata ssl layout and T. Bharath brought the new SSL cert verify function 2000-10-30 11:53:40 +00:00
Daniel Stenberg
09ba856e39 Added section 4.8 I found a bug and did some minor cosmetics 2000-10-27 12:25:00 +00:00
Daniel Stenberg
1df033a1c5 Added description on how to use the newly supported multiple -d options 2000-10-27 10:52:38 +00:00
Daniel Stenberg
3264ce04ee Added sigaction check 2000-10-27 10:52:08 +00:00
Daniel Stenberg
3b0d49e1c9 post 7.4.1 changes 2000-10-27 10:51:14 +00:00
Daniel Stenberg
f6daff475f removed old unused getpass() leftovers 2000-10-26 21:59:54 +00:00
Daniel Stenberg
9d0d8280e9 Georg Horn provided a fix for the timeout signal stuff. Finally the timeout
switch should work under most unixes (requires sigaction())
2000-10-26 21:57:12 +00:00
Daniel Stenberg
cdfb83e0e3 removed getpass-check since getpass() is no longer being used 2000-10-26 10:32:31 +00:00
Daniel Stenberg
02037971ed renamed getpass() to my_getpass() and it is now thread-safe and should
disable passwd-echoing on win32 (supplied by Björn Stenberg)
2000-10-26 10:32:04 +00:00
Daniel Stenberg
a5b01cf4e8 Kevin Roth's bugreport with config files containing '-v defaulturl' is now
fixed
2000-10-26 08:15:13 +00:00
Daniel Stenberg
68c231e1b0 Kevin P Roth's idea of supporting multiple -d options was turned into reality 2000-10-26 07:06:52 +00:00
Daniel Stenberg
949eaf8ad4 Replaced the former bug report email address with the new curl-bug@haxx.se 2000-10-25 07:43:03 +00:00
Daniel Stenberg
950110ecb1 Added a few ideas 2000-10-25 07:42:23 +00:00
Daniel Stenberg
5f8e93d3b0 tiny spell correction 2000-10-25 07:41:58 +00:00
Daniel Stenberg
e4a7e18a0c compiles on Linux now 2000-10-25 07:41:11 +00:00
Daniel Stenberg
8f5ffd94a2 the configure script dynamically gets the version from the include file now
which lets the maketgz skip updating the configure.in file
2000-10-23 13:56:12 +00:00
Daniel Stenberg
c44b10de41 remote_port used in Host: headers only when non-default 2000-10-20 13:48:38 +00:00
Daniel Stenberg
135cc036aa made the speedcheck actually work again 2000-10-17 14:53:03 +00:00
Daniel Stenberg
f6163b375f 7.4.1 commit 2000-10-16 13:52:05 +00:00
Daniel Stenberg
b2d73c50d3 pre5 and pre6 fixes 2000-10-12 09:14:57 +00:00
Daniel Stenberg
834b7de33c Added lib/libcurl.def for win32 DLL creations 2000-10-12 09:13:55 +00:00
Daniel Stenberg
debdd93e1b just removed some example lines in the top comment 2000-10-12 09:13:22 +00:00
Daniel Stenberg
4e8ddedc8f Jörn added glob_cleanup() 2000-10-12 09:12:24 +00:00
Daniel Stenberg
751d503f54 sprintf() => snprintf() 2000-10-12 08:22:16 +00:00
Daniel Stenberg
b2e47dfde4 updated to better reflect reality 2000-10-11 10:59:36 +00:00
Daniel Stenberg
0af8201cc2 make curl capable of using the mozilla SSL engine 2000-10-11 10:59:16 +00:00
Daniel Stenberg
7717212912 free the URL string if that was allocated 2000-10-11 10:58:37 +00:00
Daniel Stenberg
ccb2b5d22c free the FTP struct already in the _done() function 2000-10-11 10:57:52 +00:00
Daniel Stenberg
85174ed358 memory leak adjusts 2000-10-11 10:29:25 +00:00
Daniel Stenberg
111d1d09d3 removed the header that confuses PHP 2000-10-09 22:29:35 +00:00
Daniel Stenberg
4f5a4c9bd5 added the bool typedef, moved here from curl/curl.h 2000-10-09 21:36:38 +00:00
Daniel Stenberg
8c62e337b0 bool typedef fix 2000-10-09 21:35:40 +00:00
Daniel Stenberg
51bcdb472b use this to analyze the memory debug logs MALLOCDEBUG will generate 2000-10-09 11:31:55 +00:00
Daniel Stenberg
5ee185f420 just too many to mention 2000-10-09 11:25:40 +00:00
Daniel Stenberg
fb739ac130 Added commented MALLOCDEBUG stuff for memory debugging 2000-10-09 11:24:49 +00:00
Daniel Stenberg
cdd91bed46 I commented the -DMALLOCDEBUG flag to make it easier to add 2000-10-09 11:24:18 +00:00
Daniel Stenberg
9defb83930 added memory debugging support 2000-10-09 11:13:17 +00:00
Daniel Stenberg
0f8facb49b added memory debugging include file 2000-10-09 11:12:34 +00:00
Daniel Stenberg
d49d05bce6 added for memory leak debugging etc 2000-10-09 11:11:43 +00:00
Daniel Stenberg
1e2e6a4e33 GetHost() did not properly assign the third argument pointer! 2000-10-08 12:50:51 +00:00
Daniel Stenberg
5b39a48e22 corrected the --longoption parser 2000-10-06 12:45:05 +00:00
Daniel Stenberg
2918836cef removed include "writeout.h" 2000-10-06 11:06:20 +00:00
Daniel Stenberg
b900318d8d Jörg's updated makefile 2000-10-06 11:03:43 +00:00
Daniel Stenberg
c58dc8f82f the --interface code doesn't work on win32 and is #ifndef WIN32 now 2000-10-06 11:03:20 +00:00
Daniel Stenberg
0ddacf929a added for the win32 version 2000-10-06 11:02:48 +00:00
Daniel Stenberg
a513e97464 moved the src/config.h stuff to the bottom, as automake was adding include
stuff to ../src in the lib directory's Makefile.in otherwise!
2000-10-06 10:40:43 +00:00
Daniel Stenberg
03a56b3e56 HTTP resume fix, now the range pointer may be allocated 2000-10-06 06:28:39 +00:00
Daniel Stenberg
18f67852be filled in more information on the options 2000-10-04 13:09:15 +00:00
Daniel Stenberg
693aab0e95 size_request and size_header added to the -w description 2000-10-04 13:08:54 +00:00
Daniel Stenberg
ccd0f07c41 -w supports size_header and size_request 2000-10-04 13:08:17 +00:00
Daniel Stenberg
5865860ad6 counts header and request size 2000-10-04 13:07:43 +00:00
Daniel Stenberg
bf56377865 Added Jason S. Priebe as contributor 2000-10-03 22:07:09 +00:00
Daniel Stenberg
e012d32e66 documented writeinfo as removed in 7.4 2000-10-03 22:06:26 +00:00
Daniel Stenberg
763797ab3c introduced in libcurl 7.4 2000-10-03 22:05:27 +00:00
Daniel Stenberg
2cdd150723 removed writeinfo stuff 2000-10-03 22:04:04 +00:00
Daniel Stenberg
d46b006f22 add_buffer_send() free()d the buffer *before* it was used! :-O 2000-10-03 16:53:41 +00:00
Daniel Stenberg
033263e696 added the new upload_bufsize to the connectdata struct 2000-10-03 11:05:09 +00:00
Daniel Stenberg
eee5c71aff inits the upload_bufsize at connect time 2000-10-03 11:03:55 +00:00
Daniel Stenberg
f1b8566ea2 new upload-buffer size design that starts with a smallish buffer and increases
its size in case of need
2000-10-03 11:02:52 +00:00
Daniel Stenberg
d3f9b2a490 introduced the new add_buffer() concept that makes the HTTP request to get
sent in only one shot
2000-10-03 11:01:32 +00:00
Daniel Stenberg
398d21696f Added curl_easy_getinfo.3 2000-10-02 06:49:51 +00:00
Daniel Stenberg
99fbcac6b9 added a small suggestion on how to get the curl man page in text format
without nroff
2000-10-02 06:40:14 +00:00
Daniel Stenberg
c23e387928 Uses the new "client-side" writeout function 2000-10-02 06:36:34 +00:00
Daniel Stenberg
ef77d484f0 removed writeout.[ch] and added getinfo.c 2000-10-02 06:32:31 +00:00
Daniel Stenberg
df7b9e7af6 Added writeout.c 2000-10-02 06:32:05 +00:00
Daniel Stenberg
f612f194be writeout.[ch] added in src/ 2000-10-02 06:31:10 +00:00
Daniel Stenberg
dfec172157 moved out from the library and put here, uses the new curl_easy_getinfo() 2000-10-02 06:30:40 +00:00
Daniel Stenberg
888182c16d adjusted for curl_easy_getinfo 2000-10-02 06:29:39 +00:00
Daniel Stenberg
d5ad450db6 getinfo.c replaces the former writeout.c 2000-10-02 06:28:55 +00:00
Daniel Stenberg
b0274a553b Added curl_easy_getinfo() 2000-10-02 06:27:43 +00:00
Daniel Stenberg
e372a440c0 #include <malloc.h> was removed, it causes warnings on openbsd 2000-09-29 06:34:50 +00:00
Daniel Stenberg
91bda5650c include base64.h instead of base64_krb.h 2000-09-28 10:36:31 +00:00
275 changed files with 6349 additions and 1907 deletions

CHANGES (412 changed lines)

@@ -6,6 +6,418 @@
History of Changes
Version 7.5
Daniel (1 December 2000)
- Craig Davison gave us his updates on the VC++ makefiles, so now curl should
build fine with the Microsoft compiler on windows too.
- Fixed the libcurl versioning so that we don't ruin old programs when
releasing new shared library interfaces.
Daniel (30 November 2000)
- Renamed docs/README.curl to docs/MANUAL to better reflect what the document
actually contains.
Daniel (29 November 2000)
- I removed a bunch of '#if 0' sections from the code. They only make things
harder to follow. After all, we do have all older versions in the CVS.
Version 7.5-pre5
Daniel (28 November 2000)
- I filled in more error codes in the man page error code list that had been
lagging.
- James Griffiths mailed me a fine patch that introduces the CURLOPT_MAXREDIRS
libcurl option. When used, it'll prevent location following more than the
set number of times. It is useful to break out of endless redirect-loops.
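For illustration, a minimal sketch of combining CURLOPT_MAXREDIRS with
CURLOPT_FOLLOWLOCATION through the easy interface; the URL and the limit of 5
are placeholders and error handling is left out:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/redirecting");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* follow Location: headers */
        curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5L);      /* but give up after 5 hops */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }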
Daniel (27 November 2000)
- Added two test cases for file://.
Daniel (22 November 2000)
- Added the libcurl CURLOPT_FILETIME option. When set, libcurl tries to get the
modified time of the remote document. This is a special option since it
involves an extra set of commands on FTP servers (it uses the MDTM command,
which is not in RFC 959).
curl_easy_getinfo() got a corresponding CURLINFO_FILETIME to get the time
after a transfer. It'll return zero if CURLOPT_FILETIME wasn't used or if
the time couldn't be obtained.
--head/-I used on an FTP server will now present a 'Last-Modified:' header
if curl could get the time of the specified file.
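A minimal sketch of requesting and reading the document time with the easy
interface; the ftp URL is a placeholder and most error handling is left out:

    #include <curl/curl.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        long filetime = 0;
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
        curl_easy_setopt(curl, CURLOPT_FILETIME, 1L); /* ask for the modified time */
        if(curl_easy_perform(curl) == CURLE_OK &&
           curl_easy_getinfo(curl, CURLINFO_FILETIME, &filetime) == CURLE_OK &&
           filetime > 0) { /* zero means the time couldn't be obtained */
          time_t t = (time_t)filetime;
          printf("Last-Modified: %s", ctime(&t));
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }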
- Added the option '--cacert [file]' to curl, which allows a specified PEM
file to be used to verify the peer's certificate when doing HTTPS
connections. This has been requested, rather recently, by Hulka Bohuslav,
but others have asked for it before as well.
Daniel (21 November 2000)
- Numerous fixes the test suite has brought into the daylight:
* curl_unescape() could return a too long string
* on ftp transfer failures, there could be memory leaks
* ftp CWD could use bad directory names
* memdebug now uses the mprintf() routines for better portability
* free(NULL) removed when doing resumed transfers
- Added a bunch of test cases for FTP.
- General cleanups to produce fewer warnings with gcc -Wall -pedantic.
- I made the tests/ftpserver.pl work with the most commonly used ftp
operations. PORT, PASV, RETR, STOR, LIST, SIZE, USER, PASS all work now. Now
all I have to do is integrate the ftp server doings in the runtests.pl
script so that ftp tests can be run the same way http tests already run.
Daniel (20 November 2000)
- Made libcurl capable of dealing with any-length URLs. The former limit of
4096 bytes was a bit annoying when people wanted to use curl to really make
life tough on a web server. Now, the command line limit is the most annoying
but that can be circumvented by using a config file.
NOTE: there is still a 4096-byte limit on URLs extracted from Location:
headers.
- Corrected the spelling of 'resolve' in two error messages.
- Alexander Kourakos posted a bug report and a patch that corrected it! It
turned out that lynx and wget support lowercase environment variable names
where curl only looked for the uppercase versions. Now curl will use the
lowercase versions if they exist, but if they don't, it'll use the uppercase
versions.
Daniel (17 November 2000)
- curl_formfree() was added. How come no one missed that one before? I ran the
test suite with the malloc debug enabled and got lots of "nice" warnings about
memory leaks. The most serious one was this. There were also leaks in the
cookie handling, and a few errors when curl failed to connect and similar
things. More test cases were added to cover these and to verify that the
problems have been removed.
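For illustration, a minimal sketch of releasing a form with the new
curl_formfree() call; it assumes the form list is built with this era's
curl_formparse() helper (not mentioned in this changelog) and posted via
CURLOPT_HTTPPOST, with a placeholder URL and form fields:

    #include <curl/curl.h>

    int main(void)
    {
      struct HttpPost *post = NULL;
      struct HttpPost *last = NULL;
      CURL *curl;

      /* build a two-part multipart formpost */
      curl_formparse("name=daniel", &post, &last);
      curl_formparse("tool=curl", &post, &last);

      curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload.cgi");
        curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_formfree(post); /* free the whole form chain in one call */
      return 0;
    }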
- Mucho updated config file parser (I'm dead tired of all the bug reports and
weird behaviour I get on the former one). It works slightly differently now,
although I doubt many people will notice the differences. The main
difference being that if you use options that require parameters, they must
both be specified on the same line. With this new parser, you can also
specify long options without '--' and you may separate options and
parameters with ':' or '='. A config file line can now look like:
user-agent = "foobar and something"
Parameters within quotes may contain spaces. Without quotes, they're
expected to be a single non-space word.
Had to patch the command line argument parser a little to make this work.
- Added --url as an option to allow the URL to be specified this way. It makes
way nicer config files. The previous way of specifying URLs in the config
file doesn't work anymore.
Daniel (15 November 2000)
- Using certain characters in usernames or passwords for HTTP authentication
failed. This was due to mprintf() having a silly check for letters: if they
weren't isprint(), they weren't output "as-is". This caused passwords and
usernames containing such characters to fail.
Version 7.4.2
Daniel (15 November 2000)
- 'tests/runtests.pl' now sorts the test cases properly when 'all' is used.
Daniel (14 November 2000)
- I fell over the draft-ietf-ftpext-mlst-12.txt Internet Draft titled
"Extensions to FTP", which defines how the ftp command SIZE can be assumed
to work.
- Laurent Papier posted a bug report about using "-C -" and FTP-uploading a
file that isn't present on the server. The server might then return a 550 and
curl will fail. Should it instead, as Laurent Papier suggests, start
uploading from the beginning as a normal upload?
Daniel (13 November 2000)
- Fixed a crash with the followlocation counter.
- While writing test cases for the test suite, I discovered an old limitation
that prevented -o and -T from being used at the same time. I removed this
immediately as it has no relevance in the current libcurl.
- Chris Faherty fixed a free-twice problem in lib/file.c
- I fixed the perl http server problem in the test suite.
Version 7.4.2 pre4
Daniel (10 November 2000)
- I've (finally) started working on the curl test suite. It is in the new
tests/ directory. It requires sh and perl. There's a TCP server in perl and
most of the other stuff is run by a pretty simple shell script.
I've only made four test cases so far, but it proves the system can work.
- Laurent Papier noticed that curl didn't set TYPE when doing --head checks
for sizes on FTP servers. Some servers seem to return different sizes
depending on whether ASCII or BINARY is used!
- Laurent Papier detected that if you appended an FTP upload and everything was
already uploaded, curl would hang.
- Angus Mackay's getpass_r() in lib/getpass.c is now compliant with the
getpass_r() function it seems some systems actually have.
- Venkataramana Mokkapati detected a bug in the cookie parser and corrected
it. If the cookie was set for the full host name (domain=full.host.com),
the cookie was never sent back because of a faulty length comparison between
the set domain length and the current host name.
Daniel (9 November 2000)
- Added a configure check for gethostbyname in -lsocket (OS/2 seems to need
it). Added a check for RSAglue/rsaref for the cases where libcrypto is found
but libssl isn't. I haven't verified this fix yet though, as I have no
system that requires those libs to build.
Version 7.4.2 pre3
Daniel (7 November 2000)
- Removed perror() outputs from getpass.c. Angus Mackay also agreed to a
slightly modified license of the getpass.c file as the prototype was changed.
Daniel (6 November 2000)
- Added the possibility to set a password callback to use instead of the
built-in one. It is controlled with curl_easy_setopt() of course; the tags
are CURLOPT_PASSWDFUNCTION and CURLOPT_PASSWDDATA.
- Used T. Bharath's thinking and fixed the timers that showed terribly wrong
times when location: headers were followed.
- Emmanuel Tychon discovered that curl didn't really like user names only in
the URL. I corrected this and I also fixed the long-standing problem with
URL-encoded user names and passwords in the URLs. They should work now.
Daniel (2 November 2000)
- When I added --interface, the new error code that was added with it was
inserted in the wrong place and thus all error codes from 35 and upwards got
increased one step. This is now corrected; we're back at the previous
numbers. All new exit codes should be added at the end.
Daniel (1 November 2000)
- Added a check for signal() in the configure script so that if sigaction()
isn't present, we can use signal() instead.
- I'm having a license discussion going on privately. The issue is yet again
GPL-licensed programs that have problems with MPL. I am leaning towards
making a kind of dual-license that will solve this once and for all...
Daniel (31 October 2000)
- Added the packages/ directory. I intend to let this contain some docs and
templates on how to generate custom-format packages for various platforms.
I've now removed the RPM related curl.spec files from the archive root.
Daniel (30 October 2000)
- T. Bharath brought a set of patches that bring new functionality to
curl_easy_getinfo() and curl_easy_setopt(). Now you can request peer
certificate verification with the *setopt() CURLOPT_SSL_VERIFYPEER option
and then use CURLOPT_CAINFO to set the certificate to verify the remote
peer against. After such an operation with a verification request, the
*_getinfo() option CURLINFO_SSL_VERIFYRESULT will return information about
whether the verification succeeded or not.
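A minimal sketch of the new verification options with the easy interface; the
URL and the CA bundle path are placeholders and error handling is left out:

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        long verifyresult = 0;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);        /* request verification */
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/ca.pem"); /* PEM file with CA certs */
        if(curl_easy_perform(curl) == CURLE_OK) {
          curl_easy_getinfo(curl, CURLINFO_SSL_VERIFYRESULT, &verifyresult);
          printf("verify result: %ld\n", verifyresult);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }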
Daniel (27 October 2000)
- Georg Horn brought us a splendid patch that solves the long-standing
annoying problem with timeouts that made curl exit with silly exit codes
(which has been commented out lately). This solution is sigaction() based and
of course then only works for unixes (and only those unixes that actually
have the sigaction() function).
Daniel (26 October 2000)
- Björn Stenberg supplied a patch that fixed the flaw mentioned by Kevin Roth
that made the password get echoed when prompted for interactively. The
getpass() function (now known as my_getpass()) was also fixed to not use any
static buffers. This also means we cannot use the "standard" getpass()
function even for those systems that have it, since it isn't thread-safe.
- Kevin Roth found out that if you'd write a config file with '-v url', the
url would not be used as "default URL" as documented, although if you wrote
it 'url -v' it worked! This has been corrected now.
- Kevin Roth's idea of using multiple -d options on the same command line was
just brilliant, and I couldn't really think of any reason why we shouldn't
support it! The append function always appends '&' and then the new -d
chunk. This enables constructs like the following:
curl -d name=daniel -d age=unknown foobarsite.com
Daniel (24 October 2000)
- I fixed the lib/memdebug.c source so that it compiles on Linux and other
systems. It will be useful one day when someone else but me wants to run the
memory debugging system.
Daniel (23 October 2000)
- I modified the maketgz and configure scripts, so that the configure script
will fetch the version number from the include/curl/curl.h header files, and
then the maketgz doesn't have to rebuild the configure script when I build
release-archives.
- Björn Stenberg and Linus Nielsen correctly pointed out that curl was silly
enough to not allow @-letters in passwords when they were specified with the
-u or -U flags (CURLOPT_USERPWD and CURLOPT_PROXYUSERPWD). This also
suggests that curl probably should url-decode the password piece of an URL
so that you could pass an encoded @-letter there...
Daniel (20 October 2000)
- Yet another http server barfed on curl's requests that always include the
port number in the Host: header. I now only include the port number if it
isn't the default (80 for HTTP, 443 for HTTPS). www.perl.com turned out to
run one of those nasty servers.
- The PHP4 module for curl had problems with referer that seems to have been
corrected just yesterday. (Sterling Hughes of the PHP team confirmed this)
Daniel (17 October 2000)
- Vladimir Oblomov reported that the -Y and -y options didn't work. They
didn't work for me either. This once again proves we should have that test
suite...
- I finally changed the error message libcurl returns if you try a https://
URL when the library wasn't built with SSL enabled. It will now return this
error:
"libcurl was built with SSL disabled, https: not supported!"
I really hope it will make it a bit clearer to users where the actual
problem lies.
Version 7.4.1
Daniel (16 October 2000)
- I forgot to remove some of the malloc debug defines from the makefiles in
the release archive (of course).
Version 7.4
Daniel (16 October 2000)
- The buffer overflow mentioned below was posted to bugtraq on Friday 13th.
Daniel (12 October 2000)
- Colin Robert Phipps elegantly corrected a buffer overflow. It could be used
by an evil ftp server to crash curl. I took the opportunity of replacing a
few other sprintf()s with snprintf()s as well.
Daniel (11 October 2000)
- Found some more memory leaks. This new simple memory debugger has turned out
really useful!
Version 7.4 pre6
Daniel (9 October 2000)
- Florian Koenig pointed out that the bool typedef in the curl/curl.h include
file was breaking PHP 4.0.3 compiling. The bool typedef is not used in the
public interface and was wrongly inserted in that header file.
- Jörg Hartroth corrected a minor memory leak in the src/urlglob.c stuff. It
didn't harm anyone since the memory is free()ed on exit anyway.
- Corrected the src/main.c. We use the _MPRINTF_REPLACE #define to use our
libcurl-printf() functions. This gives us snprintf() et al on all
platforms. I converted the allocated useragent string to one that uses a
local buffer.
- I've set an #if 0 section around the Content-Transfer-Encoding header
generated in lib/formdata.c. This will hopefully make curl do more
PHP-friendly multi-part posts.
Version 7.4 pre5
Daniel (9 October 2000)
- Nico Baggus found out that curl's ability to force an ASCII download when
using FTP was no longer working! I corrected this. This problem was probably
introduced when I redesigned libcurl for version 7.
- Georg Horn provided a source example that proved a memory leak in libcurl.
I added simple memory debugging facilities and now we can make libcurl log
all memory fiddling functions. An additional perl script is used to analyze
the output logfile and to match malloc()s with free()s etc. The memory leak
Georg found turned out to be the main cookie struct that cookie_cleanup()
didn't free! The perl script is named memanalyze.pl and it is available in
the CVS repository, not in the release archive.
Daniel (8 October 2000)
- Georg Horn found a GetHost() problem. It turned out it never assigned the
pointer in the third argument properly! This could cause a crash, or at best
a memory leak!
Version 7.4 pre4
Daniel (6 October 2000)
- Is the -F post following the RFC 1867 spec? We had this discussion on the
mailing list since it appears curl can't post -F form posts to a PHP
receiver... I've been in touch with the PHP developers about this.
- Domenico Andreoli found out that the long option '--proxy' wasn't working
anymore! The option parser got confused when I added the --proxytunnel for
7.3. This was indeed a very old flaw that hasn't turned up until now...
- Jörn Hartroth provided patches, updated makefiles and two new files for DLL
stuff on win32. He also pointed out that lib source files were compiled with
-I../src which isn't only wrong but plain stupid!
- Troels Walsted Hansen fixed a problem with HTTP resume. Curl previously used
a local variable badly, that could lead to crashes.
Version 7.4 pre3
Daniel (4 October 2000)
- More docs written. The curl_easy_getinfo.3 man page is now pretty accurate,
as is the -w section in curl.1. I added two options to enable the user to
get information about the received headers' size and the size of the HTTP
request. T. Bharath requested them.
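A minimal sketch of fetching those two sizes through curl_easy_getinfo(),
assuming CURLINFO_HEADER_SIZE and CURLINFO_REQUEST_SIZE are the library-side
counterparts of the new -w variables; the URL is a placeholder and error
handling is left out:

    #include <curl/curl.h>
    #include <stdio.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        long hdrsize = 0;
        long reqsize = 0;
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        if(curl_easy_perform(curl) == CURLE_OK) {
          curl_easy_getinfo(curl, CURLINFO_HEADER_SIZE, &hdrsize);  /* received header bytes */
          curl_easy_getinfo(curl, CURLINFO_REQUEST_SIZE, &reqsize); /* sent request bytes */
          printf("headers: %ld bytes, request: %ld bytes\n", hdrsize, reqsize);
        }
        curl_easy_cleanup(curl);
      }
      return 0;
    }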
Daniel (3 October 2000)
- Corrected a severe free()-before-use in the new add_buffer_send()! ;-)
Version 7.4 pre2
Daniel (3 October 2000)
- Jason S. Priebe sent me patches that changed the way curl issues HTTP
requests. The entire request is now issued in one single shot. It didn't do
this previously, and since the common browsers do it this way, some sites
have turned out to work with browsers but not with
curl! Although this is not a client-side problem, we want to be able to
fully emulate browsers, and thus we have now adjusted the networking layer
to slightly more appear as a browser. I adjusted Jason's patch, the faults
are probably mine.
Daniel (2 October 2000)
- Anyone who ever uploaded data with curl on a slow link has noticed that the
progress meter is updated very infrequently. That is due to the large buffer
size curl is using. It reads 50Kb and sends it, updates the progress meter
and loops. 50Kb is very much on a slow link, although it is pretty neat to
use on a fast one.
I've now made an adjustment that makes curl use a 2Kb buffer for uploads to
start with. If curl's average upload speed is faster than buffer size bytes
per second, curl will increase the used buffer size up to max 50Kb. It
should make the progress meter work better.
Version 7.4 pre1
Daniel (29 September 2000)
- Ripped out the -w stuff from the library and put in the curl tool. It gets
all the relevant info from the library using the new curl_easy_getinfo()
function.
- brad at openbsd.org mailed me a patch that corrected my kerberos mistake and
removed a compiler warning from hostip.c that OpenBSD people get.
Daniel (28 September 2000)
- Of course (I should probably get punished somehow) I didn't properly correct
the #include lines for the base64 stuff in the kerberos sources in the just
released 7.3 package. They still include the *_krb.h files! Now, the error
is sooo very easy to spot and fix so I won't bother with a quick bug fix
release. I'll post a patch whenever one is needed instead. It'll be
available in the CVS in a few minutes anyway.
Version 7.3
Daniel (28 September 2000)

FILES (21 changed lines)

@@ -3,15 +3,13 @@ FILES
LEGAL
MPL-1.0.txt
README
*spec
*spec.in
docs/BUGS
docs/CONTRIBUTE
docs/FAQ
docs/FEATURES
docs/INSTALL
docs/INTERNALS
docs/README.curl
docs/MANUAL
docs/README.win32
docs/README.libcurl
docs/RESOURCES
@@ -49,6 +47,8 @@ src/setup.h
src/urlglob.c
src/urlglob.h
src/version.h
src/writeout.c
src/writeout.h
src/*.in
src/*.am
src/mkhelp.pl
@@ -60,10 +60,23 @@ lib/*in
lib/*am
lib/Makefile.vc6
lib/*m32
lib/libcurl.def
include/README
include/Makefile.in
include/Makefile.am
include/curl/*.h
include/curl/Makefile.in
include/curl/Makefile.am
packages/Linux/RPM/curl-ssl.spec
packages/Linux/RPM/curl.spec
packages/Linux/RPM/make_curl_rpm
packages/Linux/RPM/README
packages/Win32/README
packages/README
tests/Makefile.am
tests/Makefile.in
tests/runtests.pl
tests/README
tests/httpserver.pl
tests/ftpserver.pl
tests/data/*.txt

Makefile.am

@@ -6,5 +6,7 @@ AUTOMAKE_OPTIONS = foreign no-dependencies
EXTRA_DIST = curl.spec curl-ssl.spec
SUBDIRS = docs lib src include
SUBDIRS = docs lib src include tests
test:
@(cd tests; make quiet-test)


@@ -42,7 +42,7 @@
############################################################################
all:
./configure
./configure
make
ssl:
@@ -58,9 +58,17 @@ mingw32-ssl:
cd src; make -f Makefile.m32 SSL=1
vc:
cd lib; nmake -f Makefile.vc6
cd src; nmake -f Makefile.vc6
cd lib
nmake -f Makefile.vc6
cd ..\src
nmake -f Makefile.vc6
vc-ssl:
cd lib
nmake -f Makefile.vc6 release-ssl
cd ..\src
nmake -f Makefile.vc6
cygwin:
./configure
make

README (7 changed lines)

@@ -8,7 +8,7 @@ README
Curl is a command line tool for transferring data specified with URL
syntax. Find out how to use Curl by reading the curl.1 man page or the
README.curl document. Find out how to install Curl by reading the INSTALL
MANUAL document. Find out how to install Curl by reading the INSTALL
document.
libcurl is a library that Curl is using to do its job. It is readily
@@ -25,7 +25,6 @@ README
Sweden -- ftp://ftp.sunet.se/pub/www/utilities/curl/
Germany -- ftp://ftp.fu-berlin.de/pub/unix/network/curl/
Australia -- http://curl.linuxworx.com.au/
To download the very latest source off the CVS server do this:
@@ -42,3 +41,7 @@ README
cvs -d :pserver:anonymous@cvs.curl.sourceforge.net:/cvsroot/curl logout
(you're off the hook!)
Curl contains pieces of source code that is Copyright (c) 1998, 1999
Kungliga Tekniska Högskolan. This notice is included here to comply with the
distribution terms.


@@ -79,8 +79,8 @@
/* Define if you have the gethostname function. */
#undef HAVE_GETHOSTNAME
/* Define if you have the getpass function. */
#undef HAVE_GETPASS
/* Define if you have the getpass_r function. */
#undef HAVE_GETPASS_R
/* Define if you have the getservbyname function. */
#undef HAVE_GETSERVBYNAME
@@ -112,6 +112,12 @@
/* Define if you have the setvbuf function. */
#undef HAVE_SETVBUF
/* Define if you have the sigaction function. */
#undef HAVE_SIGACTION
/* Define if you have the signal function. */
#undef HAVE_SIGNAL
/* Define if you have the socket function. */
#undef HAVE_SOCKET

configure.in

@@ -2,7 +2,9 @@ dnl $Id$
dnl Process this file with autoconf to produce a configure script.
AC_INIT(lib/urldata.h)
AM_CONFIG_HEADER(config.h src/config.h)
AM_INIT_AUTOMAKE(curl,"7.3")
VERSION=`sed -ne 's/^#define LIBCURL_VERSION "\(.*\)"/\1/p' include/curl/curl.h`
AM_INIT_AUTOMAKE(curl,$VERSION)
AM_PROG_LIBTOOL
dnl
@@ -24,14 +26,230 @@ dnl The install stuff has already been taken care of by the automake stuff
dnl AC_PROG_INSTALL
AC_PROG_MAKE_SET
AC_DEFUN(CURL_CHECK_LOCALTIME_R,
[
dnl check for a few thread-safe functions
AC_CHECK_FUNCS(localtime_r,[
AC_MSG_CHECKING(whether localtime_r is declared)
AC_EGREP_CPP(localtime_r,[
#include <time.h>],[
AC_MSG_RESULT(yes)],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(whether localtime_r with -D_REENTRANT is declared)
AC_EGREP_CPP(localtime_r,[
#define _REENTRANT
#include <time.h>],[
AC_DEFINE(NEED_REENTRANT)
AC_MSG_RESULT(yes)],
AC_MSG_RESULT(no))])])
])
AC_DEFUN(CURL_CHECK_INET_NTOA_R,
[
dnl determine if function definition for inet_ntoa_r exists.
AC_CHECK_FUNCS(inet_ntoa_r,[
AC_MSG_CHECKING(whether inet_ntoa_r is declared)
AC_EGREP_CPP(inet_ntoa_r,[
#include <arpa/inet.h>],[
AC_DEFINE(HAVE_INET_NTOA_R_DECL)
AC_MSG_RESULT(yes)],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(whether inet_ntoa_r with -D_REENTRANT is declared)
AC_EGREP_CPP(inet_ntoa_r,[
#define _REENTRANT
#include <arpa/inet.h>],[
AC_DEFINE(HAVE_INET_NTOA_R_DECL)
AC_DEFINE(NEED_REENTRANT)
AC_MSG_RESULT(yes)],
AC_MSG_RESULT(no))])])
])
AC_DEFUN(CURL_CHECK_GETHOSTBYADDR_R,
[
dnl check for number of arguments to gethostbyaddr_r. it might take
dnl either 5, 7, or 8 arguments.
AC_CHECK_FUNCS(gethostbyaddr_r,[
AC_MSG_CHECKING(if gethostbyaddr_r takes 5 arguments)
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
struct hostent_data hdata;
int rc;
rc = gethostbyaddr_r(address, length, type, &h, &hdata);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_5)
ac_cv_gethostbyaddr_args=5],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyaddr_r with -D_REENTRANT takes 5 arguments)
AC_TRY_COMPILE([
#define _REENTRANT
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
struct hostent_data hdata;
int rc;
rc = gethostbyaddr_r(address, length, type, &h, &hdata);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_5)
AC_DEFINE(NEED_REENTRANT)
ac_cv_gethostbyaddr_args=5],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyaddr_r takes 7 arguments)
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
char buffer[8192];
int h_errnop;
struct hostent * hp;
hp = gethostbyaddr_r(address, length, type, &h,
buffer, 8192, &h_errnop);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_7)
ac_cv_gethostbyaddr_args=7],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyaddr_r takes 8 arguments)
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
char buffer[8192];
int h_errnop;
struct hostent * hp;
int rc;
rc = gethostbyaddr_r(address, length, type, &h,
buffer, 8192, &hp, &h_errnop);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_8)
ac_cv_gethostbyaddr_args=8],[
AC_MSG_RESULT(no)
have_missing_r_funcs="$have_missing_r_funcs gethostbyaddr_r"])])])])])
])
AC_DEFUN(CURL_CHECK_GETHOSTBYNAME_R,
[
dnl check for number of arguments to gethostbyname_r. it might take
dnl either 3, 5, or 6 arguments.
AC_CHECK_FUNCS(gethostbyname_r,[
AC_MSG_CHECKING(if gethostbyname_r takes 3 arguments)
AC_TRY_RUN([
#include <string.h>
#include <sys/types.h>
#include <netdb.h>
int
main () {
struct hostent h;
struct hostent_data hdata;
char *name = "localhost";
int rc;
memset(&h, 0, sizeof(struct hostent));
memset(&hdata, 0, sizeof(struct hostent_data));
rc = gethostbyname_r(name, &h, &hdata);
exit (rc != 0 ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_3)
ac_cv_gethostbyname_args=3],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyname_r with -D_REENTRANT takes 3 arguments)
AC_TRY_RUN([
#define _REENTRANT
#include <string.h>
#include <sys/types.h>
#include <netdb.h>
int
main () {
struct hostent h;
struct hostent_data hdata;
char *name = "localhost";
int rc;
memset(&h, 0, sizeof(struct hostent));
memset(&hdata, 0, sizeof(struct hostent_data));
rc = gethostbyname_r(name, &h, &hdata);
exit (rc != 0 ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_3)
AC_DEFINE(NEED_REENTRANT)
ac_cv_gethostbyname_args=3],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyname_r takes 5 arguments)
AC_TRY_RUN([
#include <sys/types.h>
#include <netdb.h>
int
main () {
struct hostent *hp;
struct hostent h;
char *name = "localhost";
char buffer[8192];
int h_errno;
hp = gethostbyname_r(name, &h, buffer, 8192, &h_errno);
exit (hp == NULL ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_5)
ac_cv_gethostbyname_args=5],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyname_r takes 6 arguments)
AC_TRY_RUN([
#include <sys/types.h>
#include <netdb.h>
int
main () {
struct hostent h;
struct hostent *hp;
char *name = "localhost";
char buf[8192];
int rc;
int h_errno;
rc = gethostbyname_r(name, &h, buf, 8192, &hp, &h_errno);
exit (rc != 0 ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_6)
ac_cv_gethostbyname_args=6],[
AC_MSG_RESULT(no)
have_missing_r_funcs="$have_missing_r_funcs gethostbyname_r"],
[ac_cv_gethostbyname_args=0])],
[ac_cv_gethostbyname_args=0])],
[ac_cv_gethostbyname_args=0])],
[ac_cv_gethostbyname_args=0])])
])
dnl **********************************************************************
dnl Checks for libraries.
dnl **********************************************************************
dnl nsl lib?
dnl gethostbyname in the nsl lib?
AC_CHECK_FUNC(gethostbyname, , AC_CHECK_LIB(nsl, gethostbyname))
if test "$ac_cv_lib_nsl_gethostbyname" != "yes" -a "$ac_cv_func_gethostbyname" != "yes"; then
dnl gethostbyname in the socket lib?
AC_CHECK_FUNC(gethostbyname, , AC_CHECK_LIB(socket, gethostbyname))
fi
dnl At least one system has been identified to require BOTH nsl and
dnl socket libs to link properly.
if test "$ac_cv_lib_nsl_gethostbyname" = "$ac_cv_func_gethostbyname"; then
@@ -192,6 +410,23 @@ else
dnl SSL libs NOTE: it is important to do this AFTER the crypto lib
AC_CHECK_LIB(ssl, SSL_connect)
if test "$ac_cv_lib_ssl_SSL_connect" != yes; then
dnl we didn't find the SSL lib, try the RSAglue/rsaref stuff
AC_MSG_CHECKING(for ssl with RSAglue/rsaref libs in use);
OLIBS=$LIBS
LIBS="$LIBS -lRSAglue -lrsaref"
AC_CHECK_LIB(ssl, SSL_connect)
if test "$ac_cv_lib_ssl_SSL_connect" != yes; then
dnl still no SSL_connect
AC_MSG_RESULT(no)
LIBS=$OLIBS
else
AC_MSG_RESULT(yes)
fi
fi
dnl Check for SSLeay headers
AC_CHECK_HEADERS(openssl/x509.h openssl/rsa.h openssl/crypto.h openssl/pem.h openssl/ssl.h openssl/err.h)
@@ -254,200 +489,19 @@ then
AC_DEFINE(DISABLED_THREADSAFE, 1, \
Set to explicitly specify we don't want to use thread-safe functions)
else
dnl check for number of arguments to gethostbyname_r. it might take
dnl either 3, 5, or 6 arguments.
AC_CHECK_FUNCS(gethostbyname_r,[
AC_MSG_CHECKING(if gethostbyname_r takes 3 arguments)
AC_TRY_RUN([
#include <string.h>
#include <sys/types.h>
#include <netdb.h>
int
main () {
struct hostent h;
struct hostent_data hdata;
char *name = "localhost";
int rc;
memset(&h, 0, sizeof(struct hostent));
memset(&hdata, 0, sizeof(struct hostent_data));
rc = gethostbyname_r(name, &h, &hdata);
exit (rc != 0 ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_3)
ac_cv_gethostbyname_args=3],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyname_r with -D_REENTRANT takes 3 arguments)
AC_TRY_RUN([
#define _REENTRANT
dnl dig around for gethostbyname_r()
CURL_CHECK_GETHOSTBYNAME_R()
#include <string.h>
#include <sys/types.h>
#include <netdb.h>
dnl dig around for gethostbyaddr_r()
CURL_CHECK_GETHOSTBYADDR_R()
int
main () {
struct hostent h;
struct hostent_data hdata;
char *name = "localhost";
int rc;
memset(&h, 0, sizeof(struct hostent));
memset(&hdata, 0, sizeof(struct hostent_data));
rc = gethostbyname_r(name, &h, &hdata);
exit (rc != 0 ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_3)
AC_DEFINE(NEED_REENTRANT)
ac_cv_gethostbyname_args=3],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyname_r takes 5 arguments)
AC_TRY_RUN([
#include <sys/types.h>
#include <netdb.h>
dnl poke around for inet_ntoa_r()
CURL_CHECK_INET_NTOA_R()
int
main () {
struct hostent *hp;
struct hostent h;
char *name = "localhost";
char buffer[8192];
int h_errno;
hp = gethostbyname_r(name, &h, buffer, 8192, &h_errno);
exit (hp == NULL ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_5)
ac_cv_gethostbyname_args=5],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyname_r takes 6 arguments)
AC_TRY_RUN([
#include <sys/types.h>
#include <netdb.h>
dnl is there a localtime_r()
CURL_CHECK_LOCALTIME_R()
int
main () {
struct hostent h;
struct hostent *hp;
char *name = "localhost";
char buf[8192];
int rc;
int h_errno;
rc = gethostbyname_r(name, &h, buf, 8192, &hp, &h_errno);
exit (rc != 0 ? 1 : 0); }],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYNAME_R_6)
ac_cv_gethostbyname_args=6],[
AC_MSG_RESULT(no)
have_missing_r_funcs="$have_missing_r_funcs gethostbyname_r"],
[ac_cv_gethostbyname_args=0])],
[ac_cv_gethostbyname_args=0])],
[ac_cv_gethostbyname_args=0])],
[ac_cv_gethostbyname_args=0])])
dnl check for number of arguments to gethostbyaddr_r. it might take
dnl either 5, 7, or 8 arguments.
AC_CHECK_FUNCS(gethostbyaddr_r,[
AC_MSG_CHECKING(if gethostbyaddr_r takes 5 arguments)
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
struct hostent_data hdata;
int rc;
rc = gethostbyaddr_r(address, length, type, &h, &hdata);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_5)
ac_cv_gethostbyaddr_args=5],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyaddr_r with -D_REENTRANT takes 5 arguments)
AC_TRY_COMPILE([
#define _REENTRANT
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
struct hostent_data hdata;
int rc;
rc = gethostbyaddr_r(address, length, type, &h, &hdata);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_5)
AC_DEFINE(NEED_REENTRANT)
ac_cv_gethostbyaddr_args=5],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyaddr_r takes 7 arguments)
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
char buffer[8192];
int h_errnop;
struct hostent * hp;
hp = gethostbyaddr_r(address, length, type, &h,
buffer, 8192, &h_errnop);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_7)
ac_cv_gethostbyaddr_args=7],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(if gethostbyaddr_r takes 8 arguments)
AC_TRY_COMPILE([
#include <sys/types.h>
#include <netdb.h>],[
char * address;
int length;
int type;
struct hostent h;
char buffer[8192];
int h_errnop;
struct hostent * hp;
int rc;
rc = gethostbyaddr_r(address, length, type, &h,
buffer, 8192, &hp, &h_errnop);],[
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_GETHOSTBYADDR_R_8)
ac_cv_gethostbyaddr_args=8],[
AC_MSG_RESULT(no)
have_missing_r_funcs="$have_missing_r_funcs gethostbyaddr_r"])])])])])
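For reference, the 7- and 8-argument forms probed above differ only in how
the resulting hostent is handed back. A brief hedged sketch (hypothetical
helper, not code from this change):

  #include <sys/types.h>
  #include <netdb.h>

  /* hypothetical: reverse-resolve an address with the detected variant */
  static struct hostent *my_reverse(char *addr, int len, int type,
                                    struct hostent *h, char *buf, int buflen)
  {
  #if defined(HAVE_GETHOSTBYADDR_R_7)
    /* Solaris style: the hostent pointer is the return value */
    int h_errnop;
    return gethostbyaddr_r(addr, len, type, h, buf, buflen, &h_errnop);
  #elif defined(HAVE_GETHOSTBYADDR_R_8)
    /* glibc style: int return code plus a result pointer argument */
    struct hostent *result = NULL;
    int h_errnop;
    return (gethostbyaddr_r(addr, len, type, h, buf, buflen,
                            &result, &h_errnop) == 0) ? result : NULL;
  #else
    /* 5-argument (hostent_data) systems are left out of this sketch */
    (void)h; (void)buf; (void)buflen;
    return gethostbyaddr(addr, len, type);
  #endif
  }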
dnl determine if function definition for inet_ntoa_r exists.
AC_CHECK_FUNCS(inet_ntoa_r,[
AC_MSG_CHECKING(whether inet_ntoa_r is declared)
AC_EGREP_CPP(inet_ntoa_r,[
#include <arpa/inet.h>],[
AC_DEFINE(HAVE_INET_NTOA_R_DECL)
AC_MSG_RESULT(yes)],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(whether inet_ntoa_r with -D_REENTRANT is declared)
AC_EGREP_CPP(inet_ntoa_r,[
#define _REENTRANT
#include <arpa/inet.h>],[
AC_DEFINE(HAVE_INET_NTOA_R_DECL)
AC_DEFINE(NEED_REENTRANT)
AC_MSG_RESULT(yes)],
AC_MSG_RESULT(no))])])
dnl check for a few thread-safe functions
AC_CHECK_FUNCS(localtime_r,[
AC_MSG_CHECKING(whether localtime_r is declared)
AC_EGREP_CPP(localtime_r,[
#include <time.h>],[
AC_MSG_RESULT(yes)],[
AC_MSG_RESULT(no)
AC_MSG_CHECKING(whether localtime_r with -D_REENTRANT is declared)
AC_EGREP_CPP(localtime_r,[
#define _REENTRANT
#include <time.h>],[
AC_DEFINE(NEED_REENTRANT)
AC_MSG_RESULT(yes)],
AC_MSG_RESULT(no))])])
fi
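Several of the checks above only find the reentrant declarations when
_REENTRANT is defined, and record that fact in NEED_REENTRANT. A minimal
sketch of how a source file might consume that define (illustrative only,
not necessarily how the curl sources arrange it):

  /* must come before any system header is included */
  #ifdef NEED_REENTRANT
  #  ifndef _REENTRANT
  #    define _REENTRANT
  #  endif
  #endif

  #include <time.h>       /* localtime_r() declaration */
  #include <arpa/inet.h>  /* inet_ntoa_r() declaration, where it exists */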
dnl **********************************************************************
@@ -525,11 +579,15 @@ AC_CHECK_FUNCS( socket \
tcsetattr \
tcgetattr \
perror \
getpass \
closesocket \
setvbuf
setvbuf \
sigaction \
signal \
getpass_r
)
dnl removed 'getpass' check on October 26, 2000
if test "$ac_cv_func_select" != "yes"; then
AC_MSG_ERROR(Can't work without an existing socket() function)
fi
@@ -549,13 +607,12 @@ dnl $PATH:/usr/bin/:/usr/local/bin )
dnl AC_SUBST(RANLIB)
AC_OUTPUT( Makefile \
curl.spec \
curl-ssl.spec \
docs/Makefile \
include/Makefile \
include/curl/Makefile \
src/Makefile \
lib/Makefile )
lib/Makefile \
tests/Makefile)
dnl perl/checklinks.pl \
dnl perl/getlinks.pl \
dnl perl/formfind.pl \

View File

@@ -20,7 +20,10 @@ The License Issue
GNU Public License. We can never re-use sources from a GPL program in curl.
If you add a larger piece of code, you can opt to make that file or set of
files to use a different license as long as they don't enforce any changes to
the rest of the package. Such "separate parts" can not be GPL either.
the rest of the package and they make sense. Such "separate parts" can not be
GPL either (although they should use "GPL compatible" licenses).
Curl and libcurl will soon become dual licensed, MozPL/MITX!
Naming
@@ -82,3 +85,12 @@ Write Access to CVS Repository
course get write access to the CVS repository and then you'll be able to
check-in all your changes straight into the CVS tree instead of sending all
changes by mail as patches. Just ask if this is what you'd want.
Test Cases
Since the introduction of the test suite, we can quickly verify that the main
features work as they are supposed to. To maintain this situation and improve
it, all new features and functions that are added need to be tested. Every
feature that is added should get at least one valid test case that verifies
that it works as documented. If every submitter also posts a few test cases,
it won't end up as a heavy burden on a single person!

131
docs/FAQ
View File

@@ -1,4 +1,4 @@
Updated: August 22, 2000 (http://curl.haxx.se/docs/faq.shtml)
Updated: November 22, 2000 (http://curl.haxx.se/docs/faq.shtml)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -43,9 +43,12 @@ FAQ
4.5.4 "404 Not Found"
4.5.5 "405 Method Not Allowed"
4.6 Can you tell me what error code 142 means?
4.7 How do I keep usernames and passwords secret in Curl command lines?
4.8 I found a bug!
5. libcurl Issues
5.1 Is libcurl thread safe?
5.2 How can I receive all data into a large memory chunk?
6. License Issues
6.1 I have a GPL program, can I use the libcurl library?
@@ -63,7 +66,8 @@ FAQ
cURL (or simply just 'curl') is a command line tool for getting or sending
files using URL syntax. The name is a play on 'Client for URLs', originally
with URL spelled in uppercase to make it obvious it deals with URLs.
with URL spelled in uppercase to make it obvious it deals with URLs. The
fact it can also be pronounced 'see URL' also helped.
Curl supports a range of common internet protocols, currently including
HTTP, HTTPS, FTP, GOPHER, LDAP, DICT and FILE.
@@ -76,7 +80,7 @@ FAQ
transfer library.
Any application is free to use libcurl, even commercial or closed-source
ones. Just make sure changes to the lib itself is made public.
ones. Just make sure changes to the lib itself are made public.
1.3 What is cURL not?
@@ -97,7 +101,7 @@ FAQ
or with PHP.
Curl is not a single-OS program. Curl exists, compiles, builds and runs
under a long range of operating systems, including all modern Unixes,
under a wide range of operating systems, including all modern Unixes,
Windows, Amiga, BeOS, OS/2, OS X, QNX etc.
1.4 When will you make curl do XXXX ?
@@ -117,7 +121,7 @@ FAQ
program or redirect to another file for the next program to interpret.
* I focus on protocol related issues and improvements. If you wanna do more
magic with the supported protocols than curl currently does, changes are
magic with the supported protocols than curl currently does, chances are
big I will agree. If you wanna add more protocols, I may very well
agree.
@@ -135,7 +139,7 @@ FAQ
This may be because of several reasons.
2.1.1. native linker doesn't find openssl
2.1.1. native linker doesn't find openssl
Affected platforms:
Solaris (native cc compiler)
@@ -157,7 +161,7 @@ FAQ
Solution submitted by: Bob Allison <allisonb@users.sourceforge.net>
2.1.2. only the libssl lib is missing
2.1.2. only the libssl lib is missing
If all include files and the libcrypto lib are present, with only the
libssl lib missing according to configure, this is most likely because
@@ -194,7 +198,6 @@ FAQ
brings this functionality.
3. Usage problems
3.1. curl: (1) SSL is disabled, https: not supported
@@ -325,26 +328,26 @@ FAQ
RFC2616 clearly explains the return codes. I'll make a short transcript
here. Go read the RFC for exact details:
4.5.1 "400 Bad Request"
4.5.1 "400 Bad Request"
The request could not be understood by the server due to malformed
syntax. The client SHOULD NOT repeat the request without modifications.
4.5.2 "401 Unauthorized"
4.5.2 "401 Unauthorized"
The request requires user authentication.
4.5.3 "403 Forbidden"
4.5.3 "403 Forbidden"
The server understood the request, but is refusing to fulfill it.
Authorization will not help and the request SHOULD NOT be repeated.
4.5.4 "404 Not Found"
4.5.4 "404 Not Found"
The server has not found anything matching the Request-URI. No indication
is given of whether the condition is temporary or permanent.
4.5.5 "405 Method Not Allowed"
4.5.5 "405 Method Not Allowed"
The method specified in the Request-Line is not allowed for the resource
identified by the Request-URI. The response MUST include an Allow header
@@ -353,9 +356,9 @@ FAQ
4.6. Can you tell me what error code 142 means?
All error codes that are larger than the highest documented error code mean
that curl has existed due to a timeout. There is currentl no nice way for
that curl has exited due to a timeout. There is currently no nice way for
curl to abort from such a condition and that's why it gets this undocumented
error. This is planned to change in a future release.
error. This should be changed in releases after 7.4.1.
4.7. How do I keep usernames and passwords secret in Curl command lines?
@@ -371,27 +374,77 @@ FAQ
at least hide them from being read by human eyes, but that is not what
anyone would call security.
4.8 I found a bug!
It is not a bug if the behaviour is documented. Read the docs first.
If it is a problem with a binary you've downloaded or a package for your
particular platform, try contacting the person who built the package/archive
you have.
If there is a bug, post a bug report in the Curl Bug Track System over at
http://sourceforge.net/bugs/?group_id=976 or mail a detailed bug description
to curl-bug@haxx.se.
Always include as many details as you can think of, including curl version,
operating system name and version, and complete instructions on how to
reproduce the bug.
5. libcurl Issues
5.1. Is libcurl thread safe?
As version seven is slowly marching in as the libcurl version to use, we
have made a serious attempt to address all places in the code where we could
forsee problems for multi-threaded programs. If your system has them, curl
will attempt to use threadsafe functions instead of non-safe ones.
We have attempted to write the entire code adjusted for multi-threaded
programs. If your system has such, curl will attempt to use threadsafe
functions instead of non-safe ones.
I am very interested in once and for all getting some kind of report or
README file from those who have used libcurl in a threaded environment,
since I haven't and I get this question more and more frequently!
5.2 How can I receive all data into a large memory chunk?
You are in full control of the callback function that gets called every time
there is data received from the remote server. You can make that callback do
whatever you want. You do not have to write the received data to a file.
One solution to this problem could be to have a pointer to a struct that you
pass to the callback function. You set the pointer using the
curl_easy_setopt(CURLOPT_FILE) function. Then that pointer will be passed to
the callback instead of a FILE * to a file:
/* imaginary struct */
struct MemoryStruct {
char *memory;
size_t size;
};
/* imaginary callback function */
size_t
WriteMemoryCallback(void *ptr, size_t size, size_t nmemb, void *data)
{
register int realsize = size * nmemb;
struct MemoryStruct *mem = (struct MemoryStruct *)data;
mem->memory = (char *)realloc(mem->memory, mem->size + realsize + 1);
if (mem->memory) {
memcpy(&(mem->memory[mem->size]), ptr, realsize);
mem->size += realsize;
mem->memory[mem->size] = 0;
}
return realsize;
}
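Building on the struct and callback just shown, here is a hedged sketch of
how the pieces could be wired together. It assumes the CURLOPT_WRITEFUNCTION
option names the callback and, as described above, CURLOPT_FILE carries the
custom pointer; the function name and URL are made up and error handling is
left out:

  #include <curl/curl.h>
  #include <curl/easy.h>

  void fetch_to_memory(void)
  {
    struct MemoryStruct chunk;
    CURL *curl;

    chunk.memory = NULL; /* realloc(NULL, ...) acts like malloc() */
    chunk.size = 0;

    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/");
      /* send all received data to WriteMemoryCallback() ... */
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteMemoryCallback);
      /* ... and pass our struct to it instead of a FILE * */
      curl_easy_setopt(curl, CURLOPT_FILE, (void *)&chunk);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      /* chunk.memory now holds chunk.size bytes, zero terminated */
    }
  }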
6. License Issues
Curl and libcurl are released under the MPL, the Mozilla Public License. To
get a really good answer to this or other licensing questions, you should
get a really good answer to your license conflict questions, you should
study the MPL license and the license you are about to use and check for
clashes yourself. This is a brief summary for the cases we get the most
questions. (Parts of this section was enhanced by Bjorn Reese.)
clashes yourself. This section is just a brief summary for the cases we get
the most questions about. (Parts of this section were much enhanced by Bjorn
Reese.)
6.1. I have a GPL program, can I use the libcurl library?
@@ -427,33 +480,33 @@ FAQ
6.2. I have a closed-source program, can I use the libcurl library?
Yes, libcurl does not put any restrictions on the program that uses the
library. If you end up doing changes to the library, only those changes
must be made available, not the ones to your program.
Yes, libcurl does not put any restrictions on the program that uses the
library. If you end up doing changes to the library, only those changes must
be made available, not the ones to your program.
6.3. I have a BSD licensed program, can I use the libcurl library?
Yes, libcurl does not put any restrictions on the program that uses the
library. If you end up doing changes to the library, only those changes
must be made available, not the ones to your program.
Yes, libcurl does not put any restrictions on the program that uses the
library. If you end up doing changes to the library, only those changes must
be made available, not the ones to your program.
6.4. I have a program that uses LGPL libraries, can I use libcurl?
Yes you can. LGPL libraries don't spread to other libraries the same way
GPL ones do.
Yes you can. LGPL libraries don't spread to other libraries the same way GPL
ones do.
However, when you read paragraph (3) of the LGPL license, you'll see that
anyone - at will - may at any time convert that LGPL program into GPL. And
GPL programs can't be distributed together with MPL programs, neither with
(lib)curl source code and not as a binary.
However, when you read paragraph (3) of the LGPL license, you'll see that
anyone - at will - may at any time convert that LGPL program into GPL. And
GPL programs can't be distributed together with MPL programs, neither with
(lib)curl source code and not as a binary.
6.5. Can I modify curl/libcurl for my program and keep the changes secret?
No, you're not allowed to do that.
No, you're not allowed to do that.
6.6. Can you please change the curl/libcurl license to XXXX?
No. We carefully picked this license years ago and a large amount of
people have contributed with source code knowing that this is the license
we use. This license puts the restrictions we want on curl/libcurl and it
does not spread to other programs or libraries that use it.
No. We carefully picked this license years ago and a large amount of people
have contributed with source code knowing that this is the license we
use. This license puts the restrictions we want on curl/libcurl and it does
not spread to other programs or libraries that use it.

View File

@@ -11,41 +11,8 @@ way to proceed is mainly divided in two different ways: the unix way or the
windows way.
If you're using Windows (95, 98, NT) or OS/2, you should continue reading from
the Win32 header below. All other systems should be capable of being installed
as described in the the UNIX header.
PORTS
=====
Just to show off, this is a probably incomplete list of known hardware and
operating systems that curl has been compiled for:
- Ultrix
- SINIX-Z v5
- Alpha DEC OSF 4
- Alpha Digital UNIX v3.2
- Alpha FreeBSD 4.1
- Alpha Linux 2.2.16
- Alpha Tru64 v5.0 5.1
- HP-PA HP-UX 9.X 10.X 11.X
- MIPS IRIX 6.2, 6.5
- Power AIX 4.2, 4.3.1, 4.3.2
- PowerPC Darwin 1.0
- PowerPC Linux
- PowerPC Mac OS X
- Sparc Linux
- Sparc Solaris 2.4, 2.5, 2.5.1, 2.6, 7, 8
- Sparc SunOS 4.1.*
- i386 BeOS
- i386 FreeBSD
- i386 Linux 1.3, 2.0, 2.2, 2.3, 2.4
- i386 NetBSD
- i386 OS/2
- i386 OpenBSD
- i386 Solaris 2.7
- i386 Windows 95, 98, NT, 2000
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k OpenBSD
the Win32 or OS/2 headers further down. All other systems should be capable of
being installed as described below.
UNIX
====
@@ -53,7 +20,9 @@ UNIX
The configure script *always* tries to find a working SSL library unless
explicitly told not to. If you have OpenSSL installed in the default
search path for your compiler/linker, you don't need to do anything
special.
special:
./configure
If you have OpenSSL installed in /usr/local/ssl, you can run configure
like:
@@ -101,9 +70,17 @@ UNIX
Use the executable `curl` in src/ directory.
'make install' copies the curl file to /usr/local/bin/ (or $prefix/bin if
you used the --prefix option to configure) and copies the man pages, the
lib and the include files to a suitable place too.
To install curl on your system, run
make install
This will copy curl to /usr/local/bin/ (or $prefix/bin if you used the
--prefix option to configure) and it copies the man pages, the lib and the
include files to suitable places.
To make sure everything runs as it is supposed to, run the test suite:
make test
KNOWN PROBLEMS
@@ -259,6 +236,39 @@ IBM OS/2
If you're getting huge binaries, probably your makefiles have the -g in
CFLAGS.
PORTS
=====
Just to show off, this is a probably incomplete list of known hardware and
operating systems that curl has been compiled for:
- Ultrix
- SINIX-Z v5
- Alpha DEC OSF 4
- Alpha Digital UNIX v3.2
- Alpha FreeBSD 4.1
- Alpha Linux 2.2.16
- Alpha Tru64 v5.0 5.1
- HP-PA HP-UX 9.X 10.X 11.X
- MIPS IRIX 6.2, 6.5
- Power AIX 4.2, 4.3.1, 4.3.2
- PowerPC Darwin 1.0
- PowerPC Linux
- PowerPC Mac OS X
- Sparc Linux
- Sparc Solaris 2.4, 2.5, 2.5.1, 2.6, 7, 8
- Sparc SunOS 4.1.*
- i386 BeOS
- i386 FreeBSD
- i386 Linux 1.3, 2.0, 2.2, 2.3, 2.4
- i386 NetBSD
- i386 OS/2
- i386 OpenBSD
- i386 Solaris 2.7
- i386 Windows 95, 98, NT, 2000
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k OpenBSD
OpenSSL
=======

View File

@@ -12,6 +12,17 @@ INTERNALS
Thus, the largest amount of code and complexity is in the library part.
CVS
===
All changes to the sources are committed to the CVS repository as soon as
they're somewhat verified to work. Changes shall be committed as independently
as possible so that individual changes can be more easily spotted and tracked
afterwards.
Tagging shall be used extensively, and by the time we release new archives we
should tag the sources with a name similar to the released version number.
Windows vs Unix
===============
@@ -34,8 +45,7 @@ Windows vs Unix
(3) is simply avoided by not trying any funny tricks on file descriptors.
(4) is left alone, giving windows users problems when they pipe binary data
through stdout...
(4) we set stdout to binary under windows
Inside the source code, I do make an effort to avoid '#ifdef WIN32'. All
conditionals that deal with features *should* instead be in the format
@@ -48,9 +58,9 @@ Library
=======
As described elsewhere, libcurl is meant to get two different "layers" of
interface. At the present point only the high-level, the "easy", interface
has been fully implemented and thus documented. We assume the easy-interface
in this description, the low-level interface will be documented when fully
interfaces. At the present point only the high-level, the "easy", interface
has been fully implemented and documented. We assume the easy-interface in
this description, the low-level interface will be documented when fully
implemented.
There are plenty of entry points to the library, namely each publicly defined
@@ -58,11 +68,14 @@ Library
rather small and easy-to-follow. All the ones prefixed with 'curl_easy' are
put in the lib/easy.c file.
curl_easy_init() allocates an internal struct and makes some initializations.
The returned handle does not reveal internals.
curl_easy_setopt() takes three arguments, where the option stuff must be
passed in pairs, the parameter-ID and the parameter-value. The list of
options is documented in the man page.
curl_easy_perform() does a whole lot of things.
curl_easy_perform() does a whole lot of things:
The function analyzes the URL, gets the different components and connects to
the remote host. This may involve using a proxy and/or using SSL. The
@@ -84,10 +97,6 @@ Library
called). The speedcheck functions in lib/speedcheck.c are also used to verify
that the transfer is as fast as required.
When the operation is done, the writeout() function in lib/writeout.c may be
called to report about the operation as specified previously in the arguments
to curl_easy_setopt().
When completed curl_easy_cleanup() should be called to free up used
resources.
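A minimal sketch of the call sequence described above, using only the
documented easy-interface entry points (error checking trimmed; the URL is
just an example):

  #include <curl/curl.h>
  #include <curl/types.h>
  #include <curl/easy.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();   /* allocates the internal struct */
    CURLcode res = CURLE_OK;

    if(curl) {
      /* options are set in parameter-ID/parameter-value pairs */
      curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/");

      res = curl_easy_perform(curl); /* does the whole transfer */

      curl_easy_cleanup(curl);       /* frees up used resources */
    }
    return (int)res;
  }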
@@ -136,11 +145,12 @@ Library
lib/getenv.c offers curl_getenv() which is for reading environment variables
in a neat platform independent way. That's used in the client, but also in
lib/url.c when checking the PROXY variables.
lib/url.c when checking the proxy environment variables.
lib/netrc.c keeps the .netrc parser
lib/netrc.c holds the .netrc parser
lib/timeval.c features replacement functions for systems that don't have
gettimeofday().
A function named curl_version() that returns the full curl version string is
found in lib/version.c.
@@ -148,13 +158,31 @@ Library
Client
======
main() resides in src/main.c together with most of the client
code. src/hugehelp.c is automatically generated by the mkhelp.pl perl script
to display the complete "manual" and the src/urlglob.c file holds the
functions used for the multiple-URL support.
main() resides in src/main.c together with most of the client code.
src/hugehelp.c is automatically generated by the mkhelp.pl perl script to
display the complete "manual" and the src/urlglob.c file holds the functions
used for the multiple-URL support.
The client mostly messes around setting up its config struct properly, then it
calls the curl_easy_*() functions of the library and when it gets back
control after the curl_easy_perform() it cleans up the library, checks status
and exits.
When the operation is done, the ourWriteOut() function in src/writeout.c may
be called to report about the operation. That function is using the
curl_easy_getinfo() function to extract useful information from the curl
session.
Test Suite
==========
During November 2000, a test suite has evolved. It is placed in its own
subdirectory directly off the root in the curl archive tree, and it contains
a bunch of scripts and a lot of test case data.
The main test script is runtests.pl that will invoke the two servers
httpserver.pl and ftpserver.pl before all the test cases are performed. The
test suite currently only runs on unix-like platforms.
You'll find a complete description of the test case data files in the README
file in the test directory.

View File

@@ -282,6 +282,8 @@ REFERER
curl -e www.coolsite.com http://www.showme.com/
NOTE: The referer field is defined in the HTTP spec to be a full URL.
USER AGENT
A HTTP request has the option to include information about the browser
@@ -400,17 +402,26 @@ SPEED LIMIT
CONFIG FILE
Curl automatically tries to read the .curlrc file (or _curlrc file on win32
systems) from the user's home dir on startup. The config file should be
made up with normal command line switches. Comments can be used within the
file. If the first letter on a line is a '#'-letter the rest of the line
is treated as a comment.
systems) from the user's home dir on startup.
The config file could be made up with normal command line switches, but you
can also specify the long options without the dashes to make it more
readable. You can separate the options and the parameter with spaces, or
with = or :. Comments can be used within the file. If the first letter on a
line is a '#'-letter the rest of the line is treated as a comment.
If you want the parameter to contain spaces, you must enclose the entire
parameter within double quotes ("). Within those quotes, you specify a
quote as \".
NOTE: You must specify options and their arguments on the same line.
Example, set default time out and proxy in a config file:
# We want a 30 minute timeout:
-m 1800
# ... and we use a proxy for all accesses:
-x proxy.our.domain.com:8080
proxy = proxy.our.domain.com:8080
White spaces ARE significant at the end of lines, but all white spaces
leading up to the first characters of each line are ignored.
@@ -424,14 +435,14 @@ CONFIG FILE
without URL by making a config file similar to:
# default url to get
http://help.with.curl.com/curlhelp.html
url = "http://help.with.curl.com/curlhelp.html"
You can specify another config file to be read by using the -K/--config
flag. If you set config file name to "-" it'll read the config from stdin,
which can be handy if you want to hide options from being visible in process
tables etc:
echo "-u user:passwd" | curl -K - http://that.secret.site.com
echo "user = user:passwd" | curl -K - http://that.secret.site.com
EXTRA HEADERS

View File

@@ -7,6 +7,7 @@ AUTOMAKE_OPTIONS = foreign no-dependencies
man_MANS = \
curl.1 \
curl_easy_cleanup.3 \
curl_easy_getinfo.3 \
curl_easy_init.3 \
curl_easy_perform.3 \
curl_easy_setopt.3 \

View File

@@ -17,3 +17,8 @@ README.win32
freely available nroff binary for win32 (*pointers appreciated*), convert
the files into plain-text on your neighbor's unix machine or run over to the
curl web site and view them as plain HTML.
The main curl.1 man page is "built-in". Use a command line similar to this
in order to extract a separate text file:
curl -M >manual.txt

View File

@@ -13,6 +13,32 @@ For the future
product! (Yes, you may add things not mentioned here, these are just a
few teasers...)
* Improve the command line option parser to accept '-m300' as well as the '-m
300' convention. It should be able to work if '-m300' is considered to be
space separated to the next option.
* Make the curl tool support URLs that start with @ that would then mean that
the following is a plain list with URLs to download. Thus @filename.txt
reads a list of URLs from a local file. A fancy option would then be to
support @http://whatever.com that would first load a list and then get the
URLs mentioned in the list. I figure -O or something would have to be
implied by such an action.
* Make curl work with multiple URLs, even outside of {}-letters. I could also
imagine an optional fork()ed system that downloads each URL in its own
thread. It should of course have a maximum amount of simultaneous fork()s.
* Improve the regular progress meter when --continue is used. It should be
noticeable when there's a resume going on.
* Add a command line option that allows the output file to get the same time
stamp as the remote file. This requires some fiddling on FTP but comes
almost free for HTTP.
* Make the SSL layer option capable of using the Mozilla Security Services as
an alternative to OpenSSL:
http://www.mozilla.org/projects/security/pki/nss/
* Make sure the low-level interface works. highlevel.c should basically be
possible to write using that interface. Document the low-level interface
@@ -22,12 +48,41 @@ For the future
* Move non-URL related functions that are used by both the lib and the curl
application to a separate "portability lib".
* Add support for other languages than C. C++ and perl comes to mind. Python?
* Add support for other languages than C. C++ (rumours have been heard about
something being worked on in this area) and perl (we have seen the first
versions of this!) come to mind. Python anyone?
* Improve the -K config file parser (the parameter following the flag should
be possible to get specified *exactly* as it is done on a shell command
line).
Alternatively, and preferably, we rewrite the entire config file to become
a true config file that uses its own format instead of the currently
crippled and stupid format:
[option] = [value]
Where [option] would be the same as the --long-option and [value] would
either be 'on/off/true/false' for booleans or a plain value for [option]s
that accept variable input (such as -d, -o, -H, -d, -F etc).
[value] could be written as plain text, and then the initial and trailing
white spaces would be stripped off, or it can be specified within quotes
and then all white spaces within the quotes will count.
[value] could then be made to accept some format to specify an environment
variable. I could even think of supporting
[option] += [value]
for appending stuff to an option.
As has been suggested, ${name} could be used to read environment variables
and possibly other options. That could then be used instead of += operators
like:
bar = "foo ${bar}"
* rtsp:// support -- "Real Time Streaming Protocol" (RFC 2326)
* "Content-Encoding: compress/gzip/zlib"
@@ -77,5 +132,3 @@ For the future
* HTTP POST resume using Range:
* Make curl capable of verifying the server's certificate when connecting
with HTTPS://.

View File

@@ -2,7 +2,7 @@
.\" nroff -man curl.1
.\" Written by Daniel Stenberg
.\"
.TH curl 1 "26 September 2000" "Curl 7.3" "Curl Manual"
.TH curl 1 "22 November 2000" "Curl 7.5" "Curl Manual"
.SH NAME
curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or
HTTPS syntax.
@@ -93,11 +93,16 @@ HTTP resume is only possible with HTTP/1.1 or later servers.
that the data is sent exactly as specified with no extra processing (with all
newlines cut off). The data is expected to be "url-encoded". This will cause
curl to pass the data to the server using the content-type
application/x-www-form-urlencoded. Compare to -F.
application/x-www-form-urlencoded. Compare to -F. If more than one -d/--data
option is used on the same command line, the data pieces specified will be
merged together with a separating &-letter. Thus, using '-d name=daniel -d
skill=lousy' would generate a post chunk that looks like
'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file name to
read the data from, or - if you want curl to read the data from stdin.
The contents of the file must already be url-encoded.
read the data from, or - if you want curl to read the data from stdin. The
contents of the file must already be url-encoded. Multiple files can also be
specified.
To post data purely binary, you should instead use the --data-binary option.
@@ -131,6 +136,9 @@ with HTTPS. The certificate must be in PEM format.
If the optional password isn't specified, it will be queried for on
the terminal. Note that this certificate is the private key and the private
certificate concatenated!
.IP "--cacert <CA certificate>"
(HTTPS) Tells curl to use the specified certificate file to verify the
peer. The certificate must be in PEM format.
.IP "-f/--fail"
(HTTP)
Fail silently (no output at all) on server errors. This is mostly done
@@ -352,6 +360,9 @@ ask for it interactively.
.IP "-U/--proxy-user <user:password>"
Specify user and password to use for Proxy authentication. If no
password is specified, curl will ask for it interactively.
.IP "--url <URL>"
Set the URL to fetch. This option is mostly handy when you want to specify the
URL in a config file.
.IP "-v/--verbose"
Makes the fetching more verbose/talkative. Mostly usable for
debugging. Lines starting with '>' means data sent by curl, '<'
@@ -410,11 +421,17 @@ The total amount of bytes that were downloaded.
.B size_upload
The total amount of bytes that were uploaded.
.TP
.B size_header
The total amount of bytes of the downloaded headers.
.TP
.B size_request
The total amount of bytes that were sent in the HTTP request.
.TP
.B speed_download
The average download speed that curl measured for the complete download.
.TP
.B speed_upload
The average upload speed that curl measured for the complete download.
The average upload speed that curl measured for the complete upload.
.RE
.IP "-x/--proxy <proxyhost[:port]>"
Use specified proxy. If the port number is not specified, it is assumed at
@@ -519,7 +536,7 @@ FTP weird USER reply. Curl couldn't parse the reply sent to the USER request.
.IP 13
FTP weird PASV reply, Curl couldn't parse the reply sent to the PASV request.
.IP 14
FTP weird 227 formay. Curl couldn't parse the 227-line the server sent.
FTP weird 227 format. Curl couldn't parse the 227-line the server sent.
.IP 15
FTP can't get host. Couldn't resolve the host IP we got in the 227-line.
.IP 16
@@ -577,12 +594,23 @@ LDAP search failed.
Library not found. The LDAP library was not found.
.IP 41
Function not found. A required LDAP function was not found.
.IP 42
Aborted by callback. An application told curl to abort the operation.
.IP 43
Internal error. A function was called with a bad parameter.
.IP 44
Internal error. A function was called in a bad order.
.IP 45
Interface error. A specified outgoing interface could not be used.
.IP 46
Bad password entered. An error was signalled when the password was entered.
.IP 47
Too many redirects. When following redirects, curl hit the maximum amount.
.IP XX
More error codes will appear here in future releases. The existing ones are
meant to never change.
.SH BUGS
If you do find any (or have other suggestions), mail Daniel Stenberg
<Daniel.Stenberg@haxx.se>.
If you do find bugs, mail them to curl-bug@haxx.se.
.SH AUTHORS / CONTRIBUTORS
- Daniel Stenberg <Daniel.Stenberg@haxx.se>
- Rafael Sagula <sagula@inf.ufrgs.br>
@@ -636,6 +664,10 @@ If you do find any (or have other suggestions), mail Daniel Stenberg
- Stephen Kick <skick@epicrealm.com>
- Martin Hedenfalk <mhe@stacken.kth.se>
- Richard Prescott
- Jason S. Priebe <priebe@wral-tv.com>
- T. Bharath <TBharath@responsenetworks.com>
- Alexander Kourakos <awk@users.sourceforge.net>
- James Griffiths <griffiths_james@yahoo.com>
.SH WWW
http://curl.haxx.se

92
docs/curl_easy_getinfo.3 Normal file
View File

@@ -0,0 +1,92 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_getinfo 3 "22 November 2000" "Curl 7.5" "libcurl Manual"
.SH NAME
curl_easy_getinfo - Extract information from a curl session (added in 7.4)
.SH SYNOPSIS
.B #include <curl/easy.h>
.sp
.BI "CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ... );"
.ad
.SH DESCRIPTION
Request internal information from the curl session with this function. The
third argument
.B MUST
be a pointer to a long, a pointer to a char * or a pointer to a double (as
this documentation describes further down). The data pointed-to will be
filled in accordingly and can be relied upon only if the function returns
CURLE_OK. This function is intended to get used *AFTER* a performed transfer,
all results from this function are undefined until the transfer is completed.
.SH AVAILABLE INFORMATION
This is the information that can be extracted:
.TP 0.8i
.B CURLINFO_EFFECTIVE_URL
Pass a pointer to a 'char *' to receive the last used effective URL.
.TP
.B CURLINFO_HTTP_CODE
Pass a pointer to a long to receive the last received HTTP code.
.TP
.B CURLINFO_FILETIME
Pass a pointer to a long to receive the remote time of the retrieved
document. If you get 0, the time of the document is unknown. This can happen
for several reasons (the server hides it, or doesn't support the command that
tells document time, etc). (Added in 7.5)
.TP
.B CURLINFO_TOTAL_TIME
Pass a pointer to a double to receive the total transaction time in seconds
for the previous transfer.
.TP
.B CURLINFO_NAMELOOKUP_TIME
Pass a pointer to a double to receive the time, in seconds, it took from the
start until the name resolving was completed.
.TP
.B CURLINFO_CONNECT_TIME
Pass a pointer to a double to receive the time, in seconds, it took from the
start until the connect to the remote host (or proxy) was completed.
.TP
.B CURLINFO_PRETRANSFER_TIME
Pass a pointer to a double to receive the time, in seconds, it took from the
start until the file transfer is just about to begin. This includes all
pre-transfer commands and negotiations that are specific to the particular
protocol(s) involved.
.TP
.B CURLINFO_SIZE_UPLOAD
Pass a pointer to a double to receive the total amount of bytes that were
uploaded.
.TP
.B CURLINFO_SIZE_DOWNLOAD
Pass a pointer to a double to receive the total amount of bytes that were
downloaded.
.TP
.B CURLINFO_SPEED_DOWNLOAD
Pass a pointer to a double to receive the average download speed that curl
measured for the complete download.
.TP
.B CURLINFO_SPEED_UPLOAD
Pass a pointer to a double to receive the average upload speed that curl
measured for the complete upload.
.TP
.B CURLINFO_HEADER_SIZE
Pass a pointer to a long to receive the total size of all the headers
received.
.TP
.B CURLINFO_REQUEST_SIZE
Pass a pointer to a long to receive the total size of the issued
requests. This is so far only for HTTP requests. Note that this may be more
than one request if FOLLOWLOCATION is true.
.TP
.B CURLINFO_SSL_VERIFYRESULT
Pass a pointer to a long to receive the result of the certification
verification that was requested (using the CURLOPT_SSL_VERIFYPEER option to
curl_easy_setopt). (Added in 7.4.2)
.PP
.SH RETURN VALUE
If the operation was successful, CURLE_OK is returned. Otherwise an
appropriate error code will be returned.
.SH "SEE ALSO"
.BR curl_easy_setopt "(3)"
.SH BUGS
Surely there are some, you tell me!
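A hedged usage sketch for the call described in this page, run after a
completed transfer. Only infos listed above are queried; the helper name and
URL are examples:

  #include <stdio.h>
  #include <curl/curl.h>
  #include <curl/easy.h>

  void report_transfer(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      long http_code = 0;
      double total_time = 0.0;

      curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/");
      if(curl_easy_perform(curl) == CURLE_OK) {
        /* results are only defined once the transfer has completed */
        curl_easy_getinfo(curl, CURLINFO_HTTP_CODE, &http_code);
        curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total_time);
        printf("HTTP %ld in %.2f seconds\n", http_code, total_time);
      }
      curl_easy_cleanup(curl);
    }
  }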

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_easy_setopt 3 "26 September 2000" "Curl 7.3" "libcurl Manual"
.TH curl_easy_setopt 3 "28 November 2000" "Curl 7.5" "libcurl Manual"
.SH NAME
curl_easy_setopt - Set curl easy-session options
.SH SYNOPSIS
@@ -330,6 +330,7 @@ will be used. Set the string to NULL to disable kerberos4. The kerberos
support only works for FTP. (Added in libcurl 7.3)
.TP
.B CURLOPT_WRITEINFO
(NOT PRESENT IN 7.4 or later!)
Pass a pointer to a zero terminated string as parameter. It will be used to
report information after a successful request. This string may contain
variables that will be substituted by their contents when output. Described
@@ -351,6 +352,52 @@ Pass a pointer that will be untouched by libcurl and passed as the first
argument in the progress callback set with
.I CURLOPT_PROGRESSFUNCTION
.
.TP
.B CURLOPT_SSL_VERIFYPEER
Pass a long that is set to a non-zero value to make curl verify the peer's
certificate. The certificate to verify against must be specified with the
CURLOPT_CAINFO option. (Added in 7.4.2)
.TP
.B CURLOPT_CAINFO
Pass a char * to a zero terminated string naming the file that holds the
certificate to verify the peer with. This only makes sense when used in
combination with the
CURLOPT_SSL_VERIFYPEER option. (Added in 7.4.2)
.TP
.B CURLOPT_PASSWDFUNCTION
Pass a pointer to a curl_passwd_callback function that will then be called
instead of the internal one if libcurl requests a password. The function must
match this prototype:
.BI "int my_getpass(void *client, char *prompt, char* buffer, int buflen );"
If set to NULL, it equals to making the function always fail. If the function
returns a non-zero value, it will abort the operation and an error
(CURLE_BAD_PASSWORD_ENTERED) will be returned.
.I client
is a generic pointer, see CURLOPT_PASSWDDATA.
.I prompt
is a zero-terminated string that is text that prefixes the input request.
.I buffer
is a pointer to data where the entered password should be stored and
.I buflen
is the maximum number of bytes that may be written in the buffer.
(Added in 7.4.2)
.TP
.B CURLOPT_PASSWDDATA
Pass a void * to whatever data you want. The passed pointer will be the first
argument sent to the specified CURLOPT_PASSWDFUNCTION function. (Added in
7.4.2)
.TP
.B CURLOPT_FILETIME
Pass a long. If it is a non-zero value, libcurl will attempt to get the
modification date of the remote document in this operation. This requires that
the remote server sends the time or replies to a time querying command. The
curl_easy_getinfo() function with the CURLINFO_FILETIME argument can be used
after a transfer to extract the received time (if any). (Added in 7.5)
.TP
.B CURLOPT_MAXREDIRS
Pass a long. The set number will be the redirection limit. If that many
redirections have been followed, the next redirect will cause an error. This
option only makes sense if the CURLOPT_FOLLOWLOCATION is used at the same
time. (Added in 7.5)
.PP
.SH RETURN VALUE
0 means the option was set properly, non-zero means an error as

23
docs/curl_formfree.3 Normal file
View File

@@ -0,0 +1,23 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" Written by daniel@haxx.se
.\"
.TH curl_formfree 3 "17 November 2000" "Curl 7.5" "libcurl Manual"
.SH NAME
curl_formfree - free a previously built multipart/formdata HTTP POST chain
.SH SYNOPSIS
.B #include <curl/curl.h>
.sp
.BI "void curl_formfree(struct HttpPost *" form);
.ad
.SH DESCRIPTION
curl_formfree() is used to clean up data previously built/appended with
curl_formparse(). This must be called when the data has been used, which
typically means after the curl_easy_perform() has been called.
.SH RETURN VALUE
None
.SH "SEE ALSO"
.BR curl_formparse "(3) "
.SH BUGS
Surely there are some, you tell me!
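A hedged sketch of the build/use/free cycle this page refers to, using the
curl_formparse() prototype from curl.h and the CURLOPT_HTTPPOST option. The
function name, form fields and URL are made up for illustration:

  #include <curl/curl.h>
  #include <curl/easy.h>

  /* hypothetical: post two form parts, then free the chain */
  void post_form(void)
  {
    struct HttpPost *post = NULL;
    struct HttpPost *last = NULL;
    CURL *curl;

    /* build the multipart/formdata chain */
    curl_formparse("name=daniel", &post, &last);
    curl_formparse("file=@portrait.jpg", &post, &last);

    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://curl.haxx.se/upload.cgi");
      curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_formfree(post); /* the chain is not needed past this point */
  }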

87
docs/examples/curlgtk.c Normal file
View File

@@ -0,0 +1,87 @@
/* curlgtk.c */
/* Copyright (c) 2000 David Odin (aka DindinX) for MandrakeSoft */
/* an attempt to use the curl library in concert with a gtk-threaded application */
#include <stdio.h>
#include <gtk/gtk.h>
#include <curl/curl.h>
#include <curl/types.h> /* new for v7 */
#include <curl/easy.h> /* new for v7 */
#include <pthread.h>
GtkWidget *Bar;
size_t my_read_func(void *ptr, size_t size, size_t nmemb, FILE *stream)
{
return fread(ptr, size, nmemb, stream);
}
int my_progress_func(GtkWidget *Bar, int t, int d)
{
/* printf("%d / %d (%g %%)\n", d, t, d*100.0/t);*/
gdk_threads_enter();
gtk_progress_set_value(GTK_PROGRESS(Bar), d*100.0/t);
gdk_threads_leave();
return 0;
}
void *curl_thread(void *ptr)
{
CURL *curl;
CURLcode res;
FILE *outfile;
gchar *url = ptr;
curl = curl_easy_init();
if(curl)
{
outfile = fopen("/tmp/test.curl", "w");
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_FILE, outfile);
curl_easy_setopt(curl, CURLOPT_READFUNCTION, my_read_func);
curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, my_progress_func);
curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, Bar);
res = curl_easy_perform(curl);
fclose(outfile);
/* always cleanup */
curl_easy_cleanup(curl);
}
return NULL;
}
int main(int argc, char **argv)
{
GtkWidget *Window, *Frame, *Frame2;
GtkAdjustment *adj;
pthread_t curl_tid;
/* Init thread */
g_thread_init(NULL);
gtk_init(&argc, &argv);
Window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
Frame = gtk_frame_new(NULL);
gtk_frame_set_shadow_type(GTK_FRAME(Frame), GTK_SHADOW_OUT);
gtk_container_add(GTK_CONTAINER(Window), Frame);
Frame2 = gtk_frame_new(NULL);
gtk_frame_set_shadow_type(GTK_FRAME(Frame2), GTK_SHADOW_IN);
gtk_container_add(GTK_CONTAINER(Frame), Frame2);
gtk_container_set_border_width(GTK_CONTAINER(Frame2), 5);
adj = (GtkAdjustment*)gtk_adjustment_new(0, 0, 100, 0, 0, 0);
Bar = gtk_progress_bar_new_with_adjustment(adj);
gtk_container_add(GTK_CONTAINER(Frame2), Bar);
gtk_widget_show_all(Window);
pthread_create(&curl_tid, NULL, curl_thread, argv[1]);
gdk_threads_enter();
gtk_main();
gdk_threads_leave();
return 0;
}

View File

@@ -99,8 +99,17 @@ typedef size_t (*curl_read_callback)(char *buffer,
size_t nitems,
FILE *instream);
/* All possible error codes from this version of urlget(). Future versions
may return other values, stay prepared. */
typedef int (*curl_passwd_callback)(void *clientp,
char *prompt,
char *buffer,
int buflen);
/* All possible error codes from all sorts of curl functions. Future versions
may return other values, stay prepared.
Always add new return codes last. Never *EVER* remove any. The return
codes must remain the same!
*/
typedef enum {
CURLE_OK = 0,
@@ -145,8 +154,6 @@ typedef enum {
CURLE_HTTP_POST_ERROR,
CURLE_HTTP_PORT_FAILED, /* HTTP Interface operation failed */
CURLE_SSL_CONNECT_ERROR, /* something was wrong when connecting with SSL */
CURLE_FTP_BAD_DOWNLOAD_RESUME, /* couldn't resume download */
@@ -159,10 +166,14 @@ typedef enum {
CURLE_FUNCTION_NOT_FOUND,
CURLE_ABORTED_BY_CALLBACK,
CURLE_BAD_FUNCTION_ARGUMENT,
CURLE_BAD_CALLING_ORDER,
CURLE_HTTP_PORT_FAILED, /* HTTP Interface operation failed */
CURLE_BAD_PASSWORD_ENTERED, /* when the my_getpass() returns fail */
CURLE_TOO_MANY_REDIRECTS , /* catch endless re-direct loops */
CURL_LAST
} CURLcode;
@@ -171,7 +182,7 @@ typedef enum {
#define CURL_ERROR_SIZE 256
/* maximum URL length we deal with */
/* maximum URL length we deal with in headers */
#define URL_MAX_LENGTH 4096
#define URL_MAX_LENGTH_TXT "4095"
@@ -394,12 +405,30 @@ typedef enum {
* set but doesn't match one of these, 'private' will be used. */
CINIT(KRB4LEVEL, OBJECTPOINT, 63),
/* Set if we should verify the peer in ssl handshake, set 1 to verify. */
CINIT(SSL_VERIFYPEER, LONG, 64),
/* The CApath or CAfile used to validate the peer certificate
this option is used only if SSL_VERIFYPEER is true */
CINIT(CAINFO, OBJECTPOINT, 65),
/* Function pointer to replace the internal password prompt */
CINIT(PASSWDFUNCTION, FUNCTIONPOINT, 66),
/* Custom pointer that gets passed as first argument to the password
function */
CINIT(PASSWDDATA, OBJECTPOINT, 67),
/* Maximum number of http redirects to follow */
CINIT(MAXREDIRS, LONG, 68),
/* Pass a pointer to a time_t to get a possible date of the requested
document! Pass a NULL to shut it off. */
CINIT(FILETIME, OBJECTPOINT, 69),
CURLOPT_LASTENTRY /* the last unused */
} CURLoption;
#define CURL_PROGRESS_STATS 0 /* default progress display */
#define CURL_PROGRESS_BAR 1
typedef enum {
TIMECOND_NONE,
@@ -412,10 +441,6 @@ typedef enum {
#ifdef __BEOS__
#include <support/SupportDefs.h>
#else
#ifndef __cplusplus /* (rabe) */
typedef char bool;
#endif /* (rabe) */
#endif
@@ -434,18 +459,21 @@ int curl_formparse(char *string,
struct HttpPost **httppost,
struct HttpPost **last_post);
/* cleanup a form: */
void curl_formfree(struct HttpPost *form);
/* Unix and Win32 getenv function call, this returns a malloc()'ed string that
MUST be free()ed after usage is complete. */
char *curl_getenv(char *variable);
/* returns ascii string of the libcurl version */
/* Returns a static ascii string of the libcurl version. */
char *curl_version(void);
/* This is the version number */
#define LIBCURL_VERSION "7.3"
#define LIBCURL_VERSION_NUM 0x070300
#define LIBCURL_VERSION "7.5"
#define LIBCURL_VERSION_NUM 0x070500
/* linked-list structure for the CURLOPT_QUOTE option */
/* linked-list structure for the CURLOPT_QUOTE option (and others) */
struct curl_slist {
char *data;
struct curl_slist *next;
@@ -642,6 +670,47 @@ CURLcode curl_disconnect(CURLconnect *connect);
*/
time_t curl_getdate(const char *p, const time_t *now);
#define CURLINFO_STRING 0x100000
#define CURLINFO_LONG 0x200000
#define CURLINFO_DOUBLE 0x300000
#define CURLINFO_MASK 0x0fffff
#define CURLINFO_TYPEMASK 0xf00000
typedef enum {
CURLINFO_NONE, /* first, never use this */
CURLINFO_EFFECTIVE_URL = CURLINFO_STRING + 1,
CURLINFO_HTTP_CODE = CURLINFO_LONG + 2,
CURLINFO_TOTAL_TIME = CURLINFO_DOUBLE + 3,
CURLINFO_NAMELOOKUP_TIME = CURLINFO_DOUBLE + 4,
CURLINFO_CONNECT_TIME = CURLINFO_DOUBLE + 5,
CURLINFO_PRETRANSFER_TIME = CURLINFO_DOUBLE + 6,
CURLINFO_SIZE_UPLOAD = CURLINFO_DOUBLE + 7,
CURLINFO_SIZE_DOWNLOAD = CURLINFO_DOUBLE + 8,
CURLINFO_SPEED_DOWNLOAD = CURLINFO_DOUBLE + 9,
CURLINFO_SPEED_UPLOAD = CURLINFO_DOUBLE + 10,
CURLINFO_HEADER_SIZE = CURLINFO_LONG + 11,
CURLINFO_REQUEST_SIZE = CURLINFO_LONG + 12,
CURLINFO_SSL_VERIFYRESULT = CURLINFO_LONG + 13,
CURLINFO_FILETIME = CURLINFO_LONG + 14,
CURLINFO_LASTONE = 15
} CURLINFO;
/*
* NAME curl_getinfo()
*
* DESCRIPTION
*
* Request internal information from the curl session with this function.
* The third argument MUST be a pointer to a long or a pointer to a char *.
* The data pointed to will be filled in accordingly and can be relied upon
* only if the function returns CURLE_OK.
* This function is intended to get used *AFTER* a performed transfer, all
* results are undefined before the transfer is completed.
*/
CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...);
#ifdef __cplusplus
}
#endif
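The CURLINFO_* constants above encode their result type in the top bits via
CURLINFO_TYPEMASK. A small hedged sketch of how a caller could branch on that
encoding (fetch_info is a hypothetical helper, not part of the library):

  #include <curl/curl.h>
  #include <curl/easy.h>

  /* pick the right kind of output variable based on the type bits */
  static CURLcode fetch_info(CURL *curl, CURLINFO info,
                             long *lp, char **sp, double *dp)
  {
    switch(info & CURLINFO_TYPEMASK) {
    case CURLINFO_LONG:
      return curl_easy_getinfo(curl, info, lp);
    case CURLINFO_STRING:
      return curl_easy_getinfo(curl, info, sp);
    case CURLINFO_DOUBLE:
      return curl_easy_getinfo(curl, info, dp);
    default:
      return CURLE_BAD_FUNCTION_ARGUMENT;
    }
  }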

View File

@@ -48,6 +48,21 @@ CURLcode curl_easy_setopt(CURL *curl, CURLoption option, ...);
CURLcode curl_easy_perform(CURL *curl);
void curl_easy_cleanup(CURL *curl);
/*
* NAME curl_easy_getinfo()
*
* DESCRIPTION
*
* Request internal information from the curl session with this function. The
* third argument MUST be a pointer to a long, a pointer to a char * or a
* pointer to a double (as the documentation describes elsewhere). The data
* pointed to will be filled in accordingly and can be relied upon only if the
* function returns CURLE_OK. This function is intended to get used *AFTER* a
* performed transfer, all results from this function are undefined until the
* transfer is completed.
*/
CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...);
#ifdef __cplusplus
}
#endif

View File

@@ -7,10 +7,40 @@ AUTOMAKE_OPTIONS = foreign
lib_LTLIBRARIES = libcurl.la
# Some flags needed when trying to cause warnings ;-)
# CFLAGS = -g -Wall #-pedantic
# CFLAGS = -DMALLOCDEBUG -g # -Wall #-pedantic
INCLUDES = -I$(top_srcdir)/include
libcurl_la_LDFLAGS = -version-info 1:0:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1.
#
# If either revision or age are omitted, they default to 0. Also note that age
# must be less than or equal to the current interface number.
#
# Here are a set of rules to help you update your library version information:
#
# 1.Start with version information of 0:0:0 for each libtool library.
#
# 2.Update the version information only immediately before a public release of
# your software. More frequent updates are unnecessary, and only guarantee
# that the current interface number gets larger faster.
#
# 3.If the library source code has changed at all since the last update, then
# increment revision (c:r:a becomes c:r+1:a).
#
# 4.If any interfaces have been added, removed, or changed since the last
# update, increment current, and set revision to 0.
#
# 5.If any interfaces have been added since the last public release, then
# increment age.
#
# 6.If any interfaces have been removed since the last public release, then
# set age to 0.
#
libcurl_la_SOURCES = \
arpa_telnet.h file.c getpass.h netrc.h timeval.c \
base64.c file.h hostip.c progress.c timeval.h \
@@ -23,8 +53,8 @@ download.c getdate.h ldap.c ssluse.c version.c \
download.h getenv.c ldap.h ssluse.h \
escape.c getenv.h mprintf.c telnet.c \
escape.h getpass.c netrc.c telnet.h \
writeout.c writeout.h highlevel.c strequal.c strequal.h easy.c \
security.h security.c krb4.c
getinfo.c highlevel.c strequal.c strequal.h easy.c \
security.h security.c krb4.c memdebug.c memdebug.h
# Say $(srcdir), so GNU make does not report an ambiguity with the .y.c rule.
$(srcdir)/getdate.c: getdate.y

View File

@@ -82,11 +82,40 @@ AUTOMAKE_OPTIONS = foreign
lib_LTLIBRARIES = libcurl.la
# Some flags needed when trying to cause warnings ;-)
# CFLAGS = -g -Wall #-pedantic
# CFLAGS = -DMALLOCDEBUG -g # -Wall #-pedantic
INCLUDES = -I$(top_srcdir)/include
libcurl_la_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c file.h hostip.c progress.c timeval.h base64.h formdata.c hostip.h progress.h cookie.c formdata.h http.c sendf.c cookie.h ftp.c http.h sendf.h url.c dict.c ftp.h if2ip.c speedcheck.c url.h dict.h getdate.c if2ip.h speedcheck.h urldata.h download.c getdate.h ldap.c ssluse.c version.c download.h getenv.c ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c telnet.h writeout.c writeout.h highlevel.c strequal.c strequal.h easy.c security.h security.c krb4.c
libcurl_la_LDFLAGS = -version-info 1:0:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1.
#
# If either revision or age are omitted, they default to 0. Also note that age
# must be less than or equal to the current interface number.
#
# Here are a set of rules to help you update your library version information:
#
# 1.Start with version information of 0:0:0 for each libtool library.
#
# 2.Update the version information only immediately before a public release of
# your software. More frequent updates are unnecessary, and only guarantee
# that the current interface number gets larger faster.
#
# 3.If the library source code has changed at all since the last update, then
# increment revision (c:r:a becomes c:r+1:a).
#
# 4.If any interfaces have been added, removed, or changed since the last
# update, increment current, and set revision to 0.
#
# 5.If any interfaces have been added since the last public release, then
# increment age.
#
# 6.If any interfaces have been removed since the last public release, then
# set age to 0.
#
libcurl_la_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c file.h hostip.c progress.c timeval.h base64.h formdata.c hostip.h progress.h cookie.c formdata.h http.c sendf.c cookie.h ftp.c http.h sendf.h url.c dict.c ftp.h if2ip.c speedcheck.c url.h dict.h getdate.c if2ip.h speedcheck.h urldata.h download.c getdate.h ldap.c ssluse.c version.c download.h getenv.c ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c telnet.h getinfo.c highlevel.c strequal.c strequal.h easy.c security.h security.c krb4.c memdebug.c memdebug.h
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_HEADER = ../config.h ../src/config.h
@@ -98,13 +127,12 @@ DEFS = @DEFS@ -I. -I$(srcdir) -I.. -I../src
CPPFLAGS = @CPPFLAGS@
LDFLAGS = @LDFLAGS@
LIBS = @LIBS@
libcurl_la_LDFLAGS =
libcurl_la_LIBADD =
libcurl_la_OBJECTS = file.lo timeval.lo base64.lo hostip.lo progress.lo \
formdata.lo cookie.lo http.lo sendf.lo ftp.lo url.lo dict.lo if2ip.lo \
speedcheck.lo getdate.lo download.lo ldap.lo ssluse.lo version.lo \
getenv.lo escape.lo mprintf.lo telnet.lo getpass.lo netrc.lo \
writeout.lo highlevel.lo strequal.lo easy.lo security.lo krb4.lo
getenv.lo escape.lo mprintf.lo telnet.lo getpass.lo netrc.lo getinfo.lo \
highlevel.lo strequal.lo easy.lo security.lo krb4.lo memdebug.lo
CFLAGS = @CFLAGS@
COMPILE = $(CC) $(DEFS) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
LTCOMPILE = $(LIBTOOL) --mode=compile $(CC) $(DEFS) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)

View File

@@ -1,70 +1,81 @@
#############################################################
## Makefile for building libcurl.a with MingW32 (GCC-2.95) and
## optionally OpenSSL (0.9.4)
## Use: make -f Makefile.m32
##
## Comments to: Troy Engel <tengel@sonic.net> or
## Joern Hartroth <hartroth@acm.org>
CC = gcc
AR = ar
RANLIB = ranlib
OPENSSL_PATH = ../../openssl-0.9.5a
########################################################
## Nothing more to do below this line!
INCLUDES = -I. -I.. -I../include
CFLAGS = -g -O2 -DMINGW32
ifdef SSL
INCLUDES += -I"$(OPENSSL_PATH)/outinc" -I"$(OPENSSL_PATH)/outinc/openssl"
CFLAGS += -DUSE_SSLEAY
endif
COMPILE = $(CC) $(INCLUDES) $(CFLAGS)
libcurl_a_LIBRARIES = libcurl.a
libcurl_a_SOURCES = base64.c getenv.c if2ip.h progress.h \
base64.h getenv.h mprintf.c setup.h url.c download.c getpass.c \
mprintf.h ssluse.c url.h download.h hostip.c netrc.c ssluse.h \
urldata.h formdata.c hostip.h netrc.h stdcheaders.h formdata.h \
if2ip.c progress.c sendf.c sendf.h speedcheck.c speedcheck.h \
ftp.c ftp.h getpass.h version.c timeval.c timeval.h cookie.c \
cookie.h escape.c escape.h getdate.c getdate.h dict.h dict.c http.c \
http.h telnet.c telnet.h file.c file.h ldap.c ldap.h writeout.c writeout.h \
highlevel.c strequal.c strequal.h easy.c
libcurl_a_OBJECTS = base64.o getenv.o mprintf.o url.o download.o \
getpass.o ssluse.o hostip.o netrc.o formdata.o if2ip.o progress.o \
sendf.o speedcheck.o ftp.o version.o timeval.o \
cookie.o escape.o getdate.o dict.o http.o telnet.o file.o ldap.o writeout.o \
highlevel.o strequal.o easy.o
LIBRARIES = $(libcurl_a_LIBRARIES)
SOURCES = $(libcurl_a_SOURCES)
OBJECTS = $(libcurl_a_OBJECTS)
all: libcurl.a
libcurl.a: $(libcurl_a_OBJECTS) $(libcurl_a_DEPENDENCIES)
-@erase libcurl.a
$(AR) cru libcurl.a $(libcurl_a_OBJECTS)
$(RANLIB) libcurl.a
.c.o:
$(COMPILE) -c $<
.s.o:
$(COMPILE) -c $<
.S.o:
$(COMPILE) -c $<
clean:
-@erase $(libcurl_a_OBJECTS)
distrib: clean
-@erase $(libcurl_a_LIBRARIES)
#############################################################
## Makefile for building libcurl.a with MingW32 (GCC-2.95) and
## optionally OpenSSL (0.9.6)
## Use: make -f Makefile.m32
##
## Comments to: Troy Engel <tengel@sonic.net> or
## Joern Hartroth <hartroth@acm.org>
CC = gcc
AR = ar
RANLIB = ranlib
STRIP = strip -g
OPENSSL_PATH = ../../openssl-0.9.6
########################################################
## Nothing more to do below this line!
INCLUDES = -I. -I.. -I../include -I../src
CFLAGS = -g -O2 -DMINGW32
ifdef SSL
INCLUDES += -I"$(OPENSSL_PATH)/outinc" -I"$(OPENSSL_PATH)/outinc/openssl"
CFLAGS += -DUSE_SSLEAY
DLL_LIBS = -leay32 -lssl32 -lRSAglue
endif
COMPILE = $(CC) $(INCLUDES) $(CFLAGS)
libcurl_a_LIBRARIES = libcurl.a
libcurl_a_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c \
file.h hostip.c progress.c timeval.h base64.h formdata.c hostip.h progress.h \
cookie.c formdata.h http.c sendf.c cookie.h ftp.c http.h sendf.h url.c dict.c \
ftp.h if2ip.c speedcheck.c url.h dict.h getdate.c if2ip.h speedcheck.h \
urldata.h download.c getdate.h ldap.c ssluse.c version.c download.h getenv.c \
ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c \
telnet.h getinfo.c highlevel.c strequal.c strequal.h easy.c security.h \
security.c krb4.c
libcurl_a_OBJECTS = file.o timeval.o base64.o hostip.o progress.o \
formdata.o cookie.o http.o sendf.o ftp.o url.o dict.o if2ip.o \
speedcheck.o getdate.o download.o ldap.o ssluse.o version.o \
getenv.o escape.o mprintf.o telnet.o getpass.o netrc.o getinfo.o \
highlevel.o strequal.o easy.o security.o krb4.o
LIBRARIES = $(libcurl_a_LIBRARIES)
SOURCES = $(libcurl_a_SOURCES)
OBJECTS = $(libcurl_a_OBJECTS)
all: libcurl.a libcurl.dll libcurldll.a
libcurl.a: $(libcurl_a_OBJECTS) $(libcurl_a_DEPENDENCIES)
-@erase libcurl.a
$(AR) cru libcurl.a $(libcurl_a_OBJECTS)
$(RANLIB) libcurl.a
$(STRIP) $@
# remove the last line above to keep debug info
libcurl.dll libcurldll.a: libcurl.a libcurl.def dllinit.o
-@erase $@
dllwrap --dllname $@ --output-lib libcurldll.a --export-all --def libcurl.def $(libcurl_a_LIBRARIES) dllinit.o -L$(OPENSSL_PATH)/out $(DLL_LIBS) -lwsock32
$(STRIP) $@
# remove the last line above to keep debug info
.c.o:
$(COMPILE) -c $<
.s.o:
$(COMPILE) -c $<
.S.o:
$(COMPILE) -c $<
clean:
-@erase $(libcurl_a_OBJECTS)
distrib: clean
-@erase $(libcurl_a_LIBRARIES)

View File

@@ -4,28 +4,30 @@
## (default is release)
##
## Comments to: Troy Engel <tengel@sonic.net>
## Updated by: Craig Davison <cd@securityfocus.com>
PROGRAM_NAME = libcurl.lib
OPENSSL_PATH = ../../openssl-0.9.3a
PROGRAM_NAME = libcurl.lib
PROGRAM_NAME_DEBUG = libcurld.lib
OPENSSL_PATH = ../../openssl-0.9.6
########################################################
## Nothing more to do below this line!
## Release
CCR = cl.exe /ML /O2 /D "NDEBUG"
LINKR = link.exe -lib
CCR = cl.exe /MD /O2 /D "NDEBUG"
LINKR = link.exe -lib /out:$(PROGRAM_NAME)
## Debug
CCD = cl.exe /MLd /Gm /ZI /Od /D "_DEBUG" /GZ
LINKD = link.exe -lib
CCD = cl.exe /MDd /Gm /ZI /Od /D "_DEBUG" /GZ
LINKD = link.exe -lib /out:$(PROGRAM_NAME_DEBUG)
## SSL Release
CCRS = cl.exe /ML /O2 /D "NDEBUG" /D "USE_SSLEAY" /I "$(OPENSSL_PATH)/inc32" /I "$(OPENSSL_PATH)/inc32/openssl"
LINKRS = link.exe -lib /LIBPATH:$(OPENSSL_PATH)/out32dll
CCRS = cl.exe /MD /O2 /D "NDEBUG" /D "USE_SSLEAY" /I "$(OPENSSL_PATH)/include" /I "$(OPENSSL_PATH)/include/openssl"
LINKRS = link.exe -lib /out:$(PROGRAM_NAME) /LIBPATH:$(OPENSSL_PATH)/out32dll
CFLAGS = /I "../include" /nologo /W3 /GX /D "WIN32" /D "VC6" /D "_MBCS" /D "_LIB" /YX /FD /c /D "MSDOS"
LFLAGS = /nologo /out:$(PROGRAM_NAME)
LINKLIBS = kernel32.lib wsock32.lib
LFLAGS = /nologo
LINKLIBS = wsock32.lib
LINKSLIBS = libeay32.lib ssleay32.lib RSAglue.lib
RELEASE_OBJS= \
@@ -53,11 +55,11 @@ RELEASE_OBJS= \
timevalr.obj \
urlr.obj \
filer.obj \
writeoutr.obj \
getinfor.obj \
versionr.obj \
easyr.obj \
highlevelr.obj \
strequalr.obj
easyr.obj \
highlevelr.obj \
strequalr.obj
DEBUG_OBJS= \
base64d.obj \
@@ -67,7 +69,7 @@ DEBUG_OBJS= \
formdatad.obj \
ftpd.obj \
httpd.obj \
ldapd.obj \
ldapd.obj \
dictd.obj \
telnetd.obj \
getdated.obj \
@@ -84,11 +86,11 @@ DEBUG_OBJS= \
timevald.obj \
urld.obj \
filed.obj \
writeoutd.obj \
versiond.obj \
easyd.obj \
highleveld.obj \
strequald.obj
getinfod.obj \
versiond.obj \
easyd.obj \
highleveld.obj \
strequald.obj
RELEASE_SSL_OBJS= \
base64rs.obj \
@@ -98,7 +100,7 @@ RELEASE_SSL_OBJS= \
formdatars.obj \
ftprs.obj \
httprs.obj \
ldaprs.obj \
ldaprs.obj \
dictrs.obj \
telnetrs.obj \
getdaters.obj \
@@ -115,12 +117,12 @@ RELEASE_SSL_OBJS= \
timevalrs.obj \
urlrs.obj \
filers.obj \
writeouts.obj \
getinfors.obj \
versionrs.obj \
easyrs.obj \
highlevelrs.obj \
strequalrs.obj
easyrs.obj \
highlevelrs.obj \
strequalrs.obj
LINK_OBJS= \
base64.obj \
cookie.obj \
@@ -129,7 +131,7 @@ LINK_OBJS= \
formdata.obj \
ftp.obj \
http.obj \
ldap.obj \
ldap.obj \
dict.obj \
telnet.obj \
getdate.obj \
@@ -146,11 +148,11 @@ LINK_OBJS= \
timeval.obj \
url.obj \
file.obj \
writeout.obj \
getinfo.obj \
version.obj \
easy.obj \
highlevel.obj \
strequal.obj
easy.obj \
highlevel.obj \
strequal.obj
all : release
@@ -163,7 +165,6 @@ debug: $(DEBUG_OBJS)
release-ssl: $(RELEASE_SSL_OBJS)
$(LINKRS) $(LFLAGS) $(LINKLIBS) $(LINKSLIBS) $(LINK_OBJS)
## Release
base64r.obj: base64.c
$(CCR) $(CFLAGS) base64.c
@@ -213,8 +214,8 @@ urlr.obj: url.c
$(CCR) $(CFLAGS) url.c
filer.obj: file.c
$(CCR) $(CFLAGS) file.c
writeoutr.obj: writeout.c
$(CCR) $(CFLAGS) writeout.c
getinfor.obj: getinfo.c
$(CCR) $(CFLAGS) getinfo.c
versionr.obj: version.c
$(CCR) $(CFLAGS) version.c
easyr.obj: easy.c
@@ -240,7 +241,7 @@ ftpd.obj: ftp.c
httpd.obj: http.c
$(CCD) $(CFLAGS) http.c
ldapd.obj: ldap.c
$(CCR) $(CFLAGS) ldap.c
$(CCD) $(CFLAGS) ldap.c
dictd.obj: dict.c
$(CCD) $(CFLAGS) dict.c
telnetd.obj: telnet.c
@@ -273,16 +274,16 @@ urld.obj: url.c
$(CCD) $(CFLAGS) url.c
filed.obj: file.c
$(CCD) $(CFLAGS) file.c
writeoutd.obj: writeout.c
$(CCR) $(CFLAGS) writeout.c
getinfod.obj: getinfo.c
$(CCD) $(CFLAGS) getinfo.c
versiond.obj: version.c
$(CCD) $(CFLAGS) version.c
easyd.obj: easy.c
$(CCR) $(CFLAGS) easy.c
$(CCD) $(CFLAGS) easy.c
highleveld.obj: highlevel.c
$(CCR) $(CFLAGS) highlevel.c
$(CCD) $(CFLAGS) highlevel.c
strequald.obj: strequal.c
$(CCR) $(CFLAGS) strequal.c
$(CCD) $(CFLAGS) strequal.c
## Release SSL
@@ -301,7 +302,7 @@ ftprs.obj: ftp.c
httprs.obj: http.c
$(CCRS) $(CFLAGS) http.c
ldaprs.obj: ldap.c
$(CCR) $(CFLAGS) ldap.c
$(CCRS) $(CFLAGS) ldap.c
dictrs.obj: dict.c
$(CCRS) $(CFLAGS) dict.c
telnetrs.obj: telnet.c
@@ -334,17 +335,18 @@ urlrs.obj: url.c
$(CCRS) $(CFLAGS) url.c
filers.obj: file.c
$(CCRS) $(CFLAGS) file.c
writeoutrs.obj: writeout.c
$(CCR) $(CFLAGS) writeout.c
getinfors.obj: getinfo.c
$(CCRS) $(CFLAGS) getinfo.c
versionrs.obj: version.c
$(CCRS) $(CFLAGS) version.c
easyrs.obj: easy.c
$(CCR) $(CFLAGS) easy.c
$(CCRS) $(CFLAGS) easy.c
highlevelrs.obj: highlevel.c
$(CCR) $(CFLAGS) highlevel.c
$(CCRS) $(CFLAGS) highlevel.c
strequalrs.obj: strequal.c
$(CCR) $(CFLAGS) strequal.c
$(CCRS) $(CFLAGS) strequal.c
clean:
-@erase *.obj
-@erase vc60.idb

View File

@@ -38,6 +38,11 @@
#include <string.h>
#include "base64.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
static char base64[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
static int pos(char c)
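The #ifdef MALLOCDEBUG / memdebug.h include added above (and repeated in the
other source files in this change set) hooks the new memory-debugging layer
whose implementation, lib/memdebug.c, appears at the end of these diffs. As a
rough sketch of the general technique only -- the wrapper names below are
hypothetical and not curl's actual memdebug API -- such a header remaps the
allocator calls to logging wrappers that record the call site:

  /* hypothetical memdebug-style header: route malloc/free through wrappers
     that remember file and line, so leaks can be listed after the run */
  #include <stdlib.h>

  void *dbg_malloc(size_t size, int line, const char *source);
  void  dbg_free(void *ptr, int line, const char *source);

  #define malloc(size) dbg_malloc((size), __LINE__, __FILE__)
  #define free(ptr)    dbg_free((ptr), __LINE__, __FILE__)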

View File

@@ -65,6 +65,11 @@ Example set of cookies:
#include "getdate.h"
#include "strequal.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/****************************************************************************
*
* cookie_add()
@@ -404,7 +409,7 @@ struct Cookie *cookie_getlist(struct CookieInfo *c,
/* now check if the domain is correct */
domlen=co->domain?strlen(co->domain):0;
if(!co->domain ||
((domlen<hostlen) &&
((domlen<=hostlen) &&
strequal(host+(hostlen-domlen), co->domain)) ) {
/* the right part of the host matches the domain stuff in the
cookie data */
@@ -496,6 +501,7 @@ void cookie_cleanup(struct CookieInfo *c)
free(co);
co = next;
}
free(c); /* free the base struct as well */
}
}
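The domlen<hostlen to domlen<=hostlen change above lets a cookie whose domain
string is exactly as long as the host name match as well. A minimal
stand-alone sketch of that tail comparison (strcasecmp stands in for curl's
case-insensitive strequal()):

  #include <string.h>
  #include <strings.h>

  /* return non-zero if 'domain' matches the tail of 'host'; with the old
     '<' test an exact-length match such as host "example.com" against
     domain "example.com" was wrongly rejected */
  static int tail_match(const char *host, const char *domain)
  {
    size_t hostlen = strlen(host);
    size_t domlen = strlen(domain);
    return (domlen <= hostlen) &&
           !strcasecmp(host + (hostlen - domlen), domain);
  }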

View File

@@ -233,7 +233,7 @@ CURLcode dict(struct connectdata *conn)
int i;
ppath++;
for (i = 0; (i < URL_MAX_LENGTH) && (ppath[i]); i++) {
for (i = 0; ppath[i]; i++) {
if (ppath[i] == ':')
ppath[i] = ' ';
}

82
lib/dllinit.c Normal file
View File

@@ -0,0 +1,82 @@
/* dllinit.c -- Portable DLL initialization.
Copyright (C) 1998, 1999 Free Software Foundation, Inc.
Contributed by Mumit Khan (khan@xraylith.wisc.edu).
I've used DllMain as the DLL "main" since that's the most common
usage. MSVC and Mingw32 both default to DllMain as the standard
callback from the linker entry point. Cygwin, as of b20.1, also
uses DllMain as the default callback from the entry point.
The real entry point is typically always defined by the runtime
library, and usually never overridden by a (casual) user. What you can
override however is the callback routine that the entry point calls,
and this file provides such a callback function, DllMain.
Mingw32: The default entry point for mingw32 is DllMainCRTStartup
which is defined in libmingw32.a. This in turn calls DllMain which is
defined here. If not defined, there is a stub in libmingw32.a which
does nothing.
Cygwin: The default entry point for Cygwin b20.1 or newer is
__cygwin_dll_entry which is defined in libcygwin.a. This in turn
calls the routine DllMain. If not defined, there is a stub in
libcygwin.a which does nothing.
MSVC: MSVC runtime calls DllMain, just like Mingw32.
Summary: If you need to do anything special in DllMain, just add it
here. Otherwise, the default setup should be just fine for 99%+ of
the time. I strongly suggest that you *not* change the entry point,
but rather change DllMain as appropriate.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#undef WIN32_LEAN_AND_MEAN
#include <stdio.h>
BOOL APIENTRY DllMain (HINSTANCE hInst, DWORD reason,
LPVOID reserved /* Not used. */ );
/*
*----------------------------------------------------------------------
*
* DllMain --
*
* This routine is called by the Mingw32, Cygwin32 or VC++ C run
* time library init code, or the Borland DllEntryPoint routine. It
* is responsible for initializing various dynamically loaded
* libraries.
*
* Results:
* TRUE on success, FALSE on failure.
*
* Side effects:
*
*----------------------------------------------------------------------
*/
BOOL APIENTRY
DllMain (
HINSTANCE hInst /* Library instance handle. */ ,
DWORD reason /* Reason this function is being called. */ ,
LPVOID reserved /* Not used. */ )
{
switch (reason)
{
case DLL_PROCESS_ATTACH:
break;
case DLL_PROCESS_DETACH:
break;
case DLL_THREAD_ATTACH:
break;
case DLL_THREAD_DETACH:
break;
}
return TRUE;
}

View File

@@ -162,3 +162,13 @@ void curl_easy_cleanup(CURL *curl)
curl_close(curl);
curl_free();
}
CURLcode curl_easy_getinfo(CURL *curl, CURLINFO info, ...)
{
va_list arg;
void *paramp;
va_start(arg, info);
paramp = va_arg(arg, void *);
return curl_getinfo(curl, info, paramp);
}

View File

@@ -48,6 +48,11 @@
#include <stdlib.h>
#include <string.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
char *curl_escape(char *string)
{
int alloc=strlen(string)+1;
@@ -95,7 +100,7 @@ char *curl_unescape(char *string, int length)
the "query part" where '+' should become ' '.
RFC 2316, section 3.10 */
while(--alloc) {
while(--alloc > 0) {
in = *string;
if(querypart && ('+' == in))
in = ' ';
@@ -108,6 +113,7 @@ char *curl_unescape(char *string, int length)
if(sscanf(string+1, "%02X", &hex)) {
in = hex;
string+=2;
alloc-=2;
}
}

View File

@@ -103,6 +103,10 @@
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
CURLcode file(struct connectdata *conn)
{
@@ -151,9 +155,6 @@ CURLcode file(struct connectdata *conn)
this is both more efficient than the former call to download() and
it avoids problems with select() and recv() on file descriptors
in Winsock */
#if 0
ProgressInit (data, expected_size);
#endif
if(expected_size != -1)
pgrsSetDownloadSize(data, expected_size);
@@ -170,10 +171,11 @@ CURLcode file(struct connectdata *conn)
Windows systems if the target is stdout. Use -O or -o parameters
to prevent CR/LF translation (this then goes to a binary mode
file descriptor). */
if(nread != data->fwrite (buf, 1, nread, data->out)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
res = client_write(data, CLIENTWRITE_BODY, buf, nread);
if(res)
return res;
now = tvnow();
if(pgrsUpdate(data))
res = CURLE_ABORTED_BY_CALLBACK;
@@ -184,7 +186,5 @@ CURLcode file(struct connectdata *conn)
close(fd);
free(actual_path);
return res;
}

View File

@@ -63,6 +63,11 @@
#include "strequal.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* Length of the random boundary string. The risk of this being used
in binary data is very close to zero, 64^32 makes
6277101735386680763835789423207666416102355444464034512896
@@ -377,8 +382,8 @@ char *MakeFormBoundary(void)
return retstring;
}
/* Used from http.c */
void FormFree(struct FormData *form)
{
struct FormData *next;
@@ -386,6 +391,28 @@ void FormFree(struct FormData *form)
next=form->next; /* the following form line */
free(form->line); /* free the line */
free(form); /* free the struct */
} while((form=next)); /* continue */
}
/* external function to free up a whole form post chain */
void curl_formfree(struct HttpPost *form)
{
struct HttpPost *next;
do {
next=form->next; /* the following form line */
/* recurse to sub-contents */
if(form->more)
curl_formfree(form->more);
if(form->name)
free(form->name); /* free the name */
if(form->contents)
free(form->contents); /* free the contents */
if(form->contenttype)
free(form->contenttype); /* free the content type */
free(form); /* free the struct */
} while((form=next)); /* continue */
}
@@ -458,12 +485,20 @@ struct FormData *getFormData(struct HttpPost *post,
"\r\nContent-Type: %s",
file->contenttype);
}
#if 0
/* The header Content-Transfer-Encoding: seems to confuse some receivers
* (like the built-in PHP engine). While I can't see any reason why it
* should, I can just as well skip this to the benefit of the users who
* are using such confused receivers.
*/
if(file->contenttype &&
!strnequal("text/", file->contenttype, 5)) {
/* this is not a text content, mention our binary encoding */
size += AddFormData(&form, "\r\nContent-Transfer-Encoding: binary", 0);
}
#endif
size += AddFormData(&form, "\r\n\r\n", 0);

299
lib/ftp.c
View File

@@ -85,10 +85,15 @@
#include "progress.h"
#include "download.h"
#include "escape.h"
#include "http.h" /* for HTTP proxy tunnel stuff */
#ifdef KRB4
#include "security.h"
#endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* returns last node in linked list */
static struct curl_slist *slist_get_last(struct curl_slist *list)
@@ -124,7 +129,7 @@ struct curl_slist *curl_slist_append(struct curl_slist *list, char *data)
}
else {
fprintf(stderr, "Cannot allocate memory for QUOTE list.\n");
exit(-1);
return NULL;
}
if (list) {
@@ -213,7 +218,8 @@ static CURLcode AllowServerConnect(struct UrlData *data,
isdigit((int)line[2]) && (' ' == line[3]))
int GetLastResponse(int sockfd, char *buf,
struct connectdata *conn)
struct connectdata *conn,
int *ftpcode)
{
int nread;
int keepon=TRUE;
@@ -229,6 +235,8 @@ int GetLastResponse(int sockfd, char *buf,
#define SELECT_TIMEOUT 2
int error = SELECT_OK;
*ftpcode=0; /* 0 for errors */
if(data->timeout) {
/* if timeout is requested, find out how much remaining time we have */
timeout = data->timeout - /* timeout time */
@@ -269,8 +277,8 @@ int GetLastResponse(int sockfd, char *buf,
break;
default:
#ifdef USE_SSLEAY
if (data->use_ssl) {
keepon = SSL_read(data->ssl, ptr, 1);
if (data->ssl.use) {
keepon = SSL_read(data->ssl.handle, ptr, 1);
}
else {
#endif
@@ -313,6 +321,8 @@ int GetLastResponse(int sockfd, char *buf,
if(error)
return -error;
*ftpcode=atoi(buf); /* return the initial number like this */
return nread;
}
@@ -333,53 +343,6 @@ char *getmyhost(char *buf, int buf_size)
return buf;
}
#if 0
/*
* URLfix()
*
* This function returns a string converted FROM the input URL format to a
* format that is more likely usable for the remote server. That is, all
* special characters (found as %XX-codes) will be eascaped with \<letter>.
*/
static char *URLfix(char *string)
{
/* The length of the new string can't be longer than twice the original
string, if all letters are '+'... */
int alloc = strlen(string)*2;
char *ns = malloc(alloc);
unsigned char in;
int index=0;
int hex;
while(*string) {
in = *string;
switch(in) {
case '+':
ns[index++] = '\\';
ns[index++] = ' ';
string++;
continue;
case '%':
/* encoded part */
if(sscanf(string+1, "%02X", &hex)) {
ns[index++] = '\\';
ns[index++] = hex;
string+=3;
continue;
}
/* FALLTHROUGH */
default:
ns[index++] = in;
string++;
}
}
ns[index]=0; /* terminate it */
return ns;
}
#endif
/* ftp_connect() should do everything that is to be considered a part
of the connection phase. */
CURLcode ftp_connect(struct connectdata *conn)
@@ -390,6 +353,7 @@ CURLcode ftp_connect(struct connectdata *conn)
char *buf = data->buffer; /* this is our buffer */
struct FTP *ftp;
CURLcode result;
int ftpcode;
myalarm(0); /* switch off the alarm stuff */
@@ -414,10 +378,11 @@ CURLcode ftp_connect(struct connectdata *conn)
}
/* The first thing we do is wait for the "220*" line: */
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "220", 3)) {
if(ftpcode != 220) {
failf(data, "This doesn't seem like a nice ftp-server response");
return CURLE_FTP_WEIRD_SERVER_REPLY;
}
@@ -446,31 +411,31 @@ CURLcode ftp_connect(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "USER %s", ftp->user);
/* wait for feedback */
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(!strncmp(buf, "530", 3)) {
if(ftpcode == 530) {
/* 530 User ... access denied
(the server denies to log the specified user) */
failf(data, "Access denied: %s", &buf[4]);
return CURLE_FTP_ACCESS_DENIED;
}
else if(!strncmp(buf, "331", 3)) {
else if(ftpcode == 331) {
/* 331 Password required for ...
(the server requires to send the user's password too) */
ftpsendf(data->firstsocket, conn, "PASS %s", ftp->passwd);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(!strncmp(buf, "530", 3)) {
if(ftpcode == 530) {
/* 530 Login incorrect.
(the username and/or the password are incorrect) */
failf(data, "the username and/or the password are incorrect");
return CURLE_FTP_USER_PASSWORD_INCORRECT;
}
else if(!strncmp(buf, "230", 3)) {
else if(ftpcode == 230) {
/* 230 User ... logged in.
(user successfully logged in) */
@@ -481,7 +446,7 @@ CURLcode ftp_connect(struct connectdata *conn)
return CURLE_FTP_WEIRD_PASS_REPLY;
}
}
else if(/*! strncmp(buf, "230", 3)***/ buf[0] == '2') {
else if(buf[0] == '2') {
/* 230 User ... logged in.
(the user logged in without password) */
infof(data, "We have successfully logged in\n");
@@ -516,6 +481,7 @@ CURLcode ftp_done(struct connectdata *conn)
size_t nread;
char *buf = data->buffer; /* this is our buffer */
struct curl_slist *qitem; /* QUOTE item */
int ftpcode;
if(data->bits.upload) {
if((-1 != data->infilesize) && (data->infilesize != *ftp->bytecountp)) {
@@ -545,12 +511,12 @@ CURLcode ftp_done(struct connectdata *conn)
if(!data->bits.no_body) {
/* now let's see what the server says about the transfer we
just performed: */
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
/* 226 Transfer complete, 250 Requested file action okay, completed. */
if(!strncmp(buf, "226", 3) && !strncmp(buf, "250", 3)) {
if((ftpcode != 226) && (ftpcode != 250)) {
failf(data, "%s", buf+4);
return CURLE_FTP_WRITE_ERROR;
}
@@ -565,7 +531,7 @@ CURLcode ftp_done(struct connectdata *conn)
if (qitem->data) {
ftpsendf(data->firstsocket, conn, "%s", qitem->data);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -579,12 +545,8 @@ CURLcode ftp_done(struct connectdata *conn)
}
}
if(ftp->file)
free(ftp->file);
if(ftp->dir)
free(ftp->dir);
/* TBD: the ftp struct is still allocated here */
free(ftp);
data->proto.ftp=NULL; /* it is gone */
return CURLE_OK;
}
@@ -612,6 +574,7 @@ CURLcode _ftp(struct connectdata *conn)
struct FTP *ftp = data->proto.ftp;
long *bytecountp = ftp->bytecountp;
int ftpcode; /* for ftp status */
/* Send any QUOTE strings? */
if(data->quote) {
@@ -622,7 +585,7 @@ CURLcode _ftp(struct connectdata *conn)
if (qitem->data) {
ftpsendf(data->firstsocket, conn, "%s", qitem->data);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -639,16 +602,45 @@ CURLcode _ftp(struct connectdata *conn)
/* change directory first! */
if(ftp->dir && ftp->dir[0]) {
ftpsendf(data->firstsocket, conn, "CWD %s", ftp->dir);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "250", 3)) {
if(ftpcode != 250) {
failf(data, "Couldn't change to directory %s", ftp->dir);
return CURLE_FTP_ACCESS_DENIED;
}
}
if(data->bits.get_filetime && ftp->file) {
/* we have requested to get the modified-time of the file, this is yet
again a grey area as the MDTM is not kosher RFC959 */
ftpsendf(data->firstsocket, conn, "MDTM %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(ftpcode == 213) {
/* we got a time. Format should be: "YYYYMMDDHHMMSS[.sss]" where the
last .sss part is optional and means fractions of a second */
int year, month, day, hour, minute, second;
if(6 == sscanf(buf+4, "%04d%02d%02d%02d%02d%02d",
&year, &month, &day, &hour, &minute, &second)) {
/* we have a time, reformat it */
time_t secs=time(NULL);
sprintf(buf, "%04d%02d%02d %02d:%02d:%02d",
year, month, day, hour, minute, second);
/* now, convert this into a time() value: */
data->progress.filetime = curl_getdate(buf, &secs);
}
else {
infof(data, "unsupported MDTM reply format\n");
}
}
}
/* If we have selected NOBODY, it means that we only want file information.
Which in FTP can't be much more than the file size! */
if(data->bits.no_body) {
@@ -656,33 +648,59 @@ CURLcode _ftp(struct connectdata *conn)
may not support it! It is however the only way we have to get a file's
size! */
int filesize;
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn);
/* Some servers return different sizes for different modes, and thus we
must set the proper type before we check the size */
ftpsendf(data->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "213", 3)) {
if(ftpcode != 200) {
failf(data, "Couldn't set %s mode",
(data->bits.ftp_ascii)?"ASCII":"binary");
return (data->bits.ftp_ascii)? CURLE_FTP_COULDNT_SET_ASCII:
CURLE_FTP_COULDNT_SET_BINARY;
}
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(ftpcode != 213) {
failf(data, "Couldn't get file size: %s", buf+4);
return CURLE_FTP_COULDNT_GET_SIZE;
}
/* get the size from the ascii string: */
filesize = atoi(buf+4);
sprintf(buf, "Content-Length: %d\n", filesize);
sprintf(buf, "Content-Length: %d\r\n", filesize);
result = client_write(data, CLIENTWRITE_BOTH, buf, 0);
if(result)
return result;
if(strlen(buf) != data->fwrite(buf, 1, strlen(buf), data->out)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
if(data->writeheader) {
/* the header is requested to be written to this file */
if(strlen(buf) != data->fwrite (buf, 1, strlen(buf),
data->writeheader)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
#ifdef HAVE_STRFTIME
if(data->bits.get_filetime && data->progress.filetime) {
struct tm *tm;
#ifdef HAVE_LOCALTIME_R
struct tm buffer;
tm = (struct tm *)localtime_r(&data->progress.filetime, &buffer);
#else
tm = localtime(&data->progress.filetime);
#endif
/* format: "Tue, 15 Nov 1994 12:45:26 GMT" */
strftime(buf, BUFSIZE-1, "Last-Modified: %a, %d %b %Y %H:%M:%S %Z\r\n",
tm);
result = client_write(data, CLIENTWRITE_BOTH, buf, 0);
if(result)
return result;
}
#endif
return CURLE_OK;
}
@@ -751,6 +769,9 @@ CURLcode _ftp(struct connectdata *conn)
free(hostdataptr);
return CURLE_FTP_PORT_FAILED;
}
if(hostdataptr)
/* free the memory used for name lookup */
free(hostdataptr);
}
else {
failf(data, "could't find my own IP address (%s)", myhost);
@@ -776,11 +797,11 @@ CURLcode _ftp(struct connectdata *conn)
porttouse & 255);
}
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "200", 3)) {
if(ftpcode != 200) {
failf(data, "Server does not grok PORT, try without it!");
return CURLE_FTP_PORT_FAILED;
}
@@ -789,11 +810,11 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "PASV");
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "227", 3)) {
if(ftpcode != 227) {
failf(data, "Odd return code after PASV");
return CURLE_FTP_WEIRD_PASV_REPLY;
}
@@ -839,7 +860,8 @@ CURLcode _ftp(struct connectdata *conn)
* previous lookup.
*/
he = conn->hp;
connectport = data->port; /* we connect to the proxy's port */
connectport =
(unsigned short)data->port; /* we connect to the proxy's port */
}
else {
/* normal, direct, ftp connection */
@@ -964,11 +986,11 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "200", 3)) {
if(ftpcode != 200) {
failf(data, "Couldn't set %s mode",
(data->bits.ftp_ascii)?"ASCII":"binary");
return (data->bits.ftp_ascii)? CURLE_FTP_COULDNT_SET_ASCII:
@@ -995,11 +1017,11 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "213", 3)) {
if(ftpcode != 213) {
failf(data, "Couldn't get file size: %s", buf+4);
return CURLE_FTP_COULDNT_GET_SIZE;
}
@@ -1011,25 +1033,9 @@ CURLcode _ftp(struct connectdata *conn)
if(data->resume_from) {
/* do we still game? */
int passed=0;
#if 0
/* Set resume file transfer offset */
infof(data, "Instructs server to resume from offset %d\n",
data->resume_from);
ftpsendf(data->firstsocket, conn, "REST %d", data->resume_from);
nread = GetLastResponse(data->firstsocket, buf, conn);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "350", 3)) {
failf(data, "Couldn't use REST: %s", buf+4);
return CURLE_FTP_COULDNT_USE_REST;
}
#else
/* enable append instead */
data->bits.ftp_append = 1;
#endif
/* Now, let's read off the proper amount of bytes from the
input. If we knew it was a proper file we could've just
fseek()ed but we only have a stream here */
@@ -1057,8 +1063,8 @@ CURLcode _ftp(struct connectdata *conn)
data->infilesize -= data->resume_from;
if(data->infilesize <= 0) {
infof(data, "File already completely uploaded\n");
return CURLE_OK;
failf(data, "File already completely uploaded\n");
return CURLE_FTP_COULDNT_STOR_FILE;
}
}
/* we've passed, proceed as normal */
@@ -1072,11 +1078,11 @@ CURLcode _ftp(struct connectdata *conn)
else
ftpsendf(data->firstsocket, conn, "STOR %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(atoi(buf)>=400) {
if(ftpcode>=400) {
failf(data, "Failed FTP upload:%s", buf+3);
/* oops, we never close the sockets! */
return CURLE_FTP_COULDNT_STOR_FILE;
@@ -1094,9 +1100,7 @@ CURLcode _ftp(struct connectdata *conn)
size prior to the actual upload. */
pgrsSetUploadSize(data, data->infilesize);
#if 0
ProgressInit(data, data->infilesize);
#endif
result = Transfer(conn, -1, -1, FALSE, NULL, /* no download */
data->secondarysocket, bytecountp);
if(result)
@@ -1144,11 +1148,7 @@ CURLcode _ftp(struct connectdata *conn)
infof(data, "range-download from %d to %d, totally %d bytes\n",
from, to, totalsize);
}
#if 0
if(!ppath[0])
/* make sure this becomes a valid name */
ppath="./";
#endif
if((data->bits.ftp_list_only) || !ftp->file) {
/* The specified path ends with a slash, and therefore we think this
is a directory that is requested, use LIST. But before that we
@@ -1158,11 +1158,11 @@ CURLcode _ftp(struct connectdata *conn)
/* Set type to ASCII */
ftpsendf(data->firstsocket, conn, "TYPE A");
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "200", 3)) {
if(ftpcode != 200) {
failf(data, "Couldn't set ascii mode");
return CURLE_FTP_COULDNT_SET_ASCII;
}
@@ -1178,13 +1178,13 @@ CURLcode _ftp(struct connectdata *conn)
else {
/* Set type to binary (unless specified ASCII) */
ftpsendf(data->firstsocket, conn, "TYPE %s",
(data->bits.ftp_list_only)?"A":"I");
(data->bits.ftp_ascii)?"A":"I");
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "200", 3)) {
if(ftpcode != 200) {
failf(data, "Couldn't set %s mode",
(data->bits.ftp_ascii)?"ASCII":"binary");
return (data->bits.ftp_ascii)? CURLE_FTP_COULDNT_SET_ASCII:
@@ -1201,11 +1201,11 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "213", 3)) {
if(ftpcode != 213) {
infof(data, "server doesn't support SIZE: %s", buf+4);
/* We couldn't get the size and therefore we can't know if there
really is a part of the file left to get, although the server
@@ -1245,11 +1245,11 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "REST %d", data->resume_from);
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(strncmp(buf, "350", 3)) {
if(ftpcode != 350) {
failf(data, "Couldn't use REST: %s", buf+4);
return CURLE_FTP_COULDNT_USE_REST;
}
@@ -1258,11 +1258,11 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "RETR %s", ftp->file);
}
nread = GetLastResponse(data->firstsocket, buf, conn);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
if(!strncmp(buf, "150", 3) || !strncmp(buf, "125", 3)) {
if((ftpcode == 150) || (ftpcode == 125)) {
/*
A;
@@ -1317,25 +1317,10 @@ CURLcode _ftp(struct connectdata *conn)
}
}
#if 0
if(2 != sscanf(buf, "%*[^(](%d bytes%c", &size, &paren))
size=-1;
#endif
}
else if(downloadsize > -1)
size = downloadsize;
#if 0
if((size > -1) && (data->resume_from>0)) {
size -= data->resume_from;
if(size <= 0) {
failf(data, "Offset (%d) was beyond file size (%d)",
data->resume_from, data->resume_from+size);
return CURLE_PARTIAL_FILE;
}
}
#endif
if(data->bits.ftp_use_port) {
result = AllowServerConnect(data, portsock);
if( result )
@@ -1380,13 +1365,15 @@ CURLcode ftp(struct connectdata *conn)
it */
ftp->file = strrchr(conn->ppath, '/');
if(ftp->file) {
if(ftp->file != conn->ppath)
dirlength=ftp->file-conn->ppath; /* don't count the trailing slash */
ftp->file++; /* point to the first letter in the file name part or
remain NULL */
}
else {
ftp->file = conn->ppath; /* there's only a file part */
}
dirlength=ftp->file-conn->ppath;
if(*ftp->file) {
ftp->file = curl_unescape(ftp->file, 0);
@@ -1414,6 +1401,14 @@ CURLcode ftp(struct connectdata *conn)
retcode = _ftp(conn);
/* clean up here, success or error doesn't matter */
if(ftp->file)
free(ftp->file);
if(ftp->dir)
free(ftp->dir);
ftp->file = ftp->dir = NULL; /* zero */
return retcode;
}
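The QUOTE handling above walks a curl_slist, and with the change near the top
of this file curl_slist_append() now returns NULL on an allocation failure
instead of calling exit(). A minimal usage sketch, assuming an
already-initialized easy handle named curl and assuming CURLOPT_QUOTE is the
easy-interface option that ends up in data->quote:

  #include <curl/curl.h>

  /* build a list of raw FTP commands, attach it, run the transfer, free it */
  struct curl_slist *quote = curl_slist_append(NULL, "SITE IDLE 60");
  if(quote)
    quote = curl_slist_append(quote, "PWD");
  if(!quote) {
    /* allocation failed: curl_slist_append() now returns NULL here */
  }
  else {
    curl_easy_setopt(curl, CURLOPT_QUOTE, quote);
    /* ... curl_easy_perform(curl) ... */
    curl_slist_free_all(quote);
  }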

View File

@@ -390,7 +390,7 @@ static const short yycheck[] = { 0,
56
};
/* -*-C-*- Note some compilers choke on comments on `#line' lines. */
#line 3 "/opt/TWWfsw/bison/share/bison.simple"
#line 3 "/usr/local/share/bison.simple"
/* This file comes from bison-1.28. */
/* Skeleton output parser for bison,
@@ -604,7 +604,7 @@ __yy_memcpy (char *to, char *from, unsigned int count)
#endif
#endif
#line 217 "/opt/TWWfsw/bison/share/bison.simple"
#line 217 "/usr/local/share/bison.simple"
/* The user can define YYPARSE_PARAM as the name of an argument to be passed
into yyparse. The argument should have type void *.
@@ -1295,7 +1295,7 @@ case 50:
break;}
}
/* the action file gets copied in in place of this dollarsign */
#line 543 "/opt/TWWfsw/bison/share/bison.simple"
#line 543 "/usr/local/share/bison.simple"
yyvsp -= yylen;
yyssp -= yylen;
@@ -1981,7 +1981,7 @@ curl_getdate (const char *p, const time_t *now)
yyInput = p;
Start = now ? *now : time ((time_t *) NULL);
#ifdef HAVE_LOCALTIME_R
tmp = localtime_r(&Start, &keeptime);
tmp = (struct tm *)localtime_r(&Start, &keeptime);
#else
tmp = localtime (&Start);
#endif

View File

@@ -934,7 +934,7 @@ curl_getdate (const char *p, const time_t *now)
yyInput = p;
Start = now ? *now : time ((time_t *) NULL);
#ifdef HAVE_LOCALTIME_R
tmp = localtime_r(&Start, &keeptime);
tmp = (struct tm *)localtime_r(&Start, &keeptime);
#else
tmp = localtime (&Start);
#endif

View File

@@ -45,6 +45,10 @@
#include <windows.h>
#endif
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
char *GetEnv(char *variable)
{
#ifdef WIN32

127
lib/getinfo.c Normal file
View File

@@ -0,0 +1,127 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* The contents of this file are subject to the Mozilla Public License
* Version 1.0 (the "License"); you may not use this file except in
* compliance with the License. You may obtain a copy of the License at
* http://www.mozilla.org/MPL/
*
* Software distributed under the License is distributed on an "AS IS"
* basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the
* License for the specific language governing rights and limitations
* under the License.
*
* The Original Code is Curl.
*
* The Initial Developer of the Original Code is Daniel Stenberg.
*
* Portions created by the Initial Developer are Copyright (C) 1999.
* All Rights Reserved.
*
* ------------------------------------------------------------
* Main author:
* - Daniel Stenberg <daniel@haxx.se>
*
* http://curl.haxx.se
*
* $Source$
* $Revision$
* $Date$
* $Author$
* $State$
* $Locker$
*
* ------------------------------------------------------------
****************************************************************************/
#include "setup.h"
#include <curl/curl.h>
#include "urldata.h"
#include <stdio.h>
#include <string.h>
#include <stdarg.h>
CURLcode curl_getinfo(CURL *curl, CURLINFO info, ...)
{
va_list arg;
long *param_longp;
double *param_doublep;
char **param_charp;
struct UrlData *data = (struct UrlData *)curl;
va_start(arg, info);
switch(info&CURLINFO_TYPEMASK) {
default:
return CURLE_BAD_FUNCTION_ARGUMENT;
case CURLINFO_STRING:
param_charp = va_arg(arg, char **);
if(NULL == param_charp)
return CURLE_BAD_FUNCTION_ARGUMENT;
break;
case CURLINFO_LONG:
param_longp = va_arg(arg, long *);
if(NULL == param_longp)
return CURLE_BAD_FUNCTION_ARGUMENT;
break;
case CURLINFO_DOUBLE:
param_doublep = va_arg(arg, double *);
if(NULL == param_doublep)
return CURLE_BAD_FUNCTION_ARGUMENT;
break;
}
switch(info) {
case CURLINFO_EFFECTIVE_URL:
*param_charp = data->url?data->url:"";
break;
case CURLINFO_HTTP_CODE:
*param_longp = data->progress.httpcode;
break;
case CURLINFO_FILETIME:
*param_longp = data->progress.filetime;
break;
case CURLINFO_HEADER_SIZE:
*param_longp = data->header_size;
break;
case CURLINFO_REQUEST_SIZE:
*param_longp = data->request_size;
break;
case CURLINFO_TOTAL_TIME:
*param_doublep = data->progress.timespent;
break;
case CURLINFO_NAMELOOKUP_TIME:
*param_doublep = data->progress.t_nslookup;
break;
case CURLINFO_CONNECT_TIME:
*param_doublep = data->progress.t_connect;
break;
case CURLINFO_PRETRANSFER_TIME:
*param_doublep = data->progress.t_pretransfer;
break;
case CURLINFO_SIZE_UPLOAD:
*param_doublep = data->progress.uploaded;
break;
case CURLINFO_SIZE_DOWNLOAD:
*param_doublep = data->progress.downloaded;
break;
case CURLINFO_SPEED_DOWNLOAD:
*param_doublep = data->progress.dlspeed;
break;
case CURLINFO_SPEED_UPLOAD:
*param_doublep = data->progress.ulspeed;
break;
case CURLINFO_SSL_VERIFYRESULT:
*param_longp = data->ssl.certverifyresult;
break;
default:
return CURLE_BAD_FUNCTION_ARGUMENT;
}
return CURLE_OK;
}
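curl_getinfo() above selects the argument type from the CURLINFO type bits and
fills in the caller's pointer; applications reach it through the
curl_easy_getinfo() vararg wrapper added in easy.c earlier in this change set.
A small usage sketch, assuming an easy handle curl that has already completed
a transfer:

  #include <stdio.h>
  #include <curl/curl.h>

  /* read back two of the values exposed above */
  long httpcode = 0;
  double total_time = 0.0;

  if(CURLE_OK == curl_easy_getinfo(curl, CURLINFO_HTTP_CODE, &httpcode) &&
     CURLE_OK == curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total_time))
    printf("HTTP %ld in %.2f seconds\n", httpcode, total_time);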

View File

@@ -4,10 +4,11 @@
* Redistribution and use are freely permitted provided that:
*
* 1) This header remains intact.
* 2) The prototype for getpass is not changed from:
* 2) The prototypes for getpass and getpass_r are not changed from:
* char *getpass(const char *prompt)
* char *getpass_r(const char *prompt, char* buffer, int buflen)
* 3) This source code is not used outside of this(getpass.c) file.
* 3) Any changes to this(getpass.c) source code are made publicly available.
* 4) Any changes to this(getpass.c) source code are made publicly available.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
@@ -34,19 +35,19 @@
* Daniel Stenberg <daniel@haxx.se>
*/
#ifndef WIN32
#ifdef HAVE_CONFIG_H
# include <config.h>
#endif
#ifndef HAVE_GETPASS_R
#ifndef WIN32
#ifdef HAVE_TERMIOS_H
# if !defined(HAVE_TCGETATTR) && !defined(HAVE_TCSETATTR)
# undef HAVE_TERMIOS_H
# endif
#endif
#define INPUT_BUFFER 128
#ifndef RETSIGTYPE
# define RETSIGTYPE void
#endif
@@ -70,11 +71,10 @@
# define perror(x) fprintf(stderr, "Error in: %s\n", x)
#endif
char *getpass(const char *prompt)
char *getpass_r(const char *prompt, char *buffer, int buflen)
{
FILE *infp;
FILE *outfp;
static char buf[INPUT_BUFFER];
RETSIGTYPE (*sigint)();
#ifndef __EMX__
RETSIGTYPE (*sigtstp)();
@@ -115,25 +115,25 @@ char *getpass(const char *prompt)
#ifdef HAVE_TERMIOS_H
if(tcgetattr(outfd, &orig) != 0)
{
perror("tcgetattr");
; /*perror("tcgetattr");*/
}
noecho = orig;
noecho.c_lflag &= ~ECHO;
if(tcsetattr(outfd, TCSANOW, &noecho) != 0)
{
perror("tcgetattr");
; /*perror("tcgetattr");*/
}
#else
# ifdef HAVE_TERMIO_H
if(ioctl(outfd, TCGETA, &orig) != 0)
{
perror("ioctl");
; /*perror("ioctl");*/
}
noecho = orig;
noecho.c_lflag &= ~ECHO;
if(ioctl(outfd, TCSETA, &noecho) != 0)
{
perror("ioctl");
; /*perror("ioctl");*/
}
# else
# endif
@@ -142,8 +142,8 @@ char *getpass(const char *prompt)
fputs(prompt, outfp);
fflush(outfp);
bytes_read=read(infd, buf, INPUT_BUFFER);
buf[bytes_read > 0 ? (bytes_read -1) : 0] = '\0';
bytes_read=read(infd, buffer, buflen);
buffer[bytes_read > 0 ? (bytes_read -1) : 0] = '\0';
/* print a new line if needed */
#ifdef HAVE_TERMIOS_H
@@ -157,18 +157,18 @@ char *getpass(const char *prompt)
/*
* reset term characteristics, use TCSAFLUSH in case the
* user types more than INPUT_BUFFER
* user types more than buflen
*/
#ifdef HAVE_TERMIOS_H
if(tcsetattr(outfd, TCSAFLUSH, &orig) != 0)
{
perror("tcgetattr");
; /*perror("tcgetattr");*/
}
#else
# ifdef HAVE_TERMIO_H
if(ioctl(outfd, TCSETA, &orig) != 0)
{
perror("ioctl");
; /*perror("ioctl");*/
}
# else
# endif
@@ -179,15 +179,38 @@ char *getpass(const char *prompt)
signal(SIGTSTP, sigtstp);
#endif
return(buf);
return buffer; /* we always return success */
}
#else
#else /* WIN32 */
#include <stdio.h>
#include <conio.h>
char *getpass_r(const char *prompt, char *buffer, int buflen)
{
int i;
printf("%s", prompt);
for(i=0; i<buflen; i++) {
buffer[i] = getch();
if ( buffer[i] == '\r' ) {
buffer[i] = 0;
break;
}
}
/* if user didn't hit ENTER, terminate buffer */
if (i==buflen)
buffer[buflen-1]=0;
return buffer; /* we always return success */
}
#endif
#endif /* ifndef HAVE_GETPASS_R */
#if 0
/* for consistency, here's the old-style function: */
char *getpass(const char *prompt)
{
static char password[80];
printf(prompt);
gets(password);
return password;
static char buf[256];
return getpass_r(prompt, buf, sizeof(buf));
}
#endif /* don't do anything if WIN32 */
#endif
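getpass() is replaced here by the re-entrant getpass_r(), which fills a
caller-supplied buffer; the header that follows documents that a NULL return
aborts the continued operation. A minimal usage sketch:

  #include <string.h>

  char pwbuf[256];

  if(NULL == getpass_r("Password: ", pwbuf, sizeof(pwbuf))) {
    /* no password could be read -- abort */
  }
  else {
    /* use pwbuf, then wipe it from memory */
    memset(pwbuf, 0, sizeof(pwbuf));
  }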

View File

@@ -1 +1,8 @@
char *getpass(const char *prompt);
#ifndef __GETPASS_H
#define __GETPASS_H
/*
* Returning NULL will abort the continued operation!
*/
char* getpass_r(char *prompt, char* buffer, int buflen );
#endif

View File

@@ -107,11 +107,19 @@
#include "getpass.h"
#include "progress.h"
#include "getdate.h"
#include "writeout.h"
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#ifndef min
#define min(a, b) ((a) < (b) ? (a) : (b))
#endif
CURLcode
_Transfer(struct connectdata *c_conn)
{
@@ -141,6 +149,7 @@ _Transfer(struct connectdata *c_conn)
long bodywrites=0;
char newurl[URL_MAX_LENGTH]; /* buffer for Location: URL */
int writetype;
/* the highest fd we use + 1 */
struct UrlData *data;
@@ -166,6 +175,7 @@ _Transfer(struct connectdata *c_conn)
#define KEEP_WRITE 2
pgrsTime(data, TIMER_PRETRANSFER);
speedinit(data);
if (!conn->getheader) {
header = FALSE;
@@ -327,24 +337,16 @@ _Transfer(struct connectdata *c_conn)
/* now, only output this if the header AND body are requested:
*/
if (data->bits.http_include_header) {
if((p - data->headerbuff) !=
data->fwrite (data->headerbuff, 1,
p - data->headerbuff, data->out)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
}
if(data->writeheader) {
/* obviously, the header is requested to be written to
this file: */
if((p - data->headerbuff) !=
data->fwrite (data->headerbuff, 1, p - data->headerbuff,
data->writeheader)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
}
writetype = CLIENTWRITE_HEADER;
if (data->bits.http_include_header)
writetype |= CLIENTWRITE_BODY;
urg = client_write(data, writetype, data->headerbuff,
p - data->headerbuff);
if(urg)
return urg;
data->header_size += p - data->headerbuff;
break; /* exit header line loop */
}
@@ -393,9 +395,11 @@ _Transfer(struct connectdata *c_conn)
}
else if(strnequal("Last-Modified:", p,
strlen("Last-Modified:")) &&
data->timecondition) {
(data->timecondition || data->bits.get_filetime) ) {
time_t secs=time(NULL);
timeofdoc = curl_getdate(p+strlen("Last-Modified:"), &secs);
if(data->bits.get_filetime)
data->progress.filetime = timeofdoc;
}
else if ((code >= 300 && code < 400) &&
(data->bits.http_follow_location) &&
@@ -406,21 +410,16 @@ _Transfer(struct connectdata *c_conn)
instead */
data->newurl = strdup (newurl);
}
if (data->bits.http_include_header) {
if(hbuflen != data->fwrite (p, 1, hbuflen, data->out)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
}
if(data->writeheader) {
/* the header is requested to be written to this file */
if(hbuflen != data->fwrite (p, 1, hbuflen,
data->writeheader)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
}
writetype = CLIENTWRITE_HEADER;
if (data->bits.http_include_header)
writetype |= CLIENTWRITE_BODY;
urg = client_write(data, writetype, p, hbuflen);
if(urg)
return urg;
data->header_size += hbuflen;
/* reset hbufp pointer && hbuflen */
hbufp = data->headerbuff;
@@ -504,10 +503,9 @@ _Transfer(struct connectdata *c_conn)
pgrsSetDownloadCounter(data, (double)bytecount);
if (nread != data->fwrite (str, 1, nread, data->out)) {
failf (data, "Failed writing output");
return CURLE_WRITE_ERROR;
}
urg = client_write(data, CLIENTWRITE_BODY, str, nread);
if(urg)
return urg;
} /* if (! header and data to read ) */
} /* if( read from socket ) */
@@ -522,7 +520,7 @@ _Transfer(struct connectdata *c_conn)
if(data->crlf)
buf = data->buffer; /* put it back on the buffer */
nread = data->fread(buf, 1, BUFSIZE, data->in);
nread = data->fread(buf, 1, conn->upload_bufsize, data->in);
/* the signed int typecast of nread is for systems that have
an unsigned size_t */
@@ -570,6 +568,15 @@ _Transfer(struct connectdata *c_conn)
if (urg)
return urg;
if(data->progress.ulspeed > conn->upload_bufsize) {
/* If we're transferring more data per second than fits in our buffer,
we increase the buffer size to adjust to the current
speed. However, we must not set it larger than BUFSIZE. We don't
adjust it downwards again since we don't see any point in that!
*/
conn->upload_bufsize=(long)min(data->progress.ulspeed, BUFSIZE);
}
if (data->timeout && (tvdiff (now, start) > data->timeout)) {
failf (data, "Operation timed out with %d out of %d bytes received",
bytecount, conn->size);
@@ -600,11 +607,12 @@ CURLcode curl_transfer(CURL *curl)
{
CURLcode res;
struct UrlData *data = curl;
struct connectdata *c_connect;
struct connectdata *c_connect=NULL;
pgrsStartNow(data);
do {
pgrsTime(data, TIMER_STARTSINGLE);
res = curl_connect(curl, (CURLconnect **)&c_connect);
if(res == CURLE_OK) {
res = curl_do(c_connect);
@@ -615,13 +623,24 @@ CURLcode curl_transfer(CURL *curl)
}
if((res == CURLE_OK) && data->newurl) {
/* Location: redirect */
/* Location: redirect
This is assumed to happen for HTTP(S) only!
*/
char prot[16];
char path[URL_MAX_LENGTH];
if (data->maxredirs && (data->followlocation >= data->maxredirs)) {
failf(data,"Maximum (%d) redirects followed", data->maxredirs);
curl_disconnect(c_connect);
res=CURLE_TOO_MANY_REDIRECTS;
break;
}
/* mark the next request as a followed location: */
data->bits.this_is_a_follow = TRUE;
data->followlocation++; /* count location-followers */
if(data->bits.http_auto_referer) {
/* We are asked to automatically set the previous URL as the
referer when we get the next URL. We pick the ->url field,
@@ -699,10 +718,14 @@ CURLcode curl_transfer(CURL *curl)
/* TBD: set the port with curl_setopt() */
data->port = 0;
}
if(data->bits.urlstringalloc)
free(data->url);
/* TBD: set the URL with curl_setopt() */
data->url = data->newurl;
data->newurl = NULL; /* don't show! */
data->bits.urlstringalloc = TRUE; /* the URL is allocated */
/* Disable both types of POSTs, since doing a second POST when
following isn't what anyone would want! */
@@ -724,10 +747,6 @@ CURLcode curl_transfer(CURL *curl)
if(data->newurl)
free(data->newurl);
if((CURLE_OK == res) && data->writeinfo) {
/* Time to output some info to stdout */
WriteOut(data);
}
return res;
}
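The maxredirs check added to curl_transfer() above caps how many Location:
redirects are followed and fails with CURLE_TOO_MANY_REDIRECTS when the cap is
hit. A hedged application-side sketch -- CURLOPT_MAXREDIRS is assumed to be
the easy-interface option that sets data->maxredirs:

  #include <stdio.h>
  #include <curl/curl.h>

  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
    curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5);  /* give up after 5 hops */
    if(CURLE_TOO_MANY_REDIRECTS == curl_easy_perform(curl))
      fprintf(stderr, "stopped: too many redirects\n");
    curl_easy_cleanup(curl);
  }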

View File

@@ -41,7 +41,6 @@
#include "setup.h"
#include <string.h>
#include <malloc.h>
#include <errno.h>
#define _REENTRANT
@@ -73,6 +72,11 @@
#include "inet_ntoa_r.h"
#endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* --- resolve name or IP-number --- */
char *MakeIP(unsigned long num,char *addr, int addr_len)
@@ -107,7 +111,7 @@ struct hostent *GetHost(struct UrlData *data,
{
struct hostent *h = NULL;
unsigned long in;
int ret;
int ret; /* this variable is unused on several platforms but used on some */
#define CURL_NAMELOOKUP_SIZE 9000
@@ -118,6 +122,7 @@ struct hostent *GetHost(struct UrlData *data,
char *buf = (char *)malloc(CURL_NAMELOOKUP_SIZE);
if(!buf)
return NULL; /* major failure */
*bufp = buf;
if ( (in=inet_addr(hostname)) != INADDR_NONE ) {
struct in_addr *addrentry;
@@ -182,12 +187,14 @@ struct hostent *GetHost(struct UrlData *data,
infof(data, "gethostbyname_r(2) failed for %s\n", hostname);
h = NULL; /* set return code to NULL */
free(buf);
*bufp=NULL;
}
#else
else {
if ((h = gethostbyname(hostname)) == NULL ) {
infof(data, "gethostbyname(2) failed for %s\n", hostname);
free(buf);
*bufp=NULL;
}
#endif
}

View File

@@ -117,6 +117,11 @@
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/*
* This function checks the linked list of custom HTTP headers for a particular
* header (prefix).
@@ -249,7 +254,8 @@ CURLcode http_done(struct connectdata *conn)
*bytecount = http->readbytecount + http->writebytecount;
}
/* TBD: the HTTP struct remains allocated here */
free(http);
data->proto.http=NULL; /* it is gone */
return CURLE_OK;
}
@@ -321,7 +327,7 @@ CURLcode http(struct connectdata *conn)
}
if ((data->bits.httpproxy) && !(conn->protocol&PROT_HTTPS)) {
/* The path sent to the proxy is in fact the entire URL */
strncpy(ppath, data->url, URL_MAX_LENGTH-1);
ppath = data->url;
}
if(data->bits.http_formpost) {
/* we must build the whole darned post sequence first, so that we have
@@ -330,7 +336,13 @@ CURLcode http(struct connectdata *conn)
}
if(!checkheaders(data, "Host:")) {
data->ptr_host = maprintf("Host: %s:%d\r\n", host, data->remote_port);
if(((conn->protocol&PROT_HTTPS) && (data->remote_port == PORT_HTTPS)) ||
(!(conn->protocol&PROT_HTTPS) && (data->remote_port == PORT_HTTP)) )
/* If (HTTPS on port 443) OR (non-HTTPS on port 80) then don't include
the port number in the host string */
data->ptr_host = maprintf("Host: %s\r\n", host);
else
data->ptr_host = maprintf("Host: %s:%d\r\n", host, data->remote_port);
}
if(!checkheaders(data, "Pragma:"))
@@ -340,57 +352,61 @@ CURLcode http(struct connectdata *conn)
http->p_accept = "Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*\r\n";
do {
send_buffer *req_buffer;
struct curl_slist *headers=data->headers;
sendf(data->firstsocket, data,
"%s " /* GET/HEAD/POST/PUT */
"%s HTTP/1.0\r\n" /* path */
"%s" /* proxyuserpwd */
"%s" /* userpwd */
"%s" /* range */
"%s" /* user agent */
"%s" /* cookie */
"%s" /* host */
"%s" /* pragma */
"%s" /* accept */
"%s", /* referer */
data->customrequest?data->customrequest:
(data->bits.no_body?"HEAD":
(data->bits.http_post || data->bits.http_formpost)?"POST":
(data->bits.http_put)?"PUT":"GET"),
ppath,
(data->bits.proxy_user_passwd && data->ptr_proxyuserpwd)?data->ptr_proxyuserpwd:"",
(data->bits.user_passwd && data->ptr_userpwd)?data->ptr_userpwd:"",
(data->bits.set_range && data->ptr_rangeline)?data->ptr_rangeline:"",
(data->useragent && *data->useragent && data->ptr_uagent)?data->ptr_uagent:"",
(data->ptr_cookie?data->ptr_cookie:""), /* Cookie: <data> */
(data->ptr_host?data->ptr_host:""), /* Host: host */
http->p_pragma?http->p_pragma:"",
http->p_accept?http->p_accept:"",
(data->bits.http_set_referer && data->ptr_ref)?data->ptr_ref:"" /* Referer: <data> <CRLF> */
);
/* initialize a dynamic send-buffer */
req_buffer = add_buffer_init();
/* add the main request stuff */
add_bufferf(req_buffer,
"%s " /* GET/HEAD/POST/PUT */
"%s HTTP/1.0\r\n" /* path */
"%s" /* proxyuserpwd */
"%s" /* userpwd */
"%s" /* range */
"%s" /* user agent */
"%s" /* cookie */
"%s" /* host */
"%s" /* pragma */
"%s" /* accept */
"%s", /* referer */
data->customrequest?data->customrequest:
(data->bits.no_body?"HEAD":
(data->bits.http_post || data->bits.http_formpost)?"POST":
(data->bits.http_put)?"PUT":"GET"),
ppath,
(data->bits.proxy_user_passwd && data->ptr_proxyuserpwd)?data->ptr_proxyuserpwd:"",
(data->bits.user_passwd && data->ptr_userpwd)?data->ptr_userpwd:"",
(data->bits.set_range && data->ptr_rangeline)?data->ptr_rangeline:"",
(data->useragent && *data->useragent && data->ptr_uagent)?data->ptr_uagent:"",
(data->ptr_cookie?data->ptr_cookie:""), /* Cookie: <data> */
(data->ptr_host?data->ptr_host:""), /* Host: host */
http->p_pragma?http->p_pragma:"",
http->p_accept?http->p_accept:"",
(data->bits.http_set_referer && data->ptr_ref)?data->ptr_ref:"" /* Referer: <data> <CRLF> */
);
if(co) {
int count=0;
struct Cookie *store=co;
/* now loop through all cookies that matched */
while(co) {
if(co->value && strlen(co->value)) {
if(0 == count) {
sendf(data->firstsocket, data,
"Cookie:");
add_bufferf(req_buffer, "Cookie: ");
}
sendf(data->firstsocket, data,
"%s%s=%s", count?"; ":"", co->name,
co->value);
add_bufferf(req_buffer,
"%s%s=%s", count?"; ":"", co->name, co->value);
count++;
}
co = co->next; /* next cookie please */
}
if(count) {
sendf(data->firstsocket, data,
"\r\n");
add_buffer(req_buffer, "\r\n", 2);
}
cookie_freelist(co); /* free the cookie list */
cookie_freelist(store); /* free the cookie list */
co=NULL;
}
@@ -419,16 +435,16 @@ CURLcode http(struct connectdata *conn)
switch(data->timecondition) {
case TIMECOND_IFMODSINCE:
default:
sendf(data->firstsocket, data,
"If-Modified-Since: %s\r\n", buf);
add_bufferf(req_buffer,
"If-Modified-Since: %s\r\n", buf);
break;
case TIMECOND_IFUNMODSINCE:
sendf(data->firstsocket, data,
"If-Unmodified-Since: %s\r\n", buf);
add_bufferf(req_buffer,
"If-Unmodified-Since: %s\r\n", buf);
break;
case TIMECOND_LASTMOD:
sendf(data->firstsocket, data,
"Last-Modified: %s\r\n", buf);
add_bufferf(req_buffer,
"Last-Modified: %s\r\n", buf);
break;
}
}
@@ -439,15 +455,13 @@ CURLcode http(struct connectdata *conn)
/* we require a colon for this to be a true header */
ptr++; /* pass the colon */
while(*ptr && isspace(*ptr))
while(*ptr && isspace((int)*ptr))
ptr++;
if(*ptr) {
/* only send this if the contents was non-blank */
sendf(data->firstsocket, data,
"%s\015\012",
headers->data);
add_bufferf(req_buffer, "%s\r\n", headers->data);
}
}
headers = headers->next;
@@ -468,12 +482,14 @@ CURLcode http(struct connectdata *conn)
generated form data */
data->in = (FILE *)&http->form;
sendf(data->firstsocket, data,
"Content-Length: %d\r\n",
http->postsize-2);
add_bufferf(req_buffer,
"Content-Length: %d\r\n", http->postsize-2);
/* set upload size to the progress meter */
pgrsSetUploadSize(data, http->postsize);
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
result = Transfer(conn, data->firstsocket, -1, TRUE,
&http->readbytecount,
data->firstsocket,
@@ -487,16 +503,21 @@ CURLcode http(struct connectdata *conn)
/* Let's PUT the data to the server! */
if(data->infilesize>0) {
sendf(data->firstsocket, data,
"Content-Length: %d\r\n\r\n", /* file size */
data->infilesize );
add_bufferf(req_buffer,
"Content-Length: %d\r\n\r\n", /* file size */
data->infilesize );
}
else
sendf(data->firstsocket, data,
"\015\012");
add_bufferf(req_buffer, "\015\012");
/* set the upload size to the progress meter */
pgrsSetUploadSize(data, data->infilesize);
/* this sends the buffer and frees all the buffer resources */
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
/* prepare for transfer */
result = Transfer(conn, data->firstsocket, -1, TRUE,
&http->readbytecount,
data->firstsocket,
@@ -512,28 +533,35 @@ CURLcode http(struct connectdata *conn)
if(!checkheaders(data, "Content-Length:"))
/* we allow replacing this header, although it isn't very wise to
actually set your own */
sendf(data->firstsocket, data,
"Content-Length: %d\r\n",
(data->postfieldsize?data->postfieldsize:
strlen(data->postfields)) );
add_bufferf(req_buffer,
"Content-Length: %d\r\n",
(data->postfieldsize?data->postfieldsize:
strlen(data->postfields)) );
if(!checkheaders(data, "Content-Type:"))
sendf(data->firstsocket, data,
"Content-Type: application/x-www-form-urlencoded\r\n");
add_bufferf(req_buffer,
"Content-Type: application/x-www-form-urlencoded\r\n");
/* and here comes the actual data */
if(data->postfieldsize) {
ssend(data->firstsocket, conn, "\r\n", 2);
ssend(data->firstsocket, conn, data->postfields, data->postfieldsize);
ssend(data->firstsocket, conn, "\r\n", 2);
add_buffer(req_buffer, "\r\n", 2);
add_buffer(req_buffer, data->postfields,
data->postfieldsize);
add_buffer(req_buffer, "\r\n", 2);
}
else {
add_bufferf(req_buffer,
"\r\n"
"%s\r\n",
data->postfields );
}
sendf(data->firstsocket, data,
"\r\n"
"%s\r\n",
data->postfields );
}
else
sendf(data->firstsocket, data, "\r\n");
add_buffer(req_buffer, "\r\n", 2);
/* issue the request */
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
/* HTTP GET/HEAD download: */
result = Transfer(conn, data->firstsocket, -1, TRUE, bytecount,


@@ -40,13 +40,18 @@
#ifdef KRB4
#include "security.h"
#include "base64_krb.h"
#include "base64.h"
#include <stdlib.h>
#include <netdb.h>
#include <syslog.h>
#include <string.h>
#include <krb.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#ifdef FTP_SERVER
#define LOCAL_ADDR ctrl_addr
#define REMOTE_ADDR his_addr


@@ -134,8 +134,7 @@ static void * DynaGetFunction(char *name)
static int WriteProc(void *param, char *text, int len)
{
struct UrlData *data = (struct UrlData *)param;
data->fwrite(text, 1, strlen(text), data->out);
client_write(data, CLIENTWRITE_BODY, text, 0);
return 0;
}

lib/libcurl.def (new file, 42 lines)

@@ -0,0 +1,42 @@
;
; Definition file for the DLL version of the LIBCURL library from curl
;
LIBRARY LIBCURL
DESCRIPTION 'curl libcurl - http://curl.haxx.se'
EXPORTS
curl_close @ 1 ;
curl_connect @ 2 ;
curl_disconnect @ 3 ;
curl_do @ 4 ;
curl_done @ 5 ;
curl_easy_cleanup @ 6 ;
curl_easy_getinfo @ 7 ;
curl_easy_init @ 8 ;
curl_easy_perform @ 9 ;
curl_easy_setopt @ 10 ;
curl_escape @ 11 ;
curl_formparse @ 12 ;
curl_free @ 13 ;
curl_getdate @ 14 ;
curl_getenv @ 15 ;
curl_init @ 16 ;
curl_open @ 17 ;
curl_read @ 18 ;
curl_setopt @ 19 ;
curl_slist_append @ 20 ;
curl_slist_free_all @ 21 ;
curl_transfer @ 22 ;
curl_unescape @ 23 ;
curl_version @ 24 ;
curl_write @ 25 ;
maprintf @ 26 ;
mfprintf @ 27 ;
mprintf @ 28 ;
msprintf @ 29 ;
msnprintf @ 30 ;
mvfprintf @ 31 ;
strequal @ 32 ;
strnequal @ 33 ;

lib/memdebug.c (new file, 118 lines)

@@ -0,0 +1,118 @@
#ifdef MALLOCDEBUG
/*****************************************************************************
*                             _   _ ____  _
*  Project                ___| | | |  _ \| |
*                        / __| | | | |_) | |
*                        | (__| |_| |  _ <| |___
*                         \___|\___/|_| \_\_____|
*
* The contents of this file are subject to the Mozilla Public License
* Version 1.0 (the "License"); you may not use this file except in
* compliance with the License. You may obtain a copy of the License at
* http://www.mozilla.org/MPL/
*
* Software distributed under the License is distributed on an "AS IS"
* basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the
* License for the specific language governing rights and limitations
* under the License.
*
* The Original Code is Curl.
*
* The Initial Developer of the Original Code is Daniel Stenberg.
*
* Portions created by the Initial Developer are Copyright (C) 1999.
* All Rights Reserved.
*
* ------------------------------------------------------------
* Main author:
* - Daniel Stenberg <daniel@haxx.se>
*
* http://curl.haxx.se
*
* $Source$
* $Revision$
* $Date$
* $Author$
* $State$
* $Locker$
*
* ------------------------------------------------------------
****************************************************************************/
#include "setup.h"
#include <curl/curl.h>
#define _MPRINTF_REPLACE
#include <curl/mprintf.h>
#include "urldata.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
/*
* Note that these debug functions are very simple and they are meant to
* remain so. For advanced analysis, record a log file and write perl scripts
* to analyze them!
*
* Don't use these with multithreaded test programs!
*/
FILE *logfile;
/* this sets the log file name */
void curl_memdebug(char *logname)
{
logfile = fopen(logname, "w");
}
void *curl_domalloc(size_t size, int line, char *source)
{
void *mem=(malloc)(size);
fprintf(logfile?logfile:stderr, "MEM %s:%d malloc(%d) = %p\n",
source, line, size, mem);
return mem;
}
char *curl_dostrdup(char *str, int line, char *source)
{
char *mem;
size_t len;
if(NULL ==str) {
fprintf(stderr, "ILLEGAL strdup() on NULL at %s:%d\n",
source, line);
exit(2);
}
mem=(strdup)(str);
len=strlen(str)+1;
fprintf(logfile?logfile:stderr, "MEM %s:%d strdup(%p) (%d) = %p\n",
source, line, str, len, mem);
return mem;
}
void *curl_dorealloc(void *ptr, size_t size, int line, char *source)
{
void *mem=(realloc)(ptr, size);
fprintf(logfile?logfile:stderr, "MEM %s:%d realloc(%p, %d) = %p\n",
source, line, ptr, size, mem);
return mem;
}
void curl_dofree(void *ptr, int line, char *source)
{
if(NULL == ptr) {
fprintf(stderr, "ILLEGAL free() on NULL at %s:%d\n",
source, line);
exit(2);
}
(free)(ptr);
fprintf(logfile?logfile:stderr, "MEM %s:%d free(%p)\n",
source, line, ptr);
}
#endif /* MALLOCDEBUG */
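lib/memdebug.c above adds a logging allocator layer. A hedged usage sketch, assuming the library is built with -DMALLOCDEBUG so the memdebug.h macros below are active; the log file name is made up:

    #include <stdlib.h>
    #include "memdebug.h"

    int main(void)
    {
      char *s;
      curl_memdebug("memdump"); /* hypothetical log name; stderr is used if this is never called */
      s = strdup("hello");      /* expands to curl_dostrdup("hello", __LINE__, __FILE__) */
      free(s);                  /* expands to curl_dofree(s, __LINE__, __FILE__) */
      return 0;
    }

The MEM lines this produces are what memanalyze.pl (added near the end of this compare) reads on stdin, e.g. perl memanalyze.pl < memdump.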

lib/memdebug.h (new file, 13 lines)

@@ -0,0 +1,13 @@
#ifdef MALLOCDEBUG
void *curl_domalloc(size_t size, int line, char *source);
void *curl_dorealloc(void *ptr, size_t size, int line, char *source);
void curl_dofree(void *ptr, int line, char *source);
char *curl_dostrdup(char *str, int line, char *source);
void curl_memdebug(char *logname);
/* Set this symbol on the command-line, recompile all lib-sources */
#define strdup(ptr) curl_dostrdup(ptr, __LINE__, __FILE__)
#define malloc(size) curl_domalloc(size, __LINE__, __FILE__)
#define realloc(ptr,size) curl_dorealloc(ptr, size, __LINE__, __FILE__)
#define free(ptr) curl_dofree(ptr, __LINE__, __FILE__)
#endif
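lib/memdebug.h is what reroutes the calls: with MALLOCDEBUG defined, every malloc/realloc/strdup/free in the library sources is rewritten at the call site to carry __LINE__ and __FILE__. For instance, taking a line from the lib/url.c diff below (the 42 is just an example line number):

    conn->path = (char *)malloc(urllen);
    /* preprocesses, with MALLOCDEBUG, into: */
    conn->path = (char *)curl_domalloc(urllen, 42, "url.c");

This is also why memdebug.c calls (malloc)(size) and (free)(ptr) with the names in parentheses: that keeps the function-like macros from expanding recursively.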


@@ -98,6 +98,10 @@ static const char rcsid[] = "@(#)$Id$";
#include <ctype.h>
#include <string.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#define BUFFSIZE 256 /* buffer for long-to-str and float-to-str calcs */
#define MAX_PARAMETERS 128 /* lame static limit */
@@ -997,33 +1001,6 @@ static int dprintf_formatf(
return done;
}
static int StoreNonPrintable(int output, struct nsprintf *infop)
{
/* If the character isn't printable then we convert it */
char work[64], *w;
int num = output;
w = &work[sizeof(work)];
*(--w) = (char)0;
for(; num > 0; num /= 10) {
*(--w) = lower_digits[num % 10];
}
if (infop->length + strlen(w) + 1 < infop->max)
{
infop->buffer[0] = '\\';
infop->buffer++;
infop->length++;
for (; *w; w++)
{
infop->buffer[0] = *w;
infop->buffer++;
infop->length++;
}
return output;
}
return -1;
}
/* fputc() look-alike */
static int addbyter(int output, FILE *data)
{
@@ -1031,16 +1008,9 @@ static int addbyter(int output, FILE *data)
if(infop->length < infop->max) {
/* only do this if we haven't reached max length yet */
if (isprint(output) || isspace(output))
{
infop->buffer[0] = (char)output; /* store */
infop->buffer++; /* increase pointer */
infop->length++; /* we are now one byte larger */
}
else
{
return StoreNonPrintable(output, infop);
}
infop->buffer[0] = (char)output; /* store */
infop->buffer++; /* increase pointer */
infop->length++; /* we are now one byte larger */
return output; /* fputc() returns like this on success */
}
return -1;


@@ -113,7 +113,7 @@ void pgrsDone(struct UrlData *data)
if(!(data->progress.flags & PGRS_HIDE)) {
data->progress.lastshow=0;
pgrsUpdate(data); /* the final (forced) update */
fprintf(stderr, "\n");
fprintf(data->err, "\n");
}
}
@@ -124,14 +124,24 @@ void pgrsTime(struct UrlData *data, timerid timer)
case TIMER_NONE:
/* mistake filter */
break;
case TIMER_STARTSINGLE:
/* This is set at the start of a single fetch, there may be several
fetches within an operation, why we add all other times relative
to this one */
data->progress.t_startsingle = tvnow();
break;
case TIMER_NAMELOOKUP:
data->progress.t_nslookup = tvnow();
data->progress.t_nslookup += tvdiff(tvnow(),
data->progress.t_startsingle);
break;
case TIMER_CONNECT:
data->progress.t_connect = tvnow();
data->progress.t_connect += tvdiff(tvnow(),
data->progress.t_startsingle);
break;
case TIMER_PRETRANSFER:
data->progress.t_pretransfer = tvnow();
data->progress.t_pretransfer += tvdiff(tvnow(),
data->progress.t_startsingle);
break;
case TIMER_POSTRANSFER:
/* this is the normal end-of-transfer thing */
@@ -312,7 +322,7 @@ int pgrsUpdate(struct UrlData *data)
if(total_expected_transfer)
total_percen=(double)(total_transfer/total_expected_transfer)*100;
fprintf(stderr,
fprintf(data->err,
"\r%3d %s %3d %s %3d %s %s %s %s %s %s %s",
(int)total_percen, /* total % */
max5data(total_expected_transfer, max5[2]), /* total size */


@@ -49,6 +49,7 @@ typedef enum {
TIMER_CONNECT,
TIMER_PRETRANSFER,
TIMER_POSTRANSFER,
TIMER_STARTSINGLE,
TIMER_LAST /* must be last */
} timerid;
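The new TIMER_STARTSINGLE marks the start of each individual fetch, and, as the progress code above shows, the name-lookup/connect/pretransfer timers now accumulate tvdiff() values relative to it, so an operation made up of several fetches reports summed phase times. A sketch of the intended per-fetch call order, with the actual work between calls elided:

    pgrsTime(data, TIMER_STARTSINGLE); /* fetch starts: records t_startsingle */
    /* ... name resolution ... */
    pgrsTime(data, TIMER_NAMELOOKUP);  /* t_nslookup += tvdiff(tvnow(), t_startsingle) */
    /* ... connect ... */
    pgrsTime(data, TIMER_CONNECT);
    /* ... request written ... */
    pgrsTime(data, TIMER_PRETRANSFER);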


@@ -38,15 +38,19 @@
#include "setup.h"
#include <curl/mprintf.h>
#ifdef KRB4
#include <curl/mprintf.h>
#include "security.h"
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#include "base64_krb.h"
#include "base64.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#define min(a, b) ((a) < (b) ? (a) : (b))


@@ -53,13 +53,19 @@
#include <curl/curl.h>
#include "urldata.h"
#include "sendf.h"
#define _MPRINTF_REPLACE /* use the internal *printf() functions */
#include <curl/mprintf.h>
#ifdef KRB4
#include "security.h"
#include <string.h>
#endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* infof() is for info message along the way */
@@ -82,14 +88,14 @@ void failf(struct UrlData *data, char *fmt, ...)
va_list ap;
va_start(ap, fmt);
if(data->errorbuffer)
vsprintf(data->errorbuffer, fmt, ap);
vsnprintf(data->errorbuffer, CURL_ERROR_SIZE, fmt, ap);
else /* no errorbuffer receives this, write to data->err instead */
vfprintf(data->err, fmt, ap);
va_end(ap);
}
/* sendf() sends the formated data to the server */
int sendf(int fd, struct UrlData *data, char *fmt, ...)
size_t sendf(int fd, struct UrlData *data, char *fmt, ...)
{
size_t bytes_written;
char *s;
@@ -105,8 +111,8 @@ int sendf(int fd, struct UrlData *data, char *fmt, ...)
#ifndef USE_SSLEAY
bytes_written = swrite(fd, s, strlen(s));
#else /* USE_SSLEAY */
if (data->use_ssl) {
bytes_written = SSL_write(data->ssl, s, strlen(s));
if (data->ssl.use) {
bytes_written = SSL_write(data->ssl.handle, s, strlen(s));
} else {
bytes_written = swrite(fd, s, strlen(s));
}
@@ -118,7 +124,7 @@ int sendf(int fd, struct UrlData *data, char *fmt, ...)
/*
* ftpsendf() sends the formated string as a ftp command to a ftp server
*/
int ftpsendf(int fd, struct connectdata *conn, char *fmt, ...)
size_t ftpsendf(int fd, struct connectdata *conn, char *fmt, ...)
{
size_t bytes_written;
char *s;
@@ -154,12 +160,9 @@ size_t ssend(int fd, struct connectdata *conn, void *mem, size_t len)
size_t bytes_written;
struct UrlData *data=conn->data; /* conn knows data, not vice versa */
if(data->bits.verbose)
fprintf(data->err, "> [binary output]\n");
#ifdef USE_SSLEAY
if (data->use_ssl) {
bytes_written = SSL_write(data->ssl, mem, len);
if (data->ssl.use) {
bytes_written = SSL_write(data->ssl.handle, mem, len);
}
else {
#endif
@@ -177,6 +180,125 @@ size_t ssend(int fd, struct connectdata *conn, void *mem, size_t len)
return bytes_written;
}
/* client_write() sends data to the write callback(s)
The bit pattern defines to what "streams" to write to. Body and/or header.
The defines are in sendf.h of course.
*/
CURLcode client_write(struct UrlData *data,
int type,
char *ptr,
size_t len)
{
size_t wrote;
if(0 == len)
len = strlen(ptr);
if(type & CLIENTWRITE_BODY) {
wrote = data->fwrite(ptr, 1, len, data->out);
if(wrote != len) {
failf (data, "Failed writing body");
return CURLE_WRITE_ERROR;
}
}
if((type & CLIENTWRITE_HEADER) && data->writeheader) {
wrote = data->fwrite(ptr, 1, len, data->writeheader);
if(wrote != len) {
failf (data, "Failed writing header");
return CURLE_WRITE_ERROR;
}
}
return CURLE_OK;
}
/*
* add_buffer_init() returns a fine buffer struct
*/
send_buffer *add_buffer_init(void)
{
send_buffer *blonk;
blonk=(send_buffer *)malloc(sizeof(send_buffer));
if(blonk) {
memset(blonk, 0, sizeof(send_buffer));
return blonk;
}
return NULL; /* failed, go home */
}
/*
* add_buffer_send() sends a buffer and frees all associated memory.
*/
size_t add_buffer_send(int sockfd, struct connectdata *conn, send_buffer *in)
{
size_t amount;
if(conn->data->bits.verbose) {
fputs("> ", conn->data->err);
/* this data _may_ contain binary stuff */
fwrite(in->buffer, in->size_used, 1, conn->data->err);
}
amount = ssend(sockfd, conn, in->buffer, in->size_used);
if(in->buffer)
free(in->buffer);
free(in);
return amount;
}
/*
* add_bufferf() builds a buffer from the formatted input
*/
CURLcode add_bufferf(send_buffer *in, char *fmt, ...)
{
CURLcode result = CURLE_OUT_OF_MEMORY;
char *s;
va_list ap;
va_start(ap, fmt);
s = mvaprintf(fmt, ap); /* this allocs a new string to append */
va_end(ap);
if(s) {
result = add_buffer(in, s, strlen(s));
free(s);
}
return result;
}
/*
* add_buffer() appends a memory chunk to the existing one
*/
CURLcode add_buffer(send_buffer *in, void *inptr, size_t size)
{
char *new_rb;
int new_size;
if(size > 0) {
if(!in->buffer ||
((in->size_used + size) > (in->size_max - 1))) {
new_size = (in->size_used+size)*2;
if(in->buffer)
/* we have a buffer, enlarge the existing one */
new_rb = (char *)realloc(in->buffer, new_size);
else
/* create a new buffer */
new_rb = (char *)malloc(new_size);
if(!new_rb)
return CURLE_OUT_OF_MEMORY;
in->buffer = new_rb;
in->size_max = new_size;
}
memcpy(&in->buffer[in->size_used], inptr, size);
in->size_used += size;
}
return CURLE_OK;
}
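client_write() above becomes the single exit point for handing received data to the application callbacks; the direct data->fwrite() calls elsewhere in this changeset (WriteProc above, the telnet code below) now route through it, and a length of 0 means "use strlen(ptr)". A small hedged sketch with made-up header data and a buf/nread pair assumed to come from the read loop:

    CURLcode res;
    res = client_write(data, CLIENTWRITE_HEADER, "Content-Type: text/html\r\n", 0);
    if(CURLE_OK == res)
      res = client_write(data, CLIENTWRITE_BODY, buf, nread);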


@@ -46,4 +46,23 @@ size_t ssend(int fd, struct connectdata *, void *fmt, size_t len);
void infof(struct UrlData *, char *fmt, ...);
void failf(struct UrlData *, char *fmt, ...);
struct send_buffer {
char *buffer;
long size_max;
long size_used;
};
typedef struct send_buffer send_buffer;
#define CLIENTWRITE_BODY 1
#define CLIENTWRITE_HEADER 2
#define CLIENTWRITE_BOTH (CLIENTWRITE_BODY|CLIENTWRITE_HEADER)
CURLcode client_write(struct UrlData *data, int type, char *ptr,
size_t len);
send_buffer *add_buffer_init(void);
CURLcode add_buffer(send_buffer *in, void *inptr, size_t size);
CURLcode add_bufferf(send_buffer *in, char *fmt, ...);
size_t add_buffer_send(int sockfd, struct connectdata *conn, send_buffer *in);
#endif


@@ -57,6 +57,10 @@
#endif
#endif
#ifndef __cplusplus /* (rabe) */
typedef char bool;
#endif /* (rabe) */
#include <stdio.h>
#ifndef OS
#ifdef WIN32
@@ -158,13 +162,4 @@ int fileno( FILE *stream);
#endif
/*
* FIXME: code for getting a passwd in windows/non termcap/signal systems?
*/
#ifndef WIN32
#define get_password(x) getpass(x)
#else
#define get_password(x)
#endif
#endif /* __CONFIG_H */


@@ -50,21 +50,24 @@
#include "sendf.h"
#include "speedcheck.h"
void speedinit(struct UrlData *data)
{
memset(&data->keeps_speed, 0, sizeof(struct timeval));
}
CURLcode speedcheck(struct UrlData *data,
struct timeval now)
{
static struct timeval keeps_speed;
if((data->current_speed >= 0) &&
if((data->progress.current_speed >= 0) &&
data->low_speed_time &&
(tvlong(keeps_speed) != 0) &&
(data->current_speed < data->low_speed_limit)) {
(tvlong(data->keeps_speed) != 0) &&
(data->progress.current_speed < data->low_speed_limit)) {
/* We are now below the "low speed limit". If we are below it
for "low speed time" seconds we consider that enough reason
to abort the download. */
if( tvdiff(now, keeps_speed) > data->low_speed_time) {
if( tvdiff(now, data->keeps_speed) > data->low_speed_time) {
/* we have been this slow for long enough, now die */
failf(data,
"Operation too slow. "
@@ -76,7 +79,7 @@ CURLcode speedcheck(struct UrlData *data,
}
else {
/* we keep up the required speed all right */
keeps_speed = now;
data->keeps_speed = now;
}
return CURLE_OK;
}
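The point of this speed-check change is that the low-speed bookkeeping moves from a function-static variable into data->keeps_speed, making it per-handle; speedinit() clears it before a transfer. Roughly how a transfer loop is expected to use the pair (the read step is elided):

    speedinit(data); /* zeroes data->keeps_speed */
    for(;;) {
      /* ... read a chunk; the progress code updates data->progress.current_speed ... */
      if(CURLE_OK != speedcheck(data, tvnow()))
        break; /* below low_speed_limit for low_speed_time seconds: abort */
    }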


@@ -44,6 +44,7 @@
#include "timeval.h"
void speedinit(struct UrlData *data);
CURLcode speedcheck(struct UrlData *data,
struct timeval now);


@@ -94,10 +94,10 @@ int SSL_cert_stuff(struct UrlData *data,
*/
strcpy(global_passwd, data->cert_passwd);
/* Set passwd callback: */
SSL_CTX_set_default_passwd_cb(data->ctx, passwd_callback);
SSL_CTX_set_default_passwd_cb(data->ssl.ctx, passwd_callback);
}
if (SSL_CTX_use_certificate_file(data->ctx,
if (SSL_CTX_use_certificate_file(data->ssl.ctx,
cert_file,
SSL_FILETYPE_PEM) <= 0) {
failf(data, "unable to set certificate file (wrong password?)\n");
@@ -106,14 +106,14 @@ int SSL_cert_stuff(struct UrlData *data,
if (key_file == NULL)
key_file=cert_file;
if (SSL_CTX_use_PrivateKey_file(data->ctx,
if (SSL_CTX_use_PrivateKey_file(data->ssl.ctx,
key_file,
SSL_FILETYPE_PEM) <= 0) {
failf(data, "unable to set public key file\n");
return(0);
}
ssl=SSL_new(data->ctx);
ssl=SSL_new(data->ssl.ctx);
x509=SSL_get_certificate(ssl);
if (x509 != NULL)
@@ -127,7 +127,7 @@ int SSL_cert_stuff(struct UrlData *data,
/* Now we know that a key and cert have been set against
* the SSL context */
if (!SSL_CTX_check_private_key(data->ctx)) {
if (!SSL_CTX_check_private_key(data->ssl.ctx)) {
failf(data, "Private key does not match the certificate public key\n");
return(0);
}
@@ -140,7 +140,7 @@ int SSL_cert_stuff(struct UrlData *data,
#endif
#if SSL_VERIFY_CERT
#ifdef USE_SSLEAY
int cert_verify_callback(int ok, X509_STORE_CTX *ctx)
{
X509 *err_cert;
@@ -164,7 +164,7 @@ UrgSSLConnect (struct UrlData *data)
SSL_METHOD *req_method;
/* mark this is being ssl enabled from here on out. */
data->use_ssl = 1;
data->ssl.use = TRUE;
/* Lets get nice error messages */
SSL_load_error_strings();
@@ -195,7 +195,7 @@ UrgSSLConnect (struct UrlData *data)
/* Setup all the global SSL stuff */
SSLeay_add_ssl_algorithms();
switch(data->ssl_version) {
switch(data->ssl.version) {
default:
req_method = SSLv23_client_method();
break;
@@ -207,9 +207,9 @@ UrgSSLConnect (struct UrlData *data)
break;
}
data->ctx = SSL_CTX_new(req_method);
data->ssl.ctx = SSL_CTX_new(req_method);
if(!data->ctx) {
if(!data->ssl.ctx) {
failf(data, "SSL: couldn't create a context!");
return 1;
}
@@ -221,22 +221,31 @@ UrgSSLConnect (struct UrlData *data)
}
}
#if SSL_VERIFY_CERT
SSL_CTX_set_verify(data->ctx,
SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT|
SSL_VERIFY_CLIENT_ONCE,
cert_verify_callback);
#endif
if(data->ssl.verifypeer){
SSL_CTX_set_verify(data->ssl.ctx,
SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT|
SSL_VERIFY_CLIENT_ONCE,
cert_verify_callback);
if (!SSL_CTX_load_verify_locations(data->ssl.ctx,
data->ssl.CAfile,
data->ssl.CApath)) {
failf(data,"error setting cerficate verify locations\n");
return 2;
}
}
else
SSL_CTX_set_verify(data->ssl.ctx, SSL_VERIFY_NONE, cert_verify_callback);
/* Lets make an SSL structure */
data->ssl = SSL_new (data->ctx);
SSL_set_connect_state (data->ssl);
data->ssl.handle = SSL_new (data->ssl.ctx);
SSL_set_connect_state (data->ssl.handle);
data->server_cert = 0x0;
data->ssl.server_cert = 0x0;
/* pass the raw socket into the SSL layers */
SSL_set_fd (data->ssl, data->firstsocket);
err = SSL_connect (data->ssl);
SSL_set_fd (data->ssl.handle, data->firstsocket);
err = SSL_connect (data->ssl.handle);
if (-1 == err) {
err = ERR_get_error();
@@ -244,8 +253,9 @@ UrgSSLConnect (struct UrlData *data)
return 10;
}
infof (data, "SSL connection using %s\n", SSL_get_cipher (data->ssl));
/* Informational message */
infof (data, "SSL connection using %s\n",
SSL_get_cipher(data->ssl.handle));
/* Get server's certificate (note: beware of dynamic allocation) - opt */
/* major serious hack alert -- we should check certificates
@@ -253,14 +263,15 @@ UrgSSLConnect (struct UrlData *data)
* attack
*/
data->server_cert = SSL_get_peer_certificate (data->ssl);
if(!data->server_cert) {
data->ssl.server_cert = SSL_get_peer_certificate (data->ssl.handle);
if(!data->ssl.server_cert) {
failf(data, "SSL: couldn't get peer certificate!");
return 3;
}
infof (data, "Server certificate:\n");
str = X509_NAME_oneline (X509_get_subject_name (data->server_cert), NULL, 0);
str = X509_NAME_oneline (X509_get_subject_name (data->ssl.server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-subject!");
return 4;
@@ -268,7 +279,8 @@ UrgSSLConnect (struct UrlData *data)
infof(data, "\t subject: %s\n", str);
CRYPTO_free(str);
str = X509_NAME_oneline (X509_get_issuer_name (data->server_cert), NULL, 0);
str = X509_NAME_oneline (X509_get_issuer_name (data->ssl.server_cert),
NULL, 0);
if(!str) {
failf(data, "SSL: couldn't get X509-issuer name!");
return 5;
@@ -279,11 +291,14 @@ UrgSSLConnect (struct UrlData *data)
/* We could do all sorts of certificate verification stuff here before
deallocating the certificate. */
#if SSL_VERIFY_CERT
infof(data, "Verify result: %d\n", SSL_get_verify_result(data->ssl));
#endif
if(data->ssl.verifypeer) {
data->ssl.certverifyresult=SSL_get_verify_result(data->ssl.handle);
infof(data, "Verify result: %d\n", data->ssl.certverifyresult);
}
else
data->ssl.certverifyresult=0;
X509_free(data->server_cert);
X509_free(data->ssl.server_cert);
#else /* USE_SSLEAY */
/* this is for "-ansi -Wall -pedantic" to stop complaining! (rabe) */
(void) data;
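With the SSL connect rework above, peer verification is driven by data->ssl.verifypeer plus the CA file/path, which the new CURLOPT_SSL_VERIFYPEER and CURLOPT_CAINFO options in lib/url.c below set. A hedged application-side sketch using the easy interface exported in lib/libcurl.def; the function name and CA bundle path are made up:

    #include <curl/curl.h>

    int fetch_verified(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://curl.haxx.se/");
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/etc/ssl/ca-bundle.pem"); /* hypothetical bundle */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }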


@@ -713,8 +713,8 @@ void telrcv(struct UrlData *data,
{
break; /* Ignore \0 after CR */
}
data->fwrite((char *)&c, 1, 1, data->out);
client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
continue;
case TS_DATA:
@@ -728,7 +728,7 @@ void telrcv(struct UrlData *data,
telrcv_state = TS_CR;
}
data->fwrite((char *)&c, 1, 1, data->out);
client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
continue;
case TS_IAC:
@@ -752,8 +752,8 @@ void telrcv(struct UrlData *data,
telrcv_state = TS_SB;
continue;
case IAC:
data->fwrite((char *)&c, 1, 1, data->out);
break;
client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
break;
case DM:
case NOP:
case GA:
@@ -861,8 +861,9 @@ void telwrite(struct UrlData *data,
#ifndef USE_SSLEAY
bytes_written = swrite(data->firstsocket, outbuf, out_count);
#else
if (data->use_ssl) {
bytes_written = SSL_write(data->ssl, (char *)outbuf, out_count);
if (data->ssl.use) {
bytes_written = SSL_write(data->ssl.handle, (char *)outbuf,
out_count);
}
else {
bytes_written = swrite(data->firstsocket, outbuf, out_count);
@@ -918,8 +919,8 @@ CURLcode telnet(struct connectdata *conn)
#ifndef USE_SSLEAY
nread = sread (sockfd, buf, BUFSIZE - 1);
#else
if (data->use_ssl) {
nread = SSL_read (data->ssl, buf, BUFSIZE - 1);
if (data->ssl.use) {
nread = SSL_read (data->ssl.handle, buf, BUFSIZE - 1);
}
else {
nread = sread (sockfd, buf, BUFSIZE - 1);

lib/url.c (398 lines changed)

@@ -109,7 +109,7 @@
#include "progress.h"
#include "cookie.h"
#include "strequal.h"
#include "writeout.h"
#include "escape.h"
/* And now for the protocols */
#include "ftp.h"
@@ -127,6 +127,10 @@
#ifdef KRB4
#include "security.h"
#endif
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* -- -- */
@@ -147,19 +151,19 @@ void curl_free(void)
void static urlfree(struct UrlData *data, bool totally)
{
#ifdef USE_SSLEAY
if (data->use_ssl) {
if(data->ssl) {
(void)SSL_shutdown(data->ssl);
SSL_set_connect_state(data->ssl);
if (data->ssl.use) {
if(data->ssl.handle) {
(void)SSL_shutdown(data->ssl.handle);
SSL_set_connect_state(data->ssl.handle);
SSL_free (data->ssl);
data->ssl = NULL;
SSL_free (data->ssl.handle);
data->ssl.handle = NULL;
}
if(data->ctx) {
SSL_CTX_free (data->ctx);
data->ctx = NULL;
if(data->ssl.ctx) {
SSL_CTX_free (data->ssl.ctx);
data->ssl.ctx = NULL;
}
data->use_ssl = FALSE; /* get back to ordinary socket usage */
data->ssl.use = FALSE; /* get back to ordinary socket usage */
}
#endif /* USE_SSLEAY */
@@ -183,6 +187,12 @@ void static urlfree(struct UrlData *data, bool totally)
switch off that knowledge again... */
data->bits.httpproxy=FALSE;
}
if(data->bits.rangestringalloc) {
free(data->range);
data->range=NULL;
data->bits.rangestringalloc=0; /* free now */
}
if(data->ptr_proxyuserpwd) {
free(data->ptr_proxyuserpwd);
@@ -227,6 +237,10 @@ void static urlfree(struct UrlData *data, bool totally)
if(data->free_referer)
free(data->referer);
if(data->bits.urlstringalloc)
/* the URL is allocated, free it! */
free(data->url);
cookie_cleanup(data->cookies);
free(data);
@@ -251,6 +265,17 @@ CURLcode curl_close(CURL *curl)
return CURLE_OK;
}
int my_getpass(void *clientp, char *prompt, char* buffer, int buflen )
{
char *retbuf;
retbuf = getpass_r(prompt, buffer, buflen);
if(NULL == retbuf)
return 1;
else
return 0; /* success */
}
CURLcode curl_open(CURL **curl, char *url)
{
/* We don't yet support specifying the URL at this point */
@@ -273,13 +298,6 @@ CURLcode curl_open(CURL **curl, char *url)
data-> headersize=HEADERSIZE;
#if 0
/* Let's set some default values: */
curl_setopt(data, CURLOPT_FILE, stdout); /* default output to stdout */
curl_setopt(data, CURLOPT_INFILE, stdin); /* default input from stdin */
curl_setopt(data, CURLOPT_STDERR, stderr); /* default stderr to stderr! */
#endif
data->out = stdout; /* default output to stdout */
data->in = stdin; /* default input from stdin */
data->err = stderr; /* default stderr to stderr */
@@ -293,6 +311,9 @@ CURLcode curl_open(CURL **curl, char *url)
/* use fread as default function to read input */
data->fread = (size_t (*)(char *, size_t, size_t, FILE *))fread;
/* set the default passwd function */
data->fpasswd = my_getpass;
data->infilesize = -1; /* we don't know any size */
data->current_speed = -1; /* init to negative == impossible */
@@ -337,6 +358,9 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
case CURLOPT_POST:
data->bits.http_post = va_arg(param, long)?TRUE:FALSE;
break;
case CURLOPT_FILETIME:
data->bits.get_filetime = va_arg(param, long)?TRUE:FALSE;
break;
case CURLOPT_FTPLISTONLY:
data->bits.ftp_list_only = va_arg(param, long)?TRUE:FALSE;
break;
@@ -368,7 +392,7 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
break;
case CURLOPT_SSLVERSION:
data->ssl_version = va_arg(param, long);
data->ssl.version = va_arg(param, long);
break;
case CURLOPT_COOKIEFILE:
@@ -419,9 +443,7 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
data->url = va_arg(param, char *);
break;
case CURLOPT_PORT:
/* this typecast is used to fool the compiler to NOT warn for a
"cast from pointer to integer of different size" */
data->port = (unsigned short)(va_arg(param, long));
data->port = va_arg(param, long);
break;
case CURLOPT_POSTFIELDS:
data->postfields = va_arg(param, char *);
@@ -449,6 +471,9 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
case CURLOPT_TIMEOUT:
data->timeout = va_arg(param, long);
break;
case CURLOPT_MAXREDIRS:
data->maxredirs = va_arg(param, long);
break;
case CURLOPT_USERAGENT:
data->useragent = va_arg(param, char *);
break;
@@ -466,6 +491,12 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
case CURLOPT_PROGRESSDATA:
data->progress_client = va_arg(param, void *);
break;
case CURLOPT_PASSWDFUNCTION:
data->fpasswd = va_arg(param, curl_passwd_callback);
break;
case CURLOPT_PASSWDDATA:
data->passwd_client = va_arg(param, void *);
break;
case CURLOPT_PROXYUSERPWD:
data->proxyuserpwd = va_arg(param, char *);
data->bits.proxy_user_passwd = data->proxyuserpwd?1:0;
@@ -483,9 +514,6 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
case CURLOPT_WRITEFUNCTION:
data->fwrite = va_arg(param, curl_write_callback);
break;
case CURLOPT_WRITEINFO:
data->writeinfo = va_arg(param, char *);
break;
case CURLOPT_READFUNCTION:
data->fread = va_arg(param, curl_read_callback);
break;
@@ -508,6 +536,13 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
data->krb4_level = va_arg(param, char *);
data->bits.krb4=data->krb4_level?TRUE:FALSE;
break;
case CURLOPT_SSL_VERIFYPEER:
data->ssl.verifypeer = va_arg(param, long);
break;
case CURLOPT_CAINFO:
data->ssl.CAfile = va_arg(param, char *);
data->ssl.CApath = NULL; /*This does not work on windows.*/
break;
default:
/* unknown tag and its companion, just ignore: */
return CURLE_READ_ERROR; /* correct this */
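The CURLOPT_PASSWDFUNCTION/CURLOPT_PASSWDDATA pair added here replaces the hard-wired getpass() calls further down; the callback shape follows my_getpass() above and must return 0 on success, anything else surfacing as CURLE_BAD_PASSWORD_ENTERED. A hedged sketch; app_getpass and its prompt handling are made up, only the signature and return convention come from this diff:

    #include <stdio.h>
    #include <string.h>

    static int app_getpass(void *clientp, char *prompt, char *buffer, int buflen)
    {
      (void)clientp;                       /* whatever was set with CURLOPT_PASSWDDATA */
      fprintf(stderr, "%s ", prompt);
      if(!fgets(buffer, buflen, stdin))
        return 1;                          /* failure */
      buffer[strcspn(buffer, "\r\n")] = 0; /* strip the newline */
      return 0;                            /* success */
    }

    /* wired up together with the new redirect cap, roughly: */
    /*   curl_easy_setopt(curl, CURLOPT_PASSWDFUNCTION, app_getpass); */
    /*   curl_easy_setopt(curl, CURLOPT_PASSWDDATA, NULL);            */
    /*   curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5L);               */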
@@ -532,8 +567,8 @@ int GetLine(int sockfd, char *buf, struct UrlData *data)
(nread<BUFSIZE) && read_rc;
nread++, ptr++) {
#ifdef USE_SSLEAY
if (data->use_ssl) {
read_rc = SSL_read(data->ssl, ptr, 1);
if (data->ssl.use) {
read_rc = SSL_read(data->ssl.handle, ptr, 1);
}
else {
#endif
@@ -579,8 +614,8 @@ CURLcode curl_write(CURLconnect *c_conn, char *buf, size_t amount,
data = conn->data;
#ifdef USE_SSLEAY
if (data->use_ssl) {
bytes_written = SSL_write(data->ssl, buf, amount);
if (data->ssl.use) {
bytes_written = SSL_write(data->ssl.handle, buf, amount);
}
else {
#endif
@@ -610,8 +645,8 @@ CURLcode curl_read(CURLconnect *c_conn, char *buf, size_t buffersize,
data = conn->data;
#ifdef USE_SSLEAY
if (data->use_ssl) {
nread = SSL_read (data->ssl, buf, buffersize);
if (data->ssl.use) {
nread = SSL_read (data->ssl.handle, buf, buffersize);
}
else {
#endif
@@ -637,6 +672,9 @@ CURLcode curl_disconnect(CURLconnect *c_connect)
if(conn->hostent_buf) /* host name info */
free(conn->hostent_buf);
if(conn->path) /* the URL path part */
free(conn->path);
free(conn); /* free the connection oriented data */
/* clean up the sockets and SSL stuff from the previous "round" */
@@ -645,33 +683,18 @@ CURLcode curl_disconnect(CURLconnect *c_connect)
return CURLE_OK;
}
/*
* NAME curl_connect()
*
* DESCRIPTION
*
* Connects to the peer server and performs the initial setup. This function
* writes a connect handle to its second argument that is a unique handle for
* this connect. This allows multiple connects from the same handle returned
* by curl_open().
*
* EXAMPLE
*
* CURLCode result;
* CURL curl;
* CURLconnect connect;
* result = curl_connect(curl, &connect);
*/
CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
{
char *tmp;
char *buf;
CURLcode result;
char resumerange[12]="";
char resumerange[40]="";
struct UrlData *data = curl;
struct connectdata *conn;
#ifdef HAVE_SIGACTION
struct sigaction sigact;
#endif
int urllen;
if(!data || (data->handle != STRUCT_OPEN))
return CURLE_BAD_FUNCTION_ARGUMENT; /* TBD: make error codes */
@@ -692,19 +715,43 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
conn->data = data; /* remember our daddy */
conn->state = CONN_INIT;
conn->upload_bufsize = UPLOAD_BUFSIZE; /* the smallest upload buffer size
we use */
buf = data->buffer; /* this is our buffer */
#if 0
#ifdef HAVE_SIGACTION
sigaction(SIGALRM, NULL, &sigact);
sigact.sa_handler = alarmfunc;
sigact.sa_flags &= ~SA_RESTART;
sigaction(SIGALRM, &sigact, NULL);
#else
/* no sigaction(), revert to the much lamer signal() */
#ifdef HAVE_SIGNAL
signal(SIGALRM, alarmfunc);
#endif
#endif
/* We need to allocate memory to store the path in. We get the size of the
full URL to be sure, and we need to make it at least 256 bytes since
other parts of the code will rely on this fact */
#define LEAST_PATH_ALLOC 256
urllen=strlen(data->url);
if(urllen < LEAST_PATH_ALLOC)
urllen=LEAST_PATH_ALLOC;
conn->path=(char *)malloc(urllen);
if(NULL == conn->path)
return CURLE_OUT_OF_MEMORY; /* really bad error */
/* Parse <url> */
/* We need to parse the url, even when using the proxy, because
* we will need the hostname and port in case we are trying
* to SSL connect through the proxy -- and we don't know if we
* will need to use SSL until we parse the url ...
*/
if((2 == sscanf(data->url, "%64[^:]://%" URL_MAX_LENGTH_TXT "[^\n]",
if((2 == sscanf(data->url, "%64[^:]://%[^\n]",
conn->proto,
conn->path)) && strequal(conn->proto, "file")) {
/* we deal with file://<host>/<path> differently since it
@@ -724,11 +771,11 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
strcpy(conn->path, "/");
if (2 > sscanf(data->url,
"%64[^\n:]://%256[^\n/]%" URL_MAX_LENGTH_TXT "[^\n]",
"%64[^\n:]://%256[^\n/]%[^\n]",
conn->proto, conn->gname, conn->path)) {
/* badly formatted, let's try the browser-style _without_ 'http://' */
if((1 > sscanf(data->url, "%256[^\n/]%" URL_MAX_LENGTH_TXT "[^\n]",
if((1 > sscanf(data->url, "%256[^\n/]%[^\n]",
conn->gname, conn->path)) ) {
failf(data, "<url> malformed");
return CURLE_URL_MALFORMAT;
@@ -763,16 +810,19 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
if(*data->userpwd != ':') {
/* the name is given, get user+password */
sscanf(data->userpwd, "%127[^:]:%127[^@]",
sscanf(data->userpwd, "%127[^:]:%127[^\n]",
data->user, data->passwd);
}
else
/* no name given, get the password only */
sscanf(data->userpwd+1, "%127[^@]", data->passwd);
sscanf(data->userpwd+1, "%127[^\n]", data->passwd);
/* check for password, if no ask for one */
if( !data->passwd[0] ) {
strncpy(data->passwd, getpass("password: "), sizeof(data->passwd));
if(!data->fpasswd ||
data->fpasswd(data->passwd_client,
"password:", data->passwd, sizeof(data->passwd)))
return CURLE_BAD_PASSWORD_ENTERED;
}
}
@@ -782,16 +832,21 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
if(*data->proxyuserpwd != ':') {
/* the name is given, get user+password */
sscanf(data->proxyuserpwd, "%127[^:]:%127[^@]",
sscanf(data->proxyuserpwd, "%127[^:]:%127[^\n]",
data->proxyuser, data->proxypasswd);
}
else
/* no name given, get the password only */
sscanf(data->proxyuserpwd+1, "%127[^@]", data->proxypasswd);
sscanf(data->proxyuserpwd+1, "%127[^\n]", data->proxypasswd);
/* check for password, if no ask for one */
if( !data->proxypasswd[0] ) {
strncpy(data->proxypasswd, getpass("proxy password: "), sizeof(data->proxypasswd));
if(!data->fpasswd ||
data->fpasswd( data->passwd_client,
"proxy password:",
data->proxypasswd,
sizeof(data->proxypasswd)))
return CURLE_BAD_PASSWORD_ENTERED;
}
}
@@ -805,20 +860,29 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
/* If proxy was not specified, we check for default proxy environment
variables, to enable i.e Lynx compliance:
HTTP_PROXY http://some.server.dom:port/
HTTPS_PROXY http://some.server.dom:port/
FTP_PROXY http://some.server.dom:port/
GOPHER_PROXY http://some.server.dom:port/
NO_PROXY host.domain.dom (a comma-separated list of hosts which should
not be proxied, or an asterisk to override all proxy variables)
ALL_PROXY seems to exist for the CERN www lib. Probably the first to
check for.
http_proxy=http://some.server.dom:port/
https_proxy=http://some.server.dom:port/
ftp_proxy=http://some.server.dom:port/
gopher_proxy=http://some.server.dom:port/
no_proxy=domain1.dom,host.domain2.dom
(a comma-separated list of hosts which should
not be proxied, or an asterisk to override
all proxy variables)
all_proxy=http://some.server.dom:port/
(seems to exist for the CERN www lib. Probably
the first to check for.)
For compatibility, the all-uppercase versions of these variables are
checked if the lowercase versions don't exist.
*/
char *no_proxy=GetEnv("NO_PROXY");
char *no_proxy=NULL;
char *proxy=NULL;
char proxy_env[128];
no_proxy=GetEnv("no_proxy");
if(!no_proxy)
no_proxy=GetEnv("NO_PROXY");
if(!no_proxy || !strequal("*", no_proxy)) {
/* NO_PROXY wasn't specified or it wasn't just an asterisk */
char *nope;
@@ -841,23 +905,31 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
char *envp = proxy_env;
char *prox;
/* Now, build <PROTOCOL>_PROXY and check for such a one to use */
while(*protop) {
*envp++ = toupper(*protop++);
}
/* append _PROXY */
strcpy(envp, "_PROXY");
#if 0
infof(data, "DEBUG: checks the environment variable %s\n", proxy_env);
#endif
/* Now, build <protocol>_proxy and check for such a one to use */
while(*protop)
*envp++ = tolower(*protop++);
/* append _proxy */
strcpy(envp, "_proxy");
/* read the protocol proxy: */
prox=GetEnv(proxy_env);
if(!prox) {
/* There was no lowercase variable, try the uppercase version: */
for(envp = proxy_env; *envp; envp++)
*envp = toupper(*envp);
prox=GetEnv(proxy_env);
}
if(prox && *prox) { /* don't count "" strings */
proxy = prox; /* use this */
}
else
proxy = GetEnv("ALL_PROXY"); /* default proxy to use */
}
else {
proxy = GetEnv("all_proxy"); /* default proxy to use */
if(!proxy)
proxy=GetEnv("ALL_PROXY");
}
if(proxy && *proxy) {
/* we have a proxy here to set */
@@ -865,7 +937,7 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
data->bits.proxystringalloc=1; /* this needs to be freed later */
data->bits.httpproxy=1;
}
} /* if (!nope) - it wasn't specfied non-proxy */
} /* if (!nope) - it wasn't specified non-proxy */
} /* NO_PROXY wasn't specified or '*' */
if(no_proxy)
free(no_proxy);
@@ -900,8 +972,9 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
if(data->resume_from) {
if(!data->bits.set_range) {
/* if it already was in use, we just skip this */
sprintf(resumerange, "%d-", data->resume_from);
data->range=resumerange; /* tell ourselves to fetch this range */
snprintf(resumerange, sizeof(resumerange), "%d-", data->resume_from);
data->range=strdup(resumerange); /* tell ourselves to fetch this range */
data->bits.rangestringalloc = TRUE; /* mark as allocated */
data->bits.set_range = 1; /* switch on range usage */
}
}
@@ -943,7 +1016,7 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
conn->curl_close = http_close;
#else /* USE_SSLEAY */
failf(data, "SSL is disabled, https: not supported!");
failf(data, "libcurl was built with SSL disabled, https: not supported!");
return CURLE_UNSUPPORTED_PROTOCOL;
#endif /* !USE_SSLEAY */
}
@@ -1076,13 +1149,7 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
user+password pair in a string like:
ftp://user:password@ftp.my.site:8021/README */
char *ptr=NULL; /* assign to remove possible warnings */
#if 0
if(':' == *conn->name) {
failf(data, "URL malformat: user can't be zero length");
return CURLE_URL_MALFORMAT_USER;
}
#endif
if(ptr=strchr(conn->name, '@')) {
if((ptr=strchr(conn->name, '@'))) {
/* there's a user+password given here, to the left of the @ */
data->user[0] =0;
@@ -1090,16 +1157,37 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
if(*conn->name != ':') {
/* the name is given, get user+password */
sscanf(conn->name, "%127[^:]:%127[^@]",
sscanf(conn->name, "%127[^:@]:%127[^@]",
data->user, data->passwd);
}
else
/* no name given, get the password only */
sscanf(conn->name+1, "%127[^@]", data->passwd);
if(data->user[0]) {
char *newname=curl_unescape(data->user, 0);
if(strlen(newname) < sizeof(data->user)) {
strcpy(data->user, newname);
}
/* if the new name is longer than accepted, then just use
the unconverted name, it'll be wrong but what the heck */
free(newname);
}
/* check for password, if no ask for one */
if( !data->passwd[0] ) {
strncpy(data->passwd, getpass("password: "), sizeof(data->passwd));
if(!data->fpasswd ||
data->fpasswd(data->passwd_client,
"password:",data->passwd,sizeof(data->passwd)))
return CURLE_BAD_PASSWORD_ENTERED;
}
else {
/* we have a password found in the URL, decode it! */
char *newpasswd=curl_unescape(data->passwd, 0);
if(strlen(newpasswd) < sizeof(data->passwd)) {
strcpy(data->passwd, newpasswd);
}
free(newpasswd);
}
conn->name = ++ptr;
@@ -1119,11 +1207,12 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
*tmp++ = '\0';
data->port = atoi(tmp);
}
data->remote_port = data->port; /* it is the same port */
/* Connect to target host right on */
conn->hp = GetHost(data, conn->name, &conn->hostent_buf);
if(!conn->hp) {
failf(data, "Couldn't resolv host '%s'", conn->name);
failf(data, "Couldn't resolve host '%s'", conn->name);
return CURLE_COULDNT_RESOLVE_HOST;
}
}
@@ -1179,7 +1268,7 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
/* connect to proxy */
conn->hp = GetHost(data, proxyptr, &conn->hostent_buf);
if(!conn->hp) {
failf(data, "Couldn't resolv proxy '%s'", proxyptr);
failf(data, "Couldn't resolve proxy '%s'", proxyptr);
return CURLE_COULDNT_RESOLVE_PROXY;
}
@@ -1195,9 +1284,14 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
conn->serv_addr.sin_family = conn->hp->h_addrtype;
conn->serv_addr.sin_port = htons(data->port);
/* sck 8/31/2000 add support for specifing device to bind socket to */
/* #ifdef LINUX */
/* I am using this, but it may not work everywhere, only tested on RedHat 6.2 */
#ifndef WIN32
/* We don't generally like checking for OS-versions, we should make this
HAVE_XXXX based, although at the moment I don't have a decent test for
this! */
/* sck 8/31/2000 add support for specifing device to bind socket to */
/* I am using this, but it may not work everywhere, only tested on
RedHat 6.2 */
#ifdef HAVE_INET_NTOA
#ifndef INADDR_NONE
@@ -1205,12 +1299,10 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
#endif
if (data->device && (strlen(data->device)<255)) {
struct ifreq ifr;
struct sockaddr_in sa;
struct hostent *h=NULL;
char *hostdataptr;
char *hostdataptr=NULL;
size_t size;
unsigned short porttouse;
char myhost[256] = "";
unsigned long in;
@@ -1286,23 +1378,6 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
case ENOMEM:
failf(data, "Insufficient kernel memory was available: %d", errno);
break;
#if 0
case EROFS:
failf(data,
"Socket inode would reside on a read-only file system: %d",
errno);
break;
case ENOENT:
failf(data, "File does not exist: %d", errno);
break;
case ENOTDIR:
failf(data, "Component of path prefix is not a directory: %d",
errno);
break;
case ELOOP:
failf(data,"Too many symbolic links encountered: %d",errno);
break;
#endif
default:
failf(data,"errno %d\n");
} /* end of switch */
@@ -1322,10 +1397,12 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
return CURLE_HTTP_PORT_FAILED;
}
free(hostdataptr); /* allocated by GetHost() */
if(hostdataptr)
free(hostdataptr); /* allocated by GetHost() */
} /* end of device selection support */
#endif /* end of HAVE_INET_NTOA */
#endif /* end of not WIN32 */
if (connect(data->firstsocket,
(struct sockaddr *) &(conn->serv_addr),
@@ -1366,22 +1443,9 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
failf(data, "Attempt to connect to broadcast address without socket broadcast flag or local firewall rule violated: %d",errno);
break;
#endif
#ifdef EINTR
case EINTR:
failf(data, "Connection timeouted");
break;
#endif
#if 0
case EAFNOSUPPORT:
failf(data, "Incorrect address family: %d",errno);
break;
case ENOTSOCK:
failf(data, "File descriptor is not a socket: %d",errno);
break;
case EBADF:
failf(data, "File descriptor is not a valid index in descriptor table: %d",errno);
break;
#endif
default:
failf(data, "Can't connect to server: %d", errno);
break;
@@ -1391,7 +1455,8 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
if(data->bits.proxy_user_passwd) {
char *authorization;
sprintf(data->buffer, "%s:%s", data->proxyuser, data->proxypasswd);
snprintf(data->buffer, BUFSIZE, "%s:%s",
data->proxyuser, data->proxypasswd);
if(base64_encode(data->buffer, strlen(data->buffer),
&authorization) >= 0) {
data->ptr_proxyuserpwd =
@@ -1427,10 +1492,6 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
infof(data, "Connected to %s (%s)\n", conn->hp->h_name, inet_ntoa(in));
}
#if 0 /* Kerberos experiements! Beware! Take cover! */
kerberos_connect(data, name);
#endif
#ifdef __EMX__
/* 20000330 mgs
* the check is quite a hack...
@@ -1446,6 +1507,52 @@ CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
return CURLE_OK;
}
CURLcode curl_connect(CURL *curl, CURLconnect **in_connect)
{
CURLcode code;
struct connectdata *conn;
/* call the stuff that needs to be called */
code = _connect(curl, in_connect);
if(CURLE_OK != code) {
/* We're not allowed to return failure with memory left allocated
in the connectdata struct, free those here */
conn = (struct connectdata *)*in_connect;
if(conn) {
if(conn->path)
free(conn->path);
if(conn->hostent_buf)
free(conn->hostent_buf);
free(conn);
*in_connect=NULL;
}
}
return code;
}
/*
* NAME curl_connect()
*
* DESCRIPTION
*
* Connects to the peer server and performs the initial setup. This function
* writes a connect handle to its second argument that is a unique handle for
* this connect. This allows multiple connects from the same handle returned
* by curl_open().
*
* EXAMPLE
*
* CURLCode result;
* CURL curl;
* CURLconnect connect;
* result = curl_connect(curl, &connect);
*/
CURLcode curl_done(CURLconnect *c_connect)
{
struct connectdata *conn = c_connect;
@@ -1497,13 +1604,6 @@ CURLcode curl_do(CURLconnect *in_conn)
conn->state = CONN_DO; /* we have entered this state */
#if 0
if(conn->bytecount) {
double ittook = tvdiff (tvnow(), conn->now);
infof(data, "%i bytes transfered in %.3lf seconds (%.0lf bytes/sec).\n",
conn->bytecount, ittook, (double)conn->bytecount/(ittook!=0.0?ittook:1));
}
#endif
return CURLE_OK;
}


@@ -99,6 +99,11 @@
/* Download buffer size, keep it fairly big for speed reasons */
#define BUFSIZE (1024*50)
/* Upload buffer size, keep it smallish to get faster progress meter
updates. This should probably become dynamic and adjust to the upload
speed. */
#define UPLOAD_BUFSIZE (1024*2)
/* Initial size of the buffer to store headers in, it'll be enlarged in case
of need. */
#define HEADERSIZE 256
@@ -170,11 +175,14 @@ struct connectdata {
char proto[64];
char gname[256];
char *name;
char path[URL_MAX_LENGTH];
char *path; /* formerly staticly this size: URL_MAX_LENGTH */
char *ppath;
long bytecount;
struct timeval now;
long upload_bufsize; /* adjust as you see fit, never bigger than BUFSIZE
never smaller than UPLOAD_BUFSIZE */
/* These two functions MUST be set by the curl_connect() function to be
be protocol dependent */
CURLcode (*curl_do)(struct connectdata *connect);
@@ -203,7 +211,6 @@ struct connectdata {
the same we read from. -1 disables */
long *writebytecountp; /* return number of bytes written or NULL */
#ifdef KRB4
enum protection_level command_prot;
@@ -237,11 +244,14 @@ struct Progress {
double ulspeed;
struct timeval start;
struct timeval t_startsingle;
/* various data stored for possible later report */
struct timeval t_nslookup;
struct timeval t_connect;
struct timeval t_pretransfer;
double t_nslookup;
double t_connect;
double t_pretransfer;
int httpcode;
time_t filetime; /* If requested, this might get set. It may be 0 if
the time was unretrievable */
#define CURR_TIME 5
@@ -279,6 +289,7 @@ struct FTP {
};
struct Configbits {
bool get_filetime;
bool tunnel_thru_httpproxy;
bool ftp_append;
bool ftp_ascii;
@@ -297,7 +308,6 @@ struct Configbits {
bool mute;
bool no_body;
bool proxy_user_passwd;
bool proxystringalloc; /* the http proxy string is malloc()'ed */
bool set_port;
bool set_range;
bool upload;
@@ -306,6 +316,10 @@ struct Configbits {
bool verbose;
bool this_is_a_follow; /* this is a followed Location: request */
bool krb4; /* kerberos4 connection requested */
bool proxystringalloc; /* the http proxy string is malloc()'ed */
bool rangestringalloc; /* the range string is malloc()'ed */
bool urlstringalloc; /* the URL string is malloc()'ed */
};
/* What type of interface that intiated this struct */
@@ -316,6 +330,21 @@ typedef enum {
CURLI_LAST
} CurlInterface;
struct ssldata {
bool use; /* use ssl encrypted communications TRUE/FALSE */
long version; /* what version the client wants to use */
long certverifyresult; /* result from the certificate verification */
long verifypeer; /* set TRUE if this is desired */
char *CApath; /* DOES NOT WORK ON WINDOWS */
char *CAfile; /* certificate to verify peer against */
#ifdef USE_SSLEAY
/* these ones requires specific SSL-types */
SSL_CTX* ctx;
SSL* handle;
X509* server_cert;
#endif /* USE_SSLEAY */
};
/*
* As of April 11, 2000 we're now trying to split up the urldata struct in
* three different parts:
@@ -349,6 +378,10 @@ struct UrlData {
proxy string features a ":[port]" that one will override
this. */
long header_size; /* size of read header(s) in bytes */
long request_size; /* the amount of bytes sent in the request(s) */
/*************** Request - specific items ************/
union {
@@ -371,8 +404,8 @@ struct UrlData {
char *url; /* what to get */
char *freethis; /* if non-NULL, an allocated string for the URL */
char *hostname; /* hostname to connect, as parsed from url */
unsigned short port; /* which port to use (if non-protocol bind) set
CONF_PORT to use this */
long port; /* which port to use (if non-protocol bind) set
CONF_PORT to use this */
unsigned short remote_port; /* what remote port to connect to, not the proxy
port! */
struct Configbits bits; /* new-style (v7) flag data */
@@ -380,17 +413,24 @@ struct UrlData {
char *userpwd; /* <user:password>, if used */
char *range; /* range, if used. See README for detailed specification on
this syntax. */
/* stuff related to HTTP */
long followlocation;
long maxredirs; /* maximum no. of http(s) redirects to follow */
char *referer;
bool free_referer; /* set TRUE if 'referer' points to a string we
allocated */
char *useragent; /* User-Agent string */
char *postfields; /* if POST, set the fields' values here */
long postfieldsize; /* if POST, this might have a size to use instead of
strlen(), and then the data *may* be binary (contain
zero bytes) */
bool free_referer; /* set TRUE if 'referer' points to a string we
allocated */
char *referer;
char *useragent; /* User-Agent string */
/* stuff related to FTP */
char *ftpport; /* port to send with the PORT command */
/* general things */
char *device; /* Interface to use */
/* function that stores the output:*/
@@ -403,6 +443,10 @@ struct UrlData {
curl_progress_callback fprogress;
void *progress_client; /* pointer to pass to the progress callback */
/* function to call instead of the internal for password */
curl_passwd_callback fpasswd;
void *passwd_client; /* pointer to pass to the passwd callback */
long timeout; /* in seconds, 0 means no timeout */
long infilesize; /* size of file to upload, -1 means unknown */
@@ -424,8 +468,6 @@ struct UrlData {
char *cookie; /* HTTP cookie string to send */
short use_ssl; /* use ssl encrypted communications */
char *newurl; /* This can only be set if a Location: was in the
document headers */
@@ -437,12 +479,8 @@ struct UrlData {
struct CookieInfo *cookies;
long ssl_version; /* what version the client wants to use */
#ifdef USE_SSLEAY
SSL_CTX* ctx;
SSL* ssl;
X509* server_cert;
#endif /* USE_SSLEAY */
struct ssldata ssl; /* this is for ssl-stuff */
long crlf;
struct curl_slist *quote; /* before the transfer */
struct curl_slist *postquote; /* after the transfer */
@@ -455,8 +493,11 @@ struct UrlData {
char *headerbuff; /* allocated buffer to store headers in */
int headersize; /* size of the allocation */
#if 0
/* this was removed in libcurl 7.4 */
char *writeinfo; /* if non-NULL describes what to output on a successful
completion */
#endif
struct Progress progress;
@@ -486,6 +527,8 @@ struct UrlData {
#ifdef KRB4
FILE *cmdchannel;
#endif
struct timeval keeps_speed; /* this should be request-specific */
};
#define LIBCURL_NAME "libcurl"

maketgz (38 lines changed)

@@ -62,25 +62,25 @@ findprog()
# brand new version number:
#
if { findprog autoconf >/dev/null 2>/dev/null; } then
echo "- No autoconf found, we leave configure as it is"
else
# Replace version number in configure.in file:
CONF="configure.in"
sed 's/^AM_INIT_AUTOMAKE.*/AM_INIT_AUTOMAKE(curl,"'$version'")/g' $CONF >$CONF.new
# Save old file
cp -p $CONF $CONF.old
# Make new configure.in
mv $CONF.new $CONF
# Update the configure script
echo "Runs autoconf"
autoconf
fi
#if { findprog autoconf >/dev/null 2>/dev/null; } then
# echo "- No autoconf found, we leave configure as it is"
#else
# # Replace version number in configure.in file:
#
# CONF="configure.in"
#
# sed 's/^AM_INIT_AUTOMAKE.*/AM_INIT_AUTOMAKE(curl,"'$version'")/g' $CONF >$CONF.new
#
# # Save old file
# cp -p $CONF $CONF.old
#
# # Make new configure.in
# mv $CONF.new $CONF
#
# # Update the configure script
# echo "Runs autoconf"
# autoconf
#fi
############################################################################
#

memanalyze.pl (new executable file, 95 lines)

@@ -0,0 +1,95 @@
#!/usr/bin/perl
#
# Example input:
#
# MEM mprintf.c:1094 malloc(32) = e5718
# MEM mprintf.c:1103 realloc(e5718, 64) = e6118
# MEM sendf.c:232 free(f6520)
do {
if($ARGV[0] eq "-v") {
$verbose=1;
}
} while (shift @ARGV);
while(<STDIN>) {
chomp $_;
$line = $_;
if($verbose) {
print "IN: $line\n";
}
if($line =~ /^MEM ([^:]*):(\d*) (.*)/) {
# generic match for the filename+linenumber
$source = $1;
$linenum = $2;
$function = $3;
if($function =~ /free\(0x([0-9a-f]*)/) {
$addr = $1;
if($sizeataddr{$addr} <= 0) {
print "FREE ERROR: No memory allocated: $line\n";
}
else {
$totalmem -= $sizeataddr{$addr};
$sizeataddr{$addr}=0;
$getmem{$addr}=""; # forget after a good free()
}
}
elsif($function =~ /malloc\((\d*)\) = 0x([0-9a-f]*)/) {
$size = $1;
$addr = $2;
$sizeataddr{$addr}=$size;
$totalmem += $size;
$getmem{$addr}="$source:$linenum";
}
elsif($function =~ /realloc\(0x([0-9a-f]*), (\d*)\) = 0x([0-9a-f]*)/) {
$oldaddr = $1;
$newsize = $2;
$newaddr = $3;
$totalmem -= $sizeataddr{$oldaddr};
$sizeataddr{$oldaddr}=0;
$totalmem += $newsize;
$sizeataddr{$newaddr}=$newsize;
$getmem{$oldaddr}="";
$getmem{$newaddr}="$source:$linenum";
}
elsif($function =~ /strdup\(0x([0-9a-f]*)\) \((\d*)\) = 0x([0-9a-f]*)/) {
# strdup(a5b50) (8) = df7c0
$dup = $1;
$size = $2;
$addr = $3;
$getmem{$addr}="$source:$linenum";
$sizeataddr{$addr}=$size;
$totalmem += $size;
}
else {
print "Not recognized input line: $function\n";
}
}
else {
print "Not recognized prefix line: $line\n";
}
if($verbose) {
print "TOTAL: $totalmem\n";
}
}
if($totalmem) {
print "Leak detected: memory still allocated: $totalmem bytes\n";
for(keys %sizeataddr) {
$addr = $_;
$size = $sizeataddr{$addr};
if($size) {
print "At $addr, there's $size bytes.\n";
print " allocated by ".$getmem{$addr}."\n";
}
}
}


@@ -0,0 +1,5 @@
Author: Daniel (I'm not trustworthy, replace this!)
Paul Marquis's 'make_curl_rpm' script is a fine example of how to automate the
jobs. You need to fill in your own name and email at least.


@@ -1,26 +1,26 @@
%define ver @VERSION@
%define ver 7.4.2
%define rel 1
%define prefix /usr
Summary: get a file from a FTP, GOPHER or HTTP server.
Name: @PACKAGE@-ssl
Name: curl-ssl
Version: %ver
Release: %rel
Copyright: MPL
Group: Utilities/Console
Source: @PACKAGE@-%{version}.tar.gz
URL: http://@PACKAGE@.haxx.se
Source: curl-%{version}.tar.gz
URL: http://curl.haxx.se
BuildPrereq: openssl
BuildRoot: /tmp/%{name}-%{version}-%{rel}-root
Packager: Fill In As You Wish
Docdir: %{prefix}/doc
%description
@PACKAGE@-ssl is a client to get documents/files from servers, using
curl-ssl is a client to get documents/files from servers, using
any of the supported protocols. The command is designed to
work without user interaction or any kind of interactivity.
@PACKAGE@-ssl offers a busload of useful tricks like proxy support,
curl-ssl offers a busload of useful tricks like proxy support,
user authentication, ftp upload, HTTP post, file transfer
resume and more.
@@ -31,7 +31,7 @@ Authors:
%prep
%setup -n @PACKAGE@-@VERSION@
%setup -n curl-7.4.2
%build
@@ -74,7 +74,7 @@ find ${RPM_BUILD_ROOT}%{prefix} -type f | sed -e "s#^${RPM_BUILD_ROOT}##g" >> fi
%clean
(cd ..; rm -rf @PACKAGE@-@VERSION@ ${RPM_BUILD_ROOT})
(cd ..; rm -rf curl-7.4.2 ${RPM_BUILD_ROOT})
%files -f file-lists
@@ -90,7 +90,7 @@ find ${RPM_BUILD_ROOT}%{prefix} -type f | sed -e "s#^${RPM_BUILD_ROOT}##g" >> fi
%doc MPL-1.0.txt
%doc README
%doc README.curl
%doc README.lib@PACKAGE@
%doc README.libcurl
%doc RESOURCES
%doc TODO
%doc %{name}-ssl.spec.in


@@ -1,25 +1,25 @@
%define ver @VERSION@
%define ver 7.4.2
%define rel 1
%define prefix /usr
Summary: get a file from a FTP, GOPHER or HTTP server.
Name: @PACKAGE@
Name: curl
Version: %ver
Release: %rel
Copyright: MPL
Group: Utilities/Console
Source: %{name}-%{version}.tar.gz
URL: http://@PACKAGE@.haxx.se
URL: http://curl.haxx.se
BuildRoot: /tmp/%{name}-%{version}-%{rel}-root
Packager: Fill In As You Wish
Docdir: %{prefix}/doc
%description
@PACKAGE@ is a client to get documents/files from servers, using
curl is a client to get documents/files from servers, using
any of the supported protocols. The command is designed to
work without user interaction or any kind of interactivity.
@PACKAGE@ offers a busload of useful tricks like proxy support,
curl offers a busload of useful tricks like proxy support,
user authentication, ftp upload, HTTP post, file transfer
resume and more.
@@ -88,7 +88,7 @@ find ${RPM_BUILD_ROOT}%{prefix} -type f | sed -e "s#^${RPM_BUILD_ROOT}##g" >> fi
%doc MPL-1.0.txt
%doc README
%doc README.curl
%doc README.lib@PACKAGE@
%doc README.libcurl
%doc RESOURCES
%doc TODO
%doc %{name}-ssl.spec.in


@@ -0,0 +1,62 @@
#! /bin/sh
# script to build curl RPM from src RPM (SSL and non-SSL versions)
# initialize
top_dir=/usr/src/redhat
sources_dir=$top_dir/SOURCES
specs_dir=$top_dir/SPECS
rpms_dir=$top_dir/RPMS
arch=`rpm --showrc | awk 'NF == 3 && $2 == "_arch" { print $3 }'`
# fill in your own name and email here
packager_name="Mr Joe Packager Person"
packager_email='<Joe@packager.person>'
# make sure we're running as root
if test `id -u` -ne `id -u root`
then
echo "you must build the RPM as root"
exit 1
fi
# get version and release number
if test $# -lt 1
then
echo "version number?"
read version
else
version=$1
fi
if test $# -lt 2
then
echo "release number?"
read release
else
release=$2
fi
# build all the files
targets="curl curl-ssl"
for target in $targets
do
# make sure the src RPM exists
src_rpm="$target-$version-$release.src.rpm"
if test -f $src_rpm
then
rpm -ivh $src_rpm
# replace packager in spec file
sed -e "s/^Packager: .*/Packager: $packager_name $packager_email/" $specs_dir/$target.spec > $specs_dir/$target-$version-$arch.spec
# build it
if ! rpm -ba $specs_dir/$target-$version-$arch.spec
then
echo "error building $target for $arch -- check output above"
fi
echo "$target rpm is now in $rpms_dir/$arch"
else
echo $src_rpm does not exist
fi
done
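If the script above were saved as, say, build-curl-rpms.sh (the file name is an assumption; the diff does not name the file), building version 7.5, release 1 of both packages would look like this:

  # run as root, with curl-7.5-1.src.rpm and curl-ssl-7.5-1.src.rpm
  # available in the current directory
  sh build-curl-rpms.sh 7.5 1

The finished binary RPMs then end up under /usr/src/redhat/RPMS/<arch>, per the top_dir and rpms_dir settings at the top of the script.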

27
packages/README Normal file
View File

@@ -0,0 +1,27 @@
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
PACKAGES
This directory and all its subdirectories are for special package
information, templates, scripts and docs. The files herein should be of use for
those of you who want to package curl in a binary or source format using one
of those custom formats.
The hierarchy for these directories is something like this:
packages/[OS]/[FORMAT]/
Currently, we have Win32 and Linux for [OS]. There might be different formats
for the same OS, so for Linux we have RPM as the format.
We might need to add some differentiation for CPU as well, as there are
Linux RPMs for several CPUs. However, it might not be necessary, since the
packaging should be pretty much the same no matter what CPU is used.
For each unique OS-FORMAT pair, there's a directory to "fill"! I'd like to
see a single README with as much detail as possible, and then I'd like some
template files for the package process.

50
packages/Win32/README Normal file
View File

@@ -0,0 +1,50 @@
Author: Jörn Hartroth
DESCRIPTION
Packaging of the curl binaries for Win32 should at this point in time be based
on the InfoZip (zip/unzip) archiver family as the de-facto standard for
Windows archives. A package should contain the main binary curl.exe along with
the appropriate documentation and license information files. For development
releases, you should also include the header directory and probably the
compiled binaries of libcurl and the appropriate Makefiles/project definition
files for the compiler used.
A simple packaging mechanism can be based on a set of batch files which call
zip.exe with the appropriate files from the curl distribution - see the
samples included below (Long lines have been split with "\" as the split
marker, you'll want to rejoin the pieces to be all on one line in the batch
file). Call any of these batch files - after compiling the curl binaries -
with a single parameter specifying the name of the archive file to be created.
It is implicitly assumed that all of the binary files (curl.exe, libcurl.a,
etc) have previously been copied to the main directory of the curl source
package (the directory where the main README resides), because that is where
they should end up in the zip archive. The archive should *not* be built with
absolute path information because the user will want to locally extract the
archive contents and shift the binaries to his executable directory.
SCRIPT_TEMPLATES
curlpkg.bat:
zip -9 %1 curl.exe CHANGES LEGAL MPL-1.0.txt README \
docs/FAQ docs/FEATURES docs/README.curl docs/README.win32 docs/TODO
curldevpkg.bat:
zip -9 %1 curl.exe include\README include\curl\*.h CHANGES docs\* \
curl.spec curl-ssl.spec LEGAL lib/Makefile.m32 src/Makefile.m32 \
libcurl.a libcurl.def libcurl.dll libcurldll.a MPL-1.0.txt README
PROCEDURE_EXAMPLE
A standard packaging routine (for MingW32) using the above batch files could
go like this:
(No SSL) (With SSL)
cd <curl-sourcedir>\lib cd <curl-sourcedir>\lib
make -f Makefile.m32 make -f Makefile.m32 SSL=1
cd ..\src cd ..\src
make -f Makefile.m32 make -f Makefile.m32 SSL=1
cd .. cd ..
copy lib\libcurl.a . copy lib\libcurl.a .
copy src\curl.exe . copy src\curl.exe .
curlpkg curl-win32-nossl.zip curlpkg curl-win32-ssl.zip

View File

@@ -3,14 +3,17 @@
#
# Some flags needed when trying to cause warnings ;-)
# CFLAGS = -Wall -pedantic
# CFLAGS = -g -DMALLOCDEBUG # -Wall -pedantic
#CPPFLAGS = -DGLOBURL -DCURL_SEPARATORS
INCLUDES = -I$(top_srcdir)/include
bin_PROGRAMS = curl
bin_PROGRAMS = curl #memtest
curl_SOURCES = main.c hugehelp.c urlglob.c
#memtest_SOURCES = memtest.c
#memtest_LDADD = $(top_srcdir)/lib/libcurl.la
curl_SOURCES = main.c hugehelp.c urlglob.c writeout.c
curl_LDADD = $(top_srcdir)/lib/libcurl.la
curl_DEPENDENCIES = $(top_srcdir)/lib/libcurl.la
BUILT_SOURCES = hugehelp.c
@@ -22,7 +25,7 @@ EXTRA_DIST = mkhelp.pl Makefile.vc6
AUTOMAKE_OPTIONS = foreign no-dependencies
MANPAGE=$(top_srcdir)/docs/curl.1
README=$(top_srcdir)/docs/README.curl
README=$(top_srcdir)/docs/MANUAL
MKHELP=$(top_srcdir)/src/mkhelp.pl
# This generates the hugehelp.c file
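The rule itself is outside this hunk; judging from the MANPAGE, README and MKHELP variables above and the commented-out Win32 rule further down in this comparison, it boils down to roughly this (a sketch, not the literal Makefile rule):

  nroff -man docs/curl.1 | perl src/mkhelp.pl docs/MANUAL > src/hugehelp.c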

View File

@@ -1,65 +1,71 @@
#############################################################
## Makefile for building curl.exe with MingW32 (GCC-2.95) and
## optionally OpenSSL (0.9.4)
##
## Use: make -f Makefile.m32 [SSL=1]
##
## Comments to: Troy Engel <tengel@sonic.net> or
## Joern Hartroth <hartroth@acm.org>
CC = gcc
STRIP = strip -s
OPENSSL_PATH = ../../openssl-0.9.5a
# We may need these someday
# PERL = perl
# NROFF = nroff
########################################################
## Nothing more to do below this line!
INCLUDES = -I. -I.. -I../include
CFLAGS = -g -O2 -DMINGW32
LDFLAGS =
COMPILE = $(CC) $(INCLUDES) $(CFLAGS)
LINK = $(CC) $(CFLAGS) $(LDFLAGS) -o $@
curl_PROGRAMS = curl.exe
curl_OBJECTS = main.o hugehelp.o urlglob.o
curl_SOURCES = main.c hugehelp.c urlglob.c
curl_DEPENDENCIES = ../lib/libcurl.a
curl_LDADD = -L../lib -lcurl -lwsock32
ifdef SSL
curl_LDADD += -L$(OPENSSL_PATH)/out -leay32 -lssl32 -lRSAglue
endif
PROGRAMS = $(curl_PROGRAMS)
SOURCES = $(curl_SOURCES)
OBJECTS = $(curl_OBJECTS)
all: curl
curl: $(curl_OBJECTS) $(curl_DEPENDENCIES)
-@erase curl.exe
$(LINK) $(curl_OBJECTS) $(curl_LDADD)
$(STRIP) $(curl_PROGRAMS)
# We don't have nroff normally under win32
# hugehelp.c: ../README.curl ../curl.1 mkhelp.pl
# -@erase hugehelp.c
# $(NROFF) -man ../curl.1 | $(PERL) mkhelp.pl ../README.curl > hugehelp.c
.c.o:
$(COMPILE) -c $<
.s.o:
$(COMPILE) -c $<
.S.o:
$(COMPILE) -c $<
clean:
-@erase $(curl_OBJECTS)
distrib: clean
-@erase $(curl_PROGRAMS)
#############################################################
## Makefile for building curl.exe with MingW32 (GCC-2.95) and
## optionally OpenSSL (0.9.6)
##
## Use: make -f Makefile.m32 [SSL=1] [DYN=1]
##
## Comments to: Troy Engel <tengel@sonic.net> or
## Joern Hartroth <hartroth@acm.org>
CC = gcc
STRIP = strip -s
OPENSSL_PATH = ../../openssl-0.9.6
# We may need these someday
# PERL = perl
# NROFF = nroff
########################################################
## Nothing more to do below this line!
INCLUDES = -I. -I.. -I../include
CFLAGS = -g -O2 -DMINGW32
LDFLAGS =
COMPILE = $(CC) $(INCLUDES) $(CFLAGS)
LINK = $(CC) $(CFLAGS) $(LDFLAGS) -o $@
curl_PROGRAMS = curl.exe
curl_OBJECTS = main.o hugehelp.o urlglob.o writeout.o
curl_SOURCES = main.c hugehelp.c urlglob.c writeout.c
ifdef DYN
curl_DEPENDENCIES = ../lib/libcurldll.a ../lib/libcurl.dll
curl_LDADD = -L../lib -lcurldll
else
curl_DEPENDENCIES = ../lib/libcurl.a
curl_LDADD = -L../lib -lcurl
endif
curl_LDADD += -lwsock32
ifdef SSL
curl_LDADD += -L$(OPENSSL_PATH)/out -leay32 -lssl32 -lRSAglue
endif
PROGRAMS = $(curl_PROGRAMS)
SOURCES = $(curl_SOURCES)
OBJECTS = $(curl_OBJECTS)
all: curl.exe
curl.exe: $(curl_OBJECTS) $(curl_DEPENDENCIES)
-@erase $@
$(LINK) $(curl_OBJECTS) $(curl_LDADD)
$(STRIP) $@
# We don't have nroff normally under win32
# hugehelp.c: ../README.curl ../curl.1 mkhelp.pl
# -@erase hugehelp.c
# $(NROFF) -man ../curl.1 | $(PERL) mkhelp.pl ../README.curl > hugehelp.c
.c.o:
$(COMPILE) -c $<
.s.o:
$(COMPILE) -c $<
.S.o:
$(COMPILE) -c $<
clean:
-@erase $(curl_OBJECTS)
distrib: clean
-@erase $(curl_PROGRAMS)
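The new DYN switch is used just like SSL on the make command line; for example, an SSL-enabled build that links against the libcurl DLL:

  make -f Makefile.m32 SSL=1 DYN=1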

View File

@@ -4,6 +4,7 @@
## (default is release)
##
## Comments to: Troy Engel <tengel@sonic.net>
## Updated by: Craig Davison <cd@securityfocus.com>
PROGRAM_NAME = curl.exe
@@ -11,27 +12,34 @@ PROGRAM_NAME = curl.exe
## Nothing more to do below this line!
## Release
CCR = cl.exe /ML /O2 /D "NDEBUG"
CCR = cl.exe /MD /O2 /D "NDEBUG"
LINKR = link.exe /incremental:no /libpath:"../lib"
## Debug
CCD = cl.exe /MLd /Gm /ZI /Od /D "_DEBUG" /GZ
CCD = cl.exe /MDd /Gm /ZI /Od /D "_DEBUG" /GZ
LINKD = link.exe /incremental:yes /debug
CFLAGS = /nologo /W3 /GX /D "WIN32" /D "_CONSOLE" /D "_MBCS" /YX /FD /c
CFLAGS = /I "../include" /nologo /W3 /GX /D "WIN32" /D "_CONSOLE" /D "_MBCS" /YX /FD /c
LFLAGS = /nologo /out:$(PROGRAM_NAME) /subsystem:console /machine:I386
LINKLIBS = kernel32.lib wsock32.lib libcurl.lib
LINKLIBS = wsock32.lib libcurl.lib
LINKLIBS_DEBUG = wsock32.lib libcurld.lib
RELEASE_OBJS= \
hugehelpr.obj \
writeoutr.obj \
urlglobr.obj \
mainr.obj
DEBUG_OBJS= \
hugehelpd.obj \
writeoutd.obj \
urlglobd.obj \
maind.obj
LINK_OBJS= \
hugehelp.obj \
writeout.obj \
urlglob.obj \
main.obj
all : release
@@ -40,17 +48,25 @@ release: $(RELEASE_OBJS)
$(LINKR) $(LFLAGS) $(LINKLIBS) $(LINK_OBJS)
debug: $(DEBUG_OBJS)
$(LINKD) $(LFLAGS) $(LINKLIBS) $(LINK_OBJS)
$(LINKD) $(LFLAGS) $(LINKLIBS_DEBUG) $(LINK_OBJS)
## Release
hugehelpr.obj: hugehelp.c
$(CCR) $(CFLAGS) /Zm200 hugehelp.c
writeoutr.obj: writeout.c
$(CCR) $(CFLAGS) writeout.c
urlglobr.obj: urlglob.c
$(CCR) $(CFLAGS) urlglob.c
mainr.obj: main.c
$(CCR) $(CFLAGS) main.c
## Debug
hugehelpd.obj: hugehelp.c
$(CCD) $(CFLAGS) /Zm200 hugehelp.c
writeoutd.obj: writeout.c
$(CCD) $(CFLAGS) writeout.c
urlglobd.obj: urlglob.c
$(CCD) $(CFLAGS) urlglob.c
maind.obj: main.c
$(CCD) $(CFLAGS) main.c
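Assuming Microsoft's nmake drives this makefile, the two configurations map directly onto the release and debug targets above:

  rem release build: links wsock32.lib and libcurl.lib (/MD runtime)
  nmake -f Makefile.vc6
  rem debug build: links libcurld.lib instead (/MDd runtime)
  nmake -f Makefile.vc6 debug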

View File

@@ -114,14 +114,19 @@ puts (
" cut off). The data is expected to be \"url-encoded\".\n"
" This will cause curl to pass the data to the server\n"
" using the content-type application/x-www-form-urlen-\n"
" coded. Compare to -F.\n"
" coded. Compare to -F. If more than one -d/--data option\n"
" is used on the same command line, the data pieces spec-\n"
" ified will be merged together with a separating &-let-\n"
" ter. Thus, using '-d name=daniel -d skill=lousy' would\n"
" generate a post chunk that looks like\n"
" 'name=daniel&skill=lousy'.\n"
"\n"
" If you start the data with the letter @, the rest\n"
" should be a file name to read the data from, or - if\n"
" you want curl to read the data from stdin. The con-\n"
" tents of the file must already be url-encoded.\n"
" If you start the data with the letter @, the rest\n"
" should be a file name to read the data from, or - if\n"
" you want curl to read the data from stdin. The con-\n"
" tents of the file must already be url-encoded. Multiple\n"
" files can also be specified.\n"
"\n"
" To post data purely binary, you should instead use the\n"
" To post data purely binary, you should instead use the\n"
" --data-binary option.\n"
"\n"
" -d/--data is the same as --data-ascii.\n"
@@ -130,62 +135,67 @@ puts (
" (HTTP) This is an alias for the -d/--data option.\n"
"\n"
" --data-binary <data>\n"
" (HTTP) This posts data in a similar manner as --data-\n"
" ascii does, although when using this option the entire\n"
" context of the posted data is kept as-is. If you want\n"
" to post a binary file without the strip-newlines fea-\n"
" (HTTP) This posts data in a similar manner as --data-\n"
" ascii does, although when using this option the entire\n"
" context of the posted data is kept as-is. If you want\n"
" to post a binary file without the strip-newlines fea-\n"
" ture of the --data-ascii option, this is for you.\n"
"\n"
" -D/--dump-header <file>\n"
" (HTTP/FTP) Write the HTTP headers to this file. Write\n"
" (HTTP/FTP) Write the HTTP headers to this file. Write\n"
" the FTP file info to this file if -I/--head is used.\n"
"\n"
" This option is handy to use when you want to store the\n"
" cookies that a HTTP site sends to you. The cookies\n"
" This option is handy to use when you want to store the\n"
" cookies that a HTTP site sends to you. The cookies\n"
" could then be read in a second curl invoke by using the\n"
" -b/--cookie option!\n"
"\n"
" -e/--referer <URL>\n"
" (HTTP) Sends the \"Referer Page\" information to the HTTP\n"
" server. This can also be set with the -H/--header flag\n"
" server. This can also be set with the -H/--header flag\n"
" of course. When used with -L/--location you can append\n"
" \";auto\" to the referer URL to make curl automatically\n"
" set the previous URL when it follows a Location:\n"
" header. The \";auto\" string can be used alone, even if\n"
" \";auto\" to the referer URL to make curl automatically\n"
" set the previous URL when it follows a Location:\n"
" header. The \";auto\" string can be used alone, even if\n"
" you don't set an initial referer.\n"
"\n"
" -E/--cert <certificate[:password]>\n"
" (HTTPS) Tells curl to use the specified certificate\n"
" file when getting a file with HTTPS. The certificate\n"
" must be in PEM format. If the optional password isn't\n"
" (HTTPS) Tells curl to use the specified certificate\n"
" file when getting a file with HTTPS. The certificate\n"
" must be in PEM format. If the optional password isn't\n"
" specified, it will be queried for on the terminal. Note\n"
" that this certificate is the private key and the pri-\n"
" that this certificate is the private key and the pri-\n"
" vate certificate concatenated!\n"
"\n"
" --cacert <CA certificate>\n"
" (HTTPS) Tells curl to use the specified certificate\n"
" file to verify the peer. The certificate must be in PEM\n"
" format.\n"
"\n"
" -f/--fail\n"
" (HTTP) Fail silently (no output at all) on server\n"
" errors. This is mostly done like this to better enable\n"
" scripts etc to better deal with failed attempts. In\n"
" (HTTP) Fail silently (no output at all) on server\n"
" errors. This is mostly done like this to better enable\n"
" scripts etc to better deal with failed attempts. In\n"
" normal cases when a HTTP server fails to deliver a doc-\n"
" ument, it returns a HTML document stating so (which\n"
" ument, it returns a HTML document stating so (which\n"
" often also describes why and more). This flag will pre-\n"
" vent curl from outputting that and fail silently\n"
" vent curl from outputting that and fail silently\n"
" instead.\n"
"\n"
" -F/--form <name=content>\n"
" (HTTP) This lets curl emulate a filled in form in which\n"
" a user has pressed the submit button. This causes curl\n"
" a user has pressed the submit button. This causes curl\n"
" to POST data using the content-type multipart/form-data\n"
" according to RFC1867. This enables uploading of binary\n"
" according to RFC1867. This enables uploading of binary\n"
" files etc. To force the 'content' part to be be a file,\n"
" prefix the file name with an @ sign. To just get the\n"
" prefix the file name with an @ sign. To just get the\n"
" content part from a file, prefix the file name with the\n"
" letter <. The difference between @ and < is then that @\n"
" makes a file get attached in the post as a file upload,\n"
" while the < makes a text field and just get the con-\n"
" while the < makes a text field and just get the con-\n"
" tents for that text field from a file.\n"
"\n"
" Example, to send your password file to the server,\n"
" Example, to send your password file to the server,\n"
" where 'password' is the name of the form-field to which\n"
" /etc/passwd will be the input:\n"
"\n"
" curl -F password=@/etc/passwd www.mypasswords.com\n"
@@ -199,110 +209,111 @@ puts (
"\n"
" -H/--header <header>\n"
" (HTTP) Extra header to use when getting a web page. You\n"
" may specify any number of extra headers. Note that if\n"
" you should add a custom header that has the same name\n"
" may specify any number of extra headers. Note that if\n"
" you should add a custom header that has the same name\n"
" as one of the internal ones curl would use, your exter-\n"
" nally set header will be used instead of the internal\n"
" one. This allows you to make even trickier stuff than\n"
" curl would normally do. You should not replace inter-\n"
" nally set headers without knowing perfectly well what\n"
" you're doing. Replacing an internal header with one\n"
" without content on the right side of the colon will\n"
" nally set header will be used instead of the internal\n"
" one. This allows you to make even trickier stuff than\n"
" curl would normally do. You should not replace\n"
" internally set headers without knowing perfectly well\n"
" what you're doing. Replacing an internal header with\n"
" one without content on the right side of the colon will\n"
" prevent that header from appearing.\n"
"\n"
" -i/--include\n"
" (HTTP) Include the HTTP-header in the output. The HTTP-\n"
" header includes things like server-name, date of the\n"
" header includes things like server-name, date of the\n"
" document, HTTP-version and more...\n"
"\n"
" --interface <name>\n"
" Perform an operation using a specified interface. You\n"
" can enter interface name, IP address or host name. An\n"
" Perform an operation using a specified interface. You\n"
" can enter interface name, IP address or host name. An\n"
" example could look like:\n"
"\n"
" curl --interface eth0:1 http://www.netscape.com/\n"
);
puts(
"\n"
" -I/--head\n"
" (HTTP/FTP) Fetch the HTTP-header only! HTTP-servers\n"
" (HTTP/FTP) Fetch the HTTP-header only! HTTP-servers\n"
" feature the command HEAD which this uses to get nothing\n"
" but the header of a document. When used on a FTP file,\n"
" but the header of a document. When used on a FTP file,\n"
" curl displays the file size only.\n"
"\n"
" --krb4 <level>\n"
" (FTP) Enable kerberos4 authentication and use. The\n"
);
puts(
" level must be entered and should be one of 'clear',\n"
" 'safe', 'confidential' or 'private'. Should you use a\n"
" level that is not one of these, 'private' will instead\n"
" (FTP) Enable kerberos4 authentication and use. The\n"
" level must be entered and should be one of 'clear',\n"
" 'safe', 'confidential' or 'private'. Should you use a\n"
" level that is not one of these, 'private' will instead\n"
" be used.\n"
"\n"
" -K/--config <config file>\n"
" Specify which config file to read curl arguments from.\n"
" The config file is a text file in which command line\n"
" arguments can be written which then will be used as if\n"
" they were written on the actual command line. If the\n"
" first column of a config line is a '#' character, the\n"
" Specify which config file to read curl arguments from.\n"
" The config file is a text file in which command line\n"
" arguments can be written which then will be used as if\n"
" they were written on the actual command line. If the\n"
" first column of a config line is a '#' character, the\n"
" rest of the line will be treated as a comment.\n"
"\n"
" Specify the filename as '-' to make curl read the file\n"
" Specify the filename as '-' to make curl read the file\n"
" from stdin.\n"
"\n"
" -l/--list-only\n"
" (FTP) When listing an FTP directory, this switch forces\n"
" a name-only view. Especially useful if you want to\n"
" machine-parse the contents of an FTP directory since\n"
" the normal directory view doesn't use a standard look\n"
" a name-only view. Especially useful if you want to\n"
" machine-parse the contents of an FTP directory since\n"
" the normal directory view doesn't use a standard look\n"
" or format.\n"
"\n"
" -L/--location\n"
" (HTTP/HTTPS) If the server reports that the requested\n"
" page has a different location (indicated with the\n"
" header line Location:) this flag will let curl attempt\n"
" (HTTP/HTTPS) If the server reports that the requested\n"
" page has a different location (indicated with the\n"
" header line Location:) this flag will let curl attempt\n"
" to reattempt the get on the new place. If used together\n"
" with -i or -I, headers from all requested pages will be\n"
" shown. If this flag is used when making a HTTP POST,\n"
" shown. If this flag is used when making a HTTP POST,\n"
" curl will automatically switch to GET after the initial\n"
" POST has been done.\n"
"\n"
" -m/--max-time <seconds>\n"
" Maximum time in seconds that you allow the whole opera-\n"
" tion to take. This is useful for preventing your batch\n"
" jobs from hanging for hours due to slow networks or\n"
" links going down. This doesn't work fully in win32\n"
" jobs from hanging for hours due to slow networks or\n"
" links going down. This doesn't work fully in win32\n"
" systems.\n"
"\n"
" -M/--manual\n"
" Manual. Display the huge help text.\n"
"\n"
" -n/--netrc\n"
" Makes curl scan the .netrc file in the user's home\n"
" directory for login name and password. This is typi-\n"
" cally used for ftp on unix. If used with http, curl\n"
" will enable user authentication. See netrc(4) for\n"
" details on the file format. Curl will not complain if\n"
" that file hasn't the right permissions (it should not\n"
" be world nor group readable). The environment variable\n"
" Makes curl scan the .netrc file in the user's home\n"
" directory for login name and password. This is typi-\n"
" cally used for ftp on unix. If used with http, curl\n"
" will enable user authentication. See netrc(4) for\n"
" details on the file format. Curl will not complain if\n"
" that file hasn't the right permissions (it should not\n"
" be world nor group readable). The environment variable\n"
" \"HOME\" is used to find the home directory.\n"
"\n"
" A quick and very simple example of how to setup a\n"
" .netrc to allow curl to ftp to the machine\n"
" A quick and very simple example of how to setup a\n"
" .netrc to allow curl to ftp to the machine\n"
" host.domain.com with user name\n"
"\n"
" machine host.domain.com login myself password secret\n"
"\n"
" -N/--no-buffer\n"
" Disables the buffering of the output stream. In normal\n"
" Disables the buffering of the output stream. In normal\n"
" work situations, curl will use a standard buffered out-\n"
" put stream that will have the effect that it will out-\n"
" put the data in chunks, not necessarily exactly when\n"
" the data arrives. Using this option will disable that\n"
" put stream that will have the effect that it will out-\n"
" put the data in chunks, not necessarily exactly when\n"
" the data arrives. Using this option will disable that\n"
" buffering.\n"
"\n"
" -o/--output <file>\n"
" Write output to <file> instead of stdout. If you are\n"
" Write output to <file> instead of stdout. If you are\n"
" using {} or [] to fetch multiple documents, you can use\n"
" '#' followed by a number in the <file> specifier. That\n"
" variable will be replaced with the current string for\n"
" '#' followed by a number in the <file> specifier. That\n"
" variable will be replaced with the current string for\n"
" the URL being fetched. Like in:\n"
"\n"
" curl http://{one,two}.site.com -o \"file_#1.txt\"\n"
@@ -310,7 +321,6 @@ puts (
" or use several variables like:\n"
"\n"
" curl http://{site,host}.host[1-5].com -o \"#1_#2\"\n"
"\n"
" -O/--remote-name\n"
" Write output to a local file named like the remote file\n"
" we get. (Only the file part of the remote file is used,\n"
@@ -318,22 +328,22 @@ puts (
"\n"
" -p/--proxytunnel\n"
" When an HTTP proxy is used, this option will cause non-\n"
" HTTP protocols to attempt to tunnel through the proxy\n"
" instead of merely using it to do HTTP-like operations.\n"
" HTTP protocols to attempt to tunnel through the proxy\n"
" instead of merely using it to do HTTP-like operations.\n"
" The tunnel approach is made with the HTTP proxy CONNECT\n"
" request and requires that the proxy allows direct con-\n"
" nect to the remote port number curl wants to tunnel\n"
" request and requires that the proxy allows direct con-\n"
" nect to the remote port number curl wants to tunnel\n"
" through to.\n"
"\n"
" -P/--ftpport <address>\n"
" (FTP) Reverses the initiator/listener roles when con-\n"
" necting with ftp. This switch makes Curl use the PORT\n"
" command instead of PASV. In practice, PORT tells the\n"
" (FTP) Reverses the initiator/listener roles when con-\n"
" necting with ftp. This switch makes Curl use the PORT\n"
" command instead of PASV. In practice, PORT tells the\n"
" server to connect to the client's specified address and\n"
" port, while PASV asks the server for an ip address and\n"
" port, while PASV asks the server for an ip address and\n"
" port to connect to. <address> should be one of:\n"
"\n"
" interface i.e \"eth0\" to specify which interface's IP\n"
" interface i.e \"eth0\" to specify which interface's IP\n"
" address you want to use (Unix only)\n"
"\n"
" IP address i.e \"192.168.10.1\" to specify exact IP num-\n"
@@ -341,28 +351,28 @@ puts (
"\n"
" host name i.e \"my.host.domain\" to specify machine\n"
"\n"
" - (any single-letter string) to make it pick\n"
" - (any single-letter string) to make it pick\n"
" the machine's default\n"
"\n"
" -q If used as the first parameter on the command line, the\n"
" $HOME/.curlrc file will not be read and used as a con-\n"
" $HOME/.curlrc file will not be read and used as a con-\n"
" fig file.\n"
"\n"
" -Q/--quote <command>\n"
" (FTP) Send an arbitrary command to the remote FTP\n"
" server, by using the QUOTE command of the server. Not\n"
" all servers support this command, and the set of QUOTE\n"
" commands are server specific! Quote commands are sent\n"
" BEFORE the transfer is taking place. To make commands\n"
" take place after a successful transfer, prefix them\n"
" (FTP) Send an arbitrary command to the remote FTP\n"
" server, by using the QUOTE command of the server. Not\n"
" all servers support this command, and the set of QUOTE\n"
" commands are server specific! Quote commands are sent\n"
" BEFORE the transfer is taking place. To make commands\n"
" take place after a successful transfer, prefix them\n"
" with a dash '-'. You may specify any amount of commands\n"
" to be run before and after the transfer. If the server\n"
" returns failure for one of the commands, the entire\n"
" to be run before and after the transfer. If the server\n"
" returns failure for one of the commands, the entire\n"
" operation will be aborted.\n"
"\n"
" -r/--range <range>\n"
" (HTTP/FTP) Retrieve a byte range (i.e a partial docu-\n"
" ment) from a HTTP/1.1 or FTP server. Ranges can be\n"
" (HTTP/FTP) Retrieve a byte range (i.e a partial docu-\n"
" ment) from a HTTP/1.1 or FTP server. Ranges can be\n"
" specified in a number of ways.\n"
"\n"
" 0-499 specifies the first 500 bytes\n"
@@ -371,8 +381,8 @@ puts (
"\n"
" -500 specifies the last 500 bytes\n"
"\n"
" 9500 specifies the bytes from offset 9500 and\n"
" forward\n"
" 9500 specifies the bytes from offset 9500 and for-\n"
" ward\n"
"\n"
" 0-0,-1 specifies the first and last byte only(*)(H)\n"
"\n"
@@ -382,161 +392,173 @@ puts (
" 100-199,500-599\n"
" specifies two separate 100 bytes ranges(*)(H)\n"
"\n"
" (*) = NOTE that this will cause the server to reply with a\n"
" (*) = NOTE that this will cause the server to reply with a\n"
" multipart response!\n"
"\n"
" You should also be aware that many HTTP/1.1 servers do not\n"
" You should also be aware that many HTTP/1.1 servers do not\n"
" have this feature enabled, so that when you attempt to get a\n"
" range, you'll instead get the whole document.\n"
"\n"
" FTP range downloads only support the simple syntax 'start-\n"
" stop' (optionally with one of the numbers omitted). It\n"
" FTP range downloads only support the simple syntax 'start-\n"
" stop' (optionally with one of the numbers omitted). It\n"
" depends on the non-RFC command SIZE.\n"
"\n"
" -s/--silent\n"
" Silent mode. Don't show progress meter or error mes-\n"
" Silent mode. Don't show progress meter or error mes-\n"
" sages. Makes Curl mute.\n"
"\n"
" -S/--show-error\n"
" When used with -s it makes curl show error message if\n"
" When used with -s it makes curl show error message if\n"
" it fails.\n"
"\n"
" -t/--upload\n"
" Deprecated. Use '-T -' instead. Transfer the stdin\n"
" data to the specified file. Curl will read everything\n"
" from stdin until EOF and store with the supplied name.\n"
" If this is used on a http(s) server, the PUT command\n"
" Deprecated. Use '-T -' instead. Transfer the stdin\n"
" data to the specified file. Curl will read everything\n"
" from stdin until EOF and store with the supplied name.\n"
" If this is used on a http(s) server, the PUT command\n"
" will be used.\n"
"\n"
" -T/--upload-file <file>\n"
" Like -t, but this transfers the specified local file.\n"
" If there is no file part in the specified URL, Curl\n"
" Like -t, but this transfers the specified local file.\n"
" If there is no file part in the specified URL, Curl\n"
" will append the local file name. NOTE that you must use\n"
" a trailing / on the last directory to really prove to\n"
" a trailing / on the last directory to really prove to\n"
" Curl that there is no file name or curl will think that\n"
" your last directory name is the remote file name to\n"
" use. That will most likely cause the upload operation\n"
" to fail. If this is used on a http(s) server, the PUT\n"
" your last directory name is the remote file name to\n"
" use. That will most likely cause the upload operation\n"
" to fail. If this is used on a http(s) server, the PUT\n"
" command will be used.\n"
"\n"
" -u/--user <user:password>\n"
" Specify user and password to use when fetching. See\n"
" README.curl for detailed examples of how to use this.\n"
" If no password is specified, curl will ask for it\n"
" Specify user and password to use when fetching. See\n"
" README.curl for detailed examples of how to use this.\n"
" If no password is specified, curl will ask for it\n"
" interactively.\n"
"\n"
" -U/--proxy-user <user:password>\n"
" Specify user and password to use for Proxy authentica-\n"
" Specify user and password to use for Proxy authentica-\n"
" tion. If no password is specified, curl will ask for it\n"
" interactively.\n"
"\n"
" --url <URL>\n"
" Set the URL to fetch. This option is mostly handy when\n"
" you wanna specify URL in a config file.\n"
"\n"
" -v/--verbose\n"
" Makes the fetching more verbose/talkative. Mostly\n"
" usable for debugging. Lines starting with '>' means\n"
" Makes the fetching more verbose/talkative. Mostly\n"
" usable for debugging. Lines starting with '>' means\n"
" data sent by curl, '<' means data received by curl that\n"
" is hidden in normal cases and lines starting with '*'\n"
" is hidden in normal cases and lines starting with '*'\n"
" means additional info provided by curl.\n"
"\n"
" -V/--version\n"
" Displays the full version of curl, libcurl and other\n"
" Displays the full version of curl, libcurl and other\n"
" 3rd party libraries linked with the executable.\n"
"\n"
" -w/--write-out <format>\n"
" Defines what to display after a completed and success-\n"
" ful operation. The format is a string that may contain\n"
" plain text mixed with any number of variables. The\n"
" Defines what to display after a completed and success-\n"
" ful operation. The format is a string that may contain\n"
" plain text mixed with any number of variables. The\n"
" string can be specified as \"string\", to get read from a\n"
" particular file you specify it \"@filename\" and to tell\n"
" particular file you specify it \"@filename\" and to tell\n"
" curl to read the format from stdin you write \"@-\".\n"
"\n"
" The variables present in the output format will be sub-\n"
" stituted by the value or text that curl thinks fit, as\n"
" described below. All variables are specified like\n"
" %{variable_name} and to output a normal % you just\n"
" write them like %%. You can output a newline by using\n"
" stituted by the value or text that curl thinks fit, as\n"
" described below. All variables are specified like\n"
" %{variable_name} and to output a normal % you just\n"
" write them like %%. You can output a newline by using\n"
" \\n, a carriage return with \\r and a tab space with \\t.\n"
"\n"
" NOTE: The %-letter is a special letter in the\n"
" win32-environment, where all occurrences of % must be\n"
" NOTE: The %-letter is a special letter in the\n"
" win32-environment, where all occurrences of % must be\n"
" doubled when using this option.\n"
"\n"
" Available variables are at this point:\n"
);
puts(
"\n"
" url_effective The URL that was fetched last. This is\n"
" url_effective The URL that was fetched last. This is\n"
" mostly meaningful if you've told curl to\n"
" follow location: headers.\n"
"\n"
" http_code The numerical code that was found in the\n"
);
puts(
" last retrieved HTTP(S) page.\n"
"\n"
" time_total The total time, in seconds, that the\n"
" full operation lasted. The time will be\n"
" time_total The total time, in seconds, that the\n"
" full operation lasted. The time will be\n"
" displayed with millisecond resolution.\n"
"\n"
" time_namelookup\n"
" The time, in seconds, it took from the\n"
" start until the name resolving was com-\n"
" The time, in seconds, it took from the\n"
" start until the name resolving was com-\n"
" pleted.\n"
" time_connect The time, in seconds, it took from the\n"
" start until the connect to the remote\n"
"\n"
" time_connect The time, in seconds, it took from the\n"
" start until the connect to the remote\n"
" host (or proxy) was completed.\n"
"\n"
" time_pretransfer\n"
" The time, in seconds, it took from the\n"
" start until the file transfer is just\n"
" about to begin. This includes all pre-\n"
" transfer commands and negotiations that\n"
" are specific to the particular proto-\n"
" The time, in seconds, it took from the\n"
" start until the file transfer is just\n"
" about to begin. This includes all pre-\n"
" transfer commands and negotiations that\n"
" are specific to the particular proto-\n"
" col(s) involved.\n"
"\n"
" size_download The total amount of bytes that were\n"
" size_download The total amount of bytes that were\n"
" downloaded.\n"
"\n"
" size_upload The total amount of bytes that were\n"
" size_upload The total amount of bytes that were\n"
" uploaded.\n"
"\n"
" speed_download The average download speed that curl\n"
" size_header The total amount of bytes of the down-\n"
" loaded headers.\n"
"\n"
" size_request The total amount of bytes that were sent\n"
" in the HTTP request.\n"
"\n"
" speed_download The average download speed that curl\n"
" measured for the complete download.\n"
"\n"
" speed_upload The average upload speed that curl mea-\n"
" sured for the complete download.\n"
" speed_upload The average upload speed that curl mea-\n"
" sured for the complete upload.\n"
"\n"
" -x/--proxy <proxyhost[:port]>\n"
" Use specified proxy. If the port number is not speci-\n"
" Use specified proxy. If the port number is not speci-\n"
" fied, it is assumed at port 1080.\n"
"\n"
" -X/--request <command>\n"
" (HTTP) Specifies a custom request to use when communi-\n"
" cating with the HTTP server. The specified request\n"
" (HTTP) Specifies a custom request to use when communi-\n"
" cating with the HTTP server. The specified request\n"
" will be used instead of the standard GET. Read the HTTP\n"
" 1.1 specification for details and explanations.\n"
"\n"
" (FTP) Specifies a custom FTP command to use instead of\n"
" (FTP) Specifies a custom FTP command to use instead of\n"
" LIST when doing file lists with ftp.\n"
"\n"
" -y/--speed-time <time>\n"
" If a download is slower than speed-limit bytes per sec-\n"
" ond during a speed-time period, the download gets\n"
" ond during a speed-time period, the download gets\n"
" aborted. If speed-time is used, the default speed-limit\n"
" will be 1 unless set with -y.\n"
"\n"
" -Y/--speed-limit <speed>\n"
" If a download is slower than this given speed, in bytes\n"
" per second, for speed-time seconds it gets aborted.\n"
" per second, for speed-time seconds it gets aborted.\n"
" speed-time is set with -Y and is 30 if not set.\n"
"\n"
" -z/--time-cond <date expression>\n"
" (HTTP) Request to get a file that has been modified\n"
" later than the given time and date, or one that has\n"
" (HTTP) Request to get a file that has been modified\n"
" later than the given time and date, or one that has\n"
" been modified before that time. The date expression can\n"
" be all sorts of date strings or if it doesn't match any\n"
" internal ones, it tries to get the time from a given\n"
" file name instead! See the GNU date(1) or curl_get-\n"
" internal ones, it tries to get the time from a given\n"
" file name instead! See the GNU date(1) or curl_get-\n"
" date(3) man pages for date expression details.\n"
"\n"
" Start the date expression with a dash (-) to make it\n"
" request for a document that is older than the given\n"
" Start the date expression with a dash (-) to make it\n"
" request for a document that is older than the given\n"
" date/time, default is a document that is newer than the\n"
" specified date/time.\n"
"\n"
@@ -549,19 +571,18 @@ puts (
" ing with a remote SSL server.\n"
"\n"
" -#/--progress-bar\n"
" Make curl display progress information as a progress\n"
" Make curl display progress information as a progress\n"
" bar instead of the default statistics.\n"
"\n"
" --crlf\n"
" (FTP) Convert LF to CRLF in upload. Useful for MVS\n"
" (FTP) Convert LF to CRLF in upload. Useful for MVS\n"
" (OS/390).\n"
"\n"
" --stderr <file>\n"
" Redirect all writes to stderr to the specified file\n"
" Redirect all writes to stderr to the specified file\n"
" instead. If the file name is a plain '-', it is instead\n"
" written to stdout. This option has no point when you're\n"
" using a shell with decent redirecting capabilities.\n"
"\n"
"FILES\n"
" ~/.curlrc\n"
" Default config file.\n"
@@ -580,7 +601,7 @@ puts (
" Sets proxy server to use for GOPHER.\n"
"\n"
" ALL_PROXY [protocol://]<host>[:port]\n"
" Sets proxy server to use if no protocol-specific proxy\n"
" Sets proxy server to use if no protocol-specific proxy\n"
" is set.\n"
"\n"
" NO_PROXY <comma-separated list of hosts>\n"
@@ -588,12 +609,12 @@ puts (
" If set to a asterisk '*' only, it matches all hosts.\n"
"\n"
" COLUMNS <integer>\n"
" The width of the terminal. This variable only affects\n"
" The width of the terminal. This variable only affects\n"
" curl when the --progress-bar option is used.\n"
"\n"
"EXIT CODES\n"
" There exists a bunch of different error codes and their cor-\n"
" responding error messages that may appear during bad condi-\n"
" responding error messages that may appear during bad condi-\n"
" tions. At the time of this writing, the exit codes are:\n"
"\n"
" 1 Unsupported protocol. This build of curl has no support\n"
@@ -603,94 +624,94 @@ puts (
"\n"
" 3 URL malformat. The syntax was not correct.\n"
"\n"
" 4 URL user malformatted. The user-part of the URL syntax\n"
" 4 URL user malformatted. The user-part of the URL syntax\n"
" was not correct.\n"
"\n"
" 5 Couldn't resolve proxy. The given proxy host could not\n"
" 5 Couldn't resolve proxy. The given proxy host could not\n"
" be resolved.\n"
"\n"
" 6 Couldn't resolve host. The given remote host was not\n"
" 6 Couldn't resolve host. The given remote host was not\n"
" resolved.\n"
"\n"
" 7 Failed to connect to host.\n"
"\n"
" 8 FTP weird server reply. The server sent data curl\n"
" 8 FTP weird server reply. The server sent data curl\n"
" couldn't parse.\n"
"\n"
" 9 FTP access denied. The server denied login.\n"
"\n"
" 10 FTP user/password incorrect. Either one or both were\n"
" 10 FTP user/password incorrect. Either one or both were\n"
" not accepted by the server.\n"
"\n"
" 11 FTP weird PASS reply. Curl couldn't parse the reply\n"
" 11 FTP weird PASS reply. Curl couldn't parse the reply\n"
" sent to the PASS request.\n"
"\n"
" 12 FTP weird USER reply. Curl couldn't parse the reply\n"
" 12 FTP weird USER reply. Curl couldn't parse the reply\n"
" sent to the USER request.\n"
"\n"
" 13 FTP weird PASV reply, Curl couldn't parse the reply\n"
" 13 FTP weird PASV reply, Curl couldn't parse the reply\n"
" sent to the PASV request.\n"
"\n"
" 14 FTP weird 227 formay. Curl couldn't parse the 227-line\n"
" 14 FTP weird 227 format. Curl couldn't parse the 227-line\n"
" the server sent.\n"
"\n"
" 15 FTP can't get host. Couldn't resolve the host IP we got\n"
" in the 227-line.\n"
"\n"
" 16 FTP can't reconnect. Couldn't connect to the host we\n"
" 16 FTP can't reconnect. Couldn't connect to the host we\n"
" got in the 227-line.\n"
"\n"
" 17 FTP couldn't set binary. Couldn't change transfer\n"
" 17 FTP couldn't set binary. Couldn't change transfer\n"
" method to binary.\n"
"\n"
" 18 Partial file. Only a part of the file was transfered.\n"
"\n"
" 19 FTP couldn't RETR file. The RETR command failed.\n"
"\n"
" 20 FTP write error. The transfer was reported bad by the\n"
" 20 FTP write error. The transfer was reported bad by the\n"
" server.\n"
"\n"
" 21 FTP quote error. A quote command returned error from\n"
" 21 FTP quote error. A quote command returned error from\n"
" the server.\n"
"\n"
" 22 HTTP not found. The requested page was not found. This\n"
" 22 HTTP not found. The requested page was not found. This\n"
" return code only appears if --fail is used.\n"
"\n"
" 23 Write error. Curl couldn't write data to a local\n"
" 23 Write error. Curl couldn't write data to a local\n"
" filesystem or similar.\n"
"\n"
" 24 Malformat user. User name badly specified.\n"
"\n"
" 25 FTP couldn't STOR file. The server denied the STOR\n"
" 25 FTP couldn't STOR file. The server denied the STOR\n"
" operation.\n"
"\n"
" 26 Read error. Various reading problems.\n"
"\n"
" 27 Out of memory. A memory allocation request failed.\n"
"\n"
" 28 Operation timeout. The specified time-out period was\n"
" 28 Operation timeout. The specified time-out period was\n"
" reached according to the conditions.\n"
"\n"
" 29 FTP couldn't set ASCII. The server returned an unknown\n"
" 29 FTP couldn't set ASCII. The server returned an unknown\n"
" reply.\n"
"\n"
" 30 FTP PORT failed. The PORT command failed.\n"
"\n"
" 31 FTP couldn't use REST. The REST command failed.\n"
"\n"
" 32 FTP couldn't use SIZE. The SIZE command failed. The\n"
" command is an extension to the original FTP spec RFC\n"
" 32 FTP couldn't use SIZE. The SIZE command failed. The\n"
" command is an extension to the original FTP spec RFC\n"
" 959.\n"
"\n"
" 33 HTTP range error. The range \"command\" didn't work.\n"
"\n"
" 34 HTTP post error. Internal post-request generation\n"
" 34 HTTP post error. Internal post-request generation\n"
" error.\n"
"\n"
" 35 SSL connect error. The SSL handshaking failed.\n"
"\n"
" 36 FTP bad download resume. Couldn't continue an earlier\n"
" 36 FTP bad download resume. Couldn't continue an earlier\n"
" aborted download.\n"
"\n"
" 37 FILE couldn't read file. Failed to open the file. Per-\n"
" 37 FILE couldn't read file. Failed to open the file. Per-\n"
" missions?\n"
"\n"
" 38 LDAP cannot bind. LDAP bind operation failed.\n"
@@ -699,15 +720,30 @@ puts (
"\n"
" 40 Library not found. The LDAP library was not found.\n"
"\n"
" 41 Function not found. A required LDAP function was not\n"
" 41 Function not found. A required LDAP function was not\n"
" found.\n"
"\n"
" XX There will appear more error codes here in future\n"
" releases. The existing ones are meant to never change.\n"
" 42 Aborted by callback. An application told curl to abort\n"
" the operation.\n"
"\n"
" 43 Internal error. A function was called with a bad param-\n"
" eter.\n"
"\n"
" 44 Internal error. A function was called in a bad order.\n"
"\n"
" 45 Interface error. A specified outgoing interface could\n"
" not be used.\n"
"\n"
" 46 Bad password entered. An error was signalled when the\n"
" password was entered.\n"
" 47 Too many redirects. When following redirects, curl hit\n"
" the maximum amount.\n"
"\n"
" XX There will appear more error codes here in future\n"
" releases. The existing ones are meant to never change.\n"
"\n"
"BUGS\n"
" If you do find any (or have other suggestions), mail Daniel\n"
" Stenberg <Daniel.Stenberg@haxx.se>.\n"
" If you do find bugs, mail them to curl-bug@haxx.se.\n"
"\n"
"AUTHORS / CONTRIBUTORS\n"
" - Daniel Stenberg <Daniel.Stenberg@haxx.se>\n"
@@ -730,6 +766,8 @@ puts (
" - Douglas E. Wegscheid <wegscd@whirlpool.com>\n"
" - Mark Butler <butlerm@xmission.com>\n"
" - Eric Thelin <eric@generation-i.com>\n"
);
puts(
" - Marc Boucher <marc@mbsi.ca>\n"
" - Greg Onufer <Greg.Onufer@Eng.Sun.COM>\n"
" - Doug Kaufman <dkaufman@rahul.net>\n"
@@ -748,8 +786,6 @@ puts (
" - Paul Marquis <pmarquis@iname.com>\n"
" - Ellis Pritchard <ellis@citria.com>\n"
" - Damien Adant <dams@usa.net>\n"
);
puts(
" - Chris <cbayliss@csc.come>\n"
" - Marco G. Salvagno <mgs@whiz.cjb.net>\n"
" - Paul Marquis <pmarquis@iname.com>\n"
@@ -764,6 +800,10 @@ puts (
" - Stephen Kick <skick@epicrealm.com>\n"
" - Martin Hedenfalk <mhe@stacken.kth.se>\n"
" - Richard Prescott\n"
" - Jason S. Priebe <priebe@wral-tv.com>\n"
" - T. Bharath <TBharath@responsenetworks.com>\n"
" - Alexander Kourakos <awk@users.sourceforge.net>\n"
" - James Griffiths <griffiths_james@yahoo.com>\n"
"\n"
"WWW\n"
" http://curl.haxx.se\n"
@@ -1036,6 +1076,8 @@ puts (
"\n"
" curl -F \"file=@cooltext.txt\" -F \"yourname=Daniel\" \\\n"
" -F \"filedescription=Cool text file with cool text inside\" \\\n"
);
puts(
" http://www.post.com/postit.cgi\n"
"\n"
" So, to send two files in one post you can do it in two ways:\n"
@@ -1058,13 +1100,13 @@ puts (
"\n"
" curl -e www.coolsite.com http://www.showme.com/\n"
"\n"
" NOTE: The referer field is defined in the HTTP spec to be a full URL.\n"
"\n"
"USER AGENT\n"
"\n"
" A HTTP request has the option to include information about the browser\n"
" that generated the request. Curl allows it to be specified on the command\n"
" line. It is especially useful to fool or trick stupid servers or CGI\n"
);
puts(
" scripts that only accept certain browsers.\n"
"\n"
" Example:\n"
@@ -1178,17 +1220,26 @@ puts (
"CONFIG FILE\n"
"\n"
" Curl automatically tries to read the .curlrc file (or _curlrc file on win32\n"
" systems) from the user's home dir on startup. The config file should be\n"
" made up with normal command line switches. Comments can be used within the\n"
" file. If the first letter on a line is a '#'-letter the rest of the line\n"
" is treated as a comment.\n"
" systems) from the user's home dir on startup.\n"
"\n"
" The config file could be made up with normal command line switches, but you\n"
" can also specify the long options without the dashes to make it more\n"
" readable. You can separate the options and the parameter with spaces, or\n"
" with = or :. Comments can be used within the file. If the first letter on a\n"
" line is a '#'-letter the rest of the line is treated as a comment.\n"
"\n"
" If you want the parameter to contain spaces, you must inclose the entire\n"
" parameter within double quotes (\"). Within those quotes, you specify a\n"
" quote as \\\".\n"
"\n"
" NOTE: You must specify options and their arguments on the same line.\n"
"\n"
" Example, set default time out and proxy in a config file:\n"
"\n"
" # We want a 30 minute timeout:\n"
" -m 1800\n"
" # ... and we use a proxy for all accesses:\n"
" -x proxy.our.domain.com:8080\n"
" proxy = proxy.our.domain.com:8080\n"
"\n"
" White spaces ARE significant at the end of lines, but all white spaces\n"
" leading up to the first characters of each line are ignored.\n"
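To make the new config file rules above concrete, a hypothetical ~/.curlrc written in the long-option style could read as follows (all values are made up):

  # long options are written without the dashes; '=' or ':' separates the value
  max-time = 1800
  proxy: proxy.our.domain.com:8080
  # a parameter that contains spaces must be enclosed in double quotes
  user-agent = "superbrowser/1.0 (example)"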
@@ -1202,14 +1253,14 @@ puts (
" without URL by making a config file similar to:\n"
"\n"
" # default url to get\n"
" http://help.with.curl.com/curlhelp.html\n"
" url = \"http://help.with.curl.com/curlhelp.html\"\n"
"\n"
" You can specify another config file to be read by using the -K/--config\n"
" flag. If you set config file name to \"-\" it'll read the config from stdin,\n"
" which can be handy if you want to hide options from being visible in process\n"
" tables etc:\n"
"\n"
" echo \"-u user:passwd\" | curl -K - http://that.secret.site.com\n"
" echo \"user = user:passwd\" | curl -K - http://that.secret.site.com\n"
"\n"
"EXTRA HEADERS\n"
"\n"
@@ -1296,6 +1347,8 @@ puts (
" curl https://www.secure-site.com\n"
"\n"
" Curl is also capable of using your personal certificates to get/post files\n"
);
puts(
" from sites that require valid certificates. The only drawback is that the\n"
" certificate needs to be in PEM-format. PEM is a standard and open format to\n"
" store certificates with, but it is not used by the most commonly used\n"
@@ -1324,8 +1377,6 @@ puts (
" curl -2 https://secure.site.com/\n"
"\n"
" Otherwise, curl will first attempt to use v3 and then v2.\n"
);
puts(
"\n"
" To use OpenSSL to convert your favourite browser's certificate into a PEM\n"
" formatted one that curl can use, do something like this (assuming netscape,\n"

1143
src/main.c

File diff suppressed because it is too large

View File

@@ -26,9 +26,9 @@
*
* ------------------------------------------------------------
* Main author:
* - Daniel Stenberg <Daniel.Stenberg@haxx.nu>
* - Daniel Stenberg <daniel@haxx.se>
*
* http://curl.haxx.nu
* http://curl.haxx.se
*
* $Source$
* $Revision$

View File

@@ -24,9 +24,9 @@
*
* ------------------------------------------------------------
* Main author:
* - Daniel Stenberg <Daniel.Stenberg@haxx.nu>
* - Daniel Stenberg <daniel@haxx.se>
*
* http://curl.haxx.nu
* http://curl.haxx.se
*
* $Source$
* $Revision$
@@ -45,7 +45,11 @@
#include <curl/curl.h>
#include "urlglob.h"
char glob_buffer[URL_MAX_LENGTH];
#ifdef MALLOCDEBUG
#include "../lib/memdebug.h"
#endif
char *glob_buffer;
URLGlob *glob_expand;
int glob_word(char*, int);
@@ -206,10 +210,13 @@ int glob_word(char *pattern, int pos) {
int glob_url(URLGlob** glob, char* url, int *urlnum)
{
if (strlen(url)>URL_MAX_LENGTH) {
printf("Illegally sized URL\n");
return CURLE_URL_MALFORMAT;
}
/*
* We can deal with any-size, just make a buffer with the same length
* as the specified URL!
*/
glob_buffer=(char *)malloc(strlen(url)+1);
if(NULL == glob_buffer)
return CURLE_OUT_OF_MEMORY;
glob_expand = (URLGlob*)malloc(sizeof(URLGlob));
glob_expand->size = 0;
@@ -218,6 +225,25 @@ int glob_url(URLGlob** glob, char* url, int *urlnum)
return CURLE_OK;
}
void glob_cleanup(URLGlob* glob) {
int i, elem;
for (i = glob->size - 1; i >= 0; --i) {
if (!(i & 1)) { /* even indexes contain literals */
free(glob->literal[i/2]);
} else { /* odd indexes contain sets or ranges */
if (glob->pattern[i/2].type == UPTSet) {
for (elem = glob->pattern[i/2].content.Set.size - 1; elem >= 0; --elem) {
free(glob->pattern[i/2].content.Set.elements[elem]);
}
free(glob->pattern[i/2].content.Set.elements);
}
}
}
free(glob);
free(glob_buffer);
}
char *next_url(URLGlob *glob)
{
static int beenhere = 0;

View File

@@ -26,9 +26,9 @@
*
* ------------------------------------------------------------
* Main author:
* - Daniel Stenberg <Daniel.Stenberg@haxx.nu>
* - Daniel Stenberg <daniel@haxx.se>
*
* http://curl.haxx.nu
* http://curl.haxx.se
*
* $Source$
* $Revision$
@@ -70,5 +70,6 @@ typedef struct {
int glob_url(URLGlob**, char*, int *);
char* next_url(URLGlob*);
char* match_url(char*, URLGlob);
void glob_cleanup(URLGlob* glob);
#endif

View File

@@ -1,3 +1,3 @@
#define CURL_NAME "curl"
#define CURL_VERSION "7.3"
#define CURL_VERSION "7.5"
#define CURL_ID CURL_NAME " " CURL_VERSION " (" OS ") "

View File

@@ -38,12 +38,16 @@
* ------------------------------------------------------------
****************************************************************************/
#include "setup.h"
#include <stdio.h>
#include <string.h>
#include "strequal.h"
#include <curl/curl.h>
#include <curl/types.h>
#include <curl/easy.h>
#define _MPRINTF_REPLACE /* we want curl-functions instead of native ones */
#include <curl/mprintf.h>
#include "writeout.h"
typedef enum {
@@ -57,6 +61,8 @@ typedef enum {
VAR_SPEED_DOWNLOAD,
VAR_SPEED_UPLOAD,
VAR_HTTP_CODE,
VAR_HEADER_SIZE,
VAR_REQUEST_SIZE,
VAR_EFFECTIVE_URL,
VAR_NUM_OF_VARS /* must be the last */
} replaceid;
@@ -74,6 +80,8 @@ static struct variable replacements[]={
{"time_namelookup", VAR_NAMELOOKUP_TIME},
{"time_connect", VAR_CONNECT_TIME},
{"time_pretransfer", VAR_PRETRANSFER_TIME},
{"size_header", VAR_HEADER_SIZE},
{"size_request", VAR_REQUEST_SIZE},
{"size_download", VAR_SIZE_DOWNLOAD},
{"size_upload", VAR_SIZE_UPLOAD},
{"speed_download", VAR_SPEED_DOWNLOAD},
@@ -81,10 +89,14 @@ static struct variable replacements[]={
{NULL}
};
void WriteOut(struct UrlData *data)
void ourWriteOut(CURL *curl, char *writeinfo)
{
FILE *stream = stdout;
char *ptr=data->writeinfo;
char *ptr=writeinfo;
char *stringp;
long longinfo;
double doubleinfo;
while(*ptr) {
if('%' == *ptr) {
if('%' == ptr[1]) {
@@ -105,37 +117,67 @@ void WriteOut(struct UrlData *data)
if(strequal(ptr, replacements[i].name)) {
switch(replacements[i].id) {
case VAR_EFFECTIVE_URL:
fprintf(stream, "%s", data->url?data->url:"");
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &stringp))
fputs(stringp, stream);
break;
case VAR_HTTP_CODE:
fprintf(stream, "%03d", data->progress.httpcode);
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_HTTP_CODE, &longinfo))
fprintf(stream, "%03ld", longinfo);
break;
case VAR_HEADER_SIZE:
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_HEADER_SIZE, &longinfo))
fprintf(stream, "%ld", longinfo);
break;
case VAR_REQUEST_SIZE:
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_REQUEST_SIZE, &longinfo))
fprintf(stream, "%ld", longinfo);
break;
case VAR_TOTAL_TIME:
fprintf(stream, "%.3f", data->progress.timespent);
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_NAMELOOKUP_TIME:
fprintf(stream, "%.3f", tvdiff(data->progress.t_nslookup,
data->progress.start));
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_NAMELOOKUP_TIME,
&doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_CONNECT_TIME:
fprintf(stream, "%.3f", tvdiff(data->progress.t_connect,
data->progress.start));
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_PRETRANSFER_TIME:
fprintf(stream, "%.3f", tvdiff(data->progress.t_pretransfer,
data->progress.start));
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_PRETRANSFER_TIME, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_SIZE_UPLOAD:
fprintf(stream, "%.0f", data->progress.uploaded);
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_SIZE_UPLOAD, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_SIZE_DOWNLOAD:
fprintf(stream, "%.0f", data->progress.downloaded);
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_SPEED_DOWNLOAD:
fprintf(stream, "%.2f", data->progress.dlspeed);
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_SPEED_DOWNLOAD, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
case VAR_SPEED_UPLOAD:
fprintf(stream, "%.2f", data->progress.ulspeed);
if(CURLE_OK ==
curl_easy_getinfo(curl, CURLINFO_SPEED_UPLOAD, &doubleinfo))
fprintf(stream, "%.3f", doubleinfo);
break;
default:
break;
}
break;
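The two new write-out variables wired up above can be exercised straight from the command line; the host name below is only a placeholder:

  curl -s -o /dev/null \
       -w 'header: %{size_header} bytes, request: %{size_request} bytes\n' \
       http://localhost/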

View File

@@ -40,8 +40,6 @@
* ------------------------------------------------------------
****************************************************************************/
#include "urldata.h"
void WriteOut(struct UrlData *data);
void ourWriteOut(CURL *curl, char *out);
#endif

17
tests/Makefile.am Normal file
View File

@@ -0,0 +1,17 @@
all:
install:
curl:
@(cd ..; make)
test:
perl runtests.pl
quiet-test:
perl runtests.pl -s -a
clean:
rm -rf log
find . -name "*~" | xargs rm -f

69
tests/README Normal file
View File

@@ -0,0 +1,69 @@
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
The cURL Test Suite
Requires:
perl
Run:
'make test'. This invokes the 'runtests.pl' perl script. Edit the top
variables of that script in case you have some specific needs.
The script stops on the first test that fails. Use -a to prevent
the script from aborting on the first error. Run the script with -v for more
verbose output.
Use -s for shorter output, or pass a string with test numbers to run
specific tests only (like ./runtests.pl "3 4" to test 3 and 4 only)
Memory:
The test script will check that all allocated memory is freed properly IF
curl has been built with the MALLOCDEBUG define set. The script will
automatically detect if that is the case, and it will use the ../memanalyze
script to analyze the memory debugging output.
Logs:
All logs are generated in the logs/ subdirectory (it is emptied first
in the runtests.pl script)
Data:
All test data is put in the data/ subdirectory.
For each test there is a set of files, each with its own separate and
special purpose (a sketch of a complete test case follows this list).
Replace N with the test number:
nameN.txt: test description as displayed when run
commandN.txt: command line options for this test
protN.txt: the full dump of the protocol communication that curl is
expected to use when performing this test
replyN.txt: the full dump of what the server should reply to curl for this test.
If the final result that curl should've got is not in this
file, you can instead name the file replyN0001.txt. This enables
you to fiddle more. ;-)
stdoutN.txt: if this file is present, curl's stdout is compared against
this file to see that they're identical. If this is present,
curl will not be run with -o but instead all output is compared
against this file!
errorN.txt: if this file is present, it should contain the error number
curl is supposed to return when this test is run.
uploadN.txt: if this file is present, it should contain the same data as
the log/upload.N does, after a curl upload has been performed.
ftpdN.txt: this file may contain instructions on how to modify the behaviour
of the ftp server. It uses a simple syntax that is not yet
described here!
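As a sketch, a complete (hypothetical) HTTP test case number 99 could consist of:
  name99.txt:    simple HTTP GET
  command99.txt: http://%HOSTIP:%HOSTPORT/99
  reply99.txt:   the canned document the test HTTP server should return
  prot99.txt:    the request curl is expected to have issued to get it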
FIX:
* Make httpserver.pl work when we PUT without Content-Length:

1
tests/data/command1.txt Normal file
View File

@@ -0,0 +1 @@
http://%HOSTIP:%HOSTPORT/1

3
tests/data/command10.txt Normal file
View File

@@ -0,0 +1,3 @@
http://%HOSTIP:%HOSTPORT/we/want/10 -T data/command10.txt

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/

View File

@@ -0,0 +1,3 @@
ftp://%HOSTIP:%FTPPORT/ -P %HOSTIP

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/102

View File

@@ -0,0 +1,4 @@
ftp://%HOSTIP:%FTPPORT/a/path/103 -P -

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/a/path/103 --head

View File

@@ -0,0 +1,2 @@
ftp://userdude:passfellow@%HOSTIP:%FTPPORT/103 --use-ascii

View File

@@ -0,0 +1,2 @@
"ftp://%HOSTIP:%FTPPORT//path%20with%20%20spaces/and%20things2/106;type=A"

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/107 -T data/reply106.txt

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/CWD/STOR/RETR/108 -T data/reply106.txt -P -

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/109 -T data/reply106.txt --append

3
tests/data/command11.txt Normal file
View File

@@ -0,0 +1,3 @@
http://%HOSTIP:%HOSTPORT/want/11 -L

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/110 -C 20

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/110 -C 2000

View File

@@ -0,0 +1 @@
ftp://%HOSTIP:%FTPPORT/112 -T data/reply106.txt -C 40

View File

@@ -0,0 +1,2 @@
ftp://%HOSTIP:%FTPPORT/113

Some files were not shown because too many files have changed in this diff