Compare commits

...

249 Commits

Author SHA1 Message Date
Daniel Stenberg
bf9b9ca29d 7.10.3 commit 2003-01-14 12:42:26 +00:00
Daniel Stenberg
64f224bb22 more 2003-01-13 12:08:39 +00:00
Daniel Stenberg
285a8fe4d0 there is SOCKS support these days 2003-01-13 06:35:31 +00:00
Daniel Stenberg
3773d76dfd Steve Oliphant pointed out that test case 105 did not work anymore and this
was due to a missing fix for the password prompting
2003-01-10 16:19:32 +00:00
Daniel Stenberg
94c5c7bd6d added test 136 2003-01-09 16:48:51 +00:00
Daniel Stenberg
12cfc4c0b0 verify -u username: with ftp to use a blank password 2003-01-09 16:47:55 +00:00
Daniel Stenberg
9a2de6e6ee if userpwd is "username:", this now implies a blank password while only
"username" will cause libcurl to prompt for password. Bryan Kemp noticed.

test case 136 is added for this
2003-01-09 16:47:09 +00:00
Daniel Stenberg
2ede47b8c8 Wai (Simon) Liu provided the HTTP200ALIASES paragraph. 2003-01-09 15:04:55 +00:00
Daniel Stenberg
76e107506f Philippe Raoult's added note for HTTPHEADER 2003-01-09 14:58:54 +00:00
Daniel Stenberg
6f35ed51dc This fixed yet another connect problem with the multi interface and ipv4
stack. Kjetil Jacobsen reported and verified the fix.
2003-01-09 14:52:51 +00:00
Daniel Stenberg
c94ba66310 removed 2003-01-09 11:57:50 +00:00
Daniel Stenberg
a15133f5cf removed unused code 2003-01-09 11:50:34 +00:00
Daniel Stenberg
cc09e9d4c2 fix 2003-01-09 11:43:08 +00:00
Daniel Stenberg
16e0da2c4b call curl_multi_perform() correctly 2003-01-09 11:42:07 +00:00
Daniel Stenberg
ed22f75241 proper indent 2003-01-09 11:31:49 +00:00
Daniel Stenberg
ba25cad6e2 pass a file name to memanalyze to read from instead of using stdin 2003-01-09 11:26:57 +00:00
Daniel Stenberg
abb01123cb share.h is now a used header file 2003-01-09 11:19:51 +00:00
Daniel Stenberg
e2d249f8c5 fixed to deal with file names that contain colons, as in Windows 2003-01-09 11:03:02 +00:00
Daniel Stenberg
4a2ac166fa 7.10.3-pre4 2003-01-09 10:36:24 +00:00
Daniel Stenberg
5fab55383d rename the curl share error enum prefix 2003-01-09 10:26:29 +00:00
Daniel Stenberg
f152f23a68 Updated more, and now the API looks and possibly works almost like the
design document specifies. There is still no code inside that uses this.
2003-01-09 10:21:03 +00:00
Daniel Stenberg
24e78b3571 7+8 jan 2003 2003-01-09 09:53:08 +00:00
Daniel Stenberg
9a239edb52 updated to use the modified share-types 2003-01-08 15:50:52 +00:00
Daniel Stenberg
abcc5c5a82 cleaned up the share data types and prototypes to be more in line with what
the design draft mentioned and what I think fits
2003-01-08 15:50:06 +00:00
Daniel Stenberg
cb5ba675a7 mkdir() fix for win32 2003-01-08 15:04:42 +00:00
Daniel Stenberg
2288086695 nah, include test.h instead 2003-01-08 09:37:35 +00:00
Daniel Stenberg
61421b7a8f include curl.h without directory 2003-01-08 09:33:19 +00:00
Jean-Philippe Barette-LaPierre
6a7e53a7c7 fixed a very, very rare and very, very little memory leak 2003-01-08 02:27:47 +00:00
Daniel Stenberg
ca134d5522 Philippe Raoult's fix to handle wildcard certificate name checks 2003-01-07 16:33:11 +00:00
Daniel Stenberg
ec24efda74 Simon Liu's HTTP200ALIASES-patch! 2003-01-07 16:15:53 +00:00
Daniel Stenberg
7f0f10e498 stuff 2003-01-07 15:40:01 +00:00
Daniel Stenberg
aa5af100b4 clarified error code 19 2003-01-07 15:39:38 +00:00
Daniel Stenberg
37ae32f688 Only output valid filetime.
Return file-error if 550 is returned when trying MDTM
2003-01-07 11:25:44 +00:00
Daniel Stenberg
d0cffdec5d when sending an error message to the debugfunction, we append a newline so
that the output looks better
2003-01-07 11:23:52 +00:00
Daniel Stenberg
0f34521612 fixed the create_dir_hierarchy() to not use uninited memory, as noticed by
Matthew Blain.
2003-01-07 09:35:57 +00:00
Daniel Stenberg
e69362df22 Matthew Blain's improvements for debug builds 2003-01-07 09:31:45 +00:00
Daniel Stenberg
3de8f6f38e better ignore 2003-01-07 09:30:05 +00:00
Daniel Stenberg
5359bc8083 ignore lib504 too 2003-01-07 09:27:32 +00:00
Daniel Stenberg
eb6a14fe10 updated 2003-01-07 07:54:14 +00:00
Daniel Stenberg
2912537533 indent fix 2003-01-06 12:41:33 +00:00
Sterling Hughes
cfb32da198 fix bug (?) :-)
previously, if you called curl_easy_perform and then set the global dns
cache, the global cache wouldn't be used.  I don't see this really happening
in practice, but this code allows you to do it.
2003-01-06 06:17:15 +00:00
Daniel Stenberg
9b4f92130f return -1 even if SSL_pending() doesn't return non-zero, as we don't really
care how many bytes are readable NOW. Philippe Raoult reported the
bug in 7.10.3-pre3.
2002-12-29 16:27:31 +00:00
Daniel Stenberg
5a2ab686a6 Marc Herbert's suggestion: mention that insecure is ignored if cacert or capath
is used.
2002-12-29 16:23:52 +00:00
Daniel Stenberg
3b8583b014 example configure command line 2002-12-20 16:00:56 +00:00
Daniel Stenberg
ed29552b1e Use AM_MAINTAINER_MODE which thus makes less maintainer stuff in the default
makefile when --enable-maintainer-mode is not used.
2002-12-20 15:54:24 +00:00
Daniel Stenberg
a2ada3cf96 7.10.3-commit 2002-12-20 09:03:38 +00:00
Daniel Stenberg
88825a1187 fixes 2002-12-19 16:37:07 +00:00
Daniel Stenberg
264e7fc58b removed fruitless attempts to overload some targets 2002-12-19 16:36:35 +00:00
Daniel Stenberg
1698015e3c Curl_base64_decode() fixed by Matthew B 2002-12-19 16:02:51 +00:00
Daniel Stenberg
39dc14c002 Fixed the usage of SSL_read() to properly return -1 if the EWOULDBLOCK
situation occurs, which it previously didn't!

This was reported by Evan Jordan in bug report #653022.

Also, if ERROR_SYSCALL is returned from SSL_write(), include the errno number
in the error string for easier error detection.
2002-12-19 15:45:15 +00:00
Daniel Stenberg
04c499a5fc CURLOPT_DNS_USE_GLOBAL_CACHE is not thread-safe 2002-12-19 15:22:36 +00:00
Daniel Stenberg
efbe930a69 CURLE_HTTP_NOT_FOUND => CURLE_HTTP_RETURNED_ERROR 2002-12-18 16:51:02 +00:00
Daniel Stenberg
747f87f61e Removed weird special multi interface condition that caused bug report
#651464.
2002-12-17 10:05:00 +00:00
Daniel Stenberg
5a4c56fc44 don't install the test programs 2002-12-17 09:40:13 +00:00
Daniel Stenberg
81f45ba92a writefunction data is not zero terminated 2002-12-16 17:33:21 +00:00
Daniel Stenberg
a5dc4e32f2 removed junk 2002-12-16 15:32:37 +00:00
Daniel Stenberg
2b839853ec Added test case 504, using multi interface and a local proxy without anything
listening on the port we use.
2002-12-16 15:30:10 +00:00
Daniel Stenberg
66b6cd68ed better desc 2002-12-16 15:05:31 +00:00
Daniel Stenberg
0ef3d90838 mistake, this only requires http 2002-12-16 14:50:10 +00:00
Daniel Stenberg
5cc50f9b27 the hostip.c commit 2002-12-16 11:40:57 +00:00
Daniel Stenberg
e879e26a5b EAGAIN on older (correct) glibc versions indicates a problem and not the need
for a bigger buffer, and this is indeed badness for us. Making this work
on both old and new glibc versions requires an ugly loop that in its worst
form causes 45 bad loops when using the correct glibc and a non-resolving
host name... :-/

We want a better fix. Badly.
2002-12-16 11:33:44 +00:00
Daniel Stenberg
96d84150e1 changes from last week 2002-12-16 10:55:18 +00:00
Daniel Stenberg
2aa0c6c488 cut off -O properly when building for debug
setup the Makefile in tests/libtest/
2002-12-16 10:31:25 +00:00
Daniel Stenberg
811138386f documented the %-variables 2002-12-13 16:25:39 +00:00
Daniel Stenberg
c433cf7459 fixed another space issue 2002-12-13 16:24:57 +00:00
Daniel Stenberg
e0d6ebc2f2 please mr CVS ignore these 2002-12-13 16:24:04 +00:00
Daniel Stenberg
4938991ab8 set up arg2 to point to argv[2] to be used at will by programs 2002-12-13 16:22:57 +00:00
Daniel Stenberg
13722f536e added 503 2002-12-13 16:22:17 +00:00
Daniel Stenberg
57f0e3292d used this to verify bug report 651460 2002-12-13 16:21:18 +00:00
Daniel Stenberg
da5ae565ab added support for CONNECT, both good and bad 2002-12-13 16:20:07 +00:00
Daniel Stenberg
87c5066242 test case 503 entered the dir 2002-12-13 16:17:27 +00:00
Daniel Stenberg
b528bde470 conn->bits.tcpconnect now keeps track of whether this connection is connected
or not
2002-12-13 16:15:19 +00:00
Daniel Stenberg
57572e550f include files without the curl/ to reduce the risk of us including the wrong
set of include files during tests
2002-12-13 14:14:35 +00:00
Daniel Stenberg
3aea0d3d68 Evan Jordan's fix for a memory leak. Bug report 650989. 2002-12-13 14:08:49 +00:00
Daniel Stenberg
9ae920c1b6 make a little work-around for file:// in _is_connected() and voila, now the
multi interface works with file:// URLs fine (previously it crashed). This
won't make it work on Windows though...
2002-12-13 13:47:58 +00:00
Daniel Stenberg
dff406a360 one slash too many 2002-12-13 13:41:28 +00:00
Daniel Stenberg
d346ba5c3c lib502.c for multi interface tests on a single URL without select() 2002-12-13 13:40:25 +00:00
Daniel Stenberg
978541adc2 test 502, multi interface with file:// 2002-12-13 13:39:39 +00:00
Daniel Stenberg
637bce2707 bail out on crap received, makes test case 402 *NOT* ruin the test series
anymore!
2002-12-12 18:07:10 +00:00
Daniel Stenberg
07e3dc2ee2 missing space added, now runs old tests fine again 2002-12-12 16:46:45 +00:00
Daniel Stenberg
ead065d803 remove test piece 2002-12-12 13:44:26 +00:00
Daniel Stenberg
0150bff7b4 make ftps and https invoke both necessary servers 2002-12-12 13:42:21 +00:00
Daniel Stenberg
0f493b6038 fixes 2002-12-12 13:40:16 +00:00
Daniel Stenberg
f26b709c50 link the test tools this way instead 2002-12-12 13:39:02 +00:00
Daniel Stenberg
ae10d9cf22 no more 2002-12-12 13:36:50 +00:00
Daniel Stenberg
81af9674ed corrected 2002-12-12 12:49:29 +00:00
Daniel Stenberg
b63df7991a new subdir added 'libtest' 2002-12-12 12:20:33 +00:00
Daniel Stenberg
a79990465c supports the new 'tool' and 'server' tags 2002-12-12 12:20:06 +00:00
Daniel Stenberg
ad6bd530ac describe the new sections added for (better) libcurl testing 2002-12-12 12:15:02 +00:00
Daniel Stenberg
c1b369fd4c 500 + 501 added 2002-12-12 12:13:18 +00:00
Daniel Stenberg
01fcd3c2d5 run tiny specific libcurl-testing tools 2002-12-12 12:12:01 +00:00
Daniel Stenberg
7196d784d3 The first ever attempts to do pure libcurl test cases 2002-12-12 12:11:16 +00:00
Daniel Stenberg
0f0aaf51e0 Deal with HTML where ' is used instead of "
Cut off name from option
2002-12-12 11:43:59 +00:00
Daniel Stenberg
b5f493c55a moved the includes to outside the extern "C" stuff
decreased the write buffer size to 16KB to perform a lot better on Windows(!)
2002-12-11 11:42:40 +00:00
Daniel Stenberg
0aa031beb9 recent fluff 2002-12-10 13:11:24 +00:00
Daniel Stenberg
db6ff224f8 The initial HTTP request can now be sent in multiple parts, as part of the
regular transfer process. This required some new tweaks, like for example
we need to be able to tell the transfer loop to not chunky-encode uploads
while we're transferring the rest of the request...
2002-12-10 13:10:00 +00:00
Daniel Stenberg
b3c7cd61f3 send_buffer is no more here 2002-12-10 13:08:22 +00:00
Daniel Stenberg
9ae05c4d91 added test56, nearly 100KB big! 2002-12-10 13:01:05 +00:00
Daniel Stenberg
264e6f6efd Test case for sending insanely big HTTP requests. Mainly done this way to
make sure that it isn't all sent off in one single send() but instead
really tests the multiple-part-send logic.
2002-12-10 13:00:32 +00:00
Daniel Stenberg
ec7bccf671 more logging, now logs the full response too, basic support for dealing
with chunked transfer-encoding uploads added
2002-12-10 12:59:16 +00:00
Daniel Stenberg
49f75ee8ce A normal POST now provides data to the main transfer loop via the usual
read callback, and thus won't put a lot of stress on the request sending
code (which currently does an ugly loop).
2002-12-09 16:05:57 +00:00
Daniel Stenberg
4bcc866c52 The fread() callback pointer and associated pointer is now stored in the
connectdata struct instead, and is no longer modified within the 'set' struct
as previously (which was a really BAAAD thing).
2002-12-09 15:37:54 +00:00
Daniel Stenberg
c65e088caf Added a default headers section and also made some minor details more
up-to-date with recent changes.
2002-12-09 14:39:01 +00:00
Daniel Stenberg
6ca4116555 better errno include and no extern 2002-12-05 19:39:17 +00:00
Daniel Stenberg
f6cdb820af read and write as much as possible until end of data or EWOULDBLOCK before
returning back to the select() loop. Consider this a test so far.
2002-12-05 14:26:30 +00:00
Daniel Stenberg
081e5a82ff deal with spaces in name and value tags a lot better! 2002-12-05 12:54:08 +00:00
Daniel Stenberg
2ad2a4bd9f changed proto for Curl_krb_kauth() 2002-12-05 11:26:20 +00:00
Daniel Stenberg
645e700da3 Solaris needs errno as an extern int. 2002-12-05 11:25:36 +00:00
Daniel Stenberg
92aea29a30 make WIN32 defined for Borland properly, as told by Alexander J. Oss 2002-12-04 11:06:17 +00:00
Daniel Stenberg
e1c01af929 called SSLCERTS now 2002-12-04 09:53:09 +00:00
Daniel Stenberg
7ef749497d 7.10.3-pre2 2002-12-04 09:09:26 +00:00
Daniel Stenberg
d72aa49126 The waiting for the 226 or 250 line expected to come after a transfer is
complete is now only made for 60 seconds and if no data was received during
those 60 seconds, we store a special error message (preparing to make this
a special error code) as this most likely means that the control connection
has died while we were transferring data.
2002-12-04 08:56:55 +00:00
Daniel Stenberg
e92bd312ec missing } 2002-12-03 12:41:10 +00:00
Daniel Stenberg
b097c2cfb0 clarified 2002-12-03 12:40:12 +00:00
Daniel Stenberg
a39cdc80b7 Jeff pointed out this flaw in the example 2002-12-03 12:34:43 +00:00
Daniel Stenberg
a47250810e -@ is no longer an official shortcut for --create-dirs 2002-12-03 11:13:12 +00:00
Daniel Stenberg
1f50f3031f don't officially use -@ for --create-dirs, only use the long form 2002-12-03 11:12:18 +00:00
Daniel Stenberg
75145dd753 clarify the DEBUGFUNCTION data not being zero terminated 2002-12-03 10:37:20 +00:00
Daniel Stenberg
d0b97f7e1f Curl_GetFTPResponse() takes a different set of parameters and now return a
proper CURLcode. The default timeout for reading one response is now also
possible to change while running.
2002-12-03 10:25:31 +00:00
Daniel Stenberg
199a0311e2 updated to reality 2002-12-03 09:32:57 +00:00
Daniel Stenberg
fa446f860f Nicolas Berloquin's fix of his previous dir creation patch 2002-12-03 08:07:52 +00:00
Daniel Stenberg
7a74303f3c Nicolas Berloquin's description of his -@/--create-dirs fix 2002-12-02 14:40:54 +00:00
Daniel Stenberg
7d9eabb981 Nicolas Berloquin's added code for dealing with -@/--create-dirs to create
the necessary directories as specified with -o.
2002-12-02 14:37:59 +00:00
Daniel Stenberg
ff5308a5af if the PWD reply parser failed, we leaked memory 2002-12-02 07:18:24 +00:00
Daniel Stenberg
3f8ba3a986 clarified SSL_VERIFYPEER and SSL_VERIFYHOST a bit, thanks to Soren Spies 2002-12-02 06:47:16 +00:00
Daniel Stenberg
4a555de1b2 wrapped the line for PRIVATE nicer 2002-12-01 11:23:06 +00:00
Daniel Stenberg
d27e4a08f9 more to ignore 2002-12-01 11:21:36 +00:00
Daniel Stenberg
bf678a1ca9 only use Content-Length: header if not transferring data chunked 2002-12-01 11:20:41 +00:00
Daniel Stenberg
13a903de28 mention CVS-INFO for more info when checked out from CVS
removed old section about problems with old autoconfs, I don't think that
happens anymore
2002-11-30 16:00:10 +00:00
Daniel Stenberg
a3c14c031e stuff done since the 7.10.2 release 2002-11-29 08:29:21 +00:00
Daniel Stenberg
e90d528026 let the Curl_FormReader() return 0 when it reaches end of data so that the
chunked transfer works
2002-11-29 08:12:20 +00:00
Daniel Stenberg
d64dd77993 fix the hash init to call the correct dns cleanup function 2002-11-28 15:48:54 +00:00
Daniel Stenberg
113850a748 added compareheader proto 2002-11-28 15:48:23 +00:00
Daniel Stenberg
1c49a00d64 compareheader() was moved over to http.c and got a Curl_ prefix
The chunked transfer upload never stopped due to a silly add before we checked
for >0!
2002-11-28 15:46:22 +00:00
Daniel Stenberg
eef6c83503 Moved the compareheader function into this file and added Curl_ prefix
We now check if the chunked transfer-encoding header has been added "by force"
and if so, we enable the chunky upload!
2002-11-28 15:45:06 +00:00
Daniel Stenberg
ceb5648eb7 mention how to generate patches 2002-11-28 14:07:14 +00:00
Daniel Stenberg
a0eadb76ea bad use of AM_CONDITIONAL removed and now configure runs better when used
with --disable-ipv6 --without-zlib
2002-11-28 13:29:42 +00:00
Daniel Stenberg
065852e46c execve.net is an official download mirror in HK 2002-11-27 11:59:52 +00:00
Daniel Stenberg
e5e2fb8274 Dan Becker fixed a minor memory leak on persistent connections using
FOLLOWLOCATION and CURLOPT_USERPWD.
2002-11-26 17:32:15 +00:00
Daniel Stenberg
0210b3c893 removed extra space from trace output 'Send data' 2002-11-26 17:13:30 +00:00
Daniel Stenberg
7df5677b46 fixed Curl_freeaddrinfo() to only free addrinfo, and added Curl_freednsinfo()
for freeing single dns cache entries
2002-11-26 09:41:54 +00:00
sm
2e71876b28 Removed MFC dependency in Release Build when using VC++ IDE 2002-11-26 02:12:27 +00:00
Daniel Stenberg
11576b1142 Nedelcho Stanev's work-around for SFU 3.0 2002-11-24 19:30:21 +00:00
Daniel Stenberg
ce011b8a2d bug fix for the problem Juan Ignacio Hervás discovered today 2002-11-22 16:59:40 +00:00
Daniel Stenberg
12cfb4f7ee this fix seems to make the '305 306' test case combination to run ok finally! 2002-11-22 13:48:24 +00:00
Daniel Stenberg
9e1123debe don't use curl.haxx.se 2002-11-22 07:39:15 +00:00
Daniel Stenberg
c7354142c0 dead code removal 2002-11-21 15:11:26 +00:00
Daniel Stenberg
dee84f448f new name, supports <textarea> and the <option> tags within <select> better 2002-11-21 15:09:04 +00:00
Daniel Stenberg
1607711603 4.12 Why do I get "certificate verify failed" ? 2002-11-20 19:17:43 +00:00
Daniel Stenberg
8bca5e05b8 Kjetil Jacobsen's patch that introduces CURLOPT_PRIVATE and CURLINFO_PRIVATE
for storage and retrieval of private data in the curl handle.
2002-11-20 19:11:22 +00:00
Daniel Stenberg
f68505ee23 Karol Pietrzak pointed out that simply including the include dir in --cflags
is not a good thing, as recent gccs for example complain if it is /usr/include

Right now, we just output "" until we think of something better.
2002-11-20 19:04:34 +00:00
Daniel Stenberg
d2174da641 7.10.2 2002-11-18 22:10:06 +00:00
Daniel Stenberg
255b1e68d0 as requested, CURLE_OPERATION_TIMEDOUT is now the same as
CURLE_OPERATION_TIMEOUTED
2002-11-18 21:58:46 +00:00
Daniel Stenberg
fbee6b87f5 fflush() the trace stream on each call 2002-11-15 14:15:28 +00:00
Daniel Stenberg
3836a70f97 removed nroff mistake 2002-11-15 14:13:46 +00:00
Daniel Stenberg
e0ec9fa294 no more dllinit.o usage 2002-11-15 14:13:05 +00:00
Daniel Stenberg
80fe50590f recent fixes 2002-11-15 14:11:45 +00:00
Daniel Stenberg
ae18d1c55a attempts to filter off optimize flags when --enable-debug is used 2002-11-15 14:11:20 +00:00
Daniel Stenberg
75194373e0 language 2002-11-14 09:55:00 +00:00
Daniel Stenberg
f3875048f6 clarified that strings need to be kept around until the handle is closed or
until the pointers are set to another value
2002-11-14 09:54:10 +00:00
Daniel Stenberg
210af986ad dllinit.c is removed 2002-11-13 22:16:44 +00:00
Daniel Stenberg
c03044f492 not used and we don't have permission to distribute this! 2002-11-13 22:16:16 +00:00
Daniel Stenberg
522b85ae21 4.11 Why does my HTTP range requests return the full document? 2002-11-12 20:00:02 +00:00
Daniel Stenberg
208e56dbe9 removed dllinit.c as MSVC doesn't need it 2002-11-12 08:15:38 +00:00
Daniel Stenberg
42acb00c81 moved the bools in the connectdata struct into the substruct named
ConnectBits where the other bools already are
2002-11-11 23:03:03 +00:00
Daniel Stenberg
ca6e770837 The test for DNS cache entries left locked is now only built if
AGGRESIVE_TEST is also defined, as an addition to MALLOCDEBUG. It doesn't
work for multi interface usage and should only be used with careful
consideration.
2002-11-11 22:51:09 +00:00
Daniel Stenberg
775968003c changed header 2002-11-11 22:41:45 +00:00
Daniel Stenberg
323d3e9b5d include SSLCERTS and not UPGRADE. We leave UPGRADE a while in CVS, but it
should be removed soonish.
2002-11-11 22:38:32 +00:00
Daniel Stenberg
16f9755e73 UPGRADE was renamed into this "SSLCERTS" 2002-11-11 22:37:59 +00:00
Daniel Stenberg
66eb98bb0a unlock dns cache entries with a function call instead of a variable fiddle 2002-11-11 22:36:00 +00:00
Daniel Stenberg
299546f5c0 Dave Halbakken added curl_version_info 2002-11-11 21:57:14 +00:00
Daniel Stenberg
7be9b4c418 transfer-encoding: chunked was implemented 2002-11-11 10:00:48 +00:00
Daniel Stenberg
03c22b4576 Now supports "Transfer-Encoding: chunked" for HTTP PUT operations where the
size of the uploaded file is unknown.
2002-11-11 08:40:37 +00:00
Daniel Stenberg
ef749fa9ce Bug report #634625 identified how curl returned timeout immediately when
CURLOPT_CONNECTTIMEOUT was used and provided a fix.
2002-11-07 08:45:10 +00:00
Daniel Stenberg
a23c92596e recent changes 2002-11-06 08:30:08 +00:00
Daniel Stenberg
abb1497c98 output all test case numbers with three digits 2002-11-06 08:29:48 +00:00
Daniel Stenberg
7a8594da43 language fix 2002-11-06 08:29:26 +00:00
Daniel Stenberg
cbf28daed9 Lehel Bernadt's fix to prevent debug messages from being sent on errors when
debug wasn't enabled
2002-11-05 11:11:10 +00:00
Daniel Stenberg
0ff1ca30c3 ipv4-fixes for the new Curl_dns_entry struct and Curl_resolv() proto 2002-11-05 11:07:49 +00:00
Daniel Stenberg
2cff251863 Curl_resolv() now returns a different struct, and it contains a reference
counter so that the caller needs to decrease that counter when done with
the returned data.

If compiled with MALLOCDEBUG I've added some extra checking that the counter
is decreased before a handle is closed etc.
2002-11-05 10:51:41 +00:00
Daniel Stenberg
73d996bf26 Soren Spies filled in some info about Mac OS X 10.2 2002-10-31 13:25:03 +00:00
Daniel Stenberg
5bc78cb724 Disabling the DNS cache (by setting the timeout to 0) made libcurl leak
memory. Avery Fay brought the example code that proved this.
2002-10-31 13:09:11 +00:00
Daniel Stenberg
cdba92ac3c when using checkprefix(), the first argument must be the prefix! 2002-10-28 22:19:23 +00:00
Daniel Stenberg
6d28f92ffe Transfer-Encoding: needs 17 bytes passed, not 18 2002-10-28 21:52:27 +00:00
Daniel Stenberg
01387f42c5 kromJx@crosswinds.net's fix that now uses checkprefix() instead of
strnequal() when the third argument was strlen(first argument) anyway.
This makes it less prone to errors. (Slightly edited by me)
2002-10-28 21:52:00 +00:00
Daniel Stenberg
8f52b731f4 the malloc debug system assumes single thread 2002-10-28 21:05:14 +00:00
Daniel Stenberg
d442088ed3 kromJx@crosswinds.net fixed typos 2002-10-28 20:58:28 +00:00
Daniel Stenberg
22a323890a works now with autoconf 2.54 2002-10-28 20:39:23 +00:00
Daniel Stenberg
163bba1410 Kevin Roth's patch that checks for the CA cert file at two more places if the
--cacert option is not used.

1. An environment variable named CURL_CA_BUNDLE may contain the full file
name to the file.

2. On Windows, the cert file may be named curl-ca-bundle.crt and put in the
same dir as curl is located (or the CWD) and curl will then use that file
instead.
2002-10-28 19:49:58 +00:00
Daniel Stenberg
db1c618fcf Kevin Roth's patch. $(RM) instead of @erase, and it also passes on the
USE_SSLEAY variable
2002-10-28 19:39:58 +00:00
Daniel Stenberg
01bdfa7b6d Kevin Roth's fixes that use $(RM) instead of @erase and modified SSL version 2002-10-28 19:38:46 +00:00
Daniel Stenberg
6a88c8d845 prevent compiler warning 2002-10-28 19:21:30 +00:00
Daniel Stenberg
b8a6913e09 prevent compiler warnings 2002-10-28 19:20:59 +00:00
Daniel Stenberg
744d8c1006 fixes 2002-10-28 19:17:49 +00:00
Daniel Stenberg
c2e2c98d81 fixed the cygwin check for -no-undefined 2002-10-23 14:45:28 +00:00
Daniel Stenberg
3fa353a2d3 improved the check for an ISO cpp by checking specifically for __BORLANDC__
too, as Emiliano Ida has confirmed it to work
2002-10-23 14:15:29 +00:00
Daniel Stenberg
c27c9f80d2 kromJx@crosswinds.net made it run properly with stunnel >=4.0 2002-10-23 14:07:34 +00:00
Daniel Stenberg
b5a74715cf bad headers can come in two kinds, we either treat everything as one big
badly assumed header, or we think that part of the buffer is a bad header
and the rest is treated as a normal body part
2002-10-23 13:48:37 +00:00
Daniel Stenberg
13ee2901f4 another week, 7 fixes 2002-10-21 14:04:26 +00:00
Daniel Stenberg
32c03eadd6 glibc 2.2.93 gethostbyname_r() no longer returns ERANGE if the given buffer
size isn't big enough. For some reason they now return EAGAIN.

Redhat 8 ships with this glibc version.
2002-10-21 13:20:30 +00:00
Daniel Stenberg
0fa512f26d Nikita Schmidt's fix to debian bug report #165382. This is verified with
the new test case 55.
2002-10-21 12:07:02 +00:00
Daniel Stenberg
219d88518c Added test 55, follow location with a single slash in the original path.
This caused curl 7.10.1 to crash.
2002-10-21 12:02:44 +00:00
Daniel Stenberg
ecf3aee43a check for cygwin and if built on that, enable the no-undefined option for
libtool. Otherwise disable it.
2002-10-21 06:49:42 +00:00
Daniel Stenberg
7f08cab73e test 54 added, blank Location: field 2002-10-21 06:18:51 +00:00
Daniel Stenberg
c4e9ef199e --enable-debug now checks if gcc is used before it sets all those gcc-
specific options. This should make this option work on more platforms with
other compilers.
2002-10-21 05:52:05 +00:00
Daniel Stenberg
9e612b5550 make very sure that we return 'done' properly when a transfer is done, as
otherwise the multi interface gets problems
2002-10-18 15:28:33 +00:00
Daniel Stenberg
203633d34d return call_multi when we follow a location 2002-10-18 15:27:49 +00:00
Daniel Stenberg
45bd009bb1 if we found no string on the Location: line, don't try to follow it 2002-10-18 13:51:00 +00:00
Daniel Stenberg
ee656415c4 moved comments to first column and automake stopped complaining 2002-10-18 07:55:38 +00:00
Daniel Stenberg
156aad198f Make the COOKIESESSION work better by creating a list of cookie files
when given in the curl_easy_setopt() and then parse them all on the first
curl_easy_perform() call instead.
2002-10-17 07:10:39 +00:00
Daniel Stenberg
b1ffb79a50 junk cookies test53 added 2002-10-17 07:03:26 +00:00
Daniel Stenberg
d6654bfe00 mucho fixed 2002-10-16 09:53:38 +00:00
Daniel Stenberg
eefdd67d22 Added new mirror 2002-10-15 14:18:31 +00:00
Daniel Stenberg
86a86d7afd Andrés García's corrections 2002-10-15 08:39:30 +00:00
Daniel Stenberg
b6dac2b484 ignore .ps and .pdf files too 2002-10-14 07:47:40 +00:00
Daniel Stenberg
e6367abae9 generate and include PDF versions of the docs in the release archive 2002-10-14 07:39:49 +00:00
Daniel Stenberg
fc4d1d9a60 my first take at a memory leak detection document 2002-10-13 10:34:33 +00:00
Daniel Stenberg
94bae20776 some more 2002-10-13 10:28:38 +00:00
Daniel Stenberg
bb8c8d273c added more info 2002-10-13 10:18:10 +00:00
Daniel Stenberg
ee600ace37 three silly bugs 2002-10-12 12:35:30 +00:00
Daniel Stenberg
da86e32eb4 -y and -Y were switched in the examples 2002-10-12 12:14:09 +00:00
Daniel Stenberg
b5bbc04ad1 return error properly when a non-blocking connect fails using the multi
interface
2002-10-12 11:18:08 +00:00
Daniel Stenberg
265c58611f When we receive a "bad header" we must make sure not to write down the data part
as well, as then we write the same data twice.
2002-10-11 20:55:08 +00:00
Daniel Stenberg
25c973a39e fix bad free() that caused segfault 2002-10-11 17:44:36 +00:00
Daniel Stenberg
123c7b32db 7.10.1 commit 2002-10-11 13:25:08 +00:00
Daniel Stenberg
e2d8e2c4ae more 2002-10-10 08:04:26 +00:00
Daniel Stenberg
701509d322 Jeff Lawson fixed a few problems with connection re-use that remained when
you set CURLOPT_PROXY to "".
2002-10-10 08:00:49 +00:00
Daniel Stenberg
c3cc616264 Junk data could get inserted when saving/getting HTTP headers, as discovered
by Craig Davison. Now we deal with the 'nread' variable correctly between
each header line.
2002-10-09 13:03:51 +00:00
Daniel Stenberg
91b84b89e4 failf() now sends the text to the debug function callback 2002-10-08 16:10:37 +00:00
Daniel Stenberg
017ec204a9 set version and date 2002-10-08 13:30:34 +00:00
Daniel Stenberg
8dbfecd153 added --ca 2002-10-08 13:30:15 +00:00
Daniel Stenberg
512db1bc54 Added timeout support for the non-windows version. 2002-10-08 13:03:26 +00:00
Daniel Stenberg
e157aabd4d rewrote the --with-zlib check, based on Albert Chin's input. 2002-10-08 12:53:04 +00:00
Daniel Stenberg
db2fea448c 7.10 not 7.9.9 (there never was one named that) 2002-10-08 09:24:21 +00:00
Daniel Stenberg
dd82d69b8c 5.7 Link errors when building libcurl on Windows! 2002-10-08 07:16:17 +00:00
Daniel Stenberg
27328281b7 more blurb 2002-10-08 07:11:34 +00:00
Daniel Stenberg
51d205b267 Kevin's fix to use DESTDIR instead of prefix on make install 2002-10-08 06:50:10 +00:00
Daniel Stenberg
84800914f6 added libcurl-the-guide to the dist 2002-10-07 18:23:52 +00:00
Daniel Stenberg
9b296e65bd Following locations properly, if told to do so. 2002-10-07 13:38:59 +00:00
Daniel Stenberg
5f649a1649 Move the URL concat code to Curl_follow(), and added a proto for that
function. For Location: following.
2002-10-07 13:38:34 +00:00
Daniel Stenberg
daea056210 Kevin Roth pointed out that 'make install' failed if built outside the
sourcedir if we're not using $(srcdir) properly.
2002-10-07 09:04:50 +00:00
Daniel Stenberg
30c0db06bd Kevin's update 2002-10-07 07:38:33 +00:00
Daniel Stenberg
91168c005c fixes since 7.10 2002-10-04 14:27:31 +00:00
Daniel Stenberg
cfa0054077 The -no-undefined flag is CRUCIAL for this to build fine on Cygwin. If we
find a case in which we need to remove this flag, we should most likely
write a configure check that detects when this flag is needed and when it's
not.
2002-10-04 14:26:10 +00:00
Daniel Stenberg
3d5820648b as Ralph Mitchell pointed out, the Location: following code needs some
basic ./ and ../ strip-off understanding, and this change introduces just that.
Test cases 49 - 52 test this.
2002-10-04 14:15:01 +00:00
Daniel Stenberg
d08df97fe5 new redirect tests with ./ and ../ 2002-10-04 14:06:12 +00:00
Daniel Stenberg
fd6624a058 Kevin Roth's patch for his new packaging 2002-10-04 08:22:57 +00:00
Daniel Stenberg
8aa41dd04b Bjorn Wiren pointed out that INSTALL was missing in the tarballs 2002-10-03 12:50:48 +00:00
Daniel Stenberg
e890113fc6 --with-libz and --without-libz are now supported 2002-10-01 11:16:36 +00:00
125 changed files with 5330 additions and 3088 deletions

CHANGES (400 changed lines)

@@ -7,6 +7,403 @@
Changelog
Version 7.10.3 (14 Jan 2003)
Daniel (10 Jan 2003)
- Steve Oliphant pointed out that test case 105 did not work anymore and this
was due to a missing fix for the password prompting.
Version 7.10.3-pre6 (10 Jan 2003)
Daniel (9 Jan 2003)
- Bryan Kemp pointed out that curl -u could not provide a blank password
without prompting the user. It can now. -u username: makes the password
empty, while -u username makes curl prompt the user for a password.
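  (Editor's note: a minimal libcurl sketch of the same blank-password convention,
  under the assumption that the handle setup and the FTP URL below are purely
  illustrative and not from the changelog. A user name with a trailing colon,
  such as "daniel:", means an empty password.)

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* hypothetical URL; "daniel:" is user "daniel" with a blank password */
      curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "daniel:");
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }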
- Kjetil Jacobsen found a remaining connect problem in the multi interface on
ipv4 systems (Linux only?), that I fixed and Kjetil verified that it fixed
his problems.
- memanalyze.pl now reads a file name from the command line, and no longer
takes the data on stdin as before.
Version 7.10.3-pre5 (9 Jan 2003)
Daniel (9 Jan 2003)
- Fixed tests/memanalyze.pl to work with file names that contain colons (as on
Windows).
- Kjetil Jacobsen quickly pointed out that lib/share.h was missing...
Version 7.10.3-pre4 (9 Jan 2003)
Daniel (9 Jan 2003)
- Updated lib/share.c quite a bit to match the design document at
http://curl.haxx.se/dev/sharing.txt a lot more.
I'll try to update the document soonish. share.c is still not actually used
by libcurl, but the API is slowly getting there and we can start
implementing code that takes advantage of this system.
Daniel (8 Jan 2003)
- Updated share stuff in curl/curl.h, including data types, structs and
function prototypes. The corresponding files in lib/ were also modified
of course to remain compilable. Based on input from Jean-Philippe and also
to make it more in line with the design document.
- Jean-Philippe Barrette-LaPierre patched a very trivial memory leak in
curl_escape() that would happen when realloc() returns NULL...
- Matthew Blain provided feedback to make the --create-dirs stuff build
properly on Windows.
- Fixed the #include in tests/libtest/first.c as Legoff Vincent pointed out.
Daniel (7 Jan 2003)
- Philippe Raoult provided a patch that now makes libcurl properly support
wildcard checks for certificate names.
- Simon Liu added CURLOPT_HTTP200ALIASES, to let an application set other
strings recognized as "HTTP 200" to allow http-like protocols to get
downloaded fine by curl.
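  (Editor's note: a rough sketch of how an application might use the new option.
  The URL and the "ICY 200 OK" alias are assumed examples of the kind of
  non-HTTP status line this is meant for, not taken from the changelog.)

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* treat this SHOUTcast-style status line as if it were "HTTP/1.0 200 OK" */
      struct curl_slist *aliases = curl_slist_append(NULL, "ICY 200 OK");
      curl_easy_setopt(curl, CURLOPT_URL, "http://radio.example.com/stream");
      curl_easy_setopt(curl, CURLOPT_HTTP200ALIASES, aliases);
      curl_easy_perform(curl);
      curl_slist_free_all(aliases);
      curl_easy_cleanup(curl);
    }
    return 0;
  }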
- Now using autoconf 2.57 and automake 1.7.2
- Doing "curl -I ftp://domain/non-existing-file" still outputed a date!
Wayne Haigh reported.
- The error message is now written properly with a newline in the --trace
file.
Daniel (6 Jan 2003)
- Sterling Hughes fixed a possible bug: previously, if you called
curl_easy_perform and then set the global dns cache, the global cache
wouldn't be used. Pointed out by Jean-Philippe Barrette-LaPierre.
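  (Editor's note: a small sketch of the scenario this fix covers, with an
  assumed illustrative URL: enabling the global DNS cache only after a first
  curl_easy_perform() and expecting it to be honoured on later transfers.)

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);              /* first transfer, no global cache */
      /* enabling the global DNS cache only at this point used to be ignored */
      curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 1L);
      curl_easy_perform(curl);              /* second transfer should now use it */
      curl_easy_cleanup(curl);
    }
    return 0;
  }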
- Matthew Blain fixed the VC6 libcurl makefile to include better debug data
on debug builds.
Daniel (27 Dec 2002)
- Philippe Raoult reported a bug with HTTPS connections which I evidently
added in my 19 dec fix. I corrected it.
Daniel (20 Dec)
- Idea from the Debian latest patch: use AM_MAINTAINER_MODE in the configure
script to make the default makefile less confusing "to the casual
installer".
Version 7.10.3-pre3 (20 Dec)
Daniel (19 Dec)
- Matthew Blain patched the Curl_base64_decode() function.
- Evan Jordan reported in bug report #653022 that the SSL_read() usage was
wrong, and it certainly was. It could lead to curl using too much CPU due to
a stupid loop.
Daniel (18 Dec)
- As suggested by Margus Freudenthal, CURLE_HTTP_NOT_FOUND was renamed to
CURLE_HTTP_RETURNED_ERROR since it is returned on any >= 400 code when
CURLOPT_FAILONERROR is set.
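  (Editor's note: a hedged sketch of how an application typically sees the
  renamed error code; the URL is an assumed placeholder.)

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/missing");
      curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);  /* fail on HTTP >= 400 */
      CURLcode res = curl_easy_perform(curl);
      if(res == CURLE_HTTP_RETURNED_ERROR)              /* was CURLE_HTTP_NOT_FOUND */
        fprintf(stderr, "server returned an HTTP error code\n");
      curl_easy_cleanup(curl);
    }
    return 0;
  }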
Daniel (17 Dec)
- Bug report #651464, filed by Christopher Palmer, provided example
source code using the multi interface that hung when trying to connect to a
proxy on a localhost port where no proxy was listening. This bug was not
repeatable on libcurls that were IPv6-enabled.
Daniel (16 Dec)
- Christopher Palmer also noticed what Vojtech Janota already was
experiencing: The attempted name resolve fix for glibc 2.2.93 caused libcurl
to crash when used on some older glibc versions. The problem is of course
the silliness of the 2.2.93. I committed a fix that hopefully should make
the binary run fine on either one of the versions, even though the solution
is not as nice as I'd like it to be.
Daniel (13 Dec)
- Bug report #651460 by Christopher R. Palmer showed that when using libcurl
to for example go over a proxy on localhost, it would attempt to connect
through the proxy TWICE.
I added test case 503 with which I managed to repeat this problem and I
fixed the code to not re-attempt any connects (which also made it a nicer
fix for the #650941 bug mentioned below).
The sws server was extended to deal with CONNECT in order to make test
case 503 do good.
- Evan Jordan posted bug report #650989 about a memory leak in the public key
retrieving code. He provided a suggested fix and I merely applied it!
- Bug report #650941, posted by Christopher R. Palmer identified a problem
with the multi interface and getting file:// URLs. This was now fixed and
test case 502 was added to verify this.
Daniel (12 Dec)
- Test case 500 and 501 are the first ever libcurl test cases that run.
- Made "configure --enable-debug" cut off all -O* options to the compiler
- Finally fixed the test suite's ftp server so that test case 402 doesn't
cause the following test case to fail anymore!
Daniel (11 Dec)
- CURL_MAX_WRITE_SIZE is now decreased to 16KB since it makes the Windows
version perform uploads much faster!!! RBramante did lots of research on
this topic.
- Fixed the #include in curl/curl.h to include the other files outside the
extern "C" scope.
Daniel (10 Dec)
- Moved around and added more logic:
First, POST data is never sent as part of the request headers in the http.c
code. It is always sent the "normal" read callback then send() way. This now
enables a plain HTTP POST to be sent chunked if we want to. This also
reduces the risk of having very big POSTs causing problems.
Further, sending off the initial HTTP request is not done using a loop
anymore. If it wasn't all sent off in the first send(), the rest of the
request is sent off in the normal transfer select() loop. This makes several
things possible, but mainly it makes libcurl block less when used from the
multi interface and it also reduces the risk of problems with issuing very
large requests.
Daniel (9 Dec)
- Moved the read callback pointer and data within the structs to a more
suitable place. This in preparation for a better HTTP-request sending code
without (a silly) loop.
- The Dodds fix seems not to work.
- Vojtech Janota's tests proved that the resolve fix from oct 21st is not good
enough since obviously older glibcs might return EAGAIN without this meaning
that the buffer was too small.
- [the other day] Made libcurl loop on recv() and send() now until done, and
then get back to select(). Previously it went back to select() more often
which really was a slight overhead. This was due to the reported performance
problems on HTTP PUT on Windows. I couldn't see any notable difference on
Linux...
Version 7.10.3-pre2 (4 Dec 2002)
Daniel (4 Dec 2002)
- Lots of work with Malcolm Dodds made me add a temporary code fix that now
shortens the timeout waiting for the 226 or 250 line after a completed
FTP transfer.
If no data is received within 60 seconds, this is taken as a sign of a dead
control connection and we bail out.
Daniel (3 Dec 2002)
- Ralph's bug report #644841 identified a problem in which curl returned a
timeout error code when in fact the problem was not a timeout. The proper
error should now be propagated better when they're detected in the FTP
response reading function.
- Updated the Borland Makefiles.
Daniel (2 Dec 2002)
- Nicolas Berloquin provided a patch that introduced --create-dirs to the
command line tool. When used in combination with -o, it lets curl create
[non-existing] directories used in -o, suitably used with #-combinations
such as:
curl "www.images.com/{flowers,cities,parks,mountains}/pic_[1-100].jpg \
-o "dir_#1/pic#2.jpg" --create-dirs
Version 7.10.3-pre1
Daniel (28 Nov 2002)
- I visited Lars Nordgren and had a go with his problem, which lead me to
implement this fix. If libcurl detects the added custom header
"Transfer-Encoding: chunked", it will now enable a chunked transfer.
Also, chunked transfer didn't quite work before but seems to do so now.
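  (Editor's note: a minimal sketch of forcing a chunked upload the way this
  entry describes. The read callback, payload and URL are assumed placeholders,
  not from the changelog.)

  #include <string.h>
  #include <curl/curl.h>

  /* illustrative read callback: hands libcurl a short payload, then signals EOF */
  static size_t read_cb(char *buf, size_t size, size_t nitems, void *userp)
  {
    const char **data = (const char **)userp;
    size_t len = strlen(*data);
    if(len > size * nitems)
      len = size * nitems;
    memcpy(buf, *data, len);
    *data += len;                /* next call returns 0 => end of data */
    return len;
  }

  int main(void)
  {
    const char *payload = "hello";
    CURL *curl = curl_easy_init();
    if(curl) {
      struct curl_slist *hdrs = curl_slist_append(NULL, "Transfer-Encoding: chunked");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
      curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
      curl_easy_setopt(curl, CURLOPT_READDATA, &payload);
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);  /* forces a chunked upload */
      curl_easy_perform(curl);
      curl_slist_free_all(hdrs);
      curl_easy_cleanup(curl);
    }
    return 0;
  }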
- Kjetil Jacobsen pointed out that ./configure --disable-ipv6 --without-zlib
didn't work on any platform...
Daniel (26 Nov 2002)
- Fixed a bad addrinfo free in the hostip.c code, hardly exposed anywhere
- Dan Becker found and fixed a minor memory leak on persistent connections
using CURLOPT_USERPWD.
Daniel (22 Nov 2002)
- Based on Ralph Mitchell's excellent analysis I found a bug in the test suite
web server (sws) which now lets test case 306 run fine even in combination
with the other test cases.
- Juan Ignacio Hervás found a crash in the verbose connect message that is
used on persistent connections. This bug was added in 7.10.2 due to the
rearranged name resolve code.
Daniel (20 Nov 2002)
- Kjetil Jacobsen provided a patch that introduces:
CURLOPT_PRIVATE stores a private pointer in the curl handle.
CURLINFO_PRIVATE retrieves the private pointer from the curl handle.
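  (Editor's note: a minimal sketch of storing and retrieving a private pointer
  as described above. The "struct job" application data is a made-up example.)

  #include <curl/curl.h>

  struct job { int id; };                     /* illustrative application data */

  int main(void)
  {
    struct job my_job = { 42 };
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_PRIVATE, &my_job);   /* store the pointer */
      /* ... later, for example when the handle comes back from the multi API ... */
      char *priv = NULL;
      curl_easy_getinfo(curl, CURLINFO_PRIVATE, &priv);   /* retrieve it again */
      struct job *j = (struct job *)priv;
      (void)j;
      curl_easy_cleanup(curl);
    }
    return 0;
  }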
- Karol Pietrzak pointed out how curl-config --cflags didn't output a good
include dir so I've removed that for now.
Version 7.10.2 (18 Nov 2002)
Daniel (11 Nov 2002)
- Dave Halbakken added curl_version_info to lib/libcurl.def to make libcurl
properly build with MSVC on Windows.
Daniel (8 Nov 2002)
- Doing HTTP PUT without a specified file size now makes libcurl use
Transfer-Encoding: chunked.
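  (Editor's note: a sketch of the size-less PUT case, assuming an illustrative
  local file and URL. With no CURLOPT_INFILESIZE given, libcurl falls back to a
  chunked upload as the entry describes; the default read callback fread()s
  from the FILE* given as CURLOPT_READDATA.)

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    FILE *src = fopen("upload.bin", "rb");     /* hypothetical local file */
    CURL *curl = curl_easy_init();
    if(curl && src) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload.bin");
      curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);     /* HTTP PUT */
      curl_easy_setopt(curl, CURLOPT_READDATA, src);  /* default callback reads this */
      /* no CURLOPT_INFILESIZE set, so the upload goes out chunked */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    if(src)
      fclose(src);
    return 0;
  }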
Daniel (7 Nov 2002)
- Bug report #634625 identified how curl returned timeout immediately when
CURLOPT_CONNECTTIMEOUT was used and provided a fix.
Version 7.10.2-pre4 (6 Nov 2002)
Daniel (5 Nov 2002)
- Lehel Bernadt found out and fixed this: libcurl sent the error message to the debug
output when it stored the error message.
- Avery Fay found some problems with the DNS cache (when the cache time was
set to 0 we got a memory leak, but when the leak was fixed he got a crash
when he used the CURLOPT_INTERFACE with that) that had me do some real
restructuring so that we now have a reference counter in the dns cache
entries to prevent an entry to get flushed while still actually in use.
I also detected that we previously didn't update the time stamp when we
extracted an entry from the cache so that must've been a reason for some
very weird dns cache bugs.
Version 7.10.2-pre3
Daniel (31 Oct 2002)
- Downgraded automake to 1.6.3 in an attempt to fix cygwin problems. (It
turned out this didn't help though.)
- Disabling the DNS cache (by setting the timeout to 0) made libcurl leak
memory. Avery Fay brought the example code that proved this.
Version 7.10.2-pre2
Daniel (28 Oct 2002)
- Upgraded to autoconf 2.54 and automake 1.7 on the release-build host.
- Kevin Roth made the command line tool check for a CURL_CA_BUNDLE environment
variable (if --cacert isn't used) and if not set, the Windows version will
check for a file named "curl-ca-bundle.crt" in the current directory or the
directory where curl is located. That file is then used as CA root cert
bundle.
- Avery Fay pointed out that curl's configure script didn't come out right if you
used autoconf newer than 2.52. This was due to some badly quoted code.
Version 7.10.2-pre1
Daniel (23 Oct 2002)
- Emiliano Ida confirmed that we now build properly with the Borland C++
compiler too. We needed yet another fix for the ISO cpp check in the curl.h
header file.
- Yet another fix was needed to get the HTTP download without headers to work.
This time it was needed if the first "believed header" was read all in the
first read. Test 306 has not run properly since the 11th october fix.
Daniel (21 Oct 2002)
- Zvi Har'El pointed out a problem with curl's name resolving on Redhat 8
machines (running IPv6 disabled). Mats Lidell let me use an account on his
machine and I could verify that gethostbyname_r() has been changed to return
EAGAIN instead of ERANGE when the given buffer size is too small. This is
glibc 2.2.93.
- Albert Chin helped me get the -no-undefined option corrected in
lib/Makefile.am since Cygwin builds want it there while Solaris builds don't
want it present. Kevin Roth helped me try it out on cygwin.
- Nikita Schmidt provided a bug fix for a FOLLOWLOCATION bug introduced when
the ../ support got in (7.10.1).
Daniel (18 Oct 2002)
- Fabrizio Ammollo pointed out a remaining problem with FOLLOWLOCATION in
the multi interface.
Daniel (17 Oct 2002)
- Richard Cooper's experimenting proved that -j (CURLOPT_COOKIESESSION) didn't
work quite as it was supposed to. You needed to set it *before* you used
CURLOPT_COOKIEFILE, and we don't want that kind of dependency.
Daniel (15 Oct 2002)
- Andr<64>s Garc<72>a provided corrections for erratas in four libcurl man pages.
Daniel (13 Oct 2002)
- Starting now, we generate and include PDF versions of all the docs in the
release archives.
Daniel (12 Oct 2002)
- Trying to connect to a host on a bad port number caused the multi interface
to never return failure and it appeared to keep on trying forever (it just
didn't do anything).
Daniel (11 Oct 2002)
- Downloading HTTP without headers didn't work 100%, some of the initial data
got written twice. Kevin Roth reported.
- Kevin Roth found out the "config file" parser in the client code could
segfault, like if DOS newlines were used.
Version 7.10.1 (11 Oct 2002)
Daniel (10 Oct 2002)
- Jeff Lawson fixed a few problems with connection re-use that remained when
you set CURLOPT_PROXY to "".
Daniel (9 Oct 2002)
- Craig Davison found a terrible flaw and Cris Bailiff helped out in the
search. Getting HTTP data from servers when the headers are split up in
multiple reads, could cause junk data to get inserted among the saved
headers. This only concerns HTTP(S) headers.
Daniel (8 Oct 2002)
- Vincent Penquerc'h gave us the good suggestion that when the ERRORBUFFER
is set internally, the error text is sent to the debug function as well.
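  (Editor's note: a hedged sketch of the two options this entry connects; the
  trace callback and URL are bare-bones placeholders. The debug callback is
  only invoked when CURLOPT_VERBOSE is enabled.)

  #include <stdio.h>
  #include <curl/curl.h>

  /* minimal debug callback: just prints whatever libcurl hands over */
  static int my_trace(CURL *handle, curl_infotype type, char *data, size_t size, void *userp)
  {
    (void)handle; (void)type; (void)userp;
    fwrite(data, 1, size, stderr);
    return 0;
  }

  int main(void)
  {
    char errbuf[CURL_ERROR_SIZE] = "";
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);     /* error text lands here */
      curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace); /* and is also traced here */
      curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }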
- I fixed the telnet code to time out properly as the option tells it to, on
non-windows platforms.
Daniel (7 Oct 2002)
- John Crow pointed out that libcurl-the-guide wasn't included in the release
tarball!
- Kevin Roth pointed out that make install didn't do right if built outside
the source tree (ca-bundle wise).
- FOLLOWLOCATION bugfix for the multi interface
Daniel (4 Oct 2002)
- Kevin Roth got problems with his cygwin build when -no-undefined was not
present in lib/Makefile.am so I put it back in there again. The poor one who
needs to remove it again must write a configure script to detect that need.
- Ralph Mitchell pointed out that curl was a bit naive and didn't deal with ./
or ../ stuff in the string passed back in a Location: header when following
locations.
- Albert Chin helped me to work out a better configure.in check for zlib, and
both --without-zlib and --with-zlib seem to work rather well right now.
- Zvi Har'El improved the OpenSSL ENGINE check in the configure script to
become more accurate.
Daniel (1 Oct 2002)
- Detlef Schmier pointed out the lack of a --without-libz option to configure,
so I added one.
Version 7.10 (1 Oct 2002)
Daniel (30 Sep 2002)
@@ -31,7 +428,8 @@ Daniel (26 Sep 2002)
- Extended curl_version_info() more and wrote a man page for it.
Daniel (25 Sep 2002)
- libcurl could leak memory when downloading multiple files using http ranges.
- libcurl could leak memory when downloading multiple files using http ranges,
reported and fixed by Jean-Luc Guevel.
- Walter J. Mack provided code and docs for the new curl_free() function that
shall be used to free memory that is allocated by libcurl and returned back


@@ -30,6 +30,11 @@ To build after having extracted everything from CVS, do this:
./configure
make
Daniel uses a ./configure line similar to this for easier development:
./configure --disable-shared --enable-debug --enable-maintainer-mode
REQUIREMENTS
You need the following software installed:
@@ -48,7 +53,9 @@ REQUIREMENTS
MAC OS X
For Mac OS X users, Guido Neitzer wrote down the following step-by-step guide:
With Mac OS X 10.2 and the associated Developer Tools, the installed versions
of the build tools are adequate. For Mac OS X 10.1 users, Guido Neitzer
wrote the following step-by-step guide:
1. Install fink (http://fink.sourceforge.net)
2. Update fink to the newest version (with the installed fink)


@@ -1,21 +0,0 @@
COPYRIGHT AND PERMISSION NOTICE
Copyright (c) 1996 - 2002, Daniel Stenberg, <daniel@haxx.se>.
All rights reserved.
Permission to use, copy, modify, and distribute this software for any purpose
with or without fee is hereby granted, provided that the above copyright
notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not
be used in advertising or otherwise to promote the sale, use or other dealings
in this Software without prior written authorization of the copyright holder.


@@ -1,470 +0,0 @@
MOZILLA PUBLIC LICENSE
Version 1.1
---------------
1. Definitions.
1.0.1. "Commercial Use" means distribution or otherwise making the
Covered Code available to a third party.
1.1. "Contributor" means each entity that creates or contributes to
the creation of Modifications.
1.2. "Contributor Version" means the combination of the Original
Code, prior Modifications used by a Contributor, and the Modifications
made by that particular Contributor.
1.3. "Covered Code" means the Original Code or Modifications or the
combination of the Original Code and Modifications, in each case
including portions thereof.
1.4. "Electronic Distribution Mechanism" means a mechanism generally
accepted in the software development community for the electronic
transfer of data.
1.5. "Executable" means Covered Code in any form other than Source
Code.
1.6. "Initial Developer" means the individual or entity identified
as the Initial Developer in the Source Code notice required by Exhibit
A.
1.7. "Larger Work" means a work which combines Covered Code or
portions thereof with code not governed by the terms of this License.
1.8. "License" means this document.
1.8.1. "Licensable" means having the right to grant, to the maximum
extent possible, whether at the time of the initial grant or
subsequently acquired, any and all of the rights conveyed herein.
1.9. "Modifications" means any addition to or deletion from the
substance or structure of either the Original Code or any previous
Modifications. When Covered Code is released as a series of files, a
Modification is:
A. Any addition to or deletion from the contents of a file
containing Original Code or previous Modifications.
B. Any new file that contains any part of the Original Code or
previous Modifications.
1.10. "Original Code" means Source Code of computer software code
which is described in the Source Code notice required by Exhibit A as
Original Code, and which, at the time of its release under this
License is not already Covered Code governed by this License.
1.10.1. "Patent Claims" means any patent claim(s), now owned or
hereafter acquired, including without limitation, method, process,
and apparatus claims, in any patent Licensable by grantor.
1.11. "Source Code" means the preferred form of the Covered Code for
making modifications to it, including all modules it contains, plus
any associated interface definition files, scripts used to control
compilation and installation of an Executable, or source code
differential comparisons against either the Original Code or another
well known, available Covered Code of the Contributor's choice. The
Source Code can be in a compressed or archival form, provided the
appropriate decompression or de-archiving software is widely available
for no charge.
1.12. "You" (or "Your") means an individual or a legal entity
exercising rights under, and complying with all of the terms of, this
License or a future version of this License issued under Section 6.1.
For legal entities, "You" includes any entity which controls, is
controlled by, or is under common control with You. For purposes of
this definition, "control" means (a) the power, direct or indirect,
to cause the direction or management of such entity, whether by
contract or otherwise, or (b) ownership of more than fifty percent
(50%) of the outstanding shares or beneficial ownership of such
entity.
2. Source Code License.
2.1. The Initial Developer Grant.
The Initial Developer hereby grants You a world-wide, royalty-free,
non-exclusive license, subject to third party intellectual property
claims:
(a) under intellectual property rights (other than patent or
trademark) Licensable by Initial Developer to use, reproduce,
modify, display, perform, sublicense and distribute the Original
Code (or portions thereof) with or without Modifications, and/or
as part of a Larger Work; and
(b) under Patents Claims infringed by the making, using or
selling of Original Code, to make, have made, use, practice,
sell, and offer for sale, and/or otherwise dispose of the
Original Code (or portions thereof).
(c) the licenses granted in this Section 2.1(a) and (b) are
effective on the date Initial Developer first distributes
Original Code under the terms of this License.
(d) Notwithstanding Section 2.1(b) above, no patent license is
granted: 1) for code that You delete from the Original Code; 2)
separate from the Original Code; or 3) for infringements caused
by: i) the modification of the Original Code or ii) the
combination of the Original Code with other software or devices.
2.2. Contributor Grant.
Subject to third party intellectual property claims, each Contributor
hereby grants You a world-wide, royalty-free, non-exclusive license
(a) under intellectual property rights (other than patent or
trademark) Licensable by Contributor, to use, reproduce, modify,
display, perform, sublicense and distribute the Modifications
created by such Contributor (or portions thereof) either on an
unmodified basis, with other Modifications, as Covered Code
and/or as part of a Larger Work; and
(b) under Patent Claims infringed by the making, using, or
selling of Modifications made by that Contributor either alone
and/or in combination with its Contributor Version (or portions
of such combination), to make, use, sell, offer for sale, have
made, and/or otherwise dispose of: 1) Modifications made by that
Contributor (or portions thereof); and 2) the combination of
Modifications made by that Contributor with its Contributor
Version (or portions of such combination).
(c) the licenses granted in Sections 2.2(a) and 2.2(b) are
effective on the date Contributor first makes Commercial Use of
the Covered Code.
(d) Notwithstanding Section 2.2(b) above, no patent license is
granted: 1) for any code that Contributor has deleted from the
Contributor Version; 2) separate from the Contributor Version;
3) for infringements caused by: i) third party modifications of
Contributor Version or ii) the combination of Modifications made
by that Contributor with other software (except as part of the
Contributor Version) or other devices; or 4) under Patent Claims
infringed by Covered Code in the absence of Modifications made by
that Contributor.
3. Distribution Obligations.
3.1. Application of License.
The Modifications which You create or to which You contribute are
governed by the terms of this License, including without limitation
Section 2.2. The Source Code version of Covered Code may be
distributed only under the terms of this License or a future version
of this License released under Section 6.1, and You must include a
copy of this License with every copy of the Source Code You
distribute. You may not offer or impose any terms on any Source Code
version that alters or restricts the applicable version of this
License or the recipients' rights hereunder. However, You may include
an additional document offering the additional rights described in
Section 3.5.
3.2. Availability of Source Code.
Any Modification which You create or to which You contribute must be
made available in Source Code form under the terms of this License
either on the same media as an Executable version or via an accepted
Electronic Distribution Mechanism to anyone to whom you made an
Executable version available; and if made available via Electronic
Distribution Mechanism, must remain available for at least twelve (12)
months after the date it initially became available, or at least six
(6) months after a subsequent version of that particular Modification
has been made available to such recipients. You are responsible for
ensuring that the Source Code version remains available even if the
Electronic Distribution Mechanism is maintained by a third party.
3.3. Description of Modifications.
You must cause all Covered Code to which You contribute to contain a
file documenting the changes You made to create that Covered Code and
the date of any change. You must include a prominent statement that
the Modification is derived, directly or indirectly, from Original
Code provided by the Initial Developer and including the name of the
Initial Developer in (a) the Source Code, and (b) in any notice in an
Executable version or related documentation in which You describe the
origin or ownership of the Covered Code.
3.4. Intellectual Property Matters
(a) Third Party Claims.
If Contributor has knowledge that a license under a third party's
intellectual property rights is required to exercise the rights
granted by such Contributor under Sections 2.1 or 2.2,
Contributor must include a text file with the Source Code
distribution titled "LEGAL" which describes the claim and the
party making the claim in sufficient detail that a recipient will
know whom to contact. If Contributor obtains such knowledge after
the Modification is made available as described in Section 3.2,
Contributor shall promptly modify the LEGAL file in all copies
Contributor makes available thereafter and shall take other steps
(such as notifying appropriate mailing lists or newsgroups)
reasonably calculated to inform those who received the Covered
Code that new knowledge has been obtained.
(b) Contributor APIs.
If Contributor's Modifications include an application programming
interface and Contributor has knowledge of patent licenses which
are reasonably necessary to implement that API, Contributor must
also include this information in the LEGAL file.
(c) Representations.
Contributor represents that, except as disclosed pursuant to
Section 3.4(a) above, Contributor believes that Contributor's
Modifications are Contributor's original creation(s) and/or
Contributor has sufficient rights to grant the rights conveyed by
this License.
3.5. Required Notices.
You must duplicate the notice in Exhibit A in each file of the Source
Code. If it is not possible to put such notice in a particular Source
Code file due to its structure, then You must include such notice in a
location (such as a relevant directory) where a user would be likely
to look for such a notice. If You created one or more Modification(s)
You may add your name as a Contributor to the notice described in
Exhibit A. You must also duplicate this License in any documentation
for the Source Code where You describe recipients' rights or ownership
rights relating to Covered Code. You may choose to offer, and to
charge a fee for, warranty, support, indemnity or liability
obligations to one or more recipients of Covered Code. However, You
may do so only on Your own behalf, and not on behalf of the Initial
Developer or any Contributor. You must make it absolutely clear than
any such warranty, support, indemnity or liability obligation is
offered by You alone, and You hereby agree to indemnify the Initial
Developer and every Contributor for any liability incurred by the
Initial Developer or such Contributor as a result of warranty,
support, indemnity or liability terms You offer.
3.6. Distribution of Executable Versions.
You may distribute Covered Code in Executable form only if the
requirements of Section 3.1-3.5 have been met for that Covered Code,
and if You include a notice stating that the Source Code version of
the Covered Code is available under the terms of this License,
including a description of how and where You have fulfilled the
obligations of Section 3.2. The notice must be conspicuously included
in any notice in an Executable version, related documentation or
collateral in which You describe recipients' rights relating to the
Covered Code. You may distribute the Executable version of Covered
Code or ownership rights under a license of Your choice, which may
contain terms different from this License, provided that You are in
compliance with the terms of this License and that the license for the
Executable version does not attempt to limit or alter the recipient's
rights in the Source Code version from the rights set forth in this
License. If You distribute the Executable version under a different
license You must make it absolutely clear that any terms which differ
from this License are offered by You alone, not by the Initial
Developer or any Contributor. You hereby agree to indemnify the
Initial Developer and every Contributor for any liability incurred by
the Initial Developer or such Contributor as a result of any such
terms You offer.
3.7. Larger Works.
You may create a Larger Work by combining Covered Code with other code
not governed by the terms of this License and distribute the Larger
Work as a single product. In such a case, You must make sure the
requirements of this License are fulfilled for the Covered Code.
4. Inability to Comply Due to Statute or Regulation.
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Code due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description
must be included in the LEGAL file described in Section 3.4 and must
be included with all distributions of the Source Code. Except to the
extent prohibited by statute or regulation, such description must be
sufficiently detailed for a recipient of ordinary skill to be able to
understand it.
5. Application of this License.
This License applies to code to which the Initial Developer has
attached the notice in Exhibit A and to related Covered Code.
6. Versions of the License.
6.1. New Versions.
Netscape Communications Corporation ("Netscape") may publish revised
and/or new versions of the License from time to time. Each version
will be given a distinguishing version number.
6.2. Effect of New Versions.
Once Covered Code has been published under a particular version of the
License, You may always continue to use it under the terms of that
version. You may also choose to use such Covered Code under the terms
of any subsequent version of the License published by Netscape. No one
other than Netscape has the right to modify the terms applicable to
Covered Code created under this License.
6.3. Derivative Works.
If You create or use a modified version of this License (which you may
only do in order to apply it to code which is not already Covered Code
governed by this License), You must (a) rename Your license so that
the phrases "Mozilla", "MOZILLAPL", "MOZPL", "Netscape",
"MPL", "NPL" or any confusingly similar phrase do not appear in your
license (except to note that your license differs from this License)
and (b) otherwise make it clear that Your version of the license
contains terms which differ from the Mozilla Public License and
Netscape Public License. (Filling in the name of the Initial
Developer, Original Code or Contributor in the notice described in
Exhibit A shall not of themselves be deemed to be modifications of
this License.)
7. DISCLAIMER OF WARRANTY.
COVERED CODE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS,
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, WARRANTIES THAT THE COVERED CODE IS FREE OF
DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING.
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED CODE
IS WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT,
YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE
COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER
OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF
ANY COVERED CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.
8. TERMINATION.
8.1. This License and the rights granted hereunder will terminate
automatically if You fail to comply with terms herein and fail to cure
such breach within 30 days of becoming aware of the breach. All
sublicenses to the Covered Code which are properly granted shall
survive any termination of this License. Provisions which, by their
nature, must remain in effect beyond the termination of this License
shall survive.
8.2. If You initiate litigation by asserting a patent infringement
claim (excluding declatory judgment actions) against Initial Developer
or a Contributor (the Initial Developer or Contributor against whom
You file such action is referred to as "Participant") alleging that:
(a) such Participant's Contributor Version directly or indirectly
infringes any patent, then any and all rights granted by such
Participant to You under Sections 2.1 and/or 2.2 of this License
shall, upon 60 days notice from Participant terminate prospectively,
unless if within 60 days after receipt of notice You either: (i)
agree in writing to pay Participant a mutually agreeable reasonable
royalty for Your past and future use of Modifications made by such
Participant, or (ii) withdraw Your litigation claim with respect to
the Contributor Version against such Participant. If within 60 days
of notice, a reasonable royalty and payment arrangement are not
mutually agreed upon in writing by the parties or the litigation claim
is not withdrawn, the rights granted by Participant to You under
Sections 2.1 and/or 2.2 automatically terminate at the expiration of
the 60 day notice period specified above.
(b) any software, hardware, or device, other than such Participant's
Contributor Version, directly or indirectly infringes any patent, then
any rights granted to You by such Participant under Sections 2.1(b)
and 2.2(b) are revoked effective as of the date You first made, used,
sold, distributed, or had made, Modifications made by that
Participant.
8.3. If You assert a patent infringement claim against Participant
alleging that such Participant's Contributor Version directly or
indirectly infringes any patent where such claim is resolved (such as
by license or settlement) prior to the initiation of patent
infringement litigation, then the reasonable value of the licenses
granted by such Participant under Sections 2.1 or 2.2 shall be taken
into account in determining the amount or value of any payment or
license.
8.4. In the event of termination under Sections 8.1 or 8.2 above,
all end user license agreements (excluding distributors and resellers)
which have been validly granted by You or any distributor hereunder
prior to termination shall survive termination.
9. LIMITATION OF LIABILITY.
UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT
(INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL
DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED CODE,
OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR
ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY
CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL,
WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER
COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN
INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF
LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY
RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW
PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE
EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO
THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.
10. U.S. GOVERNMENT END USERS.
The Covered Code is a "commercial item," as that term is defined in
48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer
software" and "commercial computer software documentation," as such
terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48
C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995),
all U.S. Government End Users acquire Covered Code with only those
rights set forth herein.
11. MISCELLANEOUS.
This License represents the complete agreement concerning subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. This License shall be governed by
California law provisions (except to the extent applicable law, if
any, provides otherwise), excluding its conflict-of-law provisions.
With respect to disputes in which at least one party is a citizen of,
or an entity chartered or registered to do business in the United
States of America, any litigation relating to this License shall be
subject to the jurisdiction of the Federal Courts of the Northern
District of California, with venue lying in Santa Clara County,
California, with the losing party responsible for costs, including
without limitation, court costs and reasonable attorneys' fees and
expenses. The application of the United Nations Convention on
Contracts for the International Sale of Goods is expressly excluded.
Any law or regulation which provides that the language of a contract
shall be construed against the drafter shall not apply to this
License.
12. RESPONSIBILITY FOR CLAIMS.
As between Initial Developer and the Contributors, each party is
responsible for claims and damages arising, directly or indirectly,
out of its utilization of rights under this License and You agree to
work with Initial Developer and Contributors to distribute such
responsibility on an equitable basis. Nothing herein is intended or
shall be deemed to constitute any admission of liability.
13. MULTIPLE-LICENSED CODE.
Initial Developer may designate portions of the Covered Code as
"Multiple-Licensed". "Multiple-Licensed" means that the Initial
Developer permits you to utilize portions of the Covered Code under
Your choice of the NPL or the alternative licenses, if any, specified
by the Initial Developer in the file described in Exhibit A.
EXHIBIT A -Mozilla Public License.
``The contents of this file are subject to the Mozilla Public License
Version 1.1 (the "License"); you may not use this file except in
compliance with the License. You may obtain a copy of the License at
http://www.mozilla.org/MPL/
Software distributed under the License is distributed on an "AS IS"
basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the
License for the specific language governing rights and limitations
under the License.
The Original Code is ______________________________________.
The Initial Developer of the Original Code is ________________________.
Portions created by ______________________ are Copyright (C) ______
_______________________. All Rights Reserved.
Contributor(s): ______________________________________.
Alternatively, the contents of this file may be used under the terms
of the _____ license (the "[___] License"), in which case the
provisions of [______] License are applicable instead of those
above. If you wish to allow use of your version of this file only
under the terms of the [____] License and not to allow others to use
your version of this file under the MPL, indicate your decision by
deleting the provisions above and replace them with the notice and
other provisions required by the [___] License. If you do not delete
the provisions above, a recipient may use your version of this file
under either the MPL or the [___] License."
[NOTE: The text of this Exhibit A may differ slightly from the text of
the notices in the Source Code files of the Original Code. You should
use the text of this Exhibit A rather than the text found in the
Original Code Source Code for Your Modifications.]

View File

@@ -4,7 +4,7 @@
AUTOMAKE_OPTIONS = foreign
EXTRA_DIST = CHANGES COPYING maketgz UPGRADE reconf Makefile.dist \
EXTRA_DIST = CHANGES COPYING maketgz SSLCERTS reconf Makefile.dist \
curl-config.in build_vms.com curl-mode.el
bin_SCRIPTS = curl-config
@@ -18,6 +18,9 @@ dist-hook:
html:
cd docs; make html
pdf:
cd docs; make pdf
check: test
test:

6
README
View File

@@ -19,16 +19,20 @@ README
Study the LEGAL file for distribution terms and similar.
Visit the curl web site or mirror for the latest news:
Visit the curl web site or mirrors for the latest news:
http://curl.haxx.se/
http://curl.sf.net/
http://curl.planetmirror.com/
The official download mirror sites are:
Sweden -- ftp://ftp.sunet.se/pub/www/utilities/curl/
Sweden -- http://cool.haxx.se/curl/
Germany -- ftp://ftp.fu-berlin.de/pub/unix/network/curl/
Australia -- http://curl.planetmirror.com/download/
US -- http://curl.sourceforge.net/download/
Hongkong -- http://www.execve.net/curl/
To download the very latest source off the CVS server do this:

View File

@@ -1,9 +1,10 @@
Upgrading to curl/libcurl 7.10 from any previous version
========================================================
Peer SSL Certificate Verification
=================================
libcurl 7.10 performs peer SSL certificate verification by default. This is
done by installing a default CA cert bundle on 'make install' (or similar),
that CA bundle package is used by default on operations against SSL servers.
Starting in 7.10, libcurl performs peer SSL certificate verification by
default. This is done by installing a default CA cert bundle on 'make install'
(or similar), that CA bundle package is used by default on operations against
SSL servers.
Alas, if you communicate with HTTPS servers using certificates that are signed
by CAs present in the bundle, you will not notice any changed behavior and you

View File

@@ -9,6 +9,7 @@ dnl First some basic init macros
AC_INIT
AC_CONFIG_SRCDIR([lib/urldata.h])
AM_CONFIG_HEADER(lib/config.h src/config.h tests/server/config.h lib/ca-bundle.h)
AM_MAINTAINER_MODE
dnl figure out the libcurl version
VERSION=`sed -ne 's/^#define LIBCURL_VERSION "\(.*\)"/\1/p' ${srcdir}/include/curl/curl.h`
@@ -34,7 +35,7 @@ dnl
AC_CANONICAL_HOST
dnl Get system canonical name
AC_DEFINE_UNQUOTED(OS, "${host}")
AC_DEFINE_UNQUOTED(OS, "${host}", [cpu-machine-OS])
dnl Check for AIX weirdos
AC_AIX
@@ -51,6 +52,17 @@ AC_LIBTOOL_WIN32_DLL
dnl libtool setup
AM_PROG_LIBTOOL
case $host in
*-*-cygwin | *-*-mingw* | *-*-pw32*)
need_no_undefined=yes
;;
*)
need_no_undefined=no
;;
esac
AM_CONDITIONAL(NO_UNDEFINED, test x$need_no_undefined = xyes)
dnl The install stuff has already been taken care of by the automake stuff
dnl AC_PROG_INSTALL
AC_PROG_MAKE_SET
@@ -65,9 +77,9 @@ AC_ARG_ENABLE(http,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_HTTP)
AC_DEFINE(CURL_DISABLE_HTTP, 1, [to disable HTTP])
AC_MSG_WARN([disable HTTP disables FTP over proxy and GOPHER too])
AC_DEFINE(CURL_DISABLE_GOPHER)
AC_DEFINE(CURL_DISABLE_GOPHER, 1, [to disable GOPHER])
AC_SUBST(CURL_DISABLE_HTTP)
AC_SUBST(CURL_DISABLE_GOPHER)
;;
@@ -83,7 +95,7 @@ AC_ARG_ENABLE(ftp,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_FTP)
AC_DEFINE(CURL_DISABLE_FTP, 1, [to disable FTP])
AC_SUBST(CURL_DISABLE_FTP)
;;
*) AC_MSG_RESULT(yes)
@@ -98,7 +110,7 @@ AC_ARG_ENABLE(gopher,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_GOPHER)
AC_DEFINE(CURL_DISABLE_GOPHER, 1, [to disable GOPHER])
AC_SUBST(CURL_DISABLE_GOPHER)
;;
*) AC_MSG_RESULT(yes)
@@ -113,7 +125,7 @@ AC_ARG_ENABLE(file,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_FILE)
AC_DEFINE(CURL_DISABLE_FILE, 1, [to disable FILE])
AC_SUBST(CURL_DISABLE_FILE)
;;
*) AC_MSG_RESULT(yes)
@@ -128,7 +140,7 @@ AC_ARG_ENABLE(ldap,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_LDAP)
AC_DEFINE(CURL_DISABLE_LDAP, 1, [to disable LDAP])
AC_SUBST(CURL_DISABLE_LDAP)
;;
*) AC_MSG_RESULT(yes)
@@ -143,7 +155,7 @@ AC_ARG_ENABLE(dict,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_DICT)
AC_DEFINE(CURL_DISABLE_DICT, 1, [to disable DICT])
AC_SUBST(CURL_DISABLE_DICT)
;;
*) AC_MSG_RESULT(yes)
@@ -158,7 +170,7 @@ AC_ARG_ENABLE(telnet,
[ case "$enableval" in
no)
AC_MSG_RESULT(no)
AC_DEFINE(CURL_DISABLE_TELNET)
AC_DEFINE(CURL_DISABLE_TELNET, 1, [to disable TELNET])
AC_SUBST(CURL_DISABLE_TELNET)
;;
*) AC_MSG_RESULT(yes)
@@ -215,11 +227,11 @@ dnl Checks for libraries.
dnl **********************************************************************
dnl gethostbyname in the nsl lib?
AC_CHECK_FUNC(gethostbyname, , AC_CHECK_LIB(nsl, gethostbyname))
AC_CHECK_FUNC(gethostbyname, , [ AC_CHECK_LIB(nsl, gethostbyname) ])
if test "$ac_cv_lib_nsl_gethostbyname" != "yes" -a "$ac_cv_func_gethostbyname" != "yes"; then
dnl gethostbyname in the socket lib?
AC_CHECK_FUNC(gethostbyname, , AC_CHECK_LIB(socket, gethostbyname))
AC_CHECK_FUNC(gethostbyname, , [ AC_CHECK_LIB(socket, gethostbyname) ])
fi
dnl At least one system has been identified to require BOTH nsl and
@@ -244,7 +256,7 @@ if test "$ac_cv_lib_nsl_gethostbyname" = "$ac_cv_func_gethostbyname"; then
fi
dnl resolve lib?
AC_CHECK_FUNC(strcasecmp, , AC_CHECK_LIB(resolve, strcasecmp))
AC_CHECK_FUNC(strcasecmp, , [ AC_CHECK_LIB(resolve, strcasecmp) ])
if test "$ac_cv_lib_resolve_strcasecmp" = "$ac_cv_func_strcasecmp"; then
AC_CHECK_LIB(resolve, strcasecmp,
@@ -254,10 +266,10 @@ if test "$ac_cv_lib_resolve_strcasecmp" = "$ac_cv_func_strcasecmp"; then
fi
dnl socket lib?
AC_CHECK_FUNC(connect, , AC_CHECK_LIB(socket, connect))
AC_CHECK_FUNC(connect, , [ AC_CHECK_LIB(socket, connect) ])
dnl dl lib?
AC_CHECK_FUNC(dlclose, , AC_CHECK_LIB(dl, dlopen))
AC_CHECK_FUNC(dlclose, , [ AC_CHECK_LIB(dl, dlopen) ])
dnl **********************************************************************
dnl Check how non-blocking sockets are set
@@ -268,7 +280,8 @@ AC_ARG_ENABLE(nonblocking,
[
if test "$enableval" = "no" ; then
AC_MSG_WARN([non-blocking sockets disabled])
AC_DEFINE(HAVE_DISABLED_NONBLOCKING)
AC_DEFINE(HAVE_DISABLED_NONBLOCKING, 1,
[to disable NON-BLOCKING connections])
else
CURL_CHECK_NONBLOCKING_SOCKET
fi
@@ -286,7 +299,8 @@ AC_ARG_WITH(egd-socket,
[ EGD_SOCKET="$withval" ]
)
if test -n "$EGD_SOCKET" ; then
AC_DEFINE_UNQUOTED(EGD_SOCKET, "$EGD_SOCKET")
AC_DEFINE_UNQUOTED(EGD_SOCKET, "$EGD_SOCKET",
[your Entropy Gathering Daemon socket pathname] )
fi
dnl Check for user-specified random device
@@ -295,16 +309,13 @@ AC_ARG_WITH(random,
[ RANDOM_FILE="$withval" ],
[
dnl Check for random device
AC_CHECK_FILE("/dev/urandom",
[
RANDOM_FILE="/dev/urandom";
]
)
AC_CHECK_FILE("/dev/urandom", [ RANDOM_FILE="/dev/urandom"] )
]
)
if test -n "$RANDOM_FILE" ; then
AC_SUBST(RANDOM_FILE)
AC_DEFINE_UNQUOTED(RANDOM_FILE, "$RANDOM_FILE")
AC_DEFINE_UNQUOTED(RANDOM_FILE, "$RANDOM_FILE",
[a suitable file to read random data from])
fi
dnl **********************************************************************
@@ -366,7 +377,7 @@ then
AC_CHECK_HEADERS(des.h)
dnl resolv lib?
AC_CHECK_FUNC(res_search, , AC_CHECK_LIB(resolv, res_search))
AC_CHECK_FUNC(res_search, , [AC_CHECK_LIB(resolv, res_search)])
dnl Check for the Kerberos4 library
AC_CHECK_LIB(krb, krb_net_read,
@@ -382,7 +393,8 @@ then
AC_CHECK_FUNCS(krb_get_our_ip_for_realm)
dnl add define KRB4
AC_DEFINE(KRB4)
AC_DEFINE(KRB4, 1,
[if you have the Kerberos4 libraries (including -ldes)])
dnl substitute it too!
KRB4_ENABLED=1
@@ -405,10 +417,9 @@ dnl **********************************************************************
dnl Default to compiler & linker defaults for SSL files & libraries.
OPT_SSL=off
AC_ARG_WITH(ssl,dnl
[ --with-ssl[=DIR] where to look for SSL [compiler/linker default paths]
DIR points to the SSL installation [/usr/local/ssl]],
OPT_SSL=$withval
)
AC_HELP_STRING([--with-ssl=PATH], [where to look for SSL, PATH points to the SSL installation (default: /usr/local/ssl)])
AC_HELP_STRING([--without-ssl], [disable SSL]),
OPT_SSL=$withval)
if test X"$OPT_SSL" = Xno
then
@@ -482,9 +493,9 @@ else
OPENSSL_ENABLED=1)
fi
dnl Check for the OpenSSL engine header, it is kind of "separated"
dnl from the main SSL check
AC_CHECK_HEADERS(openssl/engine.h)
dnl If the ENGINE library seems to be around, check for the OpenSSL engine
dnl header, it is kind of "separated" from the main SSL check
AC_CHECK_FUNC(ENGINE_init, [ AC_CHECK_HEADERS(openssl/engine.h) ])
AC_SUBST(OPENSSL_ENABLED)
@@ -508,37 +519,43 @@ dnl **********************************************************************
dnl Check for the presence of ZLIB libraries and headers
dnl **********************************************************************
dnl Default to compiler & linker defaults for files & libraries.
dnl OPT_ZLIB=no
dnl AC_ARG_WITH(zlib,dnl
dnl [ --with-zlib[=DIR] where to look for ZLIB [compiler/linker default paths]
dnl DIR points to the ZLIB installation prefix [/usr/local]],
dnl OPT_ZLIB=$withval,
dnl )
dnl Check for & handle argument to --with-zlib.
dnl
dnl NOTE: We *always* look for ZLIB headers & libraries, all this option
dnl does is change where we look (by adjusting LIBS and CPPFLAGS.)
dnl
AC_MSG_CHECKING(where to look for ZLIB)
if test X"$OPT_ZLIB" = Xno
then
AC_MSG_RESULT([defaults (or given in environment)])
else
test X"$OPT_ZLIB" = Xyes && OPT_ZLIB=/usr/local
LIBS="$LIBS -L$OPT_ZLIB/lib"
_cppflags=$CPPFLAGS
_ldflags=$LDFLAGS
OPT_ZLIB="/usr/local"
AC_ARG_WITH(zlib,
AC_HELP_STRING([--with-zlib=PATH], [search for zlib in PATH])
AC_HELP_STRING([--without-zlib], [disable use of zlib]),
[OPT_ZLIB="$withval"])
case "$OPT_ZLIB" in
no)
AC_MSG_WARN([zlib disabled]) ;;
*)
dnl check for the lib first without setting any new path, since many
dnl people have it in the default path
AC_CHECK_LIB(z, inflateEnd, ,
[if test -d "$OPT_ZLIB"; then
CPPFLAGS="$CPPFLAGS -I$OPT_ZLIB/include"
AC_MSG_RESULT([$OPT_ZLIB])
fi
LDFLAGS="$LDFLAGS -L$OPT_ZLIB/lib"
fi])
dnl AC_CHECK_FUNC(gzread, , AC_CHECK_LIB(z, gzread))
AC_CHECK_LIB(z, gzread, [AM_CONDITIONAL(CONTENT_ENCODING, true)
AC_DEFINE(HAVE_LIBZ)
AC_CHECK_HEADER(zlib.h,[
AC_CHECK_LIB(z, gzread,
[HAVE_LIBZ="1"
AC_SUBST(HAVE_LIBZ)
LIBS="$LIBS -lz"
HAVE_LIBZ="1"
AC_SUBST(HAVE_LIBZ)])
AC_DEFINE(HAVE_ZLIB_H, 1, [if you have the zlib.h header file])
AC_DEFINE(HAVE_LIBZ, 1, [If zlib is available])],
[ CPPFLAGS=$_cppflags
LDFLAGS=$_ldflags])],
[ CPPFLAGS=$_cppflags
LDFLAGS=$_ldflags]
)
;;
esac
dnl Default is to try the thread-safe versions of a few functions
OPT_THREAD=on
@@ -608,9 +625,6 @@ AC_CHECK_HEADERS( \
setjmp.h
)
dnl Check for libz header
AC_CHECK_HEADERS(zlib.h)
dnl Checks for typedefs, structures, and compiler characteristics.
AC_C_CONST
AC_TYPE_SIZE_T
@@ -672,7 +686,7 @@ if test "$ac_cv_func_sigsetjmp" != "yes"; then
[sigjmp_buf jmpenv;
sigsetjmp(jmpenv, 1);],
AC_MSG_RESULT(yes)
AC_DEFINE(HAVE_SIGSETJMP),
AC_DEFINE(HAVE_SIGSETJMP, 1, [If you have sigsetjmp]),
AC_MSG_RESULT(no)
)
fi
@@ -740,7 +754,23 @@ AC_ARG_ENABLE(debug,
*) AC_MSG_RESULT(yes)
CPPFLAGS="$CPPFLAGS -DMALLOCDEBUG"
CFLAGS="-W -Wall -Wwrite-strings -pedantic -Wundef -Wpointer-arith -Wcast-align -Wnested-externs -g"
CFLAGS="$CFLAGS -g"
if test "$GCC" = "yes"; then
CFLAGS="$CFLAGS -W -Wall -Wwrite-strings -pedantic -Wundef -Wpointer-arith -Wcast-align -Wnested-externs"
fi
dnl strip off optimizer flags
NEWFLAGS=""
for flag in $CFLAGS; do
case "$flag" in
-O*)
dnl echo "cut off $flag"
;;
*)
NEWFLAGS="$NEWFLAGS $flag"
;;
esac
done
CFLAGS=$NEWFLAGS
;;
esac ],
AC_MSG_RESULT(no)
@@ -757,6 +787,7 @@ AC_CONFIG_FILES([Makefile \
tests/Makefile \
tests/data/Makefile \
tests/server/Makefile \
tests/libtest/Makefile \
packages/Makefile \
packages/Win32/Makefile \
packages/Win32/cygwin/Makefile \
@@ -770,4 +801,3 @@ AC_CONFIG_FILES([Makefile \
curl-config
])
AC_OUTPUT

View File

@@ -107,7 +107,8 @@ while test $# -gt 0; do
;;
--cflags)
echo -I@includedir@
#echo -I@includedir@
echo ""
;;
--libs)

View File

@@ -1,3 +1,5 @@
Makefile
Makefile.in
*html
*ps
*pdf

View File

@@ -6,15 +6,16 @@
To Think About When Contributing Source Code
This document is intended to offer some guidelines that can be useful to keep
in mind when you decide to write a contribution to the project. This concerns
This document is intended to offer some simple guidelines that can be useful
to keep in mind when you decide to contribute to the project. This concerns
new features as well as corrections to existing flaws or bugs.
Join the Community
Skip over to http://curl.haxx.se/mail/ and join the appropriate mailing
list(s). Read up on details before you post questions. Read this file before
you start sending patches!
you start sending patches! We prefer patches and discussions being held on
the mailing list(s), not sent to individuals.
The License Issue
@@ -29,9 +30,9 @@ The License Issue
What To Read
Source code, the man pages, the INTERALS document, the TODO, the most recent
Source code, the man pages, the INTERNALS document, the TODO, the most recent
CHANGES. Just lurking on the libcurl mailing list is gonna give you a lot of
insights on what's going on right now.
insights on what's going on right now. Asking there is a good idea too.
Naming
@@ -39,26 +40,32 @@ Naming
names. It doesn't necessarily have to mean that you should use the same as in
other places of the code, just that the names should be logical,
understandable and be named according to what they're used for. File-local
functions should be made static.
functions should be made static. We like lower case names.
See the INTERNALS document on how we name non-exported library-global
symbols.
Indenting
Please try using the same indenting levels and bracing method as all the
other code already does. It makes the source code a lot easier to follow if
all of it is written using the same style. We don't ask you to like it, we
just ask you to follow the tradition! ;-)
just ask you to follow the tradition! ;-) This mainly means: 2-level indents,
using spaces only (no tabs) and having the opening brace ({) on the same line
as the if() or while().
Commenting
Comment your source code extensively. Commented code is quality code and
enables future modifications much more. Uncommented code much more risk being
Comment your source code extensively using C comments (/* comment */), DO NOT
use C++ comments (// this style). Commented code is quality code and enables
future modifications much more. Uncommented code runs a much greater risk of being
completely replaced when someone wants to extend things, since other persons'
source code can get quite hard to read.
General Style
Keep your functions small. If they're small you avoid a lot of mistakes and
you don't accidentally mix up variables.
you don't accidentally mix up variables etc.
Non-clobbering All Over
@@ -69,7 +76,14 @@ Non-clobbering All Over
functionality, try writing it in a new source file. If you fix bugs, try to
fix one bug at a time and send them as separate patches.
Separate Patches Doing Different Things
Platform Dependent Code
Use #ifdef HAVE_FEATURE to do conditional code. We avoid checking for
particular operating systems or hardware in the #ifdef lines. The
HAVE_FEATURE shall be generated by the configure script for unix-like systems
and they are hard-coded in the config-[system].h files for the others.
Separate Patches
It is annoying when you get a huge patch from someone that is said to fix 511
odd problems, but discussions and opinions don't agree with 510 of them - or
@@ -94,6 +108,10 @@ Document
small description of your fix or your new features with every contribution so
that it can be swiftly added to the package documentation.
The documentation is always made in man pages (nroff formatted) or plain
ASCII files. All HTML files on the web site and in the release archives are
generated from the nroff/ASCII versions.
Write Access to CVS Repository
If you are a frequent contributor, or have another good reason, you can of
@@ -111,3 +129,21 @@ Test Cases
in the test suite. Every feature that is added should get at least one valid
test case that verifies that it works as documented. If every submitter also
posts a few test cases, it won't end up as a heavy burden on a single person!
How To Make a Patch
Keep a copy of the unmodified curl sources. Make your changes in a separate
source tree. When you think you have something that you want to offer the
curl community, use GNU diff to generate patches.
If you have modified a single file, try something like:
diff -u unmodified-file.c my-changed-one.c > my-fixes.diff
If you have modified several files, possibly in different directories, you
can use diff recursively:
diff -ur curl-original-dir curl-modified-sources-dir > my-fixes.diff
GNU diff exists for virtually all platforms, including all kinds of unixes
and Windows.

View File

@@ -1,4 +1,4 @@
Updated: September 3, 2002 (http://curl.haxx.se/docs/faq.html)
Updated: January 13, 2003 (http://curl.haxx.se/docs/faq.html)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -58,6 +58,8 @@ FAQ
4.8 I found a bug!
4.9 Curl can't authenticate to the server that requires NTLM?
4.10 My HTTP request using HEAD, PUT or DELETE doesn't work!
4.11 Why do my HTTP range requests return the full document?
4.12 Why do I get "certificate verify failed" ?
5. libcurl Issues
5.1 Is libcurl thread-safe?
@@ -66,6 +68,7 @@ FAQ
5.4 Does libcurl do Winsock initing on win32 systems?
5.5 Does CURLOPT_FILE and CURLOPT_INFILE work on win32 ?
5.6 What about Keep-Alive or persistent connections?
5.7 Link errors when building libcurl on Windows!
6. License Issues
6.1 I have a GPL program, can I use the libcurl library?
@@ -272,8 +275,8 @@ FAQ
2.4. Does cURL support Socks (RFC 1928) ?
No. Nobody has wanted it that badly yet. We appreciate patches that bring
this functionality.
There is limited support for SOCKS5 for curl built with IPv6 support
disabled.
3. Usage problems
@@ -600,6 +603,34 @@ FAQ
software you're trying to interact with. This is not anything curl can do
anything about.
4.11 Why do my HTTP range requests return the full document?
Because the range may not be supported by the server, or the server may
choose to ignore it and return the full document anyway.
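For programs using libcurl, the corresponding option is CURLOPT_RANGE (the
command line flag is -r). A minimal sketch; the URL here is only a placeholder,
and the server remains free to ignore the range:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://www.far-away-site.com/");
        /* ask for the first 100 bytes only; a server that doesn't support
           ranges may still send the whole document */
        curl_easy_setopt(curl, CURLOPT_RANGE, "0-99");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }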
4.12 Why do I get "certificate verify failed" ?
You invoke curl 7.10 or later to communicate on a https:// URL and get an
error back looking something like this:
curl: (35) SSL: error:14090086:SSL routines:
SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Then it means that curl couldn't verify that the server's certificate was
good. Curl verifies the certificate using the CA cert bundle that comes with
the curl installation.
To disable the verification (which makes it act like curl did before 7.10),
use -k. This does however enable man-in-the-middle attacks.
If you get this failure even though you have a CA cert bundle installed and used,
the server's certificate is not signed by one of the CA's in the bundle. It
might for example be self-signed. You then correct this problem by obtaining
a valid CA cert for the server. Or again, decrease the security by disabling
this check.
Details are also in the SSLCERTS file in the release archives, found online
here: http://curl.haxx.se/lxr/source/SSLCERTS
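For programs using libcurl the same verification is controlled through
curl_easy_setopt(). A minimal sketch; the bundle path is only a placeholder:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "HTTPS://your.favourite.ssl.site");
        /* point libcurl at a CA cert bundle of your choice */
        curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/ca-bundle.crt");
        /* or, like -k for the command line tool, disable the verification
           entirely (which opens up for man-in-the-middle attacks): */
        /* curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L); */
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }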
5. libcurl Issues
@@ -686,6 +717,19 @@ FAQ
Previous versions had no persistent connection support.
5.7 Link errors when building libcurl on Windows!
You need to make sure that your project, and all the libraries (both static
and dynamic) that it links against, are compiled/linked against the same run
time library.
This is determined by the /MD, /ML, /MT (and their corresponding /M?d)
options to the command line compiler. /MD (linking against MSVCRT dll) seems
to be the most commonly used option.
(Provided by Andrew Francis)
6. License Issues
Curl and libcurl are released under a MIT/X derivate license. The license is

View File

@@ -4,7 +4,7 @@
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
How cURL Become Like This
How cURL Became Like This
In the second half of 1997, Daniel Stenberg came up with the idea to make
@@ -58,8 +58,8 @@ visits daily.
Released curl 6.0 in September. 15000 lines of code.
December 28 1999, added project to Sourceforge and started using its services
for managing the project.
December 28 1999, added the project on Sourceforge and started using its
services for managing the project.
Spring 2000, major internal overhaul to provide a suitable library interface.
The first non-beta release was named 7.1 and arrived in August. This offered

View File

@@ -28,11 +28,22 @@ UNIX
You probably need to be root when doing the last command.
If you have checked out the sources from the CVS repository, read the
CVS-INFO on how to proceed.
If you want to install curl in a different file hierarchy than /usr/local,
you need to specify that already when running configure:
./configure --prefix=/path/to/curl/tree
If you happen to have write permission in that directory, you can do 'make
install' without being root. An example of this would be to make a local
install in your own home directory:
./configure --prefix=$HOME
make
make install
The configure script always tries to find a working SSL library unless
explicitly told not to. If you have OpenSSL installed in the default search
path for your compiler/linker, you don't need to do anything special. If
@@ -71,33 +82,6 @@ UNIX
LIBS=-lRSAglue -lrsaref
(as suggested by Doug Kaufman)
KNOWN PROBLEMS (these ones should not happen anymore)
If you happen to have autoconf installed, but a version older than 2.12
you will get into trouble. Then you can still build curl by issuing these
commands (note that this requires curl to be built staticly): (from Ralph
Beckmann)
./configure [...]
cd lib; make; cd ..
cd src; make; cd ..
cp src/curl elsewhere/bin/
As suggested by David West, you can make a faked version of autoconf and
autoheader:
----start of autoconf----
#!/bin/bash
#fake autoconf for building curl
if [ "$1" = "--version" ] then
echo "Autoconf version 2.13"
fi
----end of autoconf----
Then make autoheader a symbolic link to the same script and make sure
they're executable and set to appear in the path *BEFORE* the actual (but
obsolete) autoconf and autoheader scripts.
MORE OPTIONS
To force configure to use the standard cc compiler if both cc and gcc are

View File

@@ -404,12 +404,28 @@ SPEED LIMIT
To have curl abort the download if the speed is slower than 3000 bytes per
second for 1 minute, run:
curl -y 3000 -Y 60 www.far-away-site.com
curl -Y 3000 -y 60 www.far-away-site.com
This can very well be used in combination with the overall time limit, so
that the above operation must be completed in whole within 30 minutes:
curl -m 1800 -y 3000 -Y 60 www.far-away-site.com
curl -m 1800 -Y 3000 -y 60 www.far-away-site.com
Forcing curl not to transfer data faster than a given rate is also possible,
which might be useful if you're using a limited bandwidth connection and you
don't want your transfer to use all of it.
Make curl transfer data no faster than 10 kilobytes per second:
curl --limit-rate 10K www.far-away-site.com
or
curl --limit-rate 10240 www.far-away-site.com
Or prevent curl from uploading data faster than 1 megabyte per second:
curl -T upload --limit-rate 1M ftp://uploadshereplease.com
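In a program using libcurl, the speed-abort pair -y/-Y shown above maps to the
CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME options. A minimal sketch
with the same 3000 bytes per second over 60 seconds limit (the URL is a
placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://www.far-away-site.com/");
        /* abort the transfer if it runs slower than 3000 bytes per second... */
        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 3000L);
        /* ...for 60 seconds */
        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 60L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }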
CONFIG FILE
@@ -548,7 +564,7 @@ HTTPS
from sites that require valid certificates. The only drawback is that the
certificate needs to be in PEM-format. PEM is a standard and open format to
store certificates with, but it is not used by the most commonly used
browsers (Netscape and MSEI both use the so called PKCS#12 format). If you
browsers (Netscape and MSIE both use the so called PKCS#12 format). If you
want curl to use the certificates you use with your (favourite) browser, you
may need to download/compile a converter that can convert your browser's
formatted certificates to PEM formatted ones. This kind of converter is
@@ -567,8 +583,8 @@ HTTPS
Many older SSL-servers have problems with SSLv3 or TLS, that newer versions
of OpenSSL etc is using, therefore it is sometimes useful to specify what
SSL-version curl should use. Use -3 or -2 to specify that exact SSL version
to use:
SSL-version curl should use. Use -3, -2 or -1 to specify that exact SSL
version to use (for SSLv3, SSLv2 or TLSv1 respectively):
curl -2 https://secure.site.com/
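The matching libcurl option is CURLOPT_SSLVERSION. A minimal sketch that forces
SSLv2, mirroring the -2 example above (the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://secure.site.com/");
        /* CURL_SSLVERSION_SSLv3 and CURL_SSLVERSION_TLSv1 work the same way;
           the cast is there since the option expects a long */
        curl_easy_setopt(curl, CURLOPT_SSLVERSION, (long)CURL_SSLVERSION_SSLv2);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }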
@@ -826,13 +842,13 @@ MAILING LISTS
Receives notifications on all CVS commits done to the curl source module.
This can become quite a large amount of mails during intense development,
be aware. This is for us who liks email...
be aware. This is for us who like email...
curl-www-commits
Receives notifications on all CVS commits done to the curl www module
(basicly the web site). This can become quite a large amount of mails
during intense changing, be aware. This is for us who liks email...
during intense changing, be aware. This is for us who like email...
Please direct curl questions, feature requests and trouble reports to one of
these mailing lists instead of mailing any individual.

View File

@@ -12,16 +12,20 @@ HTMLPAGES = \
curl.html \
curl-config.html
PDFPAGES = \
curl.pdf \
curl-config.pdf
SUBDIRS = examples libcurl
EXTRA_DIST = MANUAL BUGS CONTRIBUTE FAQ FEATURES INTERNALS \
README.win32 RESOURCES TODO TheArtOfHttpScripting THANKS \
VERSIONS KNOWN_BUGS BINDINGS $(man_MANS) $(HTMLPAGES) \
HISTORY
HISTORY INSTALL libcurl-the-guide $(PDFPAGES)
MAN2HTML= gnroff -man $< | man2html >$@
SUFFIXES = .1 .3 .html
SUFFIXES = .1 .3 .html .pdf
html: $(HTMLPAGES)
cd libcurl; make html
@@ -31,3 +35,13 @@ html: $(HTMLPAGES)
.1.html:
$(MAN2HTML)
MAN2PDF = groff -Tps -man curl.1 $< >$@
pdf:
for file in $(man_MANS); do \
foo=`echo $$file | sed -e 's/\.[0-9]$$//g'`; \
groff -Tps -man $$file >$$foo.ps; \
ps2pdf $$foo.ps $$foo.pdf; \
done
cd libcurl; make pdf

View File

@@ -15,7 +15,8 @@ TODO
* Introduce an interface to libcurl that allows applications to easier get to
know what cookies that are received. Pushing interface that calls a
callback on each received cookie? Querying interface that asks about
existing cookies? We probably need both.
existing cookies? We probably need both. Enable applications to modify
existing cookies as well.
* Make content encoding/decoding internally be made using a filter system.
@@ -23,13 +24,6 @@ TODO
less copy of data and thus a faster operation.
[http://curl.haxx.se/dev/no_copy_callbacks.txt]
* Run-time querying about library characterics. What protocols do this
running libcurl support? What is the version number of the running libcurl
(returning the well-defined version-#define). This could possibly be made
by allowing curl_easy_getinfo() work with a NULL pointer for global info,
but perhaps better would be to introduce a new curl_getinfo() (or similar)
function for global info reading.
* Add asynchronous name resolving (http://daniel.haxx.se/resolver/). This
should be made to work on most of the supported platforms, or otherwise it
isn't really interesting.
@@ -51,12 +45,9 @@ TODO
>4GB all over. Bug reports (and source reviews) indicate that it doesn't
currently work properly.
* Make the built-in progress meter use its own dedicated output stream, and
make it possible to set it. Use stderr by default.
* CURLOPT_MAXFILESIZE. Prevent downloads that are larger than the specified
size. CURLE_FILESIZE_EXCEEDED would then be returned. Gautam Mani
requested. That is, the download should even begin but be aborted
requested. That is, the download should not even begin but be aborted
immediately.
* Allow the http_proxy (and other) environment variables to contain user and
@@ -66,8 +57,7 @@ TODO
LIBCURL - multi interface
* Make sure we don't ever loop because of non-blocking sockets return
EWOULDBLOCK or similar. This concerns the HTTP request sending (and
especially regular HTTP POST), the FTP command sending etc.
EWOULDBLOCK or similar. This concerns the FTP command sending etc.
* Make uploads treated better. We need a way to tell libcurl we have data to
write, as the current system expects us to upload data each time the socket
@@ -86,6 +76,9 @@ TODO
receiver will convert the data from the standard form to his own internal
form."
* Since USERPWD always override the user and password specified in URLs, we
might need another way to specify user+password for anonymous ftp logins.
* An option to only download remote FTP files if they're newer than the local
one is a good idea, and it would fit right into the same syntax as the
already working http ditto works. It of course requires that 'MDTM' works,
@@ -97,23 +90,12 @@ TODO
HTTP
* HTTP PUT for files passed on stdin *OR* when the --crlf option is
used. Requires libcurl to send the file with chunked content
encoding. [http://curl.haxx.se/dev/HTTP-PUT-stdin.txt] When the filter
system mentioned above gets real, it'll be a piece of cake to add.
* Pass a list of host name to libcurl to which we allow the user name and
password to get sent to. Currently, it only get sent to the host name that
the first URL uses (to prevent others from being able to read it), but this
also prevents the authentication info from getting sent when following
locations to legitimate other host names.
* "Content-Encoding: compress/gzip/zlib" HTTP 1.1 clearly defines how to get
and decode compressed documents. There is the zlib that is pretty good at
decompressing stuff. This work was started in October 1999 but halted again
since it proved more work than we thought. It is still a good idea to
implement though. This requires the filter system mentioned above.
* Authentication: NTLM. Support for that MS crap called NTLM
authentication. MS proxies and servers sometime require that. Since that
protocol is a proprietary one, it involves reverse engineering and network

View File

@@ -2,7 +2,7 @@
.\" nroff -man curl-config.1
.\" Written by Daniel Stenberg
.\"
.TH curl-config 1 "21 January 2002" "Curl 7.9.3" "curl-config manual"
.TH curl-config 1 "8 Oct 2002" "Curl 7.10" "curl-config manual"
.SH NAME
curl-config \- Get information about a libcurl installation
.SH SYNOPSIS
@@ -11,6 +11,8 @@ curl-config \- Get information about a libcurl installation
.B curl-config
displays information about a previous curl and libcurl installation.
.SH OPTIONS
.IP "--ca"
Displays the built-in path to the CA cert bundle this libcurl uses.
.IP "--cc"
Displays the compiler used to build libcurl.
.IP "--cflags"

View File

@@ -122,6 +122,9 @@ Use "-C -" to tell curl to automatically find out where/how to resume the
transfer. It then uses the given output/input files to figure that out.
If this option is used several times, the last one will be used.
.IP "--create-dirs"
When used in conjunction with the -o option, curl will create the necessary
local directory hierarchy as needed.
.IP "--crlf"
(FTP) Convert LF to CRLF in upload. Useful for MVS (OS/390).
@@ -315,6 +318,8 @@ to be made secure by using the CA certificate bundle installed by
default. This makes all connections considered "insecure" to fail unless
-k/--insecure is used.
This option is ignored if --cacert or --capath is used!
If this option is used twice, the second time will again disable it.
.IP "--krb4 <level>"
(FTP) Enable kerberos4 authentication and use. The level must be entered and
appended. Appending 'k' or 'K' will count the number as kilobytes, 'm' or 'M'
makes it megabytes while 'g' or 'G' makes it gigabytes. Examples: 200K, 3m and
1G.
This option was introduced in curl 7.9.9.
This option was introduced in curl 7.10.
If this option is used several times, the last one will be used.
.IP "-l/--list-only"
@@ -425,6 +430,8 @@ or use several variables like:
curl http://{site,host}.host[1-5].com -o "#1_#2"
You may use this option as many times as you have number of URLs.
See also the --create-dirs option to create the local directories dynamically.
.IP "-O/--remote-name"
Write output to a local file named like the remote file we get. (Only
the file part of the remote file is used, the path is cut off.)
@@ -820,7 +827,8 @@ FTP couldn't set binary. Couldn't change transfer method to binary.
.IP 18
Partial file. Only a part of the file was transfered.
.IP 19
FTP couldn't RETR file. The RETR command failed.
FTP couldn't download/access the given file, the RETR (or similar) command
failed.
.IP 20
FTP write error. The transfer was reported bad by the server.
.IP 21

View File

@@ -83,7 +83,8 @@ int main(int argc, char **argv)
default:
/* one or more of curl's file descriptors say there's data to read
or write */
curl_multi_perform(multi_handle, &still_running);
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
break;
}
}
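The change above (repeated in two more multi examples further down) drains
curl_multi_perform() for as long as it returns CURLM_CALL_MULTI_PERFORM. A
stand-alone sketch of the full select() loop this snippet lives in, with a
made-up URL and no error checking:

    #include <sys/select.h>   /* select(), fd_set, struct timeval */
    #include <curl/curl.h>

    int main(void)
    {
      CURL *easy = curl_easy_init();
      CURLM *multi = curl_multi_init();
      int still_running = 0;

      curl_easy_setopt(easy, CURLOPT_URL, "http://www.example.com/");
      curl_multi_add_handle(multi, easy);

      /* run until libcurl no longer asks to be called right away */
      while(CURLM_CALL_MULTI_PERFORM ==
            curl_multi_perform(multi, &still_running));

      while(still_running) {
        fd_set fdread, fdwrite, fdexcep;
        int maxfd = -1;
        struct timeval timeout = { 1, 0 };   /* wait at most one second */

        FD_ZERO(&fdread);
        FD_ZERO(&fdwrite);
        FD_ZERO(&fdexcep);
        curl_multi_fdset(multi, &fdread, &fdwrite, &fdexcep, &maxfd);

        select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout);

        /* whatever select() reported, let libcurl do its thing */
        while(CURLM_CALL_MULTI_PERFORM ==
              curl_multi_perform(multi, &still_running));
      }

      curl_multi_remove_handle(multi, easy);
      curl_easy_cleanup(easy);
      curl_multi_cleanup(multi);
      return 0;
    }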

View File

@@ -80,7 +80,8 @@ int main(int argc, char **argv)
case 0:
default:
/* timeout or readable/writable sockets */
curl_multi_perform(multi_handle, &still_running);
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
break;
}
}

View File

@@ -74,7 +74,8 @@ int main(int argc, char **argv)
case 0:
default:
/* timeout or readable/writable sockets */
curl_multi_perform(multi_handle, &still_running);
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
break;
}
}

View File

@@ -66,7 +66,7 @@ int main(int argc, char **argv)
curl = curl_easy_init();
if(curl) {
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_URL, "HTTPS://curl.haxx.se");
curl_easy_setopt(curl, CURLOPT_URL, "HTTPS://your.favourite.ssl.site");
curl_easy_setopt(curl, CURLOPT_WRITEHEADER, headerfile);
while(1) /* do some ugly short cut... */

View File

@@ -232,6 +232,7 @@ Multi-threading issues
For SIGPIPE info see the UNIX Socket FAQ at
http://www.unixguide.net/network/socketfaq/2.22.shtml
Also, note that CURLOPT_DNS_USE_GLOBAL_CACHE is not thread-safe.
When It Doesn't Work
@@ -255,6 +256,9 @@ When It Doesn't Work
possible of your code that uses libcurl, operating system name and version,
compiler name and version etc.
If CURLOPT_VERBOSE is not enough, you can increase the amount of debug data
your application receives by using CURLOPT_DEBUGFUNCTION.
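A minimal sketch of such a callback; the URL is a placeholder and the callback
follows the curl_debug_callback prototype declared in curl/curl.h:

    #include <stdio.h>
    #include <curl/curl.h>

    /* receives libcurl's informational text plus all protocol data;
       here we only print the text lines to stderr */
    static int my_trace(CURL *handle, curl_infotype type, char *data,
                        size_t size, void *userp)
    {
      (void)handle;
      (void)userp;
      if(type == CURLINFO_TEXT)
        fwrite(data, 1, size, stderr);
      return 0;   /* the callback must return 0 */
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/");
        curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace);
        /* the callback is only used when VERBOSE is enabled */
        curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }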
Getting some in-depth knowledge about the protocols involved is never wrong,
and if you're trying to do funny things, you might very well understand
libcurl and how to use it better if you study the appropriate RFC documents
@@ -293,8 +297,8 @@ Upload Data to a Remote Site
curl_easy_setopt(easyhandle, CURLOPT_UPLOAD, TRUE);
A few protocols won't behave properly when uploads are done without any prior
knowledge of the expected file size. HTTP PUT is one example [1]. So, set the
upload file size using the CURLOPT_INFILESIZE like this:
knowledge of the expected file size. So, set the upload file size using the
CURLOPT_INFILESIZE for all known file sizes like this[1]:
curl_easy_setopt(easyhandle, CURLOPT_INFILESIZE, file_size);
@@ -404,7 +408,7 @@ HTTP POSTing
headers = curl_slist_append(headers, "Content-Type: text/xml");
/* post binary data */
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELD, binaryptr);
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDS, binaryptr);
/* set the size of the postfields data */
curl_easy_setopt(easyhandle, CURLOPT_POSTFIELDSIZE, 23);
@@ -726,6 +730,35 @@ Persistancy Is The Way to Happiness
CURLOPT_FORBID_REUSE to TRUE.
HTTP Headers Used by libcurl
When you use libcurl to do HTTP requests, it'll pass along a series of
headers automatically. It might be good for you to know and understand these
ones.
Host
This header is required by HTTP 1.1 and even many 1.0 servers and should
be the name of the server we want to talk to. This includes the port
number if anything but default.
Pragma
"no-cache". Tells a possible proxy to not grab a copy from the cache but
to fetch a fresh one.
Accept:
"image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*". Cloned from a
browser once a hundred years ago.
Expect:
When doing multi-part formposts, libcurl will set this header to
"100-continue" to ask the server for an "OK" message before it proceeds
with sending the data part of the post.
Customizing Operations
There is an ongoing development today where more and more protocols are built
@@ -738,20 +771,24 @@ Customizing Operations
libcurl is your friend here too.
If just changing the actual HTTP request keyword is what you want, like when
GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST is there
for you. It is very simple to use:
CUSTOMREQUEST
If just changing the actual HTTP request keyword is what you want, like
when GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST
is there for you. It is very simple to use:
curl_easy_setopt(easyhandle, CURLOPT_CUSTOMREQUEST, "MYOWNRUQUEST");
When using the custom request, you change the request keyword of the actual
request you are performing. Thus, by default you make GET request but you can
also make a POST operation (as described before) and then replace the POST
keyword if you want to. You're the boss.
When using the custom request, you change the request keyword of the
actual request you are performing. Thus, by default you make GET request
but you can also make a POST operation (as described before) and then
replace the POST keyword if you want to. You're the boss.
Modify Headers
HTTP-like protocols pass a series of headers to the server when doing the
request, and you're free to pass any amount of extra headers that you think
fit. Adding headers are this easy:
request, and you're free to pass any amount of extra headers that you
think fit. Adding headers are this easy:
struct curl_slist *headers=NULL; /* init to NULL is important */
@@ -766,43 +803,59 @@ Customizing Operations
curl_slist_free_all(headers); /* free the header list */
... and if you think some of the internally generated headers, such as
User-Agent:, Accept: or Host: don't contain the data you want them to
contain, you can replace them by simply setting them too:
Accept: or Host: don't contain the data you want them to contain, you can
replace them by simply setting them too:
headers = curl_slist_append(headers, "User-Agent: 007");
headers = curl_slist_append(headers, "Accept: Agent-007");
headers = curl_slist_append(headers, "Host: munged.host.line");
If you replace an existing header with one with no contents, you will prevent
the header from being sent. Like if you want to completely prevent the
"Accept:" header to be sent, you can disable it with code similar to this:
Delete Headers
If you replace an existing header with one with no contents, you will
prevent the header from being sent. Like if you want to completely prevent
the "Accept:" header from being sent, you can disable it with code similar to
this:
headers = curl_slist_append(headers, "Accept:");
Both replacing and cancelling internal headers should be done with careful
consideration and you should be aware that you may violate the HTTP protocol
when doing so.
consideration and you should be aware that you may violate the HTTP
protocol when doing so.
Enforcing chunked transfer-encoding
By making sure a request uses the custom header "Transfer-Encoding:
chunked" when doing a non-GET HTTP operation, libcurl will switch over to
"chunked" upload, even though the size of the data to upload might be
known. By default, libcurl usually switches over to chunked upload
automatically if the upload data size is unknown.
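A minimal sketch of forcing chunked upload this way; the URL, the file name and
the read callback are placeholders:

    #include <stdio.h>
    #include <curl/curl.h>

    /* hand libcurl data read from a plain FILE *; returning 0 ends the upload */
    static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *stream)
    {
      return fread(ptr, size, nmemb, (FILE *)stream);
    }

    int main(void)
    {
      FILE *in = fopen("upload-this", "rb");
      CURL *curl = curl_easy_init();
      struct curl_slist *headers = NULL;

      if(in && curl) {
        headers = curl_slist_append(headers, "Transfer-Encoding: chunked");
        curl_easy_setopt(curl, CURLOPT_URL, "http://www.example.com/upload");
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
        curl_easy_setopt(curl, CURLOPT_READDATA, in);
        /* no CURLOPT_INFILESIZE is set; the chunked encoding makes the
           total size unnecessary */
        curl_easy_perform(curl);
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
      }
      if(in)
        fclose(in);
      return 0;
    }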
HTTP Version
There's only one aspect left in the HTTP requests that we haven't yet
mentioned how to modify: the version field. All HTTP requests include the
version number to tell the server which version we support. libcurl speaks
HTTP 1.1 by default. Some very old servers don't like getting 1.1-requests
and when dealing with stubborn old things like that, you can tell libcurl to
use 1.0 instead by doing something like this:
and when dealing with stubborn old things like that, you can tell libcurl
to use 1.0 instead by doing something like this:
curl_easy_setopt(easyhandle, CURLOPT_HTTP_VERSION, CURLHTTP_VERSION_1_0);
curl_easy_setopt(easyhandle, CURLOPT_HTTP_VERSION,
CURL_HTTP_VERSION_1_0);
Not all protocols are HTTP-like, and thus the above may not help you when you
want to make for example your FTP transfers to behave differently.
FTP Custom Commands
Not all protocols are HTTP-like, and thus the above may not help you when
you want to make for example your FTP transfers to behave differently.
Sending custom commands to a FTP server means that you need to send the
comands exactly as the FTP server expects them (RFC959 is a good guide here),
and you can only use commands that work on the control-connection alone. All
kinds of commands that requires data interchange and thus needs a
data-connection must be left to libcurl's own judgement. Also be aware that
libcurl will do its very best to change directory to the target directory
before doing any transfer, so if you change directory (with CWD or similar)
you might confuse libcurl and then it might not attempt to transfer the file
in the correct remote directory.
commands exactly as the FTP server expects them (RFC959 is a good guide
here), and you can only use commands that work on the control-connection
alone. All kinds of commands that requires data interchange and thus needs
a data-connection must be left to libcurl's own judgement. Also be aware
that libcurl will do its very best to change directory to the target
directory before doing any transfer, so if you change directory (with CWD
or similar) you might confuse libcurl and then it might not attempt to
transfer the file in the correct remote directory.
A little example that deletes a given file before an operation:
@@ -815,24 +868,32 @@ Customizing Operations
curl_slist_free_all(headers); /* free the header list */
If you would instead want this operation (or chain of operations) to happen
_after_ the data transfer took place the option to curl_easy_setopt() would
instead be called CURLOPT_POSTQUOTE and used the exact same way.
If you would instead want this operation (or chain of operations) to
happen _after_ the data transfer took place the option to
curl_easy_setopt() would instead be called CURLOPT_POSTQUOTE and used the
exact same way.
The custom FTP commands will be issued to the server in the same order they
are added to the list, and if a command gets an error code returned back
from the server, no more commands will be issued and libcurl will bail out
with an error code (CURLE_FTP_QUOTE_ERROR). Note that if you use
CURLOPT_QUOTE to send commands before a transfer, no transfer will
actually take place when a quote command has failed.
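A minimal sketch of such a pre-transfer command, assuming easyhandle already has an FTP URL set and "file-to-remove" stands in for a real remote file name:
  struct curl_slist *headers = NULL;
  headers = curl_slist_append(headers, "DELE file-to-remove");
  curl_easy_setopt(easyhandle, CURLOPT_QUOTE, headers); /* run before the transfer */
  curl_easy_perform(easyhandle);                        /* do the transfer as usual */
  curl_slist_free_all(headers);                         /* free the header list */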
If you set CURLOPT_HEADER to true, you will tell libcurl to get
information about the target file and output "headers" about it. The
headers will be in "HTTP-style", looking like they do in HTTP.
The option to enable headers or to run custom FTP commands may be useful
to combine with CURLOPT_NOBODY. If this option is set, no actual file
content transfer will be performed.
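A minimal sketch, assuming easyhandle points at an FTP file URL, that fetches only the HTTP-style "headers" about the remote file without transferring its contents:
  curl_easy_setopt(easyhandle, CURLOPT_HEADER, 1);  /* output the "headers" */
  curl_easy_setopt(easyhandle, CURLOPT_NOBODY, 1);  /* skip the file contents */
  curl_easy_perform(easyhandle);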
FTP Custom CUSTOMREQUEST
If you want to list the contents of an FTP directory using your own defined
FTP command, CURLOPT_CUSTOMREQUEST will do just that. "NLST" is the
default one for listing directories, but you're free to pass in your idea
of a good alternative.
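For instance, a sketch that asks for a full directory listing with the standard FTP LIST command instead of the default NLST (assuming easyhandle points at an FTP directory URL):
  curl_easy_setopt(easyhandle, CURLOPT_CUSTOMREQUEST, "LIST");
  curl_easy_perform(easyhandle); /* the listing arrives through the write callback */
  curl_easy_setopt(easyhandle, CURLOPT_CUSTOMREQUEST, NULL); /* back to the default */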
Cookies Without Chocolate Chips
@@ -1007,19 +1068,30 @@ SSL, Certificates and Other Tricks
[ seeding, passwords, keys, certificates, ENGINE, ca certs ]
Multiple Transfers Using the multi Interface
The easy interface as described in detail in this document is a synchronous
interface that transfers one file at a time and doesn't return until it is
done.
The multi interface on the other hand, allows your program to transfer
multiple files in both directions at the same time, without forcing you to
use multiple threads.
[fill in lots of more multi stuff here]
Future
[ multi interface, sharing between handles, mutexes, pipelining ]
[ sharing between handles, mutexes, pipelining ]
-----
Footnotes:
[1] = libcurl 7.10.3 and later have the ability to switch over to chunked
Transfer-Encoding in cases where HTTP uploads are done with data of an
unknown size.
[2] = This happens on Windows machines when libcurl is built and used as a
DLL. However, you can still do this on Windows if you link with a static

View File

@@ -1,3 +1,5 @@
Makefile
Makefile.in
*html
*ps
*pdf

View File

@@ -75,7 +75,42 @@ HTMLPAGES = \
libcurl-errors.html \
index.html
EXTRA_DIST = $(man_MANS) $(HTMLPAGES)
PDFPAGES = \
curl_easy_cleanup.pdf \
curl_easy_getinfo.pdf \
curl_easy_init.pdf \
curl_easy_perform.pdf \
curl_easy_setopt.pdf \
curl_easy_duphandle.pdf \
curl_formadd.pdf \
curl_formparse.pdf \
curl_formfree.pdf \
curl_getdate.pdf \
curl_getenv.pdf \
curl_slist_append.pdf \
curl_slist_free_all.pdf \
curl_version.pdf \
curl_version_info.pdf \
curl_escape.pdf \
curl_unescape.pdf \
curl_free.pdf \
curl_strequal.pdf \
curl_strnequal.pdf \
curl_mprintf.pdf \
curl_global_init.pdf \
curl_global_cleanup.pdf \
libcurl.pdf \
curl_multi_add_handle.pdf \
curl_multi_cleanup.pdf \
curl_multi_fdset.pdf \
curl_multi_info_read.pdf \
curl_multi_init.pdf \
curl_multi_perform.pdf \
curl_multi_remove_handle.pdf \
libcurl-multi.pdf \
libcurl-errors.pdf
EXTRA_DIST = $(man_MANS) $(HTMLPAGES) $(PDFPAGES)
MAN2HTML= gnroff -man $< | man2html >$@
@@ -88,3 +123,10 @@ html: $(HTMLPAGES)
.1.html:
$(MAN2HTML)
pdf:
for file in $(man_MANS); do \
foo=`echo $$file | sed -e 's/\.[0-9]$$//g'`; \
groff -Tps -man $$file >$$foo.ps; \
ps2pdf $$foo.ps $$foo.pdf; \
done

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_cleanup 3 "4 March 2002" "libcurl 7.7" "libcurl Manual"
.TH curl_easy_cleanup 3 "13 Nov 2002" "libcurl 7.7" "libcurl Manual"
.SH NAME
curl_easy_cleanup - End a libcurl easy session
.SH SYNOPSIS
@@ -18,6 +18,9 @@ opposite of the \fIcurl_easy_init\fP function and must be called with the same
This will effectively close all connections this handle has used and possibly
has kept open until now. Don't call this function if you intend to transfer
more files.
When you've called this, you can safely remove all the strings you've
previously told libcurl to use, as it won't use them anymore now.
.SH RETURN VALUE
None
.SH "SEE ALSO"

View File

@@ -115,8 +115,12 @@ Pass a pointer to a 'char *' to receive the content-type of the downloaded
object. This is the value read from the Content-Type: field. If you get NULL,
it means that the server didn't send a valid Content-Type header or that the
protocol used doesn't support this. (Added in 7.9.4)
.TP
.B CURLINFO_PRIVATE
Pass a pointer to a 'char *' to receive the pointer to the private data
associated with the curl handle (set with the CURLOPT_PRIVATE option to curl_easy_setopt).
(Added in 7.10.3)
.PP
.SH RETURN VALUE
If the operation was successful, CURLE_OK is returned. Otherwise an
appropriate error code will be returned.

View File

@@ -1,8 +1,7 @@
.\" You can view this file with:
.\" nroff -man [file]
.\" $Id$
.\"
.TH curl_easy_setopt 3 "18 Sep 2002" "libcurl 7.10" "libcurl Manual"
.TH curl_easy_setopt 3 "3 Dec 2002" "libcurl 7.10.3" "libcurl Manual"
.SH NAME
curl_easy_setopt - set options for a curl easy handle
.SH SYNOPSIS
@@ -23,7 +22,8 @@ curl_easy_setopt() calls in the setup phase.
\fBNOTE:\fP strings passed to libcurl as 'char *' arguments, will not be
copied by the library. Instead you should keep them available until libcurl no
longer needs them. Failing to do so will cause very odd behavior or even
crashes. libcurl will need them until you call curl_easy_cleanup() or you set
the same option again to use a different pointer.
\fBNOTE2:\fP options set with this function call are valid for the forthcoming
data transfers that are performed when you invoke \fIcurl_easy_perform\fP.
@@ -70,10 +70,10 @@ Function pointer that should match the following prototype: \fBsize_t
function( void *ptr, size_t size, size_t nmemb, void *stream);\fP This
function gets called by libcurl as soon as there is data received that needs
to be saved. The size of the data pointed to by \fIptr\fP is \fIsize\fP
multiplied with \fInmemb\fP; it will not be zero terminated. Return the number
of bytes actually taken care of. If that amount differs from the amount passed
to your function, it'll signal an error to the library and it will abort the
transfer and return \fICURLE_WRITE_ERROR\fP.
Set the \fIstream\fP argument with the \fBCURLOPT_FILE\fP option.
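A sketch of a matching callback (names are illustrative, not part of the man page) that writes the received, not zero terminated, data to the FILE * given with CURLOPT_FILE:
  size_t my_write(void *ptr, size_t size, size_t nmemb, void *stream)
  {
    /* return the number of bytes handled; anything else aborts the transfer */
    return fwrite(ptr, size, nmemb, (FILE *)stream) * size;
  }

  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, my_write);
  curl_easy_setopt(handle, CURLOPT_FILE, somefile);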
@@ -166,7 +166,7 @@ code). (Added in 7.7.2)
.B CURLOPT_WRITEHEADER
Pass a pointer to be used to write the header part of the received data to. If
you don't use your own callback to take care of the writing, this must be a
valid FILE *. See also the \fICURLOPT_HEADERFUNCTION\fP option above on how to
set a custom get-all-headers callback.
.TP
.B CURLOPT_DEBUGFUNCTION
@@ -175,6 +175,10 @@ curl_debug_callback (CURL *, curl_infotype, char *, size_t, void *);\fP
This function will receive debug information if CURLOPT_VERBOSE is
enabled. The curl_infotype argument specifies what kind of information it
is. This function must return 0.
NOTE: the data pointed to by the char * passed to this function WILL NOT be
zero terminated, but will be exactly of the size as told by the size_t
argument.
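A sketch of a conforming debug callback (names are illustrative); note that it uses the size argument rather than assuming zero termination:
  int my_trace(CURL *handle, curl_infotype type, char *data, size_t size, void *userp)
  {
    (void)handle; (void)type; (void)userp;
    fwrite(data, 1, size, stderr);  /* never strlen() this data */
    return 0;                       /* this function must return 0 */
  }

  curl_easy_setopt(handle, CURLOPT_DEBUGFUNCTION, my_trace);
  curl_easy_setopt(handle, CURLOPT_VERBOSE, 1);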
.TP
.B CURLOPT_DEBUGDATA
Pass a pointer to whatever you want passed in to your CURLOPT_DEBUGFUNCTION in
@@ -235,7 +239,7 @@ you tunnel through the HTTP proxy. Such tunneling is activated with
Pass a long with this option to set the proxy port to connect to unless it is
specified in the proxy string \fICURLOPT_PROXY\fP.
.TP
.B CURLOPT_PROXYTYPE
Pass a long with this option to set type of the proxy. Available options for
this are CURLPROXY_HTTP and CURLPROXY_SOCKS5, with the HTTP one being
default. (Added in 7.10)
@@ -322,6 +326,12 @@ prompt function.
.PP
.SH HTTP OPTIONS
.TP 0.4i
.B CURLOPT_ENCODING
Two encodings are supported: \fIidentity\fP, which does nothing, and
\fIdeflate\fP, which requests the server to compress its response using the
zlib algorithm. This is not an order; the server may or may not do it.
See the special file lib/README.encoding for details.
.TP
.B CURLOPT_FOLLOWLOCATION
A non-zero parameter tells the library to follow any Location: header that the
server sends as part of a HTTP header.
@@ -395,11 +405,31 @@ list. If you add a header that is otherwise generated and used by libcurl
internally, your added one will be used instead. If you add a header with no
contents as in 'Accept:' (no data on the right side of the colon), the
internally used header will get disabled. Thus, using this option you can add
new headers, replace internal headers and remove internal headers. The
headers included in the linked list must not be CRLF-terminated, because
curl adds CRLF after each header item. Failure to comply with this will
result in strange bugs because the server will most likely ignore part
of the headers you specified.
\fBNOTE:\fP The most commonly replaced headers have "shortcuts" in the options
CURLOPT_COOKIE, CURLOPT_USERAGENT and CURLOPT_REFERER.
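An illustrative sketch (the header strings are examples only) of adding a new header and disabling an internal one; note that none of the strings is CRLF-terminated:
  struct curl_slist *headers = NULL;
  headers = curl_slist_append(headers, "X-silly-header: yes"); /* a brand new header */
  headers = curl_slist_append(headers, "Accept:");             /* disables the internal Accept: header */
  curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
  /* ... perform the transfer, then ... */
  curl_slist_free_all(headers);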
.TP
.B CURLOPT_HTTP200ALIASES
Pass a pointer to a linked list of aliases to be treated as valid HTTP 200
responses. Some servers respond with a custom header response line. For
example, IceCast servers respond with "ICY 200 OK". By including this string
in your list of aliases, the response will be treated as a valid HTTP header
line such as "HTTP/1.0 200 OK". (Added in 7.10.3)
The linked list should be a fully valid list of struct curl_slist structs, and
be properly filled in. Use \fIcurl_slist_append(3)\fP to create the list and
\fIcurl_slist_free_all(3)\fP to clean up an entire list.
\fBNOTE:\fP The alias itself is not parsed for any version strings. So if your
alias is "MYHTTP/9.9", libcurl will not treat the server as responding with
HTTP version 9.9. Instead libcurl will use the value set by option
\fICURLOPT_HTTP_VERSION\fP.
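A sketch of the alias list described above, using the IceCast response line mentioned in the text (handle is an assumed easy handle):
  struct curl_slist *aliases = NULL;
  aliases = curl_slist_append(aliases, "ICY 200 OK");
  curl_easy_setopt(handle, CURLOPT_HTTP200ALIASES, aliases);
  /* ... perform the transfer, then ... */
  curl_slist_free_all(aliases);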
.TP
.B CURLOPT_COOKIE
Pass a pointer to a zero terminated string as parameter. It will be used to
set a cookie in the http request. The format of the string should be
@@ -577,7 +607,7 @@ aborting perfectly normal operations. This option will cause curl to use the
SIGALRM to enable time-outing system calls.
\fBNOTE:\fP this is not recommended to use in unix multi-threaded programs, as
it uses signals unless CURLOPT_NOSIGNAL (see above) is set.
.TP
.B CURLOPT_LOW_SPEED_LIMIT
Pass a long as parameter. It contains the transfer speed in bytes per second
@@ -640,7 +670,7 @@ connection timeout (it will then only timeout on the system's internal
timeouts). See also the \fICURLOPT_TIMEOUT\fP option.
\fBNOTE:\fP this is not recommended to use in unix multi-threaded programs, as
it uses signals unless CURLOPT_NOSIGNAL (see above) is set.
.PP
.SH SSL and SECURITY OPTIONS
.TP 0.4i
@@ -706,10 +736,13 @@ Pass a long as parameter. Set what version of SSL to attempt to use, 2 or
servers make this difficult, which is why you at times may have to use this option.
.TP
.B CURLOPT_SSL_VERIFYPEER
Pass a long that is set to a zero value to stop curl from verifying the peer's
certificate (starting with 7.10, this option is set to TRUE by default). Alternate
certificates to verify against can be specified with the CURLOPT_CAINFO option
(Added in 7.4.2) or a certificate directory can be specified with the
CURLOPT_CAPATH option (Added in 7.9.8). As of 7.10, curl installs a default
bundle. CURLOPT_SSL_VERIFYHOST may also need to be set to 1 or 0 if
CURLOPT_SSL_VERIFYPEER is disabled (it defaults to 2).
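A sketch of relaxing both checks, for instance against a self-signed test certificate (handle is an assumed easy handle):
  curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, 0); /* don't verify the peer certificate */
  curl_easy_setopt(handle, CURLOPT_SSL_VERIFYHOST, 0); /* don't check the Common name either */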
.TP
.B CURLOPT_CAINFO
Pass a char * to a zero terminated string naming a file holding one or more
@@ -736,7 +769,8 @@ socket. It will be used to seed the random engine for SSL.
.B CURLOPT_SSL_VERIFYHOST
Pass a long. Set if we should verify the Common name from the peer certificate
in the SSL handshake, set 1 to check existence, 2 to ensure that it matches
the provided hostname. This is by default set to 2. (Added in 7.8.1, default
changed in 7.10)
.TP
.B CURLOPT_SSL_CIPHER_LIST
Pass a char *, pointing to a zero terminated string holding the list of
@@ -757,6 +791,13 @@ krb4 awareness. This is a string, 'clear', 'safe', 'confidential' or
will be used. Set the string to NULL to disable kerberos4. The kerberos
support only works for FTP. (Added in 7.3)
.PP
.SH OTHER OPTIONS
.TP 0.4i
.B CURLOPT_PRIVATE
Pass a char * as parameter, pointing to data that should be associated with
the curl handle. The pointer can be subsequently retrieved using the
CURLINFO_PRIVATE option to curl_easy_getinfo. (Added in 7.10.3)
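A sketch of stashing and later retrieving such a pointer ('priv' stands in for an application object of your own):
  curl_easy_setopt(handle, CURLOPT_PRIVATE, (char *)priv);
  /* ... later, for instance when identifying a handle ... */
  char *retrieved = NULL;
  curl_easy_getinfo(handle, CURLINFO_PRIVATE, &retrieved);
  /* 'retrieved' now holds the same pointer that was set above */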
.PP
.SH RETURN VALUE
CURLE_OK (zero) means that the option was set properly, non-zero means an
error occurred as \fI<curl/curl.h>\fP defines. See the \fIlibcurl-errors.3\fP

View File

@@ -19,78 +19,104 @@ the \fIfirstitem\fP pointer as parameter to \fBCURLOPT_HTTPPOST\fP.
\fIlastitem\fP is set after each call and on repeated invokes it should be
left as set to allow repeated invokes to find the end of the list faster.
After the \fIlastitem\fP pointer follow the real arguments. (If the following
description confuses you, jump directly to the examples):
\fBCURLFORM_COPYNAME\fP or \fBCURLFORM_PTRNAME\fP followed by a string is used
for the name of the section. Optionally one may use \fBCURLFORM_NAMELENGTH\fP
to specify the length of the name (allowing null characters within the
name). All options that use the word COPY in their names copy the given
contents, while the ones with PTR in their names simply points to the (static)
data you must make sure remain until curl no longer needs it.
The options for providing values are: \fBCURLFORM_COPYCONTENTS\fP,
\fBCURLFORM_PTRCONTENTS\fP, \fBCURLFORM_FILE\fP, \fBCURLFORM_BUFFER\fP,
or \fBCURLFORM_FILECONTENT\fP followed by a char or void pointer
(allowed for PTRCONTENTS).
\fBCURLFORM_FILECONTENT\fP does a normal post like \fBCURLFORM_COPYCONTENTS\fP
but the actual value is read from the filename given as a string.
Other arguments may be \fBCURLFORM_CONTENTTYPE\fP if the user wishes to
specify one (for FILE if no type is given the library tries to provide the
correct one; for CONTENTS no Content-Type is sent in this case).
For \fBCURLFORM_PTRCONTENTS\fP or \fBCURLFORM_COPYNAME\fP the user may also
add \fBCURLFORM_CONTENTSLENGTH\fP followed by the length as a long (if not
given the library will use strlen to determine the length).
For \fBCURLFORM_FILE\fP the user may send multiple files in one section by
providing multiple \fBCURLFORM_FILE\fP arguments each followed by the filename
(and each FILE is allowed to have a CONTENTTYPE).
\fBCURLFORM_BUFFER\fP
tells libcurl that a buffer is to be used to upload data instead of using a
file. The value of the next parameter is used as the value of the "filename"
parameter in the content header.
\fBCURLFORM_BUFFERPTR\fP
tells libcurl that the address of the next parameter is a pointer to the buffer
containing data to upload. The buffer containing this data must not be freed
until after curl_easy_cleanup is called.
\fBCURLFORM_BUFFERLENGTH\fP
tells libcurl that the length of the buffer to upload is the value of the
next parameter.
Another possibility to send options to curl_formadd() is the
\fBCURLFORM_ARRAY\fP option, that passes a struct curl_forms array pointer as
its value. Each curl_forms structure element has a CURLformoption and a char
pointer. The final element in the array must be a CURLFORM_END. All available
options can be used in an array, except the CURLFORM_ARRAY option itself!
Should you need to specify extra headers for the form POST section, use
\fBCURLFORM_CONTENTHEADER\fP. This takes a curl_slist prepared in the usual way
using \fBcurl_slist_append\fP and appends the list of headers to those Curl
automatically generates for \fBCURLFORM_CONTENTTYPE\fP and the content
disposition. The list must exist while the POST occurs, if you free it before
the post completes you may experience problems.
The last argument in such an array must always be \fBCURLFORM_END\fP.
After the \fIlastitem\fP pointer follow the real arguments.
The pointers \fI*firstitem\fP and \fI*lastitem\fP should both be pointing to
NULL in the first call to this function. All list-data will be allocated by
the function itself. You must call \fIcurl_formfree\fP after the form post has
been done to free the resources again.
This function will copy all input data except the data pointed to by the
arguments after \fBCURLFORM_PTRNAME\fP and \fBCURLFORM_PTRCONTENTS\fP and keep
its own version of it allocated until you call \fIcurl_formfree\fP. When
you've passed the pointer to \fIcurl_easy_setopt\fP, you must not free the
list until after you've called \fIcurl_easy_cleanup\fP for the curl handle. If
you provide a pointer as an arguments after \fBCURLFORM_PTRNAME\fP or
\fBCURLFORM_PTRCONTENTS\fP you must ensure that the pointer stays valid until
you call \fIcurl_form_free\fP and \fIcurl_easy_cleanup\fP.
First, there are some basics you need to understand about multipart/formdata
posts. Each part consists of at least a NAME and a CONTENTS part. If the part
is made for file upload, there are also a stored CONTENT-TYPE and a
FILENAME. Below here, we'll discuss on what options you use to set these
properties in the parts you want to add to your post.
.SH OPTIONS
.B CURLFORM_COPYNAME
followed by string is used to set the name of this part. libcurl copies the
given data, so your application doesn't need to keep it around after this
function call. If the name isn't zero terminated properly, or if you'd like it
to contain zero bytes, you need to set the length of the name with
\fBCURLFORM_NAMELENGTH\fP.
.B CURLFORM_PTRNAME
followed by a string is used for the name of this part. libcurl will use the
pointer and refer to the data in your application, you must make sure it
remains until curl no longer needs it. If the name isn't zero terminated
properly, or if you'd like it to contain zero bytes, you need to set the
length of the name with \fBCURLFORM_NAMELENGTH\fP.
.B CURLFORM_COPYCONTENTS
followed by a string is used for the contents of this part, the actual data to
send away. libcurl copies the given data, so your application doesn't need to
keep it around after this function call. If the data isn't zero terminated
properly, or if you'd like it to contain zero bytes, you need to set the
length of the data with \fBCURLFORM_CONTENTSLENGTH\fP.
.B CURLFORM_PTRCONTENTS
followed by a string is used for the contents of this part, the actual data to
send away. libcurl will use the pointer and refer to the data in your
application, you must make sure it remains until curl no longer needs it. If
the data isn't zero terminated properly, or if you'd like it to contain zero
bytes, you need to set the length of the data with
\fBCURLFORM_CONTENTSLENGTH\fP.
.B CURLFORM_FILECONTENT
followed by a file name, makes curl read that file and use its contents as
the data in this part.
.B CURLFORM_FILE
followed by a file name, makes this part a file upload part. It sets the file
name field to the actual file name used here, it gets the contents of the file
and passes that as data, and it sets the content-type if the given file matches
one of the internally known file extensions. For \fBCURLFORM_FILE\fP the user may
send one or more files in one part by providing multiple \fBCURLFORM_FILE\fP
arguments each followed by the filename (and each CURLFORM_FILE is allowed to
have a CURLFORM_CONTENTTYPE).
.B CURLFORM_CONTENTTYPE
followed by a pointer to a string with a content-type will make curl use this
given content-type for this file upload part, possibly instead of an
internally chosen one.
.B CURLFORM_FILENAME
followed by a pointer to a string with a name, will make libcurl use the given
name in the file upload part, instead of the actual file name given to
\fICURLFORM_FILE\fP.
.B CURLFORM_BUFFER
followed by a string, tells libcurl that a buffer is to be used to upload data
instead of using a file. The given string is used as the value of the file
name field in the content header.
.B CURLFORM_BUFFERPTR
followed by a pointer to a data area, tells libcurl the address of the buffer
containing data to upload (as indicated with \fICURLFORM_BUFFER\fP). The
buffer containing this data must not be freed until after curl_easy_cleanup is
called.
.B CURLFORM_BUFFERLENGTH
followed by a long with the size of the \fICURLFORM_BUFFERPTR\fP data area,
tells libcurl the length of the buffer to upload.
.B CURLFORM_ARRAY
Another possibility to send options to curl_formadd() is the
\fBCURLFORM_ARRAY\fP option, that passes a struct curl_forms array pointer as
its value. Each curl_forms structure element has a CURLformoption and a char
pointer. The final element in the array must be a CURLFORM_END. All available
options can be used in an array, except the CURLFORM_ARRAY option itself! The
last argument in such an array must always be \fBCURLFORM_END\fP.
.B CURLFORM_CONTENTHEADER
specifies extra headers for the form POST section. This takes a curl_slist
prepared in the usual way using \fBcurl_slist_append\fP and appends the list
of headers to those libcurl automatically generates. The list must exist while
the POST occurs, if you free it before the post completes you may experience
problems.
When you've passed the HttpPost pointer to \fIcurl_easy_setopt\fP (using the
\fICURLOPT_HTTPPOST\fP option), you must not free the list until after you've
called \fIcurl_easy_cleanup\fP for the curl handle.
See example below.
.SH RETURN VALUE

View File

@@ -2,7 +2,7 @@
.\"
.TH curl_multi_fdset 3 "3 May 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_fdset - add an easy handle to a multi session
curl_multi_fdset - extracts file descriptor information from a multi handle
.SH SYNOPSIS
#include <curl/curl.h>

View File

@@ -2,7 +2,7 @@
.\"
.TH curl_multi_perform 3 "1 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_perform - add an easy handle to a multi session
curl_multi_perform - reads/writes available data from each easy handle
.SH SYNOPSIS
#include <curl/curl.h>
@@ -19,6 +19,12 @@ integer-pointer.
.SH "RETURN VALUE"
CURLMcode type, general libcurl multi interface error code.
If you receive \fICURLM_CALL_MULTI_PERFORM\fP, this basically means that you
should call \fIcurl_multi_perform\fP again, before you select() on more
actions. You don't have to do it immediately, but the return code means that
libcurl may have more data available to return or that there may be more data
to send off before it is "satisfied".
NOTE that this only returns errors etc regarding the whole multi stack. There
might still have been problems with individual transfers even when this
function returns OK.

View File

@@ -2,7 +2,7 @@
.\"
.TH curl_multi_remove_handle 3 "6 March 2002" "libcurl 7.9.5" "libcurl Manual"
.SH NAME
curl_multi_remove_handle - add an easy handle to a multi session
curl_multi_remove_handle - remove an easy handle from a multi session
.SH SYNOPSIS
#include <curl/curl.h>

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH libcurl-errors 3 "10 April 2002" "libcurl 7.9.6" "libcurl errors"
.TH libcurl-errors 3 "18 Dec 2002" "libcurl 7.10.3" "libcurl errors"
.SH NAME
error codes in libcurl
.SH DESCRIPTION
@@ -104,7 +104,7 @@ After a completed file transfer, the FTP server did not respond a proper
When sending custom "QUOTE" commands to the remote server, one of the commands
returned an error code that was 400 or higher.
.TP
.B CURLE_HTTP_NOT_FOUND (22)
.B CURLE_HTTP_RETURNED_ERROR (22)
This is returned if CURLOPT_FAILONERROR is set TRUE and the HTTP server
returns an error code that is >= 400.
.TP

View File

@@ -2,7 +2,7 @@
.\" nroff -man [file]
.\" $Id$
.\"
.TH libcurl-multi 5 "20 March 2001" "libcurl 7.9.5" "libcurl multi interface"
.TH libcurl-multi 5 "13 Oct 2001" "libcurl 7.10.1" "libcurl multi interface"
.SH NAME
libcurl-multi \- how to use the multi interface
.SH DESCRIPTION
@@ -37,7 +37,7 @@ curl_multi_* functions.
Each single transfer is built up with an easy handle. You must create them,
and setup the appropriate options for each easy handle, as outlined in the
\fIlibcurl(3)\fP man page, using \fIcurl_easy_setopt(3)\fP.
When the easy handle is setup for a transfer, then instead of using
\fIcurl_easy_perform\fP (as when using the easy interface for transfers), you
@@ -49,11 +49,11 @@ handles.
Should you change your mind, the easy handle is again removed from the multi
stack using \fIcurl_multi_remove_handle\fP. Once removed from the multi
handle, you can again use other easy interface functions like
\fIcurl_easy_perform\fP on the handle or whatever you think is necessary.
Adding the easy handle to the multi handle does not start the transfer.
Remember that one of the main ideas with this interface is to let your
application drive. You drive the transfers by invoking
\fIcurl_multi_perform\fP. libcurl will then transfer data if there is anything
available to transfer. It'll use the callbacks and everything else you have
setup in the individual easy handles. It'll transfer data on all current
@@ -62,24 +62,39 @@ all, it may be none.
Your application can acquire knowledge from libcurl when it would like to get
invoked to transfer data, so that you don't have to busy-loop and call that
\fIcurl_multi_perform\fP like crazy. \fIcurl_multi_fdset\fP offers an
interface using which you can extract fd_sets from libcurl to use in select()
or poll() calls in order to get to know when the transfers in the multi stack
might need attention. This also makes it very easy for your program to wait
for input on your own private file descriptors at the same time or perhaps
timeout every now and then, should you want that.
A little note here about the return codes from the multi functions, and
especially the \fIcurl_multi_perform\fP: if you receive
\fICURLM_CALL_MULTI_PERFORM\fP, this basically means that you should call
\fIcurl_multi_perform\fP again, before you select() on more actions. You don't
have to do it immediately, but the return code means that libcurl may have
more data available to return or that there may be more data to send off
before it is "satisfied".
\fIcurl_multi_perform\fP stores the number of still running transfers in one
of its input arguments, and by reading that you can figure out when all the
transfers in the multi handles are done. 'done' does not mean successful. One
or more of the transfers may have failed. By tracking when this number changes,
you know when one or more transfers are done.
To get information about completed transfers, to figure out success or not and
similar, \fIcurl_multi_info_read\fP should be called. It can return a message
about a current or previous transfer. Repeated invokes of the function get
more messages until the message queue is empty. The information you receive
there includes an easy handle pointer which you may use to identify which easy
handle the information regards.
When all transfers in the multi stack are done, cleanup the multi handle with
\fIcurl_multi_cleanup\fP. Be careful and please note that you \fBMUST\fP
invoke separate \fIcurl_easy_cleanup\fP calls on every single easy handle to
clean them up properly.
If you want to re-use an easy handle that was added to the multi handle for
transfer, you must first remove it from the multi stack and then re-add it
again (possibly after having altered some options at your own choice).
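A condensed sketch of the flow described above, with error handling left out ('easy' is an already configured easy handle):
  CURLM *multi = curl_multi_init();
  int running = 0;
  curl_multi_add_handle(multi, easy);

  while(CURLM_CALL_MULTI_PERFORM == curl_multi_perform(multi, &running))
    ; /* call again right away while libcurl asks for it */

  while(running) {
    fd_set r, w, e;
    int maxfd = -1;
    struct timeval timeout = { 1, 0 };  /* wake up at least once a second */
    FD_ZERO(&r); FD_ZERO(&w); FD_ZERO(&e);
    curl_multi_fdset(multi, &r, &w, &e, &maxfd);
    select(maxfd + 1, &r, &w, &e, &timeout); /* wait until libcurl wants attention */
    while(CURLM_CALL_MULTI_PERFORM == curl_multi_perform(multi, &running))
      ;
  }

  /* curl_multi_info_read() can now tell how each transfer went */
  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);   /* easy handles must be cleaned up separately */
  curl_multi_cleanup(multi);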

View File

@@ -96,7 +96,9 @@ typedef int (*curl_progress_callback)(void *clientp,
double ultotal,
double ulnow);
#define CURL_MAX_WRITE_SIZE 20480
/* Tests have proven that 20K is a very bad buffer size for uploads on
Windows, while 16K for some odd reason performed a lot better. */
#define CURL_MAX_WRITE_SIZE 16384
typedef size_t (*curl_write_callback)(char *buffer,
size_t size,
@@ -160,7 +162,7 @@ typedef enum {
CURLE_FTP_COULDNT_RETR_FILE, /* 19 */
CURLE_FTP_WRITE_ERROR, /* 20 */
CURLE_FTP_QUOTE_ERROR, /* 21 */
CURLE_HTTP_NOT_FOUND, /* 22 */
CURLE_HTTP_RETURNED_ERROR, /* 22 */
CURLE_WRITE_ERROR, /* 23 */
CURLE_MALFORMAT_USER, /* 24 - user name is illegally specified */
CURLE_FTP_COULDNT_STOR_FILE, /* 25 - failed FTP upload */
@@ -205,6 +207,10 @@ typedef enum {
CURL_LAST /* never use! */
} CURLcode;
/* Make a spelling correction for the operation timed-out define */
#define CURLE_OPERATION_TIMEDOUT CURLE_OPERATION_TIMEOUTED
#define CURLE_HTTP_NOT_FOUND CURLE_HTTP_RETURNED_ERROR
typedef enum {
CURLPROXY_HTTP = 0,
CURLPROXY_SOCKS4 = 4,
@@ -242,7 +248,15 @@ typedef enum {
* platforms.
*/
#if defined(__STDC__) || defined(_MSC_VER) || defined(__cplusplus) || \
defined(__HP_aCC)
defined(__HP_aCC) || defined(__BORLANDC__)
/* This compiler is believed to have an ISO compatible preprocessor */
#define CURL_ISOCPP
#else
/* This compiler is believed NOT to have an ISO compatible preprocessor */
#undef CURL_ISOCPP
#endif
#ifdef CURL_ISOCPP
#define CINIT(name,type,number) CURLOPT_ ## name = CURLOPTTYPE_ ## type + number
#else
/* The macro "##" is ISO C, we assume pre-ISO C doesn't support it. */
@@ -599,6 +613,11 @@ typedef enum {
the response to be compressed. */
CINIT(ENCODING, OBJECTPOINT, 102),
/* Set pointer to private data */
CINIT(PRIVATE, OBJECTPOINT, 103),
/* Set aliases for HTTP 200 in the HTTP Response header */
CINIT(HTTP200ALIASES, OBJECTPOINT, 104),
CURLOPT_LASTENTRY /* the last unused */
} CURLoption;
@@ -689,8 +708,7 @@ int curl_formparse(char *, struct curl_httppost **,
#undef CFINIT
#endif
#if defined(__STDC__) || defined(_MSC_VER) || defined(__cplusplus) || \
defined(__HP_aCC)
#ifdef CURL_ISOCPP
#define CFINIT(name) CURLFORM_ ## name
#else
/* The macro "##" is ISO C, we assume pre-ISO C doesn't support it. */
@@ -793,8 +811,8 @@ CURLcode curl_global_init(long flags);
void curl_global_cleanup(void);
/* This is the version number */
#define LIBCURL_VERSION "7.10"
#define LIBCURL_VERSION_NUM 0x070a00
#define LIBCURL_VERSION "7.10.3"
#define LIBCURL_VERSION_NUM 0x070a03
/* linked-list structure for the CURLOPT_QUOTE option (and other) */
struct curl_slist {
@@ -851,16 +869,13 @@ typedef enum {
CURLINFO_REDIRECT_TIME = CURLINFO_DOUBLE + 19,
CURLINFO_REDIRECT_COUNT = CURLINFO_LONG + 20,
CURLINFO_PRIVATE = CURLINFO_STRING + 21,
/* Fill in new entries here! */
CURLINFO_LASTONE = 21
CURLINFO_LASTONE = 22
} CURLINFO;
/* unfortunately, the easy.h and multi.h include files need options and info
stuff before they can be included! */
#include "easy.h" /* nothing in curl is fun without the easy stuff */
#include "multi.h"
typedef enum {
CURLCLOSEPOLICY_NONE, /* first, never use this */
@@ -884,35 +899,56 @@ typedef enum {
* Setup defines, protos etc for the sharing stuff.
*/
/* Different types of locks that a share can aquire */
/* Different data locks for a single share */
typedef enum {
CURL_LOCK_TYPE_NONE = 0,
CURL_LOCK_TYPE_COOKIE = 1<<0,
CURL_LOCK_TYPE_DNS = 1<<1,
CURL_LOCK_TYPE_SSL_SESSION = 2<<1,
CURL_LOCK_TYPE_CONNECT = 2<<2,
CURL_LOCK_TYPE_LAST
} curl_lock_type;
CURL_LOCK_DATA_NONE = 0,
CURL_LOCK_DATA_COOKIE = 1,
CURL_LOCK_DATA_DNS = 2,
CURL_LOCK_DATA_SSL_SESSION = 3,
CURL_LOCK_DATA_CONNECT = 4,
CURL_LOCK_DATA_LAST
} curl_lock_data;
typedef void (*curl_lock_function)(CURL *, curl_lock_type, void *);
typedef void (*curl_unlock_function)(CURL *, curl_lock_type, void *);
/* Different lock access types */
typedef enum {
CURL_LOCK_ACCESS_NONE = 0, /* unspecified action */
CURL_LOCK_ACCESS_SHARED = 1, /* for read perhaps */
CURL_LOCK_ACCESS_SINGLE = 2, /* for write perhaps */
CURL_LOCK_ACCESS_LAST /* never use */
} curl_lock_access;
typedef struct {
unsigned int specifier;
unsigned int locked;
unsigned int dirty;
typedef void (*curl_lock_function)(CURL *handle,
curl_lock_data data,
curl_lock_access access,
void *userptr);
typedef void (*curl_unlock_function)(CURL *handle,
curl_lock_data data,
void *userptr);
curl_lock_function lockfunc;
curl_unlock_function unlockfunc;
void *clientdata;
} curl_share;
typedef void CURLSH;
curl_share *curl_share_init (void);
CURLcode curl_share_setopt (curl_share *, curl_lock_type, int);
CURLcode curl_share_set_lock_function (curl_share *, curl_lock_function);
CURLcode curl_share_set_unlock_function (curl_share *, curl_unlock_function);
CURLcode curl_share_set_lock_data (curl_share *, void *);
CURLcode curl_share_destroy (curl_share *);
typedef enum {
CURLSHE_OK, /* all is fine */
CURLSHE_BAD_OPTION, /* 1 */
CURLSHE_IN_USE, /* 2 */
CURLSHE_INVALID, /* 3 */
CURLSHE_LAST /* never use */
} CURLSHcode;
typedef enum {
CURLSHOPT_NONE, /* don't use */
CURLSHOPT_SHARE, /* specify a data type to share */
CURLSHOPT_UNSHARE, /* specify which data type to stop sharing */
CURLSHOPT_LOCKFUNC, /* pass in a 'curl_lock_function' pointer */
CURLSHOPT_UNLOCKFUNC, /* pass in a 'curl_unlock_function' pointer */
CURLSHOPT_USERDATA, /* pass in a user data pointer used in the lock/unlock
callback functions */
CURLSHOPT_LAST /* never use */
} CURLSHoption;
CURLSH *curl_share_init(void);
CURLSHcode curl_share_setopt(CURLSH *, CURLSHoption option, ...);
CURLSHcode curl_share_cleanup(CURLSH *);
/****************************************************************************
* Structures for querying information about the curl library at runtime.
@@ -955,4 +991,9 @@ curl_version_info_data *curl_version_info(CURLversion);
}
#endif
/* unfortunately, the easy.h and multi.h include files need options and info
stuff before they can be included! */
#include "easy.h" /* nothing in curl is fun without the easy stuff */
#include "multi.h"
#endif /* __CURL_CURL_H */

View File

@@ -5,4 +5,5 @@ Makefile
.deps
.libs
config.h
stamp-h1
stamp-*
ca-bundle.h

View File

@@ -5,9 +5,9 @@
AUTOMAKE_OPTIONS = foreign nostdinc
EXTRA_DIST = getdate.y Makefile.b32 Makefile.b32.resp Makefile.m32 \
Makefile.vc6 Makefile.riscos libcurl.def dllinit.c curllib.dsp \
Makefile.vc6 Makefile.riscos libcurl.def curllib.dsp \
curllib.dsw config-vms.h config-win32.h config-riscos.h config-mac.h \
config.h.in ca-bundle.crt README.encoding
config.h.in ca-bundle.crt README.encoding README.memoryleak
lib_LTLIBRARIES = libcurl.la
@@ -16,7 +16,8 @@ lib_LTLIBRARIES = libcurl.la
# we use srcdir/lib for the lib-private header files
INCLUDES = -I$(top_srcdir)/include -I$(top_builddir)/lib -I$(top_srcdir)/lib
libcurl_la_LDFLAGS = -version-info 2:2:0
VERSION=-version-info 2:2:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1.
@@ -45,6 +46,16 @@ libcurl_la_LDFLAGS = -version-info 2:2:0
# set age to 0.
#
if NO_UNDEFINED
# The -no-undefined flag is CRUCIAL for this to build fine on Cygwin. If we
# find a case in which we need to remove this flag, we should most likely
# write a configure check that detects when this flag is needed and when its
# not.
libcurl_la_LDFLAGS = -no-undefined $(VERSION)
else
libcurl_la_LDFLAGS = $(VERSION)
endif
libcurl_la_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c \
base64.c file.h hostip.c progress.c timeval.h base64.h formdata.c \
hostip.h progress.h cookie.c formdata.h http.c sendf.c cookie.h ftp.c \
@@ -55,7 +66,7 @@ getpass.c netrc.c telnet.h getinfo.c getinfo.h transfer.c strequal.c \
strequal.h easy.c security.h security.c krb4.c krb4.h memdebug.c \
memdebug.h inet_ntoa_r.h http_chunks.c http_chunks.h strtok.c strtok.h \
connect.c connect.h llist.c llist.h hash.c hash.h multi.c \
content_encoding.c content_encoding.h
content_encoding.c content_encoding.h share.h
noinst_HEADERS = setup.h transfer.h
@@ -68,7 +79,7 @@ $(srcdir)/getdate.c: getdate.y
install-data-hook:
@if test -n "@CURL_CA_BUNDLE@"; then \
$(mkinstalldirs) `dirname $(DESTDIR)@CURL_CA_BUNDLE@`; \
@INSTALL_DATA@ ca-bundle.crt $(DESTDIR)@CURL_CA_BUNDLE@; \
@INSTALL_DATA@ $(srcdir)/ca-bundle.crt $(DESTDIR)@CURL_CA_BUNDLE@; \
fi
# this hook is mainly for non-unix systems to build even if configure

View File

@@ -59,7 +59,11 @@ SOURCES = \
easy.c \
strequal.c \
strtok.c \
connect.c
connect.c \
hash.c \
llist.c \
multi.c \
content_encoding.c
OBJECTS = $(SOURCES:.c=.obj)

View File

@@ -28,4 +28,8 @@
+easy.obj &
+strequal.obj &
+strtok.obj &
+connect.obj
+connect.obj &
+hash.obj &
+llist.obj &
+multi.obj &
+content_encoding.obj

View File

@@ -1,6 +1,6 @@
#############################################################
#
## Makefile for building libcurl.a with MingW32 (GCC-2.95) and
## Makefile for building libcurl.a with MingW32 (GCC-3.2) and
## optionally OpenSSL (0.9.6)
## Use: make -f Makefile.m32
##
@@ -9,9 +9,10 @@
CC = gcc
AR = ar
RM = rm -f
RANLIB = ranlib
STRIP = strip -g
OPENSSL_PATH = ../../openssl-0.9.6d
OPENSSL_PATH = ../../openssl-0.9.6g
ZLIB_PATH = ../../zlib-1.1.3
########################################################
@@ -60,16 +61,18 @@ OBJECTS = $(libcurl_a_OBJECTS)
all: libcurl.a libcurl.dll libcurldll.a
libcurl.a: $(libcurl_a_OBJECTS) $(libcurl_a_DEPENDENCIES)
-@erase libcurl.a
$(RM) libcurl.a
$(AR) cru libcurl.a $(libcurl_a_OBJECTS)
$(RANLIB) libcurl.a
$(STRIP) $@
DLLINITOBJ =
# remove the last line above to keep debug info
libcurl.dll libcurldll.a: libcurl.a libcurl.def dllinit.o
-@erase $@
dllwrap --dllname $@ --output-lib libcurldll.a --export-all --def libcurl.def $(libcurl_a_LIBRARIES) dllinit.o $(DLL_LIBS) -lwsock32 -lws2_32 -lwinmm
libcurl.dll libcurldll.a: libcurl.a libcurl.def $(DLLINITOBJ)
$(RM) $@
dllwrap --dllname $@ --output-lib libcurldll.a --export-all --def libcurl.def $(libcurl_a_LIBRARIES) $(DLLINITOBJ) $(DLL_LIBS) -lwsock32 -lws2_32 -lwinmm
$(STRIP) $@
# remove the last line above to keep debug info
@@ -84,9 +87,9 @@ libcurl.dll libcurldll.a: libcurl.a libcurl.def dllinit.o
$(COMPILE) -c $<
clean:
-@erase $(libcurl_a_OBJECTS)
$(RM) $(libcurl_a_OBJECTS)
distrib: clean
-@erase $(libcurl_a_LIBRARIES)
$(RM) $(libcurl_a_LIBRARIES)

View File

@@ -116,7 +116,7 @@ CFGSET = TRUE
!IF "$(CFG)" == "debug-dll"
TARGET =$(LIB_NAME_DEBUG).dll
DIROBJ =.\$(CFG)
LNK = $(LNKDLL) /out:$(TARGET) /IMPLIB:"$(LIB_NAME_DEBUG).lib"
LNK = $(LNKDLL) /DEBUG /out:$(TARGET) /IMPLIB:"$(LIB_NAME_DEBUG).lib" /PDB:"$(LIB_NAME_DEBUG).pdb"
CC = $(CCDEBUG)
CFGSET = TRUE
!ENDIF
@@ -139,7 +139,8 @@ CFGSET = TRUE
!IF "$(CFG)" == "debug-ssl-dll"
TARGET =$(LIB_NAME_DEBUG).dll
DIROBJ =.\$(CFG)
LNK = $(LNKDLL) $(LFLAGSSSL) /out:$(TARGET) /IMPLIB:"$(LIB_NAME_DEBUG).lib"
LFLAGSSSL = /LIBPATH:$(OPENSSL_PATH)/out32dll
LNK = $(LNKDLL) $(LFLAGSSSL) /DEBUG /out:$(TARGET) /IMPLIB:"$(LIB_NAME_DEBUG).lib" /PDB:"$(LIB_NAME_DEBUG).pdb"
LINKLIBS = $(LINKLIBS) $(SSLLIBS)
CC = $(CCDEBUG) $(CFLAGSSSL)
CFGSET = TRUE

lib/README.memoryleak (new file)
View File

@@ -0,0 +1,56 @@
$Id$
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
| (__| |_| | _ <| |___
\___|\___/|_| \_\_____|
How To Track Down Suspected Memory Leaks in libcurl
===================================================
Single-threaded
Please note that this memory leak system is not adjusted to work in more
than one thread. If you want/need to use it in a multi-threaded app, please
adjust accordingly.
Build
Rebuild libcurl with -DMALLOCDEBUG (usually, rerunning configure with
--enable-debug fixes this). 'make clean' first, then 'make' so that all
files actually are rebuilt properly. It will also make sense to build
libcurl with the debug option (usually -g to the compiler) so that debugging
it will be easier if you actually do find a leak in the library.
This will create a library that has memory debugging enabled.
Modify Your Application
Add a line in your application code:
curl_memdebug("filename");
This will make the malloc debug system output a full trace of all resource
using functions to the given file name. Make sure you rebuild your program
and that you link with the same libcurl you built for this purpose as
described above.
Run Your Application
Run your program as usual. Watch the specified memory trace file grow.
Make your program exit and use the proper libcurl cleanup functions etc, so
that all non-leaks are returned/freed properly.
Analyze the Flow
Use the tests/memanalyze.pl perl script to analyze the memdump file:
tests/memanalyze.pl < memdump
This now outputs a report on what resources were allocated but never
freed etc. This report is very fine for posting to the list!
If this doesn't produce any output, no leak was detected in libcurl. Then
the leak is most likely to be in your code.

View File

@@ -61,6 +61,8 @@ static void decodeQuantum(unsigned char *dest, char *src)
x = (x << 6) + 62;
else if(src[i] == '/')
x = (x << 6) + 63;
else if(src[i] == '=')
x = (x << 6);
}
dest[2] = (unsigned char)(x & 255); x >>= 8;
@@ -78,6 +80,7 @@ static void base64Decode(unsigned char *dest, char *src, int *rawLength)
int length = 0;
int equalsTerm = 0;
int i;
int numQuantums;
unsigned char lastQuantum[3];
while((src[length] != '=') && src[length])
@@ -85,16 +88,18 @@ static void base64Decode(unsigned char *dest, char *src, int *rawLength)
while(src[length+equalsTerm] == '=')
equalsTerm++;
numQuantums = (length + equalsTerm) / 4;
if(rawLength)
*rawLength = (length * 3 / 4) - equalsTerm;
*rawLength = (numQuantums * 3) - equalsTerm;
for(i = 0; i < length/4 - 1; i++) {
for(i = 0; i < numQuantums - 1; i++) {
decodeQuantum(dest, src);
dest += 3; src += 4;
}
decodeQuantum(lastQuantum, src);
for(i = 0; i < 3 - equalsTerm; i++) dest[i] = lastQuantum[i];
for(i = 0; i < 3 - equalsTerm; i++)
dest[i] = lastQuantum[i];
}
@@ -194,7 +199,8 @@ int Curl_base64_decode(const char *str, void *data)
#define TEST_NEED_SUCK
void *suck(int *);
int main(int argc, char **argv, char **envp) {
int main(int argc, char **argv, char **envp)
{
char *base64;
int base64Len;
unsigned char *data;
@@ -220,7 +226,8 @@ int main(int argc, char **argv, char **envp) {
#define TEST_NEED_SUCK
void *suck(int *);
int main(int argc, char **argv, char **envp) {
int main(int argc, char **argv, char **envp)
{
char *base64;
int base64Len;
unsigned char *data;
@@ -233,7 +240,6 @@ int main(int argc, char **argv, char **envp) {
fprintf(stderr, "%d\n", dataLen);
fwrite(data,1,dataLen,stdout);
free(base64); free(data);
return 0;
}
@@ -241,7 +247,8 @@ int main(int argc, char **argv, char **envp) {
#ifdef TEST_NEED_SUCK
/* this function 'sucks' in as much as possible from stdin */
void *suck(int *lenptr) {
void *suck(int *lenptr)
{
int cursize = 8192;
unsigned char *buf = NULL;
int lastread;
@@ -260,7 +267,6 @@ void *suck(int *lenptr) {
}
#endif
/*
* local variables:
* eval: (load-file "../curl-mode.el")

View File

@@ -176,10 +176,9 @@ int waitconnect(int sockfd, /* socket */
/* timeout, no connect today */
return 1;
if(FD_ISSET(sockfd, &errfd)) {
if(FD_ISSET(sockfd, &errfd))
/* error condition caught */
return 2;
}
/* we have a connect! */
return 0;
@@ -206,7 +205,7 @@ static CURLcode bindlocal(struct connectdata *conn,
*************************************************************/
if (strlen(data->set.device)<255) {
struct sockaddr_in sa;
Curl_addrinfo *h=NULL;
struct Curl_dns_entry *h=NULL;
size_t size;
char myhost[256] = "";
in_addr_t in;
@@ -247,12 +246,17 @@ static CURLcode bindlocal(struct connectdata *conn,
if (INADDR_NONE != in) {
if ( h ) {
Curl_addrinfo *addr = h->addr;
Curl_resolv_unlock(h);
/* we don't need it anymore after this function has returned */
memset((char *)&sa, 0, sizeof(sa));
#ifdef ENABLE_IPV6
memcpy((char *)&sa.sin_addr, h->ai_addr, h->ai_addrlen);
sa.sin_family = h->ai_family;
memcpy((char *)&sa.sin_addr, addr->ai_addr, addr->ai_addrlen);
sa.sin_family = addr->ai_family;
#else
memcpy((char *)&sa.sin_addr, h->h_addr, h->h_length);
memcpy((char *)&sa.sin_addr, addr->h_addr, addr->h_length);
sa.sin_family = AF_INET;
#endif
sa.sin_addr.s_addr = in;
@@ -375,6 +379,11 @@ CURLcode Curl_is_connected(struct connectdata *conn,
return CURLE_OPERATION_TIMEOUTED;
}
}
if(conn->bits.tcpconnect) {
/* we are connected already! */
*connected = TRUE;
return CURLE_OK;
}
/* check for connect without timeout as we want to return immediately */
rc = waitconnect(sockfd, 0);
@@ -387,6 +396,8 @@ CURLcode Curl_is_connected(struct connectdata *conn,
return CURLE_OK;
}
/* nope, not connected for real */
if(err)
return CURLE_COULDNT_CONNECT;
}
/*
@@ -408,7 +419,7 @@ CURLcode Curl_is_connected(struct connectdata *conn,
*/
CURLcode Curl_connecthost(struct connectdata *conn, /* context */
Curl_addrinfo *remotehost, /* use one in here */
struct Curl_dns_entry *remotehost, /* use this one */
int port, /* connect to this */
int *sockconn, /* the connected socket */
Curl_ipconnect **addr, /* the one we used */
@@ -477,7 +488,7 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
struct addrinfo *ai;
port =0; /* prevent compiler warning */
for (ai = remotehost; ai; ai = ai->ai_next, aliasindex++) {
for (ai = remotehost->addr; ai; ai = ai->ai_next, aliasindex++) {
sockfd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
if (sockfd < 0)
continue;
@@ -569,7 +580,7 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
/*
* Connecting with IPv4-only support
*/
if(!remotehost->h_addr_list[0]) {
if(!remotehost->addr->h_addr_list[0]) {
/* If there is no addresses in the address list, then we return
error right away */
failf(data, "no address available");
@@ -596,16 +607,16 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
/* This is the loop that attempts to connect to all IP-addresses we
know for the given host. One by one. */
for(rc=-1, aliasindex=0;
rc && (struct in_addr *)remotehost->h_addr_list[aliasindex];
rc && (struct in_addr *)remotehost->addr->h_addr_list[aliasindex];
aliasindex++) {
struct sockaddr_in serv_addr;
/* do this nasty work to do the connect */
memset((char *) &serv_addr, '\0', sizeof(serv_addr));
memcpy((char *)&(serv_addr.sin_addr),
(struct in_addr *)remotehost->h_addr_list[aliasindex],
(struct in_addr *)remotehost->addr->h_addr_list[aliasindex],
sizeof(struct in_addr));
serv_addr.sin_family = remotehost->h_addrtype;
serv_addr.sin_family = remotehost->addr->h_addrtype;
serv_addr.sin_port = htons((unsigned short)port);
rc = connect(sockfd, (struct sockaddr *)&serv_addr,
@@ -639,6 +650,15 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
}
}
/* The '1 == rc' comes from the waitconnect(), and not from connect().
We can be sure of this since connect() cannot return 1. */
if((1 == rc) && (data->state.used_interface == Curl_if_multi)) {
/* Timeout when running the multi interface, we return here with a
CURLE_OK return code. */
rc = 0;
break;
}
if(0 == rc) {
int err = socketerror(sockfd);
if ((0 == err) || (EISCONN == err)) {
@@ -651,12 +671,6 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
}
if(0 != rc) {
if(data->state.used_interface == Curl_if_multi) {
/* When running the multi interface, we bail out here */
rc = 0;
break;
}
/* get a new timeout for next attempt */
after = Curl_tvnow();
timeout_ms -= Curl_tvdiff(after, before);
@@ -681,7 +695,7 @@ CURLcode Curl_connecthost(struct connectdata *conn, /* context */
if(addr)
/* this is the address we've connected to */
*addr = (struct in_addr *)remotehost->h_addr_list[aliasindex];
*addr = (struct in_addr *)remotehost->addr->h_addr_list[aliasindex];
#endif
/* allow NULL-pointers to get passed in */

View File

@@ -31,7 +31,7 @@ CURLcode Curl_is_connected(struct connectdata *conn,
bool *connected);
CURLcode Curl_connecthost(struct connectdata *conn,
Curl_addrinfo *host, /* connect to this */
struct Curl_dns_entry *host, /* connect to this */
int port, /* connect to this port number */
int *sockconn, /* not set if error is returned */
Curl_ipconnect **addr, /* the one we used */

View File

@@ -519,7 +519,7 @@ struct CookieInfo *Curl_cookie_init(char *file,
char *lineptr;
bool headerline;
while(fgets(line, MAX_COOKIE_LINE, fp)) {
if(strnequal("Set-Cookie:", line, 11)) {
if(checkprefix("Set-Cookie:", line)) {
/* This is a cookie line, get it! */
lineptr=&line[11];
headerline=TRUE;
@@ -588,7 +588,7 @@ struct Cookie *Curl_cookie_getlist(struct CookieInfo *c,
/* now check the left part of the path with the cookies path
requirement */
if(!co->path ||
strnequal(path, co->path, strlen(co->path))) {
checkprefix(co->path, path) ) {
/* and now, we know this is a match and we should create an
entry for the return-linked-list */

View File

@@ -31,19 +31,19 @@ RSC=rc.exe
!IF "$(CFG)" == "curllib - Win32 Release"
# PROP BASE Use_MFC 6
# PROP BASE Use_MFC 0
# PROP BASE Use_Debug_Libraries 0
# PROP BASE Output_Dir "Release"
# PROP BASE Intermediate_Dir "Release"
# PROP BASE Target_Dir ""
# PROP Use_MFC 6
# PROP Use_MFC 0
# PROP Use_Debug_Libraries 0
# PROP Output_Dir "Release"
# PROP Intermediate_Dir "Release"
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /MT /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /YX /FD /c
# ADD CPP /nologo /MD /W3 /GX /Zi /O2 /I "..\include" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /FR /FD /c
# ADD CPP /nologo /MT /W3 /GX /Zi /O2 /I "..\include" /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /D "_WINDLL" /FR /FD /c
# SUBTRACT CPP /YX
# ADD BASE MTL /nologo /D "NDEBUG" /mktyplib203 /win32
# ADD MTL /nologo /D "NDEBUG" /mktyplib203 /win32
@@ -70,7 +70,7 @@ LINK32=link.exe
# PROP Ignore_Export_Lib 0
# PROP Target_Dir ""
# ADD BASE CPP /nologo /MTd /W3 /Gm /GX /ZI /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /YX /FD /GZ /c
# ADD CPP /nologo /MDd /W3 /Gm /GX /Zi /Od /I "..\include" /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /FR /FD /GZ /c
# ADD CPP /nologo /MTd /W3 /Gm /GX /Zi /Od /I "..\include" /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /D "_MBCS" /D "_USRDLL" /D "CURLLIB_EXPORTS" /FR /FD /GZ /c
# SUBTRACT CPP /WX /YX
# ADD BASE MTL /nologo /D "_DEBUG" /mktyplib203 /win32
# ADD MTL /nologo /D "_DEBUG" /mktyplib203 /win32
@@ -111,10 +111,6 @@ SOURCE=.\dict.c
# End Source File
# Begin Source File
SOURCE=.\dllinit.c
# End Source File
# Begin Source File
SOURCE=.\easy.c
# End Source File
# Begin Source File

View File

@@ -1,99 +0,0 @@
#ifdef WIN32
/* dllinit.c -- Portable DLL initialization.
Copyright (C) 1998, 1999 Free Software Foundation, Inc.
Contributed by Mumit Khan (khan@xraylith.wisc.edu).
I've used DllMain as the DLL "main" since that's the most common
usage. MSVC and Mingw32 both default to DllMain as the standard
callback from the linker entry point. Cygwin, as of b20.1, also
uses DllMain as the default callback from the entry point.
The real entry point is typically always defined by the runtime
library, and usually never overridden by (casual) user. What you can
override however is the callback routine that the entry point calls,
and this file provides such a callback function, DllMain.
Mingw32: The default entry point for mingw32 is DllMainCRTStartup
which is defined in libmingw32.a This in turn calls DllMain which is
defined here. If not defined, there is a stub in libmingw32.a which
does nothing.
Cygwin: The default entry point for Cygwin b20.1 or newer is
__cygwin_dll_entry which is defined in libcygwin.a. This in turn
calls the routine DllMain. If not defined, there is a stub in
libcygwin.a which does nothing.
MSVC: MSVC runtime calls DllMain, just like Mingw32.
Summary: If you need to do anything special in DllMain, just add it
here. Otherwise, the default setup should be just fine for 99%+ of
the time. I strongly suggest that you *not* change the entry point,
but rather change DllMain as appropriate.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#undef WIN32_LEAN_AND_MEAN
#include <stdio.h>
BOOL APIENTRY DllMain (HINSTANCE hInst, DWORD reason,
LPVOID reserved /* Not used. */ );
/*
*----------------------------------------------------------------------
*
* DllMain --
*
* This routine is called by the Mingw32, Cygwin32 or VC++ C run
* time library init code, or the Borland DllEntryPoint routine. It
* is responsible for initializing various dynamically loaded
* libraries.
*
* Results:
* TRUE on sucess, FALSE on failure.
*
* Side effects:
*
*----------------------------------------------------------------------
*/
BOOL APIENTRY
DllMain (
HINSTANCE hInst /* Library instance handle. */ ,
DWORD reason /* Reason this function is being called. */ ,
LPVOID reserved /* Not used. */ )
{
/* prevent compiler warnings */
(void) hInst;
(void) reserved;
switch (reason)
{
case DLL_PROCESS_ATTACH:
break;
case DLL_PROCESS_DETACH:
break;
case DLL_THREAD_ATTACH:
break;
case DLL_THREAD_DETACH:
break;
}
return TRUE;
}
#else
#ifdef VMS
int VOID_VAR_DLLINIT;
#endif
#endif
/*
* local variables:
* eval: (load-file "../curl-mode.el")
* end:
* vim600: fdm=marker
* vim: et sw=2 ts=2 sts=2 tw=78
*/

View File

@@ -233,13 +233,15 @@ CURLcode curl_easy_perform(CURL *curl)
{
struct SessionHandle *data = (struct SessionHandle *)curl;
if (!data->hostcache) {
if (Curl_global_host_cache_use(data)) {
if (Curl_global_host_cache_use(data) && data->hostcache != Curl_global_host_cache_get()) {
if (data->hostcache) {
Curl_hash_destroy(data->hostcache);
}
data->hostcache = Curl_global_host_cache_get();
}
else {
data->hostcache = Curl_hash_alloc(7, Curl_freeaddrinfo);
}
if (!data->hostcache) {
data->hostcache = Curl_hash_alloc(7, Curl_freednsinfo);
}
return Curl_perform(data);

View File

@@ -41,6 +41,7 @@ char *curl_escape(const char *string, int length)
{
int alloc = (length?length:(int)strlen(string))+1;
char *ns = malloc(alloc);
char *testing_ptr = NULL;
unsigned char in;
int newlen = alloc;
int index=0;
@@ -55,10 +56,15 @@ char *curl_escape(const char *string, int length)
newlen += 2; /* the size grows with two, since this'll become a %XX */
if(newlen > alloc) {
alloc *= 2;
ns = realloc(ns, alloc);
if(!ns)
testing_ptr = realloc(ns, alloc);
if(!testing_ptr) {
free( ns );
return NULL;
}
else {
ns = testing_ptr;
}
}
sprintf(&ns[index], "%%%02X", in);
index+=3;
@@ -81,6 +87,10 @@ char *curl_unescape(const char *string, int length)
int index=0;
unsigned int hex;
if( !ns ) {
return NULL;
}
while(--alloc > 0) {
in = *string;
if('%' == in) {
@@ -97,7 +107,6 @@ char *curl_unescape(const char *string, int length)
}
ns[index]=0; /* terminate it */
return ns;
}
void curl_free(void *p)

View File

@@ -637,7 +637,7 @@ CURLFORMcode FormAdd(struct curl_httppost **httppost,
struct curl_httppost *post = NULL;
CURLformoption option;
struct curl_forms *forms = NULL;
char *array_value; /* value read from an array */
char *array_value=NULL; /* value read from an array */
/* This is a state variable, that if TRUE means that we're parsing an
array that we got passed to us. If FALSE we're parsing the input
@@ -1218,7 +1218,7 @@ CURLcode Curl_getFormData(struct FormData **finalform,
*/
if(file->contenttype &&
!strnequal("text/", file->contenttype, 5)) {
!checkprefix("text/", file->contenttype)) {
/* this is not a text content, mention our binary encoding */
size += AddFormData(&form, "\r\nContent-Transfer-Encoding: binary", 0);
}
@@ -1319,7 +1319,7 @@ int Curl_FormReader(char *buffer,
wantedsize = size * nitems;
if(!form->data)
return -1; /* nothing, error, empty */
return 0; /* nothing, error, empty */
do {

lib/ftp.c
View File

@@ -173,9 +173,9 @@ static CURLcode AllowServerConnect(struct SessionHandle *data,
* response and extract the relevant return code for the invoking function.
*/
int Curl_GetFTPResponse(char *buf,
CURLcode Curl_GetFTPResponse(int *nreadp, /* return number of bytes read */
struct connectdata *conn,
int *ftpcode)
int *ftpcode) /* return the ftp-code */
{
/* Brand new implementation.
* We cannot read just one byte per read() and then go back to select()
@@ -185,28 +185,21 @@ int Curl_GetFTPResponse(char *buf,
* line in a response or continue reading. */
int sockfd = conn->firstsocket;
int nread; /* total size read */
int perline; /* count bytes per line */
bool keepon=TRUE;
ssize_t gotbytes;
char *ptr;
int timeout = 3600; /* default timeout in seconds */
int timeout; /* timeout in seconds */
struct timeval interval;
fd_set rkeepfd;
fd_set readfd;
struct SessionHandle *data = conn->data;
char *line_start;
int code=0; /* default "error code" to return */
#define SELECT_OK 0
#define SELECT_ERROR 1 /* select() problems */
#define SELECT_TIMEOUT 2 /* took too long */
#define SELECT_MEMORY 3 /* no available memory */
#define SELECT_CALLBACK 4 /* aborted by callback */
int error = SELECT_OK;
int code=0; /* default ftp "error code" to return */
char *buf = data->state.buffer;
CURLcode result = CURLE_OK;
struct FTP *ftp = conn->proto.ftp;
struct timeval now = Curl_tvnow();
if (ftpcode)
*ftpcode = 0; /* 0 for errors */
@@ -221,20 +214,25 @@ int Curl_GetFTPResponse(char *buf,
ptr=buf;
line_start = buf;
nread=0;
*nreadp=0;
perline=0;
keepon=TRUE;
while((nread<BUFSIZE) && (keepon && !error)) {
while((*nreadp<BUFSIZE) && (keepon && !result)) {
/* check and reset timeout value every lap */
if(data->set.timeout) {
if(data->set.timeout)
/* if timeout is requested, find out how much remaining time we have */
timeout = data->set.timeout - /* timeout time */
Curl_tvdiff(Curl_tvnow(), conn->now)/1000; /* spent time */
else
/* Even without a requested timeout, we only wait response_time
seconds for the full response to arrive before we bail out */
timeout = ftp->response_time -
Curl_tvdiff(Curl_tvnow(), now)/1000; /* spent time */
if(timeout <=0 ) {
failf(data, "Transfer aborted due to timeout");
return -SELECT_TIMEOUT; /* already too little time */
}
return CURLE_OPERATION_TIMEDOUT; /* already too little time */
}
if(!ftp->cache) {
@@ -244,19 +242,18 @@ int Curl_GetFTPResponse(char *buf,
switch (select (sockfd+1, &readfd, NULL, NULL, &interval)) {
case -1: /* select() error, stop reading */
error = SELECT_ERROR;
failf(data, "Transfer aborted due to select() error");
result = CURLE_RECV_ERROR;
failf(data, "Transfer aborted due to select() error: %d", errno);
break;
case 0: /* timeout */
error = SELECT_TIMEOUT;
result = CURLE_OPERATION_TIMEDOUT;
failf(data, "Transfer aborted due to timeout");
break;
default:
error = SELECT_OK;
break;
}
}
if(SELECT_OK == error) {
if(CURLE_OK == result) {
/*
* This code previously didn't use the kerberos sec_read() code
* to read, but when we use Curl_read() it may do so. Do confirm
@@ -272,8 +269,7 @@ int Curl_GetFTPResponse(char *buf,
ftp->cache_size = 0; /* zero the size just in case */
}
else {
int res = Curl_read(conn, sockfd, ptr,
BUFSIZE-nread, &gotbytes);
int res = Curl_read(conn, sockfd, ptr, BUFSIZE-*nreadp, &gotbytes);
if(res < 0)
/* EWOULDBLOCK */
continue; /* go looping again */
@@ -286,7 +282,7 @@ int Curl_GetFTPResponse(char *buf,
;
else if(gotbytes <= 0) {
keepon = FALSE;
error = SELECT_ERROR;
result = CURLE_RECV_ERROR;
failf(data, "Connection aborted");
}
else {
@@ -295,7 +291,7 @@ int Curl_GetFTPResponse(char *buf,
* line */
int i;
nread += gotbytes;
*nreadp += gotbytes;
for(i = 0; i < gotbytes; ptr++, i++) {
perline++;
if(*ptr=='\n') {
@@ -315,7 +311,7 @@ int Curl_GetFTPResponse(char *buf,
result = Curl_client_write(data, CLIENTWRITE_HEADER,
line_start, perline);
if(result)
return -SELECT_CALLBACK;
return result;
#define lastline(line) (isdigit((int)line[0]) && isdigit((int)line[1]) && \
isdigit((int)line[2]) && (' ' == line[3]))
@@ -350,13 +346,13 @@ int Curl_GetFTPResponse(char *buf,
if(ftp->cache)
memcpy(ftp->cache, line_start, ftp->cache_size);
else
return -SELECT_MEMORY; /**BANG**/
return CURLE_OUT_OF_MEMORY; /**BANG**/
}
} /* there was data */
} /* if(no error) */
} /* while there's buffer left and loop is requested */
if(!error)
if(!result)
code = atoi(buf);
#ifdef KRB4
@@ -378,13 +374,10 @@ int Curl_GetFTPResponse(char *buf,
}
#endif
if(error)
return -error;
if(ftpcode)
*ftpcode=code; /* return the initial number like this */
return nread; /* total amount of bytes read */
return result;
}
/*
@@ -417,6 +410,7 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
/* no need to duplicate them, the data struct won't change */
ftp->user = data->state.user;
ftp->passwd = data->state.passwd;
ftp->response_time = 3600; /* set default response time-out */
if (data->set.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
@@ -436,9 +430,9 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
/* The first thing we do is wait for the "220*" line: */
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode != 220) {
failf(data, "This doesn't seem like a nice ftp-server response");
@@ -467,9 +461,9 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
FTPSENDF(conn, "USER %s", ftp->user);
/* wait for feedback */
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode == 530) {
/* 530 User ... access denied
@@ -481,9 +475,9 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
/* 331 Password required for ...
(the server requires to send the user's password too) */
FTPSENDF(conn, "PASS %s", ftp->passwd);
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode == 530) {
/* 530 Login incorrect.
@@ -516,8 +510,11 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
/* we may need to issue a KAUTH here to have access to the files
* do it if user supplied a password
*/
if(data->state.passwd && *data->state.passwd)
Curl_krb_kauth(conn);
if(data->state.passwd && *data->state.passwd) {
result = Curl_krb_kauth(conn);
if(result)
return result;
}
#endif
}
else {
@@ -529,9 +526,9 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
FTPSENDF(conn, "PWD", NULL);
/* wait for feedback */
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode == 257) {
char *dir = (char *)malloc(nread+1);
@@ -544,7 +541,7 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
The directory name can contain any character; embedded double-quotes
should be escaped by double-quotes (the "quote-doubling" convention).
*/
if('\"' == *ptr) {
if(dir && ('\"' == *ptr)) {
/* it started good */
ptr++;
while(ptr && *ptr) {
@@ -570,6 +567,8 @@ CURLcode Curl_ftp_connect(struct connectdata *conn)
}
else {
/* couldn't get the path */
free(dir);
infof(data, "Failed to figure out path\n");
}
}
@@ -594,7 +593,6 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
struct SessionHandle *data = conn->data;
struct FTP *ftp = conn->proto.ftp;
ssize_t nread;
char *buf = data->state.buffer; /* this is our buffer */
int ftpcode;
CURLcode result=CURLE_OK;
@@ -633,11 +631,24 @@ CURLcode Curl_ftp_done(struct connectdata *conn)
conn->secondarysocket = -1;
if(!ftp->no_transfer) {
/* now let's see what the server says about the transfer we just
performed: */
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
/* Let's see what the server says about the transfer we just performed,
but lower the timeout as sometimes this connection has died while
the data has been transfered. This happens when doing through NATs
etc that abandon old silent connections.
*/
ftp->response_time = 60; /* give it only a minute for now */
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
ftp->response_time = 3600; /* set this back to one hour waits */
if(!nread && (CURLE_OPERATION_TIMEDOUT == result)) {
failf(data, "control connection looks dead");
return result;
}
if(result)
return result;
if(!ftp->dont_check) {
/* 226 Transfer complete, 250 Requested file action okay, completed. */
@@ -680,9 +691,9 @@ CURLcode ftp_sendquote(struct connectdata *conn, struct curl_slist *quote)
if (item->data) {
FTPSENDF(conn, "%s", item->data);
nread = Curl_GetFTPResponse(conn->data->state.buffer, conn, &ftpcode);
if (nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if (result)
return result;
if (ftpcode >= 400) {
failf(conn->data, "QUOT string not accepted: %s", item->data);
@@ -711,9 +722,9 @@ CURLcode ftp_cwd(struct connectdata *conn, char *path)
CURLcode result;
FTPSENDF(conn, "CWD %s", path);
nread = Curl_GetFTPResponse(conn->data->state.buffer, conn, &ftpcode);
if (nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if (result)
return result;
if (ftpcode != 250) {
failf(conn->data, "Couldn't cd to %s", path);
@@ -741,11 +752,13 @@ CURLcode ftp_getfiletime(struct connectdata *conn, char *file)
again a grey area as the MDTM is not kosher RFC959 */
FTPSENDF(conn, "MDTM %s", file);
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode == 213) {
switch(ftpcode) {
case 213:
{
/* we got a time. Format should be: "YYYYMMDDHHMMSS[.sss]" where the
last .sss part is optional and means fractions of a second */
int year, month, day, hour, minute, second;
@@ -758,9 +771,15 @@ CURLcode ftp_getfiletime(struct connectdata *conn, char *file)
/* now, convert this into a time() value: */
conn->data->info.filetime = curl_getdate(buf, &secs);
}
else {
infof(conn->data, "unsupported MDTM reply format\n");
}
break;
default:
infof(conn->data, "unsupported MDTM reply format\n");
break;
case 550: /* "No such file or directory" */
failf(conn->data, "Given file does not exist");
result = CURLE_FTP_COULDNT_RETR_FILE;
break;
}
return result;
}
@@ -778,14 +797,13 @@ static CURLcode ftp_transfertype(struct connectdata *conn,
struct SessionHandle *data = conn->data;
int ftpcode;
ssize_t nread;
char *buf=data->state.buffer;
CURLcode result;
FTPSENDF(conn, "TYPE %s", ascii?"A":"I");
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode != 200) {
failf(data, "Couldn't set %s mode",
@@ -814,9 +832,9 @@ CURLcode ftp_getsize(struct connectdata *conn, char *file,
CURLcode result;
FTPSENDF(conn, "SIZE %s", file);
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode == 213) {
/* get the size from the ascii string: */
@@ -975,7 +993,6 @@ CURLcode ftp_use_port(struct connectdata *conn)
struct SessionHandle *data=conn->data;
int portsock=-1;
ssize_t nread;
char *buf = data->state.buffer; /* this is our buffer */
int ftpcode; /* receive FTP response codes in this */
CURLcode result;
@@ -999,7 +1016,6 @@ CURLcode ftp_use_port(struct connectdata *conn)
#endif
unsigned char *ap;
unsigned char *pp;
int alen, plen;
char portmsgbuf[4096], tmp[4096];
const char *mode[] = { "EPRT", "LPRT", "PORT", NULL };
@@ -1062,6 +1078,7 @@ CURLcode ftp_use_port(struct connectdata *conn)
for (modep = (char **)mode; modep && *modep; modep++) {
int lprtaf, eprtaf;
int alen=0, plen=0;
switch (sa->sa_family) {
case AF_INET:
@@ -1155,9 +1172,9 @@ CURLcode ftp_use_port(struct connectdata *conn)
return result;
}
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if (ftpcode != 200) {
failf(data, "Server does not grok %s", *modep);
@@ -1183,8 +1200,7 @@ CURLcode ftp_use_port(struct connectdata *conn)
*
*/
struct sockaddr_in sa;
struct hostent *h=NULL;
char *hostdataptr=NULL;
struct Curl_dns_entry *h=NULL;
unsigned short porttouse;
char myhost[256] = "";
bool sa_filled_in = FALSE;
@@ -1215,6 +1231,10 @@ CURLcode ftp_use_port(struct connectdata *conn)
sa_filled_in = TRUE; /* the sa struct is filled in */
}
if(h)
/* when we return from here, we can forget about this */
Curl_resolv_unlock(h);
if ( h || sa_filled_in) {
if( (portsock = socket(AF_INET, SOCK_STREAM, 0)) >= 0 ) {
int size;
@@ -1227,8 +1247,8 @@ CURLcode ftp_use_port(struct connectdata *conn)
if(!sa_filled_in) {
memset((char *)&sa, 0, sizeof(sa));
memcpy((char *)&sa.sin_addr,
h->h_addr,
h->h_length);
h->addr->h_addr,
h->addr->h_length);
sa.sin_family = AF_INET;
sa.sin_addr.s_addr = INADDR_ANY;
}
@@ -1250,19 +1270,16 @@ CURLcode ftp_use_port(struct connectdata *conn)
if ( listen(portsock, 1) < 0 ) {
failf(data, "listen(2) failed on socket");
free(hostdataptr);
return CURLE_FTP_PORT_FAILED;
}
}
else {
failf(data, "bind(2) failed on socket");
free(hostdataptr);
return CURLE_FTP_PORT_FAILED;
}
}
else {
failf(data, "socket(2) failed (%s)");
free(hostdataptr);
return CURLE_FTP_PORT_FAILED;
}
}
@@ -1277,7 +1294,7 @@ CURLcode ftp_use_port(struct connectdata *conn)
struct in_addr in;
unsigned short ip[5];
(void) memcpy(&in.s_addr,
h?*h->h_addr_list:(char *)&sa.sin_addr.s_addr,
h?*h->addr->h_addr_list:(char *)&sa.sin_addr.s_addr,
sizeof (in.s_addr));
#ifdef HAVE_INET_NTOA_R
@@ -1301,9 +1318,9 @@ CURLcode ftp_use_port(struct connectdata *conn)
return result;
}
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode != 200) {
failf(data, "Server does not grok PORT, try without it!");
@@ -1332,7 +1349,7 @@ CURLcode ftp_use_pasv(struct connectdata *conn,
char *buf = data->state.buffer; /* this is our buffer */
int ftpcode; /* receive FTP response codes in this */
CURLcode result;
Curl_addrinfo *addr=NULL;
struct Curl_dns_entry *addr=NULL;
Curl_ipconnect *conninfo;
/*
@@ -1363,7 +1380,7 @@ CURLcode ftp_use_pasv(struct connectdata *conn,
#endif
int modeoff;
unsigned short connectport; /* the local port connect() should use! */
unsigned short newport; /* remote port, not necessary the local one */
unsigned short newport=0; /* remote port, not necessary the local one */
/* newhost must be able to hold a full IP-style address in ASCII, which
in the IPv6 case means 5*8-1 = 39 letters */
@@ -1375,9 +1392,9 @@ CURLcode ftp_use_pasv(struct connectdata *conn,
result = Curl_ftpsendf(conn, "%s", mode[modeoff]);
if(result)
return result;
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if (ftpcode == results[modeoff])
break;
}
@@ -1480,6 +1497,8 @@ CURLcode ftp_use_pasv(struct connectdata *conn,
&conninfo,
connected);
Curl_resolv_unlock(addr); /* we're done using this address */
/*
* When this is used from the multi interface, this might've returned with
* the 'connected' set to FALSE and thus we are now awaiting a non-blocking
@@ -1520,7 +1539,7 @@ CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
ssize_t nread;
int ftpcode; /* for ftp status */
/* the ftp struct is already inited in ftp_connect() */
/* the ftp struct is already inited in Curl_ftp_connect() */
struct FTP *ftp = conn->proto.ftp;
long *bytecountp = ftp->bytecountp;
@@ -1580,8 +1599,8 @@ CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
readthisamountnow = BUFSIZE;
actuallyread =
data->set.fread(data->state.buffer, 1, readthisamountnow,
data->set.in);
conn->fread(data->state.buffer, 1, readthisamountnow,
conn->fread_in);
passed += actuallyread;
if(actuallyread != readthisamountnow) {
@@ -1612,7 +1631,7 @@ CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
}
}
/* Send everything on data->set.in to the socket */
/* Send everything on data->state.in to the socket */
if(data->set.ftp_append) {
/* we append onto the file instead of rewriting it */
FTPSENDF(conn, "APPE %s", ftp->file);
@@ -1621,9 +1640,9 @@ CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
FTPSENDF(conn, "STOR %s", ftp->file);
}
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode>=400) {
failf(data, "Failed FTP upload:%s", buf+3);
@@ -1797,9 +1816,9 @@ CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
FTPSENDF(conn, "REST %d", conn->resume_from);
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if(ftpcode != 350) {
failf(data, "Couldn't use REST: %s", buf+4);
@@ -1810,9 +1829,9 @@ CURLcode Curl_ftp_nextconnect(struct connectdata *conn)
FTPSENDF(conn, "RETR %s", ftp->file);
}
nread = Curl_GetFTPResponse(buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
result = Curl_GetFTPResponse(&nread, conn, &ftpcode);
if(result)
return result;
if((ftpcode == 150) || (ftpcode == 125)) {
@@ -1917,7 +1936,7 @@ CURLcode ftp_perform(struct connectdata *conn,
struct SessionHandle *data=conn->data;
char *buf = data->state.buffer; /* this is our buffer */
/* the ftp struct is already inited in ftp_connect() */
/* the ftp struct is already inited in Curl_ftp_connect() */
struct FTP *ftp = conn->proto.ftp;
/* Send any QUOTE strings? */
@@ -1978,7 +1997,7 @@ CURLcode ftp_perform(struct connectdata *conn,
well, we "emulate" a HTTP-style header in our output. */
#ifdef HAVE_STRFTIME
if(data->set.get_filetime && data->info.filetime) {
if(data->set.get_filetime && (data->info.filetime>=0) ) {
struct tm *tm;
#ifdef HAVE_LOCALTIME_R
struct tm buffer;
@@ -2087,7 +2106,7 @@ CURLcode Curl_ftp(struct connectdata *conn)
retcode = Curl_ftp_nextconnect(conn);
else
/* since we didn't connect now, we want do_more to get called */
conn->do_more = TRUE;
conn->bits.do_more = TRUE;
}
return retcode;
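Most of the ftp.c churn above is one mechanical change: Curl_GetFTPResponse() no longer returns a byte count where any negative value vaguely meant "timed out", it now returns a result code and reports the byte count through a pointer. A standalone sketch of the new calling convention; the enum and function here are made up for illustration, not curl's:

#include <stdio.h>

typedef enum {
  OK = 0,
  ERR_TIMEOUT,
  ERR_RECV,
  ERR_NOMEM
} code_t;

/* new style: the return value is only ever a result code; the byte
   count travels through an output parameter */
static code_t get_response(int *nreadp, int *replycode)
{
  *nreadp = 0;
  *replycode = 0;

  /* ... read from the control connection here ... */
  *nreadp = 20;
  *replycode = 220;
  return OK;
}

int main(void)
{
  int nread, reply;
  code_t result = get_response(&nread, &reply);

  if(result) {                       /* non-zero means a specific failure */
    fprintf(stderr, "ftp response failed: %d\n", result);
    return 1;
  }
  printf("read %d bytes, reply code %d\n", nread, reply);
  return 0;
}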

View File

@@ -29,7 +29,7 @@ CURLcode Curl_ftp_done(struct connectdata *conn);
CURLcode Curl_ftp_connect(struct connectdata *conn);
CURLcode Curl_ftp_disconnect(struct connectdata *conn);
CURLcode Curl_ftpsendf(struct connectdata *, const char *fmt, ...);
int Curl_GetFTPResponse(char *buf, struct connectdata *conn,
CURLcode Curl_GetFTPResponse(int *nread, struct connectdata *conn,
int *ftpcode);
CURLcode Curl_ftp_nextconnect(struct connectdata *conn);
#endif

View File

@@ -72,9 +72,9 @@ CURLcode Curl_initinfo(struct SessionHandle *data)
CURLcode Curl_getinfo(struct SessionHandle *data, CURLINFO info, ...)
{
va_list arg;
long *param_longp;
double *param_doublep;
char **param_charp;
long *param_longp=NULL;
double *param_doublep=NULL;
char **param_charp=NULL;
va_start(arg, info);
switch(info&CURLINFO_TYPEMASK) {
@@ -158,6 +158,9 @@ CURLcode Curl_getinfo(struct SessionHandle *data, CURLINFO info, ...)
case CURLINFO_CONTENT_TYPE:
*param_charp = data->info.contenttype;
break;
case CURLINFO_PRIVATE:
*param_charp = data->set.private?data->set.private:(char *)"";
break;
default:
return CURLE_BAD_FUNCTION_ARGUMENT;
}
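The new CURLINFO_PRIVATE case returns whatever pointer the application stored on the handle. A hedged usage sketch against the public easy API (error checking omitted, and the stored string is just a placeholder):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  char *tag = "transfer-42";     /* any application pointer will do */
  char *back = NULL;

  if(curl) {
    curl_easy_setopt(curl, CURLOPT_PRIVATE, tag);
    curl_easy_getinfo(curl, CURLINFO_PRIVATE, &back);
    printf("private data: %s\n", back ? back : "(none)");
    curl_easy_cleanup(curl);
  }
  return 0;
}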

View File

@@ -25,6 +25,7 @@
#include <string.h>
#include <stdlib.h>
#include "hash.h"
#include "llist.h"
@@ -128,7 +129,6 @@ _mk_hash_element (curl_hash_element **e, char *key, size_t key_len, const void *
(*e)->key = strdup(key);
(*e)->key_len = key_len;
(*e)->ptr = (void *) p;
return 0;
}
/* }}} */
@@ -195,10 +195,10 @@ Curl_hash_delete(curl_hash *h, char *key, size_t key_len)
}
/* }}} */
/* {{{ int curl_hash_find (curl_hash *, char *, size_t, void **)
/* {{{ int curl_hash_pick (curl_hash *, char *, size_t, void **)
*/
int
Curl_hash_find(curl_hash *h, char *key, size_t key_len, void **p)
void *
Curl_hash_pick(curl_hash *h, char *key, size_t key_len)
{
curl_llist_element *le;
curl_hash_element *he;
@@ -209,12 +209,11 @@ Curl_hash_find(curl_hash *h, char *key, size_t key_len, void **p)
le = CURL_LLIST_NEXT(le)) {
he = CURL_LLIST_VALP(le);
if (_hash_key_compare(he->key, he->key_len, key, key_len)) {
*p = he->ptr;
return 1;
return he->ptr;
}
}
return 0;
return NULL;
}
/* }}} */
@@ -222,7 +221,7 @@ Curl_hash_find(curl_hash *h, char *key, size_t key_len, void **p)
*/
void
Curl_hash_apply(curl_hash *h, void *user,
void (*cb)(void *, curl_hash_element *))
void (*cb)(void *user, void *ptr))
{
curl_llist_element *le;
int i;
@@ -231,7 +230,8 @@ Curl_hash_apply(curl_hash *h, void *user,
for (le = CURL_LLIST_HEAD(h->table[i]);
le != NULL;
le = CURL_LLIST_NEXT(le)) {
cb(user, (curl_hash_element *) CURL_LLIST_VALP(le));
curl_hash_element *el = CURL_LLIST_VALP(le);
cb(user, el->ptr);
}
}
}
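Renaming Curl_hash_find() to Curl_hash_pick() also changes its shape: instead of an int status plus a void ** output parameter, the lookup simply returns the stored pointer, with NULL meaning "not found". A tiny standalone comparison of the two styles over a trivial table (not curl's hash implementation):

#include <stdio.h>
#include <string.h>

struct entry { const char *key; void *value; };

static struct entry table[] = {
  { "curl.haxx.se", (void *)"10.1.1.1" },
  { "localhost",    (void *)"127.0.0.1" },
};

/* old style: status code plus output parameter */
static int find(const char *key, void **out)
{
  size_t i;
  for(i = 0; i < sizeof(table)/sizeof(table[0]); i++)
    if(!strcmp(table[i].key, key)) {
      *out = table[i].value;
      return 1;
    }
  return 0;
}

/* new style: the pointer itself is the answer, NULL means "not found" */
static void *pick(const char *key)
{
  size_t i;
  for(i = 0; i < sizeof(table)/sizeof(table[0]); i++)
    if(!strcmp(table[i].key, key))
      return table[i].value;
  return NULL;
}

int main(void)
{
  void *v = NULL;
  if(find("localhost", &v))
    printf("find: %s\n", (char *)v);
  printf("pick: %s\n", (char *)pick("localhost"));
  return 0;
}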

View File

@@ -49,15 +49,14 @@ void Curl_hash_init(curl_hash *, int, curl_hash_dtor);
curl_hash *Curl_hash_alloc(int, curl_hash_dtor);
int Curl_hash_add(curl_hash *, char *, size_t, const void *);
int Curl_hash_delete(curl_hash *h, char *key, size_t key_len);
int Curl_hash_find(curl_hash *, char *, size_t, void **p);
void Curl_hash_apply(curl_hash *h, void *user, void (*cb)(void *, curl_hash_element *));
void *Curl_hash_pick(curl_hash *, char *, size_t);
void Curl_hash_apply(curl_hash *h, void *user,
void (*cb)(void *user, void *ptr));
int Curl_hash_count(curl_hash *h);
void Curl_hash_clean(curl_hash *h);
void Curl_hash_clean_with_criterium(curl_hash *h, void *user, int (*comp)(void *, void *));
void Curl_hash_destroy(curl_hash *h);
#define Curl_hash_update Curl_hash_add
#endif
/*

View File

@@ -80,10 +80,15 @@
static curl_hash hostname_cache;
static int host_cache_initialized;
static Curl_addrinfo *my_getaddrinfo(struct SessionHandle *data,
char *hostname,
int port,
char **bufp);
void Curl_global_host_cache_init(void)
{
if (!host_cache_initialized) {
Curl_hash_init(&hostname_cache, 7, Curl_freeaddrinfo);
Curl_hash_init(&hostname_cache, 7, Curl_freednsinfo);
host_cache_initialized = 1;
}
}
@@ -101,11 +106,6 @@ void Curl_global_host_cache_dtor(void)
}
}
struct curl_dns_cache_entry {
Curl_addrinfo *addr;
time_t timestamp;
};
/* count the number of characters that an integer takes up */
static int _num_chars(int i)
{
@@ -129,7 +129,7 @@ static int _num_chars(int i)
/* Create a hostcache id */
static char *
_create_hostcache_id(char *server, int port, ssize_t *entry_len)
create_hostcache_id(char *server, int port, ssize_t *entry_len)
{
char *id = NULL;
@@ -162,16 +162,19 @@ struct hostcache_prune_data {
};
static int
_curl_hostcache_timestamp_remove(void *datap, void *hc)
hostcache_timestamp_remove(void *datap, void *hc)
{
struct hostcache_prune_data *data =
(struct hostcache_prune_data *) datap;
struct curl_dns_cache_entry *c = (struct curl_dns_cache_entry *) hc;
struct Curl_dns_entry *c = (struct Curl_dns_entry *) hc;
if (data->now - c->timestamp < data->cache_timeout) {
if ((data->now - c->timestamp < data->cache_timeout) ||
c->inuse) {
/* please don't remove */
return 0;
}
/* fine, remove */
return 1;
}
@@ -185,14 +188,30 @@ hostcache_prune(curl_hash *hostcache, int cache_timeout, int now)
Curl_hash_clean_with_criterium(hostcache,
(void *) &user,
_curl_hostcache_timestamp_remove);
hostcache_timestamp_remove);
}
#if defined(MALLOCDEBUG) && defined(AGGRESIVE_TEST)
/* Called from Curl_done() to check that there's no DNS cache entry with
a non-zero counter left. */
void Curl_scan_cache_used(void *user, void *ptr)
{
struct Curl_dns_entry *e = ptr;
(void)user; /* prevent compiler warning */
if(e->inuse) {
fprintf(stderr, "*** WARNING: locked DNS cache entry detected: %s\n",
e->entry_id);
/* perform a segmentation fault to draw attention */
*(void **)0 = 0;
}
}
#endif
/* Macro to save redundant free'ing of entry_id */
#define _hostcache_return(__v) \
#define HOSTCACHE_RETURN(dns) \
{ \
free(entry_id); \
return (__v); \
return dns; \
}
#ifdef HAVE_SIGSETJMP
@@ -200,86 +219,93 @@ hostcache_prune(curl_hash *hostcache, int cache_timeout, int now)
sigjmp_buf curl_jmpenv;
#endif
Curl_addrinfo *Curl_resolv(struct SessionHandle *data,
struct Curl_dns_entry *Curl_resolv(struct SessionHandle *data,
char *hostname,
int port)
{
char *entry_id = NULL;
struct curl_dns_cache_entry *p = NULL;
struct Curl_dns_entry *dns = NULL;
ssize_t entry_len;
time_t now;
char *bufp;
#ifdef HAVE_SIGSETJMP
if(sigsetjmp(curl_jmpenv, 1) != 0) {
/* this allows us to time-out from the name resolver, as the timeout
will generate a signal and we will siglongjmp() from that here */
if(!data->set.no_signal && sigsetjmp(curl_jmpenv, 1)) {
/* this is coming from a siglongjmp() */
failf(data, "name lookup time-outed");
return NULL;
}
#endif
/* If the host cache timeout is 0, we don't do DNS cach'ing
so fall through */
if (data->set.dns_cache_timeout == 0) {
return Curl_getaddrinfo(data, hostname, port, &bufp);
/* Create an entry id, based upon the hostname and port */
entry_len = strlen(hostname);
entry_id = create_hostcache_id(hostname, port, &entry_len);
/* If we can't create the entry id, fail */
if (!entry_id)
return NULL;
/* See if its already in our dns cache */
dns = Curl_hash_pick(data->hostcache, entry_id, entry_len+1);
if (!dns) {
Curl_addrinfo *addr = my_getaddrinfo(data, hostname, port, &bufp);
if (!addr) {
HOSTCACHE_RETURN(NULL);
}
/* Create a new cache entry */
dns = (struct Curl_dns_entry *) malloc(sizeof(struct Curl_dns_entry));
if (!dns) {
Curl_freeaddrinfo(addr);
HOSTCACHE_RETURN(NULL);
}
dns->inuse = 0;
dns->addr = addr;
/* Save it in our host cache */
Curl_hash_add(data->hostcache, entry_id, entry_len+1, (const void *) dns);
}
time(&now);
/* Remove outdated entries from the hostcache */
dns->timestamp = now;
dns->inuse++; /* mark entry as in-use */
#ifdef MALLOCDEBUG
dns->entry_id = entry_id;
#endif
/* Remove outdated and unused entries from the hostcache */
hostcache_prune(data->hostcache,
data->set.dns_cache_timeout,
now);
/* Create an entry id, based upon the hostname and port */
entry_len = strlen(hostname);
entry_id = _create_hostcache_id(hostname, port, &entry_len);
/* If we can't create the entry id, don't cache, just fall-through
to the plain Curl_getaddrinfo() */
if (!entry_id) {
return Curl_getaddrinfo(data, hostname, port, &bufp);
}
/* See if its already in our dns cache */
if (entry_id &&
Curl_hash_find(data->hostcache, entry_id, entry_len+1, (void **) &p)) {
_hostcache_return(p->addr);
}
/* Create a new cache entry */
p = (struct curl_dns_cache_entry *)
malloc(sizeof(struct curl_dns_cache_entry));
if (!p) {
_hostcache_return(NULL);
}
p->addr = Curl_getaddrinfo(data, hostname, port, &bufp);
if (!p->addr) {
free(p);
_hostcache_return(NULL);
}
p->timestamp = now;
/* Save it in our host cache */
Curl_hash_update(data->hostcache, entry_id, entry_len+1, (const void *) p);
_hostcache_return(p->addr);
HOSTCACHE_RETURN(dns);
}
/*
* This is a wrapper function for freeing name information in a protocol
* independent way. This takes care of using the appropriate underlaying
* proper function.
* function.
*/
void Curl_freeaddrinfo(void *freethis)
void Curl_freeaddrinfo(Curl_addrinfo *p)
{
struct curl_dns_cache_entry *p = (struct curl_dns_cache_entry *) freethis;
#ifdef ENABLE_IPV6
freeaddrinfo(p->addr);
freeaddrinfo(p);
#else
free(p->addr);
free(p);
#endif
}
/*
* Free a cache dns entry.
*/
void Curl_freednsinfo(void *freethis)
{
struct Curl_dns_entry *p = (struct Curl_dns_entry *) freethis;
Curl_freeaddrinfo(p->addr);
free(p);
}
@@ -331,7 +357,7 @@ void curl_freeaddrinfo(struct addrinfo *freethis,
* memory we need to free after use. That meory *MUST* be freed with
* Curl_freeaddrinfo(), nothing else.
*/
Curl_addrinfo *Curl_getaddrinfo(struct SessionHandle *data,
static Curl_addrinfo *my_getaddrinfo(struct SessionHandle *data,
char *hostname,
int port,
char **bufp)
@@ -507,7 +533,7 @@ static void hostcache_fixoffset(struct hostent *h, int offset)
/* The original code to this function was once stolen from the Dancer source
code, written by Bjorn Reese, it has since been patched and modified
considerably. */
Curl_addrinfo *Curl_getaddrinfo(struct SessionHandle *data,
static Curl_addrinfo *my_getaddrinfo(struct SessionHandle *data,
char *hostname,
int port,
char **bufp)
@@ -597,14 +623,37 @@ Curl_addrinfo *Curl_getaddrinfo(struct SessionHandle *data,
#endif
#ifdef HAVE_GETHOSTBYNAME_R_6
/* Linux */
while((res=gethostbyname_r(hostname,
do {
res=gethostbyname_r(hostname,
(struct hostent *)buf,
(char *)buf + sizeof(struct hostent),
step_size - sizeof(struct hostent),
&h, /* DIFFERENCE */
&h_errnop))==ERANGE) {
&h_errnop);
/* Redhat 8, using glibc 2.2.93 changed the behavior. Now all of a
sudden this function returns EAGAIN if the given buffer size is too
small. Previous versions are known to return ERANGE for the same
problem.
This wouldn't be such a big problem if older versions wouldn't
sometimes return EAGAIN on a common failure case. Alas, we can't
assume that EAGAIN *or* ERANGE means ERANGE for any given version of
glibc.
For now, we do that and thus we may call the function repeatedly and
fail for older glibc versions that return EAGAIN, until we run out
of buffer size (step_size grows beyond CURL_NAMELOOKUP_SIZE).
If anyone has a better fix, please tell us!
*/
if((ERANGE == res) || (EAGAIN == res)) {
step_size+=200;
continue;
}
break;
} while(step_size <= CURL_NAMELOOKUP_SIZE);
if(!h) /* failure */
res=1;
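The rewritten loop above works around glibc's inconsistent "buffer too small" signalling from gethostbyname_r(): older versions return ERANGE while glibc 2.2.93 (Red Hat 8) returns EAGAIN, so the code grows the buffer and retries on either until a size cap is reached. A standalone sketch of that retry pattern against the six-argument glibc prototype; the buffer sizes and the cap are arbitrary choices, not curl's:

#define _GNU_SOURCE
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* grow-and-retry wrapper for glibc's gethostbyname_r(); *scratch must be
   freed by the caller once the returned hostent is no longer needed */
static struct hostent *resolve(const char *name, char **scratch)
{
  size_t buflen = 512;                 /* starting scratch size */
  struct hostent *result = NULL;

  for(;;) {
    char *buf = realloc(*scratch, sizeof(struct hostent) + buflen);
    int rc, h_errnop = 0;

    if(!buf)
      return NULL;
    *scratch = buf;

    rc = gethostbyname_r(name, (struct hostent *)buf,
                         buf + sizeof(struct hostent), buflen,
                         &result, &h_errnop);

    /* ERANGE (older glibc) or EAGAIN (glibc 2.2.93+) may both mean
       "buffer too small", so enlarge and try again up to a cap */
    if((rc == ERANGE || rc == EAGAIN) && buflen < 64 * 1024) {
      buflen *= 2;
      continue;
    }
    return rc ? NULL : result;
  }
}

int main(int argc, char **argv)
{
  char *scratch = NULL;
  struct hostent *h = resolve(argc > 1 ? argv[1] : "localhost", &scratch);

  printf("%s\n", h ? h->h_name : "lookup failed");
  free(scratch);
  return 0;
}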

View File

@@ -23,6 +23,7 @@
* $Id$
***************************************************************************/
#include "setup.h"
#include "hash.h"
struct addrinfo;
@@ -35,17 +36,39 @@ curl_hash *Curl_global_host_cache_get(void);
#define Curl_global_host_cache_use(__p) ((__p)->set.global_dns_cache)
Curl_addrinfo *Curl_resolv(struct SessionHandle *data,
struct Curl_dns_entry {
Curl_addrinfo *addr;
time_t timestamp;
long inuse; /* use-counter, make very sure you decrease this
when you're done using the address you received */
#ifdef MALLOCDEBUG
char *entry_id;
#endif
};
/*
* Curl_resolv() returns an entry with the info for the specified host
* and port.
*
* The returned data *MUST* be "unlocked" with Curl_resolv_unlock() after
* use, or we'll leak memory!
*/
struct Curl_dns_entry *Curl_resolv(struct SessionHandle *data,
char *hostname,
int port);
/* Get name info */
Curl_addrinfo *Curl_getaddrinfo(struct SessionHandle *data,
char *hostname,
int port,
char **bufp);
/* unlock a previously resolved dns entry */
#define Curl_resolv_unlock(dns) dns->inuse--
/* for debugging purposes only: */
void Curl_scan_cache_used(void *user, void *ptr);
/* free name info */
void Curl_freeaddrinfo(void *freethis);
void Curl_freeaddrinfo(Curl_addrinfo *freeaddr);
/* free cached name info */
void Curl_freednsinfo(void *freethis);
#ifdef MALLOCDEBUG
void curl_freeaddrinfo(struct addrinfo *freethis,

View File

@@ -98,12 +98,65 @@
#include "memdebug.h"
#endif
/* fread() emulation to provide POST and/or request data */
static int readmoredata(char *buffer,
size_t size,
size_t nitems,
void *userp)
{
struct connectdata *conn = (struct connectdata *)userp;
struct HTTP *http = conn->proto.http;
int fullsize = size * nitems;
if(0 == http->postsize)
/* nothing to return */
return 0;
/* make sure that a HTTP request is never sent away chunked! */
conn->bits.forbidchunk= (http->sending == HTTPSEND_REQUEST)?TRUE:FALSE;
if(http->postsize <= fullsize) {
memcpy(buffer, http->postdata, http->postsize);
fullsize = http->postsize;
if(http->backup.postsize) {
/* move backup data into focus and continue on that */
http->postdata = http->backup.postdata;
http->postsize = http->backup.postsize;
conn->fread = http->backup.fread;
conn->fread_in = http->backup.fread_in;
http->sending++; /* move one step up */
http->backup.postsize=0;
}
else
http->postsize = 0;
return fullsize;
}
memcpy(buffer, http->postdata, fullsize);
http->postdata += fullsize;
http->postsize -= fullsize;
return fullsize;
}
/* ------------------------------------------------------------------------- */
/*
* The add_buffer series of functions are used to build one large memory chunk
* from repeated function invokes. Used so that the entire HTTP request can
* be sent in one go.
*/
struct send_buffer {
char *buffer;
size_t size_max;
size_t size_used;
};
typedef struct send_buffer send_buffer;
static CURLcode
add_buffer(send_buffer *in, const void *inptr, size_t size);
@@ -126,44 +179,66 @@ send_buffer *add_buffer_init(void)
* add_buffer_send() sends a buffer and frees all associated memory.
*/
static
CURLcode add_buffer_send(int sockfd, struct connectdata *conn, send_buffer *in,
long *bytes_written)
CURLcode add_buffer_send(send_buffer *in,
int sockfd,
struct connectdata *conn,
long *bytes_written) /* add the number of sent
bytes to this counter */
{
ssize_t amount;
CURLcode res;
char *ptr;
int size;
struct HTTP *http = conn->proto.http;
/* The looping below is required since we use non-blocking sockets, but due
to the circumstances we will just loop and try again and again etc */
ptr = in->buffer;
size = in->size_used;
do {
res = Curl_write(conn, sockfd, ptr, size, &amount);
if(CURLE_OK != res)
break;
if(CURLE_OK == res) {
if(conn->data->set.verbose)
/* this data _may_ contain binary stuff */
Curl_debug(conn->data, CURLINFO_HEADER_OUT, ptr, amount);
*bytes_written += amount;
if(amount != size) {
/* The whole request could not be sent in one system call. We must queue
it up and send it later when we get the chance. We must not loop here
and wait until it might work again. */
size -= amount;
ptr += amount;
/* backup the currently set pointers */
http->backup.fread = conn->fread;
http->backup.fread_in = conn->fread_in;
http->backup.postdata = http->postdata;
http->backup.postsize = http->postsize;
/* set the new pointers for the request-sending */
conn->fread = (curl_read_callback)readmoredata;
conn->fread_in = (void *)conn;
http->postdata = ptr;
http->postsize = size;
http->send_buffer = in;
http->sending = HTTPSEND_REQUEST;
return CURLE_OK;
}
else
break;
} while(1);
/* the full buffer was sent, clean up and return */
}
if(in->buffer)
free(in->buffer);
free(in);
*bytes_written += amount;
return res;
}
@@ -223,21 +298,75 @@ CURLcode add_buffer(send_buffer *in, const void *inptr, size_t size)
/* end of the add_buffer functions */
/* ------------------------------------------------------------------------- */
/*
* Curl_compareheader()
*
* Returns TRUE if 'headerline' contains the 'header' with given 'content'.
* Pass headers WITH the colon.
*/
bool
Curl_compareheader(char *headerline, /* line to check */
const char *header, /* header keyword _with_ colon */
const char *content) /* content string to find */
{
/* RFC2616, section 4.2 says: "Each header field consists of a name followed
* by a colon (":") and the field value. Field names are case-insensitive.
* The field value MAY be preceded by any amount of LWS, though a single SP
* is preferred." */
size_t hlen = strlen(header);
size_t clen;
size_t len;
char *start;
char *end;
if(!strnequal(headerline, header, hlen))
return FALSE; /* doesn't start with header */
/* pass the header */
start = &headerline[hlen];
/* pass all white spaces */
while(*start && isspace((int)*start))
start++;
/* find the end of the header line */
end = strchr(start, '\r'); /* lines end with CRLF */
if(!end) {
/* in case there's a non-standard compliant line here */
end = strchr(start, '\n');
if(!end)
/* hm, there's no line ending here, use the zero byte! */
end = strchr(start, '\0');
}
len = end-start; /* length of the content part of the input line */
clen = strlen(content); /* length of the word to find */
/* find the content string in the rest of the line */
for(;len>=clen;len--, start++) {
if(strnequal(start, content, clen))
return TRUE; /* match! */
}
return FALSE; /* no match */
}
/*
* This function checks the linked list of custom HTTP headers for a particular
* header (prefix).
*/
static bool checkheaders(struct SessionHandle *data, const char *thisheader)
static char *checkheaders(struct SessionHandle *data, const char *thisheader)
{
struct curl_slist *head;
size_t thislen = strlen(thisheader);
for(head = data->set.headers; head; head=head->next) {
if(strnequal(head->data, thisheader, thislen)) {
return TRUE;
if(strnequal(head->data, thisheader, thislen))
return head->data;
}
}
return FALSE;
return NULL;
}
/*
@@ -420,7 +549,7 @@ CURLcode Curl_http_connect(struct connectdata *conn)
* has occured, can we start talking SSL
*/
if(data->change.proxy && (data->set.proxytype == CURLPROXY_HTTP) &&
if(conn->bits.httpproxy &&
((conn->protocol & PROT_HTTPS) || data->set.tunnel_thru_httpproxy)) {
/* either HTTPS over proxy, OR explicitly asked for a tunnel */
@@ -440,6 +569,10 @@ CURLcode Curl_http_connect(struct connectdata *conn)
if(conn->bits.user_passwd && !data->state.this_is_a_follow) {
/* Authorization: is requested, this is not a followed location, get the
original host name */
if (data->state.auth_host)
/* Free to avoid leaking memory on multiple requests*/
free(data->state.auth_host);
data->state.auth_host = strdup(conn->hostname);
}
@@ -454,13 +587,21 @@ CURLcode Curl_http_done(struct connectdata *conn)
data=conn->data;
http=conn->proto.http;
/* set the proper values (possibly modified on POST) */
conn->fread = data->set.fread; /* restore */
conn->fread_in = data->set.in; /* restore */
if(http->send_buffer) {
send_buffer *buff = http->send_buffer;
free(buff->buffer);
free(buff);
}
if(HTTPREQ_POST_FORM == data->set.httpreq) {
conn->bytecount = http->readbytecount + http->writebytecount;
Curl_formclean(http->sendit); /* Now free that whole lot */
data->set.fread = http->storefread; /* restore */
data->set.in = http->in; /* restore */
}
else if(HTTPREQ_PUT == data->set.httpreq)
conn->bytecount = http->readbytecount + http->writebytecount;
@@ -475,7 +616,6 @@ CURLcode Curl_http_done(struct connectdata *conn)
return CURLE_OK;
}
CURLcode Curl_http(struct connectdata *conn)
{
struct SessionHandle *data=conn->data;
@@ -485,6 +625,7 @@ CURLcode Curl_http(struct connectdata *conn)
struct Cookie *co=NULL; /* no cookies from start */
char *ppath = conn->ppath; /* three previous function arguments */
char *host = conn->name;
const char *te = ""; /* tranfer-encoding */
if(!conn->proto.http) {
/* Only allocate this struct if we don't already have it! */
@@ -522,7 +663,7 @@ CURLcode Curl_http(struct connectdata *conn)
host due to a location-follow, we do some weirdo checks here */
if(!data->state.this_is_a_follow ||
!data->state.auth_host ||
strequal(data->state.auth_host, conn->hostname)) {
curl_strequal(data->state.auth_host, conn->hostname)) {
sprintf(data->state.buffer, "%s:%s",
data->state.user, data->state.passwd);
if(Curl_base64_encode(data->state.buffer, strlen(data->state.buffer),
@@ -546,12 +687,38 @@ CURLcode Curl_http(struct connectdata *conn)
conn->allocptr.cookie = aprintf("Cookie: %s\015\012", data->set.cookie);
}
if(!conn->bits.upload_chunky && (data->set.httpreq != HTTPREQ_GET)) {
/* not a chunky transfer but data is to be sent */
char *ptr = checkheaders(data, "Transfer-Encoding:");
if(ptr) {
/* Some kind of TE is requested, check if 'chunked' is chosen */
if(Curl_compareheader(ptr, "Transfer-Encoding:", "chunked"))
/* we have been told explicitly to upload chunky so deal with it! */
conn->bits.upload_chunky = TRUE;
}
}
if(conn->bits.upload_chunky) {
/* RFC2616 section 4.4:
Messages MUST NOT include both a Content-Length header field and a
non-identity transfer-coding. If the message does include a non-
identity transfer-coding, the Content-Length MUST be ignored. */
if(!checkheaders(data, "Transfer-Encoding:")) {
te = "Transfer-Encoding: chunked\r\n";
}
else {
/* The "Transfer-Encoding:" header was already added. */
te = "";
}
}
if(data->cookies) {
co = Curl_cookie_getlist(data->cookies,
host, ppath,
(conn->protocol&PROT_HTTPS?TRUE:FALSE));
}
if (data->change.proxy &&
if (data->change.proxy && *data->change.proxy &&
!data->set.tunnel_thru_httpproxy &&
!(conn->protocol&PROT_HTTPS)) {
/* The path sent to the proxy is in fact the entire URL */
@@ -717,7 +884,8 @@ CURLcode Curl_http(struct connectdata *conn)
"%s" /* pragma */
"%s" /* accept */
"%s" /* accept-encoding */
"%s", /* referer */
"%s" /* referer */
"%s",/* transfer-encoding */
data->set.customrequest?data->set.customrequest:
(data->set.no_body?"HEAD":
@@ -739,7 +907,8 @@ CURLcode Curl_http(struct connectdata *conn)
http->p_accept?http->p_accept:"",
(data->set.encoding && *data->set.encoding && conn->allocptr.accept_encoding)?
conn->allocptr.accept_encoding:"", /* 08/28/02 jhrg */
(data->change.referer && conn->allocptr.ref)?conn->allocptr.ref:"" /* Referer: <data> <CRLF> */
(data->change.referer && conn->allocptr.ref)?conn->allocptr.ref:"" /* Referer: <data> <CRLF> */,
te
);
if(co) {
@@ -836,14 +1005,14 @@ CURLcode Curl_http(struct connectdata *conn)
return CURLE_HTTP_POST_ERROR;
}
http->storefread = data->set.fread; /* backup */
http->in = data->set.in; /* backup */
/* set the read function to read from the generated form data */
conn->fread = (curl_read_callback)Curl_FormReader;
conn->fread_in = &http->form;
data->set.fread = (curl_read_callback)
Curl_FormReader; /* set the read function to read from the
generated form data */
data->set.in = (FILE *)&http->form;
http->sending = HTTPSEND_BODY;
if(!conn->bits.upload_chunky)
/* only add Content-Length if not uploading chunked */
add_bufferf(req_buffer,
"Content-Length: %d\r\n", http->postsize);
@@ -885,7 +1054,7 @@ CURLcode Curl_http(struct connectdata *conn)
Curl_pgrsSetUploadSize(data, http->postsize);
/* fire away the whole request to the server */
result = add_buffer_send(conn->firstsocket, conn, req_buffer,
result = add_buffer_send(req_buffer, conn->firstsocket, conn,
&data->info.request_size);
if(result)
failf(data, "Failed sending POST request");
@@ -903,22 +1072,22 @@ CURLcode Curl_http(struct connectdata *conn)
case HTTPREQ_PUT: /* Let's PUT the data to the server! */
if(data->set.infilesize>0) {
if((data->set.infilesize>0) && !conn->bits.upload_chunky)
/* only add Content-Length if not uploading chunked */
add_bufferf(req_buffer,
"Content-Length: %d\r\n\r\n", /* file size */
"Content-Length: %d\r\n", /* file size */
data->set.infilesize );
}
else
add_bufferf(req_buffer, "\015\012");
add_bufferf(req_buffer, "\r\n");
/* set the upload size to the progress meter */
Curl_pgrsSetUploadSize(data, data->set.infilesize);
/* this sends the buffer and frees all the buffer resources */
result = add_buffer_send(conn->firstsocket, conn, req_buffer,
result = add_buffer_send(req_buffer, conn->firstsocket, conn,
&data->info.request_size);
if(result)
failf(data, "Faied sending POST request");
failf(data, "Failed sending POST request");
else
/* prepare for transfer */
result = Curl_Transfer(conn, conn->firstsocket, -1, TRUE,
@@ -932,6 +1101,11 @@ CURLcode Curl_http(struct connectdata *conn)
case HTTPREQ_POST:
/* this is the simple POST, using x-www-form-urlencoded style */
if(!conn->bits.upload_chunky) {
/* We only set Content-Length and allow a custom Content-Length if
we don't upload data chunked, as RFC2616 forbids us to set both
kinds of headers (Transfer-Encoding: chunked and Content-Length) */
if(!checkheaders(data, "Content-Length:"))
/* we allow replacing this header, although it isn't very wise to
actually set your own */
@@ -940,6 +1114,7 @@ CURLcode Curl_http(struct connectdata *conn)
data->set.postfieldsize?
data->set.postfieldsize:
(data->set.postfields?strlen(data->set.postfields):0) );
}
if(!checkheaders(data, "Content-Type:"))
add_bufferf(req_buffer,
@@ -947,18 +1122,28 @@ CURLcode Curl_http(struct connectdata *conn)
add_buffer(req_buffer, "\r\n", 2);
/* and here comes the actual data */
if(data->set.postfieldsize && data->set.postfields) {
add_buffer(req_buffer, data->set.postfields,
data->set.postfieldsize);
}
else if(data->set.postfields)
add_bufferf(req_buffer,
"%s",
data->set.postfields );
/* and here we setup the pointers to the actual data */
if(data->set.postfields) {
if(data->set.postfieldsize)
http->postsize = data->set.postfieldsize;
else
http->postsize = strlen(data->set.postfields);
http->postdata = data->set.postfields;
/* issue the request */
result = add_buffer_send(conn->firstsocket, conn, req_buffer,
http->sending = HTTPSEND_BODY;
conn->fread = (curl_read_callback)readmoredata;
conn->fread_in = (void *)conn;
/* set the upload size to the progress meter */
Curl_pgrsSetUploadSize(data, http->postsize);
}
else
/* set the upload size to the progress meter */
Curl_pgrsSetUploadSize(data, data->set.infilesize);
/* issue the request, headers-only */
result = add_buffer_send(req_buffer, conn->firstsocket, conn,
&data->info.request_size);
if(result)
@@ -967,15 +1152,15 @@ CURLcode Curl_http(struct connectdata *conn)
result =
Curl_Transfer(conn, conn->firstsocket, -1, TRUE,
&http->readbytecount,
data->set.postfields?-1:conn->firstsocket,
data->set.postfields?NULL:&http->writebytecount);
conn->firstsocket,
&http->writebytecount);
break;
default:
add_buffer(req_buffer, "\r\n", 2);
/* issue the request */
result = add_buffer_send(conn->firstsocket, conn, req_buffer,
result = add_buffer_send(req_buffer, conn->firstsocket, conn,
&data->info.request_size);
if(result)
@@ -984,7 +1169,8 @@ CURLcode Curl_http(struct connectdata *conn)
/* HTTP GET/HEAD download: */
result = Curl_Transfer(conn, conn->firstsocket, -1, TRUE,
&http->readbytecount,
-1, NULL); /* nothing to upload */
http->postdata?conn->firstsocket:-1,
http->postdata?&http->writebytecount:NULL);
}
if(result)
return result;
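Curl_compareheader() above is what lets a user-supplied "Transfer-Encoding: chunked" header switch the upload into chunked mode: match the header name case-insensitively, skip leading whitespace in the value, then search for the wanted token up to the line ending. A simplified standalone version of that check, with strncasecmp() standing in for curl's strnequal():

#include <stdio.h>
#include <string.h>
#include <strings.h>   /* strncasecmp */
#include <ctype.h>

/* does 'headerline' carry 'header' (name with colon) whose value
   contains the word 'content'? */
static int header_has(const char *headerline, const char *header,
                      const char *content)
{
  size_t hlen = strlen(header);
  size_t clen = strlen(content);
  const char *start, *end;
  size_t len;

  if(strncasecmp(headerline, header, hlen))
    return 0;                        /* different header name */

  start = headerline + hlen;
  while(*start && isspace((unsigned char)*start))
    start++;                         /* skip leading LWS in the value */

  end = strchr(start, '\r');         /* header lines end in CRLF ... */
  if(!end)
    end = strchr(start, '\n');       /* ... but tolerate a bare LF ... */
  if(!end)
    end = start + strlen(start);     /* ... or no line ending at all */

  for(len = (size_t)(end - start); len >= clen; len--, start++)
    if(!strncasecmp(start, content, clen))
      return 1;                      /* token found in the value */

  return 0;
}

int main(void)
{
  printf("%d\n", header_has("Transfer-Encoding:  chunked\r\n",
                            "Transfer-Encoding:", "chunked"));
  printf("%d\n", header_has("Transfer-Encoding: gzip\r\n",
                            "Transfer-Encoding:", "chunked"));
  return 0;
}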

View File

@@ -24,6 +24,10 @@
* $Id$
***************************************************************************/
#ifndef CURL_DISABLE_HTTP
bool Curl_compareheader(char *headerline, /* line to check */
const char *header, /* header keyword _with_ colon */
const char *content); /* content string to find */
/* ftp can use this as well */
CURLcode Curl_ConnectHTTPProxyTunnel(struct connectdata *conn,
int tunnelsocket,

View File

@@ -29,5 +29,40 @@ extern char *Curl_if2ip(char *interface, char *buf, int buf_size);
#else
#define Curl_if2ip(a,b,c) NULL
#endif
#ifdef __INTERIX
/* Nedelcho Stanev's work-around for SFU 3.0 */
struct ifreq {
#define IFNAMSIZ 16
#define IFHWADDRLEN 6
union {
char ifrn_name[IFNAMSIZ]; /* if name, e.g. "en0" */
} ifr_ifrn;
union {
struct sockaddr ifru_addr;
struct sockaddr ifru_broadaddr;
struct sockaddr ifru_netmask;
struct sockaddr ifru_hwaddr;
short ifru_flags;
int ifru_metric;
int ifru_mtu;
} ifr_ifru;
};
/* This define was added by Daniel to avoid an extra #ifdef INTERIX in the
C code. */
#define ifr_dstaddr ifr_addr
#define ifr_name ifr_ifrn.ifrn_name /* interface name */
#define ifr_addr ifr_ifru.ifru_addr /* address */
#define ifr_broadaddr ifr_ifru.ifru_broadaddr /* broadcast address */
#define ifr_netmask ifr_ifru.ifru_netmask /* interface net mask */
#define ifr_flags ifr_ifru.ifru_flags /* flags */
#define ifr_hwaddr ifr_ifru.ifru_hwaddr /* MAC address */
#define ifr_metric ifr_ifru.ifru_metric /* metric */
#define ifr_mtu ifr_ifru.ifru_mtu /* mtu */
#define SIOCGIFADDR _IOW('s', 102, struct ifreq) /* Get if addr */
#endif /* interix */
#endif

View File

@@ -202,6 +202,7 @@ krb4_auth(void *app_data, struct connectdata *conn)
ssize_t nread;
int l = sizeof(conn->local_addr);
struct SessionHandle *data = conn->data;
CURLcode result;
if(getsockname(conn->firstsocket,
(struct sockaddr *)LOCAL_ADDR, &l) < 0)
@@ -246,13 +247,15 @@ krb4_auth(void *app_data, struct connectdata *conn)
return AUTH_CONTINUE;
}
if(Curl_ftpsendf(conn, "ADAT %s", p))
result = Curl_ftpsendf(conn, "ADAT %s", p);
free(p);
if(result)
return -2;
nread = Curl_GetFTPResponse(data->state.buffer, conn, NULL);
if(nread < 0)
if(Curl_GetFTPResponse(&nread, conn, NULL))
return -1;
free(p);
if(data->state.buffer[0] != '2'){
Curl_failf(data, "Server didn't accept auth data");
@@ -299,7 +302,7 @@ struct Curl_sec_client_mech Curl_krb4_client_mech = {
krb4_decode
};
void Curl_krb_kauth(struct connectdata *conn)
CURLcode Curl_krb_kauth(struct connectdata *conn)
{
des_cblock key;
des_key_schedule schedule;
@@ -309,18 +312,19 @@ void Curl_krb_kauth(struct connectdata *conn)
char passwd[100];
int tmp;
ssize_t nread;
int save;
CURLcode result;
save = Curl_set_command_prot(conn, prot_private);
if(Curl_ftpsendf(conn, "SITE KAUTH %s", conn->data->state.user))
return;
result = Curl_ftpsendf(conn, "SITE KAUTH %s", conn->data->state.user);
nread = Curl_GetFTPResponse(conn->data->state.buffer,
conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/;
if(result)
return result;
result = Curl_GetFTPResponse(&nread, conn, NULL);
if(result)
return result;
if(conn->data->state.buffer[0] != '3'){
Curl_set_command_prot(conn, save);
@@ -331,7 +335,7 @@ void Curl_krb_kauth(struct connectdata *conn)
if(!p) {
Curl_failf(conn->data, "Bad reply from server");
Curl_set_command_prot(conn, save);
return;
return CURLE_FTP_WEIRD_SERVER_REPLY;
}
p += 2;
@@ -339,7 +343,7 @@ void Curl_krb_kauth(struct connectdata *conn)
if(tmp < 0) {
Curl_failf(conn->data, "Failed to decode base64 in reply.\n");
Curl_set_command_prot(conn, save);
return;
return CURLE_FTP_WEIRD_SERVER_REPLY;
}
tkt.length = tmp;
tktcopy.length = tkt.length;
@@ -348,7 +352,7 @@ void Curl_krb_kauth(struct connectdata *conn)
if(!p) {
Curl_failf(conn->data, "Bad reply from server");
Curl_set_command_prot(conn, save);
return;
return CURLE_FTP_WEIRD_SERVER_REPLY;
}
name = p + 2;
for(; *p && *p != ' ' && *p != '\r' && *p != '\n'; p++);
@@ -376,19 +380,21 @@ void Curl_krb_kauth(struct connectdata *conn)
if(Curl_base64_encode(tktcopy.dat, tktcopy.length, &p) < 0) {
failf(conn->data, "Out of memory base64-encoding.");
Curl_set_command_prot(conn, save);
return;
return CURLE_OUT_OF_MEMORY;
}
memset (tktcopy.dat, 0, tktcopy.length);
if(Curl_ftpsendf(conn, "SITE KAUTH %s %s", name, p))
return;
nread = Curl_GetFTPResponse(conn->data->state.buffer,
conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/;
result = Curl_ftpsendf(conn, "SITE KAUTH %s %s", name, p);
free(p);
if(result)
return result;
result = Curl_GetFTPResponse(&nread, conn, NULL);
if(result)
return result;
Curl_set_command_prot(conn, save);
return CURLE_OK;
}
#endif /* KRB4 */

View File

@@ -22,6 +22,6 @@
*
* $Id$
***************************************************************************/
void Curl_krb_kauth(struct connectdata *conn);
CURLcode Curl_krb_kauth(struct connectdata *conn);
#endif

View File

@@ -41,3 +41,4 @@ EXPORTS
curl_multi_cleanup @ 32;
curl_multi_info_read @ 33;
curl_free @ 34;
curl_version_info @ 35;

View File

@@ -57,6 +57,7 @@ FILE *curl_fopen(const char *file, const char *mode, int line,
int curl_fclose(FILE *file, int line, const char *source);
/* Set this symbol on the command-line, recompile all lib-sources */
#undef strdup
#define strdup(ptr) curl_dostrdup(ptr, __LINE__, __FILE__)
#define malloc(size) curl_domalloc(size, __LINE__, __FILE__)
#define realloc(ptr,size) curl_dorealloc(ptr, size, __LINE__, __FILE__)
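The memdebug.h change (#undef strdup before redefining it) guards against platforms where strdup is already a macro; the header's whole trick is to reroute allocations through wrappers that log the calling file and line. A minimal standalone sketch of that macro technique using a toy logger of my own, not curl's memdebug:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* logging wrappers that remember where the allocation happened */
static void *log_malloc(size_t size, int line, const char *file)
{
  void *p = malloc(size);
  fprintf(stderr, "MEM %s:%d malloc(%zu) = %p\n", file, line, size, p);
  return p;
}

static char *log_strdup(const char *s, int line, const char *file)
{
  size_t len = strlen(s) + 1;
  char *p = malloc(len);
  if(p)
    memcpy(p, s, len);
  fprintf(stderr, "MEM %s:%d strdup() = %p\n", file, line, (void *)p);
  return p;
}

/* from here on, plain calls are transparently redirected; the #undef
   guards against platforms where strdup is itself already a macro */
#undef strdup
#define strdup(ptr)  log_strdup(ptr, __LINE__, __FILE__)
#define malloc(size) log_malloc(size, __LINE__, __FILE__)

int main(void)
{
  char *copy = strdup("hello");
  char *buf = malloc(16);
  free(copy);
  free(buf);
  return 0;
}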

View File

@@ -313,9 +313,8 @@ CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles)
easy->easy_handle->hostcache = Curl_global_host_cache_get();
}
else {
if (multi->hostcache == NULL) {
multi->hostcache = Curl_hash_alloc(7, Curl_freeaddrinfo);
}
if (multi->hostcache == NULL)
multi->hostcache = Curl_hash_alloc(7, Curl_freednsinfo);
easy->easy_handle->hostcache = multi->hostcache;
}
@@ -360,7 +359,7 @@ CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles)
if(CURLE_OK == easy->result) {
/* after do, go PERFORM... or DO_MORE */
if(easy->easy_conn->do_more) {
if(easy->easy_conn->bits.do_more) {
/* we're supposed to do more, but we need to sit down, relax
and wait a little while first */
easy->state = CURLM_STATE_DO_MORE;
@@ -403,15 +402,43 @@ CURLMcode curl_multi_perform(CURLM *multi_handle, int *running_handles)
case CURLM_STATE_PERFORM:
/* read/write data if it is ready to do so */
easy->result = Curl_readwrite(easy->easy_conn, &done);
/* hm, when we follow redirects, we may need to go back to the CONNECT
state */
if(easy->result) {
/* The transfer phase returned error, we mark the connection to get
* closed to prevent being re-used. This is becasue we can't
* possibly know if the connection is in a good shape or not now. */
easy->easy_conn->bits.close = TRUE;
if(-1 !=easy->easy_conn->secondarysocket) {
/* if we failed anywhere, we must clean up the secondary socket if
it was used */
sclose(easy->easy_conn->secondarysocket);
easy->easy_conn->secondarysocket=-1;
}
Curl_posttransfer(easy->easy_handle);
Curl_done(easy->easy_conn);
}
/* after the transfer is done, go DONE */
if(TRUE == done) {
else if(TRUE == done) {
/* call this even if the readwrite function returned error */
easy->result = Curl_posttransfer(easy->easy_handle);
Curl_posttransfer(easy->easy_handle);
/* When we follow redirects, must to go back to the CONNECT state */
if(easy->easy_conn->newurl) {
easy->result = Curl_follow(easy->easy_handle,
strdup(easy->easy_conn->newurl));
if(CURLE_OK == easy->result) {
easy->state = CURLM_STATE_CONNECT;
result = CURLM_CALL_MULTI_PERFORM;
}
}
else {
easy->state = CURLM_STATE_DONE;
result = CURLM_CALL_MULTI_PERFORM;
}
}
break;
case CURLM_STATE_DONE:
/* post-transfer command */
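The new error path in curl_multi_perform() (mark the connection for closing, shut the secondary socket, run the post-transfer cleanup) is internal; what applications drive is still the usual multi loop. A hedged sketch of that loop against the public multi API of this period, with a placeholder URL, a single easy handle and no error checking:

#include <curl/curl.h>
#include <sys/select.h>

int main(void)
{
  CURL *easy = curl_easy_init();
  CURLM *multi = curl_multi_init();
  int running = 1;

  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_multi_add_handle(multi, easy);

  while(running) {
    fd_set r, w, e;
    int maxfd = -1;
    struct timeval timeout = { 1, 0 };

    /* keep calling perform while it asks for another round */
    while(CURLM_CALL_MULTI_PERFORM == curl_multi_perform(multi, &running))
      ;
    if(!running)
      break;

    /* wait until libcurl's sockets have something to do */
    FD_ZERO(&r); FD_ZERO(&w); FD_ZERO(&e);
    curl_multi_fdset(multi, &r, &w, &e, &maxfd);
    select(maxfd + 1, &r, &w, &e, &timeout);
  }

  curl_multi_remove_handle(multi, easy);
  curl_easy_cleanup(easy);
  curl_multi_cleanup(multi);
  return 0;
}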

View File

@@ -278,32 +278,6 @@ Curl_sec_write(struct connectdata *conn, int fd, char *buffer, int length)
return tx;
}
int
Curl_sec_vfprintf2(struct connectdata *conn, FILE *f, const char *fmt, va_list ap)
{
char *buf;
int ret;
if(conn->data_prot == prot_clear)
return vfprintf(f, fmt, ap);
else {
buf = aprintf(fmt, ap);
ret = buffer_write(&conn->out_buffer, buf, strlen(buf));
free(buf);
return ret;
}
}
int
Curl_sec_fprintf2(struct connectdata *conn, FILE *f, const char *fmt, ...)
{
int ret;
va_list ap;
va_start(ap, fmt);
ret = Curl_sec_vfprintf2(conn, f, fmt, ap);
va_end(ap);
return ret;
}
int
Curl_sec_putc(struct connectdata *conn, int c, FILE *F)
{
@@ -313,7 +287,8 @@ Curl_sec_putc(struct connectdata *conn, int c, FILE *F)
buffer_write(&conn->out_buffer, &ch, 1);
if(c == '\n' || conn->out_buffer.index >= 1024 /* XXX */) {
Curl_sec_write(conn, fileno(F), conn->out_buffer.data, conn->out_buffer.index);
Curl_sec_write(conn, fileno(F), conn->out_buffer.data,
conn->out_buffer.index);
conn->out_buffer.index = 0;
}
return c;
@@ -346,53 +321,6 @@ Curl_sec_read_msg(struct connectdata *conn, char *s, int level)
return code;
}
/* modified to return how many bytes written, or -1 on error ***/
int
Curl_sec_vfprintf(struct connectdata *conn, FILE *f, const char *fmt, va_list ap)
{
int ret = 0;
char *buf;
void *enc;
int len;
if(!conn->sec_complete)
return vfprintf(f, fmt, ap);
buf = aprintf(fmt, ap);
len = (conn->mech->encode)(conn->app_data, buf, strlen(buf),
conn->command_prot, &enc,
conn);
free(buf);
if(len < 0) {
failf(conn->data, "Failed to encode command.");
return -1;
}
if(Curl_base64_encode(enc, len, &buf) < 0){
failf(conn->data, "Out of memory base64-encoding.");
return -1;
}
if(conn->command_prot == prot_safe)
ret = fprintf(f, "MIC %s", buf);
else if(conn->command_prot == prot_private)
ret = fprintf(f, "ENC %s", buf);
else if(conn->command_prot == prot_confidential)
ret = fprintf(f, "CONF %s", buf);
free(buf);
return ret;
}
int
Curl_sec_fprintf(struct connectdata *conn, FILE *f, const char *fmt, ...)
{
va_list ap;
int ret;
va_start(ap, fmt);
ret = Curl_sec_vfprintf(conn, f, fmt, ap);
va_end(ap);
return ret;
}
enum protection_level
Curl_set_command_prot(struct connectdata *conn, enum protection_level level)
{
@@ -414,14 +342,14 @@ sec_prot_internal(struct connectdata *conn, int level)
}
if(level){
int code;
if(Curl_ftpsendf(conn, "PBSZ %u", s))
return -1;
nread = Curl_GetFTPResponse(conn->data->state.buffer, conn, NULL);
if(nread < 0)
if(Curl_GetFTPResponse(&nread, conn, &code))
return -1;
if(conn->data->state.buffer[0] != '2'){
if(code/100 != '2'){
failf(conn->data, "Failed to set protection buffer size.");
return -1;
}
@@ -437,8 +365,7 @@ sec_prot_internal(struct connectdata *conn, int level)
if(Curl_ftpsendf(conn, "PROT %c", level["CSEP"]))
return -1;
nread = Curl_GetFTPResponse(conn->data->state.buffer, conn, NULL);
if(nread < 0)
if(Curl_GetFTPResponse(&nread, conn, NULL))
return -1;
if(conn->data->state.buffer[0] != '2'){
@@ -496,8 +423,7 @@ Curl_sec_login(struct connectdata *conn)
if(Curl_ftpsendf(conn, "AUTH %s", (*m)->name))
return -1;
nread = Curl_GetFTPResponse(conn->data->state.buffer, conn, &ftpcode);
if(nread < 0)
if(Curl_GetFTPResponse(&nread, conn, &ftpcode))
return -1;
if(conn->data->state.buffer[0] != '3'){

View File

@@ -153,6 +153,20 @@ void Curl_failf(struct SessionHandle *data, const char *fmt, ...)
if(data->set.errorbuffer && !data->state.errorbuf) {
vsnprintf(data->set.errorbuffer, CURL_ERROR_SIZE, fmt, ap);
data->state.errorbuf = TRUE; /* wrote error string */
if(data->set.verbose) {
int len = strlen(data->set.errorbuffer);
bool doneit=FALSE;
if(len < CURL_ERROR_SIZE) {
doneit = TRUE;
data->set.errorbuffer[len] = '\n';
data->set.errorbuffer[++len] = '\0';
}
Curl_debug(data, CURLINFO_TEXT, data->set.errorbuffer, len);
if(doneit)
/* cut off the newline again */
data->set.errorbuffer[--len]=0;
}
}
va_end(ap);
}
@@ -231,6 +245,9 @@ CURLcode Curl_write(struct connectdata *conn, int sockfd,
/* this is basicly the EWOULDBLOCK equivalent */
*written = 0;
return CURLE_OK;
case SSL_ERROR_SYSCALL:
failf(conn->data, "SSL_write() returned SYSCALL, errno = %d\n", errno);
return CURLE_SEND_ERROR;
}
/* a true error */
failf(conn->data, "SSL_write() return error %d\n", err);
@@ -324,36 +341,29 @@ int Curl_read(struct connectdata *conn,
ssize_t *n)
{
ssize_t nread;
*n=0; /* reset amount to zero */
#ifdef USE_SSLEAY
if (conn->ssl.use) {
bool loop=TRUE;
int err;
do {
nread = SSL_read(conn->ssl.handle, buf, buffersize);
if(nread >= 0)
/* successful read */
break;
err = SSL_get_error(conn->ssl.handle, nread);
if(nread < 0) {
/* failed SSL_read */
int err = SSL_get_error(conn->ssl.handle, nread);
switch(err) {
case SSL_ERROR_NONE: /* this is not an error */
case SSL_ERROR_ZERO_RETURN: /* no more data */
loop=0; /* get out of loop */
break;
case SSL_ERROR_WANT_READ:
case SSL_ERROR_WANT_WRITE:
/* if there's data pending, then we re-invoke SSL_read() */
break;
/* there's data pending, re-invoke SSL_read() */
return -1; /* basically EWOULDBLOCK */
default:
failf(conn->data, "SSL read error: %d", err);
return CURLE_RECV_ERROR;
}
} while(loop);
if(loop && SSL_pending(conn->ssl.handle))
return -1; /* basically EWOULDBLOCK */
}
}
else {
#endif
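
As an aside on the rewritten SSL branch of Curl_read() above: the single SSL_read() call now maps SSL_ERROR_WANT_READ/WANT_WRITE to a -1 "would block" result instead of looping. A minimal standalone sketch of that pattern follows (plain OpenSSL, not libcurl code; the -1/-2 return values are illustrative):

#include <openssl/ssl.h>

/* returns >=0 bytes read (0 on a clean close), -1 for "would block",
   -2 for a hard error */
static int ssl_read_once(SSL *ssl, void *buf, int len)
{
  int nread = SSL_read(ssl, buf, len);
  if(nread >= 0)
    return nread;

  switch(SSL_get_error(ssl, nread)) {
  case SSL_ERROR_WANT_READ:
  case SSL_ERROR_WANT_WRITE:
    return -1;        /* basically EWOULDBLOCK: come back and read again */
  default:
    return -2;        /* hard failure, comparable to CURLE_RECV_ERROR */
  }
}
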

View File

@@ -30,13 +30,6 @@ void Curl_failf(struct SessionHandle *, const char *fmt, ...);
#define infof Curl_infof
#define failf Curl_failf
struct send_buffer {
char *buffer;
size_t size_max;
size_t size_used;
};
typedef struct send_buffer send_buffer;
#define CLIENTWRITE_BODY 1
#define CLIENTWRITE_HEADER 2
#define CLIENTWRITE_BOTH (CLIENTWRITE_BODY|CLIENTWRITE_HEADER)

View File

@@ -35,9 +35,8 @@
#define CURL_DISABLE_GOPHER
#endif
#if !defined(WIN32) && defined(_WIN32)
/* This _might_ be a good Borland fix. Please report whether this works or
not! */
#if !defined(WIN32) && defined(__WIN32__)
/* This should be a good Borland fix. Alexander J. Oss told us! */
#define WIN32
#endif

View File

@@ -24,142 +24,123 @@
#include "setup.h"
#include <stdlib.h>
#include <curl/curl.h>
#include "share.h"
#include "urldata.h"
#include "share.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
#define CURL_SHARE_SET_LOCKED(__share, __type) ((__share)->locked += (__type))
#define CURL_SHARE_SET_UNLOCKED(__share, __type) ((__share)->locked -= (__type))
#define CURL_SHARE_SET_USED(__share, __type) ((__share)->specifier += (__type))
#define CURL_SHARE_SET_UNUSED(__share, __type) ((__share)->specifier -= (__type))
#define CURL_SHARE_IS_USED(__share, __type) ((__share)->specifier & (__type))
#define CURL_SHARE_IS_LOCKED(__share, __type) ((__share)->locked & (__type))
#define CURL_SHARE_IS_DIRTY(__share) ((__share)->dirty)
#define CURL_SHARE_GET(__handle) (((struct SessionHandle *) (__handle))->share)
curl_share *
CURLSH *
curl_share_init(void)
{
curl_share *share = (curl_share *) malloc (sizeof (curl_share));
if (share) {
memset (share, 0, sizeof (curl_share));
}
struct Curl_share *share =
(struct Curl_share *)malloc(sizeof(struct Curl_share));
if (share)
memset (share, 0, sizeof(struct Curl_share));
return share;
}
CURLcode
curl_share_setopt (curl_share *share, curl_lock_type option, int enable)
CURLSHcode
curl_share_setopt(CURLSH *sh, CURLSHoption option, ...)
{
if (CURL_SHARE_IS_DIRTY(share)) {
return CURLE_SHARE_IN_USE;
struct Curl_share *share = (struct Curl_share *)sh;
va_list param;
int type;
curl_lock_function lockfunc;
curl_unlock_function unlockfunc;
void *ptr;
if (share->dirty)
/* don't allow setting options while one or more handles are already
using this share */
return CURLSHE_IN_USE;
va_start(param, option);
switch(option) {
case CURLSHOPT_SHARE:
/* this is a type this share will share */
type = va_arg(param, int);
share->specifier |= (1<<type);
break;
case CURLSHOPT_UNSHARE:
/* this is a type this share will no longer share */
type = va_arg(param, int);
share->specifier &= ~(1<<type);
break;
case CURLSHOPT_LOCKFUNC:
lockfunc = va_arg(param, curl_lock_function);
share->lockfunc = lockfunc;
break;
case CURLSHOPT_UNLOCKFUNC:
unlockfunc = va_arg(param, curl_unlock_function);
share->unlockfunc = unlockfunc;
break;
case CURLSHOPT_USERDATA:
ptr = va_arg(param, void *);
share->clientdata = ptr;
break;
default:
return CURLSHE_BAD_OPTION;
}
if (enable) {
CURL_SHARE_SET_USED (share, option);
}
else {
CURL_SHARE_SET_UNUSED (share, option);
return CURLSHE_OK;
}
return CURLE_OK;
}
CURLcode
curl_share_set_lock_function (curl_share *share, curl_lock_function lock)
CURLSHcode curl_share_cleanup(CURLSH *sh)
{
if (CURL_SHARE_IS_DIRTY(share)) {
return CURLE_SHARE_IN_USE;
}
share->lockfunc = lock;
return CURLE_OK;
}
CURLcode
curl_share_set_unlock_function (curl_share *share, curl_unlock_function unlock)
{
if (CURL_SHARE_IS_DIRTY(share)) {
return CURLE_SHARE_IN_USE;
}
share->unlockfunc = unlock;
return CURLE_OK;
}
CURLcode
curl_share_set_lock_data (curl_share *share, void *data)
{
if (CURL_SHARE_IS_DIRTY(share)) {
return CURLE_SHARE_IN_USE;
}
share->clientdata = data;
return CURLE_OK;
}
Curl_share_error
Curl_share_acquire_lock (CURL *handle, curl_lock_type type)
{
curl_share *share = CURL_SHARE_GET (handle);
if (share == NULL) {
return SHARE_ERROR_INVALID;
}
if (! (share->specifier & type)) {
return SHARE_ERROR_NOT_REGISTERED;
}
if (CURL_SHARE_IS_LOCKED (share, type)) {
return SHARE_ERROR_OK;
}
share->lockfunc (handle, type, share->clientdata);
CURL_SHARE_SET_LOCKED (share, type);
return SHARE_ERROR_OK;
}
Curl_share_error
Curl_share_release_lock (CURL *handle, curl_lock_type type)
{
curl_share *share = CURL_SHARE_GET(handle);
if (share == NULL) {
return SHARE_ERROR_INVALID;
}
if (! (share->specifier & type)) {
return SHARE_ERROR_NOT_REGISTERED;
}
if (!CURL_SHARE_IS_LOCKED (share, type)) {
return SHARE_ERROR_OK;
}
share->unlockfunc (handle, type, share->clientdata);
CURL_SHARE_SET_UNLOCKED (share, type);
return SHARE_ERROR_OK;
}
CURLcode curl_share_destroy (curl_share *share)
{
if (CURL_SHARE_IS_DIRTY(share)) {
return CURLE_SHARE_IN_USE;
}
struct Curl_share *share = (struct Curl_share *)sh;
if (share->dirty)
return CURLSHE_IN_USE;
free (share);
return CURLE_OK;
return CURLSHE_OK;
}
CURLSHcode
Curl_share_acquire_lock(struct SessionHandle *data, curl_lock_data type)
{
struct Curl_share *share = data->share;
if (share == NULL)
return CURLSHE_INVALID;
if(share->specifier & (1<<type)) {
share->lockfunc (data, type, CURL_LOCK_ACCESS_SINGLE, share->clientdata);
share->locked |= (1<<type);
}
/* else if we don't share this, pretend successful lock */
return CURLSHE_OK;
}
CURLSHcode
Curl_share_release_lock(struct SessionHandle *data, curl_lock_data type)
{
struct Curl_share *share = data->share;
if (share == NULL)
return CURLSHE_INVALID;
if(share->specifier & (1<<type)) {
share->unlockfunc (data, type, share->clientdata);
share->locked &= ~(1<<type);
}
return CURLSHE_OK;
}
/*
* local variables:
* eval: (load-file "../curl-mode.el")

View File

@@ -27,15 +27,19 @@
#include "setup.h"
#include <curl/curl.h>
typedef enum {
SHARE_ERROR_OK = 0,
SHARE_ERROR_INVALID,
SHARE_ERROR_NOT_REGISTERED,
SHARE_ERROR_LAST
} Curl_share_error;
/* this struct is libcurl-private, don't export details */
struct Curl_share {
unsigned int specifier;
unsigned int locked;
unsigned int dirty;
Curl_share_error Curl_share_aquire_lock (CURL *, curl_lock_type);
Curl_share_error Curl_share_release_lock (CURL *, curl_lock_type);
curl_lock_function lockfunc;
curl_unlock_function unlockfunc;
void *clientdata;
};
CURLSHcode Curl_share_aquire_lock (struct SessionHandle *, curl_lock_data);
CURLSHcode Curl_share_release_lock (struct SessionHandle *, curl_lock_data);
#endif /* __CURL_SHARE_H */
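
A rough application-side sketch of how the reworked share interface above is meant to be driven (curl_share_init/curl_share_setopt/curl_share_cleanup, plus CURLOPT_SHARE as set in url.c further down). The CURL_LOCK_DATA_DNS name, the curl_lock_access type and the exact callback signatures are assumptions inferred from this diff, not verified against this release's curl.h:

#include <curl/curl.h>

static void my_lock(CURL *handle, curl_lock_data data,
                    curl_lock_access access, void *userptr)
{
  /* a threaded application would take a mutex here */
  (void)handle; (void)data; (void)access; (void)userptr;
}

static void my_unlock(CURL *handle, curl_lock_data data, void *userptr)
{
  /* ... and release it again here */
  (void)handle; (void)data; (void)userptr;
}

int main(void)
{
  CURLSH *share = curl_share_init();
  CURL *easy = curl_easy_init();

  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_DNS); /* assumed name */
  curl_share_setopt(share, CURLSHOPT_LOCKFUNC, my_lock);
  curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, my_unlock);
  curl_share_setopt(share, CURLSHOPT_USERDATA, NULL);

  curl_easy_setopt(easy, CURLOPT_SHARE, share);
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
  curl_easy_perform(easy);

  curl_easy_cleanup(easy);
  curl_share_cleanup(share); /* returns CURLSHE_IN_USE while handles still
                                reference the share */
  return 0;
}
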

View File

@@ -275,7 +275,8 @@ int cert_stuff(struct connectdata *conn,
if (SSL_CTX_use_PrivateKey_file(conn->ssl.ctx,
key_file,
file_type) != 1) {
failf(data, "unable to set private key file\n");
failf(data, "unable to set private key file: '%s' type %s\n",
key_file, key_type?key_type:"PEM");
return 0;
}
break;
@@ -325,9 +326,14 @@ int cert_stuff(struct connectdata *conn,
ssl=SSL_new(conn->ssl.ctx);
x509=SSL_get_certificate(ssl);
if (x509 != NULL)
EVP_PKEY_copy_parameters(X509_get_pubkey(x509),
SSL_get_privatekey(ssl));
/* This version was provided by Evan Jordan and is supposed to not
leak memory the way the previous version did: */
if (x509 != NULL) {
EVP_PKEY *pktmp = X509_get_pubkey(x509);
EVP_PKEY_copy_parameters(pktmp,SSL_get_privatekey(ssl));
EVP_PKEY_free(pktmp);
}
SSL_free(ssl);
/* If we are using DSA, we can copy the parameters from
@@ -666,6 +672,44 @@ static int Curl_ASN1_UTCTIME_output(struct connectdata *conn,
#endif
/* ====================================================== */
static int
cert_hostcheck(const char *certname, const char *hostname)
{
char *tmp;
const char *certdomain;
if(!certname ||
strlen(certname)<3 ||
!hostname ||
!strlen(hostname)) /* sanity check */
return 0;
if(strequal(certname, hostname)) /* trivial case */
return 1;
certdomain = certname + 1;
if((certname[0] != '*') || (certdomain[0] != '.'))
return 0; /* not a wildcard certificate, check failed */
if(!strchr(certdomain+1, '.'))
return 0; /* the certificate must have at least another dot in its name */
/* find 'certdomain' within 'hostname' */
tmp = strstr(hostname, certdomain);
if(tmp) {
/* ok the certname's domain matches the hostname, let's check that it's a
tail-match */
if(strequal(tmp, certdomain))
/* looks like a match. Just check we haven't swallowed a '.' */
return tmp == strchr(hostname, '.');
else
return 0;
}
return 0;
}
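
For illustration, a standalone re-creation of the wildcard rules cert_hostcheck() encodes, with strcasecmp (POSIX) standing in for libcurl's strequal; a sketch, not libcurl code:

#include <assert.h>
#include <string.h>
#include <strings.h>

static int hostcheck(const char *cert, const char *host)
{
  const char *dom;
  const char *tail;

  if(!cert || strlen(cert) < 3 || !host || !*host)
    return 0;                          /* sanity check */
  if(!strcasecmp(cert, host))
    return 1;                          /* exact (case-insensitive) match */
  if(cert[0] != '*' || cert[1] != '.')
    return 0;                          /* only "*.domain" wildcards allowed */
  dom = cert + 1;                      /* ".domain.tld" */
  if(!strchr(dom + 1, '.'))
    return 0;                          /* "*.com" style certs are refused */
  tail = strstr(host, dom);
  /* tail-match, and the wildcard may only cover the left-most label */
  return tail && !strcasecmp(tail, dom) && tail == strchr(host, '.');
}

int main(void)
{
  assert(hostcheck("www.example.com", "WWW.example.com") == 1);
  assert(hostcheck("*.example.com", "www.example.com") == 1);
  assert(hostcheck("*.example.com", "a.b.example.com") == 0);
  assert(hostcheck("*.com", "example.com") == 0);
  return 0;
}
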
/* ====================================================== */
CURLcode
Curl_SSLConnect(struct connectdata *conn)
@@ -904,7 +948,7 @@ Curl_SSLConnect(struct connectdata *conn)
return CURLE_SSL_PEER_CERTIFICATE;
}
if (!strequal(peer_CN, conn->hostname)) {
if (!cert_hostcheck(peer_CN, conn->hostname)) {
if (data->set.ssl.verifyhost > 1) {
failf(data, "SSL: certificate subject name '%s' does not match "
"target host name '%s'",

View File

@@ -32,6 +32,10 @@ int curl_strnequal(const char *first, const char *second, size_t max);
#define strequal(a,b) curl_strequal(a,b)
#define strnequal(a,b,c) curl_strnequal(a,b,c)
/* checkprefix() is a shorter version of the above, used when the first
argument is zero-byte terminated */
#define checkprefix(a,b) strnequal(a,b,strlen(a))
#ifndef HAVE_STRLCAT
#define strlcat(x,y,z) Curl_strlcat(x,y,z)
size_t Curl_strlcat(char *dst, const char *src, size_t siz);

View File

@@ -1050,6 +1050,7 @@ CURLcode Curl_telnet(struct connectdata *conn)
char *buf = data->state.buffer;
ssize_t nread;
struct TELNET *tn;
struct timeval now; /* current time */
code = init_telnet(conn);
if(code)
@@ -1149,9 +1150,13 @@ CURLcode Curl_telnet(struct connectdata *conn)
keepfd = readfd;
while (keepon) {
readfd = keepfd; /* set this every lap in the loop */
struct timeval interval;
switch (select (sockfd + 1, &readfd, NULL, NULL, NULL)) {
readfd = keepfd; /* set this every lap in the loop */
interval.tv_sec = 1;
interval.tv_usec = 0;
switch (select (sockfd + 1, &readfd, NULL, NULL, &interval)) {
case -1: /* error, stop reading */
keepon = FALSE;
continue;
@@ -1199,10 +1204,20 @@ CURLcode Curl_telnet(struct connectdata *conn)
}
}
}
if(data->set.timeout) {
now = Curl_tvnow();
if(Curl_tvdiff(now, conn->created)/1000 >= data->set.timeout) {
failf(data, "Time-out");
code = CURLE_OPERATION_TIMEOUTED;
keepon = FALSE;
}
}
}
#endif
/* mark this as "no further transfer wanted" */
return Curl_Transfer(conn, -1, -1, FALSE, NULL, -1, NULL);
Curl_Transfer(conn, -1, -1, FALSE, NULL, -1, NULL);
return code;
}
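
The telnet change above replaces the indefinitely blocking select() with a one-second interval so the overall timeout can be checked between wakeups. A generic, self-contained sketch of that pattern (socket and timeout values are placeholders, not libcurl code):

#include <stddef.h>
#include <sys/select.h>
#include <sys/time.h>

/* returns 1 when readable, 0 on overall time-out, -1 on select() error */
static int wait_with_deadline(int sockfd, long timeout_secs,
                              struct timeval started)
{
  for(;;) {
    fd_set readfd;
    struct timeval interval = { 1, 0 };   /* re-armed every lap */
    struct timeval now;

    FD_ZERO(&readfd);
    FD_SET(sockfd, &readfd);

    switch(select(sockfd + 1, &readfd, NULL, NULL, &interval)) {
    case -1:
      return -1;                          /* error, stop reading */
    case 0:
      break;                              /* just the interval expiring */
    default:
      return 1;                           /* socket is readable */
    }

    gettimeofday(&now, NULL);
    if(timeout_secs && (now.tv_sec - started.tv_sec) >= timeout_secs)
      return 0;                           /* overall time-out reached */
  }
}
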
/*

View File

@@ -114,66 +114,77 @@ enum {
KEEP_WRITE
};
/*
* compareheader()
*
* Returns TRUE if 'headerline' contains the 'header' with given 'content'.
* Pass headers WITH the colon.
*/
static bool
compareheader(char *headerline, /* line to check */
const char *header, /* header keyword _with_ colon */
const char *content) /* content string to find */
{
/* RFC2616, section 4.2 says: "Each header field consists of a name followed
* by a colon (":") and the field value. Field names are case-insensitive.
* The field value MAY be preceded by any amount of LWS, though a single SP
* is preferred." */
size_t hlen = strlen(header);
size_t clen;
size_t len;
char *start;
char *end;
if(!strnequal(headerline, header, hlen))
return FALSE; /* doesn't start with header */
/* pass the header */
start = &headerline[hlen];
/* pass all white spaces */
while(*start && isspace((int)*start))
start++;
/* find the end of the header line */
end = strchr(start, '\r'); /* lines end with CRLF */
if(!end) {
/* in case there's a non-standard compliant line here */
end = strchr(start, '\n');
if(!end)
/* hm, there's no line ending here, return false and bail out! */
return FALSE;
}
len = end-start; /* length of the content part of the input line */
clen = strlen(content); /* length of the word to find */
/* find the content string in the rest of the line */
for(;len>=clen;len--, start++) {
if(strnequal(start, content, clen))
return TRUE; /* match! */
}
return FALSE; /* no match */
}
/* We keep this static and global since this is read-only and NEVER
changed. It should just remain a blanked-out timeout value. */
static struct timeval notimeout={0,0};
/*
* This function will call the read callback to fill our buffer with data
* to upload.
*/
static int fillbuffer(struct connectdata *conn,
int bytes)
{
int buffersize = bytes;
int nread;
if(conn->bits.upload_chunky) {
/* if chunked Transfer-Encoding */
buffersize -= (8 + 2 + 2); /* 32bit hex + CRLF + CRLF */
conn->upload_fromhere += 10; /* 32bit hex + CRLF */
}
nread = conn->fread(conn->upload_fromhere, 1,
buffersize, conn->fread_in);
if(!conn->bits.forbidchunk && conn->bits.upload_chunky) {
/* if chunked Transfer-Encoding */
char hexbuffer[11];
int hexlen = snprintf(hexbuffer, sizeof(hexbuffer),
"%x\r\n", nread);
/* move buffer pointer */
conn->upload_fromhere -= hexlen;
nread += hexlen;
/* copy the prefix to the buffer */
memcpy(conn->upload_fromhere, hexbuffer, hexlen);
if(nread>hexlen) {
/* append CRLF to the data */
memcpy(conn->upload_fromhere +
nread, "\r\n", 2);
nread+=2;
}
else {
/* mark this as done once this chunk is transferred */
conn->keep.upload_done = TRUE;
}
}
return nread;
}
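
A standalone sketch of the chunk framing fillbuffer() performs: room is reserved in front of the payload for the hex length plus CRLF, a CRLF is appended after the data, and a zero-length read yields the terminating "0" chunk. Buffer layout and sizes here are illustrative, not libcurl code:

#include <stdio.h>
#include <string.h>

/* frames 'nread' payload bytes already sitting at data+10 (room reserved
   for the prefix); returns the new start pointer and total length */
static char *frame_chunk(char *data, int nread, int *framedlen)
{
  char hexbuffer[11];
  int hexlen = snprintf(hexbuffer, sizeof(hexbuffer), "%x\r\n", nread);
  char *start = data + 10 - hexlen;     /* move back over the prefix */

  memcpy(start, hexbuffer, hexlen);
  if(nread) {
    memcpy(start + hexlen + nread, "\r\n", 2);
    *framedlen = hexlen + nread + 2;
  }
  else
    *framedlen = hexlen;                /* "0\r\n": last chunk, upload done */
  return start;
}

int main(void)
{
  char buf[64];
  char *chunk;
  int len;

  memcpy(buf + 10, "hello", 5);         /* pretend fread() returned 5 bytes */
  chunk = frame_chunk(buf, 5, &len);
  fwrite(chunk, 1, len, stdout);        /* prints "5\r\nhello\r\n" */
  return 0;
}
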
/*
* checkhttpprefix()
*
* Returns TRUE if member of the list matches prefix of string
*/
static bool
checkhttpprefix(struct SessionHandle *data,
const char *s)
{
struct curl_slist *head = data->set.http200aliases;
while (head) {
if (checkprefix(head->data, s))
return TRUE;
head = head->next;
}
if(checkprefix("HTTP/", s))
return TRUE;
return FALSE;
}
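
checkhttpprefix() above consults the CURLOPT_HTTP200ALIASES list before falling back to the "HTTP/" prefix, and Curl_readwrite later treats a matching alias line as a 200 response. An application-side sketch; "ICY 200 OK" is one example of the kind of non-HTTP status line this is aimed at:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  struct curl_slist *aliases = NULL;

  aliases = curl_slist_append(aliases, "ICY 200 OK");

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/stream");
  curl_easy_setopt(curl, CURLOPT_HTTP200ALIASES, aliases);
  curl_easy_perform(curl);      /* a reply starting with "ICY" is now parsed
                                   as a 200 instead of a bad header */

  curl_easy_cleanup(curl);
  curl_slist_free_all(aliases);
  return 0;
}
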
CURLcode Curl_readwrite(struct connectdata *conn,
bool *done)
{
@@ -220,6 +231,12 @@ CURLcode Curl_readwrite(struct connectdata *conn,
if((k->keepon & KEEP_READ) &&
(FD_ISSET(conn->sockfd, readfdp))) {
bool readdone = FALSE;
/* This is where we loop until we have read everything there is to
read or we get an EWOULDBLOCK */
do {
/* read! */
result = Curl_read(conn, conn->sockfd, k->buf,
data->set.buffer_size?
@@ -245,6 +262,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
else if (0 >= nread) {
k->keepon &= ~KEEP_READ;
FD_ZERO(&k->rkeepfd);
readdone = TRUE;
break;
}
@@ -291,10 +309,10 @@ CURLcode Curl_readwrite(struct connectdata *conn,
k->hbuflen += nread;
if (!k->headerline && (k->hbuflen>5)) {
/* make a first check that this looks like a HTTP header */
if(!strnequal(data->state.headerbuff, "HTTP/", 5)) {
if(!checkhttpprefix(data, data->state.headerbuff)) {
/* this is not the beginning of a HTTP first header line */
k->header = FALSE;
k->badheader = TRUE;
k->badheader = HEADER_ALLBAD;
break;
}
}
@@ -302,6 +320,9 @@ CURLcode Curl_readwrite(struct connectdata *conn,
break; /* read more and try again */
}
/* decrease the size of the remaining buffer */
nread -= (k->end_ptr - k->str)+1;
k->str = k->end_ptr + 1; /* move past new line */
/*
@@ -339,6 +360,17 @@ CURLcode Curl_readwrite(struct connectdata *conn,
* We now have a FULL header line that p points to
*****/
if(!k->headerline) {
/* the first read header */
if((k->hbuflen>5) &&
!checkhttpprefix(data, data->state.headerbuff)) {
/* this is not the beginning of a HTTP first header line */
k->header = FALSE;
k->badheader = HEADER_PARTHEADER;
break;
}
}
if (('\n' == *k->p) || ('\r' == *k->p)) {
int headerlen;
/* Zero-length header line means end of headers! */
@@ -458,6 +490,18 @@ CURLcode Curl_readwrite(struct connectdata *conn,
*/
nc=sscanf (k->p, " HTTP %3d", &k->httpcode);
k->httpversion = 10;
/* If user has set option HTTP200ALIASES,
compare header line against list of aliases
*/
if (!nc) {
if (checkhttpprefix(data, k->p)) {
nc = 1;
k->httpcode = 200;
k->httpversion =
(data->set.httpversion==CURL_HTTP_VERSION_1_0)? 10 : 11;
}
}
}
if (nc) {
@@ -471,7 +515,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
here is the check for that: */
/* serious error, go home! */
failf (data, "The requested file was not found");
return CURLE_HTTP_NOT_FOUND;
return CURLE_HTTP_RETURNED_ERROR;
}
if(k->httpversion == 10)
@@ -502,19 +546,18 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
else {
k->header = FALSE; /* this is not a header line */
k->badheader = TRUE; /* this was a bad header */
break;
}
}
/* check for Content-Length: header lines to get size */
if (strnequal("Content-Length:", k->p, 15) &&
if (checkprefix("Content-Length:", k->p) &&
sscanf (k->p+15, " %ld", &k->contentlength)) {
conn->size = k->contentlength;
Curl_pgrsSetDownloadSize(data, k->contentlength);
}
/* check for Content-Type: header lines to get the mime-type */
else if (strnequal("Content-Type:", k->p, 13)) {
else if (checkprefix("Content-Type:", k->p)) {
char *start;
char *end;
int len;
@@ -540,7 +583,8 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
else if((k->httpversion == 10) &&
conn->bits.httpproxy &&
compareheader(k->p, "Proxy-Connection:", "keep-alive")) {
Curl_compareheader(k->p,
"Proxy-Connection:", "keep-alive")) {
/*
* When a HTTP/1.0 reply comes when using a proxy, the
* 'Proxy-Connection: keep-alive' line tells us the
@@ -551,7 +595,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
infof(data, "HTTP/1.0 proxy connection set to keep alive!\n");
}
else if((k->httpversion == 10) &&
compareheader(k->p, "Connection:", "keep-alive")) {
Curl_compareheader(k->p, "Connection:", "keep-alive")) {
/*
* A HTTP/1.0 reply with the 'Connection: keep-alive' line
* tells us the connection will be kept alive for our
@@ -561,7 +605,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
conn->bits.close = FALSE; /* don't close when done */
infof(data, "HTTP/1.0 connection set to keep alive!\n");
}
else if (compareheader(k->p, "Connection:", "close")) {
else if (Curl_compareheader(k->p, "Connection:", "close")) {
/*
* [RFC 2616, section 8.1.2.1]
* "Connection: close" is HTTP/1.1 language and means that
@@ -570,7 +614,8 @@ CURLcode Curl_readwrite(struct connectdata *conn,
*/
conn->bits.close = TRUE; /* close when done */
}
else if (compareheader(k->p, "Transfer-Encoding:", "chunked")) {
else if (Curl_compareheader(k->p,
"Transfer-Encoding:", "chunked")) {
/*
* [RFC 2616, section 3.6.1] A 'chunked' transfer encoding
* means that the server will send a series of "chunks". Each
@@ -584,7 +629,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
/* init our chunky engine */
Curl_httpchunk_init(conn);
}
else if (strnequal("Content-Encoding:", k->p, 17) &&
else if (checkprefix("Content-Encoding:", k->p) &&
data->set.encoding) {
/*
* Process Content-Encoding. Look for the values: identity, gzip,
@@ -596,23 +641,23 @@ CURLcode Curl_readwrite(struct connectdata *conn,
char *start;
/* Find the first non-space letter */
for(start=k->p+18;
for(start=k->p+17;
*start && isspace((int)*start);
start++);
/* Record the content-encoding for later use. 08/27/02 jhrg */
if (strnequal("identity", start, 8))
if (checkprefix("identity", start))
k->content_encoding = IDENTITY;
else if (strnequal("deflate", start, 7))
else if (checkprefix("deflate", start))
k->content_encoding = DEFLATE;
else if (strnequal("gzip", start, 4)
|| strnequal("x-gzip", start, 6))
else if (checkprefix("gzip", start)
|| checkprefix("x-gzip", start))
k->content_encoding = GZIP;
else if (strnequal("compress", start, 8)
|| strnequal("x-compress", start, 10))
else if (checkprefix("compress", start)
|| checkprefix("x-compress", start))
k->content_encoding = COMPRESS;
}
else if (strnequal("Content-Range:", k->p, 14)) {
else if (checkprefix("Content-Range:", k->p)) {
if (sscanf (k->p+14, " bytes %d-", &k->offset) ||
sscanf (k->p+14, " bytes: %d-", &k->offset)) {
/* This second format was added August 1st 2000 by Igor
@@ -625,11 +670,10 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
}
else if(data->cookies &&
strnequal("Set-Cookie:", k->p, 11)) {
checkprefix("Set-Cookie:", k->p)) {
Curl_cookie_add(data->cookies, TRUE, k->p+11, conn->name);
}
else if(strnequal("Last-Modified:", k->p,
strlen("Last-Modified:")) &&
else if(checkprefix("Last-Modified:", k->p) &&
(data->set.timecondition || data->set.get_filetime) ) {
time_t secs=time(NULL);
k->timeofdoc = curl_getdate(k->p+strlen("Last-Modified:"),
@@ -639,7 +683,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
}
else if ((k->httpcode >= 300 && k->httpcode < 400) &&
(data->set.http_follow_location) &&
strnequal("Location:", k->p, 9)) {
checkprefix("Location:", k->p)) {
/* this is the URL that the server advises us to get instead */
char *ptr;
char *start=k->p;
@@ -657,10 +701,12 @@ CURLcode Curl_readwrite(struct connectdata *conn,
while(*ptr && !isspace((int)*ptr))
ptr++;
backup = *ptr; /* store the ending letter */
if(ptr != start) {
*ptr = '\0'; /* zero terminate */
conn->newurl = strdup(start); /* clone string */
*ptr = backup; /* restore ending letter */
}
}
/*
* End of header-checks. Write them to the client.
@@ -696,13 +742,6 @@ CURLcode Curl_readwrite(struct connectdata *conn,
there might be a non-header part left in the end of the read
buffer. */
if (!k->header) {
/* starting here, this is not part of the header! */
/* we subtract the remaining header size from the buffer */
nread -= (k->str - k->buf);
}
} /* end if header mode */
/* This is not an 'else if' since it may be a rest from the header
@@ -720,6 +759,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
infof (data, "Follow to new URL: %s\n", conn->newurl);
k->keepon &= ~KEEP_READ;
FD_ZERO(&k->rkeepfd);
*done = TRUE;
return CURLE_OK;
}
else if (conn->resume_from &&
@@ -744,6 +784,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
if(k->timeofdoc < data->set.timevalue) {
infof(data,
"The requested document is not new enough\n");
*done = TRUE;
return CURLE_OK;
}
break;
@@ -751,6 +792,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
if(k->timeofdoc > data->set.timevalue) {
infof(data,
"The requested document is not old enough\n");
*done = TRUE;
return CURLE_OK;
}
break;
@@ -763,8 +805,16 @@ CURLcode Curl_readwrite(struct connectdata *conn,
k->bodywrites++;
/* pass data to the debug function before it gets "dechunked" */
if(data->set.verbose)
if(data->set.verbose) {
if(k->badheader) {
Curl_debug(data, CURLINFO_DATA_IN, data->state.headerbuff,
k->hbuflen);
if(k->badheader == HEADER_PARTHEADER)
Curl_debug(data, CURLINFO_DATA_IN, k->str, nread);
}
else
Curl_debug(data, CURLINFO_DATA_IN, k->str, nread);
}
if(conn->bits.chunk) {
/*
@@ -819,12 +869,11 @@ CURLcode Curl_readwrite(struct connectdata *conn,
result = Curl_client_write(data, CLIENTWRITE_BODY,
data->state.headerbuff,
k->hbuflen);
k->badheader = FALSE; /* taken care of now */
}
if(k->badheader < HEADER_ALLBAD) {
/* This switch handles various content encodings. If there's an
error here, be sure to check over the almost identical code in
http_chunk.c. 08/29/02 jhrg */
error here, be sure to check over the almost identical code
in http_chunk.c. 08/29/02 jhrg */
#ifdef HAVE_LIBZ
switch (k->content_encoding) {
case IDENTITY:
@@ -853,12 +902,17 @@ CURLcode Curl_readwrite(struct connectdata *conn,
break;
}
#endif
}
k->badheader = HEADER_NORMAL; /* taken care of now */
if(result)
return result;
}
} /* if (! header and data to read ) */
} while(!readdone);
} /* if( read from socket ) */
/* If we still have writing to do, we check if we have a writable
@@ -871,20 +925,29 @@ CURLcode Curl_readwrite(struct connectdata *conn,
int i, si;
ssize_t bytes_written;
bool writedone=FALSE;
if ((k->bytecount == 0) && (k->writebytecount == 0))
Curl_pgrsTime(data, TIMER_STARTTRANSFER);
didwhat |= KEEP_WRITE;
/*
* We loop here to do the READ and SEND loop until we run out of
* data to send or until we get EWOULDBLOCK back
*/
do {
/* only read more data if there's no upload data already
present in the upload buffer */
if(0 == conn->upload_present) {
/* init the "upload from here" pointer */
conn->upload_fromhere = k->uploadbuf;
nread = data->set.fread(conn->upload_fromhere, 1,
BUFSIZE, data->set.in);
if(!k->upload_done)
nread = fillbuffer(conn, BUFSIZE);
else
nread = 0; /* we're done uploading/reading */
/* the signed int typecast of nread is for systems that have an
unsigned size_t */
@@ -892,6 +955,7 @@ CURLcode Curl_readwrite(struct connectdata *conn,
/* done */
k->keepon &= ~KEEP_WRITE; /* we're done writing */
FD_ZERO(&k->wkeepfd);
writedone = TRUE;
break;
}
@@ -926,12 +990,12 @@ CURLcode Curl_readwrite(struct connectdata *conn,
that instead of reading more data */
}
/* write to socket */
/* write to socket (send away data) */
result = Curl_write(conn,
conn->writesockfd,
conn->upload_fromhere,
conn->upload_present,
&bytes_written);
conn->writesockfd, /* socket to send to */
conn->upload_fromhere, /* buffer pointer */
conn->upload_present, /* buffer size */
&bytes_written); /* actually send away */
if(result)
return result;
else if(conn->upload_present != bytes_written) {
@@ -943,11 +1007,20 @@ CURLcode Curl_readwrite(struct connectdata *conn,
/* advance the pointer where to find the buffer when the next send
is to happen */
conn->upload_fromhere += bytes_written;
writedone = TRUE; /* we are done, stop the loop */
}
else {
/* we've uploaded that buffer now */
conn->upload_fromhere = k->uploadbuf;
conn->upload_present = 0; /* no more bytes left */
if(k->upload_done) {
/* switch off writing, we're done! */
k->keepon &= ~KEEP_WRITE; /* we're done writing */
FD_ZERO(&k->wkeepfd);
writedone = TRUE;
}
}
if(data->set.verbose)
@@ -958,6 +1031,8 @@ CURLcode Curl_readwrite(struct connectdata *conn,
k->writebytecount += bytes_written;
Curl_pgrsSetUploadCounter(data, (double)k->writebytecount);
} while(!writedone); /* loop until we're done writing! */
}
} while(0); /* just to break out from! */
@@ -1052,13 +1127,13 @@ CURLcode Curl_readwrite_init(struct connectdata *conn)
Curl_pgrsSetUploadCounter(data, 0);
Curl_pgrsSetDownloadCounter(data, 0);
if (!conn->getheader) {
if (!conn->bits.getheader) {
k->header = FALSE;
if(conn->size > 0)
Curl_pgrsSetDownloadSize(data, conn->size);
}
/* we want header and/or body, if neither then don't do this! */
if(conn->getheader || !data->set.no_body) {
if(conn->bits.getheader || !data->set.no_body) {
FD_ZERO (&k->readfd); /* clear it */
if(conn->sockfd != -1) {
@@ -1140,7 +1215,7 @@ Transfer(struct connectdata *conn)
return CURLE_OK;
/* we want header and/or body, if neither then don't do this! */
if(!conn->getheader && data->set.no_body)
if(!conn->bits.getheader && data->set.no_body)
return CURLE_OK;
k->writefdp = &k->writefd; /* store the address of the set */
@@ -1198,6 +1273,22 @@ CURLcode Curl_pretransfer(struct SessionHandle *data)
data->state.this_is_a_follow = FALSE; /* reset this */
data->state.errorbuf = FALSE; /* no error has occurred */
/* If there was a list of cookie files to read and we haven't done it before,
do it now! */
if(data->change.cookielist) {
struct curl_slist *list = data->change.cookielist;
while(list) {
data->cookies = Curl_cookie_init(list->data,
data->cookies,
data->set.cookiesession);
list = list->next;
}
curl_slist_free_all(data->change.cookielist); /* clean up list */
data->change.cookielist = NULL; /* don't do this again! */
}
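
Since CURLOPT_COOKIEFILE now only queues file names and the list is read here at pre-transfer time, an application can name several cookie files before the first transfer. A small sketch (file names are made up):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "session-cookies.txt");
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "extra-cookies.txt");

  curl_easy_perform(curl);   /* both files are parsed in Curl_pretransfer() */
  curl_easy_cleanup(curl);
  return 0;
}
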
/* Allow data->set.use_port to set which port to use. This needs to be
* disabled for example when we follow Location: headers to URLs using
* different ports! */
@@ -1228,88 +1319,19 @@ CURLcode Curl_posttransfer(struct SessionHandle *data)
return CURLE_OK;
}
CURLcode Curl_perform(struct SessionHandle *data)
CURLcode Curl_follow(struct SessionHandle *data,
char *newurl) /* this 'newurl' is the Location: string,
and it must be malloc()ed before passed
here */
{
CURLcode res;
CURLcode res2;
struct connectdata *conn=NULL;
char *newurl = NULL; /* possibly a new URL to follow to! */
data->state.used_interface = Curl_if_easy;
res = Curl_pretransfer(data);
if(res)
return res;
/*
* It is important that there is NO 'return' from this function at any other
* place than falling down to the end of the function! This is because we
* have cleanup stuff that must be done before we get back, and that is only
* performed after this do-while loop.
*/
do {
Curl_pgrsTime(data, TIMER_STARTSINGLE);
res = Curl_connect(data, &conn);
if(res == CURLE_OK) {
res = Curl_do(&conn);
if(res == CURLE_OK) {
CURLcode res2; /* just a local extra result container */
if(conn->protocol&PROT_FTPS)
/* FTPS, disable ssl while transferring data */
conn->ssl.use = FALSE;
res = Transfer(conn); /* now fetch that URL please */
if(conn->protocol&PROT_FTPS)
/* FTPS, enable ssl again after having transferred data */
conn->ssl.use = TRUE;
if(res == CURLE_OK)
/*
* We must duplicate the new URL here as the connection data
* may be free()ed in the Curl_done() function.
*/
newurl = conn->newurl?strdup(conn->newurl):NULL;
else {
/* The transfer phase returned error, we mark the connection to get
* closed to prevent being re-used. This is because we can't
* possibly know if the connection is in a good shape or not now. */
conn->bits.close = TRUE;
if(-1 !=conn->secondarysocket) {
/* if we failed anywhere, we must clean up the secondary socket if
it was used */
sclose(conn->secondarysocket);
conn->secondarysocket=-1;
}
}
/* Always run Curl_done(), even if some of the previous calls
failed, but return the previous (original) error code */
res2 = Curl_done(conn);
if(CURLE_OK == res)
res = res2;
}
/*
* Important: 'conn' cannot be used here, since it may have been closed
* in 'Curl_done' or other functions.
*/
if((res == CURLE_OK) && newurl) {
/* Location: redirect
This is assumed to happen for HTTP(S) only!
*/
/* Location: redirect */
char prot[16]; /* URL protocol string storage */
char letter; /* used for a silly sscanf */
if (data->set.maxredirs && (data->set.followlocation >= data->set.maxredirs)) {
if (data->set.maxredirs &&
(data->set.followlocation >= data->set.maxredirs)) {
failf(data,"Maximum (%d) redirects followed", data->set.maxredirs);
res=CURLE_TOO_MANY_REDIRECTS;
break;
return CURLE_TOO_MANY_REDIRECTS;
}
/* mark the next request as a followed location: */
@@ -1343,14 +1365,14 @@ CURLcode Curl_perform(struct SessionHandle *data)
char *pathsep;
char *newest;
char *useurl = newurl;
/* we must make our own copy of the URL to play with, as it may
point to read-only data */
char *url_clone=strdup(data->change.url);
if(!url_clone) {
res = CURLE_OUT_OF_MEMORY;
break; /* skip out of this loop NOW */
}
if(!url_clone)
return CURLE_OUT_OF_MEMORY; /* skip out of this NOW */
/* protsep points to the start of the host name */
protsep=strstr(url_clone, "//");
@@ -1360,6 +1382,8 @@ CURLcode Curl_perform(struct SessionHandle *data)
protsep+=2; /* pass the slashes */
if('/' != newurl[0]) {
int level=0;
/* First we need to find out if there's a ?-letter in the URL,
and cut it and the right-side of that off */
pathsep = strrchr(protsep, '?');
@@ -1371,6 +1395,40 @@ CURLcode Curl_perform(struct SessionHandle *data)
pathsep = strrchr(protsep, '/');
if(pathsep)
*pathsep=0;
/* Check if there's any slash after the host name, and if so,
remember that position instead */
pathsep = strchr(protsep, '/');
if(pathsep)
protsep = pathsep+1;
else
protsep = NULL;
/* now deal with one "./" or any amount of "../" in the newurl
and act accordingly */
if((useurl[0] == '.') && (useurl[1] == '/'))
useurl+=2; /* just skip the "./" */
while((useurl[0] == '.') &&
(useurl[1] == '.') &&
(useurl[2] == '/')) {
level++;
useurl+=3; /* pass the "../" */
}
if(protsep) {
while(level--) {
/* cut off one more level from the right of the original URL */
pathsep = strrchr(protsep, '/');
if(pathsep)
*pathsep=0;
else {
*protsep=0;
break;
}
}
}
}
else {
/* We got a new absolute path for this server, cut off from the
@@ -1382,15 +1440,15 @@ CURLcode Curl_perform(struct SessionHandle *data)
newest=(char *)malloc( strlen(url_clone) +
1 + /* possible slash */
strlen(newurl) + 1/* zero byte */);
strlen(useurl) + 1/* zero byte */);
if(!newest) {
res = CURLE_OUT_OF_MEMORY;
break; /* go go go out from this loop */
}
sprintf(newest, "%s%s%s", url_clone, ('/' == newurl[0])?"":"/",
newurl);
free(newurl);
if(!newest)
return CURLE_OUT_OF_MEMORY; /* go out from this */
sprintf(newest, "%s%s%s", url_clone,
(('/' == useurl[0]) || (protsep && !*protsep))?"":"/",
useurl);
free(newurl); /* newurl is the allocated pointer */
free(url_clone);
newurl = newest;
}
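
A standalone sketch of the relative-Location handling Curl_follow() gains above: cut the query part and the last path segment off the old URL, swallow leading "./" and "../" sequences from the new location, then glue the pieces together. The absolute-path branch is reconstructed from the truncated comment, error handling is omitted, and this is an illustration rather than libcurl code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *follow_relative(const char *baseurl, const char *location)
{
  char *url_clone = strdup(baseurl);
  char *protsep = strstr(url_clone, "//");
  char *pathsep;
  const char *useurl = location;
  char *newest;
  int level = 0;

  protsep = protsep ? protsep + 2 : url_clone;   /* skip "scheme://" */

  if(location[0] != '/') {
    /* cut off any ?query part, then the last path segment */
    pathsep = strrchr(protsep, '?');
    if(pathsep)
      *pathsep = 0;
    pathsep = strrchr(protsep, '/');
    if(pathsep)
      *pathsep = 0;

    /* remember the first slash after the host name, if any */
    pathsep = strchr(protsep, '/');
    protsep = pathsep ? pathsep + 1 : NULL;

    if(useurl[0] == '.' && useurl[1] == '/')
      useurl += 2;                               /* plain "./" is a no-op */
    while(useurl[0] == '.' && useurl[1] == '.' && useurl[2] == '/') {
      level++;                                   /* each "../" climbs once */
      useurl += 3;
    }
    while(protsep && level--) {
      pathsep = strrchr(protsep, '/');
      if(pathsep)
        *pathsep = 0;
      else {
        *protsep = 0;                            /* nothing left but the host */
        break;
      }
    }
  }
  else {
    /* absolute path: cut everything after the host name (reconstructed) */
    pathsep = strchr(protsep, '/');
    if(pathsep)
      *pathsep = 0;
  }

  newest = malloc(strlen(url_clone) + 1 + strlen(useurl) + 1);
  sprintf(newest, "%s%s%s", url_clone,
          (useurl[0] == '/' || (protsep && !*protsep)) ? "" : "/", useurl);
  free(url_clone);
  return newest;
}

int main(void)
{
  char *out = follow_relative("http://example.com/a/b/index.html?q=1",
                              "../img/x.png");
  puts(out);                    /* prints http://example.com/a/img/x.png */
  free(out);
  return 0;
}
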
@@ -1489,9 +1547,88 @@ CURLcode Curl_perform(struct SessionHandle *data)
}
Curl_pgrsTime(data, TIMER_REDIRECT);
Curl_pgrsResetTimes(data);
return CURLE_OK;
}
CURLcode Curl_perform(struct SessionHandle *data)
{
CURLcode res;
CURLcode res2;
struct connectdata *conn=NULL;
char *newurl = NULL; /* possibly a new URL to follow to! */
data->state.used_interface = Curl_if_easy;
res = Curl_pretransfer(data);
if(res)
return res;
/*
* It is important that there is NO 'return' from this function at any other
* place than falling down to the end of the function! This is because we
* have cleanup stuff that must be done before we get back, and that is only
* performed after this do-while loop.
*/
do {
Curl_pgrsTime(data, TIMER_STARTSINGLE);
res = Curl_connect(data, &conn);
if(res == CURLE_OK) {
res = Curl_do(&conn);
if(res == CURLE_OK) {
CURLcode res2; /* just a local extra result container */
if(conn->protocol&PROT_FTPS)
/* FTPS, disable ssl while transferring data */
conn->ssl.use = FALSE;
res = Transfer(conn); /* now fetch that URL please */
if(conn->protocol&PROT_FTPS)
/* FTPS, enable ssl again after having transferred data */
conn->ssl.use = TRUE;
if(res == CURLE_OK)
/*
* We must duplicate the new URL here as the connection data
* may be free()ed in the Curl_done() function.
*/
newurl = conn->newurl?strdup(conn->newurl):NULL;
else {
/* The transfer phase returned error, we mark the connection to get
* closed to prevent being re-used. This is because we can't
* possibly know if the connection is in a good shape or not now. */
conn->bits.close = TRUE;
if(-1 !=conn->secondarysocket) {
/* if we failed anywhere, we must clean up the secondary socket if
it was used */
sclose(conn->secondarysocket);
conn->secondarysocket=-1;
}
}
/* Always run Curl_done(), even if some of the previous calls
failed, but return the previous (original) error code */
res2 = Curl_done(conn);
if(CURLE_OK == res)
res = res2;
}
/*
* Important: 'conn' cannot be used here, since it may have been closed
* in 'Curl_done' or other functions.
*/
if((res == CURLE_OK) && newurl) {
res = Curl_follow(data, newurl);
if(CURLE_OK == res) {
newurl = NULL;
continue;
}
}
}
break; /* it only reaches here when this shouldn't loop */
} while(1); /* loop if Location: */
@@ -1527,7 +1664,7 @@ Curl_Transfer(struct connectdata *c_conn, /* connection data */
/* now copy all input parameters */
conn->sockfd = sockfd;
conn->size = size;
conn->getheader = getheader;
conn->bits.getheader = getheader;
conn->bytecountp = bytecountp;
conn->writesockfd = writesockfd;
conn->writebytecountp = writebytecountp;

View File

@@ -23,10 +23,9 @@
* $Id$
***************************************************************************/
CURLcode Curl_perform(struct SessionHandle *data);
CURLcode Curl_pretransfer(struct SessionHandle *data);
CURLcode Curl_posttransfer(struct SessionHandle *data);
CURLcode Curl_follow(struct SessionHandle *data, char *newurl);
CURLcode Curl_readwrite(struct connectdata *conn, bool *done);
void Curl_single_fdset(struct connectdata *conn,
fd_set *read_fd_set,

144
lib/url.c
View File

@@ -101,6 +101,7 @@
#include "strequal.h"
#include "escape.h"
#include "strtok.h"
#include "share.h"
/* And now for the protocols */
#include "ftp.h"
@@ -183,6 +184,9 @@ CURLcode Curl_close(struct SessionHandle *data)
if (data->share)
data->share->dirty--;
if(data->change.cookielist) /* clean up list if any */
curl_slist_free_all(data->change.cookielist);
if(data->state.auth_host)
free(data->state.auth_host);
@@ -552,8 +556,10 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option, ...)
*/
cookiefile = (char *)va_arg(param, void *);
if(cookiefile)
data->cookies = Curl_cookie_init(cookiefile, data->cookies,
data->set.cookiesession);
/* append the cookie file name to the list of file names, and deal with
them later */
data->change.cookielist =
curl_slist_append(data->change.cookielist, cookiefile);
break;
case CURLOPT_COOKIEJAR:
@@ -1066,8 +1072,8 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option, ...)
case CURLOPT_SHARE:
{
curl_share *set;
set = va_arg(param, curl_share *);
struct Curl_share *set;
set = va_arg(param, struct Curl_share *);
if(data->share)
data->share->dirty--;
@@ -1083,6 +1089,20 @@ CURLcode Curl_setopt(struct SessionHandle *data, CURLoption option, ...)
data->set.proxytype = va_arg(param, long);
break;
case CURLOPT_PRIVATE:
/*
* Set private data pointer.
*/
data->set.private = va_arg(param, char *);
break;
case CURLOPT_HTTP200ALIASES:
/*
* Set a list of aliases for HTTP 200 in response header
*/
data->set.http200aliases = va_arg(param, struct curl_slist *);
break;
default:
/* unknown tag and its companion, just ignore: */
return CURLE_FAILED_INIT; /* correct this */
@@ -1479,18 +1499,23 @@ static int handleSock5Proxy(
socksreq[3] = 1; /* IPv4 = 1 */
{
Curl_addrinfo *hp;
hp = Curl_resolv(conn->data, conn->hostname, conn->remote_port);
#ifndef ENABLE_IPV6
struct Curl_dns_entry *dns;
Curl_addrinfo *hp=NULL;
dns = Curl_resolv(conn->data, conn->hostname, conn->remote_port);
/*
* We cannot use 'hostent' as a struct that Curl_resolv() returns. It
* returns a Curl_addrinfo pointer that may not always look the same.
*/
#ifndef ENABLE_IPV6
if(dns)
hp=dns->addr;
if (hp && hp->h_addr_list[0]) {
socksreq[4] = ((char*)hp->h_addr_list[0])[0];
socksreq[5] = ((char*)hp->h_addr_list[0])[1];
socksreq[6] = ((char*)hp->h_addr_list[0])[2];
socksreq[7] = ((char*)hp->h_addr_list[0])[3];
Curl_resolv_unlock(dns); /* not used anymore from now on */
}
else {
failf(conn->data, "Failed to resolve \"%s\" for SOCKS5 connect.",
@@ -1543,7 +1568,7 @@ static int handleSock5Proxy(
}
static CURLcode ConnectPlease(struct connectdata *conn,
Curl_addrinfo *hostaddr,
struct Curl_dns_entry *hostaddr,
bool *connected)
{
CURLcode result;
@@ -1562,13 +1587,15 @@ static CURLcode ConnectPlease(struct connectdata *conn,
/* All is cool, then we store the current information from the hostaddr
struct to the serv_addr, as it might be needed later. The address
returned from the function above is crucial here. */
conn->connect_addr = hostaddr;
#ifdef ENABLE_IPV6
conn->serv_addr = addr;
#else
memset((char *) &conn->serv_addr, '\0', sizeof(conn->serv_addr));
memcpy((char *)&(conn->serv_addr.sin_addr),
(struct in_addr *)addr, sizeof(struct in_addr));
conn->serv_addr.sin_family = hostaddr->h_addrtype;
conn->serv_addr.sin_family = hostaddr->addr->h_addrtype;
conn->serv_addr.sin_port = htons((unsigned short)conn->port);
#endif
@@ -1591,8 +1618,11 @@ static CURLcode ConnectPlease(struct connectdata *conn,
return result;
}
/*
* ALERT! The 'dns' pointer being passed in here might be NULL at times.
*/
static void verboseconnect(struct connectdata *conn,
Curl_addrinfo *hostaddr)
struct Curl_dns_entry *dns)
{
#ifdef HAVE_INET_NTOA_R
char ntoa_buf[64];
@@ -1601,7 +1631,7 @@ static void verboseconnect(struct connectdata *conn,
/* Figure out the ip-number and display the first host name it shows: */
#ifdef ENABLE_IPV6
(void)hostaddr; /* not used in the IPv6 enabled version */
(void)dns; /* not used in the IPv6 enabled version */
{
char hbuf[NI_MAXHOST];
#ifdef NI_WITHSCOPEID
@@ -1624,6 +1654,7 @@ static void verboseconnect(struct connectdata *conn,
}
#else
{
Curl_addrinfo *hostaddr=dns?dns->addr:NULL;
struct in_addr in;
(void) memcpy(&in.s_addr, &conn->serv_addr.sin_addr, sizeof (in.s_addr));
infof(data, "Connected to %s (%s) port %d\n",
@@ -1649,11 +1680,17 @@ static void verboseconnect(struct connectdata *conn,
* 'serv_addr' field in the connectdata struct for most of it.
*/
CURLcode Curl_protocol_connect(struct connectdata *conn,
Curl_addrinfo *hostaddr)
struct Curl_dns_entry *hostaddr)
{
struct SessionHandle *data = conn->data;
CURLcode result=CURLE_OK;
if(conn->bits.tcpconnect)
/* We already are connected, get back. This may happen when the connect
worked fine in the first call, like when we connect to a local server
or proxy. */
return CURLE_OK;
Curl_pgrsTime(data, TIMER_CONNECT); /* connect done */
if(data->set.verbose)
@@ -1684,15 +1721,15 @@ static CURLcode CreateConnection(struct SessionHandle *data,
struct connectdata *conn;
struct connectdata *conn_temp;
int urllen;
Curl_addrinfo *hostaddr;
struct Curl_dns_entry *hostaddr;
#ifdef HAVE_ALARM
unsigned int prev_alarm;
unsigned int prev_alarm=0;
#endif
char endbracket;
#ifdef HAVE_SIGACTION
struct sigaction keep_sigact; /* store the old struct here */
bool keep_copysig; /* did copy it? */
bool keep_copysig=FALSE; /* did copy it? */
#else
#ifdef HAVE_SIGNAL
void *keep_sigact; /* store the old handler here */
@@ -1729,7 +1766,9 @@ static CURLcode CreateConnection(struct SessionHandle *data,
conn->firstsocket = -1; /* no file descriptor */
conn->secondarysocket = -1; /* no file descriptor */
conn->connectindex = -1; /* no index */
conn->bits.httpproxy = data->change.proxy?TRUE:FALSE; /* proxy-or-not */
conn->bits.httpproxy = (data->change.proxy && *data->change.proxy &&
(data->set.proxytype == CURLPROXY_HTTP))?
TRUE:FALSE; /* http proxy or not */
conn->bits.use_range = data->set.set_range?TRUE:FALSE; /* range status */
conn->range = data->set.set_range; /* clone the range setting */
conn->resume_from = data->set.set_resume_from; /* inherite resume_from */
@@ -1749,6 +1788,23 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* Store creation time to help future close decision making */
conn->created = Curl_tvnow();
/* Set the start time temporarily to this creation time to allow easier
timeout checks before the transfer has started for real. The start time
is later set "for real" using Curl_pgrsStartNow(). */
conn->data->progress.start = conn->created;
conn->bits.upload_chunky =
((conn->protocol&PROT_HTTP) &&
data->set.upload &&
(data->set.infilesize == -1) &&
(data->set.httpversion != CURL_HTTP_VERSION_1_0))?
/* HTTP, upload, unknown file size and not HTTP 1.0 */
TRUE:
/* else, no chunky upload */
FALSE;
conn->fread = data->set.fread;
conn->fread_in = data->set.in;
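
Application-side sketch of when the new upload_chunky bit above is set: an HTTP upload with no CURLOPT_INFILESIZE given (and HTTP 1.0 not forced) goes out with chunked transfer-encoding. URL and callback contents are illustrative:

#include <curl/curl.h>
#include <string.h>

static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
{
  static int sent = 0;
  const char *payload = "field=value\n";
  size_t len = strlen(payload);
  (void)userp;
  if(sent || size * nmemb < len)
    return 0;          /* 0 tells libcurl the upload is complete */
  memcpy(ptr, payload, len);
  sent = 1;
  return len;
}

int main(void)
{
  CURL *curl = curl_easy_init();

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  /* no CURLOPT_INFILESIZE: size unknown, so the request becomes chunked */

  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  return 0;
}
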
/***********************************************************
* We need to allocate memory to store the path in. We get the size of the
@@ -1841,22 +1897,22 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* Note: if you add a new protocol, please update the list in
* lib/version.c too! */
if(strnequal(conn->gname, "FTP", 3)) {
if(checkprefix("FTP", conn->gname)) {
strcpy(conn->protostr, "ftp");
}
else if(strnequal(conn->gname, "GOPHER", 6))
else if(checkprefix("GOPHER", conn->gname))
strcpy(conn->protostr, "gopher");
#ifdef USE_SSLEAY
else if(strnequal(conn->gname, "HTTPS", 5))
else if(checkprefix("HTTPS", conn->gname))
strcpy(conn->protostr, "https");
else if(strnequal(conn->gname, "FTPS", 4))
else if(checkprefix("FTPS", conn->gname))
strcpy(conn->protostr, "ftps");
#endif /* USE_SSLEAY */
else if(strnequal(conn->gname, "TELNET", 6))
else if(checkprefix("TELNET", conn->gname))
strcpy(conn->protostr, "telnet");
else if (strnequal(conn->gname, "DICT", sizeof("DICT")-1))
else if (checkprefix("DICT", conn->gname))
strcpy(conn->protostr, "DICT");
else if (strnequal(conn->gname, "LDAP", sizeof("LDAP")-1))
else if (checkprefix("LDAP", conn->gname))
strcpy(conn->protostr, "LDAP");
else {
strcpy(conn->protostr, "http");
@@ -1959,7 +2015,7 @@ static CURLcode CreateConnection(struct SessionHandle *data,
if(strlen(nope) <= namelen) {
char *checkn=
conn->name + namelen - strlen(nope);
if(strnequal(nope, checkn, strlen(nope))) {
if(checkprefix(nope, checkn)) {
/* no proxy for this host! */
break;
}
@@ -2257,6 +2313,7 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* Setup a "faked" transfer that'll do nothing */
if(CURLE_OK == result) {
conn->bits.tcpconnect = TRUE; /* we are "connected" */
result = Curl_Transfer(conn, -1, -1, FALSE, NULL, /* no download */
-1, NULL); /* no upload */
}
@@ -2434,6 +2491,9 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* no name given, get the password only */
sscanf(userpass, ":%127[^@]", data->state.passwd);
/* we have set the password */
data->state.passwdgiven = TRUE;
if(data->state.user[0]) {
char *newname=curl_unescape(data->state.user, 0);
if(strlen(newname) < sizeof(data->state.user)) {
@@ -2469,14 +2529,17 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* the name is given, get user+password */
sscanf(data->set.userpwd, "%127[^:]:%127[^\n]",
data->state.user, data->state.passwd);
if(strchr(data->set.userpwd, ':'))
/* a colon means the password was given, even if blank */
data->state.passwdgiven = TRUE;
}
else
/* no name given, get the password only */
/* no name given, starts with a colon, get the password only */
sscanf(data->set.userpwd+1, "%127[^\n]", data->state.passwd);
}
if (data->set.use_netrc != CURL_NETRC_IGNORED &&
data->state.passwd[0] == '\0' ) { /* need passwd */
!data->state.passwdgiven) { /* need passwd */
if(Curl_parsenetrc(conn->hostname,
data->state.user,
data->state.passwd)) {
@@ -2487,8 +2550,7 @@ static CURLcode CreateConnection(struct SessionHandle *data,
}
/* if we have a user but no password, ask for one */
if(conn->bits.user_passwd &&
!data->state.passwd[0] ) {
if(conn->bits.user_passwd && !data->state.passwdgiven ) {
if(data->set.fpasswd(data->set.passwd_client,
"password:", data->state.passwd,
sizeof(data->state.passwd)))
@@ -2499,9 +2561,12 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* If our protocol needs a password and we have none, use the defaults */
if ( (conn->protocol & (PROT_FTP|PROT_HTTP)) &&
!conn->bits.user_passwd) {
!conn->bits.user_passwd &&
!data->state.passwdgiven) {
strcpy(data->state.user, CURL_DEFAULT_USER);
strcpy(data->state.passwd, CURL_DEFAULT_PASSWORD);
/* This is the default password, so DON'T set conn->bits.user_passwd */
}
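
The passwdgiven flag added in this hunk makes a trailing colon in CURLOPT_USERPWD mean an explicitly blank password (no prompt, no .netrc lookup, no default password), while a bare user name still triggers the password callback. A small application-side sketch with made-up credentials:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();

  curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");

  /* trailing colon: the password is explicitly blank, so no prompt, no
     .netrc lookup and no default password is used */
  curl_easy_setopt(curl, CURLOPT_USERPWD, "anonymous:");

  /* with "anonymous" (no colon) libcurl would instead ask the password
     callback for a password */

  curl_easy_perform(curl);
  curl_easy_cleanup(curl);
  return 0;
}
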
@@ -2753,14 +2818,21 @@ static CURLcode CreateConnection(struct SessionHandle *data,
/* Connect only if not already connected! */
result = ConnectPlease(conn, hostaddr, &connected);
if(connected)
if(connected) {
result = Curl_protocol_connect(conn, hostaddr);
if(CURLE_OK == result)
conn->bits.tcpconnect = TRUE;
}
else
conn->bits.tcpconnect = FALSE;
if(CURLE_OK != result)
return result;
}
else {
Curl_pgrsTime(data, TIMER_CONNECT); /* we're connected already */
conn->bits.tcpconnect = TRUE;
if(data->set.verbose)
verboseconnect(conn, hostaddr);
}
@@ -2804,7 +2876,6 @@ CURLcode Curl_connect(struct SessionHandle *data,
return code;
}
CURLcode Curl_done(struct connectdata *conn)
{
struct SessionHandle *data=conn->data;
@@ -2823,6 +2894,15 @@ CURLcode Curl_done(struct connectdata *conn)
conn->newurl = NULL;
}
if(conn->connect_addr)
Curl_resolv_unlock(conn->connect_addr); /* done with this */
#if defined(MALLOCDEBUG) && defined(AGGRESIVE_TEST)
/* scan for DNS cache entries still marked as in use */
Curl_hash_apply(data->hostcache,
NULL, Curl_scan_cache_used);
#endif
/* this calls the protocol-specific function pointer previously set */
if(conn->curl_done)
result = conn->curl_done(conn);
@@ -2852,7 +2932,7 @@ CURLcode Curl_do(struct connectdata **connp)
struct connectdata *conn = *connp;
struct SessionHandle *data=conn->data;
conn->do_more = FALSE; /* by default there's no curl_do_more() to use */
conn->bits.do_more = FALSE; /* by default there's no curl_do_more() to use */
if(conn->curl_do) {
/* generic protocol-specific function pointer set in curl_connect() */

View File

@@ -36,5 +36,5 @@ CURLcode Curl_do_more(struct connectdata *);
CURLcode Curl_done(struct connectdata *);
CURLcode Curl_disconnect(struct connectdata *);
CURLcode Curl_protocol_connect(struct connectdata *conn,
Curl_addrinfo *hostaddr);
struct Curl_dns_entry *dns);
#endif

View File

@@ -26,8 +26,6 @@
/* This file is for lib internal stuff */
#include "setup.h"
#include "hostip.h"
#include "hash.h"
#define PORT_FTP 21
#define PORT_TELNET 23
@@ -81,6 +79,8 @@
#include <curl/curl.h>
#include "http_chunks.h" /* for the structs and enum stuff */
#include "hostip.h"
#include "hash.h"
#ifdef HAVE_ZLIB_H
#include <zlib.h> /* for content-encoding 08/28/02 jhrg */
@@ -157,6 +157,8 @@ struct ssl_config_data {
struct HTTP {
struct FormData *sendit;
int postsize;
char *postdata;
const char *p_pragma; /* Pragma: string */
const char *p_accept; /* Accept: string */
long readbytecount;
@@ -164,10 +166,24 @@ struct HTTP {
/* For FORM posting */
struct Form form;
curl_read_callback storefread;
FILE *in;
struct Curl_chunker chunk;
struct back {
curl_read_callback fread; /* backup storage for fread pointer */
void *fread_in; /* backup storage for fread_in pointer */
char *postdata;
int postsize;
} backup;
enum {
HTTPSEND_NADA, /* init */
HTTPSEND_REQUEST, /* sending a request */
HTTPSEND_BODY, /* sending body */
HTTPSEND_LAST /* never use this */
} sending;
void *send_buffer; /* used if the request couldn't be sent in one chunk,
points to an allocated send_buffer struct */
};
/****************************************************************************
@@ -190,7 +206,9 @@ struct FTP {
read the line, just ignore the result. */
bool no_transfer; /* nothing was transferred (possibly because a resumed
transfer already was complete) */
long response_time; /* When no timeout is given, this is the number of
seconds we wait for an FTP response. Initialized
in Curl_ftp_connect() */
};
/****************************************************************************
@@ -214,6 +232,20 @@ struct ConnectBits {
IP address */
bool use_range;
bool rangestringalloc; /* the range string is malloc()'ed */
bool do_more; /* this is set TRUE if the ->curl_do_more() function is
supposed to be called, after ->curl_do() */
bool upload_chunky; /* set TRUE if we are doing chunked transfer-encoding
on upload */
bool getheader; /* TRUE if header parsing is wanted */
bool forbidchunk; /* used only to explicitly forbid chunk-upload for
specific upload buffers. See readmoredata() in
http.c for details. */
bool tcpconnect; /* the tcp stream (or similar) is connected, this
is set the first time on the first connect function
call */
};
/*
@@ -229,7 +261,12 @@ struct Curl_transfer_keeper {
struct timeval start; /* transfer started at this time */
struct timeval now; /* current time */
bool header; /* incoming data has HTTP header */
bool badheader; /* the header was deemed bad and will be
enum {
HEADER_NORMAL, /* no bad header at all */
HEADER_PARTHEADER, /* part of the chunk is a bad header, the rest is
normal data */
HEADER_ALLBAD /* all was believed to be header */
} badheader; /* the header was deemed bad and will be
written as body */
int headerline; /* counts header lines to better track the
first one */
@@ -280,6 +317,8 @@ struct Curl_transfer_keeper {
fd_set wkeepfd;
int keepon;
bool upload_done; /* set to TRUE when doing chunked transfer-encoding upload
and we're uploading the last chunk */
};
@@ -306,8 +345,11 @@ struct connectdata {
#define PROT_FTPS (1<<9)
#define PROT_SSL (1<<10) /* protocol requires SSL */
/* the particular host we use, in two different ways */
struct Curl_dns_entry *connect_addr;
#ifdef ENABLE_IPV6
struct addrinfo *serv_addr; /* the particular host we use */
struct addrinfo *serv_addr;
#else
struct sockaddr_in serv_addr;
#endif
@@ -371,7 +413,6 @@ struct connectdata {
/* READ stuff */
int sockfd; /* socket to read from or -1 */
int size; /* -1 if unknown at this point */
bool getheader; /* TRUE if header parsing is wanted */
long *bytecountp; /* return number of bytes read or NULL */
/* WRITE stuff */
@@ -440,8 +481,8 @@ struct connectdata {
position */
char *upload_fromhere;
bool do_more; /* this is set TRUE if the ->curl_do_more() function is
supposed to be called, after ->curl_do() */
curl_read_callback fread; /* function that reads the input */
void *fread_in; /* pointer to pass to the fread() above */
};
/* The end of connectdata. 08/27/02 jhrg */
@@ -529,6 +570,9 @@ struct UrlState {
char proxyuser[MAX_CURL_USER_LENGTH];
char proxypasswd[MAX_CURL_PASSWORD_LENGTH];
bool passwdgiven; /* set TRUE if an application-provided password has been
set */
struct timeval keeps_speed; /* for the progress meter really */
/* 'connects' will be an allocated array with pointers. If the pointer is
@@ -582,6 +626,8 @@ struct DynamicStatic {
bool proxy_alloc; /* http proxy string is malloc()'ed */
char *referer; /* referer string */
bool referer_alloc; /* referer string is malloc()'ed */
struct curl_slist *cookielist; /* list of cookie files set by
curl_easy_setopt(COOKIEFILE) calls */
};
/*
@@ -615,7 +661,7 @@ struct UserDefined {
bool free_referer; /* set TRUE if 'referer' points to a string we
allocated */
char *useragent; /* User-Agent string */
char *encoding; /* Accept-Encoding string 08/28/02 jhrg */
char *encoding; /* Accept-Encoding string */
char *postfields; /* if POST, set the fields' values here */
size_t postfieldsize; /* if POST, this might have a size to use instead of
strlen(), and then the data *may* be binary (contain
@@ -671,6 +717,10 @@ struct UserDefined {
int dns_cache_timeout; /* DNS cache timeout */
long buffer_size; /* size of receive buffer to use */
char *private; /* Private data */
struct curl_slist *http200aliases; /* linked list of aliases for http200 */
/* Here follows boolean settings that define how to behave during
this session. They are STATIC, set by libcurl users or at least initially
and they don't change during operations. */
@@ -718,7 +768,7 @@ struct UserDefined {
struct SessionHandle {
curl_hash *hostcache;
curl_share *share; /* Share, handles global variable mutexing */
struct Curl_share *share; /* Share, handles global variable mutexing */
struct UserDefined set; /* values set by the libcurl user */
struct DynamicStatic change; /* possibly modified userdefined data */

View File

@@ -35,6 +35,10 @@ mv $HEADER.new $HEADER
# Replace version number in header file:
sed 's/#define CURL_VERSION.*/#define CURL_VERSION "'$curlversion'"/g' $CHEADER >$CHEADER.new
echo "curl version $curlversion"
echo "libcurl version $libversion"
echo "libcurl numerical $numeric"
# Save old header file
cp -p $CHEADER $CHEADER.old
@@ -83,6 +87,9 @@ fi
#
make html
# And the PDF versions
make pdf
############################################################################
#
# Now run make dist to generate a tar.gz archive

View File

@@ -19,17 +19,45 @@ cygwintmp = $(CURDIR)/tmp_binbuild
cygwinbin:
rm -rf $(cygwintmp)
$(MAKE) -C $(top_builddir) install-strip prefix=$(cygwintmp)/usr
$(STRIP) $(cygwintmp)/usr/bin/cygcurl-?.dll
$(mkinstalldirs) $(cygwintmp)/usr/doc/Cygwin \
$(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION)
rm -rf $(cygwintmp)-dev
$(MAKE) -C $(top_builddir) DESTDIR=$(cygwintmp) install-strip
# $(STRIP) $(cygwintmp)/usr/bin/cygcurl-?.dll
$(mkinstalldirs) \
$(cygwintmp)/usr/doc/Cygwin \
$(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION) \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)/libcurl \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)/examples \
$(cygwintmp)-dev/usr/man
#
# copy some files into the binary install dir
cp $(srcdir)/README \
$(cygwintmp)/usr/doc/Cygwin/$(PACKAGE)-$(VERSION)-$(CYGBUILD).README
cd $(top_srcdir) ; cp CHANGES LEGAL MPL-1.1.txt MITX.txt README \
docs/FAQ docs/FEATURES docs/TODO \
$(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION)
cd $(cygwintmp) ; \
tar cjf $(PACKAGE)-$(VERSION)-$(CYGBUILD).tar.bz2 usr
mv $(cygwintmp)/$(PACKAGE)-$(VERSION)-$(CYGBUILD).tar.bz2 . \
&& rm -rf $(cygwintmp)
cd $(top_srcdir) ; cp CHANGES COPYING README UPGRADE docs/* \
$(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION) ; pwd
cd $(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION) ; rm *.1 Makefile*
#
# copy some files into the -dev install dir, remove some from binary
cp $(top_srcdir)/docs/libcurl/*.html \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)/libcurl
cp $(top_srcdir)/docs/examples/* \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)/examples
rm $(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)/examples/Makefile*
cp $(top_srcdir)/docs/examples/Makefile.example \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)/examples
mv $(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION)/BINDINGS \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)
mv $(cygwintmp)/usr/doc/$(PACKAGE)-$(VERSION)/INTERNALS \
$(cygwintmp)-dev/usr/doc/$(PACKAGE)-$(VERSION)
mv $(cygwintmp)/usr/include $(cygwintmp)-dev/usr
mv $(cygwintmp)/usr/lib $(cygwintmp)-dev/usr
mv $(cygwintmp)/usr/man/man3 $(cygwintmp)-dev/usr/man
#
# create both tar files, and delete tmp folders
cd $(cygwintmp) ; tar cjf \
$(PACKAGE)-$(VERSION)-$(CYGBUILD).tar.bz2 usr
mv $(cygwintmp)/*.tar.bz2 . && rm -rf $(cygwintmp)
#
cd $(cygwintmp)-dev ; tar cjf \
$(PACKAGE)-devel-$(VERSION)-$(CYGBUILD).tar.bz2 usr
mv $(cygwintmp)-dev/*.tar.bz2 . && rm -rf $(cygwintmp)-dev


@@ -5,13 +5,14 @@ Curl is a tool for transferring files with URL syntax, supporting
cookies, user+password authentication, file transfer resume,
http proxy tunneling and a busload of other useful tricks.
See /usr/doc/curl-<version>/FEATURES for more info.
See /usr/doc/curl-$(VERSION)/FEATURES for more info.
Dependencies:
- Cygwin
- OpenSSL 0.9.6b-2+ (*)
(*) cURL can be built without SSL support: ./configure --without-ssl
(*) cURL can be built without SSL support; see below for details
Canonical Homepage and Downloads:
@@ -24,14 +25,14 @@ Cygwin specific source files (a .README template and a Makefile
CVS at: <srctop>/packages/Win32/cygwin/
Build Instructions (as distributed via cygwin's setup.exe):
(NOTE: as of curl 7.9.1, compiles/tests 100% cleanly OOTB under cygwin)
Download the source, unpack it to a location of your choosing, and then:
Build Instructions (to recompile from the cygwin source tarball):
---STANDARD (with SSL) RELEASE---
Download the source (either the official release or the cygwin version),
unpack it (done for you if using setup.exe), then:
$ ./configure --prefix=/usr
$ make
$ make test # optional, requires perl
$ make test # optional
$ make install # (*)
(*) LibTool 1.4.2 had a bug related to cygwin's use of ".exe" extensions,
@@ -39,33 +40,49 @@ Build Instructions (as distributed via cygwin's setup.exe):
http://mail.gnu.org/pipermail/libtool/2001-September/005549.html
The copy of ltmain.sh that is distributed with cURL includes this patch.
As of curl 7.9.1, the official source compiles (under Cygwin) and tests
100% cleanly OOTB (Out Of The Box)
---NO SSL RELEASE---
Same as standard, except for the configure step, which changes to:
$ ./configure --prefix=/usr --without-ssl
NOTE: the standard release is what is available via Cygwin's setup.exe;
the no-ssl release is only available from the curl website
Packaging Instructions:
---BINARY---
Compile cleanly (./configure + make). Then:
Compile cleanly as described above, then:
$ make cygwinbin CYGBUILD=n
where n is the cygwin release number (e.g. the "1" in curl-7.9-1).
If you leave off "CYGBUILD=n", n defaults to 1.
where n is the cygwin release number (e.g. the "1" in curl-7.9-1),
and "CYGBUILD=n" is optional (n defaults to 1 if not specified)
Assuming everything worked properly, you'll find your binary tarball
in the packages/Win32/cygwin/ sub-directory.
Assuming everything worked, you'll find your binary tarballs in
$(buildtop)/packages/Win32/cygwin/
---SOURCE---
1. unpack the pristine source into an otherwise empty directory
1. download & unpack the pristine source
2. rename the source dir to add the "-$(REL)" suffix, e.g.:
$ mv curl-7.9 curl-7.9-1
3. unpack the pristine source once more, so you'll end up
with 2 directories: "curl-7.9" and "curl-7.9-1" in this example
3. add a CYGWIN-PATCHES directory, and add this readme to it
$ cd curl-7.9-$(REL); mkdir CYGWIN-PATCHES
$ cp packages/Win32/cygwin/README CYGWIN-PATCHES/curl-7.9-$(REL).README
$ cd curl-7.9-1; mkdir CYGWIN-PATCHES
$ cp packages/Win32/cygwin/README CYGWIN-PATCHES/curl-7.9-1.README
4. if applicable, document any changes in the README file
5. create a patch which, when applied (patch -p1 < curl-7.9-$(REL).patch)
will remove any patches you've applied:
5. create a patch which, when applied
(using `patch -p1 < curl-7.9-$(REL).patch`)
will remove any changes you've made to the pristine source:
$ cd ..
$ diff -Nrup (patched-src-dir) (pristine-src-dir) > curl-7.9-$(REL).patch
$ diff -Nrup curl-7.9-1 curl-7.9 > curl-7.9-1.patch
and then move it into the CYGWIN-PATCHES directory
6. repack
$ mv curl-7.9-1.patch curl-7.9-1/CYGWIN-PATCHES
6. pack the new source dir into a tar.bz2 file:
$ tar cfj curl-7.9-1-src.tar.bz2 curl-7.9-1
---SETUP.HINT---
sdesc: "a client that groks URLs"
@@ -80,6 +97,7 @@ Packaging Instructions:
Cygwin port maintained by:
Kevin Roth <kproth at bigfoot dot com>
Kevin Roth <kproth @ users . sourceforge . net>
Questions about cURL should be directed to curl@contactor.se.
Questions about its cygwin package should be directed to cygwin@cygwin.com.
Questions about this cygwin package go to cygwin@cygwin.com.

perl/contrib/formfind Executable file

@@ -0,0 +1,196 @@
#!/usr/bin/env perl
# $Id$
#
# formfind.pl
#
# This script reads an HTML page from stdin and presents the form
# information you may need in order to machine-generate a response to the form.
#
# Typically fed by 'curl' doing the URL fetching (see the usage sketch below).
#
# Author: Daniel Stenberg <daniel@haxx.se>
# Version: 0.2 Nov 18, 2002
#
# HISTORY
#
# 0.1 - Nov 12 1998 - Created now!
# 0.2 - Nov 18 2002 - Enhanced. Removed URL support, use only stdin.
#
$in="";
if($ARGV[0] eq "-h") {
print "Usage: $0 < HTML\n";
exit;
}
sub namevalue {
my ($tag)=@_;
my $name=$tag;
if($name =~ /name *=/i) {
if($name =~ /name *= *([^\"\']([^ \">]*))/) {
$name = $1;
}
elsif($name =~ /name *= *(\"|\')([^\"\']*)(\"|\')/) {
$name=$2;
}
else {
# there is a tag but we didn't find the contents
$name="[weird]";
}
}
else {
# no name given
$name="";
}
# get value tag
my $value= $tag;
if($value =~ /[^\.a-zA-Z0-9]value *=/i) {
if($value =~ /[^\.a-zA-Z0-9]value *= *([^\"\']([^ \">]*))/) {
$value = $1;
}
elsif($value =~ /[^\.a-zA-Z0-9]value *= *(\"|\')([^\"\']*)(\"|\')/) {
$value=$2;
}
else {
# there is a tag but we didn't find the contents
$value="[weird]";
}
}
else {
$value="";
}
return ($name, $value);
}
while(<STDIN>) {
$line = $_;
push @indoc, $line;
$line=~ s/\n//g;
$line=~ s/\r//g;
$in=$in.$line;
}
while($in =~ /[^<]*(<[^>]+>)/g ) {
# we have a tag in $1
$tag = $1;
if($tag =~ /^<!--/) {
# this is a comment tag, ignore it
}
else {
if(!$form &&
($tag =~ /^< *form/i )) {
$method= $tag;
if($method =~ /method *=/i) {
$method=~ s/.*method *= *(\"|)([^ \">]*).*/$2/gi;
}
else {
$method="get"; # default method
}
$action= $tag;
$action=~ s/.*action *= *(\'|\"|)([^ \"\'>]*).*/$2/gi;
$method=uc($method);
$enctype=$tag;
if ($enctype =~ /enctype *=/) {
$enctype=~ s/.*enctype *= *(\'|\"|)([^ \"\'>]*).*/$2/gi;
if($enctype eq "multipart/form-data") {
$enctype="multipart form upload [use -F]"
}
$enctype = "\n--- type: $enctype";
}
else {
$enctype="";
}
print "--- FORM report. Uses $method to URL \"$action\"$enctype\n";
$form=1;
}
elsif($form &&
($tag =~ /< *\/form/i )) {
print "--- end of FORM\n";
$form=0;
if( 0 ) {
print "*** Fill in all or any of these: (default assigns may be shown)\n";
for(@vars) {
$var = $_;
$def = $value{$var};
print "$var=$def\n";
}
print "*** Pick one of these:\n";
for(@alts) {
print "$_\n";
}
}
undef @vars;
undef @alts;
}
elsif($form &&
($tag =~ /^< *(input|select)/i)) {
$mtag = $1;
($name, $value)=namevalue($tag);
if($mtag =~ /select/i) {
print "Select: NAME=\"$name\"\n";
push @vars, "$name";
$select = 1;
}
else {
$type=$tag;
if($type =~ /type *=/i) {
$type =~ s/.*type *= *(\'|\"|)([^ \"\'>]*).*/$2/gi;
}
else {
$type="text"; # default type
}
$type=uc($type);
if(lc($type) eq "reset") {
# reset types are for UI only, ignore.
}
elsif($name eq "") {
# let's read the value parameter
print "Button: \"$value\" ($type)\n";
push @alts, "$value";
}
else {
print "Input: NAME=\"$name\"";
if($value ne "") {
print " VALUE=\"$value\"";
}
print " ($type)\n";
push @vars, "$name";
# store default value:
$value{$name}=$value;
}
}
}
elsif($form &&
($tag =~ /^< *textarea/i)) {
my ($name, $value)=namevalue($tag);
print "Textarea: NAME=\"$name\"\n";
}
elsif($select) {
if($tag =~ /^< *\/ *select/i) {
print "[end of select]\n";
$select = 0;
}
elsif($tag =~ /[^\/] *option/i ) {
my ($name, $value)=namevalue($tag);
my $s;
if($tag =~ /selected/i) {
$s= " (SELECTED)";
}
print " Option VALUE=\"$value\"$s\n";
}
}
}
}


@@ -1,273 +0,0 @@
#!@PERL@
#
# formfind.pl
#
# This script gets a HTML page from the specified URL and presents form
# information you may need in order to machine-make a respond to the form.
#
# Written to use 'curl' for URL fetching.
#
# Author: Daniel Stenberg <Daniel.Stenberg@sth.frontec.se>
# Version: 0.1 Nov 12, 1998
#
# HISTORY
#
# 0.1 - Created now!
#
# TODO
# respect file:// URLs for local file fetches!
$in="";
$usestdin = 0;
if($ARGV[0] eq "" ) {
$usestdin = 1;
}
else {
$geturl = $ARGV[0];
}
if(($geturl eq "") && !$usestdin) {
print "Usage: $0 <full source URL>\n",
" Use a traling slash for directory URLs!\n";
exit;
}
# If you need a proxy for web access, edit your .curlrc file to feature
# -x <proxy:port>
# linkchecker, URL will be appended to the right of this command line
# this is the one using HEAD:
$linkcheck = "curl -s -m 20 -I";
# as a second attempt, this will be used. This is not using HEAD but will
# get the whole frigging document!
$linkcheckfull = "curl -s -m 20 -i";
# htmlget, URL will be appended to the right of this command line
$htmlget = "curl -s";
# urlget, URL will be appended to the right of this command line
# this stores the file with the remote file name in the current dir
$urlget = "curl -O -s";
# Parse the input URL and split it into the relevant parts:
sub SplitURL {
my $inurl = $_[0];
if($inurl=~ /^([^:]+):\/\/([^\/]*)\/(.*)\/(.*)/ ) {
$getprotocol = $1;
$getserver = $2;
$getpath = $3;
$getdocument = $4;
}
elsif ($inurl=~ /^([^:]+):\/\/([^\/]*)\/(.*)/ ) {
$getprotocol = $1;
$getserver = $2;
$getpath = $3;
$getdocument = "";
if($getpath !~ /\//) {
$getpath ="";
$getdocument = $3;
}
}
elsif ($inurl=~ /^([^:]+):\/\/(.*)/ ) {
$getprotocol = $1;
$getserver = $2;
$getpath = "";
$getdocument = "";
}
else {
print "Couldn't parse the specified URL, retry please!\n";
exit;
}
}
if(!$usestdin) {
&SplitURL($geturl);
#print "protocol = $getprotocol\n";
#print "server = $getserver\n";
#print "path = $getpath\n";
#print "document = $getdocument\n";
#exit;
open(HEADGET, "$linkcheck $geturl|") ||
die "Couldn't get web page for some reason";
headget:
while(<HEADGET>) {
# print $_;
if($_ =~ /HTTP\/.*3\d\d /) {
$pagemoved=1;
}
elsif($pagemoved &&
($_ =~ /^Location: (.*)/)) {
$geturl = $1;
&SplitURL($geturl);
$pagemoved++;
last headget;
}
}
close(HEADGET);
if($pagemoved == 1) {
print "Page is moved but we don't know where. Did you forget the ",
"traling slash?\n";
exit;
}
open(WEBGET, "$htmlget $geturl|") ||
die "Couldn't get web page for some reason";
while(<WEBGET>) {
$line = $_;
push @indoc, $line;
$line=~ s/\n//g;
$line=~ s/\r//g;
# print $line."\n";
$in=$in.$line;
}
close(WEBGET);
}
else {
while(<STDIN>) {
$line = $_;
push @indoc, $line;
$line=~ s/\n//g;
$line=~ s/\r//g;
$in=$in.$line;
}
}
getlinkloop:
while($in =~ /[^<]*(<[^>]+>)/g ) {
# we have a tag in $1
$tag = $1;
if($tag =~ /^<!--/) {
# this is a comment tag, ignore it
}
else {
if(!$form &&
($tag =~ /^< *form/i )) {
$method= $tag;
if($method =~ /method *=/i) {
$method=~ s/.*method *= *(\"|)([^ \">]*).*/$2/gi;
}
else {
$method="get"; # default method
}
$action= $tag;
$action=~ s/.*action *= *(\"|)([^ \">]*).*/$2/gi;
$method=uc($method);
$enctype=$tag;
if ($enctype =~ /enctype *=/) {
$enctype=~ s/.*enctype *= *(\'|\"|)([^ \"\'>]*).*/$2/gi;
if($enctype eq "multipart/form-data") {
$enctype="multipart form upload [use -F]"
}
$enctype = "\n--- type: $enctype";
}
else {
$enctype="";
}
print "--- FORM report. Uses $method to URL \"$action\"$enctype\n";
# print "TAG: $tag\n";
# print "METHOD: $method\n";
# print "ACTION: $action\n";
$form=1;
}
elsif($form &&
($tag =~ /< *\/form/i )) {
# print "TAG: $tag\n";
print "--- end of FORM\n";
$form=0;
if( 0 ) {
print "*** Fill in all or any of these: (default assigns may be shown)\n";
for(@vars) {
$var = $_;
$def = $value{$var};
print "$var=$def\n";
}
print "*** Pick one of these:\n";
for(@alts) {
print "$_\n";
}
}
undef @vars;
undef @alts;
}
elsif($form &&
($tag =~ /^< *(input|select)/i)) {
$mtag = $1;
# print "TAG: $tag\n";
$name=$tag;
if($name =~ /name *=/i) {
$name=~ s/.*name *= *(\"|)([^ \">]*).*/$2/gi;
}
else {
# no name given
$name="";
}
# get value tag
$value= $tag;
if($value =~ /value *=/i) {
$value=~ s/.*value *= *(\"|)([^ \">]*).*/$2/gi;
}
else {
$value="";
}
if($mtag =~ /select/i) {
print "Select: $name\n";
push @vars, "$name";
$select = 1;
}
else {
$type=$tag;
if($type =~ /type *=/i) {
$type =~ s/.*type *= *(\"|)([^ \">]*).*/$2/gi;
}
else {
$type="text"; # default type
}
$type=uc($type);
if(lc($type) eq "reset") {
# reset types are for UI only, ignore.
}
elsif($name eq "") {
# let's read the value parameter
print "Button: \"$value\" ($type)\n";
push @alts, "$value";
}
else {
$info="";
if($value ne "") {
$info="=$value";
}
print "Input: $name$info ($type)\n";
push @vars, "$name";
# store default value:
$value{$name}=$value;
}
}
}
elsif($select &&
($tag =~ /^< *\/ *select/i)) {
$select = 0;
}
}
}


@@ -1,7 +1,7 @@
#############################################################
# $Id$
#
## Makefile for building curl.exe with MingW32 (GCC-2.95) and
## Makefile for building curl.exe with MingW32 (GCC-3.2) and
## optionally OpenSSL (0.9.6)
##
## Use: make -f Makefile.m32 [SSL=1] [DYN=1]
@@ -10,8 +10,9 @@
## Joern Hartroth <hartroth@acm.org>
CC = gcc
RM = rm -f
STRIP = strip -s
OPENSSL_PATH = ../../openssl-0.9.6d
OPENSSL_PATH = ../../openssl-0.9.6g
ZLIB_PATH = ../../zlib-1.1.3
# We may need these someday
@@ -23,6 +24,9 @@ ZLIB_PATH = ../../zlib-1.1.3
INCLUDES = -I. -I.. -I../include
CFLAGS = -g -O2 -DMINGW32
ifdef SSL
CFLAGS += -DUSE_SSLEAY
endif
LDFLAGS =
COMPILE = $(CC) $(INCLUDES) $(CFLAGS)
LINK = $(CC) $(CFLAGS) $(LDFLAGS) -o $@
@@ -52,13 +56,13 @@ OBJECTS = $(curl_OBJECTS)
all: curl.exe
curl.exe: $(curl_OBJECTS) $(curl_DEPENDENCIES)
-@erase $@
$(RM) $@
$(LINK) $(curl_OBJECTS) $(curl_LDADD)
$(STRIP) $@
# We don't have nroff normally under win32
# hugehelp.c: ../README.curl ../curl.1 mkhelp.pl
# -@erase hugehelp.c
# $(RM) hugehelp.c
# $(NROFF) -man ../curl.1 | $(PERL) mkhelp.pl ../README.curl > hugehelp.c
.c.o:
@@ -71,7 +75,7 @@ curl.exe: $(curl_OBJECTS) $(curl_DEPENDENCIES)
$(COMPILE) -c $<
clean:
-@erase $(curl_OBJECTS)
$(RM) $(curl_OBJECTS)
distrib: clean
-@erase $(curl_PROGRAMS)
$(RM) $(curl_PROGRAMS)


@@ -31,6 +31,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <ctype.h>
#include <errno.h>
#include <curl/curl.h>
@@ -144,6 +145,12 @@ char *strdup(char *str)
}
#endif
#ifdef WIN32
#include <direct.h>
#define F_OK 0
#define mkdir(x,y) (mkdir)(x)
#endif
#ifdef VMS
int vms_show = 0;
#define FAC_CURL 0xC01
@@ -355,6 +362,7 @@ static void help(void)
" --ciphers <list> What SSL ciphers to use (SSL)\n"
" --compressed Request a compressed response (using deflate).");
puts(" --connect-timeout <seconds> Maximum time allowed for connection\n"
" --create-dirs Create the necessary local directory hierarchy\n"
" --crlf Convert LF to CRLF in upload. Useful for MVS (OS/390)\n"
" -f/--fail Fail silently (no output at all) on errors (H)\n"
" -F/--form <name=content> Specify HTTP POST data (H)\n"
@@ -484,6 +492,7 @@ struct Configurable {
bool globoff;
bool use_httpget;
bool insecure_ok; /* set TRUE to allow insecure SSL connects */
bool create_dirs;
char *writeout; /* %-styled format string to output */
bool writeenv; /* write results to environment, if available */
@@ -523,6 +532,7 @@ struct Configurable {
static int parseconfig(const char *filename,
struct Configurable *config);
static char *my_get_line(FILE *fp);
static int create_dir_hierarchy(char *outfile);
static void GetStr(char **string,
char *value)
@@ -1069,6 +1079,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"z", "time-cond", TRUE},
{"Z", "max-redirs", TRUE},
{"#", "progress-bar",FALSE},
{"@", "create-dirs", FALSE},
};
if(('-' != flag[0]) ||
@@ -1704,6 +1715,10 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
config->maxredirs = atoi(nextarg);
break;
case '@':
config->create_dirs = TRUE;
break;
default: /* unknown flag */
return PARAM_OPTION_UNKNOWN;
}
@@ -1774,7 +1789,7 @@ static int parseconfig(const char *filename,
case '\n':
case '*':
case '\0':
free(line);
free(aline);
continue;
}
@@ -2149,6 +2164,7 @@ void dump(const char *text,
}
fputc('\n', stream); /* newline */
}
fflush(stream);
}
static
@@ -2245,6 +2261,41 @@ void free_config_fields(struct Configurable *config)
curl_slist_free_all(config->headers); /* */
}
#if defined(WIN32) && !defined(__CYGWIN32__)
/* Function to find CACert bundle on a Win32 platform using SearchPath.
* (SearchPath is defined in windows.h, which is #included into libcurl)
* (Use the ASCII version instead of the unicode one!)
* The order of the directories it searches is:
* 1. application's directory
* 2. current working directory
* 3. Windows System directory (e.g. C:\windows\system32)
* 4. Windows Directory (e.g. C:\windows)
* 5. all directories along %PATH%
*/
static void FindWin32CACert(struct Configurable *config,
const char *bundle_file)
{
curl_version_info_data *info;
info = curl_version_info(CURLVERSION_NOW);
/* only check for cert file if "we" support SSL */
if(info->features & CURL_VERSION_SSL) {
DWORD buflen;
char *ptr = NULL;
char *retval = (char *) malloc(sizeof (TCHAR) * (MAX_PATH + 1));
if (!retval)
return;
retval[0] = '\0';
buflen = SearchPathA(NULL, bundle_file, NULL, MAX_PATH+1, retval, &ptr);
if (buflen > 0) {
GetStr(&config->cacert, retval);
}
free(retval);
}
}
#endif
static int
operate(struct Configurable *config, int argc, char *argv[])
@@ -2280,9 +2331,9 @@ operate(struct Configurable *config, int argc, char *argv[])
int res = 0;
int i;
char *env;
#ifdef MALLOCDEBUG
/* this sends all memory debug messages to a logfile named memdump */
char *env;
env = curl_getenv("CURL_MEMDEBUG");
if(env) {
free(env);
@@ -2297,6 +2348,7 @@ operate(struct Configurable *config, int argc, char *argv[])
config->showerror=TRUE;
config->conf=CONF_DEFAULT;
config->use_httpget=FALSE;
config->create_dirs=FALSE;
if(argc>1 &&
(!strnequal("--", argv[1], 2) && (argv[1][0] == '-')) &&
@@ -2384,6 +2436,27 @@ operate(struct Configurable *config, int argc, char *argv[])
else
allocuseragent = TRUE;
/* On WIN32 (non-cygwin), we can't set the path to curl-ca-bundle.crt
* at compile time. So we look here for the file in two ways:
* 1: look at the environment variable CURL_CA_BUNDLE for a path
* 2: if #1 isn't found, use the windows API function SearchPath()
* to find it along the app's path (includes app's dir and CWD)
*
* We support the environment variable thing for non-Windows platforms
* too. Just for the sake of it.
*/
if (! config->cacert) {
env = curl_getenv("CURL_CA_BUNDLE");
if(env) {
GetStr(&config->cacert, env);
free(env);
}
}
#if defined(WIN32) && !defined(__CYGWIN32__)
if (! config->cacert)
FindWin32CACert(config, "curl-ca-bundle.crt");
#endif
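/* Usage sketch for the CURL_CA_BUNDLE lookup above (the path and the URL
   below are placeholders, not part of this change):

     CURL_CA_BUNDLE=/path/to/curl-ca-bundle.crt curl https://example.com/

   An explicitly given CA cert file still wins: the environment variable and
   the Win32 SearchPath fallback are only consulted when config->cacert is
   unset. */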
if (config->postfields) {
if (config->use_httpget) {
/* Use the postfields data for a http get */
@@ -2498,9 +2571,16 @@ operate(struct Configurable *config, int argc, char *argv[])
free(storefile);
}
/* Create the directory hierarchy, if it does not already exist, for a
multiple file output call */
if(config->create_dirs)
if (-1 == create_dir_hierarchy(outfile))
return CURLE_WRITE_ERROR;
if(config->resume_from_current) {
/* we're told to continue where we are now, then we get the size of
the file as it is now and open it for append instead */
/* We're told to continue from where we are now. Get the
size of the file as it is now and open it for append instead */
struct stat fileinfo;
@@ -2963,3 +3043,89 @@ static char *my_get_line(FILE *fp)
return retval;
}
/* Create the needed directory hierarchy recursively in order to save
multi-GETs in file output, ie:
curl "http://my.site/dir[1-5]/file[1-5].txt" -o "dir#1/file#2.txt"
should create all the dir* automagically
*/
static int create_dir_hierarchy(char *outfile)
{
char *tempdir;
char *tempdir2;
char *outdup;
char *dirbuildup;
int result=0;
outdup = strdup(outfile);
dirbuildup = malloc(sizeof(char) * strlen(outfile));
if(!dirbuildup)
return -1;
dirbuildup[0] = '\0';
tempdir = strtok(outdup, DIR_CHAR);
while (tempdir != NULL) {
tempdir2 = strtok(NULL, DIR_CHAR);
/* since strtok returns a token for the last word even
if not ending with DIR_CHAR, we need to prune it */
if (tempdir2 != NULL) {
if (strlen(dirbuildup) > 0)
sprintf(dirbuildup,"%s%s%s",dirbuildup, DIR_CHAR, tempdir);
else {
if (0 != strncmp(outdup, DIR_CHAR, 1))
sprintf(dirbuildup,"%s",tempdir);
else
sprintf(dirbuildup,"%s%s", DIR_CHAR, tempdir);
}
if (access(dirbuildup, F_OK) == -1) {
result = mkdir(dirbuildup,(mode_t)0000750);
if (-1 == result) {
switch (errno) {
#ifdef EACCES
case EACCES:
fprintf(stderr,"You don't have permission to create %s.\n",
dirbuildup);
break;
#endif
#ifdef ENAMETOOLONG
case ENAMETOOLONG:
fprintf(stderr,"The directory name %s is too long.\n",
dirbuildup);
break;
#endif
#ifdef EROFS
case EROFS:
fprintf(stderr,"%s resides on a read-only file system.\n",
dirbuildup);
break;
#endif
#ifdef ENOSPC
case ENOSPC:
fprintf(stderr,"No space left on the file system that will "
"contain the directory %s.\n", dirbuildup);
break;
#endif
#ifdef EDQUOT
case EDQUOT:
fprintf(stderr,"Cannot create directory %s because you "
"exceeded your quota.\n", dirbuildup);
break;
#endif
default :
fprintf(stderr,"Error creating directory %s.\n", dirbuildup);
break;
}
break; /* get out of loop */
}
}
}
tempdir = tempdir2;
}
free(dirbuildup);
free(outdup);
return result; /* 0 is fine, -1 is badness */
}
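As a usage sketch of the new option (reusing the placeholder URL from the
create_dir_hierarchy() comment above), --create-dirs is what makes curl build
the "dir#1" directories before writing the output files:

$ curl "http://my.site/dir[1-5]/file[1-5].txt" -o "dir#1/file#2.txt" --create-dirs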


@@ -25,9 +25,8 @@
#include <stdio.h>
#if !defined(WIN32) && defined(_WIN32)
/* This _might_ be a good Borland fix. Please report whether this works or
not! */
#if !defined(WIN32) && defined(__WIN32__)
/* Borland fix */
#define WIN32
#endif


@@ -1,3 +1,3 @@
#define CURL_NAME "curl"
#define CURL_VERSION "7.10"
#define CURL_VERSION "7.10.3"
#define CURL_ID CURL_NAME " " CURL_VERSION " (" OS ") "


@@ -1,2 +1,4 @@
Makefile
Makefile.in
memdump
log


@@ -38,6 +38,16 @@ reply is sent
</reply>
<client>
<server>
protocols as in 'http', 'ftp' etc. Give only one per line. Used (at this point)
for test cases 500+ to specify which servers the test case requires. In the
future all test cases should use this, as it makes us independent of the test
case number.
</server>
<tool>
Name of tool to use instead of "curl". This tool must be built and exist
in the libtest/ directory.
</tool>
<name>
test case description
</name>
@@ -48,6 +58,15 @@ accordingly. more about them elsewhere
Set 'option=no-output' to prevent the test script from slapping on the --output
argument that directs the output to a file. The --output is also not added if
the client/stdout section is used.
Available substitute variables include:
%HOSTIP - IP address of the host running this test
%HOSTPORT - Port number of the HTTP server
%HTTPSPORT - Port number of the HTTPS server
%FTPPORT - Port number of the FTP server
%FTPSPORT - Port number of the FTPS server
%SRCDIR - Full path to the source dir
%PWD - Current directory
</command>
<file name="log/filename">
this creates the named file with this content before the test case is run


@@ -2,13 +2,10 @@ EXTRA_DIST = ftpserver.pl httpserver.pl httpsserver.pl runtests.pl \
ftpsserver.pl stunnel.pm getpart.pm FILEFORMAT README \
stunnel.pem memanalyze.pl
SUBDIRS = data server
SUBDIRS = data server libtest
PERLFLAGS = -I$(srcdir)
all:
install:
curl:
@(cd ..; make)
@@ -20,9 +17,5 @@ quiet-test: server/sws
@cd data && exec $(MAKE) test
srcdir=$(srcdir) $(PERL) $(PERLFLAGS) $(srcdir)/runtests.pl -s -a
clean:
rm -rf log
find . -name "*~" | xargs rm -f
server/sws:
cd server; make sws


@@ -1,4 +1,4 @@
all:
iall:
install:
test:
@@ -16,4 +16,6 @@ test105 test114 test123 test19 test24 test302 test43 test31 \
test106 test115 test124 test190 test25 test303 test44 test38 \
test107 test116 test125 test2 test26 test33 test45 test126 \
test304 test39 test32 test128 test48 test306 \
test130 test131 test132 test133 test134 test135 test403 test305
test130 test131 test132 test133 test134 test135 test403 test305 \
test49 test50 test51 test52 test53 test54 test55 test56 \
test500 test501 test502 test503 test504 test136

tests/data/test136 Normal file

@@ -0,0 +1,29 @@
# Server-side
<reply>
<data>
0123456789abcdef
</data>
</reply>
# Client-side
<client>
<name>
FTP with user and no password
</name>
<command>
-u user: ftp://%HOSTIP:%FTPPORT/136
</command>
</test>
# Verify data after the test has been "shot"
<verify>
<protocol>
USER user
PASS
PWD
EPSV
TYPE I
SIZE 136
RETR 136
</protocol>
</verify>


@@ -1,6 +1,7 @@
#
# Server-side
<reply>
MOOOOO
</reply>
#
@@ -10,7 +11,7 @@
HTTPS GET over HTTP proxy fails
</name>
<command>
-k -U fake:user -x %HOSTIP:%HOSTPORT https://ssl.fakeurl-to.test/slash/302
-k -U fake:user -x %HOSTIP:%HOSTPORT https://bad.fakeurl-to.test/slash/302
</command>
</test>

tests/data/test49 Normal file

@@ -0,0 +1,64 @@
#
# Server-side
<reply>
<data>
HTTP/1.1 302 OK
Location: ../moo.html/490002
Date: Thu, 09 Nov 2010 14:49:00 GMT
Connection: close
</data>
<data2>
HTTP/1.1 200 OK
Location: this should be ignored
Date: Thu, 09 Nov 2010 14:49:00 GMT
Connection: close
body
</data2>
<datacheck>
HTTP/1.1 302 OK
Location: ../moo.html/490002
Date: Thu, 09 Nov 2010 14:49:00 GMT
Connection: close
HTTP/1.1 200 OK
Location: this should be ignored
Date: Thu, 09 Nov 2010 14:49:00 GMT
Connection: close
body
</datacheck>
</reply>
#
# Client-side
<client>
<name>
HTTP follow redirect with ../
</name>
<command>
http://%HOSTIP:%HOSTPORT/we/are/all/twits/49 -L
</command>
</client>
#
# Verify data after the test has been "shot"
<verify>
<strip>
^User-Agent:.*
</strip>
<protocol>
GET /we/are/all/twits/49 HTTP/1.1
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
GET /we/are/all/moo.html/490002 HTTP/1.1
User-Agent: curl/7.10 (i686-pc-linux-gnu) libcurl/7.10 OpenSSL/0.9.6c ipv6 zlib/1.1.3
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
</protocol>
</verify>

tests/data/test50 Normal file

@@ -0,0 +1,64 @@
#
# Server-side
<reply>
<data>
HTTP/1.1 302 OK
Location: ../../moo.html/500002
Date: Thu, 09 Nov 2010 14:50:00 GMT
Connection: close
</data>
<data2>
HTTP/1.1 200 OK
Location: this should be ignored
Date: Thu, 09 Nov 2010 14:50:00 GMT
Connection: close
body
</data2>
<datacheck>
HTTP/1.1 302 OK
Location: ../../moo.html/500002
Date: Thu, 09 Nov 2010 14:50:00 GMT
Connection: close
HTTP/1.1 200 OK
Location: this should be ignored
Date: Thu, 09 Nov 2010 14:50:00 GMT
Connection: close
body
</datacheck>
</reply>
#
# Client-side
<client>
<name>
HTTP follow redirect with ../../
</name>
<command>
http://%HOSTIP:%HOSTPORT/we/are/all/twits/50 -L
</command>
</client>
#
# Verify data after the test has been "shot"
<verify>
<strip>
^User-Agent:.*
</strip>
<protocol>
GET /we/are/all/twits/50 HTTP/1.1
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
GET /we/are/moo.html/500002 HTTP/1.1
User-Agent: curl/7.10 (i686-pc-linux-gnu) libcurl/7.10 OpenSSL/0.9.6c ipv6 zlib/1.1.3
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
</protocol>
</verify>

tests/data/test500 Normal file

@@ -0,0 +1,48 @@
#
# Server-side
<reply name="1">
<data>
HTTP/1.1 200 OK
Date: Thu, 09 Nov 2010 14:49:00 GMT
Server: test-server/fake
Last-Modified: Tue, 13 Jun 2000 12:10:00 GMT
ETag: "21025-dc7-39462498"
Accept-Ranges: bytes
Content-Length: 6
Connection: close
Content-Type: text/html
Funny-head: yesyes
<foo>
</data>
</reply>
# Client-side
<client>
<server>
http
</server>
# tool is what to use instead of 'curl'
<tool>
lib500
</tool>
<name>
simple libcurl HTTP GET tool
</name>
<command>
http://%HOSTIP:%HOSTPORT/500
</command>
</client>
#
# Verify data after the test has been "shot"
<verify>
<protocol>
GET /500 HTTP/1.1
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
</protocol>
</verify>

tests/data/test501 Normal file

@@ -0,0 +1,30 @@
#
# Server-side
<reply name="1">
</reply>
# Client-side
<client>
<server>
file
</server>
# tool is what to use instead of 'curl'
<tool>
lib501
</tool>
<name>
simple libcurl attempt operation without URL set
</name>
<command>
http://%HOSTIP:%HOSTPORT/501
</command>
</client>
#
# Verify data after the test has been "shot"
<verify>
<errorcode>
3
</errorcode>
</verify>

Some files were not shown because too many files have changed in this diff.