Compare commits

...

56 Commits

Author SHA1 Message Date
Daniel Stenberg
d80f87554c version 7.6-pre3 2001-01-19 09:38:48 +00:00
Daniel Stenberg
c1d37470f6 spelling error FPL should be GPL 2001-01-19 09:38:29 +00:00
Daniel Stenberg
9c695393b2 edited the portable code section 2001-01-19 09:37:39 +00:00
Daniel Stenberg
444024ea14 brought up-to-date and extended 2001-01-17 14:17:49 +00:00
Daniel Stenberg
afcd933b4c Transfer and file renaming 2001-01-17 14:17:26 +00:00
Daniel Stenberg
ae0a6835bd Transfer is now Curl_Transfer() and transfer.h is used instead of highlevel.h
and download.h
2001-01-17 13:23:01 +00:00
Daniel Stenberg
f2f11be8ba download.[ch] is renamed to transfer.[ch], highlevel.[ch] is history 2001-01-17 13:22:27 +00:00
Daniel Stenberg
e09eda9c7c download and highlevel are replaced with transfer 2001-01-17 13:19:01 +00:00
Daniel Stenberg
c6877a414e clarified that vcvars32.bat is not part of the curl package 2001-01-17 08:24:29 +00:00
Daniel Stenberg
a3eb91ffb1 shortened the "what is libcurl" text 2001-01-15 14:59:07 +00:00
Daniel Stenberg
12708473a6 Added a few more similar tools 2001-01-15 12:12:36 +00:00
Daniel Stenberg
9012f8cdb3 removed an old reference to previous license conditions 2001-01-15 10:28:41 +00:00
Daniel Stenberg
e26ee09586 4.2 and 4.3 were updated 2001-01-15 10:26:37 +00:00
Daniel Stenberg
7d09e51162 TELNET was missing in the basic initial description! Updated the language
in the thread-safe question 5.1 to be more clear.
2001-01-11 12:52:07 +00:00
Daniel Stenberg
18ebde6960 I successfully compiled and built curl for StrongARM NetBSD
Added other known platforms
Added the faked autoconf and autoheader trick posted about recently
2001-01-11 12:33:26 +00:00
Daniel Stenberg
b0c0e8d815 7.6-pre2 2001-01-11 09:29:30 +00:00
Daniel Stenberg
16502d7d15 -g added, no more space requirements between short options and their parameters 2001-01-11 08:02:07 +00:00
Daniel Stenberg
ce05deece8 Added -g, fixed so that short options worked again. My last "merged" fix did
screw a few things up.
2001-01-11 08:01:24 +00:00
Daniel Stenberg
b77e2528e7 made short options and their parameters possible to specify without space
separation
2001-01-10 23:47:08 +00:00
Daniel Stenberg
27f8cf6dfc made "short options" possible to specify -m20 as well as -m 200. 2001-01-10 23:42:03 +00:00
Daniel Stenberg
f5aa7f64bd added missing newlines to two infof() functions about document dates 2001-01-10 22:46:26 +00:00
Daniel Stenberg
44254c4945 getpass_r() fix for SCO (hopefully) 2001-01-10 11:42:00 +00:00
Daniel Stenberg
a9ea507c6a version 7.6-pre1 2001-01-09 12:25:32 +00:00
Daniel Stenberg
b137d5ec23 bugfix for when more -o than URLs is used 2001-01-09 12:25:14 +00:00
Daniel Stenberg
4792eee5d0 multiple URL adjustments 2001-01-09 12:24:49 +00:00
Daniel Stenberg
a84625eca6 Added two tests for multiple URLs (26 + 27) 2001-01-09 12:24:08 +00:00
Daniel Stenberg
19d3fd1185 Loic's fix that removes the % from the instructions in the bottom 2001-01-09 10:09:39 +00:00
Daniel Stenberg
a9be9bc7f5 Additional "docs" about 'make rpms' added by Loic 2001-01-09 10:09:13 +00:00
Daniel Stenberg
e8b99d21e5 Added the curl source-header 2001-01-09 07:41:04 +00:00
Daniel Stenberg
f6c57990ee removed FILES from the RPM 2001-01-08 23:35:45 +00:00
Daniel Stenberg
370d7f7527 Added source header. Made the prototype not being set if HAVE_GETPASS_R is
set, as those systems are likely to have it already set in a system header
and this prototype has proven to cause problems on SCO systems.
2001-01-08 22:30:30 +00:00
Daniel Stenberg
7d38692c4f Added Loic Dachary as a contributor after his major makefile session! 2001-01-08 22:29:31 +00:00
Daniel Stenberg
a997d60304 Loic Dachary's updates to get 'make distcheck' work, including running the
test suite
2001-01-08 22:18:30 +00:00
Daniel Stenberg
ff8fb8cdb0 krb4.c header file, no source header (yet) 2001-01-08 22:02:23 +00:00
Daniel Stenberg
b915ca68f9 'make distcheck' works now 2001-01-08 17:38:23 +00:00
Daniel Stenberg
703fc264f0 Had to add this to get 'make distcheck' to run! 2001-01-08 17:28:53 +00:00
Daniel Stenberg
19d92834ed corrected 2001-01-08 16:32:36 +00:00
Daniel Stenberg
9ade752fa7 distcheck fixes 2001-01-08 16:31:29 +00:00
Daniel Stenberg
e8a5f3026f Added mprintf #include 2001-01-08 16:22:55 +00:00
Daniel Stenberg
2cac4a9c72 better cleanup when exiting due to bad usage 2001-01-08 15:02:58 +00:00
Daniel Stenberg
39e939a507 corrected the separator when using URL globbing 2001-01-08 14:48:34 +00:00
Daniel Stenberg
803005892c mostly a dummy 2001-01-08 14:36:34 +00:00
Daniel Stenberg
08cfdf909e use .spec.in files instead of plain .spec files 2001-01-08 13:42:18 +00:00
Daniel Stenberg
434ce48016 removed multiple URL, we do that now! 2001-01-08 13:40:26 +00:00
Daniel Stenberg
10051e6916 generated file 2001-01-08 13:39:49 +00:00
Daniel Stenberg
d54cdf294b adjusted to work with automake 'make dist' 2001-01-08 13:39:21 +00:00
Daniel Stenberg
2e342d5d9b we're now using automake to build archives, this file is obsolete 2001-01-08 12:58:27 +00:00
Daniel Stenberg
fe84071e80 adjusted to use 'make dist' when building the package 2001-01-08 12:57:38 +00:00
Daniel Stenberg
044ca343ad Loic Dachary's makefile/dist/rpm fixes 2001-01-08 10:00:14 +00:00
Daniel Stenberg
f59ea9adb3 krb4 fix, big symbol renaming action, multiple URL support in the client 2001-01-08 07:45:43 +00:00
Daniel Stenberg
0cec4ba6bf generated 2001-01-08 07:42:35 +00:00
Daniel Stenberg
14ca732a8f Multiple URL support added 2001-01-08 07:37:44 +00:00
Daniel Stenberg
53c27c7722 generated file, don't CVS it 2001-01-08 07:37:13 +00:00
Daniel Stenberg
c2f5b71dc9 multiple uses of -d were wrongly documented 2001-01-05 13:44:53 +00:00
Daniel Stenberg
6403257886 renamed Curl_ to curl_ for the printf() prefixes 2001-01-05 12:19:42 +00:00
Daniel Stenberg
4031104404 Internal symbols that aren't static are now prefixed with 'Curl_' 2001-01-05 10:11:41 +00:00
100 changed files with 1833 additions and 4199 deletions

51
CHANGES

@@ -6,6 +6,57 @@
History of Changes
Daniel (17 January 2001)
- Made the two former files lib/download.c and lib/highlevel.c become the new
lib/transfer.c which makes more sense. I also did the rename from Transfer()
to Curl_Transfer() in the other source files that use the transfer function
in the spirit of using Curl_ prefix for library-scoped global symbols.
Daniel (11 January 2001)
- Added -g/--globoff that switches OFF the URL globbing and thus enables {}[]
letters to be part of the URL. Do note that RFC2396 section 2.4.3 explicitly
mentions that these letters must be escaped. This was posted as a feature
request by Jorge Gutierrez and as a bug by Terry.
- Short options to curl that require parameters can now be specified without
a space between the option and its parameter. -ofile works as well as
-o file, and -m20 is equal to -m 20. Do note that this goes for single-letter
options only; verbose --long-style options must still be separated from their
parameters by a space.
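As a quick illustration (the host name is only an example), the first two
command lines below are now equivalent, and the third shows -g passing the
bracket letters through untouched:

  curl -m20 -ofile.html http://curl.haxx.se/
  curl -m 20 -o file.html http://curl.haxx.se/
  curl -g 'http://www.site.com/weirdname[].html'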
Daniel (8 January 2001)
- Francis Dagenais reported that the SCO compiler still fails when compiling
curl due to that getpass_r() prototype. I've now put it around #ifndef
HAVE_GETPASS_R in an attempt to please the SCO systems.
- Made some minor corrections to get the client to cleanup properly and I made
the separator work again when getting multiple globbed URLs to stdout.
- Worked with Loic Dachary to get the make dist and make distcheck work
correctly. The 'maketgz' script is now using the automake generated 'make
dist' when creating release archives. Loic successfully made 'make rpms'
automatically build RPMs!
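In practice, assuming ~/.rpmmacros is prepared as described in the packaging
notes further down, the new targets are used roughly like this:

  make distcheck   # build a release archive and test-build it, test suite included
  make rpms        # build both the curl and curl-ssl source and binary RPMs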
Loic Dachary (6 January 2001)
- Automated generation of rpm packages, no need to be root.
- make distcheck generates a proper distribution (EXTRA_DIST
in all Makefile.am modified to match FILES).
Daniel (5 January 2001)
- Huge client-side hack: now multiple URLs are supported. Any number of URLs
can be specified on the command line, and they'll all be downloaded. There
must be a corresponding -o or -O for each URL or the data will be written to
stdout (see the example at the end of this entry). This needs more testing;
time to release a 7.6-pre package.
- The krb4 support was broken in the release. Fixed now.
- Huge internal symbol rename operation. All non-static but still lib-internal
symbols should now be prefixed with 'Curl_' to prevent collisions with other
libs. All public symbols should be prefixed with 'curl_' and the rest should
be static and thus invisible to the outside world. I updated the INTERNALS
document to say this as well.
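To make the multiple-URL support described above concrete, here is an example
invocation (the output file names and the second URL path are made up). Each
URL gets its own -o/-O; a URL without one is written to stdout:

  curl -o front.html http://curl.haxx.se/ -O http://www.site.com/manual.html

The first page ends up in front.html and the second in manual.html, its remote
file name.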
Version 7.5.2
Daniel (4 January 2001)


@@ -14,7 +14,9 @@ perl/ is a subdirectory with various perl scripts
To build after having extracted everything from CVS, do this:
% automake
% autoconf
% ./configure
% make
automake
aclocal
autoheader
autoconf
./configure
make

86
FILES

@@ -1,86 +0,0 @@
CHANGES
FILES
LEGAL
MPL-1.1.txt
MITX.txt
README
docs/BUGS
docs/CONTRIBUTE
docs/FAQ
docs/FEATURES
docs/INSTALL
docs/INTERNALS
docs/MANUAL
docs/README.win32
docs/LIBCURL
docs/RESOURCES
docs/TODO
docs/curl.1
docs/Makefile.in
docs/Makefile.am
docs/TheArtOfHttpScripting
docs/*.3
docs/examples/README
docs/examples/*.c
maketgz
Makefile.in
Makefile.am
acconfig.h
acinclude.m4
aclocal.m4
config.guess
config.h.in
config-win32.h
config.sub
configure
configure.in
install-sh
missing
mkinstalldirs
reconf
stamp-h.in
ltconfig
ltmain.sh
src/config-win32.h
src/hugehelp.c
src/main.c
src/setup.h
src/urlglob.c
src/urlglob.h
src/version.h
src/writeout.c
src/writeout.h
src/*.in
src/*.am
src/mkhelp.pl
src/Makefile.vc6
src/Makefile.b32
src/*m32
lib/getdate.y
lib/*.[ch]
lib/*in
lib/*am
lib/Makefile.vc6
lib/*m32
lib/Makefile.b32
lib/Makefile.b32.resp
lib/libcurl.def
include/README
include/Makefile.in
include/Makefile.am
include/curl/*.h
include/curl/Makefile.in
include/curl/Makefile.am
packages/Linux/RPM/curl-ssl.spec
packages/Linux/RPM/curl.spec
packages/Linux/RPM/make_curl_rpm
packages/Linux/RPM/README
packages/Win32/README
packages/README
tests/Makefile.am
tests/Makefile.in
tests/runtests.pl
tests/README
tests/httpserver.pl
tests/ftpserver.pl
tests/data/*.txt


@@ -4,9 +4,44 @@
AUTOMAKE_OPTIONS = foreign no-dependencies
EXTRA_DIST = curl.spec curl-ssl.spec
EXTRA_DIST = \
CHANGES LEGAL maketgz MITX.txt MPL-1.1.txt \
config-win32.h reconf packages/README Makefile.dist
SUBDIRS = docs lib src include tests
SUBDIRS = docs lib src include tests packages
# create a root makefile in the distribution:
dist-hook:
cp $(srcdir)/Makefile.dist $(distdir)/Makefile
check: test
test:
@(cd tests; make quiet-test)
#
# Build source and binary rpms. For rpm-3.0 and above, the ~/.rpmmacros
# must contain the following line:
# %_topdir /home/loic/local/rpm
# and that /home/loic/local/rpm contains the directory SOURCES, BUILD etc.
#
# cd /home/loic/local/rpm ; mkdir -p SOURCES BUILD RPMS/i386 SPECS SRPMS
#
# If additional configure flags are needed to build the package, add the
# following in ~/.rpmmacros
# %configure CFLAGS="%{optflags}" ./configure %{_target_platform} --prefix=%{_prefix} ${AM_CONFIGFLAGS}
# and run make rpm in the following way:
# AM_CONFIGFLAGS='--with-uri=/home/users/loic/local/RedHat-6.2' make rpm
#
rpms:
$(MAKE) RPMDIST=curl rpm
$(MAKE) RPMDIST=curl-ssl rpm
rpm:
RPM_TOPDIR=`rpm --showrc | $(PERL) -n -e 'print if(s/.*_topdir\s+(.*)/$$1/)'` ; \
cp $(srcdir)/packages/Linux/RPM/$(RPMDIST).spec $$RPM_TOPDIR/SPECS ; \
cp $(PACKAGE)-$(VERSION).tar.gz $$RPM_TOPDIR/SOURCES ; \
rpm -ba --clean --rmsource $$RPM_TOPDIR/SPECS/$(RPMDIST).spec ; \
mv $$RPM_TOPDIR/RPMS/i386/$(RPMDIST)-*.rpm . ; \
mv $$RPM_TOPDIR/SRPMS/$(RPMDIST)-*.src.rpm .

616
aclocal.m4 vendored

@@ -1,616 +0,0 @@
dnl aclocal.m4 generated automatically by aclocal 1.4
dnl Copyright (C) 1994, 1995-8, 1999 Free Software Foundation, Inc.
dnl This file is free software; the Free Software Foundation
dnl gives unlimited permission to copy and/or distribute it,
dnl with or without modifications, as long as this notice is preserved.
dnl This program is distributed in the hope that it will be useful,
dnl but WITHOUT ANY WARRANTY, to the extent permitted by law; without
dnl even the implied warranty of MERCHANTABILITY or FITNESS FOR A
dnl PARTICULAR PURPOSE.
#serial 12
dnl By default, many hosts won't let programs access large files;
dnl one must use special compiler options to get large-file access to work.
dnl For more details about this brain damage please see:
dnl http://www.sas.com/standards/large.file/x_open.20Mar96.html
dnl Written by Paul Eggert <eggert@twinsun.com>.
dnl Internal subroutine of AC_SYS_LARGEFILE.
dnl AC_SYS_LARGEFILE_TEST_INCLUDES
AC_DEFUN(AC_SYS_LARGEFILE_TEST_INCLUDES,
[[#include <sys/types.h>
int a[(off_t) 9223372036854775807 == 9223372036854775807 ? 1 : -1];
]])
dnl Internal subroutine of AC_SYS_LARGEFILE.
dnl AC_SYS_LARGEFILE_MACRO_VALUE(C-MACRO, VALUE, CACHE-VAR, COMMENT, INCLUDES, FUNCTION-BODY)
AC_DEFUN(AC_SYS_LARGEFILE_MACRO_VALUE,
[AC_CACHE_CHECK([for $1 value needed for large files], $3,
[$3=no
AC_TRY_COMPILE(AC_SYS_LARGEFILE_TEST_INCLUDES
$5
,
[$6],
,
[AC_TRY_COMPILE([#define $1 $2]
AC_SYS_LARGEFILE_TEST_INCLUDES
$5
,
[$6],
[$3=$2])])])
if test "[$]$3" != no; then
AC_DEFINE_UNQUOTED([$1], [$]$3, [$4])
fi])
AC_DEFUN(AC_SYS_LARGEFILE,
[AC_ARG_ENABLE(largefile,
[ --disable-largefile omit support for large files])
if test "$enable_largefile" != no; then
AC_CACHE_CHECK([for special C compiler options needed for large files],
ac_cv_sys_largefile_CC,
[ac_cv_sys_largefile_CC=no
if test "$GCC" != yes; then
# IRIX 6.2 and later do not support large files by default,
# so use the C compiler's -n32 option if that helps.
AC_TRY_COMPILE(AC_SYS_LARGEFILE_TEST_INCLUDES, , ,
[ac_save_CC="$CC"
CC="$CC -n32"
AC_TRY_COMPILE(AC_SYS_LARGEFILE_TEST_INCLUDES, ,
ac_cv_sys_largefile_CC=' -n32')
CC="$ac_save_CC"])
fi])
if test "$ac_cv_sys_largefile_CC" != no; then
CC="$CC$ac_cv_sys_largefile_CC"
fi
AC_SYS_LARGEFILE_MACRO_VALUE(_FILE_OFFSET_BITS, 64,
ac_cv_sys_file_offset_bits,
[Number of bits in a file offset, on hosts where this is settable.])
AC_SYS_LARGEFILE_MACRO_VALUE(_LARGEFILE_SOURCE, 1,
ac_cv_sys_largefile_source,
[Define to make ftello visible on some hosts (e.g. HP-UX 10.20).],
[#include <stdio.h>], [return !ftello;])
AC_SYS_LARGEFILE_MACRO_VALUE(_LARGE_FILES, 1,
ac_cv_sys_large_files,
[Define for large files, on AIX-style hosts.])
dnl lftp does not need ftello, and _XOPEN_SOURCE=500 makes resolv.h fail.
dnl AC_SYS_LARGEFILE_MACRO_VALUE(_XOPEN_SOURCE, 500,
dnl ac_cv_sys_xopen_source,
dnl [Define to make ftello visible on some hosts (e.g. glibc 2.1.3).],
dnl [#include <stdio.h>], [return !ftello;])
fi
])
# Like AC_CONFIG_HEADER, but automatically create stamp file.
AC_DEFUN(AM_CONFIG_HEADER,
[AC_PREREQ([2.12])
AC_CONFIG_HEADER([$1])
dnl When config.status generates a header, we must update the stamp-h file.
dnl This file resides in the same directory as the config header
dnl that is generated. We must strip everything past the first ":",
dnl and everything past the last "/".
AC_OUTPUT_COMMANDS(changequote(<<,>>)dnl
ifelse(patsubst(<<$1>>, <<[^ ]>>, <<>>), <<>>,
<<test -z "<<$>>CONFIG_HEADERS" || echo timestamp > patsubst(<<$1>>, <<^\([^:]*/\)?.*>>, <<\1>>)stamp-h<<>>dnl>>,
<<am_indx=1
for am_file in <<$1>>; do
case " <<$>>CONFIG_HEADERS " in
*" <<$>>am_file "*<<)>>
echo timestamp > `echo <<$>>am_file | sed -e 's%:.*%%' -e 's%[^/]*$%%'`stamp-h$am_indx
;;
esac
am_indx=`expr "<<$>>am_indx" + 1`
done<<>>dnl>>)
changequote([,]))])
# Do all the work for Automake. This macro actually does too much --
# some checks are only needed if your package does certain things.
# But this isn't really a big deal.
# serial 1
dnl Usage:
dnl AM_INIT_AUTOMAKE(package,version, [no-define])
AC_DEFUN(AM_INIT_AUTOMAKE,
[AC_REQUIRE([AC_PROG_INSTALL])
PACKAGE=[$1]
AC_SUBST(PACKAGE)
VERSION=[$2]
AC_SUBST(VERSION)
dnl test to see if srcdir already configured
if test "`cd $srcdir && pwd`" != "`pwd`" && test -f $srcdir/config.status; then
AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
fi
ifelse([$3],,
AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package])
AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package]))
AC_REQUIRE([AM_SANITY_CHECK])
AC_REQUIRE([AC_ARG_PROGRAM])
dnl FIXME This is truly gross.
missing_dir=`cd $ac_aux_dir && pwd`
AM_MISSING_PROG(ACLOCAL, aclocal, $missing_dir)
AM_MISSING_PROG(AUTOCONF, autoconf, $missing_dir)
AM_MISSING_PROG(AUTOMAKE, automake, $missing_dir)
AM_MISSING_PROG(AUTOHEADER, autoheader, $missing_dir)
AM_MISSING_PROG(MAKEINFO, makeinfo, $missing_dir)
AC_REQUIRE([AC_PROG_MAKE_SET])])
#
# Check to make sure that the build environment is sane.
#
AC_DEFUN(AM_SANITY_CHECK,
[AC_MSG_CHECKING([whether build environment is sane])
# Just in case
sleep 1
echo timestamp > conftestfile
# Do `set' in a subshell so we don't clobber the current shell's
# arguments. Must try -L first in case configure is actually a
# symlink; some systems play weird games with the mod time of symlinks
# (eg FreeBSD returns the mod time of the symlink's containing
# directory).
if (
set X `ls -Lt $srcdir/configure conftestfile 2> /dev/null`
if test "[$]*" = "X"; then
# -L didn't work.
set X `ls -t $srcdir/configure conftestfile`
fi
if test "[$]*" != "X $srcdir/configure conftestfile" \
&& test "[$]*" != "X conftestfile $srcdir/configure"; then
# If neither matched, then we have a broken ls. This can happen
# if, for instance, CONFIG_SHELL is bash and it inherits a
# broken ls alias from the environment. This has actually
# happened. Such a system could not be considered "sane".
AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken
alias in your environment])
fi
test "[$]2" = conftestfile
)
then
# Ok.
:
else
AC_MSG_ERROR([newly created file is older than distributed files!
Check your system clock])
fi
rm -f conftest*
AC_MSG_RESULT(yes)])
dnl AM_MISSING_PROG(NAME, PROGRAM, DIRECTORY)
dnl The program must properly implement --version.
AC_DEFUN(AM_MISSING_PROG,
[AC_MSG_CHECKING(for working $2)
# Run test in a subshell; some versions of sh will print an error if
# an executable is not found, even if stderr is redirected.
# Redirect stdin to placate older versions of autoconf. Sigh.
if ($2 --version) < /dev/null > /dev/null 2>&1; then
$1=$2
AC_MSG_RESULT(found)
else
$1="$3/missing $2"
AC_MSG_RESULT(missing)
fi
AC_SUBST($1)])
# serial 40 AC_PROG_LIBTOOL
AC_DEFUN(AC_PROG_LIBTOOL,
[AC_REQUIRE([AC_LIBTOOL_SETUP])dnl
# Save cache, so that ltconfig can load it
AC_CACHE_SAVE
# Actually configure libtool. ac_aux_dir is where install-sh is found.
CC="$CC" CFLAGS="$CFLAGS" CPPFLAGS="$CPPFLAGS" \
LD="$LD" LDFLAGS="$LDFLAGS" LIBS="$LIBS" \
LN_S="$LN_S" NM="$NM" RANLIB="$RANLIB" \
DLLTOOL="$DLLTOOL" AS="$AS" OBJDUMP="$OBJDUMP" \
${CONFIG_SHELL-/bin/sh} $ac_aux_dir/ltconfig --no-reexec \
$libtool_flags --no-verify $ac_aux_dir/ltmain.sh $lt_target \
|| AC_MSG_ERROR([libtool configure failed])
# Reload cache, that may have been modified by ltconfig
AC_CACHE_LOAD
# This can be used to rebuild libtool when needed
LIBTOOL_DEPS="$ac_aux_dir/ltconfig $ac_aux_dir/ltmain.sh"
# Always use our own libtool.
LIBTOOL='$(SHELL) $(top_builddir)/libtool'
AC_SUBST(LIBTOOL)dnl
# Redirect the config.log output again, so that the ltconfig log is not
# clobbered by the next message.
exec 5>>./config.log
])
AC_DEFUN(AC_LIBTOOL_SETUP,
[AC_PREREQ(2.13)dnl
AC_REQUIRE([AC_ENABLE_SHARED])dnl
AC_REQUIRE([AC_ENABLE_STATIC])dnl
AC_REQUIRE([AC_ENABLE_FAST_INSTALL])dnl
AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
AC_REQUIRE([AC_PROG_RANLIB])dnl
AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_PROG_LD])dnl
AC_REQUIRE([AC_PROG_NM])dnl
AC_REQUIRE([AC_PROG_LN_S])dnl
dnl
case "$target" in
NONE) lt_target="$host" ;;
*) lt_target="$target" ;;
esac
# Check for any special flags to pass to ltconfig.
libtool_flags="--cache-file=$cache_file"
test "$enable_shared" = no && libtool_flags="$libtool_flags --disable-shared"
test "$enable_static" = no && libtool_flags="$libtool_flags --disable-static"
test "$enable_fast_install" = no && libtool_flags="$libtool_flags --disable-fast-install"
test "$ac_cv_prog_gcc" = yes && libtool_flags="$libtool_flags --with-gcc"
test "$ac_cv_prog_gnu_ld" = yes && libtool_flags="$libtool_flags --with-gnu-ld"
ifdef([AC_PROVIDE_AC_LIBTOOL_DLOPEN],
[libtool_flags="$libtool_flags --enable-dlopen"])
ifdef([AC_PROVIDE_AC_LIBTOOL_WIN32_DLL],
[libtool_flags="$libtool_flags --enable-win32-dll"])
AC_ARG_ENABLE(libtool-lock,
[ --disable-libtool-lock avoid locking (might break parallel builds)])
test "x$enable_libtool_lock" = xno && libtool_flags="$libtool_flags --disable-lock"
test x"$silent" = xyes && libtool_flags="$libtool_flags --silent"
# Some flags need to be propagated to the compiler or linker for good
# libtool support.
case "$lt_target" in
*-*-irix6*)
# Find out which ABI we are using.
echo '[#]line __oline__ "configure"' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case "`/usr/bin/file conftest.o`" in
*32-bit*)
LD="${LD-ld} -32"
;;
*N32*)
LD="${LD-ld} -n32"
;;
*64-bit*)
LD="${LD-ld} -64"
;;
esac
fi
rm -rf conftest*
;;
*-*-sco3.2v5*)
# On SCO OpenServer 5, we need -belf to get full-featured binaries.
SAVE_CFLAGS="$CFLAGS"
CFLAGS="$CFLAGS -belf"
AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf,
[AC_TRY_LINK([],[],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no])])
if test x"$lt_cv_cc_needs_belf" != x"yes"; then
# this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
CFLAGS="$SAVE_CFLAGS"
fi
;;
ifdef([AC_PROVIDE_AC_LIBTOOL_WIN32_DLL],
[*-*-cygwin* | *-*-mingw*)
AC_CHECK_TOOL(DLLTOOL, dlltool, false)
AC_CHECK_TOOL(AS, as, false)
AC_CHECK_TOOL(OBJDUMP, objdump, false)
;;
])
esac
])
# AC_LIBTOOL_DLOPEN - enable checks for dlopen support
AC_DEFUN(AC_LIBTOOL_DLOPEN, [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])])
# AC_LIBTOOL_WIN32_DLL - declare package support for building win32 dll's
AC_DEFUN(AC_LIBTOOL_WIN32_DLL, [AC_BEFORE([$0], [AC_LIBTOOL_SETUP])])
# AC_ENABLE_SHARED - implement the --enable-shared flag
# Usage: AC_ENABLE_SHARED[(DEFAULT)]
# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to
# `yes'.
AC_DEFUN(AC_ENABLE_SHARED, [dnl
define([AC_ENABLE_SHARED_DEFAULT], ifelse($1, no, no, yes))dnl
AC_ARG_ENABLE(shared,
changequote(<<, >>)dnl
<< --enable-shared[=PKGS] build shared libraries [default=>>AC_ENABLE_SHARED_DEFAULT],
changequote([, ])dnl
[p=${PACKAGE-default}
case "$enableval" in
yes) enable_shared=yes ;;
no) enable_shared=no ;;
*)
enable_shared=no
# Look at the argument we got. We use all the common list separators.
IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:,"
for pkg in $enableval; do
if test "X$pkg" = "X$p"; then
enable_shared=yes
fi
done
IFS="$ac_save_ifs"
;;
esac],
enable_shared=AC_ENABLE_SHARED_DEFAULT)dnl
])
# AC_DISABLE_SHARED - set the default shared flag to --disable-shared
AC_DEFUN(AC_DISABLE_SHARED, [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
AC_ENABLE_SHARED(no)])
# AC_ENABLE_STATIC - implement the --enable-static flag
# Usage: AC_ENABLE_STATIC[(DEFAULT)]
# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to
# `yes'.
AC_DEFUN(AC_ENABLE_STATIC, [dnl
define([AC_ENABLE_STATIC_DEFAULT], ifelse($1, no, no, yes))dnl
AC_ARG_ENABLE(static,
changequote(<<, >>)dnl
<< --enable-static[=PKGS] build static libraries [default=>>AC_ENABLE_STATIC_DEFAULT],
changequote([, ])dnl
[p=${PACKAGE-default}
case "$enableval" in
yes) enable_static=yes ;;
no) enable_static=no ;;
*)
enable_static=no
# Look at the argument we got. We use all the common list separators.
IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:,"
for pkg in $enableval; do
if test "X$pkg" = "X$p"; then
enable_static=yes
fi
done
IFS="$ac_save_ifs"
;;
esac],
enable_static=AC_ENABLE_STATIC_DEFAULT)dnl
])
# AC_DISABLE_STATIC - set the default static flag to --disable-static
AC_DEFUN(AC_DISABLE_STATIC, [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
AC_ENABLE_STATIC(no)])
# AC_ENABLE_FAST_INSTALL - implement the --enable-fast-install flag
# Usage: AC_ENABLE_FAST_INSTALL[(DEFAULT)]
# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to
# `yes'.
AC_DEFUN(AC_ENABLE_FAST_INSTALL, [dnl
define([AC_ENABLE_FAST_INSTALL_DEFAULT], ifelse($1, no, no, yes))dnl
AC_ARG_ENABLE(fast-install,
changequote(<<, >>)dnl
<< --enable-fast-install[=PKGS] optimize for fast installation [default=>>AC_ENABLE_FAST_INSTALL_DEFAULT],
changequote([, ])dnl
[p=${PACKAGE-default}
case "$enableval" in
yes) enable_fast_install=yes ;;
no) enable_fast_install=no ;;
*)
enable_fast_install=no
# Look at the argument we got. We use all the common list separators.
IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:,"
for pkg in $enableval; do
if test "X$pkg" = "X$p"; then
enable_fast_install=yes
fi
done
IFS="$ac_save_ifs"
;;
esac],
enable_fast_install=AC_ENABLE_FAST_INSTALL_DEFAULT)dnl
])
# AC_ENABLE_FAST_INSTALL - set the default to --disable-fast-install
AC_DEFUN(AC_DISABLE_FAST_INSTALL, [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
AC_ENABLE_FAST_INSTALL(no)])
# AC_PROG_LD - find the path to the GNU or non-GNU linker
AC_DEFUN(AC_PROG_LD,
[AC_ARG_WITH(gnu-ld,
[ --with-gnu-ld assume the C compiler uses GNU ld [default=no]],
test "$withval" = no || with_gnu_ld=yes, with_gnu_ld=no)
AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
ac_prog=ld
if test "$ac_cv_prog_gcc" = yes; then
# Check if gcc -print-prog-name=ld gives a path.
AC_MSG_CHECKING([for ld used by GCC])
ac_prog=`($CC -print-prog-name=ld) 2>&5`
case "$ac_prog" in
# Accept absolute paths.
changequote(,)dnl
[\\/]* | [A-Za-z]:[\\/]*)
re_direlt='/[^/][^/]*/\.\./'
changequote([,])dnl
# Canonicalize the path of ld
ac_prog=`echo $ac_prog| sed 's%\\\\%/%g'`
while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do
ac_prog=`echo $ac_prog| sed "s%$re_direlt%/%"`
done
test -z "$LD" && LD="$ac_prog"
;;
"")
# If it fails, then pretend we aren't using GCC.
ac_prog=ld
;;
*)
# If it is relative, then search for the first ld in PATH.
with_gnu_ld=unknown
;;
esac
elif test "$with_gnu_ld" = yes; then
AC_MSG_CHECKING([for GNU ld])
else
AC_MSG_CHECKING([for non-GNU ld])
fi
AC_CACHE_VAL(ac_cv_path_LD,
[if test -z "$LD"; then
IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}${PATH_SEPARATOR-:}"
for ac_dir in $PATH; do
test -z "$ac_dir" && ac_dir=.
if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
ac_cv_path_LD="$ac_dir/$ac_prog"
# Check to see if the program is GNU ld. I'd rather use --version,
# but apparently some GNU ld's only accept -v.
# Break only if it was the GNU/non-GNU ld that we prefer.
if "$ac_cv_path_LD" -v 2>&1 < /dev/null | egrep '(GNU|with BFD)' > /dev/null; then
test "$with_gnu_ld" != no && break
else
test "$with_gnu_ld" != yes && break
fi
fi
done
IFS="$ac_save_ifs"
else
ac_cv_path_LD="$LD" # Let the user override the test with a path.
fi])
LD="$ac_cv_path_LD"
if test -n "$LD"; then
AC_MSG_RESULT($LD)
else
AC_MSG_RESULT(no)
fi
test -z "$LD" && AC_MSG_ERROR([no acceptable ld found in \$PATH])
AC_PROG_LD_GNU
])
AC_DEFUN(AC_PROG_LD_GNU,
[AC_CACHE_CHECK([if the linker ($LD) is GNU ld], ac_cv_prog_gnu_ld,
[# I'd rather use --version here, but apparently some GNU ld's only accept -v.
if $LD -v 2>&1 </dev/null | egrep '(GNU|with BFD)' 1>&5; then
ac_cv_prog_gnu_ld=yes
else
ac_cv_prog_gnu_ld=no
fi])
])
# AC_PROG_NM - find the path to a BSD-compatible name lister
AC_DEFUN(AC_PROG_NM,
[AC_MSG_CHECKING([for BSD-compatible nm])
AC_CACHE_VAL(ac_cv_path_NM,
[if test -n "$NM"; then
# Let the user override the test.
ac_cv_path_NM="$NM"
else
IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}${PATH_SEPARATOR-:}"
for ac_dir in $PATH /usr/ccs/bin /usr/ucb /bin; do
test -z "$ac_dir" && ac_dir=.
if test -f $ac_dir/nm || test -f $ac_dir/nm$ac_exeext ; then
# Check to see if the nm accepts a BSD-compat flag.
# Adding the `sed 1q' prevents false positives on HP-UX, which says:
# nm: unknown option "B" ignored
if ($ac_dir/nm -B /dev/null 2>&1 | sed '1q'; exit 0) | egrep /dev/null >/dev/null; then
ac_cv_path_NM="$ac_dir/nm -B"
break
elif ($ac_dir/nm -p /dev/null 2>&1 | sed '1q'; exit 0) | egrep /dev/null >/dev/null; then
ac_cv_path_NM="$ac_dir/nm -p"
break
else
ac_cv_path_NM=${ac_cv_path_NM="$ac_dir/nm"} # keep the first match, but
continue # so that we can try to find one that supports BSD flags
fi
fi
done
IFS="$ac_save_ifs"
test -z "$ac_cv_path_NM" && ac_cv_path_NM=nm
fi])
NM="$ac_cv_path_NM"
AC_MSG_RESULT([$NM])
])
# AC_CHECK_LIBM - check for math library
AC_DEFUN(AC_CHECK_LIBM,
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
LIBM=
case "$lt_target" in
*-*-beos* | *-*-cygwin*)
# These system don't have libm
;;
*-ncr-sysv4.3*)
AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw")
AC_CHECK_LIB(m, main, LIBM="$LIBM -lm")
;;
*)
AC_CHECK_LIB(m, main, LIBM="-lm")
;;
esac
])
# AC_LIBLTDL_CONVENIENCE[(dir)] - sets LIBLTDL to the link flags for
# the libltdl convenience library and INCLTDL to the include flags for
# the libltdl header and adds --enable-ltdl-convenience to the
# configure arguments. Note that LIBLTDL and INCLTDL are not
# AC_SUBSTed, nor is AC_CONFIG_SUBDIRS called. If DIR is not
# provided, it is assumed to be `libltdl'. LIBLTDL will be prefixed
# with '${top_builddir}/' and INCLTDL will be prefixed with
# '${top_srcdir}/' (note the single quotes!). If your package is not
# flat and you're not using automake, define top_builddir and
# top_srcdir appropriately in the Makefiles.
AC_DEFUN(AC_LIBLTDL_CONVENIENCE, [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
case "$enable_ltdl_convenience" in
no) AC_MSG_ERROR([this package needs a convenience libltdl]) ;;
"") enable_ltdl_convenience=yes
ac_configure_args="$ac_configure_args --enable-ltdl-convenience" ;;
esac
LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdlc.la
INCLTDL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl'])
])
# AC_LIBLTDL_INSTALLABLE[(dir)] - sets LIBLTDL to the link flags for
# the libltdl installable library and INCLTDL to the include flags for
# the libltdl header and adds --enable-ltdl-install to the configure
# arguments. Note that LIBLTDL and INCLTDL are not AC_SUBSTed, nor is
# AC_CONFIG_SUBDIRS called. If DIR is not provided and an installed
# libltdl is not found, it is assumed to be `libltdl'. LIBLTDL will
# be prefixed with '${top_builddir}/' and INCLTDL will be prefixed
# with '${top_srcdir}/' (note the single quotes!). If your package is
# not flat and you're not using automake, define top_builddir and
# top_srcdir appropriately in the Makefiles.
# In the future, this macro may have to be called after AC_PROG_LIBTOOL.
AC_DEFUN(AC_LIBLTDL_INSTALLABLE, [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl
AC_CHECK_LIB(ltdl, main,
[test x"$enable_ltdl_install" != xyes && enable_ltdl_install=no],
[if test x"$enable_ltdl_install" = xno; then
AC_MSG_WARN([libltdl not installed, but installation disabled])
else
enable_ltdl_install=yes
fi
])
if test x"$enable_ltdl_install" = x"yes"; then
ac_configure_args="$ac_configure_args --enable-ltdl-install"
LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdl.la
INCLTDL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl'])
else
ac_configure_args="$ac_configure_args --enable-ltdl-install=no"
LIBLTDL="-lltdl"
INCLTDL=
fi
])
dnl old names
AC_DEFUN(AM_PROG_LIBTOOL, [indir([AC_PROG_LIBTOOL])])dnl
AC_DEFUN(AM_ENABLE_SHARED, [indir([AC_ENABLE_SHARED], $@)])dnl
AC_DEFUN(AM_ENABLE_STATIC, [indir([AC_ENABLE_STATIC], $@)])dnl
AC_DEFUN(AM_DISABLE_SHARED, [indir([AC_DISABLE_SHARED], $@)])dnl
AC_DEFUN(AM_DISABLE_STATIC, [indir([AC_DISABLE_STATIC], $@)])dnl
AC_DEFUN(AM_PROG_LD, [indir([AC_PROG_LD])])dnl
AC_DEFUN(AM_PROG_NM, [indir([AC_PROG_NM])])dnl
dnl This is just to silence aclocal about the macro not being used
ifelse([AC_DISABLE_FAST_INSTALL])dnl


@@ -609,12 +609,17 @@ dnl AC_SUBST(RANLIB)
AC_OUTPUT( Makefile \
docs/Makefile \
docs/examples/Makefile \
include/Makefile \
include/curl/Makefile \
src/Makefile \
lib/Makefile \
tests/Makefile)
dnl perl/checklinks.pl \
dnl perl/getlinks.pl \
dnl perl/formfind.pl \
dnl perl/recursiveftpget.pl )
tests/Makefile \
tests/data/Makefile \
packages/Makefile \
packages/Win32/Makefile \
packages/Linux/Makefile \
packages/Linux/RPM/Makefile \
packages/Linux/RPM/curl.spec \
packages/Linux/RPM/curl-ssl.spec )


@@ -18,7 +18,7 @@ The License Issue
If you add a larger piece of code, you can opt to make that file or set of
files to use a different license as long as they don't enforce any changes to
the rest of the package and they make sense. Such "separate parts" can not be
GPL (as we don't want the FPL virus to attack users of libcurl) but they must
GPL (as we don't want the GPL virus to attack users of libcurl) but they must
use "GPL compatible" licenses.
Naming


@@ -1,4 +1,4 @@
Updated: January 4, 2001 (http://curl.haxx.se/docs/faq.shtml)
Updated: January 15, 2001 (http://curl.haxx.se/docs/faq.shtml)
_ _ ____ _
___| | | | _ \| |
/ __| | | | |_) | |
@@ -33,7 +33,7 @@ FAQ
4. Running Problems
4.1 Problems connecting to SSL servers.
4.2 Why do I get problems when I use & in the URL?
4.2 Why do I get problems when I use & or % in the URL?
4.3 How can I use {, }, [ or ] to specify multiple URLs?
4.4 Why do I get downloaded data even though the web page doesn't exist?
4.5 Why do I get return code XXX from a HTTP server?
@@ -48,7 +48,7 @@ FAQ
4.9 Curl can't authenticate to the server that requires NTLM?
5. libcurl Issues
5.1 Is libcurl thread safe?
5.1 Is libcurl thread-safe?
5.2 How can I receive all data into a large memory chunk?
5.3 How do I fetch multiple files with libcurl?
5.4 Does libcurl do Winsock initing on win32 systems?
@@ -73,19 +73,16 @@ FAQ
fact it can also be pronounced 'see URL' also helped.
Curl supports a range of common internet protocols, currently including
HTTP, HTTPS, FTP, GOPHER, LDAP, DICT and FILE.
HTTP, HTTPS, FTP, GOPHER, LDAP, DICT, TELNET and FILE.
Please spell it cURL or just curl.
We spell it cURL or just curl.
1.2 What is libcurl?
libcurl is the engine inside curl that does all the work. curl is more or
less the command line interface that converts the given options into libcurl
function invokes. libcurl is a reliable, highly portable multiprotocol file
transfer library.
libcurl is a reliable, highly portable multiprotocol file transfer library.
Any application is free to use libcurl, even commercial or closed-source
ones. Just make sure changes to the lib itself are made public.
ones.
1.3 What is cURL not?
@@ -298,7 +295,7 @@ FAQ
I have also seen examples where the remote server didn't like the SSLv2
request and instead you had to force curl to use SSLv3 with -3/--sslv3.
4.2. Why do I get problems when I use & in the URL?
4.2. Why do I get problems when I use & or % in the URL?
In general unix shells, the & letter is treated special and when used it
runs the specified command in the background. To safely send the & as a part
@@ -309,6 +306,9 @@ FAQ
curl 'http://www.altavista.com/cgi-bin/query?text=yes&q=curl'
In win32, the standard DOS shell treats the %-letter specially and you may
need to quote the string properly when % is used in it.
4.3. How can I use {, }, [ or ] to specify multiple URLs?
Because those letters have a special meaning to the shell, and to be used in
@@ -318,6 +318,12 @@ FAQ
curl '{curl,www}.haxx.se'
To be able to use those letters as actual parts of the URL (without using
them for the curl URL "globbing" system), use the -g/--globoff option
(included in curl 7.6 and later):
curl -g 'www.site.com/weirdname[].html'
4.4. Why do I get downloaded data even though the web page doesn't exist?
Curl asks remote servers for the page you specify. If the page doesn't exist
@@ -392,8 +398,7 @@ FAQ
you have.
If there is a bug, post a bug report in the Curl Bug Track System over at
http://sourceforge.net/bugs/?group_id=976 or mail a detailed bug description
to curl-bug@haxx.se.
http://sourceforge.net/bugs/?group_id=976
Always include as many details you can think of, including curl version,
operating system name and version and complete instructions how to repeat
@@ -406,11 +411,13 @@ FAQ
5. libcurl Issues
5.1. Is libcurl thread safe?
5.1. Is libcurl thread-safe?
We have attempted to write the entire code adjusted for multi-threaded
programs. If your system has such, curl will attempt to use threadsafe
functions instead of non-safe ones.
Yes.
We have written the libcurl code specifically adjusted for multi-threaded
programs. libcurl will use thread-safe functions instead of non-safe ones if
your system has such.
I am very interested in once and for all getting some kind of report or
README file from those who have used libcurl in a threaded environment,


@@ -86,13 +86,28 @@ UNIX
If you happen to have autoconf installed, but a version older than
2.12 you will get into trouble. Then you can still build curl by
issuing these commands: (from Ralph Beckmann <rabe@uni-paderborn.de>)
issuing these commands: (from Ralph Beckmann)
./configure [...]
cd lib; make; cd ..
cd src; make; cd ..
cp src/curl elsewhere/bin/
As suggested by David West, you can make a faked version of autoconf and
autoheader:
----start of autoconf----
#!/bin/bash
#fake autoconf for building curl
if [ "$1" = "--version" ]; then
  echo "Autoconf version 2.13"
fi
----end of autoconf----
Then make autoheader a symbolic link to the same script and make sure
they're executable and set to appear in the path *BEFORE* the actual (but
obsolete) autoconf and autoheader scripts.
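One possible way to put the faked scripts in place (the ~/fakebin directory
name is just an example):

  mkdir ~/fakebin
  cp autoconf ~/fakebin/autoconf        # the fake script above, saved as 'autoconf'
  chmod +x ~/fakebin/autoconf
  ln -s autoconf ~/fakebin/autoheader   # autoheader is a symbolic link to the same script
  PATH=~/fakebin:$PATH; export PATH     # found *BEFORE* the real autoconf/autoheader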
OPTIONS
Remember, to force configure to use the standard cc compiler if both
@@ -154,6 +169,8 @@ Win32
set, then run 'nmake -f Makefile.vc6' in the lib/ dir and then
'nmake -f Makefile.vc6' in the src/ dir.
The vcvars32.bat file is part of the Microsoft development environment.
IDE-style
-------------------------
If you use VC++, Borland or similar compilers. Include all lib source
@@ -199,6 +216,8 @@ Win32
set, then run 'nmake -f Makefile.vc6 release-ssl' in the lib/ dir and
then 'nmake -f Makefile.vc6' in the src/ dir.
The vcvars32.bat file is part of the Microsoft development environment.
Microsoft / Borland style
-------------------------
If you have OpenSSL, and want curl to take advantage of it, edit your
@@ -256,18 +275,20 @@ PORTS
- PowerPC Mac OS X
- Sparc Linux
- Sparc Solaris 2.4, 2.5, 2.5.1, 2.6, 7, 8
- Sparc SunOS 4.1.*
- Sparc SunOS 4.1.X
- i386 BeOS
- i386 FreeBSD
- i386 Linux 1.3, 2.0, 2.2, 2.3, 2.4
- i386 NetBSD
- i386 OS/2
- i386 OpenBSD
- i386 SCO unix
- i386 Solaris 2.7
- i386 Windows 95, 98, NT, 2000
- i386 Windows 95, 98, ME, NT, 2000
- ia64 Linux 2.3.99
- m68k AmigaOS 3
- m68k OpenBSD
- StrongARM NetBSD 1.4.1
OpenSSL
=======


@@ -12,9 +12,15 @@ INTERNALS
Thus, the largest amount of code and complexity is in the library part.
SYMBOLS
=======
All symbols used internally must use a 'Curl_' prefix if they're used in more
than a single file. Single-file symbols must be made static. Public
(exported) symbols must use a 'curl_' prefix. (There are exceptions, but they
are destined to be changed to follow this pattern in the future.)
CVS
===
All changes to the sources are committed to the CVS repository as soon as
they're somewhat verified to work. Changes shall be committed as independently
as possible so that individual changes can be easier spotted and tracked
@@ -27,25 +33,28 @@ Windows vs Unix
===============
There are a few differences in how to program curl the unix way compared to
the Windows way. The four most notable details are:
the Windows way. The four perhaps most notable details are:
1. Different function names for close(), read(), write()
In curl, this is solved with defines and macros, so that the source looks
the same at all places except for the header file that defines them.
2. Windows requires a couple of init calls for the socket stuff
Those must be made by the application that uses libcurl, in curl that means
src/main.c has some code #ifdef'ed to do just that.
3. The file descriptors for network communication and file operations are
not easily interchangable as in unix
We avoid this by not trying any funny tricks on file descriptors.
4. When writing data to stdout, Windows makes end-of-lines the DOS way, thus
destroying binary data, although you do want that conversion if it is
text coming through... (sigh)
In curl, (1) is made with defines and macros, so that the source looks the
same at all places except for the header file that defines them.
(2) must be made by the application that uses libcurl, in curl that means
src/main.c has some code #ifdef'ed to do just that.
(3) is simply avoided by not trying any funny tricks on file descriptors.
(4) we set stdout to binary under windows
We set stdout to binary under windows
Inside the source code, I do make an effort to avoid '#ifdef WIN32'. All
conditionals that deal with features *should* instead be in the format
@@ -54,6 +63,9 @@ Windows vs Unix
supposed to look exactly as a config.h file would have looked like on a
Windows machine!
Generally speaking: always remember that this will be compiled on dozens of
operating systems. Don't walk on the edge.
Library
=======
@@ -68,6 +80,9 @@ Library
rather small and easy-to-follow. All the ones prefixed with 'curl_easy' are
put in the lib/easy.c file.
All printf()-style functions use the supplied clones in lib/mprintf.c. This
makes sure we stay absolutely platform independent.
curl_easy_init() allocates an internal struct and makes some initializations.
The returned handle does not reveal internals.
@@ -77,27 +92,31 @@ Library
curl_easy_perform() does a whole lot of things:
The function analyzes the URL, get the different components and connects to
the remote host. This may involve using a proxy and/or using SSL. The
GetHost() function in lib/hostip.c is used for looking up host names.
It starts off in the lib/easy.c file by calling curl_transfer(), but the main
work is lib/url.c. The function first analyzes the URL, it separates the
different components and connects to the remote host. This may involve using
a proxy and/or using SSL. The Curl_gethost() function in lib/hostip.c is used
for looking up host names.
When connected, the proper function is called. The functions are named after
the protocols they handle. ftp(), http(), dict(), etc. They all reside in
their respective files (ftp.c, http.c and dict.c).
When connected, the proper protocol-specific function is called. The
functions are named after the protocols they handle. Curl_ftp(), Curl_http(),
Curl_dict(), etc. They all reside in their respective files (ftp.c, http.c
and dict.c).
The protocol-specific functions deal with protocol-specific negotiations and
setup. They have access to the sendf() (from lib/sendf.c) function to send
printf-style formatted data to the remote host and when they're ready to make
the actual file transfer they call the Transfer() function (in
lib/download.c) to do the transfer. All printf()-style functions use the
supplied clones in lib/mprintf.c.
The protocol-specific functions of course deal with protocol-specific
negotiations and setup. They have access to the Curl_sendf() (from
lib/sendf.c) function to send printf-style formatted data to the remote host
and when they're ready to make the actual file transfer they call the
Curl_Transfer() function (in lib/transfer.c) to set up the transfer and then
return. curl_transfer() then calls _Transfer() in lib/transfer.c, which
performs the entire file transfer.
While transfering, the progress functions in lib/progress.c are called at a
During transfer, the progress functions in lib/progress.c are called at a
frequent interval (or at the user's choice, a specified callback might get
called). The speedcheck functions in lib/speedcheck.c are also used to verify
that the transfer is as fast as required.
When completed curl_easy_cleanup() should be called to free up used
When completed, the curl_easy_cleanup() should be called to free up used
resources.
HTTP(S)
@@ -106,9 +125,8 @@ Library
code. There is a special file (lib/formdata.c) that offers all the multipart
post functions.
base64-functions for user+password stuff is in (lib/base64.c) and all
functions for parsing and sending cookies are found in
(lib/cookie.c).
base64-functions for user+password stuff (and more) is in (lib/base64.c) and
all functions for parsing and sending cookies are found in (lib/cookie.c).
HTTPS uses in almost every means the same procedure as HTTP, with only two
exceptions: the connect procedure is different and the function used to read
@@ -118,9 +136,17 @@ Library
FTP
The if2ip() function can be used for getting the IP number of a specified
network interface, and it resides in lib/if2ip.c. It is only used for the FTP
PORT command.
The Curl_if2ip() function can be used for getting the IP number of a
specified network interface, and it resides in lib/if2ip.c.
Curl_ftpsendf() is used for sending FTP commands to the remote server. It was
made a separate function to prevent us programmers from forgetting that they
must be CRLF terminated. They must also be sent in one single write() to make
firewalls and similar happy.
Kerberos
The kerberos support is mainly in lib/krb4.c and lib/security.c.
TELNET
@@ -139,32 +165,54 @@ Library
URL encoding and decoding, called escaping and unescaping in the source code,
is found in lib/escape.c.
While transfering data in Transfer() a few functions might get
While transfering data in _Transfer() a few functions might get
used. curl_getdate() in lib/getdate.c is for HTTP date comparisons (and
more).
lib/getenv.c offers curl_getenv() which is for reading environment variables
in a neat platform independent way. That's used in the client, but also in
lib/url.c when checking the proxy environment variables.
lib/url.c when checking the proxy environment variables. Note that contrary
to the normal unix getenv(), this returns an allocated buffer that must be
free()ed after use.
lib/netrc.c holds the .netrc parser
lib/timeval.c features replacement functions for systems that don't have
gettimeofday().
gettimeofday() and a few support functions for timeval conversions.
A function named curl_version() that returns the full curl version string is
found in lib/version.c.
If authentication is requested but no password is given, a getpass_r() clone
exists in lib/getpass.c. libcurl offers a custom callback that can be used
instead of this, but it doesn't change much to us.
Return Codes and Informationals
===============================
I've made things simple. Almost every function in libcurl returns a CURLcode,
that must be CURLE_OK if everything is OK or otherwise a suitable error code
as the curl/curl.h include file defines. The very spot that detects an error
must use the Curl_failf() function to set the human-readable error
description.
In aiding the user to understand what's happening and to debug curl usage, we
must supply a fair amount of informational messages by using the Curl_infof()
function. Those messages are only displayed when the user explicitly asks for
them. They are best used when revealing information that isn't otherwise
obvious.
Client
======
main() resides in src/main.c together with most of the client code.
src/hugehelp.c is automatically generated by the mkhelp.pl perl script to
display the complete "manual" and the src/urlglob.c file holds the functions
used for the multiple-URL support.
used for the URL-"globbing" support. Globbing in the sense that the {} and []
expansion stuff is there.
The client mostly mess around to setup its config struct properly, then it
calls the curl_easy_*() functions of the library and when it gets back
The client mostly messes around to setup its 'config' struct properly, then
it calls the curl_easy_*() functions of the library and when it gets back
control after the curl_easy_perform() it cleans up the library, checks status
and exits.
@@ -173,10 +221,30 @@ Client
curl_easy_getinfo() function to extract useful information from the curl
session.
Recent versions may loop and do all that several times if many URLs were
specified on the command line or config file.
Memory Debugging
================
The file named lib/memdebug.c contains debug-versions of a few
functions. Functions such as malloc, free, fopen, fclose, etc that somehow
deal with resources that might give us problems if we "leak" them. The
functions in the memdebug system do nothing fancy, they do their normal
function and then log information about what they just did. The logged data
is then analyzed after a complete session.
memanalyze.pl is a perl script present only in CVS (not part of the release
archives) that analyzes a log file generated by the memdebug system. It
detects if resources are allocated but never freed and other kinds of errors
related to resource management.
Use -DMALLOCDEBUG when compiling to enable memory debugging.
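A rough sketch of how the pieces fit together; the make invocation and the
'memdump' log file name are assumptions, not something this document
specifies:

  make clean
  make CFLAGS="-g -DMALLOCDEBUG"                # rebuild with the debug versions of malloc, fopen etc
  ./src/curl http://curl.haxx.se/ > /dev/null   # run anything that exercises the library
  perl memanalyze.pl memdump                    # analyze the produced log for leaks ('memdump' assumed)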
Test Suite
==========
During November 2000, a test suite has evolved. It is placed in its own
Since November 2000, a test suite has evolved. It is placed in its own
subdirectory directly off the root in the curl archive tree, and it contains
a bunch of scripts and a lot of test case data.
@@ -186,3 +254,17 @@ Test Suite
You'll find a complete description of the test case data files in the README
file in the test directory.
The test suite automatically detects if curl was built with the memory
debugging enabled, and if it was it will detect memory leaks too.
Building Releases
=================
There's no magic to this. When you consider everything stable enough to be
released, run the 'maketgz' script (using 'make distcheck' will give you a
pretty good view on the status of the current sources). maketgz prompts for
version number of the client and the library before it creates a release
archive.
You must have autoconf installed to build release archives.
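In short, a release session could look something like this (a sketch, not a
prescribed procedure):

  make distcheck   # get a good view of the status of the current sources
  ./maketgz        # prompts for client and library version numbers, then builds the archive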


@@ -12,11 +12,15 @@ man_MANS = \
curl_easy_perform.3 \
curl_easy_setopt.3 \
curl_formparse.3 \
curl_formfree.3 \
curl_getdate.3 \
curl_getenv.3 \
curl_slist_append.3 \
curl_slist_free_all.3 \
curl_version.3
EXTRA_DIST = $(man_MANS)
EXTRA_DIST = $(man_MANS) \
MANUAL BUGS CONTRIBUTE FAQ FEATURES INTERNALS \
LIBCURL README.win32 RESOURCES TODO TheArtOfHttpScripting
SUBDIRS = examples


@@ -102,6 +102,12 @@ Similar Tools
Kermit - http://www.columbia.edu/kermit/ftpclient
Pavuk - http://www.idata.sk/~ondrej/pavuk/
httpr - http://zwolak.dhs.org/httpr/
puf - http://www.inf.tu-dresden.de/~ob6/sw/puf.html
Related Software
----------------
ftpparse - http://cr.yp.to/ftpparse.html parses FTP LIST responses


@@ -16,10 +16,6 @@ For the future
* Make SSL session ids get used if multiple HTTPS documents from the same
host is requested.
* Improve the command line option parser to accept '-m300' as well as the '-m
300' convention. It should be able to work if '-m300' is considered to be
space separated to the next option.
* Make the curl tool support URLs that start with @ that would then mean that
the following is a plain list with URLs to download. Thus @filename.txt
reads a list of URLs from a local file. A fancy option would then be to
@@ -27,16 +23,12 @@ For the future
URLs mentioned in the list. I figure -O or something would have to be
implied by such an action.
* Make curl with multiple URLs, even outside of {}-letters. I could also
imagine an optional fork()ed system that downloads each URL in its own
thread. It should of course have a maximum amount of simultaneous fork()s.
* Improve the regular progress meter with --continue is used. It should be
noticable when there's a resume going on.
* Add a command line option that allows the output file to get the same time
stamp as the remote file. This requires some fiddling on FTP but comes
almost free for HTTP.
stamp as the remote file. We already are capable of fetching the remote
file's date.
* Make the SSL layer option capable of using the Mozilla Security Services as
an alternative to OpenSSL:
@@ -47,6 +39,7 @@ For the future
* Make the easy-interface support multiple file transfers. If they're done
to the same host, they should use persistant connections or similar.
Figure out a nice design for this.
* Add asynchronous name resolving, as this enables full timeout support for
fork() systems.


@@ -2,18 +2,19 @@
.\" nroff -man curl.1
.\" Written by Daniel Stenberg
.\"
.TH curl 1 "4 January 2001" "Curl 7.5.2" "Curl Manual"
.TH curl 1 "9 January 2001" "Curl 7.6" "Curl Manual"
.SH NAME
curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or
HTTPS syntax.
.SH SYNOPSIS
.B curl [options]
.I url
.I [URL...]
.SH DESCRIPTION
.B curl
is a client to get documents/files from servers, using any of the
supported protocols. The command is designed to work without user
interaction or any kind of interactivity.
is a client to get documents/files from or send documents to a server, using
any of the supported protocols (HTTP, HTTPS, FTP, GOPHER, DICT, TELNET, LDAP
or FILE). The command is designed to work without user interaction or any kind
of interactivity.
curl offers a busload of useful tricks like proxy support, user
authentication, ftp upload, HTTP post, SSL (https:) connections, cookies, file
@@ -37,6 +38,9 @@ It is possible to specify up to 9 sets or series for a URL, but no nesting is
supported at the moment:
http://www.any.org/archive[1996-1999]/volume[1-4]part{a,b,c,index}.html
Starting with curl 7.6, you can specify any amount of URLs on the command
line. They will be fetched in a sequential manner in the specified order.
.SH OPTIONS
.IP "-a/--append"
(FTP)
@@ -120,11 +124,13 @@ To post data purely binary, you should instead use the --data-binary option.
-d/--data is the same as --data-ascii.
If this option is used serveral times, the last one will be used.
If this option is used serveral times, the ones following the first will
append data.
.IP "--data-ascii <data>"
(HTTP) This is an alias for the -d/--data option.
If this option is used serveral times, the last one will be used.
If this option is used serveral times, the ones following the first will
append data.
.IP "--data-binary <data>"
(HTTP) This posts data in a similar manner as --data-ascii does, although when
using this option the entire context of the posted data is kept as-is. If you
@@ -132,6 +138,9 @@ want to post a binary file without the strip-newlines feature of the
--data-ascii option, this is for you.
If this option is used serveral times, the last one will be used.
If this option is used serveral times, the ones following the first will
append data.
.IP "-D/--dump-header <file>"
(HTTP/FTP)
Write the HTTP headers to this file. Write the FTP file info to this
@@ -311,11 +320,12 @@ or use several variables like:
curl http://{site,host}.host[1-5].com -o "#1_#2"
If this option is used serveral times, the last one will be used.
You may use this option as many times as you have number of URLs.
.IP "-O/--remote-name"
Write output to a local file named like the remote file we get. (Only
the file part of the remote file is used, the path is cut off.)
You may use this option as many times as you have number of URLs.
.IP "-p/--proxytunnel"
When an HTTP proxy is used, this option will cause non-HTTP protocols to
attempt to tunnel through the proxy instead of merely using it to do HTTP-like
@@ -436,10 +446,14 @@ password is specified, curl will ask for it interactively.
If this option is used serveral times, the last one will be used.
.IP "--url <URL>"
Set the URL to fetch. This option is mostly handy when you wanna specify URL
in a config file.
Specify a URL to fetch. This option is mostly handy when you wanna specify
URL(s) in a config file.
If this option is used serveral times, the last one will be used.
This option may be used any number of times. To control where this URL is written, use the
.I -o
or the
.I -O
options.
.IP "-v/--verbose"
Makes the fetching more verbose/talkative. Mostly usable for
debugging. Lines starting with '>' means data sent by curl, '<'
@@ -765,6 +779,7 @@ If you do find bugs, mail them to curl-bug@haxx.se.
- T. Bharath <TBharath@responsenetworks.com>
- Alexander Kourakos <awk@users.sourceforge.net>
- James Griffiths <griffiths_james@yahoo.com>
- Loic Dachary <loic@senga.org>
.SH WWW
http://curl.haxx.se

11
docs/examples/Makefile.am Normal file

@@ -0,0 +1,11 @@
#
# $Id$
#
AUTOMAKE_OPTIONS = foreign no-dependencies
EXTRA_DIST = README curlgtk.c sepheaders.c simple.c
all:
@echo "done"


@@ -435,8 +435,10 @@ typedef enum {
NOTE: they return TRUE if the strings match *case insensitively*.
*/
extern int (strequal)(const char *s1, const char *s2);
extern int (strnequal)(const char *s1, const char *s2, size_t n);
extern int (Curl_strequal)(const char *s1, const char *s2);
extern int (Curl_strnequal)(const char *s1, const char *s2, size_t n);
#define strequal(a,b) Curl_strequal(a,b)
#define strnequal(a,b,c) Curl_strnequal(a,b,c)
/* external form function */
int curl_formparse(char *string,
@@ -454,8 +456,8 @@ char *curl_getenv(char *variable);
char *curl_version(void);
/* This is the version number */
#define LIBCURL_VERSION "7.5.2"
#define LIBCURL_VERSION_NUM 0x070502
#define LIBCURL_VERSION "7.6-pre3"
#define LIBCURL_VERSION_NUM 0x070600
/* linked-list structure for the CURLOPT_QUOTE option (and other) */
struct curl_slist {

View File

@@ -55,26 +55,28 @@
#include <stdarg.h>
int mprintf(const char *format, ...);
int mfprintf(FILE *fd, const char *format, ...);
int msprintf(char *buffer, const char *format, ...);
int msnprintf(char *buffer, size_t maxlength, const char *format, ...);
int mvprintf(const char *format, va_list args);
int mvfprintf(FILE *fd, const char *format, va_list args);
int mvsprintf(char *buffer, const char *format, va_list args);
int mvsnprintf(char *buffer, size_t maxlength, const char *format, va_list args);
char *maprintf(const char *format, ...);
char *mvaprintf(const char *format, va_list args);
int curl_mprintf(const char *format, ...);
int curl_mfprintf(FILE *fd, const char *format, ...);
int curl_msprintf(char *buffer, const char *format, ...);
int curl_msnprintf(char *buffer, size_t maxlength, const char *format, ...);
int curl_mvprintf(const char *format, va_list args);
int curl_mvfprintf(FILE *fd, const char *format, va_list args);
int curl_mvsprintf(char *buffer, const char *format, va_list args);
int curl_mvsnprintf(char *buffer, size_t maxlength, const char *format, va_list args);
char *curl_maprintf(const char *format, ...);
char *curl_mvaprintf(const char *format, va_list args);
#ifdef _MPRINTF_REPLACE
# define printf mprintf
# define fprintf mfprintf
# define sprintf msprintf
# define snprintf msnprintf
# define vprintf mvprintf
# define vfprintf mvfprintf
# define vsprintf mvsprintf
# define vsnprintf mvsnprintf
# define printf curl_mprintf
# define fprintf curl_mfprintf
# define sprintf curl_msprintf
# define snprintf curl_msnprintf
# define vprintf curl_mvprintf
# define vfprintf curl_mvfprintf
# define vsprintf curl_mvsprintf
# define vsnprintf curl_mvsnprintf
# define aprintf curl_maprintf
# define vaprintf curl_mvaprintf
#endif
#endif /* H_MPRINTF */

View File

@@ -4,6 +4,10 @@
AUTOMAKE_OPTIONS = foreign
EXTRA_DIST = getdate.y \
Makefile.b32 Makefile.b32.resp Makefile.m32 Makefile.vc6 \
libcurl.def dllinit.c
lib_LTLIBRARIES = libcurl.la
# Some flags needed when trying to cause warnings ;-)
@@ -49,12 +53,14 @@ cookie.c formdata.h http.c sendf.c \
cookie.h ftp.c http.h sendf.h url.c \
dict.c ftp.h if2ip.c speedcheck.c url.h \
dict.h getdate.c if2ip.h speedcheck.h urldata.h \
download.c getdate.h ldap.c ssluse.c version.c \
download.h getenv.c ldap.h ssluse.h \
getdate.h ldap.c ssluse.c version.c \
getenv.c ldap.h ssluse.h \
escape.c getenv.h mprintf.c telnet.c \
escape.h getpass.c netrc.c telnet.h \
getinfo.c highlevel.c strequal.c strequal.h easy.c \
security.h security.c krb4.c memdebug.c memdebug.h
getinfo.c transfer.c strequal.c strequal.h easy.c \
security.h security.c krb4.c krb4.h memdebug.c memdebug.h inet_ntoa_r.h
noinst_HEADERS = setup.h transfer.h
# Say $(srcdir), so GNU make does not report an ambiguity with the .y.c rule.
$(srcdir)/getdate.c: getdate.y

View File

@@ -29,7 +29,7 @@ LIBCURLLIB = libcurl.lib
SOURCES = \
base64.c \
cookie.c \
download.c \
transfer.c \
escape.c \
formdata.c \
ftp.c \
@@ -54,7 +54,6 @@ SOURCES = \
getinfo.c \
version.c \
easy.c \
highlevel.c \
strequal.c
OBJECTS = $(SOURCES:.c=.obj)

View File

@@ -1,6 +1,6 @@
+base64.obj &
+cookie.obj &
+download.obj &
+transfer.obj &
+escape.obj &
+formdata.obj &
+ftp.obj &
@@ -25,5 +25,4 @@
+getinfo.obj &
+version.obj &
+easy.obj &
+highlevel.obj &
+strequal.obj

View File

@@ -1,357 +0,0 @@
# Makefile.in generated automatically by automake 1.4 from Makefile.am
# Copyright (C) 1994, 1995-8, 1999 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
#
# $Id$
#
SHELL = @SHELL@
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
prefix = @prefix@
exec_prefix = @exec_prefix@
bindir = @bindir@
sbindir = @sbindir@
libexecdir = @libexecdir@
datadir = @datadir@
sysconfdir = @sysconfdir@
sharedstatedir = @sharedstatedir@
localstatedir = @localstatedir@
libdir = @libdir@
infodir = @infodir@
mandir = @mandir@
includedir = @includedir@
oldincludedir = /usr/include
DESTDIR =
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ..
ACLOCAL = @ACLOCAL@
AUTOCONF = @AUTOCONF@
AUTOMAKE = @AUTOMAKE@
AUTOHEADER = @AUTOHEADER@
INSTALL = @INSTALL@
INSTALL_PROGRAM = @INSTALL_PROGRAM@ $(AM_INSTALL_PROGRAM_FLAGS)
INSTALL_DATA = @INSTALL_DATA@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
transform = @program_transform_name@
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
host_alias = @host_alias@
host_triplet = @host@
AS = @AS@
CC = @CC@
DLLTOOL = @DLLTOOL@
LIBTOOL = @LIBTOOL@
LN_S = @LN_S@
MAKEINFO = @MAKEINFO@
NROFF = @NROFF@
OBJDUMP = @OBJDUMP@
PACKAGE = @PACKAGE@
PERL = @PERL@
RANLIB = @RANLIB@
VERSION = @VERSION@
YACC = @YACC@
AUTOMAKE_OPTIONS = foreign
lib_LTLIBRARIES = libcurl.la
# Some flags needed when trying to cause warnings ;-)
# CFLAGS = -DMALLOCDEBUG -g # -Wall #-pedantic
INCLUDES = -I$(top_srcdir)/include
libcurl_la_LDFLAGS = -version-info 1:0:0
# This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1.
#
# If either revision or age are omitted, they default to 0. Also note that age
# must be less than or equal to the current interface number.
#
# Here are a set of rules to help you update your library version information:
#
# 1.Start with version information of 0:0:0 for each libtool library.
#
# 2.Update the version information only immediately before a public release of
# your software. More frequent updates are unnecessary, and only guarantee
# that the current interface number gets larger faster.
#
# 3.If the library source code has changed at all since the last update, then
# increment revision (c:r:a becomes c:r+1:a).
#
# 4.If any interfaces have been added, removed, or changed since the last
# update, increment current, and set revision to 0.
#
# 5.If any interfaces have been added since the last public release, then
# increment age.
#
# 6.If any interfaces have been removed since the last public release, then
# set age to 0.
#
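#
# Illustrative example (not from the original file): applying the rules above
# to a library currently at -version-info 1:0:0,
#
#   a bugfix-only release moves to        1:1:0   (rule 3)
#   a release adding new interfaces to    2:0:1   (rules 3, 4 and 5)
#   a release removing interfaces to      2:0:0   (rules 3, 4 and 6)
#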
libcurl_la_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c file.h hostip.c progress.c timeval.h base64.h formdata.c hostip.h progress.h cookie.c formdata.h http.c sendf.c cookie.h ftp.c http.h sendf.h url.c dict.c ftp.h if2ip.c speedcheck.c url.h dict.h getdate.c if2ip.h speedcheck.h urldata.h download.c getdate.h ldap.c ssluse.c version.c download.h getenv.c ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c telnet.h getinfo.c highlevel.c strequal.c strequal.h easy.c security.h security.c krb4.c memdebug.c memdebug.h
mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs
CONFIG_HEADER = ../config.h ../src/config.h
CONFIG_CLEAN_FILES =
LTLIBRARIES = $(lib_LTLIBRARIES)
DEFS = @DEFS@ -I. -I$(srcdir) -I.. -I../src
CPPFLAGS = @CPPFLAGS@
LDFLAGS = @LDFLAGS@
LIBS = @LIBS@
libcurl_la_LIBADD =
libcurl_la_OBJECTS = file.lo timeval.lo base64.lo hostip.lo progress.lo \
formdata.lo cookie.lo http.lo sendf.lo ftp.lo url.lo dict.lo if2ip.lo \
speedcheck.lo getdate.lo download.lo ldap.lo ssluse.lo version.lo \
getenv.lo escape.lo mprintf.lo telnet.lo getpass.lo netrc.lo getinfo.lo \
highlevel.lo strequal.lo easy.lo security.lo krb4.lo memdebug.lo
CFLAGS = @CFLAGS@
COMPILE = $(CC) $(DEFS) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
LTCOMPILE = $(LIBTOOL) --mode=compile $(CC) $(DEFS) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
CCLD = $(CC)
LINK = $(LIBTOOL) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(LDFLAGS) -o $@
DIST_COMMON = Makefile.am Makefile.in
DISTFILES = $(DIST_COMMON) $(SOURCES) $(HEADERS) $(TEXINFOS) $(EXTRA_DIST)
TAR = gtar
GZIP_ENV = --best
SOURCES = $(libcurl_la_SOURCES)
OBJECTS = $(libcurl_la_OBJECTS)
all: all-redirect
.SUFFIXES:
.SUFFIXES: .S .c .lo .o .s
$(srcdir)/Makefile.in: Makefile.am $(top_srcdir)/configure.in $(ACLOCAL_M4)
cd $(top_srcdir) && $(AUTOMAKE) --foreign --include-deps lib/Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
cd $(top_builddir) \
&& CONFIG_FILES=$(subdir)/$@ CONFIG_HEADERS= $(SHELL) ./config.status
mostlyclean-libLTLIBRARIES:
clean-libLTLIBRARIES:
-test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES)
distclean-libLTLIBRARIES:
maintainer-clean-libLTLIBRARIES:
install-libLTLIBRARIES: $(lib_LTLIBRARIES)
@$(NORMAL_INSTALL)
$(mkinstalldirs) $(DESTDIR)$(libdir)
@list='$(lib_LTLIBRARIES)'; for p in $$list; do \
if test -f $$p; then \
echo "$(LIBTOOL) --mode=install $(INSTALL) $$p $(DESTDIR)$(libdir)/$$p"; \
$(LIBTOOL) --mode=install $(INSTALL) $$p $(DESTDIR)$(libdir)/$$p; \
else :; fi; \
done
uninstall-libLTLIBRARIES:
@$(NORMAL_UNINSTALL)
list='$(lib_LTLIBRARIES)'; for p in $$list; do \
$(LIBTOOL) --mode=uninstall rm -f $(DESTDIR)$(libdir)/$$p; \
done
.c.o:
$(COMPILE) -c $<
.s.o:
$(COMPILE) -c $<
.S.o:
$(COMPILE) -c $<
mostlyclean-compile:
-rm -f *.o core *.core
clean-compile:
distclean-compile:
-rm -f *.tab.c
maintainer-clean-compile:
.c.lo:
$(LIBTOOL) --mode=compile $(COMPILE) -c $<
.s.lo:
$(LIBTOOL) --mode=compile $(COMPILE) -c $<
.S.lo:
$(LIBTOOL) --mode=compile $(COMPILE) -c $<
mostlyclean-libtool:
-rm -f *.lo
clean-libtool:
-rm -rf .libs _libs
distclean-libtool:
maintainer-clean-libtool:
libcurl.la: $(libcurl_la_OBJECTS) $(libcurl_la_DEPENDENCIES)
$(LINK) -rpath $(libdir) $(libcurl_la_LDFLAGS) $(libcurl_la_OBJECTS) $(libcurl_la_LIBADD) $(LIBS)
tags: TAGS
ID: $(HEADERS) $(SOURCES) $(LISP)
list='$(SOURCES) $(HEADERS)'; \
unique=`for i in $$list; do echo $$i; done | \
awk ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
here=`pwd` && cd $(srcdir) \
&& mkid -f$$here/ID $$unique $(LISP)
TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS)'; \
unique=`for i in $$list; do echo $$i; done | \
awk ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(ETAGS_ARGS)$$unique$(LISP)$$tags" \
|| (cd $(srcdir) && etags $(ETAGS_ARGS) $$tags $$unique $(LISP) -o $$here/TAGS)
mostlyclean-tags:
clean-tags:
distclean-tags:
-rm -f TAGS ID
maintainer-clean-tags:
distdir = $(top_builddir)/$(PACKAGE)-$(VERSION)/$(subdir)
subdir = lib
distdir: $(DISTFILES)
@for file in $(DISTFILES); do \
d=$(srcdir); \
if test -d $$d/$$file; then \
cp -pr $$d/$$file $(distdir)/$$file; \
else \
test -f $(distdir)/$$file \
|| ln $$d/$$file $(distdir)/$$file 2> /dev/null \
|| cp -p $$d/$$file $(distdir)/$$file || :; \
fi; \
done
info-am:
info: info-am
dvi-am:
dvi: dvi-am
check-am: all-am
check: check-am
installcheck-am:
installcheck: installcheck-am
install-exec-am: install-libLTLIBRARIES
install-exec: install-exec-am
install-data-am:
install-data: install-data-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
install: install-am
uninstall-am: uninstall-libLTLIBRARIES
uninstall: uninstall-am
all-am: Makefile $(LTLIBRARIES)
all-redirect: all-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) AM_INSTALL_PROGRAM_FLAGS=-s install
installdirs:
$(mkinstalldirs) $(DESTDIR)$(libdir)
mostlyclean-generic:
clean-generic:
distclean-generic:
-rm -f Makefile $(CONFIG_CLEAN_FILES)
-rm -f config.cache config.log stamp-h stamp-h[0-9]*
maintainer-clean-generic:
mostlyclean-am: mostlyclean-libLTLIBRARIES mostlyclean-compile \
mostlyclean-libtool mostlyclean-tags \
mostlyclean-generic
mostlyclean: mostlyclean-am
clean-am: clean-libLTLIBRARIES clean-compile clean-libtool clean-tags \
clean-generic mostlyclean-am
clean: clean-am
distclean-am: distclean-libLTLIBRARIES distclean-compile \
distclean-libtool distclean-tags distclean-generic \
clean-am
-rm -f libtool
distclean: distclean-am
maintainer-clean-am: maintainer-clean-libLTLIBRARIES \
maintainer-clean-compile maintainer-clean-libtool \
maintainer-clean-tags maintainer-clean-generic \
distclean-am
@echo "This command is intended for maintainers to use;"
@echo "it deletes files that may require special tools to rebuild."
maintainer-clean: maintainer-clean-am
.PHONY: mostlyclean-libLTLIBRARIES distclean-libLTLIBRARIES \
clean-libLTLIBRARIES maintainer-clean-libLTLIBRARIES \
uninstall-libLTLIBRARIES install-libLTLIBRARIES mostlyclean-compile \
distclean-compile clean-compile maintainer-clean-compile \
mostlyclean-libtool distclean-libtool clean-libtool \
maintainer-clean-libtool tags mostlyclean-tags distclean-tags \
clean-tags maintainer-clean-tags distdir info-am info dvi-am dvi check \
check-am installcheck-am installcheck install-exec-am install-exec \
install-data-am install-data install-am install uninstall-am uninstall \
all-redirect all-am all installdirs mostlyclean-generic \
distclean-generic clean-generic maintainer-clean-generic clean \
mostlyclean distclean maintainer-clean
# Say $(srcdir), so GNU make does not report an ambiguity with the .y.c rule.
$(srcdir)/getdate.c: getdate.y
cd $(srcdir) && \
$(YACC) $(YFLAGS) getdate.y; \
mv -f y.tab.c getdate.c
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@@ -30,16 +30,16 @@ libcurl_a_SOURCES = arpa_telnet.h file.c getpass.h netrc.h timeval.c base64.c \
file.h hostip.c progress.c timeval.h base64.h formdata.c hostip.h progress.h \
cookie.c formdata.h http.c sendf.c cookie.h ftp.c http.h sendf.h url.c dict.c \
ftp.h if2ip.c speedcheck.c url.h dict.h getdate.c if2ip.h speedcheck.h \
urldata.h download.c getdate.h ldap.c ssluse.c version.c download.h getenv.c \
urldata.h transfer.c getdate.h ldap.c ssluse.c version.c transfer.h getenv.c \
ldap.h ssluse.h escape.c getenv.h mprintf.c telnet.c escape.h getpass.c netrc.c \
telnet.h getinfo.c highlevel.c strequal.c strequal.h easy.c security.h \
telnet.h getinfo.c strequal.c strequal.h easy.c security.h \
security.c krb4.c
libcurl_a_OBJECTS = file.o timeval.o base64.o hostip.o progress.o \
formdata.o cookie.o http.o sendf.o ftp.o url.o dict.o if2ip.o \
speedcheck.o getdate.o download.o ldap.o ssluse.o version.o \
speedcheck.o getdate.o transfer.o ldap.o ssluse.o version.o \
getenv.o escape.o mprintf.o telnet.o getpass.o netrc.o getinfo.o \
highlevel.o strequal.o easy.o security.o krb4.o
strequal.o easy.o security.o krb4.o
LIBRARIES = $(libcurl_a_LIBRARIES)
SOURCES = $(libcurl_a_SOURCES)

View File

@@ -33,7 +33,7 @@ LINKSLIBS = libeay32.lib ssleay32.lib RSAglue.lib
RELEASE_OBJS= \
base64r.obj \
cookier.obj \
downloadr.obj \
transferr.obj \
escaper.obj \
formdatar.obj \
ftpr.obj \
@@ -58,13 +58,12 @@ RELEASE_OBJS= \
getinfor.obj \
versionr.obj \
easyr.obj \
highlevelr.obj \
strequalr.obj
DEBUG_OBJS= \
base64d.obj \
cookied.obj \
downloadd.obj \
transferd.obj \
escaped.obj \
formdatad.obj \
ftpd.obj \
@@ -89,13 +88,12 @@ DEBUG_OBJS= \
getinfod.obj \
versiond.obj \
easyd.obj \
highleveld.obj \
strequald.obj
RELEASE_SSL_OBJS= \
base64rs.obj \
cookiers.obj \
downloadrs.obj \
transferrs.obj \
escapers.obj \
formdatars.obj \
ftprs.obj \
@@ -120,13 +118,12 @@ RELEASE_SSL_OBJS= \
getinfors.obj \
versionrs.obj \
easyrs.obj \
highlevelrs.obj \
strequalrs.obj
LINK_OBJS= \
base64.obj \
cookie.obj \
download.obj \
transfer.obj \
escape.obj \
formdata.obj \
ftp.obj \
@@ -151,7 +148,6 @@ LINK_OBJS= \
getinfo.obj \
version.obj \
easy.obj \
highlevel.obj \
strequal.obj
all : release
@@ -170,8 +166,8 @@ base64r.obj: base64.c
$(CCR) $(CFLAGS) base64.c
cookier.obj: cookie.c
$(CCR) $(CFLAGS) cookie.c
downloadr.obj: download.c
$(CCR) $(CFLAGS) download.c
transferr.obj: transfer.c
$(CCR) $(CFLAGS) transfer.c
escaper.obj: escape.c
$(CCR) $(CFLAGS) escape.c
formdatar.obj: formdata.c
@@ -220,8 +216,6 @@ versionr.obj: version.c
$(CCR) $(CFLAGS) version.c
easyr.obj: easy.c
$(CCR) $(CFLAGS) easy.c
highlevelr.obj: highlevel.c
$(CCR) $(CFLAGS) highlevel.c
strequalr.obj: strequal.c
$(CCR) $(CFLAGS) strequal.c
@@ -230,8 +224,8 @@ base64d.obj: base64.c
$(CCD) $(CFLAGS) base64.c
cookied.obj: cookie.c
$(CCD) $(CFLAGS) cookie.c
downloadd.obj: download.c
$(CCD) $(CFLAGS) download.c
transferd.obj: transfer.c
$(CCD) $(CFLAGS) transfer.c
escaped.obj: escape.c
$(CCD) $(CFLAGS) escape.c
formdatad.obj: formdata.c
@@ -280,8 +274,6 @@ versiond.obj: version.c
$(CCD) $(CFLAGS) version.c
easyd.obj: easy.c
$(CCD) $(CFLAGS) easy.c
highleveld.obj: highlevel.c
$(CCD) $(CFLAGS) highlevel.c
strequald.obj: strequal.c
$(CCD) $(CFLAGS) strequal.c
@@ -291,8 +283,8 @@ base64rs.obj: base64.c
$(CCRS) $(CFLAGS) base64.c
cookiers.obj: cookie.c
$(CCRS) $(CFLAGS) cookie.c
downloadrs.obj: download.c
$(CCRS) $(CFLAGS) download.c
transferrs.obj: transfer.c
$(CCRS) $(CFLAGS) transfer.c
escapers.obj: escape.c
$(CCRS) $(CFLAGS) escape.c
formdatars.obj: formdata.c
@@ -341,8 +333,6 @@ versionrs.obj: version.c
$(CCRS) $(CFLAGS) version.c
easyrs.obj: easy.c
$(CCRS) $(CFLAGS) easy.c
highlevelrs.obj: highlevel.c
$(CCRS) $(CFLAGS) highlevel.c
strequalrs.obj: strequal.c
$(CCRS) $(CFLAGS) strequal.c

View File

@@ -63,6 +63,7 @@
#define SYNCH 242 /* for telfunc calls */
#ifdef TELCMDS
static
char *telcmds[] = {
"EOF", "SUSP", "ABORT", "EOR",
"SE", "NOP", "DMARK", "BRK", "IP", "AO", "AYT", "EC",
@@ -124,6 +125,7 @@ extern char *telcmds[];
#define NTELOPTS (1+TELOPT_NEW_ENVIRON)
#ifdef TELOPTS
static
char *telopts[NTELOPTS+1] = {
"BINARY", "ECHO", "RCP", "SUPPRESS GO AHEAD", "NAME",
"STATUS", "TIMING MARK", "RCTE", "NAOL", "NAOP",

View File

@@ -55,7 +55,7 @@ static int pos(char c)
}
#if 1
int base64_encode(const void *data, int size, char **str)
int Curl_base64_encode(const void *data, int size, char **str)
{
char *s, *p;
int i;
@@ -93,7 +93,7 @@ int base64_encode(const void *data, int size, char **str)
}
#endif
int base64_decode(const char *str, void *data)
int Curl_base64_decode(const char *str, void *data)
{
const char *p;
unsigned char *q;

View File

@@ -34,6 +34,7 @@
#ifndef __BASE64_H
#define __BASE64_H
int base64_encode(const void *data, int size, char **str);
int Curl_base64_encode(const void *data, int size, char **str);
int Curl_base64_decode(const char *str, void *data);
#endif

View File

@@ -100,9 +100,10 @@ Example set of cookies:
*
***************************************************************************/
struct Cookie *cookie_add(struct CookieInfo *c,
bool httpheader, /* TRUE if HTTP header-style line */
char *lineptr) /* first non-space of the line */
struct Cookie *
Curl_cookie_add(struct CookieInfo *c,
bool httpheader, /* TRUE if HTTP header-style line */
char *lineptr) /* first non-space of the line */
{
struct Cookie *clist;
char what[MAX_COOKIE_LINE];
@@ -347,7 +348,7 @@ struct Cookie *cookie_add(struct CookieInfo *c,
* called before any cookies are set. File may be NULL.
*
****************************************************************************/
struct CookieInfo *cookie_init(char *file)
struct CookieInfo *Curl_cookie_init(char *file)
{
char line[MAX_COOKIE_LINE];
struct CookieInfo *c;
@@ -375,7 +376,7 @@ struct CookieInfo *cookie_init(char *file)
while(*lineptr && isspace((int)*lineptr))
lineptr++;
cookie_add(c, TRUE, lineptr);
Curl_cookie_add(c, TRUE, lineptr);
}
else {
/* This might be a netscape cookie-file line, get it! */
@@ -383,7 +384,7 @@ struct CookieInfo *cookie_init(char *file)
while(*lineptr && isspace((int)*lineptr))
lineptr++;
cookie_add(c, FALSE, lineptr);
Curl_cookie_add(c, FALSE, lineptr);
}
}
if(fromfile)
@@ -405,8 +406,8 @@ struct CookieInfo *cookie_init(char *file)
*
****************************************************************************/
struct Cookie *cookie_getlist(struct CookieInfo *c,
char *host, char *path, bool secure)
struct Cookie *Curl_cookie_getlist(struct CookieInfo *c,
char *host, char *path, bool secure)
{
struct Cookie *newco;
struct Cookie *co;
@@ -473,7 +474,7 @@ struct Cookie *cookie_getlist(struct CookieInfo *c,
*
****************************************************************************/
void cookie_freelist(struct Cookie *co)
void Curl_cookie_freelist(struct Cookie *co)
{
struct Cookie *next;
if(co) {
@@ -493,7 +494,7 @@ void cookie_freelist(struct Cookie *co)
* Free a "cookie object" previous created with cookie_init().
*
****************************************************************************/
void cookie_cleanup(struct CookieInfo *c)
void Curl_cookie_cleanup(struct CookieInfo *c)
{
struct Cookie *co;
struct Cookie *next;

View File

@@ -63,10 +63,10 @@ struct CookieInfo {
#define MAX_NAME 256
#define MAX_NAME_TXT "255"
struct Cookie *cookie_add(struct CookieInfo *, bool, char *);
struct CookieInfo *cookie_init(char *);
struct Cookie *cookie_getlist(struct CookieInfo *, char *, char *, bool);
void cookie_freelist(struct Cookie *);
void cookie_cleanup(struct CookieInfo *);
struct Cookie *Curl_cookie_add(struct CookieInfo *, bool, char *);
struct CookieInfo *Curl_cookie_init(char *);
struct Cookie *Curl_cookie_getlist(struct CookieInfo *, char *, char *, bool);
void Curl_cookie_freelist(struct Cookie *);
void Curl_cookie_cleanup(struct CookieInfo *);
#endif

View File

@@ -71,7 +71,7 @@
#include "urldata.h"
#include <curl/curl.h>
#include "download.h"
#include "transfer.h"
#include "sendf.h"
#include "progress.h"
@@ -80,12 +80,12 @@
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
CURLcode dict_done(struct connectdata *conn)
CURLcode Curl_dict_done(struct connectdata *conn)
{
return CURLE_OK;
}
CURLcode dict(struct connectdata *conn)
CURLcode Curl_dict(struct connectdata *conn)
{
int nth;
char *word;
@@ -154,7 +154,7 @@ CURLcode dict(struct connectdata *conn)
word
);
result = Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
-1, NULL); /* no upload */
if(result)
@@ -202,7 +202,7 @@ CURLcode dict(struct connectdata *conn)
word
);
result = Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
-1, NULL); /* no upload */
if(result)
@@ -226,7 +226,7 @@ CURLcode dict(struct connectdata *conn)
"QUIT\n",
ppath);
result = Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
-1, NULL);
if(result)

View File

@@ -23,7 +23,7 @@
*
* $Id$
*****************************************************************************/
CURLcode dict(struct connectdata *conn);
CURLcode dict_done(struct connectdata *conn);
CURLcode Curl_dict(struct connectdata *conn);
CURLcode Curl_dict_done(struct connectdata *conn);
#endif

View File

@@ -1,100 +0,0 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2000, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the MPL or the MIT/X-derivate
* licenses. You may pick one of these licenses.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* $Id$
*****************************************************************************/
#include "setup.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef HAVE_SYS_SELECT_H
#include <sys/select.h>
#endif
#include "urldata.h"
#include <curl/curl.h>
#ifdef __BEOS__
#include <net/socket.h>
#endif
#ifdef WIN32
#if !defined( __GNUC__) || defined(__MINGW32__)
#include <winsock.h>
#endif
#include <time.h> /* for the time_t typedef! */
#if defined(__GNUC__) && defined(TIME_WITH_SYS_TIME)
#include <sys/time.h>
#endif
#endif
#include "progress.h"
#include "speedcheck.h"
#include "sendf.h"
#include <curl/types.h>
/* --- download and upload a stream from/to a socket --- */
/* Parts of this function were brought to us by the friendly Mark Butler
<butlerm@xmission.com>. */
CURLcode
Transfer(CURLconnect *c_conn,
/* READ stuff */
int sockfd, /* socket to read from or -1 */
int size, /* -1 if unknown at this point */
bool getheader, /* TRUE if header parsing is wanted */
long *bytecountp, /* return number of bytes read or NULL */
/* WRITE stuff */
int writesockfd, /* socket to write to, it may very well be
the same we read from. -1 disables */
long *writebytecountp /* return number of bytes written or NULL */
)
{
struct connectdata *conn = (struct connectdata *)c_conn;
if(!conn)
return CURLE_BAD_FUNCTION_ARGUMENT;
/* now copy all input parameters */
conn->sockfd = sockfd;
conn->size = size;
conn->getheader = getheader;
conn->bytecountp = bytecountp;
conn->writesockfd = writesockfd;
conn->writebytecountp = writebytecountp;
return CURLE_OK;
}
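/*
 * Illustrative note, not part of the removed file: Transfer() only records
 * what is to be transferred; the actual I/O loop runs later in the engine.
 * The callers updated elsewhere in this change set invoke the renamed
 * function like this for a plain download (no upload leg):
 *
 *   result = Curl_Transfer(conn, data->firstsocket, -1, FALSE, bytecount,
 *                          -1, NULL);
 *
 * and like this for an FTP upload (no download leg):
 *
 *   result = Curl_Transfer(conn, -1, -1, FALSE, NULL,
 *                          data->secondarysocket, bytecountp);
 */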

View File

@@ -72,7 +72,7 @@
#include "urldata.h"
#include <curl/curl.h>
#include "highlevel.h"
#include "transfer.h"
#include <curl/types.h>
#define _MPRINTF_REPLACE /* use our functions only */

View File

@@ -106,7 +106,7 @@ CURLcode file(struct connectdata *conn)
struct UrlData *data = conn->data;
char *buf = data->buffer;
int bytecount = 0;
struct timeval start = tvnow();
struct timeval start = Curl_tvnow();
struct timeval now = start;
int fd;
char *actual_path = curl_unescape(path, 0);
@@ -139,7 +139,7 @@ CURLcode file(struct connectdata *conn)
it avoids problems with select() and recv() on file descriptors
in Winsock */
if(expected_size != -1)
pgrsSetDownloadSize(data, expected_size);
Curl_pgrsSetDownloadSize(data, expected_size);
while (res == CURLE_OK) {
nread = read(fd, buf, BUFSIZE-1);
@@ -155,16 +155,16 @@ CURLcode file(struct connectdata *conn)
to prevent CR/LF translation (this then goes to a binary mode
file descriptor). */
res = client_write(data, CLIENTWRITE_BODY, buf, nread);
res = Curl_client_write(data, CLIENTWRITE_BODY, buf, nread);
if(res)
return res;
now = tvnow();
if(pgrsUpdate(data))
now = Curl_tvnow();
if(Curl_pgrsUpdate(data))
res = CURLE_ABORTED_BY_CALLBACK;
}
now = tvnow();
if(pgrsUpdate(data))
now = Curl_tvnow();
if(Curl_pgrsUpdate(data))
res = CURLE_ABORTED_BY_CALLBACK;
close(fd);

View File

@@ -91,16 +91,10 @@ static void GetStr(char **string,
*
***************************************************************************/
int curl_formparse(char *input,
struct HttpPost **httppost,
struct HttpPost **last_post)
{
return FormParse(input, httppost, last_post);
}
#define FORM_FILE_SEPARATOR ','
#define FORM_TYPE_SEPARATOR ';'
static
int FormParse(char *input,
struct HttpPost **httppost,
struct HttpPost **last_post)
@@ -298,6 +292,13 @@ int FormParse(char *input,
return 0;
}
int curl_formparse(char *input,
struct HttpPost **httppost,
struct HttpPost **last_post)
{
return FormParse(input, httppost, last_post);
}
static int AddFormData(struct FormData **formp,
void *line,
long length)
@@ -339,7 +340,7 @@ static int AddFormDataf(struct FormData **formp,
}
char *MakeFormBoundary(void)
char *Curl_FormBoundary(void)
{
char *retstring;
static int randomizer=0; /* this is just so that two boundaries within
@@ -367,7 +368,7 @@ char *MakeFormBoundary(void)
}
/* Used from http.c */
void FormFree(struct FormData *form)
void Curl_FormFree(struct FormData *form)
{
struct FormData *next;
do {
@@ -400,8 +401,8 @@ void curl_formfree(struct HttpPost *form)
} while((form=next)); /* continue */
}
struct FormData *getFormData(struct HttpPost *post,
int *sizep)
struct FormData *Curl_getFormData(struct HttpPost *post,
int *sizep)
{
struct FormData *form = NULL;
struct FormData *firstform;
@@ -415,7 +416,7 @@ struct FormData *getFormData(struct HttpPost *post,
if(!post)
return NULL; /* no input => no output! */
boundary = MakeFormBoundary();
boundary = Curl_FormBoundary();
/* Make the first line of the output */
AddFormDataf(&form,
@@ -439,7 +440,7 @@ struct FormData *getFormData(struct HttpPost *post,
/* If used, this is a link to more file names, we must then do
the magic to include several files with the same field name */
fileboundary = MakeFormBoundary();
fileboundary = Curl_FormBoundary();
size += AddFormDataf(&form,
"\r\nContent-Type: multipart/mixed,"
@@ -535,24 +536,11 @@ struct FormData *getFormData(struct HttpPost *post,
return firstform;
}
int FormInit(struct Form *form, struct FormData *formdata )
int Curl_FormInit(struct Form *form, struct FormData *formdata )
{
if(!formdata)
return 1; /* error */
#if 0
struct FormData *lastnode=formdata;
/* find the last node in the list */
while(lastnode->next) {
lastnode = lastnode->next;
}
/* Now, make sure that we'll send a nice terminating sequence at the end
* of the post. We *DONT* add this string to the size of the data since this
* is actually AFTER the data. */
AddFormDataf(&lastnode, "\r\n\r\n");
#endif
form->data = formdata;
form->sent = 0;
@@ -560,10 +548,10 @@ int FormInit(struct Form *form, struct FormData *formdata )
}
/* fread() emulation */
int FormReader(char *buffer,
size_t size,
size_t nitems,
FILE *mydata)
int Curl_FormReader(char *buffer,
size_t size,
size_t nitems,
FILE *mydata)
{
struct Form *form;
int wantedsize;
@@ -638,7 +626,7 @@ int main(int argc, char **argv)
}
}
form=getFormData(httppost, &size);
form=Curl_getFormData(httppost, &size);
FormInit(&formread, form);

View File

@@ -36,23 +36,19 @@ struct Form {
been sent in a previous invoke */
};
int FormParse(char *string,
struct HttpPost **httppost,
struct HttpPost **last_post);
int Curl_FormInit(struct Form *form, struct FormData *formdata );
int FormInit(struct Form *form, struct FormData *formdata );
struct FormData *getFormData(struct HttpPost *post,
int *size);
struct FormData *Curl_getFormData(struct HttpPost *post,
int *size);
/* fread() emulation */
int FormReader(char *buffer,
size_t size,
size_t nitems,
FILE *mydata);
int Curl_FormReader(char *buffer,
size_t size,
size_t nitems,
FILE *mydata);
char *MakeFormBoundary(void);
char *Curl_FormBoundary(void);
void FormFree(struct FormData *);
void Curl_FormFree(struct FormData *);
#endif

144
lib/ftp.c
View File

@@ -26,6 +26,7 @@
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdarg.h>
#include <ctype.h>
#include <errno.h>
@@ -66,18 +67,27 @@
#include "if2ip.h"
#include "hostip.h"
#include "progress.h"
#include "download.h"
#include "transfer.h"
#include "escape.h"
#include "http.h" /* for HTTP proxy tunnel stuff */
#include "ftp.h"
#ifdef KRB4
#include "security.h"
#include "krb4.h"
#endif
#define _MPRINTF_REPLACE /* use our functions only */
#include <curl/mprintf.h>
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
#endif
/* easy-to-use macro: */
#define ftpsendf Curl_ftpsendf
/* returns last node in linked list */
static struct curl_slist *slist_get_last(struct curl_slist *list)
{
@@ -202,9 +212,13 @@ static CURLcode AllowServerConnect(struct UrlData *data,
#define lastline(line) (isdigit((int)line[0]) && isdigit((int)line[1]) && \
isdigit((int)line[2]) && (' ' == line[3]))
int GetLastResponse(int sockfd, char *buf,
struct connectdata *conn,
int *ftpcode)
/*
* We allow the ftpcode pointer to be NULL if no reply integer is wanted
*/
int Curl_GetFTPResponse(int sockfd, char *buf,
struct connectdata *conn,
int *ftpcode)
{
int nread;
int keepon=TRUE;
@@ -220,12 +234,13 @@ int GetLastResponse(int sockfd, char *buf,
#define SELECT_TIMEOUT 2
int error = SELECT_OK;
*ftpcode=0; /* 0 for errors */
if(ftpcode)
*ftpcode=0; /* 0 for errors */
if(data->timeout) {
/* if timeout is requested, find out how much remaining time we have */
timeout = data->timeout - /* timeout time */
(tvlong(tvnow()) - tvlong(conn->now)); /* spent time */
(Curl_tvlong(Curl_tvnow()) - Curl_tvlong(conn->now)); /* spent time */
if(timeout <=0 ) {
failf(data, "Transfer aborted due to timeout");
return -SELECT_TIMEOUT; /* already too little time */
@@ -306,13 +321,14 @@ int GetLastResponse(int sockfd, char *buf,
if(error)
return -error;
*ftpcode=atoi(buf); /* return the initial number like this */
if(ftpcode)
*ftpcode=atoi(buf); /* return the initial number like this */
return nread;
}
/* -- who are we? -- */
char *getmyhost(char *buf, int buf_size)
char *Curl_getmyhost(char *buf, int buf_size)
{
#if defined(HAVE_GETHOSTNAME)
gethostname(buf, buf_size);
@@ -330,7 +346,7 @@ char *getmyhost(char *buf, int buf_size)
/* ftp_connect() should do everything that is to be considered a part
of the connection phase. */
CURLcode ftp_connect(struct connectdata *conn)
CURLcode Curl_ftp_connect(struct connectdata *conn)
{
/* this is FTP and no proxy */
int nread;
@@ -356,14 +372,14 @@ CURLcode ftp_connect(struct connectdata *conn)
if (data->bits.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
result = GetHTTPProxyTunnel(data, data->firstsocket,
data->hostname, data->remote_port);
result = Curl_ConnectHTTPProxyTunnel(data, data->firstsocket,
data->hostname, data->remote_port);
if(CURLE_OK != result)
return result;
}
/* The first thing we do is wait for the "220*" line: */
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -396,7 +412,7 @@ CURLcode ftp_connect(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "USER %s", ftp->user);
/* wait for feedback */
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -410,7 +426,7 @@ CURLcode ftp_connect(struct connectdata *conn)
/* 331 Password required for ...
(the server requires to send the user's password too) */
ftpsendf(data->firstsocket, conn, "PASS %s", ftp->passwd);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -459,7 +475,7 @@ CURLcode ftp_connect(struct connectdata *conn)
/* argument is already checked for validity */
CURLcode ftp_done(struct connectdata *conn)
CURLcode Curl_ftp_done(struct connectdata *conn)
{
struct UrlData *data = conn->data;
struct FTP *ftp = data->proto.ftp;
@@ -496,7 +512,7 @@ CURLcode ftp_done(struct connectdata *conn)
if(!data->bits.no_body) {
/* now let's see what the server says about the transfer we
just performed: */
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -516,7 +532,7 @@ CURLcode ftp_done(struct connectdata *conn)
if (qitem->data) {
ftpsendf(data->firstsocket, conn, "%s", qitem->data);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -570,7 +586,7 @@ CURLcode _ftp(struct connectdata *conn)
if (qitem->data) {
ftpsendf(data->firstsocket, conn, "%s", qitem->data);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -587,7 +603,7 @@ CURLcode _ftp(struct connectdata *conn)
/* change directory first! */
if(ftp->dir && ftp->dir[0]) {
ftpsendf(data->firstsocket, conn, "CWD %s", ftp->dir);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -602,7 +618,7 @@ CURLcode _ftp(struct connectdata *conn)
again a grey area as the MDTM is not kosher RFC959 */
ftpsendf(data->firstsocket, conn, "MDTM %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -639,7 +655,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -652,7 +668,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -664,7 +680,7 @@ CURLcode _ftp(struct connectdata *conn)
filesize = atoi(buf+4);
sprintf(buf, "Content-Length: %d\r\n", filesize);
result = client_write(data, CLIENTWRITE_BOTH, buf, 0);
result = Curl_client_write(data, CLIENTWRITE_BOTH, buf, 0);
if(result)
return result;
@@ -680,7 +696,7 @@ CURLcode _ftp(struct connectdata *conn)
/* format: "Tue, 15 Nov 1994 12:45:26 GMT" */
strftime(buf, BUFSIZE-1, "Last-Modified: %a, %d %b %Y %H:%M:%S %Z\r\n",
tm);
result = client_write(data, CLIENTWRITE_BOTH, buf, 0);
result = Curl_client_write(data, CLIENTWRITE_BOTH, buf, 0);
if(result)
return result;
}
@@ -699,18 +715,20 @@ CURLcode _ftp(struct connectdata *conn)
char myhost[256] = "";
if(data->ftpport) {
if(if2ip(data->ftpport, myhost, sizeof(myhost))) {
h = GetHost(data, myhost, &hostdataptr);
if(Curl_if2ip(data->ftpport, myhost, sizeof(myhost))) {
h = Curl_gethost(data, myhost, &hostdataptr);
}
else {
if(strlen(data->ftpport)>1)
h = GetHost(data, data->ftpport, &hostdataptr);
h = Curl_gethost(data, data->ftpport, &hostdataptr);
if(h)
strcpy(myhost, data->ftpport); /* buffer overflow risk */
}
}
if(! *myhost) {
h=GetHost(data, getmyhost(myhost, sizeof(myhost)), &hostdataptr);
h=Curl_gethost(data,
Curl_getmyhost(myhost, sizeof(myhost)),
&hostdataptr);
}
infof(data, "We connect from %s\n", myhost);
@@ -788,7 +806,7 @@ CURLcode _ftp(struct connectdata *conn)
porttouse & 255);
}
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -801,7 +819,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "PASV");
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -856,7 +874,7 @@ CURLcode _ftp(struct connectdata *conn)
}
else {
/* normal, direct, ftp connection */
he = GetHost(data, newhost, &hostdataptr);
he = Curl_gethost(data, newhost, &hostdataptr);
if(!he) {
failf(data, "Can't resolve new host %s", newhost);
return CURLE_FTP_CANT_GET_HOST;
@@ -961,8 +979,8 @@ CURLcode _ftp(struct connectdata *conn)
if (data->bits.tunnel_thru_httpproxy) {
/* We want "seamless" FTP operations through HTTP proxy tunnel */
result = GetHTTPProxyTunnel(data, data->secondarysocket,
newhost, newport);
result = Curl_ConnectHTTPProxyTunnel(data, data->secondarysocket,
newhost, newport);
if(CURLE_OK != result)
return result;
}
@@ -977,7 +995,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1008,7 +1026,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1069,7 +1087,7 @@ CURLcode _ftp(struct connectdata *conn)
else
ftpsendf(data->firstsocket, conn, "STOR %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1090,9 +1108,9 @@ CURLcode _ftp(struct connectdata *conn)
/* When we know we're uploading a specified file, we can get the file
size prior to the actual upload. */
pgrsSetUploadSize(data, data->infilesize);
Curl_pgrsSetUploadSize(data, data->infilesize);
result = Transfer(conn, -1, -1, FALSE, NULL, /* no download */
result = Curl_Transfer(conn, -1, -1, FALSE, NULL, /* no download */
data->secondarysocket, bytecountp);
if(result)
return result;
@@ -1149,7 +1167,7 @@ CURLcode _ftp(struct connectdata *conn)
/* Set type to ASCII */
ftpsendf(data->firstsocket, conn, "TYPE A");
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1171,7 +1189,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "TYPE %s",
(data->bits.ftp_ascii)?"A":"I");
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1192,7 +1210,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "SIZE %s", ftp->file);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1236,7 +1254,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "REST %d", data->resume_from);
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1249,7 +1267,7 @@ CURLcode _ftp(struct connectdata *conn)
ftpsendf(data->firstsocket, conn, "RETR %s", ftp->file);
}
nread = GetLastResponse(data->firstsocket, buf, conn, &ftpcode);
nread = Curl_GetFTPResponse(data->firstsocket, buf, conn, &ftpcode);
if(nread < 0)
return CURLE_OPERATION_TIMEOUTED;
@@ -1321,7 +1339,7 @@ CURLcode _ftp(struct connectdata *conn)
infof(data, "Getting file with size: %d\n", size);
/* FTP download: */
result=Transfer(conn, data->secondarysocket, size, FALSE,
result=Curl_Transfer(conn, data->secondarysocket, size, FALSE,
bytecountp,
-1, NULL); /* no upload here */
if(result)
@@ -1341,7 +1359,7 @@ CURLcode _ftp(struct connectdata *conn)
/* -- deal with the ftp server! -- */
/* argument is already checked for validity */
CURLcode ftp(struct connectdata *conn)
CURLcode Curl_ftp(struct connectdata *conn)
{
CURLcode retcode;
@@ -1403,3 +1421,39 @@ CURLcode ftp(struct connectdata *conn)
return retcode;
}
/*
* ftpsendf() sends the formatted string as an FTP command to an FTP server
*
* NOTE: we build the command in a fixed-length buffer, which sets length
* restrictions on the command!
*
*/
size_t Curl_ftpsendf(int fd, struct connectdata *conn, char *fmt, ...)
{
size_t bytes_written;
char s[256];
va_list ap;
va_start(ap, fmt);
vsnprintf(s, 250, fmt, ap);
va_end(ap);
if(conn->data->bits.verbose)
fprintf(conn->data->err, "> %s\n", s);
strcat(s, "\r\n"); /* append a trailing CRLF */
#ifdef KRB4
if(conn->sec_complete && conn->data->cmdchannel) {
bytes_written = sec_fprintf(conn, conn->data->cmdchannel, s);
fflush(conn->data->cmdchannel);
}
else
#endif /* KRB4 */
{
bytes_written = swrite(fd, s, strlen(s));
}
return(bytes_written);
}
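/*
 * Illustrative call, matching the FTP code above (not new in this change):
 *
 *   ftpsendf(data->firstsocket, conn, "CWD %s", ftp->dir);
 *
 * Note the fixed 256-byte buffer: the formatted command plus the trailing
 * CRLF must fit within it.
 */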

View File

@@ -23,11 +23,18 @@
*
* $Id$
*****************************************************************************/
CURLcode ftp(struct connectdata *conn);
CURLcode ftp_done(struct connectdata *conn);
CURLcode ftp_connect(struct connectdata *conn);
CURLcode Curl_ftp(struct connectdata *conn);
CURLcode Curl_ftp_done(struct connectdata *conn);
CURLcode Curl_ftp_connect(struct connectdata *conn);
size_t Curl_ftpsendf(int fd, struct connectdata *, char *fmt, ...);
struct curl_slist *curl_slist_append(struct curl_slist *list, char *data);
void curl_slist_free_all(struct curl_slist *list);
/* The kerberos stuff needs this: */
int Curl_GetFTPResponse(int sockfd, char *buf,
struct connectdata *conn,
int *ftpcode);
#endif

View File

@@ -33,6 +33,7 @@
#include "memdebug.h"
#endif
static
char *GetEnv(char *variable)
{
#ifdef WIN32

View File

@@ -23,7 +23,6 @@
* $Id$
*****************************************************************************/
/* Unix and Win32 getenv function call */
char *GetEnv(char *variable);
#include <curl/curl.h>
#endif

View File

@@ -1,8 +1,35 @@
#ifndef __GETPASS_H
#define __GETPASS_H
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 2000, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* In order to be useful for every potential user, curl and libcurl are
* dual-licensed under the MPL and the MIT/X-derivate licenses.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the MPL or the MIT/X-derivate
* licenses. You may pick one of these licenses.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
* $Id$
*****************************************************************************/
#ifndef HAVE_GETPASS_R
/* If there's a system-provided function named like this, we trust it is
also found in one of the standard headers. */
/*
* Returning NULL will abort the continued operation!
*/
char* getpass_r(char *prompt, char* buffer, size_t buflen );
#endif
#endif

View File

@@ -62,7 +62,7 @@
/* --- resolve name or IP-number --- */
char *MakeIP(unsigned long num,char *addr, int addr_len)
static char *MakeIP(unsigned long num,char *addr, int addr_len)
{
#if defined(HAVE_INET_NTOA) || defined(HAVE_INET_NTOA_R)
struct in_addr in;
@@ -83,14 +83,17 @@ char *MakeIP(unsigned long num,char *addr, int addr_len)
return (addr);
}
/* The original code to this function was stolen from the Dancer source code,
written by Bjorn Reese, it has since been patched and modified. */
/* The original code to this function was once stolen from the Dancer source
code, written by Bjorn Reese, it has since been patched and modified
considerably. */
#ifndef INADDR_NONE
#define INADDR_NONE (unsigned long) ~0
#endif
struct hostent *GetHost(struct UrlData *data,
char *hostname,
char **bufp)
struct hostent *Curl_gethost(struct UrlData *data,
char *hostname,
char **bufp)
{
struct hostent *h = NULL;
unsigned long in;

View File

@@ -23,6 +23,8 @@
* $Id$
*****************************************************************************/
struct hostent *GetHost(struct UrlData *data, char *hostname, char **bufp );
struct hostent *Curl_gethost(struct UrlData *data,
char *hostname,
char **bufp);
#endif

View File

@@ -87,7 +87,7 @@
#include "urldata.h"
#include <curl/curl.h>
#include "download.h"
#include "transfer.h"
#include "sendf.h"
#include "formdata.h"
#include "progress.h"
@@ -105,6 +105,150 @@
#include "memdebug.h"
#endif
/*
* The add_buffer series of functions is used to build one large memory chunk
* from repeated function calls, so that the entire HTTP request can
* be sent in one go.
*/
static CURLcode
add_buffer(send_buffer *in, void *inptr, size_t size);
/*
* add_buffer_init() returns a fine buffer struct
*/
static
send_buffer *add_buffer_init(void)
{
send_buffer *blonk;
blonk=(send_buffer *)malloc(sizeof(send_buffer));
if(blonk) {
memset(blonk, 0, sizeof(send_buffer));
return blonk;
}
return NULL; /* failed, go home */
}
/*
* add_buffer_send() sends a buffer and frees all associated memory.
*/
static
size_t add_buffer_send(int sockfd, struct connectdata *conn, send_buffer *in)
{
size_t amount;
if(conn->data->bits.verbose) {
fputs("> ", conn->data->err);
/* this data _may_ contain binary stuff */
fwrite(in->buffer, in->size_used, 1, conn->data->err);
}
amount = ssend(sockfd, conn, in->buffer, in->size_used);
if(in->buffer)
free(in->buffer);
free(in);
return amount;
}
/*
* add_bufferf() builds a buffer from the formatted input
*/
static
CURLcode add_bufferf(send_buffer *in, char *fmt, ...)
{
CURLcode result = CURLE_OUT_OF_MEMORY;
char *s;
va_list ap;
va_start(ap, fmt);
s = vaprintf(fmt, ap); /* this allocs a new string to append */
va_end(ap);
if(s) {
result = add_buffer(in, s, strlen(s));
free(s);
}
return result;
}
/*
* add_buffer() appends a memory chunk to the existing one
*/
static
CURLcode add_buffer(send_buffer *in, void *inptr, size_t size)
{
char *new_rb;
int new_size;
if(size > 0) {
if(!in->buffer ||
((in->size_used + size) > (in->size_max - 1))) {
new_size = (in->size_used+size)*2;
if(in->buffer)
/* we have a buffer, enlarge the existing one */
new_rb = (char *)realloc(in->buffer, new_size);
else
/* create a new buffer */
new_rb = (char *)malloc(new_size);
if(!new_rb)
return CURLE_OUT_OF_MEMORY;
in->buffer = new_rb;
in->size_max = new_size;
}
memcpy(&in->buffer[in->size_used], inptr, size);
in->size_used += size;
}
return CURLE_OK;
}
/* end of the add_buffer functions */
/*****************************************************************************/
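/*
 * Minimal usage sketch (illustrative only, not part of this change): roughly
 * how a caller such as Curl_http() below drives these helpers -- build the
 * request in memory piece by piece, then flush it to the socket in one send.
 * The "Host:" header is just an example of formatted input.
 */
static size_t example_send_minimal_request(int sockfd,
                                           struct connectdata *conn,
                                           char *host)
{
  send_buffer *req = add_buffer_init();    /* fresh, empty buffer */
  if(!req)
    return 0;
  add_bufferf(req, "Host: %s\r\n", host);  /* append formatted text */
  add_buffer(req, "\r\n", 2);              /* append raw bytes */
  /* send everything and free all associated memory: */
  return add_buffer_send(sockfd, conn, req);
}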
/*
* Read everything until a newline.
*/
static
int GetLine(int sockfd, char *buf, struct UrlData *data)
{
int nread;
int read_rc=1;
char *ptr;
ptr=buf;
/* get us a full line, terminated with a newline */
for(nread=0;
(nread<BUFSIZE) && read_rc;
nread++, ptr++) {
#ifdef USE_SSLEAY
if (data->ssl.use) {
read_rc = SSL_read(data->ssl.handle, ptr, 1);
}
else {
#endif
read_rc = sread(sockfd, ptr, 1);
#ifdef USE_SSLEAY
}
#endif /* USE_SSLEAY */
if (*ptr == '\n')
break;
}
*ptr=0; /* zero terminate */
if(data->bits.verbose) {
fputs("< ", data->err);
fwrite(buf, 1, nread, data->err);
fputs("\n", data->err);
}
return nread;
}
/*
* This function checks the linked list of custom HTTP headers for a particular
* header (prefix).
@@ -123,13 +267,13 @@ bool static checkheaders(struct UrlData *data, char *thisheader)
}
/*
* GetHTTPProxyTunnel() requires that we're connected to a HTTP proxy. This
* ConnectHTTPProxyTunnel() requires that we're connected to a HTTP proxy. This
* function will issue the necessary commands to get a seamless tunnel through
* this proxy. After that, the socket can be used just as a normal socket.
*/
CURLcode GetHTTPProxyTunnel(struct UrlData *data, int tunnelsocket,
char *hostname, int remote_port)
CURLcode Curl_ConnectHTTPProxyTunnel(struct UrlData *data, int tunnelsocket,
char *hostname, int remote_port)
{
int httperror=0;
int subversion=0;
@@ -170,7 +314,7 @@ CURLcode GetHTTPProxyTunnel(struct UrlData *data, int tunnelsocket,
return CURLE_OK;
}
CURLcode http_connect(struct connectdata *conn)
CURLcode Curl_http_connect(struct connectdata *conn)
{
struct UrlData *data;
CURLcode result;
@@ -186,16 +330,15 @@ CURLcode http_connect(struct connectdata *conn)
if (conn->protocol & PROT_HTTPS) {
if (data->bits.httpproxy) {
/* HTTPS through a proxy can only be done with a tunnel */
result = GetHTTPProxyTunnel(data, data->firstsocket,
data->hostname, data->remote_port);
result = Curl_ConnectHTTPProxyTunnel(data, data->firstsocket,
data->hostname, data->remote_port);
if(CURLE_OK != result)
return result;
}
/* now, perform the SSL initialization for this socket */
if(UrgSSLConnect (data)) {
if(Curl_SSLConnect(data))
return CURLE_SSL_CONNECT_ERROR;
}
}
if(data->bits.user_passwd && !data->bits.this_is_a_follow) {
@@ -209,14 +352,14 @@ CURLcode http_connect(struct connectdata *conn)
/* called from curl_close() when this struct is about to get wasted, free
protocol-specific resources */
CURLcode http_close(struct connectdata *conn)
CURLcode Curl_http_close(struct connectdata *conn)
{
if(conn->data->auth_host)
free(conn->data->auth_host);
return CURLE_OK;
}
CURLcode http_done(struct connectdata *conn)
CURLcode Curl_http_done(struct connectdata *conn)
{
struct UrlData *data;
long *bytecount = &conn->bytecount;
@@ -228,7 +371,7 @@ CURLcode http_done(struct connectdata *conn)
if(data->bits.http_formpost) {
*bytecount = http->readbytecount + http->writebytecount;
FormFree(http->sendit); /* Now free that whole lot */
Curl_FormFree(http->sendit); /* Now free that whole lot */
data->fread = http->storefread; /* restore */
data->in = http->in; /* restore */
@@ -244,7 +387,7 @@ CURLcode http_done(struct connectdata *conn)
}
CURLcode http(struct connectdata *conn)
CURLcode Curl_http(struct connectdata *conn)
{
struct UrlData *data=conn->data;
char *buf = data->buffer; /* this is a short cut to the buffer */
@@ -284,29 +427,29 @@ CURLcode http(struct connectdata *conn)
!data->auth_host ||
strequal(data->auth_host, data->hostname)) {
sprintf(data->buffer, "%s:%s", data->user, data->passwd);
if(base64_encode(data->buffer, strlen(data->buffer),
&authorization) >= 0) {
data->ptr_userpwd = maprintf( "Authorization: Basic %s\015\012",
authorization);
if(Curl_base64_encode(data->buffer, strlen(data->buffer),
&authorization) >= 0) {
data->ptr_userpwd = aprintf( "Authorization: Basic %s\015\012",
authorization);
free(authorization);
}
}
}
if((data->bits.set_range) && !checkheaders(data, "Range:")) {
data->ptr_rangeline = maprintf("Range: bytes=%s\015\012", data->range);
data->ptr_rangeline = aprintf("Range: bytes=%s\015\012", data->range);
}
if((data->bits.http_set_referer) && !checkheaders(data, "Referer:")) {
data->ptr_ref = maprintf("Referer: %s\015\012", data->referer);
data->ptr_ref = aprintf("Referer: %s\015\012", data->referer);
}
if(data->cookie && !checkheaders(data, "Cookie:")) {
data->ptr_cookie = maprintf("Cookie: %s\015\012", data->cookie);
data->ptr_cookie = aprintf("Cookie: %s\015\012", data->cookie);
}
if(data->cookies) {
co = cookie_getlist(data->cookies,
host,
ppath,
conn->protocol&PROT_HTTPS?TRUE:FALSE);
co = Curl_cookie_getlist(data->cookies,
host,
ppath,
conn->protocol&PROT_HTTPS?TRUE:FALSE);
}
if ((data->bits.httpproxy) && !(conn->protocol&PROT_HTTPS)) {
/* The path sent to the proxy is in fact the entire URL */
@@ -315,7 +458,7 @@ CURLcode http(struct connectdata *conn)
if(data->bits.http_formpost) {
/* we must build the whole darned post sequence first, so that we have
a size of the whole shebang before we start to send it */
http->sendit = getFormData(data->httppost, &http->postsize);
http->sendit = Curl_getFormData(data->httppost, &http->postsize);
}
if(!checkheaders(data, "Host:")) {
@@ -323,9 +466,9 @@ CURLcode http(struct connectdata *conn)
(!(conn->protocol&PROT_HTTPS) && (data->remote_port == PORT_HTTP)) )
/* If (HTTPS on port 443) OR (non-HTTPS on port 80) then don't include
the port number in the host string */
data->ptr_host = maprintf("Host: %s\r\n", host);
data->ptr_host = aprintf("Host: %s\r\n", host);
else
data->ptr_host = maprintf("Host: %s:%d\r\n", host, data->remote_port);
data->ptr_host = aprintf("Host: %s:%d\r\n", host, data->remote_port);
}
if(!checkheaders(data, "Pragma:"))
@@ -389,7 +532,7 @@ CURLcode http(struct connectdata *conn)
if(count) {
add_buffer(req_buffer, "\r\n", 2);
}
cookie_freelist(store); /* free the cookie list */
Curl_cookie_freelist(store); /* free the cookie list */
co=NULL;
}
@@ -451,7 +594,7 @@ CURLcode http(struct connectdata *conn)
}
if(data->bits.http_formpost) {
if(FormInit(&http->form, http->sendit)) {
if(Curl_FormInit(&http->form, http->sendit)) {
failf(data, "Internal HTTP POST error!\n");
return CURLE_HTTP_POST_ERROR;
}
@@ -461,24 +604,24 @@ CURLcode http(struct connectdata *conn)
data->fread =
(size_t (*)(char *, size_t, size_t, FILE *))
FormReader; /* set the read function to read from the
generated form data */
Curl_FormReader; /* set the read function to read from the
generated form data */
data->in = (FILE *)&http->form;
add_bufferf(req_buffer,
"Content-Length: %d\r\n", http->postsize-2);
/* set upload size to the progress meter */
pgrsSetUploadSize(data, http->postsize);
Curl_pgrsSetUploadSize(data, http->postsize);
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
result = Transfer(conn, data->firstsocket, -1, TRUE,
result = Curl_Transfer(conn, data->firstsocket, -1, TRUE,
&http->readbytecount,
data->firstsocket,
&http->writebytecount);
if(result) {
FormFree(http->sendit); /* free that whole lot */
Curl_FormFree(http->sendit); /* free that whole lot */
return result;
}
}
@@ -494,14 +637,14 @@ CURLcode http(struct connectdata *conn)
add_bufferf(req_buffer, "\015\012");
/* set the upload size to the progress meter */
pgrsSetUploadSize(data, data->infilesize);
Curl_pgrsSetUploadSize(data, data->infilesize);
/* this sends the buffer and frees all the buffer resources */
data->request_size =
add_buffer_send(data->firstsocket, conn, req_buffer);
/* prepare for transfer */
result = Transfer(conn, data->firstsocket, -1, TRUE,
result = Curl_Transfer(conn, data->firstsocket, -1, TRUE,
&http->readbytecount,
data->firstsocket,
&http->writebytecount);
@@ -547,7 +690,7 @@ CURLcode http(struct connectdata *conn)
add_buffer_send(data->firstsocket, conn, req_buffer);
/* HTTP GET/HEAD download: */
result = Transfer(conn, data->firstsocket, -1, TRUE, bytecount,
result = Curl_Transfer(conn, data->firstsocket, -1, TRUE, bytecount,
-1, NULL); /* nothing to upload */
}
if(result)

View File

@@ -25,13 +25,13 @@
*****************************************************************************/
/* ftp can use this as well */
CURLcode GetHTTPProxyTunnel(struct UrlData *data, int tunnelsocket,
char *hostname, int remote_port);
CURLcode Curl_ConnectHTTPProxyTunnel(struct UrlData *data, int tunnelsocket,
char *hostname, int remote_port);
/* protocol-specific functions set up to be called by the main engine */
CURLcode http(struct connectdata *conn);
CURLcode http_done(struct connectdata *conn);
CURLcode http_connect(struct connectdata *conn);
CURLcode http_close(struct connectdata *conn);
CURLcode Curl_http(struct connectdata *conn);
CURLcode Curl_http_done(struct connectdata *conn);
CURLcode Curl_http_connect(struct connectdata *conn);
CURLcode Curl_http_close(struct connectdata *conn);
#endif

View File

@@ -72,7 +72,7 @@
#define SYS_ERROR -1
char *if2ip(char *interface, char *buf, int buf_size)
char *Curl_if2ip(char *interface, char *buf, int buf_size)
{
int dummy;
char *ip=NULL;

View File

@@ -25,9 +25,9 @@
#include "setup.h"
#if ! defined(WIN32) && ! defined(__BEOS__)
extern char *if2ip(char *interface, char *buf, int buf_size);
extern char *Curl_if2ip(char *interface, char *buf, int buf_size);
#else
#define if2ip(a,b,c) NULL
#define Curl_if2ip(a,b,c) NULL
#endif
#endif

View File

@@ -47,6 +47,9 @@
#include <string.h>
#include <krb.h>
#include "ftp.h"
#include "sendf.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
@@ -95,7 +98,8 @@ strlcpy (char *dst, const char *src, size_t dst_sz)
else
return n + strlen (src);
}
#else
size_t strlcpy (char *dst, const char *src, size_t dst_sz);
#endif
static int
@@ -284,7 +288,8 @@ krb4_auth(void *app_data, struct connectdata *conn)
size_t nread;
int l = sizeof(local_addr);
if(getsockname(conn->data->firstsocket, LOCAL_ADDR, &l) < 0)
if(getsockname(conn->data->firstsocket,
(struct sockaddr *)LOCAL_ADDR, &l) < 0)
perror("getsockname()");
checksum = getpid();
@@ -327,15 +332,15 @@ krb4_auth(void *app_data, struct connectdata *conn)
/*printf("Local address is %s\n", inet_ntoa(localaddr->sin_addr));***/
/*printf("Remote address is %s\n", inet_ntoa(remoteaddr->sin_addr));***/
if(base64_encode(adat.dat, adat.length, &p) < 0) {
if(Curl_base64_encode(adat.dat, adat.length, &p) < 0) {
printf("Out of memory base64-encoding.\n");
return AUTH_CONTINUE;
}
/*ret = command("ADAT %s", p)*/
ftpsendf(conn->data->firstsocket, conn, "ADAT %s", p);
Curl_ftpsendf(conn->data->firstsocket, conn, "ADAT %s", p);
/* wait for feedback */
nread = GetLastResponse(conn->data->firstsocket,
conn->data->buffer, conn);
nread = Curl_GetFTPResponse(conn->data->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
free(p);
@@ -351,7 +356,7 @@ krb4_auth(void *app_data, struct connectdata *conn)
return AUTH_ERROR;
}
p += 5;
len = base64_decode(p, adat.dat);
len = Curl_base64_decode(p, adat.dat);
if(len < 0){
printf("Failed to decode base64 from server.\n");
return AUTH_ERROR;
@@ -389,8 +394,6 @@ struct sec_client_mech krb4_client_mech = {
void krb_kauth(struct connectdata *conn)
{
int ret;
char buf[1024];
des_cblock key;
des_key_schedule schedule;
KTEXT_ST tkt, tktcopy;
@@ -405,10 +408,11 @@ void krb_kauth(struct connectdata *conn)
save = set_command_prot(conn, prot_private);
/*ret = command("SITE KAUTH %s", name);***/
ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->data->firstsocket, conn,
"SITE KAUTH %s", conn->data->user);
/* wait for feedback */
nread = GetLastResponse(conn->data->firstsocket, conn->data->buffer, conn);
nread = Curl_GetFTPResponse(conn->data->firstsocket, conn->data->buffer,
conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/;
@@ -427,7 +431,7 @@ void krb_kauth(struct connectdata *conn)
return;
}
p += 2;
tmp = base64_decode(p, &tkt.dat);
tmp = Curl_base64_decode(p, &tkt.dat);
if(tmp < 0){
printf("Failed to decode base64 in reply.\n");
set_command_prot(conn, save);
@@ -476,7 +480,7 @@ void krb_kauth(struct connectdata *conn)
memset(key, 0, sizeof(key));
memset(schedule, 0, sizeof(schedule));
memset(passwd, 0, sizeof(passwd));
if(base64_encode(tktcopy.dat, tktcopy.length, &p) < 0) {
if(Curl_base64_encode(tktcopy.dat, tktcopy.length, &p) < 0) {
failf(conn->data, "Out of memory base64-encoding.\n");
set_command_prot(conn, save);
/*code = -1;***/
@@ -484,10 +488,11 @@ void krb_kauth(struct connectdata *conn)
}
memset (tktcopy.dat, 0, tktcopy.length);
/*ret = command("SITE KAUTH %s %s", name, p);***/
ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->data->firstsocket, conn,
"SITE KAUTH %s %s", name, p);
/* wait for feedback */
nread = GetLastResponse(conn->data->firstsocket, conn->data->buffer, conn);
nread = Curl_GetFTPResponse(conn->data->firstsocket, conn->data->buffer,
conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/;
free(p);

View File

@@ -1,5 +1,5 @@
#ifndef __HIGHLEVEL_H
#define __HIGHLEVEL_H
#ifndef __KRB4_H
#define __KRB4_H
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
@@ -22,5 +22,6 @@
*
* $Id$
*****************************************************************************/
CURLcode curl_transfer(CURL *curl);
void krb_kauth(struct connectdata *conn);
#endif

View File

@@ -117,18 +117,18 @@ static void * DynaGetFunction(char *name)
static int WriteProc(void *param, char *text, int len)
{
struct UrlData *data = (struct UrlData *)param;
client_write(data, CLIENTWRITE_BODY, text, 0);
Curl_client_write(data, CLIENTWRITE_BODY, text, 0);
return 0;
}
CURLcode ldap_done(struct connectdata *conn)
CURLcode Curl_ldap_done(struct connectdata *conn)
{
return CURLE_OK;
}
/***********************************************************************
*/
CURLcode ldap(struct connectdata *conn)
CURLcode Curl_ldap(struct connectdata *conn)
{
CURLcode status = CURLE_OK;
int rc;

View File

@@ -23,7 +23,7 @@
*
* $Id$
*****************************************************************************/
CURLcode ldap(struct connectdata *conn);
CURLcode ldap_done(struct connectdata *conn);
CURLcode Curl_ldap(struct connectdata *conn);
CURLcode Curl_ldap_done(struct connectdata *conn);
#endif /* __LDAP_H */

View File

@@ -135,7 +135,7 @@ int curl_sclose(int sockfd, int line, char *source)
int res=sclose(sockfd);
fprintf(logfile?logfile:stderr, "FD %s:%d sclose(%d)\n",
source, line, sockfd);
return sockfd;
return res;
}
FILE *curl_fopen(char *file, char *mode, int line, char *source)

View File

@@ -207,7 +207,7 @@ struct asprintf {
size_t alloc; /* length of alloc */
};
int msprintf(char *buffer, const char *format, ...);
int curl_msprintf(char *buffer, const char *format, ...);
static int dprintf_DollarString(char *input, char **end)
{
@@ -955,11 +955,11 @@ static int dprintf_formatf(
if(width >= 0) {
/* RECURSIVE USAGE */
fptr += msprintf(fptr, "%d", width);
fptr += curl_msprintf(fptr, "%d", width);
}
if(prec >= 0) {
/* RECURSIVE USAGE */
fptr += msprintf(fptr, ".%d", prec);
fptr += curl_msprintf(fptr, ".%d", prec);
}
if (p->flags & FLAGS_LONG)
strcat(fptr, "l");
@@ -1025,7 +1025,7 @@ static int addbyter(int output, FILE *data)
return -1;
}
int msnprintf(char *buffer, size_t maxlength, const char *format, ...)
int curl_msnprintf(char *buffer, size_t maxlength, const char *format, ...)
{
va_list ap_save; /* argument pointer */
int retcode;
@@ -1045,7 +1045,7 @@ int msnprintf(char *buffer, size_t maxlength, const char *format, ...)
return retcode;
}
int mvsnprintf(char *buffer, size_t maxlength, const char *format, va_list ap_save)
int curl_mvsnprintf(char *buffer, size_t maxlength, const char *format, va_list ap_save)
{
int retcode;
struct nsprintf info;
@@ -1092,7 +1092,7 @@ static int alloc_addbyter(int output, FILE *data)
}
char *maprintf(const char *format, ...)
char *curl_maprintf(const char *format, ...)
{
va_list ap_save; /* argument pointer */
int retcode;
@@ -1113,7 +1113,7 @@ char *maprintf(const char *format, ...)
return NULL;
}
char *mvaprintf(const char *format, va_list ap_save)
char *curl_mvaprintf(const char *format, va_list ap_save)
{
int retcode;
struct asprintf info;
@@ -1140,7 +1140,7 @@ static int storebuffer(int output, FILE *data)
return output; /* act like fputc() ! */
}
int msprintf(char *buffer, const char *format, ...)
int curl_msprintf(char *buffer, const char *format, ...)
{
va_list ap_save; /* argument pointer */
int retcode;
@@ -1153,7 +1153,7 @@ int msprintf(char *buffer, const char *format, ...)
extern int fputc(int, FILE *);
int mprintf(const char *format, ...)
int curl_mprintf(const char *format, ...)
{
int retcode;
va_list ap_save; /* argument pointer */
@@ -1163,7 +1163,7 @@ int mprintf(const char *format, ...)
return retcode;
}
int mfprintf(FILE *whereto, const char *format, ...)
int curl_mfprintf(FILE *whereto, const char *format, ...)
{
int retcode;
va_list ap_save; /* argument pointer */
@@ -1173,7 +1173,7 @@ int mfprintf(FILE *whereto, const char *format, ...)
return retcode;
}
int mvsprintf(char *buffer, const char *format, va_list ap_save)
int curl_mvsprintf(char *buffer, const char *format, va_list ap_save)
{
int retcode;
retcode = dprintf_formatf(&buffer, storebuffer, format, ap_save);
@@ -1181,12 +1181,12 @@ int mvsprintf(char *buffer, const char *format, va_list ap_save)
return retcode;
}
int mvprintf(const char *format, va_list ap_save)
int curl_mvprintf(const char *format, va_list ap_save)
{
return dprintf_formatf(stdout, fputc, format, ap_save);
}
int mvfprintf(FILE *whereto, const char *format, va_list ap_save)
int curl_mvfprintf(FILE *whereto, const char *format, va_list ap_save)
{
return dprintf_formatf(whereto, fputc, format, ap_save);
}

View File

@@ -51,15 +51,15 @@ enum {
#define LOGINSIZE 64
#define PASSWORDSIZE 64
int ParseNetrc(char *host,
char *login,
char *password)
int Curl_parsenetrc(char *host,
char *login,
char *password)
{
FILE *file;
char netrcbuffer[256];
int retcode=1;
char *home = GetEnv("HOME"); /* portable environment reader */
char *home = curl_getenv("HOME"); /* portable environment reader */
int state=NOTHING;
char state_login=0;

View File

@@ -22,7 +22,7 @@
*
* $Id$
*****************************************************************************/
int ParseNetrc(char *host,
char *login,
char *password);
int Curl_parsenetrc(char *host,
char *login,
char *password);
#endif

View File

@@ -45,7 +45,7 @@
#include "progress.h"
void time2str(char *r, int t)
static void time2str(char *r, int t)
{
int h = (t/3600);
int m = (t-(h*3600))/60;
@@ -55,7 +55,7 @@ void time2str(char *r, int t)
/* The point of this function would be to return a string of the input data,
but never longer than 5 columns. Add suffix k, M, G when suitable... */
char *max5data(double bytes, char *max5)
static char *max5data(double bytes, char *max5)
{
#define ONE_KILOBYTE 1024
#define ONE_MEGABYTE (1024*1024)
@@ -91,16 +91,16 @@ char *max5data(double bytes, char *max5)
*/
void pgrsDone(struct UrlData *data)
void Curl_pgrsDone(struct UrlData *data)
{
if(!(data->progress.flags & PGRS_HIDE)) {
data->progress.lastshow=0;
pgrsUpdate(data); /* the final (forced) update */
Curl_pgrsUpdate(data); /* the final (forced) update */
fprintf(data->err, "\n");
}
}
void pgrsTime(struct UrlData *data, timerid timer)
void Curl_pgrsTime(struct UrlData *data, timerid timer)
{
switch(timer) {
default:
@@ -111,19 +111,19 @@ void pgrsTime(struct UrlData *data, timerid timer)
/* This is set at the start of a single fetch, there may be several
fetches within an operation, why we add all other times relative
to this one */
data->progress.t_startsingle = tvnow();
data->progress.t_startsingle = Curl_tvnow();
break;
case TIMER_NAMELOOKUP:
data->progress.t_nslookup += tvdiff(tvnow(),
data->progress.t_nslookup += Curl_tvdiff(Curl_tvnow(),
data->progress.t_startsingle);
break;
case TIMER_CONNECT:
data->progress.t_connect += tvdiff(tvnow(),
data->progress.t_connect += Curl_tvdiff(Curl_tvnow(),
data->progress.t_startsingle);
break;
case TIMER_PRETRANSFER:
data->progress.t_pretransfer += tvdiff(tvnow(),
data->progress.t_pretransfer += Curl_tvdiff(Curl_tvnow(),
data->progress.t_startsingle);
break;
case TIMER_POSTRANSFER:
@@ -132,22 +132,22 @@ void pgrsTime(struct UrlData *data, timerid timer)
}
}
void pgrsStartNow(struct UrlData *data)
void Curl_pgrsStartNow(struct UrlData *data)
{
data->progress.start = tvnow();
data->progress.start = Curl_tvnow();
}
void pgrsSetDownloadCounter(struct UrlData *data, double size)
void Curl_pgrsSetDownloadCounter(struct UrlData *data, double size)
{
data->progress.downloaded = size;
}
void pgrsSetUploadCounter(struct UrlData *data, double size)
void Curl_pgrsSetUploadCounter(struct UrlData *data, double size)
{
data->progress.uploaded = size;
}
void pgrsSetDownloadSize(struct UrlData *data, double size)
void Curl_pgrsSetDownloadSize(struct UrlData *data, double size)
{
if(size > 0) {
data->progress.size_dl = size;
@@ -155,7 +155,7 @@ void pgrsSetDownloadSize(struct UrlData *data, double size)
}
}
void pgrsSetUploadSize(struct UrlData *data, double size)
void Curl_pgrsSetUploadSize(struct UrlData *data, double size)
{
if(size > 0) {
data->progress.size_ul = size;
@@ -171,7 +171,7 @@ void pgrsSetUploadSize(struct UrlData *data, double size)
*/
int pgrsUpdate(struct UrlData *data)
int Curl_pgrsUpdate(struct UrlData *data)
{
struct timeval now;
int result;
@@ -210,15 +210,15 @@ int pgrsUpdate(struct UrlData *data)
data->progress.flags |= PGRS_HEADERS_OUT; /* headers are shown */
}
now = tvnow(); /* what time is it */
now = Curl_tvnow(); /* what time is it */
if(data->progress.lastshow == tvlong(now))
if(data->progress.lastshow == Curl_tvlong(now))
return 0; /* never update this more than once a second if the end isn't
reached */
data->progress.lastshow = now.tv_sec;
/* The exact time spent so far */
data->progress.timespent = tvdiff (now, data->progress.start);
data->progress.timespent = Curl_tvdiff (now, data->progress.start);
/* The average download speed this far */
data->progress.dlspeed = data->progress.downloaded/(data->progress.timespent!=0.0?data->progress.timespent:1.0);

View File

@@ -36,14 +36,14 @@ typedef enum {
TIMER_LAST /* must be last */
} timerid;
void pgrsDone(struct UrlData *data);
void pgrsStartNow(struct UrlData *data);
void pgrsSetDownloadSize(struct UrlData *data, double size);
void pgrsSetUploadSize(struct UrlData *data, double size);
void pgrsSetDownloadCounter(struct UrlData *data, double size);
void pgrsSetUploadCounter(struct UrlData *data, double size);
int pgrsUpdate(struct UrlData *data);
void pgrsTime(struct UrlData *data, timerid timer);
void Curl_pgrsDone(struct UrlData *data);
void Curl_pgrsStartNow(struct UrlData *data);
void Curl_pgrsSetDownloadSize(struct UrlData *data, double size);
void Curl_pgrsSetUploadSize(struct UrlData *data, double size);
void Curl_pgrsSetDownloadCounter(struct UrlData *data, double size);
void Curl_pgrsSetUploadCounter(struct UrlData *data, double size);
int Curl_pgrsUpdate(struct UrlData *data);
void Curl_pgrsTime(struct UrlData *data, timerid timer);
/* Don't show progress for sizes smaller than: */
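Taken together with the url.c and transfer.c hunks later in this diff, the renamed progress API is driven roughly as follows during one fetch. A hedged sketch, not part of the diff; the sizes and the function name are placeholders:

/* Hedged sketch: call order of the renamed progress functions, as suggested
   by the url.c/transfer.c hunks in this change. Assumes curl's urldata.h. */
static void example_progress_flow(struct UrlData *data)
{
  Curl_pgrsStartNow(data);                 /* whole operation starts */
  Curl_pgrsTime(data, TIMER_STARTSINGLE);  /* one fetch within it starts */

  /* ... name resolution ... */
  Curl_pgrsTime(data, TIMER_NAMELOOKUP);
  /* ... connect (and possibly SSL handshake) ... */
  Curl_pgrsTime(data, TIMER_CONNECT);
  /* ... request sent, transfer about to begin ... */
  Curl_pgrsTime(data, TIMER_PRETRANSFER);

  Curl_pgrsSetDownloadSize(data, 1024.0);    /* expected size, if known */
  Curl_pgrsSetDownloadCounter(data, 512.0);  /* bytes received so far */
  if(Curl_pgrsUpdate(data)) {
    /* a non-zero return means the progress callback asked us to abort */
  }

  Curl_pgrsDone(data);                     /* final (forced) meter update */
}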

View File

@@ -40,13 +40,22 @@
#ifdef KRB4
#define _MPRINTF_REPLACE /* we want curl-functions instead of native ones */
#include <curl/mprintf.h>
#include "security.h"
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include "base64.h"
#include "sendf.h"
#include "ftp.h"
/* The last #include file should be: */
#ifdef MALLOCDEBUG
#include "memdebug.h"
@@ -64,6 +73,7 @@ static struct {
{ prot_private, "private" }
};
#if 0
static const char *
level_to_name(enum protection_level level)
{
@@ -73,6 +83,7 @@ level_to_name(enum protection_level level)
return level_names[i].name;
return "unknown";
}
#endif
#ifndef FTP_SERVER /* not used in server */
static enum protection_level
@@ -319,7 +330,7 @@ sec_vfprintf2(struct connectdata *conn, FILE *f, const char *fmt, va_list ap)
if(conn->data_prot == prot_clear)
return vfprintf(f, fmt, ap);
else {
buf = maprintf(fmt, ap);
buf = aprintf(fmt, ap);
ret = buffer_write(&conn->out_buffer, buf, strlen(buf));
free(buf);
return ret;
@@ -360,7 +371,7 @@ sec_read_msg(struct connectdata *conn, char *s, int level)
int code;
buf = malloc(strlen(s));
len = base64_decode(s + 4, buf); /* XXX */
len = Curl_base64_decode(s + 4, buf); /* XXX */
len = (*mech->decode)(conn->app_data, buf, len, level, conn);
if(len < 0)
@@ -390,7 +401,7 @@ sec_vfprintf(struct connectdata *conn, FILE *f, const char *fmt, va_list ap)
if(!conn->sec_complete)
return vfprintf(f, fmt, ap);
buf = maprintf(fmt, ap);
buf = aprintf(fmt, ap);
len = (*mech->encode)(conn->app_data, buf, strlen(buf),
conn->command_prot, &enc,
conn);
@@ -399,7 +410,7 @@ sec_vfprintf(struct connectdata *conn, FILE *f, const char *fmt, va_list ap)
failf(conn->data, "Failed to encode command.\n");
return -1;
}
if(base64_encode(enc, len, &buf) < 0){
if(Curl_base64_encode(enc, len, &buf) < 0){
failf(conn->data, "Out of memory base64-encoding.\n");
return -1;
}
@@ -461,7 +472,6 @@ sec_status(void)
static int
sec_prot_internal(struct connectdata *conn, int level)
{
int ret;
char *p;
unsigned int s = 1048576;
size_t nread;
@@ -472,11 +482,11 @@ sec_prot_internal(struct connectdata *conn, int level)
}
if(level){
ftpsendf(conn->data->firstsocket, conn,
"PBSZ %u", s);
Curl_ftpsendf(conn->data->firstsocket, conn,
"PBSZ %u", s);
/* wait for feedback */
nread = GetLastResponse(conn->data->firstsocket,
conn->data->buffer, conn);
nread = Curl_GetFTPResponse(conn->data->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
if(/*ret != COMPLETE*/conn->data->buffer[0] != '2'){
@@ -491,11 +501,11 @@ sec_prot_internal(struct connectdata *conn, int level)
conn->buffer_size = s;
}
ftpsendf(conn->data->firstsocket, conn,
"PROT %c", level["CSEP"]);
Curl_ftpsendf(conn->data->firstsocket, conn,
"PROT %c", level["CSEP"]);
/* wait for feedback */
nread = GetLastResponse(conn->data->firstsocket,
conn->data->buffer, conn);
nread = Curl_GetFTPResponse(conn->data->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
if(/*ret != COMPLETE*/conn->data->buffer[0] != '2'){
@@ -600,11 +610,11 @@ sec_login(struct connectdata *conn)
}
infof(data, "Trying %s...\n", (*m)->name);
/*ret = command("AUTH %s", (*m)->name);***/
ftpsendf(conn->data->firstsocket, conn,
Curl_ftpsendf(conn->data->firstsocket, conn,
"AUTH %s", (*m)->name);
/* wait for feedback */
nread = GetLastResponse(conn->data->firstsocket,
conn->data->buffer, conn);
nread = Curl_GetFTPResponse(conn->data->firstsocket,
conn->data->buffer, conn, NULL);
if(nread < 0)
return /*CURLE_OPERATION_TIMEOUTED*/-1;
if(/*ret != CONTINUE*/conn->data->buffer[0] != '3'){

View File

@@ -52,7 +52,7 @@
/* infof() is for info message along the way */
void infof(struct UrlData *data, char *fmt, ...)
void Curl_infof(struct UrlData *data, char *fmt, ...)
{
va_list ap;
if(data->bits.verbose) {
@@ -66,7 +66,7 @@ void infof(struct UrlData *data, char *fmt, ...)
/* failf() is for messages stating why we failed, the LAST one will be
returned for the user (if requested) */
void failf(struct UrlData *data, char *fmt, ...)
void Curl_failf(struct UrlData *data, char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
@@ -78,13 +78,13 @@ void failf(struct UrlData *data, char *fmt, ...)
}
/* sendf() sends the formated data to the server */
size_t sendf(int fd, struct UrlData *data, char *fmt, ...)
size_t Curl_sendf(int fd, struct UrlData *data, char *fmt, ...)
{
size_t bytes_written;
char *s;
va_list ap;
va_start(ap, fmt);
s = mvaprintf(fmt, ap);
s = vaprintf(fmt, ap);
va_end(ap);
if(!s)
return 0; /* failure */
@@ -104,43 +104,8 @@ size_t sendf(int fd, struct UrlData *data, char *fmt, ...)
return(bytes_written);
}
/*
* ftpsendf() sends the formated string as a ftp command to a ftp server
*
* NOTE: we build the command in a fixed-length buffer, which sets length
* restrictions on the command!
*
*/
size_t ftpsendf(int fd, struct connectdata *conn, char *fmt, ...)
{
size_t bytes_written;
char s[256];
va_list ap;
va_start(ap, fmt);
vsnprintf(s, 250, fmt, ap);
va_end(ap);
if(conn->data->bits.verbose)
fprintf(conn->data->err, "> %s\n", s);
strcat(s, "\r\n"); /* append a trailing CRLF */
#ifdef KRB4
if(conn->sec_complete && conn->data->cmdchannel) {
bytes_written = sec_fprintf(conn, conn->data->cmdchannel, s);
fflush(conn->data->cmdchannel);
}
else
#endif /* KRB4 */
{
bytes_written = swrite(fd, s, strlen(s));
}
return(bytes_written);
}
/* ssend() sends plain (binary) data to the server */
size_t ssend(int fd, struct connectdata *conn, void *mem, size_t len)
size_t Curl_ssend(int fd, struct connectdata *conn, void *mem, size_t len)
{
size_t bytes_written;
struct UrlData *data=conn->data; /* conn knows data, not vice versa */
@@ -170,10 +135,10 @@ size_t ssend(int fd, struct connectdata *conn, void *mem, size_t len)
The bit pattern defines to what "streams" to write to. Body and/or header.
The defines are in sendf.h of course.
*/
CURLcode client_write(struct UrlData *data,
int type,
char *ptr,
size_t len)
CURLcode Curl_client_write(struct UrlData *data,
int type,
char *ptr,
size_t len)
{
size_t wrote;
@@ -198,92 +163,3 @@ CURLcode client_write(struct UrlData *data,
return CURLE_OK;
}
/*
* add_buffer_init() returns a fine buffer struct
*/
send_buffer *add_buffer_init(void)
{
send_buffer *blonk;
blonk=(send_buffer *)malloc(sizeof(send_buffer));
if(blonk) {
memset(blonk, 0, sizeof(send_buffer));
return blonk;
}
return NULL; /* failed, go home */
}
/*
* add_buffer_send() sends a buffer and frees all associated memory.
*/
size_t add_buffer_send(int sockfd, struct connectdata *conn, send_buffer *in)
{
size_t amount;
if(conn->data->bits.verbose) {
fputs("> ", conn->data->err);
/* this data _may_ contain binary stuff */
fwrite(in->buffer, in->size_used, 1, conn->data->err);
}
amount = ssend(sockfd, conn, in->buffer, in->size_used);
if(in->buffer)
free(in->buffer);
free(in);
return amount;
}
/*
* add_bufferf() builds a buffer from the formatted input
*/
CURLcode add_bufferf(send_buffer *in, char *fmt, ...)
{
CURLcode result = CURLE_OUT_OF_MEMORY;
char *s;
va_list ap;
va_start(ap, fmt);
s = mvaprintf(fmt, ap); /* this allocs a new string to append */
va_end(ap);
if(s) {
result = add_buffer(in, s, strlen(s));
free(s);
}
return result;
}
/*
* add_buffer() appends a memory chunk to the existing one
*/
CURLcode add_buffer(send_buffer *in, void *inptr, size_t size)
{
char *new_rb;
int new_size;
if(size > 0) {
if(!in->buffer ||
((in->size_used + size) > (in->size_max - 1))) {
new_size = (in->size_used+size)*2;
if(in->buffer)
/* we have a buffer, enlarge the existing one */
new_rb = (char *)realloc(in->buffer, new_size);
else
/* create a new buffer */
new_rb = (char *)malloc(new_size);
if(!new_rb)
return CURLE_OUT_OF_MEMORY;
in->buffer = new_rb;
in->size_max = new_size;
}
memcpy(&in->buffer[in->size_used], inptr, size);
in->size_used += size;
}
return CURLE_OK;
}
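The add_buffer helpers deleted from sendf.c above grow their buffer on demand (doubling the allocation) and are the same routines the http.c hunks in this diff call to assemble a request. A hedged sketch of that usage pattern, not part of the diff; the request line and host are made up:

/* Hedged sketch: typical send_buffer usage as seen in the http.c hunks.
   Assumes curl's internal headers for send_buffer and the add_buffer API. */
static CURLcode example_request(struct connectdata *conn)
{
  struct UrlData *data = conn->data;
  send_buffer *req = add_buffer_init();
  if(!req)
    return CURLE_OUT_OF_MEMORY;

  add_bufferf(req, "GET %s HTTP/1.0\r\n", "/index.html");
  add_bufferf(req, "Host: %s\r\n", "example.com");
  add_buffer(req, "\r\n", 2);              /* blank line ends the headers */

  /* add_buffer_send() writes the collected bytes and frees the buffer */
  data->request_size = add_buffer_send(data->firstsocket, conn, req);
  return CURLE_OK;
}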

View File

@@ -23,11 +23,15 @@
* $Id$
*****************************************************************************/
size_t ftpsendf(int fd, struct connectdata *, char *fmt, ...);
size_t sendf(int fd, struct UrlData *, char *fmt, ...);
size_t ssend(int fd, struct connectdata *, void *fmt, size_t len);
void infof(struct UrlData *, char *fmt, ...);
void failf(struct UrlData *, char *fmt, ...);
size_t Curl_sendf(int fd, struct UrlData *, char *fmt, ...);
size_t Curl_ssend(int fd, struct connectdata *, void *fmt, size_t len);
void Curl_infof(struct UrlData *, char *fmt, ...);
void Curl_failf(struct UrlData *, char *fmt, ...);
#define sendf Curl_sendf
#define ssend Curl_ssend
#define infof Curl_infof
#define failf Curl_failf
struct send_buffer {
char *buffer;
@@ -40,12 +44,7 @@ typedef struct send_buffer send_buffer;
#define CLIENTWRITE_HEADER 2
#define CLIENTWRITE_BOTH (CLIENTWRITE_BODY|CLIENTWRITE_HEADER)
CURLcode client_write(struct UrlData *data, int type, char *ptr,
size_t len);
send_buffer *add_buffer_init(void);
CURLcode add_buffer(send_buffer *in, void *inptr, size_t size);
CURLcode add_bufferf(send_buffer *in, char *fmt, ...);
size_t add_buffer_send(int sockfd, struct connectdata *conn, send_buffer *in);
CURLcode Curl_client_write(struct UrlData *data, int type, char *ptr,
size_t len);
#endif

View File

@@ -33,24 +33,24 @@
#include "sendf.h"
#include "speedcheck.h"
void speedinit(struct UrlData *data)
void Curl_speedinit(struct UrlData *data)
{
memset(&data->keeps_speed, 0, sizeof(struct timeval));
}
CURLcode speedcheck(struct UrlData *data,
struct timeval now)
CURLcode Curl_speedcheck(struct UrlData *data,
struct timeval now)
{
if((data->progress.current_speed >= 0) &&
data->low_speed_time &&
(tvlong(data->keeps_speed) != 0) &&
(Curl_tvlong(data->keeps_speed) != 0) &&
(data->progress.current_speed < data->low_speed_limit)) {
/* We are now below the "low speed limit". If we are below it
for "low speed time" seconds we consider that enough reason
to abort the download. */
if( tvdiff(now, data->keeps_speed) > data->low_speed_time) {
if( Curl_tvdiff(now, data->keeps_speed) > data->low_speed_time) {
/* we have been this slow for long enough, now die */
failf(data,
"Operation too slow. "

View File

@@ -27,8 +27,8 @@
#include "timeval.h"
void speedinit(struct UrlData *data);
CURLcode speedcheck(struct UrlData *data,
struct timeval now);
void Curl_speedinit(struct UrlData *data);
CURLcode Curl_speedcheck(struct UrlData *data,
struct timeval now);
#endif

View File

@@ -62,9 +62,10 @@ static int passwd_callback(char *buf, int num, int verify
* from) source from the SSLeay package written by Eric Young
* (eay@cryptsoft.com). */
int SSL_cert_stuff(struct UrlData *data,
char *cert_file,
char *key_file)
static
int cert_stuff(struct UrlData *data,
char *cert_file,
char *key_file)
{
if (cert_file != NULL) {
SSL *ssl;
@@ -124,6 +125,7 @@ int SSL_cert_stuff(struct UrlData *data,
#endif
#ifdef USE_SSLEAY
static
int cert_verify_callback(int ok, X509_STORE_CTX *ctx)
{
X509 *err_cert;
@@ -139,7 +141,7 @@ int cert_verify_callback(int ok, X509_STORE_CTX *ctx)
/* ====================================================== */
int
UrgSSLConnect (struct UrlData *data)
Curl_SSLConnect (struct UrlData *data)
{
#ifdef USE_SSLEAY
int err;
@@ -163,7 +165,7 @@ UrgSSLConnect (struct UrlData *data)
RAND_screen();
#else
int len;
char *area = MakeFormBoundary();
char *area = Curl_FormBoundary();
if(!area)
return 3; /* out of memory */
@@ -198,7 +200,7 @@ UrgSSLConnect (struct UrlData *data)
}
if(data->cert) {
if (!SSL_cert_stuff(data, data->cert, data->cert)) {
if (!cert_stuff(data, data->cert, data->cert)) {
failf(data, "couldn't use certificate!\n");
return 2;
}

View File

@@ -22,8 +22,5 @@
*
* $Id$
*****************************************************************************/
int SSL_cert_stuff(struct UrlData *data,
char *cert_file,
char *key_file);
int UrgSSLConnect (struct UrlData *data);
int Curl_SSLConnect (struct UrlData *data);
#endif

View File

@@ -25,7 +25,7 @@
#include <string.h>
int strequal(const char *first, const char *second)
int Curl_strequal(const char *first, const char *second)
{
#if defined(HAVE_STRCASECMP)
return !strcasecmp(first, second);
@@ -45,7 +45,7 @@ int strequal(const char *first, const char *second)
#endif
}
int strnequal(const char *first, const char *second, size_t max)
int Curl_strnequal(const char *first, const char *second, size_t max)
{
#if defined(HAVE_STRCASECMP)
return !strncasecmp(first, second, max);

View File

@@ -22,7 +22,10 @@
*
* $Id$
*****************************************************************************/
int strequal(const char *first, const char *second);
int strnequal(const char *first, const char *second, size_t max);
int Curl_strequal(const char *first, const char *second);
int Curl_strnequal(const char *first, const char *second, size_t max);
#define strequal(a,b) Curl_strequal(a,b)
#define strnequal(a,b,c) Curl_strnequal(a,b,c)
#endif
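strequal.h, like sendf.h above and transfer.h later in this diff, keeps the old name alive as a plain macro, so call sites compile unchanged while the linker only sees the Curl_-prefixed symbol. A minimal stand-alone sketch of that pattern, not part of the diff; is_ftp() is invented for illustration:

/* Hedged illustration of the rename-plus-macro trick used throughout this
   change, using the real strequal/Curl_strequal pair: */
int Curl_strequal(const char *first, const char *second);
#define strequal(a,b) Curl_strequal(a,b)

/* an untouched "old" call site keeps its spelling: */
static int is_ftp(const char *proto)
{
  return strequal(proto, "FTP");  /* expands to Curl_strequal(proto, "FTP") */
}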

View File

@@ -71,7 +71,7 @@
#include "urldata.h"
#include <curl/curl.h>
#include "download.h"
#include "transfer.h"
#include "sendf.h"
#include "formdata.h"
#include "progress.h"
@@ -81,7 +81,6 @@
#define TELOPTS
#define TELCMDS
#define SLC_NAMES
#include "arpa_telnet.h"
@@ -98,10 +97,12 @@
#define SB_EOF() (subpointer >= subend)
#define SB_LEN() (subend - subpointer)
static
void telwrite(struct UrlData *data,
unsigned char *buffer, /* Data to write */
int count); /* Number of bytes to write */
static
void telrcv(struct UrlData *data,
unsigned char *inbuf, /* Data received from socket */
int count); /* Number of bytes received */
@@ -155,6 +156,7 @@ static int him[256];
static int himq[256];
static int him_preferred[256];
static
void init_telnet(struct UrlData *data)
{
telrcv_state = TS_DATA;
@@ -246,6 +248,7 @@ static void send_negotiation(struct UrlData *data, int cmd, int option)
printoption(data, "SENT", cmd, option);
}
static
void set_remote_option(struct UrlData *data, int option, int newstate)
{
if(newstate == YES)
@@ -326,6 +329,7 @@ void set_remote_option(struct UrlData *data, int option, int newstate)
}
}
static
void rec_will(struct UrlData *data, int option)
{
switch(him[option])
@@ -377,6 +381,7 @@ void rec_will(struct UrlData *data, int option)
}
}
static
void rec_wont(struct UrlData *data, int option)
{
switch(him[option])
@@ -500,6 +505,7 @@ void set_local_option(struct UrlData *data, int option, int newstate)
}
}
static
void rec_do(struct UrlData *data, int option)
{
switch(us[option])
@@ -550,7 +556,8 @@ void rec_do(struct UrlData *data, int option)
break;
}
}
static
void rec_dont(struct UrlData *data, int option)
{
switch(us[option])
@@ -669,6 +676,7 @@ static void suboption(struct UrlData *data)
return;
}
static
void telrcv(struct UrlData *data,
unsigned char *inbuf, /* Data received from socket */
int count) /* Number of bytes received */
@@ -689,7 +697,7 @@ void telrcv(struct UrlData *data,
break; /* Ignore \0 after CR */
}
client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
Curl_client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
continue;
case TS_DATA:
@@ -703,7 +711,7 @@ void telrcv(struct UrlData *data,
telrcv_state = TS_CR;
}
client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
Curl_client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
continue;
case TS_IAC:
@@ -727,7 +735,7 @@ void telrcv(struct UrlData *data,
telrcv_state = TS_SB;
continue;
case IAC:
client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
Curl_client_write(data, CLIENTWRITE_BODY, (char *)&c, 1);
break;
case DM:
case NOP:
@@ -818,6 +826,7 @@ void telrcv(struct UrlData *data,
}
}
static
void telwrite(struct UrlData *data,
unsigned char *buffer, /* Data to write */
int count) /* Number of bytes to write */
@@ -847,12 +856,12 @@ void telwrite(struct UrlData *data,
}
}
CURLcode telnet_done(struct connectdata *conn)
CURLcode Curl_telnet_done(struct connectdata *conn)
{
return CURLE_OK;
}
CURLcode telnet(struct connectdata *conn)
CURLcode Curl_telnet(struct connectdata *conn)
{
struct UrlData *data = conn->data;
int sockfd = data->firstsocket;

View File

@@ -23,7 +23,7 @@
*
* $Id$
*****************************************************************************/
CURLcode telnet(struct connectdata *conn);
CURLcode telnet_done(struct connectdata *conn);
CURLcode Curl_telnet(struct connectdata *conn);
CURLcode Curl_telnet_done(struct connectdata *conn);
#endif

View File

@@ -53,7 +53,7 @@ gettimeofday (struct timeval *tp, void *nothing)
#endif
#endif
struct timeval tvnow ()
struct timeval Curl_tvnow ()
{
struct timeval now;
#ifdef HAVE_GETTIMEOFDAY
@@ -65,12 +65,12 @@ struct timeval tvnow ()
return now;
}
double tvdiff (struct timeval t1, struct timeval t2)
double Curl_tvdiff (struct timeval t1, struct timeval t2)
{
return (double)(t1.tv_sec - t2.tv_sec) + ((t1.tv_usec-t2.tv_usec)/1000000.0);
}
long tvlong (struct timeval t1)
long Curl_tvlong (struct timeval t1)
{
return t1.tv_sec;
}
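The renamed time helpers above are what the progress and speed-check code use to measure elapsed wall-clock time. A hedged sketch of the idiom, not part of the diff; took_too_long() is invented for illustration:

/* Hedged sketch: elapsed-time measurement with the renamed helpers, the same
   idiom the progress.c and speedcheck.c hunks use. */
static int took_too_long(long limit_seconds)
{
  struct timeval start = Curl_tvnow();
  struct timeval now;

  /* ... work happens here ... */

  now = Curl_tvnow();
  /* Curl_tvdiff() returns seconds as a double, including the sub-second part */
  return Curl_tvdiff(now, start) > (double)limit_seconds;
}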

View File

@@ -42,8 +42,8 @@ struct timeval {
#endif
#endif
struct timeval tvnow ();
double tvdiff (struct timeval t1, struct timeval t2);
long tvlong (struct timeval t1);
struct timeval Curl_tvnow ();
double Curl_tvdiff (struct timeval t1, struct timeval t2);
long Curl_tvlong (struct timeval t1);
#endif

View File

@@ -84,7 +84,7 @@
#include "getenv.h"
#include "hostip.h"
#include "download.h"
#include "transfer.h"
#include "sendf.h"
#include "speedcheck.h"
#include "getpass.h"
@@ -103,14 +103,17 @@
#define min(a, b) ((a) < (b) ? (a) : (b))
#endif
CURLcode
/* Parts of this function was written by the friendly Mark Butler
<butlerm@xmission.com>. */
CURLcode static
_Transfer(struct connectdata *c_conn)
{
size_t nread; /* number of bytes read */
int bytecount = 0; /* total number of bytes read */
int writebytecount = 0; /* number of bytes written */
long contentlength=0; /* size of incoming data */
struct timeval start = tvnow();
struct timeval start = Curl_tvnow();
struct timeval now = start; /* current time */
bool header = TRUE; /* incoming data has HTTP header */
int headerline = 0; /* counts header lines to better track the
@@ -151,19 +154,19 @@ _Transfer(struct connectdata *c_conn)
myalarm (0); /* switch off the alarm-style timeout */
now = tvnow();
now = Curl_tvnow();
start = now;
#define KEEP_READ 1
#define KEEP_WRITE 2
pgrsTime(data, TIMER_PRETRANSFER);
speedinit(data);
Curl_pgrsTime(data, TIMER_PRETRANSFER);
Curl_speedinit(data);
if (!conn->getheader) {
header = FALSE;
if(conn->size > 0)
pgrsSetDownloadSize(data, conn->size);
Curl_pgrsSetDownloadSize(data, conn->size);
}
/* we want header and/or body, if neither then don't do this! */
if(conn->getheader ||
@@ -314,7 +317,7 @@ _Transfer(struct connectdata *c_conn)
if ('\n' == *p)
p++; /* pass the \n byte */
pgrsSetDownloadSize(data, conn->size);
Curl_pgrsSetDownloadSize(data, conn->size);
header = FALSE; /* no more header to parse! */
@@ -324,8 +327,8 @@ _Transfer(struct connectdata *c_conn)
if (data->bits.http_include_header)
writetype |= CLIENTWRITE_BODY;
urg = client_write(data, writetype, data->headerbuff,
p - data->headerbuff);
urg = Curl_client_write(data, writetype, data->headerbuff,
p - data->headerbuff);
if(urg)
return urg;
@@ -374,7 +377,7 @@ _Transfer(struct connectdata *c_conn)
}
else if(data->cookies &&
strnequal("Set-Cookie: ", p, 11)) {
cookie_add(data->cookies, TRUE, &p[12]);
Curl_cookie_add(data->cookies, TRUE, &p[12]);
}
else if(strnequal("Last-Modified:", p,
strlen("Last-Modified:")) &&
@@ -398,7 +401,7 @@ _Transfer(struct connectdata *c_conn)
if (data->bits.http_include_header)
writetype |= CLIENTWRITE_BODY;
urg = client_write(data, writetype, p, hbuflen);
urg = Curl_client_write(data, writetype, p, hbuflen);
if(urg)
return urg;
@@ -456,14 +459,14 @@ _Transfer(struct connectdata *c_conn)
default:
if(timeofdoc < data->timevalue) {
infof(data,
"The requested document is not new enough");
"The requested document is not new enough\n");
return CURLE_OK;
}
break;
case TIMECOND_IFUNMODSINCE:
if(timeofdoc > data->timevalue) {
infof(data,
"The requested document is not old enough");
"The requested document is not old enough\n");
return CURLE_OK;
}
break;
@@ -484,9 +487,9 @@ _Transfer(struct connectdata *c_conn)
bytecount += nread;
pgrsSetDownloadCounter(data, (double)bytecount);
Curl_pgrsSetDownloadCounter(data, (double)bytecount);
urg = client_write(data, CLIENTWRITE_BODY, str, nread);
urg = Curl_client_write(data, CLIENTWRITE_BODY, str, nread);
if(urg)
return urg;
@@ -513,7 +516,7 @@ _Transfer(struct connectdata *c_conn)
break;
}
writebytecount += nread;
pgrsSetUploadCounter(data, (double)writebytecount);
Curl_pgrsSetUploadCounter(data, (double)writebytecount);
/* convert LF to CRLF if so asked */
if (data->crlf) {
@@ -543,11 +546,11 @@ _Transfer(struct connectdata *c_conn)
break;
}
now = tvnow();
if(pgrsUpdate(data))
now = Curl_tvnow();
if(Curl_pgrsUpdate(data))
urg = CURLE_ABORTED_BY_CALLBACK;
else
urg = speedcheck (data, now);
urg = Curl_speedcheck (data, now);
if (urg)
return urg;
@@ -560,7 +563,7 @@ _Transfer(struct connectdata *c_conn)
conn->upload_bufsize=(long)min(data->progress.ulspeed, BUFSIZE);
}
if (data->timeout && (tvdiff (now, start) > data->timeout)) {
if (data->timeout && (Curl_tvdiff (now, start) > data->timeout)) {
failf (data, "Operation timed out with %d out of %d bytes received",
bytecount, conn->size);
return CURLE_OPERATION_TIMEOUTED;
@@ -573,7 +576,7 @@ _Transfer(struct connectdata *c_conn)
contentlength-bytecount);
return CURLE_PARTIAL_FILE;
}
if(pgrsUpdate(data))
if(Curl_pgrsUpdate(data))
return CURLE_ABORTED_BY_CALLBACK;
if(conn->bytecountp)
@@ -592,10 +595,10 @@ CURLcode curl_transfer(CURL *curl)
struct UrlData *data = curl;
struct connectdata *c_connect=NULL;
pgrsStartNow(data);
Curl_pgrsStartNow(data);
do {
pgrsTime(data, TIMER_STARTSINGLE);
Curl_pgrsTime(data, TIMER_STARTSINGLE);
res = curl_connect(curl, (CURLconnect **)&c_connect);
if(res == CURLE_OK) {
res = curl_do(c_connect);
@@ -733,3 +736,32 @@ CURLcode curl_transfer(CURL *curl)
return res;
}
CURLcode
Curl_Transfer(struct connectdata *c_conn, /* connection data */
int sockfd, /* socket to read from or -1 */
int size, /* -1 if unknown at this point */
bool getheader, /* TRUE if header parsing is wanted */
long *bytecountp, /* return number of bytes read or NULL */
int writesockfd, /* socket to write to, it may very well be
the same we read from. -1 disables */
long *writebytecountp /* return number of bytes written or
NULL */
)
{
struct connectdata *conn = (struct connectdata *)c_conn;
if(!conn)
return CURLE_BAD_FUNCTION_ARGUMENT;
/* now copy all input parameters */
conn->sockfd = sockfd;
conn->size = size;
conn->getheader = getheader;
conn->bytecountp = bytecountp;
conn->writesockfd = writesockfd;
conn->writebytecountp = writebytecountp;
return CURLE_OK;
}
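Curl_Transfer() above only records the transfer parameters on the connectdata; the read/write loop itself runs elsewhere. A hedged sketch of a protocol handler registering a plain download, modelled on the http.c GET/HEAD hunk earlier in this diff; example_do() is invented:

/* Hedged sketch, not part of the diff: a curl_do-style handler handing its
   download over to the transfer engine. */
static CURLcode example_do(struct connectdata *conn, long *bytecount)
{
  struct UrlData *data = conn->data;

  /* read from firstsocket, size unknown (-1), parse HTTP-style headers (TRUE),
     count received bytes in *bytecount, nothing to upload (-1, NULL) */
  return Curl_Transfer(conn, data->firstsocket, -1, TRUE, bytecount,
                       -1, NULL);
}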

View File

@@ -1,5 +1,5 @@
#ifndef __DOWNLOAD_H
#define __DOWNLOAD_H
#ifndef __TRANSFER_H
#define __TRANSFER_H
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
@@ -22,15 +22,23 @@
*
* $Id$
*****************************************************************************/
CURLcode curl_transfer(CURL *curl);
CURLcode
Transfer (struct connectdata *data,
int sockfd, /* socket to read from or -1 */
int size, /* -1 if unknown at this point */
bool getheader, /* TRUE if header parsing is wanted */
long *bytecountp, /* return number of bytes read */
int writesockfd, /* socket to write to, it may very well be
the same we read from. -1 disables */
long *writebytecountp /* return number of bytes written */
Curl_Transfer (struct connectdata *data,
int sockfd, /* socket to read from or -1 */
int size, /* -1 if unknown at this point */
bool getheader, /* TRUE if header parsing is wanted */
long *bytecountp, /* return number of bytes read */
int writesockfd, /* socket to write to, it may very well be
the same we read from. -1 disables */
long *writebytecountp /* return number of bytes written */
);
#ifdef _OLDCURL
/* "hackish" define to make sources compile without too much human editing.
Don't use "Tranfer()" anymore! */
#define Transfer(a,b,c,d,e,f,g) Curl_Transfer(a,b,c,d,e,f,g)
#endif
#endif

lib/url.c
View File

@@ -85,9 +85,8 @@
#include "ssluse.h"
#include "hostip.h"
#include "if2ip.h"
#include "download.h"
#include "transfer.h"
#include "sendf.h"
#include "speedcheck.h"
#include "getpass.h"
#include "progress.h"
#include "cookie.h"
@@ -224,7 +223,7 @@ void static urlfree(struct UrlData *data, bool totally)
/* the URL is allocated, free it! */
free(data->url);
cookie_cleanup(data->cookies);
Curl_cookie_cleanup(data->cookies);
free(data);
@@ -248,6 +247,7 @@ CURLcode curl_close(CURL *curl)
return CURLE_OK;
}
static
int my_getpass(void *clientp, char *prompt, char* buffer, int buflen )
{
char *retbuf;
@@ -381,7 +381,7 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
case CURLOPT_COOKIEFILE:
cookiefile = (char *)va_arg(param, void *);
if(cookiefile) {
data->cookies = cookie_init(cookiefile);
data->cookies = Curl_cookie_init(cookiefile);
}
break;
case CURLOPT_WRITEHEADER:
@@ -533,50 +533,11 @@ CURLcode curl_setopt(CURL *curl, CURLoption option, ...)
return CURLE_OK;
}
/*
* Read everything until a newline.
*/
int GetLine(int sockfd, char *buf, struct UrlData *data)
{
int nread;
int read_rc=1;
char *ptr;
ptr=buf;
/* get us a full line, terminated with a newline */
for(nread=0;
(nread<BUFSIZE) && read_rc;
nread++, ptr++) {
#ifdef USE_SSLEAY
if (data->ssl.use) {
read_rc = SSL_read(data->ssl.handle, ptr, 1);
}
else {
#endif
read_rc = sread(sockfd, ptr, 1);
#ifdef USE_SSLEAY
}
#endif /* USE_SSLEAY */
if (*ptr == '\n')
break;
}
*ptr=0; /* zero terminate */
if(data->bits.verbose) {
fputs("< ", data->err);
fwrite(buf, 1, nread, data->err);
fputs("\n", data->err);
}
return nread;
}
#ifndef WIN32
#ifndef RETSIGTYPE
#define RETSIGTYPE void
#endif
static
RETSIGTYPE alarmfunc(int signal)
{
/* this is for "-ansi -Wall -pedantic" to stop complaining! (rabe) */
@@ -865,9 +826,9 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
char *proxy=NULL;
char proxy_env[128];
no_proxy=GetEnv("no_proxy");
no_proxy=curl_getenv("no_proxy");
if(!no_proxy)
no_proxy=GetEnv("NO_PROXY");
no_proxy=curl_getenv("NO_PROXY");
if(!no_proxy || !strequal("*", no_proxy)) {
/* NO_PROXY wasn't specified or it wasn't just an asterisk */
@@ -899,22 +860,22 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
strcpy(envp, "_proxy");
/* read the protocol proxy: */
prox=GetEnv(proxy_env);
prox=curl_getenv(proxy_env);
if(!prox) {
/* There was no lowercase variable, try the uppercase version: */
for(envp = proxy_env; *envp; envp++)
*envp = toupper(*envp);
prox=GetEnv(proxy_env);
prox=curl_getenv(proxy_env);
}
if(prox && *prox) { /* don't count "" strings */
proxy = prox; /* use this */
}
else {
proxy = GetEnv("all_proxy"); /* default proxy to use */
proxy = curl_getenv("all_proxy"); /* default proxy to use */
if(!proxy)
proxy=GetEnv("ALL_PROXY");
proxy=curl_getenv("ALL_PROXY");
}
if(proxy && *proxy) {
@@ -935,7 +896,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
*/
char *reurl;
reurl = maprintf("%s://%s", conn->proto, data->url);
reurl = aprintf("%s://%s", conn->proto, data->url);
if(!reurl)
return CURLE_OUT_OF_MEMORY;
@@ -984,9 +945,9 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
data->port = PORT_HTTP;
data->remote_port = PORT_HTTP;
conn->protocol |= PROT_HTTP;
conn->curl_do = http;
conn->curl_done = http_done;
conn->curl_close = http_close;
conn->curl_do = Curl_http;
conn->curl_done = Curl_http_done;
conn->curl_close = Curl_http_close;
}
else if (strequal(conn->proto, "HTTPS")) {
#ifdef USE_SSLEAY
@@ -996,10 +957,10 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
conn->protocol |= PROT_HTTP;
conn->protocol |= PROT_HTTPS;
conn->curl_do = http;
conn->curl_done = http_done;
conn->curl_connect = http_connect;
conn->curl_close = http_close;
conn->curl_do = Curl_http;
conn->curl_done = Curl_http_done;
conn->curl_connect = Curl_http_connect;
conn->curl_close = Curl_http_close;
#else /* USE_SSLEAY */
failf(data, "libcurl was built with SSL disabled, https: not supported!");
@@ -1017,9 +978,9 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
conn->ppath = conn->path;
}
conn->protocol |= PROT_GOPHER;
conn->curl_do = http;
conn->curl_done = http_done;
conn->curl_close = http_close;
conn->curl_do = Curl_http;
conn->curl_done = Curl_http_done;
conn->curl_close = Curl_http_close;
}
else if(strequal(conn->proto, "FTP")) {
char *type;
@@ -1032,14 +993,14 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
!data->bits.tunnel_thru_httpproxy) {
/* Unless we have asked to tunnel ftp operations through the proxy, we
switch and use HTTP operations only */
conn->curl_do = http;
conn->curl_done = http_done;
conn->curl_close = http_close;
conn->curl_do = Curl_http;
conn->curl_done = Curl_http_done;
conn->curl_close = Curl_http_close;
}
else {
conn->curl_do = ftp;
conn->curl_done = ftp_done;
conn->curl_connect = ftp_connect;
conn->curl_do = Curl_ftp;
conn->curl_done = Curl_ftp_done;
conn->curl_connect = Curl_ftp_connect;
}
conn->ppath++; /* don't include the initial slash */
@@ -1076,8 +1037,8 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
data->port = PORT_TELNET;
data->remote_port = PORT_TELNET;
conn->curl_do = telnet;
conn->curl_done = telnet_done;
conn->curl_do = Curl_telnet;
conn->curl_done = Curl_telnet_done;
}
else if (strequal(conn->proto, "DICT")) {
@@ -1085,16 +1046,16 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
if(!data->port)
data->port = PORT_DICT;
data->remote_port = PORT_DICT;
conn->curl_do = dict;
conn->curl_done = dict_done;
conn->curl_do = Curl_dict;
conn->curl_done = Curl_dict_done;
}
else if (strequal(conn->proto, "LDAP")) {
conn->protocol |= PROT_LDAP;
if(!data->port)
data->port = PORT_LDAP;
data->remote_port = PORT_LDAP;
conn->curl_do = ldap;
conn->curl_done = ldap_done;
conn->curl_do = Curl_ldap;
conn->curl_done = Curl_ldap_done;
}
else if (strequal(conn->proto, "FILE")) {
conn->protocol |= PROT_FILE;
@@ -1102,7 +1063,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
conn->curl_do = file;
/* no done() function */
result = Transfer(conn, -1, -1, FALSE, NULL, /* no download */
result = Curl_Transfer(conn, -1, -1, FALSE, NULL, /* no download */
-1, NULL); /* no upload */
return CURLE_OK;
@@ -1114,7 +1075,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
}
if(data->bits.use_netrc) {
if(ParseNetrc(data->hostname, data->user, data->passwd)) {
if(Curl_parsenetrc(data->hostname, data->user, data->passwd)) {
infof(data, "Couldn't find host %s in the .netrc file, using defaults",
data->hostname);
}
@@ -1196,7 +1157,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
data->remote_port = data->port; /* it is the same port */
/* Connect to target host right on */
conn->hp = GetHost(data, conn->name, &conn->hostent_buf);
conn->hp = Curl_gethost(data, conn->name, &conn->hostent_buf);
if(!conn->hp) {
failf(data, "Couldn't resolve host '%s'", conn->name);
return CURLE_COULDNT_RESOLVE_HOST;
@@ -1252,7 +1213,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
}
/* connect to proxy */
conn->hp = GetHost(data, proxyptr, &conn->hostent_buf);
conn->hp = Curl_gethost(data, proxyptr, &conn->hostent_buf);
if(!conn->hp) {
failf(data, "Couldn't resolve proxy '%s'", proxyptr);
return CURLE_COULDNT_RESOLVE_PROXY;
@@ -1260,7 +1221,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
free(proxydup); /* free the duplicate pointer and not the modified */
}
pgrsTime(data, TIMER_NAMELOOKUP);
Curl_pgrsTime(data, TIMER_NAMELOOKUP);
data->firstsocket = socket(AF_INET, SOCK_STREAM, 0);
@@ -1292,12 +1253,12 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
char myhost[256] = "";
unsigned long in;
if(if2ip(data->device, myhost, sizeof(myhost))) {
h = GetHost(data, myhost, &hostdataptr);
if(Curl_if2ip(data->device, myhost, sizeof(myhost))) {
h = Curl_gethost(data, myhost, &hostdataptr);
}
else {
if(strlen(data->device)>1) {
h = GetHost(data, data->device, &hostdataptr);
h = Curl_gethost(data, data->device, &hostdataptr);
}
if(h) {
/* we know data->device is shorter than the myhost array */
@@ -1307,7 +1268,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
if(! *myhost) {
/* need to fix this
h=GetHost(data,
h=Curl_gethost(data,
getmyhost(*myhost,sizeof(myhost)),
hostent_buf,
sizeof(hostent_buf));
@@ -1384,7 +1345,7 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
}
if(hostdataptr)
free(hostdataptr); /* allocated by GetHost() */
free(hostdataptr); /* allocated by Curl_gethost() */
} /* end of device selection support */
#endif /* end of HAVE_INET_NTOA */
@@ -1443,32 +1404,35 @@ static CURLcode _connect(CURL *curl, CURLconnect **in_connect)
char *authorization;
snprintf(data->buffer, BUFSIZE, "%s:%s",
data->proxyuser, data->proxypasswd);
if(base64_encode(data->buffer, strlen(data->buffer),
&authorization) >= 0) {
if(Curl_base64_encode(data->buffer, strlen(data->buffer),
&authorization) >= 0) {
data->ptr_proxyuserpwd =
maprintf("Proxy-authorization: Basic %s\015\012", authorization);
aprintf("Proxy-authorization: Basic %s\015\012", authorization);
free(authorization);
}
}
if((conn->protocol&PROT_HTTP) || data->bits.httpproxy) {
if(data->useragent) {
data->ptr_uagent = maprintf("User-Agent: %s\015\012", data->useragent);
data->ptr_uagent =
aprintf("User-Agent: %s\015\012", data->useragent);
}
}
if(conn->curl_connect) {
/* is there a connect() procedure? */
conn->now = tvnow(); /* set this here for timeout purposes in the
connect procedure, it is later set again for the
progress meter purpose */
/* set start time here for timeout purposes in the
connect procedure, it is later set again for the
progress meter purpose */
conn->now = Curl_tvnow();
result = conn->curl_connect(conn);
if(result != CURLE_OK)
return result; /* pass back errors */
}
pgrsTime(data, TIMER_CONNECT); /* we're connected */
Curl_pgrsTime(data, TIMER_CONNECT); /* we're connected */
conn->now = tvnow(); /* time this *after* the connect is done */
conn->now = Curl_tvnow(); /* time this *after* the connect is done */
conn->bytecount = 0;
/* Figure out the ip-number and the first host name it shows: */
@@ -1560,7 +1524,7 @@ CURLcode curl_done(CURLconnect *c_connect)
else
result = CURLE_OK;
pgrsDone(data); /* done with the operation */
Curl_pgrsDone(data); /* done with the operation */
conn->state = CONN_DONE;

View File

@@ -22,6 +22,7 @@
*
* $Id$
*****************************************************************************/
int GetLine(int sockfd, char *buf, struct UrlData *data);
/* empty */
#endif

View File

@@ -33,9 +33,6 @@ char *curl_version(void)
{
static char version[200];
char *ptr;
#if defined(USE_SSLEAY)
static char sub[2];
#endif
strcpy(version, LIBCURL_NAME " " LIBCURL_VERSION );
ptr=strchr(version, '\0');
@@ -47,17 +44,19 @@ char *curl_version(void)
(SSLEAY_VERSION_NUMBER>>20)&0xff,
(SSLEAY_VERSION_NUMBER>>12)&0xf);
#else
if(SSLEAY_VERSION_NUMBER&0x0f) {
sub[0]=(SSLEAY_VERSION_NUMBER&0x0f) + 'a' -1;
{
char sub[2];
if(SSLEAY_VERSION_NUMBER&0x0f) {
sub[0]=(SSLEAY_VERSION_NUMBER&0x0f) + 'a' -1;
}
else
sub[0]=0;
sprintf(ptr, " (SSL %x.%x.%x%s)",
(SSLEAY_VERSION_NUMBER>>12)&0xff,
(SSLEAY_VERSION_NUMBER>>8)&0xf,
(SSLEAY_VERSION_NUMBER>>4)&0xf, sub);
}
else
sub[0]=0;
sprintf(ptr, " (SSL %x.%x.%x%s)",
(SSLEAY_VERSION_NUMBER>>12)&0xff,
(SSLEAY_VERSION_NUMBER>>8)&0xf,
(SSLEAY_VERSION_NUMBER>>4)&0xf, sub);
#endif
ptr=strchr(ptr, '\0');
#endif

maketgz
View File

@@ -58,29 +58,10 @@ findprog()
############################################################################
#
# If we have autoconf we can just as well update configure.in to contain our
# brand new version number:
# Enforce a rerun of configure (updates the VERSION)
#
#if { findprog autoconf >/dev/null 2>/dev/null; } then
# echo "- No autoconf found, we leave configure as it is"
#else
# # Replace version number in configure.in file:
#
# CONF="configure.in"
#
# sed 's/^AM_INIT_AUTOMAKE.*/AM_INIT_AUTOMAKE(curl,"'$version'")/g' $CONF >$CONF.new
#
# # Save old file
# cp -p $CONF $CONF.old
#
# # Make new configure.in
# mv $CONF.new $CONF
#
# # Update the configure script
# echo "Runs autoconf"
# autoconf
#fi
./config.status --recheck
############################################################################
#
@@ -97,65 +78,7 @@ fi
############################################################################
#
# Now run make first to make the file dates decent and make sure that it
# compiles just before release!
# Now run make dist
#
make
# get current dir
dir=`pwd`
# Get basename
orig=`basename $dir`
# Get the left part of the dash (-)
new=`echo $orig | cut -d- -f1`
# Build new directory name
n=$new-$version;
# Tell the world what we're doing
echo "Copying files into distribution archive";
if [ -r $n ]; then
echo "Directory already exists!"
exit
fi
# Create the new dir
mkdir $n
# Copy all relevant files, with path and permissions!
tar -cf - `cat FILES` | (cd $n; tar -xBpf -)
# Create the distribution root Makefile from Makefile.dist
cp -p Makefile.dist $n/Makefile
############################################################################
#
# Replace @SHELL@ with /bin/sh in the Makefile.in files!
#
echo "Replace @SHELL@ with /bin/sh in the Makefile.in files"
temp=/tmp/curl$$
for file in Makefile.in lib/Makefile.in src/Makefile.in; do
in="$n/$file"
sed "s:@SHELL@:/bin/sh:g" $in >$temp
cp $temp $in
done
rm -rf $temp
# Tell the world what we're doing
echo "creates $n.tar.gz";
# Make a tar archive of it all
tar -cvf $n.tar $n
# gzip the archive
gzip $n.tar
# Make it world readable
chmod a+r $n.tar.gz ;
# Delete the temp dir
rm -rf $n
make dist

View File

@@ -0,0 +1 @@
SUBDIRS = RPM

View File

@@ -0,0 +1,2 @@
EXTRA_DIST = README curl-ssl.spec.in curl.spec.in make_curl_rpm

View File

@@ -1,98 +0,0 @@
%define ver 7.4.2
%define rel 1
%define prefix /usr
Summary: get a file from a FTP, GOPHER or HTTP server.
Name: curl-ssl
Version: %ver
Release: %rel
Copyright: MPL
Group: Utilities/Console
Source: curl-%{version}.tar.gz
URL: http://curl.haxx.se
BuildPrereq: openssl
BuildRoot: /tmp/%{name}-%{version}-%{rel}-root
Packager: Fill In As You Wish
Docdir: %{prefix}/doc
%description
curl-ssl is a client to get documents/files from servers, using
any of the supported protocols. The command is designed to
work without user interaction or any kind of interactivity.
curl-ssl offers a busload of useful tricks like proxy support,
user authentication, ftp upload, HTTP post, file transfer
resume and more.
Note: this version is compiled with SSL (https:) support.
Authors:
Daniel Stenberg <daniel@haxx.se>
%prep
%setup -n %{name}-%{version}
%build
# Needed for snapshot releases.
if [ ! -f configure ]; then
CONF="./autogen.sh"
else
CONF="./configure"
fi
#
# Configuring the package
#
CFLAGS="${RPM_OPT_FLAGS}" ${CONF} \
--prefix=%{prefix} \
--with-ssl
[ "$SMP" != "" ] && JSMP = '"MAKE=make -k -j $SMP"'
make ${JSMP} CFLAGS="-DUSE_SSLEAY -I/usr/include/openssl";
%install
[ -d ${RPM_BUILD_ROOT} ] && rm -rf ${RPM_BUILD_ROOT}
make prefix=${RPM_BUILD_ROOT}%{prefix} install-strip
#
# Generating file lists and store them in file-lists
# Starting with the directory listings
#
find ${RPM_BUILD_ROOT}%{prefix}/{bin,lib,man} -type d | sed "s#^${RPM_BUILD_ROOT}#\%attr (-\,root\,root) \%dir #" > file-lists
#
# Then, the file listings
#
echo "%defattr (-, root, root)" >> file-lists
find ${RPM_BUILD_ROOT}%{prefix} -type f | sed -e "s#^${RPM_BUILD_ROOT}##g" >> file-lists
%clean
(cd ..; rm -rf curl-7.4.2 ${RPM_BUILD_ROOT})
%files -f file-lists
%defattr (-, root, root)
%doc BUGS
%doc CHANGES
%doc CONTRIBUTE
%doc FAQ
%doc FEATURES
%doc FILES
%doc INSTALL
%doc LEGAL
%doc MPL-1.0.txt
%doc README
%doc README.curl
%doc README.libcurl
%doc RESOURCES
%doc TODO
%doc %{name}-ssl.spec.in
%doc %{name}.spec.in

View File

@@ -0,0 +1,78 @@
%define name curl-ssl
%define tarball curl
%define version @VERSION@
%define release 1
%define prefix /usr
%define builddir $RPM_BUILD_DIR/%{tarball}-%{version}
Summary: get a file from a FTP, GOPHER or HTTP server.
Name: %{name}
Version: %{version}
Release: %{release}
Copyright: MPL
Vendor: Daniel Stenberg <Daniel.Stenberg@haxx.se>
Packager: Loic Dachary <loic@senga.org>
Group: Utilities/Console
Source: %{tarball}-%{version}.tar.gz
URL: http://curl.haxx.se/
BuildRoot: /tmp/%{tarball}-%{version}-root
Requires: openssl >= 0.9.5
%description
curl is a client to get documents/files from servers, using any of the
supported protocols. The command is designed to work without user
interaction or any kind of interactivity.
curl offers a busload of useful tricks like proxy support, user
authentication, ftp upload, HTTP post, file transfer resume and more.
%package devel
Summary: The includes, libs, and man pages to develop with libcurl
Group: Development/Libraries
Requires: openssl-devel >= 0.9.5
%description devel
libcurl is the core engine of curl; this packages contains all the libs,
headers, and manual pages to develop applications using libcurl.
%prep
rm -rf %{builddir}
%setup -n %{tarball}-%{version}
%build
%configure --prefix=%{prefix} --with-ssl
make
%install
rm -rf $RPM_BUILD_ROOT
make DESTDIR=$RPM_BUILD_ROOT install-strip
%clean
rm -rf $RPM_BUILD_ROOT
rm -rf %{builddir}
%post
/sbin/ldconfig
%postun
/sbin/ldconfig
%files
%defattr(-,root,root)
%attr(0755,root,root) %{_bindir}/curl
%attr(0644,root,root) %{_mandir}/man1/*
%{_libdir}/libcurl.so*
%doc CHANGES LEGAL MITX.txt MPL-1.1.txt README docs/BUGS
%doc docs/CONTRIBUTE docs/FAQ docs/FEATURES docs/INSTALL docs/INTERNALS
%doc docs/LIBCURL docs/MANUAL docs/README* docs/RESOURCES docs/TODO
%doc docs/TheArtOfHttpScripting
%files devel
%defattr(-,root,root)
%attr(0644,root,root) %{_mandir}/man3/*
%attr(0644,root,root) %{_includedir}/curl/*
%{_libdir}/libcurl.a
%{_libdir}/libcurl.la
%doc docs/examples/*
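
These .spec.in templates have @VERSION@ substituted at configure time. A minimal sketch of turning the generated spec plus a source tarball into packages, assuming a 2001-era RPM (newer setups use the separate rpmbuild tool); file and version names are illustrative:

# build binary and source packages from the generated spec
rpm -ba curl-ssl.spec
# or, if the spec is shipped inside the tarball, build directly from it
rpm -ta curl-7.6.tar.gz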

View File

@@ -1,96 +0,0 @@
%define ver 7.4.2
%define rel 1
%define prefix /usr
Summary: get a file from a FTP, GOPHER or HTTP server.
Name: curl
Version: %ver
Release: %rel
Copyright: MPL
Group: Utilities/Console
Source: %{name}-%{version}.tar.gz
URL: http://curl.haxx.se
BuildRoot: /tmp/%{name}-%{version}-%{rel}-root
Packager: Fill In As You Wish
Docdir: %{prefix}/doc
%description
curl is a client to get documents/files from servers, using
any of the supported protocols. The command is designed to
work without user interaction or any kind of interactivity.
curl offers a busload of useful tricks like proxy support,
user authentication, ftp upload, HTTP post, file transfer
resume and more.
Note: this version is compiled without SSL (https:) support.
Authors:
Daniel Stenberg <daniel@haxx.se>
%prep
%setup -n %{name}-%{version}
%build
# Needed for snapshot releases.
if [ ! -f configure ]; then
CONF="./autogen.sh"
else
CONF="./configure"
fi
#
# Configuring the package
#
CFLAGS="${RPM_OPT_FLAGS}" ${CONF} \
--prefix=%{prefix}
[ "$SMP" != "" ] && JSMP = '"MAKE=make -k -j $SMP"'
make ${JSMP};
%install
[ -d ${RPM_BUILD_ROOT} ] && rm -rf ${RPM_BUILD_ROOT}
make prefix=${RPM_BUILD_ROOT}%{prefix} install-strip
#
# Generating file lists and store them in file-lists
# Starting with the directory listings
#
find ${RPM_BUILD_ROOT}%{prefix}/{bin,lib,man} -type d | sed "s#^${RPM_BUILD_ROOT}#\%attr (-\,root\,root) \%dir #" > file-lists
#
# Then, the file listings
#
echo "%defattr (-, root, root)" >> file-lists
find ${RPM_BUILD_ROOT}%{prefix} -type f | sed -e "s#^${RPM_BUILD_ROOT}##g" >> file-lists
%clean
(cd ..; rm -rf %{name}-%{version} ${RPM_BUILD_ROOT})
%files -f file-lists
%defattr (-, root, root)
%doc BUGS
%doc CHANGES
%doc CONTRIBUTE
%doc FAQ
%doc FEATURES
%doc FILES
%doc INSTALL
%doc LEGAL
%doc MPL-1.0.txt
%doc README
%doc README.curl
%doc README.libcurl
%doc RESOURCES
%doc TODO
%doc %{name}-ssl.spec.in
%doc %{name}.spec.in

View File

@@ -0,0 +1,84 @@
%define name curl
%define version @VERSION@
%define release 1
%define prefix /usr
%define builddir $RPM_BUILD_DIR/%{name}-%{version}
Summary: get a file from an FTP, GOPHER or HTTP server.
Name: %{name}
Version: %{version}
Release: %{release}
Copyright: MPL
Vendor: Daniel Stenberg <Daniel.Stenberg@haxx.se>
Packager: Loic Dachary <loic@senga.org>
Group: Utilities/Console
Source: %{name}-%{version}.tar.gz
URL: http://curl.haxx.se/
BuildRoot: /tmp/%{name}-%{version}-root
%description
curl is a client to get documents/files from servers, using any of the
supported protocols. The command is designed to work without user
interaction or any kind of interactivity.
curl offers a busload of useful tricks like proxy support, user
authentication, ftp upload, HTTP post, file transfer resume and more.
Note: this version is compiled without SSL (https:) support.
%package devel
Summary: The includes, libs, and man pages to develop with libcurl
Group: Development/Libraries
%description devel
libcurl is the core engine of curl; this package contains all the libs,
headers, and manual pages to develop applications using libcurl.
%prep
rm -rf %{builddir}
%setup
%build
%configure --without-ssl --prefix=%{prefix}
make
%install
rm -rf $RPM_BUILD_ROOT
make DESTDIR=$RPM_BUILD_ROOT install-strip
%clean
rm -rf $RPM_BUILD_ROOT
rm -rf %{builddir}
%post
/sbin/ldconfig
%postun
/sbin/ldconfig
%files
%defattr(-,root,root)
%attr(0755,root,root) %{_bindir}/curl
%attr(0644,root,root) %{_mandir}/man1/*
%{prefix}/lib/libcurl.so*
%doc CHANGES LEGAL MITX.txt MPL-1.1.txt README docs/BUGS
%doc docs/CONTRIBUTE docs/FAQ docs/FEATURES docs/INSTALL docs/INTERNALS
%doc docs/LIBCURL docs/MANUAL docs/README* docs/RESOURCES docs/TODO
%doc docs/TheArtOfHttpScripting
%files devel
%defattr(-,root,root)
%attr(0644,root,root) %{_mandir}/man3/*
%attr(0644,root,root) %{_includedir}/curl/*
%{prefix}/lib/libcurl.a
%{prefix}/lib/libcurl.la
%doc docs/examples/*
%changelog
* Sun Jan 7 2001 Loic Dachary <loic@senga.org>
- use _mandir instead of prefix to locate man pages because
_mandir is not always prefix/man/man?.
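
The changelog point about _mandir can be checked against the local macro setup; a quick sketch (the expansions shown are typical values, not guaranteed):

# ask RPM how the relevant macros expand on this system
rpm --eval '%{_prefix}'   # commonly /usr
rpm --eval '%{_mandir}'   # commonly /usr/share/man, i.e. not %{_prefix}/man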

packages/Makefile.am Normal file
View File

@@ -0,0 +1 @@
SUBDIRS = Win32 Linux

View File

@@ -0,0 +1 @@
EXTRA_DIST = README

View File

@@ -10,17 +10,24 @@ INCLUDES = -I$(top_srcdir)/include
bin_PROGRAMS = curl #memtest
noinst_HEADERS = setup.h \
config-win32.h \
urlglob.h \
version.h \
writeout.h
#memtest_SOURCES = memtest.c
#memtest_LDADD = $(top_srcdir)/lib/libcurl.la
curl_SOURCES = main.c hugehelp.c urlglob.c writeout.c
curl_LDADD = $(top_srcdir)/lib/libcurl.la
curl_DEPENDENCIES = $(top_srcdir)/lib/libcurl.la
curl_LDADD = ../lib/libcurl.la
curl_DEPENDENCIES = ../lib/libcurl.la
BUILT_SOURCES = hugehelp.c
CLEANFILES = hugehelp.c
NROFF=@NROFF@
EXTRA_DIST = mkhelp.pl Makefile.vc6
EXTRA_DIST = mkhelp.pl config-win32.h \
Makefile.vc6 Makefile.b32 Makefile.m32
AUTOMAKE_OPTIONS = foreign no-dependencies

File diff suppressed because it is too large.

View File

@@ -39,7 +39,6 @@
#include "writeout.h"
#define CURLseparator "--_curl_--"
#define MIMEseparator "_curl_"
/* This define makes use of the "Curlseparator" as opposed to the
MIMEseparator. We might add support for the latter one in the
@@ -222,6 +221,20 @@ static void helpf(char *fmt, ...)
fprintf(stderr, "curl: try 'curl --help' for more information\n");
}
/*
* A chain of these nodes contain URL to get and where to put the URL's
* contents.
*/
struct getout {
struct getout *next;
char *url;
char *outfile;
int flags;
};
#define GETOUT_OUTFILE (1<<0) /* set when outfile is deemed done */
#define GETOUT_URL (1<<1) /* set when URL is deemed done */
#define GETOUT_USEREMOTE (1<<2) /* use remote file name locally */
static void help(void)
{
printf(CURL_ID "%s\n"
@@ -242,7 +255,7 @@ static void help(void)
" --cacert <file> CA certifciate to verify peer against (HTTPS)\n"
" -f/--fail Fail silently (no output at all) on errors (H)\n"
" -F/--form <name=content> Specify HTTP POST data (H)\n"
" -g/--globoff Disable URL sequences and ranges using {} and []\n"
" -h/--help This help text\n"
" -H/--header <line> Custom header to pass to server. (H)\n"
" -i/--include Include the HTTP-header in the output (H)\n"
@@ -267,6 +280,7 @@ static void help(void)
" -S/--show-error Show error. With -s, make curl show errors when they occur\n"
" -t/--upload Transfer/upload stdin to remote site\n"
" -T/--upload-file <file> Transfer/upload <file> to remote site\n"
" --url <URL> Another way to specify URL to work with\n"
" -u/--user <user[:password]> Specify user and password to use\n"
" -U/--proxy-user <user[:password]> Specify Proxy authentication\n"
" -v/--verbose Makes the operation more talkative\n"
@@ -303,9 +317,7 @@ struct Configurable {
char *referer;
long timeout;
long maxredirs;
char *outfile;
char *headerfile;
char remotefile;
char *ftpport;
char *iface;
unsigned short porttouse;
@@ -320,7 +332,13 @@ struct Configurable {
bool configread;
bool proxytunnel;
long conf;
char *url;
struct getout *url_list; /* point to the first node */
struct getout *url_last; /* point to the last/current node */
struct getout *url_get; /* point to the node to fill in URL */
struct getout *url_out; /* point to the node to fill in outfile */
char *cert;
char *cacert;
char *cert_passwd;
@@ -330,6 +348,7 @@ struct Configurable {
char *krb4level;
bool progressmode;
bool nobuffer;
bool globoff;
char *writeout; /* %-styled format string to output */
@@ -426,6 +445,46 @@ static char *file2memory(FILE *file, long *size)
return NULL; /* no string */
}
void clean_getout(struct Configurable *config)
{
struct getout *node=config->url_list;
struct getout *next;
while(node) {
next = node->next;
if(node->url)
free(node->url);
if(node->outfile)
free(node->outfile);
free(node);
node = next; /* GOTO next */
}
}
struct getout *new_getout(struct Configurable *config)
{
struct getout *node =malloc(sizeof(struct getout));
struct getout *last= config->url_last;
if(node) {
/* clear the struct */
memset(node, 0, sizeof(struct getout));
/* append this new node last in the list */
if(last)
last->next = node;
else
config->url_list = node; /* first node */
/* move the last pointer */
config->url_last = node;
}
return node;
}
typedef enum {
PARAM_OK,
PARAM_OPTION_AMBIGUOUS,
@@ -453,6 +512,8 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
time_t now;
int hit=-1;
bool longopt=FALSE;
bool singleopt=FALSE; /* when true means '-o foo' used '-ofoo' */
/* single-letter,
long-name,
@@ -483,7 +544,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
{"Ea", "cacert", TRUE},
{"f", "fail", FALSE},
{"F", "form", TRUE},
{"g", "globoff", FALSE},
{"h", "help", FALSE},
{"H", "header", TRUE},
{"i", "include", FALSE},
@@ -583,7 +644,11 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
if(hit < 0) {
return PARAM_OPTION_UNKNOWN;
}
if((!nextarg || !*nextarg) && aliases[hit].extraparam) {
if(!longopt && aliases[hit].extraparam && parse[1]) {
nextarg=&parse[1]; /* this is the actual extra parameter */
singleopt=TRUE; /* don't loop anymore after this */
}
else if((!nextarg || !*nextarg) && aliases[hit].extraparam) {
return PARAM_REQUIRES_PARAMETER;
}
else if(nextarg && aliases[hit].extraparam)
@@ -610,7 +675,30 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
break;
case '5':
/* the URL! */
GetStr(&config->url, nextarg);
{
struct getout *url;
if(config->url_get || (config->url_get=config->url_list)) {
/* there's a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_get && (config->url_get->flags&GETOUT_URL))
config->url_get = config->url_get->next;
}
/* now there might or might not be an available node to fill in! */
if(config->url_get)
/* existing node */
url = config->url_get;
else
/* there was no free node, create one! */
url=new_getout(config);
if(url) {
/* fill in the URL */
GetStr(&url->url, nextarg);
url->flags |= GETOUT_URL;
}
}
break;
case '#': /* added 19990617 larsa */
config->progressmode ^= CURL_PROGRESS_BAR;
@@ -690,7 +778,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
/* we already have a string, we append this one
with a separating &-letter */
char *oldpost=config->postfields;
config->postfields=maprintf("%s&%s", oldpost, postdata);
config->postfields=aprintf("%s&%s", oldpost, postdata);
free(oldpost);
free(postdata);
}
@@ -749,6 +837,10 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
return PARAM_BAD_USE;
break;
case 'g': /* g disables URLglobbing */
config->globoff ^= TRUE;
break;
case 'h': /* h for help */
help();
return PARAM_HELP_REQUESTED;
@@ -794,12 +886,37 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
config->nobuffer ^= 1;
break;
case 'o':
/* output file */
GetStr(&config->outfile, nextarg); /* write to this file */
break;
case 'O':
/* output file */
config->remotefile ^= TRUE;
{
struct getout *url;
if(config->url_out || (config->url_out=config->url_list)) {
/* there's a node here, if it already is filled-in continue to find
an "empty" node */
while(config->url_out && (config->url_out->flags&GETOUT_OUTFILE))
config->url_out = config->url_out->next;
}
/* now there might or might not be an available node to fill in! */
if(config->url_out)
/* existing node */
url = config->url_out;
else
/* there was no free node, create one! */
url=new_getout(config);
if(url) {
/* fill in the outfile */
if('o' == letter)
GetStr(&url->outfile, nextarg);
else {
url->outfile=NULL; /* leave it */
url->flags |= GETOUT_USEREMOTE;
}
url->flags |= GETOUT_OUTFILE;
}
}
break;
case 'P':
/* This makes the FTP sessions use PORT instead of PASV */
@@ -951,7 +1068,7 @@ static ParameterError getparameter(char *flag, /* f or -long-flag */
}
hit = -1;
} while(*++parse && !*usedarg);
} while(!singleopt && *++parse && !*usedarg);
return PARAM_OK;
}
@@ -1253,8 +1370,6 @@ void progressbarinit(struct ProgressData *bar)
void free_config_fields(struct Configurable *config)
{
if(config->url)
free(config->url);
if(config->userpwd)
free(config->userpwd);
if(config->postfields)
@@ -1271,8 +1386,6 @@ void free_config_fields(struct Configurable *config)
free(config->krb4level);
if(config->headerfile)
free(config->headerfile);
if(config->outfile)
free(config->outfile);
if(config->ftpport)
free(config->ftpport);
if(config->infile)
@@ -1300,13 +1413,15 @@ operate(struct Configurable *config, int argc, char *argv[])
char errorbuffer[CURL_ERROR_SIZE];
char useragent[128]; /* buah, we don't want a larger default user agent */
struct ProgressData progressbar;
struct getout *urlnode;
struct getout *nextnode;
struct OutStruct outs;
struct OutStruct heads;
char *url = NULL;
URLGlob *urls;
URLGlob *urls=NULL;
int urlnum;
char *outfiles;
int separator = 0;
@@ -1323,9 +1438,6 @@ operate(struct Configurable *config, int argc, char *argv[])
int res;
int i;
outs.stream = stdout;
outs.config = config;
#ifdef MALLOCDEBUG
/* this sends all memory debug messages to a logfile named memdump */
curl_memdebug("memdump");
@@ -1356,7 +1468,7 @@ operate(struct Configurable *config, int argc, char *argv[])
return res;
}
if ((argc < 2) && !config->url) {
if ((argc < 2) && !config->url_list) {
helpf(NULL);
return CURLE_FAILED_INIT;
}
@@ -1397,6 +1509,7 @@ operate(struct Configurable *config, int argc, char *argv[])
/* no text */
break;
}
clean_getout(config);
return CURLE_FAILED_INIT;
}
@@ -1405,20 +1518,15 @@ operate(struct Configurable *config, int argc, char *argv[])
}
}
else {
if(url) {
helpf("only one URL is supported!\n");
return CURLE_FAILED_INIT;
}
url = argv[i];
bool used;
/* just add the URL please */
res = getparameter("--url", argv[i], &used, config);
if(res)
return res;
}
}
/* if no URL was specified and there was one in the config file, get that
one */
if(!url && config->url)
url = config->url;
if(!url) {
if(!config->url_list) {
helpf("no URL specified!\n");
return CURLE_FAILED_INIT;
}
@@ -1430,331 +1538,351 @@ operate(struct Configurable *config, int argc, char *argv[])
}
else
allocuseragent = TRUE;
#if 0
fprintf(stderr, "URL: %s PROXY: %s\n", url, config->proxy?config->proxy:"none");
#endif
/* expand '{...}' and '[...]' expressions and return total number of URLs
in pattern set */
res = glob_url(&urls, url, &urlnum);
if(res != CURLE_OK)
return res;
urlnode = config->url_list;
/* save outfile pattern befor expansion */
outfiles = config->outfile?strdup(config->outfile):NULL;
/* loop through the list of given URLs */
while(urlnode) {
if (!outfiles && !config->remotefile && urlnum > 1) {
#ifdef CURL_SEPARATORS
/* multiple files extracted to stdout, insert separators! */
separator = 1;
#endif
#ifdef MIME_SEPARATORS
/* multiple files extracted to stdout, insert MIME separators! */
separator = 1;
printf("MIME-Version: 1.0\n");
printf("Content-Type: multipart/mixed; boundary=%s\n\n", MIMEseparator);
#endif
}
for (i = 0; (url = next_url(urls)); ++i) {
if (config->outfile) {
free(config->outfile);
config->outfile = outfiles?strdup(outfiles):NULL;
/* get the full URL (it might be NULL) */
url=urlnode->url;
if(NULL == url) {
/* This node had no URL, skip it and continue to the next */
if(urlnode->outfile)
free(urlnode->outfile);
/* move on to the next URL */
nextnode=urlnode->next;
free(urlnode); /* free the node */
urlnode = nextnode;
continue; /* next please */
}
/* default output stream is stdout */
outs.stream = stdout;
outs.config = config;
if(!config->globoff) {
/* Unless explicitly shut off, we expand '{...}' and '[...]' expressions
and return total number of URLs in pattern set */
res = glob_url(&urls, url, &urlnum);
if(res != CURLE_OK)
return res;
}
/* save outfile pattern before expansion */
outfiles = urlnode->outfile?strdup(urlnode->outfile):NULL;
if ((!outfiles || strequal(outfiles, "-")) && urlnum > 1) {
/* multiple files extracted to stdout, insert separators! */
separator = 1;
}
for (i = 0; (url = urls?next_url(urls):(i?NULL:url)); ++i) {
char *outfile;
outfile = outfiles?strdup(outfiles):NULL;
if (config->outfile || config->remotefile) {
/*
* We have specified a file name to store the result in, or we have
* decided we want to use the remote file name.
*/
if((urlnode->flags&GETOUT_USEREMOTE) ||
(outfile && !strequal("-", outfile)) ) {
/*
* We have specified a file name to store the result in, or we have
* decided we want to use the remote file name.
*/
if(!config->outfile && config->remotefile) {
/* Find and get the remote file name */
char * pc =strstr(url, "://");
if(pc)
pc+=3;
else
pc=url;
pc = strrchr(pc, '/');
config->outfile = (char *) NULL == pc ? NULL : strdup(pc+1) ;
if(!config->outfile || !strlen(config->outfile)) {
helpf("Remote file name has no length!\n");
return CURLE_WRITE_ERROR;
if(!outfile) {
/* Find and get the remote file name */
char * pc =strstr(url, "://");
if(pc)
pc+=3;
else
pc=url;
pc = strrchr(pc, '/');
outfile = (char *) NULL == pc ? NULL : strdup(pc+1) ;
if(!outfile) {
helpf("Remote file name has no length!\n");
return CURLE_WRITE_ERROR;
}
}
else if(urls) {
/* fill '#1' ... '#9' terms from URL pattern */
char *storefile = outfile;
outfile = match_url(storefile, urls);
free(storefile);
}
if((0 == config->resume_from) && config->use_resume) {
/* we're told to continue where we are now, then we get the size of
the file as it is now and open it for append instead */
struct stat fileinfo;
if(0 == stat(outfile, &fileinfo)) {
/* set offset to current file size: */
config->resume_from = fileinfo.st_size;
}
/* else let offset remain 0 */
}
if(config->resume_from) {
/* open file for output: */
outs.stream=(FILE *) fopen(outfile, config->resume_from?"ab":"wb");
if (!outs.stream) {
helpf("Can't open '%s'!\n", outfile);
return CURLE_WRITE_ERROR;
}
}
else {
outs.filename = outfile;
outs.stream = NULL; /* open when needed */
}
}
else {
/* fill '#1' ... '#9' terms from URL pattern */
char *outfile = config->outfile;
config->outfile = match_url(config->outfile, urls);
free(outfile);
}
if((0 == config->resume_from) && config->use_resume) {
/* we're told to continue where we are now, then we get the size of the
file as it is now and open it for append instead */
if(config->infile) {
/*
* We have specified a file to upload
*/
struct stat fileinfo;
if(0 == stat(config->outfile, &fileinfo)) {
/* set offset to current file size: */
config->resume_from = fileinfo.st_size;
}
/* else let offset remain 0 */
}
if(config->resume_from) {
/* open file for output: */
outs.stream=(FILE *) fopen(config->outfile, config->resume_from?"ab":"wb");
if (!outs.stream) {
helpf("Can't open '%s'!\n", config->outfile);
return CURLE_WRITE_ERROR;
}
}
else {
outs.filename = config->outfile;
outs.stream = NULL; /* open when needed */
}
}
if (config->infile) {
/*
* We have specified a file to upload
*/
struct stat fileinfo;
/* If no file name part is given in the URL, we add this file name */
char *ptr=strstr(url, "://");
if(ptr)
ptr+=3;
else
ptr=url;
ptr = strrchr(ptr, '/');
if(!ptr || !strlen(++ptr)) {
/* The URL has no file name part, add the local file name. In order to
be able to do so, we have to create a new URL in another buffer.*/
urlbuffer=(char *)malloc(strlen(url) + strlen(config->infile) + 3);
if(!urlbuffer) {
helpf("out of memory\n");
return CURLE_OUT_OF_MEMORY;
}
/* If no file name part is given in the URL, we add this file name */
char *ptr=strstr(url, "://");
if(ptr)
/* there is a trailing slash on the URL */
sprintf(urlbuffer, "%s%s", url, config->infile);
ptr+=3;
else
/* thers is no trailing slash on the URL */
sprintf(urlbuffer, "%s/%s", url, config->infile);
url = urlbuffer; /* use our new URL instead! */
}
ptr=url;
ptr = strrchr(ptr, '/');
if(!ptr || !strlen(++ptr)) {
/* The URL has no file name part, add the local file name. In order
to be able to do so, we have to create a new URL in another
buffer.*/
infd=(FILE *) fopen(config->infile, "rb");
if (!infd || stat(config->infile, &fileinfo)) {
helpf("Can't open '%s'!\n", config->infile);
return CURLE_READ_ERROR;
}
infilesize=fileinfo.st_size;
urlbuffer=(char *)malloc(strlen(url) + strlen(config->infile) + 3);
if(!urlbuffer) {
helpf("out of memory\n");
return CURLE_OUT_OF_MEMORY;
}
if(ptr)
/* there is a trailing slash on the URL */
sprintf(urlbuffer, "%s%s", url, config->infile);
else
/* there is no trailing slash on the URL */
sprintf(urlbuffer, "%s/%s", url, config->infile);
url = urlbuffer; /* use our new URL instead! */
}
infd=(FILE *) fopen(config->infile, "rb");
if (!infd || stat(config->infile, &fileinfo)) {
helpf("Can't open '%s'!\n", config->infile);
return CURLE_READ_ERROR;
}
infilesize=fileinfo.st_size;
}
if((config->conf&CONF_UPLOAD) &&
config->use_resume &&
(0==config->resume_from)) {
config->resume_from = -1; /* -1 will then force get-it-yourself */
}
if(config->headerfile) {
/* open file for output: */
if(strcmp(config->headerfile,"-")) {
heads.filename = config->headerfile;
headerfilep=NULL;
}
else
headerfilep=stdout;
heads.stream = headerfilep;
heads.config = config;
}
if((config->conf&CONF_UPLOAD) &&
config->use_resume &&
(0==config->resume_from)) {
config->resume_from = -1; /* -1 will then force get-it-yourself */
}
if(config->headerfile) {
/* open file for output: */
if(strcmp(config->headerfile,"-")) {
heads.filename = config->headerfile;
headerfilep=NULL;
}
else
headerfilep=stdout;
heads.stream = headerfilep;
heads.config = config;
}
if(outs.stream && isatty(fileno(outs.stream)) &&
!(config->conf&(CONF_UPLOAD|CONF_HTTPPOST)))
/* we send the output to a tty and it isn't an upload operation,
therefore we switch off the progress meter */
config->conf |= CONF_NOPROGRESS;
if(outs.stream && isatty(fileno(outs.stream)) &&
!(config->conf&(CONF_UPLOAD|CONF_HTTPPOST)))
/* we send the output to a tty and it isn't an upload operation,
therefore we switch off the progress meter */
config->conf |= CONF_NOPROGRESS;
if (urlnum > 1) {
fprintf(stderr, "\n[%d/%d]: %s --> %s\n",
i+1, urlnum, url, config->outfile ? config->outfile : "<stdout>");
if (separator) {
#ifdef CURL_SEPARATORS
printf("%s%s\n", CURLseparator, url);
#endif
#ifdef MIME_SEPARATORS
printf("--%s\n", MIMEseparator);
printf("Content-ID: %s\n\n", url);
#endif
if (urlnum > 1) {
fprintf(stderr, "\n[%d/%d]: %s --> %s\n",
i+1, urlnum, url, outfile ? outfile : "<stdout>");
if (separator)
printf("%s%s\n", CURLseparator, url);
}
}
if(!config->errors)
config->errors = stderr;
if(!config->errors)
config->errors = stderr;
#ifdef WIN32
if(!config->outfile && !(config->conf & CONF_GETTEXT)) {
/* We get the output to stdout and we have not got the ASCII/text flag,
then set stdout to be binary */
setmode( 1, O_BINARY );
}
if(!outfile && !(config->conf & CONF_GETTEXT)) {
/* We get the output to stdout and we have not got the ASCII/text flag,
then set stdout to be binary */
setmode( 1, O_BINARY );
}
#endif
main_init();
main_init();
/* The new, v7-style easy-interface! */
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_FILE, (FILE *)&outs); /* where to store */
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
curl_easy_setopt(curl, CURLOPT_INFILE, infd); /* for uploads */
/* size of uploaded file: */
curl_easy_setopt(curl, CURLOPT_INFILESIZE, infilesize);
curl_easy_setopt(curl, CURLOPT_URL, url); /* what to fetch */
curl_easy_setopt(curl, CURLOPT_PROXY, config->proxy); /* proxy to use */
curl_easy_setopt(curl, CURLOPT_VERBOSE, config->conf&CONF_VERBOSE);
curl_easy_setopt(curl, CURLOPT_HEADER, config->conf&CONF_HEADER);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, config->conf&CONF_NOPROGRESS);
curl_easy_setopt(curl, CURLOPT_NOBODY, config->conf&CONF_NOBODY);
curl_easy_setopt(curl, CURLOPT_FAILONERROR,
config->conf&CONF_FAILONERROR);
curl_easy_setopt(curl, CURLOPT_UPLOAD, config->conf&CONF_UPLOAD);
curl_easy_setopt(curl, CURLOPT_POST, config->conf&CONF_POST);
curl_easy_setopt(curl, CURLOPT_FTPLISTONLY,
config->conf&CONF_FTPLISTONLY);
curl_easy_setopt(curl, CURLOPT_FTPAPPEND, config->conf&CONF_FTPAPPEND);
curl_easy_setopt(curl, CURLOPT_NETRC, config->conf&CONF_NETRC);
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION,
config->conf&CONF_FOLLOWLOCATION);
curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, config->conf&CONF_GETTEXT);
curl_easy_setopt(curl, CURLOPT_PUT, config->conf&CONF_PUT);
curl_easy_setopt(curl, CURLOPT_MUTE, config->conf&CONF_MUTE);
curl_easy_setopt(curl, CURLOPT_USERPWD, config->userpwd);
curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, config->proxyuserpwd);
curl_easy_setopt(curl, CURLOPT_RANGE, config->range);
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errorbuffer);
curl_easy_setopt(curl, CURLOPT_TIMEOUT, config->timeout);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, config->postfields);
curl = curl_easy_init();
if(curl) {
curl_easy_setopt(curl, CURLOPT_FILE, (FILE *)&outs); /* where to store */
/* what call to write: */
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
curl_easy_setopt(curl, CURLOPT_INFILE, infd); /* for uploads */
/* size of uploaded file: */
curl_easy_setopt(curl, CURLOPT_INFILESIZE, infilesize);
curl_easy_setopt(curl, CURLOPT_URL, url); /* what to fetch */
curl_easy_setopt(curl, CURLOPT_PROXY, config->proxy); /* proxy to use */
curl_easy_setopt(curl, CURLOPT_VERBOSE, config->conf&CONF_VERBOSE);
curl_easy_setopt(curl, CURLOPT_HEADER, config->conf&CONF_HEADER);
curl_easy_setopt(curl, CURLOPT_NOPROGRESS, config->conf&CONF_NOPROGRESS);
curl_easy_setopt(curl, CURLOPT_NOBODY, config->conf&CONF_NOBODY);
curl_easy_setopt(curl, CURLOPT_FAILONERROR,
config->conf&CONF_FAILONERROR);
curl_easy_setopt(curl, CURLOPT_UPLOAD, config->conf&CONF_UPLOAD);
curl_easy_setopt(curl, CURLOPT_POST, config->conf&CONF_POST);
curl_easy_setopt(curl, CURLOPT_FTPLISTONLY,
config->conf&CONF_FTPLISTONLY);
curl_easy_setopt(curl, CURLOPT_FTPAPPEND, config->conf&CONF_FTPAPPEND);
curl_easy_setopt(curl, CURLOPT_NETRC, config->conf&CONF_NETRC);
curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION,
config->conf&CONF_FOLLOWLOCATION);
curl_easy_setopt(curl, CURLOPT_TRANSFERTEXT, config->conf&CONF_GETTEXT);
curl_easy_setopt(curl, CURLOPT_PUT, config->conf&CONF_PUT);
curl_easy_setopt(curl, CURLOPT_MUTE, config->conf&CONF_MUTE);
curl_easy_setopt(curl, CURLOPT_USERPWD, config->userpwd);
curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, config->proxyuserpwd);
curl_easy_setopt(curl, CURLOPT_RANGE, config->range);
curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errorbuffer);
curl_easy_setopt(curl, CURLOPT_TIMEOUT, config->timeout);
curl_easy_setopt(curl, CURLOPT_POSTFIELDS, config->postfields);
/* new in libcurl 7.2: */
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, config->postfieldsize);
curl_easy_setopt(curl, CURLOPT_REFERER, config->referer);
curl_easy_setopt(curl, CURLOPT_AUTOREFERER,
config->conf&CONF_AUTO_REFERER);
curl_easy_setopt(curl, CURLOPT_USERAGENT, config->useragent);
curl_easy_setopt(curl, CURLOPT_FTPPORT, config->ftpport);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, config->low_speed_limit);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, config->low_speed_time);
curl_easy_setopt(curl, CURLOPT_RESUME_FROM,
config->use_resume?config->resume_from:0);
curl_easy_setopt(curl, CURLOPT_COOKIE, config->cookie);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, config->headers);
curl_easy_setopt(curl, CURLOPT_HTTPPOST, config->httppost);
curl_easy_setopt(curl, CURLOPT_SSLCERT, config->cert);
curl_easy_setopt(curl, CURLOPT_SSLCERTPASSWD, config->cert_passwd);
if(config->cacert) {
/* available from libcurl 7.5: */
curl_easy_setopt(curl, CURLOPT_CAINFO, config->cacert);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, TRUE);
}
if(config->conf&(CONF_NOBODY|CONF_USEREMOTETIME)) {
/* no body or use remote time */
/* new in 7.5 */
curl_easy_setopt(curl, CURLOPT_FILETIME, TRUE);
}
/* new in libcurl 7.2: */
curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, config->postfieldsize);
curl_easy_setopt(curl, CURLOPT_REFERER, config->referer);
curl_easy_setopt(curl, CURLOPT_AUTOREFERER,
config->conf&CONF_AUTO_REFERER);
curl_easy_setopt(curl, CURLOPT_USERAGENT, config->useragent);
curl_easy_setopt(curl, CURLOPT_FTPPORT, config->ftpport);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, config->low_speed_limit);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, config->low_speed_time);
curl_easy_setopt(curl, CURLOPT_RESUME_FROM,
config->use_resume?config->resume_from:0);
curl_easy_setopt(curl, CURLOPT_COOKIE, config->cookie);
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, config->headers);
curl_easy_setopt(curl, CURLOPT_HTTPPOST, config->httppost);
curl_easy_setopt(curl, CURLOPT_SSLCERT, config->cert);
curl_easy_setopt(curl, CURLOPT_SSLCERTPASSWD, config->cert_passwd);
if(config->cacert) {
/* available from libcurl 7.5: */
curl_easy_setopt(curl, CURLOPT_CAINFO, config->cacert);
curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, TRUE);
}
if(config->conf&(CONF_NOBODY|CONF_USEREMOTETIME)) {
/* no body or use remote time */
/* new in 7.5 */
curl_easy_setopt(curl, CURLOPT_FILETIME, TRUE);
}
/* 7.5 news: */
if (config->maxredirs)
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, config->maxredirs);
else
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, DEFAULT_MAXREDIRS);
/* 7.5 news: */
if (config->maxredirs)
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, config->maxredirs);
else
curl_easy_setopt(curl, CURLOPT_MAXREDIRS, DEFAULT_MAXREDIRS);
curl_easy_setopt(curl, CURLOPT_CRLF, config->crlf);
curl_easy_setopt(curl, CURLOPT_QUOTE, config->quote);
curl_easy_setopt(curl, CURLOPT_POSTQUOTE, config->postquote);
curl_easy_setopt(curl, CURLOPT_WRITEHEADER,
config->headerfile?&heads:NULL);
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, config->cookiefile);
curl_easy_setopt(curl, CURLOPT_SSLVERSION, config->ssl_version);
curl_easy_setopt(curl, CURLOPT_TIMECONDITION, config->timecond);
curl_easy_setopt(curl, CURLOPT_TIMEVALUE, config->condtime);
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, config->customrequest);
curl_easy_setopt(curl, CURLOPT_STDERR, config->errors);
curl_easy_setopt(curl, CURLOPT_CRLF, config->crlf);
curl_easy_setopt(curl, CURLOPT_QUOTE, config->quote);
curl_easy_setopt(curl, CURLOPT_POSTQUOTE, config->postquote);
curl_easy_setopt(curl, CURLOPT_WRITEHEADER,
config->headerfile?&heads:NULL);
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, config->cookiefile);
curl_easy_setopt(curl, CURLOPT_SSLVERSION, config->ssl_version);
curl_easy_setopt(curl, CURLOPT_TIMECONDITION, config->timecond);
curl_easy_setopt(curl, CURLOPT_TIMEVALUE, config->condtime);
curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, config->customrequest);
curl_easy_setopt(curl, CURLOPT_STDERR, config->errors);
/* three new ones in libcurl 7.3: */
curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, config->proxytunnel);
curl_easy_setopt(curl, CURLOPT_INTERFACE, config->iface);
curl_easy_setopt(curl, CURLOPT_KRB4LEVEL, config->krb4level);
/* three new ones in libcurl 7.3: */
curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, config->proxytunnel);
curl_easy_setopt(curl, CURLOPT_INTERFACE, config->iface);
curl_easy_setopt(curl, CURLOPT_KRB4LEVEL, config->krb4level);
if((config->progressmode == CURL_PROGRESS_BAR) &&
!(config->conf&(CONF_NOPROGRESS|CONF_MUTE))) {
/* we want the alternative style, then we have to implement it
ourselves! */
progressbarinit(&progressbar);
curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, myprogress);
curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, &progressbar);
if((config->progressmode == CURL_PROGRESS_BAR) &&
!(config->conf&(CONF_NOPROGRESS|CONF_MUTE))) {
/* we want the alternative style, then we have to implement it
ourselves! */
progressbarinit(&progressbar);
curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, myprogress);
curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, &progressbar);
}
res = curl_easy_perform(curl);
if(config->writeout) {
ourWriteOut(curl, config->writeout);
}
/* always cleanup */
curl_easy_cleanup(curl);
if((res!=CURLE_OK) && config->showerror)
fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
}
else
fprintf(config->errors, "curl: failed to init libcurl!\n");
res = curl_easy_perform(curl);
main_free();
if(config->writeout) {
ourWriteOut(curl, config->writeout);
}
if((config->errors != stderr) &&
(config->errors != stdout))
/* it wasn't directed to stdout or stderr so close the file! */
fclose(config->errors);
if(config->headerfile && !headerfilep && heads.stream)
fclose(heads.stream);
/* always cleanup */
curl_easy_cleanup(curl);
if(urlbuffer)
free(urlbuffer);
if (outfile && !strequal(outfile, "-") && outs.stream)
fclose(outs.stream);
if (config->infile)
fclose(infd);
if(headerfilep)
fclose(headerfilep);
if(url)
free(url);
if((res!=CURLE_OK) && config->showerror)
fprintf(config->errors, "curl: (%d) %s\n", res, errorbuffer);
if(outfile)
free(outfile);
}
else
fprintf(config->errors, "curl: failed to init libcurl!\n");
if(outfiles)
free(outfiles);
main_free();
if(urls)
/* cleanup memory used for URL globbing patterns */
glob_cleanup(urls);
if((config->errors != stderr) &&
(config->errors != stdout))
/* it wasn't directed to stdout or stderr so close the file! */
fclose(config->errors);
/* empty this urlnode struct */
if(urlnode->url)
free(urlnode->url);
if(urlnode->outfile)
free(urlnode->outfile);
if(config->headerfile && !headerfilep && heads.stream)
fclose(heads.stream);
/* move on to the next URL */
nextnode=urlnode->next;
free(urlnode); /* free the node */
urlnode = nextnode;
if(urlbuffer)
free(urlbuffer);
if (config->outfile && outs.stream)
fclose(outs.stream);
if (config->infile)
fclose(infd);
if(headerfilep)
fclose(headerfilep);
if(url)
free(url);
}
if(outfiles)
free(outfiles);
#ifdef MIME_SEPARATORS
if (separator)
printf("--%s--\n", MIMEseparator);
#endif
} /* while-loop through all URLs */
if(allocuseragent)
free(config->useragent);
/* cleanup memory used for URL globbing patterns */
glob_cleanup(urls);
return res;
}
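
The net effect of the getout list, -O, -g and the relaxed short-option parsing is easiest to see from the command line; a few hedged usage examples (hosts and file names are illustrative):

# each -o/-O pairs with the URL given in the same position
curl http://site.example/a.html -o a.html http://site.example/b.html -o b.html
# -O stores the download under its remote file name
curl -O http://site.example/download/file.tar.gz
# a short option and its argument may now be glued together
curl -m20 http://site.example/slow.cgi
# -g/--globoff passes {} and [] through literally instead of expanding them
curl -g 'http://site.example/odd[1].html'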

View File

@@ -213,6 +213,7 @@ int glob_url(URLGlob** glob, char* url, int *urlnum)
glob_expand->size = 0;
glob_expand->urllen = strlen(url);
glob_expand->glob_buffer = glob_buffer;
glob_expand->beenhere=0;
*urlnum = glob_word(glob_expand, url, 1);
*glob = glob_expand;
return CURLE_OK;
@@ -240,15 +241,14 @@ void glob_cleanup(URLGlob* glob)
char *next_url(URLGlob *glob)
{
static int beenhere = 0;
char *buf = glob->glob_buffer;
URLPattern *pat;
char *lit;
signed int i;
int carry;
if (!beenhere)
beenhere = 1;
if (!glob->beenhere)
glob->beenhere = 1;
else {
carry = 1;

View File

@@ -50,6 +50,7 @@ typedef struct {
int size;
int urllen;
char *glob_buffer;
char beenhere;
} URLGlob;
int glob_url(URLGlob**, char*, int *);
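
Moving the beenhere flag from a function-static variable into URLGlob gives every glob pattern its own iteration state, which matters now that operate() may run glob_url()/next_url() once per URL node. For reference, a hedged example of the globbing this drives (URL and output pattern are illustrative):

# expand a [] range and name each saved file after the matched term (#1)
curl 'http://site.example/file[1-3].txt' -o 'file#1.txt'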

View File

@@ -1,3 +1,3 @@
#define CURL_NAME "curl"
#define CURL_VERSION "7.5.2"
#define CURL_VERSION "7.6-pre3"
#define CURL_ID CURL_NAME " " CURL_VERSION " (" OS ") "

View File

@@ -1,3 +1,6 @@
EXTRA_DIST = ftpserver.pl httpserver.pl runtests.pl
SUBDIRS = data
all:
install:
@@ -6,10 +9,12 @@ curl:
@(cd ..; make)
test:
$(PERL) runtests.pl
$(MAKE) -C data test
srcdir=$(srcdir) $(PERL) $(srcdir)/runtests.pl
quiet-test:
$(PERL) runtests.pl -s -a
$(MAKE) -C data test
srcdir=$(srcdir) $(PERL) $(srcdir)/runtests.pl -s -a
clean:
rm -rf log
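
With srcdir exported to runtests.pl and the data files symlinked by tests/data, the suite should also work from a separate build directory; a minimal sketch, assuming GNU make:

# from the top of the build tree
make -C tests test         # full run
make -C tests quiet-test   # the shorter -s -a run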

tests/data/Makefile.am Normal file
View File

@@ -0,0 +1,56 @@
all:
install:
test:
[ -f command1.txt ] || ln -s $(srcdir)/*.txt .
EXTRA_DIST = command1.txt error113.txt name17.txt prot8.txt \
command10.txt error114.txt name18.txt prot9.txt \
command100.txt error115.txt name19.txt reply1.txt \
command101.txt error116.txt name2.txt reply10.txt \
command102.txt error117.txt name20.txt reply100.txt \
command103.txt error118.txt name200.txt reply101.txt \
command104.txt error119.txt name201.txt reply102.txt \
command105.txt error19.txt name21.txt reply103.txt \
command106.txt error20.txt name22.txt reply104.txt \
command107.txt error201.txt name23.txt reply105.txt \
command108.txt error21.txt name24.txt reply106.txt \
command109.txt error23.txt name25.txt reply11.txt \
command11.txt error24.txt name3.txt reply110.txt \
command110.txt error25.txt name4.txt reply110001.txt \
command111.txt ftpd113.txt name5.txt reply110002.txt \
command112.txt ftpd114.txt name6.txt reply12.txt \
command113.txt ftpd115.txt name7.txt reply13.txt \
command114.txt ftpd116.txt name8.txt reply14.txt \
command115.txt ftpd117.txt name9.txt reply15.txt \
command116.txt ftpd118.txt prot1.txt reply16.txt \
command117.txt name1.txt prot10.txt reply17.txt \
command118.txt name10.txt prot100.txt reply2.txt \
command119.txt name100.txt prot101.txt reply200.txt \
command12.txt name101.txt prot102.txt reply22.txt \
command13.txt name102.txt prot103.txt reply24.txt \
command14.txt name103.txt prot104.txt reply25.txt \
command15.txt name104.txt prot105.txt reply3.txt \
command16.txt name105.txt prot106.txt reply4.txt \
command17.txt name106.txt prot107.txt reply5.txt \
command18.txt name107.txt prot108.txt reply6.txt \
command19.txt name108.txt prot109.txt reply7.txt \
command2.txt name109.txt prot11.txt reply8.txt \
command20.txt name11.txt prot110.txt reply9.txt \
command200.txt name110.txt prot112.txt stdin17.txt \
command201.txt name111.txt prot12.txt stdout107.txt \
command21.txt name112.txt prot13.txt stdout108.txt \
command22.txt name113.txt prot14.txt stdout109.txt \
command23.txt name114.txt prot15.txt stdout110.txt \
command24.txt name115.txt prot16.txt stdout112.txt \
command25.txt name116.txt prot17.txt stdout15.txt \
command3.txt name117.txt prot18.txt stdout18.txt \
command4.txt name118.txt prot2.txt upload107.txt \
command5.txt name119.txt prot22.txt upload108.txt \
command6.txt name12.txt prot3.txt upload109.txt \
command7.txt name13.txt prot4.txt upload112.txt \
command8.txt name14.txt prot5.txt \
command9.txt name15.txt prot6.txt \
error111.txt name16.txt prot7.txt \
command26.txt prot26.txt command27.txt prot27.txt \
name26.txt reply26.txt name27.txt stdout27.txt

tests/data/command26.txt Normal file
View File

@@ -0,0 +1,4 @@
http://%HOSTIP:%HOSTPORT/want/25 -o - -o -

tests/data/command27.txt Normal file
View File

@@ -0,0 +1 @@
http://%HOSTIP:%HOSTPORT/want/25 http://%HOSTIP:%HOSTPORT/want/24 http://%HOSTIP:%HOSTPORT/want/22

tests/data/name26.txt Normal file
View File

@@ -0,0 +1 @@
specify more -o than URLs

tests/data/name27.txt Normal file
View File

@@ -0,0 +1 @@
getting three URLs in one command line (to stdout)

tests/data/prot26.txt Normal file
View File

@@ -0,0 +1,6 @@
GET /want/25 HTTP/1.0
User-Agent: curl/7.6-pre1 (sparc-sun-solaris2.7) libcurl 7.5.2 (SSL 0.9.6) (krb4 enabled)
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

tests/data/prot27.txt Normal file
View File

@@ -0,0 +1,6 @@
GET /want/22 HTTP/1.0
User-Agent: curl/7.6-pre1 (sparc-sun-solaris2.7) libcurl 7.6-pre1 (SSL 0.9.6) (krb4 enabled)
Host: 127.0.0.1:8999
Pragma: no-cache
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*

tests/data/reply26.txt Normal file
View File

@@ -0,0 +1,5 @@
HTTP/1.1 301 This is a weirdo text message
Server: test-server/fake
Location: data/reply/25
Redirect to the same URL again!

tests/data/stdout27.txt Normal file
View File

@@ -0,0 +1,14 @@
HTTP/1.1 301 This is a weirdo text message
Server: test-server/fake
Location: data/reply/25
Redirect to the same URL again!
HTTP/1.1 404 BAD BOY
Content-Type: text/html
This silly page doesn't reaaaaaly exist so you should not get it.
HTTP/1.1 200 OK
Funny-head: yesyes
This is the proof it works
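
For orientation, the pieces of a test case in this suite follow a simple naming scheme; a hedged look at test 27 (paths as added above):

cat tests/data/command27.txt   # command line handed to curl
cat tests/data/name27.txt      # human-readable test description
cat tests/data/prot27.txt      # protocol data curl is expected to send
cat tests/data/stdout27.txt    # output curl is expected to produce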

View File

@@ -8,6 +8,7 @@
use strict;
my $srcdir = $ENV{'srcdir'} || '.';
my $HOSTIP="127.0.0.1";
my $HOSTPORT=8999; # bad name, but this is the HTTP server port
my $FTPPORT=8921; # this is the FTP server port
@@ -108,7 +109,7 @@ sub runhttpserver {
}
if ($RUNNING != 1) {
system("perl ./httpserver.pl $HOSTPORT &");
system("perl $srcdir/httpserver.pl $HOSTPORT &");
sleep 1; # give it a little time to start
}
else {
@@ -149,7 +150,7 @@ sub runftpserver {
}
if ($RUNNING != 1) {
system("perl ./ftpserver.pl $FTPPORT &");
system("perl $srcdir/ftpserver.pl $FTPPORT &");
sleep 1; # give it a little time to start
}
else {