As someone reported on the mailing list a while back, the hard-coded
arbitrary timeout of 7s in test 1112 is not sufficient in some build
environments. At Arista Networks we build and test curl as part of our
automated build system, and we've run into this timeout 170 times so
far. Our build servers are typically quite busy building and testing a
lot of code in parallel, so despite being beefy machines with 32 cores
and 128GB of RAM we still hit this 7s timeout regularly.
URL: http://curl.haxx.se/mail/lib-2010-02/0200.html
The libcurl date parser returns INT_MAX for all dates beyond 2037, so this
test now uses 2037 instead of 2038 in order to work the same on both 32bit
and 64bit time_t systems.
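A minimal sketch using the public curl_getdate() API to illustrate the
capping described above; the specific date strings are just examples and
the capped result is an assumption based on this description:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      /* a date that still fits in a 32bit time_t */
      time_t ok = curl_getdate("Thu, 31 Dec 2037 23:59:59 GMT", NULL);
      /* a date beyond 2037; per the note above the parser caps this at
         INT_MAX, so 32bit and 64bit builds see the same value */
      time_t capped = curl_getdate("Fri, 01 Jan 2039 00:00:00 GMT", NULL);

      printf("2037: %ld\n2039: %ld\n", (long)ok, (long)capped);
      return 0;
    }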
Implement expired cookie handling: in the following situations curl now
removes a cookie from the struct CookieInfo list if that cookie has
expired.
- Curl_cookie_add()
- Curl_cookie_getlist()
- cookie_output()
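As a rough illustration of the pruning idea only (not curl's actual
internals; the struct layout and the prune_expired() helper are
assumptions made for this sketch):

    #include <stdlib.h>
    #include <time.h>

    struct Cookie {
      struct Cookie *next;
      time_t expires;   /* 0 means session cookie, no expiry time */
      /* name, value, domain, path etc omitted in this sketch */
    };

    /* unlink and free every cookie whose expiry time has passed */
    static void prune_expired(struct Cookie **head)
    {
      time_t now = time(NULL);
      struct Cookie **pp = head;
      while(*pp) {
        struct Cookie *co = *pp;
        if(co->expires && (co->expires < now)) {
          *pp = co->next;
          free(co);
        }
        else
          pp = &co->next;
      }
    }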
The message numbers given in the LIST response are an index into the list
and are only valid for the current session, rather than being unique
message identifiers. An index would only be missing from the LIST response
if a DELE command had been issued within the same session and had not yet
been committed by the end-of-session QUIT command. Once committed, the
POP3 server will regenerate the message numbers in the next session to be
contiguous again. As such, our LIST response should list message numbers
contiguously until we support a DELE command in the same session.
Should a POP3 user require the unique message ID for any or all
messages then they should use the extended UIDL command. This command
will be supported by the test ftpserver in an upcoming commit.
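For reference, a minimal sketch of how a libcurl client can ask for the
session-local LIST versus the unique IDs via UIDL; the host name and
credentials are placeholders:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

        /* no path and no custom request: libcurl issues LIST, which
           returns the per-session message numbers */
        curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/");
        curl_easy_perform(curl);

        /* ask for the unique message IDs instead */
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "UIDL");
        curl_easy_perform(curl);

        curl_easy_cleanup(curl);
      }
      return 0;
    }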
... this also makes sure that the progress callback gets called more
often during TFTP transfers.
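A minimal sketch of such a progress callback, using the classic
CURLOPT_PROGRESSFUNCTION interface; the TFTP URL is a placeholder:

    #include <stdio.h>
    #include <curl/curl.h>

    /* called repeatedly during the transfer; with this change it fires
       more often for TFTP */
    static int on_progress(void *clientp, double dltotal, double dlnow,
                           double ultotal, double ulnow)
    {
      (void)clientp; (void)ultotal; (void)ulnow;
      fprintf(stderr, "downloaded %.0f of %.0f bytes\n", dlnow, dltotal);
      return 0; /* returning non-zero would abort the transfer */
    }

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "tftp://example.com/boot.img");
        curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
        curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, on_progress);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }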
Added test 1238 to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=1269
Reported-by: Jo3
The new multiply() function detects range value overflows. 32bit
machines will overflow on a 32bit boundary while 64bit hosts support
ranges up to the full 64 bit range.
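The check can be done along these lines; this is a sketch of the idea
only and does not claim to be the actual multiply() implementation:

    /* returns non-zero if the multiplication would overflow */
    static int multiply(unsigned long *amount, unsigned long with)
    {
      unsigned long sum = *amount * with;
      if(with && (sum / with != *amount))
        return 1; /* overflow */
      *amount = sum;
      return 0;
    }

A 32bit unsigned long trips the check at the 32bit boundary while a 64bit
one allows the full 64bit range, which matches the behavior described
above.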
Added test 1236 to verify.
Bug: http://curl.haxx.se/bug/view.cgi?id=1267
Reported-by: Will Dietz
A rather big overhaul and cleanup.
1 - curl wouldn't properly detect and reject globbing that ended with an
open brace if there were brackets or braces before it. Like "{}{" or
"[0-1]{"
2 - curl wouldn't properly reject empty lists so that "{}{}" would
result in curl getting (nil) strings in the output.
3 - By using strtoul() instead of sscanf() the code will now detect over-
and underflows. It now also parses the step argument more strictly, only
accepting positive numbers and only step counters that are smaller than
the delta between the maximum and minimum numbers (see the sketch after
this list).
4 - By switching to unsigned longs instead of signed ints for the
counters, the max values for []-ranges are now very large (on 64bit
machines).
5 - Bumped the maximum number of globs in a single URL to 100 (from 10)
6 - Simplified the code somewhat and now it stores fixed strings as
single-entry lists. That's also one of the reasons why I did (5), as now
all strings between "globs" will take a slot in the array.
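A sketch of the strtoul()-based step parsing mentioned in (3); the
parse_step() name and the exact error handling are illustrative, not the
actual tool code:

    #include <errno.h>
    #include <stdlib.h>

    /* parse the ":step" suffix of a [min-max:step] range; returns 0 on
       success, non-zero for a bad or out-of-range step */
    static int parse_step(const char *p, unsigned long min_n,
                          unsigned long max_n, unsigned long *step)
    {
      char *endp;
      errno = 0;
      *step = strtoul(p, &endp, 10);
      if(errno == ERANGE)
        return 1;               /* overflowed unsigned long */
      if(endp == p || *step == 0)
        return 1;               /* not a number, or not positive */
      if(*step > (max_n - min_n))
        return 1;               /* larger than the whole range */
      return 0;
    }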
Added test 1234 and 1235 to verify. Updated test 87.
This commit fixes three separate bug reports.
Bug: http://curl.haxx.se/bug/view.cgi?id=1264
Bug: http://curl.haxx.se/bug/view.cgi?id=1265
Bug: http://curl.haxx.se/bug/view.cgi?id=1266
Reported-by: Will Dietz
CURLOPT_DNS_USE_GLOBAL_CACHE broke in commit c43127414d (it has been
broken since the libcurl 7.29.0 release). While this option has been
documented as deprecated for almost a decade and nobody even reported
this bug, it should remain functional.
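For context, enabling the option from an application looks like this;
deprecated, but per the above it should keep working:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* deprecated: share one DNS cache across all easy handles */
        curl_easy_setopt(curl, CURLOPT_DNS_USE_GLOBAL_CACHE, 1L);
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }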
Added test case 1512 to verify
This is a regression as this logic used to work. It isn't clear when it
broke, but I'm assuming in 7.28.0 when we went all-multi internally.
This likely never worked with the multi interface. Since the failed
connection is only detected once the multi state has reached DO_MORE, the
Curl_do_more() function has been expanded somewhat so that the
ftp_do_more() function can request to go "back" to the previous state
when it makes another attempt - using PASV.
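A rough sketch of the control flow being described; the state names and
the return convention below are illustrative only and do not reproduce
curl's actual multi internals:

    /* illustrative DO_MORE-style handler that can ask the state machine
       to step back and retry the data connection with PASV */
    enum xfer_state { STATE_DOING, STATE_DO_MORE, STATE_DONE };

    /* returns 1 when the second connection is ready, 0 when the caller
       should go back to the previous state and retry (with PASV), and
       -1 on a hard failure */
    static int do_more(int active_mode_failed)
    {
      if(active_mode_failed)
        return 0;   /* request to go "back" and try PASV instead */
      return 1;
    }

    static enum xfer_state advance(enum xfer_state s, int active_failed)
    {
      if(s == STATE_DO_MORE) {
        int r = do_more(active_failed);
        if(r == 0)
          return STATE_DOING;   /* retry from the previous state */
        if(r == 1)
          return STATE_DONE;
      }
      return s;
    }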
Added test case 1233 to verify this fix. It has the little issue that it
assumes no service is listening/accepting connections on port 1...
Reported-by: byte_bucket in the #curl IRC channel
The internal function that's used to detect known file extensions for
the default Content-Type got the wrong pointer passed in when
CURLFORM_BUFFER + CURLFORM_BUFFERPTR were used. This had the effect that
strlen() would be used which could lead to an out-of-bounds read (and
thus segfault). In most cases it would only lead to it not finding or
using the correct default content-type.
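A minimal sketch of the affected API usage, where the file name passed
with CURLFORM_BUFFER (not the buffer pointer) is what the default
Content-Type detection should inspect; names, URL and buffer contents are
placeholders:

    #include <string.h>
    #include <curl/curl.h>

    int main(void)
    {
      static const char png_data[] = "...pretend PNG bytes...";
      struct curl_httppost *post = NULL;
      struct curl_httppost *last = NULL;
      CURL *curl = curl_easy_init();

      curl_formadd(&post, &last,
                   CURLFORM_COPYNAME, "upload",
                   CURLFORM_BUFFER, "image.png",
                   CURLFORM_BUFFERPTR, png_data,
                   CURLFORM_BUFFERLENGTH, (long)strlen(png_data),
                   CURLFORM_END);

      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
        curl_easy_setopt(curl, CURLOPT_HTTPPOST, post);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      curl_formfree(post);
      return 0;
    }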
It also showed that test 554 and test 587 were testing for the
previous/wrong behavior and now they're updated as well.
Bug: http://curl.haxx.se/bug/view.cgi?id=1262
Reported-by: Konstantin Isakov