multi_socket: handles timer inaccuracy better for timeouts

Igor Novoseltsev reported a problem with the multi socket API when
using timeouts and timers. It boiled down to libcurl using
GetTickCount() internally to figure out the current time, while
Igor's own application code used another function call for the
same purpose.

It made his app call the socket API timeout function a bit
_before_ libcurl would consider the timeout to trigger, and that
could easily lead to timeouts or stalls in the app. GetTickCount()
often has no better resolution than 16ms, and switching to the
alternative function QueryPerformanceCounter has its own share of
problems:
http://www.virtualdub.org/blog/pivot/entry.php?id=106
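
As an aside (not part of this change), the 16ms figure is easy to
observe with a tiny Windows-only test program such as the sketch
below, which watches GetTickCount() and prints how far it jumps
each time it changes:

  #include <stdio.h>
  #include <windows.h>

  int main(void)
  {
    DWORD prev = GetTickCount();
    int steps = 0;

    /* busy-wait and report each observed step of the tick counter;
       on typical systems the steps are roughly 15-16ms, not 1ms */
    while(steps < 10) {
      DWORD now = GetTickCount();
      if(now != prev) {
        printf("tick advanced by %lu ms\n", (unsigned long)(now - prev));
        prev = now;
        steps++;
      }
    }
    return 0;
  }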

We address this problem by simply having libcurl treat timers that
have already expired, or that will expire within the next 40ms, as
due for handling. I'm confident that there are other
implementations and operating systems with similarly inaccurate
timer functions, so it makes sense to apply this generically, and
I don't believe we sacrifice much by accepting a 40ms inaccuracy
on these timeouts.
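
For context, the application-side pattern that runs into this looks
roughly like the sketch below (a hedged illustration, not Igor's
actual code; app_start_timer() and app_on_timer_expired() are
hypothetical hooks into the app's event loop). The app arms its own
timer from the CURLMOPT_TIMERFUNCTION callback, and when that timer
fires it calls curl_multi_socket_action() with CURL_SOCKET_TIMEOUT.
If the app's clock runs slightly ahead of libcurl's
GetTickCount()-based clock, that call arrives just before libcurl
considers the timeout due, which is the mismatch described above.

  #include <curl/curl.h>

  /* hypothetical helper provided by the application's event loop */
  extern void app_start_timer(long timeout_ms);

  /* libcurl tells us the longest time to wait before acting on its
     timeouts; -1 means "delete the timer", ignored here for brevity */
  static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
  {
    (void)multi;
    (void)userp;
    if(timeout_ms >= 0)
      app_start_timer(timeout_ms); /* measured with the app's own clock */
    return 0;
  }

  /* called by the application's event loop when its timer expires */
  void app_on_timer_expired(CURLM *multi)
  {
    int running;
    /* this may happen slightly before libcurl's own clock agrees
       that the timeout has passed */
    curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
  }

  void app_setup(CURLM *multi)
  {
    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
  }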

Author: Daniel Stenberg
Date:   2010-06-01 23:18:34 +02:00
parent  e1c2c9be1a
commit  2c72732ebf
3 changed files with 28 additions and 4 deletions

@@ -1994,9 +1994,11 @@ static CURLMcode multi_socket(struct Curl_multi *multi,
     extracts a matching node if there is one */
  now = Curl_tvnow();
  now.tv_usec += 1000; /* to compensate for the truncating of 999us to 0ms,
                          we always add time here to make the comparison
                          below better */
  now.tv_usec += 40000; /* compensate for bad precision timers */
  if(now.tv_usec > 1000000) {
    now.tv_sec++;
    now.tv_usec -= 1000000;
  }
  multi->timetree = Curl_splaygetbest(now, multi->timetree, &t);
  if(t) {