remove outdated stuff

Daniel Stenberg 2008-12-20 17:16:45 +00:00
parent ffd08df863
commit 19c9b7c803
7 changed files with 0 additions and 1856 deletions


@ -1,38 +0,0 @@
#
# Build test apps for the Hiper project
# During dev at least, we use a static libcurl.
#
LDFLAGS = -lcrypt -lidn -lssl -lcrypto -lz -lresolv -L../ares/.libs \
-lcares
LIBCURL = -L../lib/.libs/ -lcurl
CFLAGS = -I../include -g
all: shiper hiper hipev ulimiter
hiper: hiper.o $(LIBCURL)
$(CC) -o $@ $< $(LIBCURL) $(LDFLAGS)
hiper.o: hiper.c
$(CC) $(CFLAGS) -c $<
hipev: hipev.o $(LIBCURL)
$(CC) -o $@ $< $(LIBCURL) $(LDFLAGS) -levent
hipev.o: hipev.c
$(CC) $(CFLAGS) -c $<
shiper: shiper.o $(LIBCURL)
$(CC) -o $@ $< $(LIBCURL) $(LDFLAGS)
shiper.o: shiper.c
$(CC) $(CFLAGS) -c $<
ulimiter: ulimiter.c
$(CC) -o $@ $<
clean:
rm -f hiper.o hiper shiper shiper.o hipev hipev.o *~ ulimiter
$(LIBCURL):
(cd ../lib && make)


@ -1,300 +0,0 @@
Date: January 5, 2006
Author: Daniel Stenberg
Status of project Hiper - high performance libcurl modifications
================================================================
What is Hiper
You won't find such a description in this document. See
http://curl.haxx.se/libcurl/hiper/ for further details.
Live Progress Info
During my work, I've posted occasional updates on the curl-library mailing
list but more importantly done frequent updates of
http://curl.haxx.se/libcurl/hiper/schedule.html
Schedule
I took time off my regular job during December 2005 and the first week of
January 2006 to work on hiper full-time.
Step 1 - Measure the Existing Solution
I started full-time work on project Hiper on December 1st 2005. I began by
putting together a test application that used the existing API to allow me
to properly and with accuracy measure execution and transfer speeds when
doing a large amount of transfers.
I soon discovered that it was impossible to do any sensible measurements by
using live and actual URLs since the transfers were too unreliable and
uncontrolled. I then enhanced the current HTTP server in the curl test suite
and made that support a large amount of transfers and some extra magic
"commands" that would make the server either just sit "idle" or "stream"
(continuously sending data in a never-ending stream). I then wrote up two
files using the curl test suite file format and by accessing the properly
formatted URLs on my localhost the HTTP server would either run "idle" or
run "stream".
Having this working, I patched libcurl to always only recv() a single byte
off the network each time, just to make sure that the time spent on reading
data is constant and never very long.
I adjusted the test application (actually called 'hiper') to create Y idle
transfers and Z stream transfers, had it run for N seconds and then quit and
produce a summary on stdout. Now I got very solid and repeatable results. I
started to run repeated tests and save the results when I ran into the
dreaded 1024 socket maximum limit.
One side of the problem is that the fd_set type only allows 1024 file
descriptors (on my Linux), which I had to solve by simply making my own type
with room for more connections and do ugly typecasts in the code. The other
side of the problem is that user applications have a limit imposed by the
system on the maximum number of file descriptors it can have open, and I had
to work around that by writing a special tool that runs setuid root,
increases the limit, downgrades to a normal user again and then runs the
command line of your choice. This second approach has to be used for both
'hiper' and the test HTTP server. (You need to build the HTTP server with
CURL_SWS_FORK_ENABLED defined to have it do forks since it isn't desirable
to do so when running the normal curl tests.)
Now I could run my test program without problems. I decided to run the tests
with 1 stream connection and a varying amount of idle ones. I did 1001,
2001, 3001, 5001 and 9001 connections and measured how long select() and
curl_multi_perform() (including the curl_multi_fdset() call) would take in
average, over a period of 20 seconds. I ran each test 5-6 times and I used
the average time of all the runs.
The times in number of microseconds:
Connections   multi_perform   select
       1001            3504      951
       2001            7606     1988
       3001           11045     2715
       5001           16406     4024
       9001           32147     8030
Test system
CPU: Athlon XP 2800
RAM: 1 GB
Linux: 2.6
glibc: 2.3.5
libcurl: 7.15.1
The only reason I stopped at 9001 connections is that my test machine ran
out of available memory by then as I ran the test server on the same machine,
and I didn't want to risk the test result accuracy by having it start using
the swap during the tests.
It means that at 9000 connections we spend about 40ms (curl_multi_perform()
plus select()) on each socket action, even when only one socket ever has
action.
Given the ~32000 microseconds curl_multi_perform() takes for 9000
connections, and the 18000 laps it loops, each lap takes less than 2
microseconds. (Of course counting time/laps is an oversimplification, but
anyway.)
Hopefully we should achieve less than 10 microseconds for each call to
curl_multi_socket() for an active connection.
The timing graph displayed on the libevent site (duplicated on the hiper
project page) suggests that libevent is pretty much fixed at 50 microseconds
(although I don't know what test box was used in their testing, we can
compare the select()-times from my tests and see that they are at least
reasonably close).
Summing up, the current ~40 ms spent at 9000 connections could then possibly
be lowered to something around 60 us!
Step 2 - Implement curl_multi_socket API
Most of the design decisions and debates about this new API have already
been held on the curl-library mailing list a long time ago so I had a basic
idea on what approach to use. The main ideas of the new API are simply:
1 - The application can use whatever event system it likes as it gets info
from libcurl about what file descriptors libcurl waits for what action
on. (The previous API returns fd_sets which is very select()-centric).
2 - When the application discovers action on a single socket, it calls
libcurl and informs that there was action on this particular socket and
libcurl can then act on that socket/transfer only and not care about
any other transfers. (The previous API always had to scan through all
the existing transfers.)
The idea is that curl_multi_socket() calls a given callback with information
about what socket to wait for what action on, and the callback only gets
called if the status of that socket has changed.
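For illustration, such a callback has roughly this shape (a sketch only,
mirroring socket_callback() in shiper.c further down; it is not part of the
patch itself):

  #include <stdio.h>
  #include <curl/curl.h>

  static int socket_cb(CURL *easy,       /* transfer this socket belongs to */
                       curl_socket_t s,  /* the socket itself */
                       int what,         /* CURL_POLL_IN/OUT/INOUT/REMOVE */
                       void *userp)      /* set with CURLMOPT_SOCKETDATA */
  {
    (void)easy;
    (void)userp;
    if(what == CURL_POLL_REMOVE)
      printf("stop waiting for socket %d\n", (int)s);
    else
      printf("wait for %s%s on socket %d\n",
             (what & CURL_POLL_IN) ? "read " : "",
             (what & CURL_POLL_OUT) ? "write" : "", (int)s);
    return 0;
  }

  /* installed once on the multi handle:
     curl_multi_setopt(multi_handle, CURLMOPT_SOCKETFUNCTION, socket_cb); */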
In the API draft from before, we have a timeout argument on a per socket
basis and we also allowed curl_multi_socket() to pass in an 'easy handle'
instead of socket to allow libcurl to shortcut a lookup and work on the
affected easy handle right away. Both these turned out to be bad ideas.
The timeout argument was removed from the socket callback since after much
thinking I came to the conclusion that we really don't want to handle
timeouts on a per socket basis. We need it on a per transfer (easy handle)
basis and thus we can't provide it in the callbacks in a nice way. Instead,
we have to offer a curl_multi_timeout() that returns the largest amount of
time we should wait before we call the "timeout action" of libcurl, to
trigger the proper internal timeout action on the affected transfer. To get
this to work, I added a struct to each easy handle in which we store an
"expire time" (if any). The structs are then "splay sorted" so that we can
add and remove times from the linked list and yet somewhat swiftly figure
out 1 - how long time there is until the next timer expires and 2 - which
timer (handle) should we take care of now. Of course, the upside of all this
is that we get a curl_multi_timeout() that should also work with old-style
applications that use curl_multi_perform().
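In practice a select()-style application asks libcurl for that timeout before
every wait, roughly like this (a sketch only; shiper.c further down does the
same thing inline, and the running_handles argument is the one discussed in
the July 30 note at the end of this document):

  #include <sys/select.h>
  #include <curl/curl.h>

  /* Wait no longer than libcurl's closest expire time, then either let the
     caller dispatch the sockets select() flagged, or tell libcurl that the
     timeout hit. */
  static void wait_once(CURLM *multi_handle, fd_set *fdread, fd_set *fdwrite,
                        int maxfd, int *running_handles)
  {
    struct timeval tv;
    long timeout_ms = 1000;    /* fallback if libcurl has no pending timer */
    int rc;

    curl_multi_timeout(multi_handle, &timeout_ms);
    if(timeout_ms < 0)
      timeout_ms = 1000;
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    rc = select(maxfd + 1, fdread, fdwrite, NULL, &tv);
    if(rc == 0)
      /* nothing happened before the deadline: run libcurl's timeout code */
      curl_multi_socket(multi_handle, CURL_SOCKET_TIMEOUT, running_handles);
  }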
The easy handle argument was removed from the curl_multi_socket() function
because having it there would require the application to do a socket to easy
handle conversion on its own. I find it very unlikely that applications
would want to do that, and since libcurl would need such a lookup on its own
anyway (we didn't want to force applications to do that translation code; it
would have been optional), it seemed like an unnecessary option. I also
realized that when we use underlying libraries such as c-ares (for
asynchronous DNS resolving) there might in fact be more than one transfer
waiting for action on the same socket, which makes the lookup even trickier
and even
less likely to ever get done by applications. Instead I created an internal
"socket to easy handles" hash table that given a socket (file descriptor)
returns a list of easy handles that waits for some action on that socket.
To make libcurl be able to report plain sockets in the socket callback, I
had to re-organize the internals of the curl_multi_fdset() etc so that the
conversion from sockets to fd_sets for that function is only done in the
last step before the data is returned. I also had to extend c-ares to get a
function that can return plain sockets, as that library too returned only
fd_sets and that is no longer good enough. The changes done to c-ares have
been committed and are available in the c-ares CVS repository destined to be
included in the upcoming c-ares 1.3.1 release.
The 'shiper' tool is the test application I wrote that uses the new
curl_multi_socket() in its current state. It seems to be working and it uses
the API as it is documented and supposed to work. It is still using
select(), because I needed that during development (like until I had the
socket hash implemented etc) and because I haven't yet learned how to use
libevent or similar.
The hiper/shiper tools are very simple: they initiate lots of connections,
keep them running for the test period and then kill them all.
Since I wasn't done with the implementation until early January I haven't
had time to run very many measurements and checks, but I have done a few
runs with up to a few hundred connections (with a single active one). The
curl_multi_socket() invoke then takes 3-6 microseconds in average (using the
read-only-1-byte-at-a-time hack). If this number does increase a lot when we
add connections, it certainly matches my in my opinion very ambitious goal.
We are now below the 60 microseconds "per socket action" goal. It is
destined to be somewhat higher the more connections we have since the hash
table gets more populated and the splay tree will grow etc.
Some tests at 7000 and 9000 connections showed that the socket hash lookup
is somewhat of a bottleneck. Its current implementation may be a bit too
limiting. It simply has a fixed-size array, and on each entry in the array
it has a linked list with entries. So the hash only checks which list to
scan through. The code I had used so far used a hash with merely 7 slots (as
that is what the DNS hash uses) but with 7000 connections that would make an
average of 1000 nodes in each list to run through. I upped that to 97 slots
(I believe a prime is suitable) and noticed a significant speed increase. I
need to reconsider the hash implementation or use a rather large default
value like this. At 9000 connections I was still below 10us per call.
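To illustrate the structure being discussed (a sketch only, not the actual
libcurl hash code): a prime number of buckets, each holding a linked list
keyed on the socket, so a lookup only has to walk a single bucket's list.

  #include <curl/curl.h>

  #define SOCK_HASH_SLOTS 97   /* prime bucket count, as discussed above */

  struct sockentry {
    curl_socket_t sockfd;      /* the key */
    CURL *easy;                /* easy handle waiting on this socket */
    struct sockentry *next;    /* next entry in the same bucket */
  };

  static struct sockentry *sockhash[SOCK_HASH_SLOTS];

  static struct sockentry *sock_lookup(curl_socket_t s)
  {
    struct sockentry *e = sockhash[(unsigned int)s % SOCK_HASH_SLOTS];
    while(e && e->sockfd != s)
      e = e->next;             /* scan only this bucket's list */
    return e;                  /* NULL if the socket is unknown */
  }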
Status Right Now
The curl_multi_socket() API is implemented according to how it is
documented. The man pages for curl_multi_socket and curl_multi_timeout are
both committed to CVS and are available online for easy browsing:
http://curl.haxx.se/libcurl/c/curl_multi_socket.html
http://curl.haxx.se/libcurl/c/curl_multi_timeout.html
The hiper-5.patch I made available early morning January 5th, 2006 should
apply fine on a recent CVS checkout (at the time of this writing curl 7.15.1
is the latest public curl release but the hiper patch does not apply fine on
that).
What is Left for the curl_multi_socket API
1 - More measuring with more extreme number of connections
2 - More testing with actual URLs and complete from start to end transfers.
I'm quite sure we don't set expire times all over in the code properly, so
there are bound to be some timeout bugs left.
What it really takes is for me to commit the code and to make an official
release with it so that we get people "out there" to help out testing it.
What is Left for project Hiper
1 - Add HTTP pipelining support
2 - Add a zero (or at least close to zero) copy interface
Neither of these points has been planned or detailed in terms of exactly how
it will be implemented.
Roadmap Ahead
I plan and hope to return to full-time hiper work later on this spring or
possibly summer to continue where I pause now. Of course some spare time
might also be spent until then to get us moving forward.
---------------------------------------------------------------------------
April 11, 2006
While sitting staring at my screen trying to write up a *nice* sample script
using libevent, it strikes me that since libevent is pretty much based around
its structs that you set up for each event/file descriptor, my application
wants to figure out the correct struct that is associated with the file
descriptor that libcurl provides in the socket callback.
This feels like an operation most applications will need when using the
multi_socket API, so I should try to figure out a decent way to offer this
basic functionality in libcurl itself - and since we already have the file
descriptors in a hash, we can probably just as well extend it somewhat and
store some custom pointers as well.
We need to offer the app a way to set a private pointer to be associated with
the particular file descriptor, and then be able to provide that pointer on
subsequent callback calls.
---------------------------------------------------------------------------
April 20, 2006
I was wrong when I previously claimed we could have more than one easy handle
using the same socket. I've cleaned up and simplified code now to adjust to
this.
---------------------------------------------------------------------------
July 9, 2006
TODO: We need to alter how we use c-ares for getting info about its sockets,
as c-ares now provides a callback approach very similar to how libcurl is
about to work.
I'm adding a function called curl_multi_assign() that will set a private
pointer added to the internal libcurl hash table for the particular socket
passed in to this function:
CURLMcode curl_multi_assign(CURLM *multi_handle,
curl_socket_t sockfd,
void *sockp);
'sockp' being a custom pointer set by the application to be associated with
this socket. The socket has to be already existing and in-use by libcurl,
like having already called the callback telling about its existence.
The set sockp pointer will then be passed on to the callback in upcoming
calls when this same socket is used (in the brand new 'socketp' argument).
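Used the way addsock() in hipev.c does it: when the socket callback reports a
socket we have not seen before, the application allocates its own struct and
hands the pointer over (a sketch; 'struct fdinfo' is the application's own
type, as in hipev.c):

  struct fdinfo *fdp = calloc(1, sizeof(struct fdinfo));
  if(fdp) {
    fdp->sockfd = s;   /* the socket the callback just told us about */
    curl_multi_assign(multi_handle, s, fdp);
    /* later callbacks for 's' now get 'fdp' back in the socketp argument */
  }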
---------------------------------------------------------------------------
July 30, 2006
Shockingly stupid (of me not having realized this before), but we really need
to add a 'running_handles' argument to the curl_multi_socket() and
curl_multi_socket_all() prototypes so that the caller can get to know when
all the transfers are actually done!
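With that argument added, driving a single socket event would look roughly
like this (a sketch matching how hipev.c uses it):

  int running_handles;
  CURLMcode rc;
  do {
    rc = curl_multi_socket(multi_handle, sockfd, &running_handles);
  } while(rc == CURLM_CALL_MULTI_PERFORM);
  if(!running_handles) {
    /* all transfers are done - time to leave the event loop */
  }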


@ -1,34 +0,0 @@
#!/usr/bin/perl
# 1) http://randomurl.com/body.php
# 2) http://random.yahoo.com/fast/ryl
# 3) http://www.uroulette.com/visit
# 1) very slow, responds with URL in body meta style:
# <meta http-equiv="refresh" content="0; url=http://www.webmasterworld.com/forum85/735.htm">
# 2) Responds with non-HTTP headers like:
# Status: 301
# Location: http://www.adaptive.net/
# 3) ordinary 30X code and Location:
my $url;
map { $url .= " http://www.uroulette.com/visit"; } (1 .. 12);
print $url."\n";
my $count=0;
open(DUMP, ">>dump");
while(1) {
my @getit = `curl -si $url`;
for my $l (@getit) {
if($l =~ /^Location: (.*)/) {
print DUMP "$1\n";
print STDERR "$count\r";
$count++;
}
}
}


@ -1,416 +0,0 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* Connect N connections. Z are idle, and X are active. Transfer as fast as
* possible.
*
* Run for a specific amount of time (10 secs for now). Output detailed timing
* information.
*
*/
/* The maximum number of simultaneous connections/transfers we support */
#define NCONNECTIONS 50000
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>
#include <sys/poll.h>
#include <curl/curl.h>
#define MICROSEC 1000000 /* number of microseconds in one second */
/* The maximum time (in microseconds) we run the test */
#define RUN_FOR_THIS_LONG (20*MICROSEC)
/* Number of loops (seconds) we allow the total download amount and alive
connections to remain the same until we bail out. Set this slightly higher
when using asynch supported libcurl. */
#define IDLE_TIME 10
struct globalinfo {
size_t dlcounter;
};
struct connection {
CURL *e;
int id; /* just a counter for easy browsing */
char *url;
size_t dlcounter;
struct globalinfo *global;
char error[CURL_ERROR_SIZE];
};
/* on port 8999 we run a modified (fork-) sws that supports pure idle and full
stream mode */
#define PORT "8999"
#define HOST "192.168.1.13"
#define URL_IDLE "http://" HOST ":" PORT "/1000"
#define URL_ACTIVE "http://" HOST ":" PORT "/1001"
static size_t
writecallback(void *ptr, size_t size, size_t nmemb, void *data)
{
size_t realsize = size * nmemb;
struct connection *c = (struct connection *)data;
c->dlcounter += realsize;
c->global->dlcounter += realsize;
#if 0
printf("%02d: %d, total %d\n",
c->id, c->dlcounter, c->global->dlcounter);
#endif
return realsize;
}
/* return the diff between two timevals, in us */
static long tvdiff(struct timeval *newer, struct timeval *older)
{
return (newer->tv_sec-older->tv_sec)*1000000+
(newer->tv_usec-older->tv_usec);
}
/* store the start time of the program in this variable */
static struct timeval timer;
static void timer_start(void)
{
/* capture the time of the start moment */
gettimeofday(&timer, NULL);
}
static struct timeval cont; /* at this moment we continued */
int still_running; /* keep number of running handles */
struct conncount {
long time_us;
long laps;
long maxtime;
};
static struct timeval timerpause;
static void timer_pause(void)
{
/* capture the time of the pause moment */
gettimeofday(&timerpause, NULL);
/* If we have a previous continue (all times except the first), we can now
store the time for a whole "lap" */
if(cont.tv_sec) {
long lap;
lap = tvdiff(&timerpause, &cont);
}
}
static long paused; /* amount of us we have been pausing */
static void timer_continue(void)
{
/* Capture the time of the restored operation moment, now calculate how long
time we were paused and added that to the 'paused' variable.
*/
gettimeofday(&cont, NULL);
paused += tvdiff(&cont, &timerpause);
}
static long total; /* amount of us from start to stop */
static void timer_total(void)
{
struct timeval stop;
/* Capture the time of the operation stopped moment, now calculate how long
time we were running and how much of that pausing.
*/
gettimeofday(&stop, NULL);
total = tvdiff(&stop, &timer);
}
struct globalinfo info;
struct connection *conns;
long selects;
long selectsalive;
long timeouts;
long perform;
long performalive;
long performselect;
long topselect;
int num_total;
int num_idle;
int num_active;
static void report(void)
{
int i;
long active = total - paused;
long numdl = 0;
for(i=0; i < num_total; i++) {
if(conns[i].dlcounter)
numdl++;
}
printf("Summary from %d simultanoues transfers (%d active)\n",
num_total, num_active);
printf("%d out of %d connections provided data\n", numdl, num_total);
printf("Total time: %ldus select(): %ldus curl_multi_perform(): %ldus\n",
total, paused, active);
printf("%d calls to curl_multi_perform() average %d alive "
"Average time: %dus\n",
perform, performalive/perform, active/perform);
printf("%d calls to select(), average %d alive "
"Average time: %dus\n",
selects, selectsalive/selects,
paused/selects);
printf(" Average number of readable connections per select() return: %d\n",
performselect/selects);
printf(" Max number of readable connections for a single select() "
"return: %d\n",
topselect);
printf("%ld select() timeouts\n", timeouts);
printf("Downloaded %ld bytes in %ld bytes/sec, %ld usec/byte\n",
info.dlcounter,
info.dlcounter/(total/1000000),
total/info.dlcounter);
#if 0
for(i=1; i< num_total; i++) {
if(timecount[i].laps) {
printf("Time %d connections, average %ld max %ld (%ld laps) "
"average/conn: %ld\n",
i,
timecount[i].time_us/timecount[i].laps,
timecount[i].maxtime,
timecount[i].laps,
(timecount[i].time_us/timecount[i].laps)/i );
}
}
#endif
}
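/* A replacement for fd_set that is big enough for NCONNECTIONS sockets; a
plain fd_set only holds 1024 descriptors (on the Linux used here), so we
cast this to (fd_set *) when calling select() and the FD_* macros. */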
struct ourfdset {
char fdbuffer[NCONNECTIONS/8];
};
#define FD2_ZERO(x) memset(x, 0, sizeof(struct ourfdset))
typedef struct ourfdset fd2_set;
int main(int argc, char **argv)
{
CURLM *multi_handle;
CURLMsg *msg;
CURLcode code = CURLE_OK;
CURLMcode mcode = CURLM_OK;
int rc;
int i;
int prevalive=-1;
int prevsamecounter=0;
int prevtotal = -1;
fd2_set fdsizecheck;
int selectmaxamount;
memset(&info, 0, sizeof(struct globalinfo));
selectmaxamount = sizeof(fdsizecheck) * 8;
printf("select() supports max %d connections\n", selectmaxamount);
if(argc < 3) {
printf("Usage: hiper [num idle] [num active]\n");
return 1;
}
num_idle = atoi(argv[1]);
num_active = atoi(argv[2]);
num_total = num_idle + num_active;
if(num_total > selectmaxamount) {
printf("Requested more connections than supported!\n");
return 4;
}
conns = calloc(num_total, sizeof(struct connection));
if(!conns) {
printf("Out of memory\n");
return 3;
}
if(num_total >= NCONNECTIONS) {
printf("Increase NCONNECTIONS!\n");
return 2;
}
/* init the multi stack */
multi_handle = curl_multi_init();
for(i=0; i< num_total; i++) {
CURL *e;
char *nl;
memset(&conns[i], 0, sizeof(struct connection));
if(i < num_idle)
conns[i].url = URL_IDLE;
else
conns[i].url = URL_ACTIVE;
#if 0
printf("%d: Add URL %s\n", i, conns[i].url);
#endif
e = curl_easy_init();
if(!e) {
printf("curl_easy_init() for handle %d failed, exiting!\n", i);
return 2;
}
conns[i].e = e;
conns[i].id = i;
conns[i].global = &info;
curl_easy_setopt(e, CURLOPT_URL, conns[i].url);
curl_easy_setopt(e, CURLOPT_WRITEFUNCTION, writecallback);
curl_easy_setopt(e, CURLOPT_WRITEDATA, &conns[i]);
#if 1
curl_easy_setopt(e, CURLOPT_VERBOSE, 1);
#endif
curl_easy_setopt(e, CURLOPT_ERRORBUFFER, conns[i].error);
curl_easy_setopt(e, CURLOPT_PRIVATE, &conns[i]);
/* add the easy to the multi */
if(CURLM_OK != curl_multi_add_handle(multi_handle, e)) {
printf("curl_multi_add_handle() returned error for %d\n", i);
return 3;
}
}
/* we start some action by calling perform right away */
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
printf("Starting timer, expects to run for %ldus\n", RUN_FOR_THIS_LONG);
timer_start();
while(still_running == num_total) {
struct timeval timeout;
int rc; /* select() return code */
long timeout_ms;
fd2_set fdread;
fd2_set fdwrite;
fd2_set fdexcep;
int maxfd;
FD2_ZERO(&fdread);
FD2_ZERO(&fdwrite);
FD2_ZERO(&fdexcep);
curl_multi_timeout(multi_handle, &timeout_ms);
/* set timeout to wait */
timeout.tv_sec = timeout_ms/1000;
timeout.tv_usec = (timeout_ms%1000)*1000;
/* get file descriptors from the transfers */
curl_multi_fdset(multi_handle,
(fd_set *)&fdread,
(fd_set *)&fdwrite,
(fd_set *)&fdexcep, &maxfd);
timer_pause();
selects++;
selectsalive += still_running;
rc = select(maxfd+1,
(fd_set *)&fdread,
(fd_set *)&fdwrite,
(fd_set *)&fdexcep, &timeout);
#if 0
/* Output this here to make it outside the timer */
printf("Running: %d (%d bytes)\n", still_running, info.dlcounter);
#endif
timer_continue();
switch(rc) {
case -1:
/* select error */
break;
case 0:
timeouts++;
default:
/* timeout or readable/writable sockets */
do {
perform++;
performalive += still_running;
}
while(CURLM_CALL_MULTI_PERFORM ==
curl_multi_perform(multi_handle, &still_running));
performselect += rc;
if(rc > topselect)
topselect = rc;
break;
}
if(total > RUN_FOR_THIS_LONG) {
printf("Stopped after %ldus\n", total);
break;
}
if(prevalive != still_running) {
printf("%d connections alive\n", still_running);
}
prevalive = still_running;
timer_total(); /* calculate the total time spent so far */
}
if(still_running != num_total) {
/* something made connections fail, extract the reason and tell */
int msgs_left;
struct connection *cptr;
while ((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
if (msg->msg == CURLMSG_DONE) {
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &cptr);
printf("%d => (%d) %s", cptr->id, msg->data.result, cptr->error);
}
}
}
curl_multi_cleanup(multi_handle);
/* cleanup all the easy handles */
for(i=0; i< num_total; i++)
curl_easy_cleanup(conns[i].e);
report();
return code;
}


@ -1,410 +0,0 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* Connect N connections. Z are idle, and X are active. Transfer as fast as
* possible.
*
* Output detailed timing information.
*
* Uses libevent.
*
*/
/* The maximum number of simultaneous connections/transfers we support */
#define NCONNECTIONS 50000
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>
#include <sys/poll.h>
#include <curl/curl.h>
#include <event.h> /* for libevent */
#ifndef FALSE
#define FALSE 0
#endif
#ifndef TRUE
#define TRUE 1
#endif
#define MICROSEC 1000000 /* number of microseconds in one second */
/* The maximum time (in microseconds) we run the test */
#define RUN_FOR_THIS_LONG (5*MICROSEC)
/* Number of loops (seconds) we allow the total download amount and alive
connections to remain the same until we bail out. Set this slightly higher
when using asynch supported libcurl. */
#define IDLE_TIME 10
struct globalinfo {
size_t dlcounter;
};
struct connection {
CURL *e;
int id; /* just a counter for easy browsing */
char *url;
size_t dlcounter;
struct globalinfo *global;
char error[CURL_ERROR_SIZE];
};
/* this is the struct associated with each file descriptor libcurl tells us
it is dealing with */
struct fdinfo {
/* create a link list of fdinfo structs */
struct fdinfo *next;
struct fdinfo *prev;
curl_socket_t sockfd;
CURL *easy;
int action; /* as set by libcurl */
long timeout; /* as set by libcurl */
struct event ev; /* the libevent event used for this socket */
int evset; /* true if the 'ev' struct has been used in a event_set() call */
CURLM *multi; /* pointer to the multi handle */
int *running_handles; /* pointer to the running_handles counter */
};
static struct fdinfo *allsocks;
static int running_handles;
/* we have the timerevent global so that when the final socket-based event is
done, we can remove the timerevent as well */
static struct event timerevent;
static void update_timeout(CURLM *multi_handle);
/* called from libevent on action on a particular socket ("event") */
static void eventcallback(int fd, short type, void *userp)
{
struct fdinfo *fdp = (struct fdinfo *)userp;
CURLMcode rc;
fprintf(stderr, "EVENT callback type %d\n", type);
/* tell libcurl to deal with the transfer associated with this socket */
do {
rc = curl_multi_socket(fdp->multi, fd, fdp->running_handles);
} while (rc == CURLM_CALL_MULTI_PERFORM);
if(rc) {
fprintf(stderr, "curl_multi_socket() returned %d\n", (int)rc);
}
fprintf(stderr, "running_handles: %d\n", *fdp->running_handles);
if(!*fdp->running_handles) {
/* last transfer is complete, kill pending timeout */
fprintf(stderr, "last transfer done, kill timeout\n");
if(evtimer_pending(&timerevent, NULL))
evtimer_del(&timerevent);
}
else
update_timeout(fdp->multi);
}
/* called from libevent when our timer event expires */
static void timercallback(int fd, short type, void *userp)
{
CURLM *multi_handle = (CURLM *)userp;
int running_handles;
CURLMcode rc;
(void)fd; /* not used for this */
(void)type; /* ignored in here */
fprintf(stderr, "EVENT timeout\n");
/* tell libcurl to deal with the transfer associated with this socket */
do {
rc = curl_multi_socket(multi_handle, CURL_SOCKET_TIMEOUT,
&running_handles);
} while (rc == CURLM_CALL_MULTI_PERFORM);
if(running_handles)
/* Get the current timeout value from libcurl and set a new timeout */
update_timeout(multi_handle);
}
static void remsock(struct fdinfo *f)
{
if(!f)
/* did not find socket to remove! */
return;
if(f->evset)
event_del(&f->ev);
if(f->prev)
f->prev->next = f->next;
if(f->next)
f->next->prev = f->prev;
else
/* this was the last entry */
allsocks = NULL;
}
static void setsock(struct fdinfo *fdp, curl_socket_t s, CURL *easy,
int action)
{
fdp->sockfd = s;
fdp->action = action;
fdp->easy = easy;
if(fdp->evset)
/* first remove the existing event if the old setup was used */
event_del(&fdp->ev);
/* now use and add the current socket setup to libevent. The EV_PERSIST is
the key here as otherwise libevent will automatically remove the event
when it occurs the first time */
event_set(&fdp->ev, fdp->sockfd,
(action&CURL_POLL_IN?EV_READ:0)|
(action&CURL_POLL_OUT?EV_WRITE:0)| EV_PERSIST,
eventcallback, fdp);
fdp->evset=1;
fprintf(stderr, "event_add() for fd %d\n", s);
/* We don't use any socket-specific timeout but instead we use a single
global one. This is (mostly) because libcurl doesn't expose any
particular socket- based timeout value. */
event_add(&fdp->ev, NULL);
}
static void addsock(curl_socket_t s, CURL *easy, int action, CURLM *multi)
{
struct fdinfo *fdp = calloc(sizeof(struct fdinfo), 1);
fdp->multi = multi;
fdp->running_handles = &running_handles;
setsock(fdp, s, easy, action);
if(allsocks) {
fdp->next = allsocks;
allsocks->prev = fdp;
/* now set allsocks to point to the new struct */
allsocks = fdp;
}
else
allsocks = fdp;
/* Set this association in libcurl */
curl_multi_assign(multi, s, fdp);
}
/* on port 8999 we run a fork enabled sws that supports 'idle' and 'stream' */
#define PORT "8999"
#define HOST "127.0.0.1"
#define URL_IDLE "http://" HOST ":" PORT "/1000"
#if 1
#define URL_ACTIVE "http://" HOST ":" PORT "/1001"
#else
#define URL_ACTIVE "http://localhost/"
#endif
static int socket_callback(CURL *easy, /* easy handle */
curl_socket_t s, /* socket */
int what, /* see above */
void *cbp, /* callback pointer */
void *socketp) /* socket pointer */
{
struct fdinfo *fdp = (struct fdinfo *)socketp;
char *whatstr[]={
"none",
"IN",
"OUT",
"INOUT",
"REMOVE"};
fprintf(stderr, "socket %d easy %p what %s\n", s, easy,
whatstr[what]);
if(what == CURL_POLL_REMOVE)
remsock(fdp);
else {
if(!fdp) {
/* not previously known, add it and set association */
printf("Add info for socket %d %s%s\n", s,
what&CURL_POLL_IN?"READ":"",
what&CURL_POLL_OUT?"WRITE":"" );
addsock(s, easy, what, cbp);
}
else {
/* we already know about it, just change action/timeout */
printf("Changing info for socket %d from %d to %d\n",
s, fdp->action, what);
setsock(fdp, s, easy, what);
}
}
return 0; /* return code meaning? */
}
static size_t
writecallback(void *ptr, size_t size, size_t nmemb, void *data)
{
size_t realsize = size * nmemb;
struct connection *c = (struct connection *)data;
(void)ptr;
c->dlcounter += realsize;
c->global->dlcounter += realsize;
printf("%02d: %d, total %d\n",
c->id, c->dlcounter, c->global->dlcounter);
return realsize;
}
struct globalinfo info;
struct connection *conns;
int num_total;
int num_idle;
int num_active;
static void update_timeout(CURLM *multi_handle)
{
long timeout_ms;
struct timeval timeout;
/* Since we need a global timeout to occur after a given time of inactivity,
we use a single timeout-event. Get the timeout value from libcurl, and
update it after every call to libcurl. */
curl_multi_timeout(multi_handle, &timeout_ms);
/* convert ms to timeval */
timeout.tv_sec = timeout_ms/1000;
timeout.tv_usec = (timeout_ms%1000)*1000;
evtimer_add(&timerevent, &timeout);
}
int main(int argc, char **argv)
{
CURLM *multi_handle;
CURLMsg *msg;
CURLcode code = CURLE_OK;
int i;
memset(&info, 0, sizeof(struct globalinfo));
if(argc < 3) {
printf("Usage: hiper-event [num idle] [num active]\n");
return 1;
}
num_idle = atoi(argv[1]);
num_active = atoi(argv[2]);
num_total = num_idle + num_active;
conns = calloc(num_total, sizeof(struct connection));
if(!conns) {
printf("Out of memory\n");
return 3;
}
if(num_total >= NCONNECTIONS) {
printf("Too many connections requested, increase NCONNECTIONS!\n");
return 2;
}
event_init(); /* Initialize the event library */
printf("About to do %d connections\n", num_total);
/* init the multi stack */
multi_handle = curl_multi_init();
/* initialize the timeout event */
evtimer_set(&timerevent, timercallback, multi_handle);
for(i=0; i< num_total; i++) {
CURL *e;
memset(&conns[i], 0, sizeof(struct connection));
if(i < num_idle)
conns[i].url = URL_IDLE;
else
conns[i].url = URL_ACTIVE;
e = curl_easy_init();
if(!e) {
printf("curl_easy_init() for handle %d failed, exiting!\n", i);
return 2;
}
conns[i].e = e;
conns[i].id = i;
conns[i].global = &info;
curl_easy_setopt(e, CURLOPT_URL, conns[i].url);
curl_easy_setopt(e, CURLOPT_WRITEFUNCTION, writecallback);
curl_easy_setopt(e, CURLOPT_WRITEDATA, &conns[i]);
curl_easy_setopt(e, CURLOPT_VERBOSE, 0);
curl_easy_setopt(e, CURLOPT_ERRORBUFFER, conns[i].error);
curl_easy_setopt(e, CURLOPT_PRIVATE, &conns[i]);
/* add the easy to the multi */
if(CURLM_OK != curl_multi_add_handle(multi_handle, e)) {
printf("curl_multi_add_handle() returned error for %d\n", i);
return 3;
}
}
curl_multi_setopt(multi_handle, CURLMOPT_SOCKETFUNCTION, socket_callback);
curl_multi_setopt(multi_handle, CURLMOPT_SOCKETDATA, multi_handle);
/* we start the action by calling *socket_all() */
while(CURLM_CALL_MULTI_PERFORM == curl_multi_socket_all(multi_handle,
&running_handles));
/* update timeout */
update_timeout(multi_handle);
/* event_dispatch() runs the event main loop. It ends when no events are
left to wait for. */
event_dispatch();
{
/* something made connections fail, extract the reason and tell */
int msgs_left;
struct connection *cptr;
while ((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
if (msg->msg == CURLMSG_DONE) {
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &cptr);
printf("%d => (%d) %s\n",
cptr->id, msg->data.result, cptr->error);
}
}
}
curl_multi_cleanup(multi_handle);
/* cleanup all the easy handles */
for(i=0; i< num_total; i++)
curl_easy_cleanup(conns[i].e);
return code;
}


@ -1,557 +0,0 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* Connect N connections. Z are idle, and X are active. Transfer as fast as
* possible.
*
* Run for a specific amount of time (10 secs for now). Output detailed timing
* information.
*
* The same as hiper.c but using the new *socket() API instead of the
* "old" *perform() call.
*
* Uses a select() approach but only for keeping the code simple and
* stand-alone. See hipev.c for a libevent-based example.
*
*/
/* The maximum number of simultaneous connections/transfers we support */
#define NCONNECTIONS 50000
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>
#include <sys/poll.h>
#include <curl/curl.h>
#ifndef FALSE
#define FALSE 0
#endif
#ifndef TRUE
#define TRUE 1
#endif
#define MICROSEC 1000000 /* number of microseconds in one second */
/* The maximum time (in microseconds) we run the test */
#define RUN_FOR_THIS_LONG (5*MICROSEC)
/* Number of loops (seconds) we allow the total download amount and alive
connections to remain the same until we bail out. Set this slightly higher
when using asynch supported libcurl. */
#define IDLE_TIME 10
struct ourfdset {
/* __fds_bits is what the Linux glibc headers use when they declare the
fd_set struct so by using this we can actually avoid the typecast for the
FD_SET() macro usage but it would hardly be portable */
char __fds_bits[NCONNECTIONS/8];
};
#define FD2_ZERO(x) memset(x, 0, sizeof(struct ourfdset))
typedef struct ourfdset fd2_set;
struct globalinfo {
size_t dlcounter;
};
struct connection {
CURL *e;
int id; /* just a counter for easy browsing */
char *url;
size_t dlcounter;
struct globalinfo *global;
char error[CURL_ERROR_SIZE];
};
struct fdinfo {
/* create a link list of fdinfo structs */
struct fdinfo *next;
struct fdinfo *prev;
curl_socket_t sockfd;
CURL *easy;
int action; /* as set by libcurl */
long timeout; /* as set by libcurl */
};
static struct fdinfo *allsocks;
static struct fdinfo *findsock(curl_socket_t s)
{
/* return the struct for the given socket */
struct fdinfo *fdp = allsocks;
while(fdp) {
if(fdp->sockfd == s)
break;
fdp = fdp->next;
}
return fdp; /* a struct pointer or NULL */
}
static void remsock(curl_socket_t s)
{
struct fdinfo *fdp = allsocks;
while(fdp) {
if(fdp->sockfd == s)
break;
fdp = fdp->next;
}
if(!fdp)
/* did not find socket to remove! */
return;
if(fdp->prev)
fdp->prev->next = fdp->next;
if(fdp->next)
fdp->next->prev = fdp->prev;
else
/* this was the last entry */
allsocks = NULL;
}
static void setsock(struct fdinfo *fdp, curl_socket_t s, CURL *easy,
int action)
{
fdp->sockfd = s;
fdp->action = action;
fdp->easy = easy;
}
static void addsock(curl_socket_t s, CURL *easy, int action)
{
struct fdinfo *fdp = calloc(sizeof(struct fdinfo), 1);
setsock(fdp, s, easy, action);
if(allsocks) {
fdp->next = allsocks;
allsocks->prev = fdp;
/* now set allsocks to point to the new struct */
allsocks = fdp;
}
else
allsocks = fdp;
}
static void fdinfo2fdset(fd2_set *fdread, fd2_set *fdwrite, int *maxfd)
{
struct fdinfo *fdp = allsocks;
int writable=0;
FD2_ZERO(fdread);
FD2_ZERO(fdwrite);
*maxfd = 0;
#if 0
printf("Wait for: ");
#endif
while(fdp) {
if(fdp->action & CURL_POLL_IN) {
FD_SET(fdp->sockfd, (fd_set *)fdread);
}
if(fdp->action & CURL_POLL_OUT) {
FD_SET(fdp->sockfd, (fd_set *)fdwrite);
writable++;
}
#if 0
printf("%d (%s%s) ",
fdp->sockfd,
(fdp->action & CURL_POLL_IN)?"r":"",
(fdp->action & CURL_POLL_OUT)?"w":"");
#endif
if(fdp->sockfd > *maxfd)
*maxfd = fdp->sockfd;
fdp = fdp->next;
}
#if 0
if(writable)
printf("Check for %d writable sockets\n", writable);
#endif
}
/* on port 8999 we run a fork enabled sws that supports 'idle' and 'stream' */
#define PORT "8999"
#define HOST "192.168.1.13"
#define URL_IDLE "http://" HOST ":" PORT "/1000"
#define URL_ACTIVE "http://" HOST ":" PORT "/1001"
static int socket_callback(CURL *easy, /* easy handle */
curl_socket_t s, /* socket */
int what, /* see above */
void *userp) /* "private" pointer */
{
struct fdinfo *fdp;
printf("socket %d easy %p what %d\n", s, easy, what);
if(what == CURL_POLL_REMOVE)
remsock(s);
else {
fdp = findsock(s);
if(!fdp) {
addsock(s, easy, what);
}
else {
/* we already know about it, just change action/timeout */
printf("Changing info for socket %d from %d to %d\n",
s, fdp->action, what);
setsock(fdp, s, easy, what);
}
}
return 0; /* return code meaning? */
}
static size_t
writecallback(void *ptr, size_t size, size_t nmemb, void *data)
{
size_t realsize = size * nmemb;
struct connection *c = (struct connection *)data;
c->dlcounter += realsize;
c->global->dlcounter += realsize;
#if 0
printf("%02d: %d, total %d\n",
c->id, c->dlcounter, c->global->dlcounter);
#endif
return realsize;
}
/* return the diff between two timevals, in us */
static long tvdiff(struct timeval *newer, struct timeval *older)
{
return (newer->tv_sec-older->tv_sec)*1000000+
(newer->tv_usec-older->tv_usec);
}
/* store the start time of the program in this variable */
static struct timeval timer;
static void timer_start(void)
{
/* capture the time of the start moment */
gettimeofday(&timer, NULL);
}
static struct timeval cont; /* at this moment we continued */
int still_running; /* keep number of running handles */
struct conncount {
long time_us;
long laps;
long maxtime;
};
static struct timeval timerpause;
static void timer_pause(void)
{
/* capture the time of the pause moment */
gettimeofday(&timerpause, NULL);
/* If we have a previous continue (all times except the first), we can now
store the time for a whole "lap" */
if(cont.tv_sec) {
long lap;
lap = tvdiff(&timerpause, &cont);
}
}
static long paused; /* amount of us we have been pausing */
static void timer_continue(void)
{
/* Capture the time of the restored operation moment, now calculate how long
time we were paused and added that to the 'paused' variable.
*/
gettimeofday(&cont, NULL);
paused += tvdiff(&cont, &timerpause);
}
static long total; /* amount of us from start to stop */
static void timer_total(void)
{
struct timeval stop;
/* Capture the time of the operation stopped moment, now calculate how long
time we were running and how much of that pausing.
*/
gettimeofday(&stop, NULL);
total = tvdiff(&stop, &timer);
}
struct globalinfo info;
struct connection *conns;
long selects;
long timeouts;
long multi_socket;
long performalive;
long performselect;
long topselect;
int num_total;
int num_idle;
int num_active;
static void report(void)
{
int i;
long active = total - paused;
long numdl = 0;
for(i=0; i < num_total; i++) {
if(conns[i].dlcounter)
numdl++;
}
printf("Summary from %d simultanoues transfers (%d active)\n",
num_total, num_active);
printf("%d out of %d connections provided data\n", numdl, num_total);
printf("Total time: %ldus paused: %ldus curl_multi_socket(): %ldus\n",
total, paused, active);
printf("%d calls to select() "
"Average time: %dus\n",
selects, paused/selects);
printf(" Average number of readable connections per select() return: %d\n",
performselect/selects);
printf(" Max number of readable connections for a single select() "
"return: %d\n",
topselect);
printf("%ld calls to multi_socket(), "
"Average time: %ldus\n",
multi_socket, active/multi_socket);
printf("%ld select() timeouts\n", timeouts);
printf("Downloaded %ld bytes in %ld bytes/sec, %ld usec/byte\n",
info.dlcounter,
info.dlcounter/(total/1000000),
total/info.dlcounter);
}
int main(int argc, char **argv)
{
CURLM *multi_handle;
CURLMsg *msg;
CURLcode code = CURLE_OK;
CURLMcode mcode = CURLM_OK;
int rc;
int i;
fd2_set fdsizecheck;
int selectmaxamount;
struct fdinfo *fdp;
char act;
int running_handles;
memset(&info, 0, sizeof(struct globalinfo));
selectmaxamount = sizeof(fdsizecheck) * 8;
printf("select() supports max %d connections\n", selectmaxamount);
if(argc < 3) {
printf("Usage: hiper [num idle] [num active]\n");
return 1;
}
num_idle = atoi(argv[1]);
num_active = atoi(argv[2]);
num_total = num_idle + num_active;
if(num_total > selectmaxamount) {
printf("Requested more connections than supported!\n");
return 4;
}
conns = calloc(num_total, sizeof(struct connection));
if(!conns) {
printf("Out of memory\n");
return 3;
}
if(num_total >= NCONNECTIONS) {
printf("Too many connections requested, increase NCONNECTIONS!\n");
return 2;
}
printf("About to do %d connections\n", num_total);
/* init the multi stack */
multi_handle = curl_multi_init();
for(i=0; i< num_total; i++) {
CURL *e;
char *nl;
memset(&conns[i], 0, sizeof(struct connection));
if(i < num_idle)
conns[i].url = URL_IDLE;
else
conns[i].url = URL_ACTIVE;
e = curl_easy_init();
if(!e) {
printf("curl_easy_init() for handle %d failed, exiting!\n", i);
return 2;
}
conns[i].e = e;
conns[i].id = i;
conns[i].global = &info;
curl_easy_setopt(e, CURLOPT_URL, conns[i].url);
curl_easy_setopt(e, CURLOPT_WRITEFUNCTION, writecallback);
curl_easy_setopt(e, CURLOPT_WRITEDATA, &conns[i]);
curl_easy_setopt(e, CURLOPT_VERBOSE, 0);
curl_easy_setopt(e, CURLOPT_ERRORBUFFER, conns[i].error);
curl_easy_setopt(e, CURLOPT_PRIVATE, &conns[i]);
/* add the easy to the multi */
if(CURLM_OK != curl_multi_add_handle(multi_handle, e)) {
printf("curl_multi_add_handle() returned error for %d\n", i);
return 3;
}
}
curl_multi_setopt(multi_handle, CURLMOPT_SOCKETFUNCTION, socket_callback);
curl_multi_setopt(multi_handle, CURLMOPT_SOCKETDATA, NULL);
/* we start the action by calling *socket() right away */
while(CURLM_CALL_MULTI_PERFORM == curl_multi_socket_all(multi_handle,
&running_handles));
printf("Starting timer, expects to run for %ldus\n", RUN_FOR_THIS_LONG);
timer_start();
timer_pause();
while(1) {
struct timeval timeout;
int rc; /* select() return code */
long timeout_ms;
fd2_set fdread;
fd2_set fdwrite;
int maxfd;
curl_multi_timeout(multi_handle, &timeout_ms);
/* set timeout to wait */
timeout.tv_sec = timeout_ms/1000;
timeout.tv_usec = (timeout_ms%1000)*1000;
/* convert file descriptors from the transfers to fd_sets */
fdinfo2fdset(&fdread, &fdwrite, &maxfd);
selects++;
rc = select(maxfd+1,
(fd_set *)&fdread,
(fd_set *)&fdwrite,
NULL, &timeout);
switch(rc) {
case -1:
/* select error */
break;
case 0:
timeouts++;
curl_multi_socket(multi_handle, CURL_SOCKET_TIMEOUT, &running_handles);
break;
default:
/* timeout or readable/writable sockets */
for(i=0, fdp = allsocks; fdp; fdp = fdp->next) {
act = 0;
if((fdp->action & CURL_POLL_IN) &&
FD_ISSET(fdp->sockfd, &fdread)) {
act |= CURL_POLL_IN;
i++;
}
if((fdp->action & CURL_POLL_OUT) &&
FD_ISSET(fdp->sockfd, &fdwrite)) {
act |= CURL_POLL_OUT;
i++;
}
if(act) {
multi_socket++;
timer_continue();
if(act & CURL_POLL_OUT)
act--;
curl_multi_socket(multi_handle, fdp->sockfd, &running_handles);
timer_pause();
}
}
performselect += rc;
if(rc > topselect)
topselect = rc;
break;
}
timer_total(); /* calculate the total time spent so far */
if(total > RUN_FOR_THIS_LONG) {
printf("Stopped after %ldus\n", total);
break;
}
}
if(running_handles != num_total) {
/* something made connections fail, extract the reason and tell */
int msgs_left;
struct connection *cptr;
while ((msg = curl_multi_info_read(multi_handle, &msgs_left))) {
if (msg->msg == CURLMSG_DONE) {
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &cptr);
printf("%d => (%d) %s", cptr->id, msg->data.result, cptr->error);
}
}
}
curl_multi_cleanup(multi_handle);
/* cleanup all the easy handles */
for(i=0; i< num_total; i++)
curl_easy_cleanup(conns[i].e);
report();
return code;
}


@ -1,101 +0,0 @@
/*****************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* $Id$
*
* Little tool to raise the maximum number of open file descriptors and then
* run the given command line (using the hard-coded uid/gid).
*
*/
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
#include <errno.h>
#include <string.h> /* for errno translation */
/* ulimiter
*
* Source code inspiration from:
* http://www.cs.wisc.edu/condor/condorg/linux_scalability.html
*/
#define UID 1000 /* the user who must run this */
#define GID 1000 /* group id to run the program as */
/* Number of open files to increase to */
#define NEW_MAX 10000
int main(int argc, char *argv[])
{
int ret;
struct rlimit rl;
char *brgv[20];
int brgc=argc-1;
int i;
if(argc > (int)(sizeof(brgv)/sizeof(brgv[0]))) {
fprintf(stderr, "Too many arguments given\n");
return 1;
}
for(i=1; i< argc; i++)
brgv[i-1]=argv[i];
brgv[i-1]=NULL; /* terminate the list */
if(getuid() != UID) {
fprintf(stderr, "Only uid %d is allowed to run this\n", UID);
return 1;
}
ret = getrlimit(RLIMIT_NOFILE, &rl);
if(ret != 0) {
fprintf(stderr, "Unable to read open file limit.\n"
"(getrlimit(RLIMIT_NOFILE, &rl) failed)\n"
"(%d, %s)", errno, strerror(errno));
return 1;
}
fprintf(stderr, "Limit was %d (max %d), setting to %d\n",
rl.rlim_cur, rl.rlim_max, NEW_MAX);
rl.rlim_cur = rl.rlim_max = NEW_MAX;
ret = setrlimit(RLIMIT_NOFILE, &rl);
if(ret != 0) {
fprintf(stderr, "Unable to set open file limit.\n"
"(setrlimit(RLIMIT_NOFILE, &rl) failed)\n"
"(%d, %s)", errno, strerror(errno));
return 1;
}
ret = getrlimit(RLIMIT_NOFILE, &rl);
if(ret != 0) {
fprintf(stderr, "Unable to read new open file limit.\n"
"(getrlimit(RLIMIT_NOFILE, &rl) failed)\n"
"(%d, %s)", errno, strerror(errno));
return 1;
}
if(rl.rlim_cur < NEW_MAX) {
fprintf(stderr, "Failed to set new open file limit.\n"
"Limit is %d, expected %d\n",
rl.rlim_cur, NEW_MAX);
return 1;
}
if(setgid(GID) != 0) {
fprintf(stderr, "setgid failed (%d, %s)\n", errno, strerror(errno));
return 1;
}
if(setuid(UID) != 0) {
fprintf(stderr, "setuid failed (%d, %s)\n", errno, strerror(errno));
return 1;
}
ret = execv(brgv[0], brgv);
fprintf(stderr, "execl returned, failure\n"
"returned %d, errno is %d (%s)\n",
ret, errno, strerror(errno));
return 1;
}