* Problem: Still need to port over more files to VxWorks 6.x
Solution: Port more files to VxWorks 6.x
* Problem: Need to port over remaining files to VxWorks 6.x. Also remove the POSIX thread dependency for VxWorks (because of a priority-inversion problem in POSIX mutexes with VxWorks 6.x processes)
Solution: Port over remaining files to VxWorks 6.x. Also remove the POSIX thread dependency for VxWorks
* Problem: The TCP, UDP, and TIPC classes are not compatible with VxWorks 6.x without modification.
Solution: Modify the TCP, UDP, and TIPC classes with #ifdefs to be compatible with VxWorks 6.x
Problem: common socket code is not reusable
Solution: extract it into functions defined in ip.hpp
Problem: signaler_t::make_fdpair not reusable
Solution: move make_fdpair to ip.hpp
Problem: epoll worker with no fds cannot be stopped
Solution: use interruptible epoll_pwait call
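As an illustration, a minimal sketch of an interruptible wait, assuming the worker is woken with SIGUSR1 and re-checks a stop flag after EINTR (the signal choice and function name are illustrative, not libzmq's actual code):

    #include <signal.h>
    #include <sys/epoll.h>

    //  Wait on an epoll set that may hold no fds at all.  epoll_pwait
    //  atomically unblocks SIGUSR1 for the duration of the call, so the
    //  worker can always be interrupted and told to stop.
    int wait_interruptible (int epfd)
    {
        sigset_t mask;
        sigfillset (&mask);
        sigdelset (&mask, SIGUSR1);

        struct epoll_event ev;
        int rc = epoll_pwait (epfd, &ev, 1, -1, &mask);
        //  rc == -1 with errno == EINTR means the signal arrived; the
        //  caller should re-check its stop flag and exit if it is set.
        return rc;
    }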
Problem: insufficient unit tests for poller
Solution: add test cases
* add define for Windows/UWP
* prevent issue with COM references
* GetTickCount not available on UWP (see the sketch after this list)
* add compiler definitions
* add convenience cmake file
* brute-force UWP compilation
* fix compiler version
* cosmetics
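A minimal sketch of the kind of compile-time switch involved, assuming the WINAPI_FAMILY macros from winapifamily.h and that GetTickCount64 remains available on UWP; the wrapper name is illustrative:

    #include <windows.h>

    //  GetTickCount is not part of the UWP (app) API surface, while
    //  GetTickCount64 is, so select the call at compile time.
    static unsigned long long tick_ms (void)
    {
    #if defined (WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_APP)
        return GetTickCount64 ();
    #else
        return GetTickCount ();
    #endif
    }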
Solution: restore the inclusion of poll.h before zmq.h when poll is used, as
it was originally, since AIX redefines the POSIX structures and provides
compatibility macros.
Also add alternative aliases for 32-bit AIX's pollitem struct:
events -> reqevents
revents -> rtnevents
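For illustration, a minimal usage sketch of the include order the fix restores; on 32-bit AIX, poll.h supplies compatibility macros along the lines of #define events reqevents, so including it first lets the same renaming apply to both zmq.h's struct definition and every use site. The wrapper function is illustrative:

    #include <poll.h>   //  include first: on 32-bit AIX this defines the
                        //  compatibility macros before zmq.h is parsed
    #include <zmq.h>

    int wait_for_input (void *socket)
    {
        zmq_pollitem_t item = { socket, 0, ZMQ_POLLIN, 0 };
        //  .events/.revents compile consistently because any poll.h
        //  macros were already in force when zmq.h declared the struct
        int rc = zmq_poll (&item, 1, 1000 /* ms */);
        return rc > 0 ? (item.revents & ZMQ_POLLIN) : rc;
    }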
Solution: Provide poll() for Windows as well. This is a build option that
defaults to off as the resulting binary will only run on Windows Vista or
newer.
This is not tested with alternative Winsock service providers like VMCI,
but the documentation for WSAPoll does not mention limitations.
On my local machine, throughput improves by ~10 % (20 simultaneous
remote_thr workers to one local_thr, 10-byte messages), while latency
improves by ~30 % (measured with remote/local_lat).
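A minimal sketch of what polling one socket through WSAPoll looks like (Windows Vista and newer); the wrapper name and return convention are illustrative:

    #include <winsock2.h>

    //  Wait until the socket is readable or the timeout expires.
    //  Returns 1 if readable, 0 on timeout, -1 on error.
    int wait_readable (SOCKET s, int timeout_ms)
    {
        WSAPOLLFD item;
        item.fd = s;
        item.events = POLLIN;
        item.revents = 0;
        int rc = WSAPoll (&item, 1, timeout_ms);
        if (rc == SOCKET_ERROR)
            return -1;
        return rc > 0 && (item.revents & POLLIN) ? 1 : 0;
    }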
Solution: The Coverity Static Code Analyzer was used on the libzmq code and found
many issues with uninitialized member variables, some redefinitions of variables
hiding previous instances of the same variable name, and a couple of functions
where return values were not checked, even though all other occurrences were
checked (e.g. the init_size() return value).
See issue #1608.
This is an old issue with Windows 7. The effect is that we see a latency
ramp on the first 500 messages.
* The ramp is unaffected by message size.
* Sleeping up to 100 msec between sends has no effect except to switch
off ZeroMQ batching, thus making the ramp more visible.
* After 500 messages, latency falls back down to ~10-40 usec.
* Over inproc:// the ramp happens when we use the signaler class.
* Client-server over inproc:// does not show the ramp.
* Client-server over tcp:// shows a similar ramp.
We know that the signaler is using TCP on Windows. We could 'prime' the
connection by doing 500 dummy sends, but this would delay new sockets
on creation, which is not a good solution.
Note that the signaler sends zero-byte messages. This may also be
confusing TCP.
Solution: flood the receive buffer when creating a new FD pair; send a
1M buffer and discard it.
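A sketch of that priming step, assuming a freshly connected loopback TCP fd pair (w, r) as created for the signaler on Windows; the function name and chunk size are illustrative:

    #include <winsock2.h>
    #include <stdlib.h>

    //  Flood the new fd pair with a 1M dummy buffer and read it all
    //  back, so the latency ramp is paid once at creation time instead
    //  of on the first ~500 real signals.  Sending in modest chunks
    //  keeps a blocking send from outrunning the recv side.
    static void prime_fdpair (SOCKET w, SOCKET r)
    {
        const int dummy_size = 1024 * 1024;
        char *dummy = (char *) malloc (dummy_size);
        int to_send = dummy_size;
        int to_recv = dummy_size;
        while (to_send > 0 || to_recv > 0) {
            if (to_send > 0) {
                int chunk = to_send < 65536 ? to_send : 65536;
                int n = send (w, dummy + dummy_size - to_send, chunk, 0);
                if (n > 0)
                    to_send -= n;
            }
            if (to_recv > 0) {
                int n = recv (r, dummy + dummy_size - to_recv, to_recv, 0);
                if (n > 0)
                    to_recv -= n;
            }
        }
        free (dummy);
    }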
Fixes #1608
In real-world usage, signaler failures have been reported where the
eventfd read() or socket recv() system call in signaler::recv() fails,
despite having made a prior successful signaler::wait() call.
This patch creates a signaler::recv_failable() method that allows an
unreadable eventfd/socket to return an error without asserting.
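A sketch of the idea, assuming the Linux eventfd flavour of the signaler; apart from the recv_failable name, the details are illustrative:

    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>

    //  Like the asserting recv(), but a transiently unreadable eventfd
    //  is reported to the caller instead of aborting the process.
    int recv_failable (int fd)
    {
        uint64_t dummy;
        ssize_t sz = read (fd, &dummy, sizeof dummy);
        if (sz == -1)
            return -1;   //  errno (e.g. EAGAIN) is left for the caller
        return 0;
    }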
Of course people still "can" distribute the sources under the
LGPLv3. However, we provide COPYING.LESSER with additional grants.
Solution: specify these grants in the header of each source file.
This is a silly assertion that causes problems if libzmq.dll is
called in some esoteric ways.
Solution: if the shutdown code detects WSANOTINITIALISED, then
exit silently.
Fixes #1377, fixes #1144
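A sketch of the tolerant shutdown check, with illustrative names:

    #include <winsock2.h>
    #include <stdlib.h>

    //  Close a socket during library shutdown.  If Winsock has already
    //  been torn down (WSACleanup ran first because of DLL unload
    //  order), exit silently; any other failure is a genuine error.
    static void close_socket_on_shutdown (SOCKET s)
    {
        if (closesocket (s) == SOCKET_ERROR) {
            if (WSAGetLastError () == WSANOTINITIALISED)
                return;   //  exit silently, as described above
            abort ();     //  a real error: fail loudly
        }
    }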
Updated:
src/signaler.cpp: Add a close_wait_ms() static function that loops
when close() returns EAGAIN, with millisecond-long sleeps, up to a
maximum limit (default 2000 ms == 2 seconds); used in the
signaler_t::~signaler_t() destructor.
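A sketch of the retry loop just described; the 1 ms step is illustrative, while the 2000 ms cap matches the text:

    #include <errno.h>
    #include <unistd.h>

    //  Retry close() while it keeps failing with EAGAIN, sleeping 1 ms
    //  between attempts, up to max_ms (e.g. 2000 ms) in total.
    static int close_wait_ms (int fd, unsigned max_ms)
    {
        unsigned waited_ms = 0;
        int rc = close (fd);
        while (rc == -1 && errno == EAGAIN && waited_ms < max_ms) {
            usleep (1000);
            ++waited_ms;
            rc = close (fd);
        }
        return rc;
    }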