The VMCI transport allows fast communication between the host
and a virtual machine, between virtual machines on the same host,
and within a virtual machine (like IPC).
It requires VMware to be installed on the host and VMware Tools
to be installed on the guest.
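
For illustration, a minimal sketch of binding and connecting over the
vmci:// transport (the CID 5, port 5555 and the REQ/REP pairing are
placeholder choices, not anything mandated by the transport):

    #include <zmq.h>
    #include <assert.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();

        //  Bind to a local VMCI port (host or guest side).
        void *rep = zmq_socket (ctx, ZMQ_REP);
        int rc = zmq_bind (rep, "vmci://*:5555");
        assert (rc == 0);

        //  Connect using the peer's VMCI context ID (CID) and port.
        void *req = zmq_socket (ctx, ZMQ_REQ);
        rc = zmq_connect (req, "vmci://5:5555");
        assert (rc == 0);

        //  ... exchange messages as with any other transport ...

        zmq_close (req);
        zmq_close (rep);
        zmq_ctx_term (ctx);
        return 0;
    }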
This reduces the chance of a race between writer deactivation and
activation. The reader sends an activation command to the writer when
the number of messages read is a multiple of the LWM. In situations
with high throughput (millions of messages per second) and a
correspondingly large HWM (e.g. 10M), the difference between HWM and
LWM needs to be large enough that the activation command is received
before the pipe becomes full.
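
A sketch of the watermark arithmetic this describes (illustrative, not
the exact libzmq source; send_activate_write () stands in for the real
command-sending machinery):

    //  LWM at half of HWM leaves enough slack that the activation
    //  command reaches the writer before the pipe fills up again.
    static int compute_lwm (int hwm)
    {
        return (hwm + 1) / 2;
    }

    //  Reader side: ping the writer every LWM messages consumed.
    void on_message_read (int &msgs_read, int lwm)
    {
        if (++msgs_read % lwm == 0)
            send_activate_write ();   //  hypothetical helper
    }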
Only assert on errors we know are our fault,
instead of trying to whitelist every possible network-related failure.
This makes ZeroMQ more portable to other platforms
where the possible errors are different.
In particular, the previous code would often die under iOS.
See issue #1608.
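
The pattern this implies, as a sketch (the errno list is illustrative;
errno_assert is the usual libzmq macro that asserts a condition and
reports errno on failure):

    const int rc = ::send (fd, data, size, 0);
    if (rc == -1) {
        //  EBADF, ENOTSOCK, EFAULT etc. mean we misused the socket:
        //  that is our bug, so abort.
        errno_assert (errno != EBADF && errno != ENOTSOCK
                      && errno != EFAULT);
        //  Anything else (ECONNRESET, ETIMEDOUT, platform-specific
        //  oddities...) is treated as a transient network failure.
        return -1;
    }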
This is an old issue with Windows 7. The effect is that we see a latency
ramp on the first 500 messages.
* The ramp is unaffected by message size.
* Sleeping up to 100msec between sends has no effect except to switch
off ZeroMQ batching, which only makes the ramp more visible.
* After 500 messages, latency falls back down to ~10-40 usec.
* Over inproc:// the ramp happens when we use the signaler class.
* Client-server over inproc:// does not show the ramp.
* Client-server over tcp:// shows a similar ramp.
We know that the signaler is using TCP on Windows. We can 'prime' the
connection by doing 500 dummy sends, but that potentially delays new
sockets at creation time, which is not a good solution.
Note that the signaler sends zero-byte messages. This may also be
confusing TCP.
Solution: flood the receive buffer when creating a new FD pair; send a
1M buffer and discard it.
Fixes #1608
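
A sketch of that priming, modeled on the Windows workaround in
signaler.cpp (variable names abbreviated; wsa_assert is the usual
libzmq Winsock assertion macro). Send and receive are interleaved so
pushing 1M through the blocking pair cannot deadlock:

    //  Push a 1M dummy buffer through the new FD pair and discard
    //  it, so the TCP stack is warmed up before real signals flow.
    const size_t dummy_size = 1024 * 1024;
    unsigned char *dummy = (unsigned char *) malloc (dummy_size);
    int still_to_send = (int) dummy_size;
    int still_to_recv = (int) dummy_size;
    while (still_to_send || still_to_recv) {
        int nbytes;
        if (still_to_send > 0) {
            nbytes = ::send (w, (char *) (dummy + dummy_size
                - still_to_send), still_to_send, 0);
            wsa_assert (nbytes != SOCKET_ERROR);
            still_to_send -= nbytes;
        }
        nbytes = ::recv (r, (char *) (dummy + dummy_size
            - still_to_recv), still_to_recv, 0);
        wsa_assert (nbytes != SOCKET_ERROR);
        still_to_recv -= nbytes;
    }
    free (dummy);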
This causes assertion failures after network reconnects.
Solution: allow EINVAL as a possible condition after read/write.
Fixes #829. Fixes #1399.
Patch provided by Michele Dionisio @mdionisio, thanks :)
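
As a sketch, the check the fix implies (the exact errno whitelist
differs per call site):

    const ssize_t n = ::read (fd, buf, len);
    if (n == -1) {
        //  EINVAL is tolerated because it can legitimately show up
        //  on a descriptor torn down by a network reconnect.
        errno_assert (errno == EAGAIN || errno == EINTR
                      || errno == EINVAL);
        return -1;
    }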
In real-world usage, there have been reported signaler failures where
the eventfd read() or socket recv() system call in signaler::recv()
fails despite a prior successful signaler::wait() call.
This patch creates a signaler::recv_failable() method that allows an
unreadable eventfd / socket to return an error without asserting.
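
A sketch of the method on the eventfd path (illustrative; the real
code also covers the socketpair case, and the member holding the
descriptor may be named differently):

    int zmq::signaler_t::recv_failable ()
    {
        uint64_t dummy;
        const ssize_t sz = read (_fd, &dummy, sizeof (dummy));
        if (sz == -1) {
            //  Where recv () would assert, recv_failable () hands
            //  the unreadable fd back to the caller as an error.
            errno_assert (errno == EAGAIN);
            return -1;
        }
        errno_assert (sz == sizeof (dummy));
        return 0;
    }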
These tests connected CLIENT and SERVER to DEALER... this isn't
allowed. I changed both cases to CLIENT-to-SERVER. The result was
aborts in client.cpp and server.cpp, which cannot handle invalid
multipart data.
I removed the asserts from xsend in each of these.
Solution: fix the test cases and remove the (unwanted?) asserts
in client.cpp:xsend and server.cpp:xsend.
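
A sketch of the resulting xsend, modeled on client.cpp (illustrative;
server.cpp follows the same pattern):

    int zmq::client_t::xsend (msg_t *msg_)
    {
        //  CLIENT sockets do not allow multipart data (ZMQ_SNDMORE).
        //  Reject the message instead of asserting, so callers get
        //  EINVAL rather than an abort.
        if (msg_->flags () & msg_t::more) {
            errno = EINVAL;
            return -1;
        }
        //  ... normal send path continues here ...
    }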