* This is passed to the ZAP handler in the 'domain' field
* If not set, or empty, then NULL security does not call the ZAP handler
* This resolves the phantom ZAP request syndrome seen with sockets where
security was never intended (e.g. in test cases)
* This means if you install a ZAP handler, it will not get any requests
for new connections until you take some explicit action, which can be
setting a username/password for PLAIN, a key for CURVE, or the domain
for NULL.
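As a minimal sketch (assuming the option described here is the one exposed
as ZMQ_ZAP_DOMAIN), a server opts in to ZAP checks under NULL security by
setting a non-empty domain:
void *ctx = zmq_ctx_new ();
void *server = zmq_socket (ctx, ZMQ_REP);
/* setting a domain makes NULL security consult the ZAP handler */
zmq_setsockopt (server, ZMQ_ZAP_DOMAIN, "global", 6);
zmq_bind (server, "tcp://127.0.0.1:5555");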
- tests that the system can provide at least 1,000 sockets
- we could expand on this but this covers the main case of OS/X
having a too-low default limit of 256 handles per process
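Roughly, the check amounts to the sketch below (the real test code may
differ in details):
/* open 1,000 sockets and assert that none of the creations fail */
void *ctx = zmq_ctx_new ();
void *sockets [1000];
for (int i = 0; i < 1000; i++) {
    sockets [i] = zmq_socket (ctx, ZMQ_PAIR);
    assert (sockets [i]);
}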
* Command names changed from null terminated to length-specified
* Command frames use the correct flag (bit 2)
* test_stream acts as test case for command frames
* Some code cleanups
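For illustration only (a sketch, not the actual encoder code), the body of
a READY command under the new scheme starts with a one-octet name length
instead of relying on a null terminator, and the frame carrying it has the
command bit set:
unsigned char body [6];
body [0] = 5;                     /* length of the command name         */
memcpy (body + 1, "READY", 5);    /* name itself, no null terminator    */
/* the frame's flags byte has bit 2 (0x04) set to mark a command frame */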
- if ZAP server returns anything except 200, connection is closed
- all security tests now pass correctly
- test_security_curve now does proper client key authentication using test key
- test_security_plain now does proper password authentication
- Split off NULL security check from PLAIN
- Cleaned up test_linger code a little
- Got all tests to pass, added TODOs for outstanding issues
- Added ZAP authentication for NULL test case
- NULL mechanism was not passing server identity - fixed
- cleaned up test_security_plain and removed option double-checks (made code ugly)
- lowered timeout on expect_bounce_fail to 150 msec to speed up checks
- removed all sleeps from test_fork and simplified code (it still passes :-)
This change adds the socket identity information from the socket to the
ZAP frames. With this, the ZAP handler is able to perform different
operations based on which socket the request came from. This is not
compatible with the current ZAP RFC, but that can be updated. As the ZAP
RFC is currently a draft, I did not change the version number.
Tests also modified and passing.
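For reference, a minimal ZAP handler in the spirit of these changes could
look like the sketch below: it reads the request (including the new
identity frame) and answers with status 200 so the connection is accepted;
ctx and the s_recv/s_sendmore/s_send helpers are illustrative, not part of
libzmq.
void *handler = zmq_socket (ctx, ZMQ_REP);
zmq_bind (handler, "inproc://zeromq.zap.01");
char *version    = s_recv (handler);   /* "1.0"                         */
char *request_id = s_recv (handler);   /* echoed back in the reply      */
char *domain     = s_recv (handler);
char *address    = s_recv (handler);
char *identity   = s_recv (handler);   /* frame added by this change    */
char *mechanism  = s_recv (handler);   /* "NULL", "PLAIN" or "CURVE"    */
/* ...credential frames follow, depending on the mechanism...           */
s_sendmore (handler, "1.0");
s_sendmore (handler, request_id);
s_sendmore (handler, "200");           /* anything else closes the connection */
s_sendmore (handler, "OK");
s_sendmore (handler, "anonymous");     /* user id  */
s_send     (handler, "");              /* metadata */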
This allows making a new request on a REQ socket by sending a new
message. Without the option set, calling send() again before the reply
has arrived will keep returning an EFSM error.
It's useful for when a REQ socket is not getting a response. Previously
that meant creating a new socket or switching to DEALER.
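A sketch of the intended usage, assuming the option is the one known in
current libzmq as ZMQ_REQ_RELAXED (the name is not given above):
void *req = zmq_socket (ctx, ZMQ_REQ);
int relaxed = 1;
zmq_setsockopt (req, ZMQ_REQ_RELAXED, &relaxed, sizeof (relaxed));
zmq_connect (req, "tcp://127.0.0.1:5555");
zmq_send (req, "hello", 5, 0);
/* no reply arrives; without the option the next send would fail with EFSM */
zmq_send (req, "hello again", 11, 0);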
* Documentation:
The default behavior of REQ sockets is to rely on the ordering of messages
to match requests and responses and that is usually sufficient. When this option
is set to 1, the REQ socket will prefix outgoing messages with an extra frame
containing a request id. That means the full message is (request id, 0,
user frames...). The REQ socket will discard all incoming messages that don't
begin with these two frames.
* Behavior change: When a REQ socket gets an invalid reply, it used to
discard the message and return EAGAIN. REQ sockets still discard
invalid messages, but keep looking at the next one automatically
until a good one is found or there are no more messages.
* Add test_req_request_ids.
* Add lb_t::sendpipe() that returns the pipe that was used for sending,
similar to fq_t::recvpipe().
* Add forwarder functions to dealer_t to access these two.
* Add logic to req_t to ignore replies on pipes that are not the one
where the request was sent.
* Enable test in test_spec_req.
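To recap the framing described above as a sketch (the behaviour matches
what current libzmq exposes as ZMQ_REQ_CORRELATE; the option name here is
assumed):
void *req = zmq_socket (ctx, ZMQ_REQ);
int correlate = 1;
zmq_setsockopt (req, ZMQ_REQ_CORRELATE, &correlate, sizeof (correlate));
zmq_connect (req, "tcp://127.0.0.1:5555");
zmq_send (req, "hello", 5, 0);
/* on the wire:             [request id] [0] [hello]            */
/* an accepted reply reads: [request id] [0] [reply frames...]  */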
* disabled the specific tests that do not work (yet) on libzmq
* cleaned up one source (test_spec_rep.c) but the others need similar work
* added a sleep in test_spec_rep to allow connects time to happen; this
would not be needed if we connected out to the REP peers instead of
having them connect in to us, but I didn't want to change the logic of
the test code.
* See http://rfc.zeromq.org/spec:28/REQREP
* Not all testable statements are covered.
* At this point, there are several failures:
- test_spec_req: The REQ socket does not correctly discard messages
from peers that are not currently being talked to.
- test_spec_dealer/router: On disconnect, the queues seem not to be
emptied. The DEALER can still receive a message the disconnected
peer sent, and the ROUTER can still send to the identity of the
disconnected peer.
The use of binary for CURVE keys is painful; you cannot easily copy
these in e.g. email, or use them directly in source code. There are
various encoding possibilities. Base16 and Base64 are not optimal.
Ascii85 is not safe for source (it generates quotes and escapes).
So, I've designed a new Base85 encoding, Z85, which is safe to use
in code and elsewhere, and I've modified libzmq to use this where
it also uses binary keys (in get/setsockopt).
Very simply, if you use a 32-byte value, it's Base256 (binary),
and if you use a 40-byte value, it's Base85 (Z85).
I've put the Z85 codec into z85_codec.hpp; it's not elegant C++,
but it is minimal and it works. Feel free to rewrap it as a real class
if this annoys you.
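A sketch of what this means in practice (sock being any socket configured
for CURVE, the option name assumed to be ZMQ_CURVE_SECRETKEY, and the key
value a placeholder): the library tells the two forms apart purely by
length.
const char z85_key [] = "JTKVSB%%)wK0E.X)V>+}o?pNmC{O&4W4b!Ni{Lh6";
zmq_setsockopt (sock, ZMQ_CURVE_SECRETKEY, z85_key, 40);   /* Z85 (Base85) */
uint8_t bin_key [32];   /* the same key decoded to raw bytes */
zmq_setsockopt (sock, ZMQ_CURVE_SECRETKEY, bin_key, 32);   /* Base256      */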
- designed for TCP clients and servers
- added HTTP client / server example in tests/test_stream.cpp
- same as ZMQ_ROUTER + ZMQ_ROUTER_RAW + ZMQ_ROUTER_MANDATORY
- includes b893ce set ZMQ_IDENTITY on outgoing connect
- deprecates ZMQ_ROUTER_RAW
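Assuming this is the socket type exposed in current libzmq as ZMQ_STREAM,
a rough sketch of a tiny HTTP-style exchange (names and sizes
illustrative):
void *sock = zmq_socket (ctx, ZMQ_STREAM);
zmq_bind (sock, "tcp://*:8080");
uint8_t id [256];
int id_size = zmq_recv (sock, id, sizeof id, 0);   /* connection identity     */
char buf [1024];
int n = zmq_recv (sock, buf, sizeof buf, 0);       /* raw bytes from the peer */
const char resp [] = "HTTP/1.0 200 OK\r\n\r\nHello";
zmq_send (sock, id, id_size, ZMQ_SNDMORE);         /* address the same peer   */
zmq_send (sock, resp, sizeof resp - 1, 0);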
- we need to switch to PLAIN according to options.mechanism
- we need to catch case when both peers are as-server (or neither is)
- and to use username/password from options, for client
* ZMQ_PLAIN_SERVER, ZMQ_PLAIN_USERNAME, ZMQ_PLAIN_PASSWORD options
* Man page changes to zmq_setsockopt and zmq_getsockopt
* Man pages for ZMQ_NULL, ZMQ_PLAIN, and ZMQ_CURVE
* Test program test_security
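As a small sketch of the new options in use (endpoint and credentials are
illustrative):
void *server = zmq_socket (ctx, ZMQ_REP);
int as_server = 1;
zmq_setsockopt (server, ZMQ_PLAIN_SERVER, &as_server, sizeof (as_server));
zmq_bind (server, "tcp://127.0.0.1:5555");
void *client = zmq_socket (ctx, ZMQ_REQ);
zmq_setsockopt (client, ZMQ_PLAIN_USERNAME, "admin", 5);
zmq_setsockopt (client, ZMQ_PLAIN_PASSWORD, "secret", 6);
zmq_connect (client, "tcp://127.0.0.1:5555");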
1) VSM - you cannot hand out the 'data' address as it was not allocated on the heap
2) for other messages the 'data' address cannot be handed out either, as it is not the address
originally returned by malloc and hence cannot be passed to 'free'.
see msg.cpp
u.lmsg.content = (content_t*) malloc (sizeof (content_t) + size_);
....
u.lmsg.content->data = u.lmsg.content + 1;
So the function is changed to always malloc a data buffer and copy the data into it.
There is a possible optimisation using memmove for the non-VSM case but that is not done yet.
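In outline the fixed behaviour amounts to the following sketch (an
illustrative helper, not the actual libzmq code):
void *copy_msg_data (zmq_msg_t *msg_)
{
    size_t size = zmq_msg_size (msg_);
    void *buf = malloc (size);
    if (buf)
        memcpy (buf, zmq_msg_data (msg_), size);
    return buf;    /* caller owns the copy and may pass it to free () */
}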
* Removed or truncated sleeps so the tests run faster
* Removed dependencies on zmq_utils
* Rewrote a few tests that were confusing
* Minor code cleanups
Simplify the connect delay test script, removing the threads and
moving to a serialised version. AFAICS this should provide the same
test, but without the race conditions that happened with the previous
test.
- Created a new option ZMQ_ROUTER_RAW_SOCK
- Added new raw_encoder and raw_decoder to receive and send messages in raw form to remote client
- Added test case file tests/test_raw_sock.cpp
o To create a raw router socket, set the ZMQ_ROUTER_RAW_SOCK option
o The ZMQ_MSGMORE flag is ignored for non-id messages
o To terminate a remote connection, send the id message followed by a zero-length data message
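A sketch of these rules in use (variable names illustrative; the id frame
is the one received at the start of each incoming message):
int raw = 1;
zmq_setsockopt (router, ZMQ_ROUTER_RAW_SOCK, &raw, sizeof (raw));
/* ...after reading [id][data] from a peer... */
zmq_send (router, id, id_size, ZMQ_SNDMORE);
zmq_send (router, "", 0, 0);    /* zero-length data message closes the connection */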
This change makes sure that even if the tests are built in a
"release" configuration (with optimizations and NDEBUG turned on),
the assertions won't get compiled out of the tests themselves.
The C standard guarantees that the most recent inclusion of
<assert.h> is the one that counts, so it's important that the
"#undef NDEBUG/#include <assert.h>" come as the last thing in
the block of header files.
"testutil.hpp" includes <assert.h>, so I've left <assert.h> out
of any test that #includes "testutil.hpp", just for the sake of
brevity.
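For a test that does not use "testutil.hpp", the resulting header block
looks roughly like this sketch:
#include <zmq.h>
#include <string.h>
#undef NDEBUG
#include <assert.h>   /* the most recent inclusion wins, with NDEBUG undefined */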
Hopefully fixed LIBZMQ-427 - there was a slight typo in the init_address
refactor. The encoder refactoring had also broken pgm_sender and
receiver, but just required updating to use the new functions.
This formerly unused parameter actually represents the socket
on which the event was received. As such, we should check that
its value makes sense: it must be either "rep" or "req", and in
the case of some kinds of events, it must be specifically one
or the other.
After this change, "s" is no longer unused.
Both memcmp and strcmp return zero on equal, nonzero on nonequal;
so all of these tests were backwards.
The original committer fixed the failure by comparing 22 bytes instead
of the correct 21, so that the assertions would trigger only if the
22nd byte happened to match exactly --- which was rare.
The correct fix is to compare the right number of bytes with the
right sense. (I think all of the ".addr" fields are null-terminated,
in which case it's more appropriate to use strcmp throughout.)
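Illustratively (addr being a hypothetical null-terminated field), the
correct sense is
assert (strcmp (addr, "tcp://127.0.0.1:5560") == 0);   /* 0 means equal */
rather than assert (strcmp (...)), which only passes when the strings differ.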
This patch, salvaged from a trainwreck accidental merge earlier, adds a
new sockopt, ZMQ_DELAY_ATTACH_ON_CONNECT, which prevents an endpoint
from being available to push messages to until it has fully connected,
making connect work more like bind. This also applies to reconnecting
sockets, where reconnection may cause loss of in-queue messages, so it
is sensible to use this in conjunction with a low HWM and potentially an
alternative acknowledgement path.
Notes on most of the individual commits can be found in the repository log.
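A sketch of the option in use (PUSH chosen just for illustration):
void *push = zmq_socket (ctx, ZMQ_PUSH);
int delay = 1;
zmq_setsockopt (push, ZMQ_DELAY_ATTACH_ON_CONNECT, &delay, sizeof (delay));
zmq_connect (push, "tcp://127.0.0.1:5555");
/* sends now block (or fail with EAGAIN when non-blocking) until the
   connection is fully established, instead of queueing on the pipe */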
- invalid hostname set to 0mq.is.the.best (naturally!)
- the issue arises because other valid-looking but non-existent hostnames
were redirected by buggy Cable/ISP DNS servers
GCC 4.1.2 on RHEL5 and SLES10 don't like not having a newline at the
end of a source file, and error out if it's missing.
Signed-off-by: AJ Lewis <aj.lewis@quantum.com>
It didn't seem straightforward to use any of the existing process calls, so I have added a new command to command_t and friends called detach. This instructs the socket_base to remove the pipe from its pipe list. The session base stores a copy of the outpipe, and will resend the bind command on reconnection. This should allow balancing again.
This patch adds a sockopt, ZMQ_DELAY_ATTACH_ON_CONNECT, which if set to 1 will attempt to preempt this behavior. It does this by extending the use of the session_base to cover the outbound as well as the inbound pipe, and only associates the pipe with the socket once it receives the connected callback via a process_attach message. This works, and a test has been added to show so, but may introduce unexpected complications. The shutdown logic in this class has become marginally more awkward because of this, requiring the session to serve as the sink for both pipes if shutdown occurs with a still-connecting pipe in place. It is also possible there could be issues around flushing the messages, but as I could not directly think of how to create such an issue I have not written any code with regards to that.
The documentation has been updated to reflect the change, but please do check over the code and test and review.
The current ZMQ_MONITOR code does not compile with gcc 4.7, as -pedantic
and -Werror are enabled, and ISO C++ doesn't allow casting between
normal pointers (void*) and function pointers, as pedantically their
sizes could differ. This made the library fail to compile. This
commit works around the problem by introducing one more indirection, i.e.
instead of calling
(void *)listener
which is an error, we have to use
*(void **)&listener
which is an undefined behavior :) but works on most platforms
Also, `optval_ = monitor` will not set the parameter in getsockopt(),
and the extra cast made the LHS an rvalue, which again made the code
fail to compile. The proper way is to pass a pointer to the function
pointer and assign through the indirection, i.e. `*optval_ = monitor`.
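As a sketch (types and names illustrative, my_monitor being a hypothetical
handler), the workaround copies the function pointer through a
pointer-to-pointer instead of casting it to void* directly:
typedef void (*monitor_fn_t) ();
monitor_fn_t listener = my_monitor;
void *optval_;
/* optval_ = (void *) listener;   -- rejected under -pedantic -Werror */
optval_ = *(void **) &listener;   /* compiles everywhere; strictly UB */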
Also, fixed an asciidoc error in zmq_getsockopt.txt because the `~~~~`
is too long.
* Implemented new ctx API (_new, _destroy, _get, _set)
* Removed 'typesafe' macros from zmq.h
* Added support for MAX_SOCKETS (was tied into change for #337)
* Created new man pages
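In use, the new API looks like this sketch (values illustrative):
void *ctx = zmq_ctx_new ();
zmq_ctx_set (ctx, ZMQ_MAX_SOCKETS, 256);
zmq_ctx_set (ctx, ZMQ_IO_THREADS, 1);
assert (zmq_ctx_get (ctx, ZMQ_MAX_SOCKETS) == 256);
/* ...create and use sockets... */
zmq_ctx_destroy (ctx);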