Previously, orf_token_tx was only incremented on the initial send. This
was obviously wrong and resulted in the TX count being significantly
lower than any RX count. Now we increment it every time the ORF token is
sent or resent.
As a quick test, on a single-node system the RX and TX stats will now
match.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Function message_handler_orf_token contains extra debug info enabled by
defining GIVEINFO. Instead of using long long unsigned int, use the
better suited uint64_t, and use the QB_TIME_NS_IN_MSEC constant instead
of a hardcoded number. Also compile tv_old conditionally so it is not
used by accident.
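Roughly, the resulting pattern looks like this (a sketch; the libqb calls are real, the wrapper function and printing are illustrative):
```
#include <inttypes.h>
#include <stdio.h>
#include <qb/qbdefs.h>
#include <qb/qbutil.h>

#ifdef GIVEINFO
static uint64_t tv_old;		/* compiled only when GIVEINFO is defined */
#endif

static void token_debug_timing(void)
{
#ifdef GIVEINFO
	uint64_t tv_current = qb_util_nano_current_get();
	uint64_t tv_diff = tv_current - tv_old;

	tv_old = tv_current;
	/* QB_TIME_NS_IN_MSEC replaces the hardcoded 1000000 */
	printf("Time since last token: %" PRIu64 " ms\n",
	       tv_diff / QB_TIME_NS_IN_MSEC);
#endif
}
```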
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
A timestamp diff is very unlikely to be larger than a 32-bit integer,
but it is still worth using a 64-bit type.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Token rx and tx timestamps were computed and stored as 32-bit unsigned
integers but subtracted from a 64-bit integer in other parts of the
code. The result was that a node with an uptime larger than 49.71 days
(2^32/(1000*60*60*24)) reported wrong numbers for
stats.srp.time_since_token_last_received and in the log message during a
long pause (function timer_function_orf_token_warning).
The solution is to store the rx and tx data as 64-bit integers.
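To illustrate the failure mode (a self-contained example, not the actual totemsrp code):
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* monotonic time in ms after ~50 days of uptime no longer fits
	 * in 32 bits */
	uint64_t now_ms = 4295000000ULL;		/* > 2^32 */
	uint32_t token_rx32 = (uint32_t)now_ms;		/* old: truncated store */
	uint64_t token_rx64 = now_ms;			/* fixed: 64-bit store */

	/* 64-bit "now" minus the truncated 32-bit timestamp prints
	 * 4294967296 ms (~49.7 days) instead of 0 */
	printf("wrong diff: %" PRIu64 " ms\n", now_ms - token_rx32);
	printf("right diff: %" PRIu64 " ms\n", now_ms - token_rx64);
	return 0;
}
```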
Fixes #761
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
If strdup of the kv_item key or value failed, only kv_item itself was
freed. Free the key and value too (kv_item is zeroed, so free of a NULL
member is safe).
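A sketch of the fixed error path (field and function names are illustrative):
```
#include <stdlib.h>
#include <string.h>

struct kv_item {
	char *key;
	char *value;
};

static struct kv_item *kv_item_new(const char *key, const char *value)
{
	struct kv_item *kv_item = calloc(1, sizeof(*kv_item));

	if (kv_item == NULL) {
		return NULL;
	}
	kv_item->key = strdup(key);
	kv_item->value = strdup(value);
	if (kv_item->key == NULL || kv_item->value == NULL) {
		/* kv_item was zeroed by calloc, so free(NULL) is safe here */
		free(kv_item->key);
		free(kv_item->value);
		free(kv_item);
		return NULL;
	}
	return kv_item;
}
```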
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
new_config interfaces were freed on success, but not if some previous
configuration step failed.
The solution is to move the free of interfaces to the same point where
orig_interfaces are freed.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
This patch adds support for changing the default corosync pid file lock
path. This is useful for running corosync in a net namespace
environment. Since the pid lock file is created before config parsing,
its path cannot be set via the conf file, so we allow the user to
specify it on the command line.
Signed-off-by: Alexander Aring <aahringo@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Link the Corosync project's archived copy of Yair Amir's PhD thesis
and the paper about the totem protocol.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Because crypto changing happens in the 'commit' phase
of the reload and we can't be sure that knet will
allow the new parameters, the result gets ignored.
This can happen in FIPS mode if a non-FIPS cipher
is requested.
This patch reports the errors back in a cmap key
so that the command-line tool can spot those errors
and report them back to the user.
It also restores the internal values for crypto
so that subsequent attempts to change things have
predictable results. Otherwise further attempts could
silently do nothing while reporting no errors back.
I've also added some error reporting for the
knet ping counters using this mechanism.
The alternative to all of this would be to check for FIPS
in totemconfig.c and then exclude certain options, but that
would duplicate code that could easily get out of sync.
This system could also be a useful mechanism for reporting
back other 'impossible' errors.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Avoid signed integer overflows by converting size-related types to
size_t.
Signed-off-by: Machiry Aravind Kumar <makrvcs@gmail.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
This required adding a lot of return values to two previously
'void' functions. I did two rather than just the one that was
needed because it seemed to make sense to do them both together.
Although these functions now return errors, they are probably
still ignored higher up; this really needs a comprehensive audit.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Non-breaking spaces are depressingly easy to enter in some
editors and can make a mess of a corosync.conf file, as the
character can break keyword names and generate some very strange
error messages.
So here we include it (0xA0) as a valid whitespace character.
The (unsigned char) cast is for portability: Intel systems use
signed chars, so we'd need something there, but this should
protect us on unsigned-char systems too.
No attempt is made to protect against UTF-8 characters; that's very
much out of scope for this project, I suspect.
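A minimal sketch of the check (the helper name is illustrative):
```
#include <ctype.h>

/* Treat Latin-1 NBSP (0xA0) as config-file whitespace.  The
 * (unsigned char) cast keeps the comparison and the isspace() call safe
 * on platforms where plain char is signed. */
static int is_conf_space(char c)
{
	unsigned char uc = (unsigned char)c;

	return (isspace(uc) || uc == 0xA0);
}
```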
ref: https://github.com/corosync/corosync/issues/723
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
totem.knet_mtu is a new configuration option which allows setting an
automatic or manual knet MTU.
Also, reload of totem.knet_pmtud_interval is fixed, so it now works when
the key is deleted (and the default value is restored).
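For example, assuming the semantics documented in corosync.conf(5), where 0 means automatic (knet PMTUd) and any other value forces a manual MTU:
```
totem {
    # 0 = automatic (knet PMTUd); any other value is a manual MTU
    knet_mtu: 1397
}
```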
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Before this, all knet messages, including debug ones, were sent
over the pipe from knet to corosync and filtered in corosync.
This was obviously a waste, so now we tell knet the logging
level we need from it and only get the messages that the
user has requested.
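A sketch of the idea (knet_log_set_loglevel and the level/subsystem constants are real libknet API; the wrapper and the level mapping are illustrative, and errors for unused subsystem ids are ignored):
```
#include <stdint.h>
#include <libknet.h>

static void set_knet_log_level(knet_handle_t knet_h, int debug_enabled)
{
	uint8_t level = debug_enabled ? KNET_LOG_DEBUG : KNET_LOG_INFO;
	int subsys;

	/* per-subsystem level: debug messages are no longer sent over the
	 * logging pipe unless the user asked for them */
	for (subsys = 0; subsys < KNET_MAX_SUBSYSTEMS; subsys++) {
		(void)knet_log_set_loglevel(knet_h, (uint8_t)subsys, level);
	}
}
```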
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
uname on Solaris/Illumos returns a non-negative value when successful.
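So the portable check is to test for -1 rather than for non-zero (a minimal sketch):
```
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
	struct utsname un;

	/* POSIX only guarantees a non-negative return on success, so test
	 * for -1 instead of for non-zero */
	if (uname(&un) == -1) {
		perror("uname");
		return 1;
	}
	printf("%s %s\n", un.sysname, un.release);
	return 0;
}
```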
Signed-off-by: Andreas Grueninger <andreas.grueninger@noemail.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Some platforms require aligned memory access. For such platforms,
special code was added using address modulo 4 to check whether aligning
is needed. This may be a problem on 64-bit platforms. Also, the check in
app_deliver_fn was incorrect and always true.
The solution is to use modulo sizeof pointer and to add parentheses to
fix the check in the app_deliver_fn function.
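A sketch of the corrected check (the helper name is illustrative):
```
#include <stdint.h>

static int needs_aligning(const void *addr)
{
	/* modulo sizeof pointer instead of a hardcoded 4, so the check is
	 * also correct on 64-bit platforms */
	return ((uintptr_t)addr % sizeof(void *)) != 0;
}
```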
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
The commit that drops packets from unlisted IPs broke the ifdown case,
because msg_name is unset for a socketpair.
The solution is to drop packets from unlisted IPs only when the bind
state is BIND_STATE_REGULAR.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Commit 92e0f9c7bb added switching of
totempg buffers in the sync phase. But because the buffers were switched
too early, there was a problem when delivering recovered messages
(messages got corrupted and/or lost). The solution is to switch the
buffers only after the recovered messages have been delivered.
I think it is worth describing the complete history with reproducers so
it doesn't get lost.
It all started with 402638929e (more info
about the original problem is described in
https://bugzilla.redhat.com/show_bug.cgi?id=820821). That patch
solves a problem which can be reproduced with the following reproducer:
- 2 nodes
- Both nodes running corosync and testcpg
- Pause node 1 (SIGSTOP of corosync)
- On node 1, send some messages by testcpg
  (it's not answering, but this doesn't matter; simply hitting the ENTER
  key a few times is enough)
- Wait till node 2 detects that node 1 left
- Unpause node 1 (SIGCONT of corosync)
Then on node 1, newly mcasted cpg messages got sent before the sync
barrier, so node 2 logs "Unknown node -> we will not deliver message".
The solution was to add a switch of the totemsrp new messages buffer.
This patch was not enough, so a new one
(92e0f9c7bb) was created. The reproducer was similar, just with
cpgverify used instead of testcpg.
Occasionally, when node 1 was unpaused, it hung in the sync phase
because there was a partial message in the totempg buffers. The new sync
message had a different fragment continuation, so it was thrown away and
never delivered.
After many years the problem was found; it is solved by this patch (the
original issue is described in
https://github.com/corosync/corosync/issues/660).
The reproducer is more complex:
- 2 nodes
- Node 1 is rate-limited (using this script on the hypervisor side):
```
iface=tapXXXX
# ~0.1MB/s in bit/s
rate=838856
# 1MB burst
burst=1048576
tc qdisc add dev $iface root handle 1: htb default 1
tc class add dev $iface parent 1: classid 1:1 htb rate ${rate}bps \
burst ${burst}b
tc qdisc add dev $iface handle ffff: ingress
tc filter add dev $iface parent ffff: prio 50 basic police rate \
${rate}bps burst ${burst}b mtu 64kb "drop"
```
- Node 2 is running corosync and cpgverify
- Node 1 keeps restarting corosync and running cpgverify in a cycle:
- Console 1: while true; do corosync; sleep 20; \
kill $(pidof corosync); sleep 20; done
- Console 2: while true; do ./cpgverify;done
From time to time (usually reproduced in less than 5 minutes) cpgverify
reports a corrupted message.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Thanks Ryan Cai <ycaibb@gmail.com> for reporting the problem.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Previously, the existence of retransmit messages canceled holding
of the token (and never allowed the representative to enter the token
hold state).
This makes the token rotate at maximum speed and keeps the processor
resending messages over and over again, overloading the network
and reducing the chance to successfully deliver the messages.
There were also reports of various Antivirus / IPS / IDS software which
slows down delivery of packets of certain sizes (packets bigger than the
token), which makes Corosync retransmit messages over and over again.
The proposed solution is to allow the representative to enter the token
hold state when there are only retransmit messages. This allows the
network to handle the overload and/or gives the Antivirus/IPS/IDS enough
time to scan and deliver packets without corosync entering the "FAILED
TO RECEIVE" state and adding more load to the network.
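A hypothetical sketch of the changed decision (names are illustrative, not the actual totemsrp identifiers):
```
#include <stdbool.h>

static bool may_enter_token_hold(int new_msgs_queued, int retrans_msgs_queued)
{
	/* old behavior: new_msgs_queued == 0 && retrans_msgs_queued == 0 */
	(void)retrans_msgs_queued;	/* retransmits alone no longer cancel hold */
	return new_msgs_queued == 0;
}
```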
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Knet limits the maximum node id to a 16-bit type. This was not ensured
in corosync, and it was possible to set nodeid to a value >= 65536;
(surprisingly) most things were working quite well because of the
overflow. corosync-cmapctl -m stats contained the knet nodeid in the
stats.knet. subtree, so for nodeid 65536 the result was:
Can't get value of stats.knet.node0.link0.connected. Error
CS_ERR_NOT_EXIST
This commit implements checking of the nodeid and limits it to
KNET_MAX_HOST when knet is used.
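A sketch of the check (KNET_MAX_HOST comes from libknet.h; the wrapper is illustrative):
```
#include <libknet.h>

static int nodeid_valid_for_knet(unsigned int nodeid)
{
	/* knet carries node ids in a 16-bit field, so anything >=
	 * KNET_MAX_HOST would silently overflow */
	return (nodeid != 0 && nodeid < KNET_MAX_HOST);
}
```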
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
A nodeid is required by knet for every node. Right now, the existence
of nodeid is checked only for the local node, so broaden the test.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
totem.nodeid is a relic from the times when the nodelist was not
required and totemsrp was sending the whole membership with IP
addresses.
With Corosync 3, IP addresses are no longer sent, so it is not possible
to find the "next" node's IP address to send the token to (because only
the nodeid is sent) without having information about all of the nodes
stored locally.
When totem.nodeid was configured it was only partly used, and other
parts (most notably totemudpu_token_target_set) were using the
autogenerated nodeid. Together it was not possible to create even a
single-node membership.
The solution is to ignore totem.nodeid completely (and display a warning
when it is set).
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Currently, if there is a gap in the links (e.g. link0 is missing),
corosync-cfgtool -s will still display the links as 0,1,2,3...
even if they are 1,2,5,6...
Also display the KNET transport type with the link in
corosync-cfgtool -s & -n.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Support for cgroup v2 is very similar to cgroup v1, just checking (and
writing) a different file.
Because of all the problems with cgroup v2 described later, a new "auto"
mode (the new default) is added. This mode first tries to set rr
scheduling and moves Corosync to the root cgroup only if that fails.
Testing this feature is a bit harder than with cgroup v1, so the
procedure is probably worth noting in this commit message.
1. Copy some service file (I've used the httpd service) and set
CPUQuota=30% in the [Service] section (see the drop-in snippet after
this list).
2. Check /sys/fs/cgroup/cgroup.subtree_control - there should be no
"cpu"
3. Start modified service
4. Check /sys/fs/cgroup/cgroup.subtree_control - there should be "cpu"
5. Start corosync - It should be able to get rt priority
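For step 1, a systemd drop-in along these lines should do (the unit name is just the one used for testing):
```
# /etc/systemd/system/httpd.service.d/cpuquota.conf
[Service]
CPUQuota=30%
```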
When move_to_root_cgroup is disabled (applies only to kernels
with CONFIG_RT_GROUP_SCHED enabled), the behavior differs:
- If corosync is started before modified service, so
there is no "cpu" in /sys/fs/cgroup/cgroup.subtree_control
corosync starts without problem and gets rt priority.
Starting modified service later will never add "cpu" into
/sys/fs/cgroup/cgroup.subtree_control (because corosync is holding
rt priority and it is placed in the non-root cgroup by systemd).
- When corosync is started after modified service, so "cpu"
is in /sys/fs/cgroup/cgroup.subtree_control, corosync is not
able to get RT priority.
It's worth noting the problems when cgroup v2 is used together with
systemd, as described in the corosync.conf(5) man page.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
The libqb map API leaves 'ownership' of the data with the caller
but does its own lifetime management, so it can easily happen that
map_rm() is called and the data deleted by the caller.
But if an iterator is running over that item then the map entry
will not get removed (leaving dangling pointers) until later.
libqb has a hack-y callback that tells the owner when it is safe to
delete the allocated memory, so we hook into that. icmap is already
using this.
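A sketch of the hook, following what icmap already does (the libqb calls are real; the callback and wrapper names are illustrative):
```
#include <stdint.h>
#include <stdlib.h>
#include <qb/qbmap.h>

static void map_free_cb(uint32_t event, char *key, void *old_value,
			void *value, void *user_data)
{
	if (event == QB_MAP_NOTIFY_FREE) {
		free(old_value);	/* safe now: no iterator still holds the entry */
	}
}

static void hook_free_notifier(qb_map_t *map)
{
	qb_map_notify_add(map, NULL, map_free_cb, QB_MAP_NOTIFY_FREE, NULL);
}
```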
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
corosync_cfg_trackstop expects a reply, but one was never sent. Make
sure to send the reply so that corosync_cfg_trackstop works.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Support for cgroup v2 is very similar to cgroup v1, just checking (and
writing) a different file.
Testing this feature is a bit harder than with cgroup v1, so the
procedure is probably worth noting in this commit message.
1. Copy some service file (I've used the httpd service) and set
CPUQuota=30% in the [Service] section.
2. Check /sys/fs/cgroup/cgroup.subtree_control - there should be no
"cpu"
3. Start modified service
4. Check /sys/fs/cgroup/cgroup.subtree_control - there should be "cpu"
5. Start corosync - It should be able to get rt priority
When move_to_root_cgroup is disabled, behavior differs:
- If corosync is started before modified service, so
there is no "cpu" in /sys/fs/cgroup/cgroup.subtree_control
corosync starts without problem and gets rt priority.
Starting modified service later will never add "cpu" into
/sys/fs/cgroup/cgroup.subtree_control (because corosync is holding
rt priority and it is placed in the non-root cgroup by systemd).
- When corosync is started after modified service, so "cpu"
is in /sys/fs/cgroup/cgroup.subtree_control, corosync is not
able to get RT priority.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
... to be in line with crypto_cipher and crypto_hash.
Reload (corosync-cfgtool -R) works without any problem, and changing the
key is not supported anyway.
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Use knet_get_crypto_list to find the knet-supported crypto models and
use them instead of a hardcoded list.
Also fix compression handling. Previously, the knet_compression_model
value was not checked at all and was passed directly to knet.
Use knet_get_compress_list to find the knet-supported compress models
and use them to check the validity of the config file and for a more
informative error message.
Lastly, enhance the corosync version display with information about the
available crypto/compression models.
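A sketch of the query pattern (the size-then-fill convention follows the libknet API; the printing is illustrative):
```
#include <stdio.h>
#include <stdlib.h>
#include <libknet.h>

static void print_crypto_models(void)
{
	struct knet_crypto_info *list;
	size_t num = 0;
	size_t i;

	/* first call sizes the list, second call fills it */
	if (knet_get_crypto_list(NULL, &num) != 0) {
		return;
	}
	list = malloc(sizeof(*list) * num);
	if (list == NULL) {
		return;
	}
	if (knet_get_crypto_list(list, &num) == 0) {
		for (i = 0; i < num; i++) {
			printf("%s\n", list[i].name);
		}
	}
	free(list);
}
```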
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
totemknet_configure_compression was using knet_context
just to gather the knet handle / instance.
On first-time config, knet_context is not initialized until
much later in the code, passing random garbage pointers
to knet_handle_compress that would crash later trying
to acquire a mutex lock.
Signed-off-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Fix an integer underflow when computing `namelen` in `nodelist_byname`,
and always use the computed `namelen`.
Fixes #626.
Signed-off-by: Johannes Krupp <johannes.krupp@cispa.saarland>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Retry knet_handle_new without privileged operations if it fails.
knet_handle_new can fail with ENAMETOOLONG if its privileged operations
fail, which can happen if we're running as a user process or in an
unprivileged container.
This adds a cmap key 'allow_knet_handle_fallback' that defaults to no,
keeping the current behavior of exiting with an error if the knet_handle
can't be created with privileged operations. If the new cmap key is set
to 'yes' and privileged knet_handle creation fails, a fallback to
creating the handle using unprivileged operations is tried.
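Assuming the key sits in the system {} section as documented in corosync.conf(5), opting in looks like:
```
system {
    allow_knet_handle_fallback: yes
}
```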
Signed-off-by: Dan Streetman <ddstreet@canonical.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Don't lock all current and future memory if we can't increase the
memlock rlimit.
If we fail to increase our RLIMIT_MEMLOCK, then locking all our current
and future memory is extremely dangerous; once our memory use reaches
RLIMIT_MEMLOCK, memory allocations will start failing, very likely
leading to the entire process crashing.
This can happen if we aren't a privileged process, for example when
running as a non-root user or inside an unprivileged container.
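A sketch of the safer ordering (standard POSIX calls; the wrapper is illustrative):
```
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

static void lock_memory_if_safe(void)
{
	struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };

	/* only mlockall() once RLIMIT_MEMLOCK is unlimited; otherwise
	 * hitting the limit makes later allocations fail and likely
	 * crashes the whole process */
	if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
		fprintf(stderr, "could not raise RLIMIT_MEMLOCK, "
			"not locking memory\n");
		return;
	}
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
	}
}
```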
Signed-off-by: Dan Streetman <ddstreet@canonical.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Found by covscan, which also didn't like us 'leaking' the
fd to the lockfile. So close that too.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
CFG tracking was removed in 815375411e,
probably by mistake, as part of the tidy-up of cfg and the removal of
dynamic loading. This means that shutdown tracking (using
cfg_try_shutdown()) stopped working.
This patch restores the trackstart & trackstop API calls (renamed to be
more consistent with the existing libraries) so that shutdown tracking
can be used again.
Change cfg.shutdown_timeout to be in milliseconds rather than seconds
and use libqb macros for the conversion.
Add a --force option to corosync-cfgtool -H.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
The patch tries to make nodestatusget really extendable. The following
changes are implemented:
- corosync_cfg_node_status_version_t is added with (for now) a single
value, CFG_NODE_STATUS_V1
- corosync_knet_node_status is renamed to corosync_cfg_node_status_v1
(it isn't really knet-specific because it works just as well for UDP(U))
- struct res_lib_cfg_nodestatusget_version is added, which holds only
the IPC result header and the version at the same position as in
corosync_cfg_node_status_v1
- corosync_cfg_node_status_get requires a version and a pointer to one
of the corosync_cfg_node_status_v structures
- the request is handled in switch cases to make adding new versions
easier
Also fix the following bugs:
- the totempg_nodestatus_get error was cast to cs_error_t without any
meaning
- header.error was not checked at all in the library
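A hypothetical usage sketch of the versioned call (struct fields abbreviated; see cfg.h for the exact layout):
```
#include <stdio.h>
#include <corosync/cfg.h>

static void show_node_status(corosync_cfg_handle_t handle,
			     unsigned int nodeid)
{
	struct corosync_cfg_node_status_v1 status;
	cs_error_t err;

	/* caller picks a version and passes the matching struct */
	err = corosync_cfg_node_status_get(handle, nodeid,
					   CFG_NODE_STATUS_V1, &status);
	if (err != CS_OK) {
		fprintf(stderr, "node status failed: %d\n", err);
		return;
	}
	printf("nodeid %u reachable: %d\n", nodeid, (int)status.reachable);
}
```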
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Currently we horribly over-use totempg_ifaces_get() to
retrieve information about knet interfaces. This is an attempt to
improve on that.
All transports are supported (so not only knet but also UDP(U)).
This patch builds best against the "onwire-upgrade" branch of knet,
as that's what sparked my interest in getting more information out.
Signed-off-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
Previously, only the crypto cipher was used to find out whether crypto
is enabled or disabled.
This usually works until the cipher is set to none and the hash to some
other value (like sha1). Such a config is perfectly valid, but it was
not supported correctly.
As a solution, check both the cipher and the hash.
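A sketch of the corrected test (names are illustrative):
```
#include <stdbool.h>
#include <string.h>

/* crypto must be considered enabled if either the cipher or the hash is
 * configured, since cipher "none" + hash "sha1" is a valid combination */
static bool crypto_enabled(const char *crypto_cipher, const char *crypto_hash)
{
	return (strcmp(crypto_cipher, "none") != 0 ||
		strcmp(crypto_hash, "none") != 0);
}
```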
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
At the same time, simplify the overwrite logic and stop clearing the
umask (which is unexpected and quite pointless here, as applications
can't really protect users from their own pathological settings).
Signed-off-by: Ferenc Wágner <wferi@debian.org>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
... when token and consensus timeouts pop.
Signed-off-by: Aleksei Burlakov <aburlakov@suse.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com>
The default token timeout of 1000 ms was often changed by users, because
other workloads on the machine may make corosync respond a bit later
than needed, resulting in token loss.
3000 ms was chosen as a compromise between increasing the token timeout
and allowing live cluster upgrade (other nodes should receive the token
sent by a node with the new default in time).
It doesn't affect token_coefficient, so the final token timeout still
depends on the number of configured nodes (just the base is higher).
This change slows down failover a bit, so for clusters where failover
times are important, please change the token timeout in the
corosync.conf configuration file:
totem {
version: 2
token: 1000
...
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
The current quorum callback contains only the actual view list, and
there is no way to find out the joined/left nodes. This cannot be
emulated by the user app, because when corosync restarts before the
other nodes notice, the view list is unchanged (the ring id is changed,
though).
The solution is to implement a callback similar to the cpg one, which
contains the ring id, member list, joined list and left list.
To implement such a callback and keep backwards compatibility,
quorum_model_initialize is introduced. Its behavior is similar to
cpg_model_initialize. This allows passing model v1, which contains the
enhanced quorum (the full ring id is passed instead of just the seq
number) and nodelist callbacks.
To find out which events should be sent by the corosync daemon, a new
message MESSAGE_REQ_QUORUM_MODEL_GETTYPE is used. The quorum library on
init was sending MESSAGE_REQ_QUORUM_GETTYPE. When model v1 is requested,
MESSAGE_REQ_QUORUM_MODEL_GETTYPE is used instead, which contains the
model number, so corosync knows that the client is using model v1 and
can send the enhanced quorum and nodelist events.
The nodelist event is (for now) sent both on a change of membership and
when requested. It is also sent when CS_TRACK_CURRENT is requested, but
then left_list and joined_list are left empty, because they don't make
much sense there.
A new test application, testquorummodel, is added as an example of the
new API usage.
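A hypothetical usage sketch (callback prototypes abbreviated from my reading of quorum.h; testquorummodel is the authoritative example):
```
#include <stdio.h>
#include <string.h>
#include <corosync/corotypes.h>
#include <corosync/quorum.h>

static void nodelist_notify(quorum_handle_t handle,
	struct quorum_ring_id ring_id,
	uint32_t member_list_entries, const uint32_t *member_list,
	uint32_t joined_list_entries, const uint32_t *joined_list,
	uint32_t left_list_entries, const uint32_t *left_list)
{
	printf("members: %u, joined: %u, left: %u\n",
	       member_list_entries, joined_list_entries, left_list_entries);
}

int main(void)
{
	quorum_handle_t handle;
	quorum_model_v1_data_t model_data;
	uint32_t quorum_type;

	memset(&model_data, 0, sizeof(model_data));
	model_data.nodelist_notify_fn = nodelist_notify;

	/* model v1 gets the enhanced quorum and nodelist callbacks */
	if (quorum_model_initialize(&handle, QUORUM_MODEL_V1,
	    (quorum_model_data_t *)&model_data, &quorum_type, NULL) != CS_OK) {
		return 1;
	}
	quorum_trackstart(handle, CS_TRACK_CHANGES);
	quorum_dispatch(handle, CS_DISPATCH_BLOCKING);
	return 0;
}
```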
Also during patch development I found a few bugs here and there, which
are also fixed:
- quorum_initialize was never returning the error code returned by the
MESSAGE_REQ_QUORUM_GETTYPE call (it always returned CS_OK)
- the memory allocated in send_library_notification was sized based on
sizeof(unsigned int) instead of mar_uint32_t. That's not wrong, but it
makes more sense to use sizeof(mar_uint32_t)
(big thanks to Chrissie for Englishifying the man pages)
Signed-off-by: Jan Friesse <jfriesse@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>