Fabio M. Di Nitto b05477859f votequorum: fix expected_votes propagation
It is not correct to randomly accept expected_votes from any node in
the cluster. We can only allow expected_votes from quorate nodes.

A quorate cluster is "always" right and has the correct expected_votes.

One of the different bug triggers:

quorum {
  expected_votes: 8
  auto_tie_breaker: 1
  last_man_standing: 1
}

start all 8 nodes.
clean shut down 2 nodes.
wait for lms to kick in.
kill 3 nodes with highest nodeid
(we want to retain a quorate partition of 3 nodes)
start one node again -> cluster will be unquorate

This happens because the node rebooting/rejoining with
non-current cluster status will propagate an expected_votes of 8,
while in reality the cluster is down to expected_votes: 3.

4 nodes are still < 5 (quorum for 8 nodes/votes).

In order to avoid this condition, we need to exchange expected_votes
information among nodes but we cannot randomly trust everybody.

1) Allow expected_votes to be changed cluster-wide only if the
   information is coming from a quorate node.
2) Fix node->expected_votes based on quorate status.
3) Allow a joining node to decrease quorum and expected_votes
   if the node is not yet quorate but is joining a quorate
   cluster.

Signed-off-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Reviewed-by: Christine Caulfield <ccaulfie@redhat.com>
2012-01-26 14:32:54 +01:00

SYNCHRONIZATION ALGORITHM:
-------------------------
The synchronization algorithm is used by every service in corosync to
synchronize the state of the system.

There are four events in the synchronization algorithm.  These events are in
fact functions that are registered in the service handler data structure.  They
are called by the synchronization system whenever the network partitions or
merges.

init:
Within the init event a service handler should record temporary state variables
used by the process event.

process:
The process event is responsible for executing synchronization.  This event
returns a state indicating whether it has completed.  This allows
synchronization to be interrupted when the message queue buffer is full and
continued later.  The process event will be called again by the synchronization
service if requested to do so by the value returned from process.

abort:
The abort event occurs when a processor failure occurs during synchronization.

activate:
The activate event occurs when process has indicated that no more processing is
necessary for any node in the cluster and all messages originated by process
have completed.

CHECKPOINT SYNCHRONIZATION ALGORITHM:
------------------------------------
The purpose of the checkpoint synchronization algorithm is to synchronize
checkpoints after a partition or merge of two or more partitions.  The
secondary purpose of the algorithm is to determine the cluster-wide reference
count for every checkpoint.

Every cluster contains a group of checkpoints.  Each checkpoint has a
checkpoint name and checkpoint number.  The number is used to uniquely reference
an unlinked but still open checkpoint in the cluster.

Every checkpoint contains a reference count which is used to determine when
that checkpoint may be released.  The algorithm rebuilds the reference count
information each time a partition or merge occurs.

local variables
my_sync_state may have the values SYNC_CHECKPOINT, SYNC_REFCOUNT
my_current_iteration_state contains any data used to iterate the checkpoints
	and sections.
checkpoint data
	refcount_set contains reference count for every node consisting of
	number of opened connections to checkpoint and node identifier
	refcount contains a summation of every reference count in the refcount_set

pseudocode executed by a processor when the synchronization service calls
the init event
	call sync_checkpoints_enter

pseudocode executed by a processor when the synchronization service calls
the process event in the SYNC_CHECKPOINT state
	if lowest processor identifier of old ring in new ring
		transmit checkpoints or sections starting from my_current_iteration_state
	if all checkpoints and sections could be queued
		call sync_refcounts_enter
	else
		record my_current_iteration_state

	require process to continue

pseudocode executed by a processor when the synchronization service calls
the process event in the SYNC_REFCOUNT state
	if lowest processor identifier of old ring in new ring
		transmit checkpoint reference counts
	if all checkpoint reference counts could be queued
		require process to not continue
	else
		record my_current_iteration_state for checkpoint reference counts

sync_checkpoints_enter:
	my_sync_state = SYNC_CHECKPOINT
	my_current_iteration_state set to start of checkpoint list

sync_refcounts_enter:
	my_sync_state = SYNC_REFCOUNT

pseudocode executed on event receipt of a foreign ring id message
	ignore message

pseudocode executed on event receipt of checkpoint update
	if checkpoint exists in temporary storage
		ignore message
	else
		create checkpoint
		reset checkpoint refcount array

pseudocode executed on event receipt of checkpoint section update
	if checkpoint section exists in temporary storage
		ignore message
	else
		create checkpoint section

pseudocode executed on event receipt of reference count update
	update temporary checkpoint data storage reference count set by adding
	any reference counts in the temporary message set to those from the
	event
	update that checkpoint's reference count
	set the global checkpoint id to the current checkpoint id + 1 if it
	would increase the global checkpoint id

pseudocode called when the synchronization service calls the activate event:
for all checkpoints
	free all previously committed checkpoints and sections
	convert temporary checkpoints and sections to regular checkpoints and sections
copy my_saved_ring_id to my_old_ring_id

pseudocode called when the synchronization service calls the abort event:
	free all temporary checkpoints and temporary sections