2005-06-01 Paul Jakma <paul.jakma@sun.com>

	* bgpd/(general) refcount struct peer and bgp_info, hence allowing us
	  to add work_queues for bgp_process.
	* bgpd/bgp_route.h: (struct bgp_info) Add 'lock' field for refcount.
	  Add bgp_info_{lock,unlock} helper functions.
	  Add bgp_info_{add,delete} helpers, to remove the need for
	  users to manage locking/freeing of bgp_info and bgp_node's.
	* bgpd/bgp_table.h: (struct bgp_node) Add a flags field, and
	  BGP_NODE_PROCESS_SCHEDULED to merge redundant processing of
	  nodes.
	* bgpd/bgp_fsm.h: Make the ON/OFF/ADD/REMOVE macros lock and unlock
	  peer reference as appropriate.
	* bgpd/bgp_damp.c: Remove its internal prototypes for
	  bgp_info_delete/free. Just use bgp_info_delete.
	* bgpd/bgpd.h: (struct bgp_master) Add work_queue pointers.
	  (struct peer) Add reference count 'lock'
	  (peer_lock,peer_unlock) New helpers to take/release reference
	  on struct peer.
	* bgpd/bgp_advertise.c: (general) Add peer and bgp_info refcounting
	  and balance how references are taken and released.
	  (bgp_advertise_free) release bgp_info reference, if appropriate
	  (bgp_adj_out_free) unlock peer
	  (bgp_advertise_clean) leave the adv references alone, or else the
	  later call to bgp_advertise_free can't unlock them.
	  (bgp_adj_out_set) lock the peer on new adj's, leave the reference
	  alone otherwise. lock the new bgp_info reference.
	  (bgp_adj_in_set) lock the peer reference
	  (bgp_adj_in_remove) and unlock it here
	  (bgp_sync_delete) make hash_free on peer conditional, just in
	  case.
	* bgpd/bgp_fsm.c: (general) document that the timers depend on
	  bgp_event to release a peer reference.
	  (bgp_fsm_change_status) moved up the file, unchanged.
	  (bgp_stop) Decrement peer lock as many times as cancel_event
	  canceled - shouldn't be needed, but just in case.
	  stream_fifo_clean of obuf made conditional, just in case.
	  (bgp_event) always unlock the peer, regardless of return value
	  of bgp_fsm_change_status.
	* bgpd/bgp_packet.c: (general) change several bgp_stop's to BGP_EVENT's.
	  (bgp_read) Add a mysterious extra peer_unlock for ACCEPT_PEERs
	  along with a comment on it.
	* bgpd/bgp_route.c: (general) Add refcounting of bgp_info, cleanup
	  some of the resource management around bgp_info. Refcount peer.
	  Add workqueues for bgp_process and clear_table.
	  (bgp_info_new) make static
	  (bgp_info_free) Ditto, and unlock the peer reference.
	  (bgp_info_lock,bgp_info_unlock) new exported functions
	  (bgp_info_add) Add a bgp_info to a bgp_node in correct fashion,
	  taking care of reference counts.
	  (bgp_info_delete) do the opposite of bgp_info_add.
	  (bgp_process_rsclient) Converted into a work_queue work function.
	  (bgp_process_main) ditto.
	  (bgp_processq_del) process work queue item deconstructor
	  (bgp_process_queue_init) process work queue init
	  (bgp_process) call init function if required, set up queue item
	  and add to queue, rather than calling process functions directly.
	  (bgp_rib_remove) let bgp_info_delete manage bgp_info refcounts
	  (bgp_rib_withdraw) ditto
	  (bgp_update_rsclient) let bgp_info_add manage refcounts
	  (bgp_update_main) ditto
	  (bgp_clear_route_node) clear_node_queue work function, does
	  per-node aspects of what bgp_clear_route_table did previously
	  (bgp_clear_node_queue_del) clear_node_queue item delete function
	  (bgp_clear_node_complete) clear_node_queue completion function,
	  it unplugs the process queues, which have to be blocked while
	  clear_node_queue is being processed to prevent a race.
	  (bgp_clear_node_queue_init) init function for clear_node_queue
	  work queues
	  (bgp_clear_route_table) Sets up items onto a workqueue now, rather
	  than clearing each node directly. Plugs both process queues to
	  avoid potential race.
	  (bgp_static_withdraw_rsclient) let bgp_info_{add,delete} manage
	  bgp_info refcounts.
	  (bgp_static_update_rsclient) ditto
	  (bgp_static_update_main) ditto
	  (bgp_static_update_vpnv4) ditto, remove unneeded cast.
	  (bgp_static_withdraw) see bgp_static_withdraw_rsclient
	  (bgp_static_withdraw_vpnv4) ditto
	  (bgp_aggregate_{route,add,delete}) ditto
	  (bgp_redistribute_{add,delete,withdraw}) ditto
	* bgpd/bgp_vty.c: (peer_rsclient_set_vty) lock rsclient list peer
	  reference
	  (peer_rsclient_unset_vty) ditto, but unlock same reference
	* bgpd/bgpd.c: (peer_free) handle freeing of info that must be kept
	  for the lifetime of struct peer.
	  (peer_lock,peer_unlock) peer refcount helpers
	  (peer_new) add initial refcounts
	  (peer_create,peer_create_accept) lock peer as appropriate
	  (peer_delete) unlock as appropriate, move some frees out to
	  peer_free.
	  (peer_group_bind,peer_group_unbind) peer refcounting as
	  appropriate.
	  (bgp_create) check CALLOC return value.
	  (bgp_terminate) free workqueues too.
	* lib/memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
	  MTYPE_BGP_CLEAR_NODE_QUEUE
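
A standalone sketch may help make the convention above concrete: every holder of a struct peer (a scheduled event, a timer thread, a list membership, an adj-in/adj-out entry) takes a reference with peer_lock and drops it with peer_unlock, and the structure is freed only when the count reaches zero; bgp_info_lock/bgp_info_unlock follow the same pattern for struct bgp_info. The toy_peer type and helpers below are an illustrative model, not the bgpd code (the real helpers appear in the bgpd.c and bgp_route.c hunks further down):

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal model of the refcounting convention; a stand-in for
 * struct peer, not the real bgpd structure. */
struct toy_peer
{
  int lock;                       /* reference count */
};

static struct toy_peer *
toy_peer_lock (struct toy_peer *peer)
{
  assert (peer && peer->lock >= 0);
  peer->lock++;
  return peer;
}

/* Returns NULL once the last reference is gone and the peer is freed,
 * mirroring peer_unlock() in the bgpd.c hunk below. */
static struct toy_peer *
toy_peer_unlock (struct toy_peer *peer)
{
  assert (peer && peer->lock > 0);
  if (--peer->lock == 0)
    {
      free (peer);
      return NULL;
    }
  return peer;
}

int
main (void)
{
  struct toy_peer *peer = calloc (1, sizeof (struct toy_peer));

  if (!peer)
    return 1;
  peer->lock = 1;                 /* implicit reference from creation */

  /* Scheduling an event or timer takes a reference (cf. BGP_EVENT_ADD,
   * BGP_TIMER_ON) ... */
  toy_peer_lock (peer);

  /* ... and bgp_event()/the OFF macros drop it once it is consumed. */
  toy_peer_unlock (peer);

  /* Dropping the creation reference frees the structure. */
  peer = toy_peer_unlock (peer);
  printf ("freed: %s\n", peer == NULL ? "yes" : "no");
  return 0;
}
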
Committed by paul on 2005-06-01 11:17:05 +00:00
parent b31b813671
commit 200df115ea
14 changed files with 718 additions and 232 deletions

bgpd/ChangeLog

@ -1,3 +1,99 @@
(new bgpd/ChangeLog entry: identical to the commit message reproduced above)
2005-05-28 Hasso Tepper <hasso at quagga.net>
* bgp_routemap.c: Sync set_metric_addsub_cmd with ripd.

bgpd/bgp_advertise.c

@ -87,6 +87,8 @@ bgp_advertise_new ()
void
bgp_advertise_free (struct bgp_advertise *adv)
{
if (adv->binfo)
bgp_info_unlock (adv->binfo); /* bgp_advertise bgp_info reference */
XFREE (MTYPE_BGP_ADVERTISE, adv);
}
@ -148,6 +150,7 @@ bgp_advertise_unintern (struct hash *hash, struct bgp_advertise_attr *baa)
void
bgp_adj_out_free (struct bgp_adj_out *adj)
{
peer_unlock (adj->peer); /* adj_out peer reference */
XFREE (MTYPE_BGP_ADJ_OUT, adj);
}
@ -191,8 +194,6 @@ bgp_advertise_clean (struct peer *peer, struct bgp_adj_out *adj,
/* Unintern BGP advertise attribute. */
bgp_advertise_unintern (peer->hash[afi][safi], baa);
adv->baa = NULL;
adv->rn = NULL;
}
/* Unlink myself from advertisement FIFO. */
@ -228,7 +229,8 @@ bgp_adj_out_set (struct bgp_node *rn, struct peer *peer, struct prefix *p,
if (! adj)
{
adj = XCALLOC (MTYPE_BGP_ADJ_OUT, sizeof (struct bgp_adj_out));
adj->peer = peer_lock (peer); /* adj_out peer reference */
if (rn)
{
BGP_ADJ_OUT_ADD (rn, adj);
@ -238,13 +240,15 @@ bgp_adj_out_set (struct bgp_node *rn, struct peer *peer, struct prefix *p,
if (adj->adv)
bgp_advertise_clean (peer, adj, afi, safi);
adj->peer = peer;
adj->adv = bgp_advertise_new ();
adv = adj->adv;
adv->rn = rn;
adv->binfo = binfo;
assert (adv->binfo == NULL);
adv->binfo = bgp_info_lock (binfo); /* bgp_info adj_out reference */
if (attr)
adv->baa = bgp_advertise_intern (peer->hash[afi][safi], attr);
else
@ -338,7 +342,7 @@ bgp_adj_in_set (struct bgp_node *rn, struct peer *peer, struct attr *attr)
}
}
adj = XCALLOC (MTYPE_BGP_ADJ_IN, sizeof (struct bgp_adj_in));
adj->peer = peer;
adj->peer = peer_lock (peer); /* adj_in peer reference */
adj->attr = bgp_attr_intern (attr);
BGP_ADJ_IN_ADD (rn, adj);
bgp_lock_node (rn);
@ -349,6 +353,7 @@ bgp_adj_in_remove (struct bgp_node *rn, struct bgp_adj_in *bai)
{
bgp_attr_unintern (bai->attr);
BGP_ADJ_IN_DEL (rn, bai);
peer_unlock (bai->peer); /* adj_in peer reference */
XFREE (MTYPE_BGP_ADJ_IN, bai);
}
@ -399,7 +404,8 @@ bgp_sync_delete (struct peer *peer)
if (peer->sync[afi][safi])
XFREE (MTYPE_TMP, peer->sync[afi][safi]);
peer->sync[afi][safi] = NULL;
hash_free (peer->hash[afi][safi]);
if (peer->hash[afi][safi])
hash_free (peer->hash[afi][safi]);
}
}

bgpd/bgp_damp.c

@ -350,8 +350,6 @@ void
bgp_damp_info_free (struct bgp_damp_info *bdi, int withdraw)
{
struct bgp_info *binfo;
void bgp_info_delete (struct bgp_node *, struct bgp_info *);
void bgp_info_free (struct bgp_info *);
if (! bdi)
return;
@ -368,11 +366,8 @@ bgp_damp_info_free (struct bgp_damp_info *bdi, int withdraw)
UNSET_FLAG (binfo->flags, BGP_INFO_HISTORY);
if (bdi->lastrecord == BGP_RECORD_WITHDRAW && withdraw)
{
bgp_info_delete (bdi->rn, binfo);
bgp_info_free (binfo);
bgp_unlock_node (bdi->rn);
}
bgp_info_delete (bdi->rn, binfo);
XFREE (MTYPE_BGP_DAMP_INFO, bdi);
}

bgpd/bgp_fsm.c

@ -214,7 +214,7 @@ bgp_start_timer (struct thread *thread)
"%s [FSM] Timer (start timer expire).", peer->host);
THREAD_VAL (thread) = BGP_Start;
bgp_event (thread);
bgp_event (thread); /* bgp_event unlocks peer */
return 0;
}
@ -233,7 +233,7 @@ bgp_connect_timer (struct thread *thread)
peer->host);
THREAD_VAL (thread) = ConnectRetry_timer_expired;
bgp_event (thread);
bgp_event (thread); /* bgp_event unlocks peer */
return 0;
}
@ -253,7 +253,7 @@ bgp_holdtime_timer (struct thread *thread)
peer->host);
THREAD_VAL (thread) = Hold_Timer_expired;
bgp_event (thread);
bgp_event (thread); /* bgp_event unlocks peer */
return 0;
}
@ -273,7 +273,7 @@ bgp_keepalive_timer (struct thread *thread)
peer->host);
THREAD_VAL (thread) = KeepAlive_timer_expired;
bgp_event (thread);
bgp_event (thread); /* bgp_event unlocks peer */
return 0;
}
@ -388,12 +388,31 @@ bgp_graceful_stale_timer_expire (struct thread *thread)
return 0;
}
/* Called after event occured, this function change status and reset
read/write and timer thread. */
void
bgp_fsm_change_status (struct peer *peer, int status)
{
bgp_dump_state (peer, peer->status, status);
/* Preserve old status and change into new status. */
peer->ostatus = peer->status;
peer->status = status;
if (BGP_DEBUG (normal, NORMAL))
zlog_debug ("%s went from %s to %s",
peer->host,
LOOKUP (bgp_status_msg, peer->ostatus),
LOOKUP (bgp_status_msg, peer->status));
}
/* Administrative BGP peer stop event. */
int
bgp_stop (struct peer *peer)
{
afi_t afi;
safi_t safi;
unsigned int i;
char orf_name[BUFSIZ];
/* Increment Dropped count. */
@ -468,9 +487,11 @@ bgp_stop (struct peer *peer)
BGP_TIMER_OFF (peer->t_asorig);
BGP_TIMER_OFF (peer->t_routeadv);
/* Delete all existing events of the peer. */
BGP_EVENT_DELETE (peer);
/* Delete all existing events of the peer,
and corresponding peer ref-count */
for (i = thread_cancel_event (master, peer); i > 0; i--)
peer_unlock (peer); /* thread event reference */
/* Stream reset. */
peer->packet_size = 0;
@ -479,7 +500,8 @@ bgp_stop (struct peer *peer)
stream_reset (peer->ibuf);
if (peer->work)
stream_reset (peer->work);
stream_fifo_clean (peer->obuf);
if (peer->obuf)
stream_fifo_clean (peer->obuf);
/* Close of file descriptor. */
if (peer->fd >= 0)
@ -683,24 +705,6 @@ bgp_fsm_open (struct peer *peer)
return 0;
}
/* Called after event occured, this function change status and reset
read/write and timer thread. */
void
bgp_fsm_change_status (struct peer *peer, int status)
{
bgp_dump_state (peer, peer->status, status);
/* Preserve old status and change into new status. */
peer->ostatus = peer->status;
peer->status = status;
if (BGP_DEBUG (normal, NORMAL))
zlog_debug ("%s went from %s to %s",
peer->host,
LOOKUP (bgp_status_msg, peer->ostatus),
LOOKUP (bgp_status_msg, peer->status));
}
/* Keepalive send to peer. */
int
bgp_fsm_keepalive_expire (struct peer *peer)
@ -1021,15 +1025,16 @@ bgp_event (struct thread *thread)
ret = (*(FSM [peer->status - 1][event - 1].func))(peer);
/* When function do not want proceed next job return -1. */
if (ret < 0)
return ret;
/* If status is changed. */
if (next != peer->status)
bgp_fsm_change_status (peer, next);
if (ret >= 0)
{
/* If status is changed. */
if (next != peer->status)
bgp_fsm_change_status (peer, next);
/* Make sure timer is set. */
bgp_timer_set (peer);
return 0;
/* Make sure timer is set. */
bgp_timer_set (peer);
}
peer_unlock (peer); /* bgp-event peer reference */
return ret;
}

bgpd/bgp_fsm.h

@ -23,20 +23,72 @@ Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
#define _QUAGGA_BGP_FSM_H
/* Macro for BGP read, write and timer thread. */
#define BGP_READ_ON(T,F,V) THREAD_READ_ON(master,T,F,peer,V)
#define BGP_READ_OFF(X) THREAD_READ_OFF(X)
#define BGP_READ_ON(T,F,V) \
do { \
if (!T) \
{ \
peer_lock (peer); \
THREAD_READ_ON(master,T,F,peer,V); \
} \
} while (0)
#define BGP_WRITE_ON(T,F,V) THREAD_WRITE_ON(master,T,F,peer,V)
#define BGP_WRITE_OFF(X) THREAD_WRITE_OFF(X)
#define BGP_READ_OFF(T) \
do { \
if (T) \
{ \
peer_unlock (peer); \
THREAD_READ_OFF(T); \
} \
} while (0)
#define BGP_TIMER_ON(T,F,V) THREAD_TIMER_ON(master,T,F,peer,V)
#define BGP_TIMER_OFF(X) THREAD_TIMER_OFF(X)
#define BGP_WRITE_ON(T,F,V) \
do { \
if (!T) \
{ \
peer_lock (peer); \
THREAD_WRITE_ON(master,(T),(F),peer,(V)); \
} \
} while (0)
#define BGP_WRITE_OFF(T) \
do { \
if (T) \
{ \
peer_unlock (peer); \
THREAD_WRITE_OFF(T); \
} \
} while (0)
#define BGP_EVENT_ADD(P,E) \
thread_add_event (master, bgp_event, (P), (E))
#define BGP_TIMER_ON(T,F,V) \
do { \
if (!T) \
{ \
peer_lock (peer); \
THREAD_TIMER_ON(master,(T),(F),peer,(V)); \
} \
} while (0)
#define BGP_EVENT_DELETE(P) \
thread_cancel_event (master, (P))
#define BGP_TIMER_OFF(T) \
do { \
if (T) \
{ \
peer_unlock (peer); \
THREAD_TIMER_OFF(T); \
} \
} while (0)
#define BGP_EVENT_ADD(P,E) \
do { \
peer_lock (peer); /* bgp event reference */ \
thread_add_event (master, bgp_event, (P), (E)); \
} while (0)
#define BGP_EVENT_DELETE(P) \
do { \
peer_unlock (peer); /* bgp event peer reference */ \
assert (peer); \
thread_cancel_event (master, (P)); \
} while (0)
/* Prototypes. */
extern int bgp_event (struct thread *);

bgpd/bgp_packet.c

@ -637,7 +637,7 @@ bgp_write (struct thread *thread)
if (write_errno == EWOULDBLOCK || write_errno == EAGAIN)
break;
bgp_stop (peer);
BGP_EVENT_ADD (peer, BGP_Stop);
peer->status = Idle;
bgp_timer_set (peer);
return 0;
@ -673,8 +673,8 @@ bgp_write (struct thread *thread)
if (peer->v_start >= (60 * 2))
peer->v_start = (60 * 2);
/* BGP_EVENT_ADD (peer, BGP_Stop); */
bgp_stop (peer);
BGP_EVENT_ADD (peer, BGP_Stop);
/*bgp_stop (peer);*/
peer->status = Idle;
bgp_timer_set (peer);
return 0;
@ -722,7 +722,7 @@ bgp_write_notify (struct peer *peer)
ret = writen (peer->fd, STREAM_DATA (s), stream_get_endp (s));
if (ret <= 0)
{
bgp_stop (peer);
BGP_EVENT_ADD (peer, BGP_Stop);
peer->status = Idle;
bgp_timer_set (peer);
return 0;
@ -1278,7 +1278,7 @@ bgp_open_receive (struct peer *peer, bgp_size_t size)
/* Transfer status. */
realpeer->status = peer->status;
bgp_stop (peer);
/* peer pointer change. Open packet send to neighbor. */
peer = realpeer;
bgp_open_send (peer);
@ -2376,6 +2376,14 @@ bgp_read (struct thread *thread)
if (BGP_DEBUG (events, EVENTS))
zlog_debug ("%s [Event] Accepting BGP peer delete", peer->host);
peer_delete (peer);
/* we've lost track of a reference to ACCEPT_PEER somehow. It doesnt
* _seem_ to be the 'update realpeer with accept peer' hack, yet it
* *must* be.. Very very odd, but I give up trying to
* root cause this - ACCEPT_PEER is a dirty hack, it should be fixed
* instead, which would make root-causing this a moot point..
* A hack because of a hack, appropriate.
*/
peer_unlock (peer); /* god knows what reference... ACCEPT_PEER sucks */
}
return 0;
}

bgpd/bgp_route.c

@ -33,6 +33,7 @@ Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
#include "sockunion.h"
#include "plist.h"
#include "thread.h"
#include "workqueue.h"
#include "bgpd/bgpd.h"
#include "bgpd/bgp_table.h"
@ -85,7 +86,7 @@ bgp_afi_node_get (struct bgp_table *table, afi_t afi, safi_t safi, struct prefix
}
/* Allocate new bgp info structure. */
struct bgp_info *
static struct bgp_info *
bgp_info_new ()
{
struct bgp_info *new;
@ -97,7 +98,7 @@ bgp_info_new ()
}
/* Free bgp route information. */
void
static void
bgp_info_free (struct bgp_info *binfo)
{
if (binfo->attr)
@ -106,21 +107,61 @@ bgp_info_free (struct bgp_info *binfo)
if (binfo->damp_info)
bgp_damp_info_free (binfo->damp_info, 0);
peer_unlock (binfo->peer); /* bgp_info peer reference */
XFREE (MTYPE_BGP_ROUTE, binfo);
}
struct bgp_info *
bgp_info_lock (struct bgp_info *binfo)
{
binfo->lock++;
return binfo;
}
struct bgp_info *
bgp_info_unlock (struct bgp_info *binfo)
{
assert (binfo && binfo->lock > 0);
binfo->lock--;
if (binfo->lock == 0)
{
#if 0
zlog_debug ("%s: unlocked and freeing", __func__);
zlog_backtrace (LOG_DEBUG);
#endif
bgp_info_free (binfo);
return NULL;
}
#if 0
if (binfo->lock == 1)
{
zlog_debug ("%s: unlocked to 1", __func__);
zlog_backtrace (LOG_DEBUG);
}
#endif
return binfo;
}
void
bgp_info_add (struct bgp_node *rn, struct bgp_info *ri)
{
struct bgp_info *top;
top = rn->info;
ri->next = rn->info;
ri->prev = NULL;
if (top)
top->prev = ri;
rn->info = ri;
bgp_info_lock (ri);
bgp_lock_node (rn);
peer_lock (ri->peer); /* bgp_info peer reference */
}
void
@ -132,6 +173,9 @@ bgp_info_delete (struct bgp_node *rn, struct bgp_info *ri)
ri->prev->next = ri->next;
else
rn->info = ri->next;
bgp_info_unlock (ri);
bgp_unlock_node (rn);
}
/* Get MED value. If MED value is missing and "bgp bestpath
@ -1162,68 +1206,94 @@ bgp_process_announce_selected (struct peer *peer, struct bgp_info *selected,
break;
}
return 0;
}
}
int
bgp_process_rsclient (struct bgp *bgp, struct peer *rsclient,
struct bgp_node *rn, afi_t afi, safi_t safi)
struct bgp_process_queue
{
struct prefix *p;
struct bgp *bgp;
struct bgp_node *rn;
afi_t afi;
safi_t safi;
};
static wq_item_status
bgp_process_rsclient (struct bgp_process_queue *pq)
{
struct bgp *bgp = pq->bgp;
struct bgp_node *rn = pq->rn;
afi_t afi = pq->afi;
safi_t safi = pq->safi;
struct bgp_info *new_select;
struct bgp_info *old_select;
struct bgp_info_pair old_and_new;
struct attr attr;
struct peer_group *group;
struct listnode *node, *nnode;
p = &rn->p;
struct peer *rsclient = rn->table->owner;
/* we shouldn't run if the clear_route_node queue is still running
* or scheduled to run, or we can race with session coming up
* and adding routes back before we've cleared them
*/
if (bm->clear_node_queue && bm->clear_node_queue->thread)
return WQ_QUEUE_BLOCKED;
/* Best path selection. */
bgp_best_selection (bgp, rn, &old_and_new);
new_select = old_and_new.new;
old_select = old_and_new.old;
if (CHECK_FLAG(rsclient->sflags, PEER_STATUS_GROUP))
{
group = rsclient->group;
for (ALL_LIST_ELEMENTS (group->peer, node, nnode, rsclient))
{
/* Nothing to do. */
if (old_select && old_select == new_select)
if (! CHECK_FLAG (old_select->flags, BGP_INFO_ATTR_CHANGED))
continue;
if (old_select)
UNSET_FLAG (old_select->flags, BGP_INFO_SELECTED);
if (new_select)
{
SET_FLAG (new_select->flags, BGP_INFO_SELECTED);
UNSET_FLAG (new_select->flags, BGP_INFO_ATTR_CHANGED);
}
if (CHECK_FLAG (rsclient->sflags, PEER_STATUS_GROUP))
{
for (ALL_LIST_ELEMENTS (rsclient->group->peer, node, nnode, rsclient))
{
/* Nothing to do. */
if (old_select && old_select == new_select)
if (!CHECK_FLAG (old_select->flags, BGP_INFO_ATTR_CHANGED))
continue;
bgp_process_announce_selected (rsclient, new_select, rn, &attr,
afi, safi);
}
return 0;
}
if (old_select)
UNSET_FLAG (old_select->flags, BGP_INFO_SELECTED);
if (new_select)
{
SET_FLAG (new_select->flags, BGP_INFO_SELECTED);
UNSET_FLAG (new_select->flags, BGP_INFO_ATTR_CHANGED);
}
bgp_process_announce_selected (rsclient, new_select, rn, &attr, afi, safi);
bgp_process_announce_selected (rsclient, new_select, rn, &attr,
afi, safi);
}
}
else
{
bgp_process_announce_selected (rsclient, new_select, rn,
&attr, afi, safi);
}
return 0;
UNSET_FLAG (rn->flags, BGP_NODE_PROCESS_SCHEDULED);
return WQ_SUCCESS;
}
int
bgp_process_main (struct bgp *bgp, struct bgp_node *rn, afi_t afi, safi_t safi)
{
struct prefix *p;
static wq_item_status
bgp_process_main (struct bgp_process_queue *pq)
{
struct bgp *bgp = pq->bgp;
struct bgp_node *rn = pq->rn;
afi_t afi = pq->afi;
safi_t safi = pq->safi;
struct prefix *p = &rn->p;
struct bgp_info *new_select;
struct bgp_info *old_select;
struct bgp_info_pair old_and_new;
struct listnode *node, *nnode;
struct peer *peer;
struct attr attr;
p = &rn->p;
/* we shouldn't run if the clear_route_node queue is still running
* or scheduled to run, or we can race with session coming up
* and adding routes back before we've cleared them
*/
if (bm->clear_node_queue && bm->clear_node_queue->thread)
return WQ_QUEUE_BLOCKED;
/* Best path selection. */
bgp_best_selection (bgp, rn, &old_and_new);
@ -1234,11 +1304,13 @@ bgp_process_main (struct bgp *bgp, struct bgp_node *rn, afi_t afi, safi_t safi)
if (old_select && old_select == new_select)
{
if (! CHECK_FLAG (old_select->flags, BGP_INFO_ATTR_CHANGED))
{
if (CHECK_FLAG (old_select->flags, BGP_INFO_IGP_CHANGED))
bgp_zebra_announce (p, old_select, bgp);
return 0;
}
{
if (CHECK_FLAG (old_select->flags, BGP_INFO_IGP_CHANGED))
bgp_zebra_announce (p, old_select, bgp);
UNSET_FLAG (rn->flags, BGP_NODE_PROCESS_SCHEDULED);
return WQ_SUCCESS;
}
}
if (old_select)
@ -1273,21 +1345,78 @@ bgp_process_main (struct bgp *bgp, struct bgp_node *rn, afi_t afi, safi_t safi)
bgp_zebra_withdraw (p, old_select);
}
}
return 0;
UNSET_FLAG (rn->flags, BGP_NODE_PROCESS_SCHEDULED);
return WQ_SUCCESS;
}
int
static void
bgp_processq_del (struct bgp_process_queue *pq)
{
bgp_unlock_node (pq->rn);
XFREE (MTYPE_BGP_PROCESS_QUEUE, pq);
}
static void
bgp_process_queue_init (void)
{
bm->process_main_queue
= work_queue_new (bm->master, "process_main_queue");
bm->process_rsclient_queue
= work_queue_new (bm->master, "process_rsclient_queue");
if ( !(bm->process_main_queue && bm->process_rsclient_queue) )
{
zlog_err ("%s: Failed to allocate work queue", __func__);
exit (1);
}
bm->process_main_queue->spec.workfunc = &bgp_process_main;
bm->process_rsclient_queue->spec.workfunc = &bgp_process_rsclient;
bm->process_main_queue->spec.del_item_data = &bgp_processq_del;
bm->process_rsclient_queue->spec.del_item_data
= bm->process_main_queue->spec.del_item_data;
bm->process_main_queue->spec.max_retries
= bm->process_main_queue->spec.max_retries = 0;
bm->process_rsclient_queue->spec.hold
= bm->process_main_queue->spec.hold = 500;
bm->process_rsclient_queue->spec.delay
= bm->process_main_queue->spec.delay = 10;
}
void
bgp_process (struct bgp *bgp, struct bgp_node *rn, afi_t afi, safi_t safi)
{
struct bgp_process_queue *pqnode;
/* already scheduled for processing? */
if (CHECK_FLAG (rn->flags, BGP_NODE_PROCESS_SCHEDULED))
return;
if ( (bm->process_main_queue == NULL) ||
(bm->process_rsclient_queue == NULL) )
bgp_process_queue_init ();
pqnode = XCALLOC (MTYPE_BGP_PROCESS_QUEUE,
sizeof (struct bgp_process_queue));
if (!pqnode)
return;
pqnode->rn = bgp_lock_node (rn); /* unlocked by bgp_processq_del */
pqnode->bgp = bgp;
pqnode->afi = afi;
pqnode->safi = safi;
switch (rn->table->type)
{
case BGP_TABLE_MAIN:
return bgp_process_main (bgp, rn, afi, safi);
case BGP_TABLE_RSCLIENT:
return bgp_process_rsclient (bgp, (struct peer *) rn->table->owner,
rn, afi, safi);
case BGP_TABLE_MAIN:
work_queue_add (bm->process_main_queue, pqnode);
break;
case BGP_TABLE_RSCLIENT:
work_queue_add (bm->process_rsclient_queue, pqnode);
break;
}
return 0;
return;
}
int
@ -1399,8 +1528,6 @@ bgp_rib_remove (struct bgp_node *rn, struct bgp_info *ri, struct peer *peer,
bgp_process (peer->bgp, rn, afi, safi);
}
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
void
@ -1437,11 +1564,7 @@ bgp_rib_withdraw (struct bgp_node *rn, struct bgp_info *ri, struct peer *peer,
SET_FLAG (ri->flags, BGP_INFO_VALID);
if (status != BGP_DAMP_USED)
{
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
bgp_info_delete (rn, ri);
}
void
@ -1600,7 +1723,10 @@ bgp_update_rsclient (struct peer *rsclient, afi_t afi, safi_t safi,
/* Register new BGP information. */
bgp_info_add (rn, new);
/* route_node_get lock */
bgp_unlock_node (rn);
/* Process change. */
bgp_process (bgp, rn, afi, safi);
@ -1935,7 +2061,10 @@ bgp_update_main (struct peer *peer, struct prefix *p, struct attr *attr,
/* Register new BGP information. */
bgp_info_add (rn, new);
/* route_node_get lock */
bgp_unlock_node (rn);
/* If maximum prefix count is configured and current prefix
count exeed it. */
if (bgp_maximum_prefix_overflow (peer, afi, safi, 0))
@ -2274,15 +2403,99 @@ bgp_soft_reconfig_in (struct peer *peer, afi_t afi, safi_t safi)
bgp_soft_reconfig_table (peer, afi, safi, table);
}
struct bgp_clear_node_queue
{
struct bgp_node *rn;
struct peer *peer;
afi_t afi;
safi_t safi;
};
static wq_item_status
bgp_clear_route_node (struct bgp_clear_node_queue *cq)
{
struct bgp_adj_in *ain;
struct bgp_adj_out *aout;
struct bgp_info *ri;
assert (cq->rn && cq->peer);
for (ri = cq->rn->info; ri; ri = ri->next)
if (ri->peer == cq->peer)
{
/* graceful restart STALE flag set. */
if (CHECK_FLAG (cq->peer->sflags, PEER_STATUS_NSF_WAIT)
&& cq->peer->nsf[cq->afi][cq->safi]
&& ! CHECK_FLAG (ri->flags, BGP_INFO_STALE)
&& ! CHECK_FLAG (ri->flags, BGP_INFO_HISTORY)
&& ! CHECK_FLAG (ri->flags, BGP_INFO_DAMPED))
{
SET_FLAG (ri->flags, BGP_INFO_STALE);
cq->peer->pcount[cq->afi][cq->safi]--;
}
else
bgp_rib_remove (cq->rn, ri, cq->peer, cq->afi, cq->safi);
break;
}
for (ain = cq->rn->adj_in; ain; ain = ain->next)
if (ain->peer == cq->peer)
{
bgp_adj_in_remove (cq->rn, ain);
bgp_unlock_node (cq->rn);
break;
}
for (aout = cq->rn->adj_out; aout; aout = aout->next)
if (aout->peer == cq->peer)
{
bgp_adj_out_remove (cq->rn, aout, cq->peer, cq->afi, cq->safi);
bgp_unlock_node (cq->rn);
break;
}
return WQ_SUCCESS;
}
static void
bgp_clear_node_queue_del (struct bgp_clear_node_queue *cq)
{
bgp_unlock_node (cq->rn);
peer_unlock (cq->peer); /* bgp_clear_node_queue_del */
XFREE (MTYPE_BGP_CLEAR_NODE_QUEUE, cq);
}
static void
bgp_clear_node_complete (struct workqueue *wq)
{
/* unplug the 2 processing queues */
if (bm->process_main_queue)
work_queue_unplug (bm->process_main_queue);
if (bm->process_rsclient_queue)
work_queue_unplug (bm->process_rsclient_queue);
}
static void
bgp_clear_node_queue_init (void)
{
if ( (bm->clear_node_queue
= work_queue_new (bm->master, "clear_route_node")) == NULL)
{
zlog_err ("%s: Failed to allocate work queue", __func__);
exit (1);
}
bm->clear_node_queue->spec.hold = 10;
bm->clear_node_queue->spec.delay = 0; /* no gathering to be gained */
bm->clear_node_queue->spec.workfunc = &bgp_clear_route_node;
bm->clear_node_queue->spec.del_item_data = &bgp_clear_node_queue_del;
bm->clear_node_queue->spec.completion_func = &bgp_clear_node_complete;
bm->clear_node_queue->spec.max_retries = 0;
}
static void
bgp_clear_route_table (struct peer *peer, afi_t afi, safi_t safi,
struct bgp_table *table, struct peer *rsclient)
{
struct bgp_clear_node_queue *cqnode;
struct bgp_node *rn;
struct bgp_adj_in *ain;
struct bgp_adj_out *aout;
struct bgp_info *ri;
if (! table)
table = (rsclient) ? rsclient->rib[afi][safi] : peer->bgp->rib[afi][safi];
@ -2290,40 +2503,34 @@ bgp_clear_route_table (struct peer *peer, afi_t afi, safi_t safi,
if (! table)
return;
if (bm->clear_node_queue == NULL)
bgp_clear_node_queue_init ();
/* plug the two bgp_process queues to avoid any chance of racing
* with a session coming back up and adding routes before we've
* cleared them all. We'll unplug them with completion callback.
*/
if (bm->process_main_queue)
work_queue_plug (bm->process_main_queue);
if (bm->process_rsclient_queue)
work_queue_plug (bm->process_rsclient_queue);
for (rn = bgp_table_top (table); rn; rn = bgp_route_next (rn))
{
for (ri = rn->info; ri; ri = ri->next)
if (ri->peer == peer)
{
/* graceful restart STALE flag set. */
if (CHECK_FLAG (peer->sflags, PEER_STATUS_NSF_WAIT)
&& peer->nsf[afi][safi]
&& ! CHECK_FLAG (ri->flags, BGP_INFO_STALE)
&& ! CHECK_FLAG (ri->flags, BGP_INFO_HISTORY)
&& ! CHECK_FLAG (ri->flags, BGP_INFO_DAMPED))
{
SET_FLAG (ri->flags, BGP_INFO_STALE);
peer->pcount[afi][safi]--;
}
else
bgp_rib_remove (rn, ri, peer, afi, safi);
break;
}
for (ain = rn->adj_in; ain; ain = ain->next)
if (ain->peer == peer)
{
bgp_adj_in_remove (rn, ain);
bgp_unlock_node (rn);
break;
}
for (aout = rn->adj_out; aout; aout = aout->next)
if (aout->peer == peer)
{
bgp_adj_out_remove (rn, aout, peer, afi, safi);
bgp_unlock_node (rn);
break;
}
if (rn->info == NULL)
continue;
if ( (cqnode = XCALLOC (MTYPE_BGP_CLEAR_NODE_QUEUE,
sizeof (struct bgp_clear_node_queue))) == NULL)
continue;
cqnode->rn = bgp_lock_node (rn); /* unlocked: bgp_clear_node_queue_del */
cqnode->afi = afi;
cqnode->safi = safi;
cqnode->peer = peer_lock (peer); /* bgp_clear_node_queue_del */
work_queue_add (bm->clear_node_queue, cqnode);
}
return;
}
void
@ -2643,8 +2850,6 @@ bgp_static_withdraw_rsclient (struct bgp *bgp, struct peer *rsclient,
UNSET_FLAG (ri->flags, BGP_INFO_VALID);
bgp_process (bgp, rn, afi, safi);
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
/* Unlock bgp_node_lookup. */
@ -2780,7 +2985,10 @@ bgp_static_update_rsclient (struct peer *rsclient, struct prefix *p,
/* Register new BGP information. */
bgp_info_add (rn, new);
/* route_node_get lock */
bgp_unlock_node (rn);
/* Process change. */
bgp_process (bgp, rn, afi, safi);
@ -2887,7 +3095,10 @@ bgp_static_update_main (struct bgp *bgp, struct prefix *p,
/* Register new BGP information. */
bgp_info_add (rn, new);
/* route_node_get lock */
bgp_unlock_node (rn);
/* Process change. */
bgp_process (bgp, rn, afi, safi);
@ -2930,11 +3141,14 @@ bgp_static_update_vpnv4 (struct bgp *bgp, struct prefix *p, u_int16_t afi,
memcpy (new->tag, tag, 3);
/* Aggregate address increment. */
bgp_aggregate_increment (bgp, p, (struct bgp_info *) new, afi, safi);
bgp_aggregate_increment (bgp, p, new, afi, safi);
/* Register new BGP information. */
bgp_info_add (rn, (struct bgp_info *) new);
bgp_info_add (rn, new);
/* route_node_get lock */
bgp_unlock_node (rn);
/* Process change. */
bgp_process (bgp, rn, afi, safi);
}
@ -2962,8 +3176,6 @@ bgp_static_withdraw (struct bgp *bgp, struct prefix *p, afi_t afi,
UNSET_FLAG (ri->flags, BGP_INFO_VALID);
bgp_process (bgp, rn, afi, safi);
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
/* Unlock bgp_node_lookup. */
@ -3013,8 +3225,6 @@ bgp_static_withdraw_vpnv4 (struct bgp *bgp, struct prefix *p, u_int16_t afi,
UNSET_FLAG (ri->flags, BGP_INFO_VALID);
bgp_process (bgp, rn, afi, safi);
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
/* Unlock bgp_node_lookup. */
@ -3883,6 +4093,7 @@ bgp_aggregate_route (struct bgp *bgp, struct prefix *p, struct bgp_info *rinew,
new->uptime = time (NULL);
bgp_info_add (rn, new);
bgp_unlock_node (rn);
bgp_process (bgp, rn, afi, safi);
}
else
@ -4053,7 +4264,8 @@ bgp_aggregate_add (struct bgp *bgp, struct prefix *p, afi_t afi, safi_t safi,
new->uptime = time (NULL);
bgp_info_add (rn, new);
bgp_unlock_node (rn);
/* Process change. */
bgp_process (bgp, rn, afi, safi);
}
@ -4125,8 +4337,6 @@ bgp_aggregate_delete (struct bgp *bgp, struct prefix *p, afi_t afi,
UNSET_FLAG (ri->flags, BGP_INFO_VALID);
bgp_process (bgp, rn, afi, safi);
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
/* Unlock bgp_node_lookup. */
@ -4677,6 +4887,7 @@ bgp_redistribute_add (struct prefix *p, struct in_addr *nexthop,
bgp_aggregate_increment (bgp, p, new, afi, SAFI_UNICAST);
bgp_info_add (bn, new);
bgp_unlock_node (bn);
bgp_process (bgp, bn, afi, SAFI_UNICAST);
}
}
@ -4713,8 +4924,6 @@ bgp_redistribute_delete (struct prefix *p, u_char type)
UNSET_FLAG (ri->flags, BGP_INFO_VALID);
bgp_process (bgp, rn, afi, SAFI_UNICAST);
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
bgp_unlock_node (rn);
}
@ -4744,8 +4953,6 @@ bgp_redistribute_withdraw (struct bgp *bgp, afi_t afi, int type)
UNSET_FLAG (ri->flags, BGP_INFO_VALID);
bgp_process (bgp, rn, afi, SAFI_UNICAST);
bgp_info_delete (rn, ri);
bgp_info_free (ri);
bgp_unlock_node (rn);
}
}
}

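The plugging scheme used by bgp_clear_route_table and bgp_clear_node_complete above reduces to a simple idea: while nodes are being cleared, the two bgp_process queues are plugged, so any work generated by a session coming back up is queued but not run until the clear's completion callback unplugs them. The toy queue below illustrates only that behaviour and is not the lib/workqueue API used by the hunks above:

#include <stdio.h>

/* Toy model of the plug/unplug scheme: a plugged queue accepts items
 * but holds them back until it is unplugged again. */
#define QLEN 8

struct toy_queue
{
  int plugged;                    /* non-zero: hold items back */
  int head, tail;
  const char *items[QLEN];
};

static void
toy_queue_add (struct toy_queue *q, const char *item)
{
  q->items[q->tail++ % QLEN] = item;
}

static void
toy_queue_run (struct toy_queue *q)
{
  if (q->plugged)
    {
      printf ("plugged: %d item(s) held back\n", q->tail - q->head);
      return;
    }
  while (q->head < q->tail)
    printf ("processing %s\n", q->items[q->head++ % QLEN]);
}

int
main (void)
{
  struct toy_queue process = { 0 };

  process.plugged = 1;            /* clear starts: plug process queue  */
  toy_queue_add (&process, "10.0.0.0/8");   /* session came back up    */
  toy_queue_run (&process);       /* held back, no race with the clear */

  process.plugged = 0;            /* completion callback: unplug       */
  toy_queue_run (&process);       /* now processed, after the clear    */
  return 0;
}
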
bgpd/bgp_route.h

@ -21,12 +21,17 @@ Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
#ifndef _QUAGGA_BGP_ROUTE_H
#define _QUAGGA_BGP_ROUTE_H
#include "bgp_table.h"
struct bgp_info
{
/* For linked list. */
struct bgp_info *next;
struct bgp_info *prev;
/* reference count */
unsigned int lock;
/* BGP route type. This can be static, RIP, OSPF, BGP etc. */
u_char type;
@ -140,6 +145,11 @@ void bgp_clear_route_all (struct peer *);
void bgp_clear_adj_in (struct peer *, afi_t, safi_t);
void bgp_clear_stale_route (struct peer *, afi_t, safi_t);
extern struct bgp_info *bgp_info_lock (struct bgp_info *);
extern struct bgp_info *bgp_info_unlock (struct bgp_info *);
extern void bgp_info_add (struct bgp_node *rn, struct bgp_info *ri);
extern void bgp_info_delete (struct bgp_node *rn, struct bgp_info *ri);
int bgp_nlri_sanity_check (struct peer *, int, u_char *, bgp_size_t);
int bgp_nlri_parse (struct peer *, struct attr *, struct bgp_nlri *);

bgpd/bgp_table.h

@ -58,6 +58,9 @@ struct bgp_node
void *aggregate;
struct bgp_node *prn;
u_char flags;
#define BGP_NODE_PROCESS_SCHEDULED (1 << 0)
};
struct bgp_table *bgp_table_init (void);

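The new flags field and BGP_NODE_PROCESS_SCHEDULED exist to merge redundant processing of a node: bgp_process returns early when the flag is already set, and the work functions clear it once the node has been processed (the point where the flag is set is not visible in the hunks shown here, so the set site below is an assumption about the intent). A toy model of that coalescing, not the bgpd code:

#include <stdio.h>

/* Toy model of BGP_NODE_PROCESS_SCHEDULED: scheduling is a no-op while
 * the flag is set, and running the queued work clears it again. */
#define TOY_NODE_PROCESS_SCHEDULED (1 << 0)

struct toy_node
{
  unsigned char flags;
  int queued;                     /* items sitting on the work queue */
};

static void
toy_process (struct toy_node *rn)
{
  if (rn->flags & TOY_NODE_PROCESS_SCHEDULED)
    return;                       /* already scheduled: merged */
  rn->flags |= TOY_NODE_PROCESS_SCHEDULED;   /* assumed set on enqueue */
  rn->queued++;
}

static void
toy_process_main (struct toy_node *rn)
{
  rn->queued--;                   /* work function ran for this node */
  rn->flags &= ~TOY_NODE_PROCESS_SCHEDULED;
}

int
main (void)
{
  struct toy_node rn = { 0, 0 };

  toy_process (&rn);
  toy_process (&rn);              /* redundant: merged with the first */
  printf ("queued after two calls: %d\n", rn.queued);    /* 1 */

  toy_process_main (&rn);
  toy_process (&rn);              /* node can be scheduled again */
  printf ("queued after re-schedule: %d\n", rn.queued);  /* 1 */
  return 0;
}
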
bgpd/bgp_vty.c

@ -1342,7 +1342,7 @@ DEFUN (no_neighbor,
{
peer = peer_lookup (vty->index, &su);
if (peer)
peer_delete (peer);
peer_delete (peer);
}
return CMD_SUCCESS;
@ -2043,7 +2043,10 @@ peer_rsclient_set_vty (struct vty *vty, const char *peer_str,
return CMD_SUCCESS;
if ( ! peer_rsclient_active (peer) )
listnode_add_sort (bgp->rsclient, peer);
{
peer = peer_lock (peer); /* rsclient peer list reference */
listnode_add_sort (bgp->rsclient, peer);
}
ret = peer_af_flag_set (peer, afi, safi, PEER_FLAG_RSERVER_CLIENT);
if (ret < 0)
@ -2143,7 +2146,10 @@ peer_rsclient_unset_vty (struct vty *vty, const char *peer_str,
return bgp_vty_return (vty, ret);
if ( ! peer_rsclient_active (peer) )
listnode_delete (bgp->rsclient, peer);
{
peer_unlock (peer); /* peer bgp rsclient reference */
listnode_delete (bgp->rsclient, peer);
}
bgp_table_finish (peer->rib[bgp_node_afi(vty)][bgp_node_safi(vty)]);

bgpd/bgpd.c

@ -34,6 +34,7 @@ Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
#include "log.h"
#include "plist.h"
#include "linklist.h"
#include "workqueue.h"
#include "bgpd/bgpd.h"
#include "bgpd/bgp_table.h"
@ -685,7 +686,71 @@ peer_sort (struct peer *peer)
}
}
/* Allocate new peer object. */
static inline void
peer_free (struct peer *peer)
{
if (peer->desc)
XFREE (MTYPE_PEER_DESC, peer->desc);
/* Free allocated host character. */
if (peer->host)
XFREE (MTYPE_BGP_PEER_HOST, peer->host);
/* Update source configuration. */
if (peer->update_source)
sockunion_free (peer->update_source);
if (peer->update_if)
XFREE (MTYPE_PEER_UPDATE_SOURCE, peer->update_if);
memset (peer, 0, sizeof (struct peer));
XFREE (MTYPE_BGP_PEER, peer);
}
/* increase reference count on a struct peer */
struct peer *
peer_lock (struct peer *peer)
{
assert (peer && (peer->lock >= 0));
peer->lock++;
return peer;
}
/* decrease reference count on a struct peer
* struct peer is freed and NULL returned if last reference
*/
struct peer *
peer_unlock (struct peer *peer)
{
assert (peer && (peer->lock > 0));
peer->lock--;
if (peer->lock == 0)
{
#if 0
zlog_debug ("unlocked and freeing");
zlog_backtrace (LOG_DEBUG);
#endif
peer_free (peer);
return NULL;
}
#if 0
if (peer->lock == 1)
{
zlog_debug ("unlocked to 1");
zlog_backtrace (LOG_DEBUG);
}
#endif
return peer;
}
/* Allocate new peer object, implicitely locked. */
static struct peer *
peer_new ()
{
@ -700,6 +765,7 @@ peer_new ()
/* Set default value. */
peer->fd = -1;
peer->lock = 1;
peer->v_start = BGP_INIT_START_TIMER;
peer->v_connect = BGP_DEFAULT_CONNECT_RETRY;
peer->v_asorig = BGP_DEFAULT_ASORIGINATE;
@ -755,6 +821,8 @@ peer_create (union sockunion *su, struct bgp *bgp, as_t local_as,
peer->v_routeadv = BGP_DEFAULT_IBGP_ROUTEADV;
else
peer->v_routeadv = BGP_DEFAULT_EBGP_ROUTEADV;
peer = peer_lock (peer); /* bgp peer list reference */
listnode_add_sort (bgp->peer, peer);
active = peer_active (peer);
@ -790,6 +858,8 @@ peer_create_accept (struct bgp *bgp)
peer = peer_new ();
peer->bgp = bgp;
peer = peer_lock (peer); /* bgp peer list reference */
listnode_add_sort (bgp->peer, peer);
return peer;
@ -1087,18 +1157,20 @@ peer_delete (struct peer *peer)
relationship. */
if (peer->group)
{
peer = peer_unlock (peer); /* peer-group reference */
listnode_delete (peer->group->peer, peer);
peer->group = NULL;
}
/* Withdraw all information from routing table. We can not use
BGP_EVENT_ADD (peer, BGP_Stop) at here. Because the event is
executed after peer structure is deleted. */
* BGP_EVENT_ADD (peer, BGP_Stop) at here. Because the event is
* executed after peer structure is deleted.
*/
peer->last_reset = PEER_DOWN_NEIGHBOR_DELETE;
bgp_stop (peer);
bgp_fsm_change_status (peer, Idle);
/* Stop all timers. */
bgp_fsm_change_status (peer, Idle); /* stops all timers */
/* Stop all timers - should already have been done in bgp_stop */
BGP_TIMER_OFF (peer->t_start);
BGP_TIMER_OFF (peer->t_connect);
BGP_TIMER_OFF (peer->t_holdtime);
@ -1112,9 +1184,14 @@ peer_delete (struct peer *peer)
/* Delete from all peer list. */
if (! CHECK_FLAG (peer->sflags, PEER_STATUS_GROUP))
{
listnode_delete (bgp->peer, peer);
peer_unlock (peer); /* bgp peer list reference */
listnode_delete (bgp->peer, peer);
if (peer_rsclient_active (peer))
listnode_delete (bgp->rsclient, peer);
{
peer_unlock (peer); /* rsclient list reference */
listnode_delete (bgp->rsclient, peer);
}
}
/* Free RIB for any family in which peer is RSERVER_CLIENT, and is not
@ -1124,30 +1201,22 @@ peer_delete (struct peer *peer)
if (peer->rib[afi][safi] && ! peer->af_group[afi][safi])
bgp_table_finish (peer->rib[afi][safi]);
/* Buffer. */
/* Buffers. */
if (peer->ibuf)
stream_free (peer->ibuf);
if (peer->obuf)
stream_fifo_free (peer->obuf);
if (peer->work)
stream_free (peer->work);
/* Free allocated host character. */
if (peer->host)
XFREE (MTYPE_BGP_PEER_HOST, peer->host);
/* Local and remote addresses. */
if (peer->su_local)
sockunion_free (peer->su_local);
if (peer->su_remote)
sockunion_free (peer->su_remote);
/* Peer description string. */
if (peer->desc)
XFREE (MTYPE_PEER_DESC, peer->desc);
bgp_sync_delete (peer);
/* Free filter related memory. */
@ -1164,11 +1233,16 @@ peer_delete (struct peer *peer)
free (filter->plist[i].name);
if (filter->aslist[i].name)
free (filter->aslist[i].name);
}
for (i = RMAP_IN; i < RMAP_MAX; i++)
{
filter->dlist[i].name = NULL;
filter->plist[i].name = NULL;
filter->aslist[i].name = NULL;
}
for (i = RMAP_IN; i < RMAP_MAX; i++)
{
if (filter->map[i].name)
free (filter->map[i].name);
filter->map[i].name = NULL;
}
if (filter->usmap.name)
@ -1176,22 +1250,12 @@ peer_delete (struct peer *peer)
if (peer->default_rmap[afi][safi].name)
free (peer->default_rmap[afi][safi].name);
filter->usmap.name = NULL;
peer->default_rmap[afi][safi].name = NULL;
}
/* Update source configuration. */
if (peer->update_source)
{
sockunion_free (peer->update_source);
peer->update_source = NULL;
}
if (peer->update_if)
{
XFREE (MTYPE_PEER_UPDATE_SOURCE, peer->update_if);
peer->update_if = NULL;
}
/* Free peer structure. */
XFREE (MTYPE_BGP_PEER, peer);
peer_unlock (peer); /* initial reference */
return 0;
}
@ -1625,6 +1689,8 @@ peer_group_bind (struct bgp *bgp, union sockunion *su,
peer = peer_create (su, bgp, bgp->as, group->conf->as, afi, safi);
peer->group = group;
peer->af_group[afi][safi] = 1;
peer = peer_lock (peer); /* peer-group group->peer reference */
listnode_add (group->peer, peer);
peer_group2peer_config_copy (group, peer, afi, safi);
@ -1664,6 +1730,8 @@ peer_group_bind (struct bgp *bgp, union sockunion *su,
if (! peer->group)
{
peer->group = group;
peer = peer_lock (peer); /* peer-group group->peer reference */
listnode_add (group->peer, peer);
}
@ -1693,7 +1761,10 @@ peer_group_bind (struct bgp *bgp, union sockunion *su,
family, without being member of a peer_group, remove it from
list bgp->rsclient.*/
if (! peer_rsclient_active (peer))
listnode_delete (bgp->rsclient, peer);
{
peer_unlock (peer); /* peer rsclient reference */
listnode_delete (bgp->rsclient, peer);
}
bgp_table_finish (peer->rib[afi][safi]);
@ -1748,6 +1819,7 @@ peer_group_unbind (struct bgp *bgp, struct peer *peer,
if (! peer_group_active (peer))
{
peer_unlock (peer); /* peer group list reference */
listnode_delete (group->peer, peer);
peer->group = NULL;
if (group->conf->as)
@ -1778,8 +1850,9 @@ bgp_create (as_t *as, const char *name)
afi_t afi;
safi_t safi;
bgp = XCALLOC (MTYPE_BGP, sizeof (struct bgp));
if ( (bgp = XCALLOC (MTYPE_BGP, sizeof (struct bgp))) == NULL)
return NULL;
bgp->peer_self = peer_new ();
bgp->peer_self->host = strdup ("Static announcement");
@ -1941,7 +2014,7 @@ bgp_delete (struct bgp *bgp)
list_delete (bgp->rsclient);
listnode_delete (bm->bgp, bgp);
if (bgp->name)
free (bgp->name);
@ -4839,6 +4912,7 @@ bgp_master_init ()
bm->master = thread_master_create ();
bm->start_time = time (NULL);
}
void
bgp_init ()
@ -4903,5 +4977,7 @@ bgp_terminate ()
BGP_NOTIFY_CEASE_PEER_UNCONFIG);
bgp_cleanup_routes ();
work_queue_free (bm->process_main_queue);
work_queue_free (bm->process_rsclient_queue);
}

bgpd/bgpd.h

@ -37,6 +37,11 @@ struct bgp_master
/* BGP thread master. */
struct thread_master *master;
/* work queues */
struct work_queue *process_main_queue;
struct work_queue *process_rsclient_queue;
struct work_queue *clear_node_queue;
/* BGP port number. */
u_int16_t port;
@ -58,7 +63,7 @@ struct bgp
/* Name of this BGP instance. */
char *name;
/* Self peer. */
struct peer *peer_self;
@ -245,6 +250,13 @@ struct peer
/* BGP structure. */
struct bgp *bgp;
/* reference count, primarily to allow bgp_process'ing of route_node's
* to be done after a struct peer is deleted.
*
* named 'lock' for hysterical reasons within Quagga.
*/
int lock;
/* BGP peer group. */
struct peer_group *group;
u_char af_group[AFI_MAX][SAFI_MAX];
@ -785,6 +797,8 @@ struct peer_group *peer_group_lookup (struct bgp *, const char *);
struct peer_group *peer_group_get (struct bgp *, const char *);
struct peer *peer_lookup_with_open (union sockunion *, as_t, struct in_addr *,
int *);
extern struct peer *peer_lock (struct peer *);
extern struct peer *peer_unlock (struct peer *);
int peer_sort (struct peer *peer);
int peer_active (struct peer *);
int peer_active_nego (struct peer *);

lib/ChangeLog

@ -1,3 +1,8 @@
2005-06-01 Paul Jakma <paul.jakma@sun.com>
* memtypes.c: Add MTYPE_BGP_PROCESS_QUEUE and
MTYPE_BGP_CLEAR_NODE_QUEUE
2005-05-24 Paul Jakma <paul@dishone.st>
* memtypes.h: update this auto-built file. (maybe we should just

lib/memtypes.c

@ -6,7 +6,7 @@
* The script is sensitive to the format (though not whitespace), see
* the top of memtypes.awk for more details.
*
* $Id: memtypes.c,v 1.5 2005/05/19 02:13:28 paul Exp $
* $Id: memtypes.c,v 1.6 2005/06/01 11:17:05 paul Exp $
*/
#include "zebra.h"
@ -125,6 +125,9 @@ struct memory_list memory_list_bgp[] =
{ MTYPE_CLUSTER, "Cluster list" },
{ MTYPE_CLUSTER_VAL, "Cluster list val" },
{ 0, NULL },
{ MTYPE_BGP_PROCESS_QUEUE, "BGP Process queue" },
{ MTYPE_BGP_CLEAR_NODE_QUEUE, "BGP node clear queue" },
{ 0, NULL },
{ MTYPE_TRANSIT, "BGP transit attr" },
{ MTYPE_TRANSIT_VAL, "BGP transit val" },
{ 0, NULL },