The current code is a complete misuse of the SLIST structure. Instead of simply
adding an SLIST_ENTRY to struct bgp_damp_info, it allocates a separate
structure to serve as a node in the list.
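A minimal sketch of the intended layout, assuming the sys/queue.h SLIST
macros; the field and list names below are illustrative, not the actual
FRR ones:

    #include <sys/queue.h>

    struct bgp_damp_info {
            /* ... existing members ... */
            SLIST_ENTRY(bgp_damp_info) entry; /* embedded list link */
    };

    SLIST_HEAD(bgp_damp_info_list, bgp_damp_info);

    /* With the link embedded, queuing a BDI needs no extra allocation: */
    SLIST_INSERT_HEAD(&bdc->no_reuse_list, bdi, entry);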
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
The only difference between the daemons' interface node definitions is the
config write function. There is no need to define the node in every daemon;
just pass the callback as an argument to a library function and define the
node there.
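Roughly, the idea looks like this; the helper name and cmd_node fields are
assumptions based on the description, not a confirmed API:

    /* Library side: daemons only hand over their config write callback. */
    void if_cmd_init(int (*config_write)(struct vty *))
    {
            static struct cmd_node interface_node = {
                    .name = "interface",
                    .node = INTERFACE_NODE,
                    .parent_node = CONFIG_NODE,
                    .prompt = "%s(config-if)# ",
            };

            interface_node.config_write = config_write;
            install_node(&interface_node);
    }

    /* Daemon side (e.g. zebra): */
    if_cmd_init(if_config_write);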
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
One more crash in dampening code...
When bgp_damp_withdraw is called and a BDI structure already exists,
bgp_damp_info_claim is called to re-assign bdi->config in case it was
changed. The problem is that bgp_damp_info_claim actually removes the BDI
from the reuse list of the old config and never adds it to the reuse list
of the new config. We must add it to the new config's reuse list to prevent
the crash, because all the code assumes that a BDI is always on some list.
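A hedged sketch of the fix; bgp_damp_info_unclaim matches the description
above, while the insert helper and list names are illustrative:

    static void bgp_damp_info_claim(struct bgp_damp_info *bdi,
                                    struct bgp_damp_config *bdc)
    {
            if (bdi->config == bdc)
                    return;

            /* Unlink from the old config's reuse list ... */
            bgp_damp_info_unclaim(bdi);

            /* ... and (the missing part) link it into the new config's
             * list, since the rest of the code assumes a BDI is always
             * on some list. */
            bdi->config = bdc;
            bgp_damp_info_reuse_list_add(bdc, bdi); /* illustrative helper */
    }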
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
The test_simple_snmp.py test starts bgp, zebra and snmpd at the
same time. Then the zebra configuration is read in and interface
addresses are applied. If snmpd starts slower than zebra, the snmp
process can properly get the IP address to bind to; if it is faster
than zebra, it will fail. Ensure that the test has the addresses
configured before we start the daemons.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There exist some rare situations where fpm will attempt
to send a route update with no valid nexthops. In that
case an assert would be hit. This is not good for
keeping your routing daemons up and running
when we can safely just recover from the situation.
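A hedged sketch of the recovery path, using zlog_err as mentioned in the
note below; the counter and variable names are illustrative:

    /* Instead of asserting on a dataplane update that carries no
     * valid nexthops, log it and skip the update so the daemon
     * stays up. */
    if (valid_nh_count == 0) {
            zlog_err("%s: %pFX has no valid nexthops, skipping FPM update",
                     __func__, &rn->p);
            return 0;
    }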
Fixes: #7588
Signed-off-by: batmancn <batmanustc@gmail.com>
<fixed commit message, and used zlog_err>
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When running `make check` against a build with zeromq enabled,
the test_zmq unit test was crashing. The commit
e08165def1
introduced this crash. Remove the part of that commit
that was causing the crash in the test. This is mainly
to get `make check` working again for those people using
zeromq in their builds.
Fixes: #9176
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The bgp ipv6 unicast node should be called `bgp ipv6 unicast`
to make it consistent with the other nodes where we list the afi/safi.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When running this test on a locally loaded system, I am seeing the
static route still marked `queued` after 1 second. Let's just
blanket-increase the timeout to something longer to give a very loaded
system more time to install the route.
Output on my test system when it was loaded:
INFO topolog.r1:topogen.py:880 vtysh result:
{
  "4.5.1.0/24":[
    {
      "prefix":"4.5.1.0/24",
      "prefixLen":24,
      "protocol":"static",
      "vrfId":0,
      "vrfName":"default",
      "selected":true,
      "destSelected":true,
      "distance":1,
      "metric":0,
      "queued":true,
      "table":254,
      "internalStatus":8,
      "internalFlags":73,
      "internalNextHopNum":1,
      "internalNextHopActiveNum":1,
      "uptime":"00:00:00",
      "nexthops":[
        {
          "flags":1,
          "ip":"192.168.216.3",
          "afi":"ipv4",
          "interfaceIndex":11,
          "interfaceName":"r1-eth6",
          "active":true,
          "weight":1
        }
      ]
    },
I suspect 10 seconds should be enough (I would hope).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Let's be less radical. There's no reason to stop the whole daemon when
there's a socket creation error in a single VRF. The user can always
restart that single VRF to retry creating the socket.
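Conceptually the change is from exiting the daemon to failing only the
affected VRF; a sketch under assumed names:

    int sock = socket(AF_INET, SOCK_DGRAM, 0); /* per-VRF socket; family/type illustrative */

    if (sock < 0) {
            /* Before: exit() here took the whole daemon down.
             * Now: report the error and give up on this VRF only;
             * the user can restart the VRF to retry. */
            zlog_err("%s: cannot create socket for VRF %s: %s",
                     __func__, vrf->name, safe_strerror(errno));
            return -1;
    }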
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
RCA: When a Type-7 LSA is updated, the LSDB is searched; if the
LSA is present in the LSDB, it is updated with the next
sequence number, and if not, it is originated with the
INITIAL sequence number.
Here, while originating the Type-7 LSA, the process-level LSDB is
searched instead of the area-level LSDB.
Fix: Search in the area-level LSDB and not in the process-level one.
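A hedged sketch of the lookup change; ospf6_lsdb_lookup and the lsdb
pointers follow ospf6d conventions, but the exact call site may differ:

    /* Look the Type-7 LSA up in the area-level LSDB so an existing
     * LSA is re-originated with the next sequence number instead of
     * the INITIAL one. */
    old = ospf6_lsdb_lookup(lsa_header->type, lsa_header->id,
                            lsa_header->adv_router,
                            oa->lsdb /* was: ospf6->lsdb (process level) */);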
Fixes: #9099
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
This commit corrects the order in which the fields are verified and
accessed. The fields should be verified first, and accessed only if
they are valid.
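In general terms the pattern being corrected looks like this; the TLV
names are purely illustrative:

    /* Wrong: the field is accessed before it is verified. */
    value = tlv->data[0];
    if (tlv->length < 1)
            return -1;

    /* Right: verify the field first, access it only if it is valid. */
    if (tlv->length < 1)
            return -1;
    value = tlv->data[0];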
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>