Two things:
On shutdown, clean up any events associated with the update walker.
Also, do not allow new events to be created.
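A minimal sketch of the idea, using a hypothetical shutdown flag and
helper names rather than zebra's actual symbols:
```
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-in for zebra's rib_update_ctx. */
struct rib_update_ctx;

static bool rib_update_terminating;

/* Shutdown path: stop accepting new walker events, then cancel and
 * free whatever is still pending. */
void rib_update_shutdown(void)
{
	rib_update_terminating = true;
	/* ... walk the list of pending contexts, cancel each scheduled
	 * event and free its rib_update_ctx ... */
}

/* Scheduling helper: refuse to create events once shutdown started. */
bool rib_update_schedule(struct rib_update_ctx *ctx)
{
	if (rib_update_terminating) {
		free(ctx);	/* the real code would free the context
				 * through its own destructor */
		return false;
	}
	/* ... queue the walker event as before ... */
	return true;
}
```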
Fixes this mem-leak:
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790:Direct leak of 8 byte(s) in 1 object(s) allocated from:
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #0 0x7f0dd0b08037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #1 0x7f0dd06c19f9 in qcalloc lib/memory.c:105
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #2 0x55b42fb605bc in rib_update_ctx_init zebra/zebra_rib.c:4383
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #3 0x55b42fb6088f in rib_update zebra/zebra_rib.c:4421
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #4 0x55b42fa00344 in netlink_link_change zebra/if_netlink.c:2221
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #5 0x55b42fa24622 in netlink_information_fetch zebra/kernel_netlink.c:399
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #6 0x55b42fa28c02 in netlink_parse_info zebra/kernel_netlink.c:1183
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #7 0x55b42fa24951 in kernel_read zebra/kernel_netlink.c:493
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #8 0x7f0dd0797f0c in event_call lib/event.c:1995
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #9 0x7f0dd0684fd9 in frr_run lib/libfrr.c:1185
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #10 0x55b42fa30caa in main zebra/main.c:465
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #11 0x7f0dd01b5d09 in __libc_start_main ../csu/libc-start.c:308
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790-
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790-SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add a "show debugging" command for MGMTd, which resolves the
command-incomplete issue.
issue #13154
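For illustration only, a rough sketch of what such a command looks like
in FRR's CLI framework; the actual mgmtd handler name, debug categories,
and output differ:
```
/* Sketch of a "show debugging" handler for mgmtd (illustrative only). */
DEFUN(show_debugging_mgmt, show_debugging_mgmt_cmd,
      "show debugging [mgmt]",
      SHOW_STR
      DEBUG_STR
      "MGMT information\n")
{
	vty_out(vty, "MGMT debugging status:\n");
	/* ... report which mgmtd debug categories are currently on ... */

	return CMD_SUCCESS;
}

void mgmt_debug_init(void)
{
	/* Register the command so "show debugging" is complete in mgmtd. */
	install_element(VIEW_NODE, &show_debugging_mgmt_cmd);
}
```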
Signed-off-by: Manoj Naragund <mnaragund@vmware.com>
Following the correction of IS-IS subnet creation, this patch adjusts the
way the TED is created in sharpd so that subnet deletion is handled
automatically.
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
Subnets may be incorrectly created in the IS-IS Traffic Engineering Database
(TED). Indeed, to be usable, the subnets advertised by IS-IS peers must be
adjusted to avoid misinterpretation. For example, consider R1 which is
connected to R2 with IP addresses 10.0.0.1/24 (R1) and 10.0.0.2/24 (R2).
R1 and R2 will advertise the prefix 10.0.0.0/24. By leaving the subnet with the
prefix 10.0.0.0/24 in the TED, it is not possible to determine whether
10.0.0.1 is attached to R1 or R2 or whether 10.0.0.3 exists.
So to avoid this, the subnet prefixes are adjusted with the IP addresses of the
local interface. But IS-IS can start to advertise the subnet when not all
adjacencies are up, especially when IPv4 and IPv6 are configured on the same
interface. This results in an uncorrected prefix, e.g. 10.0.0.0/24, remaining
in the TED when it should be removed.
This problem affects some IS-IS-related tests such as the CSPF test.
This patch fixes this bug by removing the uncorrected prefix before adding
the corrected version.
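A minimal sketch of the fix's shape, using hypothetical TED helpers
(ted_subnet_exists(), ted_subnet_del(), ted_subnet_add()) rather than the
real link-state API:
```
#include <stdbool.h>

/* Hypothetical types standing in for the FRR prefix and TED structures. */
struct prefix;
struct ls_ted;

/* Hypothetical helpers: look up, remove, and add a subnet in the TED. */
extern bool ted_subnet_exists(struct ls_ted *ted, const struct prefix *p);
extern void ted_subnet_del(struct ls_ted *ted, const struct prefix *p);
extern void ted_subnet_add(struct ls_ted *ted, const struct prefix *p);

/*
 * When the adjusted prefix becomes known (e.g. 10.0.0.1/24 for R1 instead
 * of the bare 10.0.0.0/24 advertised before all adjacencies were up),
 * drop the stale un-adjusted entry before inserting the corrected one.
 */
void ted_update_subnet(struct ls_ted *ted, const struct prefix *advertised,
		       const struct prefix *adjusted)
{
	if (ted_subnet_exists(ted, advertised))
		ted_subnet_del(ted, advertised);

	ted_subnet_add(ted, adjusted);
}
```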
Signed-off-by: Olivier Dugeon <olivier.dugeon@orange.com>
Fix the missing neighbor description in "show bgp summary [wide]"
when its length exceeds 20 [64] characters and it does not contain
whitespace.
The existing behavior is kept if the description contains whitespace
before the size limit.
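A small self-contained sketch of the truncation logic, assuming a
hypothetical desc_truncate() helper (not bgpd's actual function); the
whitespace handling here only approximates the existing behavior:
```
#include <stdio.h>
#include <string.h>

/*
 * Copy at most 'limit' characters of the neighbor description,
 * preferring to cut at the last whitespace before the limit but falling
 * back to a hard cut when there is no whitespace at all.
 */
static void desc_truncate(char *out, size_t outsz, const char *desc,
			  size_t limit)
{
	size_t len = strlen(desc);
	size_t cut = limit;

	if (len <= limit) {
		snprintf(out, outsz, "%s", desc);
		return;
	}

	/* Look for the last space before the limit. */
	for (size_t i = limit; i > 0; i--) {
		if (desc[i - 1] == ' ') {
			cut = i - 1;
			break;
		}
	}

	/* No space found: hard-truncate instead of dropping the field. */
	snprintf(out, outsz, "%.*s", (int)cut, desc);
}

int main(void)
{
	char buf[64];

	desc_truncate(buf, sizeof(buf), "a-very-long-description-no-spaces", 20);
	printf("%s\n", buf);	/* prints the first 20 characters */
	return 0;
}
```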
Signed-off-by: rbarroetavena <rbarroetavena@gmail.com>
The test failed from time to time; let's try it this way:
```
$ for x in $(seq 1 20); do cp test_bgp_labeled_unicast_addpath.py test_$x.py; done
$ sudo pytest -s -n 20
```
Ran 10 times using this pattern, no failure 🤷
Before this change, we checked advertised routes, and at some point `=` was
missing from the output even though the routes were advertised correctly.
The receiving router gets as many routes as it is expected to receive.
I switched to checking received routes instead of advertised routes.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
Memory leaks are observed in the cleanup code. When "no router bgp" is
executed, the cleanup in that flow does not take care of the
aggregate-address configuration.
Fixes the leak below:
--
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444:Direct leak of 152 byte(s) in 1 object(s) allocated from:
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #0 0x7f163e911037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #1 0x7f163e4b9259 in qcalloc lib/memory.c:105
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #2 0x562bf42ebbd5 in bgp_aggregate_new bgpd/bgp_route.c:7239
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #3 0x562bf42f14e8 in bgp_aggregate_set bgpd/bgp_route.c:8421
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #4 0x562bf42f1e55 in aggregate_addressv6_magic bgpd/bgp_route.c:8592
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #5 0x562bf42be3f5 in aggregate_addressv6 bgpd/bgp_route_clippy.c:341
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #6 0x7f163e3f1e1b in cmd_execute_command_real lib/command.c:988
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #7 0x7f163e3f219c in cmd_execute_command lib/command.c:1048
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #8 0x7f163e3f2df4 in cmd_execute lib/command.c:1215
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #9 0x7f163e5a2d73 in vty_command lib/vty.c:544
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #10 0x7f163e5a79c8 in vty_execute lib/vty.c:1307
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #11 0x7f163e5ad299 in vtysh_read lib/vty.c:2216
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #12 0x7f163e593f16 in event_call lib/event.c:1995
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #13 0x7f163e47c839 in frr_run lib/libfrr.c:1185
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #14 0x562bf414e58d in main bgpd/bgp_main.c:505
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #15 0x7f163de66d09 in __libc_start_main ../csu/libc-start.c:308
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444-
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444:Direct leak of 152 byte(s) in 1 object(s) allocated from:
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #0 0x7f163e911037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #1 0x7f163e4b9259 in qcalloc lib/memory.c:105
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #2 0x562bf42ebbd5 in bgp_aggregate_new bgpd/bgp_route.c:7239
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #3 0x562bf42f14e8 in bgp_aggregate_set bgpd/bgp_route.c:8421
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #4 0x562bf42f1cde in aggregate_addressv4_magic bgpd/bgp_route.c:8543
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #5 0x562bf42bd258 in aggregate_addressv4 bgpd/bgp_route_clippy.c:255
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #6 0x7f163e3f1e1b in cmd_execute_command_real lib/command.c:988
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #7 0x7f163e3f219c in cmd_execute_command lib/command.c:1048
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #8 0x7f163e3f2df4 in cmd_execute lib/command.c:1215
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #9 0x7f163e5a2d73 in vty_command lib/vty.c:544
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #10 0x7f163e5a79c8 in vty_execute lib/vty.c:1307
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #11 0x7f163e5ad299 in vtysh_read lib/vty.c:2216
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #12 0x7f163e593f16 in event_call lib/event.c:1995
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #13 0x7f163e47c839 in frr_run lib/libfrr.c:1185
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #14 0x562bf414e58d in main bgpd/bgp_main.c:505
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444- #15 0x7f163de66d09 in __libc_start_main ../csu/libc-start.c:308
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444-
./bgp_local_asn_dot.test_bgp_local_asn_dot_agg/r3.bgpd.asan.3410444-SUMMARY: AddressSanitizer: 304 byte(s) leaked in 2 allocation(s).
Signed-off-by: Samanvitha B Bhargav <bsamanvitha@vmware.com>
BGP signals to zebra that an afi has converged immediately
after it has finished processing all routes for a given
afi/safi. This generates events in zebra in this order:
a) Routes received from BGP are placed on the early-rib Meta-Q.
b) GR is signalled for the afi.
Now imagine that zebra runs the GR code and immediately
processes routes that are in the actual rib, removing
some routes. This generates:
c) Route deletions to the kernel for some number of
routes that may still be on the early-rib Meta-Q.
d) Processing of the Meta-Q, which re-installs the routes.
This is undesirable behavior in zebra: while we may end
up in a correct state, there will be a blip for some
number of routes that happen to be on the early-rib Meta-Q.
Modify the GR code to have its own processing entry at
the end of the Meta-Q. This will allow all routes to be
processed and ready for handling by the Graceful Restart
code.
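A sketch of the approach, with illustrative names (META_QUEUE_GR,
meta_queue_enqueue()) standing in for zebra's actual Meta-Q machinery:
```
#include <stdlib.h>

/* Illustrative subqueue ordering: GR work sits behind all route work. */
enum meta_queue_indexes {
	META_QUEUE_EARLY_ROUTE,
	META_QUEUE_ROUTE,
	META_QUEUE_GR,		/* new: drained only after route subqueues */
	META_QUEUE_MAX,
};

struct gr_afi_clean {
	int afi;
	unsigned int client_proto;
};

/* Hypothetical enqueue primitive. */
extern void meta_queue_enqueue(enum meta_queue_indexes idx, void *data);

/* Instead of acting on the GR signal immediately, queue the per-afi
 * cleanup behind everything already sitting on the Meta-Q. */
void zebra_gr_signal_converged(int afi, unsigned int proto)
{
	struct gr_afi_clean *gac = calloc(1, sizeof(*gac));

	if (!gac)
		return;
	gac->afi = afi;
	gac->client_proto = proto;
	meta_queue_enqueue(META_QUEUE_GR, gac);
}
```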
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
After the restructuring of the GR code to allow zebra_gr
to do individual per-afi cleanups, this is no longer necessary.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The GR code in FRR used to wait until all AFIs were complete
before cleaning up the routes from the upper level protocol.
This of course can lead to some weird situations where, say,
ipv4 finishes and then v6 is stuck waiting for a peer to come
up and never finishes. When v4 finishes, it signals zebra that
it is done, but no action is taken at that moment.
Modify the code to allow the zebra_gr.c code to handle per-afi
removal, instead of doing it all at the end.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The zebra_gr code had 3 functions when effectively only
1 was needed. Clean up some code weirdness around
multiple switch statements for the same api->cap,
and consolidate down to only caring about
SAFI_UNICAST, since that is all we care about at the
moment.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
We have code that tracks both afis and safis,
but we only ever operate on the afis. So let's
limit the work being done to something more sensible.
I'm leaving the safi being broadcast through the zapi
message, as I am not sure what else should be ripped
out at this point in time.
Finally, rearrange the zread_client_capabilites function
to stop the multiple levels of function calling that really
serve no purpose.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
BGP_PREFIX_SID_SRV6_L3_SERVICE attributes must not
fully trust the length value specified in the NLRI.
Always ensure that the amount of data we need to read
is actually available.
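A minimal sketch of the defensive pattern, using a hypothetical parse
buffer rather than bgpd's stream API: every read is gated on the bytes
actually present, never on the length advertised inside the TLV.
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical parser state over the received attribute bytes. */
struct parse_buf {
	const uint8_t *pos;
	size_t remaining;	/* bytes really present in the packet */
};

/* Consume 'want' bytes only if they exist; never trust the embedded
 * length field in the SRv6 L3 service TLV. */
static bool pb_get(struct parse_buf *pb, void *dst, size_t want)
{
	if (pb->remaining < want)
		return false;	/* malformed: claimed length > data */

	memcpy(dst, pb->pos, want);
	pb->pos += want;
	pb->remaining -= want;
	return true;
}
```
On a failed check the attribute should be treated as malformed rather
than read past the end of the buffer.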
Reported-by: Iggy Frankovic <iggyfran@amazon.com>
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
By the time this function is called, we have already
ensured several times that the pointers are good.
I like consistency, but this is a bit much.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When GR is running and attempting to clean up a node:
if the node that is currently saved, and that we are coming
back to, happens to be deleted during the time zebra
suspends the GR code due to hitting the node limit,
then the zebra GR code will just completely stop processing
and potentially leave stale nodes around forever.
Let's just remove this hole and process what we can.
Can you imagine trying to debug this after the fact?
If we remove a node, then that counts toward the maximum
to process of ZEBRA_MAX_STALE_ROUTE_COUNT. This should
prevent any non-processing, at the slightly larger cost
of having to look at a few nodes repeatedly.
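A sketch of the resulting walk, with placeholder helper names and a
placeholder value for ZEBRA_MAX_STALE_ROUTE_COUNT; removals consume the
per-run budget and the walk restarts from the top on the next run:
```
/* Placeholder budget; zebra's real value lives in the GR code. */
#define ZEBRA_MAX_STALE_ROUTE_COUNT 1000

struct route_node;

/* Hypothetical table walk helpers. */
extern struct route_node *table_top(void);
extern struct route_node *node_next(struct route_node *rn);
/* Returns 1 if the node held a stale route that was just removed. */
extern int node_sweep_stale(struct route_node *rn);

void gr_stale_sweep_run(void)
{
	unsigned int removed = 0;
	struct route_node *rn;

	/* No saved resume node: start from the top every run, and count
	 * each removal toward the budget so the run always terminates
	 * even when nodes disappear while we are suspended. */
	for (rn = table_top(); rn; rn = node_next(rn)) {
		removed += node_sweep_stale(rn);
		if (removed >= ZEBRA_MAX_STALE_ROUTE_COUNT)
			break;	/* reschedule another run here */
	}
}
```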
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The info->do_delete variable was being set to true only when
u.val was 1. The problem with this is that u.val is a union,
and the various ways that we can call this event cause
different values to be written to the union value on the thread.
This makes no sense. Just set the variable to what we want it to
be when we need it to be true, since it was only ever set during
a thread_execute section.
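An illustrative before/after, with stand-in names (not the zebra symbols):
```
#include <stdbool.h>

struct gr_info {
	bool do_delete;
};

/* Before: derived from the event's union value, which other code paths
 * scheduling the same handler overwrite with unrelated data. */
static void gr_handler_old(struct gr_info *info, unsigned int u_val)
{
	info->do_delete = (u_val == 1);
}

/* After: the one thread_execute call site that wants deletion sets the
 * flag explicitly before invoking the handler. */
static void gr_request_delete(struct gr_info *info)
{
	info->do_delete = true;
}
```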
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
There is no safi with the value 0, yet the safi is set to 0 as part of
the initialization. Let's just set it to a sensible value.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Adding a new drop counters section to "show isis summary".
New output:
Drop counters per PDU type:
P2P IIH: <count>
L2 LSP: <count>
L2 CSNP: <count>
L2 PSNP: <count>
...
Before:
r1# show isis summary
vrf : default
Process Id : 972
System Id : 0000.0000.0001
Up time : 00:00:48 ago
Number of areas : 1
Area TE:
Net: 49.0000.0000.0000.0001.00
TX counters per PDU type:
P2P IIH: 36
L2 LSP: 8
L2 CSNP: 12
L2 PSNP: 11
RX counters per PDU type:
P2P IIH: 37
L2 LSP: 17
L2 CSNP: 12
L2 PSNP: 6
Advertise high metrics: Disabled
...
After:
r1# show isis summary
vrf : default
Process Id : 972
System Id : 0000.0000.0001
Up time : 00:00:19 ago
Number of areas : 1
Area TE:
Net: 49.0000.0000.0000.0001.00
TX counters per PDU type:
P2P IIH: 16
L2 LSP: 2
L2 CSNP: 4
L2 PSNP: 6
LSP RXMT: 0
RX counters per PDU type:
P2P IIH: 16
L2 LSP: 5
L2 CSNP: 4
L2 PSNP: 2
Drop counters per PDU type:
P2P IIH: 2
Advertise high metrics: Disabled
...
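For illustration, a self-contained sketch of the bookkeeping behind the
new section; isisd's real counter fields and PDU type list differ:
```
#include <stdint.h>
#include <stdio.h>

enum pdu_kind { PDU_P2P_IIH, PDU_L2_LSP, PDU_L2_CSNP, PDU_L2_PSNP, PDU_MAX };

static const char *pdu_name[PDU_MAX] = {
	"P2P IIH", "L2 LSP", "L2 CSNP", "L2 PSNP",
};

static uint64_t rx_drops[PDU_MAX];

/* Called wherever a received PDU is discarded (bad auth, bad checksum,
 * wrong level, ...). */
void pdu_drop(enum pdu_kind kind)
{
	rx_drops[kind]++;
}

/* "show isis summary" addition: print one line per PDU type that was
 * actually dropped. */
void show_drop_counters(void)
{
	printf("  Drop counters per PDU type:\n");
	for (int i = 0; i < PDU_MAX; i++)
		if (rx_drops[i])
			printf("    %s: %llu\n", pdu_name[i],
			       (unsigned long long)rx_drops[i]);
}
```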
Signed-off-by: Isabella de Leon <ideleon@microsoft.com>
If we set `bgp route-map delay-timer X`, we should not start announcing
routes immediately, but wait for the delay timer to expire (or not
announce at all if it is set to zero).
f1aa49293a broke this because we always sent a route refresh and started
announcing on receiving a BoRR, before sending back an EoRR.
Let's fix this.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
It was being used for -b only; we should be able to use it for -f as
well.
This also merges the codepaths for -b and -f since they have no real
functional difference.
Signed-off-by: Quentin Young <qlyoung@nvidia.com>
To handle multi-instance daemons (e.g. ospfd), each forked
vtysh handles all of the instances of a daemon type.
Signed-off-by: Mark Stapp <mstapp@nvidia.com>