Commit Graph

Donald Sharp
c8453cd77e zebra: Fix v6 route replace failure turned into success
Currently, when we have a route replace operation for v6 routes
with a new nexthop group, the order of kernel installation is this:

a) New nexthop group insertion seq 1
b) Route delete operation seq 3
c) Route insertion operation seq 2

Currently the code in nl_batch_read_resp attempts
to handle this situation by skipping the delete operation,
*BUT* it enqueues the context into the zebra dplane
queue before we read the response.  Since we create the ctx
with an implied success, success is reported to the
upper-level dplane and the zebra RIB thinks the route has
been properly handled.

This shows up in the zebra_seg6_route test code: the test
installs a seg6 route with sharpd, and the install fails
because the route's nexthop is rejected:

First installation:

2021/10/29 09:28:10.218 ZEBRA: [JGWSB-SMNVE] dplane: incoming new work counter: 2
2021/10/29 09:28:10.218 ZEBRA: [Q52A7-211QJ] dplane enqueues 2 new work to provider 'Kernel'
2021/10/29 09:28:10.218 ZEBRA: [JVY1P-93VFY] dplane provider 'Kernel': processing
2021/10/29 09:28:10.218 ZEBRA: [TX9N0-9JKDF] ID (9) Dplane nexthop update ctx 0x56125390a820 op NH_INSTALL
2021/10/29 09:28:10.218 ZEBRA: [PM9ZJ-07RCP] 0:1::1/128 Dplane route update ctx 0x56125390add0 op ROUTE_INSTALL
2021/10/29 09:28:10.218 ZEBRA: [TJ327-ET8HE] netlink_send_msg: >> netlink message dump [sent]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=104 type=(104) NEWNEXTHOP flags=(0x0501) {REQUEST,DUMP,(ROOT|REPLACE|CAPPED),(ATOMIC|CREATE)} seq=9 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [WCX94-SW894]   nhm [family=(10) AF_INET6 scope=(0) UNIVERSE protocol=(11) ZEBRA flags=0x00000000 {}]
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(1) ID]
2021/10/29 09:28:10.218 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(6) GATEWAY]
2021/10/29 09:28:10.218 ZEBRA: [STTSM-27M81]       2001::1
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(5) OIF]
2021/10/29 09:28:10.218 ZEBRA: [JR4EA-BKPTA]       6
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=6 (payload=2) type=(7) ENCAP_TYPE]
2021/10/29 09:28:10.218 ZEBRA: [JR4EA-BKPTA]       5
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=36 (payload=32) type=(32776) UNKNOWN]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=64 type=(24) NEWROUTE flags=(0x0401) {REQUEST,(ATOMIC|CREATE)} seq=10 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [GCEGC-W8YBF]   rtmsg [family=(10) AF_INET6 dstlen=128 srclen=0 tos=0 table=254 protocol=(194) UNKNOWN scope=(0) UNIVERSE type=(1) UNICAST flags=0x0000 {}]
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(1) DST]
2021/10/29 09:28:10.218 ZEBRA: [STTSM-27M81]       1::1
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(6) PRIORITY]
2021/10/29 09:28:10.218 ZEBRA: [Z4E9C-GD9EP]       20
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(30) NH_ID]
2021/10/29 09:28:10.218 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:28:10.218 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=76 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=9 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:28:10.218 ZEBRA: [HSYZM-HV7HF] Extended Error: Gateway can not be a local address
2021/10/29 09:28:10.218 ZEBRA: [WVJCK-PPMGD][EC 4043309093] netlink-dp (NS 0) error: Invalid argument, type=RTM_NEWNEXTHOP(104), seq=9, pid=3539131282
2021/10/29 09:28:10.218 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=68 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=10 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:28:10.218 ZEBRA: [HSYZM-HV7HF] Extended Error: Nexthop id does not exist
2021/10/29 09:28:10.218 ZEBRA: [WVJCK-PPMGD][EC 4043309093] netlink-dp (NS 0) error: Invalid argument, type=RTM_NEWROUTE(24), seq=10, pid=3539131282
2021/10/29 09:28:10.218 ZEBRA: [VCDW6-A7ZF1] dplane dequeues 2 completed work from provider Kernel
2021/10/29 09:28:10.218 ZEBRA: [JTWAB-1MH4Y] dplane has 2 completed, 0 errors, for zebra main
2021/10/29 09:28:10.218 ZEBRA: [J7K9Z-9M7DT] Nexthop dplane ctx 0x56125390a820, op NH_INSTALL, nexthop ID (9), result FAILURE
2021/10/29 09:28:10.218 ZEBRA: [P2XBZ-RAFQ5][EC 4043309074] Failed to install Nexthop ID (9) into the kernel
2021/10/29 09:28:10.218 ZEBRA: [RMK34-61HV5] default(0:254):1::1/128 Processing dplane result ctx 0x56125390add0, op ROUTE_INSTALL result FAILURE

Note the last line, `op ROUTE_INSTALL result FAILURE`: since we are attempting to use a
gw nexthop that is local, failure is the correct result.

Then the test code installed the route again:

2021/10/29 09:30:00.493 ZEBRA: [JGWSB-SMNVE] dplane: incoming new work counter: 2
2021/10/29 09:30:00.493 ZEBRA: [Q52A7-211QJ] dplane enqueues 2 new work to provider 'Kernel'
2021/10/29 09:30:00.493 ZEBRA: [JVY1P-93VFY] dplane provider 'Kernel': processing
2021/10/29 09:30:00.493 ZEBRA: [TX9N0-9JKDF] ID (9) Dplane nexthop update ctx 0x561253916a00 op NH_INSTALL
2021/10/29 09:30:00.493 ZEBRA: [PM9ZJ-07RCP] 0:1::1/128 Dplane route update ctx 0x561253915f40 op ROUTE_UPDATE
2021/10/29 09:30:00.493 ZEBRA: [TJ327-ET8HE] netlink_send_msg: >> netlink message dump [sent]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=104 type=(104) NEWNEXTHOP flags=(0x0501) {REQUEST,DUMP,(ROOT|REPLACE|CAPPED),(ATOMIC|CREATE)} seq=11 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [WCX94-SW894]   nhm [family=(10) AF_INET6 scope=(0) UNIVERSE protocol=(11) ZEBRA flags=0x00000000 {}]
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(1) ID]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(6) GATEWAY]
2021/10/29 09:30:00.493 ZEBRA: [STTSM-27M81]       2001::1
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(5) OIF]
2021/10/29 09:30:00.493 ZEBRA: [JR4EA-BKPTA]       6
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=6 (payload=2) type=(7) ENCAP_TYPE]
2021/10/29 09:30:00.493 ZEBRA: [JR4EA-BKPTA]       5
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=36 (payload=32) type=(32776) UNKNOWN]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=56 type=(25) DELROUTE flags=(0x0401) {REQUEST,(ATOMIC|CREATE)} seq=13 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [GCEGC-W8YBF]   rtmsg [family=(10) AF_INET6 dstlen=128 srclen=0 tos=0 table=254 protocol=(194) UNKNOWN scope=(0) UNIVERSE type=(0) UNSPEC flags=0x0000 {}]
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(1) DST]
2021/10/29 09:30:00.493 ZEBRA: [STTSM-27M81]       1::1
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(6) PRIORITY]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       20
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=64 type=(24) NEWROUTE flags=(0x0401) {REQUEST,(ATOMIC|CREATE)} seq=12 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [GCEGC-W8YBF]   rtmsg [family=(10) AF_INET6 dstlen=128 srclen=0 tos=0 table=254 protocol=(194) UNKNOWN scope=(0) UNIVERSE type=(1) UNICAST flags=0x0000 {}]
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(1) DST]
2021/10/29 09:30:00.493 ZEBRA: [STTSM-27M81]       1::1
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(6) PRIORITY]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       20
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(30) NH_ID]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:30:00.493 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=76 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=11 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:30:00.493 ZEBRA: [HSYZM-HV7HF] Extended Error: Gateway can not be a local address
2021/10/29 09:30:00.493 ZEBRA: [WVJCK-PPMGD][EC 4043309093] netlink-dp (NS 0) error: Invalid argument, type=RTM_NEWNEXTHOP(104), seq=11, pid=3539131282
2021/10/29 09:30:00.493 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=36 type=(2) ERROR flags=(0x0100) {DUMP,(ROOT|REPLACE|CAPPED)} seq=13 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-3) No such process]
2021/10/29 09:30:00.493 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=68 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=12 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:30:00.493 ZEBRA: [VCDW6-A7ZF1] dplane dequeues 2 completed work from provider Kernel
2021/10/29 09:30:00.493 ZEBRA: [JTWAB-1MH4Y] dplane has 2 completed, 0 errors, for zebra main
2021/10/29 09:30:00.493 ZEBRA: [J7K9Z-9M7DT] Nexthop dplane ctx 0x561253916a00, op NH_INSTALL, nexthop ID (9), result FAILURE
2021/10/29 09:30:00.493 ZEBRA: [P2XBZ-RAFQ5][EC 4043309074] Failed to install Nexthop ID (9) into the kernel
2021/10/29 09:30:00.493 ZEBRA: [RMK34-61HV5] default(0:254):1::1/128 Processing dplane result ctx 0x561253915f40, op ROUTE_UPDATE result SUCCESS

Note that this time we do these three operations:

a) nexthop installation seq 11
b) route delete seq 13
c) route add seq 12

Note the last line: we report the install as a success, but it clearly failed per the seq=12 decode.
When we look at the v6 RIB, it thinks the route is installed:

unet> r1 show ipv6 route
Codes: K - kernel route, C - connected, S - static, R - RIPng,
       O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
       v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

D>* 1::1/128 [150/0] via 2001::1, dum0, seg6local unspec unknown(seg6local_context2str), seg6 a::, weight 1, 00:00:17

So let's modify nl_batch_read_resp to not dequeue/enqueue the context until we are sure we have
the right one.  With this change the test code does the right thing on the second installation.
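The intent can be sketched with a small standalone model (simplified and
hypothetical; the struct and function names below are made up and this is not
the actual nl_batch_read_resp/dplane code): a kernel reply is matched against
the head-of-queue context by sequence number, and the context is only dequeued
and its result recorded once the matching reply has actually been read.

```
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for a dplane context waiting in the batch. */
struct ctx {
	unsigned int seq; /* seq of the message whose reply decides the result */
	bool success;     /* starts out as implied success */
};

static struct ctx pending[8];
static unsigned int head, tail;

static struct ctx *ctx_peek(void)
{
	return head == tail ? NULL : &pending[head];
}

/* Called for every nlmsgerr read back from the kernel. */
static void handle_reply(unsigned int reply_seq, int err)
{
	struct ctx *c = ctx_peek();

	if (!c || c->seq != reply_seq)
		return;  /* not the context we are waiting on: keep it queued */

	c->success = (err == 0);
	head++;          /* dequeue only once we know this reply is ours */
}

int main(void)
{
	/* Route replace: the NEWROUTE (seq 12) decides the result of the ctx. */
	pending[tail++] = (struct ctx){ .seq = 12, .success = true };

	handle_reply(13, -3);  /* DELROUTE reply (ESRCH): ignored, ctx stays queued */
	handle_reply(12, -22); /* NEWROUTE reply (EINVAL): failure is recorded */

	printf("route update result: %s\n",
	       pending[0].success ? "SUCCESS" : "FAILURE");
	return 0;
}
```

With that ordering the stale delete reply can no longer consume the context
and leak its implied success up to the RIB.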

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 15:33:58 -05:00
Donald Sharp
e3ee55d4bd zebra: set zd_is_update in 1 spot
The ctx->zd_is_update is being set in various
spots based upon the same value that we are
passing into dplane_ctx_ns_init.  Let's just
consolidate all of this into dplane_ctx_ns_init
so that the zd_is_update value is set at the
same time that we increment the sequence numbers
to use.

As a note for future me's reading this: the sequence
number passed to the kernel is chosen by giving each
context a copy of the appropriate nlsock to use.
Since it's a copy taken at a point in time, we know
we have a unique sequence number value.
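A standalone sketch of the idea (hypothetical names and a simplified
signature, not the real dplane_ctx_ns_init): the single init helper snapshots
the per-namespace nlsock and records the update flag, so both are decided in
exactly one place.

```
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the real zebra structures. */
struct nlsock {
	int fd;
	unsigned int seq;
};

struct dplane_ctx {
	struct nlsock snapshot; /* point-in-time copy => unique seq per ctx */
	bool is_update;
};

static void ctx_ns_init(struct dplane_ctx *ctx, struct nlsock *ns_sock,
			bool is_update)
{
	/* Assumption for illustration: an update is sent as delete+add,
	 * so reserve room for two messages in the sequence space. */
	ns_sock->seq += is_update ? 2 : 1;
	ctx->snapshot = *ns_sock;   /* a copy, not a reference */
	ctx->is_update = is_update; /* the one place this flag is set */
}

int main(void)
{
	struct nlsock ns = { .fd = 3, .seq = 10 };
	struct dplane_ctx install = { 0 }, replace = { 0 };

	ctx_ns_init(&install, &ns, false);
	ctx_ns_init(&replace, &ns, true);

	printf("install: seq=%u is_update=%d\n", install.snapshot.seq,
	       install.is_update);
	printf("replace: seq=%u is_update=%d\n", replace.snapshot.seq,
	       replace.is_update);
	return 0;
}
```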

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 15:33:58 -05:00
Donald Sharp
00249e255e zebra: When we get an implicit ack or full failure mark status
When nl_batch_read_resp gets a full-on failure (-1) or an implicit
ack (0) from the kernel for a batch, immediately
mark everything in the batch pass/fail as needed, instead
of having them marked elsewhere.
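Roughly, the idea looks like this (a simplified standalone model, not the
real batch structures):

```
#include <stddef.h>
#include <stdio.h>

enum dplane_result { RESULT_NONE, RESULT_SUCCESS, RESULT_FAILURE };

struct batch {
	enum dplane_result results[16];
	size_t count;
};

/*
 * read_status < 0: the batch send/read failed outright -> everything fails.
 * read_status == 0: nothing more is coming back (implicit ack) -> anything
 * still unresolved succeeded.
 */
static void nl_batch_mark_remaining(struct batch *b, int read_status)
{
	enum dplane_result res =
		read_status < 0 ? RESULT_FAILURE : RESULT_SUCCESS;

	for (size_t i = 0; i < b->count; i++)
		if (b->results[i] == RESULT_NONE)
			b->results[i] = res;
}

int main(void)
{
	struct batch b = { .count = 3 };

	b.results[0] = RESULT_FAILURE;  /* already resolved by an explicit error */
	nl_batch_mark_remaining(&b, 0); /* implicit ack: the rest pass */

	for (size_t i = 0; i < b.count; i++)
		printf("msg %zu: %s\n", i,
		       b.results[i] == RESULT_SUCCESS ? "pass" : "fail");
	return 0;
}
```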

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 15:33:58 -05:00
Mark Stapp
3d1ff4bfdb
Merge pull request #10409 from idryzhov/zebra-mq-clean-crash
zebra: fix cleanup of meta queues on vrf disable
2022-02-02 08:35:00 -05:00
Igor Ryzhov
0ef6eacc95 zebra: fix cleanup of meta queues on vrf disable
Current code treats all metaqueues as lists of route_node structures.
However, some queues contain other structures that need to be cleaned up
differently. Casting the elements of those queues to struct route_node
and dereferencing them leads to a crash. The crash may be seen when
executing bgp_multi_vrf_topo2.

Fix the code by using the proper list element types.
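The shape of the fix can be illustrated with a small standalone example
(simplified and hypothetical, not zebra's actual meta-queue code): each
sub-queue carries a cleanup callback that matches the real element type of
that queue, so nothing is cast to struct route_node that is not one.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Two different element types that live on different sub-queues. */
struct rnode_item { int lock_count; };
struct evpn_item  { char name[16]; };

struct subqueue {
	void *items[4];
	size_t count;
	void (*free_item)(void *item); /* type-correct cleanup for this queue */
};

static void free_rnode_item(void *item)
{
	struct rnode_item *rn = item;
	rn->lock_count--; /* e.g. drop a reference rather than free the node */
}

static void free_evpn_item(void *item)
{
	free(item); /* plain heap object: just free it */
}

static void subqueue_clean(struct subqueue *q)
{
	for (size_t i = 0; i < q->count; i++)
		q->free_item(q->items[i]); /* no cross-type cast, no crash */
	q->count = 0;
}

int main(void)
{
	struct rnode_item rn = { .lock_count = 1 };
	struct evpn_item *ev = calloc(1, sizeof(*ev));

	strcpy(ev->name, "vni-100");

	struct subqueue rib_q  = { .items = { &rn }, .count = 1,
				   .free_item = free_rnode_item };
	struct subqueue evpn_q = { .items = { ev }, .count = 1,
				   .free_item = free_evpn_item };

	subqueue_clean(&rib_q);
	subqueue_clean(&evpn_q);
	printf("rnode lock_count after cleanup: %d\n", rn.lock_count);
	return 0;
}
```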

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-02-01 18:20:30 +03:00
Igor Ryzhov
461a8d7aba
Merge pull request #10443 from mjstapp/zebra_re_opaque
zebra: name the route_entry opaque struct more specifically
2022-02-01 12:19:11 +03:00
Mark Stapp
b86c1f4fcc zebra: name the route_entry opaque struct more specifically
The name 'opaque' is a little general - call the route_entry
struct 're_opaque' to make it more specific.

Signed-off-by: Mark Stapp <mstapp@nvidia.com>
2022-01-31 08:50:50 -05:00
Xiao Liang
8244ba34aa zebra: Fix EVPN route nexthop config order
EVPN route add should be queued to preserve the config order,
in particular relative to deletion in rib_delete().

Signed-off-by: Xiao Liang <shaw.leon@gmail.com>
2022-01-28 20:51:10 +08:00
Donald Sharp
0955f8757b zebra: Don't double delete the table we are cleaning up
vrf_disable is always called before vrf_delete.
The rnh_table and rnh_table_multicast tables
are already deleted as part of vrf_disable, so there is
no need to do it again.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-26 08:21:03 -05:00
Russ White
e48b2fea63
Merge pull request #10411 from idryzhov/if-config-vrf-name
*: do not print vrf name for interface config when using vrf-lite
2022-01-25 11:34:59 -05:00
Russ White
bbf1101240
Merge pull request #10412 from idryzhov/zebra-vrf-delete
zebra: fix vrf deletion
2022-01-24 07:33:53 -05:00
Igor Ryzhov
788a036fdb *: do not print vrf name for interface config when using vrf-lite
VRF name should not be printed in the config since 574445ec. The update
was done for NB config output but I missed it for regular vty output.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-01-24 14:44:05 +03:00
Donatas Abraitis
6b968475ed
Merge pull request #10406 from idryzhov/zebra-opaque-memleak
zebra: fix opaque data memleak
2022-01-24 09:38:54 +02:00
Russ White
2d9e10d095
Merge pull request #10318 from donaldsharp/redistribution
OSPF Redistribution
2022-01-23 22:30:24 -05:00
Igor Ryzhov
e4c5b3ba06 zebra: fix vrf deletion
VRF deletion code must be called after the corresponding interface
deletion code.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-01-24 01:51:10 +03:00
Igor Ryzhov
dc00940b66 zebra: fix opaque data memleak
Opaque data should be freed together with route entry in case of errors.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-01-23 15:39:04 +03:00
Donald Sharp
e8b3a2f74b lib, zebra: Add ability to tell thread system to ignore late timers
Add a thread_ignore_late_timer(struct thread *thread) function
that allows thread.c to ignore when timers are late to the party.
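A hypothetical usage sketch (assumes the thread.h API of this era and only
builds inside the FRR tree; the callback and variable names are made up):

```
#include "thread.h"

static struct thread *t_housekeeping;

static int housekeeping(struct thread *t)
{
	/* ... periodic, non-time-critical work ... */
	return 0;
}

static void schedule_housekeeping(struct thread_master *master)
{
	thread_add_timer(master, housekeeping, NULL, 60, &t_housekeeping);

	/* This work does not care about being late; suppress the warning. */
	thread_ignore_late_timer(t_housekeeping);
}
```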

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-20 11:58:48 -05:00
Russ White
4ec148523c
Merge pull request #10355 from opensourcerouting/noisy-startup
lib, zebra: make startup less noisy
2022-01-18 11:30:13 -05:00
Russ White
05786ac774
Merge pull request #9644 from opensourcerouting/ospf-opaque-attrs
OSPF opaque route attributes
2022-01-18 09:08:38 -05:00
Donald Sharp
db80a7e2f5 zebra: Add table and instance data to debugs for redistribute_delete
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:41 -05:00
Donald Sharp
659ec5e9c2 zebra: Cleanup temp variable usage in debugs for %pFX
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:41 -05:00
Donald Sharp
5ef3568f22 zebra: Use %pRN instead of %pFX whenever possible
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:41 -05:00
Donald Sharp
a7704e1b98 zebra: Modify route_notify_internal to use a route_node
Pass the route_node under consideration
into route_notify_internal to allow calling functions
to reduce stack usage as well as data lookups.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:41 -05:00
Donald Sharp
69a2d597d2 zebra: Cleanup %pFX to %pRN in rib_process_result
The dest_pfx was pretty much only ever used for
debug output and FRR already knows the rn.  So
use that instead.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:41 -05:00
Donald Sharp
085e8f8f6b zebra: Reduce unneeded lookup in rib_process
The lookup of src_p and dest_p is not needed
since the values are never used.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
898b4a6d3b zebra: Reduce lookups in rib_process_dplane_notify
The dest_p and src_p values were only ever used for
debugs via %pFX, when we already have the rn.
There is no need to do this lookup.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
a1742ba8af zebra: Fix redistribute.h up to our standards
FRR must give parameter names instead of leaving them
out in the .h file.  This just cleans up that
problem for redistribute.h.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
84faa42066 zebra: Modify zsend_redistribute_route to receive struct route_node
The function zsend_redistribute_route uses the prefix and
source prefix.  Just pass in the route_node instead.
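For illustration, a callee that is handed only the rn can recover both
prefixes with the lib srcdest helpers, roughly like the sketch below (an
in-tree sketch under the assumption that srcdest_rnode_prefixes() from
srcdest_table.h is available; this is not the actual zsend_redistribute_route
body):

```
#include "log.h"
#include "prefix.h"
#include "table.h"
#include "srcdest_table.h"

/* Illustrative only: recover both prefixes from the route_node. */
static void debug_redistributed_route(const struct route_node *rn)
{
	const struct prefix *p, *src_p;

	srcdest_rnode_prefixes(rn, &p, &src_p);

	if (src_p)
		zlog_debug("redistribute %pFX from %pFX", p, src_p);
	else
		zlog_debug("redistribute %pFX", p);
}
```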

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
c8671e9f9b zebra: Pass rn to zebra_redistribute_check instead
FRR is passing around a struct prefix and an afi
when all that is needed is the route node.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
ee78ed680b zebra: Convert redistribute_update to use a route_node
FRR is passing around a bunch of data that is encapsulated
within the route node.  Let's just pass that around instead.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
b3a9fca150 zebra: Convert redistribute_delete to use a route_node
FRR is passing around a bunch of data that is encapsulated
within the route node.  Let's just pass that around instead.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
deb39cca21 zebra: Do not allow instance redistribution to happen no matter what
If you have this setup:

router ospf 3
  redistribute sharp
!

and then install:
sharp install route 4.5.6.7 nexthop 192.168.100.1 1
sharp install route 4.5.6.8 nexthop 192.168.100.1 1 instance 3
sharp install route 4.5.6.9 nexthop 192.168.100.1 1 instance 4

The .8 and .9 routes are auto redistributed into ospf instance 3:

eva# show ip ospf data

OSPF Instance: 3

       OSPF Router with ID (192.168.122.1)

                AS External Link States

Link ID         ADV Router      Age  Seq#       CkSum  Route
4.5.6.7        192.168.122.1     13 0x80000001 0x477c E2 4.5.6.7/32 [0x0]
4.5.6.8        192.168.122.1      5 0x80000001 0x3d85 E2 4.5.6.8/32 [0x0]
4.5.6.9        192.168.122.1      5 0x80000001 0x338e E2 4.5.6.9/32 [0x0]

This cannot be correct behavior.  When redistributing in the absence
of an instance number, the default instance of 0 should be used and only
that route should be redistributed.  Here is the correct behavior:

eva# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/100] via 192.168.119.1, enp39s0, 00:00:28
D>* 4.5.6.7/32 [150/0] via 192.168.100.1, virbr1, weight 1, 00:00:02
D[3]>* 4.5.6.8/32 [150/0] via 192.168.100.1, virbr1, weight 1, 00:00:02
D[4]>* 4.5.6.9/32 [150/0] via 192.168.100.1, virbr1, weight 1, 00:00:02
C>* 192.168.100.0/24 is directly connected, virbr1, 00:00:28
C>* 192.168.110.0/24 is directly connected, virbr2, 00:00:28
C>* 192.168.119.0/24 is directly connected, enp39s0, 00:00:28
C>* 192.168.122.0/24 is directly connected, virbr0, 00:00:28
eva# show ip ospf data

OSPF Instance: 3

       OSPF Router with ID (192.168.122.1)

                AS External Link States

Link ID         ADV Router      Age  Seq#       CkSum  Route
4.5.6.7        192.168.122.1      6 0x80000001 0x477c E2 4.5.6.7/32 [0x0]
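
The intended check boils down to an exact match between the route's instance
and the instance the redistribution was configured with (0 when none was
given). A standalone model of that rule (hypothetical names, not zebra's
redistribute code):

```
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct redist_request { unsigned short instance; }; /* 0 = "no instance" */
struct route { unsigned short instance; const char *prefix; };

static bool redistribute_check(const struct redist_request *req,
			       const struct route *re)
{
	return req->instance == re->instance;
}

int main(void)
{
	struct redist_request ospf3 = { .instance = 0 }; /* "redistribute sharp" */
	struct route routes[] = {
		{ 0, "4.5.6.7/32" }, { 3, "4.5.6.8/32" }, { 4, "4.5.6.9/32" },
	};

	for (size_t i = 0; i < 3; i++)
		printf("%s -> %s\n", routes[i].prefix,
		       redistribute_check(&ospf3, &routes[i]) ? "redistributed"
							      : "skipped");
	return 0;
}
```

Only the instance-0 route (4.5.6.7/32) passes for a plain `redistribute sharp`
with no instance.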

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:39:40 -05:00
Donald Sharp
1945fb7a07 zebra: Add hint to what instance we are looking at
FRR allows redistribution to a client with a specific
instance in mind.  The code was not allowing you to figure
out what instance was being looked at.  So let's clarify this
in the debugs.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-18 08:36:26 -05:00
Rafael Zalamena
4e4c027803
Merge pull request #10183 from idryzhov/rework-vrf-rename
*: rework renaming the default VRF
2022-01-17 08:45:12 -03:00
David Lamparter
17a4c65576 zebra: remove netlink buffer size log message
... really not much point in printing this.

Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
2022-01-17 09:46:19 +01:00
Renato Westphal
5b5d66c431 lib, ospfd, ospf6d, zebra: add OSPF opaque route attributes
Update ospfd and ospf6d to send opaque route attributes to
zebra. Those attributes are stored in the RIB and can be viewed
using the "show ip[v6] route" commands (other than that, they are
completely ignored by zebra).

Example:
```
debian# show ip route 192.168.1.0/24
Routing entry for 192.168.1.0/24
  Known via "ospf", distance 110, metric 20, best
  Last update 01:57:08 ago
  * 10.0.1.2, via eth-rt2, weight 1
    OSPF path type        : External-2
    OSPF tag              : 0

debian#
debian# show ip route 192.168.1.0/24 json
{
  "192.168.1.0\/24":[
    {
      "prefix":"192.168.1.0\/24",
      "prefixLen":24,
      "protocol":"ospf",
      "vrfId":0,
      "vrfName":"default",
      "selected":true,
      [snip]
      "ospfPathType":"External-2",
      "ospfTag":"0"
    }
  ]
}
```

Signed-off-by: Renato Westphal <renato@opensourcerouting.org>
2022-01-15 17:22:27 +01:00
Sarita Patra
f99f1ff50a zebra: Fix for route node having no tracking NHT
Topology:
IXIA-----(ens192)FRR(ens224)------iXIA

Configuration:
1. Create 8 sub-interfaces on ens192 under the default VRF and configure 8
EBGP sessions between FRR and IXIA.
2. Create 1000 sub-interfaces on ens224 under the default VRF and configure
1000 EBGP sessions between FRR and IXIA.
3. 2M prefixes are distributed from the left-side IXIA, each with 8 ECMP paths.
4. So in total there are 2M prefixes * 8 ECMP = 16M prefix entries
in the RIB and FIB.

Issue:
Shutting ens192 and ens224 takes 1hr 15 mins to clean up the routes.

Root Cause:
In the case of route deletion, if the particular route node has
an NHT count of 0, we still go to the parent and do an NHT evaluation,
which is not needed.

Fix:
If the deleted route node has an NHT count > 0, then do an NHT
evaluation on the parent node.
With the fix, shutting ens192 and ens224 takes 1 min to clean up
the routes.
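In pseudocode terms the fix is just a guard in front of the parent walk (a
simplified standalone sketch, not zebra's rnh code):

```
#include <stdio.h>

struct node {
	struct node *parent;
	unsigned int nht_count; /* nexthop trackers registered on this node */
};

static void evaluate_nht(struct node *n)
{
	printf("re-evaluating NHT on %p\n", (void *)n);
}

/* Run on route deletion. */
static void maybe_evaluate_parent_nht(struct node *rn)
{
	/* Before the fix this ran unconditionally for every deleted node. */
	if (rn->nht_count == 0)
		return;

	if (rn->parent)
		evaluate_nht(rn->parent);
}

int main(void)
{
	struct node parent = { 0 };
	struct node rn_untracked = { .parent = &parent, .nht_count = 0 };
	struct node rn_tracked = { .parent = &parent, .nht_count = 2 };

	maybe_evaluate_parent_nht(&rn_untracked); /* skipped: nothing tracked it */
	maybe_evaluate_parent_nht(&rn_tracked);   /* evaluated */
	return 0;
}
```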

Signed-off-by: Sarita Patra <saritap@vmware.com>
2022-01-13 14:15:05 -05:00
Donatas Abraitis
df8d723c5f *: Add FOREACH_AFI_SAFI_NSF(afi, safi) macro to reduce nesting
Used for graceful-restart mostly.

Especially for bgp_show_neighbor_graceful_restart_capability_per_afi_safi()
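For readers unfamiliar with the pattern, the nesting reduction such a macro
buys looks roughly like this (a generic illustration with a made-up macro
name and enum values; FRR's actual definition may differ):

```
#include <stdio.h>

/* Made-up enum values purely for illustration. */
typedef enum { AFI_IP, AFI_IP6, AFI_MAX } afi_t;
typedef enum { SAFI_UNICAST, SAFI_MULTICAST, SAFI_MAX } safi_t;

/* One macro replaces two nested for-loops at every call site. */
#define FOREACH_AFI_SAFI_EXAMPLE(afi, safi)                                    \
	for ((afi) = 0; (afi) < AFI_MAX; (afi)++)                              \
		for ((safi) = 0; (safi) < SAFI_MAX; (safi)++)

int main(void)
{
	afi_t afi;
	safi_t safi;

	FOREACH_AFI_SAFI_EXAMPLE (afi, safi)
		printf("afi=%d safi=%d\n", afi, safi);

	return 0;
}
```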

Signed-off-by: Donatas Abraitis <donatas.abraitis@gmail.com>
2022-01-13 14:29:54 +02:00
anlan_cs
5498a9f13d bfdd: correct one spelling error of comment
Signed-off-by: anlan_cs <anlan_cs@tom.com>
2021-12-31 05:32:49 -05:00
Igor Ryzhov
63011ec1c5
Merge pull request #10256 from anlancs/cleanup-zebra_evpn_mac_add
zebra: cleanup checking zebra_evpn_mac_add function's return value
2021-12-23 13:10:18 +03:00
anlan_cs
07361b8fdf zebra: cleanup checking zebra_evpn_mac_add function's return value
This function is sure to return a correct value because of the "assert",
so checking its return value should be removed.

Signed-off-by: anlan_cs <anlan_cs@tom.com>
2021-12-22 21:13:26 -05:00
Igor Ryzhov
ac2cb9bf94 *: rework renaming the default VRF
Currently, it is possible to rename the default VRF either by passing the
`-o` option to zebra or by creating a file in `/var/run/netns` and
binding it to `/proc/self/ns/net`.

In both cases, only zebra knows about the rename and other daemons learn
about it only after they connect to zebra. This is a problem, because
daemons may read their config before they connect to zebra. To handle
this rename after the config is read, we have some special code in every
single daemon, which is not very bad but not desirable in my opinion.
But things get worse when we need to handle this in the northbound
layer, as we have to manually rewrite the config nodes. This approach is
already hacky, but still works as every daemon handles its own NB
structures. But it is completely incompatible with the central
management daemon architecture we are aiming for, as mgmtd doesn't even
have a connection with zebra to learn from it. And it shouldn't have it,
because operational state changes should never affect configuration.

To solve the problem and simplify the code, I propose to expand the `-o`
option to all daemons. By using the startup option, we let daemons know
about the rename before they read their configs so we don't need any
special code to deal with it. There's an easy way to pass the option to
all daemons by using `frr_global_options` variable.

Unfortunately, the second way of renaming by creating a file in
`/var/run/netns` is incompatible with the new mgmtd architecture.
Theoretically, we could force daemons to read their configs only after
they connect to zebra, but it means adding even more code to handle a
very specific use-case. And anyway this won't work for mgmtd as it
doesn't have a connection with zebra. So I had to remove this option.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2021-12-21 22:09:29 +03:00
Lou Berger
4b7d297d4b
Merge pull request #9750 from mjstapp/zebra_installed_nhg_id
zebra: add installed nexthop-group id value
2021-12-21 10:31:04 -05:00
Donatas Abraitis
20044d8090
Merge pull request #10245 from anlancs/fix-spell-error
zebra: correct one spell error
2021-12-20 15:14:53 +02:00
anlan_cs
b816de6213 zebra: correct one spell error
Signed-off-by: anlan_cs <anlan_cs@tom.com>
2021-12-19 20:47:01 -05:00
Donatas Abraitis
e69ae079b7
Merge pull request #9899 from Drumato/zebra-srv6-locator-detail-json-support
zebra: Add support for json output in srv6 locator detail command
2021-12-14 09:11:36 +02:00
Stephen Worley
f6f16b6073 zebra: add optional NHG ID output to show ip ro
Add optional NHG ID output to `show ip route` dumps. We already have
this in the json output as nexthopGroupID, but it is nice
to have the option in a normal dump as well. It is not included in the main
output for now to avoid breaking screen scrapers.

Signed-off-by: Stephen Worley <sworley@nvidia.com>
2021-12-02 11:28:37 -05:00
Donald Sharp
c3343a755f zebra: Prevent thread usage of data after it being freed
On startup we create a thread timer event to do a rib sweep
of the system.  On shutdown we never stopped this timer and
as such we have a situation where a thread event could be run
on shutdown after the data for it has been freed.  Here is the
crash I am seeing:

(gdb) bt
(gdb)

Save the thread data in zebra_router and stop the thread so we don't
accidentally do work on shutdown that we don't mean to.  In this case
it happened in our topotests with some severe system load.
Essentially we happened to kill the zebra daemon just as the
graceful_restart timer popped here.
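A hypothetical sketch of the pattern (assumes this era's thread.h API and
only builds in-tree; the struct, field, and callback names are illustrative,
not the real zebra symbols):

```
#include "thread.h"

struct router_example {
	struct thread_master *master;
	struct thread *t_rib_sweep; /* keep the handle so shutdown can stop it */
};

static int rib_sweep_timer(struct thread *t)
{
	/* ... sweep stale kernel routes ... */
	return 0;
}

static void example_startup(struct router_example *zr, long gr_seconds)
{
	thread_add_timer(zr->master, rib_sweep_timer, NULL, gr_seconds,
			 &zr->t_rib_sweep);
}

static void example_shutdown(struct router_example *zr)
{
	/* Cancel the timer before freeing anything its callback would touch. */
	THREAD_OFF(zr->t_rib_sweep);
}
```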

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2021-11-29 15:51:45 -05:00
Yamato Sugawara
559f4b2f2a zebra: Add support for json output in srv6 locator detail command
Signed-off-by: Yamato Sugawara <yamato.sugawara@linecorp.com>
2021-11-28 23:53:41 +00:00
Igor Ryzhov
cb3fa0a612
Merge pull request #10124 from ton31337/feature/vty_json
2021-11-29 02:11:29 +03:00