Commit Graph

4791 Commits

Donald Sharp
b9d95135a8 zebra: Fix spelling mistake
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-14 12:56:44 -05:00
Trey Aspelund
e54cd97838 zebra: cleanup multiline strings in debug_nl.c
NetDEF CI has been whining about multiline string style.
Make the strings single-line and call it a day.

Signed-off-by: Trey Aspelund <taspelund@nvidia.com>
2022-02-10 21:37:45 +00:00
Trey Aspelund
95fe32880f zebra: add netlink debugs for ip rules
Adds functions to parse + decode netlink rules.
Adds RTM_NEWRULE + RTM_DELRULE to "debug zebra kernel".

Signed-off-by: Trey Aspelund <taspelund@nvidia.com>
2022-02-10 21:36:34 +00:00
Rafael Zalamena
70d79c359b
Merge pull request #10537 from mjstapp/fix_dplane_strdup
zebra: use frr mem apis in dplane
2022-02-10 10:24:22 -03:00
Donald Sharp
2cf7651f0b zebra: Make netlink buffer reads resizeable when needed
Currently, when the kernel sends netlink messages to FRR,
the buffers used to receive this data are of fixed length.
The kernel, with certain configurations, will send
netlink messages that are larger than this fixed length.
This leads to situations where, on startup, zebra gets
really confused about the state of the kernel.  Effectively,
the current algorithm is this:

read up to buffer in size
while (data to parse)
     get netlink message header, look at size
        parse if you can

The problem is that there is a 32k buffer we read into.
We get the first message that is, say, 1k in size,
subtract that 1k, and have 31k left to parse.  We then
get the next header and notice that the length
of the message is 33k, which is obviously larger
than what we read in.  FRR has no recovery mechanism,
nor is there a way to know, a priori, the maximum
size the kernel will send us.

Modify FRR to look at the kernel message and see if the
buffer is large enough; if not, make it large enough to
read in the message.

This code has to be per netlink socket because of the usage
of pthreads.  So add the buffer and its current length to
`struct nlsock`, growing it as necessary.
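
A minimal sketch of the approach (not the actual FRR code), assuming a
per-socket buffer kept alongside the fd: peek at the pending message with
MSG_PEEK | MSG_TRUNC to learn its true size, grow the buffer if needed,
then do the real read.

    #include <stdlib.h>
    #include <sys/socket.h>

    /* Illustrative per-socket state; in FRR this lives in `struct nlsock`. */
    struct nl_buf {
        int fd;
        char *buf;
        size_t buflen;
    };

    static ssize_t nl_read_grow(struct nl_buf *nl)
    {
        /* Ask the kernel how big the pending message really is:
         * MSG_PEEK leaves it queued, MSG_TRUNC returns the full length
         * even if it does not fit in the buffer we passed.
         */
        ssize_t need = recv(nl->fd, nl->buf, nl->buflen,
                            MSG_PEEK | MSG_TRUNC);
        if (need < 0)
            return need;

        if ((size_t)need > nl->buflen) {
            char *tmp = realloc(nl->buf, need);
            if (!tmp)
                return -1;
            nl->buf = tmp;
            nl->buflen = need;
        }

        /* Now the real read is guaranteed to fit. */
        return recv(nl->fd, nl->buf, nl->buflen, 0);
    }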

Fixes: #10404
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-08 17:28:19 -05:00
Donald Sharp
d4000d7ba3 zebra: Remove struct nlsock from dataplane information and use int fd
Store the fd that corresponds to the appropriate `struct nlsock` and pass
that around in the dplane context instead of the pointer to the nlsock.
Modify the kernel_netlink.c code to store in a hash the `struct nlsock`
with the socket fd as the key.

Why do this?  The dataplane context is used to pass around the `struct nlsock`,
but the zebra code has a bug where the receive buffer for netlink
messages from the kernel is not big enough.  So we need to dynamically
grow the receive buffer per socket, instead of having a fixed buffer
that we read into.  By passing around the fd we can look up the `struct nlsock`
that will soon have the associated buffer, and not have to worry about the
`const` issues that would otherwise arise.
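
A rough sketch of the fd-to-nlsock mapping this sets up; the registry
below is a stand-in (FRR uses its hash library keyed on the socket fd),
and the struct fields shown are just the ones relevant to this series.

    #include <stddef.h>

    /* Trimmed-down stand-in for `struct nlsock`; illustrative fields only. */
    struct nlsock {
        int sock;        /* netlink socket fd, used as the key */
        char *buf;       /* per-socket receive buffer (added later) */
        size_t buflen;
    };

    #define MAX_NLSOCKS 32
    static struct nlsock *nlsock_by_fd[MAX_NLSOCKS];
    static int nlsock_count;

    /* Register a socket so the kernel dplane provider can find it by fd. */
    static void nlsock_register(struct nlsock *nl)
    {
        if (nlsock_count < MAX_NLSOCKS)
            nlsock_by_fd[nlsock_count++] = nl;
    }

    /* The dplane context now carries only the fd; resolve it back to the
     * socket state (and its growable buffer) when actually reading.
     */
    static struct nlsock *nlsock_lookup(int fd)
    {
        for (int i = 0; i < nlsock_count; i++)
            if (nlsock_by_fd[i]->sock == fd)
                return nlsock_by_fd[i];
        return NULL;
    }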

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-08 17:28:19 -05:00
Donald Sharp
3670f5047c zebra: Store the sequence number to use as part of the dp_info
Store and use the sequence number instead of using what is in
the `struct nlsock`.  Future commits are moving away from storing
the `struct nlsock`, and the copy of the nlsock was what guaranteed
unique sequence numbers per message.  So let's store the
sequence number to use instead.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-08 17:28:19 -05:00
Mark Stapp
b6b6e59c6e zebra: use frr mem apis
Replace a couple of strdup/free with XSTRDUP/XFREE.
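
A small before/after sketch of the pattern; the memory type name here is
made up for illustration, but the XSTRDUP/XFREE wrappers tie the
allocation to a named MTYPE so it is visible to FRR's memory accounting,
and XFREE also NULLs the pointer.

    #include "memory.h"

    /* MTYPE_DP_EXAMPLE is a hypothetical memory type for this sketch. */
    DEFINE_MTYPE_STATIC(ZEBRA, DP_EXAMPLE, "Dataplane example string");

    void example(const char *ifname)
    {
        /* Before: plain libc, invisible to FRR's allocator accounting.
         *     char *copy = strdup(ifname);
         *     ...
         *     free(copy);
         */

        /* After: tracked under MTYPE_DP_EXAMPLE; XFREE sets copy = NULL. */
        char *copy = XSTRDUP(MTYPE_DP_EXAMPLE, ifname);
        /* ... use copy ... */
        XFREE(MTYPE_DP_EXAMPLE, copy);
    }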

Signed-off-by: Mark Stapp <mstapp@nvidia.com>
2022-02-08 15:57:57 -05:00
Russ White
1a8a7016a6
Merge pull request #9066 from donaldsharp/ships_in_the_night
zebra: Fix ships in the night issue
2022-02-08 14:41:01 -05:00
Igor Ryzhov
60cda04dda *: use ipaddr_cmp instead of memcmp
Using memcmp is wrong because struct ipaddr may contain uninitialized
padding bytes that should not be compared.
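
A small illustration of the pitfall, using a simplified stand-in for
struct ipaddr (not the real definition): the compiler may leave the
padding between the family field and the address union uninitialized, so
a whole-struct memcmp can report two logically equal addresses as
different, while a field-aware compare like ipaddr_cmp does not.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Simplified stand-in for struct ipaddr: padding bytes typically sit
     * between the small family field and the address union, and they are
     * never touched by normal field assignments.
     */
    struct ipaddr_like {
        uint8_t family;              /* AF_INET or AF_INET6 */
        union {
            uint32_t v4;
            uint8_t v6[16];
        } ip;
    };

    /* Field-aware comparison: only meaningful bytes are examined. */
    static bool addr_equal(const struct ipaddr_like *a,
                           const struct ipaddr_like *b)
    {
        if (a->family != b->family)
            return false;
        if (a->family == AF_INET)
            return a->ip.v4 == b->ip.v4;
        return memcmp(a->ip.v6, b->ip.v6, sizeof(a->ip.v6)) == 0;
    }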

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-02-08 20:31:34 +03:00
Russ White
e735c8073c
Merge pull request #9649 from proelbtn/add-support-for-end-dt4
add support for SRv6 IPv4 L3VPN
2022-02-08 08:30:02 -05:00
Donald Sharp
ce649b9d11 zebra: Abstract nhg deletion to reduce code duplication
Reduce code duplication when we are cleaning up nexthop
groups.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-07 16:10:36 -05:00
Donald Sharp
c6eee91f66 zebra: Fix ships in the night issue
When using wait-for-install there exist situations where
zebra will issue several route change operations to the kernel
but end up in a state we shouldn't be in at the end,
due to extra data being received.  Example:

a) zebra receives from bgp a route change, installs it and sends
the route to the kernel.
b) zebra receives a route deletion from bgp, removes the
struct route entry and then sends a deletion to the kernel.
c) zebra receives an asynchronous notification that (a) succeeded
but we treat this as a new route.

This is the ships-in-the-night problem.  In this case, if we receive
a notification from the kernel about a route that we know nothing
about, we are not in startup, and we are doing asic offload,
then we can ignore this update.
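
A hedged sketch of the guard described above; the variable names are
illustrative rather than the exact zebra symbols.

    #include <stdbool.h>
    #include <stddef.h>

    struct route_entry;                /* what zebra knows about the prefix */

    static bool graceful_restart_done; /* startup/sweep phase is over */
    static bool asic_offloaded;        /* routes are offloaded to an asic */

    /* Return true if an async kernel route notification should be dropped:
     * we know nothing about the route, we are past startup, and the asic
     * is handling offload -- this is the other "ship", not ours.
     */
    static bool ignore_kernel_notify(const struct route_entry *re)
    {
        return re == NULL && graceful_restart_done && asic_offloaded;
    }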

Ticket: #2563300
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-07 16:10:03 -05:00
Donald Sharp
81ef8a69ae zebra: Use AF_UNSPEC instead of setting to 0
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-07 13:22:41 -05:00
Jafar Al-Gharaibeh
4333379fca
Merge pull request #9926 from donaldsharp/update_issues
zebra: Fix v6 route replace failure turned into success
2022-02-04 19:40:55 -06:00
Jafar Al-Gharaibeh
2da1428ab2
Merge pull request #10501 from donaldsharp/more_zebra_show
More zebra show
2022-02-04 15:13:45 -06:00
Donald Sharp
c8453cd77e zebra: Fix v6 route replace failure turned into success
Currently, when we have a route replace operation for v6 routes
with a new nexthop group, the order of kernel installation is this:

a) New nexthop group insertion seq  1
b) Route delete operation seq 3
c) Route insertion operation seq 2

Currently the code in nl_batch_read_resp is attempting
to handle this situation by skipping the delete operation.
*BUT* it is enqueuing the context into the zebra dplane
queue before we read the response.  Since we create the ctx
with an implied success, success is being reported to the
upper level dplane and the zebra rib thinks the route has
been properly handled.

This is showing up in the zebra_seg6_route test code: the test code
is installing a seg6 route w/ sharpd, and the install fails
because the route's nexthop is rejected:

First installation:

2021/10/29 09:28:10.218 ZEBRA: [JGWSB-SMNVE] dplane: incoming new work counter: 2
2021/10/29 09:28:10.218 ZEBRA: [Q52A7-211QJ] dplane enqueues 2 new work to provider 'Kernel'
2021/10/29 09:28:10.218 ZEBRA: [JVY1P-93VFY] dplane provider 'Kernel': processing
2021/10/29 09:28:10.218 ZEBRA: [TX9N0-9JKDF] ID (9) Dplane nexthop update ctx 0x56125390a820 op NH_INSTALL
2021/10/29 09:28:10.218 ZEBRA: [PM9ZJ-07RCP] 0:1::1/128 Dplane route update ctx 0x56125390add0 op ROUTE_INSTALL
2021/10/29 09:28:10.218 ZEBRA: [TJ327-ET8HE] netlink_send_msg: >> netlink message dump [sent]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=104 type=(104) NEWNEXTHOP flags=(0x0501) {REQUEST,DUMP,(ROOT|REPLACE|CAPPED),(ATOMIC|CREATE)} seq=9 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [WCX94-SW894]   nhm [family=(10) AF_INET6 scope=(0) UNIVERSE protocol=(11) ZEBRA flags=0x00000000 {}]
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(1) ID]
2021/10/29 09:28:10.218 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(6) GATEWAY]
2021/10/29 09:28:10.218 ZEBRA: [STTSM-27M81]       2001::1
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(5) OIF]
2021/10/29 09:28:10.218 ZEBRA: [JR4EA-BKPTA]       6
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=6 (payload=2) type=(7) ENCAP_TYPE]
2021/10/29 09:28:10.218 ZEBRA: [JR4EA-BKPTA]       5
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=36 (payload=32) type=(32776) UNKNOWN]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=64 type=(24) NEWROUTE flags=(0x0401) {REQUEST,(ATOMIC|CREATE)} seq=10 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [GCEGC-W8YBF]   rtmsg [family=(10) AF_INET6 dstlen=128 srclen=0 tos=0 table=254 protocol=(194) UNKNOWN scope=(0) UNIVERSE type=(1) UNICAST flags=0x0000 {}]
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(1) DST]
2021/10/29 09:28:10.218 ZEBRA: [STTSM-27M81]       1::1
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(6) PRIORITY]
2021/10/29 09:28:10.218 ZEBRA: [Z4E9C-GD9EP]       20
2021/10/29 09:28:10.218 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(30) NH_ID]
2021/10/29 09:28:10.218 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:28:10.218 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=76 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=9 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:28:10.218 ZEBRA: [HSYZM-HV7HF] Extended Error: Gateway can not be a local address
2021/10/29 09:28:10.218 ZEBRA: [WVJCK-PPMGD][EC 4043309093] netlink-dp (NS 0) error: Invalid argument, type=RTM_NEWNEXTHOP(104), seq=9, pid=3539131282
2021/10/29 09:28:10.218 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:28:10.218 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=68 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=10 pid=3539131282]
2021/10/29 09:28:10.218 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:28:10.218 ZEBRA: [HSYZM-HV7HF] Extended Error: Nexthop id does not exist
2021/10/29 09:28:10.218 ZEBRA: [WVJCK-PPMGD][EC 4043309093] netlink-dp (NS 0) error: Invalid argument, type=RTM_NEWROUTE(24), seq=10, pid=3539131282
2021/10/29 09:28:10.218 ZEBRA: [VCDW6-A7ZF1] dplane dequeues 2 completed work from provider Kernel
2021/10/29 09:28:10.218 ZEBRA: [JTWAB-1MH4Y] dplane has 2 completed, 0 errors, for zebra main
2021/10/29 09:28:10.218 ZEBRA: [J7K9Z-9M7DT] Nexthop dplane ctx 0x56125390a820, op NH_INSTALL, nexthop ID (9), result FAILURE
2021/10/29 09:28:10.218 ZEBRA: [P2XBZ-RAFQ5][EC 4043309074] Failed to install Nexthop ID (9) into the kernel
2021/10/29 09:28:10.218 ZEBRA: [RMK34-61HV5] default(0:254):1::1/128 Processing dplane result ctx 0x56125390add0, op ROUTE_INSTALL result FAILURE

Note the last line `op ROUTE_INSTALL result FAILURE`, because we are attempting to use
a gw nexthop that is local.  This is the result.

Then the test code was installing the route again:

2021/10/29 09:30:00.493 ZEBRA: [JGWSB-SMNVE] dplane: incoming new work counter: 2
2021/10/29 09:30:00.493 ZEBRA: [Q52A7-211QJ] dplane enqueues 2 new work to provider 'Kernel'
2021/10/29 09:30:00.493 ZEBRA: [JVY1P-93VFY] dplane provider 'Kernel': processing
2021/10/29 09:30:00.493 ZEBRA: [TX9N0-9JKDF] ID (9) Dplane nexthop update ctx 0x561253916a00 op NH_INSTALL
2021/10/29 09:30:00.493 ZEBRA: [PM9ZJ-07RCP] 0:1::1/128 Dplane route update ctx 0x561253915f40 op ROUTE_UPDATE
2021/10/29 09:30:00.493 ZEBRA: [TJ327-ET8HE] netlink_send_msg: >> netlink message dump [sent]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=104 type=(104) NEWNEXTHOP flags=(0x0501) {REQUEST,DUMP,(ROOT|REPLACE|CAPPED),(ATOMIC|CREATE)} seq=11 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [WCX94-SW894]   nhm [family=(10) AF_INET6 scope=(0) UNIVERSE protocol=(11) ZEBRA flags=0x00000000 {}]
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(1) ID]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(6) GATEWAY]
2021/10/29 09:30:00.493 ZEBRA: [STTSM-27M81]       2001::1
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(5) OIF]
2021/10/29 09:30:00.493 ZEBRA: [JR4EA-BKPTA]       6
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=6 (payload=2) type=(7) ENCAP_TYPE]
2021/10/29 09:30:00.493 ZEBRA: [JR4EA-BKPTA]       5
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=36 (payload=32) type=(32776) UNKNOWN]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=56 type=(25) DELROUTE flags=(0x0401) {REQUEST,(ATOMIC|CREATE)} seq=13 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [GCEGC-W8YBF]   rtmsg [family=(10) AF_INET6 dstlen=128 srclen=0 tos=0 table=254 protocol=(194) UNKNOWN scope=(0) UNIVERSE type=(0) UNSPEC flags=0x0000 {}]
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(1) DST]
2021/10/29 09:30:00.493 ZEBRA: [STTSM-27M81]       1::1
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(6) PRIORITY]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       20
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=64 type=(24) NEWROUTE flags=(0x0401) {REQUEST,(ATOMIC|CREATE)} seq=12 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [GCEGC-W8YBF]   rtmsg [family=(10) AF_INET6 dstlen=128 srclen=0 tos=0 table=254 protocol=(194) UNKNOWN scope=(0) UNIVERSE type=(1) UNICAST flags=0x0000 {}]
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=20 (payload=16) type=(1) DST]
2021/10/29 09:30:00.493 ZEBRA: [STTSM-27M81]       1::1
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(6) PRIORITY]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       20
2021/10/29 09:30:00.493 ZEBRA: [KFBSR-XYJV1]     rta [len=8 (payload=4) type=(30) NH_ID]
2021/10/29 09:30:00.493 ZEBRA: [Z4E9C-GD9EP]       9
2021/10/29 09:30:00.493 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=76 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=11 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:30:00.493 ZEBRA: [HSYZM-HV7HF] Extended Error: Gateway can not be a local address
2021/10/29 09:30:00.493 ZEBRA: [WVJCK-PPMGD][EC 4043309093] netlink-dp (NS 0) error: Invalid argument, type=RTM_NEWNEXTHOP(104), seq=11, pid=3539131282
2021/10/29 09:30:00.493 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=36 type=(2) ERROR flags=(0x0100) {DUMP,(ROOT|REPLACE|CAPPED)} seq=13 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-3) No such process]
2021/10/29 09:30:00.493 ZEBRA: [V8KNF-8EXH8] netlink_recv_msg: << netlink message dump [recv]
2021/10/29 09:30:00.493 ZEBRA: [JAS4D-NCWGP] nlmsghdr [len=68 type=(2) ERROR flags=(0x0300) {DUMP,(ROOT|REPLACE|CAPPED),(MATCH|EXCLUDE|ACK_TLVS)} seq=12 pid=3539131282]
2021/10/29 09:30:00.493 ZEBRA: [KWP1C-6CSXF]   nlmsgerr [error=(-22) Invalid argument]
2021/10/29 09:30:00.493 ZEBRA: [VCDW6-A7ZF1] dplane dequeues 2 completed work from provider Kernel
2021/10/29 09:30:00.493 ZEBRA: [JTWAB-1MH4Y] dplane has 2 completed, 0 errors, for zebra main
2021/10/29 09:30:00.493 ZEBRA: [J7K9Z-9M7DT] Nexthop dplane ctx 0x561253916a00, op NH_INSTALL, nexthop ID (9), result FAILURE
2021/10/29 09:30:00.493 ZEBRA: [P2XBZ-RAFQ5][EC 4043309074] Failed to install Nexthop ID (9) into the kernel
2021/10/29 09:30:00.493 ZEBRA: [RMK34-61HV5] default(0:254):1::1/128 Processing dplane result ctx 0x561253915f40, op ROUTE_UPDATE result SUCCESS

Note that this time we do these three operations:

a) nexthop installation seq 11
b) route delete seq 13
c) route add seq 12

Note the last line: we report the install as a success, but it clearly failed per the seq=12 decode.
When we look at the v6 rib, it thinks the route is installed:

unet> r1 show ipv6 route
Codes: K - kernel route, C - connected, S - static, R - RIPng,
       O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
       v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
       f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

D>* 1::1/128 [150/0] via 2001::1, dum0, seg6local unspec unknown(seg6local_context2str), seg6 a::, weight 1, 00:00:17

So let's modify nl_batch_read_resp to not dequeue/enqueue the context until we are sure we have
the right one.  This fixes the test code to do the right thing on the second installation.
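
A rough sketch of the ordering fix, with hypothetical helper names (the
real logic lives in nl_batch_read_resp): peek at the head context and
only dequeue it once its sequence number actually matches the kernel
reply, so an out-of-order reply cannot consume a context and get
reported as an implied success.

    #include <linux/netlink.h>
    #include <stdint.h>

    struct nl_batch;                     /* the outstanding batch */
    struct zebra_dplane_ctx;             /* one queued dplane operation */

    /* Hypothetical accessors, for illustration only. */
    extern struct zebra_dplane_ctx *batch_peek(struct nl_batch *bth);
    extern struct zebra_dplane_ctx *batch_dequeue(struct nl_batch *bth);
    extern uint32_t ctx_seq(const struct zebra_dplane_ctx *ctx);
    extern void ctx_set_failure(struct zebra_dplane_ctx *ctx);

    static void handle_reply(struct nl_batch *bth, const struct nlmsghdr *h,
                             int err)
    {
        /* Look, but do not consume, until the sequence numbers line up. */
        struct zebra_dplane_ctx *ctx = batch_peek(bth);

        if (ctx == NULL || ctx_seq(ctx) != h->nlmsg_seq)
            return;

        ctx = batch_dequeue(bth);
        if (err < 0)
            ctx_set_failure(ctx);
    }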

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 15:33:58 -05:00
Donald Sharp
e3ee55d4bd zebra: set zd_is_update in 1 spot
The ctx->zd_is_update is being set in various
spots based upon the same value that we are
passing into dplane_ctx_ns_init.  Let's just
consolidate all this into dplane_ctx_ns_init
so that the zd_is_update value is set at the
same time that we increment the sequence numbers
to use.

As a note for future me's reading this: the sequence
number chosen for the seq number passed to the
kernel works like this: each context gets a copy of the
appropriate nlsock to use.  Since it's a copy
at a point in time, we know we have a unique sequence
number value.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 15:33:58 -05:00
Donald Sharp
00249e255e zebra: When we get an implicit ack or full failure, mark status
When nl_batch_read_resp gets a full-on failure (-1) or an implicit
ack (0) from the kernel for a batch of messages, let's immediately
mark all of those in the batch as pass/fail as needed, instead
of having them marked elsewhere.
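
A minimal sketch of that behavior with hypothetical helpers: on a hard
error (-1) or an implicit ack (0) that covers the whole batch, walk
every queued context and record its result immediately.

    #include <stddef.h>

    struct nl_batch;                     /* the outstanding batch */
    struct zebra_dplane_ctx;             /* one queued dplane operation */

    /* Hypothetical accessors, for illustration only. */
    extern struct zebra_dplane_ctx *batch_dequeue(struct nl_batch *bth);
    extern void ctx_set_success(struct zebra_dplane_ctx *ctx);
    extern void ctx_set_failure(struct zebra_dplane_ctx *ctx);

    /* read_ret is the result of the batched netlink read:
     *   < 0  -> the whole batch failed
     *   == 0 -> implicit ack, everything in the batch succeeded
     */
    static void batch_mark_all(struct nl_batch *bth, int read_ret)
    {
        struct zebra_dplane_ctx *ctx;

        while ((ctx = batch_dequeue(bth)) != NULL) {
            if (read_ret < 0)
                ctx_set_failure(ctx);
            else
                ctx_set_success(ctx);
        }
    }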

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 15:33:58 -05:00
Jafar Al-Gharaibeh
40ec6ef9e0
Merge pull request #10161 from donaldsharp/hash_crash
zebra: Fix improper usage of hash_iterate that caused crashes
2022-02-04 14:18:03 -06:00
Donald Sharp
07b9ebca65 zebra: Ensure zebra_nhg_sweep_table accounts for double deletes
I'm seeing this crash in various forms:
Program terminated with signal SIGSEGV, Segmentation fault.
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
[Current thread is 1 (Thread 0x7f418efbc7c0 (LWP 3580253))]
(gdb) bt
(gdb) f 4
267 (*func)(hb, arg);
(gdb) p hb
$1 = (struct hash_bucket *) 0x558cdaafb250
(gdb) p *hb
$2 = {len = 0, next = 0x0, key = 0, data = 0x0}
(gdb)

I've also seen a crash where data is 0x03.

My suspicion is that hash_iterate is calling zebra_nhg_sweep_entry, which
deletes the particular entry we are looking at, as well as possibly other
entries whose ref counts get set to 0 as a result.

Then we have this loop in hash_iterate():

   for (i = 0; i < hash->size; i++)
            for (hb = hash->index[i]; hb; hb = hbnext) {
                    /* get pointer to next hash bucket here, in case (*func)
                     * decides to delete hb by calling hash_release
                     */
                    hbnext = hb->next;
                    (*func)(hb, arg);
            }
Suppose in the previous loop hbnext is set to hb->next and we call
zebra_nhg_sweep_entry.  This deletes that entry and also
happens to cause the hbnext entry to be deleted as well, because of nhg
refcounts.  At this point the memory pointed to by hbnext is
not owned by the pthread anymore, and we can end up in a state where
it's overwritten by another pthread in zebra with data for other incoming events.

What to do?  Let's change the sweep function to a hash_walk and have
it stop iterating and start over if there is a possible double
delete operation.
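
A hedged sketch of that shape (the callback and flag handling are
illustrative; hash_walk() and the HASHWALK_* return values come from
lib/hash.h): abort the walk as soon as a release may have cascaded to
other entries, and restart from the top until a pass completes cleanly.

    #include <stdbool.h>

    #include "hash.h"    /* hash_walk(), HASHWALK_CONTINUE, HASHWALK_ABORT */

    /* Sweep one bucket; set *cascaded if releasing this entry also freed
     * other entries (refcounts hitting zero), because any buckets the
     * iterator has cached may now point at freed memory.
     */
    static int nhg_sweep_walker(struct hash_bucket *bucket, void *arg)
    {
        bool *cascaded = arg;

        /* ... release bucket->data here if its refcount allows,
         *     setting *cascaded when the release takes others with it ...
         */

        if (*cascaded)
            return HASHWALK_ABORT;      /* stop: cached buckets are stale */

        return HASHWALK_CONTINUE;
    }

    static void nhg_sweep(struct hash *nhgs)
    {
        bool cascaded;

        /* Restart until a full pass finishes without a possible
         * double delete.
         */
        do {
            cascaded = false;
            hash_walk(nhgs, nhg_sweep_walker, &cascaded);
        } while (cascaded);
    }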

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 12:05:38 -05:00
Russ White
ab68283cee
Merge pull request #10401 from donaldsharp/donot_agree
zebra: Make Router Advertisement warnings show up once every 6 hours
2022-02-04 10:55:00 -05:00
Donatas Abraitis
66a59f8743
Merge pull request #10469 from mjstapp/fix_dplane_netlink_groups
zebra: reduce incoming netlink messages for dplane thread
2022-02-04 17:51:31 +02:00
Donald Sharp
530c9fc4f5 zebra: Convert some show zebra output to a table
Make the output a bit easier to interpret and use by
converting it to a table.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
954e1a2bc9 zebra: Add knowledge about RA and RFC 5549 to show zebra
Add to `show zebra` whether or not RA is compiled into FRR
and whether or not BGP is using RFC 5549 at the moment.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
281686819d zebra: Add evpn status to show zebra
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
1777ba2ac4 zebra: Add os and version to show zebra
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
090ee85656 zebra: Add kernel nexthop group support to show zebra
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
1a97e35eb8 zebra: Add MPLS status to show zebra
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
9783de6faf zebra: Add if v4/v6 forwarding is turned on/off to show zebra
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
dd42779ff9 zebra: Add to show zebra the type of vrf devices being used
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Donald Sharp
88fd4cb8ca zebra: Add ability to know when FRR is not asic offloaded
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-02-04 10:29:38 -05:00
Mark Stapp
3d1ff4bfdb
Merge pull request #10409 from idryzhov/zebra-mq-clean-crash
zebra: fix cleanup of meta queues on vrf disable
2022-02-02 08:35:00 -05:00
Mark Stapp
ceab66b7f4 zebra: reduce incoming netlink messages for dplane thread
The dataplane pthread only processes a limited set of incoming
netlink notifications: only register for that set of events,
reducing duplicate incoming netlink messages.
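
A hedged sketch, at the socket level, of what "only register for that set
of events" means (the exact group list used by the dplane pthread is not
reproduced here): the nl_groups bitmap passed at bind() time decides
which kernel notification groups the socket will ever receive.

    #include <linux/rtnetlink.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Open a NETLINK_ROUTE socket subscribed only to the given groups. */
    static int nl_listen_socket(unsigned int groups)
    {
        struct sockaddr_nl snl;
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

        if (fd < 0)
            return -1;

        memset(&snl, 0, sizeof(snl));
        snl.nl_family = AF_NETLINK;
        snl.nl_groups = groups;     /* bitmask of rtnetlink groups */

        if (bind(fd, (struct sockaddr *)&snl, sizeof(snl)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    /* Example: a socket that only hears link and neighbor events,
     * instead of every group the main zebra socket listens to:
     *
     *     int fd = nl_listen_socket(RTMGRP_LINK | RTMGRP_NEIGH);
     */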

Signed-off-by: Mark Stapp <mstapp@nvidia.com>
2022-02-01 13:43:51 -05:00
Igor Ryzhov
0ef6eacc95 zebra: fix cleanup of meta queues on vrf disable
Current code treats all metaqueues as lists of route_node structures.
However, some queues contain other structures that need to be cleaned up
differently. Casting the elements of those queues to struct route_node
and dereferencing them leads to a crash. The crash may be seen when
executing bgp_multi_vrf_topo2.

Fix the code by using the proper list element types.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-02-01 18:20:30 +03:00
Igor Ryzhov
461a8d7aba
Merge pull request #10443 from mjstapp/zebra_re_opaque
zebra: name the route_entry opaque struct more specifically
2022-02-01 12:19:11 +03:00
Mark Stapp
b86c1f4fcc zebra: name the route_entry opaque struct more specifically
The name 'opaque' is a little general - call the route_entry
struct 're_opaque' to make it more specific.

Signed-off-by: Mark Stapp <mstapp@nvidia.com>
2022-01-31 08:50:50 -05:00
Donald Sharp
637f95bf2d zebra: Make Router Advertisement warnings show up once every 6 hours
RA packets are pretty chatty, and when there is a warning from
a misconfiguration on the network, the log file gets filled
up with warnings.  Modify the code in rtadv.c to only spit
out the warning in these cases at most every 6 hours.
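
A minimal sketch of the rate limit, assuming a per-warning-site
timestamp and a monotonic clock (the real change lives in rtadv.c and
uses zebra's own infrastructure).

    #include <stdbool.h>
    #include <time.h>

    #define RTADV_WARN_INTERVAL (6 * 60 * 60)   /* six hours, in seconds */

    /* Return true if this warning site may log now; callers keep one
     * static time_t per warning they want to rate limit.
     */
    static bool rtadv_warn_allowed(time_t *last_warned)
    {
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);

        if (*last_warned != 0 &&
            now.tv_sec - *last_warned < RTADV_WARN_INTERVAL)
            return false;

        *last_warned = now.tv_sec;
        return true;
    }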

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-28 11:07:01 -05:00
Xiao Liang
8244ba34aa zebra: Fix EVPN route nexthop config order
EVPN route add should be queued to preserve the config order,
in particular to order it against deletion in rib_delete().

Signed-off-by: Xiao Liang <shaw.leon@gmail.com>
2022-01-28 20:51:10 +08:00
Donald Sharp
0955f8757b zebra: Don't double delete the table we are cleaning up
vrf_disable is always called before
vrf_delete.  The rnh_table and rnh_table_multicast tables
are already deleted as part of vrf_disable.  No need
to do it again.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-26 08:21:03 -05:00
Russ White
e48b2fea63
Merge pull request #10411 from idryzhov/if-config-vrf-name
*: do not print vrf name for interface config when using vrf-lite
2022-01-25 11:34:59 -05:00
Russ White
bbf1101240
Merge pull request #10412 from idryzhov/zebra-vrf-delete
zebra: fix vrf deletion
2022-01-24 07:33:53 -05:00
Igor Ryzhov
788a036fdb *: do not print vrf name for interface config when using vrf-lite
VRF name should not be printed in the config since 574445ec. The update
was done for NB config output but I missed it for regular vty output.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-01-24 14:44:05 +03:00
Donatas Abraitis
6b968475ed
Merge pull request #10406 from idryzhov/zebra-opaque-memleak
zebra: fix opaque data memleak
2022-01-24 09:38:54 +02:00
Russ White
2d9e10d095
Merge pull request #10318 from donaldsharp/redistribution
OSPF Redistribution
2022-01-23 22:30:24 -05:00
Igor Ryzhov
e4c5b3ba06 zebra: fix vrf deletion
VRF deletion code must be called after the corresponding interface
deletion code.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-01-24 01:51:10 +03:00
Igor Ryzhov
dc00940b66 zebra: fix opaque data memleak
Opaque data should be freed together with the route entry in case of errors.

Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
2022-01-23 15:39:04 +03:00
Donald Sharp
e8b3a2f74b lib, zebra: Add ability to tell thread system to ignore late timers
Add a thread_ignore_late_timer(struct thread *thread) function
that allows thread.c to ignore when timers are late to the party.
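
A hedged usage sketch; thread_ignore_late_timer() is the function added
by this commit, while the surrounding timer registration and handler
signature are only illustrative of FRR's thread API of that era.

    #include "thread.h"    /* struct thread, thread_add_timer() */

    extern struct thread_master *master;

    static struct thread *t_example;

    /* Handler return type has varied across FRR versions; int is used
     * here for illustration only.
     */
    static int example_timer(struct thread *t)
    {
        /* ... periodic work that is allowed to fire late ... */
        return 0;
    }

    static void example_schedule(void)
    {
        thread_add_timer(master, example_timer, NULL, 60, &t_example);

        /* Ask thread.c not to warn if this particular timer ends up
         * running later than scheduled.
         */
        if (t_example)
            thread_ignore_late_timer(t_example);
    }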

Signed-off-by: Donald Sharp <sharpd@nvidia.com>
2022-01-20 11:58:48 -05:00
Russ White
4ec148523c
Merge pull request #10355 from opensourcerouting/noisy-startup
lib, zebra: make startup less noisy
2022-01-18 11:30:13 -05:00
Russ White
05786ac774
Merge pull request #9644 from opensourcerouting/ospf-opaque-attrs
OSPF opaque route attributes
2022-01-18 09:08:38 -05:00