1. No configuration in FRR, and `ip link add vrf1 type vrf ...`.
At this point, everything is ok.
2. `ip link del vrf1`.
`zebra` wrongly/redundantly notifies clients to add "vrf1" as a normal
interface after the correct deletion of "vrf1".
```
ZEBRA: [KMXEB-K771Y] netlink_parse_info: netlink-listen (NS 0) type RTM_DELLINK(17), len=588, seq=0, pid=0
ZEBRA: [TDJW2-B9KJW] RTM_DELLINK for vrf1(93) <- Wrongly as normal interface, not vrf
ZEBRA: [WEEJX-M4HA0] interface vrf1 vrf vrf1(93) index 93 is now inactive.
ZEBRA: [NXAHW-290AC] MESSAGE: ZEBRA_INTERFACE_DELETE vrf1 vrf vrf1(93)
ZEBRA: [H97XA-ABB3A] MESSAGE: ZEBRA_INTERFACE_VRF_UPDATE/DEL vrf1 VRF Id 93 -> 0
ZEBRA: [HP8PZ-7D6D2] MESSAGE: ZEBRA_INTERFACE_VRF_UPDATE/ADD vrf1 VRF Id 93 -> 0 <-
ZEBRA: [Y6R2N-EF2N4] interface vrf1 is being deleted from the system
ZEBRA: [KNFMR-AFZ53] RTM_DELLINK for VRF vrf1(93)
ZEBRA: [P0CZ5-RF5FH] VRF vrf1 id 93 is now inactive
ZEBRA: [XC3P3-1DG4D] MESSAGE: ZEBRA_VRF_DELETE vrf1
ZEBRA: [ZMS2F-6K837] VRF vrf1 id 4294967295 deleted
OSPF: [JKWE3-97M3J] Zebra: interface add vrf1 vrf default[0] index 0 flags 480 metric 0 mtu 65575 speed 0 <- Wrongly add interface
```
`if_handle_vrf_change()` moves the interface from the specific vrf to the
default vrf, but it doesn't skip interfaces of vrf type. So the
wrong/redundant add operation is done.
Note that the wrongly added interface is regarded as a normal interface
because `ifp->status` is cleared too early, so it is missing the VRF flag
(`ZEBRA_INTERFACE_VRF_LOOPBACK`). As a result, ospfd initializes `ifp->type`
to `OSPF_IFTYPE_BROADCAST`.
3. `ip link add vrf1 type vrf ...`, adding "vrf1" again. FRR now shows the
wrong configuration:
```
interface vrf1
ip ospf network broadcast
exit
```
Here, zebra sends `ZEBRA_INTERFACE_ADD` again for "vrf1" with the
correct `ifp->status`, so it is updated to vrf type. But
ospfd can't update `ifp->type` from `OSPF_IFTYPE_BROADCAST` to
`OSPF_IFTYPE_LOOPBACK` because it was already configured in
step 2 above.
Two changes to fix it:
1. Skip the VRF-switching procedure for interfaces of vrf type.
That is, don't send `ZEBRA_INTERFACE_ADD` to clients when deleting a vrf
(see the sketch after this list).
2. Clear this flag last.
That is, clients should get the correct `ifp->status`.
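A minimal standalone sketch of the first change, using made-up stand-in types rather than zebra's real structures (the `is_vrf` flag and `handle_vrf_delete()` helper here are hypothetical):
```
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for a zebra interface; the "is_vrf" flag is
 * hypothetical. */
struct iface {
	const char *name;
	bool is_vrf;   /* true for interfaces of vrf type (e.g. "vrf1") */
	int vrf_id;
};

/* Hypothetical helper: move an interface into the default VRF and
 * notify clients with an interface add.  Interfaces that are
 * themselves VRF devices are skipped, so no spurious
 * ZEBRA_INTERFACE_ADD is sent while the VRF is being deleted. */
static void handle_vrf_delete(struct iface *ifp)
{
	if (ifp->is_vrf) {
		printf("%s: vrf device, skip VRF switch\n", ifp->name);
		return;
	}
	ifp->vrf_id = 0; /* default VRF */
	printf("%s: moved to default VRF, notify clients\n", ifp->name);
}

int main(void)
{
	struct iface vrf1 = { "vrf1", true, 93 };
	struct iface swp1 = { "swp1", false, 93 };

	handle_vrf_delete(&vrf1); /* skipped */
	handle_vrf_delete(&swp1); /* moved and re-announced */
	return 0;
}
```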
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Remove the pointer check for ctx. At this point in the
function it has to be non-null since we have already dereferenced it.
Additionally, the alloc function that creates it cannot
fail.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The current local mac delete event is always sent with the force
flag, which breaks duplicate-detected MACs where the earlier state
needs to be resynced from bgpd.
Ticket:#3233019
Issue:3233019
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Upon receiving a local mobility event for MAC + NEIGH,
both are detected as duplicate upon hitting the DAD threshold.
The duplicate-detected (frozen) MAC + NEIGH are not known
to bgpd.
If the locally learnt MAC + NEIGH are deleted in the kernel,
the MAC is marked as AUTO after sending the delete event
to bgpd.
Bgpd only reinstalls the best route for the MAC_IP route (NEIGH)
but not for the MAC event.
This leaves a situation where the MAC is in AUTO state and
the associated neigh is remote.
Fix:
On DUPLICATE + LOCAL MAC deletion, set the MAC delete request
as a reinstall from bgpd.
Ticket:#2873307
Reviewed By:
Testing Done:
Freeze MAC + two NEIGHs in a local mobility event.
Delete MAC and NEIGH from the kernel.
Bgpd resyncs the remote mac route, which puts the MAC into remote state.
Signed-off-by: Chirag Shah <chirag@nvidia.com>
EVPN MH ES redundant VTEPs need to install the
sync MAC as notify inactive and generate the
ND:Proxy stamped extended community on the Type-2
route.
Ticket:#3436621
Issue:3436621
Testing Done:
tor-11 originates type-2 MAC route:
tor-11# bridge -d fdb show | grep 00:65:00:00:00:01
00:65:00:00:00:01 dev hostbond1 vlan 1000 notify master bridge static
tor-12 receives sync MAC route:
Before fix:
----------
tor-12:/# bridge -d fdb show | grep 00:65:00:00:00:01
00:65:00:00:00:01 dev hostbond1 vlan 1000 notify master bridge static
After fix: inactive is set on the MAC entry
----------
tor-12:/#bridge -d fdb show | grep 00:65:00:00:00:01
00:65:00:00:00:01 dev hostbond1 vlan 1000 notify inactive master bridge
static
Notice the difference in `inactive` post notify on tor-12
with the fix.
Signed-off-by: Trey Aspelund <taspelund@nvidia.com>
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Srv6 nexthop segments may not be set when configuring seg6local
attributes. This is the case for the following seg6local route:
Dump in vtysh, extract from 'show ipv6 route'
> B>* 2001:db8:1:1:1::/128 [20/0] is directly connected, vrf1, seg6local End.DT46 table 10, seg6 ::, weight 1, 00:02:10
Dump in iproute2, extract from 'ip -6 route show'
> 2001:db8:1:1:1:: nhid 22 encap seg6local action End.DT46 vrftable 10 dev vrf1 proto bgp metric 20 pref medium
As can be seen, the 'seg6 ::' nexthop segment is not visible on iproute2,
because it is not set. Do not display seg6 ipv6 nexthop when not set.
After:
> B>* 2001:db8:1:1:1::/128 [20/0] is directly connected, vrf1, seg6local End.DT46 table 10, weight 1, 00:02:10
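A small standalone sketch of the display check, assuming only that an all-zeros IPv6 address means the seg6 segment is unset (the `print_seg6()` helper is hypothetical, not zebra's actual printer):
```
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>

/* Hypothetical print helper: only emit the "seg6 ..." part of the
 * route line when the nexthop segment is actually set. */
static void print_seg6(const struct in6_addr *seg)
{
	char buf[INET6_ADDRSTRLEN];

	if (IN6_IS_ADDR_UNSPECIFIED(seg))
		return; /* segment not configured: print nothing */

	inet_ntop(AF_INET6, seg, buf, sizeof(buf));
	printf(", seg6 %s", buf);
}

int main(void)
{
	struct in6_addr unset = IN6ADDR_ANY_INIT;
	struct in6_addr set;

	inet_pton(AF_INET6, "2001:db8:1:1:1::", &set);

	printf("seg6local End.DT46 table 10");
	print_seg6(&unset);  /* nothing added */
	printf("\n");

	printf("label 16");
	print_seg6(&set);    /* ", seg6 2001:db8:1:1:1::" */
	printf("\n");
	return 0;
}
```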
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Srv6 routes which configure the encap method may not have
seg6local instructions. Generally speaking, seg6local
attributes that are not specified should not be dumped.
Before:
> B>* 10.200.0.0/24 [20/0] via fd00:125::2, ntfp2 (vrf default), label 16, seg6local unspec unknown(seg6local_context2str), seg6 2001:db8:1:1:1::, weight 1, 00:00:17
After:
> B>* 10.200.0.0/24 [20/0] via fd00:125::2, ntfp2 (vrf default), label 16, seg6 2001:db8:1:1:1::, weight 1, 00:00:17
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Issue:
After vlan flap, zebra was not marking the selected/best route as installed.
As a result, when a static route was configured with nexthop as directly
connected interface's(vlan) IP, the static route was not being installed
in the kernel since its nexthop was unresolved. The nexthop was marked
unresolved because zebra failed to mark the best route as installed after
interface flap.
This was happening because, in dplane_route_update_internal() if the old and
new context type, and nexthop group id are the same, then zebra doesn't send
down a route replace request to kernel. But, the installed (ROUTE_ENTRY_INSTALLED)
flag is only set when zebra receives a response from the kernel. Since the
request to the kernel was being skipped for the route entry, the installed
flag was not being set.
Fix:
In dplane_route_update_internal() if the old and new context type, and
nexthop group id are the same, then before returning, installed flag will
be set on the route-entry if it's not set already.
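A minimal sketch of the fix, with simplified stand-in types instead of zebra's real route entry and dplane context:
```
#include <stdbool.h>
#include <stdio.h>

#define ROUTE_ENTRY_INSTALLED 0x01

/* Simplified stand-in for a zebra route entry. */
struct route_entry {
	unsigned int flags;
	unsigned int nhg_id;
};

/* Hypothetical version of the skip path: when the old and new entries
 * are equivalent no replace is sent to the kernel, so mark the new
 * entry installed here instead of waiting for a kernel response that
 * will never come. */
static void route_update(struct route_entry *re_old, struct route_entry *re_new)
{
	if (re_old && re_old->nhg_id == re_new->nhg_id) {
		if (!(re_new->flags & ROUTE_ENTRY_INSTALLED))
			re_new->flags |= ROUTE_ENTRY_INSTALLED;
		printf("replace skipped, installed=%u\n",
		       re_new->flags & ROUTE_ENTRY_INSTALLED);
		return;
	}
	printf("send replace to kernel\n");
}

int main(void)
{
	struct route_entry re_old = { ROUTE_ENTRY_INSTALLED, 7 };
	struct route_entry re_new = { 0, 7 };

	route_update(&re_old, &re_new); /* same nhg: just set INSTALLED */
	return 0;
}
```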
Signed-off-by: Pooja Jagadeesh Doijode <pdoijode@nvidia.com>
"show evpn json" returns nothing when evpn is disabled.
The code has been fixed to return {} when evpn is disabled or no entry is
available.
Before Fix:-
```
cumulus@r2:mgmt:~$ sudo vtysh -c "show evpn json"
cumulus@r2:mgmt:~$
```
After Fix:-
```
cumulus@r1:mgmt:~$ sudo vtysh -c "show evpn json"
{
}
cumulus@r1:mgmt:~$
```
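For illustration, a tiny libjson-c example of emitting an empty `{}` object when there is nothing to report (FRR wraps json-c in its own helpers, so this is only a sketch of the idea):
```
#include <stdio.h>
#include <json-c/json.h>

int main(void)
{
	/* Create the object up front; when EVPN is disabled or has no
	 * entries nothing is added, and "{}" is still printed. */
	json_object *json = json_object_new_object();

	printf("%s\n", json_object_to_json_string_ext(
			       json, JSON_C_TO_STRING_PRETTY));
	json_object_put(json);
	return 0;
}
```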
Ticket:#3417955
Issue:3417955
Testing: UT done
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Signed-off-by: Sindhu Parvathi Gopinathan <sgopinathan@nvidia.com>
During shutdown, the main pthread stops the dplane pthread
before exiting. Don't try to clean up any events scheduled
to the dplane pthread at that point - just let the thread
exit and clean up.
Signed-off-by: Mark Stapp <mjs@labn.net>
Two things:
On shutdown, clean up any events associated with the update walker.
Also do not allow new events to be created.
Fixes this mem-leak:
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790:Direct leak of 8 byte(s) in 1 object(s) allocated from:
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #0 0x7f0dd0b08037 in __interceptor_calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #1 0x7f0dd06c19f9 in qcalloc lib/memory.c:105
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #2 0x55b42fb605bc in rib_update_ctx_init zebra/zebra_rib.c:4383
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #3 0x55b42fb6088f in rib_update zebra/zebra_rib.c:4421
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #4 0x55b42fa00344 in netlink_link_change zebra/if_netlink.c:2221
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #5 0x55b42fa24622 in netlink_information_fetch zebra/kernel_netlink.c:399
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #6 0x55b42fa28c02 in netlink_parse_info zebra/kernel_netlink.c:1183
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #7 0x55b42fa24951 in kernel_read zebra/kernel_netlink.c:493
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #8 0x7f0dd0797f0c in event_call lib/event.c:1995
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #9 0x7f0dd0684fd9 in frr_run lib/libfrr.c:1185
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #10 0x55b42fa30caa in main zebra/main.c:465
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790- #11 0x7f0dd01b5d09 in __libc_start_main ../csu/libc-start.c:308
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790-
./msdp_topo1.test_msdp_topo1/r2.zebra.asan.1117790-SUMMARY: AddressSanitizer: 8 byte(s) leaked in 1 allocation(s).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
BGP signals to zebra that an afi has converged immediately
after it has finished processing all routes for a given
afi/safi. This generates events in zebra in this order:
a) Routes received from BGP, placed on early-rib Meta-Q
b) Signal GR for the afi.
Now imagine that zebra reads GR code and immediately
processes routes that are in the actual rib and
removes some routes. This generates a
c) route deletion to the kernel for some number of
routes that may be in the early-rib Meta-Q
d) Process the Meta-Q, and re-install the routes
This is undesirable behavior in zebra: while we
may end up in a correct state, there will be a
blip for some number of routes that happen to be
in the early-rib Meta-Q.
Modify the GR code to have its own processing
entry at the end of the Meta-Q. This will
allow all routes to be processed and ready
for handling by the Graceful Restart code.
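A toy illustration of queueing the GR work at the end of a priority meta-queue so it only runs after the earlier route work; the names and sizes here are made up, not zebra's:
```
#include <stdio.h>

/* A toy meta-queue: lower slot number = higher priority.  The last
 * slot is reserved for graceful-restart work so it only runs after
 * all early-rib route work queued ahead of it. */
#define MQ_SIZE 8
#define MQ_GR_SLOT (MQ_SIZE - 1)

static const char *meta_queue[MQ_SIZE][4];
static int meta_len[MQ_SIZE];

static void mq_add(int slot, const char *work)
{
	meta_queue[slot][meta_len[slot]++] = work;
}

static void mq_run(void)
{
	for (int slot = 0; slot < MQ_SIZE; slot++)
		for (int i = 0; i < meta_len[slot]; i++)
			printf("slot %d: %s\n", slot, meta_queue[slot][i]);
}

int main(void)
{
	mq_add(0, "early-rib: routes received from BGP");
	mq_add(MQ_GR_SLOT, "GR: sweep stale routes for the afi");
	mq_run(); /* the GR sweep runs only after the BGP routes */
	return 0;
}
```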
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
After the restructure of the gr code to allow zebra_gr
to have individual cleanups of afi, this is no longer necessary.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The GR code in FRR used to wait till all AFI's were complete
before cleaning up the routes from the upper level protocol.
This of course can lead to some weird situations where, say,
ipv4 finishes and then v6 is stuck waiting for a peer to come
up and never finishes. When v4 finishes it signals zebra that
it is done, but no action is taken at that moment.
Modify the code to allow the zebra_gr.c code to handle a per
afi removal, instead of doing it all at the end.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The zebra_gr code had 3 functions when effectively only
1 was needed. Cleans up some code weirdness around
multiple switch statements for the same api->cap
as well as consolidating down to only caring about
SAFI_UNICAST, since that is all we care about at the
moment.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
We have code that tracks both afi's and safi's,
but we only ever operate on the afi's. So let's
limit our work being done to something more sensible.
I'm leaving the safi being broadcast through the zapi
message, as I am not sure what else should be ripped
out at this point in time.
Finally, re-arrange the zread_client_capabilities function
to stop the multiple levels of function calling that really
serve no purpose.
By the time this function is called we have already
ensured that the pointers are good several times.
I like consistency, but this is a bit much.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When GR is running and attempting to clear up a node,
if the node that is currently saved (the one we are coming
back to) happens to be deleted while zebra suspends the
GR code due to hitting the node limit, then the zebra GR
code will just completely stop processing and potentially
leave stale nodes around forever.
Let's just remove this hole and process what we can.
Can you imagine trying to debug this after the fact?
If we remove a node then that counts toward the maximum
to process of ZEBRA_MAX_STALE_ROUTE_COUNT. This should
prevent any non-processing with a slightly larger cost
of having to look at a few nodes repeatedly
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The info->do_delete variable was being set to true only when
u.val was 1. The problem with this is that u.val is a union
and the various ways that we can call this event causes
different values to be written to the union value on the thread.
This makes no sense. Just set the variable to what we want it to
be when we need it to be true, since it was only ever set during
a thread_execute section.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Effectively a massive search and replace of
`struct thread` to `struct event`. Using the
term `thread` gives people the impression that
this event system is a pthread, when it is not.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This is a first in a series of commits, whose goal is to rename
the thread system in FRR to an event system. There is a continual
problem where people are confusing `struct thread` with a true
pthread. In reality, our entire thread.c is an event system.
In this commit rename the thread.[ch] files to event.[ch].
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The 'show mpls table json' command displays the outgoing interface
name only when the nexthop type is either NEXTHOP_TYPE_IFINDEX or
NEXTHOP_TYPE_IPV6_IFINDEX. Add the interface name for the nexthop
type NEXTHOP_TYPE_IPV4_IFINDEX.
Fixes: b78b820d46d6 ("MPLS: Display enhancements and JSON support")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
This commit addresses the case where a service wants to install
an LSP entry to a next-hop located in a VRF instance. The incoming
MPLS packet is on the namespace and has to be directed to a nexthop
located behind an interface that sits in a specific VRF instance.
The iproute2 commands below illustrate this:
> ip link add vrf1 type vrf table 10
> ip link set dev vrf1 up
> ip link set dev eth0 master vrf1
> ip a a 192.0.2.1/24 dev eth0
> ip -f mpls route add 105 via inet 192.0.2.45 dev eth0
If a service uses the ZEBRA_MPLS_LABELS messages, then the LSP
message is ignored: from zebra perspective, the MPLS entries are
visible via the 'show mpls table' command, but no LSP entry is
installed in the kernel.
The issue is in the nhlfe_nexthop_active_ipv[4/6] function: the
outgoing interface mentioned in the nexthop is searched in the
main VRF, whereas the interface is in a separate VRF. The interface
is not found, and the nhlfe to install is considered not active.
To address this issue, reuse the incoming vrf_id parameter transmitted
in the nexthop structure from the ZEBRA_MPLS_LABELS message. When
creating an NHLFE entry, the vrf_id is used instead of the DEFAULT_VRF.
Then the nhlfe entry can be considered active.
One alternate solution to reusing the vrf_id parameter in the mpls network
context would be to modify the search in the nhlfe_nexthop_active..()
function to look for an existing ifindex in the zns. However, this
solution may not fit later when the netns backend is used.
Note that some changes have not been done yet and are considered
sufficient for now:
- The 'nhlfe_find' API: the assumption is done that only the linux vrf
backend is used for now.
- The 'mpls_lsp_install()' API: it is currently used by the CLI command,
which does not handle the interface parameter, and by the SRTE service, which
always sends LSPs towards a nexthop located in the VRF_DEFAULT.
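A standalone sketch of the lookup change described above: search the interface in the VRF carried by the nexthop instead of always the default VRF (the toy table and `if_lookup()` helper are hypothetical):
```
#include <stdbool.h>
#include <stdio.h>

#define VRF_DEFAULT 0

/* Toy interface table keyed by (ifindex, vrf_id); stands in for the
 * real lookup so the point of the fix is visible. */
struct iface { int ifindex; int vrf_id; const char *name; };

static struct iface ifaces[] = {
	{ 2, 10, "eth0" },   /* eth0 enslaved to vrf1 (table 10) */
};

static struct iface *if_lookup(int ifindex, int vrf_id)
{
	for (unsigned i = 0; i < sizeof(ifaces) / sizeof(ifaces[0]); i++)
		if (ifaces[i].ifindex == ifindex && ifaces[i].vrf_id == vrf_id)
			return &ifaces[i];
	return NULL;
}

int main(void)
{
	int nh_ifindex = 2, nh_vrf_id = 10;

	/* Old behaviour: always look in the default VRF -> not found,
	 * so the nhlfe is considered inactive and never installed. */
	printf("default-vrf lookup: %s\n",
	       if_lookup(nh_ifindex, VRF_DEFAULT) ? "found" : "not found");

	/* Fixed behaviour: use the vrf_id carried in the nexthop. */
	printf("nexthop-vrf lookup: %s\n",
	       if_lookup(nh_ifindex, nh_vrf_id) ? "found" : "not found");
	return 0;
}
```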
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The ZEBRA_MPLS_LABELS_[ADD/DELETE/REPLACE] messages may change an
LSP entry based on an incoming MPLS entry, followed by a given
next-hop.
A next hop with no label information inside is rejected
by the zebra layer. As an illustration, the following ZAPI message
would be rejected, because the next hop does not contain any
label information:
> ip -f mpls route add 105 via inet 192.0.2.45
At the same time, supporting such a configuration is desirable.
An attempt was made to configure the next-hop with an implicit-
null label instead, but that message is rejected by the kernel:
> ip -f mpls route add 104 as 3 via inet 192.0.2.45
> Error: Implicit NULL Label (3) can not be used in encapsulation.
The commit proposes to accept ZEBRA_MPLS_LABELS_[XX] messages with
a nexthop that does not contain any label information.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Add a hash_clean_and_free() function as well as convert
the code to use it. This function also takes a double
pointer to the hash to set it NULL. Also it cleanly
does nothing if the pointer is NULL (as a bunch of
code tested for).
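A minimal sketch of the double-pointer pattern (a toy hash type, not FRR's real implementation):
```
#include <stdio.h>
#include <stdlib.h>

/* Toy "hash" so the double-pointer pattern is demonstrable. */
struct hash { int nentries; };

/* Sketch of a clean-and-free helper: does nothing on NULL, frees the
 * table, and NULLs the caller's pointer so it cannot be reused. */
static void hash_clean_and_free(struct hash **h)
{
	if (!h || !*h)
		return;
	/* real code would walk the entries and call a free callback here */
	free(*h);
	*h = NULL;
}

int main(void)
{
	struct hash *h = calloc(1, sizeof(*h));

	hash_clean_and_free(&h);   /* frees and sets h = NULL */
	hash_clean_and_free(&h);   /* safe no-op on NULL */
	printf("h is %s\n", h ? "set" : "NULL");
	return 0;
}
```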
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Issue:
When a netns is deleted, since zebra doesn’t receive interface down/delete
notifications from kernel, it manually deletes the interface without removing
the association between zebra_l3vni and the interface that is being deleted
(i.e it deletes the interface without setting “zl3vni->vxlan_if” to NULL).
Later, during the deletion of netns, when zl3vni_rmac_uninstall() is called to
uninstall the remote RMAC from the kernel, zebra ends up accessing stale
“zl3vni->vxlan_if” pointer, which now points to freed memory.
This was causing heap use-after-free.
Fix:
Before zebra starts deleting the interfaces when it receives the netns delete notification,
the appropriate functions are called to remove the association between the evpn structs
and interface and set “zl3vni->vxlan_if” to NULL. This ensures that when
zl3vni_rmac_uninstall() is called during netns deletion, it will bail because
“zl3vni->vxlan_if” is NULL.
Signed-off-by: Pooja Jagadeesh Doijode <pdoijode@nvidia.com>
The "show zebra mpls .. json" vty command may return empty information
in case the MPLS database is empty or a given label entry is not
available. When those errors occur, add the braces to return a
valid json format.
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
The GR debug logs are doing all sorts of wonderful stuff
but they were not actually displaying anything useful to the operator
about what vrf we are operating in.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Create VRF and interfaces:
ip netns add vrf1
ip link add veth1 index 100 type veth
ip link add link veth1 veth1.200 type vlan id 200
ip link set veth1.200 netns vrf1
ip -n vrf1 link add veth2 index 100 type veth
After reloading zebra, "show interface veth1.200" shows wrong parent
interface:
test# show interface veth1.200
Interface veth1.200 is down
...
Parent interface: veth2
This is because veth1.200 and veth1 are in different netns, and veth2
happens to have the same ifindex as veth1, in the same netns of
veth1.200.
When looking for parent, link-ifindex 100 should be looked up within
link-netns, rather than that of the child interface.
Add link_nsid to zebra interface, so that the <link_nsid, link_ifindex>
pair can uniquely identify the link interface.
Signed-off-by: Xiao Liang <shaw.leon@gmail.com>
Once RP/BSR address is learned in PIMD, PIMD does nexthop tracking
in Zebra.
For IPV6 address, the nexthop type is either NEXTHOP_TYPE_IPV6
or NEXTHOP_TYPE_IPV6_IFINDEX.
Zebra should send nexthop ifindex information along with nexthop address
to the client (PIMD).
Issue: #11526
Issue: #11957
Signed-off-by: Sarita Patra <saritap@vmware.com>
Coverity rightly points out that a call into zebra_l2_bridge_if_vlan_find
is NULL checked 4/5 times. Let's make it 5/5.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
a) Consolidate v4 and v6 versions of rib_match_multicast
b) Improve debug to show what we matched against as well.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In `rib_link`, if is_zebra_import_table_enabled returns
true, `rib_queue_add` is not called, resulting in the other
table's route nodes never being processed. This actually should not
depend on whether the route is imported.
In `rib_delnode`, if is_zebra_import_table_enabled returns
true, it will use `rib_unlink` instead of enqueuing the
route node for processing. There is no reason that imported
route nodes should not be reprocessed. Long ago, the
behaviour was dependent on whether the route_entry comes
from a table other than main.
Signed-off-by: zyxwvu Shi <i@shiyc.cn>
When we are installing the flood entry for a vtep in SVD,
ensure VNI is set on the ctx object so that it gets
sent to the kernel and set appropriately with src_vni.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ticket: 2698649
Testing Done: precommit and evpn-min
Problem:
When the mcast-group is updated, the changes were being read from netlink
and populated by zebra, but when the kernel sends the fdb delete for the
group, we are deleting the mcast-group that we newly updated. This is because
currently we blindly reset the mcast-group during fdb delete without checking
the mcast-group associated with the vni.
Fix is to separate add/update and delete mcast-group functions and to check
for mcast-group before resetting during delete.
Signed-off-by: sramamurthy <sramamurthy@nvidia.com>
Ticket: 2674793
Testing Done: precommit, evpn-min and evpn-smoke
The problem in this case is whenever we are triggering ifdown
followed by ifup of bridge, we see that remote mac entries
are programmed with vlan-1 in the fdb from zebra and never cleaned up.
bridge has vlan_default_pvid 1 which means any port that gets added
will initially have vlan 1 which then gets deleted by ifupdown2 and
the proper vlan gets added.
The problem lies in zebra where we are not cleaning up the remote
macs during vlan change.
Fix is to uninstall the remote macs and then install them
during vlan change.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
Signed-off-by: Vivek Venkatraman <vivek@nvidia.com>
Ticket: #2613048
Reviewed By:
Testing Done:
1. Manual verification - logs in the ticket
2. Precommit (user job #171) and evpn-min (user job #170)
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ticket: 2730328, 2724075
Reviewed By: CCR-11741, CCR-11746
Testing Done: Unit Test
2730328: At a high bridge-vids count, VNI devices are not added in FRR if
FRR restarts after loading e/n/i.
The issue is a buffer overflow with respect to netlink_recv_msg.
The kernel recv message buffer is defined on the stack with a size of 32768 (32K).
When the configuration is applied without an FRR restart things work fine
because the recv messages from the kernel are well within the 32K limit.
However, with this configuration, when FRR was restarted I could see that
some recv messages were crossing the 32K limit and hence weren't processed.
The error log below was seen when FRR was restarted with the configuration.
2021/08/09 05:59:55 ZEBRA: [EC 4043309092] netlink-cmd (NS 0) error: data remnant size 32768
Fix is to increase the buffer size by another 2K.
2724075: evpn mh/SVD - some of the remote neighs/macs aren't installed
in kernel post ifdown/ifup bridge
The issue was specific to SVD. During ifdown/ifup of the bridge,
I could see that the access-bd was not associated with the vni and hence
the remote neighs were not getting programmed in the kernel.
Fix is to reference (or associate) vxlan vni to the access-bd when
the vni is reported up. With this fix, I was able to see the remote
neighs getting programmed to the kernel.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Ignore GETVLAN errors at startup like we are doing
for nexthop groups. Older platforms don't support the API.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ignore zebra_mac updates if they do not contain a VNI for vxlan
interface. We don't have anything we can do with them.
'''
==443593== Process terminating with default action of signal 6 (SIGABRT): dumping core
==443593== at 0x4E1156C: __pthread_kill_implementation (in /usr/lib64/libc.so.6)
==443593== by 0x4DC4D15: raise (in /usr/lib64/libc.so.6)
==443593== by 0x49823C7: core_handler (sigevent.c:261)
==443593== by 0x4DC4DBF: ??? (in /usr/lib64/libc.so.6)
==443593== by 0x4E1156B: __pthread_kill_implementation (in /usr/lib64/libc.so.6)
==443593== by 0x4DC4D15: raise (in /usr/lib64/libc.so.6)
==443593== by 0x4D987F2: abort (in /usr/lib64/libc.so.6)
==443593== by 0x49C3064: _zlog_assert_failed (zlog.c:700)
==443593== by 0x4F5E6D: zebra_vxlan_if_vni_find (zebra_vxlan_if.c:661)
==443593== by 0x4EEAC3: zebra_vxlan_check_readd_vtep (zebra_vxlan.c:4244)
==443593== by 0x450967: netlink_macfdb_change (rt_netlink.c:3722)
==443593== by 0x450011: netlink_neigh_change (rt_netlink.c:4458)
'''
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Properly handle ipv6-mapped-ipv4 with DVNI by converting
the address to ipv4 and setting that as the DST field for
the encap.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
The `show evpn next-hop svd *` command doesn't provide much
for users right now. Make it hidden so we can still debug
the tables with it.
Also remove SVD output from `show evpn next-hop vni all`.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Don't install implicit NULL labels with non-vni labeled
routes.
This returns behavior to how it was before adding the DVNI code.
Ticket: #2677036
Testing Done: precommit, manual
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Read in STP state changes for a Single Vxlan Device
via bridge vlan netlink messages. Map the vlanid to a
VNI in the SVD table and treat it similar to how
we handle proto down of the Vxlan device traditionally
in a non-SVD device scenario.
Forwarding == Interface UP
Blocking == Interface DOWN
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add some show commands and expand some already existing
commands so we can get debug info from the SVD global
neigh table inside zebra.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add code in the nhg resolution path for determining if Downstream
VNI is in play. This is the only place in all of zebra where
we should be arbitrarily setting the ifindex/labels since
this is where new nhgs are created/destroyed. If something
changes, it must happen here.
We determine if D-VNI is being used by matching the carried
label (VNI) on the nexthop with the vrf VNI from the route.
If they do not match, we can assume this is a D-VNI labeled
nexthop.
We loop through all of the group to see if any are D-VNI. If even
one is, we must treat them all as such. Otherwise, fallback to
traditional EVPN route handling and remove all the labels.
If they are going to be treated as D-VNI we retain the labels and
verify the underlying VRF vxlan interface is a Single VXlan Device.
If it is not, we cannot use D-VNI. If it is, continue on. The VNI label
will be encapped via LWTUNNEL and sent to the kernel.
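A small sketch of the detection logic described above, with made-up types (`struct nh`, `group_uses_dvni()`) standing in for zebra's nexthop-group structures:
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy nexthop carrying a VNI as its label; names are illustrative. */
struct nh { uint32_t label; };

/* If any nexthop in the group carries a label (VNI) different from
 * the route's VRF VNI, treat the whole group as Downstream VNI and
 * keep the labels; otherwise strip them (traditional EVPN handling). */
static bool group_uses_dvni(const struct nh *nhs, int num, uint32_t vrf_vni)
{
	for (int i = 0; i < num; i++)
		if (nhs[i].label && nhs[i].label != vrf_vni)
			return true;
	return false;
}

int main(void)
{
	struct nh group[2] = { { .label = 4001 }, { .label = 4002 } };
	uint32_t vrf_vni = 4001;

	printf("D-VNI in use: %s\n",
	       group_uses_dvni(group, 2, vrf_vni) ? "yes" : "no");
	return 0;
}
```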
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Install neigh entries always on SVD if it exists in
zebra. If zebra is using a Single Vxlan Device, we must
duplicate the install of our neigh entries to it so that
vxlan communication can also work across it in the downstream VNI
case.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Encode the vni label during route install on linux
systems via lwt encap 64bit LWTUNNEL_IP_ID. The kernel expects
this in network byte order, so we convert it.
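A tiny standalone example of the byte-order conversion; the netlink attribute encoding itself is not shown:
```
#include <endian.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t vni = 4001;

	/* The kernel expects the 64-bit tunnel id in network byte
	 * order, so convert before placing it into the attribute. */
	uint64_t wire = htobe64(vni);

	printf("host order: 0x%016" PRIx64 "\n", vni);
	printf("wire order: 0x%016" PRIx64 "\n", wire);
	return 0;
}
```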
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add the ability to specify the label type along with the labels
you are passing to zebra in zapi_nexthop. This is needed as we
abstract the label code to be re-used by evpn as well as mpls.
Protocols need to be able to set the type of label they have attached.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Use the already existing mpls label code to store VNI
info for vxlan. VNI's are defined as labels just like mpls,
we should be using the same code for both.
This patch is the first part of that. Next we will need to
abstract the label code to not be so mpls specific. Currently
in this, we are just treating VXLAN as a label type and storing
it that way.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
This patch addresses fix for issues found during static analysis.
rt_netlink - initialise vtep if there is NDA_DST attribute
if_netlink - initialise vni_start and vni_end
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
zebra_vxlan_if.h header file was missed in noinst_HEADERS resulting
in build failure for some platforms.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
zebra_l2_bridge_if.h header file was missed in noinst_HEADERS resulting
in build failure for some platforms.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
This patch addresses following issues,
- When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
- When VNI configuration is performed using VLAN-VNI mapping (i.e., without
individual VXLAN interfaces) and flooded traffic is handled via multicast,
the multicast group corresponding to the VNI needs to be explicitly read
from the bridge FDB. This is relevant in the case of netlink interface to
the kernel and for the scenario where a new VNI is provisioned or comes up.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This patch addresses following
- Remove unused VLAN Id parameter when trying to determine the VNI associated
with a non-VLAN aware bridge. Also, add a check to ensure that in this case,
we have a per-VNI VXLAN interface. Due to sequence of events, it is possible
that we may have VLAN-VNI mappings, in which case the code should return
gracefully.
- With support for a container VXLAN interface that has VLAN-VNI mappings,
the VXLAN interface itself may be up but a particular VNI might have
been removed. Ensure that VNI mapping exists before proceeding with
further processing.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This patch addresses following bug fixes
- Fix vtysh doc string in "show evpn access-vlan..." command
- Multicast group handling was a little complex. This change avoids calling
multiple functions and directly calls zebra_vxlan_if_update_vni for
mcast group updates.
- When a vlan-vni map is removed, the removed vni's deletion was not handled
correctly in FRR with SVD config. This resulted in a stale vni and the
vni deletion not being propagated.
During vni cleanup (zebra_vxlan_if_vni_clean), zebra_vxlan_if_vni_del
was called for the vni delete, which is not correct. We should be calling
zebra_vxlan_if_vni_entry_del for the given vni entry.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Today, to find the vni for a given (vlan, bridge), we walk over all interfaces
and filter the vxlan device associated with the bridge. With the multiple vlan aware
bridge changes, we can derive the vni directly by looking up the hash table, i.e.
the vlan_table of the associated (vlan, bridge), which gives the vni.
During vrf_terminate() call zebra_l2_bridge_if_cleanup if the interface
that we are removing is of type bridge. In this case, we walk over all
the vlan<->access_bd association and clean them up.
zebra_evpn_t is modified to record (vlan, bridge) details and the
corresponding vty is modified to print the same.
zevpn_bridge_if_set and zl3vni_bridge_if_set is used to set/unset the
association.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Multiple vlan aware bridge data structure changes and its corresponding bridge
handling changes.
A new vlan-table is maintained for each bridge which records the zebra_l2_bridge_vlan
entry. zebra_l2_bridge_vlan maps vlan to access_bd associated to this bridge.
Existing zebra_evpn_access_bd structure is vlan aware which is now modified to be
(vlan, bridge) aware.
Whenever a new access_bd is instantiated, a corresponding entry is also recorded
in the zebra l2 bridge for the vlan.
When the access_bd is dereferenced or whenever a bridge is deleted, the
association is cleaned up.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change brings in following functionality
- netlink_bridge_vxlan_vlan_vni_map_update for single vxlan devices
This function is responsible for reading the vlan-vni map information
received from netlink and populating a new hash_table with the vlan-vni
data. Once all the vlan-vni data is collected, zebra_vxlan_if_vni_table_add_update
is called to update vni_table in vxlan interface and process each of the
vlan-vni data.
- refactoring changes for zevpn_build_hash_table
- existing zevpn_build_hash_table was walking over all the vxlan interfaces
and then processing the vni for each of them. In case of single vxlan device,
we will have more than one vni entries. This function is abstracted so that
it iterates over all the vni entries for single vxlan device. For traditional
vxlan device the zebra_vxlan_if_vni_iterate would only process single vni
associated with that device.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change modifies zebra_vxlan_if_up/down/add/update and del functionality
to be per vni based.
zebra_vxlan_if_add/update/del and zebra_vxlan_if_up/down now handles
the vni operations based on vxlan device type (single or traditional vxlan device).
zebra_vxlan_if_vni_table_add_update
- This function handles the vlan-vni map update received from the netlink
interface to single vxlan device vni_table hash table.
zebra_vxlan_if_vni_mcast_group_update
- This function handles the new multicast group update received from
the netlink interface to single vxlan device vni_table hash table.
For traditional vxlan interfaces, the vni and mcast group
handling follows the traditional approach.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change refactors the zebra_vxlan_if related functionality
to a new zebra_vxlan_if.c file. zebra_vxlan_if_up/down and
zebra_vxlan_if_add/update/del are moved to zebra_vxlan_if.c.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
dplane_mac_info and dplane_neigh_info is modified to be vni aware.
dplane_rem_mac_add/del dplane_mac_init is modified to be vni aware.
During dplane context update (mac and neigh), we use the vni information
and if set, corresponding netlink attribute NDA_SRC_VNI is set and passed to the
dplane.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change set introduces data structure changes required for multiple vlan aware bridge
functionality. A new structure zebra_l2_bridge_if encapsulates the vlan to access_bd
association of the bridge. A vlan_table hash_table is used to record each instance
of the vlan to access_bd of the bridge via zebra_l2_bridge_vlan structure.
vxlan iftype derivation: netlink attribute IFLA_VXLAN_COLLECT_METADATA is used
to derive the iftype of the vxlan device. If the attribute is present, then the
vxlan interface is treated as single vxlan device, otherwise it would default to
traditional vxlan device.
zebra_vxlan_check_readd_vtep, zebra_vxlan_dp_network_mac_add/del is modified to
be vni aware.
mac_fdb_read_for_bridge - is modified to be (vlan, bridge) aware
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This changeset introduces the data structure changes needed for
single vxlan device functionality. A new struct zebra_vxlan_vni_info
encodes the iftype and vni information for vxlan device.
The change addresses related access changes of the new data structure
fields from different files
zebra_vty is modified to take care of the vni dump information according
to the new vni data structure for vxlan devices.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Currently `ip import-table 33` imports routes with
a distance of 15, as defined by zebra.h. zebra_rib.c
on the other hand believes the default value for the table
is 150. Let's make them agree with each other.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the defines for distance that are in zebra.h. We could
easily have a cluster where we don't agree with ourselves. So
let's convert zebra to use the defines in zebra.h
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The define of ZEBRA_ON_RIB_PROCESS_HOOK_CALL was in zebra.h
which exposes it to everyone, except zebra is the only daemon
to use this define. This does not belong in zebra.h.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add affinity-map hooks to check the utilization of affinity-map in
link-params before its deletion and to update link-params when the
affinity-map bit-position is updated.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Add the support of Extended Admin-Group (RFC7308) to the zebra interface
link-params Traffic-Engineering context.
Extended admin-groups can be configured with the affinity-map:
> affinity-map blue bit-position 221
> int eth-rt1
> link-params
> affinity blue
> exit-link-params
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
Add the affinity-map global command to zebra. The syntax is:
> affinity-map NAME bit-position (0-1023)
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
The files converted in this commit either had some random misspelling or
formatting weirdness that made them escape automated replacement, or
have a particularly "weird" licensing setup (e.g. dual-licensed.)
This also marks a bunch of "public domain" files as SPDX License "NONE".
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
There existed the idea, from Volta, that a nexthop group would not have
the same nexthops installed -vs- what FRR actually sent down. The
dplane would notify you.
With the addition of 06525c4f99,
the code was put behind a bit of a wall that controlled
its usage.
The flag ROUTE_ENTRY_USE_FIB_NHG flag was being used
to control which set was being sent up to concerned parties
in nexthop tracking. Put this flag behind the wall and
do not necessarily set it when we receive a data plane
notification about a route being installed or not.
Fixes: #12706
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Locking around the list of providers/plugins is not
helpful - these only change at init time. Clear some SA
warnings by removing the locking.
Signed-off-by: Mark Stapp <mjs@labn.net>
1. Renamed "gates" to "nexthops".
2. Display the afi of the nexthops being displayed in place of the
"nexthops" JSON object in the old JSON output.
3. Call show_route_nexthop_helper() and show_nexthop_json_helper()
instead of print_nh() in order to keep the fields in the "nexthops"
JSON object in sync with the "nexthops" JSON object of
"show nexthop-group rib json".
Updated vtysh:
r1# show ip nht
192.168.0.2
resolved via connected
is directly connected, r1-eth0 (vrf default)
Client list: static(fd 28)
192.168.0.4
resolved via connected
is directly connected, r1-eth0 (vrf default)
Client list: static(fd 28)
Updated JSON:
r1# show ip nht json
{
"default":{
"ipv4":{
"192.168.0.2":{
"nhtConnected":false,
"clientList":[
{
"protocol":"static",
"socket":28,
"protocolFiltered":"none"
}
],
"nexthops":[
{
"flags":3,
"fib":true,
"directlyConnected":true,
"interfaceIndex":2,
"interfaceName":"r1-eth0",
"vrf":"default",
"active":true
}
],
"resolvedProtocol":"connected"
}
}
}
}
Signed-off-by: Pooja Jagadeesh Doijode <pdoijode@nvidia.com>
The fpm:netlink format doesn't indicate the protocol information
in routes from BGP, OSPF and other protocols. Routes of those
protocols just indicate the protocol as zebra.
The route below is actually a BGP route, but 'proto': 11
indicates that it is zebra.
{'attrs': [('RTA_DST', 'dummy'),
('RTA_PRIORITY', 0),
('RTA_GATEWAY', 'dummy'),
('RTA_OIF', 2)],
'dst_len': 32,
'family': 2,
'flags': 0,
'header': {'flags': 1025,
'length': 60,
'pid': 3160253895,
'sequence_number': 0,
'type': 24},
'proto': 11,
'scope': 0,
'src_len': 0,
'table': 254,
'tos': 0,
'type': 1}
With this change it is now seen with 'proto': 186,
which indicates that it is BGP.
{'attrs': [('RTA_DST', 'dummy'),
('RTA_PRIORITY', 0),
('RTA_GATEWAY', 'dummy'),
('RTA_OIF', 2)],
'dst_len': 32,
'family': 2,
'flags': 0,
'header': {'flags': 1025,
'length': 60,
'pid': 3160253895,
'sequence_number': 0,
'type': 24},
'proto': 186,
'scope': 0,
'src_len': 0,
'table': 254,
'tos': 0,
'type': 1}
Signed-off-by: Spoorthi K <spk@redhat.com>
Don't directly use `time()` for generating sequence numbers for two
reasons:
1. `time()` can go backwards (due to NTP or time adjustments)
2. Coverity Scan warns every time we truncate a `time_t` variable for
good reason (verify that we are Y2K38 ready).
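One simple alternative, shown here only as an illustration of the reasoning above (not necessarily the exact replacement used in the fix): a plain monotonically increasing counter, which never goes backwards and never truncates a `time_t`:
```
#include <stdint.h>
#include <stdio.h>

/* A sequence number from a monotonically increasing counter: it is
 * unaffected by clock adjustments and involves no time_t narrowing. */
static uint32_t next_sequence(void)
{
	static uint32_t seq;

	return ++seq;
}

int main(void)
{
	printf("%u %u %u\n", next_sequence(), next_sequence(),
	       next_sequence());
	return 0;
}
```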
Found by Coverity Scan (CID 1519812, 1519786, 1519783 and 1519772)
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
The two commands (`advertise-svi-ip` and `advertise-default-gw`) can
be set in both `BGP_EVPN_NODE` and `BGP_EVPN_VNI_NODE`. So, when
configuring one of them, the configuration of the other needs to be
considered. When configuring under `BGP_EVPN_NODE`, the other is checked.
However, the conversion is wrong when configured under `BGP_EVPN_VNI_NODE`.
One example:
With the following steps, the evpn routes with `SVI` will be mistakenly
withdrawn.
```
anlan(config-router-af)# advertise-svi-ip
anlan(config-router-af)# vni 100
anlan(config-router-af-vni)# advertise-svi-ip
anlan(config-router-af-vni)# no advertise-svi-ip
```
This commit fixes the conversion under `BGP_EVPN_VNI_NODE` for the
two commands.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Don't attempt to dereference `ifp` directly if it might be null: there
is a check right before this usage: `ifp ? ifp->info : NULL`.
In this context it should be safe to assume `ifp` is not NULL because
the only caller of this function checks that for this `ifindex`. For
consistency we'll check for null anyway in case this ever changes (and
with this the coverity scan warning gets silenced).
Found by Coverity Scan (CID 1519776)
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Don't attempt to encode the pointer address instead pass the pointer
directly so the real contents can be accessed.
(`ri->pref_src` type is `union g_addr *`)
Found by Coverity Scan (CID 1482162)
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
Do extra inotify data structure checks and copy the file name to a stack
buffer making sure it is null byte terminated.
Found by Coverity Scan (CID 1465494)
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
After calling `rib_unlink` the variable `re` will point to `free()`d
memory, so don't attempt to use it after this point.
Found by Coverity Scan (Coverity ID 1519784)
Signed-off-by: Rafael Zalamena <rzalamena@opensourcerouting.org>
When FRR receives a netlink message that it decides to stop parsing,
it returns a 0 (instead of a -1). Just make the dplane continue
reading other data instead of aborting the read.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Most 32-bit architectures cannot do atomic loads and stores of data
wider than their pointer size, i.e. 32 bit. Funnily enough they
generally *can* do a CAS2, i.e., 64-bit compare-and-swap, but while a
CAS can emulate atomic add/bitops, loads and stores aren't available.
Replace with a mutex; since this is 99% used from the zserv thread, the
mutex should take the local-to-thread fast path anyway. And while one
atomic might be faster than a mutex lock/unlock, we're doing several
here, and at some point a mutex wins on speed anyway.
This fixes build on armel, mipsel, m68k, powerpc, and sh4.
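A standalone sketch of the replacement pattern: a mutex-protected 64-bit counter that works on 32-bit targets where plain 64-bit loads/stores are not atomic (names are illustrative):
```
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Guard the 64-bit counter with a mutex instead of C11 atomics.  The
 * uncontended lock is cheap when almost all access is from a single
 * thread, which is the common case described above. */
static pthread_mutex_t counter_mtx = PTHREAD_MUTEX_INITIALIZER;
static uint64_t counter;

static void counter_add(uint64_t v)
{
	pthread_mutex_lock(&counter_mtx);
	counter += v;
	pthread_mutex_unlock(&counter_mtx);
}

static uint64_t counter_get(void)
{
	uint64_t v;

	pthread_mutex_lock(&counter_mtx);
	v = counter;
	pthread_mutex_unlock(&counter_mtx);
	return v;
}

int main(void)
{
	counter_add(1ULL << 40);
	printf("counter = %llu\n", (unsigned long long)counter_get());
	return 0;
}
```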
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
When FRR receives a route notification from the kernel about the route
offload success/failure, the metric being reported is not
going to be correct since we may not know it appropriately
at this point in time. If we can, set the metric to something
appropriate.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When we are notified by the kernel about a route being offloaded
or not, correctly set the distance.
Ticket: CM-33097
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
New show command "show evpn mac vni xx detail [json]"
to display details of all the mac entries for the
requested VNI.
Output of show evpn mac vni xx detail json:
{
"numMacs":2,
"macs":{
"ca:be:63:7c:81:05":{
"type":"local",
"intf":"veth100",
"ifindex":8,
"uptime":"00:06:55",
"localSequence":0,
"remoteSequence":0,
"detectionCount":0,
"isDuplicate":false,
"syncNeighCount":0,
"neighbors":{
"active":[
"fe80::c8be:63ff:fe7c:8105"
],
"inactive":[
]
}
}
}
}
Also added remoteEs field in the JSON output of
"show evpn mac vni xx json".
Output of show evpn mac vni xx json:
"00:02:00:00:00:0d":{
"type":"remote",
"remoteEs":"03:44:38:39:ff:ff:02:00:00:02",
"localSequence":0,
"remoteSequence":0,
"detectionCount":0,
"isDuplicate":false
}
Signed-off-by: Pooja Jagadeesh Doijode <pdoijode@nvidia.com>
netlink_route_multipath_msg_encode checks whether the local kernel
supports NextHop Netlink message and doesn't send the message if the
local kernel doesn't have support. This also applies to the FPM since
the kernel dataplane and the FPM share the same code. However, for the FPM,
it's not necessary to have this limit.
This commit adds an extra check for whether netlink_route_multipath_msg_encode
is called from the FPM and bypasses the kernel support check if it is from the
FPM.
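A simplified sketch of the decision, with a hypothetical `from_fpm` flag standing in for the real parameter:
```
#include <stdbool.h>
#include <stdio.h>

/* The kernel-support check only applies when the message is destined
 * for the kernel, not when the same encoder runs on behalf of the FPM. */
static bool should_encode(bool kernel_supports_nh, bool from_fpm)
{
	if (from_fpm)
		return true; /* FPM consumers are not bound by kernel support */
	return kernel_supports_nh;
}

int main(void)
{
	printf("kernel path, no nh support: %d\n",
	       should_encode(false, false)); /* 0: skip */
	printf("fpm path, no nh support:    %d\n",
	       should_encode(false, true));  /* 1: encode anyway */
	return 0;
}
```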
Signed-off-by: Yutaro Hayakawa <yutaro.hayakawa@isovalent.com>
Zebra has a shutdown setup where it asks the dplane to shutdown but can
still be processing data. This is especially true if something the dplane
is listening on receives data that will be processed by the main dplane thread
from netlink. When zebra_finalize is called it is possible that a bit
of data comes in before the zebra_dplane_shutdown() function is called
and the memory freed in ns_walk_func() causes the main dplane event
to crash when it cannot find the ns data anymore.
Reverse the order, stop the zebra dplane pthread and then free the
memory associated with the namespaces.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The wrong parameter is passed to `inet_ntop()` in `zfpm_log_route_info()` of the
old fpm module, so the display of the gateway is always wrong. Just remove
that extra ampersand.
Additionally, use "none" as the gateway value for the case of no gateway.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
When the last IPv4 address of an interface is deleted, Linux removes
all routes using this interface without any Netlink advertisement.
Routes that have an IPv4 nexthop are correctly removed from the FRR RIB.
However, routes whose only nexthop is an interface with no more IPv4
addresses remain in the FRR RIB.
In this situation, for the routes that use this particular interface as
a nexthop:
- remove the kernel routes from zebra
- reinstall the routes that have been added from FRR. This is useful when
the nexthop is, for example, a VRF interface.
Add related test cases in the zebra_netlink topotest.
Signed-off-by: Louis Scalbert <louis.scalbert@6wind.com>
The early route queue has a series of `struct zebra_early_route *`
entries. Zebra is treating this memory as just a `struct route entry`.
This is wrong. Correct this to free the memory correctly.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The wq->spec.errorfunc is never used in the code.
It's been in the code base since 2005 and I also
do not remember ever seeing it being called. No
workqueue process function ever returns an error.
Since it's not used, let's just remove it from the
code base.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Address Sanitizer found this:
=================================================================
==418623==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 128 byte(s) in 4 object(s) allocated from:
#0 0x4bd732 in calloc (/usr/lib/frr/zebra+0x4bd732)
#1 0x7feaeab8f798 in qcalloc /home/sharpd/frr8/lib/memory.c:116:27
#2 0x7feaeaba40f4 in nexthop_group_new /home/sharpd/frr8/lib/nexthop_group.c:270:9
#3 0x56859b in netlink_route_change_read_unicast /home/sharpd/frr8/zebra/rt_netlink.c:950:9
#4 0x5651c2 in netlink_route_change /home/sharpd/frr8/zebra/rt_netlink.c:1204:2
#5 0x54af15 in netlink_information_fetch /home/sharpd/frr8/zebra/kernel_netlink.c:407:10
#6 0x53e7a3 in netlink_parse_info /home/sharpd/frr8/zebra/kernel_netlink.c:1184:12
#7 0x548d46 in kernel_read /home/sharpd/frr8/zebra/kernel_netlink.c:501:2
#8 0x7feaeacc87f6 in thread_call /home/sharpd/frr8/lib/thread.c:2006:2
#9 0x7feaeab36503 in frr_run /home/sharpd/frr8/lib/libfrr.c:1198:3
#10 0x550d38 in main /home/sharpd/frr8/zebra/main.c:476:2
#11 0x7feaea492d09 in __libc_start_main csu/../csu/libc-start.c:308:16
Indirect leak of 576 byte(s) in 4 object(s) allocated from:
#0 0x4bd732 in calloc (/usr/lib/frr/zebra+0x4bd732)
#1 0x7feaeab8f798 in qcalloc /home/sharpd/frr8/lib/memory.c:116:27
#2 0x7feaeab9b3f8 in nexthop_new /home/sharpd/frr8/lib/nexthop.c:373:7
#3 0x56875e in netlink_route_change_read_unicast /home/sharpd/frr8/zebra/rt_netlink.c:960:15
#4 0x5651c2 in netlink_route_change /home/sharpd/frr8/zebra/rt_netlink.c:1204:2
#5 0x54af15 in netlink_information_fetch /home/sharpd/frr8/zebra/kernel_netlink.c:407:10
#6 0x53e7a3 in netlink_parse_info /home/sharpd/frr8/zebra/kernel_netlink.c:1184:12
#7 0x548d46 in kernel_read /home/sharpd/frr8/zebra/kernel_netlink.c:501:2
#8 0x7feaeacc87f6 in thread_call /home/sharpd/frr8/lib/thread.c:2006:2
#9 0x7feaeab36503 in frr_run /home/sharpd/frr8/lib/libfrr.c:1198:3
#10 0x550d38 in main /home/sharpd/frr8/zebra/main.c:476:2
#11 0x7feaea492d09 in __libc_start_main csu/../csu/libc-start.c:308:16
SUMMARY: AddressSanitizer: 704 byte(s) leaked in 8 allocation(s).
Fix this!
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Read from the fpm dplane a route update that will
include status about whether or not the asic was
successful in offloading the route.
Have this data passed up to zebra for processing and disseminate
this data as appropriate.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add the initial step of passing in a dplane context
to reading route netlink messages. This code
will be run in two contexts:
a) The normal pthread for reading netlink messages from
the kernel
b) The dplane_fpm_nl pthread.
The goal of this commit is to just allow a) to work;
b) will be filled in in the future. Effectively
everything should still be working as it did
before this change. We will just possibly allow
the passing of the context around (but it is not used yet).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
In order for a future commit to abstract the dplane_ctx_route_init
so that the kernel can use it, let's move some stuff around
and add a dplane_ctx_route_init_basic that can be used by multiple
different paths
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Create a dplane_ctx_route_init_basic so it can be used.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Zebra needs the ability to pass this data around.
Add it to the dplanes ability to pass.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
zebra: Add a dplane_ctx_set_flags
The dplane_ctx_set_flags call is missing, we will need it. Add it.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
If we have this pattern:
int ret = FAILURE;
if (foo)
goto done;
....
done:
return ret;
This pattern does us no favors and makes it harder to figure out what is going
on. Let's remove it.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Volta submitted notification changes for the dplane that had a
special use case for their system. Volta is no more, the code
is not being actively developed, and from talking with ex-Volta
employees there are no current plans to even maintain this code.
Wrap the special handling of nexthops that their asic-dataplane
did in a bit of code to isolate it and allow for future removal,
as that I do not actually believe anyone else is using this code.
Add a CPP_NOTICE several years into the future that will tell us
to remove the code. If someone starts using it then they will
have to notice this variable to set it and hopefully they will
see my CPP_NOTICE to come talk to us. If this is being used then
we can just remove this wrapper.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
On shutdown a use after free was being seen of a route table.
Basically the pointer was kept around and resent for cleanup.
Probably something needs to be unwound to make this better
in the future. Just cleaning up the use after free.
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929-=================================================================
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929:==911929==ERROR: AddressSanitizer: heap-use-after-free on address 0x606000127a00 at pc 0x7fb9ad546f5b bp 0x7ffc3cff0330 sp 0x7ffc3
cff0328
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929-READ of size 8 at 0x606000127a00 thread T0
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #0 0x7fb9ad546f5a in route_table_free /home/sharpd/frr8/lib/table.c:103:13
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #1 0x7fb9ad546f04 in route_table_finish /home/sharpd/frr8/lib/table.c:61:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #2 0x6b94ba in zebra_ns_disable_internal /home/sharpd/frr8/zebra/zebra_ns.c:141:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #3 0x6b9158 in zebra_ns_disabled /home/sharpd/frr8/zebra/zebra_ns.c:116:9
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #4 0x7fb9ad43f0f5 in ns_disable_internal /home/sharpd/frr8/lib/netns_linux.c:273:4
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #5 0x7fb9ad43e634 in ns_disable /home/sharpd/frr8/lib/netns_linux.c:368:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #6 0x7fb9ad43e251 in ns_delete /home/sharpd/frr8/lib/netns_linux.c:330:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #7 0x7fb9ad43fbb3 in ns_terminate /home/sharpd/frr8/lib/netns_linux.c:524:3
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #8 0x54f8de in zebra_finalize /home/sharpd/frr8/zebra/main.c:232:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #9 0x7fb9ad5655e6 in thread_call /home/sharpd/frr8/lib/thread.c:2006:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #10 0x7fb9ad3d3343 in frr_run /home/sharpd/frr8/lib/libfrr.c:1198:3
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #11 0x550b48 in main /home/sharpd/frr8/zebra/main.c:476:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #12 0x7fb9acd30d09 in __libc_start_main csu/../csu/libc-start.c:308:16
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #13 0x443549 in _start (/usr/lib/frr/zebra+0x443549)
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929-
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929-0x606000127a00 is located 0 bytes inside of 56-byte region [0x606000127a00,0x606000127a38)
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929-freed by thread T0 here:
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #0 0x4bd33d in free (/usr/lib/frr/zebra+0x4bd33d)
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #1 0x7fb9ad42cc80 in qfree /home/sharpd/frr8/lib/memory.c:141:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #2 0x7fb9ad547305 in route_table_free /home/sharpd/frr8/lib/table.c:141:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #3 0x7fb9ad546f04 in route_table_finish /home/sharpd/frr8/lib/table.c:61:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #4 0x6b94ba in zebra_ns_disable_internal /home/sharpd/frr8/zebra/zebra_ns.c:141:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #5 0x6b9692 in zebra_ns_early_shutdown /home/sharpd/frr8/zebra/zebra_ns.c:164:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #6 0x7fb9ad43f228 in ns_walk_func /home/sharpd/frr8/lib/netns_linux.c:386:9
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #7 0x55014f in sigint /home/sharpd/frr8/zebra/main.c:194:2
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #8 0x7fb9ad50db99 in frr_sigevent_process /home/sharpd/frr8/lib/sigevent.c:130:6
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #9 0x7fb9ad560d07 in thread_fetch /home/sharpd/frr8/lib/thread.c:1775:4
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #10 0x7fb9ad3d332d in frr_run /home/sharpd/frr8/lib/libfrr.c:1197:9
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #11 0x550b48 in main /home/sharpd/frr8/zebra/main.c:476:2
--
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929- #7 0x7fb9acd30d09 in __libc_start_main csu/../csu/libc-start.c:308:16
./bfd_vrf_topo1.test_bfd_vrf_topo1/r2.zebra.asan.911929-
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The `behavior usid` command is installed under the SRv6 Locator node in
the zebra VTY. However, in the SRv6 config write function this command
is wrongly put on the same line as the `prefix X:X::X:X/M` command.
This causes a failure when an SRv6 uSID locator is configured in zebra
and `frr-reload.py` is used to reload the FRR configuration.
This commit prepends a newline character to the `behavior usid` command
in the SRv6 config write function. The output of `show running-config`
before and after this commit is shown below.
Before:
```
Building configuration...
Current configuration:
!
frr version 8.5-dev
!
segment-routing
srv6
locators
locator loc1
prefix fc00:0:1::/48 block-len 32 node-len 16 behavior usid
exit
!
exit
!
exit
!
exit
!
end
```
After:
```
Building configuration...
Current configuration:
!
segment-routing
srv6
locators
locator loc1
prefix fc00:0:1::/48 block-len 32 node-len 16
behavior usid
exit
!
exit
!
exit
!
exit
!
end
```
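A rough sketch of the change (a minimal illustration only; the locator fields, indentation and flag usage here are assumptions, not the exact zebra config-write code):
```c
/* Illustrative sketch of the fix; field names and indentation are
 * assumptions, not the exact zebra config-write code. */
vty_out(vty, "   prefix %pFX block-len %u node-len %u", &locator->prefix,
	locator->block_bits_length, locator->node_bits_length);
if (CHECK_FLAG(locator->flags, SRV6_LOCATOR_USID))
	vty_out(vty, "\n    behavior usid"); /* own line instead of appending */
vty_out(vty, "\n");
```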
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
If we enable MPLS for an interface via sysctl, we should write `mpls enable`,
not `mpls`.
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
The recent tracepoint additions in c317d3f246
did not properly set up the tracepoints for lttng. Fix this.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
This commit adds ZAPI encoders & decoders for traffic control operations, which
include tc_qdisc, tc_class and tc_filter.
Signed-off-by: Siger Yang <siger.yang@outlook.com>
This allows Zebra to manage QDISC, TCLASS, TFILTER in the kernel and do cleanup
jobs when it starts up.
Signed-off-by: Siger Yang <siger.yang@outlook.com>
The latest frr-reload.py in FRR is broken and we can't reload FRR
gracefully with a segment routing locator configuration (if we
execute frr-reload.py, FRR will stop suddenly).
The root cause of this issue is very simple. FRR displays the
current configuration like this (the below is the result of
"show running-config").
```
segment-routing
srv6
locators
locator default
prefix fd00:1:0:1::/64 block-len 40 node-len 24 func-bits 16
exit
!
exit
!
exit
!
exit
```
However, FRR doesn't accept segment routing locator parameters
if we specify block-len and node-len earlier than func-bits.
Signed-off-by: Ryoga Saito <ryoga.saito@linecorp.com>
Currently, in `zebra_srte_client_close_cleanup` we use the `RB_FOREACH`
macro to traverse the SR policies tree. We remove the SR policies within
the loop. Removing elements from the tree and freeing them is not safe
and causes a use-after-free crash whenever
`zebra_srte_client_close_cleanup` is called to perform cleanup.
This commit replaces the `RB_FOREACH` macro with its variant
`RB_FOREACH_SAFE`. Unlike `RB_FOREACH`, `RB_FOREACH_SAFE` permits both
removing tree elements and freeing them from within the loop safely.
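A minimal sketch of the difference, assuming the RB-tree macros from FRR's
`lib/openbsd-tree.h`; the policy type, tree name and delete helper below are
illustrative, not the exact zebra_srte identifiers:
```c
/* Illustrative types only; not the zebra_srte definitions. */
struct sr_policy {
	RB_ENTRY(sr_policy) entry;
	struct zserv *client;
};
RB_HEAD(sr_policy_head, sr_policy);

static struct sr_policy_head policies;

/* Before: RB_FOREACH computes the successor from the current node after
 * the body runs, so deleting and freeing the node inside the body is a
 * use-after-free. */
static void cleanup_unsafe(struct zserv *client)
{
	struct sr_policy *pol;

	RB_FOREACH (pol, sr_policy_head, &policies) {
		if (pol->client == client)
			sr_policy_del(pol); /* frees the node still in use */
	}
}

/* After: RB_FOREACH_SAFE caches the successor before the body runs, so
 * the current node may be removed from the tree and freed safely. */
static void cleanup_safe(struct zserv *client)
{
	struct sr_policy *pol, *next;

	RB_FOREACH_SAFE (pol, sr_policy_head, &policies, next) {
		if (pol->client == client)
			sr_policy_del(pol);
	}
}
```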
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
If you have this order in your configuration file:
no fpm use-next-hop-groups
fpm address 127.0.0.1
the dplane code was using the same event thread t_event, and the second
add event in the code saw that an event was already scheduled and, as
such, did not overwrite it. This left no code to actually start the
whole processing. There are probably other CLI orderings that will
cause this fun as well, but I'm not going to spend the time sussing
them out at the moment.
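A rough sketch of the failure mode (the handler and struct names are invented
for illustration; only the reuse of a single event pointer is the point):
```c
/* Illustrative sketch; not the actual dplane_fpm_nl code.
 * thread_add_event() leaves an already-scheduled event in place rather
 * than replacing it, so two start-up actions sharing one pointer lose
 * the second one. */
thread_add_event(zrouter.master, fpm_nhg_toggle, &gfnc, 0, &gfnc.t_event);

/* later, while the first event is still pending: */
thread_add_event(zrouter.master, fpm_start_connection, &gfnc, 0, &gfnc.t_event);
/* ^ dropped, because gfnc.t_event is already set; nothing ever starts
 *   the FPM connection.  One way to avoid losing the second trigger is
 *   to give each action its own event pointer. */
```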
Fixes: #12314
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The API for configuring ES in zebra had a strict check for if_type
"isBond" that prevented the ES config from being created before the
interface.
Ticket: CM-29454
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Install a new command `behavior usid` into the `SRV6_LOC_NODE` CLI node.
This command allows the user to set/unset the `SRV6_LOCATOR_USID` flag
for an SRv6 locator. The `SRV6_LOCATOR_USID` flag indicates whether a
locator is a uSID locator or not. When the flag is set, the routing
daemons (e.g., bgpd) will install SRv6 behaviors with the uSID in the
dataplane.
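As a hedged illustration, a command of this shape could look roughly like the
following (the DEFPY body, context macro and help strings are assumptions, not
the exact zebra implementation):
```c
/* Hedged sketch only; not the exact zebra_srv6_vty.c code. */
DEFPY (locator_behavior_usid,
       locator_behavior_usid_cmd,
       "[no] behavior usid",
       NO_STR
       "Configure SRv6 behavior\n"
       "Specify behavior uSID\n")
{
	VTY_DECLVAR_CONTEXT(srv6_locator, locator);

	if (no)
		UNSET_FLAG(locator->flags, SRV6_LOCATOR_USID);
	else
		SET_FLAG(locator->flags, SRV6_LOCATOR_USID);

	return CMD_SUCCESS;
}

/* ... and installed under the locator node: */
install_element(SRV6_LOC_NODE, &locator_behavior_usid_cmd);
```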
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
In this commit, we add two helper functions
`zebra_notify_srv6_locator_add` and `zebra_notify_srv6_locator_delete`.
These functions are used to notify locator additions/deletions to
zclients.
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
Fix the following build failure raised since version 8.4 and
d53dc9bd81:
zebra/netconf_netlink.c: In function 'netlink_netconf_change':
zebra/netconf_netlink.c:109:32: error: 'AF_MPLS' undeclared (first use in this function)
109 | if (ncm->ncm_family == AF_MPLS)
| ^~~~~~~
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
In file included from /usr/include/net/ethernet.h:10,
from ./lib/prefix.h:26,
from zebra/tc_netlink.c:32:
/usr/include/netinet/if_ether.h:115:8: error: redefinition of 'struct ethhdr'
115 | struct ethhdr {
| ^~~~~~
In file included from zebra/tc_netlink.c:28:
/usr/include/linux/if_ether.h:169:8: note: originally defined here
169 | struct ethhdr {
| ^~~~~~
Signed-off-by: Donatas Abraitis <donatas@opensourcerouting.org>
At this point, add the ability to encode/decode the
resilience down ZAPI to zebra. Just hook up sharpd
at this point in time.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
FRR does not use the NLM_F_APPEND semantics (in fact I would argue that
the NLM_F_APPEND semantics just introduce pain for all parties involved).
I would also argue that most people who use the kernel netlink API
have recognized that NLM_F_APPEND for a route is a recipe for disaster,
one that is well documented, and as such it is not used as anything other
than a curiosity by operators.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1337855
https://github.com/thom311/libnl/issues/226
These are two great examples of how confusing it is for anyone in user
space to know what the correct thing to do is, given that new fields
can be added with no semantics that allow us to know whether or not a
change has resulted.
In an attempt to recognize this, let's note that FRR
believes it has gotten out of sync with the kernel.
Future commits will react to the desynchronized route
and request from the kernel a reload of that specific
route if possible.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use "get" as the name for checking the status of the bgp
accept lower seq knob. This already has an equivalent "set"
so makes sense to keep it consistent.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
In this commit, we extend the ZAPI to support encoding and decoding the
locator flags contained in the messages exchanged between zebra and the
routing daemons.
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
When zebra receives routes from upper level protocols it decodes the
zapi message and places the routes on the metaQ for processing. Suppose
we have a route A that is already installed by some routing protocol.
And there is a route B that has a nexthop that will be recursively
resolved through A. Imagine if a route replace operation for A is
going to happen from an upper level protocol at about the same time
the route B is going to be installed into zebra. If these routes
are received, and decoded, at about the same time there exists a
chance that the metaQ will contain both of them at the same time.
If the order of installation is [ B, A ]. B will be resolved
correctly through A and installed, A will be processed and
re-installed into the FIB. If the nexthops have changed for
A then the owner of B should be notified about the change (and B
can do the correct action here and decide to withdraw or re-install).
Now imagine if the order of routes received for processing on the
metaQ is [ A, B ]. A will be received, processed and sent to the
dataplane for reinstall. B will then be pulled off the metaQ and
fail the install since A is in a `not Installed` state.
Let's loosen the restriction in nexthop resolution for B such
that if the route we are dependent on is a route replace operation
allow the resolution to succeed. This requires zebra to track a new
route state (ROUTE_ENTRY_ROUTE_REPLACING) that can be looked at
during nexthop resolution. I believe this is ok because A is
a route replace operation, which could result in this:
-route install failed, in which case B should be nht'ing and
will receive the nht failure and the upper level protocol should
remove B.
-route install succeeded, no nexthop changes. In this case
allowing the resolution for B is ok, NHT will not notify the upper
level protocol so no action is needed.
-route install succeeded, nexthop changes. In this case
allowing the resolution for B is ok, NHT will notify the upper
level protocol and it can decide to reinstall B or not based
upon its own algorithm.
This set of events was found by the bgp_distance_change topotest(s).
Effectively the tests were looking for the bug (A, B order in the metaQ)
as the `correct` state. When under very heavy load, the A, B ordering
caused A to just be installed and fully resolved in the dataplane before
B was gotten to (which is entirely possible).
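A minimal sketch of the loosened check during nexthop resolution (only the two
status flags come from this change; the surrounding code and the `match`
variable are illustrative):
```c
/* 'match' stands for the route entry (A) that B's nexthop resolves
 * through.  Before, only an installed route was acceptable; now a route
 * that is in the middle of a replace operation is acceptable as well. */
if (!CHECK_FLAG(match->status, ROUTE_ENTRY_INSTALLED) &&
    !CHECK_FLAG(match->status, ROUTE_ENTRY_ROUTE_REPLACING))
	return 0; /* resolution still fails; NHT informs the owner of B */
```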
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Rather than running selected source files through the preprocessor and a
bunch of perl regex'ing to get the list of all DEFUNs, use the data
collected in frr.xref.
This not only eliminates issues we've been having with preprocessor
failures due to nonexistent header files, but is also much faster.
Where extract.pl would take 5s, this now finishes in 0.2s. And since
this is a non-parallelizable build step towards the end of the build
(dependent on a lot of other things being done already), the speedup is
actually noticeable.
Also files containing CLI no longer need to be listed in `vtysh_scan`
since the .xref data covers everything. `#ifndef VTYSH_EXTRACT_PL`
checks are equally obsolete.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
The debug for notification about a filtered prefix was
just printing the nexthop ifindex and vrf id. Not all
nexthops have this data. Just print out the actual nexthop instead.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, the SID transposition algorithm implemented in bgpd
incorrectly handles SRv6 locators with a function length greater than 20 bits.
To prevent issues, we currently limit the function length to 20 bits.
This limit will be removed when the bgpd SID transposition is fixed.
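A small illustrative guard for the temporary limit (the helper name is an
assumption, not the actual function):
```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: refuse locators whose function length exceeds 20
 * bits until the bgpd SID transposition handles longer lengths. */
static bool srv6_locator_func_len_ok(uint8_t function_len)
{
	return function_len <= 20;
}
```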
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
According to RFC 8986, the SRv6 SID length cannot exceed 128 bits. This
commit ensures that the condition
`block_len + node_len + function_len + arg_len <= 128` is satisfied when
a new SRv6 locator is created.
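A minimal sketch of the condition (the helper name is an assumption, not the
actual zebra function):
```c
#include <stdbool.h>
#include <stdint.h>

/* Per RFC 8986, the four pieces of an SRv6 SID must fit in 128 bits. */
static bool srv6_sid_lengths_valid(uint8_t block_len, uint8_t node_len,
				   uint8_t function_len, uint8_t arg_len)
{
	return (unsigned int)block_len + node_len + function_len + arg_len <= 128;
}
```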
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>
This commit adds SRv6 locator's block length, node length and argument
length to the output of the command
"show segment-routing srv6 locator NAME detail [json]".
Signed-off-by: Carmine Scarpitta <carmine.scarpitta@uniroma2.it>