Don't install implicit NULL labels with non-VNI-labeled
routes.
This returns behavior to how it was before adding the DVNI code.
Ticket: #2677036
Testing Done: precommit, manual
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Encode the vni label during route install on linux
systems via lwt encap 64bit LWTUNNEL_IP_ID. The kernel expects
this in network byte order, so we convert it.
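A minimal sketch of the conversion, assuming glibc's htobe64() (FRR's actual encoder uses its own byte-swap helpers):

#include <endian.h>
#include <stdint.h>

/* Sketch only: widen the 24-bit VNI into the 64-bit LWTUNNEL_IP_ID
 * value and convert it to network (big-endian) byte order, as the
 * kernel expects. */
static uint64_t vni_to_lwtunnel_ip_id(uint32_t vni)
{
	return htobe64((uint64_t)vni);
}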
Signed-off-by: Stephen Worley <sworley@nvidia.com>
This patch fixes issues found during static analysis:
- rt_netlink: initialise vtep if there is an NDA_DST attribute
- if_netlink: initialise vni_start and vni_end
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This patch addresses the following issues:
- When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
- When VNI configuration is performed using VLAN-VNI mapping (i.e., without
individual VXLAN interfaces) and flooded traffic is handled via multicast,
the multicast group corresponding to the VNI needs to be explicitly read
from the bridge FDB (as sketched below). This is relevant for the netlink
interface to the kernel and for the scenario where a new VNI is
provisioned or comes up.
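A hedged sketch of that explicit read, as a standard AF_BRIDGE RTM_GETNEIGH dump request (zebra's actual encoder differs in details):

#include <linux/neighbour.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <string.h>
#include <sys/socket.h>

/* Sketch: an AF_BRIDGE RTM_GETNEIGH dump request; the reply stream
 * includes the all-zeros-MAC FDB entry carrying the per-VNI multicast
 * (flood) group. Sending and reply parsing are omitted here. */
struct fdb_dump_req {
	struct nlmsghdr n;
	struct ndmsg ndm;
};

static void fdb_dump_req_init(struct fdb_dump_req *req)
{
	memset(req, 0, sizeof(*req));
	req->n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ndmsg));
	req->n.nlmsg_type = RTM_GETNEIGH;
	req->n.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	req->ndm.ndm_family = AF_BRIDGE;
}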
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change brings in the following functionality:
- netlink_bridge_vxlan_vlan_vni_map_update for single vxlan devices.
This function is responsible for reading the vlan-vni map information
received from netlink and populating a new hash_table with the vlan-vni
data (the per-entry shape is sketched after this list). Once all the
vlan-vni data is collected, zebra_vxlan_if_vni_table_add_update is
called to update the vni_table in the vxlan interface and process each
of the vlan-vni entries.
- refactoring changes for zevpn_build_hash_table:
the existing zevpn_build_hash_table walked over all the vxlan interfaces
and then processed the vni for each of them. In the case of a single
vxlan device, we will have more than one vni entry. This function is
abstracted so that it iterates over all the vni entries of a single
vxlan device. For a traditional vxlan device, zebra_vxlan_if_vni_iterate
only processes the single vni associated with that device.
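Roughly, the per-entry data collected into the new hash table looks like this (an illustrative sketch; field and type names are not FRR's exact definitions):

#include <netinet/in.h>
#include <stdint.h>

/* Illustrative shape of one vlan-vni map entry collected from the
 * netlink dump; one such entry exists per VNI carried by the single
 * vxlan device. */
struct vlan_vni_map_entry {
	uint16_t access_vlan;     /* 802.1Q VLAN id */
	uint32_t vni;             /* VXLAN network identifier */
	struct in_addr mcast_grp; /* flood group, when multicast is used */
};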
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
dplane_mac_info and dplane_neigh_info are modified to be vni aware.
dplane_rem_mac_add/del and dplane_mac_init are modified to be vni aware.
During dplane context updates (mac and neigh), we use the vni information
and, if it is set, the corresponding netlink attribute NDA_SRC_VNI is set
and passed to the dplane, as sketched below.
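A standalone sketch of the attribute encoding (zebra's real code uses its nl_attr_put* helpers; this mirrors the classic addattr pattern):

#include <linux/neighbour.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <stdint.h>
#include <string.h>

/* Sketch: attach NDA_SRC_VNI to an in-construction netlink message
 * only when the dplane context carries a VNI. */
static int put_src_vni(struct nlmsghdr *n, unsigned int maxlen, uint32_t vni)
{
	unsigned int len = RTA_LENGTH(sizeof(vni));
	struct rtattr *rta;

	if (!vni)
		return 0; /* no VNI in the dplane context: nothing to add */
	if (NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len) > maxlen)
		return -1; /* message buffer too small */

	rta = (struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));
	rta->rta_type = NDA_SRC_VNI;
	rta->rta_len = len;
	memcpy(RTA_DATA(rta), &vni, sizeof(vni));
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len);
	return 0;
}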
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change set introduces the data structure changes required for
multiple vlan-aware bridge functionality. A new structure,
zebra_l2_bridge_if, encapsulates the vlan to access_bd association of
the bridge. A vlan_table hash_table is used to record each instance of
the vlan to access_bd mapping of the bridge via a zebra_l2_bridge_vlan
structure.
vxlan iftype derivation: the netlink attribute IFLA_VXLAN_COLLECT_METADATA
is used to derive the iftype of the vxlan device. If the attribute is
present, the vxlan interface is treated as a single vxlan device;
otherwise it defaults to a traditional vxlan device (see the sketch
below).
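A minimal sketch of that derivation, assuming tb[] is the parsed IFLA_INFO_DATA attribute table:

#include <linux/if_link.h>
#include <linux/rtnetlink.h>
#include <stdbool.h>

/* Sketch: the presence of IFLA_VXLAN_COLLECT_METADATA among the parsed
 * link attributes marks a single vxlan device (SVD); its absence means
 * a traditional vxlan device. */
static bool vxlan_link_is_svd(struct rtattr *tb[])
{
	return tb[IFLA_VXLAN_COLLECT_METADATA] != NULL;
}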
zebra_vxlan_check_readd_vtep and zebra_vxlan_dp_network_mac_add/del are
modified to be vni aware.
mac_fdb_read_for_bridge is modified to be (vlan, bridge) aware.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This changeset introduces the data structure changes needed for
single vxlan device functionality. A new struct, zebra_vxlan_vni_info,
encodes the iftype and vni information for a vxlan device (roughly as
sketched below).
The change addresses the related accesses to the new data structure
fields from different files.
zebra_vty is modified to dump the vni information according to the new
vni data structure for vxlan devices.
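An illustrative sketch of the shape (field and type names here are assumptions, not the exact zebra definitions):

#include <stdint.h>

struct hash; /* FRR's generic hash table, declared elsewhere */

/* Illustrative only: a traditional vxlan device carries exactly one
 * VNI, while a single vxlan device (SVD) carries a table of vlan-vni
 * entries. */
struct zebra_vxlan_vni_info_sketch {
	int iftype; /* traditional vs. single vxlan device */
	union {
		uint32_t vni;           /* traditional: the one VNI */
		struct hash *vni_table; /* SVD: all vlan-vni entries */
	};
};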
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
fpm: the netlink format doesn't indicate the protocol information
in routes of BGP, OSPF and other protocols. Routes of those
protocols just indicate the protocol as zebra.
The route below is actually a BGP route, but 'proto': 11
indicates that it is zebra.
{'attrs': [('RTA_DST', 'dummy'),
('RTA_PRIORITY', 0),
('RTA_GATEWAY', 'dummy'),
('RTA_OIF', 2)],
'dst_len': 32,
'family': 2,
'flags': 0,
'header': {'flags': 1025,
'length': 60,
'pid': 3160253895,
'sequence_number': 0,
'type': 24},
'proto': 11,
'scope': 0,
'src_len': 0,
'table': 254,
'tos': 0,
'type': 1}
With this change the same route is now seen with 'proto': 186,
indicating that it is BGP.
{'attrs': [('RTA_DST', 'dummy'),
('RTA_PRIORITY', 0),
('RTA_GATEWAY', 'dummy'),
('RTA_OIF', 2)],
'dst_len': 32,
'family': 2,
'flags': 0,
'header': {'flags': 1025,
'length': 60,
'pid': 3160253895,
'sequence_number': 0,
'type': 24},
'proto': 186,
'scope': 0,
'src_len': 0,
'table': 254,
'tos': 0,
'type': 1}
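Roughly, the idea is a mapping like the following (a sketch; the FRR-side route-type constants are stand-ins, while the RTPROT_* values come from newer linux/rtnetlink.h headers):

#include <linux/rtnetlink.h>

/* Sketch of the mapping now used on the FPM path. The route-type
 * constants here are stand-ins, not FRR's ZEBRA_ROUTE_* enum. */
enum sketch_route_type { RT_SKETCH_BGP, RT_SKETCH_OSPF, RT_SKETCH_OTHER };

static unsigned char route_type_to_rtprot(enum sketch_route_type t)
{
	switch (t) {
	case RT_SKETCH_BGP:
		return RTPROT_BGP;   /* 186, as in the new dump */
	case RT_SKETCH_OSPF:
		return RTPROT_OSPF;  /* 188 */
	default:
		return RTPROT_ZEBRA; /* 11, the old catch-all */
	}
}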
Signed-off-by: Spoorthi K <spk@redhat.com>
When FRR receives a notification from the kernel about a route's
offload success/failure, the metric being reported is not
going to be correct, since we may not know it appropriately
at that point in time. So, where we can, set the metric to
something appropriate.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
netlink_route_multipath_msg_encode checks whether the local kernel
supports the nexthop netlink message and doesn't send the message if the
local kernel lacks support. This also applies to the FPM, since the
kernel dataplane and the FPM share the same code. However, for the FPM,
this limit is not necessary.
This commit adds an extra check for whether
netlink_route_multipath_msg_encode was called from the FPM, and bypasses
the kernel support check if so, as sketched below.
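A minimal sketch of the new decision; `fpm` is the new caller flag and `kernel_supports_nexthops` stands in for zebra's capability probe:

#include <stdbool.h>

/* Sketch: the kernel capability check is bypassed for FPM messages. */
static bool should_encode_nexthop_msg(bool fpm, bool kernel_supports_nexthops)
{
	return fpm || kernel_supports_nexthops;
}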
Signed-off-by: Yutaro Hayakawa <yutaro.hayakawa@isovalent.com>
Address Sanitizer found this:
=================================================================
==418623==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 128 byte(s) in 4 object(s) allocated from:
#0 0x4bd732 in calloc (/usr/lib/frr/zebra+0x4bd732)
#1 0x7feaeab8f798 in qcalloc /home/sharpd/frr8/lib/memory.c:116:27
#2 0x7feaeaba40f4 in nexthop_group_new /home/sharpd/frr8/lib/nexthop_group.c:270:9
#3 0x56859b in netlink_route_change_read_unicast /home/sharpd/frr8/zebra/rt_netlink.c:950:9
#4 0x5651c2 in netlink_route_change /home/sharpd/frr8/zebra/rt_netlink.c:1204:2
#5 0x54af15 in netlink_information_fetch /home/sharpd/frr8/zebra/kernel_netlink.c:407:10
#6 0x53e7a3 in netlink_parse_info /home/sharpd/frr8/zebra/kernel_netlink.c:1184:12
#7 0x548d46 in kernel_read /home/sharpd/frr8/zebra/kernel_netlink.c:501:2
#8 0x7feaeacc87f6 in thread_call /home/sharpd/frr8/lib/thread.c:2006:2
#9 0x7feaeab36503 in frr_run /home/sharpd/frr8/lib/libfrr.c:1198:3
#10 0x550d38 in main /home/sharpd/frr8/zebra/main.c:476:2
#11 0x7feaea492d09 in __libc_start_main csu/../csu/libc-start.c:308:16
Indirect leak of 576 byte(s) in 4 object(s) allocated from:
#0 0x4bd732 in calloc (/usr/lib/frr/zebra+0x4bd732)
#1 0x7feaeab8f798 in qcalloc /home/sharpd/frr8/lib/memory.c:116:27
#2 0x7feaeab9b3f8 in nexthop_new /home/sharpd/frr8/lib/nexthop.c:373:7
#3 0x56875e in netlink_route_change_read_unicast /home/sharpd/frr8/zebra/rt_netlink.c:960:15
#4 0x5651c2 in netlink_route_change /home/sharpd/frr8/zebra/rt_netlink.c:1204:2
#5 0x54af15 in netlink_information_fetch /home/sharpd/frr8/zebra/kernel_netlink.c:407:10
#6 0x53e7a3 in netlink_parse_info /home/sharpd/frr8/zebra/kernel_netlink.c:1184:12
#7 0x548d46 in kernel_read /home/sharpd/frr8/zebra/kernel_netlink.c:501:2
#8 0x7feaeacc87f6 in thread_call /home/sharpd/frr8/lib/thread.c:2006:2
#9 0x7feaeab36503 in frr_run /home/sharpd/frr8/lib/libfrr.c:1198:3
#10 0x550d38 in main /home/sharpd/frr8/zebra/main.c:476:2
#11 0x7feaea492d09 in __libc_start_main csu/../csu/libc-start.c:308:16
SUMMARY: AddressSanitizer: 704 byte(s) leaked in 8 allocation(s).
Fix this!
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Add the initial step of passing in a dplane context
to reading route netlink messages. This code
will be run in two contexts:
a) The normal pthread for reading netlink messages from
the kernel
b) The dplane_fpm_nl pthread.
The goal of this commit is to just allow (a) to work;
(b) will be filled in in the future. Effectively
everything should still work as it did before this change.
We just allow the context to be passed around (but not used).
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
FRR does not use the NLM_F_APPEND semantics (in fact I would argue that
the NLM_F_APPEND semantics just introduce pain for all parties involved).
I would also argue that most people who use the kernel netlink api
have recognized that NLM_F_APPEND for a route is a recipe for disaster
that is well documented, and as such it is not used as anything other
than a curiosity by operators.
See:
https://bugzilla.redhat.com/show_bug.cgi?id=1337855
https://github.com/thom311/libnl/issues/226
These are two great examples of how confusing it is for anyone in user
space to know what the correct thing to do is, given that
new fields can be added with no semantics that would allow us to know
whether or not they resulted in a change.
In an attempt to recognize this, let's note that FRR
believes it has gotten out of sync with the kernel.
Future commits will react to the desynchronized route
and request from the kernel a reload of that specific
route if possible.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The new route code path was using a combination of
both rib_add() and rib_add_multipath(); let's clean
it up to use rib_add_multipath().
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The IPv6 version needs rtm_src_len and rtm_dst_len filled in due to
strict validation. IPv4 also has this requirement, but zebra is running
in non-strict mode there so the kernel accepts it...
Also the table ID hack is IPv4 only.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
The multicast routing RTM_GETROUTE command does not use IIF/OIF
attributes, and the IPv6 version will refuse them with an error due to
being new netlink API and thus using strict validation.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
These two structs happen to be the same size and have the family field
in the same spot, but the correct one to use here is rtmsg, not ndmsg.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
zebra does not care about _notifications_ from the kernel regarding
multicast routing; we only use the MR netlink API to request stats from
the kernel by actively sending a RTM_GETROUTE.
Signed-off-by: David Lamparter <equinox@opensourcerouting.org>
Just some missing ones. Make zebra stop complaining; I was getting
some messages from proto2zebra when doing testing, so let's stop
that from happening.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When reading a multipath route, if we detect an encoding
error from the kernel (yeah, I don't think so either),
let's tell the operator what happened to that route.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Currently, specific local neighbors (attached to SVIs) are maintained
in an EVPN-specific database. There is a need to maintain L3 neighbors
for other purposes, including MAC resolution for PBR nexthops.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Clean up the compile and fix a crash.
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
When a nexthop has RTNH_F_LINKDOWN set, start noticing
that this flag is set. Allow FRR to know about this
flag, but at this point do not do anything with it.
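A minimal sketch of noticing the flag while walking RTA_MULTIPATH entries:

#include <linux/rtnetlink.h>
#include <stdbool.h>

/* Sketch: note whether the kernel marked this nexthop linkdown;
 * for now only record the fact, take no action on it. */
static bool nexthop_kernel_linkdown(const struct rtnexthop *rtnh)
{
	return (rtnh->rtnh_flags & RTNH_F_LINKDOWN) != 0;
}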
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The global vrf in zebra is always non-NULL. In general, it is bound to
the default vrf by `zebra_vrf_init()`, and at other times bound to some
specific vrf. Either way, it is non-NULL.
So remove all redundant checks of the value returned by
`zebra_vrf_get_evpn()`.
Additionally, remove the unnecessary check for `zvrf` in
`zebra_vxlan_cleanup_tables()`.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Modify the structure mcast_route_data to store the ipv4/ipv6
address and lastused multicast information from the kernel.
Adjust the related APIs to parse the ipv4/ipv6 information.
Signed-off-by: Mobashshera Rasool <mrasool@vmware.com>
Since `NDA_VLAN` is no longer manually defined in the header file,
the check for `NDA_VLAN` should be removed.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Store the fd that corresponds to the appropriate `struct nlsock` and pass
that around in the dplane context instead of the pointer to the nlsock.
Modify the kernel_netlink.c code to store in a hash the `struct nlsock`
with the socket fd as the key.
Why do this? The dataplane context is used to pass around the `struct
nlsock`, but the zebra code has a bug where the receive buffer for
kernel netlink messages is not big enough. So we need to dynamically
grow the receive buffer per socket, instead of having a fixed buffer
that we read into. By passing around the fd, we can look up the `struct
nlsock` that will soon have the associated buffer, and not have to worry
about the `const` issues that would arise, as sketched below.
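A simplified stand-in for the fd -> `struct nlsock` lookup (zebra keeps these in its generic hash library; this tiny direct-mapped table and its fields are only illustrative):

#include <stddef.h>

/* Illustrative shape of the per-socket state keyed by fd. */
struct nlsock {
	int fd;
	char *buf;     /* per-socket receive buffer, grown on demand */
	size_t buflen;
};

#define NLSOCK_SLOTS 16
static struct nlsock *nlsock_tab[NLSOCK_SLOTS];

/* Register a socket so the dplane context can carry just the fd. */
static void nlsock_register(struct nlsock *nls)
{
	nlsock_tab[(unsigned int)nls->fd % NLSOCK_SLOTS] = nls;
}

/* Resolve the fd stored in the dplane context back to its nlsock. */
static struct nlsock *nlsock_lookup(int fd)
{
	struct nlsock *nls = nlsock_tab[(unsigned int)fd % NLSOCK_SLOTS];

	return (nls && nls->fd == fd) ? nls : NULL;
}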
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When using wait-for-install there exist situations where
zebra will issue several route change operations to the kernel
and end up in a state we shouldn't be in, due to the extra
data being received. Example:
a) zebra receives a route change from bgp, installs it, and sends
the route to the kernel.
b) zebra receives a route deletion from bgp, removes the
struct route entry, and then sends a deletion to the kernel.
c) zebra receives an asynchronous notification that (a) succeeded,
but we treat this as a new route.
This is the ships-in-the-night problem. In this case, if we receive a
notification from the kernel about a route that we know nothing
about, we are not in startup, and we are doing asic offload,
then we can ignore this update (see the sketch below).
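A minimal sketch of that guard, with the three conditions as stand-in flags for zebra's actual state:

#include <stdbool.h>

/* Sketch: ignore a kernel notification for a route we have no entry
 * for, once startup is over and ASIC offload is in use. */
static bool should_ignore_kernel_notify(bool route_known, bool in_startup,
					bool asic_offloaded)
{
	return !route_known && !in_startup && asic_offloaded;
}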
Ticket: #2563300
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
When an operator has an FRR-based route installed in the
FIB and a better route comes in from the system, there
is code in the data plane to schedule the batching
and continue processing. But in this case we are done,
so we can just return.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Since f60a1188 we store a pointer to the VRF in the interface structure.
There's no need anymore to store a separate vrf_id field.
Signed-off-by: Igor Ryzhov <iryzhov@nfware.com>
With the addition of resilient hashing for nexthops, the
parsing of nexthops requires telling the decoder functions
that there may be nested attributes, as sketched below. This was found
by code inspection of iproute2/ipnexthop.c when trying to
understand resilient hashing as well as statistics
gathering for nexthops that are / will be in upstream
kernels in the near future.
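A minimal sketch of what handling nested attributes means for the decoder, following iproute2's approach:

#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Sketch: the kernel sets NLA_F_NESTED on nested nexthop attributes
 * (e.g. for resilient hashing), so mask the flag off before using
 * rta_type as a table index. */
static unsigned short rta_type_unnested(const struct rtattr *rta)
{
	return rta->rta_type & ~NLA_F_NESTED;
}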
Signed-off-by: Donald Sharp <sharpd@nvidia.com>