When installing the flood entry for a VTEP in SVD, ensure the VNI is
set on the ctx object so that it gets sent to the kernel and installed
with the appropriate src_vni.
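As an illustration (not the actual zebra dplane code), an FDB netlink
request could carry that VNI via the kernel's NDA_SRC_VNI attribute
roughly as follows; the struct and helper names are made up for the
example:
'''
/*
 * Illustrative sketch only, not the zebra dplane code: shows how an FDB
 * netlink request could carry the source VNI via the kernel's
 * NDA_SRC_VNI attribute. Struct and helper names here are invented.
 */
#include <string.h>
#include <stdint.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/neighbour.h>

struct fdb_req {
	struct nlmsghdr n;
	struct ndmsg ndm;
	char buf[256];
};

static void req_addattr32(struct nlmsghdr *n, size_t maxlen, int type,
			  uint32_t data)
{
	int len = RTA_LENGTH(sizeof(uint32_t));
	struct rtattr *rta;

	if (NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len) > maxlen)
		return; /* no room; a real caller would flag an error */
	rta = (struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));
	rta->rta_type = type;
	rta->rta_len = len;
	memcpy(RTA_DATA(rta), &data, sizeof(uint32_t));
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len);
}

/*
 * Only set the attribute when the dplane context actually carries a
 * VNI, i.e. the interface is a single VXLAN device.
 */
static void fdb_req_set_src_vni(struct fdb_req *req, uint32_t vni)
{
	if (vni != 0)
		req_addattr32(&req->n, sizeof(*req), NDA_SRC_VNI, vni);
}
'''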
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ticket: 2698649
Testing Done: precommit and evpn-min
Problem:
When the mcast-group is updated, the change is read from netlink and
populated by zebra, but when the kernel sends the fdb delete for the
group, we end up deleting the mcast-group that was just updated. This is
because we currently reset the mcast-group blindly during fdb delete,
without checking which mcast-group is associated with the vni.
Fix is to separate the add/update and delete mcast-group functions and to
check the mcast-group before resetting it during delete.
Signed-off-by: sramamurthy <sramamurthy@nvidia.com>
Ticket: 2674793
Testing Done: precommit, evpn-min and evpn-smoke
The problem in this case is that whenever we trigger ifdown followed by
ifup of the bridge, remote mac entries are programmed with vlan 1 in the
fdb by zebra and never cleaned up.
The bridge has vlan_default_pvid 1, which means any port that gets added
initially has vlan 1; ifupdown2 then deletes it and adds the proper vlan.
The problem lies in zebra, which does not clean up the remote macs during
the vlan change.
Fix is to uninstall the remote macs and reinstall them during the vlan
change.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
Signed-off-by: Vivek Venkatraman <vivek@nvidia.com>
Ticket: #2613048
Reviewed By:
Testing Done:
1. Manual verification - logs in the ticket
2. Precommit (user job #171) and evpn-min (user job #170)
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ticket: 2730328, 2724075
Reviewed By: CCR-11741, CCR-11746
Testing Done: Unit Test
2730328: At high bridge-vids count, VNI devices are not added in FRR if
FRR restarts after loading e/n/i
The issue is a buffer overflow in netlink_recv_msg.
The kernel recv message buffer is defined on the stack with a size of 32768 (32K).
When the configuration is applied without an FRR restart, things work fine
because the recv messages from the kernel stay well within the 32K limit.
However with this configuration, when FRR was restarted I could see that
some recv messages were crossing the 32K limit and hence weren't processed.
The below error log was seen when FRR was restarted with the configuration.
2021/08/09 05:59:55 ZEBRA: [EC 4043309092] netlink-cmd (NS 0) error: data remnant size 32768
Fix is to increase the buffer size by another 2K.
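For illustration only (this is not zebra's netlink_recv_msg), the sketch
below shows how a fixed-size receive buffer produces exactly this kind of
truncation, detectable via MSG_TRUNC; the buffer size is just an example
value:
'''
/*
 * Illustrative sketch, not zebra's netlink_recv_msg(): receive one
 * netlink datagram into a fixed-size buffer and detect when the
 * kernel's reply did not fit, which is the "data remnant" symptom seen
 * above. The buffer size is only an example value.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/netlink.h>

#define NL_RCV_BUF_SIZE (34 * 1024) /* example: old 32K limit plus 2K */

static ssize_t nl_recv_one(int fd, char *buf, size_t buflen)
{
	struct sockaddr_nl snl;
	struct iovec iov = { .iov_base = buf, .iov_len = buflen };
	struct msghdr msg = {
		.msg_name = &snl,
		.msg_namelen = sizeof(snl),
		.msg_iov = &iov,
		.msg_iovlen = 1,
	};
	ssize_t len = recvmsg(fd, &msg, 0);

	if (len < 0)
		return len;
	/* MSG_TRUNC in msg_flags means the datagram was larger than the
	 * buffer and its tail was dropped. */
	if (msg.msg_flags & MSG_TRUNC) {
		fprintf(stderr, "netlink message truncated (> %zu bytes)\n",
			buflen);
		return -1;
	}
	return len;
}
'''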
2724075: evpn mh/SVD - some of the remote neighs/macs aren't installed
in kernel post ifdown/ifup bridge
The issue was specific to SVD. During ifdown/ifup of the bridge,
I could see that the access-bd was not associated with the vni and hence
the remote neighs were not getting programmed in the kernel.
Fix is to reference (or associate) vxlan vni to the access-bd when
the vni is reported up. With this fix, I was able to see the remote
neighs getting programmed to the kernel.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Ignore GETVLAN errors at startup like we are doing
for nexthop groups. Older platforms don't support the API.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ignore zebra_mac updates if they do not contain a VNI for the vxlan
interface. We don't have anything we can do with them.
'''
==443593== Process terminating with default action of signal 6 (SIGABRT): dumping core
==443593== at 0x4E1156C: __pthread_kill_implementation (in /usr/lib64/libc.so.6)
==443593== by 0x4DC4D15: raise (in /usr/lib64/libc.so.6)
==443593== by 0x49823C7: core_handler (sigevent.c:261)
==443593== by 0x4DC4DBF: ??? (in /usr/lib64/libc.so.6)
==443593== by 0x4E1156B: __pthread_kill_implementation (in /usr/lib64/libc.so.6)
==443593== by 0x4DC4D15: raise (in /usr/lib64/libc.so.6)
==443593== by 0x4D987F2: abort (in /usr/lib64/libc.so.6)
==443593== by 0x49C3064: _zlog_assert_failed (zlog.c:700)
==443593== by 0x4F5E6D: zebra_vxlan_if_vni_find (zebra_vxlan_if.c:661)
==443593== by 0x4EEAC3: zebra_vxlan_check_readd_vtep (zebra_vxlan.c:4244)
==443593== by 0x450967: netlink_macfdb_change (rt_netlink.c:3722)
==443593== by 0x450011: netlink_neigh_change (rt_netlink.c:4458)
'''
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Properly handle IPv4-mapped IPv6 addresses with DVNI by converting
the address to IPv4 and setting that as the DST field for
the encap.
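A minimal sketch of the conversion (standard socket API, not the zebra
code itself):
'''
/*
 * Minimal sketch (not the zebra code itself): pull the embedded IPv4
 * address out of an IPv4-mapped IPv6 address (::ffff:a.b.c.d) so it
 * can be used as the encap destination.
 */
#include <string.h>
#include <netinet/in.h>

static int mapped_v6_to_v4(const struct in6_addr *v6, struct in_addr *v4)
{
	if (!IN6_IS_ADDR_V4MAPPED(v6))
		return -1;
	/* The IPv4 address occupies the last four bytes. */
	memcpy(v4, &v6->s6_addr[12], sizeof(*v4));
	return 0;
}
'''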
Signed-off-by: Stephen Worley <sworley@nvidia.com>
The `show evpn next-hop svd *` command doesn't provide much
for users right now. Make it hidden so we can still debug
the tables with it.
Also remove SVD output from `show evpn next-hop vni all`.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Don't install implicit NULL labels with non-VNI labeled
routes.
This returns behavior to how it was before the DVNI code was added.
Ticket: #2677036
Testing Done: precommit, manual
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Read in STP state changes for a Single VXLAN Device
via bridge vlan netlink messages. Map the vlanid to a
VNI in the SVD table and treat it similarly to how
we traditionally handle proto down of the VXLAN device
in a non-SVD device scenario.
Forwarding == Interface UP
Blocking == Interface DOWN
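A sketch of the mapping (BR_STATE_* are the kernel's bridge STP states;
treating every non-forwarding state as down is an assumption of this
example, the text above only names Forwarding and Blocking):
'''
/*
 * Sketch only: classify the bridge-vlan STP state from the netlink
 * message into the up/down treatment described above. BR_STATE_* come
 * from the kernel's if_bridge.h; treating every non-forwarding state
 * as "down" is an assumption of this example.
 */
#include <stdbool.h>
#include <stdint.h>
#include <linux/if_bridge.h>

static bool svd_vni_operative(uint8_t stp_state)
{
	/* Forwarding == interface up; Blocking (and anything else that
	 * is not forwarding) == interface down. */
	return stp_state == BR_STATE_FORWARDING;
}
'''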
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add some show commands and expand some already existing
commands so we can get debug info from the SVD global
neigh table inside zebra.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add code in the nhg resolution path for determining if Downstream
VNI is in play. This is the only place in all of zebra where
we should be arbitrarily setting the ifindex/labels since
this is where new nhgs are created/destroyed. If something
changes, it must happen here.
We determine if D-VNI is being used by matching the carried
label (VNI) on the nexthop with the vrf VNI from the route.
If they do not match, we can assume this is a D-VNI labeled
nexthop.
We loop through all of the nexthops in the group to see if any are
D-VNI. If even one is, we must treat them all as such. Otherwise, fall
back to traditional EVPN route handling and remove all the labels.
If they are going to be treated as D-VNI we retain the labels and
verify the underlying VRF vxlan interface is a Single VXLAN Device.
If it is not, we cannot use D-VNI. If it is, continue on. The VNI label
will be encapsulated via LWTUNNEL and sent to the kernel.
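A hypothetical sketch of the group-level decision described above; the
types and names are invented for illustration, the real logic lives in
zebra's nhg resolution:
'''
/*
 * Hypothetical sketch of the decision above; the real logic lives in
 * zebra's nexthop-group resolution and these types/names are invented
 * for illustration. If any nexthop carries a VNI label different from
 * the route's VRF VNI, the whole group is treated as Downstream VNI
 * and the labels are kept; otherwise all labels are stripped.
 */
#include <stdbool.h>
#include <stdint.h>

struct nh {
	struct nh *next;
	uint32_t vni_label; /* 0 when no label is attached */
};

static bool group_uses_downstream_vni(const struct nh *head, uint32_t vrf_vni)
{
	for (const struct nh *nh = head; nh; nh = nh->next) {
		if (nh->vni_label && nh->vni_label != vrf_vni)
			return true; /* one D-VNI nexthop taints the group */
	}
	return false;
}
'''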
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Always install neigh entries on the SVD if it exists in
zebra. If zebra is using a Single VXLAN Device, we must
duplicate the install of our neigh entries to it so that
vxlan communication can also work across it in the downstream
VNI case.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Encode the VNI label during route install on Linux systems
via the 64-bit lwt encap attribute LWTUNNEL_IP_ID. The kernel
expects this in network byte order, so we convert it.
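Conceptually (a sketch, not the zebra encoder):
'''
/* Sketch: the VNI rides in the 64-bit LWTUNNEL_IP_ID field, which the
 * kernel expects in network byte order. */
#include <endian.h>
#include <stdint.h>

static inline uint64_t vni_to_tunnel_id(uint32_t vni)
{
	return htobe64((uint64_t)vni);
}
'''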
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add the ability to specify the label type along with the labels
you are passing to zebra in zapi_nexthop. This is needed as we
abstract the label code to be re-used by evpn as well as mpls.
Protocols need to be able to set the type of label they have attached.
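Illustrative sketch only, not the actual zapi encoding, of what carrying
a label type alongside the labels looks like:
'''
/*
 * Illustrative only, not the actual zapi encoding: the idea is that the
 * label stack carried on a nexthop now records what kind of labels it
 * holds, so zebra can tell MPLS labels apart from EVPN VNIs.
 */
#include <stdint.h>

enum nh_label_type { NH_LABEL_NONE, NH_LABEL_MPLS, NH_LABEL_EVPN_VNI };

struct nh_label_stack {
	enum nh_label_type type; /* what the values below represent */
	uint8_t num_labels;
	uint32_t label[16];
};
'''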
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Use the already existing mpls label code to store VNI
info for vxlan. VNIs are defined as labels just like mpls,
so we should be using the same code for both.
This patch is the first part of that. Next we will need to
abstract the label code so it is not so mpls specific. For now,
we simply treat VXLAN as a label type and store it that way.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
This patch fixes issues found during static analysis.
rt_netlink - initialise the vtep if the NDA_DST attribute is present
if_netlink - initialise vni_start and vni_end
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
The zebra_vxlan_if.h header file was missing from noinst_HEADERS,
resulting in a build failure on some platforms.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
The zebra_l2_bridge_if.h header file was missing from noinst_HEADERS,
resulting in a build failure on some platforms.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
This patch addresses the following issues:
- When the VLAN-VNI mapping is configured via a map and not using
individual VXLAN interfaces, upon removal of a VNI ensure that the
remote FDB entries are uninstalled correctly.
- When VNI configuration is performed using VLAN-VNI mapping (i.e., without
individual VXLAN interfaces) and flooded traffic is handled via multicast,
the multicast group corresponding to the VNI needs to be explicitly read
from the bridge FDB. This is relevant when using the netlink interface to
the kernel and in the scenario where a new VNI is provisioned or comes up.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This patch addresses the following:
- Remove unused VLAN Id parameter when trying to determine the VNI associated
with a non-VLAN aware bridge. Also, add a check to ensure that in this case,
we have a per-VNI VXLAN interface. Due to the sequence of events, it is possible
that we may have VLAN-VNI mappings, in which case the code should return
gracefully.
- With support for a container VXLAN interface that has VLAN-VNI mappings,
the VXLAN interface itself may be up but a particular VNI might have
been removed. Ensure that VNI mapping exists before proceeding with
further processing.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This patch addresses the following bug fixes:
- Fix vtysh doc string in "show evpn access-vlan..." command
- Multicast group handling was a little convoluted. This change avoids calling
multiple functions and directly calls zebra_vxlan_if_update_vni for
mcast group updates.
- When a vlan-vni map is removed, the deletion of the removed vni was not
being handled correctly in FRR with SVD config. This resulted in a stale
vni and the vni deletion not being propagated.
During vni cleanup (zebra_vxlan_if_vni_clean), zebra_vxlan_if_vni_del
was called for the vni delete, which is not correct. We should be calling
zebra_vxlan_if_vni_entry_del for the given vni entry.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Today, to find the vni for a given (vlan, bridge) we walk over all interfaces
and filter for the vxlan device associated with the bridge. With the multiple
vlan-aware bridge changes, we can instead derive the vni directly by looking up
the vlan_table hash of the associated (vlan, bridge), which gives the vni.
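A toy sketch of the direct lookup (the real code uses zebra's per-bridge
vlan_table hash; the array here is only a stand-in to show the shape of
the lookup):
'''
/*
 * Hypothetical sketch of the direct lookup described above. The real
 * code uses zebra's per-bridge vlan_table hash; a toy array stands in
 * for it here: (vlan, bridge) -> access broadcast domain -> vni, with
 * no walk over all interfaces.
 */
#include <stdint.h>
#include <stddef.h>

struct access_bd {
	uint16_t vlan;
	uint32_t vni;
};

struct bridge_vlan_table {
	struct access_bd *bd[4096]; /* toy stand-in, indexed by VLAN id */
};

static uint32_t vni_from_vlan(const struct bridge_vlan_table *tbl,
			      uint16_t vlan)
{
	const struct access_bd *bd = tbl->bd[vlan & 0xfff];

	return bd ? bd->vni : 0;
}
'''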
During vrf_terminate(), call zebra_l2_bridge_if_cleanup if the interface
that we are removing is of type bridge. In this case, we walk over all
the vlan<->access_bd associations and clean them up.
zebra_evpn_t is modified to record (vlan, bridge) details and the
corresponding vty is modified to print the same.
zevpn_bridge_if_set and zl3vni_bridge_if_set are used to set/unset the
association.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Multiple vlan-aware bridge data structure changes and the corresponding bridge
handling changes.
A new vlan_table is maintained for each bridge, recording zebra_l2_bridge_vlan
entries. zebra_l2_bridge_vlan maps a vlan to the access_bd associated with this bridge.
The existing zebra_evpn_access_bd structure was vlan aware; it is now modified to be
(vlan, bridge) aware.
Whenever a new access_bd is instantiated, a corresponding entry is also recorded
in the zebra l2 bridge for the vlan.
When the access_bd is dereferenced or whenever a bridge is deleted, the
association is cleaned up.
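A rough sketch of the relationships (member lists abridged, field names
approximate, not the exact zebra definitions):
'''
/*
 * Rough sketch of the relationships described above; member lists are
 * abridged and field names are approximate, not the exact zebra
 * definitions.
 */
#include <stdint.h>

struct interface;            /* bridge or vxlan netdev, opaque here */
struct hash;                 /* generic hash table, opaque here */
struct zebra_evpn_access_bd;

/* one vlan_table entry: VLAN -> access broadcast domain on this bridge */
struct zebra_l2_bridge_vlan {
	uint16_t vid;
	struct zebra_evpn_access_bd *access_bd;
};

/* per-bridge state that owns the vlan_table */
struct zebra_l2_bridge_if {
	struct interface *br_if;
	struct hash *vlan_table; /* vid -> zebra_l2_bridge_vlan */
};

/* the access BD is now keyed by (vlan, bridge), not by vlan alone */
struct zebra_evpn_access_bd {
	uint16_t vid;
	struct interface *bridge_if;
	uint32_t vni;
};
'''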
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change brings in the following functionality:
- netlink_bridge_vxlan_vlan_vni_map_update for single vxlan devices
This function is responsible for reading the vlan-vni map information
received from netlink and populating a new hash_table with the vlan-vni
data. Once all the vlan-vni data is collected, zebra_vxlan_if_vni_table_add_update
is called to update the vni_table in the vxlan interface and process each of the
vlan-vni entries.
- refactoring changes for zevpn_build_hash_table
- the existing zevpn_build_hash_table was walking over all the vxlan interfaces
and then processing the vni for each of them. In the case of a single vxlan device,
we will have more than one vni entry. This function is abstracted so that
it iterates over all the vni entries of a single vxlan device. For a traditional
vxlan device, zebra_vxlan_if_vni_iterate only processes the single vni
associated with that device.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change modifies the zebra_vxlan_if_up/down/add/update and del functionality
to be per-vni based.
zebra_vxlan_if_add/update/del and zebra_vxlan_if_up/down now handle
the vni operations based on the vxlan device type (single or traditional vxlan device).
zebra_vxlan_if_vni_table_add_update
- This function handles the vlan-vni map update received from the netlink
interface into the single vxlan device's vni_table hash table.
zebra_vxlan_if_vni_mcast_group_update
- This function handles a new multicast group update received from
the netlink interface into the single vxlan device's vni_table hash table.
For traditional vxlan interfaces, the vni and mcast group
handling follows the traditional approach.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change refactors the zebra_vxlan_if related functionality
into a new zebra_vxlan_if.c file. zebra_vxlan_if_up/down and
zebra_vxlan_if_add/update/del are moved to zebra_vxlan_if.c.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
dplane_mac_info and dplane_neigh_info are modified to be vni aware.
dplane_rem_mac_add/del and dplane_mac_init are modified to be vni aware.
During dplane context update (mac and neigh), we use the vni information
and, if set, the corresponding netlink attribute NDA_SRC_VNI is set and
passed to the dplane.
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This change set introduces the data structure changes required for multiple
vlan-aware bridge functionality. A new structure, zebra_l2_bridge_if, encapsulates
the vlan to access_bd association of the bridge. A vlan_table hash table is used
to record each instance of the vlan to access_bd mapping of the bridge via the
zebra_l2_bridge_vlan structure.
vxlan iftype derivation: the netlink attribute IFLA_VXLAN_COLLECT_METADATA is used
to derive the iftype of the vxlan device. If the attribute is present, then the
vxlan interface is treated as a single vxlan device; otherwise it defaults to a
traditional vxlan device.
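A sketch of the derivation (not the zebra parser itself):
'''
/*
 * Sketch (not the zebra parser): classify the vxlan netdev from the
 * parsed IFLA_INFO_DATA attributes as described above -- if
 * IFLA_VXLAN_COLLECT_METADATA is present the device is a single vxlan
 * device, otherwise it is a traditional per-VNI device.
 */
#include <linux/if_link.h>
#include <linux/rtnetlink.h>

enum vxlan_iftype { VXLAN_IF_TRADITIONAL, VXLAN_IF_SVD };

static enum vxlan_iftype
vxlan_iftype_from_attrs(struct rtattr *attr[IFLA_VXLAN_MAX + 1])
{
	if (attr[IFLA_VXLAN_COLLECT_METADATA])
		return VXLAN_IF_SVD;
	return VXLAN_IF_TRADITIONAL;
}
'''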
zebra_vxlan_check_readd_vtep and zebra_vxlan_dp_network_mac_add/del are modified
to be vni aware.
mac_fdb_read_for_bridge - is modified to be (vlan, bridge) aware
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
This changeset introduces the data structure changes needed for
single vxlan device functionality. A new struct zebra_vxlan_vni_info
encodes the iftype and vni information for the vxlan device.
The change also updates accesses of the new data structure
fields across the affected files.
zebra_vty is modified to handle the vni dump information according
to the new vni data structure for vxlan devices.
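A rough sketch of the idea (field names approximate, not the exact zebra
definition):
'''
/*
 * Rough sketch of the idea, not the exact zebra definition: the vxlan
 * interface now records whether it is a traditional per-VNI device or
 * an SVD, and carries either a single VNI or a table of them. Field
 * names here are approximate.
 */
#include <stdint.h>

struct hash; /* generic hash table, opaque here */

enum zebra_vxlan_iftype_sketch {
	ZEBRA_VXLAN_IF_VNI, /* traditional, one VNI per device */
	ZEBRA_VXLAN_IF_SVD, /* single vxlan device, many VNIs */
};

struct zebra_vxlan_vni_sketch {
	uint32_t vni;
	uint16_t access_vlan;
};

struct zebra_vxlan_vni_info_sketch {
	enum zebra_vxlan_iftype_sketch iftype;
	struct zebra_vxlan_vni_sketch vni; /* traditional device */
	struct hash *vni_table;            /* SVD: vlan/vni entries */
};
'''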
Signed-off-by: Sharath Ramamurthy <sramamurthy@nvidia.com>
Currently `ip import-table 33` imports routes with
a distance of 15, as defined by zebra.h. zebra_rib.c
on the other hand believes the default value for the table
is 150. Let's make them agree with each other.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
Use the defines for distance that are in zebra.h. We could
easily have a cluster where we don't agree with ourselves. So
let's convert zebra to use the defines in zebra.h
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
The define of ZEBRA_ON_RIB_PROCESS_HOOK_CALL was in zebra.h,
which exposes it to everyone, yet zebra is the only daemon
that uses this define. It does not belong in zebra.h.
Signed-off-by: Donald Sharp <sharpd@nvidia.com>