Currently, `zif->es_info.esi` is always set in
`zebra_evpn_local_es_update()`, even in a few cases where it is not needed.
Delay setting `zif->es_info.esi` until the checks have passed and drop
the awkward rollback (i.e. unsetting `zif->es_info.esi`) on the failure path.
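A minimal sketch of the resulting "check first, assign last" flow; the
types, validation and function names below are hypothetical stand-ins,
not zebra's actual code:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for zebra's ESI/interface types. */
struct example_esi {
	uint8_t val[10];
};
struct example_es_info {
	struct example_esi esi;
};

static bool example_esi_valid(const struct example_esi *esi)
{
	static const struct example_esi zero_esi;

	/* an all-zero ESI is not a usable local ESI */
	return memcmp(esi, &zero_esi, sizeof(*esi)) != 0;
}

/* Run every check first and assign es_info.esi only once nothing can
 * fail any more, so the error path needs no rollback (re-zeroing). */
static int example_local_es_update(struct example_es_info *info,
				   const struct example_esi *esi)
{
	if (!example_esi_valid(esi))
		return -1; /* failure: es_info.esi was never touched */

	info->esi = *esi; /* success: set it exactly once, at the end */
	return 0;
}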
Signed-off-by: anlan_cs <vic.lan@pica8.com>
The global vrf in zebra is always non-NULL. In general it is bound to
the default vrf by `zebra_vrf_init()`; at other times it is bound to
some specific vrf. Either way, it is non-NULL.
So remove all the redundant checks on the return value of
`zebra_vrf_get_evpn()`.
Additionally, remove the unnecessary check for `zvrf` in
`zebra_vxlan_cleanup_tables()`.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
First, calls to `hash_get()` with a NULL `alloc_func` are left
unchanged; only the cases with a non-NULL `alloc_func` are touched.
Since `hash_get()` with a non-NULL `alloc_func` cannot fail, its
return value can simply be ignored - it is never NULL. So in this
case, remove the unnecessary NULL check on the return value and cast
the call to `void`.
Importantly, the following two patterns with a non-NULL `alloc_func`
are also left unchanged (a sketch of both follows below) -
1) Use `assert(<returned_data> == <searching_data>)` to ensure the
node was created, not found. Refer to `isis_vertex_queue_insert()`
in isisd; there are many examples of this pattern in isisd.
2) Use `<returned_data> != <searching_data>` to detect that an
existing node was found, then free <searching_data>. Refer to
`aspath_intern()` in bgpd; there are many examples of this pattern
in bgpd.
Here, <returned_data> is the value returned by `hash_get()` and
<searching_data> is the data being put into the hash table.
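A minimal sketch of the two retained patterns against lib/hash.h; the
wrapper names are hypothetical, and both use `hash_alloc_intern()`,
which simply returns its argument:

#include <assert.h>
#include "hash.h"   /* hash_get(), hash_alloc_intern() */
#include "memory.h" /* XFREE(), MTYPE_TMP */

/* Pattern 1: the caller requires a *created* node, as in isisd's
 * isis_vertex_queue_insert(). */
static void example_insert(struct hash *h, void *data)
{
	void *ret = hash_get(h, data, hash_alloc_intern);

	/* hash_alloc_intern() hands back the search data itself, so a
	 * newly created node is the same pointer we passed in. */
	assert(ret == data);
}

/* Pattern 2: the caller accepts a *found* node and frees the duplicate,
 * as in bgpd's aspath_intern(). */
static void *example_intern(struct hash *h, void *data)
{
	void *find = hash_get(h, data, hash_alloc_intern);

	if (find != data)
		XFREE(MTYPE_TMP, data); /* an existing node was found */

	return find;
}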
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Since there are two kinds of ESI (Type-0 and Type-3), the warnings
should distinguish between the two cases.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Since `RB_INSERT()` is only called after the element was not found in
the RB tree, it MUST succeed and return NULL. The checks on the return
value of `RB_INSERT()` are redundant, so just remove them.
Signed-off-by: anlan_cs <vic.lan@pica8.com>
Clean up the logs in the netlink code for setting
protodown on/off so they are more useful to a user parsing them
after an issue.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Use the SET/UNSET/CHECK/COND macros for flag bitfields
where appropriate throughout the protodown code base.
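A minimal sketch of these macros (from lib/zebra.h) applied to a
hypothetical reason bitfield; the flag values here are made up purely
for illustration:

#include <stdbool.h>
#include <stdint.h>
#include "zebra.h" /* SET_FLAG / UNSET_FLAG / CHECK_FLAG / COND_FLAG */

/* Hypothetical flag bits, for illustration only. */
#define EX_REASON_UPLINKS_DOWN 0x1
#define EX_REASON_EXTERNAL     0x2

static void example_flags(bool external_wants_protodown)
{
	uint32_t reason = 0;

	SET_FLAG(reason, EX_REASON_UPLINKS_DOWN);           /* set a bit */
	if (CHECK_FLAG(reason, EX_REASON_UPLINKS_DOWN))     /* test a bit */
		UNSET_FLAG(reason, EX_REASON_UPLINKS_DOWN); /* clear a bit */
	/* set or clear depending on a condition */
	COND_FLAG(reason, EX_REASON_EXTERNAL, external_wants_protodown);
	(void)reason;
}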
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Ensure we include the old reason when we are updating the reason
code for an evpn-mh bond member. Now that this is a common API,
the reason code bitfield could include things external to EVPN
(e.g. vrrp).
Signed-off-by: Stephen Worley <sworley@nvidia.com>
When setting the protodown reason, use the update API where we can
directly update the entire reason bitfield, since we have to set
more than one bit.
Signed-off-by: Stephen Worley <sworley@nvidia.com>
Add support for setting the protodown reason code (added to the
kernel in commit 829eb208e8).
These patches handle all our netlink code for setting the reason.
For the protodown reason we only set `frr` as the reason externally,
but internally we have more descriptive reasons available via
`show interface IFNAME`. The kernel only provides a 32-bit field that
all userspace programs have to share, so this makes the most sense.
Since this is new functionality, it needs to be added to the dplane
pthread instead. So these patches also move the protodown setting we
were doing before into the dplane pthread. For this, we abstract it a
bit more to make it a general interface LINK update dplane API. This
API can be expanded to support general link creation/updating when/if
someone ever adds that code.
We also move to a more common entrypoint for evpn-mh and zapi clients
like vrrpd. They both call common code now to set our internal flags
for protodown and protodown reason.
Also add debugging code for dumping netlink packets with
protodown/protodown_reason.
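For reference, an untested sketch of how the reason might be encoded
on the wire, assuming the IFLA_PROTO_DOWN_REASON attributes from
linux/if_link.h; the request builder and helper below are made up for
illustration and are not zebra's dplane code:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

struct example_req {
	struct nlmsghdr n;
	struct ifinfomsg ifi;
	char buf[256];
};

/* Append one rtattr to the message; the attribute is returned so a
 * nested attribute can be closed by fixing up rta_len afterwards. */
static struct rtattr *example_addattr(struct nlmsghdr *n, unsigned short type,
				      const void *data, int len)
{
	struct rtattr *rta =
		(struct rtattr *)((char *)n + NLMSG_ALIGN(n->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(len);
	if (data)
		memcpy(RTA_DATA(rta), data, len);
	n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(rta->rta_len);
	return rta;
}

static void example_build_protodown(struct example_req *req, int ifindex,
				    uint8_t down, uint32_t mask, uint32_t val)
{
	struct rtattr *nest;

	memset(req, 0, sizeof(*req));
	req->n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
	req->n.nlmsg_type = RTM_SETLINK;
	req->n.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req->ifi.ifi_family = AF_UNSPEC;
	req->ifi.ifi_index = ifindex;

	example_addattr(&req->n, IFLA_PROTO_DOWN, &down, sizeof(down));

	/* Reason is a nested attribute: a mask of the bits we own plus
	 * their values; other userspace programs keep their own bits. */
	nest = example_addattr(&req->n, IFLA_PROTO_DOWN_REASON | NLA_F_NESTED,
			       NULL, 0);
	example_addattr(&req->n, IFLA_PROTO_DOWN_REASON_MASK, &mask,
			sizeof(mask));
	example_addattr(&req->n, IFLA_PROTO_DOWN_REASON_VALUE, &val,
			sizeof(val));
	nest->rta_len = (unsigned short)((char *)&req->n +
					 NLMSG_ALIGN(req->n.nlmsg_len) -
					 (char *)nest);
}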
Signed-off-by: Stephen Worley <sworley@nvidia.com>
When an ES is deleted and re-added bgpd can start sending MAC-IP sync updates
before the dataplane and zebra have set up the VLAN membership for the ES. Such
MAC entries are not installed in the dataplane till the ES-EVI is created.
Ticket: #2668488
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
This addresses deletion of ES interfaces that were not completely
configured.
Ticket: #2668488
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
In zebra_evpn_proc_remote_nh if we do not pass in a long
enough stream, the stream reads will fail. Ensure that
we have enough data.
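A hedged sketch of the kind of guard this adds, using lib/stream.h
helpers; the minimum-length constant and function name below are
hypothetical and do not reflect the actual message layout:

#include "stream.h" /* STREAM_READABLE(), stream_getl() */
#include "log.h"    /* zlog_warn() */

/* Hypothetical minimum payload size, for illustration only. */
#define EXAMPLE_REMOTE_NH_MIN_LEN 4

static void example_proc_remote_nh(struct stream *s)
{
	uint32_t vni;

	/* Bail out before any stream_get*() call can run past the end. */
	if (STREAM_READABLE(s) < EXAMPLE_REMOTE_NH_MIN_LEN) {
		zlog_warn("%s: truncated remote nexthop message", __func__);
		return;
	}

	vni = stream_getl(s);
	(void)vni;
}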
Signed-off-by: Donald Sharp <sharpd@nvidia.com>
v4 and v6 host/reference prefixes need to be set up separately for
[RMAC, VTEP] entries as the VTEP is always normalized to a v4 addr.
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
Enqueue incoming vxlan remote macip updates on the main
workqueue, instead of performing the updates immediately,
in-line.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
Add workqueue subqueue for EVPN/VxLAN updates; migrate the
evpn route and remote ES processing from their ZAPI handlers
to the workqueue.
Signed-off-by: Mark Stapp <mjs@voltanet.io>
EVPN nexthops are installed as remote neighs by zebra. This was earlier
done only via VRF IPvX uni routes imported from EVPN routes.
With EVPN-MH these VRF routes now reference an L3NHG which is set up based
on the EAD and doesn't include the RMAC. To work around that, BGP now
consolidates and maintains EVPN nexthops which are then sent to zebra.
zebra sets up these nexthops as L3-VNI nh entries using a dummy type-1
route as reference.
Ticket: CM-31398
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
This one also needed a bit of shuffling around, but MTYPE_RE is the only
one left used across file boundaries now.
Signed-off-by: David Lamparter <equinox@diac24.net>
When an ES-bond is in bypass state, MACs learnt on it are linked to the
access port instead of the ES. When LACP converges on the bond it moves
out of bypass and the MACs previously learnt on it are flushed to force
a re-learn on new traffic.
Ticket: CM-31326
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
Feature overview:
=================
An 802.3ad bond can be set up to allow lacp-bypass. This is done to enable
servers to PXE boot without an LACP license, i.e. it allows the bond to go oper
up (with a single link) without LACP converging.
If an ES-bond is oper-up in an "LACP-bypass" state MH treats it as a non-ES
bond. This involves the following special handling -
1. If the bond is in a bypass-state the associated ES is placed in a
bypass state.
2. If an ES is in a bypass state -
a. DF election is disabled (i.e. assumed DF)
b. SPH filter is not installed.
3. MACs learnt via the host bond are advertised with a zero ESI.
When the ES moves out of "bypass" the MACs are moved from a zero-ESI to
the correct non-zero id. This is treated as a local station move.
Implementation:
===============
When (a) an ES is detached from a hostbond or (b) an ES-bond goes into
LACP bypass zebra deletes all the local macs (with that ES as destination)
in the kernel and its local db. BGP re-sends any imported MAC-IP routes
that may exist with this ES destination as remote routes i.e. zebra can
end up programming a MAC that was previously local as remote pointing
to a VTEP-ECMP group.
When an ES is attached to a hostbond or an ES-bond goes
LACP-up (out of bypass) zebra again deletes all the local macs in the
kernel and its local db. At this point BGP resends any imported MAC-IP
routes that may exist with this ES destination as sync routes i.e.
zebra can end up programming a MAC that was previously remote
as local pointing to an access port.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
For MH the SVI MAC is advertised to prevent flooding of ARP replies.
But because of a bug the SVI MAC was being added to the zebra database
but not sent to bgpd for advertising.
Ticket: CM-33329
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
As a part of FRR shutdown, interfaces are force flushed (in an arbitrary
order). Interfaces are already down at that point i.e. resources like
SVI-MAC have already been released. Attempting to clean it up again
as a part of the force-flush was resulting in access of freed up memory -
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
==26457== Thread 1:
==26457== Invalid read of size 8
==26457== at 0x1AE6B0: zebra_evpn_acc_bd_svi_set (zebra_evpn_mh.c:606)
==26457== by 0x1B1460: zebra_evpn_if_cleanup (zebra_evpn_mh.c:1040)
==26457== by 0x13CA69: if_zebra_delete_hook (interface.c:244)
==26457== by 0x48A0E34: hook_call_if_del (if.c:59)
==26457== by 0x48A0E34: if_delete_retain (if.c:290)
==26457== by 0x48A2F94: if_delete (if.c:313)
==26457== by 0x48A3169: if_terminate (if.c:1217)
==26457== by 0x48E0024: vrf_delete (vrf.c:254)
==26457== by 0x48E0024: vrf_delete (vrf.c:225)
==26457== by 0x48E02FE: vrf_terminate (vrf.c:551)
==26457== by 0x1442E1: sigint (main.c:203)
==26457== by 0x1442E1: sigint (main.c:141)
==26457== by 0x48CF862: quagga_sigevent_process (sigevent.c:103)
==26457== by 0x48DD324: thread_fetch (thread.c:1404)
==26457== by 0x48A926A: frr_run (libfrr.c:1122)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
(gdb) bt
(gdb) fr 5
1037 zebra/zebra_evpn_mh.c: No such file or directory.
(gdb) p zif->ifp->name
$2 = "vlan131", '\000' <repeats 12 times>
(gdb) p zif->link->info
$5 = (void *) 0x1
(gdb) p/x zif->ifp->flags
$7 = 0x1002
(gdb)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-32435
Signed-off-by: Anuradha Karuppiah <anuradhak@nvidia.com>
A zebra crash is seen while cleaning up an evpn interface
during a shutdown event.
The evpn interface cleanup is called from the vrf_delete callback.
(gdb) frame 4
(is_up=false, br_zif=0x0, vlan_zif=0x557f31fb36f0) at zebra/zebra_evpn_mh.c:614
614 zebra/zebra_evpn_mh.c: No such file or directory.
(gdb) p tmp_br_zif
$1 = (struct zebra_if *) 0x0
(gdb) p vlan_zif->link
$2 = (struct interface *) 0x557f31fb2d40
(gdb) p vlan_zif->link->info
$3 = (void *) 0x0
(gdb) p zebra_if->ifp->name
No symbol "zebra_if" in current context.
(gdb) p vlan_zif->ifp->name
$4 = "peerlink-3.4094\000\000\000\000"
Ticket: CM-32435
Reviewed By: CCR-10957
Testing Done:
Signed-off-by: Chirag Shah <chirag@nvidia.com>
Added support for advertising SVI MAC if EVPN-MH is enabled.
In the case of EVPN MH arp replies from an attached server can be sent to
the ES-peer. To prevent flooding of the reply the SVI MAC needs to be
advertised by default.
Note:
advertise-svi-ip could have been used as an alternate way to advertise
SVI MAC. However that config cannot be turned on if SVI IPs are
re-used (which is done to avoid wasting IP addresses in a subnet).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
zebra maintains a pseudo interface for hanging off user config after
the interface is deleted in the kernel. If a user tried to configure
an ES against such an interface, zebra would crash with the following
call stack -
at zebra/zebra_evpn_mh.c:2095
sysmac=sysmac@entry=0x55cfbadd3160) at zebra/zebra_evpn_mh.c:2258
at zebra/zebra_evpn_mh.c:3222
argv=<optimized out>, es_lid_str=<optimized out>, es_lid=1, no=0x0, vty=0x55cfbaf4c7b0)
at zebra/zebra_evpn_mh.c:3222
argv=<optimized out>) at ./zebra/zebra_evpn_mh_clippy.c:202
vty=vty@entry=0x55cfbaf4c7b0, cmd=cmd@entry=0x0, filter=FILTER_RELAXED)
at lib/command.c:1073
Ticket: CM-31702
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
With EVPN-MH, Type-2 routes are also used for MAC-IP syncing between
ES peers so a change was done to only treat REACHABLE local neigh
entries as local-active and advertise them as Type-2 routes i.e. STALE
neigh entries are no longer advertised as Type-2s.
This however exposed some unexpected problems with MLAG where a
secondary reboot followed by a primary reboot left a lot of neighs
in STALE state (on the primary), resulting in them not being
advertised, and remote routed traffic to those hosts being
blackholed in a sym-IRB setup.
This commit is a workaround to fix the regression (it doesn't fix
the underlying problems with entries not becoming REACHABLE, which
may be a day-1 problem). The workaround is to continue advertising
STALE neighbors if EVPN-MH is not enabled.
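A hedged sketch of the resulting decision; the function name is
hypothetical and the NUD_* states come from linux/neighbour.h:

#include <stdbool.h>
#include <stdint.h>
#include <linux/neighbour.h> /* NUD_REACHABLE, NUD_STALE */

/* With EVPN-MH enabled only REACHABLE neighs count as local-active;
 * without it, STALE neighs are still advertised (the workaround). */
static bool example_neigh_local_active(uint16_t ndm_state, bool evpn_mh_on)
{
	if (ndm_state & NUD_REACHABLE)
		return true;

	if (!evpn_mh_on && (ndm_state & NUD_STALE))
		return true;

	return false;
}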
Ticket: CM-30303
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
zebra was crashing when the command was run on a non-existent VNI.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215
VNI 16777215 doesn't exist
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 detail
VNI 16777215 doesn't exist
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 json
[
]
root@torm-12:mgmt:~# net show evpn es-evi vni 16777215 detail json
[
]
root@torm-12:mgmt:~#
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Ticket: CM-30232
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When a new ES is created it is held in a non-DF state for 3 seconds,
as specified by RFC 7432. This gives the switch time to import
the Type-4 routes from the peers, and gives the peers time to receive
the new Type-4 route.
root@torm-11:mgmt:~# vtysh -c "show evpn es 03:44:38:39:ff:ff:01:00:00:01"|grep DF
DF status: non-df
DF delay: 00:00:01
DF preference: 50000
root@torm-11:mgmt:~# vtysh -c "show evpn es 03:44:38:39:ff:ff:01:00:00:01"|grep DF
DF status: df
DF preference: 50000
root@torm-11:mgmt:~#
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
When all the uplinks go down the VTEP is disconnected from the
VxLAN overlay and this was handled by proto-downing the ES bonds. When
the uplinks come up again we need to re-enable the ES bonds but that
needs to be done after a delay to allow the EVPN network to converge.
And that is done by firing off the startup-delay timer on first
uplink-up.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
1. When a bond is associated with an ES we may need to re-sync
the dplane protodown state (which may be stale/set by some other
app).
2. Also change the uplink state display to avoid confusion with
protodown reason code (both used to show uplink-up).
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
protodown state is a combination of the dplane and zebra states.
protodown reason is maintained exclusively by zebra. Display this
information on two separate lines to make that ownership clearer.
Also display n/a for bonds as the dplane doesn't support protodowning
the bond device.
Sample output -
==============
root@torm-11:mgmt:~# vtysh -c "show interface hostbond1"|grep -i protodown
protodown: off (n/a)
protodown reasons: (uplinks-down)
root@torm-11:mgmt:~# vtysh -c "show interface swp5"|grep -i protodown
protodown: on
protodown reasons: (uplinks-down)
root@torm-11:mgmt:~#
PS: Cosmetic changes only, no functional change.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
The code for this was already there but was not kicking in because of a
zebra local reason-code dup check. Even if the reason-code is the same,
if the dplane and zebra disagree about the protodown state zebra will
need to re-program the dplane.
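A hedged sketch of the check described above (a hypothetical helper,
not the actual zebra function):

#include <stdbool.h>
#include <stdint.h>

/* Re-program the dplane whenever it disagrees with zebra about the
 * protodown state, even if the reason bits happen to match. */
static bool example_protodown_needs_reprogram(bool dplane_down,
					      bool zebra_down,
					      uint32_t dplane_reason,
					      uint32_t zebra_reason)
{
	if (dplane_down != zebra_down)
		return true;

	return dplane_reason != zebra_reason;
}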
Fixed a couple of spelling errors in the protodown logs to make greps
easy.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>
This is an optimization to reduce the number of L2 nexthops. An
L2 or fdb nexthop simply provides the dataplane with a nexthop IP -
torm-12:mgmt:~# ip nexthop
id 268435461 via 27.0.0.20 scope link fdb
id 268435463 via 27.0.0.20 scope link fdb
id 268435465 via 27.0.0.20 scope link fdb
So there is no need to allocate a nexthop per-ES/per-VTEP. There
can be 100+ ESs per-VTEP so this change cuts the scale down by a
factor of 100.
Signed-off-by: Anuradha Karuppiah <anuradhak@cumulusnetworks.com>